UX is not living up to its humanistic potential

Applied, effective UX practice can have a heartbreaking cost.

Colossal head of a youth, the remnant of a Greek statue on display at the Metropolitan Museum of Art in New York City. Photo by Levi Meir Clancy on Unsplash


The context

Recently, a family whose orbit I am a part of has had a trauma.

My first feeling was heartbreak for their trauma. My first coherent thought was about the cognitive pathways a traumatized child might be practicing.

Then a caregiver mentioned that the children were shutting down, cutting the world out and focusing only on screen time. My reaction, after a bout of empathy? Which apps are they using?

The feelings coursing through me at that thought were panic, another shock of heartbreak, and then remorse. These are emotions, but they are guided by knowledge, and their underpinnings are worth considering.

Effective UX leverages cognitive psychology

UX is not a simple, straightforward practice. Part of UX is practical cognitive psychology. So, really, are most processes, business or otherwise; I’m not advocating for a psychology degree as a prerequisite for practicing UX. I am, however, advocating that we better understand the power we’re wielding, and urging people to think through the ramifications.

Whatever we say about user advocacy, we all know that efforts to invest our UX with a more humane ethos are usually long-term battles. We also acknowledge (sometimes only after a strong talking-to) that a business’s primary motivator is making money.

The goal in our digital economy is rarely the health and wellbeing of people; even when that is the outcome we are searching for, the first goal is financial survival.

We need to get ethics involved. I am not the first to say this, and I’m a nobody compared to Mike Monteiro; in a way, this is only a different framing of ideas in the same ballpark as his.

But let me give you the whys behind my emotional reactions. The surge was emotion (your mind’s way of saying, “Pay attention to this!!!”), but the foundation is logical.

Panic

Some of the most popular apps out there are tweaking behavior along addiction lines.

Facebook is the primary and most successful use case of deliberately using cognitive and behavioral psychology to keep people rapt, in their app, and continually returning.

Someone in their business understands that neural pathways grow stronger through practice. The more you use a certain pathway, the easier it is to slip into it, and the more you miss it when it’s not being used. They have learned over time that this supports their business model. We see limited evidence that they care beyond that simple fact: it supports their business model.

Look at the deepening polarization of politics and of reality itself. If the business does not care about the damage it’s doing to society, what possible care could it have for a child searching for a semblance of peace?

The app promotes the use of addictive pathways. Put a person with trauma in those addiction pathways, supported by algorithms that will put the most intriguing content in their path to keep them there and engaged, and there’s no telling where they’ll wind up. Except that they will be attached to their screens, and likely in some niche where they feel more normal.

Is Facebook the only problematic app? Not by a long shot. But it’s fairly universal these days; if you’re not on the platform, you’re at least aware of it.

Heartbreak

Where will the traumatized child feel more normal? Will it be in the heavy ingestion of pop culture, or will they go down a content rabbit hole?

Worst case, will the content rabbit hole in search of ‘more normal’ lead to a group that normalizes the trauma? Normalizing trauma usually involves some variation of ‘proving’ that the traumatic event is universal, and thus you should get over it; that the pain is over-accentuated, not as bad as the person experiencing it feels; that the emotions being generated are actually good; or even that the traumatizing event can’t exist, so it must be a lie. I don’t think Facebook cares. Its goal is engagement, and any use case suffices.

Regardless of the in-app scenario, the true heartbreak is that few of these apps are interested in a real individual’s health and wellbeing. Business models do not generally qualify ‘useful and humanistic’ as a money-making proposition; we aren’t in the tool-making business unless the tool also happens to generate a profit.

These apps are not set up to worry about whether it would be better for a person with a significant level of trauma or stress to engage with the world around them. They are set up for each individual to take care of themselves before opening the app.

It’s easier to build user flows assuming that everyone is already mentally healthy enough to take care of themselves: aim at the people most likely to be able to disengage, and tweak their behaviors so they stay. Except so many apps want us to not be mentally healthy enough to disengage, so that we continue to escalate our use. Their aim is to exercise the neural pathways that will supply the behavior the business can cash in on.

Apps are also generally built around one defining mental model. There is no single mental model for achieving wellbeing, and there should not be. We have yet to get anywhere close to understanding the breadth and depth of possible transferable mental models. We keep showing each other that there’s more to learn, and forcing users through a specific thought process is counterproductive at best.

Remorse

I contributed.

I love my craft, truly. It’s complex, nuanced, aims to iteratively solve real issues, and is helping to put the human back into the process…when we get to use it ethically.

Profit doesn’t care. Businesses will say that they are humanistic, but someone, somewhere along the hierarchy will hear, acknowledge, and pressure downward that more profit needs to come out of a business unit. That someone usually has the power to fire you.

And at the end of every day, I want to continue to earn a living. I push as much as I feel safe pushing; I try to be careful and look for humanistic markers before accepting a job or project. In the first instance, our reality is caught up in the power of money above all else, and I’m not independently wealthy. In the second instance, I have a confirmation bias and am pulling in information from people who also have a confirmation bias.

Most of us want to believe we aren’t really adding to the toxicity of the world.

Humanism in our apps

To build towards a humanistically coded system takes time. It takes not only acknowledging edge cases, but building them into the functionality and creating the data and information architecture substrates to support them. It takes talking to users as people with a bevy of problems that won’t be easy to balance, rather than coming up with some impossibly perfect single persona that no one person can ever realistically, actually be.

It takes being transparent about the architecture involved, and allowing people to modify their core in-app experience. Ask yourself whether Facebook would let you turn off ads, or determine your own threshold for information overload, or prioritize your threads according to your personal preference. Consider whether they would allow each user to choose, on an individual basis, what data the company is allowed to harvest and use for its profit. All of these actions would diminish their bottom line and increase production costs; accumulated across all their users, they could effectively bankrupt the business. That is because their business is built on our information and our usage, and on leveraging those to put us in the path of those who would manipulate us.

To build towards a humanistically coded system would include comprehending and living the difference between getting buy-in and gaining understanding. Both might ask the same root questions, but the first is phrased to prompt people toward the “right” answer, while the second allows every individual to come to their own conclusion. It also means more time has to be built into the process for synthesis, and for dealing with the fallout when people feel heard but their ideas aren’t implemented.

Gaining understanding opens us to political battles, and to the need to understand our own thinking to a degree that our society rarely leaves time for. It requires insight into others and ourselves, and a willingness to learn, change, and grow. It takes truly understanding that what we think today won’t be what we think tomorrow or next year, and having the wherewithal to explain our own adaptation to a slew of people who aren’t comfortable with ANY change.

Gaining understanding is messy, multivariate, and would ultimately push us to building inclusive systems that we don’t yet have a template for.

Our failures of inclusivity are vast and systemic. The ease with which we add dismissal on top of trauma is growing.

Can humanism survive?