Trustlessness
Something has been resonating in my brain since I called the last post ‘done’, specifically this statement:
We have entered a trustless society. GenAI tipped over what was already teetering.
I know that I’ve been talking around this idea with a few of my friends for months now. I think it’s contributed to our tribalism / current administration, generational shifts, and, yeah, generative AI. I had no idea that I had bought into the concept so deeply that I would state it bluntly, and then not even think to support it.
So it’s time to walk people through how I got here, and why I think it’s important.
How I got to this conclusion
It started with personas. I’m on a project where we’re wrapping our heads around generational shifts. The available analysis didn’t balance with the core shifts we were trying to account for. In fact, if I trusted only the available analysis, it rather assumed that the core shifts were one-offs that could/should be hard coded. Looking towards history and anthropology, the logic of ‘blips’ didn’t hold.
I started triangulating, mostly through news and media, paying more attention to the reactions of the people having them — GenZ — than the interpretations of the people with those reactions.
Caveat: there are many deeply embedded shifts in that project, and I worked on other aspects while watching for insights (GenZ are everywhere, but the ideas about what-defines-GenZ are mostly from older generations). This is actually as close as I’ve been to articulating how I’m approaching this.
One thing to understand about marketing personas is that they are focused on "how can we get traction with this demographic?" That means most marketing agencies are developing narratives from widely understood but biased information and "known" processes.
Marketing is complex, but a large part of the effort is aimed at getting someone (customer) to do something (buy) without a self-initialized, agency-driven decision point (the stuff they didn’t know they ‘needed’). This is a deeply embedded priority, and it shifts what information is perceived and how it is interpreted. The standard demographics and analysis are not about shifting goals (sell more) to account for new patterns; nor empathizing (pain is a fulcrum, not a wound); nor recalibrating for shifting attitudes (the intent is to align people to culture, not culture to people).
From a purely informational lens, it’s moribund, borderline false in its best case scenario, and focused on fitting people into a status quo that only evolves to be a heightened form of key characteristics. We’re on a path — in a defined process — and the path defines people instead of people figuring out how to navigate a wide world.
When culture shifts in ways that can’t be force-fed into the path, marketing labels it ‘generational’, and various people start working on how the demographic has changed from their point of view. It’s very much a, “I am on high, looking down, and these are the shifts,” kind of narrative, and it’s using juxtaposition with the pathway as understood to explain what the shifts are.
To be clear: we do this frequently, especially those with a ‘curse of knowledge’ about the discipline at hand. We are orienting against the abstracted connectome that we believe we understand to find the differences and navigate them.
There’s a further complexity in this standard operating procedure: outliers are dismissed. Marketers approach the information with confirmation bias: find the stats that support that bias, and shift the message or messaging apparatus to ‘bring in’ more people. The variability of reaction and response to environmental factors is too disparate, so they look for evolving patterns in large-scale behavior shifts instead of asking what could be sparking those shifts and then checking to see if there’s evidence of those behaviors.
Parsing it against orientation/findability/navigation again: instead of assuming that the orientation point is non-fungible, I assume it can be shifted — a different point of view. Instead of finding a “difference”, I’m searching for emerging patterns. Instead of navigating around “difference” to support myself with the understood pathways, I’m willing to carve out new connections that more closely support the emerging patterns.
In short, instead of assuming that the ‘deviation’ is the novel statistically-relevant characteristic, I assume that newly-expressed variable factors are in the environment, how we’re orienting, and thinking of the existing connectome as an early (replaceable) attempt.
In order to do this I need to approach with the more agnostic research starting point of “what are some current surprising behavior trends, and how are people wrapping their heads around why they do this?” I started very literally: with the information, and people’s articulations, and then seeing if those ideas had broader implications.
But, and here’s a catch: I’m not a good marketer. Never will be, because I understand just how manipulative it can be. I deeply enjoy setting people up for success, and I consider the ultimate success to be people comfortable in their own skins. These things do not fit in the current prevailing marketing practices (there are exceptions). I’m not here to provide a new marketing paradigm; I’m just trying to understand people.
Example
My hypothesis is that there are wide-brushstroke societal shifts that impact generationally. It’s still a hypothesis, and I don’t know when/if I’ll get to fleshing out the ‘backstory’ because the key element is now: GenZ and GenA.
One of the more talked-about aspects of GenZ is that their entry into the workforce is causing lots of pain, and the workforce is blaming GenZ. Basically, hiring standards are saying, “this is the way, conform.” GenZ is saying, “No.” And they actually have reasons, which are being dismissed because the employment standards are prioritized over understanding.
The reasons mostly circle around behavior. Those GenZ responses mostly say, “this behavior by the company/manager/team is suspect, smells of someone who’s trying to use/abuse me, and I’m done with that shit.”
Then look at the environment. How many investigations have there been into false online narratives (ranging from “best life” posturing to faked wealth to catfishing), child grooming, bullying, etc.? How many patterns in our technological infrastructure not only support new connections super-easily, but make it so that an abuser has to be shut down early, strongly, and with no ‘pivot points’ left to stall them effectively — because the technology supports the invasive behavior, not cutting it off*? One person can now inflict the trauma of hundreds or even thousands of perpetrators, depending on their personal tools within the technology.
*Check out Bluesky if you want to see the shift that one tiny difference — a feed that respects an individual’s decisions — can make
Then we blame the people who have HAD to learn to spot bad actors, and who do what they can in an environment that is built to support bad actors, not the people they act upon.
From one point of view, we’ve raised a generation to be victims; they aren’t putting up with it; and then we’re blaming them for not being compliant.
Back to trustlessness
The example actually feeds into my current theory:
We have entered a trustless society. GenAI tipped over what was already teetering.
GenA grew up surrounded by tribalism that shifted into political polarization, itself brought on by skewing the information building of individuals on a mass scale. The big players didn’t shift the underlying technology. That made an entire generation show a stereotype-level willingness to walk away from work environments that smelled of bad actors as the safest possible path, and GenA are still dealing with that. GenA watched and wrapped their heads around how adults were responding to, naming, and then shifting the underlying meaning of the name of “fake news.” They see a stressed climate, and the climate deniers.
In short: they see a lie that will make their lives significantly harder. They saw that information understood one way on Monday could mean something else entirely by Thursday. They saw gaslighting, and goal post moving, and siloing, and stalking, and how cutting off bad actors with no sympathy shown is the best way to deal with them. They saw how those manipulated information streams depended on willful dismissal of information streams that were willing to question sources.
Chances are they didn’t understand it well enough to articulate it, but their emerging behavior hints that they’ve smelled the information, and it doesn’t smell good.
They are now learning that information can be siloed from understanding. Information is as quick as typing a question into an AI and trusting it; understanding is as slow as learning how to think while their friends speed ahead of them with just enough close-enough-to-workable information to get shit done. How often have you already heard, “oh, you can’t blame me, ChatGPT said it — I’m not responsible!”
They have to choose: efficient & blameless, or hard. In a world that wants everything yesterday, it takes a stubborn soul with plenty of multifaceted support to choose the hard path.
Add generative AI to the mix
Generative AI wasn’t built to be truthful. It was built to sound about right — to pass a sniff test, read as human-enough or real-enough, and convince people they found a short-cut to hard work in all of its forms.
It puts its own generated hallucinations (and, seriously, why didn’t we call them lies?) into its stream. The more it’s used, the more the hallucinated information it supplies feeds into the next computation.
And it touches every digital information store we have. Words, images, sound, video — any of it could come from AI, and we don’t know. We can no longer easily tell with a millisecond scan what is fantasy, and fantasy is being portrayed as real.
Scams claiming your family is in trouble. Deepfakes. Cumshots. The list goes on.
These AI companies depend on their product being indistinguishable from reality. That’s their mark of quality; that’s what they are selling.
They are actively fighting attribution: they don’t want truthfulness. As best I can find, all labeling of AI-generated content is externalized to the user: they don’t want to make clear that this is not real. More regulations have been proposed, and the AI companies want them all to fail.
Information was already teetering under the weight of not being able to disconnect from bad actors, and then of bad actors slipping into the information stream to realign meaning to better support their narratives. Trust in information is fast shrinking to only what we experience in our own bodies. Generative AI doesn’t want to be truthful, and doesn’t want to give people a heads-up that it’s been used.
This is trustlessness in a nutshell.
Trustlessness, pushed to an extreme, leads to lost connections. Information without connection loses meaning.
People need meaning.

A trustless world is a hard world, with short lives and ever-watchfulness. It’s a world stuck in survival mode. Except we can’t be stuck in survival mode and actually survive because of what’s happening with our climate.