Information technology design ethics
Once upon a time, I looked for missing books at my college library. I had one of the best finding rates of anyone who had held that position before me, and for years after (footnote). The secret to my success was in looking for each book as though it were a story: a person pulled it off the shelf for a reason, and a person put it away with context.
I used the information I had — the Library of Congress (LOC) codes, or whether some subjects had multiple books missing. But it was always how people interacted with information that made the story, and it was the story that helped me find the books. The data was static, the LOC code very clearly telling everyone where the book belonged in space. In between removing it from a shelf and replacing it were all the things that could happen as a book was moved and leveraged, and all the things that tired, stressed, excited, etc., people do. The LOC code was orientation, findability, and navigation. The book was the encapsulated rich data, the information it contained the future-sense desire, and the people…well, people.
As I traveled through life, these insights kept popping up. I thought about them as I painted, wondering about the magnets of perception and how much of our perception is predicated on our existing stories. Those existing stories, meanwhile, were everywhere. I spent entire days watching people navigate their surrounding world while applying their existing stories — usually in the form of stereotypes. I watched people meeting for the first time bring into the interaction their own sense-making, built on stereotypes and personal comfort levels around a slew of characteristics that would shift by person and context.
I just wanted to understand. I’d had goalposts moved so frequently and repeatedly that I actively distrusted it when people dangled conditional acceptance: “do this and I’ll like you,” or “do this and you’ll get that.” Almost everyone will do that eventually, for the simple reason that they want something and they see you as able to contribute. I learned to ignore the transaction and focus on the ethics of the ask.
People can ask some pretty fucked up things in the name of love and belonging. I've also been asked for grace.
This is ultimately a model built on acceptance, because that’s what I wanted for myself. We are urged to think the worst of people, to use arbitrary characteristics as signals of quality. We're urged to use people and to assume that everyone else — or at least everyone "smart enough" to be worth working with — is doing the same to us, until and unless they prove otherwise.
But people are not that simple. Language, religion, the color of skin/eyes/hair — none of it actually signals trust, an adhesion to reality, or a willingness to problem solve. None of it signals how much someone cares about people, or how far that care will extend. None of it signals how much they will use people, or how much damage they'll do before, if ever, they think of mercy. I’ve wandered widely within the cultures of the US, and this is a pervasive trait. People get burned over and over again, just because they wish so hard for the solution to be that simple: to know in an instant whether someone is going to screw you over.
Information technology is a tool. As with any tool, it can be used to help or to harm. The difference is that this tool goes into production with defined ethics deeply embedded. Those ethics are pulled into its use without the person wielding it being able to moderate them, or even necessarily understanding what ethics they're leveraging. And with the ethical choices already made for them, they're often using a warhead to hammer a nail.
Ethics is, ultimately, our developing understanding of the help/harm spectrum. When we're talking about information, ethics is always in play. Overwhelming or stripped too deep; verified but misplaced; unverified and emphasized. All of these scenarios, and more, impact information ingestion. All the scenarios are set in context: who has access, their background, information literacy, interest, need, the time constraints involved, the environment pushing and pulling on their focus. There is a massive space for harm, and a massive space for help.
My goal is humanist design. I want people to work. Not “work” as in “get a job,” but for us, as an intelligent species, to keep existing so long that any of the ideas I've had would elicit a reaction of antiquated wrongheadedness or cliché understanding.
There’s a lot of hope in that wish. The hope that what I’ve begun understanding will be useful enough to others. The hope that we’ll continue to have cultures that embrace learning and the change that comes with it. At its furthest reaches, it also embraces the hope that our species and planet survive.
We also have information structures that are hitting another point of growing pains. We’ve been expanding our data stores dramatically. Data is being defined and captured everywhere, and being logged and kept and referenced. Many of us trust the patterns of those historical data points without insight or context, leveraging our cognitive biases to supply meaning, and promoting the outcomes we think beneficial. What if it only benefits a few? What if the data captured only tells the story supported by the assumed outcomes? What if the data is outright wrong, infected with the poison of LLM hallucinations or the will to do harm in the name of profit?
In information architecture, the way I practice it, context is everything. People, beyond the data captured and computed, are the meaning makers. Until our technology can keep up with the meaning and reality people are wrapping their heads around, we’re in growing pains. Technology cannot tell people who they are; people must have the ability to define themselves, even if that self-definition is at odds with what's accepted by a particular culture. Technology must be wrestled into an informational substrate that supports every person of every culture: past, present, and future; known and unknown. It must be emergent into the future, not static. Only the past is static, and even our understanding of the past will become richer as we develop new lenses and understanding.
Those growing pains can be seen in the constant sense of failing personal agency that makes so many people so very angry. It’s in the inability to find the help needed – medical help, fixing an account, figuring out what is actually being billed – and in trying to meet expectations that look good in a computed formula but don’t match experience. It's in a hierarchy so certain it knows how everything functions that it can't see the rampant dysfunction. It proceeds to apply more screws that put more people into pain and suffering, all while congratulating itself on its supposed prowess.
The cool thing about growing pains: if they are acknowledged and worked, it’s an opportunity to shift gears. Information did this. Information, I hope, can undo this.
There are a few key things that can make our software more humane. The most important — and hardest — is allowing for individual agency, as unbiased as possible.
Individual agency needs functionality without deceptive design. It needs control over personal information, so our individual patterns aren’t used to our detriment. It needs access to relevant information for decision-making, without authoritarian controls. The information also needs to be findable and navigable, with traceable signals of who-ness and underlying method.
Stating this a different way: each of us needs the freedom to access and determine the information that is relevant to our decisions, to understand what thinking and cognitive biases helped determine the information structures, and to know if and how the data was validated. Each of us needs the agency to decide for ourselves whether we are going to take a provided solution or try for another. Each of us requires data privacy that reflects the privacy we had before technology. Which also means we need multiple, unusual solutions: not monopolies, not copycats, and not all adhering to the same best practices. We need modularity, and interoperability, and the ability to move at a whim.
It’s not about culling data to fit outcomes of ‘right’ and ‘wrong’. It's about making sure people can more easily and/or fully contextualize the story of the data, understand the lenses and priorities in place when another human decided what information was presented, and start to form an idea of how full or curtailed their personal information stream is.
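One way to picture those traceable signals is as a small provenance record attached to any presented piece of information. This is a hypothetical sketch, not a proposed standard; every field name here is my own assumption:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    source: str      # who-ness: the human or organization behind the information
    method: str      # how the information was produced
    validation: str  # if and how the data was checked
    lenses: list[str] = field(default_factory=list)  # priorities and biases, declared up front

item = ProvenanceRecord(
    source="county health department",
    method="manual survey, 2024",
    validation="cross-checked against the state registry",
    lenses=["public health focus", "English-language respondents only"],
)
```

With even this much attached, a reader can start judging how full or curtailed their information stream is, instead of taking the presented answer as context-free truth.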
If more of us are looking to build for agency, a reasonable starting place is to understand the abstracts of the information structures and how people make decisions. We live in the stew of our cultures, and those cultures define agreement-shortcuts that aren't as infinitely applicable as we treat them. Those cultures have their own core precepts and cognitive biases that we generally don’t acknowledge, but that still get built, unmodifiable, into the solutions and experiences provided. So we have to make those precepts agnostic to culture, or something that can be chosen.
These cultural standards are not unlike the Y2K bug. Code stored years as two digits, built with the passing thought that a potential future change, the century rollover, was too far away to make a difference. Then the shortcut was leveraged repetitively as a ‘best practice’, making its way into so many systems that no one really knew where it might be hiding. It caused a scramble to avert potential disasters as the limitation was acknowledged, but first it had to be understood as something that wasn’t just the way it is.
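For anyone who never met the bug in the wild, a minimal sketch of the pattern (hypothetical code, not taken from any real system):

```python
from datetime import date

# The embedded assumption: a year fits in two digits, because the
# century "will never" change within the life of the system.
def years_since(two_digit_year: int) -> int:
    current = date.today().year % 100    # e.g., 2025 -> 25
    return current - two_digit_year      # 25 - 99 = -74 for a 1999 record

print(years_since(99))   # a negative "age" once the century rolled over

# The fix names the hidden context (the century) and makes it explicit:
def years_since_fixed(full_year: int) -> int:
    return date.today().year - full_year

print(years_since_fixed(1999))
```

The arithmetic isn't the point; the point is that the assumption stayed invisible until someone named it as an assumption.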
There are systems we need to build that honor culture — like context-specific agricultural knowledge. There are systems we have built that are horrifyingly dismissive — like the pervasive, US-centric name fields.
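To make the name-field example concrete, here is a hypothetical signup model; the field names are mine, purely illustrative:

```python
from dataclasses import dataclass

# The dismissive version: a US-centric cultural template, hard-coded.
@dataclass
class UserUSCentric:
    first_name: str   # assumes exactly one given name, given first
    last_name: str    # assumes a family name exists, and that it comes last

# One humane alternative: let people define themselves.
@dataclass
class UserSelfDefined:
    full_name: str            # the name exactly as the person writes it
    preferred_name: str = ""  # what they actually want to be called
    sort_key: str = ""        # collation becomes the person's choice

# A mononym, "Björk Guðmundsdóttir", or a family-name-first name all fit
# the second model; the first forces people to lie to the form.
```

The second model doesn't pretend culture away; it moves the cultural choice from the schema to the person.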
The culture embedded in so many US systems is predicated on simple binaries. On/off. Right/wrong. Male/female. But one of the most pervasive and subliminal binaries is win/lose. It's the binary behind so many social cues. Shareholders can never get enough ‘winning’ to support the idea of workers being paid fairly. Eugenicists don’t just try to get their own race to breed more; they work hard to see that other races breed less. A 32% vote, with 34% abstaining, is called a “mandate”.
We don't need to invade privacy to do this. We need anthropology, cognitive anthropology (and less psychology, which is so often about helping people fit the accepted model), linguistics, and mathematics. These should help us aim toward applied large-strata patterns, not pre-defined solutions.
The simplest step toward becoming more humanist is to ask: who is set up for success, and who is set up for failure? If we assume both exist (which is a fair assumption in our current culture), we can develop the awareness and brutal self-honesty to see the harm/failure as well as the benefit. Is there really no way in which the predefined failed state could ever work? What emotions spark when that failed state is seriously considered as something that could succeed? What “reasons” spark the same or balancing emotions? How can the failure be mitigated? Eradicated? How is it contextualized? What are the ethical and philosophical foundations of the failure states?
Information technology can be agnostic of culture instead of built with culture as the starting point; but a void is a hard concept to build against. The positive-space form that can be more easily built against is “humanism”.
Calls: bad actors, cognitive bias, core precepts, encapRD, failing information states, future-sense, implicit process, lenses, prioritization, reality adhesion, story, trust
Sends: n/a
Citations
Agency Is the Highest Level of Personal Competence. (2022, March 27). Psychology Today.
Dukes, A., & Zhu, Y. (2019, February 28). Why Is Customer Service So Bad? Because It’s Profitable. Harvard Business Review.
Planetary boundaries. Stockholm Resilience Centre. (Based on Richardson et al. 2023, Steffen et al. 2015, and Rockström et al. 2023.)
Gettleman, J. (2015, January 25). Meant to Keep Malaria Out, Mosquito Nets Are Used to Haul Fish In. The New York Times.
Haslam, N. (2017, April 17). Why it’s so offensive when we call people animals. The Conversation.
Madsen, A. (2022). Deconstructing US Healthcare.
Richardson, K. (2023, September 19). What are planetary boundaries and why should we care.
Malaria innovation: new nets, old challenges. (2023). Bulletin of the World Health Organization, 101(10), 622–623.
Moore, J. W. (2016). What Is the Sense of Agency and Why Does It Matter? Frontiers in Psychology, 7, 1272. doi:10.3389/fpsyg.2016.01272
O’Neil, C. (2022). Weapons of math destruction: How big data increases inequality and threatens democracy. Allen Lane.
Hawkins, T. T. (Director), & Monroe, M. (2021). Persona [Documentary film]. HBO Max.
Hutchinson, P. D., Nyks, K., & Scott, J. P. (Directors). (2015, April 18). Requiem for the American Dream [Documentary film featuring Noam Chomsky].
Richardson, K., Steffen, W., Lucht, W., Bendtsen, J., Cornell, S. E., Donges, J. F., Drüke, M., Fetzer, I., Bala, G., von Bloh, W., Feulner, G., Fiedler, S., Gerten, D., Gleeson, T., Hofmann, M., Huiskamp, W., Kummu, M., Mohan, C., Nogués-Bravo, D., … Rockström, J. (2023). Earth beyond six of nine planetary boundaries. Science Advances, 9(37), eadh2458.
Rockström, J. (2023, June 1). Planetary boundaries: scientific advancements | Frontiers Forum Live 2023. Frontiers.
Sapolsky, R. M. (2023). Determined: Life without free will. Bodley Head.
Tims, A. (2025, April 17). The death of customer service: why has it become so, so bad? The Guardian.
Understanding Personal Agency. (2017, March 21). Philosophical Therapist.
Why nets? Against Malaria Foundation.
Wikipedia contributors. Agency (philosophy). Wikipedia, The Free Encyclopedia.
Wikipedia contributors. Agency (psychology). Wikipedia, The Free Encyclopedia.
Wikipedia contributors. Agency (sociology). Wikipedia, The Free Encyclopedia.
Footnotes
for years after
I stayed in contact with the student manager for far longer than normal.