A few weeks ago, I attended the Digital City Festival in Manchester, hosted by Prolific North, a news hub for the North of England focusing on media and creative sectors. I was invited to speak on a panel about “Tech for Good.” Panel discussions often prompt me to spend far more time than the event strictly requires thinking about the topic and the pre-prepared questions. I have a tendency to write copious notes and then never refer to them again because I only have a minute or two to speak (why didn’t I think of that before writing all those notes?!). This is both the thrill and the frustration of a panel discussion. But thankfully, I also have a newsletter. So, I’m determined those pages of notes will not have been in vain…
Therefore, this edition is a deeper dive on one of the questions we were given to prepare in advance of the panel.
There are actually three questions I wanted to cover, but then I got a bit carried away, and the first question generated enough material for one e-mail on its own, so I’ll address that one now. And you can expect a Part 2 e-mail in a week or so. (As someone who typically leaves about a year between mail-outs, this two-part series feels like a big commitment I’m not entirely comfortable making, but hey ho… here we go!)
Q: What does “tech for good” mean to you?
I have to confess that this expression doesn’t mean a whole lot to me, and I’d go further and suggest it doesn’t have much semantic content in general. I’m well aware that this isn’t perhaps the best starter-for-ten on a panel titled TECH FOR GOOD, but when we say that we want tech to stand for something, it’s worth considering what that implies it stands against. So, let’s get right into it and contemplate what the alternatives to “tech for good” might be.
Is there such a thing as “tech for evil”?
The tech giant Google famously included “don’t be evil” in its code of conduct (and this remains an unofficial company motto to this day). But the evolution of Google has perfectly exposed the profound limitations of this conceptual framing — who gets to define what is “good” and what is “evil”? When thousands of Google employees signed a letter of protest over the company’s work supporting the U.S. Department of Defense, it became apparent that perhaps not enough time had been spent defining what the company — let alone users of Google services — meant by “evil.” Another historical example further illustrates the point. As historians have documented, during WWII the U.S. tech behemoth IBM leased Hollerith punch-card machines to the Nazi regime in Germany, which were used to facilitate the Holocaust. The technology introduced data-driven efficiency into the identification, persecution, and murder of Jews. In 2018, this shameful example of tech industry complicity was cited by Amazon workers protesting their company’s sales of facial recognition technology to law enforcement. Is (or was) IBM a “tech for evil” company? What about Microsoft? Amazon?
The reality is that these definitions are forged at the intersection of ideology, values, and outcomes. And as a result, they often need to be (re)negotiated in the public sphere. I’d argue that the vast majority of tech designers and developers don’t set out with nefarious intent (leaving aside black-hat hackers and maybe the founders of 4chan or 8chan). Innovators and inventors throughout industrial history have overwhelmingly set out to make our lives better. The problem with tech is usually a problem of unintended consequences or unconscious bias — or, perhaps, murky and ill-defined moral imperatives — and that makes doing good a much more nuanced endeavor, fraught with difficulty.
Instead of talking about “tech for good,” I’d like to suggest an alternative that might force us to engage with what we mean by “good” and where our definitions come from. I’d like to suggest we talk about values — not in the abstract, but in the specific. No technology exists in a vacuum, and talking about values helps to demythologize tech neutrality. If we start from the premise that your product, your platform, your devices have values, we can be more concrete about what tech stands for and what it is against. And in the process, we might become more aware of our own values and how they influence our assumptions.
This is perhaps best illustrated with a simple exercise.
Draw two columns on a piece of paper and write “tech for good” above one of them. Don’t label the other column. Now, list as many companies or organizations as you can think of that qualify for the “tech for good” column.
Next, start on the other column — in this column list all the companies or organizations working in the tech space that you would not describe as “tech for good.”
Once you’ve made your lists, look at the not-tech-for-good column. What unites these companies/organizations? Write down the qualities they share. Are those qualities similar? Do they generally fall into one or a couple of buckets? What is the dominant quality?
Name the column for that quality: Tech for [Insert Here].
Is it: Tech for Profit? Tech for the Benefit of Shareholders Rather than Society? Tech for Terrorism? Tech for the Military? Tech for the Exploitation of People’s Data without Compensation or Adequate Consent? Tech for the Police? Tech for Invading Privacy? Tech for Spreading Misinformation? Tech for Extracting Scarce Resources from the Earth?
There are many, many possibilities. And reading this list, you might be thinking: mine looks nothing like any of those!
Now, consider for a moment: are the qualities that unite these organizations clearly, unequivocally, not good — even bad? Think of some counterarguments. Think about whether your categories are likely to look the same as my categories, or anyone else’s.
What you have, at the end of this exercise, is not “tech for good” in one column and “tech for bad” in another — you have tech for certain values in one column and tech for different values in the other. You can label those values. You can name them. And you can (in fact, you will have to) engage with why you believe certain values embedded in technology will lead to beneficial outcomes and others to deleterious outcomes (and for whom?).
In other words, “good” is meaningless unless we know what we mean by “bad” — unless we’re actually willing to call out certain practices, technologies, platforms, policies as bad. In polite society, we’re largely unwilling to do this, and this ethical paralysis sabotages any attempt to change the tech industry in the fundamental ways that would refocus its efforts on human and environmental wellbeing.
I had a moment of pause in organizing my thoughts on this topic when I turned a page in a book I recently finished reading, The Whale and the Reactor: A Search for Limits in an Age of High Technology by Langdon Winner. I landed on Chapter 9, titled “Brandy, Cigars, and Human Values.” Having dressed down the concepts of “nature” and “risk” as effective ways to hold technology to account in previous chapters, Winner now turned to “values,” writing critically about its “vacuity as a concept” and the proliferation of “values” in technocratic discourse. But (much to my relief) Winner winds up in a rather similar place in his argument to the one I’m making here. “One obvious cure for the hollowness of ‘values’ talk is to seek out terms that are more concrete, more specific,” he writes.
Ultimately the specificity Winner celebrates in the book defaults perhaps a bit too readily to “universal” or “general” moral and political principles, which generations of critical theorists and empiricists have rightly challenged on the grounds that universality is a socially constructed concept. And, in Anglo-European philosophy and political science, it reflects an understanding of the world that predominantly derives from white male experience. In Feminism Confronts Technology, Judy Wajcman examines the social origins of “values” from the outset, observing that “the very definition of technology [...] has a male bias.” And more recently Caroline Criado-Perez, in her book Invisible Women, refers to the “myth of male universality.” Ruha Benjamin’s work on what she calls the “New Jim Code” reveals how technology reproduces racial bias and introduces new forms of social control, underpinned by the “default Whiteness of tech development.” “Does this mean that every form of technological prediction or personalization has racist effects?” she asks in her book, Race After Technology. “Not necessarily,” she writes. “It means that, whenever we hear the promises of tech being extolled, our antennae should pop up to question what all that hype of ‘better, faster, fairer’ might be hiding and making us ignore.”
The obvious shortcomings of “universal” values as a social corrective perfectly expose the need for a constant exegesis of our values and a commitment to revisit them out in the open. To give Winner his due, however, his chapter winds up pointing to a way forward that responds actively to the problem of values — “good” and “bad” among them — and their inescapable roots in lived experience: “The inquiry we need can only be a shared enterprise, a project of redemption that can and ought to include everyone.”
As I’ll elaborate in Part 2 of this newsletter, I think perhaps we need to do as much work on the how (the process of developing, interrogating, sharing, exposing, and implementing) of values in tech as on the what (the conceptual content or the technological products and outcomes) of those values. Examining the so-called “black box” of technology requires not only bringing to light the technical specifications that make it work, but also the values that inspired and inform it.
This is by far the most under-developed part of the tech lifecycle: our mechanisms for oversight, accountability, consequence scanning, and regulation. And it is often the least glamorous.
Journalist Steven Johnson captured this conundrum nicely in a recent article for the New York Times, examining the life and work of Thomas Midgley Jr., the inventor of two of the most socially and environmentally damaging inventions of the 20th century: leaded gasoline and chlorofluorocarbons. The article presents these two innovations as parallel cases: both have lessons to teach us about the unintended consequences of industrial invention, but each also reveals a complex confluence of variables, from profit margins to the pitfalls of our predictive capabilities, that results in a decision to put a new technology out into the world.
Reflecting on our limited toolkit for preventing technological harms, Johnson questions how we define “innovation” and whether this hinders our reparative repertoire: “Despite their limitations, all of these things — the regulatory institutions, the risk-management tools — should be understood as innovations in their own right, ones that are rarely celebrated the way consumer breakthroughs like Ethyl or Freon are. There are no ad campaigns promising ‘better living through deliberation and oversight,’ even though that is precisely what better laws and institutions can bring us.”
There is very little glory in innovating for social empowerment — giving people the rights they ought to have anyway results in pretty much zero commercially viable IP. But this is where some of the most important and exciting innovation in tech and society can happen and needs to happen. And it’s an area where some companies, localities, or nations could become leaders and set the bar.
Voices in the Code by David G. Robinson offers an interesting case study in giving people affected by technology more of a voice in its development. The book chronicles the evolution of the Kidney Allocation System (KAS), the U.S. algorithm that allocates kidney transplants, and the different kinds of governance and oversight — including patient participation — that were applied in the process. Robinson writes, “the hardest ethical choices inside software – choices that belong in the democratic spotlight – are often buried under a mountain of technical detail, and are themselves treated as though they were technical rather than ethical.”
The problem with this, he points out, is that algorithms (and I think this can be extended to many technological products) have a moral logic as well as a technical logic. What makes a “good” algorithm involves moral trade-offs, and “the big problem here is the relationship between technical expertise and moral authority.” The book argues that when technologies are high-stakes and involve those kinds of trade-offs, they need to be subject to more democratic governance and oversight: “it’s better for political communities to face the hard moral choices together than to abdicate and ignore those choices, abandoning them to the technical experts.” (More on this in the next post.)
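To make that point about buried ethical choices a little more tangible, here is a deliberately crude sketch in Python. It is entirely hypothetical (the scoring formula, the fields, and the weights are all mine and bear no resemblance to how the real Kidney Allocation System works), but it shows how a moral trade-off can end up living inside what looks like an innocuous technical constant:

```python
# A deliberately simplified, hypothetical allocation score, NOT the real
# Kidney Allocation System. The point: the weights below look like technical
# parameters, but each one encodes a moral judgment about who gets priority.

from dataclasses import dataclass

@dataclass
class Candidate:
    years_on_waitlist: float   # time already spent waiting
    expected_benefit: float    # predicted benefit from transplant, normalized 0-1
    is_prior_donor: bool       # has previously donated an organ themselves

# Ethical choices, expressed as numbers:
WAIT_TIME_WEIGHT = 1.0        # how much we value fairness-as-queueing
BENEFIT_WEIGHT = 2.0          # how much we value maximizing total benefit
PRIOR_DONOR_BONUS = 0.5       # whether reciprocity should count at all

def allocation_score(c: Candidate) -> float:
    """Higher score = higher priority. Every term is a value judgment."""
    score = WAIT_TIME_WEIGHT * c.years_on_waitlist
    score += BENEFIT_WEIGHT * c.expected_benefit
    if c.is_prior_donor:
        score += PRIOR_DONOR_BONUS
    return score

# Two candidates the weights force us to trade off against each other:
long_waiter = Candidate(years_on_waitlist=8, expected_benefit=0.25, is_prior_donor=False)
high_benefit = Candidate(years_on_waitlist=2, expected_benefit=0.75, is_prior_donor=False)

print(allocation_score(long_waiter))    # 8.5 -> wins under these weights
print(allocation_score(high_benefit))   # 3.5 -> raise BENEFIT_WEIGHT above 12 and the ranking flips
```

Change one of those constants and you change who gets prioritized. Yet in most codebases, nothing marks that line as an ethical decision rather than an engineering one.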
When we talk about moral trade-offs and the values embedded in technological systems and products, we’re talking about similar things. The difficult part is defining what a morally “good” technology is. This is a particularly important challenge to reflect on at this moment in time, when ChatGPT is dominating the news and a dazzling array of outrageous headlines floods our news feeds about the rise of super-intelligent machines and what it means for humanity. In one of his Reith Lectures on Artificial Intelligence, Stuart Russell, an eminent computer scientist and AI pioneer, homes in on why our definitions matter — especially when they become code. “Machines are intelligent to the extent that their actions can be expected to achieve their objectives,” he says. But “if we put the wrong objective into a super-intelligent machine, we create a conflict that we are bound to lose. The machine stops at nothing to achieve the specified objective.”
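To put Russell’s point in deliberately trivial, concrete terms, here is a small sketch of my own in Python (it is not from his lectures, and there is nothing super-intelligent about it). An optimizer serves whatever objective we write down, and nothing we leave out:

```python
# A toy illustration of a machine pursuing exactly the objective it is given.
# The objective below scores only "engagement"; nothing about accuracy or harm
# appears in it, so the optimizer ignores both. (Headlines and numbers are made up.)

candidate_items = [
    # (headline, predicted_clicks, factual_accuracy)
    ("Measured take on new policy", 120, 0.95),
    ("Outrageous but misleading claim", 900, 0.20),
    ("Useful public-health guidance", 300, 0.99),
]

def objective(item):
    _, predicted_clicks, _ = item
    return predicted_clicks          # the *specified* objective: clicks, full stop

# The machine "stops at nothing" within its options to maximize that number:
chosen = max(candidate_items, key=objective)
print(chosen[0])   # -> "Outrageous but misleading claim"

# Accuracy was a value we cared about, but because it never made it into the
# objective, it played no role in the decision.
```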
By contrast, one of the inherent qualities of human consciousness is uncertainty — and this is a feature, not a bug. Knowing that we do not (always) know what is good, what is right, or what the future holds is a remarkably effective governance mechanism. It keeps us going back to the drawing board, deliberating, asking each other for input, and checking our expectations against reality.
So, if there is a consistent lesson here, it’s to resist the temptation to base the logic of our technologies on generic terms like “good,” which make it far too easy to justify and uphold the status quo. We can take Winner’s advice that “a depleted language exacerbates many problems; a lively and concrete vocabulary offers the hope of renewal.”