The role of tech in climate change and climate justice

In this interview, Insolvent author Christoph Becker discusses tech's complex effects on climate and society and the need to center social justice and sustainability in computing.

Technology's environmental and societal harms are evident: data centers' energy and water consumption, discriminatory algorithmic decisions, AI-generated disinformation and hate speech. To pay back these debts to the planet, computing's fundamental norms need to shift towards "just sustainability," argues Christoph Becker, a professor at the Faculty of Information and director of the Digital Curation Institute at the University of Toronto.

In Insolvent: How to Reorient Computing for Just Sustainability, a recent release from MIT Press, Becker explores the deep connections among technology, sustainability and social justice, arguing that the three are not only related but inextricable. Over the course of the book, he assesses the limits of prevailing norms in computing, drawing on his own experience in the IT industry and systems design as well as literature from "critical friends" in the social sciences and humanities.

In this interview with TechTarget Editorial, Becker discusses technology's multifaceted role in sustainability and social justice, including the challenge of assessing technology's true climate impact and ways to take climate action as a tech worker. For more from Insolvent, read an excerpt from the book's introduction, "Just Sustainability Design."

Editor's note: The following has been edited for length and clarity.

What led you to undertake the critique of current narratives in computing that you present in Insolvent?

Christoph Becker: It was a serendipitous and slightly winding road that led me there. I was doing my Ph.D. in a cultural heritage context, about longevity and sustainability of software-dependent algorithmic objects. That was an interdisciplinary context; it was about applying computer science techniques, methods and engineering to solve problems at large national libraries, archives and scientific organizations.

At some point we wanted to shift the focus from always reacting after the fact to designing systems that last, and I got a grant to support that work. Over the years, I continued that project, and I started reading a lot in different areas. I started recognizing how this concern was similar in nature to the concerns of other people who were also talking about longevity, longer-term impact and sustainability.

I started reaching out to these communities, and that's what led to this sustainability design manifesto, the Karlskrona Manifesto. [We had] this realization that this long-term concern also involves concerns that are outside the technical realm, that are not just concerns about the system itself to be maintained. It's about broader impacts that are outside of the technical space. In the end, that broader impact matters -- especially from an ethical perspective -- much more than whether that piece of software lives or not. It matters what happens to everyone around it.

That automatically led to this shift in emphasis. As I got deep into that space, I did a lot of reading, and part of that involved a reckoning with the fact that sustainability is never just an environmental question. Environmental sustainability is always an ethical question and a social justice question. We cannot really talk about environmental sustainability and climate change and pollution and any related subjects without recognizing that these are social justice questions and equity questions and ethical questions. That has been forcefully recognized, but outside of computing. Computing has only recently really grappled and reckoned with what that means and what it should mean.

Your work is situated in an academic tradition that includes critical assessments of technology's environmental and social impacts. But many of the problems in computing that we're discussing are, unsurprisingly, occurring in industry contexts. What are your thoughts on how to bridge that gap between theory and practice?

Becker: I find that this so-called gap, or this categorization between theory and practice, often is more in our heads than elsewhere. I think the gap we have is that we're in a world where we have technology and processes and structures in place that aren't the kind of technology we would like to have. Many of us in the tech world -- and that includes industry and academia; I'm a tech worker as much as many others -- are very uncomfortable with where things are going.

But it's not so much a gap between two different things. It's just that the situation is problematic in many ways, and there are real conflicts of interest between the ecology of the planet and capitalist interests, for example. The question is, what are the roles of each of us individually and, especially, collectively in shifting the trajectory?

We can often feel powerless, and the way forward is often unclear, because everything is complicated and all the different responsibilities are divided up. Division of labor can often make it appear that each of us has almost no role to play, but that's not really true. I think the difficult question is, what is the room for action that each of us has in the place where we are? How does that relate to others' actions and others' room for action, and how can we increase that? Part of the book, for me, is also explaining how I thought through that as an academic and how that changes how I think about my teaching.

But whatever our job is, we do have a role. Recognizing and reflecting on that is the first step, and it depends on where we are. There's no universal response for any of this. If you're a project manager, then your responsibilities will involve things like defining success criteria, facilitating workshops or defining the scopes of projects. In each of these steps are decision points where we can be more sustainable or less, more focused on justice and equity or less.

On an individual level, more and more workers are aware of and want to center these issues. But they're also encountering barriers. They're running up against business incentives for unsustainable growth, or they feel powerless in the context of their work. For someone in that position, what are some ways to take action within your sphere of influence?

Becker: I think it starts with recognizing what the situation is -- that tech has this conflicted relationship, that it is simultaneously doing bad and good, but that none of this is inevitable. If you work with stakeholders, which stakeholders get included? Can you include broader stakeholders? How do you reckon with the fact that there are stakeholders who are affected who you cannot possibly include? Or maybe you're doing impact assessment. You can broaden the scope of those impact assessments to look at long-term environmental impact.

That, I think, should always involve some form of reading. It doesn't have to be reading; it can be listening to podcasts and so on. But use that to reflect on your role and positionality, find some room for action and recognize that you're not alone. You might feel alone, but ultimately, you're not alone. You don't have to do any of this alone. Share that conversation with coworkers. Share it with others who have similar interests. Join interest groups. Join reading groups. Figure out what you all share that you might not see.

One concept I introduce in the book is this idea of the critical friends of computing. I didn't invent the critical friends concept; that comes from pedagogy. But I think, especially for computing and related technical fields, it is so valuable. Essentially, critical friends tell us things we can't see ourselves, and we're willing to listen to them because they're our friends. There's a base level of respect and trust.

What they tell us might be uncomfortable, but it's exactly where things are uncomfortable that we have the biggest potential for growth and learning. There is a reason why it's uncomfortable. If we lean into that discomfort, we're probably going to learn something valuable about ourselves. Forming critical friendships with others who are not like us can be extremely powerful in making us see things we didn't recognize.

Some of your critiques along those lines in the book made me think of Meredith Broussard's concept of technochauvinism -- "the assumption that computers are superior to people, or that a technological solution is superior to any other." Clearly, we shouldn't present technological solutions and perspectives as the only or universally ideal option. But where do you see computing fitting into sustainability and social change movements?

Becker: It fits in so many ways. Anywhere it supports genuine social needs -- coordinating global activism, analyzing quantitative information about environmental impacts, monitoring overfishing or illegal rainforest logging in ways that leverage satellite imaging or drones. There are interesting things we couldn't do without some advanced tech.

It's often not the hottest new tech that gives us the best impact. The projects that excite me most are those clearly driven and governed by a community, where technology comes in because there is an actual, clearly identified need to do something meaningful that requires it -- rather than a startup that wants to do AI for good and is desperately looking for an application for some algorithm it already has.

I'm generally less excited about things like efficiency. Energy efficiency, especially, is something I'm usually less excited about than attempts to introduce tech in a meaningful way to amplify the potential for social change, or to find leverage points where a small intervention can lead to a much bigger change. I think those are the kinds of changes we need to change the trajectory of technology in our world right now -- much more than incremental improvements in efficiency. Not to say that energy efficiency is all bad, but it sometimes is bad, actually. If your business is all about supporting the fossil fuel industry, then I'd rather you be less efficient.

Right. When we're thinking about AI's impact on the environment, for example, that goes beyond just the emissions associated with training a model. If a model is ultimately being used for oil and gas exploration, then its efficiency almost doesn't matter if its downstream impacts are so much greater.

Becker: Yes, that is what I'm getting at. In this case, it's also the fact that if you're working in oil and gas exploration and you're making the technological part more efficient, what you're really doing is increasing the return on investment. You're incentivizing more of it, and you increase the profit of the fossil fuel industry at a time when it has to be shut down. You would expect that increased efficiency would reduce consumption. But in fact, it just increases consumption, so the total energy use rebounds.

It's been known for more than 150 years as the Jevons paradox; it goes by a few different names. It's a well-established phenomenon, though quantifying it to get a clear assessment isn't straightforward, and it plays out differently in different situations. But it often limits the positive effects of energy efficiency.
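
To make the rebound effect concrete, here is a rough illustrative sketch -- not from the book -- that assumes a toy constant-elasticity demand model; the efficiency gain, elasticity values and baseline are all hypothetical:

# Toy model of the rebound effect behind the Jevons paradox (illustrative only).
# A 20% efficiency gain cuts the energy needed per unit of service, which makes
# the service cheaper, which raises demand; whether total energy use falls,
# stays flat or rises depends on how elastic that demand is.

def total_energy_use(efficiency_gain, price_elasticity, baseline_energy=100.0):
    energy_per_service = 1.0 - efficiency_gain                      # energy to deliver one unit of service
    demand_multiplier = energy_per_service ** (-price_elasticity)   # demand grows as the effective cost falls
    return baseline_energy * energy_per_service * demand_multiplier

for elasticity in (0.3, 1.0, 1.5):
    used = total_energy_use(efficiency_gain=0.2, price_elasticity=elasticity)
    print(f"elasticity {elasticity}: {used:.1f} energy units (baseline 100)")

# elasticity 0.3 -> ~85.5 units: some savings, but less than the naive 20%
# elasticity 1.0 -> 100.0 units: extra demand eats the entire efficiency gain
# elasticity 1.5 -> ~111.8 units: total use rises -- the Jevons "backfire" case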

One project that we're currently doing in my group, in collaboration with Montreal, is named Curbcut. It's an urban sustainability visualization and analysis platform that aims to bring lots of data sources together to help people understand relationships between variables, so you can more clearly understand what the impact of different policies might be. For example, how access to environmental green space is related to social equity, sustainable transport, pollution, income distribution, demographics and so on.

Computing can give us tools for flexible analytics that allow us to explore these complex questions in quick ways and enable anyone on the planet with an internet browser, in principle, to access this. There are some wonderful opportunities there. But how exactly we make use of this and who gets to decide how we make use of it and which data gets shown -- that's the question. Who benefits from that, and what do we do with it, and what kind of knowledge do you have to have to understand and use these visualizations?
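
As a small sketch of the kind of question such a platform makes easy to ask, the snippet below joins two hypothetical tables -- park access and median income by neighbourhood -- and computes their correlation; the data, column names and tooling (pandas) are stand-ins, not Curbcut's actual implementation:

# Illustrative only: hypothetical data and column names, not Curbcut's.
import pandas as pd

green_space = pd.DataFrame({
    "neighbourhood": ["A", "B", "C", "D"],
    "pct_near_park": [85, 40, 62, 25],   # share of residents within a short walk of a park
})
income = pd.DataFrame({
    "neighbourhood": ["A", "B", "C", "D"],
    "median_income": [95_000, 52_000, 71_000, 41_000],
})

merged = green_space.merge(income, on="neighbourhood")
corr = merged["pct_near_park"].corr(merged["median_income"])
print(merged)
print(f"Correlation between park access and income: {corr:.2f}")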

Some people are likely to experience discomfort when confronted with these ideas for the first time. What would you suggest for practitioners who find themselves trying to reach peers or leadership who aren't always receptive to these concerns?

Becker: One thing I've learned is that we do want to, as much as we can, be a little selective in who we talk to. We don't have to talk to everyone. We don't have to engage in every conversation. We don't have to convince everyone. There are certain reactions to bringing things up where I realize that engaging in that conversation is going to be useless. I'd rather save my effort for others who are much closer to where I am, because there are so many others who are willing to engage.

There's a category of people in technology who think that anyone who mentions the words "critical" or "critique" is just against everything. In that kind of situation, emphasizing that we critique because we care and that we critique to improve is one important thing to say. If that doesn't lead to a real change in attitude, then there is limited value in those kinds of confrontations. But I think there's an entire spectrum, right? On the other end of the spectrum, there's someone who is enthusiastically going to agree, and we already agree that we want the same thing, and we start talking about what we can do.

In between, there's a wide spectrum of others, and that's where it often gets nuanced. There is the reaction where people immediately agree and realize they are constrained by business objectives and existing contracts. But then that is a starting point to say, "Okay, within those constraints, what can we do?" That might lead further down the line to questioning some of these constraints. If it turns out that the constraints produce unsustainability, then maybe someone has to do something about them, and it starts with us.

There is this idea that to become more sustainable, you have to sacrifice something, and that's not always true. Often there is something that can be shifted that makes things more sustainable and simultaneously better in other ways.

The other thing is to find others who also care. I especially like this recent statistic from the [Green Software Foundation] report that almost all tech workers are already concerned about climate change. I think nowadays, if you're a tech worker and you're concerned about this, you don't have to look far to find others who are also concerned, because almost everyone already is. Bringing that up and finding others as a starting point changes everything already, because it's not you anymore, it's us.

When you talk about reorienting the underlying norms in computing, it seems like one thing that needs to change is that sustainability can't just be treated as an add-on at the end of a tech project.

Becker: Absolutely. This cannot just be an add-on that comes at the very end of the list of desirables. It has to be a fundamental constraint to technology development. Designers have always known that constraints are a beautiful thing for design. Setting specific constraints frees innovation to do things in a better way -- that's a classic exercise in design schools.

So what if we do set fundamental fixed constraints? For example, if the tradeoff is 0.00001% accuracy against 20 tons of carbon dioxide [when training a machine learning model], honestly, I would find it extremely hard to justify the accuracy for any use case at all. But of course, in a business like Google or Netflix, that 0.00001% might be translated into increased revenue. There's a clear bottom line and a predictive model for that bottom line and how valuable that accuracy actually is.
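
A minimal sketch of that difference, with every number made up for illustration: under a pure bottom-line view, any expected revenue can justify the emissions, while under a hard carbon constraint the budget is non-negotiable.

# Hypothetical example: treating carbon as a fixed constraint rather than a cost.
CARBON_BUDGET_TONNES = 5.0   # made-up hard cap for one training project

def approve_training_run(accuracy_gain_pct, expected_revenue_gain, estimated_co2_tonnes):
    # Under a constraint view, exceeding the budget ends the discussion,
    # no matter how the marginal accuracy translates into revenue.
    if estimated_co2_tonnes > CARBON_BUDGET_TONNES:
        return False
    return expected_revenue_gain > 0 or accuracy_gain_pct > 0

# The tradeoff from the interview, with a made-up revenue figure attached:
print(approve_training_run(accuracy_gain_pct=0.00001,
                           expected_revenue_gain=250_000,
                           estimated_co2_tonnes=20.0))   # False: over the carbon budget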

At some point, I'm hoping that we're in a world where someone doesn't get away with making such reckless decisions at the cost of others. I'm also hoping those tech workers with more privilege are increasingly called on to refuse to do that. Of course, being able to say "I refuse" is a significantly privileged option that not everyone has. But if you can say that, then at some of these extreme points, you probably should. Some courageous people have done that publicly and shifted the conversation.

As the climate crisis becomes increasingly dire, it's easy to get overwhelmed when learning about it and looking to take action. That crisis fatigue obviously extends beyond the tech context, but it can be especially tough for tech workers given that the industry is both influential and flawed. What gives you hope that computing can change course to reorient around justice and sustainability?

Becker: I've thought a lot about this in the last few years, and I've been inspired by some ways of thinking about what hope is. Especially in the tradition that I came from, hope is silently assumed to be what I could describe as a sort of predictive optimism -- "I hope that," and after "that" comes a declarative statement about a future state of the world: "I hope that you recover quickly," "I hope that nothing bad comes of it," "I hope that we solve this problem." We are optimists if we believe that prediction will be true, and we're pessimists if we believe that prediction will not be true.

It's a narrow way of thinking, though. There are many other ways of thinking that are much, much broader. I've drawn a lot of inspiration from Rebecca Solnit's writing -- Hope in the Dark, A Paradise Built in Hell. Right now on my reading list is Not Too Late, edited by Rebecca Solnit and Thelma Young Lutunatabua, an edited collection about turning the stories of climate activism from despair to hope.

But the thing is, hope in this framing is not optimism or pessimism. Pessimism is believing that it doesn't matter what we do because it's not going to work anyway, and optimism is believing that it's going to work out anyway, so I don't have to do anything. Both diminish the role of our agency. Hope, instead, is a force of action, a motivating force. Instead of focusing on predictive optimism, pessimism and probabilities of success, hope is the conviction that we have a role to play.

It's a source of energy for me. When I think of it like that, I am full of hope, because I know there is so much we have to do, and it absolutely does make a difference what we do. None of these forms of social organization are inevitable; we can change things. Democracy still means that we can rearrange things and redefine things together. The hope there is the conviction that we all have something to do here, not the probability of success. Reading these kinds of things gives me a lot of energy for action, and I think that's what hope most meaningfully is.
