Professor declines Google funding in solidarity with ousted employees

In early March, Luke Stark, a professor in the Faculty of Information and Media Studies (FIMS), turned down a $60,000 Google Research Scholar award because of the company's controversial handling of an internal research review process and its mistreatment of senior members of its ethical artificial intelligence (AI) team.

Stark recently spoke to UWOFA about his decision to stand in solidarity with Timnit Gebru and Margaret Mitchell, who were ousted from the company in recent months.

This interview has been edited for brevity and clarity. 

Q. How did you come to the decision to decline research funding from Google?

Luke Stark: I had applied in December before Timnit and Meg were fired. The AI ethics world is pretty small – although I think it’s growing as interest grows in the topic – and I’m not close with either Meg or Timnit, but I know their work very well. I’ve interacted with them professionally at conferences and I have a number of colleagues and friends who are still employed by Google on the Google AI team. And so, for all those reasons, it seemed like it would be a real blow to the kind of solidarity that those folks are trying to build around pressuring Google to shape up again, if that’s possible, and more broadly to get tech workplaces to take a wide variety of social issues seriously. I thought if I took the money I would really be undercutting that project in my own community. My first reaction was, ‘I just can’t take this. I have to communicate with a bunch of people. I have to talk to colleagues. I have to talk to my dean.’

Q. Can you elaborate on the push for technology companies to take social issues more seriously?

A. I see labour organizing and labour activism as really tied tightly into these conversations about AI ethics. A lot of the conversations around the ethical impacts, or the social impacts, of AI systems are tightly tied to, first of all, the lack of diversity and equity within the workforces of these companies. And so, it's not a surprise that Timnit, as a Black woman, has been the target of a huge online harassment campaign simply because she's one of the world's top experts in this space.

And it's also not a coincidence that Google has used a lot of the same kinds of legal and procedural arguments against Meg Mitchell that they've used to try to quash worker organizing. It's this idea that Google has a lot of rules, and the rules never really matter until Google wants them to matter and needs a pretext for firing you. So, to me, and I think to a lot of people increasingly in this space, labour activism and labour representation are tightly tied to getting companies to change how they're developing new products, precisely because without worker control, the companies aren't going to change anything. Without workers having a say in the direction of company policies, the leadership is really going to be able to do what they want to do. It's been quite inspiring to me, seeing how much growth there's been in worker activism in Silicon Valley and the tech sector over the last three or so years. I think digital tech, and computer science in general, has a really longstanding libertarian streak that often gets held up, and still gets held up by many, as defining the discipline. So it's been great to see people, especially more junior people, getting engaged in social justice questions and also in labour questions.

Q. You felt you had to decline the funding. Why was it also important for you to speak publicly about this?

A. That was something else I chewed on: whether I should post something or speak up. I thought it was important in the interest of building solidarity with Timnit and Meg, and in the interest of encouraging people, if they can, to take similar or related actions. And I want to be very clear that I know many people aren't in the same position I am, in terms of their discipline or the students who are relying on them. I don't begrudge people taking the money, and I think there are lots of different ways you can show solidarity with Timnit and Meg and these broader questions. But I thought that if I was going to turn it down, I should do it publicly for that solidarity-building. This isn't a story about me, particularly, I don't think. This is a story about them and Google's bad actions and, more broadly, these problems in the tech sector. The more attention that brings to those problems, the better, I think.

Q. In your view, what can Google do to address this in a meaningful way?

A. One of the really disappointing things about this whole episode is that, until Timnit was fired, it really seemed that Google was building up a world-class, amazing team of folks thinking about the social and ethical impacts of AI: about how the design of these systems can incorporate human biases and exacerbate power asymmetries. It seemed like the company was doing the right thing. And then my impression is that the work came to the attention of people higher up in the company who saw it as interfering with the business model, essentially.

I think that Google could certainly apologize and could offer a full, transparent account of what happened. I can't speak for my colleagues, and I don't know if they would want to work for Google again, but the company could put in place an independent research review process, since the existing process was the cause of this whole thing. And they could, more broadly, accept the necessity of organizing among their workers and have a kind of reckoning about what their products are doing and how they can shift course to make those products less deleterious. But I'm not holding my breath for any of that at this point.

Q. You wrote in your letter to Google declining the funding that you look forward to a day when you can collaborate with Google Research if it fosters, in part, critical research and products that support equity and justice. In your view, how can products support equity and justice?

A. There are lots of ways that digital technologies can be designed, from conception to execution, to at least avoid many of the common pitfalls that people have identified. For instance, one of the big problems with Google's interest in natural language processing, and with natural language processing more broadly, is that these systems recreate the biases in the texts they use to learn. A system finds Islamophobic texts, transphobic texts, and it reproduces them, and then you have a text-generation system producing biased text on those topics. Finding ways of dealing with that within commercial models is one piece of low-hanging fruit.

I do also think that it's really critical that these companies make the decision that they're not going to pursue certain lines of potentially profitable, but clearly problematic, technology. Facial recognition and analysis technologies are one area that I think qualifies. There is a really long list of things that these companies can do, and they are, with a few exceptions, barely scratching the surface.


Luke Stark is an assistant professor in FIMS whose research focuses on the ethical and social impacts of artificial intelligence and digital systems.