AI Weekly: Google should listen to its employees and stay out of the business of war
This week, we learned that thousands of Google employees are upset about the company’s involvement in Project Maven, a U.S. Department of Defense initiative that has tapped Google to help analyze drone footage.
News of Google’s participation in the program and concern among the company’s ranks was first reported by Gizmodo, which cited anonymous sources last month. A letter obtained by the New York Times and published earlier this week makes it clear just how seriously this issue is being taken.
Written and signed by 3,100 Google employees, the letter addressed to CEO Sundar Pichai urges the company to pull out of Project Maven and enact a policy stating that Google and its contractors will not build “warfare technology.” The letter states that failure to do so could “irreparably damage Google’s brand and its ability to compete for talent” at a time when Google is “already struggling to keep the public’s trust.”
“We cannot outsource the moral responsibility of our technologies to third parties,” the letter reads. “Google’s stated values make this clear: Every one of our users is trusting us. Never jeopardize that. Ever. This contract puts Google’s reputation at risk and stands in direct opposition to our core values.”
News of Google’s internal spat comes the same week that 50 AI researchers refused to work with South Korea’s top university over a reported autonomous weapons and “killer robot” initiative.
Google apparently characterizes its work with the Pentagon as “non-offensive,” but consider the bomb-defusing robot used in 2016 to kill a mass shooter in Dallas. As this and many other examples make clear, a tool made for one purpose can always be used for other ends. This problem is the subject of increased scrutiny in AI communities, most recently in a report from EFF, OpenAI, and other reputable organizations that implores engineers to remember the dual-use nature of AI.
The New York Times article refers to the letter from Google employees as “idealistic,” a characterization I find odd. There’s nothing “idealistic” about employees of a company that makes the majority of its money on advertising articulating an aversion to killing people.
When it comes to AI use cases, keep in mind that Google already holds a monopoly in internet search and is developing businesses in many more sectors, not to mention new avenues for AI that will open up down the road.
The technology giant has its hand in an astonishing number of other pies.
It owns both Chrome, the most popular web browser, and Android, the world’s most popular mobile operating system. It’s squarely second in the U.S. smart speaker market, with expansions set for India and other countries around the world. It’s in the workplace with millions of G Suite users. Google’s education tech is used in more than half of U.S. primary and secondary schools. Google is even helping governments with specially made apps and cloud services and initiatives like Project Loon to spread internet access around the world.
This is all to say nothing of Google Cloud, YouTube, GV, Waymo’s ambitions for autonomous vehicles, and many other industries where Google is a dominant force.
Sure, Google’s involvement with Project Maven could be motivated by some form of patriotism, or justified more pragmatically by the knowledge that if Google refuses to help, another company will happily step in to do so. Like Google’s push to fund campaigns of both liberal and conservative politicians in recent years, the company’s role in Maven could also be aimed at bolstering ties with the federal government at a time when calls for antitrust regulation of tech giants are growing.
Google could also be motivated in part by competition with players like Amazon. But this whole situation reminds me of the end of the movie Bad Santa, when Billy Bob Thornton’s character is betrayed by one of his elves. In the moment when he is about to be killed for his share of a mall robbery, he doesn’t plead for his life; he’s simply shocked by the elf’s greed.
“Do you really need all that shit?” he asks about the money and a pile of stolen merchandise.
Google has a bit — or a lot — of everything. Economically speaking, the company doesn’t need to make tools of war.
We’re in the midst of what VentureBeat correspondent Chris O’Brien calls “the rise of tech nationalism,” in places like France as well as in authoritarian nations with large standing armies and an appreciation of AI’s strategic importance, like Russia and China.
This is also a moment when major tech giants like Google are declaring themselves reborn as AI companies, and the areas they choose to devote resources to will shape not just their revenue but public perception of AI and what this powerful technology is capable of.
Google Cloud chief scientist Fei-Fei Li recently expressed her view that Google should explore ways to work more closely with social scientists, humanists, lawyers, artists, and policy makers — collaborative prospects that are a long way from making tools of war.
Exactly what’s at stake for Google with Project Maven is tough to gauge, but the potential riches that come from working with the Department of Defense on more accurate drones may not justify the risk of alienating consumers or governments around the world.
Like Facebook, where internal strife has recently spilled into public controversy, Google encourages spirited debate among its employees, and the biggest loss for Google — again like Facebook — may be the erosion of trust in a company that’s ever-present in our lives.
A public backlash against two companies that have acquired much of the world’s top AI talent could also impact a vibrant but still growing AI ecosystem.
As the recent controversies have driven home, there’s a lot more to consider with AI than finding the right model or dataset to train neural nets, especially for businesses that can at times appear more powerful than many nation-states.
Thanks for reading,