Google Ends Involvement With Controversial U.S. Government Project
June 13, 2018
Due to backlash from employees and academia, Google has decided to pull out of Project Maven, a controversial project that would help the U.S. government strengthen its drone surveillance capabilities with artificial intelligence (AI).
Do the Right Thing
“Don’t be evil” was a motto in Google’s corporate code of conduct for quite some time. After the Mountain View-based company’s restructuring under Alphabet Inc., the motto was replaced by “Do the right thing.” The public has regularly invoked the sentiments behind both sayings to reprimand Google in times of contentious events.
This time around was no different, except that a substantial amount of the criticism came from within the company. Thousands of Google employees signed an internal letter protesting Project Maven, and about a dozen went so far as to resign because they believed the project did not align with Google’s core values.
Last week, Google Cloud CEO Diane Greene told employees that the company would not be pursuing a renewal of its contract for Project Maven.
One Step Away From Misuse
Officially dubbed the Algorithmic Warfare Cross-Functional Team, the project quickly became known in the Pentagon as Project Maven. The program utilizes recent AI developments, specifically machine learning, to analyze footage from aerial drones.
Google has stated that the research for Project Maven is focused on “non-offensive” purposes. Addressing concerns that the technology could be used in warfare, Google claimed that “the technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.”
In an open letter, AI experts around the world responded by saying that this research does bring the U.S. military one step closer to weaponizing AI for warfare: “We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control.”
Taking It Too Far?
Per Gizmodo, Google’s current Project Maven contract expires in 2019. Initially valued at $15 million, the budget apparently ballooned to estimates as high as $250 million. And while the initial scope of the project pertained to drones, Gizmodo claims that Google may have had other plans that went beyond this:
“Google intended to build a ‘Google Earth-like’ surveillance system that would allow Pentagon analysts to ‘click on a building and see everything associated with it’ and build graphs of objects like vehicles, people, land features, and large crowds for the entire city.”
AI is allowing us to accomplish some magnificent things. But on the flip side, it also opens the door to disastrous outcomes. Since Google is “AI-first” and undoubtedly one of the pioneers in the field, its decisions will have lasting effects on the technology and how it’s used.
Do you think Google should help the U.S. government? Or do you think there is too much potential for this power to be abused?