Google CEO: Our AI won’t be used for harm


Among the AI applications Google pledges not to pursue: technologies that gather or use information for surveillance in ways that violate internationally accepted norms.

This commitment follows protests from staff over the US military's research into using Google's vision recognition systems to help guide drones. Under its current contract, Google is providing AI technology to analyze drone footage, a controversial arrangement that drew severe backlash from both the public and Google's own employees.

The comments came shortly after Google CEO Sundar Pichai published a set of governing principles for how the company plans to work with AI technology in the future. Pichai said Google will continue to work with governments and the military in other areas and will be "actively" looking for those collaborations.

This comes in direct response to the company's work with the Department of Defense's Project Maven.


The document, which also promises "relevant explanations" of how AI systems work, lays the groundwork for the rollout of Duplex, a human-sounding digital concierge that was shown booking appointments with human receptionists at Google's developer conference in May. The charter sets "concrete standards" for how Google will design its AI research, implement its software tools, and steer clear of certain work, Pichai said in a blog post. The company said on Thursday that if the principles had existed earlier, Google would not have bid for Project Maven. Thousands of Google employees signed a petition against the contract, and some quit in protest.

The principles also state that Google's AI will "be made available for uses that accord with these principles." Several employees said they did not think the principles went far enough to hold Google accountable. For instance, Google's AI guidelines include a nod to following "principles of international law" but do not explicitly commit to following international human rights law.

Google said it will not pursue development of AI that could be used to violate international law.

Peter Asaro, vice chairman of the International Committee for Robot Arms Control, said this week that Google's withdrawal from the project was good news because it could slow a potential AI arms race in autonomous weapons systems.


The principles include aims such as safety, accountability, privacy, avoiding unfair bias, and being "socially beneficial." Google evidently recognizes the massive potential of AI technology and wants to build its systems with this framework in mind.

Google's decision to restrict its military work has drawn criticism from members of Congress.

Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University, pointed out that Pichai also said, "Many technologies have multiple uses." She added: "In the absence of positive actions, such as publicly supporting a global ban on autonomous weapons, Google will have to offer more public transparency as to the systems they build."

