Google Promises Its A.I. Will Not Be Used for Weapons


The guidelines come after an internal and external backlash to the use of artificial intelligence technology in a contract Google signed a year ago with the Department of Defense, known as Project Maven.

The principles follow a conflict inside Google that pitted thousands of employees against management. The project's goal was to process and catalog drone imagery, and Google's rank-and-file workers were none too pleased.

The news comes as Google faces an uproar from employees and others over a contract with the U.S. military, which the California tech giant said last week it would not renew.

"We recognize that such powerful technology raises equally powerful questions about its use", Pichai wrote in introducing seven principles "to guide" the company's future work.

Going forward, Google will work on AI applications that are socially beneficial and will avoid creating or reinforcing unfair bias.

The post also notes that there is space for more voices in this conversation on AI principles, and Google will "work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches".

Only weapons that have a "principal purpose" of causing injury will be avoided, but it's unclear which weapons that refers to.


"The company has constrained itself to only assisting AI surveillance projects that don't violate internationally accepted norms", Eckersley said. The company, Pichai said, would also evaluate its work in AI by examining how closely its technology could be "adaptable to a harmful use". In his blog post, Pichai specified that Google will not use AI in weaponry, surveillance or other areas where deployment is likely to cause overall harm.

Google will pursue other government contracts, including around cybersecurity, military recruitment, and search and rescue, Chief Executive Sundar Pichai said in a blog post Thursday. The final line of the company's code of conduct now reads: "And remember... don't be evil, and if you see something that you think isn't right - speak up!" Last week, Gizmodo reported that Google cloud chief Diane Greene told employees that Google had chosen to stop providing AI tech to the Pentagon.

Pichai also said that Google will avoid developing any surveillance technology that would violate internationally accepted norms of human rights or break international law. How AI is developed and used will have a significant impact on society for many years to come.

The move comes less than a week after the company announced that it planned to sever ties with the Pentagon, after a contract with the Department of Defense sparked internal protests at the company over something called Project Maven.

"These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe", he wrote.

The restriction could help Google management defuse months of protest by thousands of employees against the company's work with the United States military to identify objects in drone video.