Google withdraws promise not to use AI for sensitive topics

Google has walked back its commitment to refrain from using AI for potentially dangerous applications such as weapons and surveillance, according to the company’s updated “Artificial Intelligence Principles.”

A previous version of the company’s artificial intelligence principles stated that Google would not use AI to develop weapons, conduct surveillance, build technologies that “cause or are likely to cause generalized harm,” or pursue applications contrary to principles of international law and human rights.

The old version of the page is available in the web archive.

Google’s senior vice president James Manyika and Demis Hassabis, who heads Google’s artificial intelligence lab DeepMind, defended the move. They argue that businesses and democratic governments should work together on AI that “supports national security.”

Google’s updated principles state that the company will rely on human oversight and feedback to ensure its technology is used in accordance with “generally recognized principles of international law and human rights.” They also say the company will test its technologies to “mitigate unintended or dangerous consequences.”
