
Google bars uses of its artificial intelligence tech in weapons, unreasonable surveillance


Google CEO Sundar Pichai speaks at the Google I/O conference in Mountain View, California, May 8. Google pledges that it will not use artificial intelligence in applications related to weapons or surveillance, part of a new set of principles designed to govern how it uses AI. Those principles, released by Pichai, commit Google to building AI applications that are “socially beneficial,” that avoid creating or reinforcing bias and that are accountable to people. | AP

Google will not allow its artificial intelligence software to be used in weapons or unreasonable surveillance efforts, according to a new set of ethical principles the company released Thursday.

The new restrictions could help management at Google, a unit of Alphabet Inc., defuse months of protests by thousands of employees against the company’s work with the U.S. military to identify objects in drone videos.


Google will pursue other government contracts, including in areas such as cybersecurity, military recruitment and search and rescue, CEO Sundar Pichai said in a blog post the same day.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” he said.

Breakthroughs in the cost and performance of advanced computers have begun to carry AI from research labs into industries such as defense and health. Google and its big technology rivals have become leading sellers of AI tools, which enable computers to review large data sets to make predictions and identify patterns and anomalies faster than humans could.

But the potential of AI systems to pinpoint drone strikes better than military specialists or identify dissidents through mass collection of online communications has sparked concerns among academic ethicists and Google employees.

“Taking a clear and consistent stand against the weaponization of its technologies” would help Google demonstrate “its commitment to safeguarding the trust of its international base of customers and users,” Lucy Suchman, a sociology professor at Lancaster University in England, said ahead of Thursday’s announcement.

Google said it would not pursue AI applications intended to cause physical injury, that tie into surveillance “violating internationally accepted norms of human rights,” or that present greater “material risk of harm” than countervailing benefits.

Its principles also call for employees as well as customers “to avoid unjust impacts on people,” particularly around race, gender, sexual orientation and political or religious belief.

Pichai said Google reserved the right to block applications that violated its principles.

A Google official described the principles and recommendations as a template that anyone in the AI community could put into immediate use in their own software. Though Microsoft Corp. and other firms released AI guidelines earlier, the industry has followed Google’s efforts closely because of the internal pushback against the drone imagery deal.


