Some Artificial Intelligence Should Be Regulated, Research Group Says

The use of artificial intelligence in sensitive areas like criminal justice and healthcare should be regulated.

That’s one takeaway from a report by the AI Now Institute, a research group affiliated with New York University, that examines the cutting-edge technology’s impact on society. The researchers fear scenarios in which governments and companies use AI technologies to constantly monitor certain populations or unfairly discriminate against some communities, among other harms.

The report, released on Thursday, comes as machine learning is increasingly used to crunch vast amounts of data, including at Google, which taps the technology for chores like automatically recognizing dogs in photos. But the technology also raises a number of ethical concerns.

For instance, workers at Google criticized the search giant’s executives for considering signing a Defense Department contract that involved helping the government use image-recognition technology for military purposes. Amid the backlash, Google dropped out of the bidding for a $10 billion cloud computing contract with the agency, citing its internal policies against using machine learning for warfare and surveillance.

Meanwhile, Amazon and Microsoft have both faced employee backlash over government contracts amid concerns that the companies’ AI-powered facial-recognition software could be used by law enforcement to unfairly target minorities or marginalized communities.

The AI Now Institute’s report says the federal government should “oversee, audit, and monitor” the use of AI in fields like criminal justice, healthcare, and education. That oversight should fall to individual agencies or groups of experts rather than a single, overarching AI regulator, the group said, because each industry is governed by its own set of regulations.

On facial-recognition technology, the report recommends “national laws that require strong oversight, clear limitations, and public transparency.” The authors said communities should be able to reject the use of facial-recognition technology in their areas if they believe it could be used to unfairly discriminate against or target individuals.

Kate Crawford, an AI Now co-founder and a principal researcher at Microsoft’s research arm, likened letting communities decide whether facial-recognition technology is used within their city limits to local governments being able to establish nuclear-free zones.

This year was especially tumultuous for leading tech companies that use AI, like Facebook, which suffered multiple data-privacy blunders and was slammed by critics for an AI-powered news feed that spread fake news in the run-up to the 2016 U.S. presidential election. The fact that employees at companies like Google and Microsoft pressured executives to speak publicly about sensitive AI-related topics, and even to change course, shows that workers can make an impact, Crawford said.

In another hopeful sign, she pointed out that companies like Microsoft and Amazon are not fighting government regulation. Microsoft president Brad Smith, for instance, has called for “thoughtful government regulation” of facial-recognition technology.

“This has not happened before,” Crawford said. “We are seeing something new.”
