Microsoft Bans US Police Departments From Using Azure OpenAI Service for Facial Recognition

Microsoft has strengthened its prohibition on U.S. law enforcement agencies using generative AI for facial recognition through Azure OpenAI Service, the company’s enterprise-focused platform for building applications on OpenAI technology.

Language added on Wednesday to the terms of service for Azure OpenAI Service more explicitly prohibits integrations with the service from being used “by or for” U.S. police departments for facial recognition. This covers integrations with both current and any future image-analyzing models from OpenAI.

Furthermore, a new addition to the terms covers “any law enforcement globally,” explicitly barring the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dashcams, to attempt to identify individuals in “uncontrolled, in the wild” environments.

The policy change came soon after Axon, a maker of technology and weapons products for the military and law enforcement, announced a new product that uses OpenAI’s GPT-4 to summarize audio from body cameras. Critics were quick to point out potential pitfalls, such as hallucinations (even today’s best generative AI models fabricate facts) and racial biases (a particular concern given that police stop people of color at disproportionately higher rates).

It’s unclear whether Axon was using GPT-4 via Azure OpenAI Service and, if so, whether the policy update was a response to Axon’s product launch. OpenAI had previously restricted the use of its models for facial recognition through its own APIs. We’ve reached out to Axon, Microsoft, and OpenAI for more information and will update this story if we hear back.

The updated terms still leave Microsoft some wiggle room. The ban on using Azure OpenAI Service applies only to U.S. police, not police forces internationally. And it doesn’t cover facial recognition performed with stationary cameras in controlled environments, like an office (although the terms prohibit any use of facial recognition by U.S. police).

This tracks with Microsoft’s and its close partner OpenAI’s recent approach to AI-related law enforcement and defense contracts.

In January, Bloomberg revealed that OpenAI is collaborating with the Pentagon on various projects, including enhancing cybersecurity capabilities. This marks a shift from OpenAI’s earlier policy of not providing its AI technology to the military. According to The Intercept, Microsoft has proposed using OpenAI’s image generation tool, DALL-E, to aid the Department of Defense (DoD) in developing software for military operations.

In February, Azure OpenAI Service became available in Microsoft’s Azure Government product, which adds compliance and management features tailored to government agencies, including law enforcement. In a blog post, Candice Ling, Senior Vice President of Microsoft’s government-focused division Microsoft Federal, pledged that Azure OpenAI Service would be “submitted for additional authorization” to the DoD for workloads supporting DoD missions.

Update: Microsoft says its initial change to the terms of service contained an error. The ban applies only to facial recognition in the U.S. and does not prohibit police departments from using the service entirely.

Read More: Microsoft Pledges $2.2 Billion Investment to Boost Malaysia’s AI and Cloud Infrastructure