
Major Tech Companies Commit to Voluntarily Regulate Development of Artificial Intelligence

Seven major technology companies agreed to a plan to voluntarily regulate their development of artificial intelligence (AI), the White House announced Friday morning.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI committed to the plan to move toward safe, secure, and transparent development of AI technology.

“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI—safety, security, and trust—and mark a critical step toward developing responsible AI,” the White House said in a fact sheet. “As the pace of innovation continues to accelerate, the Biden-Harris administration will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe.”

OpenAI said in a press release that it was committing to the voluntary regulations to reinforce the safety, security, and trustworthiness of AI technology and its services. The company, which is responsible for the popular ChatGPT product, called Friday’s announcement an “important step” to advance meaningful and effective AI governance.

“Policymakers around the world are considering new laws for highly capable AI systems,” said Anna Makanju, vice president of global affairs at OpenAI. “Today’s commitments contribute specific and concrete practices to that ongoing discussion. This announcement is part of our ongoing collaboration with governments, civil society organizations, and others around the world to advance AI governance.”

What Did the Companies Commit to?

Under the voluntary agreement, the companies will immediately move forward on major action items:

1. Conducting internal and external security testing of AI systems before releasing them. Independent experts will conduct this testing to guard against significant sources of AI risk, such as biosecurity and cybersecurity threats, as well as broader societal effects.

“Companies making this commitment understand that robust red-teaming is essential for building successful products, ensuring public confidence in AI, and guarding against significant national security threats,” according to the regulation.

2. Sharing information across industry and with governments, civil society, and academia on managing AI risks. The information shared will include best practices for safety, attempts to circumvent safeguards, and technical collaboration.

“They commit to establish or join a forum or mechanism through which they can develop, advance, and adopt shared standards and best practices for frontier AI safety, such as the NIST AI Risk Management Framework or future standards related to red-teaming, safety, and societal risks,” the regulation said.

3. Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Model weights are a core component of AI systems, so the companies have agreed they should be released only when intended and only after security risks have been considered.

“This includes limiting access to model weights to those whose job function requires it and establishing a robust insider threat detection program consistent with protections provided for their most valuable intellectual property and trade secrets,” according to the regulation. “In addition, it requires storing and working with the weights in an appropriately secure environment to reduce the risk of unsanctioned release.”

4. Facilitating third-party discovery and reporting of vulnerabilities in AI systems. Under this commitment, the companies will establish bounty systems, contests, or prizes for systems within scope to “incent the responsible disclosure of weaknesses, such as unsafe behaviors,” or will include AI systems in their existing bug bounty programs, the regulation explained.

5. Developing robust technical mechanisms to ensure users know when content is AI-generated, limiting fraud and deception.

As part of this commitment, the companies agree to develop tools or APIs to determine if content was created using their system—except for audiovisual content that is “readily distinguishable from reality or that is designed to be readily recognizable as generated by a company’s AI system,” the regulation explained.

“More generally, companies making this commitment pledge to work with industry peers and standards-setting bodies as appropriate towards developing a technical framework to help users distinguish audio or visual content generated by users from audio or visual content generated by AI,” the regulation said.

6. Publicly reporting AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This public reporting will include assessments of security and societal risks, as well as the safety evaluations and results of adversarial testing conducted to evaluate the model’s fitness for deployment.

7. Prioritizing research on the societal risks that AI systems can pose. This will include avoiding harmful bias and discrimination, as well as protecting privacy.

“Companies commit generally to empowering trust and safety teams, advancing AI safety research, advancing privacy, protecting children, and working to proactively manage the risks of AI so that its benefits can be realized,” according to the regulation.

8. Developing and deploying advanced AI systems to help address society’s greatest challenges. For instance, the companies commit to supporting research and development of how AI can be used for climate change mitigation and adaptation, early cancer detection and prevention, and combatting cyber threats.

“Companies also commit to supporting initiatives that foster the education and training of students and workers to prosper from the benefits of AI, and to helping citizens understand the nature, capabilities, limitations, and impacts of the technology,” the regulation explained.

In a blog post about the commitments, Microsoft Vice Chair and President Brad Smith wrote that by embracing the voluntary regulation, Microsoft is “expanding its safe and responsible AI practices” while working alongside industry leaders.

Microsoft has pledged to make additional commitments to the development of safety, security, and trust in AI systems by supporting a pilot of the National AI Research Resource and the establishment of a national registry of high-risk AI systems.

“Establishing codes of conduct early in the development of this emerging technology will not only help ensure safety, security, and trustworthiness, it will also allow us to better unlock AI’s positive impact for communities across the U.S. and around the world,” Smith wrote.

What’s Next?

The Biden administration announcement comes at a time when regulators around the world are looking at ways to address the development of AI and the technologies that use it—most aggressively across the Atlantic in Europe.

The European Union, for instance, has proposed regulating AI as part of its digital strategy with measures addressing high-risk AI that could negatively affect safety or fundamental rights. The Council of the European Union adopted its common position on the Artificial Intelligence Act in late 2022, and the European Parliament adopted its amendments to the act in June 2023; those texts now serve as negotiating positions for member states, the Parliament, and the European Commission as they work toward a final version of the law later this year.

“The European bill takes a ‘risk-based’ approach to regulating AI, focusing on applications with the greatest potential for human harm,” according to The New York Times. “This would include where AI systems were used to operate critical infrastructure like water or energy, in the legal system, and when determining access to public services and government benefits. Makers of the technology would have to conduct risk assessments before putting the tech into everyday use, akin to the drug approval process.”

The White House said it has already consulted with other countries about the voluntary regulations announced on Friday, including Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Japan, the United Arab Emirates, and the United Kingdom.

“The United States seeks to ensure that these commitments support and complement Japan’s leadership of the G-7 Hiroshima Process—as a critical forum for developing shared principles for the governance of AI—as well as the United Kingdom’s leadership in hosting a Summit on AI Safety, and India’s leadership as chair of the Global Partnership on AI,” the White House added.

Additionally, the Biden administration pledged to work with U.S. allies and partners to create a strong international framework to govern the development and use of AI.

In a statement shared with Security Management, U.S. Senator Mark Warner (D-VA), chair of the Senate Select Committee on Intelligence, said he was glad to see the administration take steps towards addressing the security and trust of AI systems, but that this is just the beginning.

“We must continue to ensure these systems, which are already being adopted and integrated into broader IT systems in areas as wide-ranging as consumer finance and critical infrastructure, are safe, secure, and trustworthy—including through consumer-facing commitments and rules,” Warner explained. “While we often hear AI vendors talk about their commitment to security and safety, we have repeatedly seen the expedited release of products that are exploitable, prone to generating unreliable outputs, and susceptible to misuse.

“These commitments are a step in the right direction, but, as I have said before, we need more than industry commitments. We also need some degree of regulation. That’s why I will continue to work diligently to ensure that vendors prioritize security, combat bias, and responsibly roll out new technologies.”
