The Biden administration announced Friday that seven of the nation’s top artificial intelligence developers have agreed to guidelines aimed at ensuring the "safe" deployment of AI.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI all agreed to the guidelines and will participate in a Friday afternoon event with President Biden to tout the voluntary agreement.

"Companies that are developing these emerging technologies have a responsibility to ensure their products are safe," the White House said in a Friday morning statement. "To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety."

The White House on Friday announced new voluntary AI guidelines that seven developers have agreed to follow. President Biden is expected to speak on AI with representatives of these companies on Friday. Photographer: Samuel Corum/Sipa/Bloomberg via Getty Images

Under the voluntary guidelines, companies agree to ensure their AI systems are "safe" before they are released to the public. That involves a commitment to "internal and external security testing" of these systems before they are released.

"This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects," the White House said.

The companies also agreed to share safety best practices not only across the industry but also with the government and academics.

OpenAI, led by CEO Sam Altman, is one of the companies that will follow the voluntary guidelines on AI development. Photographer: Eric Lee/Bloomberg via Getty Images

The seven companies agreed to invest in cybersecurity and "insider threat safeguards" to protect unreleased AI systems, and to allow "third-party discovery and reporting of vulnerabilities" in their AI systems.

Another major component of the White House-brokered deal is steps to "earn the public’s trust." According to the announcement, the companies agreed to develop tools to help people know when content is AI-generated, such as a "watermarking" system.

"This action enables creativity with AI to flourish but reduces the dangers of fraud and deception," the White House said.

President Biden's administration has pushed for AI guidelines that ensure the ‘safe’ development of AI. (AP Photo/Damian Dovarganes)

Companies will also report AI systems’ capabilities and limitations, research the risks AI can pose, and deploy AI to "help address society’s greatest challenges," such as cancer prevention and "mitigating climate change."

Senate Majority Leader Chuck Schumer, D-N.Y., who has been looking for ways to regulate AI in the Senate, welcomed the White House announcement but said some legislation will still be needed.

"To maintain our lead, harness the potential, and tackle the challenges of AI effectively requires legislation to build and expand on the actions President Biden is taking today," he said. "We will continue working closely with the Biden administration and our bipartisan colleagues to build upon their actions and pass the legislation that’s needed."