AI Companies Will Be Required to Report Safety Tests to U.S. Government
The Biden Administration is implementing new AI regulations that will require all developers of major AI systems to disclose their safety test results to the government.
As part of these new rules, tech companies will be required to notify the government when they train an AI model using a significant amount of computing power. The rules will give the U.S. government access to sensitive data from companies such as Google, Amazon Web Services, and OpenAI.
The National Institute of Standards and Technology has been tasked with developing standards to ensure AI tools are safe and secure before public release. In addition, the Commerce Department will issue guidance on watermarking AI-generated content so that authentic and artificial content can be clearly distinguished.
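The article does not specify what form the watermarking guidance will take, but the simplest kind of provenance marking is a metadata tag embedded in the file itself. The sketch below is a minimal, hypothetical illustration using the Pillow imaging library; the filenames and tag names are assumptions, and any real guidance would likely call for cryptographically robust techniques rather than an easily stripped text tag.

# Minimal illustration of tagging an image as AI-generated via PNG metadata.
# A plain-text tag like this is trivially removable; it only sketches the idea.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.open("generated.png")          # hypothetical AI-generated image
metadata = PngInfo()
metadata.add_text("ai_generated", "true")    # provenance tag (illustrative)
metadata.add_text("generator", "example-model-v1")
image.save("generated_tagged.png", pnginfo=metadata)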
Ben Buchanan, the White House special adviser on AI, said in an interview that the government wants “to know AI systems are safe before they’re released to the public — the president has been very clear that companies need to meet that bar.”
Artificial intelligence has emerged as a leading economic and national security concern for the U.S. government. This is not surprising given the hype surrounding generative AI and the investments and uncertainties it has created in the market.
Three months ago, the President signed an ambitious executive order to manage the fast-evolving technology. The order's proposed rules include guidance for AI development, such as established security standards.
The White House AI Council, which includes top officials from a wide range of federal departments and agencies, met on Monday to review progress on the executive order. The council stated that “substantial progress” has been made toward the mandate to protect Americans from the potential harms of AI systems. The Biden administration is also working actively with international allies, including the European Union, to establish cross-border rules for managing the technology.
Under the new regulations, U.S. cloud companies will be required to determine whether foreign entities are accessing their U.S. data centers to train AI models. The move is aimed at preventing foreign adversaries, such as China, from using U.S. cloud servers to train their models.
The Biden administration published a “Know Your Customer (KYC)” proposal on Monday. The proposal would require cloud computing companies to verify the identity of foreigners who sign up for or maintain accounts that use U.S. cloud computing services. The move is part of a widening tech conflict between Washington and Beijing.
The new regulations could put extra strain on U.S. tech companies such as Amazon and Google, which would need to develop processes to collect their foreign customers' names and IP addresses and to report any suspicious activity to the federal government. The companies would also need to certify compliance annually.
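To make the shape of that obligation concrete, the sketch below shows hypothetical record-keeping a cloud provider might implement: collect identifying details, flag foreign accounts requesting large amounts of compute, and produce a report. Every field name, the country check, and the compute threshold are illustrative assumptions, not anything specified in the actual proposal.

# Hypothetical KYC-style record-keeping sketch. All names and thresholds
# are illustrative assumptions, not requirements from the proposed rule.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CustomerRecord:
    name: str
    country: str                                   # verified country of residence
    ip_addresses: list[str] = field(default_factory=list)
    flagged: bool = False

def is_foreign(record: CustomerRecord) -> bool:
    # Placeholder check: the proposal targets non-U.S. persons.
    return record.country != "US"

def review_signup(record: CustomerRecord, gpu_hours_requested: float) -> dict | None:
    """Flag foreign accounts requesting large compute and build a report."""
    LARGE_COMPUTE_THRESHOLD = 10_000               # assumed value, purely illustrative
    if is_foreign(record) and gpu_hours_requested > LARGE_COMPUTE_THRESHOLD:
        record.flagged = True
        return {
            "customer": record.name,
            "country": record.country,
            "ip_addresses": record.ip_addresses,
            "gpu_hours_requested": gpu_hours_requested,
            "reported_at": datetime.now(timezone.utc).isoformat(),
        }
    return None                                    # no report needed

In practice, identity verification would involve far more than a country field, but the basic workflow the proposal describes is the same: collect identifying details, flag, report, and certify compliance.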
While the self-reporting regulations may offer some protection for U.S. interests and encourage AI developers to be more cautious, it remains unclear how the government will handle companies that report inaccurately or not at all. There are also legal and ethical concerns about the government gaining access to sensitive data.