By Jeff Mason and Trevor Hunnicutt
(Reuters) - U.S. President Joe Biden is seeking to reduce the risks that artificial intelligence (AI) poses to consumers, workers, minority groups and national security with a new executive action on Monday.
The order, which he signed at the White House, requires developers of AI systems that pose risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government, in line with the Defense Production Act, before they are released to the public.
It also directs agencies to set standards for that testing and address related chemical, biological, radiological, nuclear, and cybersecurity risks.
"To realize the promise of AI and avoid the risk, we need to govern this technology," Biden said. "In the wrong hands AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run."
The move is the latest step by the administration to set parameters around AI as it makes rapid gains in capability and popularity in an environment of, so far, limited regulation. The order prompted a mixed response from industry and trade groups.
Bradley Tusk, CEO at Tusk Ventures, a venture capital firm with investments in tech and AI, welcomed the move. But he said tech companies would likely shy away from sharing proprietary data with the government over fears it could be provided to rivals.
"Without a real enforcement mechanism, which the executive order does not seem to have, the concept is great but adherence may be very limited," Tusk said.
NetChoice, a national trade association that includes major tech platforms, described the order as an "AI Red Tape Wishlist" that will end up "stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation."
The new order goes beyond voluntary commitments made earlier this year by AI companies such as OpenAI, Alphabet and Meta Platforms, which pledged to watermark AI-generated content to make the technology safer.
As part of the order, the Commerce Department will "develop guidance for content authentication and watermarking" for labeling items that are generated by AI, to make sure government communications are clear, the White House said in a release.
The Group of Seven industrial countries will agree on Monday to a code of conduct for companies developing advanced artificial intelligence systems, according to a G7 document.
A senior administration official, briefing reporters ahead of the official unveiling of the order, pushed back against criticism that Europe had been more aggressive at regulating AI than the United States.
The official said the White House believed that legislative action from Congress was also necessary for AI governance. Biden is calling on Congress to pass legislation in particular on data privacy, the White House said.
"While this is a good step forward, we need additional legislative measures," Senator Mark Warner, a Democrat who chairs the Senate Select Committee on Intelligence, said in a statement.
U.S. officials have warned that AI can heighten the risk of bias and civil rights violations, and Biden's executive order seeks to address that by calling for guidance to landlords, federal benefits programs and federal contractors "to keep AI algorithms from being used to exacerbate discrimination," the release said.
The order also calls for the development of "best practices" to address harms that AI may cause workers, including job displacement, and requires a report on labor market impacts.
(Reporting by Jeff Mason and Trevor Hunnicutt; additional reporting by Alexandra Alper, Krystal Hu, Katie Paul, John Kruzel, David Shepardson and Diane Bartz; editing by Grant McCool, Jonathan Oatis, Bill Berkrot and Deepa Babington)