Executives from two major AI companies asked senators on Tuesday to pass regulations for the ground-breaking but nascent technology as rapid innovation raises ethical, legal and national security questions.
Speaking to a Senate Judiciary subcommittee, OpenAI Chief Executive Officer Sam Altman praised the potential of the new technology, which he said could solve humanity’s biggest problems. But he also warned that artificial intelligence is powerful enough to change society in unpredictable ways, and “regulatory intervention by governments will be critical to mitigate the risks.”
“My worst fear is that we, the technology industry, cause significant harm to the world,” Altman said. “If this technology goes wrong, it can go quite wrong.”
IBM’s Chief Privacy and Trust Officer Christina Montgomery focused on a risk-based approach and called for “precision regulation” on how AI tools are used, rather than how they’re developed.
It’s unclear whether Congress is up to the task. Political gridlock and heavy lobbying from big technology firms have complicated efforts in Washington to set basic guardrails for challenges including data security and child protections for social media. And as senators pointed out in their questions, the deliberative process of Congress often lags far behind the pace of new tech advancements.
Demonstrating AI’s power to deceive, Senator Richard Blumenthal, the Connecticut Democrat who chairs the panel, played an AI-written and produced recording that sounded exactly like him during his opening statement. While he urged AI innovators to work with regulators on new restrictions, he recognized that Congress hasn’t passed adequate protections for existing technology.
“Congress has a choice now. We had the same choice when we faced social media,” Blumenthal said. “Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.”
As Tuesday’s hearing got underway, senators questioned the potential for dangerous disinformation and the biases inherent in models trained on internet content. They raised the risks that AI-fabricated content poses for the democratic process, while also fretting that global adversaries like China could surpass US capabilities.
Blumenthal asked about “hallucinations,” instances in which AI technology gets information wrong. Tennessee Republican Marsha Blackburn asked about protections for singers and songwriters in her home state, drawing a pledge from Altman to work with artists on rights and compensation.
Missouri Senator Josh Hawley, the ranking Republican on the subcommittee, asked whether AI will prove to be as transformative as the printing press, disseminating knowledge more widely, or as destructive as the atomic bomb.
“To a certain extent, it’s up to us here and to us as the American people to write the answer,” Hawley said. “What kind of technology will this be? How will we use it to better our lives?”
Much of the initial discussion focused on generative AI, which can produce images, audio and text that seem human-crafted. OpenAI has driven many of these developments by introducing products like ChatGPT, which can converse or produce human-like, but not always accurate, blocks of text, as well as DALL-E, which can produce fantastical or eerily realistic images from simple text prompts.
But there are boundless other ways that machine learning is being deployed across the modern economy. Recommendation algorithms on social media rely on AI, as do programs that analyze large data sets or weather patterns.
Required Registration
The Biden administration has put forth several nonbinding guidelines for artificial intelligence. The National Institute of Standards and Technology in January released a voluntary risk management framework to manage the most high-stakes applications of AI. The White House earlier this year published an “AI Bill of Rights” to help consumers navigate the new technology.
Federal Trade Commission Chair Lina Khan pledged to use existing law to guard against abuses enabled by AI technology. The Department of Homeland Security last month created a task force to study how AI can be used to secure supply chains and combat drug trafficking.
In Tuesday’s hearing, Altman focused his initial policy recommendations on required registration for AI models of a certain sophistication. He said companies should be required to get a license to operate and conduct a series of tests before releasing new AI models.
Montgomery said policymakers should require AI products to be transparent about when users are interacting with a machine. She also touted IBM’s AI ethics board, which provides internal guardrails that Congress has yet to set.
“It’s often said that innovation moves too fast for government to keep up,” Montgomery said. “But while AI may be having its moment, the moment for government to play its proper role has not passed us by.”