What happens when Mark Zuckerberg, Elon Musk, Bill Gates, Sam Altman, and Sundar Pichai walk into a Senate hearing room together?
These giants of the tech world descended on Capitol Hill Wednesday to discuss public policy initiatives to try to minimize the risks of artificial intelligence. One of the proposals under consideration is the creation of a federal regulatory commission to monitor AI developments. Altman has already spoken publicly in favor of such an approach. CEOs in favor of regulation? What is going on?
Trying to regulate AI is like trying to nail Jell-O to the wall. It will never work. The attempt to direct this fast-changing category of mathematical tools through legislation or traditional regulatory mechanisms, no matter how well-intentioned, is unlikely to be successful and is much more likely to have negative unanticipated consequences, including delaying and misdirecting technical development. AI is the application of a set of mathematical algorithms, and you can’t regulate math.
If the commission of a crime is facilitated using an automobile, a telephone, a hammer, or a knife, the appropriate response is to focus on the criminal and the criminal act itself. Except for guns, the tools used in a crime are not generally the subject of regulation.
Concerns have been raised that AI may be involved in financial crimes, identity theft, unwelcome violations of personal privacy, racial bias, the dissemination of fake news, plagiarism, and physical harm to humans. There is an extensive legal system for identifying and adjudicating such matters. Attempting to outlaw potential behaviors before they take place is folly; a new regulatory agency to monitor and control high-tech “hammers and knives” that may be used in criminal activity is ill-advised.
Instead of instituting regulatory measures against AI, perhaps those who feel the regulatory compulsion that has found its way into the current debate, driven by the dramatic successes and awkward errors of the latest generation of generative AI tools, should consider a few complications. It is not possible to identify “artificial intelligence” as distinct from virtually any of the existing digital information processing technologies. The technological frontier is international in character: restrictions in the United States could simply push development to a country like China, where regulations might be weaker. And generative AI has some of the properties of constitutionally protected public speech, which makes it a particularly complex candidate for regulation. The American legal tradition avoids prior restraint and focuses instead on liability for resulting harms.
AI is an extremely fast-moving and increasingly diversified field, which makes it a poor match for slow-moving regulatory and judicial processes. Congress included Section 230 in the Communications Decency Act of 1996 to protect communication platforms like Facebook from liability when they are used by malevolent parties. The Supreme Court has, so far, sustained the constitutionality of this principle. The same principle should apply to AI-empowered communication platforms: the focus belongs on harmful results, not on any of the potentially thousands of enabling technologies.
Those proposing regulatory responses to the prospect of AI use the metaphors of “guardrails” and “emergency brakes.” These catchphrases sound reasonable, but they are physical metaphors. They make sense for vehicles and streets; they defy meaningful application when the goal is to direct the use of mathematical algorithms. Would anyone propose a limit on computer processing speeds or on the size of training databases?
This sort of centralized command-and-control thinking is just what has plagued China’s lagging AI research and development. Such an enterprise has very little chance of actually reducing risk while inevitably hampering technical development.
Consider what a new regulatory agency charged with addressing the potential risks of AI technologies would actually do. It could require impact statements before permission was granted to experiment with neural network AI models. It could set limits on the computational power utilized. It could require public disclosure of otherwise commercially protectable algorithms. None of this would prevent misuse; it would simply slow the work of those building the technology in the open.
We don’t need new rules and laws to criminalize AI. We need to support research and development — and when bad guys get involved use the laws we already have.
W. Russell Neuman, a founding faculty member of the MIT Media Laboratory, is professor of media technology at New York University. His new book is “Evolutionary Intelligence: How Technology Will Make Us Smarter.”