Lack of Political Consensus, Slow Policymaking Could Hamper Development of AI Regulation
MOSCOW (Sputnik), Kirill Krasilnikov - Efforts to regulate AI run up against the fact that governments lack both the agile policymaking structures and the political consensus needed to keep pace with ongoing technological change, experts told Sputnik.
This December, European Union countries and lawmakers reached a provisional agreement on the Artificial Intelligence Act, which would be the world's first comprehensive regulation of AI. It aims "to ensure that artificial intelligence (AI) systems placed on the EU market and used in the Union are safe and respect existing law on fundamental rights and Union values." The deal came on the heels of worldwide fascination and anxiety over recent progress in AI, triggered in particular by the launch of OpenAI's ChatGPT chatbot in 2022. The chatbot has been praised by some for its professional applications, while others wonder about the technology's potential effect on various facets of human life.
"The Future of Life Institute, whose membership is composed of leading technologists, warns against unfettered development that could lead to spread of false narrative, significant replacement of jobs, and possible near term equivalency or even surpassing of human mind capabilities. It calls for governments to institute safety protocols to prevent possible disastrous results that no government can control. The latest ChatGPT development may exacerbate possible negative outcomes," Rosario Girasa, distinguished professor at Pace University's Lubin School of Business, said.
At the same time, Daniel Linna, senior lecturer and director of law and technology initiatives at the Northwestern Pritzker School of Law and the McCormick School of Engineering, noted that policymakers and regulators should weigh not only AI-related risks but also the potential benefits of the new technology.
"Organizations should be expected to test AI systems for accuracy and bias, be transparent, and conduct impact assessments, identifying and mitigating the harms that could be caused by an AI system. Policymakers should seize the opportunity to provide AI education and training to workers, students, and the public," Linna suggested.
Vassilis Galanos, a teaching and research fellow at the University of Edinburgh, observed that current AI regulation is focused on "frontier AI," meaning cutting-edge general-purpose models that could potentially pose great risks. Nevertheless, it should also cover low-risk systems that may still have major effects on society, according to the expert.
"Many of the systems that are loosely regulated because they are considered less risky can cause plenty of harm. It’s essential that countries and international organisations prioritise the development of regulatory frameworks that enforce ethical standards, safeguard privacy, and ensure that AI systems do not perpetuate discrimination or harm for underserved communities," Galanos stated.
Leviathan and AI
Despite growing concern among governments worldwide about potential AI-related risks, the rapid progress in artificial intelligence makes it difficult for states to devise appropriate regulations and guidelines.
"Given the rapid pace of AI advancement, there is a substantial risk of regulatory efforts lagging behind, particularly because modern governments may not have the existing expertise or agile governance structures needed to respond quickly," Galanos said, adding that "it's critical for global regulatory measures to be both proactive and adaptive, by continuously evolving with technological advancements and by engaging a broader range of stakeholders in the regulatory process."
In a similar vein, Girasa underscored the tendency of governments to act slowly, especially at times of deep ideological division between political parties.
"More autocratic governments, such as China, do have the capability of moving more quickly to oppose perceived present dangers. In the US, there is a heightened danger of inaction because the country is almost evenly divided politically thereby making necessary regulation very difficult to muster," Girasa continued.
Linna, for his part, believes the alleged lack of expertise within governments is not the real issue; rather, it is "the failure of governments to make it a priority to capture the benefits of AI and mitigate the risks of AI." He also highlighted the substantial body of so-called soft law, meaning the non-binding guidelines, codes of conduct, and standards that authorities can draw on.
"Modern governments are well equipped to bring in experts to help create policy and regulations for emerging technologies. A robust body of ‘soft law’ for the design, development, deployment, and validation of AI systems has been developed through the work of academics, professionals, and a wide range of organizations. Governments can take ‘soft law’ principles and evaluation standards and turn them into concrete laws and regulations," Linna explained.
Global Governance
The signing of the Bletchley Declaration at the United Kingdom's AI Safety Summit in November 2023 confirmed the willingness of many states to cooperate in mitigating the potential risks posed by AI. However, authorities in different parts of the world take different approaches to managing those risks. The United States, for example, despite recent steps by the Biden administration, such as the adoption of the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, still lacks comprehensive federal legislation, while the European Union has pursued a more coordinated, legislative approach.
"Divergent regulatory approaches between the United States, the European Union, China, and India could hinder collaborative risk management efforts across the Atlantic and the Pacific, particularly when it comes to the definition of 'low risk' AI systems and the protection of vulnerable groups. Transatlantic and transpacific alignment would be strengthened by creating shared principles and standards regulated by independent international bodies that consider the nuances of different AI applications and their respective risk profiles," Galanos said.
Still, as Linna pointed out, countries will opt for different policy and regulatory approaches for AI even if they agree on global principles and rules.
"What the European Union, United States, and other countries have done so far will not significantly hamper the possibilities for transatlantic or global cooperation on AI risk management. The bigger obstacles are the usual geopolitical challenges and AI competition. But in terms of AI regulation and global agreements, we are in the very early stages of historical developments that will play out over many years," Linna added.
Meanwhile, Girasa noted the difficulty of predicting what will happen next, since no one can know what innovations will emerge further down the road.
"Who could have predicted the development of blockchain, AI and its deep learning capabilities, ChatGPT, and the many other advances now occurring in laboratories worldwide and yet to be made known [to the] public? Quantum computing undeniably will also have an enormous impact within the decade," the expert concluded.