Governments across the world race to regulate AI tools
Rapid advances in artificial intelligence (AI) such as Microsoft-backed OpenAI’s ChatGPT are complicating governments’ efforts to agree laws governing the use of the technology.
Here are the latest steps national and international governing bodies are taking to regulate AI tools:
Australia – Seeking input on regulations
The government is consulting Australia’s main science advisory body and is considering next steps, a spokesperson for the industry and science minister said in April.
Britain – Planning regulations
Britain’s competition regulator said on Thursday it would start examining the impact of AI on consumers, businesses and the economy and whether new controls were needed.
Britain said in March it planned to split responsibility for governing AI between its regulators for human rights, health and safety, and competition, rather than creating a new body.
China – Planning regulations
China’s cyberspace regulator in April unveiled draft measures to manage generative AI services, saying it wanted firms to submit security assessments to authorities before they launch offerings to the public.
Beijing will support leading enterprises in building AI models that can challenge ChatGPT, its economy and information technology bureau said in February.
European Union – Planning regulations
Members of the European Parliament reached a preliminary deal on the draft of the EU’s Artificial Intelligence Act, which could pave the way for the world’s first comprehensive laws governing the technology.
The draft, which will be voted on by a committee of lawmakers on May 11, identified copyright protection as central to the effort to keep AI in check.
Members of European Parliament raced to update the rules to catch up with an explosion of interest in generative AI, Reuters interviews with four lawmakers and two other sources found.
The European Data Protection Board, which unites Europe’s national privacy watchdogs, said in April it had set up a task force on ChatGPT, a potentially important first step towards a common policy on setting privacy rules on AI.
The European Consumer Organisation (BEUC) has joined in the concern about ChatGPT and other AI chatbots, calling on EU consumer protection agencies to investigate the technology and the potential harm to individuals.
France – Investigating possible breaches
France’s privacy watchdog CNIL said in April it was investigating several complaints about ChatGPT after the chatbot was temporarily banned in Italy over a suspected breach of privacy rules.
France’s National Assembly approved in March the use of AI video surveillance during the 2024 Paris Olympics, overriding warnings from civil rights groups.
G7 – Seeking input on regulations
The Group of Seven advanced nations should adopt “risk-based” regulation on AI, G7 digital ministers said after a meeting on April 29-30 in Japan.
Ireland – Seeking input on regulations
Generative AI needs to be regulated, but governing bodies must work out how to do so properly before rushing into prohibitions that “really aren’t going to stand up,” Ireland’s data protection chief said in April.
Italy – Lifted ban
ChatGPT is available again to users in Italy, a spokesperson for OpenAI said on April 28.
Italy temporarily banned ChatGPT in March after its data protection authority raised concerns over possible privacy violations and over OpenAI’s failure to verify that users were aged 13 or above, as it had requested.
Spain – Investigating possible breaches
Spain’s data protection agency said in April it was launching a preliminary investigation into potential data breaches by ChatGPT. It has also asked the EU’s privacy watchdog to evaluate privacy concerns surrounding ChatGPT, the agency told Reuters in April.
US – Seeking input on regulations
The US Federal Trade Commission’s chief said on Tuesday the agency was committed to using existing laws to keep in check some of the dangers of AI, such as enhancing the power of dominant firms and “turbocharging” fraud.
Senator Michael Bennet introduced a bill on April 27 that would create a task force to look at US policies on AI, and identify how best to reduce threats to privacy, civil liberties and due process.
The Biden administration said in April it was seeking public comments on potential accountability measures for AI systems.
President Joe Biden had earlier told science and technology advisers that AI could help to address disease and climate change, but it was also important to address potential risks to society, national security and the economy.