Experts warn of AI advancements’ societal impact and call for global regulation


Rapid advancements in artificial intelligence have the potential to exacerbate societal problems and even pose an existential threat to human life, increasing the need for global regulation, AI experts told the Reuters MOMENTUM conference this week.

The explosion of generative AI – which can create text, photos and videos in response to open-ended prompts – in recent months has spurred both excitement about its potential as well as fears it could make some jobs obsolete, upend economies and even possibly overpower humans.

“We are flying down the highway in this car of AI,” said Ian Swanson, CEO and co-founder of Protect AI, which helps businesses secure their AI and machine learning systems, during a Reuters MOMENTUM panel on Tuesday.

“So what do we need to do? We need to have safety checks. We need to do the proper basic maintenance and we need regulation.”

Regulators need look no further than social media platforms to understand how the unchecked growth of a new industry can lead to negative consequences, such as the creation of information echo chambers, said Seth Dobrin, president of the Responsible AI Institute.

“If we expand the digital divide … that’s going to lead to disruption of society,” Dobrin said. “Regulators need to think about that.”

Regulation is already being prepared in several countries to tackle issues around AI.

The European Union’s proposed AI Act, for example, would classify AI applications into different risk levels, banning uses considered “unacceptable” and subjecting “high-risk” applications to rigorous assessments.

US lawmakers last month introduced two separate AI-focused bills, one that would require the US government to be transparent when using AI to interact with people and another that would establish an office to determine if the United States remains competitive in the latest technologies.

One emerging threat that lawmakers and tech leaders must guard against is the possibility of AI making nuclear weapons even more powerful, Anthony Aguirre, founder and executive director of the Future of Life Institute, said in an interview at the conference.

Developing ever more powerful AI also risks eliminating jobs to the point where it may become impossible for humans to simply learn new skills and move into other industries, Aguirre said.

“We’re going to end up in a world where our skills are irrelevant,” he said.

The Future of Life Institute, a nonprofit aimed at reducing catastrophic risks from advanced artificial intelligence, made headlines in March when it released an open letter calling for a six-month pause on the training of AI systems more powerful than OpenAI’s GPT-4. It warned that AI labs have been “locked in an out-of-control race” to develop “powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

“It seems like the most obvious thing in the world not to put AI into nuclear command and control,” Aguirre said. “That doesn’t mean we won’t do that, because we do a lot of unwise things.”
