Guardrails are needed to control AI growth


The prime minister said “guardrails” should be put in place to ensure that artificial intelligence (AI) is developed and used “safely and reliably”.

Rishi Sunak told journalists accompanying him in Japan that he expected to discuss AI with world leaders at the G7 summit in Hiroshima.

“There are clear benefits of artificial intelligence in economic growth, social transformation and improved public services if it is used safely,” he said.

“But like I said, it has to be done safely and securely with guardrails in place, and that’s our regulatory approach.”

With the explosion of AI, and concerns over its impact on jobs, industry, copyright, education, and privacy, among other areas, regulators around the world are stepping up their scrutiny of the technology.

On 4 May, the UK’s competition watchdog, the Competition and Markets Authority, launched a review of the AI market to consider the opportunities and risks of the technology, as well as the competition rules and consumer protections that may be required.

“There is recognition that AI is a problem that cannot be solved by one country acting unilaterally,” said the prime minister’s official spokesperson, adding that, given the nature of AI, the UK’s approach is meant to be agile and iterative.

“The starting point for us is safety and making sure the public is comfortable with how AI is being used on their behalf.”


Sunak’s comments come a day after Britain’s largest broadband and mobile provider, BT Group, said it would cut up to 55,000 jobs by the end of the decade.

About one-fifth of the job losses are attributed to the telecoms giant’s plan to shift to AI and automated services, as customers come to rely more on online and app-based channels than on call centres for account services and upgrades.

Former government chief scientific adviser Sir Patrick Vallance told a parliamentary committee on May 3 that the rapid development of AI “has surprised everyone”, including those very close to the field.

He warned that the technology would have a “massive impact on employment,” and said the impact “could be as great as the Industrial Revolution.”

He also said it’s important to keep track of “what these things are going to be and what the risks are with that when they actually start doing unexpected things.”

Risk to Humanity

AI has taken the world by storm in recent months, with ChatGPT coming to the fore since its release late last year.

Following the release of the latest version of ChatGPT in March, a number of AI experts signed an open letter published by the non-profit Future of Life Institute, warning that the technology poses “significant risks to society and humanity”.

One of the signatories, Tesla CEO Elon Musk, has been outspoken about his concerns over AI in general, saying it poses serious risks to human civilisation.

“AI is arguably more dangerous than, say, mismanaged aircraft design and production maintenance, or bad car production, in the sense that it has the potential of civilisation destruction,” he told Fox News in a recent interview.

British computer scientist Geoffrey Hinton, dubbed the “Godfather of AI”, recently stepped down from his position as vice-president and engineering fellow at Google, deciding to join dozens of other experts in the field in speaking up about the threats and risks of AI.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton, 75, said in an interview with The New York Times.

PA Media contributed to this report.