An Australian MP says the risks associated with artificial intelligence need to be thoroughly investigated as they can pose a threat to human life.
In a speech to parliament on February 6, Labor MP Julian Hill said ChatGPT had the potential to revolutionize the world, but warned it could do serious damage if AI surpassed human intelligence.
“Once you start thinking about it, it doesn’t take long for you to realize that the destructive and catastrophic risks from unrestrained AGI are real, plausible, and easy to imagine,” he said.
According to Hill, risk analysts grappling with threats like asteroids, climate change, giant volcanoes, nuclear devastation, solar flares and deadly pandemics are increasingly putting artificial general intelligence (AGI) at the top of their list of concerns.
“AGI has the potential to revolutionize the world in ways we cannot even imagine, but if AGI surpasses human intelligence and its goals and motivations do not align with ours, it could cause serious harm to mankind,” he said.
“People much smarter than me are increasingly concerned that AGI may not remain under human control, or that malicious actors could exploit it for mass destruction.”
Hill also noted that militaries around the world are developing AGI, as it could transform warfare and render current “defensive capabilities” obsolete.
“An AGI-enabled adversary can conquer Australia or unleash societal-level destruction without being bound by globally agreed norms,” he said.
AI programs are banned in schools in New South Wales, Queensland, Tasmania, Victoria and Western Australia.
Part of the MP’s speech was written by ChatGPT
To explain his concerns, Hill said he used ChatGPT to write part of his speech.
The program took just 90 seconds to summarize recent media reports of Australian students using artificial intelligence to cheat, and Hill said the paragraphs it produced were “pretty good.”
ChatGPT wrote: “AI technologies, such as smart software that can write essays and generate answers, are becoming more accessible to students, allowing them to complete assignments and tests without actually understanding the material. This raises understandable concerns for teachers worried about the impact on the integrity of the education system.”
ChatGPT also wrote that students are effectively bypassing their education by using AI to gain an unfair advantage.
“This can lead to a lack of critical thinking skills and an overall decline in the quality of teaching. Additionally, teachers may not be able to detect whether students have used AI to complete assignments, which makes cheating difficult to identify and deal with. Using AI to cheat also raises ethical questions about students’ responsibility to learn and understand the material being tested.”

Hill warned that the quality of the response means humanity needs to stay one step ahead.
“If humans can control AGI before the intelligence explosion, it has the potential to transform science, the economy, the environment and society, and to drive progress in all areas of human endeavor,” he said, calling for an inquiry into the issue or for international cooperation.
“The key message I want to convey is that we have to start now.”
Concerns in the AI community
Hill’s speech came after the International Conference on Machine Learning (ICML) prohibited authors from using chatbots to write scientific papers.
“Over the past few years, we have observed and been part of rapid progress in large language models (LLMs), both in research and deployment. As many, including us, have noticed, LLMs released in the last few months, such as OpenAI’s chatGPT, can now generate text snippets that are often hard to distinguish from human-written text,” ICML said.
“Such rapid progress often has unintended consequences.
“Unfortunately, we have not had enough time to observe, investigate and consider its implications for the review and publication process, which is why we decided to prohibit it.”
Pentagon Adds AGI to Watch List
Meanwhile, the US Defense Information Systems Agency (DISA) has put AGI on its watch list.
DISA’s watch list also includes 5G, zero trust, digital defense, post-quantum cryptography, edge computing and telepresence.
DISA chief technology officer Stephen Wallace said at an event organized by the Armed Forces Communications and Electronics Association International (AFCEA) that the agency is interested in the technology.

According to Defense News, Wallace said on January 25: “The ability to generate this kind of content is a very interesting capability.
“We are beginning to consider how [generative AI] will actually change DISA’s mission in this area and what it will offer in the future.”