
AI, compared to nuclear weapons, could lead to human extinction: report

The US government has a “clear and urgent need” to act, as rapidly advancing artificial intelligence could lead to human extinction through weaponization and loss of control, according to a government-commissioned report.

The report, obtained by TIME Magazine and titled “An Action Plan to Increase the Safety and Security of Advanced AI,” states that “the rise of advanced AI and AGI has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.”

“Given the growing risk to national security posed by rapidly expanding AI capabilities from weaponization and loss of control — and particularly, the fact that the ongoing proliferation of these capabilities serves to amplify both risks — there is a clear and urgent need for the U.S. government to intervene,” read the report, issued by Gladstone AI Inc.

The report proposed a blueprint for intervention developed over 13 months, during which the researchers spoke with more than 200 people, including officials from the US and Canadian governments, major cloud providers, AI safety organizations, and security and computing experts.

[Photo: Robot from the “Terminator” film. ©Paramount/Courtesy Everett Collection]

The plan begins with establishing interim advanced AI safeguards before formalizing them into law. The safeguards would then be internationalized.

Some measures could include a new AI agency capping the computing power used to train AI models, requiring AI companies to get government permission to deploy new models above a certain threshold, and considering a ban on publishing the inner workings of powerful AI models, such as through open-source licensing, TIME reported.

[Illustration: Robot thinking. phonlamaiphoto – stock.adobe.com]

The report also recommended that the government tighten controls on the manufacture and export of AI chips.