Vitalik Buterin Warns of Superintelligent AI Risks
Ethereum co-founder Vitalik Buterin has warned of the risks of superintelligent AI and stressed the need for strong defense mechanisms.
Buterin’s remarks come as concerns about AI safety have intensified amid the technology’s rapid development.
Buterin’s plan for AI regulation: liability, pause buttons, and international controls
In a blog post dated January 5, Vitalik Buterin outlined his thinking behind “d/acc, or defensive acceleration,” an approach in which technology is developed to defend rather than to cause harm. This is not the first time Buterin has spoken about the risks associated with AI.
“One way that AI gone wrong could make the world worse is (almost) the worst way: it could actually lead to human extinction,” Buterin explained in 2023.
Buterin has now followed up on his 2023 theory. According to him, superintelligence may be only a few years away.
“It looks like we’re probably only three years away from artificial general intelligence, and another three years from superintelligence. So if we don’t want the world to be destroyed or fall into an irreversible trap, we can’t just accelerate the good; we must also slow down the bad,” Buterin wrote.
To mitigate AI-related risks, Buterin advocates building decentralized AI systems that remain tightly coupled to human decision-making. By ensuring that AI remains a tool in human hands, he argues, the threat of catastrophic outcomes can be minimized.
Buterin then explained how militaries could become the actors responsible for an “AI doom” scenario. Military use of AI is increasing globally, as seen in Ukraine and Gaza. Buterin also believes that any AI regulation enacted into law would likely exempt the military, making militaries a significant threat.
The Ethereum co-founder went on to outline his plan for regulating the use of AI. He said the first step in avoiding AI-related risks is to hold users liable.
“While the connection between how a model is developed and how it is ultimately used is often unclear, users decide exactly how the AI is used,” Buterin explained, emphasizing the user’s role.
If liability rules don’t work, the next step would be a “soft pause” button that would let regulators slow the pace of potentially dangerous progress.
“The goal would be to have the capability to reduce globally available compute by about 90–99% for one to two years at a critical period, buying more time for humanity to prepare.”
He said such a pause could be enforced through AI hardware location verification and registration.
Another approach is to control the AI hardware itself. Buterin explained that AI hardware could be equipped with a chip that governs whether it runs.
The chip would allow an AI system to operate only if it obtains three signatures from international bodies each week. He added that at least one of those bodies should have no military affiliation.
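To make the mechanism concrete, here is a minimal sketch of how such a weekly signature gate might look, assuming a simple model in which each signature carries an agency identifier, a military-affiliation flag, and a timestamp. All names and the verification logic here are illustrative assumptions, not Buterin’s specification; a real chip would rely on cryptographic signature verification rather than a boolean flag.

```python
# Hypothetical sketch of the weekly multi-signature gate Buterin describes:
# the chip runs the AI workload only while it holds fresh signatures from
# three distinct international bodies, at least one of them non-military.
from dataclasses import dataclass
import time

WEEK_SECONDS = 7 * 24 * 3600
REQUIRED_SIGNATURES = 3

@dataclass
class Signature:
    agency: str          # identifier of the signing body (illustrative)
    non_military: bool   # whether the body has no military affiliation
    signed_at: float     # Unix timestamp of the signature
    valid: bool          # stands in for real cryptographic verification

def chip_allows_operation(signatures: list[Signature], now: float) -> bool:
    """Return True only if the chip holds three fresh, valid signatures
    from distinct agencies, at least one of them non-military."""
    fresh = [s for s in signatures
             if s.valid and now - s.signed_at < WEEK_SECONDS]
    # Count each agency at most once.
    agencies = {s.agency: s for s in fresh}
    if len(agencies) < REQUIRED_SIGNATURES:
        return False
    return any(s.non_military for s in agencies.values())

# Usage: two military-affiliated signers plus one civilian body pass the gate.
now = time.time()
sigs = [
    Signature("agency-a", False, now - 3600, True),
    Signature("agency-b", False, now - 7200, True),
    Signature("agency-c", True, now - 60, True),
]
print(chip_allows_operation(sigs, now))  # True
```

The weekly expiry is what makes this a “soft pause” lever: withholding fresh signatures halts operation within days without any remote kill command.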
Still, Buterin acknowledged that his strategy had holes and was only a “temporary stopgap.”