
Elon Musk and a group of AI experts recently signed an open letter urging a pause on the development of AI systems more powerful than OpenAI's GPT-4. The letter cites the potential risks such systems pose to society and argues that they should be developed only once their positive effects can be ensured and their risks managed. It was issued by the Future of Life Institute, which is funded by the Musk Foundation, Founders Pledge, and the Silicon Valley Community Foundation.
Musk, who co-founded industry leader OpenAI and whose carmaker Tesla uses AI in its Autopilot driver assistance system, has long voiced concerns about AI and called for its regulation. Tesla had to recall more than 362,000 U.S. vehicles last month to update their software after regulators said the driver assistance system could cause crashes.
More than 1,000 people signed the letter, including researchers at DeepMind and AI experts Yoshua Bengio and Stuart Russell. Sam Altman, OpenAI's CEO, did not sign, nor did Sundar Pichai and Satya Nadella, the CEOs of Alphabet and Microsoft, respectively.
The letter urges developers to work with policymakers on governance and to hold off on building more advanced AI until independent experts have established shared safety protocols. It also argues that decisions of such consequence should not be left to unelected tech leaders.
Since its release last year, OpenAI's ChatGPT has prompted other companies to accelerate development of similar language models. The open letter, however, stresses the need to slow down until the potential ramifications are better understood.
The UK government recently proposed a regulatory framework for AI, and Europol, the EU's law enforcement agency, warned about the potential misuse of such systems for phishing, disinformation, and cybercrime. These concerns come as ChatGPT attracts attention from U.S. lawmakers over its impact on national security and education.
In short, the open letter urges caution and deliberate development of advanced AI, so that its positive effects can be ensured and its risks managed.