ChatGPT developer OpenAI argues that a global regulatory body needs to be established to prepare for the emergence of 'superintelligent AI,' based on the concern that 'AI will exceed the skill level of experts in most fields within 10 years.'

On May 22, 2023 local time, OpenAI, the AI research organization behind the conversational AI 'ChatGPT' and the large language model 'GPT-4,' predicted the emergence of 'superintelligence,' AI that will carry out highly productive activity beyond the skill level of human experts, and proposed that an international regulatory body be established to promote the safe development of AI.

Governance of superintelligence

https://openai.com/blog/governance-of-superintelligence

On May 22, 2023, OpenAI CEO Sam Altman, together with Greg Brockman and Ilya Sutskever, proposed the establishment of an international oversight and regulatory body for AI research.

OpenAI writes that 'within the next decade, AI systems will exceed the skill level of experts in most domains and carry out as much productive activity as one of today's largest corporations.' It calls this future AI 'superintelligence' and expects it to be more powerful than any technology humanity has dealt with so far.

OpenAI also argues that superintelligence carries enormous potential benefits as well as risks, and that special treatment and coordination will be needed to mitigate those risks.

Regarding how superintelligence should be developed, OpenAI argues that 'major governments around the world could set up a joint project to limit the rate of growth of frontier AI so that it can be integrated into society safely and smoothly,' and that 'individual companies should be held to an extremely high standard of acting responsibly.'

They also advocate the establishment of an international regulatory body, similar to the International Atomic Energy Agency, that would inspect and audit systems, test them for compliance with safety standards, and place restrictions on deployment and required levels of security. OpenAI stresses that such a body should focus on reducing existential risk, not on questions that should be left to individual countries and governments, such as defining what an AI should be allowed to say.

Furthermore, they argue that 'the technical capability to make a superintelligence safe' is needed, describing this as an open research question into which OpenAI and others have been putting a great deal of effort.

On the other hand, OpenAI states, 'It is important to allow companies and open-source projects to develop models below a significant capability threshold without restrictions such as licensing or audits.' Elon Musk, an early investor in OpenAI who later left its board after relations deteriorated, replied with a '🎯' (bullseye) to a tweet criticizing OpenAI, itself the developer of high-performance models such as GPT-4, for this statement.

OpenAI argues that today's AI systems, despite their risks, 'create enormous value for society,' and that AI technologies falling short of superintelligence should not be subject to the kind of regulation, audits, and standards it proposes for superintelligence.

Regarding superintelligence itself, OpenAI says, 'We believe it will lead to a far better world than what AI has achieved to date, in areas such as education, creative work, and productivity. The world faces many problems that we will need much more help to solve, and this technology can help solve them.'


OpenAI also notes that the cost of building a superintelligence decreases year by year and the number of organizations pursuing AI research is growing rapidly, making it difficult to prevent the creation of a future superintelligence. Regarding safe development, the authors argue, 'The upsides of superintelligence are enormous, and building it is a core goal of AI development organizations like OpenAI. Stopping it would require something like a global oversight regime, and even that is not guaranteed to work. So we have to get it right.'

Posted by log1r_ut