
The OpenAI Superalignment team develops new control techniques for super-intelligent AI

OpenAI says it is making progress toward managing extremely intelligent AI systems, according to a recent WIRED report. Its Superalignment team, led by OpenAI chief scientist Ilya Sutskever, has developed an approach for controlling the behavior of AI models as they grow ever more capable.

The Superalignment team, established in July, is focused on ensuring that AI remains safe and controllable as it reaches and then exceeds human intelligence. "AGI is fast approaching," Leopold Aschenbrenner, a researcher at OpenAI, told WIRED. "We're going to see superhuman models with enormous capabilities, and they could be extremely dangerous, and we don't yet have the methods to control them."

The new research paper from OpenAI outlines a form of weak-to-strong supervision, in which a less capable AI model guides the behavior of a more advanced one. The technique aims to preserve the stronger model's capabilities while ensuring that it still follows safety and ethical guidelines, and it is presented as an essential step toward managing a possible superhuman AI.
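To make the idea concrete, here is a minimal toy sketch of weak-to-strong supervision, not OpenAI's actual setup: a small "weak" model trained on limited ground truth produces labels that are then used to train a larger "strong" model, and the question is whether the student can outperform its imperfect supervisor. The dataset and model choices below are illustrative stand-ins.

```python
# Toy sketch of weak-to-strong supervision (illustrative, not OpenAI's code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# The weak supervisor sees only a small slice of ground truth.
X_weak, X_rest, y_weak, y_rest = train_test_split(X, y, train_size=0.1, random_state=0)
X_strong, X_test, _, y_test = train_test_split(X_rest, y_rest, test_size=0.3, random_state=0)

weak = LogisticRegression(max_iter=1000).fit(X_weak, y_weak)

# The strong model never sees ground truth -- only the weak model's (imperfect) labels,
# analogous to GPT-2-level supervision of a GPT-4-level model in the paper.
weak_labels = weak.predict(X_strong)
strong = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=300,
                       random_state=0).fit(X_strong, weak_labels)

print("weak supervisor accuracy: %.3f" % weak.score(X_test, y_test))
print("strong student accuracy:  %.3f" % strong.score(X_test, y_test))
```

In this toy setting the stronger model often recovers some of the accuracy its supervisor lacks, which is the basic effect the paper studies at much larger scale.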

The research used OpenAI's older GPT-2 text generator to supervise the far more capable GPT-4. The researchers tested two strategies for preserving GPT-4's performance under this weak supervision: the first trained progressively larger intermediate models, while the second added an algorithmic modification to how GPT-4 learns from the weak labels. The second approach proved more effective, although the researchers acknowledge that neither method guarantees perfect control of the stronger model's behavior.
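The published weak-to-strong work describes an auxiliary confidence loss as the key algorithmic change. The sketch below is one simplified, hedged interpretation of that idea: the training objective blends the weak supervisor's labels with the strong model's own hardened predictions. The function name, tensors, and alpha weighting are illustrative assumptions, not OpenAI's implementation.

```python
# Simplified sketch of an auxiliary confidence loss (illustrative assumption).
import torch
import torch.nn.functional as F

def weak_to_strong_loss(strong_logits, weak_labels, alpha=0.5):
    """Blend weak-supervisor labels with the strong model's own confident predictions.

    strong_logits: (batch, num_classes) raw outputs of the strong model
    weak_labels:   (batch,) class indices predicted by the weak supervisor
    alpha:         weight on the strong model's self-labels
    """
    # Cross-entropy against the (possibly wrong) weak labels.
    ce_weak = F.cross_entropy(strong_logits, weak_labels)

    # Harden the strong model's own predictions and treat them as targets,
    # encouraging it to stay confident where it disagrees with the weak supervisor.
    hard_self_labels = strong_logits.argmax(dim=-1).detach()
    ce_self = F.cross_entropy(strong_logits, hard_self_labels)

    return (1.0 - alpha) * ce_weak + alpha * ce_self

# Tiny usage example with random tensors standing in for real model outputs.
logits = torch.randn(8, 2, requires_grad=True)
weak_labels = torch.randint(0, 2, (8,))
loss = weak_to_strong_loss(logits, weak_labels, alpha=0.25)
loss.backward()
print(float(loss))
```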

Industry response and future directions
Dan Hendrycks, director of the Center for AI Safety, applauded OpenAI's proactive approach to controlling superhuman AI. The Superalignment team's work is viewed as a significant first step, but further research and development will be needed to ensure the control methods remain effective.

OpenAI plans to dedicate a large share of its computing resources to the Superalignment project and is calling for outside collaboration. Together with Eric Schmidt, the company is offering $10 million in grants to researchers working on AI control techniques, and it will host an event on superalignment later this year to explore the area further.

Ilya Sutskever, a co-founder of OpenAI and a driving force behind the company's technical advances, leads the Superalignment team. His involvement is notable in light of the recent governance turmoil at OpenAI, and his leadership and expertise are seen as crucial to taking the project forward.

Developing methods to control super-intelligent AI is a complicated and urgent task. As AI technology advances rapidly, ensuring that it stays aligned with human values and remains safe becomes ever more important. OpenAI's effort is a major step in that direction, but building reliable and effective AI controls is an ongoing process that will require collaboration across the global AI research community.