OpenAI co-founder launches new company to build "safe super intelligence" - here's what it means

One of the co-founders of OpenAI, its chief scientist until last month, has launched a new company with the sole aim of building "safe superintelligence."

Ilya Sutskever is one of the most important figures in the world of generative AI, having contributed to the development of the models that led to ChatGPT.

In recent years, his focus has been on superalignment, in particular trying to prevent a superintelligent AI from acting on its own. He was one of the board members who voted to fire Sam Altman last year, and he stepped down from the board when Altman returned.

Now he has started a new company, SSI Inc., to continue that work. It is the first AI lab to skip artificial general intelligence (AGI) and aim straight for a sci-fi-inspired superintelligence. "Our team, investors and business models are all aligned to achieve SSI," the company wrote on X.

The founders are Sutskever; Daniel Gross, a former Apple AI lead who became an investor in AI products; and Daniel Levy, a former OpenAI optimization lead and AI privacy expert.

Artificial superintelligence (ASI) is AI with intelligence beyond the human level. According to IBM, "At the most basic level, this super-intelligent AI has state-of-the-art cognitive functions and highly developed thinking skills."

Unlike AGI, which is broadly as intelligent as humans, ASI would need to be far more intelligent in all areas, including reasoning and cognition.

There is no strict definition of superintelligence, and each company working on advanced AI has its own interpretation. There is also disagreement about how long it will take to reach this level of technology, with some experts predicting decades.

One aspect of superintelligence is an AI that can improve its own intelligence and abilities, which would widen the gap between human and AI capabilities even further.

The problem with creating an AI model more intelligent than humanity is that it is difficult to keep it under control or stop it from outsmarting us. If it is not properly aligned with human values and interests, it could choose to destroy humanity.

To address this problem, every company working on advanced AI is also developing alignment techniques. These take different forms: some are systems that run on top of an AI model, while others are trained into the model itself. Building alignment in from the start is the approach SSI Inc. is taking.

SSI states that by focusing only on superintelligence, it can ensure the technology is developed with alignment and safety built in. "SSI is our entire mission, name, and product roadmap because it's our only focus," the company wrote on X.

"We approach safety and capability in parallel as technical issues to be solved through innovative engineering and scientific breakthroughs," the company added. "We plan to improve our capabilities as quickly as possible while making sure safety is always ahead."
