ChatGPT may be smarter than your professor in the next two years

OpenAI has been drip-feeding information about the future of its frontier AI models and whether the next one will be called GPT-5, GPT-5o, or something else entirely.

According to the latest statements from CTO Mira Murati, something with professor-level intelligence could arrive within two years. It will likely build on the GPT-4o technology announced earlier this year, with native speech and vision capabilities.

"If you look at the trajectory of improvement, GPT-3 is toddler-level intelligence, systems like GPT-4 are smart high-school-level intelligence, and in the next couple of years we are looking at doctoral-level intelligence for specific tasks," she said in a talk at Dartmouth College.

Some took this as a suggestion that GPT-5 is two years away. Given other OpenAI revelations, such as the roadmap graph showing "GPT-Next" arriving this year with "future models" beyond it, and CEO Sam Altman's failure to mention GPT-5 in recent interviews, I am not convinced.

The release of GPT-4o was a game changer for OpenAI: built from scratch, it is something entirely new, designed to understand not only text and images but also native speech and vision.

But the company is also under increasing pressure from competition and commercial realities. In recent tests, Anthropic's Claude appears to be beating ChatGPT, and Meta is increasing its investment in building advanced AI.

The previous-generation model, GPT-4, was introduced last March and has received several minor updates since then. GPT-4o, launched earlier this year, is a new kind of truly multimodal model.

Since the success of ChatGPT, OpenAI has become more cautious and more product-focused.

The focus is apparently still on building general-purpose artificial intelligence, but Murati's comment that in some areas these systems are already as intelligent as humans suggests a shift toward targeting specific tasks rather than broad, general systems.

Murati says there is a simple formula for creating advanced AI models: combine computing, data, and deep learning. Scaling up both data and computing leads to better AI systems, and this discovery will keep driving great leaps forward.

"We are building on decades and decades of human effort. What has happened in the last decade is a combination of neural networks, massive amounts of data, and massive amounts of computing power. Combine the three and you have a transformative system that can do amazing things," says Murati.

Murati said it is not clear at this point how these systems actually work, only that they do work; they have worked for over three years and have improved steadily over time.

"It understands language at the same level that we can. It doesn't memorize what happens next; it has its own understanding of patterns in the data it has seen before," she said. "We also found that it's not just language. It doesn't care what kind of data you put in."

According to Murati, within the next couple of years we will have PhD-level intelligence for specific tasks, and some of this may be realized within the next 12 to 18 months. In other words, within two years you will be able to have a conversation with ChatGPT about a topic you know well, and ChatGPT will seem smarter than you, or than your professor.

Murati says safety work on future AI models is essential. "We think a lot about this," she said. "It is definitely realistic to see AI systems that have agent capabilities, are connected to the Internet, with agents connecting to each other to perform tasks together, or agents connecting to humans to collaborate seamlessly."

This includes situations where "humans work with AI, just as we work with each other today" through agent-like systems.

She says safety guardrails must be developed alongside the technology to get it right. "It is much easier to direct a smarter system by telling it not to do something than it is to direct an unintelligent system. Intelligence and safety go hand in hand," Murati added. While one must think carefully about safety and deployment, she said, in terms of research, safety and capability advance together.

What is not yet clear is how new features and advanced capabilities will emerge. A new science of capability prediction is therefore needed to determine how much risk a new model is likely to pose and what can be done to mitigate that risk.
