OpenAI Announces "Significantly Improved" Version of GPT-4-Turbo - Coming Soon to ChatGPT

OpenAI has released an update to its advanced GPT-4-Turbo artificial intelligence model, bringing "significantly improved" response and analysis capabilities.

Initially, the model, which includes vision technology for analyzing and understanding content from video, images, and audio, is available only to developers, but OpenAI says these capabilities will soon come to ChatGPT.

This is the first time GPT-4-Turbo with vision capabilities has been made available to third-party developers, which could lead to fascinating new applications and services in areas such as fashion, coding, and even gaming.

The new model also pushes the knowledge cutoff to December 2023, the point up to which the model's training data extends. Previously, the cutoff was April 2023.

Most of the GPT-4-Turbo update is aimed at developers who access OpenAI's models through API calls. The company says the changes will streamline workflows and make apps more efficient: previously, separate models and separate requests were needed to handle images and text, whereas the updated model can process both in a single call.
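To illustrate the point, here is a minimal sketch of what a combined text-and-image request might look like, assuming the OpenAI Python SDK and a vision-capable "gpt-4-turbo" model name; the prompt and image URL are placeholders, not from the article.

```python
# Minimal sketch (not from the article): one request that mixes text and an
# image, assuming the OpenAI Python SDK and a vision-capable GPT-4-Turbo model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed model name; check OpenAI's docs for the current one
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},  # placeholder URL
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The practical benefit is that an app no longer needs to route image inputs to one model and text to another; a single endpoint handles the mixed request.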

In the future, the model and its visual analysis capabilities will be extended to consumer apps such as ChatGPT, making it easier to work with images and video there.

This mirrors what Google has begun to roll out with Gemini 1.5 Pro, although for now, like OpenAI, the search giant is limiting these capabilities to developer platforms rather than consumer products.

One of the most notable applications built on OpenAI's models is Cognition Labs' viral coding agent Devin, which can create complex applications from prompts.

GPT-4 has not fared particularly well in benchmark tests against newer models such as Claude 3 Opus and Google's Gemini, and some smaller models have outperformed it on certain tasks.

The update should change this situation, or at least add new features that will be attractive to enterprise customers until GPT-5 arrives.

The update retains the 128,000-token context window. While not the largest on the market, it is sufficient for most use cases.

So far, ChatGPT has focused on analyzing and understanding speech in addition to text and images. The new update lays the groundwork for bringing video to a wider audience: once it reaches ChatGPT, users may be able to upload short clips and have the AI summarize the content or pick out key moments.
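Until native video upload arrives, a common workaround on the developer side is to sample frames from a clip and pass them to the vision model as images. The sketch below is an illustration of that technique, not OpenAI's ChatGPT feature; it assumes the OpenAI Python SDK, OpenCV for frame extraction, and a placeholder "gpt-4-turbo" model name and file path.

```python
# Hypothetical sketch: approximating video summarization by sampling frames
# with OpenCV and sending them to a vision-capable model as base64 images.
# The model name, frame count, and file path are assumptions, not from the article.
import base64
import cv2  # pip install opencv-python
from openai import OpenAI

def sample_frames(path: str, max_frames: int = 8) -> list[str]:
    """Grab evenly spaced frames from a video and return them as base64 JPEGs."""
    video = cv2.VideoCapture(path)
    total = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // max_frames, 1)
    frames = []
    for i in range(0, total, step):
        video.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = video.read()
        if not ok:
            break
        _, buffer = cv2.imencode(".jpg", frame)
        frames.append(base64.b64encode(buffer).decode("utf-8"))
    video.release()
    return frames[:max_frames]

client = OpenAI()
frames = sample_frames("clip.mp4")  # placeholder video file

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed vision-capable model name
    messages=[{
        "role": "user",
        "content": [{"type": "text", "text": "Summarize this video and note key moments."}]
        + [{"type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{f}"}} for f in frames],
    }],
)
print(response.choices[0].message.content)
```

A consumer-facing version in ChatGPT would presumably hide this frame-sampling step behind a simple upload button.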
