OpenAI Pulls Gemini off the Top of the Chatbot Leaderboard with New Model

OpenAI's ChatGPT and Google's Gemini have been battling for chatbot supremacy for months, and the competition is really starting to heat up.

Earlier this year, Claude took the top spot in the AI benchmarking tool LMSYS Chatbot Arena; more recently, Gemini reigned supreme.

Now, however, a new version of ChatGPT-4o (20240808) has reclaimed the top spot from its rival with a score of 1,314.

According to the lmsysorg account on X, "the new ChatGPT-4o is a better performer in technical domains, especially coding (more than 30 points over the 20240513 version), and shows marked improvements in instruction following and hard prompts."

We have also found that Gemini-1.5-Pro-Exp, the model it displaced, now sits just below it on the leaderboard.

We have recently discovered that OpenAI has rolled out this new version of GPT-4o in ChatGPT.

In our tests, we found it to be much faster than the previous version, and we even managed to build an entire iOS app in an hour using the latest model.

This, coupled with improvements in the Mac app, made it a bigger week than usual for ChatGPT users and for OpenAI itself.

Still, with new and revamped models arriving all the time, it is quite possible that in the coming weeks or months we will see another model take the top spot.
