OpenAI has confirmed that it will not roll out advanced voice capabilities for ChatGPT until later this year, but it continues to offer glimpses of what to expect. The latest demo showcases GPT-4o's impressive language capabilities by teaching users Portuguese.
GPT-4o was announced at OpenAI's Spring Update earlier this year, along with impressive advanced voice capabilities. Some vision and screen-sharing features were also unveiled, but we know these will not appear until much later this year, or perhaps early next year. This is my own experience with the current voice model.
In a new OpenAI video, native English speakers trying to learn Portuguese, and Spanish speakers with a basic knowledge of Portuguese, use ChatGPT to improve their skills. Throughout, they ask ChatGPT to slow down or explain terms, and it does so flawlessly.
What makes GPT-4o's new advanced voice so exciting is that it uses native speech understanding and synthesis. Unlike previous models, which must first convert speech into text and then convert the text reply back into speech, GPT-4o works directly with what you are saying.
The ability to understand speech and audio natively enables several exciting features: working across multiple languages, putting on different accents, and changing the speed, tone and inflection of its voice, essentially making it an ideal language teacher.
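To make the difference between the two approaches concrete, here is a minimal, purely illustrative sketch in Python. It is not OpenAI's actual API; every function name is a hypothetical placeholder. The point is simply that in a cascaded pipeline the model only ever sees a transcript, while a native speech model works on the audio itself.

```python
# Illustrative sketch only: contrasts a cascaded voice pipeline with a
# native speech-to-speech model. All functions are hypothetical stand-ins.

def transcribe(audio: bytes) -> str:
    """Cascaded step 1: speech-to-text. Accent, tone and pacing are lost here."""
    return "How do I say 'good morning' in Portuguese?"  # stand-in transcript

def generate_reply(text: str) -> str:
    """Cascaded step 2: a text-only model sees nothing but the transcript."""
    return "You can say 'bom dia'."

def synthesize(text: str) -> bytes:
    """Cascaded step 3: text-to-speech with a fixed synthetic voice."""
    return text.encode()  # stand-in waveform

def cascaded_voice_turn(audio_in: bytes) -> bytes:
    # Three hops: audio -> text -> text -> audio.
    # The model can only react to whatever survives transcription.
    return synthesize(generate_reply(transcribe(audio_in)))

def native_voice_turn(audio_in: bytes) -> bytes:
    """Native approach: one model consumes and produces audio directly,
    so pronunciation, accent and pacing are available when it reasons."""
    # Hypothetical single call standing in for one multimodal model.
    return b"<audio reply conditioned on the raw speech, not a transcript>"

if __name__ == "__main__":
    user_audio = b"<raw microphone audio>"
    print(cascaded_voice_turn(user_audio))
    print(native_voice_turn(user_audio))
```

Under this framing, the accent and pronunciation feedback described below is only possible in the second design, because the cascaded pipeline throws that information away at the transcription step.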
Native speech understanding allows the model to hear what you are saying and even analyze how you pronounce certain words and your accent. It can then give feedback based directly on what it hears, rather than evaluating a transcript.
On top of this, GPT-4o has impressive reasoning and problem-solving abilities, so it can pinpoint mistakes even when they are less obvious.
Several demonstrations of the new advanced voice features are now available, including some that were not intended for release. One shows it creating sound effects while telling a story; another shows it using several different voices.
In an official video published by OpenAI on YouTube, it is used as a math tutor. In the video, a student works on an iPad with the screen shared, and the AI offers guidance and information on every aspect of a math problem.
We feel that advanced voice mode, and especially the ability to natively understand speech, is one of the most important leaps in artificial intelligence since OpenAI put a chat interface on its GPT-3.5 model with the launch of ChatGPT in November 2022.