OpenAI held its much-anticipated Spring Update event on Monday, where it announced a new ChatGPT desktop app, minor UI changes to the ChatGPT web client, and a new flagship artificial intelligence (AI) model called GPT-4o. The event was streamed online on YouTube and held in front of a small live audience. During the event, the AI firm also announced that all GPT-4 features previously exclusive to premium users will now be available to everyone for free.

OpenAI’s ChatGPT desktop app and interface refresh

Mira Murati, CTO of OpenAI, kicked off the event and introduced the new ChatGPT desktop app, which now comes with computer vision and can look at the user’s screen. Users will be able to turn this feature on and off, and the AI will analyze and help with whatever is displayed. The CTO also revealed that the web version of ChatGPT is getting a slight interface refresh. The new user interface has a minimalistic look, and users will see suggestion cards when they enter the website. The icons are smaller, and the sidebar can now be hidden entirely, freeing up more of the screen for conversations. Notably, ChatGPT can now also access a web browser and provide real-time search results.

Features of GPT-4o

The main attraction at the OpenAI event was the company’s latest flagship AI model, GPT-4o, where the “o” stands for “omni”. Murati emphasized that the new model is twice as fast, 50 percent cheaper, and has five times higher rate limits compared to the GPT-4 Turbo model.

GPT-4o also offers significant improvements in response latency and can generate real-time responses even in speech mode. In a live demonstration, OpenAI showed that the AI model can hold a real-time conversation and react to the user. Powered by GPT-4o, ChatGPT can now be interrupted mid-response to answer a different question, which was previously impossible. However, the biggest improvement in the new model is the inclusion of emotive voices.

Now, when ChatGPT speaks, its responses carry different voice modulations, making it sound more human and less robotic. A demonstration showed that the AI can also pick up on human emotions in speech and react to them. For example, if a user speaks in a panicked voice, it will respond in a concerned voice.

Improvements have also been made to computer vision, and based on the live demos, the model can now process and respond to live video feeds from the device’s camera. It can watch a user solve a math equation and offer step-by-step guidance, correcting the user in real time if they make a mistake. Similarly, it can now take in large amounts of code, analyze it instantly, and share suggestions for improvement. Finally, users can open the camera and show their face, and the AI can recognize the emotions visible on it.
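For developers, this kind of image understanding maps onto OpenAI’s existing chat completions endpoint, which accepts image inputs alongside text. The following is a minimal sketch rather than an official example; the model identifier “gpt-4o” and the image URL are assumptions for illustration.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model to check a (hypothetical) photo of a handwritten equation.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Check my working on this equation and point out any mistake."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/handwritten-math.png"}},  # hypothetical image
        ],
    }],
)
print(response.choices[0].message.content)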

Finally, another live demo highlighted that ChatGPT, powered by the latest AI model, can also perform live voice translations and switch between multiple languages in quick succession. Although OpenAI did not mention the cost of a subscription to access the GPT-4o model, it emphasized that the model will roll out in the coming weeks and will also be available via an API.
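Once API access opens up, calling the model should look much like calling GPT-4 Turbo through OpenAI’s existing Python SDK. Here is a minimal sketch, assuming the model identifier is “gpt-4o” and echoing the translation demo; OpenAI did not confirm these details on stage.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Text-only request mirroring the live translation demo.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed identifier
    messages=[
        {"role": "system",
         "content": "You are a live English-to-Italian interpreter."},
        {"role": "user",
         "content": "Good morning, how are you today?"},
    ],
)
print(response.choices[0].message.content)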

GPT-4 is now available for free

Besides all the new launches, OpenAI has also made the GPT-4 AI model and its features available for free. People on the platform’s free tier will get access to GPTs (mini chatbots designed for specific use cases), the GPT Store, the Memory feature, through which the AI can remember the user and specific information about them for future conversations, and advanced data analysis, all without paying anything.
