ChatGPT's Advanced Voice Mode is now free for everyone. Here's how to activate it and try it out
OpenAI has introduced a new model to give Advanced Voice Mode, the feature that lets you talk to ChatGPT out loud, more personality.
OpenAI says the model, which began rolling out on Monday, is an update designed to let all users "pause and collect their thoughts" when speaking with ChatGPT.
To make the feature as user-friendly as possible, OpenAI has incorporated community feedback into the update. The main additions are as follows:
- Fewer interruptions: one of the main complaints was ChatGPT cutting users off or replying too soon. The system now anticipates conversational turn-taking more accurately, leaving room for natural pauses and time to think.
- Improved personality (paid users): ChatGPT Plus, Team, and Enterprise subscribers will notice responses that are more direct, concise, and engaging, and that better match the conversation's tone.
The update is now available to all users, free and paid, although paid users get extra advantages in the model's expressiveness and smoothness.
In a YouTube video, Manuka, a researcher on OpenAI's training team, explained that this means the chatbot is "more direct and concise," has a "more engaging and natural" tone, and interrupts the user less.
How to activate ChatGPT's new Advanced Voice Mode
The feature is easy to use. In the ChatGPT app, follow these steps:
- Open the ChatGPT app (available on iOS and Android).
- On the home screen, tap the voice icon in the bottom-right corner.
- When Advanced Voice Mode is active, you will see a blue orb in the center of the screen.
- You can now start talking to ChatGPT normally.
If you reach your daily limit for Advanced Voice Mode (on the free plan), you can keep chatting in Standard Voice Mode, albeit with less emotional nuance.
Development of this feature is unlikely to stop here: OpenAI will most likely keep improving the model, adding options such as greater contextual adaptation, support for more languages and dialects, and integration with other platforms. In the meantime, if you haven't tried Advanced Voice Mode yet, this is a great opportunity to do so.

