In May 2024, OpenAI introduced Advanced Voice Mode for its popular AI chatbot, ChatGPT, aiming to make interactions feel more natural and human-like. The feature has brought impressive progress along with some unexpected quirks.
Since July 2024, select ChatGPT Plus users have had access to this new voice mode, and they have now uncovered nine different voices that OpenAI appears to be testing. These voice models, named Fathom, Glimmer, Harp, Maple, Orbit, Rainbow, Reef, Ridge, and Vale, show significant improvements in making ChatGPT sound more human.
Exploring ChatGPT’s New Voices
Technology expert Tibor Blaho shared extended samples of these voices on social media, highlighting their distinct characteristics. Although the voices simply read through text in the samples, they go beyond basic vocalization: one standout feature is the ability to mimic animal sounds, such as dogs barking or crows cawing, a surprising touch that demonstrates the expanding versatility of OpenAI's voice models.
Additionally, the new voice models emphasize certain words and phrases more naturally. There are even subtle hints of British and Australian accents, as noted by Mashable, giving the voices a more diverse and realistic feel.
Is ChatGPT’s Voice Real Enough?
The question of how realistic these voices sound is subjective: some users are impressed by the lifelike quality, while others can still detect a slightly robotic undertone. Regardless, it is clear that OpenAI has made significant strides in creating AI voices that can mimic human speech with an unprecedented level of nuance.
What’s Next for OpenAI’s Voice Mode?
The big question remains: will these nine voices be officially released to all users? OpenAI has yet to say whether they will be integrated into Advanced Voice Mode. It is possible that features like animal sound imitation are part of internal tests that may or may not make it into the final version.
Currently, ChatGPT's voice mode offers users four basic voice options, which still sound noticeably robotic. But if these new voices are any indication, ChatGPT could soon deliver voice interactions that sound much closer to real human speech. That progress, while exciting, also brings new challenges, such as making AI-generated voices harder to distinguish from real ones, which could blur the line between AI and reality even further.
As AI continues to evolve, OpenAI is pushing the boundaries of what voice assistants can do. The future of human-like AI conversations is closer than ever, and these new voice models are the latest step toward making that a reality.