Key Points of This Article
- The AI model for “Gemini Live,” the intuitive conversational service for Google’s AI “Gemini” on Android/iOS, has been updated as previously announced.
- A dramatically improved ability to understand the key elements of human speech and draw on them in conversation has finally started rolling out.
- It can now be used for various learning and practice activities—such as adjusting speech speed, supporting language learning, conducting mock interviews, and reading stories from a specific character’s perspective—all within a smoother conversational flow than ever before.
On Wednesday, August 20, 2025, Google announced an update to the underlying AI model for “Gemini Live,” the intuitive conversational service for its Google AI “Gemini” on Android/iOS.
About three months later, on Wednesday, November 12, 2025, the AI model for “Gemini Live” was updated as announced, and a dramatically improved ability to understand the key elements of human speech and draw on them in conversation has finally begun to roll out.

With this AI model update, “Gemini Live” can now be used for various learning and practice activities, such as adjusting speech speed, supporting language learning, conducting mock interviews, and reading stories from a specific character’s perspective, all within a smoother conversational flow than before. It should be especially useful for language learning and mock interviews.
Source: Google