Meta Platforms Inc. announced on Wednesday a significant update to its artificial intelligence chatbot, Meta AI, allowing Facebook and Instagram users to engage in real-time conversations with voices that closely resemble those of celebrities like John Cena and Judi Dench. The voices are not the actors themselves, however, but computer-generated replicas trained to sound like them.
This initiative is part of Meta’s effort to enhance its AI chatbot, which users can access across Facebook, Instagram, WhatsApp, and Threads, and to keep pace with competitors such as ChatGPT, which is introducing its own voice mode. Mark Zuckerberg, Meta’s CEO, expressed his ambition for Meta AI to become “the most used AI assistant in the world” by the end of this year, aided by the company’s massive user base of over 3 billion daily users. However, the specifics of how Meta measures chatbot engagement remain unclear.
To develop these celebrity voices, Meta partnered with actors Kristen Bell, Awkwafina, and Keegan-Michael Key, alongside Cena and Dench. This contrasts with a controversy that arose earlier this year when OpenAI showcased a voice feature for ChatGPT that sounded like actress Scarlett Johansson, who had declined to participate. OpenAI later clarified that the voice, named Sky, was not based on Johansson but suspended its use in light of the backlash. In Meta’s case, the collaboration with actors ensures that their voices are used with consent.
During the annual Meta Connect conference, Zuckerberg unveiled this new voice mode along with other advancements, including a new, more affordable version of Meta’s Quest headsets and updates to the company’s augmented reality Ray-Ban glasses.
Meta also announced AI-generated versions of social media influencers. Previously, influencers could create AI chatbots capable only of text-based conversations with their followers. Now, these AI versions will be able to engage in quasi-video calls, enhancing interaction with their audiences.
Additionally, Meta plans to implement auto-translation and dubbing for foreign language Reels, its short-form video content. For instance, if an English-speaking user encounters a Reel originally created in Spanish, the platform will automatically dub it in English, including visual edits to align the speaker’s mouth movements with the new audio.
Moreover, users may soon notice an influx of AI-generated content in their Facebook and Instagram feeds. Meta aims to create and distribute AI-generated images tailored to users’ interests and current trends, a feature the company has dubbed “imagined for you.” It remains unclear, however, whether users who prefer to see posts only from their human friends will be able to opt out of this content.
Zuckerberg also highlighted the advancements in Meta’s AR glasses, which will now include live, AI-enabled translation capabilities. Users can engage in conversations with speakers of foreign languages and receive real-time translations in their own language.
The CEO also teased a prototype called “Orion,” a more sophisticated pair of glasses designed to deliver augmented reality experiences comparable to headsets like Meta Quest or Apple’s Vision Pro. Unlike existing headsets, which show the real world through camera passthrough, Orion’s lenses use a see-through design and holographic displays, allowing digital content, such as emails and text messages, to appear to float in the user’s space.
Zuckerberg touted the Orion as “the most advanced glasses the world has ever seen.” However, these innovative glasses are not yet available for consumer purchase. The company plans to conduct further internal testing and collaborate with select third-party developers to build applications for the glasses before releasing them to the public.
As Meta continues to refine its AI and augmented reality technologies, the landscape of social media engagement is poised for transformation, enhancing user experiences with interactive, personalized content.