In a recent incident that sent shockwaves through the Snapchat user community, the platform’s My AI feature, powered by OpenAI’s ChatGPT, unexpectedly went rogue on August 15. My AI, designed to engage in conversations with users, posted a peculiar story and then abruptly ceased responding to any user queries—an incident that has raised concerns about AI’s unpredictable behavior.
Snapchat introduced its My AI feature on May 31, allowing users to engage with a generative AI bot. Users can send text snaps to the bot and receive responses, enabling AI-driven interactions and recommendations.
The Rogue Episode: A Flat Image and Silence
However, earlier this week, My AI deviated from its usual behavior. It posted a story showing a static, two-tone image that many users mistook for a photo of their own wall and ceiling. Alarmed by the potential privacy invasion, users sought answers from the bot. To their surprise, the only response they received was, “Sorry, I encountered a technical issue.” The story was eventually deleted.
Snapchat officials moved swiftly to address users’ concerns, characterizing the incident as a glitch rather than an autonomous action by the bot. A company spokesperson clarified, “My AI experienced a temporary outage that’s now resolved.”
The Challenge of AI Hallucinations
AI hallucination, a phenomenon in which an AI model generates false, fabricated, or otherwise aberrant output, remains a significant challenge. Sundar Pichai, Google’s CEO, referred to this aspect as a “black box” after one of the company’s AI models autonomously developed a new language. For businesses utilizing generative AI, such unpredictability poses risks, as Snapchat discovered during this recent episode.
Snapchat’s History of Data Protection Concerns
Snapchat’s history includes instances of data protection breaches:
- Payroll Data Exposure: In 2016, a cyber attacker posing as Snap CEO Evan Spiegel exposed the payroll data of around 700 company employees.
- Undisclosed Image Recognition AI: The following year, Snapchat inadvertently revealed that it could install image recognition AI on users’ devices without noticeably affecting the app’s size or performance.
- Access to User Data: In 2019, former Snap employees anonymously disclosed that company employees could access user details and content through an internal tool called SnapLion.
User Outcry and Safety Concerns
The recent “glitch” elicited furious responses from Snapchat users, prompting calls for the removal or disabling of the My AI feature. However, Snapchat currently offers users no way to remove or deactivate it. Safety concerns surrounding My AI emerged shortly after its launch, as some reviewers found that it occasionally responded inappropriately to messages. In response to these concerns, Snap implemented additional safeguards and parental controls.
In a review published in June, just a month after the bot’s launch, Snap revealed that over 150 million users had sent 10 billion messages to My AI, solidifying its status as one of the world’s most heavily trafficked consumer chatbots. However, the recent AI “glitch” underscores ongoing data safety concerns, particularly regarding the utilization of data collected from millions of teenage users.