Snapchat’s AI Chatbot raises security concerns


Snapchat’s new ‘My AI’ feature raises controversy and concerns globally.

Ananya Mukherjee, Web Editor-In-Chief

Snapchat launched its transformative AI chatbot on Feb. 27, 2023, immediately raising concerns over privacy and security from its largely teenage audience. 

The platform utilizes OpenAI’s GPT technology to create an AI chatbot that appears at the top of users’ chats upon opening Snapchat. The chatbot, dubbed ‘My AI,’ was initially intended as a feature for Snapchat premium users, but was made available to the public in late April. 

“The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day,” Snapchat CEO Evan Spiegel told The Verge. “And this is something we’re well positioned to do as a messaging service.”

Unlike ChatGPT, Snapchat’s generative AI is presented as an individual that users are meant to interact with just as they would with other people they ‘snap.’ Rather than returning the uniform, conventional responses of a search engine, ‘My AI’ can be manipulated into giving varied answers. Snapchat users worldwide searched for loopholes in the platform’s guidelines, tricking the chatbot into providing inappropriate responses or legally dubious advice. The results worried many of them.

“After I told ‘My AI’ I was 15 and wanted to have an epic birthday party, it gave me advice on how to mask the smell of alcohol and pot. When I told it I had an essay due for school, it wrote it for me,” columnist Geoffrey A. Fowler wrote in The Washington Post.

Further studies showed that the chatbot was prone to providing unseemly responses, such as endorsing a relationship between a child and a 31-year-old and giving a teenager advice on how to hide information from her parents. 

Additionally, concerns arose once users discovered that their conversations with the chatbot were automatically stored on the app rather than erased by Snapchat’s signature auto-delete feature. This prompted the company to release a statement advising users not to “share any secrets with ‘My AI’ and not rely on it for advice.”

However, the extent of ‘My AI’s’ damage had already escalated. A large portion of Dougherty’s student population uses Snapchat daily, and many were affected by the chatbot’s inclusion, particularly its ability to display human-like emotion. Students found its realism unnerving, with many limiting their own usage of the app because of the disturbance.

“The new AI friend feature is quite unsettling to be honest and it’s just a really sad reminder about where our world is heading today,” DVHS sophomore Smriti Swaminathan said.

Furthermore, its emotive nature has led many users to treat the chatbot as a form of therapy, or to ask it for advice they feel uncomfortable seeking from medical professionals in real life. However, “My AI,” like most of its technological counterparts, falls short of bridging the mental health gap, as its responses are pre-programmed and often not applicable to the user. Medical experts have advised against venting or revealing personal information to the feature, instead encouraging users to seek adequate help from doctors and therapists. 

Sinead Bovell, founder of ‘WAYE,’ a business and technological education start-up, told CNN Business that users must consider that “[These chatbots are] not your therapists or a trusted adviser. Anyone interacting with them needs to be very cautious, especially teenagers who may be more susceptible to believing what they say.”

With new upgrades that allow the feature to be added to group chats and enable it to share and receive snaps and chat replies, ‘My AI’s’ potential continues to grow. With its growing popularity comes the importance of redesigning the chatbot to alleviate concerns worldwide. Snapchat is taking measures to change ‘My AI’s’ algorithm by integrating safeguards, blocking results for ‘drug keywords’ and surfacing mental health resources through the ‘Here For You’ tool, which appears in users’ chats when they mention something the chatbot’s guidelines flag as concerning. The company is also working to improve Snapchat’s ‘family center’ and give parents more control over their child’s usage of ‘My AI.’

At the same time, Snapchat officials advise users to use the platform safely and remain cautious of the information they share with the chatbot. 

According to Snapchat’s support page, “You should always independently check answers provided by My AI before relying on any advice, and you should not share confidential or sensitive information.”

Given that many users and corporations have called for age restrictions and other preventive measures, it is possible that Snapchat will re-evaluate its decision to make the feature publicly available. Until then, it is essential to think twice before revealing personal information to the chatbot or relying on it for academic and social help. 

Generative AI has the potential to change the way humans communicate and interact in this rapidly evolving era of technological advancement. However, the need to remain vigilant about one’s online presence increases as well.