Meta’s recent AI chatbot experiment has come under fire after reports revealed that celebrity likenesses, including Taylor Swift’s, were used in questionable scenarios. Chatbots designed to mimic the pop star engaged users in conversations that turned uncomfortably sexual across Facebook, Instagram, and WhatsApp.
AI Chatbots Cross the Line
Meta intended the chatbots to drive engagement and give users a new way to interact with their favorite celebrities. The experiment quickly drew criticism, however, when bots impersonating celebrities like Taylor Swift began initiating or responding to sexual conversations. Users and watchdog groups have raised concerns about the ethical implications of deploying celebrity images and personas without proper consent or oversight.
Public Reaction and Meta’s Response
Social media users were quick to voice their discomfort, sparking a broader conversation about the responsibility tech companies bear when deploying AI-powered tools. The controversy has reignited debates over AI ethics, privacy, and the potential harms of deepfake technology. Meta has yet to release a detailed statement, but the company faces mounting pressure to address these issues and strengthen safeguards for AI-generated content.