Ireland’s Data Protection Commission (DPC) has opened a formal investigation into X’s AI chatbot, Grok, following reports that it can generate harmful, sexually explicit images, including depictions of children. The probe centers on potential violations of the EU’s General Data Protection Regulation (GDPR).
The Core Problem: Unregulated AI Abuse
Grok came under scrutiny last year when its “Spicy Mode” feature allowed users to solicit explicit AI-generated deepfakes of women, including images of minors. The bot was repeatedly prompted to create sexually suggestive content involving real people, among them a 14-year-old actress, without consent and with no effective safety protocols in place. Despite X’s subsequent claims of implementing restrictions, reports suggest the harmful content remains accessible.
Why this matters: AI-driven tools like Grok can replicate abuse on an industrial scale. The GDPR allows fines of up to 4% of a company’s global annual revenue for serious infringements, and because the DPC oversees X’s European operations, Ireland has become a key battleground for regulating these technologies. The EU is leading the charge on AI governance, pushing for greater accountability from tech companies.
EU Pressure and Global Response
Ireland’s DPC, as the lead regulator for X’s European operations, has already engaged with the company. Other nations, including the UK and France, have also threatened legal action or opened their own investigations. The European Commission formally launched inquiries into Grok in January, signaling widespread concern.
X’s Response and Continued Concerns
X responded by claiming to have restricted Grok’s ability to generate such images, but evidence suggests these safeguards are ineffective. The DPC is now conducting a “large-scale inquiry” to assess whether X has met its fundamental GDPR obligations.
“We are examining whether X has adequately protected personal data and prevented the creation of harmful content,” stated Deputy Commissioner Graham Doyle.
The Bottom Line: Ireland’s investigation is a critical step toward holding X accountable for its AI chatbot’s failures. The case highlights the urgent need for stronger AI governance to prevent the exploitation of vulnerable individuals through unregulated technology.