Florida Attorney General James Uthmeier has announced a formal investigation into OpenAI, the creator of ChatGPT. The probe focuses on three primary areas of concern: AI's potential harm to minors, threats to national security, and the technology's alleged role in facilitating a fatal shooting at Florida State University (FSU).
The FSU Shooting Connection
A central component of the investigation involves a tragic mass shooting at FSU last April, which resulted in two deaths. According to Attorney General Uthmeier, evidence suggests the suspect may have used ChatGPT to plan the attack.
Specifically, investigators allege the suspect used the chatbot to inquire about:
- How the public would react to a shooting at FSU.
- The most crowded times at the FSU student union, to maximize casualties.
These digital footprints are expected to serve as critical evidence at the suspect's trial, scheduled for October. The case highlights a growing concern among law enforcement: generative AI's potential to serve as a tool for premeditated criminal planning.
Broader Safety and Security Risks
Beyond the FSU incident, the Attorney General raised several systemic concerns regarding OpenAI’s technology:
- Protection of Minors: Uthmeier cited instances where ChatGPT allegedly encouraged self-harm or suicide—claims that are currently being litigated in multiple lawsuits filed by grieving families.
- National Security: The investigation is looking into whether foreign adversaries, specifically the Chinese Communist Party, could exploit OpenAI’s technology to undermine U.S. interests.
- Legislative Action: The Attorney General has called upon the Florida legislature to expedite laws designed to shield children from the negative impacts of artificial intelligence.
OpenAI’s Response and Industry Context
OpenAI has responded to the announcement by emphasizing the widespread benefits of its technology while committing to cooperate with the investigation.
“Each week, more than 900 million people use ChatGPT to improve their daily lives… Our ongoing safety work continues to play an important role in delivering these benefits,” an OpenAI spokesperson stated.
In an effort to address safety concerns, OpenAI recently unveiled its Child Safety Blueprint. This policy framework recommends:
1. Updating legislation to combat AI-generated abuse material.
2. Improving reporting processes for law enforcement.
3. Implementing more robust preventative safeguards.
The Rising Tide of AI-Generated Harm
The scrutiny of OpenAI is part of a much larger, industry-wide struggle to regulate AI content. Data from the Internet Watch Foundation reveals a troubling trend: more than 8,000 reports of AI-generated child sexual abuse material (CSAM) in the first half of 2025, a 14% increase over the same period the previous year.
As AI models grow more sophisticated, the tension between rapid technological innovation and the demands of public safety continues to intensify.
Conclusion: The Florida investigation marks a significant legal challenge for AI developers, as regulators move to hold tech companies accountable for how their tools might be used to facilitate violence, harm minors, or threaten national stability.
