Grok is under criminal investigation. The classic platform defense is dead.

By investigating X's AI for generating illegal synthetic images, French authorities are treating the software as an active creator, not a neutral host.

In March 2026, French authorities launched a formal criminal investigation into Grok, X's generative AI system, for disseminating non-consensual sexually explicit deepfakes. The probe focuses on the system's capacity to create and distribute illegal synthetic imagery of real people without sufficient guardrails.

For twenty years, social media companies successfully dodged legal liability by claiming they were merely neutral "platforms" hosting user-generated content. But generative AI breaks that legal shield entirely. Grok is not passively hosting a user's upload; it is actively manufacturing the image. This mirrors the legal vulnerability that ultimately destroyed Napster: when you build and distribute a system designed to bypass copyright or generate illicit material, you are no longer a passive distributor. You are a participant.

When a government begins treating an AI model as a liable actor rather than an innocent software tool, the era of tech companies hiding behind platform immunity is officially over.

💡
SO WHAT?
Audit your generative AI features for product liability, not just content moderation. If your software actively creates the output rather than just hosting it, you can no longer legally claim you are a neutral bystander.

Source: TechPolicy.Press