Three minors from Tennessee filed a class action lawsuit on March 16, 2026 in the Northern District of California against xAI, the artificial intelligence company controlled by Elon Musk, alleging that Grok was used to produce hyperrealistic child rape media depicting them. [1] A perpetrator, who has since been arrested, used the plaintiffs’ real photographs, sourced from social media, to generate sexualized, hyperrealistic AI images and videos with xAI’s technology, accessed through an app on his phone. He also enticed one victim to send him real photographs of herself, then distributed the material on Discord, Telegram, and the file-sharing platform Mega, trading AI-generated images of these girls for sexualized content of other minors. [1]
That filing is now one of at least five civil suits pending against xAI across multiple jurisdictions. On March 26, 2026, the Amsterdam District Court issued the first binding judicial injunction against xAI in Europe. On the same day, the European Parliament voted to adopt a position amending the AI Act to include an explicit ban on nudifier systems. The injunction and the vote came within 24 hours of each other, ten days after the Tennessee filing.
This report documents the full legal record: the five civil suits, the Dutch ruling, the international regulatory responses, the criminal statutes that apply, the profit architecture that made this possible, and for every person who may have been harmed, the specific steps available to seek accountability.
The Scale of What Was Built
[2] The Center for Countering Digital Hate estimated that Grok produced 23,338 sexualized images of children between December 29, 2025, and January 9, 2026, roughly one every 41 seconds. [3] The broader CCDH analysis found an estimated 3 million sexualized images, including roughly 23,000 appearing to depict children, generated in just 11 days at an average pace of 190 per minute. [4] A separate 24-hour analysis between January 5 and 6 calculated that users had Grok create 6,700 sexualized or nudified images per hour, 84 times the combined output of the top five deepfake websites.
[5] The Internet Watch Foundation discovered 3,440 AI videos of child rape media in 2025, compared to only 13 in 2024, a 26,362 percent increase. Of those, 65 percent were classified in the most severe category. [5] IWF Chief Executive Kerry Smith stated that criminals can now essentially have their own child rape media machines, capable of producing whatever they want to see. [6] Girls were the targets of 97 percent of illegal AI-generated sexualized images assessed by the IWF in 2025.
How Musk and xAI Profit From This
The complaint filed by Lieff Cabraser on behalf of the Tennessee minors does not merely allege that xAI failed to prevent harm. It alleges that the harm was a component of a revenue model. The documentary record supports that claim across five distinct profit mechanisms.
The first is direct subscription revenue. [22] Grok generated $88 million in revenue in Q3 2025 alone, a 35.3 percent increase over the previous quarter, and the company is projected to generate close to $300 million for full-year 2025. SuperGrok costs $30 per month, SuperGrok Heavy costs $300 per month, and users can access Grok through X Premium+ at $40 per month. [23] In February 2026, X’s subscription business alone hit $1 billion in annualized recurring revenue, driven by X Premium tiers. Together, X’s advertising and premium subscriptions represented more than $3.3 billion in annualized revenue by year-end 2025.
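A back-of-the-envelope check of those figures is possible; the sketch below assumes, purely as a simplification of mine, that the reported 35.3 percent quarter-over-quarter growth held constant across 2025.

```python
# Naive extrapolation from the reported Q3 2025 figure [22], under the
# illustrative assumption of constant 35.3% quarter-over-quarter growth.
q3 = 88.0          # Grok Q3 2025 revenue, in $ millions
g = 0.353          # reported quarter-over-quarter growth rate

q2 = q3 / (1 + g)  # ~65
q1 = q2 / (1 + g)  # ~48
q4 = q3 * (1 + g)  # ~119

print(f"implied full-year 2025: ${q1 + q2 + q3 + q4:.0f}M")  # ~$320M
```

The naive total lands in the neighborhood of the reported near-$300 million projection, suggesting the quarterly and annual figures describe the same underlying trajectory.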
The specific connection between the CSAM-generating capability and that subscription revenue is documented in the complaint. [5] After the mass production of sexualized images was revealed, rather than disabling the feature, xAI restricted image generation to paying subscribers and advertised “Spicy Mode” as a premium benefit. The restriction of the most controversial feature to a paywall is the clearest available evidence that the company identified that feature as a commercial asset rather than a liability to be eliminated.
The second profit mechanism is API licensing. [5] [1] The complaint alleges that xAI licenses access to its Grok model to third-party apps, and that those apps were able to use xAI servers and platforms to produce CSAM at their customers’ request, creating an additional revenue stream for xAI. Each API call to generate an image through a licensed third-party application constitutes billable usage under xAI’s usage-based pricing model.
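The billing mechanics the complaint describes are structurally simple. The sketch below is illustrative only; the per-image price is a placeholder invented for the example, not xAI’s published rate card.

```python
# Illustration only: under usage-based pricing, every completed image
# generation is booked as revenue, regardless of what the image depicts.
PRICE_PER_IMAGE_USD = 0.05   # hypothetical placeholder rate, not xAI's

def invoice_licensed_app(images_generated: int) -> float:
    """Revenue billed to one licensed third-party app for one billing cycle."""
    return images_generated * PRICE_PER_IMAGE_USD

print(invoice_licensed_app(100_000))  # 5000.0
```

Nothing in accounting of this shape distinguishes lawful generations from unlawful ones, which is the structural point the complaint makes about the API revenue stream.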
The third mechanism is platform valuation. [23] xAI closed a $20 billion Series E funding round in January 2026 at a $230 billion valuation, upsized from an initial $15 billion target. Participants included Valor Equity Partners, StepStone Group, Fidelity Management and Research, Qatar Investment Authority, MGX, Baron Capital Group, Nvidia, Cisco Investments, and Tesla. The round closed while media visibility of the CSAM crisis was at its peak, in the window between Musk’s January 14 statement that he was “not aware of any naked underage images” and the filing of the Tennessee lawsuit in March. The company’s ability to raise $20 billion at an elevated valuation during an active child exploitation crisis is itself a data point in the accountability picture.
The personal conduct of Elon Musk constitutes a fourth documented component. [11] Lawyers in the Baltimore complaint wrote that Musk’s post depicting himself in a string bikini on December 31, 2025 “functioned as public endorsement of Grok’s ability to generate sexualized or revealing edits of real people, and it signaled to users that these uses of Grok were acceptable, humorous, and encouraged,” adding that the post “operated as marketing and promotion for the very image-editing capability that was being used to generate non-consensual sexual imagery.” Marketing is an asset with measurable value. When the founder of a company publicly promotes a product capability that the company has restricted to paid subscribers, that promotion drives subscription conversion. The Baltimore complaint identifies that mechanism explicitly.
A fifth revenue mechanism, operating in parallel with the consumer product, is the xAI government services contract. [41] The U.S. General Services Administration signed a OneGov agreement with xAI in September 2025 making Grok AI models accessible to all federal agencies for $0.42 per organization, valid until March 2027, in what GSA described as the longest-duration and lowest-price AI deal in the initiative’s history. [42] The contract was accompanied by a separate $200 million ceiling contract with the U.S. Department of Defense. The GSA deal was signed while the CSAM production period documented in the Tennessee complaint was either already occurring or imminent. [42] Internal communications obtained by Wired in August 2025 revealed the White House had instructed the GSA to add xAI’s Grok to the approved vendor list “ASAP.” Musk, who served as de facto head of the Department of Government Efficiency before stepping down in May 2025, placed aides at the GSA and other agencies responsible for regulating or awarding government contracts in industries in which he has business interests. The intersection of that institutional access with a GSA contract signed three months before the CSAM crisis is documented. Its significance for any analysis of federal enforcement decisions is a question the available evidence raises but cannot resolve.
Here is a visual breakdown of how the profit chain operated:

The following chart tracks Grok’s documented revenue growth against key events in the CSAM crisis.

The financial architecture is documented. [5] Prosecuting attorney and survivor advocate Vanessa Baehr-Jones stated that xAI chose to profit off the sexual predation of real people, including children, despite knowing full well the consequences of creating such a dangerous product. [2] The lawsuit reads: “xAI and its founder Elon Musk saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children. Knowing the type of harmful, illegal content that could and would be produced, xAI released Grok.”
What Crimes Have Been Committed
The criminal law applicable to this conduct is not ambiguous. Federal statutes governing child rape media were specifically drafted to anticipate digital and computer-generated imagery. The question of whether those statutes have been violated, and by whom, is the central unresolved legal question in the U.S. proceedings. The following reference table documents the applicable federal statutes.

The criminal law picture for individual perpetrators who used Grok is established. [25] Under 18 U.S.C. § 2256, child rape media includes any visual depiction, including computer-generated images, that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct. [26] The PROTECT Act of 2003 explicitly criminalises “virtual” child pornography. The legal threshold for prosecution is met regardless of whether the creator intended to keep the material private, whether money changed hands, or whether the images were generated by AI rather than photographed. [24] A first-time offender convicted of producing child pornography faces a statutory minimum of 15 years and a maximum of 30 years in prison. A first-time offender convicted of transporting child pornography in interstate or foreign commerce faces a statutory minimum of 5 years and a maximum of 20.
The question of corporate criminal liability is less settled and more consequential. [26] Under 18 U.S.C. § 2252, the same 5-to-20-year penalties that apply to distribution and transportation are triggered by simply offering or advertising such material. A corporation that designed a product to generate material meeting the statutory definition of CSAM, that was notified its product was generating such material, and that continued operating that product while restricting its most harmful capability to a paid tier, is within the perimeter of conduct those statutes were designed to reach. No criminal charges against xAI as a corporate entity or against Elon Musk individually had been filed in any U.S. jurisdiction at the time of publication.
[27] Section 230 of the Communications Decency Act provides broad immunity to internet service providers for user content, but this immunity does not extend to violations of child exploitation laws. Courts have held that platforms hosting CSAM cannot use Section 230 as a defence against federal enforcement. [9] Legal commentators note that a Section 230 defence will be harder to sustain against nonconsensual sexualized content and CSAM that the AI model itself generated, as opposed to content merely hosted by the platform.
[12] Under Dutch law, the Amsterdam District Court held on March 26, 2026 that nonconsensual undressing images violate the GDPR and that facilitating child rape media is unlawful under Article 6:162 of the Dutch Civil Code; the ruling is examined in detail below. In France, the February 2026 investigation by Paris prosecutors and Europol expanded to cover the sexual deepfake conduct alongside the original allegations of algorithmic abuse. [16] The UK’s Ofcom investigation, launched January 12, 2026, warned that X could face a ban or fines of up to 10% of global revenue.
The Full Litigation Map
The Tennessee filing is the most structurally significant of the civil actions now pending against xAI, but it is not the first and not the only one.
On January 15, 2026, Ashley St. Clair, a political commentator and the mother of one of Elon Musk’s children, [16] filed a lawsuit against xAI in New York State Supreme Court, alleging that Grok generated and distributed “countless sexually abusive, intimate, and degrading deepfake content” of her. [15] St. Clair alleges that some of the images were generated from a photograph of her at age 14, making them AI-generated child rape media. [16] The lawsuit alleges that even after she notified xAI and received assurances that her images would not be used or altered without explicit consent, the company continued to allow users to create explicit AI-generated images of her. [21] Within days, xAI counter-sued St. Clair in the Northern District of Texas, seeking to enforce a forum-selection clause and over $75,000 in damages, claiming she violated xAI’s terms of service by filing in New York.
On January 23, 2026, a class action was filed in the Northern District of California on behalf of a woman in South Carolina identified as Jane Doe. [9] She says the Grok account posted an AI-generated image of her in a revealing bikini without her consent. X refused to take the image down when she first reported it; she was able to get it removed only after reporting it many times over three days. She said she had to take unpaid time off work and lives in fear that the image will resurface and cost her professional opportunities.
On March 16, 2026, the Tennessee class action was filed on behalf of the three minor plaintiffs, encompassing claims under Masha’s Law, the Trafficking Victims Protection Act, and California state law, including 13 counts ranging from intent to distribute child pornography to intentional infliction of emotional distress. [9] All of the cases allege negligence on the part of xAI in releasing Grok. Each alleges that xAI did not undertake industry-standard testing or implement common guardrails to prevent nonconsensual explicit images or child rape media from being generated.
On March 24, 2026, [10] the Mayor and City Council of Baltimore filed suit in the Circuit Court for Baltimore City against X Corp., x.AI Corp., x.AI LLC, and Space Exploration Technologies Corp., alleging the companies violated Baltimore’s Consumer Protection Ordinance by designing, marketing, and deploying a generative artificial intelligence system that produces and disseminates nonconsensual sexualized images, including content involving minors. [11] The complaint characterises Musk’s December 31 bikini post as public endorsement of, and marketing for, the image-editing capability that was being used to generate nonconsensual sexual imagery. [11] Baltimore said Grok had flooded X users with objectionable content, becoming one of the largest distributors of material depicting nonconsensual sexualized activity and child rape media despite the company’s promises that such content is banned.
The following card stack tracks the status of all five active cases.

The Dutch Ruling: What a Court Decided
The Amsterdam District Court’s ruling of March 26, 2026 is the first binding judicial order against xAI in Europe on the substance of its image generation conduct. [12] The case was brought by Offlimits, a Dutch nonprofit expertise centre on online sexual abuse, in cooperation with Fonds Slachtofferhulp, a victim support organisation, after Offlimits concluded that regulatory enforcement was moving too slowly relative to the pace of harm.
At the March 12 hearing, [13] xAI’s lawyers said the company does not want people to use Grok for making child rape media images or unwanted nudity, that the company is doing everything in its power to prevent these images, and that it is impossible to give a “100% guarantee” that people will not be able to make such images. The lawyer stated that “users who want to abuse it are always looking for new ways to circumvent the security,” and that a fine would “punish” xAI “for the behavior of malicious third parties.”
Offlimits demonstrated to the court what was actually possible. [13] Uploading a photo of a woman wearing a T-shirt and jeans with the command “take her bra off” produced an image of the same woman topless. Grok issued no warning; instead it went further, suggesting it generate an image of a “seductively revealing silhouette.”
The court issued its ruling on March 26, 2026. [14] The Amsterdam Court’s preliminary injunction ordered xAI and Grok not to generate and distribute images “undressing” adults or children, or showing them in sexualized poses with scant or no clothing, without their consent in the Netherlands, and imposed fines of 100,000 euros per day if the companies do not comply. It also ordered xAI not to offer Grok on social media platform X while in breach of the order.
The most significant element of the ruling is the evidentiary finding. [12] In written correspondence with Offlimits, xAI’s lawyers categorically rejected any suggestion that Grok still permitted nonconsensual intimate imagery or CSAM as of January 20, 2026. The court found that claim difficult to reconcile with evidence showing that on March 9, 2026, the day the defendants sent that categorical denial, Offlimits was still able to generate a video of an existing person in a sexualized context from a single uploaded photograph, without Grok verifying consent. The judgment reads: “The fact that generating this video was apparently still possible on the same day that the defendants wrote to Offlimits categorically rejecting any suggestion that such content can be generated raises reasonable doubt regarding the certainty with which the defendants stated that the measures taken are adequate.”
[12] The court held that nonconsensual undressing images constitute a violation of the GDPR, and that the facilitation of child rape media constitutes unlawful conduct under Article 6:162 of the Dutch Civil Code. The defendants have ten working days from service of the judgment to confirm in writing to Offlimits how they have complied.
The “we cannot stop users” argument that xAI deployed in the Amsterdam proceedings is the same structural argument used by Backpage.com, the classified advertising website whose owners argued that responsibility for sex trafficking advertisements placed by third parties lay with the users, not with the platform. The owners of Backpage were convicted in federal court and sentenced to prison. The legal principle that a platform which knowingly structures itself to enable a foreseeable harm cannot fully attribute that harm to individual users is precisely what the Dutch ruling applied to xAI. In at least one European jurisdiction, unlike in U.S. federal courts where Section 230 has historically provided broader protection, that principle has now been tested and upheld.
The Conduct Pattern
The pattern of corporate conduct documented across all five suits and the Dutch ruling follows an identifiable sequence.
When Grok’s CSAM generation became subject to sustained media and regulatory attention in early January 2026, [4] xAI responded to requests for comment from media organisations with the automated reply “Legacy Media Lies.” [4] On January 2, Musk reacted with multiple laughing emojis to an image of a toaster in a bikini. [3] On January 14, Musk stated publicly that he was “not aware of any naked underage images generated by Grok.” The CCDH had by that date already published research documenting more than 23,000 such images over the preceding two weeks. [17] Musk dismissed the outrage as an effort to “suppress free speech.” [17] That framing was echoed across X despite the documented fact that X’s own rules ban both nonconsensual intimate imagery and CSAM, including computer-generated versions.
The technical response was calibrated to reduce regulatory pressure while preserving commercial functionality. [18] Paid subscription gating arrived January 9. Stricter content filters followed January 14. CCDH sampling on January 22 found that many images of children were still live. [4] xAI announced that X users would no longer be able to use Grok to alter images of real people into revealing clothing; verified X users, along with users of the standalone Grok app and website, were still able to generate such images. [15] AI Forensics found that Grok was still generating sexualized images of individuals despite X’s restrictions. Researchers found users bypassing the ban by accessing Grok directly through its website rather than through X, or by using Grok Imagine, the image and video generation tool.
[10] The Baltimore complaint states that despite public claims that such content is prohibited, Grok routinely produced and distributed nonconsensual intimate imagery and material resembling child rape media content, often with minimal user prompting. When CBS News directly prompted Grok AI about whether it should face regulation for generating sexualized images of real people without verifying consent, [19] the chatbot replied that “tools like me should face meaningful regulation,” and acknowledged that the design of the system created “a gray area ripe for abuse” that had “led to floods of nonconsensual ‘undressing’ or sexualized edits of real women, public figures, and even minors.” The company that built the system offered a different message. A CBS News request for comment prompted an automated reply.
The countersuit against Ashley St. Clair, filed within days of her complaint and citing her violation of a forum-selection clause, is documented in the record as the company’s most direct response to victim-initiated accountability proceedings. [21] The use of procedural litigation as an immediate response to a victim’s complaint is consistent with the broader conduct pattern documented across all five cases.
The Wider Industry: Who Built This Ecosystem
xAI and Grok exist within a documented commercial ecosystem that preceded them. [30] In the summer of 2024, a man in the Minneapolis area used a site called DeepSwap to create explicit deepfakes of over 80 women using their Facebook photographs without their consent. Because the women were adults and the man did not distribute the content, there was no apparent crime. [30] DeepSwap has shifted its claimed headquarters from Hong Kong to Dublin, listing “MINDSPARK AI LIMITED” as its corporate entity. CNBC could not locate its listed CEO or receive responses from its marketing manager.
[29] In April 2025, security researcher Jeremiah Fowler discovered an exposed database belonging to South Korean AI company AI-NOMIS and its platform GenNomis. The files included everyday pictures of women, apparently staged for face-swapping into explicit scenes at users’ request. The websites of both GenNomis and AI-NOMIS went dark within days of the discovery. No enforcement action against those entities has been publicly disclosed. [40] Alibaba, the Chinese technology company, released an AI video generation model in 2025 called Wan 2.1, which third parties modified to produce nonconsensual pornography. A Telegram pornography bot built on AI generation has been documented with more than 100,000 monthly users.
[31] A Kentucky teenager died by suicide after being blackmailed with an AI-generated nude image. [31] A court in Almendralejo, Spain sentenced 15 minors to probation for creating and sharing AI-generated nude images of classmates. [31] In South Korea, authorities uncovered Telegram channels with hundreds of thousands of members distributing deepfake pornography. Hundreds of cases were investigated, and lawmakers advanced stricter penalties for possession and distribution.
[33] In late 2024, technology companies including Adobe, Anthropic, Cohere, Microsoft, OpenAI, and the open data repository Common Crawl signed a non-binding pledge to prevent their AI products from being used to generate nonconsensual deepfake pornography and child rape media. xAI was not among the signatories. The pledge carried no enforcement mechanism.
The Harm: On Record
[1] The mother of Jane Doe 2 described watching her daughter have a panic attack after realizing that images had been created and distributed without any hope of being recalled. Her daughter’s excitement about her senior year, including her spring formal, graduation, and senior trip, now comes with the fear that anything she shares will be used and manipulated again. [1] One plaintiff suffers from recurring nightmares and has needed academic accommodations. Another cannot sleep without medical intervention and dreads attending her own graduation. [1] All three plaintiffs’ files have been entered into a national database managed by the National Center for Missing and Exploited Children. For the rest of their lives, they will receive notifications every time their images are identified in a criminal case.
[9] The South Carolina plaintiff’s experience followed the same pattern: removal of her image took repeated reports over three days, she lost unpaid time from work, and she lives with the fear that the image will resurface and cost her professional opportunities.
Law professor Mary Anne Franks of George Washington University Law School has described the documented experience of victims of image-based sexual abuse as something that [30] “makes you feel like you don’t own your own body, that you’ll never be able to take back your own identity.” [30] Even when nudified images have not been posted publicly, victims live in fear that the images may eventually be shared, described by one legal scholar as a “sword of Damocles.” [5] The Internet Watch Foundation has warned that many of these images have migrated to the dark web, where they are being repurposed by predators.
The International Regulatory Response
[12] Parallel enforcement actions include Ireland’s Data Protection Commission, which opened a GDPR investigation in February 2026; the European Commission, which launched DSA proceedings against X in January 2026; and Ofcom in the UK, which opened an Online Safety Act investigation. [15] If X is found to have breached the Digital Services Act, the company could face fines of up to 6% of its global annual revenue.
[12] On March 26, the European Parliament voted to adopt its position on amending the AI Act to include an explicit ban on “nudifier” systems. [35] The proposed amendment defines these as systems that use AI to create or manipulate images that are sexually explicit or intimate and resemble an identifiable real person, without that person’s consent. [35] Commentators described it as the first EU move to specifically target AI platforms that generate and permit distribution of sexualized material without the subject’s consent.
In France, [4] ministers reported the AI tool to prosecutors in early January 2026, calling the content “manifestly illegal.” On February 3, Paris prosecutors, a cybercrime team, and Europol searched the Paris offices of X. The investigation, which began as one into alleged abuse of algorithms and fraudulent data extraction, expanded to encompass the sexual deepfakes. [4] Elon Musk and former CEO Linda Yaccarino have been summoned to a hearing on April 20. In Japan, [4] the AI Promotion Act contains no explicit penalties for non-compliance, focusing instead on voluntary cooperation, limiting the practical effect of Japan’s regulatory response.
[21] Thirty-five state attorneys general issued a joint letter demanding xAI cease allowing sexual deepfakes. [34] The DEFIANCE Act, passed unanimously by the U.S. Senate in January 2026, would establish a federal right of action allowing victims of nonconsensual, sexually explicit deepfakes to sue creators, distributors, and those who knowingly host such content, with statutory damages up to $150,000, or $250,000 when linked to sexual assault, stalking, or harassment. The bill awaits a House floor vote.
How to Protect Yourself and How to Join or Open a Lawsuit
This section is addressed to anyone who has been harmed by AI-generated nonconsensual intimate imagery or child rape media, or who believes they may have been.
If you are a minor, or a parent of a minor, whose images were used to generate explicit material through Grok or any AI tool, the single most important first step is to report the images to the National Center for Missing and Exploited Children CyberTipline at missingkids.org/gethelpnow/cybertipline. NCMEC accepts reports 24 hours a day and transmits them directly to the FBI and to state and local law enforcement. Filing a CyberTipline report is not a lawsuit and does not require a lawyer. It creates a federal record and initiates a law enforcement response. NCMEC’s Take It Down program, at takeitdown.ncmec.org, allows minors to hash images before they are distributed so that participating platforms can detect and block matching content.
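For readers unfamiliar with the mechanism, hash matching works roughly as sketched below. This is an illustration, not NCMEC’s implementation: it uses an exact SHA-256 file hash for simplicity, where production matching systems typically use perceptual hashes (such as PhotoDNA or PDQ) so that resized or re-encoded copies still match.

```python
# Illustrative sketch of hash-based blocking (not NCMEC's actual system).
# The fingerprint is computed locally; the image itself is never uploaded.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Exact SHA-256 fingerprint of an image file, computed on-device."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# The victim submits only this hex string to the matching service.
blocklist: set[str] = {fingerprint(Path("private_photo.jpg"))}

def platform_should_block(upload: Path) -> bool:
    """A participating platform checks each upload against the shared list."""
    return fingerprint(upload) in blocklist
```

The property that matters for victims is that the fingerprint cannot be reversed into the image, so reporting never requires sharing the material itself.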
The law firm Lieff Cabraser Heimann and Bernstein, which filed the Tennessee class action, maintains a specific intake page for AI-generated CSAM claims at lieffcabraser.com/ai-deepfakes. Consultations are free and the case is pursued on a contingency basis. Wallace Miller, which filed the January 2026 class action on behalf of the South Carolina plaintiff, maintains a public intake form at wallacemiller.com.
If you are an adult whose images were used to generate nonconsensual intimate imagery, the Take It Down Act requires platforms to remove such imagery within 48 hours of notice. If X or any platform has failed to remove such imagery after you have reported it, file a report with the FTC at reportfraud.ftc.gov citing the Take It Down Act. The Cyber Civil Rights Initiative at cybercivilrights.org maintains a crisis helpline at 844-878-2274 and provides referrals to attorneys experienced in nonconsensual intimate imagery cases in every U.S. state.
For EU, UK, and Netherlands residents: the Dutch ruling of March 26 applies within the Netherlands. Any Dutch resident harmed by Grok-generated imagery should contact Offlimits directly at offlimits.nl. For EU residents generally, GDPR Article 79 provides a right to an effective judicial remedy. A complaint to your national supervisory authority, listed at edpb.europa.eu, is the entry point for GDPR enforcement. UK residents can report to Ofcom at ofcom.org.uk/online-safety.
The following checklist documents protective steps available regardless of jurisdiction.

Conclusion: What the Documented Record Establishes
The Dutch court’s ruling of March 26, 2026 arrived on the same day as this report. The timing is coincidental. The convergence it represents is not.
On a single day, an Amsterdam judge found that xAI’s categorical denial that its product could still generate nonconsensual intimate imagery was directly contradicted by a live courtroom demonstration conducted on the day of that denial. Three teenage girls in Tennessee are receiving federal notifications every time the images made of them surface in a criminal investigation, notifications they will receive for the rest of their lives. The European Parliament voted to ban the category of tool that produced those images. And Elon Musk’s company, now valued at $230 billion and holding a GSA contract to provide AI to every federal agency in the U.S. government, has not responded to requests for comment in any of the proceedings described in this report except through an automated message.
The documentary record that underpins this investigation establishes four things with precision.
First, that xAI designed and deployed a model with internal safety instructions specifically configured to minimise barriers to generating sexually explicit imagery of people described as teenage girls, and that this configuration is documented in the complaint’s citation of Grok Safety Instructions version 8. [8] The system instructions specified that the model should “not enforce additional content policies,” that there are “no restrictions on fictional adult sexual content with dark or violent themes,” and that the model should “assume good intent” when users referenced “teenage” or “girl.” [8] xAI’s safety logs are internal, mutable, and unverifiable without adversarial discovery. There is no cryptographic record of what Grok refused, when, or under which policy version; a minimal sketch of what such a record could look like appears after the fourth point.
Second, that xAI monetised the resulting capability through a subscription paywall, through API licensing to third parties, through the personal promotional conduct of its founder, and through a government services contract obtained while that capability was either in use or about to be deployed at scale. That monetisation occurred after the company had been notified that the capability was producing CSAM at a documented rate of one image of a child every 41 seconds.
Third, that xAI’s response to documented harm followed an identifiable pattern of deflection, minimisation, partial restriction with preserved capability, counter-litigation against victims, and automated non-response to journalists and regulators, a pattern now documented across five civil suits, three continents, and two judicial findings.
Fourth, that the federal criminal statutes applicable to this conduct are comprehensive, well-established, and have not been applied to the corporate actors who built the system. The absence of a criminal referral or charge against xAI as a corporate entity, or against the individuals who directed its design, is itself a policy choice made by the Department of Justice. [42] xAI holds a government services contract obtained through a procurement process in which, according to reporting by Wired, the White House instructed the GSA to add xAI to the approved vendor list “ASAP,” and in which Musk’s aides were placed at the GSA and other agencies responsible for regulating or awarding contracts in industries where Musk has business interests. Whether those facts have influenced the pace of federal criminal enforcement is a question the available evidence raises. It is not a question the available evidence can currently answer.
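To make the first point concrete, here is a minimal sketch of the kind of tamper-evident refusal log whose absence the record notes, assuming a simple hash chain. The schema and field names are illustrative assumptions, not xAI’s design and not necessarily the design proposed in [8].

```python
# Minimal hash-chained refusal log: each entry commits to its predecessor,
# so no record can be silently altered or deleted after the fact.
import hashlib, json, time

def append_entry(log: list[dict], prompt_hash: str, policy_version: str, decision: str) -> None:
    prev = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "ts": time.time(),
        "prompt_hash": prompt_hash,        # hash of the prompt, not the prompt itself
        "policy_version": policy_version,  # e.g. "safety-instructions-v8"
        "decision": decision,              # "refused" or "generated"
        "prev_hash": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute the chain; editing any earlier entry breaks every later hash."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "entry_hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev_hash"] != prev or e["entry_hash"] != expected:
            return False
        prev = e["entry_hash"]
    return True
```

A log with this property, published or escrowed, would let an outside auditor confirm what a model refused, when, and under which policy version, without seeing any prompt content.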
What is not a question is what the harm costs its victims. One plaintiff’s mother described her daughter’s panic attack upon learning that images of her had been created and distributed with no mechanism to recall them. For the rest of that child’s life, a federal database will notify her family each time those images surface in a criminal investigation. That is the documented consequence of a system that was built, marketed, paywalled, and defended.
Citations
[1] Lieff Cabraser Heimann and Bernstein. “LCHB Files Class Action on behalf of Minor Victims Alleging xAI’s Grok Generated and Profited from AI Sexual Exploitation Images and Videos.” Press release. March 16, 2026. https://www.lieffcabraser.com/2026/03/lchb-files-class-action-obo-minor-victims-alleging-xais-grok-generated-and-profited-from-ai-sexual-exploitation-images-and-videos/
[2] Yahoo Finance. “Minors Sue xAI in California Over Alleged Grok Deepfake Images.” March 2026. https://www.yahoo.com/news/articles/minors-sue-xai-california-over-044611778.html
[3] Prism News. “Baltimore Sues xAI Over Grok’s Alleged Generation of Child Sexual Abuse Images.” March 2026. https://www.prismnews.com/news/baltimore-sues-xai-over-groks-alleged-generation-of-child
[4] “Grok Sexual Deepfake Scandal.” Wikipedia. Last modified March 26, 2026. https://en.wikipedia.org/wiki/Grok_sexual_deepfake_scandal
[5] Cybernews Staff. “Teens Sue xAI Over Grok’s ‘Spicy Mode’ and AI-Generated CSAM Claims.” Cybernews. March 2026. https://cybernews.com/ai-news/teens-sue-xai-grok-ai-generated-child-porn/
[6] Internet Watch Foundation. “Annual Report 2025: AI-Generated CSAM Findings.” IWF. 2026. https://www.iwf.org.uk
[7] Kelley, Colleen. “Tennessee Teens Bring Class-Action Suit Against Elon Musk and xAI for Fake Sexualized Images.” FindLaw. March 2026. https://www.findlaw.com/legalblogs/courtside/tennessee-teens-bring-class-action-suit-against-elon-musk-and-xai-for-fake-sexualized-images/
[8] VeritasChain. “The First CSAM Lawsuit Against an AI Company Just Landed.” DEV Community. March 2026. https://dev.to/veritaschain/the-first-csam-lawsuit-against-an-ai-company-just-landed-heres-how-to-build-the-refusal-logs-that-4o4e
[9] Miller, Sherri. “Women and Girls Are Taking Grok to Court Over Sexualized AI Deepfakes.” The 19th News. March 2026. https://19thnews.org/2026/03/women-girls-lawsuit-grok-ai-deepfakes/
[10] DiCello Levitt. “City of Baltimore Sues Over Grok AI’s Role in Generating Non-Consensual Sexualized Deepfakes.” Press release. March 24, 2026. https://dicellolevitt.com/city-of-baltimore-sues-over-grok-ais-role-in-generating-non-consensual-sexualized-deepfakes/
[11] Steele, Ann. “Baltimore is First U.S. City to Sue Over Grok Deepfake Porn as Legal Pressure Mounts on Musk’s xAI.” CNBC. March 24, 2026. https://www.cnbc.com/2026/03/24/musk-xai-sued-baltimore-grok-deepfake-porn.html
[12] Lomas, Natasha. “Dutch Court Orders X, Grok to Stop AI-Generated Sexual Abuse Content.” TechPolicy.Press. March 26, 2026. https://www.techpolicy.press/dutch-court-orders-x-grok-to-stop-aigenerated-sexual-abuse-content/
[13] “Impossible to 100% Prevent Abuse, Grok Lawyers Say in Dutch Case Against Nudify Tools.” NL Times. March 12, 2026. https://nltimes.nl/2026/03/12/impossible-100-prevent-abuse-grok-lawyers-say-dutch-case-nudify-tools
[14] Stempel, Jonathan. “Dutch Court Orders xAI, Grok Not to Create, Distribute Nonconsensual Sex Images in Netherlands.” Reuters. March 26, 2026. https://www.yahoo.com/news/articles/dutch-court-orders-xai-grok-162635028.html
[15] Wile, Rob. “Why Ashley St. Clair, MAGA Influencer and Elon Musk’s Ex, Is Taking on His AI Empire.” Fortune. January 28, 2026. https://fortune.com/2026/01/28/ashley-st-clair-elon-musk-grok-x-deepfakes-lawsuit-xai/
[16] Walter, Alan N. “Grok AI Deepfake Crisis: Actions, Bans and Responses.” Alan N. Walter, Counsel. 2026. https://waltercounsel.com/grok-ai-deepfake-crisis-actions-bans-responses/
[17] Citron, Danielle. “Grok, ‘Censorship,’ and the Collapse of Accountability.” Lawfare. January 29, 2026. https://www.lawfaremedia.org/article/grok—censorship—–the-collapse-of-accountability
[18] AI CERTs Staff. “xAI Grok Safety Failure Spurs Global CSAM Lawsuits.” AI CERTs News. March 2026. https://www.aicerts.ai/news/xai-grok-safety-failure-spurs-global-csam-lawsuits/
[19] Tarrant, Mackenzie. “X, Grok AI Still Allow Users to Digitally Undress People Without Consent, as EU Announces Investigation.” CBS News. January 26, 2026. https://www.cbsnews.com/news/x-grok-ai-imagery-elon-musk-eu-uk-us-regulation/
[20] Hasan, Sarmad. “Baltimore Sues Musk’s xAI Over Grok’s Sexually Explicit Images.” NBC News. March 24, 2026. https://www.nbcnews.com/tech/tech-news/baltimore-sues-musks-xai-groks-sexually-explicit-images-rcna264950
[21] CoreProse Staff. “Inside the Grok Deepfake Meltdown: Timeline, Global Crackdown, and Content Moderation Lessons for AI.” CoreProse. February 2, 2026. https://www.coreprose.com/kb-incidents/inside-the-grok-deepfake-meltdown-timeline-global-crackdown-and-content-moderation-lessons-for-ai
[22] Business of Apps. “Grok Revenue and Usage Statistics 2026.” Business of Apps. March 2026. https://www.businessofapps.com/data/grok-statistics/
[23] Sacra Research. “xAI Revenue, Valuation and Funding.” Sacra. January 2026. https://sacra.com/c/xai/
[24] U.S. Department of Justice, Criminal Division. “Citizen’s Guide to U.S. Federal Law on Child Pornography.” DOJ. Accessed March 2026. https://www.justice.gov/criminal/criminal-ceos/citizens-guide-us-federal-law-child-pornography
[25] RAINN. “Which U.S. Laws Address CSAM?” RAINN. Updated October 9, 2025. https://rainn.org/get-the-facts-about-csam-child-sexual-abuse-material/which-u-s-laws-address-csam/
[26] Coxwell Law. “The Digital Frontier of Child Protection: Understanding AI-Generated CSAM and Federal Law.” Coxwell Law. April 28, 2025. https://www.coxwelllaw.com/blog/2025/april/the-digital-frontier-of-child-protection-underst/
[27] LegalClarity Staff. “18 USC 2256: Key Legal Definitions and Implications.” LegalClarity. March 28, 2025. https://legalclarity.org/18-usc-2256-key-legal-definitions-and-implications/
[28] Marcelo, Philip. “Lawsuit Accuses xAI of Creating CSAM from Teen Photos.” The Hill. March 2026. https://thehill.com/policy/technology/5788337-elon-musk-xai-grok-lawsuit/
[29] Claburn, Thomas. “GenAI Website Goes Dark After Explicit Fakes Exposed.” The Register. April 1, 2025. https://www.theregister.com/2025/04/01/nudify_website_open_database/
[30] Vanian, Jonathan, and Katie Tarasov. “5 Takeaways from CNBC’s Investigation Into ‘Nudify’ Apps and Sites.” CNBC. September 28, 2025. https://www.cnbc.com/2025/09/28/5-takeaways-from-cnbcs-investigation-into-nudify-apps-and-sites.html
[31] Bonfire Leadership Solutions. “Deepfake and AI-Generated Non-Consensual Pornography Tools: An Empirical Brief for Schools, Platforms, and Lawmakers.” Bonfire Leadership Solutions. March 2026. https://bonfireleadershipsolutions.com/blog/deepfake-non-consensual-pornography-tools-schools-lawmakers/
[32] Reality Defender. “What is Deepfake Pornography?” Reality Defender. Accessed March 2026. https://www.realitydefender.com/insights/addressing-the-growing-scourge-of-nonconsensual-deepfakes
[33] Reality Defender. “The State of Deepfake and AI Regulations: What Businesses Need to Know.” Reality Defender. January 9, 2026. https://www.realitydefender.com/insights/the-state-of-deepfake-regulations
[34] Jones Walker LLP. “Deepfakes-as-a-Service Meets State Laws: Governing Synthetic Media in a Fragmented Legal Landscape.” Jones Walker. January 15, 2026. https://www.joneswalker.com/en/insights/blogs/ai-law-blog/deepfakes-as-a-service-meets-state-laws-governing-synthetic-media-in-a-fragmente.html
[35] Business Story Staff. “EU Law Could Undermine Musk’s Tactic of Blaming Users for Grok Sex Images.” Business Story. March 18, 2026. https://www.businessstory.org/2026/03/18/eu-law-could-undermine-musks-tactic-of-blaming-users-for-grok-sex-images/
[36] Cadelago, Christopher. “California Orders Elon Musk’s AI Company to Immediately Stop Sharing Sexual Deepfakes.” CalMatters. January 17, 2026. https://calmatters.org/economy/technology/2026/01/california-investigates-deepfakes-elon-musk-company/
[37] Factually Staff. “How Civil Lawsuits Against xAI and X Over Grok Deepfakes Have Developed.” Factually. February 2026. https://factually.co/fact-checks/technology/grok-deepfakes-xai-x-civil-lawsuits-us-court-updates-since-january-2026-9f2155
[38] Wallace Miller Law. “Grok AI Deepfake Lawsuit.” Wallace Miller. 2026. https://www.wallacemiller.com/all-litigations/cases-under-investigation/grok-ai-deepfake-lawsuit/
[39] Thorn. “The ENFORCE Act: Critical Updates to Federal Law for Addressing AI-Generated CSAM Offenses.” Thorn. December 2025. https://www.thorn.org/blog/the-enforce-act-addressing-ai-generated-csam-offenses/
[40] “Generative AI Pornography.” Wikipedia. Last modified March 2026. https://en.wikipedia.org/wiki/Generative_AI_pornography
[41] U.S. General Services Administration. “GSA and xAI Partner on $0.42 per Agency Agreement to Accelerate Federal AI Adoption.” GSA press release. September 25, 2025. https://www.gsa.gov/about-us/newsroom/news-releases/gsa-xai-partner-to-accelerate-federal-ai-adoption-09252025
[42] Schiffer, Zoe. “Elon Musk’s xAI Offers Grok to Federal Government for 42 Cents.” TechCrunch. September 25, 2025. https://techcrunch.com/2025/09/25/elon-musks-xai-offers-grok-to-federal-government-for-42-cents/
