EU Issues €120m Fine to X Under Digital Services Act, Musk Has Public Meltdown


The European Commission’s decision to impose a €120 million fine on X, formerly Twitter, marks the first significant enforcement action taken under the Digital Services Act. The penalty followed a detailed investigation into the platform’s verification model, its handling of political and commercial advertising transparency, and the level of data access provided to independent researchers. Officials concluded that X had not met several of the Act’s core requirements, which were introduced to protect information integrity, increase transparency, and ensure that large online platforms operate with appropriate levels of public accountability.

In response, Elon Musk publicly called for the abolition of the European Union, claiming that sovereignty should return to individual nations so governments can “better represent their people.” While he frequently presents himself as a defender of “free speech”, he has consistently demonstrated the opposite through his willingness to comply with censorship demands issued by authoritarian governments in Turkey, India and China, particularly when his commercial interests or operating licences depend on cooperation. Analysts argue that this selective approach to regulation undermines his stated principles and raises questions about whether his resistance to the EU reflects a commitment to free speech or an opposition to democratic oversight specifically.

By calling for the abolition of the EU over a regulatory decision, Musk is openly undermining one of the world’s largest democratic institutions. The European Union’s governance structure is built on treaties ratified by democratic member states, parliamentary representation and legal mechanisms designed to protect fundamental rights. By portraying the Union as illegitimate or unnecessary, Musk is taking a stand against democracy and the rule of law.

According to the Commission, the failures identified during the investigation demonstrated a systemic inability or unwillingness to comply with obligations that are considered essential for safeguarding the reliability of digital communication within the European Union. European officials argue that the goal of the DSA is not to police speech, but to create a structural framework that prevents the manipulation of democratic discourse. Reduced verification allows for easier impersonation. Weak advertising transparency obscures funding sources. Limited researcher access diminishes the capacity to detect coordinated networks. In combination, these factors allow influence campaigns, whether officially sanctioned or carried out through aligned proxies, to operate with greater freedom inside Europe’s digital sphere.

Misleading verification design

The European Commission identified X’s revised verification system as a central deficiency under the Digital Services Act. Regulators said that allowing any user to purchase a blue check without confirming identity created a misleading signal of authenticity and weakened a mechanism that previously helped distinguish legitimate accounts from impersonators.

According to the Commission, the erosion of identity controls has direct implications for public trust, particularly during fast moving events in which authoritative information is essential. When verification markers no longer represent verified identity, hostile entities gain additional freedom of manoeuvre. State linked actors can acquire the appearance of credibility at minimal cost and deploy influence campaigns that resemble genuine public sentiment. The impersonation of officials, journalists, or critical infrastructure authorities becomes more feasible, raising the risk of confusion in moments when clarity is critical.

The rapid dissemination of false instructions, fabricated alerts or counterfeit emergency messages becomes significantly easier under such conditions. The result is a degradation of information integrity, which officials describe as a core component of national resilience and an essential safeguard in the wider European security environment.

Advertising transparency failures

Regulators also concluded that X failed to comply with the Digital Services Act’s transparency requirements for political and commercial advertising. The platform’s ad library, which is legally required to disclose who paid for advertisements, what audiences were targeted and what content was promoted, did not contain the level of detail mandated by the legislation.

The Commission said this shortfall made it significantly more difficult to determine the provenance of paid content and to assess whether covert influence operations were underway. Analysts note that opaque advertising systems create distinct vulnerabilities in the information environment. When authorities cannot identify who is financing or directing targeted campaigns, it becomes far harder to detect coordinated foreign interference or organised networks attempting to manipulate public debate. Microtargeting techniques, which rely on granular audience segmentation, can be used to deepen political divisions, weaken trust in institutions or suppress electoral participation.

Covert funding arrangements can disguise state backed or politically motivated actors behind the façade of ordinary user accounts. In the absence of robust advertising transparency, such activity becomes more difficult to identify at an early stage. The loss of visibility into these patterns removes key indicators that intelligence and regulatory bodies depend upon to expose malign operations before they escalate.

Restricting researcher access

EU investigators also determined that X restricted researcher access to key categories of public data, a finding that constituted another breach of the Digital Services Act. The legislation requires large platforms to provide qualified researchers with the information necessary to study systemic risks, including the spread of misinformation and the behaviour of coordinated networks. According to the Commission, the access limitations introduced by X significantly reduced the capacity for independent scrutiny. Research organisations, which have historically played a vital role in identifying botnets, mapping coordinated inauthentic behaviour and analysing foreign information warfare operations, reported that the data needed to conduct such assessments had become either inaccessible or unreliable.

Removing independent researchers from the information environment creates a notable intelligence gap. Without outside scrutiny, malign actors can operate with a lower risk of detection or attribution, and emerging narratives or threat vectors may remain unidentified until they have already gained considerable traction. The broader analytic ecosystem, which typically includes academic institutions, civil society monitors and technical specialists, becomes weaker when data access is impeded. The result is a diminished national and regional capacity to chart the scale and evolution of digital threats.

European officials emphasise that these shortcomings, taken together, degrade information integrity and situational awareness. The reduction in identity safeguards, transparency mechanisms and researcher visibility narrows the ability of governments and independent analysts to track adversarial activity in real time. In such an environment, foreign intelligence services, extremist organisations and criminal networks are better positioned to operate without immediate detection. Regulators argue that the combination of these factors represents a systemic risk to both the reliability of public information and the resilience of democratic systems.

Strategic Interest in Limiting EU Regulatory Power

The Digital Services Act has emerged as one of the most extensive regulatory frameworks governing large technology platforms, and its scope allows the European Union to shape global standards on transparency, data access and platform accountability. This has led some governments to view the law as an expansion of European regulatory influence beyond the continent’s borders. In the United States, critics argue that a robust EU framework imposes additional constraints on US technology companies and, by extension, reduces Washington’s capacity to influence and control the global information environment in line with its own priorities. In Russia and Israel, European regulatory assertiveness is sometimes interpreted as an attempt to extend European norms into international digital spaces in ways that may not align with local political or security interests.

More broadly, several governments maintain that stronger EU oversight strengthens a supranational body that has often scrutinised their human rights violations, erosion of the rule of law and authoritarian digital surveillance. Governments that have historically engaged in influence operations or information campaigns aimed at European audiences have a clear strategic interest in a regulatory environment that remains fragmented or weakly enforced. The Digital Services Act was designed in part to close the gaps that such actors have exploited for more than a decade. By requiring platforms to authenticate identities, disclose political advertising and provide researchers with access to key data, the DSA reduces the opportunities for covert influence to operate undetected. When these measures are not fully implemented, the information space becomes more permissive and more difficult for European authorities to monitor.

The United States, Israel and Russia have long used digital platforms to shape public sentiment in Europe, a conclusion supported by multiple intelligence assessments from EU member states and independent research institutions. These efforts typically rely on anonymity, networked amplification and the strategic use of advertising tools. A platform without strong verification mechanisms or robust ad transparency allows these techniques to function with far less risk of exposure. For actors seeking to maintain plausible deniability, the absence of systematic oversight is advantageous.

In the United States, authoritarian right wing political factions aligned with a more deregulatory approach argue that European rules place undue burdens on massive multi-billion dollar US technology firms, many of which hold contracts with the US military and intelligence agencies. A lack of enforcement can provide fertile conditions for hostile entities to target European audiences without the disclosure requirements that would otherwise apply. European regulators have repeatedly stated that opaque advertising systems allow foreign entities to reach EU citizens in ways that are difficult to track.

Israel’s government, which frequently contests European positions on regional security and human rights issues, also communicates directly with European publics through influence campaigns, propaganda and disinformation. Governments engaged in contested information environments often prefer platforms where attribution is difficult and oversight is limited. When access for independent researchers is restricted and advertising data is incomplete, it becomes significantly harder for external observers to determine whether narrative campaigns originate with private individuals, interest groups or state linked actors.

Across all three cases, the underlying dynamic is similar. Influence campaigns, whether state directed or carried out by affiliated networks, depend on the ability to mask origins, manipulate reach and operate across multiple accounts. The DSA was designed to constrain these methods by increasing transparency and by giving researchers and regulators the tools necessary to map information flows. When enforcement fails, the conditions that support covert influence operations remain intact. In practical terms, reduced regulatory pressure preserves an environment in which malign actors can shape democratic discourse for their own benefit, at relatively low cost and with limited risk of exposure.

Opposition to the European Union carries political value within several domestic contexts. In the United States, figures aligned with Donald Trump have frequently portrayed the EU as an intrusive or anti-American institution, a stance that appeals to nationalist and fascist audiences within his political base. In Russia, the Kremlin has long framed European institutions as overreaching or biased, using criticism of Brussels to reinforce internal narratives that depict the West as hostile to Russian interests. Israel’s current government, led by Benjamin Netanyahu, has often disputed European criticism on matters of security, human rights, and regional policy, and public defiance of EU positions tends to resonate with domestic supporters who believe outside pressure undermines national sovereignty.

Political Alignment with Musk’s Free Speech Branding

Elon Musk has presented X as a platform with minimal content restrictions and an emphasis on what he describes as “unrestricted speech”. Independent analyses of X since Musk’s takeover have found that the platform’s actual moderation practices diverge from its public presentation as a venue for unrestricted speech. While Musk has repeatedly stated that X is committed to maximal free expression, several documented policy changes and technical interventions indicate that the platform engages in forms of content limitation that critics describe as selective or opaque. Researchers at multiple institutions, including the University of Washington and the Center for Countering Digital Hate, have reported that X continues to remove posts, limit account visibility and apply algorithmic “deboosting” to certain users or topics, often without clear explanation or transparent appeals processes. The platform has also acknowledged the existence of visibility filters, though it argues they are necessary for managing harmful or low quality content.

Investigations by media organisations have shown that X has restricted or downgraded the reach of accounts across a wide political spectrum, sometimes affecting journalists, civil society organisations and government critics. In several documented cases, users reported substantial reductions in engagement after posting material critical of governments with which Musk was publicly negotiating or maintaining commercial ties. X has not disclosed the criteria used for such downranking, and the opacity surrounding these decisions has led analysts to conclude that the platform retains broad discretionary control over what circulates widely and what does not.

Regulatory bodies in Europe and academic observers have argued that these practices amount to a form of selective moderation rather than the absence of moderation. In particular, researchers point to instances in which content critical of certain authoritarian or semi-authoritarian governments appears to have been restricted, while material from state aligned accounts in those same countries continued to circulate without similar impediments. Such patterns have been noted in relation to Turkey, India and China, states where Musk’s companies have significant business interests and where governments have historically demanded compliance from foreign technology firms. X has complied with a number of takedown requests from these governments, even when they involved political speech. While the platform promotes itself as a defender of free speech, its operational decisions reveal a willingness to limit speech when doing so protects commercial interests, satisfies regulatory demands from powerful states or reduces conflict with governments that hold leverage over Musk’s business operations. In practice, this means that speech may be free only to the extent that it does not jeopardise relationships with governments capable of imposing legal or economic consequences.

Musk as a Useful Counterweight to Democratic Safeguards

Musk’s global profile and his repeated opposition to regulatory oversight have positioned him as a prominent challenger to institutional authority; however, that challenge is reserved for democratic states. When it comes to authoritarian states, Musk has proven time and again that he will defer to their demands without hesitation. His criticism of the European Union has been a consistent feature of his public commentary, placing him at odds with governments and organisations that advocate stronger digital security and safeguards for democratic discourse.

For authoritarian states that view international regulatory frameworks as constraints on their domestic or geopolitical ambitions, a high profile private actor who disputes those frameworks can serve as an effective counterweight. Musk’s statements and political associations have also created a perception of ideological proximity between him and authoritarian governments, particularly those that favour deregulation or express scepticism toward democratic bodies and international law.

  1. European Commission, “Commission Fines X €120 Million under the Digital Services Act,” Press Release, December 5, 2025.
  2. European Commission, “Digital Services Act: Ensuring a Safer and More Accountable Online Environment,” Publications Office, 2022.
  3. “Elon Musk’s X Fined €120m by EU in First Clash under New Digital Laws,” The Guardian, December 5, 2025.
  4. “EU Should Be Abolished, Elon Musk Says after Fine against X,” Vanguard, December 6, 2025.
  5. “EU Fines X €120 Million for Breaching Digital Services Act Obligations,” Reuters, December 5, 2025.
  6. “EU Regulators Hit Elon Musk’s X with €120 Million Fine,” CityNews Halifax, December 5, 2025.
  7. Center for Countering Digital Hate, “Toxic Twitter: How Changes to Twitter’s Policies Have Increased Hate and Abuse,” CCDH Report, 2023.
  8. University of Washington Center for an Informed Public, “Trends in Platform Moderation and Visibility Filtering After Musk’s Acquisition of Twitter,” Research Brief, 2023.
  9. Freedom House, Freedom on the Net: Turkey, 2023.
  10. Freedom House, Freedom on the Net: India, 2023.
  11. Freedom House, Freedom on the Net: China, 2023.
  12. European Parliament, “Disinformation and Foreign Influence in the EU: State Actors and Strategic Operations,” EPRS, 2022.
  13. NATO StratCom Centre of Excellence, “Foreign Influence Operations and the European Information Environment,” 2021.
  14. Oxford Internet Institute, “Industrialized Disinformation: State-Backed Manipulation of Social Media in 81 Countries,” 2022.
  15. Carnegie Endowment for International Peace, “Digital Authoritarianism and the Global Battle for Democracy,” 2021.
  16. Brookings Institution, “Political Advertising Transparency and the Risks of Hidden Influence,” 2020.
  17. RAND Corporation, “The Weaponization of Information: Foreign Interference and Online Manipulation,” 2021.
  18. European External Action Service (East StratCom Task Force), EUvsDisinfo Annual Report, 2022.
