A European Threat to Freedom of Speech and Scientific Debate

The effects of the European Union’s Digital Services Act (DSA) on freedom of speech have been a topic of heated debate in Washington and beyond.
The DSA, which was passed in 2022 and came largely into force by mid-2023, is the EU’s flagship online regulatory legislation. It is supposed to ensure a “safe online environment,” but to this end requires online platforms—in particular, social media platforms—to implement robust “content moderation” measures.
In other words, they must delete content and/or accounts or otherwise suppress their visibility. The latter can be done, for example, by restricting the shareability of posts. Platforms whose “content moderation” efforts are found wanting by the European Commission risk fines of up to 6 percent of their global revenues.
Given that the Internet is global, the issue, as raised, for instance, by the House Judiciary Committee in a recent interim staff report on the DSA, is whether the required “content moderation” implies censorship not just of Europeans, but indeed of the entire world, including Americans. The role of what the law dubs “trusted flaggers” has been a particular matter of concern.
The law requires online platforms to maintain “notice and action” mechanisms that allow users to flag illegal content for removal, with “illegal” here referring to EU law and the laws of each of the 27 EU member states. This is already a problem from an American perspective, since the laws of many EU countries include all sorts of speech prohibitions that are obviously incompatible with America’s First Amendment.
Thus, in a letter to the European Commission signed by over 100 free speech experts, the Alliance Defending Freedom (ADF) has protested that the EU country with the most speech-restrictive laws risks setting the standard not only for the entire EU, but indeed the entire world. Germany undoubtedly represents this “lowest common denominator,” as the letter puts it. German law, for instance, includes not only prohibitions on alleged “hate speech,” a notoriously slippery concept to begin with, but even on mere “insults” and “disparagement.”
In addition to flagging by individual users, however, the DSA also makes provision for flagging by organizations that have been certified by an EU member state government as possessing expertise in some relevant domain and hence whose reports should be given expedited, priority treatment by the platforms. These are the “trusted flaggers.”
In its interim report, the House Judiciary Committee has warned that “trusted flaggers” are not truly independent—notably from the EU governments that, after all, appoint them and, in some cases, as the report notes, fund them—and that they will increase the pressure on platforms to censor.
In a critical response to the Committee report, however, Democratic members note reassuringly that it is up to the platforms themselves to decide whether to remove the flagged content. “The trusted flaggers ‘don’t have a magic delete button,’” the authors explain, citing an expert source:
These individuals merely provide extra resources to platforms that do not have an affirmative duty to search for and remove illegal content by themselves.
Regrettably, it is obvious from these remarks that the Democratic members have not done their due diligence on the subject: as touched upon above, the “trusted flaggers” are not individuals but rather organizations that are supposed to have relevant expertise in certain areas of the law.
In some cases, they are prima facie uncontroversial even from an American perspective, since their areas of specialization involve laws that are largely identical on both sides of the Atlantic. One can hardly object, for instance, to the activity of the many “flaggers” dedicated to the protection of minors or those specializing in intellectual property rights and consumer protection—at least if their brief is truly limited to their ostensible area of expertise. (A full list of the 43 “trusted flaggers” named thus far is available from the European Commission.)
It’s another matter when their area of expertise is speech crimes. Ironically, the expert source quoted by the Democratic members—“Trusted flaggers do not have a magic delete button”—is Managing Director of precisely one such organization: Josephine Ballon of the German organization HateAid.
In June, the German government—more precisely, the German telecommunications regulator, the Bundesnetzagentur—named HateAid as a “trusted flagger.” The Bundesnetzagentur (or “Federal Network Agency”) serves as Germany’s national DSA implementing authority or “Digital Services Coordinator” (DSC).
Moreover, HateAid was not only appointed by the German government, it is also funded by it. According to data in the German government’s Lobby Registry, it received nearly €1.3 million in support from two different government ministries in 2024, for instance.
If Americans would regard the “flagging” of speech for removal by an organization appointed and funded by the American government as nothing other than government censorship, why should they regard it any differently when the organization is appointed and funded by the German government?
Ballon is, of course, right that “trusted flaggers” do not have “a magic delete button.” Platforms are not required to remove all the content flagged by the “trusted flaggers.” But were they to remove none, given the official function assigned to the “flaggers,” they would clearly be non-compliant with the DSA and hence risk the massive fines that the European Commission is empowered to apply under the law. The only option for an American company wanting to avoid the fines would be to leave the EU market altogether.
As its name suggests, HateAid specializes in providing assistance to victims of “hate.” This does not mean victims of “hate crime,” as understood in American law, but rather of “hate speech”—i.e., “speech crimes”—which, needless to say, is not even a category in American law. HateAid’s very raison d’être is thus, by definition, incompatible with America’s First Amendment.
An example of the aberrations to which German “hate speech” laws have led is provided by the case of the German retiree Stefan Niehoff. Niehoff had his home raided by German police last fall merely for having retweeted a meme that, in a play on the name of the German hair-care products brand Schwarzkopf, jokingly referred to Germany’s then Minister of the Economy, Robert Habeck, as a “professional moron.” Under §188 of the German Criminal Code, commonly referred to as the “lèse-majesté” law, public officials enjoy heightened protection against “insults.”
Moreover, while Niehoff’s case received wide publicity, it is worth noting that the raid on his home occurred on one of the 12 “days of action against criminal hate posts” held to date, in which hundreds of German citizens have had their homes raided by the police on account of social media posts.
Of course, Americans need not fear their homes being raided by German police should HateAid or some other European “trusted flagger” denounce their posts. But what they can fear is that their posts will be removed or otherwise suppressed.
Perhaps such fears are unfounded or exaggerated, as some academic defenders of the DSA have suggested. The DSA does not, after all, require platforms to remove posts globally. They also have the option of merely removing them in the particular jurisdiction or jurisdictions in which they would constitute crimes, or, in other words, geo-blocking.
What platforms could do in theory and what they do in practice—namely, meet the law’s requirements in a manner that is both technically feasible and cost-effective for them—are, however, two different things. The proof of the pudding is in the eating, and what is clear from the available data on DSA compliance is that the law is already having massively extraterritorial consequences.
“Very Large Online Platforms” and “Very Large Online Search Engines,” which fall under the DSA’s strictest provisions, are required to publish periodic reports on how they handle notifications under the DSA and on their “content moderation” more generally. Consider, for instance, the latest “DSA Transparency Report” posted by LinkedIn. Given that EU member state governments have only recently begun appointing the “trusted flaggers,” little data is available on platforms’ response to their notifications in particular. LinkedIn notes that it did not, in fact, receive any notifications from “trusted flaggers” in the reporting period.
It did, however, receive nearly one million “EU reports” from users via the required DSA “notice and action” mechanism. In response, the platform removed nearly 5,000 items as “hateful speech” and another nearly 3,000 items as “misinformation.” The LinkedIn report does not indicate that any content was merely geo-blocked. The only other enforcement actions listed by LinkedIn are two types of visibility filtering: i.e., LinkedIn did not remove the items outright but restricted their visibility.
Moreover, the DSA also creates an expectation that platforms will be proactive in suppressing “illegal” speech by way of both automated systems and human “content moderators.” LinkedIn reports that, during the six-month reporting period, it removed another nearly 24,000 items as “hateful” and over 12,000 items as alleged “misinformation” on its own initiative. Needless to say, both allegedly “hateful” speech and alleged “misinformation” are constitutionally protected speech in the United States. It is not up to the government to decide what is “hateful,” much less what is correct or incorrect information.
Unfortunately, the DSA “transparency reports” are not in fact so transparent. LinkedIn’s report does not tell us whose posts were being removed as “hateful” or “misinformation.”
But lest it be imagined that the “content moderation” was only targeting posts by Europeans, leaving Americans unscathed, consider also the data that LinkedIn provides, as required by the DSA, on the “linguistic expertise” of its content moderation “team.”
Of LinkedIn’s 1,623 content moderators, 1,443—or nearly 90 percent—are English-speakers.
But post-Brexit, only roughly 1 percent of the EU’s population are native English speakers. It is clear, then, that this “content moderation” is not only occasionally, but indeed overwhelmingly affecting non-Europeans: above all, Americans and other English speakers.
Read the rest at lawliberty.org
About the author: John Rosenthal is a journalist and political analyst who has been covering European politics for the last two decades. His writings have appeared in such publications as Policy Review, World Affairs, The Weekly Standard, World Politics Review, and many others. He holds a PhD in philosophy and previously taught political philosophy and the history of European philosophy at schools in both the United States and Europe. He is the author of “Make Speech Free Again: How the U.S. can defeat E.U. censorship” in the Spring 2025 issue of the Claremont Review of Books.
PRINCIPIA SCIENTIFIC INTERNATIONAL, legally registered in the UK as a company incorporated for charitable purposes. Head Office: 27 Old Gloucester Street, London WC1N 3AX.
