19 December 2023

The DSA's tackling of systemic risks should have been content-agnostic

Any government-controlled, government-sanctioned, or government-policed form of content moderation that is based on the content itself makes that government a Ministry of Truth in the style of 1984. The European Commission has been one since the Digital Services Act (DSA). It's a sobering fact that cannot be denied.

At the same time, the risks to democracy posed by AI-generated content and social media algorithms are immense: the DSA does address a real need, though it does so in a very wrong way, at least in its provisions on "systemic risks".

Imo, the only solution that respects fundamental rights and democratic principles lies in content-agnostic moderation procedures. I am convinced that this is possible. Here are three concrete ideas (which may of course not be the best ones):

1⃣ All platforms should be required to allow (not require) both humans and bots to be identified. Every identified bot should be linked to an identified human who accepts accountability for it. Platforms should remain free to ban (identified) bots altogether.

2⃣ Require T&Cs and content moderation policies to respect freedom of speech and of information. This must be a prime concern, not an afterthought or vague boundary condition.

For identified accounts, the identified individual (and only they!) can and should be legally accountable for allegedly illegal content; such content must therefore not be moderated by the platform on those grounds until a regular court has confirmed its illegality and mandated its removal.

For illegal content from anonymous accounts, the platform itself should be held accountable, e.g. through a mechanism similar to the DSA's.

Transparent forms of T&C-based content moderation should of course be allowed (but *never* government-mandated), so that platforms can reach their intended target audience (e.g. by banning NSFW content).

3⃣ Impose algorithmic limits/brakes on the virality of any content (regardless of what that content is), particularly for content that is not created by identified humans. There are many ways in which this could be done. Note that many of today's algorithms do the exact opposite.
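To make the idea of a virality brake concrete, here is one minimal sketch of what such a mechanism could look like. All names and thresholds are hypothetical illustrations (not taken from any real platform or from the DSA itself); the essential property is that the decision depends only on how fast an item spreads and on whether its author is identified, never on what the content says.

```python
# Hypothetical, content-agnostic "virality brake" sketch.
# The multiplier returned here would scale an item's algorithmic reach:
# spread is damped once the share rate exceeds a cap, and content from
# non-identified authors gets a lower cap. The content itself is never inspected.

from dataclasses import dataclass


@dataclass
class Item:
    shares_last_hour: int     # observed spread rate of this item
    author_identified: bool   # is an identified human linked to the author?


def amplification_factor(item: Item) -> float:
    """Return a reach multiplier in (0, 1].

    Below the cap, the item spreads normally; above it, reach shrinks
    in inverse proportion to the excess spread, so total amplified
    exposure per hour stays roughly bounded.
    """
    # Illustrative thresholds only; a real policy would tune these.
    cap = 1000 if item.author_identified else 500
    if item.shares_last_hour <= cap:
        return 1.0
    return cap / item.shares_last_hour
```

A platform could apply such a factor inside its recommender without ever classifying content as true or false, which is exactly what makes the approach content-agnostic.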

It strikes me that I've never seen a meaningful focus on 3⃣, although the DSA does follow this approach to a very limited extent in its regulation of recommender systems. Yet, imo, this is a crucial piece of the puzzle: it achieves the goal without harming fundamental rights (rather the contrary).

(Note: my thoughts on this were partially inspired by this outstanding piece by Jonathan Haidt and Eric Schmidt.)

18 December 2023

Distinguishing True from False Content and Its Proxies

1) There's information that you like, and there's information that you dislike.

2) There's human-generated content, and there's bot-generated content.

3) And then there's true information, and there's false information.

The problem with regulations to fight disinformation, such as the Digital Services Act (DSA), is that the first distinction is easy to make, the second much harder, and the third practically impossible (except in the most trivial cases).

The inevitable failure to make the third and second distinctions means that the first will be used as an all-too-convenient proxy, as we are predictably seeing today with the DSA.

And that's the beginning of the end of democracy as we know it.

Although the DSA addresses a real need, and although it has many merits (e.g. its provisions protecting minors, and its transparency provisions, even though those still fall short), I am convinced that its handling of systemic risks is a mistake and should be undone.

The European Commission has created the DSA to tackle "systemic risks". In doing so, it may have created the greatest systemic risk of all.