19 December 2023

The DSA's tackling of systemic risks should have been content-agnostic

Any government-controlled, government-sanctioned, or government-policed form of content moderation that is based on the content itself makes that government a Ministry of Truth in the style of 1984. That is what the European Commission has become since the Digital Services Act (DSA). It's a sobering fact that cannot be denied.

At the same time, the risks to democracy posed by AI-generated content and social media algorithms are immense: the DSA does address a real need (though in a very wrong way, at least in its provisions on "systemic risks").

Imo, the only solution that respects fundamental rights and democratic principles lies in content-agnostic moderation procedures. I am convinced that this is possible. Here are three concrete ideas (which may of course not be the best ones):

1⃣ All platforms should be required to allow (not require) both humans and bots to be identified on the platform. Each identified bot should be linked to an identified human who accepts accountability for it. Platforms should remain free to ban (identified) bots altogether.
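
To make this concrete, here is a minimal sketch (in Python) of how a platform's account model could capture these identity tiers. Everything in it (AccountKind, Account, accountable_human) is a hypothetical illustration of the idea, not anything the DSA or any existing platform defines.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class AccountKind(Enum):
    ANONYMOUS = auto()         # no verified identity on record
    IDENTIFIED_HUMAN = auto()  # identity verified by the platform
    IDENTIFIED_BOT = auto()    # automated account; must be linked to a human

@dataclass
class Account:
    handle: str
    kind: AccountKind
    # For IDENTIFIED_BOT accounts: the identified human who accepts
    # legal accountability for the bot's output.
    accountable_human: Optional["Account"] = None

    def is_valid(self) -> bool:
        """An identified bot is only valid if an identified human vouches for it."""
        if self.kind is AccountKind.IDENTIFIED_BOT:
            return (
                self.accountable_human is not None
                and self.accountable_human.kind is AccountKind.IDENTIFIED_HUMAN
            )
        return True
```

A platform that chooses to ban bots altogether would then simply reject any account whose kind is IDENTIFIED_BOT.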

2⃣ Require T&Cs and content moderation policies to respect freedom of speech and of information. This must be a prime concern, not an afterthought or vague boundary condition.

For identified accounts, only the identified individual can and should be held legally accountable for allegedly illegal content; the platform must therefore not moderate such content on those grounds until a regular court has confirmed its illegality and mandated removal.

For illegal content from anonymous accounts, the platforms should be held accountable, e.g. using a mechanism similar to the DSA's.

Transparent forms of T&C-based content moderation should of course be allowed (but *never* government-mandated), to let platforms reach an intended target audience (e.g. by banning NSFW content).

3⃣ Impose algorithmic limits/brakes on the virality of any content (regardless of what that content is), particularly for content that is not created by identified humans. There are many ways in which this could be done; one possible mechanism is sketched below. Note that many algorithms today do the exact opposite.
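
To illustrate, here is a minimal, hypothetical sketch (in Python) of one such brake: an amplification multiplier that decays with the depth of the reshare cascade, and decays faster when the originator is not an identified human. The function name and decay rates are placeholder assumptions, not anything the DSA mandates; the point is that the brake never inspects what the content says.

```python
def amplification_factor(
    reshare_depth: int,
    author_is_identified_human: bool,
    base_decay: float = 0.8,    # placeholder: per-hop damping for identified humans
    strict_decay: float = 0.5,  # placeholder: steeper damping for anonymous/bot origin
) -> float:
    """Content-agnostic virality brake.

    Returns a multiplier in (0, 1] applied to a post's reach at each
    reshare hop. It looks only at *how* the content spreads and *who*
    originated it, never at what the content says.
    """
    decay = base_decay if author_is_identified_human else strict_decay
    return decay ** reshare_depth

# Example: after 5 reshare hops, a post by an identified human keeps
# 0.8**5 ≈ 33% of its reach, while an anonymous or bot post keeps
# 0.5**5 ≈ 3%. Both are damped regardless of what they say.
```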

It strikes me that I've never seen a meaningful focus on 3⃣, although the DSA actually does follow this approach to a very limited extent in its regulation of recommender systems. Yet, imo, this is a crucial piece of the puzzle: it achieves the goal without harming fundamental rights (quite the contrary).

(Note: my thoughts on this were partially inspired by this outstanding piece by Jonathan Haidt and Eric Schmidt.)
