Imo, the only solution that respects fundamental rights and democratic principles lies in content-agnostic moderation procedures. I am convinced that this is possible. Here are three concrete ideas (which may of course not be the best ones):
For identified accounts, (only!) the identified individual can and should be held legally accountable for allegedly illegal content; the platform must therefore not moderate such content on those grounds until a regular court has confirmed its illegality and mandated removal.
For illegal content from anonymous accounts, the platform itself should be held accountable, e.g. via a mechanism similar to that of the DSA (the EU's Digital Services Act).
Transparent forms of T&C-based content moderation should of course be allowed (but *never* government-mandated), so that platforms can reach their intended target audience (e.g. by banning NSFW content).