The online public sphere, composed jointly of various social media platforms, enables dialogue at societal scale, rapid dissemination of news, and greater opportunity for people to discover content and opinions they find interesting or important.

These platforms, however, also enforce content moderation policies, around which polarized debates about free speech tend to revolve. To what extent can a platform’s moderation policy, intended to minimize the proliferation of harmful content, result in the unwarranted censorship of controversial content or minority opinions?

In addition to the matter of censorship, there is the matter of maintenance overhead and the costs associated with data storage. There appears, at least on the surface, to be a structural incongruity in the current social media status quo: these platforms, as private enterprises with profit-maximizing imperatives, are obligated to maintain vast troves of data which, in effect, function as public records.

To what extent should we expect these platforms to fulfill this obligation, especially in situations where it is neither legally required nor financially advisable to do so? Should we really be surprised when large swathes of such data are erased or otherwise deprecated?

The ethical considerations of content moderation, and the financial and technical considerations of data storage, tend to undergird our collective conversations about social media platforms and their role in modern social and civic life. The central pain point in this arrangement appears to be the conflict between private business imperatives and public welfare responsibilities, a conflict which could potentially be resolved by reevaluating how content moderation is exercised.

A Time and Place for Moderation

Right now, content moderation is generally practiced in arrangements where the moderating entity controls not just the visibility of the content in question, but its very existence.

Backups of moderated content can certainly be made and stored in any number of places, but this pairing of roles, in which a platform both displays content for public consumption and manages its storage, seems to be at the root of the problem. That is, should a media platform, which can have legal and operational obligations to maximize profits and minimize superfluous expenses, be responsible for storing the content it displays in addition to displaying it to end readers?

The leading arguments in favor of content moderation, namely that some content is bigoted, illegal, or otherwise dangerous, can arguably be satisfied by exercising moderation at the points where the content is displayed and socially circulated, rather than where it is stored. In other words, moderated content can technically still exist; it simply would not be socially circulated or otherwise platformed by media companies subject to relevant laws and public scrutiny.

Such an arrangement may not be feasible if the same company is tasked with storing both censored and uncensored content while displaying only the uncensored content. However, if the content (qua data) could be stored in a way that constituted less of a financial burden for a given platform, say on public peer-to-peer storage networks such as IPFS or Arweave, then the platform could draw content from that network with arguably greater flexibility in the moderation policies it enforces.

That is, if the platform is not responsible for storing the data, its censorship and moderation decisions pertain not to the content’s existence but to its degree of public exposure. Such content could still exist, and be manually accessed by anyone who wants to find it, but it would not be circulated on the social media platforms whose moderation policies prohibit it. This enables a patchwork of social media platforms, each exercising its own content moderation policies, without compromising free speech at its roots.
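To make that division of responsibilities concrete, here is a minimal sketch, assuming content is addressed by an IPFS CID and fetched through a public gateway; the policy check and function names are illustrative and not part of any platform’s actual API.

```ts
// Illustrative sketch: the platform never stores the post itself.
// Content is addressed by an IPFS CID and fetched through a public gateway;
// the platform's moderation policy decides only whether to display it.

type PostRef = {
  cid: string;    // IPFS content identifier of the post body
  author: string; // author identifier, e.g. an on-chain address
};

// Hypothetical platform-specific policy: hide posts that match a blocklist.
// The underlying content continues to exist on the storage network.
const BLOCKED_TERMS = ["example-blocked-term"];

function isAllowedByPolicy(text: string): boolean {
  return !BLOCKED_TERMS.some((term) => text.toLowerCase().includes(term));
}

async function renderPost(ref: PostRef): Promise<string | null> {
  // Any public gateway works; the platform does not own the storage layer.
  const res = await fetch(`https://ipfs.io/ipfs/${ref.cid}`);
  if (!res.ok) return null;

  const text = await res.text();
  // Moderation acts on visibility, not existence: a rejected post is simply
  // not rendered here, but remains retrievable by its CID elsewhere.
  return isAllowedByPolicy(text) ? text : null;
}
```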

A Dialectical Framing

Below is an example of how this conflict can be understood in dialectical terms, comparing two possible content moderation arrangements, one for each side of the debate, and showing how a potential solution could substantially address the concerns of both sides.

  • Thesis: All opinions should be able to be voiced, and thus privately-run social media companies should provide a neutral platform for all opinions.
    • Antithesis: Some content is dangerous, and thus privately-run social media companies need to censor or de-platform certain content.
      • Synthesis: All opinions can be voiced, but not all should be circulated or platformed, so privately-run social media companies should outsource content storage to public databases and merely draw from these databases to visualize whatever content complies with their moderation policies.

Notes: In this framing, the pro-moderation “antithesis” represents those who want harmful content not to be circulated, rather than those who want such content to be deleted from existence. For the purposes of this inquiry, it is this latter group’s demand that is compromised.

Given how the dialectical method works, the proposed “synthesis” involves its own pain points, namely the potential difficulties regarding user experience and product-market fit for social media companies which don’t own the content data generated by their users. These pain points would need to be addressed by a subsequent solution, which will involve its own pain points, and so on.

Sensemaking methodology aside, what is being considered here is whether the theoretical solution proposed above would indeed substantially address the concerns of the majority of people on each side of this debate, and whether the drawbacks it entails would be judged preferable, by a similar majority, to the drawbacks of the existing social media status quo.

That said, while the above solution does remain largely theoretical, such an arrangement is already gradually emerging, with several media companies zeroing in on product-market fit within it.

A New Social Media Experience

In this new arrangement, wherein content resides on distributed storage networks which are not controlled by a single entity, and where this content can be visualized and circulated on privately-run social platforms, the dynamics of moderation are fundamentally different.

Publishing platforms such as t2, Mirror, Paragraph, and Planet are successfully building tools for writers to publish their content to decentralized storage networks like IPFS and Arweave, publicize it using open-source social protocols like Lens and Farcaster, and monetize it using blockchain technology.

None of these platforms own the content created by users. Because the content itself lives on peer-to-peer storage systems, these social media platforms must build their revenue models in less extractive ways, such as by offering appealing content creation, visualization, and discovery features.
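A rough sketch of that publish flow might look like the following; storeContent and announcePost are placeholder stand-ins for a real pinning service and a Lens- or Farcaster-style client, since the exact APIs vary by platform.

```ts
// Sketch of the publish flow: the essay goes to decentralized storage, and
// only a pointer to it is announced on an open social protocol. The helper
// functions are placeholders, not the APIs of any specific service.

import { createHash } from "node:crypto";

type PublishedPost = { cid: string; author: string };

// Placeholder: a real implementation would upload to IPFS or Arweave and
// return the network's content address. A stable hash stands in for it here.
async function storeContent(markdown: string): Promise<string> {
  return createHash("sha256").update(markdown).digest("hex");
}

// Placeholder: a real implementation would write the pointer to an open
// social graph, where any client app can discover and render it.
async function announcePost(post: PublishedPost): Promise<void> {
  console.log(`announced ${post.cid} by ${post.author}`);
}

async function publish(author: string, markdown: string): Promise<PublishedPost> {
  const cid = await storeContent(markdown); // content lives off-platform
  const post = { cid, author };
  await announcePost(post);                 // the platform only handles the pointer
  return post;
}

publish("example-writer", "# My essay\n\nHello, commons.").catch(console.error);
```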

But beyond these publishing platforms, which focus mainly on short-form to long-form written content, the wider arena of social media content is being reshaped by open-source projects like Lens and Farcaster. Each of these projects has built a system of social media building blocks, which users employ to create their content and which privately-run social media apps can visualize.

While the social media platforms of today control the content generated by their users, this new arrangement establishes more of a content commons. Here, everything from tweet-length posts to multimedia essays can live on distributed storage networks, have its ownership represented on decentralized blockchains which function as social graphs, and be visualized across a number of centralized social media platforms which compete for users by offering streamlined ways to interact with the underlying open-source software.

Instead of making a post and having that post be siloed within a given social media app, the user publishes it to a public record from which a variety of social media apps may draw.
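Structurally, a post in this commons reduces to little more than a pointer and an ownership record; the sketch below illustrates that shape with assumed field names and example values rather than any particular protocol’s schema.

```ts
// Sketch of the "content commons" data model: a post is a pointer into
// distributed storage plus an ownership record on a social graph. The field
// names and example values are illustrative assumptions.

type ContentAddress = string; // e.g. an IPFS CID or an Arweave transaction id
type AccountId = string;      // e.g. an on-chain address or protocol identity

interface CommonsPost {
  content: ContentAddress; // where the post body actually lives
  owner: AccountId;        // ownership recorded on the social graph
  publishedAt: number;     // unix timestamp
}

// The public record: any number of client apps can read the same list and
// decide independently how (and whether) to render each entry.
const publicRecord: CommonsPost[] = [
  { content: "example-cid-1", owner: "0xabc0001", publishedAt: 1_700_000_000 },
  { content: "example-cid-2", owner: "0xabc0002", publishedAt: 1_700_100_000 },
];

// Two hypothetical apps drawing from the same record in different ways.
const chronologicalFeed = [...publicRecord].sort((a, b) => b.publishedAt - a.publishedAt);
const singleAuthorFeed = publicRecord.filter((p) => p.owner === "0xabc0001");
```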

As for content moderation, these new platforms can establish and enforce whatever policies they need to without threatening the existence of the moderated content, simply because the content itself is not controlled by the platforms. The platforms built in the Farcaster ecosystem (including Warpcast, Alphacaster, and OpenCast) and those built using the Lens protocol (including Hey, Orb, and Buttrfly) can each implement their own moderation policies. These policies can be shaped by free-market responses, such as users leaving one platform to experience the same content on another platform with stricter policies against hate speech, as well as by legal constraints, such as laws against platforming certain kinds of pornography.
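In this reading, the difference between platforms comes down to which filter each applies over the same shared record; the tag-based policies below are purely illustrative.

```ts
// Sketch of platform-specific moderation over shared content: each app applies
// its own policy when deciding what to surface, while the underlying posts
// remain on the commons. The tags and policies are illustrative assumptions.

interface SharedPost { id: string; text: string; tags: string[] }

type Policy = (post: SharedPost) => boolean;

// A stricter platform hides anything its community guidelines prohibit,
// while a more permissive one filters only what the law requires.
const strictPolicy: Policy = (p) =>
  !p.tags.includes("hate-speech") && !p.tags.includes("nsfw");
const permissivePolicy: Policy = (p) => !p.tags.includes("illegal");

// Both apps read the same commons; only visibility differs between them.
function buildFeed(commons: SharedPost[], policy: Policy): SharedPost[] {
  return commons.filter(policy);
}
```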

In terms of revenue models and product-market fit, this is all still active and experimental. These new platforms compete with each other not in terms of who can hoard and privately capitalize upon the most user-generated content, but in terms of who can offer the best user experience and features built upon a public, open-source infrastructure.

Because the user bases of these platforms currently consist predominantly of web3 enthusiasts and technologists, it remains to be seen whether these new business models can scale beyond niche markets and operate at the level of the incumbent social media platforms. If these platforms can scale to such levels, however, and if the aforementioned business models remain viable at that scale, then our collective social media experience could change drastically, ostensibly for the better.