The recent action taken by social media companies against former user @realDonaldTrump following the insurrection at the Capitol sparked a fierce debate.
For some, the decisions were a legitimate, necessary, and proportionate response to the obvious incitement of violence and insurrection: a new line had been crossed, and tough action was merited.
For others, this action came too late—the damage had already been done, and the lack of action against previous transgressions (such as quoting “when the looting starts, the shooting starts”) had allowed users to push the boundaries of online speech too far, for too long.
For others still, the decisions were a problematic infringement on the right to freedom of expression and a step too far for companies to take.
Debate also raged about whether and how democratically elected leaders should be treated differently online. For some, the public has a right to hear from their elected officials, no matter how objectionable the message; for others, elected officials should be afforded less latitude when incitement to violence is at stake, owing to the substantial influence such speakers hold over their audiences and the heightened risk of harm.
These are real dilemmas with potentially conflicting rights at stake, such as freedom of expression, public safety, and the right to participate in government. There are no simple solutions (if you think there are, here is a reading list) and we have nothing but admiration for those inside social media companies genuinely seeking to take principled approaches to resolve these dilemmas. It is not an easy task, and it's one where no good deed goes unpunished.
BSR’s engagement in this dialogue has focused on how social media companies can apply a human rights-based approach to the challenge of online speech. In other words, how should a company’s responsibility to respect human rights according to the United Nations Guiding Principles on Business and Human Rights (UNGPs) manifest itself in the context of content governance?
With this objective in mind, we are publishing a short paper today setting out a four-part approach to human rights and content governance, based on a combination of the UNGPs and the various human rights principles, standards, and methodologies upon which the UNGPs were built. These four parts are as follows:
- Content policy—statements about what is and is not allowed on a social media platform should encompass all human rights, be founded upon human rights standards and instruments, and draw upon engagement with affected stakeholders.
- Content policy implementation—given the challenges of enormous scale and rapid speed, companies should prioritize implementation based on the severity of human rights risk (globally, not limited to the United States), understand the link between online content and offline harm in the relevant local context, and provide effective remedy when mistakes are made.
- Product development—the features, services, and functionalities of social media platforms are constantly evolving, and it is important that potential human rights impacts are assessed during the development and deployment process, especially for high-risk and conflict-affected markets.
- Tracking and transparency—companies should maintain quantitative and qualitative indicators of the effectiveness of their approach and be transparent about the rationale for important content decisions, with reference to relevant human rights considerations.
There are two important features to highlight about this approach.
First, these four parts constitute a robust framework of ongoing human rights due diligence that enables content decisions to be made thoughtfully and deliberately, grounded in rights-based analysis, rather than “on the go” or according to the whim of the moment. They emphasize that the process matters as much as the decision itself: while different people or companies may reach different decisions, those decisions should be intellectually consistent, defensible on human rights grounds, and conveyed transparently.
Second, these four parts encompass more than simply what content is and is not allowed on a platform. Our approach assumes that international human rights law and the UNGPs provide an overall framework for principled decision making and action, not a “copy and paste” set of content rules for companies to follow. The four parts are intended to be considered as a package and enable companies to adapt as the reality of social media use unfolds.
One of the concerns most frequently expressed over recent weeks has been the unease that companies have so much power—with some arguing that decisions about content should be a role for governments, not companies.
However, there are three reasons why we believe companies should play a role in content governance, and thus why setting out a human rights-based approach for companies to follow remains essential.
First, many of the most significant public policy proposals on content governance—such as the U.K. Online Harms White Paper, the EU Digital Services Act, and proposals to reform Section 230 in the United States—envision companies taking significant responsibility for harm associated with content on their platforms.
Second, these public policy proposals relate to specific jurisdictions, whereas the internet is global. Human rights-based approaches enable consistency across international borders, including in jurisdictions where laws and regulations conflict with international rights standards. Indeed, a global approach based upon international human rights standards provides a strong foundation from which to push back against governments seeking to suppress freedom of expression and other rights.
Third, the UNGPs state that companies have a responsibility to address the adverse human rights impacts with which they are involved. User-generated content is clearly connected to adverse human rights impacts, and a human rights-based approach to content governance is therefore essential for companies to meet this responsibility.
Indeed, the most promising near-term contribution to dilemmas relating to @realDonaldTrump won’t come from government, but from the independent Facebook Oversight Board, which is due to review the former user’s suspension from the platform.
We hope that this paper provides a useful contribution on how respecting human rights and implementing the UNGPs can provide a foundation for this infrastructure, and we welcome comments to amend, improve, and build on this approach.