Human Rights Assessments in the Decisive Decade: Applying UNGPs in the Technology Sector


February 11, 2020
Authors
  • Dunstan Allison-Hope, Senior Advisor, BSR

This blog is the first in a two-part series about human rights assessments in the technology industry. The next blog will consider how to address the challenges raised here.

Over the past 10 years, the BSR team has undertaken a number of human rights assessments for companies in the technology industry. While most have not been published, recent publicly available examples include the assessments of Facebook in Myanmar, Google’s Celebrity Recognition tool, Telia Company’s exit from Eurasia, and Facebook’s proposed Oversight Board.

Our assessments have covered a variety of topics: some have focused on geography, others on a new product, service, or technology, and still others on how to integrate a human rights-based approach into a decision-making process. A common challenge throughout has been how to address traditional human rights concerns in the technology industry’s very different modern setting.

This challenge is well recognized by UN Human Rights, which has launched a project to provide guidance and resources to enhance the quality of implementation of the United Nations Guiding Principles on Business and Human Rights (UNGPs) in the technology sphere. As their scoping paper rightly recognizes, there is both a lack of clarity on how to apply the UNGPs in practice and a need to ensure that approaches are aligned with international standards. BSR looks forward to active participation in this project.

Human rights assessments are just one part of the human rights due diligence expected by the UNGPs, though they are often a key input into the design of overall approaches for identifying, preventing, mitigating, and accounting for how companies address human rights impacts. Our past decade of undertaking human rights assessments in the technology sector has surfaced several challenges, which must be addressed when designing human rights due diligence frameworks fit for the decade ahead, as new technologies are developed, introduced, and adopted.

  1. The role of users in shaping impact. A significant challenge when assessing human rights impacts is the interplay between how a technology company designs a product and how it is used in real life, whether by individuals, enterprise customers, or governments. While technology companies may not typically cause harm directly, they often contribute to it or are directly linked to it through the use of their products and services by others, and for this reason, it is increasingly common for companies to set out restrictions on how a product can be used. However, identifying violations while respecting user privacy is not easy—often technology companies simply lack visibility into the data needed to spot misuse. We often finish a human rights assessment for a technology company and wryly conclude that we should be undertaking the assessment for the companies using the technology, not just the company designing it.
  2. The substitutability problem. “If we don’t sell this product to a nefarious actor, then someone else will” is a common refrain in the technology industry—and while the problem is not unique to the industry, it does take on special significance here. Today’s concerns around facial recognition illustrate the point perfectly: the overall realization of human rights is not improved if a responsible company declines to provide service to a nefarious actor and a different company chooses to do so anyway. At the global scale, there is no shortage of technology companies willing to step in where rights-respecting companies decide not to.
  3. The need for system-wide approaches. While today’s human rights assessments are typically undertaken for a single company, in the technology industry the solutions often need to be applied at the system level. For example, the risk of governments gaining access to data and using it to violate human rights—through extreme surveillance, for instance—cannot be addressed by one company acting alone. As such, collaborations such as the Global Network Initiative are essential for ensuring that insights gained at the company level are acted upon collectively. In many cases, standards-setting, multi-stakeholder efforts, and regulation by governments are essential to effectively addressing adverse impacts.
  4. The differential importance of local context. Users across the globe can quickly adopt and begin using a new technology product, service, platform, or feature, but its human rights significance can vary considerably depending on local context. This creates the twin challenges of identifying these impacts in a timely fashion and applying global policies across nearly 200 countries—and often across different local contexts within each country. Companies can prioritize higher-risk locations, but there is always a risk that countries, regions, and communities covered by Western media receive much more attention from companies than those that are not.
  5. The challenge of scale. Related to the importance of local context is the sheer reach of technology, and the operational implications of this scale should not be underestimated. As we noted in our recent human rights review of Facebook’s proposed Oversight Board, “while efforts to provide access to remedy in other industries are typically designed to meet the needs of a bounded number of rightsholders, based in clearly defined geographical areas and speaking a limited number of languages, the Facebook Oversight Board needs to be designed to meet the needs of billions of rightsholders (both users and non-users), who could be anywhere in the world and who may speak any language.”
  6. The democratizing impact of technology. In several recent human rights assessments, we have recommended that companies deploy approaches based on “allow lists” and “block lists” to define who should or should not be able to use a product, with the emphasis on avoiding or preventing product misuse in higher risk scenarios. The critique of this approach is simple: the original promise of technology is that it democratizes and opens closed societies, so who are companies to decide where, when, and how it gets deployed?
  7. The government problem. The UNGPs clearly state that governments have the duty to protect human rights; in practice, however, governments often fall well short of this duty and use technology, data, and regulation to violate rather than protect rights. In many other industry contexts, the laws are often good but go unenforced; in the technology industry, the laws are often bad and are over-enforced. System-wide approaches are impossible when a key player in the system isn’t willing to cooperate or is itself a source of the problem—for example, facial recognition needs regulating, yet governments are often the customers using it in ways that violate human rights.
  8. Assessing for uncertainty. A human rights assessment of a new product, service, or feature should focus on identifying, avoiding, preventing, and mitigating adverse human rights impacts that may be associated with its use—the challenge being that we often don’t know for certain how, where, or by whom it will be used. We’ve begun deploying futures and strategic foresight methods as part of human rights due diligence, but it would be naive to suggest we can anticipate every eventuality. This challenge is multiplied the further up the research and design chain the human rights assessment takes place.

Taken together, these challenges present a complex mix of ethical questions and dilemmas: When is it appropriate for a company to define how its product can and cannot be used, and when is it appropriate for government or society to do so? Will restricting the use of new products to avoid adverse human rights impacts also hinder the urgent priority of spreading the benefits of scientific advancement and technological progress, which itself is a human right? How much autonomy should companies give rightsholders over how they use a product, and to what extent should that autonomy be restricted? Is it right that appropriate actions may vary at different levels of the internet, such as the platform, service, or infrastructure layers?

Looming behind these questions are also fundamental debates shaping the future of the internet, such as the role of international standards-setting bodies in defining how the internet works, the risk that the internet “splinters” along geographic, political, and commercial boundaries, and whether governments should place limits on encryption.

The UNGPs and the various human rights principles, standards, and methodologies upon which the UNGPs were built provide a pathway to address these challenges, but much work remains to be done to define the direction this path should take. At BSR, we’ve undertaken human rights assessments knowing that they are just one part of a journey of discovery in learning about the impact of technology on human rights. We greatly appreciate companies that have contributed to the dialogue by publishing their assessments, and in the second part of this series, we will share more about the solutions to address the challenges raised here.
