Artificial intelligence (AI)—and the big data business models underpinning it—is disrupting how we live, work, do business, and govern. 

The economic, social, and environmental benefits of AI could be significant, such as improved health diagnostics, self-driving vehicles that increase road safety, and enhanced fraud prevention.

However, AI also brings social risks, including new forms of discrimination arising from algorithmic bias, labor impacts associated with the displacement of workers by machines, and the heightened potential of surveillance using tracking devices and facial recognition tools.

The speed, complexity, and novelty of these disruptions imply that similarly innovative approaches to responsible business will be needed for us to realize the full potential of AI to create long-term value.

Over the past few years, various ethics-based approaches to the responsible development and deployment of AI have emerged, covering issues like privacy, surveillance, discrimination, bias, unintended consequences, and misuse by bad actors. The pace at which new principles, organizations, and design tools have been established is impressive, and they have made a significant positive contribution to the debate about the future of AI.

We believe that three major enhancements to the current approach are required:

  1. Human rights-based approaches offer a robust framework for the responsible development and use of AI and should form an essential part of business policy and practice.
  2. Companies outside the technology industry have an essential role to play, and we believe they should be more proactively involved in the development of responsible approaches to AI.
  3. Due diligence approaches developed by the business and human rights field in recent decades can be usefully deployed in the quest for responsible and rights-respecting development and deployment of AI—though we also believe that these approaches require significant stretching and innovation.

For these reasons, today we are publishing three papers describing a potential blueprint for responsible business practice with regard to AI both within and beyond the technology sector. 

  • In Paper 1: Why a Rights-Based Approach?, we outline 10 beliefs—built on the internationally agreed foundations of the business and human rights field—to govern and guide the use of AI. We draw heavily on the UN Guiding Principles on Business and Human Rights, the foundational and internationally endorsed road map for addressing business human rights impacts on people.
  • In Paper 2: Beyond the Technology Industry, we argue that we must pay attention to the AI value chain, as well as the positive and negative human rights impacts associated with AI that are directly relevant for companies beyond the technology sector.
  • Finally, in Paper 3: Implementing Human Rights Due Diligence, we explore the tools, methodologies, and guidance needed to operationalize business respect for human rights in the context of AI development and use. We propose several innovations, including using futures methodology, experimenting with the concept of ‘human rights by design,’ and taking rights-based approaches to identify opportunities.

These papers draw upon approaches and lessons learned from the field of business and human rights. We are presenting them as “working papers” for discussion, dialogue, and feedback from our readers, and there will be many opportunities to examine these proposals in the months ahead, including a BSR member company event in New York in October and a session at the BSR Conference in November.

We intend to publish revised and improved versions of these papers at a later date based on your input. We’d love to hear your reactions or those of your colleagues and stakeholders. If you are keen to engage or collaborate, or if you see other opportunities to test these papers, please get in touch.
