This is part of a series of blog posts BSR has published in 2018 exploring the intersection of disruptive technologies and sustainability.

Over the past few years, the technology industry has been under the spotlight for social, ethical, and human rights issues arising from the rapid development of artificial intelligence (AI). Yet AI will be deployed in almost every sector of the economy—companies in mining, agriculture, healthcare, retail, financial services, and transportation are all exploring how AI can be turned into innovative customer solutions, new business models, and massive operational efficiencies.

This means that human rights teams and decision-makers in non-technology companies will need to engage with the social, ethical, and human rights issues relating to the use of AI in their respective industries.

On October 3, we ran a workshop to assist companies in the exploration of these issues. The workshop did two things: First, it identified the barriers and challenges of applying human rights due diligence practices to new and disruptive technology through industry-specific roundtables. Second, it trained companies to anticipate AI-related human rights issues and take tangible steps toward addressing them through an innovative futures methodology exercise. We specifically engaged non-technology companies, in addition to the ICT sector, to broaden the discussion about the diverse opportunities and risks associated with AI.

Reflecting on this workshop, we reached four main conclusions:

  1. AI is becoming increasingly relevant to non-technology companies and should be considered in human rights due diligence processes. Examples like the use of algorithms to assess creditworthiness in the financial sector, automated purchasing in the retail industry, and the use of AI in connected health demonstrate that the widespread application of AI is slowly transforming almost every company into a technology company. This creates an urgent need to consider whether AI is reshaping existing human rights risks and opportunities or creating entirely new ones. A human rights due diligence process that ignores AI and other disruptive technologies runs a high risk of being incomplete.
  2. The questions outnumber the answers. The impact of AI on human rights is uncertain, with a wide variety of potential scenarios. In this context, a practical next step for companies is to broaden the range of questions they ask about potential impacts in the future—even if the answers to these questions are unknown. Some early-stage questions may include: Can my company describe whether specific choices or decisions arrived at by AI have impacted the human rights of customers, users, or employees? How could a lack of understanding around the algorithmic decision-making process affect the ability of my customers to understand their rights? Are algorithms discriminating against certain populations—either intentionally or unintentionally? If my company expands operations to new countries and cultures, are the same algorithms and data going to be used to make decisions, and if they are, what are the potential ramifications?
  3. A human rights due diligence and futures methodology “mash up” has potential. Futures thinking, strategic foresight, and scenario planning methodologies have been in existence for many years, as have human rights due diligence methodologies. However, to our knowledge, the two methodologies have not been combined since the publication of the UNGPs. Given the uncertainty around the deployment of AI, futures methodology can be applied to consider different scenarios and assist companies in identifying potential future human rights impacts.
  4. Industry-wide human rights impact assessments could be a way to increase the capacity of entire industries to address AI and human rights. While the tech sector is actively engaged in the task of incorporating human rights into the design, development, and deployment of AI, other industries also have work to do. Working with industry peers to identify human rights impacts, risks, and opportunities will allow for shared learning and support efforts at individual companies to address the human rights impacts of AI.

As we move into 2019, BSR will continue to prioritize the connection between disruptive technology and human rights, and we will endeavor to put these industry explorations at the forefront of our agenda.