How can we ensure access to remedy when decisions are made by machines rather than humans? This was the complex question that BSR and the International Corporate Accountability Roundtable (ICAR) considered in our joint session at the UN Annual Forum on Business and Human Rights last week.
By vastly improving our analytical capability, artificial intelligence (AI) has the potential to address some of humanity’s most pressing challenges, including those relating to healthcare, education, transportation, counter-terrorism, and criminal justice. However, as we have noted in our new primer on top human rights priorities for the ICT industry, AI also brings new and previously unforeseen human rights risks on topics as diverse as non-discrimination, privacy, child rights, freedom of expression, and access to public services.
For example, using AI when making sentencing decisions in courts, providing access to credit, or identifying potential terrorists can result in discriminatory decisions. Voice recognition-based AI devices raise implications for privacy rights and child rights, while some have expressed concern that machines deciding whether social media posts comply with terms of service could negatively impact freedom of expression.
The third pillar of the UN Guiding Principles on Business and Human Rights (UNGPs) establishes that access to remedy should be provided for victims of such violations. Our session considered three new challenges for securing access to remedy in the context of AI:
- Guaranteeing remedy when violations result from decisions made by machines and algorithms, rather than humans
- Providing operational grievance mechanisms when there are hundreds of millions of rightsholders and billions of decisions
- Safeguarding access to remedy when dozens of companies, rather than a single corporate actor, are linked to a human rights violation via the interaction of different AI-based products and services
While these discussions can seem hypothetical, technologies are moving fast, and companies from all industries are rapidly integrating AI into their products, services, and operations.
Microsoft Vice President and Deputy General Counsel Steve Crown raised the challenges of knowing when a harm has taken place, identifying who might be at fault, and defining a remedy that can return the victim to their previous state. Crown provided the example of a young woman who was targeted with advertisements based on retail data analytics suggesting she was pregnant—and her father discovering this fact from direct mail, rather than from his daughter. In this case, had a privacy violation taken place? If it had, what remedy might be appropriate, and how could the company stop it from happening again?
Sandra Wachter, a researcher in data ethics at the University of Oxford and research fellow at The Alan Turing Institute, surfaced the notion of a “right to explanation” that might come into force under the new European General Data Protection Regulation (GDPR) in scenarios when decisions are made by machines, such as access to credit or employment opportunities.
However, Wachter highlighted that this right in the GDPR disappears once a human is involved in the process—even if the human is involved as a rubber stamp—and that many companies will oppose revealing detail about decision-making algorithms as being commercially confidential. Wachter proposed an alternative “right to explanation” model based on counterfactuals: statements describing the facts that led to a decision (such as income or educational achievement, for example) that may offer meaningful information to rightsholders without the need to reveal the internal logic of an algorithm. Wachter also spoke in favor of an independent watchdog to scrutinize companies and ensure accountability.
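To make the counterfactual idea concrete, here is a minimal illustrative sketch in Python. The decision rule, thresholds, and figures are entirely invented for illustration—they are not Wachter's model or any real lender's logic—but they show how a system could tell an applicant what fact would have changed the outcome without disclosing the algorithm itself.

```python
# Hypothetical sketch of a counterfactual explanation: rather than expose
# a model's internal logic, report the smallest change to an applicant's
# facts that would have flipped the decision.
# The scoring rule and all numbers below are invented for illustration.

def approve_credit(income: float, years_employed: float) -> bool:
    """Toy, made-up decision rule standing in for an opaque model."""
    return income * 0.4 + years_employed * 5 >= 50

def counterfactual_income(income: float, years_employed: float,
                          step: float = 1.0):
    """Search for the minimal income increase that turns a denial into
    an approval, and phrase it as a human-readable explanation."""
    if approve_credit(income, years_employed):
        return None  # already approved; no counterfactual needed
    extra = step
    while not approve_credit(income + extra, years_employed):
        extra += step
    return (f"Your application was denied. Had your income been "
            f"{income + extra:.0f} instead of {income:.0f}, "
            f"it would have been approved.")

print(counterfactual_income(100.0, 1.0))
# The rightsholder learns what to change, not how the model works.
```

The design point is that the explanation is stated in terms of the rightsholder's own circumstances, which is why it can be meaningful to them while remaining compatible with claims of commercial confidentiality.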
Google Free Expression and Human Rights Counsel Alex Walden spoke about how machines are being deployed to assist with judgments about controversial content uploaded by internet users, such as hate speech and terrorist content. These machines can be especially helpful given the sheer volume of content uploaded—but while machines can sift through huge volumes of content to identify cases, only humans have the necessary understanding of context and language to make final decisions.
A theme running throughout was the notion that AI is going to play an increasingly important role in our lives, and that it is going to be used by many industries, not only technology companies. Overall, I reached three conclusions about the application of the UNGPs in the age of AI.
First, it is important that the human rights implications of AI are understood by all sectors of the economy—such as retail, financial services, energy, healthcare, transportation, infrastructure, and the public sector.
Second, we should consider access to remedy through the lens of the rightsholder. AI is extremely complex, and only a very small number of people in the world know how it works. If AI is to fulfill its potential while mitigating accompanying risks, it is essential that civil society, rightsholders, and vulnerable populations benefit from channels to participate meaningfully in discussions about its application and have access to remedy. The professional communities engaged in the development of AI would benefit from a deep understanding of ethics issues and rightsholder perspectives, as is beginning to happen through initiatives such as the Partnership on AI and AI Now.
Finally, there is a need to assess whether the access to remedy being developed in the context of AI meets the remedy effectiveness criteria set out in the UNGPs, such as being legitimate, accessible, predictable, equitable, transparent, rights-compatible, and based on engagement and dialogue.
Answers to these questions will only arise over time (unfortunately, we can’t just ask Alexa, Siri, or Cortana!) and with the identification of use cases demonstrating how effective remedy can be obtained. We look forward to the opportunity of working with our member companies from all industries to explore these important conversations further.