- BSR is launching a public Human Rights Assessment of Generative AI across the value chain in 2024.
- The Assessment will cover the “why”, “how”, and “what” of integrating human rights approaches into company workflows on GenAI governance.
- BSR will be engaging with experts and stakeholders over the next few months to inform this work.
Recent advancements in generative AI (GenAI) have accelerated both the benefits and the risks of the technology. Although publicly available tools were launched just over a year ago, a recent survey found that 22% of employees are already using them at work. While early models such as GPT-2 only worked with text, new models like Gemini and GPT-4 are multimodal: they can simultaneously process and understand different types of inputs, including text, images, and sounds. Such features improve product performance but also create human rights risks, including new ways to produce harmful content, conduct surveillance, or carry out cyberattacks.
To help companies identify, prioritize, and mitigate these risks and maximize opportunities, BSR will be conducting a sector-wide human rights assessment (HRA) of GenAI over the coming months. The assessment will identify human rights risks across the value chain of GenAI, from upstream data workers to model developers and end-users, and make recommendations on how to address these risks.
The Human Rights Assessment will be informed by interviews with leading companies that develop and deploy GenAI and with a broad range of stakeholders, such as civil society organizations, intergovernmental organizations, and academics. The assessment will also draw on diverse research sources, including industry papers, academic literature, and NGO reports.
The HRA will use the proven and internationally recognized methodology provided by the UN Guiding Principles on Business and Human Rights (UNGPs) to provide practical guidance for companies on how to identify, prioritize, and mitigate GenAI-associated risks. The HRA will specify how GenAI developers and deployers can integrate that methodology into existing AI governance workflows, such as model evaluations, impact assessments, and institutional review boards.
To help align existing processes and frameworks, the HRA will also explore how rights-based approaches can complement the ethics- and trust-and-safety-based approaches that dominate current industry practice. Company-specific AI Principles have already helped to ground responsible AI product development and deployment in good practice, but integrating rights-based approaches will help companies better meet their commitments by ensuring methodological consistency across the industry. A rights-based approach also provides a more comprehensive understanding of risk that focuses on impacted stakeholders (“rightsholders”), particularly the most vulnerable.
The HRA is coming at an important inflection point in the responsible AI field. Stakeholders are increasingly emphasizing the importance of a rights-based approach to responsible and safe AI, while the EU’s provisional agreement on the AI Act includes a mandatory obligation to assess high-risk AI systems for impacts on human rights (fundamental rights impact assessments, or FRIAs).
Civil society stakeholders, many of whom lobbied for the inclusion of FRIAs in the AI Act, continue to call for a rights-based approach to AI governance, but there is a lack of public analysis and resources showing companies how to take a human rights-based approach to AI in practice. We aim to help fill that gap.
The HRA will build on BSR’s existing work on GenAI and human rights with a variety of companies, as well as our recent collaborations with the B-Tech project of the UN’s Office of the High Commissioner for Human Rights (foundational paper on the value proposition of the UNGPs, overview of current company practice, GenAI human rights risk taxonomy) and our FAQ on the ethics and human rights implications of GenAI.
We’ll coordinate closely with peers undertaking related research and analysis on the responsible design, development, and deployment of GenAI to ensure the HRA complements rather than duplicates other work. We’ll also engage with a broad group of experts and stakeholders to inform our analysis.
We aim to publish the HRA and accompanying practical guidance for companies in Q3 of 2024. We look forward to contributing to the vibrant public debate on generative AI and producing helpful, practical resources for the public domain.