A Business Guide to Responsible and Sustainable AI

March 2024 Edition


Insights+ provides insights and foresight that empower sustainability leaders to be strategic C-Suite advisors by identifying and analyzing emerging and cross-cutting sustainability issues, as well as supporting their efforts to achieve tangible progress toward a more just and sustainable world.

In the sixth edition of Insights+, the Human Rights, Climate, and Technology teams set out the latest developments in upcoming regulations, discuss the implications for how AI is utilized in business across industries, and spotlight key emerging issues.

What Business Leaders Need to Know

Executive Summary

With the increasing use of artificial intelligence (AI) technologies across businesses, leaders are questioning the sustainability implications: is the use of AI going to increase our company’s carbon footprint, or will it lead to efficiencies that offset the emissions from increased computing? How should we think about the social and ethical impacts of AI? How is evolving AI regulation going to affect us?

Now is the time to make sure a rights-respecting approach is integrated into how AI technologies are built and deployed. While the breadth and depth of long-term environmental and social impacts of AI are still unknown, business leaders can start by considering the following:

  • Environmental Impacts: Sustainability leads would be well advised to consider the environmental impacts of their AI investments. Training, fine-tuning, and running AI models can be energy-intensive and can increase a company’s footprint. The growing use of AI software also requires increased data center and hardware capacity, which carries significant environmental impacts, can lead to increased e-waste, and adds to a company’s scope 3 emissions.

    Additionally, business activities enabled by AI systems can have environmental impacts further downstream. For example, AI can be used to speed up fossil fuel extraction, and AI algorithms used in advertising can drive increased consumption of unsustainable products. Companies developing and deploying AI systems should consider how such use cases may adversely impact the environment and slow efforts to address the climate crisis.

  • Human Rights Impacts: The use of AI by businesses can impact human rights in many ways, depending on how it is used. Key human rights issues include bias and discrimination (e.g., the use of AI systems in hiring decisions that result in discrimination against underrepresented groups), privacy and surveillance (e.g., the use of AI systems to analyze employee behavior in the workplace), and bodily security (e.g., the use of AI systems to automate decision-making in ways that impact health, safety, or personal security).

    It’s important to take a human rights-based approach to these issues to minimize risks to people and maximize opportunities to benefit society. The UN Guiding Principles on Business and Human Rights (UNGPs) provide an authoritative, global framework for all companies, including those developing, deploying, and utilizing AI.   

    There are also broader societal impacts of AI that are important to consider, including implications for employment. Although projections about the impact of AI on jobs vary widely, it is plausible that AI could radically reshape labor markets. Additional concerns relate to the impact AI might have on democracy and information ecosystems, financial stability, and armed conflict.

At the same time, AI holds real promise to help advance sustainability goals. AI technologies are already being used to accelerate the drug discovery process, enhance energy efficiency, and develop novel materials such as lightweight polymers for improving automotive efficiency and inorganic crystals essential to computer chips, batteries, and solar panels—all with fewer environmental and social impacts.

As AI technologies are rapidly deployed by business, it is vital for companies to understand and manage the full range of risks and opportunities so that both the sustainability impacts and business success of AI initiatives are addressed. We look forward to working with our members to advance the responsible use of AI.

Latest Developments

Over the past couple of years, the release and widespread adoption of ChatGPT and other generative AI tools powered by large language models led to concerns over privacy and copyright issues; heated safety debates among researchers, including a call to pause the development of generative AI systems; a rush among policymakers to regulate AI technologies; and worries about the strain that AI is putting on power grids.

The explosive growth in generative AI technologies has accelerated the adoption of AI software. However, safety, ethics, and human rights considerations have been around for some time. Recent years have seen the growth and maturation of Responsible AI efforts by many companies, particularly in the tech sector. Responsible AI refers to an evolving set of practices aimed at developing and deploying AI in a way that considers its potential adverse impacts. These practices employ a mix of different approaches, including those based on ethics and human rights.

Evolving regulations are shifting the way that companies think about and assess risks related to the use of emerging technologies. This includes specific AI regulation (such as the EU AI Act), but also regulations related to digital services and platforms (such as the EU’s Digital Services Act and the UK’s Online Safety Act), which cover the use of algorithms, and broader corporate due diligence regulations (such as the EU’s CSRD and proposed CSDDD), which cover certain uses of technology by companies.

Taking a human rights-based approach aligned with the UNGPs can help ensure compliance with a broad range of regulations, both AI-focused and otherwise, that require companies to consider AI-related risks to people.

Specifically, the EU AI Act (currently in its final stages and expected to be adopted in April) will require companies, both the developers and the deployers of AI systems, to identify and mitigate the potential adverse impacts of AI. In addition to requiring human rights impact assessments of certain AI systems (generally those expected to carry the greatest risks to human rights), the regulation will require companies to ensure that AI systems are developed and deployed in ways that minimize potential risks to people, that biases in datasets are addressed, that there is sufficient human oversight of AI systems’ deployment, and that there is greater transparency so that individuals know when they are interacting with AI.

Similarly, policymakers have started considering regulating the environmental impacts of AI. Recently introduced in the US, the Artificial Intelligence Environmental Impacts Act calls for the creation of standards to measure the full range of environmental impacts across the AI lifecycle, from the mining of minerals to the manufacturing of hardware, to the training and use of AI models. The act also calls for the creation of a voluntary reporting framework for companies developing or operating AI systems.

In addition to voluntary industry efforts and regulatory developments, various international bodies have issued guidance or standards for the responsible development and deployment of AI. Examples include the OECD’s AI Principles and UNESCO’s Recommendation on the Ethics of Artificial Intelligence.

What Does This Mean for Business?

Sustainability issues related to AI technologies are relevant for every industry, not just tech companies. Risks and opportunities associated with AI relate both to the design and development of these technologies and to how they are deployed and used by companies outside the tech sector.

AI use cases vary widely across industries. Last year, BSR worked with member companies to explore the potential human rights impacts of AI in four key industries and published primers detailing the findings: retail, extractives, financial services, and healthcare. We continue to work with our members to explore the impacts of AI across different industries and use cases.


For companies looking to develop or deploy AI technologies, here are five key action items to consider:

1. Adopt a Responsible AI Approach to the Development and Deployment of AI

BSR recommends establishing a holistic approach that positions the company to address both the human rights and environmental impacts of AI. Some of the fundamental steps a company should take include developing AI principles and establishing a governance mechanism. These should be grounded in human rights (i.e., the UNGPs) and environmental standards.

Setting up a Responsible AI program or approach typically requires the involvement of various functions at a company. For companies that do not have a dedicated team addressing these issues yet, we recommend starting the process by involving the following functions:

  • Teams that can manage the issue from a central perspective, such as Sustainability, Human Rights, Ethics, Legal Compliance
  • Teams that use AI technologies, such as Operations, Supply Chain, Human Resources
  • Teams that develop or purchase AI technologies, such as Engineering, Product, IT, Research and Development, Procurement

2. Undertake Human Rights Due Diligence

To identify and address the actual and potential human rights impacts of the AI technologies that they are developing, using, or procuring, companies can undertake human rights due diligence, a process that specifically assesses risks to people (as opposed to risks to the business). Given the rapidly evolving nature of AI and how it is used, human rights due diligence should be undertaken on an ongoing basis.

As part of their human rights due diligence, companies may need to undertake human rights impact assessments (HRIAs) of specific AI technologies or use cases. For example, a financial services company using AI to assess loan applicants’ eligibility could conduct an HRIA to explore possible human rights impacts, including risks to privacy and non-discrimination. The results of these impact assessments should then be used, where necessary, to modify or adapt the technologies or to ensure that sufficient mitigation measures or safeguards are in place to address any identified risks.
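
To illustrate one quantitative input to such an assessment, here is a minimal sketch that computes group-level approval rates and a disparate-impact ratio for a hypothetical lending model. The data, group labels, and the 0.8 review threshold (a screening heuristic borrowed from US employment-selection guidance) are all illustrative assumptions, not BSR methodology; a real HRIA would combine metrics like this with qualitative analysis and stakeholder engagement.

```python
# Hypothetical bias screen for a lending model's decisions.
# All data are made up for illustration; real assessments use
# many more metrics and engage affected stakeholders.
from collections import defaultdict

# (applicant_group, approved) pairs produced by the model -- hypothetical
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {group: approvals[group] / totals[group] for group in totals}
print("Approval rates:", rates)

# Disparate-impact ratio: lowest group rate over highest. A common
# screening heuristic flags ratios below 0.8 for further review.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}",
      "-> flag for review" if ratio < 0.8 else "-> within heuristic threshold")
```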

Note that BSR will be conducting a sector-wide human rights assessment (HRA) of GenAI over the coming months. The assessment will identify human rights risks across the value chain of GenAI, from upstream data workers to model developers and end-users, and provide recommendations on how to address these risks. We aim to publish the HRA and accompanying practical guidance for companies later this year.


3. Assess the Environmental Impacts of AI

In addition to human rights due diligence, companies should assess the environmental impacts of their AI systems throughout the full value chain, from the construction of data centers to hardware use and disposal, as well as the development, training, and use of AI models. Throughout this value chain, increased use of AI technologies is driving higher energy use, water consumption, and e-waste, along with the embodied carbon of additional hardware. In the absence of responsible AI policies and practices, the environmental impact of the AI value chain is likely to worsen.

Quantifying the environmental impacts of AI systems can be challenging because the tech stack has many layers, and collecting accurate data on each of them may be impossible. New tools, such as the Green Software Foundation’s automated tooling for assessing the carbon footprint of software solutions, can be helpful. Even when impacts are difficult to quantify, companies should take proactive measures to reduce the environmental footprint of their AI systems, both independently and through collaboration, and increase the transparency of their efforts.
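
As one illustration of the kind of calculation involved, the Green Software Foundation’s Software Carbon Intensity (SCI) specification expresses a workload’s footprint as operational plus embodied emissions per functional unit. The sketch below applies that formula; every input value is a hypothetical placeholder, and in practice measuring those inputs accurately is the hard part.

```python
# Sketch of the Software Carbon Intensity (SCI) formula from the
# Green Software Foundation: SCI = (E * I + M) per R, where
#   E = energy consumed by the software (kWh)
#   I = carbon intensity of the electricity used (gCO2e/kWh)
#   M = embodied emissions allocated to the workload (gCO2e)
#   R = functional unit (e.g., an API request)
# All input values below are hypothetical, for illustration only.

def sci(energy_kwh: float, grid_intensity_g_per_kwh: float,
        embodied_g: float, functional_units: int) -> float:
    """Return grams of CO2e per functional unit."""
    operational_g = energy_kwh * grid_intensity_g_per_kwh
    return (operational_g + embodied_g) / functional_units

# Example: one day of a model-inference service (hypothetical numbers).
footprint = sci(
    energy_kwh=120.0,                # measured or estimated server energy
    grid_intensity_g_per_kwh=400.0,  # location-based grid intensity
    embodied_g=50_000.0,             # allocated share of hardware manufacturing
    functional_units=1_000_000,      # requests served that day
)
print(f"~{footprint:.3f} gCO2e per request")
```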

4. Develop a Responsible Sourcing Approach

Human rights and environmental due diligence should extend through the full AI value chain, including services that are sourced to build and train AI systems. For example, companies can apply sustainable procurement principles when assessing which data center, co-location facility, or cloud service provider to contract with, and they can request transparency data on their energy and water usage. For companies utilizing such services, the Corporate Colocation and Cloud Buyer’s Principles can be a helpful resource.

Companies should also be aware of potential worker rights issues in the AI value chain. AI systems need to be trained on massive datasets, which are collected and labeled by humans. The data collection, labeling, and enrichment industry has emerged to meet this need at scale. While this industry has created millions of jobs in the Global South, it has also come under fire for labor rights abuses, such as exploitation through low pay and insecure employment conditions, and some companies have been labeled “digital sweatshops.” For companies utilizing such data services, the Partnership on AI’s responsible sourcing guide can be a good starting point.

5. Apply Futures Thinking to Explore the Impacts of AI

The broad societal impacts of AI, such as the displacement of workers, unemployment, or radical advances in vaccine development, cannot be attributed to or addressed by any individual company. These are the cumulative impacts of many companies’ actions. Nevertheless, these impacts could profoundly reshape the operating context for sustainable business. 

Properly contending with the implications of these broader and longer-term impacts requires a creative approach. Futures thinking techniques, like scenario planning, are well-suited to helping a business explore disparate possibilities, understand emerging risks, and create more resilient strategies that account for highly dynamic developments.

Emerging Issues

Large-Scale Machine Learning Applications Speed up Material Discoveries

  • The AI tool GNoME (Graph Networks for Materials Exploration) has discovered 2.2 million new crystal structures, of which 380,000 are stable enough to be considered for development into new materials. This increased efficiency in materials discovery may speed up the production of transformative technologies, such as superconductors, supercomputers, and batteries, which are particularly relevant to the energy transition.
  • GNoME showcases how large-scale machine learning applications may speed up scientific discovery in materials science, as the models have demonstrated an emergent capacity at scale to generalize beyond the distribution of their training data. However, various challenges remain in synthesizing and applying new materials, and commercialization will continue to take years.

AI Takes a Lead in Providing Frontline Healthcare Services

  • AI has potential to relieve the administrative burden of healthcare and may eventually take over some clinical work. For instance, Forward Health’s AI-powered CarePods aim to replace the doctor’s office for routine services like throat swabs or even blood work, while a Microsoft AI application transcribes doctors’ notes. This could allow healthcare providers more time to engage with patients and deliver the right diagnosis and care. 
  • AI can assess vast amounts of information—including patient medical history, the latest medical research, and data across medical and social fields—to provide comprehensive and potentially more accurate medical diagnoses and develop more creative and tailored treatment plans for review and approval by healthcare professionals.  
  • The success of AI in healthcare depends on the data that power it. Algorithmic bias and lack of diversity in data can lead to misdiagnosis and inadequate care, especially for minority communities. Additionally, safeguards must be put in place to ensure patient data are protected.  

Provenance Solutions Rise in Response to Deepfake Concerns

  • Provenance solutions, which identify the origin of digital content, are being developed due to the rise in deepfake content (including hyper-realistic computer-generated photos, videos, and audio) resulting from the increased popularity and accessibility of AI tools.
  • OpenAI’s recently previewed text-to-video model, Sora, can create hyper-realistic videos from simple text prompts. Stakeholders have raised concerns that such generative AI tools may facilitate the creation of increasingly convincing mis- and disinformation, among other risks. Numerous jurisdictions have enacted deepfake laws, which include protections against the non-consensual sharing of intimate images, requirements to label artificially generated content, and prohibitions on certain uses of deepfakes.
  • The Coalition for Content Provenance and Authenticity (C2PA) and other initiatives are creating platforms that attempt to provide instant provenance information by turning digital media into verifiable, secure digital assets, thereby combating deepfakes and helping to verify online content; a simplified sketch of the underlying signing-and-verification pattern follows this list.
  • Over time, platforms may attempt to instantaneously verify all posted content. However, provenance information may not be available for all content, nor would provenance information be a definitive indicator of the veracity of a piece of content. The speed at which AI tools can adapt will also continue to challenge solutions.
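
To make the provenance concept concrete, here is a minimal sketch of the signing-and-verification pattern that underlies schemes such as C2PA: hash the media, bind the hash into a signed manifest, and let anyone holding the public key confirm the content is unaltered. This is a simplified illustration with assumed metadata fields, not the actual C2PA specification, which uses standardized manifest formats, certificate chains, and manifests embedded in the media file itself.

```python
# Simplified provenance sketch: sign a manifest bound to a media file's
# hash, then verify it. Illustrative only -- not the C2PA specification.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(media: bytes, creator: str, tool: str) -> bytes:
    """Bind creation metadata to a hash of the media bytes."""
    manifest = {
        "creator": creator,
        "tool": tool,
        "media_sha256": hashlib.sha256(media).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True).encode()

# Producer side: create and sign the manifest (names are hypothetical).
media = b"...media bytes..."  # stand-in for an image or video file
private_key = Ed25519PrivateKey.generate()
manifest = make_manifest(media, creator="News Org", tool="Camera X")
signature = private_key.sign(manifest)

# Consumer side: recompute the manifest from the received media and
# verify the signature with the producer's public key.
public_key = private_key.public_key()
received = make_manifest(media, creator="News Org", tool="Camera X")
try:
    public_key.verify(signature, received)
    print("Provenance verified: media matches the signed manifest.")
except InvalidSignature:
    print("Verification failed: content or manifest was altered.")
```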

Our Experts

Our team consists of global experts across multiple focus areas and industries, bringing a depth of experience in developing sustainable business strategies and solutions.

Hannah Darnton

Director, Technology and Human Rights

San Francisco

Richard Wingfield

Director, Technology Sectors

London

Lindsey Andersen

Associate Director, Human Rights

San Francisco

Lale Tekişalp

Associate Director, Technology Sectors

San Francisco

Ameer Azim

Director, Climate Change

Washington, D.C.

Jacob Park

Director, Transformation

New York

Anna Iles

Associate Director, Transformation

Hong Kong

Scarlet George

Manager, Technology Sectors

San Francisco

Kelly Gallo

Director, Technology Sectors

San Francisco

Let’s talk about how BSR can help you to transform your business and achieve your sustainability goals.

Contact Us