Over the last couple of years, AI—and particularly generative AI—has been mainstreamed into everyday life and the operations, products, and services of many companies. In 2026, generative AI will continue to advance, including in audio, imagery, and video, and new types of AI technologies will become more widespread, including agentic AI, AI-powered robotics, and driverless vehicles.
While these innovative AI technologies offer new opportunities, they also raise questions about risks to people, increased environmental impacts, and how they should be governed to ensure transparency and accountability. Here, BSR explores the challenges and opportunities associated with AI for companies, given the trends we are seeing in 2026, and key steps for building the systems and processes to ensure a responsible approach to AI.
Why Responsible AI Is a Strategic Imperative
Evolving regulatory requirements, the geopolitical dimensions of AI, and the emergence of new AI technologies and use cases all contribute to a complex and fast-changing landscape that can be challenging for companies to navigate.
When it comes to AI regulation, governments are taking divergent approaches. The EU has advanced comprehensive legislation through the AI Act (though not without pushback), while the U.S. has emphasized innovation over safety in federal policy and moved to limit state-level regulation. Other governments are pursuing sector-specific rules or voluntary risk management frameworks.
AI is also increasingly viewed through a geopolitical and security lens. Governments are investing heavily in defense applications and strengthening partnerships with technology companies. At the same time, semiconductor supply chains—critical to AI development—remain volatile, prompting efforts to localize chip manufacturing.
Scrutiny from external stakeholders is also rising. Investors, customers, employees, civil society, and others are all paying close attention to how companies develop, procure, and use AI, and the measures they put in place to identify, mitigate, and remedy risks and harms to people, societies, and the environment.
As AI is integrated into products, operations, and decision-making processes across industries, its human rights and environmental impacts are expanding in scale and complexity. This, combined with the trends above, makes the case for a responsible approach to AI more urgent.
All of this means that AI is not just a technical issue—it’s a business issue. Companies that fail to anticipate and manage AI’s impacts risk losing trust, facing regulatory sanctions, and undermining their license to operate. Conversely, companies that adopt a responsible approach to the development, deployment, and use of AI will strengthen stakeholder confidence, unlock new opportunities for sustainable growth, and position themselves as leaders in shaping the business landscape of the future.
Key Steps for Responsible Use of AI within Companies
We recommend that companies developing, procuring, and deploying AI systems take these four steps in 2026.
1. Adopt a Responsible Approach to the Development and Deployment of AI
Companies should establish a holistic Responsible AI approach grounded in international standards, including the UN Guiding Principles on Business and Human Rights (UNGPs) and the OECD Due Diligence Guidance for Responsible Business Conduct, and aligned with credible environmental standards. Core elements of such an approach include:
- Public commitments and policies that articulate values, principles, and expectations for respect for human rights, environmental sustainability, transparency, and accountability.
- Governance and accountability structures that assign clear ownership and oversight.
- Established management responsibilities for implementing and monitoring relevant policies and risk management processes.
- Review, escalation, and feedback processes for high-risk or novel AI use cases, with clear integration of learnings into governance and risk management processes.
Governance should be integrated across the company into enterprise risk management, procurement, product development, sales, and board-level oversight. Where no dedicated Responsible AI function exists, companies can begin by convening sustainability, human rights, legal, compliance, engineering, product, procurement, and operations teams to clarify ownership and responsibilities.
2. Undertake Human Rights and Environmental Due Diligence
Companies should identify and address actual and potential impacts associated with AI systems across their value chain through due diligence. This includes risks to people—such as those related to privacy, non-discrimination, freedom of expression, and labor rights—as well as environmental impacts, including energy use, water consumption, emissions, and e-waste.
Due diligence should extend across the full AI value chain, from data inputs and model development to cloud infrastructure and downstream use. Companies should consider cumulative and systemic risks, not only isolated product-level harms.
Human rights due diligence may include conducting human rights impact assessments for specific AI products, features, or use cases in higher-risk contexts. Environmental due diligence should consider impacts across the full AI value chain—from semiconductor manufacturing and data center construction to model training, deployment, and hardware disposal. While quantification remains challenging due to limited transparency, companies should not delay action. Proactive mitigation, supplier engagement, and transparency are critical.
3. Develop a Responsible Approach to Sourcing, Procurement, and End Use
A Responsible AI approach extends beyond internal governance to include sourcing, procurement, and end-use decisions.
Companies relying on data collection and annotation services should conduct due diligence on labor conditions, strengthen contractual expectations, and leverage emerging guidance, such as the Partnership on AI’s responsible sourcing framework.
Companies should also consider the environmental and community impacts of expanding AI infrastructure by integrating environmental criteria into data center and cloud procurement decisions, seeking clear disclosure on energy sources, water use, and emissions from cloud and data center providers, and drawing on resources such as the Corporate Colocation and Cloud Buyers' Principles.
Finally, companies should ensure AI-related products and services are used as intended and that adverse impacts in the use phase are avoided or mitigated. This may include conducting risk assessments on specific customers, markets, or use cases prior to deployment, setting clear use-case limitations, and establishing monitoring and audit mechanisms, stakeholder engagement, and grievance and remedy processes.
4. Explore How AI Can Support Your Sustainability Goals
While AI introduces new risks, it also presents opportunities to advance sustainability objectives. Companies are increasingly using AI to improve operational efficiency, strengthen climate resilience, and enhance transparency across value chains.
AI can help optimize energy use, improve logistics and reduce emissions, enhance climate modeling and risk forecasting, and monitor deforestation and biodiversity loss. It can also strengthen sustainability reporting and impact analysis by enabling companies to process large volumes of data and identify patterns across operations. When applied responsibly, AI can support more informed decision-making and accelerate progress toward climate, environmental, and broader societal goals.
How BSR Can Help
Whether developing AI systems, procuring third-party tools, or embedding AI across your operations, it's important to build a responsible approach now. To learn more, reach out to BSR's Responsible Technology team.