Seven Things Every Company Should Know about Artificial Intelligence and Sustainable Business

January 11, 2018
Authors
  • Dunstan Allison-Hope, Senior Advisor, BSR
  • Jacob Park, Director, Transformation, BSR
  • Michael Rohwer, Former Director, Technology Sectors, BSR

This is the first in a series of blog posts BSR will publish in 2018 exploring the intersection of disruptive technologies and sustainability.

Artificial intelligence (AI) is advancing rapidly, thanks to ever-more-powerful computing, massive growth in the availability of digital data, and increasingly sophisticated algorithms. The world’s largest technology firms are investing billions to develop their AI capabilities, and companies across industries, from travel to real estate to fashion, are racing to bring AI-enabled services to market.

AI has the potential to deliver significant social benefits in areas such as healthcare (improved diagnostics), transportation (self-driving vehicles), and law enforcement (improved fraud detection). It also poses new social risks, including threats to non-discrimination (algorithmic bias), privacy (misuse of personal information), child rights (lack of informed consent), and labor rights (mass displacement of workers by machines).

While by no means exhaustive, we believe the following seven considerations are essential for our members to factor into their AI strategies.

  1. AI is relevant for all industries, not just technology companies. The development of AI today is being driven by Silicon Valley, and it is understandable that private-sector participation in the dialogue about the social implications of AI has been dominated by technology companies. However, it is an urgent priority for companies in other sectors using AI—such as financial services, healthcare, infrastructure, public services, and retail—to understand how AI impacts their business models, employees, and customers.
  2. The human rights and ethics impacts of AI are especially important. The UN Guiding Principles on Business and Human Rights were created to guide the integration of human rights into business decision-making, and they should be deliberately applied to the development and deployment of AI. This means asking and addressing questions such as “What are the most severe potential impacts?”, “Who are the most vulnerable groups?”, and “How can we ensure access to remedy?” Companies should take a “human rights by design” approach to AI.
  3. Environmental issues are important, too. While significant attention has been paid to the ethical and human rights implications of AI, there is also a tremendous opportunity to improve the environmental performance of AI itself, as Google has done by using machine learning to radically improve the power usage effectiveness of its data centers. AI can also be deployed as an environmental solution, as Microsoft’s AI for Earth commitment demonstrates. At the same time, it will be important that the data processing demands created by AI do not substantially increase energy use.
  4. Research, product development, and marketing teams must be engaged on sustainability. In our 2017 annual survey of sustainable business leaders, we asked which functions were most important for achieving substantive progress on sustainability: only 24 percent of respondents mentioned product development, 13 percent mentioned research and development, and 8 percent mentioned marketing. These functions will have a significant influence on how AI is developed and deployed, so it is crucial that they participate actively in the conversation around AI and sustainability.
  5. Companies will need to communicate the complexity of AI in accessible ways. AI is extremely complex, and only a very small number of people in the world, most of them concentrated inside companies, understand how it works. If AI is to fulfill its potential while mitigating the accompanying risks, civil society, rights-holders, and vulnerable populations should have access to information about the issues at stake and channels to participate meaningfully in discussions about its application.
  6. Ethics and principles for AI are being developed rapidly, but implementing them in practice will be challenging. It is noteworthy how rapidly the AI field has developed principles, with organizations such as the Institute of Electrical and Electronics Engineers, the Software and Information Industry Association, the Information Technology Industry Council, and the Future of Life Institute all publishing statements of ethics. Initiatives such as the Partnership on AI, the Ethics and Governance of AI Fund, and AI Now are embarking on substantial efforts to explore key dilemmas and facilitate dialogue on them. However, turning theory into practice will require a thorough review of real-life cases.
  7. The future of AI is uncertain, but decisions made today can have long-term consequences. Taking a responsible approach to AI will require grappling with rapid change, uncertainty, and complexity. We can’t know exactly what path the development and deployment of AI will take, so we should be prepared for different versions of the future and think through the possible long-term implications of today’s decisions. Futures thinking, also known as strategic foresight, provides structured ways to explore multiple possible futures and chart a path forward that accounts for the range of outcomes that might unfold.

In our recent report on the Future of Sustainable Business, we listed the intersection of technology, ethics, and human rights as one of the three big issue sets that we believe need to be front and center on the business agenda—not only for sustainability reasons, but because these questions will be increasingly central to business performance and strategy. We have much to lose if AI does not evolve in ways that support the public good, and we look forward to working with you to help ensure that it does.

Let’s talk about how BSR can help you to transform your business and achieve your sustainability goals.
