Technological transformation brings complex, nuanced, and systemwide risks and opportunities for the realization of human rights. These risks and opportunities relate both to how technologies are designed and developed and to how they are deployed and used by companies, such as retailers.
As the retail industry continues its digital transformation, retailers need to consider the human rights impacts that may be associated with the use of artificial intelligence (AI) systems across their business and value chain. This may include the use of AI in retail stores to automate self-checkout, to personalize product recommendations on e-commerce sites, or to forecast demand and optimize supply chain operations.
This report identifies salient human rights issues associated with the increased use of AI technology in the retail sector and makes preliminary recommendations to companies on how they can address the human rights impacts of AI in retail.
Human Rights Impacts of AI in the Retail Industry
With the digital transformation of the retail industry, an increasing array of digital technologies is being used in combination to create automated processes, both in retail stores and in the retail supply chain. This transformation brings risks and opportunities for the realization of human rights that retailers need to consider.
Below, we list six main categories of human rights risk that may be associated with the use of AI technologies in the retail industry.
Human Rights That May Be Impacted
With the use of AI solutions, retailers may collect, use, and share customer data in ways that infringe on the right to privacy. For example, AI-powered video surveillance may include facial recognition that can be used to identify loyalty program customers and provide them with special promotions, or to identify potential shoplifters based on assumptions about how shoplifters look and act.
The use of AI solutions by retailers to personalize the consumer experience may result in discrimination against individuals on the basis of race, gender, age, disability, or other protected characteristics.
For example, targeted ads or product recommendations may discriminate against women, such as by recommending cleaning supplies based on biased assumptions. Alternatively, the information collected or generated by an AI solution may be used by retailers in discriminatory ways, such as using video surveillance data to discriminate against racial minorities.
The use of AI solutions by retailers may lead to a more efficient and fair distribution of goods and services; however, the same solutions may also be used in ways that limit access to them.
For example, retailers using demand forecasting solutions based on historical purchasing trends may continue to limit healthy grocery options in food deserts or provide access only to certain types of books in some communities.
AI solutions used for labor planning and scheduling may impact working conditions for workers throughout the retail value chain (e.g., product manufacturing and sourcing, fulfillment and delivery, and store operations).
For example, based on near-real-time demand forecasting insights, retailers may move toward automated or “just-in-time” scheduling to respond to granular fluctuations in demand, giving workers little time to adjust their schedules and leading to unstable work schedules and income uncertainty.
The use of AI solutions to forecast demand and personalize customer experience may lead to the behavioral profiling of customers to predict what they want or need and, as a result, to guide them down certain purchasing paths.
The prospect that an individual may see only the options or experiences selected by an AI model signals a loss of human autonomy and a possible infringement of the right to freedom of thought.
The use of AI solutions to improve supply chain operations may lead to increased efficiency along the retail value chain, leading to positive impacts on individuals’ right to a healthy environment.
The use of AI solutions to improve the customer experience may either perpetuate environmentally destructive purchasing practices, resulting in negative environmental impacts, or promote sustainable consumption habits, resulting in positive environmental impacts.
Addressing responsible AI challenges typically requires the involvement of multiple functions within a company. For companies that do not yet have a dedicated team addressing these issues, we recommend starting the process by involving the following functions:
- Teams that can manage the issue from a central perspective, such as: Sustainability, Human Rights, Ethics, Legal Compliance
- Teams that use AI technologies, such as: Supply Chain, Marketing, Customer Service, Human Resources
- Teams that develop or purchase AI technologies, such as: Technology, Product, IT, Research and Development, Procurement
To mitigate any adverse human rights impacts related to the use of AI, companies can take a range of different actions, such as undertaking human rights due diligence, testing AI models for bias and externalities, and providing transparency about how AI models work.
Download the PDF to read our recommendations.