How Advances in Neurotech Will Impact Human Rights

December 14, 2022
Authors

  • Kelly Metcalf, Manager, Human Rights, BSR

  • Anna Iles, Associate Director, Transformation, BSR

The emerging field of “neurorights” asks how neurotechnology could affect people’s human rights to freedom of thought, identity, privacy, and free will. Technology is getting ever closer to detecting and influencing our behavior. This piece examines how such technology could impact our human rights; a second part will explore the responsibility and role of business, from tech companies to tech-enabled products and services, in protecting and respecting users, both now and in the future.

Technology can increasingly detect and influence what we think and how we behave. The expanding field of “neurotechnology” encompasses tools that can record or interfere with human brain activity, from physical devices, like wearables and medical implants, to artificial intelligence (AI) designed to decode thought or speech patterns. Then there is technology that impacts how we experience the world and feel about ourselves, from algorithms to social media apps.

While many of these technologies offer benefits for health, wellness, and human capability, it is important to understand how they affect human freedoms. Under international human rights law, business has a responsibility to respect human rights, such as the rights to privacy and to freedom of expression, thought, and opinion. However, as technology accelerates into new and unknown territory, many believe the current international human rights framework may not be adequate to meet emerging human rights risks.

What Are Neurorights?

The NeuroRights Foundation, co-founded by Rafael Yuste of the NeuroTechnology Center at Columbia University and Jared Genser of Perseus Strategies, aims to address gaps in existing human rights frameworks by outlining five distinct “neurorights” that could be undermined by the misuse or abuse of technology.

The right to mental privacy highlights the vulnerability of neural data to sale, commercial transfer, and unauthorized use.

Today, wearable devices like headbands monitor and stimulate brain activity to help users increase their focus or improve their sleep patterns. Brain-to-text interfaces access brain activity in a way that allows you to write simply by thinking. Medical brain implants can help patients with severe paralysis gain a level of functional independence. Recent AI models aim to decode speech from recordings of brain activity, which could help patients with traumatic brain injuries communicate again, while voice biomarker technology analyzes snippets of a person’s speech to identify mental health issues, like depression and anxiety.

While these technologies could vastly improve quality of life, complete access to deeply personal neural data also raises privacy concerns for users beyond the scope of current human rights protections. One concern is the vast area of uncertainty in how neural data might be used in the future: what potential applications are users consenting to? While there are limits on what can be deciphered from today’s data, technology will get smarter at processing, decoding, and leveraging it. Entities collecting neurodata—whether that’s from wearable devices and implants, or monitoring systems for workforce safety or productivity—could face increased scrutiny on data storage and management.

The right to personal identity calls out the power of technology to impact how we perceive and express ourselves.

Social media has already had a profound impact on freedom of expression and identity. While research suggests moderate use of screens and devices can support social and emotional well-being in children, significant screen time has been shown to disrupt circadian rhythms, affecting sleep and hormonal cycles in ways that may be a factor in early puberty. Overuse of TV and video games may also shape how we develop, disrupting motor skills and the ability to concentrate.

The right to personal identity may be among the least protected of the neurorights under current international frameworks, which contain no concrete language on how identity is formed or how to protect self-perception and self-expression.

What happens to our identity in a world where technology is interacting daily with our neural activity and hormones, and responding to data from vocal and facial expressions? And how might society and regulators respond to new research documenting unintended consequences?

The right to free will recognizes that decision-making is increasingly subject to technological manipulation.

Algorithmic amplification and recommender systems, common in social media and streaming services, also have tremendous potential to shape how we access information and form opinions. A wide variety of studies examine the impact of algorithmic amplification on news, conflict, and commerce, reaching an equally diverse range of conclusions, from algorithms increasing the prominence of high-quality over low-quality information to their potential influence on elections.

For a long time, we have accepted the role of advertising in influencing our decisions, and increasingly, we accept predictive text and corrective algorithms editing how we express ourselves in real time. However, the use of technology to discern and manipulate thoughts and behavior poses a very different level of risk to human rights.

While the NeuroRights Foundation does not propose new rights covering freedom of thought or freedom of opinion, as these are already established human rights, it does explore the impacts of neurotechnology on these freedoms.

The right to protection from algorithmic bias points to the widespread socioeconomic impacts of bias in algorithms and neurotechnology.

Bias is widespread in the development and application of technology. Research has shown that algorithms used by healthcare companies, for instance to support the detection of heart disease, draw on data that is not diverse, leading to unequal outcomes or inaccurate results, particularly for patients of color. The UK’s Department of Health recently launched an investigation into the impact of potential bias in medical devices, including the data used in algorithms and AI tools, on patients from different ethnic groups.

Without proper controls, bias can shape neurotechnology, directly affecting the quality and outcomes of user experiences. Diverse, cross-cutting teams and research methodologies need to be in place when designing, implementing, and monitoring technologies that interact with our minds, so that barriers to access and adverse impacts from use are identified and mitigated.

This brings us to the final right: fair access to mental augmentation, which raises the question of how far a “neurotech divide” could hinder equality and inclusion.

While some new offerings aim to enable inclusion by restoring or replicating brain functionality, access to them will not be equitable. Meanwhile, the application of neurotech geared toward augmenting human capacity could require scrutiny in classrooms, workplaces, and competitive arenas.
