The emerging field of “neurorights” examines how neurotechnology could affect people’s human rights to freedom of thought, identity, privacy, and free will. Undoubtedly, technology is getting closer to influencing our behavior. Following our initial piece exploring how this technology could impact our human rights, today we focus on the responsibility and role of business—from tech companies to tech-enabled products and services—to protect and respect users, both now and in the future.
Evolving Rights Frameworks
Fundamentally, human rights principles and frameworks need to evolve to respond to the emerging implications of neurotechnology (neurotech). Already, there are efforts at both the national and international levels to enshrine neurorights in laws and treaties—indicating that a shift in expectations and regulations for business related to the influence and impact of technology is on the horizon.
For instance, Chile’s draft constitution proposed the world’s first constitutional protections for neurorights, accompanied by a “neuro-protection” bill to regulate technologies capable of “interfering” with brains, including mind control and mind reading, and to prohibit the buying or selling of neural data, declaring it the equivalent of trading human organs.
While the constitutional changes were rejected in a referendum, they show the issue making headway in public policy. The EU is developing an artificial intelligence (AI) act to protect human rights and define high-risk uses, which will introduce requirements to address the impacts of AI on society, and the UN has indicated that neurotech is one of the frontier issues for human rights. The Inter-American Juridical Committee, which promotes the codification of international law across the Americas, has approved a declaration on neuroscience, neurotechnologies, and human rights. Meanwhile, new human rights legislation taking shape across the EU and at the national level (see Japan, New Zealand) could also plausibly expand to encompass neurorights, affecting future human rights due diligence efforts and assessments.
There’s also the European Commission’s Digital Services Act, which creates a common set of rules for the transparency of recommender systems across the EU, with implications for the "right to free will." How can businesses prepare to meet more regulation and scrutiny of the extent and impact of their services? And how might regulators distinguish between technologies that influence our brains and those that could one day control them? Could future neurotech users, for example, outsource some daily decisions to a trusted service to limit their appetite or override an impulse to smoke?
Ahead of these regulatory frameworks, what actions should businesses—both technology companies and those that use tech—take to meet their responsibility to respect human rights?
A first step is undertaking human rights due diligence of technology that might implicate neurorights to identify potential impacts across the full range of internationally recognized human rights and to establish strategies, alone and with others, to address those risks as part of product development. Businesses can actively engage in this conversation and work to establish guardrails before adverse impacts related to neurotech escalate.
Stephanie Herrmann, a human rights attorney and co-author of the NeuroRights Foundation report, points out that some of the concepts underlying neurorights, such as identity, can be difficult to define and therefore to protect. International human rights law must therefore evolve alongside the technology to provide more explicit protections.
Businesses can get ahead by asking how the applications and impacts of neurotech might evolve over the next decade and by identifying appropriate actions to address new risks. Additionally, business leaders should play a role in adapting and creating normative frameworks to help shape the field and contribute positively to the human rights protections of their users today and in the future.
Questions for Business
- How might technologies in product road maps impact existing rights to privacy, freedom of expression, thought, and opinion? How might they impact the proposed new rights of mental privacy, personal identity, free will, fair access to mental augmentation, and protection from bias?
- Who will be most vulnerable to adverse impacts from neurotech? Which communities, stakeholders, and experts should we be engaging with? How might these impacts vary across geographies and contexts?
- How might our technology be misused, and what harms may arise from this misuse? What leverage do we and others have to avoid, prevent, and mitigate these harms?
- How should the adverse impacts of neurotech be remedied, and by whom?
- What policy, legal, and regulatory framework is most appropriate for neurotech, and how can we help shape it?