You wake up early to the sound of an alarm that exists only inside your own head, and it is silenced automatically as soon as you’re awake. You get up to shower and the water is already running, at exactly the temperature you like. While drinking your coffee you receive a message from a prospective client asking whether your meeting could be moved forward. You respond without needing to check your availability or what else you are doing that day – complete clarity about your diary simply appears, as if you had photographic recall. You confirm your software is up to date and begin preparing for the day’s meetings. They are a mix of in-person and virtual, but you can join them all in the metaverse, the virtual world simply appearing to you without the need for a headset. At no point have you touched a phone or computer. Instead, a small device connected directly to your brain lets you interact with your other devices, the internet, and virtual worlds with a simple thought.

Until relatively recently, the thought of body-augmenting technology belonged to far-future science fiction: the protagonist, often involuntarily augmented, is able to perform feats of strength, agility, and cognition far beyond their peers. But fiction may soon become fact, as several organisations are making significant progress in developing devices designed to augment human capabilities, collectively referred to as ‘neurotechnology’.

In August 2022, the Law Society of England and Wales published a report (the “Report”) with Dr Allan McCay on how these technologies may impact society and the practice of law. The Report considers what exactly neurotechnology is, its potential to impact society, and the challenges and opportunities it is likely to present to the legal profession and the practice of law.

Here we consider several matters raised in the Report and reflect upon them in the context of the developing regulatory landscape for AI (since many neurotechnologies rely on AI within the brain-machine interface) and technology across the world. We highlight several potential concerns for organisations involved in the development, production, distribution, and use of neurotechnology.

What is Neurotechnology?

Neurotechnology is the category of devices that interact with, monitor, and modulate a person’s brain or nervous system. The canonical sci-fi variant is the neural lace described in Iain M Banks’ ‘Culture’ series, but similar techno-telepathy systems have featured in many other works. Amongst the real-life variants being worked on, some would be implanted directly into the brain of the user, as is the case with the neurostimulators used to treat Parkinson’s disease. Others are more akin to sophisticated wearables, such as those used to interact with the metaverse and other computer-based software.

In essence, neurotechnology is about sending signals to, and receiving signals from, the brain. This can be literally at the level of direct electrical connections to neurons within the user’s (or, in some cases, patient’s) brain. Depending on its purpose, the technology can then read and/or write signals from and to the person’s brain and nervous system. One example of ‘read’ capabilities described in the Report concerns patients suffering from locked-in syndrome. Such a patient may have a device implanted in their head (or wear a non-invasive headset) that ‘reads’ the signals of the brain and translates them into commands that can be interpreted by the device and other connected technology. For example, a computer might translate the electrical patterns created by certain thoughts into movements of a cursor on screen, giving the patient a significant avenue for expression of will and control over their environment. In the case of ‘write’ capabilities, a notable example is the neurotechnology applied in the treatment of Parkinson’s disease, where corrective signals, combined with those coming from the patient’s brain, provide an artificial ‘software patch’ of sorts, allowing the patient to better control their symptoms.
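To make the ‘read’ pathway more concrete, below is a deliberately minimal sketch of the kind of decoding loop such a system might run. Every detail here – the sampling rate, the frequency bands, and the two-band thresholding rule – is a hypothetical stand-in for illustration; real brain-computer interfaces rely on trained statistical decoders over many channels of neural data.

```python
# A minimal, illustrative sketch of a 'read'-path brain-computer interface:
# classify windows of (simulated) neural signal into cursor commands.
# All rates, bands, and thresholds are hypothetical, for illustration only.
import numpy as np

RATE_HZ = 250            # assumed sampling rate of the headset
WINDOW = RATE_HZ // 2    # decode twice per second

def band_power(window: np.ndarray, lo: float, hi: float) -> float:
    """Average spectral power of the signal between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / RATE_HZ)
    mask = (freqs >= lo) & (freqs < hi)
    return float(spectrum[mask].mean())

def decode(window: np.ndarray) -> str:
    """Map one signal window to a cursor command via a toy threshold rule."""
    mu = band_power(window, 8, 12)      # mu band, linked to motor imagery
    beta = band_power(window, 13, 30)   # beta band
    return "LEFT" if mu > beta else "RIGHT"

# Simulate one second of signal and decode it window by window.
signal = np.random.randn(RATE_HZ)
for start in range(0, signal.size - WINDOW + 1, WINDOW):
    print(decode(signal[start:start + WINDOW]))
```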

Read/write capabilities, however, are not only useful in the treatment of disease and the restoration of communication. Much like upgrading your computer’s components, the augmentation of a person’s brain and nervous system has the potential to offer a great deal more in terms of application.

Applications of Neurotechnology

Medical

The use of neurotechnology in a medical context has long been a focus of research. As noted above, neurotechnology is already providing hope to patients with several classes of neurodegenerative disease. As the Report notes, it is not possible to consider every application of neurotechnology in medicine; quite simply, the use cases are already too extensive. At the time of writing, even more advanced devices are in development, in the form of auditory or visual aids and devices targeted at improving memory or reducing the symptoms of neurodegenerative illness. In the not-too-distant future it is easy to imagine neurotechnology providing solutions for complicated mental health issues, such as chronic depression. Given how many people are affected by such issues at some point in their lives, neurotechnologies used to manage such conditions could become widespread.

Military

Much discussed, but (at least currently) less applied, is the use of neurotechnology in non-medical applications by the armed forces. The Report cites a paper, published by the Ministry of Defence in 2021, that provides a coherent and succinct example of why governments and the defence sector are so interested in neurotechnology:

“[I]n terms of augmentation, brain interfaces could: enhance concentration and memory function; lead to new forms of collaborative intelligence; or even allow new skills or knowledge to simply be ‘downloaded’. Manipulating the physical world by thoughts alone would also be possible; anything from a door handle to an aircraft could in theory and more recently in practice, be controlled from anywhere in the world.”

The ability to rapidly prepare (or even ‘upgrade’) soldiers for battle, or to increase their capabilities, may offer any number of advantages over adversaries. It is therefore of little surprise that such interest has been directed at its further development. Critics point to the serious ethical questions posed if a consequence of neurotechnologically augmented soldiers is the compromise of a soldier’s free will. “I was only following orders” is already a hollow defence to accusations of war crimes, but a soldier who can no longer choose to disobey under any circumstances presents a nightmarish vision.

Personal and Professional

The ‘levelling-up’ of a person as a means of competitive advantage is by no means restricted to military applications. In everyday life, the ability to remember more, perform tasks more rapidly, or learn skills more quickly is of obvious appeal. In a world where employers vie for the best talent, and places at the best universities are oversubscribed, having an ace up one’s sleeve is of undeniable interest. However, if such technologies are accessible only to a privileged few, those in less affluent socio-economic groups will be at an even greater disadvantage, and neural enhancement could become a means by which social mobility is stifled further.

Not every use of neurotechnology has to read like the plot of an episode of ‘Black Mirror’, however, and many uses could prove socially beneficial. The Report gives the example of using the technology to monitor the cognitive states of those in high-pressure jobs, such as air-traffic controllers, to see when employees are stressed or inattentive. This would allow employers to justify breaks throughout the day and to ensure the wellbeing of their staff is maintained.

Plunging into the vortex

As with all novel technology, the use of neural enhancement and brain-machine interfaces requires a fine balance to be struck. Failure to properly consider the consequences of use at an early stage may result in misuse or function creep. Should appropriate measures not be put in place at the outset, we may quickly find ourselves in an uncontrollable spiral from which we cannot easily emerge.

Privacy and confidential information

One of the first concerns raised by the use of neurotechnology is privacy. Even with today’s technology, the ability to tap into, and accumulate, data directly from a person – whether biological markers (such as risk of disease) or signals (such as responses to external stimuli) – undermines a person’s ability to keep otherwise internal and private information from being exposed to others.

Insurers, for example, may be able to use this additional information in their determination of a person’s risk of neurological issues in a way far more intrusive than before. It would equally not be too great a stretch of the imagination to think that data obtained from these devices could be used to predict behavioural outcomes, and even to manipulate and engineer an individual’s behaviour in a particular direction.

Potentially more concerning, however, is the creation of additional vectors for surveillance. The augmentation of persons (whether by corporations or governments) would make it far easier to gain insight into those individuals’ mental states and confidential information, whether over a continuous period or in response to stimuli. While, as noted above, this may have advantages in certain high-pressure environments, the ability to monitor a person’s internal responses may result in ‘unintended’ outcomes. Such intrusive monitoring could equally be applied in marketing or politics, where companies and candidates could adjust campaigns and policies based on the internal responses of those transmitting data. A significant related concern is the potential for espionage and the theft of confidential information and trade secrets.

Even further into the future, it is perfectly conceivable that technologies could give direct access to thoughts or memories. It is uncomfortable enough to think that malware might give hackers control of your laptop; the idea that the interface to your brain might be compromised is utterly terrifying.

We must therefore consider whether the current regulatory frameworks protecting privacy are adequate to address body-augmentation technology and the growing potential for otherwise private responses to be put to other uses. Even more fundamentally, we should ask whether this is an acceptable ancillary use of neurotechnology at all, and how the risks might be mitigated.

Safety

It is equally important that the devices with which people augment themselves are safe, both for the user and for those around them. People are unlikely to accept unnecessary risks arising from body augmentation, whether for medical purposes or otherwise, and will want assurance that a reasonable standard of safety is met. After all, if a device is interfacing with your brain, you would want to be very clear that any potential for damage to the seat of your consciousness is infinitesimal. A base principle of their use, therefore, is that neurotechnology should be held to a high standard throughout its lifecycle: from design and development, through use and maintenance, to the decommissioning and removal of the device. This may involve technical measures, such as certification of the parts used (as seen with medical devices), or procedural ones, such as analysis of results, review of algorithms for bias, and the prompt investigation of any faulty results.

A device’s safety should also be assessed in terms of its exposure to access by third parties, particularly those of a more malevolent nature. For a device to be deemed ‘safe’, it should include features preventing unauthorised access to its data and functions from both a hardware perspective (such as physical access locks) and a software perspective (such as firewalls). We have touched above on the nightmare scenario of having your thoughts and memories hacked. The specifics of protection from external threats would likely follow a risk-based approach, depending on the invasiveness of the device and the information it handles, but a minimum standard of protection would be expected, with devices capable of accessing thought or memory requiring proportionally higher standards of security.
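As a purely illustrative sketch of how such a risk-based minimum standard might be expressed in software, the snippet below maps hypothetical device tiers to sets of required security controls. The tier names, the control names, and the mapping itself are all invented for illustration; actual requirements would flow from the applicable regulatory regime and technical standards.

```python
# A hypothetical sketch of a risk-based security policy: the more invasive
# the device and the more sensitive the data it can reach, the more
# controls it must carry before being deemed 'safe'. Illustration only.
from dataclasses import dataclass
from enum import IntEnum

class RiskTier(IntEnum):
    WEARABLE = 1          # non-invasive headset, aggregate signals only
    IMPLANT = 2           # implanted electrodes, continuous raw signals
    THOUGHT_ACCESS = 3    # capable of decoding thought or memory content

# Minimum controls per tier -- invented names, for illustration only.
REQUIRED_CONTROLS = {
    RiskTier.WEARABLE: {"encrypted_storage", "signed_firmware"},
    RiskTier.IMPLANT: {"encrypted_storage", "signed_firmware",
                       "encrypted_transport", "hardware_access_lock"},
    RiskTier.THOUGHT_ACCESS: {"encrypted_storage", "signed_firmware",
                              "encrypted_transport", "hardware_access_lock",
                              "multi_party_authorisation", "audit_logging"},
}

@dataclass
class Device:
    name: str
    tier: RiskTier
    controls: set[str]

def missing_controls(device: Device) -> set[str]:
    """Controls the device still needs to meet its tier's minimum standard."""
    return REQUIRED_CONTROLS[device.tier] - device.controls

headset = Device("focus-band", RiskTier.WEARABLE, {"encrypted_storage"})
print(missing_controls(headset))  # {'signed_firmware'}
```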

Ethical and Fair Use

The ethics of augmentation and neurotechnology have been explored in serious academic papers and speculative fiction alike. One of the more colourful considerations of the ethical challenges arising from brain-machine interfaces is to be found in Iain M Banks’ novel ‘Excession’, where a powerful AI starship mind called the ‘Grey Area’ is castigated by its peers for its habit of reading the minds of others without their consent. In one memorable passage, the Grey Area chillingly explains to a rather naïve character that her ‘neural lace’ – the name given to brain-machine interface technology in the novels – could be deployed as the single most effective torture device ever invented for organic living beings.

Whilst a gruesome prospect, this does illustrate that the power to directly manipulate the signals within the brain creates quite a different class of ethical challenge. The issues stretch from the social impacts that might arise if only a privileged few have access to the advantages granted by neurotechnologies, through to nightmarish visions of mind control and the removal of privacy of thought that make George Orwell’s ‘1984’ sound like a libertarian utopia.

Having painted a dark picture of the ethical risks, we should note that there are also ethical challenges if the technology is not explored. As we considered in the medical context above, brain-machine interfaces are already showing enormous potential to treat medical conditions that rob people of freedom and dignity. To abandon a technology with such promise would raise ethical questions of its own. Medical use is not without its risks and challenges (even the best-intentioned treatments might change a patient’s personality in ways they would not wish for), but it presents a particularly good example of the benefits we might realise from these technologies. An even more techno-utopian vision posits that brain-machine interfaces could facilitate the voluntary sharing of thoughts without the barrier of language getting in the way, leading to more widespread empathy and understanding. In this narrative, neurotechnology is a vital step toward a more connected and more collaborative version of the human species – though while some might read this as a positive development, it might sound nightmarish in its own way to those with a strongly independent sense of self.

Given this wide array of ethical concerns, it is clear that regulatory frameworks will need to be developed and enhanced to address the unique challenges posed by these technologies.

Regulation of neurotechnology

Neurotechnology interacts with several areas of law across different sectors – life sciences, technology, medical devices, software, AI, and data, to name just a few. The approach to its regulation is, unsurprisingly, equally varied. Separate aspects, such as software or medical approval, each have their own regulatory regime that must be followed to ensure the technology is compliant.

US Perspective

In the US, the Food and Drug Administration (the “FDA”) is an important regulator of neurotechnology. Many neurotechnologies would be regulated for safety and efficacy under the FDA’s existing medical device regulations[1]. Most existing neurological devices are classified as class II (moderate risk) or class III (high risk) devices and undergo some form of pre-market review by the FDA. Any neurotechnologies that do not fall under the FDA’s purview are more likely to raise questions around whether there are sufficient safeguards to ensure user safety, as well as data security and fairness issues that the FDA is increasingly scrutinising in relation to digital technologies.

Various other US government agencies would likely also assert oversight of neurotechnologies, including the Federal Trade Commission (from a consumer protection perspective), the Department of Defense (for military applications) and sector regulators depending on the context of use (for instance, if the technology were used by air traffic controllers, as in the example use case described above, we might expect the Federal Aviation Administration to take an interest). From an information and data protection perspective, a patchwork of privacy laws would presently also apply, although moves are afoot for a possible federal privacy law to be enacted. None is specifically geared toward protecting brain data, but data collected by neurotechnologies could implicate various existing federal and state-level privacy laws (including the Health Insurance Portability and Accountability Act (HIPAA), the Health Information Technology for Economic and Clinical Health Act (HITECH), the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA) and the Illinois Biometric Information Privacy Act (BIPA)), depending on what information is being collected and by whom.

Various trade secret laws would also likely be implicated, including the Defend Trade Secrets Act as well as the Economic Espionage Act of 1996, which provides criminal sanctions for the theft or misappropriation of trade secrets.

EU and UK Perspectives

A not dissimilar analysis applies in the EU and UK. Neurotechnology with a medical purpose would be regarded as a medical device for the purposes of the Medical Devices Regulation (EU) 2017/745, with a parallel regime applying in the UK post-Brexit. Under those regimes, any neurotechnology would have to be checked against relevant standards and CE or UKCA marked. Any implanted neurotechnologies would also require patients/users to have implant cards describing the implanted device.

Similarly, we would expect other regulations to be engaged. Devices are likely to contain machine learning components that would be caught by the EU AI Regulation (more on that below) and could be subject to export controls as controlled technologies. Defence use cases would also trigger the application of relevant regimes from that sphere, and sector use cases would be subject to specific sector regulation.

One big area of difference between any EU/UK approach and the US would be in the field of privacy. The EU and UK flavours of data protection regulation (GDPR and the fast-evolving post-Brexit data landscape in the UK) cast a long shadow. These would provide some additional reassurance to EU- and UK-based neurotechnology users that some of the more egregious and privacy-invasive secondary uses of neurotechnologies would not be as prevalent in those territories.    

As with the US, European law (including Directive (EU) 2016/943) and the UK’s Trade Secrets Regulations would need to be considered to ensure protection against the unlawful acquisition, use and disclosure of undisclosed know-how and business information via body-augmenting technology.

Regulating AI in neurotechnology

For devices of a more complex and technical nature, regulations relating to artificial intelligence will also likely come into play. As yet, no established regime has been finalised, and many jurisdictions are still investigating the best way to approach the issue.

One of the most developed regimes, though still in draft form, is that of the EU: the draft AI Regulation (the “AI Act”). The AI Act defines which types of AI system fall within its remit, which uses of AI are prohibited, which use cases are high-risk and, therefore by default, which are low-risk. Since high-risk AI includes anything used as a safety component within a device subject to EU harmonised standards legislation (a long-winded way of saying “CE-marked devices”), neurotechnology will almost certainly qualify as a high-risk AI use. This means that those seeking to take advantage of neurotechnology, whether as a user or a vendor, will be subject to several, often strictly worded, safety and monitoring requirements. Based on the most recent drafts of the AI Act, failure to comply may expose the breaching party to fines of up to €30 million or 6% of global annual turnover, whichever is the higher – so, for a business with €1 billion of global turnover, a maximum exposure of €60 million.

The UK, by comparison, has proposed a more distributed approach, whereby sector-specific regulatory standards and industry regulators, rather than a single regime, will regulate AI systems. Users and vendors of neurotechnology will therefore be required to comply with the specific regulations and standards that apply in the context of their use. Oversight of compliance is currently expected to be delegated to individual regulators – such as the MHRA for medical devices, or Ofcom to the extent wireless technologies are used – though the specifics of how this will be achieved are yet to be established.

In the US, Congress is considering the American Data Privacy and Protection Act (ADPPA – HR 8152), which aims to create a comprehensive national data privacy and security framework by establishing standards on what types of data companies can gather from individuals and how that information can be used. Notably, a section of the pending bill seeks to take a significant step toward federal oversight of how businesses design and deploy algorithms, and of the underlying data used to support them. While it is challenging to predict what legislation may or may not become law, a consistent theme emerging from both sides of the aisle is policymakers’ increasing attention to algorithms and the growing role they play in our lives.

While approaches may vary, what remains clear is regulators’ recognition that neurotechnology, particularly that propelled by AI, requires a careful and considered approach. Regulations must be put in place to protect users, while at the same time encouraging the development of technology that has the potential to substantially benefit society.

Get in touch

For more information on AI and the emerging legal and regulatory standards and their application to neurotechnology, visit DLA Piper’s focus page on AI.

You can find a more detailed guide on the AI Regulation and what’s in store for AI and neurotechnology devices in Europe in DLA Piper’s AI Regulation Handbook.

To assess your organisation’s maturity on its AI journey (and check where you stand against sector peers) you can use DLA Piper’s AI Scorebox tool.

You can find more on AI, neurotechnology, and the law at Technology’s Legal Edge, DLA Piper’s tech-sector blog and Cortex, DLA Piper’s life sciences insight hub.

DLA Piper continues to monitor updates and developments of AI and neurotechnology and their impacts on industry in the UK and abroad. For further information or if you have any questions, please contact the authors or your usual DLA Piper contact.


[1] Neurotechnology would be deemed a medical device if it is intended for use in the diagnosis, cure, mitigation, treatment or prevention of disease and/or to affect the structure or any function of the body, and does not achieve its primary intended purposes through chemical action.