For the past few years, the UK Government (the “Government”) has increasingly acknowledged that the growing prevalence of AI within the public and private sectors has an inescapable impact on the UK public. In many cases, this has been welcomed, in recognition of the fact that AI offers a number of helpful uses and opportunities, such as identifying criminal financial behaviour or tax avoidance. However, as with many technologies, it is very much a double-edged sword. The incorporation of AI and technology into everyday life, particularly in sensitive areas such as healthcare, has often been met with scepticism and distrust from members of the public.

This is not without reason. A cursory search for AI in its earlier applications within these sectors flags a number of issues: inappropriate data use, biased algorithms, and inaccurate outputs. As a means of addressing these earlier, but by no means irrelevant, concerns, as well as seeking to position itself as a world leader in AI, the Government has made a strong push towards developing a sophisticated digital environment based on trust and transparency. A notable example of this can be found in the creation of a roadmap and similar initiatives targeted at building an effective AI assurance ecosystem.

More recently, the Government has set its focus on leading the charge in shaping AI standards and use throughout the world. In January 2022, the Government announced a pilot, in partnership with the British Standards Institution, the Alan Turing Institute, and the National Physical Laboratory, that would seek to shape the way organisations and regulators set technical standards for AI on a global scale. Following this announcement, the Government announced that the National Health Service (the “NHS”) would begin another world-leading pilot in the area of AI.

The announcement builds on earlier discussions between the NHS (and its NHS AI lab) and the Ada Lovelace Institute (the “Institute”), directed at creating a framework for assessing the impact of medical AI. In this pilot, the NHS will act as the first healthcare body to trial algorithmic impact assessments (“AIAs”) within its organisation. Their primary working purpose, among others, is to tackle health inequalities and biases in the systems underpinning health and care services, thereby reducing some of the distrust these systems face within the healthcare sector.

What exactly are algorithmic impact assessments?

Best described by the Institute, AIAs are a “tool used for assessing possible societal impacts of an AI system before the system is in use”.[1] Their purpose, among other things, is to create greater accountability and transparency for the deployment of AI systems.[2] In doing so, it is hoped that trust in AI is built, while mitigating the potential for harm to specific categories of persons.[3]

In many ways, AIAs are similar to the impact assessment tools commonplace today. A prime example is the data protection impact assessment, which evaluates and works to minimise the impact that data processing technologies and policies would have on a person’s privacy rights. In similar fashion, an AIA allows an organisation to assess the potential risks and outcomes an AI system may produce from the data it is fed, whether non-sensitive, such as hospital admission rates, or more sensitive, such as gender, ethnicity, or family history of illness.

By recognising the potential risks posed by the incorporation of certain AI programmes, organisations may then alter their systems at early stages of development, as well as prior to wider implementation.
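To make the idea concrete, the sketch below shows one way such an assessment might be captured as a structured record, with wider rollout gated on unresolved risks. This is purely illustrative: the field names, classes, and the example system are hypothetical, and the actual NHS AI lab template is a documentation exercise rather than code.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataSource:
    """A data source feeding the AI system, flagged for sensitivity."""
    name: str        # e.g. "hospital admission rates"
    sensitive: bool  # e.g. True for gender, ethnicity, family history

@dataclass
class Risk:
    """An identified potential impact, paired with any planned mitigation."""
    description: str
    affected_group: str
    mitigation: Optional[str] = None

@dataclass
class AlgorithmicImpactAssessment:
    """A structured record of an AIA, completed before the system is in use."""
    system_name: str
    purpose: str
    data_sources: list = field(default_factory=list)
    risks: list = field(default_factory=list)

    def unmitigated_risks(self):
        """Risks still lacking a mitigation; these should block deployment."""
        return [r for r in self.risks if r.mitigation is None]

# A hypothetical assessment for an imaging triage model.
aia = AlgorithmicImpactAssessment(
    system_name="chest-imaging triage model",
    purpose="prioritise scans for radiologist review",
    data_sources=[
        DataSource("hospital admission rates", sensitive=False),
        DataSource("ethnicity", sensitive=True),
    ],
    risks=[
        Risk(
            description="lower accuracy for under-represented groups",
            affected_group="minority ethnic patients",
        ),
    ],
)

# Gate wider implementation on the open risks the assessment surfaced.
for risk in aia.unmitigated_risks():
    print(f"Open risk affecting {risk.affected_group}: {risk.description}")
```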

An example of the assessment, together with its user guide, set to be implemented by the NHS AI lab in partnership with the Institute, can be accessed here.

Why is this such a significant step?

The piloting of AIAs in a setting such as the NHS is a significant step because they are not yet extensively used in either the public or the private sector. As noted above, the pilot is the first instance of a public healthcare body seeking to incorporate them within its organisation. There is therefore little coherence or uniformity in approach, no guarantee that many of them produce their intended outcome, and, equally, no guarantee that they are effective in reducing the risks of bias or inadvertent harm to those whose data is being processed. This pilot is an opportunity to give the framework created by the Institute a sandbox for rigorous testing and feedback, which may then be used to refine the master proposal going forwards.

Despite the novelty of AIAs within the healthcare service, it should be noted that approved AIA models already exist and are in use in other contexts. In 2020, the Treasury Board of Canada Secretariat’s Directive on Automated Decision-Making implemented a standard form aimed at assisting Canadian civil servants in managing the risks of automated decision-making in the public sector. In parallel with the emergence of more rigid assessment tools, softer frameworks for assessment have also begun to appear, such as the IEEE’s AI standards or the UN Guiding Principles on Business and Human Rights, which are to be used alongside an organisation’s existing code of ethics.
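As a purely hypothetical illustration of how a questionnaire-based form of this kind can work, the sketch below scores a handful of yes/no answers and maps the total to an impact level. The questions, weights, and thresholds are invented for illustration; Canada’s actual form defines its own.

```python
# Toy scoring in the spirit of questionnaire-based AIAs. All questions,
# weights, and thresholds below are invented for illustration only.
ANSWERS = {
    "affects_legal_rights": True,   # does the decision affect rights or benefits?
    "uses_sensitive_data": True,    # e.g. health, ethnicity
    "fully_automated": False,       # is there a human in the loop?
    "reversible_outcome": True,     # can an adverse decision be undone?
}

WEIGHTS = {
    "affects_legal_rights": 3,
    "uses_sensitive_data": 2,
    "fully_automated": 2,
    "reversible_outcome": -1,       # reversibility reduces the score
}

score = sum(WEIGHTS[q] for q, yes in ANSWERS.items() if yes)

# Map the raw score to an impact level (I = low ... IV = very high), which
# in turn would determine the oversight and mitigation measures required.
if score <= 1:
    level = "I"
elif score <= 3:
    level = "II"
elif score <= 5:
    level = "III"
else:
    level = "IV"

print(f"raw score: {score}, impact level: {level}")
```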

The implementation of AIAs within the NHS therefore offers an invaluable opportunity to further determine their efficacy and to fill the gap in knowledge and data currently hampering their use. Should this pilot prove sufficiently successful, further pilots in other areas of the public and private sector are likely to follow, continuing to push the UK forward in creating a global approach to AI and standards.

The NHS Pilot

The NHS is set to trial the assessment across a number of initiatives; it will also be used as part of the data access process for both the National Covid-19 Chest Imaging Database and the National Medical Imaging Platform.

The objective of the pilot is to support researchers and developers in assessing the possible risks and biases of AI systems in their handling of patient data and members of the public before those researchers and developers are able to access these resources. As noted in the announcement, while artificial intelligence has the potential to support health and care workers in delivering better care, it may also exacerbate existing health inequalities if certain biases are not appropriately accounted for. For example, the Institute notes that AI systems, owing to training biases and the data available, among other factors, have been less effective at diagnosing skin cancer in people of colour. By involving developers and conducting impact assessments at an early stage, patients and healthcare professionals can be brought into the use and development of these technologies early on. In doing so, it is expected that instances of polluted or biased data will fall and that the overall patient experience will improve.
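Purely by way of illustration, the sketch below shows the kind of disaggregated check an AIA might prompt a developer to run: comparing a model’s accuracy per patient group rather than relying on an aggregate figure that can mask a gap. The groups, records, and flagging threshold are invented toy values.

```python
# Compare per-group accuracy instead of a single aggregate figure.
# The (group, true label, model prediction) records below are toy data.
from collections import defaultdict

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in records:
    total[group] += 1
    correct[group] += int(truth == prediction)

overall = sum(correct.values()) / sum(total.values())
print(f"overall accuracy: {overall:.0%}")  # the aggregate hides the gap

for group in total:
    accuracy = correct[group] / total[group]
    # Flag any group falling well below the aggregate (threshold is arbitrary).
    flag = "  <-- investigate" if accuracy < overall - 0.10 else ""
    print(f"{group}: {accuracy:.0%}{flag}")
```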

The announcement goes on to note that the pilot complements the ongoing work of the ethics team within the NHS AI lab to ensure that the training data and testing of systems produce outcomes reflective of diversity and inclusivity, thereby creating a far more useful set of training data and an overall increase in public trust.

It is hoped that, through the successful implementation of this pilot, AIAs can be used more widely to increase the transparency, accountability, and legitimacy of the use of AI in the healthcare sector.

Breaking ground: a pioneering framework for assessing the impact of medical AI

Perhaps highlighted best by Lord Clement-Jones in a number of his discussions on the pending Health and Care Bill, AI in healthcare (and more widely within the public sphere) will not be successfully leveraged unless the public are confident that their health data will be used ethically, assigned its true value, and used for the greater benefit of UK healthcare. While the pilot will certainly not be the final step in achieving this goal, it is a positive step in building trust that AI can perform to the benefit of patients and practitioners.

Although this particular pilot of the framework is to be carried out by the NHS, the Institute notes that its proposal has been developed to assist software developers, researchers, and policymakers in creating and implementing AIAs across a range of healthcare scenarios. One area that would benefit from these protocols is medical devices. The use of AI within sophisticated surgical machinery, testing equipment, and diagnostic tools offers unparalleled potential for accurate and speedy healthcare. Such devices do, however, suffer from the same scepticism and lack of trust that technology faces in a service we have come to accept requires a human touch. Should the pilot be deemed successful in its isolated environment, the expansion of AIA pilots into medical device procedures may do much to increase overall support for their use and allow members of the public to see that their data and care are being fully accounted for.

It should, however, be noted that, given the wide applicability of the framework created by the Institute, its application does not stop at healthcare. The framework may instead form the basis for assessments in any number of sectors and organisations. It therefore serves as a useful resource for all sector participants in determining how to create their own AIAs for implementation throughout the design and incorporation stages of AI.

DLA Piper continues to monitor updates and developments to the Government’s work on AI within the UK. For further information or if you have any questions please contact the authors or your usual DLA Piper contact.

[1] Ada Lovelace Institute and DataKindUK. (2020). Examining the Black Box: tools for assessing algorithmic systems. Available at: https://www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/

[2] Knowles, B. and Richards, J. (2021). ‘The sanction of authority: promoting public trust in AI’. Computers and Society. Available at: https://arxiv.org/abs/2102.04221

[3] Raji, D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D. and Barnes, P. (2020). ‘Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing’. Conference on Fairness, Accountability, and Transparency, pp.33–44. Barcelona: ACM. Available at: https://doi.org/10.1145/3351095.3372873