The Competition & Markets Authority (‘CMA’) published its response to the Department for Digital, Culture, Media & Sport (‘DCMS’) policy paper on establishing a pro-innovation approach to regulating artificial intelligence (‘AI’) on 29 September 2022. The response arrives in parallel with the coming into force of the National Security and Investment Act 2021, under which the UK government is scrutinising transactions involving AI used to produce goods, services and technology with the potential to track individuals, objects and events.

In its response, the CMA commented on the need to (i) adopt a risk-based approach to the regulation of AI, (ii) consider whether existing regulatory powers are appropriate, and (iii) encourage collaboration between regulators.

Growing influence of AI and competition concerns

The CMA is mindful of the risk that AI and algorithms could be used to detect and respond to price deviations, facilitating cartels by increasing the stability of collusive practices. It has also raised concerns over the market power of digital platforms, consumer control over data and competition in digital advertising. Personalised pricing and price comparison tools, while beneficial to consumers, can also lead to consumer harm, thereby warranting regulatory protection. As part of its broader competition and consumer reforms, the government has set out a new pro-competition regime for digital markets under which the Digital Markets Unit (‘DMU’) will set parameters to regulate firms with strategic market status. In this way, the government aims to balance the benefits of AI against the need to remedy high-risk harms to consumers and competing businesses from misuse of AI systems.
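The collusion mechanism the CMA describes can be made concrete with a minimal, purely illustrative sketch (not drawn from the CMA response): a hypothetical price-matching rule that monitors a rival's price and immediately matches any cut. Because a deviation from the "focal" price is punished within one period, undercutting gains the deviator nothing, which is how such algorithms can stabilise tacitly collusive pricing. All names and the price level here are invented for illustration.

```python
# Hypothetical price-matching rule of the kind competition authorities
# worry about. FOCAL_PRICE is an assumed collusive price level.
FOCAL_PRICE = 100.0

def respond(rival_price: float, focal_price: float = FOCAL_PRICE) -> float:
    """Return our next price given the rival's last observed price."""
    if rival_price < focal_price:
        # The rival deviated: match the cut immediately, erasing any
        # gain from undercutting.
        return rival_price
    # Otherwise hold at the focal price.
    return focal_price

# A one-period deviation is matched at once, then prices return to focal:
history = [respond(p) for p in [100.0, 95.0, 100.0]]
```

The point is not sophistication but speed: automated detection and response shrink the window in which a deviating firm profits, which is why the CMA flags even simple monitoring algorithms as a structural competition risk.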

Risk-based approach

The CMA agreed with the government’s view that a risk-based approach to the regulation of AI would be most appropriate. The CMA’s approach focuses on the most harmful practices involving AI (e.g. enhancing incumbent firms’ ability to promote themselves at the expense of new innovative services, being insufficiently transparent to consumers, or leading to discriminatory personalised pricing) and aims to address potential risks whilst allowing consumers to benefit from the opportunities and efficiencies that AI offers.

The CMA warned that regulators should not limit their focus to immediate and obvious risks: competition risks are often long-term and structural. One such risk identified by the CMA is the potential “concentration of key inputs to AI supply chains, such as chips, data, and computational resources”.

The response emphasises the need for any regulatory approach to provide clarity to firms on what to consider when developing and implementing AI offerings, and encourage firms to participate in the regulatory process. In the first instance, the CMA is considering issuing guidance to particular sectors and concentrating regulatory efforts on systems which fulfil certain key functions.

Regulatory powers

The CMA has encouraged the government to consider existing powers and how they may need to be updated to keep pace with innovation. It agrees that a ‘light touch’ approach, using existing regulatory powers, may be appropriate whilst AI technology is in the early stages of its development. However, the government and regulators must evolve their position and respond in an agile manner as AI technologies develop. This may also mean introducing new powers or capabilities for regulators where risks associated with particular applications arise.

Collaboration between regulators

Given the variety of regulatory concerns raised by the increasing prevalence of AI, the CMA expressed the need for a holistic cross-regulatory approach to avoid divergence between regulators. The CMA welcomed the government’s recognition of the Digital Regulation Cooperation Forum (which counts the CMA, ICO, Ofcom and the FCA as members) as an appropriate venue for discussion of how best to implement a coherent regulatory framework. It also expressed the need for cooperation with its international counterparts.

Comment

Whilst the CMA’s response is a positive step, it will be challenging to strike the correct balance in regulating anti-competitive AI practices. Weak application of a risk-based regulatory framework could allow incumbents and first movers to take advantage of dominant market positions, impacting innovation and productivity in the long run. It is essential that the CMA adapts its position as AI technologies become more sophisticated and the competition risks of such systems become more apparent.