Amid the rapid development of artificial intelligence (“AI”), regulators are playing catch-up in creating frameworks to aid and regulate its development.

As the AI landscape begins to mature, different jurisdictions have begun to publish guidance and frameworks. Most recently, on 11 June 2024, Hong Kong’s Office of the Privacy Commissioner for Personal Data (“PCPD”) published the Artificial Intelligence: Model Personal Data Protection Framework (“Model Framework”) as a step to provide organisations with internationally recognised practical recommendations and best practices in the procurement and implementation of AI.

Summary of the Model Framework

The key underlying theme of Hong Kong’s Model Framework is the ethical procurement, implementation and use of AI systems, in compliance with the data protection requirements of the Personal Data (Privacy) Ordinance (“PDPO”).

The non-binding Model Framework seeks to promote organisations’ internal governance measures. As such, the Model Framework focuses on four key areas in which organisations should take measures throughout the lifecycle of AI deployment:

  • Establishing an AI strategy and governance – formulate an internal AI strategy and governance considerations for the procurement of AI solutions.
  • Conducting risk assessment with human oversight – conduct risk assessments and tailor risk management to the organisation’s use of AI, including deciding the appropriate level of human oversight in automated decision-making.
  • Customising AI models and implementation and management of AI systems – preparation and management of data in the use of AI systems to ensure data and system security.
  • Communicating and engaging with stakeholders – communicate with relevant stakeholders (e.g. suppliers, customers, regulators) to promote transparency and trust in the use of AI.

It is worth noting that the Model Framework makes reference to the 2021 Guidance on the Ethical Development and Use of Artificial Intelligence (“Guidance”), also issued by the PCPD. The Model Framework, which focuses on the procurement of AI solutions, complements the earlier Guidance which is primarily aimed at AI solution providers and vendors.

As a recap, the Guidance recommends three data stewardship values of being respectful, beneficial and fair, as well as seven ethical principles of accountability, human oversight, transparency and interpretability, data privacy, beneficial AI, reliability, robustness and security, and fairness – none of which are foreign concepts for organisations from a data protection perspective.

Comparison with other jurisdictions

With different jurisdictions each grappling with their own AI regulatory framework, the common theme is the goal of ensuring the responsible use of AI. That said, there are nuances in the focus of each regulator.

For instance, the AI Act of the European Union considers AI systems in terms of their risk level, whereby serious AI incidents must be reported to relevant market surveillance authorities. Hong Kong’s Model Framework differs in that its approach to AI incidents mirrors the PDPO’s non-compulsory reporting of general personal data incidents.

Meanwhile, in Singapore, the regulatory framework also touches on the responsible use of AI in personal data protection. That said, compared to the Hong Kong Model Framework’s personal data protection focus, Singapore’s regulatory framework is a more general, broader governance model for generative AI applications.

Next steps

The publication of the Model Framework is a welcome move, as it provides more clarity as to the direction and focus of Hong Kong regulators on the use of AI. We expect more standards and guidance to be published gradually, with personal data protection as a central compliance theme.

Whilst global regulators differ slightly in their focus, the central goal of the responsible use of AI remains. As such, organisations currently using or considering using AI in their operations – be it for internal or external purposes – should focus on designing a global internal strategy and governance rules, in order to understand and mitigate the risks associated with their use of AI.

As a first step, organisations should understand the extent and nature of AI use in their operations (i.e. whether this involves the procurement of AI solutions, or the implementation and training of the organisation’s own AI model). Organisations should then conduct an internal data audit to understand the scope and extent of the information involved in their deployment of AI, in order to assess and mitigate risks accordingly.

Please contact Carolyn Bigg (Carolyn.Bigg@dlapiper.com) if you would like to discuss what these latest developments mean for your organisation.

This article was not generated by artificial intelligence.