Artificial intelligence technologies might face a considerable hurdle in the privacy rights, obligations and sanctions provided by the EU General Data Protection Regulation.

What are artificial intelligence technologies?

Wikipedia defines it as follows:

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal “intelligent” machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

Therefore, the main features of AI are:

  • the collection of large amounts of information, including from the surrounding environment, and
  • the ability to take autonomous decisions/actions aimed at maximizing the chances of success.

The perfect example of artificial intelligence is a self-driving car, which needs to take autonomous decisions based on whatever happens on the street. And a confirmation of the current concerns (and prejudices) around AI is a new study from Germany’s Federal Highway Research Institute, which found that the autopilot feature of the Tesla Model S constitutes a “considerable traffic hazard”. This finding was, unsurprisingly, highly criticised by Tesla CEO Elon Musk, who said in a tweet that those reports were “not actually based on science” and repeated that “Autopilot is safer than manually driven cars”.

But when it comes to AI which might take decisions that affect individuals, new privacy related issues arise following the adoption of the EU General Data Protection Regulation.

The privacy right to object to automated decisions

The EU Privacy Regulation provides that individuals

“shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.

Exceptions to this rule apply when an automated decision is either provided for by law, such as in the case of fraud prevention systems, or is necessary to enter into a contract, or is based on the individual’s prior consent. But, in the latter two scenarios, individuals will still have the right to obtain human intervention, to express their point of view and to contest the decision, which is commonly known as the right to receive a justification of the automated decision.

The most frequent example is when a mortgage or a recruiting application is turned down because, according to the system, the applicant does not meet certain parameters. However, the main issue arises when AI becomes so complex and its decisions are based on such a large amount of data that it is not actually possible to give a justification for a specific decision.

The solution might be that artificial intelligence whose decisions might impact individuals shall be structured in a way that makes it possible to track the reasoning behind each decision. But this also depends on what level of justification would be sufficient to meet the criteria set out in the EU Privacy Regulation. Is it sufficient to say that the applicant for a mortgage did not meet the creditworthiness parameters? Or will it be required to identify the specific parameter, even if that parameter became relevant only because it was linked to a number of other parameters?
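
By way of illustration only, one way to make the reasoning trackable is for the system to record, next to each outcome, the individual parameters that determined it, so that a justification can later be produced on request. The sketch below is hypothetical: the thresholds, field names and the assess_mortgage function are invented for the example, not taken from any real system.

```python
# Hypothetical sketch of a credit decision that records its own reasoning,
# so a justification can be produced if the applicant contests the outcome.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    applicant_id: str
    outcome: str                      # "approved" or "rejected"
    reasons: list = field(default_factory=list)
    timestamp: str = ""


def assess_mortgage(applicant_id: str, income: float, debt: float,
                    credit_score: int) -> DecisionRecord:
    """Evaluate each parameter and log why it passed or failed."""
    reasons = []
    if credit_score < 600:  # illustrative threshold
        reasons.append(f"credit_score {credit_score} below threshold 600")
    if debt / income > 0.4:  # illustrative threshold
        reasons.append(f"debt-to-income ratio {debt / income:.2f} above 0.40")

    outcome = "rejected" if reasons else "approved"
    return DecisionRecord(
        applicant_id=applicant_id,
        outcome=outcome,
        reasons=reasons or ["all creditworthiness parameters met"],
        timestamp=datetime.now(timezone.utc).isoformat(),
    )


record = assess_mortgage("A-123", income=30_000, debt=15_000, credit_score=580)
print(record.outcome, "-", "; ".join(record.reasons))
# rejected - credit_score 580 below threshold 600; debt-to-income ratio 0.50 above 0.40
```

Even a minimal record of this kind would allow a human reviewer to go beyond a generic refusal and point to the specific parameters behind the decision.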

Is all data collected by the AI legally processed?

An additional privacy issue is whether all the information about an individual that is used by an artificial intelligence system has been obtained with the consent of that individual, or on the basis of a different legal ground, and whether it is used for the purposes for which it was initially collected.

Indeed, AI is by definition based on the processing of very large amounts of data from different sources. And individuals might object to decisions taken about them also on the ground that they are based on illegally processed data.
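
One possible technical approach, sketched below with invented field names and an illustrative list of legal grounds, is to attach to each record metadata about the legal ground and purpose of its collection, so that data lacking a valid basis for a given purpose can be filtered out before the AI processes it:

```python
# Hypothetical sketch: each record carries the legal ground and purpose of its
# collection, so data without a valid basis can be excluded before AI processing.
ALLOWED_GROUNDS = {"consent", "contract", "legal_obligation", "legitimate_interest"}

records = [
    {"subject": "A-123", "value": 42, "legal_ground": "consent", "purpose": "credit_scoring"},
    {"subject": "B-456", "value": 17, "legal_ground": None,      "purpose": "marketing"},
]

def usable_for(purpose: str, dataset: list) -> list:
    """Keep only records collected on a valid legal ground for this purpose."""
    return [r for r in dataset
            if r["legal_ground"] in ALLOWED_GROUNDS and r["purpose"] == purpose]

print(usable_for("credit_scoring", records))  # only the first record survives
```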

What happens in case of wrong decisions?

The complexity of artificial intelligence systems is expected to escalate in the coming years. And such complexity might make it more difficult to determine when a cyber attack has occurred and therefore when a data breach notification obligation is triggered. This is a relevant circumstance, since the EU General Data Protection Regulation introduces the obligation to notify an unauthorised access to personal data to the competent privacy authority and to the individuals whose data was affected.

We recently saw the case of the UK telecom provider TalkTalk, which was sanctioned by the Information Commissioner with a fine of £400,000 for not having prevented a cyber attack that led to access to the data of over 150,000 customers. But what would have happened if TalkTalk had not been able to determine whether a cyber attack had occurred and all of a sudden its systems had started taking “unusual” decisions? Given the potentially massive fines provided by the EU Privacy Regulation, this is a relevant issue.
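
As a purely illustrative sketch (the rejection-rate metric and the 3-sigma rule below are assumptions of the example, not a legal or technical standard), a business could monitor its system’s decision patterns for anomalies that might warrant a breach investigation:

```python
# Hypothetical sketch: flag "unusual" decision patterns that may warrant a
# breach investigation, using a simple rolling statistical check.
from statistics import mean, stdev

def is_unusual(history: list, new_rate: float, sigmas: float = 3.0) -> bool:
    """Return True if today's rejection rate deviates from the historical norm."""
    mu, sd = mean(history), stdev(history)
    return abs(new_rate - mu) > sigmas * sd

daily_rejection_rates = [0.21, 0.19, 0.22, 0.20, 0.18, 0.21, 0.20]
today = 0.55  # a sudden spike in rejections

if is_unusual(daily_rejection_rates, today):
    print("Unusual decision pattern detected - investigate possible compromise")
```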

And a common issue of smart technologies such as those of the Internet of Things, but also of AI, relates to the difficulty of identifying the entity liable for a malfunction or a data breach.

Privacy by design as the main “shield” against liabilities

I already mentioned it in relation to Internet of Things technologies. With the principle of accountability, which places the burden of proving compliance with data protection laws on the entity under investigation, the main defence for any business is the implementation of a privacy by design approach. The ability to produce documented material proving the adoption of all the measures required by data protection laws can be an invaluable tool in an environment where there are considerable uncertainties, driven mainly by rapid technological innovation and the growth of cybercrime.
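
Purely as a hypothetical sketch, such documented material could take the form of an append-only audit log recording each privacy measure as it is adopted (the file name and measure labels below are invented for the example):

```python
# Hypothetical sketch: an append-only compliance log, so that documented proof
# of privacy-by-design measures can be produced under the accountability principle.
import json
from datetime import datetime, timezone

def log_measure(logfile: str, measure: str, detail: str) -> None:
    """Append a timestamped record of a compliance measure to an audit file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "measure": measure,
        "detail": detail,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_measure("audit.log", "data_minimisation", "Dropped unused location fields")
log_measure("audit.log", "impact_assessment", "DPIA completed for scoring model")
```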

@GiulioCoraggio