By Kit Burden and Harry Jefferies

Alongside blockchain and, more recently, augmented reality and edge computing, artificial intelligence (AI) is another technology buzzword working its way into the business world. So what is AI? Simply put, it is the ambition to use machines to undertake decision-making processes that a human would otherwise undertake, only faster, more efficiently and potentially with the ability to identify patterns that we cannot.

In practice, AI can support businesses by helping to automate their processes, delivering insights via data analytics and engaging with their employees and customers (for example, by using natural language processing chatbots or intelligent agents). Arguably the most exciting facet of AI is machine learning, which is predicated on the use of algorithmic solutions that are often trained with appropriately structured and cleansed data so that they “learn” and continuously improve their outputs over time, without the need for direct human involvement.
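By way of illustration only, the following sketch (in Python, using the open-source scikit-learn library and entirely invented data, neither of which is prescribed by anything above) shows what that "training" and "continuous improvement" often amount to in practice: a model is fitted to labelled examples and is then periodically refitted as new, human-reviewed data accumulates.

```python
# A minimal sketch, not taken from the article: "training" a simple model with
# scikit-learn on invented finance-and-accounting data, then retraining it later.
from sklearn.linear_model import LogisticRegression

# Hypothetical structured training data: each row is [invoice_amount, days_overdue],
# and the label is 1 for "escalate to a human" or 0 for "process automatically".
X_train = [[120.0, 0], [950.0, 45], [80.0, 2], [2300.0, 90], [60.0, 1], [1500.0, 60]]
y_train = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)          # the model "learns" from labelled examples

print(model.predict([[400.0, 30]]))  # output for a new, unseen case

# "Continuous improvement" is usually periodic retraining on a growing,
# human-reviewed dataset rather than anything more mysterious.
X_train.append([400.0, 30])
y_train.append(1)                    # a human reviewer confirms or corrects the label
model.fit(X_train, y_train)          # retrain on the augmented dataset
```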

While AI is already pervasive across the retail and banking sectors, the opportunities that robotics and machine learning tools can offer are becoming ever more prevalent in the context of the business process outsourcing (BPO) industry. The potential use cases for AI are clear to see. For example, “robotic process automation” technologies may be used to process finance and accounting spreadsheets; data analytics may be used to improve a supplier’s performance; chatbots may be deployed to replace humans at telephone help desks and customer contact services. These tools can drive efficiencies, including faster (and more accurate) processing times, a reduction in human labor and enhanced cost savings. Indeed, the advances in AI have caused some observers to predict the end of BPO offshoring practices altogether: onshore solutions that are powered by AI now present viable alternatives to traditional offshore service offerings.

The promise that services may be delivered far more quickly and cheaply may sound like the holy grail to most, if not all, suppliers and recipients of BPO services, but there are potential legal, regulatory and reputational challenges. Take, for example, an outsourcing of consumer contact services in the consumer healthcare industry, where a supplier may have traditionally used FTEs in offshore hubs to resolve or escalate consumer queries (such as reports of adverse reactions to over-the-counter products). Going forward, a supplier may instead look to deploy a chatbot to engage with consumers in this regard. At present, such reports are often fielded by a person who has been trained to interact with consumers and identify red flags. An AI that powers the chatbot performing this function would need to be taught the same behaviors (how to interact with customers and identify problems) using training data, with the bot’s performance then being refined and continuously improved over time from user inputs and human oversight. It is important to note here that the “learned” chatbot will not “know” what it is doing, as a human would (or should); it will simply act in the way it has been trained, learning continuously from the data it processes. How then might the chatbot deal with emotional responses from consumers who are upset about an adverse reaction to a product? Would a logical, dispassionate response lead to an increased level of consumer dissatisfaction? And what would the position be if the chatbot provided consumers with incorrect information or guidance?
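Again purely for illustration (the supplier's actual tool is not described here), the hypothetical sketch below shows how a "red flag" triage model of the kind referred to above might be trained on a handful of invented consumer messages labelled by human reviewers; the library choices and data are assumptions, not drawn from the article.

```python
# Hypothetical sketch only: training a "red flag" triage classifier on a handful of
# invented consumer messages that human reviewers have already labelled.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "I had a rash after taking the tablets",          # adverse reaction: escalate
    "My delivery has not arrived yet",                # routine query
    "I feel dizzy and sick since using the product",  # adverse reaction: escalate
    "Can I get a refund for a duplicate order?",      # routine query
]
labels = ["escalate", "routine", "escalate", "routine"]

triage = make_pipeline(CountVectorizer(), MultinomialNB())
triage.fit(messages, labels)   # the bot only mimics patterns present in its training data

new_message = "I have had a severe headache since taking this medicine"
print(triage.predict([new_message]))
# Uncertain or sensitive cases should still be routed to a trained human, which is
# where the continuous human oversight described above comes in.
```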

What legal issues may be relevant here? 

Depending on the AI use case, it will be extremely important for the BPO customer to understand how the solution operates and makes its decisions (ie, the outputs it generates). For example, where the AI is responsible for an application process and it rejects an individual’s application, the customer would need to be able to justify that decision to a regulator and the individual in question by explaining how the decision was made. If the AI is truly a black box, such that the way in which it has made the decision cannot be fully explained, this may be problematic.
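As a simplified illustration of that explainability point, the sketch below uses a deliberately simple linear model (with invented feature names and data) in which each input's contribution to a decision can be read from the learned weights; genuinely "black box" models offer no such straightforward account, which is precisely the concern.

```python
# Hypothetical sketch of producing an "explanation": with a simple linear model,
# each feature's contribution to a decision can be read from the learned weights.
# Feature names and data are invented for illustration only.
from sklearn.linear_model import LogisticRegression

features = ["income", "years_at_address", "existing_debt"]
X = [[30, 1, 20], [80, 10, 5], [25, 2, 30], [90, 8, 2], [40, 3, 25], [70, 12, 4]]
y = [0, 1, 0, 1, 0, 1]   # 1 = application accepted, 0 = rejected

model = LogisticRegression().fit(X, y)

applicant = [28, 1, 22]  # a rejected applicant whose decision must be justified
for name, weight, value in zip(features, model.coef_[0], applicant):
    print(f"{name}: value={value}, weight={weight:+.2f}, contribution={weight * value:+.2f}")
# This per-feature breakdown is the kind of account a regulator or data subject may
# expect; a true "black box" model offers no equivalent, which is the concern above.
```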

In conjunction with this, another risk to guard against for certain use cases is the pollution of the AI with bad data, whether during the AI’s training or thereafter from user inputs. The corruption of Microsoft’s Tay bot is a good example of how data can adversely affect an AI’s behavior, and of the importance of human oversight in that regard. The risk of a consumer-facing chatbot or other AI producing biased or simply wrong answers is an ongoing one. The insurance industry’s use of AI to analyze claims patterns as a basis for determining to whom it should (or should not) offer insurance suggests that – if one is not careful – the unthinking application of the results of an AI-based analysis of “pure” data may have unintended and potentially discriminatory effects. From a business standpoint, it is important to establish a process for detecting such issues and for resolving the underlying causes of any “bias”. Human augmentation and/or intervention at key stages is critical to the AI’s operation and management.
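One possible form of that process, sketched below purely by way of example and with invented decision data, is a periodic disparity check run by human reviewers over the tool's outputs before they are acted upon.

```python
# Illustrative only: a very basic disparity check of the sort a human review process
# might run over an AI tool's decisions before they are acted upon. Data is invented.
from collections import defaultdict

# Hypothetical decision log from a claims tool: (postcode_area, decision) pairs
decisions = [("AB", "accept"), ("AB", "accept"), ("AB", "reject"),
             ("CD", "reject"), ("CD", "reject"), ("CD", "accept"), ("CD", "reject")]

totals, accepts = defaultdict(int), defaultdict(int)
for area, outcome in decisions:
    totals[area] += 1
    if outcome == "accept":
        accepts[area] += 1

for area in totals:
    print(f"Postcode area {area}: acceptance rate {accepts[area] / totals[area]:.0%}")
# A large, unexplained gap between groups is a prompt for human investigation of the
# training data and business rules; it is not, by itself, proof of bias.
```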

From a liability perspective, it is extremely important to identify who is ultimately responsible for the failure of the AI tool in the performance of its tasks. What happens if the AI tool produces a biased or wrong answer? For example, where a claims processing tool makes a biased decision that is predicated on a person’s surname or home address, who is liable – the supplier or the customer? Suppliers may well want to distinguish between bias and simple error so as to preserve some margin for error (for example, would a human have made the same mistake? Or should accuracy simply be addressed through a service level?). Given that a customer may be reliant on the AI to operate a potentially critical part of its business (ie, where the AI effectively becomes the business rather than simply a tool supporting it), the impact of failures could be even more severe. Therefore, traditional liability positions (eg, market standard financial caps) may need to be revisited.

As we’ve touched on already, AI solutions pose some obvious data concerns. Who is providing the data during any training phase and after go-live? There will likely be important data protection issues to work through during any training and production phases. For example, if you intend to rely on existing customer datasets for training the AI, it is important to establish the legal basis for the processing (or repurposing) of any EU-protected personal data in this way, and to consider whether any special category or other restricted datasets are in scope (eg, criminal record data). If the AI will support the business globally (eg, if there is a help desk to support the customer’s operations across different geographies), it will be important to consider the implications of cross-border data transfers and any country-specific restrictions (eg, data residency requirements). It is worth noting that AI-based solutions may end up gathering and combining data from multiple sources, such that data which does not initially constitute “personal” data (in the sense of being able to identify a living human being) may become so once the AI-based tool or process has combined it with other data from elsewhere. If this happens, a host of additional legal obligations and considerations associated with the processing of personal data would come into play.
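The deliberately simple sketch below (invented data, using the open-source pandas library) illustrates that combination point: two datasets that are individually innocuous can, once joined, relate to an identifiable individual.

```python
# Deliberately simple, invented example of how combining datasets can turn
# "non-personal" data into personal data.
import pandas as pd

# Dataset A: pseudonymous help desk tickets, with no names or contact details
tickets = pd.DataFrame({"caller_id": ["C1", "C2"],
                        "issue": ["adverse reaction", "billing query"]})

# Dataset B: a CRM extract added later to give the AI more "context"
crm = pd.DataFrame({"caller_id": ["C1", "C2"],
                    "name": ["A. Person", "B. Person"],
                    "postcode": ["AB1 2CD", "EF3 4GH"]})

combined = tickets.merge(crm, on="caller_id")
print(combined)
# The ticket data now relates to identifiable individuals, so the additional legal
# obligations associated with processing personal data apply to the combined dataset.
```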

What intellectual property rights are comprised within the AI solution? Depending on the type of AI model, there may be various layers that are relevant here (model parameters, weightings, algorithms, and so on). A customer will likely want to own all IP rights in the AI model (or, as a minimum, have a broad license to use/reuse those items) so as to ensure a smooth transition on exit and to enhance and protect its business. The last point is particularly relevant, because the operation of the AI may give the supplier an intricate understanding of the customer’s business (for example, the business rules that determine whether a claim is accepted/rejected), which could allow the supplier to replicate the customer’s business practices elsewhere.

Another key consideration is what happens on exit. If the customer does not have appropriate IP ownership or license rights that will allow continued usage of the AI, how will this affect any transition of the services to a replacement supplier or to the customer itself? How does this tie into the customer’s regulatory obligations? These important considerations need to be dealt with up front so as to avoid potential issues later on. In a current example we are dealing with, a service provider claims the right to take away the configured versions of AI-related tools upon the conclusion of its engagement with a particular customer in an outsourcing context, on the basis that such configurations constitute valuable know-how and/or trade secrets; the customer, on the other hand, claims a right to continue to use the tools, to save itself the cost and disruption of having the same work done afresh by its replacement supplier.

Conclusion

The displacement of BPO offshoring practices by AI presents both exciting opportunities and new challenges, which require detailed consideration at the pre-contract and contracting phases. This article has highlighted some of the key legal issues; however, it is not exhaustive and there are additional points to contemplate if you are introducing AI as the basis for a BPO service: for example, how any service level regime will operate so that it addresses non-human errors, or the implications of TUPE/ARD or similar legislation if using the AI creates redundancies in an inherited workforce (or a proportion thereof), as well as the corresponding risks of losing know-how on exit (for instance, if there are no employees to transfer knowledge back to the customer or onward to a new supplier).