Below is a guest post by Dr. Ai Deng, who writes frequently about issues related to price-fixing cartels.  Dr. Deng is at NERA Economic Consulting and can be reached at ai.deng@nera.com, or connect with him on LinkedIn (https://www.linkedin.com/in/aideng/).

*********************************************************************

In a recent post, Bob shared his thoughts on the latest DOJ policy that could give credit to companies for their effective antitrust compliance programs.  In this post, I want to discuss how data analytics and artificial intelligence could help build a robust compliance program.

Empirical Screens

Empirical screening is a topic I have covered in previous posts on Cartel Capers, and this data analytic technique is a promising starting point.  It has already been used by antitrust authorities around the world and is receiving increasing attention in the recent academic literature.  One notable application of empirical screens was Christie and Schultz's (1994) research on market makers' collusion on NASDAQ.  In 2011, Competition Policy International's Antitrust Chronicle devoted an entire issue to the subject of empirical screens; in it, officials from the Mexican and Brazilian competition authorities shared their experiences applying screens to detect potentially collusive conduct.  Laitenberger and Hüschelrath (2011) described the European experience with empirical screens.

Simply put, an empirical screen is a metric based on data and a prespecified formula.  The value of the metric changes as the likelihood of market manipulation increases or decreases.  When the value crosses a certain threshold, a “red flag” for suspicious activity goes up, and additional investigation of the causes may be warranted.  For example, changes in price variability and in market shares have been proposed in the literature as collusion screens.
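To make the idea concrete, here is a minimal sketch of a price-variability screen of the kind the literature proposes: it flags periods where prices are unusually stable, one proposed marker of potential coordination. The window size, threshold, and price series are hypothetical illustrations, not values recommended in any study.

```python
# Minimal sketch of a price-variability screen (illustrative only).
from statistics import mean, stdev

def variance_screen(prices, window=6, cv_threshold=0.02):
    """Flag rolling windows whose coefficient of variation (stdev/mean)
    falls below a threshold -- abnormally stable prices raise a red flag."""
    flags = []
    for i in range(len(prices) - window + 1):
        w = prices[i:i + window]
        cv = stdev(w) / mean(w)
        if cv < cv_threshold:
            flags.append((i, round(cv, 4)))  # (window start index, metric value)
    return flags

# Competitive-looking prices followed by a suspiciously flat stretch
prices = [10.2, 9.8, 10.5, 9.6, 10.1, 9.9, 10.0, 10.0, 10.01, 9.99, 10.0, 10.0]
print(variance_screen(prices))
```

In practice a screen like this would be only a first pass: a flag is not proof of collusion, merely a prompt for the additional investigation described above.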

This is not a new idea. “Detection” techniques similar to empirical screens are widely used in the credit card and telecommunications industries for fraud detection purposes. AT&T Labs researchers Becker, Volinsky, and Wilks (2010) noted that AT&T implemented its fraud detection system (the Global Fraud Management System) nearly 20 years ago, in 1998.  AT&T’s team of data experts continuously analyzes data and devises new techniques to detect fraud, and credit card companies also invest significantly in fraud detection.  All this is to say that antitrust compliance programs can benefit from similar analytical approaches.  Readers interested in learning more about this approach, including some of its pitfalls, can check out my short Law360 article (https://www.law360.com/articles/708083/what-compliance-officials-must-know-about-market-screening) or a longer and more detailed paper published in the Journal of Antitrust Enforcement (https://academic.oup.com/antitrust/article-abstract/5/3/488/2884289?redirectedFrom=fulltext).

Algorithmic Compliance

Autonomously colluding algorithms have generated a great deal of concern recently.  But if pricing algorithms can collude autonomously, can they also be made antitrust compliant by design?  EU Competition Commissioner Margrethe Vestager certainly believes so; in a recent speech she stated that “(w)hat businesses can – and must – do is to ensure antitrust compliance by design. That means pricing algorithms need to be built in a way that doesn’t allow them to collude.”

There are several potential pathways to algorithmic compliance. One of the most promising is based on recent advances in explainable AI, also known as XAI.  As the name suggests, explainable AI aims to make algorithmic decision-making understandable to a human.  Notably, the Defense Advanced Research Projects Agency (DARPA) sponsors a program named XAI, and the FATML (Fairness, Accountability, and Transparency in Machine Learning) community also promotes work on explainability.

Some of the commercial interest in explainable AI comes from the lending industry, where regulation requires lenders to explain lending decisions to consumers, especially when those decisions are made by machine learning models.  It should not come as a surprise that the need for explainability goes well beyond lending.  For example, interpretability of algorithms can be equally important in medicine and healthcare.  And it can also be an important part of antitrust compliance by design.

Suppose that your pricing algorithm is setting a price that you think might be too high, and imagine that the algorithm can explain its decision making.  For example, you might ask it “what if we lower the price?” or “would lowering it generate higher immediate profit?”  The algorithm’s answer might be “based on the demand forecasts and our customers’ price elasticity, this is the optimal price we should set”; or “we have no reason to lower our price because we know that the competitor’s algorithm is not going to lower theirs, and we know that because we have figured out that this is the best course of action for both of us in the long term”; or even “we should not, because the last time we lowered our price, the competitor started a price war.”  Whether or not the last two responses suggest problematic algorithmic conduct, having this knowledge can be extremely helpful.

But how would one go about achieving this kind of algorithmic explainability?  An AI study published in 2018, titled “Contrastive Explanations for Reinforcement Learning in terms of Expected Consequences,” is an important step in that direction.  Working within the standard reinforcement learning (RL) framework, the study’s researchers developed a method that enables an RL agent to answer precisely the kind of what-if questions posed above.  Suppose we want to know why an autonomous RL agent takes action A, rather than another action B, in a given situation.  Their RL agent answers this why-question by contrasting the expected outcomes, or consequences, of the two actions.  This type of contrast is exactly what underlies the conjectured answers above.  Interested readers can find a much more in-depth discussion in a recent article of mine, titled “From the Dark Side to the Bright Side: Exploring Algorithmic Antitrust Compliance,” available for download at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3334164.
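The contrasting of expected consequences can be sketched very simply. In the toy example below, an RL pricing agent’s learned value estimates (Q-values) stand in for “expected long-run profit” of each action in a state; a contrastive explanation of “why A rather than B?” is just a comparison of those two estimates. The state names, actions, and Q-values are all hypothetical and hand-set for illustration; they are not taken from the study or from any real pricing model.

```python
# Illustrative sketch of a contrastive "why action A rather than B?"
# explanation for an RL pricing agent. Q maps (state, action) to the
# agent's estimate of expected long-run profit (hypothetical numbers).
Q = {
    ("competitor_matching", "hold_price"): 105.0,  # holding is estimated best
    ("competitor_matching", "cut_price"):   88.0,  # cutting risks a price war
}

def contrastive_explanation(state, chosen, alternative):
    """Contrast the expected consequences (Q-values) of two actions."""
    q_a, q_b = Q[(state, chosen)], Q[(state, alternative)]
    return (f"In state '{state}', '{chosen}' has expected return {q_a:.1f} "
            f"versus {q_b:.1f} for '{alternative}' "
            f"(advantage: {q_a - q_b:+.1f}).")

print(contrastive_explanation("competitor_matching", "hold_price", "cut_price"))
```

A real implementation would, of course, learn these value estimates from interaction and translate them into natural language; the point here is only that the explanation is a contrast between the expected outcomes of the chosen and the foregone action, which is exactly the kind of answer a compliance officer would want to see.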

As always, I welcome your thoughts and comments.  I can be reached at my new email address, ai.deng@nera.com, or you may connect with me on LinkedIn (https://www.linkedin.com/in/aideng/).

Ai Deng, PhD
Associate Director

NERA
ECONOMIC CONSULTING
Tel: +1 (202) 466-9210

Fax: +1 (202) 466-9252
www.nera.com

The post Dr. Ai Deng on the Role of Data Analytics and Artificial Intelligence in building a Robust Antitrust Compliance Program appeared first on Cartel Capers.