Algorithmic Regulation and Algorithmic Collusion in Network Markets
Posted: 16 Mar 2018
Date Written: March 16, 2018
The expanding literature on the antitrust implications of data analytics and algorithmic decision-making focuses on how pricing applications that apply analytics (including machine learning, ML) to observed data let users coordinate effectively and tacitly on anti-competitive prices. Algorithms that use data on competitors' past prices (public information) and consumer characteristics (public data or past purchases) could implement price discrimination that is profitable for firms and costly to society without any exchange of proprietary information. Algorithms that use ML may produce outcomes not detectable from the source code. This tacit market power is enhanced if algorithms signal to each other. As FTC v. Topkins illustrates, they may explicitly facilitate or force tacit collusion; in other cases non-competitive outcomes may emerge as 'happy accidents' whose mutual adoption becomes a stable convention among firms. This research considers three questions: i) what algorithms will prevail in static, dynamic or evolutionary equilibrium, and with what efficiency consequences; ii) how this reflects the information collected and exchanged by firms; and iii) how regulators can detect, prove and correct algorithmic market failure.
The ability to force collusion is illustrated by 'zero-determinant' strategies in the repeated Prisoners' Dilemma (many market games are essentially PD); theoretical and experimental results indicate that unilateral adoption of, e.g., the Linear Extortion to Collusion algorithm can force other players to behave as though they were colluding. This stabilises and spreads the use of such programmes. We generalise this to finite automata strategies in order to measure the relative fitness of algorithms of varying complexity, and the resulting equilibria, for fixed data structures (what each algorithm 'sees'). There are three outcomes depending on market structure: an evolutionarily stable symmetric pricing strategy close to the Cournot outcome; a symmetric set of asymmetric equilibria close to a Stackelberg outcome; and an effectively collusive outcome close to joint monopoly. Comparative statics are used to consider whether more complex algorithms (more states or longer memory) 'beat' simpler ones.
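The extortion mechanism above can be sketched numerically. The following is a minimal illustration of a Press-Dyson zero-determinant extortion strategy in the repeated Prisoners' Dilemma: the standard payoffs and the chi = 3 strategy are from the published ZD literature, while the randomly cooperating opponent is an illustrative assumption, not part of the paper's model.

```python
# Zero-determinant extortion sketch: a memory-one strategy that forces
# a linear relation between long-run payoffs, whatever the opponent does.
R, S, T, P = 3, 0, 5, 1          # standard PD payoffs
chi, phi = 3.0, 1.0 / 26.0       # extortion factor and scaling constant

# Probability the extortioner cooperates after each outcome (CC, CD, DC, DD),
# outcomes labelled from the extortioner's point of view.
p = [1 - phi * (chi - 1) * (R - P),          # after CC -> 11/13
     1 - phi * (chi * (T - P) + (P - S)),    # after CD -> 1/2
     phi * ((T - P) + chi * (P - S)),        # after DC -> 7/26
     0.0]                                    # after DD

q = 0.7  # opponent cooperates with a fixed probability (assumption)

# Stationary distribution of the 4-state Markov chain over outcomes,
# found by power iteration in pure Python.
def step(v):
    out = [0.0] * 4
    for s, vs in enumerate(v):
        pc = p[s]
        probs = [pc * q, pc * (1 - q), (1 - pc) * q, (1 - pc) * (1 - q)]
        for s2, pr in enumerate(probs):
            out[s2] += vs * pr
    return out

v = [0.25] * 4
for _ in range(10_000):
    v = step(v)

s_x = sum(vi * pay for vi, pay in zip(v, (R, S, T, P)))  # extortioner
s_y = sum(vi * pay for vi, pay in zip(v, (R, T, S, P)))  # opponent
# ZD guarantee: (s_x - P) = chi * (s_y - P), regardless of the opponent.
print(round(s_x, 4), round(s_y, 4))
```

The opponent's only way to raise its own payoff is to raise the extortioner's payoff three times as fast, which is the sense in which unilateral adoption 'forces' quasi-collusive behaviour.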
This is used to model a game in which apps are written by developers to maximise their value to downstream firms; these developers and algorithms are selected by evolutionary competition (ESS) or mimicry (replicator dynamics). The second stage is added to examine how developers' access to information from several instances of the algorithm affects the equilibrium, and thus how data-protection rules interact with algorithmic decisions.
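The selection process can be sketched with a minimal discrete-time replicator dynamic. The 2x2 payoff matrix below (a PD-like pricing game between a 'collusive' and an 'undercutting' algorithm) is purely illustrative and not taken from the paper's model.

```python
# Replicator-dynamics sketch: population shares of two pricing algorithms
# grow in proportion to their fitness relative to the population average.
A = [[3.0, 0.0],   # payoff of "collusive" vs (collusive, undercutting)
     [5.0, 1.0]]   # payoff of "undercutting" vs (collusive, undercutting)

x = [0.9, 0.1]     # initial shares: mostly collusive algorithms
dt = 0.01
for _ in range(5000):
    f = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]   # fitness
    fbar = sum(x[i] * f[i] for i in range(2))                       # average
    x = [x[i] + dt * x[i] * (f[i] - fbar) for i in range(2)]        # Euler step

print([round(s, 3) for s in x])
# With PD-like payoffs, undercutting dominates and its share tends to 1:
# stable collusion needs extra structure, e.g. ZD play or richer automata.
```

This is the baseline against which the paper's question matters: whether extortionate or more complex automaton strategies can invade and stabilise the collusive outcome that plain replicator selection destroys.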
Finally, we consider the regulatory setting: what types of competitive harm are associated with each equilibrium, how 'bad' algorithms can be detected and distinguished from 'bad outcomes', and the scope for information-sharing rules and data analytics to minimise harm while retaining the benefits of such decisions.
We consider a dynamic spectrum access application: allocation of spectrum to firms that compete to offer services to overlapping markets with different time patterns of use. The model can be used to analyse the trade-off between speed and complexity, analogous to the comparative performance of complex software-based pricing models and fast hardware-based (moving-average) models in algorithmic asset trading.
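The 'fast but simple' side of that trade-off can be illustrated with a toy moving-average pricing rule. The window, markup, and price series below are illustrative assumptions, not calibrated values from the paper.

```python
# Toy "fast" pricing rule: quote a small markup over a short moving
# average of competitors' recent prices, updating on every observation.
from collections import deque

def fast_rule(history, window=3, markup=0.02):
    """Price = (1 + markup) * moving average of the last `window` prices."""
    recent = list(history)[-window:]
    return (1 + markup) * sum(recent) / len(recent)

competitor_prices = [10.0, 10.2, 10.1, 10.4, 10.3]
history = deque(maxlen=10)   # bounded memory, like a hardware buffer
quotes = []
for price in competitor_prices:
    history.append(price)
    quotes.append(round(fast_rule(history), 3))
print(quotes)
```

A 'slow' competitor would instead refit a richer model (more automaton states, longer memory) between quotes; the paper's comparative statics ask when that added complexity pays for its latency.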