Algorithmic Collusion and Algorithmic Compliance: Risks and Opportunities

60 Pages. Posted: 19 Nov 2020. Last revised: 13 Dec 2021.

Ai Deng

Charles River Associates; Johns Hopkins University

Date Written: November 19, 2020


Algorithms are becoming ubiquitous in our society. They are powerful and, in some cases, indispensable tools in today’s economy. On the technology front, we do not yet have AI sophisticated enough to reach autonomous tacit collusion in most real markets with any reasonable degree of certainty. This does not mean that we should ignore the potential risks. In fact, in their effort to design AIs that can learn to cooperate with each other and with humans for social good, AI researchers have shown that autonomous algorithmic coordination is possible. But there are also several positive takeaways from this research. For example, given the technical challenges, I argue that just as emails leave a trail of evidence when a cartel uses them to coordinate, a similar trail of evidence is likely present when collusive algorithms are being designed. The literature also offers a good deal of insight into the types of design features and capabilities that could lead to algorithmic collusion. I highlight the role of algorithmic communication as a leading example and argue that these known collusive features should raise red flags even if collusion is ultimately reached autonomously by algorithms.

Suggested Citation

Deng, Ai, Algorithmic Collusion and Algorithmic Compliance: Risks and Opportunities (November 19, 2020). The Global Antitrust Institute Report on the Digital Economy 27. Available at SSRN.

Ai Deng (Contact Author)

Charles River Associates ( email )

1201 F Street NW
Suite 800
Washington, DC 20004
United States

Johns Hopkins University ( email )

1717 Massachusetts Ave NW
Washington, DC 20036
United States
