
Abstract: The increasing reliance on artificial intelligence (AI) to set market prices, especially in digital markets, has led to the threat of algorithmic collusion, where pricing algorithms align market behavior among competitors without explicit human agreement. This Insight examines the implications of such practices, highlighting the relevance of the approach taken in the recent European AI Act, which regulates the development and deployment of AI systems. While pricing algorithms can improve market efficiency by responding quickly to fluctuations in supply and demand, they also raise concerns about potential anti-competitive effects. Two scenarios are discussed, the Predictable Agent and the Digital Eye, in which algorithms operate under human control or autonomously, respectively. In the Predictable Agent scenario, the conduct of the algorithm can be attributed to the undertaking, potentially giving rise to liability under art. 101 TFEU. Conversely, the Digital Eye scenario challenges the accountability of the undertaking for the conduct of the algorithm because of the algorithm’s autonomy. To address these issues, the Insight discusses “compliance by design”, as emphasized in the AI Act, which requires undertakings to ensure that their algorithms comply with antitrust rules. This is complemented by “outcome visibility”, which requires undertakings to correct anti-competitive outcomes even where the algorithms were programmed in accordance with the guidelines. Together, these measures seek to balance the benefits of AI with the need to prevent collusive practices in digital markets.
Keywords: algorithmic collusion – European competition law – art. 101 TFEU – AI Act – concerted practice – compliance by design.
I. Introduction
In today’s markets, more and more companies are turning to artificial intelligence to determine market prices. This practice is particularly prevalent in digital markets, driven by the widespread availability of so-called big data.[1] The growing use of big data has reshaped the modern economy: processing this data allows companies to make more effective and strategic market interventions, gradually replacing intuitive, entrepreneurial decision-making with decisions grounded in precise and timely data. This development has encouraged the rise of algorithms[2] capable of autonomously determining prices based on collected data and their variations over time, without human intervention. These automated algorithms can continuously adjust prices in milliseconds, responding instantly to fluctuations in market supply and demand.
Pricing algorithms undoubtedly have pro-competitive effects (e.g., improving quality[3], reducing prices, increasing innovation[4]). However, the economic and legal literature also suggests that they can have anti-competitive effects, facilitated by increased transparency in digital markets. Indeed, algorithms, even when not programmed to this end, may nonetheless collude with each other to set prices at supra-competitive levels, thereby facilitating coordinated pricing behavior between competitors in the digital market.[5]
The phenomenon has been named “algorithmic collusion” because of its similarities to what economists call tacit collusion. Legally, the term “collusion” is improper, as it would suggest that liability under European competition law can only arise from a consciously accepted agreement between the parties. However, in the case law of the Court of Justice, art. 101 TFEU covers broader forms of cooperative or coordinated behavior, where anti-competitive effects result from any direct or indirect contact between undertakings intended to influence market behavior or to reveal the course of action they have decided or are planning to adopt.[6] From a legal point of view, it would be more appropriate to speak of parallel behavior, i.e. situations where undertakings engage in similar or identical market behavior without any agreement or contact between them. Art. 101 TFEU does not deprive undertakings of their right to “adapt intelligently to the existing or anticipated behavior of their competitors”.[7] Therefore, mere parallel behavior between competing undertakings does not per se constitute a violation of European rules. Such parallel behavior traditionally concerns oligopolistic markets, where undertakings can achieve the objectives of an explicit cartel simply by recognising their interdependence and avoiding competitive behavior. Antitrust authorities have historically struggled to address the so-called oligopoly problem[8] through competition law instruments. Apart from ex ante intervention under the merger regulation,[9] which prevents the creation of collusive oligopolistic markets, parallel behavior is generally insufficient to give rise to antitrust liability and is therefore not sanctioned under EU law.[10]
The similarity between this phenomenon and so-called algorithmic collusion lies in the fact that the price alignment does not result from a collusive strategy agreed between competitors, but from different, autonomous and unilateral decisions taken by individual algorithms to increase the undertaking’s ability to make profits. The main concern is that pricing algorithms could lead to the spread of parallel behavior not only in oligopolies but also across other markets.
The economic literature[11] distinguishes four categories of algorithmic collusion: the Messenger[12] and Hub and Spoke[13] scenarios, where algorithms facilitate coordination between undertakings, and the Predictable Agent and Digital Eye scenarios, where algorithms generate coordination between undertakings.
The first two scenarios will not be examined, as in both cases algorithms act as facilitators or supporters of the implementation of the prohibited agreement[14] and thus clearly fall within the scope of art. 101 TFEU.
Coordinating algorithms are designed and implemented unilaterally, i.e. each undertaking uses its own price-setting algorithm. The peculiarity of these algorithms is that there is neither initial nor ongoing communication or contact between the companies. However, the fact that several undertakings rely on such algorithms could facilitate the coordination of their market behavior.[15]
The economic literature distinguishes two types of algorithms based on their complexity: descriptive algorithms and black-box algorithms. With descriptive algorithms, the strategy and actions that arise from their use can be understood by analysing the source code. Among the descriptive algorithms is the so-called signalling algorithm, which operates according to a systematic scheme: it first signals a price increase and monitors the reactions of competitors; if the latter align, it sets the price, otherwise it signals a new price. Black-box algorithms, on the other hand, are more challenging to interpret: the strategy resulting from their use is not always clear from the code alone. These are often machine learning and autonomous models, known as self-learning models, which do not define a pricing strategy explicitly in advance but process information and make pricing decisions on the basis of the available data.
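By way of illustration only, the signalling scheme just described can be rendered as a minimal sketch. Everything in the following fragment (the price step, the alignment tolerance, the toy model of competitors’ reactions) is an assumption made for the example and does not describe any real pricing system:

```python
import random

def signalling_algorithm(start_price, observe_competitors, rounds=50, step=1.05):
    # Sketch of the signalling scheme: announce an increase, watch rivals,
    # commit the price if they align, otherwise signal a new candidate.
    price = start_price
    for _ in range(rounds):
        candidate = price * step                 # signal an intended increase
        rivals = observe_competitors(candidate)  # rivals' prices after the signal
        if all(p >= candidate * 0.99 for p in rivals):
            price = candidate                    # competitors aligned: set the price
        # if rivals do not follow, the old price stands and a new signal follows
    return price

# Toy market (assumption): each of three rivals matches the signalled price
# 80 per cent of the time and otherwise reverts to a competitive price of 10.
def toy_rivals(signalled_price):
    return [signalled_price if random.random() < 0.8 else 10.0 for _ in range(3)]

print(signalling_algorithm(10.0, toy_rivals))
```

Because every step of this scheme is explicit in the code, an observer can reconstruct the strategy simply by reading it, which is precisely what distinguishes a descriptive algorithm from a black-box one.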
Given these two types of algorithms, Ezrachi and Stucke’s research[16] distinguishes between a Predictable Agent scenario and a Digital Eye scenario. The first is characterized by human intervention in unilaterally designing the machine to deliver predictable results and respond in a specific way to changing market conditions.[17] The second, on the other hand, involves situations where the algorithm is given a goal and operates autonomously to achieve it. In the Digital Eye scenario, tacit coordination, when it occurs, is not the result of explicit human design, but rather the result of self-learning and autonomous machine execution.[18]
The main issues in this category of pricing algorithms are twofold: (i) under what circumstances the parallel use of algorithms results in an anti-competitive outcome due to coordination rather than mere parallel behavior and (ii) given that this outcome is caused by artificial intelligence, to what extent coordination caused by an algorithm can be attributed to an undertaking.
II. Corporate accountability for algorithmic behaviour
ii.1. Predictable agent scenario
In the context of the Predictable Agent scenario, the prevailing doctrine compares the algorithm to an employee of the undertaking.[19] In this sense, the algorithm would be the extended arm of the undertaking: just as the company is responsible for antitrust conduct decided and implemented by an employee, it is also responsible for the conduct potentially decided and implemented by the algorithm.[20] To assess the validity of this analogy, it is necessary to examine the concept of “employee” under European law and determine whether it can encompass the signalling algorithm.
According to the case law of the Court of Justice, the concept of employee is characterised by the fact that “for a certain period of time a person performs services for and under the direction of another person in return for which he receives remuneration”.[21] This definition identifies four constitutive criteria: i) the personal nature of the service; ii) the technical-functional subordination of the employee to the employer's instructions; iii) the temporal nature of the service; and iv) the payment of a remuneration in exchange for the service rendered.
As mentioned above, the signalling algorithm acts in favor of and in accordance with the decisions and strategies of the undertaking, which is aware of its operating mechanism. The second criterion is therefore clearly met. Moreover, once the algorithm has been programmed, the company can be assumed to intend to use it for a certain period of time, which satisfies the third criterion. The two most problematic criteria are, therefore, the personal nature of the service and the payment of a remuneration. An algorithm is defined as “any well-defined computational procedure that takes a value or set of values as an input and produces a value or set of values as an output in a finite amount of time. An algorithm is thus a sequence of computational steps that transforms the input into the output”.[22] More broadly, the Artificial Intelligence Act (hereafter AI Act)[23] defines an artificial intelligence system as a “machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that […] infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.[24]
On an objective interpretation of these definitions, the signalling algorithm cannot be considered equivalent to an employee because, firstly, only a person can be qualified as an employee under EU law and, secondly, the activities of the algorithm are not remunerated. The algorithm should instead be considered a mere tool, a means or device used by the undertaking to achieve a specific objective, and would therefore be regulated like an economic asset.
By contrast, an evolutionary interpretation leads to a different conclusion. It might well be argued that today, following significant technological progress, the concept of employee could be extended to include artificial intelligence. This interpretation would satisfy the need for accountability, since the algorithm performs its functions for and under the direction of the undertaking and is thus integrated into the economic unit that constitutes the undertaking itself.[25] However, this interpretation fails to consider the four criteria established by the Court in the Neidel case. Equating artificial intelligence with a person would likely stretch the concept too far, potentially amounting to a contra legem interpretation.
In view of these considerations, the objective construction is preferable. The qualification of the algorithm as a tool in the hands of the undertaking is consistent with the rationale of the recent AI Act.[26] Indeed, artificial intelligence is regulated in the context of the placing on the market, putting into service and use of AI systems.[27] This means that undertakings must ensure that their algorithms comply with existing regulations and take responsibility for the use of such tools, just as they would for any other regulated device. Classifying the signalling algorithm as a tool, rather than an employee, does not exempt the undertaking from liability in case of anti-competitive conduct by the algorithm. Descriptive algorithms have no decision-making or strategic autonomy; rather, the undertaking has full control over their operation. As a result, any anti-competitive effect produced by the algorithm in this scenario is in fact accepted and foreseen by the undertaking itself.
ii.2. Digital eye scenario
In this scenario, as mentioned above, price coordination in the market is achieved through the use of self-learning algorithms, i.e. automatic and autonomous learning models that are capable of processing data in a complex and rapid manner to produce an optimal output without revealing the decision-making process. As a result, these algorithms enable undertakings to achieve a collusive outcome even without being aware of it.[28] Indeed, undertakings may not necessarily be motivated by an anti-competitive intent, nor may they be able to foresee the likelihood that the use of AI in the market will lead to algorithmic collusion. In this scenario, therefore, there is a misalignment between human intent and the strategic decisions of the algorithm itself.[29]
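To make the notion of a self-learning pricing algorithm more concrete, the following minimal sketch shows a tabular Q-learning agent, one family of reinforcement-learning models often used in the economic literature to study algorithmic collusion. All elements of the sketch (the price grid, the learning parameters, the stylised demand function) are illustrative assumptions; actual systems are considerably more complex:

```python
import random

PRICES = [8, 9, 10, 11, 12]             # discretised price grid (assumption)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration

q = {p: 0.0 for p in PRICES}            # stateless Q-table, kept minimal

def choose_price():
    # Explore occasionally; otherwise exploit the values learned so far.
    if random.random() < EPSILON:
        return random.choice(PRICES)
    return max(q, key=q.get)

def update(price, profit):
    # The pricing strategy lives implicitly in these learned values: nothing
    # in the code states a strategy, which is what makes the model a black box.
    q[price] += ALPHA * (profit + GAMMA * max(q.values()) - q[price])

def toy_profit(price, rival_price):
    demand = max(0.0, 20 - price + 0.5 * rival_price)  # stylised linear demand
    return (price - 5) * demand                        # assumed unit cost of 5

for _ in range(10_000):
    p = choose_price()
    update(p, toy_profit(p, rival_price=10))

print(max(q, key=q.get))                # the price the agent has learned to prefer
```

Even in this toy version, the code contains no pricing rule that could be read off the source: the conduct emerges from the learned values, which illustrates why the undertaking cannot foresee the strategy the algorithm will adopt.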
As far as accountability is concerned, the inherent autonomy of self-learning algorithms means that the analogy drawn in the previous section, classifying the algorithm as a tool, does not apply. In this context, the European Parliament has stated that “the more autonomous robots are, the less they can be considered as simple tools in the hands of other actors”.[30] However, the algorithm itself cannot be held liable because, according to the criteria set out by the Court of Justice in the Höfner judgment,[31] it is not an entity carrying out an economic activity[32] and therefore cannot be considered an autonomous legal entity under European competition law.
Some scholars suggest applying, by analogy, the principles governing the relationship between a principal and a commercial agent.[33]
In this regard, the European Commission’s Guidelines on vertical restraints[34] define a commercial agent as “a legal or natural person entrusted with the power to negotiate and/or conclude contracts on behalf of another person (the principal), either in the agent’s own name or in the name of the principal, for the purchase of goods or services by the principal, or the sale of goods or services supplied by the principal”.[35] Following the reasoning in the previous section, only an evolutionary interpretation can equate a self-learning algorithm with the figure of an agent. However, this would again lead to an overly broad and unjustified interpretation of the concept of person. Moreover, for the conduct of the agent to be attributed to the undertaking, the agent must not independently determine its own conduct on the market, but must apply the instructions given by the principal and carry out its activities on behalf, and under the direction, of the undertaking.[36] As explained above, black-box algorithms have an autonomy that allows them to develop independent strategies of action without disclosing the decision-making process by which they reach the final result. Therefore, since they do not act under the direction of the undertaking, these algorithms do not form an economic unit with it[37] and their behavior cannot be attributed to the undertaking.
As a consequence, in the Digital Eye scenario, it is unlikely that the anti-competitive behavior of the algorithms can be attributed to the undertaking.
III. Applicability of art. 101 TFEU to algorithmic collusion
iii.1. Predictable agent scenario
As explained earlier, there is no agreement between the undertakings in this scenario. Each undertaking designs and configures its own algorithm independently, programming it to monitor price changes with the aim of maximizing the undertaking’s profit. However, according to the European Commission’s e-commerce sector inquiry,[38] more than 70 per cent of retailers using this type of algorithm programme it for this very purpose. Thus, any undertaking could foresee that adopting similar signalling algorithms would encourage parallel behavior, leading to supra-competitive prices.
Ezrachi and Stucke argue that “this conscious parallelism at the human level leads to the programming of machines that are aware of possible conscious parallelism at the market level”.[39] Therefore, according to the authors, even if there is no evidence of collusion between undertakings, there is likely evidence of human intent to use the algorithm anticompetitively. Similarly, Calzolari remarks that algorithmic collusion cannot be compared to parallel behaviour,[40] pointing to a significant difference from the traditional scenario, in which undertakings adapt intelligently and rationally to existing market conditions which by themselves encourage the emergence of analogue tacit collusion. In the digital scenario, by contrast, undertakings proactively help to create the conditions that subsequently allow their rational algorithms to engage in tacit collusion.[41] This reasoning rests on the presumed predictability, or knowledge on the part of the undertakings, of the potential anti-competitive effects of the algorithms; algorithmic collusion would therefore be comparable to a concerted practice, as the resulting prices appear to be the result of a genuine coordination mechanism accepted by the market participants.
In the case law of the Court of Justice, the notion of concerted practice implies a mental consensus to distort competition through concerted conduct.[42] However, this consensus does not necessarily have to be reached verbally, but may also be based on mere direct or indirect contacts between the parties with a view to eliminating uncertainty as to their future conduct on the market.[43] Consequently, antitrust liability cannot be imposed in the absence of knowledge of the anti-competitive agreement: in simple terms, “an undertaking cannot participate in an illegal practice of which it is unaware”.[44] Indirect contacts include the exchange of information, which the Commission has classified as an objective infringement of art. 101 TFEU because “exchanging information relating to undertakings’ future conduct regarding prices […] is particularly likely to lead to a collusive outcome”.[45] The notion of exchange of information has been further addressed by the case law of the Court of Justice. In particular, the Polypropylene judgment established that, in the absence of proof to the contrary, “there must be a presumption that the undertakings participating in concerting arrangements and remaining active on the market take account of the information exchanged with their competitors when determining their conduct on that market”.[46] This presumption, generally referred to as the “Anic presumption”, concerns the nexus between the dissemination of information and the actual behavior of market participants. Subsequent case law has reaffirmed the Anic presumption and clarified some of its aspects: for instance, in the Westfalen Gassen judgment,[47] the Court of First Instance stated that an undertaking taking part in anti-competitive meetings must “put forward evidence to establish that its participation in those meetings was without any anti-competitive intention by demonstrating that it had indicated to its competitors that it was participating in those meetings in a spirit that was different from theirs”.[48] “The reason underlying that principle of law is that, having participated in the meeting without publicly distancing itself from what was discussed, the undertaking gave the other participants to believe that it subscribed to what was decided there and would comply with it”.[49]
Against this background, the key function of public distancing would be to rebut the Anic presumption, which assumes that an undertaking engaging in collusive interactions with competitors necessarily takes into account the information exchanged when making market decisions.
The Anic presumption and public distancing were applied by the Court in a different scenario.[50] In particular, in Eturas,[51] after recalling its previous case law[52] according to which the concept of a concerted practice implies, in addition to the agreement, subsequent market behavior and a nexus between these two elements,[53] the Court ruled that an undertaking can only be held liable for implicitly consenting to anti-competitive behavior if (i) there is actual market behavior and (ii) there is a nexus between the agreement and such behavior.[54]
Applying this reasoning to pricing algorithms, it can be assumed that, if an undertaking uses pricing algorithms knowing that they facilitate anti-competitive practices, such as signalling intended price changes to competitors, its actions could be classified as a concerted practice.[55] Even if the undertaking merely receives signals and adjusts its market behavior in response, rather than actively sending signals, it may still fall within the scope of art. 101 TFEU. This means that passive participation can still give rise to liability for anti-competitive conduct. It could therefore be argued that, in this scenario, liability for breach of art. 101(1) TFEU would also attach to passive participation in the anti-competitive practice, thus allowing the presumption of anti-competitive effects even in cases of indirect contacts between competitors.[56] In such cases, the burden shifts to the undertaking to prove that it has publicly distanced itself from the anti-competitive practice. The use of this type of algorithm to signal and adjust prices based on competitors’ responses changes the nature of an undertaking’s commercial decisions, which hence “should be considered concerted rather than independent”.[57] Since, as explained above, the behavior of the algorithm is attributable to the undertaking, which is fully aware of how it works and willfully determines its objectives, the undertaking itself bears the burden of proving that it has publicly, formally and unequivocally distanced itself from the indirect contact. In the absence of such public dissociation, the Anic presumption applies and art. 101 TFEU is enforced.
iii.2. Digital eye scenario
Unlike in the previous scenario, self-learning algorithms do not follow a consistent pattern: they can either maintain competitive behavior or adopt collusive behavior, depending on market circumstances. These algorithms simply observe and analyse the prices set by competitors and react autonomously and intelligently. In this scenario, there is no indirect contact between undertakings through continuous price signalling and subsequent alignment with the responses received from competitors. Therefore, what has been said about the previous scenario cannot be applied here by analogy.
Various theories have been proposed in the literature regarding the use of self-learning algorithms by undertakings. Some argue that the use of such algorithms may be one of the “correlation factors” by which independent competitors can predict each other’s behavior and are therefore strongly induced to coordinate their conduct on the market, thereby creating a collective dominant position within the meaning of art. 102 TFEU.[58] According to the case law of the Court of Justice, “a finding that two or more undertakings hold a collective dominant position must, in principle, proceed upon an economic assessment of the position on the relevant market of the undertakings concerned, prior to any examination of the question whether those undertakings have abused their position on the market”.[59] Two steps have to be distinguished: the first is to assess whether the economic nexus or the correlation factors between these undertakings[60] allow them to act as a collective entity on the market independently of their competitors, customers and consumers;[61] the second is to assess whether such an entity has abused its dominant position. However, by qualifying algorithms as the economic correlation factors that allow companies to act as a collective entity, this theory inverts the two steps, as the abuse resulting from the coordination of algorithms leading to supra-competitive prices becomes the precondition for the collective entity. Moreover, the coordination between algorithms is not the result of a collective scenario, but rather of different unilateral decisions of single undertakings:[62] the collective scenario appears only after the coordination has taken place. Finally, this theory entails significant complications from an enforcement perspective, as under art. 102 TFEU both the existence of a collective dominant position and specific abuses must be proven beyond the mere use of algorithms.
A different suggestion made in the literature is to adopt a broad interpretation of concerted practices under art. 101 TFEU.[63] More specifically, the author suggests an objective framing of the notion of concerted practice, emphasizing conduct data and qualifying the subjective factor of the meeting of minds not as a constitutive element of the offence, but rather as an aggravating factor of antitrust liability.[64] Although this thesis has been supported by several authors, it is contrary to the very rationale of art. 101 TFEU. A common feature of all the types of conduct prohibited by art. 101 TFEU, whether agreements, concerted practices or decisions of associations of undertakings, is indeed that they require a meeting of intent on the part of two or more undertakings.[65] Although the subjective element has been weakened over time, it remains a constitutive requirement that must be demonstrated for art. 101 TFEU to be applied.[66] This result cannot change, regardless of the method of interpretation used. Indeed, the ultimate purpose of European antitrust law remains the possibility for each company to autonomously determine its economic behavior on the market. In the Digital Eye scenario, as already explained, the use of self-learning algorithms implies a separation between the intent of the company and the decisions of the algorithm. The undertaking is therefore not in a position to determine in advance the outcome produced by the algorithm. The application of art. 101 TFEU on the basis of objective (strict) liability is excluded by the case law of the Court of Justice, which instead supports the personal liability of the economic entity that has committed the infringement.[67] Although that ruling did not concern AI, the principle expressed by the Court can be applied, mutatis mutandis, to pricing algorithms: just as a parent company can be held liable only if the subsidiary is part of the economic unit that committed the infringement, so the undertaking can be held liable only if the algorithm is part of its economic unit, which, as mentioned above, is difficult to maintain.
IV. Concluding remarks
The analysis performed in the previous sections seems to lead to the following conclusions. Regarding the attribution of the algorithm’s behavior to the undertaking, the answer varies depending on the scenario in question. In the Predictable Agent scenario, based on an objective construction, the algorithm is deemed to be a tool that acts according to the instructions, and under the full control, of the company. This means that the company is aware of, and endorses, the anti-competitive conduct in which the algorithm may engage; such conduct can therefore be attributed to the undertaking. In contrast, in the Digital Eye scenario, the algorithm enjoys such a significant degree of autonomy that it cannot be considered as acting on behalf of or under the direction of the company. Accordingly, as the algorithm is not part of the economic unit of the company, it is difficult to attribute its conduct to the company. The conclusions also differ with regard to the applicability of art. 101 TFEU to so-called algorithmic collusion. In the Predictable Agent scenario, algorithmic collusion may be considered a concerted practice, as algorithms consistently signal price changes and companies subsequently align their market behavior with competitors’ responses. This process creates an implicit form of communication between the undertakings. The presumption of the undertaking’s participation in the concerted practice must be rebutted by the company through public distancing.
In the Digital Eye scenario, however, art. 101 TFEU is not applicable because there is a misalignment between the intent of the undertaking and the conduct of the algorithm. The subjective element remains a constitutive element of the concerted practice that must be demonstrated for art. 101 TFEU to apply. Consequently, accepting its applicability would raise significant enforcement issues.
Commissioner Vestager[68] has proposed the concept of “compliance by design”. Borrowing from the “privacy by design” principle of data protection rules,[69] this concept implies that remedies for algorithmic price coordination should be applied ex ante, i.e. when algorithms are designed and programmed. In particular, these algorithms should be programmed to comply with antitrust rules and not to set collusive prices, even if they result from oligopolistic interdependencies. The idea is to make companies responsible for the use of pricing algorithms by imposing programming obligations. If a company programs the algorithm employed in its commercial activity, it is directly subject to the programming obligations. If, on the other hand, the company purchases the software from a third party, the same obligations will be imposed on the external programmer, while the company will be required to verify that this obligation has been effectively fulfilled before concluding the transaction. The same ex ante conformity assessment approach is found in the AI Act. The higher the risk associated with the use of a particular AI system, the greater the responsibility of those who develop and use it, including a ban on the use of technologies whose risk is deemed unacceptable.
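A deliberately simple sketch may suggest what such a programming obligation could look like in practice. The benchmark rule below, a cap at cost plus a fixed margin, is entirely an assumption made for illustration: neither the AI Act nor competition law prescribes any specific rule of this kind:

```python
def compliant_price(proposed_price, unit_cost, benchmark_margin=0.30):
    # Hypothetical design-stage guard: whatever price the learning component
    # proposes, clamp it to a benchmark ceiling so the algorithm cannot
    # sustain supra-competitive levels. The 30% margin is an arbitrary choice.
    ceiling = unit_cost * (1 + benchmark_margin)
    return min(proposed_price, ceiling)

# The algorithm proposes 14.0; the guard clamps it to the assumed ceiling of 13.0.
print(compliant_price(proposed_price=14.0, unit_cost=10.0))
```

The point of the sketch is only that the constraint is imposed at the design stage, before any market conduct occurs, which is what distinguishes compliance by design from ex post enforcement.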
One criticism of compliance by design is the concern that algorithms might still produce collusive outcomes despite following the design rules. In such a case, the undertaking could defend itself by claiming to have followed the rules. This concern is addressed by the concept of “outcome visibility”,[70] which would operate ex post, complementing compliance by design where it fails or is insufficient to prevent price coordination by algorithms.[71] The rationale behind this approach is to prevent companies from taking advantage of algorithmic collusion by requiring them to restore competition when it is distorted by artificial intelligence. In other words, even if companies comply with the programming obligations, any anti-competitive result that occurs will be visible to the outside world.[72] Should this be the case, companies will become aware of it, as they will notice an otherwise unjustified increase in profits. According to the principle of outcome visibility, companies have an obligation to restore prices to their pre-collusion levels as soon as they become aware of the circumstances. In this way, companies will be sanctioned both for failing to comply by design (i.e. not taking the required action) and for taking advantage of the anti-competitive result achieved by algorithms, even if these were programmed in accordance with the guidelines.
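Again purely by way of illustration, an ex post check of this kind could take the following form; the margin series and the threshold are assumptions, and a real monitor would have to control for demand shocks and other lawful explanations of a profit increase:

```python
def outcome_monitor(margin_history, threshold=0.25):
    # Hypothetical 'outcome visibility' check: flag a jump in realised margins
    # that the firm cannot otherwise explain, prompting it to restore prices
    # to their pre-coordination level.
    baseline = sum(margin_history[:-1]) / (len(margin_history) - 1)
    return margin_history[-1] > baseline * (1 + threshold)

# Margins hover around 10-11% and then jump to 18%: the monitor flags it.
print(outcome_monitor([0.10, 0.11, 0.10, 0.18]))  # -> True
```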
--------------------
European Papers, Vol. 9, 2024, No 3, pp. 1048-1061
ISSN 2499-8249 - doi: 10.15166/2499-8249/798
* PhD candidate in European Law, University of Rome “La Sapienza”, maria.giacalone@uniroma1.it.
[1] Communication COM(2014) 442 final from the Commission of 2 July 2014, Towards a thriving data-driven economy.
[2] “An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output in a finite amount of time. An algorithm is thus a sequence of computational steps that transform the input into the output”. TH Cormen, CE Leiserson, RL Rivest and C Stein, Introduction to Algorithms (The MIT Press 2022) 5.
[3] Pricing algorithms improve quality by fostering transparency and innovation. They help companies better understand consumer preferences, revealing priorities like durability or sustainability. This encourages firms to refine products and develop new offerings tailored to market demands. By streamlining processes and reducing production costs, algorithms free up resources that can be reinvested in enhancing quality. Additionally, they shift competition beyond price, focusing on attributes like product performance and customer satisfaction. As a result, algorithms create a virtuous cycle where companies are incentivized to innovate and improve, ultimately benefiting consumers with higher-quality products and services.
[4] See Organisation for Economic Co-operation and Development (OECD), Algorithms and Collusion: Competition Policy in the Digital Age (2019) web-archive.oecd.org 2; Competition and Market Authority (CMA), Pricing algorithms: Economic working paper on the use of algorithms to facilitate collusion and personalised pricing (2018) assets.publishing.service.gov.uk 19.
[5] OECD, Algorithms and Collusion: Competition Policy in the Digital Age cit.; Autorité de la concurrence, and Bundeskartellamt, Algorithms and Competition (2019) www.autoritedelaconcurrence.fr.
[6] Joined cases 40 to 48, 50, 54 to 56, 111, 113 and 114-73 Suiker Unie v Commission ECLI:EU:C:1975:174.
[7] Suiker Unie v Commission cit. para. 174; case C‐8/08 T-Mobile ECLI:EU:C:2009:343 para. 33.
[8] The characteristics of the market in which the oligopolists operate are such that they do not compete on price and are able to achieve supra-competitive profits without entering into an explicit agreement or concerted practice in violation of European competition law. Cooperative pricing is therefore a logical outcome of the market itself. As a result, in an attempt to name the market effects created by oligopolistic interdependence, Posner termed the phenomenon the oligopoly problem. For further details see N Petit, ‘The Oligopoly Problem in EU Competition Law’ in I Lianos and D Geradin (eds), Research Handbook in European Competition Law (Edward Elgar 2013) 3-4; R Posner, ‘Oligopoly and the Antitrust Laws: A Suggested Approach’ (1969) Stanford Law Review 21.
[9] Council Regulation (EC) No 139/2004 of 20 January 2004 on the control of concentrations between undertakings (EC Merger Regulation).
[10] Case 48-69 Imperial Chemical Industries Ltd. v Commission ECLI:EU:C:1972:70 paras 65-66; Suiker Unie v Commission cit. para. 174; T-Mobile cit. para. 32.
[11] A Ezrachi and ME Stucke, Virtual Competition: the promise and perils of the algorithm-driven economy (Harvard University Press 2016).
[12] This scenario involves pricing algorithms that aim to implement an agreement already entered by the parties. Two phases can be distinguished: a first phase in which an anti-competitive practice is implemented by a human agreement, and a second one in which the algorithm facilitates the implementation of the prohibited agreement.
[13] The hub and spoke scenario refers to situations where a third party (the hub) provides the same or similar algorithms to competitors (the spokes). The subsequent use of these algorithms by competitors in the market stabilises prices, leads to a coordinated outcome and has anti-competitive effects.
[14] Autorité de la concurrence and Bundeskartellamt, Algorithms and Competition cit. 27.
[15] See, Autorité de la concurrence and Bundeskartellamt Algorithms and Competition cit.; Competition and Market Authority (CMA) Pricing algorithms: Economic working paper on the use of algorithms to facilitate collusion and personalised pricing cit.; OECD, Algorithms and Collusion: Competition Policy in the Digital Age cit.
[16] A Ezrachi and ME Stucke, Virtual Competition: the promise and perils of the algorithm- driven economy cit.
[17] A Ezrachi and ME Stucke, ‘Artificial Intelligence & Collusion’ (2017) University of Illinois Law Review 1775, 1783.
[18] Ibid. 1795.
[19] P Manzini, ‘Algoritmi collusivi e diritto antitrust europeo’ (2019) Mercato Concorrenza Regole 172; M Gal, ‘Algorithmic-facilitated collusion’ (2017) OECD Roundtable on Algorithmic Collusion one.oecd.org 25.
[20] M Filippelli ‘La collusione algoritmica’ (2021) Orizzonti del Diritto Commerciale 375, 390.
[21] Case C-337/10 Neidel ECLI:EU:C:2012:263 para. 23.
[22] Competition and Market Authority (CMA), Pricing algorithms: Economic working paper on the use of algorithms to facilitate collusion and personalised pricing cit.
[23] Proposal COM(2021) 206 final from the Commission of 21 April 2021 for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
[24] Art. 3 AI Act.
[25] Case C-542/14 VM Remonts ECLI:EU:C:2016:578 para. 24.
[26] In this regard, an algorithm is qualified as a product in the AI Liability Directive (COM(2022) 496 final) currently under discussion.
[27] Proposal COM(2021) 206 final cit. 13.
[28] OECD, Algorithms and Collusion: Competition Policy in the Digital Age cit. 32.
[29] A Ezrachi and ME Stucke, ‘Artificial Intelligence & Collusion: When Computers Inhibit Competition’ (2017) University of Illinois Law Review 1795.
[30] Resolution P8_TA(2017)0051 from the European Parliament of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics. The 2017 resolution has been overtaken by more recent advancements, particularly the approach outlined in the AI Act. However, the AI Act implicitly acknowledges the diversity of AI systems, especially through its recognition of “levels of autonomy” and “adaptability post-deployment”. This supports the distinction between algorithms with predictable, human-defined objectives and those that evolve autonomously through self-learning mechanisms. In this regard, Recital 12 of the AI Act specifically states “[…] AI systems are designed to operate with varying levels of autonomy, meaning that they have some degree of independence of actions from human involvement and of capabilities to operate without human intervention. The adaptiveness that an AI system could exhibit after deployment refers to self-learning capabilities, allowing the system to change while in use”.
[31] Case C-41/90 Höfner ECLI:EU:C:1991:161.
[32] “It must be observed, in the context of competition law, first that the concept of an undertaking encompasses every entity engaged in an economic activity, regardless of the legal status of the entity and the way in which it is financed [...]”. Höfner cit. para. 21.
[33] P Manzini, ‘Algoritmi collusivi e diritto antitrust europeo’ cit. 178-179.
[34] Communication 2022/C 248/01 from the Commission of 30 June 2022, Guidelines on vertical restraints para. 29.
[35] Ibid. para. 29.
[36] F Amato, ‘Titolo VII Norme comuni sulla concorrenza, sulla fiscalità e sul ravvicinamento delle legislazioni’ in A Tizzano (eds) Trattati dell’Unione europea (Giuffrè editore 2014) 1025. See also case T-66/99 Minoan Lines SA v Commission, ECLI:EU:T:2003:337 para. 124-125; case C-217/05 Confederación Española de Empresarios de Estaciones de Servicio, ECLI:EU:C:2006:784 paras 43-45; case T-325/01, DaimlerChrysler AG v Commission ECLI:EU:T:2005:322 para 88.
[37] Suiker Unie cit. para. 480; DaimlerChrysler AG v Commission cit. para. 86.
[38] Commission, Final report on the E-commerce Sector Inquiry, Brussels (10 May 2017) ec.europa.eu 175-176.
[39] A Ezrachi and ME Stucke, ‘Virtual Competition: the promise and perils of the algorithm- driven economy’ cit. 65.
[40] L Calzolari ‘The Misleading Consequences of Comparing Algorithmic and Tacit Collusion: Tackling Algorithmic Concerted Practices Under Art. 101 TFEU’ (2021) European Papers www.europeanpapers.eu 1193.
[41] Ibid. See also, L Calzolari, ‘La collusione fra algoritmi nell’era dei big data: l’imputabilità alle imprese delle “intese 4.0” ai sensi dell’art. 101 TFUE’ (2018) Rivista di diritto dei media 11.
[42] See R Whish and D Bailey, Competition Law (8th ed.) (OUP 2016); A Tizzano (eds) Trattati dell’Unione europea cit.
[43] Imperial Chemical Industries Ltd. v Commission cit. para. 64; See also T-Mobile cit. para. 26 and the case law cited therein.
[44] I Apostolakis, ‘Antitrust liability in cases of indirect contacts between competitors: VM Remonts’ (2017) CMLRev 621.
[45] Communication C/2023/4752 final from the Commission of 17 July 2023, Guidelines on the applicability of Article 101 of the Treaty on the Functioning of the European Union to horizontal cooperation agreements para. 414.
[46] Case C-49/92 P Commission v Anic Partecipazioni ECLI:EU:C:1999:356 para. 121.
[47] Case T-303/02 Westfalen Gassen Nederland v Commission ECLI:EU:T:2006:374.
[48] Westfalen Gassen Nederland v Commission cit. para. 76. See also case C-199/92 P Hüls v Commission ECLI:EU:C:1999:358, para. 155 and case C-235/92 P Montecatini v Commission ECLI: EU:C:1999:362, para. 181.
[49] Westfalen Gassen Nederland v Commission cit. para. 77.
[50] Both judgments were issued with reference to the hub and spoke scenario.
[51] Case C-74/14 Eturas ECLI:EU:C:2016:42.
[52] See in particular case C-286/13 P Dole Food Company, Inc. ECLI:EU:C:2015:184; Eturas cit. para. 42.
[53] Ibid.
[54] Eturas cit. para. 45.
[55] N Colombo, ‘Virtual Competition: Human Liability Vis-a-Vis Artificial Intelligence’s Anticompetitive Behaviours’ (2018) European Competition and Regulatory Law Review 13.
[56] I Apostolakis, ‘Antitrust liability in cases of indirect contacts between competitors: VM Remonts’ cit. 622.
[57] P Manzini, ‘Algoritmi collusivi e diritto antitrust europeo’ cit.; See also OECD, ‘Note from the European Union’ one.oecd.org 33.
[58] P Manzini, ‘Algoritmi collusivi e diritto antitrust europeo’ cit.
[59] Joined cases C-395/96 P and C-396/96 P Compagnie Maritime Belge Transports and Others v Commission ECLI:EU:C:2000:132 para. 38.
[60] Compagnie Maritime Belge Transports and Others v Commission cit. para. 41.
[61] Ibid. para. 42.
[62] In this sense, see L Calzolari ‘The Misleading Consequences of Comparing Algorithmic and Tacit Collusion: Tackling Algorithmic Concerted Practices Under Art. 101 TFEU’ cit. 1204
[63] M Filippelli ‘La collusione algoritmica’ cit.
[64] Ibid. 396.
[65] R Whish and D Bailey, Competition Law cit.; A Tizzano (eds) Trattati dell’Unione europea cit.
[66] For an infringement of art. 101 TFEU to be attributed to an undertaking, cooperation must be conscious, and intentional coordination between the parties must be demonstrated. In this regard, see Imperial Chemical Industries Ltd. v Commission cit. paras 64-65; Suiker Unie v Commission cit. paras 173-174; Commission v Anic Partecipazioni cit. para. 115; Hüls v Commission cit. para. 158. While this principle originates from older case law, it has been consistently reaffirmed in subsequent judgments. Most recently, see case T-8/16 Toshiba Samsung Storage v Commission ECLI:EU:T:2019:522 para. 381. This reinforces the enduring relevance of the requirement for intentional coordination in establishing a violation of art. 101 TFEU.
[67] Case C-97/08 P Akzo Nobel NV and Others v Commission ECLI:EU:C:2009:536 para. 77.
[68] M Vestager, ‘Algorithms and competition’, speech at Bundeskartellamt 18th Conference on Competition, Berlin 16 March 2017 ec.europa.eu.
[69] According to art. 25(1) of the General Data Protection Regulation (GDPR), the data protection by design approach ensures that privacy and data protection issues are considered during the design phase of any system, service, product, or process, and throughout its entire lifecycle. Specifically, the data controller must implement technical and organizational measures from the early stages of designing processing operations, so that the right to data protection is safeguarded from the outset.
[70] A Deng, ‘What Do We Know About Algorithmic Tacit Collusion?’ (2018) papers.ssrn.com 12.
[71] See V Cafaro ‘Algorithmic tacit collusion: a regulatory approach’ (2023) The Competition Law Review 9.
[72] As argued by Deng “the point is that no matter how complicated and incomprehensible the computerized decision process is, the outcome is always observable and can be interpreted by human decision-makers”. A Deng, ‘4 Reasons We May Not See Colluding Robots Anytime Soon’ (2017) papers.ssrn.com.