Wednesday, September 15, 2021
This paper provides a full characterization of the price effects of horizontal mergers in the Cournot model with heterogeneous firms and constant returns to scale. We show that the price change brought about by a merger depends only on the smaller merging firm's share and the number of firms, and is independent of the distribution of shares among the other firms. Price effects are determined by factors that are either directly observable by competition authorities or can be bounded under relatively mild assumptions on demand curvature or pass-through. Estimates based on concentration measures, by contrast, can be seriously misleading. We also provide closed-form solutions for calibration that approximate merger effects on the basis of simple pre-merger parameters.
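The paper's general result covers flexible demand, but the mechanics are easy to see in the special case of linear inverse demand, where the Cournot price has a closed form. The sketch below is my own illustration, not the paper's calibration formulas; the demand intercept and the costs are made-up numbers. It computes the pre- and post-merger price when two firms merge and, under constant returns, produce at the lower of their two marginal costs:

```python
# Cournot with linear inverse demand P = a - b*Q and constant marginal
# costs c_1..c_n. Summing the first-order conditions a - b*Q - b*q_i = c_i
# gives the closed-form equilibrium price P = (a + sum(c_i)) / (n + 1),
# which does not depend on the slope b. Valid at an interior equilibrium
# where every firm produces q_i > 0, i.e. P > c_i for all i.

def cournot_price(a: float, costs: list[float]) -> float:
    """Equilibrium price with linear demand P = a - b*Q (any b > 0)."""
    n = len(costs)
    p = (a + sum(costs)) / (n + 1)
    assert all(p > c for c in costs), "equilibrium is not interior"
    return p

def merger_prices(a: float, costs: list[float], i: int, j: int) -> tuple[float, float]:
    """Price before and after firms i and j merge; with constant returns
    the merged firm produces at the cheaper of the two marginal costs."""
    pre = cournot_price(a, costs)
    merged = [c for k, c in enumerate(costs) if k not in (i, j)]
    merged.append(min(costs[i], costs[j]))
    return pre, cournot_price(a, merged)

# Hypothetical market: choke price a = 100, three firms with costs 10, 20, 30.
pre, post = merger_prices(100.0, [10.0, 20.0, 30.0], 1, 2)
print(pre, post)  # 40.0 -> ~43.33: merging the two smaller firms raises price
```

In this linear example the price rises even though the merged firm is more efficient than its higher-cost partner; the paper's contribution is to characterize such effects for general demand in terms of the smaller merging firm's share and the number of firms.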
Tuesday, September 14, 2021
Background and objective
The objective of the conference, organized by the TSE Digital Center at the Toulouse School of Economics with the help of CEPR, is to discuss recent contributions to the understanding of the digital economy and its consequences for modern societies. Keeping the spirit of previous years, the conference will feature theoretical, econometric, experimental, and policy-oriented contributions in economics, as well as contributions from other social sciences and from computer and data science.
Toulouse School of Economics (new building)
1 Esplanade de l'Université
31080 Toulouse cedex 06
Conference secretariat: Florence Chauvet
The present chapter investigates the antitrust rules applicable to loyalty or fidelity rebate schemes under EU competition and US antitrust law. It finds that antitrust liability for a dominant company will more readily be established in the EU, where the applicability of economics-based tests is still being navigated. In the US, rebates offered by a monopolist will usually be found anti-competitive where they constitute predatory pricing, although they may also attract antitrust liability where they amount to exclusive dealing arrangements. This divergence can be explained by the different ideological underpinnings of the two jurisdictions. Overall, however, the (case) law on loyalty rebates remains in a state of flux in both jurisdictions. In recent years, the two jurisdictions have gradually moved toward somewhat greater convergence in their treatment of exclusivity rebates. At this point, however, both the US Supreme Court and the European Court of Justice will need to weigh in on the future of the antitrust assessment of loyalty-inducing rebates.
Monday, September 13, 2021
Current discussions about how to regulate platforms revolve around the extent to which existing frameworks can and should be applied to modern-day platform firms and business models. We outline and explain the economic and strategic features of platforms, comparing and contrasting them with utility industries often considered structurally similar. We then outline how these industries have been regulated and the rules for regulatory intervention, and assess how these and other approaches currently under discussion are likely to affect competition and innovation on modern-day platforms.
Hospitals anchor much of US health care and receive a third of all medical spending, including various subsidies. Nevertheless, some become insolvent and exit the market. Research has documented subsequent access problems; less is understood, however, about the broader implications. We examine over 100 rural hospital closures spanning 2005-2017 to quantify the effects on the local economy. We find sharp and persistent reductions in employment, but these are localized to health care occupations and largely driven by areas experiencing complete closures. Aggregate consumer financial health is only modestly affected, and housing markets were already depressed before the hospitals closed.
Sunday, September 12, 2021
Friday, September 10, 2021
Supply chains for many agricultural products have an hourglass shape: between a sizable number of farmers and consumers sits a smaller number of processors. The concentrated nature of the meat processing sectors in the United States implies that disruption of the processing capacity of any one plant, whether from accidents, weather, or, as recently witnessed, worker illness during a pandemic, has the potential to lead to system-wide disruptions. We explore the extent to which a less concentrated meat processing sector would be less vulnerable to the risks of temporary plant shutdowns. We calibrate an economic model to match the actual horizontal structure of the U.S. beef packing sector and conduct counterfactual simulations. With Cournot competition among heterogeneous packing plants, the model determines how industry output and producer and consumer welfare vary with the odds of exogenous plant shutdowns under different horizontal structures. We find that increasing odds of shutdown widen the farm-to-retail price spread even as packer profits fall, regardless of the structure. The results indicate that the extent to which a more diffuse packing sector performs better in ensuring a given level of output, and thus food security, depends on the exogenous risk of shutdown and the level of output desired; no horizontal structure dominates. These results illustrate the consequences of policies and industry efforts aimed at increasing the resilience of the food supply chain, and they highlight that there are no easy solutions to improving short-run resilience by changing the horizontal concentration of meat packing.
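The exercise can be miniaturized with a toy version of the model. The sketch below is an illustration under assumed linear demand and symmetric plants, not the authors' calibration of the beef packing sector. It computes expected industry output when each plant independently shuts down with some probability, comparing a concentrated four-plant structure with a diffuse eight-plant structure at the same marginal cost:

```python
from math import comb

# Toy Cournot model with linear inverse demand P = a - Q and symmetric
# plants at marginal cost c: the k surviving plants jointly produce
# Q(k) = k * (a - c) / (k + 1). Expected output averages Q(k) over the
# binomial distribution of survivors when each plant independently
# shuts down with probability p.

def cournot_output(k: int, a: float, c: float) -> float:
    return k * (a - c) / (k + 1) if k > 0 else 0.0

def expected_output(n: int, p: float, a: float, c: float) -> float:
    return sum(comb(n, k) * (1 - p) ** k * p ** (n - k) * cournot_output(k, a, c)
               for k in range(n + 1))

a, c = 100.0, 40.0  # hypothetical demand intercept and marginal cost
for p in (0.0, 0.1, 0.3):
    concentrated = expected_output(4, p, a, c)
    diffuse = expected_output(8, p, a, c)
    print(p, round(concentrated, 2), round(diffuse, 2))
```

In this symmetric toy case the diffuse structure always produces more in expectation; the paper's richer model, with heterogeneous plants, welfare measures, and packer profits, is what generates the more nuanced finding that no horizontal structure dominates.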
Thursday, September 9, 2021
U.S. antitrust law empowers enforcers to review pending mergers that might undermine competition. But there is growing evidence that the merger-review regime is failing to perform its core procompetitive function. Industry concentration and the power of dominant firms are increasing across key sectors of the economy. In response, progressive advocates of more aggressive antitrust interventions have critiqued the substantive merger-review standard, arguing that it is too friendly to merging firms. This Article traces the problem to a different source: the merger-review process itself. The growing length of reviews, the competitive restrictions merger agreements place on acquisition targets during review, and the targets’ resulting loss of strength harm competition and consumers. As a result, an enforcement regime designed to protect competition is damaging it instead. The rise of antitrust reverse termination fees (“ARTFs”)—payments from the acquirer to the target if the merger fails antitrust review—demonstrates the anticompetitive effect of the review process. This Article argues that these fees represent the parties’ negotiated prediction of the competitive costs to the target of entering the merger agreement (and therefore the competitive gains to the acquirer and other rivals in the relevant market). ARTFs also indicate the possibility of anticompetitive manipulation of the merger-review process. Knowing that reviews sometimes take over a year to resolve, acquirers can enter a merger agreement and use an ARTF to buy competitive peace—even when they expect the merger will be rejected—all the while harming consumers. Reform proponents have suggested several potential ways to shorten merger investigations, such as limiting enforcement agencies’ discovery demands, but these modifications only reduce the problem at the margins.
This Article proposes a more effective reform: a requirement that the antitrust enforcement agencies announce a group of highly concentrated markets in which they will challenge any proposed merger, unless one of the firms is failing. This strategy, which the antitrust agencies have employed in an ad hoc fashion in the past, will discourage anticompetitive mergers and eliminate lengthy reviews that harm consumers.
Wednesday, September 8, 2021
The probabilistic patent theory espoused by Carl Shapiro and Mark Lemley suggests that the lawful term of a patent is limited by the probability that the patent will be held valid and enforceable. For example, under this theory a patent with a 60% chance of being held valid and enforceable would lawfully grant 60% of a statutory patent term; any enforcement beyond that point would risk violating the antitrust laws.
This article—the first to meaningfully challenge the probabilistic patent theory in nearly 20 years—explains that Shapiro and Lemley’s theory has at least three fatal flaws: First, it depends on a “judicially-created” view of patents the Supreme Court has since rejected in Oil States Energy Services v. Greene’s Energy Group. Second, it mistakes a decrease in the value of property in light of litigation risk for a decrease in the ownership or scope of the property; as with all other forms of litigation regarding property, patent litigation may be “probabilistic” but the property in dispute is not. Third, because no patent is without some (often undefinable) level of risk, this theory would shorten the enforceable term of every patent—and would moreover do so to an indeterminable extent.
Finally, the article refutes the suggestion, accepted by the California Supreme Court, that the U.S. Supreme Court adopted the probabilistic patent theory in its 2013 decision in FTC v. Actavis, Inc. As the article demonstrates, Actavis instead adopted a theory based only on the probabilities of litigation, not the probabilities of a patent.
In this brief essay I set out my views on the way in which competition assessments should be conducted so that they are consistent with the standard of proof in EU competition cases and Easterbrook’s error cost minimization principle. My goal is to develop a normative benchmark against which to assess the European Commission’s actual decision-making practice. I conclude by discussing whether actual practice is consistent with such a benchmark.
Monday, September 6, 2021
Unlike in Rock & Roll, where the creativity of seminal groups and individuals peaks in their 20s and 30s, antitrust has a number of law stars in the same age range as Paul McCartney, Paul Simon, Bob Dylan, Jimmy Page, Eric Clapton, Debbie Harry, and Bruce Springsteen. The 70+ group in antitrust continues to produce important work. This is a time to salute some of these pioneers who continue to rock on and rock hard: Eleanor Fox (NYU), Harry First (NYU), Herb Hovenkamp (Penn), Steve Salop (Georgetown), Tim Muris (George Mason), Doug Ginsburg (George Mason), among others. They are just as creative and productive now as they were thirty years ago.
The Collusive Efficacy of Competition Clauses in Bertrand Markets with Capacity-Constrained Retailers
We study the collusive efficacy of competition clauses (CCs) such as the meeting-competition clause (MCC) and the beating-competition clause (BCC) in a general framework. In contrast to previous theoretical studies, we allow for repeated interaction among the retailers and heterogeneity in their sales capacities. In addition, the selection of the form of the CC is endogenized. The retailers choose among a wide range of CC types, including conventional ones such as the MCC and BCCs with lump-sum refunds. Several common statements about the collusive (in)efficacy of CCs cannot be upheld in our framework. We show that in the absence of hassle costs, MCCs might induce collusion in homogeneous markets even if they are adopted by only a few retailers. If hassle and implementation costs are mild, collusion can be enforced by BCCs with lump-sum refunds. Remarkably, these findings hold for any reasonable rationing rule. However, a complete specification of all collusive CCs is in general impossible without further reference to the underlying rationing rule.
Friday, September 3, 2021
This paper describes the role that data portability and interoperability measures can play in promoting competition both within and among digital platforms. In particular, these measures can address consumer lock-in, promote unbundling, and enable multi-homing. However, they will not be effective in every market, and in some cases may unintentionally hamper competition.
The implementation of portability and interoperability measures for digital platforms is still limited in some cases and at an early stage in others. However, these limited experiences point to some lessons learned. In particular, the objective of portability and interoperability measures matters. When implemented with objectives other than competition (such as data protection), these measures may not have procompetitive impacts unless designed with market dynamics in mind. Further, these measures may have unintended consequences if they create new entry barriers or entrench incumbent technologies. In addition, implementation mechanisms will determine the effectiveness of these measures; for example, oversight by a competition authority or an independent third party may be needed to set interoperability standards and adjudicate disputes.
Looking forward, the competition concerns motivating data portability and interoperability may be observed in a growing array of sectors, ranging from automobiles to finance. Promoting competition in the design of these measures, or proposing their implementation in order to encourage competition, may therefore be of increasing importance for the competition policy community.
Thursday, September 2, 2021
In this note, we investigate the causal link between market concentration and markups in a retail setting. We study the Washington retail cannabis industry, which features exogenous variation in market concentration that resulted from retail licenses being awarded via lotteries. We observe wholesale prices and can therefore directly compute transaction-level markups. We find a negative causal relationship between markups and concentration in this setting, where more concentrated markets have significantly lower markups, retail prices, and wholesale prices. The negative effect of concentration on prices provides direct evidence of countervailing buyer power by retailers. These results highlight the value of using industry-specific data and rich models of competition to advance the debate on concentration and markups.
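Because both wholesale and retail prices are observed, transaction-level markups require no estimation step; they follow directly from the data. A minimal sketch of that computation follows, where the transactions, market names, and the Lerner-style price ratio are illustrative assumptions on my part, not the authors' dataset or definition:

```python
# Transaction-level markup (p - w) / p from observed retail price p and
# wholesale price w, then averaged within each market.
from collections import defaultdict

def lerner(retail: float, wholesale: float) -> float:
    return (retail - wholesale) / retail

# Hypothetical transactions: (market, retail price, wholesale price).
transactions = [
    ("Seattle", 12.0, 6.0),
    ("Seattle", 10.0, 6.0),
    ("Spokane", 10.0, 7.0),
]

by_market = defaultdict(list)
for market, p, w in transactions:
    by_market[market].append(lerner(p, w))

avg_markup = {m: sum(v) / len(v) for m, v in by_market.items()}
print(avg_markup)  # {'Seattle': 0.45, 'Spokane': 0.3}
```

With markups computed this directly, the remaining empirical work is relating them to the lottery-induced variation in market concentration.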
Wednesday, September 1, 2021
The common regulatory perspective on the relationship between data, value, and competition in online platforms has increasingly centered on the volume of data accumulated by incumbent firms. This view posits the existence of "data network effects," where more data leads to product improvements, which in turn lead to additional users and more data. In particular, this has raised concerns that incumbents' data advantage creates an insurmountable barrier to entry and leads to winner-take-all outcomes in online platforms.
However, this perspective generally does not reflect how data creates value in practical settings. More recent work across economics, management science, and engineering shows that a variety of factors affect the value of data and that the implications for competition are far more complex and subtle. The framework in this paper presents four key factors that can influence the value firms derive from data: data quality, the scale of data, the scope of data, and data uniqueness.
Applying the framework to Netflix, Waymo, and the online advertising industry provides compelling evidence that incumbent data advantage, while generating value for innovation and for the consumer experience, does not necessarily lock out competitors and is not determinative of success. These examples illustrate that data can often serve as a catalyst for innovation that benefits both consumers and the broader ecosystem. The extent to which data accumulation can provide actual incremental value, and whether this is a cause for concern in enabling healthy competition, requires a case-by-case evaluation using the framework, as these factors depend significantly on the domain and context.