Antitrust & Competition Policy Blog

Editor: D. Daniel Sokol
University of Florida
Levin College of Law

A Member of the Law Professor Blogs Network

Wednesday, May 23, 2012

EU Competition Law and Economics

Posted by D. Daniel Sokol

Damien Geradin (Covington & Burling LLP & Tilburg University), Anne Layne-Farrar (Compass Lexecon), and Nicolas Petit (University of Liege) have a new treatise on EU Competition Law and Economics.

ABSTRACT: This is the first EU competition law treatise that fully integrates economic reasoning in its treatment of the decisional practice of the European Commission and the case-law of the European Court of Justice. Since the European Commission's move to a "more economic approach" to competition law reasoning and decisional practice, the use of economic argument in competition law cases has become an increasingly strict requirement. Many national competition authorities are likewise moving away from a legalistic analysis of a firm's conduct to an effects-based analysis of it; indeed, most competition cases today involve teams composed of lawyers and industrial organisation economists.

Competition law books tend either to give economics only cursory coverage, to confine it to separate sections, or to assume far too technical a level of economic understanding. Ensuring a genuinely integrated approach to legal and economic analysis, this major new work is written by a team combining the widely recognised expertise of two competition law practitioners and a prominent economic consultant. The book contains economic reasoning throughout in accessible form and, more pertinently for practitioners, examines economics in the light of how it is used and put to effect in the courts and decision-making institutions of the EU. A general introductory section sets EU competition law in its historical context. The second chapter goes on to explore the economic foundations of EU competition law. What follows, then, is an integrated treatment of each of the core substantive areas of EU competition law, including Article 101 TFEU, Article 102 TFEU, mergers, cartels and other horizontal agreements, and vertical restraints.

May 23, 2012 | Permalink | Comments (0) | TrackBack (0)

Tuesday, May 22, 2012

Competition and Online Search - Blog Symposium Recap

Posted by D. Daniel Sokol

The blog symposium on the antitrust law and economics of online search over the past two days has been great. To review the posts, please see the links below:

Mark Jamison, University of Florida, Warrington College of Business
Adam Thierer, George Mason
Eric Clemons, Wharton School of the University of Pennsylvania
Dan Crane, Michigan Law School
James Grimmelmann, New York Law School
Marina Lao, Seton Hall
Bob Litan, Kauffman Foundation
Eugene Volokh, UCLA
Marvin Ammori, Center for Internet and Society at Stanford Law School
Mark Patterson, Fordham Law
Frank Pasquale, Seton Hall
Allen Grunes, Brownstein Hyatt

Tomorrow I invite the posters to comment.

May 22, 2012 | Permalink | Comments (0) | TrackBack (0)

Is there a basis in antitrust law for requiring ‘neutral’ search results? Comments by Marina Lao

Search, “Essential Facilities,” and the Duty to Deal

Posted by Marina Lao (Seton Hall)

Do the essential facilities doctrine and the antitrust duty to deal provide an antitrust basis to prohibit search “bias,” as some have suggested? In antitrust discourse, search bias is generally understood to mean a search engine’s favoring its own content over that of a competitor in an adjacent market in the organic search results. It would, for example, involve Google automatically returning a Google map in response to a search for “McDonald’s,” suggesting that the user wants directions, instead of applying some sort of “neutral” standard to determine whether a Mapquest or Bing map should be displayed instead. In an article on “Search, Essential Facilities, and the Duty to Deal” that I hope to be able to post on SSRN shortly, I argue that these principles simply do not “fit” in the search context.

The desire to invoke the duty to deal and essential facility is understandable, since Section 2 of the Sherman Act requires not only possession of monopoly power but also exclusionary conduct. And seeking the competitive advantages that flow to a firm from integration is not, by itself, considered an exercise of monopoly power in an antitrust sense. In other words, absent an antitrust duty to deal (whether or not involving an essential facility), Google’s favoring its own content would not constitute exclusionary conduct that could give rise to Section 2 liability. Though all three major search engines tend to give preference to their own content, I focus my comments only on Google, because Bing and Yahoo! do not have sufficient shares of general search traffic to have any Section 2 exposure.

Even before Trinko expressed its disfavor of the essential facility doctrine, courts applied it only sparingly to mandate access. Thus, “essentiality” is strictly construed. Alaska Airlines and other cases have said that a facility controlled by a single firm will be deemed essential only if control “carries with it the power to eliminate competition in the downstream market.” This strict construction is appropriate, since antitrust law generally expects firms to compete with their own resources, not a competitor’s. In my article, I discuss in more detail why Google’s search engine could not reasonably be considered critical to competitive viability in a vertical market. While being visible in a search engine’s organic results is obviously a very attractive way to reach potential customers, it is not essential. There are other ways to reach potential customers, though they may not be as good as inclusion in the organic results, and they are certainly not free.

Another problematic issue is that there may not be any denial of access in the first place. Denial of access is a non-issue in most essential facility cases--the monopolist has either denied access or it hasn’t. But in the situations where a Google rival is said to be “denied access” and foreclosed from competition, the competitor websites are, in fact, still readily accessible to anyone using various keywords on Google search. For example, enter “map sites” into the Google search box, and Mapquest will top the organic results returned by Google. It is just that, for certain queries, Google may list its own content prominently whereas a rival site may not be shown or may be ranked lower. For example, search for “McDonald’s,” and a Google map will appear first in the search results, followed by a link to Mapquest. Thus, the implicit premise of the complaints about denial of access seems to be that the organic results list is the alleged essential facility, and that access requires nothing less than a high ranking for any search term that might reasonably direct traffic to the competitor site. Given how strictly “essentiality” is defined, there is no basis for construing denial of access this broadly.

This takes us to a related issue that is often overlooked: the feasibility of sharing the facility (or its “nonrivalrousness”). The essential facility doctrine does not require a monopolist of even a clearly essential facility to “share” it with rivals if sharing would be impractical or would detract from the monopolist’s use. In the case of search, is sharing possible? It depends on which is the alleged essential facility. If it is the search engine, then of course it can be shared, but in that case there is no denial of access. If the ranked results list is the alleged essential facility, and being denied a coveted top ranking is the alleged denial of access, then the “facility” is rivalrous—there is only one top rank, one second rank, and so forth. In that case, it would seem that Google would have no antitrust obligation to give up any top slot in the results to a competitor if it needs those slots for its own promotion. The law is quite clear that the monopolist does not have to step aside to let a competitor use an essential facility if the facility cannot accommodate both. And if there is no obligation to do that, there would logically be no need to adopt some “neutral” standard to allocate the scarce resource.

These are but a few of the intractable conceptual problems, raised in my article, with trying to fit the essential facility doctrine to search. Finally, I do not mean to suggest that how Google runs its search engine can never give rise to antitrust liability. My point is merely that search bias, or a search engine’s favoring its own content, cannot alone provide the basis for Section 2 liability. If there were evidence, for example, that Google specifically demoted a website for doing business with Google’s rivals, such as for advertising on Bing or Yahoo!, that would go beyond a pure refusal to deal and would be more akin to Lorain Journal.

May 22, 2012 | Permalink | Comments (0) | TrackBack (0)

Are there practical remedies that wouldn’t involve federal regulation of search results? Comments of Frank Pasquale

Posted by Frank Pasquale (Seton Hall)

Could Google make a mistake? Could the company itself, or a small group of its thousands of employees, act with motives that diverge from its publicly stated mission? If such actions hurt the owners of a website, should they in some cases be able to seek recourse from some other entity than Google itself? I believe the answer to all these questions is yes. Answering them helps us confront issues of discrimination, malfeasance, nonfeasance, and technological due process in a rapidly changing online environment. They also suggest how Google might better respond to ongoing investigations and concern about its practices.

Imagine that you own Company A, and your main competitor is the persistent (but demonstrably worse) Company B. In searches for the products you sell, you reliably end up in the top five results in the studies you’ve commissioned; your competitors at Company B are on the fifth or sixth pages.* What happens if Google purchases Company B, and immediately after the purchase, Company B appears to dominate the first page of results, and your company has been relegated to later pages? Stipulate that repeated appeals to webmaster forums and other mechanisms of corporate due process fail. Should there be some type of remedy?

This may seem like a crude and overdrawn hypothetical, but it is a good heuristic for testing the extraordinary skepticism toward independent evaluation of Google's performance and motives expressed in the Ammori/Pelican paper prepared for Google. (For those who prefer more subtle scenarios, please consult my discussion of Google's potential bias against rival video sites after it purchased YouTube, in the article Internet Nondiscrimination Principles. The recently released Statement of EC VP Almunia on the Google antitrust investigation also evokes relevant scenarios.) After reviewing Ammori & Pelican’s work, I worry they would oppose any remedy at law for the owner of Company A. I was particularly troubled by their critique of third-party investigative committees (be they government agencies or nongovernment organizations) that would be able to understand how decisions to change ranking methods originated and how they were implemented.

Qualified Transparency as Remedy

Changes in ranking methodology are rigorously tested and documented. When a website suddenly tumbles dozens of places, and has a plausible story about being targeted as a potential rival of an established Google interest or a space the company is planning to invest in, is it too much to ask for some third party to review the particular factors that led to the demotion? Given how quickly a sudden drop can occur, we are not discussing an infinite variety of changes to be reviewed. Nor are we demanding the disclosure of the entire algorithm to a third-party auditor, or even the revelation of the relevant changes in the algorithm to the party involved, much less the general public. In my early work on this topic, my co-author and I even pointed to the precedent of the supersecretive FISA court as a model, to underscore how much we respected the intellectual property rights of the company (and particularly the value of trade secrecy).

As Mark Patterson has shown, the Dodd-Frank Act already requires far more disclosure from rating agencies. Patterson makes these key points:

Google has been alleged to have manipulated its search results (or ratings) in much the same way that the rating agencies have been alleged to have manipulated credit ratings. Although Google is alleged to have manipulated the ratings of competitors (e.g., potentially competing “vertical” search engines) and credit rating agencies are alleged to have manipulated the ratings of customers (issuers of financial products), the basic phenomenon is the same. . . . lack of transparency in quality can give an information provider market power, as does an absence of price transparency.

[A] requirement imposed on credit-rating agencies in the recent Dodd-Frank financial reform legislation is also well-suited to address competition issues [involving information providers]. In Dodd-Frank, Congress directed the SEC to prescribe rules that, when credit-rating agencies make ‘material changes’ to ‘rating procedures and methodologies,’ ensure that: ‘the changes are applied consistently to all credit ratings to which the changed procedures and methodologies apply. . . and [the CRA] publicly discloses the reason for the change . . .’

Such requirements do not impinge on the information providers’ editorial judgment; they simply require that some information about it be given.

I have discussed other potential models for detecting anti-competitive biases. In contexts ranging from privacy rights to false advertising, authorities in the US and Europe have recognized the need for fast, flexible “quick looks” at suspect business practices. In one of the scenarios I have mentioned involving the FTC and the NAD, 95% of problematic situations are quickly resolved in a self-regulatory fashion. This is not a recipe for the litigation nightmares industry advocates so frequently invoke.

The institutions devoted to fair data practices in Europe that I mention in the Northwestern piece may at first seem inapposite here; complainants like Foundem want more, not less, exposure. However, at bottom the two types of disputes share a critical element: each aggrieved party may feel that the correct data about it has not been logged, used, or processed accurately. We now need to decide whether some entity outside of Google, whether as a result of competition law complaints, consumer protection law, or other principles of commercial fairness, has authority to review and offer its judgment on such questions.

What that entity ultimately does about its findings is not my central concern at this time. Simply informing consumers about potential biases would be a valuable public service. Search is often a credence service, and in areas ranging from health care to law we recognize the need for third parties to verify and validate the quality of credence services provided. In past work, I have proposed a limited right of annotation (as minor as an asterisk) for certain entities aggrieved by problematic search engine rankings.

Both disclosure and annotation remedies can be implemented in many different ways, with whatever depth and breadth the situation may warrant. As Ayres and Braithwaite demonstrated two decades ago, “government can support and encourage industry self-regulation.” Is it too much to ask for some entity outside of Google to be able to “look under the hood” and understand what is going on in plausibly contested scenarios? If so, such an abdication of administrative responsibility in the face of technical complexity bodes ill not merely for a level competitive playing field, but for democratic and judicial processes themselves. Antitrust law flirts with irrelevance if it disdains the technical tools necessary to understand a modern information economy.

Avoiding Digital Feudalism

Disclosure and auditing are not merely remedies; they may even be considered salutary extensions of current business practices. Search engines rank websites at least in part by algorithmically processing signals from the site (such as the number of links to it, the number of links to those links, whether users have clicked on the site when it was ranked in search results previously, etc.). Let's say that, for whatever reason, Google's technology fails to pick up on all the signals relating to a given website. The webmaster for the site might complain, and may well get a response. Establishing webmaster forums that allow for that type of dialogue between the ranked and the ranker has, to this point at least, seemed like a fair and responsible business practice to Google itself.
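To make that signal-processing idea concrete, here is a toy sketch of a ranking function. It is purely illustrative (the signals and weights are invented for this post, not drawn from Google's actual algorithm), but it shows how a glitch that silently drops one signal can demote a site even though nothing about the site itself has changed:

```python
import math

# Toy ranking function. The signals and weights below are invented for
# illustration; they are not Google's actual algorithm.
def rank_score(inbound_links: int, links_to_linkers: int, click_rate: float) -> float:
    """Fold a few hypothetical relevance signals into a single score."""
    link_signal = math.log1p(inbound_links)      # diminishing returns on raw link counts
    depth_signal = math.log1p(links_to_linkers)  # "the number of links to those links"
    return 0.5 * link_signal + 0.2 * depth_signal + 3.0 * click_rate

# If the engine fails to pick up one signal (here, historical click data),
# the site's score drops even though the site itself is unchanged:
print(rank_score(1200, 40000, click_rate=0.31))  # all signals observed
print(rank_score(1200, 40000, click_rate=0.0))   # click signal silently lost
```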

If Google thought of its rankings as a kind of virtual world, whose members have essentially accepted (via terms of service) the absolute sovereignty of the ruler of that territory in the metaverse, such a dialogical process would make little sense. In the digital feudalism of virtual worlds, no one has a right to question the unilateral decision of the ruler. Errors would have meaning only as lost profit opportunities, not as failures to run a competition properly. At its best, Google sees itself as the runner of a fair competition, rather than as a purely profit-maximizing entity.

Google is not alone in exercising power over the internet. Apple, Facebook, Twitter, and Amazon can also rely on opaque technologies, sometimes leaving users in the dark as to exactly why any given app, story, or book is featured at a particular time. Though we are focusing on Google now, the problems raised in its various antitrust disputes are not isolated. Rather, we should expect any company aspiring to order vast amounts of information to try to keep its methods secret, if only to reduce controversy and foil copycat competitors. However wise this secrecy may be as a business strategy, it devastates our ability to truly understand the social world Silicon Valley is creating. Moreover, like a modern-day Ring of Gyges, opacity creates ample opportunities to hide anti-competitive, discriminatory, or simply careless conduct behind a veil of technical inscrutability. Qualified transparency can address these concerns while respecting intellectual property rights.

It is disappointing to see Marvin Ammori, who has done so much to bring carriers’ troubling practices to light (and to justice), ignore parallel problems in other parts of the internet. If policymakers accept the arguments he is making now with respect to Google, he may undermine all he has accomplished in the realm of net neutrality. Bottlenecks at any layer of the Internet create opportunities for the exercise of undue power over the flow of information and ideas.

* I don’t even raise the possibility of the site owner knowing whether it is in the top 5 results generally, because, in an era of personalization, only Google can know that. At present, we can only hope for relatively good sampling of sites.

May 22, 2012 | Permalink | Comments (0) | TrackBack (0)

“Search Neutrality” and Network Neutrality: Birds of a Very Different Feather - Comments by Marvin Ammori

Posted by Marvin Ammori, Center for Internet and Society at Stanford Law School

Last week, I spoke on an excellent panel with law professors Eugene Volokh, James Grimmelmann, Dawn Nunziato, and Frank Pasquale. We discussed remedies for “search bias” alleged in the Google antitrust inquiry.

“Search neutrality” came up, as usual, as a proposed remedy, with Grimmelmann devoting his talk to critiquing the concept and Frank devoting part of his talk to defending it. One speaker suggested that search neutrality and network neutrality have a lot in common – and that people who support network neutrality should also support search neutrality.

But very few network neutrality proponents support search neutrality or have advocated for it. While some suggest cynical reasons for supporting the latter, there are actually enormous, principled distinctions between network neutrality and search neutrality. Despite the deliberate linguistic similarity, the concepts have about as much in common as George Washington and George Washington Carver.

The goal of this post is to explain some of those distinctions. The economics and policy considerations for network neutrality and “search neutrality” are very different. I think these distinctions help explain why network neutrality has had (and continues to have) enormous support in the consumer, civil liberties, tech, and user communities, while “search neutrality” has had (and should continue to have) such minimal support.

Definitions.

Network neutrality is a requirement imposed on ISPs, like cable and phone companies, forbidding them from blocking or discriminating against websites or software online.

“Search neutrality” would be a requirement that search engines (like Bing) cannot discriminate against sites that compete with the search engine or with another site owned by its parent company (for Bing, Microsoft). But the usual example is Google search.

Distinctions:

Here are several key distinctions, some of which are “distinctions in kind” and some “distinctions in degree.”

Market Problem:

Network neutrality aims to solve a problem that the market cannot solve because of the economics and regulation of broadband delivery. Search neutrality does not.

1. Terminating access monopoly. Search engines do not have an appreciable terminating access monopoly. ISPs do. In the broadband context, Netflix or Amazon cannot reach Luke Pelican as a consumer unless they go through his ISP (Comcast). Luke’s ISP has a monopoly on access terminating to him. Even though Luke has an initial choice among ISPs, he has chosen Comcast, is locked into a multi-month contract, and would find it costly to switch merely because of one site like Netflix or Amazon. By contrast, no search engine has a terminating access monopoly over Luke regarding websites. Luke can simply punch in the URL for Bing, Yelp, MapQuest, or any other company. Those companies can advertise online and offline to let people know that they offer specialized search results. And while Luke may usually use Google, he can switch at any time, without breaking a contract or paying any fees. He can try DuckDuckGo, Bing, Yelp, or any other search tool.

2. Economic barriers to entry. The barriers to entry and competition are nearly insurmountable in public networks but not in search. Reflecting the huge costs of entering the market for networks, public networks (such as telephone lines and cable lines) were generally built with government-backed monopolies and guaranteed rates of return during eras of public utilities regulation. Today’s networks were not built in competitive environments with risk capital—their return on capital was guaranteed by government monopoly regulation. To this day, many of these networks continue to receive subsidies through a billion-dollar Universal Service Fund and other subsidy programs such as accelerated depreciation, suggesting that many regions of the nation cannot sustain one carrier, let alone multiple carriers. As Harvard’s Susan Crawford has argued, it is unlikely that wireless networks—which also have huge barriers to entry in terms of available spectrum licenses—can compete with cable wireline networks, as demonstrated by the Verizon-cable deal. Search engines like Yahoo! and Google did not require such expenditures. Google was started by a small group of students working out of dorm rooms with limited resources who attracted risk capital from investors. Risk capital is available from a host of sources for technology entrepreneurs. Even now, companies like Blekko and DuckDuckGo take on Google and Bing with risk capital from venture capitalists and angel investors. Name a new cable or phone company building a network (other than Google itself in Kansas City).

3. Government-imposed entry barriers. To start a wireless network, you need licenses from the government, and there is a finite number of such licenses. A recent episode involving a hedge-fund-backed company called LightSquared emphasized the difficulty of getting government permission to provide wireless broadband to compete with the existing giants AT&T and Verizon. For wired networks, the government-imposed barriers include legal hurdles to accessing rights-of-way to dig up streets and to attach wires to utility poles, franchise requirements in localities--and the government inaction of not regulating the “special access” terms for backhaul for entrants, even though the government regulates the access to utility poles that now benefits incumbents. When it comes to search, there are very few government barriers to entry limiting the potential for competitors to enter and exit the market.

4. Switching costs. The market for ISPs has more friction than the market for search. Consumers face large costs if they switch from one ISP to another because they want to reach a site that is being discriminated against. Consumers usually have to pay early termination fees to get out of long contracts. For search, users can just click to another search engine. The main switching costs in search involve changing the default search engine in the browser on your laptop or smartphone and getting used to the interface of another search engine. Neither of these costs is particularly high.

5. Public investment. Similarly, the market for broadband is already deeply affected by government investment, suggesting that government may attach conditions to its subsidies. Some have argued that since the public largely paid for the public Internet networks of the cable and phone companies, access to the Internet should be provided to the public on a nondiscriminatory basis. Indeed, the stimulus funds for broadband had such strings attached—non-discrimination rules for those who accepted funds. The public does not subsidize search engines through tax dollars.

Digital Industrial Policy:

Not only is the market problem different for broadband and search, but most Americans (from what I can tell) want different things from broadband and from search, as a matter of “digital industrial policy.”

6. Interconnection and balkanization. One argument for network neutrality is ensuring “seamless” and complete interconnection between communications networks and the applications riding atop them. (Kevin Werbach’s Only Connect focuses on this argument, derived from 47 U.S.C. 251.) The Internet, technically speaking, is in fact a system of interconnected networks, hence the name Internet. If some ISPs block some sites, and others discriminate against other technologies, then the “Internet” would be, at best, a set of only partially interconnected networks. This would mean that the Internet in China would be different from the Internet in the US, and the Internet on Comcast would be different from the Internet on AT&T. This would impose far more costs on developers of applications and their users, and make users unable to access all the speech that is available on the Internet without discrimination. Network neutrality addresses that problem by ensuring that all networks interconnect seamlessly, without discrimination that balkanizes. Search and “search neutrality” do not implicate the same interconnection issues. There is no longstanding government policy of interconnection suggesting that search engines should “interconnect” (whatever that would mean here) with each other or with specialized competitors. Nor is there a strong economic or democratic argument for such interconnection.

7. General-purpose technology. As a matter of digital policy, many argue that the Internet should remain a “general-purpose technology,” as such technologies have a disproportionate impact on creating economic growth. The Internet was not traditionally optimized for any specific purpose but for general purposes. Each web “application” literally applies the Internet’s general-purpose technology in a new way. This is perhaps the main reason that the Internet has supported such economic and democratic innovation. Without network neutrality, broadband providers could make their networks more specialized or optimized for specific purposes rather than for general purposes. As a matter of digital policy, many oppose that. Whatever one might think of search, nobody thinks of it as a general-purpose technology that can support the nearly infinite number of applications that the Internet can support. Search, while extremely valuable and used for many purposes, is pretty much a single-purpose technology.

8. Historical success. The protocols and business practices underlying the Internet have reflected principles of non-discrimination (now often called the broad end-to-end principle and expounded in the classic book on network neutrality). Meanwhile, the entire point of search from the very beginning has been to discriminate. The very purpose of a search engine is to sift through information and deliver the most relevant information to the user. By their nature, search engines must discriminate; otherwise they provide little value to users. Communications networks have had nondiscrimination policies since 1910 for telephone lines, and earlier for telegraph and postal networks. Sometimes people discuss this in terms of layers—US policy has been to regulate the physical layer with nondiscrimination rules but not the content layer, and it has generally worked. Indeed, it is because of nondiscrimination rules at the physical layer (e.g., network neutrality) that the upper layers can be unregulated and competitive (e.g., no need for search neutrality).

9. Definitional/conceptual issues. It is not clear what the “industrial policy” for search neutrality would be. Because search is based on discriminating among sites to choose and rank the first, second, and third results, “search neutrality” is something of an oxymoron that has almost a dozen definitions (as analyzed by James Grimmelmann and Eric Goldman). Meanwhile, network neutrality has been defined in principle, beginning with the AT&T-BellSouth merger and continuing through the FCC’s most recent orders.

In short, network neutrality and “search neutrality” address different problems and do so in different ways for different purposes.

Those who defend search neutrality cannot merely analogize to network neutrality—the analogy fails pretty decisively.

Each concept must stand or fall on its own arguments and merits, not on linguistic similarities. As these distinctions suggest, there are many reasons why network neutrality would have so much popular and expert support—and why “search neutrality” has so little.

Different Supporters and Opponents.

The godfather of “search neutrality” is Frank Pasquale, a beloved, polymathic law professor at Seton Hall (and a friend), who often steps into the lion’s den with antitrust economists to argue for broader conceptions of the good than mere American-centered efficiency. He co-authored perhaps the leading article on search neutrality and a follow-up arguing for a government-funded search engine that might obviate the need for search neutrality, and has increasingly staked out a position focused on transparency and disclosure arguments rather than conduct remedies. Opponents of search neutrality include New York Law School professor James Grimmelmann and Santa Clara Law School professor Eric Goldman, who have argued, among other things, that “search neutrality” is incoherent, undefined, and self-contradictory. I don’t know for sure, but I am guessing all three support some version of network neutrality.

At the same time, no nonprofit advocacy organization that fought for network neutrality has spoken out in favor of “search neutrality,” to my knowledge. I am not sure which consumer groups, if any, advocate for search neutrality today. The “advocates” for search neutrality in Washington, DC, have at times seemed purely strategic and half-hearted. In the FCC’s network-neutrality proceeding, cable and phone companies that oppose network neutrality, such as AT&T and Time Warner Cable, cynically invoked “search neutrality” as a bogeyman to distract from the core debates in those proceedings. (See filings here, here, and here for examples.) Today, the leading corporate advocates for “search neutrality” are, understandably, a coalition of companies called “FairSearch” that see themselves as competing with Google and are arguing for a “search neutrality” requirement to be imposed on it. More specialized search companies in that coalition, such as Yelp, MapQuest, TripAdvisor, and Foundem (a product search site or, depending on your viewpoint, a spammy mirror site), have alleged discrimination, either through the reduced ranking of their own sites or the elevated ranking of Google search products. The coalition also includes Microsoft, which owns the rival search engine Bing and is not necessarily alleging search discrimination against its own Bing products in Google Search. Moreover, studies suggest that Microsoft’s Bing and Google’s search both return their own affiliated sites with similar prominence within search results. So Microsoft’s argument on “search neutrality,” like those of the cable and phone companies, comes off as somewhat more strategic than principled—since Bing does not profess to offer “neutrality” vis-à-vis specialized search providers.

[Disclosure, as usual: I am one of Google's many policy advisors on these issues but don't speak for them.]

May 22, 2012 | Permalink | Comments (0) | TrackBack (0)

Is search protected by the First Amendment? Comments by Eugene Volokh

Posted by Eugene Volokh

The argument in the First Amendment Protection for Search Engine Search Results white paper (which was written by me but commissioned by Google) is simple: Once, the leading sources to which people turned for useful information were newspapers, guidebooks, and encyclopedias. Today, these sources also include search engine results, which people use along with other sources to learn about news, local institutions, products, services, and more. Then and now, the First Amendment has protected all these forms of speech from government attempts to regulate what they present or how they present it.

Google, Microsoft’s Bing, Yahoo! Search, and other search engines are speakers. First, they sometimes convey information that the search engine company has itself prepared or compiled. Second, they direct users to material created by others (and such references are themselves constitutionally protected speech).

Third, and most valuably, search engines select and sort the results, aiming to give users what the search engine companies see as the most helpful and useful information. (That’s how each search engine company tries to keep users coming back to it rather than to its competitors.) This selection and sorting is a mix of science and art: It uses computerized algorithms, but those algorithms themselves inherently incorporate the search engine company engineers’ judgments about what material users are most likely to find responsive to their queries.

In this respect, each search engine’s editorial judgment is much like many other familiar editorial judgments -- newspapers’ judgments about which topics to cover, which wire service stories to run (and where), and which columnists to include; guidebooks’ judgments about which local attractions to mention; the judgment of sites such as DrudgeReport.com about what sites to link to; and more.

Thus, for instance, when many newspapers chose to publish TV listings, they were free to choose to do so without regard to whether this choice undermined the market for TV Guide. Likewise, search engines are free to include and highlight their own listings of (for example) local review pages even though Yelp might prefer that the search engines instead rank Yelp’s information higher. And this First Amendment protection is even more clearly present when a speaker, such as a search engine, makes not just the one include-or-not editorial judgment, but rather many judgments about how to design the algorithms that produce and rank search results that -- in the search engine engineers’ opinion -- are likely to be most useful to users.

Indeed, two federal court decisions (Search King, Inc. v. Google Technology, Inc. and Langdon v. Google, Inc.) have already held that search results, including the choices of what to include in those results, are “entitled to full constitutional protection.” And this conclusion is compelled by Supreme Court precedents. Internet speech, and interactive speech, is fully constitutionally protected. Facts and opinions on nonpolitical questions are likewise fully protected, as are choices about how to select and arrange the material in one’s speech product.

This full protection remains when the choices are implemented with the help of computerized algorithms. Those algorithms represent the choices of their human authors. The algorithms produce results that are read by human readers; the First Amendment value of speech stems from the value of the speech to listeners or readers as well as from the value of the speech to speakers. And the objections to Google’s placement of its thematic search results arise precisely because Google employees are said to have made a conscious choice to include those results in a particular place.

Moreover, antitrust law can no more trump this constitutional protection than can other laws that are aimed at protecting the supposed “fairness” or “neutrality” of speech. In the era before the Internet, many towns had only one newspaper that had a practical monopoly on text news coverage. But even then the Court stressed that the government may not “compel[] . . . a newspaper to print that which it would not otherwise print” and that the newspaper maintained its rights to select what to include and what to exclude “no matter how secure [its] local monopoly.”

The same logic applies to search engines -- but more so. There are no “one-search-engine towns”: All Internet users can quickly switch search engines if they find that their current search engine provides coverage that they see as unfair or incomplete.

Most of us started out by using search engines other than Google; we switched to Google because we had heard that it provided superior results. We can easily switch away if we conclude the results are no longer satisfactory. This user power -- and not governmental coercion -- is the proper remedy for any perceived unfair selection by search engines, and the proper deterrent to such supposed unfairness. (For more on this, and on the other points I mention, see the white paper itself, which has more detailed arguments, citations, and in particular a discussion of Lorain Journal Co. v. United States and other First Amendment and antitrust cases in Part IV.)

This conclusion is consistent with 47 U.S.C. § 230’s protection of search engines (among others) from civil liability. The premise of § 230 is that online content providers are entitled both to immunity from liability for other people’s speech (§ 230(c)(1)) and to the right to select what speech to include (§ 230(c)(2)).

Indeed, § 230 was enacted in response to a court decision (Stratton Oakmont v. Prodigy) that concluded that online organizations had to choose whether to be (1) common carriers, immune from liability but barred from making editorial judgments, or (2) publishers, entitled to make editorial judgments but subject to liability for others’ speech. Congress specifically intended to overrule that decision, and to make it easier for content providers -- such as search engines -- both to make editorial judgments of their own and to avoid civil liability for the speech of others.

May 22, 2012 | Permalink | Comments (0) | TrackBack (0)

Monday, May 21, 2012

Is there a basis in antitrust law for requiring ‘neutral’ search results - Comments of Allen Grunes

Posted by Allen Grunes (Brownstein Hyatt)

As we get closer to the Federal Trade Commission making a decision on its Google investigation, both the FTC and Google have been sending public signals to each other. Last month the FTC retained an outside trial lawyer, Beth Wilkinson, signaling that it was serious about the possibility of litigating. The parallel to DOJ retaining David Boies to litigate the Microsoft case was clear, unmistakable, and probably intentional.

In an interesting twist, Google has responded with a new wave of scholarship attacking the legal bases of a possible FTC action from a variety of angles. It is remarkable to note the number of law professors who are now doing research and writing articles “supported by Google.” The law professors Google has financially supported are on the right, the left, and the center of academic antitrust.

I bring this up because, at least in my experience, it is a novel tactic. It reminds me of the massive lobbying effort that AT&T undertook in connection with its attempted takeover of T-Mobile. Remember the 1,500 cupcakes AT&T delivered to the FCC before the merger was announced? Former FCC Commissioner James Quello apparently once said: “If you can’t eat their food and drink their booze and still vote against them, you shouldn’t have this job.” Google has added a scholarly twist – and in that way, perhaps the strategy is typically Google. So I’ll update the Quello comment: If you can’t read their law review articles and still vote against them, maybe you shouldn’t have this job.

Is there a basis in antitrust law for requiring “neutral” search results? Everyone knows that Google’s mantra is “Don’t be evil.” Few people know what it means. In their 2004 IPO letter, Google’s founders included a section titled “Don’t be evil.” That section stresses the importance of a wall between ads and search results. Ads are to be relevant and clearly labeled. Search results are to be unbiased and objective, and not for sale. “This is similar to a well-run newspaper,” the founders’ letter says.

And indeed, the comparison to a well-run newspaper is apt. Google is an advertising-supported media business, much like radio, television, and newspapers. It is just that Google’s product is search results, not journalism or television programs.

From an antitrust perspective, two characteristics of media markets may have something to tell us. First, advertising-supported media are two-sided markets. The more viewers, listeners, or readers you have, the more valuable you are to advertisers. The converse is also true. As you lose audience, you lose advertisers. As you lose advertisers, your product declines and you lose audience. In the newspaper industry, that is called the “death spiral.”
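A toy feedback model makes that dynamic vivid. The numbers below are invented purely for illustration (this is a sketch of the mechanism, not an empirical claim about any market): once advertisers pay in proportion to audience and product quality tracks ad revenue, a one-time loss of audience compounds.

```python
# Toy model of the advertising "death spiral" (illustrative numbers only):
# advertisers pay in proportion to audience, product quality tracks ad
# revenue, and a weaker product sheds still more audience.

audience = 100.0   # index value; 100 = starting audience
audience *= 0.90   # one-time shock: lose 10% of the audience

for year in range(1, 6):
    ad_revenue = audience        # the ad side follows the audience side
    quality = ad_revenue         # product quality tracks what revenue can fund
    audience *= quality / 100.0  # a weaker product loses still more audience
    print(f"year {year}: audience index {audience:.1f}")

# Prints roughly 81.0, 65.6, 43.0, 18.5, 3.4 -- an accelerating decline.
```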

Second, to quote Justice Frankfurter in the Associated Press case: “Truth and understanding are not wares like peanuts or potatoes.” The news and information businesses are affected with the public interest. In that same 2004 IPO letter, Google’s founders recognize that searching and organizing the world’s information “is an unusually important task” that should be carried out by a company that is “interested in the public good.” Why? Because “a well functioning society should have abundant, free and unbiased access to high quality information.” That is an almost perfect statement of the importance of Google as a media business to the marketplace of ideas. Maurice Stucke and I have argued elsewhere that more antitrust enforcement, not less, is appropriate in such a case. The basic insight comes from Associated Press, this time from Justice Black: “The First Amendment, far from providing an argument against the Sherman Act, here provides powerful reasons to the contrary.”

With these two insights about media markets, let’s come back to the question of whether there is a basis in antitrust law for requiring “neutral” search results.

It is difficult to say what a truly “neutral” search result would be, and it is true that no one has a right to appear at a particular place on Google’s search results page – say first, or second, or tenth, or eighteenth. But I don’t think that is the right question. Rather, the question is whether there is antitrust significance if Google in fact deliberately (and without justification) demoted search results either individually or in the aggregate.

And here I think that the answer is yes.

Consider the Microsoft decision. Favoring your own products or services, as Microsoft did with Internet Explorer, is not enough to show monopoly maintenance. After all, Internet Explorer users could still download Netscape, so in that sense competition was just “one click away.”

The necessary next step has to be some affirmative conduct to shrink your would-be rival’s share. That was a critical element to the D.C. Circuit in Microsoft. We know how Microsoft did it, according to the court: by conduct including restrictive agreements with third parties and deception of developers. What are some of the possible parallels?

In a two-sided market, scale comes from two related things: audience and ads. In the Internet world, that means hits and ads or other revenue sources. Stop an emerging competitor from getting enough hits, and you make it less valuable to advertisers or business partners. Stop it from getting ads (or other revenue) and it won’t grow. That is the newspaper “death spiral.”

So think Lorain Journal, updated. Recall that in Lorain Journal, the newspaper refused to deal with advertisers who advertised on the new competitor, a radio station. Deliberately demoting search results is akin to a refusal to deal, since a significant loss of ranking amounts to the same thing. But notice something else: as in Lorain Journal, the conduct is aimed not just at the competitor, but also – and in fact primarily – at the competitor’s advertisers or other revenue sources. Cut off its air supply, so to speak, so it won’t grow into a competitive threat.

To complete the inquiry, I think you need to ask: What are the threats to Google’s dominance over search? Or, for that matter, to its dominance over online advertising? To the extent that vertical search services or social media may be such threats, I think you have the ingredients for a case very similar to the one DOJ brought against Microsoft. But the key is in the documents. Few would have guessed that Microsoft viewed Netscape as a threat to its operating system monopoly.

Notice something interesting here. When you recast the question, the focus is on competitive entry or expansion. It is perfectly consistent with dynamic competition. In Microsoft, the focus was on whether the defendant was taking affirmative steps to prevent the emergence of something new, something that could displace its dominance in the operating system market.

When you recast the question, whether or not Google has a “duty to deal” is not the central issue. And there is no need to assume or prove that there is such a thing as a “neutral” search result.

May 21, 2012 | Permalink | Comments (0) | TrackBack (0)

How can we measure Google’s market power? - Comments of Mark Patterson

Posted by Mark R. Patterson (Fordham)

Surprisingly, given the amount of attention devoted to whether Google has acted anticompetitively in the search market, much less attention has been paid to whether Google has market power. Those who favor antitrust scrutiny of Google generally cite its large market share, from which they infer or assume its dominance. Those who are skeptical of competition law’s role in regulating search, on the other hand, cite Google’s “competition is only a click away” mantra to suggest that Google’s market position is precarious. In fact, the issue of Google’s power is more interesting than either of these approaches suggests.

The Limited Relevance of Market Share

Competition law uses market share as a proxy for power because it often reflects the ability of a firm to act without regard to competition. Where the product is information, however, competing firms may be able to respond quickly and easily. For example, if Google were to act anticompetitively, a competing search engine would easily be able to “produce” products to meet the demand of those who were unhappy with Google’s products. The products at issue are search results, so the only obstacle to a competing search engine producing more of them would be the installation of more server capacity to deliver the results to customers. Although expanding server capacity imposes some costs and takes some time, those limitations are small compared with, say, expansion of capacity in the production of the archetypal widget. Hence, market share is a relatively poor proxy for power when the product at issue is information.

There is another, perhaps more important element related to the volume of search results delivered, though, and that is the advantage a search engine gains from the information gathered from searches. A search engine delivering a larger volume of search results gains valuable information from its users’ searches and thus is likely to deliver better search results. This effect is not, however, so much one of current market share as of the cumulative number of searches delivered, so that it does not support using current market share as a measure of power. It is better treated, perhaps, as exclusive access to a valuable input, in the sense that prior searches are the raw material from which search results are in part derived. Although this advantage may be important, the focus of the comments here is on another possible source of power: the difficulty of evaluating the quality of search results.

Knowing When to “Click Away” from Google

As Google argues, users can easily switch to other search engines. But the ease of clicking to another search site does not mean that Google has no power. For Google to be constrained, it must also be the case that users can determine when it is advantageous to click away. Because search information is provided for free, users will make that determination by comparing search engines on the basis of quality. Users may find it quite difficult to determine whether the quality of the results they are receiving from a particular search engine justifies switching. As Kenneth Arrow described, “[information’s] value for the purchaser is not known until he knows the information, but then he has in effect acquired it without cost.” As a result, in many instances of search, a consumer will be seeking information in circumstances in which she will be unable to evaluate the quality of the information she receives.

Although one is sometimes confident before performing a search that its quality will be high, as for example when one searches for a particular institution, like the “Antitrust and Competition Policy Blog,” more often one is searching for something less specific, like “New York hotels.” In that case, one cannot be sure before performing the search, and perhaps even before going to the resulting web pages or even the hotels themselves, whether one has received useful results. Even if a user ran the search on another search engine and obtained different search results, it would not be clear which results were better. In that sense, a search result can be a credence good, a good whose quality is difficult or impossible to assess. This lack of transparency in quality can give an information provider market power, just as can an absence of transparency in price for other products.

Significant Power to Distort Search Results?

The real question, though, particularly for Sherman Act § 2 and Article 102 TFEU, is whether Google has significant power. One possible way to make this assessment is to ask whether Google could move a site up or down several places in its search results without prompting users to switch. As is discussed in the fuller version of these comments, the existence and apparent success of search engine optimization are evidence that there is little user response to relatively minor differences in search results. If there is no significant user response to moving a site down several spots in the results, then we can ask whether that move constitutes a significant lowering of search-result quality, from which we could infer the presence of significant market power.

One way to quantify the loss in quality from moving sites in search results, which is offered tentatively here, is to use the prices paid for placement in Google’s AdWords results. Because the value of positions in the so-called “organic” results is not quantified, but the value of AdWords positions is, we can use AdWords as a proxy for the organic results. Indeed, if prices for nearby positions in the AdWords results differ significantly, one would expect the difference in value between similar positions in the organic results to be comparable, or even greater. Although actual prices for AdWords are not easy to obtain, one can use Google’s own “Traffic Estimator” to estimate some figures. For example, using the keyword phrase “kitchen faucet,” the Traffic Estimator provides the following numbers for different specified maximum costs per click (“CPC”):

maximum CPC (specified) | estimated average CPC | estimated ad position | estimated daily clicks | estimated daily cost
$2.50 | $1.26 | 1.49 | 404.08 | $510.90
$2.00 | $1.11 | 1.71 | 379.86 | $421.63
$1.50 | $0.92 | 2.14 | 338.75 | $312.35
$1.00 | $0.70 | 3.00 | 264.30 | $184.10
$0.50 | $0.44 | 5.83 | 119.69 | $52.16

As can be seen in the table, the price difference between ad positions 2 and 3 in these estimates is greater than ($0.92 – $0.70) / $0.92 = 23.9%. The difference between positions 1 and 2 is greater than ($1.26 – $0.92) / $1.26 = 27.0%. For this keyword phrase, then, if Google moved a site down from position 2 to position 3, or from position 1 to position 2, it would be decreasing the value of its placement by approximately 25%. The ability to lower quality by that amount would, following the U.S. Merger Guidelines test, show the existence of market power and, because 25% is significantly greater than the Guidelines’ 10% threshold, perhaps also monopoly power.
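The arithmetic can be reproduced mechanically. The snippet below is a minimal sketch that simply reads the estimated average CPC for each estimated ad position off the Traffic Estimator table above; since the listed positions are averages rather than exact ranks, the computed drops understate the exact per-position loss:

```python
# Estimated average CPC keyed by estimated ad position, taken from the
# Traffic Estimator table above for the phrase "kitchen faucet".
cpc = {1.49: 1.26, 1.71: 1.11, 2.14: 0.92, 3.00: 0.70, 5.83: 0.44}

# Percentage drop in CPC for a one-position demotion, used as a rough
# proxy for the value lost when a site is moved down one slot.
drop_2_to_3 = (cpc[2.14] - cpc[3.00]) / cpc[2.14]
drop_1_to_2 = (cpc[1.49] - cpc[2.14]) / cpc[1.49]

print(f"demotion from ~position 2 to ~3: {drop_2_to_3:.1%}")  # 23.9%
print(f"demotion from ~position 1 to ~2: {drop_1_to_2:.1%}")  # 27.0%
```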

Admittedly, this approach is not unproblematic. Among other things, it uses prices paid by advertisers as a proxy for value to consumers. That is perhaps not entirely unreasonable in a two-sided market like Google’s, in that AdWords advertising serves not only an informational role but also a signaling one. An investment in AdWords is an indication that the advertiser expects to receive a return on that investment, and that return will come from consumers’ purchases from that advertiser. Hence, the willingness to spend on advertising is likely related to the value consumers place on positioning in search results.

Conclusion

In any case, these comments are offered here less as a precise means of assessing market power than as an example of the kind of approach that could be used for a product like Google’s. As information products come to constitute a larger proportion of the market, and to be involved in a larger proportion of allegations regarding anticompetitive conduct, it seems likely that competition law will have to develop entirely new techniques for addressing the special problems posed by information.

This post is derived from a working paper available here.

May 21, 2012 | Permalink | Comments (1) | TrackBack (0)

Is there a basis in antitrust law for requiring ‘neutral’ search results - Comments of James Grimmelmann

Posted by James Grimmelmann

The heart of the gathering antitrust case against Google appears to be that it sometimes "manipulates" the order in which it presents search results, in order to promote its own services or to demote competitors. The argument has intuitive appeal in light of Google's many representations that its rankings are calculated "automatically" and "objectively," rather than reflecting "the beliefs and preferences of those who work at Google." But as a footing for legal intervention, manipulation is shaky ground. The problem is that one cannot define "manipulation" without some principled conception of the baseline from which it is a deviation. To punish Google for being non-neutral, one must first define "neutral," and this is a surprisingly difficult task.

In the first place, search engines exist to make distinctions among websites, so equality of outcome is the wrong goal. Nor is it possible to say, except in extremely rare cases (such as, perhaps, "4263 feet in meters") what the objectively correct best search results are. The entire basis of search is that different users have different goals, and the entire basis of competition in search is that different search engines have different ways of identifying relevant content. Courts and regulators who attempt to substitute their own judgments of quality for a search engine's are likely to do worse by its users.

Neutrality, then, must be a process value: even-handed treatment of all websites, whether they be the search engine’s friends or foes. Call this idea "impartiality." (Tarleton Gillespie suggested the term to me in conversation.) The challenge for impartiality is that search engines are constantly revising how they make distinctions among websites (Google alone makes hundreds of changes a year).

A strong version of impartiality would be akin to Rawls’s veil of ignorance: algorithmic changes must be made without knowledge of which websites they will help and hurt. This is probably a bad idea. Consider the DecorMyEyes scam: an unethical glasses merchant deliberately sought out scathing reviews from furious former customers, because the attention qua attention boosted his search rank. Google responded with an algorithmic tweak specifically targeted at websites like his. Strong impartiality would break the feedback loops that let search engines find and fix their mistakes.

Instead, then, the anti-manipulation case hinges on a weaker form of impartiality, one that prohibits only those algorithmic changes that favor Google at the expense of its competitors. Here, however, it confronts one of the most difficult problems of high-technology antitrust: weighing pro-competitive justifications and anti-competitive harms in the design of complicated and rapidly changing products. Many self-serving innovations in search also have obvious user benefits.

One example is Google's treatment of product-search sites like Foundem and Ciao. Google has admitted that it applies algorithmic penalties to price-comparison sites. This may sound like naked retaliation against competitors, but the sad truth is that most of these "competitors" are threats only to Google's users, not to Google itself. There are some high-quality product-search sites, but also hundreds of me-too sites with interchangeable functionality and questionable graphic design. When users search for a product by its name, these me-too sites are trying to reintermediate a transaction that has very little need of them. Ranking penalties directed at this category share some of the pro-consumer justification of Google's recent moves against webspam.

A slightly different practice is Google's increasing use of what it calls Universal Search, in which it offers news, image, video, local, and other specialized search results on the main results page, intermingled with the classic "ten blue links." Since Google has competition in all of these specialized areas, Universal Search favors Google's own services over competitors'. Universal Search is an obvious departure from neutrality, whatever your baseline--but is it bad for consumers? The inclusion of maps and local results is an overwhelming positive: it saves users a click and helps them get the address they're looking for more directly. Other integrations, such as Google's attempts to promote its Google+ social network by integrating social results, are more ambiguous. Some integration rather than none is almost certainly the best overall design, and any attempt to draw a line defining which integration is permissible will raise sharp questions about regulatory competence.

Some observers have suggested not that Google be prohibited from offering Universal Search, but that it be required to modularize the components, so that users could choose which source of news results, map results, and so on would be included. This idea is structurally elegant, but in-house integration also has important pragmatic benefits. Google and Bing don't just decide which map results to show; they also decide when to show map results, and what the likely quality of any given map result is compared with other possible results. These comparative quality assessments don't work with third-party plugin services.

It makes sense for general-purpose search engines to turn their expertise as well to specialized search. Once they do, it makes sense for them to return their own specialized results alongside their general-purpose results. And once they do that, it also makes sense for them to invite users to click through to their specialized subsites to explore the specialized results in more depth. All of these moves are so immediately beneficial to users that regulators concerned about Universal Search should tread with great caution.

For more on these issues, see my papers Some Skepticism About Search Neutrality, The Google Dilemma, and The Structure of Search Engine Law.

May 21, 2012 | Permalink | Comments (1) | TrackBack (0)

Is there a basis in antitrust law for requiring ‘neutral’ search results? - Comments of Dan Crane

Some Skepticism About Search Neutrality

Posted by Daniel A. Crane

Would it be a good idea for antitrust law to require dominant Internet search engines to be “neutral” in listing their organic search hits, where “neutrality” would prohibit the search engine from giving preference to its own or affiliates’ websites? In two forthcoming symposium essays, I argue that it would be a very bad idea. To be clear, I do not argue that dominant search engines like Google in the U.S. and Europe, Baidu in China, and Yahoo in other parts of Asia should have complete immunity from antitrust law for the way they implement their search engines. However, a general principle of search neutrality is unsupported by antitrust principles and would be a disaster for search engine innovation.

My argument is in two parts. First, the argument for a search neutrality principle applicable to Google (for now, I’ll stick with Google) rests on two falsifiable empirical claims: (1) that Google is dominant in Internet search; and (2) that Google is able to leverage its dominance in Internet search to distort competition in Internet commerce. The argument may founder on proposition (1). Studies have shown that most users are willing to multi-home, or switch from one search engine to another. If, as Google claims, competition is always just “one click away” and users are willing to click, then it seems unlikely that Google should be considered dominant in Internet search.

But assuming for the sake of argument that Google is dominant in search and that leveraging into an adjacent market is, in theory, a rational business move if Google can pull it off, one may ask whether this vision has any correlation with reality. Just because a search engine is dominant vis-à-vis other search engines does not mean that it has the power to promote or demote adjacent websites to its advantage in a way that seriously affects the overall competitiveness of the adjacent market. This would only be true if search engines were indispensable portals for accessing websites. They are not. Users reach websites from many origins—for example, bookmarks, links on other websites, or links forwarded in e-mails—other than search engines. Even dominant search engines account for a relatively small percentage of traffic origins.

For example, when Google acquired the travel-search software company ITA in 2011, rivals complained that Google would use its dominance in search to steer consumers to a Google travel site instead of rival sites like Expedia, Travelocity, and Priceline. But even if Google did that, it is hard to imagine that this could be fatal to rival travel search sites. According to compete.com data, only a small share of traffic into the three big travel search sites originated with a Google search—12% for Expedia and 10% for both Travelocity and Priceline. The percentage of traffic to Yahoo! Travel and Bing Travel (Microsoft’s service) originating with Google is even smaller—7% and 4%, respectively.
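These figures allow a simple upper-bound check on the foreclosure worry. The sketch below (Python; the referral shares are the compete.com figures quoted above, and the total-demotion scenario is a deliberately extreme assumption) shows that even if every Google-originated visit vanished, the traffic loss would be capped by the referral share itself:

    # Share of each travel site's incoming traffic originating with a
    # Google search (compete.com figures quoted above).
    google_referral_share = {"Expedia": 0.12, "Travelocity": 0.10, "Priceline": 0.10}

    # Extreme assumption: Google demotes a rival so thoroughly that every
    # Google-originated visit disappears. Even then, the loss is bounded
    # by the referral share.
    for site, share in google_referral_share.items():
        print(f"{site}: at most {share:.0%} of incoming traffic at risk")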

Google’s relatively low share in search referral is not limited to the travel sites. It includes news sites like The New York Times, the Huffington Post, Foxnews.com, and Politico, where the percentage of incoming traffic from Google is 20%, 6%, 11%, and 12%, respectively. It also includes social media sites like Facebook and Twitter, where Google’s search referral share has recently been around 10-11%. In many cases, other search engines, websites, or services are significantly more important in referring traffic than Google. The Drudge Report refers twice as many readers to Politico as Google does (24% to 12%), Yahoo refers more readers to Facebook (11% to 10%), and Facebook refers more than twice as many users to Twitter (27% to 11%).

In my forthcoming essays, I discuss some reasons to be careful with these data. Still, I have not yet seen a convincing case that search dominance leads to referral dominance. A major flaw in the monopoly leverage story is that even if a particular search engine were dominant as a search vehicle, search engines are not necessarily dominant when it comes to reaching websites. In most cases, a critical mass of users know where they want to go without conducting a search. Manipulation of a search engine to favor particular sites might induce more traffic to those sites, but it seems unlikely that it could foreclose customers from reaching competitive sites.

My second argument is that a general search neutrality principle would freeze search engine innovation. Most of the arguments in favor of search neutrality seem to imagine Internet search as it was five or ten years ago, when a search engine’s job was to return a list of ten blue links. That vision is outdated. Increasingly, search engines are not merely providing intermediate information but ultimate information, the answers themselves. Or, if the search engine remains a step removed from the ultimate information, it is integrated with the ultimate information. Increasingly, it is not accurate to speak of search engines and websites as distinct spaces, or of the relationship between search and content as vertical. The lines are quickly blurring.

This progression is driven by users’ own preferences. As the head of Yahoo Labs and Yahoo’s search strategy explained in 2009, “[p]eople don’t really want to search. Their objective is to quickly uncover the information they are looking for, not to scroll through a list of links to Web pages.” Consequently, search engines are no longer just focusing on document retrieval. Instead, they are working towards direct question answering. By figuring out the intent of the person conducting the search and then displaying all the related content that he might want to see, search engines are shifting away from the paradigmatic ten blue links towards a world of richer results.

“Neutrality” is an incomprehensible and undesirable principle in this context. If a search engine’s algorithm determines that a user is probably asking for directions to a country club, the best answer is not to display a set of links about country clubs but to display a map with navigation functionality. That map will often be a proprietary function of the search engine, an opportunity for the search engine to display more advertising and hence further monetize its Internet presence. Displaying a proprietary map in response to a search query isn’t “neutral,” but prohibiting it would lock the search engine into a dated and uninformative way of responding to a user’s query. A general principle of search neutrality would be a disaster for innovation.

May 21, 2012 | Permalink | Comments (0) | TrackBack (0)

Does the FTC have grounds to pursue a Section 5 case against Google? Comment by Bob Litan

Posted by Robert E. Litan (Kauffman Foundation and Brookings)

The FTC hasn’t enjoyed much luck over the years convincing the courts that various parties have committed standalone violations of Section 5 of the FTC Act, which prohibits “unfair methods of competition” and “unfair or deceptive acts or practices.” This string of losses occurred despite the Supreme Court’s broad language in FTC v. Indiana Federation of Dentists (1986), which states that Section 5 of the Act gives the Commission broader authority than what exists under the basic antitrust laws (Sections 1 and 2 of the Sherman Act) to “stop in their incipiency acts which, when full blown, would violate those Acts.”

The large gap between this broad rhetoric and actual case law is not hard to understand. “Incipiency” is hard to define outside the merger context, where the DOJ and FTC have set forth relatively hard numerical guidelines for suggesting when mergers might violate Section 7 of the Clayton Act. And apart from the equivalent of common law fraud and misrepresentation, it is hard to know what “unfairness and deception” under Section 5 of the FTC Act actually mean.

These definitional problems notwithstanding, certain critics have charged that Google’s relatively recent entry beyond generalized search into “specialized” or “universal” search – which provides, among other things, price and product comparisons (like “Sony Camera” or “best airline fares from Kansas City to Dallas”) – runs afoul of Section 5. Google asked me and Hal Singer to look into this claim, based solely on publicly available information, and see whether there is much to it. In a recent paper posted on SSRN, we concluded there isn’t. In brief, apart from the generic problems in bringing Section 5 cases just noted, here are the most notable allegations and why, in our opinion, they are without either legal or economic basis.

First, some have claimed that by placing its universal search results above those of independent search engines – like Kayak, Yelp, and so on – Google has acted “unfairly” and should somehow be punished, even perhaps by being prohibited from engaging in universal search altogether. The claim that Google’s motives are anticompetitive is undermined by the fact that Google’s search rivals, notably Bing and Yahoo, engage in the very same behavior. Moreover, as antitrust scholars well know, the antitrust laws were designed to protect competition, not specific competitors. For the FTC to push Section 5 beyond this basic principle would open up unbounded inquiries that could threaten aggressive competition throughout the economy.

A second potential critique is that Google has engaged in something deceptive by seemingly reneging on previous statements in its IPO filings and its corporate mission statement that it wanted users to quickly leave its site, and now through universal search wants them to stay a little longer. To punish Google’s change in business model because it is seemingly inconsistent with statements made long ago would set a dangerous precedent: subjecting companies to potentially draconian remedies (in Google’s case, either prohibition of universal search or regulation of the order in which those search results appear) that clearly would chill innovation. People change their minds when they get new facts. John Maynard Keynes once famously proclaimed that a virtue. If that is the case for people, it should certainly be true for companies.

Finally, to prevail in a traditional antitrust tying or bundling claim (outside of Section 5), plaintiffs must establish that the two services are distinct antitrust products. Independent of the two foregoing allegations, proponents of antitrust scrutiny claim that Google’s universal search is somehow fundamentally different from the company’s original general search offering. This premise is wrong and the distinction between the two types of search is largely semantic. As antitrust scholar Einer Elhauge has pointed out, antitrust economics tells us that two seemingly different products are really a single one if consumers would not reassemble them in the absence of a combined offering. Because search users would not likely integrate universal search with general search if Google didn’t do it for them, the two services are best understood as a unified offering (and thus as constituting a single market).

May 21, 2012 | Permalink | Comments (0) | TrackBack (0)

Can there be a market for unpaid search results and could Google be classified as a public utility? - Comments of Eric Clemons

Search as a Regulated Public Utility

Posted by Eric Clemons (Wharton)

Should search be a public utility? I’m not sure that search needs to be a public (state-owned) company, in the sense in which telecommunications in the UK, and indeed in most of the world, was a public utility, in many cases part of the national postal system. But search needs to be at least a regulated common carrier, as AT&T was between 1913 and 1984. A little less regulation would have been appropriate towards the end of AT&T’s monopoly dominance, but throughout most of AT&T’s regulated lifetime it provided the best telephone service in the world, and did so at acceptable prices.

Of course we are living in an era that distrusts government and virtually all regulation. Remember what Wall Street, private equity firms, and proprietary trading did to the American economy and the American capitalist system just a few years ago? That’s what power and greed, private information, and lack of regulation can achieve at their worst; this was not a failure of capitalism, but a failure of market professionals when they enjoyed the end of regulation and an enormous lack of transparency. Remember why we have a Food and Drug Administration? If not, go back and reread Upton Sinclair’s The Jungle and then think about whether eliminating all regulation and all regulatory oversight would be such a good thing. Regulations matter.

What regulations would I expect a common carrier in search to have to observe?

1. There would be no preferencing of the search engine’s own offerings. In the case of Google, Google travel services, or YouTube services, or financial data services, or ticketing services, would have to be given a quality score the same way any other firm or service is ranked and rated, and would have to show up in search in ranked and rated order.

2. The ranking algorithm would have to be fair, as established by a third-party audit. There could be no manual overrides and no preferencing, whether for a fee, or out of a desire to place the search engine’s own offerings ahead of others, or to establish or promote an opinion favorable to the firm or to the politics of its executives. These executives could support political candidates and parties the way any other executives already can, but they could not do so through the manipulation of search order.

3. Algorithms could be updated as needed, of course, but an archival copy of the algorithm at any point in time would need to be preserved for audit at a later date.

4. I’m not sure paid search should be permitted. Paid advertising on the right side of the page could remain, but it would have to look and feel like advertising, not like search results.

Search firms have to make money somehow, of course. Search service firms should be encouraged to compete to create a market for paid search services. These services thus would be available for a fee. A portal like Comcast could buy search services and provide them free for its users, much as some do with antivirus protection. Individuals could choose instead to buy search services, much as they already do with antivirus protection.

I know that users expect search to be free. Years ago we all expected television to be free, yet virtually all of us enjoy a wide array of premium television services today, and all of us pay a significant monthly bill for them. We know that even today’s student users compete with each other to demonstrate the best technology, almost always including an iPad and a smart phone; these devices are not free, nor are their monthly service bills. There is every reason to expect that users who routinely pay a monthly fee for texting, and another monthly fee for data usage, would be willing to pay a small monthly fee for superior search. If not, they could always get their search free from a company that buys search services and provides them to users at no charge, supported by the sort of bundled paid search results that users have seen in the past.

I’m thinking that this would involve splitting Google into three or more companies, one for search, one for Android devices, and one for ancillary services. Google would be free to maintain a fourth company, which purchases services from the others, or from Bing, or from Facebook, or from anyone else with a superior offering.


May 21, 2012 | Permalink | Comments (1) | TrackBack (0)

Can there be a market for unpaid search results and could Google be classified as a public utility? Comments of Adam Thierer

Posted by Adam Thierer (George Mason)

If you blink your eyes in the Information Age you can miss revolutions. Let’s take a quick walk back through our turbulent recent history:

• Just five years ago, MySpace dominated social networking and had The Guardian wondering, “Will MySpace Ever Lose Its Monopoly?” A short time later, MySpace lost its early lead and became a major liability for owner Rupert Murdoch. Murdoch paid $580 million for MySpace in 2005 only to sell it for $35 million in June 2011.

• Just six to eight years ago, the mobile landscape was ruled by Palm, BlackBerry, Nokia, and Motorola. Palm is now all but dead and BlackBerry is trying to stay afloat while Nokia and Motorola had to cut deals with Microsoft and Google respectively in order to survive.

• Just 10 years ago, AOL’s hegemony in online services was thought to be unassailable, especially after its merger with Time Warner. But the merger quickly went off the rails and AOL’s online “dominance” quickly evaporated. Losses grew to over $100 billion and the entire deal unraveled within just a few years as AOL’s old dial-up, walled-garden business model had been completely superseded by broadband and the new Web 2.0 world.

• Just 12 years ago, Yahoo! and AltaVista were the go-to companies for online search. No one turns to them first today when they go looking for information online.

• And just 15 years ago, Microsoft was on everyone’s mind. Today, the firm is struggling to remain part of cocktail party chatter when the topic of modern Tech Titans is discussed. For example, a recent Fast Company cover story on “The Great Tech War of 2012” only mentioned Microsoft in passing. The rise of search, social media, and cloud computing represented disruptive shifts that Microsoft wasn’t prepared for.

The graveyard of tech titans is littered with the names of many other once-mighty giants. Schumpeter’s “gales of creative destruction” have rarely blown harder through any sector of our modern economy.

And so now we come to the question of Google’s dominance in the field of search. Should we be worried? Some say yes, and the rhetoric of public utilities and essential facilities is increasingly creeping into policy discussions about the Internet, including the search layer. A growing cabal of cyberlaw experts—Tim Wu, Dawn Nunziato, Frank Pasquale, among many others—argue that some sort of regulation is needed.

But the recent history I recounted above makes it clear that patience and humility are the more sensible policy prescriptions. Calls for regulation or public utility classification are particularly premature and problematic. As I argued in my recent white paper, “The Perils of Classifying Social Media Platforms as Public Utilities,” search and social media platforms do not resemble traditional public utilities and there are good reasons why policymakers should avoid a rush to regulate them as such.

First, there has not been any serious showing of monopoly power in the search or social media sectors in which Google operates. It’s also impossible to find any way in which consumer welfare is currently being harmed by Google. All of its products are free and constantly evolving. New technologies and rivals continue to emerge. DuckDuckGo, for example, differentiates itself in search by stressing privacy above all else. Meanwhile, the contours of these markets are constantly shifting, making market definition challenging. Is Facebook a search company? Signs are good that it could soon become a formidable one.

These market-definition considerations are especially important because of how long it takes to formulate regulations or impose antitrust remedies. In a market that changes this rapidly, taking several months or even years to complete rulemakings or litigate remedies will almost certainly mean that most rules will be completely out of date by the time they are implemented. And once implemented, there will be very little incentive to rework them as rapidly as the market contours change. Regulation could retard innovation in search and social media markets by denying firms the ability to evolve or innovate across pre-established, artificial market boundaries.

Second, treating these digital services as regulated utilities would harm consumer welfare, because public utility regulation has traditionally been the archenemy of innovation and competition. Public utility regulation has a long, lamentable history that has been well documented by economists and political scientists. That’s why it is usually considered the last resort, not the first option. Moreover, the traditional goals of public utility regulation -- universal service, price competition, and quality service -- are already being achieved without intervention. And as Marvin Ammori and Luke Pelican outline in a new study, all the proposed antitrust remedies to deal with Google in particular also have serious downsides. Almost all the cures would be worse than whatever disease critics hope to cure with antitrust intervention.

Third, treating today’s leading search and social media providers as digital essential facilities threatens to convert “natural monopoly” or “essential facility” claims into self-fulfilling prophecies. The very act of imposing utility obligations on a particular platform or company tends to lock it in as the preferred or only choice in its sector. Public utility regulation also shelters a utility from competition once it is enshrined as such. Also, by forcing standardization or a common platform, regulation can erect de jure or de facto barriers to entry that restrict beneficial innovation and the disruption of market leaders.

Fourth, because social media are fundamentally tied up with the production and dissemination of speech and expression, First Amendment values are at stake, warranting heightened constitutional scrutiny of proposals for regulation. As Eugene Volokh noted in a recent white paper, social media providers should possess the editorial discretion to determine how their platforms are configured and what can appear on them.

Will Google meet the same fate as earlier Tech Titans? It’s impossible to know. But with the wrecking ball of creative digital destruction doing such a fine job of keeping competition and innovation thriving, we’d be smart to reject heavy-handed, top-down regulation of such a dynamic segment of our economy at this time.

May 21, 2012 | Permalink | Comments (0) | TrackBack (0)

Can there be a market for unpaid search results and could Google be classified as a public utility? Comments of Mark Jamison

Posted by Mark A. Jamison (University of Florida Warrington College of Business)

Recent calls for ex ante regulation of Google are reminiscent of earlier calls for regulation of IT companies. Remember the calls to treat Windows like a public utility, or iTunes as an essential facility? These were all misguided because they misconstrued the basics of the proposed regulations. The calls for regulation, then and now, also rest on an unstated premise that rules designed for true monopoly industries with public franchises and stable, long-lived technologies could be successfully applied to companies whose technologies change daily and whose customers readily move on when something better comes along.

Advocates of ex ante regulation of Google generally frame Google as a public utility, a common carrier, or a holder of an essential facility. Google fits none of these. A public utility is a firm that is given a public franchise to hold a 100% market share for a service that is essential to a modern economy. Local electricity, water, and natural gas providers are typical utilities. Google isn’t like them. There is no public franchise for providing general search. Most estimates find that Google has about a 65% market share in general search in the U.S., and a much smaller market share in overall search. And while Google is important, it is not an essential gateway of commerce, as evidenced by how quickly customers switched to Yahoo! when Google had a software glitch a few years ago.

A common carrier is a firm, such as a telecommunications provider or a railroad, that transports items on someone else’s behalf. Regulations for common carriers come from the English common law concept of a public calling, which emerged centuries ago when certain trades essential to the functioning of local economies were in short supply, allowing tradesmen to exploit customers in unique circumstances. Google does not fit the basic premises that make a firm a common carrier. Google does not transport information on a customer’s behalf; Google finds information and provides advertising, while telecom companies provide the transport. Also, general search and advertising are not in short supply, and Google’s pricing approach does not permit exploitation of unique circumstances.

A firm falls under the essential facilities doctrine if the firm is a monopolist or near monopolist in the final goods market, controls an input that rivals need to be able to compete in that market, and denies rivals access to the critical input. Google fits none of these criteria. Google is far from being a monopoly in the relevant retail markets, such as operating systems (Android vs. Apple OS), calendars (Google Calendar vs. Windows Live), and video displays (YouTube vs. Vimeo). And Google does not exclude rivals from advertising or from being included in general search.

Even though Google clearly does not fit these categories of regulation, proponents of regulation remain, claiming vaguely that oversight would benefit competition. This is incorrect. Regulation of Google would likely diminish, if not kill, the innovation, investment, and expansion of output that competition is supposed to encourage.

Read my full paper here.

May 21, 2012 | Permalink | Comments (0) | TrackBack (0)

Sunday, May 20, 2012

Symposium on Competition in Online Search May 21-22, 2012

Posted by D. Daniel Sokol

Over the next two days (May 21-22, 2012), the blog will host a symposium in relation to the Federal Trade Commission’s investigations into Google. The FTC recently announced that it had hired well-known litigator Beth Wilkinson of Paul, Weiss, Rifkind, Wharton & Garrison LLP to work on its investigations into Google. A number of commentators have interpreted this as a sign that the FTC is preparing to litigate, but absent from the reporting of this announcement was any elaboration of what the antitrust case against Google actually might be. For our blog symposium, we will hear the views of a number of academics and policy experts who have written about the Google investigations. A number of complaints have been made against Google, including by its competitors Microsoft, Expedia, and Yelp, which say that Google favors its own content and forecloses the ability of other sites to compete. Google argues that its actions are pro-competitive and are all about providing the best possible experience for the user. Google says it is a guide, not a gatekeeper, on the internet. An important question for our symposium is whether antitrust law actually applies to the allegations that have been made against Google.

We have a great group of commentators for the symposium, including:
● Mark Jamison, University of Florida, Warrington College of Business
● Adam Thierer, George Mason
● Eric Clemons, Wharton School of the University of Pennsylvania
● Dan Crane, Michigan Law School
● James Grimmelman, New York Law School
● Marina Lao, Seton Hall
● Maurice Stucke, University of Tennessee College of Law
● Bob Litan, Kauffman Foundation
● Eugene Volokh, UCLA
● Marvin Ammori, Center for Internet and Society at Stanford Law School
● Mark Patterson, Fordham Law
● Frank Pasquale, Seton Hall
● Allen Grunes, Brownstein Hyatt

Topics:
● Can there be a market for unpaid search results and could Google be classified as a public utility? (Mark Jamison; Adam Thierer; Eric Clemons; Mark Patterson)
● Is there a basis in antitrust law for requiring ‘neutral’ search results? (Dan Crane; James Grimmelman; Marina Lao; Maurice Stucke and Allen Grunes together)
● Does the FTC have grounds to pursue a Section 5 case against Google? (Bob Litan)
● Is search protected by the First Amendment? (Eugene Volokh)
● Are there practical remedies that wouldn’t involve federal regulation of search results? (Marvin Ammori; Frank Pasquale)

May 20, 2012 | Permalink | Comments (0) | TrackBack (0)

Antitrust Energy

Posted by D. Daniel Sokol

Barak Orbach, University of Arizona and D. Daniel Sokol, University of Florida - Levin College of Law; University of Minnesota School of Law describe Antitrust Energy.

ABSTRACT: Marking the centennial anniversary of Standard Oil Co. v. United States, we argue that much of the critique of antitrust enforcement and the skepticism about its social significance suffer from the “Nirvana fallacy” — comparing existing and feasible policies to ideal normative policies, and concluding that the existing and feasible ones are inherently inefficient because of their imperfections. Antitrust law and policy have always been and will always be imperfect. However, they are alive and kicking. The antitrust discipline is vibrant, evolving, and global. This essay introduces a number of important innovations in scholarship related to Standard Oil and its modern applications and identifies shifts in antitrust that will keep the field energized for some time to come.

May 20, 2012 | Permalink | Comments (0) | TrackBack (0)