Saturday, December 30, 2017

If a false analyst opinion falls in the computer system, does it make a sound?

There is so much to unpack in FINRA’s recent settlement with Citigroup Global Markets over its analyst ratings. (Press release here.)

The short version is that, due to a glitch at one of Citigroup’s clearing firms, there was a nearly five-year period when its displayed ratings for 1,800 different equity securities (buy, sell, hold) were incorrect.  Buys were listed as sells, securities that weren’t covered received a rating, and so on.  As I understand it, the research reports themselves were accurate – so you could probably click through to see the true rating – but in various summaries made electronically available to customers and brokers, the bottom-line recommendation was wrong.  My guess – and this is just a guess – is that some brokers and customers probably figured out that the summary ratings were unreliable and made a habit of clicking through to check the research reports, but FINRA nonetheless alleges the mistakes affected trading in various ways, including by allowing trades that violated certain account parameters (e.g., accounts that were supposed to be restricted to securities rated “buy”).
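To make the mechanics concrete, here is a minimal sketch of how that sort of failure can play out. Everything in it (the tickers, the data structures, the "buy-rated-only" check) is made up for illustration and is not drawn from the settlement; the point is just that once a restriction check keys off a displayed summary rather than the report itself, a glitched summary quietly waves prohibited trades through.

```python
# Hypothetical illustration only -- not Citigroup's actual systems or data.
# The research reports carry the true rating; the electronic summary feed
# shown to brokers and customers has drifted out of sync with them.

report_ratings = {"AAA": "buy", "BBB": "sell", "CCC": None}       # None = not covered
displayed_ratings = {"AAA": "sell", "BBB": "buy", "CCC": "hold"}  # glitched summaries

def can_buy(ticker: str, account_restriction: str) -> bool:
    """Restriction check keyed to the *displayed* rating, not the report."""
    if account_restriction == "buy_rated_only":
        return displayed_ratings.get(ticker) == "buy"
    return True

# BBB's report says "sell", but the summary shows "buy", so an account
# restricted to buy-rated securities is cleared to purchase it anyway.
print(report_ratings["BBB"], displayed_ratings["BBB"])  # sell buy
print(can_buy("BBB", "buy_rated_only"))                 # True -- trade slips through
```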

The problem did not go entirely unnoticed within the firm, but there wasn’t a firm-level understanding of how systematic the issue was.  So, brokers would report problems with individual ratings, and maybe they were corrected and maybe not, but no one seems to have had their eye on the system as a whole.
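For what it’s worth, the sort of firm-level check that seems to have been missing is easy enough to describe in the abstract: periodically reconcile the displayed summary ratings against the ratings in the underlying reports across the entire coverage universe, rather than waiting for brokers to flag one security at a time. A minimal sketch, again with made-up data and names rather than anything from the settlement:

```python
# Hypothetical reconciliation sweep -- illustrative only, not drawn from the
# settlement. Compare every displayed summary rating to the rating in the
# underlying research report, instead of fixing one complaint at a time.

def reconcile(report_ratings: dict, displayed_ratings: dict) -> list:
    """Return (ticker, report_rating, displayed_rating) for each mismatch."""
    mismatches = []
    for ticker in sorted(set(report_ratings) | set(displayed_ratings)):
        actual = report_ratings.get(ticker)   # None = not covered
        shown = displayed_ratings.get(ticker)
        if actual != shown:
            mismatches.append((ticker, actual, shown))
    return mismatches

# Made-up example data.
reports = {"AAA": "buy", "BBB": "sell", "CCC": None}
displays = {"AAA": "sell", "BBB": "buy", "CCC": "hold"}

for ticker, actual, shown in reconcile(reports, displays):
    print(f"{ticker}: report says {actual!r}, summary displays {shown!r}")
```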

So, first - wow.

Second, I can’t help but point out that European regulations will soon require that investment banks charge for their research rather than provide it “free” to brokerage clients.  This has created some rather complex questions regarding the value of analyst research as research (rather than as an entrée for schmoozing with corporate insiders, as Matt Levine has extensively discussed).  Which means we now have a new datapoint: if Citi can go nearly five years with no one caring much about systematic mistakes in its analyst recommendations, that, umm, is rather suggestive of the research’s value.  I also look forward to someone doing the empirical work to determine if these mistakes had any impact on market prices.  I mean, Citi leaves a big footprint; it’s not impossible to imagine the errors had some detectable effect on stock prices, if only briefly. 

Third, the extent to which securities laws prohibit false statements of opinion is a perennially hot topic.  Technically, Rule 10b-5, Section 11, and Section 18 prohibit false statements of fact.  Assuming the distinction between fact and opinion is a clear one (which is a whole ’nother issue) – does that mean people are free to falsely talk up stocks as great buys (in their opinion) even when they are not?

Courts have settled on the rule that when a speaker misrepresents his own opinion – claims that he thinks a stock is a great buy when he thinks it is in fact no such thing – that is a false statement of fact, namely, about the fact of the speaker’s own opinion.  The Supreme Court has also made clear that statements of opinion may be false in the sense that they mislead about the factual basis that underlies the opinion.  Throughout this debate, it has been generally assumed that falsity in the first sense is, functionally, indistinguishable from scienter: after all, how could anyone accidentally misrepresent their own opinion about something? 

Well, it seems Citigroup has now found a way.

Interestingly, this is not a case where a company seems to have accidentally bumbled into a series of mistakes that happily worked out in the company’s favor.  In this case, the false reports were all over the map, and did not appear to result in any particular benefit to Citi.  So, it appears that once computer algorithms get involved, it is in fact possible to falsely state one’s opinion without harboring fraudulent intent.

But only up to a point - some brokers noticed problems with particular stocks and reported them, though Citi took its own sweet time about making a fix.  Does that mean the company acted with scienter with respect to particular false recommendations?  And if so, can we expect to see lawsuits?  (Loss causation will be an issue, naturally, but - well.  We can stay tuned to see if someone comes up with an argument.)

https://lawprofessors.typepad.com/business_law/2017/12/if-a-false-analyst-opinion-falls-in-the-computer-system-does-it-make-a-sound.html

Ann Lipton | Permalink

Comments

Thanks for this post, Ann. I think you've got it all right. I am participating in an AALS discussion group next week in which I plan to bring this up as an example of the kind of case that forces us back to the core of securities fraud under Section 10(b) and Rule 10b-5: manipulation or deception. It may be very hard to prove facts that fit into that rubric.

Posted by: joanheminway | Jan 1, 2018 7:18:27 AM

Oh, that sounds fascinating - this is such an intriguing little situation, but I can't help but think it's a harbinger of things to come in our increasingly automated world.

Posted by: Ann Lipton | Jan 1, 2018 9:08:24 AM
