Thursday, May 9, 2019
In light of the rough and tumble bar passage declines over the past half-dozen years or so, numerous blogs and articles have appeared trying to shed light on what factor or factors might be at play, running the gamut from changes in the bar exam test instrument, to changes in law school admissions, to changes in law school curriculum. In addition, the academic support world has rightly focused attention on how students learn (and how we can better teach, assist, coach, counsel, and educate our students to "learn to learn"). Indeed, I often prowl the internet on the lookout for research articles exploring potential relationships among the social (belonging), the emotional (grit, resiliency, mindset), and the cognitive in relationship to improving student learning.
Nevertheless, with so much riding on what is really happening to our students in their law school learning and bar preparation experiences, I am a little leery of much of the research because, to be frank, I think learning is, well, much more complicated than some statistical experiments might suggest.
Take one popular issue...growth mindset. Studies appear to demonstrate that a growth mindset correlates with improved test scores in comparison to a fixed mindset. But, as statisticians worth their salt will tell you, correlation does not mean causation. Indeed, it may be that we ought not focus on developing positive mindsets but instead help our students learn to learn to solve legal problems, and then, along the way, their mindsets change. It's the "chicken and the egg" problem: which comes first? Indeed, there is still much to learn about the emotional and its relationship with learning.
Take another popular issue...apparent declines - at least among some segments of bar takers - in LSAT scores. Many argue that such declines in LSAT scores are indeed the culprit with respect to declines in bar exam outcomes. But, to the extent the LSAT might be a factor, by most accounts its predictive power for bar exam results is limited because other variables, such as law school GPA, are much more robust. In short, LSAT might be part of the story...but it is not the story, which is to say that it is not truly the culprit. Indeed, I tend to run and hide from articles or blogs in which one factor is highlighted to the exclusion of all else. Life just isn't that simple, and neither is learning.
So, as academic support professionals indebted to researchers on learning, particularly cognitive scientists and behaviorists, here are a few thoughts - taken from a recent article in Nature magazine - that might be helpful in evaluating to what extent research findings might in fact be beneficial in improving the law school educational experience for our students.
- First, be on the lookout for publication and funding bias. Check to see who has funded the research project. Who gains from this research? And remember that journals tend to publish positive results far more often than null ones, so the published record may not reflect all the studies that were run.
- Second, watch out for positive results from studies with low statistical power. Power is the probability that a study will detect a real effect if one exists; it depends on the sample size and on the size of the effect being studied. When an underpowered study nonetheless reports a positive result, that result is more likely to be a false positive, and it tends to exaggerate the true effect. Likewise, if a relationship between two variables of interest, say LSAT scores and bar exam scores, turns out to be real but small, then there must be other latent factors at play that are even more powerful. So, be curious about what might be left unsaid when a reported effect is weak or the study underpowered.
- Third, be on guard for research results that just seem stranger than the truth. They might be true, but take a closer look at the underlying statistical analysis to make sure that the researchers were using sound statistical tests. You see, each statistical test has various assumptions with respect to the data that must be met, and each statistical test has a purpose. But, in hopes of publishing, and having accumulated a massive dataset, there's a temptation to keep looking for a statistical analysis that produces a positive result even when the most relevant test for the particular experiment uncovers no statistically meaningful result. Good researchers will stop at that point. However, with nothing left to publish, some will keep at it until they find a statistical test, even if it is not the correct fit, that produces a positive result - a practice often called "p-hacking." As a funny example, Dorothy Bishop, writing in Nature, describes a research article in which the scientists deliberately kept at it until they found a statistical analysis that produced a positive result, namely, that listening to the Beatles doesn't just make one feel younger...but makes one actually younger in age.
- Fourth, do some research on the researchers to see whether the hypothesis was formed before the data were examined or instead cooked up after the fact to fit the dataset. In other words, it's tempting to poke around the data looking for possible connections to explore and then try to connect the dots into a hypothesis, but the best research uses the data to test hypotheses, not to develop hypotheses post hoc.
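The "low power" point above can be made concrete with a small simulation (my own sketch, not from the Nature article; the numbers are hypothetical). With a small true effect and small samples, most studies miss the effect entirely, and the few that do reach "significance" exaggerate how big the effect really is:

```python
# Toy illustration of low statistical power, using only the standard library.
# Two groups of 25 "students" each; the true difference is a modest 0.2
# standard deviations. We run 2,000 simulated studies and see how often a
# simple z-style test (|z| > 1.96, roughly p < .05) detects the effect.
import random
import statistics

random.seed(42)

def one_study(n=25, true_effect=0.2):
    """Simulate one study and return (observed difference, significant?)."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.pvariance(control) / n + statistics.pvariance(treated) / n) ** 0.5
    return diff, abs(diff / se) > 1.96  # rough two-sided test at alpha = .05

results = [one_study() for _ in range(2000)]
power = sum(sig for _, sig in results) / len(results)
sig_effects = [d for d, sig in results if sig]

print(f"power (chance of detecting the true 0.2 effect): {power:.2f}")
print(f"average effect among 'significant' studies: {statistics.mean(sig_effects):.2f}")
```

The simulated power comes out low (roughly one chance in ten of detecting the effect), and the studies that do cross the significance line report an average effect several times larger than the true one - exactly the kind of inflated "positive result" the second point warns about.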
Here's a link to the Nature magazine article to provide more background about how to evaluate research articles: https://www.nature.com. (Scott Johns).