Sunday, April 22, 2012
If one jumbo jet crashed in the US each day for a week, we'd expect the FAA to shut down the industry until the problem was figured out. But in our health care system, roughly 250 people die each day due to preventable error. A vice president at a health care quality company says that "If we could focus our efforts on just four key areas - failure to rescue, bed sores, postoperative sepsis, and postoperative pulmonary embolism - and reduce these incidents by just 20 percent, we could save 39,000 people from dying every year." The aviation analogy has caught on in health care, as patient safety advocate Lucian Leape noted in his classic 1994 JAMA article, Error in Medicine. Leape observes that airlines have become far safer by adopting redundant system designs, standardized procedures, checklists, rigid and frequently reinforced certification and testing of pilots, and extensive reporting systems. Leape and Peter Pronovost have been pressing for the adoption of similar methods in health care for some time, and have scored some remarkable successes.
But the aviation model has its critics. The very thoughtful finance blogger Ashwin Parameswaran argues that "by protecting system performance against single faults, redundancies allow the latent buildup of multiple faults." While human expertise depends on an intuitive grasp, or mapping, of a situation, perhaps built up over decades of experience, technologized control systems privilege algorithms that are supposed to aggregate the best that has been thought and calculated. The technology is supposed to be the distilled essence of the insights of thousands, fixed in software. But the persons operating in the midst of it are denied the feedback that is the cornerstone of intuitive learning. Parameswaran offers several passages from James Reason's book Human Error to document the resulting tension between our ability to accurately model systems and our intuitive understanding of them. Reason states:
[C]omplex, tightly-coupled and highly defended systems have become increasingly opaque to the people who manage, maintain and operate them. This opacity has two aspects: not knowing what is happening and not understanding what the system can do. As we have seen, automation has wrought a fundamental change in the roles people play within certain high-risk technologies. Instead of having ‘hands on’ contact with the process, people have been promoted “to higher-level supervisory tasks and to long-term maintenance and planning tasks." In all cases, these are far removed from the immediate processing. What direct information they have is filtered through the computer-based interface. And, as many accidents have demonstrated, they often cannot find what they need to know while, at the same time, being deluged with information they do not want nor know how to interpret.
A stark choice emerges. We can either double down on redundant, tech-driven systems, or we can try to restore smaller-scale settings where human judgment actually stands a chance of comprehending the situation. For those who accept the inevitability of larger, more interconnected, and more technologized finance systems, the work of Kenneth Bamberger and Erik Gerding may provide a useful framework for mitigating the most troubling potential effects of automation. They have outlined commendable changes to the current regulatory framework. We will need to recognize this regulatory apparatus as a "process of integrating human intelligence with artificial intelligence." (For more on that front, the recent "We, Robot" conference at U. Miami is also of great interest.) [FP]