HealthLawProf Blog

Editor: Katharine Van Tassel
Case Western Reserve University School of Law

Thursday, October 9, 2014

Uterus Transplants?

While controversial among some ethics experts, uterus transplantation has been performed several times, most commonly in Sweden. A few weeks ago, for the first time, a woman gave birth to a baby gestated in a transplanted uterus.

Should we worry about uterus transplants? Transplanting life-extending organs, like hearts, livers, lungs and kidneys, has become well-accepted, but observers have raised additional questions about transplantation for life-enhancing body parts like faces and hands. As long as transplant recipients have their new organs, they must take drugs to prevent their immune systems from rejecting the transplanted organs. The risks can be substantial. For example, the immunosuppressive drugs put people at an increased risk of cancer. It is one thing to assume health risks for the possibility of a longer life, but are the risks of being a transplant recipient justified by improvements in the quality of life?



Monday, June 2, 2014

PCAST, Big Data, and Privacy

The President’s Council of Advisors on Science and Technology (PCAST) has issued a report intended to be a technological complement to the recent White House report on big data. This PCAST report, however, is far more than a technological analysis—although as a description of technological developments it is wonderfully accessible, clear and informative.  It also contains policy recommendations of sweeping significance about how technology should be used and developed.  PCAST’s recommendations carry the imprimatur of scientific expertise—and lawyers interested in health policy should be alert to the normative approach of PCAST to big data. 

Here, in PCAST’s own words, is the basic approach: “In light of the continuing proliferation of ways to collect and use information about people, PCAST recommends that policy focus primarily on whether specific uses of information about people affect privacy adversely. It also recommends that policy focus on outcomes, on the ‘what’ rather than the ‘how,’ to avoid becoming obsolete as technology advances. The policy framework should accelerate the development and commercialization of technologies that can help to contain adverse impacts on privacy, including research into new technological options. By using technology more effectively, the Nation can lead internationally in making the most of big data’s benefits while limiting the concerns it poses for privacy. Finally, PCAST calls for efforts to assure that there is enough talent available with the expertise needed to develop and use big data in a privacy-sensitive way.”  In other words: assume the importance of continuing to collect and analyze big data, identify potential harms and fixes on a case-by-case basis, possibly after the fact, and enlist the help of the commercial sector to develop profitable privacy technologies.

The report begins with an extremely useful (and particularly frightening if you aren’t familiar with the internet of things) description of big data possibilities, now and in the near-term future.  The description emphasizes the distinction between data “born digital”—that is, created in digital form—and data “born analog”—arising from the characteristics of the physical world and then becoming accessible in digital form.  Data born analog are likely to contain more information than the data of particular interest; for example, surveillance cameras record everything occurring in a location, not just the acts that are the target of surveillance.  And with analytics that allow data fusion, combining data sources may reveal new meanings, for example by enabling the profiling of individuals.  Big data are high volume, high velocity, and high variety, an intersection that presents serious privacy challenges.

PCAST then attempts to anticipate the privacy harms that might be associated with big data collection and analysis.  The harms are mainly presented as byproducts of the benefits that particular technological developments provide.  The list is impressive, but it may miss additional harms associated with the development of a big data world.  Here’s a table listing developments, benefits, and harms; I’ve marked with an asterisk benefits that I’ve reconstructed from what PCAST says but that PCAST does not state explicitly.

Technological development | Benefit | Associated harm
Digital communication | Social networking across geographical boundaries; social and political participation on a far larger scale | Shared pipelines and the possibility of interception
Virtual home | Ability to store, organize, and share personal records, e.g. cloud storage of photographs | “Home as one’s castle” should extend to the “castle in the cloud,” which is not currently protected
Inferred facts about individuals | Delivery of desired or needed services, e.g. targeted marketing | Inferences may be drawn about highly sensitive facts about the individual (e.g. sexual orientation), facts of which the individual may not even be aware (e.g. early dementia)
Locational identification | Services such as navigation or routing, finding people or services nearby, avoiding hazards | Stalking and tracking
Personal profiles | Benefits from the use of statistically valid algorithms | False conclusions may be drawn about individuals
Discovery of special cases that apply to individuals within a population | May allow tailoring of services to special cases, e.g. personalized medicine, instruction linked to learning styles* | Foreclosure of autonomy: individuals may not want to take the predicted path
Identification of individuals | May allow individuals to be warned, protected, or otherwise benefited* | Loss of desired anonymity

PCAST intentionally omitted from this list the desire that information be used fairly and the desire that individuals know what others know about them or are doing with their information.  In PCAST’s view, neither of these “harms” can be defined precisely enough to support policy recommendations.  Also omitted are more overarching concerns such as effects on identity, security, stigmatization of groups, freedom of expression, or political liberty.

PCAST’s discussion of the current technologies of privacy protection is highly informative, and readers with interests in this area would do well to read the report—I won’t summarize it here.  The report also debunks several standard methods of privacy protection: notice and choice (a “fantasy”), de-identification (ineffective in light of analytics that enable re-identification), and non-retention or deletion (hopeless given the potential for copying, including the creation of multiple copies at the point analog data become digital).
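To make the re-identification point concrete, here is a minimal sketch of a linkage attack, the kind of analytics the report has in mind. Everything in it is hypothetical: the records, field names, and auxiliary dataset are invented for illustration.

```python
# Minimal sketch of a linkage (re-identification) attack, illustrating why
# PCAST treats de-identification as ineffective. All data and field names
# below are invented for illustration.

# "De-identified" health records: names removed, quasi-identifiers retained.
deidentified = [
    {"zip": "84112", "birth_year": 1961, "sex": "F", "diagnosis": "early dementia"},
    {"zip": "84102", "birth_year": 1980, "sex": "M", "diagnosis": "asthma"},
]

# Public auxiliary data (say, a voter roll) sharing the same quasi-identifiers.
voter_roll = [
    {"name": "Jane Doe", "zip": "84112", "birth_year": 1961, "sex": "F"},
    {"name": "John Roe", "zip": "84102", "birth_year": 1980, "sex": "M"},
]

def reidentify(records, auxiliary):
    """Join the two datasets on zip code, birth year, and sex."""
    hits = []
    for r in records:
        matches = [a for a in auxiliary
                   if (a["zip"], a["birth_year"], a["sex"]) ==
                      (r["zip"], r["birth_year"], r["sex"])]
        if len(matches) == 1:  # a unique match re-identifies the record
            hits.append((matches[0]["name"], r["diagnosis"]))
    return hits

print(reidentify(deidentified, voter_roll))
# [('Jane Doe', 'early dementia'), ('John Roe', 'asthma')]
```

The join itself is trivial; what makes the attack work is that a handful of ordinary attributes is often unique to one person, which is why simply stripping names offers little protection.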

Instead, the report suggests several different approaches for protection against data misuse.  As a successor to notice/consent, PCAST recommends the development of “privacy preference profiles,” perhaps by third parties such as the ACLU or Consumer Reports; apps or other internet entities could then indicate whether their privacy policies comport with a profile specified by the consumer.  Or the profile developers might offer the service of vetting apps; ideally, technologies could be developed to perform the vetting automatically (see the sketch below).  PCAST also recommends developing use controls that attach to data at collection and follow the data through use and subsequent transmission; metadata might serve this purpose, but further development is clearly needed.  Another suggested strategy is audit capability as a deterrent to misuse.  Finally, PCAST suggests implementing the Consumer Privacy Bill of Rights through recognition of potentially harmful uses of data, with emphasis on developing best practices to prevent inappropriate data use throughout the data life cycle.
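PCAST does not say how automatic vetting against a privacy preference profile would work. As a rough sketch of the idea, assuming both the consumer’s chosen profile and an app’s declared data practices are machine-readable (the field names and policy vocabulary here are entirely hypothetical), the check might look like this:

```python
# Hedged sketch of vetting an app's declared data practices against a
# third-party "privacy preference profile." The profile format, field
# names, and policy vocabulary are all hypothetical.

# A strict profile, as might be published by a group like the ACLU or
# Consumer Reports.
STRICT_PROFILE = {
    "allowed_uses": {"service_delivery", "billing"},
    "allow_third_party_sharing": False,
    "max_retention_days": 90,
}

def vet_app(declared_policy, profile):
    """Return the ways an app's declared policy violates the chosen profile."""
    violations = []
    disallowed = declared_policy["uses"] - profile["allowed_uses"]
    if disallowed:
        violations.append(f"disallowed uses: {sorted(disallowed)}")
    if (declared_policy["shares_with_third_parties"]
            and not profile["allow_third_party_sharing"]):
        violations.append("shares data with third parties")
    if declared_policy["retention_days"] > profile["max_retention_days"]:
        violations.append(
            f"retains data for {declared_policy['retention_days']} days "
            f"(profile allows {profile['max_retention_days']})")
    return violations

app_policy = {
    "uses": {"service_delivery", "targeted_marketing"},
    "shares_with_third_parties": True,
    "retention_days": 365,
}
print(vet_app(app_policy, STRICT_PROFILE) or "conforms to profile")
```

The point of the sketch is that once policies are machine-readable, conformance checking is mechanical; the hard part, as the report implies, is getting apps to declare their practices in a standard, truthful vocabulary.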

Five major policy approaches (the report calls them recommendations, but they are better characterized as general directions than as specific recommendations) conclude the report.  They are:

            --attention should focus on uses of big data rather than collection and analysis

            --policies should not be stated in terms of technical solutions but in terms of intended outcomes  

            --the US should strengthen privacy-related research, including relevant social science informing successful application of technologies

            --the US Office of Science and Technology Policy should increase education and training efforts

            --the US should take international leadership by adopting policies that stimulate the development of privacy protective technologies.

These recommendations seem remarkably anodyne after the detailed discussion of technologies that precedes them.  They are also preceded by some other, less anodyne policy observations (I found these quite troubling, for reasons I begin to suggest parenthetically below):

            --basing policy on data collection is unlikely to succeed, except in very limited contexts (such as health information) where there may be possibilities for meaningful notice and consent.  (Why, I ask, is notice/consent the only way to approach collection practices?  What about other sorts of restrictions on collection? Or, is the thought that getting the data is both inevitable and desirable, no matter what the context?)

            --regulating at the moment individuals are particularized by analytics might be technically possible—but even so, it’s preferable to focus on harms downstream (Doesn’t this expose people to risks of harm, correctable only after the fact? Shouldn’t we consider building ways to detect and deter re-identification that could intervene before the harm occurs?)

            --drafting savvy model legislation on cyber-torts might help improve the current patchwork of liability rules for privacy violations (Why not a public law approach to violations rather than placing the onus on individual litigation?)

            --forbidding the government from certain classes of uses might be desirable, even if these uses remain available in the private sector (So is the government the only or even primary problem with big data use???)

Leslie Francis

Friday, May 17, 2013

Evidence-Based Practice: When Will Law Catch Up with Medicine?

Two widely reported studies this week, about bed rest for women at risk of preterm delivery and about reducing salt consumption to promote heart health, highlight two things we don’t think about enough: a lot of standard medical practices have no foundation in science, and a lot of legal ones probably don’t either.  Medicine, however, has increasingly taken the public health approach of examining the practices of individual doctors to see how effective they actually are in the general population.  For example, it is old news that prescribing bed rest to pregnant women at risk of preterm delivery is not effective.  Yet this Obstetrics & Gynecology study found that “activity restriction,” such as quitting work, is still prescribed to one in three women at risk for preterm delivery.  The accompanying commentary, Bed Rest in Pregnancy: Time to Put the Issue to Rest, makes an ethical argument that continuing to prescribe bed rest in the absence of evidence of its effectiveness violates the principles of autonomy and beneficence.

The Institute of Medicine just issued this report, Sodium Intake in Populations: Assessment of Evidence, which “found no consistent evidence to support an association between sodium intake and either a beneficial or adverse effect on most direct health outcomes other than some CVD outcomes (including stroke and CVD mortality) and all-cause mortality.”

We have similar research in law, much of it coming from the Empirical Legal Studies movement, including work done at the Center for Empirical Legal Research at Washington University, Berkeley Empirical Legal Research, and the Centre for Empirical Legal Studies at UCL (University College London), among many others.  But this work is neither as well funded nor as well established institutionally as the studies that attempt to put medical practice on an evidentiary footing.  The salt reduction report was commissioned by the Institute of Medicine.  In contrast, the ACLU rests its empirical argument that the death penalty does not deter crime on an opinion survey of police chiefs.  Translating information from research scientists to practicing physicians is still a slow process, but no one questions the underlying principle that medical practice should be based on scientific evidence.

Part of the issue is funding.  Medicine as a whole is on a constant quest to contain costs, and stopping ineffective practices is an important component of that effort.  But beyond a small number of progressive funders like the Robert Wood Johnson Foundation, there isn’t much demonstrable interest (the kind backed by funded studies) among lawmaking bodies in finding out which legal practices work and which do not.

This isn’t a new observation.  Bryant Garth outlined the problem in 1997 when he explained the importance of more social science research into the foundational principles and practices of civil procedure.  But the steady flow of studies questioning conventional wisdom in medicine has, as yet, no real counterpart in the world of lawmaking.

Of course there will always be the problem of knowing the unknowable.  But it would be interesting for lawmakers to consider taking a lesson from public health by challenging the assumptions about the human body and mind, and more generally about the physical world, that underlie both common law and statutes.

 
