Wednesday, January 29, 2020
Andrew Dustan, Kristine Koutout & Greg Leo, Beliefs About Beliefs About Gender
Do women believe that leaders in science, technology, engineering, and math (STEM) fields believe that women are bad at doing science? Such beliefs about beliefs (second-order beliefs) could drive women to sort out of STEM fields, leading to the observed gender gap in employment (Beede et al., 2011). Importantly, this belief-driven sorting could occur regardless of leaders' true beliefs about women's scientific abilities. When historically persistent beliefs about the differences between men and women (first-order beliefs) cause disparities, they may generate second-order beliefs that perpetuate those disparities even once first-order beliefs change. To facilitate investigating questions of this nature, we develop an incentive-compatible experimental framework for measuring first- and second-order beliefs about the difference in any quantifiable characteristic between any two populations. We implement this procedure in a lab experiment to elicit beliefs about men's and women's performance on a timed math task and choices in an abstract bargaining task.
We find an interesting contrast between first- and second-order beliefs. There is no evidence that men's and women's first-order beliefs differ; however, both men and women believe that such differences exist. While a large majority of people believe that most men believe men outscore women on the math task, the majority also believe that most women do not share this belief. In the bargaining task, we again find that people believe that men and women hold different first-order beliefs even though we observe no such differences in the data. In summary, even when men and women have similar first-order beliefs, second-order beliefs about men and women can vary substantially.
Tuesday, June 11, 2019
Melissa Hamilton, The Sexist Algorithm, 37 Behavioral Sciences & the Law 145 (2019)
Algorithmic risk assessment tools are informed by scientific research concerning which factors are predictive of recidivism and thus support the evidence-based practice movement in criminal justice. Automated assessments of individualized risk (low, medium, high) permit officials to make more effective management decisions. Computer-generated algorithms appear to be objective and neutral. But are these algorithms actually fair? The focus herein is on gender equity. Studies confirm that women typically have far lower recidivism rates than men. This differential raises the question of how well algorithmic outcomes fare in terms of predictive parity by gender.
This essay reports original research using a large dataset of offenders who were scored on the popular risk assessment tool COMPAS. Findings indicate that COMPAS performs reasonably well at discriminating between recidivists and non‐recidivists for men and women. Nonetheless, COMPAS algorithmic outcomes systemically overclassify women in higher risk groupings. Multiple measures of algorithmic equity and predictive accuracy are provided to support the conclusion that this algorithm is sexist.
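For readers curious what the two kinds of measures contrasted in the abstract look like in practice, here is a minimal sketch; it is not drawn from Hamilton's essay. It computes, separately by gender, a discrimination measure (AUC, how well the score separates recidivists from non-recidivists) and an error-rate measure (the share of non-recidivists placed in the high-risk group, one face of overclassification). The column names mirror ProPublica's public COMPAS release (sex, decile_score, two_year_recid), but the data below are synthetic, and the decile-8 cutoff for "high risk" is an illustrative assumption, so the two groups will come out roughly even here; the point is the computation, not the finding.

```python
# Sketch: gender-wise discrimination (AUC) vs. classification parity for a
# COMPAS-style risk score. Synthetic data; column names mirror ProPublica's
# public COMPAS release. The decile >= 8 "high risk" cutoff is an assumption.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "sex": rng.choice(["Male", "Female"], size=n, p=[0.8, 0.2]),
    "two_year_recid": rng.integers(0, 2, size=n),
})
# Synthetic decile scores: recidivists skew higher; everyone clipped to 1-10.
df["decile_score"] = np.clip(
    rng.normal(4 + 3 * df["two_year_recid"], 2).round(), 1, 10
).astype(int)

HIGH_RISK = 8  # assumed cutoff for the "high risk" grouping

for sex, grp in df.groupby("sex"):
    # Discrimination: how well the score separates recidivists from
    # non-recidivists within this group (the property the abstract says
    # COMPAS handles reasonably well for both men and women).
    auc = roc_auc_score(grp["two_year_recid"], grp["decile_score"])
    # Error-rate balance: share of non-recidivists labeled high risk.
    # A gender gap on this measure is one way overclassification shows up.
    fpr = (grp.loc[grp["two_year_recid"] == 0, "decile_score"] >= HIGH_RISK).mean()
    print(f"{sex:6s}  AUC={auc:.3f}  P(high risk | non-recidivist)={fpr:.3f}")
```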
Friday, March 15, 2019
Kimberly Houser, Can AI Solve the Diversity Problem in the Tech Industry? Mitigating Noise and Bias in Employment Decision-Making, 22 Stanford Tech. L. Rev. (forthcoming)
After the first diversity report was issued in 2014 revealing the dearth of women in the tech industry, companies rushed to hire consultants to provide unconscious bias training to their employees. Unfortunately, recent diversity reports show no significant improvement, and, in fact, women lost ground in some of those years. According to a 2016 Human Capital Institute survey, nearly 80% of leaders were still using gut feeling and personal opinion to make decisions that affected talent-management practices. By incorporating AI into employment decisions, we can mitigate unconscious bias and variability in human decision-making. While some scholars have warned that using artificial intelligence (AI) in decision-making creates discriminatory results, they downplay the reason for such occurrences: humans. The main concerns relate to the risk of reproducing bias in an algorithmic outcome ("garbage in, garbage out") and the inability to detect bias owing to a lack of understanding of the reason for the algorithmic outcome (the "black box" problem). In this paper, I argue that responsible AI will abate the problems caused by unconscious biases and noise in human decision-making, and in doing so increase the hiring, promotion, and retention of women in the tech industry. New solutions to the garbage-in-garbage-out and black box concerns are explored. The question is not whether AI should be incorporated into decisions impacting employment, but rather why, in 2019, we are still relying on faulty human decision-making.
Thursday, October 1, 2015
Kathleen Darcy (Michigan State), Medicalizing Gender: How the Legal and Medical Professions Shaped Women's Experience as Lawyers, 4 Tennessee J. Race, Gender & Social Justice 31 (2015)
Abstract: Despite significant progress, women in the legal profession still have not advanced into positions of power at near the rate at which they saturate the legal market. Scholars agree that simply waiting for parity is not sufficient, and, thus, they have identified many of the barriers that contribute to women's difficulties. Until now, however, the role that scientific and medical understandings play in the evolution of law, and in women's experience as lawyers, has not received examination. To this end, I posit that medicine played a significant role in shaping societal expectations and assumptions about gender, and was similarly influenced by already-existing societal assumptions about gender. This created a complex and substantial barrier that kept women from exploring options outside the "spheres" of society they traditionally occupied. This article explores how medically supported gender theories have, in practice, operated to limit women's professional progress, relegating them to traditional gender roles and halting their ascent in the ranks of the legal profession. I examine how this barrier operates in three ways: how early women lawyers adopted these medical theories into views about their own gender; how society and those around these early women lawyers adopted these views to shape expectations about women as lawyers; and how courts explicitly and implicitly relied on these assumptions about gender to keep women out of the legal profession. An examination of how these medical and scientific theories about gender have shaped the way society views gender, and vice versa, can help illuminate the discussion of the barriers that impede modern women lawyers.
Thursday, August 13, 2015
The title, from an Atlantic piece, just caught my eye.
Why can’t people imagine a future without falling into the sexist past? Why does the road ahead keep leading us back to a place that looks like the Tomorrowland of the 1950s? Well, when it comes to Moneypenny, here’s a relevant datapoint: More than two thirds of Facebook employees are men. That’s a ratio reflected among another key group: futurists.
Both the World Future Society and the Association of Professional Futurists are headed by women right now. And both of those women talked to me about their desire to bring more women to the field. Cindy Frewen, the head of the Association of Professional Futurists, estimates that about a third of their members are women. Amy Zalman, the CEO of the World Future Society, says that 23 percent of her group's members identify as female. But most lists of "top futurists" include perhaps one female name. Often, that woman is no longer working in the field.