(Image source: PixLoger from Pixabay)
A lot has been made of the potential of AI algorithms to exhibit racially biased outcomes based on the data given to them. Now, a new report from New York University's AI Now Institute has also highlighted diversity issues among the engineers who are creating AI.
The AI Now Institute, an interdisciplinary institute researching the social implications of artificial intelligence, focuses on four key domains: rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure.
The report, “Discriminating Systems: Gender, Race, and Power in AI,” highlights what its authors call a “diversity crisis” in the AI sector across gender and race. According to the AI Now Institute, the report “is the culmination of a year-long pilot study examining the scale of AI’s current diversity crisis and possible paths forward,” and “draws on a thorough review of existing literature and current research on issues of gender, race, class, and artificial intelligence.
“The review was purposefully scoped to encompass a variety of disciplinary and methodological perspectives, incorporating literature from computer science, the social sciences, and humanities. It represents the first stage of a multi-year project examining the intersection of gender, race, and power in AI, and will be followed by further studies and research articles on related issues.”
Among the report's key findings:
- Black and Latinx workers are substantially underrepresented in the tech workforce.
Black workers are substantially underrepresented in the AI sector. For example, only 2.5% of Google’s workforce is black, while Facebook and Microsoft are each at 4%.
Latinx workers fare only slightly better, making up 3.6% of Google's workforce, 5% at Facebook, and 6% at Microsoft.
- Women are underrepresented as AI research staff.
Women comprise only 15% of AI research staff at Facebook and 10% at Google.
There is no public data on trans workers or other gender minorities as AI research staff.
- Women authors are underrepresented at AI conferences.
Only 18% of authors at leading AI conferences are women.
- Computer science as a whole is experiencing a historic low point for diversity.
The report found that women make up only 18% of computer science majors in the US as of 2015 (down from 37% in 1984). Women currently make up 24.4% of the computer science workforce, and their median salaries are 66% of those of their male counterparts.
The numbers become even more alarming when race is taken into account. The proportion of bachelor's degrees in engineering awarded to black women declined 11% from 2000 to 2015. “The number of women and people of color decreased at the same time that the tech industry was establishing itself as a nexus of wealth and power,” the report states. “This is even more significant when we recognize that these shocking diversity figures are not reflective of STEM as a whole: in fields outside of computer science and AI, racial and gender diversity has shown a marked improvement.”
The Pipeline Isn't Enough
Based on its findings, the AI Now Institute is calling for the AI sector to drastically shift how it addresses diversity. However, it also cautions that fixing the school-to-industry pipeline and focusing on “women in tech” efforts will not be enough. “The overwhelming focus on ‘women in tech’ is too narrow and likely to privilege white women over others,” according to the report.
“Despite many decades of ‘pipeline studies’ that assess the flow of diverse job candidates from school to industry, there has been no substantial progress in diversity in the AI industry,” the report reads. “The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether.”
On the technological side, the report also calls for the AI sector to reevaluate the use of AI for the “classification, detection, and prediction of race and gender.”
“Systems that use physical appearance as a proxy for character or interior states are deeply suspect, including AI tools that claim to detect sexuality from headshots, predict ‘criminality’ based on facial features, or assess worker competence via ‘micro-expressions,’” the report states. “Such systems are replicating patterns of racial and gender bias in ways that can deepen and justify historical inequality. The commercial deployment of these tools is cause for deep concern.”
The Need for Transparency
For its part, the AI Now Institute makes several recommendations for addressing both diversity in the AI workforce and bias in the algorithms themselves.
In terms of diversity, the report says transparency is key and recommends companies increase transparency with regard to salaries and compensation, harassment and discrimination reports, and hiring practices. It also calls for a change in hiring practices, including targeted recruitment to increase diversity and a commitment to increase “the number of people of color, women, and other underrepresented groups at senior leadership levels of AI companies across all departments.”
Addressing bias in AI systems will be a different, but equally important, challenge. In addition to transparency around when, where, and how AI systems are deployed and why they are being used, the institute also calls for more rigorous testing across the entire lifecycle of AI systems to ensure there is continuous monitoring for evidence of bias or discrimination.
Furthermore, risk assessments should be conducted to evaluate whether certain systems should be designed at all. “The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise,” the report says.
The Feedback Loop
Evidence of both the technological and diversity issues in the AI sector has been widely reported in recent years.
A 2013 study conducted by Harvard University professor Latanya Sweeney found that searching two popular search engines for names commonly associated with blacks (e.g., DeShawn, Darnell, and Jermaine) yielded Google AdSense ads related to arrests in the majority of cases. By contrast, white-associated names such as Geoffrey, Jill, and Emma yielded more neutral results.
A 2016 investigation by ProPublica found that an AI tool used by American judges to evaluate the risk that an accused person would commit another crime in the future was “racially biased and inaccurate,” and was almost twice as likely to label African-Americans as higher risk than whites.
Among various studies, AI Now Institute's report also cites a 2019 study conducted by researchers at Northeastern University that found that Facebook's ad delivery service was delivering targeted ads to users based on racial and gender stereotypes. In job ads, for example, supermarket cashier positions were disproportionately shown to women, taxi driver positions to black users, and lumber industry jobs to white men.
Issues of race and gender inequality at some of the top tech companies known for creating AI technologies have also found their way into recent headlines. As of this writing, Microsoft is in the middle of a class action gender discrimination lawsuit by workers alleging that the company has failed to seriously address hundreds of harassment and discrimination complaints.
Uber, which has made no secret of its autonomous car development efforts, is currently under federal investigation for gender discrimination. A 2017 audit of Google's pay practices by the US Department of Labor found a difference of six to seven standard deviations between pay for men and women in nearly every job category at the company.
And a widely publicized 2018 blog post by Mark Luckie, a former manager at Facebook, accused the company of workplace discrimination against its black employees.
This is not to say that companies are not aware of these issues. At its 2019 I/O Developer Conference, Google announced that it was actively researching methods to eliminate potential biases in algorithms it employs for image recognition and other tasks.
According to the AI Now Institute, issues with workplace diversity and bias in AI systems commingle into what it calls a “discrimination feedback loop,” in which AI continues to exhibit problematic biases and discrimination, in large part due to a lack of input and consideration from engineers from underrepresented racial and gender groups.
“Discrimination and inequity in the workplace have significant material consequences, particularly for the underrepresented groups who are excluded from resources and opportunities,” the report says. “...These patterns of discrimination and exclusion reverberate well beyond the workplace into the wider world. Industrial AI systems are increasingly playing a role in our social and political institutions, including in education, healthcare, hiring, and criminal justice. Therefore, we need to consider the relationship between the workplace diversity crisis and the problems with bias and discrimination in AI systems.”
Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, blockchain, and robotics.