Algorithmic Equity Toolkit

Developing an Algorithmic Equity Toolkit with Government, Advocates, and Community Partners

Summer 2019


[Toolkit homepage, under construction]


At the UW eScience Institute’s Data Science for Social Good program this summer, fellows will create an Algorithmic Equity Toolkit: a set of tools for identifying and auditing public-sector algorithmic systems, especially automated decision-making and predictive technologies. Extensive evidence demonstrates harms across highly varied applications of machine learning; for instance, automated bail decisions reflect systemic racial bias, facial recognition accuracy is lowest for women and people of color, and algorithmically supported hiring decisions show evidence of gender bias. Federal, state, and local policy does not adequately address these risks, even as governments adopt new technologies.

The toolkit will include both technical and policy components:

(1) an interactive tool that illustrates the relationship between how machine learning models are trained and adverse social impacts (a minimal sketch of this idea follows below);

(2) a technical questionnaire that helps policymakers and non-experts identify algorithmic systems and their attributes; and

(3) a stepwise evaluation procedure for surfacing the social context of a given system, its technical failure modes (i.e., its potential for not working correctly, such as false positives), and its social failure modes (i.e., its potential for discrimination even when working correctly).

The toolkit will be designed to serve (i) employees in state and local government seeking to surface the potential for algorithmic bias in publicly operated systems and (ii) members of advocacy and grassroots organizations concerned with the social justice implications of public-sector technology.
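Component (1) turns on a simple point: disparities in training data translate into disparate error rates. The sketch below is our own minimal illustration, not part of the toolkit; it uses scikit-learn on synthetic data, and the group sizes, noise levels, and names are all assumptions chosen for demonstration. It shows a model whose false positive rate is higher for an under-represented, noisier group.

```python
# Hypothetical illustration (not the toolkit itself): a classifier trained on
# data where one group is under-represented and noisier can show a higher
# false positive rate for that group -- the kind of "technical failure mode"
# the toolkit's evaluation procedure is meant to surface.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

def make_group(n, label_noise):
    """Synthetic features; labels degrade as `label_noise` grows."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + label_noise * rng.normal(size=n) > 0).astype(int)
    return X, y

# Assumed imbalance: group A is plentiful and clean; group B is small and noisy.
X_a, y_a = make_group(5000, label_noise=0.5)
X_b, y_b = make_group(500, label_noise=1.5)

model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

def false_positive_rate(X, y):
    """Share of true negatives the model incorrectly flags as positive."""
    pred = model.predict(X)
    return (pred[y == 0] == 1).mean()

print(f"False positive rate, group A: {false_positive_rate(X_a, y_a):.2f}")
print(f"False positive rate, group B: {false_positive_rate(X_b, y_b):.2f}")
```

Running the sketch typically prints a noticeably higher false positive rate for group B, even though the model was trained on all the data it was given; the disparity comes from the data, not from a coding error.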