Human Rights Data Analysis Group
We interact with the outputs of quantitative models many times a day. As methods from statistics, machine learning, and artificial intelligence become more ubiquitous, so too do calls to ensure that these methods are used “for good” or, at the very least, ethically. But how do we know whether we are achieving “good”? This question will frame a presentation of case studies from the Human Rights Data Analysis Group (HRDAG), a Bay Area nonprofit that uses data science to analyze patterns of violence. Examples will include collaborations with US-based organizations investigating police misconduct and partnerships with international truth commissions and war crimes prosecutors. These projects illustrate the challenges of real-world data, including incomplete and unrepresentative samples and adversarial political and legal climates. Particular attention will be paid to the harm that can result from inappropriately analyzing and interpreting incomplete and imperfect data, raising questions such as: How can we identify the cases where analytical tools can do the most good, and avoid or mitigate the most harm? We propose starting with two simple questions: What is the cost of being wrong? And who bears that cost?