“Bias exists in a blind spot for people. They don’t know they’re being biased. This has been a problem in scientific research and discovery forever,” says David Anderson, a member of Data Community DC’s Advisory Board. Data scientists, he notes, love to come up with new ways to address problems, but some of those problems, like bias, are very old.
As in other fields, when data scientists allow bias to impact their work, the results can be wide-ranging, difficult to predict, and sometimes, dangerous. That’s part of the reason Anderson, along with a team led by Andrew Nicklin, who is the Director of Data Practices at the Center for Government Excellence (GovEx) at Johns Hopkins University*, will be presenting a workshop on just this topic at the upcoming Data for Good Exchange 2018, to be held on Sunday, September 16, 2018 at Bloomberg’s Global Headquarters in New York. The title of their workshop is “An Ethics and Algorithms Toolkit for Government (and anyone else!).”
Adding their perspectives will be Joy Bonaguro, the first Chief Data Officer for the City and County of San Francisco; Jane Wiseman, an Innovations in American Government Fellow at the Ash Center for Democratic Governance and Innovation at Harvard Kennedy School; Maureen (Mo) Johnson, a community manager at Data for Democracy who is leading the Community Principles for Ethical Data Sharing (CPEDS) initiative that was launched at Data for Good Exchange last year; and Miriam McKinney, an analyst with GovEx.
During the workshop, the team will unveil a toolkit they have developed to help governments detect bias in their data-driven initiatives. Workshop participants should come prepared with a particular scenario to evaluate – preferably a current problem they’re looking to solve. The next best option would be having a data set they plan to use in the near future. But participants should focus on a real or potential problem, not something hypothetical. Says McKinney: “Keep it as personal as possible.” The team will also use the workshop to get input on improving the toolkit. Although it has been designed for governments, Nicklin thinks there’s a good chance it could also be adapted by not-for-profits and the private sector.
Larger cities are able to employ people to think about the responsible use of data, says Nicklin, but smaller cities rarely have similar resources. “I’m not convinced there is a powerful level of awareness of this problem,” he says. “What nobody has had is a toolkit that a government employee can use to sit down and really evaluate how risky their situation is and, once they understand how risky it is, what tools they might have to mitigate some of that risk and manage it.”
The toolkit developed by Nicklin and his colleagues asks a series of questions that grade the different types of risk in a data-driven initiative. Then, depending upon the level of risk, the toolkit contains suggestions for mitigations. One question, for instance, asks if the data being used is appropriate for the particular scenario. Depending upon the answer, the toolkit might recommend a conversation with the investigator who initially collected the data, a public conversation, or using a different data set entirely.
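The question-to-mitigation flow described above could be sketched roughly as follows. This is a minimal illustration of the general pattern (score each answer, grade the total risk, then look up suggested mitigations); the question wording, scores, thresholds, and mitigation text here are all illustrative assumptions, not the actual toolkit.

```python
# Hypothetical sketch of a risk-grading questionnaire in the spirit of the
# toolkit described above: questions -> risk grade -> suggested mitigations.
# All names, scores, and thresholds are illustrative assumptions.

QUESTIONS = [
    {
        "prompt": "Is the data appropriate for this scenario?",
        "scores": {"yes": 0, "unsure": 1, "no": 2},  # answer -> risk points
    },
    {
        "prompt": "Could the initiative disproportionately affect some groups?",
        "scores": {"no": 0, "unsure": 1, "yes": 2},
    },
]

MITIGATIONS = {
    "low": ["Document your assumptions and proceed."],
    "medium": ["Talk with whoever originally collected the data."],
    "high": ["Hold a public conversation.",
             "Consider using a different data set entirely."],
}

def grade(answers):
    """Sum per-question risk points and map the total to a risk level."""
    total = sum(q["scores"][a] for q, a in zip(QUESTIONS, answers))
    if total <= 1:
        return "low"
    if total <= 2:
        return "medium"
    return "high"

def suggest(answers):
    """Return the mitigation suggestions for the graded risk level."""
    return MITIGATIONS[grade(answers)]
```

For example, answering “no” to the data-appropriateness question and “yes” to the disproportionate-impact question would grade as high risk and surface the stronger mitigations, such as using a different data set.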
“The application of ethics to data science really gets into the nitty-gritty,” says Johnson, and many practitioners don’t know where to begin.
The development of the toolkit was funded by Bloomberg Philanthropies through its What Works Cities initiative.
*The Center for Government Excellence (GovEx) at Johns Hopkins University is funded as part of Bloomberg Philanthropies’ What Works Cities initiative. GovEx provides technical assistance and strategic training that enhances cities’ capacities to leverage cutting-edge data management, performance management, and advanced analytics practices to improve outcomes for residents across all policy areas.