Rethinking Explainability in Artificial Intelligence - Nicole Rigillo

Artificial Intelligence


Moot Court Room
Old College
South Bridge


Thu 19 September 2019

Speaker : Nicole Rigillo, PhD, Berggruen Research Fellow at Element AI

About the seminar
AI systems are increasingly being used to both augment and replace human decision-makers of all kinds: an HR representative screening a job applicant, a judge assessing a prisoner's bail request, or an immigration agent issuing a visa, for example. Concerns about fairness and transparency in such systems have led to legislation demanding that decision-making AI be explainable, both to users and to those ultimately affected by algorithmic decisions. This positions explainability as a crucial – yet often ill-understood – epistemic interface between humans and intelligent machines. Drawing on interviews with members of Element AI's explainability team, I show how explainability experts frame and encode notions of trust, accountability, and bias as they produce models intended to make an AI system's decision-making process more legible and actionable to humans. I argue that for AI systems to be ethical and trustworthy, system designers and regulators must better account both for the differences between human and machine forms of intelligence and for the subtle conceptual shifts that accompany the development of AI systems capable of autonomously making decisions about us.

About the speaker
Nicole Rigillo is an anthropologist and a Berggruen Research Fellow based at Element AI in Montreal, Canada. There she engages AI scientists in dialogue on how artificial intelligence is changing what it means to be human. Her current research centers on explainable AI and spaces of epistemic negotiation between humans and intelligent machines, ethical AI, and AI as a specific form of intelligence. Her postdoctoral research at the University of Edinburgh examined how civic and environmental activists use WhatsApp to improve municipal governance in Bangalore, raising questions about the effects of encrypted dark social networks on democracy and the public sphere. Her PhD research at McGill University focused on how mandatory corporate social responsibility in India altered an earlier model of welfare universalism by redistributing social responsibilities among heterogeneous groups of non-state actors.

This event is free and open to all. No registration necessary.