AI Explorer: Trustworthiness in AI Systems
AI Explorer events feature two brief (TED-style) talks on key Artificial Intelligence topics and allow time for questions and discussion by all participants.
Abstract: The Designing Trustworthy AI Systems (DTAIS) program is built around a core tension: ubiquitous AI offers the opportunity to transform work for social good, yet emergent risks around bias, security, and privacy arise as AI tools are increasingly embedded in core routines and value-generating institutional functions. This presentation will provide an overview of DTAIS research on trust and system architecture, focusing on two in-progress studies. First, in the context of image classification, we explore how a typical focus on accuracy scores ("this classifier is 89% accurate") can hide critical differences in the distribution of classification errors (when it's wrong, how wrong is it?). We relate differences in error distribution to the formation of trust among end-users. Second, in the context of human-in/on-the-loop AI systems, we examine perceived and revealed differences in human control over these systems, and discuss the implications of these differences for policy.
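The point about accuracy hiding error distributions can be made concrete with a toy sketch (not from the talk itself; the labels, numbers, and the |predicted − true| error measure are illustrative assumptions). Two classifiers with identical accuracy can differ sharply in how wrong their errors are:

```python
# Illustrative sketch: equal accuracy, very different error distributions.
# Assumes an ordinal label space where "how wrong" = |predicted - true|.

def accuracy(true, pred):
    return sum(t == p for t, p in zip(true, pred)) / len(true)

def error_magnitudes(true, pred):
    """Sizes of the mistakes, ignoring correct predictions."""
    return [abs(t - p) for t, p in zip(true, pred) if t != p]

true_labels = [5] * 100

# Classifier A: 89% accurate; every error is small (off by 1).
pred_a = [5] * 89 + [4] * 11
# Classifier B: also 89% accurate; every error is large (off by 5).
pred_b = [5] * 89 + [0] * 11

assert accuracy(true_labels, pred_a) == accuracy(true_labels, pred_b) == 0.89
print(error_magnitudes(true_labels, pred_a))  # eleven errors of size 1
print(error_magnitudes(true_labels, pred_b))  # eleven errors of size 5
```

An end-user comparing only the headline "89% accurate" figure would see these two classifiers as equivalent, even though their failure modes, and presumably their effects on user trust, are quite different.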
Presenter: Dr. Zoe Szajnfarber is Professor and Chair of the Department of Engineering Management and Systems Engineering at the George Washington University. She is also the PI and co-Director of the NSF-funded Designing Trustworthy AI Systems (DTAIS) traineeship program. Dr. Szajnfarber's research focuses on the design and development of complex systems, and the organizations that create them. She holds a bachelor's degree in Engineering Science from the University of Toronto, dual master's degrees in Aeronautics & Astronautics and Technology & Policy from MIT, and a PhD in Engineering Systems, also from MIT.
More details: www.incose.org/ai