In AI We Trust
The increasing prevalence of artificial intelligence (AI) in society presents enormous opportunities while posing challenges and risks. Few companies or industries today remain untouched by AI, and those that haven’t adopted AI yet are wrestling with whether and how to integrate it into their systems and decision-making.
In September 2021, GW’s School of Engineering and Applied Science was awarded $3 million from the National Science Foundation Research Traineeship (NRT) program to transform the graduate education model and prepare future designers to navigate the opportunities and risks inherent in designing new AI algorithms and deploying them in real-world systems.
Zoe Szajnfarber, professor and chair of the Engineering Management and Systems Engineering Department, is co-leading the project with Robert Pless, professor and chair of the Computer Science Department, and colleagues from both departments. Szajnfarber explains what it means for AI to be trustworthy, what skills future AI designers need and their new approach to training Ph.D.s.
Q: What do we mean by trustworthy AI? How do we characterize trust in the AI context?
Q: Is there something you feel people misunderstand or misjudge about AI?
A: AI is often talked about as though it’s one uniform thing that can solve all our problems, but that’s misleading. There are currently a large number of algorithmic approaches that broadly fall under the umbrella of AI, and they vary widely in their capabilities for inference and prediction, in the opportunity for unknowingly introducing bias, and in the limits of where they will work.
What I think is not well understood in popular coverage is how AI can fail in ways that traditional computational approaches do not, which has significant implications for how best to leverage and regulate it. Advanced AI predictions are often driven by surprising (and highly context-specific) connections made by processing vast amounts of data.
A classic example involves an extremely accurate AI-generated prediction of whether a picture includes a dog or a wolf. While a human might have focused on the shape of the ears or the color of the eyes, the algorithm noticed that wolf pictures were much more likely to have snow in the background, which proved a highly predictive cue in this context. However, a classifier relying on that cue (snow) might also label a skier as a wolf when applied to new data, in a way that a human model focusing on the pointy ears never would. Currently, regulations and certification approaches create safeguards for typical failure modes, but the way that AI generates predictions changes that game. Researchers are actively working on tools to support better explainability and interpretability.
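The failure mode described above can be sketched in a few lines of code. This is a toy illustration, not anything from the GW project: the dataset, features, and simple logistic-regression model are all invented for demonstration. The model is trained on two made-up features, "pointy ears" and "snow in the background," where snow happens to correlate almost perfectly with the wolf label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: each example has two features,
#   [pointy_ears, snow_in_background]
# In this invented data, wolves (label 1) are almost always
# photographed in snow; dogs (label 0) almost never are.
n = 1000
labels = rng.integers(0, 2, n)
pointy_ears = (labels == 1) * 0.7 + rng.normal(0, 0.4, n)  # noisy, overlapping
snow = (labels == 1) * 0.95 + rng.normal(0, 0.1, n)        # nearly perfect cue
X = np.column_stack([pointy_ears, snow])

# Minimal logistic regression trained by gradient descent
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - labels) / n

# The learned weights lean far harder on "snow" than on the animal itself
print("weights [pointy_ears, snow]:", w)

# A "skier" image: no pointy ears, lots of snow
skier = np.array([0.0, 1.0])
print("P(wolf | skier):", 1 / (1 + np.exp(-skier @ w)))
```

On this synthetic data the snow weight dominates, so the model confidently labels the skier a wolf even though a human looking at ears or eyes never would, which is exactly the kind of context-specific shortcut that makes out-of-distribution behavior hard to certify.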
Q: What skills do future AI designers need?
Q: This project takes a different approach to Ph.D. education. Can you explain?
A: Ph.D. programs often emphasize technical excellence and depth in a narrow focus area.
A group of George Washington University Ph.D. fellows met with members of the Virginia Task Force 1, a domestic and international disaster response resource sponsored by the Fairfax County Fire and Rescue Department. They spoke to experts to learn about the different elements that go into a successful search and rescue operation and potential opportunities for AI to support their operations.
A: The first two weeks of our bootcamp focused on immersing the fellows in the real-world messiness of implementing AI in operational systems. This year, we identified three sites that varied in their level of engagement with and adoption of AI tools, as well as in the safety-criticality of their application area.
We met with Comcast’s AI research division to discuss the opportunities and risks of implementing AI across their technology platforms, for example, with voice-assisted search via TV remote controls and home security. We then visited the MITRE Corporation, a federally funded research and development center and leader in air traffic safety and associated regulation. We toured several of their labs and had a chance for informal discussion with their technologists focused on evaluating the adoption of new AI and machine-learning tools in their systems.
Finally, we visited the Fairfax County Urban Search and Rescue training site, where the tools tend to be less technologically advanced, but there’s an interest in exploring advanced decision-support systems that could improve their ability to rescue victims from collapsed structures. As a result, the visit focused more on learning about their context and probing potential research opportunities. As part of the bootcamp, we spent time collectively digesting what we’d learned (and were inspired by), and what that means for the types of problems we wanted to work on.
Q: What makes GW uniquely situated to do this kind of work and training?
A: When I talk to prospective students, I always emphasize the value of being a small, tight-knit community in a world-class city. One of the advantages of our small faculty size is that we have more opportunities to interact. In a larger school, Dr. Pless and I might never have connected, and that connection, and the research conversations it has spawned, have been among the most fun and rewarding parts of this project.