For the first time in its 26-year history, The Web Conference was held online in 2020, with content streamed interactively over both the Zoom and Amazon Chime platforms.
Presentations at the conference included a tutorial from Krishnaram Kenthapadi, a principal scientist on the Amazon Web Services machine learning team, and industry colleagues from LinkedIn and Fiddler Labs. The nine-part tutorial, Explainable AI in Industry, first covers theoretical underpinnings and then turns to several industry case studies.
AI has the potential to make a large impact in everything from mortgage loans to pharmaceutical drug discovery, so it’s important that researchers, regulators, and organizations can fully understand how machine learning models arrive at their predictions, and how reinforcement learning models come to “learn” to perform certain tasks.
“Artificial intelligence is increasingly playing an integral role in determining our day-to-day experiences,” Kenthapadi explained. “Moreover, with the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching.”
“The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models and a demand for model transparency and interpretability,” Kenthapadi continued. “In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety, such as healthcare and automated transportation, and critical applications including predictive maintenance, exploration of natural resources, and climate change modeling. As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale.”
Kenthapadi explained that the tutorial initially focuses on the need for explainable AI from business, model, and regulatory perspectives, and on tools and techniques for making explainability a central component of AI and machine learning systems.
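As one illustration of the kind of technique such tools build on (this sketch is not drawn from the tutorial itself), permutation feature importance measures how much a model's error grows when a single feature's values are shuffled, breaking its relationship with the target. The toy model, feature weights, and data below are all hypothetical:

```python
import random

# A toy "model": a fixed linear function of two features.
# Feature 0 (weight 3.0) matters far more than feature 1 (weight 0.1).
def predict(x):
    return 3.0 * x[0] + 0.1 * x[1]

random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [predict(x) for x in X]  # noiseless targets, purely for illustration

def mse(data, targets):
    return sum((predict(x) - t) ** 2 for x, t in zip(data, targets)) / len(targets)

baseline = mse(X, y)  # zero here, since targets come from the model itself

def permutation_importance(feature):
    """Increase in error when one feature's column is randomly shuffled."""
    col = [x[feature] for x in X]
    random.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(X_perm, y) - baseline

# Shuffling the heavily weighted feature should hurt much more.
imp0 = permutation_importance(0)
imp1 = permutation_importance(1)
```

Because the technique only needs model predictions, not internals, it applies to black-box models as well, which is one reason model-agnostic explanations of this general flavor appear throughout industry tooling.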
The collaborators then focus on the applications of explainability techniques within industry by presenting case studies spanning domains such as search and recommendation systems, hiring, healthcare, and lending. They present practical challenges and lessons learned in deploying explainable models within several web-scale applications.
The tutorial concludes by identifying open problems and research directions for the data-mining and machine learning communities.
The collaborators, Krishna Gade, founder and CEO of Fiddler Labs; Sahin Cem Geyik of LinkedIn; Varun Mithal, senior software engineer, data mining and machine learning, LinkedIn; Ankur Taly, head of data science, Fiddler Labs; and Kenthapadi, first presented their tutorial at last summer’s Knowledge Discovery and Data Mining (KDD) conference. Kenthapadi joined Amazon last fall and is focusing on AWS projects related to fairness, explainability, and privacy in machine learning.