Shashank is a data sciences leader with diverse experience across verticals including the CPG, retail, hi-tech and e-commerce domains. He currently heads the advanced analytics and data sciences practice of CrunchMetrics and Subex. In the past, he has worked at VMware, Amazon, Flipkart and Target, solving complex business problems using machine learning and deep learning. He has served on the program committees of several international conferences, including ICDM and MLDM, and was selected as a mentor in the Global Datathon 2018 organized by the Data Sciences Society. He has multiple publications in data science, machine learning, deep learning and image recognition in reputed international journals. He has also published two open-source Python libraries and is an active contributor to the global Python community.
Day 2 - Tech Talks 29 May
Leveraging Game Theory for Explainable AI (XAI)
Modern machine learning and deep learning models are inherently opaque, and as artificial intelligence becomes an increasing part of our daily lives, the need to trust AI-based systems with all manner of decisions and predictions is paramount. Explainable AI (XAI) refers to methods and techniques that make the results of an AI system understandable to human experts. It contrasts with the "black box" of machine learning and enables transparency. On a global level, this means understanding which features the model uses, and to what extent, when making a decision. For each individual feature, we would want to understand how it is used, depending on the values it takes. And on a local level, that is, for any individual data point, we would want to see why the model made a particular decision. This can give us more insight into where and why the model might fail.
In this talk, Shashank will discuss a game-theory-based technique for AI explainability.
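The abstract does not name the specific technique, but the best-known game-theoretic approach to local explanations is the Shapley value, which attributes a prediction to each feature as its average marginal contribution over all feature coalitions. A minimal sketch of exact Shapley attribution for a single prediction, assuming absent features are replaced by a baseline value (the `model`, `x` and `baseline` names here are illustrative, not from the talk):

```python
import itertools
import math

def shapley_values(model, x, baseline):
    """Exact Shapley values for one prediction of `model` at point `x`.

    A feature's Shapley value is its marginal contribution to the model
    output, averaged over all coalitions of the other features; features
    outside the coalition are set to `baseline` (e.g. the dataset mean).
    Exact computation is exponential in the number of features, which is
    why practical tools approximate it by sampling.
    """
    n = len(x)

    def payoff(subset):
        # Evaluate the model with only the features in `subset` "present".
        masked = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(masked)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for combo in itertools.combinations(others, r):
                s = set(combo)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(len(s))
                          * math.factorial(n - len(s) - 1)
                          / math.factorial(n))
                phi[i] += weight * (payoff(s | {i}) - payoff(s))
    return phi

# Toy "black box": a linear model, where the exact answer is known
# (phi_i = w_i * (x_i - baseline_i)), so the result can be checked.
model = lambda v: 2.0 * v[0] + 1.0 * v[1] - 3.0 * v[2]
x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# phi sums to model(x) - model(baseline): the "efficiency" property.
```

By construction the attributions decompose the prediction exactly: their sum equals the difference between the model output at `x` and at the baseline, which is the efficiency axiom that makes Shapley values attractive for explainability.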