How to explain an AI model's decisions?
Date & Location: Feb 27 at the Akamai office, Hamada 3 – Herzelia
Description: We will give an overview of existing approaches to explaining model results and present our own results in this area. We will also have a case study from our host, Akamai.
We are going to cover:
- why we need to explain a model's decisions;
- black-box vs. model-specific methods;
- black-box methods suited for images (CNN) and sequences (RNN);
- explanation of neural networks;
- explanation of XGBoost models, including our own research (see the sketch after this list).
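To give a taste of the XGBoost topic, here is a minimal sketch of explaining an XGBoost classifier with SHAP values. It is our own illustration, not the research that will be presented; the toy data, feature count, and model parameters are all assumptions.

```python
# Minimal sketch: per-feature SHAP contributions for an XGBoost classifier.
# Assumes the xgboost and shap packages are installed; the data is synthetic.
import numpy as np
import xgboost as xgb
import shap

# Hypothetical toy data: 500 samples, 5 numeric features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Contribution of each feature to the prediction for the first sample.
print(shap_values[0])
```

Each SHAP value attributes part of the prediction to one feature, which is one common way to answer "why did the model decide this?" for gradient-boosted trees.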
Case study from Akamai's security research group:
Botnets often use domain generation algorithms (DGA) to select a domain name through which bots can establish a resilient communication channel with their command and control servers. Akamai's security platforms scan over 2.2 trillion DNS requests per day and detect thousands of algorithmically generated domain names per hour, using neural networks that inspect the lexicographic structure of domain names and their access patterns by worldwide users. Akamai's data science teams are often asked to provide reasoning for the decisions made by these neural networks. In this short talk, we'll present a brief overview of the models and an analysis that was conducted to provide model explainability.
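For flavor, here is a minimal sketch, not Akamai's actual model, of a character-level neural classifier over domain names combined with a simple black-box, occlusion-style explanation. The toy domains, architecture, and masking scheme are assumptions chosen purely for illustration.

```python
# Minimal sketch: tiny character-level DGA classifier plus an occlusion-style
# explanation that scores how much each character contributes to the prediction.
import numpy as np
import tensorflow as tf

# Hypothetical toy data: a few benign and algorithmically-looking domain names.
domains = ["google", "wikipedia", "qxzvjhrtko", "lkjqwzxmvb"]
labels = np.array([0, 0, 1, 1])  # 1 = suspected DGA

chars = sorted(set("".join(domains)))
char_to_idx = {c: i + 1 for i, c in enumerate(chars)}  # 0 is reserved for padding
max_len = max(len(d) for d in domains)

def encode(domain):
    ids = [char_to_idx.get(c, 0) for c in domain]
    return ids + [0] * (max_len - len(ids))

X = np.array([encode(d) for d in domains])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars) + 1, 8),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, labels, epochs=20, verbose=0)

def occlusion_scores(domain):
    """Black-box explanation: drop in the DGA score when each character is removed."""
    base = model.predict(np.array([encode(domain)]), verbose=0)[0, 0]
    scores = []
    for i in range(len(domain)):
        masked = domain[:i] + domain[i + 1:]  # remove one character
        score = model.predict(np.array([encode(masked)]), verbose=0)[0, 0]
        scores.append(base - score)
    return list(zip(domain, scores))

print(occlusion_scores("qxzvjhrtko"))
```

Occlusion is a model-agnostic technique: it treats the network as a black box and only observes how the prediction changes when parts of the input are perturbed, which is one of the sequence-oriented methods the talk will survey.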
I (David Gruzman, NestLogic) will give an overview of the existing methods. The co-presenter is Yael Daihes, a data scientist in Akamai's enterprise security research group.
Follow this link to register.