Talk Abstract: There’s a trade-off between the complexity and the interpretability of Machine Learning algorithms. This trade-off can make it hard to build trust in such models and can affect our society as a whole. Kasia will talk about current ways of evaluating those opaque (‘black-box’) models and their caveats. Then, she’ll introduce the Local Interpretable Model-Agnostic Explanations (LIME) framework for explaining predictions of black-box learners – including text- and image-based models – using breast cancer data as a case study. Finally, she’ll discuss why using frameworks such as LIME is important not just from a technical but also from an ethical point of view.
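To give a flavour of what the talk covers, here is a minimal, illustrative sketch of the core LIME idea: explain a single prediction of a black-box classifier by fitting a proximity-weighted linear surrogate on perturbed samples around that instance. This is not the LIME framework itself (which adds discretization, feature selection, and text/image support); the model choice, kernel width, and sample counts below are all illustrative assumptions, using scikit-learn’s bundled breast cancer dataset as a stand-in for the data mentioned in the abstract.

```python
# Minimal sketch of the LIME idea (illustrative, not the real LIME library):
# explain one prediction of a black-box model via a local linear surrogate.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

data = load_breast_cancer()
X, y = data.data, data.target

# The opaque ('black-box') model whose prediction we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_instance(x, n_samples=5000, kernel_width=2.0, top_k=5, seed=0):
    rng = np.random.default_rng(seed)
    scale = X.std(axis=0)
    # 1. Perturb the instance with Gaussian noise scaled per feature.
    noise = rng.normal(size=(n_samples, X.shape[1])) * scale
    Z = x + noise
    # 2. Query the black box for the positive-class probability.
    p = black_box.predict_proba(Z)[:, 1]
    # 3. Weight perturbed samples by proximity to x (RBF kernel on
    #    standardized distance), so nearby points matter most.
    d = np.sqrt(((noise / scale) ** 2).sum(axis=1))
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable, weighted linear surrogate locally.
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    # 5. Report the locally most influential features.
    idx = np.argsort(np.abs(surrogate.coef_))[::-1][:top_k]
    return [(data.feature_names[i], surrogate.coef_[i]) for i in idx]

for name, coef in explain_instance(X[0]):
    print(f"{name}: {coef:+.4f}")
```

The surrogate’s coefficients are only meaningful near the chosen instance, which is exactly the point: LIME trades a global explanation for a faithful local one.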
Bio: Kasia Kulma holds a PhD in evolutionary biology from Uppsala University and is now a Data Scientist at Aviva. She has experience in building recommender systems, customer segmentations, and web applications, and is now leading an NLP project. She is the author of the blog R-tastic and a mentor at R-Ladies London. She is an R enthusiast interested in data (science) ethics, evidence-based medicine and general machine learning modelling.