Imagine you are working as a credit analyst at a financial institution, using a smart algorithm developed by your tech team to evaluate the creditworthiness of customers. It has undoubtedly improved efficiency, but isn't it unsettling that not even the algorithm's developers understand exactly how it evaluates a customer profile and arrives at its decisions, or, worse still, how to prevent someone from exploiting it?
This is where Model Explainability, or more precisely Model Interpretability, plays a critical role.
Interpretability can be defined as the degree to which a human can understand the cause of a decision or the degree to which a human can consistently predict a model's result. The higher the interpretability of a model, the easier it is to comprehend why certain decisions or predictions have been made.
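To make the idea concrete, here is a minimal sketch of a highly interpretable model: a linear credit-scoring rule with hand-chosen, purely hypothetical weights and features (none of these numbers come from any real lender). Because each feature's contribution to the final score can be read off directly, a human can explain exactly why an applicant was approved or declined.

```python
# Hypothetical, hand-chosen weights -- for illustration only.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0  # approve if the total score meets this cutoff

def score(applicant):
    """Return the total score and each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

applicant = {"income": 4.0, "debt": 1.5, "years_employed": 2.0}
total, parts = score(applicant)

print(f"approved={total >= THRESHOLD}")
for feature, contribution in parts.items():
    print(f"  {feature}: {contribution:+.2f}")
```

Every decision decomposes into per-feature contributions (here, income adds +2.00, debt subtracts -1.20, tenure adds +0.60), which is precisely the transparency a deep black-box model lacks.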
On certain occasions, only the prediction itself matters and the "why" can safely be ignored; Netflix recommending a particular movie is one example. In other situations, however, the "why" can be a matter of life and death. Imagine a self-driving car hitting a horse-drawn carriage: after all, who would have thought to account for horse carts in the model?