Generalized Additive Model

In many social science and business problems, it is often more important to explain why a phenomenon happens than to improve the model's ability to predict it. An interpretable model is therefore crucial for understanding how different factors relate to the outcome of interest. Interpretability also matters in highly regulated business environments, such as loan approval decisions. Even in situations where prediction accuracy matters more than the "why", an interpretable model can help debug more complicated models and guide new approaches to feature engineering and data preprocessing.

In this context, generalized additive models (GAMs) offer a middle ground between simple models, such as those we fit with linear regression, and more sophisticated machine learning models like neural networks, which usually deliver better predictive performance than simple models. GAMs can also be applied to a variety of tasks: regression, classification, and binary-choice modeling.
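As a concrete illustration, here is a minimal sketch of fitting a regression GAM in Python using the pyGAM library (an assumed choice of library; mgcv in R or statsmodels would work similarly). The idea behind a GAM is that each feature contributes to the prediction through its own smooth function, and those contributions are added together, which is exactly what makes the model easy to interpret.

```python
# Minimal sketch of fitting a GAM for regression, assuming the pyGAM library
# is installed (pip install pygam); mgcv in R or statsmodels are alternatives.
import numpy as np
from pygam import LinearGAM, s

# Synthetic data: the response depends nonlinearly but additively on two features.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * np.log1p(X[:, 1]) + rng.normal(scale=0.2, size=500)

# Each s(i) term fits a smooth spline for feature i; the terms are summed.
gam = LinearGAM(s(0) + s(1)).fit(X, y)

# The per-feature smooths can be inspected directly, which is where the
# interpretability comes from: one partial-effect curve per feature.
gam.summary()
print(gam.predict(X[:5]))
```

For classification or binary-choice problems, pyGAM's LogisticGAM can be swapped in the same way, keeping the additive, feature-by-feature structure.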

Learn more - My Medium Post

Written on July 9, 2020