Feature importance and model interpretation in Python
In this practical course, we focus on feature importance and model interpretation in supervised machine learning, using the Python programming language.
Feature importance helps us better understand the information in our data and lets us reduce the dimensionality of a problem by keeping only the relevant variables and discarding the uninformative ones. A common dimensionality-reduction technique based on feature importance is Recursive Feature Elimination (RFE).
Model interpretation helps us correctly analyze and interpret a model's results. A common approach to model interpretation is the SHAP technique.
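As a taste of what the course covers, tree ensembles in scikit-learn expose model-based importances directly through the `feature_importances_` attribute. A minimal sketch (the synthetic dataset and model choice are illustrative, not from the course material):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic dataset: 8 features, only 3 of which are informative
X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

# Impurity-based importances: one non-negative value per feature, summing to 1
importances = model.feature_importances_
print(importances)
```

The informative features typically receive the largest importance values, which is exactly the signal used to discard uninformative variables.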
With this course, you are going to learn:
How to calculate feature importance according to a model
The SHAP technique for calculating feature importance for any model
Recursive Feature Elimination for dimensionality reduction, with and without cross-validation
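The last point can be sketched with scikit-learn's `RFE` and `RFECV` classes, which implement Recursive Feature Elimination without and with cross-validation respectively (the dataset and estimator below are illustrative choices):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, RFECV
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           n_redundant=2, random_state=0)
estimator = LogisticRegression(max_iter=1000)

# Plain RFE: we decide in advance how many features to keep
rfe = RFE(estimator, n_features_to_select=3).fit(X, y)

# RFECV: cross-validation decides how many features to keep
rfecv = RFECV(estimator, step=1, cv=5).fit(X, y)

print(rfe.support_)       # boolean mask of selected features
print(rfecv.n_features_)  # number of features chosen by cross-validation
```

Both variants repeatedly fit the estimator and drop the least important feature(s) at each step; the difference is only in how the final feature count is chosen.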
All the lessons of this course start with a brief introduction and end with a practical example in the Python programming language using its powerful scikit-learn library. The environment used is Jupyter, a standard in the data science industry. All the Jupyter notebooks are downloadable.
This course is part of my Supervised Machine Learning in Python online course, so you’ll find some lessons that are already included in the larger course.