Stacking and Voting Classifiers with Scikit Learn

In the field of machine learning, ensemble methods are widely used to enhance the performance of individual models by combining predictions from multiple base classifiers. Two popular approaches for ensemble learning are stacking and voting classifiers. In this article, we will introduce these techniques and demonstrate how to implement them using Scikit Learn, a powerful machine learning library.

Stacking Classifiers

Stacking, also known as stacked generalization, is a method that combines predictions from multiple base classifiers by training a meta-classifier on their outputs. The idea behind stacking is to leverage the strengths of different models and build a more accurate and robust final model.

The following steps outline the process of creating a stacking classifier using Scikit Learn:

  1. Create base classifiers: Start by selecting a set of base classifiers with diverse characteristics. These can be any of the machine learning algorithms available in Scikit Learn, such as logistic regression, decision trees, or support vector machines.

  2. Split the data: Divide the available training data into two parts. The first part will be used to train the base classifiers; the second will be used to create the input for the meta-classifier.

  3. Train the base classifiers: Fit each base classifier on the first part of the training data, then use it to generate predictions for the second part.

  4. Create input for the meta-classifier: Combine the predictions from step 3 into a new training set, with each classifier's predictions serving as a new feature.

  5. Train the meta-classifier: Train a meta-classifier (e.g., logistic regression or a random forest) on the transformed training set from step 4. This classifier learns how to combine the predictions from the base classifiers.

  6. Make predictions: Once trained, the stacking classifier can be used to make predictions on new, unseen data.
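The steps above can be sketched with Scikit Learn's StackingClassifier. Note that instead of the single holdout split described in steps 2-3, StackingClassifier builds the meta-classifier's input via internal cross-validation (the cv parameter), which is a more robust variant of the same idea. The dataset and classifier choices here are illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Illustrative dataset; any classification dataset works.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Step 1: a set of base classifiers with diverse characteristics.
base_classifiers = [
    ("lr", LogisticRegression(max_iter=5000)),
    ("dt", DecisionTreeClassifier(random_state=42)),
    ("svc", SVC(probability=True, random_state=42)),
]

# Steps 2-5: StackingClassifier trains the base classifiers, generates
# out-of-fold predictions via cross-validation, and fits the meta-classifier
# (final_estimator) on those predictions.
stack = StackingClassifier(
    estimators=base_classifiers,
    final_estimator=LogisticRegression(max_iter=5000),
    cv=5,
)
stack.fit(X_train, y_train)

# Step 6: make predictions on unseen data.
print(f"Stacking accuracy: {stack.score(X_test, y_test):.3f}")
```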

Voting Classifiers

Voting classifiers are another type of ensemble technique where multiple base classifiers are used to predict the class labels of unseen data. However, instead of combining the predictions through a meta-classifier like in stacking, voting classifiers take a majority vote or an average of the predictions from the base classifiers.

Scikit Learn provides two types of voting classifiers: hard voting and soft voting.

  • Hard Voting: In hard voting, each base classifier predicts a class label, and the majority class label is selected as the final prediction.

  • Soft Voting: In soft voting, each base classifier provides a probability distribution over all possible classes. The class label with the highest average probability across all classifiers is selected as the final prediction.
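The difference between the two schemes can be seen with a small numeric sketch (the probabilities below are made up for illustration). Hard voting follows the majority of individual label predictions, while soft voting lets a very confident classifier outweigh two narrow votes:

```python
import numpy as np

# Hypothetical probabilities from three classifiers for one sample,
# over two classes [A, B].
probs = np.array([
    [0.40, 0.60],  # classifier 1 narrowly votes B
    [0.45, 0.55],  # classifier 2 narrowly votes B
    [0.90, 0.10],  # classifier 3 confidently votes A
])

# Hard voting: majority of individual label predictions -> class B (index 1).
hard_winner = np.bincount(probs.argmax(axis=1)).argmax()

# Soft voting: average the probabilities, then take the maximum -> class A
# (index 0), because classifier 3's confidence outweighs the two narrow votes.
soft_winner = probs.mean(axis=0).argmax()

print(hard_winner, soft_winner)
```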

To use voting classifiers in Scikit Learn, follow these steps:

  1. Create base classifiers: Select a set of base classifiers, as in the stacking approach.

  2. Define the voting classifier: Create a voting classifier object, specifying the set of base classifiers and the voting method (either 'hard' or 'soft').

  3. Train the voting classifier: Fit the voting classifier on the training data.

  4. Make predictions: Use the trained voting classifier to make predictions on new, unseen data.
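These steps map directly onto Scikit Learn's VotingClassifier. The example below uses soft voting, which requires every base classifier to expose predict_proba (hence probability=True for the SVC); the dataset and estimators are illustrative choices:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Step 1: base classifiers.
estimators = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),  # needed for soft voting
]

# Step 2: define the voting classifier; voting="hard" would take a
# majority vote of predicted labels instead of averaging probabilities.
voter = VotingClassifier(estimators=estimators, voting="soft")

# Step 3: train on the training data.
voter.fit(X_train, y_train)

# Step 4: make predictions on unseen data.
print(f"Soft-voting accuracy: {voter.score(X_test, y_test):.3f}")
```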


Ensemble methods such as stacking and voting classifiers can significantly improve the predictive performance of machine learning models. In this article, we have explored the concepts of stacking and voting classifiers and demonstrated how to implement them using the Scikit Learn library. These techniques enable us to combine the strengths of different models and create more accurate and robust predictions.
