Seenivasa Ramadurai
Unleash the Power of Boosting: A Practical Guide to Ensemble Learning - PART III

Boosting with AdaBoost: A Deep Dive into Handwritten Digit Recognition

Welcome back! In our previous blog, we introduced Bagging as one of the essential ensemble learning techniques. Today, we take a closer look at Boosting, a powerful technique that turns weak learners into strong models. Specifically, we will use the AdaBoost algorithm on the famous handwritten digits dataset from scikit-learn, and then build a FastAPI service to serve the trained model for real-time predictions.

What is Boosting?

Boosting is an ensemble learning technique that works by combining multiple weak learners to create a strong predictive model. The process involves training a sequence of models, each one trying to correct the mistakes made by the previous model. Over time, this sequential correction leads to a highly accurate model.

In Boosting, each model is trained on a weighted dataset, where misclassified data points receive higher weights, forcing the next model to focus on these difficult cases. This iterative correction boosts the model's performance, making it more accurate compared to individual weak learners.
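
To make the reweighting idea concrete, here is a minimal sketch of the classic binary AdaBoost loop. This is illustrative only: the synthetic dataset, the ten rounds, and the variable names are arbitrary choices, and we will rely on scikit-learn's AdaBoostClassifier for the real work below.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Toy binary problem with labels mapped to {-1, +1}
X, y = make_classification(n_samples=200, random_state=42)
y = 2 * y - 1

n = len(y)
w = np.full(n, 1 / n)          # start with uniform sample weights
learners, alphas = [], []

for _ in range(10):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10)  # weighted error rate
    alpha = 0.5 * np.log((1 - err) / err)  # this learner's vote weight
    w *= np.exp(-alpha * y * pred)         # up-weight the misclassified points
    w /= w.sum()                           # renormalize to a distribution
    learners.append(stump)
    alphas.append(alpha)

# Final prediction: sign of the weighted vote across all weak learners
ensemble = np.sign(sum(a * m.predict(X) for a, m in zip(alphas, learners)))
print(f"Training accuracy: {(ensemble == y).mean():.2f}")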

Boosting: The AdaBoost Algorithm

AdaBoost (Adaptive Boosting) is one of the most widely used boosting algorithms. It combines the predictions of weak learners (typically shallow decision trees) and adjusts the weights of misclassified data points to focus the learning process on hard-to-classify instances.

We will implement the AdaBoost classifier using the handwritten digits dataset from scikit-learn. Let's break down the entire process from training the model to setting up a FastAPI service for real-time predictions.

1. Dataset Overview

We use the handwritten digits dataset from scikit-learn, which contains 8x8 pixel images of digits from 0 to 9. Each image is flattened into a 64-element vector representing the pixel intensity values.

We will train our AdaBoost classifier to predict the digits based on these pixel values.
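
A quick exploratory snippet (separate from the training code below) confirms this structure:

from sklearn.datasets import load_digits

digits = load_digits()
print(digits.data.shape)    # (1797, 64): 1797 images, each flattened to 64 pixel values
print(digits.images.shape)  # (1797, 8, 8): the same images in their original 2-D form
print(digits.target[:10])   # the labels, digits 0 through 9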

2. Code Implementation: AdaBoost with the Handwritten Digits Dataset

Step 1: Import Libraries

We begin by importing the necessary libraries, including those for model training, evaluation, and FastAPI.

from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_digits       
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
from fastapi import FastAPI
from pydantic import BaseModel, Field
import pickle
import uvicorn
import numpy as np

Step 2: Load the Dataset

We load the digits dataset and split it into training and testing sets.

# Load the digits dataset
digits = load_digits()
X = digits.data
y = digits.target

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

Step 3: Display Test Samples

We randomly select 20 test samples and display the corresponding images.

# Display test samples and their indices
test_indices = np.random.choice(len(X_test), 20, replace=False)
print("\nSample Test Digits and their indices:")
for idx in test_indices:
    print(f"Index: {idx}, Digit: {y_test[idx]}")

# Display sample digits from test set
plt.figure(figsize=(15, 12))
for i, idx in enumerate(test_indices):
    plt.subplot(5, 4, i+1)
    plt.imshow(X_test[idx].reshape(8, 8), cmap='gray', interpolation='nearest')
    plt.axis('off')
    plt.title(f'Digit: {y_test[idx]}', fontsize=12, pad=8)
plt.suptitle('Sample Digits from Test Dataset', fontsize=16, y=0.95)
plt.tight_layout()
plt.show()


Step 4: Train AdaBoost Classifier

We initialize the AdaBoost classifier with a depth-1 DecisionTreeClassifier (a decision stump) as the weak learner and train the model.

# Initialize the base Decision Tree classifier
tree = DecisionTreeClassifier(max_depth=1, random_state=42)

# Initialize the AdaBoost classifier (the `estimator` argument requires
# scikit-learn >= 1.2; older releases call it `base_estimator`)
boosting = AdaBoostClassifier(estimator=tree, n_estimators=100, random_state=42)

# Train the AdaBoost classifier
boosting.fit(X_train, y_train)
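
Because boosting is sequential, we can watch accuracy improve as estimators are added. A small optional check using scikit-learn's staged_predict, which yields the ensemble's predictions after each boosting round:

# Optional: track test accuracy round by round
staged_scores = [accuracy_score(y_test, pred) for pred in boosting.staged_predict(X_test)]
print(f"Accuracy after 1 round: {staged_scores[0]:.2f}")
print(f"Accuracy after {len(staged_scores)} rounds: {staged_scores[-1]:.2f}")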

Step 5: Evaluate the Model

We evaluate the model using accuracy, classification report, and a confusion matrix.

# Make predictions on the test set
y_pred = boosting.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print(f"AdaBoost classifier accuracy: {accuracy:.2f}")
print("\nClassification Report:")
print(classification_report(y_test, y_pred))

# Visualize the confusion matrix
conf_matrix = confusion_matrix(y_test, y_pred)
sns.heatmap(conf_matrix, annot=True, fmt='d', cmap='Blues')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title('Confusion Matrix')
plt.show()

Step 6: Save the Trained Model

We save the trained AdaBoost model using pickle so that it can be loaded later for predictions via the FastAPI API.

# Save the trained model
with open('boosting_model.pkl', 'wb') as f:
    pickle.dump(boosting, f)

Step 7: Sample Prediction

We take the first five test samples, make predictions, and compare them against the true labels.

# Sample prediction on the first five test images
new_data = X_test[:5]
prediction = boosting.predict(new_data)
print(f"Predicted Values: {prediction}")
print(f"Actual Values: {y_test[:5]}")

3. FastAPI Implementation

Now, let's expose the trained model via a FastAPI service for real-time predictions. The API will have a root endpoint plus a /predict endpoint that accepts the 64 pixel values of a digit image.

Step 1: Define the Input Model

class DigitInput(BaseModel):
    """Pydantic model for digit recognition input."""
    pixel_values: list[float] = Field(
        ...,
        description="Array of 64 pixel values (8x8 image flattened)",
        min_length=64,   # Pydantic v2; on Pydantic v1 use min_items/max_items
        max_length=64,
    )
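
As a quick sanity check, the model accepts exactly 64 values and rejects anything else (a hypothetical snippet):

# Valid: exactly 64 pixel values
sample_input = DigitInput(pixel_values=[0.0] * 64)

# Invalid: this would raise a ValidationError (only 10 values)
# DigitInput(pixel_values=[0.0] * 10)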

Step 2: Implement FastAPI Routes

app = FastAPI(
    title="Digit Recognition API",
    description="API for recognizing handwritten digits using AdaBoost classifier",
    version="1.0.0"
)

@app.get("/")
def read_root():
    return {"message": "Welcome to the Digit Recognition API"}

@app.post("/predict")
def predict(data: DigitInput):
    """Predict the digit from input pixel values."""
    try:
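        # Load the trained model from disk. Reloading on every request keeps the
        # example simple; in production you would load it once at startup.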
        with open('boosting_model.pkl', 'rb') as f:
            boosting = pickle.load(f)

        # Convert input data to correct shape
        input_data = np.array(data.pixel_values).reshape(1, -1)
        prediction = boosting.predict(input_data)

        return {
            "predicted_digit": int(prediction[0]),
            "confidence": "high" if max(boosting.predict_proba(input_data)[0]) > 0.8 else "low",
        }
    except Exception as e:
        return {"error": str(e)}

Step 3: Run the FastAPI Server

To run the FastAPI server, use uvicorn (this assumes the code above is saved as main.py):

uvicorn main:app --host 0.0.0.0 --port 8000
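
With the server running, you can exercise the /predict endpoint. A minimal client sketch (assuming the requests package is installed and X_test is available from the training script):

import requests

# Send one flattened 8x8 test image to the prediction endpoint
sample = X_test[0].tolist()  # 64 pixel values
response = requests.post(
    "http://localhost:8000/predict",
    json={"pixel_values": sample},
)
print(response.json())  # e.g. {"predicted_digit": ..., "confidence": ...}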

4. Conclusion

In this blog, we have implemented AdaBoost for handwritten digit recognition using the digits dataset. We also built a FastAPI application to expose the trained model for real-time predictions. By using boosting, we were able to significantly improve model performance over individual weak learners.

Boosting is an essential technique in the machine learning toolbox, and understanding its application can give you an edge in building high-performance models.

Interpreting AdaBoost Classifier Performance on Handwritten Digits

(Figure: classification report for the AdaBoost classifier on the test set.)

The AdaBoost classifier achieved an accuracy of 80% on the handwritten digits dataset, a solid result. Let's dive deeper into the classification report.

Classification Report Highlights:

  • Precision: how many of the predicted digits were correct. For example, precision for digit 0 is 0.96, meaning 96% of the samples predicted as 0 actually were 0s (see the sketch after this list for how to recompute these metrics).

  • Recall: how many of the actual digits were identified correctly. For digit 8, recall is 0.98, meaning 98% of the actual 8s were correctly predicted.

  • F1-Score: the harmonic mean of precision and recall. For digit 6, the F1 score is 0.93, indicating a good balance between the two for that digit.

  • Support: the number of actual occurrences of each digit in the test set. For example, digit 0 appeared 53 times.
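
These per-digit numbers can be recomputed directly from the conf_matrix built earlier; in scikit-learn's convention, rows are actual digits and columns are predicted ones. A minimal sketch:

# Derive per-digit precision, recall, and F1 from the confusion matrix
true_positives = conf_matrix.diagonal()
precision = true_positives / conf_matrix.sum(axis=0)  # column sums = predicted counts
recall = true_positives / conf_matrix.sum(axis=1)     # row sums = actual counts
f1 = 2 * precision * recall / (precision + recall)
for digit in range(10):
    print(f"Digit {digit}: precision={precision[digit]:.2f}, recall={recall[digit]:.2f}, F1={f1[digit]:.2f}")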

Key Insights:

  • The classifier performed best on digits 0, 6, and 7, achieving high precision and recall.
  • For digits 2, 4, and 5, the model faced challenges, with lower recall and F1 scores.
  • Overall, the model demonstrated fairly consistent performance, but struggled with a few digits, most notably 2, where recall was only 0.49.

Sample Prediction:

The model correctly predicted most of the sample digits from the test set (6, 9, 3, 7), but incorrectly predicted digit 2 as 8. This highlights areas for improvement, as the model might confuse digits with similar visual features.

AdaBoost vs. CNN for Image Classification:

While AdaBoost provides solid performance, Convolutional Neural Networks (CNNs) are the go-to model for image-based tasks like digit recognition. CNNs automatically learn spatial patterns, offering superior performance on image data. If you're working with images and require higher accuracy, CNNs would be a better choice due to their ability to capture more complex patterns.

In this case, a CNN could potentially outperform AdaBoost by recognizing the intricate features of handwritten digits more effectively.

Thanks
Sreeni Ramadorai
