Seenivasa Ramadurai

End-to-End Guide: Building, Containerizing, and Deploying ML Models with Docker (Desktop)

In this blog, we’ll dive into the journey of building and deploying a machine learning (ML) model. But before we get into the details, let’s start with a simple question:

What exactly is ML?

Machine Learning (ML) is a branch of artificial intelligence (AI) where computers learn to recognize patterns and make decisions without being explicitly programmed for every scenario. Instead of writing rules for the machine to follow, you feed it lots of data, and it figures out the rules on its own.

Here’s how it works in simpler terms:

  1. Input Data: You give the machine examples of what you want it to learn. For instance, a bunch of pictures labeled "cat" and "not cat."
  2. Training: The machine analyzes the data and builds a model—a mathematical representation of the patterns it found in the data.
  3. Prediction: You give it new, unlabeled data (like a picture of a dog), and it tries to decide, based on its training, whether it’s a "cat" or "not cat."
  4. Feedback: If it gets it wrong, you adjust the training to help it improve.

In essence, ML is like teaching a dog tricks. It doesn’t fully understand why it’s doing something, but it learns through repetition and rewards!

Machine learning can be broadly categorized into three types:

  1. Supervised Learning: In this approach, we provide the model with both data and labels. The model learns to make predictions based on these labeled examples.

  2. Unsupervised Learning: Here, the model is given data without labels or targets. It identifies patterns or structures within the data and groups similar data points together.

  3. Reinforcement Learning: In this type of learning, the model interacts with an environment and learns through trial and error, receiving rewards or penalties based on its actions.

Now, let’s walk through the end-to-end process of model building and deployment. If we break down the typical machine learning workflow, a large portion of the time is spent on data collection, preprocessing, and feature engineering. The data we collect will depend on the problem we're trying to solve or predict using ML or deep learning models.

However, in this blog, we’ll skip the stages of Data Collection, Data Cleaning, Feature Engineering, and Exploratory Data Analysis (EDA). Instead, we’ll jump directly into building the model. To make things simpler, we’ll use a sample dataset from the well-known scikit-learn library, which includes a variety of datasets. For this example, we’ll focus on the IRIS dataset. Using this dataset saves us time, allowing us to dive straight into building a supervised classification model.

About the IRIS Dataset:

The IRIS dataset consists of 150 rows and 4 features (or columns), which are:

  • Sepal Length (cm)
  • Sepal Width (cm)
  • Petal Length (cm)
  • Petal Width (cm)

The dataset also contains target labels:

  • Setosa
  • Versicolor
  • Virginica

Our goal is to build a model that can classify the IRIS flower based on these features and predict whether the flower is Setosa, Versicolor, or Virginica.

Since this is a classification problem, we will use a simple algorithm called Logistic Regression and utilize the scikit-learn library in Python. As with any ML model, we need training, testing, and validation datasets. We’ll use the train_test_split method from scikit-learn to split our data into training and testing sets.

Here is the syntax for the train_test_split() method:

train_test_split(*arrays, test_size=None, train_size=None, random_state=None, shuffle=True, stratify=None)

For more details on how to use train_test_split, check out the official documentation here: train_test_split Documentation.
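For example, assuming a feature matrix X and a label vector y (we define both in Step 1 below), a minimal split that holds out 20% of the rows for testing looks like this:

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)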

Now, let’s move ahead and start building our model!

Step 1:

This code demonstrates how to load the IRIS dataset, train a machine learning model using Logistic Regression, evaluate its performance, and save the trained model for later use. It also shows how to combine the dataset features with their corresponding target labels and display the data in a readable format.

[Screenshot: Python code that loads the IRIS dataset, trains a Logistic Regression model, evaluates it, and saves it with joblib]
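The screenshot is not reproduced here, so below is a minimal sketch of the code it contains, reconstructed from the step-by-step breakdown that follows. The columns variable and the use of dataset_iris.feature_names are assumptions added to keep the sketch self-contained.

from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
import joblib
import numpy as np
import pandas as pd

# Load the IRIS dataset
dataset_iris = load_iris()
X = dataset_iris.data
y = dataset_iris.target
feature_names = dataset_iris.feature_names
print(dataset_iris.data.shape)
print(dataset_iris.target_names)

# Split the data: 80% for training, 20% for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)

# Train the Logistic Regression model and predict on the test set
lr = LogisticRegression()
lr.fit(X_train, y_train)
pred = lr.predict(X_test)

# Evaluate the model's accuracy
accuracy = accuracy_score(y_test, pred)
print(f"Model Accuracy {accuracy:.2f}%")

# Predict the class of a single sample from the test set
sample = X_test[0]
prediction = lr.predict([sample])
print(f"Prediction for the sample data point is {dataset_iris.target_names[prediction]}")

# Save the trained model along with the feature names
joblib.dump((lr, feature_names), "iris_model.pkl")

# Combine features and named targets into a DataFrame for readability
y_named = dataset_iris.target_names[y]
data_combined = np.column_stack((X, y_named))
columns = feature_names + ["species"]  # assumed column layout
df_combined = pd.DataFrame(data_combined, columns=columns)
print(df_combined)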

Here’s a step-by-step breakdown:

  1. Import Necessary Libraries:

    • load_iris: Loads the IRIS dataset from sklearn.datasets.
    • accuracy_score: Used to evaluate the model’s performance by comparing the predicted values with the actual ones.
    • train_test_split: Splits the data into training and testing sets.
    • LogisticRegression: The model used for training and prediction.
    • joblib: Used to save the trained model for future use.
    • numpy: Used for array operations.
    • pandas: Used to display and manipulate the data in a tabular format.
  2. Load the IRIS Dataset:

    • dataset_iris = load_iris() loads the IRIS dataset.
    • X = dataset_iris.data assigns the feature data (sepal length, sepal width, petal length, and petal width) to X.
    • y = dataset_iris.target assigns the target labels (species) to y.
    • print(dataset_iris.data.shape) prints the shape of the dataset to show the number of samples and features.
    • print(dataset_iris.target_names) prints the names of the target classes (Setosa, Versicolor, Virginica).
  3. Split the Data into Training and Testing Sets:

    • The data is split into training and testing sets using train_test_split, with 80% of the data for training and 20% for testing (test_size=0.20).
  4. Train the Logistic Regression Model:

    • lr = LogisticRegression() creates an instance of the Logistic Regression model.
    • lr.fit(X_train, y_train) trains the model using the training data.
    • pred = lr.predict(X_test) generates predictions on the test data.
  5. Evaluate the Model’s Accuracy:

    • accuracy = accuracy_score(y_test, pred) calculates the accuracy of the model by comparing the predicted values with the actual ones.
    • print(f"Model Accuracy {accuracy:.2f}%") displays the model's accuracy as a percentage with two decimal places.
  6. Make a Prediction on a Single Sample:

    • A sample from the test data (sample = X_test[0]) is selected.
    • prediction = lr.predict([sample]) predicts the class label for this single sample.
    • print(f"Prediction for the sample data point is {dataset_iris.target_names[prediction]}") outputs the predicted class name for the sample.
  7. Save the Trained Model:

    • joblib.dump((lr, feature_names), "iris_model.pkl") saves the trained model along with the feature names into a file named iris_model.pkl.
  8. Combine Data and Target Labels:

    • y_named replaces numeric target labels with the corresponding class names (Setosa, Versicolor, Virginica).
    • data_combined = np.column_stack((X, y_named)) combines the features and target labels into a single array.
    • df_combined = pd.DataFrame(data_combined, columns=columns) creates a pandas DataFrame for better readability.
    • print(df_combined) displays the combined data with both features and target labels in a tabular format.

Output:

[Screenshot: console output showing the dataset shape, target names, model accuracy, the sample prediction, and the combined DataFrame]

Classification Report

[Screenshot: classification report for the Setosa, Versicolor, and Virginica classes]

The classification report provides detailed metrics for evaluating the performance of a classification model for each class. Let’s break it down:

Metrics for Each Class:

Setosa, Versicolor, Virginica:
These are the three classes (species of flowers in the IRIS dataset).

Explanation of the Classification Report:

The classification report provides detailed metrics for evaluating the performance of a classification model for each class.

Why Use Accuracy?

Accuracy is a straightforward and intuitive metric to evaluate model performance. In this report:

  • The dataset is balanced, meaning the classes have roughly equal numbers of instances. This makes accuracy a meaningful measure.
  • Accuracy provides a quick understanding of how well the model performs overall, making it a useful high-level metric.

However, in cases where the dataset is imbalanced, precision, recall, and F1-score become more important, as accuracy alone can be misleading. For example, in an imbalanced dataset, a model that predicts only the majority class can achieve high accuracy without being effective for the minority class.

For more about accuracy and classification_report, please visit the URL below:

https://scikit-learn.org/1.5/modules/generated/sklearn.metrics.classification_report.html
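For reference, a report like the one in the screenshot can be produced with a couple of extra lines; this sketch reuses the y_test, pred, and dataset_iris variables from the Step 1 code:

from sklearn.metrics import classification_report

print(classification_report(y_test, pred, target_names=dataset_iris.target_names))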

Step 2:

Our model is now ready, and with an accuracy of 93%, I think it's performing quite well. The next step is to expose it as a REST API so that external users can send sample data to test or interact with the model. This is similar to deploying it into production, making it accessible for real-world use.

To build a REST API and expose our model to the external world, we’ll be using FastAPI. To receive the model features as input, we’ll define a Python class called IrisInputData, which will be derived from Pydantic’s base model, as shown below.

[Screenshot: app.py containing the FastAPI app and the IrisInputData Pydantic model]
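Since the app.py screenshot is not reproduced here, the following is a minimal sketch of what such an app could look like. The IrisInputData field names and the /predict route are assumptions for illustration; adjust them to match your own code. It loads the model saved in Step 1 and exposes a single POST endpoint.

from fastapi import FastAPI
from pydantic import BaseModel
import joblib
import numpy as np

app = FastAPI()

# Load the trained model and feature names saved in Step 1
model, feature_names = joblib.load("iris_model.pkl")
target_names = ["setosa", "versicolor", "virginica"]

class IrisInputData(BaseModel):
    # Field names are illustrative; use whatever your app.py defines
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(data: IrisInputData):
    # Arrange the four features in the same order used during training
    features = np.array([[data.sepal_length, data.sepal_width,
                          data.petal_length, data.petal_width]])
    prediction = model.predict(features)[0]
    return {"prediction": target_names[int(prediction)]}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

Running python app.py (or uvicorn app:app --host 0.0.0.0 --port 8000) starts the API on port 8000.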

Now run app.py, and you will see the following message in the console output:

[Screenshot: console output after starting app.py]

API Swagger docs

[Screenshot: the API's Swagger docs at /docs]

Let's test our model by passing the IrisInputData as shown below. I have taken the input values from one of the rows in the model output above.

[Screenshot: Swagger UI request body with sample IrisInputData values]
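As a sketch, the same request can also be sent from the command line with curl. The JSON keys here follow the field names assumed in the app.py sketch above, and the values are the first row of the IRIS dataset (a Setosa sample):

curl -X POST "http://localhost:8000/predict" -H "Content-Type: application/json" -d '{"sepal_length": 5.1, "sepal_width": 3.5, "petal_length": 1.4, "petal_width": 0.2}'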

Here is the model prediction:

[Screenshot: Swagger UI response showing the predicted class]

Step 3:

Now, we're ready to deploy our model in a production environment where it can scale based on the number of incoming requests. To do this, we'll containerize our API and deploy it to Docker Desktop. The key difference is that when we later move our workload (API) to Kubernetes (K8s), we'll need to create deployment files, set up services, replicas, and deploy it as a K8s pod. But for now, we'll keep it simple by using Docker Desktop to deploy our model, allowing us to test and scale the API locally.

First, we need to create a Docker image to package our API code and its dependencies. To do this, we'll write a Dockerfile with the following instructions:

[Screenshot: Dockerfile contents]
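The Dockerfile screenshot is not reproduced here; a minimal sketch of such a Dockerfile might look like the following. The base image, file names, and requirements.txt are assumptions, so adapt them to your project layout:

FROM python:3.11-slim
WORKDIR /app
# Install the Python dependencies first to take advantage of layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the API code and the saved model into the image
COPY app.py iris_model.pkl ./
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]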

Next, we'll use the docker build command to build the image from the Dockerfile, and then use the docker run command to start our application as a Docker container.

Note: Before running the docker build command, ensure that Docker Desktop is downloaded and installed on your system.

[Screenshot: docker build command]
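The build command in the screenshot would be along these lines; the image name sreeni-ml-model-v1 matches the one used in the docker run command below, while the tag and build context path are assumptions:

docker build -t sreeni-ml-model-v1:latest .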

Here is the output of the docker build command:

[Screenshot: docker build output]

Here is the Docker image that has been built and saved to the local Docker registry.

[Screenshot: the sreeni-ml-model-v1 image listed in the local Docker registry]

Next, let's run this image as a Docker container and expose it to the outside world (allowing access from the host computer) by mapping container port 8000 to host port 8001.

Here is the docker run command we need to run:

docker run -d -p 8001:8000 --name sreen-iris-ml-api sreeni-ml-model-v1:latest

Explanation:

-d: Runs the container in detached mode (in the background). Pass -it instead for interactive mode.
-p 8001:8000: Maps port 8000 inside the container to port 8001 on the host machine.
--name sreen-iris-ml-api: Assigns the name sreen-iris-ml-api to the container.
sreeni-ml-model-v1:latest: The image name with the tag latest.

The screenshot below shows that a container based on the sreeni-ml-model-v1 image is running.

[Screenshot: the sreen-iris-ml-api container running in Docker Desktop]
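You can also confirm this from the command line, for example:

docker ps --filter "name=sreen-iris-ml-api"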

If you browse to http://localhost:8001/docs#/default/predict_predict_post from your local computer's browser, you will see the Swagger UI for the containerized API:

[Screenshot: Swagger UI served from the container at http://localhost:8001/docs]

In conclusion, we have successfully containerized our machine learning model and deployed it using Docker. By following the above steps to build and run the Docker image, we ensured that the API is accessible from the host machine on port 8001. This setup allows us to easily scale and manage our model, and when the time comes, we can transition to Kubernetes for further scalability in a production environment.

Conclusion

In this post, we successfully walked through the process of building, containerizing, and deploying a machine learning model using Docker. First, we built an ML model using the IRIS dataset, trained it to predict flower species based on input features like sepal length and width, petal length, and petal width. After training the model, we saved it and created an API to interact with it.

Next, we containerized the model and API by creating a Dockerfile, building the Docker image, and running the container with the necessary port mappings, making our API accessible from the host machine for local testing and development.

This process not only simplifies deployment but also lays the foundation for scaling and transitioning to Kubernetes when the workload increases. Docker offers a powerful, flexible solution for containerizing applications and models, making them portable and easier to manage.

What's Next?

  • Scalability: After mastering Docker, you can scale your model by exploring Kubernetes for container orchestration, which provides enhanced scalability and easier management in production environments.
  • Security: Make sure to secure your containerized API by implementing authentication and authorization mechanisms, especially when deploying to production.
  • Deployment to Cloud: Consider deploying your Dockerized model to cloud platforms like AWS, Azure, or Google Cloud for broader access and availability.
  • Model Improvements: As you gather more data, consider improving the model's accuracy or experimenting with more complex algorithms to refine predictions.

Feel free to reach out with any questions or comments. Happy coding and happy scaling!

Thanks
Sreeni Ramadorai.
