Daniel Azevedo
Lesson 11 – Getting Started with TensorFlow, Docker, and Kubernetes

Hi devs,

In this post, I'll guide you through the process of creating a simple machine learning app using TensorFlow, containerizing it with Docker, and deploying it on Kubernetes. Don't worry if you're new to these tools—I'll break everything down step by step.

Step 1: Building a Basic TensorFlow Model in Python

Let's kick things off by writing a simple TensorFlow app that trains a model using the MNIST dataset, which is a classic dataset for handwritten digit recognition.

Here’s the Python code to get started:

# app.py
import tensorflow as tf
from tensorflow.keras import datasets, layers, models

# Load and preprocess the MNIST dataset
(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

# Add a channel dimension: Conv2D expects inputs of shape (28, 28, 1),
# but the raw MNIST images come as (28, 28)
train_images = train_images[..., tf.newaxis]
test_images = test_images[..., tf.newaxis]

# Build a simple convolutional neural network
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Compile and train the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=1)

# Evaluate the model
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test accuracy: {test_acc}')

This script does a few things:

  1. Loads the MNIST dataset, normalizes the pixel values, and adds a channel dimension so the images match the CNN's expected input shape.
  2. Builds a simple convolutional neural network (CNN).
  3. Trains the model on the training data and evaluates it on the test data.

After running this code, you should see the model train for one epoch, and then it will print out the test accuracy.
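As an optional extension (not part of the original script), you might want to persist the trained model so a later process can serve it without retraining. tf.keras supports this directly; the path models/mnist here is just an illustrative choice:

# Optional: save the trained model for later reuse
# (the path 'models/mnist' is an arbitrary example)
model.save('models/mnist')

# Later, reload it without retraining
restored_model = tf.keras.models.load_model('models/mnist')

This becomes especially handy once the app is containerized, since you can mount a volume at that path to keep the model outside the container.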

Step 2: Dockerizing the TensorFlow App

Now, let's containerize this Python app using Docker. This ensures the app runs consistently across different environments.

First, create a Dockerfile in the same directory as your app.py:

# Dockerfile
FROM tensorflow/tensorflow:2.7.0

WORKDIR /app

COPY . .

CMD ["python", "app.py"]

This Dockerfile is pretty straightforward:

  • It starts from an official TensorFlow image (tensorflow/tensorflow:2.7.0), which already bundles TensorFlow, so there's no pip install step.
  • Sets /app as the working directory.
  • Copies all the files in the current directory into the image (see the optional .dockerignore note below).
  • Runs app.py when the container starts.
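Since COPY . . pulls in everything in the build context, an optional .dockerignore keeps the image lean. This is just a sketch; the entries are typical examples, not requirements:

# .dockerignore (optional; entries are examples)
__pycache__/
*.pyc
models/
.git/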

To build and run the Docker container, use the following commands:

# Build the Docker image
docker build -t tensorflow-app .

# Run the Docker container
docker run tensorflow-app

Once the container starts, the TensorFlow app will run inside the container, and you'll see the output in your terminal.
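One caveat: anything the script writes inside the container (like a saved model) disappears when the container is removed. If you added the optional model.save step earlier, a bind mount keeps the output on your host; the paths below assume the script writes to /app/models:

# Mount the host's ./models directory into the container (paths are examples)
docker run -v "$(pwd)/models:/app/models" tensorflow-app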

Step 3: Deploying with Kubernetes

Now that our app is containerized, the next step is deploying it to a Kubernetes cluster. We’ll use a basic YAML file to describe the deployment.

Here’s a simple kubernetes.yaml for deploying the TensorFlow app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tensorflow-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tensorflow-app
  template:
    metadata:
      labels:
        app: tensorflow-app
    spec:
      containers:
        - name: tensorflow-app
          image: tensorflow-app:latest
          # Without this, Kubernetes defaults to always pulling :latest
          # from a remote registry, which fails for a locally built image
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80

This configuration defines:

  • A Deployment named tensorflow-app with 1 replica.
  • The Pod runs the Docker image tensorflow-app:latest, with imagePullPolicy: IfNotPresent so Kubernetes can use a locally loaded image (see the note below on making the image available to the cluster).
  • containerPort: 80 declares the port the app would serve on. Note that app.py as written trains once and exits rather than listening on a port, so this only becomes meaningful once the app runs a server.
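One practical caveat the YAML alone doesn't solve: tensorflow-app:latest exists only in your local Docker daemon, so cluster nodes can't pull it. For a local cluster you can load the image in directly; for a remote cluster, tag and push it to a registry first (registry.example.com below is a placeholder):

# Local clusters: load the image straight onto the cluster nodes
minikube image load tensorflow-app:latest        # if using minikube
kind load docker-image tensorflow-app:latest     # if using kind

# Remote clusters: push to a registry you control (placeholder URL)
docker tag tensorflow-app registry.example.com/tensorflow-app:latest
docker push registry.example.com/tensorflow-app:latest

If you push to a registry, update the image: field in kubernetes.yaml to match the pushed name.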

To deploy this to your Kubernetes cluster, run the following:

kubectl apply -f kubernetes.yaml

This will create a deployment and run the container inside your cluster. To expose the app externally, you can create a service:

kubectl expose deployment tensorflow-app --type=LoadBalancer --port=80

Once the service is up, Kubernetes will assign an external IP (if you're using a cloud provider) to access your app.
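You can watch for the assigned address with:

# The EXTERNAL-IP column fills in once the load balancer is provisioned
kubectl get service tensorflow-app

One final note: because app.py trains once and exits, a Deployment will keep restarting the Pod. For one-shot training, a Kubernetes Job (which runs a Pod to completion) is usually a better fit; a Deployment makes sense once you wrap the model in a long-running server that actually listens on port 80.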

Conclusion

This train, containerize, deploy workflow is the backbone of running machine learning models in production: the same image that works on your laptop runs unchanged in the cluster, and Kubernetes handles scheduling and scaling for you.

With these tools in your arsenal, you're well on your way to tackling more complex ML workloads and scaling them across environments. Keep experimenting, and stay tuned for more advanced topics on scaling AI with Kubernetes!

Keep Coding :)
