Lark Mullins for Craftwork


Implementing MLOps with GitHub Actions

Machine Learning Operations (MLOps) is an essential practice for deploying, managing, and monitoring machine learning models in production. By combining the principles of DevOps with machine learning, MLOps aims to streamline the end-to-end lifecycle of ML models. GitHub Actions, a powerful CI/CD tool, can play a crucial role in implementing MLOps by automating workflows. In this article, we will discuss how to implement MLOps using GitHub Actions, providing a detailed, step-by-step guide.

Why Use GitHub Actions for MLOps?

GitHub Actions allows you to automate your software workflows directly from your GitHub repository. It supports continuous integration and continuous deployment (CI/CD), making it an ideal tool for MLOps. With GitHub Actions, you can automate tasks such as testing, building, deploying, and monitoring your ML models.

Benefits of Using GitHub Actions:

  • Integration with GitHub: Seamlessly integrates with your GitHub repositories, making it easy to manage workflows within the same platform.
  • Custom Workflows: Define custom workflows using YAML syntax to suit your specific needs.
  • Scalability: Run workflows on GitHub-hosted or self-hosted runners to scale with your requirements.
  • Extensive Marketplace: Access to a marketplace with numerous pre-built actions to extend your workflows.

Setting Up Your Repository

First, ensure your repository is set up with the necessary files and structure for your ML project. This typically includes:

data/:              Directory for storing datasets.
models/:            Directory for storing trained models.
src/:               Directory for source code.
tests/:             Directory for test scripts.
requirements.txt:   Project dependencies.

Creating a Workflow File

GitHub Actions uses YAML files to define workflows. These files are stored in the .github/workflows/ directory of your repository. Below is an example of a basic workflow for training and deploying a machine learning model.

name: MLOps Workflow
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run tests
        run: |
          pytest tests/
      - name: Train model
        run: |
          python src/train_model.py
      - name: Save model artifact
        uses: actions/upload-artifact@v2
        with:
          name: trained-model
          path: models/
  deploy:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Download model artifact
        uses: actions/download-artifact@v2
        with:
          name: trained-model
          path: models/
      - name: Deploy model
        run: |
          python src/deploy_model.py
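
The workflow above assumes a training script at src/train_model.py that writes a serialized model into models/ for the upload-artifact step. Here is a minimal sketch of such a script, assuming scikit-learn, pandas, and joblib are listed in requirements.txt and that data/dataset.csv with a target column exists (all illustrative assumptions, not part of the original project):

# src/train_model.py - minimal illustrative training script (hypothetical data layout)
import pathlib

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

DATA_PATH = pathlib.Path("data/dataset.csv")   # assumed dataset produced by preprocessing
MODEL_DIR = pathlib.Path("models")

def main() -> None:
    df = pd.read_csv(DATA_PATH)
    X = df.drop(columns=["target"])            # "target" column name is an assumption
    y = df["target"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"Validation accuracy: {accuracy:.3f}")

    MODEL_DIR.mkdir(exist_ok=True)
    joblib.dump(model, MODEL_DIR / "model.joblib")   # picked up by the upload-artifact step

if __name__ == "__main__":
    main()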

Automating the Data Pipeline

A robust data pipeline is crucial for any ML project. Automate the steps of data collection, preprocessing, and storage to ensure a consistent and reproducible process.

Data Collection

Create scripts to automate the data collection process. For example, you might have a script that fetches data from an API or a database and saves it to the data/ directory.
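
A minimal sketch of such a collection script, assuming the requests library is available and using a placeholder API URL and output path:

# src/collect_data.py - illustrative data collection script (hypothetical API and paths)
import json
import pathlib

import requests

API_URL = "https://example.com/api/records"          # placeholder endpoint, not a real service
OUTPUT_PATH = pathlib.Path("data/raw_records.json")

def main() -> None:
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()                       # fail loudly if the fetch did not succeed
    OUTPUT_PATH.parent.mkdir(parents=True, exist_ok=True)
    OUTPUT_PATH.write_text(json.dumps(response.json(), indent=2))
    print(f"Saved {OUTPUT_PATH}")

if __name__ == "__main__":
    main()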

Data Preprocessing

Include a preprocessing script (src/preprocess.py) to clean and transform raw data into a suitable format for model training. Automate this step in your GitHub Actions workflow:

jobs:
  preprocess:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Preprocess data
        run: |
          python src/preprocess.py
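
The workflow step above assumes a script at src/preprocess.py. A minimal sketch, assuming pandas and the hypothetical file names introduced in the collection example:

# src/preprocess.py - illustrative preprocessing script (hypothetical file names)
import pathlib

import pandas as pd

RAW_PATH = pathlib.Path("data/raw_records.json")   # assumed raw data location
CLEAN_PATH = pathlib.Path("data/dataset.csv")      # file later read by the training script

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicate rows and rows with missing values."""
    return df.drop_duplicates().dropna()

def main() -> None:
    df = clean(pd.read_json(RAW_PATH))
    df.to_csv(CLEAN_PATH, index=False)
    print(f"Wrote {len(df)} rows to {CLEAN_PATH}")

if __name__ == "__main__":
    main()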

Version Control for Code and Data

Using version control systems for your code, data, and models ensures reproducibility and traceability.

Code Versioning

Use Git to manage and track changes to your codebase. Ensure all team members follow best practices for commits and branching.

Data and Model Versioning

Use tools like DVC (Data Version Control) to track changes in datasets and model artifacts. Integrate DVC with your Git repository to version control data and models:

- name: Install DVC
  run: |
    pip install dvc
- name: Pull data and model files
  run: |
    dvc pull
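
Besides the command line, DVC also offers a Python API for reading a specific revision of a tracked file. A minimal sketch, assuming data/dataset.csv is tracked by DVC in this repository:

# Illustrative use of DVC's Python API (assumes data/dataset.csv is DVC-tracked)
import dvc.api
import pandas as pd

# Open the version of the dataset recorded at a given Git revision.
with dvc.api.open("data/dataset.csv", rev="main") as f:
    df = pd.read_csv(f)

print(df.shape)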

Experiment Tracking

Track experiments to understand the impact of changes and identify the best-performing models. Tools like MLflow, TensorBoard, or Weights & Biases can be integrated into your workflow.

Example with MLflow

- name: Set up MLflow
  run: |
    pip install mlflow
- name: Run MLflow experiment
  run: |
    mlflow run .  # assumes an MLproject file at the repository root
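
Experiment metadata can also be logged directly from the training script with MLflow's tracking API. A minimal sketch, reusing the hypothetical dataset layout from the training example:

# Illustrative MLflow tracking inside src/train_model.py (hypothetical data layout)
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("data/dataset.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns=["target"]), df["target"], test_size=0.2, random_state=42
)

with mlflow.start_run():
    n_estimators = 100
    mlflow.log_param("n_estimators", n_estimators)           # record the hyperparameter

    model = RandomForestClassifier(n_estimators=n_estimators, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", accuracy)                   # record the evaluation metric
    mlflow.sklearn.log_model(model, "model")                  # store the model as an artifact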

Continuous Integration & Continuous Deployment (CI/CD)

CI/CD pipelines automate the process of testing, validating, and deploying ML models. This ensures that any changes to the model or its dependencies are rigorously tested before being deployed to production.

Example CI/CD Pipeline

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run tests
        run: |
          pytest tests/
      - name: Train model
        run: |
          python src/train_model.py
      - name: Save model artifact
        uses: actions/upload-artifact@v2
        with:
          name: trained-model
          path: models/
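
The Run tests step expects test scripts under tests/. A minimal sketch of one such test, assuming the clean() helper from the earlier preprocessing sketch and that src/ is importable (for example via an __init__.py or a pytest path configuration):

# tests/test_preprocess.py - illustrative test (assumes a clean() helper in src/preprocess.py)
import pandas as pd

from src.preprocess import clean   # hypothetical helper from the preprocessing sketch

def test_clean_removes_duplicates_and_missing_values():
    raw = pd.DataFrame(
        {
            "feature": [1.0, 1.0, None, 2.0],
            "target": [0, 0, 1, 1],
        }
    )
    cleaned = clean(raw)
    assert cleaned.isna().sum().sum() == 0    # no missing values remain
    assert not cleaned.duplicated().any()     # duplicate rows were removed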

Containerization and Orchestration

Containerization ensures consistency across different environments. Docker is commonly used to containerize ML models and their dependencies.

Dockerfile Example

FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "src/deploy_model.py"]

Docker Compose for Local Development

version: '3.8'
services:
  ml_service:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app
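
The Dockerfile's CMD and the 5000:5000 port mapping assume that src/deploy_model.py starts a small inference server. A minimal sketch, assuming Flask and joblib are in requirements.txt and that the model was saved as models/model.joblib (both carried over from the earlier sketches):

# src/deploy_model.py - illustrative inference server (hypothetical model path)
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("models/model.joblib")     # artifact produced by the training step

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = payload["features"]             # expects {"features": [[...], [...]]}
    predictions = model.predict(features).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)         # matches the compose port mapping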

Model Deployment

Deploy the model to a production environment. This could involve deploying to cloud services like AWS, Google Cloud, or Azure, or to an on-premises server.

Example Deployment Script

- name: Deploy to AWS
  run: |
    # Note: SageMaker expects the model artifact to be packaged as a .tar.gz archive in S3.
    aws s3 cp models/trained-model s3://your-bucket-name/models/trained-model
    aws sagemaker create-model --model-name your-model-name --primary-container Image=your-container-image,ModelDataUrl=s3://your-bucket-name/models/trained-model
    aws sagemaker create-endpoint-config --endpoint-config-name your-endpoint-config --production-variants VariantName=AllTraffic,ModelName=your-model-name,InitialInstanceCount=1,InstanceType=ml.m4.xlarge
    aws sagemaker create-endpoint --endpoint-name your-endpoint --endpoint-config-name your-endpoint-config
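
Once the endpoint is live, it can be called from Python with boto3. A minimal sketch, assuming the placeholder endpoint name used above and a CSV-formatted feature row:

# Illustrative call to a deployed SageMaker endpoint (names are placeholders)
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="your-endpoint",     # must match the endpoint created above
    ContentType="text/csv",
    Body="1.0,2.0,3.0",               # one feature row; the format depends on your container
)
print(response["Body"].read().decode("utf-8"))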

Model Monitoring and Retraining

Implement continuous monitoring to track model performance and automate retraining to ensure the model remains accurate over time.

Monitoring Script

- name: Monitor model
  run: |
    python src/monitor_model.py
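
The step assumes a script at src/monitor_model.py. A minimal sketch that re-evaluates the current model on recently labeled data and fails the workflow when accuracy drops below a threshold (paths and threshold are illustrative):

# src/monitor_model.py - illustrative monitoring script (hypothetical paths and threshold)
import sys

import joblib
import pandas as pd
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.85                             # illustrative alerting threshold

def main() -> None:
    model = joblib.load("models/model.joblib")
    recent = pd.read_csv("data/recent_labeled.csv")   # assumed recent labeled data
    accuracy = accuracy_score(
        recent["target"], model.predict(recent.drop(columns=["target"]))
    )
    print(f"Current accuracy: {accuracy:.3f}")
    if accuracy < ACCURACY_THRESHOLD:
        print("Accuracy below threshold; flagging for retraining.")
        sys.exit(1)                                   # non-zero exit fails the workflow step

if __name__ == "__main__":
    main()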

Retraining Pipeline

on:
  schedule:
    - cron: '0 0 * * 1' # Every Monday at midnight
jobs:
  retrain:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Retrain model
        run: |
          python src/train_model.py
      - name: Save retrained model
        uses: actions/upload-artifact@v2
        with:
          name: retrained-model
          path: models/
      - name: Deploy retrained model
        run: |
          python src/deploy_model.py
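
Scheduled retraining is safer with a simple promotion gate that only deploys the retrained model if it does not underperform the current one. A minimal sketch of such a gate, assuming hypothetical holdout data and model paths:

# src/compare_models.py - illustrative promotion gate (hypothetical paths)
import sys

import joblib
import pandas as pd
from sklearn.metrics import accuracy_score

holdout = pd.read_csv("data/holdout.csv")                       # assumed holdout set with a "target" column
X, y = holdout.drop(columns=["target"]), holdout["target"]

new_model = joblib.load("models/model.joblib")                  # model produced by the retraining step
current_model = joblib.load("models/production_model.joblib")   # previously deployed model

new_acc = accuracy_score(y, new_model.predict(X))
current_acc = accuracy_score(y, current_model.predict(X))
print(f"Retrained: {new_acc:.3f}, current: {current_acc:.3f}")

if new_acc < current_acc:
    print("Retrained model underperforms the current model; skipping deployment.")
    sys.exit(1)                                                 # stops the deploy step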

Conclusion

Implementing MLOps with GitHub Actions allows you to automate and streamline the lifecycle of your machine learning models, from development to deployment and monitoring. By leveraging GitHub Actions, you can ensure that your ML models are robust and reliable.
