Cassie Breviu for Microsoft Azure

ONNX: No, it's not a Pokemon! Deploy your ONNX model with C# and Azure Functions

Ok, you got an ML model working in a Jupyter notebook. Now what? Let's deploy it! There are many ways to operationalize your model. In this tutorial we are going to use the ONNX model format and runtime. ONNX gives you the ability to use the same model and application code across different platforms. This means I can create a model in Python with scikit-learn and use the resulting model in C#! Say whaaat? Yes, that is right: save it to the ONNX format, then run it and do the inferencing in C# with the ONNX Runtime!

We are going to be using a model created with Python and scikit-learn from this blog post to classify wine quality based on descriptions from a wine magazine. We are going to take that model, update it to use a pipeline, and export it to the ONNX format.

We want to use the ONNX format because it is what allows us to deploy the model to many different platforms. There are other benefits (such as performance) to using ONNX. Learn more about that here. Since I ♥ C#, we are going to use C# and the onnxruntime NuGet library available for dotnet. However, if you prefer to use a different language, many are supported. Check out all the supported libraries and languages here.

Prerequisites

NOTE: If you prefer to create the model locally, I recommend downloading Anaconda. However, this tutorial is written as if you are using Azure ML.

What is Open Neural Network Exchange (ONNX)?

ONNX is an open, common file format that lets you use models with a variety of frameworks, tools, runtimes, and compilers. Once the model is exported to the ONNX format, you can use the ONNX Runtime: a cross-platform, high-performance scoring engine for ML models. This provides framework interoperability and helps maximize the reach of hardware optimizations.

Create the Model with Azure Machine Learning

We have a model from the previous blog post to classify wine quality that we will use as the example model. See the note below if you have your own model you would like to use. Additionally, you should already have an Azure Machine Learning (AML) workspace created to generate the model. If not, follow these steps to create the workspace.

NOTE: To use your own model visit the ONNX GitHub tutorials to see how to convert different frameworks and tools.

1. Create a Cloud Compute Instance in the Azure ML Resource

  • Navigate to your Azure ML Resource (https://ml.azure.com)
  • Click "Compute" in the left nav
  • Click "New"
  • Enter a name for the resource
  • Select the machine size
  • Click "Create"

2. Clone the Git repo with the Jupyter Notebook

  • Open JupyterLab for the compute instance in AML
  • Click the Terminal to open a terminal tab
  • Clone the GitHub repo:
git clone https://github.com/cassieview/onnx-csharp-serverless.git
  • The onnx-csharp-serverless folder will appear. Navigate to the Jupyter Notebook onnx-csharp-serverless/notebook/wine-nlp-onnx.ipynb.

3. Install the package

There are different Python packages for converting models from various frameworks to the ONNX format. Since we used scikit-learn, we will use the skl2onnx package to export our trained model.

In the terminal, install the package with the following command:

pip install skl2onnx

4. Run the code

The notebook provided goes over how to create a basic bag-of-words style NLP model. Run each cell until you get to the Export the Model step near the bottom of the notebook.
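For context, the pipeline the notebook builds looks roughly like the sketch below. This is a minimal, hypothetical version: the exact vectorizer and classifier may differ from the notebook's, and the two toy reviews stand in for the real wine review training data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Two steps: vectorize the review text, then classify it.
pipeline = Pipeline([
    ("vectorizer", TfidfVectorizer()),
    ("classifier", LogisticRegression())
])

# Toy stand-in data; the notebook trains on the real wine review dataset.
reviews = ["bright cherry fruit with a long finish", "flat and watery with no structure"]
quality = [1, 0]
pipeline.fit(reviews, quality)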

The first cell in the export block of the notebook imports the ONNX conversion packages.

from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import StringTensorType

The next cell runs convert_sklearn, passing in the following values:

  • The first parameter is a trained classifier or pipeline. In our example we are using a pipeline with 2 steps to vectorize and classify the text.
  • The second parameter is a string name value of your choice.
  • Lastly, set the initial_types. This is a list where each item is a tuple of a variable name and a type. Here, StringTensorType([None, 1]) means the model accepts any number of rows, each containing one string.
model_onnx = convert_sklearn(pipeline,
                             "quality",
                             initial_types=[("input", StringTensorType([None, 1]))])

Then we will simply export the model.

with open("pipeline_quality.onnx", "wb") as f:
    f.write(model_onnx.SerializeToString())
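As an optional sanity check (assuming you also pip install onnx), you can load the exported file back and verify that it is a well-formed ONNX graph:

import onnx

# Load the exported model and validate the graph structure.
onnx_model = onnx.load("pipeline_quality.onnx")
onnx.checker.check_model(onnx_model)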

5. Save Model to Azure Storage

Now that we have exported the model into ONNX format, let's save it to Azure Storage. Follow these steps to create a storage account and upload the model created.
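If you prefer the command line over the portal, an Azure CLI upload should look something like the sketch below once the storage account and container exist. The container name model327 matches the one used in the function code later; the connection string placeholder is yours to fill in.

az storage blob upload \
    --container-name model327 \
    --name pipeline_quality.onnx \
    --file pipeline_quality.onnx \
    --connection-string "<your-storage-connection-string>"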


Deploy Model with Azure Functions

Azure Functions is the serverless Azure Resource we are going to use to deploy our model.

1. Install the VS Code Azure Functions extension

  • Install the Azure Functions extension. You can use the Azure Functions extension to create and test functions and deploy them to Azure.
  • In Visual Studio Code, open Extensions and search for Azure Functions, or select this link
  • Select Install to install the extension for Visual Studio Code

2. Use VS Code to create Azure function

  • Hit CTRL+SHIFT+P to open the command palette
  • Type create function and select the create function option
  • From the popup select Create new project and create a folder for the project
  • When prompted for a language, select C#. Note that you have many language options with functions!
  • Next select a template. We want an HttpTrigger; give it a name
  • Next it will prompt you for a namespace. I used Wine.Function but feel free to name it as you wish
  • Access rights are next; select Function
  • Select open in current window
  • You should be prompted in the bottom right corner to restore packages. If not, you can always open the terminal and run dotnet restore to restore NuGet packages.

3. Run the project to validate it's working

  • Hit F5 to run the project and test that it's working
  • Once the function is up, a localhost endpoint will be displayed in the terminal output of VS Code. Paste it into a browser with a query string to test that it's working. The endpoint will look something like this:
  http://localhost:7071/api/wine?name=test
  • The result in the browser should look like this: Hello, test. This HTTP triggered function executed successfully.
  • Stop the run.

4. Install the NuGet Packages

Sweet, we now have an Azure Function ready to go. Let's install the NuGet packages we need to run inference with our exported model in C#.

Open the terminal and run the command below to install the ONNX Runtime package. As of version 1.2.0, the tensor types ship inside this package (in the Microsoft.ML.OnnxRuntime.Tensors namespace), so the separate System.Numerics.Tensors package is no longer needed.

dotnet add package Microsoft.ML.OnnxRuntime --version 1.2.0

Add the Azure Storage package with the below command

dotnet add package Azure.Storage.Blobs
Enter fullscreen mode Exit fullscreen mode

Import the libraries at the top of the C# class.

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

5. Update the Code

Copy and paste the below code into the class created. Read through the comments to understand what is happening at each step in the code.

        [FunctionName("wine")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log, ExecutionContext context)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            string review = req.Query["review"];

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            review = review ?? data?.review;

            // Get path to model to create inference session.
            var modelPath = GetFileAndPathFromStorage(context, "model327", "pipeline_quality.onnx");
            var inputTensor = new DenseTensor<string>(new string[] { review }, new int[] { 1, 1 });

            // Create input data for session.
            var input = new List<NamedOnnxValue> { NamedOnnxValue.CreateFromTensor<string>("input", inputTensor) };

            // Create an InferenceSession from the Model Path.
            var session = new InferenceSession(modelPath);

            // Run the session with the input data to get the inference output. Run returns a collection; call ToList, take the Last item, then use the AsEnumerable extension method to view the result as an Enumerable of NamedOnnxValue.
            var output = session.Run(input).ToList().Last().AsEnumerable<NamedOnnxValue>();

            // From the Enumerable output create the inferenceResult by getting the First value and using the AsDictionary extension method of the NamedOnnxValue.
            var inferenceResult = output.First().AsDictionary<string, float>();

            // Return the inference result as json.
            return new JsonResult(inferenceResult);
        }

Add the GetFileAndPathFromStorage helper method to the class below the Run method. This method downloads the model from the storage account we created and uploaded the model to, caching it on disk so it is only downloaded once.

        internal static string GetFileAndPathFromStorage(ExecutionContext context, string containerName, string fileName)
        {
            //Check if file already exists
            var filePath = System.IO.Path.Combine(context.FunctionDirectory, fileName);
            var fileExists = File.Exists(filePath);

            if (!fileExists)
            {
                //Get model from Azure Blob Storage.
                var connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
                var blobServiceClient = new BlobServiceClient(connectionString);
                var containerClient = blobServiceClient.GetBlobContainerClient(containerName);
                var blobClient = containerClient.GetBlobClient(fileName);

                // Download the blob's contents and save it to a file
                BlobDownloadInfo blobDownloadInfo = blobClient.Download();
                using (FileStream downloadFileStream = File.OpenWrite(filePath))
                {
                    blobDownloadInfo.Content.CopyTo(downloadFileStream);
                    downloadFileStream.Close();
                }

            }

            return filePath;
        }

6. Update the storage parameters

  • Update the storage account connection in local.settings.json to connect to your storage account, as shown in the sketch below. You will find the connection string in the created resource in Azure under Access Keys.
  • Update the containerName and fileName to the names you used.
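A minimal local.settings.json might look like the following. The AZURE_STORAGE_CONNECTION_STRING key matches the environment variable the helper method reads; the placeholder value is yours to fill in.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AZURE_STORAGE_CONNECTION_STRING": "<your-storage-connection-string>"
  }
}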

7. Test the endpoint

It's time to test the function locally to make sure everything is working correctly.

  • Hit F5 to run the project and test that it's working
  • This time we are going to use an actual wine review from Wine Enthusiast. Rather than doing this through the browser, we are going to use Postman.
  • Grab the endpoint from the terminal in VS Code and paste it into a new tab in Postman
  • Change the GET dropdown to POST
  • Select the Body tab
  • Select the raw radio button
  • Change the Text dropdown to JSON
  • Paste the body below into the text editor
{
  "review": "From the famous Oakville site, this aged wine spends three years in 100% new French oak, one in neutral oak and an additional year in bottle. Though it has had time to evolve, it has years to go to unfurl its core of eucalyptus, mint and cedar. It shows an unmistakable crispness of red fruit, orange peel and stone, all honed by a grippy, generous palate."
}
  • Hit Send and you should see the inference results.
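If you would rather skip Postman, a curl equivalent should look something like this (using the same JSON body as above; the local port and route may differ on your machine):

curl -X POST http://localhost:7071/api/wine \
    -H "Content-Type: application/json" \
    -d '{"review": "From the famous Oakville site, this aged wine spends three years in 100% new French oak..."}'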

8. Deploy the Endpoint to Azure

WOOHOO! We have created our model and the C# Azure Function, and tested it locally with Postman. Let's deploy it! The VS Code Azure Functions extension makes deploying to Azure quite simple. Follow these steps to publish the function from VS Code.

Once the app is deployed, we need to update some application settings in the Function App.

  • Navigate to the Function App in Azure Portal
  • Select the name of the Function App you created
  • Select Configuration
  • Click New Application setting and add the storage connection string name and value.
  • Click Edit on WEBSITE_RUN_FROM_PACKAGE and change the value to 0. This allows us to write files from our function. (You may have to redeploy your function from VS Code after making this change.)
  • Save the changes. (The same settings can also be applied from the CLI, as shown below.)
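Here is a hedged Azure CLI sketch for applying both settings; the function app and resource group names are placeholders:

az functionapp config appsettings set \
    --name <your-function-app> \
    --resource-group <your-resource-group> \
    --settings "AZURE_STORAGE_CONNECTION_STRING=<your-storage-connection-string>" "WEBSITE_RUN_FROM_PACKAGE=0"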

Play the Somm Wine Game here

I created a game from these models called the Somm Wine Game. It's a game to see how well you can describe your wine like a sommelier! It uses these machine learning models to predict the price, points, and variety based on a description. The better your description is, the more accurate the predictions!

Don't let the learning stop here! Check out these additional resources for topics covered in this tutorial.

ONNX Docs
ONNX C# API Docs
scikit-learn pipeline to ONNX
ONNX Runtime GitHub
Blog post on how the models were built

Cheers to you!


Top comments (2)

dmorel

Hello,
Very good article.
But I have a problem.
The inputTensor declared is not the expected type:
System.Numerics.Tensors.DenseTensor vs. Microsoft.ML.OnnxRuntime.Tensors.Tensor

dmorel

Ok, the usings must be:
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;