Khoa Pham

Machine learning in iOS: Azure Custom Vision and CoreML

You may have read my previous article, Machine Learning in iOS: IBM Watson and CoreML. So you know that machine learning can be intimidating, with lots of concepts and frameworks to learn, not to mention the Python algorithms we would need to understand.

Speaking of image labelling: you might have a bunch of images and want to train a machine to understand and classify them. Training your own custom deep learning models can be challenging, so the easiest approach is to start with a cloud service, so that you don't get discouraged at the first step. The code to train the model is already written for you; all you have to do is tag the related images and collect the trained result.

In this tutorial we will take a look at the Azure Custom Vision service from Microsoft, which allows us to build a custom image classifier.

Microsoft Azure Custom Vision

Custom Vision is a service that lets us build custom image classifiers. It was first announced at Microsoft Build 2017; you can watch the keynote here. It is one of the many Cognitive Services in the AI + Machine Learning product area in Azure, alongside services for speech, language, search, bots, and more.

The Custom Vision Service is a Microsoft Cognitive Service that lets you build custom image classifiers. It makes it easy and fast to build, deploy, and improve an image classifier. The Custom Vision Service provides a REST API and a web interface to upload your images and train the classifier.

To begin, go to the Custom Vision home page and click **Get Started**. You need an Azure account; if you don't have one, just register here for free. The free tier is enough to get started with the service: 2 projects and 5,000 training images per project.

The usage is extremely easy. Here is the dashboard for Custom Vision; it couldn't be simpler. Follow along with the next steps.

Step 1: Create new project

Create a new project called Avengers. Out of laziness, and to make comparison with other cloud services easy, we use the same dataset as in the post Machine Learning in iOS: IBM Watson and CoreML. Just to recap: last time we made an app that recognised superheroes, and since that post, people have requested more superheroes ❤️

Note that in the Domain section, you need to select General (compact), as this produces a lightweight model that can run on mobile, meaning the trained model can be exported to .mlmodel, the format supported by CoreML.

Step 2: Add images

Click Add images and select the images for each superhero, then give each set a proper tag.

Regarding the dataset, the documentation page What does Custom Vision Service do well? says:

Few images are required to create a classifier or detector. 50 images per class are enough to start your prototype. The methods Custom Vision Service uses are robust to differences, which allows you to start prototyping with so little data. This means Custom Vision Service is not well suited to scenarios where you want to detect subtle differences. For example, minor cracks or dents in quality assurance scenarios.

So our small dataset should be sufficient for this tutorial.

Step 3: Train

Click Train to start the training process. It shouldn't take long, as Custom Vision uses transfer learning: it fine-tunes an existing model on our images instead of training one from scratch.

Azure also offers a Prediction API to perform predictions against our trained model over REST. In this case, though, we want the trained model embedded in our iOS app so it runs offline. Click **Export** and select **CoreML**.
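If you do want server-side prediction, here is a minimal sketch of calling the Prediction API with URLSession. Everything specific in it is a placeholder: the endpoint URL, API version, project id, and Prediction-Key all come from your own project's prediction settings in the portal.

```swift
import Foundation

// A sketch of remote prediction over REST. The URL and key below are
// placeholders — copy the real values from your project in the portal.
func predictRemotely(imageData: Data) {
    let url = URL(string: "https://southcentralus.api.cognitive.microsoft.com/customvision/v1.1/Prediction/YOUR_PROJECT_ID/image")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("YOUR_PREDICTION_KEY", forHTTPHeaderField: "Prediction-Key")
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
    request.httpBody = imageData // raw JPEG/PNG bytes of the image to classify

    URLSession.shared.dataTask(with: request) { data, _, error in
        guard let data = data, error == nil else { return }
        // The response is JSON listing predicted tags with probabilities.
        print(String(data: data, encoding: .utf8) ?? "")
    }.resume()
}
```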

Using CoreML model in iOS app

We use the same project as in Machine Learning in iOS: IBM Watson and CoreML. The project is on GitHub; we use the CoreML and Vision frameworks to perform predictions based on our trained model.

We name the model AzureCustomVision.mlmodel and add it to our project. Xcode 9 autogenerates a class for it, so we get the class AzureCustomVision. Then we can construct the Vision-compatible model VNCoreMLModel and a VNCoreMLRequest, and finally send the request to a VNImageRequestHandler. The code is pretty straightforward:
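Here is a minimal sketch of that flow, assuming the generated class is named AzureCustomVision and the input is a UIImage:

```swift
import UIKit
import CoreML
import Vision

// A sketch of the prediction flow. AzureCustomVision is the class Xcode
// generates from the imported .mlmodel file.
func predict(image: UIImage) {
    guard let cgImage = image.cgImage,
        let model = try? VNCoreMLModel(for: AzureCustomVision().model) else {
        return
    }

    // The request runs the model and returns classification observations,
    // sorted by confidence.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
            let top = results.first else {
            return
        }
        print("\(top.identifier): \(top.confidence)")
    }

    // The handler performs the request on the given image; Vision handles
    // scaling the image to the model's expected input size.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

A nice detail here is that Vision takes care of resizing and cropping the input image to whatever size the model expects, so we don't have to do any pixel buffer work ourselves.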

Build and run the app. Select a superhero and let the app tell you who he or she is. Our dataset is not that big, but you can see that the model predicts pretty well, with very high confidence. For now we have images for only 4 superheroes, but you can add many more depending on your needs.
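If you're wondering how the image gets into the prediction call, a hypothetical minimal wiring through UIImagePickerController could look like the following. The class name and setup here are mine, not from the repo, and this uses the Xcode 9 / Swift 4 era delegate signature.

```swift
import UIKit

// Hypothetical wiring: present a photo picker and feed the chosen image
// into the predict(image:) sketch above.
class PickerViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func pickPhoto() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [String: Any]) {
        picker.dismiss(animated: true)
        if let image = info[UIImagePickerControllerOriginalImage] as? UIImage {
            predict(image: image)
        }
    }
}
```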

What about other cloud services?

We have covered IBM Watson and Microsoft Azure Custom Vision. There are other cloud services worth checking out. They can be as easy as uploading images and training, or more advanced, with custom TensorFlow code execution or complex rules.

  • Vize.ai: The UI is very intuitive and all steps to train shouldn’t take long.

  • Lobe.ai: This is like playing in a playground where we can drag and drop and connect components to instruct machine learning tasks.

  • Amazon Rekognition: It can perform many image and video analysis tasks, such as facial recognition, text in image, image detection, … However, I don't see an option to train it on our own dataset.

  • Google Cloud Vision: This is similar to Amazon Rekognition in that it is exposed via API calls and we can't supply a custom dataset. There is Google Cloud ML Engine, which lets us train using TensorFlow code, but that requires some machine learning understanding.

  • Google Cloud AutoML: At the time of this post it is still in alpha, but I hope the experience will be simple for newcomers. You can watch the introduction here.

The post Comparing Machine Learning (ML) Services from Various Cloud ML Service Providers gives a lot of insight into popular machine learning cloud services, together with sample code; it's worth taking a look too.

Where to go from here

To go further with cloud services, especially Azure Custom Vision, the official documentation and the posts linked above are good starting points.

