Model for Detecting License Plates in Cropped Image

Abstract

Now that we have per-car images, we need to find license plates. A license plate might not be present, or might otherwise be difficult to find with heuristics, so we use YOLO again to find the license plate bounding box.

Step-by-Step

License plate detection workflow visualized

Recipe

We now have cropped images per car. Our high-level goal is OCR on the license plate. OCR needs a view tightly restricted to the license plate characters, without other distractions. This means that within our cropped images, we should find the license plate (this article), and then crop the image down to just the license plate (next article).

We tested heuristics to find license plates, such as contour detection, and they did not generalize well. See the image examples below, illustrating that even with reliable car detection, license plates can be easy (centered, large), harder (off-center, smaller), or not present at all. YOLO proved to be the better tool for handling this variation, so we train a second model for license plates.

Easy example

Easy-to-see license plate. Centered, large.

Less-easy example

Still findable, but less easy. The license plate is no longer centered on the car, so we can't find it based on its location in the cropped image, and the plate is smaller in this view.

Harder example

Harder example. This is a real output from the step 3 model for detecting cars; the car is just not yet fully in frame. A contour heuristic thinks this is a license plate, but we need a model capable of understanding that the plate is incomplete, and moving on.
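
For reference, below is a minimal sketch of the kind of contour heuristic we tried and abandoned. It follows the common OpenCV recipe (edge detection, then take the first roughly four-sided contour); the image path is a placeholder. It finds the plate in the easy example above but misfires on the harder ones, because it has no notion of what a license plate actually looks like.

    import cv2
    import imutils

    # Rough contour heuristic (the approach we abandoned).
    # The path below is a placeholder for one of the cropped car images.
    img = cv2.imread('photos/car_crops/example_crop.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 11, 17, 17)   # smooth while keeping edges
    edges = cv2.Canny(gray, 30, 200)

    # Keep the 10 largest contours and take the first roughly 4-sided one
    contours = imutils.grab_contours(
        cv2.findContours(edges.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE))
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:10]

    plate_candidate = None
    for c in contours:
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        if len(approx) == 4:
            plate_candidate = approx   # assumed to be the plate - often wrong
            break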

  1. Collect training data

    • Run steps 1 through 4 for a while to collect training data. In my case, I compiled ~30 cropped images of cars with visible license plates, such as the following (a sketch of the collection step is shown below): Example cropped car image
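    • As a hypothetical sketch of that collection step (the weights file and paths are placeholders, assuming the step 3/4 setup): run the car detector on a frame and write each per-car crop to disk for labeling.

     from ultralytics import YOLO
     import cv2

     # Hypothetical sketch: reuse the step 3 car detector to save per-car crops.
     car_model = YOLO('runs/detect/car_detector/weights/best.pt')  # placeholder
     frame = cv2.imread('photos/highway_frames/frame_0001.jpg')    # placeholder

     results = car_model.predict(source=frame, imgsz=1280)
     for i, box in enumerate(results[0].boxes.xyxy.cpu().numpy().astype(int)):
         x1, y1, x2, y2 = box
         cv2.imwrite(f'photos/car_crops/crop_{i:04d}.jpg', frame[y1:y2, x1:x2])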
  2. Label training data

    • This is a time-consuming and manual step, but an important one.
    • I used roboflow.com (same as step 3), and I recommend their tooling. I'm sure there are alternatives if you prefer, but roboflow made the end-to-end process easier: training the YOLO model requires label data in a particular format, which roboflow generates automatically, and roboflow has online tooling that streamlines labeling your data.
    • Upload your images
    • Hand-label your images. I suggest learning the hotkeys. Sample labeling from roboflow - one of many
    • I suggest defining guidelines for yourself to mitigate the monotony. My bounding boxes generally stretched to just inside the license plate, not encircling the plate border itself; otherwise, top-left to bottom-right within the plate rectangle, and repeat. I'm not saying use the same bounding box every time. I'm suggesting: know what your guideposts are. Knowing your guideposts will keep your labeling consistent and help with the tedium.
    • Download the training data. This is just the formatted label files capturing all your bounding boxes for training; the sketch below shows the format.
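    • For reference, the YOLOv8 export is plain-text label files (one per image, one line per box) plus a data.yaml. The sketch below shows how a pixel bounding box maps to one of those lines; the numbers are made up for illustration.

     # YOLO label line: <class_id> <x_center> <y_center> <width> <height>,
     # all normalized to 0-1 relative to the image size.
     def to_yolo_line(class_id, x1, y1, x2, y2, img_w, img_h):
         xc = ((x1 + x2) / 2) / img_w
         yc = ((y1 + y2) / 2) / img_h
         w = (x2 - x1) / img_w
         h = (y2 - y1) / img_h
         return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

     # e.g. a plate at pixels (500, 620)-(740, 700) in a 1280x960 crop:
     print(to_yolo_line(0, 500, 620, 740, 700, 1280, 960))
     # -> "0 0.484375 0.687500 0.187500 0.083333"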
  3. Train the YOLO model

    • Note that my output model is “v27”, i.e. I trained several models to experiment with different parameters. I recommend testing and iterating on configurations that work for you: nano model vs. medium, fewer or more epochs, etc. (a sweep sketch follows the training snippet below).
    • On a MacBook M1 Pro with 32GB memory, it took <1 hour to train the winning model.
    from ultralytics import YOLO
    
    ###########
    # Training
    # Load the model
    baseDir = '/Users/japollock/Projects/TrainHighwayCarDetector/'
    model = YOLO(baseDir + 'resources/yolov8n.pt')
    
    results = model.train(
       imgsz=1280,
    #   epochs=100,
    #   batch=16,
    #   device='mps',
       data=baseDir + 'resources/v4data_LP/data.yaml',
       name='yolov8n_100_16_LP_v27'
    )
    ###########
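
    For completeness, here is a hedged sketch of how such a parameter sweep can be scripted. The configurations, the yolov8m.pt weights path, and the printed metrics are illustrative, using ultralytics' val() metrics object; they are not my exact runs.

    # Hypothetical sketch of a small parameter sweep (base model x epochs),
    # recording validation metrics for each run. Names and paths are illustrative.
    from ultralytics import YOLO

    baseDir = '/Users/japollock/Projects/TrainHighwayCarDetector/'
    configs = [('yolov8n.pt', 50), ('yolov8n.pt', 100), ('yolov8m.pt', 100)]

    for weights, epochs in configs:
        model = YOLO(baseDir + 'resources/' + weights)
        model.train(
            imgsz=1280,
            epochs=epochs,
            data=baseDir + 'resources/v4data_LP/data.yaml',
            name=f"LP_{weights.replace('.pt', '')}_{epochs}"
        )
        metrics = model.val()   # evaluates on the val split from data.yaml
        print(weights, epochs,
              'precision', round(metrics.box.mp, 3),
              'recall', round(metrics.box.mr, 3),
              'mAP50', round(metrics.box.map50, 3),
              'mAP50-95', round(metrics.box.map, 3))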
    
  4. Test YOLO.

    • I trained five model variants across model-size and epoch variations: nano, medium, and xlarge models, with 50 or 100 epochs. I measured performance on precision, recall, mAP50, and mAP50-95. Long story short, all variants had near-equal, very good performance, so I chose the simplest model (nano) with the default epochs.
    from ultralytics import YOLO
    import cv2
    
    ###########
    # predict
    baseDir = '/Users/japollock/Projects/TrainHighwayCarDetector/'
    
    ###########
    # model
    model = YOLO(baseDir + 'src/runs/detect/yolov8n_100_16_LP_v27/weights/best.pt')
    
    ###########
    # images
    img = cv2.imread(baseDir + 'photos/yolo_licensePlates/licensePlates/IMG_4566_0002.jpg')
    
    results = model.predict(
       source=img,
       imgsz=1280
    )
    
    cv2.imshow("image", results[0].plot())
    cv2.waitKey(0)
    

    Sample classification from the best license plate model. “license-plates” is the classifier label.

Now we have a model detecting license plates inside the cropped car images. This enables the next step: cropping the image down to just the license plate characters, on the way to OCR after that.
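
As a preview of that crop step, the plate coordinates can be read straight off the prediction results from the test snippet above. This is a minimal sketch continuing from that snippet's results and img variables; the output path is a placeholder.

    # Continuing from the prediction snippet: pull the first detected plate box
    # (pixel coordinates within the cropped car image) and crop to it.
    boxes = results[0].boxes
    if len(boxes) > 0:
        x1, y1, x2, y2 = boxes.xyxy[0].cpu().numpy().astype(int)
        confidence = float(boxes.conf[0])
        plate_crop = img[y1:y2, x1:x2]
        cv2.imwrite('plate_crop.jpg', plate_crop)   # placeholder output path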

Next Link: TBD -- Crop bounding box for license plate
