
Andrés Baamonde Lozano


Building an object tracker(I): Building/testing a tracker

This post series is a little 'how to' for building an object tracker with OpenCV.

Setup environment

The setup is quite simple: just create a virtual environment and install OpenCV.

python3 -m venv venv
source venv/bin/activate
pip install opencv-contrib-python
pip install opencv-python
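To make sure the installation worked, a quick sanity check is to import cv2 and print its version (the exact version string will depend on what pip resolved):

python -c "import cv2; print(cv2.__version__)"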

Our 'Detected' object

I have created a base object for any kind of detection, with common attributes: positions, detection rectangles and a unique identifier.

class BaseObject(object):
    """Base class for any kind of detection."""

    def __init__(self):
        self.identifier = None  # unique id assigned by the tracker
        self.position = None    # latest known position
        self.positions = []     # history of positions
        self.rectangle = None   # detection rectangle (bounding box)
        self.modified = None    # timestamp of the last detection
        self.detections = []    # timestamps of every detection

The tracker

The tracker has a 'track' method that manages new detections, associating each one with a previous detection when necessary.

That association is delegated to the detected object's '__eq__' magic method (implemented in the following post).


from datetime import datetime


class ObjectTracker(object):
    def __init__(self):
        self.objects = []
        self._current_id = 1

    def track(self, obj):
        # Insert the detection if it is new, otherwise update the match.
        match = self.get(obj)
        if match is None:
            return self.insert(obj)
        else:
            return self.update(obj)

    def get(self, obj):
        # Find a previously tracked object that compares equal to obj.
        matches = list(
            filter(
                lambda x: x == obj,
                self.objects))
        return matches[0] if len(matches) > 0 else None

    def insert(self, obj):
        # First time we see this object: assign an id and start its history.
        obj.identifier = self._current_id
        obj.modified = datetime.now()
        obj.detections.append(obj.modified)
        obj.positions = [obj.position]
        self.objects.append(obj)
        self._current_id += 1
        return obj

    def update(self, obj):
        # Refresh the stored entity with the data of the new detection.
        entity = self.get(obj)
        self.objects = list(
            filter(
                lambda x: x.identifier != entity.identifier,
                self.objects))
        entity.position = obj.position
        entity.positions.append(obj.position)
        entity.modified = datetime.now()
        entity.detections.append(entity.modified)
        self.objects.append(entity)
        return entity


Testing it!

Before the test we create an object and initialize it with attributes associated with our detection (car attributes, person attributes, some keypoints detected with SIFT/SURF, ...): any value that you consider relevant or a unique feature of your objects.

My custom object

class MycustomObjectClass(BaseObject):
    # The feature that uniquely identifies this object for matching.
    my_unique_field = None

    def __repr__(self):
        return "{0} {1}".format(self.identifier, self.my_unique_field)

    def __eq__(self, other):
        # Two detections are 'the same object' if their unique field matches.
        return self.my_unique_field == other.my_unique_field

Tests

Now we test our tracker class; the insert and update methods are the ones that need to be tested.


import unittest


class ObjectTrackerTest(unittest.TestCase):
    def setUp(self):
        self.tracker = ObjectTracker()

    def tearDown(self):
        pass

    @classmethod
    def setUpClass(cls):
        pass

    @classmethod
    def tearDownClass(cls):
        pass

    def test_insert(self):
        obj = MycustomObjectClass()
        obj.my_unique_field = "secret"
        obj.position = 1
        self.tracker.track(obj)
        self.assertEqual(len(self.tracker.objects), 1)

    def test_update(self):
        obj = MycustomObjectClass()
        obj.my_unique_field = "secret"
        obj.position = 1

        obj2 = MycustomObjectClass()
        obj2.my_unique_field = "asdf"
        obj2.position = 2

        obj3 = MycustomObjectClass()
        obj3.my_unique_field = "secret"
        obj3.position = 3

        self.tracker.track(obj)
        self.tracker.track(obj2)
        self.assertEqual(len(self.tracker.objects), 2)
        self.tracker.track(obj3)
        self.assertEqual(len(self.tracker.objects), 2)

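With the tracker, the custom object and the test case in the same module (the file name is up to you; unittest discovery picks up any test*.py by default), the tests can be run with the standard unittest runner:

python -m unittest discover -v

The second test is the interesting one: tracking a third detection with the same 'secret' field keeps the object count at two, so the match is updated instead of inserted.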

The features

In the OpenCV tutorials there are good examples of feature extraction (and also feature matching) that can be used for your toy feature extractor.
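As a rough sketch of such a toy extractor (ORB is my choice here, as a free alternative to SIFT/SURF that ships with OpenCV; the image path is just a placeholder), keypoints and descriptors for a frame can be computed like this:

import cv2

# Load the frame we want to describe (placeholder path).
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# ORB gives keypoints plus binary descriptors.
orb = cv2.ORB_create()
keypoints, descriptors = orb.detectAndCompute(img, None)

# Draw the keypoints to get a feel for what was detected.
out = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imwrite("frame_keypoints.png", out)

Those descriptors (or some digest of them) are the kind of value you could store in my_unique_field and compare inside __eq__.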

Delimiting our ROI

But if you are playing with a video, you will prefer delimiting ROIs (regions of interest), because processing the whole image is expensive. One technique you can apply is background subtraction; in the example below you can see the result of a background subtractor, to give you an idea of the output of the function.

import cv2

video_path = "video/video.avi"

cap = cv2.VideoCapture(video_path)

# MOG2 background subtractor: learns the background and marks changes.
fgbg = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = cap.read()
    if not ret:
        # End of the video (or a read error): stop the loop.
        break

    fgmask = fgbg.apply(frame)
    cv2.imshow('frame-mask', fgmask)
    cv2.imshow('frame', frame)

    # Press ESC to quit.
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()


(Image: background subtraction result)

The result of that function is simple: it is a mask of the movement, based on the previous state of the scene. Changes in the input frame will be 'marked', so you can use the result of that function to mask your current frame.
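For instance, inside the loop above you could keep only the moving pixels by applying the mask to the current frame (a minimal sketch, reusing frame and fgmask from the previous snippet):

# Keep only the pixels where the background subtractor saw movement.
moving = cv2.bitwise_and(frame, frame, mask=fgmask)
cv2.imshow('moving-only', moving)

Contours or bounding boxes computed on that mask are then a cheap way to delimit the ROIs you feed to your feature extractor.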

On the next episode ...

We will detect the object and extract a few features from it. With the object detected, we will put it into the tracker and track it along the video.
