Object tracking algorithms play a crucial role in computer vision applications such as surveillance systems, video analysis, and augmented reality. These algorithms let us follow the movement of specific objects within a video stream. In this article, we will explore two popular object tracking algorithms, CamShift and Optical Flow, and implement them using OpenCV and Python.
The CamShift (Continuously Adaptive Mean Shift) algorithm is a robust method for object tracking in videos. It is based on the Mean Shift algorithm but incorporates adaptability to handle dynamic changes in object scale and orientation.
To implement the CamShift algorithm, we need the following dependencies:
pip install opencv-python
pip install numpy
import cv2
import numpy as np

video = cv2.VideoCapture('path_to_input_video.mp4')

# Read the first frame and let the user select a region of interest (ROI)
ret, frame = video.read()
x, y, w, h = cv2.selectROI(frame)
track_window = (x, y, w, h)
roi = frame[y:y+h, x:x+w]

# Convert the ROI to HSV color space
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

# Calculate the hue histogram of the ROI and normalize it to [0, 255]
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Termination criteria for the mean shift iterations:
# stop after 10 iterations or when the window moves by less than 1 pixel
termination_criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ret, frame = video.read()
    if not ret:
        break

    # Convert the frame to HSV color space
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Calculate the histogram backprojection
    backprojection = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

    # Apply CamShift: it returns a rotated rectangle for drawing and the
    # updated axis-aligned window, which is fed back in on the next frame
    rotated_rect, track_window = cv2.CamShift(backprojection, track_window, termination_criteria)

    # Draw the tracked object on the frame
    points = cv2.boxPoints(rotated_rect).astype(np.int32)
    cv2.polylines(frame, [points], True, (0, 255, 0), 2)

    cv2.imshow('Object Tracking', frame)
    if cv2.waitKey(30) == 27:  # Press ESC to quit
        break

video.release()
cv2.destroyAllWindows()
The Optical Flow algorithm estimates the motion of objects between two consecutive frames. It works by analyzing local changes in image intensity and computing displacement vectors, either for every pixel (dense flow) or for a sparse set of feature points, as in the Lucas-Kanade method used below.
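The standard derivation starts from the brightness constancy assumption: a pixel keeps its intensity as it moves. Writing the image as I(x, y, t), a first-order Taylor expansion gives the optical flow constraint equation:

```latex
I(x + \Delta x,\; y + \Delta y,\; t + \Delta t) \approx I(x, y, t)
\quad\Rightarrow\quad
I_x u + I_y v + I_t = 0
```

where I_x, I_y, I_t are the partial derivatives of the image and (u, v) is the velocity at that pixel. One equation in two unknowns cannot be solved per pixel (the aperture problem); the Lucas-Kanade method resolves this by assuming the flow is constant within a small window around each feature point and solving the resulting overdetermined system by least squares.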
import cv2
import numpy as np

video = cv2.VideoCapture('path_to_input_video.mp4')

# Parameters for Lucas-Kanade optical flow
lk_params = dict(winSize=(15, 15),
                 maxLevel=4,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

# Read the initial frame and detect good features to track
_, prev_frame = video.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.3, minDistance=7)

while True:
    ret, frame = video.read()
    if not ret:
        break

    # Convert the current frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Calculate the optical flow using the Lucas-Kanade method
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None, **lk_params)

    # Keep only the points that were successfully tracked
    good_new = curr_pts[status == 1]
    good_old = prev_pts[status == 1]

    # If every point was lost, re-detect features and continue
    if len(good_new) == 0:
        prev_gray = gray
        prev_pts = cv2.goodFeaturesToTrack(gray, maxCorners=100, qualityLevel=0.3, minDistance=7)
        continue

    # Draw the motion vectors on the frame (OpenCV expects integer coordinates)
    for new, old in zip(good_new, good_old):
        a, b = new.ravel().astype(int)
        c, d = old.ravel().astype(int)
        frame = cv2.line(frame, (a, b), (c, d), (0, 255, 0), 2)
        frame = cv2.circle(frame, (a, b), 5, (0, 255, 0), -1)

    cv2.imshow('Optical Flow', frame)
    if cv2.waitKey(30) == 27:  # Press ESC to quit
        break

    # Update the previous frame and points for the next iteration
    prev_gray = gray
    prev_pts = good_new.reshape(-1, 1, 2)

video.release()
cv2.destroyAllWindows()
Object tracking algorithms, such as CamShift and Optical Flow, are powerful tools for monitoring and analyzing object motion in video streams. In this article, we have demonstrated their implementation using OpenCV and Python. By applying these algorithms, you can build robust object tracking systems for various computer vision applications. Experiment with different videos and tune the parameters to achieve accurate and efficient object tracking.
noob to master © copyleft