
Roboflow Supervision: Building an End to End Object Tracking and Analytical System

Tech · By Gavin Wallace · 03/08/2025 · 6 Mins Read

In this advanced Roboflow Supervision tutorial, we build a complete object detection pipeline using the Supervision library. ByteTrack lets us track objects in real time, and we add detection smoothing and define polygon zones over specific regions of the video stream. As we process frames, we annotate them with bounding boxes, object IDs, and speed information, which lets us analyze and track object behavior in real time. The goal of this tutorial is to show how detection, object tracking, zone-based analysis, and visual annotation integrate into one seamless, intelligent workflow.

!pip install supervision ultralytics opencv-python
!pip install --upgrade supervision


import cv2
import numpy as np
import supervision as sv
from ultralytics import YOLO
import matplotlib.pyplot as plt
from collections import defaultdict


model = YOLO('yolov8n.pt')

We first install the required packages, including Supervision and Ultralytics, and upgrade to the latest Supervision release. We then import all the libraries and initialize YOLOv8n, the model that serves as the main detector of our pipeline.
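Because the try/except fallbacks later in the tutorial depend on which library versions are installed, it can help to confirm the dependencies before building the pipeline. A minimal sketch using only the standard library:

```python
import importlib.util

# Report which of the tutorial's dependencies are importable in this environment.
for name in ("supervision", "ultralytics", "cv2", "matplotlib"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'found' if found else 'missing - run the pip installs above'}")
```

Running this before the pipeline code gives an immediate hint about which fallback branches will be taken.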

try:
    tracker = sv.ByteTrack()
except AttributeError:
    try:
        tracker = sv.ByteTracker()
    except AttributeError:
        print("Using basic tracking - install latest supervision for advanced tracking")
        tracker = None


try:
    smoother = sv.DetectionsSmoother(length=5)
except AttributeError:
    smoother = None
    print("DetectionsSmoother not available in this version")


try:
    box_annotator = sv.BoundingBoxAnnotator(thickness=2)
    label_annotator = sv.LabelAnnotator()
    if hasattr(sv, 'TraceAnnotator'):
        trace_annotator = sv.TraceAnnotator(thickness=2, trace_length=30)
    else:
        trace_annotator = None
except AttributeError:
    try:
        box_annotator = sv.BoxAnnotator(thickness=2)
        label_annotator = sv.LabelAnnotator()
        trace_annotator = None
    except AttributeError:
        print("Using basic annotators - some features may be limited")
        box_annotator = None
        label_annotator = None
        trace_annotator = None


def create_zones(frame_shape):
    h, w = frame_shape[:2]
  
   try:
       entry_zone = sv.PolygonZone(
           polygon=np.array([[0, h//3], [w//3, h//3], [w//3, 2*h//3], [0, 2*h//3]]),
           frame_resolution_wh=(w, h)
       )
      
       exit_zone = sv.PolygonZone(
           polygon=np.array([[2*w//3, h//3], [w, h//3], [w, 2*h//3], [2*w//3, 2*h//3]]),
           frame_resolution_wh=(w, h)
       )
    except TypeError:
       entry_zone = sv.PolygonZone(
           polygon=np.array([[0, h//3], [w//3, h//3], [w//3, 2*h//3], [0, 2*h//3]])
       )
       exit_zone = sv.PolygonZone(
           polygon=np.array([[2*w//3, h//3], [w, h//3], [w, 2*h//3], [2*w//3, 2*h//3]])
       )
  
   return entry_zone, exit_zone

We set up all of the Supervision library's essential components: object tracking with ByteTrack, optional smoothing with DetectionsSmoother, and flexible annotators for bounding boxes, labels, and traces. Try-except blocks keep the code compatible across library versions. We also define polygon zones dynamically from the frame dimensions to monitor specific areas, such as entry and exit regions, which enables more advanced spatial analytics.
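Both zones defined above are axis-aligned rectangles. In the library, entry_zone.trigger(detections) returns a boolean mask of which detections fall inside the zone; the same check can be sketched with plain NumPy (centers_in_rect is a hypothetical helper for illustration, not part of the Supervision API):

```python
import numpy as np

def centers_in_rect(xyxy, rect):
    """Return a boolean mask of which box centers lie inside rect = (x1, y1, x2, y2)."""
    cx = (xyxy[:, 0] + xyxy[:, 2]) / 2
    cy = (xyxy[:, 1] + xyxy[:, 3]) / 2
    x1, y1, x2, y2 = rect
    return (cx >= x1) & (cx <= x2) & (cy >= y1) & (cy <= y2)

h, w = 480, 640
entry_rect = (0, h // 3, w // 3, 2 * h // 3)        # left-band entry zone, as above
boxes = np.array([[50.0, 200.0, 100.0, 250.0],      # center (75, 225) -> inside
                  [500.0, 200.0, 550.0, 250.0]])    # center (525, 225) -> outside
print(centers_in_rect(boxes, entry_rect))
```

Counting True entries in such a mask per frame is one simple way to feed the zone-crossing statistics.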

class AdvancedAnalytics:
   def __init__(self):
       self.track_history = defaultdict(list)
       self.zone_crossings = {"entry": 0, "exit": 0}
       self.speed_data = defaultdict(list)
      
   def update_tracking(self, detections):
       if hasattr(detections, 'tracker_id') and detections.tracker_id is not None:
           for i in range(len(detections)):
               track_id = detections.tracker_id[i]
                if track_id is not None:
                   bbox = detections.xyxy[i]
                    center = np.array([(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2])
                   self.track_history[track_id].append(center)
                  
                   if len(self.track_history[track_id]) >= 2:
                       prev_pos = self.track_history[track_id][-2]
                       curr_pos = self.track_history[track_id][-1]
                       speed = np.linalg.norm(curr_pos - prev_pos)
                       self.speed_data[track_id].append(speed)
  
   def get_statistics(self):
       total_tracks = len(self.track_history)
       avg_speed = np.mean([np.mean(speeds) for speeds in self.speed_data.values() if speeds])
       return {
           "total_objects": total_tracks,
           "zone_entries": self.zone_crossings["entry"],
           "zone_exits": self.zone_crossings["exit"],
            "avg_speed": avg_speed if not np.isnan(avg_speed) else 0
       }


def process_video(source=0, max_frames=300):
    """
    Process a video source with advanced surveillance features.
    source: 0 for webcam, or a path to a video file
    max_frames: limit processing for the demo
    """
    cap = cv2.VideoCapture(source)
    analytics = AdvancedAnalytics()

    ret, frame = cap.read()
    if not ret:
        print("Failed to read video source")
        return None

    entry_zone, exit_zone = create_zones(frame.shape)
  
   try:
       entry_zone_annotator = sv.PolygonZoneAnnotator(
           zone=entry_zone,
           color=sv.Color.GREEN,
           thickness=2
       )
       exit_zone_annotator = sv.PolygonZoneAnnotator(
           zone=exit_zone,
           color=sv.Color.RED,
           thickness=2
       )
    except (AttributeError, TypeError):
       entry_zone_annotator = sv.PolygonZoneAnnotator(zone=entry_zone)
       exit_zone_annotator = sv.PolygonZoneAnnotator(zone=exit_zone)
  
   frame_count = 0
   results_frames = []
  
   cap.set(cv2.CAP_PROP_POS_FRAMES, 0) 
  
    # Main processing loop: detect, track, smooth, annotate, and collect frames
    while ret and frame_count < max_frames:
        results = model(frame, verbose=False)[0]
        detections = sv.Detections.from_ultralytics(results)

        if tracker is not None:
            detections = tracker.update_with_detections(detections)
        if smoother is not None:
            detections = smoother.update_with_detections(detections)

        analytics.update_tracking(detections)

        # Update the zone counts so the zone annotators display live numbers
        entry_zone.trigger(detections)
        exit_zone.trigger(detections)

        annotated = frame.copy()
        if box_annotator is not None:
            annotated = box_annotator.annotate(annotated, detections)
        if label_annotator is not None:
            annotated = label_annotator.annotate(annotated, detections)
        if trace_annotator is not None:
            annotated = trace_annotator.annotate(annotated, detections)
        annotated = entry_zone_annotator.annotate(annotated)
        annotated = exit_zone_annotator.annotate(annotated)

        results_frames.append(annotated)
        frame_count += 1
        ret, frame = cap.read()

    cap.release()
    print("Final statistics:", analytics.get_statistics())
    return analytics

AdvancedAnalytics is a class that tracks object movements, calculates speeds, and counts zone crossings, providing rich real-time insights into the video. We read every frame of the source video and pass it through our pipeline for detection, smoothing, and tracking. Annotating frames with bounding boxes, labels, zone overlays, and live statistics gives us an effective, flexible tool for object monitoring. We collect data inside the loop, and finally print the overall statistics to show the full capability of the Roboflow Supervision system.
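The speed figure in update_tracking is simply the Euclidean displacement of a track's box center between consecutive frames, measured in pixels per frame. A minimal self-contained sketch of that calculation, using plain (track_id, bbox) tuples as hypothetical stand-ins for tracked detections:

```python
import numpy as np
from collections import defaultdict

track_history = defaultdict(list)
speed_data = defaultdict(list)

# Two consecutive frames of observations for a single tracked object (id 7)
frames = [
    [(7, np.array([50.0, 200.0, 100.0, 250.0]))],   # center (75, 225)
    [(7, np.array([54.0, 203.0, 104.0, 253.0]))],   # center (79, 228)
]

for detections in frames:
    for track_id, bbox in detections:
        center = np.array([(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2])
        track_history[track_id].append(center)
        if len(track_history[track_id]) >= 2:
            # Speed = distance moved between consecutive frames (pixels/frame)
            speed = np.linalg.norm(track_history[track_id][-1] - track_history[track_id][-2])
            speed_data[track_id].append(speed)

print(speed_data[7])  # displacement of (4, 3) pixels -> speed 5.0
```

To convert this into real-world units you would multiply by the frame rate and a pixels-to-meters calibration factor, neither of which the tutorial's pipeline attempts.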

def create_demo_video():
   """Create a simple demo video with moving objects"""
   fourcc = cv2.VideoWriter_fourcc(*'mp4v')
   out = cv2.VideoWriter('demo.mp4', fourcc, 20.0, (640, 480))
  
   for i in range(100):
       frame = np.zeros((480, 640, 3), dtype=np.uint8)
      
       x1 = int(50 + i * 2)
       y1 = 200
       x2 = int(100 + i * 1.5)
       y2 = 250
      
       cv2.rectangle(frame, (x1, y1), (x1+50, y1+50), (0, 255, 0), -1)
       cv2.rectangle(frame, (x2, y2), (x2+50, y2+50), (255, 0, 0), -1)
      
       out.write(frame)
  
   out.release()
    return "demo.mp4"


demo_video = create_demo_video()
analytics = process_video(demo_video, max_frames=100)


print("\nTutorial completed! Key features demonstrated:")
print("✓ YOLO integration with Supervision")
print("✓ Multi-object tracking with ByteTracker")
print("✓ Detection smoothing")
print("✓ Polygon zones for area monitoring")
print("✓ Advanced annotations (boxes, labels, traces)")
print("✓ Real-time analytics and statistics")
print("✓ Speed calculation and tracking history")

We generate a synthetic demo video with two moving rectangles to simulate tracked objects, which lets us validate detection, tracking, and zone monitoring without a real input. We run process_video on the generated clip and then print a summary of all the features we implemented, demonstrating the power of Roboflow Supervision for real-time analytics.
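The filled rectangles that create_demo_video draws with cv2.rectangle can also be produced with plain NumPy slicing, which makes the synthetic-motion idea easy to verify without writing a file (make_demo_frame is a hypothetical helper, not part of the article's code):

```python
import numpy as np

def make_demo_frame(i, h=480, w=640):
    """Frame i of the synthetic clip: two filled squares drifting right."""
    frame = np.zeros((h, w, 3), dtype=np.uint8)
    x1 = 50 + i * 2          # green square: 2 px/frame, matching the demo video
    x2 = int(100 + i * 1.5)  # blue square: ~1.5 px/frame
    frame[200:250, x1:x1 + 50] = (0, 255, 0)   # BGR green
    frame[250:300, x2:x2 + 50] = (255, 0, 0)   # BGR blue
    return frame

f0, f1 = make_demo_frame(0), make_demo_frame(1)
# Between frames 0 and 1 the green square's left edge moves from x=50 to x=52.
print(bool((f0[225, 50] == (0, 255, 0)).all()), bool((f1[225, 50] == 0).all()))
```

The known per-frame motion (2 px and 1.5 px) also gives an expected value to compare against the avg_speed statistic the pipeline reports.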

In conclusion, our pipeline successfully integrates object detection, tracking, zone monitoring, and real-time analytics. We showed how annotated video frames yield key insights such as object speed, zone crossings, and track history. This system moves beyond simple detection toward a smarter surveillance and analytics solution, and it gives us a solid foundation for production or research use.

