Quick Overview: Are you struggling to get your YOLO-based vehicle detection pipeline to perform well in real-world conditions? You are not alone. Most teams build something that works in a notebook and falls apart the moment it hits live traffic, bad weather, or a multi-camera setup. The gap between a working demo and a production system is wider than most expect, and this guide is built to close it.
Real-time vehicle monitoring is no longer a futuristic concept. It is now the backbone of smart-city infrastructure, logistics, and highway safety systems worldwide. As traffic volumes increase and infrastructure ages, transportation agencies and companies are turning to deep learning techniques such as YOLO (You Only Look Once) object detection and CNNs, which can detect, classify, and count vehicles at speeds far beyond human capability.
According to TRB-NAS (2023), AI perception systems now reach roughly 94% accuracy. The INRIX Global Traffic Scorecard estimates that traffic congestion alone costs the U.S. economy $87 billion per year.
For an organization building an Intelligent Transportation System (ITS), the implications are very real.
This guide breaks down exactly how YOLO and CNN architectures work for vehicle detection, how to implement real-world pipelines, and what engineering decisions actually matter when you move from a Jupyter notebook to a production traffic monitoring system.
This blog answers questions like:
- How to build a YOLO vehicle detection system from scratch in Python
- What is the best YOLO model for real-time traffic monitoring in 2025 and 2026?
- How can I accurately count vehicles without double-counting using DeepSORT?
- Can YOLOv8 or YOLO11 run on NVIDIA Jetson Nano or Raspberry Pi for edge traffic monitoring?
- How do I improve vehicle detection accuracy at night, in rain, or in fog?
- What datasets should I use to train a custom vehicle detector for highway or city traffic?
- How do I integrate YOLO-based detection with license plate recognition (ANPR)?
- How do smart cities in the US, UK, UAE, India, and Singapore deploy AI traffic analytics?
- How do I handle vehicle occlusion in dense urban traffic with DeepSORT and ReID?
- What does it cost to build an enterprise vehicle monitoring system with AI?
Whether you are an engineer prototyping a traffic AI solution or a CTO evaluating vendors for enterprise deployment, understanding this technology stack will sharpen your decisions at every layer of the build.
Why Traditional Vehicle Monitoring Falls Short and What Computer Vision Changes
Traditional traffic monitoring relied on inductive loops embedded in asphalt, radar guns, and manual counting surveys. These share a common drawback: each measures a single point in isolation, with no visual context, no ability to classify vehicles, and poor performance in bad weather.
Camera-based computer vision solves this comprehensively. A single camera feed processed by a YOLO model can handle multiple detection tasks simultaneously.
Traditional Monitoring vs. Computer Vision: Capability Comparison

The move from sensor-based monitoring systems to vision-based monitoring systems is not merely a technological upgrade. It is an architectural shift toward data richness, and YOLO is the engine driving it.
Understanding YOLO Architecture: Why Speed and Accuracy Both Matter
YOLO’s primary contribution was reframing object detection as a single regression task. Previous architectures, such as R-CNN and Fast R-CNN, followed a two-stage approach: first propose candidate regions, then classify each one. YOLO replaced this with a single pass through one neural network, hence the name You Only Look Once.
In YOLO, the input image is divided into an S×S grid. Each cell predicts B bounding boxes (4 coordinates plus a confidence score each) and C class probabilities, so the prediction tensor has shape S×S×(B×5 + C). This design lets YOLO process frames at 30-150+ FPS depending on hardware, the threshold for genuine real-time processing.
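The tensor arithmetic above is easy to sanity-check in plain Python (the values here are the YOLOv1 defaults on PASCAL VOC: S=7, B=2, C=20):

```python
def yolo_output_size(S, B, C):
    """Number of values in the S x S x (B*5 + C) prediction tensor.

    Each of the B boxes contributes 4 coordinates + 1 confidence score,
    and each grid cell also predicts C class probabilities.
    """
    return S * S * (B * 5 + C)

# YOLOv1 defaults on PASCAL VOC: 7x7 grid, 2 boxes, 20 classes
print(yolo_output_size(7, 2, 20))  # -> 1470
# Same grid with an 80-class COCO-style label set
print(yolo_output_size(7, 2, 80))  # -> 4410
```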
YOLO Version Comparison for Traffic Use Cases
| Version | Speed (GPU) | Key Strength | Best For |
| --- | --- | --- | --- |
| YOLOv5 | 50-140 FPS | Community support, stable | Production-proven systems, legacy integrations |
| YOLOv8 | 45-160 FPS | Segmentation + detection, small objects | Highways, multi-class traffic, ANPR pipelines |
| YOLO11 | 60-180 FPS | Transformer backbone, occlusion handling | Dense urban traffic, smart city ITS deployments |
| YOLO26 | 70-200 FPS | Edge-optimized variants, lowest latency | Jetson edge inference, embedded deployments |
For most production traffic monitoring systems, YOLOv8 or YOLO11 is the best starting point: mature enough to have resolved deployment edge cases and modern enough to meet the accuracy demands of commercial ITS projects.
The CNN Backbone: Feature Extraction That Powers Detection Quality
Every YOLO model is built on a CNN backbone that extracts hierarchical visual features from raw pixel data. Understanding this layer is important when you need to tune detection accuracy for specific conditions, such as nighttime scenes, adverse weather, or partial occlusion.
YOLO models use purpose-built backbones (Darknet, CSPDarknet, C2f) optimized for detection speed rather than classification accuracy. That is the correct trade-off for real-time traffic pipelines.
CNN Pipeline Components in YOLO
| Component | Function | Why It Matters for Vehicle Detection |
| --- | --- | --- |
| Stem / Backbone | Downsamples image, extracts multi-scale features | Captures features from small motorcycles to large trucks in same frame |
| Neck (PAN / FPN) | Combines features across scales | Enables simultaneous detection of near and distant vehicles |
| Detection Head | Outputs boxes, confidence, class probabilities | Per-frame output used by DeepSORT tracker for ID assignment |
For teams building custom vehicle detectors, such as for mining trucks, ambulances, or self-driving delivery robots, transfer learning happens in the backbone. Fine-tuning rather than training from scratch reduces the data and compute required to reach production-level accuracy.
Tip: For vehicle detection tasks, freezing the backbone and fine-tuning only the neck and head achieves 80% or more of the accuracy of full fine-tuning at a fraction of the cost. You can opt for AI-powered MVP Development services to pilot the project before committing fully.
Implementation: Building a YOLO Vehicle Detection Pipeline from Scratch
The following is a step-by-step guide to building custom CNN and YOLO vehicle detection systems. This is the basic architecture CMARIX implements in its traffic monitoring systems.
Step 1: Environment Setup
Install core dependencies. GPU acceleration requires CUDA 11.8+ with PyTorch:
```shell
pip install ultralytics opencv-python-headless numpy torch torchvision
```

For machine learning with Python in production pipelines, always pin dependency versions and use virtual environments to avoid library conflicts across deployment environments.
Step 2: Load Model and Run Inference
```python
from ultralytics import YOLO
import cv2

model = YOLO('yolov8n.pt')  # nano for edge; yolov8x.pt for max accuracy
cap = cv2.VideoCapture('traffic_feed.mp4')

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    results = model(frame, classes=[2, 3, 5, 7])  # car, motorcycle, bus, truck
    annotated = results[0].plot()
    cv2.imshow('Vehicle Detection', annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

The class filter (classes=[2, 3, 5, 7]) uses COCO dataset indices. It immediately halves false positives in traffic scenarios by ignoring pedestrians, animals, and objects irrelevant to vehicle monitoring.
Step 3: Add DeepSORT for Multi-Object Tracking
Detection alone is not sufficient for counting or behavioral analysis. DeepSORT object tracking assigns a persistent unique ID to each vehicle across frames, enabling unique vehicle counting, dwell-time analysis, and trajectory mapping:
```python
from deep_sort_realtime.deepsort_tracker import DeepSort

tracker = DeepSort(max_age=30, n_init=3, nms_max_overlap=0.7)

# In the inference loop:
detections = []
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    conf = box.conf[0].item()
    cls = int(box.cls[0].item())
    detections.append(([x1, y1, x2 - x1, y2 - y1], conf, cls))

tracks = tracker.update_tracks(detections, frame=frame)
for track in tracks:
    if not track.is_confirmed():
        continue
    track_id = track.track_id
    ltrb = track.to_ltrb()  # Persistent bounding box with ID
```

The max_age=30 parameter keeps a track alive for 30 frames after losing detection.
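The effect of max_age can be illustrated with a toy bookkeeping loop (a simplified sketch of the idea, not DeepSORT's actual internals): each track records how many frames have passed since its last matched detection and is dropped once that counter exceeds max_age.

```python
MAX_AGE = 30  # frames a track survives without a matching detection

def age_tracks(tracks, matched_ids):
    """tracks: dict of track_id -> frames since last matched detection."""
    survivors = {}
    for track_id, age in tracks.items():
        # Reset the counter on a match, otherwise age the track by one frame
        new_age = 0 if track_id in matched_ids else age + 1
        if new_age <= MAX_AGE:
            survivors[track_id] = new_age
    return survivors

tracks = {'veh_1': 0, 'veh_2': 29}
tracks = age_tracks(tracks, matched_ids={'veh_1'})  # veh_2 ages to 30, survives
tracks = age_tracks(tracks, matched_ids=set())      # veh_2 ages to 31, dropped
print(tracks)  # -> {'veh_1': 1}
```

Raising max_age tolerates longer occlusions at the cost of more stale tracks lingering in the scene.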
Vehicle Counting and Classification: From Detection to Traffic Analytics
Raw detections are inputs, not outputs. For meaningful Vehicle Counting and Classification, you need virtual counting lines or zones that trigger when a tracked vehicle crosses them:
```python
# Virtual counting line at y=400
LINE_Y = 400
counted_ids = set()
vehicle_counts = {'car': 0, 'bus': 0, 'truck': 0, 'motorcycle': 0}
CLASS_NAMES = {2: 'car', 3: 'motorcycle', 5: 'bus', 7: 'truck'}

for track in confirmed_tracks:
    cx = int((track.to_ltrb()[0] + track.to_ltrb()[2]) / 2)
    cy = int((track.to_ltrb()[1] + track.to_ltrb()[3]) / 2)
    if cy > LINE_Y and track.track_id not in counted_ids:
        counted_ids.add(track.track_id)
        cls_name = CLASS_NAMES.get(track.det_class, 'unknown')
        vehicle_counts[cls_name] = vehicle_counts.get(cls_name, 0) + 1
```

This is helpful for real-time dashboards, traffic optimization systems, and data feeds for AI in logistics and transportation analytics systems. The counted_ids set prevents double-counting, the most common bug in naive vehicle counting systems.
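For zones rather than lines (a turn pocket or a parking entrance, say), a point-in-polygon test on the track centroid works the same way. Below is a minimal ray-casting sketch with a hypothetical zone polygon:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count edge crossings of a horizontal ray from (x, y)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical counting zone covering one intersection approach
ZONE = [(100, 300), (500, 300), (500, 600), (100, 600)]
print(point_in_polygon(300, 450, ZONE))  # -> True
print(point_in_polygon(50, 450, ZONE))   # -> False
```

A track is counted on the frame its centroid first enters the zone, using the same counted_ids guard as the line-crossing approach.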
Automatic Number Plate Recognition (ANPR): Adding Identity to Detection
Detection tells us what is on the road; Automatic Number Plate Recognition (ANPR) tells us who is on the road.
A production ANPR pipeline runs as a two-stage detector:
- Stage 1: YOLO detects the full vehicle bounding box
- Stage 2: A specialized YOLO model crops the license plate region and passes it to an OCR engine (EasyOCR, Tesseract, or PaddleOCR)
```python
import easyocr

reader = easyocr.Reader(['en'])

def extract_plate(frame, plate_box):
    x1, y1, x2, y2 = [int(v) for v in plate_box]
    plate_crop = frame[y1:y2, x1:x2]
    results = reader.readtext(plate_crop)
    if results:
        return max(results, key=lambda r: r[2])[1]  # Highest-confidence text
    return None
```

ANPR accuracy in difficult conditions, such as steep angles, glare, and occlusion, improves most when the system is trained on region-specific plate formats (country, state, and municipality level) rather than on general global datasets.
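OCR output usually needs post-processing before it can be matched against records. The sketch below corrects common OCR character confusions and validates against a hypothetical plate pattern (three letters followed by four digits); a real deployment substitutes the actual regional format:

```python
import re

# Common OCR confusions, corrected by position in the plate
LETTER_FIXES = {'0': 'O', '1': 'I', '5': 'S', '8': 'B'}
DIGIT_FIXES = {'O': '0', 'I': '1', 'S': '5', 'B': '8'}

# Hypothetical format: 3 letters then 4 digits, e.g. ABC1234
PLATE_PATTERN = re.compile(r'^[A-Z]{3}[0-9]{4}$')

def normalize_plate(raw):
    """Strip noise, fix positional confusions, validate the format."""
    text = re.sub(r'[^A-Za-z0-9]', '', raw).upper()
    if len(text) != 7:
        return None
    letters = ''.join(LETTER_FIXES.get(c, c) for c in text[:3])
    digits = ''.join(DIGIT_FIXES.get(c, c) for c in text[3:])
    plate = letters + digits
    return plate if PLATE_PATTERN.match(plate) else None

print(normalize_plate('A8C 12O4'))  # -> 'ABC1204'
print(normalize_plate('???'))       # -> None
```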

Edge AI Deployment: Running YOLO on NVIDIA Jetson and Raspberry Pi
Cloud-based inference introduces unacceptable latency for real-time traffic response systems. Edge AI solves this by performing inference directly on the hardware where the data is captured.
Edge Hardware Comparison for Vehicle Monitoring
| Device | AI Performance | FPS (YOLOv8m) | Best Use Case | Price Range |
| --- | --- | --- | --- | --- |
| NVIDIA Jetson Orin Nano | 40 TOPS | 25-35 FPS | Intersections, parking lots | $150-$250 |
| NVIDIA Jetson AGX Orin | 275 TOPS | 80-120 FPS | Multi-camera highway systems | $600-$900 |
| Raspberry Pi 5 + Hailo-8L | 26 TOPS | 15-25 FPS | Low-traffic zones, parking | $80-$120 |
| Intel NUC + iGPU | 10-15 TOPS | 10-18 FPS | Office parking, private lots | $300-$600 |
TensorRT Optimization for Jetson Deployment
```python
# Export YOLOv8 to a TensorRT engine (run this on the Jetson itself)
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model.export(format='engine', half=True, imgsz=640, device=0)
# Exports yolov8n.engine - 3-5x faster than PyTorch on Jetson with FP16
```

FP16 quantization (half=True) generally yields 2-4x performance gains with less than 1% accuracy loss on vehicle detection tasks.
CMARIX has successfully deployed edge AI for vehicle monitoring systems running on Jetson platforms, with TensorRT-optimized YOLO achieving sub-20ms per-frame inference latency, meeting real-time requirements even in scenarios with 8+ simultaneous camera feeds at intersections.
Building Real-Time Traffic Dashboards: From Raw Inference to Actionable Insight
Building browser-based AI dashboards for traffic monitoring systems requires connecting the Python inference backend to a frontend via WebSockets or REST APIs:
```python
from fastapi import FastAPI, WebSocket
import asyncio, json, time

app = FastAPI()

@app.websocket('/ws/traffic')
async def traffic_stream(websocket: WebSocket):
    await websocket.accept()
    while True:
        data = {
            'timestamp': time.time(),
            'counts': vehicle_counts,
            'active_tracks': len(current_tracks),
            'avg_speed_kmh': calculate_avg_speed()
        }
        await websocket.send_text(json.dumps(data))
        await asyncio.sleep(1)
```

This architecture feeds live count data, track counts, and calculated speed metrics to a browser frontend, making traffic analytics available to operators without requiring them to watch raw video streams.
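One way to implement a calculate_avg_speed() helper, sketched here under the assumption of a fixed ground-sampling factor (METERS_PER_PIXEL, which in practice comes from camera calibration or a homography), is to convert per-frame pixel displacement of each track into km/h:

```python
METERS_PER_PIXEL = 0.05  # assumed calibration factor for this camera
FPS = 25                 # camera frame rate

def calculate_avg_speed(track_displacements):
    """track_displacements: per-frame pixel displacement, one value per track."""
    if not track_displacements:
        return 0.0
    speeds_kmh = [
        d * METERS_PER_PIXEL * FPS * 3.6  # px/frame -> m/s -> km/h
        for d in track_displacements
    ]
    return sum(speeds_kmh) / len(speeds_kmh)

# A vehicle moving 10 px/frame at this calibration is doing 45 km/h
print(round(calculate_avg_speed([10]), 1))  # -> 45.0
```

A flat meters-per-pixel factor only holds for near-overhead cameras; angled views need a perspective transform before this arithmetic is valid.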
From Prototype to Production: What Enterprise Vehicle Monitoring Actually Requires
Getting a YOLO model to work in a Jupyter notebook is a weekend project. Getting it to run reliably across 200 intersection cameras, 24 hours a day, 7 days a week, under varying weather conditions, with 99.5% uptime SLAs is a full engineering program. For organizations lacking specialized in-house expertise, the most efficient path to scale is to hire a dedicated AI development team focused on machine learning development solutions.
The gap between prototype and production in AI surveillance and vehicle monitoring is large. Organizations that have successfully crossed it share common architectural patterns, which CMARIX has observed in AI surveillance software development.
Prototype vs. Production: Architecture Checklist
| Dimension | Prototype | Production (CMARIX Standard) |
| --- | --- | --- |
| Model Updates | Manual weight swap | A/B tested rollout with rollback |
| Accuracy Monitoring | None | Drift detection with auto-alert thresholds |
| Hardware Failure | System goes offline | Failover nodes, hot standby |
| Data Pipeline | Local CSV logs | Kafka streams to TimescaleDB / InfluxDB |
| Compliance | None | GDPR / PDPA / local privacy law adherence |
Teams evaluating whether to build in-house or partner with an enterprise AI software development company should weigh not only model development costs but also the full lifecycle costs of maintaining production computer vision infrastructure at scale.
Training Data: Building or Choosing the Right Vehicle Detection Dataset
Model quality is directly determined by the quality of the training data. For vehicle detection, these are the proven starting points:
| Dataset | Size | Best For | Notes |
| --- | --- | --- | --- |
| UA-DETRAC | 140,000 frames | Dense traffic, occlusion | Chinese highways; excellent for multi-vehicle scenes |
| COCO (vehicle classes) | 120,000+ images | General transfer learning baseline | Not traffic-specialized; fine-tuning required |
| CityScapes | 25,000 frames | Urban city traffic | Dense instance segmentation; strong for smart city deployments |
| Custom Domain Data | 2,000-5,000 per class | Specialized vehicle types | Required for mining trucks, ambulances, regional plates |
For custom dataset creation, Roboflow and CVAT are the standard annotation platforms. Budget approximately 2,000 to 5,000 annotated frames per new vehicle class for fine-tuning an existing YOLO model to production accuracy.
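Annotation tools export to the YOLO label format: one line per object, with a class index and box coordinates normalized to [0, 1]. A small converter from pixel-space boxes makes the format concrete:

```python
def to_yolo_label(cls_id, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space box to a YOLO-format label line:
    '<class> <x_center> <y_center> <width> <height>', all normalized."""
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f'{cls_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}'

# A truck (COCO class 7) box on a 1920x1080 frame
print(to_yolo_label(7, 480, 270, 960, 810, 1920, 1080))
# -> '7 0.375000 0.500000 0.250000 0.500000'
```

Each image gets a matching .txt file with one such line per annotated vehicle.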
Improving Accuracy in Low Light, Rain, and Adverse Conditions
Accuracy on clean daytime footage is not indicative of how a model will perform at 2 AM in the rain. IEEE research on deep learning robustness to adverse weather (2023) found that standard YOLO models can lose 20-35% of their accuracy in such conditions.
A layered approach to robustness addresses this:
- Augmentation during training: Utilize the albumentations library to introduce low light, rain, fog, and motion blur during the training phase itself (RandomBrightnessContrast, RandomFog, MotionBlur)
- Night-specific models: Train separate model weights on the night-time dataset and implement time-of-day switching during inference.
- Infrared camera integration: With infrared cameras, the dependency on light is removed, allowing YOLO models to be trained on infrared images.
- CLAHE preprocessing: Contrast-Limited Adaptive Histogram Equalization can be applied as a preprocessing step before the inference phase.
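The time-of-day switching in the second point can be as simple as picking weight files by hour (the filenames here are placeholders; production systems often key off sun angle or a measured scene brightness instead):

```python
# Placeholder weight files for illustration
DAY_WEIGHTS = 'vehicle_day.pt'
NIGHT_WEIGHTS = 'vehicle_night.pt'

def select_weights(hour, day_start=6, night_start=19):
    """Return the weight file to load for a given local hour (0-23)."""
    return DAY_WEIGHTS if day_start <= hour < night_start else NIGHT_WEIGHTS

print(select_weights(14))  # -> 'vehicle_day.pt'
print(select_weights(2))   # -> 'vehicle_night.pt'
```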
The CLAHE step from the last point looks like this in OpenCV:

```python
import cv2

def preprocess_low_light(frame):
    # Convert to LAB and equalize only the lightness channel
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    l = clahe.apply(l)
    enhanced = cv2.merge([l, a, b])
    return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)
```

Handling Occlusion: Tracking Vehicles When They Block Each Other
Heavy traffic guarantees constant occlusion: buses hide cars, and trucks block the view of adjacent lanes. Without explicit occlusion handling, tracking systems lose vehicle identities whenever a significant portion of a vehicle is hidden.
Production-grade approaches to occlusion:
| Technique | Simple Meaning | Why It Is Useful |
| --- | --- | --- |
| ReID Models | Recognizes the same vehicle by its appearance. | Helps the system give the same ID to a vehicle when it reappears after being hidden. |
| Kalman Filter Prediction | Predicts where the vehicle will move next. | Keeps tracking the vehicle even when it is not visible for a few frames. |
| Multi-Camera Triangulation | Uses multiple cameras covering the same area. | If one camera cannot see the vehicle, another camera can still track it. |
| IOU Threshold Tuning | Adjusts how bounding boxes are matched. | Prevents wrong ID assignments when vehicles overlap or are very close. |
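The IoU matching in the last row is worth seeing concretely. This is the standard intersection-over-union computation on [x1, y1, x2, y2] boxes, with a typical matching threshold noted:

```python
def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes are disjoint)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Two overlapping vehicle boxes sharing half their width
print(round(iou([0, 0, 100, 100], [50, 0, 150, 100]), 3))  # -> 0.333
# At a 0.5 matching threshold these would NOT be assigned the same track
```

Lowering the threshold tolerates looser matches in dense traffic but raises the risk of ID swaps between adjacent vehicles.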
For high-occlusion scenarios such as toll booths and parking garages, engineering teams at CMARIX have found that combining YOLO11’s improved small-object detection with ReID reduces ID-swap errors by 40-60% compared to a baseline of DeepSORT with YOLOv5.
IoT Integration: Connecting Vehicle Monitoring to the Broader Transportation Stack
While standalone vehicle detection systems are beneficial, connected systems are transformative.
IoT integration extends vehicle detection into the broader transportation stack. Many municipalities are also looking beyond vehicles toward a unified security stack, integrating an AI-driven enterprise face recognition platform so that vehicle and pedestrian safety are managed under a single intelligent umbrella.
- Traffic signal management: Vehicle detection provides real-time vehicle counts as input to adaptive signal control algorithms (SCOOT, SCATS), reducing congestion at intersections by 15-30%.
- Fleet management systems: ANPR feed can be used in conjunction with telematics systems to automatically capture arrival/departure times.
- Emergency response management: Vehicle detection can identify abnormalities in vehicle movement, such as stationary vehicles or wrong-way drivers, triggering automatic alerts to the traffic management center.
- Predictive maintenance: Computer vision-based monitoring of heavy vehicle undercarriages can be used to detect mechanical abnormalities before roadside breakdowns occur.
The data architecture for connecting the systems typically employs MQTT for edge-to-cloud messaging, Apache Kafka for high-throughput stream processing, and TimescaleDB/InfluxDB for time-series data storage.
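As a sketch of the edge-to-cloud contract (the topic hierarchy and payload fields here are illustrative conventions, not a standard), each edge node can publish per-interval counts as a JSON message on a structured MQTT topic:

```python
import json, time

def build_traffic_message(city, intersection_id, counts):
    """Build an MQTT (topic, payload) pair for one reporting interval."""
    topic = f'traffic/{city}/{intersection_id}/counts'
    payload = json.dumps({
        'ts': int(time.time()),
        'intersection': intersection_id,
        'counts': counts,
    })
    return topic, payload

topic, payload = build_traffic_message('austin', 'int-042', {'car': 18, 'truck': 3})
print(topic)  # -> 'traffic/austin/int-042/counts'
```

Downstream, a Kafka consumer can subscribe to the wildcard topic traffic/+/+/counts and write the time-series into TimescaleDB or InfluxDB.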
YOLO Vehicle Monitoring Across Global Deployments: Smart City and Regional Contexts
Vehicle monitoring needs differ significantly with geography, traffic patterns, regulation, and infrastructure maturity. We see this in client work: technical priorities vary sharply by region.
| Region | Key Deployment Context | Technical Priority | Common Use Case |
| --- | --- | --- | --- |
| USA / Canada | Enterprise-grade vehicle monitoring | High FPS, multi-lane detection | Adaptive signal control, freeway monitoring |
| UK / Europe | ANPR-heavy enforcement, GDPR compliance | Plate reading accuracy, data privacy | Congestion charge zones, bus lane enforcement |
| UAE / Saudi Arabia | Smart city infrastructure (Dubai, NEOM) | Edge AI for harsh heat conditions | Expressway analytics, toll automation |
| India | Dense urban traffic, mixed vehicle types | Occlusion handling, class diversity | Traffic police analytics, smart city mission |
| Singapore / SEA | ERP (Electronic Road Pricing), port monitoring | Sub-10ms latency, ANPR precision | ERP toll enforcement, port vehicle tracking |
| Australia | Mining vehicle safety, rural highways | Custom vehicle classes, low-connectivity edge | Mine site safety zones, outback highway cameras |
For organizations in these geographies seeking YOLO vehicle detection solutions, edge AI traffic analytics, or real-time ANPR solutions, CMARIX offers regionally aware solutions that account for local traffic patterns, regulatory requirements, and infrastructure limitations.
Building Enterprise-Grade Vehicle Monitoring: Architecture, Team, and Partner Decisions
System Architecture
A cloud-native, microservices-based architecture starts with IoT gateways that collect data from vehicle sensors such as GPS units, telematics devices, and cameras.
AWS IoT Core or Azure IoT Hub can handle real-time data ingestion over MQTT, while Apache Kafka running on Kubernetes scales stream processing to millions of vehicles. AI/ML adds anomaly detection and predictive maintenance on top, and encryption with zero-trust security keeps the platform compliant with regulations such as HIPAA and GDPR.
Team Structure
Create a federated enterprise architecture team with an Enterprise Architecture Lead at the helm and 8 to 12 other members. The key roles in this team are:
| Role | Number of Specialists | Key Focus Area |
| --- | --- | --- |
| IoT Specialists | 3–4 | Device connectivity, sensor integration, telematics data capture |
| Data Engineers | 2 | Data pipelines, real-time fleet data processing, analytics readiness |
| DevOps Engineers | 2 | Infrastructure automation, CI/CD, system reliability |
| Security Experts | 1–2 | Device security, data protection, compliance |
| Product Owner | 1 | Fleet KPIs, product direction, stakeholder alignment |
Partner Selection
Identify technology partners for each layer of the stack. For example:
- IoT infrastructure: AWS
- Edge hardware: Qualcomm, NVIDIA
- Telematics: Samsara, Verizon
It is worth hiring a dedicated AI development team to run this evaluation: structured RFPs scored on quantifiable parameters such as uptime SLA (>99.99%), API maturity, integration flexibility, and cost per vehicle make partner selection defensible.
Start with a controlled proof-of-concept for features such as geofencing and OMS validation. This validates technical feasibility, tests each partner's performance, and reduces the risk of long-term lock-in before the platform scales to the entire fleet.
| Technology Layer | Evaluation Criteria | Example Vendors |
| --- | --- | --- |
| Cloud/IoT | Scalability, Security | AWS, Azure |
| Hardware | Edge Processing | Qualcomm, NVIDIA |
| Telematics | Real-time Data | Samsara, Geotab |
If your organization is planning to implement Artificial Intelligence in traffic monitoring, fleet intelligence, and transportation technology solutions, we at CMARIX can guide you in making your dream a reality with an implementation roadmap.
Conclusion
YOLO and CNN architectures are no longer research tools; they are production-ready solutions for real-time vehicle detection and monitoring. The technology works, and it works well. The real question for any organization is not whether the technology is ready, but whether its processes, implementation, and infrastructure are ready to support it.
The gap between a detection demo and a production traffic monitoring system is where the real engineering decisions live: dataset quality, edge hardware, tracker optimization, bad-weather robustness, IoT integration, and visualization. These are far more complex, and demand more expertise, than model selection alone.
CMARIX brings that full-stack expertise to transportation and enterprise AI projects, from expert AI consulting services at the architecture stage through to production deployment and ongoing model maintenance. If you are building a vehicle monitoring system that needs to work in the real world and not just in a benchmark, contact CMARIX to discuss your requirements. The infrastructure intelligence for the smart cities of the future is being developed today. The teams that get the engineering right in model selection, edge computing, tracking architecture, and operational resiliency will set the bar for AI in logistics and transportation for the next decade.
FAQs for YOLO Vehicle Detection
How do I track unique vehicles and avoid double-counting with YOLO?
Pair the YOLO model with a tracking algorithm such as DeepSORT or ByteTrack. Each vehicle is assigned a persistent unique ID, which solves the double-counting problem.
Can I run YOLOv8/YOLO11 on edge devices like Raspberry Pi or NVIDIA Jetson?
Yes. YOLOv8 and YOLO11 run efficiently on the NVIDIA Jetson platform. Raspberry Pi 4 and 5 can also run the smaller models, typically at reduced resolution and frame rate.
How can I improve YOLO vehicle detection accuracy at night or in low light?
You can improve YOLO’s vehicle detection accuracy at night and in poor lighting by including night and low-light images in the training dataset. Contrast-Limited Adaptive Histogram Equalization (CLAHE) preprocessing and infrared cameras also help.
What is the best dataset for training a custom vehicle detector?
Some popular datasets include the COCO dataset, which is generally good for object detection; the BDD100K dataset, which is great for detecting various driving scenarios; the UA-DETRAC dataset, which is great for surveillance scenarios involving traffic; and the Cityscapes dataset.
How do I handle occlusion in heavy traffic?
Tracking algorithms such as ByteTrack, which maintain an object’s ID even when it is temporarily hidden, help here. Including partially occluded vehicle images in the training set, and using multiple cameras with a bird’s-eye-view transform, also improve robustness.
Traffic AI Decoder: Abbreviations and Full Forms Used in This Guide
| Abbreviation | Full Form |
| --- | --- |
| YOLO | You Only Look Once |
| CNN | Convolutional Neural Network |
| ANPR | Automatic Number Plate Recognition |
| ITS | Intelligent Transportation System |
| IoT | Internet of Things |
| GPU | Graphics Processing Unit |
| CUDA | Compute Unified Device Architecture |
| FPS | Frames Per Second |
| ReID | Re-Identification |
| CLAHE | Contrast Limited Adaptive Histogram Equalization |
| MQTT | Message Queuing Telemetry Transport |
| API | Application Programming Interface |
| OCR | Optical Character Recognition |
| SLA | Service Level Agreement |
| POC | Proof of Concept |