"Explore YOLOv8n for efficient, real-time object detection applications."
By Niraj Ghetiya
09/28/2024
YOLO (You Only Look Once) is a state-of-the-art, real-time object detection algorithm that revolutionized computer vision tasks. Over the years, YOLO has gone through multiple iterations, from YOLOv1 to YOLOv8. Each version has improved in terms of accuracy, speed, and flexibility. YOLOv8, released by Ultralytics, is one of the latest and most efficient versions in the YOLO family, featuring advancements in architecture, modularity, and extensibility.
YOLOv8n, where "n" stands for nano, is a lightweight variant of the YOLOv8 model. It is designed for applications where computational efficiency and speed are prioritized over extreme accuracy. While it may not be as accurate as the larger YOLOv8 variants (like YOLOv8s, YOLOv8m, or YOLOv8l), YOLOv8n excels in edge and mobile device applications where resources are limited.
Key trade-offs include:

- **Accuracy**: YOLOv8n gives up some detection accuracy (mAP) compared with the larger variants.
- **Model size**: at roughly 3.2M parameters, it has a much smaller memory footprint than YOLOv8s, YOLOv8m, or YOLOv8l.
- **Speed**: it delivers the highest throughput in the family, which is what makes it viable for real-time and embedded use.
Before setting up YOLOv8n, ensure you have the following installed on your system:

- Python 3.8 or later
- pip (the Python package manager)
- Optionally, a CUDA-capable GPU with a matching PyTorch build for faster training and inference
You can install YOLOv8n using the `ultralytics` package, which includes all the YOLOv8 models, including the nano variant:

```bash
pip install ultralytics
```
Once installed, you can verify the installation by running:

```bash
yolo help
```
This command will display available options and verify that YOLOv8 is correctly installed.
YOLOv8n can be used for object detection out of the box with pre-trained weights.
You can load the pre-trained YOLOv8n model with just a few lines of code:
```python
from ultralytics import YOLO

# Load the pre-trained YOLOv8n model (weights are downloaded on first use)
model = YOLO('yolov8n.pt')
```
Once the model is loaded, you can run inference on an image:
```python
# Inference on an image
results = model('path/to/your/image.jpg')

# Show the results; model() returns a list of Results objects, one per image
results[0].show()
```
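In practice you usually post-process the raw detections, for example by dropping low-confidence boxes or keeping only certain classes. The sketch below illustrates that step on plain `(class_name, confidence, box)` tuples rather than the `Results` API, with hypothetical detection data for illustration:

```python
# A minimal sketch of typical detection post-processing. Each detection is a
# (class_name, confidence, (x1, y1, x2, y2)) tuple, similar to what you might
# extract from results[0].boxes. The data below is hypothetical.

def filter_detections(detections, min_conf=0.5, classes=None):
    """Keep detections above a confidence threshold, optionally by class."""
    kept = []
    for name, conf, box in detections:
        if conf < min_conf:
            continue
        if classes is not None and name not in classes:
            continue
        kept.append((name, conf, box))
    return kept

detections = [
    ('person', 0.91, (34, 50, 120, 300)),
    ('dog', 0.42, (200, 180, 320, 260)),
    ('car', 0.78, (400, 100, 640, 280)),
]

# Drops the 0.42-confidence 'dog' detection
print(filter_detections(detections, min_conf=0.5))
```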
Running YOLOv8n on a video file is similar to images:
```python
# Inference on a video; save=True writes the annotated output video
# under the given project directory
results = model('path/to/your/video.mp4', save=True, project='path/to/save/directory')
```
YOLOv8n offers significant improvements over earlier versions in terms of speed and efficiency. The following table summarizes some benchmarks for YOLOv8n compared to other YOLOv8 variants:
| Model | Parameters | mAP@0.5 | Speed (FPS) |
|---------|------------|---------|-------------|
| YOLOv8n | 3.2M | 50.1% | 120 |
| YOLOv8s | 11.2M | 54.5% | 90 |
| YOLOv8m | 25.9M | 56.8% | 60 |
| YOLOv8l | 43.7M | 59.9% | 40 |
YOLOv8n stands out as an ideal choice for scenarios where speed and minimal computational load are essential, making it suitable for real-time object detection.
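For latency-sensitive work it can be more intuitive to read the table's throughput figures as per-frame latency, since `latency_ms = 1000 / FPS`:

```python
# Per-frame latency implied by the FPS figures in the table above.
fps = {'YOLOv8n': 120, 'YOLOv8s': 90, 'YOLOv8m': 60, 'YOLOv8l': 40}

for model_name, rate in fps.items():
    print(f"{model_name}: {1000 / rate:.1f} ms/frame")
# YOLOv8n → 8.3 ms/frame, YOLOv8l → 25.0 ms/frame
```

At these numbers, YOLOv8n leaves roughly three times the per-frame time budget of YOLOv8l free for other work in a real-time pipeline.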
To fine-tune YOLOv8n for a custom object detection task, you first need to prepare a dataset. YOLOv8 expects annotations in YOLO format: one `.txt` file per image, where each line contains a class index followed by normalized bounding-box coordinates (`class x_center y_center width height`).
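If your annotations are in pixel coordinates, they need to be normalized into the YOLO label format. A small sketch of that conversion (the helper name is ours, but the normalization itself is the standard YOLO convention):

```python
def to_yolo_label(class_id, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space box (x1, y1, x2, y2) to a YOLO-format label line:
    class index, then center x/y and width/height, all normalized to [0, 1]."""
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# A 100x200-pixel box centered at (320, 240) in a 640x480 image:
print(to_yolo_label(0, 270, 140, 370, 340, 640, 480))
# → 0 0.500000 0.500000 0.156250 0.416667
```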
A typical dataset structure looks like this:
```
dataset/
├── images/
│   ├── train/
│   └── val/
└── labels/
    ├── train/
    └── val/
```
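The `dataset.yaml` passed to training ties this layout together. A minimal sketch, assuming the structure above; the paths and class names here are placeholders for illustration:

```yaml
# dataset.yaml — minimal example matching the layout above
path: dataset          # dataset root directory
train: images/train    # training images (relative to path)
val: images/val        # validation images (relative to path)
names:
  0: person
  1: car
```

Label files in `labels/train/` and `labels/val/` are matched to images by filename, so `images/train/img001.jpg` pairs with `labels/train/img001.txt`.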
Once your data is prepared, you can start training YOLOv8n:
```python
# Train the YOLOv8n model on a custom dataset
model.train(data='path/to/dataset.yaml', epochs=100, imgsz=640)
```
After training, you can evaluate the model's performance on your validation set:
```python
# Evaluate the trained model on the validation set
metrics = model.val(data='path/to/dataset.yaml')
```
This will return metrics like mAP, precision, and recall, helping you gauge the model's performance.
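When comparing runs, it is often handy to collapse precision and recall into a single F1 score, their harmonic mean. A small sketch with illustrative values (not real benchmark results):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values only:
print(f"F1: {f1_score(0.80, 0.60):.3f}")
# → F1: 0.686
```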
YOLOv8n’s lightweight architecture and high speed make it a strong fit for several real-world applications:

- **Edge devices**: running detection directly on resource-constrained hardware without a server round-trip.
- **Real-time surveillance**: processing live camera feeds where low latency matters more than peak accuracy.
- **Mobile applications**: on-device detection where memory and battery budgets rule out larger models.
YOLOv8n brings a balanced mix of performance and speed to the table, making it an excellent choice for low-latency object detection tasks on devices with limited resources. Whether you're working on edge devices, real-time surveillance, or mobile applications, YOLOv8n offers a lightweight, fast, and flexible solution. By leveraging the pre-trained models and fine-tuning them on custom datasets, you can implement cutting-edge object detection in a wide range of practical applications.