This project connects YOLOv5 with ROS (Robot Operating System), enabling real-time object detection directly from robot sensor topics.
The system subscribes to image topics published by the robot, runs each incoming frame through YOLOv5, and outputs the resulting detections in real time.
It is designed to serve as a flexible, plug-and-play perception module for various robotic platforms requiring visual detection or perception capabilities.
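The subscribe-and-detect loop described above can be sketched as a minimal ROS node. This is an illustrative sketch, not the project's actual implementation: it assumes `rospy`, `cv_bridge`, and `torch` are installed, loads pretrained YOLOv5 weights via PyTorch Hub, and uses `/camera/image_raw` as a placeholder topic name.

```python
# Minimal sketch of a YOLOv5 detection node (NOT the project's code).
# Assumptions: rospy, cv_bridge, and torch are installed; the topic name
# and model weights below are placeholders to adapt to your robot.

def parse_detections(rows, class_names):
    """Convert YOLOv5 result rows (x1, y1, x2, y2, conf, cls) into dicts."""
    boxes = []
    for x1, y1, x2, y2, conf, cls in rows:
        boxes.append({
            "xmin": float(x1), "ymin": float(y1),
            "xmax": float(x2), "ymax": float(y2),
            "confidence": float(conf),
            "label": class_names[int(cls)],
        })
    return boxes


def main():
    # ROS-specific imports are kept inside main() so the pure helper
    # above can be reused without a ROS environment.
    import rospy
    import torch
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # pretrained weights
    bridge = CvBridge()

    def callback(msg):
        # Convert the ROS image message to an OpenCV BGR frame, then detect.
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        results = model(frame)
        boxes = parse_detections(results.xyxy[0].tolist(), results.names)
        rospy.loginfo("detected %d objects", len(boxes))

    rospy.init_node("yolov5_detector")
    rospy.Subscriber("/camera/image_raw", Image, callback, queue_size=1)
    rospy.spin()


if __name__ == "__main__":
    main()
```

A `queue_size` of 1 keeps the node processing the latest frame rather than building a backlog when inference is slower than the camera rate.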
- ROS Integration: Seamless connection between YOLOv5 and ROS.
- Topic Subscription: Automatically subscribes to ROS image topics for detection input.
- Visualization: Displays YOLOv5 detection results in a pop-up window with bounding boxes.
- Data Interface: Provides access to bounding box data for each detected object for downstream processing.
- Code Documentation: Includes clear modification notes and comments to help users adapt the code for their own robot platforms.
This makes the project suitable for a wide range of robotic applications, including autonomous navigation, warehouse robots, and mobile manipulation tasks.
- Input: Any ROS image topic published by a robot’s camera or sensor (e.g., /camera/image_raw).
- Output:
  - A visualization window showing YOLOv5 detection bounding boxes.
  - Access to bounding box data for further algorithmic processing.
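For the downstream processing mentioned above, a consumer might filter the bounding box data by confidence and class before acting on it. A small hypothetical sketch follows; the dict layout (`xmin`/`ymin`/`xmax`/`ymax`/`confidence`/`label`) is an assumption for illustration, not a format this project mandates.

```python
# Hypothetical downstream consumer of bounding box data (layout assumed,
# not defined by this project): threshold by confidence, restrict to
# labels of interest, and compute pixel-space centers for e.g. a
# navigation or manipulation pipeline.

def filter_boxes(boxes, min_confidence=0.5, labels=None):
    """Keep boxes above a confidence threshold, optionally of given labels."""
    return [
        b for b in boxes
        if b["confidence"] >= min_confidence
        and (labels is None or b["label"] in labels)
    ]


def box_center(box):
    """Pixel-space center point of one bounding box."""
    return ((box["xmin"] + box["xmax"]) / 2.0,
            (box["ymin"] + box["ymax"]) / 2.0)
```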
🚧 Currently being updated...
Detailed setup instructions, including installation, topic configuration, and example launch files, will be added soon.
This implementation is inspired by and extends the following open-source project:
yolov5_ros
This project follows the same open-source license as YOLOv5 and the referenced yolov5_ros repository.
Please review the respective repositories for specific license information.