Citation
Xie, Yumeng; Manshor, Noridayu; Husin, Nor Azura; Liu, Chengzhi (2025).
Improving YOLOPX using YOLOP and YOLOv8 for panoptic driving perception.
International Journal on Informatics Visualization, 9 (1), pp. 248-257.
ISSN 2549-9904; eISSN 2549-9610.
Abstract
Autonomous driving systems (ADS) have seen significant advancements over the past decade, with car manufacturers investing heavily in their development to meet the growing demand for safer, more efficient, and eco-friendly transportation. The panoptic driving perception system is central to ADS and essential for accurately interpreting the driving environment. Such a system requires high precision, a lightweight design, and real-time responsiveness to detect surrounding vehicles, lane lines, and drivable areas effectively. This study introduces an enhanced YOLOPX model that combines YOLOP and YOLOv8 to create an adaptive multi-task learning network capable of traffic object detection, drivable area segmentation, and lane detection. The model integrates YOLOP's detection head with YOLOPX's anchor-free detection head to improve generalization, incorporates YOLOv8's advanced backbone structure to enhance feature extraction accuracy, and retains YOLOP's three-neck architecture to optimize multi-task processing. The improved model employs a mode loss function for the segmentation tasks, enhancing generalization and improving lane detection accuracy. Experiments conducted on the BDD100K dataset demonstrated the model's effectiveness: 98.8% accuracy and 27.6% IoU for lane line detection, 90.4% mIoU for drivable area segmentation, and 85.9% recall and 76.9% mAP50 for traffic object detection. This model represents a significant advancement in ADS, enhancing both the safety and reliability of autonomous vehicles.
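The architecture the abstract describes follows a shared-backbone, multi-head pattern: one feature extractor feeds three task-specific branches (detection, drivable area segmentation, lane detection). The dependency-free sketch below illustrates only that structural idea; all function names, the toy "feature" computation, and the output format are illustrative assumptions, not the authors' code or the actual YOLOPX implementation.

```python
# Structural sketch of a shared-backbone, three-head multi-task layout
# (illustrative only; real YOLOP/YOLOPX heads operate on CNN feature maps).

def backbone(image):
    """Stand-in for a YOLOv8-style feature extractor (hypothetical)."""
    # A real backbone would emit multi-scale feature maps; here we just
    # summarize the toy input so the data flow is visible.
    return {"features": sum(image) / len(image)}

def detection_head(feats):
    # Stand-in for the anchor-free traffic-object detection head.
    return {"task": "detection", "score": feats["features"]}

def drivable_area_head(feats):
    # Stand-in for the drivable-area segmentation head.
    return {"task": "drivable_area", "score": feats["features"]}

def lane_head(feats):
    # Stand-in for the lane-line detection head.
    return {"task": "lane", "score": feats["features"]}

def panoptic_perception(image):
    feats = backbone(image)  # shared encoder, computed once per image
    return [head(feats) for head in
            (detection_head, drivable_area_head, lane_head)]

outputs = panoptic_perception([0.1, 0.5, 0.9])
```

The key efficiency property, which the sketch preserves, is that the (expensive) backbone runs once and its output is reused by all three heads, rather than running three separate networks.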