
Center Intersection of Union loss with You Only Look Once for Object Detection and Recognition

88 Citations · 2024
Surendar Rama Sitaraman, M. V. S. Narayana, Jayapal Lande
2024 International Conference on Intelligent Algorithms for Computational Intelligence Systems (IACIS)

A new loss function is introduced into YOLO v5 that improves object detection performance; the CIoU-YOLO v5 method attained an Average Precision (AP) of 44.7% on the MS-COCO 2017 dataset and 53.7% AP on the KITTI dataset, which is superior to the Dual-path Lightweight Module (DLM).

Abstract

The main aim of object detection is to extract features at different sizes by hierarchically stacking multi-scale feature maps. Two-Dimensional (2D) object detection is difficult because it must enable the perception system to understand its environment. Moreover, it is not easy to decide how to pass semantic information to low-level layers while minimizing the loss of semantic information in high-level features. In this research, the Center Intersection of Union loss with You Only Look Once (CIoU-YOLO) method is proposed for detecting objects. The datasets used for object detection are MS-COCO 2017 and KITTI, and the images in these datasets are pre-processed using point clouds. Detection is then performed with the CIoU-YOLO v5 method. A new loss function is introduced into YOLO v5 that improves object detection performance. The CIoU-YOLO v5 method attained an Average Precision (AP) of 44.7% on the MS-COCO 2017 dataset and 53.7% AP on the KITTI dataset, which is superior to the Dual-path Lightweight Module (DLM).
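The abstract does not spell out the exact formulation of the proposed loss, so the following is only a minimal sketch assuming it follows the standard Complete/Center-IoU (CIoU) definition (Zheng et al., 2020), which augments IoU with a normalized center-distance penalty and an aspect-ratio consistency term. The function name ciou_loss and the (x1, y1, x2, y2) box convention are illustrative choices, not taken from the paper.

import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    # pred, target: tensors of shape (N, 4) holding boxes as (x1, y1, x2, y2)
    # Intersection area
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    # Union area and plain IoU
    w_p, h_p = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w_t, h_t = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    union = w_p * h_p + w_t * h_t - inter + eps
    iou = inter / union

    # Squared center distance, normalized by the diagonal of the smallest enclosing box
    cx_p, cy_p = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx_t, cy_t = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    rho2 = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term and its trade-off weight
    v = (4 / math.pi ** 2) * (torch.atan(w_t / (h_t + eps)) - torch.atan(w_p / (h_p + eps))) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)

    ciou = iou - rho2 / c2 - alpha * v
    return (1 - ciou).mean()

In a YOLO v5-style trainer, a loss of this form would replace the box-regression term while the objectness and classification losses stay unchanged; that division of the total loss is the usual YOLO v5 setup, not a detail confirmed by the abstract.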
