FFCA-YOLO for Small Object Detection in Remote Sensing Images
Abstract
Issues such as insufficient feature representation and background confusion make small object detection in remote sensing images arduous, particularly when the algorithm is to be deployed on board for real-time processing, which requires extensive optimization of accuracy and speed under limited computing resources. To tackle these problems, an efficient detector called FFCA-YOLO (feature enhancement, fusion, and context aware YOLO) is proposed in this paper. FFCA-YOLO includes three innovative lightweight and plug-and-play modules: the feature enhancement module (FEM), the feature fusion module (FFM), and the spatial context aware module (SCAM). These three modules improve the network's capabilities of local region awareness, multiscale feature fusion, and global association across channels and space, respectively, while avoiding increases in complexity as far as possible. Thus, the weak feature representations of small objects are enhanced and confusable backgrounds are suppressed. Two public remote sensing datasets for small object detection (VEDAI and AI-TOD) and one self-built dataset (USOD) are used to validate the effectiveness of FFCA-YOLO. The accuracy of FFCA-YOLO reaches 0.748, 0.617, and 0.909 (in terms of mAP50), exceeding several benchmark models and state-of-the-art methods. Meanwhile, the robustness of FFCA-YOLO is also validated under different simulated degradation conditions. Moreover, to further reduce computational resource consumption while ensuring efficiency, a lite version of FFCA-YOLO (L-FFCA-YOLO) is constructed by reconstructing the backbone and neck of FFCA-YOLO based on partial convolution (PConv). L-FFCA-YOLO has faster speed, a smaller parameter scale, and lower computing power requirements, with little accuracy loss compared with FFCA-YOLO. The source code will be available at https://github.com/yemu1138178251/FFCA-YOLO.
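The key idea behind the partial convolution (PConv) used to build the lite variant is that a regular convolution is applied to only a fraction of the input channels, while the remaining channels pass through untouched, cutting FLOPs and memory traffic roughly in proportion. The following is a minimal NumPy sketch of that idea under stated assumptions (stride 1, same padding, no bias); the function name and the choice of convolving the first `n_conv` channels are illustrative, not the paper's implementation:

```python
import numpy as np

def partial_conv(x, weight, n_conv):
    """Sketch of partial convolution (PConv).

    x      : input feature map, shape (C, H, W)
    weight : kernels for the convolved slice, shape (n_conv, n_conv, k, k)
    n_conv : number of leading channels to convolve; the other C - n_conv
             channels are copied through unchanged (assumption: leading slice).
    Stride 1, zero 'same' padding, no bias.
    """
    c, h, w = x.shape
    k = weight.shape[-1]
    pad = k // 2
    # Zero-pad only the slice that will be convolved.
    xp = np.pad(x[:n_conv], ((0, 0), (pad, pad), (pad, pad)))
    out = np.empty_like(x)
    out[n_conv:] = x[n_conv:]  # identity path: untouched channels pass through
    for o in range(n_conv):    # naive direct convolution over the slice
        acc = np.zeros((h, w))
        for i in range(n_conv):
            for di in range(k):
                for dj in range(k):
                    acc += weight[o, i, di, dj] * xp[i, di:di + h, dj:dj + w]
        out[o] = acc
    return out
```

Compared with a full convolution over all C channels, the multiply–accumulate cost scales with n_conv² instead of C², which is why replacing standard convolutions in the backbone and neck with PConv shrinks the parameter scale and compute requirement at a small accuracy cost.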