
Continual Detection Transformer for Incremental Object Detection

38 Citations • 2023
Yaoyao Liu, B. Schiele, A. Vedaldi
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

This paper proposes the ContinuaL DEtection TRansformer (CL-DETR), a new method for transformer-based IOD that enables effective use of KD and ER in this context, and introduces a Detector Knowledge Distillation (DKD) loss that focuses on the most informative and reliable predictions from old versions of the model.

Abstract

Incremental object detection (IOD) aims to train an object detector in phases, each with annotations for new object categories. As in other incremental settings, IOD is subject to catastrophic forgetting, which is often addressed by techniques such as knowledge distillation (KD) and exemplar replay (ER). However, KD and ER do not work well if applied directly to state-of-the-art transformer-based object detectors such as Deformable DETR [59] and UP-DETR [9]. In this paper, we solve these issues by proposing a ContinuaL DEtection TRansformer (CL-DETR), a new method for transformer-based IOD which enables effective usage of KD and ER in this context. First, we introduce a Detector Knowledge Distillation (DKD) loss, focusing on the most informative and reliable predictions from old versions of the model, ignoring redundant background predictions, and ensuring compatibility with the available ground-truth labels. We also improve ER by proposing a calibration strategy to preserve the label distribution of the training set, therefore better matching training and testing statistics. We conduct extensive experiments on COCO 2017 and demonstrate that CL-DETR achieves state-of-the-art results in the IOD setting.

Code: https://1yy.mpi-inf.mpg.de/CL-DETR/
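The DKD idea described above — keeping only the old model's reliable foreground predictions and making them compatible with the current phase's ground truth — can be sketched as a simple pseudo-label merging step. This is a hypothetical simplification for illustration, not the paper's actual implementation; the function name, thresholds, and data layout are all assumptions:

```python
def merge_pseudo_labels(old_preds, new_gt, conf_threshold=0.5, background=0):
    """Merge distillation targets from the old model with new ground truth.

    old_preds: list of (category_id, confidence) pairs from the old model.
    new_gt: list of category ids annotated in the current phase.
    Returns the category ids used as training targets for the new model.
    """
    new_categories = set(new_gt)
    kept = [
        cat for cat, conf in old_preds
        if cat != background            # ignore redundant background predictions
        and conf >= conf_threshold      # keep only reliable predictions
        and cat not in new_categories   # stay compatible with current labels
    ]
    # Ground-truth annotations for the new categories are always kept.
    return kept + list(new_gt)

targets = merge_pseudo_labels(
    old_preds=[(3, 0.9), (0, 0.8), (7, 0.3), (5, 0.6)],
    new_gt=[5, 12],
)
print(targets)  # [3, 5, 12]
```

Here the old model's confident prediction for category 3 survives as a distillation target, while its background prediction, its low-confidence prediction, and its prediction for a category now covered by ground truth are discarded.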
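The calibration strategy for ER can likewise be illustrated as a budget-allocation problem: choose how many exemplars to store per category so that the replay buffer mirrors the training set's label distribution. The sketch below is an assumed proportional-allocation scheme with largest-remainder rounding, not the paper's exact procedure:

```python
def calibrated_exemplar_counts(label_counts, budget):
    """Allocate an exemplar budget across categories in proportion to the
    training-set label distribution, using largest-remainder rounding so the
    stored exemplars match the statistics seen at test time."""
    total = sum(label_counts.values())
    shares = {c: budget * n / total for c, n in label_counts.items()}
    counts = {c: int(s) for c, s in shares.items()}  # floor of each share
    leftover = budget - sum(counts.values())
    # Hand out the remaining slots to categories with the largest fractional part.
    by_remainder = sorted(shares, key=lambda c: shares[c] - counts[c], reverse=True)
    for c in by_remainder[:leftover]:
        counts[c] += 1
    return counts

counts = calibrated_exemplar_counts({"cat": 600, "dog": 300, "bird": 100}, budget=10)
print(counts)  # {'cat': 6, 'dog': 3, 'bird': 1}
```

With a budget of 10 and a 6:3:1 label distribution, the buffer stores 6, 3, and 1 exemplars respectively, so replayed samples follow the same category frequencies as the full training set.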