Timely and accurate identification of traffic signs plays a significant role in realizing autonomous driving. However, traffic signs occupy only a small proportion of the input image, which increases the difficulty of detection. This paper proposes an improved Faster R-CNN traffic sign detection method. A ResNet50-D feature extractor, an attention-guided context feature pyramid network (ACFPN), and AutoAugment are incorporated into the Faster R-CNN model. ResNet50-D is selected as the backbone network to extract richer feature information, ACFPN is employed to reduce the loss of contextual information, and data augmentation and transfer learning are adopted to make model training more convenient and less time-consuming. To verify the effectiveness of the proposed method, we compare it with mainstream approaches (SSD, YOLOv3, RetinaNet, Cascade R-CNN, FCOS, and CornerNet-Squeeze) and state-of-the-art methods. Experimental results on the CCTSDB dataset show that the improved Faster R-CNN achieves 29.8 frames per second and a mean average precision of 99.5%, outperforming the state-of-the-art methods and making it more suitable for traffic sign detection. Moreover, the proposed model is extended to the Tsinghua-Tencent 100K (TT100K) dataset, where it also achieves competitive detection results.