Similar Literature
1.
To address the limited small-object detection ability of the SSD (Single Shot MultiBox Detector) algorithm, an improved algorithm, VFF-SSD (Vision Feature Fusion SSD), is proposed that introduces a visual mechanism and multi-scale semantic information fusion. First, a visual mechanism is added to the shallow SSD feature layers to enlarge the receptive field of the shallow network and improve its feature-extraction ability. Next, an improved PANet (Path Aggregation Network) multi-scale feature fusion network and a deep-feature enhancement network produce new feature layers, enriching the semantic information of the shallow layers and strengthening the representational power of the deep features. Finally, an attention module is applied to improve the learning of important information. Experiments show an mAP (Mean Average Precision) of 81.1% on the PASCAL VOC2007 test set, with the mAP on small objects in the dataset improved by 6.6% over the original SSD.
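None of these abstracts ship code, so as a rough PyTorch sketch of the PANet-style fusion this entry builds on — a top-down pass injecting deep semantics into shallow layers followed by a bottom-up pass restoring localization detail — the fragment below may help; the channel widths, map sizes, and the class name MiniPANet are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniPANet(nn.Module):
    """Illustrative PANet-style fusion over three SSD feature maps."""
    def __init__(self, channels=(512, 1024, 512), out_ch=256):
        super().__init__()
        # 1x1 convs project every level to a common channel width
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in channels)
        # 3x3 stride-2 convs for the bottom-up augmentation path
        self.down = nn.ModuleList(nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1)
                                  for _ in channels[:-1])

    def forward(self, feats):  # feats ordered shallow -> deep
        lat = [l(f) for l, f in zip(self.lateral, feats)]
        # top-down: upsample deep semantics into shallower maps
        for i in range(len(lat) - 2, -1, -1):
            lat[i] = lat[i] + F.interpolate(lat[i + 1], size=lat[i].shape[-2:],
                                            mode="nearest")
        # bottom-up: push shallow spatial detail back toward deep maps
        out = [lat[0]]
        for i, d in enumerate(self.down):
            out.append(lat[i + 1] + d(out[-1]))
        return out

feats = [torch.randn(1, 512, 38, 38), torch.randn(1, 1024, 19, 19),
         torch.randn(1, 512, 10, 10)]
print([o.shape for o in MiniPANet()(feats)])
```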

2.
To address the poor performance of current object detection algorithms on small and densely packed objects, this paper proposes a shallow feature enhancement network (SEFN) built on fusing multiple features and strengthening the representational ability of shallow features. First, the features extracted by the Conv4_3 and Conv5_3 layers of the VGG16 feature extraction network are fused to form base fused features. The base fused features are then fed into a small multi-scale semantic information fusion module to obtain semantic features rich in contextual information and spatial detail; at the same time, the semantic features and base fused features pass through a feature reuse module to yield enhanced shallow features. Finally, a series of convolutions over the enhanced shallow features produces feature maps at several scales, which are fed into the detection branches, with non-maximum suppression producing the final detections. Tested on PASCAL VOC2007 and MS COCO2014, the model achieves mean average precision of 81.2% and 33.7%, respectively, 2.7% and 4.9% higher than the classic Single Shot multibox Detector (SSD); moreover, detection precision and recall improve markedly in small-object and dense-object scenes. The results show that the feature-pyramid structure enriches the semantic information of shallow features, while the feature reuse module effectively preserves shallow detail for detection, strengthening the model on small and dense objects.
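As a hedged sketch of the base-fusion step described above — upsampling Conv5_3 to Conv4_3's resolution and fusing at the channel level — assuming standard VGG16 shapes; the 1x1 projection and smoothing convolution are illustrative choices, not the authors' exact module:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaseFusion(nn.Module):
    """Fuse VGG16 Conv4_3 (512ch, 38x38) with Conv5_3 (512ch, 19x19)."""
    def __init__(self, out_ch=512):
        super().__init__()
        self.reduce = nn.Conv2d(512 + 512, out_ch, 1)   # concat -> project
        self.smooth = nn.Sequential(nn.Conv2d(out_ch, out_ch, 3, padding=1),
                                    nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, conv4_3, conv5_3):
        # bring the deeper map up to the shallow map's spatial size
        up = F.interpolate(conv5_3, size=conv4_3.shape[-2:], mode="bilinear",
                           align_corners=False)
        return self.smooth(self.reduce(torch.cat([conv4_3, up], dim=1)))

fused = BaseFusion()(torch.randn(1, 512, 38, 38), torch.randn(1, 512, 19, 19))
print(fused.shape)   # torch.Size([1, 512, 38, 38])
```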

3.
Objects that occupy a small portion of an image or a frame contain fewer pixels and therefore less information, which makes small object detection a challenging task in computer vision. In this paper, an improved Single Shot multi-box Detector based on feature fusion and dilated convolution (FD-SSD) is proposed to address the difficulty of detecting small objects. The proposed network uses VGG-16 as the backbone and mainly comprises a multi-layer feature fusion module and a multi-branch residual dilated convolution module. In the multi-layer feature fusion module, the last two layers of the feature map are up-sampled and then concatenated at the channel level with the shallow feature map to enhance its semantic information. In the multi-branch residual dilated convolution module, three dilated convolutions with different dilation ratios, built on the residual network, are combined to obtain multi-scale context information without losing the original resolution of the feature map. In addition, deformable convolution is added to each detection layer to better adapt to the shapes of small objects. FD-SSD achieves 79.1% mAP on the PASCAL VOC2007 dataset and 29.7% mAP on the MS COCO dataset. Experimental results show that FD-SSD effectively improves the utilization of the multi-scale information of small objects, significantly improving small object detection.
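The multi-branch residual dilated convolution module lends itself to a compact sketch; the dilation rates (1, 2, 4) and the 1x1 projection below are assumptions, since the paper's exact configuration is not given here:

```python
import torch
import torch.nn as nn

class ResidualDilatedBlock(nn.Module):
    """Three parallel dilated 3x3 branches plus a residual shortcut,
    gathering multi-scale context without changing resolution."""
    def __init__(self, ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ch, ch, 3, padding=r, dilation=r),
                          nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
            for r in rates)
        self.project = nn.Conv2d(ch * len(rates), ch, 1)

    def forward(self, x):
        ctx = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.project(ctx)    # residual keeps the original signal

x = torch.randn(1, 256, 38, 38)
print(ResidualDilatedBlock(256)(x).shape)   # same spatial size, richer context
```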

4.
In the field of security, faces captured by outdoor surveillance cameras are usually blurry, occluded, small, and in diverse poses, affected by external factors such as camera pose and range, weather conditions, etc. This can be described as a problem of hard face detection in natural images. To solve this problem, we propose a deep convolutional neural network named feature hierarchy encoder–decoder network (FHEDN), motivated by two observations: contextual semantic information and the mechanism of multi-scale face detection. The proposed network is a single-stage, scale-variant architecture composed of encoder and decoder subnetworks. Based on the assumption that contextual semantic information around a face aids detection, we introduce a residual mechanism to fuse context prior-based information into face features and formulate a learning chain to train each encoder–decoder pair. We also discuss some important implementation details, such as the distribution of the training dataset, the scale of the feature hierarchy, and anchor box size, which affect the detection performance of the final network. Compared with some state-of-the-art algorithms, our method achieves promising performance on the popular benchmarks AFW, PASCAL FACE, FDDB, and WIDER FACE. Consequently, the proposed approach can be efficiently implemented and routinely applied to detect faces with severe occlusion and arbitrary pose variations in unconstrained scenes. Our code and results are available at https://github.com/zzxcoder/EvaluationFHEDN.

5.
刘笑楠, 武德彬, 刘振宇, 戚雪. 《电讯技术》(Telecommunication Engineering), 2023, 63(11):1797-1802
To address the original SSD (Single Shot Multibox Detector) algorithm's failure to fully exploit the relationships among feature layers, which leaves the shallow feature layers short of small-object semantic information, an improved TTB-SSD (Top to Bottom SSD) algorithm is proposed that combines a PANet multi-scale feature fusion network with a top-to-bottom feature fusion path to strengthen small-object detection. First, the PANet multi-scale fusion network repeatedly extracts features to obtain rich multi-scale semantic information. A deep feature fusion module then passes the spatial information of the shallow feature layers to the deep feature layers, enabling more accurate localization of small objects. Finally, a top-to-bottom feature fusion path is constructed to enrich the semantic information of the shallow layers, improving their accuracy on small objects. Experiments show an mAP (Mean Average Precision) of 80.5% on the PASCAL VOC2007 test set, 5.7% higher than the original SSD, demonstrating the algorithm's effectiveness for small-object detection.

6.
In the field of weakly supervised semantic segmentation (WSSS), Class Activation Maps (CAM) are typically adopted to generate pseudo masks. Yet we find that the crux of the unsatisfactory pseudo masks is the incomplete CAM. Specifically, because convolutional neural networks tend to be dominated by specific regions in the high-confidence channels of feature maps during prediction, the extracted CAM covers only parts of the object. To address this issue, we propose Disturbed CAM (DCAM), a simple yet effective method for WSSS. Following CAM, we adopt a binary cross-entropy (BCE) loss to train a multi-label classification model. Then we disturb the feature map with retraining to enhance the high-confidence channels. In addition, a softmax cross-entropy (SCE) loss branch is employed to increase the model's attention to the target classes. Once converged, we extract DCAM in the same way as CAM. Evaluation on both PASCAL VOC and MS COCO shows that DCAM not only generates high-quality masks (6.2% and 1.4% higher than the benchmark models) but also enables more accurate activation in object regions. The code is available at https://github.com/gyyang23/DCAM.
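DCAM extracts its map "in the same way as CAM"; a minimal sketch of that standard CAM extraction (Zhou et al., 2016) on a stock ResNet-18 follows — the feature-map disturbance and retraining, which are DCAM's actual contribution, are omitted, and the 20-class setup is an assumption for PASCAL VOC:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Standard CAM extraction, which DCAM reuses after its disturbance step.
model = resnet18(num_classes=20).eval()
feat = {}
model.layer4.register_forward_hook(lambda m, i, o: feat.update(x=o))

img = torch.randn(1, 3, 224, 224)          # stand-in input image
logits = model(img)
cls = logits.argmax(1).item()
w = model.fc.weight[cls]                    # (C,) classifier weights for the class
cam = torch.einsum("c,bchw->bhw", w, feat["x"])   # weighted sum over channels
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0,1]
print(cam.shape)                            # (1, 7, 7) coarse activation map
```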

7.
Multi-label classification with region-free labels is attracting increasing attention compared to region-based labels, because of the time-consuming manual region-labeling process. Existing methods usually employ attention-based techniques to discover conspicuous label-related regions in a weakly-supervised manner with only image-level region-free labels, but the region covering is imprecise without exploring global clues across multi-level features. To address this issue, a novel Global-guided Weakly-Supervised Learning (GWSL) method for multi-label classification is proposed. GWSL first extracts multi-level features to estimate their global correlation map, which then guides feature disentanglement in the proposed Feature Disentanglement and Localization (FDL) networks. Specifically, the FDL networks adaptively combine the differently correlated features and localize fine-grained features for identifying multiple labels. The method is optimized end-to-end under weak supervision with only image-level labels. Experimental results demonstrate that the proposed method outperforms the state of the art for multi-label learning on several publicly available image datasets. To facilitate similar research in the future, the code is available at https://github.com/Yong-DAI/GWSL.

8.
Keypoint-based object detection achieves good performance without positioning calculations and extensive prediction, but existing keypoint detectors carry heavy backbones and restore high resolution by upsampling, which yields unreliable features. We propose a self-constrained parallelism keypoint-based lightweight object detection network (SCPNet), which speeds up inference, reduces parameters, widens receptive fields, and improves prediction accuracy. Specifically, the parallel multi-scale fusion module (PMFM) with parallel shuffle blocks (PSB) adopts a parallel structure to obtain reliable features and reduce depth, and uses repeated multi-scale fusion to avoid an excessive number of parallel branches. The self-constrained detection module (SCDM) has a two-branch structure: one branch predicts corners and employs entad offsets to match high-quality corner pairs, while the other predicts center keypoints. The distances between the geometric centers of paired corners and the predicted center keypoints are used for self-constrained detection. On MS-COCO 2017 and PASCAL VOC, SCPNet's results are competitive with state-of-the-art lightweight object detection. https://github.com/mengdie-wang/SCPNet.git.
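SCDM's self-constraint — comparing the geometric center of each matched corner pair against the predicted center keypoints — reduces to a distance test; the sketch below assumes normalized coordinates and a hypothetical tolerance tol, and is a simplified stand-in rather than the paper's scoring rule:

```python
import torch

def self_constrained_filter(tl, br, centers, tol=0.05):
    """Keep corner pairs whose geometric center lies near some predicted
    center keypoint (simplified stand-in for SCPNet's SCDM check).

    tl, br:   (N, 2) top-left / bottom-right corner coords (normalized)
    centers:  (M, 2) predicted center keypoints (normalized)
    tol:      distance tolerance, assumed hyper-parameter
    """
    geo = (tl + br) / 2                     # (N, 2) geometric centers
    d = torch.cdist(geo, centers)           # (N, M) pairwise distances
    return d.min(dim=1).values < tol        # pass if any center is close

tl = torch.tensor([[0.10, 0.10], [0.50, 0.40]])
br = torch.tensor([[0.30, 0.30], [0.90, 0.80]])
centers = torch.tensor([[0.20, 0.20]])
print(self_constrained_filter(tl, br, centers))  # tensor([ True, False])
```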

9.
The single-stage multi-box object detection algorithm (SSD) has been applied successfully in object detection, but its average precision for ship detection on public datasets is markedly lower than for other rigid object categories, and existing public datasets contain few ships of limited variety. To improve detection precision, a single-stage ship detection algorithm based on an improved VGG network is proposed: an alternating structure of asymmetric convolution and max pooling is added to the original VGG base network, improving the average precision of ship detection while retaining real-time processing. To enlarge the number and variety of ships available for training, images containing ships were collected extensively from the Internet, building a dataset of 22,507 ship targets, 6,902 of which are labeled into seven ship classes. After training with images from the public VOC2007 and VOC2012 datasets resized to 300×300, SSSD reaches a mean average precision of 79.3% on the VOC2007 test set at over 40 frames per second. After transferring the trained parameters and retraining on the self-built dataset, the average precision for ships as a single broad class exceeds 84%, and the mean average precision over the seven ship classes exceeds 89%, ahead of existing ship detection algorithms of the same type.

10.
To support traffic-vehicle monitoring, traffic-flow statistics, and tracking of offending vehicles in intelligent traffic management systems, and to raise the accuracy and efficiency of target-vehicle detection, an improved SSD (Single Shot MultiBox Detector) object detection algorithm is proposed. The algorithm fuses feature information across adjacent convolutional layers to improve accuracy and reduces the depth of some convolutional layers to improve computational efficiency; to improve generalization ability, while reducing 1...

11.
Generative Adversarial Networks (GANs) have opened a new direction for tackling the image-to-image transformation problem. Different GANs use generator and discriminator networks with different losses in the objective function, yet a gap remains in both the quality of the generated images and their closeness to the ground-truth images. In this work, we introduce a new image-to-image transformation network named Cyclic Discriminative Generative Adversarial Networks (CDGAN) that fills these gaps. CDGAN generates higher-quality, more realistic images by adding discriminator networks for the cycled images to the original CycleGAN architecture. CDGAN is tested on three image-to-image transformation datasets, with quantitative and qualitative results analyzed and compared against the state of the art. The proposed CDGAN method outperforms state-of-the-art methods on all three baseline image-to-image transformation datasets. The code is available at https://github.com/KishanKancharagunta/CDGAN.
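A hedged sketch of the loss structure the abstract implies — CycleGAN's adversarial and cycle-consistency terms plus CDGAN's extra discriminator on the cycled image; the least-squares GAN loss, the weight lam, and the toy networks are all assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

adv = nn.MSELoss()   # least-squares GAN loss, a common (assumed) choice
l1 = nn.L1Loss()

def generator_loss(G_AB, G_BA, D_B, D_cyc_A, real_A, lam=10.0):
    """One direction of a CDGAN-style generator objective."""
    fake_B = G_AB(real_A)
    cyc_A = G_BA(fake_B)
    pred_fake = D_B(fake_B)
    loss_adv = adv(pred_fake, torch.ones_like(pred_fake))   # CycleGAN term
    pred_cyc = D_cyc_A(cyc_A)                                # CDGAN's addition:
    loss_cyc_adv = adv(pred_cyc, torch.ones_like(pred_cyc))  # judge cycled image too
    loss_cyc = l1(cyc_A, real_A)                             # cycle consistency
    return loss_adv + loss_cyc_adv + lam * loss_cyc

# toy 1x1-conv "generators" and "discriminators" just to check shapes
G, D = nn.Conv2d(3, 3, 1), nn.Conv2d(3, 1, 1)
print(generator_loss(G, G, D, D, torch.randn(2, 3, 64, 64)).item())
```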

12.
The single shot multibox detector (SSD) strikes a good balance among simplicity, speed, and accuracy. However, the single-use pattern of detection layers in the SSD architecture under-exploits feature information, making small-object detection insufficiently robust. This paper proposes ASSD, an attention-based single shot multibox detector. ASSD first fuses feature information with a proposed bidirectional feature fusion module to obtain feature layers rich in detail and semantics, then applies a proposed joint attention unit to further mine key feature information and guide model optimization. Finally, a series of experiments on public datasets shows that ASSD effectively improves the detection accuracy of the traditional SSD algorithm and is especially suitable for small-object detection.

13.
Infrared dim and small target detection is a key technology for space-based infrared search and tracking systems. Traditional detection methods have high false alarm rates and fail in complex-background, high-noise scenarios; they also cannot effectively detect targets at small scales. In this paper, a U-Transformer method is proposed, introducing a transformer into infrared dim and small target detection. First, a U-shaped network is constructed. In the encoder, the self-attention mechanism is used for dim and small target feature extraction, which helps counter the loss of dim and small target features in deep networks. Meanwhile, through the encoding and decoding structure, dim and small target features are filtered from the complex background while the shallow features and semantic information of the target are retained. Experiments show that anchor-free designs and transformers have great potential for infrared dim and small target detection. On datasets with complex backgrounds, our method outperforms state-of-the-art detectors while meeting real-time requirements. The code is publicly available at https://github.com/Linaom1214/U-Transformer.

14.
Existing deraining methods based on convolutional neural networks (CNNs) have achieved great success, but remaining rain streaks can still degrade images drastically. In this work, we propose an end-to-end multi-scale context information and attention network, called MSCIANet. The proposed network consists of multi-scale feature extraction (MSFE) and multi-receptive-field feature extraction (MRFFE). First, the MSFE picks up features of rain streaks at different scales and propagates the deep features of the two layers across stages via skip connections. Second, the MRFFE refines background details through an attention mechanism and depthwise separable convolutions with different receptive fields at different scales. Finally, fusing the outputs of the two subnetworks reconstructs the clean background image. Extensive experimental results show that the proposed network performs well on the deraining task on both synthetic and real-world datasets. A demo is available at https://github.com/CoderLi365/MSCIANet.
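The MRFFE combines depthwise separable convolutions of different receptive fields with attention; one plausible reading is sketched below, with assumed dilation rates and a softmax channel gate standing in for the authors' exact attention design:

```python
import torch
import torch.nn as nn

class DWBranch(nn.Module):
    """Depthwise separable conv; dilation widens its receptive field."""
    def __init__(self, ch, dilation):
        super().__init__()
        self.dw = nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation,
                            groups=ch)          # depthwise
        self.pw = nn.Conv2d(ch, ch, 1)          # pointwise

    def forward(self, x):
        return torch.relu(self.pw(self.dw(x)))

class MultiRFBlock(nn.Module):
    """Parallel depthwise-separable branches with different receptive
    fields, blended by a per-branch attention weight (assumed design)."""
    def __init__(self, ch, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList(DWBranch(ch, d) for d in dilations)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(ch, len(dilations), 1),
                                  nn.Softmax(dim=1))

    def forward(self, x):
        w = self.gate(x)                                          # (B, nb, 1, 1)
        outs = torch.stack([b(x) for b in self.branches], dim=1)  # (B, nb, C, H, W)
        return (w.unsqueeze(2) * outs).sum(dim=1)

print(MultiRFBlock(32)(torch.randn(1, 32, 64, 64)).shape)
```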

15.
Saliency prediction precision has improved rapidly with the development of deep learning, but inference remains slow as networks grow ever deeper. Hence, this paper proposes a fast saliency prediction model. Concretely, a siamese network backbone based on a tailored EfficientNetV2 accelerates inference while maintaining high performance, and a shared-parameters strategy further curbs parameter growth. Furthermore, we add multi-channel activation maps to optimize fine features across different channels and low-level visual features, improving the interpretability of the model. Extensive experiments show that the proposed model achieves competitive performance on standard benchmark datasets, proving the method's effectiveness in balancing prediction accuracy against inference speed. Moreover, the small model size allows the method to run on edge devices. The code is available at https://github.com/lscumt/fast-fixation-prediction.

16.
In high-resolution remote sensing imagery, ground objects are closely tied to the scene category in which they appear; fully exploiting the constraints a scene imposes on its objects promises further gains in detection performance. Considering this association between scene information and ground objects, an object detection method for high-resolution remote sensing images is proposed in which global relation attention (RGA) guides scene constraints. First, a global relation attention module is appended to the base network of a multi-scale feature-fusion detector to learn global scene features. Object prediction is then performed with the learned global scene features as constraints, combined with an oriented response convolution module and a multi-scale feature module. Finally, two loss functions jointly optimize the network for detection. Four groups of experiments on the NWPU VHR-10 dataset show that better detection performance is obtained under scene-information constraints.

17.
Semantic segmentation aims to map each pixel of an image to its corresponding semantic label. Most existing methods either concentrate mainly on high-level features or simply combine low-level and high-level features from backbone convolutional networks, which may weaken or even ignore the compensation between different levels. To take advantage of both shallow (textural) and deep (semantic) features, this paper proposes a novel plug-and-play module, the feature enhancement module (FEM). The FEM first uses an information extractor to extract the desired details or semantics from different stages, and then enhances target features by taking in the extracted message. Two types of FEM can be customized: the detail FEM and the semantic FEM. The former strengthens textural information to protect key but tiny or low-contrast details from suppression or removal, while the latter highlights structural information to boost segmentation performance. Equipping a backbone network with FEMs yields two information flows, a detail flow and a semantic flow. Extensive experiments on the Cityscapes, ADE20K, and PASCAL Context datasets validate the effectiveness of the design. The code has been released at https://github.com/SuperZ-Liu/FENet.
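A toy rendering of the plug-and-play FEM idea — an extractor pulls a message from a source stage and injects it residually into a target stage; the channel sizes and fusion convolution are assumptions, and which flow it realizes depends on whether the source is shallower or deeper than the target:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FEM(nn.Module):
    """Toy feature enhancement module (assumed structure, not the paper's)."""
    def __init__(self, src_ch, tgt_ch):
        super().__init__()
        self.extract = nn.Sequential(nn.Conv2d(src_ch, tgt_ch, 1),
                                     nn.BatchNorm2d(tgt_ch), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(tgt_ch, tgt_ch, 3, padding=1)

    def forward(self, src, tgt):
        # resample the extracted message to the target stage's resolution
        msg = F.interpolate(self.extract(src), size=tgt.shape[-2:],
                            mode="bilinear", align_corners=False)
        return tgt + self.fuse(msg)      # residual enhancement

semantic = torch.randn(1, 512, 16, 16)        # deep stage
detail = torch.randn(1, 128, 64, 64)          # shallow stage
print(FEM(512, 128)(semantic, detail).shape)  # semantic FEM: deep -> shallow
```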

18.
With the large increase in metro passengers, accurate real-time monitoring of passenger flow in metro stations matters greatly for passenger safety. Targeting the complex scenes and small pedestrian targets typical of metro stations, this paper proposes a multi-scale weighted feature fusion (MWF) network for accurate real-time metro passenger-flow monitoring. In data preprocessing, an oversampling target enhancement algorithm is proposed: images with an insufficient proportion of small targets are stitched together, raising the frequency with which small targets are seen during training. Then, feature extraction layers based on the VGG16 network are added to the Single Shot MultiBox Detector (SSD) network, feature layers at different scales are weight-fused in several ways, and the best fusion scheme is selected. Combining this with the small-target oversampling algorithm yields the multi-scale weighted feature fusion model. Experiments show that, compared with the SSD network, the method improves detection accuracy by 5.82% while remaining real-time.
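The oversampling stitch can be pictured as tiling down-scaled copies so that objects shrink and recur more often per epoch; the sketch below is a simplified stand-in under that assumption (a real pipeline would stitch different images and handle labels and interpolation more carefully):

```python
import numpy as np

def mosaic_oversample(img, boxes):
    """Tile a 2x2 grid of the half-scaled image so its objects become
    'small' and appear four times per iteration (simplified stand-in).

    img:   (H, W, 3) uint8 array
    boxes: (N, 4) array of [x1, y1, x2, y2] pixel coordinates
    """
    h, w = img.shape[:2]
    small = img[::2, ::2]                    # cheap 0.5x downscale
    canvas = np.zeros_like(img)
    out_boxes = []
    for gy in range(2):
        for gx in range(2):
            canvas[gy*(h//2):(gy+1)*(h//2),
                   gx*(w//2):(gx+1)*(w//2)] = small[:h//2, :w//2]
            for x1, y1, x2, y2 in boxes / 2:   # rescale and shift labels
                out_boxes.append([x1 + gx*(w//2), y1 + gy*(h//2),
                                  x2 + gx*(w//2), y2 + gy*(h//2)])
    return canvas, np.array(out_boxes)

img = np.random.randint(0, 255, (608, 608, 3), dtype=np.uint8)
tiled, b = mosaic_oversample(img, np.array([[100., 100., 200., 200.]]))
print(tiled.shape, b.shape)                  # (608, 608, 3) (4, 4)
```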

19.
Current state-of-the-art two-stage models for instance segmentation suffer from several types of imbalance. In this paper, we address the Intersection over Union (IoU) distribution imbalance of positive input Regions of Interest (RoIs) during training of the second stage. Our Self-Balanced R-CNN (SBR-CNN), an evolved version of the Hybrid Task Cascade (HTC) model, brings brand-new loop mechanisms of bounding box and mask refinement. With an improved Generic RoI Extraction (GRoIE), we also address the feature-level imbalance at the Feature Pyramid Network (FPN) level, which originates from non-uniform integration of low- and high-level features from the backbone layers. In addition, redesigning the architecture heads toward a fully convolutional approach with FCC further reduces the number of parameters and sheds light on the connection between the task to solve and the layers used. Moreover, our SBR-CNN shows the same or even better improvements when adopted in conjunction with other state-of-the-art models. In fact, with a lightweight ResNet-50 backbone, evaluated on the COCO minival 2017 dataset, our model reaches 45.3% and 41.5% AP for object detection and instance segmentation, with 12 epochs and without extra tricks. The code is available at https://github.com/IMPLabUniPr/mmdetection/tree/sbr_cnn.

20.
Zhou Quan, Wang Jie, Liu Jia, Li Shenghua, Ou Weihua, Jin Xin. Mobile Networks and Applications, 2021, 26(1):77-87

The huge computational overhead limits the inference of convolutional neural networks on mobile devices for object detection, which plays a critical role in many real-world scenes such as face identification, autonomous driving, and video surveillance. To solve this problem, this paper introduces a lightweight convolutional neural network called RSANet, toward real-time object detection with a Residual Semantic-guided Attention Feature Pyramid Network. RSANet consists of two parts: (a) a Lightweight Convolutional Network (LCNet) as backbone, and (b) a Residual Semantic-guided Attention Feature Pyramid Network (RSAFPN) as detection head. In the LCNet, in contrast to recent lightweight networks that prefer pointwise convolution for changing the number of feature maps, we design a Constant Channel Module (CCM) to save Memory Access Cost (MAC) and a Down Sampling Module (DSM) to save computational cost. In the RSAFPN, meanwhile, we employ a Residual Semantic-guided Attention Mechanism (RSAM) to fuse the multi-scale features from LCNet, improving detection performance efficiently. Experiments show that, on the PASCAL VOC 2007 dataset, RSANet requires only a 3.24M model size and 3.54B FLOPs with a 416×416 input image; compared to YOLO Nano, it obtains a 6.7% improvement in accuracy with less computation. On the MS COCO dataset, RSANet requires only a 4.35M model size and 2.34B FLOPs with a 320×320 input image, a 1.3% accuracy improvement over Pelee. These comprehensive results demonstrate that the model achieves a promising speed and accuracy trade-off.
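The Constant Channel Module is described here only by name; under the ShuffleNetV2 guideline that equal input/output channel counts minimize memory access cost for a given FLOP budget, one plausible sketch (structure and block depth are assumptions) is:

```python
import torch
import torch.nn as nn

class CCM(nn.Module):
    """Illustrative constant-channel block: every conv keeps in/out channel
    counts equal, following the equal-channel MAC guideline (ShuffleNetV2)."""
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, groups=ch, bias=False),  # depthwise
            nn.BatchNorm2d(ch),
            nn.Conv2d(ch, ch, 1, bias=False),                        # pointwise, ch->ch
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return x + self.block(x)     # residual; channels constant throughout

x = torch.randn(1, 96, 52, 52)
print(CCM(96)(x).shape)              # same channel width end to end
```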


