Similar Articles
20 similar articles found (search time: 15 ms)
1.
Region encoding and region logical operations for object detection   (cited: 3; self-citations: 0; by others: 3)
聂守平, 刘峰, 王弘. 《中国激光》, 2004, 31(2): 85-189
This paper studies row-by-row scanning and row-by-row encoding of images and proposes a new data structure for storing region information. Building on conventional pixel-level image logical operations, region-level logical operations are investigated. Region encoding and region logical operations are then applied to object detection, and experimental results are given.
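The abstract does not describe the data structure itself; as a rough illustration of the general idea only (my own simplified assumption, not the authors' structure), the sketch below run-length encodes each row of a binary mask and intersects two encoded regions with a region-level AND.

```python
# Minimal sketch: row-by-row run-length encoding of a binary image and an
# AND between two encoded regions, operating on runs rather than raw pixels.
import numpy as np

def encode_rows(mask):
    """Encode each row of a binary mask as a list of (start, end) runs."""
    runs = []
    for row in mask.astype(bool):
        row_runs, start = [], None
        for x, v in enumerate(row):
            if v and start is None:
                start = x
            elif not v and start is not None:
                row_runs.append((start, x))
                start = None
        if start is not None:
            row_runs.append((start, len(row)))
        runs.append(row_runs)
    return runs

def region_and(runs_a, runs_b):
    """Intersect two run-length encoded regions row by row."""
    out = []
    for ra, rb in zip(runs_a, runs_b):
        row = []
        for a0, a1 in ra:
            for b0, b1 in rb:
                lo, hi = max(a0, b0), min(a1, b1)
                if lo < hi:
                    row.append((lo, hi))
        out.append(row)
    return out

a = np.zeros((4, 8), dtype=np.uint8); a[1:3, 1:6] = 1
b = np.zeros((4, 8), dtype=np.uint8); b[2:4, 4:8] = 1
print(region_and(encode_rows(a), encode_rows(b)))  # overlap only in row 2
```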

2.
Moving object detection and estimation of the object region   (cited: 1; self-citations: 0; by others: 1)
于雪莲, 宋洋, 刘晓红. 《通信技术》, 2011, 44(5): 119-121, 145
A three-frame difference method is used to detect moving objects in video sequences with a static background, producing a binary image of the moving object. The geometric features of the moving object are studied with image moments, and on this basis an algorithm for estimating the object's centroid and minimum bounding rectangle is proposed and then improved so that it remains usable under strong noise. The algorithm enables fast and accurate localization and region estimation of moving objects; experiments show that it has low time and space complexity, works well, and is robust.
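A minimal sketch of the two steps named above, assuming OpenCV is available: three-frame differencing to obtain a binary motion mask, then moment-based centroid and bounding-rectangle estimation. The threshold value and the axis-aligned rectangle are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def three_frame_diff(f1, f2, f3, thresh=25):
    """Binary motion mask from three consecutive grayscale frames."""
    d1 = cv2.absdiff(f2, f1)
    d2 = cv2.absdiff(f3, f2)
    _, b1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(b1, b2)

def centroid_and_bbox(mask):
    """Centroid from image moments and the bounding rectangle of the mask."""
    m = cv2.moments(mask, True)  # treat the mask as binary
    if m["m00"] == 0:
        return None, None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
    return (cx, cy), (x, y, w, h)
```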

3.
A method of 'network filtering' has been proposed recently to detect the effects of certain external perturbations on the interacting members in a network. However, with large networks, the goal of detection seems a priori difficult to achieve, especially since the number of observations available often is much smaller than the number of variables describing the effects of the underlying network. Under the assumption that the network possesses a certain sparsity property, we provide a formal characterization of the accuracy with which the external effects can be detected, using a network filtering system that combines Lasso regression in a sparse simultaneous equation model with simple residual analysis. We explore the implications of the technical conditions underlying our characterization, in the context of various network topologies, and we illustrate our method using simulated data.
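A toy version of the Lasso-plus-residual-analysis idea on simulated data (not the paper's estimator or its theoretical conditions): one Lasso regression per node is fitted on unperturbed data, and nodes whose residuals on new data look anomalous are flagged. The network, perturbation, and score threshold are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
A = np.zeros((p, p)); A[0, 1] = A[1, 2] = 0.8        # a tiny sparse network

def simulate(n, shift=None):
    X = rng.normal(size=(n, p)) @ (np.eye(p) + A)
    if shift is not None:
        X[:, shift] += 1.5                            # external perturbation
    return X

X_ctrl, X_pert = simulate(n), simulate(n, shift=3)

# Fit one Lasso per node on control data, then test residuals on new data.
scores = np.zeros(p)
for j in range(p):
    others = np.delete(np.arange(p), j)
    model = Lasso(alpha=0.05).fit(X_ctrl[:, others], X_ctrl[:, j])
    r = X_pert[:, j] - model.predict(X_pert[:, others])
    scores[j] = np.abs(r.mean()) / (r.std(ddof=1) / np.sqrt(n))  # t-like score

print("flagged nodes:", np.where(scores > 3)[0])
```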

4.
Existing high-precision object detection algorithms for UAV (unmanned aerial vehicle) aerial images often have a large number of parameters and heavy weights, which makes them difficult to deploy on mobile devices. We propose three YOLO-based lightweight object detection networks for UAVs, named YOLO-L, YOLO-S, and YOLO-M. In YOLO-L, we adopt a deconvolution approach to explore suitable upsampling rules during training to improve detection accuracy. The convolution-batch normalization-SiLU activation function (CBS) structure is replaced with Ghost CBS to reduce the number of parameters and the weight; meanwhile, a Maxpool maximum-pooling operation is proposed to replace the CBS structure to avoid generating parameters and weight. YOLO-S greatly reduces the weight of the network by directly introducing CSPGhostNeck residual structures, so that the parameters and weight are each decreased by about 15% at the expense of 2.4% mAP. YOLO-M adopts the CSPGhostNeck residual structure and deconvolution to reduce parameters by 5.6% and weight by 5.7%, while mAP drops by only 1.8%. The results show that the three lightweight detection networks proposed in this paper perform well on UAV aerial image object detection tasks.
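As an illustration of the Ghost-style substitution mentioned above, the sketch below shows one possible Conv-BN-SiLU block in which part of the output channels are generated by a cheap depthwise convolution. This is an assumption of what a "Ghost CBS" block might look like, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GhostCBS(nn.Module):
    """Ghost-style block: a primary conv produces a few 'real' feature maps,
    and a cheap depthwise conv generates the remaining 'ghost' maps.
    With ratio=2 the output channel count is assumed to be even."""
    def __init__(self, in_ch, out_ch, ratio=2, kernel=1, dw_kernel=3):
        super().__init__()
        primary_ch = out_ch // ratio
        ghost_ch = out_ch - primary_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel, padding=kernel // 2, bias=False),
            nn.BatchNorm2d(primary_ch), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, ghost_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(ghost_ch), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

print(GhostCBS(16, 32)(torch.randn(1, 16, 64, 64)).shape)  # (1, 32, 64, 64)
```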

5.
To address the low detection accuracy of existing algorithms on remote sensing images with complex backgrounds and small, densely packed objects, an improved algorithm based on YOLOv3 is proposed. Building on YOLOv3, it incorporates a densely connected network, using dense connection blocks to extract deep features and strengthen feature propagation, and it introduces Distance-IoU (DIoU) loss as the loss function for coordinate prediction, making bounding-box localization more accurate. In addition, ...
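The DIoU loss referred to above has a standard closed form (IoU minus the squared center distance over the squared diagonal of the enclosing box); a standalone NumPy-free sketch for boxes in (x1, y1, x2, y2) format is shown below, as an illustration only, not the paper's code.

```python
def diou_loss(box_a, box_b):
    """DIoU loss = 1 - (IoU - d^2 / c^2) for two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection and union -> IoU
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0
    # squared distance between box centers
    d2 = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 + ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    # squared diagonal of the smallest enclosing box
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 + (max(ay2, by2) - min(ay1, by1)) ** 2
    return 1.0 - (iou - d2 / c2) if c2 > 0 else 1.0 - iou

print(diou_loss((0, 0, 2, 2), (1, 1, 3, 3)))
```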

6.
Zhou Quan, Wang Jie, Liu Jia, Li Shenghua, Ou Weihua, Jin Xin. 《Mobile Networks and Applications》, 2021, 26(1): 77-87

The huge computational overhead limits the inference of convolutional neural networks on mobile devices for object detection, which plays a critical role in many real-world scenes, such as face identification, autonomous driving, and video surveillance. To solve this problem, this paper introduces a lightweight convolutional neural network, called RSANet: Towards Real-time Object Detection with Residual Semantic-guided Attention Feature Pyramid Network. Our RSANet consists of two parts: (a) Lightweight Convolutional Network (LCNet) as backbone, and (b) Residual Semantic-guided Attention Feature Pyramid Network (RSAFPN) as detection head. In the LCNet, in contrast to recent advances of lightweight networks that prefer to utilize pointwise convolution for changing the number of feature maps, we design a Constant Channel Module (CCM) to save the Memory Access Cost (MAC) and design a Down Sampling Module (DSM) to save the computational cost. In the RSAFPN, meanwhile, we employ a Residual Semantic-guided Attention Mechanism (RSAM) to fuse the multi-scale features from LCNet for improving detection performance efficiently. The experiment results show that, on the PASCAL VOC 2007 dataset, RSANet only requires 3.24 M model size and needs only 3.54B FLOPs with a 416×416 input image. Compared to YOLO Nano, our method obtains a 6.7% improvement in accuracy and requires less computation. On the MS COCO dataset, RSANet only requires 4.35 M model size and needs only 2.34B FLOPs with a 320×320 input image. Our method obtains a 1.3% improvement in accuracy compared to Pelee. The comprehensive experiment results demonstrate that our model achieves promising results in terms of the available speed and accuracy trade-off.


7.
Occlusion edges correspond to range discontinuity in a scene from the point of view of the observer. Detection of occlusion edges is an important prerequisite for many machine vision and mobile robotic tasks. Although they can be extracted from range data, extracting them from images and videos would be extremely beneficial. We trained a deep convolutional neural network (CNN) to identify occlusion edges in images and videos with just RGB, RGB-D and RGB-D-UV inputs, where D stands for depth and UV stands for horizontal and vertical components of the optical flow field respectively. The use of CNN avoids hand-crafting of features for automatically isolating occlusion edges and distinguishing them from appearance edges. Other than quantitative occlusion edge detection results, qualitative results are provided to evaluate input data requirements and to demonstrate the trade-off between high resolution analysis and frame-level computation time that is critical for real-time robotics applications.

8.
赵琰, 赵凌君, 匡纲要. 《电子学报》, 2021, 49(9): 1665-1674
To address the high discretization of aircraft scattering points in synthetic aperture radar (SAR) images, the complex background clutter around them, and the weak ability of existing algorithms to represent the shallow semantic features of aircraft, this paper proposes a SAR image aircraft detection algorithm based on an Attention Feature Fusion Network (AFFN). By introducing a Bottleneck Attention Module (BAM), an attention feature fusion strategy containing an Attention Bidirectional Feature Fusion Module (ABFFM) and an Attention Transfer Connection Block (ATCB) is built into AFFN, and the network structure is suitably optimized, improving the extraction and discrimination of the shallow semantic features of the discretized aircraft scattering points. On a self-built dataset of measured aircraft targets from mixed Gaofen-3 and TerraSAR-X satellite images, AFFN is compared with general deep-learning object detectors and SAR-specific detectors, and the results verify its accuracy and efficiency for aircraft detection in SAR images.

9.
Interpretable object detection in remote sensing images based on weak semantic attention   (cited: 2; self-citations: 0; by others: 2)
With the rapid development of remote sensing technology in recent years, object detection in remote sensing images has become a research hotspot. To address the complex backgrounds of remote sensing images and the lack of interpretability of existing detection models, this paper proposes an interpretable object detection method for remote sensing images based on weak semantic attention. Specifically, a multi-level feature pyramid is first used to handle the wide range of object scales in remote sensing images. Second, angle regression of the detection boxes is used to handle object orientation in remote sensing images. ...

10.
For the Faster R-CNN (Faster region-based convolutional neural network) object detection algorithm, an adaptive region proposal network is proposed. During training, the number of candidate regions is adjusted according to feedback from the current loss so that it varies dynamically within a certain range, which saves overhead, and the best-performing number of candidate regions is recorded; the recorded number is then used at test time. To address the time cost of manually choosing a confidence threshold when the Softmax function classifies candidate regions ...

11.
Crumpled sheets of paper tend to exhibit a specific and complex structure, which is described by physicists as ridge networks. Existing literature shows that the automation of ridge network detection in crumpled paper is very challenging because of its complex structure and measuring distortion. In this paper, we propose to model the ridge network as a weighted graph and formulate the ridge network detection as an optimization problem in terms of the graph density. First, we detect a set of graph nodes and then determine the edge weight between each pair of nodes to construct a complete graph. Next, we define a graph density criterion and formulate the detection problem to determine a subgraph with maximal graph density. Further, we also propose to refine the graph density by including a pairwise connectivity into the criterion to improve the connectivity of the detected ridge network. Our experimental results show that, with the density criterion, our proposed method effectively automates the ridge network detection.
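A generic sketch of one way to pick a dense subgraph from a weighted complete graph by greedy peeling, using the simple density sum(W)/|V|; the paper's specific density criterion and its pairwise-connectivity refinement are not reproduced here.

```python
import numpy as np

def densest_subgraph(W):
    """Greedy peeling: repeatedly drop the node with the smallest weighted
    degree and keep the node set with the best density sum(W)/|V| seen."""
    W = np.array(W, dtype=float)
    nodes = list(range(len(W)))
    best_nodes = list(nodes)
    best_density = W[np.ix_(nodes, nodes)].sum() / len(nodes)
    while len(nodes) > 1:
        sub = W[np.ix_(nodes, nodes)]
        drop = nodes[int(np.argmin(sub.sum(axis=1)))]
        nodes.remove(drop)
        density = W[np.ix_(nodes, nodes)].sum() / len(nodes)
        if density > best_density:
            best_density, best_nodes = density, list(nodes)
    return best_nodes, best_density

W = np.array([[0.0, 5.0, 4.0, 0.1],
              [5.0, 0.0, 6.0, 0.2],
              [4.0, 6.0, 0.0, 0.1],
              [0.1, 0.2, 0.1, 0.0]])
print(densest_subgraph(W))   # the first three nodes form the dense core
```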

12.
A simple algorithm for tracking targets in image sequences   (cited: 2; self-citations: 0; by others: 2)
倪军, 袁家虎, 吴钦章. 《半导体光电》, 2005, 26(Z1): 140-142, 145
A simple algorithm for fast acquisition and tracking of space targets is presented. The algorithm exploits the strong correlation between adjacent frames in high-frame-rate image sequences and segments the target with an adaptive threshold. The design of the algorithm is described in detail, and computer simulation results are given, showing that the algorithm both segments the target effectively and meets real-time processing requirements.
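A minimal sketch of the segmentation step, assuming OpenCV: the target is segmented from an adjacent-frame difference image using a threshold adapted to that image's statistics. The particular rule (mean plus a multiple of the standard deviation) is an assumption, not the paper's formula.

```python
import cv2

def segment_adaptive(prev_gray, curr_gray, k=3.0):
    """Adaptive-threshold segmentation of an adjacent-frame difference image."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    thresh = float(diff.mean() + k * diff.std())    # adapts to frame content
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```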

13.
Analysis of interference problems and optimization schemes in TD-SCDMA networks   (cited: 1; self-citations: 1; by others: 1)
李斌, 雷菁. 《现代电子技术》, 2010, 33(17): 49-51, 56
At present, various kinds of interference from inside and outside the system seriously degrade TD-SCDMA network performance, so interference optimization has become an important part of TD-SCDMA radio network optimization. This paper first classifies and localizes the interference types and common problems in TD-SCDMA systems, then, drawing on engineering practice, gives an optimization workflow for TD-SCDMA interference problems, and finally analyzes TD-SCDMA interference optimization further through concrete optimization cases.

14.
Analysis of daytime electro-optical detection capability for space targets   (cited: 2; self-citations: 0; by others: 2)
卢栋. 《现代电子技术》, 2011, 34(16): 176-178, 182
The bright daytime sky background makes electro-optical detection of space targets very difficult. Focusing on the characteristics of daytime detection, and based on detection-capability models for the limiting detection signal-to-noise ratio, contrast, and limiting detectable stellar magnitude, it is shown that spectral filtering can effectively improve daytime detection capability; after comparing the effects of various filters, narrowband filtering is proposed as the optimal spectral filtering method. From the perspective of optical system parameters, analysis and calculation show that, under certain conditions and with the influence of the various parameters weighed against one another, reducing the field of view, enlarging the optical aperture, and increasing the focal length help improve daytime electro-optical detection capability, providing a reference for the design of optical detectors.

15.
An eye detection method for facial images using Zernike moments with a support vector machine (SVM) is proposed. Eye/non-eye patterns are represented in terms of the magnitude of Zernike moments and then classified by the SVM. Due to the rotation-invariant characteristics of the magnitude of Zernike moments, the method is robust against rotation, which is demonstrated using rotated images from the ORL database. Experiments with TV drama videos showed that the proposed method achieved a 94.6% detection rate, which is a higher performance level than that achievable by the method that uses gray values with an SVM.
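A sketch of the described pipeline under the assumption that Zernike moment magnitudes are computed with the mahotas library and classified with scikit-learn's SVC; the patch data here is random dummy data standing in for eye/non-eye patches, and the radius/degree values are assumptions.

```python
import numpy as np
import mahotas
from sklearn.svm import SVC

def zernike_features(patch, radius=12, degree=8):
    """Magnitudes of Zernike moments of a grayscale patch (rotation-invariant)."""
    return mahotas.features.zernike_moments(patch, radius, degree=degree)

# dummy training data: stacks of grayscale patches and 0/1 labels
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 24, 24)).astype(np.uint8)
labels = rng.integers(0, 2, size=40)

X = np.array([zernike_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```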

16.
Object detection and tracking technology plays an important role in the military, aerospace, and many sectors of the national economy. This paper surveys patented techniques for object detection and tracking, focusing on recent hot topics, and roughly classifies recent patents into the following categories: block matching, feature-based methods, inter-image subtraction, transform-domain methods, gradient-based methods, and statistical methods. The advantages and problems of each category are introduced, in the hope of providing a reference for patent examination and application.

17.
刘景波, 秦娜, 金炜东. 《中国激光》, 2008, 35(s2): 341-344
A new method is proposed for detecting moving objects indoors at night under weak light sources. First, background modeling is performed to obtain a stable background image; then both the background and the current frame are enhanced to improve their clarity. A relative background subtraction is used to detect the foreground moving object, and the difference image is denoised and repaired. Exploiting the differences in pixel brightness among the foreground object region, the shadow region, and the background region, shadows that may exist in the background difference image are detected and removed, yielding an accurate moving object. Experiments on video captured indoors at night verify the effectiveness of the proposed method.

18.
Intelligent surveillance systems with real-time detection, tracking, and analysis are the inevitable trend in the development of surveillance systems. This paper proposes a method for detecting and tracking moving objects under real-time surveillance. First, moving objects are detected with background subtraction, and the reference background is periodically updated locally with a background-update strategy, which improves detection accuracy. For tracking, key points of the object are extracted at different scales, which strengthens the algorithm's robustness under occlusion, and the mean shift algorithm is then used to estimate the object's position in the next frame. The method was tested on test video sequences from the IBM research center, and the experimental results show that it is effective and feasible.
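A rough sketch of the detect-then-track idea, assuming OpenCV: background subtraction provides the initial window, and meanShift follows the object from frame to frame. The window setup, hue-histogram choice, and thresholds are illustrative assumptions, not the paper's configuration.

```python
import cv2

def detect_window(background, frame, thresh=30):
    """Initial tracking window from a background difference mask."""
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(background, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    pts = cv2.findNonZero(mask)
    return cv2.boundingRect(pts) if pts is not None else None

def track(frame, window, roi_hist):
    """One mean-shift step on the back-projected hue histogram."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, window = cv2.meanShift(back_proj, window, criteria)
    return window
```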

19.
Moving object detection and tracking in video sequences   (cited: 1; self-citations: 0; by others: 1)
李春生, 龚晓峰. 《现代电子技术》, 2009, 32(23): 149-151, 157
A detection and tracking algorithm for moving objects in video sequences is proposed. The method combines histogram statistics with multi-frame averaging to update the dynamic background; after noise removal, morphological processing, and shadow handling, objects are extracted by region labeling. An object array is built from the object feature parameters, and fast tracking is achieved by distance matching between the object arrays of the current frame and the previous frame. Compared with traditional methods, this approach has better learning ability and effectively improves both the accuracy and the speed of moving object detection. Experimental results show that the method has good robustness and adaptability.
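A compact sketch of the pipeline stages named above, assuming OpenCV: a running-average background (standing in for the histogram-plus-averaging update), connected-component labeling to extract objects, and nearest-centroid distance matching between frames. All parameters are assumptions.

```python
import cv2
import numpy as np

def update_background(bg_float, frame_gray, alpha=0.05):
    """Running-average background; bg_float must be a float32 image."""
    cv2.accumulateWeighted(frame_gray.astype(np.float32), bg_float, alpha)
    return bg_float

def extract_objects(bg_float, frame_gray, thresh=30, min_area=50):
    """Object centroids from the background difference via region labeling."""
    diff = cv2.absdiff(frame_gray, cv2.convertScaleAbs(bg_float))
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

def match_objects(prev_centroids, curr_centroids, max_dist=40.0):
    """Greedy nearest-centroid matching between consecutive frames."""
    matches = []
    for i, p in enumerate(prev_centroids):
        dists = [np.hypot(p[0] - c[0], p[1] - c[1]) for c in curr_centroids]
        if dists and min(dists) <= max_dist:
            matches.append((i, int(np.argmin(dists))))
    return matches
```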

20.
For the detection and tracking of moving objects in video sequences, a background difference algorithm based on edge detection is proposed. The algorithm is computationally light and relatively simple; while still obtaining the complete edge information of the moving object, it is not disturbed by external factors such as background changes and illumination variations. Experimental results show that the method is robust.
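A minimal sketch of an edge-based background difference, assuming OpenCV: Canny edge maps of the background and the current frame are compared instead of raw intensities, which is less sensitive to gradual illumination change. The Canny thresholds are assumptions.

```python
import cv2

def moving_edges(background_gray, frame_gray, lo=50, hi=150):
    """Edges present in the current frame but absent from the background."""
    bg_edges = cv2.Canny(background_gray, lo, hi)
    fr_edges = cv2.Canny(frame_gray, lo, hi)
    return cv2.bitwise_and(fr_edges, cv2.bitwise_not(bg_edges))
```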
