Similar Documents
20 similar documents found (search time: 62 ms)
1.
With the support of artificial intelligence, unmanned aerial vehicles (UAVs) have gained preliminary intelligent perception capabilities and demonstrate efficient, flexible data collection in practical applications. Object detection from the UAV perspective is a key enabling technology that plays an irreplaceable role in many fields and is of significant research value. To present the progress in this area, this paper comprehensively summarizes object detection algorithms for UAV imagery, categorizing, analyzing, and comparing existing work. 1) It introduces the concept of UAV-view object detection and summarizes the five major imbalance challenges it faces: object scale, spatial distribution, sample quantity, class semantics, and optimization objectives. Building on a review of existing methods, it also surveys applications of UAV-view object detection in practical scenarios such as traffic monitoring, power-line inspection, crop analysis, and disaster relief. 2) It focuses on methods that improve UAV-view detection performance through data augmentation, multi-scale feature fusion, region-focusing strategies, multi-task learning, and model lightweighting, summarizing their strengths and weaknesses and analyzing how they relate to the remaining challenges. 3) It comprehensively reviews UAV-view object detection datasets and reports performance evaluations of existing algorithms on two widely used public benchmarks. 4) It concludes with an outlook on future research directions for UAV-view object detection.

2.
With the continued development of deep learning, deep-learning-based salient object detection has become a research hotspot in computer vision. This paper first categorizes existing deep-learning-based salient object detection algorithms from three perspectives, namely boundary/semantic enhancement, global/local combination, and auxiliary networks, presenting their saliency maps and qualitatively comparing the three categories. It then briefly introduces the datasets and evaluation criteria commonly used for deep-learning-based salient object detection. Next, it compares the surveyed methods on multiple datasets, including quantitative comparisons, P-R curves, and visual comparisons. Finally, it points out the shortcomings of existing methods with respect to complex backgrounds, small objects, and real-time detection, and discusses future directions for deep-learning-based salient object detection, such as detection under complex backgrounds, in real time, for small objects, and with weak supervision.

3.
A Survey of Small Object Detection Algorithms Based on Deep Learning (Cited by 1: 0 self-citations, 1 by others)
With the development of artificial intelligence, deep learning has been widely applied in face recognition, pedestrian detection, autonomous driving, and other fields. Object detection, one of the most fundamental and challenging problems in machine vision, has attracted extensive attention in recent years. Focusing on object detection and on small object detection in particular, this survey summarizes the commonly used datasets and performance metrics and compares the characteristics, advantages, and detection difficulty of the common datasets. It systematically reviews mainstream object detection methods and the challenges faced by small object detection, and surveys the latest deep-learning-based small object detection work, with emphasis on multi-scale and super-resolution-based approaches. It also covers lightweighting strategies for detection methods and the performance of several lightweight models, summarizes the characteristics, advantages, and limitations of each class of methods, and outlines future directions for deep-learning-based small object detection.

4.
The performance of an object detection algorithm depends both on the sample distribution of the dataset and on the design of the feature extraction network. Starting from these two points, this paper first analyzes the distribution of object attributes across scales in the COCO 2017 dataset to identify intrinsic factors behind the low detection accuracy on small objects, and accordingly proposes the CP module, which adjusts the dataset's small-object distribution offline: it upsamples images containing small objects, and it copies and pastes the small objects within those images. Second, to address the network's feature extraction capability, and inspired by curriculum learning (CL), it proposes a CL layer, which guides network learning with object labels and controls the learning intensity with a CL factor, enhancing sample features to ease feature extraction. Applying the CP module on COCO 2017 and embedding the CL layer in CenterNet, multiple comparative experiments are conducted with mean detection accuracy and the small-, medium-, and large-object detection accuracies as evaluation metrics; the experimental results demonstrate the effectiveness of the CP module and the CL layer.
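The copy-paste idea behind the CP module can be illustrated with a minimal sketch. This is an illustration of the general copy-paste augmentation technique, not the authors' implementation; the function name, the COCO-style size threshold, and the copy count are assumptions, and no overlap checking or blending is performed:

```python
import numpy as np

def copy_paste_small_objects(image, boxes, small_thresh=32, n_copies=2, rng=None):
    """Duplicate small-object crops at random locations (illustrative sketch).

    image: HxWxC uint8 array; boxes: list of (x, y, w, h) in pixels.
    Objects with sqrt(area) < small_thresh are treated as small (COCO-style).
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    out = image.copy()
    new_boxes = list(boxes)
    for (x, y, bw, bh) in boxes:
        if np.sqrt(bw * bh) >= small_thresh:
            continue  # only small objects are duplicated
        crop = image[y:y + bh, x:x + bw]
        for _ in range(n_copies):
            nx = int(rng.integers(0, w - bw))
            ny = int(rng.integers(0, h - bh))
            out[ny:ny + bh, nx:nx + bw] = crop  # paste; no blending or overlap check
            new_boxes.append((nx, ny, bw, bh))
    return out, new_boxes
```

In practice a real implementation would also avoid pasting over existing objects and update the annotation file accordingly.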

5.
The distinctive viewing angle and varying imaging altitudes of remote sensing imagery mean that it contains many objects of extremely limited size, and detecting these small objects accurately and efficiently is crucial to building intelligent remote sensing image interpretation systems. This paper presents a comprehensive survey of deep-learning-based small object detection in remote sensing scenes. First, based on the intrinsic characteristics of small objects, it identifies three main challenges for small object detection in remote sensing imagery: the feature representation bottleneck, foreground-background confusion, and regression branch sensitivity. Second, drawing on an in-depth literature review, it comprehensively surveys deep-learning-based small object detection algorithms for remote sensing imagery. It selects three representative tasks, namely small object detection in optical remote sensing images, in SAR images, and in infrared images, systematically summarizes representative methods in each area, and organizes them by the technical approach each algorithm adopts. Third, it summarizes the public datasets commonly used for small object detection in remote sensing, covering the three data types of optical remote sensing, SAR, and infrared images; using the representative datasets of the three fields, SODA-A (small object detection datasets), AIR-SARShip, and NUAA-SIRST (Nanjing University of Aeronautics and Astronautics, single-frame infrared small target), it further provides a cross-comparison and in-depth evaluation of how mainstream remote sensing object detectors perform on small objects. Finally, it summarizes the current state of applications of small object detection in remote sensing imagery and looks ahead to development trends in the field.

6.
Video saliency detection is a hot research topic in computer vision. Its goal is to continuously extract motion-related salient objects from video sequences by jointly exploiting spatial and temporal information. Diverse object motion patterns, complex scenes, and camera motion make video saliency detection highly challenging. This paper surveys existing video saliency detection methods, introduces the relevant benchmark datasets, and experimentally compares the performance of existing methods. First, it reviews video saliency detection methods based on low-level cues, in five categories: transform-analysis-based, sparse-representation-based, information-theoretic, visual-prior-based, and other methods. It then summarizes learning-based methods, covering both traditional machine learning and deep learning, with emphasis on the latter. Next, it introduces the commonly used video saliency datasets, presents four performance metrics, and provides qualitative and quantitative comparisons of several recent algorithms on different datasets. Finally, it summarizes the key open problems in video saliency detection and offers an outlook on future trends.

7.
The task of object detection is to accurately and efficiently recognize and localize instances of a large number of predefined object categories in images. With the wide adoption of deep learning, both the accuracy and the efficiency of object detection have improved considerably, but deep-learning-based detection still faces key technical challenges: improving and optimizing mainstream detection algorithms, raising detection accuracy on small objects, detecting many object categories, and building lightweight detection models. Addressing these challenges on the basis of an extensive literature survey, this paper analyzes improvements to mainstream detectors from the perspective of refining and combining two-stage and one-stage algorithms; methods for improving small object detection accuracy from the perspectives of backbone networks, enlarging the visual receptive field, feature fusion, cascaded convolutional neural networks, and model training schemes; methods for multi-category object detection from the perspectives of training schemes and network architecture; and methods for lightweight detection models from the perspective of network architecture. In addition, it describes the common object detection benchmark datasets in detail, compares the performance of representative algorithms in the field from four aspects, and offers predictions and an outlook on open problems and future research directions in object detection. Object detection remains a favored hotspot of computer vision and pattern recognition; ever more accurate and efficient algorithms continue to be proposed, and the field will develop in many further directions.

8.
Small object detection is widely used in video surveillance and many other tasks and plays an important role across domains. Because the objects to be detected are small and their features weak, the performance of current detection algorithms on small objects still leaves room for improvement. Existing traditional methods based on hand-crafted features suffer from low accuracy and weak robustness in complex-background scenarios, while deep-learning-based detectors face difficulties in obtaining datasets and extracting small-object features. To address the difficulty of extracting features from small objects that occupy only a tiny fraction of low signal-to-clutter-ratio images, this paper proposes a deep segmentation model for small object detection. To further improve detection performance and reduce the miss rate by fully exploiting multi-band image information, it designs a multi-band fusion small object detection method built on the deep segmentation model. Experimental results on a simulated dataset show that the method effectively improves small object detection accuracy and offers a new direction for subsequent research on small object detection.

9.
Fruit detection and recognition based on computer vision is an important interdisciplinary research topic spanning object detection, computer vision, agricultural robotics, and related fields, with significant theoretical and practical value for smart agriculture, agricultural modernization, and automated harvesting robots. As deep learning has been widely and successfully applied to image processing, fruit detection and recognition algorithms that combine computer vision with deep learning have become mainstream. This paper introduces the task, difficulties, and state of the art of vision-based fruit detection and recognition, along with the two classes of deep-learning-based fruit detection and recognition algorithms. Finally, it presents the public datasets used for training the algorithm models and the metrics used to evaluate model performance, and discusses open problems in fruit detection and recognition as well as possible future directions.

10.
Existing object detection algorithms mainly take the large objects in images as their subject; research on small objects is comparatively scarce and suffers from low detection accuracy and failure to meet real-time requirements. To address this, a real-time small object detection method based on the deep learning detection framework PVANet is proposed. First, a benchmark dataset dedicated to small object detection is built; the objects it contains occupy a very small fraction of each image and exhibit truncation, occlusion, and other interference, enabling better evaluation of the strengths and weaknesses of small object detection methods. Second, a method for generating high-quality small-object candidate boxes with the region proposal network (RPN) is proposed to improve the algorithm's detection accuracy and speed, and two new learning rate policies, step and inv, are adopted to improve model performance and further raise detection accuracy. On the constructed small-object dataset, mean detection accuracy improves by 10.67% over the original PVANet algorithm and speed by about 30%. The experimental results show that the method is an effective small object detection algorithm and achieves real-time detection.
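The step and inv learning-rate policies named above follow the conventions popularized by the Caffe framework; a minimal sketch of both schedules (the hyperparameter values used below are illustrative, not taken from the paper):

```python
def lr_step(base_lr, gamma, step_size, it):
    """Caffe-style 'step' policy: lr = base_lr * gamma^floor(it / step_size)."""
    return base_lr * gamma ** (it // step_size)

def lr_inv(base_lr, gamma, power, it):
    """Caffe-style 'inv' policy: lr = base_lr * (1 + gamma * it)^(-power)."""
    return base_lr * (1.0 + gamma * it) ** (-power)
```

The step policy drops the rate in discrete stages, while inv decays it smoothly with every iteration; which works better is usually decided empirically per dataset.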

11.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. Machine Learning, 1995, 18(2-3): 187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally, we give a sufficient condition for an arbitrary class of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from that class. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

12.
Kearns, Michael; Seung, H. Sebastian. Machine Learning, 1995, 18(2-3): 255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.
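Why combining statistically independent, individually mediocre hypotheses can approximate a target well can be illustrated with a simple majority-vote simulation. This Monte Carlo sketch is an illustration of the underlying intuition, not the paper's formal model:

```python
import random

def majority_vote_accuracy(p, k, trials=20000, seed=0):
    """Estimate the accuracy of a majority vote over k independent
    hypotheses, each correct with probability p, on a single example."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = sum(rng.random() < p for _ in range(k))  # correct votes
        if votes > k / 2:
            correct += 1
    return correct / trials
```

For p slightly above 1/2, accuracy rises toward 1 as k grows, which is the statistical leverage the model seeks to exploit.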

13.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, …, n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC-)dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
Liu Xiao; Mao Ning. Journal of Data Acquisition and Processing (《数据采集与处理》), 2015, 30(6): 1310-1317
A learning automaton (LA) is an adaptive decision-maker that learns to select the optimal action from an allowed action set by continually interacting with a random environment. In most traditional LA models the action set is taken to be finite; for continuous-parameter learning problems the action space therefore has to be discretized, and the learning precision depends on the discretization granularity. This paper proposes a new continuous action set learning automaton (CALA) whose action set is a variable interval and which selects its output actions according to a uniform distribution over that interval. The learning algorithm adaptively updates the interval endpoints using binary feedback signals from the environment. A simulation experiment on a multimodal learning problem demonstrates the superiority of the new algorithm over three existing CALA algorithms.
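The idea of an interval-valued action set with uniform sampling can be sketched as follows. This is a hedged toy version of the general mechanism only; the paper's actual endpoint-update rule is not reproduced here, and the contraction factor and function name are assumptions:

```python
import random

def cala_maximize(env_feedback, lo=0.0, hi=1.0, shrink=0.9, steps=2000, seed=0):
    """Toy interval-based CALA sketch: sample an action uniformly from
    [lo, hi]; on positive binary feedback, contract the interval around
    the rewarded action (clipped to [0, 1]); on penalty, leave the
    interval unchanged so exploration continues."""
    rng = random.Random(seed)
    for _ in range(steps):
        a = rng.uniform(lo, hi)
        width = (hi - lo) * shrink
        if env_feedback(a):  # binary reward signal from the environment
            lo = max(0.0, a - width / 2)
            hi = min(1.0, a + width / 2)
    return (lo + hi) / 2  # midpoint as the current action estimate
```

The interval plays the role of the discretized action set in classical LA models, but its resolution adapts automatically as learning proceeds.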

16.
Transfer in variable-reward hierarchical reinforcement learning (Cited by 2: 1 self-citation, 1 by others)
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
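The recombination step enabled by rewards that are linear in reward features can be sketched as follows. Since r(s) = w · φ(s), value functions decompose into per-feature components that can be cached per source task and recombined with a new task's weight vector; the shapes and names below are illustrative assumptions, not the authors' API:

```python
import numpy as np

def init_value_function(component_store, new_w):
    """Sketch of value-function initialization for a new SMDP.

    component_store: list of [n_states, n_features] arrays, one per cached
    source SMDP, holding per-feature value components. new_w: the new
    task's reward-feature weights. Each cached decomposition recombines
    as components @ new_w; the elementwise max over source tasks gives an
    optimistic per-state initial value function."""
    candidates = [components @ new_w for components in component_store]
    return np.maximum.reduce(candidates)
```

The cached components are learned once per source SMDP; initializing a new task this way is what lets learning start far above a zero-initialized value function.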

17.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

18.
19.
Applying Supervision Mechanisms of Different Degrees to Automatic Text Classification (Cited by 1: 0 self-citations, 1 by others)
Automatic text classification involves information retrieval, pattern recognition, machine learning, and related fields. Taking the degree of supervision as its organizing thread, this paper surveys several methods belonging to fully supervised, unsupervised, and semi-supervised learning strategies, namely NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map), and applies them to text classification. Among these, gSOM is a semi-supervised variant that we developed from SOM. Using the Reuters-21578 corpus, we study how the degree of supervision affects classification performance and offer recommendations for practical text classification work.
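Of the surveyed methods, NBC is the simplest to sketch. The following is a minimal multinomial Naive Bayes text classifier with Laplace smoothing, shown for illustration of the fully supervised end of the spectrum; it is not the paper's exact configuration:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial Naive Bayes model; docs: list of (tokens, label)."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def predict_nb(model, tokens):
    """Return the label maximizing log P(label) + sum log P(token | label)."""
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, n in class_counts.items():
        lp = math.log(n / total_docs)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)  # Laplace smoothing
        for t in tokens:
            lp += math.log((word_counts[label][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Semi-supervised variants such as ssFCM and gSOM differ in that only part of the corpus carries labels, with the unlabeled remainder shaping the clusters.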

20.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号