Similar Documents
 20 similar documents found (search time: 62 ms)
1.
Supported by artificial-intelligence techniques, unmanned aerial vehicles (UAVs) have acquired preliminary intelligent-perception capabilities and demonstrate efficient, flexible data collection in practice. Object detection from the UAV perspective is a key core technology that plays an irreplaceable role in many fields and is of significant research interest. To present the progress in this area, this paper comprehensively surveys UAV-view object-detection algorithms and categorizes, analyzes, and compares existing work. 1) It introduces the concept of object detection from the UAV perspective and summarizes the five major imbalance challenges it faces: object scale, spatial distribution, sample quantity, category semantics, and optimization objectives. Building on a review of existing methods, it specifically compiles applications of UAV-view detectors in real-world scenarios such as traffic monitoring, power-line inspection, crop analysis, and disaster rescue. 2) It focuses on methods that improve UAV-view detection performance through data-augmentation strategies, multi-scale feature fusion, region-focusing strategies, multi-task learning, and model lightweighting, summarizing their strengths and weaknesses and analyzing how they relate to the remaining challenges. 3) It comprehensively introduces UAV-view object-detection datasets and presents performance evaluations of existing algorithms on two commonly used public datasets. 4) It discusses future research directions for object detection from the UAV perspective.

2.
With the continued development of deep learning, deep-learning-based salient object detection has become a research hotspot in computer vision. This paper first categorizes existing deep-learning-based salient object detection algorithms from three perspectives, boundary/semantic enhancement, global/local combination, and auxiliary networks, presents their saliency maps, and qualitatively compares the three types of methods. It then briefly introduces the datasets and evaluation criteria commonly used in this area, followed by a performance comparison of the surveyed methods on multiple datasets, covering quantitative comparison, P-R curves, and visual comparison. Finally, it points out the shortcomings of existing methods with respect to complex backgrounds, small objects, and real-time detection, and discusses future directions such as salient object detection for complex backgrounds, real-time operation, small objects, and weak supervision.

3.
A survey of deep-learning-based small object detection algorithms
With the development of artificial-intelligence technology, deep learning has been widely applied in face recognition, pedestrian detection, autonomous driving, and other fields. Object detection, one of the most fundamental and challenging problems in machine vision, has received extensive attention in recent years. Focusing on object detection, and small object detection in particular, this paper summarizes commonly used datasets and performance metrics, compares the characteristics, strengths, and detection difficulty of common datasets, and systematically reviews mainstream detection methods and the challenges of small object detection. It surveys the latest deep-learning-based small-object-detection work, with emphasis on multi-scale and super-resolution approaches, describes lightweighting strategies for detectors and the performance of several lightweight models, summarizes the characteristics, strengths, and limitations of each class of methods, and outlines future directions for deep-learning-based small object detection.

4.
Object-detection performance depends both on the sample distribution of the dataset and on the design of the feature-extraction network. Starting from these two points, this paper first analyzes the distribution of object attributes across scales in the COCO 2017 dataset and identifies inherent factors that cause low small-object detection accuracy. Based on this analysis it proposes the CP module, which adjusts the dataset's small-object distribution offline: it oversamples images containing small objects and copy-pastes the small objects within each image. Second, to strengthen the network's feature-extraction ability, and inspired by curriculum learning (CL), it proposes a CL layer that guides network learning with object labels and controls the learning intensity with a CL factor, enhancing sample features and easing feature extraction. Applying the CP module on COCO 2017 and embedding the CL layer in CenterNet, multiple comparative experiments are run using average detection accuracy and small-, medium-, and large-object detection accuracy as evaluation metrics; the results demonstrate the effectiveness of both the CP module and the CL layer.
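The offline copy-paste step of a CP-style module can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name, the small-object threshold, and the random-placement policy are all assumptions.

```python
import numpy as np

def copy_paste_small_objects(image, boxes, small_thresh=32 * 32, n_copies=2, rng=None):
    """Duplicate small-object patches at random locations (hypothetical sketch).

    boxes: list of (x1, y1, x2, y2) in pixel coordinates.
    Objects with area <= small_thresh are copied n_copies times each.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = image.shape[:2]
    out = image.copy()
    new_boxes = list(boxes)
    for (x1, y1, x2, y2) in boxes:
        if (x2 - x1) * (y2 - y1) > small_thresh:
            continue  # only small objects are duplicated
        patch = out[y1:y2, x1:x2].copy()
        ph, pw = patch.shape[:2]
        if ph >= h or pw >= w:
            continue  # patch would not fit anywhere
        for _ in range(n_copies):
            nx = int(rng.integers(0, w - pw))
            ny = int(rng.integers(0, h - ph))
            out[ny:ny + ph, nx:nx + pw] = patch  # paste (no blending, for simplicity)
            new_boxes.append((nx, ny, nx + pw, ny + ph))
    return out, new_boxes
```

A real pipeline would also check for overlap with existing objects before pasting; this sketch only illustrates how the small-object count in the label set grows.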

5.
With the large-scale application of deep learning to object detection, detection accuracy and speed have improved rapidly, and the technology is widely used in pedestrian detection, face detection, text detection, traffic-sign and traffic-light detection, remote-sensing image detection, and other fields. Based on a survey of the domestic and international literature, this paper reviews object-detection methods. It first introduces the current state of research, along with the datasets and performance metrics used to evaluate detectors. For the two main architectures, two-stage detectors based on region proposals and single-stage detectors based on regression, it details the pipeline, performance, and strengths and weaknesses of representative algorithms, supplements several detectors that have appeared in recent years, and tabulates the experimental results and trade-offs of the various algorithms on mainstream datasets. Finally, it describes common application scenarios of object detection and analyzes future trends in light of current research hotspots.

6.
The unique viewing angles and variable imaging altitudes of remote sensing produce images containing many objects of extremely limited size, and detecting these small objects accurately and efficiently is essential for building intelligent remote-sensing image-interpretation systems. This paper focuses on remote-sensing scenarios and comprehensively surveys deep-learning-based small object detection. First, based on the intrinsic properties of small objects, it identifies the three main challenges of small object detection in remote-sensing imagery: the feature-representation bottleneck, foreground-background confusion, and regression-branch sensitivity. Second, through an in-depth literature survey, it reviews deep-learning-based algorithms for three representative tasks, small object detection in optical remote-sensing images, in SAR images, and in infrared images, systematically summarizing the representative methods in each area and classifying them by technical approach. Third, it summarizes the public datasets commonly used for the three data types and, using the representative datasets SODA-A (small object detection datasets), AIR-SARShip, and NUAA-SIRST (Nanjing University of Aeronautics and Astronautics, single-frame infrared small target), conducts a horizontal comparison and in-depth evaluation of how mainstream remote-sensing detectors perform on small objects. Finally, it summarizes the state of applications and discusses future trends for small object detection in remote-sensing scenarios.

7.
Video saliency detection is a hot research topic in computer vision; its goal is the continuous extraction of motion-related salient objects from video sequences by jointly exploiting spatial and temporal information. Diverse object-motion patterns, complex scenes, and camera motion make the task highly challenging. This paper reviews existing video saliency detection methods, introduces the relevant benchmark datasets, and compares the performance of existing methods experimentally. First, it covers methods based on low-level cues, in five categories: transform-analysis-based, sparse-representation-based, information-theoretic, visual-prior-based, and others. It then summarizes learning-based methods, both traditional machine learning and deep learning, with emphasis on the latter. Next, it introduces the commonly used datasets, presents four performance metrics, and compares several recent algorithms qualitatively and quantitatively on different datasets. Finally, it summarizes the key open problems in video saliency detection and discusses future trends.

8.
Object detection is one of the most challenging tasks in machine vision, and deep learning is its dominant implementation approach. In recent years, rapid advances in deep-learning theory and techniques have driven great progress in deep-learning-based detectors, with researchers proposing a series of improvements covering data processing, network structure, loss functions, and more. This paper surveys the improvements made to typical object-detection algorithms. It summarizes commonly used datasets and performance metrics and compares the datasets' characteristics, strengths, and application domains. It then reviews the latest improvement ideas for typical deep-learning-based detectors, discussing, summarizing, and comparing them along several axes: data augmentation, prior (anchor) box selection, network-model construction, predicted-box selection, and loss computation. In light of current open problems, it outlines future research directions for typical deep-learning-based object detection.

9.
The task of object detection is to identify and localize, accurately and efficiently, instances of a large number of predefined object categories in images. With the widespread adoption of deep learning, both the accuracy and the efficiency of object detection have improved considerably, yet deep-learning-based detection still faces key technical challenges: improving and optimizing mainstream detectors, raising small-object detection accuracy, achieving multi-category detection, and lightweighting detection models. Addressing these challenges on the basis of an extensive literature survey, this paper analyzes how to improve mainstream detectors from the perspective of refining and combining two-stage and single-stage algorithms; how to raise small-object accuracy from the perspectives of backbone networks, enlarged receptive fields, feature fusion, cascaded convolutional neural networks, and training strategies; how to achieve multi-category detection from the perspectives of training strategies and network structure; and how to lighten detection models from the perspective of network structure. It also introduces the common benchmark datasets in detail, compares the performance of representative algorithms along four dimensions, and offers predictions and outlooks on open problems and future research directions. Object detection remains a favored hotspot in computer vision and pattern recognition; more accurate and efficient algorithms continue to appear, and the field will develop along an increasing number of research directions.

10.
Small object detection is widely used in tasks such as video surveillance and plays an important role in many fields. Because the targets are small and their features weak, the detection performance of current algorithms on small objects still has room for improvement. Traditional methods based on hand-crafted features suffer from low accuracy and weak robustness in scenes with complex backgrounds, while deep-learning-based detectors face problems such as hard-to-obtain datasets and hard-to-extract small-object features. To address the feature-extraction difficulty caused by the small area fraction of targets in low signal-to-clutter-ratio images, this paper proposes a deep segmentation model for small object detection. To further improve detection performance and reduce the miss rate by fully exploiting multi-band image information, it designs a multi-band-fusion small-object-detection method based on the deep segmentation model. Experimental results on a simulated dataset show that the method effectively improves small-object detection accuracy and offers a new direction for subsequent research on small object detection.

11.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. Machine Learning, 1995, 18(2-3): 187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally we give a sufficient condition for an arbitrary class ℱ of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from ℱ. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

12.
Kearns, Michael; Seung, H. Sebastian. Machine Learning, 1995, 18(2-3): 255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.

13.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
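The core initialization idea, rewards linear in a set of features, so a stored policy's value under a new reward weighting is a dot product, can be sketched as follows. The array names and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def initialize_value_function(phi, w):
    """Optimistically initialize a new SMDP's value function (sketch).

    phi: array of shape (n_policies, n_states, n_features), where
         phi[i, s] holds the expected discounted reward *features*
         accumulated by stored policy i starting from state s.
    w:   reward weight vector of shape (n_features,) for the new task.

    Under a reward linear in the features, policy i's value at state s
    for the new task is w . phi[i, s]; taking the per-state maximum over
    stored policies gives a strong starting point for further learning.
    """
    values = phi @ w           # shape (n_policies, n_states)
    return values.max(axis=0)  # best stored policy per state
```

The subsequent online learning would then refine this initialization on the new SMDP; only the transfer step is shown here.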

14.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

15.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, …, n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC-)dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

16.
刘晓, 毛宁. 《数据采集与处理》, 2015, 30(6): 1310-1317
A learning automaton (LA) is an adaptive decision-maker that learns, through continual interaction with a random environment, to select the optimal action from an allowed action set. In most traditional LA models the action set is taken to be finite, so continuous-parameter learning problems require discretizing the action space, and the learning precision depends on the discretization granularity. This paper proposes a new continuous action-set learning automaton (CALA) whose action set is a variable interval and which selects output actions uniformly over that interval. The learning algorithm uses a binary feedback signal from the environment to adaptively update the interval's endpoints. Simulation experiments on a multimodal learning problem demonstrate the advantages of the new algorithm over three existing CALA algorithms.
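The interval-based scheme described above can be sketched as follows. This is a minimal illustration under an assumed contraction rule for the endpoint update, not the paper's exact algorithm: on a positive binary feedback, both endpoints move toward the rewarded action.

```python
import random

def cala_optimize(env_feedback, lo=0.0, hi=1.0, retain=0.9, steps=2000, seed=0):
    """Interval-based continuous action-set learning automaton (sketch).

    env_feedback(a) returns a binary signal (True = reward, False = penalty).
    Actions are drawn uniformly from the current interval [lo, hi]; on a
    reward, the endpoints contract toward the rewarded action (the interval
    length shrinks by the factor `retain`); on a penalty, the interval is
    left unchanged.  All constants here are illustrative assumptions.
    """
    rng = random.Random(seed)
    for _ in range(steps):
        a = rng.uniform(lo, hi)          # uniform action selection
        if env_feedback(a):
            lo = a - retain * (a - lo)   # pull lower endpoint toward a
            hi = a + retain * (hi - a)   # pull upper endpoint toward a
    return (lo + hi) / 2                 # interval midpoint as the estimate
```

Because the rewarded action always stays inside the updated interval, the interval collapses onto a point in the rewarded region when feedback is consistent.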

17.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

18.
Applying different degrees of supervision to automatic text classification
Automatic text classification involves information retrieval, pattern recognition, machine learning, and other fields. Organized by the degree of supervision, this paper surveys several methods spanning supervised, unsupervised, and semi-supervised learning strategies, NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map), and applies them to text classification. Among these, gSOM is a semi-supervised variant that we developed from SOM. Using the Reuters-21578 corpus, we study how the degree of supervision affects classification performance and offer recommendations for practical text-classification work.
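As an illustration of one of the surveyed building blocks, here is a minimal fuzzy C-means (FCM) sketch; the semi-supervised ssFCM variant would additionally clamp the memberships of labeled samples, which is omitted here, and all parameter choices are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal FCM: returns membership matrix U (n x c) and cluster centers.

    m > 1 is the fuzzifier; each row of U sums to 1 and gives the sample's
    soft membership in each of the c clusters.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # random valid memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = (1.0 / d) ** (2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)         # standard FCM update
    return U, centers
```

In a text-classification setting, X would hold document feature vectors (e.g. tf-idf), and the soft memberships can be thresholded or combined with labeled seeds as in ssFCM.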

19.
20.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号