Similar Documents
19 similar documents found (search time: 234 ms)
1.
Objective: Video object segmentation aims to segment an object of interest throughout a video sequence, given the annotated object mask in the first frame. Because segmentation targets vary widely in scale, existing algorithms lack an effective strategy for fusing feature information across scales. This paper therefore proposes a feature attention pyramid modulation network module for video object segmentation. Method: A visual modulator network and a spatial modulator network first learn the visual and spatial information of the target object, which serve as priors that guide the segmentation model to adapt to the specific object's appearance. A feature attention pyramid module then mines global contextual information to handle the multi-scale nature of segmentation targets. Results: On the DAVIS 2016 dataset, without online fine-tuning, the method is competitive with state-of-the-art methods that use online fine-tuning, reaching a J-mean of 78.7%. With online fine-tuning, it achieves the best results on the DAVIS 2017 dataset, with a J-mean of 68.8%. Conclusion: While segmenting the object of interest, the feature attention pyramid modulation network effectively combines contextual information for object masks at different scales, reduces the loss of detail, and achieves high-quality video object segmentation.
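As a rough, hypothetical illustration of the modulation idea (not the paper's architecture), the PyTorch sketch below applies FiLM-style channel-wise scaling from an appearance embedding plus a spatial location prior; all module names, shapes, and the blending form are assumptions:

```python
import torch.nn as nn

class ModulatedSegHead(nn.Module):
    """Sketch: adapt generic features to one specific object via modulation."""
    def __init__(self, channels: int = 256, embed_dim: int = 512):
        super().__init__()
        # Visual modulator: maps an object-appearance embedding to
        # per-channel scale factors (FiLM-style).
        self.visual_mod = nn.Linear(embed_dim, channels)

    def forward(self, feats, obj_embed, spatial_prior):
        # feats: (B, C, H, W) backbone features
        # obj_embed: (B, E) embedding of the first-frame object crop
        # spatial_prior: (B, 1, H, W) map from a spatial modulator
        gamma = self.visual_mod(obj_embed).unsqueeze(-1).unsqueeze(-1)
        return feats * gamma + spatial_prior  # scale by appearance, shift by location
```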

2.
A fast automatic generation algorithm for multi-camera field-of-view boundary lines
Field-of-view (FOV) boundary lines are an effective tool for solving the target handoff problem in multi-camera human tracking. This paper proposes a fast automatic algorithm that generates camera FOV boundary lines from synchronized video and accomplishes target handoff between cameras using the boundary lines and the distance from each target's center to them. The algorithm depends on neither camera calibration information nor target color information. To validate it, an indoor multi-camera human tracking system with overlapping views was designed and built. Experimental results show that the algorithm is easy to implement, runs in real time, and achieves high accuracy.
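A minimal sketch of the handoff test such boundary lines enable, assuming each line is stored as coefficients (a, b, c) of ax + by + c = 0 in image coordinates; the margin value is an assumption:

```python
import numpy as np

def signed_distance(line, point):
    """Signed distance from a point to the line ax + by + c = 0."""
    a, b, c = line
    x, y = point
    return (a * x + b * y + c) / np.hypot(a, b)

def should_hand_off(line, target_center, margin: float = 5.0) -> bool:
    # Hand the target to the neighboring camera once its center has
    # crossed the FOV boundary line by more than `margin` pixels.
    return signed_distance(line, target_center) < -margin
```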

3.
Objective: For visual object tracking (VOT) and video object segmentation (VOS), researchers have proposed several multi-task frameworks, but their accuracy and robustness are limited. To address this, this paper proposes a real-time, end-to-end multi-task framework for visual object tracking and video object segmentation that fuses multi-scale contextual information with inter-frame information. Method: The architecture uses an atrous spatial pyramid pooling module built from depthwise separable dilated convolutions, giving richer multi-scale coverage, together with an inter-frame mask propagation module, which makes the network better at segmenting objects across scales while improving robustness. Results: On the VOT-2016 and VOT-2018 visual tracking datasets, the method reaches expected average overlap (EAO) scores of 0.462 and 0.408, exceeding SiamMask by 0.029 and 0.028 respectively — state-of-the-art results with better robustness. It is also competitive on the DAVIS (densely annotated video segmentation) 2016 and 2017 video object segmentation datasets. On the multi-object DAVIS-2017 dataset it outperforms SiamMask: the mean Jaccard index of region similarity (J_M) and the mean F-measure of contour accuracy (F_M) reach 56.0 and 59.0, and the region and contour decay values (J_D and F_D) are both lower than SiamMask's, at 17.9 and 19.8. The method runs at 45 frames/s, i.e., in real time. Conclusion: The proposed end-to-end multi-task framework fully captures multi-scale contextual information and exploits inter-frame information, yielding stronger multi-scale segmentation ability together with better robustness.
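To make the separable-atrous idea concrete, here is a minimal PyTorch sketch of an ASPP-style block built from depthwise separable dilated convolutions; the channel counts and dilation rates are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class SeparableAtrousBranch(nn.Module):
    """One branch: depthwise dilated 3x3 conv followed by a pointwise conv."""
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

class SeparableASPP(nn.Module):
    """Concatenate branches at several dilation rates, then project back."""
    def __init__(self, in_ch=256, out_ch=256, rates=(2, 4, 8, 16)):
        super().__init__()
        self.branches = nn.ModuleList(
            SeparableAtrousBranch(in_ch, out_ch, r) for r in rates)
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```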

4.
Web image annotation based on enhanced-sparsity feature selection
史彩娟  阮秋琦 《软件学报》2015,26(7):1800-1811
With the explosive growth of web images, web image annotation has become a popular research topic in recent years, and sparse feature selection plays an important role in improving both the efficiency and the performance of annotation. This paper proposes an enhanced-sparsity feature selection algorithm for web image annotation: semi-supervised sparse feature selection based on the l2,1/2-matrix norm with shared subspace learning (SFSLS). SFSLS applies the l2,1/2-matrix norm to select the sparsest and most discriminative features, and captures correlations among different features through shared subspace learning. In addition, graph-Laplacian-based semi-supervised learning lets SFSLS exploit both labeled and unlabeled data. An efficient iterative algorithm is designed to optimize the objective function. SFSLS is compared with other sparse feature selection algorithms on two large-scale web image databases, and the results show that it is better suited to large-scale web image annotation.
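For concreteness, the l2,1/2 matrix norm commonly used in this line of work is ||W||_{2,1/2} = (Σ_i ||w_i||_2^{1/2})², where the w_i are rows of the feature-weight matrix; a small NumPy sketch (the ranking helper is an illustrative assumption, not part of SFSLS):

```python
import numpy as np

def l2_half_matrix_norm(W: np.ndarray) -> float:
    """||W||_{2,1/2} = (sum_i sqrt(||w_i||_2))^2 over rows w_i.
    Penalizing this drives whole rows of W to zero, i.e. feature sparsity."""
    row_norms = np.linalg.norm(W, axis=1)
    return float(np.sqrt(row_norms).sum() ** 2)

def top_k_features(W: np.ndarray, k: int) -> np.ndarray:
    # Features whose weight rows keep the largest l2 norms are selected.
    return np.argsort(-np.linalg.norm(W, axis=1))[:k]
```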

5.
沈项军  常青  姚银  查正军 《软件学报》2015,26(S2):218-227
Routing queries for resource location in unstructured peer-to-peer networks are a central difficulty in P2P research, especially when frequent joining and leaving of client nodes keeps the network structure changing. This paper proposes a new congestion-control-based routing query method for resource location in such dynamic networks. The method has two parts. The first is a resource-grouping and node-reconnection strategy: nodes holding the same kind of resource connect to one another, and each node periodically adjusts its number of connections to reduce the load on nodes in the same resource group. This strategy evolves the topology automatically from a random network into one clustered by resource group, balancing query load across groups. The second part balances routing load among the nodes within a group through cooperative learning. Using cooperative Q-learning, the method learns not only each node's processing capacity, connection count, and number of resources, but also builds the node's congestion state into the model as a key parameter. With this technique, resource queries within a group are deliberately steered away from congested nodes, achieving query balance among group members. Simulation results show that, compared with the common random-walk lookup, the proposed method locates resources faster, and that it is more robust and adaptive under high query intensity and dynamic node arrivals and departures.
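A minimal sketch of the organizing idea — Q-learning over next-hop choices with a congestion penalty folded into the reward. The class, reward shape, and parameter values are assumptions, not the paper's exact model:

```python
import random

class QueryRouter:
    """Pick the next-hop neighbor for a resource query with Q-learning,
    penalizing hops that route into congested nodes."""
    def __init__(self, neighbors, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = {n: 0.0 for n in neighbors}
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def choose(self):
        if random.random() < self.eps:          # explore
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)      # exploit

    def update(self, neighbor, hit, congestion, best_next_q):
        # Reward a resource hit; subtract the neighbor's congestion level.
        reward = (1.0 if hit else -0.1) - congestion
        target = reward + self.gamma * best_next_q
        self.q[neighbor] += self.alpha * (target - self.q[neighbor])
```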

6.
《电子技术应用》2017,(1):145-147
Current active-vision PTZ camera control tracks poorly: it cannot follow a moving target continuously in real time, and its accuracy is low. To address these shortcomings, this paper proposes a pan-tilt camera control method based on the kernelized correlation filter (KCF) visual tracking algorithm. The overall architecture of the PTZ camera system is designed first. Visual tracking uses the kernelized correlation filter, which is highly efficient and ranks among the most accurate trackers in the field. Based on the tracking result, the PTZ camera is controlled through the PELCO-D protocol so that the target always stays within the video frame. A host program controlling the PTZ camera with the KCF algorithm was implemented in C++, and experiments verified the accuracy, applicability, and stability of this PTZ control method.
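A sketch of the control loop implied above, assuming the commonly published PELCO-D framing (0xFF sync, address, two command bytes, pan/tilt speeds, modulo-256 checksum); the dead-zone controller and speed values are assumptions:

```python
def pelco_d_frame(address: int, cmd2: int, pan_speed: int, tilt_speed: int) -> bytes:
    """Build a 7-byte PELCO-D message: sync, addr, cmd1, cmd2, pan, tilt, checksum."""
    body = [address, 0x00, cmd2, pan_speed, tilt_speed]
    checksum = sum(body) % 256
    return bytes([0xFF] + body + [checksum])

def track_step(target_x, target_y, frame_w, frame_h, dead_zone=30, speed=0x20):
    """Steer the camera so the KCF-reported target center drifts to frame center."""
    dx = target_x - frame_w // 2
    dy = target_y - frame_h // 2
    cmd2 = 0x00
    if dx > dead_zone:
        cmd2 |= 0x02            # pan right
    elif dx < -dead_zone:
        cmd2 |= 0x04            # pan left
    if dy < -dead_zone:
        cmd2 |= 0x08            # tilt up (target above image center)
    elif dy > dead_zone:
        cmd2 |= 0x10            # tilt down
    return pelco_d_frame(0x01, cmd2, speed, speed)  # send over RS-485 serial
```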

7.
Objective: Human action recognition has extremely broad application prospects in video surveillance, ambient assisted living, human-computer interaction, and intelligent driving. Occlusion of target objects, background shadows, illumination changes, viewpoint changes, multi-scale variation, and changes in clothing and appearance make video processing and analysis very difficult. This paper therefore builds a tensor-based linear dynamical model from forward and time-reversed sequences, estimates the model's parameters as action-sequence descriptors, and constructs a more complete observation matrix. Method: Human joints are first extracted from depth images to build forward and reversed skeleton sequences in tensor form. A tensor-based linear dynamical system and Tucker decomposition then learn a parameter tuple (A_F, A_I, C), where C encodes the spatial information of the skeleton and A_F and A_I describe the dynamics of the forward and reversed time series, respectively. An observation matrix is constructed from the tuple, so each action is represented as a subspace of the observation matrix, corresponding to a point on a Grassmann manifold. Finally, dictionary learning and sparse coding on the Grassmann manifold complete the recognition. Results: On the MSR-Action 3D dataset, the algorithm outperforms Eigenjoints by 13.55%, the local tangent bundle SVM (LTBSVM) by 2.79%, and the tensor-based linear dynamical system (tLDS) by 1%. On the UT-Kinect dataset, its recognition rate exceeds LTBSVM by 5.8% and tLDS by 1.3%. Conclusion: Extensive evaluation verifies that the tLDS model built from forward and reversed time series handles the problems above well and raises human action recognition accuracy.

8.
Community detection is important, both theoretically and practically, for complex network research. Drawing on distributed word-vector theory, this paper proposes a community detection method based on node vector representations (CDNEV). To build distributed vectors for network nodes, a heuristic random-walk model is proposed: node sequences obtained by heuristic random walks serve as context, and the SkipGram model learns a distributed vector for each node. Local degree-central nodes are chosen as initial cluster centers for the K-Means algorithm, and K-Means clustering then yields the community structure. Extensive experiments on both real and synthetic networks compare CDNEV with mainstream global and local community detection algorithms: on real networks its F1 score is 19% higher than the other algorithms on average, and on synthetic networks 15% higher. The results show that CDNEV achieves higher accuracy and efficiency than the alternatives.
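A DeepWalk-style sketch of the pipeline described above, with plain uniform walks standing in for CDNEV's heuristic walks and default K-Means initialization standing in for degree-central seeding; it assumes networkx, gensim, and scikit-learn are available:

```python
import random
import networkx as nx
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

def random_walks(G, walks_per_node=10, walk_len=20):
    """Uniform random walks; CDNEV's walks are heuristic, this is simplified."""
    walks = []
    for _ in range(walks_per_node):
        for node in G.nodes():
            walk = [node]
            while len(walk) < walk_len:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(random.choice(nbrs))
            walks.append([str(n) for n in walk])
    return walks

G = nx.karate_club_graph()
# SkipGram (sg=1) over walk sequences, treating nodes as "words".
model = Word2Vec(random_walks(G), vector_size=64, window=5, sg=1, min_count=1)
X = [model.wv[str(n)] for n in G.nodes()]
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # community assignments
```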

9.
Session-based recommendation algorithms typically build a static model of a single user preference and so cannot capture the preference fluctuations users exhibit under environmental influence, which lowers recommendation accuracy. This paper proposes a session-based recommendation method that fuses dual-branch dynamic preferences. First, heterogeneous hypergraphs model the different types of information; a dual-branch aggregation mechanism gathers and integrates the information in the hypergraphs and learns the relations among multi-type nodes, and a price-embedding enhancer strengthens the relation between category and price. Second, a two-level preference encoder is designed: a multi-scale temporal Transformer extracts the user's dynamic price preference, while a soft attention mechanism with reverse position encoding learns the user's dynamic interest preference. Finally, a gating mechanism fuses the user's multi-type dynamic preferences to produce recommendations. Experiments on the Cosmetics and Diginetica-buy datasets show significant improvements over competing algorithms on the Precision and MRR metrics.

10.
李国瑞 《软件学报》2014,25(S1):139-148
For wireless sensor network scenarios with a clustered structure or multiple sink nodes, this paper proposes a distributed data reconstruction method based on Top-|K| queries. The method has two parts: a distributed iterative hard thresholding algorithm and a dual-threshold distributed Top-|K| query algorithm. Manager nodes and member nodes jointly run the distributed iterative hard thresholding algorithm, realizing the iterative hard thresholding computation in a distributed way. At the same time, they run the dual-threshold distributed Top-|K| query algorithm to implement, also distributively, the former algorithm's operation of querying the K elements with the largest absolute values. Experimental results show that the method's reconstruction performance does not differ noticeably from existing methods, while it effectively reduces the number of interactions between manager and member nodes and lowers the volume of data transmitted over the network.
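A centralized sketch of iterative hard thresholding, whose per-iteration Top-|K| selection is exactly the operation the paper distributes between manager and member nodes; the matrix names and step-size choice are assumptions:

```python
import numpy as np

def iht(A, y, k, iters=100, step=None):
    """Iterative hard thresholding: recover a k-sparse x from y ≈ A @ x.
    Each iteration keeps only the K entries of largest absolute value —
    the Top-|K| operation that the distributed query implements."""
    m, n = A.shape
    step = step or 1.0 / np.linalg.norm(A, 2) ** 2   # <= 1/||A||^2 for stability
    x = np.zeros(n)
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)    # gradient step on ||y - Ax||^2
        idx = np.argsort(np.abs(x))[:-k]    # everything except the Top-|K|
        x[idx] = 0.0                        # hard threshold
    return x
```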

11.
This paper deals with a new approach based on Q-learning for solving the problem of mobile robot path planning in complex unknown static environments. As a computational approach to learning through interaction with the environment, reinforcement learning algorithms have been widely used for intelligent robot control, especially in the field of autonomous mobile robots. However, the learning process is slow and cumbersome, while practical applications require rapid rates of convergence. Aiming at the slow convergence and long learning time of Q-learning based mobile robot path planning, a state-chain sequential feedback Q-learning algorithm is proposed for quickly searching for the optimal path of mobile robots in complex unknown static environments. The state chain is built during the search process. After one action is chosen and the reward is received, the Q-values of the state-action pairs on the previously built state chain are sequentially updated with one-step Q-learning. As the number of Q-values updated after one action increases, the number of actual steps needed for convergence decreases and thus the learning time decreases, where a step is a state transition. Extensive simulations validate the efficiency of the newly proposed approach for mobile robot path planning in complex environments. The results show that the new approach has a high convergence speed and that the robot can find the collision-free optimal path in complex unknown static environments in much shorter time, compared with the one-step Q-learning algorithm and the Q(λ)-learning algorithm.
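A minimal sketch of the state-chain idea (not the authors' exact algorithm): after each real step, the one-step Q-update is replayed over every (state, action, reward, next-state) tuple stored on the chain, so a single action refreshes many Q-values:

```python
from collections import defaultdict

Q = defaultdict(float)          # Q[(state, action)] -> value
ALPHA, GAMMA = 0.1, 0.95

def state_chain_update(chain, actions):
    """`chain` holds (state, action, reward, next_state) tuples for the
    states visited so far, newest last. Sweep it backward, re-applying the
    one-step Q-update to every stored pair; one real action therefore
    updates the whole chain, which is what speeds up convergence."""
    for s, a, r, s_next in reversed(chain):
        best_next = max(Q[(s_next, a2)] for a2 in actions)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
```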

12.
Technical Note: Q-Learning
Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed each iteration, rather than just one.
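For reference, the rule the note analyzes is the tabular update Q(s,a) ← Q(s,a) + α[r + γ max_{a'} Q(s',a') − Q(s,a)]. A minimal sketch, assuming a hypothetical environment object exposing reset(), step(), and an action list:

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular one-step Q-learning with epsilon-greedy exploration.
    `env` is assumed to provide: reset() -> state,
    step(a) -> (next_state, reward, done), and a list env.actions."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = (random.choice(env.actions) if random.random() < eps
                 else max(env.actions, key=lambda a2: Q[(s, a2)]))
            s2, r, done = env.step(a)
            best_next = 0.0 if done else max(Q[(s2, a2)] for a2 in env.actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```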

13.
This paper proposes a new approach for solving the problem of obstacle avoidance during manipulation tasks performed by redundant manipulators. The developed solution is based on a double neural network that uses the Q-learning reinforcement technique. Q-learning has been applied in robotics for attaining obstacle-free navigation and for solving path planning problems. Most studies solve inverse kinematics and obstacle avoidance using variations of the classical Jacobian matrix approach, or by minimizing redundancy resolution of manipulators operating in known environments. Researchers who have tried to use neural networks for solving inverse kinematics often dealt with only one obstacle present in the working field. This paper focuses on computing inverse kinematics and obstacle avoidance for complex unknown environments with multiple obstacles in the working field. Q-learning is used together with neural networks in order to plan and execute arm movements at each time instant. The algorithm, developed for general redundant kinematic link chains, has been tested on the particular case of the PowerCube manipulator. Before implementing the solution on the real robot, the simulation was integrated in an immersive virtual environment for better movement analysis and safer testing. The study results show that the proposed approach has a good average speed and a satisfactory target-reaching success rate.

14.
In this article, an iterative procedure is proposed for the training process of the probabilistic neural network (PNN). In each stage of this procedure, the Q(0)-learning algorithm is utilized for the adaptation of the PNN smoothing parameter (σ). Four classes of PNN models are regarded in this study. In the case of the first, simplest model, the smoothing parameter takes the form of a scalar; for the second model, σ is a vector whose elements are computed with respect to the class index; the third considered model has a smoothing parameter vector whose components are determined per input attribute; finally, the last and most complex of the analyzed networks uses a matrix of smoothing parameters in which each element depends on both the class and the input feature index. The main idea of the presented approach is the appropriate update of the smoothing parameter values according to the Q(0)-learning algorithm. The proposed procedure is verified on six repository data sets. The prediction ability of the algorithm is assessed by computing the test accuracy on 10%, 20%, 30%, and 40% of examples drawn randomly from each input data set. The results are compared with the test accuracy obtained by a PNN trained using the conjugate gradient procedure, a support vector machine, a gene expression programming classifier, the k-Means method, a multilayer perceptron, a radial basis function neural network, and a learning vector quantization neural network. It is shown that the presented procedure can be applied to the automatic adaptation of the smoothing parameter of each of the considered PNN models and that it is an alternative training method. A PNN trained by the Q(0)-learning based approach constitutes a classifier that can be treated as one of the top models in data classification problems.
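A minimal sketch of the simplest PNN model class considered (a single scalar smoothing parameter σ); the Q(0)-learning adaptation itself is not shown here, but σ is the quantity it would tune:

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Minimal probabilistic neural network: one Gaussian kernel per
    training pattern, averaged per class; the class with the largest
    summed kernel response wins. `sigma` is the smoothing parameter."""
    scores = {}
    for cls in np.unique(y_train):
        diffs = X_train[y_train == cls] - x
        d2 = np.sum(diffs ** 2, axis=1)
        scores[cls] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)
```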

15.
When a reinforcement learning agent executes actions that cause frequent damage to itself, it can learn, by using Q-learning, that these actions must not be executed again. However, other actions cause damage not frequently but only once in a while — risky actions such as parachuting. These actions may bring punishment to the agent and, depending on its personality, may be better avoided. Nevertheless, with the standard Q-learning algorithm the agent is not able to learn to avoid them, because the result of these actions can be positive on average. In this article, an additional mechanism of Q-learning, inspired by the emotion of fear, is introduced in order to deal with such risky actions by taking the worst results into account. Moreover, a daring factor adjusts how much weight the risk receives. This mechanism is implemented on an autonomous agent living in a virtual environment. The results present the performance of the agent with different daring degrees.
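A hypothetical sketch of such a fear mechanism — the variable names and the blending rule are assumptions, not the article's exact formulation: track the worst reward seen for each action and let a daring factor weight it against the ordinary Q-value:

```python
from collections import defaultdict

Q = defaultdict(float)          # standard action-value estimate
W = defaultdict(lambda: 0.0)    # worst reward ever observed per (s, a)
ALPHA, GAMMA = 0.1, 0.95

def fearful_update(s, a, r, s_next, actions):
    """Ordinary one-step Q-update, plus a running record of the worst
    outcome each action has ever produced in each state."""
    W[(s, a)] = min(W[(s, a)], r)
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def action_value(s, a, daring=0.5):
    # daring -> 1: act on the average (bold); daring -> 0: act on the
    # worst case (fearful), avoiding rare-disaster actions even when
    # their average return is positive.
    return daring * Q[(s, a)] + (1.0 - daring) * W[(s, a)]
```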

16.
With recent Industry 4.0 developments, companies tend to automate their industries, and warehousing companies take part in this trend. A shuttle-based storage and retrieval system (SBS/RS) is an automated storage and retrieval system technology experiencing drastic recent market growth. This technology is mostly utilized in large distribution centers processing mini-loads. With the recent increase in e-commerce, demand for fast delivery of low-volume orders has grown. SBS/RS provides ultrahigh-speed load handling because of the large number of shuttles in the system. However, meeting fast-handling targets depends not only on the physical design of an automated warehousing technology but also on the design of its operational policies. In this work, in an effort to increase the performance of an SBS/RS, we apply a machine learning (ML) (i.e., Q-learning) approach to a newly proposed tier-to-tier SBS/RS design, redesigned from a traditional tier-captive SBS/RS. The novelty of this paper is twofold: first, we propose a novel SBS/RS design in which shuttles can travel between tiers; second, because of the complexity of shuttle operation in that design, we implement an ML-based algorithm for transaction selection in the system. The ML-based solution is compared with traditional scheduling approaches: the first-in-first-out and shortest process time (i.e., travel) scheduling rules. The results indicate that in most cases the Q-learning approach performs better than the two static scheduling approaches.

17.
Aiming to fulfill wide-area video surveillance, this paper presents a cooperative multi-camera target tracking method for wireless camera sensor networks. In the proposed method, target detection is carried out by single-node processing based on background subtraction, whereas target tracking is performed by sensor node cooperation based on automatic node selection. The main contributions of the proposed method are summarized as follows. First, each camera node uses an adaptive Gaussian mixture model to extract moving targets and an unscented Kalman filter to perform target tracking. Second, the correspondence between targets in different camera views is established by homography transformation of target positions. Third, a confidence measure based on the size of the detected target blob and the estimation uncertainty of tracking is defined to achieve optimal node selection. Experimental results show that the proposed method can effectively select camera nodes to implement accurate tracking in real scenes.
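A sketch of the homography step, assuming OpenCV: positions from one camera view are projected into another with cv2.perspectiveTransform, and detections that land near each other are associated as the same target. The matrix below is made up; in practice H would be estimated from point correspondences (e.g. with cv2.findHomography):

```python
import numpy as np
import cv2

# Hypothetical homography mapping ground-plane points seen in camera A's
# image to camera B's image.
H = np.array([[0.9,   0.05, 12.0],
              [-0.03, 1.1,  -8.0],
              [1e-4,  2e-4,  1.0]])

def to_other_view(points_a: np.ndarray) -> np.ndarray:
    """Project target positions from camera A's view into camera B's view.
    A detection in B lying near a projected point is treated as the same
    physical target, which establishes the cross-view correspondence."""
    pts = points_a.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

print(to_other_view(np.array([[320.0, 400.0]])))
```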

18.
Hao Xu  S. Jagannathan  F.L. Lewis 《Automatica》2012,48(6):1017-1030
In this paper, the stochastic optimal control of a linear networked control system (NCS) with uncertain system dynamics, in the presence of network imperfections such as random delays and packet losses, is derived. The proposed stochastic optimal control method uses an adaptive estimator (AE) and ideas from Q-learning to solve the infinite-horizon optimal regulation of an unknown NCS with time-varying system matrices. Next, a stochastic suboptimal control scheme using the AE and Q-learning is introduced for the regulation of an unknown linear time-invariant NCS, derived using the certainty equivalence property. Update laws for online tuning of the unknown parameters of the AE to obtain the Q-function are derived. Lyapunov theory is used to show that all signals are asymptotically stable (AS) and that the estimated control signals converge to the optimal or suboptimal control inputs. Simulation results are included to show the effectiveness of the proposed schemes. The result is an optimal control scheme that operates in a forward-in-time manner for unknown linear systems, in contrast with standard Riccati-equation-based schemes, which function backward in time.

19.
Reinforcement learning (RL) has received attention in recent years from agent-based researchers because it deals with the problem of how an autonomous agent can learn to select proper actions for achieving its goals through interacting with its environment. Although there have been several successful examples demonstrating the usefulness of RL, its application to manufacturing systems has not been fully explored yet. In this paper, Q-learning, a popular RL algorithm, is applied to a dispatching rule selection problem on a single machine, to determine whether it can enable a single-machine agent to learn commonly accepted dispatching rules for three example cases in which the best dispatching rules have been previously defined. The study provides encouraging results that show the potential of RL for agent-based production scheduling.
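A minimal sketch of the setup described: the agent's actions are dispatching rules rather than individual jobs, and a one-step Q-update scores each rule per system state. The state features, rule set, and reward are assumptions, not the paper's:

```python
import random
from collections import defaultdict

RULES = ["FIFO", "SPT", "EDD"]   # dispatching rules as the agent's actions
Q = defaultdict(float)           # Q[(system_state, rule)]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def pick_rule(state):
    """Epsilon-greedy choice of which dispatching rule to apply now."""
    if random.random() < EPS:
        return random.choice(RULES)
    return max(RULES, key=lambda r: Q[(state, r)])

def learn(state, rule, reward, next_state):
    # Reward could be, e.g., negative mean tardiness over the last
    # scheduling interval (an illustrative choice).
    best = max(Q[(next_state, r)] for r in RULES)
    Q[(state, rule)] += ALPHA * (reward + GAMMA * best - Q[(state, rule)])
```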
