Similar Documents
20 similar documents found
1.
刘燕  董蓉  李勃 《电视技术》2017,(11):32-39
Image segmentation is an important part of computer vision research. Its main purpose is to separate the object of interest from the background in an image, which determines the accuracy of subsequent operations such as object recognition and image understanding. After decades of development, many excellent image segmentation methods have been proposed. Machine learning is a current research hotspot, and image segmentation based on machine learning methods such as deep convolutional neural networks is advancing rapidly. This paper summarizes several typical machine learning methods applied to image segmentation, analyzes and compares their segmentation principles and steps, their advantages and disadvantages, and the current state of development, and finally discusses future directions for machine-learning-based image segmentation algorithms.

2.
A major achievement of game theory is the evolutionarily stable strategy (ESS), proposed by Maynard Smith in 1982. A coevolutionary algorithm based on a game model, built on a coarse-grained parallel model, is used to search for an ESS as the solution of multi-objective optimization problems (MOPs). First, the effectiveness of the game-model-based coevolutionary approach for solving MOPs is studied, showing how the evolutionary game is realized by the coevolutionary algorithm and verifying whether it can reach the optimal equilibrium point of the MOP. Second, through rigorous experiments on several multi-objective problems, the performance of the method is evaluated and compared with several other approaches.

3.
The majorize-minimize (MM) optimization technique has received considerable attention in signal and image processing applications, as well as in the statistics literature. At each iteration of an MM algorithm, one constructs a tangent majorant function that majorizes the given cost function and equals it at the current iterate. The next iterate is obtained by minimizing this tangent majorant function, yielding a sequence of iterates that reduces the cost function monotonically. A well-known special case of MM methods is the expectation-maximization (EM) algorithm. In this paper, we expand on previous analyses of MM, due to Fessler and Hero, that allowed the tangent majorants to be constructed in iteration-dependent ways; this paper also corrects an error in one of those earlier analyses. Our analysis builds upon previous work in three main ways. First, our treatment relaxes many assumptions related to the structure of the cost function, feasible set, and tangent majorants; for example, the cost function can be nonconvex and the feasible set can be any convex set. Second, we propose convergence conditions, based on upper curvature bounds, that can be easier to verify than more standard continuity conditions; furthermore, these conditions allow considerable design freedom in the iteration-dependent behavior of the algorithm. Finally, we give an original characterization of the local region of convergence of MM algorithms based on connected (e.g., convex) tangent majorants; for such algorithms, cost function minimizers will locally attract the iterates over larger neighborhoods than is typically guaranteed with other methods. This expanded treatment widens the scope of MM algorithm designs that can be considered for signal and image processing applications, allows us to verify the convergent behavior of previously published algorithms, and gives a fuller overall understanding of how these algorithms behave.
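To make the MM iteration concrete, here is a minimal sketch: a quadratic tangent majorant built from an assumed curvature upper bound `curv_bound` is minimized at each step, giving a monotone decrease of the cost. The function names (`mm_minimize`, `f`, `grad_f`) and the toy example are illustrative, not from the paper.

```python
def mm_minimize(f, grad_f, x0, curv_bound, n_iter=50):
    """Generic MM loop with a quadratic tangent majorant.

    At iterate x_k, q(x) = f(x_k) + grad_f(x_k)*(x - x_k)
    + 0.5*curv_bound*(x - x_k)**2 majorizes f (assuming curv_bound
    upper-bounds the curvature of f on the region visited) and touches
    f at x_k; minimizing q yields the next iterate and f never increases.
    """
    x = float(x0)
    for _ in range(n_iter):
        x_next = x - grad_f(x) / curv_bound   # closed-form minimizer of q
        assert f(x_next) <= f(x) + 1e-12      # MM monotonicity check
        x = x_next
    return x

# Toy usage: f(x) = x^4/4 has f''(x) = 3x^2 <= 3 on [-1, 1], so starting
# at x0 = 1 the bound curv_bound = 3 holds for every iterate.
x_star = mm_minimize(lambda x: 0.25 * x**4, lambda x: x**3, x0=1.0, curv_bound=3.0)
```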

4.
Reproducible research in signal processing   [Cited: 2; self-citations: 0; other citations: 0]
What should we do to raise the quality of signal processing publications to an even higher level? We believe it is crucial to maintain precision in describing our work in publications, ensured through a high-quality reviewing process. We also believe that if the experiments are performed on a large data set, the algorithm is compared against state-of-the-art methods, and the code and/or data are well documented and available online, we will all benefit and find it easier to build upon each other's work. It is a clear win-win situation for our community: we will have access to more and more algorithms and can spend our time inventing new things rather than recreating existing ones.

5.
Liu Derong, Zhang Yi, Hu Sanqing 《Wireless Networks》2004,10(4):473-483
In this paper, we develop call admission control algorithms for SIR-based power-controlled DS-CDMA cellular networks. We consider networks that handle both voice and data services. When a new call (or a handoff call) arrives at a base station requesting admission, our algorithms calculate the desired power control setpoints for the new call and all existing calls. We provide necessary and sufficient conditions under which the power control algorithm has a feasible solution; these conditions are obtained by deriving the inverse of the matrix used in the calculation of the power control setpoints. If there is no feasible solution to power control, or if the desired power levels to be received at the base station for some calls exceed the maximum allowable power limits, the admission request is rejected; otherwise, it is granted. When higher priority is desired for handoff calls, we allow different thresholds (i.e., different maximum allowable power limits) for new calls and handoff calls, and we develop an adaptive algorithm that adjusts these thresholds in real time as the environment changes. The performance of our algorithms is shown through computer simulation and compared with existing algorithms.
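As a rough illustration of the setpoint computation and feasibility test, the sketch below uses the standard single-cell SIR-balancing closed form (a feasible power vector exists iff the sum of gamma_i/(1+gamma_i) is below one). It is a simplification in the spirit of the paper, not the authors' exact matrix derivation, and all names and numbers are hypothetical.

```python
def admit_call(sir_targets, noise_power, p_max):
    """Single-cell SIR-balancing admission test (simplified sketch).

    sir_targets: required SIR gamma_i for every call, including the new one.
    Returns (admit, setpoints): received-power targets at the base station,
    or None if power control has no feasible solution or some setpoint
    exceeds the per-call limit p_max.
    """
    a = [g / (1.0 + g) for g in sir_targets]
    load = sum(a)
    if load >= 1.0:                        # no feasible power vector exists
        return False, None
    setpoints = [noise_power * ai / (1.0 - load) for ai in a]
    if any(p > p_max for p in setpoints):  # would violate the power limit
        return False, None
    return True, setpoints

# Example: three existing voice calls plus one new data call (made-up values).
admit, p = admit_call(sir_targets=[0.02, 0.02, 0.02, 0.08],
                      noise_power=1e-13, p_max=1e-10)
```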

6.
A wrapper-based approach to image segmentation and classification.   [Cited: 1; self-citations: 0; other citations: 1]
The traditional processing flow of segmentation followed by classification in computer vision assumes that the segmentation is able to successfully extract the object of interest from the background image. It is extremely difficult to obtain a reliable segmentation without any prior knowledge about the object being extracted from the scene. This is further complicated by the lack of clearly defined metrics for evaluating the quality of a segmentation or for comparing segmentation algorithms. We propose a method of segmentation that addresses both of these issues by using the object classification subsystem as an integral part of the segmentation. This provides contextual information about the objects to be segmented and allows us to use the probability of correct classification as a metric for the quality of the segmentation. We view traditional segmentation as a filter operating on the image, independent of the classifier, much like the filter methods for feature selection. We propose a new paradigm for segmentation and classification that follows the wrapper methods of feature selection: our method wraps the segmentation and classification together and uses the classification accuracy as the metric to determine the best segmentation. By using shape as the classification feature, we are able to develop a segmentation algorithm that relaxes the requirement that the object of interest be homogeneous in some low-level image parameter, such as texture, color, or grayscale. This is an improvement over other segmentation methods that use classification information only to modify the segmenter parameters, since those algorithms still require an underlying homogeneity in some parameter space. Rather than considering our method as yet another segmentation algorithm, we propose that the wrapper method be regarded as an image segmentation framework within which existing image segmentation algorithms may be executed. We show the performance of our wrapper-based segmenter on real-world, complex images of automotive vehicle occupants for the purpose of recognizing infants on the passenger seat and disabling the vehicle airbag. This is an interesting application for testing the robustness of our approach because of the complexity of the images, and consequently we believe the algorithm will be suitable for many other real-world applications.
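A minimal sketch of the wrapper idea, assuming generic callables for the candidate segmenters, the shape-feature extractor, and the classifier (all names are placeholders): each candidate segmentation is scored by the classifier, and the one with the highest classification confidence is kept.

```python
def wrapper_segment(image, segmenters, classifier, shape_features):
    """Wrapper-style segmentation sketch: try each candidate segmentation
    and keep the one the classifier is most confident about.

    segmenters: iterable of callables image -> binary mask (any existing
    segmentation algorithm with fixed parameters).
    shape_features: callable mask -> feature vector describing object shape.
    classifier: callable feature vector -> probability of the target class.
    """
    best_mask, best_score = None, -1.0
    for segment in segmenters:
        mask = segment(image)
        score = classifier(shape_features(mask))  # classification score as quality metric
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask, best_score
```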

7.
Many algorithms have been proposed recently to reconstruct signals in the compressed sensing (CS) setting, yet reconstructing sparse signals accurately with fewer measurements and less time remains a problem. Interestingly, algorithms with poor overall performance do not fail completely: their support sets may include correct indices that better-performing algorithms fail to find. For this reason, fusion methods based on modified algorithms and a partial support set have been proposed; however, the reliability of that support set is the key to such methods, and both the modifications required for different algorithms and the reconstruction performance of the modified algorithms still need to be verified. In this paper, we propose a two-stage fusing method for compressed sensing algorithms. From existing algorithms, we choose one as the main algorithm and some others as prior algorithms, and run them in different stages. In the first stage we obtain a high-accuracy atomic set from the prior algorithms; in the second stage we use this atomic set as the partial support set and fuse it with the main algorithm adaptively to improve sparse signal reconstruction. The proposed method is suitable for most CS algorithms, even those built on different principles. According to the simulation results, the proposed method improves the performance of the participating algorithms and is superior to other fusing methods in both reconstruction accuracy and reconstruction time.
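The sketch below illustrates the two-stage idea with a generic warm-started orthogonal matching pursuit: a first stage supplies a trusted partial support (e.g., atoms on which several prior algorithms agree), and the main algorithm completes it. This is a hedged illustration of support fusion, not the paper's exact adaptive fusing rule; all names are placeholders.

```python
import numpy as np

def omp_with_prior_support(A, y, k, prior_support=()):
    """Orthogonal matching pursuit warm-started with a trusted partial support.

    A: (m, n) sensing matrix, y: (m,) measurements, k: target sparsity,
    prior_support: indices produced by the first (prior) stage.
    """
    support = list(dict.fromkeys(prior_support))   # deduplicate, keep order
    coef = np.zeros(0)
    residual = y.astype(float).copy()
    if support:                                    # fit the prior atoms first
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    while len(support) < k:                        # main stage completes the support
        corr = np.abs(A.T @ residual)
        corr[support] = -np.inf                    # never re-select chosen atoms
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# First-stage fusion (illustrative): trust only atoms on which two prior
# algorithms agree, then let the main algorithm finish the reconstruction:
#   prior = sorted(set(support_alg1) & set(support_alg2))
#   x_hat = omp_with_prior_support(A, y, k=10, prior_support=prior)
```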

8.
In WCDMA systems, the Iub interface is the interface within the UTRAN (UMTS radio access network) that connects the NodeB and the RNC, and it is the key interface carrying user data on the NodeB side. Building on an analysis of existing flow-control algorithms, this paper studies a back-pressure flow-control algorithm based on Iub buffer occupancy and a flow-control algorithm based on uplink load-factor correction. To overcome the fixed congestion threshold of back-pressure flow control, the two algorithms are combined to relieve congestion on the Iub interface and raise system throughput. Simulations of NodeB transport-layer and overall system throughput verify the effectiveness of the combined algorithm.

9.
Admission control for statistical QoS: theory and practice   [Cited: 3; self-citations: 0; other citations: 0]
In networks that support quality of service, an admission control algorithm determines whether or not a new traffic flow can be admitted to the network such that all users will receive their required performance. Such an algorithm is a key component of future multiservice networks because it determines the extent to which network resources are utilized and whether the promised QoS parameters are actually delivered. The goals in this article are threefold. First, we describe and classify a broad set of proposed admission control algorithms. Second, we evaluate the accuracy of these algorithms via experiments using both on-off sources and long traces of compressed video; we compare the admissible regions and QoS parameters predicted by our implementations of the algorithms with those obtained from trace-driven simulations. Finally, we identify the key aspects of an admission control algorithm necessary for achieving a high degree of accuracy and hence a high statistical multiplexing gain.
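One of the simplest classes of admission tests in this space is a measurement-based "measured sum" rule; the sketch below shows the flavor of such a rule. Parameter names and numbers are illustrative, and the algorithms surveyed in the article are considerably more refined.

```python
def admit_flow(measured_mean_rate, new_flow_peak_rate, link_capacity, util_target=0.9):
    """'Measured sum' style admission test (illustrative sketch): admit a new
    flow only if the measured aggregate load plus the new flow's peak rate
    stays below a utilization target.  All rates in the same units (e.g.,
    Mb/s); util_target trades loss risk against multiplexing gain.
    """
    return measured_mean_rate + new_flow_peak_rate <= util_target * link_capacity

# Example: 70 Mb/s measured load, 8 Mb/s peak-rate video flow, 100 Mb/s link.
ok = admit_flow(70.0, 8.0, 100.0)   # True: 78 <= 90
```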

10.
Accurate QRS detection is an important first step for the analysis of heart rate variability. Algorithms based on the differentiated ECG are computationally efficient and hence ideal for real-time analysis of large datasets. Here, we analyze the traditional first-derivative-based squaring function (Hamilton-Tompkins) and Hilbert transform-based methods for QRS detection and their modifications with improved detection thresholds. On a standard ECG dataset, the Hamilton-Tompkins algorithm had the highest detection accuracy (99.68% sensitivity, 99.63% positive predictivity) but also the largest time error. The modified Hamilton-Tompkins algorithm as well as the Hilbert transform-based algorithms had comparable, though slightly lower, accuracy; yet these automated algorithms present an advantage for real-time applications by avoiding human intervention in threshold determination. The high accuracy of the Hilbert transform-based method compared to detection with the second derivative of the ECG is ascribable to its inherently uniform magnitude spectrum. For all algorithms, detection errors occurred mainly in beats with decreased signal slope, such as wide arrhythmic beats or attenuated beats. For best performance, a combination of the squaring function and Hilbert transform-based algorithms can be applied such that differences in detection will point to abnormalities in the signal that can be further analyzed.
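A simplified sketch of a first-derivative/squaring QRS detector of the kind analyzed here: differentiate, square, integrate over a short window, and pick peaks above a threshold. The fixed threshold and window lengths are illustrative; the algorithms in the paper use adaptive thresholds and additional filtering.

```python
import numpy as np

def detect_qrs(ecg, fs, threshold_factor=0.4):
    """First-derivative/squaring QRS detector (simplified sketch; no band-pass
    stage and no adaptive thresholding).

    ecg: 1-D ECG samples, fs: sampling rate in Hz.
    Returns sample indices of detected beats.
    """
    diff = np.diff(ecg)                        # differentiate: emphasize slope
    energy = diff ** 2                         # squaring: rectify and amplify
    win = max(1, int(0.15 * fs))               # ~150 ms integration window
    feature = np.convolve(energy, np.ones(win) / win, mode="same")
    thresh = threshold_factor * feature.max()  # fixed threshold (papers adapt it)
    refractory = int(0.2 * fs)                 # ignore re-triggers within 200 ms
    peaks, last = [], -refractory
    for i in range(1, len(feature) - 1):
        is_peak = feature[i] >= feature[i - 1] and feature[i] > feature[i + 1]
        if is_peak and feature[i] >= thresh and i - last >= refractory:
            peaks.append(i)
            last = i
    return np.array(peaks)
```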

11.
An Efficient Logic Equivalence Checker for Industrial Circuits   [Cited: 4; self-citations: 0; other citations: 0]
We present our formal combinational logic equivalence checking methods for industry-sized circuits. Our methods employ functional (OBDD) algorithms for decisions on logic equivalence and structural (ATPG) algorithms to quickly identify inequivalence. The complementary strengths of the two types of algorithms result in a significant reduction in CPU time. Our methods also involve analytical and empirical heuristics whose impact on performance for industrial designs is considerable. The combination of OBDDs, ATPG, and our heuristics resulted in a decrease in CPU time of up to 80% over OBDDs alone for the circuits we tested. In addition, we describe an algorithm for automatically determining the correspondence between storage elements in the designs being compared.

12.
A pivotal component in automated external defibrillators (AEDs) is the detection of ventricular fibrillation (VF) by means of appropriate detection algorithms. In the scientific literature there exists a wide variety of methods and ideas for handling this task. These algorithms should have high detection quality, be easily implementable, and work in real time in an AED. Testing of these algorithms should be done using a large amount of annotated data under equal conditions. For our investigation we simulated a continuous analysis by selecting the data in steps of 1 s without any preselection. We used the complete MIT-BIH arrhythmia database, the CU database, and files 7001-8210 of the AHA database. For a new VF detection algorithm we calculated the sensitivity, specificity, and the area under its receiver operating characteristic curve and compared these values with the results from an earlier investigation of several VF detection algorithms. This new algorithm is based on time-delay methods and outperforms all other investigated algorithms.
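One common time-delay approach in the VF-detection literature embeds the ECG against a delayed copy of itself and measures how much of the reconstructed phase plane the trajectory fills; the sketch below computes that feature. It illustrates the general time-delay idea only and is not necessarily the exact algorithm proposed in the paper; the delay, grid size, and decision threshold are assumptions.

```python
import numpy as np

def phase_space_fill(ecg, fs, delay_s=0.5, grid=40):
    """Time-delay (phase-space) VF feature sketch: plot the ECG against a
    delayed copy of itself and measure the fraction of a grid x grid plane
    the trajectory visits.  Disorganized VF tends to fill far more cells
    than an organized sinus rhythm.
    """
    d = int(delay_s * fs)
    x, y = ecg[:-d], ecg[d:]

    def to_cells(v):
        # Normalize to [0, 1] and map onto integer grid coordinates.
        v = (v - v.min()) / (v.max() - v.min() + 1e-12)
        return np.minimum((v * grid).astype(int), grid - 1)

    visited = set(zip(to_cells(x).tolist(), to_cells(y).tolist()))
    return len(visited) / float(grid * grid)

# A segment might be flagged as VF when the fill fraction exceeds a
# pre-chosen threshold, e.g. phase_space_fill(segment, fs=250) > 0.15.
```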

13.
This paper studies three related algorithms: the (traditional) gradient descent (GD) algorithm, the exponentiated gradient algorithm with positive and negative weights (EG± algorithm), and the exponentiated gradient algorithm with unnormalized positive and negative weights (EGU± algorithm). These algorithms have previously been analyzed using the “mistake-bound framework” in the computational learning theory community. We perform a traditional signal processing analysis in terms of the mean square error. A relationship between the learning rate and the mean squared error (MSE) of predictions is found for the family of algorithms. This is used to compare the performance of the algorithms by choosing learning rates such that they converge to the same steady-state MSE. We demonstrate that if the target weight vector is sparse, the EG± algorithm typically converges more quickly than the GD or EGU± algorithms, which perform very similarly. A side effect of our analysis is a reparametrization of the algorithms that provides insights into their behavior. The general form of the results we obtain is consistent with those obtained in the mistake-bound framework. The application of the algorithms to acoustic echo cancellation is then studied, and it is shown that in some circumstances the EG± algorithm will converge faster than the other two algorithms.
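For reference, here is a sketch of the normalized EG± update (in the standard Kivinen-Warmuth form) used as an adaptive filter: positive and negative weight vectors are updated multiplicatively and renormalized to a fixed total mass U. Variable names and the default eta and U are illustrative, not values from the paper.

```python
import numpy as np

def eg_pm(x_seq, y_seq, eta=0.05, U=10.0):
    """Exponentiated-gradient adaptive filter with positive and negative
    weights (EG+-), normalized so that sum(w_plus + w_minus) = U.
    Sketch of the standard multiplicative update for squared error; the
    paper's MSE analysis concerns how eta trades convergence speed against
    steady-state error.
    """
    n = x_seq.shape[1]
    w_plus = np.full(n, U / (2 * n))
    w_minus = np.full(n, U / (2 * n))
    errors = []
    for x, y in zip(x_seq, y_seq):
        y_hat = (w_plus - w_minus) @ x          # effective weights w = w+ - w-
        e = y_hat - y
        errors.append(e * e)
        r_plus = np.exp(-2 * eta * e * x)       # multiplicative update factors
        r_minus = np.exp(2 * eta * e * x)
        z = (w_plus * r_plus + w_minus * r_minus).sum()
        w_plus = U * w_plus * r_plus / z        # renormalize to total mass U
        w_minus = U * w_minus * r_minus / z
    return w_plus - w_minus, np.array(errors)
```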

14.
In this paper, we propose the adaptive subcarriers-distribution routing and spectrum allocation (ASD-RSA) algorithm, the first elastic optical network routing and spectrum allocation algorithm based on distributed subcarriers. It allocates lightpaths to requests adaptively and is shown, through integer linear programming and dynamic simulation, to achieve much lower bandwidth blocking probability than traditional routing and spectrum allocation algorithms based on centralized subcarriers. Additionally, the ASD-RSA algorithm performs best with three alternate routing paths; this property greatly reduces the computation required for both alternate-path searching and spectrum allocation in large networks.

15.
A discussion of evaluation strategies for action recognition algorithms   [Cited: 4; self-citations: 4; other citations: 0]
Using spatio-temporal interest point features and a support vector machine (SVM) classifier as the baseline recognition algorithm, we examine from several angles how the evaluation strategy affects the measured performance of action recognition algorithms on the widely used public KTH action dataset. Experiments show that performance varies by as much as 10.5% under different cross-validation schemes, and by as much as 11.87% under different dataset partitioning methods. The conclusions drawn from this quantitative analysis make it possible to compare the true differences between existing algorithms fairly and provide a reference for designing reasonable evaluation strategies.

16.
This paper presents several methods for the design of finite impulse response linear-phase digital filters with finite-wordlength coefficients. An approximation problem with discrete variables is stated, for which we seek a solution in the Chebyshev sense. The optimal algorithm proposed combines the Remez algorithm with a branch and bound (BaB) technique. This kind of algorithm may sometimes require substantial computation time, so two local search algorithms are also considered. The application of these algorithms, with or without constraints, is illustrated by examples.
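The sketch below, using SciPy's remez and freqz, shows the overall flow: an infinite-precision Remez design is quantized to a finite wordlength and then improved by a simple ±1-LSB local search on the Chebyshev error. The local search stands in for the paper's branch-and-bound, which explores such rounding choices exhaustively with pruning; the band edges and wordlength are illustrative.

```python
import numpy as np
from scipy import signal

def quantize_fir(h, bits):
    """Round coefficients to a fixed-point grid with the given wordlength."""
    step = 2.0 ** -(bits - 1)
    return np.round(h / step) * step, step

def chebyshev_error(h, bands, desired, n_grid=512):
    """Peak magnitude error of the filter over the pass/stop bands."""
    w, H = signal.freqz(h, worN=n_grid, fs=1.0)
    err = 0.0
    for (lo, hi), d in zip(bands, desired):
        sel = (w >= lo) & (w <= hi)
        err = max(err, float(np.max(np.abs(np.abs(H[sel]) - d))))
    return err

# Infinite-precision starting point from the Remez exchange algorithm.
bands, desired = [(0.0, 0.2), (0.3, 0.5)], [1.0, 0.0]
h = signal.remez(31, [0.0, 0.2, 0.3, 0.5], [1.0, 0.0], fs=1.0)
hq, step = quantize_fir(h, bits=8)

# Greedy local search: try +/-1 LSB moves on each tap, keep improvements.
best = chebyshev_error(hq, bands, desired)
improved = True
while improved:
    improved = False
    for i in range(len(hq)):
        for delta in (-step, step):
            trial = hq.copy()
            trial[i] += delta
            e = chebyshev_error(trial, bands, desired)
            if e < best - 1e-12:
                hq, best, improved = trial, e, True
```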

17.
Many methods have been presented for the testing and diagnosis of analog circuits, each with its advantages and disadvantages. In this paper we propose a novel sensitivity analysis algorithm for the classical parameter identification method and a continuous fault model for the modern test generation algorithm, and we compare the characteristics of these methods. At present, parameter identification based on the component connection model (CCM) cannot ensure that the diagnostic equation is optimal. The sensitivity analysis algorithm proposed in this paper can choose the optimal set of trees to construct an optimal CCM diagnostic equation and enhance the diagnostic precision. Increasing attention, however, is being paid to test generation algorithms. Most test generation algorithms use a single value in the fault model, but single values cannot represent the actual faults that may occur, because the possible faulty values vary over a continuous range. To solve this problem, this paper presents a continuous fault model for the test generation algorithm with a continuous range of parameters. The test generation algorithm with this model can improve the treatment of the tolerance problem, including the tolerances of both normal and faulty parameters, and enhance the fault coverage rate. The two methods can be applied in different situations.

18.
Linear least squares (LLS) estimation is a low-complexity but sub-optimal method for estimating the location of a mobile terminal (MT) from measured distances. It requires selecting one of the known fixed terminals (FTs) as a reference FT in order to obtain a linear set of equations. In this paper, the choice of the reference FT is investigated. By analyzing the objective function of the LLS algorithm, a new method for selecting the reference FT is proposed, which selects the reference FT based on the minimum residual (denoted MR-RS) rather than the smallest measured distance and significantly improves localization accuracy in line-of-sight (LOS) environments. In non-line-of-sight (NLOS) environments, we combine the MR-RS algorithm with two other existing algorithms (the residual weighting algorithm and the three-stage algorithm) to form new algorithms, which also improve localization accuracy compared with those two algorithms. Moreover, the time complexity of the proposed algorithms is analyzed. Simulation results show that the proposed methods are consistently better than the existing methods for arbitrary MT geometries and under both LOS and NLOS conditions.
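A simplified 2-D sketch of the reference-selection idea: form the linear system for every possible reference FT, solve by least squares, and keep the solution with the minimum residual (the MR-RS criterion) instead of always referencing the FT with the smallest measured distance. The anchor positions and ranges in the example are made up.

```python
import numpy as np

def lls_with_reference(anchors, dists, ref):
    """Linear least squares position estimate using anchor `ref` as reference.

    Subtracting the reference range equation from every other one removes
    the quadratic terms in the unknown position (x, y).
    """
    xr, yr = anchors[ref]
    dr = dists[ref]
    rows, b = [], []
    for i, ((xi, yi), di) in enumerate(zip(anchors, dists)):
        if i == ref:
            continue
        rows.append([2.0 * (xi - xr), 2.0 * (yi - yr)])
        b.append((xi**2 + yi**2 - di**2) - (xr**2 + yr**2 - dr**2))
    A, b = np.array(rows), np.array(b)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos, np.linalg.norm(A @ pos - b)      # estimate and its residual

def mr_rs_estimate(anchors, dists):
    """Keep the LLS solution whose reference choice gives the minimum residual."""
    best = min((lls_with_reference(anchors, dists, r) for r in range(len(anchors))),
               key=lambda pr: pr[1])
    return best[0]

# Example with four fixed terminals and noisy ranges to a terminal near (3, 4).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
dists = [5.1, 8.0, 6.8, 9.3]
est = mr_rs_estimate(anchors, dists)
```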

19.
Research on intelligent mapping algorithms for coarse-grained reconfigurable cryptographic logic arrays   [Cited: 1; self-citations: 0; other citations: 1]
To address the long mapping time and limited performance of cryptographic algorithm mapping onto coarse-grained reconfigurable cryptographic logic arrays, this paper builds a parameterized model of the array and, taking mapping time and implementation performance as objectives and exploiting the structural characteristics of the array, proposes a data-flow-graph partitioning algorithm for cryptographic algorithms. Nodes of the algorithm's data-flow graph are clustered, and the clusters serve as the minimum mapping granularity, which reduces mapping complexity. Drawing on machine learning, an intelligent ant-colony model with learning capability is constructed and an intelligent ant-colony optimization algorithm is proposed: by learning from the mappings of training samples, the initial pheromone concentration matrix is continuously optimized, accelerating the convergence of the mapping algorithm, so that known algorithm mappings guide the mapping of unknown algorithms and cryptographic algorithm mapping becomes intelligent. Experimental results show that the proposed method reduces compilation time by 37.9% on average while maximizing the performance of the mapped cryptographic algorithms. Moreover, taking the data-flow graph as the mapping input, the mapping flow is generated automatically, improving the intuitiveness and convenience of cryptographic algorithm mapping.

20.
Determining the adaptive threshold for blind digital watermark detection algorithms   [Cited: 3; self-citations: 0; other citations: 0]
The rapid development of digital media has made copyright protection a focus of attention, highlighting the academic value and application prospects of digital watermarking and information hiding. This paper proposes a novel blind watermark detection algorithm based on the discrete cosine transform (DCT) and derives the detection threshold theoretically using probability theory, making the detection result more objective. Experimental results show that the algorithm is effective and practical for image watermarking and exhibits strong robustness against the vast majority of attacks.
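As a rough sketch of how such a threshold can be derived, the code below implements a generic blind correlation detector in the DCT domain with a Neyman-Pearson-style threshold computed from an assumed Gaussian model of the unmarked coefficients. The coefficient selection and statistics are illustrative and not necessarily those of the paper.

```python
import numpy as np
from scipy.fftpack import dct
from scipy.stats import norm

def detect_watermark(image, watermark, p_false_alarm=1e-3):
    """Blind DCT-domain correlation detector with an adaptive threshold.

    watermark: pseudo-random +/-1 sequence, one value per selected DCT
    coefficient.  Under the no-watermark hypothesis the correlation is
    modeled as zero-mean Gaussian, so the threshold follows from the
    target false-alarm probability (generic sketch).
    """
    coeffs = dct(dct(image.astype(float), axis=0, norm="ortho"),
                 axis=1, norm="ortho").ravel()
    # Illustrative coefficient selection: skip the DC term and take as many
    # coefficients as the watermark is long (real schemes pick specific
    # mid-frequency positions).
    v = coeffs[1:1 + len(watermark)]
    corr = float(np.dot(v, watermark)) / len(watermark)
    sigma = np.sqrt(np.var(v) / len(watermark))     # std of corr under H0
    threshold = sigma * norm.isf(p_false_alarm)     # adaptive detection threshold
    return corr > threshold, corr, threshold
```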
