Similar Documents
20 similar documents found.
1.
The image foresting transform: theory, algorithms, and applications
The image foresting transform (IFT) is a graph-based approach to the design of image processing operators based on connectivity. It naturally leads to correct and efficient implementations and to a better understanding of how different operators relate to each other. We give here a precise definition of the IFT and a procedure to compute it (a generalization of Dijkstra's algorithm), with a proof of correctness. We also discuss implementation issues and illustrate the use of the IFT in a few applications.
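A minimal sketch of the Dijkstra-like procedure the abstract describes, assuming an additive path-cost function on a pixel adjacency graph; the function signature and data layout are illustrative, not the authors' implementation.

```python
import heapq

def image_foresting_transform(arc_weight, seeds, neighbors):
    """Compute an optimum-path forest from the seeds (illustrative sketch).

    arc_weight: function (p, q) -> nonnegative weight of arc p->q
    seeds:      iterable of seed pixels, given initial path cost 0
    neighbors:  function pixel -> iterable of adjacent pixels
    Returns (cost, pred): optimal path costs and predecessor map.
    """
    cost, pred, heap = {}, {}, []
    for s in seeds:
        cost[s], pred[s] = 0, None
        heapq.heappush(heap, (0, s))
    while heap:
        c, p = heapq.heappop(heap)
        if c > cost.get(p, float("inf")):
            continue  # stale heap entry
        for q in neighbors(p):
            # additive path cost; the IFT also admits other smooth cost functions
            cq = c + arc_weight(p, q)
            if cq < cost.get(q, float("inf")):
                cost[q], pred[q] = cq, p
                heapq.heappush(heap, (cq, q))
    return cost, pred
```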

2.
Numerous segmentation algorithms have been developed, many of them highly specific and applicable only to a reduced class of problems and image data. Without an additional source of knowledge, automatic image segmentation based on low-level image features seemed unlikely to succeed in extracting semantic objects from generic images. A new region-merging segmentation technique has recently been developed which incorporates the spectral and textural properties of the objects to be detected, as well as their different sizes and behaviour at different stages of scale. Linking this technique with the FAO Land Cover Land Use classification system resulted in an automated, standardized classification methodology. Testing on Landsat and Aster images produced mutually exclusive classes with clear and unambiguous class definitions. The error matrix based on field samples showed overall accuracy values of 92% for the Aster image and 89% for the Landsat image; the KIA values were 88% and 84%, respectively.
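For reference, the overall accuracy and KIA (kappa index of agreement) figures quoted above follow directly from an error matrix; the helper below is a minimal sketch of that computation, not code from the study.

```python
import numpy as np

def accuracy_and_kappa(error_matrix):
    """Overall accuracy and kappa (KIA) from an error (confusion) matrix,
    rows = mapped classes, columns = reference classes. Illustrative only."""
    m = np.asarray(error_matrix, dtype=float)
    n = m.sum()
    observed = np.trace(m) / n                                # overall accuracy
    expected = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2   # chance agreement
    return observed, (observed - expected) / (1.0 - expected)
```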

3.

Consensus mechanisms are a key component of blockchain technology, but mainstream mechanisms, especially proof-of-work, suffer from excessive computing-power consumption and low throughput. Federated learning, as a distributed machine-learning method, likewise consumes substantial computing resources for local model training and for the final computation of each participant's contribution. We therefore propose TFchain, a trusted and fair blockchain framework supporting adaptive federated-learning tasks, and explore how the large amount of computing power otherwise consumed by consensus can be used to improve the efficiency of federated learning. First, we design PoTF (proof of trust and fair), a new consensus mechanism built on blockchain and federated learning: blockchain nodes serve as federated-learning participants, and the large amount of computing power otherwise wasted on hash calculations is redirected to local model training and the evaluation of participant contributions. Second, while raising the blockchain's transaction throughput, PoTF evaluates participants' contributions fairly and rewards them accordingly. Finally, we design an algorithm to deter malicious nodes. Experimental results show that TFchain effectively improves transaction-processing performance while reclaiming computing power, and provides effective positive incentives to participants who actively engage in federated learning.


4.
In 2008, Blockchain was introduced to the world as the underlying technology of the Bitcoin system. After more than a decade of development, various Blockchain sys...

5.
Flocking for multi-agent dynamic systems: algorithms and theory
In this paper, we present a theoretical framework for the design and analysis of distributed flocking algorithms. Two cases are considered: flocking in free space and flocking in the presence of multiple obstacles. We present three flocking algorithms: two for free flocking and one for constrained flocking. A comprehensive analysis of the first two algorithms is provided. We demonstrate that the first algorithm embodies all three rules of Reynolds. This is a formal approach to the extraction of interaction rules that lead to the emergence of collective behavior. We show that the first algorithm generically leads to regular fragmentation, whereas the second and third algorithms both lead to flocking. A systematic method is provided for the construction of cost functions (or collective potentials) for flocking. These collective potentials penalize deviation from a class of lattice-shaped objects called α-lattices. We use a multi-species framework for the construction of collective potentials that consist of flock members, or α-agents, and virtual agents associated with α-agents called β- and γ-agents. We show that migration of flocks can be performed using a peer-to-peer network of agents, i.e., "flocks need no leaders." A "universal" definition of flocking for particle systems, with similarities to Lyapunov stability, is given. Several simulation results demonstrate 2-D and 3-D flocking, split/rejoin maneuvers, and squeezing maneuvers for hundreds of agents using the proposed algorithms.
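As a minimal sketch of the style of algorithm analyzed in the paper (not its exact protocol), the update below combines a lattice-shaping attraction/repulsion term with a velocity-consensus term; the gains, interaction radius r and lattice spacing d are illustrative.

```python
import numpy as np

def flocking_step(pos, vel, dt=0.05, r=1.2, d=1.0, c1=0.2, c2=0.4):
    """One Euler step for n agents; pos and vel are (n, 2) arrays."""
    acc = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = pos[j] - pos[i]
            dist = np.linalg.norm(diff)
            if 0 < dist < r:  # j is a spatial neighbor of i
                # attraction/repulsion shaping an alpha-lattice at spacing d
                acc[i] += c1 * (dist - d) * diff / dist
                # velocity consensus (alignment with neighbors)
                acc[i] += c2 * (vel[j] - vel[i])
    vel = vel + dt * acc
    return pos + dt * vel, vel
```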

6.
In this work we introduce the new problem of finding time series discords. Time series discords are subsequences of a longer time series that are maximally different from all the rest of the time series subsequences. They thus capture the sense of the most unusual subsequence within a time series. While discords have many uses for data mining, they are particularly attractive as anomaly detectors because they require only one intuitive parameter (the length of the subsequence), unlike most anomaly detection algorithms, which typically require many parameters. While the brute force algorithm to discover time series discords is quadratic in the length of the time series, we show a simple algorithm that is three to four orders of magnitude faster than brute force, while guaranteed to produce identical results. We evaluate our work with a comprehensive set of experiments on diverse data sources including electrocardiograms, space telemetry, respiration physiology, anthropological and video datasets. Eamonn Keogh is an Assistant Professor of computer science at the University of California, Riverside. His research interests include data mining, machine learning and information retrieval. Several of his papers have won best paper awards, including papers at SIGKDD and SIGMOD. Dr. Keogh is the recipient of a 5-year NSF Career Award for "Efficient discovery of previously unknown patterns and relationships in massive time series databases." Jessica Lin is an Assistant Professor of information and software engineering at George Mason University. She received her Ph.D. from the University of California, Riverside. Her research interests include data mining and information retrieval. Sang-Hee Lee is a paleoanthropologist at the University of California, Riverside. Her research interests include the evolution of human morphological variation and how different mechanisms (such as taxonomy, sex, age, and time) explain what is observed in fossil data. Dr. Lee obtained her Ph.D. in anthropology from the University of Michigan in 1999. Helga Van Herle is an Assistant Clinical Professor of medicine at the Division of Cardiology of the Geffen School of Medicine at UCLA. She received her M.D. from UCLA in 1993, completed her residency in internal medicine at the New York Hospital (Cornell University, 1993–1996) and her cardiology fellowship at UCLA (1997–2001). Dr. Van Herle holds an M.Sc. in bioengineering from Columbia University (1987) and a B.Sc. in chemical engineering from UCLA (1985).
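The quadratic brute-force baseline is simple enough to sketch; the version below is illustrative (Euclidean distance, no early abandoning or candidate reordering, which is where the reported three-to-four-orders-of-magnitude speedup comes from).

```python
import numpy as np

def brute_force_discord(series, m):
    """Return (index, distance) of the length-m discord: the subsequence
    whose nearest non-overlapping neighbor is farthest away."""
    n = len(series) - m + 1
    subs = np.array([series[i:i + m] for i in range(n)])
    best_idx, best_dist = -1, -np.inf
    for i in range(n):
        nn = np.inf
        for j in range(n):
            if abs(i - j) >= m:  # exclude trivial self-matches
                nn = min(nn, np.linalg.norm(subs[i] - subs[j]))
        if nn > best_dist:
            best_idx, best_dist = i, nn
    return best_idx, best_dist
```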

7.
A computational theory for the classification of natural biosonar targets is developed based on the properties of an example stimulus ensemble. An extensive set of echoes (84 800) from four different foliages was transcribed into a spike code using a parsimonious model (linear filtering, half-wave rectification, thresholding). The spike code is assumed to consist of time differences (interspike intervals) between threshold crossings. Among the elementary interspike intervals flanked by exceedances of adjacent thresholds, a few intervals triggered by disjoint half-cycles of the carrier oscillation stand out in terms of resolvability, visibility across resolution scales and a simple stochastic structure (uncorrelatedness). They are therefore argued to be a stochastic analogue to edges in vision. A three-dimensional feature vector representing these interspike intervals sustained a reliable target classification performance (0.06% classification error) in a sequential probability ratio test, which models sequential processing of echo trains by biological sonar systems. The dimensions of the representation are the first moments of duration and amplitude location of these interspike intervals as well as their number. All three quantities are readily reconciled with known principles of neural signal representation, since they correspond to the centre of gravity of excitation on a neural map and the total amount of excitation.
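A minimal sketch of the transcription step, assuming the half-wave rectification and threshold-crossing stages described above; the linear filtering stage is omitted and the threshold values are illustrative.

```python
import numpy as np

def interspike_intervals(echo, thresholds):
    """Spike-code an echo: half-wave rectify, then collect interspike
    intervals between upward crossings of each threshold."""
    x = np.maximum(np.asarray(echo, dtype=float), 0.0)  # half-wave rectification
    intervals = []
    for th in thresholds:
        above = x >= th
        spikes = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
        intervals.extend(np.diff(spikes))
    return np.array(intervals)
```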

8.
9.
We clarify the mathematical equivalence between low-dimensional singular value decomposition and low-order tensor principal component analysis for two- and three-dimensional images. Furthermore, we show that the two- and three-dimensional discrete cosine transforms are, respectively, acceptable approximations to two- and three-dimensional singular value decomposition and classical principal component analysis. Moreover, for the practical computation in two-dimensional singular value decomposition, we introduce the marginal eigenvector method, which was proposed for image compression. For three-dimensional singular value decomposition, we also show an iterative algorithm. To evaluate the performances of the marginal eigenvector method and two-dimensional discrete cosine transform for dimension reduction, we compute recognition rates for six datasets of two-dimensional image patterns. To evaluate the performances of the iterative algorithm and three-dimensional discrete cosine transform for dimension reduction, we compute recognition rates for datasets of gait patterns and human organs. For two- and three-dimensional images, the two- and three-dimensional discrete cosine transforms give almost the same recognition rates as the marginal eigenvector method and iterative algorithm, respectively.
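A minimal sketch of the DCT route to dimension reduction discussed above, assuming SciPy's dctn/idctn and an illustrative square low-frequency cutoff k.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_reduce(image, k):
    """Represent an image by its k x k low-frequency 2-D DCT coefficients
    and reconstruct; an approximation to 2-D SVD/PCA reduction."""
    coeffs = dctn(image, norm="ortho")
    kept = np.zeros_like(coeffs)
    kept[:k, :k] = coeffs[:k, :k]       # keep only low-order coefficients
    return idctn(kept, norm="ortho")    # reconstruction from k*k numbers
```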

10.
11.
Agostino, Pattern Recognition, 2003, 36(12): 2955-2966
The core of a k-means algorithm is the reallocation phase. A variety of schemes have been suggested for moving entities from one cluster to another, and each of them may give a different clustering even though the data set is the same. The present paper describes the shortcomings and relative merits of 17 relocation methods in connection with randomly generated data sets.
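A minimal sketch of the reallocation phase in its most common form (move each entity to its nearest center, then recompute the centers); the 17 methods compared in the paper are variations on this step.

```python
import numpy as np

def reallocate(X, centers):
    """One reallocation pass: X is (n, d) data, centers is (k, d)."""
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)             # nearest-center assignment
    for k in range(len(centers)):
        members = X[labels == k]
        if len(members):                      # keep old center if cluster empties
            centers[k] = members.mean(axis=0)
    return labels, centers
```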

12.
This special issue provides a leading forum for timely, in-depth presentation of recent advances in algorithms, theories and applications in temporal data mining. The selected papers underwent a rigorous refereeing and revision process.

13.
Computers & Geosciences, 1997, 23(8): 823-849
Numerical solutions to a recently introduced family of continuous models provide realistic representations of the evolution of fluvial landscapes. The simplest subfamily of the models offers a characterization of the evolution of “badlands” as a process involving (1) a first, transient stage in which branching valleys emerge from unchanneled surfaces; (2) a second, “equilibrium” stage in which a fully-developed surface with branching valleys and ridges declines in a stable, self-similar mode; and (3) a final, dissipative stage in which regularities in the landscape break down. In the transient stage of development, small perturbations to the surface induce local variations in water flow, differential erosion, and the rapid emergence of a coherent, fine-scale structure of channelized flow patterns. The small-scale features evolve into larger scale features by a process in which small flows intersect and grow. Standard linearized analyses of the equations are inadequate for characterizing this process, which appears to be initially dominated by random effects and non-linear saturation. In the second stage, the numerical solutions converge towards satisfaction of an optimality principle by which the patterns of ridges, valleys, and surface concavities minimize a function of the sediment flux over the surface, subject to two constraints. This stage is in accordance with a theoretical analysis of the model presented in a previous paper, and the numerical solutions are stable in accordance with this analysis. The optimality principle is associated with both the emergence of separable solutions to the conservation equations and a variety of regularities in landscape form and evolution, including self-similar decline of forms and a “law of height-proportional erosion”. The numerical solutions provide detailed insight into the co-evolution of landforms and flows of water and sediment. The family of models provides an elementary theory characterizing the evolution of drainage basin phenomena, and in particular (1) possesses interpretations in terms of various geomorphological concepts and observations; (2) appears capable of explaining variations in geomorphic forms over a wide variety of environments; and (3) unifies certain aspects of the continuous, discrete, and variational approaches to landscape modeling.

14.
We study an efficient O(|V|^3) combinatorial algorithm for maximum flow and propose a practical implementation based on breadth-first search (BFS). We establish a property of this implementation and exploit it by running forward and backward BFS simultaneously, forming the auxiliary (layered) networks L in order of increasing path length; a maximum flow is computed in each auxiliary network L, and these flows are finally combined into a maximum flow of the original network. We design an orthogonal doubly linked list storage structure, which uses a distinctive dynamic doubly linked adjacency list to store each auxiliary network L. This retains useful information while discarding useless information, keeps the time complexity of the efficient maximum-flow algorithm at O(|V|^3), and thereby achieves dynamic storage.
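The layered construction translates naturally into a breadth-first search over residual edges; the sketch below builds one auxiliary network L using an illustrative dict-of-dicts capacity representation, not the orthogonal doubly linked list structure of the paper.

```python
from collections import deque

def bfs_layers(residual, source, sink):
    """Assign BFS levels over edges with positive residual capacity.

    residual: dict u -> {v: remaining capacity}. Returns the level map
    defining the auxiliary network L, or None when no augmenting path exists.
    """
    level = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v, cap in residual[u].items():
            if cap > 0 and v not in level:
                level[v] = level[u] + 1  # v joins the next layer of L
                queue.append(v)
    return level if sink in level else None
```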

15.
16.
In recent years, the rapid development of reinforcement learning and adaptive dynamic programming (ADP) algorithms, and their successful application to a series of challenging problems (such as optimal decision-making and optimal coordinated control of large-scale multi-agent systems), have made them a research focus in artificial intelligence, systems and control, and applied mathematics. This paper first briefly introduces the fundamentals and core ideas of reinforcement learning and ADP, and on that basis surveys the development of these two closely related families of algorithms across different research areas, emphasizing their evolution from sequential decision-making (optimal control) for a single agent (plant) to sequential decision-making (optimal coordinated control) for multi-agent systems. Further, after briefly reviewing the structural evolution of ADP algorithms and their progression from model-based offline planning to model-free online learning, the paper surveys progress on applying ADP to the optimal coordinated control of multi-agent systems. Finally, several challenging open problems worth attention are identified, both in multi-agent reinforcement learning and in solving the optimal coordinated control of multi-agent systems with ADP.
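As a minimal illustration of the model-free online-learning idea the survey traces, the sketch below implements tabular Q-learning; the environment interface (reset/step/actions) and all hyperparameters are assumptions made for this example.

```python
import random

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning sketch. env is assumed to expose reset() -> state,
    step(a) -> (next_state, reward, done), and actions(state) -> list."""
    Q = {}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            if random.random() < eps:
                a = random.choice(acts)                          # explore
            else:
                a = max(acts, key=lambda x: Q.get((s, x), 0.0))  # exploit
            s2, r, done = env.step(a)
            nxt = 0.0 if done else max(
                (Q.get((s2, x), 0.0) for x in env.actions(s2)), default=0.0)
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (r + gamma * nxt - old)    # TD update
            s = s2
    return Q
```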

17.
The combination of evolutionary algorithms with local search was named "memetic algorithms" (MAs) (Moscato, 1989). These methods are inspired by models of natural systems that combine the evolutionary adaptation of a population with individual learning within the lifetimes of its members. Additionally, MAs are inspired by Richard Dawkins's concept of a meme, which represents a unit of cultural evolution that can exhibit local refinement (Dawkins, 1976). In the case of MAs, "memes" refer to the strategies (e.g., local refinement, perturbation, or constructive methods) that are employed to improve individuals. In this paper, we review some works on the application of MAs to well-known combinatorial optimization problems, and place them in a framework defined by a general syntactic model. This model provides us with a classification scheme based on a computable index D, which facilitates algorithmic comparisons and suggests areas for future research. Also, by having an abstract model for this class of metaheuristics, it is possible to explore their design space and better understand their behavior from a theoretical standpoint. We illustrate the theoretical and practical relevance of this model and taxonomy for MAs in the context of a discussion of important design issues that must be addressed to produce effective and efficient MAs.
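A skeleton of the MA template the paper formalizes, with the local-search ("meme") step applied to each offspring; all operator arguments are illustrative placeholders, not a specific MA from the survey.

```python
import random

def memetic_algorithm(init_pop, fitness, crossover, mutate, local_search,
                      generations=100):
    """Evolutionary loop plus individual learning: each offspring is
    refined by local search before competing for survival."""
    pop = sorted(init_pop, key=fitness, reverse=True)
    for _ in range(generations):
        p1, p2 = random.sample(pop[: max(2, len(pop) // 2)], 2)  # select parents
        child = local_search(mutate(crossover(p1, p2)))          # lifetime learning
        pop = sorted(pop + [child], key=fitness, reverse=True)[: len(init_pop)]
    return pop[0]  # best individual found
```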

18.
Sleep plays a significant role in human mental and physical health. Recently, associations between lack of sleep and weight gain, the development of cancer, and many other health problems have been recognized, so monitoring sleep and wake states throughout the night has become a topic of growing interest. Traditionally these states are classified from a PSG recording, which is costly and uncomfortable. Nowadays, with the advance of the Internet of Things, many convenient wearable devices are being used for medical purposes, such as measuring heart rate (HR), blood pressure and other signals. For sleep-quality monitoring, the key question is how to discriminate the sleep and wake stages from these signals. This paper proposes a Bayesian approach based on the dynamic time warping (DTW) method for sleep and wake classification. It uses HR and surplus pulse O2 (SPO2) signals to analyze sleep states and the occurrence of some sleep-related problems. DTW is an algorithm that searches for an optimal alignment between time series under scaling and shifting, and Bayesian methods have been used successfully for object classification in many studies. In this paper, a three-step process is used for sleep and wake classification. In the first step, DTW is used to extract features from the original HR and SPO2 signals. A probabilistic model is then introduced for applying Bayesian classification to uncertain data. In the classification step, the DTW features serve as the training dataset for the Bayesian approach. Finally, a case study from a real-world application, collected from the website of the Sleep Heart Health Study, is presented to show the feasibility and advantages of the DTW-based Bayesian approach.
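The DTW primitive behind the feature-extraction step can be stated directly; the version below is a minimal dynamic-programming implementation without windowing or normalization, illustrative rather than the paper's exact variant.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```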

19.
20.
Commercial applications for the arts tend to enforce a division between the use of learnable direct-manipulation interfaces and the use of powerful, well-supported programming environments. In contrast, programmable applications integrate these two software-design paradigms (i.e., direct manipulation and programming languages) and thereby attempt to exploit the strengths of both. A sample graphics application, SchemePaint, is outlined, and some of the issues related to the creation of programmable applications for the arts are discussed.
