Similar Documents
20 similar documents found (search time: 223 ms)
1.
A two-dimensional (2D) N-directional edge detection filter was represented as a pair of real masks, i.e. by one complex-valued matrix [1]. The 3 × 3 compass gradient edge masks have often been used because of their simplicity; the N compass masks are constructed by rotating the kernel mask by an incremental angle of 2π/N. This paper presents complex-valued feature masks formulated by directional filtering of 3 × 3 feature masks (e.g. Prewitt, Sobel, Frei-Chen, Kirsch and roof masks) for different numbers of directions N (8, 4 and 2). The same concept can be applied to any type of filter/mask with arbitrary N. Received: 21 May 2001, Received in revised form: 15 November 2001, Accepted: 26 March 2002. Correspondence and offprint requests to: Professor Rae-Hong Park, Department of Electronic Engineering, Sogang University, C.P.O. Box 1142, Seoul 100–611, Korea. Email: rhpark@sogang.ac.kr
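The rotate-by-2π/N construction for N = 8 can be sketched by shifting the eight perimeter entries of a 3 × 3 kernel one position per 45° step; the pairing of two orthogonal real masks into one complex-valued matrix follows the idea stated in the abstract, though the exact pairing used in the paper may differ. A minimal sketch, assuming a Prewitt kernel:

```python
import numpy as np

def rotate45(mask):
    """Rotate a 3x3 compass mask by 45 degrees by shifting its
    eight perimeter entries one position around the ring."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    out = mask.copy()
    vals = [mask[r, c] for r, c in ring]
    vals = vals[-1:] + vals[:-1]          # shift by one ring position
    for (r, c), v in zip(ring, vals):
        out[r, c] = v
    return out

# Prewitt east mask; repeated rotation yields the N = 8 compass set
prewitt = np.array([[-1, 0, 1],
                    [-1, 0, 1],
                    [-1, 0, 1]], dtype=float)

masks = [prewitt]
for _ in range(7):
    masks.append(rotate45(masks[-1]))

# Two orthogonal real masks packed into one complex-valued matrix,
# so a single complex convolution evaluates both directions at once.
complex_mask = masks[0] + 1j * masks[2]
```

Four 45° rotations give the 180°-rotated (negated) mask, and eight return the original, which is easy to verify on the compass set above.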

2.
This paper presents a motion segmentation method useful for efficiently representing a video shot as a static mosaic of the background plus sequences of the objects moving in the foreground. This generates an MPEG-4 compliant, layered representation useful for video coding, editing and indexing. First, a mosaic of the static background is computed by estimating the dominant motion of the scene. This is achieved by tracking features over the video sequence and using a robust technique that discards features attached to the moving objects. The moving objects are removed from the final mosaic by computing the median of the grey levels. Then, segmentation is obtained by taking the pixelwise difference between each frame of the original sequence and the mosaic of the background. To discriminate between the moving objects and noise, temporal coherence is exploited by tracking each object in the binarised difference image sequence. The automatic computation of the mosaic and the segmentation procedure are illustrated with experiments on real sequences. Examples of coding and content-based manipulation are also shown. Received: 31 August 2000, Received in revised form: 18 April 2001, Accepted: 20 July 2001
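The median-mosaic and binarised-difference steps can be sketched on a toy, already-registered sequence; this is only an illustration of those two steps, not the paper's full method (which first estimates the dominant motion with robust feature tracking):

```python
import numpy as np

def median_background(frames):
    """A moving object covers any given pixel in only a few frames,
    so the per-pixel temporal median recovers the static background."""
    return np.median(np.stack(frames), axis=0)

def segment(frame, background, threshold=0.5):
    """Binarised pixelwise difference between a frame and the mosaic."""
    return (np.abs(frame - background) > threshold).astype(np.uint8)

# Toy sequence: a bright 2x2 object moving over a flat background.
frames = []
for t in range(8):
    f = np.zeros((8, 8))
    f[2:4, t:t + 2] = 1.0        # object at a different position each frame
    frames.append(f)

bg = median_background(frames)   # the object never dominates any pixel
mask = segment(frames[0], bg)    # binary mask of the object in frame 0
```

Because the object occupies each pixel in at most two of the eight frames, the median leaves a clean background, and the difference image isolates the object.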

3.
Gaussian mixture modelling is used to provide a semi-parametric density estimate for a given data set. The fundamental problem with this approach is that the number of mixture components required to describe the data adequately is not known in advance. In our previous work, we described an algorithm, termed Predictive Validation, which attempted to select the number of components automatically. The aim of this paper is to investigate the influence of the various parameters in our model selection method in order to develop it into an operational tool. We demonstrate the successful application of model validation to three applications in which the selected models are used for supervised classification, unsupervised classification and outlier detection tasks. Received: 23 November 2000, Received in revised form: 24 April 2001, Accepted: 21 May 2001

4.
We consider the problem of locating replicas in a network to minimize communications costs. Under the assumption that the read-one-write-all policy is used to ensure data consistency, an optimization problem is formulated in which the cost function estimates the total communications costs. The paper concentrates on the study of the optimal communications cost as a function of the ratio between the frequency of the read and write operations. The problem is reformulated as a zero-one linear programming problem, and its connection to the p-median problem is explained. The general problem is proved to be NP-complete. For path graphs a dynamic programming algorithm for the problem is presented. Received: May 1993 / Accepted: June 2001
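The read-one-write-all cost function can be illustrated by exhaustive search on a small path graph; the paper gives a dynamic programming algorithm for paths, so the brute force below is only a sketch of the objective, with hypothetical per-node read/write frequencies:

```python
from itertools import combinations

def path_dist(i, j):
    """Hop distance between nodes i and j on a path graph."""
    return abs(i - j)

def cost(replicas, n, read_freq, write_freq):
    """Read-one-write-all: each node reads from its nearest replica,
    while every write must be propagated to every replica."""
    read_cost = sum(read_freq[v] * min(path_dist(v, r) for r in replicas)
                    for v in range(n))
    write_cost = sum(write_freq[v] * sum(path_dist(v, r) for r in replicas)
                     for v in range(n))
    return read_cost + write_cost

def best_placement(n, read_freq, write_freq):
    """Enumerate all non-empty replica sets and keep the cheapest."""
    best = None
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            c = cost(subset, n, read_freq, write_freq)
            if best is None or c < best[0]:
                best = (c, subset)
    return best

# Read-dominated workload: replicate everywhere; write-dominated: centralise.
n = 5
_, reads_win = best_placement(n, [10] * n, [0] * n)
_, writes_win = best_placement(n, [0] * n, [10] * n)
```

The two extremes show how the optimum moves with the read/write ratio: pure reads favour a replica at every node, pure writes a single replica at the path's median node.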

5.
Hierarchical Fusion of Multiple Classifiers for Hyperspectral Data Analysis
Many classification problems involve high dimensional inputs and a large number of classes. Multiclassifier fusion approaches to such difficult problems typically centre around smart feature extraction, input resampling methods, or input space partitioning to exploit modular learning. In this paper, we investigate how partitioning of the output space (i.e. the set of class labels) can be exploited in a multiclassifier fusion framework to simplify such problems and to yield better solutions. Specifically, we introduce a hierarchical technique to recursively decompose a C-class problem into C-1 two-(meta)class problems. A generalised modular learning framework is used to partition a set of classes into two disjoint groups called meta-classes. The coupled problems of finding a good partition and of searching for a linear feature extractor that best discriminates the resulting two meta-classes are solved simultaneously at each stage of the recursive algorithm. This results in a binary tree whose leaf nodes represent the original C classes. The proposed hierarchical multiclassifier framework is particularly effective for difficult classification problems involving a moderately large number of classes. The proposed method is illustrated on a problem related to classification of landcover using hyperspectral data: a 12-class AVIRIS subset with 180 bands. For this problem, the classification accuracies obtained were superior to most other techniques developed for hyperspectral classification. Moreover, the class hierarchies that were automatically discovered conformed very well with human domain experts’ opinions, which demonstrates the potential of using such a modular learning approach for discovering domain knowledge automatically from data. Received: 21 November 2000, Received in revised form: 02 November 2001, Accepted: 13 December 2001
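The output-space decomposition can be sketched structurally: the placeholder split below simply halves the class list, whereas the paper learns each partition jointly with a linear feature extractor. The sketch only shows that a C-class problem yields a binary tree with C leaves and C-1 two-(meta)class problems:

```python
def build_hierarchy(classes):
    """Recursively split a set of class labels into two meta-classes,
    returning the binary tree and one two-class problem per internal
    node.  The split rule here is a placeholder (halving the list)."""
    problems = []

    def split(node):
        if len(node) == 1:
            return node[0]                           # leaf: an original class
        mid = len(node) // 2
        left, right = node[:mid], node[mid:]
        problems.append((tuple(left), tuple(right)))  # one meta-class problem
        return (split(left), split(right))

    tree = split(list(classes))
    return tree, problems

# e.g. a 12-class problem, like the AVIRIS subset in the abstract
tree, problems = build_hierarchy(range(12))
```

A binary tree with C leaves always has C-1 internal nodes, so 12 classes produce exactly 11 two-metaclass problems regardless of how the splits are chosen.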

6.
Current statistical machine translation systems are mainly based on statistical word lexicons. However, these models are usually context-independent; therefore, the disambiguation of the translation of a source word must be carried out using other probabilistic distributions (distortion distributions and statistical language models). One efficient way to add contextual information to the statistical lexicons is based on maximum entropy modeling. In that framework, the context is introduced through feature functions that allow us to learn context-dependent lexicon models automatically. In a first approach, maximum entropy modeling is carried out after a process of learning standard statistical models (alignment and lexicon). In a second approach, the maximum entropy modeling is integrated into the expectation-maximization process of learning the standard statistical models. Experimental results were obtained for two well-known tasks, the French–English Canadian Parliament Hansards task and the German–English Verbmobil task. These results proved that the use of maximum entropy models in both approaches can help to improve the performance of statistical translation systems. This work has been partially supported by the European Union under grant IST-2001-32091 and by the Spanish CICYT under project TIC-2003-08681-C02-02. The experiments on the Verbmobil task were done when the first author was a visiting scientist at RWTH Aachen, Germany. Editors: Dan Roth and Pascale Fung

7.
View materialization is an important way of improving the performance of query processing. When an update occurs to the source data from which a materialized view is derived, the materialized view has to be updated so that it is consistent with the source data. This update process is called view maintenance. The incremental method of view maintenance, which computes the new view using the old view and the update to the source data, is widely preferred to full view recomputation when the update is small in size. In this paper we investigate how to incrementally maintain views in object-relational (OR) databases. The investigation focuses on maintaining views defined in OR-SQL, a language containing the features of object referencing, inheritance, collection, and aggregate functions including user-defined set aggregate functions. We propose an architecture and algorithms for incremental OR view maintenance. We implement all algorithms and analyze their performance in comparison with full view recomputation. The analysis shows that the algorithms significantly reduce the cost of updating a view when the size of an update to the source data is relatively small. Received 23 May 2000 / Revised 27 March 2001 / Accepted in revised form 30 April 2001. Correspondence and offprint requests to: Jixue Liu, School of Computer and Information Science, University of South Australia, Mawson Lakes, Adelaide SA5084, Australia. Email: jixue.liu@unisa.edu.au
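The incremental-versus-recompute trade-off can be sketched with a SUM/COUNT aggregate view; this is a minimal illustration of the general idea, not the paper's OR-SQL algorithms:

```python
class SumView:
    """Materialised SUM/COUNT aggregate over a source table, maintained
    incrementally: each insert or delete adjusts the stored aggregates
    instead of rescanning the whole source."""
    def __init__(self, rows):
        self.total = sum(rows)       # one full scan at creation time
        self.count = len(rows)

    def insert(self, value):         # O(1) maintenance per inserted row
        self.total += value
        self.count += 1

    def delete(self, value):         # O(1) maintenance per deleted row
        self.total -= value
        self.count -= 1

    def average(self):
        return self.total / self.count if self.count else None

source = [10, 20, 30]
view = SumView(source)
source.append(25); view.insert(25)   # small update: no recomputation
source.remove(10); view.delete(10)
```

When the update touches a few rows of a large source, this O(1)-per-row maintenance is far cheaper than recomputing `sum(source)` from scratch, which is exactly the regime the abstract describes.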

8.
On the Weighted Mean of a Pair of Strings
String matching and string edit distance are fundamental concepts in structural pattern recognition. In this paper, the weighted mean of a pair of strings is introduced. Given two strings, x and y, where d(x, y) is the edit distance of x and y, the weighted mean of x and y is a string z that has edit distances d(x, z) and d(z, y) to x and y, respectively, such that d(x, z) + d(z, y) = d(x, y). We show formal properties of the weighted mean, describe a procedure for its computation, and give practical examples. Received: 26 October 2000, Received in revised form: 27 April 2001, Accepted: 20 July 2001
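Under unit edit costs, a weighted mean can be computed by backtracking one optimal edit path from x to y and applying its first k operations; this is a sketch of that kind of procedure (the paper handles general weighted costs):

```python
def dist_matrix(a, b):
    """Standard Levenshtein dynamic-programming table."""
    m, n = len(a), len(b)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1): D[i][0] = i
    for j in range(n + 1): D[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            D[i][j] = min(D[i-1][j-1] + sub, D[i-1][j] + 1, D[i][j-1] + 1)
    return D

def edit_distance(a, b):
    return dist_matrix(a, b)[len(a)][len(b)]

def weighted_mean(x, y, k):
    """Return z with d(x, z) = k and d(x, z) + d(z, y) = d(x, y),
    assuming unit costs and 0 <= k <= d(x, y)."""
    D = dist_matrix(x, y)
    i, j, ops = len(x), len(y), []
    while i > 0 or j > 0:            # backtrack one optimal edit path
        if i and j and D[i][j] == D[i-1][j-1] + (x[i-1] != y[j-1]):
            ops.append(('sub' if x[i-1] != y[j-1] else 'keep',
                        x[i-1], y[j-1]))
            i, j = i - 1, j - 1
        elif i and D[i][j] == D[i-1][j] + 1:
            ops.append(('del', x[i-1], None)); i -= 1
        else:
            ops.append(('ins', None, y[j-1])); j -= 1
    ops.reverse()
    z, budget = [], k                # apply the first k unit-cost operations
    for op, old, new in ops:
        if op == 'keep':
            z.append(old)
        elif op == 'sub':
            if budget > 0: z.append(new); budget = budget - 1
            else: z.append(old)
        elif op == 'del':
            if budget > 0: budget -= 1
            else: z.append(old)
        elif budget > 0:             # 'ins'
            z.append(new); budget -= 1
    return ''.join(z)

z = weighted_mean("kitten", "sitting", 1)   # one step from x towards y
```

Since z is reached from x with at most k unit edits and the remaining path transforms z into y with at most d(x, y) - k edits, the triangle inequality forces both bounds to be tight, giving the defining identity.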

9.
We present the results of an extensive study of the combination of multiple Information Retrieval systems, and also introduce a new fusion model (the Adaptive Combination of Evidence, or ACE, model) that determines which IR systems to listen to based on the content of the document being scored. We compare the results of using the ACE model on a standard data set from the Text Retrieval Conference (TREC) to two baseline models (a simple sum and a weighted sum) in a variety of tasks and settings. These settings are chosen to reflect a cross-product of various dimensions of both experimental inquiry and real-world IR environments, providing a comprehensive view of the role of fusion in IR. We verify that one baseline system does, on average, provide improvements in performance. Although the ACE model only outperforms the better of the baselines in one setting (tying or slightly underperforming in others), our analysis shows that it exhibits interesting and desirable behaviour that could be exploited given enough training data. Received: 15 November 2000, Received in revised form: 07 November 2001, Accepted: 13 December 2001

10.
Based on a previously developed three-phase variable reluctance (VR) linear microactuator, design features improving the dynamic properties were investigated. The active part exhibits, as its predecessor does, permalloy yokes and stator poles with teeth, and is fabricated using thin-film technology. Improvements to the dynamic motor range were made by varying the number of phases. The new design has substantially improved dynamic properties, because the six-phase design greatly reduces the location-dependent driving-force ripple. Received: 21 May 2001 / Accepted: 30 July 2001

11.
Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article, we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modelling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyse the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings. Received: 17 November 2000, Received in revised form: 07 November 2001, Accepted: 22 November 2001
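The median, maximum and trimmed-mean combiners can be sketched directly on a matrix of per-classifier class-posterior estimates; the four-classifier, two-class `outputs` below are hypothetical numbers chosen to include one badly performing classifier:

```python
import numpy as np

def os_combine(outputs, stat='med'):
    """Combine per-classifier posterior estimates with an order
    statistic taken across classifiers, separately for each class."""
    P = np.sort(np.asarray(outputs, dtype=float), axis=0)
    n = len(P)
    if stat == 'med':
        return (P[(n - 1) // 2] + P[n // 2]) / 2   # median combiner
    if stat == 'max':
        return P[-1]                               # maximum combiner
    raise ValueError(stat)

def trim_combine(outputs, drop=1):
    """Trim combiner: discard the `drop` smallest and largest outputs
    per class before averaging -- robust to one badly wrong classifier."""
    P = np.sort(np.asarray(outputs, dtype=float), axis=0)
    return P[drop:len(P) - drop].mean(axis=0)

# Three decent classifiers and one very poor one (uneven performance).
outputs = [[0.8, 0.2],
           [0.7, 0.3],
           [0.9, 0.1],
           [0.0, 1.0]]   # the poor classifier votes for the wrong class
fused = trim_combine(outputs)
```

The trim combiner discards the outlying [0.0, 1.0] vote entirely, so the fused estimate for the correct class stays at 0.75 instead of being dragged down to the plain mean of 0.6.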

12.
Consider a distributed system consisting of n computers connected by a number of identical broadcast channels. All computers may receive messages from all channels. We distinguish between two kinds of systems: systems in which the computers may send on any channel (dynamic allocation) and systems where the send port of each computer is statically allocated to a particular channel. A distributed task (application) is executed on the distributed system. A task performs execution as well as communication between its subtasks. We compare the completion time of the communication for such a task using dynamic allocation with the completion time using static allocation. Some distributed tasks will benefit very much from allowing dynamic allocation, whereas others will work fine with static allocation. In this paper we define optimal upper and lower bounds on the gain (or loss) of using dynamic allocation compared to static allocation. Our results show that, for some tasks, the gain of permitting dynamic allocation is substantial: there are tasks which will complete 1.89 times faster using dynamic allocation than with the best possible static allocation, but there are no tasks with a higher such ratio. Received: 26 February 1998 / 26 July 1999

13.
A Feature-Based Serial Approach to Classifier Combination
A new approach to the serial multi-stage combination of classifiers is proposed. Each classifier in the sequence uses a smaller subset of features than the subsequent classifier. The classification provided by a classifier is rejected only if its decision is below a predefined confidence level. The approach is tested on a two-stage combination of k-Nearest Neighbour classifiers. The features to be used by the first classifier in the combination are selected by two stand-alone algorithms (Relief and Info-Fuzzy Network, or IFN) and a hybrid method, called ‘IFN + Relief’. The feature-based approach is shown empirically to provide a substantial decrease in computational complexity, while maintaining the accuracy level of a single-stage classifier or even improving it. Received: 24 November 2000, Received in revised form: 30 November 2001, Accepted: 05 June 2002. Correspondence and offprint requests to: M. Last, Department of Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel. Email: mlast@bgumail.bgu.ac.il
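The two-stage rejection scheme can be sketched with plain k-NN and toy data; the cheap feature subset is fixed by hand here (the paper selects it with Relief/IFN), and all data values are hypothetical:

```python
from collections import Counter

def knn_predict(train, x, k, features):
    """k-NN on a subset of feature indices; returns (label, confidence),
    where confidence = fraction of the k neighbours voting for the winner."""
    dists = sorted((sum((xi[f] - x[f]) ** 2 for f in features), y)
                   for xi, y in train)
    votes = Counter(y for _, y in dists[:k])
    label, count = votes.most_common(1)[0]
    return label, count / k

def serial_classify(train, x, k=3, threshold=1.0,
                    cheap=(0,), full=(0, 1)):
    """Stage 1 uses the cheap feature subset; its decision is kept only
    if confidence reaches the threshold, otherwise stage 2 re-classifies
    the sample with the full feature set."""
    label, conf = knn_predict(train, x, k, cheap)
    if conf >= threshold:
        return label, 1                 # decided at the cheap stage
    label, _ = knn_predict(train, x, k, full)
    return label, 2                     # escalated to the full stage

train = [((0.0, 0.0), 'a'), ((0.1, 0.2), 'a'), ((0.2, 0.1), 'a'),
         ((1.0, 1.0), 'b'), ((0.9, 1.1), 'b'), ((0.8, 1.0), 'b')]

easy = serial_classify(train, (0.05, 0.05))   # clear on feature 0 alone
hard = serial_classify(train, (0.55, 0.95))   # ambiguous on feature 0
```

Most samples are decided cheaply at stage 1, and only ambiguous ones pay for the full feature set, which is the source of the computational saving the abstract reports.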

14.
15.
Class Refinement as Semantics of Correct Object Substitutability
Subtype polymorphism, based on syntactic conformance of objects' methods and used for substituting subtype objects for supertype objects, is a characteristic feature of the object-oriented programming style. While certainly very useful, typechecking of syntactic conformance of subtype objects to supertype objects is insufficient to guarantee correctness of object substitutability. In addition, the behaviour of subtype objects must be constrained to achieve correctness. In class-based systems classes specify the behaviour of the objects they instantiate. In this paper we define the class refinement relation which captures the semantic constraints that must be imposed on classes to guarantee correctness of substitutability in all clients of the objects these classes instantiate. Clients of class instances are modelled as programs making an iterative choice over invocation of class methods, and we formally prove that when a class C′ refines a class C, substituting instances of C′ for instances of C is refinement for the clients. Received May 1999 / Accepted in revised form March 2000

16.
罗小平  张沪寅 《计算机工程与设计》2006,27(15):2734-2736,2781
Resource sharing among different business processes in a workflow system inevitably raises a series of security problems, and in workflow systems security policy is chiefly embodied in the access control policy. Based on the security requirements of workflow systems, a role-based access control model for workflow systems (WfRBAC) is presented. In this model, tasks are introduced to extend the dynamic character of the RBAC model. WfRBAC has six elements: users, roles, tasks, objects, permissions and constraints; the constraints are divided into dynamic and static constraints, so the model can satisfy both the static and the dynamic access control requirements of workflow systems.
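A minimal sketch of the six-element model; all names, assignments and the sample constraint below are hypothetical, and the constraint functions stand in for the model's dynamic constraints:

```python
class WfRBAC:
    """Six elements: users, roles, tasks, objects, permissions and
    constraints.  A user acting in a role may exercise a permission on
    an object only within a task assigned to that role, and only if
    every constraint holds."""
    def __init__(self):
        self.user_roles = {}     # user -> set of roles (static assignment)
        self.role_tasks = {}     # role -> set of tasks
        self.task_perms = {}     # task -> set of (permission, object)
        self.constraints = []    # dynamic checks: f(user, role, task) -> bool

    def allow(self, user, role, task, perm, obj):
        return (role in self.user_roles.get(user, set())
                and task in self.role_tasks.get(role, set())
                and (perm, obj) in self.task_perms.get(task, set())
                and all(c(user, role, task) for c in self.constraints))

wf = WfRBAC()
wf.user_roles = {'alice': {'clerk'}}
wf.role_tasks = {'clerk': {'enter_order'}}
wf.task_perms = {'enter_order': {('write', 'order_form')}}
# dynamic, separation-of-duty style constraint: clerks never approve
wf.constraints.append(lambda u, r, t: t != 'approve_order')
```

Routing every access decision through the role-task-permission chain plus the constraint list is what lets the model enforce both static assignments and run-time (dynamic) restrictions.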

17.
The performance of a multiple classifier system combining the soft outputs of k-Nearest Neighbour (k-NN) classifiers by the product rule can be degraded by the veto effect. This phenomenon is caused by k-NN classifiers estimating the class a posteriori probabilities using the maximum likelihood method. We show that the problem can be minimised by marginalising the k-NN estimates using the Bayesian prior. A formula for the resulting moderated k-NN estimate is derived. The merits of moderation are examined on real data sets. Tests with different bagging procedures indicate that the proposed moderation method significantly improves the performance of the multiple classifier system. Received: 21 March 2001, Received in revised form: 04 September 2001, Accepted: 20 September 2001
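The veto effect and its moderation can be sketched with a Laplace-style smoothed estimate (k_j + 1)/(k + C); this form is an assumption consistent with marginalising over a Bayesian prior, and the paper's exact formula may differ:

```python
def ml_knn_posterior(neighbour_counts, k):
    """Maximum-likelihood k-NN estimate k_j / k.  A class with zero
    neighbours gets probability 0 and vetoes the product rule."""
    return [kj / k for kj in neighbour_counts]

def moderated_knn_posterior(neighbour_counts, k):
    """Moderated estimate (k_j + 1) / (k + C): no class probability is
    ever exactly 0, so no single classifier can veto the product."""
    C = len(neighbour_counts)
    return [(kj + 1) / (k + C) for kj in neighbour_counts]

def product_fusion(posteriors):
    """Product-rule combination of per-classifier posteriors."""
    fused = [1.0] * len(posteriors[0])
    for p in posteriors:
        fused = [f * pj for f, pj in zip(fused, p)]
    s = sum(fused)
    return [f / s for f in fused] if s else fused

# Classifier 1 sees 0 of its k = 5 neighbours in class 0; classifier 2
# sees 3.  Under ML estimates, classifier 1 vetoes class 0 outright.
veto = product_fusion([ml_knn_posterior([0, 5], 5),
                       ml_knn_posterior([3, 2], 5)])
no_veto = product_fusion([moderated_knn_posterior([0, 5], 5),
                          moderated_knn_posterior([3, 2], 5)])
```

With the ML estimates, the single zero wipes out all evidence the second classifier supplies for class 0; the moderated estimates keep that evidence alive while still favouring class 1.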

18.
Improving Feature Tracking with Robust Statistics
This paper addresses robust feature tracking. The aim is to track point features in a sequence of images and to identify unreliable features resulting from occlusions, perspective distortions and strong intensity changes. We extend the well-known Shi–Tomasi–Kanade tracker by introducing an automatic scheme for rejecting spurious features. We employ a simple and efficient outlier rejection rule, called X84, and prove that its theoretical assumptions are satisfied in the feature tracking scenario. Experiments with real and synthetic images confirm that our algorithm consistently discards unreliable features; we show a quantitative example of the benefits introduced by the algorithm for the case of fundamental matrix estimation. The complete code of the robust tracker is available via ftp. Received: 22 January 1999, Received in revised form: 3 May 1999, Accepted: 3 May 1999
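The X84 rule itself is short enough to sketch directly: keep values within 5.2 median absolute deviations (MADs) of the median, which corresponds to roughly 3.5 standard deviations for Gaussian residuals. The residual values below are hypothetical:

```python
import statistics

def x84_inliers(residuals, k=5.2):
    """X84 rejection rule: keep values within k MADs of the median.
    Both the median and the MAD are robust location/scale estimates,
    so a few gross outliers cannot corrupt the threshold."""
    med = statistics.median(residuals)
    mad = statistics.median(abs(r - med) for r in residuals)
    return [r for r in residuals if abs(r - med) <= k * mad]

# Feature-tracking residuals: well-tracked features plus two spurious
# ones (e.g. features attached to an occlusion boundary).
residuals = [0.9, 1.0, 1.1, 1.0, 0.8, 1.2, 1.0, 9.0, 12.0]
inliers = x84_inliers(residuals)
```

Unlike a mean-and-standard-deviation threshold, which the two large residuals would inflate, the MAD-based threshold is unaffected by them and cleanly rejects both.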

19.
A neural architecture, based on several self-organising maps, is presented which counteracts the parameter drift problem for an array of conducting polymer gas sensors used for odour sensing. The architecture is named mSom, where m is the number of odours to be recognised, and consists mainly of m maps, each of which approximates the statistical distribution of a given odour. Competition occurs both within each map and between maps for the selection of the minimum map distance in Euclidean space. The network (mSom) is able to adapt itself to changes of the input probability distribution by repetitive self-training processes based on its experience. The architecture has been tested and compared with other neural architectures, such as RBF and Fuzzy ARTMAP. The network shows long-term stable behaviour, and is completely autonomous during the testing phase, where re-adaptation of the neurons is needed due to changes in the input probability distribution of the given data set. Received: 23 November 2000, Received in revised form: 01 June 2001, Accepted: 23 July 2001

20.
This paper presents a new three-stage verification system which is based on three types of features: global features; local features of the corner points; and function features that contain information about each point of the signature. The first verification stage implements a parameter-based method, in which the Mahalanobis distance is used as a dissimilarity measure between signatures. The second verification stage involves corner extraction and corner matching; it also performs signature segmentation. The third verification stage implements a function-based method, based on an elastic matching algorithm that establishes a point-to-point correspondence between the compared signatures. By combining the three different types of verification, a high security level can be reached. According to our experiments, the false rejection and false acceptance rates are, respectively, 5.8% and 0%. Received: 12 February 2001, Received in revised form: 24 May 2001, Accepted: 03 July 2001
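The first-stage dissimilarity measure can be sketched as follows; the two "global features" and all reference values are hypothetical, and only the Mahalanobis-distance step of the first stage is shown:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Dissimilarity between a questioned signature's global-feature
    vector and the reference set's mean, scaled by feature covariance."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Reference signatures: rows = signatures, columns = global features
# (e.g. total stroke length and width/height ratio -- hypothetical).
refs = np.array([[10.2, 3.1],
                 [ 9.8, 2.9],
                 [10.1, 3.0],
                 [ 9.9, 3.2],
                 [10.0, 2.8]])
mu = refs.mean(axis=0)
cov = np.cov(refs, rowvar=False)    # sample covariance of the features

genuine = mahalanobis([10.1, 3.0], mu, cov)   # close to the references
forgery = mahalanobis([12.0, 4.5], mu, cov)   # far outside their spread
```

Scaling by the covariance means a deviation counts as suspicious relative to how much that feature naturally varies across the reference signatures, rather than in absolute units.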


Copyright©北京勤云科技发展有限公司  京ICP备09084417号