Similar Literature: 20 matching records found.
1.
Clustering is the task of classifying patterns or observations into clusters or groups. Clustering in high-dimensional feature spaces presents several complications: the data shape is typically unknown and non-Gaussian, following different distributions; the number of clusters is unknown in the unsupervised setting; and noisy, redundant, or uninformative features are present, which normally compromise modeling capability and speed. High-dimensional data clustering has therefore been a subject of extensive research in data mining, pattern recognition, image processing, computer vision, and other areas for several decades. However, most existing research tackles one or two of these problems at a time, which is unrealistic because the problems are connected and should be tackled simultaneously. In this paper, we propose two novel inference frameworks for unsupervised non-Gaussian feature selection in the context of finite asymmetric generalized Gaussian (AGG) mixture-based clustering. The AGG distribution is chosen mainly for its ability not only to approximate a large class of statistical distributions (e.g., impulsive, Laplacian, Gaussian, and uniform) but also to model asymmetry. The two frameworks perform model parameter estimation and model complexity determination (i.e., both model and feature selection) simultaneously, in the same step: the first incorporates a minimum message length (MML) penalty in the model learning step, while the second fades out redundant densities in the mixture using the rival penalized EM (RPEM) algorithm. Furthermore, both algorithms tackle the problem of noisy and uninformative features by determining a set of relevant features for each data cluster.
The efficiency of the proposed algorithms is validated by applying them to challenging real-world problems, namely action and facial expression recognition.
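As a loose illustration of the component-fading idea behind the two frameworks (not the authors' AGG implementation), the sketch below runs EM on a 1-D Gaussian mixture that is deliberately over-provisioned with components and annihilates any component whose mixing weight falls below a fixed threshold; all initial values and the 0.02 cutoff are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated 1-D clusters; we deliberately start with too many components.
data = np.concatenate([rng.normal(-4, 1, 300), rng.normal(4, 1, 300)])

# Over-provisioned initialization: five components, several of them redundant.
means = np.array([-4.0, -3.5, 0.0, 4.0, 100.0])
sigmas = np.ones(5)
weights = np.full(5, 0.2)

for _ in range(100):
    # E-step: responsibility of every component for every point.
    dens = np.exp(-0.5 * ((data[:, None] - means) / sigmas) ** 2) \
        / (sigmas * np.sqrt(2 * np.pi))
    resp = weights * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and scales.
    nk = resp.sum(axis=0) + 1e-12
    weights = nk / len(data)
    means = (resp * data[:, None]).sum(axis=0) / nk
    sigmas = np.sqrt((resp * (data[:, None] - means) ** 2).sum(axis=0) / nk) + 1e-3
    # Fade-out step: annihilate components whose weight has become negligible.
    keep = weights > 0.02
    means, sigmas, weights = means[keep], sigmas[keep], weights[keep]
    weights /= weights.sum()

n_components = len(weights)
```

With this initialization the far-off and central components receive almost no responsibility, so their weights fade below the cutoff and the mixture collapses toward the two real clusters; the MML- and RPEM-based frameworks in the paper drive this fading with principled penalties rather than a fixed threshold.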

2.
Robustness of face recognition based on the Gabor wavelet transform and multiple feature vectors
Peng Hui. Computer Science (《计算机科学》), 2014, 41(2): 308-311, 316. (Cited by 1)
Traditional Gabor wavelet-based face recognition techniques represent curve singularities poorly and have difficulty recognizing face information that includes expressions. To address this problem, a face recognition algorithm combining the Gabor wavelet transform with multiple feature vectors is proposed. The algorithm first exploits the frequency and orientation selectivity of the Gabor wavelet transform to extract multi-scale, multi-orientation Gabor features of the face, which are assembled into a joint sparse model. From this model, the common features and the expression features of the Gabor responses at each orientation and scale are computed, and these two feature vectors allow the feature vector of a test image to be reconstructed accurately. Simulation results show that the proposed method effectively raises the correct matching rate for expressive face images and improves recognition performance.
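As a generic illustration of the Gabor feature-extraction stage only (the joint sparse model and the common/expression decomposition are not reproduced), the sketch below builds a 5-scale, 8-orientation bank of real Gabor kernels and pools the filter responses of an image into a single feature vector; the kernel size, wavelengths, and bandwidth factor are arbitrary choices:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even-symmetric) Gabor kernel: a cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    k = envelope * carrier
    return k - k.mean()  # remove the DC response

# A 5-scale x 8-orientation bank, as in typical face-Gabor pipelines.
bank = [gabor_kernel(31, wl, np.pi * o / 8, 0.56 * wl)
        for wl in (4, 6, 8, 12, 16) for o in range(8)]

# Feature vector: mean absolute filter response per (scale, orientation),
# computed here with FFT-based (circular) convolution on a synthetic image.
rng = np.random.default_rng(1)
image = rng.random((64, 64))
feats = []
for k in bank:
    resp = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k, s=image.shape)).real
    feats.append(np.abs(resp).mean())
features = np.array(feats)
```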

3.
Emotion is an important driver of human decision-making and communication. With the recent rise of human-computer interaction, affective computing has become a trending research topic, aiming to develop computational systems that can understand human emotions and respond to them. Previous reviews of machine-enabled automated visual emotion recognition neglect important methodological aspects, including emotion models and hardware usage; a systematic review was therefore conducted to fill these gaps. 467 relevant papers were initially found and examined; after a screening process with specific inclusion and exclusion criteria, 30 papers were selected. Methodological aspects of the selected studies, including emotion models, devices, architectures, and classification techniques, were analyzed, and the most popular techniques and current trends in visual emotion recognition were identified. This review not only offers a comprehensive and up-to-date overview of the topic but also provides researchers with insights into the emotion models employed, the devices used, and the classification techniques applied in automated visual emotion recognition. By identifying current trends, such as the increased use of deep learning algorithms and the need for further study of body gestures, this review advocates the advantages of implementing emotion recognition with visual data and builds a solid foundation for applying the relevant techniques in different fields.

4.
The recognition of facial gestures and expressions in image sequences is an important and challenging problem. Most existing methods adopt the following paradigm: first, facial actions/features are retrieved from the images; then the facial expression is recognized based on the retrieved temporal parameters. In contrast to this mainstream approach, this paper introduces a new approach allowing the simultaneous retrieval of facial actions and expression using a particle filter adopting multi-class dynamics that are conditioned on the expression. For each frame in the video sequence, our approach is split into two consecutive stages. In the first stage, the 3D head pose is retrieved using a deterministic registration technique based on Online Appearance Models. In the second stage, the facial actions and the facial expression are simultaneously retrieved using a stochastic framework based on second-order Markov chains. The proposed fast scheme is as robust as, or more robust than, existing ones in a number of respects. We describe extensive experiments and provide performance evaluations to show the feasibility and robustness of the proposed approach.

5.
Although many variants of local binary patterns (LBP) are widely used for face analysis because of their satisfactory classification performance, they have not yet been proven compact. We propose an effective code selection method that obtains a compact LBP (CLBP) by maximizing the mutual information (MMI) between features and class labels. The derived CLBP is effective because it provides better classification performance with a smaller number of codes. We demonstrate the effectiveness of the proposed CLBP in several face recognition and facial expression recognition experiments. Our results show that the CLBP outperforms other LBP variants such as LBP, ULBP, and MCT, both in requiring fewer codes and in achieving better recognition performance.
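The core selection criterion can be sketched on synthetic data: score each candidate binary code by its mutual information with the class labels and keep the top-scoring ones. This is a toy greedy version, not the paper's full MMI procedure, and all the data below is synthetic:

```python
import numpy as np

def mutual_information(x, y):
    """MI (in nats) between two discrete variables given as 1-D integer arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, 500)
# 10 candidate binary "codes": code 0 tracks the label closely, the rest are noise.
codes = rng.integers(0, 2, (500, 10))
codes[:, 0] = labels ^ (rng.random(500) < 0.05)  # informative code with 5% flips

scores = np.array([mutual_information(codes[:, j], labels) for j in range(10)])
selected = np.argsort(scores)[::-1][:3]  # keep the 3 highest-MI codes
```

The informative code carries far more mutual information than the noise codes, so a greedy top-k pass keeps it; the paper's method applies this principle to the full LBP code set.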

6.
P2P spatial data service scheduling based on a balanced interest tree. (Cited by 1)
Building a spatial information grid requires solving the problem of transferring massive volumes of geospatial data. After analysing the characteristics of spatial data services, this paper designs a multi-level grid index for spatial data and uses P2P techniques to build a spatial data service network model based on a balanced interest tree. The algorithm organizes the peers requesting spatial data services by peer interest region and dynamically arranges the routing relations among peers into a new topology: the balanced interest tree. It dynamically maintains the popularity of each data block in a grid popularity table, through which the location of a grid data block in the P2P network can be discovered quickly and the block downloaded, thereby reducing the load on spatial data servers and improving service efficiency.

7.
In this paper, a novel learning methodology for face recognition, the LearnIng From Testing data (LIFT) framework, is proposed. Because many face recognition problems feature inadequate training examples alongside a vast number of available testing examples, we aim to exploit useful information in the testing data to facilitate learning. The one-against-all technique is integrated into the learning system to recover the labels of the testing data, and the training population is then expanded with the recovered data. Neural networks and support vector machines are used as the base learning models. Furthermore, we integrate two other transductive methods, the consistency method and the LRGA method, into the LIFT framework. Experimental results and hypothesis tests on five popular face benchmarks illustrate the effectiveness of the proposed framework.
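A minimal self-training sketch of the label-recovery idea, with a nearest-centroid classifier standing in for the paper's neural-network and SVM base learners; the data, the median-margin confidence rule, and the two-class setup are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
# Two Gaussian classes: only 5 labelled examples each, plus a large unlabelled test pool.
train_x = np.vstack([rng.normal(-2, 1, (5, 2)), rng.normal(2, 1, (5, 2))])
train_y = np.array([0] * 5 + [1] * 5)
test_x = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])

def centroids(x, y):
    return np.vstack([x[y == c].mean(axis=0) for c in (0, 1)])

# Round 1: classify the test pool with the scarce training data alone.
c = centroids(train_x, train_y)
d = np.linalg.norm(test_x[:, None, :] - c, axis=2)
pseudo = d.argmin(axis=1)
margin = np.abs(d[:, 0] - d[:, 1])

# LIFT-style step: absorb confidently pseudo-labelled test points into the training pool.
confident = margin > np.median(margin)
aug_x = np.vstack([train_x, test_x[confident]])
aug_y = np.concatenate([train_y, pseudo[confident]])

# Round 2: retrain (recompute centroids) on the expanded population.
c2 = centroids(aug_x, aug_y)
pred = np.linalg.norm(test_x[:, None, :] - c2, axis=2).argmin(axis=1)
truth = np.array([0] * 200 + [1] * 200)
accuracy = (pred == truth).mean()
```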

8.
The separation of style and content is an essential element, and a fundamental mystery, of visual perception, and the problem appears extensively in different computer vision applications. The problem we address in this paper is the separation of style and content when the content lies on a low-dimensional nonlinear manifold representing a dynamic object. We show that such a setting appears in many human motion analysis problems, and we introduce a framework for learning parameterizations of style and content in these settings. Given a set of topologically equivalent manifolds, the Homeomorphic Manifold Analysis (HMA) framework models the variation in their geometries in the space of functions that map between a topologically equivalent common representation and each of them. The framework is based on decomposing the style parameters in the space of nonlinear functions that map between a unified embedded representation of the content manifold and the style-dependent visual observations. We show the application of the framework to the synthesis, recognition, and tracking of certain human motions that follow this setting, such as gait and facial expressions.

9.
10.
Urban areas of interest (AOIs) are areas within the urban environment that feature high levels of public interaction, and understanding them holds utility for a wide range of urban planning applications. Within this context, our study proposes a novel space-time analytical framework and applies it to taxi GPS data covering Manhattan, NYC, identifying and describing 31 road-constrained AOIs in terms of their spatiotemporal distribution and contextual characteristics. Our analysis captures many important locations, including but not limited to primary transit hubs, famous cultural venues, open spaces, other tourist attractions, prominent landmarks, and commercial centres. Moreover, we analyse the dynamics and contexts of these AOIs through further clustering analysis, formulating five temporal clusters that delineate the dynamic evolution of the AOIs and four contextual clusters that represent their salient contextual characteristics.

11.
Exposure characterization is a central step in Ecological Risk Assessment (ERA). Exposure level is a function of the spatial factors linking contaminants and receptors, yet exposure estimation models are traditionally non-spatial. Non-spatial models are prone to the adverse effects of spatial dependence: inflated variance and biased inferential procedures, which can result in unreliable and potentially misleading models. These negative effects can be amended by spatial regression modelling: we propose an integration of geostatistics and multivariate spatial regression to compute efficient spatial regression parameters and to characterize exposure at under-sampled locations. The method is applied to estimate bioaccumulation models of organic and inorganic micropollutants in the tissues of the clam Tapes philippinarum. The models link the bioaccumulation of micropollutants in clam tissue to a set of environmental variables sampled in the lagoon sediment. The Venetian lagoon case study exemplifies the problem of multiple variables sampled at different locations or spatial units: we propose and test an effective solution to this common and serious problem in environmental as well as socio-economic multivariate analysis.

12.
To evaluate how well the Shadow-Eliminated Vegetation Index (SEVI) removes topographic shadow from commonly used decametre-resolution remote-sensing imagery, four types of imagery acquired on 24-25 January 2019 were used: Sentinel S2B (10 m), GF-1 (16 m), Landsat 8 OLI (30 m), and GF-4 (50 m) …

13.
This paper presents a new hybrid (graph + rule based) approach for recognizing interacting features from B-Rep CAD models of prismatic machined parts. The developed algorithm considers variable topology features and handles both adjacent and volumetric feature interactions, providing a single interpretation for the latter. The input CAD part model in B-Rep format is preprocessed to create adjacency graphs for the faces and features of the associated topological entities and to compute their attributes. The developed FR system initially recognizes all varieties of simple and stepped holes with flat and conical bottoms from the feature graphs. A new concept of Base Explicit Feature Graphs and No-base Explicit Feature Graphs is proposed, which delineates between features having a planar base face (pockets, blind slots, etc.) and those without one (passages, 3D features, conical-bottom features, etc.). Based on the structure of the explicit feature graphs, geometric reasoning rules are formulated to recognize the interacting feature types. The extracted data is post-processed to compute the feature attributes and their parent-child relationships, which are written into a STEP-like native feature file format. The FR system was extensively tested with several standard benchmark components and was found to be robust and consistent. The extracted feature file can be used for integration with various downstream applications such as CAPP.
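The base-explicit versus no-base distinction can be illustrated with a toy concave face-adjacency graph; the face names, edges, and base-face tags below are invented for illustration and are not derived from an actual B-Rep file:

```python
from collections import defaultdict

# Concave face-adjacency edges of a toy part: one blind slot (has a planar
# base face) and one through passage (no base face).
concave_edges = [
    ("slot_left", "slot_bottom"), ("slot_bottom", "slot_right"),
    ("pass_left", "pass_top"), ("pass_top", "pass_right"),
]
base_faces = {"slot_bottom"}  # planar faces normal to the tool-approach direction

adj = defaultdict(set)
for a, b in concave_edges:
    adj[a].add(b)
    adj[b].add(a)

def concave_components(adj):
    """Connected components of the concave-edge subgraph: feature candidates."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            f = stack.pop()
            if f in comp:
                continue
            comp.add(f)
            stack.extend(adj[f] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Components containing a planar base face are base-explicit; the rest are no-base.
features = [("base_explicit" if comp & base_faces else "no_base", comp)
            for comp in concave_components(adj)]
```

Real systems derive concavity from face normals at shared edges and apply geometric reasoning rules on top of this partition; the sketch only shows the graph bookkeeping.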

14.
15.
Ensuring interoperability between WebGIS applications is essential for maximizing access to data, data sharing, and data manipulation. Interoperability is maximized through the adoption of best practices, the use of open standards, and the utilization of spatial data infrastructure (SDI). While many interoperability challenges, such as infrastructure, data exchange, and file formats, are common to all applications, some regions, like the Arctic, present specific challenges, including the need to present data in one or more polar projections. This paper describes the Arctic Research Mapping Application (ARMAP) suite of online interactive maps, web services, and virtual globes (http://armap.org/) and several of the interoperability challenges and solutions encountered in its development to date. ARMAP is a unique science and logistics tool supporting United States and international Arctic science by providing users with the ability to access, query, and browse information and data. Data access services include a text-based search utility, an Internet Map Server client (ArcIMS), a lightweight Flex client, ArcGIS Explorer and Google Earth virtual globes, and Open Geospatial Consortium (OGC) compliant web services such as Web Map Service (WMS) and Web Feature Service (WFS). Through the ARMAP suite, users can view a variety of Arctic map layers and explore pertinent information about United States Arctic research efforts. The Arctic Research Logistics Support Service (ARLSS) database is the informational underpinning of ARMAP. Avoiding duplication of effort has been a key priority in the development of the ARMAP applications. The ARMAP suite incorporates best practices that facilitate interoperability, such as Federal Geographic Data Committee (FGDC) metadata standards, web services for embedding external data and serving framework layers, and OGC-compliant web services. Many of the features and capabilities of ARMAP are expected to greatly enhance the development of an Arctic SDI.

16.
Integrated Water Management at the basin level is a concept that was introduced in the 1990s and is a goal in every national and local water management plan. Unfortunately, this goal has not been achieved, mainly due to a lack of both tools and data management: data must be gathered from different sources and in different formats. Compounding the problem, in some regions different water agencies are in charge of water supply, as is the case in the Basin of Mexico, in which Mexico City and its Metropolitan Zone are located. The inhabitants of the Basin of Mexico, which comprises five different political entities with different agencies in charge of water supply, rely on the Basin's aquifer system as their main water supply source. However, no regional hydrogeological database exists for this area, which is why the combined use of a Relational Database Management System (RDBMS) and a Geographic Information System (GIS) is proposed to improve regional data management in the study area. The data stored in this new database, the Basin of Mexico Hydrogeological Database (BMHDB), comprise climatological, borehole, and run-off variables, readily providing information for the development of hydrogeological models. A simple example shows how geostatistical analysis can be performed using data taken directly from the BMHDB. The structure of the BMHDB allows easy maintenance and updating, making it a valuable tool for the development of regional studies.
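A minimal sketch of the relational layer of such a database, using an in-memory SQLite table with an invented borehole schema (the real BMHDB schema is not reproduced here); a GIS front end would issue spatial queries like the bounding-box filter shown:

```python
import sqlite3

# Illustrative borehole table: id, planar coordinates, depth, and hydraulic head.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE borehole (
    id INTEGER PRIMARY KEY, x REAL, y REAL, depth_m REAL, head_m REAL)""")
con.executemany("INSERT INTO borehole VALUES (?,?,?,?,?)", [
    (1, 482100.0, 2148500.0, 120.0, 2240.5),
    (2, 483750.0, 2150200.0, 95.0, 2238.9),
    (3, 485300.0, 2149100.0, 150.0, 2242.1),
])

# Bounding-box query: the kind of spatial filter a GIS layer would run before
# feeding the resulting heads into geostatistical interpolation.
rows = con.execute(
    "SELECT id, head_m FROM borehole WHERE x BETWEEN 482000 AND 484000 ORDER BY id"
).fetchall()
```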

17.
18.
Durable products and their components are increasingly being equipped with one of several forms of automatic identification technology, such as radio frequency identification (RFID). This technology enables the collection, storage, and transmission of product information throughout the product's life cycle. Ideally, all available relevant information would be stored on RFID tags, with new information added to the tags as it becomes available. However, because of the finite memory capacity of RFID tags and the magnitude of potential lifecycle data, users need to be selective in data allocation. In this research, the data allocation problem is modeled as a variant of the nonlinear knapsack problem. The objective is to determine the items to place on the tag such that the value of the "unexplained" data left off the tag is minimized. A binary-encoded genetic algorithm is proposed, and an extensive computational study illustrates the effectiveness of this approach. Additionally, we discuss some properties of the optimal solution that can be effective in solving more difficult problem instances.
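A compact sketch of a binary-encoded GA on a toy allocation instance. The instance data, population size, and operator rates are invented, and a plain linear knapsack objective stands in for the paper's nonlinear formulation; maximizing the value stored on the tag is equivalent to minimizing the value left off it:

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy instance: 20 lifecycle data items competing for a 50-unit tag memory.
values = rng.integers(5, 30, 20).astype(float)
sizes = rng.integers(1, 15, 20).astype(float)
capacity = 50.0

def fitness(pop):
    """Value stored on the tag; chromosomes that overflow the tag score 0."""
    val = pop @ values
    val[pop @ sizes > capacity] = 0.0
    return val

# Sparse random start so most initial chromosomes fit on the tag.
pop = (rng.random((60, 20)) < 0.15).astype(int)
for _ in range(150):
    fit = fitness(pop)
    # Tournament selection: pairwise duels between two shuffled copies.
    a, b = rng.permutation(60), rng.permutation(60)
    winners = np.where((fit[a] >= fit[b])[:, None], pop[a], pop[b])
    # Single-point crossover on consecutive winner pairs.
    children = winners.copy()
    for i in range(0, 60, 2):
        p = rng.integers(1, 20)
        children[i, p:], children[i + 1, p:] = winners[i + 1, p:], winners[i, p:]
    # Bit-flip mutation, then elitism: the incumbent best survives unchanged.
    children ^= (rng.random((60, 20)) < 0.02).astype(int)
    children[0] = pop[fit.argmax()]
    pop = children

fit = fitness(pop)
best = pop[fit.argmax()]
best_value = fit.max()
unstored_value = values.sum() - best_value  # value of data left off the tag
```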

19.
Various benthic mapping methods exist, but financing and technical capacity limit the choice of technology available to developing states for natural resource management. We therefore assessed the efficacy of using a single-beam echosounder (SBES), satellite images (GeoEye-1 and WorldView-2), different image-classification approaches (a pixel-based maximum likelihood classifier (MLC) and object-based image analysis (OBIA)), and hydroacoustic classification and interpolation techniques to map nearshore benthic features at the Bluefields Bay marine protected area in western Jamaica (13.82 km2 in size). A map with three benthic classes (submerged aquatic vegetation (SAV), bare substrate, and coral reef) produced from a radiometrically corrected, deglinted, and water-column-corrected WorldView-2 image had a marginally higher accuracy (3%) than a map classified from a similarly corrected GeoEye-1 image. However, only one of the two extra WorldView-2 image bands (coastal) was used, because the yellow band was completely attenuated at depths ≥3.7 m. The coral reef class was completely misclassified by the MLC and had to be contextually edited. The contextually edited MLC map had a higher overall accuracy (OA) than the OBIA map (86.7% versus 80.4%) and than maps that were not contextually edited, although the OBIA map had a higher OA than an unedited MLC map. Maps produced from the images also had higher accuracy than the SAV map created from the acoustic data (OAs >80% and kappa >0.67, versus 76.6% and kappa = 0.32). SAV classification was comparable among the classified SBES SAV data points and all the final maps. The total area classified as SAV was marginally larger for the satellite maps; however, the total area classified as bare substrate using the images was twice as large.
A substrate map with three classes (silt, sand, and coral/hard bottom) produced from the SBES data using a random forest classifier and a Markov chain interpolator had a higher accuracy than a substrate map produced using a fractal dimension classifier and an indicator krig, the default choice (72.4% versus 53.5%). The coral reef class from the SBES, OBIA, and contextually edited maps had comparable accuracies but covered a much smaller area in the SBES maps because data points were lost during the interpolation process. The use of images was limited by turbidity levels and cloud cover, and it yielded lower benthic detail. Despite these limitations, satellite image classification was the most efficacious method. If greater benthic detail is required, the SBES is more suitable, or more effort is required during image classification. The SBES can also be operated in areas with turbid waters and at greater depths, although it could not be used in very shallow areas, and the processing and interpolation of data points can cause a loss of resolution and introduce spatial uncertainty.

20.
Off-line handwritten oriental character recognition is a difficult task because of the large number of categories and the variety of strokes. These oriental characters are made up of components known as radicals, which are often written with distorted proportions and sizes. All these factors lead to a difficult recognition problem which, unfortunately, cannot be solved by a direct classification approach such as a neural network classifier with a preprocessing module. This paper proposes several novel preprocessing approaches and a synergy of classifiers to achieve good performance. Novel classification approaches comprising rough and coarse classification modules are proposed which, when combined appropriately, produce a high-performance recognition system capable of highly accurate classification in off-line oriental character recognition. The system achieves a recognition accuracy of 97%, rising to 99% for top-5 candidate selection.
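The rough-then-fine synergy can be sketched generically: a cheap rough stage shortlists candidate classes by centroid distance, and a finer stage runs nearest-neighbour matching only within the shortlist. Everything below (class count, feature dimension, data) is synthetic and only illustrates the two-stage structure, not the paper's classifiers:

```python
import numpy as np

rng = np.random.default_rng(5)
# 50 character classes, 20 training samples each, in a 16-D feature space.
n_classes, dim = 50, 16
protos = rng.normal(0, 5, (n_classes, dim))                     # class prototypes
train = protos[:, None, :] + rng.normal(0, 1, (n_classes, 20, dim))

def classify(x, shortlist_k=5):
    # Rough stage: shortlist the k classes whose centroid is nearest to x.
    centroids = train.mean(axis=1)
    rough = np.linalg.norm(centroids - x, axis=1).argsort()[:shortlist_k]
    # Fine stage: nearest individual training sample among shortlisted classes only.
    d = np.linalg.norm(train[rough] - x, axis=2)                # (k, 20)
    return rough[d.min(axis=1).argmin()]

# Evaluate on one fresh noisy sample per class.
correct = sum(classify(protos[c] + rng.normal(0, 1, dim)) == c
              for c in range(n_classes))
accuracy = correct / n_classes
```

The rough stage cuts the fine-stage comparisons from 50 x 20 to 5 x 20 per query, which is the motivation for coarse classification when the category set is large.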
