Similar documents
Found 20 similar documents (search time: 31 ms)
1.
2.
The study of face alignment has been an area of intense research in computer vision, with its achievements widely used in computer graphics applications. The performance of various face alignment methods often varies from image to image, or is somewhat random, depending on each method's underlying strategy. This study aims to develop a method that can select an input image with good face alignment results from the many results produced by a single method or by multiple ones. The task is challenging because different face alignment results must be evaluated without any ground truth. This study addresses the problem by designing a feasible feature extraction scheme to measure the quality of face alignment results. The feature is then used in various machine learning algorithms to rank different face alignment results. Our experiments show that our method is promising for ranking face alignment results and is able to pick good ones, which can enhance the overall performance of a face alignment method with a random strategy. We demonstrate the usefulness of our ranking-enhanced face alignment algorithm in two practical applications: face cartoon stylization and digital face makeup.

3.
Face sketch synthesis has many practical applications, such as law enforcement and digital entertainment. Existing face sketch synthesis methods focus on neighbor selection and/or weight reconstruction, but they do not explicitly take "interpretation through synthesis" into consideration. The active appearance model (AAM) is one such "interpretation through synthesis" approach. In this paper, we introduce the AAM to "explain" face photos by generating synthetic images that are as similar to them as possible. The AAM then provides a compact set of parameters that are useful for face sketch synthesis. Extensive experiments on public face sketch databases demonstrate the superiority of the proposed method over state-of-the-art methods.

4.
Feature extraction from images, which are typically of high dimensionality, is crucial to recognition performance. To exploit discriminative information while suppressing the intra-class variations caused by variable illumination and viewing conditions, we propose a factor analysis framework that separates "content" from "style": identifying a familiar face seen under unfamiliar viewing conditions, classifying familiar poses presented in an unfamiliar face, and estimating age across unfamiliar faces. The framework applies efficient algorithms derived from objective factor-separating functions and space-mapping functions, which produce sufficiently expressive representations for feature extraction and dimensionality reduction. We report promising results on three different tasks in high-dimensional image perceptual domains: face identification on two benchmark face databases, facial pose classification on a benchmark facial pose database, and extrapolation of age to unseen facial images. Experimental results show that our approach produces higher classification performance than the classical LDA, WLDA, LPP, MFA, and DLA algorithms.

5.
Face Alignment by Explicit Shape Regression
We present a very efficient, highly accurate, "Explicit Shape Regression" approach for face alignment. Unlike previous regression-based approaches, we directly learn a vectorial regression function to infer the whole facial shape (a set of facial landmarks) from the image and explicitly minimize the alignment errors over the training data. The inherent shape constraint is naturally encoded into the regressor in a cascaded learning framework and applied from coarse to fine during testing, without using a fixed parametric shape model as in most previous methods. To make the regression more effective and efficient, we design a two-level boosted regression, shape-indexed features, and a correlation-based feature selection method. This combination enables us to learn accurate models from large training sets in a short time (20 min for 2,000 training images) and to run the regression extremely fast at test time (15 ms for an 87-landmark shape). Experiments on challenging data show that our approach significantly outperforms the state-of-the-art in terms of both accuracy and efficiency.
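The cascaded update described above — each stage regressing a shape increment from features indexed by the current shape estimate — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pixel-sampling features and plain linear stage regressors here stand in for the shape-indexed pixel-difference features and two-level boosted regressors of the actual method.

```python
import numpy as np

def shape_indexed_features(image, shape):
    """Toy stand-in for shape-indexed features: sample pixel
    intensities at the current landmark estimates (the paper uses
    pixel-difference features indexed relative to the shape)."""
    h, w = image.shape
    pts = np.clip(shape.reshape(-1, 2).astype(int), 0, [w - 1, h - 1])
    return image[pts[:, 1], pts[:, 0]].astype(float)

def cascaded_regression(image, initial_shape, regressors):
    """Explicit shape regression at test time: each stage adds a
    shape increment predicted from features indexed by the current
    shape, so the update is coarse-to-fine and model-free."""
    shape = initial_shape.copy()
    for R in regressors:                      # one linear map per stage
        phi = shape_indexed_features(image, shape)
        shape = shape + R @ phi               # additive shape update
    return shape
```

With all-zero stage regressors the shape is returned unchanged, which makes the additive structure of the cascade easy to verify.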

6.
《Information Fusion》2005,6(1):49-62
The diversity of an ensemble of classifiers can be calculated in a variety of ways. Here, a diversity metric and a means for altering the diversity of an ensemble, called "thinning", are introduced. We evaluate thinning algorithms created by several techniques on 22 publicly available datasets. Compared to other methods, our percentage-correct diversity measure shows the greatest correlation between the increase in voted ensemble accuracy and the diversity value. The analysis of different ensemble creation methods also indicates that they generate different levels of diversity. Finally, the proposed thinning methods show that ensembles can be made smaller without loss of accuracy.
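A minimal sketch of the "thinning" idea — shrinking a voted ensemble while accuracy holds — might look like the following. The greedy, accuracy-guided removal rule here is an assumption for illustration; the paper introduces diversity-guided thinning variants and a percentage-correct diversity measure.

```python
import numpy as np

def voted_accuracy(preds, y):
    """Majority-vote accuracy of an ensemble given each member's
    per-sample predictions (rows = classifiers, columns = samples)."""
    votes = np.apply_along_axis(
        lambda col: np.bincount(col).argmax(), 0, preds)
    return (votes == y).mean()

def thin_ensemble(preds, y, min_size=2):
    """Greedy thinning sketch: repeatedly drop a member whose removal
    does not reduce majority-vote accuracy, until no such member
    exists or the ensemble reaches a minimum size."""
    keep = list(range(len(preds)))
    base = voted_accuracy(preds[keep], y)
    improved = True
    while improved and len(keep) > min_size:
        improved = False
        for i in list(keep):
            trial = [k for k in keep if k != i]
            if voted_accuracy(preds[trial], y) >= base:
                keep = trial                       # member is redundant
                base = voted_accuracy(preds[keep], y)
                improved = True
                break
    return keep
```

In the toy case below, one of two identical correct classifiers is redundant, so thinning removes the third (adversarial) member's dependence and keeps a smaller ensemble at full accuracy.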

7.
Flow-level information is important for many applications in network measurement and analysis. In this work, we tackle the "Top Spreaders" and "Top Scanners" problems, where hosts that are spreading the largest numbers of flows, especially small flows, must be efficiently and accurately identified. Identifying these top users can be very helpful in network management, traffic engineering, application behavior analysis, and anomaly detection. We propose novel streaming algorithms and a "Filter-Tracker-Digester" framework to catch the top spreaders and scanners online. Our framework combines sampling and streaming algorithms, as well as deterministic and randomized algorithms, in such a way that they effectively help each other to improve accuracy while reducing memory usage and processing time. To our knowledge, we are the first to tackle the "Top Scanners" problem in a streaming fashion. We address several challenges, namely traffic scale, skewness, speed, memory usage, and result accuracy. The performance bounds of our algorithms are derived analytically and are also evaluated on both real and synthetic traces, where we show that our algorithm achieves accuracy and speed at least an order of magnitude better than existing approaches.

8.
To address the low localization accuracy of facial landmarks under large face-pose rotations, a multi-view facial landmark localization algorithm is proposed. It adopts a Cascaded Pose Regression (CPR) landmark localization model that combines random-forest local learning with global linear regression, and builds a separate model for each face-pose view, replacing a single model with multiple models to improve landmark localization accuracy. First, CPR models are built for faces under different views; then a Multi-View Generative Model (MVGM) estimates the pose of the input face image; finally, the model corresponding to the estimated pose is selected to localize the landmarks precisely. Simulation results show that, compared with several existing facial landmark localization algorithms, the proposed algorithm achieves more accurate localization.

9.
The process of manually generating precise segmentations of brain tumors from magnetic resonance images (MRI) is time-consuming and error-prone. We present a new algorithm, Potential Field Segmentation (PFS), and propose the use of ensemble approaches that combine the results generated by PFS and other methods to achieve a fused segmentation. For the PFS method, we build on our recently proposed clustering algorithm, Potential Field Clustering, which is based on an analogy with the concept of potential field in Physics. We view the intensity of a pixel in an MRI as a “mass” that creates a potential field. Specifically, for each pixel in the MRI, the potential field is computed and, if smaller than an adaptive potential threshold, the pixel is associated with the tumor region. This “small potential” segmentation criterion is intuitively valid because tumor pixels have larger “mass” and thus the potential of surrounding regions is also much larger than in other regions of smaller or no “mass”. We evaluate the performance of the different methods, including the ensemble approaches, on the publicly available Brain Tumor Image Segmentation (BRATS) MRI benchmark database.
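The potential-field computation outlined above can be sketched in a toy form: treat each pixel's intensity as a mass, accumulate a 1/r potential at every pixel from all the others, and apply an adaptive threshold. The O(n²) all-pairs loop, the 1/r kernel, and the mean-based threshold are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def potential_field(img):
    """Potential at each pixel from every other pixel, treating
    intensity as 'mass' (toy all-pairs version; distances in pixels)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    mass = img.ravel().astype(float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)              # exclude self-interaction
    pot = (mass[None, :] / d).sum(axis=1)    # 1/r potential from all masses
    return pot.reshape(h, w)

def pfs_segment(img, frac=0.5):
    """'Small potential' criterion from the abstract: label a pixel as
    tumor when its potential falls below an adaptive threshold (here a
    fraction of the mean potential -- an assumed threshold rule)."""
    pot = potential_field(img)
    return pot < frac * pot.mean()
```

On a uniform image the potential is highest in the interior, where every other pixel is nearby, and lowest at the corners.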

10.
Shape from shading (SfS) and stereo are two fundamentally different strategies for image-based 3-D reconstruction. While approaches for SfS infer the depth solely from pixel intensities, methods for stereo are based on a matching process that establishes correspondences across images. This difference in approaching the reconstruction problem yields complementary advantages that are worth combining. So far, however, most “joint” approaches are based on an initial stereo mesh that is subsequently refined using shading information. In this paper we follow a completely different approach. We propose a joint variational method that combines both cues within a single minimisation framework. To this end, we fuse a Lambertian SfS approach with a robust stereo model and supplement the resulting energy functional with a detail-preserving anisotropic second-order smoothness term. Moreover, we extend the resulting model in such a way that it jointly estimates depth, albedo and illumination. This in turn makes the approach applicable to objects with non-uniform albedo as well as to scenes with unknown illumination. Experiments for synthetic and real-world images demonstrate the benefits of our combined approach: they not only show that our method is capable of generating very detailed reconstructions, but also that joint approaches are feasible in practice.

11.
New inverse kinematic algorithms for generating redundant robot joint trajectories are proposed. The algorithms utilize the kinematic redundancy to improve robot motion performance (in joint space or Cartesian space) as specified by certain objective functions. The algorithms are based on an extension of the existing “joint-space command generator” technique, in which a null-space vector is introduced that optimizes a specific objective function along the joint trajectories. In this article, the algorithms for generating the joint position and velocity (PV) trajectories are extensively developed. The case of joint position, velocity, and acceleration (PVA) generation is also addressed. Application of the algorithms to a four-link revolute planar robot manipulator is demonstrated through simulation. Several motion performance criteria are considered and their results analyzed.
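The null-space mechanism described above — a pseudoinverse term that tracks the Cartesian command plus a projected term that optimizes an objective along the trajectory — is commonly written as q̇ = J⁺ẋ + (I − J⁺J)z. A minimal numerical sketch, where the gain and the objective gradient are assumptions for illustration:

```python
import numpy as np

def redundant_ik_step(J, x_dot, grad_H, k=1.0):
    """One joint-velocity step for a redundant robot: the pseudoinverse
    term tracks the commanded Cartesian velocity, while the null-space
    term descends an objective H(q) without disturbing the end-effector
    (the 'joint-space command generator' idea in its generic form)."""
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ x_dot + null_proj @ (-k * grad_H)
```

Because (I − J⁺J) projects onto the null space of J, the secondary objective never perturbs the task-space velocity: J q̇ = ẋ holds exactly.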

12.
13.
Objective: Face alignment is one of the current research hotspots in computer vision. Its goal is to accurately locate the semantically meaningful facial landmarks in a face image, an important step for many face-related vision tasks such as face recognition and face beautification. Recently, face alignment algorithms based on cascaded regression have reached state-of-the-art levels in both accuracy and speed. Cascaded regression is an iterative updating scheme in which an initial face shape is gradually brought closer to the true shape by multiple linearly combined weak regressors. Most current algorithms, however, focus on improving the learning method or extracting geometrically invariant features to strengthen the weak regressors, while ignoring the quality of the initial shape; this greatly reduces their alignment accuracy in complex scenarios such as exaggerated facial expressions and extreme head poses. We therefore propose, within the existing cascaded regression framework, a multi-pose face alignment algorithm that automatically estimates the initial shape. Method: The algorithm first extracts gradient-difference features based on first-order derivatives of Gaussian filters in the face region and predicts the face shape with a random regression forest; it then uses an independent cascaded regressor for each shape. Results: Experiments verifying the initial shape estimation show that our initialization improves the accuracy of existing cascaded regression algorithms and yields more stable results; the initial shapes produced by our algorithm are close to the actual face shapes, so high accuracy is achieved with only a few initial shapes. On the COFW, HELEN, and 300W face databases, the proposed multi-pose cascaded regression algorithm was compared with existing alignment algorithms, and its alignment error decreased by 29.2%, 13.3%, and 9.2%, respectively. The results show that the algorithm effectively eliminates interference between different face shapes, obtains more accurate alignment in multi-pose scenarios, and runs at real-time speed. Conclusion: The multi-pose face alignment algorithm based on the cascaded regression model outperforms existing algorithms and is more robust to complex face shapes. The proposed initial shape estimation algorithm automatically produces high-quality initial shapes that can be used to improve existing cascaded regression algorithms.

14.
With the rapid development of face recognition algorithms in many application areas, face alignment, as an intermediate step between face detection and face recognition, has received increasing attention. To address in-plane rotation of face images, we propose an automatic face alignment method and recognition framework based on TI-SPCA (Transformation Invariant Symmetrical Principal Components Analysis). Unlike traditional eye-based alignment methods, TI-SPCA obtains a rotation-invariant feature space by minimizing the error between reconstructed images and warped images, achieving fully automatic alignment without human intervention. To compare its performance with that of eye-based alignment methods and demonstrate its advantages, the output images of the two alignment methods are shown visually on the ORL and FERET databases. Furthermore, to verify the effectiveness of the aligned images in recognition algorithms, comparative experiments combining three distance functions and four local operators were conducted. The experimental results demonstrate the effectiveness of the TI-SPCA-based fully automatic alignment method for face recognition.

15.

Recommender systems are tools that support online users by pointing them to potential items of interest in situations of information overload. In recent years, the class of session-based recommendation algorithms received more attention in the research literature. These algorithms base their recommendations solely on the observed interactions with the user in an ongoing session and do not require the existence of long-term preference profiles. Most recently, a number of deep learning-based (“neural”) approaches to session-based recommendations have been proposed. However, previous research indicates that today’s complex neural recommendation methods are not always better than comparably simple algorithms in terms of prediction accuracy. With this work, our goal is to shed light on the state of the art in the area of session-based recommendation and on the progress that is made with neural approaches. For this purpose, we compare twelve algorithmic approaches, among them six recent neural methods, under identical conditions on various datasets. We find that the progress in terms of prediction accuracy that is achieved with neural methods is still limited. In most cases, our experiments show that simple heuristic methods based on nearest-neighbors schemes are preferable over conceptually and computationally more complex methods. Observations from a user study furthermore indicate that recommendations based on heuristic methods were also well accepted by the study participants. To support future progress and reproducibility in this area, we publicly share the session-rec evaluation framework that was used in our research.

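The nearest-neighbors heuristics that the study finds hard to beat can be illustrated with a minimal session-kNN sketch. The Jaccard similarity and popularity-style scoring here are assumptions for illustration; the released session-rec framework implements several refined variants.

```python
from collections import Counter

def session_knn_recommend(current, past_sessions, k=3, n=5):
    """Minimal session-kNN sketch: rank past sessions by item overlap
    (Jaccard) with the ongoing session, then score items from the k
    nearest sessions by how often they occur, excluding items the
    user has already seen in the current session."""
    cur = set(current)
    neighbors = sorted(
        past_sessions,
        key=lambda s: len(cur & set(s)) / (len(cur | set(s)) or 1),
        reverse=True)[:k]
    scores = Counter(item for s in neighbors
                     for item in s if item not in cur)
    return [item for item, _ in scores.most_common(n)]
```

Despite its simplicity, this kind of neighborhood scheme is the baseline family that the paper's experiments found preferable to several neural methods in terms of prediction accuracy.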

16.
The Job Shop Scheduling Problem (JSSP) is known as one of the most difficult scheduling problems. It is an important practical problem in the fields of production management and combinatorial optimization. Since the JSSP is NP-complete, meaning that finding the best schedule is not polynomially bounded, heuristic approaches are often considered. Inspired by the decision-making capability of bee swarms in nature, this paper proposes an effective scheduling method based on the Best-so-far Artificial Bee Colony (Best-so-far ABC) for solving the JSSP. In this method, we bias the solution direction toward the best-so-far solution rather than toward a neighboring solution as proposed in the original ABC method. We also use set theory to describe the mapping of our proposed method to the problem in the combinatorial optimization domain. The performance of the proposed method is then empirically assessed using 62 benchmark problems taken from the Operations Research Library (OR-Library). Solution quality is measured by the "Best", "Average", "Standard Deviation (S.D.)", and "Relative Percent Error (RPE)" of the objective value. The results demonstrate that the proposed method produces higher-quality solutions than the current state-of-the-art heuristic-based algorithms.
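The core modification — biasing the search direction toward the best-so-far solution rather than a random neighbor — can be sketched in a continuous toy form. The paper itself maps the move onto job-shop permutations via set theory; the coefficient `phi` and the greedy acceptance below are assumptions for illustration.

```python
import random

def best_so_far_move(x, best, phi=0.7):
    """Onlooker-bee move biased toward the best-so-far solution
    instead of a random neighbor (continuous toy of the Best-so-far
    ABC update)."""
    return [xi + phi * random.random() * (bi - xi)
            for xi, bi in zip(x, best)]

def abc_minimize(f, pop, iters=50, phi=0.7):
    """Skeleton loop: every food source drifts toward the best-so-far
    solution, and the best is tracked greedily across iterations."""
    best = min(pop, key=f)
    for _ in range(iters):
        pop = [best_so_far_move(x, best, phi) for x in pop]
        cand = min(pop, key=f)
        if f(cand) < f(best):          # greedy acceptance of improvements
            best = cand
    return best
```

Because the best-so-far solution is only replaced by strict improvements, the returned objective value can never be worse than the initial best.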

17.
Outlier detection research has been seeing many new algorithms every year that often appear to be only slightly different from existing methods, along with some experiments that show them to “clearly outperform” the others. However, few approaches come along with a clear analysis of existing methods and a solid theoretical differentiation. Here, we provide a formalized method of analysis to allow for a theoretical comparison and generalization of many existing methods. Our unified view improves understanding of the shared properties and of the differences of outlier detection models. By abstracting the notion of locality from the classic distance-based notion, our framework facilitates the construction of abstract methods for many special data types that are usually handled with specialized algorithms. In particular, spatial neighborhood can be seen as a special case of locality. Here we therefore compare and generalize approaches to spatial outlier detection in a detailed manner. We also discuss temporal data like video streams, or graph data such as community networks. Since we reproduce results of specialized approaches with our general framework, and even improve upon them, our framework provides reasonable baselines to evaluate the true merits of specialized approaches. At the same time, seeing spatial outlier detection as a special case of local outlier detection opens up new potential for analysis and advancement of methods.

18.
Subjective pattern recognition is a class of pattern recognition problems in which we know few, if any, of the strategies our brains employ in making everyday decisions, and have only limited ideas of the standards our brains use in determining equality or inequality among objects. Face recognition is a typical example of such problems. For solving a subjective pattern recognition problem by machine, application accuracy is the standard performance metric for evaluating algorithms. However, we do not actually know the connection between algorithm design and application accuracy in subjective pattern recognition. Consequently, research in this area generally follows a "trial and error" process: try different parameters of an algorithm, try different algorithms, and try different algorithms with different parameters. This phenomenon can be observed clearly in nearly 30 years of face recognition research: although huge advances have been made, no algorithm has ever been shown to be consistently better than most of the algorithms developed earlier; it has even been shown that a naïve algorithm can work, in terms of accuracy, at least no worse than many newly developed ones on several benchmarks. We argue that the primary objective of subjective pattern recognition research should move from application accuracy to theoretical robustness, so that we can evaluate and compare algorithms with few or no "trial and error" steps. In this paper we introduce an analytical model for studying the theoretical stability of multicandidate Electoral College and Direct Popular Vote schemes (also known as the regional and national voting schemes, respectively), expressed as the a posteriori probability that a winning candidate will continue to be chosen after the system is subjected to noise.
This model shows that, in the context of multicandidate elections, the Electoral College is generally more stable than the Direct Popular Vote; that the stability of the Electoral College rises above that of the Direct Popular Vote as the size of the subdivided regions decreases from the original nation size, up to a certain level, after which the stability decreases again toward that of the Direct Popular Vote as the region size approaches the unit cell size; and that the stability of the Electoral College approaches that of the Direct Popular Vote at the two extremes, as the region size grows to the original national size or shrinks to the unit cell size. It also shows a special situation of white-noise dominance with negligibly small concentrated noise, in which the Direct Popular Vote is surprisingly more stable than the Electoral College, although the existence of such a situation is questionable. We observe that "high stability" in theory indeed always reveals itself as "high accuracy" in applications. Extensive experiments on two human face benchmark databases, applying an Electoral College framework embedded with standard baseline and newly developed holistic algorithms, have been conducted. The impressive improvement of the Electoral College over regular holistic algorithms verifies the stability theory of the voting systems. It also provides evidential support for adopting theoretical stability, instead of application accuracy, as the primary objective of subjective pattern recognition research.

19.
Clustering is a powerful machine learning technique that groups “similar” data points based on their characteristics. Many clustering algorithms work by approximating the minimization of an objective function, namely the sum of within-cluster distances between points. The straightforward approach involves examining all the possible assignments of points to each of the clusters. This approach guarantees the solution will be a global minimum; however, the number of possible assignments scales quickly with the number of data points and becomes computationally intractable even for very small datasets. In order to circumvent this issue, cost function minima are found using popular local search-based heuristic approaches such as k-means and hierarchical clustering. Due to their greedy nature, such techniques do not guarantee that a global minimum will be found and can lead to sub-optimal clustering assignments. Other classes of global search-based techniques, such as simulated annealing, tabu search, and genetic algorithms, may offer better quality results but can be too time-consuming to implement. In this work, we describe how quantum annealing can be used to carry out clustering. We map the clustering objective to a quadratic binary optimization problem and discuss two clustering algorithms which are then implemented on commercially available quantum annealing hardware, as well as on a purely classical solver “qbsolv.” The first algorithm assigns N data points to K clusters, and the second one can be used to perform binary clustering in a hierarchical manner. We present our results in the form of benchmarks against well-known k-means clustering and discuss the advantages and disadvantages of the proposed techniques.
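The mapping from the clustering objective to a quadratic binary optimization problem can be sketched as follows: one binary variable per (point, cluster) pair, within-cluster pairwise distances as quadratic costs, and a penalty enforcing exactly one cluster per point. The one-hot penalty weight is an assumption; the paper's two algorithms and the "qbsolv" solver are not reproduced here.

```python
import itertools
import numpy as np

def clustering_qubo(D, K, lam=None):
    """Build a QUBO for assigning N points to K clusters: binary
    variable x[i*K + k] = 1 iff point i is in cluster k. Pairwise
    distances D[i, j] are paid only when i and j share a cluster,
    and a penalty lam * (sum_k x[i, k] - 1)^2 enforces one cluster
    per point (the constant term is dropped)."""
    N = len(D)
    if lam is None:
        lam = 2 * D.max()                   # assumed penalty weight
    idx = lambda i, k: i * K + k
    Q = np.zeros((N * K, N * K))
    for (i, j), k in itertools.product(
            itertools.combinations(range(N), 2), range(K)):
        Q[idx(i, k), idx(j, k)] += D[i, j]  # within-cluster cost
    for i in range(N):                      # expand the one-hot penalty:
        for k in range(K):                  # x^2 = x for binaries
            Q[idx(i, k), idx(i, k)] -= lam
            for k2 in range(k + 1, K):
                Q[idx(i, k), idx(i, k2)] += 2 * lam
    return Q

def qubo_energy(Q, x):
    """Energy of a binary assignment vector under the QUBO matrix."""
    return x @ Q @ x
```

With two tight pairs of points far from each other, the assignment that keeps each pair together has strictly lower QUBO energy than a mixed assignment, which is exactly what an annealer would exploit.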

20.
Multiple sequence alignment is of central importance to bioinformatics and computational biology. Although a large number of algorithms for computing a multiple sequence alignment have been designed, the efficient computation of highly accurate and statistically significant multiple alignments remains a challenge. In this paper, we propose an efficient method using a multi-objective genetic algorithm (MSAGMOGA) to discover optimal alignments with affine gaps in multiple sequence data. The main advantage of our approach is that a large number of tradeoff (i.e., non-dominated) alignments can be obtained in a single run with respect to the conflicting objectives: affine gap penalty minimization, and similarity and support maximization. To the best of our knowledge, this is the first effort with three objectives in this direction. The proposed method can be applied to any dataset with a sequential character, and it allows any choice of similarity measure for finding alignments. By analyzing the obtained optimal alignments, the decision maker can understand the tradeoff between the objectives. We compared our method with three well-known multiple sequence alignment methods: MUSCLE, SAGA, and MSA-GA; the first is a progressive method, while the other two are based on evolutionary algorithms. Experiments on the BAliBASE 2.0 database confirm that MSAGMOGA obtains results with better accuracy and statistical significance than the three well-known methods for multiple sequence alignment with affine gaps. The proposed method also finds solutions faster than the other evolutionary approaches mentioned above.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号