20 similar documents retrieved
2.
In this paper, we propose a fast 3-D facial shape recovery algorithm from a single image with general, unknown lighting. In order to derive the algorithm, we formulate a nonlinear least-squares problem with two parameter vectors which are related to personal identity and lighting conditions. We then combine the spherical harmonics for the surface normals of a human face with tensor algebra and show that, under a certain condition, the dimensionality of the least-squares problem can be reduced to one-tenth of that of the regular subspace-based model by using tensor decomposition (N-mode SVD), which greatly speeds up the computations. To enhance the shape recovery performance, we incorporate prior information when updating the parameters. In the experiments, the proposed algorithm takes less than 0.4 s to reconstruct a face and shows a significant performance improvement over other reported schemes.
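A minimal sketch of the kind of objective described above (the notation is ours, not the authors'): with an identity parameter vector α, a lighting coefficient vector ℓ for the spherical-harmonic basis, and priors (ᾱ, ℓ̄), the recovery problem can be written as
\[
\min_{\alpha,\;\ell}\;\bigl\|\,I - B(\alpha)\,\ell\,\bigr\|_2^2
\;+\;\lambda_\alpha\,\|\alpha-\bar\alpha\|^2
\;+\;\lambda_\ell\,\|\ell-\bar\ell\|^2 ,
\]
where I stacks the observed pixel intensities and B(α) stacks the spherical-harmonic basis images of the surface normals generated by the identity parameters; the tensor (N-mode SVD) step is what shrinks the dimension of α, and the quadratic penalties stand in for the prior information used when updating the parameters.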
3.
Tung Yun Mei 《Software》1981,11(12):1273-1292
One of the critical problems of Chinese data processing is to make a computer produce Chinese characters as output. This paper describes a language called LCCD, which is especially intended for high quality design of Chinese characters, although it can in fact be used for any kind of symbols. Special attention is given to the methodology of character design, using a parametric graphical approach. The underlying methods of implementation are also discussed.
4.
We propose an information filtering system based on a probabilistic model. We assume that a document consists of words which occur according to a probability distribution, and regard a document as a sample drawn from that distribution. In this article, we adopt a multinomial distribution and represent a document as a probability distribution whose random variables are the words in the document. When an information filtering system selects information, it uses the similarity between the user's interests (a user profile) and a document. Since our proposed system is constructed under the probabilistic model, the similarity is defined using the Kullback–Leibler divergence. To create the user profile, we must optimize the Kullback–Leibler divergence. Since the Kullback–Leibler divergence is a nonlinear function, we use a genetic algorithm to optimize it. We carry out experiments and confirm the effectiveness of the proposed method.
This work was presented in part at the 10th International Symposium on Artificial Life and Robotics, Oita, Japan, February 4–6, 2005.
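As an illustration only (the function name and smoothing are our assumptions, not taken from the paper), the similarity between a multinomial user profile and a document's word distribution reduces to a Kullback–Leibler divergence:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two word distributions over the same vocabulary."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy example with a 4-word vocabulary: the profile is the multinomial
# parameter vector, the document is summarised by its raw word counts.
profile = [0.4, 0.3, 0.2, 0.1]
document_counts = [10, 8, 5, 2]
print(kl_divergence(profile, document_counts))  # smaller value = more similar
```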
5.
Reliability analysis of a satellite structure with a parametric and a non-parametric probabilistic model
M. Pellissetti H. Pradlwarter G.I. Schuëller 《Computer Methods in Applied Mechanics and Engineering》2008,198(2):344-357
The reliability of a satellite structure subjected to harmonic base excitation in the low frequency range is analyzed with respect to the exceedance of critical frequency response thresholds. Both a parametric model of uncertainties and a more recently introduced non-parametric model are used to analyze the reliability, where the latter model in the present analysis captures the model uncertainties. With both models, the probability of exceedance of given acceleration thresholds is estimated using Monte Carlo simulation. To reduce the computational cost of the parametric model, a suitable meta-model is used instead. The results indicate that for low levels of uncertainty in the damping, the non-parametric model provides significantly more pessimistic (and hence conservative) predictions of the exceedance probabilities. For high levels of damping uncertainty, the opposite is the case.
6.
Traditionally, gesture-based interaction in virtual environments is composed of either static, posture-based gesture primitives or temporally analyzed dynamic primitives. However, it would be ideal to incorporate both static and dynamic gestures to fully utilize the potential of gesture-based interaction. To that end, we propose a probabilistic framework that incorporates both static and dynamic gesture primitives. We call these primitives Gesture Words (GWords). Using a probabilistic graphical model (PGM), we integrate these heterogeneous GWords and a high-level language model in a coherent fashion. Composite gestures are represented as stochastic paths through the PGM. A gesture is analyzed by finding the path that maximizes the likelihood on the PGM with respect to the video sequence. To facilitate online computation, we propose a greedy algorithm for performing inference on the PGM. The parameters of the PGM can be learned via three different methods: supervised, unsupervised, and hybrid. We have implemented the PGM model for a gesture set of ten GWords with six composite gestures. The experimental results show that the PGM can accurately recognize composite gestures.
7.
An Approach to Chinese Character Input via the Numeric Keypad and Its Implementation
蔡昭权 《计算机工程与设计》2006,27(5):908-910
By comparing the entry forms of existing Chinese character input methods, and by analyzing the characteristics of these input methods and of the numeric keys, this paper proposes an approach to and a technique for implementing Chinese character input on the numeric keypad, and implements it in the C language, so that Chinese characters can be entered conveniently with the numeric keys alone; the technique can be applied to Chinese character input on any device equipped with a numeric keypad.
8.
《International Journal of Computer Mathematics》2012,89(1):185-198
In this paper, a multi-objective production planning model has been presented for a captive plant. The model includes multiple products, multiple plants, and multiple objectives with some probabilistic constraints. The probabilistic constraints have been transformed into deterministic constraints by assuming the parameters to be independent normal random variables. The deterministic problem has been solved with two different methods, namely the weighting method and the fuzzy programming method. Finally, the integral solutions obtained by these two methods have been compared.
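For reference, the standard deterministic equivalent used for such chance constraints (a textbook transformation, not quoted from the paper): if the coefficients a_j are independent normal random variables and the constraint requires P(∑_j a_j x_j ≤ b) ≥ β, then it becomes
\[
\sum_j E[a_j]\,x_j \;+\; \Phi^{-1}(\beta)\,\sqrt{\sum_j \operatorname{Var}(a_j)\,x_j^{\,2}} \;\le\; b,
\]
where Φ⁻¹ is the inverse of the standard normal distribution function; the resulting deterministic program can then be handled by the weighting or fuzzy programming method.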
9.
Chinese characters are constructed by strokes according to structural rules. Therefore, the geometric configurations of characters are important features for character recognition. In handwritten characters, stroke shapes and their spatial relations may vary to some extent. The attribute value of a structural identification is then a fuzzy quantity rather than a binary quantity. Recognizing these facts, we propose a fuzzy attribute representation (FAR) to describe the structural features of handwritten Chinese characters for an on-line Chinese character recognition (OLCCR) system. With a FAR, a fuzzy attribute graph for each handwritten character is created, and the character recognition process is thus transformed into a simple graph matching problem. This character representation and our proposed recognition method allow us to relax the constraints on stroke order and stroke connection. The graph model provides a generalized character representation that can easily incorporate newly added characters into an OLCCR system with an automatic learning capability. The fuzzy representation can describe the degree of structural deformation in handwritten characters. The character matching algorithm is designed to tolerate structural deformations to some extent. Therefore, even input characters with deformations can be recognized correctly once the reference dictionary of the recognition system has been trained using a few representative learning samples. Experimental results are provided to show the effectiveness of the proposed method.
10.
A new approach that embeds a probabilistic model in genetic programming is applied to a series of fault diagnosis problems. Fault diagnosis can be regarded as a multi-class classification problem. Genetic programming has great advantages in solving complex problems, and this advantage remains significant in fault diagnosis. Moreover, using the probabilistic model as the fitness function improves diagnostic accuracy. Finally, the method is applied to the fault diagnosis of electromechanical equipment. The results show that genetic programming based on the probabilistic model outperforms artificial neural networks on this task.
11.
Since Chinese characters are composed from a small set of fundamental shapes (radicals), the problem of recognising large numbers of characters can be converted to that of extracting a small number of radicals and then finding their optimal combination. In this paper, radical extraction is carried out by nonlinear active shape models, in which kernel principal component analysis is employed to capture the nonlinear variation. Treating Chinese character composition as a discrete Markov process, we also propose an approach to recognition with the Viterbi algorithm. Our initial experiments are conducted on off-line recognition of 430,800 loosely-constrained characters, comprising 200 radical categories that cover 2154 character categories from 200 writers. The writer-independent recognition rate is 93.5% of characters correct. Consideration of published figures for existing radical approaches suggests that our method achieves superior performance.
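A hedged sketch of how modelling character composition as a discrete Markov chain over radicals leads to a Viterbi decode (the array names below are our own illustration, not the paper's code):

```python
import numpy as np

def viterbi(emission_log, transition_log, prior_log):
    """Most likely radical sequence for one character.

    emission_log[t, s]   log-score of radical class s for the t-th extracted shape
    transition_log[s, r] log-probability that radical r follows radical s
    prior_log[s]         log-probability that a character starts with radical s
    """
    T, S = emission_log.shape
    delta = prior_log + emission_log[0]          # best score ending in each state
    back = np.zeros((T, S), dtype=int)           # back-pointers
    for t in range(1, T):
        scores = delta[:, None] + transition_log
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + emission_log[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                            # radical indices, left to right
```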
12.
《Ergonomics》2012,55(8):1433-1444
We aimed to propose a convenient model for predicting complete recovery time (CRT) after exhaustion in high-intensity work. Before participating in the laboratory test, each of the 47 young adult subjects provided demographic information and filled out the perceived functional ability (PFA) and physical activity rating (PA-R) questionnaires. All subjects were required to perform one cycling test (at 70% maximum working capacity). Subjects continued cycling until exhaustion and then sat and recovered until their heart rates (HR) returned to baseline values. We found that CRT was significantly correlated with relative body mass index, the PFA score, PA-R score and maximum heart rate (HRmax). Accordingly, a prediction model for CRT was proposed. Furthermore, by replacing HRmax with age-predicted maximal HR, we obtained a more convenient prediction model that was independent of any physiological indexes that can only be obtained by subject testing.
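The abstract does not give the fitted coefficients, so the following is only the general linear form implied by the text, with the commonly used age-predicted maximum heart rate substituted in (that substitution formula is our assumption, not necessarily the paper's):
\[
\widehat{\mathrm{CRT}} \;=\; \beta_0 + \beta_1\,\mathrm{BMI_{rel}} + \beta_2\,\mathrm{PFA} + \beta_3\,\mathrm{PAR} + \beta_4\,\mathrm{HR_{max}},
\qquad \mathrm{HR_{max}} \approx 220 - \mathrm{age},
\]
with the coefficients β_i estimated from the 47 subjects' cycling data.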
13.
In this paper, we present a method to recover the parameters governing the reflection of light from a surface making use of a single hyperspectral image. To do this, we view the image radiance as a combination of specular and diffuse reflection components and present a cost functional which can be used for purposes of iterative least-squares optimisation. This optimisation process is quite general in nature and can be applied to a number of reflectance models widely used in the computer vision and graphics communities. We elaborate on the use of these models in our optimisation process and provide a variant of the Beckmann–Kirchhoff model which incorporates the Fresnel reflection term. We show results on synthetic images and illustrate how the recovered photometric parameters can be employed for skin recognition in real-world imagery, where our estimated albedo yields a classification rate of 95.09 ± 4.26% as compared to an alternative whose classification rate is 90.94 ± 6.12%. We also show quantitative results on the estimation of the index of refraction, where our method delivers an average per-pixel angular error of 0.15°. This is a considerable improvement with respect to an alternative, which yields an error of 9.9°.
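A compact version of the radiance decomposition the abstract refers to (the symbols are ours; the paper's exact cost functional may differ): at pixel u and wavelength λ,
\[
I(\lambda,u) \;=\; g(u)\,S(\lambda,u) \;+\; k(u)\,\rho_s\bigl(\theta_i,\theta_r;\sigma,\eta\bigr),
\qquad
\min_{g,\,S,\,k,\,\sigma,\,\eta}\;\sum_{u,\lambda}\bigl(I_{\mathrm{obs}}(\lambda,u)-I(\lambda,u)\bigr)^2,
\]
where the first term is the diffuse component (shading factor g times spectral albedo S) and the second the specular component, modelled for instance by a Beckmann–Kirchhoff-style term with surface roughness σ and index of refraction η entering through the Fresnel factor.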
14.
Research on a Chinese Security Mechanism Based on CAPTCHA
As more and more "web robots" operate on the Internet, website security has become an increasingly serious problem. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA), a program designed so that humans can pass the test while current computers cannot, has emerged; its principle rests on unsolved problems in artificial intelligence. By examining several CAPTCHA designs used in practical security applications, this paper describes their principles, models, and strengths and weaknesses, and, taking conditions in China into account, proposes a CAPTCHA model based on Chinese character recognition and details the implementation strategy for programming a Chinese CAPTCHA.
15.
Xi Zhang, Wai-Ming Tsang, Kazuo Yamazaki 《Computers in Industry》2010,61(7):711-726
Collision detection by machining simulation requires 3D models of rotating cutters. However, the 3D models of a cutter and holder are not always available. In this paper, a new method is proposed to design an automatic vision-based 3D modeling system, which can quickly reconstruct the 3D model of a cutter and holder once they are installed on the spindle. Only a single camera is mounted on the machine tool to capture images of the rotating cutter and holder. By viewing the rotating cutter and holder as a surface of revolution, the contour of the imaged cutter and holder can be used to reconstruct the 3D model as a stack of circular cross-sections. The complete generating function of the cutter and holder is then recovered from the cross-sections. Finally, the 3D model of the cutter is built by rotating the generating function around the spindle axis. The effectiveness and accuracy of the proposed method are verified by on-machine experiments using 12 kinds of cutters and holders; the accuracy satisfies the requirements of collision detection.
16.
Traffic congestion occurs frequently in urban settings, and is not always caused by traffic incidents. In this paper, we propose a simple method for detecting traffic incidents from probe-car data by identifying unusual events that distinguish incidents from spontaneous congestion. First, we introduce a traffic state model based on a probabilistic topic model to describe the traffic states for a variety of roads. Formulas for estimating the model parameters are derived, so that the model of usual traffic can be learned using an expectation–maximization algorithm. Next, we propose several divergence functions to evaluate differences between the current and usual traffic states and streaming algorithms that detect high-divergence segments in real time. We conducted an experiment with data collected for the entire Shuto Expressway system in Tokyo during 2010 and 2011. The results showed that our method discriminates successfully between anomalous car trajectories and the more usual, slowly moving traffic patterns.
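An illustrative check in the spirit of the abstract (the divergence choice and threshold are our assumptions, and the paper's topic-model-based state representation is reduced here to a simple histogram per segment):

```python
import numpy as np

def flag_anomalous_segments(current, usual, threshold=0.5, eps=1e-12):
    """current, usual: dicts mapping a road-segment id to a histogram over traffic states.

    Returns the segment ids whose divergence from their usual state exceeds the
    threshold, i.e. candidates for incident-related rather than spontaneous congestion.
    """
    flagged = []
    for seg, cur in current.items():
        p = np.asarray(cur, dtype=float) + eps
        q = np.asarray(usual[seg], dtype=float) + eps
        p, q = p / p.sum(), q / q.sum()
        if float(np.sum(p * np.log(p / q))) > threshold:
            flagged.append(seg)
    return flagged
```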
17.
《Pattern Recognition》2014,47(2):685-693
In this paper, a systematic method is described that constructs an efficient and robust coarse classifier from a large number of basic recognizers obtained with different feature-extraction parameters, different discriminant methods or functions, and so on. The architecture of the coarse classification is a sequential cascade of basic recognizers that reduces the candidates after each basic recognizer. A genetic algorithm determines the cascade with the best speed and highest performance. The method was applied to on-line handwritten Chinese and Japanese character recognition. We produced hundreds of basic recognizers with different classification costs and different classification accuracies by changing the parameters of feature extraction and the discriminant functions. From these basic recognizers, we obtained a rather simple two-stage cascade, which reduces the overall recognition time substantially while maintaining the classification and recognition rates.
18.
In this paper, we introduce a novel visual similarity measuring technique to retrieve face images in photo album databases for law enforcement. Though much work has been done on face similarity matching techniques, little attention has been given to the design of face matching schemes suitable for visual retrieval in single-model databases, where accuracy, robustness to scale and environmental changes, and computational efficiency are three important issues to be considered. This paper presents a robust face retrieval approach using structural and spatial point correspondence in which directional corner points (DCPs) are generated for efficient face coding and retrieval. A complete investigation of the proposed method is conducted, covering face retrieval under controlled/ideal conditions, scale variations, environmental changes and subject actions. The system performance is compared with that of the eigenface method. Notably, the proposed DCP retrieval technique outperforms the eigenface method in most of the comparison experiments. This research demonstrates that the proposed DCP approach provides a new way of retrieving human faces in single-model databases that is both robust to scale and environmental changes and computationally efficient.
19.
Deformable shape detection is an important problem in computer vision and pattern recognition. However, standard detectors are typically limited to locating only a few salient landmarks, such as those near edges or areas of high contrast, often conveying insufficient shape information. This paper presents a novel statistical pattern recognition approach to locate a dense set of salient and non-salient landmarks in images of a deformable object. We exploit the fact that several object classes exhibit a homogeneous structure, such that each landmark position provides some information about the positions of the other landmarks. In our model, the relationship between all pairs of landmarks is naturally encoded as a probabilistic graph. Dense landmark detections are then obtained with a new sampling algorithm that, given a set of candidate detections, selects the most likely positions so as to maximize the probability of the graph. Our experimental results demonstrate accurate, dense landmark detections within and across different databases.
20.
This paper considers the problem of scheduling two-operation non-preemptable jobs on two identical semi-automatic machines. A single server is available to carry out the first (or setup) operation. The second operation is executed automatically, without the server. The general problem of makespan minimization is NP-hard in the strong sense. In earlier work, we showed that the equal-total-length problem is solvable in polynomial time, and we also provided efficient and effective solutions for the special cases of equal setup and equal processing times. Most of the cases analyzed thus far have fallen into the category of regular problems. In this paper we build on this earlier work to deal with the general case. Various approaches are considered. One may reduce the problem to a regular one by amalgamating jobs, or apply the earlier heuristics to (possibly regular) job clusters. Alternatively, one may apply a greedy heuristic, a metaheuristic such as a genetic algorithm, or the well-known Gilmore–Gomory algorithm to solve the general problem. We report on the performance of these various methods.