Similar Documents
20 similar records found (search time: 79 ms)
1.
A survey on hair modeling: styling, simulation, and rendering   (Cited by 3; 0 self-citations, 3 by others)
Realistic hair modeling is a fundamental part of creating virtual humans in computer graphics. This paper surveys the state of the art in the major topics of hair modeling: hairstyling, hair simulation, and hair rendering. Because of the difficult, often unsolved problems that arise in all these areas, a broad diversity of approaches is used, each with strengths that make it appropriate for particular applications. We discuss each of these major topics in turn, presenting the unique challenges facing each area and describing solutions that have been presented over the years to handle these complex issues. Finally, we outline some of the remaining computational challenges in hair modeling.

2.
We develop and present a new approach to modelling the characteristics of human hair, considering not only its structure, but also the control of its motion and a technique for rendering it in a realistic form. The approach includes a system for interactively defining the global positioning of the strands of hair on the head. Special attention is paid to the self-shadowing of the hair. A mass/spring/hinge system is used to control a single strand's position and orientation. We demonstrate that this approach results in a believable rendition of the hair and its dynamics.

3.
Image matching has been a central problem in computer vision and image processing for decades. Most previous approaches to image matching can be categorized as either intensity-based or edge-based comparison. The Hausdorff distance has been widely used for comparing point sets or edge maps since it does not require point correspondences. In this paper, we propose a new image similarity measure combining the Hausdorff distance with a normalized gradient consistency score for image matching. The normalized gradient consistency score is designed to compare the normalized image gradient fields between two images to alleviate the illumination variation problem in image matching. By combining edge-based and intensity-based information for image matching, we are able to achieve robust image matching under different lighting conditions. We show the superior robustness of the proposed image matching technique through experiments on face recognition under different lighting conditions.
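As a rough illustration of the edge-based half of such a measure, the symmetric Hausdorff distance between two point sets can be sketched in a few lines of NumPy. This is the standard textbook definition, not the authors' implementation, and it omits the normalized-gradient term:

```python
import numpy as np

def directed_hausdorff(a, b):
    """Directed Hausdorff distance h(A, B) = max over p in A of min over q in B of ||p - q||."""
    # Pairwise Euclidean distances between every point in a and every point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance H(A, B) = max(h(A, B), h(B, A))."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

Because the measure takes a max over nearest-neighbour distances rather than matching points one-to-one, it needs no correspondences, which is exactly why it suits edge-map comparison.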

4.
Based on the characteristics of the ant colony algorithm and the simulated annealing algorithm, a hybrid algorithm for solving the travelling salesman problem is proposed. Simulated annealing first generates the pheromone distribution; the ant colony algorithm then finds several candidate solutions according to the cumulatively updated pheromone; finally, a simulated annealing step that searches the neighbourhood for an alternative solution yields a more effective solution. Compared with simulated annealing, the standard genetic algorithm, the ant colony algorithm, and an ant colony algorithm with random initialization, all four hybrid variants perform well, with the hybrid algorithm using strategy D performing best.
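The simulated-annealing component of such a hybrid can be sketched as follows. This is a generic SA solver for the TSP with 2-opt neighbourhood moves and illustrative cooling parameters, not the paper's strategy-D algorithm; the pheromone-seeding and ant colony steps are omitted:

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing_tsp(dist, t0=10.0, cooling=0.995, iters=20000, seed=0):
    """Minimal simulated annealing for TSP using 2-opt (segment reversal) moves."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    t = t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt: reverse a segment
        cand_len = tour_length(cand, dist)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= cooling
    return best, best_len
```

In the hybrid described above, the solutions visited by such an SA run would be used to initialize the pheromone distribution before the ant colony phase starts.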

5.
The recently developed methods of explicit (multi-parametric) model predictive control (e-MPC) for hybrid systems provide an interesting opportunity for solving a class of nonlinear control problems. With this approach, the nonlinear process is approximated by a piecewise affine (PWA) hybrid model containing a set of local linear dynamics. Compared to linear-model-based MPC, a performance improvement is expected from the reduction of the plant-to-model mismatch, however at the cost of increased controller computational complexity. In order to reduce the computational load, so that the desired horizon lengths may be used, we present an efficient sub-optimal solution. The feasibility of the approach was evaluated in an experimental case study, in which an output-feedback, offset-free-tracking hybrid e-MPC controller was considered as a replacement for a PID-controller-based scheme for controlling the pressure in a wire-annealing machine.

6.
Industrial processes are often subjected to abnormal events such as faults or external disturbances, which can easily propagate via the process units. Establishing causal dependencies among process measurements has a key role in fault diagnosis due to its ability to identify the root cause of a fault and its propagation path. This paper proposes a hybrid nonlinear causal analysis based on nonparametric multiplicative regression (NPMR) for identifying the propagation of an oscillatory disturbance via control loops. The NPMR causality estimator addresses most of the limitations of linear model-based methods and can be applied to both bivariate and multivariate estimations without any modifications to the method parameters. Moreover, the NPMR-based estimations can be used to pinpoint the root cause of a fault. The process connectivity information is automatically integrated into the causal analysis using a specialized search algorithm, which makes it possible to tackle industrial systems with a high level of connectivity efficiently and enhances the quality of the results. The proposed approach is successfully demonstrated on an industrial board machine exhibiting oscillations in its drying section due to valve stiction. The NPMR-based estimator produced highly accurate results with relatively low computational effort compared with linear Granger causality and other nonlinear causality estimators.
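For reference, the linear Granger causality baseline against which NPMR is compared can be sketched in a few lines. This is a generic lag-1 bivariate formulation with an illustrative variance-ratio output, not the paper's estimator: it asks whether the past of x reduces the residual variance when predicting y from its own past.

```python
import numpy as np

def granger_ratio(x, y):
    """Lag-1 bivariate linear Granger sketch.

    Fits y_t on (1, y_{t-1}) and on (1, y_{t-1}, x_{t-1}) by least squares and
    returns the ratio of residual sums of squares. A ratio well below 1
    suggests that x Granger-causes y; near 1 suggests no linear influence.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    Y = y[1:]
    Xr = np.column_stack([np.ones(len(Y)), y[:-1]])          # restricted model
    Xu = np.column_stack([np.ones(len(Y)), y[:-1], x[:-1]])  # augmented with lagged x
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    return rss(Xu) / rss(Xr)
```

A linear estimator of this kind misses nonlinear couplings, which is the limitation the NPMR-based method above is designed to address.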

7.
The author (ibid., vol. 36, no. 9, pp. 994-1007, Sept. 1991) formulated and solved an optimal linear quadratic Gaussian (LQG) problem for stability enhancement of flexible structures with collocated controllers and constructed an approximate realization of the compensator. An explicit closed-form solution for the compensator transfer function is obtained here. It is positive real but nonrational.

8.
Despite notable advances over the past decade, current virtual reality systems have numerous drawbacks. The FlatWorld project at the University of Southern California's Institute for Creative Technologies seeks to overcome these limitations by exploring a new approach to virtual environments (VEs) inspired by Hollywood set-design techniques. Since the dawn of the film industry, movie sets have been constructed using modular panels called flats. Set designers use flats to create physical structures to represent various places and activities. The paper considers how FlatWorld is developing a reconfigurable system of digital flats. Using large-screen displays and real-time computer graphics technology, a single digital flat can appear as an interior room wall or an exterior building face.

9.
We present a semiautomatic system that converts conventional videos into stereoscopic videos by combining motion analysis with user interaction, aiming to transfer as much of the labeling work as possible from the user to the computer. In addition to the widely used structure-from-motion (SFM) techniques, we develop two new methods that analyze the optical flow to provide additional qualitative depth constraints. They remove the camera-movement restriction imposed by SFM, so that general motions can be used in scene depth estimation, the central problem in mono-to-stereo conversion. With these algorithms, the user's labeling task is significantly simplified. We further develop a quadratic programming approach to incorporate both quantitative depth and qualitative depth (such as that from user scribbles) to recover dense depth maps for all frames, from which stereoscopic views can be synthesized. In addition to visual results, we present user-study results showing that our approach is more intuitive and less labor-intensive, while producing a 3D effect comparable to that of current state-of-the-art interactive algorithms.

10.
In this paper we use some basic principles from data envelopment analysis (DEA) in order to extract the necessary information for solving a multicriteria decision analysis (MCDA) problem. The proposed method (enhanced alternative cross-evaluation, ACE+) is appropriate when either the decision maker is unwilling (or hardly available) to provide information, or there are several decision makers, each supporting his/her own option. It is similar to the AXE method of Doyle, but goes one step further: each alternative uses its most favourable weights (as in AXE) and its most favourable value functions in order to perform a self-evaluation, according to multi-attribute value theory (MAVT). These self-evaluations are averaged to derive the overall peer evaluation for each alternative. The minimum information required from the decision maker is to define the weight interval for each criterion. Besides the peer evaluation and the final rating of the alternatives, the method provides useful conclusions for the sensitivity analysis of the results.
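The self/peer-evaluation idea can be sketched minimally as follows. This is an illustrative simplification, not the ACE+ method itself: criterion values are assumed already scaled by fixed value functions (the search over favourable value functions is omitted), each alternative greedily picks its most favourable weights from per-criterion intervals `lo`/`hi` subject to the weights summing to one (assumed feasible), and scores are averaged across all weight sets:

```python
import numpy as np

def favourable_weights(values, lo, hi):
    """Greedy solution of: maximise values @ w  s.t.  sum(w) = 1, lo <= w <= hi.

    Start every weight at its lower bound, then hand the remaining mass to the
    criteria on which this alternative scores best. Assumes sum(lo) <= 1 <= sum(hi).
    """
    w = lo.copy()
    slack = 1.0 - lo.sum()
    for j in np.argsort(values)[::-1]:  # best criteria first
        add = min(hi[j] - lo[j], slack)
        w[j] += add
        slack -= add
    return w

def ace_scores(V, lo, hi):
    """Rows of V are alternatives, columns are criterion values in [0, 1].

    Each alternative self-evaluates with its most favourable weights; the peer
    score of an alternative is its average value under all those weight sets.
    """
    W = np.array([favourable_weights(v, lo, hi) for v in V])
    return (V @ W.T).mean(axis=1)
```

The averaging step is what turns a collection of self-serving evaluations into a peer rating: an alternative that only looks good under its own weights is pulled down by everyone else's.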

11.
Multimedia Tools and Applications - Random-needle Embroidery is a graceful Chinese art designated as Intangible Cultural Heritage, which “draws” beautiful images with thousands of...

12.
13.
In recent years, personalized fabrication has received considerable attention because of the widespread use of consumer-level three-dimensional (3D) printers. However, such 3D printers have drawbacks, such as long production time and limited output size, which hinder large-scale rapid prototyping. In this paper, for the time- and cost-effective fabrication of large-scale objects, we propose a hybrid 3D fabrication method that combines 3D printing and the Zometool construction set, which is a compact, sturdy, and reusable structure for infill fabrication. The proposed method significantly reduces fabrication cost and time by printing only thin 3D outer shells. In addition, we design an optimization framework to generate both a Zometool structure and printed surface partitions by optimizing several criteria, including printability, material cost, and Zometool structure complexity. Moreover, we demonstrate the effectiveness of the proposed method by fabricating various large-scale 3D models.

14.
Experience with a Hybrid Processor: K-Means Clustering   (Cited by 2; 0 self-citations, 2 by others)
We discuss hardware/software co-processing on a hybrid processor for a compute- and data-intensive multispectral imaging algorithm, k-means clustering. The experiments are performed on two models of the Altera Excalibur board, the first using the soft IP core 32-bit NIOS 1.1 RISC processor, and the second with the hard IP core ARM processor. In our experiments, we compare the performance of the sequential k-means algorithm with three different accelerated versions. We consider granularity and synchronization issues when mapping an algorithm to a hybrid processor. Our results show that a speedup of 11.8X is achieved by migrating computation to the Excalibur ARM hardware/software configuration, as compared to software only on a gigahertz Pentium III. Speedup on the Excalibur NIOS is limited by the communication cost of transferring data from external memory through the processor to the customized circuits. This limitation is overcome on the Excalibur ARM, in which dual-port memories, accessible to both the processor and configurable logic, have the biggest performance impact of all the techniques studied.
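The sequential baseline being accelerated is standard Lloyd's k-means, which alternates point-to-centre assignment with centroid updates. A minimal NumPy sketch (illustrative only, not the paper's FPGA mapping) is:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means: alternate nearest-centre assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from distinct data points
    for _ in range(iters):
        # Assignment step: squared distance from every point to every centre.
        # (This distance computation is the natural candidate for hardware offload.)
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # Update step: each centre moves to the mean of its points; empty clusters keep their centre.
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers
```

The assignment step dominates the runtime, which is why the article maps it to configurable logic; the communication bottleneck it reports arises from streaming X between external memory and those circuits.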

15.
We present a new algorithm for view-dependent level-of-detail rendering of meshes. Not only can it effectively resolve complex geometry features similar to edge collapse-based schemes, but it also produces meshes that modern graphics hardware can render efficiently. This is accomplished through a novel hybrid approach: for each frame, we view-dependently refine the progressive mesh (PM) representation of the original mesh and use the output as the base domain of uniform regular refinements. The algorithm exploits frame-to-frame coherence and only updates portions of the output mesh corresponding to modified domain triangles. The PM representation is built using a custom volume preservation-based error function. A simple k-d tree enhanced jump-and-walk scheme is used to quickly map from the dynamic base domain to the original mesh during regular refinements. In practice, the PM refinement provides a view-optimized base domain for later regular refinements. The regular refinements ensure almost-everywhere regularity of output meshes, allowing optimization for vertex cache coherence and caching of geometry data in high-performance graphics memory. Combined, they also have the effect of allowing our algorithm to operate on uniform clusters of triangles instead of individual ones, reducing CPU workload.

16.
Multiple spatially-related videos are increasingly used in security, communication, and other applications. Since it can be difficult to understand the spatial relationships between multiple videos in complex environments (e.g. to predict a person's path through a building), some visualization techniques, such as video texture projection, have been used to aid spatial understanding. In this paper, we identify and begin to characterize an overall class of visualization techniques that combine video with 3D spatial context. This set of techniques, which we call contextualized videos, forms a design palette which must be well understood so that designers can select and use appropriate techniques that address the requirements of particular spatial video tasks. In this paper, we first identify user tasks in video surveillance that are likely to benefit from contextualized videos and discuss the video, model, and navigation related dimensions of the contextualized video design space. We then describe our contextualized video testbed which allows us to explore this design space and compose various video visualizations for evaluation. Finally, we describe the results of our process to identify promising design patterns through user selection of visualization features from the design space, followed by user interviews.

17.
Computer Networks, 2003, 41(5): 641-665
The designs of most systems-on-a-chip (SoC) architectures rely on simulation as a means for performance estimation. Such designs usually start with a parameterizable template architecture, and the design space exploration is restricted to identifying suitable parameters for all the architectural components. However, in the case of heterogeneous SoC architectures such as network processors, the design space exploration also involves a combinatorial aspect: which architectural components are to be chosen, how they should be interconnected, and how tasks should be mapped, thereby enlarging the design space. Moreover, in the case of network processor architectures there is also an associated uncertainty in terms of the application scenario and the traffic it will be required to process. As a result, simulation is no longer a feasible option for evaluating such architectures in any automated or semi-automated design space exploration process, due to the high simulation times involved. To address this problem, in this paper we hypothesize that the design space exploration for network processors should be separated into multiple stages, each having a different level of abstraction. Further, it would be appropriate to use analytical evaluation frameworks during the initial stages and resort to simulation techniques only when a relatively small set of potential architectures is identified. None of the known performance evaluation methods for network processors have been positioned from this perspective. We show that there are already suitable analytical models for network processor performance evaluation which may be used to support our hypothesis. To this end, we choose a reference system-level model of a network processor architecture and compare its performance evaluation results derived using a known analytical model [Thiele et al., Design space exploration of network processor architectures, in: Proc. 1st Workshop on Network Processors, Cambridge, MA, February 2002; Thiele et al., A framework for evaluating design tradeoffs in packet processing architectures, in: Proc. 39th Design Automation Conference (DAC), New Orleans, USA, ACM Press, 2002] with the results derived by detailed simulation. Based on this comparison, we propose a scheme for the design space exploration of network processor architectures where both analytical performance evaluation techniques and simulation techniques have unique roles to play.

18.
Spam Detection: Increasing Accuracy with a Hybrid Solution   (Cited by 1; 0 self-citations, 1 by others)
A significant increase in the amount of e-mail spam has become a major concern to both organizations and employees. This article identifies and compares three different types of spam-detection approaches: list-based, content-scanning, and forged-e-mail detection. Because none of these approaches provides a comprehensive solution, a hybrid solution, which maximizes their collective strengths and minimizes their individual weaknesses, is proposed and tested with a prototype tool developed by the authors.
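A toy sketch of such a hybrid combiner follows. Every name, weight, and threshold here is invented for illustration and is not from the article or its prototype; the point is only the shape of the idea: let the high-confidence list check short-circuit, then blend the content scan with a forgery signal (here a hypothetical SPF pass/fail flag):

```python
def classify_email(sender, subject, body, *, blacklist, whitelist, spam_terms, spf_pass):
    """Toy hybrid spam classifier: list check, then weighted content + forgery scores.

    Weights (0.7 / 0.3) and the 0.5 threshold are illustrative, not tuned values.
    """
    # List-based detection: decisive when it fires.
    if sender in whitelist:
        return "ham"
    if sender in blacklist:
        return "spam"
    # Content scanning: fraction of known spam terms present in the message.
    text = f"{subject} {body}".lower()
    content_score = sum(term in text for term in spam_terms) / max(len(spam_terms), 1)
    # Forged-e-mail detection: here reduced to a single authentication flag.
    forged_score = 0.0 if spf_pass else 1.0
    score = 0.7 * content_score + 0.3 * forged_score
    return "spam" if score >= 0.5 else "ham"
```

Combining detectors this way lets each one cover the others' blind spots, which is the article's argument for a hybrid over any single approach.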

19.
In electronic markets, both bundle search and buyer coalition formation are profitable purchasing strategies for buyers who need to buy small amounts of goods and have no or limited bargaining power. In this paper, we present a distributed mechanism that allows buyers to use both purchasing strategies. The mechanism includes a heuristic bundle search algorithm and a distributed coalition formation scheme, which is based on an explicit negotiation protocol with low communication cost. The resulting coalitions are stable in the core in terms of coalition rationality. The simulation results show that this mechanism is very efficient, and that the resulting cost to buyers is close to the optimal cost.

20.
We describe the Fortran code CPsuperH2.0, which contains several improvements and extensions of its predecessor CPsuperH. It implements improved calculations of the Higgs-boson pole masses, notably a full treatment of the 4×4 neutral Higgs propagator matrix including the Goldstone boson and a more complete treatment of threshold effects in self-energies and Yukawa couplings, improved treatments of two-body Higgs decays, some important three-body decays, and two-loop Higgs-mediated contributions to electric dipole moments. CPsuperH2.0 also implements an integrated treatment of several B-meson observables, including the branching ratios of B_s → μ⁺μ⁻, B_d → τ⁺τ⁻, B_u → τν, B → X_s γ and the latter's CP-violating asymmetry A_CP, and the supersymmetric contributions to the mass differences. These additions make CPsuperH2.0 an attractive integrated tool for analyzing supersymmetric CP and flavour physics as well as searches for new physics at high-energy colliders such as the Tevatron, LHC and linear colliders.

Program summary

Program title: CPsuperH2.0
Catalogue identifier: ADSR_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSR_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 13 290
No. of bytes in distributed program, including test data, etc.: 89 540
Distribution format: tar.gz
Programming language: Fortran 77
Computer: PC running under Linux and computers in Unix environment
Operating system: Linux
RAM: 32 Mbytes
Classification: 11.1
Catalogue identifier of the previous version: ADSR_v1_0
Journal reference of the previous version: CPC 156 (2004) 283
Does the new version supersede the previous version?: Yes
Nature of problem: The calculations of the mass spectrum, decay widths and branching ratios of the neutral and charged Higgs bosons in the Minimal Supersymmetric Standard Model with explicit CP violation have been improved. The program is based on recent renormalization-group-improved diagrammatic calculations that include dominant higher-order logarithmic and threshold corrections, b-quark Yukawa-coupling resummation effects and an improved treatment of Higgs-boson pole-mass shifts. The couplings of the Higgs bosons to the Standard Model gauge bosons and fermions, to their supersymmetric partners, and all the trilinear and quartic Higgs-boson self-couplings are also calculated. The new implementations include a full treatment of the 4×4 (2×2) neutral (charged) Higgs propagator matrix together with the center-of-mass-dependent Higgs-boson couplings to gluons and photons, two-loop Higgs-mediated contributions to electric dipole moments, and an integrated treatment of several B-meson observables.
Solution method: One-dimensional numerical integration for several Higgs-decay modes, iterative treatment of the threshold corrections and Higgs-boson pole masses, and numerical diagonalization of the neutralino mass matrix.
Reasons for new version: Mainly to provide a coherent numerical framework which calculates observables consistently for both low- and high-energy experiments.
Summary of revisions: Improved treatment of Higgs-boson masses and propagators. Improved treatment of Higgs-boson couplings and decays. Higgs-mediated two-loop electric dipole moments. B-meson observables.
Running time: Less than 0.1 seconds.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号