Similar Documents
20 similar documents found.
1.
We give an up-to-date survey of techniques and methods for fire simulation in computer graphics. Physically-based methods prevail over traditional non-physical methods for realistic visual effects. In this paper, we explore the visual simulation of fire-related phenomena in terms of physical modeling, numerical simulation and visual rendering. First, we introduce a coupled physical and chemical mathematical model to explain fire behavior and motion. Several assumptions and constraints are put forward to simplify its implementation in computer graphics. We then give an overview of present methods for solving the most complicated processes in numerical simulation: velocity advection and pressure projection. Comparisons of these methods are also presented. Since fire is a participating medium as well as a visual radiator, we discuss the techniques and problems of these issues as well. We conclude by addressing several open challenges and possible future research directions in fire simulation.
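The velocity advection step mentioned in this abstract is often handled with a semi-Lagrangian scheme; the following 1-D sketch is illustrative only (grid size, field names and periodic boundary are assumptions, not from the survey):

```python
import numpy as np

def semi_lagrangian_advect(q, u, dt, dx):
    """Advect scalar field q by velocity u on a 1-D periodic grid.

    For each grid point, trace backwards along the velocity to find
    the departure point, then linearly interpolate q there.
    """
    n = q.size
    x = np.arange(n) * dx
    # Backtrace departure points (periodic wrap).
    x_dep = (x - u * dt) % (n * dx)
    i0 = np.floor(x_dep / dx).astype(int) % n
    i1 = (i0 + 1) % n
    frac = (x_dep / dx) - np.floor(x_dep / dx)
    return (1.0 - frac) * q[i0] + frac * q[i1]

# Example: advect a bump one cell to the right with uniform velocity.
q = np.zeros(8)
q[2] = 1.0
q_new = semi_lagrangian_advect(q, u=1.0, dt=1.0, dx=1.0)
```

Because the backtrace interpolates rather than extrapolates, the scheme is unconditionally stable, which is one reason it is popular in graphics fluid solvers.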

2.
Mathematical and computational aspects of a general numerical inverse solution procedure, in the form of a general computer program applicable to weakly swirling turbulent flows, are described. The method requires as input data only flow field distributions of experimental time-mean values and from these deduces distributions of significant turbulent stress and exchange coefficient components, thus providing knowledge about turbulence quantities which would otherwise have to be measured by more sophisticated techniques. Sufficient detail for prospective users is given, together with accuracy checks. The technique is applied to both inert and chemically-reacting swirl flows, and calculations show that previous assumptions of isotropy of the turbulent stress and viscosity components and constancy of Prandtl-Schmidt numbers are not generally valid. The exchange coefficients are shown to be functions of the degree of swirl and position in the flow field; the turbulent viscosity is found to be nonisotropic.

3.
The conservative elastic behavior of soft materials is characterized by a stored energy function whose shape is usually specified a priori, except for some material parameters. There are hundreds of proposed stored energy functions in the literature for different materials. The stored energy function may change under loading due to damage effects, but it may be considered constant during unloading and reloading. The two dominant approaches in the literature to model this damage effect are based either on the Continuum Damage Mechanics framework or on the Pseudoelasticity framework. In both cases, additional assumed evolution functions, with their associated material parameters, are proposed. These proposals are semi-inverse, semi-analytical, model-driven and data-adjusted. We propose an alternative which may be considered a non-inverse, numerical, model-free, data-driven approach. We call this approach WYPiWYG constitutive modeling. We assume neither global functions nor material parameters, but simply solve numerically the differential equations of a set of tests that completely define the behavior of the solid under the given assumptions. In this work we extend the approach to model isotropic and anisotropic damage in soft materials. We obtain the damage evolution numerically from experimental tests. The theory can be used for both hard and soft materials, and the infinitesimal formulation is naturally recovered for infinitesimal strains. In fact, we motivate the formulation in a one-dimensional infinitesimal framework and show that the concepts are immediately applicable to soft materials.

4.
A novel algorithm for color constancy

5.
Advanced Robotics, 2013, 27(11): 1639-1660
To date, many hardware devices for controlling joint stiffness have been proposed for tendon-driven manipulators. However, earlier devices presented problems such as complex structures, increased friction and increased inertia. To overcome these problems, we propose the cylindrical elastic element (CEE), which varies the joint stiffness by changing the internal force among wires. By inserting CEEs into the wire routes, the joint stiffness of a tendon-driven manipulator can be changed depending on the internal force. However, it is difficult to obtain the kinematics because of the complexity of a CEE's deformation. Since a finite element method usually requires much computation time, establishing a simple CEE model is very important for solving the kinematics. This paper presents a numerical framework to approximately solve the kinematics of a one-link manipulator equipped with two CEEs. First, we propose and evaluate several approximate models of a CEE. Using the most useful model, we then demonstrate a numerical method for solving the forward kinematics. Next, by extending this method, we also solve the inverse kinematics. The precision of the proposed methods is discussed by comparing experimental results with numerical results.

6.
Recently an interesting evolutionary mechanism, sensibility, inherited from a concept model of Free Search (FS), was introduced and used for solving network problems. Unfortunately, the original FS is not easy to implement because it requires key knowledge, not clearly defined in the existing literature, to determine the neighborhood space that profoundly affects its performance. This paper therefore designs a new implementation of the FS concept model and proposes a new algorithm, Free Search with Adaptive Differential Evolution Exploitation and Quantum-Inspired Exploration (ADEQFS), to address this issue. In ADEQFS, we focus on designing a new mutation strategy by employing adaptive differential evolution techniques together with concepts and principles from the real-coded quantum-inspired evolutionary algorithm. In addition, we use the crossover operation from the traditional Differential Evolution scheme to alleviate premature convergence. Furthermore, we employ a greedy mechanism to preserve the best solutions found at each generation. The convergence of the proposed algorithm is also analyzed, and a proof of convergence is given using a Markov chain model. Thirty-four optimization test functions with different mathematical characteristics are employed as a benchmark set to test the performance of ADEQFS. The numerical results highlight the improved convergence rate and computational reliability.
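The Differential Evolution mutation and crossover building blocks mentioned in this abstract can be sketched as follows; this is the classic DE/rand/1/bin operator with greedy selection, not the adaptive scheme of ADEQFS, and the population size, bounds, F and CR values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1_bin(pop, i, F=0.5, CR=0.9):
    """Produce a trial vector for individual i via DE/rand/1/bin."""
    n, d = pop.shape
    # Pick three distinct indices, all different from i.
    candidates = [j for j in range(n) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    # Binomial crossover: at least one gene comes from the mutant.
    mask = rng.random(d) < CR
    mask[rng.integers(d)] = True
    return np.where(mask, mutant, pop[i])

def sphere(x):  # simple benchmark objective
    return float(np.sum(x * x))

# One generation with greedy selection on the sphere function.
pop = rng.uniform(-5, 5, size=(10, 4))
pop0 = pop.copy()
for i in range(len(pop)):
    trial = de_rand_1_bin(pop, i)
    if sphere(trial) <= sphere(pop[i]):
        pop[i] = trial
```

The greedy selection at the end is what guarantees that each individual's fitness never worsens from one generation to the next.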

7.
8.
Since the recent appearance of neutrosophic theory as a generalization of fuzzy and intuitionistic fuzzy theories, many multicriteria decision methods have adopted this theory to deal with incomplete and indeterminate data. However, it has not yet been applied to the data envelopment analysis (DEA) methodology. Therefore, this study presents a DEA model with triangular neutrosophic inputs and outputs that considers the truth, indeterminacy, and falsity degrees of each data value. As an alternative, a parametric approach based on what we term the variation degree of a triangular neutrosophic number is developed. This approach transforms a neutrosophic DEA model into an interval DEA model that can be solved using one of many existing techniques. Interval efficiency scores obtained from our numerical example show the flexibility and authenticity of the proposed approach.

9.
10.
A trust quantification model based on multi-dimensional decision attributes in trustworthy networks
Trust relationships in trustworthy networks are inherently among the most complex social relationships, involving factors such as assumptions, expectations, behaviors and environment, and are hard to represent and predict quantitatively with accuracy. Considering the various factors that may influence trust relationships, we propose a new trust quantification model based on multi-dimensional decision attributes. It introduces several decision attributes, including direct trust, a risk function, feedback trust, an incentive function and entity activity, to reason about and evaluate the complexity and uncertainty of trust relationships from multiple perspectives, addressing the poor adaptability of traditional quantification models to dynamically changing environments. In fusing the multi-dimensional decision attributes, the classification weight of each attribute is determined through information entropy theory, avoiding the subjective weight-assignment methods commonly used in the past and improving the weak self-adaptability caused by subjectively assigned classification weights. Simulation experiments show that, compared with existing models of the same kind, the proposed model exhibits more robust dynamic adaptability and clear advantages in security.
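The information-entropy weighting step described in this abstract follows a standard construction: attributes whose values vary more across entities carry more information and receive larger weights. A minimal sketch, in which the three attribute columns and the score matrix are hypothetical:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method for an n-entities x m-attributes score matrix.

    An attribute whose column is (nearly) constant has entropy close to 1
    and therefore contributes (nearly) zero weight.
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    P = X / X.sum(axis=0)                      # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(n)    # entropy per attribute
    d = 1.0 - e                                # degree of divergence
    return d / d.sum()

# Three entities scored on, e.g., direct trust, feedback trust, activity.
scores = [[0.9, 0.5, 0.2],
          [0.8, 0.5, 0.9],
          [0.1, 0.5, 0.4]]
w = entropy_weights(scores)
```

In this example the second attribute is identical for all entities, so it carries no discriminating information and its weight collapses to zero, while the weights still sum to one.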

11.
Discretization is the process of converting numerical values into categorical values. Many discretization techniques exist, but they have various limitations, such as requiring user input on the number of categories or the number of records in each category. We therefore propose a new discretization technique called the low frequency discretizer (LFD) that does not require any user input. Some existing techniques do not require user input, but they rely on assumptions such as the number of records in each interval being the same, or the number of intervals being equal to the number of records in each interval; these assumptions are often difficult to justify. LFD does not require any such assumptions. In LFD, the number of categories and the frequency of each category are not pre-defined but data driven. Other contributions of LFD are as follows. LFD uses low-frequency values as cut points and thus reduces the information loss due to discretization. It uses all other categorical attributes and any numerical attribute that has already been categorized. It considers that the influence of an attribute in the discretization of another attribute depends on the strength of their relationship. We evaluate LFD by comparing it with six existing techniques on eight datasets for three different types of evaluation, namely classification accuracy, imputation accuracy and noise detection accuracy. Our experimental results indicate a significant improvement based on sign test analysis.
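The core idea of cutting at low-frequency values can be illustrated with a toy sketch; this is not the full LFD algorithm (which also exploits relationships between attributes), and the binning scheme and data are assumptions for illustration:

```python
from collections import Counter

def low_frequency_cuts(values, n_bins=10):
    """Toy illustration: bin a numeric attribute and place cut points
    at bins whose frequency is a local minimum, so that interval
    boundaries fall in sparse regions rather than through dense clusters.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    counts = Counter(min(int((v - lo) / width), n_bins - 1) for v in values)
    freqs = [counts.get(b, 0) for b in range(n_bins)]
    cuts = []
    for b in range(1, n_bins - 1):
        if freqs[b] < freqs[b - 1] and freqs[b] <= freqs[b + 1]:
            cuts.append(lo + (b + 0.5) * width)
    return cuts

# Two dense clusters separated by sparse values: cuts land between them.
data = [1.0, 1.1, 1.2, 1.3, 5.0, 9.0, 9.1, 9.2, 9.3]
cuts = low_frequency_cuts(data)
```

Cutting where the data is sparse is what reduces information loss: few records sit near the boundary, so few records are ambiguous about which category they belong to.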

12.
In this paper we review the concepts of Bayesian evidence and Bayes factors, also known as log odds ratios, and their application to model selection. The theory is presented along with a discussion of analytic, approximate and numerical techniques. Specific attention is paid to the Laplace approximation, variational Bayes, importance sampling, thermodynamic integration, and nested sampling and its recent variants. Analogies to statistical physics, from which many of these techniques originate, are discussed in order to provide readers with deeper insights that may lead to new techniques. The utility of Bayesian model testing in the domain sciences is demonstrated by presenting four specific practical examples considered within the context of signal processing in the areas of signal detection, sensor characterization, scientific model selection and molecular force characterization.
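Of the techniques listed in this abstract, the Laplace approximation is the simplest to state: expand the negative log joint around its mode and integrate the resulting Gaussian. A minimal 1-D sketch, using a conjugate Gaussian example (where the approximation happens to be exact) so the result can be checked against the analytic evidence; all numbers are illustrative:

```python
import math

def laplace_log_evidence(neg_log_joint, mode, hess):
    """1-D Laplace approximation of the log evidence:
    log Z ~= -L(mode) + (1/2) log(2*pi) - (1/2) log H,
    where L is the negative log joint and H = L''(mode) > 0.
    """
    return (-neg_log_joint(mode)
            + 0.5 * math.log(2 * math.pi)
            - 0.5 * math.log(hess))

# Conjugate Gaussian example: datum y ~ N(mu, s2), prior mu ~ N(0, t2).
y, s2, t2 = 1.5, 0.5, 2.0

def neg_log_joint(mu):
    return (0.5 * math.log(2 * math.pi * s2) + (y - mu) ** 2 / (2 * s2)
            + 0.5 * math.log(2 * math.pi * t2) + mu ** 2 / (2 * t2))

mode = y * t2 / (s2 + t2)      # posterior mode (= posterior mean here)
hess = 1 / s2 + 1 / t2         # curvature of the negative log joint
approx = laplace_log_evidence(neg_log_joint, mode, hess)

# Analytic evidence: marginally, y ~ N(0, s2 + t2).
exact = (-0.5 * math.log(2 * math.pi * (s2 + t2))
         - y ** 2 / (2 * (s2 + t2)))
```

For non-Gaussian posteriors the same formula gives only an approximation, which is why the heavier machinery (thermodynamic integration, nested sampling) exists.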

13.
Robotics and Computer, 1994, 11(3): 233-244
In this paper, a connectionist model that integrates knowledge-based techniques into neural network approaches for visual pattern classification is presented. We propose a new connectionist structure which has rule-following capability as well as instance-based learning capability. Each node of the proposed network is doubly linked by two types of connections: positive and negative. Such connectionism provides a methodology for constructing the classifier from the rule base and allows expert knowledge to be utilized for effective learning. For visual pattern classification, we present techniques for knowledge representation and utilization using the concepts of fuzzy rules and fuzzy relations. We also discuss some advantageous characteristics of the model: result-explanation capability and rule-refinement capability. Experimental results on handwritten digit classification demonstrate the feasibility of the proposed model.

14.
A color compensation model based on a BP neural network
To address problems caused by gradually changing illumination in machine vision, an image color correction method based on a BP neural network is proposed. The network is trained extensively on a suitable training set to learn the mapping between image pixels before and after the illumination change, thereby building a color constancy model for environments with gradually changing illumination. The method needs no adaptive model with built-in constraints and makes no particular assumptions about the surface properties of the input data; it is self-adaptive and self-learning. Experimental results show that the model achieves good color constancy when recognizing colors under gradually changing daylight in real indoor environments.
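The pixel-to-pixel mapping described in this abstract can be sketched with a tiny backpropagation-trained network; the network size, learning rate and the synthetic diagonal-gain illumination shift below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training pairs: pixels observed under a shifted illuminant
# mapped to reference pixels. A diagonal gain plus an offset stands in
# for the illumination change.
X = rng.uniform(0, 1, size=(200, 3))                  # observed RGB
Y = np.clip(X * [0.8, 1.0, 1.3] + 0.05, 0, 1)         # reference RGB

# One hidden layer with sigmoid units, trained by plain backpropagation.
W1 = rng.normal(0, 0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 3)); b2 = np.zeros(3)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1 + b1)
    return H, H @ W2 + b2

_, out0 = forward(X)
loss0 = np.mean((out0 - Y) ** 2)

lr = 0.5
for _ in range(500):
    H, out = forward(X)
    err = 2 * (out - Y) / len(X)          # scaled output-error signal
    gW2 = H.T @ err; gb2 = err.sum(0)
    dH = err @ W2.T * H * (1 - H)         # backpropagate through sigmoid
    gW1 = X.T @ dH; gb1 = dH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out = forward(X)
loss = np.mean((out - Y) ** 2)
```

Once trained, the same forward pass corrects new pixels captured under the shifted illuminant, which is the sense in which the model provides color constancy.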

15.
In this paper, an experimental validation of some modelling aspects of an uncontrolled bicycle is presented. In numerical models, many physical aspects of the real bicycle are considered negligible, such as the flexibility of the frame and wheels, play in the bearings, and precise tire characteristics. The admissibility of these assumptions has been checked by comparing experimental results with numerical simulation results. The numerical simulations were performed on a three-degree-of-freedom benchmarked bicycle model. For the validation we considered the linearized equations of motion for small perturbations of the upright steady forward motion. The most dubious assumption that was validated in this model was the replacement of the tires by knife-edge wheels rolling without slipping (non-holonomic constraints). The experimental system consisted of an instrumented bicycle without rider. Sensors were present for measuring the roll rate, yaw rate, steering angle, and rear wheel rotation. Measurements were recorded for the case in which the bicycle coasted freely on a level surface. From these measured data, eigenvalues were extracted by means of curve fitting. These eigenvalues were then compared with the results from the linearized equations of motion of the model. As a result, the model appeared to be fairly accurate for the low-speed low-frequency behaviour.

16.
The recent coronavirus disease (COVID-19) outbreak has dramatically increased the public awareness and appreciation of the utility of dynamic models. At the same time, the dissemination of contradictory model predictions has highlighted their limitations. If some parameters and/or state variables of a model cannot be determined from output measurements, its ability to yield correct insights, as well as the possibility of controlling the system, may be compromised. Epidemic dynamics are commonly analysed using compartmental models, and many variations of such models have been used for analysing and predicting the evolution of the COVID-19 pandemic. In this paper we survey the different models proposed in the literature, assembling a list of 36 model structures and assessing their ability to provide reliable information. We address the problem using the control theoretic concepts of structural identifiability and observability. Since some parameters can vary during the course of an epidemic, we consider both the constant and time-varying parameter assumptions. We analyse the structural identifiability and observability of all of the models, considering all plausible choices of outputs and time-varying parameters, which leads us to analyse 255 different model versions. We classify the models according to their structural identifiability and observability under the different assumptions and discuss the implications of the results. We also illustrate with an example several alternative ways of remedying the lack of observability of a model. Our analyses provide guidelines for choosing the most informative model for each purpose, taking into account the available knowledge and measurements.
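The simplest of the compartmental structures surveyed in such work is the basic SIR model; a minimal forward-Euler sketch (rates, initial conditions and step size are illustrative assumptions):

```python
def simulate_sir(beta, gamma, s0, i0, r0=0.0, dt=0.1, steps=1000):
    """Forward-Euler integration of the basic SIR compartmental model:
        S' = -beta*S*I,  I' = beta*S*I - gamma*I,  R' = gamma*I
    with S, I, R as population fractions and constant beta, gamma.
    """
    s, i, r = s0, i0, r0
    traj = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        traj.append((s, i, r))
    return traj

# Illustrative epidemic with basic reproduction number beta/gamma = 3.
traj = simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01)
s_end, i_end, r_end = traj[-1]
```

The identifiability question raised in the abstract arises precisely here: if only a scaled version of I is observed, different (beta, gamma, scale) combinations can produce the same output, so the parameters cannot be recovered uniquely.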

17.
Numerical modelling of wind flow over complex dune topography is an ambitious prospect. There is an increasing need to understand wind flow over complex topography for land-planning purposes, to enable prediction of sediment transport at a particular site. New surveying techniques permit the rapid development of digital terrain models; however, a stumbling block is the ability of Computational Fluid Dynamics (CFD) to emulate the wind flow over such a landscape. To overcome these difficulties, it is important to establish the parameters within which such simulations can operate. This paper details an initial two-dimensional numerical model developed to test various modelling assumptions against experimental field wind data. Mason Bay, Stewart Island, New Zealand was chosen as an undisturbed but accessible experimental site with a prevalent on-shore wind perpendicular to a simple foredune and a complex down-wind parabolic dune system. A complex topographical two-dimensional model with vegetation represented as a roughness was compared against field data along a transect dissecting a dune system. This paper establishes that:
* Replicating the roughness patterns at the surface is important
* The inlet profile should be duplicated with care
* Modelling only a portion of the domain can have an effect on the flow patterns due to outflow effects
* There is a modelling decision to be made between the complexity of the topography and the sophistication of the turbulence model and degree to which vegetation and sand transportation are modelled.
The long-term aim is to instil confidence in numerical techniques so that such technology can be used for predictive purposes.
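The inlet-profile point above is commonly handled in atmospheric CFD with a logarithmic law-of-the-wall profile for neutral conditions; the friction velocity and roughness length below are illustrative values, not those of the Mason Bay study:

```python
import math

KAPPA = 0.41  # von Karman constant

def log_law_profile(z, u_star, z0):
    """Neutral atmospheric-boundary-layer inlet velocity:
    u(z) = (u*/kappa) * ln(z / z0) for z > z0, and 0 otherwise,
    where u* is the friction velocity and z0 the roughness length.
    """
    if z <= z0:
        return 0.0
    return (u_star / KAPPA) * math.log(z / z0)

# Illustrative: friction velocity 0.4 m/s, roughness length 0.01 m.
heights = [0.1, 1.0, 10.0]
profile = [log_law_profile(z, u_star=0.4, z0=0.01) for z in heights]
```

Specifying both the velocity profile and a consistent surface roughness at the inlet avoids the profile decaying downstream before it reaches the dune, which is one way the "duplicated with care" requirement fails in practice.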

18.
Ontologies have been intensively applied for improving multimedia search and retrieval by providing explicit meaning to visual content. Several multimedia ontologies have been recently proposed as knowledge models suitable for narrowing the well known semantic gap and for enabling the semantic interpretation of images. Since these ontologies have been created in different application contexts, establishing links between them, a task known as ontology matching, promises to fully unlock their potential in support of multimedia search and retrieval. This paper proposes and compares empirically two extensional ontology matching techniques applied to an important semantic image retrieval issue: automatically associating common-sense knowledge to multimedia concepts. First, we extend a previously introduced textual concept matching approach to use both textual and visual representation of images. In addition, a novel matching technique based on a multi-modal graph is proposed. We argue that the textual and visual modalities have to be seen as complementary rather than as exclusive sources of extensional information in order to improve the efficiency of the application of an ontology matching approach in the multimedia domain. An experimental evaluation is included in the paper.

19.
Control charts are the most popular Statistical Process Control (SPC) tools used to monitor process changes. When a control chart produces an out-of-control signal, it means that the process has changed. However, control chart signals do not indicate the real time of the process change, which is essential for identifying and removing assignable causes and ultimately improving the process. Identifying the real time of the process change is known as the change-point estimation problem. Most traditional change-point methods are based on maximum likelihood estimators (MLE), which need strict statistical assumptions. In this paper, we first introduce clustering as a potential tool for change-point estimation. Next, we discuss the challenges of employing clustering methods for change-point estimation. Afterwards, based on the concepts of fuzzy clustering and statistical methods, we develop a novel hybrid approach which is able to effectively estimate change-points in processes with either fixed or variable sample size. Using extensive simulation studies, we also show that the proposed approach performs well in all considered conditions in comparison to powerful statistical methods and popular fuzzy clustering techniques. The proposed approach can be employed for processes with either normal or non-normal distributions. It is also applicable to both phase-I and phase-II. Finally, it can estimate the true values of both in- and out-of-control states' parameters.
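The connection between clustering and change-point estimation can be illustrated with a minimal sketch: splitting a sequence at the index that minimises the within-segment sum of squared errors is the contiguous analogue of a two-cluster partition. This is only the underlying idea, not the paper's fuzzy-clustering hybrid, and the data below are synthetic:

```python
def estimate_change_point(xs):
    """Estimate a single change-point as the split index minimising the
    within-segment sum of squared errors over both resulting segments.
    """
    def sse(seg):
        if not seg:
            return 0.0
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    return min(range(1, len(xs)),
               key=lambda t: sse(xs[:t]) + sse(xs[t:]))

# In-control around 0 for 10 observations, then a mean shift to around 2.
data = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3, 0.1, 0.0,
        2.1, 1.9, 2.2, 2.0, 1.8, 2.1, 2.0, 2.2, 1.9, 2.1]
tau = estimate_change_point(data)
```

Note that the control chart itself would typically signal several observations after index 10; the estimator recovers the actual onset of the shift, which is what matters for finding the assignable cause.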

20.
To trust a computer system that is supposed to be secure, it is necessary to predict the degree to which the system's security level can be achieved when operating in a specific environment under cyber attacks. In this paper, we propose a state-based stochastic model for obtaining quantitative security metrics representing the level of a system's security. The main focus of the study is how to model the progression of an attack process over time. The basic assumption of our model is that the time parameter plays the essential role in capturing the nature of an attack process. In practice, the attack process will terminate successfully, possibly after a number of unsuccessful attempts; what matters is estimating how long this takes. The proposed stochastic model is parameterized through a suitable definition of time distributions describing the attacker's actions and the system's reactions over time. For this purpose, probability distribution functions are defined and assigned to transitions of the model to characterize the temporal aspects of attacker and system behavior. With these distributions defined, the stochastic model becomes a semi-Markov chain. This mathematical model is solved analytically to calculate the desired quantitative security metrics, such as mean time to security failure and steady-state security. The proposed method shows a systematic development of stochastic modeling techniques and concepts, used frequently in the area of dependability evaluation, for attack process modeling. Like any other modeling method, the proposed model is constructed based on some underlying assumptions, which are specific to the context of security analysis.
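The mean-time-to-security-failure metric mentioned in this abstract can be computed by first-step analysis on a small absorbing chain. The three-state structure and the numbers below are a hypothetical illustration, far simpler than the paper's model:

```python
def mttsf(h_g, h_a, p_repel):
    """Mean time to security failure for a three-state semi-Markov chain:
    Good -> Attacked (certain, after mean sojourn h_g); from Attacked the
    system returns to Good with probability p_repel or fails otherwise
    (mean sojourn h_a). First-step analysis gives
        m_A = h_a + p_repel * m_G,   m_G = h_g + m_A,
    hence m_G = (h_g + h_a) / (1 - p_repel).
    """
    if not 0.0 <= p_repel < 1.0:
        raise ValueError("p_repel must lie in [0, 1)")
    return (h_g + h_a) / (1.0 - p_repel)

# Illustrative numbers: 100 h between attacks, 2 h mean attack duration,
# 90% of attacks repelled -> MTTSF = 102 / 0.1 = 1020 h.
m = mttsf(h_g=100.0, h_a=2.0, p_repel=0.9)
```

This is the dependability-style calculation the abstract alludes to: only mean sojourn times and branching probabilities enter the mean time to absorption, while the full sojourn-time distributions matter for metrics beyond the mean.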
