Similar Documents
20 similar documents found.
1.
2.
3.
Reliability-based design optimization (RBDO) using the performance measure approach (PMA) for problems with correlated input variables requires a transformation from the correlated input random variables into independent standard normal variables. The two most representative transformations for correlated inputs, the Rosenblatt and Nataf transformations, are investigated. The Rosenblatt transformation requires a joint cumulative distribution function (CDF), so it can be used only if the joint CDF is given or the input variables are independent. In the Nataf transformation, the joint CDF is approximated using the Gaussian copula, the marginal CDFs, and the covariance of the correlated inputs. Using the generated CDF, the correlated input variables are transformed into correlated normal variables, which are then transformed into independent standard normal variables through a linear transformation. The Nataf transformation can thus accurately estimate joint normal and some lognormal CDFs of the input variables, which cover broad engineering applications. This paper develops a PMA-based RBDO method for problems with correlated random input variables using the Gaussian copula. Several numerical examples show that correlated random input variables significantly affect RBDO results.
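As a rough illustration of the Nataf transformation discussed above, here is a minimal Python sketch (not the paper's implementation) that maps correlated lognormal inputs to independent standard normals via the Gaussian copula. For brevity it assumes the given correlation matrix already describes the underlying normal variables; the full method adjusts the correlation by solving an integral equation.

```python
# Minimal Nataf-style sketch: correlated inputs with known marginals are
# mapped to independent standard normals via the Gaussian copula. The
# supplied correlation matrix is assumed to be that of the underlying
# normal variables (the full method solves for this adjustment).
import numpy as np
from scipy import stats

def nataf_to_independent_normal(x, marginals, corr_z):
    """x: (n, d) correlated samples; marginals: frozen scipy distributions;
    corr_z: assumed correlation matrix of the underlying normals."""
    # Step 1: marginal CDF, then inverse normal CDF -> correlated normals z.
    z = np.column_stack([stats.norm.ppf(m.cdf(x[:, i]))
                         for i, m in enumerate(marginals)])
    # Step 2: decorrelate with the Cholesky factor of corr_z.
    L = np.linalg.cholesky(corr_z)
    return np.linalg.solve(L, z.T).T   # independent standard normals

# Usage: two correlated lognormal inputs with normal-space correlation 0.5.
rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.5], [0.5, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], corr, size=10_000)
x = np.exp(z)                                   # correlated lognormals
u = nataf_to_independent_normal(x, [stats.lognorm(1), stats.lognorm(1)], corr)
print(np.round(np.corrcoef(u.T), 2))            # ~ identity matrix
```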

4.
5.
The finite element method is extensively used for solving various problems in engineering practice. Its input requires material properties, which are generally known only imprecisely. In this paper an elastoplastic analysis (Drucker–Prager yield criterion) is performed with the properties treated as uncertain parameters (elastic modulus, Poisson's ratio, cohesion and angle of internal friction). Two different methodologies are studied and compared:
  • a classical probabilistic approach, in which the properties are treated as random variables: stochastic finite element methods using Monte Carlo simulation;
  • a possibilistic approach, using a model based on fuzzy set theory.
Some results are presented to point out the main characteristics of the two methodologies; a minimal sampling sketch of the probabilistic approach follows below. [Lima BSLP. DSc thesis, (1996).]
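A minimal sketch of the probabilistic approach from the first bullet, assuming illustrative distributions for the four properties; `response` is a hypothetical stand-in for the elastoplastic finite element solve, which is problem-specific.

```python
# Monte Carlo propagation of uncertain material properties through a model.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
E   = rng.normal(30e9, 3e9, n)                       # elastic modulus [Pa]
nu  = rng.uniform(0.25, 0.35, n)                     # Poisson's ratio
c   = rng.lognormal(np.log(20e3), 0.2, n)            # cohesion [Pa]
phi = rng.normal(np.deg2rad(30), np.deg2rad(2), n)   # friction angle [rad]

def response(E, nu, c, phi):
    # Placeholder: a smooth function standing in for the FE solve.
    return c * np.tan(phi) / E * (1 + nu)

u = response(E, nu, c, phi)
print(f"mean = {u.mean():.3e}, std = {u.std():.3e}")
print(f"P(u > mean + 2*std) = {(u > u.mean() + 2 * u.std()).mean():.4f}")
```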

6.
Often, about the same real-life system, we have both measurement-related probabilistic information expressed by a probability measure P(S) and expert-related possibilistic information expressed by a possibility measure M(S). To get the most adequate idea about the system, we must combine these two pieces of information. For this combination, R. Yager, borrowing an idea from fuzzy logic, proposed to use a t-norm f_&(a, b), such as the product f_&(a, b) = a·b, i.e., to consider the set function f(S) = f_&(P(S), M(S)). A natural question is: can we uniquely reconstruct the two parts of knowledge from this function f(S)? In our previous paper, we showed that such a unique reconstruction is possible for the product t-norm; in this paper, we extend this result to a general class of t-norms. © 2011 Wiley Periodicals, Inc.
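A toy sketch of Yager's combination on a three-element universe, assuming made-up probability and possibility distributions; it just evaluates f(S) = f_&(P(S), M(S)) with the product t-norm for every subset S.

```python
# Fusing a probability measure P and a possibility measure M with the
# product t-norm on all subsets of a small finite universe.
from itertools import chain, combinations

universe = ["a", "b", "c"]
p = {"a": 0.5, "b": 0.3, "c": 0.2}   # probability of each outcome
m = {"a": 1.0, "b": 0.7, "c": 0.4}   # possibility of each outcome

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def P(S):
    return sum(p[x] for x in S)                      # probabilities add
def M(S):
    return max((m[x] for x in S), default=0.0)       # possibilities take max

for S in subsets(universe):
    print(set(S) or "{}", round(P(S) * M(S), 3))     # f(S) = P(S) * M(S)
```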

7.
The Bayesian method is widely used to identify a joint distribution, which is modeled by marginal distributions and a copula. The joint distribution can be identified by a one-step procedure, which directly tests all candidate joint distributions, or by a two-step procedure, which first identifies the marginal distributions and then the copula. The weight-based Bayesian method using the two-step procedure and the Markov chain Monte Carlo (MCMC)-based Bayesian method using the one-step and two-step procedures were recently developed. In this paper, a one-step weight-based Bayesian method and a two-step MCMC-based Bayesian method using parametric marginal distributions are proposed. Since comparison studies among these Bayesian methods have not been thoroughly carried out, the weight-based and MCMC-based Bayesian methods using the one-step and two-step procedures are compared through simulation studies to see which method identifies a correct joint distribution accurately and efficiently. It is validated that the two-step weight-based Bayesian method has the best performance.
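To make the weight idea concrete, here is a hedged sketch of assigning posterior weights to candidate marginal distributions. It approximates the Bayesian evidence with a crude BIC term rather than the paper's actual weight computation, and the gamma-distributed data are made up.

```python
# Posterior-style weights for candidate marginals via a BIC approximation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.gamma(shape=2.0, scale=1.5, size=300)

candidates = {"normal": stats.norm, "lognormal": stats.lognorm,
              "gamma": stats.gamma, "weibull": stats.weibull_min}

log_ev = {}
for name, dist in candidates.items():
    params = dist.fit(data)                      # maximum likelihood fit
    ll = dist.logpdf(data, *params).sum()
    k = len(params)
    log_ev[name] = ll - 0.5 * k * np.log(len(data))   # BIC/2 surrogate

# Normalize to weights, assuming equal prior probability per candidate.
mx = max(log_ev.values())
w = {n: np.exp(v - mx) for n, v in log_ev.items()}
total = sum(w.values())
for n in sorted(w, key=w.get, reverse=True):
    print(f"{n:10s} weight = {w[n] / total:.3f}")
```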

8.
Input data representation has a decisive effect on the convergence of neural learning. In this paper, within an analytical and statistical framework, the effect of the distribution characteristics of the input pattern vectors on the performance of the back-propagation (BP) algorithm is established for a function approximation problem in which the parameters of an articulatory speech synthesizer are estimated from acoustic input data. The aim is to determine the optimum statistical characteristics of the acoustic input patterns in order to improve neural learning. Improvement is obtained through a modification of the statistical characteristics of the input data, which effectively reduces the occurrence of node saturation in the hidden layer.
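A minimal sketch of the kind of remedy described, assuming a single sigmoid hidden layer with random weights: standardizing the input pattern vectors keeps pre-activations in the sigmoid's responsive range and measurably lowers the fraction of saturated hidden nodes. The 0.9 saturation threshold and the data are illustrative.

```python
# Measuring hidden-node saturation before and after input standardization.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(5.0, 4.0, size=(1000, 16))    # raw acoustic-like inputs
W = rng.normal(0.0, 1.0, size=(16, 8))       # one hidden layer's weights

def saturation_rate(inputs, threshold=0.9):
    h = 1.0 / (1.0 + np.exp(-inputs @ W))    # sigmoid activations
    return np.mean((h > threshold) | (h < 1 - threshold))

X_std = (X - X.mean(axis=0)) / X.std(axis=0) # zero mean, unit variance
print(f"saturated nodes, raw inputs:          {saturation_rate(X):.2%}")
print(f"saturated nodes, standardized inputs: {saturation_rate(X_std):.2%}")
```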

9.
This paper discusses a common fusion method, the minimum sum-of-squared-distances criterion, showing that the fusion result obtained by linear weighting under this criterion is optimal, and pointing out an error in the information retrieval literature. It then analyzes the retrieval performance of fusion under the minimum sum-of-distances criterion, finding that the fusion result obtained under that criterion is the closest to the original retrieval results. Finally, the method's results are verified with examples.
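A quick numeric check of the claim above, under illustrative data: the point minimizing the weighted sum of squared distances to the retrieval result vectors coincides with their weighted mean, i.e. the linear-weighting fusion.

```python
# The minimizer of a weighted sum of squared distances is the weighted mean.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
results = rng.random((5, 4))                  # 5 retrieval result vectors
wts = np.array([0.1, 0.2, 0.3, 0.25, 0.15])  # system weights, sum to 1

def sum_sq_dist(x):
    return np.sum(wts * np.sum((results - x) ** 2, axis=1))

opt = minimize(sum_sq_dist, np.zeros(4)).x   # numerical minimizer
centroid = wts @ results                     # closed-form weighted mean
print(np.allclose(opt, centroid, atol=1e-4)) # True
```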

10.
In the last decade, many researchers have devoted considerable effort to the problem of image restoration. However, no recent study has comparatively evaluated these techniques under conditions where a user may have different kinds of a priori information about the ideal image. To this end, we briefly survey some recent techniques and compare the performance of a linear space-invariant (LSI) maximum a posteriori (MAP) filter, an LSI reduced update Kalman filter (RUKF), an edge-adaptive RUKF, and an adaptive convex-type constraint-based restoration implemented via the method of projection onto convex sets (POCS). The mean square errors resulting from the LSI algorithms are compared with that of the finite impulse response Wiener filter, which is the theoretical limit in this case. We also compare the results visually in terms of their sharpness and the appearance of artifacts. As expected, the space-variant restoration methods, which adapt to local image properties, obtain the best results.
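As a small taste of the Wiener baseline mentioned above, here is a sketch using SciPy's local-statistics (adaptive) Wiener filter on a synthetic noisy image; this is not the FIR Wiener filter from the paper, just a readily available stand-in.

```python
# Adaptive Wiener filtering of a synthetic noisy image, with MSE comparison.
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(7)
clean = np.outer(np.hanning(64), np.hanning(64))   # smooth test image
noisy = clean + rng.normal(0, 0.05, clean.shape)

restored = wiener(noisy, mysize=5)                 # 5x5 local statistics
mse = lambda a, b: np.mean((a - b) ** 2)
print(f"MSE noisy:    {mse(noisy, clean):.5f}")
print(f"MSE restored: {mse(restored, clean):.5f}")
```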

11.
The fast aging of many Western and Eastern societies and their increasing reliance on information technology create a compelling need to reconsider older users' interactions with computers. Changes in perceptual and motor skills that often accompany the aging process have important implications for the design of information input devices. This paper summarises the results of two comparative studies on information input with 90 subjects aged between 20 and 75 years. In the first study, three input devices (mouse, touch screen and eye-gaze control) were analysed concerning efficiency, effectiveness and subjective task difficulty with respect to the age group of the computer user. In the second study, an age-differentiated analysis of hybrid user interfaces for input confirmation was conducted, combining eye-gaze control with additional input devices. Input confirmation was done with the space bar of a PC keyboard, speech input or a foot pedal. The results of the first study show that, regardless of participants' age group, the best performance in terms of short execution time results from touch screen input, an effect even more pronounced for the elderly. Regarding the hybrid interfaces, the lowest mean execution time, error rate and task difficulty were found for the combination of eye-gaze control with the space bar. In conclusion, we recommend direct input devices, particularly a touch screen, for the elderly; for user groups with severe motor impairments, we suggest eye-gaze input.

12.
Teng Fei, Liu Peide, Liang Xia. Artificial Intelligence Review, 2021, 54(5): 3431–3471.
In group decision-making problems, decision makers prefer to use several linguistic terms to describe their own perception and knowledge, and give their preference...

13.
14.
Probabilistic latent semantic analysis (PLSA) is a method for computing term and document relationships from a document set. The probabilistic latent semantic index (PLSI) has been used to store PLSA information, but unfortunately the PLSI uses excessive storage space relative to a simple term frequency index, which causes lengthy query times. To overcome the storage and speed problems of the PLSI, we introduce the probabilistic latent semantic thesaurus (PLST), an efficient and effective method of storing the PLSA information. We show that, through methods such as document thresholding and term pruning, we are able to maintain the high-precision results found using PLSA while using only a very small fraction (0.15%) of the storage space of the PLSI.
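A minimal sketch of the term-pruning step, assuming a random Dirichlet stand-in for a trained P(term | topic) matrix: entries below a threshold are dropped, each topic is renormalized, and the result is stored sparsely. The 1e-4 threshold is illustrative, not the paper's setting.

```python
# Pruning small entries of a term-given-topic matrix and storing it sparsely.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(5)
n_terms, n_topics = 20_000, 64
p_w_z = rng.dirichlet(np.full(n_terms, 0.01), size=n_topics).T   # P(w|z)

threshold = 1e-4
pruned = np.where(p_w_z >= threshold, p_w_z, 0.0)
pruned /= pruned.sum(axis=0, keepdims=True)   # renormalize each topic
sp = sparse.csr_matrix(pruned)                # compact storage

print(f"entries kept: {sp.nnz / p_w_z.size:.2%} of the dense matrix")
```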

15.
Currently, the most popular Chinese word segmentation methods are machine learning methods based on statistical models. Statistical methods are generally trained on manually annotated sentence-level corpora, but they often ignore the dictionary information accumulated through many years of manual annotation. This information is especially valuable in cross-domain settings, where sentence-level annotated resources for the target domain are scarce. How to fully and effectively exploit dictionary information in statistical models is therefore a problem well worth studying. Recent work on this problem falls roughly into two categories according to how the dictionary information is incorporated: one incorporates dictionary features into character-based sequence labeling models, and the other incorporates such features into word-based beam search models. This paper compares the two categories of methods and further combines them. Experiments show that after combining the two, the dictionary information is exploited more fully, yielding better performance on both in-domain and cross-domain tests.
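A minimal sketch of the first integration style (dictionary features for a character-based labeler), assuming a toy dictionary: for each character it records the length of the longest dictionary word beginning and ending at that position, features that would then feed a sequence labeling model such as a CRF.

```python
# Dictionary-match features for character-based Chinese word segmentation.
def dict_features(sentence, dictionary, max_len=4):
    feats = []
    for i in range(len(sentence)):
        begin = end = 0
        for L in range(1, max_len + 1):
            if sentence[i:i + L] in dictionary:
                begin = L                  # longest word starting at i
            if i - L + 1 >= 0 and sentence[i - L + 1:i + 1] in dictionary:
                end = L                    # longest word ending at i
        feats.append({"char": sentence[i], "begin": begin, "end": end})
    return feats

dictionary = {"北京", "大学", "北京大学", "生活"}
for f in dict_features("北京大学生活", dictionary):
    print(f)
```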

16.
We provide a complete record of the methodology that allowed us to evolve BrilliAnt, the winner of the Ant Wars contest. Ant Wars contestants are virtual ants collecting food on a grid board in the presence of a competing ant. BrilliAnt was evolved through competitive one-population coevolution using genetic programming and fitnessless selection. In this paper, we detail the evolutionary setup that led to BrilliAnt's emergence, assess its direct and indirect human-competitiveness, and describe the behavioral patterns observed in its strategy.
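A minimal sketch of fitnessless selection under stated assumptions: two individuals are drawn at random and the winner of a direct match between them is selected, with no explicit fitness ever computed. `play_game` is a hypothetical stand-in for an Ant Wars match.

```python
# Fitnessless selection: the winner of a head-to-head game is selected.
import random

def play_game(a, b):
    # Placeholder match: higher "skill" value tends to win.
    return a if random.random() < a / (a + b) else b

def fitnessless_selection(population):
    a, b = random.sample(population, 2)   # no fitness evaluation needed
    return play_game(a, b)                # winner becomes a parent

population = [random.uniform(0.1, 1.0) for _ in range(50)]
parents = [fitnessless_selection(population) for _ in range(10)]
print([round(p, 2) for p in parents])
```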

17.
Received signal strength indication fingerprinting (RSSIF) is an indoor localization technique that exploits the prevalence of wireless local area networks (WLANs). Past research into RSSIF systems has produced a number of algorithmic methods that provide effective indoor positioning. A key limitation, however, is that the performance of these methods depends heavily on practical implementation parameters and the nature of the test-bed environment. As a result, past research has tended to compare only algorithms of the same paradigm on a specific test-bed, making it difficult to judge and compare their performance objectively. There is, therefore, a critical need for a study that addresses this gap in the literature. To this end, this paper compares a range of RSSIF methods, drawn from both probabilistic and deterministic paradigms, on a common test-bed. We evaluate their localization efficiency and accuracy, and also propose a number of improvements and modifications. In particular, we report on the impact of dense and transient access points (APs), two problems that stem from the popularity of WLANs. Our results show that methods averaging the distance to the k nearest neighbors in signal space perform well with reduced dimensions. Moreover, we show the benefits of using the standard deviation of RSSI values to exclude transient APs. Finally, we outline a shortcoming of the Bayesian algorithm in uncontrolled environments with highly variable APs and RSSI values, and propose an extension that uses a mode filter to restore its accuracy with increasing samples.
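A minimal sketch of the signal-space k-nearest-neighbor scheme together with the standard-deviation filter for transient APs; the fingerprint database, the 6 dB threshold, and k = 4 are all illustrative.

```python
# k-NN RSSI fingerprinting with a std-dev filter to exclude transient APs.
import numpy as np

rng = np.random.default_rng(9)
n_fp, n_ap = 100, 8
fingerprints = rng.uniform(-90, -40, (n_fp, n_ap))   # RSSI database [dBm]
positions = rng.uniform(0, 50, (n_fp, 2))            # survey coordinates [m]

def locate(rssi_samples, k=4, std_threshold=6.0):
    # rssi_samples: (n_samples, n_ap) readings taken at the unknown spot.
    stable = rssi_samples.std(axis=0) < std_threshold  # drop transient APs
    query = rssi_samples.mean(axis=0)
    d = np.linalg.norm(fingerprints[:, stable] - query[stable], axis=1)
    nearest = np.argsort(d)[:k]
    return positions[nearest].mean(axis=0)   # average of k nearest neighbors

samples = fingerprints[17] + rng.normal(0, 2, (20, n_ap))
samples[:, 0] = rng.uniform(-90, -40, 20)    # AP 0 behaves transiently
print(np.round(locate(samples), 1), "vs true", np.round(positions[17], 1))
```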

18.
A wide variety of uncertainty propagation methods exist in the literature; however, their relative merits are not well understood. In this paper, a comparative study of the performance of several representative uncertainty propagation methods, including a few newly developed methods that have received growing attention, is performed. Full factorial numerical integration, the univariate dimension reduction method, and the polynomial chaos expansion method are implemented and applied to several test problems. They are tested under different settings of performance nonlinearity, distribution types of the input random variables, and magnitude of the input uncertainty. The performance of these methods is compared for moment estimation, tail probability calculation, and probability density function construction, corresponding to a wide variety of design-under-uncertainty scenarios such as robust design and reliability-based design optimization. The insights gained are expected to guide designers in choosing the most applicable uncertainty propagation technique in design under uncertainty.
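A toy comparison in the spirit of the study, assuming the performance function g(X) = X³ + X with X ~ N(0, 1): crude Monte Carlo versus Gauss–Hermite quadrature, the one-dimensional building block of full factorial numerical integration. The exact mean and variance are 0 and 22.

```python
# Moment estimation: Monte Carlo vs. Gauss-Hermite quadrature.
import numpy as np

g = lambda x: x**3 + x   # toy nonlinear performance function

# Crude Monte Carlo.
rng = np.random.default_rng(2)
samples = g(rng.standard_normal(100_000))
print(f"Monte Carlo:   mean={samples.mean():.3f}, var={samples.var():.3f}")

# Gauss-Hermite quadrature; the change of variables x = sqrt(2)*t maps the
# physicists' weight exp(-t^2) to the standard normal density.
t, w = np.polynomial.hermite.hermgauss(8)
gw = w / np.sqrt(np.pi)
vals = g(np.sqrt(2.0) * t)
gh_mean = np.sum(gw * vals)
gh_var = np.sum(gw * vals**2) - gh_mean**2
print(f"Gauss-Hermite: mean={gh_mean:.3f}, var={gh_var:.3f}")  # exact: 0, 22
```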

19.
This paper explores possibilities for cross-fertilization between interpretive approaches and other approaches for performing the initial analysis of an information system as part of an effort to redesign and improve it. The paper presents a hypothetical situation concerning the analysis of a loan approval system in a large bank. It assumes that ethnographers observed three systems analysis projects that applied different approaches in three identical banks. It uses hypothetical accounts of the three analysis efforts to propose likely differences in the process and in the results. These differences illustrate possible opportunities for cross-fertilization that might make each approach more powerful and reliable. The paper concludes that the most likely direction for cross-fertilization is from interpretive approaches to the other approaches. An earlier version of this paper was presented at the First International Workshop on Interpretive Approaches to Information Systems and Computing Research, SIG-IAM, Brunel University, July 25–27, 2002, to motivate discussion about the applications, strengths, and limitations of interpretive approaches and to help in the further development of systems analysis methods.

20.
In this paper, we attempt to establish the usefulness of the Lanczos solver with a preconditioning technique over preconditioned conjugate gradient (CG) solvers. We present a detailed comparative study with respect to convergence, speed and CPU time, considering appropriate boundary value problems.
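A hedged sketch of such a comparison using SciPy's CG and the Lanczos-based MINRES solver on the same symmetric positive definite system with a Jacobi preconditioner; the 1-D Poisson matrix stands in for the paper's boundary value problems, and Jacobi is trivial here since the diagonal is constant.

```python
# Comparing preconditioned CG and Lanczos-based MINRES on a 1-D Poisson system.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg, minres, LinearOperator

n = 500
A = sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
# Jacobi preconditioner: multiply by the inverse of diag(A).
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

for name, solver in [("CG", cg), ("MINRES", minres)]:
    iters = []
    x, info = solver(A, b, M=M, callback=lambda xk: iters.append(1))
    res = np.linalg.norm(b - A @ x)
    print(f"{name}: converged={info == 0}, iterations={len(iters)}, "
          f"residual={res:.2e}")
```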
