20 similar documents were found.
1.
2.
3.
This article describes some of the early developments that can now be viewed as steps toward program control and the modern concept of a stored program. In particular, it discusses early automatic devices, Babbage's contributions set against the background of the technology of his day, the contributions of some of his direct successors, and the genesis of the stored-program idea.
4.
5.
We present a mechanism that allows any nonlinear theory of electrodynamics to be described as a consequence of coupling the electromagnetic field to gravity in the presence of a vacuum represented by the cosmological constant. We emphasize the exclusively catalytic role played by gravity.
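For background only, nonlinear electrodynamics is conventionally described by a Lagrangian that depends nonlinearly on the field invariant; the sketch below restates this standard notation together with the Einstein-Hilbert term and cosmological constant, and does not reproduce the specific coupling mechanism proposed in the paper.

```latex
% Standard notation only (not the paper's specific mechanism): gravity with a
% cosmological constant coupled to a generic nonlinear electrodynamics Lagrangian.
S = \int d^{4}x \, \sqrt{-g} \left[ \frac{R - 2\Lambda}{2\kappa} + \mathcal{L}(F) \right],
\qquad F \equiv F_{\mu\nu} F^{\mu\nu},
\qquad \mathcal{L}(F) = -\tfrac{1}{4} F \ \text{recovers Maxwell theory.}
```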
6.
W.G. Hoover, T.G. Pierce, C.G. Hoover, J.O. Shugart, C.M. Stein, A.L. Edwards, Computers & Mathematics with Applications, 1994, 28(10-12)
We explore the relationship of Monaghan's version of “smoothed-particle hydrodynamics,” here called “smoothed-particle applied mechanics,” to nonequilibrium molecular dynamics. We first use smoothed particles to model the simplest possible linear transport problems, as well as a liquid-drop problem. We then consider both gas-phase and dense-fluid versions of Rayleigh-Bénard convection, all in two space dimensions. We also discuss the possibility of combining the microscopic and macroscopic techniques in a hybrid scheme well-suited to the massively-parallel modelling of large-scale nonequilibrium flows.
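As a rough illustration of the smoothed-particle machinery referred to above (not the authors' code), the sketch below estimates a density field by summing a smoothing kernel over particle positions; the Lucy kernel, lattice layout, and parameter values are assumptions for illustration.

```python
# Minimal sketch of smoothed-particle (SPH) density estimation in 2D, in the
# spirit of Monaghan-style "smoothed-particle applied mechanics". The Lucy
# kernel, particle layout, and parameters are illustrative assumptions.
import numpy as np

def lucy_kernel_2d(r, h):
    """Lucy's normalized smoothing kernel in two dimensions (support radius h)."""
    q = r / h
    w = (5.0 / (np.pi * h**2)) * (1.0 + 3.0 * q) * (1.0 - q) ** 3
    return np.where(q < 1.0, w, 0.0)

def sph_density(positions, masses, h):
    """Smoothed density at each particle: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]   # pairwise displacements
    r = np.linalg.norm(diff, axis=-1)                       # pairwise distances
    return (masses[None, :] * lucy_kernel_2d(r, h)).sum(axis=1)

# Example: particles on a small square lattice
pts = np.array([[i, j] for i in range(10) for j in range(10)], dtype=float)
rho = sph_density(pts, masses=np.ones(len(pts)), h=2.5)
print(rho.mean())
```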
7.
It is widely believed that, in molecular dynamics (MD) simulations, round-off errors can cause numerical irreversibility, since standard MD employs floating-point arithmetic and round-off errors cannot be avoided. To investigate the character of this numerical irreversibility, the ‘bit-reversible algorithm’, which is completely time-reversible and free from any round-off error, is used as a test bed. This study clearly demonstrates that, apart from the degree of stability of the system, the appearance of irreversibility is related to the ‘quantity’ of the controlled noise. By adding controlled noise of an appropriate ‘quantity’ to the bit-reversible simulation, the character of the numerical irreversibility in standard MD is revealed.
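A minimal sketch of the time-reversibility idea referred to above, assuming a Levesque-Verlet style integer ("bit-reversible") central-difference scheme; the harmonic force and scaling constants are illustrative assumptions, not the paper's exact algorithm or noise-injection procedure.

```python
# Sketch of a bit-reversible integrator: positions are stored as integers, so
# the central-difference update
#   x_{n+1} = 2*x_n - x_{n-1} + round(scale * F(x_n))
# can be undone exactly, with no round-off error.

SCALE = 2**20          # integer position units per physical length unit (assumed)
DT2_FORCE = 1.0e-4     # dt^2 * force constant, in physical units (assumed)

def force_term(x_int):
    """Integer increment from the force evaluated at integer position x_int."""
    x = x_int / SCALE                       # back to physical units
    f = -x                                  # harmonic force F(x) = -x (illustrative)
    return int(round(SCALE * DT2_FORCE * f))

def step(x_prev, x_curr):
    """One forward step of the bit-reversible central-difference scheme."""
    return x_curr, 2 * x_curr - x_prev + force_term(x_curr)

def step_back(x_curr, x_next):
    """Exact inverse of step(): recovers the previous position bit for bit."""
    return 2 * x_curr - x_next + force_term(x_curr), x_curr

# Run forward 10000 steps, then backward: the initial state is recovered exactly.
x_prev, x_curr = int(0.5 * SCALE), int(0.5 * SCALE)
state = (x_prev, x_curr)
for _ in range(10000):
    state = step(*state)
for _ in range(10000):
    state = step_back(*state)
assert state == (x_prev, x_curr)
```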
8.
9.
Both sample entropy and approximate entropy are measures of complexity. The two methods have received a great deal of attention in the last few years and have been successfully verified and applied in biomedical and many other applications. However, the algorithms proposed in the literature require O(N^2) execution time, which is not fast enough for online applications or for applications with long data sets. To accelerate computation, the authors of the present paper have developed a new algorithm that reduces the computational time to O(N^(3/2)) using O(N) storage. As biomedical data are often recorded as integer values, the computation time can be further reduced to O(N) using O(N) storage. Execution times on ECG, EEG, RR, and DNA signals show an improvement of more than 100 times over the conventional O(N^2) method for N = 80,000 (N = length of the signal). Furthermore, an adaptive version of the new algorithm has been developed to speed up the computation for short data lengths. Experimental results show an improvement of more than 10 times over the conventional method for N > 4000.
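For reference, the conventional quadratic-time computation that the faster algorithm replaces looks roughly like the following; the embedding dimension m and tolerance r are the usual SampEn parameters, and this sketch is only the baseline definition, not the authors' O(N^(3/2)) or O(N) method.

```python
# Baseline O(N^2) sample entropy (SampEn), shown only to illustrate the
# conventional method; SampEn = -ln(A/B), where B counts matching template
# pairs of length m and A counts matching pairs of length m+1.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()                          # tolerance as a fraction of the signal SD
    n_templates = len(x) - m                   # same template count for lengths m and m+1

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n_templates)])
        count = 0
        for i in range(n_templates - 1):       # O(N^2) pairwise comparison, no self-matches
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= tol))     # Chebyshev distance within tolerance
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(2000)))
```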
10.
It is well known that the EDVAC was the first general-purpose electronic digital stored-program computer to be designed. What is not so well known is that the EDVAC was nowhere near the first computer to be operational, was not actually constructed according to the initial design, was not reliable when constructed, and was eventually so heavily modified that it would have been almost unrecognizable to the original design team. The origins, designs, and construction of EDVAC are discussed. EDVAC's software and later modifications are reviewed.
11.
Estimation of Distribution Algorithms (EDAs) have been proposed as an extension of genetic algorithms. In this paper we explain the relationship of EDAs to algorithms developed in statistics, artificial intelligence, and statistical physics. The major design issues are discussed within a general interdisciplinary framework. It is shown that maximum-entropy approximations play a crucial role. All proposed algorithms try to minimize the Kullback-Leibler divergence KLD between the unknown distribution p(x) and a class q(x) of approximations. However, the Kullback-Leibler divergence is not symmetric. Approximations which assume that the function to be optimized is additively decomposed (ADF) minimize KLD(q||p), whereas methods which learn the approximate model from data minimize KLD(p||q); the latter minimization is identical to maximizing the log-likelihood. Three classes of algorithms are discussed. FDA uses the ADF to compute an approximate factorization of the unknown distribution; the factors are marginal distributions whose values are computed from samples. The second class is represented by the Bethe-Kikuchi approach, which has recently been rediscovered in statistical physics; here the values of the marginals are computed from a difficult constrained minimization problem. The third class learns the factorization from the data. We analyze our learning algorithm LFDA in detail. It is shown that learning faces two problems: first, to detect the important dependencies between the variables, and second, to create an acyclic Bayesian network of bounded clique size.
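For reference, the two directions of the divergence contrasted above are (standard definitions, not new material from the paper):

```latex
\mathrm{KLD}(q \,\|\, p) = \sum_{x} q(x) \ln \frac{q(x)}{p(x)},
\qquad
\mathrm{KLD}(p \,\|\, q) = \sum_{x} p(x) \ln \frac{p(x)}{q(x)} .
```

Minimizing KLD(p||q) over q is equivalent to maximizing the expected log-likelihood E_p[ln q(x)], since the entropy of p does not depend on q; this is the sense in which the data-driven minimization equals maximum-likelihood learning.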
12.
13.
Landsat investigations offered European researchers the opportunity to begin research and development programmes in remote sensing technology. During the 1970s, United States satellites, together with aircraft sensors, provided the data necessary to improve methods. During the following decade Europe has launched, or prepared the launch of, a generation of advanced Earth observation satellites, and its scientific community has prepared for the operational use of the corresponding technology.
14.
Information Sciences, 1986, 40(2): 165-174
A new nonprobabilistic entropy measure is introduced in the context of fuzzy sets or messages. Fuzzy units, or fits, replace bits in a new framework of fuzzy information theory. An appropriate measure of entropy or fuzziness of messages is shown to be a simple ratio of distances: the distances between the fuzzy message and its nearest and farthest nonfuzzy neighbors. Fuzzy conditioning is examined as the degree of subsethood (submessagehood) of one fuzzy set or message in another. This quantity is shown to behave as a conditional probability in many contexts. It is also shown that the entropy of A is the degree to which A ∪ A^c is a subset of A ∩ A^c, an intuitive relationship that cannot occur in probability theory. This theory of subsethood is then shown to solve one of the major problems with Bayes-theorem learning and its variants: the requirement that the space of alternatives be partitioned into disjoint exhaustive hypotheses. Any fuzzy subsets will do. However, a rough inverse relationship holds between the number and fuzziness of partitions and the information gained from experience. All results reduce to fuzzy cardinality.
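A small Python sketch of the fit-based measures described above, assuming membership vectors in [0, 1], min/max as intersection/union, and sigma-counts (sums of memberships) as fuzzy cardinality; the function names are illustrative.

```python
# Sketch of the fuzzy entropy and subsethood measures described above.
import numpy as np

def sigma_count(a):
    """Fuzzy cardinality: sum of membership degrees (fits)."""
    return float(np.sum(a))

def fuzzy_entropy(a):
    """E(A) = count(A ∩ A^c) / count(A ∪ A^c): the ratio described in the abstract."""
    a = np.asarray(a, dtype=float)
    return sigma_count(np.minimum(a, 1 - a)) / sigma_count(np.maximum(a, 1 - a))

def subsethood(a, b):
    """S(A, B): degree to which A is contained in B (behaves like a conditional probability)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return sigma_count(np.minimum(a, b)) / sigma_count(a)

A = np.array([0.1, 0.4, 0.5, 0.9])
print(fuzzy_entropy(A))        # 0 for crisp sets, 1 for the maximally fuzzy set
print(subsethood(A, 1 - A))    # fuzzy conditioning example
```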
15.
Variable precision rough sets (VPRS) are an important tool for fuzzy decision problems. Image edge information is inherently uncertain and fuzzy, and the quality of image segmentation depends directly on how precisely edge pixels are judged, so variable precision rough sets can represent image edges more accurately. This paper extends the classical image rough set model to an image variable precision rough set model and applies it to edge determination in grayscale images. Using the upper and lower approximations of variable precision rough sets, variable precision gray-scale morphological operators are constructed, and, based on the definition of rough entropy for grayscale images, an image segmentation algorithm based on VPRS rough entropy is proposed. For noisy images, the method uses the variable precision rough set model to identify the object, background, and boundary pixel sets, tolerating some noise points when determining the approximation sets under different parameters, and thus obtains better gray edge images. Experimental results show that, because the variable precision gray-scale morphological operators avoid a complex parameter-optimization process, the algorithm executes efficiently; at the same time, because rough morphological operators handle noise well, the new algorithm is robust to noise.
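A hedged Python sketch of rough-entropy threshold selection in the spirit of the method described above; the granule size, the rough-entropy expression, and the precision parameter beta (which relaxes the classical lower approximation so that a fraction of noisy pixels is tolerated, as in variable precision rough sets) are illustrative assumptions, not the paper's exact operators.

```python
# Illustrative rough-entropy thresholding on a grayscale image.
import numpy as np

def rough_entropy(image, threshold, granule=4, beta=1.0):
    h, w = image.shape
    lower_o = upper_o = lower_b = upper_b = 0
    for i in range(0, h - granule + 1, granule):
        for j in range(0, w - granule + 1, granule):
            block = image[i:i + granule, j:j + granule]
            frac_obj = np.mean(block > threshold)      # fraction of object pixels in granule
            upper_o += frac_obj > 0                    # granule touches the object
            lower_o += frac_obj >= beta                # granule (almost) entirely object
            upper_b += frac_obj < 1                    # granule touches the background
            lower_b += (1 - frac_obj) >= beta          # granule (almost) entirely background
    r_obj = 1 - lower_o / upper_o if upper_o else 0.0  # roughness of the object region
    r_bkg = 1 - lower_b / upper_b if upper_b else 0.0  # roughness of the background region
    ent = lambda r: -r * np.log(r) if r > 0 else 0.0
    return (np.e / 2) * (ent(r_obj) + ent(r_bkg))

def best_threshold(image, **kw):
    """Pick the gray level that maximizes the rough entropy."""
    return max(range(1, 255), key=lambda t: rough_entropy(image, t, **kw))

# Example on a synthetic noisy two-region image
rng = np.random.default_rng(1)
img = np.full((64, 64), 60.0)
img[:, 32:] = 180.0
img += rng.normal(0, 10, img.shape)
print(best_threshold(img, granule=4, beta=0.9))
```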
16.
To study the overall behaviour of system faults under the superposition of different factors, the degree of fault variation, and the amount of fault information, the concept of system fault entropy is proposed. Based on the linear-uniformity property of linear entropy, a linear entropy model is derived for the case in which each factor phase is divided into two states. Linear entropy is taken to characterize system fault entropy, and the time-varying features of system fault entropy are then studied. System faults under superposed factor states are counted over consecutive time intervals to obtain the system fault probability distribution, and time-varying curves of system fault entropy are plotted. The results support at least three tasks: obtaining the variation of system fault entropy under the influence of different factors, the overall variation law of system fault entropy, and the stability of system reliability. This research can be applied to fault and data analysis in various fields under similar conditions.
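A loosely hedged sketch of how a time-varying fault-entropy curve might be tallied from fault records, following the workflow described above; the paper's own "linear entropy" formula is not reproduced, so a normalized Shannon entropy of the windowed fault distribution is used as a stand-in, and the window length and factor states are illustrative.

```python
# Sketch: tally faults by two-state factor combination in sliding time windows,
# form a fault probability distribution per window, and report an entropy curve.
import numpy as np
from collections import Counter

def fault_entropy_curve(fault_records, window):
    """fault_records: list of (time, factor_state_tuple) fault observations."""
    times = np.array([t for t, _ in fault_records])
    curve = []
    for start in np.arange(times.min(), times.max(), window):
        in_win = [s for t, s in fault_records if start <= t < start + window]
        if not in_win:
            curve.append((start, 0.0))
            continue
        counts = np.array(list(Counter(in_win).values()), dtype=float)
        p = counts / counts.sum()                      # fault probability distribution
        h = -(p * np.log(p)).sum() / np.log(len(p)) if len(p) > 1 else 0.0
        curve.append((start, h))                       # normalized entropy in [0, 1]
    return curve

# Example: faults under two binary factors (e.g. temperature high/low, humidity high/low)
records = [(t, (t % 3 == 0, t % 5 == 0)) for t in range(0, 200, 7)]
print(fault_entropy_curve(records, window=50.0))
```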
17.
This article reviews the principles of band-target entropy minimization (BTEM), a newly developed self-modeling curve resolution (SMCR) method, and recent progress in its application to the reconstruction of pure-component spectra, and looks ahead to the prospects of applying BTEM to pharmaceutical analysis.
18.
In this paper, we describe two experiments that show the powerful influence of interface complexity and entropy on online information-sharing behaviour. One hundred and thirty-four participants were asked to complete a creativity test and answer six open questions against three different screen backgrounds of increasing complexity. Our data show that, as an interface becomes more complex and has more entropy, users refer less to themselves and show less information-sharing breadth. However, their verbal creativity and information-sharing depth do not suffer in the same way. Instead, an inverse U-shaped relationship between interface complexity and creativity, as well as information-sharing depth, can be observed: users become more creative and thoughtful until a certain tipping point of interface complexity is reached. Beyond that point, creativity and thinking suffer, leading to significantly less disclosure. This result challenges the general HCI assumption that simplicity is always best for computer interface design, as users' creativity and information-sharing depth initially increase with more interface complexity. Our results suggest that the Yerkes–Dodson law may be a key theory underlying online creativity and the depth of online disclosures.
19.
In this paper, we define the conditional Rényi entropy and show that the so-called chain rule holds for the Rényi entropy. We then introduce a relation for the rate of Rényi entropy and use it to derive the Rényi entropy rate of an irreducible, aperiodic Markov chain. We also show that the Rényi entropy rate is bounded by the Shannon entropy rate.
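For reference, the order-α Rényi entropy and its entropy rate are shown below in standard notation; the paper's specific conditional definition is not reproduced here, since several conditional variants exist in the literature.

```latex
% Rényi entropy of order \alpha and its Shannon limit (standard definitions):
H_\alpha(X) = \frac{1}{1-\alpha} \log \sum_{x} p(x)^{\alpha},
\qquad \alpha > 0,\ \alpha \neq 1,
\qquad \lim_{\alpha \to 1} H_\alpha(X) = -\sum_{x} p(x) \log p(x).
% Entropy rate of a stationary process, whose Markov-chain form the paper derives:
\bar{H}_\alpha = \lim_{n \to \infty} \frac{1}{n} \, H_\alpha(X_1, \dots, X_n).
```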
20.
For decision problems in which the evaluation information consists of multi-granularity intuitionistic linguistic sets, a TODIM method based on relative entropy and binary entropy is proposed. The method first defines the relative entropy and binary entropy of intuitionistic linguistic numbers to measure the divergence and uncertainty of decision information. Second, an expert-weighting model based on relative entropy and binary entropy is constructed, and attribute-weighting models are established for the cases where the subjective weights are completely known, partially known, or completely unknown. Finally, a multi-granularity intuitionistic linguistic weighted arithmetic averaging (MIL-WAA) operator is proposed to aggregate multi-granularity group decision information. A numerical example shows that the method measures the uncertainty and divergence of decision information well, accounts for decision makers' bounded rationality, and is reasonable and effective.