Similar Documents
20 similar documents found (search time: 421 ms)
1.
Calphad, 1997, 21(3): 337-348
The equivalence of three thermodynamic models used in the literature to describe the compositional and temperature dependences of the thermodynamic properties of ordered intermetallic phases is presented. The three models are the generalized bond-energy model, the Wagner-Schottky-type model and the compound-energy model. Equations to convert the model parameters of the generalized bond-energy model to those of the other two models are derived. The validity of these equations is demonstrated by successfully converting the model parameters for the ordered phases in the Ti-Al system from the generalized bond-energy model to those of the Wagner-Schottky-type model and the compound-energy model. However, conversion of the model parameters from these two models to the generalized bond-energy model is generally not possible because of additional constraints imposed in developing the latter model.

2.
In this paper, a mapping of the XML document structure into the canonical data model is studied [5]. The XML document structure is specified by the Document Type Definition (DTD). The DTD serves as the basis for a specification in the SYNTHESIS language; each DTD element declaration is mapped into some data type of SYNTHESIS.
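The declaration-by-declaration mapping described above can be sketched in a few lines. The element names, the regular-expression parsing, and the target type structure below are hypothetical illustrations, not the actual SYNTHESIS mapping from the paper:

```python
import re

def parse_dtd_elements(dtd_text):
    """Extract <!ELEMENT name (content-model)> declarations from a DTD fragment.

    Returns a dict mapping each element name to its raw content model.
    A simplified illustration; a real DTD parser handles far more syntax.
    """
    pattern = re.compile(r"<!ELEMENT\s+(\w+)\s+([^>]+)>")
    return {name: model.strip() for name, model in pattern.findall(dtd_text)}

def to_canonical_type(name, model):
    """Map one DTD element declaration to a hypothetical canonical type spec."""
    if model == "(#PCDATA)":
        return {"type": name, "kind": "string"}  # text-only element -> string type
    # Sequence content model -> record type with one field per child element.
    children = [c.strip().rstrip("*+?") for c in model.strip("()").split(",")]
    return {"type": name, "kind": "record", "fields": children}

dtd = """
<!ELEMENT book (title, author+)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT author (#PCDATA)>
"""
types = {n: to_canonical_type(n, m) for n, m in parse_dtd_elements(dtd).items()}
```

Each declaration becomes one type: text-only elements map to a string type, and sequence content models map to record types whose fields are the child elements.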

3.
With the rapid development of Web 2.0 sites such as blogs and wikis, users are encouraged to express opinions about products, services or social topics over the web. Opinion Aggregation is a method for aggregating these opinions, made up of four steps: Collect, Identify, Classify and Aggregate. In this paper, we present a new conceptual multidimensional data model based on the Fuzzy Model based on Semantic Translation to solve the Aggregate step of an Opinion Aggregation architecture, which allows the measure values resulting from integrating heterogeneous information (including unstructured data such as free text) to be exploited by means of traditional Business Intelligence tools. We also present an entire Opinion Aggregation architecture that includes the Aggregate step and solves the remaining steps (Collect, Identify and Classify) by means of an Extraction, Transformation and Loading process. This architecture has been implemented in an Oracle Relational Database Management System. We have applied it to integrate heterogeneous data extracted from the websites of high-end hotels, and we show a case study using data collected over several years from the websites of high-end hotels located in Granada, Spain. With this integrated information, the Data Warehouse user can perform several analyses that benefit from easy linguistic interpretability and high precision by means of interactive tools such as dashboards.

4.
Model-Driven Architecture (MDA) is a methodology and body of standards, proposed by the Object Management Group (OMG), for software development using model technology; its core ideas are platform-independent modeling and platform-specific model transformation. A framework was implemented programmatically based on the Meta Object Facility 2.0 (MOF 2.0) Query/View/Transformation (QVT) standard. The framework can transform a metamodel into the classes of a specific N-tier application and implement the main program functionality, thereby greatly improving development efficiency. The flexibility of implementing model-driven transformation programmatically and the diversity of the functionality realized are verified, including the description of model specifications by XML files and the completeness of the generated code.

5.
A generalised algorithm for transformation from an input-output model into a state-space model, which is also suitable for systems with a nondynamic part, is presented. The algorithm is based on a recently developed fast and accurate method which uses a recursive formula. The derivation of the algorithm is presented and a numerical example is also given. A discrete, time-invariant, linear, multivariable, completely observable system given in canonical state-space form with a nondynamic part is considered.
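The paper's recursive multivariable method is not reproduced here, but the underlying transformation can be sketched for the SISO case. The controllable canonical construction below is a standard textbook form, assumed for illustration only, and the numerical check simply simulates both representations on the same input:

```python
def io_to_state_space(a, b):
    """Build controllable canonical (A, B, C) for the difference equation
        y(k) + a[0]*y(k-1) + ... + a[n-1]*y(k-n)
            = b[0]*u(k-1) + ... + b[n-1]*u(k-n).
    Standard SISO construction; the paper's recursive method also covers
    multivariable systems with a nondynamic part."""
    n = len(a)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0                   # shift chain
    for j in range(n):
        A[n - 1][j] = -a[n - 1 - j]         # last row: negated a's, reversed
    B = [[0.0] for _ in range(n)]
    B[n - 1][0] = 1.0
    C = [b[n - 1 - j] for j in range(n)]    # reversed b's
    return A, B, C

def simulate_ss(A, B, C, u):
    """Run x(k+1) = A x(k) + B u(k), y(k) = C x(k) from zero initial state."""
    n = len(A)
    x = [0.0] * n
    y = []
    for uk in u:
        y.append(sum(C[j] * x[j] for j in range(n)))
        x = [sum(A[i][j] * x[j] for j in range(n)) + B[i][0] * uk
             for i in range(n)]
    return y

def simulate_io(a, b, u):
    """Run the difference equation directly from zero initial conditions."""
    n, y = len(a), []
    for k in range(len(u)):
        yk = 0.0
        for i in range(n):
            if k - 1 - i >= 0:
                yk += b[i] * u[k - 1 - i] - a[i] * y[k - 1 - i]
        y.append(yk)
    return y

a, b = [0.5, -0.2], [1.0, 0.3]              # arbitrary example coefficients
u = [1.0, 0.0, -1.0, 2.0, 0.5, 0.0, 1.0]
A, B, C = io_to_state_space(a, b)
```

Simulating both forms on the same input sequence gives identical outputs, which is the sense in which the two models are equivalent.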

6.
The application of the object-oriented data model has broken the single relational database architecture of oilfield data management. How to build and optimize a coordinated, unified database architecture under the coexistence of object-oriented and relational data models is one of the important technical directions of oilfield database construction. From the perspective of oilfield database applications, this paper briefly compares the characteristics of the relational and object-oriented data models, and discusses the feasibility of establishing an oilfield database architecture in which the two data models coexist.

7.
Empirical tests of the arbitrage pricing theory using measured variables rely on the accuracy of standard inferential theory in approximating the distribution of the estimated risk premiums and factor betas. The techniques employed thus far perform factor selection and model inference sequentially. Recent advances in Bayesian variable selection are adapted to an approximate factor model to investigate the role of measured economic variables in the pricing of securities. In finite samples, exact statistical inference is carried out using posterior distributions of functions of risk premiums and factor betas. The role of the panel dimensions in posterior inference is investigated. New empirical evidence is found of time-varying risk premiums, with higher and more volatile expected compensation for bearing systematic risk during contraction phases. In addition, investors are rewarded for exposure to "Economic" risk.

8.
This paper explains how multisensor data fusion and target identification can be performed within the transferable belief model (TBM), a model for the representation of quantified uncertainty based on belief functions. We present the underlying theory, in particular the generalized Bayesian theorem needed to transform likelihoods into beliefs and the pignistic transformation needed to build the probability measure required for decision making. We show how this method applies in practice and compare its solution with the classical one, illustrating it with an embarrassing example on which the TBM and probability solutions completely disagree. Computational efficiency of the belief-function solution was supposedly proved in a study that we reproduce, and we show that in fact the opposite conclusion holds. The results presented here can be extended directly to many problems of data fusion and diagnosis.
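One ingredient of the TBM decision stage, the pignistic transformation, is simple to state: each mass m(A) is split equally among the elements of A, yielding the probability measure BetP used for decision making. A minimal sketch, with a made-up two-hypothesis identification example (the frame and mass values are not from the paper):

```python
def pignistic(masses):
    """Pignistic transformation: BetP(w) = sum over A containing w of m(A)/|A|.

    masses: dict mapping frozenset of hypotheses -> belief mass, assuming
    no mass on the empty set (otherwise normalize by 1 - m(emptyset)).
    """
    betp = {}
    for subset, mass in masses.items():
        share = mass / len(subset)       # split the mass equally over A
        for w in subset:
            betp[w] = betp.get(w, 0.0) + share
    return betp

# Hypothetical target-identification example: 0.4 committed to "friend",
# 0.6 left uncommitted between the two hypotheses.
m = {frozenset({"friend"}): 0.4,
     frozenset({"friend", "hostile"}): 0.6}
bet = pignistic(m)
```

The uncommitted mass 0.6 is shared equally, so BetP(friend) = 0.4 + 0.3 = 0.7 and BetP(hostile) = 0.3, a proper probability distribution suitable for expected-utility decisions.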

9.
Kroenke, D.M. Computer, 2005, 38(5): 89-90
The relational model is a set-theoretic model for describing data constructs common in the business environment. Relational databases also minimize data duplication, which ensures data integrity and reduces storage requirements. Further, the relational model provides a way to represent variable-length constructs with fixed-length components. In addition, normalization theory is the basis of hundreds of papers and successful tenure applications. This ensured the academic community would carry the model forward. Finally, by following open standards, including the Structured Query Language, vendors created a buzz with dozens of relational DBMS products such as System R, DB2, Oracle, SQL Server, Ingres, dBase, R:Base, Pearl, Paradox, and Access. An XDS has many advantages over a relational database, including seamless integration with user views as well as all the benefits of XML standards, such as XML Schema validation and the Extensible Stylesheet Language for document materialization.

10.
In many large, distributed or mobile networks, broadcast algorithms are used to update information stored at the nodes. In this paper, we propose a new model of communication based on rendezvous and analyze a multi-hop distributed algorithm for broadcasting a message in a synchronous setting. In the rendezvous model, two neighbors u and v can communicate if and only if u calls v and v calls u simultaneously; nodes u and v then obtain a rendezvous at a meeting point. If m is the number of meeting points, the network can be modeled by a graph with n vertices and m edges. At each round, every vertex chooses a random neighbor, and there is a rendezvous on an edge if it has been chosen by both of its endpoints. A rendezvous enables an exchange of information between the two entities. We obtain sharp lower and upper bounds on the time complexity, in number of rounds, of broadcasting: we show that, for any graph, the expected number of rounds is between ln n and O(n²). For these two bounds, we prove that there exist graphs for which the expected number of rounds is either O(ln n) or Ω(n²). For specific topologies, additional bounds are given.
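The round structure of the rendezvous model is easy to simulate. The sketch below runs the randomized broadcast on an arbitrary graph; the 8-vertex cycle, the seed, and the round cap are arbitrary choices for illustration, not part of the paper's analysis:

```python
import random

def broadcast_rounds(adj, source, rng, max_rounds=100000):
    """Simulate rendezvous broadcast: each round, every vertex calls one
    random neighbor; edge (u, v) is a rendezvous iff u called v AND v
    called u, and an informed endpoint then informs the other.
    Returns the number of rounds until every vertex is informed."""
    informed = {source}
    n = len(adj)
    for rounds in range(1, max_rounds + 1):
        calls = {u: rng.choice(adj[u]) for u in range(n)}
        for u in range(n):
            v = calls[u]
            if calls[v] == u:               # mutual call -> rendezvous on (u, v)
                if u in informed or v in informed:
                    informed.update((u, v))
        if len(informed) == n:
            return rounds
    return None                             # did not finish within the cap

# Example topology: an 8-vertex cycle, where vertex i meets i-1 and i+1 (mod 8).
n = 8
cycle = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
rounds = broadcast_rounds(cycle, 0, random.Random(42))
```

Since each vertex calls only one neighbor per round, the informed arc of a cycle grows by at most two vertices per round, so at least ceil((n-1)/2) rounds are always needed; averaging the simulation over many seeds would estimate the expected round count the paper bounds.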

11.
A model linking algorithm based on the algorithmic concept employed in the Project Independence Evaluation System (PIES) is developed for a dynamic energy/economic interaction model. The convergence of the algorithm is tested empirically, and the results show that the algorithm converges to the desired accuracy. Potential application to generalized networks is discussed. The results of this work confirm the possibility of using the PIES algorithm concept as a model integration scheme in general network situations.

12.
The goal of this paper is to design a statistical test for the camera model identification problem. The approach is based on a generalized noise model developed by following the image processing pipeline of a digital camera. More specifically, this model is obtained by starting from the heteroscedastic noise model, which describes the linear relation between the expectation and variance of a RAW pixel, and taking into account the non-linear effect of gamma correction. The generalized noise model characterizes a natural image in TIFF or JPEG format more accurately. The present paper is similar to our previous work, which addressed camera model identification from RAW images based on the heteroscedastic noise model. The parameters specified in the generalized noise model are used as a camera fingerprint to identify camera models. The camera model identification problem is cast in the framework of hypothesis testing theory. In an ideal context where all model parameters are perfectly known, the Likelihood Ratio Test is presented and its statistical performance is theoretically established. In practice, when the model parameters are unknown, two Generalized Likelihood Ratio Tests are designed to deal with this difficulty, such that they can meet a prescribed false alarm probability while ensuring high detection performance. Numerical results on simulated images and real natural JPEG images highlight the relevance of the proposed approach.
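The heteroscedastic starting point above posits an affine relation Var(x) = a·E(x) + b for a RAW pixel, with (a, b) serving as the camera fingerprint. A minimal sketch of estimating that pair, using made-up per-patch (mean, variance) statistics in place of a real camera pipeline (in the paper these come from homogeneous patches of natural images, and the full model also folds in gamma correction):

```python
def fit_affine(points):
    """Ordinary least squares fit of var = a*mean + b from (mean, var) pairs."""
    n = len(points)
    sx = sum(m for m, _ in points)
    sy = sum(v for _, v in points)
    sxx = sum(m * m for m, _ in points)
    sxy = sum(m * v for m, v in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical per-patch statistics for one camera model, generated from
# an assumed fingerprint a = 0.8, b = 4.0 for illustration.
patches = [(50.0, 50.0 * 0.8 + 4.0),
           (120.0, 120.0 * 0.8 + 4.0),
           (200.0, 200.0 * 0.8 + 4.0)]
a_hat, b_hat = fit_affine(patches)
```

Two cameras with different (a, b) pairs would then be distinguished by a hypothesis test on the fitted parameters, which is the role the likelihood ratio tests play in the paper.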

13.
The standard reference model (SRM) for intelligent multimedia presentation systems describes a framework for the automatic generation of multimedia presentations. This framework, however, lacks an explicit document model of the presentation being generated. The Amsterdam hypermedia model (AHM) describes the document features of a hypermedia presentation explicitly. We take the AHM and use it as a basis for describing in detail the stages of generating a hypermedia presentation within the SRM framework, which we summarise in a table. By doing so, the responsibilities of the individual SRM layers become more apparent.

14.
In this paper we maintain that there are benefits to extending the scope of student models to include additional information as part of the explicit student model. We illustrate our argument by describing a student model which focuses on: (1) performance in the domain; (2) acquisition order of the target knowledge; (3) analogy; (4) learning strategies; (5) awareness and reflection. The first four of these issues are explicitly represented in the student model. Awareness and reflection should occur because the student model is transparent; it is used to promote learner reflection by encouraging the learner to view, and even negotiate changes to, the model. Although the architecture is transferable across domains, each instantiation of the student model will necessarily be domain-specific due to the importance of factors such as the relevant background knowledge for analogy and typical progress through the target material. As an example of this approach we describe the student model of an intelligent computer-assisted language learning system which was based on research findings on the above five topics in the field of second language acquisition. Throughout, we address the issue of the generality of this model, with particular reference to the possibility of a similar architecture reflecting comparable issues in the domain of learning about electrical circuits.

15.
The tensor-product (TP) model transformation is a recently proposed numerical method capable of transforming linear parameter-varying state-space models to the higher order singular value decomposition (HOSVD) based canonical form of polytopic models. It is also capable of generating various types of convex TP models, a type of polytopic model, for linear matrix inequality based controller design. The crucial point of the TP model transformation is that its computational load explodes exponentially with the dimensionality of the parameter vector of the parameter-varying state-space model. In this paper we propose a modified TP model transformation that leads to a considerable reduction of the computation. The key idea of the method is that instead of transforming the whole system matrix at once over the whole parameter space, we decompose the problem, perform the transformation element-wise, and restrict the computation to the subspace where the given element of the model varies. The modified TP model transformation can readily be executed in higher-dimensional cases where the original TP model transformation fails. The effectiveness of the new method is illustrated with numerical examples.

16.
17.
Document ranking and the vector-space model
Efficient and effective text retrieval techniques are critical in managing the increasing amount of textual information available in electronic form. Yet text retrieval is a daunting task because it is difficult to extract the semantics of natural language texts. Many problems must be resolved before natural language processing techniques can be effectively applied to a large collection of texts. Most existing text retrieval techniques rely on indexing keywords. Unfortunately, keywords or index terms alone cannot adequately capture the document contents, resulting in poor retrieval performance. Yet keyword indexing is widely used in commercial systems because it is still by far the most viable way to process large amounts of text. Using several simplifications of the vector-space model for text retrieval queries, the authors seek the optimal balance between processing efficiency and retrieval effectiveness as expressed in relevant document rankings.
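The vector-space model referred to above can be sketched in a few lines: documents and the query become tf-idf vectors, and documents are ranked by cosine similarity to the query. The toy corpus and the particular tf and idf formulas are illustrative choices, not the authors' simplifications:

```python
import math

def tfidf_vectors(docs):
    """Build tf-idf vectors: tf = term count / doc length, idf = ln(N / df)."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = {}
    for toks in tokenized:
        for t in set(toks):
            df[t] = df.get(t, 0) + 1
    idf = {t: math.log(n / c) for t, c in df.items()}
    vecs = []
    for toks in tokenized:
        tf = {}
        for t in toks:
            tf[t] = tf.get(t, 0.0) + 1.0 / len(toks)
        vecs.append({t: w * idf[t] for t, w in tf.items()})
    return vecs, idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["information retrieval ranking",
        "vector space model retrieval",
        "cooking pasta recipes"]
vecs, idf = tfidf_vectors(docs)
query = {t: idf.get(t, 0.0) for t in "retrieval model".split()}
scores = [cosine(query, v) for v in vecs]
```

The document sharing both query terms scores highest, the one sharing a single term scores lower, and the unrelated document scores zero, which is exactly the keyword-overlap behavior (and its limitations) the abstract discusses.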

18.
The adaptive coherent template method is widely used in signal detection systems; it can simultaneously remove power-line interference and baseline drift. However, in acquisition systems where the power-line frequency fluctuates continuously, the filtering performance of the algorithm degrades noticeably. This paper introduces an adaptive coherent template method that tracks power-line interference in real time using a dual-thread scheme, and describes its implementation in LabVIEW. Experiments show that the LabVIEW implementation can quickly track and remove power-line interference in real time, with clearly noticeable effect.
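The core of the coherent-template idea can be sketched without the dual-thread frequency tracking: average the signal over whole interference periods to form a template, then subtract the template from each period. The sampling rate, frequency, and test signal below are invented for illustration; the paper's contribution is tracking a fluctuating power-line frequency, which this fixed-period sketch omits:

```python
import math

def remove_periodic(x, period):
    """Subtract the coherent template: the average of x at each phase index
    of an (integer-sample) interference period."""
    template = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(x):
        template[i % period] += v
        counts[i % period] += 1
    template = [s / c for s, c in zip(template, counts)]  # per-phase mean
    return [v - template[i % period] for i, v in enumerate(x)]

# 50 Hz interference sampled at 1000 Hz -> exactly 20 samples per period.
fs, f0, period = 1000, 50, 20
x = [0.5 * math.sin(2 * math.pi * f0 * i / fs) for i in range(10 * period)]
y = remove_periodic(x, period)
```

On a pure periodic interference the residual is numerically zero; when the power-line frequency drifts, the period is no longer an integer number of samples, which is why the real algorithm must track the frequency adaptively.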

19.
A variance shift outlier model (VSOM), previously used for detecting outliers in the linear model, is extended to the variance components model. This VSOM accommodates outliers as observations with inflated variance, with the status of the ith observation as an outlier indicated by the size of the associated shift in the variance. Likelihood ratio and score test statistics are assessed as objective measures for determining whether the ith observation has inflated variance and is therefore an outlier. It is shown that standard asymptotic distributions do not apply to these tests for a VSOM, and a modified distribution is proposed. A parametric bootstrap procedure is proposed to account for multiple testing. The VSOM framework is extended to account for outliers in random effects and is shown to have an advantage over case-deletion approaches. A simulation study is presented to verify the performance of the proposed tests. Challenges associated with computation and extensions of the VSOM to the general linear mixed model with correlated errors are discussed.

20.
Estimating digitization costs is a very difficult task; accurate values are hard to obtain because of the great number of unknown factors. However, digitization projects need a precise idea of the economic costs and the times involved in the development of their contents. The common practice when we start digitizing a new collection is to set a schedule and a firm commitment to fulfil it (both in terms of cost and deadlines), even before the actual digitization work starts. As with software development projects, incorrect estimates produce delays and cost overruns. Based on methods used in software engineering for development cost prediction, such as COCOMO and Function Points, and using historical data gathered over 5 years at the MCDL project during the digitization of more than 12000 books, we have developed a time-and-cost estimation method named DiCoMo (Digitization Cost Model) for digital content production in general. This method can be adapted to different production processes, such as the production of digital XML or HTML texts using scanning and OCR followed by human proofreading and error correction, or the production of digital facsimiles (scanning without OCR). The accuracy of the estimates improves with time, since the algorithms can be optimized by making adjustments based on historical data gathered from previous tasks. Finally, we consider the problem of parallelizing tasks, i.e. dividing the work among a number of encoders that work in parallel.
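The COCOMO-style calibration described above can be sketched as a power law, hours = a · pages^b, fitted to historical (size, effort) data by least squares in log space. The coefficients, the data, and the single `difficulty` adjustment factor below are illustrative assumptions, not the actual DiCoMo model:

```python
import math

def calibrate_power_law(history):
    """Fit hours = a * size**b by least squares on (ln size, ln hours)."""
    logs = [(math.log(s), math.log(h)) for s, h in history]
    n = len(logs)
    sx = sum(x for x, _ in logs)
    sy = sum(y for _, y in logs)
    sxx = sum(x * x for x, _ in logs)
    sxy = sum(x * y for x, y in logs)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = math.exp((sy - b * sx) / n)
    return a, b

def estimate_hours(a, b, pages, difficulty=1.0):
    """Predict digitization effort; 'difficulty' stands in for the
    COCOMO-like adjustment factors (OCR quality, proofreading depth, ...)."""
    return difficulty * a * pages ** b

# Hypothetical historical data: (pages, hours) generated by 0.05 * pages**1.1,
# standing in for the project's records of past digitization tasks.
history = [(100, 0.05 * 100 ** 1.1),
           (300, 0.05 * 300 ** 1.1),
           (800, 0.05 * 800 ** 1.1)]
a, b = calibrate_power_law(history)
```

Recalibrating (a, b) as each new task completes is what makes the estimates improve with time, as the abstract notes.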

