1.
Certified reputation is an important component of current trust models: it allows an Agent to establish trust relationships with partners at very low cost. However, because the transaction amount, an important factor, is ignored when ratings are computed, certified reputation cannot adequately counter the effects of false information. We therefore propose a certified reputation based on the transaction amount to solve this problem; experiments show that transaction-amount-based certified reputation helps Agents select more suitable trading partners.
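The abstract does not give the weighting scheme; as a rough, hypothetical sketch, a transaction-amount-based certified reputation could weight each certified rating by the value of the transaction it refers to, so that many small, possibly fabricated transactions cannot outweigh a few large genuine ones (the names and the averaging rule below are illustrative, not the paper's definition):

```python
from dataclasses import dataclass

@dataclass
class CertifiedRating:
    rating: float   # partner's rating in [0, 1]
    amount: float   # monetary value of the transaction being rated

def certified_reputation(ratings: list[CertifiedRating]) -> float:
    """Weight each certified rating by its transaction amount, so that many
    small (possibly fabricated) transactions cannot outweigh a few large,
    genuine ones."""
    total = sum(r.amount for r in ratings)
    if total == 0:
        return 0.5  # neutral prior when no evidence is available
    return sum(r.rating * r.amount for r in ratings) / total
```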
2.
Enterprise architecture has become an important driver to facilitate digital transformation in companies, since it allows IT and business to be managed in a holistic and integrated manner by establishing connections between technology concerns and strategic/motivational ones. Enterprise architecture modelling is critical to accurately represent the business and its IT assets in combination. This modelling is important not only when companies start to manage their enterprise architecture, but also when it is remodelled so that the enterprise architecture remains aligned in a changing world. Enterprise architecture is commonly modelled by a few experts in a manual way, which is error-prone and time-consuming and makes continuous realignment difficult. In contrast, other enterprise architecture modelling proposals automatically analyse artefacts such as source code, databases and services. Previous automated modelling proposals focus on the analysis of individual artefacts with isolated transformations toward ArchiMate or other enterprise architecture notations and/or frameworks. We propose the usage of the Knowledge Discovery Metamodel (KDM) to represent all the intermediate information retrieved from information systems’ artefacts, which is then transformed into ArchiMate models. Thus, the core contribution of this paper is the model transformation between the KDM and ArchiMate metamodels. The main implication of this proposal is that ArchiMate models are automatically generated from a common knowledge repository. Thereby, the relationships between artefacts of different natures can be exploited to obtain more complete and accurate enterprise architecture representations.
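As a rough illustration of the kind of mapping such a KDM-to-ArchiMate transformation performs (the element kinds and target concepts below are simplified stand-ins, not the paper's actual transformation rules):

```python
# Hypothetical, simplified mapping from KDM element kinds to ArchiMate
# application-layer concepts; the real transformation operates on full
# KDM/ArchiMate metamodels, not on flat dictionaries.
KDM_TO_ARCHIMATE = {
    "code:Package":         "ApplicationComponent",
    "code:CallableUnit":    "ApplicationFunction",
    "data:RelationalTable": "DataObject",
    "platform:Service":     "ApplicationService",
}

def transform(kdm_elements):
    """Produce ArchiMate model elements from a list of KDM elements."""
    archimate_model = []
    for element in kdm_elements:
        target_type = KDM_TO_ARCHIMATE.get(element["kind"])
        if target_type is not None:
            archimate_model.append({"type": target_type, "name": element["name"]})
    return archimate_model
```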
3.
Statistical information processing can be characterized by the likelihood function, defined by giving an explicit form for an approximation to the true distribution. This mathematical representation, which is usually called a model, is built based on not only the current data but also prior knowledge of the object and the objective of the analysis. Akaike [2,3] showed that the log-likelihood can be considered as an estimate of the Kullback-Leibler (K-L) information, which measures the similarity between the predictive distribution of the model and the true distribution. The Akaike information criterion (AIC) is an estimate of the K-L information and makes it possible to evaluate and compare the goodness of many models objectively. In consequence, the minimum AIC procedure allows us to develop automatic modeling and signal extraction procedures. In this article, we give a simple explanation of statistical modeling based on the AIC and demonstrate four examples of applying the minimum AIC procedure to an automatic transaction of signals observed in the earth sciences.
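For reference, the criterion has the well-known form below, where L(θ̂) is the maximized likelihood and k the number of estimated parameters; the model with the smallest AIC is selected.

```latex
% Akaike information criterion for a fitted model with maximized likelihood
% L(\hat{\theta}) and k estimated parameters; the minimum-AIC model is chosen.
\mathrm{AIC} = -2 \log L(\hat{\theta}) + 2k
```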
Genshiro Kitagawa, Ph.D.: He is a Professor in the Department of Prediction and Control at the Institute of Statistical Mathematics. He is currently Deputy Director of the Institute of Statistical Mathematics and Professor of Statistical Science at the Graduate University for Advanced Study. He obtained his Ph.D. from Kyushu University in 1983. His primary research interests are in time series analysis, non-Gaussian nonlinear filtering, and statistical modeling. He has published over 50 research papers. He was awarded the 2nd Japan Statistical Society Prize in 1997.
Tomoyuki Higuchi, Ph.D.: He is an Associate Professor in the Department of Prediction and Control at the Institute of Statistical Mathematics. He is currently an Associate Professor of Statistical Science at the Graduate University for Advanced Study. He obtained his Ph.D. from the University of Tokyo in 1989. His research interests are in statistical modeling of space-time data, stochastic optimization techniques, and data mining. He has published over 30 research papers.
4.
Enterprise models assist the governance and transformation of organizations through the specification, communication and analysis of strategy, goals, processes and information, along with the underlying application and technological infrastructure. Such models cross-cut different concerns and are often conceptualized using domain-specific modelling languages. This paper explores the application of graph-based semantic techniques to specify, integrate and analyse multiple, heterogeneous enterprise models. In particular, the proposal described in this paper (1) specifies enterprise models as ontological schemas, (2) uses transformation mapping functions to integrate the ontological schemas and (3) analyses the integrated schemas with graph querying and logical inference. The proposal is evaluated through a scenario that integrates three distinct enterprise modelling languages: the business model canvas, e3value, and the business layer of the ArchiMate language. The results show, on the one hand, that the graph-based approach is able to handle the specification, integration and analysis of enterprise models represented with different modelling languages and, on the other, that the integration challenge resides in defining appropriate mapping functions between the schemas.
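A toy illustration of the specify/integrate/query pattern described above (the triples, predicates and mapping function are invented for illustration; the paper works with full ontological schemas and logical inference):

```python
# Enterprise models as triple sets (stand-ins for ontological schemas),
# a mapping function that integrates them, and a simple graph query.
bmc = {("CustomerSegment:SMEs", "servedBy", "ValueProposition:Hosting")}
e3value = {("Actor:Provider", "offers", "ValueObject:Hosting")}

def map_e3value_to_bmc(triples):
    """Hypothetical mapping: an e3value value object becomes a BMC value proposition."""
    mapped = set()
    for s, p, o in triples:
        if p == "offers":
            mapped.add((s, "servedBy", o.replace("ValueObject:", "ValueProposition:")))
    return mapped

integrated = bmc | map_e3value_to_bmc(e3value)

# Graph query: which subjects relate to the 'Hosting' value proposition?
hosting_related = {s for s, p, o in integrated if o == "ValueProposition:Hosting"}
print(hosting_related)
```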
5.
Qualitative modelling is a technique integrating the fields of theoretical computer science, artificial intelligence and the physical and biological sciences. The aim is to be able to model the behaviour of systems without estimating parameter values and fixing the exact quantitative dynamics. Traditional applications are the study of the dynamics of physical and biological systems at a higher level of abstraction than that obtained by estimation of numerical parameter values for a fixed quantitative model. Qualitative modelling has been studied and implemented to varying degrees of sophistication in Petri nets, process calculi and constraint programming. In this paper we reflect on the strengths and weaknesses of existing frameworks, we demonstrate how recent advances in constraint programming can be leveraged to produce high quality qualitative models, and we describe the advances in theory and technology that would be needed to make constraint programming the best option for scientific investigation in the broadest sense.
7.
This paper introduces a simple two-layer soil water balance model developed to Bridge Event And Continuous Hydrological (BEACH) modelling. BEACH is a spatially distributed, daily-basis hydrological model formulated to predict the initial condition of soil moisture for event-based soil erosion and rainfall–runoff models. The latter models usually require the spatially distributed values of antecedent soil moisture content and other input parameters at the onset of an event. BEACH uses daily meteorological records, soil physical properties, basic crop characteristics and topographical data. The basic processes incorporated in the model are precipitation, infiltration, transpiration, evaporation, lateral flow, vertical flow and plant growth. The principal advantage of this model lies in its ability to provide timely information on the spatially distributed soil moisture content over a given area without the need for repeated field visits. The application of this model to the CATSOP experimental catchment showed that it has the capability to estimate soil moisture content with acceptable accuracy. The root mean squared error of the predicted soil moisture content for the six monitored locations within the catchment ranged from 0.011 to 0.065 cm³ cm⁻³. The predicted daily discharge at the outlet of the study area agreed well with the observed data. The coefficient of determination and Nash–Sutcliffe efficiency of the predicted discharge were 0.824 and 0.786, respectively. BEACH has been developed within PCRaster, a freely available GIS and programming language. It is a useful teaching tool for learning about distributed water balance modelling and land use scenario analysis.
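The Nash–Sutcliffe efficiency quoted above compares predicted discharge with observations in the usual way:

```latex
% Nash–Sutcliffe efficiency for observed discharge Q_t and predictions \hat{Q}_t;
% NSE = 1 is a perfect fit, values <= 0 mean the model predicts no better than
% the mean of the observations \bar{Q}.
\mathrm{NSE} = 1 - \frac{\sum_{t=1}^{T} \bigl(Q_t - \hat{Q}_t\bigr)^2}
                        {\sum_{t=1}^{T} \bigl(Q_t - \bar{Q}\bigr)^2}
```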
8.
Rosen's modelling relations constitute a conceptual schema for the understanding of the bidirectional process of correspondence between natural systems and formal symbolic systems. The notion of formal systems used in this study refers to information structures constructed as algebraic rings of observable attributes of natural systems, in which the notion of observable signifies a physical attribute that, in principle, can be measured. Due to the fact that modelling relations are bidirectional by construction, they admit a precise categorical formulation in terms of the category-theoretic syntactic language of adjoint functors, representing the inverse processes of information encoding/decoding via adjunctions. As an application, we construct a topological modelling schema of complex systems. The crucial distinguishing requirement between simple and complex systems in this schema is reflected with respect to their rings of observables by the property of global commutativity. The global information structure representing the behaviour of a complex system is modelled functorially in terms of its spectrum functor. An exact modelling relation is obtained by means of a complex encoding/decoding adjunction restricted to an equivalence between the category of complex information structures and the category of sheaves over a base category of partial or local information carriers equipped with an appropriate topology.
9.
To simplify the difficult task of writing fault-tolerant parallel software, we implemented extensions to the basic functionality of the LINDA or tuple-space programming model. Our approach implements a mechanism of transaction processing to ensure that tuples are properly handled in the event of a node or communications failure. If a process retrieving a tuple fails to complete processing or a tuple posting or retrieval message is lost, the system is automatically rolled back to a previous stable state. Processing failures and lost messages are detected by time-out alarms. Roll-back is accomplished by reposting pertinent tuples. Intermediate tuples produced during partial processing are not committed or made available until a process completes. In the absence of faults, system overhead is low. The fault-tolerance mechanism is implemented at the system level and requires little programmer effort or expertise. Two implementations of the model are discussed, one using a UNIX network of workstations and one using a Transputer network. Data measuring model overhead and some aspects of system performance in the presence of faults is presented for an example system.
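A compact sketch of the transactional take-with-timeout idea described above (the class and method names are illustrative and the matching is simplified; this is not the authors' LINDA extension):

```python
import threading
import time

class FaultTolerantTupleSpace:
    """Illustrative sketch: a tuple taken by a worker stays 'in flight' until the
    worker commits; if no commit arrives before the timeout, the tuple is
    reposted (rolled back) so another process can retry the work."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.available = []      # committed, retrievable tuples
        self.in_flight = {}      # txn_id -> (tuple, deadline)
        self.lock = threading.Lock()
        self.next_txn = 0

    def out(self, tup):
        """Post a committed tuple into the space."""
        with self.lock:
            self.available.append(tup)

    def take(self, match):
        """Remove a matching tuple under a transaction; returns (txn_id, tuple) or None."""
        with self.lock:
            self._rollback_expired()
            for i, tup in enumerate(self.available):
                if match(tup):
                    del self.available[i]
                    self.next_txn += 1
                    self.in_flight[self.next_txn] = (tup, time.time() + self.timeout)
                    return self.next_txn, tup
        return None

    def commit(self, txn_id, results=()):
        """Make intermediate results visible only once the worker completes."""
        with self.lock:
            if self.in_flight.pop(txn_id, None) is not None:
                self.available.extend(results)

    def _rollback_expired(self):
        now = time.time()
        for txn_id, (tup, deadline) in list(self.in_flight.items()):
            if now > deadline:                   # time-out alarm: assume the worker failed
                del self.in_flight[txn_id]
                self.available.append(tup)       # repost the pertinent tuple
```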
10.
I describe a method, particularly suitable to implementation by computer algebra, for the derivation of low-dimensional models of dynamical systems. The method is systematic and is based upon centre manifold theory. Computer code for the algorithm is relatively simple, robust and flexible. The method is applied to two examples: one a straightforward pitchfork bifurcation, and the other the dynamics of thin fluid films.
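In outline, the reduction referred to above seeks a centre manifold for a system split into critical and stable modes; the standard formulation (not the paper's specific computer-algebra algorithm) is:

```latex
% Split the system into critical modes x (eigenvalues of A with zero real part)
% and stable modes y (eigenvalues of B with negative real part):
%   \dot{x} = A x + f(x, y), \qquad \dot{y} = B y + g(x, y).
% A centre manifold y = h(x) satisfies the invariance condition
D h(x)\,\bigl[\, A x + f(x, h(x)) \,\bigr] = B\, h(x) + g(x, h(x)),
% and the low-dimensional model is the flow restricted to that manifold:
\dot{x} = A x + f(x, h(x)).
```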
11.
This article introduces an approach to identify unknown nonlinear systems by fuzzy rules and support vector machines (SVMs). Structure identification is realised by an on-line SVM technique, and the fuzzy rules are generated automatically. Time-varying learning rates are applied for updating the membership functions of the fuzzy rules. Finally, the upper bounds of the modelling errors are proven.
13.
To address problems that arise in the operation and maintenance of large application systems, an operations and maintenance (O&M) platform was developed using existing system resources so that O&M tasks can be managed systematically. By defining business groups, business roles, O&M stages, task types and handling workflows, the platform routes O&M tasks through a defined process, making them easier to handle and track, improving processing efficiency, and providing a basis for auditing and for improving the application systems.
14.
We consider the location of paper watermarks in documents that present problems such as variable paper thickness, stain and other damage. Earlier work has shown success in exploiting a computational model of backlit image acquisition – here we enhance this approach by incorporating knowledge of surface verso features. Robustly removing recto features using established techniques, we present a registration approach that permits similarly robust removal of verso, leaving only features attributable to watermark, folds, chain lines and inconsistencies of paper manufacture. Experimental results illustrate the success of the approach.
15.
The transaction-processing mechanism of a native XML database (NXD) is the core mechanism that keeps the database running correctly and is a current research focus. Building on an analysis of existing transaction-processing mechanisms and on the mature locking theory of relational databases, this paper proposes XPL, an XPath-based locking mechanism with four lock types, gives explicit definitions of database operations and transactions, and validates the scheme with worked examples.
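The abstract does not list the four XPL lock modes, so the sketch below only illustrates the general shape of such a scheme: hypothetical node-read/node-write/subtree-read/subtree-write modes addressed by XPath, with a compatibility check before a transaction may proceed.

```python
# Hypothetical illustration of an XPath-addressed lock manager. The four modes
# below (NR, NW, SR, SW) are invented for this sketch; the actual XPL lock
# types are defined in the paper.
COMPATIBLE = {
    ("NR", "NR"): True,  ("NR", "SR"): True,  ("NR", "NW"): False, ("NR", "SW"): False,
    ("SR", "NR"): True,  ("SR", "SR"): True,  ("SR", "NW"): False, ("SR", "SW"): False,
    ("NW", "NR"): False, ("NW", "SR"): False, ("NW", "NW"): False, ("NW", "SW"): False,
    ("SW", "NR"): False, ("SW", "SR"): False, ("SW", "NW"): False, ("SW", "SW"): False,
}

class XPathLockManager:
    def __init__(self):
        self.held = []   # list of (xpath, mode, txn_id)

    def acquire(self, xpath, mode, txn_id):
        """Grant the lock only if it is compatible with every lock held by another
        transaction on an overlapping path (overlap approximated by a prefix test)."""
        for held_path, held_mode, holder in self.held:
            overlaps = xpath.startswith(held_path) or held_path.startswith(xpath)
            if overlaps and holder != txn_id and not COMPATIBLE[(held_mode, mode)]:
                return False
        self.held.append((xpath, mode, txn_id))
        return True

    def release_all(self, txn_id):
        """Release every lock of a finished transaction."""
        self.held = [lock for lock in self.held if lock[2] != txn_id]
```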
16.
In this article, we present a mixed qualitative and quantitative approach for the evaluation of information technology (IT) security investments. For this purpose, we model security scenarios by using defense trees, an extension of attack trees with countermeasures, and we use quantitative economic indexes for computing the defender's return on security investment and the attacker's return on attack. We show how our approach can be used to evaluate the economic profitability of countermeasures and their deterrent effect on attackers, thus providing decision makers with a useful tool for performing better evaluations of IT security investments during the risk management process.
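One common formulation of the two indexes (shown for orientation; not necessarily the exact definitions used in the paper) is:

```latex
% Defender: return on security investment for a countermeasure of cost CSI
% that mitigates a fraction RM of the annual loss expectancy ALE.
\mathrm{ROI} = \frac{\mathrm{ALE} \cdot \mathrm{RM} - \mathrm{CSI}}{\mathrm{CSI}}
% Attacker: return on attack for an expected gain GI at total attack cost CA
% (including the extra effort needed to defeat the countermeasure).
\mathrm{ROA} = \frac{\mathrm{GI} - \mathrm{CA}}{\mathrm{CA}}
```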
17.
In the context of role-based access control (RBAC), mining approaches, such as role mining or organizational mining, can be applied to derive permissions and roles from a system's configuration or from log files. In this way, mining techniques document the current state of a system and produce current-state RBAC models. However, such current-state RBAC models most often follow from structures that have evolved over time and are not the result of a systematic rights management procedure. In contrast, role engineering is applied to define a tailored RBAC model for a particular organization or information system. Thus, role engineering techniques produce a target-state RBAC model that is customized for the business processes supported via the respective information system. The migration from a current-state RBAC model to a tailored target-state RBAC model is, however, a complex task. In this paper, we present a systematic approach to migrate current-state RBAC models to target-state RBAC models. In particular, we use model comparison techniques to identify differences between two RBAC models. Based on these differences, we derive migration rules that define which elements and element relations must be changed, added, or removed. A migration guide then includes all migration rules that need to be applied to a particular current-state RBAC model to produce the corresponding target-state RBAC model. We conducted two comparative studies to identify which visualization technique is most suitable to make migration guides available to human users. Based on the results of these comparative studies, we implemented tool support for the derivation and visualization of migration guides. Our software tool is based on the Eclipse Modeling Framework (EMF). Moreover, this paper describes the experimental evaluation of our tool.
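An illustrative sketch of deriving a migration guide by comparing a current-state RBAC model with a target-state model (the rule format and model encoding are invented here and far simpler than the EMF-based model comparison described in the paper):

```python
# Each model is a dict with 'roles' (set of role names) and 'assignments'
# (set of (role, permission) pairs); the diff yields add/remove migration rules.
def rbac_migration_guide(current, target):
    rules = []
    for role in sorted(target["roles"] - current["roles"]):
        rules.append(("ADD_ROLE", role))
    for role in sorted(current["roles"] - target["roles"]):
        rules.append(("REMOVE_ROLE", role))
    for role, perm in sorted(target["assignments"] - current["assignments"]):
        rules.append(("ADD_ASSIGNMENT", role, perm))
    for role, perm in sorted(current["assignments"] - target["assignments"]):
        rules.append(("REMOVE_ASSIGNMENT", role, perm))
    return rules

current = {"roles": {"clerk"}, "assignments": {("clerk", "read_order")}}
target = {"roles": {"clerk", "auditor"},
          "assignments": {("clerk", "read_order"), ("auditor", "read_log")}}
print(rbac_migration_guide(current, target))
```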
18.
Prior studies examine whether or not using an advertised reference price (ARP) increases consumers’ perceived gain from the purchase decision. However, few studies address whether or not the ARP should involve an online or offline source. The present study thus investigates the influence of the ARP source on consumer perceived value. Empirical results indicate that consumers perceive higher transaction value when an online seller adopts another online retailer’s sale price as the ARP than when the online seller uses an offline competitor’s sale price as the ARP.
19.
The method of fuzzy-model-based control has emerged as an alternative approach to the solution of analysis and synthesis problems associated with plants that exhibit complex non-linear behaviour. At present, the literature in this field has addressed the control design problem related to the stabilization of state-space fuzzy models. In practical situations, however, where perturbations exist in the state-space model, the problem becomes one of robust stabilization that has yet to be posed and solved. The present paper contributes in this direction through the development of a framework that exploits the distinctive property of the fuzzy model as the convex hull of linear system matrices. Using such a quasi-linear model structure, the robust stabilization of complex non-linear systems, against modelling error and parametric uncertainty, based on static state or dynamic output feedback, is reduced to a linear matrix inequality (LMI) problem.
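The "convex hull of linear system matrices" structure is the standard Takagi–Sugeno form, and a typical quadratic-stabilization condition for a parallel distributed state-feedback law reduces to linear matrix inequalities; the sketch below is the standard textbook formulation rather than the paper's exact robust conditions.

```latex
% Takagi--Sugeno fuzzy model: a convex combination of r local linear models,
%   \dot{x} = \sum_{i=1}^{r} h_i(z)\,\bigl(A_i x + B_i u\bigr),
%   \qquad h_i(z) \ge 0, \quad \sum_{i=1}^{r} h_i(z) = 1.
% With the state feedback u = \sum_{j} h_j(z) F_j x, quadratic stability of the
% closed loop follows from LMIs in X = P^{-1} \succ 0 and M_j = F_j X, e.g.
A_i X + X A_i^{\top} + B_i M_j + M_j^{\top} B_i^{\top} \prec 0,
\qquad 1 \le i, j \le r.
```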
20.
In this paper, we introduce the Anderson acceleration technique as applied to reinforcement learning tasks. We develop an accelerated value iteration...
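As a rough sketch of the idea (not the authors' algorithm), value iteration can be treated as the fixed-point iteration V ↦ TV and accelerated with Anderson mixing over recent residuals; the function names, the memory parameter m and the absence of safeguards below are illustrative simplifications.

```python
import numpy as np

def bellman_operator(V, P, R, gamma):
    """(T V)(s) = max_a [ R[s, a] + gamma * sum_{s'} P[s, a, s'] V(s') ]."""
    return np.max(R + gamma * np.einsum('sap,p->sa', P, V), axis=1)

def anderson_value_iteration(P, R, gamma, m=5, iters=500, tol=1e-8):
    """Value iteration with Anderson mixing over the last m residual differences."""
    V = np.zeros(R.shape[0])
    G_hist, F_hist = [], []                  # histories of T(V_k) and residuals T(V_k) - V_k
    for _ in range(iters):
        TV = bellman_operator(V, P, R, gamma)
        f = TV - V
        G_hist.append(TV); F_hist.append(f)
        if len(F_hist) > m + 1:
            G_hist.pop(0); F_hist.pop(0)
        if np.max(np.abs(f)) < tol:
            break
        if len(F_hist) > 1:
            dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)])
            dG = np.column_stack([G_hist[i + 1] - G_hist[i] for i in range(len(G_hist) - 1)])
            coef, *_ = np.linalg.lstsq(dF, f, rcond=None)
            V = TV - dG @ coef               # Anderson-extrapolated update
        else:
            V = TV                           # plain value-iteration step
    return V
```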
|