Similar Literature
20 similar documents found (search time: 15 ms)
1.
Time management models for event processing in the Internet of Things face special and important requirements. Many events in real-world applications are long-lasting events with different time granularities, arriving in order or out of order, and the temporal relationships among them are often complex. An important issue in complex event processing is extracting patterns from event streams to support decision making in real time. However, current time management models offer no unified solution covering time granularity, time intervals, time disorder, and the differences between the workday calendar systems of different organizations. In this work, we analyze the preliminaries of the temporal semantics of events, propose a tree-plan model of out-of-order durable events, and introduce a corresponding hybrid solution. A case study illustrates the time constraints and the time optimization, and extensive experimental studies demonstrate the efficiency of our approach.

2.
In the real world, some data have a specific temporal validity that must be appropriately managed, and several temporal database proposals have been introduced to deal with this kind of data. Moreover, time can also be affected by imprecision, vagueness, and/or uncertainty, since human beings manage time using temporal indications and temporal notions that may themselves be imprecise. Information systems therefore require appropriate support for this task. In this work, we present a novel possibilistic valid-time model for fuzzy databases, including the data structures, the integrity constraints, and the DML. Together with this model, we present its implementation as a fuzzy valid-time support module on top of a fuzzy object-relational database system. The integration of these modules allows queries that combine fuzzy valid-time constraints with fuzzy predicates. Besides, the proposed model and implementation support the crisp valid-time model as a particular case of the fuzzy valid-time support provided.
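A common possibilistic representation of an imprecise temporal notion is a trapezoidal possibility distribution; the paper's model is far richer, but the basic idea can be sketched as follows (the year values are invented for illustration):

```python
def trapezoid(a, b, c, d):
    """Possibility distribution for an imprecise time indication:
    0 before a, rising to 1 on the core [b, c], falling to 0 after d."""
    def mu(t):
        if t < a or t > d:
            return 0.0
        if b <= t <= c:
            return 1.0
        if t < b:
            return (t - a) / (b - a)   # rising edge
        return (d - t) / (d - c)       # falling edge
    return mu

# "valid roughly throughout the 1990s": certainly within [1992, 1998],
# possibly as early as 1990 or as late as 2000.
valid = trapezoid(1990, 1992, 1998, 2000)
```

A fuzzy valid-time query can then rank tuples by the possibility degree `valid(t)` instead of applying a crisp in/out test; the crisp case is recovered when `a == b` and `c == d`.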

3.
Integrated access to multiple data sources requires a homogeneous interface provided by a federated schema. Such a federated schema should correctly reflect the semantics of the component schemata from which it is composed. Since the semantics of a database schema is also determined by its semantic integrity constraints, a correct schema integration has to deal with the integrity constraints existing in the different component schemata. Traditionally, most schema integration approaches concentrate solely on the structural integration of the given database schemata; local integrity constraints are often simply neglected, and their relationship to global extensional assertions, which form the basic integration constraints, is ignored completely. In this paper, we discuss the impact of global extensional assertions and local integrity constraints on federated schemata. In particular, we point out the correspondence between local integrity constraints and global extensional assertions. Knowledge of the correspondences between the given integrity constraints and extensional assertions can then be utilized for an augmented schema integration process.
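The interplay between a global extensional assertion and local integrity constraints can be illustrated on the simplest case, range constraints on one attribute. The schemas, attribute, and ranges below are hypothetical, and the containment check is only a sketch of the kind of reasoning the paper generalizes:

```python
def interval_implies(c1, c2):
    """Does range constraint c1 = (lo, hi) imply c2?
    True iff [lo1, hi1] is contained in [lo2, hi2]."""
    return c2[0] <= c1[0] and c1[1] <= c2[1]

# Local range constraints on a 'salary' attribute in two component schemas.
local_a = (2000, 8000)   # schema A: 2000 <= salary <= 8000
local_b = (1000, 9000)   # schema B: 1000 <= salary <= 9000

# Global extensional assertion: extension(A) is a subset of extension(B).
# Every A-tuple must then also satisfy B's constraint, i.e. A's
# constraint must be at least as strict as B's:
compatible = interval_implies(local_a, local_b)

# A federated class covering A UNION B can only guarantee the weaker
# (union) constraint on the shared attribute:
merged = (min(local_a[0], local_b[0]), max(local_a[1], local_b[1]))
```

If `compatible` were false, the extensional assertion and the local constraints would contradict each other, which is precisely the kind of conflict an augmented integration process should detect before building the federated schema.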

4.
Complex event processing (CEP), which emerged to meet the demands of stream data processing, performs well on diverse, streaming data and is widely used in big-data systems for complex events. To address the testing needs of such systems, this paper proposes a Bayesian-network-based method for generating test data for complex-event big-data processing systems. The method builds a Bayesian network prediction model from the structural relationships and probability distributions of complex events observed in a sample of real data, and then generates a complex-event test data set that reproduces the structural and distributional characteristics of the real data. Experimental results show that the proposed method is feasible.

5.
Event-based communication is an important means of asynchronous interaction in many distributed systems. Traditional distributed event management mechanisms neither address the heterogeneity, complexity, and integration requirements of distributed applications nor adequately support real-time environments. CORBA, the mainstream technology for distributed object computing, is the best means of solving the former problem, but the standard CORBA Event Service lacks support for real-time applications. This paper therefore studies the new characteristics that event communication exhibits in distributed real-time environments, analyzes the shortcomings of the CORBA Event Service, and proposes a CORBA-based distributed real-time event management technique. By adding real-time capabilities to the standard CORBA Event Service, we designed and implemented the DREAMS system, a meaningful step toward applying CORBA in the real-time domain.

6.
Comparing Constraints and Triggers in SQL Server   Total citations: 1 (self: 0, others: 1)
Data integrity is crucial in a database: it determines whether the database faithfully reflects the real world. In SQL Server, constraints, rules, and triggers can all be used to enforce data integrity. This article analyzes and compares the similarities and differences between triggers and constraints in enforcing data integrity.
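The article's comparison targets SQL Server, but the contrast between a declarative CHECK constraint and a procedural trigger can be sketched portably with Python's built-in sqlite3 module; the schema below is illustrative, not taken from the article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Declarative constraint: enforced automatically on every write.
    CREATE TABLE account (
        id      INTEGER PRIMARY KEY,
        balance REAL NOT NULL CHECK (balance >= 0)
    );
    -- Trigger: a procedural rule that can do what a CHECK cannot,
    -- e.g. record a change history involving other tables.
    CREATE TABLE audit (account_id INTEGER, old_balance REAL, new_balance REAL);
    CREATE TRIGGER log_balance AFTER UPDATE OF balance ON account
    BEGIN
        INSERT INTO audit VALUES (OLD.id, OLD.balance, NEW.balance);
    END;
""")
conn.execute("INSERT INTO account VALUES (1, 100.0)")
conn.execute("UPDATE account SET balance = 50.0 WHERE id = 1")

# The CHECK constraint rejects invalid data outright:
try:
    conn.execute("INSERT INTO account VALUES (2, -10.0)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

audit_rows = conn.execute("SELECT * FROM audit").fetchall()
```

The example shows the article's point in miniature: the constraint vetoes bad data with no code, while the trigger runs arbitrary logic in response to a change, at the cost of being procedural and harder to reason about.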

7.
Moving resources closer to the user can facilitate the integration of new technologies such as edge, fog, and cloud computing and big data. However, this brings many challenges that must be overcome when distributing real-time stream processing, executing multiple applications in a safe multi-tenant environment, and orchestrating and managing services and resources in a hybrid fog/cloud federation. In this article, we first propose a Business Process Model and Notation (BPMN) extension to enable Internet of Things (IoT)-aware business process (BP) modeling. The proposed extension takes into consideration heterogeneous IoT and non-IoT resources, resource capacities, quality-of-service constraints, and so forth. Second, we present a new IoT-fog-cloud architecture, which (i) supports distributed inter- and intra-layer communication as well as real-time stream processing, in order to process IoT data immediately and improve the reliability of the entire system, (ii) enables multi-application execution within a multi-tenancy architecture, using the single sign-on technique to guarantee data integrity in a multi-tenant environment, and (iii) relies on orchestration and federation management services for deploying BPs onto the appropriate fog and/or cloud resources. Third, using the proposed BPMN 2.0 extension, we model smart monitoring systems for autistic children and for coronavirus disease 2019, and we build prototypes of these two smart systems to carry out a set of extensive experiments illustrating the efficiency and effectiveness of our work.

8.
To coordinate distributed, heterogeneous sources of time-series data, this paper proposes a grid-based synchronization and collaboration model for heterogeneous time series. It presents the concrete model, process, and implementation algorithms for the synchronized integration and use of heterogeneous time-series data, improving the efficiency of synchronized heterogeneous integration while guaranteeing the consistency and integrity of the time series. An application in a monitoring system validates the effectiveness of the method.
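One small piece of any such integration, normalizing records from sources with different layouts and merging them into a single stream ordered by timestamp, can be sketched with the standard library; the sensor records below are made up:

```python
import heapq

# Two heterogeneous sources, each already sorted by timestamp but with
# different record layouts.
source_a = [(1, 20.5), (4, 21.0), (7, 21.3)]             # (ts, temperature)
source_b = [{"t": 2, "hum": 55}, {"t": 4, "hum": 57}]    # dict records

def normalize_a(rows):
    for ts, temp in rows:
        yield (ts, "A", {"temperature": temp})

def normalize_b(rows):
    for r in rows:
        yield (r["t"], "B", {"humidity": r["hum"]})

# heapq.merge keeps the global stream sorted by timestamp without
# materializing all records in memory, which matters for long series.
merged = list(heapq.merge(normalize_a(source_a), normalize_b(source_b)))
```

Normalizing into a common `(timestamp, source, fields)` shape before merging is what lets the consumer treat the heterogeneous sources uniformly.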

9.
A Virtual-Centralization-Based Model for Integrating Heterogeneous Distributed Data   Total citations: 15 (self: 1, others: 14)
The growth of networking and information technology keeps producing new data formats, making data integration ever more urgent. This paper therefore proposes a heterogeneous data integration model based on virtual centralization. It provides uniform access to distributed, heterogeneous data while guaranteeing data consistency, real-time currency, and "plug-and-play" data sources, thereby offering a good solution to the joint use of heterogeneous data sources. Finally, a basic implementation based on Java and XML technologies is given.
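Virtual centralization is commonly realized with a mediator/wrapper design: each source stays where it is behind a wrapper, and a mediator fans queries out at request time, so nothing is copied into a central store. A minimal Python sketch (the paper's implementation uses Java and XML; the city/population data here is invented):

```python
class CsvWrapper:
    """Wraps a CSV-style source behind a common query interface."""
    def __init__(self, text):
        self.rows = [line.split(",") for line in text.strip().splitlines()]
    def query(self, city):
        return [{"city": c, "pop": int(p)} for c, p in self.rows if c == city]

class DictWrapper:
    """Wraps an in-memory key/value source behind the same interface."""
    def __init__(self, data):
        self.data = data
    def query(self, city):
        return [{"city": city, "pop": self.data[city]}] if city in self.data else []

class Mediator:
    def __init__(self):
        self.sources = []
    def register(self, wrapper):
        # "Plug-and-play": adding a source is just registering its wrapper.
        self.sources.append(wrapper)
    def query(self, city):
        # Virtual: data is fetched from each live source at query time,
        # never copied into a central store.
        return [row for s in self.sources for row in s.query(city)]

m = Mediator()
m.register(CsvWrapper("Beijing,21890000\nShanghai,24870000"))
m.register(DictWrapper({"Shenzhen": 17560000}))
result = m.query("Shenzhen")
```

Because queries always reach the live sources, results are as current as the sources themselves, which is how the model obtains its real-time guarantee.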

10.
Real-Time Data Integration Technology and Its Applications   Total citations: 7 (self: 0, others: 7)
With the adoption of real-time databases in the process industries and the rapid growth of the Internet, collecting and integrating distributed real-time data has become a pressing problem. This paper introduces the key techniques of real-time data integration and ISRDIP, a real-time data integration platform built with them, covering its architecture, global schema, data replication, and desktop tools. The ISRDIP platform is already in use in the petrochemical industry.

11.
Design of a Role-Based Distributed Transaction Processing Model   Total citations: 2 (self: 0, others: 2)
Transaction processing has traditionally been applied in database systems and operating systems. With the rapid development of network and distributed-object technologies, transaction processing has also been widely introduced into large, distributed, heterogeneous computing environments. After comparing the strengths and weaknesses of the main distributed transaction processing models, this paper designs a role-based, agent-oriented distributed transaction processing model for heterogeneous data integration systems, aiming to build high-performance, highly available applications in such systems.

12.
As information and communication technologies such as cloud computing and the Internet of Things are integrated with supervisory control and data acquisition (SCADA) systems, industrial control systems face new security problems, among which data integrity, confidentiality, and effective authentication have drawn particular attention. To address these problems in such a multi-functional, distributed environment, this paper uses attribute-based encryption to construct access-control policies that provide users with authentication and authorization services, protect data communication between users and the industrial control system, and check the integrity of stored data in real time. The scheme is analyzed for correctness, security, and system performance, and is compared with commonly used authentication methods.

13.
Discrete event simulation is a methodology for studying the behavior of complex systems. Its drawback is that, to get reliable results, simulations usually have to be run over a long stretch of time; this time requirement can be reduced by using parallel or distributed computing systems. In this paper, we analyze the Time Warp synchronization protocol for parallel discrete event simulation and present an analytical model evaluating an upper bound on the completion time of a Time Warp simulation. Our analysis considers a simulation model with homogeneous logical processes, where "homogeneous" means they have the same average event routine time and the same state saving cost. We then propose a methodology to determine when it is convenient, in terms of completion time, to use a Time Warp simulation instead of a sequential one for a simulation model whose features match those considered in our analysis, without needing to generate the simulation code beforehand. Examples of the methodology's usage are reported for both a synthetic benchmark and a real-world model.
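The mechanism Time Warp's cost model revolves around is optimistic processing with rollback: a logical process executes events as they arrive, checkpoints its state, and rolls back when a "straggler" (an event timestamped in its past) shows up. A toy single-process sketch of that mechanism, not the paper's analytical model, with invented events:

```python
class LogicalProcess:
    """Optimistically processes timestamped events, checkpointing after
    each one; a straggler forces a rollback and re-execution."""

    def __init__(self):
        self.state = 0
        self.lvt = 0                 # local virtual time
        self.processed = []          # (ts, value) in timestamp order
        self.saved = [(0, 0)]        # checkpoints: (lvt, state)
        self.rollbacks = 0

    def handle(self, ts, value):
        if ts < self.lvt:            # straggler: roll back past it
            self.rollbacks += 1
            while self.saved[-1][0] >= ts:
                self.saved.pop()
            self.lvt, self.state = self.saved[-1]
            redo = [e for e in self.processed if e[0] >= ts]
            self.processed = [e for e in self.processed if e[0] < ts]
            for e in sorted(redo + [(ts, value)]):
                self._process(*e)
        else:
            self._process(ts, value)

    def _process(self, ts, value):
        self.state += value          # stand-in for the event routine
        self.lvt = ts
        self.processed.append((ts, value))
        self.saved.append((ts, self.state))

lp = LogicalProcess()
for ts, v in [(1, 10), (5, 20), (3, 5)]:   # the event at t=3 arrives late
    lp.handle(ts, v)
```

The wasted re-execution and state-saving work visible here is exactly what the paper's upper bound on completion time must account for when deciding whether Time Warp beats a sequential run.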

14.
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during the execution of a process. These event data can be used to analyze the process with process mining techniques: to discover the real process, measure conformance to a given process model, or enhance existing models with performance information. Mapping the produced events to the activities of a given process model is essential for conformance checking, annotation, and understanding of process mining results. To accomplish this mapping with little manual effort, we developed a semi-automatic approach that maps events to activities using insights from behavioral analysis and label analysis. The approach extracts Declare constraints from both the log and the model to build matching constraints that efficiently reduce the number of possible mappings. These mappings are further reduced using techniques from natural language processing, which allow matching based on labels and external knowledge sources. An evaluation with synthetic and real-life data demonstrates the effectiveness of the approach and its robustness to non-conforming execution logs.
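The label-analysis side of such a mapping can be sketched with a simple token-overlap (Jaccard) similarity between event names and activity labels; this is a crude stand-in for the paper's approach (no Declare constraints or external knowledge sources), with invented labels:

```python
def tokens(label):
    """Lowercase word set of an event or activity label."""
    return set(label.lower().replace("_", " ").split())

def jaccard(a, b):
    """Token-overlap similarity between two token sets."""
    return len(a & b) / len(a | b)

def map_events(event_names, activity_names, threshold=0.3):
    """Map each event to the activity with the most similar label,
    keeping only matches above the similarity threshold."""
    mapping = {}
    for ev in event_names:
        best = max(activity_names,
                   key=lambda act: jaccard(tokens(ev), tokens(act)))
        if jaccard(tokens(ev), tokens(best)) >= threshold:
            mapping[ev] = best
    return mapping

events = ["create_purchase_order", "order_approved", "ship goods"]
activities = ["Create Purchase Order", "Approve Order", "Ship Goods"]
mapping = map_events(events, activities)
```

Pure label matching already fails on pairs like "approved"/"approve" without stemming, which is why the paper combines it with behavioral constraints extracted from the log and the model.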

15.
Adapting integrity enforcement techniques for data reconciliation   Total citations: 2 (self: 0, others: 2)
Integration of data sources opens up possibilities for new and valuable applications of data that cannot be supported by the individual sources alone. Unfortunately, many data integration projects are hindered by the inherent heterogeneities in the sources to be integrated. In particular, differences in the way that real-world data is encoded within sources can cause a range of difficulties, not least of which is that the conflicting semantics may not be recognised until the integration project is well under way. Once identified, semantic conflicts of this kind are typically dealt with by configuring a data transformation engine that can convert incoming data into the form required by the integrated system. However, determining a complete and consistent set of data transformations for any given integration task is far from trivial. In this paper, we explore the potential application of integrity enforcement techniques in supporting this process. We describe the design of a data reconciliation tool (LITCHI) based on these techniques that aims to assist taxonomists in the integration of biodiversity data sets. Our experiences have highlighted several limitations of integrity enforcement when applied to this real-world problem, and we describe how we have overcome these in the design of our system.

16.
Distributed real-time database systems: background and literature review   Total citations: 1 (self: 0, others: 1)
Today's real-time systems (RTS) are characterized by managing large volumes of dispersed data, making real-time distributed data processing a reality. Large businesses need distributed processing for many reasons, and often must adopt it to stay competitive, so efficient database management algorithms and protocols for accessing and manipulating data are required to satisfy the timing constraints of the supported applications. New research in distributed real-time database systems (DRTDBS) is therefore needed to investigate possible ways of applying database systems technology to real-time systems. This paper first discusses the performance issues that are important to DRTDBS, and then surveys the research done so far on issues such as priority assignment policy, commit protocols, and optimizing the use of memory in non-replicated/replicated environments for distributed real-time transaction processing. The study provides a foundation for addressing the performance issues important for managing very large amounts of real-time data, and a pointer to other publications in journals and conference proceedings for further investigation of unanswered research questions.
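Earliest Deadline First (EDF) is one classic priority assignment policy covered by such surveys: among ready transactions, always schedule the one whose deadline is closest. A minimal sketch with hypothetical transactions:

```python
import heapq

def edf_order(transactions):
    """Return the EDF execution order for (name, deadline) pairs.

    A min-heap keyed on deadline gives O(log n) selection of the
    most urgent ready transaction at each step.
    """
    heap = [(deadline, name) for name, deadline in transactions]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

# Three transactions with absolute deadlines at t=50, 20, and 35.
order = edf_order([("T1", 50), ("T2", 20), ("T3", 35)])
```

In a DRTDBS the policy interacts with commit protocols and data conflicts (a low-priority transaction may hold locks a high-priority one needs), which is where much of the surveyed research lies.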

17.
We introduce a new paradigm for real-time conversion of a real world event into a rich multimedia database by processing data from multiple sensors observing the event. A real-time analysis of the sensor data, tightly coupled with domain knowledge, results in instant indexing of multimedia data at capture time. This yields semantic information to answer complex queries about the content and the ability to extract portions of data that correspond to complex actions performed in the real world. The power of such an instantly indexed multimedia database system, in content-based retrieval of multimedia data or in semantic analysis and visualization of the data, far exceeds that of systems which index multimedia data only after it is produced. We present LucentVision, an instantly indexed multimedia database system developed for the sport of tennis. This system analyzes video from multiple cameras in real time and captures the activity of the players and the ball in the form of motion trajectories. The system stores these trajectories in a database along with video, 3D models of the environment, scores, and other domain-specific information. LucentVision has been used to enhance live television and Internet broadcasts with game analyses and virtual replays in more than 250 international tennis matches.

18.
Solving problems in a complex application domain often requires seamless integration of existing knowledge derivation systems that were independently developed to solve subproblems using different inferencing schemes. This paper presents the design and implementation of an Integrated Knowledge Derivation System (IKDS), which allows the user to query a global database containing data derivable by the rules and constraints of a number of cooperating heterogeneous systems. The global knowledge representation scheme, the global knowledge manipulation language, and the global knowledge processing mechanism of IKDS are described in detail. For global knowledge representation, the dynamic aspects of knowledge, such as derivational relationships and restrictive dependencies among data items, are modeled by a Function Graph that uniformly represents the capabilities (or knowledge) of the rule-based systems, while the usual static aspects, such as data items and their structural interrelationships, are modeled by an object-oriented model. For knowledge manipulation, three types of high-level exploratory queries are introduced to allow the user to query the global knowledge base. For deriving the best global answers to queries, the global knowledge processing mechanism allows the rules and constraints in different component systems to be exploited indiscriminately despite the incompatibilities in their inferencing mechanisms and interpretation schemes. Several key algorithms required for the knowledge processing mechanism are described in this paper. The main advantage of this integration approach is that rules and constraints can in effect be shared among heterogeneous rule-based systems, so that they can freely exchange their data and operate as parts of a single system. IKDS achieves the integration at the rule level instead of the system level.
It has been implemented in C, running on a network of heterogeneous component systems containing three independently developed expert systems with different rule formats and inferencing mechanisms. (Database Systems Research and Development Center, Department of Computer Information Sciences, Department of Electrical Engineering, University of Florida)

19.
Different from traditional association-rule mining, a new paradigm called Ratio Rules (RR) was proposed recently; ratio rules aim to capture quantitative association knowledge. We extend this framework to mining ratio rules from distributed and dynamic data sources, a novel and challenging problem. The traditional technique for ratio rule mining is an eigen-system analysis, which can often fall victim to noise; this has greatly limited the application of ratio rule mining. Distributed data sources impose the additional constraint that the mining procedure be robust in the presence of noise, because it is difficult to clean all the data sources in real time in real-world tasks. In addition, traditional batch methods for ratio rule mining cannot cope with dynamic data. In this paper, we propose an integrated method for mining ratio rules from distributed and changing data sources: we first mine the ratio rules from each data source separately through a novel robust and adaptive one-pass algorithm, called Robust and Adaptive Ratio Rule (RARR), and then integrate the rules of each data source in a simple probabilistic model. In this way, we can acquire the global rules from all the local information sources adaptively. We show that the RARR technique converges to a fixed point and is robust, and that the integration of rules is efficient and effective. Both theoretical analysis and experiments illustrate that the performance of RARR and the proposed information integration procedure is satisfactory for discovering latent associations in distributed, dynamic data sources.
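The classical eigen-analysis baseline that RARR improves on can be shown on a tiny two-attribute example: a ratio rule is the principal eigenvector of the centered data's covariance matrix. The spending figures below are invented, and this batch computation is exactly the non-robust, non-incremental baseline, not RARR itself:

```python
import math

# Hypothetical (bread, butter) spending per customer, following a 2:1 rule.
data = [(2.0, 1.0), (4.0, 2.0), (6.0, 3.0), (8.0, 4.0)]

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n

# 2x2 covariance matrix of the centered data.
cxx = sum((x - mx) ** 2 for x, _ in data) / n
cyy = sum((y - my) ** 2 for _, y in data) / n
cxy = sum((x - mx) * (y - my) for x, y in data) / n

# Power iteration: repeated multiplication by the covariance matrix
# converges to the dominant eigenvector, i.e. the ratio rule direction.
vx, vy = 1.0, 1.0
for _ in range(50):
    wx = cxx * vx + cxy * vy
    wy = cxy * vx + cyy * vy
    norm = math.hypot(wx, wy)
    vx, vy = wx / norm, wy / norm

ratio = vx / vy   # bread : butter spending ratio captured by the rule
```

On clean data the recovered ratio is exact; a single corrupted record, however, can drag the eigenvector far off, which is the noise sensitivity that motivates the robust one-pass RARR algorithm.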

20.
In Web data integration, many event-mention statements corefer to the same real-world event, causing data redundancy. To solve this, we propose a Markov-logic-network-based method for unifying event mentions: from a set of coreferent mentions it derives one relatively accurate and detailed mention to represent the real-world event, supplying high-quality data for integration. Each event mention is represented in eight dimensions; a Markov logic network is trained to infer accurate, detailed content for each dimension from the coreferent mention set, and the inferred dimensions are recombined into a single mention. A small number of first-order predicates encode rules about dimension content, event mentions, and data sources, and inference resolves inconsistent, incomplete, and insufficiently detailed data. Experimental results show that the method obtains relatively accurate and detailed unified event mentions.
