Similar Documents (20 results)
1.
Software quality models can predict which modules will have high risk, enabling developers to target enhancement activities to the most problematic modules. However, many find collection of the underlying software product and process metrics a daunting task. Many software development organizations routinely use very large databases for project management, configuration management, and problem reporting which record data on events during development. These large databases can be an unintrusive source of data for software quality modeling. However, multiplied by many releases of a legacy system or a broad product line, the amount of data can overwhelm manual analysis. The field of data mining is developing ways to find valuable bits of information in very large databases. This aptly describes our software quality modeling situation. This paper presents a case study that applied data mining techniques to software quality modeling of a very large legacy telecommunications software system's configuration management and problem reporting databases. The case study illustrates how useful models can be built and applied without interfering with development.
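As a rough illustration of the kind of model such a case study builds (not the paper's actual procedure), the sketch below fits a small decision-tree classifier to hypothetical per-module metrics of the sort that configuration-management and problem-reporting databases yield; the column names, data, and model choice are assumptions.

```python
# Hedged sketch: a minimal fault-proneness classifier built from
# hypothetical module metrics mined from CM / problem-report databases.
# Column names and data are illustrative assumptions, not the paper's.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy per-module records: churn and defect counts across releases.
modules = pd.DataFrame({
    "lines_changed": [120, 15, 900, 40, 300, 5, 700, 60],
    "num_commits":   [10, 2, 45, 5, 20, 1, 38, 7],
    "past_defects":  [3, 0, 12, 1, 5, 0, 9, 1],
    "defect_in_next_release": [1, 0, 1, 0, 1, 0, 1, 0],  # label
})

X = modules.drop(columns=["defect_in_next_release"])
y = modules["defect_in_next_release"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# A shallow tree keeps the model interpretable for developers.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("predicted high-risk modules:", model.predict(X_te))
```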

2.
The computing environment in most medium-sized and large enterprises involves old main-frame based (legacy) applications and systems as well as new workstation-based distributed computing systems. The objective of the METEOR project is to support multi-system workflow applications that automate enterprise operations. This paper deals with the modeling and specification of workflows in such applications. Tasks in our heterogeneous environment can be submitted through different types of interfaces on different processing entities. We first present a computational model for workflows that captures the behavior of both transactional and non-transactional tasks of different types. We then develop two languages for specifying a workflow at different levels of abstraction: the Workflow Specification Language (WFSL) is a declarative rule-based language used to express the application-level interactions between multiple tasks, while the Task Specification Language (TSL) focuses on the issues related to individual tasks. These languages are designed to address the important issues of inter-task dependencies, data formatting, data exchange, error handling, and recovery. The paper also presents an architecture for the workflow management system that supports the model and the languages. Recommended by: Omran Bukhres and E. Kühn
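The abstract does not reproduce WFSL or TSL syntax, so the sketch below only illustrates the underlying idea of inter-task dependencies in Python: a task may start only when its predecessors have reached required states. The task names, states, and rule encoding are assumptions.

```python
# Hedged sketch of inter-task dependency checking in a multi-system workflow.
# This is not WFSL/TSL; the rule encoding below is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    transactional: bool
    state: str = "NOT_STARTED"   # NOT_STARTED -> RUNNING -> COMMITTED / ABORTED

# Inter-task dependencies: (predecessor, required_state, successor).
dependencies = [
    ("reserve_inventory", "COMMITTED", "charge_card"),
    ("charge_card",       "COMMITTED", "ship_order"),
]

def can_start(task_name, tasks):
    """A task may start only if every predecessor has reached its required state."""
    return all(tasks[pred].state == required
               for pred, required, succ in dependencies if succ == task_name)

tasks = {t.name: t for t in [
    Task("reserve_inventory", transactional=True),
    Task("charge_card", transactional=True),
    Task("ship_order", transactional=False),
]}
tasks["reserve_inventory"].state = "COMMITTED"
print(can_start("charge_card", tasks))  # True
print(can_start("ship_order", tasks))   # False: charge_card has not committed yet
```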

3.
Generality and scale are important but difficult issues in knowledge engineering. At the root of the difficulty lie two challenging issues: how to accumulate huge volumes of knowledge and how to support heterogeneous knowledge and processing. One approach to the first issue is to reuse legacy knowledge systems, integrate knowledge systems with legacy databases, and enable sharing of the databases by multiple knowledge systems. We present an architecture called HIPED for realizing this approach. HIPED converts the second issue above into a new form: how to convert data accessed from a legacy database into a form appropriate to the processing method used in a legacy knowledge system. One approach to this reformed issue is to use method-specific compilation of data into knowledge. We describe an experiment in which a legacy knowledge system called INTERACTIVE KRITIK is integrated with an ORACLE database. The experiment indicates the computational feasibility of method-specific data-to-knowledge compilation.

4.
Research on data mining over distributed, heterogeneous, and legacy data
This paper studies methods and strategies for data mining in distributed, heterogeneous, and legacy databases. It first discusses mining methods for distributed databases, then extends the discussion to data mining over heterogeneous data sources, and finally addresses mining methods for legacy databases.

5.
Object-oriented databases (OODBs) were developed to support advanced applications for which traditional databases are not sufficient. The data management requirements of these new applications are significantly different from more traditional data processing applications. More light needs to be shed on these requirements in order to identify the aspects of OODBs that can lead to standards. We have studied the data management requirements of one class of advanced database applications: rule-based software development environments (RBDEs). RBDEs store project components in an object database and control access to these objects through a rule-based process model, which defines each development activity as a rule. The components are abstracted as instances of classes which are defined by the project's data model. In this paper we discuss the constructs that a data modeling language for RBDEs should provide, and then explore some of the data management requirements of RBDEs. We use the Marvel system we developed at Columbia as an example.

6.
This paper proposes a framework of engineering constraint maintenance using an active object-oriented database and solves a problem encountered when implementing the framework. The framework is proposed for the information-driven CIM system that integrates engineering constraints as well as its data. It resolves problems of the existing application-oriented constraint maintenance in which constraints are scattered in heterogeneous applications. This is possible due to the integrated management of constraints on a database using triggers, that is, on an “active” database. Existing active object-oriented databases, however, cannot properly support certain constraints that are specified on a set of classes. Those are the cases where the constraints must be maintained in the forward direction along a class composition hierarchy as well as in the backward direction. We call these kinds of problems “backward propagation problems” and investigate several approaches to resolve them using currently available techniques. Based on an approach which uses virtual classes, a new constructor, called CONSTRAINTCCH, is proposed to support the backward propagation. Advantages of the proposed framework and the constructor for the backward propagation are demonstrated on a design constraint management application that supports control panel design.
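A minimal sketch of the backward-propagation idea, assuming a control-panel example: changing a component re-triggers a constraint defined on the composite that contains it. The classes, constraint, and propagation hook are illustrative; the paper's CONSTRAINTCCH construct and trigger mechanism are not shown.

```python
# Hedged sketch of "backward propagation" of a constraint along a class
# composition hierarchy: a change to a component re-checks a constraint
# defined on the composite that owns it. Names are illustrative only.
class ControlPanel:
    MAX_TOTAL_WIDTH = 100

    def __init__(self):
        self.buttons = []

    def check_constraints(self):
        total = sum(b.width for b in self.buttons)
        if total > self.MAX_TOTAL_WIDTH:
            raise ValueError(f"panel constraint violated: total width {total}")

class Button:
    def __init__(self, panel, width):
        self.panel = panel          # back-reference to the composite
        self.width = width
        panel.buttons.append(self)

    def resize(self, width):
        self.width = width
        self.panel.check_constraints()   # backward propagation to the composite

panel = ControlPanel()
b1, b2 = Button(panel, 40), Button(panel, 40)
b1.resize(50)                # fine: 50 + 40 <= 100
try:
    b2.resize(60)            # 50 + 60 = 110 > 100
except ValueError as e:
    print("rejected at the composite level:", e)
```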

7.
In statistical databases and data warehousing applications it is commonly the case that aggregate views are maintained as an underlying mechanism for summarising information. Where the databases or applications are distributed, or arise from independent data collections or system developments, there may be incompatibility, heterogeneity, and data inconsistency. These challenges need to be overcome if federations of aggregated databases are to be successfully incorporated into systems for database management, querying, retrieval, and knowledge discovery. In this paper we address the issue of integrating aggregate views that have semantically heterogeneous classification schemes. In previous work we have developed a methodology that is efficient but that cannot easily handle data inconsistencies. Our previous approach is therefore not particularly well-suited to very large databases or federations of large numbers of databases. We now address these scalability issues by introducing a methodology for heterogeneous aggregate view integration that constructs a dynamic shared ontology to which each of the aggregate views can be explicitly related. A maximum likelihood technique, implemented using the EM (Expectation-Maximisation) algorithm, is used to inherently handle data inconsistencies in the computation of integrated aggregates that are described in terms of the dynamic shared ontology.
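A minimal EM sketch of the general idea, under assumed data: two sources report aggregate counts over different coarse groupings of a shared set of fine categories, and EM estimates the fine-category proportions by repeatedly apportioning each coarse count and re-normalising. The categories, mappings, and counts are made up; the paper's dynamic shared ontology and likelihood model are richer than this.

```python
# Hedged EM sketch for reconciling aggregates reported under two
# heterogeneous classification schemes onto shared fine-grained categories.

# Shared ontology: four fine categories.
fine = ["a1", "a2", "b1", "b2"]

# Each source reports totals over its own coarser groupings of the fine categories.
observations = [
    ({"a1", "a2"}, 60), ({"b1", "b2"}, 40),   # source 1: scheme A
    ({"a1", "b1"}, 55), ({"a2", "b2"}, 45),   # source 2: scheme B
]

# Start from a uniform estimate of the fine-category proportions.
p = {c: 1.0 / len(fine) for c in fine}

for _ in range(200):
    expected = {c: 0.0 for c in fine}
    # E-step: apportion each coarse count over its fine categories
    # in proportion to the current estimates.
    for group, count in observations:
        mass = sum(p[c] for c in group)
        for c in group:
            expected[c] += count * p[c] / mass
    # M-step: re-normalise the expected counts into proportions.
    total = sum(expected.values())
    p = {c: expected[c] / total for c in fine}

print({c: round(v, 3) for c, v in p.items()})
```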

8.
Data dependencies play an important role in the design of a database. Many legacy database applications have been developed on old generation database management systems and conventional file systems. As a result, most of the data dependencies in legacy databases are not enforced in the database management systems. As such, they are not explicitly defined in the database schema but are enforced in the transactions that update the databases. It is very difficult and time consuming to find out the designed data dependencies manually during the maintenance and reengineering of database applications. In software engineering, program analysis has long been developed and proven as a useful aid in many areas. With the use of program analysis, this paper proposes a novel approach for the recovery of common data dependencies, i.e., functional dependencies, key constraints, inclusion dependencies, referential constraints, and sum dependencies, designed in a database from the behavior of transactions that update the database. The approach is based on detecting program path patterns for implementing the most commonly used methods to enforce these data dependencies.
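The paper recovers dependencies from transaction code via program path patterns; as a small complementary illustration only, the sketch below checks whether a candidate functional dependency holds over an extracted data sample. The table and column names are assumptions.

```python
# Hedged sketch: checking whether a candidate functional dependency X -> Y
# holds in a table extract. This data-level check is not the paper's
# program-analysis technique; it only illustrates the target property.
def holds_fd(rows, lhs, rhs):
    """Return True if the values of `lhs` determine the values of `rhs`."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

employees = [
    {"emp_id": 1, "dept": "db", "dept_head": "li"},
    {"emp_id": 2, "dept": "db", "dept_head": "li"},
    {"emp_id": 3, "dept": "sw", "dept_head": "wang"},
]
print(holds_fd(employees, ["dept"], ["dept_head"]))    # True
print(holds_fd(employees, ["dept_head"], ["emp_id"]))  # False
```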

9.
王宏志  李建中  高宏 《软件学报》2012,23(3):539-549
Unclean data poses new challenges for data management. Current data-cleaning methods have limitations in practical applications, so the existence of unclean data must be tolerated to some degree. Techniques for managing databases that contain unclean data therefore become an important research problem, whose core is how to obtain, from a database containing unclean data, query results that satisfy the degree of cleanliness required by the application. Starting from the perspective of unclean data processing, this paper proposes a data model for unclean databases. The model defines a representation for unclean data, supports data operations over unclean data, and supports computing the cleanliness of the results of those operations; equivalence transformation rules for query expressions and a preliminary implementation of the model are also discussed.
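A heavily hedged sketch of the general idea, since the abstract does not give the model's formal semantics: tuples carry a cleanliness degree, an operation propagates degrees (here by multiplication, which is an assumption), and only results meeting the application's required cleanliness are returned.

```python
# Hedged sketch of querying tuples annotated with a cleanliness degree.
# The propagation rule (product of degrees on a join) and the threshold
# semantics are assumptions for illustration, not the paper's model.
customers = [  # (tuple, cleanliness degree in [0, 1])
    ({"cid": 1, "city": "Harbin"}, 0.95),
    ({"cid": 2, "city": "Harbn"},  0.60),   # suspect value kept, not cleaned away
]
orders = [
    ({"cid": 1, "amount": 200}, 0.90),
    ({"cid": 2, "amount": 150}, 0.99),
]

def join_with_cleanliness(r, s, key, required=0.8):
    for t1, c1 in r:
        for t2, c2 in s:
            if t1[key] == t2[key] and c1 * c2 >= required:
                yield {**t1, **t2}, c1 * c2

for tup, degree in join_with_cleanliness(customers, orders, "cid"):
    print(tup, round(degree, 2))   # only the sufficiently clean pair survives
```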

10.
Successful information management implies the ability to design accurate representations of the real world of interest, in spite of the diversity of perceptions from the applications sharing the same database. Current database management systems do not provide representation schemes that preserve each perception while fully supporting their diversity and maintaining their consistency. This is a major hindrance for building an all-embracing view of the world while serving multiple applications, whether it is by developing a single database or by providing transparent access (e.g., via the Web) to several heterogeneous data sources (that would typically hold a great diversity of stored representations). This paper reports on results from the multiple representations and multiple resolutions in geographical databases project, funded by the European Commission under the 5th Framework Programme. The objective of the project has been to enhance GIS (or DBMS) by adding functionality that supports multiple coexisting representations of the same real-world phenomena (semantic flexibility), including representations of geographic data at multiple resolutions (cartographic flexibility). The new functionality enables a semantically meaningful management of multi-scale, integrated, and temporal geo-databases.

11.
Medium-sized and large enterprises still struggle with troublesome legacy data: legacy databases cannot readily communicate with robust external systems, so old and new data cannot interoperate. To address this difficulty, this paper starts from the ideas of SOA and proposes a lightweight system-integration architecture based on Web Services. Exploiting the application interface capabilities of Web Services and exchanging data via SOAP, database interfaces are converted into Web Services automatically, with some manual intervention. This not only makes otherwise inaccessible legacy data operable, but also provides service methods and access interfaces for interoperating with data on heterogeneous platforms; experiments show that, to a certain extent, the approach saves an enterprise's existing capital costs.
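A minimal stand-in for the described exchange path, assuming a toy schema: a row from a "legacy" table is wrapped in a SOAP-style XML envelope. The paper's implementation is Java and Web Services based; the Python standard library is used here purely for illustration, and the element names are assumptions.

```python
# Hedged sketch: serialising a legacy database row into a SOAP-style XML
# envelope, standing in for the paper's Web Services / SOAP data exchange.
import sqlite3
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def row_to_soap(table, row):
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    record = ET.SubElement(body, table)
    for column, value in row.items():
        ET.SubElement(record, column).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# A stand-in "legacy" table held in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'ACME')")
conn.row_factory = sqlite3.Row

row = dict(conn.execute("SELECT * FROM customer").fetchone())
print(row_to_soap("customer", row))
```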

12.
The problem of data integration throughout the lifecycle of a construction project among multiple collaborative enterprises remains unsolved due to the dynamics and fragmented nature of the construction industry. This study presents a novel cloud approach that, focusing on China’s special construction requirements, proposes a series of as-built BIM (building information modeling) tools and a self-organised application model that correlates project engineering data and project management data through a seamless BIM and BSNS (business social networking services) federation. To achieve a logically centralised single-source data structure, a unified data model is constructed that integrates two categories of heterogeneous databases through the adoption of handlers. Based on these models, key technical mechanisms that are critical to the successful management of large amounts of data are proposed and implemented, including permission, data manipulation and file version control. Specifically, a dynamic Generalised List series is proposed to address the sophisticated construction file versioning issue. The proposed cloud has been successfully used in real applications in China. This research work can enable data sharing not only by individuals and project teams but also by enterprises in a consistent and sustainable way throughout the life of a construction project. This system will reduce costs for construction firms by providing effective and efficient means and guides to complex project management, and by facilitating the conversion of project data into enterprise-owned properties.

13.
Effective support for temporal applications by database systems represents an important technical objective that is difficult to achieve since it requires an integrated solution for several problems, including (i) expressive temporal representations and data models, (ii) powerful languages for temporal queries and snapshot queries, (iii) indexing, clustering and query optimization techniques for managing temporal information efficiently, and (iv) architectures that bring together the different pieces of enabling technology into a robust system. In this paper, we present the ArchIS system that achieves these objectives by supporting a temporally grouped data model on top of RDBMS. ArchIS’ architecture uses (a) XML to support temporally grouped (virtual) representations of the database history, (b) XQuery to express powerful temporal queries on such views, (c) temporal clustering and indexing techniques for managing the actual historical data in a relational database, and (d) SQL/XML for executing the queries on the XML views as equivalent queries on the relational database. The performance studies presented in the paper show that ArchIS is quite effective at storing and retrieving under complex query conditions the transaction-time history of relational databases, and can also assure excellent storage efficiency by providing compression as an option. This approach achieves full-functionality transaction-time databases without requiring temporal extensions in XML or database standards, and provides critical support to emerging application areas such as RFID.
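A small sketch of what "temporal grouping" means, under an assumed employee history: the flat sequence of transaction-time versions is regrouped so that each key carries per-attribute histories, which is the shape ArchIS exposes as XML views (plain Python dicts are used here instead of XML, and the schema is an assumption).

```python
# Hedged sketch of a temporally grouped view of a transaction-time history,
# plus a simple snapshot query over it. Names and columns are illustrative.
from collections import defaultdict

history = [  # (emp_id, attribute, value, start, end) -- transaction-time versions
    (101, "salary", 50000, "2010-01-01", "2011-01-01"),
    (101, "salary", 55000, "2011-01-01", "9999-12-31"),
    (101, "title", "engineer", "2010-01-01", "9999-12-31"),
]

grouped = defaultdict(lambda: defaultdict(list))
for emp, attr, value, start, end in history:
    grouped[emp][attr].append({"value": value, "from": start, "to": end})

def value_at(emp, attr, ts):
    """Snapshot query: the value of `attr` for `emp` as of transaction time ts."""
    for version in grouped[emp][attr]:
        if version["from"] <= ts < version["to"]:
            return version["value"]

print(dict(grouped[101]))                      # the temporally grouped history
print(value_at(101, "salary", "2010-06-15"))   # 50000
```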

14.
15.
With the massive growth of drilling-engineering information, departments at all levels have built their own drilling databases. Because these departments and systems lack overall planning and coordinated design of business and data relationships, the databases have to some extent become isolated "information islands", so a data-warehouse solution that combines data integration, management, analysis, and decision support is urgently needed to support data sharing and comprehensive research applications. Based on an analysis of drilling-engineering data and a survey of information technology, this paper proposes a data-warehouse solution for drilling engineering, namely an ontology-based drilling data-integration method, which to a certain extent solves the multi-source heterogeneity of drilling data. The research and development of the drilling data warehouse provides strong database support for decision analysis by engineering managers and technical staff at different levels and in different departments, and lays a foundation for informatized, scientific decision making.

16.
map. Computers & Graphics, 2003, 27(6): 893-898
Mobile devices such as PDAs evolve rapidly from digital calendars and address books to hosts of more complex functionality. Mobile access to business information such as customer, product or project databases is seen as one of the cutting-edge IT solutions for improving productivity and customer satisfaction. However, mobility and scaled-down mobile technology lead to specific limitations in contrast to the usage of desktop computers. These restrictions are absolutely crucial to consider for the development of usable mobile applications.

With reference to the map project (Multimedia Arbeitsplatz der Zukunft, the Multimedia Workplace of the Future), this article outlines our main approach to situation-aware support for mobile workers in response to these mobile restrictions. We point out the focal components and sketch parts of the map architecture, in particular the prototypical application “BuddyAlert”.


17.
The service-oriented architecture is becoming increasingly popular as a paradigm for developing new distributed systems and integrating heterogeneous legacy systems. A service-oriented system (SO system for short) is a group of applications that interact with one another by providing and/or consuming services. As the deployments of service-oriented applications and systems increase, it becomes obvious that systems management is the “Achilles’ heel” of these systems. This paper reviews the underlying concepts, principles and technologies related to SO systems management.

18.
The Array Management System (AMS) is an integrated set of array management tools designed to increase the productivity of technical programmers engaged in intensive matrix computational applications. These include analog circuit simulation, statistical analysis, dense or sparse equation solving, simulation, and, in particular, finite element program development. AMS is composed of a set of easy-to-use in-core and out-of-core data management subroutines written in FORTRAN 77. The in-core array management subroutines of AMS allow dynamic storage allocation to be accomplished with integer, real, and complex data with a minimum of programming effort. The out-of-core array management subroutines of AMS support simple operations to allow array transfer between in-core and out-of-core systems and allow different programs to access the same data. The out-of-core data management provides for a direct access database file to speed up the input/output operations. Multiple databases are allowed to be accessed by a program; this provides an easy way to share data and restart. This integrated database environment is suitable as the kernel of a software project with several programmers and data communication among them.
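AMS itself is a library of FORTRAN 77 subroutines; as a stand-in illustration of the out-of-core idea only, the sketch below keeps a large array in a direct-access file and maps it on demand, so that a later run or another program can reopen the same data. The file name and shape are assumptions.

```python
# Hedged sketch of out-of-core array management: an array too large for memory
# lives in a file and is mapped in pieces. numpy.memmap stands in for the
# AMS direct-access database file; it is not the AMS mechanism itself.
import numpy as np

# Create a file-backed array ("out-of-core" storage) and write to a slice.
stiffness = np.memmap("stiffness.dat", dtype=np.float64, mode="w+", shape=(1000, 1000))
stiffness[0, :10] = np.arange(10)
stiffness.flush()                      # persist the written slice to the file

# A second program (or a restart) can reopen the same file and read the data.
reopened = np.memmap("stiffness.dat", dtype=np.float64, mode="r", shape=(1000, 1000))
print(reopened[0, :10])
```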

19.
With the widespread adoption of computer technology and the Internet, database applications have developed rapidly. However, as computing systems are continually optimized, upgraded, replaced, and consolidated, data in heterogeneous environments is hard to reuse and forms "information islands" that hinder data sharing. This paper uses XML (Extensible Markup Language) as the carrier for heterogeneous data exchange and implements the exchange on the Java platform (J2EE) with the open-source Java library Dom4j. The approach effectively solves the problem of exchanging relational database data in heterogeneous environments and, while guaranteeing integrity, offers users a flexible, low-overhead scheme for managing heterogeneous data.
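The paper's implementation uses J2EE and Dom4j; the standard-library Python sketch below only illustrates the XML round trip it describes, with assumed table and column names.

```python
# Hedged sketch of XML-mediated exchange of relational rows between
# heterogeneous systems (Python stand-in for the paper's Java/Dom4j code).
import xml.etree.ElementTree as ET

def rows_to_xml(table, rows):
    root = ET.Element(table)
    for row in rows:
        rec = ET.SubElement(root, "record")
        for col, val in row.items():
            ET.SubElement(rec, col).text = str(val)
    return ET.tostring(root, encoding="unicode")

def xml_to_rows(xml_text):
    return [{child.tag: child.text for child in rec}
            for rec in ET.fromstring(xml_text)]

source_rows = [{"id": "1", "name": "pump"}, {"id": "2", "name": "valve"}]
payload = rows_to_xml("parts", source_rows)     # exported by system A
print(xml_to_rows(payload) == source_rows)      # imported unchanged by system B -> True
```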

20.
This paper describes a computer-cluster based parallel database management system (DBMS), InfiniteDB, developed by the authors. InfiniteDB aims at efficiently supporting data-intensive computing in response to the rapid growth in database size and the need for high-performance analysis of massive databases. It can be executed efficiently in computing systems composed of thousands of computers, such as cloud computing systems. It supports the parallelisms of intra-query, inter-query, intra-operation, inter-operation and pipelining. It provides effective strategies for managing massive databases, including multiple data declustering methods, declustering-aware algorithms for relational operations and other database operations, and an adaptive query optimization method. It also provides the functions of parallel data warehousing and data mining, a coordinator-wrapper mechanism to support the integration of heterogeneous information resources on the Internet, and fault-tolerant and resilient infrastructures. It has been used in many applications and has proved quite effective for data-intensive computing.
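As a toy illustration of one of the declustering strategies such a system relies on (not InfiniteDB's actual algorithms), the sketch below hash-partitions tuples across nodes and runs a selection fragment by fragment; the node count and schema are assumptions.

```python
# Hedged sketch of hash declustering and a per-fragment (parallelisable) scan.
NUM_NODES = 4

def decluster(rows, key):
    """Spread tuples over nodes by a hash of the partitioning key."""
    nodes = [[] for _ in range(NUM_NODES)]
    for row in rows:
        nodes[hash(row[key]) % NUM_NODES].append(row)
    return nodes

def parallel_select(nodes, predicate):
    # Each fragment can be scanned by a different worker; results are merged.
    return [row for fragment in nodes for row in fragment if predicate(row)]

orders = [{"order_id": i, "amount": i * 10} for i in range(1, 21)]
fragments = decluster(orders, "order_id")
print([len(f) for f in fragments])                        # tuples per node
print(parallel_select(fragments, lambda r: r["amount"] > 150))
```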
