Similar Documents (20 records)
1.
Data warehousing (DW) has emerged as one of the most powerful technology innovations in recent years to support organization-wide decision making and has become a key component of the information technology (IT) infrastructure. Proponents of DW claim that its infusion can dramatically enhance the ability of businesses to improve the access, distribution, and sharing of information and to provide managerial decision support for complex business questions. DW is also an enabling technology for data mining, customer-relationship management, and other business-intelligence applications. Although data warehouses have been around for quite some time, they have been plagued by high failure rates and limited spread or use. Drawing upon past research on the adoption and diffusion of innovations and on the implementation of information systems (IS), we examine the key organizational and innovation factors that influence the infusion (diffusion) of DW within organizations, and we also examine whether more extensive infusion leads to improved organizational outcomes. We conducted a field study in which two senior managers (one from IS and the other from a line function) from each of 117 companies participated, and we developed a structural model to test the research hypotheses. The results indicate that four of the seven variables examined (organizational support, quality of the project management process, compatibility, and complexity) significantly influence the degree of infusion of DW, and that infusion, in turn, significantly influences organization-level benefits and stakeholder satisfaction. The findings have interesting implications for both research and practice in IT and DW infusion, as well as for the organization-level impact of infusing enterprise-wide infrastructural and decision support technologies such as DW.

2.
Data warehouses (DW) are a key component of business intelligence and decision-making. In this paper, we present an approach that combines Grounded Theory and System Dynamics to develop causal loop diagrams/models for data warehouse quality and processes. We used the top 51 data warehousing academic papers to arrive at concepts and critical success factors. A simple data warehouse quality causal model was developed, along with Data Warehouse Project Initialization Loop Analysis, Data Source Availability & Monitoring Loop Analysis, and Data Model Quality and DBMS Quality Analysis models. Visualizing the cause-effect loops and how data warehouse variables are interrelated provides a clear understanding of the DW process. Key findings include that data quality and data model quality are more important than DBMS quality for ensuring data warehouse quality, and that the number of data entry errors and the level of data complexity can be major detriments to DW quality.
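As a rough illustration of the kind of causal relationships the abstract reports (not the authors' actual causal loop models), the following Python sketch simulates DW quality as a weighted combination in which data quality and data model quality dominate DBMS quality, with data entry errors eroding data quality over time; all weights, rates, and initial values are invented assumptions.

```python
# Minimal system-dynamics-style sketch (not the paper's model): data warehouse
# quality is driven more strongly by data quality and data model quality than
# by DBMS quality, and data entry errors erode data quality over time.
# All weights and initial values are illustrative assumptions.

def simulate(steps=10, dt=1.0):
    data_quality = 0.8        # stock: quality of source data (0..1)
    model_quality = 0.7       # stock: quality of the data model (0..1)
    dbms_quality = 0.9        # assumed constant in this sketch
    entry_error_rate = 0.05   # inflow of data entry errors per step
    cleansing_rate = 0.04     # corrective flow from data cleansing per step

    history = []
    for _ in range(steps):
        # Data entry errors reduce data quality; cleansing partially restores it.
        data_quality += (cleansing_rate - entry_error_rate) * dt
        data_quality = max(0.0, min(1.0, data_quality))

        # DW quality: weighted combination, weighting data and model quality
        # above DBMS quality (the key finding reported in the abstract).
        dw_quality = 0.45 * data_quality + 0.40 * model_quality + 0.15 * dbms_quality
        history.append(round(dw_quality, 3))
    return history

if __name__ == "__main__":
    print(simulate())
```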

3.
Application of data warehousing to inventory control in continuous-production enterprises
Based on an analysis of the characteristics of inventory control in continuous-production enterprises, this paper points out the limitations of relying solely on traditional transaction-oriented management information systems and decision support systems (DSS) for this kind of inventory-control decision analysis, argues for the necessity of adopting data warehouse technology, analyzes the shortcomings of traditional DSS and the support that data warehouse (DW) technology provides for DSS, and designs a scheme that uses DW technology together with data warehouse tools such as data mining (DM) and online analytical processing (OLAP) to provide decision support for inventory control.

4.
Data warehouse technology integrates data from different sources into a data warehouse, providing an integrated data environment for decision support systems. Taking the design of a data warehouse for the temporary resident population as an example, this paper describes the basic design process of a data warehouse, covering data source analysis, logical design, data loading, model selection, and multidimensional data presentation.

5.
Data warehousing and decision support for inventory control
Starting from an analysis showing that traditional decision support systems can no longer meet enterprises' current decision-support needs, this paper proposes using data warehouse (DW), online analytical processing (OLAP), and data mining technologies to support enterprise decision making, designs a scheme that uses DW technology and data warehouse tools such as OLAP and data mining (DM) to provide decision support for inventory control, and focuses on how OLAP and data mining provide decision support for enterprise inventory control.
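As a small, purely illustrative sketch of the kind of analysis such a scheme could deliver (not the paper's design), the following Python code rolls up invented inventory movements OLAP-style and derives a simple statistics-based reorder point; the table layout, service factor, and figures are all assumptions.

```python
# Illustrative sketch only: an OLAP-style roll-up of inventory movements plus a
# simple statistics-based reorder point, the kind of analysis the abstract says
# DW/OLAP/DM tools can provide. Table layout and figures are invented.
import pandas as pd

movements = pd.DataFrame({
    "item":    ["A", "A", "A", "B", "B", "B"],
    "month":   ["2024-01", "2024-02", "2024-03"] * 2,
    "issued":  [120, 140, 130, 60, 55, 70],    # quantity consumed
    "on_hand": [400, 380, 350, 150, 140, 120]  # closing stock
})

# OLAP-style aggregation: demand statistics per item across the time dimension.
stats = movements.groupby("item")["issued"].agg(["mean", "std"])

# Simple reorder point: mean demand over the lead time plus safety stock.
lead_time_months = 1
service_factor = 1.65   # ~95% service level, an assumed parameter
stats["reorder_point"] = (stats["mean"] * lead_time_months
                          + service_factor * stats["std"])
print(stats.round(1))
```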

6.
Research and implementation of an engineering data management system based on data warehouse technology
Data produced by engineering tests are scattered in their organization and have complex, frequently changing schemas, so an engineering data management system must be able to manage the data in a unified way and offer good user-defined features and data extensibility. This paper describes the analysis and design of a data warehouse architecture for engineering test data, which was applied in a flight-test data management system and met its goals. It also proposes a method of using meta-objects to centrally manage and use data with different schemas.

7.
Gradient analysis is an important analytical task in data warehousing and online analytical processing and plays an important role in decision support. Motivated by the needs of practical applications, this paper proposes a novel key-gradient analysis method. Drawing on the counting-sort and partitioning strategies used in cube computation, and by extending supplementary paths and applying insertion sort, an efficient key-gradient analysis algorithm is implemented. Extensive experiments on simulated data demonstrate the efficiency and practicality of the algorithm.
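To make the task concrete, the following brute-force Python sketch shows what cube gradient analysis looks for, namely pairs of cells that differ in a single dimension value and whose measure changes sharply; it does not reproduce the paper's counting-sort/partitioning algorithm, and the cube, threshold, and data are invented.

```python
# Brute-force sketch of what cube gradient analysis computes: pairs of cube
# cells that differ in exactly one dimension value and whose measure changes
# by more than a threshold. The paper's counting-sort/partitioning algorithm
# is far more efficient; this only illustrates the task. Data are invented.
from itertools import combinations

# Cells of a tiny sales cube: (region, quarter) -> sales
cube = {
    ("north", "Q1"): 100, ("north", "Q2"): 180,
    ("south", "Q1"): 90,  ("south", "Q2"): 95,
}

def key_gradients(cells, threshold=0.5):
    """Yield cell pairs differing in one dimension with relative change > threshold."""
    for (k1, v1), (k2, v2) in combinations(cells.items(), 2):
        diffs = sum(a != b for a, b in zip(k1, k2))
        if diffs == 1 and v1 and abs(v2 - v1) / abs(v1) > threshold:
            yield k1, k2, round((v2 - v1) / v1, 2)

print(list(key_gradients(cube)))  # e.g. north sales grew 80% from Q1 to Q2
```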

8.
Information Systems, 1999, 24(2): 131-158
Relationships among different modeling perspectives have been systematically investigated focusing either on given notations (e.g. UML) or on domain reference models (e.g. ARIS/SAP). In contrast, many successful informal methods for business analysis and requirements engineering (e.g. JAD) emphasize team negotiation, goal orientation and flexibility of modeling notations. This paper addresses the question of how much formal and computerized support can be provided in such settings without destroying their creative tenor. Our solution is based on a novel modeling language, M-Telos, that integrates the adaptability and analysis advantages of the logic-based meta modeling language Telos with a module concept covering the structuring mechanisms of scalable software architectures. It comprises four components: (1) A modular conceptual modeling formalism organizes individual perspectives and their interrelationships. (2) Perspective schemata are linked to a conceptual meta meta model of shared domain terms, thus giving the architecture a semantic meaning and enabling adaptability and extensibility of the network of perspectives. (3) Inconsistency management across perspectives is handled in a goal-oriented manner, by formalizing analysis goals as meta rules which are automatically customized to perspective schemata. (4) Continuous incremental maintenance of inconsistency information is provided by exploiting recent view maintenance techniques from deductive databases. The approach has been implemented as an extension to the ConceptBase3 meta database management system and has been applied in a number of real-world requirements engineering projects.

9.
The increase in the number of companies seeking data warehousing solutions, in order to gain significant business advantages, has created the need for a decision-aid approach to choosing appropriate data warehouse (DW) systems. Owing to the vague concepts frequently represented in decision environments, we propose a fuzzy multi-criteria decision-making procedure to facilitate data warehouse system selection, with consideration given to both technical and managerial criteria. The procedure can systematically construct the objectives of DW system selection to support the business goals and requirements of an organization, and identify the appropriate attributes or criteria for evaluation. In the fuzzy-based method, the weight of each criterion and the rating of each alternative are described using linguistic terms, which can also be expressed as triangular fuzzy numbers. The fuzzy algorithm aggregates the decision-makers' preference ratings for the criteria and the suitability of the data warehouse alternatives with respect to the selection criteria, to calculate fuzzy appropriateness indices through which the most suitable data warehouse system is determined. A case study of a Bar Code Implementation Project for Agricultural Products in Taiwan was conducted to illustrate the method's effectiveness.
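To illustrate the mechanics, and not the paper's actual procedure or case-study values, the following Python sketch represents linguistic weights and ratings as triangular fuzzy numbers, aggregates them with an approximate fuzzy weighted sum, and defuzzifies by centroid to rank two hypothetical DW alternatives; the linguistic scale and all numbers are assumptions.

```python
# Minimal sketch of the fuzzy multi-criteria idea: linguistic weights and
# ratings as triangular fuzzy numbers (a, b, c), aggregated per alternative
# and defuzzified by centroid. The scale and numbers are illustrative, not
# the paper's case-study values.

# Linguistic scale mapped to triangular fuzzy numbers.
SCALE = {
    "low":    (0.0, 0.1, 0.3),
    "medium": (0.3, 0.5, 0.7),
    "high":   (0.7, 0.9, 1.0),
}

def tfn_mul(x, y):   # approximate product of two triangular fuzzy numbers
    return tuple(a * b for a, b in zip(x, y))

def tfn_add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def centroid(t):     # simple defuzzification
    return sum(t) / 3

def appropriateness(weights, ratings):
    """Fuzzy weighted sum of ratings for one alternative, defuzzified."""
    total = (0.0, 0.0, 0.0)
    for w, r in zip(weights, ratings):
        total = tfn_add(total, tfn_mul(SCALE[w], SCALE[r]))
    return centroid(total)

# Criteria weights and each DW alternative's ratings (hypothetical criteria:
# scalability, cost, vendor support).
weights = ["high", "medium", "high"]
alternatives = {
    "DW system A": ["high", "medium", "medium"],
    "DW system B": ["medium", "high", "high"],
}
for name, ratings in alternatives.items():
    print(name, round(appropriateness(weights, ratings), 3))
```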

10.
Reference models increase the efficiency of data warehouse projects by providing construction patterns. This paper presents an overview of existing applications of reference models for data warehousing, which shows that support for model alternatives during requirements definition is insufficient. Configurable reference models in particular provide an adequate solution for creating project-specific models. We therefore suggest extending data warehouse modeling techniques with configuration rules. The configuration of reference models is embedded in the data warehouse development process. Furthermore, supplementary operational instructions for reference model designers are outlined.

11.
Against the background of a bank credit evaluation management system designed by the author, this paper introduces data warehouse technology into the system's design in order to refine and remedy the original system's rough design of the database layer. Using data warehouse and data mining techniques, and targeting the business needs of the original bank credit evaluation management system, the paper proposes a new data warehouse solution and analyzes and discusses several key technical issues in building the bank credit evaluation data warehouse system.

12.
Traditional database technology is single-purpose: it has clear advantages for operational (transactional) processing but is not well suited to informational processing. On this basis, this paper analyzes the application of data warehousing and data mining in hospital information systems, mainly discussing the application architecture of decision support systems, data warehouses, online analytical processing, and data mining techniques, and analyzing the organizational structure of hospital information management departments, the composition of hospital information management systems, the platform system architecture, and the design of the database and data warehouse. The study offers useful reference value for the further development of hospital information management systems.

13.
Research and implementation of a massive information management system based on a multilevel storage architecture
It is very important for large-scale application systems such as global information systems, GIS, digital libraries, and data warehousing to effectively manage and manipulate large volumes of information (more than terabytes of data). One of the challenging problems is to study massive information management systems and related techniques. Since the large volumes of information are mainly stored on tertiary storage devices, the study of massive information management systems should focus on the following aspects: (1) the multilevel storage architecture, in which tertiary storage devices are the main data media and secondary storage devices serve as a data cache; (2) a scalable system architecture for the massive information management system; (3) retrieval processing techniques. This paper studies and discusses the above problems in depth.
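As a loose illustration of the first focus point, the secondary storage layer acting as a cache in front of tertiary storage, the following Python sketch wraps a slow (stand-in) tertiary store with a small LRU cache; the capacity, block identifiers, and fetch behavior are invented.

```python
# Sketch of the "secondary storage as cache for tertiary storage" idea from
# the abstract: reads go to a small LRU cache first and fall back to the slow
# tertiary device on a miss. Capacities and the fetch function are invented.
from collections import OrderedDict

class TertiaryStore:
    def fetch(self, block_id):
        # Stand-in for a slow tape/optical read.
        return f"data-for-{block_id}"

class SecondaryCache:
    def __init__(self, tertiary, capacity=4):
        self.tertiary = tertiary
        self.capacity = capacity
        self.blocks = OrderedDict()   # block_id -> data, kept in LRU order

    def read(self, block_id):
        if block_id in self.blocks:              # cache hit on secondary storage
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = self.tertiary.fetch(block_id)     # miss: go to tertiary storage
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:     # evict least recently used block
            self.blocks.popitem(last=False)
        return data

cache = SecondaryCache(TertiaryStore())
for b in [1, 2, 3, 1, 5, 6, 1]:
    cache.read(b)
print(list(cache.blocks))   # most recently used blocks kept on secondary storage
```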

14.
Among the key factors for the success of a metrics program are the regularity of metrics collection, a seamless and efficient data collection methodology, and the presence of non-intrusive automated data collection tools. This paper presents the software process data warehousing architecture SPDW+ as a solution to the frequent, seamless, and automated capturing of software quality metrics, and their integration in a central repository for a full range of analyses. The striking features of the SPDW+ ETL (data extraction, transformation, and loading) approach are that it addresses heterogeneity issues related to the software development context, it is automatable and non-intrusive, and it allows different capturing frequency and latency strategies, hence supporting both analysis and monitoring of software metrics. The paper also provides a reference framework that details three orthogonal dimensions for considering ETL issues in the software development process context, used to develop SPDW+ ETL. The advantages of SPDW+ are: (1) flexibility to meet the requirements of the frequent changes in SDP environments; (2) support for monitoring, which implies the execution of frequent and incremental loads; (3) automation of the complex and time-consuming task of capturing metrics, making it seamless; (4) freedom of choice regarding management models and support tools used in projects; and (5) cohesion and consistency of the information stored in the metrics repository which will be used to compare data of different projects. The paper presents the reference framework, illustrates the key role played by the metrics capturing process in a metrics program using a case study, and presents the striking features of SPDW+ and its ETL approach, as well as an evaluation based on a prototype implementation.
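As a highly simplified sketch of an incremental, non-intrusive metrics capture step (this is not SPDW+ or its ETL implementation), the following Python code extracts records newer than a load watermark from two heterogeneous sources, maps their differing field names onto one repository schema, and appends them to a central metrics repository; the sources, schemas, and watermark mechanism are all assumptions.

```python
# Toy incremental ETL step in the spirit of the approach described (this is
# not SPDW+ itself): extract metric records newer than a watermark, normalize
# heterogeneous field names, and load them into a central metrics repository.
# Source schemas, field names, and the watermark mechanism are assumptions.
from datetime import datetime

# Two heterogeneous project-tool sources with different field names.
source_a = [{"ts": "2024-03-01", "loc": 1200, "defects": 3}]
source_b = [{"date": "2024-03-02", "lines_of_code": 800, "bug_count": 1}]

def extract(source, since):
    # Pick whichever date field this source uses and keep only new records.
    date_key = "ts" if source and "ts" in source[0] else "date"
    return [r for r in source if datetime.fromisoformat(r[date_key]) > since]

def transform(record):
    # Map source-specific names onto a single repository schema.
    return {
        "date": record.get("ts") or record.get("date"),
        "loc": record.get("loc") or record.get("lines_of_code"),
        "defects": record.get("defects", record.get("bug_count", 0)),
    }

def load(repository, records):
    repository.extend(records)   # incremental append, preserving earlier loads

metrics_repository = []
watermark = datetime(2024, 2, 28)   # timestamp of the last successful load
for src in (source_a, source_b):
    load(metrics_repository, [transform(r) for r in extract(src, watermark)])
print(metrics_repository)
```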

15.
The insurance industry of Hong Kong has been experiencing steady growth in the last decade. One of the current problems in the industry is that, in general, insurance agent turnover is high. The selection of new agents is treated as a regular recruitment exercise. This study focuses on the characteristics of data warehousing and the appropriate data mining techniques that can be used to support agent selection in the insurance industry. We examine the application of three popular data mining methods – discriminant analysis, decision trees and artificial neural networks – incorporated with a data warehouse to the prediction of the length of service, sales premiums and persistence indices of insurance agents. An intelligent decision support system, namely Intelligent Agent Selection Assistant for Insurance, is presented, which will help insurance managers to select quality agents by using data mining in a data warehouse environment.
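As a minimal illustration of one of the three mining methods named (a decision tree), and not the study's actual models or warehouse data, the following scikit-learn sketch fits a small tree to invented applicant features and predicts a retention class for a hypothetical new applicant.

```python
# Minimal sketch of one of the three mining methods mentioned (a decision
# tree) applied to agent selection. Features, labels, and figures are invented
# stand-ins for the warehouse data used in the study.
from sklearn.tree import DecisionTreeClassifier

# Candidate features: [age, years_prior_sales_experience, education_level(0-2)]
X = [[24, 0, 1], [35, 5, 2], [29, 2, 1], [41, 8, 0], [23, 0, 0], [38, 6, 2]]
# Label: 1 if the agent stayed longer than 12 months, else 0 (illustrative).
y = [0, 1, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Score a new applicant (hypothetical values).
applicant = [[30, 3, 1]]
print("predicted retention class:", model.predict(applicant)[0])
```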

16.
Data are considered to be important organizational assets because of their assumed value, including their potential to improve the organizational decision-making processes. Such potential value, however, comes with various costs, including those of acquiring, storing, securing and maintaining the given assets at appropriate quality levels. Clearly, if these costs outweigh the value that results from using the data, it would be counterproductive to acquire, store, secure and maintain the data. Thus cost–benefit assessment is particularly important in data warehouse (DW) development; yet very few techniques are available for determining the value that the organization will derive from storing a particular data table and hence determining which data set should be loaded in the DW. This research seeks to address the issue of identifying the set of data with the potential for producing the greatest net value for the organization by offering a model that can be used to perform a cost–benefit analysis on the decision support views that the warehouse can support and by providing techniques for estimating the parameters necessary for this model.
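As a rough illustration of the cost-benefit screening the paper motivates (its actual model and parameter-estimation techniques are not reproduced), the following Python sketch computes a net value for a few candidate decision-support views and keeps only those worth loading; all benefit and cost figures are invented.

```python
# Illustrative net-value screening of candidate decision-support views, in the
# spirit of the cost-benefit model described. All benefit and cost figures are
# invented; the paper provides techniques for estimating such parameters.
views = [
    {"name": "sales_by_region", "annual_benefit": 120_000,
     "storage_cost": 8_000, "maintenance_cost": 15_000},
    {"name": "clickstream_raw", "annual_benefit": 30_000,
     "storage_cost": 40_000, "maintenance_cost": 25_000},
    {"name": "customer_churn", "annual_benefit": 90_000,
     "storage_cost": 5_000, "maintenance_cost": 10_000},
]

for v in views:
    v["net_value"] = v["annual_benefit"] - v["storage_cost"] - v["maintenance_cost"]

# Load only views whose expected value exceeds their cost, best first.
worth_loading = sorted((v for v in views if v["net_value"] > 0),
                       key=lambda v: v["net_value"], reverse=True)
for v in worth_loading:
    print(v["name"], v["net_value"])
```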

17.
In today's information-explosion Internet era, more and more enterprise decisions need to be made with reference to the enterprise's past business data, so a reliable decision support system (DSS) is needed. A common form of decision analysis in a DSS is built on a data warehouse and realized through online analytical processing (OLAP). This article mainly covers how data are organized in a DSS, the basics of data warehousing (DW), the basic architecture of OLAP, and how decision analysis is carried out. Taking the sales analysis and decision system of a large department store as a practical example, it analyzes how an enterprise links its business operations to market demand and builds a data warehouse system on that basis to support sound, well-founded decisions, so that the enterprise's business data can be analyzed more comprehensively and sales performance evaluated from different angles, providing management with more realistic, objective, and comprehensive reference data and raising the sales performance of the whole enterprise.
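As a tiny, purely illustrative example of evaluating sales "from different angles" (not the department store's actual system or data), the following pandas sketch rolls the same invented sales facts up by department and quarter and then by channel, as an OLAP front end would.

```python
# Sketch of the "evaluate sales from different angles" idea with a small
# pivot-table roll-up; the dimensions and figures are invented, not the
# department store's actual data warehouse.
import pandas as pd

sales = pd.DataFrame({
    "department": ["apparel", "apparel", "grocery", "grocery", "grocery"],
    "quarter":    ["Q1", "Q2", "Q1", "Q2", "Q2"],
    "channel":    ["store", "online", "store", "store", "online"],
    "revenue":    [100, 130, 220, 200, 60],
})

# Slice the same facts along different dimensions, as an OLAP tool would.
by_dept_quarter = sales.pivot_table(index="department", columns="quarter",
                                    values="revenue", aggfunc="sum")
by_channel = sales.groupby("channel")["revenue"].sum()

print(by_dept_quarter)
print(by_channel)
```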

18.
Data warehousing technology offers organizations the potential for much greater exploitation of informational assets. However, the evaluation of potential investments in this technology poses problems for organizations as traditional evaluation methods are constrained when dealing with strategic IT applications. Nevertheless, many organizations are procedurally obliged to use such methods for evaluating data warehousing investments. This paper identifies five problems with using such methods in these circumstances: evaluating intangible benefits; making the relationship between IT and profitability explicit; dealing with the vanishing status quo; dealing with the extended investment time frame; and evaluating infrastructural investments. The authors studied how four organizations in the UK and Ireland attempted to overcome these problems when introducing data warehousing, and propose a framework for evaluating data warehousing investments. This framework consists of a high‐level analysis of the economic environment and of the information intensity of the relationship between the organization and its customers. Based on the outcome of this analysis, the authors propose four factors that have to be managed during the evaluation process in order to ensure that the limitations of the traditional evaluation techniques do not adversely affect the evaluation process. These factors are: commitment and sponsorship; the approach to evaluation; the time scale of benefits; and the appraisal techniques used.

19.
The engineering of large scale facilities, such as dams, power stations, bridges, etc., involves the handling of large amounts of information. Managers of the design and construction process have to take on a wide range of roles to cope with it all. One important aspect of this information is that concerned with safety, risk and hazard management. This paper is divided into three sections, each covering different aspects of a common approach to this problem. The analysis of risk using traditional reliability techniques is not covered. The concern here is rather with the use of computers to support and inform the direct management of quality, safety and hazard and hence to indirectly control risk. Firstly, the approach based on the use of “Interacting Objects” will be outlined. This will be illustrated through the use of IT to support business processes in quality management. Product and process models will be compared. Safety, risk and hazard are part of quality. Secondly, the use of these objects in physical process simulation will be described. Here the motivation for the work is to begin to look at the implications for risk analysis of the sensitivity of the behaviour of simulated non-linear systems to initial conditions. Thirdly, the identification and management of “proneness to failure” in a project will be outlined. Here the problem is how to deal with the difficult interaction between technology and human and organisational factors.

20.
Existing electric power material warehouses are largely managed and operated manually; the work is complex, the management content diverse, and the volume of data large, making effective consolidation and fused analysis difficult. In addition, material distribution relies mainly on self-pickup by the requesting departments, which makes allocation difficult, and the lack of an information-sharing mechanism prevents timely quality-inspection analysis of materials within the prescribed period, leaving materials supply chain management insufficiently supported. This paper performs a fused analysis of multi-source warehouse event data covering storage, inspection, and distribution. First, data source entities are identified and parsed, and flawed data are repaired and organized based on a knowledge graph. Next, in line with the trend toward integrated warehouse operations, a multi-level inbound/outbound ratio control model is fused with warehouse operating data to dynamically balance and adjust warehouse inventory in real time. Finally, a real-time scheduling and distribution model is proposed that uses inbound and outbound data at each level for solution optimization. In trial use, the approach greatly improved the efficiency of warehouse information exchange, further improved logistics distribution efficiency, reduced management and operating costs, and enhanced competitiveness.
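As a toy illustration of the multi-level inbound/outbound ratio balancing idea (the knowledge-graph-based data repair and the real-time dispatch optimization described in the abstract are not reproduced), the following Python sketch flags each warehouse level whose inbound/outbound ratio falls outside an assumed band; the levels, ratios, and limits are invented.

```python
# Toy illustration of a multi-level inbound/outbound ratio check of the kind
# the abstract describes for balancing warehouse stock; ratios, limits, and
# stock figures are all assumptions.
warehouses = {
    "central":  {"inbound": 500, "outbound": 420, "stock": 3000},
    "regional": {"inbound": 200, "outbound": 260, "stock": 800},
    "site":     {"inbound": 90,  "outbound": 150, "stock": 120},
}
RATIO_BAND = (0.8, 1.2)   # acceptable inbound/outbound ratio per level (assumed)

for level, w in warehouses.items():
    ratio = w["inbound"] / w["outbound"]
    if ratio < RATIO_BAND[0]:
        action = "stock draining: schedule replenishment from the level above"
    elif ratio > RATIO_BAND[1]:
        action = "stock accumulating: slow inbound or push distribution downstream"
    else:
        action = "balanced"
    print(f"{level}: ratio={ratio:.2f} -> {action}")
```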
