Similar Documents (20 results)
1.
Every time an Internet user downloads a video, shares a picture, or sends an email, his or her device addresses a data center, and often several of them. These complex systems feed the web and all Internet applications with their computing power and information storage, but they are very energy hungry. The energy consumed by Information and Communication Technology (ICT) infrastructures currently exceeds 4% of worldwide consumption and is expected to double in the next few years. Data centers and communication networks are responsible for a large portion of ICT energy consumption, which has stimulated a research effort in recent years to reduce or mitigate their environmental impact. Most of the proposed approaches tackle the problem by separately optimizing the power consumption of the servers in data centers and of the network. However, the Cloud computing infrastructure of most providers, including traditional telcos that are extending their offer, is rapidly evolving toward geographically distributed data centers strongly integrated with the network interconnecting them. Distributed data centers not only bring services closer to users with better quality, but also provide opportunities to improve energy efficiency by exploiting the variation of electricity prices across time zones, locally generated green energy, and the storage systems that are becoming popular in energy networks. In this paper, we propose an energy-aware joint management framework for geo-distributed data centers and their interconnection network. The model is based on virtual machine migration and formulated as a mixed integer linear program. It can be solved with state-of-the-art solvers such as CPLEX in reasonable time. The proposed approach covers various aspects of Cloud computing systems and jointly manages the use of green and brown energy using energy storage technologies. The obtained results show that significant energy cost savings can be achieved compared to a baseline strategy in which data centers neither collaborate to reduce energy nor use power from renewable resources.
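The abstract does not reproduce the paper's MILP formulation, so the following is only a minimal sketch of the kind of joint placement model involved: a toy mixed integer linear program, written with the PuLP library, that assigns virtual machines to geo-distributed data centers so as to minimize brown-energy cost after locally generated green energy is used. All site names, prices, and power figures are invented for illustration.

```python
# Toy MILP sketch (not the paper's model): assign VMs to geo-distributed
# data centers to minimize brown-energy cost, given per-site electricity
# prices and a limited budget of locally generated green energy.
# Requires: pip install pulp
import pulp

vms = ["vm1", "vm2", "vm3", "vm4"]
sites = ["dc_eu", "dc_us", "dc_asia"]
power_kw = {"vm1": 0.4, "vm2": 0.6, "vm3": 0.3, "vm4": 0.5}   # per-VM draw
price = {"dc_eu": 0.25, "dc_us": 0.12, "dc_asia": 0.18}       # $/kWh (brown)
green_kw = {"dc_eu": 0.5, "dc_us": 0.2, "dc_asia": 0.7}       # free green power
capacity_kw = {s: 1.2 for s in sites}

prob = pulp.LpProblem("geo_vm_placement", pulp.LpMinimize)
# x[v][s] = 1 if VM v runs at site s; brown[s] = brown power drawn at s
x = pulp.LpVariable.dicts("x", (vms, sites), cat="Binary")
brown = pulp.LpVariable.dicts("brown", sites, lowBound=0)

prob += pulp.lpSum(price[s] * brown[s] for s in sites)         # cost objective
for v in vms:                                                  # each VM placed once
    prob += pulp.lpSum(x[v][s] for s in sites) == 1
for s in sites:
    load = pulp.lpSum(power_kw[v] * x[v][s] for v in vms)
    prob += load <= capacity_kw[s]                             # site capacity
    prob += brown[s] >= load - green_kw[s]                     # green used first

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for v in vms:
    placed = [s for s in sites if pulp.value(x[v][s]) > 0.5]
    print(v, "->", placed[0])
```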

2.
In today’s digital information age, companies are struggling with an immense overload of mainly unstructured data. Reducing search times, fulfilling compliance requirements, and maintaining information quality are only three of the challenges that organisations from all industry sectors face. Enterprise content management (ECM) has emerged as a promising approach to these challenges. Yet there are still numerous obstacles to the implementation of ECM technologies, not least because the key challenges of ECM adoption processes are organisational rather than technological. In the present article we claim that considering an organisation’s business process structure is particularly crucial for ECM success. In response, we introduce a process-oriented conceptual framework that systematises the key steps of an ECM adoption. The paper suggests that ECM and business process management are two strongly related fields of research.

3.
Ensuring that organizational IT is in alignment with and provides support for an organization's business strategy is critical to business success. Despite this, business strategy and strategic alignment issues are all but ignored in the requirements engineering research literature. We present B-SCP, a requirements engineering framework for organizational IT that directly addresses an organization's business strategy and the alignment of IT requirements with that strategy. B-SCP integrates the three themes of strategy, context, and process using a requirements engineering notation for each theme. We demonstrate a means of cross-referencing and integrating the notations with each other, enabling explicit traceability between business processes and business strategy. In addition, we show a means of defining requirements problem scope as a Jackson problem diagram by applying a business modeling framework. Our approach is illustrated via application to an exemplar. The case example demonstrates the feasibility of B-SCP, and we present a comparison with other approaches.

4.
Data centers have become essential to modern society by catering to an increasing number of Internet users and technologies. This results in significant challenges in terms of escalating energy consumption. Research on green initiatives that reduce energy consumption while maintaining performance levels is urgently needed for data centers. However, energy efficiency and resource utilization are in general conflicting goals, so it is imperative to develop an application assignment strategy that maintains a trade-off between energy and quality of service. To address this problem, this paper presents a profile-based dynamic energy management framework for dynamically assigning applications to virtual machines (VMs). It estimates application finishing times and addresses real-time issues in application resource provisioning. The framework implements a dynamic assignment strategy by means of a repairing genetic algorithm (RGA), which employs realistic profiles of applications, virtual machines, and physical servers. The RGA is integrated into a three-layer energy management system incorporating VM placement to derive actual energy savings. Experiments demonstrate the effectiveness of the dynamic approach to application management: it produces up to 48% better energy savings than existing application assignment approaches under the investigated scenarios, and it outperforms the static application management approach with 10% higher resource utilization efficiency and a lower degree of imbalance.
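The abstract does not detail the RGA, so the sketch below only illustrates the general "repairing genetic algorithm" idea for application-to-VM assignment: infeasible offspring are repaired by a greedy pass that moves applications off overloaded VMs instead of being discarded. All loads, capacities, and the energy proxy in the fitness function are invented.

```python
# Generic repairing-GA sketch (illustrative, not the paper's RGA):
# chromosome[i] = index of the VM hosting application i. Infeasible
# offspring are repaired rather than discarded.
import random

APP_LOAD = [0.3, 0.5, 0.2, 0.6, 0.4, 0.3]   # hypothetical app demands
VM_CAP = [1.0, 1.0, 1.0]                     # hypothetical VM capacities

def usage(chrom):
    use = [0.0] * len(VM_CAP)
    for app, vm in enumerate(chrom):
        use[vm] += APP_LOAD[app]
    return use

def repair(chrom):
    """One greedy pass moving apps off overloaded VMs (may not always succeed)."""
    use = usage(chrom)
    for app in sorted(range(len(chrom)), key=lambda a: -APP_LOAD[a]):
        vm = chrom[app]
        if use[vm] > VM_CAP[vm]:
            for alt in range(len(VM_CAP)):
                if use[alt] + APP_LOAD[app] <= VM_CAP[alt]:
                    use[vm] -= APP_LOAD[app]
                    use[alt] += APP_LOAD[app]
                    chrom[app] = alt
                    break
    return chrom

def fitness(chrom):
    # Proxy for energy: fewer active VMs is better.
    return -len(set(chrom))

def evolve(pop_size=20, generations=50):
    pop = [repair([random.randrange(len(VM_CAP)) for _ in APP_LOAD])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(APP_LOAD))
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.2:                 # mutation
                child[random.randrange(len(child))] = random.randrange(len(VM_CAP))
            children.append(repair(child))            # repair, don't discard
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())
```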

5.
The product-service system (PSS) approach has emerged as a competitive strategy that drives manufacturers to offer a set of products and services as a whole. This research proposes a three-domain PSS conceptual design framework based on quality function deployment (QFD), a widely used design tool centered on customer requirements (CRs). Since both products and services influence customer satisfaction, they should be designed simultaneously, and identifying the critical parameters in these domains plays an important role. Engineering characteristics (ECs) in the functional domain include product-related ECs (P-ECs) and service-related ECs (S-ECs), identified by translating the CRs in the customer domain. Rating the importance of ECs has a great impact on achieving optimal PSS planning. The rating problem should consider not only the requirements of the customer but also those of the manufacturer. On the customer side, the analytic network process (ANP) is integrated into QFD to determine the initial importance weights of ECs, accounting for the complex dependency relationships between and within CRs, P-ECs, and S-ECs. To deal with vagueness, uncertainty, and diversity in decision-making, fuzzy set theory and a group decision-making technique are used in the supermatrix approach of ANP. On the manufacturer side, data envelopment analysis (DEA) is applied to adjust the initial weights of ECs, taking into account business competition and implementation difficulty. A case study demonstrates the effectiveness of the developed integrated approach for prioritizing ECs in PSS conceptual design.
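The paper's fuzzy group decision-making and DEA adjustment go beyond what an abstract can convey; the sketch below shows only the core ANP supermatrix step on which the initial EC weights rest: raising a column-stochastic weighted supermatrix to successive powers until its columns converge to the limiting priorities. The 4x4 matrix is invented for illustration.

```python
# ANP limit-supermatrix sketch (illustrative): the weighted supermatrix is
# column-stochastic; its limit powers yield the global priorities of the
# elements (here, two CRs and two ECs in a made-up network).
import numpy as np

# Hypothetical weighted supermatrix: entry [i, j] = influence of element i
# on element j; each column sums to 1.
W = np.array([
    [0.0, 0.2, 0.6, 0.5],
    [0.0, 0.0, 0.4, 0.5],
    [0.7, 0.5, 0.0, 0.0],
    [0.3, 0.3, 0.0, 0.0],
])

def limit_supermatrix(W, tol=1e-9, max_iter=10_000):
    """Raise W to successive powers until the columns stabilize."""
    M = W.copy()
    for _ in range(max_iter):
        M_next = M @ W
        if np.max(np.abs(M_next - M)) < tol:
            return M_next
        M = M_next
    return M

L = limit_supermatrix(W)
print("limiting priorities:", np.round(L[:, 0], 4))
```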

6.
Information quality is one of the key determinants of information system success, and when it is poor it can create a variety of risks in an organization. To manage resources for information quality improvement effectively, it is necessary to understand where, how, and how much information quality affects an organization's ability to deliver its objectives. Existing approaches have mostly focused on measuring information quality, but not adequately on the impact it causes. This paper presents a model that quantifies the business impact arising from poor information quality in an organization by using a risk-based approach, thereby addressing the inherent uncertainty in the relationship between information quality and organizational impact. The model can help information managers obtain quantitative figures with which to build reliable and convincing business cases for information quality improvement.
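The abstract does not give the model's equations. A common skeleton for such risk-based quantification, shown below as an assumption rather than the paper's actual model, is expected annual impact = P(defect) x exposure frequency x cost per failure, aggregated over business processes; all figures are invented.

```python
# Risk-style quantification sketch (illustrative, not the paper's model):
# expected annual impact of poor information quality per business process
# = P(defective record used) * uses per year * cost per failure.
processes = [
    # (process, P(defect), uses/year, cost per failure in $)
    ("invoicing",        0.02, 120_000,  15.0),
    ("customer_mailing", 0.05,  40_000,   4.0),
    ("credit_scoring",   0.01,  10_000, 250.0),
]

total = 0.0
for name, p_defect, uses, cost in processes:
    impact = p_defect * uses * cost        # expected annual loss
    total += impact
    print(f"{name:16s} expected impact: ${impact:,.0f}/year")
print(f"{'TOTAL':16s} expected impact: ${total:,.0f}/year")
```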

7.
In today’s dynamic business environments, organizations are under pressure to modernize their existing software systems in order to respond to changing business demands. Service-oriented architectures provide a composition framework for creating new business functionality from autonomous building blocks called services, enabling organizations to adapt quickly to changing conditions and requirements. The characteristics of services promise to leverage the value of enterprise systems through source code reuse: existing system components can serve as the foundation of newly created services. One problem to overcome, however, is the lack of business semantics to support the reuse of existing source code. Without sufficient semantic knowledge about the code in the context of business functionality, it is impossible to utilize source code components in service development. In this paper, we present an automated approach to enriching source code components with business semantics. Our approach is based on the idea that the gap between the two ends of an enterprise system, services as processes on one end and source code on the other, can be bridged via the similarity of the data definitions used at both ends. We evaluate our approach in the framework of a commercial enterprise systems application. Initial results indicate that the proposed approach is useful for annotating source code components with business-specific knowledge.
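As a toy illustration of the stated idea, bridging services and source code through similar data definitions, the sketch below scores a code component against a process step by the overlap of the field-name tokens each uses. The tokenization and Jaccard scoring are illustrative choices, not the paper's algorithm.

```python
# Data-definition similarity sketch (illustrative, not the paper's exact
# matcher): score a source-code component against a business-process step
# by the overlap of the data field names each one uses.
import re

def tokens(identifiers):
    """Split identifiers like 'custAddr' / 'cust_addr' into lowercase tokens."""
    out = set()
    for ident in identifiers:
        parts = re.sub(r"([a-z])([A-Z])", r"\1 \2", ident).replace("_", " ")
        out.update(p.lower() for p in parts.split())
    return out

def match_score(code_fields, process_fields):
    a, b = tokens(code_fields), tokens(process_fields)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical ends of the enterprise system.
code_component = ["custNo", "custAddr", "orderTotal"]
process_step = ["customer_no", "customer_address", "order_total", "tax"]
print(round(match_score(code_component, process_step), 3))  # -> 0.375
```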

8.
Content services such as content filtering and transcoding adapt content to meet system requirements, display capabilities, or user preferences. Data security in such a framework is an important problem, crucial for many Web applications. In this paper, we propose an approach that addresses data integrity and confidentiality in content adaptation and caching by intermediaries. Our approach permits multiple intermediaries to simultaneously perform content services on different portions of the data. Our protocol supports decentralized proxy and key management and flexible delegation of services. Our experimental results show that our approach is efficient and minimizes the amount of data transmitted across the network.

9.
To provide an effective service-oriented solution for a given business problem, it is necessary to explore all available options for providing the required functionality while ensuring flawless data transfer within the composed services. Existing service composition approaches fall short of this ideal because functional requirements and data mediation are not considered in a unified framework. We propose a service composition framework that addresses both aspects by integrating existing techniques from formal methods, service-oriented computing, and data mediation. Our framework guarantees the correct interaction of services in a composition by verifying certain behavioral constraints and by resolving data mismatches at the semantic, syntactic, and structural levels in a unified manner. A tableau-based algorithm generates and explores compositions in a goal-directed fashion, proving or disproving the existence of a service choreographer. Data models for detecting and resolving data mismatches are generated from WSDL documents and regular expressions. We also apply our framework to examples adapted from the existing service composition literature, which provides strong evidence that the approach can be applied effectively in practical settings.

10.
Catastrophic forgetting of learned knowledge and the distribution discrepancy between different data are two key problems in the fault diagnosis of rotating machinery. Existing intelligent fault diagnosis methods, however, generally tackle either the catastrophic forgetting problem or the domain adaptation problem alone. In complex industrial environments, both problems occur simultaneously, which is termed the continual transfer problem. It is therefore necessary to investigate a more practical and challenging task in which the number of fault categories constantly increases with industrial streaming data under varying operating conditions. To address the continual transfer problem, this study proposes a novel framework named deep continual transfer learning network with dynamic weight aggregation (DCTLN-DWA). The DWA module retains the diagnostic knowledge learned in previous phases while learning new knowledge from new samples, and an adversarial training strategy is applied to eliminate the data distribution discrepancy between the source and target domains. The effectiveness of the proposed framework is investigated on an automobile transmission dataset. The experimental results demonstrate that the framework can effectively handle industrial streaming data under different working conditions and can serve as a promising tool for solving actual industrial problems.
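The abstract does not specify the DWA module's internals; the sketch below is only an assumption about what "dynamic weight aggregation" for continual learning can look like: blending each layer's newly trained weights with the weights retained from the previous phase, with a per-layer retention coefficient, so earlier diagnostic knowledge is not wholly overwritten.

```python
# Generic dynamic-weight-aggregation sketch (an assumption about the idea,
# not the paper's DCTLN-DWA): after training on a new phase, blend each
# layer's new weights with the weights retained from the previous phase.
import numpy as np

def aggregate(old_params, new_params, lambdas):
    """Per-layer convex blend: kept = lambda * old + (1 - lambda) * new."""
    return {
        layer: lambdas[layer] * old_params[layer]
               + (1.0 - lambdas[layer]) * new_params[layer]
        for layer in old_params
    }

rng = np.random.default_rng(0)
old = {"conv1": rng.normal(size=(8, 3)), "fc": rng.normal(size=(4, 8))}
new = {"conv1": rng.normal(size=(8, 3)), "fc": rng.normal(size=(4, 8))}
# Hypothetical per-layer retention: keep early features, adapt the head.
lam = {"conv1": 0.8, "fc": 0.2}
merged = aggregate(old, new, lam)
print({k: v.shape for k, v in merged.items()})
```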

11.
Requirements engineering for e-business advantage
As a means of contributing to the achievement of business advantage for companies engaging in e-business, we propose a requirements engineering framework that incorporates a business strategy dimension. We employ Jackson’s Problem Frames approach, goal modeling, and business process modeling (BPM) to achieve this. Jackson’s context diagrams, used to represent business model context, are integrated with goal models to describe the requirements of the business strategy. We leverage the paradigm of projection in both approaches as a means of simultaneously decomposing both the requirement and context parts from an abstract business level down to concrete system requirements. Our approach maintains traceability to high-level business objectives via contribution links in the goal model, and we integrate role activity diagrams to describe business processes in detail where needed. The feasibility of our approach is shown through a well-known case study taken from the literature.

12.
Business intelligence (BI) offers tremendous potential for business organizations to gain insights into their day-to-day operations, as well as longer-term opportunities and threats. However, most of today’s BI tools are based on models that are too data-oriented from the point of view of business decision makers. We propose an enterprise modeling approach to bridge the business-level understanding of the enterprise with its representations in databases and data warehouses. The business intelligence model (BIM) offers concepts familiar to business decision making, such as goals, strategies, processes, situations, influences, and indicators. Unlike many enterprise models that are meant to derive, manage, or align with IT system implementations, BIM aims to help business users organize and make sense of the vast amounts of data about the enterprise and its external environment. In this paper, we present the core BIM concepts, focusing especially on reasoning about situations, influences, and indicators. Such reasoning supports strategic analysis of business objectives in light of current enterprise data, allowing analysts to explore scenarios and find alternative strategies. We describe how goal reasoning techniques from conceptual modeling and requirements engineering have been applied to BIM, and we provide techniques to support reasoning with indicators linked to business metrics, including cases where indicator specifications are incomplete. Evaluation of the proposed modeling and reasoning framework includes an ongoing prototype implementation as well as case studies.
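As a hedged illustration of BIM-style reasoning with indicators, the sketch below normalizes raw metrics against worst/target values and rolls weighted satisfaction scores up to a business goal. The goal structure, weights, and metrics are invented, and the scheme is only in the spirit of the paper, not its actual technique.

```python
# Indicator-rollup sketch in the spirit of BIM-style goal reasoning
# (structure, weights, and metrics are invented, not from the paper):
# normalize each indicator against target/threshold values, then
# propagate weighted satisfaction scores up to the business goals.
def normalize(value, worst, target):
    """Map a raw metric onto [0, 1]: 0 at 'worst', 1 at 'target'."""
    score = (value - worst) / (target - worst)
    return max(0.0, min(1.0, score))

indicators = {
    "avg_delivery_days": normalize(4.0, worst=10.0, target=2.0),
    "repeat_purchase_rate": normalize(0.35, worst=0.10, target=0.50),
    "support_ticket_backlog": normalize(120, worst=500, target=0),
}

goals = {  # goal: [(child indicator, weight)]
    "Customer satisfaction": [
        ("avg_delivery_days", 0.4),
        ("repeat_purchase_rate", 0.3),
        ("support_ticket_backlog", 0.3),
    ],
}

for goal, children in goals.items():
    score = sum(indicators[name] * w for name, w in children)
    print(f"{goal}: satisfaction {score:.2f}")
```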

13.
Information system development projects face tremendous challenges because of business changes and technology changes. Research has shown that software team flexibility has a positive effect on project outcomes, but specific requirements for enhancing flexibility are lacking. Drawing on the input-mediator-outcome (IMO) team effectiveness framework, this research investigates the contextual inputs and team processes that lead to development team flexibility and examines how well team flexibility improves project outcomes. A survey was developed around a model derived from the IMO framework, and one hundred and fourteen members of information systems development project teams in China responded. Partial least squares analysis was used to analyze the data. Results indicate that a participatory culture and cooperative norms are an effective foundation for improving the required processes, which include project coordination and knowledge-sharing activities. In turn, the improved process performance strengthens responses to changes in technology and the business climate, and this improved flexibility in meeting change is predictive of outcomes related to project performance and the quality of the final product.

14.
Information technology companies currently use data mining techniques in different areas with the goal of increasing the quality of decision-making and improving business performance. The study described in this paper uses a data mining approach to estimate the effort of a software development process, based on data collected in a Croatian information technology company. The study examined 27 software projects with a total effort exceeding 42 000 work hours. The presented model employs a modified Cross-Industry Standard Process for Data Mining (CRISP-DM), in which additional clustering of projects is performed prior to model creation. The results generated by the proposed approach generally had a smaller effort estimation error than those of human experts. The approach demonstrates that sound results can be obtained through the use of data mining in the studied area; it would therefore be wise to use such estimates as additional input in the decision-making process.
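A minimal sketch of the cluster-before-estimating pipeline the abstract describes, with scikit-learn as a stand-in for the paper's tooling and synthetic project profiles in place of the real Croatian dataset:

```python
# Cluster-then-estimate sketch (illustrative pipeline, not the paper's
# model): group past projects by profile, then fit one effort regressor
# per cluster; a new project is routed to its cluster's model.
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
# Hypothetical project features: [size_kloc, team_size]; target: work hours.
X = rng.uniform([5, 2], [100, 15], size=(27, 2))
y = 120 * X[:, 0] + 80 * X[:, 1] + rng.normal(0, 300, size=27)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {}
for c in range(3):
    mask = kmeans.labels_ == c
    models[c] = LinearRegression().fit(X[mask], y[mask])

new_project = np.array([[40.0, 6.0]])
cluster = int(kmeans.predict(new_project)[0])
estimate = float(models[cluster].predict(new_project)[0])
print(f"cluster {cluster}, estimated effort ~ {estimate:,.0f} work hours")
```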

15.
In this paper, we address the problem of domain adaptation for binary classification, which arises when the distributions generating the source learning data and the target test data differ. From a theoretical standpoint, a classifier has better generalization guarantees when the marginal input distributions of the two domains are close. Classical approaches mainly try to build new projection spaces or to reweight the source data with the objective of bringing the two distributions closer. We study an original direction based on a recent framework introduced by Balcan et al. that enables one to learn linear classifiers in an explicit projection space based on a similarity function, which need be neither symmetric nor positive semi-definite. We propose a well-founded general method for learning a low-error classifier on target data, made effective by an iterative procedure compatible with Balcan et al.'s framework. A reweighting scheme for the similarity function is then introduced in order to bring the distributions closer in a new projection space, with the hyperparameters and the reweighting quality controlled by a reverse validation procedure. Our approach is based on a linear programming formulation and shows good adaptation performance with very sparse models. We first consider the challenging unsupervised case where no target label is accessible, which is helpful when no manual annotation is possible, and we then propose a generalization to the semi-supervised case that allows a few target labels to be used when available. Finally, we evaluate our method on a synthetic problem and on a real image annotation task.
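Balcan et al.'s framework maps each example to its similarities with a set of landmark points and learns a linear separator in that explicit space. The sketch below shows only that mapping, with an L1-penalized logistic regression standing in for the paper's linear programming formulation to obtain sparsity; the data and similarity function are invented, and the reweighting and reverse validation steps are omitted.

```python
# Landmark-similarity sketch in the spirit of Balcan et al.'s framework
# (the paper's reweighting and LP formulation are omitted): map each point
# to its similarities with landmark examples, then fit a sparse linear
# classifier in that explicit projection space.
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def similarity(a, b, gamma=0.5):
    # Any bounded similarity works; it need not be symmetric or PSD.
    return np.exp(-gamma * np.linalg.norm(a - b))

# Toy source data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

landmarks = X[rng.choice(len(X), size=20, replace=False)]
phi = lambda pts: np.array([[similarity(x, l) for l in landmarks] for x in pts])

# L1 penalty stands in for the LP's sparsity objective.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(phi(X), y)
print("train accuracy:", clf.score(phi(X), y))
print("nonzero landmark weights:", int(np.count_nonzero(clf.coef_)))
```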

16.
We present some basic concepts of a modelling environment for data integration in business analytics. The main emphasis is on defining a process model for the different activities occurring in connection with data integration, which later allows assessment of the quality of the data. The model is based on a combination of knowledge and techniques from statistical metadata management and from workflow processes. The modelling concepts are presented in a problem-oriented formulation, and the approach is embedded into an open model framework that aims to provide a modelling platform for all kinds of models useful in business applications.

17.
Model-Driven Architecture (MDA) brings benefits to software development, among them the potential for connecting software models with the business domain. This paper focuses on the upstream, or Computation-Independent Model (CIM), phase of MDA. Our contention is that, whilst there are many models and notations available within the CIM phase, those that are currently popular and supported by the Object Management Group (OMG) may be neither the most useful notations for business analysts nor sufficient to fully support software requirements and specification. Therefore, with specific emphasis on the value of the Business Process Modelling Notation (BPMN) for business analysts, this paper provides an example of a typical CIM approach before describing an approach that incorporates specific requirements techniques. A framework extension to MDA is then introduced, which embeds requirements and specification within the CIM, thus further enhancing the utility of MDA by providing a more complete method for business analysis.

18.
Workflow revision is an important task in workflow reuse. In current semantic workflow revision based on streams (the reusable fragments of workflows), the retrieved workflow cannot be revised when the workflow stream repository contains no stream structurally similar to the streams in the retrieved semantic workflow. To address this situation, an improved scheme is proposed: revising semantic workflows based on the behavioral features of streams. The behavioral features of a stream are expressed with its set of task adjacency relations. For each stream in the retrieved semantic workflow that is inconsistent with the change request, an anchor-set data index and stream matching rules are used to filter the workflow stream repository into a candidate set of matching streams; the candidate set is then validated using stream behavioral similarity and the change request, yielding matching streams that agree most with the change request and are sufficiently similar; the change request is then updated, and each retrieved matching stream replaces its original stream to progressively repair the defects in the retrieved semantic workflow, finally yielding the revised semantic workflow. Experimental results show that, compared with existing workflow-stream-based revision algorithms, the proposed algorithm obtains sets of revised semantic workflows with better overall quality and better adaptability. The revision algorithm can provide business process managers with good-quality revised semantic workflows for reference when modeling workflows to meet new business requirements, which greatly helps to improve the efficiency and quality of workflow reuse.
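One simple instantiation of the behavioral representation described above, expressing a stream as its set of task adjacency relations and comparing streams by set overlap, is sketched below; the Jaccard index is an illustrative choice and may differ from the paper's similarity measure.

```python
# Stream behavioral-similarity sketch: represent each workflow stream by
# its set of task adjacency relations (ordered task pairs) and compare
# streams with a Jaccard index (illustrative choice, not necessarily the
# paper's exact measure).
def adjacency_relations(task_sequence):
    """Set of ordered (task, next_task) pairs occurring in the stream."""
    return set(zip(task_sequence, task_sequence[1:]))

def behavioral_similarity(stream_a, stream_b):
    ra, rb = adjacency_relations(stream_a), adjacency_relations(stream_b)
    if not ra and not rb:
        return 1.0
    return len(ra & rb) / len(ra | rb)

# Hypothetical streams from a retrieved workflow and the stream repository.
retrieved = ["check_order", "reserve_stock", "bill", "ship"]
candidate = ["check_order", "reserve_stock", "bill_express", "ship"]
print(behavioral_similarity(retrieved, candidate))  # -> 0.2
```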

19.
For over six years Raytheon's Missile Systems Division, Information Processing Systems Organization has used a successful approach in developing and maintaining business software. The approach centers on the fact that 60 percent of all business application designs and code are redundant and can be standardized and reused. This approach has resulted in significant gains in productivity and reliability and improved end-user relations, while providing better utilization of data processing personnel, primarily in the maintenance phase of the software life cycle.

20.
Storm supports high-performance real-time computation over streaming data and is a widely used stream computing framework. When developing Storm applications, developers must custom-build computation modules for different streaming-data requirements, which leads to a great deal of repeated work and makes it difficult to adapt to changing data requirements. How to rapidly develop Storm applications and configure the corresponding environment according to data requirements, such as streaming-data formats and computation methods, is therefore an important problem for improving the development efficiency of most stream computing applications. This paper proposes a method for describing streaming-data requirements, and designs and implements a Storm-based, data-requirement-driven auxiliary development framework for real-time stream processing applications, which automatically generates Storm real-time data processing applications satisfying the domain data requirements described by business staff. Experiments show that the framework can help people without Storm development skills, and even non-developers, quickly configure common Storm-based stream computing applications, and that it adapts reasonably well to common real-time processing requirements for streaming data.
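As a toy illustration of requirement-driven generation (the framework's actual requirement-description language is not given in the abstract), the sketch below expands a declarative data-requirement dictionary into a spout-to-bolt pipeline description. The spec format and component names are invented; a real generator would emit code against Storm's own APIs.

```python
# Toy requirement-driven generation sketch (spec format and component
# names are invented; a real system would target Storm's APIs): a
# declarative description of the streaming-data need is expanded into a
# spout -> bolt pipeline description.
spec = {
    "source": {"type": "kafka", "topic": "sensor_readings"},
    "fields": ["device_id", "ts", "value"],
    "operations": [
        {"op": "filter", "where": "value > 100"},
        {"op": "window_avg", "field": "value", "seconds": 60},
    ],
    "sink": {"type": "database", "table": "alerts"},
}

def generate_topology(spec):
    components = [f"spout:{spec['source']['type']}({spec['source']['topic']})"]
    for i, step in enumerate(spec["operations"]):
        args = ", ".join(f"{k}={v}" for k, v in step.items() if k != "op")
        components.append(f"bolt_{i}:{step['op']}({args})")
    components.append(f"sink:{spec['sink']['type']}({spec['sink']['table']})")
    return " -> ".join(components)

print(generate_topology(spec))
```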
