Similar articles
20 similar articles found.
1.
Increasingly, business processes are being controlled and/or monitored by information systems. As a result, many business processes leave their “footprints” in transactional information systems, i.e., business events are recorded in so-called event logs. Process mining aims at improving this by providing techniques and tools for discovering process, control, data, organizational, and social structures from event logs, i.e., the basic idea of process mining is to diagnose business processes by mining event logs for knowledge. In this paper we focus on the potential use of process mining for measuring business alignment, i.e., comparing the real behavior of an information system or its users with the intended or expected behavior. We identify two ways to create and/or maintain the fit between business processes and supporting information systems: Delta analysis and conformance testing. Delta analysis compares the discovered model (i.e., an abstraction derived from the actual process) with some predefined process model (e.g., the workflow model or reference model used to configure the system). Conformance testing attempts to quantify the “fit” between the event log and some predefined process model. In this paper, we show that Delta analysis and conformance testing can be used to analyze business alignment as long as the actual events are logged and users have some control over the process.
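To make the conformance-testing idea concrete, here is a minimal Python sketch (not the tool described in the paper): it scores each trace in an event log by the fraction of its direct-follows steps that a predefined model allows. The activity names and the toy model are invented for illustration.

```python
# Illustrative sketch only: a toy "fit" metric between an event log and a
# predefined model, here simplified to a set of allowed direct-follows pairs.
# The activity names and the model are hypothetical examples.

ALLOWED = {("register", "check"), ("check", "approve"), ("check", "reject")}

def trace_fitness(trace):
    """Fraction of consecutive steps in a trace that the model allows."""
    steps = list(zip(trace, trace[1:]))
    if not steps:
        return 1.0
    ok = sum(1 for step in steps if step in ALLOWED)
    return ok / len(steps)

event_log = [
    ["register", "check", "approve"],   # conforms
    ["register", "approve"],            # skips the mandated check
]

for trace in event_log:
    print(trace, round(trace_fitness(trace), 2))
```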
W. M. P. van der Aalst

2.
Given a model of the expected behavior of a business process and given an event log recording its observed behavior, the problem of business process conformance checking is that of identifying and describing the differences between the process model and the event log. A desirable feature of a conformance checking technique is that it should identify a minimal yet complete set of differences. Existing conformance checking techniques that fulfill this property exhibit limited scalability when confronted with large and complex process models and event logs. One reason for this limitation is that existing techniques compare each execution trace in the log against the process model separately, without reusing computations made for one trace when processing subsequent traces. Yet, the execution traces of a business process typically share common fragments (e.g., prefixes and suffixes). A second reason is that these techniques do not integrate mechanisms to tackle the combinatorial state explosion inherent to process models with high levels of concurrency. This paper presents two techniques that address these sources of inefficiency. The first technique starts by transforming the process model and the event log into two automata. These automata are then compared based on a synchronized product, which is computed using an A* search with an admissible heuristic function, thus guaranteeing that the resulting synchronized product captures all differences and is minimal in size. The synchronized product is then used to extract optimal (minimal-length) alignments between each trace of the log and the closest corresponding trace of the model. By representing the event log as a single automaton, this technique allows computations for shared prefixes and suffixes to be made only once. The second technique decomposes the process model into a set of automata, known as S-components, such that the product of these automata is equal to the automaton of the whole process model. A product automaton is computed for each S-component separately. The resulting product automata are then recomposed into a single product automaton capturing all the differences between the process model and the event log, but without minimality guarantees. An empirical evaluation using 40 real-life event logs shows that, used in tandem, the proposed techniques outperform state-of-the-art baselines in terms of execution times in a vast majority of cases, with improvements ranging from several-fold to one order of magnitude. Moreover, the decomposition-based technique leads to optimal trace alignments for the vast majority of datasets and close to optimal alignments for the remaining ones.
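As an illustration of the synchronized-product idea, the following Python sketch (a much-simplified stand-in for the paper's technique) computes an optimal alignment between one log trace and a small model automaton by searching the product state space with Dijkstra's algorithm, i.e., A* with a zero and therefore admissible heuristic. The model automaton, trace, and unit move costs are invented for illustration.

```python
# Minimal sketch, not the paper's implementation: optimal trace-to-model
# alignment via shortest path in a synchronized product, using Dijkstra
# (equivalently A* with a zero, hence admissible, heuristic).
import heapq

MODEL = {  # state -> {activity label: next state}; hypothetical example
    "s0": {"a": "s1"},
    "s1": {"b": "s2", "c": "s2"},
    "s2": {"d": "s3"},
    "s3": {},
}
ACCEPTING = {"s3"}

def align(trace, model=MODEL, accepting=ACCEPTING, start="s0"):
    """Return (cost, moves): minimal number of log-only/model-only moves."""
    frontier = [(0, 0, start, [])]   # product state: (trace position, model state)
    best = {}
    while frontier:
        cost, i, s, moves = heapq.heappop(frontier)
        if (i, s) in best and best[(i, s)] <= cost:
            continue
        best[(i, s)] = cost
        if i == len(trace) and s in accepting:
            return cost, moves
        if i < len(trace) and trace[i] in model[s]:      # synchronous move, cost 0
            heapq.heappush(frontier, (cost, i + 1, model[s][trace[i]],
                                      moves + [("sync", trace[i])]))
        if i < len(trace):                               # log-only move, cost 1
            heapq.heappush(frontier, (cost + 1, i + 1, s,
                                      moves + [("log", trace[i])]))
        for label, nxt in model[s].items():              # model-only move, cost 1
            heapq.heappush(frontier, (cost + 1, i, nxt,
                                      moves + [("model", label)]))
    return None

print(align(["a", "x", "d"]))  # cost 2: one log-only move for "x", one model-only move
```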

3.
Process analytical technologies (PAT) are increasingly being explored and adopted by pharmaceutical and industrial biotechnology companies for enhanced process understanding, Quality by Design (QbD) and Real Time Release (RTR). To achieve these aspirations, there is a critical need to extract the most information, and hence understanding, from complex and often ‘messy’ spectroscopic data. This contribution reviews a number of new approaches that have been shown to overcome the limitations of existing calibration/modelling methodologies and describes a practical system which would enhance the robustness of the closed loop process control system and overall ‘control strategy’. Application studies are described on the use of on-line spectroscopy for the monitoring and control of a downstream solvent recovery column, batch cooling crystallization and pharmaceutical fermentation.

4.
Service processes, for example in transportation, telecommunications or the health sector, are the backbone of today's economies. Conceptual models of service processes enable operational analysis that supports, e.g., resource provisioning or delay prediction. In the presence of event logs containing recorded traces of process execution, such operational models can be mined automatically. In this work, we target the analysis of resource-driven, scheduled processes based on event logs. We focus on processes for which there exists a pre-defined assignment of activity instances to resources that execute activities. Specifically, we approach the questions of conformance checking (how to assess the conformance of the schedule and the actual process execution) and performance improvement (how to improve the operational process performance). The first question is addressed based on a queueing network for both the schedule and the actual process execution. Based on these models, we detect operational deviations and then apply statistical inference and similarity measures to validate the scheduling assumptions, thereby identifying root-causes for these deviations. These results are the starting point for our technique to improve the operational performance. It suggests adaptations of the scheduling policy of the service process to decrease the tardiness (non-punctuality) and lower the flow time. We demonstrate the value of our approach based on a real-world dataset comprising clinical pathways of an outpatient clinic that have been recorded by a real-time location system (RTLS). Our results indicate that the presented technique enables localization of operational bottlenecks along with their root-causes, while our improvement technique yields a decrease in median tardiness and flow time by more than 20%.
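The two performance measures can be computed directly from paired schedule and execution records. The sketch below uses invented timestamps and takes tardiness as completion lateness relative to the scheduled end and flow time as the span from arrival to completion (one common reading of these terms, not necessarily the paper's exact definitions).

```python
# Illustrative sketch only: per-case tardiness and flow time from a schedule
# and the corresponding recorded execution. Timestamps (minutes from the
# start of the day) and case data are invented for illustration.
from statistics import median

cases = [
    # (scheduled_start, scheduled_end, arrival, actual_start, actual_end)
    (60, 90, 55, 75, 110),
    (90, 120, 85, 92, 118),
    (120, 150, 130, 140, 175),
]

tardiness = [max(0, actual_end - sched_end)
             for _, sched_end, _, _, actual_end in cases]
flow_time = [actual_end - arrival
             for _, _, arrival, _, actual_end in cases]

print("median tardiness:", median(tardiness))
print("median flow time:", median(flow_time))
```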

5.
Process mining can be seen as the “missing link” between data mining and business process management. The lion's share of process mining research has been devoted to the discovery of procedural process models from event logs. However, often there are predefined constraints that (partially) describe the normative or expected process, e.g., “activity A should be followed by B” or “activities A and B should never both be executed”. A collection of such constraints is called a declarative process model. Although it is possible to discover such models based on event data, this paper focuses on aligning event logs and predefined declarative process models. Discrepancies between log and model are mediated such that observed log traces are related to paths in the model. The resulting alignments provide sophisticated diagnostics that pinpoint where deviations occur and how severe they are. Moreover, selected parts of the declarative process model can be used to clean and repair the event log before applying other process mining techniques. Our alignment-based approach for preprocessing and conformance checking using declarative process models has been implemented in ProM and has been evaluated using both synthetic logs and real-life logs from a Dutch hospital.
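Constraint types like the two quoted above can be checked directly against log traces. The sketch below (not the ProM implementation) evaluates a response constraint and a not-coexistence constraint over a few invented traces.

```python
# Minimal sketch of checking two Declare-style constraints over log traces.
# Activity names and traces are invented for illustration.

def response(trace, a, b):
    """Every occurrence of a must eventually be followed by b."""
    return all(b in trace[i + 1:] for i, x in enumerate(trace) if x == a)

def not_coexistence(trace, a, b):
    """a and b may never both occur in the same trace."""
    return not (a in trace and b in trace)

log = [
    ["A", "C", "B"],   # satisfies both constraints below
    ["A", "C"],        # violates response(A, B)
    ["A", "B", "D"],   # violates not_coexistence(A, D)
]

for trace in log:
    print(trace,
          "response(A,B):", response(trace, "A", "B"),
          "not_coexistence(A,D):", not_coexistence(trace, "A", "D"))
```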

6.
WS-BPEL Business Processes and Access Control
梅彪, 姜新文, 吴恒. Computer Engineering (《计算机工程》), 2008, 34(19): 144-146
To address the security requirements of enterprise applications under service-oriented architectures, this paper analyzes the characteristics of WS-BPEL business processes and proposes an executor-oriented access control model. The model supports dynamic granting and revocation of permissions and introduces role and constraint mechanisms. On this basis, process activities are mapped to elements of the access control model, so that access control policies can be enforced during WS-BPEL process execution while keeping process definition and permission management separate.
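As a rough illustration of such a model (not the one defined in the paper), the sketch below maps process activities to required permissions, derives an executor's permissions from roles, and grants and revokes additional permissions dynamically. All role, permission, and activity names are hypothetical.

```python
# Illustrative sketch only: activities mapped to required permissions, with
# role-based permissions plus dynamic grant/revoke for an executor.
# Role, permission and activity names are invented for illustration.

ROLE_PERMISSIONS = {"clerk": {"invoke:submitOrder"},
                    "manager": {"invoke:approveOrder"}}
ACTIVITY_PERMISSION = {"SubmitOrder": "invoke:submitOrder",
                       "ApproveOrder": "invoke:approveOrder"}

class Executor:
    def __init__(self, name, roles):
        self.name, self.roles = name, set(roles)
        self.extra = set()          # permissions granted at run time

    def grant(self, perm): self.extra.add(perm)
    def revoke(self, perm): self.extra.discard(perm)

    def permissions(self):
        perms = set(self.extra)
        for role in self.roles:
            perms |= ROLE_PERMISSIONS.get(role, set())
        return perms

    def can_execute(self, activity):
        return ACTIVITY_PERMISSION[activity] in self.permissions()

alice = Executor("alice", ["clerk"])
print(alice.can_execute("ApproveOrder"))   # False
alice.grant("invoke:approveOrder")         # dynamic grant for one process run
print(alice.can_execute("ApproveOrder"))   # True
alice.revoke("invoke:approveOrder")        # revoked when the activity ends
```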

7.
杨春, 陈立行. Modern Computer (《现代计算机》), 2005, (5): 17-20, 38
The Business Process Execution Language for Web Services (BPEL4WS, or simply BPEL) is an XML-based workflow definition language that can serve as a foundation for enterprise workflow modeling and for implementing workflow management systems. This paper first introduces the basic concepts of workflow and BPEL4WS, then walks through a BPEL4WS process using a concrete example, and finally presents the implementation of a workflow management system based on BPEL4WS.

8.

Context

Requirements Engineering (RE) is a critical discipline mostly driven by uncertainty, since it is influenced by the customer domain or by the development process model used. Volatile project environments restrict the choice of methods and the decision about which artefacts to produce in RE.

Objective

We aim to investigate RE processes in successful project environments to discover characteristics and strategies that allow us to elaborate RE tailoring approaches in the future.

Method

We perform a field study on a set of projects at one company. First, we investigate by content analysis which RE artefacts were produced in each project and to what extent they were produced. Second, we perform qualitative analysis of semi-structured interviews to discover project parameters that relate to the produced artefacts. Third, we use cluster analysis to infer artefact patterns and probable RE execution strategies, which are the responses to specific project parameters. Fourth, we investigate by statistical tests the effort spent in each strategy in relation to the effort spent in change requests to evaluate the efficiency of execution strategies.

Results

We identified three artefact patterns and corresponding execution strategies. Each strategy covers different project parameters that impact the creation of certain artefacts. The effort analysis shows that the strategies have no significant differences in their effort and efficiency.

Conclusions

In contrast to our initial assumption that an increased effort in requirements engineering lowers the probability of change requests or project failures in general, our results show no statistically significant difference in efficiency between the strategies. In addition, it turned out that many parameters considered as the main causes for project failures can be successfully handled. Hence, practitioners can apply the artefact patterns and related project parameters to tailor the RE process according to individual project characteristics.

9.
A continuous evolution of business process parameters, constraints and needs, hardly foreseeable initially, requires continuous design support from business process management systems. In this article we are interested in developing a reactive design through process log analysis, ensuring process re-engineering and execution reliability. We propose to analyse workflow logs to discover workflow transactional behaviour and to subsequently improve and correct the related recovery mechanisms. Our approach starts by collecting workflow logs. Then, using statistical analysis techniques, we build an intermediate representation specifying elementary dependencies between activities. These dependencies are refined to mine the transactional workflow model. The analysis of the discrepancies between the discovered model and the initially designed model enables us to detect design gaps, concerning particularly the recovery mechanisms. Thus, based on this mining step, we apply a set of rules to the initially designed workflow to improve workflow reliability. The work presented in this paper was partially supported by the EU under the SUPER project (FP6-026850) and by the Lion project supported by Science Foundation Ireland under Grant No. SFI/02/CE1/I131.
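The intermediate representation of elementary dependencies can be illustrated with a small sketch: it counts direct-follows pairs in a log and derives a simple dependency measure of the kind used in heuristics-based mining. The log and activity names are invented, and the measure is only a stand-in for the statistical techniques the paper applies.

```python
# Illustrative sketch only: elementary dependencies between activities from
# direct-follows counts, with a simple dependency measure.
from collections import Counter

log = [
    ["start", "reserve", "pay", "ship"],
    ["start", "reserve", "cancel"],
    ["start", "reserve", "pay", "ship"],
]

follows = Counter((a, b) for trace in log for a, b in zip(trace, trace[1:]))

def dependency(a, b):
    """Value in (-1, 1); close to 1 means 'a is reliably followed by b'."""
    ab, ba = follows[(a, b)], follows[(b, a)]
    return (ab - ba) / (ab + ba + 1)

print(dependency("reserve", "pay"))   # about 0.67: frequent but not universal
print(dependency("pay", "reserve"))   # negative: never observed in this order
```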

10.
Mining business process variants: Challenges, scenarios, algorithms
In recent years, a new generation of process-aware information systems has emerged, which enables process model configurations at buildtime as well as process instance changes during runtime. Respective model adaptations result in a large number of model variants that are derived from the same process model, but slightly differ in structure. Generally, such model variants are expensive to configure and maintain. In this paper we address two scenarios for learning from process model adaptations and for discovering a reference model out of which the variants can be configured with minimum effort. The first one is characterized by a reference process model and a collection of related process variants. The goal is to improve the original reference process model such that it better fits the variant models. The second scenario comprises a collection of process variants, while the original reference model is unknown; i.e., the goal is to “merge” these variants into a new reference process model. We suggest two algorithms that are applicable in both scenarios, but have their pros and cons. We provide a systematic comparison of the two algorithms and further contrast them with conventional process mining techniques. Comparison results indicate good performance of our algorithms and also show that specific techniques are needed for learning from process configurations and adaptations. Finally, we provide results from a case study in the automotive industry in which we successfully applied our algorithms.
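As a deliberately simplified stand-in for the paper's algorithms, the sketch below represents each variant as an activity sequence and selects, among the variants themselves, the candidate whose total edit distance to all variants is smallest, using edit distance as a crude proxy for configuration effort. The variant data are invented.

```python
# Illustrative sketch only: pick a candidate reference sequence minimizing
# total edit distance to a set of invented process variants.

def edit_distance(s, t):
    d = [[i + j if 0 in (i, j) else 0 for j in range(len(t) + 1)]
         for i in range(len(s) + 1)]
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (s[i - 1] != t[j - 1]))
    return d[len(s)][len(t)]

variants = [
    ["A", "B", "C", "D"],
    ["A", "C", "B", "D"],
    ["A", "B", "D"],
]

reference = min(variants,
                key=lambda cand: sum(edit_distance(cand, v) for v in variants))
print(reference)
```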

11.
The emergence of several new computing applications, such as virtual reality and smart environments, has become possible due to the availability of a large pool of cloud resources and services. However, delay-sensitive applications pose strict delay requirements that turn this euphoria into a problem. The cloud computing paradigm is unable to meet the requirements of low latency, location awareness, and mobility support. In this context, Mobile Edge Computing (MEC) was introduced to bring cloud services and resources closer to the user by leveraging the available resources in the edge networks. In this paper, we present the definitions of MEC given by researchers. Further, the motivation for MEC is highlighted by discussing various applications. We also discuss the opportunities brought by MEC and highlight some of the important research challenges in the MEC environment. A brief overview of the accepted papers in our Special Issue on MEC is presented. Finally, we conclude by highlighting the key points of the paper.

12.
The development of estimation and control theories for quantum systems is a fundamental task for practical quantum technology. This vision article presents a brief introduction to challenging problems and potential opportunities in the emerging areas of quantum estimation, control and learning. The topics cover quantum state estimation, quantum parameter identification, quantum filtering, quantum open-loop control, quantum feedback control, machine learning for estimation and control of quantum systems, and quantum machine learning.

13.
Deletion, replacement and the mean-shift model are three approaches frequently used to detect influential observations and outliers. For the general linear model with a known covariance matrix, it is known that these three approaches lead to the same update formulae for the estimates of the regression coefficients. However, if the covariance matrix is indexed by unknown parameters which also need to be estimated, the situation is unclear. In this paper, we show, for a common subclass of linear mixed models, that the three approaches are no longer equivalent. For maximum likelihood estimation, replacement is equivalent to the mean-shift model, but neither is equivalent to case deletion. For restricted maximum likelihood estimation, the mean-shift model is equivalent to case deletion, but neither is equivalent to replacement. We also demonstrate with real data that misuse of replacement or the mean-shift model in place of case deletion can lead to incorrect results.
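For the known-covariance case mentioned at the beginning of the abstract, the equivalence of case deletion and the mean-shift model is easy to verify numerically. The sketch below does so for ordinary least squares with simulated data (the data and the choice of outlying observation are invented).

```python
# Illustrative sketch: for OLS, deleting observation i and adding a
# mean-shift indicator for observation i give identical estimates of the
# remaining regression coefficients.
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(size=n)
y[5] += 10.0                       # make observation 5 an outlier

i = 5
# Case deletion: refit without observation i.
keep = np.arange(n) != i
beta_del, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)

# Mean-shift model: add an indicator column for observation i.
d = np.zeros((n, 1)); d[i] = 1.0
beta_shift, *_ = np.linalg.lstsq(np.hstack([X, d]), y, rcond=None)

print(np.allclose(beta_del, beta_shift[:p]))  # True: identical estimates
```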

14.
Scalable, parallel computers: Alternatives, issues, and challenges
The 1990s will be the era of scalable computers. By giving up uniform memory access, computers can be built that scale over a range of several thousand. These provide high peak announced performance (PAP) by using powerful, distributed CMOS microprocessor-primary memory pairs interconnected by a high performance switch (network). The parameters that determine these structures and their utility include: whether hardware (a multiprocessor) or software (a multicomputer) is used to maintain a distributed, shared virtual memory (DSM) environment; the power of computing nodes (these improve at 60% per year); the size and scalability of the switch; distributability (the ability to connect to geographically dispersed computers including workstations); and all forms of software to exploit their inherent parallelism. To a great extent, viability is determined by a computer's generality, i.e., the ability to efficiently handle a range of work that requires varying processing (from serial to fully parallel), memory, and I/O resources. A taxonomy and evolutionary time line outlines the next decade of computer evolution, including distributed workstations, based on scalability and parallelism. Workstations can be the best scalables.

15.
Drawing on the strengths of data science and machine learning, process mining has recently emerged as an effective research approach for process management and its decision support. Bottleneck identification and analysis is a key problem in process mining and is considered a critical component of process improvement. While previous studies focusing on bottlenecks have been reported, visible gaps remain. Most of these studies considered bottleneck identification from local perspectives using quantitative metrics, such as machine operation and resource requirements, which cannot be applied to knowledge-intensive processes. Moreover, the root cause of such bottlenecks has not been given enough attention, which limits the impact of process optimisation. This paper proposes an approach that utilises fusion-based clustering and hyperbolic neural network-based knowledge graph embedding for bottleneck identification and root cause analysis. Firstly, a fusion-based clustering method is proposed to identify bottlenecks automatically from a global perspective, where the execution frequency of each stage at different periods is calculated to reveal the abnormal stage. Secondly, a process knowledge graph representing tasks, organisations, workforce and relation features as hierarchical and logical patterns is established. Finally, a hyperbolic cluster-based community detection mechanism, based on the process knowledge graph embedding trained by a hyperbolic neural network, is investigated to analyse the root cause from a process perspective. Experimental studies using real-world data collected from a multidisciplinary design project revealed the merits of the proposed approach. The execution of the proposed approach is not limited to event logs; it can automatically identify bottlenecks without local quantitative metrics and analyse the causes from a process perspective.
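A much-simplified stand-in for the frequency-based identification step is sketched below: it compares each stage's execution frequency per period against the cross-stage median for that period and flags stages that fall far below it. The stage names, counts, and the 0.5 threshold are invented for illustration and are not the paper's fusion-based clustering.

```python
# Illustrative sketch only: flag stages whose per-period execution frequency
# drops far below the cross-stage median for that period.
from statistics import median

# executions per period (e.g., per week) for each process stage
frequency = {
    "concept_design": [12, 11, 13, 12],
    "detail_design":  [10, 11, 11, 10],
    "design_review":  [12, 3, 2, 11],    # stalls in periods 1-2
}

periods = range(len(next(iter(frequency.values()))))
period_median = [median(counts[p] for counts in frequency.values())
                 for p in periods]

for stage, counts in frequency.items():
    lagging = [p for p in periods if counts[p] < 0.5 * period_median[p]]
    if lagging:
        print(f"{stage}: bottleneck candidate in periods {lagging}")
```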

16.
Information & Management, 2016, 53(2): 183-196
This study presents a new concept called information systems control alignment, which examines the degree to which the underlying characteristics of four main information systems (IS) control dimensions are mutually complementary. Using three case studies, our research uncovers two high-functioning control patterns – one with traditional characteristics and one with agile characteristics – that demonstrate positive alignment among the control environment, control mechanisms, socio-emotional behaviors, and execution of controls. By better understanding the circumstances that contribute to control conflicts, organizations can be increasingly mindful of cultivating a complementary relationship among the control dimensions when designing, implementing, monitoring and adjusting controls within IS processes.

17.
18.
In today's global digital environment, the Internet is readily accessible anytime and anywhere, and so is digital image manipulation software; thus, digital data can easily be tampered with without notice. Under these circumstances, integrity verification has become an important issue in the digital world. The aim of this paper is to present an in-depth review and analysis of methods for detecting image tampering. We introduce the notion of content-based image authentication and the features required to design an effective authentication scheme. We review major algorithms and frequently used security mechanisms found in the open literature. We also analyze and discuss the performance trade-offs and related security issues among existing technologies.
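Many authentication schemes build on hashing or signing image blocks so that verification can localize tampering. The sketch below illustrates that basic idea with raw pixel blocks; real content-based schemes use robust features rather than exact byte hashes, and the tiny images and block size here are invented for illustration.

```python
# Illustrative sketch only: block-wise hashing to localize image tampering.
import hashlib

def block_hashes(image, block=2):
    """Hash each block x block tile of a 2-D list of pixel values."""
    hashes = {}
    for r in range(0, len(image), block):
        for c in range(0, len(image[0]), block):
            tile = bytes(image[r + i][c + j]
                         for i in range(block) for j in range(block))
            hashes[(r, c)] = hashlib.sha256(tile).hexdigest()
    return hashes

original = [[10, 10, 20, 20],
            [10, 10, 20, 20],
            [30, 30, 40, 40],
            [30, 30, 40, 40]]
tampered = [row[:] for row in original]
tampered[3][3] = 99                      # simulate a local modification

reference = block_hashes(original)       # stored/signed at publication time
suspect = block_hashes(tampered)
print([pos for pos in reference if reference[pos] != suspect[pos]])  # [(2, 2)]
```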

19.
A business process (BP) consists of a set of activities which are performed in coordination in an organizational and technical environment and which jointly realize a business goal. In this context, BP management (BPM) can be seen as supporting BPs using methods, techniques, and software in order to design, enact, control, and analyze operational processes involving humans, organizations, applications, and other sources of information. Since the accurate management of BPs is receiving increasing attention, conformance checking, i.e., verifying whether the observed behavior matches a modelled behavior, is becoming more and more critical. Moreover, declarative languages are more frequently used to provide increased flexibility. However, whereas there exist solid conformance checking techniques for imperative models, little work has been conducted for declarative models. Furthermore, usually only the control-flow perspective is considered, although other perspectives (e.g., data) are crucial. In addition, most approaches exclusively check conformance without providing any related diagnostics. To enhance the accurate management of flexible BPs, this work presents a constraint-based approach for conformance checking over declarative BP models (including both control-flow and data perspectives). In addition, two constraint-based proposals for providing related diagnosis are detailed. To demonstrate both the effectiveness and the efficiency of the proposed approaches, different performance measures have been analyzed for a widely diversified set of test models of varying complexity.

20.
As applications of Vehicular Ad hoc Networks (VANETs) attract growing attention, researchers have studied VANET routing protocols in depth. This paper first summarizes the characteristics and applications of VANETs, then introduces the concepts of position-based, topology-based, and broadcast-based routing, and analyzes and summarizes the core ideas and characteristics of recent routing protocols. In addition, the routing protocols are comprehensively compared along six dimensions: application scenario, prerequisites, virtual device requirements, digital map requirements, route recovery strategy, and forwarding mode. Finally, future research directions for VANET routing are discussed.
