Similar Documents
1.
Existing techniques for automated discovery of process models from event logs generally produce flat process models. Thus, they fail to exploit the notion of subprocess, as well as the error handling and repetition constructs provided by contemporary process modeling notations such as the Business Process Model and Notation (BPMN). This paper presents a technique, namely BPMN Miner, for the automated discovery of hierarchical BPMN models containing interrupting and non-interrupting boundary events and activity markers. The technique employs approximate functional and inclusion dependency discovery techniques in order to elicit a process–subprocess hierarchy from the event log. Given this hierarchy and the projected logs associated with each node in the hierarchy, parent process and subprocess models are discovered using existing techniques for flat process model discovery. Finally, the resulting models and logs are heuristically analyzed in order to identify boundary events and markers. By employing approximate dependency discovery techniques, BPMN Miner is able to detect and filter out noise in the event log arising, for example, from data entry errors, missing event records, or infrequent behavior. Noise is detected during the construction of the subprocess hierarchy and filtered out via heuristics at the lowest possible level of granularity in the hierarchy. A validation with one synthetic and two real-life logs shows that process models derived by the proposed technique are more accurate and less complex than those derived with flat process discovery techniques. Meanwhile, a validation on a family of synthetically generated logs shows that the technique is resilient to varying levels of noise.
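
As a rough illustration of the dependency-discovery step, the sketch below checks approximate inclusion dependencies between event attributes; the flat-log encoding (a list of event dictionaries), the attribute names, and the threshold are illustrative assumptions, not BPMN Miner's actual implementation:

```python
# Sketch of approximate inclusion dependency discovery over event-log
# attributes, one ingredient of eliciting a process-subprocess hierarchy.

def inclusion_strength(log, child_attr, parent_attr):
    """Fraction of distinct child_attr values also occurring as parent_attr
    values; 1.0 means an exact inclusion dependency (child subset of parent)."""
    child_vals = {e.get(child_attr) for e in log} - {None}
    parent_vals = {e.get(parent_attr) for e in log} - {None}
    if not child_vals:
        return 0.0
    return len(child_vals & parent_vals) / len(child_vals)

def approximate_inclusions(log, attrs, threshold=0.95):
    """Attribute pairs whose inclusion strength clears the threshold; the slack
    below 1.0 is what lets the check tolerate noise such as data entry errors."""
    return [(a, b) for a in attrs for b in attrs
            if a != b and inclusion_strength(log, a, b) >= threshold]

log = [
    {"order": "o1", "suborder": "o1"},
    {"order": "o2", "suborder": "o2"},
    {"order": "o3", "suborder": "o3"},
    {"order": "o4", "suborder": "oX"},  # noise: a dangling reference
]
print(inclusion_strength(log, "suborder", "order"))  # 0.75
```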

2.
Process mining techniques have been used to analyze event logs from information systems in order to derive useful patterns. However, in the big data era, real-life event logs are huge, unstructured, and complex, so traditional process mining techniques have difficulties in the analysis of big logs. To reduce the complexity during the analysis, trace clustering can be used to group similar traces together and to mine more structured and simpler process models for each of the clusters locally. However, the high dimensionality of the feature space in which the traces are represented poses various problems for trace clustering. In this paper, we study the effect of applying dimensionality reduction (preprocessing) techniques on the performance of trace clustering. In our experimental study we use three popular feature transformation techniques: singular value decomposition (SVD), random projection (RP), and principal component analysis (PCA), together with a state-of-the-art trace clustering technique from process mining. The experimental results on a dataset constructed from a real event log recorded from patient treatment processes in a Dutch hospital show that dimensionality reduction can improve trace clustering performance with respect to the computation time and the average fitness of the mined local process models.
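
The preprocessing pipeline can be sketched with off-the-shelf components; the activity-frequency trace encoding, the cluster count, and the target dimensionality below are illustrative assumptions rather than the paper's exact experimental setup:

```python
# Sketch: dimensionality reduction as a preprocessing step for trace clustering.
import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.random_projection import GaussianRandomProjection
from sklearn.cluster import KMeans

def trace_profiles(traces):
    """Encode each trace (a sequence of activity labels) as a frequency vector."""
    activities = sorted({a for t in traces for a in t})
    index = {a: i for i, a in enumerate(activities)}
    X = np.zeros((len(traces), len(activities)))
    for row, t in enumerate(traces):
        for a in t:
            X[row, index[a]] += 1
    return X

traces = [["a", "b", "c"], ["a", "c", "b"], ["a", "d", "d", "e"], ["a", "d", "e"]]
X = trace_profiles(traces)

# Compare the three reduction techniques studied in the paper.
for reducer in (PCA(n_components=2), TruncatedSVD(n_components=2),
                GaussianRandomProjection(n_components=2)):
    X_red = reducer.fit_transform(X)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X_red)
    print(type(reducer).__name__, labels)
```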

3.
An automated process discovery technique generates a process model from an event log recording the execution of a business process. For it to be useful, the generated process model should be as simple as possible, while accurately capturing the behavior recorded in, and implied by, the event log. Most existing automated process discovery techniques generate flat process models. When confronted with large event logs, these approaches lead to overly complex or inaccurate process models. An alternative is to apply a divide-and-conquer approach by decomposing the process into stages and discovering one model per stage. It turns out, however, that existing divide-and-conquer process discovery approaches often produce less accurate models than flat discovery techniques when applied to real-life event logs. This article proposes an automated method to identify business process stages from an event log and an automated technique to discover process models based on a given stage-based process decomposition. An experimental evaluation shows that: (i) relative to existing automated process decomposition methods in the field of process mining, the proposed method leads to stage-based decompositions that are closer to decompositions derived by human experts; and (ii) the proposed stage-based process discovery technique outperforms existing flat and divide-and-conquer discovery techniques with respect to well-accepted measures of accuracy, and achieves comparable results in terms of model complexity.

4.
Process mining is the research domain dedicated to the a posteriori analysis of business process executions. The techniques developed within this research area are specifically designed to provide profound insight by exploiting the untapped reservoir of knowledge that resides within the event logs of information systems. Process discovery is one specific subdomain of process mining that entails the discovery of control-flow models from such event logs. Assessing the quality of discovered process models is an essential element, both for conducting process mining research and for the use of process mining in practice. In this paper, a multi-dimensional quality assessment is presented in order to comprehensively evaluate process discovery techniques. In contrast to previous studies, the major contribution of this paper is the use of eight real-life event logs. For instance, we show that evaluation based on real-life event logs differs significantly from the traditional approach of assessing process discovery techniques using artificial event logs. In addition, we provide an extensive overview of available process discovery techniques and describe how discovered process models can be assessed regarding both accuracy and comprehensibility. The results of our study indicate that the HeuristicsMiner algorithm is especially suited to real-life settings. However, it is also shown that, particularly for highly complex event logs, knowledge discovery from such data sets can become a major problem for traditional process discovery techniques.
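
To make the accuracy dimension concrete, the sketch below computes a deliberately simplified replay fitness against a footprint-style model; the model encoding (allowed directly-follows pairs with start and end activities) is an illustrative assumption, far simpler than the metrics surveyed in the paper:

```python
# Sketch of one accuracy dimension: the fraction of log traces a model replays.

def replay_fitness(model, traces):
    """model = (start_activities, end_activities, allowed_pairs)."""
    starts, ends, allowed = model
    fitting = 0
    for t in traces:
        ok = bool(t) and t[0] in starts and t[-1] in ends
        ok = ok and all((a, b) in allowed for a, b in zip(t, t[1:]))
        fitting += ok
    return fitting / len(traces) if traces else 0.0

model = ({"a"}, {"d"}, {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")})
traces = [["a", "b", "c", "d"], ["a", "c", "d"], ["a", "d"]]
print(replay_fitness(model, traces))  # 2 of 3 traces fit -> 0.666...
```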

5.
6.
Process discovery, one of the most challenging process analysis techniques, aims to uncover business process models from event logs. Many process discovery approaches have been proposed over the past twenty years; however, most of them have difficulties in handling multi-instance sub-processes. To address this challenge, we first introduce a multi-instance business process model (MBPM) to support the modeling of processes with multiple sub-process instantiations. The formal semantics of MBPMs are precisely defined using multi-instance Petri nets (MPNs), an extension of Petri nets with distinguishable tokens. Then, a novel process discovery technique is developed to support the discovery of MBPMs from event logs with sub-process multi-instantiation information. In addition, we propose to measure the quality of a discovered MBPM against the input event log by transforming the MBPM into a classical Petri net, such that existing quality metrics, e.g., fitness and precision, can be used. The proposed discovery approach is implemented as plugins in the ProM toolkit. Based on a cloud resource management case study, we compare our approach with state-of-the-art process discovery techniques. The results demonstrate that our approach outperforms existing approaches at discovering process models with multi-instance sub-processes.

7.
Declarative process models define the behaviour of business processes as a set of constraints. Declarative process discovery aims at inferring such constraints from event logs. Existing discovery techniques verify the satisfaction of candidate constraints over the log, but completely neglect their interactions. As a result, the inferred constraints can be mutually contradicting, and their interplay may lead to an inconsistent process model that does not accept any trace. In such a case, the output turns out to be unusable for enactment, simulation, or verification purposes. In addition, the discovered model generally contains redundancies that are due to complex interactions of several constraints and that cannot be cured using existing pruning approaches. We address these problems by proposing a technique that automatically resolves conflicts within the discovered models and is more powerful than existing pruning techniques at eliminating redundancies. First, we formally define the problems of constraint redundancy and conflict resolution. Second, we introduce techniques based on the notion of an automata-product monoid, which guarantees the consistency of the discovered models and, at the same time, keeps the most interesting constraints in the pruned set. The level of interestingness is dictated by user-specified prioritisation criteria. We evaluate the devised techniques on a set of real-world event logs.
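
The automata-product idea behind conflict detection can be sketched as follows: each constraint is a small DFA, and an empty product language signals a contradiction. The DFA encoding and the two toy constraints are illustrative assumptions, not the paper's construction:

```python
# Sketch: detect mutually contradicting constraints via an automata product.
from collections import deque

class DFA:
    def __init__(self, start, accepting, delta):
        self.start, self.accepting, self.delta = start, accepting, delta

def product_is_empty(dfas, alphabet):
    """BFS over the product state space; True iff no trace is jointly accepted."""
    start = tuple(d.start for d in dfas)
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if all(q in d.accepting for q, d in zip(state, dfas)):
            return False
        for a in alphabet:
            nxt = tuple(d.delta.get((q, a)) for q, d in zip(state, dfas))
            if None not in nxt and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# "a occurs at least once" vs. "a never occurs": jointly unsatisfiable.
at_least_one_a = DFA(0, {1}, {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 1})
no_a = DFA(0, {0}, {(0, "b"): 0})
print(product_is_empty([at_least_one_a, no_a], {"a", "b"}))  # True: conflict
```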

8.
Considerable amounts of data, including process events, are collected and stored by organisations nowadays. Discovering a process model from such event data and verifying the quality of discovered models are important steps in process mining. Many discovery techniques have been proposed, but none of them combines scalability with strong quality guarantees. We would like such techniques to handle billions of events or thousands of activities, to produce sound models (without deadlocks and other anomalies), and to guarantee that the underlying process can be rediscovered when sufficient information is available. In this paper, we introduce a framework for process discovery that ensures these properties while passing over the log only once, and we introduce three algorithms that use the framework. To measure the quality of discovered models for such large logs, we introduce a model–model and model–log comparison framework that applies a divide-and-conquer strategy to measure recall, fitness, and precision. We experimentally show that these discovery and measuring techniques sacrifice little compared to other algorithms, while gaining the ability to cope with event logs of 100,000,000 traces and processes of 10,000 activities on a standard computer.
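
A single pass over the log suffices to collect directly-follows counts, the kind of bookkeeping that makes this scalability possible; the event-stream format below is an illustrative assumption, not the framework's actual interface:

```python
# Sketch: single-pass accumulation of directly-follows counts over a stream.
from collections import Counter

def directly_follows_single_pass(event_stream):
    """event_stream yields (case_id, activity) pairs, possibly interleaved
    across cases; only the last activity per case is kept in memory."""
    last_activity = {}
    counts = Counter()
    for case_id, activity in event_stream:
        prev = last_activity.get(case_id)
        if prev is not None:
            counts[(prev, activity)] += 1
        last_activity[case_id] = activity
    return counts

stream = [("c1", "a"), ("c2", "a"), ("c1", "b"), ("c2", "c"), ("c1", "c")]
print(directly_follows_single_pass(stream))
# Counter({('a', 'b'): 1, ('b', 'c'): 1, ('a', 'c'): 1})
```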

9.
Business processes leave trails in a variety of data sources (e.g., audit trails, databases, and transaction logs). Hence, every process instance can be described by a trace, i.e., a sequence of events. Process mining techniques are able to extract knowledge from such traces and provide a welcome extension to the repertoire of business process analysis techniques. Recently, process mining techniques have been adopted in various commercial BPM systems (e.g., BPM|one, Futura Reflect, ARIS PPM, Fujitsu Interstage, Businesscape, Iontas PDF, and QPR PA). Unfortunately, traditional process discovery algorithms have problems dealing with less structured processes; the resulting models are difficult to comprehend or even misleading. Therefore, we propose a new approach based on trace alignment. The goal is to align traces in such a way that event logs can be explored easily. Trace alignment can be used to explore the process in the early stages of analysis and to answer specific questions in later stages. Hence, it complements existing process mining techniques focusing on discovery and conformance checking. The proposed techniques have been implemented as plugins in the ProM framework. We report the results of trace alignment on one synthetic and two real-life event logs, and show that trace alignment holds significant promise for process diagnostic efforts.
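
Pairwise alignment of two traces can be sketched with classic dynamic programming (Needleman-Wunsch style); the scoring values are illustrative assumptions, and the paper's technique goes further by aligning many traces at once:

```python
# Sketch: optimal pairwise alignment of two traces via dynamic programming.

def align(t1, t2, gap=-1, match=2, mismatch=-1):
    n, m = len(t1), len(t2)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if t1[i-1] == t2[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    # Trace back to recover the aligned sequences, using "-" as the gap symbol.
    a1, a2, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + (
                match if t1[i-1] == t2[j-1] else mismatch):
            a1.append(t1[i-1]); a2.append(t2[j-1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            a1.append(t1[i-1]); a2.append("-"); i -= 1
        else:
            a1.append("-"); a2.append(t2[j-1]); j -= 1
    return a1[::-1], a2[::-1]

print(align(list("abcd"), list("abd")))
# (['a', 'b', 'c', 'd'], ['a', 'b', '-', 'd'])
```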

10.
Process mining is a tool to extract non-trivial and useful information from process execution logs. These so-called event logs (also called audit trails or transaction logs) are the starting point for various discovery and analysis techniques that help to gain insight into certain characteristics of the process. In this paper we use a combination of process mining techniques to discover multiple perspectives of the process (namely the control-flow, data, performance, and resource perspectives) from historic data, and we integrate them into a comprehensive simulation model. This simulation model is represented as a colored Petri net (CPN) and can be used to analyze the process, e.g., to evaluate the performance of different alternative designs. The discovery of simulation models is explained using a running example. Moreover, the approach has been applied in two case studies; the workflows in two different municipalities in the Netherlands have been analyzed using a combination of process mining and simulation. Furthermore, the quality of the CPN models generated for the running example and the two case studies has been evaluated by comparing the original logs with the logs of the generated models.

11.
Process mining techniques allow for extracting information from event logs. For example, the audit trails of a workflow management system or the transaction logs of an enterprise resource planning system can be used to discover models describing processes, organizations, and products. Traditionally, process mining has been applied to structured processes. In this paper, we argue that process mining can also be applied to less structured processes supported by computer supported cooperative work (CSCW) systems. In addition, the ProM framework is described. Using ProM, a wide variety of process mining activities is supported, ranging from process discovery and verification to conformance checking and social network analysis.

12.
Given a model of the expected behavior of a business process and an event log recording its observed behavior, the problem of business process conformance checking is that of identifying and describing the differences between the process model and the event log. A desirable feature of a conformance checking technique is that it should identify a minimal yet complete set of differences. Existing conformance checking techniques that fulfill this property exhibit limited scalability when confronted with large and complex process models and event logs. One reason for this limitation is that existing techniques compare each execution trace in the log against the process model separately, without reusing computations made for one trace when processing subsequent traces. Yet, the execution traces of a business process typically share common fragments (e.g., prefixes and suffixes). A second reason is that these techniques do not integrate mechanisms to tackle the combinatorial state explosion inherent to process models with high levels of concurrency. This paper presents two techniques that address these sources of inefficiency. The first technique starts by transforming the process model and the event log into two automata. These automata are then compared based on a synchronized product, which is computed using an A* search with an admissible heuristic function, thus guaranteeing that the resulting synchronized product captures all differences and is minimal in size. The synchronized product is then used to extract optimal (minimal-length) alignments between each trace of the log and the closest corresponding trace of the model. By representing the event log as a single automaton, this technique allows computations for shared prefixes and suffixes to be made only once. The second technique decomposes the process model into a set of automata, known as S-components, such that the product of these automata is equal to the automaton of the whole process model. A product automaton is computed for each S-component separately. The resulting product automata are then recomposed into a single product automaton capturing all the differences between the process model and the event log, but without minimality guarantees. An empirical evaluation using 40 real-life event logs shows that, used in tandem, the proposed techniques outperform state-of-the-art baselines in terms of execution times in the vast majority of cases, with improvements ranging from several-fold to one order of magnitude. Moreover, the decomposition-based technique leads to optimal trace alignments for the vast majority of datasets and close-to-optimal alignments for the remaining ones.
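
The synchronized-product search can be sketched as a shortest-path computation; the sketch below uses Dijkstra-style search (A* with a zero heuristic), unit move costs, and a DFA model encoding, all simplifying assumptions relative to the paper's technique:

```python
# Sketch: minimal-cost alignment of a trace against a model automaton,
# searching the synchronized product of the two.
import heapq

def optimal_alignment_cost(trace, start, accepting, delta):
    """States are (model_state, trace_position); moves are synchronous (cost 0),
    log-only (skip a trace event, cost 1), or model-only (fire a model step, cost 1)."""
    frontier = [(0, start, 0)]
    best = {}
    while frontier:
        cost, q, i = heapq.heappop(frontier)
        if best.get((q, i), float("inf")) <= cost:
            continue
        best[(q, i)] = cost
        if q in accepting and i == len(trace):
            return cost
        if i < len(trace):
            a = trace[i]
            if (q, a) in delta:                              # synchronous move
                heapq.heappush(frontier, (cost, delta[(q, a)], i + 1))
            heapq.heappush(frontier, (cost + 1, q, i + 1))   # log-only move
        for (q2, a), q3 in delta.items():                    # model-only moves
            if q2 == q:
                heapq.heappush(frontier, (cost + 1, q3, i))
    return float("inf")

# Model accepting exactly the trace "a b c":
delta = {(0, "a"): 1, (1, "b"): 2, (2, "c"): 3}
print(optimal_alignment_cost(["a", "c"], 0, {3}, delta))  # 1 (model-only "b")
```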

13.
Process mining techniques relate observed behavior (i.e., event logs) to modeled behavior (e.g., a BPMN model or a Petri net). Process models can be discovered from event logs, and conformance checking techniques can be used to detect and diagnose differences between observed and modeled behavior. Existing process mining techniques can only uncover these differences; the actual repair of the model is left to the user and is not supported. In this paper, we investigate the problem of repairing a process model with respect to a log such that the resulting model can replay the log (i.e., conforms to it) and is as similar as possible to the original model. To solve the problem, we use an existing conformance checker that aligns the runs of the given process model to the traces in the log. Based on this information, we decompose the log into several sublogs of non-fitting subtraces. For each sublog, either a loop is discovered that can replay the sublog or a subprocess is derived that is then added to the original model at the appropriate location. The approach is implemented in the process mining toolkit ProM and has been validated on logs and models from several Dutch municipalities.

14.
Genetic process mining: an experimental evaluation
One of the aims of process mining is to retrieve a process model from an event log. The discovered models can be used as objective starting points during the deployment of process-aware information systems (Dumas et al., eds., Process-Aware Information Systems: Bridging People and Software Through Process Technology. Wiley, New York, 2005) and/or as a feedback mechanism to check prescribed models against enacted ones. However, current techniques have problems when mining processes that contain non-trivial constructs and/or when dealing with the presence of noise in the logs. Most of these problems arise because many current techniques are based on local information in the event log. To overcome these problems, we use genetic algorithms to mine process models. The main motivation is to benefit from the global search performed by this kind of algorithm. The non-trivial constructs are tackled by choosing an internal representation that supports them. The problem of noise is naturally tackled by the genetic algorithm because, by definition, such algorithms are robust to noise. The main challenge in a genetic approach is the definition of a good fitness measure, because it guides the global search performed by the genetic algorithm. This paper explains how the genetic algorithm works. Experiments with synthetic and real-life logs show that the fitness measure indeed leads to the mining of process models that are complete (can reproduce all the behavior in the log) and precise (do not allow for extra behavior that cannot be derived from the event log). The genetic algorithm is implemented as a plug-in in the ProM framework.
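
A fitness measure in this spirit rewards completeness and penalises extra behavior; the footprint-style candidate encoding and the weights below are illustrative assumptions, not the paper's actual measure:

```python
# Sketch: a completeness-plus-precision fitness for candidate process models.

def fitness(allowed_pairs, traces, precision_weight=0.5):
    log_pairs = {(a, b) for t in traces for a, b in zip(t, t[1:])}
    replayed = sum(
        all(p in allowed_pairs for p in zip(t, t[1:])) for t in traces)
    completeness = replayed / len(traces)      # can the log be reproduced?
    extra = len(allowed_pairs - log_pairs)     # behavior absent from the log
    precision = 1 / (1 + extra)
    return completeness + precision_weight * precision

traces = [["a", "b", "c"], ["a", "c"]]
tight = {("a", "b"), ("b", "c"), ("a", "c")}
loose = tight | {("c", "a"), ("b", "a")}
print(fitness(tight, traces) > fitness(loose, traces))  # True: tight model wins
```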

15.
Most common trace clustering methods rely on a single criterion, such as activity sequence relations, and ignore the behavioral relations between activities as well as time and resource attributes; for flexibly configured business process systems, this limits the quality of process mining. To address this problem, a multi-perspective trace clustering method combining activity behavioral relations and associated times is proposed. First, a control-flow encoding is constructed from the behavioral relations between activities; in parallel, on the time perspective, each trace is represented as a set of most recently associated activity pairs together with their time differences. Next, the trace similarities from the two perspectives are integrated by weighted aggregation, after which the clustering is adjusted. Finally, the proposed method is applied to a login-system scenario and compared against other clustering methods on five real-life logs. The experimental results show that the method can discover process scenarios from a complex login system, and its superiority is verified on three measures: fitness, precision, and F1-score.
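
The weighted aggregation of the two perspectives can be sketched as a convex combination of two trace distance matrices; the toy matrices, the weight, and the clustering call (scikit-learn >= 1.2 for the `metric` parameter) are illustrative assumptions, not the paper's encodings:

```python
# Sketch: combining control-flow and time perspectives by weighted aggregation.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def combined_distance(D_control, D_time, alpha=0.6):
    """Weighted aggregation of two normalised trace distance matrices."""
    return alpha * D_control + (1 - alpha) * D_time

D_control = np.array([[0.0, 0.1, 0.9], [0.1, 0.0, 0.8], [0.9, 0.8, 0.0]])
D_time    = np.array([[0.0, 0.2, 0.7], [0.2, 0.0, 0.9], [0.7, 0.9, 0.0]])
D = combined_distance(D_control, D_time)
labels = AgglomerativeClustering(
    n_clusters=2, metric="precomputed", linkage="average").fit_predict(D)
print(labels)  # traces 0 and 1 cluster together, trace 2 apart
```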

16.
Previous computational models of self-replication using cellular automata (CA) have been manually designed, a difficult and time-consuming process. We show here how genetic algorithms can be applied to automatically discover rules governing self-replicating structures. The main difficulty in this problem lies in the choice of the fitness evaluation technique. The solution we present is based on a multiobjective fitness function consisting of three independent measures: growth in the number of components, relative positioning of components, and the multiplicity of replicants. We introduce a new paradigm for CA models with weak rotational symmetry, called orientation-insensitive input, and hypothesize that it facilitates the discovery of self-replicating structures by reducing search-space sizes. Experimental yields of self-replicating structures discovered using our technique are shown to be statistically significant. The discovered self-replicating structures compare favorably in terms of simplicity with those generated manually in the past, but differ in unexpected ways. These results suggest that further exploration of the space of possible self-replicating structures will yield additional new structures. Furthermore, this research sheds light on the process of creating self-replicating structures, opening the door to future studies on the discovery of novel self-replicating molecules and self-replicating assemblers in nanotechnology.

17.
Process mining techniques aim at extracting knowledge from event logs. One of the most important tasks in process mining is process model discovery, in which an algorithm builds a process model from a given event log. In this paper, a new model to discover process models is proposed, based on a combination of a Genetic Algorithm (GA) and Simulated Annealing (SA). Genetic Algorithms have previously been used in this context, but previous approaches had drawbacks in fitness evaluation that misguided the algorithm; another problem was that the quality of the candidates in the population was low, which reduced the chance of finding a perfect answer. In this paper, a new fitness measure is proposed to evaluate process models based on event logs. Moreover, SA is used to improve the quality of the candidates in the population. It is demonstrated that the proposed model outperforms other approaches in the literature in terms of rediscovering process models, as a result of the better fitness evaluation and the increased quality of individuals. We conclude that using GA and SA in combination can be effective in this context.
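
The SA ingredient can be sketched as an acceptance rule that occasionally takes worse candidates while the temperature cools; the neighbour move (toggling one directly-follows edge) and the toy fitness below are illustrative assumptions:

```python
# Sketch: simulated annealing to improve a candidate process model.
import math
import random

def anneal(candidate, fitness, neighbour, temp=1.0, cooling=0.95, steps=200):
    current, current_fit = candidate, fitness(candidate)
    for _ in range(steps):
        nxt = neighbour(current)
        nxt_fit = fitness(nxt)
        # Accept improvements always; accept worse moves with shrinking probability.
        if nxt_fit >= current_fit or random.random() < math.exp(
                (nxt_fit - current_fit) / temp):
            current, current_fit = nxt, nxt_fit
        temp *= cooling
    return current, current_fit

# Toy usage: maximise overlap with a hidden "true" edge set.
true_edges = {("a", "b"), ("b", "c"), ("a", "c")}
universe = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "a"), ("b", "a")]
fit = lambda m: len(m & true_edges) - len(m - true_edges)
def neighbour(m):
    return m ^ {random.choice(universe)}  # toggle one edge

print(anneal(set(), fit, neighbour))
```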

18.
Considering the large amounts of data present in organizations today, the need to transform this data into useful information, and subsequently into knowledge, is gaining increasing attention. Process discovery is a technique to automatically discover process models from the data in event logs. Since process discovery is gaining attention among researchers as well as practitioners, the quality of the resulting process models must be assured. In this paper, the quality of the frequently used Heuristics Miner is improved, as anomalies were found concerning the validity and completeness of the resulting process models. For this purpose, a new artifact called the Updated Heuristics Miner was constructed, containing alterations to the tool and to the algorithm itself. Evaluations of this artifact led to the conclusion that the Updated Heuristics Miner indeed demonstrates higher validity and completeness. This study contributes to the body of knowledge, first by improving the quality of an often-used research instrument, and second by showing that there is a need for a systematic development and evaluation method for process discovery techniques.
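
The dependency measure that the Heuristics Miner family builds on can be sketched directly from directly-follows counts; the log encoding is an illustrative assumption, and the updated algorithm's specific fixes are beyond this sketch:

```python
# Sketch: the Heuristics Miner dependency measure between two activities.
from collections import Counter

def dependency_measure(traces, a, b):
    """|a > b| and |b > a| are directly-follows counts; the measure lies in (-1, 1)
    and approaches 1 for a strong dependency a -> b."""
    df = Counter((x, y) for t in traces for x, y in zip(t, t[1:]))
    ab, ba = df[(a, b)], df[(b, a)]
    if a == b:                      # length-one loop
        return ab / (ab + 1)
    return (ab - ba) / (ab + ba + 1)

traces = [["a", "b", "c"]] * 9 + [["a", "c", "b"]]
print(round(dependency_measure(traces, "a", "b"), 2))  # 9 vs 0 -> 0.9
print(round(dependency_measure(traces, "b", "c"), 2))  # 9 vs 1 -> 0.73
```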

19.
Analysing the performance of business processes is an important vehicle for improving their operation. Specifically, an accurate assessment of sojourn times and remaining times enables bottleneck analysis and resource planning. Recently, methods to create such performance models from event logs have been proposed. These works have several limitations, though: they either consider control-flow and performance information separately, or rely on an ad hoc selection of temporal relations between events. In this paper, we introduce the Temporal Network Representation (TNR) of a log. It is based on Allen's interval algebra, comprises the pairwise temporal relations between activity executions, and potentially incorporates the context in which these relations have been observed. We demonstrate the usefulness of the TNR for detecting (unrecorded) delays and for probabilistic mining of variants when modelling the performance of a process. In order to compare different models from the performance perspective, we further develop a framework for measuring performance fitness. Under this framework, TNR-based process discovery is guaranteed to dominate existing techniques in measuring the performance characteristics of a process. In addition, we show how contextual information, in terms of the congestion levels of the process, can be mined in order to further improve the capabilities for performance analysis. To illustrate the practical value of the proposed models, we evaluate our approaches on three real-life datasets. Our experiments show that the TNR yields an improvement in performance fitness over state-of-the-art algorithms, while congestion learning is able to accurately reconstruct congestion levels from event data.
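
Classifying the temporal relation between two activity executions under Allen's interval algebra can be sketched as below; only a few of the thirteen relations are spelled out, and the (start, end) interval encoding is an illustrative assumption:

```python
# Sketch: Allen interval relations between two activity executions.

def allen_relation(x, y):
    """x and y are (start, end) intervals with start < end."""
    xs, xe = x
    ys, ye = y
    if xe < ys:
        return "before"
    if xe == ys:
        return "meets"
    if xs < ys and ys < xe < ye:
        return "overlaps"
    if xs == ys and xe == ye:
        return "equal"
    if ys < xs and xe < ye:
        return "during"
    return "other"  # remaining Allen relations, omitted for brevity

# Activity A ends exactly when B starts; A overlaps C.
print(allen_relation((0, 5), (5, 9)))  # meets
print(allen_relation((0, 5), (3, 9)))  # overlaps
```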

20.

The problem of automatically discovering business process models from event logs has been intensely investigated in the past two decades, leading to a wide range of approaches that strike various trade-offs between accuracy, model complexity, and execution time. A few studies have suggested that the accuracy of automated process discovery approaches can be enhanced by means of metaheuristic optimization techniques. However, these studies have remained at the level of proposals without validation on real-life datasets or they have only considered one metaheuristic in isolation. This article presents a metaheuristic optimization framework for automated process discovery. The key idea of the framework is to construct a directly-follows graph (DFG) from the event log, to perturb this DFG so as to generate new candidate solutions, and to apply a DFG-based automated process discovery approach in order to derive a process model from each DFG. The framework can be instantiated by linking it to an automated process discovery approach, an optimization metaheuristic, and the quality measure to be optimized (e.g., fitness, precision, F-score). The article considers several instantiations of the framework corresponding to four optimization metaheuristics, three automated process discovery approaches (Inductive Miner—directly-follows, Fodina, and Split Miner), and one accuracy measure (Markovian F-score). These framework instances are compared using a set of 20 real-life event logs. The evaluation shows that metaheuristic optimization consistently yields visible improvements in F-score for all the three automated process discovery approaches, at the cost of execution times in the order of minutes, versus seconds for the baseline approaches.
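
The framework's loop can be sketched as: build a DFG from the log, perturb it to generate candidates, score each candidate, and keep the best. The hill-climbing acceptance and the stub scoring function below stand in for a real metaheuristic, discovery approach, and accuracy measure such as the Markovian F-score; all are illustrative assumptions:

```python
# Sketch: DFG construction and a perturbation-based optimisation loop.
import random
from collections import Counter

def build_dfg(traces):
    return Counter((a, b) for t in traces for a, b in zip(t, t[1:]))

def perturb(dfg):
    """Drop or reinforce one arc at random to obtain a neighbouring DFG."""
    candidate = Counter(dfg)
    arc = random.choice(list(candidate))
    if random.random() < 0.5 and len(candidate) > 1:
        del candidate[arc]
    else:
        candidate[arc] += 1
    return candidate

def optimise(traces, score, iterations=100):
    best = build_dfg(traces)
    best_score = score(best)
    for _ in range(iterations):
        cand = perturb(best)
        s = score(cand)
        if s > best_score:             # hill climbing as the simplest
            best, best_score = cand, s  # metaheuristic instantiation
    return best, best_score

traces = [["a", "b", "c"], ["a", "c"], ["a", "b", "b", "c"]]
dfg, s = optimise(traces, score=lambda g: -len(g))  # stub: prefer smaller DFGs
print(dict(dfg), s)
```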

