Similar Documents
20 similar documents found (search time: 62 ms)
1.
Conformance checking is the problem of computing how well a process model agrees with its actual execution. Runtime conformance checking, owing to its real-time feedback and promising applications, has become a new problem within conformance checking. Its core difficulty is computing, for every newly generated event, an optimal conformance result at low performance cost. Based on the structural information of the process model (the refined process structure tree, RPST), we propose a conformance monitoring tree (CMT), and on top of the CMT a dynamic programming algorithm that computes optimal conformance results. Experiments on three datasets show that, compared with existing related work, the proposed algorithm has a clear performance advantage.
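As a rough illustration of the runtime setting (not the paper's CMT or its dynamic programming algorithm), the sketch below abstracts the model as a DFA and checks each event the moment it arrives; all names and the toy model are hypothetical.

```python
# A minimal sketch of runtime (streaming) conformance checking: the model
# is abstracted as a DFA (state -> {activity: next_state}) and every new
# event is checked as it arrives. This is NOT the paper's CMT algorithm.

class StreamingMonitor:
    def __init__(self, dfa, start):
        self.dfa = dfa          # transition function of the model
        self.state = start      # current model state
        self.deviations = 0     # running count of non-conforming events

    def observe(self, activity):
        """Process one newly generated event; return the running cost."""
        next_state = self.dfa.get(self.state, {}).get(activity)
        if next_state is None:
            self.deviations += 1        # event not allowed here: deviation
        else:
            self.state = next_state     # event conforms: advance the model
        return self.deviations

# Toy model: register -> check -> (approve | reject)
dfa = {"s0": {"register": "s1"},
       "s1": {"check": "s2"},
       "s2": {"approve": "s3", "reject": "s3"}}
monitor = StreamingMonitor(dfa, "s0")
for event in ["register", "check", "pay", "approve"]:
    print(event, "->", monitor.observe(event))   # "pay" is a deviation
```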

2.
Process-aware information systems (PAIS) are systems relying on processes, which involve human and software resources to achieve concrete goals. There is a need to develop approaches for modeling, analyzing, improving, and monitoring processes within PAIS. These approaches include process mining techniques used to discover process models from event logs, find log and model deviations, and analyze performance characteristics of processes. The representational bias (a way to model processes) plays an important role in process mining. The BPMN 2.0 (Business Process Model and Notation) standard is widely used and makes it possible to build conventional and understandable process models. In addition to the flat control flow perspective, subprocesses, data flows, and resources can be integrated within one BPMN diagram. This makes BPMN very attractive for both process miners and business users, since the control flow perspective can be integrated with data and resource perspectives discovered from event logs. In this paper, we describe and justify robust control flow conversion algorithms, which provide the basis for more advanced BPMN-based discovery and conformance checking algorithms. Thus, on the basis of these conversion algorithms, low-level models (such as Petri nets, causal nets and process trees) discovered from event logs using existing approaches can be represented in terms of BPMN. Moreover, we establish behavioral relations between Petri nets and BPMN models and use them to adopt existing conformance checking and performance analysis techniques in order to visualize conformance and performance information within a BPMN diagram. We believe that the results presented in this paper can be used for a wide variety of BPMN mining and conformance checking algorithms. We also provide metrics for the processes discovered before and after the conversion to BPMN structures. Cases for which conversion algorithms produce more compact or more complicated BPMN models in comparison with the initial models are identified.
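A hedged sketch of the structural intuition behind such conversions (not the paper's actual algorithms): labeled transitions become tasks, a place with several output transitions induces an XOR gateway, and a transition with several input or output places induces an AND gateway. The net encoding below is a hypothetical simplification.

```python
# Minimal Petri-net-to-BPMN-elements sketch under the assumptions above.
def petri_to_bpmn_elements(places, transitions, arcs):
    """arcs: set of (source, target) pairs over places and transitions."""
    elements = []
    for t in transitions:
        elements.append(("task", t))
        # multiple input/output places correspond to AND-join/AND-split
        if sum(1 for (s, x) in arcs if x == t) > 1:
            elements.append(("and_join_before", t))
        if sum(1 for (x, d) in arcs if x == t) > 1:
            elements.append(("and_split_after", t))
    for p in places:
        # a place offering several output transitions is a choice (XOR)
        if sum(1 for (x, d) in arcs if x == p) > 1:
            elements.append(("xor_split", p))
        if sum(1 for (s, x) in arcs if x == p) > 1:
            elements.append(("xor_join", p))
    return elements

places = {"p0", "p1", "p2"}
transitions = {"a", "b", "c"}
arcs = {("p0", "a"), ("a", "p1"), ("p1", "b"), ("p1", "c"),
        ("b", "p2"), ("c", "p2")}
print(petri_to_bpmn_elements(places, transitions, arcs))
```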

3.
An exponential growth of event data can be witnessed across all industries. Devices connected to the internet (internet of things), social interaction, mobile computing, and cloud computing provide new sources of event data and this trend will continue. The omnipresence of large amounts of event data is an important enabler for process mining. Process mining techniques can be used to discover, monitor and improve real processes by extracting knowledge from observed behavior. However, unprecedented volumes of event data also provide new challenges and often state-of-the-art process mining techniques cannot cope. This paper focuses on “conformance checking in the large” and presents a novel decomposition technique that partitions larger process models and event logs into smaller parts that can be analyzed independently. The so-called Single-Entry Single-Exit (SESE) decomposition not only helps to speed up conformance checking, but also provides improved diagnostics. The analyst can zoom in on the problematic parts of the process. Importantly, the conditions under which the conformance of the whole can be assessed by verifying the conformance of the SESE parts are described, which enables the decomposition and distribution of large conformance checking problems. All the techniques have been implemented in ProM, and experimental results are provided.
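To illustrate the decomposition idea only (the actual SESE approach decomposes the Petri net itself), the sketch below projects each trace onto a fragment's activity alphabet and checks the fragments independently; fragment models are abstracted here as sets of allowed projected sequences.

```python
# Minimal sketch of decomposition-based conformance checking.
def project(trace, alphabet):
    return tuple(a for a in trace if a in alphabet)

def decomposed_fitness(log, fragments):
    """fragments: list of (alphabet, allowed_projected_sequences)."""
    results = []
    for alphabet, allowed in fragments:
        fitting = sum(1 for t in log if project(t, alphabet) in allowed)
        results.append(fitting / len(log))      # per-fragment diagnostics
    return results

log = [("a", "b", "c", "d"), ("a", "c", "b", "d"), ("a", "b", "d")]
fragments = [
    ({"a", "b"}, {("a", "b")}),                 # fragment 1: a then b
    ({"c", "d"}, {("c", "d")}),                 # fragment 2: c then d
]
print(decomposed_fitness(log, fragments))       # third trace misses "c"
```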

4.
Process mining is a family of techniques that aim at analyzing business process execution data recorded in event logs. Conformance checking is a branch of this discipline embracing approaches for verifying whether the behavior of a process, as recorded in a log, is in line with some expected behavior provided in the form of a process model. Recently, techniques for conformance checking based on declarative specifications have been developed. Such specifications are suitable for describing processes characterized by high variability. However, an open challenge in the context of conformance checking with declarative models is the capability of supporting multi-perspective specifications. This means that declarative models used for conformance checking should not only describe the process behavior from the control flow point of view, but also from other perspectives like data or time. In this paper, we close this gap by presenting an approach for conformance checking based on MP-Declare, a multi-perspective version of the declarative process modeling language Declare. The approach has been implemented in the process mining tool ProM and has been evaluated using artificial and real-life event logs.
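In the spirit of MP-Declare (though not its actual implementation), the sketch below checks a Declare "response" constraint, every activation of A must eventually be followed by B, extended with a data condition on the activating event. Event attributes and the example log are hypothetical.

```python
# Minimal multi-perspective "response" constraint check.
def mp_response(trace, a, b, data_cond):
    """Every event a whose payload satisfies data_cond must be
    followed (later in the trace) by some event b."""
    for i, (act, payload) in enumerate(trace):
        if act == a and data_cond(payload):
            if not any(act2 == b for act2, _ in trace[i + 1:]):
                return False            # activation without fulfilment
    return True

trace = [("submit", {"amount": 1500}),
         ("review", {}),
         ("submit", {"amount": 200})]
# high-value submissions must eventually be reviewed
print(mp_response(trace, "submit", "review", lambda p: p["amount"] > 1000))
# the second submission is low-value, so it never activates the constraint
```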

5.
The practical relevance of process mining is increasing as more and more event data become available. Process mining techniques aim to discover, monitor and improve real processes by extracting knowledge from event logs. The two most prominent process mining tasks are: (i) process discovery: learning a process model from example behavior recorded in an event log, and (ii) conformance checking: diagnosing and quantifying discrepancies between observed behavior and modeled behavior. The increasing volume of event data provides both opportunities and challenges for process mining. Existing process mining techniques have problems dealing with large event logs referring to many different activities. Therefore, we propose a generic approach to decompose process mining problems. The decomposition approach is generic and can be combined with different existing process discovery and conformance checking techniques. It is possible to split computationally challenging process mining problems into many smaller problems that can be analyzed easily and whose results can be combined into solutions for the original problems.

6.
Process mining techniques relate observed behavior (i.e., event logs) to modeled behavior (e.g., a BPMN model or a Petri net). Process models can be discovered from event logs and conformance checking techniques can be used to detect and diagnose differences between observed and modeled behavior. Existing process mining techniques can only uncover these differences, but the actual repair of the model is left to the user and is not supported. In this paper we investigate the problem of repairing a process model w.r.t. a log such that the resulting model can replay the log (i.e., conforms to it) and is as similar as possible to the original model. To solve the problem, we use an existing conformance checker that aligns the runs of the given process model to the traces in the log. Based on this information, we decompose the log into several sublogs of non-fitting subtraces. For each sublog, either a loop is discovered that can replay the sublog or a subprocess is derived that is then added to the original model at the appropriate location. The approach is implemented in the process mining toolkit ProM and has been validated on logs and models from several Dutch municipalities.

7.
Given a model of the expected behavior of a business process and given an event log recording its observed behavior, the problem of business process conformance checking is that of identifying and describing the differences between the process model and the event log. A desirable feature of a conformance checking technique is that it should identify a minimal yet complete set of differences. Existing conformance checking techniques that fulfill this property exhibit limited scalability when confronted with large and complex process models and event logs. One reason for this limitation is that existing techniques compare each execution trace in the log against the process model separately, without reusing computations made for one trace when processing subsequent traces. Yet, the execution traces of a business process typically share common fragments (e.g. prefixes and suffixes). A second reason is that these techniques do not integrate mechanisms to tackle the combinatorial state explosion inherent to process models with high levels of concurrency. This paper presents two techniques that address these sources of inefficiency. The first technique starts by transforming the process model and the event log into two automata. These automata are then compared based on a synchronized product, which is computed using an A* heuristic with an admissible heuristic function, thus guaranteeing that the resulting synchronized product captures all differences and is minimal in size. The synchronized product is then used to extract optimal (minimal-length) alignments between each trace of the log and the closest corresponding trace of the model. By representing the event log as a single automaton, this technique allows computations for shared prefixes and suffixes to be made only once. The second technique decomposes the process model into a set of automata, known as S-components, such that the product of these automata is equal to the automaton of the whole process model. A product automaton is computed for each S-component separately. The resulting product automata are then recomposed into a single product automaton capturing all the differences between the process model and the event log, but without minimality guarantees. An empirical evaluation using 40 real-life event logs shows that, used in tandem, the proposed techniques outperform state-of-the-art baselines in terms of execution times in a vast majority of cases, with improvements ranging from several-fold to one order of magnitude. Moreover, the decomposition-based technique leads to optimal trace alignments for the vast majority of datasets and close to optimal alignments for the remaining ones.
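The following sketch illustrates the core mechanism, A* search over a synchronized product of trace and model, on a much simplified encoding (the model as a DFA, one trace rather than a log automaton). Synchronous moves cost 0, log-only and model-only moves cost 1, and the trivial heuristic h = 0 (i.e., Dijkstra) is admissible, so the returned cost is minimal.

```python
import heapq

# Minimal optimal-alignment computation via A* with h = 0 (Dijkstra).
def align(trace, dfa, start, finals):
    frontier = [(0, 0, start)]          # (cost, trace position, model state)
    best = {}
    while frontier:
        cost, i, s = heapq.heappop(frontier)
        if best.get((i, s), float("inf")) <= cost:
            continue                     # already settled with lower cost
        best[(i, s)] = cost
        if i == len(trace) and s in finals:
            return cost                  # optimal alignment cost
        if i < len(trace):
            heapq.heappush(frontier, (cost + 1, i + 1, s))      # log move
            nxt = dfa.get(s, {}).get(trace[i])
            if nxt is not None:
                heapq.heappush(frontier, (cost, i + 1, nxt))    # sync move
        for nxt in dfa.get(s, {}).values():
            heapq.heappush(frontier, (cost + 1, i, nxt))        # model move
    return None

dfa = {"s0": {"a": "s1"}, "s1": {"b": "s2"}, "s2": {"c": "s3"}}
print(align(("a", "c"), dfa, "s0", {"s3"}))   # cost 1: model move on "b"
```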

8.
There seems to be a never ending stream of new process modeling notations. Some of these notations are foundational and have been around for decades (e.g., Petri nets). Other notations are vendor specific, incremental, or are only popular for a short while. Discussions on the various competing notations concealed the more important question “What makes a good process model?”. Fortunately, large scale experiences with process mining allow us to address this question. Process mining techniques can be used to extract knowledge from event data, discover models, align logs and models, measure conformance, diagnose bottlenecks, and predict future events. Today’s processes leave many trails in databases, audit trails, message logs, transaction logs, etc. Therefore, it makes sense to relate these event data to process models independent of their particular notation. Process models discovered based on the actual behavior tend to be very different from the process models made by humans. Moreover, conformance checking techniques often reveal important deviations between models and reality. The lessons that can be learned from process mining shed a new light on process model quality. This paper discusses the role of process models and lists seven problems related to process modeling. Based on our experiences in over 100 process mining projects, we discuss these problems. Moreover, we show that these problems can be addressed by exposing process models and modelers to event data.

9.
Business processes leave trails in a variety of data sources (e.g., audit trails, databases, and transaction logs). Hence, every process instance can be described by a trace, i.e., a sequence of events. Process mining techniques are able to extract knowledge from such traces and provide a welcome extension to the repertoire of business process analysis techniques. Recently, process mining techniques have been adopted in various commercial BPM systems (e.g., BPM|one, Futura Reflect, ARIS PPM, Fujitsu Interstage, Businesscape, Iontas PDF, and QPR PA). Unfortunately, traditional process discovery algorithms have problems dealing with less structured processes. The resulting models are difficult to comprehend or even misleading. Therefore, we propose a new approach based on trace alignment. The goal is to align traces in such a way that event logs can be explored easily. Trace alignment can be used to explore the process in the early stages of analysis and to answer specific questions in later stages of analysis. Hence, it complements existing process mining techniques focusing on discovery and conformance checking. The proposed techniques have been implemented as plugins in the ProM framework. We report the results of trace alignment on one synthetic and two real-life event logs, and show that trace alignment has significant promise in process diagnostic efforts.
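To show the kind of alignment this builds on, the sketch below aligns two traces with edit-distance-style dynamic programming and a traceback (mismatched columns cost as much as two gaps, so columns either match or pad with gaps); the paper's approach aligns many traces at once, so this pairwise version is only illustrative.

```python
# Minimal pairwise trace alignment via dynamic programming.
def align_pair(t1, t2, gap="-"):
    n, m = len(t1), len(t2)
    # d[i][j] = cost of aligning t1[:i] with t2[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = d[i - 1][j - 1] + (0 if t1[i - 1] == t2[j - 1] else 2)
            d[i][j] = min(diag, d[i - 1][j] + 1, d[i][j - 1] + 1)
    # traceback to recover the two gap-padded rows
    a1, a2, i, j = [], [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                d[i][j] == d[i-1][j-1] + (0 if t1[i-1] == t2[j-1] else 2)):
            a1.append(t1[i - 1]); a2.append(t2[j - 1]); i -= 1; j -= 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            a1.append(t1[i - 1]); a2.append(gap); i -= 1
        else:
            a1.append(gap); a2.append(t2[j - 1]); j -= 1
    return a1[::-1], a2[::-1]

r1, r2 = align_pair(["a", "b", "c", "d"], ["a", "c", "d", "e"])
print(r1)   # ['a', 'b', 'c', 'd', '-']
print(r2)   # ['a', '-', 'c', 'd', 'e']
```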

10.
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during the execution of a process. These event data can be used to analyze the process using process mining techniques to discover the real process, measure conformance to a given process model, or to enhance existing models with performance information. Mapping the produced events to activities of a given process model is essential for conformance checking, annotation and understanding of process mining results. In order to accomplish this mapping with low manual effort, we developed a semi-automatic approach that maps events to activities using insights from behavioral analysis and label analysis. The approach extracts Declare constraints from both the log and the model to build matching constraints to efficiently reduce the number of possible mappings. These mappings are further reduced using techniques from natural language processing, which allow for a matching based on labels and external knowledge sources. The evaluation with synthetic and real-life data demonstrates the effectiveness of the approach and its robustness toward non-conforming execution logs.
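A minimal stand-in for the label-analysis step (the paper combines this with behavioral Declare constraints and richer NLP): score candidate event-to-activity mappings by token overlap between labels. Event and activity names below are hypothetical.

```python
# Jaccard token similarity as a simple label-matching sketch.
def jaccard(label_a, label_b):
    a, b = set(label_a.lower().split()), set(label_b.lower().split())
    return len(a & b) / len(a | b)

def best_mapping(event_types, activities, threshold=0.3):
    mapping = {}
    for e in event_types:
        score, act = max((jaccard(e, act), act) for act in activities)
        if score >= threshold:           # discard implausible mappings
            mapping[e] = act
    return mapping

events = ["order received", "payment booked", "goods shipped out"]
activities = ["Receive Order", "Book Payment", "Ship Goods"]
print(best_mapping(events, activities))
# "goods shipped out" falls below the threshold because "shipped" and
# "ship" do not match literally; stemming would recover this pair.
```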

11.
Service processes, for example in transportation, telecommunications or the health sector, are the backbone of today's economies. Conceptual models of service processes enable operational analysis that supports, e.g., resource provisioning or delay prediction. In the presence of event logs containing recorded traces of process execution, such operational models can be mined automatically. In this work, we target the analysis of resource-driven, scheduled processes based on event logs. We focus on processes for which there exists a pre-defined assignment of activity instances to resources that execute activities. Specifically, we approach the questions of conformance checking (how to assess the conformance of the schedule and the actual process execution) and performance improvement (how to improve the operational process performance). The first question is addressed based on a queueing network for both the schedule and the actual process execution. Based on these models, we detect operational deviations and then apply statistical inference and similarity measures to validate the scheduling assumptions, thereby identifying root-causes for these deviations. These results are the starting point for our technique to improve the operational performance. It suggests adaptations of the scheduling policy of the service process to decrease the tardiness (non-punctuality) and lower the flow time. We demonstrate the value of our approach based on a real-world dataset comprising clinical pathways of an outpatient clinic that have been recorded by a real-time location system (RTLS). Our results indicate that the presented technique enables localization of operational bottlenecks along with their root-causes, while our improvement technique yields a decrease in median tardiness and flow time by more than 20%.
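The two reported performance measures are easy to state concretely. A minimal sketch, assuming per-case scheduled end times and actual start/end times (field names and sample data are hypothetical, and the paper's queueing-network models are out of scope here): tardiness is lateness of completion floored at zero, flow time is actual end minus actual start.

```python
from statistics import median

# Compute median tardiness and median flow time from schedule vs. execution.
def schedule_deviations(cases):
    tardiness = [max(0.0, c["actual_end"] - c["sched_end"]) for c in cases]
    flow_time = [c["actual_end"] - c["actual_start"] for c in cases]
    return median(tardiness), median(flow_time)

cases = [
    {"sched_end": 30, "actual_start": 0,  "actual_end": 45},
    {"sched_end": 60, "actual_start": 20, "actual_end": 55},
    {"sched_end": 40, "actual_start": 10, "actual_end": 80},
]
med_tard, med_flow = schedule_deviations(cases)
print(f"median tardiness: {med_tard}, median flow time: {med_flow}")
# tardiness: [15, 0, 40] -> median 15; flow: [45, 35, 70] -> median 45
```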

12.
Business processes described by formal or semi-formal models are realized via information systems. Event logs generated from these systems may be inconsistent with the existing models due to deficiencies in the design of the information system or to system upgrades. By comparing an existing process model with event logs, we can detect inconsistencies called deviations, verify and extend the business process model, and accordingly improve the business process. In this paper, some abnormal activities in business processes are formally defined based on Petri nets. An efficient approach to detect deviations between the process model and event logs is proposed. Then, business process models are revised when abnormal activities exist. A clinical process in a healthcare information system is used as a case study to illustrate our work. Experimental results show the effectiveness and efficiency of the proposed approach.
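One classic way to detect such deviations against a Petri net (a generic technique, not necessarily the paper's exact approach) is token replay: firing each logged activity consumes tokens from its input places and produces tokens on its output places; missing tokens (forced firings) and tokens left over beyond the expected final marking signal deviations. The net encoding below is a hypothetical simplification.

```python
# Minimal token-replay sketch for deviation detection.
def replay(trace, pre, post, initial, final):
    """pre/post: transition -> list of input/output places."""
    marking = dict(initial)
    missing = 0
    for t in trace:
        for p in pre[t]:
            if marking.get(p, 0) > 0:
                marking[p] -= 1
            else:
                missing += 1             # token had to be created: deviation
        for p in post[t]:
            marking[p] = marking.get(p, 0) + 1
    remaining = sum(n - final.get(p, 0) for p, n in marking.items()
                    if n > final.get(p, 0))
    return missing, remaining

pre  = {"a": ["p0"], "b": ["p1"], "c": ["p1"]}
post = {"a": ["p1"], "b": ["p2"], "c": ["p2"]}
print(replay(["a", "b", "c"], pre, post, {"p0": 1}, {"p2": 1}))
# "c" fires without a token in p1 -> 1 missing; 1 extra token left in p2
```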

13.
Conformance checking of artifact behavior is one of the key problems that must be addressed once a process has been modeled and put into operation. To address the fact that existing conformance checking techniques ignore data operations, this paper proposes a behavioral conformance checking method based on artifact snapshot sequences. First, the behavioral pattern of an artifact is defined using totally ordered artifact snapshot sequences; this pattern captures not only the execution trajectory of services but also the state changes of the artifact's data attribute assignments. Second, the artifact behavioral conformance checking problem is reduced to a language decidability problem and proven decidable; in the process, a Turing machine deciding the language is constructed as the conformance verification model, which checks both the conformance of the service path in the artifact's lifecycle and the correctness of the artifact's attribute assignments during that lifecycle. Furthermore, an exact method for computing the precision metric of behavioral conformance is given via an equivalent transformation of the service-snapshot association matrix. Finally, the proposed method is validated through case analysis and experiments.

14.
Sampling-Based Roadmap of Trees for Parallel Motion Planning
This paper shows how to effectively combine a sampling-based method primarily designed for multiple-query motion planning [probabilistic roadmap method (PRM)] with sampling-based tree methods primarily designed for single-query motion planning (expansive space trees, rapidly exploring random trees, and others) in a novel planning framework that can be efficiently parallelized. Our planner not only achieves a smooth spectrum between multiple-query and single-query planning, but it combines advantages of both. We present experiments which show that our planner is capable of solving problems that cannot be addressed efficiently with PRM or single-query planners. A key advantage of our planner is that it is significantly more decoupled than PRM and sampling-based tree planners. Exploiting this property, we designed and implemented a parallel version of our planner. Our experiments show that our planner distributes well and can easily solve high-dimensional problems that exhaust resources available to single machines and cannot be addressed with existing planners.
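For readers unfamiliar with the single-query tree planners the framework builds on, here is a plain 2-D RRT sketch (this is not the paper's roadmap-of-trees planner, and the always-true collision checker is a stand-in to be replaced for a real workspace).

```python
import math
import random

def collision_free(p, q):
    return True                          # stand-in: replace with a real check

# Minimal 2-D RRT: grow a tree from start, step toward random samples.
def rrt(start, goal, bounds, steps=2000, eps=0.5, goal_tol=0.5):
    nodes, parent = [start], {0: None}
    for _ in range(steps):
        sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # extend the nearest tree node a step of length eps toward sample
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        p = nodes[i]
        d = math.dist(p, sample)
        if d == 0:
            continue
        q = (p[0] + eps * (sample[0] - p[0]) / d,
             p[1] + eps * (sample[1] - p[1]) / d)
        if collision_free(p, q):
            parent[len(nodes)] = i
            nodes.append(q)
            if math.dist(q, goal) < goal_tol:
                # walk parents back to the root to recover the path
                path, k = [], len(nodes) - 1
                while k is not None:
                    path.append(nodes[k]); k = parent[k]
                return path[::-1]
    return None

print(rrt((0.0, 0.0), (5.0, 5.0), [(-1.0, 6.0), (-1.0, 6.0)]))
```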

15.
Process mining techniques allow for extracting information from event logs. For example, the audit trails of a workflow management system or the transaction logs of an enterprise resource planning system can be used to discover models describing processes, organizations, and products. Traditionally, process mining has been applied to structured processes. In this paper, we argue that process mining can also be applied to less structured processes supported by computer supported cooperative work (CSCW) systems. In addition, the ProM framework is described. Using ProM, a wide variety of process mining activities are supported, ranging from process discovery and verification to conformance checking and social network analysis.

16.
The reasoning power of human-oriented plan-based reasoning systems is primarily derived from their domain-specific problem solving knowledge. Such knowledge is, however, intrinsically incomplete. In order to model the human ability of adapting existing methods to new situations we present in this work a declarative approach for representing methods, which can be adapted by so-called meta-methods. Since the computational success of this approach relies on the existence of general and strong meta-methods, we describe several meta-methods of general interest in detail by presenting the problem solving process of two familiar classes of mathematical problems. These examples should illustrate our philosophy of proof planning as well: besides planning with a pre-defined repertory of methods, the repertory of methods evolves with experience in that new ones are created by meta-methods that modify existing ones.

17.
A configurable business process model can be configured to meet the specific requirements of an organization. Configuration requires automatically determining a variant of the configurable process model while guaranteeing the correctness of the resulting model. However, few approaches address this problem. This paper proposes a new approach that first automatically decomposes a configurable process model into atomic subprocess models and then composes these models into the required process model; the approach guarantees the correctness of the resulting process model (the model is free of anomalous behavior). Compared with existing approaches, the new approach is independent of any particular modeling language and avoids handling configuration steps in isolation, which lowers the computational complexity.

18.
19.
20.
A case-based approach to heuristic planning
Much of the success of heuristic search as an approach to AI planning is due to the careful design of domain-independent heuristics. Although many heuristic planners perform reasonably well, the computational cost of computing the heuristic function in every search node is very high, causing the planner to scale poorly when increasing the size of the planning tasks. To tackle this problem, planners can incorporate additional domain-dependent heuristics in order to improve their performance. Learning-based planners try to automatically acquire these domain-dependent heuristics using previously solved problems. In this work, we present a case-based reasoning approach that learns abstracted state transitions that serve as domain control knowledge for improving the planning process. The recommendations from the retrieved cases are used as guidance for pruning or ordering nodes in different heuristic search algorithms applied to planning tasks. We show that the CBR guidance is appropriate for a considerable number of planning benchmarks.
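A hedged sketch of the retrieve-and-order idea (the state abstraction, case base, and all names here are hypothetical, not the paper's representation): stored cases map an abstracted state to the action that worked before, and successors matching the retrieved recommendation are expanded first.

```python
# Minimal case-based guidance for node ordering in heuristic search.
def retrieve(case_base, state, abstract):
    """Return the recommended action of the most similar stored case."""
    key = abstract(state)
    best = max(case_base, key=lambda c: len(key & abstract(c["state"])))
    return best["action"]

def order_successors(successors, recommended):
    # successors: list of (action, next_state); recommended action first
    return sorted(successors, key=lambda s: s[0] != recommended)

case_base = [
    {"state": {"at-a", "holding-x"}, "action": "move-b"},
    {"state": {"at-b"},              "action": "pickup-y"},
]
abstract = lambda s: set(s)             # identity abstraction for the demo
state = {"at-a", "holding-x", "fuel-low"}
succ = [("pickup-y", {}), ("move-b", {}), ("wait", {})]
print(order_successors(succ, retrieve(case_base, state, abstract)))
# the case overlapping most with the current state recommends "move-b"
```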
