Robotic process automation is a disruptive technology for rapidly automating tasks and subprocesses that are already digital yet still performed manually, as well as whole business processes. In contrast to other process automation technologies, robotic process automation is lightweight and accesses only the presentation layer of IT systems to mimic human behavior. Because the technology is novel and implementation approaches vary, up to 50% of robotic process automation projects are reported to fail. To tackle this issue, we use a design science research approach to develop a framework for the implementation of robotic process automation projects. We analyzed 35 reports on real-life projects to derive a preliminary sequential model, and then performed multiple expert interviews and workshops to validate and refine it. The result is a framework with variable stages that offers guidelines with enough flexibility to be applicable in complex and heterogeneous corporate environments as well as in small and medium-sized companies. It is structured by the three phases of initialization, implementation, and scaling, which together comprise eleven stages that are relevant both within an individual project and as a continuous cycle spanning projects. Together, the phases structure how to manage knowledge and supporting processes for the execution of robotic process automation implementation projects.
With the complexity of modern vehicles tremendously increasing, quality engineers play a key role within today’s automotive
industry. Field data analysis supports corrective actions in development, production, and after-sales support. We decompose
the requirements and show that association rules, a popular approach to generating explanatory models, still exhibit
shortcomings. Interactive rule cubes, which have been proposed recently, are a promising alternative. We extend this work
by introducing a way of intuitively visualizing and meaningfully ranking them. Moreover, we present methods to interactively
factorize a problem and validate hypotheses by ranking patterns based on expectations, and by browsing a cube-based network
of related influences. All this is currently in use as an interactive tool for warranty data analysis in the automotive industry.
A real-world case study shows how engineers successfully use it in identifying root causes of quality issues.
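The core of ranking patterns against expectations can be illustrated with the lift of an association rule, which compares observed co-occurrence with what statistical independence would predict. The sketch below is a minimal illustration, not the tool described above, and the warranty records and attribute names in it are hypothetical:

```python
def rule_lift(transactions, antecedent, consequent):
    """Lift of the rule antecedent -> consequent: observed co-occurrence
    frequency divided by the frequency expected under independence.
    Lift > 1 flags a pattern that occurs more often than expected."""
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t) / n
    c = sum(1 for t in transactions if consequent <= t) / n
    both = sum(1 for t in transactions if (antecedent | consequent) <= t) / n
    return both / (a * c) if a * c > 0 else 0.0

# Hypothetical warranty claims: each record is a set of attributes.
claims = [
    {"engine_X", "plant_A", "failure_F1"},
    {"engine_X", "plant_A", "failure_F1"},
    {"engine_X", "plant_B"},
    {"engine_Y", "plant_A"},
    {"engine_Y", "plant_B", "failure_F1"},
]
print(rule_lift(claims, {"engine_X"}, {"failure_F1"}))
```

A ranking by lift (or a similar deviation-from-expectation score) is what lets an engineer sort candidate influences before browsing related ones.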
Four experiments were conducted to test whether recent developments in display technology suffice to eliminate the well-known disadvantages of reading from a screen compared with paper. Proofreading speed and performance were equal for a TFT-LCD and a paper display, but there were more symptoms of eyestrain in the screen condition, accompanied by a strong preference for paper (Experiment 1). These results were replicated with a longer reading duration (Experiment 2). Additional experiments tested hypotheses about the reasons for the greater eyestrain associated with reading from a screen. Reduced screen luminance did not change the pattern of results (Experiment 3), but positioning both displays at equal inclination angles eliminated the differences in eyestrain symptoms and increased proofreading speed in the screen condition (Experiment 4). A paper-like positioning of TFT-LCDs seems to enable unimpaired reading without evidence of increased physical strain.
Practitioner Summary: Given the developments in screen technology, a re-assessment of the differences in proofreading speed and performance, well-being, and preference between computer screen and paper was conducted. State-of-the-art TFT-LCDs enable unimpaired reading, but a book-like positioning of screens seems necessary to minimise eyestrain symptoms.
We investigate the complexity of preorder checking when the specification is a flat finite-state system whereas the implementation is either a non-flat finite-state system or a standard timed automaton. In both cases, we show that simulation checking is EXPTIME-hard, and for the case of a non-flat implementation, the result holds even if there is no synchronization between the parallel components and their alphabets of actions are pairwise disjoint. Moreover, we show that the considered problems become PSPACE-complete when the specification is assumed to be deterministic. Additionally, we establish that comparing a synchronous non-flat system with no hiding and a flat system is PSPACE-hard for any relation between trace containment and bisimulation equivalence, even if the flat system is assumed to be fixed.
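For two flat finite-state systems, the simulation preorder itself can be computed by a standard greatest-fixed-point iteration; the hardness results above arise only when one side is non-flat or timed. The sketch below is a minimal illustration of that fixed-point check, not the construction used in the hardness proofs; the toy systems are invented for the example:

```python
def simulation_pairs(impl, spec):
    """Greatest fixed point of the simulation preorder between two labeled
    transition systems (state -> list of (action, successor) pairs).
    A pair (p, q) survives iff every move p --a--> p' in impl can be
    matched by some q --a--> q' in spec with (p', q') still related."""
    R = {(p, q) for p in impl for q in spec}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(R):
            ok = all(
                any(b == a and (p2, q2) in R for (b, q2) in spec[q])
                for (a, p2) in impl[p]
            )
            if not ok:
                R.discard((p, q))
                changed = True
    return R

# Toy implementation: do 'a', then 'b'.  Toy specification: two 'a'
# branches, only one of which can continue with 'b'.
impl = {0: [("a", 1)], 1: [("b", 2)], 2: []}
spec = {"s": [("a", "t"), ("a", "u")], "t": [("b", "v")], "u": [], "v": []}
R = simulation_pairs(impl, spec)
print((0, "s") in R)  # the spec state s simulates the impl state 0
```

The iteration runs in polynomial time in the sizes of the two flat systems, which is why the blow-up in the paper's setting must come from the implicit expansion of the non-flat (parallel) or timed implementation.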
Recent developments in cellular imaging now permit the minimally invasive study of protein interactions in living cells. These advances are of enormous interest to cell biologists, as proteins rarely act in isolation, but rather in concert with others in forming cellular machinery. Up until recently, all protein interactions had to be determined in vitro using biochemical approaches. This biochemical legacy has provided cell biologists with the basis to test defined protein-protein interactions not only inside cells, but now also with spatial resolution. More recent developments in TCSPC imaging are now also driving towards being able to determine protein interaction rates with similar spatial resolution, and together, these experimental advances allow investigators to perform biochemical experiments inside living cells. Here, we discuss some findings we have made along the way which may be useful for physiologists to consider.
In the current study, a meshfree Lagrangian particle method for the Landau–Lifshitz Navier–Stokes (LLNS) equations is developed. The LLNS equations incorporate thermal fluctuations into macroscopic hydrodynamics through the addition of white-noise fluxes whose magnitudes are set by a fluctuation–dissipation theorem. The study focuses on capturing the correct variances and correlations in equilibrium flows, which are compared with available theoretical values. Moreover, a numerical test of the random walk of a standing shock wave is considered to assess how well the method captures the shock location.
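The key ingredient, a white-noise flux added to the deterministic flux inside a conservative update, can be sketched in one dimension. This is a minimal illustration on a periodic finite-volume grid, not the meshfree Lagrangian scheme of the study; the noise amplitude `noise_amp` stands in for the physical fluctuation–dissipation prefactor, and all parameter values are illustrative:

```python
import numpy as np

def llns_step_1d(u, dt, dx, D, noise_amp, rng):
    """One explicit finite-volume step of 1D diffusion with a stochastic
    flux on a periodic grid.  noise_amp stands in for the fluctuation-
    dissipation prefactor, which in the LLNS equations is fixed by the
    temperature and the transport coefficients."""
    # Deterministic diffusive flux at each cell face.
    F = -D * (np.roll(u, -1) - u) / dx
    # Stochastic flux: an independent Gaussian per face, scaled so its
    # contribution behaves as white noise in time and space.
    F += noise_amp * rng.standard_normal(u.size) / np.sqrt(dt * dx)
    # Conservative update: subtract the discrete divergence of the flux.
    return u - dt * (F - np.roll(F, 1)) / dx

rng = np.random.default_rng(0)
u = np.ones(64)
for _ in range(100):
    u = llns_step_1d(u, dt=1e-4, dx=1.0, D=1.0, noise_amp=0.1, rng=rng)
print(abs(u.sum() - u.size) < 1e-8)  # stochastic fluxes are conservative
```

Because the noise enters through face fluxes, the update conserves the total of `u` exactly (up to round-off), which is the property that lets equilibrium variances and correlations be measured meaningfully.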
Engineering frameworks are currently required to support the easy, low-cost, modular and integrated development of manufacturing systems addressing the emergent requirements of re-configurability, responsiveness and robustness. This paper discusses the integration of 2D/3D digital software tools with Petri net based service-oriented frameworks to allow the design, configuration, analysis, validation, simulation, monitoring and control of manufacturing systems in a virtual environment and their subsequent smooth migration into the real 'physical' environment. An experimental case study was implemented to validate the proposed concepts, using the Continuum platform to design, compose, analyze, validate and simulate the Petri net based service-oriented manufacturing control system, and the Delmia Automation™ software suite to support rapid prototyping and easy simulation of the designed control solution. The experimental results demonstrate several aspects of the proposed approach, notably the smooth migration between the design and operation phases, one of the main objectives of the work.
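The Petri net semantics underlying such control frameworks reduces to a simple token-game firing rule. The sketch below is a minimal illustration of that rule, assuming a hypothetical two-step manufacturing service; it is not the Continuum platform's model:

```python
def enabled(marking, pre):
    """A transition is enabled iff every input place holds enough tokens."""
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from the pre-set places and
    produce tokens in the post-set places, returning the new marking."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, w in pre.items():
        m[p] = m[p] - w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

# Hypothetical service: load a part from the buffer, then process it.
m0 = {"idle": 1, "buffer": 2}
load = ({"idle": 1, "buffer": 1}, {"loaded": 1})
process = ({"loaded": 1}, {"idle": 1, "done": 1})
m1 = fire(m0, *load)
m2 = fire(m1, *process)
print(m2)
```

Analysis, validation, and monitoring all operate on this same marking-to-marking relation, which is what makes one Petri net model usable across the virtual and physical phases.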
The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators into the LAMMPS molecular dynamics software for distributed memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle–particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.
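The first stage of a particle-mesh solver such as PPPM is assigning point charges to a grid before the mesh part of the electrostatics is solved in Fourier space. The sketch below shows a 1D cloud-in-cell (linear) assignment as a minimal illustration only; it is not the LAMMPS implementation, and the grid and charge values are invented:

```python
import numpy as np

def assign_charges_cic(positions, charges, nmesh, box):
    """Cloud-in-cell assignment of point charges to a 1D periodic mesh:
    each charge is split between its two nearest grid points with linear
    weights, so the total mesh charge equals the total particle charge."""
    rho = np.zeros(nmesh)
    h = box / nmesh  # grid spacing
    for x, q in zip(positions, charges):
        s = x / h
        i = int(np.floor(s))
        f = s - i  # fractional offset within the cell
        rho[i % nmesh] += q * (1.0 - f)
        rho[(i + 1) % nmesh] += q * f
    return rho

rho = assign_charges_cic([1.25, 6.0], [1.0, -1.0], nmesh=8, box=8.0)
print(rho.sum())  # charge assignment conserves total charge
```

On a GPU the per-particle scatter in this loop is the contended step (many particles update the same mesh cell), which is one of the parallel-efficiency limitations such implementations must address.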
In an old weighing puzzle, there are n ≥ 3 coins that are identical in appearance. All the coins except one have the same weight, and that counterfeit one is a little bit lighter or heavier than the others, though it is not known in which direction. What is the smallest number of weighings needed to identify the counterfeit coin and to determine its type, using balance scales without measuring weights? This question was fully answered in 1946 by Dyson [The Mathematical Gazette 30 (1946) 231-234]. For values of n that are divisible by three, Dyson's scheme is non-adaptive and hence its later weighings do not depend on the outcomes of its earlier weighings. For values of n that are not divisible by three, however, Dyson's scheme is adaptive. In this note, we show that for all values n ≥ 3 there exists an optimal weighing scheme that is non-adaptive.
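By Dyson's result, k weighings suffice to identify the counterfeit and its type exactly when n ≤ (3^k − 3)/2. A short sketch computes the resulting optimum number of weighings from this bound:

```python
def min_weighings(n):
    """Smallest k with n <= (3**k - 3) / 2: by Dyson's 1946 result, this
    is the minimum number of balance weighings needed to find the one
    counterfeit coin among n and determine whether it is light or heavy."""
    assert n >= 3
    k = 1
    while (3**k - 3) // 2 < n:
        k += 1
    return k

print(min_weighings(12), min_weighings(13))  # 3 4
```

For the classic 12-coin puzzle this gives three weighings, while 13 coins already require a fourth, since (3^3 − 3)/2 = 12.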