This paper proposes a simplicity-oriented approach and framework for language-to-language transformation of, in particular, graphical languages. Key to simplicity is the decomposition of the transformation specification into sub-rule systems that separately specify purpose-specific aspects. We illustrate this approach by employing a variation of Plotkin’s Structural Operational Semantics (SOS) for pattern-based transformations of typed graphs in order to address the aspect ‘computation’ in a graph rewriting fashion. Key to our approach are two generalizations of Plotkin’s structural rules: the use of graph patterns as the matching concept in the rules, and the introduction of node and edge types. Types not only allow one to easily distinguish between different kinds of dependencies, such as control, data, and priority, but may also be used to define a hierarchical layering structure. The resulting Type-based Structural Operational Semantics (TSOS) supports a well-structured and intuitive specification and realization of semantically involved language-to-language transformations, adequate for the generation of purpose-specific views or input formats for certain tools, such as model checkers. A comparison with the general-purpose transformation frameworks ATL and Groove, based on the educational setting of our graphical WebStory language, illustrates that TSOS provides quite a flexible format for the definition of a family of purpose-specific transformation languages that are easy to use and come with clear guarantees.
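To make the rewriting idea concrete, the following Python sketch (our own illustration, not the TSOS framework or its API; names such as TypedGraph and the WebStory-style Screen type are hypothetical) shows how node and edge types let a single pattern-based rule target exactly one kind of dependency:

```python
# Minimal sketch: a typed graph with typed edges and one pattern-based rewrite
# step, illustrating how edge types separate e.g. control, data, and priority
# dependencies. Purely illustrative; not the authors' implementation.
from dataclasses import dataclass, field

@dataclass
class TypedGraph:
    nodes: dict = field(default_factory=dict)   # node id -> node type
    edges: set = field(default_factory=set)     # (src, edge type, dst)

    def add_node(self, nid, ntype):
        self.nodes[nid] = ntype

    def add_edge(self, src, etype, dst):
        self.edges.add((src, etype, dst))

    def match(self, src_type, etype, dst_type):
        """Yield edges whose endpoint types and edge type match a pattern."""
        for (s, t, d) in self.edges:
            if t == etype and self.nodes[s] == src_type and self.nodes[d] == dst_type:
                yield (s, t, d)

def rewrite_once(g, pattern, rewrite):
    """Apply 'rewrite' to the first match of 'pattern'; return True if applied."""
    for m in g.match(*pattern):
        rewrite(g, m)
        return True
    return False

# Usage: replace a control edge between two Screen nodes by a 'data' edge.
g = TypedGraph()
g.add_node("s1", "Screen"); g.add_node("s2", "Screen")
g.add_edge("s1", "control", "s2")
rewrite_once(g, ("Screen", "control", "Screen"),
             lambda gr, m: (gr.edges.discard(m), gr.add_edge(m[0], "data", m[2])))
```

Because the pattern mentions an edge type explicitly, a rule system for one aspect (say, control flow) never accidentally touches edges of another aspect.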
In the past, a five-mask LTPS CMOS process requiring only a single ion-doping step was used. Based on that process, all necessary components for the realization of a fully integrated AMOLED display using a 3T1C current-feedback pixel circuit have recently been developed. The integrated data driver is based on a newly developed LTPS operational amplifier, which does not require any compensation for Vth or mobility variations. Only one operational amplifier per column is used to perform digital-to-analog conversion as well as current control. In order to achieve high-precision analog behavior, the operational amplifier is embedded in a switched-capacitor network. In addition to circuit verification by simulation and analytic analysis, a 1-in. fully integrated AMOLED demonstrator was successfully built. To the best of the authors' knowledge, this is the first implementation of a fully integrated AMOLED display with current feedback.
We present an approach for the visualization and interactive analysis of dynamic graphs that contain a large number of time steps. A specific focus is put on the support of analyzing temporal aspects in the data. Central to our approach is a static, volumetric representation of the dynamic graph based on the concept of space-time cubes that we create by stacking the adjacency matrices of all time steps. The use of GPU-accelerated volume rendering techniques allows us to render this representation interactively. We identified four classes of analytics methods as being important for the analysis of large and complex graph data, which we discuss in detail: data views, aggregation and filtering, comparison, and evolution provenance. Implementations of the respective methods are presented in an integrated application, enabling interactive exploration and analysis of large graphs. We demonstrate the applicability, usefulness, and scalability of our approach by presenting two examples for analyzing dynamic graphs. Furthermore, we let visualization experts evaluate our analytics approach.
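As a rough illustration of the central data structure, the following numpy sketch (assumed layout, not the authors' implementation) stacks per-time-step adjacency matrices into a space-time cube and applies a simple temporal aggregation:

```python
# Illustrative sketch: stacking the adjacency matrices of all time steps yields
# a 3D volume ("space-time cube") that a GPU volume renderer can consume.
import numpy as np

def space_time_cube(adjacency_per_step):
    """adjacency_per_step: list of (n x n) adjacency matrices, one per time step.
    Returns a (T x n x n) volume; voxel (t, i, j) holds the edge weight i->j at time t."""
    return np.stack([np.asarray(a, dtype=np.float32) for a in adjacency_per_step], axis=0)

def aggregate_over_time(volume, fn=np.mean):
    """Temporal aggregation: collapse the time axis, e.g. by mean or max edge weight."""
    return fn(volume, axis=0)

# Example: 3 time steps of a 4-node graph
T, n = 3, 4
rng = np.random.default_rng(0)
steps = [(rng.random((n, n)) > 0.7).astype(np.float32) for _ in range(T)]
cube = space_time_cube(steps)                      # shape (3, 4, 4)
print(cube.shape, aggregate_over_time(cube, np.max).shape)
```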
Droplet-based microfluidics allows high-throughput experimentation with low-volume droplets. Essential fluidic process steps are, on the one hand, the proper control of the droplet composition and, on the other hand, droplet processing, manipulation, and storage. Besides integrated fluidic chips, standard PTFE tubings and fluid connectors can be used in combination with appropriate pumps to realize almost all necessary fluidic processes. The segmented flow technique usually operates with droplets of about 100–500 nL volume. These droplets are embedded in an immiscible fluid and confined by channel walls. For the integration of segmented flow applications into established research workflows, which are usually based on microtiter plates, robotic interface tools for parallel/serial and serial/parallel transfer operations are necessary. Dose–response experiments in particular are well suited for the segmented flow technique. We developed different transfer tools, including an automated “gradient take-up tool” for the generation of segment sequences with gradually changing composition of the individual droplets. The general working principles are introduced and the fluidic characterizations are given. As an exemplary application for a dose–response experiment, the inhibitory effect of the antibiotic tetracycline on Escherichia coli bacteria cultivated inside nanoliter droplets was investigated.
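The dosing logic behind such a gradient tool can be sketched as follows; the function name and all numbers are hypothetical and only illustrate how a linearly changing droplet composition could be computed, not the actual control software:

```python
# Hypothetical sketch of gradient dosing: compute per-droplet volumes of stock
# solution and medium so that concentrations change linearly along the sequence.
# Numbers are illustrative only, not taken from the paper.

def gradient_sequence(n_droplets, droplet_nl, c_min, c_max, c_stock):
    """Return (stock_nl, medium_nl, concentration) per droplet for a linear gradient."""
    seq = []
    for k in range(n_droplets):
        c = c_min + (c_max - c_min) * k / (n_droplets - 1)
        v_stock = droplet_nl * c / c_stock      # dilution: c * V = c_stock * v_stock
        seq.append((v_stock, droplet_nl - v_stock, c))
    return seq

# e.g. 10 droplets of 400 nL, tetracycline 0..10 ug/mL from a 100 ug/mL stock
for v_stock, v_medium, c in gradient_sequence(10, 400.0, 0.0, 10.0, 100.0):
    print(f"stock {v_stock:5.1f} nL + medium {v_medium:5.1f} nL -> {c:4.1f} ug/mL")
```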
It is an open problem in the area of effective (algorithmic) randomness whether Kolmogorov-Loveland randomness coincides with Martin-Löf randomness. Joe Miller and André Nies suggested some variations of Kolmogorov-Loveland randomness to approach this problem and to provide a partial solution. We show that their proposed notion of injective randomness is still weaker than Martin-Löf randomness. Since some of the ideas we use are clearer in its proof, we also show the weaker theorem that permutation randomness is weaker than Martin-Löf randomness.
We present a black-box active learning algorithm for inferring extended finite state machines (EFSMs) by dynamic black-box analysis. EFSMs can be used to model both the data flow and the control behavior of software and hardware components. Different dialects of EFSMs are widely used in tools for model-based software development, verification, and testing. Our algorithm infers a class of EFSMs called register automata. Register automata have a finite control structure, extended with variables (registers), assignments, and guards. Our algorithm is parameterized on a particular theory, i.e., a set of operations and tests on the data domain that can be used in guards. Key to our learning technique is a novel learning model based on so-called tree queries. The learning algorithm uses tree queries to infer symbolic data constraints on parameters, e.g., sequence numbers, time stamps, identifiers, or even simple arithmetic. We describe sufficient conditions on the properties that the symbolic constraints provided by a tree query must have in general to be usable in our learning model. We also show that, under these conditions, our framework induces a generalization of the classical Nerode equivalence and canonical automata construction to the symbolic setting. We have evaluated our algorithm in a black-box scenario, where tree queries are realized through (black-box) testing. Our case studies include connection establishment in TCP and a priority queue from the Java Class Library.
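For intuition, here is a small Python sketch of a register automaton (our own simplified illustration, not the learning tool or its model format): one register, equality guards, and assignments suffice to express a session-like discipline reminiscent of connection establishment:

```python
# Illustrative register automaton with one register x, equality guards, and
# assignments. It accepts traces where every send(v) reuses the value stored
# at connect(v). Not the paper's tool; a hand-written example for intuition.
class RegisterAutomaton:
    def __init__(self):
        self.location = "q0"
        self.registers = {"x": None}

    # transitions: (location, action) -> (guard, assignment, next location)
    TRANSITIONS = {
        ("q0", "connect"): (lambda regs, v: True,
                            lambda regs, v: regs.update(x=v), "q1"),
        ("q1", "send"):    (lambda regs, v: v == regs["x"],
                            lambda regs, v: None, "q1"),
        ("q1", "close"):   (lambda regs, v: True,
                            lambda regs, v: regs.update(x=None), "q0"),
    }

    def step(self, action, value=None):
        key = (self.location, action)
        if key not in self.TRANSITIONS:
            return False
        guard, assign, target = self.TRANSITIONS[key]
        if not guard(self.registers, value):
            return False
        assign(self.registers, value)
        self.location = target
        return True

ra = RegisterAutomaton()
print(ra.step("connect", 42), ra.step("send", 42), ra.step("send", 7))  # True True False
```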
The World Wide Web has turned hypertext into a success story by enabling world-wide sharing of unstructured information and informal knowledge. The Semantic Web targets the sharing of structured information and formal knowledge, pursuing the objective of achieving collective intelligence on the Web. Germane to the structure of the Semantic Web is a layering and standardization of concerns. These concerns are reflected by an architecture of the Semantic Web that we present through a common use case. Semantic Web data for the use case is now found on the Web and is part of a quickly growing set of Semantic Web resources available for formal processing.
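As a minimal sketch of what "structured information available for formal processing" looks like in practice, the following Python snippet uses the rdflib library with a made-up vocabulary (ex:Person, ex:knows); it is not the use case from the article:

```python
# Facts as subject-predicate-object triples (the RDF data layer), plus a
# SPARQL query as a simple form of formal processing on top of that layer.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.bob, RDF.type, EX.Person))
g.add((EX.alice, EX.knows, EX.bob))
g.add((EX.alice, EX.name, Literal("Alice")))

# Query the structured data: who does anyone know?
for row in g.query("SELECT ?p WHERE { ?s <http://example.org/knows> ?p }"):
    print(row.p)
```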
We show on a case study from an autonomous aerospace context how to apply a game-based model-checking approach as a powerful technique for the verification, diagnosis, and adaptation of system behaviors based on temporal properties. This work is part of our contribution within the SHADOWS project, where we provide a number of enabling technologies for model-driven self-healing. We propose here to use GEAR, a game-based model checker, as a user-friendly tool that can offer automatic proofs of critical properties of such systems. Although it is a model checker for the full modal μ-calculus, it also supports derived, more user-oriented logics. With GEAR, designers and engineers can interactively investigate automatically generated winning strategies for the games, in this way exploring the connection between the property, the system, and the proof.
This work has been partially supported by the European Union Specific Targeted Research Project SHADOWS (IST-2006-35157), exploring a Self-Healing Approach to Designing cOmplex softWare Systems. The project’s web page is at .
This article is an extended version of Renner et al. [18] presented at ISoLA 2007, Poitiers, December 2007.
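GEAR itself is not sketched here, but the core idea of game-based checking can be illustrated for the simplest case: for a reachability objective, the verifier's winning region and a winning strategy arise from an attractor fixed point. The following Python sketch is our own simplification and assumes an explicit finite game graph:

```python
# Attractor computation for a two-player reachability game: the verifier wins
# from every node of the returned region, and 'strategy' records a winning move
# for each verifier-owned node in it. Richer mu-calculus checking generalizes
# this fixed-point scheme; this is only an illustration, not GEAR.
def attractor(nodes, edges, owner, targets):
    """nodes: game positions; edges: dict node -> list of successors;
    owner[n] in {'verifier', 'refuter'}; targets: set the verifier wants to reach."""
    win = set(targets)
    strategy = {}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n in win:
                continue
            succs = edges.get(n, [])
            if owner[n] == "verifier" and any(s in win for s in succs):
                strategy[n] = next(s for s in succs if s in win)
                win.add(n); changed = True
            elif owner[n] == "refuter" and succs and all(s in win for s in succs):
                win.add(n); changed = True
    return win, strategy

# Tiny example: the verifier wins from 'a' by moving directly to 'goal'.
nodes = ["a", "b", "goal"]
edges = {"a": ["b", "goal"], "b": ["b"], "goal": []}
owner = {"a": "verifier", "b": "refuter", "goal": "verifier"}
print(attractor(nodes, edges, owner, {"goal"}))
```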
Retrieving similar images from large image databases is a challenging task for today’s content-based retrieval systems. Aiming at high retrieval performance, these systems frequently capture the user’s notion of similarity through expressive image models and adaptive similarity measures. On the query side, image models can significantly differ in quality compared to those stored on the database side. Thus, similarity measures have to be robust against these individual quality changes in order to maintain high retrieval performance. In this paper, we investigate the robustness of the family of signature-based similarity measures in the context of content-based image retrieval. To this end, we introduce the generic concept of average precision stability, which measures the stability of a similarity measure with respect to changes in quality between the query and database side. In addition to the mathematical definition of average precision stability, we include a performance evaluation of the major signature-based similarity measures focusing on their stability with respect to querying image databases by examples of varying quality. Our performance evaluation on recent benchmark image databases reveals that the highest retrieval performance does not necessarily coincide with the highest stability.
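One plausible way to operationalize the intuition (not necessarily the paper's formal definition) is to compare the average precision achieved with a degraded query against the average precision achieved with the original query, as in this Python sketch:

```python
# Average precision (AP) of a ranked result list, and a simple stability ratio
# comparing AP under a degraded query to AP under the original query.
# Illustrative only; the article's definition of AP stability may differ.
def average_precision(ranked_ids, relevant_ids):
    hits, precisions = 0, []
    for i, doc in enumerate(ranked_ids, start=1):
        if doc in relevant_ids:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

def stability(ap_degraded_query, ap_original_query):
    return ap_degraded_query / ap_original_query if ap_original_query else 0.0

relevant = {"img3", "img7"}
ap_full = average_precision(["img3", "img1", "img7", "img9"], relevant)  # high-quality query
ap_low  = average_precision(["img1", "img3", "img9", "img7"], relevant)  # degraded query
print(ap_full, ap_low, stability(ap_low, ap_full))
```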