Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
Rewriting logic is a flexible and expressive logical framework that unifies denotational semantics and SOS in a novel way, avoiding their respective limitations and allowing very succinct semantic definitions. The fact that a rewrite theory's axioms include both equations and rewrite rules provides a very useful “abstraction knob” to find the right balance between abstraction and observability in semantic definitions. Such semantic definitions are directly executable as interpreters in a rewriting logic language such as Maude, whose generic formal tools can be used to endow those interpreters with powerful program analysis capabilities.
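To make the equations-versus-rules distinction concrete, here is a minimal term-rewriting sketch in Python (Maude itself is the natural vehicle; this toy engine, the tuple encoding of terms, and the Peano example are illustrative assumptions, not the paper's definitions). Equations are applied exhaustively to reach one canonical form, while rewrite rules model individual observable transitions:

```python
def apply_once(term, rules):
    """Try each rule at the root, then recurse into subterms; None if no redex."""
    for rule in rules:
        out = rule(term)
        if out is not None:
            return out
    if isinstance(term, tuple):
        for i, sub in enumerate(term):
            out = apply_once(sub, rules)
            if out is not None:
                return term[:i] + (out,) + term[i + 1:]
    return None

def normalize(term, equations):
    """Equations: applied to a fixpoint, yielding one canonical representative."""
    while (out := apply_once(term, equations)) is not None:
        term = out
    return term

# Peano addition as equations: add(0, n) = n;  add(s(m), n) = s(add(m, n)).
equations = [
    lambda t: t[2] if isinstance(t, tuple) and t[:2] == ("add", "0") else None,
    lambda t: ("s", ("add", t[1][1], t[2]))
        if isinstance(t, tuple) and t and t[0] == "add"
        and isinstance(t[1], tuple) and t[1][0] == "s" else None,
]
# One rewrite RULE, i.e. an observable transition of a toy machine:
# <counter, n>  =>  <counter, s(n)>
rules = [lambda t: ("counter", ("s", t[1]))
         if isinstance(t, tuple) and t and t[0] == "counter" else None]

state = apply_once(("counter", "0"), rules)                 # one observable step
print(normalize(("add", ("s", "0"), state[1]), equations))  # ('s', ('s', '0'))
```

Turning the counter rule into an equation instead would collapse all counter states into a single abstract value: that choice is the "abstraction knob" the abstract refers to.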

2.
3.
4.
In this paper, we present a new approach and a novel interface, Virtual Human Sketcher (VHS), which enables those who can draw to sketch out various human body models. Our approach supports freehand drawing input and a “Stick Figure→Fleshing-out→Skin Mapping” modelling pipeline. Following this pipeline, a stick figure is drawn first to illustrate a figure pose, which is automatically reconstructed into 3D through a “Multi-layered Back-Front Ambiguity Clarifier”. It is then fleshed out with freehand body contours. A “Creative Model-based Method” is developed for interpreting the body size, shape, and fat distribution of the sketched figure and transferring them to a 3D human body through graphical comparisons and generic model morphing. The generic model is encapsulated with three distinct layers: skeleton, fat tissue, and skin. It can be transformed sequentially through rigid morphing, fatness morphing, and surface matching to match the 2D figure sketch. The initial resulting 3D body model can be incrementally modified through sketching directly on the 3D model. In addition, this body surface can be mapped onto a series of posed stick figures to be interpolated as a 3D character animation. VHS has been tested by various users on a Tablet PC. After minimal training, even a beginner can create plausible human bodies and animate them within minutes.

5.
The “hash–sign–switch” paradigm was first proposed by Shamir and Tauman with the aim of designing an efficient on-line/off-line signature scheme. Nonetheless, all existing on-line/off-line signature schemes based on this paradigm suffer from the key exposure problem of chameleon hashing. To avoid this problem, the signer must pre-compute and store a large number of different chameleon hash values and the corresponding signatures on those hash values in the off-line phase, and send the collision and the signature for a certain hash value in the on-line phase. Hence, the computation and storage costs of the off-line phase and the communication cost of the on-line phase in Shamir–Tauman’s signature scheme remain considerable. In this paper, we first introduce a special double-trapdoor hash family based on the discrete logarithm assumption and then use it to construct a more efficient generic on-line/off-line signature scheme without key exposure. Furthermore, we also present the first key-exposure-free generic on-line/off-line threshold signature scheme without a trusted dealer. Additionally, we prove that the proposed schemes achieve the desired security requirements.
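As background for the key-exposure problem, below is a minimal sketch of an ordinary single-trapdoor discrete-log chameleon hash with toy parameters; the paper's double-trapdoor family and threshold construction are more involved and are not reproduced here:

```python
# Group: subgroup of prime order q in Z_p*; trapdoor x with h = g^x mod p.
p, q, g = 23, 11, 4          # toy parameters: 4 has order 11 mod 23
x = 7                        # trapdoor (chameleon secret key)
h = pow(g, x, p)             # public key

def ch(m, r):
    """Chameleon hash CH(m, r) = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def collide(m, r, m_new):
    """With the trapdoor, find r' so that CH(m_new, r') == CH(m, r):
    m + x*r = m_new + x*r' (mod q)  =>  r' = r + (m - m_new)/x (mod q)."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 3, 5
m2 = 9
r2 = collide(m, r, m2)
assert ch(m, r) == ch(m2, r2)

# The key exposure problem: anyone who sees both openings (m, r) and (m2, r2)
# can recover the trapdoor as x = (m - m2) * (r2 - r)^(-1) mod q.
assert (m - m2) * pow(r2 - r, -1, q) % q == x
```

The final assertion is exactly why revealing a collision in the on-line phase is dangerous for this single-trapdoor construction, and why key-exposure-free schemes are the goal.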

6.
Efficient constrained local model fitting for non-rigid face alignment
Active appearance models (AAMs) have demonstrated great utility when employed for non-rigid face alignment/tracking. The “simultaneous” algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real-time performance (2–3 fps). The “project-out” algorithm for fitting an AAM achieves faster than real-time performance (>200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method achieves superior performance to the “simultaneous” AAM algorithm along with real-time fitting speeds (35 fps). To gain this performance, we improve upon the canonical CLM formulation in a number of ways, employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the “exhaustive local search” (ELS) algorithm. Experiments were conducted on the CMU MultiPIE database.
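A rough sketch of the weighted least-squares combination step follows: each of the N patch-experts contributes a local displacement and a confidence weight, and a single global warp update is recovered from them. The similarity-warp parameterization, the Jacobian layout, and all numbers are illustrative assumptions of this sketch, not the paper's exact formulation:

```python
import numpy as np

def global_warp_update(landmarks, displacements, weights):
    """Weighted least squares: fit one similarity warp (a, b, tx, ty), with
    linear part [[a, -b], [b, a]], to N independent per-landmark displacements."""
    rows, rhs, w = [], [], []
    for (x, y), (dx, dy), wi in zip(landmarks, displacements, weights):
        rows += [[x, -y, 1, 0], [y, x, 0, 1]]   # similarity Jacobian at (x, y)
        rhs  += [x + dx, y + dy]                # where the expert wants (x, y)
        w    += [wi, wi]
    W = np.sqrt(np.array(w))[:, None]           # sqrt weights for WLS
    A, b = np.array(rows, float), np.array(rhs, float)
    params, *_ = np.linalg.lstsq(W * A, W[:, 0] * b, rcond=None)
    return params

landmarks     = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
displacements = [(1.0, 0.0), (1.0, 1.0), (2.0, 0.0)]  # from N local searches
weights       = [1.0, 0.5, 0.8]                        # patch-expert confidences
print(global_warp_update(landmarks, displacements, weights))
```

Because each of the N local searches is independent, the expensive part of each fitting iteration parallelizes trivially, which is the property the abstract highlights.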

7.
Extensible programming languages and their compilers use highly modular specifications of languages and language extensions that allow a variety of different language feature sets to be easily imported into the programming environment by the programmer. Our model of extensible languages is based on higher-order attribute grammars and an extension called “forwarding” that mimics a simple rewriting process. It is designed so that no additional attribute definitions need to be written when combining a language with language extensions. Thus, programmers can remain unaware of the underlying attribute grammars when building customized languages by importing various extensions. In this paper we show how aspects and the aspect weaving process from Aspect-Oriented Programming can be specified as a modular language extension and imported into a base language specified in an extensible programming language framework.
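The flavour of “forwarding” can be sketched in a few lines of Python standing in for an attribute-grammar system; the class names and the toy `value` attribute are invented for illustration. A node for an extension construct defines no attribute equations of its own and instead delegates every attribute query to the host-language tree it forwards to:

```python
class Node:
    forward_to = None
    attrs = {}

    def attr(self, name):
        """Use this node's own attribute equation if it has one; otherwise
        forward the query to the node this one rewrites into."""
        if name in self.attrs:
            return self.attrs[name](self)
        if self.forward_to is not None:
            return self.forward_to.attr(name)          # "forwarding"
        raise AttributeError(name)

class Lit(Node):                                       # host-language leaf
    attrs = {"value": lambda self: self.v}
    def __init__(self, v): self.v = v

class Add(Node):                                       # host-language node
    attrs = {"value": lambda self: self.l.attr("value") + self.r.attr("value")}
    def __init__(self, l, r): self.l, self.r = l, r

class Double(Node):
    """Extension construct double(e): no attribute equations of its own;
    it forwards to its host-language rewriting, Add(e, e)."""
    def __init__(self, e):
        self.forward_to = Add(e, e)

print(Double(Lit(21)).attr("value"))                   # 42, via forwarding
```

This is the mechanism that lets extensions be combined without writing any new attribute definitions: every attribute the host language defines is automatically answered through the forwarded tree.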

8.
A generic integrated line detection algorithm (GILDA) is presented and demonstrated. GILDA is based on the generic graphics recognition approach, which abstracts graphics recognition as a stepwise recovery of the multiple components of graphic objects and is specified by the object–process methodology. We define 12 classes of lines that appear in engineering drawings and use them to construct a class inheritance hierarchy. The hierarchy highly abstracts the line features that are relevant to the line detection process. Based on the “Hypothesis and Test” paradigm, lines are detected by stepwise extension to both ends of a selected first key component. In each extension cycle, the one new component that best meets the current line's shape and style constraints is appended to the line. Different line classes are detected by controlling the line attribute values. As we show in the experiments, the algorithm demonstrates high performance on clear synthetic drawings as well as on noisy, complex, real-world drawings.
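The hypothesis-and-test extension loop can be sketched as below; the `fits`, `score`, and `nearer_end` callbacks stand in for GILDA's class-specific shape and style constraints and are assumptions of this sketch:

```python
def detect_line(key, candidates, fits, score, nearer_end):
    """Grow a line hypothesis from a selected first key component.
    fits(line, c)       -- does c meet the line's current shape/style constraints?
    score(line, c)      -- quality of c as a continuation (higher is better).
    nearer_end(line, c) -- 'front' or 'back': the end that c would extend."""
    line, pool = [key], list(candidates)
    while True:
        viable = [c for c in pool if fits(line, c)]
        if not viable:
            return line                     # no further extension: hypothesis done
        best = max(viable, key=lambda c: score(line, c))
        pool.remove(best)
        if nearer_end(line, best) == 'back':
            line.append(best)               # extend one end per cycle
        else:
            line.insert(0, best)
```

Different line classes are then obtained by plugging in different constraint and scoring functions, mirroring the way the class hierarchy controls line attribute values.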

9.
Alternating tree automata and AND/OR graphs provide elegant formalisms that enable branching-time logics to be verified in linear time. The seminal work of Kupferman et al. [Orna Kupferman, Moshe Y. Vardi, and Pierre Wolper. An automata-theoretic approach to branching-time model checking. J. ACM, 47(2):312–360, 2000] showed that 1) branching-time model checking is reducible to the language non-emptiness checking of the product of two alternating automata representing the model and property under verification, and 2) the non-emptiness problem can be solved by performing a search on an AND/OR graph representing this product. Their algorithm, however, can only be implemented in an explicit-state model checker because it needs stacks to detect accepting and rejecting runs. In this paper, we propose a BDD-based approach to check the language non-emptiness of the product automaton. We use a technique called “state recording” from Schuppan and Biere [Viktor Schuppan and Armin Biere. Efficient reduction of finite state model checking to reachability analysis. Int. Journal on Software Tools for Technology Transfer (STTT), 5(2–3):185–204, 2004] to emulate the stack mechanism from explicit-state model checking. This technique allows us to transform the product automaton into a well-defined AND/OR graph. We develop a BDD-based reachability algorithm to efficiently determine whether a solution graph for the AND/OR graph exists and thereby solve the model-checking problem. While “state recording” increases the size of the state space, the advantage of our approach lies in the memory savings BDDs can offer and the potential it opens up for optimisation of the reachability analysis. We remark that this technique always detects the shortest counter-example.
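The core of the non-emptiness check can be sketched as a backward fixpoint over the AND/OR graph, the stack-free style of computation that BDDs make efficient. Plain Python sets stand in for BDDs here, and the graph encoding is an illustrative assumption:

```python
def solvable(nodes, kind, succ, goals):
    """kind[n] in {'AND', 'OR'}; succ[n] lists successors; 'goals' are the
    terminal nodes that are trivially solved. Returns all nodes from which a
    solution (sub)graph exists: no stacks, just a monotone fixpoint."""
    solved = set(goals)
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n in solved or not succ[n]:
                continue
            ok = (all(s in solved for s in succ[n]) if kind[n] == 'AND'
                  else any(s in solved for s in succ[n]))
            if ok:
                solved.add(n)
                changed = True
    return solved

kind = {0: 'OR', 1: 'AND', 2: 'OR', 3: 'OR'}
succ = {0: [1], 1: [2, 3], 2: [], 3: []}
print(0 in solvable(range(4), kind, succ, goals={2, 3}))   # True
```

In a symbolic implementation, `solved` would be a BDD and each loop iteration an image computation, which is where the memory savings come from.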

10.
A crucial step in the modeling of a system is to determine the values of the parameters to use in the model. In this paper we assume that we have a set of measurements collected from an operational system, and that an appropriate model of the system (e.g., based on queueing theory) has been developed. Not infrequently, proper values for certain parameters of this model may be difficult to estimate from available data (because the corresponding parameters have unclear physical meaning or because they cannot be directly obtained from available measurements, etc.). Hence, we need a technique to determine the missing parameter values, i.e., to calibrate the model. As an alternative to unscalable “brute force” techniques, we propose to view model calibration as a non-linear optimization problem with constraints. The resulting method is conceptually simple and easy to implement. Our contribution is twofold. First, we propose improved definitions of the “objective function” to quantify the “distance” between performance indices produced by the model and the values obtained from measurements. Second, we develop a customized derivative-free optimization (DFO) technique whose original feature is the ability to allow temporary constraint violations. This technique allows us to solve this optimization problem accurately, thereby providing the “right” parameter values. We illustrate our method using two simple real-life case studies.
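A sketch of the setup follows: a relative-error objective comparing model outputs with measurements, minimized by a simple derivative-free coordinate search whose penalty term lets constraint violations persist temporarily before being squeezed out. The toy model, the penalty schedule, and the particular DFO method are illustrative assumptions, not the paper's algorithm:

```python
def objective(params, model, measured):
    """Relative-error 'distance' between model outputs and measurements."""
    return sum(((p - m) / m) ** 2 for p, m in zip(model(params), measured))

def calibrate(x0, model, measured, violation, iters=300):
    x, mu, step = list(x0), 1.0, 0.5
    f = lambda v: objective(v, model, measured) + mu * max(0.0, violation(v)) ** 2
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):        # probe each coordinate both ways
                trial = x[:]
                trial[i] += d
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step *= 0.5                     # refine the search mesh
        mu *= 1.05                          # squeeze out temporary violations
        if step < 1e-4:
            break
    return x

# Toy queueing-flavoured model: unknown service demands p, with "measured"
# utilization-like and response-time-like outputs; demands must stay >= 0.
model    = lambda p: (p[0] + p[1], 2 * p[0] + 3 * p[1])
measured = (1.0, 2.4)
print(calibrate([0.1, 0.1], model, measured, violation=lambda v: -min(v)))
```

While the penalty weight `mu` is small, the search may wander through infeasible points (here, negative demands); the growing weight eventually forces the iterates back into the feasible region, mimicking the "temporary constraint violations" feature.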

11.
At a recent conference on games in education, we made a radical decision to transform our standard presentation of PowerPoint slides and computer game demonstrations into a unified whole, inserting the PowerPoint presentation into the computer game. This opened up various questions relating to learning and teaching theories, which were debated by the conference delegates. In this paper, we reflect on these discussions, present our initial experiment, and relate it to various theories of learning and teaching. In particular, we consider the applicability of “concept maps” to inform the construction of educational materials, especially their topological, geometrical and pedagogical significance. We supplement this “spatial” dimension with a theory of the dynamic, temporal dimension, grounded in a context of learning processes, such as Kolb’s learning cycle. Finally, we address the multi-player aspects of computer games, and relate this to the theories of social and collaborative learning. This paper attempts to explore various theoretical bases, and so support the development of a new learning and teaching virtual reality approach.

12.
13.
Formal systems for cryptographic protocol analysis typically model cryptosystems in terms of free algebras. Modeling the behavior of a cryptosystem in terms of rewrite rules is more expressive, however, and there are some attacks that can only be discovered when rewrite rules are used. But free algebras are more efficient, and appear to be sound for “most” protocols. In [J. Millen, “On the freedom of decryption”, Information Processing Letters 86 (6) (June 2003) 329–333] Millen formalizes this intuition for shared key cryptography and provides conditions under which it holds; that is, conditions under which security for a free algebra version of the protocol implies security of the version using rewrite rules. Moreover, these conditions fit well with accepted best practice for protocol design. However, he left public key cryptography as an open problem. In this paper, we show how Millen's approach can be extended to public key cryptography, giving conditions under which security for the free algebra model implies security for the rewrite rule model. As in the case for shared key cryptography, our conditions correspond to standard best practice for protocol design.
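The gap between the two models is easy to state concretely: a free-algebra model treats dec(enc(m, k), k) as an inert term, while the rewrite-rule model adds the reduction dec(enc(m, k), k) → m, which some attacks need. A minimal sketch (the tuple encoding of terms is an assumption of the sketch):

```python
def reduce_term(term):
    """Apply dec(enc(m, k), k) -> m bottom-up; with no rules (the free-algebra
    model) the term would simply stay as it is."""
    if isinstance(term, tuple):
        term = tuple(reduce_term(t) for t in term)
        if (len(term) == 3 and term[0] == "dec"
                and isinstance(term[1], tuple) and len(term[1]) == 3
                and term[1][0] == "enc" and term[1][2] == term[2]):
            return term[1][1]              # the recovered plaintext m
    return term

print(reduce_term(("dec", ("enc", "m", "k"), "k")))   # 'm'
```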

14.
A large class of diagrammatic languages falls under the broad definition of “executable graphics”, meaning that some transformational semantics can be devised for them. On the other hand, the definition of static aspects of visual languages often relies on some form of parsing or constructive process. We propose here an approach to the definition of visual language syntax and semantics based on a notion of transition as production/consumption of resources. Transitions can be represented in forms which are intrinsic to the diagrams or external to them. A collection of abstract metamodels is presented to discuss the approach.
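The production/consumption notion of transition admits a compact, Petri-net-style sketch over multisets of resources; the token names are invented for illustration:

```python
from collections import Counter

def enabled(state, consume):
    """A transition is enabled iff the state covers the consumed multiset."""
    return all(state[r] >= n for r, n in consume.items())

def fire(state, consume, produce):
    """Fire a transition: subtract the consumed resources, add the produced."""
    if not enabled(state, consume):
        raise ValueError("transition not enabled")
    out = Counter(state)
    out.subtract(consume)
    out.update(produce)
    return +out                              # unary + drops zero counts

state = Counter({"token_a": 2, "token_b": 1})
t = ({"token_a": 1, "token_b": 1}, {"token_c": 1})   # (consume, produce)
print(fire(state, *t))                        # Counter({'token_a': 1, 'token_c': 1})
```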

15.
In this work we discuss the problem of performing distributed CTL model checking by splitting the given state space into several “partial state spaces”. The partial state space is modelled as a Kripke structure with border states. Each computer involved in the distributed computation owns a partial state space and performs a model checking algorithm on this incomplete structure. To be able to proceed, the border states are augmented by assumptions about the truth of formulas, and the computers exchange assumptions about relevant states as they compute more precise information. In the paper we give the basic definitions and present the distributed algorithm.
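For a single reachability-like property, the assumption-exchange loop can be sketched as below; restricting CTL to EF and using a shared assumption table instead of message passing are both simplifications of the paper's algorithm:

```python
def ef_local(states, succ, goal, assume):
    """Backward fixpoint of 'EF goal' on a partial state space; 'assume' maps
    border states owned elsewhere to the currently believed truth value."""
    holds = {s for s in states if s in goal or assume.get(s, False)}
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in holds and any(t in holds or assume.get(t, False)
                                      for t in succ.get(s, [])):
                holds.add(s)
                changed = True
    return holds

def distributed_ef(partitions, succ, goal):
    assume = {}                      # assumptions exchanged between computers
    while True:
        new = dict(assume)
        for states in partitions:    # each partition runs on its own computer
            holds = ef_local(states, succ, goal, assume)
            for s in states:
                new[s] = s in holds
        if new == assume:            # assumptions stabilized: done
            return {s for s, v in new.items() if v}
        assume = new

partitions = [{0, 1}, {2, 3}]        # the edge 1 -> 2 crosses the border
succ = {0: [1], 1: [2], 2: [3], 3: []}
print(distributed_ef(partitions, succ, goal={3}))   # {0, 1, 2, 3}
```

Each round makes the assumptions more precise, and monotonicity guarantees the exchange terminates.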

16.
Defining operational semantics for a process algebra is often based either on labeled transition systems that account for interaction with a context or on the so-called reduction semantics: we assume we have a representation of the whole system and we compute unlabeled reduction transitions (leading to a distribution over states in the probabilistic case). In this paper we consider mixed models with states where the system is still open (towards interaction with a context) and states where the system is already closed. The idea is that (open) parts of a system “P” can be closed via an operator “PG” that turns already synchronized actions whose “handle” is specified inside “G” into prioritized reduction transitions (and, therefore, states performing them into closed states). We show that we can use the operator “PG” to express multi-level priorities and external probabilistic choices (by assigning weights to handles inside G), and that, by considering reduction transitions as the only unobservable τ transitions, the proposed technique is compatible, for process algebra with general recursion, with both standard (probabilistic) observational congruence and a notion of equivalence which aggregates reduction transitions in a (much more aggregating) trace-based manner. We also observe that the trace-based aggregated transition system can be obtained directly in operational semantics, and we present the “aggregating” semantics. Finally, we discuss how the open/closed approach can also be used to express discrete and continuous (exponential probabilistic) time, and we show that, in such timed contexts, the trace-based equivalence can aggregate more with respect to traditional lumping-based equivalences over Markov Chains.

17.
This paper is concerned with the problem of balancing the competing objectives of allowing statistical analysis of confidential data while maintaining standards of privacy and confidentiality. Remote analysis servers have been proposed as a way to address this problem by delivering results of statistical analyses without giving the analyst any direct access to data. Several national statistical agencies operate remote analysis servers [Australian Bureau of Statistics Remote Access Data Laboratory (RADL), <www.abs.gov.au>; Luxembourg Income Study, <www.lisproject.org>]. Remote analysis servers are not free from disclosure risk, and current implementations address this risk by “confidentialising” the underlying data and/or by denying some queries. In this paper we explore the alternative solution of “confidentialising” the output of a server so that no confidential information is revealed or can be inferred. We review results on remote analysis servers, and provide a list of measures for confidentialising the output from a single regression query to a remote server as developed by Sparks et al. [R. Sparks, C. Carter, J. Donnelly, J. Duncan, C.M. O’Keefe, L. Ryan, A framework for performing statistical analyses of unit record health data without violating either privacy or confidentiality of individuals, in: Proceedings of the 55th Session of the International Statistical Institute, Sydney, 2005; R. Sparks, C. Carter, J. Donnelly, C.M. O’Keefe, J. Duncan, T. Keighley, D. McAullay, Remote access methods for exploratory data analysis and statistical modelling: privacy-preserving analytics, Comput. Meth. Prog. Biomed. 91 (2008) 208–222]. We provide a fully worked example, and compare the confidentialised output from the query with the output from a traditional statistical package. Finally, we provide a comparison of the confidentialised regression diagnostics with the synthetic regression diagnostics generated by the alternative method of Reiter [J.P. Reiter, Model diagnostics for remote-access regression servers, Statistics and Computing 13 (2003) 371–380].
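The general idea (return analysis results, never unit records) can be sketched as follows; the rounding and binning choices below are illustrative placeholders, not the measures of Sparks et al.:

```python
import numpy as np

def confidential_ols(X, y, coef_digits=2, n_bins=5):
    """Fit the requested regression server-side, but return only rounded
    coefficients and binned residual diagnostics, never raw residuals."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    counts, edges = np.histogram(resid, bins=n_bins)   # binned, not raw
    return {
        "coefficients": np.round(beta, coef_digits).tolist(),
        "residual_histogram": {"counts": counts.tolist(),
                               "bin_edges": np.round(edges, 2).tolist()},
        "n": len(y),
    }

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 1.5 + X @ np.array([2.0, -0.7]) + rng.normal(scale=0.3, size=100)
print(confidential_ols(X, y))
```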

18.
TNO is doing research in many areas of industrial automation and is heavily involved in European projects financed by R&D programmes such as Esprit, Eureka and Brite, and in many ISO and CEN standardization activities. From this experience it becomes clear that the I of Integration in CIM has not only to do with the integration of the so-called “islands of automation” but also with the integration of “islands of manufacturing”: how we can improve the transfer of manufacturing knowledge. We have to increase the semantic content of our integration approaches, so that not only computer scientists can be involved, but also people from the companies we are trying to help, and people who are responsible for the development of new CIM components. The real problem is not a problem of technical integration of computers, but much more a “conceptual modelling” problem. Fundamental questions are, for instance, how we can, on the semantic level really required, model information transfer upstream and downstream in the product life cycle. Based on the analysis of existing CIM projects such as CAD*I, CIM-OSA, Open CAM Systems (Esprit I), IMPACT (Esprit II), CAM-I's CIM Architecture, the Danish Principal model for CIM, and more, we developed a generic and reusable CIM reference architecture. This architecture shows manufacturing activities, real and information flow objects, CIM components and industrial automation standards like STEP, MAP, TOP, EDIFACT, MMS etc. in an integrated way. In this paper we describe the CIM base model used to express the CIM reference architecture and give some details of the CIM reference architecture itself.

19.
“Walkthrough” and “Jogthrough” techniques are well-known expert-based methodologies for the evaluation of user interface design. In this paper we describe the use of the “Graphical Jogthrough” method for evaluating the interface design of the Network Simulator, an educational simulation program that enables users to virtually build a computer network, install hardware and software components, make the necessary settings and test the functionality of the network. Graphical Jogthrough is a further modification of a typical Jogthrough method, where evaluators' ratings produce evidence in the form of a graph, presenting the estimated proportion of users who effectively use the interface versus the time they had to work with it in order to achieve effectiveness. We comment on the question: “What are the possible benefits and limitations of the Graphical Jogthrough method when applied in the case of educational software interface design?” We present the results of the evaluation session, and, concluding from our experience, we argue that the method could offer designers quantitative and qualitative data for formulating a useful (though rough in some aspects) estimation about the novice–becoming–expert pace that end users might follow when working with the evaluated interface.

20.
Petri net modules in the transformation-based component framework
Component-based software engineering needs to be backed by thorough formal concepts and modeling techniques. This paper combines two concepts introduced independently by the two authors in previous papers. On the one hand, the concept of Petri net modules introduced at IDPT 2002 in Padberg [J. Padberg, Petri net modules, Journal on Integrated Design and Process Technology 6 (4) (2002) 105–120], and on the other hand a generic component framework for system modeling introduced at FASE 2002 in Ehrig et al. [H. Ehrig, F. Orejas, B. Braatz, M. Klein, M. Piirainen, A generic component concept for system modeling, in: Proceedings of FASE ’02, Lecture Notes in Computer Science, vol. 2306, Springer, 2002]. First we develop a categorical formalization of the transformation-based approach to components that is based on pushouts. This is the framework in which we show that Petri net modules can be considered as an instantiation of the generic component framework. This allows applying the transformation-based semantics and compositionality result of the generic framework to Petri net modules. In addition to general Petri net modules we introduce Petri net modules preserving safety properties, which can be considered as another instantiation of the pushout-based formalization of the generic framework.
