Similar Literature
20 similar documents found.
1.
Craig interpolation has become a versatile tool in formal verification, used for instance to generate program assertions that serve as candidates for loop invariants. In this paper, we consider Craig interpolation for quantifier-free Presburger arithmetic (QFPA). Until recently, quantifier elimination was the only available interpolation method for this theory, which is, however, known to be potentially costly and inflexible. We introduce an interpolation approach based on a sequent calculus for QFPA that determines interpolants by annotating the steps of an unsatisfiability proof with partial interpolants. We prove our calculus to be sound and complete. We have extended the Princess theorem prover to generate interpolating proofs, and applied it to a large number of publicly available Presburger arithmetic benchmarks. The results document the robustness and efficiency of our interpolation procedure. Finally, we compare the procedure against alternative interpolation methods, both for QFPA and linear rational arithmetic.
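The defining property of such an interpolant can be checked mechanically. Below is a minimal sanity check for a hand-picked interpolant of a toy QFPA pair; the z3 Python bindings are used here purely as an assumed stand-in checker (the paper's own implementation is the Princess prover):

```python
# Check the two defining properties of a Craig interpolant I for an
# unsatisfiable pair (A, B): A implies I, and I & B is unsatisfiable,
# with I built only over symbols shared by A and B.
# Assumes the z3-solver package; illustrative, not the paper's procedure.
from z3 import Int, And, Implies, Not, Solver, unsat

x, y = Int('x'), Int('y')
A = And(x >= 0, y == x + 1)   # mentions x and y
B = y < 0                     # mentions y only; shared symbol: y
I = y >= 1                    # candidate interpolant over the shared symbol

def valid(f):
    s = Solver()
    s.add(Not(f))
    return s.check() == unsat

assert valid(Implies(A, I))                       # A entails I
s = Solver(); s.add(And(I, B))
assert s.check() == unsat                         # I contradicts B
print("(y >= 1) is a Craig interpolant for (A, B)")
```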

2.
The use of Craig interpolants has enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial time algorithms for obtaining interpolants for conjunctions of linear Diophantine equations, linear modular equations (linear congruences), and linear Diophantine disequations. We also present an interpolation result for conjunctions of mixed integer linear equations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates in a counterexample-guided abstraction refinement (CEGAR) framework. This has enabled verification of simple programs that cannot be checked using existing CEGAR-based model checkers. This paper is an extended version of [14]. This research was sponsored by the Gigascale Systems Research Center (GSRC), Semiconductor Research Corporation (SRC), the National Science Foundation (NSF), the Office of Naval Research (ONR), the Naval Research Laboratory (NRL), the Defense Advanced Research Projects Agency (DARPA), the Army Research Office (ARO), and the General Motors Collaborative Research Lab at CMU. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of GSRC, SRC, NSF, ONR, NRL, DARPA, ARO, GM, or the U.S. government.
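A small example of why divisibility predicates matter here: the pair below is unsatisfiable over the integers, and a modular equation over the shared variable serves as the interpolant. The check is a hedged sketch assuming the z3-solver package; the paper's contribution is polynomial-time computation of such interpolants, not this check:

```python
# A: x is even (x = 2y); B: x is odd (x = 2z + 1). The modular predicate
# x = 0 (mod 2) mentions only the shared symbol x, follows from A, and
# contradicts B -- a divisibility interpolant of the kind discovered in
# the paper's CEGAR experiments. Assumes z3-solver; illustrative only.
from z3 import Int, And, Implies, Not, Solver, unsat

x, y, z = Int('x'), Int('y'), Int('z')
A = x == 2 * y
B = x == 2 * z + 1
I = x % 2 == 0

def valid(f):
    s = Solver()
    s.add(Not(f))
    return s.check() == unsat

assert valid(Implies(A, I))
s = Solver(); s.add(And(I, B))
assert s.check() == unsat
```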

3.
Point-based geometric models are gaining popularity in both the computer graphics and CAD fields. A related design/modelling problem is the focus of the reported research: drawing curves onto digital surfaces represented by clouds of points. The problem is analyzed and solved, and a set of ‘design tools’ is proposed which allows the user/designer to efficiently perform ‘product development’ (alternative name: ‘detail design’) tasks that require efficient processing of a ‘digital surface’. The primary tool is a robust and efficient point projection algorithm combined with a smoothing technique for producing smooth ‘digital curves’ lying on the cloud surface. The new design tools are tested on a real-life industrial example with very satisfactory results, which are thoroughly presented in the paper.
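As a rough illustration of the kind of primitive involved, the sketch below projects a query point onto a point-cloud surface by fitting a least-squares plane to its nearest neighbours; the PCA-based plane fit, the neighbourhood size k=8 and all names are illustrative assumptions, not the paper's (more robust) algorithm:

```python
# Project a query point p onto a surface sampled as a point cloud:
# pick the k nearest cloud points, fit a local plane by PCA, and drop
# p orthogonally onto that plane. Toy parameters; not the paper's method.
import numpy as np

def project_to_cloud(cloud, p, k=8):
    d = np.linalg.norm(cloud - p, axis=1)
    nbrs = cloud[np.argsort(d)[:k]]       # k nearest neighbours
    c = nbrs.mean(axis=0)                 # local centroid
    _, _, vt = np.linalg.svd(nbrs - c)    # plane normal = least-variance axis
    n = vt[-1]
    return p - np.dot(p - c, n) * n       # orthogonal projection

rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))
cloud = np.column_stack([xy, 0.1 * xy[:, 0] ** 2])   # sampled 'digital surface'
print(project_to_cloud(cloud, np.array([0.0, 0.0, 1.0])))
```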

4.
It is widely believed that, in molecular dynamics (MD) simulations, round-off errors can cause numerical irreversibility, since standard MD employs floating-point arithmetic and round-off errors cannot be avoided. To investigate the characteristics of this numerical irreversibility, the ‘bit-reversible algorithm’, which is completely time-reversible and free from any round-off error, is used as a test bed. This study clearly demonstrates that, besides the stability of the system, the appearance of irreversibility is related to the ‘quantity’ of the controlled noise. By means of bit-reversible simulations with controlled noise of an appropriate ‘quantity’ added, the characteristics of the numerical irreversibility in standard MD are revealed.
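The core idea is easy to reproduce: with integer state and an exactly computed integer force, a central-difference integrator can be inverted and run backwards to recover its initial state bit for bit. The one-dimensional toy force below is an assumption for illustration:

```python
# Bit-reversible toy integrator: x[n+1] = 2*x[n] - x[n-1] + F(x[n]) over
# the integers. Every operation is exact, so the backward iteration
# recovers the starting state with no round-off error at all.
def force(x):
    return -(x // 16)            # toy integer 'spring'; // is exact

x_prev, x = 1000, 1003
start = (x_prev, x)

for _ in range(10_000):          # forward in time
    x_prev, x = x, 2 * x - x_prev + force(x)

for _ in range(10_000):          # backward: invert the same update
    x_prev, x = 2 * x_prev - x + force(x_prev), x_prev

assert (x_prev, x) == start      # exact, bit-level reversibility
print("initial state recovered exactly:", start)
```

Perturbing a single least-significant bit in one step breaks the assertion, which mimics the controlled noise the study injects.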

5.
Sculptured surface machining using triangular mesh slicing
In this paper, an optimized procedure for tool path generation in regional milling is presented. The proposed procedure computes tool paths by slicing a CL-surface (Cutter Location surface), which is a triangular mesh containing invalid triangles. Tool path generation consists of two steps: first, a set of line segments is obtained by slicing the triangular mesh with two-dimensional geometric elements (slicing elements); second, a valid tool path is extracted from the line segments by removing invalid portions. Two algorithms based on the slicing elements are presented: a ‘line projection’ algorithm based on the plane-sweeping paradigm, which works efficiently by using the characteristics of a monotone chain; and a ‘curve projection’ algorithm, which transforms the curve projection problem into a line projection problem by mapping the XYZ-space of the cylinder surface to the TZ-plane of the unfolded cylinder. The proposed procedure has been implemented and applied to tool path generation in regional milling. Performance tests show the efficiency of the proposed procedure.
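The first step, obtaining line segments by slicing the mesh, reduces to intersecting each triangle with a slicing plane. A minimal sketch for a vertical plane x = c follows; degenerate cases (vertices lying exactly on the plane) are ignored for brevity, and the data are toy values:

```python
# Intersect one triangle of a CL-surface mesh with the plane x = c.
# An edge contributes an intersection point when its endpoints lie on
# opposite sides of the plane; two such points form a slice segment.
import numpy as np

def slice_triangle(tri, c):
    pts = []
    for i in range(3):
        p, q = tri[i], tri[(i + 1) % 3]
        if (p[0] - c) * (q[0] - c) < 0:        # edge crosses the plane
            t = (c - p[0]) / (q[0] - p[0])     # linear interpolation factor
            pts.append(p + t * (q - p))
    return (pts[0], pts[1]) if len(pts) == 2 else None

tri = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 1.0], [0.0, 2.0, 1.0]])
print(slice_triangle(tri, 1.0))                # one segment at x = 1
```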

6.

The use of propositional logic and systems of linear inequalities over the reals is a common means to model software for formal verification. Craig interpolants constitute a central building block in this setting for over-approximating reachable states, e.g. as candidates for inductive loop invariants. Interpolants for a linear system can be efficiently computed from a Simplex refutation by applying Farkas’ lemma. However, these interpolants do not always suit the verification task; in the worst case, they can even prevent the verification algorithm from converging. This work introduces decomposed interpolants, a fundamental extension of Farkas interpolants, obtained by identifying and separating independent components of the interpolant structure using methods from linear algebra. We also present an efficient polynomial algorithm to compute decomposed interpolants and analyse its properties. We experimentally show that the use of decomposed interpolants in model checking results in immediate convergence on instances where state-of-the-art approaches diverge. Moreover, since it is based on the efficient Simplex method, the approach is very competitive in general.
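The gap between a Farkas interpolant and its decomposition shows up already in two variables: the Farkas certificate sums A's inequalities into a single inequality, while keeping the independent summands separate yields a strictly stronger conjunction. A hedged check, assuming the z3 Python bindings:

```python
# A = {x >= 0, y >= 0}, B = {x + y <= -1}. Farkas weights (1, 1) on A's
# inequalities give the single interpolant x + y >= 0; separating the
# two independent components gives the stronger (x >= 0) and (y >= 0).
# Assumes z3-solver; a toy instance, not the paper's algorithm.
from z3 import Real, And, Implies, Not, Solver, unsat

x, y = Real('x'), Real('y')
A = And(x >= 0, y >= 0)
B = x + y <= -1
farkas = x + y >= 0
decomposed = And(x >= 0, y >= 0)

def valid(f):
    s = Solver()
    s.add(Not(f))
    return s.check() == unsat

for I in (farkas, decomposed):
    assert valid(Implies(A, I))                   # A entails I
    s = Solver(); s.add(And(I, B))
    assert s.check() == unsat                     # I contradicts B
assert valid(Implies(decomposed, farkas))         # decomposed is stronger
```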


7.
Morphing is an interpolation technique that changes one form into another through a seamless transition, producing in the process an infinite number of ‘intermediate’ forms between the original and the target. This paper examines the possibility of using the morphing technique to generate a large number of hull forms rapidly, based on a number of target forms, existing or newly generated. The paper discusses the technique developed for applying morphing to hull form definition. The algorithm first projects the vertices of the original and target 3D surfaces onto 2D planes. After ‘regularising’ the vertices in 2D, they are projected back onto the 3D surfaces. The corresponding vertices of the two surfaces are then used for interpolation. It has been found that the interpolated hull forms can be generated almost instantaneously, allowing the whole algorithm to be embedded in an optimisation program.
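Once the two hulls share a regularised vertex grid, the interpolation itself is a convex combination of corresponding vertices, P(t) = (1 - t)·P_original + t·P_target, which is why intermediate forms come essentially for free. A minimal sketch with illustrative array shapes:

```python
# Morph between two hulls given as matched vertex grids of shape
# (sections, points per section, 3). t = 0 gives the original hull,
# t = 1 the target, anything in between an 'intermediate' form.
import numpy as np

def morph(original, target, t):
    return (1.0 - t) * original + t * target

original = np.random.default_rng(1).normal(size=(20, 30, 3))
target = original + 0.5               # stand-in for a regularised target hull
mid = morph(original, target, 0.5)    # generated almost instantaneously
print(mid.shape)
```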

8.
One approach for solving interacting many-fermion systems is the configuration-interaction method, sometimes also called the interacting shell model, where one finds eigenvalues of the Hamiltonian in a many-body basis of Slater determinants (antisymmetrized products of single-particle wavefunctions). The resulting Hamiltonian matrix is typically very sparse, but for large systems the nonzero matrix elements can nonetheless require terabytes or more of storage. An alternate algorithm, applicable to a broad class of systems with symmetry, in our case rotational invariance, is to exactly factorize both the basis and the interaction using additive/multiplicative quantum numbers; such an algorithm recreates the many-body matrix elements on the fly and can reduce the storage requirements by an order of magnitude or more. We discuss factorization in general and introduce a novel, generalized factorization method, essentially a ‘double factorization’, which speeds up basis generation and the set-up of required arrays. Although we emphasize techniques, we also place factorization in the context of a specific (unpublished) configuration-interaction code, BIGSTICK, which runs on both serial and parallel machines, and discuss the savings in memory due to factorization.
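The basic factorization step can be shown with an additive quantum number: rather than storing the full product basis, one keeps sector states and pairs them on the fly when their quantum numbers sum to the target value. The toy Jz values below are assumptions for illustration, not a real model space:

```python
# Factorized basis construction: pair proton and neutron sector states
# whose Jz values add up to the target M, instead of storing the full
# product space. Toy labels and values; illustrative only.
from itertools import product

proton_states = [("p0", -1), ("p1", 0), ("p2", 1)]    # (label, Jz)
neutron_states = [("n0", -1), ("n1", 0), ("n2", 1)]
M = 0

basis = [(p, n)
         for (p, jp), (n, jn) in product(proton_states, neutron_states)
         if jp + jn == M]
print(basis)   # only 3 of the 9 product states survive the Jz constraint
```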

9.
We formulate the time-constrained backpacker problem as an extension of the classical knapsack problem (KP), where a ‘backpacker’ travels from an origin to a destination on a directed acyclic graph, collecting items en route within the capacity of his knapsack and within a fixed time limit. We present a dynamic programming (DP) algorithm to solve this problem to optimality, and a ‘shift-and-merge’ DP algorithm to solve larger instances. The latter extends the list-type DP, which has been successful for one-dimensional KPs, to the two-dimensional case. Computational experiments on a series of instances demonstrate the advantage of the shift-and-merge technique over commercial MIP solvers.
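A hedged sketch of the list-type update behind such an algorithm: each state is a non-dominated (weight, time, value) triple, a 'shift' pushes an item onto every state, and a 'merge' unions the old and shifted lists while pruning dominated states. The graph traversal is omitted and the item data are toy values:

```python
# List-type DP over (weight, time, value) states, in the spirit of a
# 'shift-and-merge' update. Illustrative sketch; the paper's algorithm
# additionally walks a DAG from origin to destination.
def merge(states):
    """Discard states dominated in all of weight, time and value."""
    states = sorted(set(states))
    return [(w, t, v) for w, t, v in states
            if not any(w2 <= w and t2 <= t and v2 >= v
                       and (w2, t2, v2) != (w, t, v)
                       for w2, t2, v2 in states)]

def shift(states, item, W, T):
    w_i, t_i, v_i = item
    return [(w + w_i, t + t_i, v + v_i) for w, t, v in states
            if w + w_i <= W and t + t_i <= T]

states = [(0, 0, 0)]
for item in [(3, 2, 10), (2, 3, 7), (4, 1, 8)]:    # (weight, time, value)
    states = merge(states + shift(states, item, W=6, T=5))
print(max(v for _, _, v in states))                # best value: 17
```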

10.
Automated test data generation has remained a topic of considerable interest for several decades because it lies at the heart of attempts to automate the process of software testing. This paper reports the results of an empirical study using the dynamic symbolic-execution tool CUTE and the search-based tool AUSTIN on five non-trivial open-source applications. The aim is to provide practitioners with an assessment of what can be achieved by existing techniques with little or no specialist knowledge, and to provide researchers with baseline data against which to measure subsequent work. To achieve this, each tool is applied ‘as is’, with neither additional tuning nor supporting harnesses and with no adjustments applied to the subject programs under test. The mere fact that these tools can be applied ‘out of the box’ in this manner reflects the growing maturity of automated test data generation. However, as might be expected, the study reveals opportunities for improvement and suggests ways to hybridize these two approaches, which have hitherto been developed entirely independently.

11.
To extract information about the Earth's surface from Earth Observation data, a key processing step is the separation of pixels representing clear-sky observations of land or water surfaces from observations substantially influenced by clouds. This paper presents an algorithm used for this purpose specifically for data from the AATSR sensor on ENVISAT. The algorithm is based on the structure of the SPARC cloud detection scheme developed at CCRS for AVHRR data, then modified, calibrated and validated for AATSR data. It uses a series of weighted tests to calculate per-pixel cloud presence probability, and also produces an estimate of cloud top height and a cloud shadow flag. Algorithm parameters have been optimized for daytime use in Canada, and evaluation shows good performance with a mean daytime kappa coefficient of 0.76 for the ‘cloud’/‘clear’ classification when compared to independent validation data. Performance is independent of season, and is a dramatic improvement over the existing AATSR L1B cloud flag for Canada. The algorithm will be used at CCRS for processing AATSR data, and will form the basis of similar processing for data from the SLSTR sensors on Sentinel-3.

12.
An improved algorithm, together with its implementation, is presented for the automatic computation of the complete root classification of a real parametric polynomial. The algorithm offers improved efficiency and a new test for non-realizable conditions. The improvement lies in the direct use of ‘sign lists’, obtained from the discriminant sequence, rather than ‘revised sign lists’. It is shown that the discriminant sequences, upon which the sign lists are based, are closely related both to Sturm–Habicht sequences and to subresultant sequences. Thus calculations based on any of these quantities are essentially equivalent.
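Since the sign lists are essentially equivalent to Sturm–Habicht sequences, the flavour of the computation can be illustrated with a classical Sturm chain: the number of distinct real roots is the difference in sign changes of the chain at -∞ and +∞. The sketch assumes sympy and an example polynomial not taken from the paper:

```python
# Count distinct real roots via sign changes of the Sturm chain, for two
# instances of the parametric cubic x**3 + a*x + 1. Assumes sympy;
# illustrates the related machinery, not the paper's sign-list algorithm.
from sympy import sturm, symbols, limit, oo, sign

x = symbols('x')

def real_root_count(p):
    seq = sturm(p)
    def changes(pt):
        signs = [sign(limit(q, x, pt)) for q in seq]
        signs = [s for s in signs if s != 0]
        return sum(1 for u, v in zip(signs, signs[1:]) if u != v)
    return changes(-oo) - changes(oo)

print(real_root_count(x**3 - 3*x + 1))   # a = -3: 3 real roots
print(real_root_count(x**3 + 3*x + 1))   # a = +3: 1 real root
```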

13.
In continuous-time system identification and adaptive control, the least-squares parameter estimation algorithm has always been used with regressor filtering, which adds to the dynamic order of the identifier and affects its performance. We present an approach for designing a least-squares estimator that uses an unfiltered regressor. We also consider a problem of adaptive nonlinear control and present the first least-squares-based adaptive nonlinear control design that yields a complete Lyapunov function. The design is presented for linearly parametrized nonlinear control systems in ‘normal form’. A scalar linear example is included which adds insight into the key ideas of our approach and shows that, for linear systems, our Lyapunov-LS design with an unfiltered regressor, presented in the note for unnormalized least squares, can also be extended to normalized least squares.

14.
For successful new product development, it is necessary to understand the customers' holistic experience of the product beyond traditional task-completion and acceptance measures. This paper describes research in which ninety-eight UK owners of luxury saloons assessed the feel of push-switches in five luxury saloon cars, both in context (in-car) and out of context (on a bench). A combination of hedonic data (i.e. a measure of ‘liking’), qualitative data and semantic differential data was collected. It was found that customers are clearly able to differentiate between switches based on the degree of liking for the samples' perceived haptic qualities, and that the assessment environment had a statistically significant, though not universal, effect. A factor analysis showed that the perceived characteristics of switch haptics can be explained by three independent factors, defined as ‘Image’, ‘Build Quality’ and ‘Clickiness’. Preliminary steps have also been taken towards identifying whether existing theoretical frameworks for user experience may be applicable to automotive human-machine interfaces.

15.
Intelligent tutoring systems often make use of students’ answers to adapt instruction or feedback on a task. In this paper, we explore the alternative possibility of adapting a system based on the perceived affective and cognitive state of a student. By using non-verbal behavior, a system can potentially adapt better to the needs of each individual student. We used a new experimental paradigm inspired by ‘brain training’ software to collect primary school children’s answers to easy and difficult arithmetic problems, and made audiovisual recordings of their answers. Adult observers rated these films on perceived difficulty level. Results showed that adults were able to correctly interpret children’s perceived level of difficulty, especially if they saw the child’s face (compared to hearing only the voice). They paid attention to features such as ‘looking away’ and ‘frowning’. We then checked whether the difficulty of the posed problem could be predicted automatically from the first second of the response. This ‘thin-slice analysis’ correctly predicted the difficulty level in 71% of all cases. When trained on sufficiently many recordings, adaptive tutoring systems should be able to detect children’s states and adapt the difficulty level of their learning materials accordingly.

16.
We extend process algebra with guards, comparable to the guards in guarded commands or the conditions in common programming constructs such as if-then-else-fi and while-do-od. The extended language is provided with an operational semantics based on transitions between pairs of a process and a (data-)state. The data-states are given by a data environment that also defines in which data-states guards hold and how atomic actions (non-deterministically) transform these states. The operational semantics is studied modulo strong bisimulation equivalence. For basic process algebra (without operators for parallelism) we present a small axiom system that is complete with respect to a general class of data environments. Given a particular data environment L, we add three axioms to this system, which is then again complete, provided weakest preconditions are expressible and L is sufficiently deterministic. We then study process algebra with parallelism and guards. A two-phase calculus is provided that makes it possible to prove identities between parallel processes; this calculus is also complete. In the last section we show that partial correctness formulas can easily be expressed in this setting. We use process algebra with guards to prove the soundness of a Hoare logic for linear processes by translating proofs in Hoare logic into proofs in process algebra. Supported by ESPRIT Basic Research Action no. 3006 (CONCUR) and by RACE project no. 1046 (SPECS). Supported by RACE project no. 1046 (SPECS).

17.
Although many analyses of the collaborative nature of software development have been carried out in papers (7, 8, 9 and 10), some of them from the perspective of Vygotsky’s Activity theory, less attention has been given to the discursive evolution of software as different ‘Genres’. In this article we investigate the discursive formation of software and programming languages over time, driven by increased ‘Activities’, ‘Dialogue’ and ‘Power’ exercised by certain user groups and entities, complementing Activity theory with Foucauldian POWER-KNOWLEDGE. We show that POWER relations affect user preferences, choices and activities, which produce changes in programming languages and create new software genres. We borrow the term ‘Genre’ from the literary studies of Bakhtin and apply it to software. Just as different coexisting social classes at a specific time in history leave their fingerprints in different speech and text genres, we claim that similar mechanisms exist in the software world. We show that a modern software system develops improved ‘Dialogism’ or ‘Intertextuality’, ‘Chronotope’ and ‘Heteroglossia’, and forms its own discourse. Our presentation draws heavily on Mikhail Bakhtin’s concept of literary genres and on the Foucauldian concept of POWER-KNOWLEDGE.

18.
Sales prediction is an essential part of stock planning for the wholesale and retail business. It is a complex task because of the large number of factors affecting demand. Designing an intelligent predictor that would beat a simple moving-average baseline across a number of products appears to be a non-trivial task. We present an intelligent two-level sales prediction approach that switches predictors depending on the properties of the historical sales. First, we learn how to categorize the sales time series into ‘predictable’ and ‘random’ based on structural, shape and relational features related to the products and the environment, using a meta-learning approach. We introduce a set of novel meta-features to capture behavior, shape and relational properties of the sales time series. Next, for the products identified as ‘predictable’ we apply an intelligent base predictor, while for ‘random’ we use a moving average. Using real data from a food wholesale company, we show how the prediction accuracy can be improved using this strategy, as compared to the baseline predictor as well as an ensemble of predictors. In our study we also show that by applying an intelligent predictor only to the most ‘predictable’ products we can control the risk of performing worse than the baseline.
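A hedged sketch of the switching scheme: a toy meta-rule (lag-1 autocorrelation, standing in for the paper's meta-features and classifier) labels a series 'predictable' or 'random' and routes it to a drift-based stand-in predictor or to the moving-average baseline:

```python
# Two-level prediction: classify the series, then switch predictors.
# The lag-1-correlation rule and the drift predictor are illustrative
# stand-ins for the paper's meta-learning classifier and base predictor.
import numpy as np

def is_predictable(series, corr_threshold=0.3):
    lag1 = np.corrcoef(series[:-1], series[1:])[0, 1]
    return lag1 > corr_threshold          # toy meta-rule

def predict_next(series, window=4):
    if is_predictable(series):
        # stand-in 'intelligent' predictor: last value plus recent drift
        return series[-1] + np.mean(np.diff(series)[-window:])
    return series[-window:].mean()        # moving-average baseline

rng = np.random.default_rng(0)
trend = np.arange(60, dtype=float) + rng.normal(0, 0.1, 60)   # 'predictable'
noise = rng.normal(10, 3, 60)                                 # 'random'
print(predict_next(trend), predict_next(noise))
```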

19.
20.
For multimodality images, a novel fusion algorithm based on the shiftable complex directional pyramid transform (SCDPT) is proposed in this paper. With the aid of the structural similarity (SSIM) index, a ‘similarity-based’ idea is employed to distinguish regions with ‘redundant’ information from regions with ‘complementary’ information between the source images before the SCDPT coefficients are merged. A ‘weighted averaging’ scheme is then employed for regions with ‘redundant’ information, and a ‘selecting’ scheme for regions with ‘complementary’ information. When merging the low-pass subband coefficients, the SSIM index in the spatial domain (SP-SSIM) is employed as the similarity measure, and three types of regions are thus determined. In particular, for regions with similar intensity values but different intensity-change directions between the source images, a ‘selecting’ scheme based on gradient and energy is proposed. When merging the directional band-pass subband coefficients, the SSIM index in the complex wavelet domain (CW-SSIM) is employed as the similarity measure. With the CW-SSIM index, not only the magnitude information but also the phase information of the SCDPT coefficients can be employed. Compared to traditional fusion methods based on the energy matching (EM) index, the proposed method can better deal with ‘redundant’ and ‘complementary’ information in the source images. In addition, because of the shift-invariance of the SCDPT and the CW-SSIM index, the proposed fusion algorithm performs well even if the input images are not well registered. Several sets of experimental results demonstrate the validity and feasibility of the proposed method in terms of both visual quality and objective evaluation.
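The similarity-based switching rule can be sketched on plain images: where a local SSIM map says the sources are redundant, average them; where they are complementary, select the locally stronger source. The sketch below assumes scikit-image and applies SSIM to the images themselves, whereas the paper applies SP-SSIM/CW-SSIM to pyramid subbands; the threshold and the energy measure are illustrative:

```python
# 'Similarity-based' fusion sketch: weighted averaging in redundant
# regions, selection in complementary regions, decided by a local SSIM
# map. Toy stand-in for the paper's SCDPT-domain scheme.
import numpy as np
from skimage.metrics import structural_similarity

def fuse(img_a, img_b, thresh=0.6):
    _, ssim_map = structural_similarity(img_a, img_b, full=True,
                                        data_range=1.0)
    redundant = ssim_map > thresh
    averaged = 0.5 * (img_a + img_b)                # 'weighted averaging'
    selected = np.where(np.abs(img_a) >= np.abs(img_b), img_a, img_b)
    return np.where(redundant, averaged, selected)  # 'selecting'

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = a.copy()
b[:, 32:] = rng.random((64, 32))    # right half carries complementary content
print(fuse(a, b).shape)
```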
