One very fertile domain of applied Artificial Intelligence is constraint solving technology, in particular constraint networks, which address problems that can be represented using discrete variables together with constraints on the allowed instantiation values for these variables. Every solution to a constraint network must satisfy every constraint. When no solution exists, the user might want to know the actual reasons for the absence of a global solution. In this respect, extracting MUCs (Minimal Unsatisfiable Cores) from an unsatisfiable constraint network is a useful process when the causes of unsatisfiability must be understood so that the network can be re-engineered and relaxed to become satisfiable. Despite bad worst-case computational complexity results, various MUC-finding approaches that prove tractable on many real-life instances have been proposed. Many of them are based on the successive identification of so-called transition constraints. In this respect, we show how local search can be used to extract, when possible, additional transition constraints at each main iteration step. In the general constraint network setting, the approach is shown to outperform a technique based on a form of model rotation, imported from SAT technology, that also exhibits additional transition constraints. Our extensive computational experiments show that this enhancement also boosts the performance of state-of-the-art DC(WCORE)-like MUC extractors.
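For context, most of these approaches refine the classic deletion-based extraction scheme sketched below in Python. The sketch is illustrative only: `is_satisfiable` stands for a hypothetical complete-solver oracle, and the local-search enhancement introduced above is not reproduced here.

```python
def extract_muc(constraints, is_satisfiable):
    """Deletion-based MUC extraction (illustrative sketch only).

    `constraints` is an unsatisfiable collection of constraints and
    `is_satisfiable` a hypothetical oracle deciding satisfiability.
    A constraint whose removal makes the remaining network satisfiable
    is a transition constraint: it belongs to the extracted MUC.
    """
    core = list(constraints)
    muc = []
    while core:
        candidate = core.pop()
        if is_satisfiable(muc + core):
            # Dropping `candidate` restored satisfiability, so it is a
            # transition constraint and must be kept in the MUC.
            muc.append(candidate)
        # Otherwise muc + core is still unsatisfiable and `candidate`
        # can be discarded for good.
    return muc
```

In this baseline, every iteration pays for one full satisfiability test and identifies at most one transition constraint; the point of the local-search enhancement described above is, roughly, to expose several transition constraints per main iteration step.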
Many everyday activities depend on the capacity to organize and smoothly execute motor sequences. The authors tested the hypothesis that a sequencing deficit is associated with schizophrenia, using a new method to distinguish between lower and higher order mechanisms underlying the impairment. The first task involved triggered sequences, in which sensory information from one movement was the cue for initiation of the following movement. Results showed that these motor sequences were performed as fluently by patients as by controls. The second and third tasks involved sequences in which the entire movement sequence could be preplanned. Patients executed the sequences less fluently than controls, but only in the conditions where preplanned action sequences were required, and the patients' fluency deficit increased with sequence complexity. The high discriminating power of the third task gave the authors a means to control for a potential psychometric confound involving differential discriminating power across tasks, and to argue in favor of a specific higher order motor fluency deficit in patients with schizophrenia. It is suggested that basic lower order mechanisms that integrate sensory information with outgoing motor commands are preserved in schizophrenia, whereas higher order integrative mechanisms required for the smooth coordination of motor sequences are impaired.
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA, the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities that may occur at run time, numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Only slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
Program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 53 420
No. of bytes in distributed program, including test data, etc.: 566 495
Distribution format: tar.gz
Programming language: Fortran
Computer: PC running LINUX with an i686 or an ia64 processor; UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 4.14, 6.5, 20
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic (a schematic illustration of this principle follows the references below).
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used; only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled; in that case, the cost may be related to the number of instabilities detected.
References:
[1] The CADNA library, URL address: http://www.lip6.fr/cadna.
[2] J.-M. Chesneaux, L'arithmétique stochastique et le logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995.
[3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261.
[4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
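As a schematic illustration of the Discrete Stochastic Arithmetic principle summarized above (and not of the CADNA API itself, which provides dedicated Fortran types), the following Python sketch runs the same cancellation-prone computation several times under randomized last-bit rounding and keeps only the digits on which all runs agree:

```python
import math
import random

def perturb(x: float) -> float:
    """Crude stand-in for a random rounding mode: shift x by one unit
    in the last place, randomly up or down."""
    if x == 0.0:
        return x
    return x + random.choice((-1.0, 1.0)) * math.ulp(x)

def unstable_sum(n: int) -> float:
    """A toy computation sensitive to rounding: the alternating
    harmonic series, perturbed at every intermediate operation."""
    total = 0.0
    for k in range(1, n + 1):
        total = perturb(total + perturb((-1.0) ** k / k))
    return total

# Run the randomized computation a few times (CESTAC-style, N = 3).
samples = [unstable_sum(100_000) for _ in range(3)]
mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)

# Digits shared by all runs are taken as unaffected by round-off.
digits = math.floor(-math.log10(spread / abs(mean))) if spread else 16
print(f"result ~ {mean:.17g}, estimated exact significant digits: {digits}")
```

CADNA implements this idea at the level of the arithmetic itself, through its dedicated numerical types, so no manual perturbation of each operation is needed.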
In biological modelling of coastal phytoplankton dynamics, the light attenuation coefficient is often expressed as a function of the concentrations of chlorophyll and mineral suspended particulate matter (SPM). In order to estimate the relationship between these parameters over the continental shelf of the northern Bay of Biscay, a set of in situ data was gathered for the period 1998-2003, for which SeaWiFS imagery is available. These data comprise surface measurements of the concentrations of total SPM and chlorophyll, and irradiance profiles from which the attenuation coefficient of the photosynthetically available radiation, KPAR, is derived. The performance of the IFREMER look-up table used to retrieve the chlorophyll concentration from the SeaWiFS radiance is evaluated on this new data set. The quality of the estimated chlorophyll concentration is assessed by comparing the variograms of the in situ and satellite-derived chlorophyll concentrations. Once the chlorophyll concentration is determined, the non-living SPM, defined as the SPM not related to dead or alive endogenous phytoplankton, is estimated from the radiance at 555 nm by inverting a semi-analytic model. This method provides realistic estimates of the concentrations of chlorophyll and SPM over the continental shelf throughout the year. Finally, a relationship based on non-living SPM and chlorophyll is proposed to estimate KPAR on the continental shelf of the Bay of Biscay. The same formula applies to non-living SPM and chlorophyll concentrations whether observed in situ or derived from SeaWiFS radiance.
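The abstract does not reproduce the fitted relationship; purely for orientation, parameterizations of this kind commonly take a form such as (illustrative only, with hypothetical coefficients \(a\), \(b\), \(c\)):

\[
K_{\mathrm{PAR}} \;\approx\; K_{w} \;+\; a\,[\mathrm{Chl}]^{\,b} \;+\; c\,[\mathrm{SPM}_{\mathrm{nl}}],
\]

where \(K_{w}\) is the attenuation due to pure water, \([\mathrm{Chl}]\) the chlorophyll concentration, and \([\mathrm{SPM}_{\mathrm{nl}}]\) the non-living SPM concentration; the actual coefficients and functional form are those fitted in the paper.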
A growing number of empirical studies evaluate the influence of Mental Health (MH) technology on clinical effectiveness, the therapeutic relationship (i.e., therapeutic alliance), and usability. However, to the authors' knowledge, no studies have yet examined the influence of technology on the therapeutic process in terms of collaboration. This study evaluates the quality of collaboration between client and therapist in an Augmented Reality Exposure Therapy (ARET) context and in a traditional, In Vivo Exposure Therapy (IVET) context, using the Therapeutic Collaborative Scale (TCS). Twenty participants received an intensive session of cognitive behavioral therapy in either a technology-mediated or a traditional therapeutic context. The results indicate that both therapeutic conditions show high collaboration scores. However, an asymmetry of roles between the therapist and the client was detected under both conditions. Also, a greater level of distraction was observed for therapists in ARET, which affected the quality of the therapists' involvement in the therapeutic session. The implications of these results are discussed.
A graph G is defined in [16] to be P4-reducible if no vertex of G belongs to more than one chordless path on four vertices, or P4. A graph G is defined in [15] to be P4-sparse if no set of five vertices induces more than one P4 in G. P4-sparse graphs generalize both P4-reducible graphs and the well-known class of P4-free graphs, or cographs. In an extended abstract [11], the first author introduced a method that uses the modular decomposition tree of a graph as the framework for the resolution of algorithmic problems; this method was applied to the study of P4-sparse and extended P4-sparse graphs.
In this paper, we begin by presenting the method used in [11] in full detail. We propose a unique tree representation of P4-sparse graphs and a unique tree representation of P4-reducible graphs, leading to a simple linear-time recognition algorithm for both classes. In this way we simplify and unify the solutions for these problems presented in [16-19]. The tree representation of an n-vertex P4-sparse or P4-reducible graph is the key to obtaining O(n)-time algorithms for the weighted versions of classical optimization problems solved in [20]; these problems are NP-complete on general graphs.
Finally, by relaxing the restriction concerning the exclusion of C5 cycles from P4-sparse and P4-reducible graphs, we introduce the class of extended P4-sparse graphs and the class of extended P4-reducible graphs. We then show that a minimal amount of additional work suffices to extend most of our algorithms to these new classes of graphs.
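As a point of reference for the definitions above, the following Python sketch checks P4-sparseness directly from the definition. All names are illustrative; the brute-force enumeration of 5-subsets is only meant to make the definition concrete, whereas the tree representation discussed above yields linear-time recognition.

```python
from itertools import combinations, permutations

def induces_p4(adj, quad):
    """True if the four vertices in `quad` induce a chordless path (P4)."""
    induced = {frozenset((u, v)) for u, v in combinations(quad, 2)
               if v in adj[u]}
    if len(induced) != 3:
        return False  # a P4 has exactly three edges
    return any({frozenset((a, b)), frozenset((b, c)), frozenset((c, d))} == induced
               for a, b, c, d in permutations(quad))

def is_p4_sparse(adj):
    """Definition check: no set of five vertices induces more than one P4.
    Brute force over all 5-subsets -- for illustration only."""
    return all(
        sum(induces_p4(adj, quad) for quad in combinations(five, 4)) <= 1
        for five in combinations(adj, 5)
    )

# A chordless 5-cycle C5 contains five induced P4s, hence is not P4-sparse
# (this is exactly the case relaxed by the *extended* classes above):
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(is_p4_sparse(c5))  # False
```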
This exploratory study aims to achieve a better understanding of the user-related factors that affect the choice of routes in public transport (PT). We also look at what can motivate route and mode changes towards alternatives in a real situation. We investigated the experience of 19 PT users, using the critical incident technique (Flanagan in Psychol Bull 51(4):327, 1954). We asked participants to report incidents (i.e. situations) in which they were very satisfied or dissatisfied with their choice; both the case of their usual route and the case of an alternative were considered. A total of 91 incidents were collected and analysed using a multiple correspondence analysis. Additionally, users' profiles were characterized and superposed on the analysis of incident content. The main results are as follows. First, the user's choice of PT route depends on the context (i.e. aim of the travel, time of day). Second, taking an alternative to the usual PT route, or using a route combining different transport modes, is determined by the context and by factors related to the pleasantness of the travel (e.g. accompanying a friend along the way). Finally, depending on the user's profile (i.e. the combination of attitude towards PT and demographic variables), the factors taken into account in the choice of a PT route relate to the efficiency or the pleasantness of the trip. These results show the importance of contextual factors and users' profiles in route choice, and suggest that these factors should be further taken into account in new tools and services for mobility.
We propose and study quantitative measures of smoothness \(f\mapsto A(f)\) which are adapted to anisotropic features such as edges in images or shocks in PDEs. These quantities govern the rate of approximation by adaptive finite elements when no constraint is imposed on the aspect ratio of the triangles, the simplest example being \(A_{p}(f)=\|\sqrt{|\mathrm{det}(d^{2}f)|}\|_{L^{\tau}}\), which appears when approximating in the \(L^{p}\) norm by piecewise linear elements with \(\frac{1}{\tau}=\frac{1}{p}+1\). The quantities A(f) are not semi-norms and therefore cannot be used to define linear function spaces. We show that these quantities can be well defined by mollification when f has jump discontinuities along piecewise smooth curves. This motivates using them in image processing as an alternative to the frequently used total variation semi-norm, which does not account for the smoothness of the edges.
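Schematically, the approximation result these quantities govern is of the following form (constants and precise hypotheses omitted; see the paper for the exact statement):

\[
\inf_{\#\mathcal{T}\le N}\ \bigl\|f - I_{\mathcal{T}}f\bigr\|_{L^{p}} \;\lesssim\; N^{-1}\,A_{p}(f),
\qquad \frac{1}{\tau}=\frac{1}{p}+1,
\]

where the infimum runs over triangulations \(\mathcal{T}\) with at most \(N\) triangles, with no aspect-ratio constraint, and \(I_{\mathcal{T}}\) denotes piecewise linear interpolation on \(\mathcal{T}\).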
Image and geometry processing applications estimate the local geometry of objects using information localized at points, and usually treat information about the tangents as a by-product of the point coordinates. This work proposes parabolic polygons as a model for discrete curves that intrinsically combines points and tangents. The model is naturally affine invariant, which makes it particularly well adapted to computer vision applications. As a direct application of this affine invariance, the paper introduces an affine curvature estimator with great potential to improve computer vision tasks such as matching and registration. As a proof of concept, this work also proposes an affine invariant curve reconstruction from point and tangent data.
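To make the basic primitive concrete, here is a minimal Python sketch of a parabolic arc interpolating two points with prescribed tangents, the building block of a parabolic polygon as described above. The construction via quadratic Bézier arcs is one standard way to realize it and is not claimed to be the paper's exact formulation; function names are illustrative.

```python
import numpy as np

def parabolic_arc(p0, t0, p1, t1):
    """Parabolic arc through p0 and p1 with tangent directions t0 and t1.

    Uses the fact that a quadratic Bezier curve is a parabolic arc whose
    middle control point is the intersection of the two end tangent lines
    (assumes t0 and t1 are not parallel).
    """
    p0, t0, p1, t1 = map(np.asarray, (p0, t0, p1, t1))
    # Solve p0 + a*t0 = p1 + b*t1 for the tangent-line intersection.
    a, _ = np.linalg.solve(np.column_stack((t0, -t1)), p1 - p0)
    c = p0 + a * t0  # middle Bezier control point
    return lambda s: (1 - s) ** 2 * p0 + 2 * s * (1 - s) * c + s ** 2 * p1

# Points and tangents sampled from the unit circle at angles 0 and pi/2:
arc = parabolic_arc((1.0, 0.0), (0.0, 1.0), (0.0, 1.0), (-1.0, 0.0))
print(arc(0.5))  # [0.75 0.75], close to the circle point (0.707..., 0.707...)
```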