Similar Documents
1.
Numeration systems whose basis is defined by a linear recurrence with integer coefficients are considered. We give conditions on the recurrence under which the normalization function, which transforms any representation of an integer into the normal one obtained by the usual algorithm, can be realized by a finite automaton. Addition is a particular case of normalization. The same questions are discussed for the representation of real numbers in basis θ, where θ is a real number > 1, in connection with symbolic dynamics. In particular, it is shown that if θ is a Pisot number, then normalization and addition in basis θ are computable by a finite automaton. This work has been supported by the PRC Mathématiques et Informatique.
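A minimal sketch of normalization in one concrete linear-recurrence numeration system: the Fibonacci (Zeckendorf) system, where the normal representation is the greedy one with no two adjacent 1s. The abstract's point is that this map is computable by a finite automaton; here we simply recompute it by re-expanding the value, and all names are ours.

```python
def fib_weights(k):
    """First k digit weights of the Fibonacci system: 1, 2, 3, 5, 8, ..."""
    out = [1, 2]
    while len(out) < k:
        out.append(out[-1] + out[-2])
    return out[:k]

def value(digits):
    """Value of a digit list, least significant digit first."""
    return sum(d * w for d, w in zip(digits, fib_weights(len(digits))))

def normalize(digits):
    """Greedy (normal/Zeckendorf) representation of the same integer."""
    n = value(digits)
    weights = fib_weights(len(digits) + 2)
    out = [0] * len(weights)
    for i in reversed(range(len(weights))):
        if weights[i] <= n:
            out[i] = 1
            n -= weights[i]
    return out
```

For example, [1, 1, 0, 1] (value 1 + 2 + 5 = 8) normalizes to a representation of 8 with no adjacent 1s.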

2.
For the equation ẋ(t) = λx(t)(1 − (1/τ) ∫_{t−σ−τ}^{t−σ} x(u) du), with λ > 0, τ > 0, σ > 0, conditions for the stability of a nonzero stationary solution under small perturbations are determined.

3.
We consider a class of stochastic models for which the performance measure is defined as a mathematical expectation that depends on a parameter θ, say α(θ), and we are interested in constructing estimators of α in functional form (i.e., entire functions of θ), which can be computed from a single simulation experiment. We focus on the case where θ is a continuous parameter, and also consider estimation of the derivative α′(θ). One approach for doing that, when θ is a parameter of the probability law that governs the system, is based on the use of likelihood ratios and score functions. In this paper, we study a different approach, called split-and-merge, for the case where θ is a threshold parameter. This approach can be viewed as a practical way of running parallel simulations at an infinite number of values of θ, with common random numbers. We give several examples showing how different kinds of parameters, such as the arrival rate in a queue, the probability that an arriving customer is of a given type, a scale parameter of a service time distribution, and so on, can be turned into threshold parameters. We also discuss implementation issues.
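An illustrative toy instance of the idea (names and setting are ours, not the paper's): one set of random draws yields the estimator of P(X > θ) simultaneously at every threshold θ, which is the simplest form of "parallel simulations at all values of θ with common random numbers".

```python
def tail_prob_curve(sample, thetas):
    """Empirical P(X > theta) for each threshold, all from one sample."""
    n = len(sample)
    return [sum(x > t for x in sample) / n for t in thetas]
```

A single simulation run (the sample) produces the whole curve over θ, rather than one point per run.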

4.
Exact algorithms for detecting all rotational and involutional symmetries in point sets, polygons and polyhedra are described. The time complexities of the algorithms are shown to be Θ(n) for polygons and Θ(n log n) for two- and three-dimensional point sets. Θ(n log n) time is also required for general polyhedra, but for polyhedra with connected, planar surface graphs Θ(n) time can be achieved. All algorithms are optimal in time complexity, to within constant factors.
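A hedged sketch of the classic idea behind linear-time rotational symmetry detection: encode the polygon as a cyclic sequence (e.g. of (edge length, vertex angle) pairs, an encoding we assume here) and count the cyclic shifts under which the sequence maps to itself. With a linear-time string matcher (e.g. KMP) in place of the slice comparison below, this attains the Θ(n) bound cited for polygons.

```python
def rotation_order(seq):
    """Order of the cyclic symmetry group of a sequence (list or str)."""
    n = len(seq)
    doubled = seq + seq
    for k in range(1, n + 1):
        # shifting by k maps the cycle onto itself iff the shifted
        # window of the doubled sequence equals the original
        if n % k == 0 and doubled[k:k + n] == seq:
            return n // k
```

For a square encoded as an alternating length/angle pattern, the order is 4; an asymmetric triangle encoding gives 1.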

5.
A problem of estimating a functional parameter θ(x) and functionals Φ(θ) based on observation of a solution u_ε(t, x) of a stochastic partial differential equation is considered. The asymptotic problem setting, as the noise intensity ε → 0, is investigated.

6.
We consider queueing networks for which the performance measure J(θ) depends on a parameter θ, which can be a service time parameter or a buffer size, and we are interested in sensitivity analysis of J(θ) with respect to θ. We introduce a new method, called customer-oriented finite perturbation analysis (CFPA), which predicts J(θ + Δ) for an arbitrary, finite perturbation Δ from a simulation experiment at θ. CFPA can estimate the entire performance function (by using a finite number of chosen points and fitting a least-squares approximating polynomial to the observations) within one simulation experiment. We obtain CFPA by reformulating finite perturbation analysis (FPA) for customers. The main difference between FPA and CFPA is that the former calculates the sensitivities of timing epochs of events, such as external arrivals or service time completions, while the latter yields sensitivities of departure epochs of customers. We give sufficient conditions for unbiasedness of CFPA. Numerical examples show the efficiency of the method. In particular, we address sensitivity analysis with respect to buffer sizes and thereby give a solution to the problem for which perturbation analysis was originally built.

7.
Fast Theta-Subsumption with Constraint Satisfaction Algorithms
Relational learning and Inductive Logic Programming (ILP) commonly use as covering test the θ-subsumption test defined by Plotkin. Based on a reformulation of θ-subsumption as a binary constraint satisfaction problem, this paper describes a novel θ-subsumption algorithm named Django, which combines well-known CSP procedures and θ-subsumption-specific data structures. Django is validated using the stochastic complexity framework developed in CSPs, and imported into ILP by Giordana and Saitta. Principled and extensive experiments within this framework show that Django improves on earlier θ-subsumption algorithms by several orders of magnitude, and that different procedures are better in different regions of the stochastic complexity landscape. These experiments allow for building a control layer over Django, termed Meta-Django, which determines the best procedures to use depending on the order parameters of the θ-subsumption problem instance. The performance gains and good scalability of Django and Meta-Django are finally demonstrated on a real-world ILP task (emulating the search for frequent clauses in the mutagenesis domain), though the smaller size of the problems results in smaller gain factors (ranging from 2.5 to 30).
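To make the covering test concrete, here is a bare-bones θ-subsumption check by backtracking search. Literals are (predicate, args) tuples and strings beginning with an uppercase letter are variables; these conventions are ours. Django replaces this naive search with CSP propagation and dedicated data structures, which is where the orders-of-magnitude gains come from.

```python
def subsumes(c, d):
    """Is there a substitution theta with (c theta) a subset of d?"""
    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def match(lit, target, subst):
        (pred, args), (tpred, targs) = lit, target
        if pred != tpred or len(args) != len(targs):
            return None
        s = dict(subst)
        for a, t in zip(args, targs):
            if is_var(a):
                if s.setdefault(a, t) != t:  # conflicting binding
                    return None
            elif a != t:                     # constant mismatch
                return None
        return s

    def solve(lits, subst):
        if not lits:
            return True
        for tgt in d:  # try to map the next literal onto each target
            s = match(lits[0], tgt, subst)
            if s is not None and solve(lits[1:], s):
                return True
        return False

    return solve(list(c), {})
```

E.g. p(X, Y) ∧ q(Y) subsumes p(a, b) ∧ q(b) via θ = {X→a, Y→b}, but not p(a, b) ∧ q(c).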

8.
The notion of the rational closure of a positive knowledge base K of conditional assertions α |~ β (standing for "if α then normally β") was first introduced by Lehmann (1989) and developed by Lehmann and Magidor (1992). Following those authors, we would also argue that the rational closure is, in a strong sense, the minimal-information, or simplest, rational consequence relation satisfying K. In practice, however, one might expect a knowledge base to consist not just of positive conditional assertions α |~ β, but also negative conditional assertions α |~/ β (standing for "not: if α then normally β"). Restricting ourselves to a finite language, we show that the rational closure still exists for satisfiable knowledge bases containing both positive and negative conditional assertions, and has similar properties to those exhibited in Lehmann and Magidor (1992). In particular, an algorithm in Lehmann and Magidor (1992) which constructs the rational closure can be adapted to this case and yields, in turn, completeness theorems for the conditional assertions entailed by such a mixed knowledge base.

9.
We review a complexity measure Θ and its statistical properties, then apply it to four stock returns. Theta is a ratio of two correlation integral estimates, one taken before and one after shuffling the series under investigation. For random processes Θ ≈ 1, while Θ approaches zero for data with low complexity. Sixteen artificially generated series with different dynamical characteristics, each represented by three sample sizes, were employed to investigate Θ's statistical properties. Its distribution approaches normality as the sample size is increased. When applied to stock returns, those computed at every price change proved less complex than lower-frequency one- and five-minute returns, implying that information is being lost by increasingly less frequent sampling.
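An illustrative reconstruction of the ratio statistic (the embedding dimension, radius and norm below are our choices, not the authors'): estimate the correlation integral of the series before and after shuffling, and take the ratio. Structured series lose close pairs of m-histories when shuffled, pulling the ratio below 1, while i.i.d. noise is unaffected, leaving it near 1.

```python
import random

def correlation_integral(series, m=2, r=0.5):
    """Fraction of pairs of m-histories within distance r (max norm)."""
    hist = [tuple(series[i:i + m]) for i in range(len(series) - m + 1)]
    close = total = 0
    for i in range(len(hist)):
        for j in range(i + 1, len(hist)):
            total += 1
            if max(abs(a - b) for a, b in zip(hist[i], hist[j])) < r:
                close += 1
    return close / total

def theta(series, m=2, r=0.5, seed=0):
    """Ratio of shuffled to original correlation integral estimates."""
    shuffled = list(series)
    random.Random(seed).shuffle(shuffled)
    return correlation_integral(shuffled, m, r) / correlation_integral(series, m, r)
```

For a perfectly periodic series the shuffle destroys most of the close 2-history pairs, so Θ falls well below 1.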

10.
Pushing Convertible Constraints in Frequent Itemset Mining
Recent work has highlighted the importance of the constraint-based mining paradigm in the context of frequent itemsets, associations, correlations, sequential patterns, and many other interesting patterns in large databases. Constraint pushing techniques have been developed for mining frequent patterns and associations with antimonotonic, monotonic, and succinct constraints. In this paper, we study constraints which cannot be handled with existing theory and techniques in frequent pattern mining. For example, avg(S) θ v, median(S) θ v, and sum(S) θ v (where S can contain items of arbitrary values, θ ∈ {<, ≤, >, ≥}, and v is a real number) are customarily regarded as tough constraints in that they cannot be pushed inside an algorithm such as Apriori. We develop a notion of convertible constraints and systematically analyze, classify, and characterize this class. We also develop techniques which enable them to be readily pushed deep inside the recently developed FP-growth algorithm for frequent itemset mining. Results from our detailed experiments show the effectiveness of the techniques developed.
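A small sketch of why a constraint like avg(S) ≥ v is "convertible" in the paper's sense: if items are enumerated in value-descending order, the running average can only decrease, so once it drops below v no extension in that order can satisfy the constraint and the branch can be pruned. (The enumeration itself and the function name are ours.)

```python
def prefix_avgs(values):
    """Running averages over the value-descending order of the items."""
    out, total = [], 0.0
    for i, x in enumerate(sorted(values, reverse=True), start=1):
        total += x
        out.append(total / i)
    return out
```

The non-increasing prefix averages are exactly the antimonotone structure that lets the constraint be pushed into an FP-growth-style enumeration.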

11.
Conditions are presented under which the maximum of the Kolmogorov complexity (algorithmic entropy) K(ω_1...ω_N) is attained, given the cost f(ω_i) of a message ω_1...ω_N. Various extremal relations between the message cost and the Kolmogorov complexity are also considered; in particular, the minimization problem for the function f(ω_i) − βK(ω_1...ω_N) is studied. Here, β is a parameter, called the temperature by analogy with thermodynamics. We also study domains of small variation of this function.

12.
Environmental protection activities in industry have increased rapidly in number in recent years. In addition, surveys of environmental activities have identified a change in the approaches used for environmental problem solving. A new paradigm, Clean Technology, has been developed, which gradually seems to be replacing the Clean-up Technology paradigm and the older Dilute and Disperse paradigm. The new Clean Technology paradigm brings with it not only a new way of looking at environmental protection, but also a range of rules guiding the application of technology and the design of technological systems. This paper presents a few case studies highlighting and evaluating Clean Technology activities.

13.
Layup optimization against buckling of shear panels
The object of the study was to optimize the shear buckling load of laminated composite plates. The laminates lacked coupling between bending and extension (B_ij = 0) but otherwise had an arbitrary ply angle variation through the thickness. The plates were rectangular and either simply supported or clamped on all edges. For orthotropic plates, it was seen that only one parameter is necessary for finding the optimal design for different materials and plate aspect ratios. This parameter can be interpreted as the layup angle in a (±θ) orthotropic laminate. When bending-twisting coupling is present, the buckling strength depends on the direction of the applied load. A laminate with non-zero bending-twisting coupling stiffnesses can be described with four lamination parameters. The allowable region of these parameters was investigated, and an optimization of the buckling load within this region was performed. It was seen that even this is a one-parameter problem. This parameter can be interpreted as the layup angle in an off-axis unidirectional laminate (θ).
Notation:
A_ij: in-plane stiffnesses of anisotropic plates, Tsai and Hahn (1980)
B_ij: coupling stiffnesses of anisotropic plates
D_ij: bending stiffnesses of anisotropic plates
D_ij*: normalized bending stiffnesses
a, b, h: length, width and thickness of the plate
x, y: in-plane coordinates
z: through-the-thickness coordinate
z*: normalized through-the-thickness coordinate
w(x, y): out-of-plane deformation
N_xy: shear buckling load
W_1* to W_4*: lamination parameters
U_1 to U_5: linear combinations of the on-axis moduli
θ(z): layup angle
f_k: functional of θ(z)

14.
Visibility, occlusion, and the aspect graph
This chapter studies the ways in which the topology of the image of a polyhedron changes with changing viewpoint. We catalog the ways that the topological appearance, or aspect, can change. This enables us to find maximal regions of viewpoints with the same aspect. We use these techniques to construct the viewpoint space partition (VSP), a partition of viewpoint space into maximal regions of constant aspect, and its dual, the aspect graph. Here, we present tight bounds on the maximum size of the VSP and the aspect graph and give algorithms for their construction, first in the convex case and then in the general case. In particular, we give bounds on the maximum size of Θ(n^2) and Θ(n^6) under an orthographic projection viewing model, and of Θ(n^3) and Θ(n^9) under a perspective viewing model. The algorithms make use of a new representation of the appearance of polyhedra from all viewpoints, called the aspect representation or asp. We believe that this representation is one of the significant contributions of this paper. This work was supported in part by the NSF under grants DCR-8520870 and IRI-8802436.

15.
We develop a theory of communication within branching programs that provides exponential lower bounds on the size of branching programs that are bounded alternating. Our theory is based on the algebraic concept of ω-branching programs, ω a semiring homomorphism, which generalizes ordinary branching programs, Ω-branching programs [M2] and MOD_p-branching programs [DKMW]. Due to certain exponential lower and polynomial upper bounds on the size of bounded alternating ω-branching programs, we are able to separate the corresponding complexity classes N_ba, co-N_ba, ⊕_ba, and MOD_p-ba, p prime, from each other, and from the classes corresponding to oblivious linear length-bounded branching programs investigated in the past.

16.
We describe a methodology of cognitive experiments (based on interference of probabilities for mental observables) which could verify the quantum-like structure of mental information, namely, interference of probabilities for incompatible observables. In principle, such experiments can be performed in psychology, cognitive, and social sciences. In fact, the general contextual probability theory predicts not only quantum-like trigonometric (cos θ) interference of probabilities, but also hyperbolic (cosh θ) interference of probabilities (as well as hyper-trigonometric). In principle, statistical data obtained in experiments with cognitive systems can produce hyperbolic (cosh θ) interference of probabilities. We introduce a wave function of a (e.g., human) population. In general, we should not reject the possibility that cognitive functioning is neither quantum nor classical. We discuss the structure of state spaces for cognitive systems.

17.
Our starting point is a definition of the conditional event E|H which differs from many seemingly similar ones adopted in the relevant literature since 1935, starting with de Finetti. In fact, if we do not assign the same third value u (undetermined) to all conditional events, but make it depend on E|H, it turns out that this function t(E|H) can be taken as a general conditional uncertainty measure, and we get (through a suitable – in a sense, compulsory – choice of the relevant operations among conditional events) the natural axioms for many different conditional measures (besides probability).

18.
On improving the accuracy of the Hough transform
The subject of this paper is very high precision parameter estimation using the Hough transform. We identify various problems that adversely affect the accuracy of the Hough transform and propose a new, high accuracy method that consists of smoothing the Hough array H(ρ, θ) prior to finding its peak location, and interpolating about this peak to find a final sub-bucket peak. We also investigate the effect of the quantizations Δρ and Δθ of H(ρ, θ) on the final accuracy. We consider in detail the case of finding the parameters of a straight line. Using extensive simulation and a number of experiments on calibrated targets, we compare the accuracy of the method with results from the standard Hough transform method of taking the quantized peak coordinates, with results from taking the centroid about the peak, and with results from least squares fitting. The largest set of simulations covers a range of line lengths and Gaussian zero-mean noise distributions. This noise model is ideally suited to the least squares method, and yet the results from our method compare favorably. Compared to the centroid or to standard Hough estimates, the results are significantly better: better than the standard Hough estimates by a factor of 3 to 10. In addition, the simulations show that as Δρ and Δθ are increased (i.e., made coarser), the sub-bucket interpolation maintains a high level of accuracy. Experiments using real images are also described, and in these the new method has errors smaller by a factor of 3 or more compared to the standard Hough estimates.
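A sketch of one common sub-bucket interpolation scheme: fit a parabola to the peak bucket and its two neighbours and return the fractional peak location. The paper applies this kind of refinement (after smoothing) along both axes of H(ρ, θ); the three-point formula here is a standard choice, not necessarily the authors' exact one.

```python
def subbucket_peak(h, i):
    """Fractional peak index from buckets i-1, i, i+1 of array h."""
    y0, y1, y2 = h[i - 1], h[i], h[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:          # flat top: no refinement possible
        return float(i)
    return i + 0.5 * (y0 - y2) / denom
```

Sampling an exact parabola recovers its true (sub-bucket) vertex, e.g. 2.5 for a parabola peaked between buckets 2 and 3.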

19.
When implementing computational lexicons it is important to keep in mind the texts that an NLP system must deal with. Words relate to each other in many different, often odd, ways; this information is rarely found in dictionaries, and it is quite hard to deduce a priori. In this paper we present a technique for the acquisition of statistically significant selectional restrictions from corpora and discuss the results of an experimental application with reference to two specific sublanguages (legal and commercial). We show that there are important cooccurrence preferences among words which cannot be established a priori, as they are determined by each choice of sublanguage. The method for detecting cooccurrences is based on the analysis of word associations augmented with syntactic markers and semantic tags. Word pairs are extracted by a morphosyntactic analyzer and clustered according to their semantic tags. A statistical measure is applied to the data to evaluate the significance of any relations detected. Selectional restrictions are acquired by a two-step process. First, statistically prevailing coarse-grained conceptual patterns are used by a linguist to identify the relevant selectional restrictions in the sublanguages. Second, the semiautomatically acquired coarse selectional restrictions are used as the semantic bias of a system, ARIOSTO_LEX, for the automatic acquisition of a case-based semantic lexicon.
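One standard statistical measure for scoring cooccurrences is pointwise mutual information over observed word pairs; the abstract does not name its exact measure, so treat this as an illustrative choice rather than the paper's method.

```python
import math
from collections import Counter

def pmi(pairs):
    """Pointwise mutual information of each observed (w1, w2) pair."""
    pair_counts = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    n = len(pairs)
    return {
        (a, b): math.log2(c * n / (left[a] * right[b]))
        for (a, b), c in pair_counts.items()
    }
```

Pairs that cooccur more often than their marginal frequencies predict score above zero; pairs that cooccur less often score below.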

20.
Reasoning about programs in continuation-passing style
Plotkin's λ_v-calculus for call-by-value programs is weaker than the λβη-calculus for the same programs in continuation-passing style (CPS). To identify the call-by-value axioms that correspond to βη on CPS terms, we define a new CPS transformation and an inverse mapping, both of which are interesting in their own right. Using the new CPS transformation, we determine the precise language of CPS terms closed under βη-transformations, as well as the call-by-value axioms that correspond to the so-called administrative β-reductions on CPS terms. Using the inverse mapping, we map the remaining β and η equalities on CPS terms to axioms on call-by-value terms. On the pure (constant-free) set of λ-terms, the resulting set of axioms is equivalent to Moggi's computational λ-calculus. If the call-by-value language includes the control operators abort and call-with-current-continuation, the axioms are equivalent to an extension of Felleisen et al.'s λ_v-C-calculus and to the equational subtheory of Talcott's logic IOCC. This article is a revised and extended version of the conference paper with the same title [42]. The technical report of the same title contains additional material. The authors were supported in part by NSF grant CCR 89-17022 and by Texas ATP grant 91-003604014.
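A generic illustration of continuation-passing style itself (not the paper's CPS transformation): every intermediate result is handed to an explicit continuation k instead of being returned, and direct style is recovered by passing the identity continuation.

```python
def fact_cps(n, k):
    """Factorial in CPS: the continuation k receives the result."""
    if n == 0:
        return k(1)
    # recurse, extending the continuation with the pending multiply
    return fact_cps(n - 1, lambda r: k(n * r))

def fact(n):
    """Direct style via the identity continuation."""
    return fact_cps(n, lambda x: x)
```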


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号