Similar Documents
20 similar documents retrieved.
1.
Summary Geffert has shown that each recursively enumerable language L over Σ can be expressed in the form L = {h(x)⁻¹ g(x) : x ∈ Δ⁺} ∩ Σ*, where Δ is an alphabet and (g, h) is a pair of morphisms. Our purpose is to give a simple proof for Geffert's result and then sharpen it into the form where both of the morphisms are nonerasing. In our method we modify constructions used in a representation of recursively enumerable languages in terms of equality sets and in a characterization of simple transducers in terms of morphisms. As direct consequences, we get the undecidability of the Post correspondence problem and various representations of L. For instance, L = ρ(L₀) ∩ Σ*, where L₀ is a minimal linear language and ρ is the Dyck reduction aā → ε, AĀ → ε.
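In the standard reading of this representation, the quotient h(x)⁻¹g(x) contributes a word of Σ* exactly when h(x) is a prefix of g(x), in which case it denotes the remaining suffix. Written out in display form (a restatement for readability, not additional material from the paper):

% Restatement of the representation above; the right-hand equivalence spells out
% the prefix-cancellation reading of the quotient h(x)^{-1} g(x).
\[
  L \;=\; \bigl\{\, h(x)^{-1}\, g(x) \;:\; x \in \Delta^{+} \bigr\} \,\cap\, \Sigma^{*},
  \qquad
  h(x)^{-1}\, g(x) = w \iff g(x) = h(x)\, w .
\]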

2.
An experiment was performed to test a distinct-window conferencing screen design as an electronic cue of social status differences in computer-mediated group decision-making. The screen design included one distinct window to symbolize high status and two nondistinct windows to symbolize low status. The results indicated that the distinct-window screen design did produce status effects in groups of peers making decisions on judgmental problems. Randomly assigned occupants of the distinct window had greater influence on group decisions and members' attitudes than occupants of nondistinct windows. The authors would like to thank Shyam Kamadolli and Phaderm Nangsue, the programmers who developed the software used in this experiment. We would also like to thank the editor and our three anonymous reviewers for exceedingly helpful comments on an earlier draft of this article.

3.
Given a finite set E ⊂ R^n, the problem is to find clusters (or subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation of the problem is given. The proposed algorithm operates on groups of points, called samplings (samplings may be called multiple centers or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.

4.
We present a deep X-ray mask with an integrated bent-beam electrothermal actuator for the fabrication of 3D microstructures with curved surfaces. The mask absorber is electroplated on the shuttle mass, which is supported by a pair of 20-μm-thick single crystal silicon bent-beam electrothermal actuators and oscillated in a rectilinear direction due to the thermal expansion of the bent-beams. The width of each bent-beam is 10 μm or 20 μm, the length and bending angle are 1 mm and 0.1 rad, respectively, and the shuttle mass size is 1 mm × 1 mm. For 10-μm-wide bent-beams, the shuttle mass displacement is around 15 μm at 180 mW (3.6 V) dc input power. For 20-μm-wide bent-beams, the shuttle mass displacement is around 19 μm at 336 mW (4.2 V) dc input power. Sinusoidal cross-sectional PMMA microstructures with a pitch of 40 μm and a height of 20 μm are fabricated by 0.5 Hz, 20-μm-amplitude sinusoidal shuttle mass oscillation. This research, under the contract project code MS-02-338-01, has been supported by the Intelligent Microsystem Center, which carries out one of the 21st century's Frontier R&D Projects sponsored by the Korea Ministry of Science & Technology. Experiments at PLS were supported in part by MOST and POSCO.

5.
Summary The efficient implementation and extension of various approximate methods for general queueing networks require the study of two-station cyclic queues. In this paper the maximum entropy formalism is used to analyse two-station cyclic queues with multiple general servers and a fixed number of jobs. New robust one-step recursions for the queue length distribution are derived and asymptotic connections to infinite capacity queues are established. Links with Birth-Death and global balance solutions are determined and extensions to load dependent servers with Bernoulli feedback are presented. Numerical examples provide useful information on how critically system behaviour is affected by the distributional form of service times, and simple bounds for typical performance measures such as throughput and mean queue length are defined. Moreover, the utility of the work as a building block for the approximate analysis of a general central server model is demonstrated. Some of the material included in this paper was orally presented at the International Workshop on Computer Performance Evaluation, 28–30 April 1986, Sophia-Antipolis (INRIA), France [1]. This work is jointly supported by the Science and Engineering Research Council (SERC), UK, and Metron Technology Ltd., UK, under grants GR/D/12422 and GR/AA/772, respectively.
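As background, the maximum entropy formalism referred to above can be sketched generically as follows; the constraint functions f_i and their values F_i are placeholders and do not reproduce the paper's particular constraint set (utilization, mean queue length, and so on).

% Generic maximum entropy program (illustration only): maximize the entropy of the
% queue length distribution p(n) subject to normalization and k mean-value constraints.
\[
  \max_{p}\; -\sum_{n} p(n)\,\log p(n)
  \quad \text{s.t.} \quad \sum_{n} p(n) = 1, \qquad \sum_{n} f_i(n)\,p(n) = F_i ,\quad i=1,\dots,k,
\]
% whose Lagrangian solution has the Gibbs form below, with Z a normalizing constant
% and the multipliers beta_i fixed by the constraints.
\[
  p(n) \;=\; \frac{1}{Z}\,\exp\Bigl(-\sum_{i=1}^{k} \beta_i\, f_i(n)\Bigr).
\]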

6.
Two views of AI in leisure and the workplace and two views of society are discussed. There is a conceptualisation of AI systems enhancing people in their work and leisure, and another of AI automata which tend to degrade and replace human activity. Researchers tend to resolve into Optimists, who work within a micro-sociological view and see AI systems as inevitable and beneficent, and Pessimists, who adopt a macro-sociological view and see AI in its automata role, with deleterious social consequences. These polarised perspectives must be integrated, as only enhancing AI is socially acceptable.

7.
The derivative-based approach to solving the optimal toll problem is demonstrated in this paper for a medium-scale network. It is shown that although the method works for most small problems with only a few links tolled, it fails to converge for larger-scale problems. This failure led to the development of an alternative genetic algorithm (GA) based approach for finding optimal toll levels for a given set of chargeable links. A variation on the GA-based approach is used to identify the best toll locations, making use of the location indices suggested by Verhoef (2002).
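As a rough illustration of the GA-based search described above, the following sketch evolves a population of toll vectors for a fixed set of chargeable links. The function names, the surrogate welfare objective, and all GA parameters are hypothetical placeholders; in the paper the fitness of a toll vector would be evaluated by solving the tolled assignment problem rather than by a closed-form surrogate.

import random

# Hypothetical surrogate objective: in practice the fitness of a toll vector would
# come from a tolled user-equilibrium assignment; here it is a dummy stand-in
# whose optimum is a toll of 1.5 on every link.
def welfare(tolls):
    return -sum((t - 1.5) ** 2 for t in tolls)

def genetic_toll_search(n_links, pop_size=30, generations=50,
                        toll_max=5.0, mut_rate=0.1):
    # Each individual is a vector of toll levels, one per chargeable link.
    pop = [[random.uniform(0.0, toll_max) for _ in range(n_links)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=welfare, reverse=True)          # rank by fitness
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_links) if n_links > 1 else 0
            child = a[:cut] + b[cut:]                # one-point crossover
            child = [min(toll_max, max(0.0, t + random.gauss(0.0, 0.2)))
                     if random.random() < mut_rate else t
                     for t in child]                 # clipped Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=welfare)

best_tolls = genetic_toll_search(n_links=4)
print(best_tolls)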

8.
The AI methodology of qualitative reasoning furnishes useful tools to scientists and engineers who need to deal with incomplete system knowledge during design, analysis, or diagnosis tasks. Qualitative simulators have a theoretical soundness guarantee: they cannot overlook any actual solution of the equations implied by their input. On the other hand, the basic qualitative simulation algorithms have been shown to suffer from the incompleteness problem; they may allow non-solutions of the input equation to appear in their output. The question of whether a simulator with purely qualitative input which never predicts spurious behaviors can ever be achieved by adding new filters to the existing algorithm has remained unanswered. In this paper, we show that, if such a sound and complete simulator exists, it will have to be able to handle numerical distinctions with such high precision that it must contain a component that would better be called a quantitative, rather than qualitative, reasoner. This is due to the ability of the pure qualitative format to allow the exact representation of the members of a rich set of numbers.

9.
Summary A framework is proposed for the structured specification and verification of database dynamics. In this framework, the conceptual model of a database is a many-sorted first order linear tense theory whose proper axioms specify the update and the triggering behaviour of the database. The use of conceptual modelling approaches for structuring such a theory is analysed. Semantic primitives based on the notions of event and process are adopted for modelling the dynamic aspects. Events are used to model both atomic database operations and communication actions (input/output). Nonatomic operations to be performed on the database (transactions) are modelled by processes in terms of trigger/reaction patterns of behaviour. The correctness of the specification is verified by proving that the desired requirements on the evolution of the database are theorems of the conceptual model. Besides the traditional data integrity constraints, requirements of the form "Under condition W, it is guaranteed that the database operation Z will be successfully performed" are also considered. Such liveness requirements have been ignored in the database literature, although they are essential to a complete definition of the database dynamics.

Notation

Classical Logic Symbols (Appendix 1): ∀ for all (universal quantifier) - ∃ exists at least once (existential quantifier) - ¬ not (negation) - → implies (implication) - ↔ is equivalent to (equivalence) - ∧ and (conjunction) - ∨ or (disjunction)
Tense Logic Symbols (Appendix 1): G always in the future - G0 always in the future and now - F sometime in the future - F0 sometime in the future or now - H always in the past - H0 always in the past and now - P sometime in the past - P0 sometime in the past or now - X in the next moment - Y in the previous moment - L always - M sometime
Event Specification Symbols (Sects. 3 and 4.1): (x) means immediately after the occurrence of x - (x) means immediately before the occurrence of x - (x) means x is enabled, i.e., x may occur next - { } ({w1} x {w2}) states that if w1 holds before the occurrence of x, then w2 will hold after the occurrence of x (change rule) - [ ] ([oa1, ..., oan] x) states that only the object attributes oa1, ..., oan are modifiable by x (scope rule) - {{ }} ({{w}} x) states that if x may occur next, then w holds (enabling rule)
Process Specification Symbols (Sects. 5.3 and 5.4): :: for causal rules - for behavioural rules
Transition-Pattern Composition Symbols (Sects. 5.2 and 5.3): ; sequential composition - ¦ choice composition - parallel composition - :| guarded alternative composition
Location Predicates (Sect. 5.2): (z) means immediately after the occurrence of the last event of z (after) - (z) means immediately before the occurrence of the first event of z (before) - (z) means after the beginning of z and before the end of z (during) - (z) means before the occurrence of an event of z (at)
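As an illustration only (the predicates W and done(Z) are hypothetical, not taken from the paper), a liveness requirement of the kind quoted above, "Under condition W, it is guaranteed that the database operation Z will be successfully performed", could be written with the tense operators listed here as a theorem obligation of the form:

% Illustrative formula, not from the paper: G is "always in the future",
% F is "sometime in the future"; W and done(Z) are hypothetical predicates.
\[
  G\bigl(\, W \;\rightarrow\; F\,\mathit{done}(Z) \,\bigr)
\]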

10.
The simple rational partial functions accepted by generalized sequential machines are shown to coincide with the compositions P⁻¹, where P consists of the prefix codings. The rational functions accepted by generalized sequential machines are proved to coincide with the compositions P⁻¹, where is the family of endmarkers and is the family of removals of endmarkers. (The compositions are read from left to right.) We also show that P⁻¹ is the family of the subsequential functions. This work was partially supported by the Esprit Basic Research Action Working Group No. 3166 ASMICS, the CNRS and the Academy of Finland.

11.
A neural network for recognition of handwritten musical notes, based on the well-known Neocognitron model, is described. The Neocognitron has been used for the "what" pathway (symbol recognition), while contextual knowledge has been applied for the "where" pathway (symbol placement). This way, we benefit from dividing up the processing of this complicated recognition task. Also, different degrees of intrusiveness in learning have been incorporated in the same network: more intrusive supervised learning has been implemented in the lower neuron layers and less intrusive learning in the upper one. This way, the network adapts itself to the handwriting of the user. The network consists of a 13×49 input layer and three pairs of simple and complex neuron layers. It has been trained to recognize 20 symbols of unconnected notes on a musical staff and was tested with a set of unlearned input notes. Its recognition rate for the individual unseen notes was up to 93%, averaging 80% for all categories. These preliminary results indicate that a modified Neocognitron could be a good candidate for identification of handwritten musical notes.

12.
Kumiko Ikuta, AI & Society, 1990, 4(2): 137–146
The role of craft language in the process of teaching (learning) Waza (skill) will be discussed from the perspective of human intelligence. It may be said that the ultimate goal of learning Waza in any Japanese traditional performance is not the perfect reproduction of the teaching (learning) process of Waza. In fact, a special metaphorical language (craft language) is used, which has the effect of encouraging the learner to activate his creative imagination. It is through this activity that he learns his own habitus (Kata). It is suggested that, in considering the difference in function between natural human intelligence and artificial intelligence, attention should be paid to the imaginative activity of the learner as an essential factor for mastering Kata. This article is a modified English version of Chapter 5 of my book Waza kara shiru (Learning from Skill), Tokyo University Press, 1987, pp. 93–105.

13.
Summary This paper is devoted to developing and studying a precise notion of the encoding of a logical data structure in a physical storage structure that is motivated by considerations of computational efficiency. The development builds upon the notion of an encoding of one graph in another. The cost of such an encoding is then defined so as to reflect the structural compatibility of the two graphs, the (externally specified) costs of implementing the host graph, and the (externally specified) set of intended usage patterns of the guest graph. The stability of the constructed framework is demonstrated in terms of a number of results; the faithfulness of the formalism is argued in terms of a number of examples from the literature; and the tractability of the model is hinted at by several results and by further references to the literature.

14.
When interpolating incomplete data, one can choose a parametric model, or opt for a more general approach and use a non-parametric model which allows a very large class of interpolants. A popular non-parametric model for interpolating various types of data is based on regularization, which looks for an interpolant that is both close to the data and also smooth in some sense. Formally, this interpolant is obtained by minimizing an error functional which is the weighted sum of a fidelity term and a smoothness term. The classical approach to regularization is: select optimal weights (also called hyperparameters) to assign to these two terms, and minimize the resulting error functional. However, using only the optimal weights does not guarantee that the chosen function will be optimal in some sense, such as the maximum likelihood criterion or the minimal square error criterion. For that, we have to consider all possible weights. The approach suggested here is to use the full probability distribution on the space of admissible functions, as opposed to the probability induced by using a single combination of weights. The reason is as follows: the weight λ actually determines the probability space in which we are working. For a given weight λ, the probability of a function f is proportional to exp(−λ ∫ f_uu² du) (for the case of a function of one variable). For each different λ, there is a different solution to the restoration problem; denote it by f_λ. Now, if we had known λ, it would not be necessary to use all the weights; however, all we are given are some noisy measurements of f, and we do not know the correct λ. Therefore, the mathematically correct solution is to calculate, for every λ, the probability that f was sampled from a space whose probability is determined by λ, and average the different f_λ's weighted by these probabilities. The same argument holds for the noise variance, which is also unknown. Three basic problems are addressed in this work: (i) computing the MAP estimate, that is, the function f maximizing Pr(f|D) when the data D is given; this problem is reduced to a one-dimensional optimization problem. (ii) Computing the MSE estimate, defined at each point x as ∫ f(x) Pr(f|D) df; this problem is reduced to computing a one-dimensional integral. In the general setting, the MAP estimate is not equal to the MSE estimate. (iii) Computing the pointwise uncertainty associated with the MSE solution; this problem is reduced to computing three one-dimensional integrals.
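The averaging over weights described above amounts to marginalizing out the hyperparameter; a generic sketch follows (λ is the smoothness weight, D the noisy data; the specific priors and the treatment of the noise variance used in the paper are not reproduced here).

% Generic marginalization over the hyperparameter lambda (illustration only):
% the MSE (posterior mean) estimate mixes the per-lambda posteriors according
% to Pr(lambda | D).
\[
  \Pr(f \mid D) \;=\; \int \Pr(f \mid D, \lambda)\, \Pr(\lambda \mid D)\, d\lambda ,
  \qquad
  \hat f_{\mathrm{MSE}}(x) \;=\; \int f(x)\, \Pr(f \mid D)\, \mathcal{D}f .
\]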

15.
In this paper, we investigate the numerical solution of a model equation u_xx = exp(−u/ε) (and several slightly more general problems) when ε ≪ 1, using the standard central difference scheme on nonuniform grids. In particular, we are interested in the error behaviour in two limiting cases: (i) the total mesh point number N is fixed while the regularization parameter ε → 0, and (ii) ε is fixed while N → ∞. Using a formal analysis, we show that a generalized version of a special piecewise uniform mesh [12] and an adaptive grid based on the equidistribution principle share some common features. The optimal meshes give rates of convergence bounded by |log(ε)| as ε → 0 for given N, which are shown to be sharp by numerical tests.
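For reference, the equidistribution principle invoked above is the standard one: grid points are placed so that a positive monitor function M has equal integral over every mesh cell (M itself is problem dependent and is not specified here).

% Equidistribution principle on [0, 1] (generic statement; M is a positive
% monitor function and x_0 < x_1 < ... < x_N are the grid points).
\[
  \int_{x_{i-1}}^{x_i} M(x)\,dx \;=\; \frac{1}{N}\int_{0}^{1} M(x)\,dx ,
  \qquad i = 1,\dots,N .
\]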

16.
Summary The author's inquiry [1] on learning systems is generalized in the following respects: the process of learning, instead of coming to an end when the learning goal has been reached, is supposed to last for ever, so that the above definitive learning as well as phenomena such as forgetting, re-learning, changing the goal, etc. become describable. We take over the notion of semi-uniform solvability of a set of learning problems (2.2), but now (trivial cases excluded) the whole capacity of a learning system is never s. u. solvable; finite such sets are. The notion of a solving basis is introduced and we can state necessary conditions for a set to possess such a basis (2.14), so that examples of sets without a basis can be provided. On the other hand, any s. u. solvable set has a basis. The notion of uniform solvability (3.1) reinforces that of s. u. solvability, and sufficient conditions are given for a set to be uniformly solvable (3.6). In some finite cases, s. u. solvability, existence of a basis and uniform solvability coincide (3.7–3.9). Finally, we give the construction of the weakest learning system solving a uniformly solvable problem set (3.12–3.19). A German version was submitted on 30 May 1972.

17.
This paper presents a detailed study of Eurotra Machine Translation engines, namely the mainstream Eurotra software known as the E-Framework, and two unofficial spin-offs – the C,A,T and Relaxed Compositionality translator notations – with regard to how these systems handle hard cases, and in particular their ability to handle combinations of such problems. In the C,A,T translator notation, some cases of complex transfer are "wild", meaning roughly that they interact badly when presented with other complex cases in the same sentence. The effect of this is that each combination of a wild case and another complex case needs ad hoc treatment. The E-Framework is the same as the C,A,T notation in this respect; in general, the E-Framework is equivalent to the C,A,T notation for the task of transfer. The Relaxed Compositionality translator notation is able to handle each wild case (bar one exception) with a single rule, even where it appears in the same sentence as other complex cases.

18.
An important motivation for the object-oriented paradigm is to improve the changeability of the software, thereby reducing lifetime development costs. This paper describes the results of controlled experiments assessing the changeability of a given responsibility-driven (RD) design versus an alternative control-oriented mainframe (MF) design. According to Coad and Yourdon's OO design quality principles, the RD design represents a "good" design and the MF design a "bad" design. To investigate which of the designs has better changeability, we conducted two controlled experiments: a pilot experiment and a main experiment. In both experiments, the subjects were divided into two groups in which the individuals designed, coded and tested several identical changes on one of the two design alternatives. The results clearly indicate that the "good" RD design requires significantly more change effort for the given set of changes than the alternative "bad" MF design. This difference in change effort is primarily due to the difference in effort required to understand how to solve the change tasks. Consequently, reducing class-level coupling and increasing class cohesion may actually increase the cognitive complexity of a design. With regard to correctness and learning curve, we found no significant differences between the two designs. However, we found that structural attributes change less for the RD design than for the MF design. Thus, the RD design may be less prone to structural deterioration. A challenging issue raised in this paper is therefore the tradeoff between change effort and structural stability.

19.
This paper indicates the possible utility of isotonic spaces as a background language for discussing systems. In isotonic spaces the basic duality between neighborhood and convergent first achieves a proper background, permitting applications beyond the scope of topological spaces. A generalization of continuity of mappings based on ancestral relations is presented, and this definition is applied to establish a necessary and sufficient condition that mappings preserve connectedness. Fortunately for systems theory, it is not necessary to have infinite sets or infinitary operators to apply definitions of neighborhood, convergents, continuity and connectedness. This work was supported in part by a grant from the National Science Foundation.

20.
A general method of conflict-free arbitrary permutation of large data elements that can be divided into a multitude of smaller data blocks is considered for switches structured as Cayley graphs. The method is specified for arbitrary permutations in generalized hypercubes and multidimensional grids, and its characteristics are examined.
