Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Within the last decade, increasing computing power and the scientific advancement of algorithms have allowed the analysis of various aspects of human faces, such as facial expression estimation [20], head pose estimation [17], person identification [2], and face model fitting [31]. Today, computer scientists can draw on a range of different techniques to approach this challenge [4], [29], [3], [17], [9], [21]. However, each of them still has to contend with imperfect accuracy or high execution times.

2.
Annotated logic is a formalism that has been applied to a variety of situations in knowledge representation, expert database systems, quantitative reasoning, and hybrid databases [6], [13], [19], [20], [21], [22], [23], [24], [30], [33], [35], [36]. Annotated Logic Programming (ALP) is a subset of annotated logics that can be used directly for programming annotated logic applications [22], [23]. A top-down query processing procedure containing elements of constraint solving, called ca-resolution, is developed for ALPs. It simplifies a number of previously proposed procedures and also improves on their efficiency. The key to its development is the observation that satisfaction, as introduced originally for ALPs, may be naturally generalized. A computer implementation of ca-resolution for ALPs is described which offers important theoretical and practical insights. Strategies for improving its efficiency are discussed.

This material is based upon work supported by the NSF under Grant CCR-9225037. A preliminary version of this paper appears in the proceedings of the International Conference on Logic Programming, 1994.

3.
We have developed a Generalized Timed Petri Net (GTPN) model for evaluating the performance of computer systems. Our model generalizes the TPN model proposed by Zuberek [1] and extended by Razouk and Phelps [2]. In this paper, we define the GTPN model and show how performance estimates are obtained from it. We demonstrate our automated GTPN analysis techniques on the dining philosophers example, which violates restrictions made in the earlier TPN models. Finally, we compare the GTPN to stochastic Petri net (SPN) models and show that the GTPN can model and analyze parallel systems in ways that existing SPN models cannot. The GTPN provides an efficient, easy-to-use method of obtaining accurate performance estimates for models of computer systems that include both deterministic and geometric holding times.
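As background for the abstract above, the sketch below illustrates only the plain (untimed) Petri-net firing rule that timed models such as the GTPN build on; the timing semantics, holding times, and the full dining-philosophers net are not reproduced, and all place and transition names are invented for the example.

```python
# Minimal untimed Petri net: places hold tokens; a transition is enabled when
# every input place holds enough tokens, and firing moves tokens from inputs
# to outputs. This is a didactic sketch, not the GTPN of the paper.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place name -> token count
        self.transitions = {}             # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# One philosopher competing for two forks (a fragment of the classic example):
net = PetriNet({"thinking": 1, "fork_left": 1, "fork_right": 1, "eating": 0})
net.add_transition("pick_up",
                   {"thinking": 1, "fork_left": 1, "fork_right": 1},
                   {"eating": 1})
net.add_transition("put_down",
                   {"eating": 1},
                   {"thinking": 1, "fork_left": 1, "fork_right": 1})
net.fire("pick_up")
print(net.marking["eating"])   # 1: the philosopher now holds both forks
```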

4.
The results reported in this paper are a step toward rough set-based foundations of data mining and machine learning. The approach is based on calculi of approximation spaces. In this paper, we summarize and extend our results obtained since 2003, when we began investigating foundations of the approximation of partially defined concepts (see, e.g., [2], [3], [7], [37], [20], [21], [5], [42], [39], [38], [40]). We discuss issues important for modeling granular computations aimed at inducing compound granules relevant to solving problems such as approximating complex concepts or selecting relevant actions (plans) for reaching target goals. The problems discussed in this article are crucial for building computer systems that assist researchers in scientific discovery in many areas, such as biology. We present foundations for modeling granular computations inside systems based on granules called approximation spaces. Our approach builds on the rough set approach introduced by Pawlak [24], [25]. Approximation spaces are the fundamental granules used in searching for relevant complex granules called data models, e.g., approximations of complex concepts, functions, or relations. In particular, we discuss issues related to generalizations of the approximation space introduced in [33], [34]. We present examples of rough set-based strategies for extending approximation spaces from samples of objects to a whole universe of objects. This makes it possible to induce data models such as approximations of concepts or classifications, analogous to the approaches for inducing different types of classifiers known in machine learning and data mining. Searching for relevant approximation spaces and data models is formulated as a complex optimization problem.
The proposed interactive granular computing systems should be equipped with efficient heuristics that support searching for (semi-)optimal granules.
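To make the notion of an approximation space concrete, here is a minimal sketch of Pawlak-style lower and upper approximations: given a partition of the universe induced by indiscernibility, the lower approximation of a concept X collects the blocks fully contained in X, and the upper approximation the blocks that intersect X. The universe, partition, and concept below are invented for illustration; the paper's generalized approximation spaces go well beyond this classical case.

```python
# Classical rough-set approximations over a partition of the universe.

def approximations(partition, concept):
    """Return (lower, upper) approximations of `concept` w.r.t. `partition`."""
    X = set(concept)
    lower, upper = set(), set()
    for block in partition:
        block = set(block)
        if block <= X:          # block certainly inside the concept
            lower |= block
        if block & X:           # block possibly inside the concept
            upper |= block
    return lower, upper

# Universe {1..6} split into indiscernibility classes by some attribute;
# the concept {1, 2, 3} cuts across the class {3, 4}, so it is only
# roughly definable:
partition = [{1, 2}, {3, 4}, {5, 6}]
lower, upper = approximations(partition, {1, 2, 3})
print(sorted(lower))   # [1, 2]       -- certainly in the concept
print(sorted(upper))   # [1, 2, 3, 4] -- possibly in the concept
```

The boundary region `upper - lower` (here `{3, 4}`) is exactly the part of the universe on which the concept is undecided, which is what the paper's induced data models aim to shrink.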

5.
This paper introduces reconfigurable computing (RC) and focuses on one prototype in this field, MorphoSys (M1) [1], [2], [3], [4], [5]. The paper reports the results obtained when using RC to map algorithms for digital coding, building on previous research [6], [7], [8], [9], [10]. The chosen algorithms are cyclic coding techniques, namely the CCITT CRC-16 and the CRC-16. A performance analysis of the M1 RC system is also presented to evaluate the efficiency of algorithm execution on the M1 system. For comparison, three other systems were used to map the same algorithms, showing the advantages and disadvantages of each relative to the M1 system. The algorithms were run on the 8×8 reconfigurable (RC) array of the M1 (MorphoSys) system; numerical examples were simulated to validate our results using the MorphoSys mULATE program, which simulates MorphoSys operations.
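For reference, the arithmetic being mapped is the standard cyclic redundancy check. The bit-serial software sketch below computes the CCITT CRC-16 (generator polynomial x^16 + x^12 + x^5 + 1, i.e. 0x1021, with initial value 0xFFFF); it only fixes the computation itself and says nothing about the paper's parallel mapping onto the 8×8 MorphoSys array.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bit-serial CCITT CRC-16 (polynomial 0x1021, MSB-first)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:                      # top bit set: reduce
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Standard check input for CRC parameterizations:
print(hex(crc16_ccitt(b"123456789")))   # 0x29b1
```

An RC array accelerates exactly this inner loop: the per-bit shift-and-conditional-XOR steps are unrolled across processing elements instead of iterated in software.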

6.
A mathematical interpretation is given to the notion of a data type which allows procedural data types and circularly defined data types. This interpretation seems to provide a good model for what most computer scientists would call data types, data structures, types, modes, clusters, or classes. The spirit of this paper is that of McCarthy [43] and Hoare [18]. The mathematical treatment combines Scott's ideas on the solution of domain equations [34], [35], [36] with the initiality property noticed by the ADJ group (ADJ [2], [3]). The present work adds operations to the data types proposed by Scott and proposes an alternative to the equational specifications proposed by Guttag [14], Guttag and Horning [15], and ADJ [2]. The advantages of such a mathematical interpretation are the following: throwing light on some ill-understood constructs in high-level programming languages, easing the task of writing correct programs, and making possible proofs of correctness for programs or implementations.

This research was conducted at the University of Warwick while both authors were supported by Science Research Council grant B/RG 31948 to D. Park and M. Paterson. During the final redaction of the paper, the first author was partially supported by National Science Foundation grant MCS78-07461.

EDITOR'S NOTE: This paper is one of several invited for submission to this journal to present different approaches to the subject of the semantics of programming languages.

7.
Owing to its rapid convergence, ease of computer implementation [1], and applicability to a wide class of practical problems [2, 3], separable programming is well established among the more useful nonlinear programming techniques [2, 4]. Yet at the same time, its impracticality for highly nonlinear problems, pointed out repeatedly [1, 5, 6], constitutes a severe limitation of this important approach. This emerges even more strongly when one observes the essential failure of the method on some of the very small (2 × 2) problems included in this report. In this context of high nonlinearity, we examine the performance of a convergent (to within a given ε > 0 of the optimal) alternative procedure based on Refs. [7, 8], which obviates the major difficulties by solving a series of non-heuristic, rigorously determined small separable programs, as opposed to the single large one of the standard separable programming technique given, e.g., in Refs. [1, 2, 5]. Specifically, this paper first demonstrates, in the absence of any such study in the literature, the extreme vulnerability of standard separable programming to high nonlinearity; it then states the algorithm and some of its important characteristics and shows its effectiveness on computational examples. These examples include problems requiring up to about 10,000 nonzero elements in their specifications and about 45,000 nonzero elements in the intermediate separable programs, resulting from up to 70 original nonlinear variables and 70 nonlinear constraints.
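The core device of separable programming is replacing each nonlinear single-variable term by a piecewise-linear interpolant over a fixed grid, and the vulnerability the paper studies is exactly the interpolation error that a coarse grid incurs on a strongly curved term. The sketch below illustrates this mechanism only; the grid and function are invented, and no optimization is performed.

```python
# Piecewise-linear interpolation as used to "separate" a nonlinear term.

def pwl(f, grid):
    """Return the piecewise-linear interpolant of f on a sorted grid."""
    ys = [f(x) for x in grid]
    def g(x):
        for (x0, y0), (x1, y1) in zip(zip(grid, ys), zip(grid[1:], ys[1:])):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)     # linear interpolation
        raise ValueError("x outside grid")
    return g

# Approximate f(x) = x^2 on a coarse 3-point grid and measure the error
# midway through a segment, where the linearization is worst:
grid = [0.0, 0.5, 1.0]
approx = pwl(lambda x: x * x, grid)
err = abs(approx(0.25) - 0.25 ** 2)
print(err)   # 0.0625 -- the curvature the grid cannot capture
```

Refining the grid shrinks this error quadratically for smooth terms, which is why the paper's alternative of solving a series of small, adaptively determined separable programs copes with high nonlinearity where one fixed coarse grid fails.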

8.
A computer interview involves a program asking questions of the user, who responds by providing answers directly to the computer. Using a computer interview has been shown to be an effective method of eliciting information, and particularly personal information which many people find difficult to discuss face to face. While the simulation of some of the characteristics of human–human communication seems to enhance the dialogue, it appears to be the absence of others, such as being non-judgmental, unshockable, completely consistent, and unendingly patient, that gives computer interviewing its particular effectiveness.

The work reported in this paper investigated the effect of simulating, in a computer interview, two techniques that good human interviewers use: empathy and grouping questions. Thirty-nine interviewees answered 40 questions on a computer, in combinations of human-like or computer-like question styles, presented in either a logical or a random order.

Interviewees found that the use of human-interviewer techniques in the wording of questions made the computer interviews more interesting and enjoyable than blunt, direct questioning, and they answered honestly more often in the human-like style.

This investigation has shown that a computer interview can be made more effective by simulating the human interviewer techniques of empathising with interviewees and softening questions of a sensitive nature. It seems, therefore, that it is the combination of the right non-human characteristics with the right human characteristics that can produce a successful computer interview. The question for further research is which are the right characteristics in each case, given the purpose of the interview.


9.
In recent years, several quite successful attempts have been made to solve systems of polynomial constraints using geometric design tools, exploiting the availability of subdivision-based solvers [7], [11], [12], [15]. This broad range of methods includes both binary domain subdivision and the projected polyhedron method of Sherbrooke and Patrikalakis [15]. A prime obstacle to using subdivision solvers is their scalability: when a constraint is represented as a tensor product of all its independent variables, it grows exponentially in size with the number of variables. In this work, we show that for many applications, especially geometric ones, the exponential complexity of the constraints can be reduced to polynomial by capturing the underlying structure of the problem in expression trees that represent the constraints. We demonstrate the applicability and scalability of this representation and compare its performance to that of the tensor product constraint representation through several examples.
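The size gap the abstract describes can be made concrete with a back-of-the-envelope count: a dense tensor-product representation of a degree-d polynomial in n variables stores (d+1)^n coefficients, while a structured constraint such as the sum of squares x_1^2 + ... + x_n^2 - 1 = 0 is an expression tree with O(n) nodes. The counting below is an illustration of that argument, not a model of any particular solver.

```python
# Dense tensor-product coefficient count vs. expression-tree node count
# for the sum-of-squares constraint x_1^2 + ... + x_n^2 - 1 = 0.

def tensor_size(n_vars: int, degree: int) -> int:
    """Coefficients in a dense tensor-product form: (degree+1)^n_vars."""
    return (degree + 1) ** n_vars

def tree_size_sum_of_squares(n_vars: int) -> int:
    """Nodes in the expression tree: one square per variable,
    n_vars - 1 additions, and one subtraction of the constant."""
    return n_vars + (n_vars - 1) + 1

for n in (2, 5, 10):
    print(n, tensor_size(n, 2), tree_size_sum_of_squares(n))
# n = 10: 59049 tensor coefficients vs. 20 tree nodes
```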

10.
The usefulness of an interactive computer program for eliciting children's reports about an event was examined. Fifty-nine 5- to 6-year-old and fifty-two 7- to 8-year-old children participated in an event with their regular class teacher, which involved several activities and a mildly negative secret. Four days later, the children were interviewed individually in one of three conditions: computer program alone, computer program with an adult assistant present, or a standard verbal interview. The computer program incorporated animation and audio, whereby an animated figure asked the questions and the child was required to provide a verbal response. Results revealed that the children were just as willing to recount details of the event to the computer as to the standard interviewer; there was no effect of interview condition on the number of words and event features recalled or on children's willingness to disclose the secret. However, the children favoured the computer interview format, and they were more willing to revise their answers to the computer than to the adult interviewer. The implications of these findings and possible directions for future research are discussed.

11.
This paper describes two ideas and sample simulation results for a heuristic reinforcement-learning system and its application to digital computer control of a simple nuclear plant model. The core idea of the system is the interconnection of well-known reactor control heuristic rules [8, 9] with reinforcement learning algorithms [4, 5]. The control signal is proposed as a vector depending on complex physical properties of the plant. Such an approach is far more flexible than deterministic or stochastic techniques when dealing with unknown processes and novel control situations.

12.
PATH's automated vehicle control application software is responsible for the longitudinal and lateral control of each vehicle in a platoon [5]. The software consists of a set of processes running concurrently on a PC, reading data from various sensors (e.g., radar, speedometer, accelerometer, magnetometer), writing to actuators (throttle, brake, and steering), and using radio to communicate data to other vehicles. The processes exchange data with each other using a publish/subscribe scheme. In this paper, we describe the current software and propose a model written in the synchronous language Esterel [1]. We use Taxys [2, 7], a tool for timing analysis of Esterel based on the Kronos model checker [3], and the Esterel compiler Saxo-RT [6] to verify that the application meets its deadlines. Timing analysis is done on the fly during execution of the appropriately instrumented C code generated by the compiler; the instrumentation allows the verifier to observe the execution time of the application code. The C code generated by Saxo-RT, appropriately linked to the publish/subscribe library, can be run on the vehicles.
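As a hedged illustration of the data-exchange style the abstract mentions, here is a minimal in-process publish/subscribe bus: producers publish named samples and every subscriber callback on that topic receives them. The real system runs as separate concurrent processes with a dedicated library; the topic names and values below are invented.

```python
# Minimal publish/subscribe bus (in-process sketch, not PATH's library).

class Bus:
    def __init__(self):
        self.subscribers = {}   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, value):
        # Deliver the sample to every callback registered on this topic.
        for cb in self.subscribers.get(topic, []):
            cb(value)

bus = Bus()
log = []
# Two independent consumers of the same sensor stream:
bus.subscribe("radar.range_m", lambda v: log.append(("controller", v)))
bus.subscribe("radar.range_m", lambda v: log.append(("logger", v)))
bus.publish("radar.range_m", 12.7)
print(log)   # both subscribers see the same sample
```

The appeal of this scheme for the platoon software is decoupling: the sensor-reading process does not know which control or logging processes consume its data.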

13.
We consider four problems on distance estimation and object location that share the common flavor of capturing global information via informative node labels: low-stretch routing schemes [48], distance labeling [25], searchable small worlds [31], and triangulation-based distance estimation [34]. Focusing on metrics of low doubling dimension, we approach these problems with a common technique called rings of neighbors, a sparse distributed data structure that underlies all our constructions. Apart from improving the previously known bounds for these problems, our contributions include extending Kleinberg's small-world model to doubling metrics and a short proof of the main result in Chan et al. [15]. Doubling dimension is a notion of dimensionality for general metrics that has recently become a useful algorithmic concept in the theoretical computer science literature.

This work was done while A. Slivkins was a graduate student at Cornell University, supported by the Packard Fellowship of Jon Kleinberg. A preliminary version of this paper appeared in the 24th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC), 2005.

14.
15.
The problems of developing software requirements and quality assurance techniques have basically been addressed in an environment where a single organization acts as the designer, developer, and user of the software product. Since the mid-1970s, however, there has been a great increase in the use of "packaged" software products designed and developed by one organization for use in a variety of other organizations. The great profusion of products has resulted in many being peddled for generic applications (accounting, manufacturing, etc.) that are of questionable quality and/or "fit" for a given organization's environment. This paper describes some techniques being used to certify software produced by third parties and to determine whether the "fit" is there. Current quality assurance techniques address the "correctness" of a program relative to its specifications [2], [4], [7], [8], [12]. The real issue for a purchaser of software is whether the software is "correct" for its environment.

16.
Summary: Proof methods adequate for a wide range of computer programs have been given in [1–6]. This paper develops a method suitable for programs that incorporate coroutines. The implementation of coroutines described follows closely that of SIMULA [7, 8], a language in which such features may be used to great advantage. Proof rules for establishing the correctness of coroutines are given, and the method is illustrated by the proof of a useful program for histogram compilation.
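As a hedged modern analogue of the histogram-compilation example (not the SIMULA program the paper verifies), a Python generator can play the consumer coroutine: values are sent in one at a time, and control transfers back to the caller after each send, much as SIMULA's resume/detach alternates between coroutines. The bin layout is invented for the example.

```python
# A histogram-compiling coroutine: each send(x) transfers control in with a
# new sample, increments the right bin, and yields control (and the counts)
# back to the caller.

def histogram(bin_edges):
    counts = [0] * (len(bin_edges) + 1)
    while True:
        x = yield counts
        i = sum(1 for edge in bin_edges if x >= edge)  # bin index of x
        counts[i] += 1

h = histogram([0, 10, 20])   # bins: (-inf, 0), [0, 10), [10, 20), [20, inf)
next(h)                      # prime the coroutine to the first yield
for value in [5, 15, 15, 25, -3]:
    counts = h.send(value)
print(counts)                # [1, 1, 2, 1]
```

The correctness argument for such a program is exactly the kind the paper formalizes: an invariant on `counts` (it tallies every sample sent so far) that holds at each transfer of control.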

17.
Varga, in his excellent book [4] and in a later paper [5], extended the SOR theory in various directions, taking the well-known Ostrowski–Reich theorem as a starting point. In this paper we extend the theory by considering three-part splittings of Varga's type in which one of the basic parts is negative definite instead of positive definite. We are thus able to construct SOR-type schemes that converge for all values of the overrelaxation parameter ω that do not belong to the familiar interval [0, 2]. Then, by following a similar but more complicated analysis than that in [5], we obtain the corresponding optimum schemes in the various possible cases.
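For orientation, here is a minimal sketch of the classical SOR iteration whose theory the paper extends: for a symmetric positive-definite system, the Ostrowski–Reich theorem guarantees convergence exactly for 0 < ω < 2. The 2×2 system below is invented for illustration; the paper's schemes, by contrast, are built to converge for ω outside this interval.

```python
# Plain successive over-relaxation (SOR) on a small SPD system.

def sor(A, b, w, iters=200):
    """Run `iters` SOR sweeps with relaxation parameter w, starting from 0."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            # Relaxed Gauss-Seidel update of component i:
            x[i] = (1 - w) * x[i] + w * (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0],
     [1.0, 3.0]]          # symmetric positive definite
b = [1.0, 2.0]            # exact solution: x = 1/11, y = 7/11
x = sor(A, b, w=1.25)
print([round(v, 6) for v in x])
```

With ω = 1 the sweep reduces to Gauss–Seidel; values of ω in (1, 2) over-relax and, for a well-chosen ω, accelerate convergence.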

18.
Interviewing stakeholders is a way to elicit information about requirements for a system-to-be. A difficulty in preparing such elicitation interviews is selecting the topics to discuss so as to avoid missing important information. Stakeholders may spontaneously share information on some topics but remain silent on others unless asked explicitly. We propose the Elicitation Topic Map (ETM) to help engineers prepare interviews. The ETM is a diagram of topics that may be discussed during interviews, showing how likely stakeholders are to discuss each topic spontaneously. If a topic is unlikely to be discussed spontaneously, this suggests that engineers may want to prepare questions on it before the interview. The ETM was produced through theoretical and empirical research. The theoretical part consisted of identifying topic sets based on a conceptual model of communication context, grounded in philosophy, artificial intelligence, and computer science. The empirical part involved interviews with Requirements Engineering professionals to identify the topic sets and the topics in each set, surveys of business people to evaluate how likely they would be to spontaneously share information about each topic, and evaluations of how likely students would be to share information about each topic when asked about requirements for social network websites.

19.
The upward separation technique was developed by Hartmanis, who used it to show that E = NE iff there is no sparse set in NP − P [15]. This paper shows some inherent limitations of the technique. The main result is the construction of an oracle relative to which there are extremely sparse sets in NP − P but NEE = EE; this contradicts a result claimed in [14] and [16]. Thus, although the upward separation technique is useful in relating the existence of sets of polynomial (and greater) density in NP − P to the NTIME(T(n)) = DTIME(T(n)) problem, the existence of sets of very low density in NP − P cannot be shown to have any bearing on this problem using proof techniques that relativize.

The oracle construction is also of interest since it is the first example of an oracle relative to which EE = NEE and E ≠ NE. (The techniques of [10], [17], [21], and [23] do not suffice to construct such an oracle.) The construction is novel, and the techniques may be useful in other settings.

In addition, this paper presents a number of new applications of the upward separation technique, including some new generalizations of the original result of [15].

A preliminary version of this paper was presented at the 16th International Colloquium on Automata, Languages, and Programming [3]. The author was supported in part by National Science Foundation Research Initiation Grant CCR-8810467.

20.
An infinite network for parallel computation is presented which, for every k, can be partitioned into cube-connected-cycles networks of size 2^{2k} · 2k [1]. This construction extends a result from [2], where finite such networks are constructed. The infinite network is useful for simplifying the structure and improving the efficiency of the general-purpose parallel computer described in [3].
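As background for the building block named above, here is a hedged sketch of a finite cube-connected-cycles network CCC_k: each corner w of the k-dimensional hypercube is replaced by a cycle of k vertices (i, w), where vertex i of the cycle also carries the cube edge flipping bit i of w. This is the standard finite construction only; the paper's infinite network and its partitioning are not reproduced.

```python
# Build the edge set of the cube-connected-cycles network CCC_k.

def ccc_edges(k):
    edges = set()
    for w in range(2 ** k):          # hypercube corners
        for i in range(k):           # positions on the corner's cycle
            # Cycle edge to the next position on the same corner:
            edges.add(frozenset({(i, w), ((i + 1) % k, w)}))
            # Cube edge to the corner differing in bit i:
            edges.add(frozenset({(i, w), (i, w ^ (1 << i))}))
    return edges

k = 3
edges = ccc_edges(k)
n_vertices = k * 2 ** k
print(n_vertices, len(edges))   # 24 vertices; 3-regular, so 36 edges
```

The appeal of the CCC as a building block is that every vertex has degree 3 regardless of k, so the network scales without growing the per-node wiring.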


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)