Similar Documents
1.
We propose a general architecture for action-level (mimicking) and program-level (gesture) visual imitation. Action-level imitation involves two modules. The Viewpoint Transformation (VPT) performs a "rotation" to align the demonstrator's body with that of the learner. The Visuo-Motor Map (VMM) maps this visual information to motor data. For program-level (gesture) imitation, an additional module allows the system to recognize and generate its own interpretation of observed gestures, so as to produce similar gestures/goals at a later stage. Besides its holistic treatment of the problem, our approach differs from traditional work in: i) the use of motor information for gesture recognition; ii) the use of context (e.g., object affordances) to focus the attention of the recognition system and reduce ambiguities; and iii) the use of iconic image representations for the hand, as opposed to fitting kinematic models to the video sequence. This approach is motivated by the finding of visuomotor neurons in the F5 area of the macaque brain, which suggests that gesture recognition/imitation is performed in motor terms (mirror neurons) and relies on object affordances (canonical neurons) to handle ambiguous actions. Our results show that this approach can outperform more conventional (e.g., purely visual) methods.
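A minimal sketch of the two action-level modules, assuming a known rigid transform for the VPT and a nearest-neighbour lookup for the VMM. The paper learns this map from the robot's own experience; the names, data layout, and scipy dependency here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def viewpoint_transform(points, R, t):
    """VPT sketch: rigidly map 3-D body points from the demonstrator's
    frame into the learner's frame (R and t assumed known/estimated)."""
    return points @ R.T + t

class VisuoMotorMap:
    """VMM sketch as a learned lookup table: store (visual feature ->
    motor command) pairs gathered from the robot's own experience and
    answer queries by nearest neighbour."""
    def __init__(self, visual_features, motor_commands):
        self.tree = cKDTree(visual_features)
        self.motor = np.asarray(motor_commands)

    def __call__(self, feature):
        _, idx = self.tree.query(feature)
        return self.motor[idx]
```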

2.
This paper presents grammatical evolution (GE) as an approach to selecting and combining features for detecting epileptic oscillations within clinical intracranial electroencephalogram (iEEG) recordings of patients with epilepsy. Clinical iEEG is used in preoperative evaluations of patients who may have surgery to treat epileptic seizures. The literature suggests that pathological oscillations may indicate the region(s) of the brain that cause epileptic seizures, which could be surgically removed for therapy. If this presumption is true, then the effectiveness of surgical treatment depends on how accurately the critically diseased brain tissue is pinpointed, which in turn depends on the accuracy of detecting pathological oscillations. Moreover, the accuracy of detecting pathological oscillations depends greatly on the selected feature(s), which must objectively distinguish epileptic events from average activity, a task for which visual review is too subjective and insufficient. Consequently, this work proposes an automated algorithm that incorporates GE to construct the feature(s) best suited to detecting epileptic oscillations within a patient's iEEG. We estimate the performance of GE relative to three alternative methods of selecting or combining features that distinguish an epileptic gamma (~65-95 Hz) oscillation from normal activity: forward sequential feature selection, backward sequential feature selection, and genetic programming. We demonstrate that a detector with a grammatically evolved feature exhibits sensitivity and selectivity comparable to a previous detector with a genetically programmed feature, making GE a useful alternative for designing detectors.
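For orientation, here is a minimal baseline detector of the kind such evolved features would be compared against: band-limited gamma energy thresholded against the recording's background. The sampling rate, window length, filter order, and threshold factor are assumptions for illustration, not values from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def gamma_band_energy(ieeg, fs, lo=65.0, hi=95.0):
    """Band-limited energy of an iEEG trace in the gamma band (~65-95 Hz)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    return filtfilt(b, a, ieeg) ** 2

def detect_oscillations(ieeg, fs, window=0.1, k=3.0):
    """Flag windows whose gamma energy exceeds k standard deviations
    above the recording's mean gamma energy (both k and the windowing
    are assumed, not taken from the paper)."""
    energy = gamma_band_energy(ieeg, fs)
    n = int(window * fs)
    # mean energy per non-overlapping window
    windows = energy[: len(energy) // n * n].reshape(-1, n).mean(axis=1)
    threshold = windows.mean() + k * windows.std()
    return np.where(windows > threshold)[0]   # indices of flagged windows
```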

3.
A Rule-Based Information Distribution System (RBIDS) is a system that uses distributed rules to control message passing (communications) between distributed nodes within a real-time, concurrent distributed environment. The problem addressed in this paper is that no adaptation (debugging, error-removal) algorithms exist for maintaining information-distribution rules (IDRs) so as to achieve effective and efficient message-communication responses in a distributed system of fact and rule bases. A partial solution to this problem is presented: an algorithm for overcoming dynamic errors that occur in an RBIDS environment.

4.
We consider four test sequencing problems that frequently arise in test planning and design-for-testability (DFT) processes. Specifically, we consider the following problems: (1) how to determine a test sequence that does not depend on the failure probability distribution; (2) how to determine a test sequence that minimizes expected testing cost while not exceeding a given testing time; (3) how to determine a test sequence that uses no more than a given number of tests while minimizing the average ambiguity group size; and (4) how to determine a test sequence that minimizes the storage cost of tests in the diagnostic strategy. We present various solution approaches to these problems and illustrate the usefulness of the proposed algorithms; a sketch of problem (3) follows.
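As a concrete illustration of problem (3), a greedy sketch that picks at most `budget` tests to shrink the average ambiguity-group size. The signature-matrix representation and the greedy heuristic are assumptions for illustration, not the paper's algorithms:

```python
def avg_ambiguity(faults, chosen, sig):
    """Expected ambiguity-group size under equally likely faults:
    faults with identical outcomes on the chosen tests are
    indistinguishable. sig[f][t] is 1 if fault f makes test t fail."""
    groups = {}
    for f in faults:
        groups.setdefault(tuple(sig[f][t] for t in chosen), []).append(f)
    return sum(len(g) ** 2 for g in groups.values()) / len(faults)

def select_tests(faults, tests, sig, budget):
    """Greedy heuristic for problem (3): choose at most `budget` tests
    to minimize the average ambiguity-group size."""
    chosen = []
    for _ in range(budget):
        candidates = [t for t in tests if t not in chosen]
        if not candidates:
            break
        best = min(candidates,
                   key=lambda t: avg_ambiguity(faults, chosen + [t], sig))
        if avg_ambiguity(faults, chosen + [best], sig) >= \
           avg_ambiguity(faults, chosen, sig):
            break   # no remaining test improves diagnosability
        chosen.append(best)
    return chosen
```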

5.
In this paper the concept of efficiency in collaborative writing is considered in detail and a definition of efficiency is proposed. The definition leads to a research framework that delineates five operational measures of efficiency: (a) writing-activities efficiency, (b) coordination efficiency, (c) quality of output, (d) absence of breakdowns, and (e) satisfaction with group performance. A comparative study is subsequently presented on the effects that groupware and conventional technologies have on the efficiency of collaborative writing. The hypothesis is advanced that groupware can improve the efficiency of collaborative writing over conventional technologies. The results seem to support the hypothesis and indicate that (a) the groupware system examined in this study (the MUCH system) offers efficiency benefits in terms of coordination, (b) MUCH users tend to face communication breakdowns while users of conventional technologies tend to face task-related breakdowns, (c) the documents produced with MUCH are of higher content quality, more coherent, and of higher rhetorical effectiveness than those produced with conventional technologies, and (d) the comparison shows no significant difference in terms of effects on group-performance satisfaction.

6.
The EVA (espacios virtuales de aprendizaje, or virtual learning spaces) project applies artificial intelligence tools to teleteaching, in a way that eliminates or mitigates the need for synchronous and in situ education. (A) A taxonomy of the space of knowledge (also called the 'learning space'; currently our prototype teaches M.Sc. courses in Computer Science) is formed and discretized. (B) EVA finds each student's initial knowledge state (through a computer examination) and final (desired) knowledge state, and from these a particular learning trajectory is designed for that student, as sketched below. (C) Personalized books (called polybooks, because they are formed from modules (chapters) written in a variety of media) are assembled by concatenating, along the learning trajectory, modules from a large pool, and sent to the student through the net in a store-and-forward fashion. (D) EVA searches the net for teaching material that has not been indexed in the discretized learning space, using a tool (Clasitex) inside an agent that finds the main themes or topics that an article (written in natural language) covers. (E) EVA also schedules synchronous activities for each student (lectures on TV, teleconferences, on-line question-and-answer sessions, chats). (F) EVA suggests for each student suitable 'classmates' (students with similar learning trajectories) in her town, as well as possible advisers (students or alumni with knowledge that the student is acquiring). The present status, problems, models and tools of EVA are presented.
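Step (B)'s trajectory design can be pictured as ordering the knowledge units a student still lacks so that prerequisites come first. A minimal sketch, assuming the discretized learning space is a prerequisite graph; the unit names and graph encoding are made up for illustration:

```python
from graphlib import TopologicalSorter

def learning_trajectory(prereqs, known, goal):
    """Order the knowledge units a student still needs so each unit
    appears after its prerequisites (a stand-in for EVA's trajectory
    design). `prereqs` maps unit -> set of prerequisite units."""
    needed, stack = set(), [goal]
    while stack:                         # collect missing dependencies
        u = stack.pop()
        if u in needed or u in known:
            continue
        needed.add(u)
        stack.extend(prereqs.get(u, ()))
    sub = {u: {p for p in prereqs.get(u, ()) if p in needed} for u in needed}
    return list(TopologicalSorter(sub).static_order())

# example: a student who already knows "logic" heading for "AI"
prereqs = {"AI": {"search", "logic"}, "search": {"data-structures"},
           "data-structures": set(), "logic": set()}
print(learning_trajectory(prereqs, known={"logic"}, goal="AI"))
# -> ['data-structures', 'search', 'AI']
```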

7.
Automated negotiation provides a means for resolving differences among interacting agents. For negotiation with complete information, this paper provides mathematical proofs showing that an agent's optimal strategy can be computed using its opponent's reserve price (RP) and deadline. The impetus of this work is to use the synergy of Bayesian learning (BL) and a genetic algorithm (GA) to determine an agent's optimal strategy in negotiation (N) with incomplete information. BLGAN adopts: (1) BL and a deadline-estimation process for estimating an opponent's RP and deadline, and (2) a GA for generating a proposal at each negotiation round. Learning the RP and deadline of an opponent enables the GA in BLGAN to reduce the size of its search space (SP) by adaptively focusing its search on a specific region of the space of all possible proposals. SP is dynamically defined as a region around the agent's proposal P at each negotiation round. P is generated using the agent's optimal strategy, determined from its estimates of its opponent's RP and deadline. Hence, the GA in BLGAN is more likely to generate proposals that are close to the proposal generated by the optimal strategy. By using the GA to search around a proposal generated by its current strategy, an agent in BLGAN compensates for possible errors in estimating its opponent's RP and deadline. Empirical results show that agents adopting BLGAN reached agreements successfully and achieved: (1) higher utilities and better combined negotiation outcomes (CNOs) than agents that only adopt a GA to generate their proposals, (2) higher utilities than agents that adopt BL to learn only the RP, and (3) higher utilities and better CNOs than agents that do not learn their opponents' RPs and deadlines.
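A sketch of the GA step under these ideas: candidates are drawn from a window SP around the strategy's proposal rather than from the whole proposal space. The one-dimensional price encoding, operators, and rates are illustrative assumptions, not the paper's exact design:

```python
import random

def ga_propose(p_center, radius, utility, pop=30, gens=20, rate=0.3):
    """GA step in the spirit of BLGAN: search a region SP of half-width
    `radius` around the strategy's proposal `p_center` instead of the
    whole proposal space. `utility` scores a candidate price."""
    population = [p_center + random.uniform(-radius, radius)
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=utility, reverse=True)
        parents = population[: pop // 2]            # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                     # arithmetic crossover
            if random.random() < rate:              # bounded mutation
                child += random.uniform(-radius, radius) * 0.1
            children.append(child)
        population = parents + children
    return max(population, key=utility)
```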

8.
Partial redundancy elimination (PRE) is a program transformation that identifies and eliminates expressions that are redundant on at least one (but not necessarily all) execution paths of a program, without increasing any path length. Chow, Kennedy and co-workers devised an algorithm (SSAPRE) for performing partial redundancy elimination on intermediate representations in static single assignment (SSA) form. The practicality of that algorithm is limited by the following concerns: (1) it makes assumptions about the namespace that are stronger than those of SSA form and that may not be valid if other optimizations have already been performed on the program; (2) if redundancies occur in nested expressions, the algorithm may expose but not eliminate them (requiring a second pass of the algorithm); (3) it misses cases covered by the state of the art in PRE; and (4) it is difficult to understand and implement. We present an algorithm (A-SSAPRE) structurally similar to SSAPRE that uses anticipation rather than availability; this algorithm is simpler than SSAPRE, covers more cases, eliminates nested redundancies in a single pass, and makes no assumptions about the namespace.
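To ground the notion of redundancy: a toy value-numbering pass over a straight-line SSA block that eliminates fully redundant recomputations. This is only the intra-block special case; PRE proper (and SSAPRE/A-SSAPRE) handles expressions redundant on some but not all paths. The tuple IR is an assumption for illustration:

```python
def local_redundancy_elimination(block):
    """Local value numbering on a straight-line block of instructions
    (dest, op, arg1, arg2), assuming SSA form (each dest assigned once,
    so an (op, arg1, arg2) key can never be invalidated). A recomputed
    expression is replaced by a copy of the earlier result."""
    seen, out = {}, []
    for dest, op, a, b in block:
        key = (op, a, b)
        if key in seen:
            out.append((dest, "copy", seen[key], None))  # reuse value
        else:
            seen[key] = dest
            out.append((dest, op, a, b))
    return out

block = [("t1", "add", "x", "y"),
         ("t2", "add", "x", "y"),   # fully redundant recomputation
         ("t3", "mul", "t1", "t2")]
print(local_redundancy_elimination(block))
# t2 becomes a copy of t1
```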

9.
This paper proposes a neural network that stores and retrieves sparse patterns categorically, the patterns being random realizations of a sequence of biased (0,1) Bernoulli trials. The neural network, denoted a categorizing associative memory, consists of two modules: 1) an adaptive classifier (AC) module that categorizes input data; and 2) an associative memory (AM) module that stores input patterns in each category according to a Hebbian learning rule, after the AC module has stabilized its learning of that category. We show that during training of the AC module, the weights belonging to a category converge to the probability of a "1" occurring in a pattern from that category. This fact is used to set the thresholds of the AM module optimally, without requiring any a priori knowledge about the stored patterns.
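A numerical sketch of the two pieces, assuming numpy: a running-average AC weight converging to the per-position "1" probability, and outer-product Hebbian storage in the AM. The mean-based recall threshold at the end is a crude stand-in; the paper's contribution is setting that threshold optimally from the converged weights:

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.9, 0.1, 0.8, 0.2, 0.5])    # assumed per-bit "1" probability

# AC module: with a 1/t learning rate, the category weights converge
# to the probability of a "1" at each position (the paper's result).
w = np.zeros(5)
for t in range(1, 5001):
    x = (rng.random(5) < p).astype(float)   # biased Bernoulli pattern
    w += (x - w) / t                        # stochastic averaging
print(np.round(w, 2))                       # approximately p

# AM module: Hebbian (outer-product) storage of patterns in a category.
patterns = (rng.random((3, 5)) < p).astype(float)
W = sum(np.outer(x, x) for x in patterns)
np.fill_diagonal(W, 0)                      # no self-connections
# Recall by thresholding the synaptic field; the paper sets this
# threshold from the converged AC weights, not from the field mean.
field = W @ patterns[0]
recalled = (field > field.mean()).astype(float)
```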

10.
We present mathematical models that determine the optimal parameters for strategically routing multidestination traffic in an end-to-end network setting. Multidestination traffic refers to a traffic type that can be routed to any one of multiple destinations. A growing number of communication services are based on multidestination routing. In this parameter-driven approach, a multidestination call is routed to one of the candidate destination nodes in accordance with predetermined decision parameters associated with each candidate node. We present three different approaches: (1) a link utilization (LU) approach, (2) a network cost (NC) approach, and (3) a combined parametric (CP) approach. The LU approach provides the solution that results in optimally balanced link utilization, whereas the NC approach provides the least expensive way to route traffic to destinations. The CP approach, on the other hand, provides multiple solutions that help trade off link utilization against cost. The LU approach has in fact been implemented by a long-distance carrier, resulting in a considerable efficiency improvement in its international direct services, as summarized in the paper.
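The LU idea can be pictured as a greedy online rule: among the candidate destinations, pick the one whose path keeps the worst link utilization lowest. A sketch with assumed data structures; the paper's models compute the decision parameters offline rather than deciding per call:

```python
def route_multidestination(call_bw, candidates, paths, capacity, load):
    """LU-style choice: send the call to the candidate destination whose
    path keeps the worst-case link utilization lowest. `paths[d]` is the
    list of links to destination d; `load`/`capacity` map link -> Mb/s."""
    def worst_utilization(d):
        return max((load[l] + call_bw) / capacity[l] for l in paths[d])

    best = min(candidates, key=worst_utilization)
    for l in paths[best]:               # commit the call's bandwidth
        load[l] += call_bw
    return best
```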

11.
We have investigated a new approach to efficiently finding a novel inhibitor against a serine protease (i.e., activated coagulation factor X, FXa) by using de novo design programs and the X-ray crystal structure of the target enzyme. FXa is a coagulant enzyme that generates thrombin (a serine protease) and participates in both the intrinsic and extrinsic coagulation pathways. We adopted multiple copy simultaneous search (MCSS) and CAVEAT linker-search techniques, which disclosed a novel FXa inhibitor (T01312) consisting of two binding moieties (the benzamidinyl and adamantyl groups) and a linker unit (the carboxybenzylamine group). The inhibitory activity of T01312 against FXa was determined to be a small Ki value of 48 nM, two orders of magnitude smaller than that against thrombin. An X-ray crystal analysis of T01312 complexed with trypsin (an analogue of FXa) and docking studies of T01312 with trypsin and FXa showed that: (i) the benzamidinyl group is the predominant binding moiety in the anionic pocket (S1 site) with an aspartic acid residue; (ii) a hydrophobic pocket (S4 site) is the binding site of the adamantyl group; and (iii) the carboxylate group of the linker contributes to the selectivity for FXa over thrombin. Thus, combining knowledge of the X-ray crystal structure of the target molecule with MCSS and CAVEAT linker-search techniques proved to be an effective hit-finding method that does not require screening huge compound libraries.

12.
Synthetic aperture radar (SAR) imagery of the sea can contain ships and their ambiguities. The ambiguities are visually identifiable due to their high intensities against the low radar-backscatter background of sea environments, and they can be mistaken for ships, resulting in false alarms in ship detection. Analysing the polarimetric characteristics of ships and ambiguities, we found (a) that backscattering from a ship consists of a mixture of single-bounce, double-bounce and depolarized or diffuse scattering types, due to its complex physical structure; (b) that only a strong single- or double-bounce scatterer produces ambiguities in azimuth, which look like relatively strong double- or single-bounce scatterers, respectively; and (c) that the eigenvalues corresponding to the single- or double-bounce scattering mechanisms of the ambiguities were high, but the eigenvalue corresponding to the depolarized scattering mechanism of the ambiguities was low. With these findings, we propose a ship detection method that applies the eigenvalue to differentiate ship targets from azimuth ambiguities. One set of C-band JPL AIRSAR (Jet Propulsion Laboratory Airborne Synthetic Aperture Radar) polarimetric data from the sea was chosen to evaluate the method, which can effectively delineate ships from their azimuth ambiguities.
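Finding (c) suggests thresholding the smallest ("depolarized") eigenvalue of each pixel's polarimetric coherency matrix. A numpy sketch, assuming the coherency matrices are already formed; the array layout and the scene-dependent threshold are assumptions, not values from the paper:

```python
import numpy as np

def depolarized_eigenvalue(T):
    """Smallest eigenvalue of each pixel's 3x3 polarimetric coherency
    matrix T (shape h x w x 3 x 3, Hermitian). Per finding (c), ships
    show a high depolarized eigenvalue while azimuth ambiguities do
    not, so thresholding it separates the two."""
    eigvals = np.linalg.eigvalsh(T)     # ascending order, per pixel
    return eigvals[..., 0]

def detect_ships(T, threshold):
    """Binary ship mask; the threshold here is an assumed constant,
    where the paper derives it from the scene."""
    return depolarized_eigenvalue(T) > threshold
```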

13.
Enterprises use enterprise models to represent and analyse their processes, products, decisions, organisation, information flows, etc. Nevertheless, the enterprise knowledge that exists in enterprise models is not used beyond these purposes. The main goal of this paper is to present a framework that allows enterprises to reuse enterprise models to build software. The framework includes five dimensions: (1) a methodology that guides the use of the other dimensions in the reutilisation of enterprise models in software generation; (2) a set of metamodels to represent enterprises at the Computation Independent Model (CIM) level; (3) a modelling guide for making enterprise models using the proposed metamodels; (4) an extraction algorithm to discriminate the part of the CIM model to reuse; and (5) a set of transformation rules for reusing enterprise models to build Platform Independent Models (PIMs). In addition, a case example is presented to validate the work and to identify limitations.
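A toy sketch of dimensions (4) and (5), with made-up element fields and a single illustrative transformation rule. The paper defines full metamodels and rule sets; nothing here reflects its actual notation:

```python
def extract_cim_slice(cim_elements, scope):
    """Dimension (4) sketch: keep only the CIM elements tagged as
    relevant to the software being generated."""
    return [e for e in cim_elements if e["process"] in scope]

def cim_to_pim(elements):
    """Dimension (5) sketch: one illustrative rule -- each business
    entity in the CIM becomes a PIM class with its attributes
    carried over. Element kinds and fields are assumptions."""
    return [{"kind": "class", "name": e["name"], "attrs": e.get("attrs", [])}
            for e in elements if e["kind"] == "entity"]

cim = [{"kind": "entity", "name": "Order", "attrs": ["id", "date"],
        "process": "sales"},
       {"kind": "activity", "name": "ShipGoods", "process": "logistics"}]
print(cim_to_pim(extract_cim_slice(cim, scope={"sales"})))
# -> [{'kind': 'class', 'name': 'Order', 'attrs': ['id', 'date']}]
```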

14.
15.
Two Dogmas of Computationalism
This paper challenges two orthodox theses: (a) that computational processes must be algorithmic; and (b) that all computed functions must be Turing-computable. Section 2 advances the claim that the work in computability theory, including Turing's analysis of the effectively computable functions, does not substantiate the two theses. It is then shown (Section 3) that we can describe a system that computes a number-theoretic function which is not Turing-computable. The argument against the first thesis proceeds in two stages. It is first shown (Section 4) that whether a process is algorithmic depends on the way we describe the process. It is then argued (Section 5) that systems compute even if their processes are not described as algorithmic. The paper concludes with a suggestion for a semantic approach to computation.

16.
The effects of feedback provided by a person versus a computer on performance, motivation, and feedback seeking were studied. Employing a 2 × 3 experimental design, subjects were assigned to one of three feedback conditions: (a) no feedback, (b) feedback only upon request, and (c) automatic feedback, with feedback provided either by a person or by a computer. The results indicate that (a) subjects are more likely to seek feedback from a computer than from another person; (b) feedback from a person causes a decline in performance relative to a condition where a person is present but does not deliver feedback; (c) both human- and computer-mediated feedback reduce motivation in comparison to a control group that receives no feedback; and (d) personality (in this case, self-esteem and public and private self-consciousness) interacts with the receipt of person-mediated feedback to negatively affect performance.

17.
This study presents a novel weight-based multiobjective artificial immune system (WBMOAIS) based on opt-aiNET, the artificial immune system algorithm for multi-modal optimization. The proposed algorithm follows the elementary structure of opt-aiNET but has the following distinct characteristics: (1) a randomly weighted sum of the multiple objectives is used as the fitness function; this fitness assignment has much lower computational complexity than one based on Pareto ranking; (2) the individuals of the population are chosen from the memory, which is a set of elite solutions, and a local search procedure is utilized to facilitate exploitation of the search space; and (3) in addition to a clonal suppression algorithm similar to that used in opt-aiNET, a new truncation algorithm with similar individuals (TASI) is presented in order to eliminate similar individuals in memory and obtain a well-distributed spread of non-dominated solutions. The proposed algorithm, WBMOAIS, is compared with the vector immune system (VIS) and the elitist non-dominated sorting genetic algorithm (NSGA-II), which are representative of the state of the art in multiobjective optimization metaheuristics. Simulation results on seven standard problems (ZDT6, SCH2, DEB, KUR, POL, FON, and VNT) show that WBMOAIS outperforms VIS and NSGA-II and can become a valid alternative to standard algorithms for solving multiobjective optimization problems.
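Two of the distinguishing pieces are easy to sketch: the randomly weighted fitness from (1) and a TASI-like truncation from (3). The real-valued tuple representation, Euclidean distance, and thresholds are illustrative assumptions:

```python
import random

def weighted_fitness(objectives):
    """Randomly weighted sum of objectives: the cheap alternative to
    Pareto ranking described in (1); weights are redrawn per evaluation
    so that, over time, different trade-offs are explored."""
    w = [random.random() for _ in objectives]
    s = sum(w)
    return sum(wi / s * oi for wi, oi in zip(w, objectives))

def truncate_similar(memory, min_dist):
    """TASI-style truncation sketch: drop memory members that lie
    within `min_dist` (Euclidean, in decision space) of an already
    kept individual, leaving a well-spread elite set."""
    kept = []
    for ind in memory:
        if all(sum((a - b) ** 2 for a, b in zip(ind, k)) ** 0.5 >= min_dist
               for k in kept):
            kept.append(ind)
    return kept
```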

18.
Learning from past accidents is pivotal for improving safety in construction. However, hazard records are typically documented and stored as unstructured or semi-structured free text, which makes analysing such data difficult. This study presents a novel and robust framework combining deep learning and text mining technologies to analyse hazard records automatically. The framework comprises a four-step modelling approach: (1) identification of hazard topics using a Latent Dirichlet Allocation (LDA) model; (2) automatic classification of hazards using a Convolutional Neural Network (CNN); (3) production of a Word Co-occurrence Network (WCN) to determine the interrelations between hazards; and (4) quantitative keyword analysis using Word Cloud (WC) technology to provide a visual overview of hazard records. The proposed framework is validated by analysing hazard records collected from a large-scale transport infrastructure project. It is envisaged that the framework can provide managers with new insights and knowledge to better ensure positive safety outcomes in projects. The contributions of this research are threefold: (1) it demonstrates that the process of analysing hazard records can be automated by combining deep learning and text mining; (2) hazards can be visualized using a systematic and data-driven process; and (3) the automatic generation of hazard topics and their classification over specific time periods enables managers to understand their patterns of manifestation and therefore put in place strategies to prevent them from recurring.
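Step (1) is the most self-contained; a minimal sketch using scikit-learn. The paper does not specify its implementation, and the topic count, vectorizer settings, and library choice here are assumptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def hazard_topics(records, n_topics=5, top_n=8):
    """Fit LDA to free-text hazard records and return the top terms
    per topic. n_topics is an assumed value; in practice it would be
    tuned to the project's data."""
    vec = CountVectorizer(stop_words="english", max_df=0.9, min_df=2)
    X = vec.fit_transform(records)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(X)
    terms = vec.get_feature_names_out()
    # highest-weighted terms of each topic, most probable first
    return [[terms[i] for i in comp.argsort()[-top_n:][::-1]]
            for comp in lda.components_]
```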

19.
Bias Error Analysis of the Generalised Hough Transform
The generalised Hough transform (GHT) extends the Hough transform (HT) to the extraction of arbitrary shapes. In practice, the performance of the two techniques differs considerably. The literature suggests that, whilst the HT can provide accurate results under significant levels of noise and occlusion, the performance of the GHT is much more sensitive to noise. In this paper we extend previous error analyses by considering the possible causes of bias errors in the GHT. Our analysis considers both formulation and implementation issues. First, we compare the formulation of the GHT against the general formulation of the standard HT. This shows that the GHT definition in fact increases the robustness of the standard HT formulation. Then, in order to explain this paradoxical situation, we consider four possible sources of error introduced by the implementation of the GHT: (i) errors in the computation of gradient direction; (ii) errors due to false evidence attributed to the range of values defined by the point spread function; (iii) errors due to the contribution of false evidence by background points; and (iv) errors due to the non-analytic (i.e., tabular) representation used to store the properties of the model. After considering the effects of each source of error we conclude: (i) that in theory the GHT is actually more robust than the standard HT; (ii) that clutter and occlusion have a reduced effect in the GHT with respect to the HT; and (iii) that a significant source of error can be the use of a non-analytic representation. A non-analytic representation defines a discrete point spread function that is mapped into a discrete accumulator array. The discrete point spread function is scaled and rotated during evidence gathering, increasing the amount of inaccurate evidence. Experimental results demonstrate that the error analysis is congruent with practical implementation issues. Our results show that the GHT is more robust than the HT when the non-analytic representation is replaced by an analytic representation and when evidence is gathered using a suitable range of values in gradient direction. As such, we show that errors in the GHT are due to implementation issues, and that the technique actually provides a more powerful model-based shape extraction approach than has previously been acknowledged.
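The "non-analytic (tabular) representation" in (iv) is the GHT's R-table. A minimal translation-only sketch (no scale or rotation sweep), assuming edge points and gradient directions in radians have already been extracted; the bin count and data layout are assumptions:

```python
import numpy as np

def build_r_table(edge_points, gradients, reference, n_bins=36):
    """R-table for the GHT: index displacement vectors (edge point ->
    reference point) by quantized gradient direction. This discrete,
    tabular form is exactly the bias source analysed in (iv)."""
    table = {b: [] for b in range(n_bins)}
    for (x, y), g in zip(edge_points, gradients):
        b = int(g / (2 * np.pi) * n_bins) % n_bins
        table[b].append((reference[0] - x, reference[1] - y))
    return table

def ght_accumulate(edge_points, gradients, table, shape, n_bins=36):
    """Vote for candidate reference points; the accumulator peak
    locates the shape (translation only in this sketch)."""
    acc = np.zeros(shape)
    for (x, y), g in zip(edge_points, gradients):
        b = int(g / (2 * np.pi) * n_bins) % n_bins
        for dx, dy in table[b]:
            cx, cy = int(x + dx), int(y + dy)
            if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
                acc[cx, cy] += 1
    return acc
```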

20.
The application-specific integrated circuit (ASIC) design and performance of a graphics processor that uses a pipelined cache with FIFO memory to transfer a 3D pixel array and its z values to the frame buffer in one cycle are described in detail. The functional modules in the graphics processor include: (1) a video refresh converter, (2) a module that combines texture-mapped patterns onto Phong-shaded surfaces, and (3) a bidirectional parallel link between external devices and the frame-buffer modules. The digital differential analyzer (DDA) algorithms and the size of the pixel cache relative to the frame-buffer bandwidth were selected for good overall performance. A drawing speed of 8 ns/pixel (32 bits/pixel), or 1.2 million Phong-shaded polygons/s (100-pixel polygons, texture-mapped with hidden-surface removal), was achieved using 60-ns access-time single-port DRAMs and synchronous DRAMs.
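For reference, the incremental structure that makes DDA algorithms hardware-friendly: one add per axis per pixel, with no multiplies in the inner loop. A line-drawing sketch (the processor's DDAs interpolate edges, colours, and z values as well; this shows only the core recurrence):

```python
def dda_line(x0, y0, x1, y1):
    """Classic DDA rasterization: step along the major axis and advance
    the minor coordinate by a fixed fractional increment, which maps
    naturally onto incremental add-only hardware."""
    steps = max(abs(x1 - x0), abs(y1 - y0))
    if steps == 0:
        return [(round(x0), round(y0))]
    dx, dy = (x1 - x0) / steps, (y1 - y0) / steps
    x, y = float(x0), float(y0)
    pixels = []
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))
        x += dx          # one add per axis per pixel
        y += dy
    return pixels

print(dda_line(0, 0, 7, 3))
```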

