20 similar documents found; search time: 15 ms
1.
2.
Methods and algorithms for digital image processing that increase the probability of correct recognition of alphanumeric data are considered. The computational costs of the control block of an autonomous image recognition system based on the TMS320C5416 signal processor are analyzed.
Natalia S. Novozhilova. Born 1955. Graduated from Moscow Institute of Electronic Engineering in 1978. Received her candidate's degree (Mathematics and Physics) in 1984. At present, she is a senior researcher at Lukin Scientific Research Institute of Physical Problems. Scientific interests: pattern recognition and mathematical methods for processing the results of a physical experiment. Author of 22 publications.
Aleksandr G. Safonov. Born 1952. Graduated from Moscow Power Engineering Institute in 1975. Received his candidate's degree (Technical Sciences) in 1982. At present, he is head of a department at Lukin Scientific Research Institute of Physical Problems. Scientific interests: automated data processing complexes, pattern recognition, and microelectronics. Author of more than 50 publications.
3.
Several approaches to finding the connected components of a graph on a hypercube multicomputer are proposed and analyzed. The results of experiments conducted on an NCUBE hypercube are also presented; the experimental results support the analysis. This research was supported in part by the National Science Foundation under grants DCR84-20935 and MIP 86-17374.
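The abstract does not reproduce the hypercube algorithms themselves. As a minimal sequential point of reference for the connected-components problem, here is a union-find sketch; the function and variable names are illustrative, not taken from the paper:

```python
from collections import defaultdict

def connected_components(n, edges):
    """Union-find (disjoint-set) computation of the connected
    components of an undirected graph with vertices 0..n-1."""
    parent = list(range(n))

    def find(x):
        # Path halving: walk to the root, shortcutting pointers as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv  # merge the two sets

    groups = defaultdict(list)
    for v in range(n):
        groups[find(v)].append(v)
    return list(groups.values())
```

A parallel hypercube algorithm distributes this merging work across processors; the sequential version above only fixes the problem statement being solved.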
4.
In this paper, an approach for analysing the structural indistinguishability between two uncontrolled (or autonomous) analytic systems is presented. The approach involves constructing, if possible, a smooth mapping between the trajectories of two candidate models. If either of the models satisfies an observability criterion, then such a transformation always exists when the models are indistinguishable from their outputs. The approach is illustrated by examples from epidemiology and chemical reaction kinetics. One important outcome is that the susceptible, infectious, recovered (SIR) and SIR with temporary immunity (SIRS) models are shown to be indistinguishable when a proportion of the number of infectives is measured.
5.
Principles of Agent-Oriented Computing (AOC)  Total citations: 2 (self-citations: 0, others: 2)
The emergence of agents has raised hopes for further progress in AI. However, the lack of a unified understanding of agents has caused some confusion in the design of agent systems. We argue that the root cause of this situation is that designers have a skewed understanding of the principles of agent-oriented computing. This paper divides intelligence into three levels and expounds the basic principles of AOC from the knowledge level and the behaviour level.
6.
This paper introduces the theoretical foundation for the development of a pen-based system dedicated to helping to teach handwriting in primary schools. Knowledge given by a kinematic theory of rapid human movements is used. The system proposed includes a letter model generator which is used to create letter shapes with a human-like kinematics. The system generates feedback to pupils after a multilevel analysis of the handwriting. The analysis presented deals with shape conformity, shape error identification, fluency analysis and kinematic parameter evaluation. Discussion on how fluency measurement and error quantification can be useful in developing a learning metric is also presented.
7.
Xing Feng Lijun Chang Xuemin Lin Lu Qin Wenjie Zhang Long Yuan 《Distributed and Parallel Databases》2018,36(3):555-592
The paper studies three fundamental problems in graph analytics: computing connected components (CCs), biconnected components (BCCs), and 2-edge-connected components (ECCs) of a graph. With the recent advent of big data, developing efficient distributed algorithms for computing CCs, BCCs and ECCs of a big graph has received increasing interest. As with the existing research efforts, we focus on the Pregel programming model, while the techniques may be extended to other programming models including MapReduce and Spark. The state-of-the-art techniques for computing CCs and BCCs in Pregel incur \(O(m\times \#\text {supersteps})\) total costs for both data communication and computation, where m is the number of edges in a graph and #supersteps is the number of supersteps. Since the network communication speed is usually much slower than the computation speed, communication costs dominate the total running time of the existing techniques. In this paper, we propose a new paradigm based on graph decomposition that computes CCs and BCCs with O(m) total communication cost. The total computation costs of our techniques are also smaller in practice than those of the existing techniques, though theoretically almost the same. Moreover, we also study the distributed computation of ECCs. We are the first to study this problem, and we propose an approach with O(m) total communication cost. Comprehensive empirical studies demonstrate that our approaches can outperform the existing techniques by one order of magnitude in total running time.
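To make the \(O(m\times \#\text{supersteps})\) baseline concrete, here is a toy single-machine simulation of the standard hash-min label-propagation scheme commonly used for Pregel-style CC computation. This is a sketch of the baseline being improved upon, not the paper's decomposition-based algorithm, and all names are illustrative:

```python
def pregel_cc(vertices, edges):
    """Superstep-by-superstep simulation of hash-min connected
    components: each vertex repeatedly adopts the smallest vertex id
    heard from a neighbour, until no label changes."""
    neighbours = {v: set() for v in vertices}
    for u, v in edges:
        neighbours[u].add(v)
        neighbours[v].add(u)
    label = {v: v for v in vertices}   # initial label = own id
    supersteps = 0
    changed = True
    while changed:
        changed = False
        supersteps += 1
        # Messages: every vertex sends its current label to all
        # neighbours (a snapshot, so updates are synchronous).
        inbox = {v: [label[u] for u in neighbours[v]] for v in vertices}
        for v in vertices:
            if inbox[v]:
                m = min(inbox[v])
                if m < label[v]:
                    label[v] = m
                    changed = True
    return label, supersteps
```

Each superstep exchanges up to O(m) messages, so total communication is O(m × #supersteps), and the superstep count can grow with the graph diameter; reducing the total communication to O(m) is exactly the paper's stated contribution.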
8.
Automated negotiation systems with software agents representing individuals or organizations and capable of reaching agreements through negotiation are becoming increasingly important and pervasive. Examples, to mention a few, include the industrial trend toward agent-based supply chain management, the business trend toward virtual enterprises, and the pivotal role that electronic commerce is increasingly assuming in many organizations. Artificial intelligence (AI) researchers have paid a great deal of attention to automated negotiation over the past decade and a number of prominent models have been proposed in the literature. These models exhibit fairly different features, make use of a diverse range of concepts, and show performance characteristics that vary significantly depending on the negotiation context. As a consequence, assessing and relating individual research contributions is a difficult task. Currently, there is a need to build a framework to define and characterize the essential features that are necessary to conduct automated negotiation and to compare the usage of key concepts in different publications. Furthermore, the development of such a framework can be an important step to identify the core elements of autonomous negotiating agents, to provide a coherent set of concepts related to automated negotiation, to assess progress in the field, and to highlight new research directions. Accordingly, this paper introduces a generic framework for automated negotiation. It describes, in detail, the components of the framework, assesses the sophistication of the majority of work in the AI literature on these components, and discusses a number of prominent models of negotiation. This paper also highlights some of the major challenges for future automated negotiation research.
9.
10.
Computational scientific applications tend to be very data-I/O intensive, producing a large amount of data as the execution result. In this research, we propose a new storage system using next-generation non-volatile memory that is suitable for exa-scale computing systems. This storage system is called the Cloud Computing Burst System (CCBS) and is composed of a unified table management module, a data scoring module, and CCBS storage. In particular, CCBS operates as a workload-enlightened storage system using its own data scoring module. The CCBS storage architecture consists of PCM/NAND Flash arrays and a data migration engine. CCBS storage can not only provide a scale-out feature, but also improve the overall performance of the storage system. In addition, by using the new non-volatile memory array, many benefits, such as low energy consumption, density scaling, and high performance, can be achieved. We demonstrate the effectiveness of our proposed system by simulating the storage system with a scientific benchmarking tool. Our data scoring algorithm provides a 7% higher hit rate for CCBS than other methods. In addition, our proposed system improves storage speed by a factor of 1.64 compared with a conventional NAND-Flash-only model.
11.
Model-based operation support technology such as Model Predictive Control (MPC) is a proven and accepted technology for multivariable and constrained large-scale control problems in the process industry. Despite the growing number of successful implementations, the low level of operational efficiency of MPC remains a problem, specifically the lack of advanced maintenance technology. To this end, within the EU FP7 programme, a project (Autoprofit) has been executed to advance the level of autonomy and automated maintenance of MPC technology.
Taking linear model-based technology as a starting point, the project has developed a philosophy for autonomous performance monitoring, diagnosis, experiment design, model adaptation and controller re-tuning, driven by economic criteria in each step, working towards an operation support system in which effective maintenance and adaptation of MPC controllers becomes feasible.
In this development, challenging research questions have been addressed in the areas of on-line performance monitoring and diagnosis, least costly experiment design, automated adaptation of models, and auto-tuning, and new fundamental techniques have been developed. Although a fully fledged and industrially proven (semi-)automated system is not yet realised, parts of the on-line system have been implemented and validated on real-life cases provided by the industrial partners, showing that the formulated objectives are within reach.
12.
We give efficient algorithms for distributed computation on oriented, anonymous, asynchronous hypercubes with possibly faulty components (i.e. processors and links) and deterministic processors. Initially, the processors know only the size of the network and that they are interconnected in a hypercube topology. Faults may occur only before the start of the computation, and despite these faults the hypercube remains a connected network; however, the processors do not know where the faults are located. As a measure of complexity we use the total number of bits transmitted during the execution of the algorithm, and we concentrate on giving algorithms that minimize this number of bits. The main result of this paper is an algorithm for computing Boolean functions on anonymous hypercubes whose bit cost is expressed in terms of the number of faulty components (links plus processors), the number of links that are either faulty or non-faulty but adjacent to faulty processors, and the diameter of the hypercube with faulty components.
Received: October 1992 / Accepted: April 2001
13.
14.
15.
Multirate sampled-data systems: computing fast-rate models  Total citations: 2 (self-citations: 2, others: 2)
This paper studies identification of a general multirate sampled-data system. Using the lifting technique, we associate the multirate system with an equivalent linear time-invariant system, from which a fast-rate discrete-time system is extracted. Uniqueness of the fast-rate system, controllability and observability of the lifted system, and other related issues are discussed. The effectiveness is demonstrated through simulation and real-time implementation.
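As background, the lifting technique referred to above can be sketched for a discrete LTI base system; this is the textbook construction for lifting by an integer factor n, not the paper's specific multirate configuration:

```latex
% Base (fast-rate) system:
\[
  x(k+1) = A\,x(k) + B\,u(k), \qquad y(k) = C\,x(k).
\]
% Lifting over n base periods: stack n consecutive inputs
\[
  U(j) = \begin{bmatrix} u(jn) \\ u(jn+1) \\ \vdots \\ u(jn+n-1) \end{bmatrix},
  \qquad X(j) = x(jn),
\]
% which yields the lifted (slow-rate) recursion
\[
  X(j+1) = A^{n} X(j)
         + \begin{bmatrix} A^{n-1}B & A^{n-2}B & \cdots & B \end{bmatrix} U(j).
\]
```

Identification recovers the lifted matrices at the slow rate; extracting a fast-rate model then amounts to recovering A and B from \(A^{n}\) and the block row, which is where uniqueness questions of the kind discussed in the abstract arise.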
16.
Problems in fault-tolerant distributed computing have been studied in a variety of models. These models are structured around two central ideas: (1) degree of synchrony and failure model are two independent parameters that determine a particular type of system; (2) the notion of a faulty component is helpful and even necessary for the analysis of distributed computations when faults occur. In this work, we question these two basic principles of fault-tolerant distributed computing and show that it is both possible and worthwhile to renounce them in the context of benign faults: we present a computational model based only on the notion of transmission faults. In this model, computations evolve in rounds, and messages missed in a round are lost. Only information transmission is represented: for each round r and each process p, our model provides the set of processes that p "hears of" at round r (the heard-of set), namely the processes from which p receives some message at round r. The features of a specific system are thus captured as a whole, just by a predicate over the collection of heard-of sets.
We show that our model handles benign failures, be they static or dynamic, permanent or transient, in a unified framework. We demonstrate how this approach leads to shorter and simpler proofs of important results (non-solvability, lower bounds). In particular, we prove that the Consensus problem cannot be generally solved without an implicit and permanent consensus on heard-of sets. We also examine Consensus algorithms in our model. In light of this specific agreement problem, we show how our approach allows us to devise new interesting solutions.
A. Schiper's research was funded by the Swiss National Science Foundation under grant number 200021-111701 and the Hasler Foundation under grant number 2070.
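The heard-of abstraction lends itself to a compact round-by-round simulation in which an arbitrary predicate supplies each process's heard-of set. The min-flooding rule below is only an illustrative computation under that abstraction; the names and the rule itself are assumptions for illustration, not an algorithm from the paper:

```python
def run_rounds(values, rounds, ho):
    """Heard-of-style round simulation: in round r, process p receives
    the current values of exactly the processes in ho(p, r) and keeps
    the minimum value it has seen so far."""
    state = dict(values)
    for r in range(1, rounds + 1):
        new_state = {}
        for p in state:
            heard = ho(p, r)  # heard-of set of p at round r
            received = [state[q] for q in heard]
            new_state[p] = min([state[p]] + received)
        state = new_state
    return state
```

With a complete heard-of predicate (every process hears every process), all processes agree on the global minimum after a single round; restricting the predicate models message loss without ever naming a faulty component.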
17.
The problem of global observer design for autonomous systems is investigated in this paper. A constructive approach is presented for the explicit design of global observers for completely observable systems whose solution trajectories are bounded from any initial condition. Since the bound of a solution trajectory depends on the initial condition and is therefore not known a priori, the idea of universal control is employed to tune the observer gains on-line, achieving global asymptotic convergence of the proposed high-gain observer. Copyright © 2007 John Wiley & Sons, Ltd.
18.
Ali Ayad 《Computing》2010,89(1-2):45-68
This paper presents a new algorithm for computing absolutely irreducible components of n-dimensional algebraic varieties defined implicitly by parametric homogeneous polynomial equations over ${\mathbb{Q}}$ , the field of rational numbers. The algorithm computes a finite partition of the parameters space into constructible sets such that the absolutely irreducible components are given uniformly in each constructible set. Each component will be represented by two items: first by a parametric representative system, i.e., the equations that define the component and second by a parametric effective generic point which gives a parametric rational univariate representation of the elements of the component. The number of absolutely irreducible components is constant in each constructible set. The complexity bound of this algorithm is ${\delta^{O(r^4)}d^{r^4d^{O(n^3)}}}$ , being double exponential in n, where d (resp. δ) is an upper bound on the degrees of the input parametric polynomials w.r.t. the main n variables (resp. w.r.t. r parameters).
19.
This essay speculates on the impact of the next-generation technological platform — the internetwork computing architecture (InterNCA) — on systems development. The impact will be deep and pervasive, more substantial than when computing migrated from closed computer rooms to ubiquitous personal computers and flexible client-server solutions. Initially, drawing upon the notion of a technological frame, the InterNCA and how it differs from earlier technological frames is examined. Thereafter, a number of hypotheses are postulated with regard to how the architecture will affect systems development content, scope, organization and processes. Finally, some suggestions are proposed for where the information systems research community should focus its efforts (if the call for relevance is not to be taken lightly).
20.
Iterative computing is pervasive in web applications, data mining and scientific computing. Many parallel algorithms for such applications are synchronous algorithms that need strict synchronization between iterations to ensure their correctness, making performance sensitive to computational skew in each iteration. Current load-balancing approaches may alleviate the effect of computational skew, but cannot completely solve the problem. As a result, for many applications the skew in each iteration persists and accumulates, seriously affecting completion time. In this paper, we propose an effective approach that gives synchronous iterative applications themselves the ability to tolerate the negative effects of unresolved computational skew. This approach divides a large computational task in a computing node or worker into a number of sub-tasks, each of which depends only on the states of a few objects from the previous iteration. This allows sub-tasks in subsequent iterations to proceed in advance whenever the states of the related data objects are available. Consequently, the idle time caused by strict synchronization is reduced and overall performance is enhanced. Experimental results show that this approach can improve overall performance by up to \(2.45\times \) in comparison with the state-of-the-art approaches.
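The sub-task decomposition described above can be sketched as follows; the fixed-size chunking rule and all helper names are assumptions for illustration, not the paper's actual partitioning strategy:

```python
def split_task(objects, deps, chunk_size):
    """Partition a worker's objects into sub-tasks of at most
    chunk_size objects. Each sub-task records the previous-iteration
    objects it depends on, so the scheduler can start it as soon as
    those (and only those) states become available, instead of
    waiting for the whole previous iteration to finish."""
    subtasks = []
    for i in range(0, len(objects), chunk_size):
        chunk = objects[i:i + chunk_size]
        needed = set()
        for obj in chunk:
            needed |= set(deps.get(obj, ()))
        subtasks.append({"objects": chunk, "depends_on": needed})
    return subtasks
```

Because each sub-task exposes a small dependency set rather than an implicit dependency on the entire previous iteration, a straggler delays only the sub-tasks that actually need its outputs, which is the mechanism the abstract credits for hiding unresolved skew.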