Similar documents
20 similar documents found (search time: 125 ms)
1.
In robotics, the configuration space (c-space) is widely used for non-circular robots in tasks such as path planning, collision checking, and motion planning. In many real-time applications a robot must respond quickly to the user's commands, so a constant bound on planning time per action is strictly imposed. However, existing search algorithms used in c-space incur first-move lags that vary with the size of the underlying problem. Furthermore, applying real-time search algorithms to c-space maps often causes robots to become trapped in local minima. To solve these problems, we extend the learning real-time search (LRTS) algorithm to search over a set of c-space generalized Voronoi diagrams (c-space GVDs), helping the robot to plan a path incrementally, avoid local minima efficiently, and move quickly. In our work, an incremental algorithm is first proposed to build and represent the c-space maps as Boolean vectors. The method of detecting grid-based GVDs from the c-space maps is then discussed. Based on the c-space GVDs, details of LRTS and its implementation considerations are studied. Experiments and analysis show that using LRTS to search over the c-space GVDs can (1) achieve small, constant first-move lags that are independent of the problem size; (2) gain maximal clearance from obstacles, so that collision checks are greatly reduced; and (3) avoid local minima, preventing visually unrealistic scraping of the robot against obstacles.
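A brute-force sketch of the grid-based GVD detection step (illustrative only, not the paper's incremental algorithm): label each obstacle, propagate labels over the free cells with a multi-source BFS, and mark the cells where regions owned by different obstacles meet.

```python
from collections import deque

def grid_gvd(grid, obstacles):
    """Approximate a grid-based generalized Voronoi diagram (GVD).

    grid: (rows, cols); obstacles: dict mapping a label to a set of (r, c)
    cells.  A multi-source BFS assigns each free cell its nearest obstacle
    label; a cell lies on the (approximate) GVD when a 4-neighbour is
    claimed by a different obstacle, i.e. on the equidistant ridge.
    """
    rows, cols = grid
    owner, q = {}, deque()
    for label, cells in obstacles.items():
        for cell in cells:
            owner[cell] = label
            q.append(cell)
    while q:                      # wavefront expansion from all obstacles
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in owner:
                owner[(nr, nc)] = owner[(r, c)]
                q.append((nr, nc))
    gvd = set()                   # cells on the boundary between regions
    for (r, c), lab in owner.items():
        for nb in ((r + 1, c), (r, c + 1)):
            if nb in owner and owner[nb] != lab:
                gvd.add((r, c))
                gvd.add(nb)
    return gvd
```

For two wall obstacles on opposite sides of a small grid, the detected ridge runs down the middle, maximising clearance from both walls.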

2.
3.
Bayesian Treed Models
When simple parametric models such as linear regression fail to adequately approximate a relationship across an entire set of data, an alternative may be to consider a partition of the data and then use a separate simple model within each subset of the partition. Such an alternative is provided by a treed model, which uses a binary tree to identify such a partition. However, treed models go further than conventional trees (e.g. CART, C4.5) by fitting models, rather than a simple mean or proportion, within each subset. In this paper, we propose a Bayesian approach for finding and fitting parametric treed models, focusing in particular on Bayesian treed regression. The potential of this approach is illustrated by a cross-validation comparison of predictive performance with neural nets, MARS, and conventional trees on simulated and real data sets.
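The idea of fitting a separate simple model within each subset of a partition can be illustrated with a greedy single-split search (a plain least-squares sketch; the paper's Bayesian search over tree structures is not reproduced here):

```python
def fit_line(xs, ys):
    # ordinary least squares for y = a + b*x, returning (a, b, SSE)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def treed_fit(xs, ys):
    """One split point, one linear model per leaf, chosen by total SSE."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    for k in range(2, len(xs) - 1):      # at least 2 points per leaf
        left, right = order[:k], order[k:]
        aL, bL, sL = fit_line([xs[i] for i in left], [ys[i] for i in left])
        aR, bR, sR = fit_line([xs[i] for i in right], [ys[i] for i in right])
        split = (xs[order[k - 1]] + xs[order[k]]) / 2
        if best is None or sL + sR < best[0]:
            best = (sL + sR, split, (aL, bL), (aR, bR))
    return best[1:]                      # (split, left model, right model)
```

On piecewise-linear data the split lands at the kink and each leaf recovers its own slope, which a single global line cannot do.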

4.
5.
The main goal of this paper is to provide a simple parameter identification process for most hyperelastic constitutive laws in the biomechanics of soft tissues. The advantage of our approach lies in its speed and effectiveness, achieved by analytically reducing the number of parameters to identify during the identification procedure. With the use of genetic algorithms, the search for an adequate initial guess point is avoided, and the solution space of the objective function is reduced to only those parameters that cannot be calculated analytically. As an example, we focus on models that predict arterial wall behaviour, such as laws based on Fung's type energy function (Holzapfel, 2006) [14] and the model of Holzapfel et al. (2000) [15]. Our approach is applied to uniaxial extension tests and the results are compared with available data in the literature.

6.
In this paper the performability analysis of fault-tolerant computer systems using a hierarchical decomposition technique is presented. A special class of queueing network (QN) models, the so-called BCMP networks [4], and generalized stochastic Petri nets (GSPN) [1], which are often used separately to model performance and reliability respectively, have been combined in order to preserve the best modelling features of both.

A conceptual model is decomposed into GSPN and BCMP submodels, which are solved in isolation. The remaining GSPN portion of the model is then aggregated with flow equivalents of the BCMP submodels in order to compute performability measures. The BCMP submodels are replaced by simple GSPN constructs that preserve the first and second moments of the throughput. A simple example of a data communication system in which failed transmissions are corrected is presented.


7.
The results reported in this paper are a step toward rough set-based foundations of data mining and machine learning. The approach is based on calculi of approximation spaces. In this paper, we summarize and extend the results we have obtained since 2003, when we started investigating foundations of the approximation of partially defined concepts (see, e.g., [2], [3], [7], [37], [20], [21], [5], [42], [39], [38], [40]). We discuss some issues important for modeling granular computations aimed at inducing compound granules relevant to solving problems such as the approximation of complex concepts or the selection of relevant actions (plans) for reaching target goals. The problems discussed in this article are crucial for building computer systems that assist researchers in scientific discoveries in many areas, such as biology. In this paper, we present foundations for modeling granular computations within a system based on granules called approximation spaces. Our approach builds on the rough set approach introduced by Pawlak [24], [25]. Approximation spaces are fundamental granules used in searching for relevant complex granules called data models, e.g., approximations of complex concepts, functions, or relations. In particular, we discuss some issues related to generalizations of the approximation space introduced in [33], [34]. We present examples of rough set-based strategies for extending approximation spaces from samples of objects to a whole universe of objects. This makes it possible to present foundations for inducing data models such as approximations of concepts or classifications, analogous to the approaches for inducing different types of classifiers known in machine learning and data mining. Searching for relevant approximation spaces and data models is formulated as a complex optimization problem.
The proposed interactive granular computing systems should be equipped with efficient heuristics that support searching for (semi-)optimal granules.
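The lower and upper approximations of Pawlak's rough set approach, on which the approximation spaces above are built, can be computed in a few lines (a textbook sketch, not the paper's generalized approximation spaces):

```python
def approximations(partition, concept):
    """Pawlak's rough set approximations of `concept` w.r.t. an equivalence
    partition of the universe: the lower approximation is the union of
    blocks contained in the concept (certain members); the upper
    approximation is the union of blocks intersecting it (possible members).
    """
    lower, upper = set(), set()
    for block in partition:
        b = set(block)
        if b <= concept:     # block entirely inside the concept
            lower |= b
        if b & concept:      # block overlaps the concept
            upper |= b
    return lower, upper
```

The concept is "rough" exactly when the two approximations differ, i.e. when some block straddles the concept boundary.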

8.
Theoretical studies have shown that fuzzy models are capable of approximating any continuous function on a compact domain to any degree of accuracy. However, constructing a good fuzzy model requires finding a good tradeoff between fitting the training data and keeping the model simple. A simpler model is not only more easily understood, but also less likely to overfit the training data. Even though heuristic approaches to exploring such a tradeoff in fuzzy modeling have been developed, few principled approaches exist in the literature, due to the lack of a well-defined optimality criterion. In this paper, we propose several information-theoretic optimality criteria for fuzzy model construction by extending three statistical information criteria: (1) the Akaike information criterion [AIC] (1974); (2) the Bhansali-Downham information criterion [BDIC] (1977); and (3) the information criterion of Schwarz (1978) and Rissanen (1978) [SRIC]. We then describe a principled approach to exploring the fitness-complexity tradeoff using these optimality criteria together with a fuzzy model reduction technique based on the singular value decomposition (SVD). The role of these optimality criteria in fuzzy modeling is discussed, and their practical applicability is illustrated using a nonlinear system modeling example.
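The fitness-complexity tradeoff via an information criterion can be sketched with the Gaussian-error form of the AIC; the fuzzy-model-specific criteria and the SVD-based reduction of the paper are not modelled here:

```python
import math

def aic(sse, n, k):
    # Gaussian-error form of Akaike's criterion: n*ln(SSE/n) + 2k.
    # The 2k term penalises model complexity (k = number of parameters).
    return n * math.log(sse / n) + 2 * k

def select_model(candidates, n):
    # candidates: list of (name, sse, n_params); the lowest AIC wins,
    # trading goodness of fit against complexity.
    return min(candidates, key=lambda c: aic(c[1], n, c[2]))[0]
```

A large model that barely improves the fit loses to a small one: the tiny SSE gain cannot pay for the extra parameters.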

9.
In [13, 14] we announced a singularity-theory-based programme of investigations of kinematic singularities in robot manipulators. The main achievement of the programme consists in providing local candidate models of kinematic singularities. However, due to the specific form of manipulator kinematics, fitting the candidate models to a prescribed robot kinematics remains a fairly open problem. The problem is easily solvable only around non-singular configurations of manipulators, where locally the kinematics can be modelled by linear injections or projections. In this paper we are concerned with planar manipulator kinematics, and prove that, under a mild geometric condition, such kinematics can be transformed around singular configurations into simple quadratic models of Morse type. The models provide a complete local classification of generic planar kinematics of robot manipulators.

10.
Jane, Nigel. Performance Evaluation, 1999, 35(3-4): 171-192
The advantages of the compositional structure within the Markovian process algebra PEPA for model construction and simplification have already been demonstrated. In this paper we show that for some PEPA models this structure may also be used to advantage during the solution of the model. Several papers offering product form solutions of stochastic Petri nets have been published during the last 10 years. In [R. Boucherie, A characterisation of independence for competing Markov chains with applications to stochastic Petri nets, IEEE Trans. Software Engrg. 20 (7) (1994) 536-544], Boucherie showed that these solutions were a special case of a simple exclusion mechanism for the product process of a collection of Markov chains. The results presented in this paper take advantage of his observation. In particular, we show that PEPA models that generate such processes may be readily identified, and show how the product form solution may be obtained. Although developed here in the context of PEPA, the results presented can be easily generalised to any of the other Markovian process algebra languages.

11.
Extraction of Mathematical-Statistical Features for Image Recognition of Grain Pests
This paper describes the main techniques and methods for automatically extracting mathematical-statistical features, such as first-order grey-level statistics and geometric shape measures, from the first-order grey-level histogram and the target region of grain pest images using computer digital image processing. Experimental results show that the method provides stable feature parameter values for computer-based automatic pattern recognition (rapid classification) of grain pests and effectively improves the recognition rate.

12.
We propose a novel unsupervised learning framework to model activities and interactions in crowded and complicated scenes. Hierarchical Bayesian models are used to connect three elements in visual surveillance: low-level visual features, simple "atomic" activities, and interactions. Atomic activities are modeled as distributions over low-level visual features, and multi-agent interactions are modeled as distributions over atomic activities. These models are learnt in an unsupervised way. Given a long video sequence, moving pixels are clustered into different atomic activities and short video clips are clustered into different interactions. In this paper, we propose three hierarchical Bayesian models: the Latent Dirichlet Allocation (LDA) mixture model, the Hierarchical Dirichlet Process (HDP) mixture model, and the Dual Hierarchical Dirichlet Processes (Dual-HDP) model. They advance existing language models such as LDA [1] and HDP [2]. Our data sets are challenging video sequences from crowded traffic scenes and train station scenes with many kinds of co-occurring activities. Without tracking or human labeling effort, our framework completes many challenging visual surveillance tasks of broad interest, such as: (1) discovering typical atomic activities and interactions; (2) segmenting long video sequences into different interactions; (3) segmenting motions into different activities; (4) detecting abnormality; and (5) supporting high-level queries on activities and interactions.

13.
Closed-loop identification of systems with known time delays can be carried out effectively with simple model structures like Autoregressive with Exogenous Input (ARX) and Autoregressive Moving Average with Exogenous Input (ARMAX). However, when the system contains a large uncertain time delay, such structures may lead to inaccurate models with significant bias if the time delay estimate used in the identification is inaccurate. On the other hand, conventional orthonormal basis filter (OBF) model structures are very effective in capturing the dynamics of systems with uncertain time delays, but they are not effective for closed-loop identification. In this paper, an ARX-OBF model structure, obtained by modifying the ARX structure, is shown to be effective for closed-loop identification of systems with uncertain time delays. In addition, the paper shows that the advantage of ARX-OBF models over the simple ARX model is considerable in multi-step-ahead prediction.
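A minimal sketch of least-squares identification of a first-order ARX model with a known time delay (the paper's ARX-OBF structure, closed-loop setting, and uncertain delay are beyond this toy example):

```python
def identify_arx(y, u, delay):
    """Least-squares fit of a first-order ARX model with a known delay:
        y[t] = a * y[t-1] + b * u[t-delay]
    The 2x2 normal equations are solved directly, so no linear-algebra
    library is needed.
    """
    start = max(1, delay)
    rows = [(y[t - 1], u[t - delay], y[t]) for t in range(start, len(y))]
    s11 = sum(p * p for p, q, _ in rows)   # regressor Gram matrix entries
    s12 = sum(p * q for p, q, _ in rows)
    s22 = sum(q * q for p, q, _ in rows)
    r1 = sum(p * yt for p, q, yt in rows)  # regressor-output correlations
    r2 = sum(q * yt for p, q, yt in rows)
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b
```

On noise-free data generated by the same structure, the true coefficients are recovered exactly; a wrong delay would instead bias the estimates, which is the problem the abstract describes.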

14.
Algebraic models of programs with procedures extend algebraic models of programs that are free of procedures (simple models of programs). A specific feature of both types of models is that they are built for some formalization of software programs. Models of programs are intended for studying functional equivalence of formalized programs and constructing wide sets of equivalent transformations of programs. Two basic problems in the theory of algebraic models of programs are the equivalence problem and the problem of building complete systems of equivalent transformations. An increasing interest in models of programs with procedures is due to the abundance of results obtained for simple models of programs. The most suitable model of programs with procedures is a gateway model. A remarkable feature of these models is that every such model is induced by some simple model of programs. This paper gives a survey of the latest results obtained for gateway models of programs.

15.
Crowdsourcing is widely used for solving simple tasks (e.g. tagging images), and recently some researchers (Kittur et al., 2011 [9] and Kulkarni et al., 2012 [10]) have proposed new crowdsourcing models to handle complex tasks (e.g. article writing). In both types of crowdsourcing models (for simple and complex tasks), voting is a technique widely used for quality control [9]. For example, five workers are asked to write five outlines for an article, and another five workers are asked to vote for the best of the five outlines. However, we argue that voting is actually a technique that selects a high-quality answer from a set of answers; it does not directly enhance answer quality. In this paper, we propose a new quality control approach for crowdsourcing that can incrementally improve answer quality. The new approach is based upon two principles, evolutionary computing and slow intelligence, which help the crowdsourcing system to propagate knowledge among workers and incrementally improve answer quality. We perform two experimental case studies to show the effectiveness of the new approach. The case study results show that the new approach can incrementally improve answer quality and produce high-quality answers.
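The select-then-improve loop can be caricatured in a toy simulation; `quality` and `improve` below are hypothetical stand-ins for human voters and workers, not anything specified in the paper:

```python
import random

def vote_best(answers, n_voters, quality, rng):
    # Each voter picks the answer they judge best; noisy human judgement
    # is modelled as the true quality plus a small random perturbation.
    tally = {a: 0 for a in answers}
    for _ in range(n_voters):
        judged = max(answers, key=lambda a: quality(a) + rng.uniform(-0.1, 0.1))
        tally[judged] += 1
    return max(tally, key=tally.get)

def evolve(seed_answers, rounds, n_voters, quality, improve, rng):
    """Voting alone only *selects* an answer; propagating the winner to
    the next round of workers (the `improve` step) is what raises quality,
    which is the point the abstract argues."""
    pool = list(seed_answers)
    for _ in range(rounds):
        best = vote_best(pool, n_voters, quality, rng)
        pool = [improve(best, rng) for _ in range(len(pool))]
    return vote_best(pool, n_voters, quality, rng)
```

With workers that reliably build on the previous winner, the final answer beats every seed answer; pure voting over the seeds could only ever return the best seed.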

16.
This paper sets out to evaluate simple queueing models embodying the concept of adaptation of service to demand, and vice versa: systems which hold out the promise of improved operational characteristics compared with conventional non-adaptive systems. Over the many years during which research in queueing theory has been in progress there has been little concern with models of this kind, yet it is in the direction of environmental adaptation that one must look for operational improvements. The models studied are mainly of birth-and-death type. Some consideration is given to renewal models which, in a certain sense, are the equivalents of the birth-and-death types. The emphasis of the paper is placed on the requirements of an operational assessment and on its realization. The use of digital computers in research of this kind is underlined. For illustration, extracts from [9] are used to describe the performance of three fundamental adaptive systems.
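The effect of adapting service to demand can be sketched with the steady-state distribution of a finite birth-and-death chain with state-dependent rates (a generic illustration, not the specific models of the paper):

```python
def birth_death_dist(lam, mu, n_max):
    """Steady-state distribution of a finite birth-and-death chain.

    By detailed balance, p[n] is proportional to the product of
    lam(i-1)/mu(i) for i = 1..n, where lam and mu are the (state-dependent)
    birth and death rate functions.
    """
    p = [1.0]
    for n in range(1, n_max + 1):
        p.append(p[-1] * lam(n - 1) / mu(n))
    total = sum(p)
    return [x / total for x in p]

def mean_queue(dist):
    # expected number in system under the steady-state distribution
    return sum(n * pn for n, pn in enumerate(dist))
```

A server that speeds up as the queue grows (service adapting to demand) yields a shorter mean queue than a fixed-rate server facing the same arrival stream.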

17.
In this paper we formalise three different views of a virtual shared memory system and show that they are equivalent. The formalisation starts with five basic component processes specified in the language of CSP [Hoa85], which can be adapted as necessary by two operations called labelling and clamping, and are combined in two basic ways: either they are chained, so that the output of one component becomes the input of the next, or they are put in parallel, so that their communications are arbitrarily interleaved. Using the laws of CSP we show that these basic processes and operators satisfy a number of algebraic equivalences, which enable us to prove equivalence of the different models of the memory system by reasoning entirely at the level of processes, instead of at the lower and more complicated level of events. As a result, the proofs of equivalence of the different models are purely algebraic and very simple.

The specification is intended to provide a general framework for any architecture using an interconnection network, such as the on-chip interconnect between macrocells or the networks of processor nodes connected by bit-serial interconnect described in [Jon93]. It addresses architecture-independent design issues such as access transparency, connectivity, addressing models and serialisability. By structuring it as a hierarchy of models it is hoped that the treatment of these many issues is made as clear and tractable as possible, whilst the proofs of equivalence ensure consistency.

Funded by Esprit Project 7267 / OMI-Standards.

18.
Research on Speaker Recognition and Its Applications
Although the hidden Markov model (HMM) is, in theory, an effective method for speaker recognition, the traditional training method, the Baum-Welch algorithm, requires substantial computation and storage, and if the initial model values are set poorly (e.g. through lack of experience), the algorithm may diverge or converge to a non-global optimum. This paper proposes a new method that combines state segmentation, dynamic clustering, and fuzzy statistics with the traditional Baum-Welch algorithm for speaker recognition; it both reduces computation and storage and avoids convergence to a non-global optimum caused by poor initialization. On the basis of extensive experiments, we built a speaker recognition system and carried out an experimental study, with good results. The system uses few models, has low computational complexity, is highly extensible, and is easy to train and to use for recognition, giving it broad application prospects.

19.
This paper introduces a new steganalysis technique for JPEG images based on feature statistics. Using a selected feature set, a linear classifier is designed via multivariate regression analysis, which can accurately determine whether hidden information is present in an image. The method can detect various JPEG information-embedding systems, such as the J-Steg, OutGuess, and F5 algorithms. Compared with existing statistical methods based on image quality metrics (QM), the new method achieves higher detection accuracy (for the same embedding rate, accuracy improves by at least 10%).

20.
Eppstein [5] characterized the minor-closed graph families for which the treewidth is bounded by a function of the diameter; these include, e.g., planar graphs. This characterization has been used as the basis for several (approximation) algorithms on such graphs (e.g., [2] and [5]-[8]). Eppstein's proof is complicated. In this short paper we obtain the same characterization with a simple proof. In addition, the relation between treewidth and diameter we obtain is slightly better and explicit.

