Similar Articles
20 similar articles found (search time: 0 ms)
1.
Coverage of Known Spaces: The Boustrophedon Cellular Decomposition
Coverage path planning is the determination of a path that a robot must take in order to pass over each point in an environment. Applications include de-mining, floor scrubbing, and inspection. We developed the boustrophedon cellular decomposition, an exact cellular decomposition approach, for the purposes of coverage. Essentially, the boustrophedon decomposition is a generalization of the trapezoidal decomposition that allows for non-polygonal obstacles, and it also yields more efficient coverage paths than the trapezoidal decomposition. Each cell in the boustrophedon decomposition is covered with simple back-and-forth motions. Once each cell is covered, the entire environment is covered. Therefore, coverage is reduced to finding an exhaustive path through a graph which represents the adjacency relationships of the cells in the boustrophedon decomposition. This approach is provably complete, and experiments on a mobile robot validate it.
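To make the graph reduction concrete, the sketch below orders the cells of a hypothetical boustrophedon decomposition by a depth-first walk of their adjacency graph and sweeps each cell with back-and-forth passes. The cell geometry, adjacency graph, and lane width are illustrative assumptions, not the authors' implementation.

```python
# Sketch: coverage as an exhaustive walk of the cell-adjacency graph,
# with each cell swept by back-and-forth (boustrophedon) passes.
# The decomposition itself is assumed to be given.

def dfs_cell_order(adjacency, start):
    """Exhaustive visit order over the cell-adjacency graph (DFS)."""
    order, seen, stack = [], set(), [start]
    while stack:
        cell = stack.pop()
        if cell in seen:
            continue
        seen.add(cell)
        order.append(cell)
        stack.extend(n for n in adjacency[cell] if n not in seen)
    return order

def sweep_cell(x_min, x_max, y_min, y_max, lane_width):
    """Back-and-forth waypoints covering one rectangular cell (illustrative)."""
    waypoints, x, down = [], x_min, False
    while x <= x_max:
        ys = (y_max, y_min) if down else (y_min, y_max)
        waypoints += [(x, ys[0]), (x, ys[1])]
        down = not down
        x += lane_width
    return waypoints

# Hypothetical decomposition: cell id -> (x_min, x_max, y_min, y_max), plus adjacency.
cells = {0: (0, 4, 0, 3), 1: (4, 8, 0, 3), 2: (4, 8, 3, 6)}
adjacency = {0: [1], 1: [0, 2], 2: [1]}

path = []
for cid in dfs_cell_order(adjacency, start=0):
    path += sweep_cell(*cells[cid], lane_width=1.0)
print(len(path), "waypoints")
```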

2.
A common assumption of coverage path planning research is a static environment. Such environments require only a single visit to each area to achieve coverage. However, some real-world environments are characterised by the presence of unexpected, dynamic obstacles. They require areas to be revisited periodically to maintain an accurate coverage map, as well as reactive obstacle avoidance. This paper proposes a novel swarm-based control algorithm for multi-robot exploration and repeated coverage in envir...

3.
We propose a novel approach, called B-Theta*, to the online complete-coverage task of cleaning robots in unknown workspaces with arbitrarily-shaped obstacles. It combines boustrophedon motions, boundary-following motions, and the Theta* algorithm. Under the control of B-Theta*, the robot performs a single boustrophedon motion to cover an unvisited region. While performing the boustrophedon motion, if the robot encounters an obstacle whose boundary has not yet been covered, it switches to the boundary mode to cover portions along the obstacle boundary, and then continues the boustrophedon motion until it detects an ending point. To move to an unvisited region, the robot detects backtracking points based on its accumulated knowledge and applies an intelligent backtracking mechanism, using the proposed multi-goal Theta*, to reach the next starting point. Complete coverage is achieved when no starting point exists for a new boustrophedon motion. Computer simulations and experiments on real workspaces show that the proposed B-Theta* is efficient for the complete-coverage task of cleaning robots.
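A minimal flavour of the boustrophedon sweep itself can be sketched on an occupancy grid. The boundary-following mode and the multi-goal Theta* backtracking described above are omitted, so this is only an illustration of a single sweep under an assumed grid map, not the authors' controller.

```python
# Sketch: a single boustrophedon sweep on an occupancy grid.
# 0 = free/unvisited, 1 = obstacle, 2 = visited.
# Boundary-following and Theta*-based backtracking are omitted.

def boustrophedon_sweep(grid, start):
    rows, cols = len(grid), len(grid[0])
    r, c = start
    direction = 1                      # +1 = sweep down, -1 = sweep up
    path = []
    while True:
        # Move along the current column until blocked or at the border.
        while 0 <= r < rows and grid[r][c] == 0:
            grid[r][c] = 2             # mark cell as covered
            path.append((r, c))
            r += direction
        r -= direction                 # step back onto the last covered cell
        # Shift to the next column; stop at an "ending point".
        if c + 1 >= cols or grid[r][c + 1] != 0:
            return path
        c += 1
        direction = -direction         # reverse sweep direction

grid = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 1, 0, 0]]
print(boustrophedon_sweep(grid, (0, 0)))
```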

4.
We provide an algorithmic method for constructing projective resolutions of modules over quotients of path algebras. This algorithm is modified to construct minimal projective resolutions of linear modules over Koszul algebras.

5.
To apply fuzzy logic, two major tasks need to be performed: the derivation of production rules and the determination of membership functions. These tasks are often difficult and time consuming. This paper presents an algorithmic method for generating membership functions and fuzzy production rules; the method includes an entropy-minimization screening of analog values. Membership functions are derived by partitioning the variables into the desired number of fuzzy terms, and production rules are obtained from minimum-entropy clustering decisions. In the rule derivation process, rule weights are also calculated. This algorithmic approach alleviates many problems in the application of fuzzy logic to binary classification.
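The entropy-minimization step can be illustrated by the classic search for the cut point of a labelled analog variable that minimizes the weighted class entropy of the two resulting partitions. The data and the single-split restriction below are assumptions for illustration; the full method also derives membership functions and rule weights.

```python
# Sketch: entropy-minimization split of an analog variable with binary labels.
import math

def entropy(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def best_threshold(values, labels):
    """Return (weighted entropy, cut point) minimizing the split entropy."""
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs[:i]]
        right = [l for v, l in pairs[i:]]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if w < best[0]:
            best = (w, cut)
    return best

values = [0.1, 0.3, 0.35, 0.6, 0.7, 0.9]
labels = [0, 0, 0, 1, 1, 1]
print(best_threshold(values, labels))   # (0.0, 0.475)
```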

6.
We present two algorithms that can be used to check whether a given holomorphic foliation of the projective plane has an algebraic solution, and discuss the performance of their implementations in the computer algebra system Singular.

7.
This paper considers the development of a general model algorithmic control (MAC) scheme. It is shown that the proposed structure contains all the features and principles of MAC. The scheme allows free choice of the prediction and reference trajectories. The regulation behaviour is studied, and non-minimum-phase systems are also treated as examples. The tracking and regulation properties of the proposed MAC depend on the choice of regulator and controller parameters. Results of simulation studies indicate the effectiveness of the developed algorithmic control.

8.
Designing embedded systems efficiently has always been of significant interest, and the challenge has grown tremendously for contemporary high-end applications with their increasing complexity and the need to satisfy multiple conflicting constraints. This paper presents a high-speed hardware/software partitioning technique for the design of such systems. The partitioning problem is modeled as a multi-dimensional optimization problem with the aim of minimizing the area utilization, power dissipation, execution time and system memory requirement of the implementation. A two-phase algorithm (Phased Greedy Metaheuristic Algorithm, or PGMA) is proposed which also takes into consideration the communication costs between hardware and software processing engines (PEs) while partitioning. A detailed empirical analysis of the proposed algorithm is then presented to ascertain its efficiency, quality and speed: the execution time is as low as 18 ms for partitioning an application consisting of 1000 blocks. The algorithm is also applied to a real-life embedded system, the Joint Photographic Experts Group (JPEG) encoder, to demonstrate its effectiveness. For a power constraint of 600 mW, an area utilization of 58.28% is achieved, which, to the best of our knowledge, is the highest among all works reported to date. This allowed for a decreased offloading of tasks to software, resulting in a memory usage of only 14 KB and an execution time of 20 ms.
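A minimal greedy flavour of hardware/software partitioning is sketched below: tasks start in software and are moved to hardware in order of time saved per unit of area until an area budget is exhausted. The task attributes, budget, and single-objective scoring are made up for illustration and are far simpler than the multi-objective, communication-aware PGMA described above.

```python
# Sketch: greedy HW/SW partitioning under an area budget.
# Each task: (name, sw_time_ms, hw_time_ms, hw_area_units).
tasks = [
    ("dct",       8.0, 1.0, 30),
    ("quantize",  3.0, 0.5, 10),
    ("huffman",   5.0, 2.0, 25),
    ("colorconv", 2.0, 0.8, 12),
]
AREA_BUDGET = 50

def greedy_partition(tasks, area_budget):
    # Rank tasks by time saved per unit of hardware area.
    ranked = sorted(tasks, key=lambda t: (t[1] - t[2]) / t[3], reverse=True)
    hw, sw, used = [], [], 0
    for name, sw_t, hw_t, area in ranked:
        if used + area <= area_budget and sw_t > hw_t:
            hw.append(name)
            used += area
        else:
            sw.append(name)
    total_time = sum(t[2] if t[0] in hw else t[1] for t in tasks)
    return hw, sw, used, total_time

print(greedy_partition(tasks, AREA_BUDGET))
# (['quantize', 'dct'], ['huffman', 'colorconv'], 40, 8.5)
```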

9.
An indecomposable shape is like a prime number. It cannot be decomposed further as a Minkowski sum of two simpler shapes. With respect to Minkowski addition (dilation), therefore, the indecomposable shapes are the fundamental building blocks of all geometric shapes. However, just as it is difficult to identify whether a given number is a prime number or not, it is equally or more difficult to say whether a given shape is indecomposable or not. In this paper we take up a subdomain of binary images, called the weakly taxicab convex image domain, and show how the indecomposability problem in that shape domain can be approached in a manner closely analogous to the number theoretic way. Apart from our attempt to show that the indecomposability problem is an extremely interesting mathematical problem, our algorithmic treatment of the problem also leads to an efficient method of computing Minkowski addition and decomposition of binary images.
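For binary images represented as sets of pixel coordinates, Minkowski addition (dilation) is simply the set of pairwise coordinate sums, as the sketch below shows. Deciding indecomposability, the hard direction discussed above, is not attempted here.

```python
# Sketch: Minkowski addition of two binary images given as sets of (x, y) pixels.
def minkowski_sum(A, B):
    return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

# A 2x1 horizontal bar dilated by a 1x2 vertical bar gives a 2x2 square.
A = {(0, 0), (1, 0)}
B = {(0, 0), (0, 1)}
print(sorted(minkowski_sum(A, B)))   # [(0, 0), (0, 1), (1, 0), (1, 1)]
```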

10.
Using a step-by-step approach, a system-specific, non-mechanistic model based on statistical analyses of the real-world response of process parameters to bulking was formulated for an aerobic Sequencing Batch Reactor (SBR). The approach involved two phases, “Diagnosis” and “Analysis”. In the “Diagnosis” phase, model parameters were identified via statistical rules and the existing knowledge base on bulking, while in the “Analysis” phase, the parameters were modelled against the Sludge Volume Index (SVI) to formulate the non-mechanistic model. Validation yielded satisfactory results for the modelling analysis. The statistical approach was executed using a multivariate data analysis package and resulted in a non-mechanistic model that is practically appealing to practitioners for its ease of use as a predictive tool for bulking. This approach, being dependent only on data analysis and statistical modelling, required no bench-scale experiments for preliminary identification of the bulking problem. Furthermore, such an approach, being algorithmic in nature, could potentially form the concept behind the design of expert systems for very rapid and economical diagnosis of bulking problems before a more in-depth and slower analytical approach involving costly laboratory procedures is adopted.

11.
Methods of combination are used to synthesize pieces of evidence of equal standing that represent different aspects of a specific system about which a diagnosis is to be made. Combination is distinct from consensus, when complete diagnoses rendered by different knowledge sources require synthesis, and conditionalization, where pieces of evidence to be synthesized have dissymmetric relationships to each other. The Dempster-Shafer Rule is the quintessential combination method. However, it has been criticized for its inability to handle inconsistent pieces of evidence and for the way it focuses the weight of evidence. This article presents an alternative combination method that is capable of handling inconsistent evidence and relates evidence focusing to the amount of information resident in pieces of evidence. The method is capable of combining belief functions. Future research should address extending the method to the combination of a broad class of imprecise probability functions. © 1996 John Wiley & Sons, Inc.
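For reference, the Dempster-Shafer rule discussed above combines two basic belief assignments by intersecting focal elements and renormalizing away the conflicting mass. A minimal sketch with a made-up frame of discernment follows; the article's alternative combination method is not reproduced here.

```python
# Sketch: Dempster's rule of combination for two basic belief assignments.
# Masses are dicts mapping frozenset focal elements to mass values.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb           # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence; rule undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical diagnoses {faultA, faultB}.
m1 = {frozenset({"faultA"}): 0.6, frozenset({"faultA", "faultB"}): 0.4}
m2 = {frozenset({"faultB"}): 0.7, frozenset({"faultA", "faultB"}): 0.3}
print(dempster_combine(m1, m2))
```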

12.
We show that the following fundamental question in student sectioning can be decided in polynomial time: is it possible to assign \(m\) students to \(k\) sectioned courses with respect to a given timetable with \(l\) timeslots such that the individual capacities of all sections are not exceeded and no student has more than one appointment per timeslot? Our result opens the possibility of efficiently checking the feasibility of candidate timetables with respect to this question as part of timetabling algorithms. Based on a succinct representation of solutions, we also provide an algorithm to compute optimal assignments in \(O(k^2 l^2 \log(\mathrm{sum}_{A}))\), where \(\mathrm{sum}_{A}\) is the sum of all specified section capacities. On the other hand, we prove that adding any one of the following constraints turns the above question into an NP-complete problem (a feasibility-check sketch for the base question appears after this entry):
  • Course-selection constraint: Student’s course-selections must be respected.
  • Timeslot constraint: Students have individual timeslot restrictions.
  • Multiple-event constraint: Sections may have multiple events in the timetable, and there must be no timeslot clashes between all section-events for each student.
Hence our investigation gives insight into the location of the borderline between efficiently solvable and computationally hard problem variations.
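One plausible polynomial-time feasibility check for the base question, assuming every student needs one section of each course, is a max-flow model: push \(m\) units through each course node, bound course-to-timeslot edges by section capacities, and bound each timeslot by \(m\). This is an illustration, not necessarily the algorithm of the paper; the names and toy data below are made up.

```python
# Sketch: feasibility of the base sectioning question via max flow.
# sections: list of (course, timeslot, capacity); students are interchangeable.
import networkx as nx

def sectioning_feasible(m, sections):
    courses = {c for c, _, _ in sections}
    timeslots = {t for _, t, _ in sections}
    G = nx.DiGraph()
    for c in courses:                       # each course must absorb m students
        G.add_edge("src", ("course", c), capacity=m)
    for c, t, cap in sections:              # a section is a course -> timeslot edge
        u, v = ("course", c), ("slot", t)
        if G.has_edge(u, v):
            G[u][v]["capacity"] += cap      # merge parallel sections
        else:
            G.add_edge(u, v, capacity=cap)
    for t in timeslots:                     # at most m appointments per timeslot
        G.add_edge(("slot", t), "sink", capacity=m)
    return nx.maximum_flow_value(G, "src", "sink") == m * len(courses)

# 3 students, 2 courses; each course offers sections in timeslots 1 and 2.
sections = [("A", 1, 2), ("A", 2, 2), ("B", 1, 3), ("B", 2, 1)]
print(sectioning_feasible(3, sections))   # True
```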

13.
Unified Approach for Developing Efficient Algorithmic Programs
A unified approach called partition-and-recur for developing efficient and correct algorithmic programs is presented. An algorithm (represented by a recurrence and an initiation) is separated from the program, and special attention is paid to algorithm manipulation rather than program calculus. An algorithm is exactly a set of mathematical formulae, which makes formal derivation and proof easier. After an efficient and correct algorithm is obtained, a trivial transformation is used to get the final program. The approach covers several known algorithm design techniques, e.g. dynamic programming, greedy, divide-and-conquer and enumeration. The techniques of partition and recurrence are not new: partition is a general approach for dealing with complicated objects and is typically used in the divide-and-conquer approach, while recurrence is used in algorithm analysis, in developing loop invariants, and in the dynamic programming approach. The main contribution is combining these two techniques into a unified and systematic approach for developing general, efficient algorithmic programs, and presenting a new representation of algorithms that makes it easier to understand and demonstrate the correctness and ingenuity of algorithmic programs.
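As an illustration of the "recurrence plus initiation, then trivial transformation" idea, consider the maximum-segment-sum problem (our own example, not one taken from the paper): partition over prefixes, write the recurrence, then transcribe it directly into a loop.

```python
# Illustrative partition-and-recur style derivation for maximum segment sum.
# Recurrence: f(i) = max(f(i-1) + a[i], a[i])   -- best segment ending at i
#             g(i) = max(g(i-1), f(i))          -- best segment within a[0..i]
# Initiation: f(0) = g(0) = a[0]
# The loop below is the "trivial transformation" of that recurrence.
def max_segment_sum(a):
    f = g = a[0]
    for x in a[1:]:
        f = max(f + x, x)
        g = max(g, f)
    return g

print(max_segment_sum([3, -4, 5, -1, 2, -6, 4]))   # 6  (segment 5, -1, 2)
```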

14.
An algorithmic method of producing q-series identities from any given power series is discussed. This recursive technique is then used to give new proofs of several classical q-identities of Gauss and Rogers. Dedicated to the memory of John Knopfmacher, 1937–1999, the inventor of the Engel expansions for q-series. The first author was partially supported by National Science Foundation Grant DMS-9206993 and by The Centre for Applicable Analysis and Number Theory of the University of the Witwatersrand. He wishes to express his gratitude to the second author, who provided the hospitality and support which made his participation possible. This paper was presented at the Fourth International Conference on Average-Case Analysis of Algorithms, Princeton, NJ, by the second author. Online publication September 6, 2000.

15.
While the career of Charles Babbage (1791-1871) shows a remarkable range of interests, strong threads bind several of the principal ones together: algorithmic thinking, with intimate links to algebra and to semiotics. The links connect especially his mathematical researches in functional equations with his work on mathematical tables and on calculating machines, but they are also evident in some of his social and industrial concerns. Evidence is presented to show that Babbage was consciously aware of at least some of these links. Attention to them casts light upon his achievements.

16.
This paper discusses learning algorithms for ascertaining membership, inclusion, and equality in permutation groups. The main results are randomized learning algorithms which take a random generating set of a fixed group G ⊆ Sn as input. We discuss randomized algorithms for learning the concepts of group membership, inclusion, and equality by representing the group in terms of its strong sequence of generators, using random examples from G. We present O(n^3 log n) time sequential learning algorithms for testing membership, inclusion and equality. The running time is expressed as a function of the size of the object set (G ⊆ Sn can have as many as n! elements). Our bounds hold for all input groups. We also introduce limited parallelism, and our lower processor bounds make our algorithms more practical. Finally, we show that learning two-groups is in the class NC by reducing the membership, inclusion, and equality problems to solving linear systems over GF(2). We present an O(log^3 n) time learning algorithm using n^ω processors for learning two-groups from examples (where an n × n matrix product takes logarithmic time using n^ω processors).
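To give a flavour of the GF(2) reduction, the sketch below tests membership in the subgroup generated by a set of elements of an elementary abelian two-group, where each element is a bit vector and membership is exactly a linear-algebra question over GF(2). This simplified special case is for illustration only and is not the paper's general two-group construction.

```python
# Sketch: membership test in an elementary abelian two-group.
# Elements are bit vectors stored as integers; XOR is the group operation.
NBITS = 8   # width of the bit vectors in this toy example

def build_basis(generators):
    basis = [0] * NBITS              # basis[i] has leading bit i (0 if unused)
    for g in generators:
        for i in reversed(range(NBITS)):
            if not (g >> i) & 1:
                continue
            if basis[i] == 0:
                basis[i] = g         # g is independent of the current basis
                break
            g ^= basis[i]            # reduce g and keep going
    return basis

def in_span(basis, x):
    """True iff x lies in the subgroup generated over GF(2)."""
    for i in reversed(range(NBITS)):
        if (x >> i) & 1 and basis[i]:
            x ^= basis[i]
    return x == 0

basis = build_basis([0b1010, 0b0110])   # hypothetical generators
print(in_span(basis, 0b1100))            # True: 1010 XOR 0110 = 1100
print(in_span(basis, 0b0001))            # False
```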

17.
International Journal of Control, Automation and Systems - In this paper, we address a problem of multi-robotic coverage, where an area of interest is covered by multiple sensors, each mounted on...

18.
Advanced Robotics, 2013, 27(4): 327-344
Coordinate transformation is one of the most important issues in robotic manipulator control. Robot tasks are naturally specified in work-space coordinates, usually a Cartesian frame, while control actions are developed in joint coordinates. Effective inverse kinematic solutions are analytical in nature; they exist only for special manipulator geometries, and geometric intuition is usually required. Computational inverse kinematic algorithms have recently been proposed; they are based on general closed-loop schemes which perform the mapping of the desired Cartesian trajectory into the corresponding joint trajectory. The aim of this paper is to propose an effective computational scheme for the inverse kinematic problem for manipulators with spherical wrists. First, an insight into the formulation of kinematics is given in order to detail the general scheme for this specific class of manipulators. Algorithm convergence is then ensured by means of the Lyapunov direct method. The resulting algorithm is based on the hand position and orientation vectors usually adopted to describe motion in the task space. The analysis of the computational burden is performed by taking the Stanford arm as a reference. Finally, a case study is developed via numerical simulations.
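The closed-loop idea can be sketched for a planar two-link arm (chosen for brevity; the paper treats manipulators with spherical wrists): the task-space error is fed back through the Jacobian transpose until the joint variables reproduce the desired Cartesian position. Link lengths, gains, and the Jacobian-transpose variant are illustrative assumptions.

```python
# Sketch: closed-loop inverse kinematics (Jacobian transpose) for a 2-link planar arm.
import numpy as np

L1, L2 = 1.0, 0.8          # assumed link lengths

def fk(q):
    """Forward kinematics: joint angles -> end-effector position."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def clik(target, q0, gain=1.0, dt=0.01, steps=2000):
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        e = target - fk(q)                     # task-space error
        q += dt * gain * jacobian(q).T @ e     # closed-loop update
    return q

q = clik(target=np.array([1.2, 0.9]), q0=[0.3, 0.3])
print(fk(q))    # should be close to [1.2, 0.9]
```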

19.
A method of choosing between competing models of small sets of observations is presented. The intended application is to the modelling of ‘badly defined’ systems. The ideas of algorithmic information theory are surveyed, and applied to model discrimination. A novel definition of a model as a program which computes the observations exactly is given; this allows the size of a model to be interpreted as a measure of the informativeness of the model. It also imposes a trade-off between approximation and complexity, which is essential if overfitting of models is to be avoided. Two examples of discriminating between models of field data are presented.
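The approximation/complexity trade-off can be imitated with a crude two-part score: a fit term for the residuals plus a BIC-like penalty standing in for the size of the model-as-program. This is only a stand-in for the algorithmic-information criterion of the article, using made-up data and polynomial models.

```python
# Sketch: comparing competing models of a small data set by a
# two-part "misfit + model size" score (BIC-like proxy for description length).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)                        # small set of observations
y = 1.0 + 2.0 * x + rng.normal(0, 0.05, x.size)  # linear trend plus noise

def score(degree):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    rss = float(residuals @ residuals)
    n, k = x.size, degree + 1
    return n * np.log(rss / n) + k * np.log(n)   # misfit term + complexity term

for d in (1, 3, 7):
    print(f"degree {d}: description-length score {score(d):.2f}")
# Lower is better: a higher-degree model must reduce the misfit term by more
# than its extra (k * log n) complexity cost to be preferred.
```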

20.
The rapidly increasing scale of data warehouses is challenging today’s data analytical technologies. A conventional data analytical platform processes data warehouse queries using a star schema: it normalizes the data into a fact table and a number of dimension tables, and during query processing it selectively joins the tables according to users’ demands. This model is space economical. However, it faces two problems when applied to big data. First, join is an expensive operation, which prevents a parallel database or a MapReduce-based system from achieving efficiency and scalability simultaneously. Second, join operations have to be executed repeatedly, while numerous join results could actually be reused by different queries. In this paper, we propose a new query processing framework for data warehouses. It pushes the join operations partially to the pre-processing phase and partially to the post-processing phase, so that data warehouse queries can be transformed into massively parallelized filter-aggregation operations on the fact table. In contrast to conventional query processing models, our approach is efficient, scalable and stable despite the large number of tables involved in the join. It is especially suitable for a large-scale parallel data warehouse. Our empirical evaluation on Hadoop shows that our framework exhibits linear scalability and outperforms some existing approaches by an order of magnitude.
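The pre-join / filter-aggregation split can be mimicked on toy in-memory data: the dimension attributes a query needs are folded into the fact table once, after which the query is a plain scan with filter and group-by and no join at query time. Table names, columns, and the pre-joined attribute below are made up for illustration.

```python
# Sketch: push the join into pre-processing, then answer queries by
# filter + aggregation over the (widened) fact table only.
from collections import defaultdict

# Dimension table: product_id -> category (hypothetical schema).
dim_product = {1: "book", 2: "toy", 3: "book"}

# Fact table rows: (product_id, region, revenue).
fact = [(1, "EU", 10.0), (2, "EU", 4.0), (3, "US", 7.0), (1, "US", 3.0)]

# Pre-processing phase: fold the needed dimension attribute into the fact table.
wide_fact = [(dim_product[pid], region, revenue) for pid, region, revenue in fact]

# Query phase: "total revenue per region for the 'book' category" becomes a
# parallelizable filter-aggregation pass with no join at query time.
totals = defaultdict(float)
for category, region, revenue in wide_fact:
    if category == "book":
        totals[region] += revenue
print(dict(totals))   # {'EU': 10.0, 'US': 10.0}
```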
