20 similar documents found; search took 15 ms.
1.
Design and Implementation of a Novel Spherical Mobile Robot (total citations: 1; self-citations: 0; by others: 1)
Saber Mahboubi Mir Masoud Seyyed Fakhrabadi Ahmad Ghanbari 《Journal of Intelligent and Robotic Systems》2013,71(1):43-64
In this paper, the design, modeling and implementation of a novel spherical mobile robot are presented. The robot consists of a spherical outer shell made of a transparent thermoplastic material, two pendulums, two DC motors with gearboxes, two linear-motion mechanisms and two control units. It possesses four distinct motion modes: driving, steering, jumping and zero-radius turning. In the driving and steering modes the robot moves along straight and circular trajectories, respectively; it performs these modes using movable internal masses. In the jumping mode it can jump over obstacles, and in the zero-radius turning mode it can turn with zero radius to improve motion flexibility. Furthermore, dynamic models of several of the motion modes are established, and their accuracy is verified by simulation and experimental results.
2.
On the effect of multirate co-simulation techniques in the efficiency and accuracy of multibody system dynamics (total citations: 1; self-citations: 0; by others: 1)
Francisco González Miguel Ángel Naya Alberto Luaces Manuel González 《Multibody System Dynamics》2011,25(4):461-483
Dynamic simulation of complex mechatronic systems can be carried out in an efficient and modular way making use of weakly coupled co-simulation setups. When using this approach, multirate methods are often needed to improve the efficiency, since the physical components of the system usually have different frequencies and time scales. However, most multirate methods have been designed for strongly coupled setups, and their application in weakly coupled co-simulation is not straightforward due to the limitations enforced by commercial simulation tools used in mechatronics design. This work describes a weakly coupled multirate method intended to be a generic multirate interface between block diagram software and multibody dynamics simulators, arranged in a co-simulation setup. Its main advantage is that it does not enforce equidistant or synchronized communication time-grids and, therefore, it can be easily applied to set up weakly coupled co-simulations using off-the-shelf commercial block diagram simulators while giving the user a great flexibility for selecting the integration scheme for each subsystem.
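As a rough illustration of the weak-coupling idea behind such setups (not the paper's multirate interface itself, which additionally supports non-equidistant and non-synchronized communication grids), the following Python sketch co-simulates two toy linear subsystems that exchange coupling variables only at common communication points, while each integrates internally with its own micro step:

```python
def euler(x, rhs, u, h, n):
    """Explicit Euler over n micro steps, holding the coupling input u constant."""
    for _ in range(n):
        x = x + h * rhs(x, u)
    return x

# Two toy coupled subsystems:  dx1/dt = -x1 + x2   and   dx2/dt = -0.5*x2 + x1
f1 = lambda x, u: -x + u
f2 = lambda x, u: -0.5 * x + u

H, T = 0.05, 2.0          # communication (macro) step and end time
x1, x2 = 1.0, 0.0         # initial states
n1, n2 = 50, 5            # different micro-step counts per macro step (multirate)

for _ in range(int(round(T / H))):
    u1, u2 = x2, x1       # exchange coupling variables at the communication point
    x1 = euler(x1, f1, u1, H / n1, n1)   # "fast" subsystem, small micro step
    x2 = euler(x2, f2, u2, H / n2, n2)   # "slow" subsystem, larger micro step

print(x1, x2)
```

Allowing each subsystem its own, unsynchronized communication grid requires extrapolating the other subsystem's output between exchange points, which is exactly the interface problem the paper addresses.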
3.
Theory of Computing Systems - We examine several notions of randomness for elements in a given ${\Pi}_{1}^{0}$ class $\mathcal{P}$. Such an effectively closed subset $\mathcal{P}$ of $2^{\omega}$...
4.
5.
It is well-known that heuristic search in ILP is prone to plateau phenomena. An explanation can be given following the work of Giordana and Saitta: the ILP covering test is NP-complete and therefore exhibits a sharp phase transition in its coverage probability. As the heuristic value of a hypothesis depends on the number of covered examples, the regions “yes” and “no” represent plateaus that need to be crossed during search without an informative heuristic value. Several subsequent works have extensively studied this finding by running various learning algorithms on a large set of artificially generated problems and argued that the occurrence of this phase transition dooms every learning algorithm to fail to identify the target concept. We note, however, that only generate-and-test learning algorithms have been applied and that this conclusion has to be qualified in the case of data-driven learning algorithms. Mostly building on the pioneering work of Winston on near-miss examples, we show that, on the same set of problems, a top-down data-driven strategy can cross any plateau if near-misses are supplied in the training set, whereas they do not change the plateau profile and do not guide a generate-and-test strategy. We conclude that the location of the target concept with respect to the phase transition alone is not a reliable indication of the learning problem difficulty as previously thought.
6.
Felix Brandt Markus Brill Felix Fischer Paul Harrenstein 《Theory of Computing Systems》2011,49(1):162-181
In game theory, an action is said to be weakly dominated if there exists another action of the same player that, with respect to what the other players do, is never worse and sometimes strictly better. We investigate the computational complexity of the process of iteratively eliminating weakly dominated actions (IWD) in two-player constant-sum games, i.e., games in which the interests of both players are diametrically opposed. It turns out that deciding whether an action is eliminable via IWD is feasible in polynomial time whereas deciding whether a given subgame is reachable via IWD is NP-complete. The latter result is quite surprising, as we are not aware of other natural computational problems that are intractable in constant-sum normal-form games. Furthermore, we slightly improve on a result of V. Conitzer and T. Sandholm by showing that typical problems associated with IWD in win-lose games with at most one winner are NP-complete.
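To make the definition concrete, here is a small illustrative Python sketch (restricted to dominance by pure actions; dominance by mixed strategies, which is also relevant to IWD, is not covered) that checks whether a row action is weakly dominated in a payoff matrix:

```python
import numpy as np

def weakly_dominated(payoffs, a):
    """True if row action `a` is weakly dominated by some other pure row action:
    never better and sometimes strictly worse, whatever column the opponent plays."""
    for b in range(payoffs.shape[0]):
        if b == a:
            continue
        diff = payoffs[b] - payoffs[a]           # compare b against a, column by column
        if np.all(diff >= 0) and np.any(diff > 0):
            return True
    return False

# Row player's payoffs (rows = own actions, columns = opponent's actions)
M = np.array([[1, 0, 2],
              [1, 1, 2],      # weakly dominates row 0: ties twice, strictly better once
              [0, 2, 1]])
print(weakly_dominated(M, 0))   # True
print(weakly_dominated(M, 1))   # False
```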
7.
A. I. Delis I. K. Nikolos M. Kazolea 《Archives of Computational Methods in Engineering》2011,18(1):57-118
Finite volume (FV) methods for solving the two-dimensional (2D) nonlinear shallow water equations (NSWE) with source terms on unstructured, mostly triangular, meshes have been known for some time now. There are mainly two basic formulations of the FV method: node-centered (NCFV) and cell-centered (CCFV). In the NCFV formulation the finite volumes, used to satisfy the integral form of the equations, are elements of the mesh dual to the computational mesh, while in the CCFV approach the finite volumes are the mesh elements themselves. For both formulations, details are given of the development and application of a second-order well-balanced Godunov-type scheme, developed for the simulation of unsteady 2D flows over arbitrary topography with wetting and drying. The popular approximate Riemann solver of Roe is utilized to compute the numerical fluxes, while second-order spatial accuracy is achieved with a MUSCL-type reconstruction technique. The Green-Gauss (G-G) formulation for gradient computations is implemented for both formulations in order to maintain a common framework. Two different stencils for the G-G gradient computations in the CCFV formulation are implemented and tested. An edge-based limiting procedure is applied for the control of the total variation of the reconstructed field. This limiting procedure is proved to be effective for the NCFV scheme but inadequate for the CCFV approach. As such, a simple but very effective modification to the reconstruction procedure is introduced that takes into account geometrical characteristics of the computational mesh. In addition, consistent well-balanced second-order discretizations for the topography source term treatment and the wet/dry front treatment are presented for both FV formulations, ensuring absolute mass conservation, along with a stable friction term treatment.
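For orientation only, the generic semi-discrete form of such a Godunov-type FV scheme on a control volume $\Omega_i$ can be written as below; this is the standard template, not the specific well-balanced discretization developed in the paper:

$$\frac{\mathrm{d}\mathbf{U}_i}{\mathrm{d}t} = -\frac{1}{|\Omega_i|}\sum_{k\in\mathcal{N}(i)} \mathbf{F}^{*}\!\left(\mathbf{U}_{ik}^{-},\,\mathbf{U}_{ik}^{+};\,\mathbf{n}_{ik}\right)\ell_{ik} + \mathbf{S}_i ,$$

where $\mathbf{U}_{ik}^{\mp}$ are the (MUSCL-reconstructed) states on either side of edge $ik$, $\mathbf{F}^{*}$ is the numerical flux (here Roe's) in the direction $\mathbf{n}_{ik}$, $\ell_{ik}$ is the edge length, and $\mathbf{S}_i$ collects the discretized topography and friction source terms.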
8.
Okon H. Akpan 《The Journal of supercomputing》2012,60(3):410-419
The focus of this study is the design of a parallel solution method that utilizes a fourth-order compact scheme. The applicability of the method is demonstrated on a time-dependent parabolic system with Neumann boundaries. The core of the parallel computing facilities used in the study is a 2-head-node, 224-compute-node Apple Xserve G5 multiprocessor. The system is first discretized in both time and space such that it remains in its stability regimes, before being solved with the method. The solution requires time marching in which every time step, h t , calls for a single parallel solve of the intermediary subsystems generated. The solution uses p processors ranging in numbers from 3 to 63. The speedups, s p , approach their limiting value of p only when p is small. The solution produces good computational results at large p, but poor results as p becomes progressively small. Also, the parallel solution produces accurate results yielding good speedups and efficiencies only when p is within some reasonable range of values. The intermediary systems generated by this method are linear and fine-grained, therefore, they are best suited for solution on massively-parallel processors. The solution method proposed in this study is, therefore, expected to yield more impressive results if applied in a massively-parallel computing environment. 相似文献
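For reference, the classical fourth-order compact (Padé) approximation of a second derivative on a uniform grid with spacing $h$ reads as follows; the paper's actual discretization of the parabolic system may differ in detail:

$$\frac{1}{12}\left(f''_{i-1} + 10\,f''_{i} + f''_{i+1}\right) = \frac{f_{i-1} - 2 f_{i} + f_{i+1}}{h^{2}} + O(h^{4}).$$

Solving for the nodal second derivatives couples neighbouring unknowns through a tridiagonal system, which is presumably the source of the fine-grained linear subsystems mentioned above.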
9.
In this paper we study the convergence of the well-known Greedy Rank-One Update Algorithm, which is used to construct a rank-one series solution for full-rank linear systems. The existence of such rank-one approximations is not new, but surprisingly the focus in the literature has been more on the applications side than on the convergence analysis. Our main contribution is to prove the convergence of the algorithm; we also study the rank-one approximation required at each step. We give some numerical examples and describe the relationship of the algorithm with the finite element method for high-dimensional partial differential equations based on the tensorial product of one-dimensional bases. We illustrate this situation taking as a model problem the multidimensional Poisson equation with homogeneous Dirichlet boundary conditions.
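As a loose illustration of the greedy rank-one idea (applied here to approximating a known matrix, not to the paper's setting of building the solution of a linear system term by term), the following Python sketch fits one rank-one term at a time to the current residual by a few alternating least-squares sweeps:

```python
import numpy as np

def greedy_rank_one(M, terms=3, sweeps=50, seed=0):
    """Greedily approximate M by a sum of rank-one terms; each term is fitted
    to the current residual with a few alternating least-squares sweeps."""
    rng = np.random.default_rng(seed)
    R = M.astype(float).copy()
    approx = np.zeros_like(R)
    for _ in range(terms):
        u = rng.standard_normal(M.shape[0])
        for _ in range(sweeps):          # fix u, solve for v; then fix v, solve for u
            v = R.T @ u / (u @ u)
            u = R @ v / (v @ v)
        approx += np.outer(u, v)
        R = M - approx                   # the new residual drives the next greedy term
    return approx

M = np.array([[4.0, 2.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.linalg.norm(M - greedy_rank_one(M, terms=3)))   # ~0 for a 3x3 matrix
```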
10.
The quantum query complexity of searching for local optima has been a subject of much interest in the recent literature. For the d-dimensional grid graphs, the complexity has been determined asymptotically for all fixed d ≥ 5, but the lower-dimensional cases present special difficulties, and considerable gaps exist in our knowledge. In the present paper we present near-optimal lower bounds, showing that the quantum query complexity for the 2-dimensional grid [n]^2 is Ω(n^{1/2−δ}), and that for the 3-dimensional grid [n]^3 is Ω(n^{1−δ}), for any fixed δ > 0. A general lower bound approach for this problem, initiated by Aaronson (based on Ambainis’ adversary method for quantum lower bounds), uses random walks with low collision probabilities. This approach encounters obstacles in deriving tight lower bounds in low dimensions due to the lack of degrees of freedom in such spaces. We solve this problem by the novel construction and analysis of random walks with non-uniform step lengths. The proof employs in a nontrivial way sophisticated results of Sárközy and Szemerédi, Bose and Chowla, and Halász from combinatorial number theory, as well as less familiar probability tools like Esseen’s Inequality.
11.
We consider the question of whether adaptivity can improve the complexity of property testing algorithms in the dense graphs model. It is known that there can be at most a quadratic gap between adaptive and non-adaptive testers in this model, but it was not known whether any gap indeed exists. In this work we reveal such a gap.
12.
Zou Chuhang Su Jheng-Wei Peng Chi-Han Colburn Alex Shan Qi Wonka Peter Chu Hung-Kuo Hoiem Derek 《International Journal of Computer Vision》2021,129(5):1410-1431
International Journal of Computer Vision - Recent approaches for predicting layouts from 360° panoramas produce excellent results. These approaches build on a common framework...
13.
Given a graph G=(V,E) with strictly positive integer weights ω_i on the vertices i∈V, an interval coloring of G is a function I that assigns an interval I(i) of ω_i consecutive integers (called colors) to each vertex i∈V so that I(i)∩I(j)=∅ for all edges {i,j}∈E. The interval coloring problem is to determine an interval coloring that uses as few colors as possible. Assuming that a strictly positive integer weight δ_ij is associated with each edge {i,j}∈E, a bandwidth coloring of G is a function c that assigns an integer (called a color) to each vertex i∈V so that |c(i)−c(j)|≥δ_ij for all edges {i,j}∈E. The bandwidth coloring problem is to determine a bandwidth coloring with minimum difference between the largest and the smallest colors used. We prove that an optimal solution of the interval coloring problem can be obtained by solving a series of bandwidth coloring problems. Computational experiments demonstrate that such a reduction can help to solve larger instances or to obtain better upper bounds on the optimal solution value of the interval coloring problem.
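A small Python sketch of the two definitions, as purely illustrative feasibility checks (not the reduction proved in the paper):

```python
def is_interval_coloring(edges, w, I):
    """Vertex i receives the w[i] consecutive colors starting at I[i];
    adjacent vertices must receive disjoint intervals."""
    intervals = {v: set(range(I[v], I[v] + w[v])) for v in I}
    return all(intervals[i].isdisjoint(intervals[j]) for i, j in edges)

def is_bandwidth_coloring(edges, delta, c):
    """Check |c(i) - c(j)| >= delta[(i, j)] for every edge (i, j)."""
    return all(abs(c[i] - c[j]) >= delta[(i, j)] for i, j in edges)

# Tiny example: a triangle with vertex weights 2, 1, 3
edges = [(0, 1), (1, 2), (0, 2)]
w = {0: 2, 1: 1, 2: 3}
I = {0: 0, 1: 2, 2: 3}                 # intervals {0,1}, {2}, {3,4,5}
print(is_interval_coloring(edges, w, I))          # True, uses 6 colors

delta = {(0, 1): 2, (1, 2): 1, (0, 2): 3}
c = {0: 0, 1: 2, 2: 3}
print(is_bandwidth_coloring(edges, delta, c))     # True, color span 3
```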
14.
Two new constructions of Steiner quadruple systems S(v, 4, 3) are given. Both preserve resolvability of the original Steiner system and make it possible to control the rank of the resulting system. It is proved that any Steiner system S(v = 2^m, 4, 3) of rank r ≤ v − m + 1 over F_2 is resolvable and that all systems of this rank can be constructed in this way. Thus, we find the number of all different Steiner systems of rank r = v − m + 1.
15.
Cloud Computing refers to the notion of outsourcing on-site available services, computational facilities, or data storage to an off-site, location-transparent centralized facility or “Cloud.” Gang Scheduling is an efficient job scheduling algorithm for time sharing, already applied in parallel and distributed systems. This paper studies the performance of a distributed Cloud Computing model, based on the Amazon Elastic Compute Cloud (EC2) architecture, that implements a Gang Scheduling scheme. Our model utilizes the concept of Virtual Machines (or VMs), which act as the computational units of the system. Initially, the system includes no VMs, but depending on the computational needs of the jobs being serviced new VMs can be leased and later released dynamically. A simulation of the aforementioned model is used to study, analyze, and evaluate both the performance and the overall cost of two major gang scheduling algorithms. Results reveal that Gang Scheduling can be effectively applied in a Cloud Computing environment both performance-wise and cost-wise.
16.
Detecting, locating and repairing faults is a hard task. This holds especially in cases where dependent failures occur in practice. In this paper we present a methodology which is capable of handling dependent failures. For this purpose we extend the model-based diagnosis approach by explicitly representing knowledge about such dependencies, which are stored in a failure dependency graph. Besides the theoretical foundations, we present algorithms for computing diagnoses and repair actions that are based on these extensions. Moreover, we introduce a case study which makes use of a larger control program of an autonomous mobile robot. The case study shows that the proposed approach can be effectively used in practice.
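As a toy illustration of a failure dependency graph (the component names and the simple reachability-style propagation below are assumptions made for illustration, not the paper's formalism), one might represent dependencies as adjacency lists and collect the failures that can follow from a root failure:

```python
# Hypothetical dependency graph: an edge u -> v means "a failure of u can
# induce a dependent failure of v".
deps = {
    "power_board": ["motor_left", "motor_right"],
    "motor_left": ["odometry"],
    "motor_right": ["odometry"],
}

def dependent_failures(root, deps):
    """Collect all components whose failure may follow from a failure of `root`."""
    seen, stack = set(), [root]
    while stack:
        c = stack.pop()
        for d in deps.get(c, []):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

print(dependent_failures("power_board", deps))   # {'motor_left', 'motor_right', 'odometry'}
```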
17.
Giovanni Bellettini Valentina Beorchia Maurizio Paolini 《Journal of Mathematical Imaging and Vision》2008,32(3):265-291
We introduce and study a two-dimensional variational model for the reconstruction of a smooth generic solid shape E, which can handle self-occlusions and can be considered an improvement of the 2.1D sketch of Nitzberg and Mumford (Proceedings of the Third International Conference on Computer Vision, Osaka, 1990). We characterize from the topological viewpoint the apparent contour of E, namely, we characterize those planar graphs that are apparent contours of some shape E. This is the classical problem of recovering a three-dimensional layered shape from its apparent contour, which is of interest in theoretical computer vision. We make use of the so-called Huffman labeling (Machine Intelligence, vol. 6, Am. Elsevier, New York, 1971); see also the papers of Williams (Ph.D. Dissertation, 1994 and Int. J. Comput. Vis. 23:93–108, 1997) and the paper of Karpenko and Hughes (Preprint, 2006) for related results. Moreover, we show that if E and F are two shapes having the same apparent contour, then E and F differ by a global homeomorphism which is strictly increasing on each fiber along the direction of the eye of the observer. These two topological theorems allow us to find the domain of the functional ℱ describing the model. Compactness, semicontinuity and relaxation properties of ℱ are then studied, as well as connections of our model with the problem of completion of hidden contours.
18.
C. D. JOHNSON 《International journal of control》2013,86(3):529-534
In a recent article in this Journal, Porter (1969a) described a simple and attractive alternative procedure for synthesizing a linear feedback control law which will realize a certain pre-selected closed-loop eigenvalue pattern for a linear dynamical system. The purpose of the present note is to investigate the conditions for applicability of Porter's method. In addition, another method is proposed.
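Porter's procedure itself is not reproduced here, but the underlying task, choosing a state-feedback gain that realizes a pre-selected closed-loop eigenvalue pattern, can be sketched with SciPy's generic pole-placement routine:

```python
import numpy as np
from scipy.signal import place_poles

# Double-integrator system  x' = A x + B u  with state feedback u = -K x
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

desired = np.array([-2.0, -3.0])           # pre-selected closed-loop eigenvalues
K = place_poles(A, B, desired).gain_matrix

print(K)                                   # feedback gains
print(np.linalg.eigvals(A - B @ K))        # approximately [-2, -3]
```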
19.
Andreas Heckmann 《Multibody System Dynamics》2010,23(2):141-163
The modal representation of the deformation field is a widespread and efficient approach in the analysis of flexible multibody systems. However, it requires a pre-processing step, in advance of the actual multibody analysis, that includes the imposition of boundary conditions for the evaluation of the mode functions as an essential user input. Quite often the appropriateness of these boundary conditions is a point of discussion. Therefore, the present paper reviews the theoretical background and the implications of this task. Then, a consistent and comprehensive proposal is made as to how these boundary conditions may be chosen. The suggestion is justified by theoretical considerations and compared to alternative approaches from the literature in a simulation study with three representative examples. It may be concluded that several approaches lead to reasonable results for a sufficient number of mode functions. However, the proposed approach turned out to be the most efficient one and provides a consistent framework.
20.
Peder Lindberg James Leingang Daniel Lysaker Samee Ullah Khan Juan Li 《The Journal of supercomputing》2012,59(1):323-360
In this paper, we study the problem of scheduling tasks on a distributed system, with the aim of simultaneously minimizing energy consumption and makespan subject to deadline constraints and the tasks’ memory requirements. A total of eight heuristics are introduced to solve the task scheduling problem. The set of heuristics includes six greedy algorithms and two nature-inspired genetic algorithms. The heuristics are extensively simulated and compared using a simulation test-bed that utilizes a wide range of task heterogeneity and a variety of problem sizes. When evaluating the heuristics, we analyze the energy consumption, makespan, and execution time of each heuristic. The main benefit of this study is to allow readers to select an appropriate heuristic for a given scenario.
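None of the paper's eight heuristics is reproduced here, but the flavor of a greedy, makespan-oriented heuristic can be sketched as follows (ignoring the energy, deadline and memory constraints the paper also considers):

```python
import heapq

def lpt_schedule(task_times, n_machines):
    """Greedy 'longest processing time first': assign each task to the currently
    least-loaded machine; returns the makespan and the task-to-machine assignment."""
    loads = [(0.0, m) for m in range(n_machines)]
    heapq.heapify(loads)
    assignment = {}
    for task, t in sorted(enumerate(task_times), key=lambda kv: -kv[1]):
        load, m = heapq.heappop(loads)            # least-loaded machine so far
        assignment[task] = m
        heapq.heappush(loads, (load + t, m))
    makespan = max(load for load, _ in loads)
    return makespan, assignment

times = [7, 5, 4, 3, 3, 2]
print(lpt_schedule(times, 2))   # makespan 12 on two machines (optimal here)
```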