Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
In this paper we address whole-body manipulation of bulky objects by a humanoid robot. We adopt a “pivoting” manipulation method that allows the humanoid to displace an object without lifting it, using the support of the ground contact instead. First, the small-time controllability of pivoting is demonstrated. On this basis, an algorithm for collision-free pivoting motion planning is established that takes the naturalness of motion into account as nonholonomic constraints. Finally, we present a whole-body motion generation method for a humanoid robot, which is verified by experiments.

2.
Cloud Computing refers to the notion of outsourcing on-site available services, computational facilities, or data storage to an off-site, location-transparent centralized facility or “Cloud.” Gang Scheduling is an efficient job scheduling algorithm for time sharing, already applied in parallel and distributed systems. This paper studies the performance of a distributed Cloud Computing model, based on the Amazon Elastic Compute Cloud (EC2) architecture that implements a Gang Scheduling scheme. Our model utilizes the concept of Virtual Machines (or VMs) which act as the computational units of the system. Initially, the system includes no VMs, but depending on the computational needs of the jobs being serviced new VMs can be leased and later released dynamically. A simulation of the aforementioned model is used to study, analyze, and evaluate both the performance and the overall cost of two major gang scheduling algorithms. Results reveal that Gang Scheduling can be effectively applied in a Cloud Computing environment both performance-wise and cost-wise.
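The abstract summarizes the scheduling model only at a high level. As a rough, hedged sketch of the core mechanism (all names such as `Job`, `Cloud`, and `gang_schedule` are hypothetical; this is not the authors' simulator), the Python fragment below co-schedules all tasks of a job as a gang and leases additional VMs on demand whenever the currently idle VMs cannot host the whole gang:

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Job:
    name: str
    tasks: int       # gang size: all tasks must run simultaneously
    runtime: int     # time units each task needs

@dataclass
class Cloud:
    vms: int = 0                                   # currently leased VMs
    busy_until: list = field(default_factory=list)

    def idle_vms(self, t):
        return self.vms - sum(1 for b in self.busy_until if b > t)

    def lease(self, n):                            # lease extra VMs on demand
        self.vms += n

def gang_schedule(jobs):
    """First-come-first-served gang scheduling with dynamic VM leasing."""
    cloud, t, queue = Cloud(), 0, deque(jobs)
    while queue:
        job = queue.popleft()
        idle = cloud.idle_vms(t)
        if idle < job.tasks:
            # Lease just enough VMs so the whole gang can start together.
            cloud.lease(job.tasks - idle)
        cloud.busy_until += [t + job.runtime] * job.tasks
        print(f"t={t}: start {job.name} on {job.tasks} VMs "
              f"(total leased: {cloud.vms})")
        t += 1  # consider the next job one time unit later
    return cloud.vms

if __name__ == "__main__":
    total = gang_schedule([Job("A", 4, 10), Job("B", 2, 3), Job("C", 6, 5)])
    print("VMs leased over the run:", total)
```

A fuller simulator would also release VMs that stay idle past their lease period and would compare several gang scheduling policies, as the paper does.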

3.
We consider initial value problems for semilinear parabolic equations, which possess a dispersive term, nonlocal in general. This dispersive term is not necessarily dominated by the dissipative term. In our numerical schemes, the time discretization is done by linearly implicit schemes. More specifically, we discretize the initial value problem by the implicit–explicit Euler scheme and by the two-step implicit–explicit BDF scheme. In this work, we extend the results in Akrivis et al. (Math. Comput. 67:457–477, 1998; Numer. Math. 82:521–541, 1999), where the dispersive term (if present) was dominated by the dissipative one and was integrated explicitly. We also derive optimal order error estimates. We provide various physically relevant applications of dispersive–dissipative equations and systems fitting in our abstract framework.
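For reference, the two linearly implicit schemes named above can be written for an abstract evolution equation u'(t) + Au(t) = B(u(t)), with the linear part A treated implicitly and B treated explicitly (a generic sketch with step size τ; the paper's exact splitting of the dissipative and dispersive terms may differ):

```latex
% Generic implicit-explicit (linearly implicit) time discretizations, step size \tau.
% The precise operator splitting used in the paper may differ from this sketch.
\frac{u^{n+1}-u^{n}}{\tau} + A u^{n+1} = B(u^{n})
\qquad \text{(implicit--explicit Euler)}

\frac{\tfrac{3}{2}u^{n+2} - 2u^{n+1} + \tfrac{1}{2}u^{n}}{\tau} + A u^{n+2}
  = 2\,B(u^{n+1}) - B(u^{n})
\qquad \text{(two-step implicit--explicit BDF)}
```

In both schemes only a linear system involving the implicitly treated operator has to be solved per step, which is what makes linearly implicit schemes attractive when the remaining terms are expensive or inconvenient to treat implicitly.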

4.
The long-term dynamic behavior of many dynamical systems evolves on a low-dimensional, attracting, invariant slow manifold, which can be parameterized by only a few variables (“observables”). The explicit derivation of such a slow manifold (and thus, the reduction of the long-term system dynamics) is often extremely difficult or practically impossible. For this class of problems, the equation-free framework has been developed to enable performing coarse-grained computations, based on short full model simulations. Each full model simulation should be initialized so that the full model state is consistent with the values of the observables and close to the slow manifold. To compute such an initial full model state, a class of constrained runs functional iterations was proposed (Gear and Kevrekidis, J. Sci. Comput. 25(1), 17–28, 2005; Gear et al., SIAM J. Appl. Dyn. Syst. 4(3), 711–732, 2005). The schemes in this class only use the full model simulator and converge, under certain conditions, to an approximation of the desired state on the slow manifold. In this article, we develop an implementation of the constrained runs scheme that is based on a (preconditioned) Newton-Krylov method rather than on a simple functional iteration. The functional iteration and the Newton-Krylov method are compared in detail using a lattice Boltzmann model for one-dimensional reaction-diffusion as the full model simulator. Depending on the parameters of the lattice Boltzmann model, the functional iteration may converge slowly or even diverge. We show that both issues are largely resolved by using the Newton-Krylov method, especially when a coarse grid correction preconditioner is incorporated.
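As a rough illustration of the two solvers being compared, the sketch below (my own, not the authors' code) applies both a plain functional iteration and SciPy's Jacobian-free Newton-Krylov solver to the same fixed-point equation; the hypothetical `constrained_run` map stands in for the expensive lifting step that, in the paper, runs the lattice Boltzmann simulator briefly and resets the observables.

```python
import numpy as np
from scipy.optimize import newton_krylov

def constrained_run(x):
    """Hypothetical constrained-runs map: in practice this runs the full model
    for a short time and resets the observables; here a toy affine map is used."""
    A = np.array([[0.4, 0.3], [0.1, 0.5]])
    return A @ x + np.array([1.0, 2.0])

# Plain functional iteration: x_{k+1} = C(x_k).  It may converge slowly or
# diverge when the map is not a contraction.
x = np.zeros(2)
for _ in range(50):
    x = constrained_run(x)

# Newton-Krylov: solve the residual equation F(x) = C(x) - x = 0 with a
# Jacobian-free Krylov linear solver inside Newton's method.
residual = lambda y: constrained_run(y) - y
x_nk = newton_krylov(residual, np.zeros(2), f_tol=1e-12)

print("functional iteration:", x)
print("Newton-Krylov       :", x_nk)
```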

5.
The reconstruction of geometry or, in particular, the shape of objects is a common issue in image analysis. Starting from a variational formulation of such a problem on a shape manifold, we introduce a regularization technique incorporating statistical shape knowledge. The key idea is to consider a Riemannian metric on the shape manifold which reflects the statistics of a given training set. We investigate the properties of the regularization functional and illustrate our technique by applying it to region-based and edge-based segmentation of image data. In contrast to previous works, our framework can be considered on arbitrary (finite-dimensional) shape manifolds and allows the use of Riemannian metrics for regularization of a wide class of variational problems in image processing.

6.
Operational planning within public transit companies has been tackled extensively but remains a challenging area for operations research models and techniques. This phase of the planning process comprises vehicle-scheduling, crew-scheduling and rostering problems. In this paper, a new integer mathematical formulation to describe the integrated vehicle-crew-rostering problem is presented. The method proposed to obtain feasible solutions for this binary non-linear multi-objective optimization problem is a sequential algorithm considered within a preemptive goal programming framework that gives a higher priority to the integrated vehicle-crew-scheduling goal and a lower priority to the driver rostering goals. A heuristic approach is developed where the decision maker can choose from different vehicle-crew schedules and rosters, while respecting as much as possible management’s interests and drivers’ preferences. An application to real data of a Portuguese bus company shows the influence of vehicle-crew-scheduling optimization on rostering solutions.

7.
A distributed approach is described for solving lineality (or linearity) space (LS) problems with large cardinalities and a large number of dimensions. The LS solution has applications in engineering, science, and business, and includes a subset of solutions of the more general extended linear complementarity problem (ELCP). A parallel MATLAB framework is employed and results are computed on an 8-node Rocks based cluster computer using Remote Procedure Calls (RPCs) and the MPICH2 Message Passing Interface (MPI). Results show that both approaches perform comparably when solving distributed LS problems. This indicates that when deciding which parallel approach to use, the implementation details particular to the method are the decisive factors, which in this investigation give MPICH2 MPI the advantage.
Corresponding author: Mario E. Caire

8.
Tardiness bounds under global EDF scheduling on a multiprocessor
We consider the scheduling of a sporadic real-time task system on an identical multiprocessor. Though Pfair algorithms are theoretically optimal for such task systems, in practice their runtime overheads can significantly reduce the amount of useful work that is accomplished. On the other hand, if all deadlines need to be met, then every known non-Pfair algorithm requires restrictions on total system utilization that can approach approximately 50% of the available processing capacity. This may be overkill for soft real-time systems, which can tolerate occasional or bounded deadline misses (i.e., bounded tardiness). In this paper we derive tardiness bounds under preemptive and non-preemptive global EDF when the total system utilization is not restricted, except that it may not exceed the available processing capacity. Hence, processor utilization can be improved for soft real-time systems on multiprocessors. Our tardiness bounds depend on the total system utilization and on per-task utilizations and execution costs: the lower these values, the lower the tardiness bounds. As a final remark, we note that global EDF may be superior to partitioned EDF for multiprocessor-based soft real-time systems in that the latter does not offer any scope to improve system utilization even if bounded tardiness can be tolerated.
Corresponding author: UmaMaheswari C. Devi
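To make bounded tardiness concrete, here is a small discrete-time simulation of preemptive global EDF on m identical processors (an illustrative sketch, not the paper's analysis; unit-time quanta and implicit deadlines are simplifying assumptions) that reports the maximum tardiness observed for a task set whose total utilization equals the number of processors:

```python
import heapq

def simulate_global_edf(tasks, m, horizon):
    """tasks: list of (wcet, period); deadlines are implicit (== period).
    Preemptive global EDF on m processors, simulated in unit-time quanta."""
    jobs = []            # heap of [absolute deadline, release time, remaining work]
    max_tardiness = 0
    for t in range(horizon):
        # Release new jobs at period boundaries.
        for wcet, period in tasks:
            if t % period == 0:
                heapq.heappush(jobs, [t + period, t, wcet])
        # Run the (up to) m pending jobs with the earliest deadlines for one quantum.
        running = [heapq.heappop(jobs) for _ in range(min(m, len(jobs)))]
        for job in running:
            job[2] -= 1
            if job[2] == 0:
                max_tardiness = max(max_tardiness, (t + 1) - job[0])
            else:
                heapq.heappush(jobs, job)   # unfinished work goes back in the heap
    return max_tardiness

# Total utilization = 3 * (2/3) = 2 on m = 2 processors (fully utilized system).
print(simulate_global_edf([(2, 3), (2, 3), (2, 3)], m=2, horizon=300))
```

With total utilization at the full capacity of two processors, deadlines are occasionally missed, but the observed tardiness stays small and bounded, which is the behavior the derived bounds quantify.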

9.
Interacting and annealing are two powerful strategies that are applied in different areas of stochastic modelling and data analysis. Interacting particle systems approximate a distribution of interest by a finite number of particles where the particles interact between the time steps. In computer vision, they are commonly known as particle filters. Simulated annealing, on the other hand, is a global optimization method derived from statistical mechanics. A recent heuristic approach to fuse these two techniques for motion capture has become known as the annealed particle filter. In order to analyze these techniques, in this paper we rigorously derive two algorithms with annealing properties based on the mathematical theory of interacting particle systems. Convergence results and sufficient parameter restrictions enable us to point out limitations of the annealed particle filter. Moreover, we evaluate the impact of the parameters on the performance in various experiments, including the tracking of articulated bodies from noisy measurements. Our results provide general guidance on suitable parameter choices for different applications.
Corresponding author: Jürgen Gall
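The following compact sketch (my own illustration on a 1-D toy problem, not the authors' formulation) shows the basic mechanics of an annealed particle filter: at each annealing layer the weights are raised to an increasing exponent beta, particles are resampled, and the diffusion noise is reduced, so the particle set progressively concentrates on the dominant mode of the likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(x, observation):
    """Toy multimodal likelihood; a real tracker would evaluate an image model."""
    return np.exp(-(x - observation) ** 2) + 0.3 * np.exp(-(x + 2.0) ** 2)

def annealed_particle_filter(observation, n_particles=200, layers=5):
    particles = rng.uniform(-5.0, 5.0, n_particles)
    betas = np.linspace(0.2, 1.0, layers)        # annealing exponents (increasing)
    sigmas = np.linspace(1.0, 0.1, layers)       # diffusion noise (decreasing)
    for beta, sigma in zip(betas, sigmas):
        w = likelihood(particles, observation) ** beta      # flattened weights
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)             # resample
        particles = particles[idx] + rng.normal(0.0, sigma, n_particles)  # diffuse
    return particles.mean()

print(annealed_particle_filter(observation=1.5))
```

The parameter choices here (number of layers, beta schedule, noise schedule) are exactly the quantities whose influence the paper analyzes.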

10.
We propose a motion planning formulation of the overarm throw for a 55-degree-of-freedom biped human multibody system. The unique characteristics of the throwing task (it is highly redundant, highly nonlinear, and highly dynamic) make simulating the throwing motion a recognized challenge in the literature; these characteristics are addressed within the framework of multibody dynamics and optimization. To generate physically feasible throwing motions in a fully predictive manner, without input reference from motion capture or animation, rigorous dynamics modeling, such as dynamic balance based on the Zero-Moment Point (ZMP) and ground reaction loads, is incorporated into the constraints. Given the target location and the object mass, the algorithm outputs the motion, the required actuator torques, the release conditions, and the projectile trajectory and flight time of the object. Realistic human-like throwing motions are generated for different input parameters, demonstrating valid cause–effect relations in terms of both kinematic and kinetic outputs.

11.
In this paper, we study adaptive finite element approximation schemes for a constrained optimal control problem. We derive the equivalent a posteriori error estimators for both the state and the control approximation, which particularly suit an adaptive multi-mesh finite element scheme. The error estimators are then implemented and tested with promising numerical results.

12.
Graphics processor units (GPU) that are originally designed for graphics rendering have emerged as massively-parallel “co-processors” to the central processing unit (CPU). Small-footprint multi-GPU workstations with hundreds of processing elements can accelerate compute-intensive simulation science applications substantially. In this study, we describe the implementation of an incompressible flow Navier–Stokes solver for multi-GPU workstation platforms. A shared-memory parallel code with identical numerical methods is also developed for multi-core CPUs to provide a fair comparison between CPUs and GPUs. Specifically, we adopt NVIDIA’s Compute Unified Device Architecture (CUDA) programming model to implement the discretized form of the governing equations on a single GPU. Pthreads are then used to enable communication across multiple GPUs on a workstation. We use separate CUDA kernels to implement the projection algorithm to solve the incompressible fluid flow equations. Kernels are implemented on different memory spaces on the GPU depending on their arithmetic intensity. The memory hierarchy specific implementation produces significantly faster performance. We present a systematic analysis of speedup and scaling using two generations of NVIDIA GPU architectures and provide a comparison of single and double precision computational performance on the GPU. Using a quad-GPU platform for single precision computations, we observe two orders of magnitude speedup relative to a serial CPU implementation. Our results demonstrate that multi-GPU workstations can serve as a cost-effective small-footprint parallel computing platform to accelerate computational fluid dynamics (CFD) simulations substantially.
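For reference, the projection algorithm mentioned above can be sketched serially in NumPy as a single Chorin-type step on a periodic, collocated grid (a simplified illustration of the algorithm only; the paper's CUDA kernels, memory-hierarchy placement, boundary conditions, and discretization details are not reproduced here):

```python
import numpy as np

def projection_step(u, v, dt, dx, nu, n_jacobi=100):
    """One Chorin projection step on a periodic, collocated grid.
    u, v: velocity components; returns corrected (u, v) and pressure p."""
    def ddx(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)
    def ddy(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
    def lap(f): return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
                        np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / dx**2

    # 1) Intermediate velocity: advection + diffusion, pressure ignored.
    u_star = u + dt * (-u * ddx(u) - v * ddy(u) + nu * lap(u))
    v_star = v + dt * (-u * ddx(v) - v * ddy(v) + nu * lap(v))

    # 2) Pressure Poisson equation  lap(p) = div(u*)/dt, solved by Jacobi sweeps.
    rhs = (ddx(u_star) + ddy(v_star)) / dt
    p = np.zeros_like(u)
    for _ in range(n_jacobi):
        p = (np.roll(p, -1, 0) + np.roll(p, 1, 0) +
             np.roll(p, -1, 1) + np.roll(p, 1, 1) - dx**2 * rhs) / 4.0

    # 3) Projection: subtract the pressure gradient to enforce div(u) = 0.
    return u_star - dt * ddx(p), v_star - dt * ddy(p), p

n, dx = 64, 1.0 / 64
u = np.random.default_rng(1).standard_normal((n, n)) * 0.01
v = np.zeros((n, n))
u, v, p = projection_step(u, v, dt=1e-3, dx=dx, nu=1e-2)
```

In a GPU implementation each of these three stages becomes one or more kernels, and the iterative pressure solve is where most of the arithmetic intensity (and the benefit of careful memory placement) lies.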

13.
Similarity is one of the most important abstract concepts in human perception of the world. In computer vision, numerous applications deal with comparing objects observed in a scene with some a priori known patterns. Often, it happens that while two objects are not similar, they have large similar parts, that is, they are partially similar. Here, we present a novel approach to quantify partial similarity using the notion of Pareto optimality. We exemplify our approach on the problems of recognizing non-rigid geometric objects, images, and analyzing text sequences.
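To illustrate the Pareto-optimality view of partial similarity, the short sketch below (a generic illustration, not the authors' algorithm) filters candidate part-to-part matches by two competing criteria, the dissimilarity of the matched parts and the "partiality" (how much of the objects is left out), keeping only the non-dominated pairs:

```python
def pareto_front(candidates):
    """candidates: list of (dissimilarity, partiality) pairs for matched parts.
    Returns the non-dominated pairs: no other candidate is at least as good in
    both criteria and different."""
    front = []
    for c in candidates:
        dominated = any(o[0] <= c[0] and o[1] <= c[1] and o != c for o in candidates)
        if not dominated:
            front.append(c)
    return sorted(front)

# Each pair: (how different the matched parts are, how much was discarded).
matches = [(0.9, 0.05), (0.5, 0.30), (0.2, 0.70), (0.6, 0.40), (0.1, 0.95)]
print(pareto_front(matches))   # trade-off curve between similarity and partiality
```

The resulting non-dominated set plays the role of a set-valued distance: no single scalar captures the trade-off between how similar the matched parts are and how much of the objects had to be discarded.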

14.
This paper describes how to interpret a program’s performance in terms of its computational energy spectrum. High spikes in the spectrum correspond to important events during execution, such as cache misses, and their positions show when they happen and how they affect other events. The area under the spectrum measures a program’s size in terms of the computational action norm, a measure of how efficiently it moves through computational phase space. The distance from one program to another is the area between their action curves. The measured energy spectra for a set of real programs executing on real hardware support the conjecture that the best program generates the least action, the Principle of Computational Least Action.

15.
This paper addresses the problem of developing an optimization model to aid operational scheduling in a real-world pipeline scenario. The pipeline connects a refinery and a harbor, conveying different types of commodities (gasoline, diesel, kerosene, etc.). An optimization model was developed to determine pipeline scheduling with improved efficiency. This model combines constraint logic programming (CLP) and mixed integer linear programming (MILP) in a CLP-MILP approach. The proposed model uses decomposition strategies, continuous time representation, intervals that indicate time constraints (time windows), and a series of operational issues, such as the seasonal and hourly cost of electric energy (on-peak demand hours). Real cases were solved in a matter of seconds. The computational results have demonstrated that the model is able to define new operational points for the pipeline, providing significant cost savings. Indeed, the CLP-MILP model is an efficient tool to aid operational decision-making within this real-world pipeline scenario.
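As a toy illustration of the MILP side of such a scheduling model (PuLP/CBC here; this is not the authors' CLP-MILP formulation and all names and data are hypothetical), the sketch below chooses pumping hours over a day while penalizing pumping during assumed on-peak electricity hours and respecting a simple time window:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

HOURS = list(range(24))
ON_PEAK = set(range(18, 21))                               # assumed on-peak hours
cost = {h: 5.0 if h in ON_PEAK else 1.0 for h in HOURS}    # energy cost per hour

prob = LpProblem("toy_pipeline_schedule", LpMinimize)
pump = {h: LpVariable(f"pump_{h}", cat=LpBinary) for h in HOURS}   # pump in hour h?

prob += lpSum(cost[h] * pump[h] for h in HOURS)            # minimize energy cost
prob += lpSum(pump[h] for h in HOURS) == 10                # total pumping hours needed
prob += lpSum(pump[h] for h in HOURS if h < 12) >= 4       # time window: early delivery

prob.solve()
print("pumping hours:", [h for h in HOURS if value(pump[h]) > 0.5])
```

The actual model in the paper is far richer (CLP propagation, continuous time, decomposition), but the fragment shows how time windows and on-peak energy prices enter a MILP naturally.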

16.
This paper presents a framework for allocating radio resources to the Access Points (APs) by introducing an Access Point Controller (APC). Radio resources can be either time slots or subchannels. The APC assigns subchannels to the APs using a dynamic subchannel allocation scheme. The developed framework evaluates the dynamic subchannel allocation scheme for a downlink multicellular Orthogonal Frequency Division Multiple Access (OFDMA) system. In the considered system, each AP and its associated Mobile Terminals (MTs) do not operate on a frequency channel with fixed bandwidth; rather, the channel bandwidth for each AP is dynamically adapted according to the traffic load. The subchannel assignment procedure is based on quality estimates derived from interference measurements and the current traffic load. The traffic load is estimated by measuring the utilization of the assigned radio resources. The reuse partitioning for the radio resources is done by estimating the mutual Signal to Interference Ratio (SIR) of the APs. The developed dynamic subchannel allocation ensures Quality of Service (QoS), better traffic adaptability, and higher spectrum efficiency with lower computational complexity.
Corresponding author: Chanchal Kumar Roy
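A minimal greedy version of load-driven subchannel allocation might look like the sketch below (my own illustration; the paper's SIR-based reuse partitioning and quality estimation are replaced by a hypothetical `sir_estimate` table): each AP requests a number of subchannels proportional to its measured utilization, and the controller hands each subchannel to the requesting AP with the best estimated SIR on it.

```python
import random

def allocate_subchannels(load, sir_estimate, n_subchannels):
    """load: AP -> utilization in [0, 1]; sir_estimate: (AP, subchannel) -> SIR.
    Returns AP -> list of assigned subchannels."""
    total = sum(load.values()) or 1.0
    # Demand proportional to measured traffic load.
    demand = {ap: round(n_subchannels * u / total) for ap, u in load.items()}
    assignment = {ap: [] for ap in load}
    for ch in range(n_subchannels):
        # Among APs that still need capacity, pick the one with the best SIR here.
        needy = [ap for ap in load if len(assignment[ap]) < demand[ap]]
        if not needy:
            break
        best = max(needy, key=lambda ap: sir_estimate[(ap, ch)])
        assignment[best].append(ch)
    return assignment

random.seed(0)
load = {"AP1": 0.8, "AP2": 0.4, "AP3": 0.2}
sir = {(ap, ch): random.uniform(0, 30) for ap in load for ch in range(8)}  # fake SIRs
print(allocate_subchannels(load, sir, n_subchannels=8))
```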

17.
Recently, several standards have emerged for ontology markup languages that can be used to formalize all kinds of knowledge. However, there are no widely accepted standards yet that define APIs to manage ontological data. Processing ontological information still suffers from the heterogeneity imposed by the plethora of available ontology management systems. Moreover, ubiquitous computing environments usually comprise software components written in a variety of different programming languages, which makes it particularly difficult to establish a common ontology management API with programming-language-agnostic semantics. We implemented an ontological Knowledge Base Server, which can expose the functionality of arbitrary off-the-shelf ontology management systems via a formally specified and well-defined API. A case study was carried out in order to demonstrate the feasibility of our approach of using a formally specified ontology management API to implement a registry for ubiquitous computing systems.

18.
Ergonomics, 2012, 55(3): 221–234
Portable ladders are one of the most ancient tools conceived by man. They remain ubiquitous and indispensable even today. It is interesting to note that there is little difference between the makeshift portable ladders used throughout history and some still used today. The design of portable ladders seems to have simply evolved, rather than been subject to formal design process, including ergonomic criteria. An analysis of 277 fatalities associated with ladders was conducted to describe the pattern of ladder fatalities and identify and assess ergonomic design controls. All ladder fatalities analysed were found to contain multiple human, equipment (ladder) and environmental causative factors. It is hypothesized that significant gains with regard to reducing future fatalities can be achieved by applying ergonomic design principles to ladders to accommodate predictable and undesirable human behaviour. Without effective future change, the only prediction that can be made is that the pattern of ladder fatalities will simply continue.

19.
With widespread adoption of computer-based distance education as a mission-critical component of the institution's educational program, the need for evaluation has emerged. In this research, we aim to expand on the systems approach by offering a model for evaluation based on socio-technical systems theory, addressing a stated need in the literature for comprehensive models for evaluating e-learning environments (Holsapple, C.W. and Lee-Post, A., 2006. Defining, assessing, and promoting e-learning success: an information systems perspective. Decision Sciences Journal of Innovative Education, 4(1), 67–85). The proposed systems model evaluates distance learning success from the instructor's perspective. It defines and develops measures for course quality, system quality and corresponding impacts. The model is tested based on data collected from 548 instructors at seven universities in the Midwest region of the USA. The results suggest that the proposed multi-dimensional system flexibility scale is reliable. Course quality significantly affects both system flexibility and faculty-perceived impacts of distance education. System flexibility also significantly affects both course quality and faculty-perceived impacts.

20.
We introduce and study a two-dimensional variational model for the reconstruction of a smooth generic solid shape E, which can handle self-occlusions and can be considered an improvement of the 2.1D sketch of Nitzberg and Mumford (Proceedings of the Third International Conference on Computer Vision, Osaka, 1990). We characterize the apparent contour of E from the topological viewpoint; namely, we characterize those planar graphs that are apparent contours of some shape E. This is the classical problem of recovering a three-dimensional layered shape from its apparent contour, which is of interest in theoretical computer vision. We make use of the so-called Huffman labeling (Machine Intelligence, vol. 6, Am. Elsevier, New York, 1971); see also the papers of Williams (Ph.D. Dissertation, 1994 and Int. J. Comput. Vis. 23:93–108, 1997) and the paper of Karpenko and Hughes (Preprint, 2006) for related results. Moreover, we show that if E and F are two shapes having the same apparent contour, then E and F differ by a global homeomorphism which is strictly increasing on each fiber along the direction of the eye of the observer. These two topological theorems allow us to determine the domain of the functional ℱ describing the model. Compactness, semicontinuity and relaxation properties of ℱ are then studied, as well as connections of our model with the problem of completion of hidden contours.
Corresponding author: Maurizio Paolini
