This paper describes a syntactic approach to imitation learning that captures important task structures in the form of probabilistic activity grammars from a reasonably small number of samples under noisy conditions. We show that these learned grammars can be recursively applied to help recognize unforeseen, more complicated tasks that share underlying structures. The grammars constrain an observation to be consistent with previously observed behaviors, which can correct unexpected, out-of-context actions caused by errors of the observer and/or demonstrator. To achieve this goal, our method (1) actively searches for frequently occurring action symbols that are subsets of input samples to uncover the hierarchical structure of the demonstration, and (2) accounts for the uncertainty of input symbols due to imperfect low-level detectors. We evaluate the proposed method using both synthetic data and two sets of real-world humanoid robot experiments. In our Towers of Hanoi experiment, the robot learns the important constraints of the puzzle after observing demonstrators solving it. In our Dance Imitation experiment, the robot learns three types of dances from human demonstrations. The results suggest that, under a reasonable amount of noise, our method is capable of capturing reusable task structures and generalizing them to cope with recursion.
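As a hedged illustration of step (1), the Python sketch below (our illustration, not the authors' implementation) counts frequently occurring contiguous action-symbol subsequences across demonstrations and substitutes the most frequent one with a non-terminal, which is the basic move behind inducing grammar rules from recurring structure; the symbol names, subsequence length, and frequency threshold are all assumptions.

```python
from collections import Counter

def frequent_subsequences(demonstrations, length=2, min_count=3):
    """Count contiguous action-symbol subsequences of a given length
    across all demonstrations and keep those seen at least min_count times."""
    counts = Counter()
    for demo in demonstrations:
        for i in range(len(demo) - length + 1):
            counts[tuple(demo[i:i + length])] += 1
    return {seq: c for seq, c in counts.items() if c >= min_count}

def substitute(demo, seq, nonterminal):
    """Replace every occurrence of seq in demo with a single non-terminal symbol."""
    out, i = [], 0
    while i < len(demo):
        if tuple(demo[i:i + len(seq)]) == seq:
            out.append(nonterminal)
            i += len(seq)
        else:
            out.append(demo[i])
            i += 1
    return out

# Hypothetical demonstrations made of pick-and-place action symbols.
demos = [["reach", "grasp", "lift", "move", "release"],
         ["reach", "grasp", "lift", "move", "release"],
         ["reach", "grasp", "move", "release"]]
frequent = frequent_subsequences(demos, length=2, min_count=2)
if frequent:
    best = max(frequent, key=frequent.get)            # most frequent pair, e.g. ("reach", "grasp")
    rewritten = [substitute(d, best, "PICK") for d in demos]
    print(best, "->", rewritten[0])
```

In an actual grammar-induction loop this substitution would be applied repeatedly, with rule probabilities estimated from the observed counts.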
Powered wheelchair users often struggle to drive safely and effectively and, in more critical cases, can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists users as and when they require help. The system uses a multiple-hypothesis method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but also, perhaps more importantly, characterize the user performance in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions while driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely, but they were also able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.
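To make the shared-control idea concrete, here is a minimal Python sketch (not the paper's controller) that blends the user's raw command with an assistive command toward the most probable predicted goal, weighting the assistance by the confidence of the intention estimate; the blending rule, variable names, and numbers are assumptions.

```python
import numpy as np

def blend_commands(user_cmd, goal_cmds, goal_probs, max_assist=0.8):
    """Blend the raw user command with the command toward the most likely goal.
    The assistance weight grows with the confidence of the intention prediction."""
    best = int(np.argmax(goal_probs))
    confidence = float(goal_probs[best])
    alpha = max_assist * confidence           # how much control the system takes over
    return (1.0 - alpha) * np.asarray(user_cmd) + alpha * np.asarray(goal_cmds[best])

# Hypothetical example: the user steers slightly off-course; the doorway goal suggests straight ahead.
user = [0.6, -0.2]                            # (forward speed, turn rate)
goals = [[0.6, 0.0], [0.3, 0.5]]              # commands toward two candidate goals
probs = [0.85, 0.15]                          # multiple-hypothesis intention estimate
print(blend_commands(user, goals, probs))
```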
Local learning algorithms use a neighborhood of training data close to a given test query point to learn local parameters and build, on the fly, a local model specifically designed for that query point. The local approach delivers breakthrough performance in many application domains. This paper considers local learning versions of regularization networks (RN) and investigates several options for improving their online prediction performance, in both accuracy and speed. First, we exploit the interplay between locally optimized and globally optimized hyper-parameters (the regularization parameter and the kernel width) that each new predictor must otherwise optimize online. The operating cost is substantially reduced when the two hyper-parameters are optimized globally and shared by all local models, and we demonstrate that this global optimization also produces more accurate models than locally optimizing online either the regularization parameter, the kernel width, or both. Next, by comparing eigenvalue decomposition (EVD) with Cholesky decomposition for the local training and testing phases, we show that the Cholesky-based implementations are faster than their EVD counterparts in all training cases. While EVD is suitable for cost-effectively validating several regularization parameters, Cholesky should be preferred when validating several neighborhood sizes (the number of k-nearest neighbors) and when the local network operates online. We then exploit parallelism on a multi-core system for these local computations, demonstrating that execution times are further reduced. Finally, although using pre-computed, stored local models instead of online-built local models is even faster, this option degrades accuracy: there is a substantial gain in waiting for a test point to arrive before building its local model, so online local learning RNs are more accurate than their pre-computed counterparts. Extensive experimental results and comparisons on several benchmark datasets support all these findings.
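The following Python sketch is a hedged illustration of the building blocks discussed above (not the paper's code): it assembles a local RN prediction from k-nearest-neighbor selection, a Gaussian kernel matrix with width sigma, ridge-style regularization lam, and a Cholesky-based solve; the function name, parameter values, and synthetic data are assumptions.

```python
import numpy as np

def local_rn_predict(X, y, x_query, k=50, sigma=1.0, lam=1e-2):
    """Predict at x_query with a local regularization network:
    take the k nearest training points, build a Gaussian kernel matrix,
    and solve the regularized system (K + lam*I) c = y_k via Cholesky."""
    d2 = np.sum((X - x_query) ** 2, axis=1)             # squared distances to all training points
    idx = np.argsort(d2)[:k]                            # indices of the k nearest neighbors
    Xk, yk = X[idx], y[idx]
    D2 = np.sum((Xk[:, None, :] - Xk[None, :, :]) ** 2, axis=2)
    K = np.exp(-D2 / (2.0 * sigma ** 2))                # local Gaussian kernel matrix
    L = np.linalg.cholesky(K + lam * np.eye(len(Xk)))   # Cholesky factor of regularized kernel
    c = np.linalg.solve(L.T, np.linalg.solve(L, yk))    # coefficients of the local model
    k_q = np.exp(-np.sum((Xk - x_query) ** 2, axis=1) / (2.0 * sigma ** 2))
    return float(k_q @ c)

# Hypothetical usage on synthetic one-dimensional data.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
print(local_rn_predict(X, y, np.array([0.5]), k=40, sigma=0.5, lam=1e-2))
```

Globally shared values of sigma and lam correspond to the cheaper configuration described in the abstract, since only the neighborhood search and the Cholesky solve remain to be done per query.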
We examine the implications of shape on the process of finding dense correspondence and half-occlusions for a stereo pair of images. The desired property of the disparity map is that it should be a piecewise continuous function which is consistent with the images and which has the minimum number of discontinuities. To zeroth order, piecewise continuity becomes piecewise constancy. Using this approximation, we first discuss an approach for dealing with such a fronto-parallel, shapeless world, and the problems involved therein. We then introduce horizontal and vertical slant to create a first-order approximation to piecewise continuity. In particular, we emphasize the following geometric fact: a horizontally slanted surface (i.e., one having depth variation in the direction of the separation of the two cameras) will appear horizontally stretched in one image as compared to the other. Thus, when establishing correspondence between the two images, N pixels on a scanline in one image may correspond to a different number of pixels M in the other image. This leads to three important modifications to existing stereo algorithms: (a) due to unequal sampling, existing intensity matching metrics must be modified, (b) unequal numbers of pixels in the two images must be allowed to correspond to each other, and (c) the uniqueness constraint, which is often used for detecting occlusions, must be changed to an interval uniqueness constraint. We also discuss the asymmetry between vertical and horizontal slant, and the central role of non-horizontal edges in the context of vertical slant. Using experiments, we show cases where existing algorithms fail and how the incorporation of these new constraints produces correct results.
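As a hedged sketch of modifications (a) and (b) above (our illustration, not the authors' algorithm), the Python snippet below compares a run of N left-scanline pixels with a run of M right-scanline pixels by resampling both onto a common length before scoring, so that unequal numbers of pixels are allowed to correspond; the matching cost and the example intensities are assumptions.

```python
import numpy as np

def stretched_match_cost(left_segment, right_segment):
    """Compare N left-scanline pixels with M right-scanline pixels by linearly
    resampling both segments to a common length, allowing unequal pixel counts
    (as on a horizontally slanted surface) to correspond."""
    n, m = len(left_segment), len(right_segment)
    length = max(n, m)
    left_r = np.interp(np.linspace(0, n - 1, length), np.arange(n), left_segment)
    right_r = np.interp(np.linspace(0, m - 1, length), np.arange(m), right_segment)
    return float(np.mean((left_r - right_r) ** 2))      # mean squared intensity difference

# Hypothetical example: a slanted surface makes 5 left pixels correspond to 7 right pixels.
left = np.array([10.0, 12.0, 15.0, 14.0, 11.0])
right = np.array([10.0, 11.3, 12.9, 15.0, 14.4, 12.5, 11.0])
print(stretched_match_cost(left, right))
```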
This paper examines the impact of information and communication technologies (ICT) adoption on management praxis. The study, building on the theoretical framework developed by Scott Morton and his colleagues, attempts to identify the dynamic relationships between ICT adoption and management efforts towards modernization and reorganization. Using data from leading Greek firms, we report evidence on how changes in strategy, organizational structure, management systems, and human skills link with the current and prospective level of use of various types of advanced ICT. Findings generally suggest that Greek firms are in the process of recognizing the potential of ICT to enable and support changes that are necessary for competing successfully in a hyper-competitive environment. In particular, ICT adoption is shown to affect strategy by supporting long-term strategic objectives and the quest for profitability. Indirectly, it also links to strategic planning systems. ICT is found to be related to an internal environment characterized by open organization and flexibility. Finally, the results show that the sample firms recognize the need for multi-skilled personnel to exploit the advantages stemming from ICT adoption.