Similar Literature
20 similar records found
1.
It is difficult to express the parallelism present in complex computations using existing higher-level abstractions such as MapReduce and Dryad. These computations include applications from a wide variety of domains, such as artificial intelligence, decision tree algorithms, association rule mining, recommender systems, graph algorithms, clustering algorithms, compute-intensive scientific workflows, and optimization algorithms. Their execution graphs introduce new challenges in terms of programmer expressibility and runtime performance, such as iterative and recursive computations and a shared communication model. We propose an extension to MapReduce, called Generate-Map-Reduce (GMR), targeted towards modeling these applications. GMR introduces a new Generate abstraction into the MapReduce framework that captures recursive computations. The runtime also supports iterative jobs and a distributed communication model using shared data structures. We illustrate recursive computations with GMR by modeling complex applications such as simulated annealing, A* search, and adaptive quadrature, which require recursive spawning of new tasks to handle a variable degree of parallelism. The GMR runtime supports caching of common data across iterations in memory and on local disks. We illustrate how this caching achieves significant speedup for iterative computations by modeling k-means clustering. Copyright © 2013 John Wiley & Sons, Ltd.
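To make the recursive-spawning pattern concrete, the following is a minimal, single-process Python sketch of the kind of computation the Generate abstraction is meant to capture, using adaptive quadrature as in the abstract: each task either emits a partial result or generates two finer sub-tasks. It illustrates the execution pattern only, not the GMR runtime or its API.

```python
# Sketch of the recursive-spawning pattern that a "Generate" abstraction
# would capture, using adaptive quadrature as the example workload.
# This is a single-process illustration, not the GMR runtime itself.
from collections import deque

def adaptive_quadrature(f, a, b, tol=1e-6):
    """Integrate f over [a, b]; intervals whose trapezoid and Simpson
    estimates disagree spawn two new sub-tasks (variable parallelism)."""
    tasks = deque([(a, b, tol)])          # dynamically growing task pool
    total = 0.0
    while tasks:
        lo, hi, eps = tasks.popleft()
        mid = 0.5 * (lo + hi)
        trap = 0.5 * (hi - lo) * (f(lo) + f(hi))
        simp = (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))
        if abs(simp - trap) < eps:        # converged: emit a result
            total += simp
        else:                             # not converged: generate two sub-tasks
            tasks.append((lo, mid, eps / 2))
            tasks.append((mid, hi, eps / 2))
    return total

if __name__ == "__main__":
    import math
    print(adaptive_quadrature(math.sin, 0.0, math.pi))  # ≈ 2.0
```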

2.
The image motion of a planar surface between two camera views is captured by a homography (a 2D projective transformation). The homography depends on the intrinsic and extrinsic camera parameters, as well as on the 3D plane parameters. While the camera parameters vary across different views, the plane geometry remains the same. Based on this fact, we derive linear subspace constraints on the relative homographies of multiple (⩾2) planes across multiple views. The paper has three main contributions: 1) we show that the collection of all relative homographies (homologies) of a pair of planes across multiple views spans a 4-dimensional linear subspace; 2) we show how this constraint can be extended to the case of multiple planes across multiple views; 3) we show that, for some restricted cases of camera motion, linear subspace constraints also apply to the set of homographies of a single plane across multiple views. All the results derived hold for uncalibrated cameras. The possible utility of these multiview constraints for improving homography estimation and for detecting nonrigid motions is also discussed.
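The first contribution (the 4-dimensional subspace spanned by relative homographies of a plane pair) can be checked numerically with synthetic data; the cameras, planes, and motions below are invented for illustration, and the construction uses the standard plane-induced homography model rather than anything specific to the paper's estimation method.

```python
# Numerical check of the 4-D subspace claim for relative homographies
# (homologies) of a plane pair across multiple views.  All cameras, planes
# and motions are synthetic and chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)

def rot(axis, angle):
    """Rotation matrix from an axis-angle pair (Rodrigues formula)."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

# Two fixed planes (normal n, distance d) seen from a reference view.
n1, d1 = np.array([0.0, 0.0, 1.0]), 5.0
n2, d2 = np.array([0.1, 0.2, 1.0]), 7.0

homologies = []
for _ in range(8):                        # eight additional views
    R = rot(rng.normal(size=3), 0.2 * rng.normal())
    t = rng.normal(size=3)
    H1 = R + np.outer(t, n1) / d1         # plane-induced homography, plane 1
    H2 = R + np.outer(t, n2) / d2         # plane-induced homography, plane 2
    G = np.linalg.inv(H1) @ H2            # relative homography (homology)
    homologies.append(G.ravel())

s = np.linalg.svd(np.stack(homologies), compute_uv=False)
print(np.round(s / s[0], 6))              # only ~4 significant singular values
```

Only about four of the printed singular values are significantly non-zero, which is the rank-4 behaviour stated in the first contribution.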

3.
We address the problem of navigating unmanned vehicles safely through urban canyons in two dimensions using only vision-based techniques. Two commonly used vision-based obstacle avoidance techniques, namely stereo vision and optic flow, are implemented on an aerial and a ground-based robotic platform and evaluated for urban canyon navigation. Optic flow is evaluated for its ability to produce a centering response between obstacles, and stereo vision is evaluated for detecting obstacles to the front. We also evaluate a combination of these two techniques, which allows a vehicle to detect obstacles to the front while remaining centered between obstacles to the side. Through experiments on an unmanned ground vehicle and in simulation, this combination is shown to be beneficial for navigating urban canyons, including T-junctions and 90-deg bends. Experiments on a rotorcraft unmanned aerial vehicle, which was constrained to two-dimensional flight, demonstrate that stereo vision allowed it to detect an obstacle to the front, and optic flow allowed it to turn away from obstacles to the side. We discuss the theory behind these techniques, our experience in implementing them on the robotic platforms, and their suitability to the urban canyon navigation problem. © 2009 Wiley Periodicals, Inc.
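The centering behaviour described above can be sketched in a few lines: compare the average optic-flow magnitude in the left and right image halves to produce a turn command, and let a frontal stereo range check override it. The flow field is assumed to come from any dense optic-flow routine, and the gains and thresholds below are illustrative, not the values used on the platforms in the paper.

```python
# Minimal sketch of the centering response: balance left/right optic-flow
# magnitudes to steer, and use a frontal stereo range check to stop and turn.
# Gains and thresholds are illustrative assumptions.
import numpy as np

def centering_command(flow, front_range_m, k_yaw=1.0, min_front_range_m=2.0):
    """flow: HxWx2 dense optic-flow field (e.g. Farneback output);
    front_range_m: nearest stereo range in a frontal window, in metres.
    Returns (forward_speed, yaw_rate); positive yaw turns left."""
    mag = np.linalg.norm(flow, axis=2)
    w = mag.shape[1]
    left, right = mag[:, : w // 2].mean(), mag[:, w // 2:].mean()
    # Larger flow on one side means that side is closer: turn away from it.
    yaw = k_yaw * (right - left) / (left + right + 1e-9)
    if front_range_m < min_front_range_m:        # stereo sees a frontal obstacle
        return 0.0, (np.sign(yaw) if yaw != 0 else 1.0)
    return 1.0, yaw

# Example: slightly stronger flow on the left half -> gentle right turn.
flow = np.zeros((120, 160, 2))
flow[:, :80, 0], flow[:, 80:, 0] = 1.2, 1.0
print(centering_command(flow, front_range_m=5.0))
```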

4.
Vision-based navigation of a mobile robot using generalized predictive control
This paper studies vision-based navigation of a mobile robot in indoor environments. Scene images are acquired by a monocular sensor, the path is extracted using color information, and the path parameters are fitted by least squares, which simplifies the image processing and improves the real-time performance of the algorithm. The robot's pose is corrected by eliminating the distance and angle deviations relative to the reference path, so that the robot tracks the path. To compensate for the time consumed by visual recognition and transmission and to achieve real-time control, an improved multivariable generalized predictive control method predicts the change of the control signal at the next time step to correct the system lag. Simulation and experimental results demonstrate the reliability of the control algorithm.
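The least-squares path fit mentioned above can be sketched as follows: fit a line to the path pixels obtained from colour segmentation and convert it into the distance and angle deviations that the predictive controller drives to zero. The image geometry, variable names, and offset conventions are assumptions for illustration.

```python
# Sketch of the least-squares path fit described above: fit a line
# u = a*v + b to path pixels (u: column, v: row) segmented by colour,
# then turn it into the distance and angle offsets the controller nulls.
# Camera geometry and conventions here are assumptions for illustration.
import numpy as np

def path_offsets(path_pixels, image_width):
    """path_pixels: Nx2 array of (v, u) pixel coordinates of the path.
    Returns (lateral_offset_px, heading_error_rad) w.r.t. the image centre."""
    v, u = path_pixels[:, 0].astype(float), path_pixels[:, 1].astype(float)
    A = np.stack([v, np.ones_like(v)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, u, rcond=None)    # least-squares fit u = a*v + b
    v_bottom = v.max()                                # row nearest the robot
    lateral = (a * v_bottom + b) - image_width / 2.0  # signed pixel offset
    heading = np.arctan(a)                            # path slope as heading error
    return lateral, heading

# Example with a synthetic, slightly tilted path left of the image centre.
vv = np.arange(100, 240)
uu = 0.1 * vv + 120
print(path_offsets(np.stack([vv, uu], axis=1), image_width=320))
```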

5.
Visually impaired people face significant challenges in their work, studies, and daily lives. Numerous navigation devices for the visually impaired are now available to help with everyday tasks. These devices typically include modules for target recognition, distance measurement, and text reading aloud, and are designed to help visually impaired users avoid obstacles and understand their surroundings through spoken descriptions. Because all potential obstacles must be avoided, the target recognition algorithms embedded in these devices must recognize a wide range of targets; however, the text reading module cannot read out all of them. We therefore designed a visual saliency assistance mechanism that estimates the regions of the scene to which humans are most likely to attend. Its output is overlaid on the target recognition results, which greatly reduces the number of targets that need to be read aloud. In this way, the navigation device not only helps avoid obstacles but also helps visually impaired users understand which targets in the scene most people would find interesting. The proposed visual saliency assistance mechanism consists of three components: a spatio-temporal feature extraction (STFE) module, a spatio-temporal feature fusion (STFF) module, and a multi-scale feature fusion (MSFF) module. The STFF module fuses long-term spatio-temporal features and improves the temporal memory between frames. The MSFF module fully integrates information at different scales to improve the accuracy of saliency prediction. The proposed visual saliency model can therefore assist the efficient operation of visually impaired navigation systems. The area under the ROC curve, Judd variant (AUC-J), of the proposed model was 93.9%, 93.8%, and 91.5% on three widely used saliency datasets: Hollywood2, UCF Sports, and DHF1K, respectively. The results show that our proposed model outperforms current state-of-the-art models.
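The overlay step can be illustrated with a short sketch: score each detected target by the mean predicted saliency inside its bounding box and keep only the most salient few for reading aloud. The saliency map and detections are assumed to come from upstream models; the selection rule and the keep count are illustrative, not taken from the paper.

```python
# Sketch of the overlay step described above: rank detected targets by the
# mean predicted saliency inside their boxes and keep only the few most
# salient ones for reading aloud.  Inputs come from upstream models; the
# selection rule is an illustrative assumption.
import numpy as np

def select_targets_for_tts(saliency, detections, keep=3):
    """saliency: HxW map in [0, 1]; detections: list of (label, x0, y0, x1, y1).
    Returns the `keep` detections with the highest mean saliency."""
    scored = []
    for label, x0, y0, x1, y1 in detections:
        patch = saliency[y0:y1, x0:x1]
        score = float(patch.mean()) if patch.size else 0.0
        scored.append((score, label, (x0, y0, x1, y1)))
    scored.sort(reverse=True)
    return scored[:keep]

# Example: a chair near the saliency peak outranks a sign in a dull corner.
sal = np.zeros((240, 320)); sal[80:160, 100:200] = 1.0
dets = [("chair", 110, 90, 180, 150), ("sign", 10, 10, 40, 40)]
print(select_targets_for_tts(sal, dets, keep=1))
```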

6.
The proposed method computes a navigation mesh for arbitrary and dynamic 3D environments based on curvature, and it is robust and efficient. The method addresses a number of known limitations in state-of-the-art techniques: it produces navigation meshes that are tightly coupled to the original geometry, incorporates geometric details that are crucial for movement decisions, robustly handles complex surfaces, and efficiently repairs the navigation mesh to accommodate dynamically changing environments. The method is integrated into a standard navigation and collision avoidance system to simulate thousands of agents on complex 3D surfaces in real time. Copyright © 2016 John Wiley & Sons, Ltd.

7.
How to put probabilities on homographies
We present a family of "normal" distributions over a matrix group together with a simple method for estimating its parameters. In particular, the mean of a set of elements can be calculated. The approach is applied to planar projective homographies, showing that using priors defined in this way improves object recognition.
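The abstract does not spell out the construction, but one standard way to compute a mean on a matrix group, shown below for homographies normalized to SL(3), is a Karcher-style iteration with the matrix logarithm and exponential. This is offered as a plausible sketch of the kind of computation involved, not as the paper's actual method.

```python
# One standard way to average elements of a matrix group (a Karcher-style
# mean via matrix log/exp); the paper's exact construction may differ.
# Homographies are first scaled to unit determinant so they live in SL(3).
import numpy as np
from scipy.linalg import expm, logm

def normalize_sl3(H):
    """Scale a 3x3 homography so that det(H) = 1."""
    return H / np.cbrt(np.linalg.det(H))

def group_mean(Hs, iters=20):
    """Intrinsic mean of a list of homographies by iterating
    mu <- mu * exp(mean_i log(mu^-1 H_i))."""
    mu = normalize_sl3(Hs[0])
    for _ in range(iters):
        logs = [logm(np.linalg.inv(mu) @ normalize_sl3(H)) for H in Hs]
        delta = np.mean(logs, axis=0)
        mu = mu @ expm(delta)
        if np.linalg.norm(delta) < 1e-12:
            break
    return mu

# Example: mean of small random perturbations of the identity homography.
rng = np.random.default_rng(1)
Hs = [normalize_sl3(np.eye(3) + 0.05 * rng.normal(size=(3, 3))) for _ in range(10)]
print(np.round(group_mean(Hs), 4))
```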

8.
Pixel-based visualizations have become popular because they can display large amounts of data while still providing many details. However, pixel-based visualizations are only effective if the data set is not sparse and the data distribution is not random. Single pixels, whether they lie in an empty area or in the middle of a large area of differently colored pixels, are perceptually difficult to discern and may easily be missed. Furthermore, trends and interesting passages may be camouflaged in the sea of details. In this paper we compare different approaches for visual boosting in pixel-based visualizations. Several boosting techniques, such as halos, background coloring, distortion, and hatching, are discussed and assessed with respect to their effectiveness in boosting single pixels, trends, and interesting passages. Application examples from three different domains (document analysis, genome analysis, and geospatial analysis) show the general applicability of the techniques and the derived guidelines.
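As an illustration of one of the boosting techniques compared in the paper, the sketch below adds halos: a mask of "interesting" pixels is dilated and the resulting ring is painted in a contrasting colour so that isolated pixels stand out. Halo width and colours are arbitrary choices for the example.

```python
# Sketch of one boosting technique discussed above (halos): surround
# isolated interesting pixels with a contrasting ring so they are not
# lost among their neighbours.  Halo size and colours are illustrative.
import numpy as np
from scipy.ndimage import binary_dilation

def add_halos(rgb, interesting_mask, halo_px=2, halo_color=(255, 255, 0)):
    """rgb: HxWx3 uint8 image; interesting_mask: HxW bool mask of pixels
    to boost.  Returns a copy with a halo ring painted around each one."""
    ring = binary_dilation(interesting_mask, iterations=halo_px) & ~interesting_mask
    out = rgb.copy()
    out[ring] = halo_color
    return out

# Example: one lone hot pixel in a noisy field gets a 2-pixel yellow halo.
rng = np.random.default_rng(2)
img = rng.integers(0, 60, size=(64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool); mask[32, 32] = True
img[32, 32] = (255, 0, 0)
print(add_halos(img, mask)[30:35, 30:35, 0])
```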

9.
This paper describes a light detection and ranging (LiDAR)-based autonomous navigation system for an ultralightweight ground robot in agricultural fields. The system is designed for reliable navigation under cluttered canopies using only a 2D Hokuyo UTM-30LX LiDAR sensor as the single source of perception. Its purpose is to ensure that the robot can navigate through rows of crops without damaging the plants in narrow, row-based, high-leaf-cover semistructured crop plantations, such as corn (Zea mays) and sorghum (Sorghum bicolor). The key contribution of our work is a LiDAR-based navigation algorithm capable of rejecting outlying measurements in the point cloud caused by plants in adjacent rows, low-hanging leaf cover, or weeds. The algorithm addresses this challenge using a set of heuristics designed to filter out outlying measurements in a computationally efficient manner, and linear least squares is applied to estimate the within-row distance from the filtered data. A crucial step is estimate validation, achieved through a heuristic that grades and validates the fitted row lines based on current and previous information. The proposed LiDAR-based perception subsystem has been extensively tested in production and breeding corn and sorghum fields. In this variety of highly cluttered real field environments, the robot logged more than 6 km of autonomous runs in straight rows. These results demonstrate highly promising advances in LiDAR-based navigation in realistic field environments for small under-canopy robots.
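A much-simplified sketch of the filter-then-fit step is shown below: laterally gate the 2D scan points on one side of the robot (a crude stand-in for the paper's outlier-rejection heuristics) and fit a row line by linear least squares to obtain the within-row distance and heading. The gate widths, point counts, and frame conventions are assumptions.

```python
# Simplified sketch of the filter-then-fit step described above: gate the
# 2D scan points laterally on one side of the robot, then estimate the row
# line by linear least squares.  Gate widths and conventions are assumptions.
import numpy as np

def fit_row_line(points_xy, side="left", gate=(0.15, 0.60), max_range=3.0):
    """points_xy: Nx2 LiDAR points in the robot frame (x forward, y left).
    Returns (distance_to_row_m, row_heading_rad) or None if too few points."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    lateral = y if side == "left" else -y
    keep = (np.hypot(x, y) < max_range) & (lateral > gate[0]) & (lateral < gate[1])
    if keep.sum() < 10:
        return None
    xs, ys = x[keep], y[keep]
    A = np.stack([xs, np.ones_like(xs)], axis=1)
    (m, c), *_ = np.linalg.lstsq(A, ys, rcond=None)   # row line: y = m*x + c
    distance = abs(c) / np.hypot(m, 1.0)              # perpendicular distance at x=0
    return distance, np.arctan(m)

# Example: noisy returns from a row 0.35 m to the left, plus far-row outliers.
rng = np.random.default_rng(3)
xs = rng.uniform(0.2, 2.5, 200)
row = np.stack([xs, 0.35 + 0.01 * rng.normal(size=200)], axis=1)
outliers = np.stack([rng.uniform(0.2, 2.5, 40), rng.uniform(0.9, 1.4, 40)], axis=1)
print(fit_row_line(np.vstack([row, outliers])))
```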

10.
Visual Teach and Repeat (VT&R) is an effective method for enabling a vehicle to repeat any previously driven route using just a visual sensor and without a global positioning system. However, one of the major challenges in recognizing previously visited locations is lighting change, which can drastically alter the appearance of the scene. In an effort to achieve lighting invariance, this paper details the design of a VT&R system that uses a laser scanner as the primary sensor. Unlike a traditional scan-matching approach, we apply appearance-based computer vision techniques to laser intensity images for motion estimation, providing the benefit of lighting invariance. Field tests were conducted in an outdoor, planetary analogue environment over an entire diurnal cycle, repeating a 1.1 km route more than 10 times with an autonomy rate of 99.7% by distance. We describe, in detail, our experimental setup and results, as well as how we address the various off-nominal scenarios related to feature-poor environments, hardware failures, and estimation drift. An analysis of motion distortion and a comparison with a stereo-based system are also presented. We show that, even without motion compensation, our system is robust enough to repeat long-range routes accurately and reliably. © 2012 Wiley Periodicals, Inc.

11.
The visual simulation of natural phenomena has been widely studied. Although several methods have been proposed to simulate melting, the flow of meltwater drops on the surfaces of objects has not been taken into account. In this paper, we propose a particle-based method for simulating the melting and freezing of ice objects and the interactions between ice and fluids. To simulate the flow of meltwater on ice and the formation of water droplets, a simple interfacial tension is proposed, which can easily be incorporated into common particle-based simulation methods such as Smoothed Particle Hydrodynamics. The computations of heat transfer, the phase transition between ice and water, the interactions between ice and fluids, and the separation of ice due to melting are further accelerated by implementing our method using CUDA. We demonstrate our simulation and rendering method for depicting melting ice at interactive frame rates.
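A minimal per-particle sketch of the heat-transfer and phase-transition bookkeeping described above is given below. The smoothing kernel, constants, and brute-force neighbour computation are simplifications for readability; the paper's interfacial tension term and CUDA acceleration are not reproduced.

```python
# Minimal per-particle sketch of heat transfer plus melting bookkeeping.
# Kernel, constants and the brute-force neighbour computation are
# simplifications; units are arbitrary and purely illustrative.
import numpy as np

MELT_T = 0.0          # melting temperature
LATENT = 5.0          # latent-heat budget per particle (arbitrary units)

def heat_step(pos, temp, stored, phase, k=0.8, h=0.12, dt=0.01):
    """pos: Nx3 positions; temp: N temperatures; stored: N latent-heat
    accumulators; phase: N object array of 'ice' / 'water'."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    w = np.where((d > 0) & (d < h), 1.0 - d / h, 0.0)     # crude smoothing kernel
    temp = temp + dt * k * (w * (temp[None, :] - temp[:, None])).sum(axis=1)
    for i in range(len(pos)):
        if phase[i] == "ice" and temp[i] > MELT_T:         # excess heat melts ice
            stored[i] += temp[i] - MELT_T
            temp[i] = MELT_T
            if stored[i] >= LATENT:
                phase[i] = "water"
    return temp, stored, phase

# Example: a warm water particle next to three ice particles.
pos = np.array([[0.0, 0, 0], [0.05, 0, 0], [0.10, 0, 0], [0.15, 0, 0]])
temp = np.array([80.0, -0.5, -1.0, -2.0])
phase = np.array(["water", "ice", "ice", "ice"], dtype=object)
print(heat_step(pos, temp, np.zeros(4), phase)[0])
```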

12.
Using computer-vision-based navigation techniques, a visual navigation system is designed for fixed-point landing of a quadcopter. A landing platform is laid out on the ground; the onboard camera captures images of the surrounding environment, feature points are extracted to determine the quadcopter's position, and control commands are sent over a wireless link to guide the quadcopter throughout its flight.
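The position-determination step can be sketched with a standard PnP solve: given the pixel locations of known landing-pad corners detected in the onboard image, recover the pad's pose in the camera frame. The pad size, camera intrinsics, and synthetic corner detections below are assumptions; the paper's actual feature-extraction and control pipeline is not reproduced.

```python
# Sketch of the position-from-landing-pad step: recover the pad pose in the
# camera frame from four detected corner pixels with a PnP solve.  Pad size,
# intrinsics and the example corner detections are illustrative assumptions.
import cv2
import numpy as np

PAD_SIZE = 0.60  # landing pad edge length in metres (assumed)
object_pts = np.array([[0, 0, 0], [PAD_SIZE, 0, 0],
                       [PAD_SIZE, PAD_SIZE, 0], [0, PAD_SIZE, 0]], dtype=np.float32)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                    # assumed camera intrinsics

def pad_pose(corner_pixels):
    """corner_pixels: 4x2 detected pad corners (same order as object_pts).
    Returns the pad translation in the camera frame (metres)."""
    ok, rvec, tvec = cv2.solvePnP(object_pts, corner_pixels.astype(np.float32), K, None)
    return tvec.ravel() if ok else None

# Example with synthetic corner detections of a pad roughly 2 m away.
corners = np.array([[200, 120], [440, 120], [440, 360], [200, 360]], dtype=np.float32)
print(pad_pose(corners))
```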

13.
Visual navigation is one of the key technologies for autonomous robot mobility. To give an overall picture of the latest international research on visual navigation, this paper comprehensively reviews progress in visual navigation techniques for biologically inspired robots, focusing on the state of the art and open problems in three key areas: visual SLAM (Simultaneous Localization and Mapping), loop-closure detection, and visual homing. A new visual SLAM algorithmic framework is proposed, the key theoretical problems still to be solved are identified, and the difficulties and future trends in the development of visual navigation are summarized.

14.
15.
In this work, we are interested in technologies that allow users to actively browse and navigate large image databases and to retrieve images through fast, interactive browsing and navigation. The development of a browsing/navigation-based image retrieval system poses at least two challenges. The first is that the system's graphical user interface (GUI) should intuitively reflect the distribution of the images in the database, in order to provide users with a mental picture of the database content and a sense of orientation during browsing and navigation. The second is that it has to be fast and responsive, able to respond to users' actions at interactive speed in order to keep users engaged. We have developed a method that attempts to address these challenges. Its unique feature is an integrated approach to the design of the browsing/navigation GUI and the indexing and organization of the images in the database. The GUI is tightly coupled with the algorithms that run in the background: its visual cues are logically linked with various parts of the repository (image clusters of particular visual themes), providing intuitive correspondences between the GUI and the database contents. In the backend, the images are organized into a binary tree data structure using a sequential maximal information coding algorithm, and each image is indexed by an n-bit binary index, making the response to users' actions very fast. We present experimental results to demonstrate the usefulness of our method both as a pre-filtering tool and for developing browsing/navigation systems for fast image retrieval from large image databases.
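The backend indexing idea can be sketched as follows: recursively split the collection into a binary tree and label each image with the n-bit path from root to leaf, so that look-ups during browsing reduce to bit-string comparisons. The split rule used here (threshold the highest-variance feature dimension at its median) is a simple stand-in for the paper's sequential maximal information coding algorithm.

```python
# Sketch of the backend indexing idea: recursively split the collection into
# a binary tree and label each image with its n-bit root-to-leaf path.  The
# split rule below is a stand-in for sequential maximal information coding.
import numpy as np

def assign_codes(features, ids, depth, prefix="", codes=None):
    """features: NxD array; ids: N image identifiers.  Returns {id: bitstring}."""
    if codes is None:
        codes = {}
    if depth == 0 or len(ids) <= 1:
        for i in ids:
            codes[i] = prefix
        return codes
    dim = int(np.argmax(features.var(axis=0)))        # most informative dimension
    thr = np.median(features[:, dim])
    right = features[:, dim] > thr
    if right.all() or (~right).all():                 # cannot split further
        for i in ids:
            codes[i] = prefix
        return codes
    assign_codes(features[~right], [i for i, r in zip(ids, right) if not r],
                 depth - 1, prefix + "0", codes)
    assign_codes(features[right], [i for i, r in zip(ids, right) if r],
                 depth - 1, prefix + "1", codes)
    return codes

# Example: 8 images with 4-D colour features get 3-bit browse indices.
rng = np.random.default_rng(4)
feats = rng.normal(size=(8, 4))
print(assign_codes(feats, [f"img{i}" for i in range(8)], depth=3))
```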

16.
In this article, a new data-driven formulation of the particle filter framework is proposed. The new formulation is able to learn an approximate proposal distribution from previous data. By doing so, the need to explicitly model all the disturbances that might affect the system is relaxed. Such characteristics are particularly well suited to terrain-based navigation for sensor-limited AUVs, where typical scenarios often include non-negligible sources of noise affecting the system that are unknown and hard to model. Numerical results are presented that demonstrate the superior accuracy, robustness, and efficiency of the proposed data-driven approach.
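A toy sketch of the idea follows: a Gaussian proposal is fitted to displacements logged on previous runs (standing in for the learned approximate proposal) and used inside a particle filter for one-dimensional terrain-based navigation. The terrain profile, noise levels, and proposal model are all invented for illustration.

```python
# Toy sketch: learn an approximate proposal from previous data (here just a
# Gaussian fitted to logged displacements) and use it inside a particle
# filter for 1-D terrain-based navigation.  All quantities are illustrative.
import numpy as np

rng = np.random.default_rng(5)
terrain = lambda x: 50.0 + 10.0 * np.sin(0.05 * x)      # depth profile (made up)

# "Previous data": logged displacements from earlier runs -> learned proposal.
logged_steps = 1.0 + 0.3 * rng.normal(size=500)
prop_mu, prop_sigma = logged_steps.mean(), logged_steps.std()

def pf_step(particles, weights, depth_meas, meas_sigma=0.5):
    """One particle-filter step with the learned Gaussian proposal."""
    particles = particles + rng.normal(prop_mu, prop_sigma, size=particles.shape)
    lik = np.exp(-0.5 * ((terrain(particles) - depth_meas) / meas_sigma) ** 2)
    weights = weights * lik + 1e-300
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)   # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Example: track a vehicle drifting along x from noisy depth measurements.
true_x, particles = 0.0, rng.uniform(-20, 20, 1000)
weights = np.full(1000, 1e-3)
for _ in range(30):
    true_x += 1.0
    meas = terrain(true_x) + 0.5 * rng.normal()
    particles, weights = pf_step(particles, weights, meas)
print(true_x, particles.mean())
```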

17.
Many visual analytics systems allow users to interact with machine learning models towards the goals of data exploration and insight generation on a given dataset. However, in some situations, insights may be less important than the production of an accurate predictive model for future use. In that case, users are more interested in generating diverse and robust predictive models, verifying their performance on holdout data, and selecting the most suitable model for their usage scenario. In this paper, we consider the concept of Exploratory Model Analysis (EMA), which is defined as the process of discovering and selecting relevant models that can be used to make predictions on a data source. We delineate the differences between EMA and the well-known term exploratory data analysis in terms of the desired outcome of the analytic process: insights into the data, or a set of deployable models. The contributions of this work are a visual analytics system workflow for EMA, a user study, and two use cases validating the effectiveness of the workflow. We found that our system workflow enabled users to generate complex models, to assess them for various qualities, and to select the most relevant model for their task.
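A minimal sketch of the model-generation and holdout-selection loop that an EMA workflow automates: fit several candidate model families, score them on holdout data, and keep the best for deployment. The dataset and model families below are placeholders, not the ones studied in the paper.

```python
# Minimal sketch of the generate-then-select loop behind EMA: fit several
# candidate model families, score them on holdout data, and keep the best
# one for deployment.  Dataset and model choices here are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {
    "logistic": LogisticRegression(max_iter=5000),
    "tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_ho, y_ho) for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> deploy", best)
```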

18.
19.
This paper presents a review of surgical navigation systems in orthopaedics and categorizes these systems according to the image modalities used for the visualization of surgical action. Medical images have long been an essential part of surgical education and documentation, as well as of diagnosis and operation planning. With the recent introduction of navigation techniques in orthopaedic surgery, a new field of application has opened. Today, surgical navigation systems, also known as image-guided surgery systems, are available for various applications in orthopaedic surgery. They visualize the position and orientation of surgical instruments as graphical overlays onto a medical image of the operated anatomy on a computer monitor. Preoperative image data, such as computed tomography scans, or intraoperatively generated images (for example, ultrasonic, endoscopic, or fluoroscopic images) are suitable for this purpose. A new category of medical images, termed 'surgeon-defined anatomy', has been developed that relies exclusively on navigation technology: points on the anatomy are digitized interactively by the surgeon and are used to build an abstract geometrical model of the bony structures to be operated on. This technique may be used when no other image data are available or appropriate for a given application. Copyright © 2002 John Wiley & Sons, Ltd.

20.
In this paper, an RF energy harvesting system and an RF-based wireless power transfer system are proposed and designed for battery-less, self-sustaining applications. For energy harvesting, the designed antenna array effectively improves the received RF power and can also harvest RF energy in multiple frequency bands. For wireless power transfer, the proposed helical antenna enables a miniaturized system design. A T-shaped LC matching network is designed between the antenna and the rectifying circuit to obtain higher power transmission. The measured results show that the proposed Wi-Fi rectifier and 433 MHz rectifier offer maximum conversion efficiencies of 66.8% and 76% at input powers of −3 dBm and 0 dBm, respectively. Finally, the performance of the RF-based wireless power transfer system and the RF energy harvesting system is verified by experimental measurements, and the results indicate that these systems can be used to power electronic devices.
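As a small worked example of the reported figures, the snippet below converts the stated input powers from dBm to milliwatts and applies the quoted conversion efficiencies to obtain the harvested DC power; the dBm-to-milliwatt relation is standard, and the output values follow directly from the numbers in the abstract.

```python
# Worked numbers for the quoted efficiencies: convert the stated input
# powers from dBm to milliwatts and compute the harvested DC power.
def dbm_to_mw(p_dbm):
    return 10.0 ** (p_dbm / 10.0)

for band, p_dbm, eff in [("Wi-Fi", -3.0, 0.668), ("433 MHz", 0.0, 0.76)]:
    p_in = dbm_to_mw(p_dbm)
    print(f"{band}: {p_in:.3f} mW in -> {p_in * eff:.3f} mW DC out")
# Wi-Fi: 0.501 mW in -> 0.335 mW DC out
# 433 MHz: 1.000 mW in -> 0.760 mW DC out
```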
