Similar Documents
20 similar documents found (search time: 0 ms)
1.
A software environment tailored to computer vision and image processing (CVIP) is presented. It focuses on how information about the CVIP problem domain can make the high-performance algorithms and sophisticated algorithmic techniques designed by algorithm experts more readily available to CVIP researchers. The environment consists of three principal components: DISC, Cloner, and Graph Matcher. DISC (dynamic intelligent scheduling and control) supports experimentation at the CVIP task level by creating a dynamic schedule from a user's specification of the algorithms that constitute a complex task. Cloner is aimed at the algorithm development process; it is an interactive system that helps a user design new parallel algorithms by building on and modifying existing library algorithms. Graph Matcher performs the critical step of mapping new algorithms onto the target parallel architecture. Initial implementations of DISC and Graph Matcher have been completed, and work on Cloner is in progress.
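DISC itself is only sketched above, but the core idea, turning a user's specification of a complex task into an execution order for its constituent algorithms, can be illustrated with a toy dependency-driven scheduler. The sketch below is hypothetical Python (the task names and the `schedule` helper are invented for illustration, not taken from DISC):

```python
from collections import deque

def schedule(tasks, deps):
    """Toy stand-in for DISC-style task scheduling: order a
    user-specified task graph so each CVIP algorithm runs only
    after all of its inputs are ready (Kahn's topological sort)."""
    indeg = {t: 0 for t in tasks}
    out = {t: [] for t in tasks}
    for before, after in deps:
        out[before].append(after)
        indeg[after] += 1
    ready = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in out[t]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return order

# A hypothetical four-stage vision task specification.
print(schedule(["smooth", "edges", "lines", "match"],
               [("smooth", "edges"), ("edges", "lines"), ("lines", "match")]))
```

A real system such as DISC would additionally dispatch ready tasks to processors dynamically and adapt to run-time behavior, rather than fixing the order up front.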

2.
3.
A perspective on range finding techniques for computer vision
In recent times, the computer vision and robotics research community has shown a great deal of interest in acquiring range data to support scene analysis, leading to remote (noncontact) determination of the configurations and space-filling extents of three-dimensional object assemblages. This paper surveys a variety of approaches to generalized range finding and presents a perspective on their applicability and shortcomings in the context of computer vision studies.

4.
Recently a number of machine vision systems have been successfully implemented using pipeline architectures, and various new algorithms have been proposed. In this paper we propose a method of analysing both the time complexity and the space complexity of algorithms on conventional general-purpose pipeline architectures. We illustrate our method by applying it to an algorithm schema for local window operations satisfying a property we define as decomposability. It is shown that the proposed algorithm schema and its analysis generalize previously published results. We further analyse algorithms implementing operators that are not decomposable; in particular, the complexities of several median-type operations are compared and the implications for algorithm choice are discussed. We conclude with discussions of space-time trade-offs and implementation issues. (This research was partially supported by a grant from the Natural Sciences and Engineering Research Council of Canada. Part of this work was done while the author was at the University of Guelph, Guelph, Ontario, Canada.)
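To make the decomposability property concrete, here is a minimal Python/NumPy sketch (function names are illustrative, not from the paper): a k×k mean filter decomposes into two 1-D passes, so a pipeline can implement it with O(k) work per pixel per stage, whereas a k×k median filter is not decomposable this way and needs the full 2-D window at once.

```python
import numpy as np

def box_filter_decomposed(img, k):
    """k x k mean filter as two 1-D passes (decomposable:
    O(k) work per pixel instead of O(k^2) for the 2-D window)."""
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")

def median_filter_direct(img, k):
    """k x k median filter: not decomposable, so each output pixel
    requires all k^2 window values at once."""
    h, w = img.shape
    r = k // 2
    pad = np.pad(img, r, mode="edge")
    out = np.empty(img.shape)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(pad[i:i + k, j:j + k])
    return out
```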

5.
This paper presents several static and dynamic data-decomposition techniques for the parallel implementation of common computer vision algorithms. These techniques use the distribution of features in the input data as a measure of load for data decomposition. Experimental results are presented from implementing algorithms from a motion estimation system with these techniques on a hypercube multiprocessor. Normally in a vision system a sequence of algorithms is employed in which the output of one algorithm is the input to the next algorithm in the sequence. The distribution of features, computed as a by-product of the current task, is used to repartition the data for the next task in the system. This allows parallel computation of the feature distribution, so the overhead of estimating the load is kept small. It is observed that the communication overhead of repartitioning data with these run-time decomposition techniques is very small. It is shown that significant performance improvements over uniform block-oriented partitioning schemes are obtained.
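As a hedged illustration of feature-count-driven repartitioning (a serial Python stand-in; the paper's actual scheme runs on a hypercube, and the names here are invented): image rows are grouped so that each processor receives roughly an equal share of the feature load rather than an equal number of rows.

```python
import numpy as np

def partition_rows_by_load(feature_counts, n_procs):
    """Assign contiguous row blocks to processors so each block holds
    roughly 1/n_procs of the total feature count (the load measure)."""
    total = feature_counts.sum()
    cum = np.cumsum(feature_counts)
    # Row indices at which each processor's share of the load is reached.
    targets = total * np.arange(1, n_procs) / n_procs
    cuts = np.searchsorted(cum, targets)
    return np.split(np.arange(len(feature_counts)), cuts)

# Features clustered in the lower half of the image: uniform row
# blocks would leave half the processors nearly idle.
counts = np.array([1, 1, 2, 1, 20, 25, 18, 22])
for p, rows in enumerate(partition_rows_by_load(counts, 4)):
    print(f"proc {p}: rows {rows.tolist()}, load {counts[rows].sum()}")
```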

6.
The authors explore the connection between CAGD (computer-aided geometric design) and computer vision. A method for the automatic generation of recognition strategies based on the 3-D geometric properties of shape has been devised and implemented. It uses a novel technique to quantify the following properties of the features that compose models used in computer vision: robustness, completeness, consistency, cost, and uniqueness. By utilizing this information, the automatic synthesis of a specialized recognition scheme, called a strategy tree, is accomplished. Strategy trees describe, in a systematic and robust manner, the search process used for the recognition and localization of particular objects in a given scene. They consist of selected 3-D features that satisfy system constraints and of corroborating-evidence subtrees that are used in the formation of hypotheses. Verification techniques, used to substantiate or refute these hypotheses, are explored. Experiments utilizing 3-D data are presented.

7.
8.
9.
The cost of vision loss worldwide has been estimated at nearly $3 trillion (http://www.amdalliance.org/cost-of-blindness.html). Non-preventable diseases cause a significant proportion of blindness in developed nations and will become more prevalent as people live longer. Prosthetic vision technologies, including retinal implants, will play an important therapeutic role. Retinal implants convert an input image stream into visual percepts via stimulation of the retina. This paper highlights some barriers to restoring functional human vision with current-generation visual prosthetic devices that computer vision can help overcome. Such computer vision is interactive, aiming to restore function in tasks including visuo-motor control and recognition.

10.
A similarity measure is needed in many computer vision problems. Although the Euclidean distance has traditionally been used, the median distance was recently proposed as an alternative, mostly due to its robustness properties. In this paper, a parametric class of distances is presented that allows a notion of similarity to be introduced which depends on the problem being considered.
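The paper's specific parametric class is not reproduced here, but the Minkowski (Lp) family gives the flavor of a distance with a problem-dependent tuning parameter: p = 2 recovers the Euclidean distance, while lowering p toward 1 down-weights large coordinate deviations, in the spirit of the robustness that motivates the median distance. A minimal sketch:

```python
import numpy as np

def lp_distance(x, y, p):
    """Minkowski distance: p = 2 is Euclidean; smaller p gives less
    weight to large coordinate differences (more outlier-robust)."""
    return np.sum(np.abs(np.asarray(x) - np.asarray(y)) ** p) ** (1.0 / p)

a, b = [0.0, 0.0, 0.0], [1.0, 1.0, 10.0]  # one outlier coordinate
for p in (1, 2):
    print(f"L{p} distance: {lp_distance(a, b, p):.3f}")
```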

11.
On quantization errors in computer vision
The author considers the error resulting from the computation of multivariable functions h(X1, X2, ..., Xn), where all the Xi are available only in quantized form. In image processing and computer vision problems, the variables are typically a mixture of the spatial coordinates and the intensity levels of objects in an image. A method is introduced that uses a first-order Taylor series expansion together with a periodic extension of the resulting expression and its Fourier series representation, so that the moments and the probability distribution function of the error can be computed in closed form. This method requires only that the joint probability density function of the Xi be known, and it makes no assumption about the behavior of the quantization errors of the variables. Examples are also given where these results are applied.
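As a hedged sketch of the first-order Taylor step (generic notation, not copied from the paper): writing the quantized inputs as Xi + ei, the error in h is approximated by

```latex
\Delta h = h(X_1 + e_1, \ldots, X_n + e_n) - h(X_1, \ldots, X_n)
         \approx \sum_{i=1}^{n} \frac{\partial h}{\partial X_i}\, e_i
```

Under the classical uniform-quantizer assumption (each ei independent and uniform on [-Δ/2, Δ/2]), this first-order form would give Var(Δh) ≈ Σi (∂h/∂Xi)² Δ²/12; that independence/uniformity assumption is precisely what the paper avoids, instead obtaining exact moments through the periodic extension and its Fourier series representation.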

12.
Some aspects of a long-term parallel-processing research project (PACS, Pax, and Qcd Pax) begun in 1977 at Kyoto University and Hitachi Corporation's Nuclear Power Division are discussed. The discussion is based on an analysis of a number of papers, a book detailing this work, several visits to the project laboratory in Japan, and an examination of some programs that now run on Qcd Pax. The initial name, processor array for continuum simulation (PACS), was soon changed to Processor Array experiment, or Pax. Qcd Pax (for quantum chromodynamics) is the computer currently in operation. The characteristics of the family are described, and the hardware, communication, and memory functions of the host computer, the use of four levels of parallelism in programming, and performance are examined.

13.
14.
A finite element, adaptive mesh, free surface seepage parallel algorithm is studied using performance analysis tools in order to optimize its performance. The physical problem being solved is a free boundary seepage problem which is nonlinear and whose free surface is unknown a priori. A fixed domain formulation of the problem is discretized, and the parallel solution algorithm is of successive over-relaxation type. During the iteration process, data are passed between the processors by message-passing in order to update the calculations along the interfaces of the decomposed domains. A key theoretical aspect of the approach is the application of a projection operator onto the positive solution domain; this operation has to be applied at every iteration at every computational point. The VAMPIR and PARAVER performance analysis software packages are used to analyze and understand the execution behavior of the parallel algorithm, including communication patterns, processor load balance, computation-to-communication ratios, timing characteristics, and processor idle time. This is all done through displays of post-mortem trace files, so performance bottlenecks can easily be identified at the appropriate level of detail. This is demonstrated numerically using example test data, and comparisons of software capabilities are made using the Blue Horizon parallel computer at the San Diego Supercomputer Center.
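A minimal serial sketch of the projected-SOR idea (illustrative Python; the variable names and the toy system are invented, and the actual algorithm is a domain-decomposed, message-passing version): each SOR point update is immediately followed by projection onto the positive solution domain, i.e., clamping at zero, which is exactly the per-point operation described above.

```python
import numpy as np

def projected_sor(A, b, omega=1.5, iters=200):
    """SOR iteration for A x = b with projection onto x >= 0
    (the positive solution domain) applied at every point update."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            gs = (b[i] - sigma) / A[i, i]                     # Gauss-Seidel value
            x[i] = max(0.0, (1 - omega) * x[i] + omega * gs)  # SOR step + projection
    return x

# Small SPD tridiagonal system as a stand-in for the FE discretization;
# the mixed-sign right-hand side forces the projection to activate.
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.array([1.0, -2.0, 1.0, -2.0, 1.0])
print(projected_sor(A, b))
```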

15.
A massively parallel fine-grained SIMD (single-instruction, multiple-data-stream) computer for machine vision computations is described. The architecture features a polymorphic-torus network, which inserts an individually controllable switch into every node of the two-dimensional torus so that the network can be dynamically reconfigured to match the algorithm. Reconfiguration is accomplished by circuit switching and is achieved at the fine-grained level. Using both the processor coordinates in the torus and the data for reconfiguration, the polymorphic torus achieves solution times superior or equal to those of popular vision architectures such as the mesh, tree, pyramid, and hypercube for many of the vision algorithms discussed. An implementation of the architecture is given to illustrate its VLSI efficiency.

16.
Some general principles are formulated about geometric reasoning in the context of model-based computer vision. Such reasoning tries to draw inferences about the spatial relationships between objects in a scene from the fragmentary and uncertain geometric evidence provided by an image. The paper discusses the tasks the reasoner is to perform for the vision program, the basic competences it requires, and the various methods of implementation. In the section on basic competences, some specifications of the data types and operations needed in any geometric reasoner are given.

17.
Robot guidance using computer vision
This paper describes a procedure for locating an autonomous robot vehicle in three-dimensional space. The method is an extension of a two-dimensional technique described by Fukui. It is simple, requires little computation, and provides the unique three-dimensional position of the robot relative to a standard mark.

18.
In this discussion paper, I present my views on the role of mathematical statistics in solving computer vision problems.

19.
20.
Reeves, A.P. IEEE Software, 1991, 8(6): 51-59
Two Unix environments developed for programming parallel computers for image-processing and vision applications are described. Visx is a portable environment for developing vision applications that has been used in research for many years on serial computers. Visx was adapted to run on a multiprocessor with modest parallelism, using functional decomposition and standard operating-system capabilities to exploit the parallel hardware. Paragon is a high-level environment for multiprocessor systems that has facilities for both functional decomposition and data partitioning. It provides primitives that work efficiently on several parallel-processing systems. Paragon's primitives can be used to build special image-processing operations, allowing one's own programming environment to be grown naturally.
