Similar Documents
20 similar documents found (search time: 31 ms)
1.
During capture of a free-flying object, a robotic servicer can be subject to impacts, which may separate it from the object or damage crucial subsystems. However, the reactions can be minimized using the Centre of Percussion (CoP) concept. Following a brief introduction of the two- and three-dimensional cases, the performance of a robot under impact is assessed when the CoP concept is employed. The effects of parametric uncertainties on manipulator joint reactions are studied. A control method to compensate for the reaction forces is proposed. Implementation guidelines are discussed. Simulations of a planar space robot validate the analysis.
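For orientation, the standard rigid-body relation behind the CoP concept (a textbook result, not taken from this abstract) is: for a planar body of mass m whose centre of mass lies a distance d from a pivot O, with moment of inertia I_O about O, an impulsive force applied at the centre of percussion, located a distance q = I_O / (m d) from O along the line through the centre of mass, produces no reaction impulse at the pivot. Capturing near the CoP is therefore what keeps the impact reactions transmitted through the manipulator joints small.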

2.
In computer vision, motion analysis is a fundamental problem. Applying the concepts of congruence checking in computational geometry and geometric hashing, a technique used for recognizing partially occluded objects from noisy data, we present a new random sampling approach for estimating the motion parameters, in two- and three-dimensional Euclidean spaces, of both a completely measured rigid object and a partially occluded rigid object. We assume that the two- and three-dimensional positions of the vertices of the object in each image frame are determined using appropriate methods such as a range sensor or stereo techniques. We also analyze the relationship between the quantization errors and the errors in the motion parameters estimated by random sampling, and we show that the solutions obtained with our algorithm converge to the true solutions as the resolution of the digitization increases.
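As background for how motion parameters are typically recovered from sampled vertex correspondences, here is a minimal sketch of the standard SVD-based (Kabsch) least-squares rigid alignment that random-sampling schemes of this kind usually apply to each sampled subset; it is not the authors' algorithm itself, and the function name and interface are illustrative.

```python
import numpy as np

def rigid_motion_from_correspondences(P, Q):
    """Least-squares rigid motion (R, t) such that Q ~ R @ P + t.

    P, Q: (n, d) arrays of corresponding vertex positions (d = 2 or 3).
    Standard SVD (Kabsch) solution; a random-sampling scheme would call
    this on small sampled subsets and keep the most consistent estimate.
    """
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t
```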

3.
Most existing indexing schemes for the historical trajectories of moving objects are designed for outdoor spaces and are difficult to apply directly indoors; moreover, they do not index the object itself as an independent dimension and therefore cannot support efficient object-trajectory queries. To address this, an indoor moving-object index structure, the DR-tree, is proposed to index moving data along three dimensions: position, time, and object. By decoupling the position dimension from the object dimension, the three-dimensional index is converted into two two-dimensional indexes, and a query optimization scheme is also given. Experimental results show that, compared with the RTR-tree, an existing indoor indexing scheme, the proposed structure supports not only efficient spatio-temporal queries but also efficient object-trajectory queries.

4.
The MaxBRNN problem is to find an optimal region such that setting up a new service within this region would attract the maximum number of customers by proximity. The problem has many practical applications, such as service location planning and emergency scheduling. In typical real-life applications the data volume is huge, so an efficient solution is highly desirable. In this paper, we propose two efficient algorithms, OptRegion and 3D-OptRegion, to tackle the MaxBRNN and MaxBRkNN problems in two- and three-dimensional spaces. Our method employs three optimization techniques, namely a sweep line (a sweep plane in three-dimensional space), a pruning strategy based on upper-bound estimation, and influence-value computation for candidate points, to improve search performance. In a three-dimensional space, 3D-OptRegion additionally uses a fine-grained pruning strategy to further reduce the search space. Extensive experimental evaluation on both real and synthetic datasets confirms that OptRegion and 3D-OptRegion significantly outperform existing algorithms under all problem instances.

5.
Visualization is one of the most effective methods for analyzing how high-dimensional data are distributed. Dimensionality reduction techniques, such as PCA, can be used to map high-dimensional data to a two- or three-dimensional space. In this paper, we propose an algorithm called HyperMap that can be effectively applied to visualization. Our algorithm can be seen as a generalization of FastMap: it preserves FastMap's linear computational complexity while overcoming several of its main shortcomings, especially in visualization. Since there are more than two pivot objects on each axis of the target space, more distance information is preserved in each dimension, and in visualization the number of pivot objects can go beyond the limit of six (2 pivot objects × 3 dimensions). HyperMap also gives more flexibility to the target space, so that the data distribution can be observed from various viewpoints. Its effectiveness is confirmed by empirical evaluations on both real and synthetic datasets.
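To make the pivot-based mapping concrete, here is a minimal sketch of the classic FastMap projection onto one axis defined by a pivot pair, the baseline that HyperMap generalizes; HyperMap's use of more than two pivots per axis, and FastMap's residual-distance update between axes, are not reproduced, and the function name and pivot choices are illustrative.

```python
import numpy as np

def fastmap_axis(D, a, b):
    """Project every object onto one target axis defined by pivot pair (a, b).

    D: (n, n) matrix of pairwise distances. Classic FastMap uses the cosine
    law: x_i = (D[a,i]^2 + D[a,b]^2 - D[b,i]^2) / (2 * D[a,b]).
    """
    dab = D[a, b]
    if dab == 0:
        return np.zeros(D.shape[0])
    return (D[a, :] ** 2 + dab ** 2 - D[b, :] ** 2) / (2.0 * dab)

# Two axes -> a 2D layout for visualization (pivot indices are illustrative).
# Full FastMap would recompute residual distances before the second axis;
# that step is omitted here.
# D = ...  pairwise distance matrix
# xy = np.column_stack([fastmap_axis(D, 0, 1), fastmap_axis(D, 2, 3)])
```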

6.

One of the most efficient means to understand complex data is to visualize them in two- or three-dimensional space. As meaningful data are likely to be high dimensional, visualizing them requires dimensionality reduction algorithms, whose objective is to map high-dimensional data into a low-dimensional space while preserving some of their underlying structure. For labeled data, the low-dimensional representations should embed their classifiability so that the class structure becomes visible. It is also beneficial if an algorithm can classify labeled input while at the same time performing dimensionality reduction, visually offering information about the data's structure to give a rationale for the classification. However, most currently available dimensionality reduction methods are not equipped with classification features, while most classification algorithms lack transparency in rationalizing their decisions. In this paper, the restricted radial basis function network (rRBF), a recently proposed supervised neural network with a low-dimensional internal representation, is used to visualize high-dimensional data while also performing classification. The primary focus of this paper is to empirically explain the classifiability and visual transparency of the rRBF.


7.
《Real》1998,4(5):349-359
We have previously demonstrated that the performance of tracking algorithms can be improved by integrating information from multiple cues in a model-driven Bayesian reasoning framework. Here we extend our work to active vision tracking with variable camera geometry. Many existing active tracking algorithms avoid the problem of variable camera geometry by tracking view-independent features, such as corners and lines. However, the performance of algorithms based on such single features deteriorates greatly in the presence of specularities and dense clutter. We show that, by integrating multiple cues and updating the camera geometry on-line, it is possible to track a complicated object moving arbitrarily in three-dimensional (3D) space. We use a four degree-of-freedom (4-DoF) binocular camera rig to track three focus features of an industrial object whose complete model is known. The camera geometry is updated using the rig control commands and the kinematic model of the stereo head. The extrinsic parameters are further refined by interpolation from a previously sampled calibration of the head's work space. The 2D target position estimates are obtained by a combination of blob detection, edge searching and gray-level matching, aided by projecting the model's geometrical structure using current estimates of the camera geometry. The information is represented as a probability density distribution and propagated in a Bayes net. The Bayesian reasoning performed in the 2D image is coupled with the rigid model geometry constraint in 3D space. An αβ filter is used to smooth the tracking pursuit and to predict the position of the object in the next iteration of data acquisition. The solution of the inverse kinematic problem at the predicted position is used to control the position of the stereo head. Finally, experiments show that a target undergoing arbitrary 3D motion can be successfully tracked in the presence of specularities and dense clutter.
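For reference, a minimal sketch of a generic alpha-beta (αβ) filter update of the kind mentioned above; the gains, time step, and interface are illustrative assumptions, not the paper's values.

```python
def alpha_beta_step(x, v, z, dt, alpha=0.85, beta=0.005):
    """One alpha-beta filter update for a single tracked coordinate.

    x, v: current position and velocity estimates; z: new measurement.
    Predict, then correct both states from the innovation (residual).
    """
    x_pred = x + dt * v           # predict position from current velocity
    r = z - x_pred                # innovation: measurement minus prediction
    x_new = x_pred + alpha * r    # correct position
    v_new = v + (beta / dt) * r   # correct velocity
    return x_new, v_new
```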

8.
A technique for determining the distortion parameters (location and orientation) of general three-dimensional objects from a single range image view is introduced. The technique is based on an extension of the straight-line Hough transform to three-dimensional space. It is very efficient and robust, since the dimensionality of the feature space is low and since it uses range images directly (with no preprocessing such as segmentation and edge or gradient detection). Because the feature space separates the translation and rotation effects, a hierarchical algorithm to detect object rotation and translation is possible. The new Hough space can also be used as a feature space for discriminating among three-dimensional objects.

9.
Range and k-nearest-neighbor searching are core problems in pattern recognition. Given a database S of objects in a metric space M and a query object q in M, a range search finds the objects of S within some threshold distance of q, whereas a k-nearest-neighbor search must produce the k elements of S closest to q. These problems can obviously be solved with a linear number of distance calculations by comparing the query object against every object in the database; the goal, however, is to solve them much faster. We combine and extend ideas from the M-tree, the multi-vantage-point structure, and the FQ-tree to create a new structure in the "bisector tree" class, called the Antipole tree. Bisection is based on proximity to an "Antipole" pair of elements generated by a suitable linear randomized tournament. The final winners a and b of such a tournament are far enough apart to approximate the diameter of the splitting set. If dist(a, b) is larger than the chosen cluster-diameter threshold, the cluster is split. The proposed data structure is an indexing scheme suitable for exact and approximate best-match searching in generic metric spaces. The Antipole tree outperforms existing structures such as the list of clusters, the M-tree, and others by a factor of approximately two and, in many cases, achieves better clustering properties.
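The following is a rough, illustrative sketch of the two ideas named above: a randomized tournament that returns a far-apart "Antipole" pair approximating the set's diameter, and a bisection of the set by proximity to that pair. The group size, termination details, and function names are assumptions, not the paper's exact construction.

```python
import random

def antipole_pair(objects, dist, width=3):
    """Approximate the diameter of a set by a randomized tournament.

    In each round the objects are grouped, and only the farthest pair of
    each group survives; the final winners (a, b) are the 'Antipole' pair.
    Assumes at least two objects.
    """
    pool = list(objects)
    random.shuffle(pool)
    while len(pool) > 2:
        survivors = []
        for i in range(0, len(pool), width):
            group = pool[i:i + width]
            if len(group) < 2:
                survivors.extend(group)
                continue
            a, b = max(((x, y) for x in group for y in group if x is not y),
                       key=lambda pair: dist(*pair))
            survivors.extend([a, b])
        if len(survivors) >= len(pool):   # defensive guard against no progress
            break
        pool = survivors
    return pool[0], pool[1]

def split_by_antipole(objects, dist, a, b):
    """Bisect the set by proximity to the Antipole endpoints."""
    left = [o for o in objects if dist(o, a) <= dist(o, b)]
    right = [o for o in objects if dist(o, a) > dist(o, b)]
    return left, right
```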

10.
Finding proximity information is crucial for massive database search. Locality Sensitive Hashing (LSH) is a method for finding the nearest neighbors of a query point in a high-dimensional space; it classifies high-dimensional data according to data similarity. However, the “curse of dimensionality” makes LSH insufficiently effective at finding similar data and insufficiently efficient in terms of memory resources and search delays. The contribution of this work is threefold. First, we study a Token List based information Search scheme (TLS) as an alternative to LSH. TLS builds a token list table containing all the unique tokens in the database and clusters data records having the same token together in one group. Querying is conducted in a small number of groups of relevant data records instead of searching the entire database. Second, to decrease the time needed to search the token list, we propose Optimized Token list based Search schemes (OTS) based on index-tree and hash-table structures. The index-tree structure orders the tokens in the token list and constructs an index table based on the tokens; searching the token list then starts from the entry supplied by the index table. The hash-table structure assigns a hash ID to each token, so a query token can be located directly in the token list by its hash ID. Third, since a single-token method leads to high overhead when refining results for a required similarity, we further investigate how a Multi-Token List Search scheme (MTLS) improves the performance of database proximity search. We conducted experiments on the LSH-based searching scheme, TLS, OTS, and MTLS using a massive customer data integration database. The comparative results show that TLS is more efficient than the LSH-based scheme and that OTS improves the search efficiency of TLS. Further, MTLS performs better than TLS when the number of tokens is chosen appropriately, and a two-token adjacent token list achieves the shortest query delay on our test dataset.
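A minimal sketch of the token-list idea described above, using a hash-table token list (token → group of record ids) and a multi-token query that visits only the groups sharing a token with the query; the tokenizer, ranking rule, and names are illustrative assumptions, and the paper's refinement step is omitted.

```python
from collections import defaultdict

def build_token_index(records, tokenize):
    """Token list as a hash table mapping each token to its group of record ids."""
    index = defaultdict(set)
    for rid, rec in enumerate(records):
        for tok in tokenize(rec):
            index[tok].add(rid)
    return index

def query(index, records, q, tokenize):
    """Search only the groups sharing a token with the query; records that
    share more query tokens are ranked first (MTLS-style)."""
    counts = defaultdict(int)
    for tok in tokenize(q):
        for rid in index.get(tok, ()):
            counts[rid] += 1
    ranked = sorted(counts, key=counts.get, reverse=True)
    return [records[rid] for rid in ranked]

# Illustrative usage:
# recs = ["alice smith seattle", "bob smith boston", "carol jones seattle"]
# idx = build_token_index(recs, str.split)
# print(query(idx, recs, "smith seattle", str.split))
```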

11.
A computer program for graphical analysis of multidimensional flow cytometric list-mode data is described. The program offers one-, two-, and three-dimensional inspection of an amount of data that is limited only by disk space. Subpopulations within the original data set can be identified by setting one or more two-dimensional AND gates around them. The order of measurement can be used as a parameter for evaluating time-dependent processes. New parameters can also be derived by zooming in on a parameter, by logarithmic transformation, or by dividing two parameters. The program is written in Turbo Pascal and runs on any MS-DOS PC with an EGA/VGA-resolution screen.

12.
Based on an in-depth analysis of the Spark big-data platform and the Eclat algorithm, a Spark-based Eclat algorithm (SPEclat) is proposed. To overcome the shortcomings of the serial algorithm on large-scale data, the method improves it in several respects: the data storage format is changed to reduce the cost of counting candidate-itemset support, and the data are grouped by prefix and distributed to different compute nodes, which compresses the search space and enables parallel computation. The algorithm is finally implemented by exploiting the advantages of the Spark cloud-computing platform. Experiments show that the algorithm runs efficiently on massive datasets and scales well as the data volume grows substantially.
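To make the vertical-layout and prefix-grouping ideas concrete, here is a serial Python sketch of Eclat over tid-sets with prefix-based (equivalence-class) candidate growth; in SPEclat each prefix group would be shipped to a different Spark partition, a step omitted here, and the function names are ours rather than the paper's.

```python
from collections import defaultdict

def eclat(transactions, min_support):
    """Eclat on a vertical (item -> tid-set) layout with prefix grouping.

    Support counting is a set intersection, and candidates are grown inside
    prefix-based equivalence classes. Returns {itemset tuple: support}.
    """
    tidsets = defaultdict(set)
    for tid, items in enumerate(transactions):
        for item in items:
            tidsets[item].add(tid)

    frequent = {}

    def extend(prefix, prefix_tids, tail):
        # tail: remaining (item, tid-set) pairs in the current prefix class
        for i, (item, tids) in enumerate(tail):
            new_tids = prefix_tids & tids if prefix else tids
            if len(new_tids) >= min_support:
                itemset = prefix + (item,)
                frequent[itemset] = len(new_tids)
                extend(itemset, new_tids, tail[i + 1:])

    items = sorted(tidsets.items(), key=lambda kv: kv[0])
    extend((), set(), items)
    return frequent

# Illustrative usage:
# print(eclat([{"a", "b", "c"}, {"a", "c"}, {"b", "c"}], min_support=2))
```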

13.
In ray tracing, the two most commonly used data structures are the octree and uniform cell division. The octree allows efficient adaptive subdivision of space and takes care of the spatial coherence of the objects in it; however, with a tree structure, locating the next node in the path of a ray is complex and time-consuming. The cell structure, on the other hand, can be stored in a three-dimensional array, and each cell can be accessed efficiently by specifying three indices; however, uniform cell division does not take care of object coherence. The proposed data structure combines the positive features of these structures while minimising their disadvantages. The entire object space is implicitly assumed to be a three-dimensional grid of cells. Initially, the entire object space is a single voxel, which later undergoes “adaptive cell division.” Unlike the octree, where each voxel is divided exactly at the middle of each dimension, in adaptive cell division each voxel is divided at the nearest cell boundary, so each voxel contains an integral number of cells along each axis. Corresponding to the implicit cell division we maintain a three-dimensional array, each element of which contains a voxel number used to index into the voxel array. The voxel array stores information about the structure of each voxel, in particular the objects it contains. While a ray moves from one voxel to another, we always keep track of the cell through which the ray is currently passing. Since only arrays are involved in accessing the next voxel in the path of the ray, the operation is very efficient.
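A rough sketch of the lookup side of such a structure, assuming a uniform cell grid whose entries hold voxel ids into a voxel array of object lists; the adaptive splitting criterion and the actual ray-stepping loop are not shown, and all names are illustrative.

```python
import numpy as np

class CellVoxelGrid:
    """Cells index voxels: a uniform n^3 cell grid in which each cell stores
    the id of the (larger, adaptively split) voxel that contains it."""

    def __init__(self, n_cells, bounds_min, bounds_max):
        self.n = n_cells
        self.lo = np.asarray(bounds_min, dtype=float)
        self.size = (np.asarray(bounds_max, dtype=float) - self.lo) / n_cells
        self.cell_to_voxel = np.zeros((n_cells,) * 3, dtype=int)  # all cells -> voxel 0
        self.voxels = [[]]                                        # voxel 0: its object list

    def split(self, cell_slice):
        """Register a new voxel covering an axis-aligned block of whole cells."""
        self.voxels.append([])
        self.cell_to_voxel[cell_slice] = len(self.voxels) - 1

    def objects_at(self, point):
        """O(1) lookup: point -> cell indices -> voxel id -> object list."""
        idx = np.clip(((np.asarray(point) - self.lo) / self.size).astype(int),
                      0, self.n - 1)
        return self.voxels[self.cell_to_voxel[tuple(idx)]]
```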

14.
On describing complex surface shapes
The fractal surface model [1] is extended to provide a formalism that can describe complex, natural three-dimensional surfaces in either a quantitative or a qualitative manner and which, in addition, closely mimics human perceptual judgments of surface structure (e.g. ‘peaks’, ‘ridges’ or ‘valleys’) and three-dimensional texture. This representation is then used to develop a statistical version of scale-space filtering that is applicable to one-, two- or three-dimensional data.

15.
Web data mining technology for small texts and its applications
Existing search engines return too much miscellaneous information to the user. To address this, a Web text data mining technique for small texts based on an approximate web-page clustering algorithm is proposed. The technique builds a vocabulary according to the user's degree of interest, obtains groups of word-segmentation dictionaries with a fuzzy clustering method, removes duplicate pages with the MD5 algorithm, clusters the remaining pages with the approximate web-page clustering algorithm, and ranks the clustering results with a Markov Web sequence mining algorithm, thereby providing a sequence of web-page clusters that interest the user and allowing pages of interest to be found quickly. Experiments show that the algorithm greatly improves search efficiency while maintaining recall and precision. Because the mining targets small texts, the algorithm's time and space complexity are both low, so it promises to be a practical and effective information retrieval technique.
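The MD5 de-duplication step is straightforward to illustrate; a minimal sketch follows, in which the whitespace normalization is an assumption and the clustering and Markov ranking stages are not shown.

```python
import hashlib

def remove_duplicate_pages(pages):
    """Drop exact duplicate pages by the MD5 digest of their normalized text."""
    seen, unique = set(), []
    for page in pages:
        digest = hashlib.md5(" ".join(page.split()).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(page)
    return unique
```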

16.
17.
Algorithms are presented for converting between different three-dimensional object representations: from a collection of cross-section outlines to surface points, and from surface points to a collection of overlapping spheres. The algorithms effect a conversion from surface representations (outlines or surface points) to a volume representation (spheres). The spherical representation can be useful for graphical display, and perhaps as an intermediate representation for conversions to representations with other primitives. The spherical decomposition also permits the computation of points on the symmetric surface of an object, the three-dimensional analog of Blum's symmetric axis. The algorithms work in real coordinates rather than in a discrete space, and so avoid the error introduced by quantization of the space.

18.
Object capture environments are characterized by large volumes of measurement data and complex object contours. Current methods can acquire accurate 3D point-cloud data of an object but lack color and texture information, which leads to low reconstruction accuracy and poor realism. To address this, an object reconstruction and modeling method based on 3D laser scanning is proposed. The method acquires the object's point-cloud data by 3D laser scanning, smooths the entire 3D surface with an explicit Euler integration method, triangulates the object in 3D space with a triangle-growing method, and maps the mesh vertices onto a sphere to construct the triangular mesh model. The iterative closest point (ICP) algorithm is then used to refine the coarse alignment of asynchronously acquired point clouds, and a nearest-point search algorithm fuses the object's color information, optimized by a multi-view stereo algorithm, with the 3D point-cloud coordinates. Experimental results show that the proposed method can build an accurate 3D reconstruction model of the object quickly, verifying its feasibility.
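As an illustration of the ICP refinement step mentioned above, here is a minimal point-to-point ICP sketch using a k-d tree for nearest-neighbor matching and an SVD solve for the rigid transform; convergence tests, outlier rejection, and the color fusion are omitted, and the interface is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Minimal point-to-point ICP refinement of an already coarse alignment.

    source, target: (n, 3) and (m, 3) point clouds. Each iteration matches
    every source point to its nearest target point and solves the best rigid
    transform by SVD. Returns the accumulated rotation and translation.
    """
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)                         # nearest-neighbor matches
        matched = target[idx]
        p_bar, q_bar = src.mean(axis=0), matched.mean(axis=0)
        H = (src - p_bar).T @ (matched - q_bar)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                               # best rotation this round
        t = q_bar - R @ p_bar
        src = src @ R.T + t                              # apply to source cloud
        R_total, t_total = R @ R_total, R @ t_total + t  # accumulate transform
    return R_total, t_total
```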

19.
Genetic object recognition using combinations of views
This paper investigates the application of genetic algorithms (GAs) to recognizing real 2D or 3D objects from 2D intensity images, assuming that the viewpoint is arbitrary. Our approach is model-based (i.e., we assume a pre-defined set of models), while our recognition strategy relies on the theory of algebraic functions of views. According to this theory, the variety of 2D views depicting an object can be expressed as a combination of a small number of 2D views of the object. This implies a simple and powerful strategy for object recognition: novel 2D views of an object (2D or 3D) can be recognized by simply matching them to combinations of known 2D views of the object. In other words, objects in a scene are recognized by "predicting" their appearance through the combination of known views of the objects. This is an important idea, which is also supported by psychophysical findings indicating that the human visual system works in a similar way. The main difficulty in implementing this idea is determining the parameters of the combination of views. This problem can be solved either in the space of feature matches among the views ("image space") or in the space of parameters ("transformation space"). In general, both of these spaces are very large, making the search very time-consuming. In this paper, we propose using GAs to search these spaces efficiently. To improve the efficiency of genetic search in the transformation space, we use singular value decomposition and interval arithmetic to restrict the search to the most feasible regions of the transformation space. The effectiveness of the GA approaches is shown on a set of increasingly complex real scenes where exact and near-exact matches are found reliably and quickly.
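To illustrate "determining the parameters of the combination of views" in image space, here is a plain least-squares sketch that fits coefficients expressing a novel view's feature coordinates as a linear combination of known views (plus a constant term); the GA search, the SVD/interval-arithmetic constraints, and the exact algebraic form used in the paper are not reproduced, and the names are illustrative.

```python
import numpy as np

def view_combination_coeffs(known_views, novel_view):
    """Fit coefficients expressing a novel 2D view as a linear combination of
    known 2D views (plus a constant term).

    known_views: list of (n, 2) arrays of matched feature coordinates in the
    reference views; novel_view: (n, 2) array in the new image.
    """
    n = novel_view.shape[0]
    A = np.hstack([v.reshape(n, -1) for v in known_views] + [np.ones((n, 1))])
    coeffs, *_ = np.linalg.lstsq(A, novel_view, rcond=None)
    return coeffs   # shape (2 * n_views + 1, 2): one column per novel coordinate

def predict_view(known_views, coeffs):
    """'Predict' the object's appearance in the novel view from the fitted coefficients."""
    n = known_views[0].shape[0]
    A = np.hstack([v.reshape(n, -1) for v in known_views] + [np.ones((n, 1))])
    return A @ coeffs
```

A recognition decision would then compare the predicted coordinates with the observed ones; a poor fit indicates that the hypothesized model does not explain the scene.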

20.
Low-dimensional embedding of high-dimensional data manifolds and the embedding dimension
Finding meaningful low-dimensional embeddings of manifolds in high-dimensional data spaces is a classical hard problem. Isomap is an effective nonlinear dimensionality reduction method based on manifold theory; it can reveal the intrinsic structure of high-dimensional observations and discover the underlying low-dimensional parameter space. Isomap rests on the assumption that an isometric mapping exists between the high-dimensional data space and the low-dimensional parameter space, but this assumption has not been proved. This paper first proves the existence of an isometric mapping between the continuous manifold of high-dimensional data and the low-dimensional parameter space. It then distinguishes the embedding dimension, the intrinsic dimension of the high-dimensional data, and the manifold dimension, and proves that for high-dimensional data spaces containing ring-shaped manifolds the dimension of the parameter space is smaller than the embedding dimension. Finally, a ring-manifold discovery algorithm is proposed to decide whether a ring-shaped manifold exists in a high-dimensional data space and to estimate its intrinsic dimension and the dimension of the latent space. Experiments on multi-pose 3D objects demonstrate the effectiveness of the algorithm and yield the correct low-dimensional parameter space.
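As a small illustration of the embedding step discussed above (not the paper's ring-manifold detection or dimension-estimation algorithm), the sketch below embeds a synthetic ring-shaped manifold with scikit-learn's Isomap; the data generation and parameter choices are assumptions.

```python
import numpy as np
from sklearn.manifold import Isomap

# Synthetic ring (1D circular manifold) embedded linearly in a 10D space,
# with a little noise added.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
X = circle @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(500, 10))

# A ring needs a 2D embedding even though its intrinsic (parameter) dimension
# is 1 -- the gap between embedding dimension and intrinsic dimension that the
# abstract above reasons about.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)   # (500, 2)
```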
