Similar documents
20 similar documents found (search time: 156 ms)
1.
In this article, we propose a method to find corresponding object-set pairs between image and map polygon object data sets by means of latent semantic analysis. Latent semantic analysis maps each polygon object of both data sets to a feature vector in a continuous geometric space in which the similarity between vectors is proportional to the priority with which they constitute a corresponding object-set pair. Object clusters can therefore be obtained by applying agglomerative hierarchical clustering to the feature vectors. These object clusters are separated into object-set pairs according to the data sets to which the objects belong and are evaluated with a geometric matching criterion to find corresponding object-set pairs. We applied the proposed method to the segmentation result of a composite image of six normalized difference vegetation index (NDVI) images and to a forest inventory map, and compared it to a graph-embedding-based method. The results showed that the proposed method found more corresponding object-set pairs with similar accuracy in terms of shape similarity and the shared information of the found pairs.
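The clustering step this abstract describes can be sketched with a toy single-linkage agglomerative clusterer. The feature vectors, distance threshold, and linkage rule below are illustrative assumptions, not the paper's actual LSA output:

```python
import numpy as np

def agglomerative_clusters(vectors, threshold):
    """Single-linkage agglomerative clustering: repeatedly merge the two
    closest clusters until no inter-cluster distance falls below threshold."""
    clusters = [[i] for i in range(len(vectors))]

    def dist(a, b):
        # Single linkage: distance between the closest members.
        return min(np.linalg.norm(vectors[i] - vectors[j]) for i in a for j in b)

    while len(clusters) > 1:
        best = None
        for x in range(len(clusters)):
            for y in range(x + 1, len(clusters)):
                d = dist(clusters[x], clusters[y])
                if best is None or d < best[0]:
                    best = (d, x, y)
        if best[0] > threshold:
            break
        _, x, y = best
        clusters[x] = clusters[x] + clusters[y]
        del clusters[y]
    return clusters

# Toy feature vectors: two tight groups, as LSA might produce for
# image/map polygons that belong to the same corresponding object-set pair.
vecs = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
groups = agglomerative_clusters(vecs, threshold=1.0)
```

Each resulting cluster would then be split by source data set and checked against the geometric matching criterion.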

2.
A Globally Optimized Polygon Morphing Method and Its Applications (cited 11 times: 1 self-citation, 10 by others)
By convexly decomposing the polygons and establishing a mapping between the convex subsets of the two polygons, this paper proposes a new globally optimized morphing method based on convex polygons, which solves the morphing problem for arbitrary polygons of non-identical topology (including polygons with holes and concave polygons). The correctness of the method is proved theoretically, and the influence of different convex decompositions on the morphing result is discussed. Experiments show that the method produces natural, high-quality morphs quickly and with a high degree of automation, and that it can be used for synthesizing Chinese characters and for key-frame interpolation in 2D animation.

3.
4.
The problem of record linkage is to identify records from two datasets that refer to the same entities (e.g. patients). One particular issue in record linkage, which has not been fully addressed, is the presence of missing values in records. Another is how privacy and confidentiality can be preserved during linkage. In this paper, we propose an approach to privacy-preserving record linkage in the presence of missing values. For any missing value in a record, our approach imputes the similarity between the missing value and the value of the corresponding field in each possible matching record from the other dataset, using the k-NNs (k nearest neighbours in the same dataset) of the record with the missing value, and their distances to that record, for the imputation. For privacy preservation, our approach uses the Bloom filter protocol, both in the standard setting of privacy-preserving record linkage without missing values and in the setting with missing values. We conducted an experimental evaluation on three pairs of synthetic datasets with different rates of missing values; the results show the effectiveness and efficiency of the proposed approach.
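The Bloom filter protocol this abstract relies on is commonly built from character bigrams and compared with the Dice coefficient. The sketch below uses conventional choices (filter size `m`, hash count `k`), not details taken from the paper, to show why two encoded names can be compared without being revealed:

```python
import hashlib

def bigrams(s):
    """Character bigrams of a padded, lower-cased string."""
    s = f"_{s.lower()}_"  # pad so first/last letters also form bigrams
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom_encode(s, m=256, k=4):
    """Encode a string's bigram set into an m-bit Bloom filter with k hashes."""
    bits = [0] * m
    for g in bigrams(s):
        for i in range(k):
            h = hashlib.sha256(f"{i}:{g}".encode()).digest()
            bits[int.from_bytes(h[:4], "big") % m] = 1
    return bits

def dice(a, b):
    """Dice coefficient of two bit vectors: 2|A AND B| / (|A| + |B|)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

# Similar names share many bigrams, so their encodings overlap strongly,
# while dissimilar names overlap only through chance hash collisions.
sim_close = dice(bloom_encode("smith"), bloom_encode("smyth"))
sim_far = dice(bloom_encode("smith"), bloom_encode("jones"))
```

Only the bit vectors need to be exchanged between parties, so the raw field values stay private.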

5.
Li  Zhi  Guo  Jun  Jiao  Wenli  Xu  Pengfei  Liu  Baoying  Zhao  Xiaowei 《Multimedia Tools and Applications》2020,79(7-8):4931-4947

Person re-identification (person re-ID) is an image retrieval task that identifies the same person across different camera views. Generally, a good person re-ID model requires a large dataset, containing over 100,000 images, to reduce the risk of over-fitting. Most current handcrafted person re-ID datasets, however, are insufficient for training a model with high generalization ability, and images with various levels of occlusion are still lacking in most existing datasets. Motivated by these two problems, this paper proposes a new data augmentation method, Random Linear Interpolation, that enlarges person re-ID datasets and improves the generalization ability of the learned model. The key enabler of our approach is generating fused images by interpolating pairs of original images; in other words, the innovation is performing data augmentation between two random samples. Extensive experimental results demonstrate that the proposed method improves baseline models: on the Market1501 and DukeMTMC-reID datasets, our approach achieves 92.71% and 82.19% rank-1 accuracy, respectively.
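Interpolating pairs of images as described is essentially an element-wise blend. This minimal sketch assumes a uniformly drawn mixing coefficient, which may differ from the paper's exact sampling scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_linear_interpolation(img_a, img_b, lam=None):
    """Fuse two images by element-wise linear interpolation.
    lam is drawn uniformly when not given (an assumed, common choice)."""
    if lam is None:
        lam = rng.uniform(0.0, 1.0)
    fused = lam * img_a.astype(np.float32) + (1.0 - lam) * img_b.astype(np.float32)
    return fused.astype(img_a.dtype), lam

# Stand-ins for two pedestrian crops of the same size.
a = np.full((4, 4, 3), 200, dtype=np.uint8)
b = np.zeros((4, 4, 3), dtype=np.uint8)
fused, lam = random_linear_interpolation(a, b, lam=0.5)
```

Each fused image is a new training sample lying between two originals, which enlarges the dataset without new annotation effort.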


6.
Shape is an important consideration in green building design because of its significant impact on energy performance and construction costs. This paper presents a methodology for optimizing building plan shapes with a genetic algorithm. The building footprint is represented by a multi-sided polygon. Different geometrical representations of a polygon are considered and evaluated with respect to potential problems such as epistasis, which occurs when one gene pair masks or modifies the expression of other gene pairs, and encoding isomorphism, which occurs when chromosomes with different binary strings map to the same solution in the design space. Two alternative representations are compared in terms of their impact on computational effectiveness and efficiency. An optimization model is established that considers the shape-related variables together with several envelope-related design variables such as window ratios and overhangs. Life-cycle cost and life-cycle environmental impact are the two objective functions used to evaluate the performance of a green building design. A case study is presented in which the shape of a typical floor of an office building, defined by a pentagon, is optimized with a multi-objective genetic algorithm.
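One way to see the representation issue this abstract raises: a footprint polygon can be encoded as vertex radii at fixed, evenly spaced angles. This hypothetical encoding is chosen here only to illustrate decoding, evaluating, and mutating a candidate; the paper's actual representations and objectives differ:

```python
import math
import random

def polygon_vertices(radii):
    """Decode a chromosome of radii into polygon vertices at fixed,
    evenly spaced angles (a hypothetical, epistasis-light encoding)."""
    n = len(radii)
    return [(r * math.cos(2 * math.pi * i / n), r * math.sin(2 * math.pi * i / n))
            for i, r in enumerate(radii)]

def area(verts):
    """Shoelace formula for the footprint area of a simple polygon."""
    n = len(verts)
    s = sum(verts[i][0] * verts[(i + 1) % n][1] -
            verts[(i + 1) % n][0] * verts[i][1] for i in range(n))
    return abs(s) / 2.0

random.seed(1)
pentagon = polygon_vertices([10.0] * 5)  # regular pentagon, circumradius 10
# A single GA mutation step: perturb each radius slightly.
mutant = [r + random.uniform(-1, 1) for r in [10.0] * 5]
```

Because each angle is fixed, distinct chromosomes decode to distinct shapes, sidestepping the encoding-isomorphism problem for this particular scheme.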

7.
In this paper we present an approximation method for the convolution of two planar curves using pairs of compatible cubic Bézier curves with linear normals (LN). We characterize the necessary and sufficient conditions for two compatible cubic Bézier LN curves with the same linear normal map to exist. Using this characterization, we obtain a cubic spline approximation of the convolution curve. As an illustration, we apply our method to the approximation of a font whose letters are constructed as the Minkowski sum of two planar curves. We also present numerical results of our approximation method for offset curves and compare it to previous results.

8.
Based on a new spectral vector field analysis on triangle meshes, we construct a compact representation for near-conformal mesh surface correspondences. Generalizing the functional map representation, our representation uses the map between the low-frequency tangent vector fields induced by the correspondence. While equally efficient, our representation can handle a more generic class of correspondence inference. We also formulate vector field preservation constraints and regularization terms for correspondence inference, with function preservation treated as a special case. A number of important vector-field-related constraints can be enforced implicitly in our representation, including commutativity of the mapping with the usual gradient, curl, and divergence operators, and angle preservation under near-conformal correspondence. For function transfer between shapes, the preservation of function values on landmarks can be enforced strictly through our gradient-domain representation, enabling transfer across different topologies. With the vector field map representation, a novel class of constraints can be specified for aligning designed or computed vector field pairs. We demonstrate the advantages of the vector field map representation in tests on conformal and near-isometric datasets.

9.
SuperMap 2000 is a fully component-based GIS development kit whose openness allows users to extend its functionality with relative ease. This paper discusses how to use SuperMap 2000 in the Delphi environment to generate slope and aspect maps from a grid DEM and to clip DEMs and grids with polygons, and gives the concrete algorithms and program flow.

10.
Objective: Registration algorithms that operate directly on raw point-cloud data place high demands on the relative position and overlap of the point-cloud models. To overcome this limitation, a two-stage registration algorithm for scattered point clouds is proposed. Method: Unlike most existing curvature-based registration algorithms, the proposed algorithm comprises a sequential matching-pair screening stage and a Hough-transform-based estimation stage for the coordinate-transformation parameters. In the screening stage, curvature similarity first establishes the initial correspondences between the point clouds; the initial matching pairs are then filtered twice, using the similarity of a rigid-invariant neighborhood signature and the similarity of persistent feature histograms, to obtain a more accurate set of matching pairs. In the estimation stage, the rotation matrix and translation vector are parameterized, and the Hough transform is used to suppress the influence of incorrect matching pairs on the parameter estimate, yielding more accurate transformation parameters and achieving 3D registration of the point clouds. Results: The algorithm was tested on two partially overlapping point clouds. The experiments show that it registers partially overlapping point clouds well; thanks to the Hough transform, it achieves a higher correct-match rate, better stability, and stronger noise resistance than the classical RANSAC algorithm, and also runs somewhat faster. Conclusion: The algorithm applies to partially overlapping point clouds in arbitrary initial relative positions and achieves high registration accuracy and good robustness to noise.
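The Hough-voting idea in this abstract, accumulating votes for transformation parameters so that wrong matches are outvoted, can be illustrated in 2D with a translation-only accumulator (the rotation is assumed known here, which the actual algorithm does not require):

```python
from collections import Counter

import numpy as np

def hough_translation(src, dst, bin_size=0.5):
    """Each putative match votes for a quantized translation bin; the
    fullest bin gives the consensus translation, so incorrect matching
    pairs are simply outvoted rather than corrupting the estimate."""
    votes = Counter()
    for p, q in zip(src, dst):
        t = np.round((q - p) / bin_size).astype(int)
        votes[tuple(t)] += 1
    best, _ = votes.most_common(1)[0]
    return np.array(best, dtype=float) * bin_size

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
dst = src + np.array([3.0, -1.0])
dst[3] = [9.0, 9.0]  # one deliberately wrong correspondence
t_est = hough_translation(src, dst)
```

The three correct pairs all vote for the same bin, so the wrong pair's single vote has no effect on the recovered translation.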

11.
We introduce a novel approach to recognizing facial expressions over a large range of head poses. Like previous approaches, we map the features extracted from the input image to the corresponding features of the face with the same facial expression seen in a frontal view. This allows us to collect all training data in a common reference frame and therefore benefit from more data when learning to recognize the expressions. In contrast with previous work, however, our mapping depends on the pose of the input image: we first estimate the head pose in the input image and then apply the mapping learned specifically for that pose. The mapped features are therefore much more reliable for recognition. In addition, we introduce a non-linear form for the feature mapping and show that it is robust to occasional mistakes made by the pose estimation stage. We evaluate our approach with extensive experiments on two protocols of the BU3DFE and Multi-PIE datasets and show that it outperforms the state of the art on both.

12.
We propose a method for coarse registration of multiple range images that uses a log-polar height map (LPHM) as the key for establishing correspondence. The LPHM is a local height map orthogonally projected onto the tangent plane with a log-polar coordinate system. The input range images are roughly represented by signed distance field (SDF) samples. For each SDF sample, an LPHM is generated and converted to an invariant feature vector, and point correspondences are established by a nearest-neighbor search in feature space. The RANSAC algorithm is applied to the corresponding point pairs between each pair of range images, and the pairwise registration of the input range images is determined by the extracted inlier point pairs. Finally, the global registration is determined by constructing a view tree, the spanning tree that maximizes the total number of inlier point pairs. The result of coarse registration is used as the initial state for fine registration and modeling. The proposed method was tested on multiple real range image datasets.

13.
C-Tree-Based Polygon Generalization for Scale-Free GIS (cited 6 times: 0 self-citations, 6 by others)
Scale-free GIS is one of the core technologies of Digital Earth and WebGIS, but as GIS applications broaden and deepen, existing GIS technology can no longer meet the needs of the information society. One important problem to be solved is how the volume of GIS spatial data can grow and shrink automatically as the map scale changes. Addressing the selection and merging steps of polygon generalization in scale-free GIS, and after a full discussion of the quantitative law and quality principles of selection and the principles of merging, this paper proposes C-Tree, a data-organization strategy for polygon map layers, and gives a C-Tree-based polygon generalization algorithm. Given the polygon layer data of a large-scale map, the algorithm efficiently performs the selection and merging operations of generalization and outputs the layer data of a small-scale map. The algorithm has been applied successfully in an integrated spatio-temporal intelligent urban-construction information system, with satisfactory results.

14.
Multi-focus image fusion methods based on convolutional neural networks (CNNs) have recently attracted enormous attention. They greatly improve the constructed decision map compared with previous state-of-the-art methods in the spatial and transform domains. Nevertheless, these methods do not produce satisfactory initial decision maps and must undergo extensive post-processing to achieve a satisfactory final decision map. In this paper, a novel CNN-based method aided by ensemble learning is proposed; it is more reasonable to use various models and datasets rather than just one. Ensemble learning methods pursue diversity among models and datasets in order to reduce overfitting on the training dataset, and an ensemble of CNNs performs better than a single CNN. The proposed method also introduces a new, simple type of multi-focus image dataset that merely changes the arrangement of the patches of multi-focus datasets, which proves very useful for obtaining better accuracy. With this new arrangement, three different datasets, the original patches and the gradients in the vertical and horizontal directions, are generated from the COCO dataset. The proposed network therefore consists of three CNN models, each trained on one of the three created datasets, that together construct the initial segmented decision map. These ideas greatly improve the initial segmented decision map, which is similar to, or even better than, the final decision maps that other CNN-based methods obtain only after applying many post-processing algorithms. Many real multi-focus test images are used in our experiments, and the results are compared with quantitative and qualitative criteria. The experimental results indicate that the proposed CNN-based network is more accurate and produces a better decision map, without post-processing, than existing state-of-the-art multi-focus fusion methods that rely on extensive post-processing.

15.
Pan  Baiyu  Zhang  Liming  Yin  Hanxiong  Lan  Jun  Cao  Feilong 《Multimedia Tools and Applications》2021,80(13):19179-19201

3D movies/videos have become increasingly popular in the market; however, they are usually produced by professionals. This paper presents a new technique for the automatic conversion of 2D to 3D video based on RGB-D sensors, which can be easily conducted by ordinary users. To generate a 3D image, one approach is to combine the original 2D color image and its corresponding depth map together to perform depth image-based rendering (DIBR). An RGB-D sensor is one of the inexpensive ways to capture an image and its corresponding depth map. The quality of the depth map and the DIBR algorithm are crucial to this process. Our approach is twofold. First, the depth maps captured directly by RGB-D sensors are generally of poor quality because there are many regions missing depth information, especially near the edges of objects. This paper proposes a new RGB-D sensor based depth map inpainting method that divides the regions with missing depths into interior holes and border holes. Different schemes are used to inpaint the different types of holes. Second, an improved hole filling approach for DIBR is proposed to synthesize the 3D images by using the corresponding color images and the inpainted depth maps. Extensive experiments were conducted on different evaluation datasets. The results show the effectiveness of our method.


16.
Objective: Image semantic segmentation based on fully convolutional networks has become the mainstream approach in the field. In such networks, however, repeated downsampling of the feature maps progressively lowers the image resolution, so small objects are lost and edges become coarse, degrading the segmentation result. To address or mitigate this problem, a semantic segmentation method based on feature-map slicing is proposed. Method: The method consists of two operations: slicing the intermediate feature maps and extracting features from the slices. The slicing module splits an intermediate feature map into several equal parts and upsamples each part to the size of the original feature map, increasing the resolution of each sliced region. Each sliced feature map then passes through a parameter-shared feature-extraction module whose multi-scale convolutions and attention mechanism exploit the context and discriminative information of each slice, focusing on small objects in local regions and improving their discriminability. The extracted features are then fused with the network's original output, enabling more effective reuse of intermediate features and clearly improving small-object localization, edge refinement, and the network's semantic discriminative power. Results: Validation experiments on two urban road datasets, CamVid and GATECH, demonstrate the effectiveness of the method: the mean intersection-over-union reaches 66.3% on CamVid and 52.6% on GATECH. Conclusion: By slicing feature maps, the method makes better use of the spatial distribution of the image, strengthens the network's ability to discriminate semantic classes at different spatial positions and its attention to small objects, supplies more effective context and global information, and improves overall segmentation performance.
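The slicing-and-upsampling operation at the heart of this abstract can be sketched with NumPy; the tile count and nearest-neighbor upsampling below are illustrative simplifications of the described module:

```python
import numpy as np

def slice_and_upsample(fmap, parts=2):
    """Split a C x H x W feature map into parts x parts tiles and
    nearest-neighbor upsample each tile back to H x W, so that each local
    region (and any small object in it) occupies more resolution before
    the shared feature-extraction module processes it."""
    c, h, w = fmap.shape
    th, tw = h // parts, w // parts
    tiles = []
    for i in range(parts):
        for j in range(parts):
            tile = fmap[:, i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            # Nearest-neighbor upsampling via element repetition.
            up = tile.repeat(parts, axis=1).repeat(parts, axis=2)
            tiles.append(up)
    return tiles

fmap = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4)
tiles = slice_and_upsample(fmap)
```

In the described network, each upsampled tile would pass through the shared multi-scale convolution and attention module before being fused with the original output.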

17.
Objective: Depth images are a common representation of 3D scene information and are widely used in stereo vision. The Kinect depth camera captures scene depth images in real time, but owing to hardware limitations and external interference, the captured depth images suffer from low resolution and inaccurate edges and cannot meet the needs of practical applications. This paper therefore proposes a super-resolution reconstruction algorithm for Kinect depth images guided by color-image edges. Method: The depth image is first upsampled to an initial estimate, and the edges of this initial depth image are extracted. Exploiting the similarity between the high-resolution color image and the depth image, a structured-learning edge detector then extracts the correct depth edges. Finally, the unreliable regions between the erroneous edges of the initial depth image and the correct depth edges are identified, and an edge-alignment strategy interpolates and fills these unreliable regions. Results: Experiments on the NYU2 dataset compare the algorithm with eight recent depth-image super-resolution algorithms, validating it with the reconstructed depth images and with 3D point clouds reconstructed from them. The results show that, while raising the resolution of the depth image, the algorithm effectively corrects the edges of the upsampled depth image, aligns depth edges with texture edges, and suppresses the edge blurring introduced by upsampling. The point-cloud results show that the algorithm separates foreground from background accurately and yields better results than the other algorithms in applications such as 3D reconstruction. Conclusion: The algorithm is generally applicable to super-resolution reconstruction of Kinect depth images; by exploiting the similarity between a color image and a depth image of the same scene and using texture edges to guide reconstruction, it obtains good results.

18.
Building on a study of existing intersection-detection algorithms, this paper proposes a fast intersection-detection algorithm for planar convex polygons in 3D space based on straddling edge pairs, which provides a uniform computational procedure for deciding intersection between planar convex polygons and extends the applicable objects to arbitrary planar convex polygons in space. The algorithm has two steps. First, it determines whether each of the two polygons has an edge pair that straddles the plane of the other polygon; if at least one polygon has no such edge pair relative to the other polygon's plane, the polygons are immediately reported as disjoint. Second, using the straddling edge pairs found in the first step, it computes the signed distances of the corresponding straddling edges in the two edge-pair groups to decide whether the polygons intersect.
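The first step of this algorithm, finding edges that straddle the other polygon's plane via signed vertex distances, can be sketched as follows (the triangles, plane, and tolerance here are illustrative):

```python
import numpy as np

def signed_distances(verts, plane_point, plane_normal):
    """Signed distance of each vertex to the plane: n . (v - p)."""
    return (verts - plane_point) @ plane_normal

def straddling_edges(verts, plane_point, plane_normal, eps=1e-9):
    """Edges whose endpoints lie on opposite sides of the other polygon's
    plane -- the 'straddling edge pairs' of the first step. If a polygon
    has no such edge, it lies entirely on one side of the plane and the
    two polygons cannot intersect."""
    d = signed_distances(verts, plane_point, plane_normal)
    n = len(verts)
    return [(i, (i + 1) % n) for i in range(n) if d[i] * d[(i + 1) % n] < -eps]

# A triangle crossing the z = 0 plane vs. one lying entirely above it.
tri_cross = np.array([[0.0, 0.0, -1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
tri_above = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0], [0.0, 1.0, 3.0]])
p0, nrm = np.zeros(3), np.array([0.0, 0.0, 1.0])
```

A convex polygon can have at most two straddling edges per plane, which is what makes the second step's signed-distance comparison cheap.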

19.
Limited by the camera's depth of field, a single exposure cannot render content at different depths all in focus. Multi-focus image fusion combines images focused at different depths into a single all-in-focus image, and obtaining an accurate focus map is the key problem. Exploiting the strong feature-extraction ability of convolutional neural networks, a joint convolutional autoencoder network with common and private branches is designed to learn the features of the source images: the common branch learns the features shared by the images, while each image's private branch learns the features that distinguish it from the others. An activity measure computed from the private features yields the focus-region map of each image, from which a fusion rule is designed to fuse the two multi-focus images into an all-in-focus result. Comparative experiments on public datasets show that, subjectively, the method fuses the focused regions well and looks natural and sharp; objectively, it outperforms the compared methods on several evaluation metrics.

20.
FlashGIS is a lightweight WebGIS package for publishing maps on the Web. This paper discusses in detail the design of map symbolization in FlashGIS for the three feature types: point, line, and polygon features. For line features, an overlay-redraw method based on lineGradientStyle parameter settings supports map zooming well. For polygon features, constructing bitmap-primitive functions easily extends the available bitmap fill styles. To address the difficulty of dynamic, interactive symbolization configuration in traditional WebGIS systems, a graphic-redraw method for point and line features and a mask method for polygon features are designed, providing a solution for dynamic interactive symbolization configuration in FlashGIS. The application of symbolization will further promote the adoption of FlashGIS.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号