Similar Documents
20 similar documents found (search time: 31 ms)
1.
Maps should be designed so that users can comprehend and use the information. Display decisions, such as choosing the scale at which an area is shown, depend on properties of the displayed information such as the perceived density (PD) of the information. Taking a psychophysical approach we suggest that the PD of information in a road map is related to the scale and properties of the mapped area. 54 participants rated the PD of 60 maps from different regions. We provide a simple model that predicts the PD of electronic road map displays, using the logarithm of the number of roads, the logarithm of the number of junctions and the length of the shown roads. The PD model was cross-validated using a different set of 60 maps (n = 44). The model can be used for automatically adjusting display scales and for evaluating map designs, considering the required PD to perform a map-related task.  相似文献   
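To make the idea concrete, here is a minimal sketch of a log-linear perceived-density (PD) predictor of the kind the abstract describes, using the logarithm of the road count, the logarithm of the junction count, and the shown road length. The function name, the coefficient values, and the example scales are illustrative assumptions, not the model fitted in the paper.

```python
import math

def predicted_density(num_roads, num_junctions, total_road_length_km,
                      w_roads=1.0, w_junctions=1.0, w_length=0.1, bias=0.0):
    """Illustrative log-linear model of perceived density (PD).

    The predictors follow the abstract (log of road count, log of junction
    count, total shown road length); the weights are placeholders that would
    normally be fitted to participant ratings by linear regression.
    """
    return (bias
            + w_roads * math.log(num_roads + 1)
            + w_junctions * math.log(num_junctions + 1)
            + w_length * total_road_length_km)

# Example: compare two candidate display scales and keep the one whose
# predicted PD is closest to the density required by the task.
target_pd = 5.0
candidates = {"1:10k": (120, 45, 18.0), "1:25k": (480, 210, 65.0)}
best_scale = min(candidates,
                 key=lambda k: abs(predicted_density(*candidates[k]) - target_pd))
print(best_scale)
```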

2.
Active snake contours and Kohonen’s self-organizing feature maps (SOMs) are employed for representing and evaluating discrete point maps of indoor environments efficiently and compactly. A generic error criterion is developed for comparing two different sets of points based on the Euclidean distance measure. The point sets can be chosen as (i) two different sets of map points acquired with different mapping techniques or different sensing modalities, (ii) two sets of fitted curve points to maps extracted by different mapping techniques or sensing modalities, or (iii) a set of extracted map points and a set of fitted curve points. The error criterion makes it possible to compare the accuracy of maps obtained with different techniques among themselves, as well as with an absolute reference. Guidelines for selecting and optimizing the parameters of active snake contours and SOMs are provided using uniform sampling of the parameter space and particle swarm optimization (PSO). A demonstrative example from ultrasonic mapping is given based on experimental data and compared with a very accurate laser map, considered an absolute reference. Both techniques can fill the erroneous gaps in discrete point maps. Snake curve fitting results in more accurate maps than SOMs because it is more robust to outliers. The two methods and the error criterion are sufficiently general that they can also be applied to discrete point maps acquired with other mapping techniques and other sensing modalities.  相似文献   
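The paper's exact error criterion is not reproduced here; the sketch below shows one common way to score the discrepancy between two point sets with a symmetric mean nearest-neighbour Euclidean distance, which matches the three comparison cases listed in the abstract (map vs. map, curve vs. curve, map vs. fitted curve). The SciPy KD-tree and the random example data are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_set_error(points_a, points_b):
    """Symmetric mean nearest-neighbour distance between two 2-D point sets.

    points_a, points_b: arrays of shape (N, 2) and (M, 2), e.g. ultrasonic
    map points versus laser reference points, or raw map points versus
    points sampled from a fitted snake/SOM curve.
    """
    tree_a, tree_b = cKDTree(points_a), cKDTree(points_b)
    d_ab, _ = tree_b.query(points_a)   # each a-point to its closest b-point
    d_ba, _ = tree_a.query(points_b)   # each b-point to its closest a-point
    return 0.5 * (d_ab.mean() + d_ba.mean())

# Example: compare an ultrasonic map against a laser map taken as the reference.
ultrasonic = np.random.rand(200, 2)
laser = np.random.rand(500, 2)
print(point_set_error(ultrasonic, laser))
```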

3.
This article provides a formal data model that makes it possible to establish the geometrical-topological integrity of areal objects in a geographical information system (GIS). The data model leads to an automatic tool able to check the consistency of a given set of data and to avoid inconsistencies caused by updates of the database. To this end we start from the mathematical notion of a map, which provides an irregular tessellation, i.e., a partition of the plane which is non-overlapping and covering. From another perspective, a map is a plane graph with an explicit representation of faces as its atomic areal components. The concept of nested maps extends this standard notion by the specification of a hierarchical structure which aggregates the set of faces. Such aggregations are common in political and administrative structures. Whereas the mathematical notion of a map is familiar in GIS and is the basis for many tools supporting topological editing, there was a lack of effectively checkable integrity constraints which are correct and complete, i.e., equivalent, for maps. This article provides an axiomatic, effectively checkable characterization of maps which is equivalent to the standard mathematical one, extends it to nested maps, and discusses how to use these constraints to achieve and maintain integrity in a GIS.
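As a purely combinatorial illustration of one check such a tool could run, the sketch below verifies that an aggregation level of a nested map assigns every face to exactly one region (covering and non-overlapping). The data layout is an assumption for illustration; the paper's axioms also cover the geometric-topological side, which is not shown here.

```python
def check_partition(faces, regions):
    """Check that `regions` (region id -> set of face ids) partitions `faces`.

    Returns a list of violation messages; an empty list means this aggregation
    level is consistent in the combinatorial sense (covering, non-overlapping).
    """
    violations = []
    seen = {}
    for region_id, members in regions.items():
        for f in members:
            if f in seen:
                violations.append(f"face {f} assigned to both {seen[f]} and {region_id}")
            seen[f] = region_id
    missing = set(faces) - set(seen)
    extra = set(seen) - set(faces)
    if missing:
        violations.append(f"faces not covered by any region: {sorted(missing)}")
    if extra:
        violations.append(f"regions reference unknown faces: {sorted(extra)}")
    return violations

# Example: municipalities (faces) aggregated into districts (regions).
faces = {"f1", "f2", "f3", "f4"}
districts = {"d1": {"f1", "f2"}, "d2": {"f3", "f4"}}
assert check_partition(faces, districts) == []
```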

4.
This paper describes a novel algorithm to extract surface meshes directly from implicitly represented heterogeneous models made of different constituent materials. Our approach can directly convert implicitly represented heterogeneous objects into a surface model separating homogeneous material regions, where every homogeneous region in a heterogeneous structure is enclosed by a set of two-manifold surface meshes. Unlike other discretization techniques of implicitly represented heterogeneous objects, the intermediate surfaces between two constituent materials can be directly extracted by our algorithm. Therefore, it is more convenient to adopt the surface meshes from our approach in the boundary element method (BEM) or as a starting model to generate volumetric meshes preserving intermediate surfaces for the finite element method (FEM). The algorithm consists of three major steps: firstly, a set of assembled two-manifold surface patches coarsely approximating the interfaces between homogeneous regions are extracted and segmented; secondly, signed distance fields are constructed such that each field expresses the Euclidean distance from points to the surface of one homogeneous material region; and finally, coarse patches generated in the first step are dynamically optimized to give adaptive and high-quality surface meshes. The manifold topology is preserved on each surface patch.  相似文献   
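A small sketch of the second step, under the assumption that each homogeneous region is available as a binary voxel mask: SciPy's Euclidean distance transform gives a signed distance field per material. The sign convention and the spherical-inclusion example are assumptions; the paper's patch extraction and optimization steps are not reproduced.

```python
import numpy as np
from scipy import ndimage

def signed_distance_field(region_mask, spacing=1.0):
    """Signed Euclidean distance to the boundary of one homogeneous region.

    region_mask: boolean voxel grid, True inside the material region.
    Convention assumed here: negative inside the region, positive outside.
    """
    dist_outside = ndimage.distance_transform_edt(~region_mask, sampling=spacing)
    dist_inside = ndimage.distance_transform_edt(region_mask, sampling=spacing)
    return dist_outside - dist_inside

# Example: a spherical inclusion of one material inside a cubic part.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
inclusion = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 15 ** 2
sdf = signed_distance_field(inclusion)
print(sdf.min(), sdf.max())   # negative at the centre, positive far outside
```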

5.
Dynamic self-organizing maps with controlled growth for knowledge discovery (total citations: 16; self-citations: 0; citations by others: 16)
The growing self-organizing map (GSOM) algorithm is presented in detail and the effect of a spread factor, which can be used to measure and control the spread of the GSOM, is investigated. The spread factor is independent of the dimensionality of the data and as such can be used as a controlling measure for generating maps with different dimensionality, which can then be compared and analyzed with better accuracy. The spread factor is also presented as a method of achieving hierarchical clustering of a data set with the GSOM. Such hierarchical clustering allows the data analyst to identify significant and interesting clusters at a higher level of the hierarchy, and continue with finer clustering of the interesting clusters only. Therefore, only a small map is created in the beginning with a low spread factor, which can be generated for even a very large data set. Further analysis is conducted on selected sections of the data and of smaller volume. Therefore, this method facilitates the analysis of even very large data sets.  相似文献   
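The sketch below shows only the control logic around the spread factor, not a full GSOM. It assumes the commonly cited GSOM relation GT = -D·ln(SF), where D is the data dimensionality and SF the spread factor in (0, 1), and compares a node's accumulated quantization error against that threshold to decide growth; treat the formula and function names as assumptions rather than a verbatim reproduction of the paper.

```python
import math

def growth_threshold(spread_factor, dim):
    """Growth threshold GT = -D * ln(SF), the commonly cited GSOM relation.

    A low spread factor gives a high threshold and hence a small, coarse map;
    a spread factor close to 1 gives a low threshold and a finely spread map.
    """
    assert 0.0 < spread_factor < 1.0
    return -dim * math.log(spread_factor)

def should_grow(accumulated_error, spread_factor, dim):
    """Decide whether a boundary node should spawn new nodes."""
    return accumulated_error > growth_threshold(spread_factor, dim)

# Hierarchical use as described in the abstract: start with a coarse map,
# then re-run on an interesting cluster with a higher spread factor.
print(growth_threshold(0.1, dim=10))   # coarse first pass
print(growth_threshold(0.8, dim=10))   # finer drill-down pass
```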

6.
叶子童, 邹炼, 颜佳, 范赐恩. 《计算机应用》 (Journal of Computer Applications), 2017, 37(9): 2652-2658
To address the impure training samples and overly simple feature extraction of existing bootstrap-learning-based saliency detection models, an improved Boosting-based saliency detection algorithm is proposed, which improves learning performance by raising the accuracy of the training sample set and refining the feature extraction scheme. First, coarse sample maps are generated with a bottom-up saliency detection model and then quickly and effectively refined by a cellular automaton to build reliable bootstrap samples, which are used to label the original images and construct the training set. Next, color and texture features are extracted from the samples in the training set. Finally, support vector machine (SVM) weak classifiers with different features and different kernels are combined by Boosting into a strong classifier, which classifies the superpixels of each image into foreground and background to produce the saliency map. Experimental results on the ASD and SED1 databases show that the model generates complete and clear saliency maps for both complex and simple images, with substantial improvements in the precision-recall curve and the area under the curve (AUC). Owing to its accuracy, it can be applied in the preprocessing stage of computer vision.
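A heavily simplified sketch of the final step: combining SVM weak classifiers trained with different kernels into a single superpixel classifier. Feature extraction, the cellular-automaton refinement, and the paper's actual boosting weights are not reproduced; the scikit-learn API usage and the accuracy-based weighting below are illustrative assumptions standing in for proper boosting.

```python
import numpy as np
from sklearn.svm import SVC

def train_weak_svms(features, pseudo_labels, kernels=("linear", "rbf", "poly")):
    """Train one SVM weak classifier per kernel on bootstrap-labelled superpixels.

    features: (n_superpixels, n_dims) colour/texture descriptors.
    pseudo_labels: 0/1 labels taken from the refined bootstrap saliency maps.
    Returns (classifiers, weights); each weight is simply the weak classifier's
    training accuracy, a stand-in for proper boosting weights.
    """
    classifiers, weights = [], []
    for kernel in kernels:
        clf = SVC(kernel=kernel, probability=True).fit(features, pseudo_labels)
        classifiers.append(clf)
        weights.append(clf.score(features, pseudo_labels))
    return classifiers, np.asarray(weights) / np.sum(weights)

def strong_saliency(classifiers, weights, features):
    """Weighted combination of the weak classifiers' foreground probabilities."""
    probs = np.stack([clf.predict_proba(features)[:, 1] for clf in classifiers])
    return weights @ probs   # one saliency value per superpixel

# Usage sketch: saliency = strong_saliency(*train_weak_svms(X_train, y_train), X_test)
```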

7.
Development of a geospatial model to quantify, describe and map urban growth (total citations: 11; self-citations: 0; citations by others: 11)
In the United States, there is widespread concern about understanding and curbing urban sprawl, which has been cited for its negative impacts on natural resources, economic health, and community character. There is not, however, a universally accepted definition of urban sprawl. It has been described using quantitative measures, qualitative terms, attitudinal explanations, and landscape patterns. To help local, regional and state land use planners better understand and address the issues attributed to sprawl, researchers at NASA's Northeast Regional Earth Science Applications Center (RESAC) at The University of Connecticut have developed an urban growth model. The model, which is based on land cover derived from remotely sensed satellite imagery, determines the geographic extent, patterns, and classes of urban growth over time. Input data to the urban growth model consist of two dates of satellite-derived land cover data that are converted, based on user-defined reclassification options, to just three classes: developed, non-developed, and water. The model identifies three classes of undeveloped land as well as developed land for both dates based on neighborhood information. These two images are used to create a change map that provides more detail than a traditional change analysis by utilizing the classes of non-developed land and including contextual information. The change map becomes the input for the urban growth analysis, where five classes of growth are identified: infill, expansion, isolated, linear branch, and clustered branch. The output urban growth map is a powerful visual and quantitative assessment of the kinds of urban growth that have occurred across a landscape. Urban growth can further be characterized using a temporal sequence of urban growth maps to illustrate urban growth dynamics. Beyond analysis, the ability of remote sensing-based information to show changes to a community's landscape, at different geographic scales and over time, is a new and unique resource for local land use decision makers as they plan the future of their communities.
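A compressed sketch of the first stage only: reclassifying two land-cover dates into developed / non-developed / water and crossing them into a change map. The class codes and the lookup dictionary are hypothetical; the RESAC model's neighborhood analysis and its five growth classes are not reproduced here.

```python
import numpy as np

DEVELOPED, NON_DEVELOPED, WATER = 1, 2, 3

def reclassify(land_cover, lookup):
    """Map detailed land-cover codes to developed / non-developed / water."""
    out = np.full(land_cover.shape, NON_DEVELOPED, dtype=np.uint8)
    for code, simple_class in lookup.items():
        out[land_cover == code] = simple_class
    return out

def change_map(date1, date2):
    """Cross-tabulate the two reclassified dates into a single change code.

    Code = 10 * class_at_date1 + class_at_date2, so e.g. 21 marks pixels that
    went from non-developed to developed (candidate urban growth).
    """
    return 10 * date1.astype(np.uint8) + date2.astype(np.uint8)

# Illustrative lookup from hypothetical detailed codes to the three classes.
lookup = {11: DEVELOPED, 12: DEVELOPED, 22: NON_DEVELOPED, 33: NON_DEVELOPED, 50: WATER}
lc_1990 = np.random.choice(list(lookup), size=(100, 100))
lc_2000 = np.random.choice(list(lookup), size=(100, 100))
growth_pixels = change_map(reclassify(lc_1990, lookup), reclassify(lc_2000, lookup)) == 21
print(growth_pixels.sum(), "pixels of new development")
```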

8.
This paper examines what can be learned about bodies of literature using a concept mapping tool, Leximancer. Statistical content analysis and concept mapping were used to analyse bodies of literature from different domains in three case studies. In the first case study, concept maps were generated and analysed for two closely related document sets—a thesis on language games and the background literature for the thesis. The aim for the case study was to show how concept maps might be used to analyse related document collections for coverage. The two maps overlapped on the concept of “language”; however, there was a stronger focus in the thesis on “simulations” and “agents.” Other concepts were not as strong in the thesis map as expected. The study showed how concept maps can help to establish the coverage of the background literature in a thesis. In the second case study, three sets of documents from the domain of conceptual and spatial navigation were collected, each discussing a separate topic: navigational strategies, the brain's role in navigation, and concept mapping. The aim was to explore emergent patterns in a set of related concept maps that may not be apparent from reading the literature alone. Separate concept maps were generated for each topic and also for the combined set of literature. It was expected that each of the topics would be situated in different parts of the combined map, with the concept of “navigation” central to the map. Instead, the concept of “spatial” was centrally situated and the areas of the map for the brain and for navigational strategies overlaid the same region. The unexpected structure provided a new perspective on the coverage of the documents. In the third and final case study, a set of documents on sponges—a domain unfamiliar to the reader—was collected from the Internet and then analysed with a concept map. The aim of this case study was to present how a concept map could aid in quickly understanding a new, technically intensive domain. Using the concept map to identify significant concepts and the Internet to look for their definitions, a basic understanding of key terms in the domain was obtained relatively quickly. It was concluded that using concept maps is effective for identifying trends within documents and document collections, for performing differential analysis on documents, and as an aid for rapidly gaining an understanding in a new domain by exploring the local detail within the global scope of the textual corpus.  相似文献   

9.
Objective: Visual saliency plays an important role in many vision-driven applications, and these application domains are shifting from 2D to 3D vision, so saliency models based on RGB-D data have attracted wide attention. Unlike saliency in 2D images, RGB-D saliency involves cues from many different modalities. These multimodal cues are both complementary and competing, and how to exploit and fuse them effectively remains a challenge; traditional fusion models struggle to take full advantage of them. This work therefore studies the fusion of multimodal cues in the formation of RGB-D saliency. Method: An RGB-D saliency detection model based on a superpixel-level conditional random field (CRF) is proposed. Saliency cues of different modalities are extracted, including planar, depth, and motion cues. A CRF is built over superpixels, and a global energy function is designed as the optimization objective; it jointly accounts for the influence of the multimodal cues and a smoothness constraint on the saliency values of neighboring regions, thereby characterizing the interaction among the multimodal cues. The weights of the multimodal cues in the energy function are learned by a convolutional neural network. Results: The model is compared with six saliency detection methods on two public RGB-D video saliency datasets and outperforms the state-of-the-art models on all datasets and evaluation metrics. Compared with the second-best results, the AUC (area under curve), sAUC (shuffled AUC), SIM (similarity), PCC (Pearson correlation coefficient) and NSS (normalized scanpath saliency) scores improve by 2.3%, 2.3%, 18.9%, 21.6% and 56.2% on the IRCCyN dataset, and by 2.0%, 1.4%, 29.1%, 10.6% and 23.3% on the DML-iTrack-3D dataset. An internal comparison also verifies that the proposed fusion method outperforms other traditional fusion methods. Conclusion: The CRF and the convolutional neural network in the proposed RGB-D saliency detection model make full use of the strengths of the different modal cues and fuse them effectively, improving detection performance, and the model can be useful in vision-driven applications.
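Below is a sketch of the kind of global energy such a superpixel CRF minimizes: weighted unary terms tying the saliency assignment to each modality-specific cue map, plus a pairwise smoothness term over neighboring superpixels. The cue names, the quadratic energy form, and the weights (learned by a CNN in the paper) are simplified assumptions.

```python
import numpy as np

def crf_energy(saliency, cues, cue_weights, neighbours, smooth_weight=1.0):
    """Global energy over per-superpixel saliency values in [0, 1].

    saliency:    (n,) candidate saliency assignment.
    cues:        dict of cue name -> (n,) cue map (e.g. planar, depth, motion).
    cue_weights: dict of cue name -> scalar weight (learned by a CNN in the paper).
    neighbours:  list of (i, j) index pairs of adjacent superpixels.
    """
    unary = sum(w * np.sum((saliency - cues[name]) ** 2)
                for name, w in cue_weights.items())
    pairwise = smooth_weight * sum((saliency[i] - saliency[j]) ** 2
                                   for i, j in neighbours)
    return unary + pairwise

# Tiny example with three superpixels arranged in a chain.
cues = {"planar": np.array([0.9, 0.2, 0.1]),
        "depth":  np.array([0.8, 0.3, 0.2]),
        "motion": np.array([0.7, 0.1, 0.0])}
weights = {"planar": 0.5, "depth": 0.3, "motion": 0.2}
print(crf_energy(np.array([0.8, 0.2, 0.1]), cues, weights, [(0, 1), (1, 2)]))
```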

10.
Concept maps are graphical tools for representing knowledge. They have been used in many different areas, including education, knowledge management, business and intelligence. Constructing concept maps manually can be a complex task; an unskilled person may encounter difficulties in determining and positioning concepts relevant to the problem area. An application that recommends concept candidates and their positions in a concept map can significantly help the user in that situation. This paper gives an overview of different approaches to automatic and semi-automatic creation of concept maps from textual and non-textual sources. The concept map mining process is defined, and one method suitable for the creation of concept maps from unstructured textual sources in highly inflected languages such as Croatian is described in detail. The proposed method uses statistical and data mining techniques enriched with linguistic tools. With minor adjustments, the method can also be used for concept map mining from textual sources in other morphologically rich languages.
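As a language-agnostic sketch of the statistical core of concept map mining, the example below extracts candidate concepts by aggregate TF-IDF and proposes links from sentence-level co-occurrence. The lemmatisation and morphological tools needed for Croatian (or other highly inflected languages) are assumed to run upstream and are not shown; the thresholds and scikit-learn usage are illustrative.

```python
from collections import Counter
from itertools import combinations
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def mine_concept_map(sentences, top_k=10, min_cooccurrence=2):
    """Return (concepts, weighted links) mined from a list of (lemmatised) sentences."""
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(sentences)
    scores = np.asarray(tfidf.sum(axis=0)).ravel()     # aggregate TF-IDF per term
    terms = vectorizer.get_feature_names_out()
    concepts = {terms[i] for i in scores.argsort()[::-1][:top_k]}

    links = Counter()
    for sent in sentences:
        present = sorted(concepts.intersection(sent.lower().split()))
        links.update(combinations(present, 2))         # co-occurrence within a sentence
    links = {pair: n for pair, n in links.items() if n >= min_cooccurrence}
    return concepts, links

docs = ["concept maps represent knowledge",
        "automatic extraction of concept maps from text",
        "knowledge extraction from text uses data mining"]
print(mine_concept_map(docs, top_k=5, min_cooccurrence=1))
```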

11.
12.
Extracting View-Dependent Depth Maps from a Collection of Images (total citations: 1; self-citations: 0; citations by others: 1)
Stereo correspondence algorithms typically produce a single depth map. In addition to the usual problems of occlusions and textureless regions, such algorithms cannot model the variation in scene or object appearance with respect to the viewing position. In this paper, we propose a new representation that overcomes the appearance variation problem associated with an image sequence. Rather than estimating a single depth map, we associate a depth map with each input image (or a subset of them). Our representation is motivated by applications such as view interpolation and depth-based segmentation for model-building or layer extraction. We describe two approaches to extract such a representation from a sequence of images. The first approach, which is more classical, computes the local depth map associated with each chosen reference frame independently. The novelty of this approach lies in its combination of shiftable windows, temporal selection, and graph cut optimization. The second approach simultaneously optimizes a set of self-consistent depth maps at multiple key-frames. Since multiple depth maps are estimated simultaneously, visibility can be modeled explicitly and disparity consistency imposed across the different depth maps. Results, which include a difficult specular scene example, show the effectiveness of our approach.

13.
In this work we propose an efficient algorithm for progressive point set surface compression based on shape pattern analysis. The algorithm proceeds as follows. First, the model surface is segmented into square patches according to the principal directions of the surfel. Then, the square patch is parameterized into a 2D domain and regularly resampled. After the resampling, each patch can be described as a height map. Using the height maps, we do the similarity analysis between patches. The patches which have the similar shape are classified into the same cluster, called a shape pattern. For patches in the same shape pattern, a representative patch is computed; then each patch can be represented as the representative patch plus an error correction. When decoding, the profile of the model can be quickly reconstructed using the representative patches and transformation parameters. Then with the decoding of the error image, the model can be gradually refined, implementing progressive compression of 3D point-based models.  相似文献   
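A sketch of the shape-pattern step only: clustering the resampled height-map patches, keeping one representative per cluster, and encoding each patch as (pattern id, residual). Parameterization, resampling, and the actual progressive codec are omitted, and using k-means as the similarity analysis is an assumption for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_shape_patterns(height_maps, n_patterns=8):
    """Cluster (n_patches, h*w) height maps into shape patterns.

    Returns the representatives (one mean patch per pattern), the pattern id
    of every patch, and the per-patch residuals (patch minus representative).
    """
    km = KMeans(n_clusters=n_patterns, n_init=10).fit(height_maps)
    representatives = km.cluster_centers_
    residuals = height_maps - representatives[km.labels_]
    return representatives, km.labels_, residuals

def decode(representatives, labels, residuals=None):
    """Coarse profile from representatives alone; refined once residuals arrive."""
    coarse = representatives[labels]
    return coarse if residuals is None else coarse + residuals

patches = np.random.rand(200, 16 * 16)            # 200 resampled 16x16 height maps
reps, labels, res = build_shape_patterns(patches)
assert np.allclose(decode(reps, labels, res), patches)   # exact once residuals are sent
```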

14.
We propose an associatively learnable hypercolumn model (AHCM). A hyper-column model is a self-organized, competitive, and hierarchical multilayer neural network. It is derived from the neocognitron by replacing each S cell and C cell with a two-layer hierarchical self-organizing map. The HCM can recognize images with variant object size, position, orientation and spatial resolution. However, feature maps may integrate some features extracted in the lower layer even if the features are extracted from input data which belong to different categories. The learning algorithm of the HCM causes this problem because it is an unsupervised learning used by a self-organizing map. An associative learning method is therefore introduced, which is derived from the HCM by appending associative signals and associative weights to traditional input data and connection weights, respectively. The AHCM was applied to hand-shape recognition. We found that the AHCM could generate an appropriate feature map and higher recognition accuracy compared with the HCM. This work was presented in part at the 11th International Symposium on Artificial Life and Robotics, Oita, Japan, January 23–25, 2006  相似文献   

15.
Objective: A multi-destination map generation method that incorporates factor graphs is proposed. Method: First, the user selects several destinations of interest, and the system automatically selects the routes most relevant to those destinations according to corresponding rules. Then, a set of constraint rules measuring layout quality is defined; each rule is encoded as a factor in a factor graph, and the Metropolis-Hastings algorithm is used to sample from the target distribution constructed from the factor graph, yielding a multi-destination map that satisfies the constraints. Results: Experimental results show that the multi-destination maps obtained with this method can display the road information between multiple destinations within a single display space while preserving the topological and spatial relations among the destination regions. Conclusion: The proposed multi-destination map can effectively provide navigation for users, solving the problem that current online maps cannot present road information for spatially distant regions within the same view.
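A generic sketch of the sampling step: a Metropolis-Hastings loop over layout states whose unnormalised score is the product of factor values. The layout representation, the proposal move, and the toy factor are placeholders; the paper defines its own layout-quality rules.

```python
import math
import random

def metropolis_hastings(initial_layout, propose, factors, n_iters=10_000):
    """Sample layouts with probability proportional to the product of factor scores.

    propose(layout) -> a randomly perturbed copy of the layout (symmetric proposal).
    factors: list of functions layout -> positive score encoding one layout rule.
    """
    def log_score(layout):
        return sum(math.log(f(layout)) for f in factors)

    current, current_ls = initial_layout, log_score(initial_layout)
    best, best_ls = current, current_ls
    for _ in range(n_iters):
        candidate = propose(current)
        cand_ls = log_score(candidate)
        if math.log(random.random()) < cand_ls - current_ls:   # MH acceptance test
            current, current_ls = candidate, cand_ls
            if current_ls > best_ls:
                best, best_ls = current, current_ls
    return best

# Toy usage: a "layout" is just a scale value; one factor prefers scales near 2.0.
best = metropolis_hastings(
    initial_layout=1.0,
    propose=lambda s: s + random.gauss(0, 0.1),
    factors=[lambda s: math.exp(-(s - 2.0) ** 2) + 1e-12])  # small floor keeps log finite
print(best)
```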

16.
A dynamic saliency attention model based on local complexity is proposed in this paper. Low-level visual features are extracted from the current frame and several previous frames. Every feature map is resized to several different sizes. The feature maps of the same size and the same feature across all the frames are used to calculate a local complexity map. All the local complexity maps are normalized and fused into a dynamic saliency map. At the same time, a static saliency map is computed from the current frame. The dynamic and static saliency maps are then fused into a final saliency map. Experimental results indicate that when there is noise or a change of illumination among the frames, our model is superior to Marat's model and Shi's model, and when the moving objects do not belong to the static salient regions, our model is better than Ban's model.
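The sketch below illustrates the local-complexity idea only: complexity is measured as windowed entropy on each (resized) feature map, then the normalised complexity maps are summed into a saliency map. scikit-image's entropy filter is a convenient stand-in for the paper's complexity measure, and the temporal pooling across frames is simplified to a plain sum.

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.transform import resize
from skimage.util import img_as_ubyte

def local_complexity(feature_map, radius=5):
    """Local complexity of one feature map, measured as windowed entropy."""
    fm = (feature_map - feature_map.min()) / (np.ptp(feature_map) + 1e-8)
    return entropy(img_as_ubyte(fm), disk(radius))

def dynamic_saliency(feature_maps, scales=(1.0, 0.5), radius=5):
    """Fuse normalised local-complexity maps computed at several sizes."""
    h, w = feature_maps[0].shape
    fused = np.zeros((h, w))
    for fm in feature_maps:                      # e.g. one intensity map per frame
        for s in scales:
            small = resize(fm, (int(h * s), int(w * s)), anti_aliasing=True)
            comp = resize(local_complexity(small, radius), (h, w))
            fused += (comp - comp.min()) / (np.ptp(comp) + 1e-8)   # normalise, then sum
    return fused / fused.max()

frames = [np.random.rand(120, 160) for _ in range(3)]
saliency = dynamic_saliency(frames)
```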

17.
High-definition (HD) maps provide accurate road information and rich semantic information that allow autonomous driving systems to guide vehicles correctly. HD maps usually rely on manual annotation, and existing automatic annotation methods achieve low recognition accuracy in autonomous driving scenarios, which makes HD map annotation inefficient. To address this problem, MapFormer, a new semantic segmentation method for automatic HD map annotation, is proposed. It includes a multi-level feature fusion module that allows the model to aggregate detail and semantic information from different levels, and a new boundary-decoupled joint decoder that improves the model's ability to handle boundaries between classes. Experiments on a bird's-eye-view dataset show that the model not only achieves excellent segmentation accuracy but also produces cleaner class boundaries, reaching an mIoU of 55.82%, which is 1.03% higher than that of SegFormer. The method can improve the efficiency and the automation rate of HD map annotation.
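A minimal PyTorch sketch of a generic multi-level feature fusion head of the kind described: project each backbone stage to a common width, upsample everything to the finest resolution, concatenate and fuse, then classify. The channel sizes, the 1x1-conv design, and the BEV feature shapes are assumptions; MapFormer's boundary-decoupled joint decoder is not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelFusion(nn.Module):
    """Fuse backbone feature maps from several stages into one segmentation map."""

    def __init__(self, in_channels=(32, 64, 160, 256), embed_dim=128, num_classes=8):
        super().__init__()
        self.projs = nn.ModuleList([nn.Conv2d(c, embed_dim, kernel_size=1)
                                    for c in in_channels])
        self.fuse = nn.Sequential(
            nn.Conv2d(embed_dim * len(in_channels), embed_dim, kernel_size=1),
            nn.BatchNorm2d(embed_dim),
            nn.ReLU(inplace=True))
        self.classify = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, features):
        target_size = features[0].shape[-2:]        # the finest level sets the size
        upsampled = [F.interpolate(proj(f), size=target_size,
                                   mode="bilinear", align_corners=False)
                     for proj, f in zip(self.projs, features)]
        return self.classify(self.fuse(torch.cat(upsampled, dim=1)))

# Bird's-eye-view feature pyramid with hypothetical shapes.
feats = [torch.randn(1, c, 200 // s, 200 // s)
         for c, s in zip((32, 64, 160, 256), (4, 8, 16, 32))]
logits = MultiLevelFusion()(feats)      # (1, num_classes, 50, 50)
```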

18.
In simultaneous localisation and mapping (SLAM) the correspondence problem, specifically detecting cycles, is one of the most difficult challenges for an autonomous mobile robot. In this paper we show how significant cycles in a topological map can be identified with a companion absolute global metric map. A tight coupling of the basic unit of representation in the two maps is the key to the method. Each local space visited is represented, with its own frame of reference, as a node in the topological map. In the global absolute metric map these local space representations from the topological map are described within a single global frame of reference. The method exploits the overlap which occurs when duplicate representations are computed from different vantage points for the same local space. The representations need not be exactly aligned and can thus tolerate a limited amount of accumulated error. We show how false positive overlaps which are the result of a misaligned map, can be discounted.  相似文献   
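Below is a sketch of the overlap test that would flag a cycle candidate: two local-space representations, already expressed in the single global frame, are treated as duplicates of the same place if a large fraction of one's points lie close to the other's. The distance tolerance and overlap threshold are illustrative values, not the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_fraction(points_a, points_b, tol=0.3):
    """Fraction of A's points that have a point of B within `tol` metres.

    Both point sets are local-space representations transformed into the
    single global frame of the companion metric map.
    """
    dists, _ = cKDTree(points_b).query(points_a)
    return float(np.mean(dists < tol))

def is_cycle_candidate(node_a, node_b, min_overlap=0.6, tol=0.3):
    """Flag two topological-map nodes as the same local space (a cycle)."""
    f_ab = overlap_fraction(node_a, node_b, tol)
    f_ba = overlap_fraction(node_b, node_a, tol)
    return min(f_ab, f_ba) > min_overlap   # symmetric test tolerates partial misalignment

room_first_visit = np.random.rand(300, 2) * 5.0
room_revisit = room_first_visit + np.random.normal(0, 0.05, size=(300, 2))  # small drift
print(is_cycle_candidate(room_first_visit, room_revisit))
```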

19.
A mechanism is proposed by which feedback pathways model spatial patterns of feedforward activity in cortical maps. The mechanism can be viewed equivalently as readout of a content-addressable memory or as decoding of a population code. The model is based on the evidence that cortical receptive fields can often be described as a separable product of functions along several dimensions, each represented in a spatially ordered map. Given this, it is shown that for an N-dimensional map, accurate modeling and decoding of x^N feedforward activity patterns can be done with Nx fibers, N of which must be active at any one time. The proposed mechanism explains several known properties of the cortex and pyramidal neurons: (1) the integration of signals by dendrites with a narrow tangential distribution, that is, apical dendrites; (2) the presence of fast-conducting feedback projections with broad tangential distributions; (3) the multiplicative effects of attention on receptive field profiles; and (4) the existence of multiplicative interactions between subthreshold feedforward inputs to basal dendrites and inputs to apical dendrites.
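A small numerical illustration of the claimed economy: with N = 2 map dimensions and x = 16 values per dimension, 16^2 = 256 feedforward patterns can be addressed with only N·x = 32 feedback fibres, N = 2 of which are active at a time, because the separable pattern is the outer product of one one-hot selection per dimension. The code below is only this worked example, not a model of the biology.

```python
import numpy as np

N, x = 2, 16                      # two map dimensions, 16 values per dimension
n_patterns = x ** N               # 256 addressable feedforward patterns
n_fibres = N * x                  # only 32 feedback fibres needed
n_active = N                      # 2 fibres active for any single pattern

def decode(i, j):
    """Reconstruct the separable 2-D activity pattern selected by fibres (i, j)."""
    row = np.eye(x)[i]            # one active fibre along the first dimension
    col = np.eye(x)[j]            # one active fibre along the second dimension
    return np.outer(row, col)     # separable product: a single peak at (i, j)

pattern = decode(3, 11)
assert pattern.sum() == 1 and pattern[3, 11] == 1
print(n_patterns, n_fibres, n_active)   # 256 32 2
```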

20.
Objective: Salient object detection algorithms fall into traditional methods based on low-level features and newer deep-learning-based methods. Traditional methods have difficulty capturing high-level semantic information about objects, whereas deep-learning-based methods capture high-level semantics but neglect edge features. To exploit the strengths of both, and building on the fact that sparsity makes the representation of salient objects more concentrated, a method based on sparse autoencoding and saliency-result refinement is proposed. Method: The feature maps of the fourth pooling layer of a VGG (Visual Geometry Group) network are processed by a sparse autoencoder to obtain five sparse saliency feature maps, which are then fed into a convolutional neural network together with the saliency map produced by a traditional method to refine the saliency result. Results: Experiments with the traditional methods DRFI (discriminative regional feature integration), HDCT (high dimensional color transform), RRWR (regularized random walks ranking) and CGVS (contour-guided visual search) on the public datasets DUT-OMRON, ECSSD, HKU-IS and MSRA show that the algorithm effectively improves the F-measure and MAE (mean absolute error) of the salient objects. In terms of F-measure, the refined DRFI improves the most, by 24.53% on HKU-IS. In terms of MAE reduction, CGVS is reduced the least, by 12.78% on ECSSD, while the largest reduction is close to 50%. The model has a simple structure, few parameters and high computational efficiency, with a training time of about 5 hours and an average test time of about 3 seconds per image, making it highly practical. Conclusion: A saliency-result refinement algorithm is proposed; experimental results show that it effectively improves the F-measure and MAE of salient objects, and it has good applicability and application prospects in tasks such as object recognition, which demand increasingly accurate salient object detection.
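A minimal PyTorch sketch of the sparse-autoencoder stage: a per-location autoencoder over VGG pool4-like channels with an L1 sparsity penalty, whose five latent channels play the role of the sparse saliency feature maps. How those maps are fused with the traditional saliency map in the refinement CNN is not reproduced, and the layer sizes and loss weighting are assumptions.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Per-location autoencoder over VGG pool4 channels with an L1 sparsity penalty."""

    def __init__(self, in_channels=512, n_sparse_maps=5):
        super().__init__()
        self.encoder = nn.Conv2d(in_channels, n_sparse_maps, kernel_size=1)
        self.decoder = nn.Conv2d(n_sparse_maps, in_channels, kernel_size=1)

    def forward(self, feat):
        code = torch.relu(self.encoder(feat))   # (B, 5, H/16, W/16) sparse maps
        return self.decoder(code), code

def sparse_ae_loss(feat, recon, code, sparsity_weight=1e-3):
    """Reconstruction error plus an L1 penalty that drives the code towards sparsity."""
    return nn.functional.mse_loss(recon, feat) + sparsity_weight * code.abs().mean()

# Hypothetical pool4 features of a 224x224 image: 512 channels at 14x14.
pool4 = torch.randn(1, 512, 14, 14)
model = SparseAutoencoder()
recon, sparse_maps = model(pool4)
loss = sparse_ae_loss(pool4, recon, sparse_maps)
loss.backward()
```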
