Similar Literature (20 results)
1.
《Computers & Geosciences》2006,32(2):184-194
The evolution of open standards, especially those in the family of XML technologies, has a considerable impact on the way the Geomatics community addresses the acquisition, storage, analysis, and display of spatial data. The most recent version of the GML specification enables the merging of vector and raster data into a single “open” format. The notion of “coverage” described in GML 3.0 can be the equivalent of a multi-band raster dataset. In addition, vector data storage is described in detail through the GML Schemas, and XML itself can store the values of a raster dataset as the values of a multi-table dataset. Under these circumstances, an issue that must be addressed is the transformation of raster data into XML format and its subsequent visualization through SVG. The objective of this paper is to give an overview of the steps that can be followed to incorporate open standards and XML technologies in the raster domain. The last part of the work presents a case study that suggests a step-by-step methodology for classification, an important function in Cartography and Remote Sensing, using the XML-encoded images.
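The core step, writing a raster band's cell values into an XML document, can be illustrated with a tiny sketch. The element names below are illustrative placeholders, not the actual GML 3.0 coverage schema.

```python
import numpy as np
import xml.etree.ElementTree as ET

def raster_to_xml(band, name="band1"):
    """Encode a single raster band as a simple XML coverage-like element;
    element names are illustrative, not the real GML 3.0 schema."""
    root = ET.Element("RasterCoverage", attrib={"name": name,
                                                "rows": str(band.shape[0]),
                                                "cols": str(band.shape[1])})
    values = ET.SubElement(root, "tupleList")
    values.text = " ".join(str(v) for v in band.ravel())
    return ET.tostring(root, encoding="unicode")

band = np.arange(12).reshape(3, 4)
print(raster_to_xml(band))
```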

2.
Traditional outlier detection methods mostly work on a single dataset, or on a single dataset obtained by fusing multiple sources, so their results overlook the association knowledge between data sources and the key information within each individual source. To detect outlier association knowledge across multiple data sources, this paper proposes RSMOD, a multi-source outlier detection algorithm based on relevant subspaces. By combining the bidirectional influence of the k-nearest-neighbor set and the reverse-nearest-neighbor set, an influence space for objects in multi-source data is defined, improving the accuracy of outlier measurement. On top of this influence space, sparsity factors and sparsity-difference factors for multi-source data are proposed to effectively characterize how sparse a data object is across the sources, and the measure of the relevant subspace is redefined so that it applies to multi-source datasets; an outlier detection algorithm based on relevant subspaces is then given. Experiments on a synthetic dataset and the real U.S. census dataset verify the performance of RSMOD and analyze the outlier association knowledge derived from multiple datasets.
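As a rough illustration of the neighbourhood machinery described above, the sketch below builds an influence space from k-nearest neighbours and reverse nearest neighbours and scores objects with a toy sparsity factor. The paper's exact sparsity and relevant-subspace definitions are not reproduced; both functions here are simplified assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def influence_space(X, k=5):
    """Union of k-nearest neighbours and reverse k-nearest neighbours
    for every object (a hypothetical reading of the 'influence space')."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                  # idx[:, 0] is the point itself
    knn = [set(row[1:]) for row in idx]
    rknn = [set() for _ in range(len(X))]
    for i, neigh in enumerate(knn):
        for j in neigh:
            rknn[j].add(i)
    return [knn[i] | rknn[i] for i in range(len(X))]

def sparsity_factor(X, space):
    """Toy sparsity score: mean distance to the objects in the influence space."""
    scores = np.zeros(len(X))
    for i, neigh in enumerate(space):
        if neigh:
            scores[i] = np.mean([np.linalg.norm(X[i] - X[j]) for j in neigh])
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), [[8.0, 8.0]]])  # one obvious outlier
    scores = sparsity_factor(X, influence_space(X, k=5))
    print("most outlying index:", int(scores.argmax()))
```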

3.
Knowledge tracing describes exercises through representations of their knowledge concepts, uses these to model the knowledge state, and ultimately predicts a learner's future performance. However, existing work on concept representation neither models the temporal influence of historical concepts on the current concept nor captures the spatial interactions among the concepts within an exercise. To address these problems, a knowledge tracing model with fused spatio-temporal representations is proposed. First, based on the degree of temporal correlation between concepts, the temporal effect of historical concepts on the current concept is modeled. Second, a graph attention network models the spatial interactions among the concepts contained in an exercise, yielding concept representations that carry spatio-temporal information. Finally, exercise representations are derived from these concept representations, and the current knowledge state is obtained through a self-attention mechanism. In the experiments, the proposed model is compared with five related knowledge tracing models on four real datasets and shows better performance; in particular, on the ASSISTments2017 dataset it improves AUC and accuracy over the five baselines by 1.7%-7.7% and 2.1%-7.3%, respectively. Ablation experiments confirm the effectiveness of modeling the spatio-temporal correlations between concepts, training-process experiments show the model's advantages in representing concepts and their interactions, and an application example also illustrates its practical advantage over other knowledge tracing models.

4.
To address the severe foreground-background imbalance in medical images and the difficulty of segmenting small target regions, this paper proposes an attention network based on a Gaussian image pyramid. Specifically, spatial information and abstract information are first fused in the feature decoding stage; second, a feature recall module is designed to force the encoder to miss fewer features of the regions of interest; finally, a hybrid loss composed of a classification-accuracy term and a global region-overlap term is introduced to handle the severe foreground-background imbalance. The proposed method is validated on a knee-cartilage dataset and a COVID-19 chest CT dataset, in which the segmented regions account for 2.08% and 10.73% of the images, respectively. Compared with U-Net and its mainstream variants, the method achieves the best Dice coefficients on both datasets, 0.884±0.032 and 0.831±0.072.
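A minimal PyTorch sketch of a loss that combines a pixel-classification term with a global region-overlap term (binary cross-entropy plus a soft Dice term). The abstract does not specify the weighting, so `alpha` below is an assumed parameter, and this is not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, alpha=0.5, eps=1e-6):
    """BCE (pixel classification) + soft Dice (global region overlap).
    `alpha` balances the two terms; its value here is an assumption."""
    prob = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, target)
    inter = (prob * target).sum()
    dice = (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    return alpha * bce + (1 - alpha) * (1 - dice)

# toy usage: a 1x1x64x64 prediction against a sparse (~2% foreground) mask
logits = torch.randn(1, 1, 64, 64)
target = (torch.rand(1, 1, 64, 64) < 0.02).float()
print(hybrid_loss(logits, target).item())
```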

5.
A Voronoi diagram is an interdisciplinary concept that has been applied to many fields. In geographic information systems (GIS), existing capabilities for generating Voronoi diagrams normally focus on ordinary (not weighted) point (not linear or area) features. For better integration of Voronoi diagram models and GIS, a raster-based approach is developed and implemented seamlessly as an ArcGIS extension using ArcObjects. In this paper, the methodology and implementation of the extension are described, and examples are provided for ordinary or weighted point, line, and polygon features. Advantages and limitations of the extension are also discussed. The extension has the following features: (1) it works for point, line, and polygon vector features; (2) it can generate both ordinary and multiplicatively weighted Voronoi diagrams in vector format; (3) it can assign non-spatial attributes of input features to Voronoi cells through spatial joining; and (4) it can produce an ordinary or a weighted Euclidean distance raster dataset for spatial modeling applications. The results can be conveniently combined with other GIS datasets to support both vector-based spatial analysis and raster-based spatial modeling.
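A toy numpy illustration of the raster idea for point generators: each cell is assigned to the generator with the smallest weight-scaled Euclidean distance, which yields an ordinary Voronoi diagram when all weights are 1 and a multiplicatively weighted one otherwise. This is a simplified sketch, not the ArcObjects extension itself.

```python
import numpy as np

def weighted_voronoi_raster(shape, points, weights=None):
    """Assign every raster cell to the generator minimising d(cell, p) / w_p.
    With all weights equal this reduces to an ordinary Voronoi diagram."""
    rows, cols = shape
    if weights is None:
        weights = np.ones(len(points))
    yy, xx = np.mgrid[0:rows, 0:cols]
    dist = np.stack([np.hypot(yy - py, xx - px) / w
                     for (py, px), w in zip(points, weights)])
    return dist.argmin(axis=0), dist.min(axis=0)   # cell owner, distance raster

owners, dist = weighted_voronoi_raster((200, 300), [(50, 60), (150, 220)], [1.0, 2.0])
print(np.bincount(owners.ravel()))   # number of cells captured by each generator
```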

6.
Video salient object detection must combine spatial and temporal information to continuously localize motion-related salient objects in a video sequence; its core problem is how to characterize the spatio-temporal features of moving objects efficiently. Most existing algorithms extract temporal features with optical flow, ConvLSTM, or 3D convolution and lack the ability to learn temporal information continuously. To this end, a robust spatial-temporal progressive learning network (STPLNet) is designed to efficiently localize salient objects in video sequences. In the spatial domain, a U-shaped structure encodes and decodes each video frame; in the temporal domain, the network progressively encodes moving-object features by learning the main-body and deformation-region features of moving objects between frames, capturing the objects' temporal correlation and motion tendency. In comparisons with 13 mainstream video salient object detection algorithms on four public datasets, the proposed model achieves the best results on several metrics (max F, S-measure (S), MAE) while maintaining good real-time running speed.

7.
To address the mismatch between spatial and channel features and the loss of pixels belonging to small objects in current semantic segmentation networks, a dual-path semantic segmentation algorithm based on spatial feature extraction and an attention mechanism is designed. The spatial information path uses 4x downsampling to preserve high-resolution features and introduces a spatial feature extraction module that fuses multi-scale spatial information, strengthening the network's ability to recognize small objects; a semantic context path combined with two-stage channel attention extracts discriminative features, so that deep features can guide shallow features to capture more accurate semantic information and reduce the loss of accuracy. The algorithm is validated on the CamVid and Aeroscapes datasets, reaching mean IoU of 70.5% and 51.8%, respectively, an improvement over current mainstream dual-path semantic segmentation models, which verifies its effectiveness.

8.
An algorithm for extracting the spatial objects contained in raster image data is presented. Input to the algorithm is a Siemens computer-aided design (SICAD)-Hygris raster file representing a classification of the plane. Output of the algorithm is a SICAD geographic information system (GIS) external file containing the topologically complete definitions of all area objects and their constituents, such as arcs and nodes, as implied by the classification. The algorithm is based on a one-pass plane sweep that processes the input raster data in a strictly sequential order. Processing takes place only if certain color changes, also called events, occur. Each event triggers a well-defined sequence of simple actions.

9.
Lacunarity analysis of raster datasets and 1D, 2D, and 3D point patterns
Spatial scale plays an important role in many fields. As a scale-dependent measure for spatial heterogeneity, lacunarity describes the distribution of gaps within a set at multiple scales. In Earth science, environmental science, and ecology, lacunarity has been increasingly used for multiscale modeling of spatial patterns. This paper presents the development and implementation of a geographic information system (GIS) software extension for lacunarity analysis of raster datasets and 1D, 2D, and 3D point patterns. Depending on the application requirement, lacunarity analysis can be performed in two modes: global mode or local mode. The extension works for: (1) binary (1-bit) and grey-scale datasets in any raster format supported by ArcGIS and (2) 1D, 2D, and 3D point datasets as shapefiles or geodatabase feature classes. For more effective measurement of lacunarity for different patterns or processes in raster datasets, the extension allows users to define an area of interest (AOI) in four different ways, including using a polygon in an existing feature layer. Additionally, directionality can be taken into account when grey-scale datasets are used for local lacunarity analysis. The methodology and graphical user interface (GUI) are described. The application of the extension is demonstrated using both simulated and real datasets, including Brodatz texture images, a Spaceborne Imaging Radar (SIR-C) image, simulated 1D points on a drainage network, and 3D random and clustered point patterns. The options of lacunarity analysis and the effects of polyline arrangement on lacunarity of 1D points are also discussed. Results from sample data suggest that the lacunarity analysis extension can be used for efficient modeling of spatial patterns at multiple scales.
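Lacunarity at box size r is commonly computed with the gliding-box algorithm as the ratio of the second moment of the box masses to the squared first moment. The minimal sketch below covers only the global, binary 2D case; the extension's AOI, directionality, and local-mode options are not reproduced here.

```python
import numpy as np

def lacunarity(binary, box_sizes):
    """Gliding-box lacunarity L(r) = E[S^2] / E[S]^2 for a binary raster,
    where S is the number of occupied cells inside an r-by-r box."""
    out = {}
    for r in box_sizes:
        masses = []
        for i in range(binary.shape[0] - r + 1):
            for j in range(binary.shape[1] - r + 1):
                masses.append(binary[i:i + r, j:j + r].sum())
        masses = np.asarray(masses, dtype=float)
        # var/mean^2 + 1 equals the second moment over the squared first moment
        out[r] = masses.var() / masses.mean() ** 2 + 1.0
    return out

rng = np.random.default_rng(1)
img = (rng.random((64, 64)) < 0.2).astype(int)   # sparse binary test raster
print(lacunarity(img, [2, 4, 8]))
```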

10.
Objective: Existing image recognition methods perform well when the training data and test data are drawn from the same distribution, but this assumption does not hold in real scenarios, which degrades recognition accuracy. Domain adaptation methods, which deal with data from two related domains with different distributions, are an effective way to address this problem. Method: Based on an analysis of the data distributions, a joint balanced adaptation method based on attention transfer is proposed, which transfers image features extracted from labeled source-domain data to the unlabeled target domain. First, an attention transfer mechanism transfers the spatial category information of the labeled source data to the unlabeled target domain; by defining the attention of the convolutional neural network, the attended information is used to improve recognition accuracy. Second, a prior distribution over the network parameters is introduced based on the target dataset, giving the network the ability to automatically adjust the degree of feature alignment in each domain-alignment layer. Finally, a cross-domain bias describes the input distribution of each domain-specific alignment layer, quantitatively expressing the degree of domain adaptation learned by each layer. Results: The method achieves an average recognition accuracy of 77.6% on the Office-31 dataset and 90.7% on the Office-Caltech dataset, far ahead of traditional hand-crafted feature methods and comparable to the current best methods. Conclusion: The joint balanced domain adaptation method with attention transfer not only achieves high recognition accuracy but also automatically learns the degree of feature alignment between domains, confirming that cross-domain feature transfer improves network optimization.
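One common way to transfer spatial attention between two network branches (the general "activation-based attention transfer" idea) is to match L2-normalised attention maps computed from feature activations. The hedged PyTorch sketch below shows that mechanism; it is not necessarily the exact formulation used in this paper.

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    """Activation-based spatial attention: channel-wise sum of squares,
    flattened and L2-normalised."""
    a = feat.pow(2).sum(dim=1)                 # (N, H, W)
    return F.normalize(a.flatten(1), dim=1)    # (N, H*W)

def attention_transfer_loss(feat_src, feat_tgt):
    """Mean squared distance between the two normalised attention maps."""
    return (attention_map(feat_src) - attention_map(feat_tgt)).pow(2).mean()

src = torch.randn(8, 64, 14, 14)   # source-branch feature map (toy data)
tgt = torch.randn(8, 64, 14, 14)   # target-branch feature map (toy data)
print(attention_transfer_loss(src, tgt).item())
```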

11.
This study describes a newly developed high-resolution (1.1 km) Normalized Difference Vegetation Index dataset for peninsular Spain and the Balearic Islands (Sp_1km_NDVI). The dataset is developed from National Oceanic and Atmospheric Administration–Advanced Very High Resolution Radiometer (NOAA–AVHRR) afternoon images spanning the past three decades (1981–2015). After a careful pre-processing procedure, including calibration with post-launch calibration coefficients, geometric and topographic corrections, cloud removal, temporal filtering, and bi-weekly composites based on the maximum NDVI value, we assessed changes in vegetation greening over the study domain using Mann-Kendall and Theil-Sen statistics. Our trend results were compared with those derived from widely recognized global NDVI datasets [e.g. the Global Inventory Modelling and Mapping Studies 3rd generation (GIMMS3g), Smoothed NDVI (SMN) and Moderate-Resolution Imaging Spectroradiometer (MODIS)]. Results demonstrate good agreement between the annual trends based on the Sp_1km_NDVI product and the other datasets. Nonetheless, we found some differences in the spatial patterns of the NDVI trends at the seasonal scale. Overall, in comparison to the available global NDVI datasets, Sp_1km_NDVI allows changes in vegetation greening to be characterized at a more detailed spatial and temporal scale. Specifically, our dataset provides a relatively long corrected satellite time series (>30 years), which is crucial for understanding the response of vegetation to climate change and human-induced activities. Also, given the complex spatial structure of NDVI changes over the study domain, particularly due to rapid land intensification processes, the spatial resolution (1.1 km) of our dataset can provide detailed spatial information on the inter-annual variability of vegetation greening in this Mediterranean region and support assessing its links to climate change and variability.
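The trend assessment relies on the Theil-Sen slope estimator and Mann-Kendall statistics. A per-pixel sketch using SciPy follows; the Mann-Kendall significance is approximated here via the Kendall-tau p-value, a common shortcut rather than necessarily the exact procedure used for this dataset, and the NDVI series below is synthetic.

```python
import numpy as np
from scipy.stats import kendalltau, theilslopes

def ndvi_trend(series, years):
    """Theil-Sen slope and Kendall-tau p-value for one pixel's annual NDVI."""
    slope, intercept, lo, hi = theilslopes(series, years)
    tau, p_value = kendalltau(years, series)
    return slope, p_value

years = np.arange(1982, 2016)
ndvi = 0.4 + 0.002 * (years - 1982) + np.random.default_rng(2).normal(0, 0.01, len(years))
slope, p = ndvi_trend(ndvi, years)
print(f"Theil-Sen slope: {slope:.4f} NDVI/yr, p = {p:.3f}")
```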

12.
13.
A fuzzy index for detecting spatiotemporal outliers
The detection of spatial outliers helps extract important and valuable information from large spatial datasets. Most of the existing work in outlier detection views the condition of being an outlier as a binary property. However, for many scenarios, it is more meaningful to assign a degree of being an outlier to each object. The temporal dimension should also be taken into consideration. In this paper, we formally introduce a new notion of spatial outliers. We discuss the spatiotemporal outlier detection problem, and we design a methodology to discover these outliers effectively. We introduce a new index called the fuzzy outlier index, FoI, which expresses the degree to which a spatial object belongs to a spatiotemporal neighbourhood. The proposed outlier detection method can be applied to phenomena evolving over time, such as moving objects, pedestrian modelling or credit card fraud.
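The key idea is a degree of outlierness rather than a binary label. The abstract does not give the FoI formula, so the sketch below is only an illustrative stand-in: membership in the spatio-temporal neighbourhood decays with an object's attribute deviation from its neighbours, and the outlier degree is one minus that membership.

```python
import numpy as np

def fuzzy_outlier_degree(values, coords, times, radius=1.0, window=1.0):
    """Illustrative fuzzy outlier degree in [0, 1]: 1 - membership, where
    membership decays with an object's attribute deviation from the mean of
    its spatio-temporal neighbourhood (NOT the paper's exact FoI)."""
    n = len(values)
    degree = np.zeros(n)
    for i in range(n):
        near = (np.linalg.norm(coords - coords[i], axis=1) <= radius) & \
               (np.abs(times - times[i]) <= window)
        near[i] = False
        if near.any():
            dev = abs(values[i] - values[near].mean())
            scale = values[near].std() + 1e-9
            degree[i] = 1.0 - np.exp(-dev / scale)   # soft score, not binary
    return degree

rng = np.random.default_rng(3)
coords = rng.random((50, 2))
times = rng.integers(0, 5, 50).astype(float)
vals = rng.normal(10, 1, 50)
vals[7] = 25.0                                        # injected anomaly
print(np.argmax(fuzzy_outlier_degree(vals, coords, times, 0.3, 1.0)))  # likely 7
```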

14.
Exploring spatial datasets with histograms
As online spatial datasets grow both in number and sophistication, it becomes increasingly difficult for users to decide whether a dataset is suitable for their tasks, especially when they do not have prior knowledge of the dataset. In this paper, we propose browsing as an effective and efficient way to explore the content of a spatial dataset. Browsing allows users to view the size of a result set before evaluating the query at the database, thereby avoiding zero-hit/mega-hit queries and saving time and resources. Although the underlying technique supporting browsing is similar to range query aggregation and selectivity estimation, spatial dataset browsing poses some unique challenges. In this paper, we identify a set of spatial relations that need to be supported in browsing applications, namely, the contains, contained and the overlap relations. We prove a lower bound on the storage required to answer queries about the contains relation accurately at a given resolution. We then present three storage-efficient approximation algorithms which we believe to be the first to estimate query results about these spatial relations. We evaluate these algorithms with both synthetic and real world datasets and show that they provide highly accurate estimates for datasets with various characteristics.
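The browsing idea rests on answering "how many objects would this range query return?" from a compact summary instead of the full dataset. A simple uniform-grid count histogram illustrates the principle; the paper's actual structures for the contains/contained/overlap relations are more involved than this sketch.

```python
import numpy as np

class GridHistogram:
    """Uniform grid of point counts used to estimate range-query result sizes."""
    def __init__(self, points, extent, bins=32):
        (xmin, ymin, xmax, ymax) = extent
        self.counts, self.xedges, self.yedges = np.histogram2d(
            points[:, 0], points[:, 1], bins=bins,
            range=[[xmin, xmax], [ymin, ymax]])

    def estimate(self, qxmin, qymin, qxmax, qymax):
        """Approximate number of points inside the query rectangle by summing
        the counts of all fully or partially covered cells (an overestimate)."""
        xi = np.searchsorted(self.xedges, [qxmin, qxmax])
        yi = np.searchsorted(self.yedges, [qymin, qymax])
        return self.counts[max(xi[0] - 1, 0):xi[1], max(yi[0] - 1, 0):yi[1]].sum()

rng = np.random.default_rng(4)
pts = rng.random((10000, 2))
h = GridHistogram(pts, (0, 0, 1, 1))
print("estimated hits:", h.estimate(0.1, 0.1, 0.3, 0.3))   # exact answer is ~400
```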

15.

Developments in advanced innovations have prompted the generation of an immense amount of digital information. The data deluge contains hidden information that is difficult to extract. In the biomedical domain, the development of technology has caused the production of voluminous data. Processing these voluminous textual data is referred to as ‘biomedical content mining’. Emerging artificial intelligence (AI) models play a major role in the automation of Pharma 4.0. In AI, natural language processing (NLP) plays a dynamic role in extracting knowledge from biomedical documents. Research articles published by scientists and researchers contain an enormous amount of hidden information. Most of the original and peer-reviewed articles are indexed in PubMed. Extracting meaningful information from a large number of literature documents is very difficult for human beings. This research aims to extract the named entities of literature documents available in the life science domain. A high-level architecture is proposed along with a novel named entity recognition (NER) model. The model is built using rule-based machine learning (ML). The proposed ArRaNER model produced better accuracy and was also able to identify more entities. The NER model was tested on two different datasets: a PubMed dataset and a Wikipedia talk dataset. The ArRaNER model obtains an accuracy of 83.42% on the PubMed articles and 77.65% on the Wikipedia articles.

16.
To address the limited quality and novelty of generated video descriptions, this paper proposes an encoder-decoder model based on feature reinforcement and textual knowledge supplementation. In the encoding stage, local and global feature reinforcement enhances the model's fine-grained feature extraction for static objects in the video and improves its ability to distinguish similar object semantics, and visual semantics are fused with video features in a long short-term memory (LSTM) network. In the decoding stage, to mine implicit information in the video that is hard for machines to discover, some frames are sampled and the visual objects in them are detected; the detected objects are then used to retrieve knowledge from an external knowledge base to supplement the generated description, producing more novel and natural text. Experimental results on the MSVD and MSR-VTT datasets show that the method performs well and that the generated descriptions can, to some extent, express novel implicit information.

17.
Spatial clustering analysis is an important issue that has been widely studied to extract meaningful subgroups of geo-referenced data. Although many approaches have been developed in the literature, efficiently modeling the network constraint that objects (e.g., urban facilities) are observed on or alongside a street network remains a challenging task for spatial clustering. Based on the techniques of mathematical morphology, this paper presents a new spatial clustering approach, NMMSC, designed for mining the grouping patterns of network-constrained point objects. NMMSC is essentially a hierarchical clustering approach, and it generally consists of two main steps: first, the original vector data is converted to raster data by using the basic linear unit of the network as the pixel in network space; second, based on the specified 1-dimensional raster structure, an extended mathematical morphology operator (i.e., dilation) is iteratively performed to identify spatial point agglomerations with a hierarchical structure snapped onto the network. Compared to existing methods of network-constrained hierarchical clustering, our method is more efficient for cluster similarity computation, with linear time complexity. The effectiveness and efficiency of our approach are verified through experiments with real and synthetic datasets.
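NMMSC rasterises the network into 1-D linear units and grows clusters by repeatedly dilating the occupied units along the network. The toy sketch below works on a single polyline; a real network would also need the graph adjacency between units, which is only hinted at here.

```python
import numpy as np

def dilate_1d(occupied, iterations=1):
    """Binary dilation along a 1-D raster of network linear units:
    each pass marks the immediate neighbours of occupied units."""
    x = occupied.copy()
    for _ in range(iterations):
        grown = x.copy()
        grown[1:] |= x[:-1]
        grown[:-1] |= x[1:]
        x = grown
    return x

def cluster_labels(occupied):
    """Label maximal runs of occupied units (connected clusters on the line)."""
    labels = np.zeros(len(occupied), dtype=int)
    current = 0
    for i, v in enumerate(occupied):
        if v and (i == 0 or not occupied[i - 1]):
            current += 1
        labels[i] = current if v else 0
    return labels

units = np.zeros(30, dtype=bool)
units[[2, 3, 9, 10, 11, 25]] = True          # point events snapped to the network
print(cluster_labels(dilate_1d(units, iterations=2)))
```

Running the dilation with increasing iteration counts merges nearby groups step by step, which mirrors the hierarchical structure the approach builds.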

18.
An image caption generation model fusing scene and object prior knowledge
Objective: Current image captioning methods based on deep convolutional neural networks (CNN) and long short-term memory (LSTM) networks generally use object category information as prior knowledge when extracting CNN features and ignore the scene prior knowledge of the image, so the generated sentences lack accurate descriptions of the scene and tend to misjudge relations such as the positions of objects. To address this problem, an image caption generation model fusing scene and object category priors (F-SOCPK) is designed, which incorporates both the scene priors and the object-category priors of the image into the model so that they jointly generate the description sentence and improve its quality. Method: First, the parameters of the CNN-S model are trained on the large-scale scene-category dataset Place205 so that CNN-S contains richer scene priors; these parameters are then transferred to CNNd-S via transfer learning to capture the scene information of the image to be described. At the same time, the parameters of the CNN-O model are trained on the large-scale object-category dataset Imagenet and then transferred to CNNd-O to capture the object information in the image. After the scene and object information is extracted, it is fed into the language models LM-S and LM-O, respectively; the outputs of LM-S and LM-O are transformed by a Softmax function to obtain a probability score for every word in the vocabulary; finally, weighted fusion computes the final score of each word, the word with the highest probability is taken as the output at the current time step, and the description sentence is eventually generated. Results: Experiments were conducted on the public MSCOCO, Flickr30k, and Flickr8k datasets. The proposed model outperforms the model that uses only object-category information on multiple metrics, including BLEU (reflecting sentence coherence and precision), METEOR (reflecting word precision and recall), and CIDEr (reflecting semantic richness); on the Flickr8k dataset in particular, it improves CIDEr by 9% over the object-only Object-based model and by nearly 11% over the scene-only Scene-based model. Conclusion: The proposed method is effective, improving substantially on the baseline model and performing very competitively against other mainstream methods, especially on larger datasets such as MSCOCO; on smaller datasets such as Flickr8k, its performance still needs further improvement. In future work, more visual priors, such as action categories and object-object relations, will be incorporated into the model, and more visual techniques, such as deeper CNN models, object detection, and scene understanding, will be combined to further improve the accuracy of the generated sentences.
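The word-level fusion step can be illustrated with a tiny numpy example: each language model's scores are turned into vocabulary-wide probabilities by a softmax, the two distributions are combined with a weight, and the arg-max word is emitted at the current time step. The weight value and the toy vocabulary below are assumptions, not values from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fuse_word_scores(scores_scene, scores_object, w=0.5):
    """Weighted fusion of the scene LM and object LM distributions over the
    vocabulary; returns the index of the word to emit at this time step."""
    p = w * softmax(scores_scene) + (1 - w) * softmax(scores_object)
    return int(p.argmax()), p

vocab = ["a", "dog", "beach", "runs", "on", "the"]
scene_scores = np.array([0.1, 0.2, 2.5, 0.3, 0.8, 0.9])    # scene LM favours "beach"
object_scores = np.array([0.1, 2.8, 0.4, 0.6, 0.2, 0.7])   # object LM favours "dog"
idx, p = fuse_word_scores(scene_scores, object_scores, w=0.4)
print(vocab[idx], p.round(3))
```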

19.
Depth maps provide the 3D spatial structure around a moving target and can therefore be used to improve tracking performance. However, RGBD object tracking datasets are currently lacking, so deep-learning trackers with RGBD input cannot be trained directly. To this end, a knowledge-alignment-based model transfer and recombination algorithm is proposed, which conveniently transfers models trained on other RGBD tasks to the DiMP-based tracking algorithm and does not require recomputing the transfer parameters for different tracked objects. In addition, to deal with the instability of depth-map information, an efficient smoothing and stabilization algorithm is proposed. Experimental results on the VOTRGBD dataset show that the transferred and fused features significantly improve the discrimination between target and background and effectively improve tracker performance.

20.
Polygons provide natural representations for many types of geospatial objects, such as countries, buildings, and pollution hotspots. Thus, polygon-based data mining techniques are particularly useful for mining geospatial datasets. In this paper, we propose a polygon-based clustering and analysis framework for mining multiple geospatial datasets that have inherently hidden relations. In this framework, polygons are first generated from multiple geospatial point datasets by using a density-based contouring algorithm called DCONTOUR. Next, a density-based clustering algorithm called Poly-SNN with novel dissimilarity functions is employed to cluster polygons to create meta-clusters of polygons. Finally, post-processing analysis techniques are proposed to extract interesting patterns and user-guided summarized knowledge from meta-clusters. These techniques employ plug-in reward functions that capture a domain expert's notion of interestingness to guide the extraction of knowledge from meta-clusters. The effectiveness of our framework is tested in a real-world case study involving ozone pollution events in Texas. The experimental results show that our framework can reveal interesting relationships between different ozone hotspots represented by polygons; it can also identify interesting hidden relations between ozone hotspots and several meteorological variables, such as outdoor temperature, solar radiation, and wind speed.
