Similar Documents
18 similar documents found (search time: 390 ms)
1.
Because addresses in China commonly suffer from highly complex naming, unordered structure, and chaotic house-number assignment, the road-interpolation geocoding approach used in Europe and the United States is difficult to apply. This paper therefore proposes a geocoding model based on irregular address points and implements the model's two basic applications: address matching and reverse address matching.
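The model's two basic operations can be illustrated with a minimal sketch over discrete address points; the record layout and sample addresses below are assumptions for illustration, not the paper's actual model:

```python
# Minimal sketch of geocoding over discrete address points.
# The point records and field names are illustrative assumptions.
ADDRESS_POINTS = [
    {"address": "人民路1号", "x": 0.0, "y": 0.0},
    {"address": "人民路15号", "x": 120.0, "y": 5.0},
    {"address": "解放街8号", "x": 300.0, "y": 210.0},
]

def match_address(text):
    """Forward matching: exact lookup of an address string -> coordinates."""
    for rec in ADDRESS_POINTS:
        if rec["address"] == text:
            return (rec["x"], rec["y"])
    return None

def reverse_match(x, y):
    """Reverse matching: address of the nearest stored point (brute force)."""
    return min(ADDRESS_POINTS,
               key=lambda r: (r["x"] - x) ** 2 + (r["y"] - y) ** 2)["address"]

print(match_address("人民路15号"))  # (120.0, 5.0)
print(reverse_match(118.0, 3.0))    # 人民路15号
```

A real implementation would back both lookups with a spatial index rather than linear scans, but the point-based (rather than road-interpolated) lookup is the essence of the model.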

2.
程琦  梁武卫  汪培 《城市勘测》2018,(1):76-78,82
Targeting the characteristics of Chinese place names and addresses, a matching algorithm based on a composite dictionary was designed. By combining reverse maximum matching, gradient weighting, and a composite dictionary, the algorithm achieves more accurate and efficient place-name search. A matching tool implementing the algorithm was developed with GIS software and has been widely validated and applied in projects such as business registration, police geographic information systems, land-rights confirmation, and population location.
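The reverse-maximum-matching component can be sketched as follows; the toy dictionary and maximum word length are assumptions, and the gradient-weight and composite-dictionary parts of the paper's algorithm are not shown:

```python
# Hedged sketch of reverse maximum matching (RMM) segmentation against an
# address dictionary. Dictionary entries and max_len are illustrative.
def rmm_segment(text, dictionary, max_len=6):
    """Segment `text` right-to-left, greedily taking the longest dictionary
    entry ending at the current position; unmatched characters fall back to
    single-character tokens."""
    result = []
    i = len(text)
    while i > 0:
        for size in range(min(max_len, i), 0, -1):
            piece = text[i - size:i]
            if size == 1 or piece in dictionary:
                result.append(piece)
                i -= size
                break
    result.reverse()
    return result

addr_dict = {"北京市", "海淀区", "中关村", "大街", "27号"}
print(rmm_segment("北京市海淀区中关村大街27号", addr_dict))
```

For Chinese addresses, scanning from the right tends to work well because the most specific elements (house numbers, building names) sit at the end of the string.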

3.
《Planning》2014,(1)
To address the large space consumption and inaccurate word segmentation of traditional text-indexing techniques, a high-performance text-indexing system was designed and implemented. The system adopts a compressed full-text self-index algorithm, which saves space and avoids the pitfalls of natural-language word segmentation, and a wildcard search algorithm extends it with fuzzy search. On a many-core high-performance CPU the system supports multi-threaded parallel processing for higher throughput, and the whole implementation is Web-based and cross-platform. Experiments show that the system reduces the index's space consumption to about 50% of the original text, giving it high practical value.
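The character-level self-indexing idea — indexing every position of the text so no word segmentation is needed — can be illustrated with a plain (uncompressed) suffix array. The real system uses a compressed self-index, so this sketch only shows the query principle, and the sample text is made up:

```python
def build_suffix_array(text):
    """Sort all suffix start positions lexicographically (toy O(n^2 log n) build)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def locate(text, sa, pat):
    """Find all occurrences of `pat` by binary search over the suffix array.
    Works per character, so Chinese text needs no word segmentation."""
    def bound(strict):
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            prefix = text[sa[mid]:sa[mid] + len(pat)]
            if prefix < pat or (strict and prefix == pat):
                lo = mid + 1
            else:
                hi = mid
        return lo

    return sorted(sa[bound(False):bound(True)])

text = "索引文本索引"
sa = build_suffix_array(text)
print(locate(text, sa, "索引"))  # [0, 4]
```

A compressed self-index (e.g., an FM-index) answers the same queries directly from a compressed representation of the text, which is where the roughly 50% space figure comes from.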

4.
张雪虎 is an associate professor at the Institute of Remote Sensing and Geographic Information Systems, Peking University, director of the Spatial Intelligent Computing Laboratory, a master's supervisor, an IEEE Senior Member, and a member of the Education and Training Working Group of the Committee on Earth Observation Satellites (CEOS). He previously served as a postdoctoral researcher at the Microwave Remote Sensing Laboratory, Department of Electrical Engineering, University of Massachusetts; a member of the NOAA "Hurricane Hunters" research team; and a software architect at Empirix in the United States. After returning to China in 2003, he worked mainly on the national 863 Digital City research program, leading the "urban address geocoding" subtask; the AddressGeocoding prototype system developed under his leadership achieved a correct address-match rate above 90%, largely resolving the address-geocoding bottleneck in China's spatial-information field. His current research covers Chinese address geocoding, intelligent spatial-information computing and services, and complex systems and chaotic neural networks. He has published more than 20 academic papers, over 10 of which are indexed by SCI/EI.

5.
In urban planning, where digital technology is an increasingly important aid, urban address coding is a fundamental key technology. Drawing on the address-coding experience of developed Western countries and the historical traditions behind Chinese urban addresses, this paper proposes an urban address-coding method built on community development, composed of community-unit address codes and road address codes, and discusses how address coding established by this method supports current-situation information integration, scheme evaluation, and planning management in digital urban planning.

6.
Establishing the Base Data of the Baoji Digital Urban Management System    Cited: 1 (self-citations: 0, citations by others: 1)
Using existing information technology and accumulated IT resources, and relying on digital-city technology together with spatial grid and geocoding techniques, spatial geographic technology and a collaborative working model were applied to urban management to build the Baoji digital urban management system, comprehensively improving the efficiency of city administration, the government's management capability and service level, and delivering more effective services.

7.
Geocoding-Based Integration of Urban and Rural Planning Information Resources in Beijing    Cited: 1 (self-citations: 0, citations by others: 1)
Urban geocoding is a key element of the "digital city" spatial-information infrastructure and the supporting technology for urban information sharing and exchange. Beijing's geocoding standardization work defines a spatial positioning reference system and spatial units and applies spatial parsing, spatial association, and spatial referencing, enabling geocoding to integrate urban-planning information resources and opening opportunities for sharing urban and rural planning information.

8.
Building the address-coding database is an essential foundation of address-coding technology. This paper describes the database-building process in detail, covering address-data standardization, overall database design, and data collection and updating. The design section focuses on the database's four-layer logical structure, the data-flow diagram of system construction, and the table structure of the address database, and the paper concludes with practical applications. Several suggestions are offered in light of the current situation in China.

9.
Based on high-resolution orthoimagery overlaid with large-scale topographic maps of built-up areas, place-name and address data were verified and supplemented, and a village-and-town map database was built according to place-name codes and geographic-feature classification codes. Following map-compilation design rules, map compilation and printing were completed with a GIS platform and professional cartographic software. The map database and code lookup tables enable intelligent symbolization of spatial entities and rapid map production, providing technical and methodological support for the "one map per village and town" initiative.

10.
As the unique address identifier of a courtyard or organization, the house-number address plays an important role in daily life and reflects a city's management level in emergency response, household registration, and planning and construction. After analyzing the strengths and weaknesses of traditional ordinal and distance-based numbering, this paper proposes a weighted-distance house-number coding algorithm: numbers are assigned by distance along the road, with the numbering interval determined by the land-use class in the regulatory plan. This remedies the shortcomings of purely distance-based coding, avoiding duplicate and excessively large numbers and reducing skipped numbers, so the resulting house-number codes are more reasonable.
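The weighted-distance idea can be sketched as follows; the spacing values per land-use class and the odd/even side convention are illustrative assumptions, not the paper's actual parameters:

```python
# Hedged sketch of distance-based house numbering with a land-use-weighted
# interval. Spacing values (meters per number) are illustrative assumptions.
SPACING = {"residential": 20.0, "commercial": 10.0, "industrial": 50.0}

def house_number(distance_m, land_use, side="left"):
    """Assign a house number from distance along the road centerline.

    One number per land-use-dependent spacing interval; odd numbers on the
    left side of the road, even numbers on the right.
    """
    step = SPACING.get(land_use, 20.0)
    index = int(distance_m // step)  # which interval the entrance falls in
    base = 2 * index + 1             # odd sequence: 1, 3, 5, ...
    return base if side == "left" else base + 1

print(house_number(95.0, "residential", "left"))  # 95 // 20 = 4 -> number 9
```

Because the interval shrinks in dense (e.g., commercial) frontage and grows in sparse (e.g., industrial) frontage, adjacent entrances get distinct numbers without the large gaps that a single fixed distance interval produces.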

11.
Industry thematic data are an important source for the national geographic-conditions census, involving many departments and highly heterogeneous data. This paper describes Wuhan's experience in collecting, analyzing, and using such thematic data for the census, introduces the processing and mapping workflow and a feature-collection method based on the Baidu Maps service, and explores applications of census thematic data.

12.
The address-coding database is an important component of urban informatization. Taking Fuzhou as a case, this paper studies the construction of an address-coding database, focusing on its technical framework, construction steps, the classification and rule design of address-coding data, and the database's organization, providing technical support for project implementation.

13.
Spatial databases contain geocoded data, which play a major role in engineering applications such as transportation and environmental studies, where geospatial information systems (GIS) are used for spatial modeling and analysis because they carry spatial information (e.g., latitude and longitude) about objects. The information a GIS produces is affected by the quality of the geocoded coordinates stored in its database, so making sound decisions with geocoded data requires understanding the sources of uncertainty in geocoding. There are two major sources: the reference database used to geocode objects, and the interpolation technique applied. Completeness, correctness, consistency, currency, and accuracy of the reference data contribute to the former, whereas the specific logic and assumptions of an interpolation technique contribute to the latter. The primary purpose of this article is to understand the uncertainties associated with interpolation techniques used for geocoding. Three geocoding algorithms were tested and their results compared with data collected by the Global Positioning System (GPS); the overall comparison indicated no significant differences between the three algorithms.
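The street-segment interpolation whose uncertainty the article studies can be sketched minimally as below; the segment record layout is an assumption, and production geocoders also handle address parity, side offsets, and curved geometry:

```python
def interpolate_address(number, seg):
    """Linearly interpolate coordinates for `number` along a street segment.

    `seg` holds the segment endpoints and its low/high house-number range,
    as in classic street-centerline geocoding. The fraction of the number
    range covered maps to the same fraction of the segment's length.
    """
    lo, hi = seg["low"], seg["high"]
    t = 0.5 if hi == lo else (number - lo) / (hi - lo)
    (x1, y1), (x2, y2) = seg["start"], seg["end"]
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

seg = {"low": 100, "high": 200, "start": (0.0, 0.0), "end": (1.0, 1.0)}
print(interpolate_address(150, seg))  # midpoint (0.5, 0.5)
```

The assumption that house numbers are spaced evenly along the segment is exactly the kind of interpolation logic whose contribution to positional uncertainty the article evaluates against GPS ground truth.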

14.
In the field of tunnel lining crack identification, semantic segmentation algorithms based on convolutional neural networks (CNNs) are extensively used. Owing to the inherent locality of CNNs, these algorithms cannot make full use of contextual semantic information, making it difficult to capture the global features of cracks. Transformer-based networks can capture global semantic information, but they suffer from strong data dependence and easy loss of local features. In this paper, a hybrid semantic segmentation algorithm for tunnel lining cracks, named SCDeepLab, is proposed by fusing a Swin Transformer and a CNN in the encoding-decoding framework of DeepLabv3+ to address these issues. In SCDeepLab, a joint backbone network combines CNN-based Inverted Residual Blocks and Swin Transformer Blocks: the former extracts the local detail of the crack to generate the shallow feature layer, whereas the latter extracts global semantic information to obtain the deep feature layer. In addition, an Efficient Channel Attention enhanced Feature Fusion Module is proposed to fuse the shallow and deep features, combining the advantages of both. Furthermore, transfer learning is adopted to mitigate the Swin Transformer's data dependency. The results show that the mean intersection over union (mIoU) and mean pixel accuracy (mPA) of SCDeepLab on the data sets constructed in this paper are 77.41% and 84.42%, respectively, higher than those of previous CNN-based and transformer-based semantic segmentation algorithms.
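For reference, the two reported metrics can be computed from a per-class confusion matrix as below; the matrix values are made-up examples, and this is the generic definition of mIoU/mPA rather than the paper's evaluation code:

```python
def miou_mpa(conf):
    """Mean IoU and mean pixel accuracy from a confusion matrix.

    `conf[i][j]` counts pixels of ground-truth class i predicted as class j.
    Per class: IoU = TP / (TP + FP + FN), PA = TP / (TP + FN); both are
    averaged over classes.
    """
    n = len(conf)
    ious, pas = [], []
    for i in range(n):
        tp = conf[i][i]
        fn = sum(conf[i]) - tp                       # missed class-i pixels
        fp = sum(conf[r][i] for r in range(n)) - tp  # false class-i claims
        ious.append(tp / (tp + fp + fn))
        pas.append(tp / (tp + fn))
    return sum(ious) / n, sum(pas) / n

# Toy 2-class (background / crack) confusion matrix, in pixel counts.
miou, mpa = miou_mpa([[8, 2], [1, 9]])
print(miou, mpa)
```

IoU penalizes false positives as well as misses, which is why the reported mIoU (77.41%) is lower than the mPA (84.42%).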

15.
Because flame-segmentation datasets are scarce, classic semantic-segmentation models have seen little application to flame segmentation, and comparative experiments between models have been insufficient. To address this, a flame-segmentation dataset was constructed, and four semantic-segmentation models and two backbone networks that perform well on public datasets were trained, tested, and compared across different application scenarios. The experimental results show that the U-Net model achieves good results in flame segmentation, among which U-Net+...

16.
Single-shot semantic bounding-box detectors, trained in a supervised manner, are popular in computer-vision-aided visual inspections. These methods have several key limitations: (1) bounding boxes capture too much background, especially when images undergo perspective transformation; (2) domain-specific data are insufficient and costly to label; and (3) detection results on videos or multi-frame data are redundant or incorrect, and selecting the best detection and checking for outliers is a nontrivial task. Recent developments in commercial augmented-reality and robotic hardware can be leveraged to support inspection tasks; a common capability of these platforms is the ability to obtain image sequences and camera poses. In this work, the authors leverage pose information as a "prior" to address the limitations of existing supervised single-shot semantic detectors for visual inspection. The authors propose an unsupervised semantic segmentation method (USP), based on unsupervised learning for image segmentation inspired by differentiable feature clustering, coupled with a novel outlier-rejection and stochastic-consensus mechanism for mask refinement. USP was experimentally validated for a spalling quantification task using a mixed-reality headset (Microsoft HoloLens 2), and a sensitivity study was conducted to evaluate its performance under environmental and operational variations.

17.
A novel video-based method is proposed for long-distance wildfire smoke detection. Since long-distance wildfire smoke usually moves slowly and lacks salient features in video, detection remains a challenging problem. Unlike many traditional video-based methods that rely on smoke color or motion for initial smoke-region segmentation, we use Maximally Stable Extremal Region (MSER) detection to extract local extremal regions of the smoke, making the initial segmentation of candidate smoke regions less dependent on motion and color information. Potential smoke regions are then selected from the candidates using static visual features of smoke, helping to eliminate as many non-smoke regions as possible. Once a potential smoke region is found, it is tracked by searching for the best-matched extremal regions in subsequent frames. At the same time, the propagating motion of the potential smoke region is monitored with a novel cumulated-region approach, which effectively identifies the distinctive expanding and rising motions of smoke and makes the final motion identification insensitive to image shaking. Experiments demonstrate that the proposed method reliably detects long-distance wildfire smoke while producing very few false alarms in actual applications.

18.
Point-cloud segmentation is a key step in 3D model reconstruction, and the traditional segmentation algorithm based on fuzzy C-means (FCM) clustering has limitations when segmenting the fine details of regular objects. To address this, laser reflectance information is added to the traditional algorithm, yielding a reflectance-aware segmentation algorithm. Case studies show that the algorithm is feasible and broadly applicable, with reliable classification results.
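A minimal sketch of the idea — running plain fuzzy C-means over feature vectors that append laser reflectance to the point coordinates — is shown below; the initialization scheme and sample points are illustrative assumptions, not the paper's algorithm:

```python
def dist2(a, b):
    """Squared Euclidean distance, floored to avoid division by zero."""
    return max(sum((ai - bi) ** 2 for ai, bi in zip(a, b)), 1e-12)

def fcm(points, c=2, m=2.0, iters=30):
    """Plain fuzzy C-means; deterministic evenly-spaced-sample initialization."""
    centers = [list(points[i * (len(points) - 1) // max(c - 1, 1)])
               for i in range(c)]
    u = []
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(1/(m-1)),
        # with d as squared distance.
        u = []
        for p in points:
            d = [dist2(p, ctr) for ctr in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (1.0 / (m - 1))
                                for j in range(c)) for i in range(c)])
        # Center update: weighted mean of the points with weights u^m.
        for i in range(c):
            w = [row[i] ** m for row in u]
            total = sum(w)
            centers[i] = [sum(wk * p[k] for wk, p in zip(w, points)) / total
                          for k in range(len(points[0]))]
    return centers, u

# Feature vectors are (x, y, reflectance): reflectance separates the two
# surfaces even where geometry alone would blur the boundary.
pts = [(0.0, 0.0, 0.10), (0.1, 0.0, 0.12), (0.0, 0.1, 0.11),
       (5.0, 5.0, 0.90), (5.1, 5.0, 0.88), (5.0, 5.1, 0.92)]
centers, u = fcm(pts)
```

Appending reflectance as an extra coordinate is the simplest way to make the clustering "reflectance-aware"; in practice the coordinate and reflectance channels would need to be scaled to comparable ranges.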


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号