Similar Documents
20 similar documents found (search time: 0 ms)
1.
2.
We describe a method for generating queries that retrieve data from distributed, heterogeneous semistructured documents, and its implementation in the metadata interface DDXMI (distributed document XML metadata interchange). The proposed system generates local queries appropriate to local schemas from a user query over the global schema. The system constructs mappings between the global schema and local schemas (extracted from local documents if not given), performs path substitution, and identifies nodes to resolve the heterogeneity among nodes with the same label that often exists in semistructured data. The system uses Quilt as its XML query language. An experiment is reported over three local semistructured documents: ‘thesis’, ‘reports’, and ‘journal’ documents with an ‘article’ global schema. The prototype was developed under Windows with Java and JavaCC.
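The core path-substitution step can be sketched in Python (the abstract gives no code; the mapping table, paths, and function names below are illustrative assumptions, not the DDXMI implementation):

```python
# Hypothetical mapping from global-schema paths to per-source local paths.
PATH_MAP = {
    "thesis":  {"/article/title": "/thesis/titleInfo",
                "/article/author": "/thesis/writer"},
    "journal": {"/article/title": "/journal/paper/title",
                "/article/author": "/journal/paper/author"},
}

def rewrite(global_path, source):
    """Substitute a global path with the source's local path, if mapped."""
    mapping = PATH_MAP.get(source, {})
    return mapping.get(global_path)  # None means the source lacks this node

def generate_local_queries(global_path):
    """Produce one local query path per source that can answer the global path."""
    return {src: p for src in PATH_MAP
            if (p := rewrite(global_path, src)) is not None}
```

A real system would rewrite full Quilt queries rather than single paths, but the per-node substitution shown here is the essential mechanism.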

3.
To address the disadvantages of classical sampling plans designed for traditional industrial products, we originally propose a two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first rank sampling plan is to inspect the lot consisting of map sheets, and the second is to inspect the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size with nonconformities being modeled by a hypergeometric distribution function, and the second is for a larger lot size with nonconformities being modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., “strictness for large lot size, toleration for small lot size,” and those of a national standard used specifically for industrial outputs, i.e., “lots with different sizes corresponding to the same sampling plan.”
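The two lot-size cases reduce to standard acceptance-probability formulas; a minimal sketch (parameter names and the toy numbers are my own, not the paper's plan design):

```python
import math

def accept_prob_hypergeom(N, D, n, c):
    """P(accept): at most c nonconforming items in a sample of n,
    drawn without replacement from a lot of N items containing D
    nonconforming ones (the small-lot case)."""
    total = math.comb(N, n)
    return sum(math.comb(D, k) * math.comb(N - D, n - k)
               for k in range(0, min(c, D) + 1)) / total

def accept_prob_poisson(n, p, c):
    """P(accept) with the nonconformity count approximated by
    Poisson(n*p) (the large-lot case)."""
    lam = n * p
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(c + 1))
```

Choosing the sample size n and acceptance number c so these probabilities meet the AQL targets is the optimization problem the paper formulates.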

4.
This paper presents the LQM metadata schema, an extension of the IEEE LOM standard. LQM can register information related to the quality of virtual education resources. As a complement, we have developed a cataloging and evaluation tool capable of registering LQM metadata and performing the subsequent quality estimation according to UNE 66181:2012. The proposal identifies and describes the dimensions and properties of the LQM data elements. The research results show that it is feasible to provide an automatic estimation of the quality of digital educational resources using LQM.

5.
In research on automatic test data generation based on software description models, generating string-type test data is both a research focus and a difficulty. The EFSM model is an important software description model. This paper analyzes the characteristics of the EFSM model and, for string test data generation targeting paths in an EFSM model, establishes a string input variable model and an operation model. Combining the characteristics of static testing, it presents a method that generates string-type test data from the symbolic-execution results of the string variable model along the target path. Experimental results show that the method achieves the expected effect and improves the efficiency of test generation.
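Once symbolic execution has collected the path's constraints on a string variable, a satisfying value can be concretized; a minimal sketch assuming only two simple constraint kinds (a required length and fixed characters at given positions — the abstract does not specify its constraint language):

```python
import random
import string

def generate_string(length, fixed_chars=None, alphabet=string.ascii_letters):
    """Concretize a string satisfying simple path constraints:
    a required length plus fixed characters at given index positions.
    Unconstrained positions are filled randomly from the alphabet."""
    chars = [random.choice(alphabet) for _ in range(length)]
    for pos, ch in (fixed_chars or {}).items():
        chars[pos] = ch
    return "".join(chars)
```

Richer constraints (substring, comparison, concatenation) would need a real string solver, but the generate-then-fix pattern above is the simplest instance of the idea.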

6.
Domain testing is designed to detect domain errors that result from a small boundary shift in a path domain. Although many researchers have studied domain testing, automatic domain test data generation for string predicates has seldom been explored. This paper presents a novel approach for the automatic generation of ON–OFF test points for string predicate borders, and describes a corresponding test data generator. Our empirical work is conducted on a set of programs with string predicates, where extensive trials have been done for each string predicate, and the results are analysed using the SPSS tool. Conclusions are drawn that: (i) the approach is promising and effective; (ii) there is a strong linear relationship between the performance of the test generator and the length of the target string in the predicate tested; and (iii) initial inputs, no shorter than the target string and with characters generated randomly, may enhance the performance of test data generation for string predicates. Copyright © 2009 John Wiley & Sons, Ltd.
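For an equality predicate on a string, the ON point lies exactly on the border and an OFF point is a minimal perturbation just off it; a toy sketch (the perturbation rule here, shifting one character by one code point, is my own illustration, not the paper's generator):

```python
def on_off_points(target):
    """For the predicate  s == target :
    ON point  = the border itself (the target string),
    OFF point = a minimal one-character perturbation off the border."""
    on = target
    off = chr(ord(target[0]) + 1) + target[1:] if target else "a"
    return on, off
```

Running the program under test on both points checks that the implemented border matches the specified one: the ON point should take the true branch and the OFF point the false branch.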

7.
The advent of commercial observation satellites in the new millennium provides unprecedented access to timely information, as they produce images of the Earth with the sharpness and quality previously available only from US, Russian, and French military satellites. Because they are commercial in nature, a broad range of government agencies (including international ones), the news media, businesses, and nongovernmental organizations can gain access to this information. This may have grave implications for national security and personal privacy. Formal policies for prohibiting the release of imagery beyond a certain resolution, and for notifying when an image crosses an international boundary or when such a request is made, are beginning to emerge. Access permissions in this environment are determined by both the spatial and temporal attributes of the data, such as location, resolution level, and the time of image download, as well as those of the user credentials. Since existing authorization models are not adequate to provide access control based on spatial and temporal attributes, in this paper we propose a geospatial data authorization model (GSAM). Unlike traditional access control models, where authorizations are specified using subjects and objects, authorizations in GSAM are specified using credential expressions and object expressions. GSAM supports privilege modes including view, zoom-in, download, overlay, identify, animate, and fly-by, among others. We present our access control prototype system, which enables subject, object, and authorization specification via a Web-based interface. When an access request is made, the access control system computes the overlapping region of the authorization and the access request. Zoom-in and zoom-out requests can simply be made through a click of the mouse, and the appropriate authorizations are evaluated when these access requests are made.
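The overlap computation at the heart of the request evaluation can be sketched for axis-aligned regions (a simplification: GSAM's object expressions are richer, and the resolution gate shown is an assumed reading of the resolution-limit policy, with resolution measured in meters per pixel):

```python
def overlap(auth, req):
    """Intersection of two axis-aligned regions (min_x, min_y, max_x, max_y);
    None if they do not overlap. The viewable area of a request is the
    overlap of the requested window with the authorized window."""
    x1, y1 = max(auth[0], req[0]), max(auth[1], req[1])
    x2, y2 = min(auth[2], req[2]), min(auth[3], req[3])
    return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

def permitted(auth_region, min_m_per_px, req_region, req_m_per_px):
    """Grant the overlapping region only if the requested resolution is no
    finer (no smaller meters-per-pixel) than the authorized minimum."""
    region = overlap(auth_region, req_region)
    return region if region and req_m_per_px >= min_m_per_px else None
```

A zoom-in request is then just a re-evaluation of `permitted` with a smaller region and a smaller meters-per-pixel value.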

8.
Speech interfaces are becoming more and more popular as a means to interact with virtual environments, but the development and integration of these interfaces is usually still ad hoc; in particular, the speech grammar of the interface is commonly created by hand. In this paper, we introduce an approach to automatically generate a speech grammar from semantic information. The semantic information is represented through ontologies and gathered from the conceptual modelling phase of the virtual environment application. The utterances of the user are resolved using queries onto these ontologies so that their meaning can be determined. For validation purposes we augmented a city park designer with our approach. Informal tests support our approach: they reveal that users mainly use words represented in the semantic data, and therefore also words which are incorporated in the automatically generated speech grammar.
Karin Coninx

9.
In this paper we present a generic model for automatic generation of basic multi-partite graphs obtained from collections of arbitrary input data following user indications. The paper also presents GraphGen, a tool that implements this model. The input data is a collection of complex objects composed by a set or list of heterogeneous elements. Our tool provides a simple interface for the user to specify the types of nodes that are relevant for the application domain in each case. The nodes and the relationships between them are derived from the input data through the application of a set of derivation rules specified by the user. The resulting graph can be exported in the standard GraphML format so that it can be further processed with other graph management and mining systems. We end by giving some examples in real scenarios that show the usefulness of this model.
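One simple reading of the derivation-rule idea can be sketched as follows (the rule format, the bibliographic records, and the edge policy — link every pair of nodes derived from the same record — are illustrative assumptions, not GraphGen's actual model):

```python
def build_graph(records, rules):
    """Derive a multi-partite graph: each rule maps an input record to a
    (node_type, node_value) node; an edge links every pair of nodes
    derived from the same record."""
    nodes, edges = set(), set()
    for rec in records:
        derived = [(t, f(rec)) for t, f in rules.items() if f(rec) is not None]
        nodes.update(derived)
        for i in range(len(derived)):
            for j in range(i + 1, len(derived)):
                edges.add((derived[i], derived[j]))
    return nodes, edges

# Toy input: paper records with author and venue node types.
papers = [{"author": "Ann", "venue": "KDD"},
          {"author": "Bob", "venue": "KDD"}]
rules = {"author": lambda r: r["author"], "venue": lambda r: r["venue"]}
nodes, edges = build_graph(papers, rules)
```

Exporting `nodes` and `edges` to GraphML would then be a straightforward serialization step.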

10.
This paper proposes a dynamic test data generation framework based on genetic algorithms. The framework houses a Program Analyser and a Test Case Generator, which intercommunicate to automatically generate test cases. The Program Analyser extracts statements and variables, isolates code paths and creates control flow graphs. The Test Case Generator utilises two optimisation algorithms, the Batch-Optimistic (BO) and the Close-Up (CU), and produces a near to optimum set of test cases with respect to the edge/condition coverage criterion. The efficacy of the proposed approach is assessed on a number of programs and the empirical results indicate that its performance is significantly better compared to existing dynamic test data generation methods.
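The general shape of genetic test data generation can be illustrated with a toy example (this is a generic GA sketch, not the paper's BO or CU algorithm; the target branch and fitness are invented):

```python
import random

def branch_distance(x):
    """Toy coverage fitness: distance to satisfying the branch
    `if x == 37` in the program under test (0 means covered)."""
    return abs(x - 37)

def ga_search(pop_size=20, generations=200, seed=1):
    """Evolve integer inputs toward covering the target branch."""
    rng = random.Random(seed)
    pop = [rng.randint(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branch_distance)          # selection: best first
        if branch_distance(pop[0]) == 0:
            return pop[0]                      # branch covered
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2               # arithmetic crossover
            if rng.random() < 0.3:             # mutation
                child += rng.randint(-5, 5)
            children.append(max(0, child))
        pop = parents + children
    return min(pop, key=branch_distance)
```

A real generator would instrument the program to measure branch distances at runtime rather than hard-coding them.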

11.
Against the background of automatic test data generation for program structural testing, this paper proposes an overlapping-path structure for describing program paths and, based on it, designs a fitness algorithm for multi-path test data generation, so that a single search generates test data for multiple paths. By sharing intermediate individuals produced by the genetic algorithm among target paths, the algorithm reduces the early iterations in which a single-path search must start from randomly generated, unordered individuals, thereby accelerating convergence. Applied to common benchmark programs and programs taken from real projects, the algorithm shortens the average elapsed time by 70.6% compared with the typical branch-predicate distance algorithm.

12.
Automatic personalized video abstraction for sports videos using metadata (Times cited: 1; self-citations: 1; citations by others: 0)
Video abstraction is defined as creating a video abstract that includes only the important information in the original video streams. There are two general types of video abstracts, namely dynamic and static ones. A dynamic video abstract is a 3-dimensional representation created by temporally arranging important scenes, while a static video abstract is a 2-dimensional representation created by spatially arranging only keyframes of important scenes. In this paper, we propose a unified method of automatically creating these two types of video abstracts considering the semantic content, targeting especially broadcast sports videos. For both types of video abstracts, the proposed method first determines the significance of scenes. A play scene, which corresponds to a play, is considered the scene unit of sports videos, and the significance of every play scene is determined based on the play ranks, the time the play occurred, and the number of replays. This information is extracted from the metadata, which describes the semantic content of videos and enables us to consider not only the types of plays but also their influence on the game. In addition, the user's preferences are considered to personalize the video abstracts. For dynamic video abstracts, we propose three approaches for selecting the play scenes of the highest significance: the basic criterion, the greedy criterion, and the play-cut criterion. For static video abstracts, we also propose an effective display style where a user can easily access target scenes from a list of keyframes by tracing the tree structures of sports games. We experimentally verified the effectiveness of our method by comparing our results with man-made video abstracts as well as by conducting questionnaires.
Noboru Babaguchi
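A significance score combining play rank, timing, and replay count, followed by a greedy selection under a time budget, can be sketched as follows (the weights, normalization, and scene format are illustrative assumptions, not the paper's formula):

```python
def significance(rank, minutes_elapsed, replays,
                 weights=(0.5, 0.2, 0.3), game_length=90):
    """Toy significance score for a play scene: combines the play's rank
    (1.0 = most important play type), how late in the game it occurred,
    and its replay count (saturating at 3 replays)."""
    time_factor = minutes_elapsed / game_length   # later plays weigh more
    replay_factor = min(replays, 3) / 3
    w_rank, w_time, w_replay = weights
    return w_rank * rank + w_time * time_factor + w_replay * replay_factor

def select_scenes(scenes, budget):
    """Greedy criterion: pick the highest-significance scenes that fit
    the time budget (seconds)."""
    chosen, used = [], 0
    for s in sorted(scenes, key=lambda s: s["score"], reverse=True):
        if used + s["duration"] <= budget:
            chosen.append(s)
            used += s["duration"]
    return chosen
```

Personalization would simply adjust the weights (or the per-play-type ranks) according to the user's stated preferences.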

13.
Tian Tian, Mao Mingzhi. 《计算机工程与设计》 (Computer Engineering and Design), 2011, 32(6): 2134-2137, 2149.
To automatically generate test data for software structural testing, this paper proposes a simplified particle swarm optimization algorithm with a dynamically changing inertia weight (DWSPSO). The algorithm discards the particle velocity parameter and tracks the state of the swarm through the overall change in the fitness of all particles. At each iteration, the algorithm dynamically adjusts the inertia weight according to changes in particle fitness, giving it dynamically adaptive global and local search capability. Experimental results show that, for automatic test data generation, the algorithm outperforms both the basic particle swarm algorithm and the particle swarm algorithm with a linearly decreasing inertia weight (LDWPSO).
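A velocity-free position update with an adaptive inertia weight can be sketched on a toy fitness function (the update rule and the weight-adaptation heuristic below are assumptions for illustration, not the paper's exact DWSPSO formulas):

```python
import random

def fitness(x):
    """Toy branch-coverage fitness for the program under test (minimize)."""
    return abs(x - 37)

def dwspso(pop_size=15, iters=100, seed=2):
    rng = random.Random(seed)
    xs = [rng.uniform(0, 100) for _ in range(pop_size)]
    pbest = xs[:]
    gbest = min(xs, key=fitness)
    prev_avg = sum(map(fitness, xs)) / pop_size
    w = 0.9
    for _ in range(iters):
        for i, x in enumerate(xs):
            # velocity-free update: move directly toward personal/global bests
            xs[i] = (w * x
                     + 2.0 * rng.random() * (pbest[i] - x)
                     + 2.0 * rng.random() * (gbest - x))
            if fitness(xs[i]) < fitness(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=fitness)
        avg = sum(map(fitness, xs)) / pop_size
        # adapt the inertia weight from the swarm's overall fitness change:
        # shrink it while the swarm improves, grow it back on stagnation
        w = max(0.4, w * 0.98) if avg < prev_avg else min(0.9, w * 1.02)
        prev_avg = avg
    return gbest
```

Dropping the velocity term removes one tuning parameter; the swarm's state is then read entirely from the fitness trajectory, as the abstract describes.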

14.
Underground utility lines being struck by mechanized excavators during construction or maintenance operations is a long-standing problem. Besides the disruptions to public services, daily life, and commerce, utility strike accidents lead to injuries, fatalities, and property damage that cause significant financial loss. Utility strikes by excavation occur mainly because of the lack of an effective approach to synergize the geospatial utility locations and the movement of excavation equipment into a real-time, three-dimensional (3D) spatial context that is accessible to excavator operators. A critical aspect of enabling such a knowledge-based excavation approach is the geospatial utility data and its geometric modeling. Inaccurate and/or incomplete utility location information could instill false confidence and be counterproductive for the excavator operator. This paper addresses the computational details of geometric modeling of geospatial utility data for 3D visualization and proximity monitoring to support knowledge-based excavation. The various stages in the life cycle of underground utility geospatial data are described, and the inherent limitations that preclude the effective use of the data in downstream engineering applications such as excavation guidance are analyzed. Five key requirements (Interactivity, Information Richness, 3-Dimensionality, Accuracy Characterization, and Extensibility) are identified as necessary for the consumption of geospatial utility data in location-sensitive engineering applications. A visualization framework named IDEAL that meets the outlined requirements is developed and presented in this paper to geometrically represent buried utility geospatial data and the movement of excavation equipment in a 3D emulated environment in real time.
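The proximity-monitoring core reduces to a point-to-segment distance check between the equipment and each modeled utility span; a minimal sketch (the bucket-tip point, segment representation of a pipe, and the 0.5 m threshold are illustrative assumptions, not the IDEAL framework's geometry engine):

```python
import math

def dist_point_segment(p, a, b):
    """Shortest 3D distance from point p (e.g. the excavator bucket tip)
    to the segment a-b (e.g. one modeled span of a buried pipe)."""
    ax, ay, az = a
    bx, by, bz = b
    abv = (bx - ax, by - ay, bz - az)
    apv = (p[0] - ax, p[1] - ay, p[2] - az)
    ab2 = sum(c * c for c in abv)
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(apv, abv)) / ab2))
    closest = (ax + t * abv[0], ay + t * abv[1], az + t * abv[2])
    return math.dist(p, closest)

def strike_warning(bucket, pipe_segments, threshold=0.5):
    """Warn when the bucket comes within `threshold` meters of any span."""
    return any(dist_point_segment(bucket, a, b) < threshold
               for a, b in pipe_segments)
```

In a real system the threshold would also absorb the accuracy characterization of the utility data, widening the warning zone where the recorded location is uncertain.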

15.
The purpose of this research is to suggest and develop a building information modeling (BIM) database based on BIM perspective definition metadata for connecting external facility management (FM) and BIM data, which considers variability and expandability from the user’s perspective. The BIM-based FM system must be able to support different use cases per user role and effectively extract information required by the use cases from various heterogeneous data sources. If the FM system’s user perspective becomes structurally fixed when developing the system, the lack of expandability can cause problems for maintenance and reusability. BIM perspective definition (BPD) metadata helps increase expandability and system reusability because it supports system variability, which allows adding or changing the user perspective even after the system has been developed. The information to be dealt with differs according to the user’s role, which also means that the data model, data conversion rules, and expression methods change per perspective. The perspective should be able to extract only the user-requested data from the heterogeneous system’s data source and format it in the style demanded by the user. In order to solve such issues, we analyzed the practice of FM and the benefits of using BIM-based FM, and we proposed a BPD that supports data extraction and conversion and created a prototype.

16.
The authors examine a particular type of domain-based metadata for which a complete ready-made conceptual framework is not and probably cannot be directly supported in predefined data models of knowledge-based representation paradigms. The data or knowledge base designer thus has the burden of sometimes working outside a familiar and otherwise appropriate model. The authors discuss issues arising in formulating certain domain-based metadata extensions and provide guidelines for developing them. These results derive from a continuing effort to create a knowledge base to support research on biological organisms. The added semantics are used to support a more flexible and reliable system for identification.

17.
Based on the object-oriented storage of spatial data and the scalable architecture of cloud storage, control information is managed centrally in a metadata server cluster while the actual spatial data is distributed across a storage device cluster via object storage, separating the control-information path from the data-transfer path; interfaces to hot spatial data objects are cached to reduce the number of metadata accesses and lower the metadata servers' load. Based on the parallelism of object storage devices and the CDMI standard, metadata is managed in top-down functional layers, ...

18.
Automatic generation of path test data based on the GA-PSO algorithm (Times cited: 3; self-citations: 2; citations by others: 3)
To automate test data generation, many genetic algorithms and their variants have been applied to software testing. Genetic algorithms have strong global search ability, but their local search ability is weak and their convergence is slow. This paper combines a genetic algorithm with particle swarm optimization to form a new hybrid algorithm (GA-PSO) and successfully applies it to automatic generation of software test data. Experimental results show that the algorithm combines the advantages of genetic algorithms and particle swarm optimization and, while ensuring that test data is generated correctly, greatly improves the efficiency of data generation.

19.
Automatic generation of high-quality building models from lidar data (Times cited: 3; self-citations: 0; citations by others: 3)
Automating data acquisition for 3D city models is an important research topic in photogrammetry. In addition to techniques that rely on aerial images, generating 3D building models from point clouds provided by light detection and ranging (Lidar) sensors is gaining importance. The progress in sensor technology has triggered this development. Airborne laser scanners can deliver dense point clouds with densities of up to one point per square meter. Using this information, it is possible to detect buildings and their approximate outlines, to extract planar roof faces, and to create models that correctly resemble the roof structures. The author presents a method for automatically generating 3D building models from point clouds generated by Lidar sensing technology.

20.
This paper discusses an iconmap-based visualization technique that enables multiple geospatial variables to be illustrated in a single GIS raster layer. This is achieved by extending the conventional pixel-based data structure to an iconic design. In this way, spatial patterns generated by the interaction of geographic variables can be disclosed, and geospatial information mining can be readily achieved. As a case study, a visual analysis of soil organic matter and soil nutrients for Shuangliu County in the city of Chengdu, China, was undertaken using the prototype IconMapper software, developed by the authors. The results show that the static iconmap can accurately exhibit trends in the distribution of organic matter and nutrients in soil. The dynamic iconmap can both reflect interaction patterns between organic matter and the nutrient variables, and display soil fertility levels in a comprehensive way. Thus, the iconmap-based visualization approach is shown to be a non-fused, exploratory analytical approach for multivariate data analysis and, as a result, is a valuable method for visually analyzing soil fertility conditions.
