Similar Documents
19 similar documents found (search time: 146 ms)
1.
闫巧芝  王洁萍  杜小勇 《软件学报》2009,20(Z1):154-164
In the database-as-a-service (DAS) model, a data owner outsources its data to a third party, the database service provider (DSP). Compared with a traditional DBMS, DAS saves database administration costs by providing Web-based data access. To guarantee the DSP's quality of service, most previous work has focused on data confidentiality and data validity. Existing methods for verifying data validity all require the DSP to supply extra information or store extra data, and every update forces a corresponding adjustment of the verification data, which is inefficient in practical deployments. This paper therefore proposes a validity-verification method based on generated audit queries: an audit query is generated from several queries the user has already issued, and the client uses the audit query's execution result, together with the relationship between the original queries and the audit query, to complete probability-based validity verification efficiently and effectively. Experiments verify the feasibility of the method.
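To make the scheme concrete, here is a minimal Python sketch of how a client might cross-check a generated audit query against previously issued queries; the table, queries, and consistency rule are all invented for illustration and are not taken from the paper.

```python
# A minimal sketch of probabilistic validity verification via a generated
# audit query. All names and the consistency rule are hypothetical.

def server_answer(predicate, table):
    """Stand-in for the (untrusted) DSP executing a selection query."""
    return {row for row in table if predicate(row)}

table = [("alice", 1500), ("bob", 2500), ("carol", 2900)]

q1 = lambda r: 1000 <= r[1] < 2000       # a query the user already issued
q2 = lambda r: 2000 <= r[1] < 3000       # another previously issued query
audit = lambda r: 1000 <= r[1] < 3000    # audit query generated from q1, q2

r1 = server_answer(q1, table)
r2 = server_answer(q2, table)
ra = server_answer(audit, table)

# Because the audit range is exactly the union of q1's and q2's ranges,
# a correct server must return ra == r1 | r2; a mismatch exposes an
# invalid (tampered or incomplete) answer with some probability.
assert ra == r1 | r2, "validity check failed: inconsistent DSP results"
print("audit query consistent with previous query results")
```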

2.
孙丽君  苗夺谦 《计算机工程》2007,33(16):183-185
Gene expression data obtained from microarrays can be used for cancer classification. This paper introduces a rough-set-based method for classifying gene expression data and verifies its effectiveness on an acute leukemia data set. Experiments show that the method achieves high predictive accuracy and can become a powerful tool in bioinformatics research.
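As a toy illustration of the rough-set machinery involved (not the paper's algorithm), the sketch below finds a minimal attribute reduct that preserves classification consistency on a tiny, made-up discretised data set.

```python
from itertools import combinations

# Toy rough-set reduct search: find a minimal gene (attribute) subset whose
# indiscernibility classes still determine the class label. Data values are
# assumed to be already discretised, as rough sets require.
samples = [((0, 1, 1), "ALL"), ((0, 1, 0), "ALL"),
           ((1, 0, 1), "AML"), ((1, 1, 1), "AML")]

def consistent(attrs):
    """True if samples indiscernible on `attrs` always share one label."""
    seen = {}
    for x, y in samples:
        key = tuple(x[a] for a in attrs)
        if seen.setdefault(key, y) != y:
            return False
    return True

n = len(samples[0][0])
reduct = next(c for k in range(1, n + 1)
              for c in combinations(range(n), k) if consistent(c))
print("reduct (informative gene subset):", reduct)
```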

3.
This paper presents a generic data-validation method for multiple modules in heterogeneous environments. Applying the generic "interface definition" idea of interface definition languages, it defines the common data-validation items in an XML validation-configuration file and a generic data-validation interface in a C-like language; each module then implements the validation function against the defined interface, and an example is given. With this method, validation maintenance can be carried out quickly by editing the shared configuration file, avoiding repeated changes across modules and improving software development efficiency.
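The following Python sketch mirrors the idea with an invented XML schema: shared rules live in one configuration file, and every module calls one generic validate() entry point.

```python
import re
import xml.etree.ElementTree as ET

# The XML schema below is invented for illustration; the paper defines its
# own configuration format and a C-like interface definition.
CONFIG = """
<validations>
  <field name="age"   type="int"   min="0" max="150"/>
  <field name="email" type="regex" pattern="^[^@]+@[^@]+$"/>
</validations>
"""

def load_rules(xml_text):
    return {f.get("name"): f.attrib for f in ET.fromstring(xml_text)}

def validate(record, rules):        # the generic interface every module calls
    errors = []
    for name, rule in rules.items():
        value = record.get(name)
        if rule["type"] == "int":
            if not (int(rule["min"]) <= int(value) <= int(rule["max"])):
                errors.append(f"{name} out of range")
        elif rule["type"] == "regex":
            if not re.match(rule["pattern"], str(value)):
                errors.append(f"{name} malformed")
    return errors

print(validate({"age": "200", "email": "bad"}, load_rules(CONFIG)))
```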

4.
In everyday work, mistakes are hard to avoid when entering data, so how can we check data validity simply and effectively? To prevent erroneous input, Excel provides a simple but effective feature: Data Validation. Today, let this little office helper teach you the trick!
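The same rule can also be set up from a script; here is a short sketch using the openpyxl library (the file name and cell range are arbitrary examples).

```python
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

# Scripted equivalent of Excel's Data Validation dialog.
wb = Workbook()
ws = wb.active

# Only whole numbers between 0 and 100 may be entered in B2:B100.
dv = DataValidation(type="whole", operator="between",
                    formula1="0", formula2="100", allow_blank=True)
dv.errorTitle = "Invalid entry"
dv.error = "Please enter a whole number between 0 and 100."
ws.add_data_validation(dv)
dv.add("B2:B100")
wb.save("validated.xlsx")
```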

5.
A Bayesian face recognition method based on data fusion (cited 2 times in total: 0 self-citations, 2 by others)
This paper analyses the stability of the detail coefficients of antisymmetric biorthogonal wavelet decomposition under varying illumination conditions, and proposes a Bayesian face recognition method based on data fusion. Comparative experiments on the AR face image database verify the effectiveness of the proposed method.
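A rough Python sketch of this kind of pipeline, with a standard biorthogonal wavelet and a naive Bayes classifier standing in for the paper's specific wavelet, fusion rule, and AR-database protocol:

```python
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def detail_features(img):
    # Detail coefficients of one biorthogonal decomposition level serve as
    # illumination-robust features (wavelet choice here is an assumption).
    _, (ch, cv, cd) = pywt.dwt2(img, "bior1.3")
    return np.concatenate([ch.ravel(), cv.ravel(), cd.ravel()])

rng = np.random.default_rng(0)
faces = rng.random((20, 16, 16))             # stand-in face images
labels = np.repeat(np.arange(4), 5)          # 4 subjects, 5 images each

X = np.array([detail_features(f) for f in faces])
clf = GaussianNB().fit(X, labels)            # (naive) Bayesian classifier
print("predicted subject:", clf.predict(X[:1])[0])
```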

6.
Integrating heterogeneous enterprise data streams based on Oracle (cited 2 times in total: 0 self-citations, 2 by others)
Against the background of the informatisation effort of a manufacturing enterprise, and targeting one bottleneck of enterprise informatisation, the integration of heterogeneous data, this paper studies a low-cost data migration method, designs a new special-purpose ETL extraction tool, and presents the key techniques for the enterprise to eventually build a data warehouse. Taking the migration of data from a MySQL database to an Oracle database as an example, the effectiveness of the method and tool is successfully verified.
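A bare-bones sketch of such an extract-and-load step is shown below; it assumes the mysql-connector-python and python-oracledb drivers, and all connection parameters, table names, and columns are placeholders.

```python
import mysql.connector
import oracledb

BATCH = 1000

src = mysql.connector.connect(host="mysql-host", user="etl",
                              password="***", database="erp")
dst = oracledb.connect(user="etl", password="***", dsn="oracle-host/orclpdb")

read = src.cursor()
read.execute("SELECT part_no, qty, updated_at FROM stock")
write = dst.cursor()
while True:
    rows = read.fetchmany(BATCH)                 # extract in batches
    if not rows:
        break
    write.executemany(                           # load into Oracle
        "INSERT INTO stock (part_no, qty, updated_at) VALUES (:1, :2, :3)",
        rows)
dst.commit()
src.close(); dst.close()
```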

7.
The diversity of data types in official documents complicates data-validity checking, which has an ever greater impact on information exchange. To address this problem, a new generic data-validation template is proposed; it adopts a new validation mechanism that combines XML, XPath, Java, and DOM to verify data validity.
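A condensed Python analogue of the mechanism (the paper works in Java) might express each rule as an XPath assertion over the document; the document and rules below are invented examples.

```python
from lxml import etree

doc = etree.fromstring(
    "<form><id>A12</id><date>2024-13-40</date></form>")

# Each rule is an XPath expression that must evaluate to true.
rules = [
    ("string-length(/form/id) = 3", "id must be 3 characters"),
    ("number(substring(/form/date, 6, 2)) <= 12", "month must be <= 12"),
]

for xpath, message in rules:
    if not doc.xpath(xpath):          # these XPath expressions return booleans
        print("invalid:", message)
```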

8.
Feature selection for support vector machines based on genetic algorithms (cited 1 time in total: 0 self-citations, 1 by others)
许建强  李高平 《计算机工程》2004,30(24):1-2,182
This paper proposes a feature-selection method for support vector machines (SVM) in which the selected feature vectors minimise a bound on the SVM's generalisation ability, and designs an efficient genetic algorithm to implement the method. Experimental results on simulated data and on recognition problems such as ECG signals verify the effectiveness of the method.
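A toy version of GA-driven feature selection is sketched below; as a simple stand-in for the paper's generalisation bound, the fitness function uses cross-validated accuracy, and all population sizes and rates are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=120, n_features=20,
                           n_informative=5, random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

pop = rng.random((12, X.shape[1])) < 0.5               # random bit masks
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-6:]]             # truncation selection
    cut = rng.integers(1, X.shape[1], size=6)
    kids = np.array([np.r_[parents[i][:c], parents[(i + 1) % 6][c:]]
                     for i, c in enumerate(cut)])      # one-point crossover
    kids ^= rng.random(kids.shape) < 0.02              # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```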

9.
Using data-mining techniques to process electric power enterprise data makes the processing simpler and more effective. This paper analyses existing data-preprocessing techniques, studies Z-score standardisation and the FCM clustering algorithm for data preprocessing, designs a new preprocessing workflow, and uses electricity-marketing data to verify the effectiveness of the preprocessing.
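The two steps can be sketched directly in NumPy; the cluster count and fuzzifier below are arbitrary choices, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

X = (X - X.mean(axis=0)) / X.std(axis=0)          # Z-score standardisation

# Fuzzy c-means (FCM): alternate between weighted centroids and
# membership updates until the membership matrix stabilises.
c, m, n = 2, 2.0, len(X)
U = rng.random((n, c)); U /= U.sum(axis=1, keepdims=True)
for _ in range(100):
    W = U ** m
    centers = (W.T @ X) / W.sum(axis=0)[:, None]
    d = np.linalg.norm(X[:, None] - centers, axis=2) + 1e-9
    U_new = 1.0 / (d ** (2 / (m - 1)) *
                   (1.0 / d ** (2 / (m - 1))).sum(axis=1, keepdims=True))
    if np.abs(U_new - U).max() < 1e-5:
        break
    U = U_new

print("cluster sizes:", np.bincount(U.argmax(axis=1)))
```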

10.
刘红霞  谭璐  吴翊 《计算机工程》2006,32(24):195-197
A single image is partitioned to obtain a high-dimensional data set; the numerical correlations between different image blocks are then extracted according to an optimal decomposition of the image data, and multidimensional scaling (MDS) is used to obtain a low-dimensional representation of the different blocks of the single image. Automatic analysis of this low-dimensional representation yields the precise position of the region of interest in the image. Examples verify the feasibility and effectiveness of the method.
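A simplified sketch of the block-to-embedding pipeline, with plain Euclidean block distances standing in for the paper's optimal decomposition:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
img = rng.random((64, 64))
B = 16                                               # block size
blocks = np.array([img[i:i + B, j:j + B].ravel()
                   for i in range(0, 64, B) for j in range(0, 64, B)])

# Pairwise block dissimilarities, then a 2-D MDS embedding of the blocks.
D = np.linalg.norm(blocks[:, None] - blocks[None], axis=2)
low = MDS(n_components=2, dissimilarity="precomputed",
          random_state=0).fit_transform(D)

# Outlying points in the embedding mark blocks that differ most from the
# rest, i.e. candidate regions of interest.
center = low.mean(axis=0)
print("most atypical block:",
      np.argmax(np.linalg.norm(low - center, axis=1)))
```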

11.
Research and implementation of an aspect-oriented data validation component (cited 1 time in total: 0 self-citations, 1 by others)
Traditional data-validation approaches leave validation code tangled throughout an application, greatly reducing the software's maintainability and reusability. Aspect-oriented programming (AOP) can separate the "cross-cutting concerns" of an application from the "vertical concerns" and encapsulate them in a reusable module, and inversion of control (IoC) then provides loose coupling between the data-validation logic and other business logic. On this basis, a server-side data-validation component, All4Validate, is built and integrated in a low-intrusion way into component products of an existing J2EE/EJB development process, effectively overcoming the drawbacks of traditional data-validation approaches and greatly improving software development efficiency.
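In Python the same separation can be illustrated with a decorator playing the role of the validation aspect; the rules and the business method below are invented and do not reflect All4Validate's actual API.

```python
import functools

def validate(**rules):
    """Cross-cutting validation 'aspect' applied declaratively."""
    def aspect(fn):
        @functools.wraps(fn)
        def wrapper(**kwargs):
            for arg, check in rules.items():
                if not check(kwargs[arg]):
                    raise ValueError(f"invalid argument: {arg}")
            return fn(**kwargs)              # proceed to business logic
        return wrapper
    return aspect

@validate(amount=lambda a: a > 0, account=lambda s: s.isdigit())
def transfer(account, amount):
    # Business logic stays free of validation clutter.
    return f"transferred {amount} to {account}"

print(transfer(account="12345", amount=100))     # ok
# transfer(account="x", amount=100)              # would raise ValueError
```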

12.
Reference polygons are homogeneous areas that aim to provide the best available assessment of ground condition that the user can identify. Delineation of such polygons provides a convenient and efficient approach for researchers to identify training and validation data for supervised classification. However, the spatial dependence of training and validation data should be taken into account when the two data sets are obtained from a common set of reference polygons. We investigate the effect on classification accuracy and the accuracy estimates derived from the validation data when training and validation data are obtained from four selection schemes. The four schemes are composed of two sampling designs (simple random and systematic) and two methods for splitting sample points between training and validation (validation points in separate polygons from training points, or validation and training points split within the same polygons). A supervised object-based classification of the study region was repeated 30 times for each selection scheme. The selection scheme did not impact classification accuracy, but estimates of overall (OA), user's (UA), and producer's (PA) accuracies produced from the validation data overestimated accuracy for the study region by about 10%. The degree of overestimation was slightly greater when the validation sample points were allowed to be in the same polygons as the training data points. These results suggest that accuracy estimates derived from splitting training and validation within a limited set of reference polygons should be regarded with suspicion. To be fully confident in the validity of the accuracy estimates, additional validation sample points selected from the region outside the reference polygons may be needed to augment the validation sample selected from the reference polygons.
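The "separate polygons" scheme corresponds to a grouped split; a short sketch with scikit-learn's GroupShuffleSplit (on synthetic placeholder data) shows how it guarantees polygon-disjoint training and validation sets.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n = 200
polygon_id = rng.integers(0, 25, n)     # reference polygon of each point
X = rng.random((n, 4))                  # spectral features (placeholder)
y = rng.integers(0, 3, n)               # land cover class

# Grouping by polygon id keeps each polygon entirely on one side of the
# split, so no polygon feeds both training and validation.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, val_idx = next(splitter.split(X, y, groups=polygon_id))

assert not set(polygon_id[train_idx]) & set(polygon_id[val_idx])
print(f"{len(train_idx)} training and {len(val_idx)} validation points "
      "drawn from disjoint polygons")
```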

13.
A multiversion optimistic concurrency control (MVOCC) protocol is proposed for processing mobile real-time nested transactions in mobile computing environments. The protocol eliminates conflicts between read-only transactions and update transactions, and avoids unnecessary transaction restarts by dynamically adjusting the serialisation order of transactions. Read-only transactions are processed on the mobile host, so their response time improves considerably. Transaction validation is performed at two levels: local validation and global validation. Local validation is carried out on the mobile host; transactions that pass local validation are submitted to the server for global validation. Detecting data conflicts this early saves processing and communication resources. The protocol's performance was tested by simulation and compared with the OCC-TI-WAIT50 and HP2PL protocols. The experimental results show that the protocol outperforms the others: it not only effectively reduces the transaction restart rate and the missed-deadline rate, but also improves the response time of read-only transactions.
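A drastically simplified sketch of the two-level validation idea follows; the version bookkeeping is invented for illustration and omits nesting, timestamps, and the protocol's serialisation-order adjustment.

```python
# Level 1 checks against versions cached on the mobile host; only
# transactions that pass are sent uplink for the global (server) check.

local_versions = {"x": 3, "y": 7}      # versions cached on the mobile host
server_versions = {"x": 4, "y": 7}     # authoritative versions at the server

def validate(read_set, versions):
    """A transaction is valid if every item it read is still current."""
    return all(versions[item] == v for item, v in read_set.items())

txn_read_set = {"x": 3, "y": 7}        # versions observed by the transaction

if validate(txn_read_set, local_versions):          # level 1: mobile host
    if validate(txn_read_set, server_versions):     # level 2: server
        print("commit")
    else:
        print("restart: conflict detected at global validation")
else:
    print("restart: conflict detected locally, no uplink used")
```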

14.
Large area land cover products generated from remotely sensed data are difficult to validate in a timely and cost-effective manner. As a result, pre-existing data are often used for validation. Temporal, spatial, and attribute differences between the land cover product and pre-existing validation data can result in inconclusive depictions of map accuracy. This approach may therefore misrepresent the true accuracy of the land cover product, as well as the accuracy of the validation data, which is not assumed to be without error. Hence, purpose-acquired validation data is preferred; however, logistical constraints often preclude its use, especially for large area land cover products. Airborne digital video provides a cost-effective tool for collecting purpose-acquired validation data over large areas. An operational trial was conducted, involving the collection of airborne video for the validation of a 31,000 km2 sub-sample of the Canadian large area Earth Observation for Sustainable Development of Forests (EOSD) land cover map (Vancouver Island, British Columbia, Canada). In this trial, one form of agreement between the EOSD product and the airborne video data was defined as a match between the mode land cover class of a 3 by 3 pixel neighbourhood surrounding the sample pixel and the primary or secondary choice of land cover for the interpreted video. This scenario produced the highest level of overall accuracy at 77% for level 4 of the classification hierarchy (13 classes). The coniferous treed class, which represented 71% of Vancouver Island, had an estimated user's accuracy of 86%. Purpose-acquired video was found to be a useful and cost-effective data source for validation of the EOSD land cover product. The impact of using multiple interpreters was also tested and documented. Improvements to the sampling and response designs that emerged from this trial will benefit a full-scale accuracy assessment of the EOSD product and also provide insights for other regional and global land cover mapping programs.
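The best-performing agreement rule is easy to state in code; here is a sketch with synthetic arrays in place of the EOSD raster and the interpreted video calls.

```python
import numpy as np

rng = np.random.default_rng(0)
land_cover = rng.integers(0, 5, (100, 100))       # EOSD-style class raster

def mode3x3(arr, r, c):
    """Mode land cover class of the 3x3 neighbourhood around (r, c)."""
    window = arr[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2].ravel()
    return np.bincount(window).argmax()

samples = [(10, 12), (50, 50), (80, 3)]           # sample pixel locations
primary = [2, 0, 4]                               # video interpreter's calls
secondary = [1, 3, 0]

# Agreement: the 3x3 mode matches the primary OR secondary video class.
agree = [mode3x3(land_cover, r, c) in (p, s)
         for (r, c), p, s in zip(samples, primary, secondary)]
print(f"agreement: {sum(agree)}/{len(agree)}")
```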

15.
Artificial surfaces represent one of the key land cover types, and validation is an indispensable component of land cover mapping that ensures data quality. Traditionally, validation has been carried out by comparing the produced land cover map with reference data, which is collected through field surveys or image interpretation. However, this approach has limitations, including high costs in terms of money and time. Recently, geo-tagged photos from social media have been used as reference data. This procedure has lower costs, but the process of interpreting geo-tagged photos is still time-consuming. In fact, social media point of interest (POI) data, including geo-tagged photos, may contain useful textual information for land cover validation. However, this kind of special textual data has seldom been analysed or used to support land cover validation. This paper examines the potential of textual information from social media POIs as a new reference source to assist in artificial surface validation without photo recognition and proposes a validation framework using modified decision trees. First, POI datasets are classified semantically to divide POIs into the standard taxonomy of land cover maps. Then, a decision tree model is built and trained to classify POIs automatically. To eliminate the effects of spatial heterogeneity on POI classification, the shortest distances between each POI and both roads and villages serve as two factors in the modified decision tree model. Finally, a data transformation based on a majority vote algorithm is then performed to convert the classified points into raster form for the purposes of applying confusion matrix methods to the land cover map. Using Beijing as a study area, social media POIs from Sina Weibo were collected to validate artificial surfaces in GlobeLand30 in 2010. A classification accuracy of 80.68% was achieved through our modified decision tree method. Compared with a classification method without spatial heterogeneity, the accuracy is 10% greater. This result indicates that our modified decision tree method displays considerable skill in classifying POIs with high spatial heterogeneity. In addition, a high validation accuracy of 92.76% was achieved, which is relatively close to the official result of 86.7%. These preliminary results indicate that social media POI datasets are valuable ancillary data for land cover validation, and our proposed validation framework provides opportunities for land cover validation with low costs in terms of money and time.
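A skeletal version of the modified decision tree is sketched below; the text-category encoding, distances, and labels are fabricated placeholders that merely exercise the two spatial factors the paper adds.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 300
text_category = rng.integers(0, 10, n)      # encoded POI text category
dist_road = rng.exponential(200, n)         # metres to nearest road
dist_village = rng.exponential(500, n)      # metres to nearest village

# Fabricated rule for the toy labels: POIs near roads and far from
# villages tend to sit on artificial surfaces.
is_artificial = ((dist_road < 150) & (dist_village > 300)).astype(int)

# The two shortest-distance factors join the textual feature to counter
# spatial heterogeneity, as in the paper's modified decision tree.
X = np.column_stack([text_category, dist_road, dist_village])
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, is_artificial)
print("training accuracy:", clf.score(X, is_artificial).round(3))
```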

16.
Simulation model validation has become an integral part of simulation research. To better assess the credibility of the flight simulation models of flight training simulators and to improve their fidelity, flight data recorder parameters are applied to validating the credibility of a flight training simulator's flight simulation model. A validation scheme for flight simulation models is proposed, and the techniques it involves, including flight-parameter decoding, digital filtering of flight parameters, flight-phase identification, and interpolation of flight-parameter data, are studied. Quantitative evaluation methods for simulation model validation, including time-domain and frequency-domain analysis, are introduced and applied to validating the flight simulation model of a particular type of flight training simulator. The validation results show that using flight-parameter data to validate flight simulation models is entirely feasible.
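The quantitative comparison step might look like the following sketch, which pairs a common time-domain measure (Theil's inequality coefficient) with a frequency-domain check on FFT magnitudes; the signals are synthetic stand-ins for a recorded flight parameter and its simulated counterpart.

```python
import numpy as np

t = np.linspace(0, 10, 1000)
recorded = np.sin(2 * np.pi * 0.5 * t)                    # FDR pitch angle
simulated = np.sin(2 * np.pi * 0.5 * t + 0.05) * 1.02     # model output

# Time domain: Theil's U, 0 = perfect agreement, 1 = no agreement.
u = (np.sqrt(np.mean((simulated - recorded) ** 2)) /
     (np.sqrt(np.mean(simulated ** 2)) + np.sqrt(np.mean(recorded ** 2))))

# Frequency domain: relative error of the spectral magnitudes.
R, S = np.abs(np.fft.rfft(recorded)), np.abs(np.fft.rfft(simulated))
spec_err = np.abs(S - R).max() / R.max()

print(f"Theil's U = {u:.4f}, peak spectral error = {spec_err:.4f}")
```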

17.
To improve intelligent vehicles' adaptability to real multi-lane driving environments, a vehicle detection method based on a three-lane model is proposed. After preprocessing, the method obtains candidate lane-line information using a Hough transform constrained by polar angle and position, and screens the lane lines using the vanishing point; the lane lines are then matched against a three-lane, four-line model. For each lane, vehicles within the lane lines are recognised using grey-level information, and the temporal continuity of the video is used to correct the recognition results and track the vehicles. By screening the lane lines twice, the algorithm raises the accuracy of the three-lane model and further improves the recognition rate for vehicles in different lanes. Experimental results show that on structured roads the algorithm achieves good real-time performance and robustness under various road conditions.
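The candidate stage can be sketched with OpenCV; the thresholds, the angle window, and the input image below are placeholders, and the vanishing-point screening is only noted in a comment.

```python
import cv2
import numpy as np

frame = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder input
edges = cv2.Canny(frame, 50, 150)

# Probabilistic Hough transform yields line segment candidates.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=60, maxLineGap=10)

candidates = []
for x1, y1, x2, y2 in (lines[:, 0] if lines is not None else []):
    angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
    if 20 < angle < 70:            # reject near-horizontal/vertical lines
        candidates.append((x1, y1, x2, y2))

print(f"{len(candidates)} lane line candidates after the angle constraint")
# A further screening step would keep only lines converging on a common
# vanishing point, as the paper describes.
```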

18.
Constructing a test environment for ARINC429 bus communication software (cited 3 times in total: 0 self-citations, 3 by others)
The ARINC429 bus is the digital bus transmission standard for avionics equipment and is currently widely used in all kinds of avionics devices; its efficient, convenient, and reliable data transmission is an important precondition for resource sharing and information transfer between devices. Using function-based testing methods and techniques, this paper discusses the construction of an equivalent test environment for ARINC429 communication software and presents a concrete way of building a validation-test environment for the software under test. The test environment simulates the actual operating environment of the target software and completes ARINC429 bus communication validation testing without support from other system hardware, ensuring reliable data transmission on the ARINC429 bus.
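One building block of such a harness is generating well-formed bus words; the sketch below encodes a 32-bit ARINC429 word using the commonly described field layout (label, SDI, data, SSM, odd parity); treat the layout as an assumption to be checked against the specification before relying on it.

```python
# Assumed layout: bits 1-8 label, 9-10 SDI, 11-29 data, 30-31 SSM,
# bit 32 parity (odd over the whole word). Verify against the spec.

def arinc429_word(label: int, sdi: int, data: int, ssm: int) -> int:
    assert 0 <= label < 256 and 0 <= sdi < 4
    assert 0 <= data < 2 ** 19 and 0 <= ssm < 4
    word = label | (sdi << 8) | (data << 10) | (ssm << 29)
    needs_bit = bin(word).count("1") % 2 == 0    # make total 1-bits odd
    return word | (int(needs_bit) << 31)

w = arinc429_word(label=0o205, sdi=0, data=0x1234, ssm=3)
assert bin(w).count("1") % 2 == 1                # odd parity holds
print(f"encoded word: 0x{w:08X}")
```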

19.
Random generation of data sets is a vital step in simulation modeling. It involves generating the variation associated with real system behavior. In the industrial fabrication of construction components, unique products such as pipelines are produced. The fabrication processes depend on pipeline features and complexity, so randomly generating pipeline structures is imperative in the simulation of such processes. This paper investigates the nature of industrial pipelines and proposes a Markov chain model to randomly generate pipeline data structures. The performance of the Markov chain model was tested against real pipelines through a three-stage validation process. The validation process includes (1) validation based on the number of components and a correlation analysis of pipeline components, (2) clustering-based model validation, and (3) model validation using similarity distances between pipeline feature vectors. The Markov chain model was found to generate reasonable pipeline data structures when compared with real pipelines: 89% of the generated pipelines have a similarity of 0.88 (on a scale from 0, not identical, to 1, identical) to 85.5% of the original pipelines.
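The core generative step is a walk over a component-transition matrix; here is a minimal sketch with invented states and probabilities, not fitted to real pipeline data.

```python
import numpy as np

states = ["pipe", "elbow", "tee", "flange", "END"]
P = np.array([            # rows: current component, cols: next component
    [0.40, 0.25, 0.10, 0.15, 0.10],   # pipe
    [0.70, 0.05, 0.05, 0.10, 0.10],   # elbow
    [0.60, 0.10, 0.05, 0.15, 0.10],   # tee
    [0.50, 0.10, 0.05, 0.05, 0.30],   # flange
])

rng = np.random.default_rng(42)

def generate_pipeline(start="pipe", max_len=30):
    """Sample a component sequence until the absorbing END state."""
    seq, current = [start], start
    while current != "END" and len(seq) < max_len:
        current = rng.choice(states, p=P[states.index(current)])
        seq.append(current)
    return seq

print(generate_pipeline())
```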
