Similar Documents
20 similar documents retrieved.
1.
The Gansu Science and Technology Literature Sharing Platform comprises five major systems: a full-text database retrieval and publishing system, a unified retrieval system for heterogeneous digital resources, a document delivery system, a user management and billing system, and a statistical analysis system. This paper discusses the platform's application architecture and technical architecture and describes its key technologies: unified retrieval, Web 2.0, Web services, and data security. Operational experience shows that the platform integrates 173 resource databases and delivers a "one-stop" literature service, raising the degree of integration of literature resources. This in turn improves the service quality, management, and market competitiveness of literature and information institutions, while reducing duplicate investment in literature resources and duplicate development of databases with identical content.

2.
Women's Studies
《程序员》2000,(3)
The Women's Studies Database at the University of Maryland was established in September 1992 to serve professionals engaged in women's studies research and publishing. The site collects related bibliographies, newspapers, films, and forums, as well as information from related disciplines such as history and politics.

3.
陈萍  张涛  赵敏  袁志坚  杨兰娟 《计算机科学》2013,40(11):140-142,146
Database as a Service (DBaaS) is a research hotspot in cloud computing, and hosted data applications are currently an important DBaaS application area. To protect the privacy of hosted data, this paper proposes a multi-tenant, multi-replica data hosting method built on virtual machines and the CryptDB system, together with a corresponding DBaaS system. The system isolates hosted data, stores it in encrypted form, and can execute SQL queries over the encrypted data. Experiments show that, compared with fully homomorphic encryption systems, the system incurs low performance overhead and offers a good balance between privacy protection and practicality.
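As a rough, self-contained Python sketch of the general idea behind running SQL over encrypted data (CryptDB-style equality predicates), the toy below uses HMAC as a stand-in for a deterministic cipher and an in-memory SQLite table. The key, table, and column names are illustrative assumptions, not the paper's actual system.

    # Toy illustration (not CryptDB itself): equality queries over encrypted values.
    # A deterministic transform (HMAC-SHA256 standing in for a deterministic
    # cipher) lets the database match a WHERE clause on ciphertext without ever
    # seeing the plaintext. Key, table, and column names are made up.
    import hashlib, hmac, sqlite3

    KEY = b"tenant-secret-key"

    def det_encrypt(value: str) -> str:
        # deterministic: equal plaintexts give equal ciphertexts, so '=' still works
        return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE tenant_data (name_enc TEXT, city_enc TEXT)")
    for name, city in [("alice", "lanzhou"), ("bob", "xian")]:
        db.execute("INSERT INTO tenant_data VALUES (?, ?)",
                   (det_encrypt(name), det_encrypt(city)))

    # the client encrypts the query constant; the server only compares ciphertexts
    rows = db.execute("SELECT name_enc FROM tenant_data WHERE city_enc = ?",
                      (det_encrypt("lanzhou"),)).fetchall()
    print(rows)  # one matching row, still encrypted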

4.
A Solr-Based Visualization and Analysis System for Large-Scale Standards Literature
The National Standards Library is the only national-level institution for collecting standards and has built large-scale bibliographic and full-text databases of standards literature. Faced with such massive data resources, however, standards researchers without a computer science background find it difficult to gain a comprehensive understanding of the data, and traditional research methods cannot present statistics in real time or intuitively. To address these problems, we developed a large-scale visualization and analysis system for standards literature, designing and implementing freely customizable statistical functions as well as association analysis of the drafters and drafting organizations of standards. The system gives researchers in the standards field an efficient and convenient visual analysis tool: by customizing the statistics, researchers can obtain the data they need, greatly improving the efficiency of analyzing standards literature resources.

5.
A Preliminary Study of Searching the VIP Chinese Science and Technology Journals Full-Text Database
周宏图 《福建电脑》2002,(10):18-18,20
The Chinese Science and Technology Journals Full-Text Database (the "VIP full-text journal database" for short), developed by Chongqing Weipu (VIP) Consulting Co., Ltd., is currently the largest full-text database in China. It indexes nearly 5,600 Chinese science and technology journals published since 1989 and is a comprehensive bibliographic database whose coverage spans the natural sciences, all fields of engineering and technology, and parts of the social sciences, including economics, culture and education, and library and information science. Each record contains the title, authors, affiliations, journal name (source), ISSN, CN number, keywords, classification number, and the full text or abstract. The VIP database's advantage is that it provides both bibliographic records and full text, a combined service that other databases cannot match.

6.
Construction of the Astronomical Literature and Information Resource Sharing Network
This paper describes the construction of the literature and information resource sharing network of the Chinese Academy of Sciences astronomy system (five observatories and one astronomical instruments factory), including its experimental network and the construction and application of the literature and information local area network at Shaanxi Astronomical Observatory.

7.
王凤领  陈荣耀  张剑飞  邢婷  王知强 《计算机工程》2012,38(1):291-292,F0003
Based on an architecture that combines a geographic information system (GIS) with an expert system (ES), a highway ecological landscape evaluation system is constructed. The system uses modular program design and consists of a human-machine interface, an integrated database, a spatial database, a knowledge base, an inference engine, and knowledge acquisition and explanation modules. An intelligent control interface and heuristic reasoning are used to strengthen and extend the GIS functionality, and expert knowledge is used to provide scientific decision support and consultation. Application results verify the effectiveness of the system.

8.
Design and Implementation of a Multi-Strategy Database Destruction System
This paper designs and implements a multi-strategy database destruction system. The system offers multiple destruction strategies and can quickly destroy critical information in a database along with its associated temporary files and sensitive data. It also provides disk destruction, thoroughly erasing all data on a disk, as well as a database self-destruct capability that can decide, based on network conditions, to complete the destruction automatically without human intervention, preventing data loss to attackers and ensuring data security.
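As an illustration of one destruction strategy of the kind described above, the Python sketch below overwrites a database file with random bytes in several passes before deleting it. It is a toy under stated assumptions, not the paper's system, which additionally covers whole-disk wiping and network-triggered self-destruction.

    # Multi-pass overwrite of a database file before unlinking it; one possible
    # destruction strategy, illustrative only. On SSDs, wear leveling means an
    # in-place overwrite is not a reliable erasure method.
    import os, secrets

    def shred_file(path: str, passes: int = 3, chunk: int = 1 << 20) -> None:
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                remaining = size
                while remaining > 0:
                    n = min(chunk, remaining)
                    f.write(secrets.token_bytes(n))  # overwrite with random bytes
                    remaining -= n
                f.flush()
                os.fsync(f.fileno())                 # force the pass onto disk
        os.remove(path)

    # example call: shred_file("/tmp/hosted.db")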

9.
《计算机仿真》2012,(3):405
Dear Editor-in-Chief: We hereby formally notify you that, on the basis of bibliometric principles and methods, of researchers' retrieval, statistics, and analysis of the relevant literature, and of review by subject experts, your journal Computer Simulation (《计算机仿真》) has been included in the 2011 edition (the sixth edition) of 《中文核心期刊要目总览》 (A Guide to the Core Journals of China), under the automation technology…

10.
To standardize tax inspection and ensure the quality and efficiency of tax inspection work, a knowledge base represented by production rules is applied to the problems of characterizing violations and determining penalties in a tax inspection system. The paper describes the design of the whole expert database system and the algorithm of its control system. The system can also be used in other, similar expert systems and has a degree of generality; with modification, it can serve as a skeleton expert database for similar systems.
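A hedged sketch of the kind of production-rule inference the abstract refers to: a few IF-THEN rules fired by forward chaining until no new facts can be derived. The rules and facts below are invented examples, not the paper's tax-inspection knowledge base.

    # Forward chaining over production rules (IF conditions THEN conclusion).
    # Rules and facts are invented examples, not the paper's knowledge base.
    RULES = [
        ({"underreported_income", "intent_shown"}, "tax_evasion"),
        ({"tax_evasion"}, "penalty_applies"),
        ({"late_filing"}, "late_fee"),
    ]

    def infer(facts: set[str]) -> set[str]:
        derived = set(facts)
        changed = True
        while changed:                 # keep firing rules until a fixed point
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(infer({"underreported_income", "intent_shown"}))
    # contains: tax_evasion, penalty_applies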

11.
12.
13.
We show how to regard covered logic programs as cellular automata. Covered logic programs are ones for which every variable occurring in the body of a given clause also occurs in the head of the same clause. We generalize the class of register machine programs to permit negative literals and characterize the members of this class of programs as n-state 2-dimensional cellular automata. We show how monadic covered programs, the class of which is computationally universal, can be regarded as 1-dimensional cellular automata. We show how to continuously (and differentiably) deform 1-dimensional cellular automata from one to another and understand the arrangement of these cellular automata in a separable Hilbert space over the real numbers. The embedding of the cellular automata of fixed radius r is a linear mapping into R^(2^(2r+1)) in which a cellular automaton's transition function is the attractor of a state-governed iterated function system of affine contraction mappings. The class of covered monadic programs having a particular fixed point has a uniform arrangement in an affine subspace of the Hilbert space ℓ2. Furthermore, these programs are construable as almost everywhere continuous functions from the unit interval {x | 0 ≤ x ≤ 1} to the real numbers R. As one consequence, in particular, we can define a variety of natural metrics on the class of these programs. Moreover, for each program in this class, the set of initial segments of the program's fixed points, with respect to an ordering induced by the program's dependency relation, is a regular set.
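For readers unfamiliar with the objects involved, here is a small Python sketch of a 1-dimensional binary cellular automaton of radius r; its transition function is a lookup table with 2^(2r+1) entries, which is where the embedding dimension above comes from. The example rule (elementary rule 110) is illustrative and unrelated to the paper's logic-program construction.

    # A 1-dimensional binary cellular automaton of radius r. Its transition
    # function is a table with 2**(2*r + 1) entries, hence the embedding
    # dimension mentioned above. Rule 110 is used purely as an example.
    def step(cells, rule, r=1):
        n = len(cells)
        return [rule[tuple(cells[(i + d) % n] for d in range(-r, r + 1))]
                for i in range(n)]                      # wrap-around boundary

    rule110 = {
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }
    row = [0] * 10 + [1] + [0] * 10
    for _ in range(5):
        print("".join(map(str, row)))
        row = step(row, rule110)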

14.
A Multistrategy Approach to Relational Knowledge Discovery in Databases
When learning from very large databases, the reduction of complexity is extremely important. Two extremes of making knowledge discovery in databases (KDD) feasible have been put forward. One extreme is to choose a very simple hypothesis language, thereby being capable of very fast learning on real-world databases. The opposite extreme is to select a small data set, thereby being able to learn very expressive (first-order logic) hypotheses. A multistrategy approach allows one to include most of these advantages and exclude most of the disadvantages. Simpler learning algorithms detect hierarchies which are used to structure the hypothesis space for a more complex learning algorithm. The better structured the hypothesis space is, the better learning can prune away uninteresting or losing hypotheses and the faster it becomes. We have combined inductive logic programming (ILP) directly with a relational database management system. The ILP algorithm is controlled in a model-driven way by the user and in a data-driven way by structures that are induced by three simple learning algorithms.

15.
A Drawing System for Collaborative Architectural Design
陈莉  刘弘  邵增珍 《计算机应用》2003,23(12):91-93
To meet the requirements that collaborative design in the architectural design industry places on drawing databases, this paper presents a design for a building design drawing system that supports collaboration and, in light of the characteristics of the industry, studies key issues such as the secure management and retrieval of drawings in the drawing library.

16.
We give an overview of correctness criteria specific to concurrent shared-memory programs and runtime verification techniques for verifying these criteria. We cover a spectrum of criteria, from ones focusing on low-level thread interference such as races to higher-level ones such as linearizability. We contrast these criteria in the context of runtime verification. We present the key ideas underlying the runtime verification techniques for these criteria and summarize the state of the art. Finally, we discuss the issue of coverage for runtime verification for concurrency and present techniques that improve the set of covered thread interleavings.

17.
The ubiquity of the World Wide Web offers an ideal opportunity for the deployment of highly distributed applications. Now that connectivity is no longer an issue, attention has turned to providing a middleware infrastructure that will sustain data sharing among Web-accessible databases. We present a dynamic architecture and system for describing, locating, and accessing data from Web-accessible databases. We propose the use of flexible organizational constructs, service links and coalitions, to facilitate data organization, discovery, and sharing among Internet-accessible databases. A language is also proposed to support the definition and manipulation of these constructs. The implementation combines Java, CORBA, database API (JDBC), agent, and database technologies to support a scalable and portable architecture interconnecting large networks of heterogeneous and autonomous databases. We report on an experiment to provide uniform access to a Web of healthcare-related databases.

18.
One of the approaches for integrating object-oriented programs with databases is to instantiate objects from relational databases by evaluating view queries. In that approach, it is often necessary to evaluate some joins of the query by left outer joins to prevent information loss caused by the tuples discarded by inner joins. It is also necessary to filter some relations with selection conditions to prevent the retrieval of unwanted nulls. The system should automatically prescribe joins as inner or left outer joins and generate the filters, rather than letting them be specified manually for every view definition. We develop such a mechanism in this paper. We first develop a rigorous system model to facilitate the mapping between an object-oriented model and the relational model. The system model provides a well-defined context for developing a simple mechanism. The mechanism requires only one piece of information from users: null options on an object attribute. The semantics of these options are mapped to non-null constraints on the query result. Then the system prescribes joins and generates filters accordingly. We also address reducing the number of left outer joins and the filters so that the query can be processed more efficiently.
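A hedged Python sketch of the idea: a per-attribute null option decides whether a join is emitted as a LEFT OUTER JOIN or an inner JOIN, and a non-null filter is added when nulls are not wanted. The table and column names are invented for illustration; this is not the paper's actual mechanism.

    # Map a per-attribute "null option" to the join type and an optional filter.
    # Table and column names are invented for illustration.
    def build_query(allow_null_dept: bool) -> str:
        join = "LEFT OUTER JOIN" if allow_null_dept else "JOIN"
        sql = ("SELECT e.name, d.dept_name FROM employee e "
               f"{join} department d ON e.dept_id = d.id")
        if not allow_null_dept:
            # the inner join already drops unmatched employees; the filter also
            # removes rows whose joined value is itself NULL
            sql += " WHERE d.dept_name IS NOT NULL"
        return sql

    print(build_query(allow_null_dept=True))   # keep employees with no department
    print(build_query(allow_null_dept=False))  # only employees with a department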

19.
This paper presents the results of handwritten digit recognition on well-known image databases using state-of-the-art feature extraction and classification techniques. The tested databases are CENPARMI, CEDAR, and MNIST. On the test data set of each database, 80 recognition accuracies are given by combining eight classifiers with ten feature vectors. The features include chaincode feature, gradient feature, profile structure feature, and peripheral direction contributivity. The gradient feature is extracted from either binary image or gray-scale image. The classifiers include the k-nearest neighbor classifier, three neural classifiers, a learning vector quantization classifier, a discriminative learning quadratic discriminant function (DLQDF) classifier, and two support vector classifiers (SVCs). All the classifiers and feature vectors give high recognition accuracies. Relatively, the chaincode feature and the gradient feature show advantage over other features, and the profile structure feature shows efficiency as a complementary feature. The SVC with RBF kernel (SVC-rbf) gives the highest accuracy in most cases but is extremely expensive in storage and computation. Among the non-SV classifiers, the polynomial classifier and DLQDF give the highest accuracies. The results of non-SV classifiers are competitive to the best ones previously reported on the same databases.
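As a point of reference for the simplest classifier in the comparison, the sketch below implements k-nearest-neighbor voting over feature vectors with NumPy. The feature vectors are random stand-ins; real chaincode or gradient features would be extracted from the digit images first.

    # k-nearest-neighbor voting over feature vectors (the simplest classifier in
    # the comparison above). Random vectors stand in for real chaincode/gradient
    # features extracted from digit images.
    import numpy as np

    def knn_predict(train_x, train_y, query, k=3):
        dists = np.linalg.norm(train_x - query, axis=1)   # Euclidean distances
        nearest = np.argsort(dists)[:k]
        labels, counts = np.unique(train_y[nearest], return_counts=True)
        return labels[np.argmax(counts)]                  # majority vote

    rng = np.random.default_rng(0)
    train_x = rng.normal(size=(100, 64))        # 100 samples, 64-dim features
    train_y = rng.integers(0, 10, size=100)     # digit labels 0..9
    print(knn_predict(train_x, train_y, rng.normal(size=64)))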

20.
Iris recognition has been demonstrated to be an efficient technology for personal identification. In this work, methods to perform iris encoding using bi-orthogonal wavelets and directional bi-orthogonal filters are proposed and compared. All the iris images are enhanced using the wavelet domain in-band de-noising method. This method is shown to improve the iris segmentation results. A framework to assess the iris image quality based on occlusion, contrast, focus and angular deformation is introduced and used as part of a novel adaptive matching technique based on the assessed iris image quality. Adaptive matching presents improved performance when compared against the Hamming distance method. Four different databases are used to analyze the system performance. The first two databases include popular CASIA and high resolution University of Bath databases. Results obtained for these databases compare with results from the literature, in terms of speed as well as accuracy. The other two databases have challenging off-angle (WVU database) and uncontrolled (Clarkson database) iris images and are used to assess the limits of system performance. Best results are achieved for directional bi-orthogonal filter based encoding technique combined with the adaptive matching method with EER values of 0.07%, 0.15%, 0.81% and 1.29% for the four databases, which reflect highly competent performance and high correlation with the quality of the iris images.
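A toy version of the Hamming-distance matching that the adaptive method above is compared against: the fraction of differing bits between two binary iris codes, counting only positions that both occlusion masks mark as valid. The codes below are synthetic; this is not the paper's encoding or quality-assessment pipeline.

    # Fractional Hamming distance between two binary iris codes, counting only
    # bit positions that both occlusion masks mark as valid. Synthetic data.
    import numpy as np

    def hamming_distance(code_a, code_b, mask_a, mask_b):
        usable = mask_a & mask_b                 # bits valid in both codes
        diff = (code_a ^ code_b) & usable
        return diff.sum() / max(int(usable.sum()), 1)

    rng = np.random.default_rng(1)
    a = rng.integers(0, 2, 2048, dtype=np.uint8)
    b = a.copy()
    b[:100] ^= 1                                 # flip 100 of 2048 bits
    m = np.ones(2048, dtype=np.uint8)
    print(hamming_distance(a, b, m, m))          # about 0.049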
