Similar Documents
20 similar documents found.
1.
Implementing full-text annotation and revision-trace retention for the body of official documents is a key part of many Web-based OA application systems. Building on an analysis and comparison of the trace-retention methods commonly used for document-body processing in existing B/S-mode OA systems, this paper combines ASP.NET, ActiveX, VBA, and WebDAV to propose a new WebDAV-based scheme for full-text annotation and trace retention under the B/S model, gives the implementation steps, and elaborates on the key technologies involved. The technique has been successfully applied to the dispatch workflow of a military agency's official-document circulation system, with good results.
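
For illustration, the WebDAV leg of such a scheme can be sketched in a few `requests` calls: lock the document, fetch it for annotation, and write it back under the lock. This is a minimal sketch only; the server URL, account, and document path are hypothetical, and the paper's actual client side drives Word through ActiveX/VBA inside the browser.

```python
# Minimal sketch of the WebDAV round-trip behind browser-side editing:
# LOCK the document, GET it for editing, PUT the revised version back.
# The server URL and credentials are hypothetical placeholders.
import requests

BASE = "https://oa.example.com/dav/docs/draft-001.doc"  # hypothetical WebDAV URL
AUTH = ("editor", "secret")

LOCK_BODY = """<?xml version="1.0" encoding="utf-8"?>
<D:lockinfo xmlns:D="DAV:">
  <D:lockscope><D:exclusive/></D:lockscope>
  <D:locktype><D:write/></D:locktype>
  <D:owner>editor</D:owner>
</D:lockinfo>"""

# 1. Take an exclusive write lock so concurrent reviewers cannot clobber edits.
resp = requests.request("LOCK", BASE, data=LOCK_BODY, auth=AUTH,
                        headers={"Content-Type": "application/xml"})
resp.raise_for_status()
token = resp.headers["Lock-Token"]  # e.g. "<opaquelocktoken:...>"

# 2. Fetch the current document body (Word opens it with revision marks on).
doc = requests.get(BASE, auth=AUTH).content

# ... the user annotates the document locally; Word records the traces ...

# 3. Write the annotated version back under the lock, then release the lock.
requests.put(BASE, data=doc, auth=AUTH,
             headers={"If": f"({token})"}).raise_for_status()
requests.request("UNLOCK", BASE, auth=AUTH,
                 headers={"Lock-Token": token}).raise_for_status()
```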

2.
With traditional trace-retention methods for electronic official documents, the retained traces easily become cluttered when users revise the text frequently. To address this, a trace-retention method based on text comparison is proposed. Its core is a longest-common-substring matching algorithm based on progressive character-by-character comparison; all common substrings of two texts are found through recursive invocation, and trace retention is implemented on that basis. Analysis and experimental results show that the method reflects the revision process and the user's editing intent fairly faithfully, and that it can compare texts of up to ten thousand characters quickly on an ordinary computer, making it suitable for trace retention in electronic document circulation.
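
The core step the abstract describes, recursively splitting two texts around their longest common substring, can be sketched as follows. The `<del>`/`<ins>` markup is an illustrative choice, not the paper's format.

```python
def longest_common_substring(a, b):
    """DP over character pairs; returns (start_a, start_b, length) of the
    longest run of characters shared by a and b (first one on ties)."""
    best = (0, 0, 0)
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best[2]:
                    best = (i - cur[j], j - cur[j], cur[j])
        prev = cur
    return best

def trace_diff(old, new):
    """Recursively split around each longest common substring, emitting
    <del>...</del> for removed text and <ins>...</ins> for added text."""
    ia, ib, n = longest_common_substring(old, new)
    if n == 0:  # nothing shared: the spans are one deletion plus one insertion
        out = ""
        if old:
            out += f"<del>{old}</del>"
        if new:
            out += f"<ins>{new}</ins>"
        return out
    return (trace_diff(old[:ia], new[:ib])
            + old[ia:ia + n]
            + trace_diff(old[ia + n:], new[ib + n:]))

print(trace_diff("拟同意该方案", "原则同意该方案并上报"))
# -> <del>拟</del><ins>原则</ins>同意该方案<ins>并上报</ins>
```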

3.
A Method for Implementing Handwritten Annotation in Word Documents
This paper discusses a handwritten annotation technique built on Word. After a user revises an electronic document with a stylus, the technique preserves the original traces of the revisions. It is mainly applied in the development of electronic official-document systems and lays a solid foundation for truly paperless offices.

4.
Design and Implementation of an E-Government OA System
田路, 陆国栋. 《计算机工程与设计》, 2005, 26(4): 1056-1058, 1061.
Addressing the actual needs of government agencies, a practical overall design and implementation scheme for a government office automation (OA) system is proposed. System-integration design principles are presented from the perspectives of network construction and functional design. Following these principles, the network foundation and backbone are configured appropriately, and protection techniques are proposed from the angles of host security, data security, and network security. System functions are partitioned according to user requirements, the choice of implementation platform is introduced, and several distinctive implementation techniques, such as document-flow control and trace retention, are presented.

5.
Document circulation is an important part of daily office work. To achieve paperless, automated offices, both the revision traces and the documents themselves must be preserved. This paper introduces two methods for solving this problem, developed with Chinese Lotus Notes/Domino 4.6, and points out the advantages and disadvantages of each.

6.
An Enterprise Office Automation System Based on Lotus Notes
The technical characteristics of Lotus Notes for office automation (OA) system development are briefly introduced. Drawing on the development of an enterprise OA system, the overall requirements, design scheme, and implementation of the system are described, and solutions and approaches are given for technical issues encountered during implementation, such as controlling document circulation, file revision and trace retention, data exchange with relational databases, and ensuring system security.

7.
As informatization deepens, the requirements on the security, authenticity, and timeliness of official documents keep rising. Requirements for handwritten signatures and electronic seals in particular have become stricter: electronic documents must be signed and sealed within the networked office system, with support for multi-party countersigning and for signatures and seals that are verifiable, certifiable, and non-repudiable. A document-management solution based on electronic-signature technology is therefore presented, enabling fully networked, paperless management of documents throughout their life cycle.
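
The signing and verification primitive behind such a scheme can be sketched with Ed25519 from the Python `cryptography` package. This shows only the cryptographic core that makes a signature verifiable and non-repudiable; certificates, electronic-seal imagery, and the paper's countersigning workflow are out of scope.

```python
# Sign-and-verify sketch for an electronic document. Key storage, certificate
# chains, and seal rendering are omitted; only the primitive is shown.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # in practice: loaded from a key store
public_key = private_key.public_key()        # published for verification

document = "关于××事项的请示(正文字节串)".encode("utf-8")
signature = private_key.sign(document)       # 64-byte detached signature

try:
    public_key.verify(signature, document)   # raises if document or sig was altered
    print("signature valid")
except InvalidSignature:
    print("document was tampered with")
```

Multi-party countersigning can then be modeled by having each approver sign the document bytes (or the previous signer's signature) with their own key, so every approval in the chain remains independently verifiable.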

8.
Motivated by the conflict between document circulation in office automation systems and handwriting tracking in traditional manual circulation, this paper presents three methods for tracking document revisions during circulation in an enterprise OA system built on Lotus Domino/Notes 5.03, and points out the advantages and disadvantages of each.

9.
To improve work efficiency and speed up document circulation, three key technologies in the document circulation and approval process of current office automation (OA) systems are studied: trace retention, authorization handling, and digital signatures. Based on the actual needs of Nantong University, an office system was designed for the university and the three technologies were applied in it, with good results.

10.
Research and Practice of a Government Office Automation System Based on Lotus Notes
The technical advantages of Lotus Domino/Notes for office automation (OA) system development are briefly introduced. Drawing on the development of the OA system for the XX government affairs service center, the design scheme and implementation of the system are described, and solutions and approaches are proposed for technical issues encountered during development, such as automatic ACL configuration, flexible workflow definition, precise control of document circulation, automatic document composition and publishing, and trace retention.

11.
While electronic documents are increasingly prevalent in the workplace there are many texts — such as books, magazines and letters — which are not easily available in an electronic form. Since many electronic document systems depend upon documents existing exclusively, or at least predominantly, in electronic form, this suggests an opportunity for document scanning technology. However, conventional scanners are limited by their large size and relatively cumbersome usage. Using a diary-based methodology, this study investigated the use of a new portable document scanning technology. In this paper we explore the need for document scanning, and how this portable device was used by our study participants. Document scanning is shown to be a goal-driven activity — individuals did not scan just to have an electronic version of a document, but to do something with electronic documents, in particular, distributing documents to others, archiving documents and reusing documents. The small design of this device also enabled a mode of usage distinct from that of conventional flatbed scanners. Its size meant that the device was a personal, rather than shared technology; that it could be easily stored when not being used; and that the scanner could be carried to the materials to be scanned, rather than the materials brought to the scanner. We discuss this interaction with the local environment as a case of local mobility — this is less to do with portability than with how a device's small size can make it fit better into work environments.

12.
13.
This correspondence presents a novel hierarchical clustering approach for knowledge document self-organization, particularly for patent analysis. Current keyword-based methodologies for document content management tend to be inconsistent and ineffective when partial meanings of the technical content are used for cluster analysis. Thus, a new methodology to automatically interpret and cluster knowledge documents using an ontology schema is presented. Moreover, a fuzzy logic control approach is used to match suitable document cluster(s) for given patents based on their derived ontological semantic webs. Finally, three case studies are used to test the approach. The first test case analyzed and clustered 100 patents for chemical and mechanical polishing retrieved from the World Intellectual Property Organization (WIPO). The second test case analyzed and clustered 100 patent news articles retrieved from online Web sites. The third case analyzed and clustered 100 patents for radio-frequency identification retrieved from WIPO. The results show that the fuzzy ontology-based document clustering approach outperforms the K-means approach in precision, recall, F-measure, and Shannon's entropy.
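
The final matching step, assigning a patent to clusters with degrees of membership rather than a hard label, can be sketched generically as below. The paper's features come from an ontology schema and a fuzzy logic controller; plain bag-of-words vectors and normalized cosine similarities stand in for them here, so all names and numbers are illustrative.

```python
# Generic soft cluster matching: cosine similarity against each cluster
# centroid, converted into fuzzy membership degrees that sum to 1.
import math

def cosine(u, v):
    terms = set(u) | set(v)
    dot = sum(u.get(t, 0.0) * v.get(t, 0.0) for t in terms)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def memberships(doc, centroids):
    """Normalize similarities so they behave like fuzzy membership degrees."""
    sims = {name: cosine(doc, c) for name, c in centroids.items()}
    total = sum(sims.values()) or 1.0
    return {name: s / total for name, s in sims.items()}

centroids = {  # hypothetical centroid vectors for two patent clusters
    "polishing": {"slurry": 0.9, "pad": 0.7, "wafer": 0.5},
    "rfid":      {"tag": 0.9, "antenna": 0.8, "reader": 0.6},
}
patent = {"wafer": 1.0, "slurry": 1.0, "tag": 0.2}
print(memberships(patent, centroids))  # dominated by the "polishing" cluster
```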

14.
To solve the problem of distributed dynamic setup and teardown of wavelength channels in wavelength-routed WDM optical networks, several distributed lightpath-establishment algorithms have been proposed. However, newly proposed algorithms, like the classic ones such as backward reservation protocol (BRP) and forward reservation protocol (FRP), each have inherent drawbacks they cannot overcome. This paper analyzes and compares several representative distributed lightpath-establishment algorithms on a neutral simulation platform; from the simulation data, including blocking probability and lightpath setup time, the strengths and weaknesses of each algorithm under different network conditions are derived. On this basis, a delayed distributed wavelength-channel establishment algorithm is proposed and compared with the existing algorithms to demonstrate its advantages, providing a reference for choosing the most suitable algorithm and strategy for a given network.
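
As a toy illustration of the classic trade-off, backward reservation (BRP) on a single path can be sketched as follows: the forward probe only records per-link availability, the destination picks a wavelength from the intersection, and reservation happens on the return trip, where it can still fail because nothing was locked during the probe.

```python
# Toy illustration of backward reservation (BRP) on one path. The forward
# probe only *records* which wavelengths are free on each link; reservation
# happens hop by hop on the way back, and can fail if another connection took
# the wavelength during the round trip (the vulnerable period BRP suffers from).
def backward_reserve(path_links):
    """path_links: list of sets, free wavelengths per link along the path."""
    # Forward phase: probe accumulates availability without locking anything.
    candidates = set.intersection(*path_links) if path_links else set()
    if not candidates:
        return None  # blocked: no wavelength free end-to-end (continuity constraint)
    w = min(candidates)  # destination's pick, e.g. lowest-index first fit
    # Backward phase: reserve on each link; abort if w vanished meanwhile.
    # (A full protocol would also release the partial reservations on abort.)
    for link in reversed(path_links):
        if w not in link:
            return None  # reservation race lost -> blocked
        link.remove(w)
    return w

links = [{0, 1, 3}, {1, 2, 3}, {0, 1, 2}]
print(backward_reserve(links))  # -> 1, and wavelength 1 is now taken on all links
```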

15.
16.
In this paper, we address the problem of minimizing the cost of transferring a document or a file requested by a set of users geographically separated on a network of nodes. We concentrate on theoretical aspects of data migration and caching on high-speed networks. Following the information caching paradigm introduced in the literature, we present polynomial time optimal caching strategies that minimize the total monetary cost of all the service requests by the users on a high-speed network. We consider a scenario in which a large pool of customers from one or more remote sites on a network demand a document, situated at some site, for their use. We also assume that the users can request the document at different time instants. This process of distributing the requested document incurs communication costs due to the use of communication resources and caching costs of the document at some server sites before it is delivered to the users at their desired time instances. We configure the network as a fully connected topology in which the service providers manage and control the distribution of the requested document among the users. For a high-speed network, we show that a single copy of the requested document is sufficient to serve all the user requests in an optimal manner. We extend the study to a homogeneous case in which the communication costs are identical and caching costs at all the sites are identical. In this case, we demonstrate the adaptability of the algorithm in generating more than one copy when needed by the minimization process. Using these strategies, the network service providers can decide when, where, and for how long the requested documents must be cached at vantage sites to obtain an optimal solution. Illustrative examples are provided to ease the understanding.
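
In the homogeneous case, the core trade-off reduces to a per-gap decision between holding the single copy in a cache and re-transferring it at the next request. A minimal sketch under assumed unit costs (the variable names are mine, not the paper's):

```python
# Per-gap trade-off in the homogeneous case: between consecutive request times
# the single copy is either held in a cache (cache_rate per unit time) or
# dropped and re-sent (flat transfer_cost). With identical costs at every
# site, each gap can be settled independently by a simple minimum.
def optimal_service_cost(times, transfer_cost, cache_rate):
    times = sorted(times)
    cost = transfer_cost  # first delivery always pays one transfer
    for prev, nxt in zip(times, times[1:]):
        gap = nxt - prev
        # keep the copy cached across the gap, or re-transfer at the next request
        cost += min(cache_rate * gap, transfer_cost)
    return cost

# Four requests; caching wins across short gaps, re-sending across long ones.
print(optimal_service_cost([0, 2, 3, 50], transfer_cost=10, cache_rate=1))
# gaps 2, 1, 47 -> min(2,10) + min(1,10) + min(47,10) = 13, plus 10 -> 23
```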

17.
A schedule for a multimedia document indicates when document events should occur. We describe a two-phase algorithm that automatically produces schedules for interactive multimedia documents, which can contain both predictable behavior (such as audio and video) and unpredictable behavior (such as user interaction and programs with unpredictable execution times). The first phase of the algorithm, called the compile-time scheduler, preprocesses high-level temporal specifications before the document is presented and creates as much of the schedule as possible. Our compile-time scheduler is conceptually similar to TeX's spatial layout algorithm in that it permits time to be stretched or shrunk between events inside media segments to arrive at an optimal presentation for a document. The second phase of the algorithm, called the runtime scheduler, resolves the presentation of media segments that depend upon unpredictable behavior.
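
The TeX analogy can be sketched as follows: each inter-event gap carries an optimal duration plus stretch and shrink allowances, and the compile-time phase distributes the deviation from the optimal total in proportion to those allowances. The data layout below is an assumption; the paper's temporal specification language is richer.

```python
# TeX-glue-style stretching sketch: the deviation needed to hit a target total
# duration is split across gaps in proportion to each gap's allowance.
def layout(gaps, target):
    """gaps: list of (optimal, max_shrink, max_stretch); returns durations.
    If the allowances cannot absorb the full deviation, the ratio caps at 1."""
    total_opt = sum(opt for opt, _, _ in gaps)
    delta = target - total_opt
    if delta >= 0:
        room = sum(st for _, _, st in gaps)
        ratio = min(delta / room, 1.0) if room else 0.0
        return [opt + st * ratio for opt, _, st in gaps]
    room = sum(sh for _, sh, _ in gaps)
    ratio = min(-delta / room, 1.0) if room else 0.0
    return [opt - sh * ratio for opt, sh, _ in gaps]

# Three gaps must fill 13s instead of their optimal 10s; the 3s of stretch is
# split 1:2 according to the gaps' stretchability.
print(layout([(4, 1, 1), (3, 0, 0), (3, 1, 2)], target=13))  # [5.0, 3.0, 5.0]
```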

18.
In the context of information retrieval (IR) from text documents, the term weighting scheme (TWS) is a key component of the matching mechanism when using the vector space model. In this paper, we propose a new TWS that is based on computing the average term occurrences of terms in documents, and it also uses a discriminative approach based on the document centroid vector to remove less significant weights from the documents. We call our approach Term Frequency With Average Term Occurrence (TF-ATO). An analysis of commonly used document collections shows that test collections are not fully judged, since achieving that is expensive and may be infeasible for large collections. A document collection being fully judged means that every document in the collection acts as a relevant document to a specific query or a group of queries. The discriminative approach used in our proposed approach is a heuristic method for improving IR effectiveness and performance, and it has the advantage of not requiring previous knowledge about relevance judgements. We compare the performance of the proposed TF-ATO to the well-known TF-IDF approach and show that using TF-ATO results in better effectiveness in both static and dynamic document collections. In addition, this paper investigates the impact that stop-words removal and our discriminative approach have on TF-IDF and TF-ATO. The results show that both stop-words removal and the discriminative approach have a positive effect on both term-weighting schemes. More importantly, it is shown that using the proposed discriminative approach is beneficial for improving IR effectiveness and performance with no information on the relevance judgement for the collection.
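
One plausible reading of the abstract's definitions (the exact formulas are in the paper): weight a term by its frequency divided by the document's average term occurrence, then let the discriminative step zero out weights that do not exceed the collection centroid's component for that term.

```python
# Sketch of TF-ATO as read from the abstract: weight = tf / ATO, where ATO is
# the document's average occurrences per distinct term; the discriminative
# step drops weights at or below the collection centroid's value.
from collections import Counter

def tf_ato(docs):
    """docs: list of token lists -> list of {term: weight} vectors."""
    vectors = []
    for tokens in docs:
        tf = Counter(tokens)
        ato = sum(tf.values()) / len(tf)  # average occurrences per distinct term
        vectors.append({t: f / ato for t, f in tf.items()})
    # Document centroid: mean weight of each term over the whole collection.
    vocab = {t for v in vectors for t in v}
    centroid = {t: sum(v.get(t, 0.0) for v in vectors) / len(vectors)
                for t in vocab}
    # Discriminative step: keep only weights above the centroid component.
    return [{t: w for t, w in v.items() if w > centroid[t]} for v in vectors]

docs = [["ir", "ir", "ranking", "model"], ["ir", "neural", "model", "model"]]
for v in tf_ato(docs):
    print(v)  # terms that dominate their collection-wide average survive
```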

19.
In this work, a series of principal questions of the existence of documents in the information space is considered. Based on natural concepts of a document as an information object, the questions of document identification, concepts of the instance, copy, and original are studied, along with accompanying concepts of the registry, catalogue, and repository. Requirements for the identification and rules of the existence and use of documents are considered. An approach is suggested to make possible an adequate treatment of copies of documents for the case when copies are more available than originals. Features of an XML-based prototype are described. The work can serve as a basis for constructing document collections of an arbitrary nature, especially for documents naturally distributed in the information space.

20.
Text categorization systems often use machine learning techniques to induce document classifiers from preclassified examples. The fact that each example document belongs to many classes often leads to very high computational costs that sometimes grow exponentially in the number of features. Seeking to reduce these costs, we explored the possibility of running a "baseline induction algorithm" separately for subsets of features, obtaining a set of classifiers to be combined. For the specific case of classifiers that return not only class labels but also confidences in these labels, we investigate here a few alternative fusion techniques, including our own mechanism that was inspired by the Dempster-Shafer Theory. The paper describes the algorithm and, in our specific case study, compares its performance to that of more traditional mechanisms.
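
The textbook version of Dempster's rule for two classifiers over the same label set can be sketched as below; note the paper's mechanism is only inspired by Dempster-Shafer theory, so this is the standard rule rather than the authors' variant. Each classifier's per-class confidences are treated as masses on singletons, with the remainder assigned to the whole frame ("don't know").

```python
# Dempster's rule of combination for two classifiers that output per-class
# confidences. Leftover confidence mass goes to Theta (the whole frame).
def dempster(m1, m2, classes):
    theta1 = 1.0 - sum(m1.values())  # uncommitted mass of classifier 1
    theta2 = 1.0 - sum(m2.values())
    # Conflict K: mass assigned to incompatible singleton pairs.
    k = sum(m1.get(a, 0) * m2.get(b, 0)
            for a in classes for b in classes if a != b)
    norm = 1.0 - k
    fused = {c: (m1.get(c, 0) * m2.get(c, 0)
                 + m1.get(c, 0) * theta2
                 + theta1 * m2.get(c, 0)) / norm
             for c in classes}
    fused["Theta"] = theta1 * theta2 / norm
    return fused

classes = ["sports", "politics"]
m1 = {"sports": 0.6, "politics": 0.1}   # 0.3 left uncommitted
m2 = {"sports": 0.5, "politics": 0.2}   # 0.3 left uncommitted
print(dempster(m1, m2, classes))        # agreement on "sports" is reinforced
```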
