Similar Articles
 20 similar articles retrieved.
1.
A web services framework for distributed model management
Distributed model management aims to support the widespread sharing and use of decision support models. Web services are a promising technology for supporting distributed model management activities such as model creation and delivery, model composition, model execution, and model maintenance to fulfill dynamic decision-support and problem-solving requests. We propose a web services based framework for model management (called MM-WS) to support various activities of the model management life cycle. The framework is based on the recently proposed Integrated Service Planning and Execution (ISP&E) approach for web services integration. We discuss the encoding of domain knowledge (as individual models) and utilize the MM-WS framework to interleave the synthesis of composite models with their execution. A prototypical implementation with an example is used to illustrate the utility of the framework for enabling distributed model management and knowledge integration. Benefits and issues of using the framework to support model-based decision-making in organizational contexts are outlined.
Therani Madhusudan
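
The abstract above does not spell out the MM-WS interfaces; the following is only an illustrative sketch of what interleaving the synthesis of a composite model with its execution could look like. The registry, model names, and input data are all hypothetical.

# Illustrative sketch of interleaved planning and execution of model services.
# The service registry and model names below are hypothetical, not the MM-WS API.

def forecast_demand(inputs):
    """Stand-in for a remotely hosted forecasting model service."""
    return {"demand": inputs["history"][-1] * 1.1}

def plan_inventory(inputs):
    """Stand-in for an inventory-planning model service."""
    return {"order_qty": max(0, inputs["demand"] - inputs["stock"])}

REGISTRY = {"forecast": forecast_demand, "inventory": plan_inventory}

def compose_and_execute(goal_steps, context):
    """Interleave synthesis and execution: each step is resolved against the
    current context, executed, and its outputs folded back before the next
    step is selected."""
    for step in goal_steps:
        service = REGISTRY[step]          # discovery/binding would be dynamic in practice
        context.update(service(context))  # execute and feed results forward
    return context

result = compose_and_execute(
    ["forecast", "inventory"],
    {"history": [90, 100, 110], "stock": 40},
)
print(result)  # e.g. demand ≈ 121.0, order_qty ≈ 81.0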

2.
3.
The basic paradigm of service-oriented architectures (publication, discovery, and use) can be interpreted in different ways. Current technologies assume a static and rigid approach: UDDI was conceived with the idea of a centralized repository for service publication, and BPEL only supports design-time bindings between the orchestrated workflow and the external services. The trend, however, is towards more flexibility and dynamism. The single centralized repository is being substituted by dedicated repositories that cooperate and exchange information about stored services on demand. Design-time compositions are complemented by mechanisms that allow for the selection and binding of services at runtime. This paper presents the research results of our group in delivering a framework for the deployment of adaptable Web service compositions. The publication infrastructure integrates existing heterogeneous repositories and makes them cooperate for service discovery. The deployment infrastructure supports BPEL-like compositions that can select services dynamically and adjust their behavior in response to detected changes and unforeseen events. The framework also provides a monitoring-based validation of running compositions: we provide suitable probes to oversee the execution of deployed compositions. The various parts of the framework are exemplified on a common case study taken from the automotive domain. This research is partially supported by the European IST project SeCSE (Service Centric System Engineering) and the Italian FIRB project ARTDECO (Adaptive infRasTructures for DECentralized Organizations).
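
The abstract describes runtime selection and binding of services plus monitoring probes; the sketch below illustrates that general idea in Python. The candidate endpoints, the probe, and the invocation stub are assumptions for the example, not the SeCSE framework's actual components.

import random

# Hypothetical candidate endpoints for one abstract service in a composition.
CANDIDATES = [
    {"endpoint": "https://provider-a.example/quote", "healthy": True},
    {"endpoint": "https://provider-b.example/quote", "healthy": True},
]

def probe(candidate):
    """Monitoring probe stub: in a real deployment this would check the
    endpoint's responsiveness and conformance to the expected behaviour."""
    return candidate["healthy"]

def invoke(candidate, payload):
    """Invocation stub: a real composition engine would issue a SOAP/REST call."""
    if not candidate["healthy"]:
        raise ConnectionError(candidate["endpoint"])
    return {"from": candidate["endpoint"], "payload": payload}

def call_with_runtime_binding(payload):
    """Select and bind a concrete service at runtime, skipping candidates whose
    probes report problems, instead of fixing the binding at design time."""
    for candidate in sorted(CANDIDATES, key=lambda c: random.random()):
        if probe(candidate):
            try:
                return invoke(candidate, payload)
            except ConnectionError:
                continue  # detected change: try the next candidate
    raise RuntimeError("no healthy candidate service available")

print(call_with_runtime_binding({"part": "brake-pad"}))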

4.
5.
To meet users' needs for precise and personalized information access, and based on an analysis of the characteristics of Deep Web information, a crawler framework is proposed that can search Deep Web information on different topics. For the two key difficulties in this framework, Deep Web database discovery and the Deep Web crawling strategy, two techniques are proposed: using general-purpose search engines to speed up the discovery of Deep Web databases on different topics, and submitting commonly used words to download as much Deep Web information as possible. Experimental results show that the techniques adopted by the framework are feasible.
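
A minimal sketch of the common-word submission strategy mentioned above: frequent terms are posted one by one to a Deep Web search form so that, together, they cover as much of the hidden database as possible. The form URL, query field, and word list are assumed for illustration, and the requests library is assumed to be available.

import requests  # assumed available; any HTTP client would do

# Hypothetical Deep Web search form discovered via a general-purpose search engine.
FORM_URL = "https://example.org/library/search"
QUERY_FIELD = "q"

# Common words are submitted one by one; each tends to match many records,
# so their union covers a large share of the hidden database.
COMMON_WORDS = ["the", "of", "and", "data", "system"]

def harvest(form_url, field, words):
    seen = set()
    for word in words:
        resp = requests.get(form_url, params={field: word}, timeout=10)
        resp.raise_for_status()
        # A real crawler would parse result pages and follow pagination here;
        # we only record the raw result page size keyed by query term.
        seen.add((word, len(resp.text)))
    return seen

if __name__ == "__main__":
    print(harvest(FORM_URL, QUERY_FIELD, COMMON_WORDS))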

6.
Together with the explosive growth of web video on sharing sites like YouTube, automatic topic discovery and visualization have become increasingly important in helping to organize and navigate such large-scale video collections. Previous work dealt with the topic discovery and visualization problems separately and did not fully take into account the distinctive characteristics of multi-modality and sparsity in web video features. This paper tries to solve the web video topic discovery problem together with visualization under a single framework and proposes a Star-structured K-partite Graph based co-clustering and ranking framework, which consists of three stages: (1) represent the web videos and their multi-modal features (e.g., keywords, near-duplicate keyframes, near-duplicate aural frames, etc.) as a Star-structured K-partite Graph; (2) group videos and their features simultaneously into clusters (topics) and organize the generated clusters as a linked cluster network; (3) rank each type of node in the linked cluster network by "popularity" and visualize them in a novel interface that lets users interactively browse topics at multiple scales. Experiments on a YouTube benchmark dataset demonstrate the flexibility and effectiveness of the proposed framework.
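
A small sketch of stage (1) and a degree-based stand-in for the "popularity" ranking of stage (3), using networkx (assumed available). The videos, features, and edge weights are invented, and the paper's actual co-clustering step is not reproduced here.

import networkx as nx  # assumed available

# Videos form the centre of the star; each feature modality (keywords,
# near-duplicate keyframes, ...) is one partition linked only to videos.
G = nx.Graph()
videos = ["v1", "v2", "v3"]
G.add_nodes_from(videos, kind="video")
G.add_nodes_from(["kw:eclipse", "kw:nasa"], kind="keyword")
G.add_nodes_from(["kf:frame-17", "kf:frame-42"], kind="keyframe")

edges = [
    ("v1", "kw:eclipse", 3), ("v2", "kw:eclipse", 1), ("v2", "kw:nasa", 2),
    ("v3", "kw:nasa", 4), ("v1", "kf:frame-17", 1), ("v3", "kf:frame-42", 2),
]
G.add_weighted_edges_from(edges)

def popularity(graph, kind):
    """Toy 'popularity' score: weighted degree within the star graph.
    The paper's actual co-clustering and ranking steps are more involved."""
    nodes = [n for n, d in graph.nodes(data=True) if d["kind"] == kind]
    return sorted(((n, graph.degree(n, weight="weight")) for n in nodes),
                  key=lambda x: -x[1])

print(popularity(G, "video"))    # [('v3', 6), ('v1', 4), ('v2', 3)]
print(popularity(G, "keyword"))  # [('kw:nasa', 6), ('kw:eclipse', 4)]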

7.
An XML-enabled data extraction toolkit for web sources
The amount of useful semi-structured data on the web continues to grow at a stunning pace. Often interesting web data are not in database systems but in HTML pages, XML pages, or text files. Data in these formats are not directly usable by standard SQL-like query processing engines that support sophisticated querying and reporting beyond keyword-based retrieval. Hence, web users and applications need a smart way of extracting data from these web sources. One of the popular approaches is to write wrappers around the sources, either manually or with software assistance, to bring the web data within the reach of more sophisticated query tools and general mediator-based information integration systems. In this paper, we describe the methodology and the software development of an XML-enabled wrapper construction system, XWRAP, for semi-automatic generation of wrapper programs. By XML-enabled we mean that the metadata about information content that are implicit in the original web pages will be extracted and encoded explicitly as XML tags in the wrapped documents. In addition, the query-based content filtering process is performed against the XML documents. The XWRAP wrapper generation framework has three distinct features. First, it explicitly separates tasks of building wrappers that are specific to a web source from the tasks that are repetitive for any source, and uses a component library to provide basic building blocks for wrapper programs. Second, it provides inductive learning algorithms that derive or discover wrapper patterns by reasoning about sample pages or sample specifications. Third, and most importantly, we introduce and develop a two-phase code generation framework. The first phase utilizes an interactive interface facility to encode the source-specific metadata knowledge identified by individual wrapper developers as declarative information extraction rules. The second phase combines the information extraction rules generated in the first phase with the XWRAP component library to construct an executable wrapper program for the given web source.
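
A toy illustration, not XWRAP itself: a declarative extraction rule (CSS selectors mapped to tag names) is applied to an HTML fragment, and the result is emitted as explicitly tagged XML. BeautifulSoup is assumed to be available; the selectors, tag names, and sample HTML are invented.

from xml.etree import ElementTree as ET
from bs4 import BeautifulSoup  # assumed available; any HTML parser would work

HTML = """
<table>
  <tr><td class="name">Acme Anvil</td><td class="price">19.99</td></tr>
  <tr><td class="name">Rocket Skates</td><td class="price">54.50</td></tr>
</table>
"""

# Declarative extraction rules in the spirit of a wrapper's first phase:
# source-specific selectors mapped to explicit XML tags.
RULES = {"product": "tr", "name": "td.name", "price": "td.price"}

def wrap(html, rules):
    soup = BeautifulSoup(html, "html.parser")
    root = ET.Element("products")
    for row in soup.select(rules["product"]):
        item = ET.SubElement(root, "product")
        for tag in ("name", "price"):
            cell = row.select_one(rules[tag])
            if cell is not None:
                ET.SubElement(item, tag).text = cell.get_text(strip=True)
    return ET.tostring(root, encoding="unicode")

print(wrap(HTML, RULES))
# <products><product><name>Acme Anvil</name><price>19.99</price></product>...</products>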

8.
With the increased complexity of complex engineering systems (CES), more and more disciplines, coupled relationships, work processes, design data, design knowledge, and uncertainties are involved. MDO currently faces unprecedented challenges, especially when CES are designed by specialists dispersed geographically, working on heterogeneous platforms with different analysis tools. Difficulties with product design data integration, data sharing among participants, and workflow optimization seriously hamper the development and application of MDO in enterprises. Therefore, a multi-hierarchical integrated product design data model (MiPDM) supporting MDO in a web environment and a web services-based MDO framework considering aleatory and epistemic uncertainties are proposed in this paper. With enabling technologies including web services, ontology, workflow, agents, XML, and evidence theory, the proposed framework enables geographically dispersed designers to work collaboratively in the MDO environment. The ontology-based workflow enables the logical reasoning of MDO to be processed dynamically. Finally, a proof-of-concept prototype system is developed based on the Java 2 Platform, Enterprise Edition (J2EE), and an example of a supersonic business jet is used to verify the web services-based MDO framework.

9.
A substantial subset of Web data has an underlying structure. For instance, the pages obtained in response to a query executed through a Web search form are usually generated by a program that accesses structured data in a local database, and embeds them into an HTML template. For software programs to gain full benefit from these “semi-structured” Web sources, wrapper programs must be built to provide a “machine-readable” view over them. Since Web sources are autonomous, they may experience changes that invalidate the current wrapper, thus automatic maintenance is an important issue. Wrappers must perform two tasks: navigating through Web sites and extracting structured data from HTML pages. While several works have addressed the automatic maintenance of data extraction tasks, the problem of maintaining the navigation sequences remains unaddressed to the best of our knowledge. In this paper, we propose a set of novel techniques to fill this gap.
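
The abstract does not describe the proposed maintenance techniques, so the sketch below only illustrates the general problem: a stored navigation sequence is re-verified against per-step landmarks, and the wrapper is flagged for repair when a landmark no longer appears. All steps, URLs, and landmarks are hypothetical.

# Illustrative only: a stored navigation sequence with per-step "landmarks"
# that are re-checked on each run; a failed check flags the wrapper for repair.
# The paper's actual maintenance techniques are not reproduced here.

NAVIGATION = [
    {"action": "open", "target": "https://shop.example/login", "landmark": "Sign in"},
    {"action": "submit_form", "target": "#login-form", "landmark": "My account"},
    {"action": "open", "target": "https://shop.example/orders", "landmark": "Order history"},
]

def fetch(step):
    """Stub for executing one navigation step and returning the page text."""
    return "…Sign in…My account…Order history…"  # canned page for the example

def verify_sequence(steps):
    broken = [s for s in steps if s["landmark"] not in fetch(s)]
    if broken:
        print("navigation sequence invalidated at:", [s["target"] for s in broken])
        return False
    return True

print(verify_sequence(NAVIGATION))  # True with the canned page above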

10.
Web service interfaces can be discovered through several means, including service registries, search engines, service portals, and peer-to-peer networks. Discovering Web services in such heterogeneous environments is becoming a challenging task and raises several concerns, such as performance, reliability, and robustness. In this paper, we introduce the Web Service Broker (WSB) framework, which provides a universal access point for discovering Web services. WSB uses a crawler to collect the plurality of Web services disseminated throughout the Web, continuously monitors the behavior of Web services in delivering the expected functionality, and enables clients to articulate service queries tailored to their needs. The framework features ranking algorithms we have developed that rank services according to Quality of Web Service parameters. WSB can be seamlessly integrated into existing service-oriented architectures.
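
A toy weighted Quality-of-Web-Service ranking in the spirit of the broker described above; the metrics, weights, and scoring formula are assumptions, not WSB's actual algorithms.

# Toy Quality-of-Web-Service ranking (not WSB's actual algorithm): each service
# is scored as a weighted sum of normalized QoS parameters.

SERVICES = {
    "WeatherA": {"availability": 0.99, "latency_ms": 120, "reliability": 0.97},
    "WeatherB": {"availability": 0.95, "latency_ms": 60,  "reliability": 0.99},
}
WEIGHTS = {"availability": 0.4, "latency": 0.3, "reliability": 0.3}  # assumed

def score(qos):
    latency_score = 1.0 / (1.0 + qos["latency_ms"] / 100.0)  # lower latency is better
    return (WEIGHTS["availability"] * qos["availability"]
            + WEIGHTS["latency"] * latency_score
            + WEIGHTS["reliability"] * qos["reliability"])

ranked = sorted(SERVICES, key=lambda name: score(SERVICES[name]), reverse=True)
print(ranked)  # ['WeatherB', 'WeatherA'] with these example numbers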

11.
It is difficult to digest the vast and poorly organized information contained in auction Web sites, which are fast-changing and highly dynamic. We develop a unified framework that can automatically extract product features and summarize hot item features from multiple auction sites. To deal with the irregularity in the layout format of Web pages and handle the uncertainty involved, we formulate the tasks of product feature extraction and hot item feature summarization as a single graph labeling problem using conditional random fields. One characteristic of this graphical model is that it can model the interdependence between neighbouring tokens in a Web page, tokens in different Web pages, as well as various information such as hot item features across different auction sites. We have conducted extensive experiments on several real-world auction Web sites to demonstrate the effectiveness of our framework. The work described in this paper is substantially supported by grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Nos. CUHK 4179/03E and CUHK 4193/04E) and the Direct Grant of the Faculty of Engineering, CUHK (Project Codes 2050363 and 2050391). This work is also affiliated with the Microsoft-CUHK Joint Laboratory for Human-centric Computing and Interface Technologies.
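
A simplified stand-in for the paper's approach, assuming the sklearn-crfsuite package: a linear-chain CRF labels tokens from auction listings as product-feature tokens. The features, training data, and labels are invented, and a linear chain cannot express the cross-page and cross-site dependencies the paper's graph model captures.

import sklearn_crfsuite  # assumed installed; implements linear-chain CRFs

def features(tokens, i):
    """Hand-crafted token features; real systems would use layout cues too."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "has_unit": tok.rstrip(".,").lower() in {"gb", "mp", "inch", "ghz"},
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
    }

train_tokens = [["Canon", "EOS", "8", "MP", "digital", "camera"],
                ["Apple", "iPod", "30", "GB", "white"]]
train_labels = [["B-FEAT", "I-FEAT", "B-FEAT", "I-FEAT", "O", "O"],
                ["B-FEAT", "I-FEAT", "B-FEAT", "I-FEAT", "O"]]

X = [[features(seq, i) for i in range(len(seq))] for seq in train_tokens]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, train_labels)

test = ["Nikon", "D50", "6", "MP"]
print(crf.predict([[features(test, i) for i in range(len(test))]]))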

12.
13.
This study presents an analysis of users' queries directed at different search engines to investigate trends and suggest better search engine capabilities. The query distribution among search engines, including the spawning of queries, the number of terms per query, and query lengths, is discussed to highlight the principal factors affecting a user's choice of search engine and to evaluate the reasons for varying query lengths. The results could be used to develop short- and long-term business plans for search engine service providers and to determine whether or not to opt for more focused, topic-specific search offerings to gain better market share.
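
A minimal sketch of the kind of query-log analysis described above, computing per-engine query counts and average terms per query from a made-up log.

from collections import defaultdict
from statistics import mean

# Made-up query log: (search engine, query) pairs standing in for the
# transaction logs analysed in the study.
LOG = [
    ("engineA", "weather"),
    ("engineA", "cheap flights to rome"),
    ("engineB", "python list comprehension example"),
    ("engineB", "news"),
    ("engineB", "how to fix a leaking tap"),
]

terms_per_query = defaultdict(list)
for engine, query in LOG:
    terms_per_query[engine].append(len(query.split()))

for engine, lengths in terms_per_query.items():
    print(engine, "queries:", len(lengths), "avg terms/query:", round(mean(lengths), 2))
# engineA queries: 2 avg terms/query: 2.5
# engineB queries: 3 avg terms/query: 3.67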

14.
Clinical trials aiming for regulatory approval of a therapeutic agent must be conducted according to Good Clinical Practice (GCP). Clinical Data Management Systems (CDMS) are specialized software solutions geared toward GCP trials. They are, however, less suited for data management in small non-GCP research projects. For use in researcher-initiated non-GCP studies, we developed a client-server database application based on the open-source CakePHP framework.

15.
The rapid growth of the Internet has created a tremendous number of multilingual resources. However, language boundaries prevent information sharing and discovery across countries. Proper names play an important role in search queries and knowledge discovery. When foreign names are involved, proper names are often translated phonetically, which is referred to as transliteration. In this research we propose a generic transliteration framework that incorporates an enhanced Hidden Markov Model (HMM) and a Web mining model. We improve traditional statistics-based transliteration in three areas: (1) a simple phonetic transliteration knowledge base; (2) bigram and trigram HMMs; and (3) a Web mining model that uses word frequency-of-occurrence information from the Web. We evaluated the framework on English–Arabic back-transliteration. Experiments showed that when using the HMM alone, the combination of the bigram and trigram HMMs performed best for English–Arabic transliteration. While the bigram model alone achieved fairly good performance, the trigram model alone did not. The Web mining approach boosted performance by 79.05%. Overall, our framework achieved a precision of 0.72 when the eight best transliterations were considered. Our results show promise for using transliteration techniques to improve multilingual Web retrieval.
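
A sketch of the re-ranking idea only: candidate back-transliterations scored by a (made-up) HMM log-probability are re-scored with hypothetical web frequency counts. The candidates, counts, and interpolation weight are invented, and the bigram/trigram HMM itself is not implemented.

import math

# Illustrative re-ranking only: candidate back-transliterations with made-up
# HMM log-probabilities, re-scored using hypothetical web frequency counts.
candidates = {               # candidate English form -> HMM log-probability
    "Mohammed": -11.2,
    "Mohamad": -10.8,
    "Muhammad": -11.0,
}
web_counts = {               # pretend hit counts returned by a web search API
    "Mohammed": 250_000,
    "Mohamad": 12_000,
    "Muhammad": 480_000,
}

ALPHA = 0.6  # weight between HMM score and web evidence (assumed)

def combined_score(name):
    web_score = math.log1p(web_counts.get(name, 0))
    return ALPHA * candidates[name] + (1 - ALPHA) * web_score

print(sorted(candidates, key=combined_score, reverse=True))
print("best:", max(candidates, key=combined_score))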

16.
Due to the spread of smart devices and the development of network technology, a large number of people can now easily use the web to acquire information and various services. Further, collective intelligence has emerged as a core player in the evolution of technology in the Web 2.0 generation. This means that people who are interested in a specific domain of knowledge can not only make use of the information but also participate in the knowledge production process. Since a large volume of knowledge is produced by multiple contributors, it is important to integrate and manage that knowledge efficiently. In this paper, we propose a social tagging-based dynamic knowledge management system for crowdsourcing environments. The approach is to categorize and package knowledge from multiple sources in such a way that it is easily linked to target knowledge.
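
A toy sketch of tag-based linking: crowd-contributed knowledge items are indexed by their social tags, and items sharing a tag are treated as linked. The items and tags are invented for illustration.

from collections import defaultdict

# Toy illustration: knowledge items contributed by a crowd are indexed by
# their social tags, and related items are those sharing at least one tag.
ITEMS = {
    "doc1": {"python", "crawler", "deep-web"},
    "doc2": {"python", "nlp"},
    "doc3": {"crawler", "search-engine"},
}

tag_index = defaultdict(set)
for item, tags in ITEMS.items():
    for tag in tags:
        tag_index[tag].add(item)

def related(item):
    """Items linked to `item` through at least one shared tag."""
    linked = set()
    for tag in ITEMS[item]:
        linked |= tag_index[tag]
    linked.discard(item)
    return linked

print(related("doc1"))  # {'doc2', 'doc3'}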

17.
18.
19.
Sign Languages (SL) are underrepresented in the digital world, which contributes to the digital divide for the Deaf Community. In this paper, our goal is twofold: (1) to review the implications of current SL generation technologies for two key user web tasks, information search and learning, and (2) to propose a taxonomy of the technical and functional dimensions for categorizing those technologies. The review reveals that although content can currently be portrayed in SL by means of videos of human signers or avatars, the debate about how bilingual (text and SL) versus SL-only websites affect signers' comprehension of hypertext content emerges as an unresolved issue in need of further empirical research. The taxonomy highlights that videos of human signers are ecological but require a high-cost group of experts to perform text-to-SL translation, video editing, and web uploading. Avatar technology, generally associated with automatic text-to-SL translators, reduces bandwidth requirements and human resources but lacks reliability. The insights gained through this review may enable designers, educators, or users to select the technology that best suits their goals.

20.
A web service-based web application (WSbWA) is a collection of web services, or reusable proven software parts, that can be discovered and invoked using standard Internet protocols. The use of these web services in the development process of WSbWAs can help overcome many problems of software use, deployment, and evolution. Although the cost-effective software engineering of WSbWAs is potentially a very rewarding area, not much work has been done to achieve short time-to-market conditions by viewing and dealing with WSbWAs as software products that can be derived from a common infrastructure and assets with a captured specific abstraction in the domain. Both Product Line Engineering (PLE) and Agile Methods (AMs), albeit with different philosophies, are software engineering approaches that can significantly shorten the time to market and increase product quality. Using the PLE approach we built, at the domain engineering level, a WSbWA-specific lightweight product-line architecture and combined it, at the application engineering level, with an Agile Method that uses a domain-specific visual language with direct manipulation and extraction capabilities of web services to perform customization and calibration of a product or WSbWA for a specific customer. To assess the effectiveness of our approach, we designed and implemented a tool that we used to investigate the return on investment of the activities related to PLE and AMs. Details of our proposed approach, the related tool developed, and the experimental study performed are presented in this article together with a discussion of planned directions of future work.
