Found 20 similar documents; search took 15 milliseconds.
1.
Hypermedia composite templates define generic structures of nodes and links to be added to a document composition, providing spatio-temporal synchronization semantics. This paper presents EDITEC, a graphical editor for hypermedia composite templates. EDITEC templates are based on the XTemplate 3.0 language. The editor was designed to offer a user-friendly visual approach. It presents a new method that provides several options for representing iteration structures graphically, in order to specify a certain behavior to be applied to a set of generic document components. The editor provides a multi-view environment, giving the user complete control of the composite template during the authoring process. Composite templates can be used in NCL documents to embed spatio-temporal semantics into NCL contexts. NCL is the standard declarative language used for producing interactive applications in the Brazilian digital TV system and in ITU H.761 IPTV services. Hypermedia composite templates could also be used in other hypermedia authoring languages, offering new types of compositions with predefined semantics.
2.
Automatic semantic annotation of real-world web images (total citations: 1; self-citations: 0; citations by others: 1)
Roger C. F. Wong, Clement H. C. Leung 《IEEE Transactions on Pattern Analysis and Machine Intelligence》 2008, 30(11): 1933-1944
As the number of web images increases at a rapid rate, searching them semantically presents a significant challenge. Many raw images are constantly uploaded with little meaningful direct annotation of semantic content, limiting their search and discovery. In this paper, we present a semantic annotation technique based on the use of image parametric dimensions and metadata. Using decision trees and rule induction, we develop a rule-based approach to formulate explicit annotations for images fully automatically, so that by the use of our method, semantic queries such as "sunset by the sea in autumn in New York" can be answered and indexed purely by machine. Our system is evaluated quantitatively using more than 100,000 web images. Experimental results indicate that this approach is able to deliver highly competent performance, attaining good recall and precision rates, sometimes over 80%. This approach enables a new degree of semantic richness to be automatically associated with images, a task which previously could only be performed manually.
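The decision-tree idea above can be illustrated with a minimal sketch. The feature set below (exposure time, aperture, hour of day) and the labels are invented for illustration; the paper's actual parametric dimensions and rule-induction pipeline are richer.

```python
# Hypothetical sketch: inducing annotation rules from image capture
# metadata with a decision tree, in the spirit of the approach above.
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [exposure_time_s, aperture_f, hour_of_day]
X = [
    [1 / 500, 8.0, 12],   # bright midday shot
    [1 / 400, 11.0, 13],
    [1 / 60, 2.8, 19],    # low light, evening
    [1 / 30, 2.0, 20],
]
y = ["daytime", "daytime", "sunset", "sunset"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

# Annotate a new image fully automatically from its metadata alone.
print(clf.predict([[1 / 50, 2.2, 19]])[0])
```

The learned splits play the role of explicit, machine-readable annotation rules.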
3.
Guido Boella, Luigi Di Caro, Alice Ruggeri, Livio Robaldo 《Journal of Intelligent Information Systems》 2014, 43(2): 231-246
Nowadays, there is a huge amount of textual data coming from on-line social communities like Twitter or from encyclopedic platforms such as Wikipedia. This Big Data era has created novel challenges in making sense of large data stores and in efficiently finding specific information within them. In a more domain-specific scenario like the management of legal documents, the extraction of semantic knowledge can help domain engineers find relevant information more rapidly and can assist in the process of constructing application-based legal ontologies. In this work, we address the problem of automatically extracting structured knowledge to improve semantic search and ontology creation on textual databases. To achieve this goal, we propose an approach that first relies on well-known Natural Language Processing techniques like Part-Of-Speech tagging and syntactic parsing. Then, we transform this information into generalized features that aim to capture the surrounding linguistic variability of the target semantic units. These featured data are finally fed into a Support Vector Machine classifier that computes a model to automate the semantic annotation. We first tested our technique on the problem of automatically extracting semantic entities and involved objects within legal texts. Then we focused on the identification of hypernym relations and definitional sentences, demonstrating the validity of the approach on different tasks and domains.
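The feature-generalization-plus-SVM pipeline can be sketched as follows. The contextual features and labels below are simplified stand-ins (the paper derives them from POS tags and parse trees); this is not the authors' code.

```python
# Minimal sketch: generalized contextual features fed to an SVM to
# tag candidate semantic units (feature names are assumptions).
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Each candidate word is described by features of its linguistic context.
train = [
    ({"prev_pos": "DET", "pos": "NOUN", "next_pos": "VERB"}, "ENTITY"),
    ({"prev_pos": "ADP", "pos": "NOUN", "next_pos": "PUNCT"}, "OBJECT"),
    ({"prev_pos": "DET", "pos": "NOUN", "next_pos": "AUX"}, "ENTITY"),
    ({"prev_pos": "ADP", "pos": "NOUN", "next_pos": "CONJ"}, "OBJECT"),
]
vec = DictVectorizer()
X = vec.fit_transform(f for f, _ in train)
y = [label for _, label in train]

clf = LinearSVC().fit(X, y)
print(clf.predict(vec.transform({"prev_pos": "DET", "pos": "NOUN",
                                 "next_pos": "VERB"}))[0])
```

Abstracting from surface tokens to POS-based context is what lets the same model generalize across tasks and domains.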
4.
《Knowledge》2007,20(4):373-381
The aim of this paper is to support user browsing over semantically heterogeneous information spaces. In advance of a user's explicit actions, his search context is predicted from the locally annotated resources in his access history. We exploit a semantic transcoding method and measure the relevance between the estimated model of user intention and candidate resources in Web spaces. For these experiments, we simulated a comparison-shopping scenario on a test bed of twelve online stores in which images are annotated with semantically heterogeneous metadata.
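One simple way to realize "relevance between the estimated intention model and candidate resources" is to estimate term weights from the access history and score each resource by the weight of the annotations it carries. The function names and data below are illustrative assumptions, not the paper's API.

```python
# Illustrative sketch: score candidate resources against a user-intention
# model estimated from annotated resources in the access history.
from collections import Counter

def intention_model(history):
    """Normalized term frequencies over annotations in the history."""
    model = Counter()
    for annotations in history:
        model.update(annotations)
    total = sum(model.values())
    return {t: c / total for t, c in model.items()}

def relevance(model, resource_annotations):
    """Sum of model weights for the terms a resource carries."""
    return sum(model.get(t, 0.0) for t in set(resource_annotations))

history = [{"camera", "digital"}, {"camera", "lens"}, {"camera"}]
model = intention_model(history)
candidates = {"r1": {"camera", "lens"}, "r2": {"shoes"}}
best = max(candidates, key=lambda r: relevance(model, candidates[r]))
print(best)  # the camera-related resource
```

A real system would add the semantic transcoding step so that heterogeneous metadata vocabularies map into one comparable term space first.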
5.
6.
To address the low efficiency of existing ontology-based semantic Web service discovery methods, a new service discovery method is proposed. Building on a composite similarity measure for semantic Web services computed from ontology distance, the method uses the AGNES clustering algorithm from data mining to pre-cluster the set of semantic Web services into several service clusters; a discovery algorithm then uses a similarity threshold to locate a particular cluster and perform matching within it, thereby reducing the search space. Theoretical analysis and simulation results show that the method preserves discovery accuracy while significantly improving discovery efficiency.
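The pre-clustering idea can be sketched as follows. AGNES is agglomerative nesting; scikit-learn's `AgglomerativeClustering` is used here as a stand-in, and the two-dimensional "similarity feature" vectors are invented for illustration.

```python
# Sketch: cluster services offline, then match a request only within
# its nearest cluster instead of scanning the whole service set.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy service descriptions as similarity-derived feature vectors.
services = np.array([
    [0.9, 0.1], [0.85, 0.15],   # e.g. payment-related services
    [0.1, 0.9], [0.15, 0.8],    # e.g. shipping-related services
])
labels = AgglomerativeClustering(n_clusters=2).fit_predict(services)

# At query time, locate the cluster nearest to the request.
request = np.array([0.88, 0.12])
centers = [services[labels == k].mean(axis=0) for k in range(2)]
nearest = int(np.argmin([np.linalg.norm(request - c) for c in centers]))
candidates = np.where(labels == nearest)[0]
print(sorted(candidates.tolist()))  # services 0 and 1
```

Matching then runs only over `candidates`, which is what shrinks the search space.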
7.
Context: Semantically annotating web services is gaining more attention as an important aspect of supporting the automatic matchmaking and composition of web services. Therefore, the support of well-known and agreed ontologies and tools for the semantic annotation of web services is becoming a key concern in helping the diffusion of semantic web services.
Objective: The objective of this systematic literature review is to summarize the current state of the art in supporting the semantic annotation of web services by providing answers to a set of research questions.
Method: The review follows a predefined procedure that involves automatically searching well-known digital libraries. As a result, a total of 35 primary studies were identified as relevant. A manual search led to the identification of 9 additional primary studies that were not found during the automatic search of the digital libraries. Required information was extracted from these 44 studies against the selected research questions and is reported here.
Results: Our systematic literature review identified some approaches available for semantically annotating functional and non-functional aspects of web services. However, many of the approaches are either not validated or the validation done lacks credibility.
Conclusion: We believe that a substantial amount of work remains to be done to improve the current state of research in the area of supporting semantic web services.
8.
9.
Multimedia Tools and Applications - Automated image annotation (AIA) is an important issue in computer vision and pattern recognition, and plays an extremely important role in retrieving...
10.
In this paper, we study the reliability-aware synthesis problem for composing available services automatically while guaranteeing that the composed result satisfies the specification, such as temporal constraints on functionality and reliability, centered on a synthesis model for the mediator of web services composition (CSM). This approach focuses on handling attributes and state relations, permitting users and services to operate over them, i.e., to read/write their data values and compare them according to a dense state order. We show that the reliability-aware synthesis problem for the specification is EXPTIME-complete, and we give an exponential-time algorithm (CSM-NSA) which, for a given formula ψ and a synthesis model, synthesizes available services in the library satisfying ψ over the synthesis model (if they exist) or responds that ψ is not satisfiable (otherwise). The specification ψ is a fragment of PCTL (probabilistic computation tree logic), obtained from ordinary CTL (computation tree logic) by replacing the EX, AX, EU, and AU operators with their quantitative counterparts X_{>p}, X_{=1}, U_{>p}, and U_{=1}, respectively. In addition to NSA, we provide a more effective algorithm, CSM-HSA (heuristic synthesis algorithm), to replace it. Though HSA is an incomplete algorithm, its answers are correct. The experiments show that the HSA algorithm solves the problem of reliability-aware service synthesis effectively and efficiently.
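The quantitative next operator X_{>p} has a simple semantics that a toy example can make concrete: in a labelled Markov chain, a state satisfies X_{>p} φ iff the one-step probability of reaching a φ-state exceeds p. The transition system below is invented for illustration; the paper's CSM model is far more general.

```python
# Toy illustration of PCTL's X_{>p} operator on a labelled Markov chain.
def prob_next(trans, phi_states, s):
    """One-step probability of moving from s to a state satisfying phi."""
    return sum(p for (t, p) in trans[s] if t in phi_states)

def sat_X_gt(trans, phi_states, s, p):
    """Does state s satisfy X_{>p} phi?"""
    return prob_next(trans, phi_states, s) > p

# s0 reaches a "reliable" state in one step with probability 0.7 + 0.2.
trans = {"s0": [("ok1", 0.7), ("ok2", 0.2), ("fail", 0.1)],
         "ok1": [("ok1", 1.0)], "ok2": [("ok2", 1.0)],
         "fail": [("fail", 1.0)]}
reliable = {"ok1", "ok2"}
print(sat_X_gt(trans, reliable, "s0", 0.8))  # True: 0.9 > 0.8
```

The until operators U_{>p} and U_{=1} require a fixpoint computation over the chain rather than a single-step sum, which is where the exponential cost of full synthesis comes from.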
11.
12.
13.
14.
Ghassan Beydoun 《Expert systems with applications》2009,36(8):10952-10961
We capture student interactions in an e-learning community to construct a semantic web (SW), creating a collective meta-knowledge structure that guides students as they search the existing knowledge corpus. We use formal concept analysis (FCA) as a knowledge acquisition tool to process the students' virtual surfing trails, expressing and exploiting the dependencies between web pages to yield subsequent, more effective focused search results. We thus mirror social navigation and bypass the cumbersome manual annotation of web pages. We present our system KAPUST2 (Keeper and Processor of User Surfing Trails), which constructs from the captured student trails a conceptual lattice guiding student queries. We used KAPUST as e-learning software for an undergraduate class over two semesters. We show how the lattice evolved over the two semesters, improving its performance by exploring the relationship between 'kinds' of research assignments and the development of the e-learning semantic web. Course instructors monitored the evolution of the lattice, with interesting positive pedagogical consequences, which are also reported in this paper.
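The FCA step can be sketched with a tiny formal context. A standard fact makes this easy: the concept intents of a context are exactly the intersections of object intents, plus the full attribute set. The trails and page names below are invented example data, not from KAPUST.

```python
# Minimal FCA sketch: enumerate concept intents of a small context
# where objects are surfing trails and attributes are visited pages.
from itertools import combinations

context = {
    "t1": {"home", "syllabus", "lab1"},
    "t2": {"home", "syllabus"},
    "t3": {"home", "lab1", "forum"},
}

all_attrs = frozenset().union(*context.values())
intents = {all_attrs}  # intent of the bottom concept (empty extent)
for r in range(1, len(context) + 1):
    for group in combinations(context.values(), r):
        intents.add(frozenset.intersection(*map(frozenset, group)))

# Each intent is one node of the concept lattice.
print(sorted(sorted(i) for i in intents))
```

Ordering these intents by set inclusion yields the conceptual lattice that guides student queries; this brute-force enumeration is only viable for small contexts, and real systems use algorithms such as NextClosure.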
15.
Multimedia Tools and Applications - In automatic image annotation (AIA) different features describe images from different aspects or views. Part of information embedded in some views is common for...
16.
Roberto De Virgilio, Flavius Frasincar, Walter Hop, Stephan Lachner 《Multimedia Tools and Applications》 2013, 64(1): 119-140
The Semantic Web is gaining increasing interest in fulfilling the need to share, retrieve, and reuse information. Since Web pages are designed to be read by people, not machines, searching and reusing information on the Web is a difficult task without human participation. To this aim, adding semantics (i.e., meaning) to a Web page would help machines understand Web contents and better support the Web search process. One of the latest developments in this field is Google's Rich Snippets, a service for Web site owners to add semantics to their Web pages. In this paper we provide a structured approach to automatically annotate a Web page with Rich Snippets RDFa tags. Exploiting a data reverse engineering method, combined with several heuristics and a named entity recognition technique, our method is capable of recognizing and annotating a subset of the Rich Snippets vocabulary, i.e., all the attributes of its Review concept and the names of the Person and Organization concepts. We implemented tools and services and evaluated the accuracy of the approach on real e-commerce Web sites.
17.
Damaris Fuentes-Lorenzo, Jorge Morato, Juan Miguel Gómez 《Information Systems Frontiers》 2009, 11(4): 471-480
In recent years, technological advances in high-throughput techniques and efficient data-gathering methods, coupled with a world-wide effort in computational biology, have resulted in an enormous amount of life-science data available in repositories devoted to biomedical literature. These repositories lack the ability to support effective and accurate search. Using semantic technologies as the key to interoperation enables searching and processing biomedical literature in a more efficient way. However, emerging semantic applications take for granted specific knowledge that biomedical researchers may not have. This paper presents design principles for easy-to-use biomedical semantic applications by means of ontology-based annotations and faceted search. The proposed approach is backed by a usable prototype that shows the benefits of adding these principles to a biomedical digital library where identifying and searching information are critical for users who are not semantic Web experts.
18.
The development of the semantic web and social networks is undeniable in today's Internet world. The widespread nature of the semantic web has made it very challenging to assess trust in this field, and in recent years extensive research has been done on estimating trust on the semantic web. Since trust on the semantic web is a multidimensional problem, in this paper we use the parameters of social network authority, the authority value of page links, and semantic authority to assess trust. Due to the large size of the semantic network, we restrict the problem scope to clusters of semantic subnetworks, obtain the trust of each cluster's elements locally, and calculate the trust of outside resources from their local trusts and the trust of clusters in each other. According to the experimental results, the proposed method achieves an F-score of more than 79%, which is on average about 11.9% higher than the Eigen, Tidal, and centralised trust methods. The mean error of the proposed method is 12.936, on average 9.75% lower than the Eigen and Tidal trust methods.
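The cluster-based aggregation can be sketched in a few lines. The multiplicative weighting, the cluster names, and all the trust values below are invented assumptions; the paper's actual combination of social, link, and semantic authority is more involved.

```python
# Hedged sketch: trust in a resource outside one's own cluster is the
# local trust its home cluster assigns it, discounted by how much the
# observer's cluster trusts that home cluster.
def global_trust(inter_cluster, local_trust, my_cluster, resource, home):
    """Trust in `resource` (living in cluster `home`) seen from `my_cluster`."""
    if home == my_cluster:
        return local_trust[home][resource]
    return inter_cluster[my_cluster][home] * local_trust[home][resource]

inter_cluster = {"A": {"B": 0.8}, "B": {"A": 0.6}}  # cluster-to-cluster trust
local_trust = {"A": {"r1": 0.9}, "B": {"r2": 0.75}}  # within-cluster trust

print(global_trust(inter_cluster, local_trust, "A", "r2", "B"))  # 0.8 * 0.75
```

Keeping trust computation local to clusters and only exchanging cluster-level scores is what makes the approach tractable on a large semantic network.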
19.
20.
A semantic annotation method for Web applications (total citations: 1; self-citations: 0; citations by others: 1)
This paper proposes a semantic annotation method to support users' browsing activities on the Web. It employs semantic transcoding based on a reference-ontology transformation technique, which can extract features from annotated resources of the same semantic type. A user-intention model is built from the patterns of the acquired annotated resources, and probabilistic methods are used to identify user intentions. A heuristic function then quantifies the similarity between a specific user intention and candidate resources, so that users can obtain relevant information while browsing semantically heterogeneous Web information spaces.