Similar Documents
20 similar documents found (search time: 15 ms)
1.
Hypermedia composite templates define generic structures of nodes and links to be added to a document composition, providing spatio-temporal synchronization semantics. This paper presents EDITEC, a graphical editor for hypermedia composite templates based on the XTemplate 3.0 language. The editor was designed to offer a user-friendly visual approach, and introduces a new method with several options for representing iteration structures graphically, so that a given behavior can be applied to a set of generic document components. A multi-view environment gives the user complete control over the composite template during the authoring process. Composite templates can be used in NCL documents to embed spatio-temporal semantics into NCL contexts. NCL is the standard declarative language for producing interactive applications in the Brazilian digital TV system and ITU H.761 IPTV services. Hypermedia composite templates could also be used in other hypermedia authoring languages, offering new types of compositions with predefined semantics.

2.
Automatic semantic annotation of real-world web images   (total citations: 1; self-citations: 0; citations by others: 1)
As the number of web images increases at a rapid rate, searching them semantically presents a significant challenge. Many raw images are constantly uploaded with little meaningful direct annotation of semantic content, limiting their search and discovery. In this paper, we present a semantic annotation technique based on image parametric dimensions and metadata. Using decision trees and rule induction, we develop a rule-based approach that formulates explicit annotations for images fully automatically, so that a semantic query such as "sunset by the sea in autumn in New York" can be answered and indexed purely by machine. Our system is evaluated quantitatively using more than 100,000 web images. Experimental results indicate that the approach delivers highly competent performance, attaining good recall and precision rates, sometimes over 80%. It enables a degree of semantic richness to be associated with images automatically which previously could only be achieved manually.
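The abstract above describes inducing explicit if-then annotation rules from image parameters and metadata. A minimal hand-rolled sketch of what such induced rules might look like follows; the feature names, thresholds, and labels are invented for illustration (the paper derives rules automatically with decision trees rather than writing them by hand):

```python
def annotate(image):
    """Map parametric dimensions and metadata to semantic labels via rules.

    These rules are illustrative stand-ins for automatically induced ones.
    """
    labels = []
    # Warm-toned image captured in the early evening -> "sunset".
    if 17 <= image["capture_hour"] <= 20 and image["mean_red"] > image["mean_blue"]:
        labels.append("sunset")
    # Blue-dominated image geotagged near a coastline -> "sea".
    if image["mean_blue"] > 0.5 and image["gps_near_coast"]:
        labels.append("sea")
    # Capture month in September-November -> "autumn".
    if image["capture_month"] in (9, 10, 11):
        labels.append("autumn")
    return labels

example = {"capture_hour": 18, "mean_red": 0.7, "mean_blue": 0.6,
           "gps_near_coast": True, "capture_month": 10}
tags = annotate(example)
```

Indexing the emitted labels is then enough to answer a compound query such as "sunset by the sea in autumn" without any manual annotation.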

3.
Nowadays, a huge amount of textual data comes from online social communities like Twitter, or from encyclopedic sources such as Wikipedia and similar platforms. This Big Data era has created novel challenges in making sense of large data stores and in efficiently finding specific information within them. In a more domain-specific scenario such as the management of legal documents, the extraction of semantic knowledge can help domain engineers find relevant information more rapidly and can assist in constructing application-based legal ontologies. In this work, we address the problem of automatically extracting structured knowledge to improve semantic search and ontology creation over textual databases. To achieve this goal, we propose an approach that first relies on well-known Natural Language Processing techniques such as Part-Of-Speech tagging and syntactic parsing. We then transform this information into generalized features that aim to capture the surrounding linguistic variability of the target semantic units. The resulting feature data are finally fed into a Support Vector Machine classifier that computes a model to automate the semantic annotation. We first tested our technique on the problem of automatically extracting semantic entities and involved objects from legal texts. We then focused on the identification of hypernym relations and definitional sentences, demonstrating the validity of the approach across different tasks and domains.
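The generalization step described above replaces surface words with linguistic context before classification. A minimal sketch of such window-based POS features follows; the POS tags are supplied by hand here (a real pipeline would obtain them from a tagger and parser, and feed the vectorized features to an SVM):

```python
def window_features(pos_tags, i, size=2):
    """Generalized features for token i: its POS tag plus a +/-size window
    of neighbouring tags, with sentence-boundary padding."""
    feats = {"pos": pos_tags[i]}
    for off in range(1, size + 1):
        feats[f"pos-{off}"] = pos_tags[i - off] if i - off >= 0 else "<s>"
        feats[f"pos+{off}"] = pos_tags[i + off] if i + off < len(pos_tags) else "</s>"
    return feats

# "The buyer pays the agreed fee" -> hand-assigned POS tags (illustrative).
tags = ["DT", "NN", "VBZ", "DT", "JJ", "NN"]
feats = window_features(tags, 2)   # features for the verb "pays"
```

Because the features mention tags rather than words, the classifier can recognize the same syntactic pattern in sentences it has never seen.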

4.
《Knowledge》2007,20(4):373-381
The aim of this paper is to support user browsing over semantically heterogeneous information spaces. In advance of a user's explicit actions, his search context should be predicted from the locally annotated resources in his access histories. We therefore exploit a semantic transcoding method and measure the relevance between the estimated model of user intention and candidate resources in web spaces. For these experiments, we simulated a comparison-shopping scenario on a test bed of twelve online stores in which images are annotated with semantically heterogeneous metadata.

5.
6.
To address the low efficiency of existing ontology-based semantic Web service discovery methods, a new discovery method is proposed. Building on a comprehensive similarity between semantic Web services computed from ontology distance, the method first pre-clusters the service set with the AGNES agglomerative clustering algorithm from data mining, forming a number of service clusters; the discovery algorithm then uses a similarity threshold to locate one cluster and performs matching only within it, reducing the search space. Theoretical analysis and simulation results show that the method preserves discovery accuracy while markedly improving discovery efficiency.
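The cluster-then-match strategy in the abstract above can be sketched in a few lines. The service names, feature vectors, and cosine similarity below are invented stand-ins (the paper clusters with AGNES over an ontology-distance similarity); the point is that matching is confined to the most similar cluster:

```python
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# Pre-clustered service feature vectors (toy data).
clusters = {
    "payment": {"payService": [1.0, 0.1], "billingService": [0.9, 0.2]},
    "weather": {"forecastService": [0.1, 1.0]},
}
centroids = {name: [sum(col) / len(svcs) for col in zip(*svcs.values())]
             for name, svcs in clusters.items()}

def discover(query_vec, threshold=0.8):
    """Locate the most similar cluster, then match only inside it."""
    best = max(centroids, key=lambda n: cosine(query_vec, centroids[n]))
    return sorted(s for s, v in clusters[best].items()
                  if cosine(query_vec, v) >= threshold)

hits = discover([1.0, 0.15])
```

Only one cluster is searched per query, which is where the reduction in search space comes from.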

7.
Context: Semantically annotating web services is gaining attention as an important aspect of supporting the automatic matchmaking and composition of web services. The availability of well-known, agreed-upon ontologies and tools for the semantic annotation of web services is therefore becoming a key concern in helping the diffusion of semantic web services. Objective: The objective of this systematic literature review is to summarize the current state of the art in supporting the semantic annotation of web services by answering a set of research questions. Method: The review follows a predefined procedure that involves automatically searching well-known digital libraries. As a result, 35 primary studies were identified as relevant. A manual search led to 9 additional primary studies that were not found during the automatic search. The required information was extracted from these 44 studies against the selected research questions and reported. Results: The review identified several approaches for semantically annotating functional and non-functional aspects of web services; however, many of these approaches are either not validated or validated in ways that lack credibility. Conclusion: We believe a substantial amount of work remains to improve the current state of research in supporting semantic web services.

8.
To better capture the relationships among labels in image semantic annotation, relationships between labels are first established by analyzing the annotations of already-labeled images. On this basis, the notion of thesaurus querying is introduced into image semantic annotation, and a thesaurus-query-based annotation method is proposed that unifies the annotation problem, the thesaurus query, and the image's semantic relations within a single framework. Experiments on the Corel image database show that the proposed method is effective and clearly improves the annotation rate.

9.
Automatic semantic annotation of 3D models aims to produce the set of labels that best describes a model, and is a key step in text-based 3D model retrieval. Because of the semantic gap, annotations obtained by similarity matching leave room for improvement. To exploit a large number of unlabeled samples during automatic annotation, given only a limited number of user-provided models and their labels, a semi-supervised metric learning method for automatic semantic annotation of 3D models is proposed. The method first uses graph-based semi-supervised learning to extend the labeled model set, assigning a semantic confidence to the models representing each semantic label in the extended set; an improved relevant component analysis then learns a Mahalanobis distance metric, and a multi-label annotation strategy is formed from the learned distance and the semantic confidences. Experiments on the Princeton Shape Benchmark (PSB) dataset show that the method exploits a large number of unlabeled samples in the annotation process and achieves good annotation performance.
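The learned metric above is a Mahalanobis distance. A minimal 2-D version follows, with a hand-picked diagonal metric matrix M standing in for the one that relevant component analysis would learn; it shows how the metric can stretch one feature axis and shrink another:

```python
import math

def mahalanobis(x, y, M):
    """sqrt((x - y)^T M (x - y)) for 2-D points; M is a 2x2 metric matrix."""
    d = (x[0] - y[0], x[1] - y[1])
    q = (d[0] * (M[0][0] * d[0] + M[0][1] * d[1])
         + d[1] * (M[1][0] * d[0] + M[1][1] * d[1]))
    return math.sqrt(q)

M = [[2.0, 0.0], [0.0, 0.5]]                   # illustrative learned metric
d1 = mahalanobis((0.0, 0.0), (1.0, 0.0), M)    # unit step along axis 0
d2 = mahalanobis((0.0, 0.0), (0.0, 1.0), M)    # unit step along axis 1
```

Two displacements with equal Euclidean length get different distances under M, which is how the learned metric pulls semantically related models closer together.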

10.
This paper presents a novel method for semantic annotation and search of a target corpus using several knowledge resources (KRs). This method relies on a formal statistical framework in which KR concepts and corpus documents are homogeneously represented using statistical language models. Under this framework, we can perform all the necessary operations for an efficient and effective semantic annotation of the corpus. Firstly, we propose a coarse tailoring of the KRs with respect to the target corpus, with the main goal of reducing the ambiguity of the annotations and their computational overhead. Then, we propose the generation of concept profiles, which allow measuring the semantic overlap of the KRs as well as performing a finer tailoring of them. Finally, we propose how to semantically represent documents and queries in terms of the KR concepts and the statistical framework to perform semantic search. Experiments have been carried out with a corpus about web resources which includes several Life Sciences catalogs and Wikipedia pages related to web resources in general (e.g., databases, tools, services, etc.). Results demonstrate that the proposed method is more effective and efficient than state-of-the-art methods relying on either context-free annotation or keyword-based search.
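The homogeneous representation above treats both concepts and documents as unigram language models. A toy sketch follows; the vocabulary, concept names, and Laplace smoothing choice are illustrative only, not the paper's actual tailoring procedure:

```python
from collections import Counter
import math

def unigram_lm(tokens, vocab, alpha=1.0):
    """Laplace-smoothed unigram language model over a fixed vocabulary."""
    counts = Counter(tokens)
    total = len(tokens) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def log_likelihood(doc_tokens, lm):
    """Score a document under a concept's language model."""
    return sum(math.log(lm[w]) for w in doc_tokens)

vocab = {"protein", "database", "tool", "web", "sequence"}
concept_lms = {
    "Database": unigram_lm(["database", "web", "database"], vocab),
    "Sequence": unigram_lm(["sequence", "protein", "sequence"], vocab),
}
doc = ["protein", "sequence", "database"]
best = max(concept_lms, key=lambda c: log_likelihood(doc, concept_lms[c]))
```

Annotation then amounts to picking the concepts whose models best explain the document, and semantic search ranks documents by the same likelihoods.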

11.
12.
Jin Cong, Sun Qing-Mei, Jin Shu-Wei 《Multimedia Tools and Applications》2019,78(9):11815-11834
Multimedia Tools and Applications - Automated image annotation (AIA) is an important issue in computer vision and pattern recognition, and plays an extremely important role in retrieving...

13.
In this paper, we study the reliability-aware synthesis problem of composing available services automatically while guaranteeing that the composed result satisfies a specification, such as temporal constraints on functionality and reliability, centered on a synthesis model for the mediator of web services composition (CSM). The approach focuses on handling attributes and state relations, permitting users and services to operate over them, i.e., to read/write their data values and compare them according to a dense state order. We show that the reliability-aware synthesis problem for the specification is EXPTIME-complete, and we give an exponential-time algorithm (CSM-NSA) which, for a given formula ψ and a synthesis model, synthesizes available services in the library satisfying ψ over the synthesis model (if they exist) or responds that ψ is not satisfiable (otherwise). The specification ψ is a fragment of PCTL (probabilistic computation tree logic), obtained from ordinary CTL (computation tree logic) by replacing the EX, AX, EU and AU operators with their quantitative counterparts X>p, X=1, U>p, and U=1, respectively. We also provide a more efficient alternative to CSM-NSA: the heuristic synthesis algorithm CSM-HSA. Although HSA is incomplete, its answers are correct. Experiments show that the HSA algorithm solves the reliability-aware service synthesis problem effectively and efficiently.

14.
15.
16.
17.
Refinement in software engineering allows a specification to be developed in stages, with design decisions taken at earlier stages constraining the design at later stages. Refinement of complex data models is difficult for lack of a way to define constraints that can be progressively maintained over increasingly detailed refinements. Category theory provides a way of stating wide-scale constraints, and these constraints lead to a set of design guidelines that maintain them under increasing detail. Previous methods of refinement are essentially local, and the proposed method interferes little with those local methods. The result is particularly applicable to semantic web applications, where ontologies provide systems of more or less abstract constraints that participating systems must implement and therefore refine. With the approach of this paper, the concept of committing to an ontology carries much more force.

18.
Recently, genre collection and automatic genre identification for the web have attracted much attention. However, there is currently no genre-annotated corpus of web pages for which inter-annotator reliability has been established: existing corpora are either not tested for inter-annotator reliability or exhibit low inter-coder agreement. Annotation has also mostly been carried out by a small number of experts, raising concerns about the scalability of these annotation efforts and the transferability of the schemes to annotators outside these small expert groups. In this paper, we tackle these problems by using crowd-sourcing for genre annotation, leading to the Leeds Web Genre Corpus, the first web corpus that is demonstrably reliably annotated for genre and that can be easily and cost-effectively expanded using naive annotators. We also show that the corpus is diverse in source and topic.
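Inter-coder agreement of the kind discussed above is usually quantified with a chance-corrected statistic. A minimal Cohen's kappa for two genre annotators follows; the labels are toy data, not drawn from the Leeds Web Genre Corpus:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators over the same items."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

ann1 = ["news", "blog", "news", "shop", "news", "blog"]
ann2 = ["news", "blog", "news", "news", "news", "shop"]
kappa = cohens_kappa(ann1, ann2)
```

For many annotators, as in crowd-sourced annotation, a multi-rater statistic such as Fleiss' kappa plays the same role.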

19.
To address problems in Web service discovery such as semantic heterogeneity, complex logical reasoning, high implementation cost, and precision that needs further improvement, a Web service discovery method based on functional semantic annotation is proposed. A domain functional semantic model and an ontology mapping mechanism are defined, and services are annotated with functional semantics to enrich their semantic information. During discovery, semantic similarity matching is first performed on service functionality; services meeting a given threshold then undergo interface matching. Simulation experiments show that the proposed method maintains matching efficiency while effectively avoiding the semantic heterogeneity problem, improving precision by 34.1% over traditional methods.

20.
We capture student interactions in an e-learning community to construct a semantic web (SW), creating a collective meta-knowledge structure that guides students as they search the existing knowledge corpus. We use formal concept analysis (FCA) as a knowledge acquisition tool to process the students' virtual surfing trails, expressing and exploiting the dependencies between web pages to yield subsequent, more effective focused search results. This mirrors social navigation while bypassing the cumbersome manual annotation of web pages. We present our system KAPUST2 (Keeper and Processor of User Surfing Trails), which constructs from the captured student trails a conceptual lattice guiding student queries. We used KAPUST as e-learning software for an undergraduate class over two semesters. We show how the lattice evolved over the two semesters, improving its performance as we explored the relationship between 'kinds' of research assignments and the development of the e-learning semantic web. Course instructors monitored the evolution of the lattice, with interesting positive pedagogical consequences, which are also reported in this paper.
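The FCA step above builds concepts from a binary context relating pages to the terms on surfing trails. A tiny closure computation follows; the context data is invented, and a full system would enumerate all concepts to form the lattice rather than closing a single term set:

```python
# Binary context: page -> set of terms observed on the surfing trail (toy data).
context = {
    "page1": {"ontology", "semantic"},
    "page2": {"ontology", "lattice"},
    "page3": {"ontology", "semantic", "lattice"},
}

def extent(terms):
    """All pages that carry every term in the set."""
    return {p for p, ts in context.items() if terms <= ts}

def intent(pages):
    """All terms shared by every page in the set."""
    sets = [context[p] for p in pages]
    return set.intersection(*sets) if sets else set()

def concept(terms):
    """Close a term set into a formal concept (extent, intent)."""
    e = extent(set(terms))
    return sorted(e), sorted(intent(e))

pages, terms = concept({"semantic"})
```

Each such (pages, terms) pair is one node of the conceptual lattice; ordering the concepts by extent inclusion yields the lattice that guides student queries.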
