20 similar documents found (search time: 125 ms)
1.
2.
Grouping video content into semantic segments and classifying semantic scenes into different types are crucial processes for content-based video organization, management, and retrieval. In this paper, a novel approach to automatically segmenting scenes and representing them semantically is proposed. First, video shots are detected using a rough-to-fine algorithm. Second, key-frames within each shot are selected adaptively using hybrid features, and redundant key-frames are removed by template matching. Third, spatio-temporally coherent shots are clustered into the same scene based on the temporal constraints of video content and the visual similarity between shot activities. Finally, based on a thorough analysis of the typical characteristics of continuously recorded videos, scene content is represented semantically to satisfy human demands in video retrieval. The proposed algorithm has been evaluated on films and TV programs of various genres. Promising experimental results show that the proposed method enables efficient retrieval of interesting video content.
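The third step, clustering temporally adjacent, visually similar shots into scenes, can be sketched as a greedy pass over the shot sequence. This is a simplified illustration, not the paper's actual algorithm; the 1-D features, similarity function, and threshold are all assumptions.

```python
def group_shots_into_scenes(shot_features, threshold, sim):
    """Greedy scene grouping under a temporal constraint: a shot joins
    the current scene only if it is sufficiently similar to the
    previous shot; otherwise a new scene starts."""
    if not shot_features:
        return []
    scenes = [[0]]
    for i in range(1, len(shot_features)):
        if sim(shot_features[i - 1], shot_features[i]) >= threshold:
            scenes[-1].append(i)
        else:
            scenes.append([i])
    return scenes

# Toy 1-D "visual features"; similarity = 1 - absolute difference.
visual_sim = lambda a, b: 1.0 - abs(a - b)
scenes = group_shots_into_scenes([0.10, 0.15, 0.90], 0.8, visual_sim)
```

With these toy values, the first two shots fall into one scene and the third starts a new one.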
Yuncai Liu
3.
As the latest stage in the evolution of learning and training, e-Learning is expected to provide intelligent functionality not only for processing multimedia education resources but also for supporting context-sensitive pedagogical processes. Towards an integrated solution for intelligent multimedia e-Learning, this paper presents a context-aware knowledge management framework named ConKMeL. The proposed framework features a semantic, context-based approach for representing and integrating information and knowledge in e-Learning. Requirement analysis in university e-Learning environments shows that knowledge communication usually occurs in a hybrid mode across different conceptual levels. Based on this observation, a multi-layer contextual knowledge representation model called KG (knowledge graph) is presented. Corresponding key development issues, such as context-based knowledge retrieval and logical knowledge interpretation, are discussed. On the application side, a scenario-based learning case study demonstrates the concepts and techniques developed in the ConKMeL framework.
Alain Mille
4.
XFlavor: providing XML features in media representation (total citations: 1; self-citations: 1; other: 0)
We present XFlavor, a framework for providing an XML representation of multimedia data. XFlavor can convert multimedia data back and forth between binary and XML representations. Compared to bitstreams, XML documents are easier to access and manipulate; consequently, the development of multimedia processing software is greatly simplified, as one generic XML parser can be used to read and write different types of data in XML form.
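The binary-to-XML round trip that XFlavor enables can be illustrated with a toy fixed-layout header. The 8-byte width/height format below is invented for illustration; XFlavor itself derives such conversions from a Flavor syntax description rather than hand-written code.

```python
import struct
import xml.etree.ElementTree as ET

def header_to_xml(blob):
    """Decode a toy binary header (two big-endian 32-bit unsigned ints:
    width, height) into an XML element."""
    width, height = struct.unpack(">II", blob)
    root = ET.Element("header")
    ET.SubElement(root, "width").text = str(width)
    ET.SubElement(root, "height").text = str(height)
    return root

def xml_to_header(root):
    """Re-encode the XML form back into the same binary layout."""
    return struct.pack(">II",
                       int(root.find("width").text),
                       int(root.find("height").text))

blob = struct.pack(">II", 640, 480)
xml_elem = header_to_xml(blob)
```

Once data is in XML form, any generic XML tool can inspect or edit it before it is re-encoded.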
Alexandros Eleftheriadis
5.
Despite significant improvements in video data retrieval, no system has yet been developed that can adequately respond to a user's query. Typically, the user has to refine the query many times and view the results until the expected videos are eventually retrieved from the database. The complexity of video data and poorly structured user queries aggravate the retrieval process. Most previous research in this area has focused on retrieval based on low-level features. Managing imprecise queries using semantic (high-level) content is no easier than querying on low-level features, owing to the absence of a proper continuous distance function. We provide a method to help users search for clips and videos of interest in video databases. Video clips are classified as interesting or uninteresting based on user browsing. The attribute values of clips are classified by commonality, presence, and frequency within each of the two groups, and are used to compute the relevance of each clip to the user's query. In this paper, we present an intelligent query-structuring system, called I-Quest, that ranks clips based on user browsing feedback in situations where generating a template from the sets of interesting and uninteresting clips is impossible or yields poor results.
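The commonality/frequency idea can be sketched as scoring an attribute value by how often it occurs among clips the user found interesting versus uninteresting. This is a hypothetical simplification, not I-Quest's actual ranking formula; each clip is modeled here as a set of attribute values.

```python
def attribute_relevance(interesting, uninteresting, value):
    """Frequency of an attribute value among interesting clips minus
    its frequency among uninteresting clips; positive scores favor
    the value, negative scores penalize it."""
    def freq(clips):
        return sum(value in clip for clip in clips) / len(clips) if clips else 0.0
    return freq(interesting) - freq(uninteresting)

# Toy clips tagged with attribute values, split by browsing feedback.
liked = [{"action", "day"}, {"action", "night"}]
skipped = [{"night"}]
score = attribute_relevance(liked, skipped, "action")
```

An attribute common in the liked group but absent from the skipped group scores highest, so clips carrying it rank first.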
Ramazan Savaş Aygün (corresponding author)
6.
XML plays an important role as the standard language for representing structured data for the traditional Web, and hence many
Web-based knowledge management repositories store data and documents in XML. If semantics about the data are formally represented
in an ontology, then it is possible to extract knowledge: This is done as ontology definitions and axioms are applied to XML
data to automatically infer knowledge that is not explicitly represented in the repository. Ontologies also play a central
role in realizing the burgeoning vision of the semantic Web, wherein data will be more sharable because their semantics will
be represented in Web-accessible ontologies. In this paper, we demonstrate how an ontology can be used to extract knowledge
from an exemplar XML repository of Shakespeare’s plays. We then implement an architecture for this ontology using de facto
languages of the semantic Web including OWL and RuleML, thus preparing the ontology for use in data sharing. It has been predicted
that the early adopters of the semantic Web will develop ontologies that leverage XML, provide intra-organizational value such as knowledge extraction capabilities independent of the semantic Web, and have the potential for inter-organizational
data sharing over the semantic Web. The contribution of our proof-of-concept application, KROX, is that it serves as a blueprint
for other ontology developers who believe that the growth of the semantic Web will unfold in this manner.
Henry M. Kim
7.
Traditional content-based music retrieval systems retrieve a specific music object similar to what a user has requested. However, there is also a need for category search, which retrieves a whole category of music objects sharing a common semantic concept. The notion of category in content-based music retrieval is subjective and dynamic. Therefore, this paper investigates a relevance feedback mechanism for category search of polyphonic symbolic music based on semantic concept learning. To account for both global and local properties of music objects, a segment-based music object modeling approach is presented. Furthermore, to discover the user's semantic concept in terms of discriminative features of discriminative segments, a concept learning mechanism based on data mining techniques is proposed to find the discriminative characteristics between relevant and irrelevant objects. Moreover, three strategies for returning music objects for user relevance judgment, Most-Positive, Most-Informative, and Hybrid, are investigated. Finally, comparative experiments are conducted to evaluate the effectiveness of the proposed relevance feedback mechanism. Experimental results show that, on a database of 215 polyphonic music objects, 60% average precision can be achieved with the proposed relevance feedback mechanism.
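The Most-Positive and Most-Informative return strategies can be sketched as two selection rules over the learner's relevance scores. The toy scores below are assumptions; in the paper the scores come from its concept-learning step.

```python
def most_positive(scores, k):
    """Return the k objects with the highest relevance score, i.e.
    the ones the current concept rates as most relevant."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def most_informative(scores, k):
    """Return the k objects whose score is closest to the 0.5
    decision boundary, i.e. those the learner is least sure about."""
    return sorted(scores, key=lambda o: abs(scores[o] - 0.5))[:k]

scores = {"m1": 0.92, "m2": 0.55, "m3": 0.10}
```

A Hybrid strategy, as the name suggests, would mix the two lists, showing the user some confident hits and some uncertain objects per feedback round.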
Fang-Fei Kuo
8.
9.
Ha Manh Tran, Christoph Lange, Georgi Chulkov, Jürgen Schönwälder, Michael Kohlhase. Journal of Network and Systems Management, 2009, 17(3): 285-308
The Web has become an important knowledge source for resolving system installation problems and for working around software
bugs. In particular, web-based bug tracking systems offer large archives of useful troubleshooting advice. However, searching
bug tracking systems can be time-consuming, since generic search engines do not take advantage of the semi-structured knowledge
recorded in bug tracking systems. We present work towards a semantics-based bug search system which tries to take advantage
of the semi-structured data found in many widely used bug tracking systems. We present a study of bug tracking systems and
we describe how to crawl them in order to extract semi-structured data. We describe a unified data model to store bug tracking
data. The model has been derived from the analysis of the most popular systems. Finally, we describe how the crawled data
can be fed into a semantic search engine to facilitate semantic search.
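A unified data model for crawled bug data might look like the following dataclass. The field names are illustrative assumptions, not the schema the paper actually derives from popular trackers.

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """Minimal unified record for entries crawled from heterogeneous
    bug tracking systems; one instance per crawled bug."""
    tracker: str                 # source system, e.g. "bugzilla", "trac"
    bug_id: str                  # identifier within that tracker
    summary: str                 # short description of the problem
    status: str = "open"         # normalized lifecycle state
    comments: list = field(default_factory=list)  # troubleshooting thread

bug = BugReport(tracker="bugzilla", bug_id="1234",
                summary="crash on startup")
bug.comments.append("reproduced on version 2.6")
```

Normalizing every tracker's records into one such shape is what lets a single semantic search engine index them all.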
Michael Kohlhase
10.
Capturing latent structural and semantic properties in semi-structured documents (e.g., XML documents) is crucial for improving the performance of related document analysis tasks. The Structured Link Vector Model (SLVM) is a recently proposed representation for modeling semi-structured documents. It uses an element similarity matrix to capture the latent relationships between XML elements, the building blocks of an XML document. In this paper, instead of defining the element similarity matrix heuristically, we propose to learn it using a machine learning approach. In addition, we incorporate term semantics into SLVM using latent semantic indexing to enhance model accuracy, while preserving the learnability of the element similarity matrix. For performance evaluation, we applied the learned similarity to k-nearest-neighbor search and similarity-based clustering, and tested the performance on two different XML document collections. The SLVM obtained via learning was found to significantly outperform the conventional Vector Space Model and edit-distance-based methods. Moreover, the similarity matrix, obtained as a by-product, provides higher-level knowledge about the semantic relationships between the XML elements.
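The role of the element similarity matrix can be sketched as a bilinear form between two element-indexed document vectors. This is a toy illustration; SLVM's actual document representations are richer (term-by-element) and the matrix is learned, not hand-set as here.

```python
def bilinear_similarity(v1, v2, M):
    """Compute v1ᵀ · M · v2, where M[i][j] encodes the similarity
    between XML elements i and j. With M = identity this reduces
    to a plain dot product, i.e. elements treated as unrelated."""
    n = len(v1)
    return sum(v1[i] * M[i][j] * v2[j]
               for i in range(n) for j in range(n))

# Two documents over elements (title, subtitle); M says the two
# elements are partially interchangeable.
M = [[1.0, 0.3],
     [0.3, 1.0]]
s = bilinear_similarity([1.0, 0.0], [0.0, 1.0], M)
```

With an identity matrix these two documents would score 0; the off-diagonal entry lets content in related elements still contribute similarity.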
Xiaoou Chen
11.
RRSi: indexing XML data for proximity twig queries (total citations: 2; self-citations: 2; other: 0)
Twig query pattern matching is a core operation in XML query processing. Indexing XML documents for twig query processing
is of fundamental importance to supporting effective information retrieval. In practice, many XML documents on the web are
heterogeneous and have their own formats; documents describing relevant information can possess different structures. Therefore, documents of interest whose structures are similar, but not identical, to a user query are often missed. In this
paper, we propose the RRSi, a novel structural index designed for structure-based query lookup on heterogeneous sources of XML documents supporting
proximate query answers. The index avoids the unnecessary processing of structurally irrelevant candidates that might show
good content relevance. An optimized version of the index, oRRSi, is also developed to further reduce both space requirements and computational complexity. To our knowledge, these structural
indexes are the first to support proximity twig queries on XML documents. The results of our preliminary experiments show
that RRSi and oRRSi based query processing significantly outperform previously proposed techniques in XML repositories with structural heterogeneity.
Vincent T. Y. Ng
12.
A comprehensive method for movie abstraction is developed in this research, with applications in fast movie content exploration, indexing, browsing, and skimming. Most current approaches rely heavily on domain-specific knowledge or models to identify and extract the determining scenes of a given movie; however, the extracted segments are often isolated, presenting a fragmented outline of the original. Our proposed method fuses simple audiovisual features and measures the "tempos" of a movie directly, especially long-term tempos. These tempos form a curve that captures the high-level semantics of a movie, indicating the events of interest, which we call "story intensity." Through tempo, the proposed algorithm provides a natural way to segment a movie into manageable parts. As our experimental results demonstrate, the condensed skimming clips efficiently extract semantic content covering the most interesting and informative parts of the original movie.
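The fusion and long-term smoothing behind a tempo curve can be sketched as a weighted combination of per-shot motion and audio energy followed by a centered moving average. The equal weighting and window size are assumptions, not the paper's parameters.

```python
def tempo_curve(motion, audio, alpha=0.5, window=3):
    """Fuse per-shot motion and audio energy into one tempo value per
    shot, then smooth with a centered moving average so long-term
    trends (story intensity) dominate shot-level noise."""
    fused = [alpha * m + (1 - alpha) * a for m, a in zip(motion, audio)]
    half = window // 2
    smoothed = []
    for i in range(len(fused)):
        seg = fused[max(0, i - half): i + half + 1]
        smoothed.append(sum(seg) / len(seg))
    return smoothed

# Quiet opening followed by an intense stretch.
curve = tempo_curve([0.0, 0.0, 4.0, 4.0], [0.0, 0.0, 4.0, 4.0])
```

Peaks and rises in the smoothed curve are then natural candidates for scene boundaries and skimming segments.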
Chih-Hung Kuo
13.
The technique of relevance feedback has been introduced to content-based 3D model retrieval; however, two essential issues that affect retrieval performance have not been addressed. In this paper, a novel relevance feedback mechanism is presented that effectively exploits the strengths of different feature vectors and addresses the problems of small sample size and asymmetry. During the retrieval process, the proposed method takes the user's feedback details as relevance information about the query model and then dynamically updates two important parameters of each feature vector, narrowing the gap between high-level semantic knowledge and low-level object representation. Experiments on the publicly available Princeton Shape Benchmark (PSB) database show that the proposed approach not only precisely captures the user's semantic knowledge, but also significantly improves the performance of 3D model retrieval. Compared with three state-of-the-art query refinement schemes for 3D model retrieval, it provides superior retrieval effectiveness with only a few rounds of relevance feedback, as measured by several standard metrics.
Biao Leng
14.
John Dilworth. Minds and Machines, 2008, 18(4): 527-546
A novel semantic naturalization program is proposed. Its three main differences from informational semantics approaches are
as follows. First, it makes use of a perceptually based, four-factor interactive causal relation in place of a simple nomic
covariance relation. Second, it does not attempt to globally naturalize all semantic concepts, but instead it appeals to a
broadly realist interpretation of natural science, in which the concept of propositional truth is off-limits to naturalization
attempts. And third, it treats all semantic concepts as being purely abstract, so that concrete cognitive states are only
indexed by them rather than instantiating them.
John Dilworth
15.
Guoray Cai. GeoInformatica, 2007, 11(2): 217-237
Human interactions with geographical information are contextualized by problem-solving activities which endow meaning to geospatial
data and processing. However, existing spatial data models have not taken this aspect of semantics into account. This paper
extends spatial data semantics to include not only the contents and schemas, but also the contexts of their use. We specify
such a semantic model in terms of three related components: activity-centric context representation, contextualized ontology
space, and context-mediated semantic exchange. Contextualizing spatial data semantics allows the same underlying data to take multiple semantic forms and disambiguates spatial concepts based on localized contexts. We demonstrate how such a
semantic model supports contextualized interpretation of vague spatial concepts during human–GIS interactions. We employ conversational
dialogue as the mechanism to perform collaborative diagnosis of context and to coordinate sharing of meaning across agents
and data sources.
Guoray Cai
16.
Relevance feedback has recently emerged as a solution to the problem of improving the retrieval performance of an image retrieval
system based on low-level information such as color, texture and shape features. Most of the relevance feedback approaches
limit the utilization of the user’s feedback to a single search session, performing a short-term learning. In this paper we
present a novel approach for short and long term learning, based on the definition of an adaptive similarity metric and of
a high level representation of the images. For short-term learning, the relevant and non-relevant information given by the
user during the feedback process is employed to create a positive and a negative subspace of the feature space. For long-term
learning, the feedback history of all the users is exploited to create and update a representation of the images which is
adopted for improving retrieval performance and progressively reducing the semantic gap between low-level features and high-level
semantic concepts. The experimental results show that the proposed method outperforms many other state-of-the-art methods in
the short-term learning, and demonstrate the efficacy of the representation adopted for the long-term learning.
Annalisa Franco
17.
Using Wikipedia knowledge to improve text classification (total citations: 7; self-citations: 7; other: 0)
Text classification has been widely used to assist users with the discovery of useful information from the Internet. However,
traditional classification methods are based on the “Bag of Words” (BOW) representation, which only accounts for term frequency
in the documents, and ignores important semantic relationships between key terms. To overcome this problem, previous work
attempted to enrich text representation by means of manual intervention or automatic document expansion. The achieved improvement
is unfortunately very limited, due to the poor coverage capability of the dictionary, and to the ineffectiveness of term expansion.
In this paper, we automatically construct a thesaurus of concepts from Wikipedia. We then introduce a unified framework to
expand the BOW representation with semantic relations (synonymy, hyponymy, and associative relations), and demonstrate its
efficacy in enhancing previous approaches for text classification. Experimental results on several data sets show that the
proposed approach, integrated with the thesaurus built from Wikipedia, can achieve significant improvements with respect to
the baseline algorithm.
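Expanding a bag-of-words with thesaurus relations can be sketched as adding each term's related concepts at a reduced weight. The down-weighting factor and the toy thesaurus are assumptions; the paper builds its thesaurus automatically from Wikipedia and distinguishes synonymy, hyponymy, and associative relations.

```python
def expand_bow(bow, thesaurus, weight=0.5):
    """Add related concepts to a bag-of-words at a fraction of the
    original term's count, leaving the original counts untouched."""
    expanded = dict(bow)
    for term, count in bow.items():
        for related in thesaurus.get(term, []):
            expanded[related] = expanded.get(related, 0.0) + weight * count
    return expanded

# Hypothetical thesaurus entry relating "car" to two concepts.
thesaurus = {"car": ["automobile", "vehicle"]}
vec = expand_bow({"car": 2, "red": 1}, thesaurus)
```

After expansion, a document mentioning only "car" can match one mentioning only "automobile", which plain BOW term frequency would miss.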
Pu Wang
18.
Jason J. Jung. Neural Computing & Applications, 2009, 18(3): 213-221
Conventional focused crawling systems have difficulty with contextual information retrieval in semantic web environments. To address this, we propose a cooperative crawler platform based on an evolution strategy to build the semantic structure (i.e., local ontologies) of web spaces. Multiple crawlers discover semantic instances (i.e., ontology fragments) from annotated resources in a web space, and a centralized meta-crawler incrementally aggregates the semantic instances sent by the crawlers. To do this, we exploit a similarity-based ontology matching algorithm to compute the semantic fitness of a population, i.e., the sum of all possible semantic similarities between the semantic instances. As a result, we can efficiently obtain the best mapping condition (i.e., the one maximizing the semantic fitness) of the estimated semantic structures. This paper makes two significant contributions: (1) reconciling semantic conflicts between multiple crawlers, and (2) adapting to evolving semantic structures of web spaces over time.
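The semantic fitness, the sum of pairwise similarities between discovered ontology fragments, can be sketched with Jaccard similarity over concept-label sets. Jaccard here stands in for the paper's ontology-matching similarity, and the fragments are invented for illustration.

```python
def jaccard(a, b):
    """Set-overlap similarity: |a ∩ b| / |a ∪ b|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def semantic_fitness(instances, sim=jaccard):
    """Sum of similarities over all unordered pairs of semantic
    instances; higher values mean a more coherent aggregation."""
    return sum(sim(instances[i], instances[j])
               for i in range(len(instances))
               for j in range(i + 1, len(instances)))

# Ontology fragments as sets of concept labels from three crawlers.
fragments = [{"Person", "Author"}, {"Author", "Paper"}, {"Paper"}]
fitness = semantic_fitness(fragments)
```

The evolution strategy would then prefer mapping conditions whose aggregated fragments maximize this sum.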
Jason J. Jung
19.
Research in content-based image retrieval has been around for over a decade. While the research community has successfully
exploited content features such as color and texture, finding an effective shape representation and measure remains a challenging
task. The shape feature is particularly crucial for the success of content-based systems as it carries meaningful semantics
of the objects of interest and fits more naturally into humans’ perception of similarity. In this paper, we present our approach
to use the shape feature for image retrieval. First, we introduce an effective image decomposition method called Crawling
Window (CW) to distinguish the outline of each object in the image. Second, to represent each individual shape, we propose
a novel representation model called component Distance Distribution Function and its measure. Traditionally, an object is
represented by a set of points on the shape’s contour. Our idea is to first compute the distance between each point and the
center of the object. The distance values for all points form a signal, which we call Distance Distribution Function (DDF).
Each DDF is then divided into component DDFs (cDDF) by taking local signal information into account. Finally, a transformation
technique is employed to generate the feature vector for each cDDF. All vectors from the cDDFs in circular order construct
the final shape representation. The model is invariant to position, scaling, rotation and starting point. The similarity measure
model based on the new representation is also introduced. Our extensive experiments show that our models are more effective
than the existing representation model, at both the shape level and the image level.
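The Distance Distribution Function itself is simple to sketch: the distance from each contour point to the shape's centroid, read in contour order. Taking the centroid as the plain point average is an assumption of this sketch.

```python
import math

def ddf(contour):
    """Distance Distribution Function: distance from each contour
    point (x, y) to the centroid of all points, forming a 1-D
    signal in contour order."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    return [math.hypot(x - cx, y - cy) for x, y in contour]

# Corners of a unit square are equidistant from its centre,
# so the DDF is constant.
square_ddf = ddf([(0, 0), (1, 0), (1, 1), (0, 1)])
```

Segmenting this signal by local information would then yield the component DDFs (cDDFs) the paper transforms into feature vectors.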
Xiaofang Zhou
20.
A lack of design information can be a significant barrier for systems developers when developing and reusing a component.
This paper tackles this problem by presenting and exemplifying the conceptual framework of component context and its hypertext
representation in a metaCASE environment. It discusses the linking of contextual knowledge to components in systems analysis
and design. The contextual knowledge includes the conceptual dependencies of component definition, reuse, and implementation,
as well as the reasoning and rationale behind design and reuse processes. We also illustrate the hypertext approach to contextual
knowledge representation that enables designers to express, record, explore, recognize, and negotiate their shared context
within a metaCASE environment.
Janne Kaipala