10 similar records found (search time: 140 ms)
1.
Automatic mineral identification using evolutionary computation technology is discussed. Thin sections of mineral samples
are photographed digitally using a computer-controlled rotating polarizer stage on a petrographic microscope. A suite of image
processing functions is applied to the images. Filtered image data for identified mineral grains is then selected for use
as training data for a genetic programming system, which automatically synthesizes computer programs that identify these grains.
The evolved programs use a decision-tree structure that compares the mineral image values with one another, resulting in a thresholding
analysis of the multi-dimensional colour and textural space of the mineral images.
Received: 18 October 1999 / Accepted: 20 January 2001
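The decision-tree structure described above can be sketched as nested threshold comparisons over per-grain colour and texture statistics. The feature names, thresholds, and mineral labels below are invented for illustration; the actual evolved programs operate on the paper's filtered image data.

```python
def classify_grain(features):
    """Toy example of the kind of thresholding decision tree a genetic
    programming system might evolve; thresholds and labels are hypothetical."""
    if features["hue_mean"] < 0.2:
        return "quartz" if features["texture_var"] < 0.05 else "biotite"
    return "plagioclase" if features["hue_mean"] < 0.6 else "olivine"

print(classify_grain({"hue_mean": 0.1, "texture_var": 0.01}))  # prints "quartz"
```

In the real system, such trees are synthesized automatically, with training data drawn from grains identified by a petrographer.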
2.
Heuristic and randomized optimization for the join ordering problem
Michael Steinbrunn Guido Moerkotte Alfons Kemper 《The VLDB Journal The International Journal on Very Large Data Bases》1997,6(3):191-208
Recent developments in database technology, such as deductive database systems, have given rise to the demand for new, cost-effective
optimization techniques for join expressions. In this paper, many different algorithms that compute approximate solutions for
optimizing join orders are studied, since traditional dynamic programming techniques are not appropriate for complex problems.
Two possible solution spaces, the space of left-deep and bushy processing trees, are evaluated from a statistical point of
view. The result is that the common limitation to left-deep processing trees is only advisable for certain join graph types.
Basically, optimizers from three classes are analysed: heuristic, randomized and genetic algorithms. Each one is extensively
scrutinized with respect to its working principle and its fitness for the desired application. It turns out that randomized
and genetic algorithms are well suited for optimizing join expressions. They generate solutions of high quality within a reasonable
running time. The benefits of heuristic optimizers, namely the short running time, are often outweighed by merely moderate
optimization performance.
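The flavour of the randomized optimizers studied in the paper can be sketched as iterative improvement over left-deep join orders: start from a random order and keep only moves that lower an estimated cost. The cost model and move set below are simplified placeholders, not the paper's actual models.

```python
import random

def cost(order, card):
    """Toy cost model: sum of intermediate-result sizes in a left-deep tree,
    treating card[r] as the size factor contributed by relation r."""
    size, total = 1.0, 0.0
    for r in order:
        size *= card[r]
        total += size
    return total

def iterative_improvement(relations, card, moves=500, seed=1):
    """Randomized optimizer sketch: random swap moves, keep improvements."""
    rng = random.Random(seed)
    order = list(relations)
    rng.shuffle(order)
    best = cost(order, card)
    for _ in range(moves):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        c = cost(order, card)
        if c < best:
            best = c
        else:
            order[i], order[j] = order[j], order[i]  # undo worsening move
    return order, best
```

Real randomized optimizers in this line of work (iterative improvement, simulated annealing, two-phase optimization) differ mainly in whether and when worsening moves are accepted.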
3.
In video processing, a common first step is to segment the videos into physical units, generally called shots. A shot is a video segment that consists of one continuous action. In general, these physical units need to be clustered
to form more semantically significant units, such as scenes, sequences, programs, etc. This is the so-called story-based video
structuring. Automatic video structuring is of great importance for video browsing and retrieval. The shots or scenes are
usually described by one or several representative frames, called key-frames. Viewed from a higher level, key frames of some shots might be redundant in terms of semantics. In this paper, we propose
automatic solutions to the problems of: (i) video partitioning, (ii) key frame computing, (iii) key frame pruning. For the
first problem, an algorithm called “net comparison” is devised. It is accurate and fast because it uses both statistical and
spatial information in an image and does not have to process the entire image. For the last two problems, we develop an original
image similarity criterion, which considers both spatial layout and detail content in an image. For this purpose, coefficients
of wavelet decomposition are used to derive parameter vectors accounting for the above two aspects. The parameters exhibit
(quasi-) invariant properties, thus making the algorithm robust for many types of object/camera motions and scaling variances.
The novel “seek and spread” strategy used in key frame computing allows us to obtain a large representative range for the
key frames. Inter-shot redundancy of the key frames is suppressed using the same image similarity measure. Experimental results
demonstrate the effectiveness and efficiency of our techniques.
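The idea of deriving one parameter vector for spatial layout and one for detail content from wavelet coefficients can be sketched with a single level of a Haar-like decomposition. This is a minimal illustration in pure Python, not the paper's actual decomposition depth or distance measure.

```python
def haar_features(img):
    """One level of a 2-D Haar-like decomposition: block averages capture
    spatial layout, absolute differences capture detail content."""
    approx, detail = [], []
    for y in range(0, len(img) - 1, 2):
        for x in range(0, len(img[0]) - 1, 2):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            approx.append((a + b + c + d) / 4.0)
            detail.append(abs(a - b) + abs(c - d) + abs(a - c))
    return approx, detail

def similarity_distance(img1, img2, w_layout=0.5, w_detail=0.5):
    """Distance combining the layout and detail terms; 0 means identical."""
    a1, d1 = haar_features(img1)
    a2, d2 = haar_features(img2)
    layout = sum(abs(p - q) for p, q in zip(a1, a2)) / len(a1)
    detail = sum(abs(p - q) for p, q in zip(d1, d2)) / len(d1)
    return w_layout * layout + w_detail * detail
```

Two key frames whose distance falls below a threshold would be treated as redundant and one of them pruned.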
4.
The Consensus problem is a fundamental paradigm for fault-tolerant asynchronous systems. It abstracts a family of problems
known as Agreement (or Coordination) problems. Any solution to consensus can serve as a basic building block for solving such
problems (e.g., atomic commitment or atomic broadcast). Solving consensus in an asynchronous system is not a trivial task: Fischer, Lynch,
and Paterson proved in 1985 that there is no deterministic solution in asynchronous systems subject to
even a single crash failure. To circumvent this impossibility result, Chandra and Toueg have introduced the concept of unreliable
failure detectors (1991), and have studied how these failure detectors can be used to solve consensus in asynchronous systems
with crash failures. This paper presents a new consensus protocol that uses a failure detector of the class . Like previous protocols, it is based on the rotating coordinator paradigm and proceeds in asynchronous rounds. Simplicity
and efficiency are the main characteristics of this protocol. From a performance point of view, the protocol is particularly
efficient when, whether failures occur or not, the underlying failure detector makes no mistake (a common case in practice).
From a design point of view, the protocol is based on the combination of three simple mechanisms: a voting mechanism, a small
finite state automaton which manages the behavior of each process, and the possibility for a process to change its mind during
a round.
Received: August 1997 / Accepted: March 1999
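The rotating coordinator paradigm mentioned above can be sketched in a few lines. This sketch abstracts away everything that makes the real protocol interesting (message exchange, per-process suspicion lists, the voting mechanism), and the decision rule shown is hypothetical; it only illustrates how coordination rotates across rounds.

```python
def run_rounds(estimates, suspected_by_round, max_rounds=10):
    """Rotating-coordinator sketch: in round r, process r mod n coordinates.
    If the failure detector output (modelled here as a per-round set of
    suspected processes) does not include the coordinator, its estimate is
    adopted and decided by all processes."""
    n = len(estimates)
    for r in range(max_rounds):
        coord = r % n
        if coord not in suspected_by_round.get(r, set()):
            return estimates[coord], r  # decided value and deciding round
    return None, max_rounds
```

When the failure detector makes no mistake and no failure occurs, the decision is reached in the very first round, which mirrors the common-case efficiency claimed for the protocol.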
5.
M. D. McNeese 《Cognition, Technology & Work》2000,2(3):164-177
Within cooperative learning great emphasis is placed on the benefits of “two heads being greater than one”. However, further
examination of this adage reveals that the value of learning groups can often be overstated and taken for granted for different
types of problems. When groups are required to solve ill-defined and complex problems under real world constraints, different
socio-cognitive factors (e.g., metacognition, collective induction, and perceptual experience) are expected to determine the
extent to which cooperative learning is successful. Another facet of cooperative learning, the extent to which groups enhance
the use of knowledge from one situation to another, is frequently ignored in determining the value of cooperative learning.
This paper examines the role and functions of cooperative learning groups in contrast to individual learning conditions, for
both an acquisition and transfer task. Results for acquisition show groups perform better overall than individuals by solving
more elements of the Jasper problem as measured by their overall score in problem space analysis. For transfer, individuals
do better overall than groups in the overall amount of problem elements transferred from Jasper. This paradox is explained
by closer examination of the data analysis. Groups spend more time engaged with each other in metacognitive activities (during
acquisition) whereas individuals spend more time using the computer to explore details of the perceptually based Jasper macrocontext.
Hence, results show that individuals increase their perceptual learning during acquisition whereas groups enhance their metacognitive
strategies. These investments show different pay-offs for the transfer problem. Individuals transfer more overall problem
elements (as they explored the context more) but problem solvers who had the benefit of metacognition in a learning group
did better at solving the most complex elements of the transfer problem. Results also show that collective induction groups
(ones that freely share) – in comparison to groups composed of dominant members – enhance certain kinds of transfer problem
solving (e.g., generating subgoals). The results are portrayed as the active interplay of socio-cognitive elements that impact
the outcomes (and thereby the success) of cooperative learning.
6.
Extending the Unified Modeling Language for ontology development
Kenneth Baclawski Mieczyslaw K. Kokar Paul A. Kogut Lewis Hart Jeffrey Smith Jerzy Letkowski Pat Emery 《Software and Systems Modeling》2002,1(2):142-156
There is rapidly growing momentum for web enabled agents that reason about and dynamically integrate the appropriate knowledge
and services at run-time. The dynamic integration of knowledge and services depends on the existence of explicit declarative
semantic models (ontologies). We have been building tools for ontology development based on the Unified Modeling Language
(UML). This allows the many mature UML tools, models and expertise to be applied to knowledge representation systems, not
only for visualizing complex ontologies but also for managing the ontology development process. UML has many features, such
as profiles, global modularity and extension mechanisms that are not generally available in most ontology languages. However,
ontology languages have some features that UML does not support. Our paper identifies the similarities and differences (with
examples) between UML and the ontology languages RDF and DAML+OIL. To reconcile these differences, we propose a modification
to the UML metamodel to address some of the most problematic differences. One of these is the ontological concept variously
called a property, relation or predicate. This notion corresponds to the UML concepts of association and attribute. In ontology
languages properties are first-class modeling elements, but UML associations and attributes are not first-class. Our proposal
is backward-compatible with existing UML models while enhancing its viability for ontology modeling. While we have focused
on RDF and DAML+OIL in our research and development activities, the same issues apply to many of the knowledge representation
languages. This is especially the case for semantic network and concept graph approaches to knowledge representations.
Initial submission: 16 February 2002 / Revised submission: 15 October 2002 / Published online: 2 December 2002
7.
Providing a customized result set based upon a user preference is the ultimate objective of many content-based image retrieval
systems. There are two main challenges in meeting this objective: First, there is a gap between the physical characteristics
of digital images and the semantic meaning of the images. Secondly, different people may have different perceptions on the
same set of images. To address both these challenges, we propose a model, named Yoda, that conceptualizes content-based querying
as the task of soft classifying images into classes. These classes can overlap, and their members are different for different
users. The “soft” classification is hence performed for each and every image feature, including both physical and semantic
features. Subsequently, each image will be ranked based on the weighted aggregation of its classification memberships. The
weights are user-dependent, and hence different users would obtain different result sets for the same query. Yoda employs
a fuzzy-logic based aggregation function for ranking images. We show that, in addition to some performance benefits, fuzzy
aggregation is less sensitive to noise and can support disjunctive queries as compared to weighted-average aggregation used
by other content-based image retrieval systems. Finally, since Yoda heavily relies on user-dependent weights (i.e., user profiles)
for the aggregation task, we utilize the users' relevance feedback to improve the profiles using genetic algorithms (GA).
Our learning mechanism requires fewer user interactions, and results in a faster convergence to the user's preferences as
compared to other learning techniques.
Correspondence to: Y.-S. Chen (E-mail: yishinc@usc.edu)
This research has been funded in part by NSF grants EEC-9529152 (IMSC ERC) and IIS-0082826, NIH-NLM R01-LM07061, DARPA and
USAF under agreement nr. F30602-99-1-0524, and unrestricted cash gifts from NCR, Microsoft, and Okawa Foundation.
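The contrast drawn above between fuzzy aggregation and weighted-average aggregation can be sketched as follows. The particular fuzzy disjunction shown (a weighted maximum) is one common choice, not necessarily the aggregation function Yoda uses.

```python
def weighted_average(memberships, weights):
    """Weighted-average aggregation: one low membership drags the score down,
    so an image must match most weighted features to rank high."""
    return sum(m * w for m, w in zip(memberships, weights)) / sum(weights)

def fuzzy_disjunction(memberships, weights):
    """A fuzzy OR (weighted maximum): an image that strongly matches *any*
    highly weighted feature can still rank high, which is what makes
    disjunctive queries expressible."""
    return max(min(m, w) for m, w in zip(memberships, weights))
```

For memberships (0.9, 0.1) with equal weights, the weighted average is 0.5 while the fuzzy disjunction is 0.9: the second scheme rewards a strong match on a single feature.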
8.
Katrin Franke Mario Köppen 《International Journal on Document Analysis and Recognition》2001,3(4):218-231
Computer-based forensic handwriting analysis requires sophisticated methods for the pre-processing of digitized paper documents,
in order to provide high-quality digitized handwriting, which represents the original handwritten product as accurately as
possible. Due to the requirement of processing a huge amount of different document types, neither a standardized queue of
processing stages, fixed parameter sets nor fixed image operations are qualified for such pre-processing methods. Thus, we
present an open layered framework that covers adaptation abilities at the parameter, operator, and algorithm levels. Moreover,
an embedded module, which uses genetic programming, might generate specific filters for background removal on-the-fly. The
framework is understood as an assistance system for forensic handwriting experts and has been in use by the Bundeskriminalamt,
the federal police bureau in Germany, for two years. In the following, the layered framework will be presented, fundamental
document-independent filters for textured, homogeneous background removal and for foreground removal will be described, as
well as aspects of the implementation. Results of the framework-application will also be given.
Received July 12, 2000 / Revised October 13, 2000
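The layered idea of a per-document queue of processing stages, rather than a fixed pipeline, can be sketched as a list of (operator, parameters) pairs applied in sequence. The operators below are invented placeholders, not the framework's actual background- or foreground-removal filters.

```python
def run_preprocessing(image, stages):
    """Adaptive pipeline sketch: each document gets its own queue of
    (operator, params) stages, so operators and parameters can vary
    at all three adaptation levels described in the paper."""
    for operator, params in stages:
        image = operator(image, **params)
    return image

# Hypothetical stage operators for illustration only.
def threshold(img, level):
    return [[1 if p > level else 0 for p in row] for row in img]

def invert(img):
    return [[1 - p for p in row] for row in img]

result = run_preprocessing([[10, 200], [30, 90]],
                           [(threshold, {"level": 100}), (invert, {})])
```

A genetic-programming module, as described above, could then synthesize a document-specific stage and splice it into such a queue on-the-fly.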
9.
Rita Cucchiara 《Machine Vision and Applications》1998,11(1):1-6
The paper presents a genetic algorithm for clustering objects in images based on their visual features. In particular, a
novel solution code (named Boolean Matching Code) and a corresponding reproduction operator (the Single Gene Crossover) are defined specifically for clustering and are compared with other standard genetic approaches. The paper describes the
clustering algorithm in detail, in order to show the suitability of the genetic paradigm and underline the importance of
effective tuning of algorithm parameters to the application. The algorithm is evaluated on some test sets and an example of
its application in automated visual inspection is presented.
Received: 6 August 1996 / Accepted: 11 November 1997
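A single-gene crossover of the kind named above can be sketched as follows. The flat cluster-label encoding used here is a simplification for illustration; the paper's Boolean Matching Code is a different, clustering-specific representation.

```python
import random

def single_gene_crossover(parent_a, parent_b, rng):
    """Sketch: the child copies parent_a except for exactly one gene
    (here, one object's cluster label) taken from parent_b, so offspring
    stay close to one parent in the clustering solution space."""
    child = list(parent_a)
    i = rng.randrange(len(child))
    child[i] = parent_b[i]
    return child

rng = random.Random(42)
child = single_gene_crossover([0, 0, 1, 1], [1, 1, 0, 0], rng)
```

Compared with standard one-point crossover, changing a single gene per reproduction makes the search behave more like a guided local move, which is the kind of tuning-to-the-application the paper emphasizes.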
10.
Head tracking using stereo
Head tracking is an important primitive for smart environments and perceptual user interfaces where the poses and movements
of body parts need to be determined. Most previous solutions to this problem are based on intensity images and, as a result,
suffer from a host of problems including sensitivity to background clutter and lighting variations. Our approach avoids these
pitfalls by using stereo depth data together with a simple human-torso model to create a head-tracking system that is both
fast and robust. We use stereo data (Commercial equipment and materials are identified in order to adequately specify certain
procedures. In no case does such identification imply recommendation or endorsement by the National Institute of Standards
and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose.)
to derive a depth model of the background that is then employed to provide accurate foreground segmentation. We then use directed
local edge detectors on the foreground to find occluding edges that are used as features to fit to a torso model. Once we
have the model parameters, the location and orientation of the head can be easily estimated. A useful side effect from using
stereo data is the ability to track head movement through a room in three dimensions. Experimental results on real image sequences
are given.
Accepted: 13 August 2001
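The depth-based foreground segmentation step described above can be sketched as comparing each pixel's measured depth against a background depth model. This is a minimal sketch; real stereo data additionally requires handling invalid or missing disparities, which is omitted here.

```python
def foreground_mask(depth, background, tol=0.1):
    """Pixels measurably closer to the camera than the background depth
    model (by more than tol, in the same depth units) are foreground."""
    return [[d < b - tol for d, b in zip(drow, brow)]
            for drow, brow in zip(depth, background)]
```

The resulting mask would then be passed to the directed local edge detectors to find the occluding edges used for torso-model fitting.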