20 similar records found (search time: 31 ms)
1.
This paper describes a method for recognizing partially occluded objects under different levels of illumination brightness
using eigenspace analysis. In our previous work, we developed the "eigenwindow" method to recognize partially occluded
objects in an assembly task, and demonstrated, with performance high enough for industrial use, that the method works
successfully for multiple objects with specularity under constant illumination. In this paper, we modify the eigenwindow method
to recognize objects under different illumination conditions, as is sometimes the case in manufacturing environments, by
using additional color information. In the proposed method, a measured color in the RGB color space is transformed into
the HSV color space. The hue of the measured color, which is invariant to changes in illumination brightness and direction,
is then used to recognize multiple objects under different illumination conditions. The proposed method was applied to real
images of multiple objects under various illumination conditions, and the objects were recognized and localized successfully.
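The hue-invariance property this abstract relies on is easy to check in a few lines of Python. The function and sample colours below are illustrative only, not taken from the paper.

```python
import colorsys

def hue_of(r, g, b):
    """Hue (in [0, 1)) of an RGB colour, obtained via the RGB-to-HSV transform."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return h

# Halving the brightness of a colour leaves its hue essentially unchanged,
# which is why hue is a robust cue under varying illumination.
bright = hue_of(0.8, 0.4, 0.2)
dim = hue_of(0.4, 0.2, 0.1)   # same colour at half the brightness
```

Saturation and value do change with brightness; only hue survives a uniform scaling of the RGB channels, up to floating-point error.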
2.
3.
Detection, segmentation, and classification of specific objects are the key building blocks of a computer vision system for
image analysis. This paper presents a unified model-based approach to these three tasks. It is based on using unsupervised
learning to find a set of templates specific to the objects being outlined by the user. The templates are formed by averaging
the shapes that belong to a particular cluster, and are used to guide a probabilistic search through the space of possible
objects. The main difference from previously reported methods is the use of on-line learning, ideal for highly repetitive
tasks. This results in faster and more accurate object detection, as system performance improves with continued use. Further,
the information gained through clustering and user feedback is used to classify the objects for problems in which shape is
relevant to the classification. The effectiveness of the resulting system is demonstrated in two applications: a medical diagnosis
task using cytological images, and a vehicle recognition task.
Received: 5 November 2000 / Accepted: 29 June 2001
Correspondence to: K.-M. Lee
4.
Philip A. Bernstein, Shankar Pal, David Shutt. The VLDB Journal: The International Journal on Very Large Data Bases, 2000, 9(3): 177-189
When implementing persistent objects on a relational database, a major performance issue is prefetching data to minimize
the number of round-trips to the database. This is especially hard with navigational applications, since future accesses are
unpredictable. We propose the use of the context in which an object is loaded as a predictor of future accesses, where a context
can be a stored collection of relationships, a query result, or a complex object. When an object O's state is loaded, similar
state for other objects in O's context is prefetched. We present a design for maintaining context and for using it to guide
prefetch. We give performance measurements of its implementation in Microsoft Repository, showing up to a 70% reduction in
running time. We describe several variations of the optimization: selectively applying the technique based on application
and database characteristics, using application-supplied performance hints, using concurrent database queries to support asynchronous
prefetch, prefetching across relationship paths, and delayed prefetch to save database round-trips.
Received May 3, 2000 / Accepted October 26, 2000
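The context idea above can be sketched in a few lines. `ContextCache`, `fetch_batch`, and the toy store below are hypothetical names chosen for illustration; they are not the Microsoft Repository API.

```python
class ContextCache:
    """Sketch of context-based prefetching: loading one object's state
    triggers a single batched fetch of the same state for every other
    object in its context (a relationship collection, query result, or
    complex object)."""

    def __init__(self, fetch_batch):
        self.fetch_batch = fetch_batch   # fetch_batch(oids, attr) -> {oid: value}
        self.cache = {}                  # (oid, attr) -> value
        self.round_trips = 0             # database round-trips so far

    def load(self, oid, attr, context):
        key = (oid, attr)
        if key not in self.cache:
            self.round_trips += 1        # one round-trip serves the whole context
            for other, value in self.fetch_batch(context, attr).items():
                self.cache[(other, attr)] = value
        return self.cache[key]
```

Navigating from object to object inside one context then costs a single round-trip instead of one per object, which is the effect behind the reduction in running time the abstract reports.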
5.
A database model for object dynamics
M.P. Papazoglou, B.J. Krämer. The VLDB Journal: The International Journal on Very Large Data Bases, 1997, 6(2): 73-96
To effectively model complex applications in which constantly changing situations can be represented, a database system must
be able to support the runtime specification of structural and behavioral nuances for objects on an individual or group basis.
This paper introduces the role mechanism as an extension of object-oriented databases to support unanticipated behavioral
oscillations for objects that may attain many types and share a single object identity. A role refers to the ability to represent
object dynamics by seamlessly integrating idiosyncratic behavior, possibly in response to external events, with pre-existing
object behavior specified at instance creation time. In this manner, the same object can simultaneously be an instance of
different classes which symbolize the different roles that this object assumes. The role concept and its underlying linguistic
scheme simplify the design requirements of complex applications that need to create and manipulate dynamic objects.
Edited by D. McLeod / Received March 1994 / Accepted January 1996
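A minimal sketch of the role mechanism, assuming a dynamic language; the class and method names below are invented for illustration and do not come from the paper.

```python
class RoleObject:
    """One object identity that can acquire and shed behavioural roles at
    run time, so the same object is simultaneously an instance of the
    types its current roles represent."""

    def __init__(self, oid, **state):
        self.oid = oid               # single, stable object identity
        self.state = dict(state)     # behaviour fixed at instance creation
        self.roles = {}              # role name -> {method name: callable}

    def acquire(self, role, methods):
        """Attach idiosyncratic behaviour, e.g. in response to an external event."""
        self.roles[role] = dict(methods)

    def drop(self, role):
        self.roles.pop(role, None)

    def send(self, role, method, *args):
        """Dispatch a message through one of the object's current roles."""
        return self.roles[role][method](self, *args)
```

A `person` object could acquire an `Employee` role at run time and later drop it, all without losing its identity or its creation-time state.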
6.
A method for identification of latent weaknesses is proposed and its use illustrated in a case study. The method is a qualitative
upgrade of probabilistic safety assessment, using its results in the form of risk-significant components identified through
a logical model. The identification of latent weaknesses is made during meetings of the relevant staff who have authority
to influence the status of these components. The groups of key organisational factors influencing the component status are
identified and it is judged how well they are implemented. In the same way weak points of the management system and latent
weaknesses are also taken into account. The method was applied on four occasions. It proved to be efficient, fast and user-friendly,
provided the analysis is organised correctly and allowance is made for the reluctance of some participants
to put their knowledge at the disposal of a group's understanding of a complex technological system.
7.
Stefan Berchtold, Daniel A. Keim, Hans-Peter Kriegel. The VLDB Journal: The International Journal on Very Large Data Bases, 1997, 6(4): 333-348
In this paper, we introduce the concept of extended feature objects for similarity retrieval. Conventional approaches for
similarity search in databases map each object in the database to a point in some high-dimensional feature space and define
similarity as some distance measure in this space. For many similarity search problems, this feature-based approach is not
sufficient. When retrieving partially similar polygons, for example, the search cannot be restricted to edge sequences, since
similar polygon sections may start and end anywhere on the edges of the polygons. In general, inherently continuous problems
such as the partial similarity search cannot be solved by using point objects in feature space. In our solution, we therefore
introduce extended feature objects consisting of an infinite set of feature points. For an efficient storage and retrieval
of the extended feature objects, we determine the minimal bounding boxes of the feature objects in multidimensional space
and store these boxes using a spatial access structure. In our concrete polygon problem, sets of polygon sections are mapped
to 2D feature objects in high-dimensional space which are then approximated by minimal bounding boxes and stored in an R-tree. The selectivity of the index is improved by using an adaptive decomposition of very large feature objects and a dynamic
joining of small feature objects. For the polygon problem, translation, rotation, and scaling invariance is achieved by using
the Fourier-transformed curvature of the normalized polygon sections. In contrast to vertex-based algorithms, our algorithm
guarantees that no false dismissals may occur and additionally provides fast search times for realistic database sizes. We
evaluate our method using real polygon data of a supplier for the car manufacturing industry.
Edited by R. Güting. Received October 7, 1996 / Accepted March 28, 1997
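The bounding-box filter step is easy to make concrete. The helper names below are ours, and the boxes stand in for the R-tree entries the abstract describes.

```python
def minimal_bounding_box(points):
    """Axis-aligned minimal bounding box of a set of d-dimensional
    feature points, as (per-dimension minima, per-dimension maxima)."""
    lows = tuple(min(c) for c in zip(*points))
    highs = tuple(max(c) for c in zip(*points))
    return lows, highs

def boxes_intersect(box_a, box_b):
    """Filter test used when probing a spatial index: two boxes can only
    contain nearby feature objects if they overlap in every dimension."""
    (a_lo, a_hi), (b_lo, b_hi) = box_a, box_b
    return all(al <= bh and bl <= ah
               for al, ah, bl, bh in zip(a_lo, a_hi, b_lo, b_hi))
```

In the paper's setting each extended feature object is an infinite point set; indexing its bounding box gives a cheap, conservative first filter, with exact comparison applied only to the survivors.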
8.
Ada Wai-chee Fu, Polly Mei-shuen Chan, Yin-Ling Cheung, Yiu Sang Moon. The VLDB Journal: The International Journal on Very Large Data Bases, 2000, 9(2): 154-173
For some multimedia applications, it has been found that domain objects cannot be represented as feature vectors in a multidimensional
space. Instead, pair-wise distances between data objects are the only input. To support content-based retrieval, one approach
maps each object to a k-dimensional (k-d) point and tries to preserve the distances among the points. Existing spatial access methods such as R-trees
and KD-trees can then support fast searching on the resulting k-d points. However, information loss is inevitable with such an approach, since the distances between data objects can only
be preserved to a certain extent. Here we investigate the use of a distance-based indexing method. In particular, we apply
the vantage point tree (vp-tree) method. Two important problems for the vp-tree method warrant further investigation:
the n-nearest neighbors search and the updating mechanisms. We study an n-nearest neighbors search algorithm for the vp-tree, which is shown by experiments to scale up well with the size of the dataset
and the desired number of nearest neighbors, n. Experiments also show that searching in the vp-tree is more efficient than in the R*-tree and the M-tree. Next, we propose solutions to the update problem for the vp-tree, and show by experiments that the algorithms are
efficient and effective. Finally, we investigate the problem of selecting vantage points, propose a few alternative methods,
and study their impact on the number of distance computations.
Received June 9, 1998 / Accepted January 31, 2000
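A compact sketch of a vp-tree with n-nearest-neighbour search, under the same metric-space assumption as the paper (only pair-wise distances are available). The dictionary-based node layout and function names are ours, not the authors' implementation.

```python
def build_vp_tree(points, dist):
    """Recursively build a vantage-point tree. Each node stores a vantage
    point, the median distance mu to the remaining points, and subtrees
    for the points inside/outside the ball of radius mu around it."""
    if not points:
        return None
    vp, rest = points[0], points[1:]
    if not rest:
        return {"vp": vp, "mu": 0.0, "inside": None, "outside": None}
    dists = sorted(dist(vp, p) for p in rest)
    mu = dists[len(dists) // 2]
    return {"vp": vp, "mu": mu,
            "inside": build_vp_tree([p for p in rest if dist(vp, p) < mu], dist),
            "outside": build_vp_tree([p for p in rest if dist(vp, p) >= mu], dist)}

def nn_search(node, q, dist, n, best=None):
    """n-nearest-neighbour search, pruning subtrees with the triangle inequality."""
    if best is None:
        best = []                      # sorted list of (distance, point), length <= n
    if node is None:
        return best
    d = dist(q, node["vp"])
    if len(best) < n or d < best[-1][0]:
        best.append((d, node["vp"]))
        best.sort(key=lambda t: t[0])
        del best[n:]
    tau = best[-1][0] if len(best) == n else float("inf")
    if d < node["mu"]:                 # visit the more promising side first
        best = nn_search(node["inside"], q, dist, n, best)
        tau = best[-1][0] if len(best) == n else float("inf")
        if d + tau >= node["mu"]:      # outside side may still hold a closer point
            best = nn_search(node["outside"], q, dist, n, best)
    else:
        best = nn_search(node["outside"], q, dist, n, best)
        tau = best[-1][0] if len(best) == n else float("inf")
        if d - tau < node["mu"]:       # inside side may still hold a closer point
            best = nn_search(node["inside"], q, dist, n, best)
    return best
```

Because pruning uses only the triangle inequality, the search is exact (no false dismissals) for any metric distance function, which is exactly the property that makes distance-based indexes attractive when no feature vectors exist.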
9.
The most common way of designing databases is by means of a conceptual model, such as E/R, without taking into account other
views of the system. New object-oriented design languages, such as UML (Unified Modelling Language), allow the whole system,
including the database schema, to be modelled in a uniform way. Moreover, as UML is an extensible language, it allows
new stereotypes to be introduced as needed for specific applications. Proposals exist to extend UML with stereotypes for
database design but, unfortunately, they are focused on relational databases. However, new applications require complex objects
to be represented in complex relationships, object-relational databases being more appropriate for these requirements. The
framework of this paper is an Object-Relational Database Design Methodology, which defines new UML stereotypes for Object-Relational
Database Design and proposes some guidelines to translate a UML conceptual schema into an object-relational schema. The guidelines
are based on the SQL:1999 object-relational model and on Oracle8i as a product example.
Initial submission: 22 January 2002 / Revised submission: 10 June 2002
Published online: 7 January 2003
This paper is a revised and extended version of “Extending UML for Object-Relational Database Design”, presented at the UML’2001
conference [17].
10.
Rule-based document structure understanding with a fuzzy combination of layout and textual features
Stefan Klink, Thomas Kieninger. International Journal on Document Analysis and Recognition, 2001, 4(1): 18-26
Document image processing is a crucial process in office automation: it begins at the OCR phase and faces further difficulties
in document analysis and understanding. This paper presents a hybrid and comprehensive approach to document structure analysis, hybrid
in the sense that it makes use of layout (geometrical) as well as textual features of a given document. These features are
the base for potential conditions which in turn are used to express fuzzy matched rules of an underlying rule base. Rules
can be formulated based on features which might be observed within one specific layout object. However, rules can also express
dependencies between different layout objects. In addition to its rule driven analysis, which allows an easy adaptation to
specific domains with their specific logical objects, the system contains domain-independent markup algorithms for common
objects (e.g., lists).
Received June 19, 2000 / Revised November 8, 2000
11.
The way in which humans perceive and react to visual complexity is an important issue in many areas of research and application,
particularly because simplification of complex matter can lead to better understanding of both human behaviour in visual control
tasks as well as the visual environment itself. One area of interest is how people perceive their world in terms of complexity
and how this can be modelled mathematically and/or computationally. A prototype model of complexity has been derived using
subcomponents called ‘SymGeons’ (Symmetrical Geometric Icons) based on Biederman’s original Geon Model for human perception.
The SymGeons are primitive shapes which constitute foreground objects. This paper outlines the derivation and ongoing development
of the ‘SymGeon’ model and how it compares to human perception of visual complexity. The application of the model to understanding
complex human-in-the-loop problems associated with visual remote control operations, e.g. control of remotely operated vehicles,
is discussed.
12.
The increasingly global nature of financial markets and institutions means that the collection and management of information
on which decisions might be based are increasingly complex. There is a growing requirement for the integration of information
flows at individual and departmental levels, and across processes and organisational boundaries. Effective information management
is an important contributory factor in the efficiency of such institutions, though there are many associated problems that
do not have obvious or simple answers. This paper discusses the problem of information gathering in complex business environments
and considers how use cases can help to alleviate the problem using an example of a multinational organisation. Such organisations
often require information systems that can support regional differences. However, management requires consistent and uniform
representation of information. The example shows that use cases can be a helpful mechanism for capturing user requirements
that accommodate both regional properties and organisational commonalities.
13.
This paper presents an efficient method for creating animations of flexible objects. The mass-spring model is used to
represent flexible objects. The easiest approach to animating a mass-spring model is the explicit Euler method,
but it suffers from a serious instability problem. The implicit integration method is a possible
solution, but its critical flaw is that it involves solving a large linear system. This paper presents an approximate
implicit method for the mass-spring model. The proposed technique stably updates the state of n mass points in O(n) time when the total number of springs is O(n). To increase the efficiency of simulation, or to reduce the numerical errors of the proposed approximate implicit method,
the number of mass points must be as small as possible. However, coarse discretization with a small number of mass points
produces an unrealistic appearance for a cloth model. By introducing a wrinkled cubic spline curve, we propose a new technique
that generates realistic details of the cloth model, even though a small number of mass points is used for simulation.
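The instability mentioned above is easy to reproduce on a single spring. This sketch uses only the explicit Euler update, and the parameter values are illustrative.

```python
def simulate_spring(x0, v0, k, m, dt, steps):
    """Explicit Euler integration of one mass on an undamped spring.

    For a harmonic oscillator the explicit Euler amplification factor is
    sqrt(1 + (dt * omega)^2) > 1, so the amplitude always grows; with a
    large time step it explodes within a few oscillation periods.
    """
    x, v = float(x0), float(v0)
    for _ in range(steps):
        a = -(k / m) * x                 # Hooke's law: F = -k x
        x, v = x + dt * v, v + dt * a    # explicit (forward) Euler update
    return x, v

# Same simulated time span (10 s of a stiff spring, omega = 10 rad/s):
# a coarse step diverges, while a fine step stays near the true amplitude.
x_coarse, v_coarse = simulate_spring(1.0, 0.0, k=100.0, m=1.0, dt=0.05, steps=200)
x_fine, v_fine = simulate_spring(1.0, 0.0, k=100.0, m=1.0, dt=0.001, steps=10000)
```

Implicit (backward) Euler damps rather than amplifies the oscillation, which is why the paper pursues an approximate implicit scheme that avoids solving the full linear system.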
14.
We present a method of colour shade grading for industrial inspection of surfaces, the differences of which are at the threshold
of human perception. This method converts the input data from the electronic sensor to the corresponding data as they would
have been viewed using the human vision system. Then their differences are computed using a perceptually uniform colour space,
thus approximating the way the human experts would grade the product. The transformation from the electronic sensor to the
human sensor makes use of synthetic metameric data to determine the transformation parameters. The method has been tested
using real data.
Received: 17 November 1997 / Accepted: 15 September 1998
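The "perceptually uniform colour space" step can be illustrated with CIELAB and the simple ΔE*76 difference. The conversion below assumes sRGB input and a D65 white point, which is our choice for the sketch, not necessarily the paper's.

```python
def _srgb_to_linear(c):
    """Undo the sRGB gamma curve, mapping a 0..1 channel to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def _f(t):
    """The cube-root compression used in the XYZ-to-Lab transform."""
    return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

def rgb_to_lab(r, g, b):
    """Convert an sRGB colour (channels in 0..1) to CIELAB under D65."""
    rl, gl, bl = (_srgb_to_linear(c) for c in (r, g, b))
    # linear sRGB -> CIE XYZ (D65 primaries)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    xn, yn, zn = 0.95047, 1.0, 1.08883          # D65 white point
    fx, fy, fz = _f(x / xn), _f(y / yn), _f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(lab1, lab2):
    """Euclidean distance in CIELAB: the simplest perceptual colour difference."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5
```

A ΔE*76 near or below 1 is roughly the just-noticeable difference for a human observer, which is the regime the grading method operates in.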
15.
Aya Soffer, Hanan Samet. The VLDB Journal: The International Journal on Very Large Data Bases, 1998, 7(4): 253-274
Symbolic images are composed of a finite set of symbols that have a semantic meaning. Examples of symbolic images include
maps (where the semantic meaning of the symbols is given in the legend), engineering drawings, and floor plans. Two approaches
for supporting queries on symbolic-image databases that are based on image content are studied. The classification approach
preprocesses all symbolic images and attaches a semantic classification and an associated certainty factor to each object
that it finds in the image. The abstraction approach describes each object in the symbolic image by using a vector consisting
of the values of some of its features (e.g., shape, genus, etc.). The approaches differ in the way in which responses to queries
are computed. In the classification approach, images are retrieved on the basis of whether or not they contain objects that
have the same classification as the objects in the query. On the other hand, in the abstraction approach, retrieval is on
the basis of similarity of feature vector values of these objects. Methods of integrating these two approaches into a relational
multimedia database management system so that symbolic images can be stored and retrieved based on their content are described.
Schema definitions and indices that support query specifications involving spatial as well as contextual constraints are presented.
Spatial constraints may be based on both locational information (e.g., distance) and relational information (e.g., north of).
Different strategies for image retrieval for a number of typical queries using these approaches are described. Estimated costs
are derived for these strategies. Results are reported of a comparative study of the two approaches in terms of image insertion
time, storage space, retrieval accuracy, and retrieval time.
Received June 12, 1998 / Accepted October 13, 1998
16.
Display Design of Process Systems Based on Functional Modelling
The prevalent way to present information in industrial computer displays is by using piping and instrumentation diagrams.
Such interfaces have sometimes resulted in difficulties for operators because they are not sufficient to fulfil their needs.
A systematic way that supports interface design therefore has to be considered. In the new design framework, two questions
must be answered. Firstly, a modelling method is required to describe a process system. Such a modelling method can define
the information content that must be displayed in interfaces. Secondly, how to communicate this information to operators efficiently
must be considered. This will provide a basis for determining the visual forms that the information should take. This study
discusses interface design of human–machine systems from these two points of view. Based on other scholars’ work, a comprehensive
set of functional primitives is summarised as a basis to build a functional model of process systems. A library of geometrical
presentations for these primitives is then developed. To support effective interface design, the concept of ‘functional macro’
is introduced, and the mapping from a functional model to an interface display is illustrated by applying several principles. To make
our ideas clear, a central heating system is taken as an example and its functional model is constructed. Based on the functional
model, the information to be displayed is determined. Several functional macros are then found in the model and their corresponding
displays are constructed. Finally, by using the library of geometrical presentations for functional primitives and functional
macros, the display hierarchy of the central heating system is developed. Reusability of functional primitives makes it possible
to use the methodology to support interface design of different process systems.
17.
M. D. McNeese. Cognition, Technology & Work, 2000, 2(3): 164-177
Within cooperative learning great emphasis is placed on the benefits of ‘two heads being greater than one’. However, further
examination of this adage reveals that the value of learning groups can often be overstated and taken for granted for different
types of problems. When groups are required to solve ill-defined and complex problems under real world constraints, different
socio-cognitive factors (e.g., metacognition, collective induction, and perceptual experience) are expected to determine the
extent to which cooperative learning is successful. Another facet of cooperative learning, the extent to which groups enhance
the use of knowledge from one situation to another, is frequently ignored in determining the value of cooperative learning.
This paper examines the role and functions of cooperative learning groups in contrast to individual learning conditions, for
both an acquisition and transfer task. Results for acquisition show groups perform better overall than individuals by solving
more elements of the Jasper problem as measured by their overall score in problem space analysis. For transfer, individuals
do better overall than groups in the overall amount of problem elements transferred from Jasper. This paradox is explained
by closer examination of the data analysis. Groups spend more time engaged with each other in metacognitive activities (during
acquisition) whereas individuals spend more time using the computer to explore details of the perceptually based Jasper macrocontext.
Hence, results show that individuals increase their perceptual learning during acquisition whereas groups enhance their metacognitive
strategies. These investments show different pay-offs for the transfer problem. Individuals transfer more overall problem
elements (as they explored the context more) but problem solvers who had the benefit of metacognition in a learning group
did better at solving the most complex elements of the transfer problem. Results also show that collective induction groups
(ones that freely share) – in comparison to groups composed of dominant members – enhance certain kinds of transfer problem
solving (e.g., generating subgoals). The results are portrayed as the active interplay of socio-cognitive elements that impact
the outcomes (and therein success) of cooperative learning.
18.
One of the most important criticisms that can be made of synthesized images is the brand-new, overly clean appearance
of objects. Surface color modifications can be used to introduce dirt or other aging-related characteristics. Techniques
such as bump or displacement mapping also allow users to improve surface appearance by introducing geometric perturbations. In parallel,
the bidirectional reflectance distribution function (BRDF) is a crucial factor in achieving a high degree of realism. It turns
out that surfaces are very often covered by defects such as scratches, which, because of their size, are related to both textures and BRDFs.
Scratches do not always affect the apparent geometry but can nevertheless remain strongly visible. None of the previously
mentioned methods is suited to rendering these defects efficiently. We propose a new method, based on extensions of existing
BRDFs and classical 2D texture-mapping techniques, to efficiently render individually visible scratches. We use physical measurements
on "real objects" to derive an accurate geometric model of scratches at a small scale (roughness scale), and we introduce
a new geometric level between bump mapping and BRDFs. Beyond providing graphical results that closely match real cases, our
method opens the way to a new class of considerations in computer graphics based on defects that require the coupling of
BRDFs and texturing techniques.
19.
In this paper, we discuss an appearance-matching approach to the difficult problem of interpreting color scenes containing
occluded objects. We have explored the use of an iterative, coarse-to-fine sum-squared-error method that uses information
from hypothesized occlusion events to perform run-time modification of scene-to-template similarity measures. These adjustments
are performed by using a binary mask to adaptively exclude regions of the template image from the squared-error computation.
At each iteration higher resolution scene data as well as information derived from the occluding interactions between multiple
object hypotheses are used to adjust these masks. We present results which demonstrate that such a technique is reasonably
robust over a large database of color test scenes containing objects at a variety of scales, and tolerates minor 3D object
rotations and global illumination variations.
Received: 21 November 1996 / Accepted: 14 October 1997
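The mask-adjusted error measure described above can be sketched directly. The function below is our illustration of a masked sum-squared error, not the authors' code.

```python
def masked_ssd(scene, template, mask):
    """Sum-squared error between a scene window and a template image,
    where a binary mask excludes pixels hypothesised to be occluded."""
    err = 0.0
    for s_row, t_row, m_row in zip(scene, template, mask):
        for s, t, m in zip(s_row, t_row, m_row):
            if m:                         # only unmasked pixels contribute
                err += (s - t) ** 2
    return err
```

At each coarse-to-fine iteration, pixels implicated in an occlusion hypothesis are zeroed in the mask, so a hypothesised foreground object no longer penalises the match score of the object behind it.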
20.
S. Bernardi, S. Donatelli, A. Horváth. International Journal on Software Tools for Technology Transfer (STTT), 2001, 3(4): 417-430
An implementation of compositionality for stochastic well-formed nets (SWN) and, consequently, for generalized stochastic
Petri nets (GSPN) has been recently included in the GreatSPN tool. Given two SWNs and a labelling function for places and
transitions, it is possible to produce a third one as a superposition of places and transitions of equal label. Colour domains
and arc functions of SWNs have to be treated appropriately. The main motivation for this extension was the need to evaluate
a library of fault-tolerant “mechanisms” that have been recently defined, and are now under implementation, in a European
project called TIRAN. The goal of the TIRAN project is to devise a portable software solution to the problem of fault tolerance
in embedded systems, while the goal of the evaluation is to provide evidence of the efficacy of the proposed solution. Modularity
being a natural “must” for the project, we have tried to reflect it in our modelling effort. In this paper, we discuss the
implementation of compositionality in the GreatSPN tool, and we show its use for the modelling of one of the TIRAN mechanisms,
the so-called local voter.
Published online: 24 August 2001