Similar Documents
20 similar documents found.
1.
In this study, we selected medical image diagnosis as a task for investigating how expertise influences the relations between perceptual and conceptual processing. In an experiment, five novices and five experts made diagnoses on 13 CT images. We obtained two types of data: verbal protocols and records of manipulations of the computational systems. Segments related to perceptual and conceptual processing were extracted from these data, and the interrelations of the two components were analyzed. We confirmed three salient features of the experts: (1) they verbalized more types of findings and more types of hypotheses than novices; (2) they generated several hypotheses in the early phases of the task; and (3) they newly verbalized many perceptual features during conceptual activities, and verbalized conceptual words during perceptual activities. These results suggest that expertise in medical image diagnosis involves not only the development of both perceptual and conceptual processing, but also the development of an ability to connect the two components.

2.
An Improved Method for Computing Concept Similarity in Ontology Mapping
To address shortcomings in existing methods for computing concept similarity in ontology mapping, an improved method is proposed. First, drawing on the characteristics of the ontology concept tree and on ideas from data mining, an improved way of constructing the candidate mapping set is introduced, reducing the workload of similarity computation. Then, based on the characteristics of ontologies and their concepts, an improved similarity computation method is designed that combines concept names, concept instances, and the properties, structure, and relations of concepts. This mitigates the one-sidedness and incompleteness of existing similarity computations and improves the recall and precision of ontology mapping. Preliminary experiments show that the algorithm outperforms the Glue method in computational complexity, recall, and precision.
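The abstract describes aggregating several per-factor similarities (name, instances, properties, structure, relations) into a single score. A minimal sketch of such a weighted aggregation; the factor scores and weights below are hypothetical, not taken from the paper:

```python
def combined_similarity(factors, weights):
    """Weighted aggregation of per-factor similarity scores in [0, 1]."""
    return sum(f * w for f, w in zip(factors, weights)) / sum(weights)

# Hypothetical per-factor similarities between two ontology concepts
# (name, instance, property, structure/relation), with invented weights.
sims = [0.9, 0.6, 0.8, 0.7]
weights = [0.4, 0.2, 0.2, 0.2]
score = combined_similarity(sims, weights)
```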

3.
Discovery of a perceptual distance function for measuring image similarity
For more than a decade, researchers have actively explored the area of image/video analysis and retrieval. Yet one fundamental problem remains largely unsolved: how to measure perceptual similarity between two objects. For this purpose, most researchers employ a Minkowski-type metric. Unfortunately, the Minkowski metric does not reliably find similarities in objects that are obviously alike. Through mining a large set of visual data, our team has discovered a perceptual distance function. We call the discovered function the dynamic partial function (DPF). When we empirically compare DPF to Minkowski-type distance functions in image retrieval and in video shot-transition detection using our image features, DPF performs significantly better. The effectiveness of DPF can be explained by similarity theories in cognitive psychology.
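As described in the cited work, the dynamic partial function aggregates only the m smallest per-dimension differences, whereas a Minkowski metric uses all of them. A minimal sketch (parameter names are illustrative):

```python
def dpf(x, y, m, r=2):
    """Dynamic partial function: aggregate only the m smallest
    per-dimension differences (a Minkowski metric would use all of them)."""
    diffs = sorted(abs(a - b) for a, b in zip(x, y))
    return sum(d ** r for d in diffs[:m]) ** (1.0 / r)
```

With m equal to the full dimensionality, dpf reduces to the ordinary Minkowski distance; smaller m discounts the few dimensions in which otherwise similar objects differ sharply.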

4.
Nonfunctional requirements (NFRs) have been frequently neglected or forgotten in software design. They have been presented as a second or even third class type of requirement, frequently hidden inside notes. We tackle this problem by treating NFRs as first class requirements. We present a process to elicit NFRs, analyze their interdependencies, and trace them to functional conceptual models. We focus our attention on conceptual models expressed using UML (Unified Modeling Language). Extensions to UML are proposed to allow NFRs to be expressed. We show how to integrate NFRs into the class, sequence, and collaboration diagrams. We also show how use cases and scenarios can be adapted to deal with NFRs. This work was applied in three case studies, whose results suggest that our proposal improves the quality of the resulting conceptual models.

5.
Measuring image similarity is an important task for various multimedia applications. Similarity can be defined at two levels: at the syntactic (lower, context-free) level and at the semantic (higher, contextual) level. As long as one deals with the syntactic level, defining and measuring similarity is a relatively straightforward task, but as soon as one starts dealing with semantic similarity, the task becomes very difficult. We examine the use of simple, readily available syntactic image features combined with other multimodal features to derive a similarity measure that captures the weak semantics of an image. Weak semantics can be seen as an intermediate step between low level image understanding and full semantic image understanding. We investigate the use of single modalities alone and examine how combining modalities affects the similarity measures. We also test the measure on a multimedia retrieval task with TV-series data, even though our motivation is to understand how the different modalities relate to each other.

6.
There has been growing interest in theory building in Information Systems (IS) research. We extend this literature by examining theory building perspectives. We define a perspective as a researcher’s choice of the types of concepts and relationships used to construct a theory, and we examine three perspectives – process, variance, and systems. We contribute by clarifying these perspectives and explaining how they can be used more flexibly in future research. We illustrate the value of this more flexible approach by showing how researchers can use different theoretical perspectives to critique and extend an existing theoretical model (in our case, the IS Success Model). Overall, we suggest a shift from the traditional process-variance dichotomy to a broader view defined by conceptual latitude (the types of concepts and relationships available) and conceptual fit (the types of concepts and relationships appropriate for a given study). We explain why this shift should help researchers as they engage in the knowledge generation process.

7.
Recently there has been steep growth in the development of kernel-based learning algorithms. The intrinsic problem in such algorithms is the selection of the optimal kernel for the learning task of interest. In this paper, we propose an unsupervised approach to learn a linear combination of kernel functions, such that the resulting kernel best serves the objectives of the learning task. This is achieved by measuring the influence of each point on the structure of the dataset. This measure is calculated by constructing a weighted graph on which a random walk is performed. The measure of influence in the feature space is probabilistically related to the input space, which yields an optimization problem to be solved. The optimization problem is formulated in two different convex settings, namely linear and semidefinite programming, depending on the type of kernel combination considered. The contributions of this paper are twofold: first, a novel unsupervised approach to learn the kernel function, and second, a method to infer the local similarity represented by the kernel function by measuring the global influence of each point on the structure of the dataset. The proposed approach focuses on kernel selection, which is independent of the kernel-based learning algorithm. Empirical evaluation of the proposed approach on various datasets shows the effectiveness of the algorithm in practice.
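A linear combination of kernels, as the abstract describes, can be sketched as follows. The component kernels and weights here are illustrative, and the paper's actual contribution, learning the weights from the data via the random-walk influence measure, is omitted:

```python
import math

def combined_kernel(kernels, mus):
    """K(x, y) = sum_i mu_i * K_i(x, y); non-negative mu_i keep K a valid kernel."""
    def K(x, y):
        return sum(mu * k(x, y) for mu, k in zip(mus, kernels))
    return K

# Two illustrative base kernels: linear and Gaussian (RBF with gamma = 1).
linear = lambda x, y: sum(a * b for a, b in zip(x, y))
rbf = lambda x, y: math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)))

K = combined_kernel([linear, rbf], [0.5, 0.5])
```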

8.
Inferring dependencies from relations: a conceptual clustering approach
In this paper we consider two related types of data dependencies that can hold in a relation: conjunctive implication rules between attribute-value pairs, and functional dependencies. We present a conceptual clustering approach that can be used, with some small modifications, for inferring a cover for both types of dependencies. The approach consists of two steps. First, a particular clustered representation of the relation, called a concept (or Galois) lattice, is built. Then, a cover is extracted from the lattice built in the earlier step. Our main emphasis is on the second step. We study the computational complexity of the proposed approach and present an experimental comparison with other methods that confirms its validity. The results of the experiments show that our algorithm for extracting implication rules from concept lattices clearly outperforms an earlier algorithm, and suggest that the overall lattice-based approach to inferring functional dependencies from relations can be seen as an alternative to traditional methods.
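Checking whether a single functional dependency holds in a relation is the basic primitive underlying such inference. A naive sketch (the paper's lattice-based algorithm is far more efficient than testing candidate dependencies one by one):

```python
def fd_holds(rows, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds in a
    relation given as a list of dicts (attribute -> value): no two rows
    may agree on lhs while disagreeing on rhs."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True
```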

9.
Determining semantic similarity is an important issue in the development of semantic search technology. In this paper, we propose an approach to determining semantic similarity. This approach takes into consideration the similarity between two entities and their similarity as reflected in context. Furthermore, the approach provides an efficient Tabu Search algorithm combined with a multi-objective programming algorithm to improve precision.
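A generic Tabu Search skeleton of the kind the abstract invokes; the neighborhood, cost function, and parameters below are illustrative, and the paper's multi-objective formulation is not reproduced here:

```python
from collections import deque

def tabu_search(start, neighbors, cost, iters=100, tenure=5):
    """Minimal Tabu Search: greedily move to the best non-tabu neighbor,
    keeping a short-term memory of recently visited solutions so the
    search can escape local minima instead of cycling."""
    best = cur = start
    tabu = deque([start], maxlen=tenure)
    for _ in range(iters):
        candidates = [n for n in neighbors(cur) if n not in tabu]
        if not candidates:
            break
        cur = min(candidates, key=cost)
        tabu.append(cur)
        if cost(cur) < cost(best):
            best = cur
    return best
```

For example, minimizing (x - 7)^2 over the integers with neighbors x - 1 and x + 1 converges to 7 even though each greedy step only looks one position away.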

10.
11.
This paper addresses the problem of characterizing ensemble similarity from sample similarity in a principled manner. Using a reproducing kernel as a characterization of sample similarity, we suggest a probabilistic distance measure in the reproducing kernel Hilbert space (RKHS) as the ensemble similarity. Assuming normality in the RKHS, we derive analytic expressions for probabilistic distance measures that are commonly used in many applications, such as Chernoff distance (or the Bhattacharyya distance as its special case), Kullback-Leibler divergence, etc. Since the reproducing kernel implicitly embeds a nonlinear mapping, our approach presents a new way to study these distances whose feasibility and efficiency is demonstrated using experiments with synthetic and real examples. Further, we extend the ensemble similarity to the reproducing kernel for ensemble and study the ensemble similarity for more general data representations.
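For reference, the closed-form Bhattacharyya distance between two one-dimensional Gaussians, one of the probabilistic distances the abstract derives in the RKHS setting (shown here in plain input space, without the kernel embedding):

```python
import math

def bhattacharyya_1d(m1, v1, m2, v2):
    """Bhattacharyya distance between N(m1, v1) and N(m2, v2):
    D_B = (m1-m2)^2 / (4(v1+v2)) + 0.5*ln((v1+v2) / (2*sqrt(v1*v2)))."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2))))
```

The distance is zero only for identical distributions; the first term captures mean separation, the second the mismatch in variances.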

12.
A rigorous formulation of the solvation forces (first derivatives) associated with the electrostatic free energy calculated from numerical solutions of the linearized Poisson-Boltzmann equation on a discrete grid is described. The solvation forces are obtained from the formal solution of the linearized Poisson-Boltzmann equation written in terms of the Green function. An intermediate region for the solute-solvent dielectric boundary is introduced to yield a continuous solvation free energy and accurate solvation forces. A series of numerical tests shows that the calculated forces agree extremely well with finite-difference derivatives of the solvation free energy. To gain maximum efficiency, the nonpolar contribution to the free energy is expressed in terms of the discretized grid used for the electrostatic problem. The current treatment of solvation forces can be used to introduce the influence of a continuum solvation model in molecular mechanics calculations of large biological systems.

13.
Large-scale Web Applications, especially those intended to publish content and provide information to their users, are by nature subject to continuous and fast changes. This often means fast obsolescence of the design documentation and a lot of effort required to comprehend the application when performing maintenance and evolution tasks. This paper presents a reverse engineering approach for Web Applications enabling the semi-automatic recovery of user-centered conceptual models describing, from a user perspective, key aspects such as the delivered contents and navigational paths. The abstracted models are formalized according to the Ubiquitous Web Applications (UWA) design methodology, but any other design method for Web Applications could be used instead. The paper describes the recovery process, a tool developed to support the process, and the results of a case study conducted to validate the approach on a set of real world Web Applications.

14.
To successfully compete in today’s volatile business environments, enterprises need to consolidate, flexibly adapt, and extend their information systems (IS) with new functionality. Component-based development approaches can help solve these challenges as they support the structuring of IS landscapes into business components with loosely coupled business functionality. However, the structuring process continues to pose research challenges and is not yet adequately supported. Current approaches to support the structuring process typically rely on procedures that cannot be customized to the designer’s situational preferences. Furthermore, they do not allow the designer to identify and reflect on emerging conflicts during the structuring. In this paper, we therefore propose a new method that introduces a rational, reflective procedure to systematically derive an optimized structuring according to situational preferences. Using a design science approach, we show (i) how the derivation of business components can be formulated as a customizable multi-criteria decision making problem and (ii) how conceptual models can be used to derive business components with an optimized functional scope. To evaluate the feasibility of the proposed method, we describe its application in a complex case taken from a German DAX-30 automobile manufacturer.
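A weighted-sum score is the simplest instance of the multi-criteria decision making formulation the abstract mentions; the candidate components, criteria, and weights below are hypothetical, not taken from the paper:

```python
def rank_alternatives(alternatives, weights):
    """alternatives: name -> per-criterion scores (higher is better).
    Returns names sorted by weighted-sum score, best first."""
    total = {name: sum(w * s for w, s in zip(weights, scores))
             for name, scores in alternatives.items()}
    return sorted(total, key=total.get, reverse=True)

# Hypothetical business-component candidates scored on two criteria,
# e.g. functional cohesion and low coupling, weighted by preference.
candidates = {'OrderMgmt': [0.9, 0.2], 'Billing': [0.5, 0.8]}
ranking = rank_alternatives(candidates, weights=[0.7, 0.3])
```

Changing the weights models the "situational preferences" of the designer: a different weighting can reorder the candidates without re-scoring them.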

15.
Much of the research that deals with understanding the real world and representing it in a conceptual model uses some form of the entity-relationship model as a means of representation. This research proposes an ontology for classifying relationship verb phrases based upon the domain and context of the application within which the relationship appears. The classification categories to which the verb phrases are mapped were developed based upon prior research in databases, ontologies, and linguistics. The usefulness of the ontology for comparing relationships when used in conjunction with an entity ontology is discussed. Together, these ontologies can be effective in comparing two conceptual database designs for integration and validation. Empirical testing of the ontology on a number of relationships from different application domains and contexts illustrates the usefulness of the research.

16.
17.
Neural Computing and Applications - Over the past years, a digital multimedia uprising has been experienced in every walk of life, due to which the un-annotated or unstructured multimedia content...

18.
Robotics and Computer, 1994, 11(3): 137-166
This paper describes a knowledge-based approach to the domain independent conceptual design phase. We describe the development of “ECDEX”, the “Engineering Conceptual Design Expert” that generates concept variants (concept alternatives) during the conceptual design phase of engineering systems. ECDEX was developed using an expert system shell called CLIPS (C Language Integrated Production System) and a C-program interface to aid the quantitative phase of concept evaluation. ECDEX internally develops function structures that depend on the number of solution principles selected for each sub-function during the process. After the function structures are developed, ECDEX searches for solution principles for each of the sub-functions, ensuring that every combination begins with the system input and ends with the system output. After all combinations are developed, ECDEX performs concept evaluation using evaluation criteria defined by the designer, providing a set of ‘concept variants’. ECDEX is applied to three test cases: an impulse loading test rig, a hydro-electric generation plant, and a fishing reel. The results from these three test cases demonstrate that ECDEX generates satisfactory conceptual designs of engineering systems.
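Enumerating every combination of solution principles across sub-functions, as ECDEX does before evaluation, amounts to a Cartesian product; the solution principles below are invented for illustration:

```python
from itertools import product

def concept_variants(solutions_per_subfunction):
    """Combine one solution principle per sub-function into full concept
    variants: the Cartesian product of the per-sub-function alternatives."""
    return list(product(*solutions_per_subfunction))

# Invented example: two sub-functions with two solution principles each
# yield 2 x 2 = 4 concept variants to evaluate.
variants = concept_variants([['motor', 'spring'], ['gear drive', 'belt drive']])
```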

19.
Corner matching in image sequences is an important and difficult problem that serves as a building block of several important applications such as stereo vision. Normally, area-based corner matching techniques use linear measures such as the standard cross correlation coefficient, the zero-mean (normalized) cross correlation coefficient, the sum of absolute differences, and the sum of squared differences. Fuzzy logic is a powerful tool for many image processing problems because of its ability to deal with ambiguous data. In this paper, we use a similarity measure based on fuzzy correlations in order to establish corner correspondence between sequence images in the presence of intensity variations and motion blur. The matching approach proposed here needs only to extract one set of corner points as candidates from the left image (first frame); their positions in the right image (second frame) are determined by matching, not by extraction. Experiments conducted on various image sequences demonstrate the superiority of our algorithm over standard and zero-mean cross correlation, as well as over a contemporary method using mutual information as a window similarity measure combined with graph matching techniques, under non-ideal conditions.
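Of the window-similarity measures the abstract lists, the sum of absolute differences and the zero-mean normalized cross correlation can be sketched as follows; note that ZNCC is invariant to the global gain/offset intensity changes that the fuzzy measure is also designed to tolerate:

```python
import math

def sad(a, b):
    """Sum of absolute differences between two flattened windows (lower = more similar)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def zncc(a, b):
    """Zero-mean normalized cross correlation in [-1, 1] (higher = more similar)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den
```

A uniform brightness shift leaves ZNCC unchanged but inflates SAD, which is why zero-mean measures are preferred under intensity variation.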

20.
In recent years, much research has been undertaken to investigate the difference between pupils' beliefs and accepted ideas about various scientific concepts, such as force, light, energy and electricity, and the implications of these findings for classroom practice have been considered. In solving scientific problems, many students share common misconceptions or 'models' of the behaviour of a system or the nature of a phenomenon. This article describes a computer program which uses questionnaires to try to diagnose these conceptual models and to provide feedback to students. The program is not domain-specific and questionnaires could be constructed for many topics, though work on the program so far has concentrated on diagnosing students' models of simple electrical circuits.
