Similar Documents
20 similar documents found (search time: 156 ms)
1.
Region-based image retrieval (RBIR) is a branch of content-based image retrieval (CBIR): it builds on image segmentation and retrieves images by the similarity of local visual features. Because accurate image segmentation is still immature, RBIR performance is easily hurt by redundant and erroneous segmentation. To reduce the influence of segmentation, this paper proposes a region-based image retrieval method built on a foreground/background partition. The method obtains segmented regions through regular block partitioning, image classification, and effective-region localization, then applies a central object extraction algorithm (COEA) to obtain the main object of the image, and finally extracts color and texture features for similarity matching. An RBIR system named ObFind was implemented on this basis; experiments show that the method achieves retrieval performance comparable to SIMPLIcity at lower computational cost.
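The abstract does not specify the exact color and texture descriptors or the matching rule, so the following is only a minimal sketch of the final similarity-matching step under common assumptions (per-channel color histograms, a gradient-magnitude histogram as a rough texture proxy, and a weighted L1 distance); the COEA object-extraction step is not reproduced here.

```python
import numpy as np

def region_features(region_rgb, bins=8):
    """Illustrative color + texture features for one segmented region (H x W x 3, uint8)."""
    # Color: per-channel histograms, concatenated and normalized.
    hist = np.concatenate([
        np.histogram(region_rgb[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    hist /= hist.sum() + 1e-9

    # Texture proxy: histogram of gradient magnitudes of the gray image.
    gray = region_rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    tex = np.histogram(mag, bins=bins, range=(0, mag.max() + 1e-9))[0].astype(float)
    tex /= tex.sum() + 1e-9
    return hist, tex

def region_distance(f1, f2, w_color=0.6, w_texture=0.4):
    """Smaller distance = more similar; the weights are illustrative."""
    d_color = np.abs(f1[0] - f2[0]).sum()   # L1 distance between color histograms
    d_tex = np.abs(f1[1] - f2[1]).sum()     # L1 distance between texture histograms
    return w_color * d_color + w_texture * d_tex
```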

2.
《大众软件》(Popular Software), 2013(22): 9-10
On October 17, Nikon announced the worldwide release of the D5300, a DX-format entry-level DSLR. It offers 24.16 million effective pixels and uses a Nikon DX-format CMOS image sensor together with the new high-performance EXPEED 4 image processor…

3.
Based on image-analysis theory and the characteristics of SAR imagery, a detection algorithm is designed that produces the contours of water-surface targets. The algorithm works in three steps: (1) a target-detection method based on mathematical statistics forms the target regions; (2) a weighted statistical filtering operator and an isolated-point removal step suppress noise and regularize the water-surface targets to be detected; (3) the target contours are extracted. The system achieved good detection results on a range of SAR images.
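A minimal sketch of the three steps on a single-channel SAR amplitude image, with assumed stand-ins: a mean-plus-k-sigma threshold for the statistical detector, a median filter in place of the weighted statistical filtering operator, and morphological boundary extraction for the contour step; parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage

def detect_water_targets(sar, k=3.0, min_area=20):
    # Step 1: statistical detection - pixels far above the clutter mean become target candidates.
    mean, std = sar.mean(), sar.std()
    mask = sar > mean + k * std

    # Step 2: noise suppression - median filtering, then removal of isolated small components.
    mask = ndimage.median_filter(mask, size=3)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    for i, s in enumerate(sizes, start=1):
        if s < min_area:
            mask[labels == i] = False

    # Step 3: contour extraction - boundary pixels of each remaining target region.
    contours = mask & ~ndimage.binary_erosion(mask)
    return mask, contours
```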

4.
To address several difficulties in chest CT image segmentation, gradient vector flow (GVF) is introduced, combined with the characteristics of a greedy algorithm, to replace the image force of the traditional Snake. The GVF model is improved in terms of added external constraints, equation solving, and edge-map generation. The edge map used by the GVF diffusion equation is computed from the Canny edge-detection result, which clearly improves the GVF force field in weak-boundary regions and in regions of small gray-level variation. The components of the gradient vector flow in the force-field diffusion equation are normalized separately, which reduces the influence of the object-boundary force field on each point of the curve, overcomes the deep-concavity problem that the GVF model handles poorly, and effectively controls the external artificial constraint energy of the active contour. The improved GVF-Snake model segments chest CT images accurately and repeatably, with good robustness and practicality. The method improves the repeatability, accuracy, and generality of chest CT image segmentation; it can help physicians examine CT images accurately and provides a sound basis for related pathological research.
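A rough sketch of the core improvement described above: computing the edge map with Canny and then iterating the standard GVF diffusion equation, with the resulting field components normalized. The diffusion weight, iteration count, and Canny thresholds are illustrative, the full Snake evolution is omitted, and the input gray image is assumed to be 8-bit.

```python
import numpy as np
import cv2

def gvf_field(gray_u8, mu=0.2, iterations=80):
    """Iterate the GVF diffusion  u_t = mu * laplacian(u) - (u - fx) * (fx^2 + fy^2),
    starting from the gradient of a Canny edge map, then normalize the field."""
    edge = cv2.Canny(gray_u8, 50, 150).astype(float) / 255.0
    fy, fx = np.gradient(edge)
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2

    def laplacian(a):
        # Simple 4-neighbor Laplacian with periodic borders (good enough for a sketch).
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(iterations):
        u = u + mu * laplacian(u) - (u - fx) * mag2
        v = v + mu * laplacian(v) - (v - fy) * mag2

    # Normalize the field components, as in the improved model described above.
    norm = np.hypot(u, v) + 1e-9
    return u / norm, v / norm
```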

5.
Image segmentation plays an important role in multimedia, image processing, and computer vision. A two-region image segmentation method based on image segmentation entropy is proposed. First, exploiting the property of entropy that a random variable with larger entropy carries more information, whereas an image consisting of a single region carries little information (low entropy), an image segmentation entropy (ISE) measure is introduced to quantify how accurate a two-region segmentation is, which turns the two-region segmentation problem into an ISE-minimization problem. Then an iterative graph-cut algorithm is used to obtain an approximate solution of the ISE-minimization problem, producing the two-region segmentation. Experimental results show that the method is feasible and effective.
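The abstract does not give the exact definition of the ISE measure, so the following sketch only illustrates the underlying idea: score a candidate foreground/background labeling by the area-weighted entropies of the two regions and look for a labeling that makes the score small. The iterative graph-cut search itself is not shown; gray values are assumed to lie in 0-255.

```python
import numpy as np

def region_entropy(values, bins=64):
    """Shannon entropy of the gray-level distribution inside one region."""
    hist, _ = np.histogram(values, bins=bins, range=(0, 256))
    p = hist.astype(float) / (hist.sum() + 1e-12)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def segmentation_entropy(image, mask):
    """Illustrative two-region score: area-weighted sum of foreground and background entropies.
    A good two-region split keeps each region homogeneous, i.e. keeps this score small;
    a graph-cut solver would search over labelings minimizing such a measure."""
    fg, bg = image[mask], image[~mask]
    n = image.size
    return (fg.size / n) * region_entropy(fg) + (bg.size / n) * region_entropy(bg)
```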

6.
Implementation of medical image fusion based on the wavelet transform   (cited 2 times: 0 self-citations, 2 by others)
Objective: to fuse medical images using the wavelet transform. Methods: the relevant wavelet theory was first studied to understand the purpose and implementation of wavelets in image fusion; fusion was then carried out on the MATLAB platform, first with the Wavelet Toolbox and then by programming the fusion algorithm directly with MATLAB functions, and the results of the two approaches were compared. Results: fusing two non-homologous medical images (a CT image and an MRI image) showed that the two approaches produce the same fused image; fusion works well when the relative displacement of the same anatomical site between the two images is small, but poorly when the displacement is large or deformation is present. Conclusion: simple medical image fusion can be achieved with the Wavelet Toolbox, but as medical image fusion develops further and medical images become more complex, especially for abdominal and chest images, non-rigid registration is required before fused display, and the process becomes considerably more involved.
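A small sketch of wavelet-domain fusion of two already registered, same-sized grayscale images, using PyWavelets rather than the MATLAB Wavelet Toolbox mentioned above; the averaging/maximum-magnitude fusion rules and the wavelet choice are assumptions, not necessarily the rules used in the paper.

```python
import numpy as np
import pywt

def fuse_ct_mri(ct, mri, wavelet="db2", level=2):
    """Fuse two registered, same-sized grayscale images: average the approximation
    coefficients and keep the detail coefficient with the larger magnitude."""
    c1 = pywt.wavedec2(ct.astype(float), wavelet, level=level)
    c2 = pywt.wavedec2(mri.astype(float), wavelet, level=level)

    fused = [(c1[0] + c2[0]) / 2.0]                      # approximation: average
    for d1, d2 in zip(c1[1:], c2[1:]):                   # details: max-magnitude rule
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d1, d2)))
    return pywt.waverec2(fused, wavelet)
```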

7.
Fuji Xerox has developed a new image-editing technology based on human visual perception. The technology adjusts the colors and appearance of an image according to visual perception, changing either a specific region or the overall effect; by enhancing sharpness and altering the appearance, it can transform the impression of the whole image. To develop the technology, Fuji Xerox carried out in-depth independent analyses of image luminance frequency and of dark regions, so as to achieve a natural reproduction that matches how people visually perceive the objects in an image (Fig. 1).

8.
Panorama stitching has recently become an important research direction in image-based rendering (IBR). In building a panoramic view, the key technique is stitching overlapping images correctly and smoothly, without visible seams. Building on existing image registration techniques, this paper proposes a new registration algorithm based on feature regions: exploiting the similarity of corresponding feature regions in two overlapping images, it stitches the two images through three stages of feature selection, feature extraction, and feature matching. Experimental results show that the algorithm performs satisfactorily in both stitching speed and stitching quality.
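The abstract does not detail the three stages, so this sketch only illustrates the matching stage under a simple assumption: a feature region selected in the left image is matched against the right image by exhaustive normalized cross-correlation over a small search window, and the resulting offset can then drive the stitching.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two same-sized patches; 1.0 means a perfect match."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-12
    return float((a * b).sum() / denom)

def match_feature_region(left, right, region, search=40):
    """Slide the feature region from the left image over a search window in the right image
    and return the offset with the highest NCC score."""
    r0, r1, c0, c1 = region                       # row/col bounds of the feature region
    template = left[r0:r1, c0:c1].astype(float)
    h, w = template.shape
    best, best_off = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r0 + dr, c0 + dc
            if rr < 0 or cc < 0 or rr + h > right.shape[0] or cc + w > right.shape[1]:
                continue
            score = ncc(right[rr:rr + h, cc:cc + w].astype(float), template)
            if score > best:
                best, best_off = score, (dr, dc)
    return best_off, best
```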

10.
A region image retrieval algorithm using Harris feature points   (cited 3 times: 0 self-citations, 3 by others)
宋辉, 李弼程. 《计算机工程》(Computer Engineering), 2006, 32(7): 202-203, 206
To overcome the limitations of image segmentation techniques, an image retrieval algorithm based on feature-point matching is proposed. A block of the image is extracted manually as the query image; the Harris operator is then used to extract color feature points, each of which is described by corresponding color features; finally, feature-point matching is used to retrieve region images. Experiments show that the method is highly robust to changes in image brightness and to geometric transformations, and can effectively improve retrieval accuracy.
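A minimal sketch of the detection step with OpenCV's Harris detector, describing each corner by the mean color of a small neighborhood; the paper's actual color representation of the feature points and its point-matching scheme for retrieval are not specified in the abstract, so those parts are assumptions.

```python
import cv2
import numpy as np

def harris_color_points(image_bgr, max_points=100, block=2, ksize=3, k=0.04):
    """Detect Harris corners on the gray image, then describe each corner by the mean
    color of a 5x5 neighborhood (no non-maximum suppression, for brevity)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    response = cv2.cornerHarris(gray, block, ksize, k)

    # Keep the strongest responses.
    flat = np.argsort(response, axis=None)[::-1][:max_points]
    ys, xs = np.unravel_index(flat, response.shape)

    descriptors = []
    for y, x in zip(ys, xs):
        patch = image_bgr[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3]
        descriptors.append(patch.reshape(-1, 3).mean(axis=0))
    return np.column_stack([xs, ys]), np.array(descriptors)
```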

11.
The success of a statistical classification-design sample model in discriminating cloud-type samples of visible and infrared meteorological satellite data depends on the selection of the design parameters for the system and the ability of the labelled design samples to characterize and discriminate class patterns within the given geographical region. In a companion study by Parikh (1977), pattern recognition design parameters were examined for a four-class problem and a three-class problem for NOAA-1 cloud data. The purpose of this study was to evaluate pattern recognition systems designed in the previous study on SMS-1 design and test sets. Experiments were conducted for both a four-class problem (separation of “low”, “mix”, “cirrus”, and “cumulonimbus” samples) and a three-class problem (separation of “low”, “cirrus”, and “cumulonimbus” samples). For the four-class problem, decreases in classification accuracy ranging from 4% to 11% occurred when the pattern recognition systems were designed and tested on two different data sets selected from the same satellite orbit. A similar decrease was not observed for the well-defined three-class problem.

12.
In this paper, we provide a theoretical foundation for and improvements to the existing bytecode verification technology, a critical component of the Java security model, for mobile code used with the Java “micro edition” (J2ME), which is intended for embedded computing devices. In Java, remotely loaded “bytecode” class files are required to be bytecode verified before execution, that is, to undergo a static type analysis that protects the platform's Java run-time system from so-called type confusion attacks such as pointer manipulation. The data flow analysis that performs the verification, however, is beyond the capacity of most embedded devices because of the memory requirements that the typical algorithm will need. We propose to take a proof-carrying code approach to data flow analysis in defining an alternative technique called “lightweight analysis” that uses the notion of a “certificate” to reanalyze a previously analyzed data flow problem, even on poorly resourced platforms. We formally prove that the technique provides the same guarantees as standard bytecode safety verification analysis, in particular that it is “tamper proof” in the sense that the guarantees provided by the analysis cannot be broken by crafting a “false” certificate or by altering the analyzed code. We show how the Java bytecode verifier fits into this framework for an important subset of the Java Virtual Machine; we also show how the resulting “lightweight bytecode verification” technique generalizes and simulates the J2ME verifier (to be expected as Sun's J2ME “K-Virtual machine” verifier was directly based on an early version of this work), as well as Leroy's “on-card bytecode verifier,” which is specifically targeted for Java Cards.

13.
We study the asymptotics of the stationary sojourn time Z of a “typical customer” in a tandem of single-server queues. It is shown that in a certain “intermediate” region of light-tailed service time distributions, Z may take a large value mostly due to a large value of a single service time of one of the customers. Arguments used in the paper also allow us to obtain an elementary proof of the logarithmic asymptotics for the tail distribution of the stationary sojourn time in the whole class of light-tailed distributions.

14.
Finding typical instances is an effective approach to understand and analyze large data sets. In this paper, we apply the idea of typicality analysis from psychology and cognitive science to database query answering, and study the novel problem of answering top-k typicality queries. We model typicality in large data sets systematically. Three types of top-k typicality queries are formulated. To answer questions like “Who are the top-k most typical NBA players?”, the measure of simple typicality is developed. To answer questions like “Who are the top-k most typical guards distinguishing guards from other players?”, the notion of discriminative typicality is proposed. Moreover, to answer questions like “Who are the best k typical guards in whole representing different types of guards?”, the notion of representative typicality is used. Computing the exact answer to a top-k typicality query requires quadratic time which is often too costly for online query answering on large databases. We develop a series of approximation methods for various situations: (1) the randomized tournament algorithm has linear complexity though it does not provide a theoretical guarantee on the quality of the answers; (2) the direct local typicality approximation using VP-trees provides an approximation quality guarantee; (3) a local typicality tree data structure can be exploited to index a large set of objects. Then, typicality queries can be answered efficiently with quality guarantees by a tournament method based on a Local Typicality Tree. An extensive performance study using two real data sets and a series of synthetic data sets clearly shows that top-k typicality queries are meaningful and our methods are practical.
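A sketch of the quadratic-time computation of simple typicality that the approximation methods above are designed to avoid, under the common assumption of a Gaussian kernel over numeric attributes (the paper's exact typicality definition may differ):

```python
import numpy as np

def simple_typicality(points, sigma=1.0):
    """Simple typicality of each object: its average Gaussian-kernel similarity to all objects.
    The exact computation is O(n^2), which is why approximations (tournaments, VP-trees,
    local typicality trees) are needed for large data sets."""
    diff = points[:, None, :] - points[None, :, :]
    d2 = (diff ** 2).sum(axis=-1)
    sim = np.exp(-d2 / (2 * sigma ** 2))
    return sim.mean(axis=1)

def top_k_typical(points, k=3, sigma=1.0):
    t = simple_typicality(points, sigma)
    order = np.argsort(t)[::-1][:k]
    return order, t[order]
```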

15.
Computers & Geosciences, 2006, 32(8): 1052-1068
The most crucial and difficult task in landslide hazard analysis is estimating the conditional probability of occurrence of future landslides in a study area within a specific time period, given specific geomorphic and topographic features. This task can be addressed with a mathematical model that estimates the required conditional probability in two stages: “relative hazard mapping” and “empirical probability estimation.” The first stage divides the study area into a number of “prediction” classes according to their relative likelihood of occurrence of future landslides, based on the geomorphic and topographic data. Each prediction class represents a relative level of hazard with respect to other prediction classes. The number of classes depends on the quantity and quality of input data. Several quantitative models have been developed and tested for use in this stage; the objective is to delineate typical settings in which future landslides are likely to occur. In this stage, problems related to different degrees of resolution in the input data layers are resolved. The second stage is to empirically estimate the conditional probability of landslide occurrence in each prediction class by a cross-validation technique. The basic strategy is to divide past occurrences of landslides into two groups, a “modeling group” and a “validation group”. The first mapping stage is repeated, but the prediction is limited to only those landslide occurrences in the modeling group that are used to construct a new set of prediction classes. The new set of prediction classes is compared to the distribution of landslide occurrences in the validation group. Statistics from the comparison provide a quantitative measure of the conditional probability of occurrence of future landslides.
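A sketch of the second, empirical stage under stated assumptions: given a raster of prediction classes from the first stage and a split of past landslides into modeling and validation groups, the conditional probability per class is approximated by the fraction of each class occupied by validation-group landslide cells. The variable names and this particular estimator are illustrative, not the paper's formulas.

```python
import numpy as np

def empirical_class_probability(pred_class, landslide_mask, modeling_mask):
    """pred_class: integer class id per cell (from relative hazard mapping).
    landslide_mask: True where a past landslide occurred.
    modeling_mask: True for landslide cells used to build the prediction classes;
    the remaining landslide cells form the validation group."""
    validation = landslide_mask & ~modeling_mask
    probs = {}
    for c in np.unique(pred_class):
        in_class = pred_class == c
        area = in_class.sum()
        hits = (validation & in_class).sum()
        probs[int(c)] = hits / area if area else 0.0
    return probs
```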

16.
Research on humanoid robots has produced various uses for their body properties in communication. In particular, mutual relationships of body movements between a robot and a human are considered to be important for smooth and natural communication, as they are in human–human communication. We have developed a semi-autonomous humanoid robot system that is capable of cooperative body movements with humans using environment-based sensors and switching communicative units. Concretely, this system realizes natural communication by using typical behaviors such as: “nodding,” “eye-contact,” “face-to-face,” etc. It is important to note that the robot parts are NOT operated directly; only the communicative units in the robot system are switched. We conducted an experiment using the mentioned robot system and verified the importance of cooperative behaviors in a route-guidance situation where a human gives directions to the robot. The task requires a human participant (called the “speaker”) to teach a route to a “hearer” that is (1) a human, (2) a developed robot that performs cooperative movements, and (3) a robot that does not move at all. This experiment is subjectively evaluated through a questionnaire and an analysis of body movements using three-dimensional data from a motion capture system. The results indicate that the cooperative body movements greatly enhance the emotional impressions of human speakers in a route-guidance situation. We believe these results will allow us to develop interactive humanoid robots that sociably communicate with humans.

17.
In this work, we study the problem of annotating a large volume of financial text by learning from a small set of human-annotated training data. The training data is prepared by randomly selecting some text sentences from the large corpus of financial text. Conventionally, a bootstrapping algorithm is used to annotate a large volume of unlabeled data by learning from a small set of annotated data. However, that small set of annotated data has to be carefully chosen as seed data. Thus, our approach is a departure from the conventional bootstrapping approach, as we let the users randomly select the seed data. We show that our proposed algorithm has an accuracy of 73.56% in classifying the financial texts into the different categories (“Accounting”, “Cost”, “Employee”, “Financing”, “Sales”, “Investments”, “Operations”, “Profit”, “Regulations” and “Irrelevant”) even when the training data is just 30% of the total data set. Additionally, the accuracy improves by an approximate average of 2% for each 10% increase in the training data, and the accuracy of our system is 77.91% when the training data is about 50% of the total data set. As a dictionary of hand-chosen keywords prepared by domain experts is often used for financial text extraction, we assumed the existence of almost linearly separable hyperplanes between the different classes and therefore used a Linear Support Vector Machine along with a modified version of the Label Propagation Algorithm, which exploits the notion of neighborhood (in Euclidean space) for classification. We believe that our proposed techniques will be of help to Early Warning Systems used in banks, where large volumes of unstructured text need to be processed for better insights about a company.
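A minimal sketch of the linear-SVM part of such a pipeline with scikit-learn; the modified label-propagation refinement over Euclidean neighborhoods is omitted, and the TF-IDF features and preprocessing are assumptions rather than the paper's exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

CATEGORIES = ["Accounting", "Cost", "Employee", "Financing", "Sales",
              "Investments", "Operations", "Profit", "Regulations", "Irrelevant"]

def train_and_annotate(seed_texts, seed_labels, unlabeled_texts):
    """Fit a linear SVM on the small, randomly selected seed set and use it to annotate
    the rest of the corpus (the label-propagation refinement is not shown here)."""
    vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
    X_seed = vectorizer.fit_transform(seed_texts)

    clf = LinearSVC()   # assumes near-linearly-separable classes, as argued above
    clf.fit(X_seed, seed_labels)

    X_rest = vectorizer.transform(unlabeled_texts)
    return clf.predict(X_rest)
```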

18.
From an anthropological viewpoint, “accessibility” is not so much a technological and design project as it is a cultural construction, a cognitive schema through which graphic designers and technologists imagine audiences and create appropriate graphic designs that will be “accessible” to that audience. The ethnographer's task is the specification of key actors, institutions and discourses active in the making and remaking of accessibility in a given context. In this article, we examine how Egyptian Web producers at the turn of the millennium (1999–2001) sought to design Web portals that would allow the “typical” Egyptian to easily access the World Wide Web. We argue, first, that Egyptian Web producers are deeply influenced by national and international discourses that frame IT as a national mission for socioeconomic development. Second, we found that in the absence of clear definitions of the Web audience, Web producers imagined a “typical” Egyptian that contradicted their own experiences of users of the Web. Finally, we found that Egyptian Web producers largely borrowed pre-existing models, using design elements to “inflect” their sites with an Egyptian motif. However, the conceptual models of access and related design strategies created by Egyptian Web producers were out of touch with Egyptian social realities, contributing to a collapse of most Web portal projects.

19.
This article describes an algorithm for incremental parsing of expressions in the context of syntax-directed editors for programming languages. Since a syntax-directed editor represents programs as trees and statements and expressions as nodes in trees, making minor modifications in an expression can be difficult. Consider, for example, changing a “+” operator to a “*” operator, or adding a short subexpression at a syntactically but not structurally correct position, such as inserting “) * (d” at the # mark in “(a + b # + c)”. To make these changes in a typical syntax-directed editor, the user must understand the tree structure and type a number of tree-oriented construction and manipulation commands. This article describes an algorithm that allows the user to think in terms of the syntax of the expression as it is displayed on the screen (in infix notation) rather than in terms of its internal representation (which is effectively prefix), while maintaining the benefits of syntax-directed editing. This algorithm is significantly different from other incremental parsing algorithms in that it does not involve modifications to a traditional parsing algorithm or the overhead of maintaining a parser stack or any data structure other than the syntax tree. Instead, the algorithm applies tree transformations, in real-time as each token is inserted or deleted, to maintain a correct syntax tree.

20.
In this paper, we study multiparametric sensitivity analysis of the additive model in data envelopment analysis using the concept of maximum volume in the tolerance region. We construct critical regions for simultaneous and independent perturbations in all inputs/outputs of an efficient decision making unit. Necessary and sufficient conditions are derived to classify the perturbation parameters as “focal” and “nonfocal.” Nonfocal parameters can have unlimited variations because of their low sensitivity in practice and these parameters can be deleted from the final analysis. For focal parameters a maximum volume region is characterized. Theoretical results are illustrated with the help of a numerical example.
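For reference, the additive DEA model under study can be written in its standard slack-maximization form (the paper's notation may differ; the perturbations analyzed above act on the evaluated unit's inputs and outputs x_{i0}, y_{r0}):

\[
\begin{aligned}
\max\quad & \sum_{i=1}^{m} s_i^{-} + \sum_{r=1}^{s} s_r^{+} \\
\text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j x_{ij} + s_i^{-} = x_{i0}, \qquad i = 1,\dots,m,\\
& \sum_{j=1}^{n} \lambda_j y_{rj} - s_r^{+} = y_{r0}, \qquad r = 1,\dots,s,\\
& \sum_{j=1}^{n} \lambda_j = 1, \qquad \lambda_j,\ s_i^{-},\ s_r^{+} \ge 0 ,
\end{aligned}
\]

where a decision making unit is efficient exactly when all optimal slacks are zero.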
