20 similar documents found (search time: 0 ms)
1.
Defining the characteristics of phonatory vocal fold vibration is essential for studies that aim to understand the mechanism of voice production and for the clinical diagnosis of voice disorders. The application of high-speed digital imaging techniques to these studies makes it possible to capture sequences of images of the vibrating vocal folds at a frequency that can resolve a patient's actual vocal fold vibrations. The objective of this study is to introduce a new approach for automatic tracing of vocal fold motion from image sequences acquired by high-speed digital imaging of the larynx. The approach involves three processing steps: 1) global thresholding, in which the threshold value is selected on the basis of the image histogram, which is assumed to follow a Rayleigh distribution; 2) application of a morphology operator to remove isolated object regions; and 3) region growing to delineate the object (the vocal fold opening region) and to obtain the area of the glottis, where the segmented object obtained after global thresholding and the morphological operation is used as the seed region for the final region-growing operation. The performance, effectiveness, and validation of the approach are demonstrated using representative high-speed imaging recordings of subjects with normal and pathological voices.
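A minimal sketch of such a three-step segmentation pipeline (global threshold, morphological clean-up, seeded region growing), written with NumPy and SciPy; the Rayleigh-based threshold rule and all parameter values here are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy import ndimage


def segment_glottis(frame: np.ndarray) -> np.ndarray:
    """Segment a dark glottal opening in a grayscale high-speed frame.

    Illustrative pipeline: Rayleigh-motivated global threshold,
    morphological opening, then seeded region growing.
    """
    # 1) Global threshold. For a Rayleigh-distributed histogram the scale
    #    parameter is sigma = sqrt(E[X^2]/2); a multiple of sigma is used
    #    here as an assumed threshold rule.
    sigma = np.sqrt(np.mean(frame.astype(float) ** 2) / 2.0)
    binary = frame < 1.5 * sigma          # glottal opening is darker than tissue

    # 2) Morphological opening removes small isolated regions.
    binary = ndimage.binary_opening(binary, structure=np.ones((3, 3)))

    # 3) Region growing: keep the largest connected component as the seed
    #    and grow it by conditional dilation inside a relaxed intensity mask.
    labels, n = ndimage.label(binary)
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    seed = labels == (np.argmax(sizes) + 1)
    relaxed = frame < 2.0 * sigma          # slightly looser intensity mask
    grown = ndimage.binary_dilation(seed, mask=relaxed, iterations=10)
    return grown


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.rayleigh(scale=60.0, size=(128, 128)).clip(0, 255)
    frame[50:80, 55:70] = 10               # synthetic dark "glottis"
    mask = segment_glottis(frame.astype(np.uint8))
    print("glottal area (pixels):", int(mask.sum()))
```

Running the pipeline on every frame of a recording yields a glottal-area waveform over time, which is the quantity typically analyzed in such studies.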
2.
3.
Jones, M. E.; Agah, A. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 2002, 32(3): 261-271
The ability to create art is a uniquely human endeavor. Throughout history, humans have used paintings, drawings, songs, stories, and other art forms to communicate important ideas or events and to entertain. The advent of computers gave rise to a new medium for art, and computer-generated artwork has become popular in the entertainment and education industries. The paper uses genetic algorithms (GAs) to automatically evolve a set of unique digital images according to a predetermined set of criteria. By monitoring and automatically evaluating geometric features in images, the work reported in the paper aims to evolve interesting images with a variety of geometric features without the need for human intervention.
4.
Diabetic macular edema (DME) is an advanced symptom of diabetic retinopathy and can lead to irreversible vision loss. In this paper, a two-stage methodology for the detection and classification of DME severity from color fundus images is proposed. DME detection is carried out via a supervised learning approach using normal fundus images. A feature extraction technique is introduced to capture the global characteristics of the fundus images and discriminate normal images from DME images. Disease severity is assessed using a rotational asymmetry metric that examines the symmetry of the macular region. The performance of the proposed methodology and features is evaluated against several publicly available datasets. The detection performance has a sensitivity of 100% with specificity between 74% and 90%. Cases needing immediate referral are detected with a sensitivity of 100% and a specificity of 97%. The severity classification accuracy is 81% for moderate cases and 100% for severe cases. These results establish the effectiveness of the proposed solution.
5.
Iskander, D. R. IEEE Transactions on Biomedical Engineering, 2006, 53(6): 1134-1140
It is difficult to demarcate the limbus borders in standard intensity images of the eye, since the transition from the cornea (iris) to the sclera is gradual. Non-parametric techniques currently used for this task are not sufficiently precise to be adopted in ophthalmic applications where high precision in determining the limbus/pupil characteristics is required, as is the case, for example, in customized refractive corrections. Another difficulty in characterizing the limbus corneae is the inability to measure in vivo the outer outline of the limbus annulus without specialized illumination techniques. To overcome these limitations, we propose a parametric approach that fits a sigmoidal function to radial intensity profiles of the limbus corneae. By way of simulations, we show the superiority of the proposed parametric approach over an optimized non-parametric technique. Further, we review techniques for fitting ellipses to the estimated points of the limbus annulus and discuss clinical aspects of the proposed methodology. Initial clinical experiments showed close agreement between the proposed approach and limbus corneae estimates obtained manually by experienced clinicians.
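A minimal sketch of fitting a sigmoidal function to one radial intensity profile with SciPy; the specific four-parameter logistic form and the synthetic profile are assumptions for illustration, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import curve_fit


def sigmoid(r, lo, hi, r0, k):
    """Four-parameter logistic: intensity rises from `lo` (cornea/iris side)
    to `hi` (sclera side) around the transition radius `r0` with slope `k`."""
    return lo + (hi - lo) / (1.0 + np.exp(-k * (r - r0)))


# Synthetic radial intensity profile (radius in mm, arbitrary gray levels).
r = np.linspace(4.0, 8.0, 200)
true = sigmoid(r, 40.0, 180.0, 5.9, 6.0)
noisy = true + np.random.default_rng(1).normal(0.0, 5.0, r.size)

# Fit the parametric model; p0 is a rough initial guess.
params, _ = curve_fit(sigmoid, r, noisy, p0=[noisy.min(), noisy.max(), 6.0, 3.0])
lo, hi, r0, k = params
print(f"estimated limbus radius along this meridian: {r0:.2f} mm")
```

Repeating the fit along many meridians yields a set of limbus points to which an ellipse can then be fitted, which is the step the abstract discusses next.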
6.
Chen, R.; Jupp, D. L. B.; Woodcock, C. E.; Strahler, A. H. IEEE Transactions on Geoscience and Remote Sensing, 1993, 31(3): 736-746
A zero-hit run-length probability model for image statistics is derived. The statistics are based on the lengths of runs of pixels that do not include any part of the objects that define a scene model. The statistics are used to estimate the density and size of discrete objects (modeled as disks) from images in which the pixel size is significant relative to the object size. Using different combinations of disk size, density, and image resolution (pixel size) in simulated images, parameter estimation may be used to investigate the essential invertibility of object size and density. Analysis of the relative errors and 95% confidence intervals indicates the accuracy and reliability of the estimates. An integrated parameter, r, reveals relationships between the errors and combinations of the three basic parameters of object size, density, and pixel size. The method may be used to analyze real remotely sensed images if the simplifying assumptions are relaxed to include the greater complexity found in real data.
7.
Hidden digital watermarks in images (cited 146 times: 0 self-citations, 146 by others)
Chiou-Ting Hsu; Ja-Ling Wu. IEEE Transactions on Image Processing, 1999, 8(1): 58-68
An image authentication technique that embeds digital "watermarks" into images is proposed. Watermarking is a technique for labeling digital pictures by hiding secret information in the images. Sophisticated watermark embedding is a potential method to discourage unauthorized copying or to attest to the origin of the images. In our approach, we embed watermarks with visually recognizable patterns into the images by selectively modifying the middle-frequency parts of the image. Several variations of the proposed method are addressed. The experimental results show that the proposed technique successfully survives image processing operations, image cropping, and Joint Photographic Experts Group (JPEG) lossy compression.
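A minimal sketch of embedding a binary watermark into middle-frequency DCT coefficients of 8 × 8 image blocks, in the general spirit of the approach described above; the chosen coefficient position and the embedding strength are illustrative assumptions, not the authors' scheme.

```python
import numpy as np
from scipy.fftpack import dct, idct


def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")


def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")


def embed_watermark(image, bits, strength=8.0):
    """Embed one watermark bit per 8x8 block into a mid-frequency coefficient.

    `image` is a grayscale float array whose sides are multiples of 8;
    `bits` is a flat array of 0/1 values, one per block (row-major order).
    """
    out = image.astype(float).copy()
    h, w = out.shape
    pos = (3, 4)                      # assumed mid-frequency position
    k = 0
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = dct2(out[i:i + 8, j:j + 8])
            # Force the coefficient's sign/magnitude to encode the bit.
            block[pos] = strength if bits[k] else -strength
            out[i:i + 8, j:j + 8] = idct2(block)
            k += 1
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    bits = rng.integers(0, 2, size=(64 // 8) ** 2)
    marked = embed_watermark(img, bits)
    print("mean absolute change:", np.abs(marked - img).mean().round(2))
```

Mid-frequency coefficients are a common compromise: low-frequency changes are visible, while high-frequency changes are easily destroyed by JPEG compression.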
8.
Colorimetric restoration of digital images (cited 1 time: 0 self-citations, 1 by others)
A colorimetric approach to the restoration of digital images is presented. Assumptions are made to simplify the general problem into a more computable form. Two methods are developed, using a Karhunen-Loeve transformation and independent restoration schemes from earlier works, to solve the estimation problem in color image processing using multidimensional restoration. A comparison of the methods is presented, including the effects of parameters of interest to desktop scanners and digital cameras. The results for the SNRs and blurs studied indicate that using more than three color channels produces a slight numerical gain and a modest visual gain.
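A minimal sketch of the general idea of decorrelating color channels with a Karhunen-Loeve (PCA) transform and restoring each decorrelated channel independently before transforming back; the per-channel Gaussian smoothing stands in for a proper restoration filter and, like all constants here, is an assumption rather than the paper's method.

```python
import numpy as np
from scipy import ndimage


def kl_restore(image):
    """Decorrelate RGB channels via the Karhunen-Loeve transform, apply a
    simple per-channel smoothing (stand-in for an independent restoration
    filter), then transform back. `image` is an HxWx3 float array."""
    h, w, _ = image.shape
    flat = image.reshape(-1, 3)
    mean = flat.mean(axis=0)
    centered = flat - mean

    # KL transform = eigen-decomposition of the channel covariance matrix.
    cov = np.cov(centered, rowvar=False)
    _, vecs = np.linalg.eigh(cov)
    kl = centered @ vecs                      # decorrelated channels

    # Independent restoration of each decorrelated channel.
    kl_img = kl.reshape(h, w, 3)
    restored = np.stack(
        [ndimage.gaussian_filter(kl_img[..., c], sigma=1.0) for c in range(3)],
        axis=-1,
    )

    # Inverse KL transform back to RGB.
    out = restored.reshape(-1, 3) @ vecs.T + mean
    return out.reshape(h, w, 3)


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    clean = rng.integers(0, 256, size=(32, 32, 3)).astype(float)
    noisy = clean + rng.normal(0, 5.0, clean.shape)
    print("restored shape:", kl_restore(noisy).shape)
```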
9.
Image forensics is a form of image analysis for determining the condition of an image in the complete absence of any digital watermark or signature. It can be used to authenticate digital images and identify their sources. Meanwhile, the technology of exemplar-based inpainting provides a way to remove objects from an image and play visual tricks. In this paper, as a first attempt, a method based on a zero-connectivity feature and fuzzy membership is proposed to discriminate natural images from inpainted images. First, zero-connectivity labeling is applied to block pairs to yield a matching-degree feature for all blocks in the suspicious region; then fuzzy memberships are computed and the tampered regions are identified by a cut set. Experimental results demonstrate the effectiveness of our method in detecting inpainted images.
10.
Under the framework of computer-aided eye disease diagnosis, this paper presents an automatic optic disc (OD) detection technique. The proposed technique makes use of the unique circular brightness structure associated with the OD: the OD usually has a circular shape and is brighter than the surrounding pixels, whose intensity gradually darkens with distance from the OD center. A line operator is designed to capture this circular brightness structure; it evaluates the image brightness variation along multiple line segments of specific orientations that pass through each retinal image pixel. The orientation of the line segment with the minimum/maximum variation has a specific pattern that can be used to locate the OD accurately. The proposed technique has been tested on four public datasets that include 130, 89, 40, and 81 images of healthy and pathological retinas, respectively. Experiments show that the designed line operator is tolerant to different types of retinal lesion and imaging artifacts, and an average OD detection accuracy of 97.4% is obtained.
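A minimal sketch of a line operator of this general kind: for each pixel, the brightness variation is evaluated along line segments of several orientations passing through it, and the spread between the minimum and maximum variation is kept as a response map. The segment length, number of orientations, and the simple variance measure are illustrative assumptions; the full method additionally analyzes the orientation pattern of the min/max-variation segments to pinpoint the OD center.

```python
import numpy as np


def line_operator_map(image, length=21, n_orientations=12):
    """For each pixel, measure brightness variation along oriented line
    segments through it and return the max-min spread across orientations.
    Pixels near the boundary of a bright, circular structure tend to show
    a large spread between radial and tangential orientations."""
    h, w = image.shape
    half = length // 2
    padded = np.pad(image.astype(float), half, mode="reflect")
    offsets = np.arange(-half, half + 1)
    response = np.zeros((n_orientations, h, w))

    for k in range(n_orientations):
        theta = np.pi * k / n_orientations
        dy = np.rint(offsets * np.sin(theta)).astype(int)
        dx = np.rint(offsets * np.cos(theta)).astype(int)
        # Stack intensities sampled along the segment for every pixel at once.
        samples = np.stack(
            [padded[half + oy: half + oy + h, half + ox: half + ox + w]
             for oy, ox in zip(dy, dx)],
            axis=0,
        )
        response[k] = samples.var(axis=0)   # variation along this orientation

    return response.max(axis=0) - response.min(axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(4)
    img = rng.normal(100, 5, size=(96, 96))
    yy, xx = np.mgrid[:96, :96]
    img += 80 * np.exp(-((yy - 40) ** 2 + (xx - 60) ** 2) / 120.0)  # bright disc
    m = line_operator_map(img)
    print("pixel with strongest line-operator response:",
          np.unravel_index(np.argmax(m), m.shape))
```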
11.
We describe a knowledge-driven algorithm to automatically delineate the caudate nucleus (CN) region of the human brain from a magnetic resonance (MR) image. Since the lateral ventricles (LVs) are good landmarks for positioning the CN, the algorithm first extracts the LVs and then automatically localizes the CN from this information, guided by anatomic knowledge of the structure. The face validity of the algorithm was tested with 55 high-resolution T1-weighted magnetic resonance imaging (MRI) datasets, and segmentation results were overlaid onto the original image data for visual inspection. We further evaluated the algorithm by comparing the automated segmentation results to a "gold standard" established by human experts for these 55 MR datasets. Quantitative comparison showed a high intraclass correlation between the algorithm and the experts, as well as high spatial overlap between the regions-of-interest (ROIs) generated by the two methods. The mean spatial overlap ± standard deviation (defined as the intersection of the two ROIs divided by their union) was 0.873 ± 0.0234. The algorithm has been incorporated into a public-domain software program written in Java and thus has the potential to be of broad benefit to neuroimaging investigators interested in basal ganglia anatomy and function.
12.
13.
Mount, D. M.; Kanungo, T.; Netanyahu, N. S.; Piatko, C.; Silverman, R.; Wu, A. Y. IEEE Transactions on Image Processing, 2001, 10(12): 1826-1835
Computing discrete two-dimensional (2-D) convolutions is an important problem in image processing. In mathematical morphology, an important variant is computing binary convolutions, where the kernel of the convolution is a 0-1-valued function. This operation can be quite costly, especially when large kernels are involved. We present an algorithm for computing convolutions of this form, where the kernel of the binary convolution is derived from a convex polygon. Because the kernel is a geometric object, we allow the algorithm some flexibility in how it elects to digitize the convex kernel at each placement, as long as the digitization satisfies certain reasonable requirements; we say that such a convolution is valid. Given this flexibility, we show that it is possible to compute binary convolutions more efficiently than would normally be possible for large kernels. Our main result is an algorithm which, given an m × n image and a k-sided convex polygonal kernel K, computes a valid convolution in O(kmn) time. Unlike standard algorithms for computing correlations and convolutions, the running time is independent of the area or perimeter of K, and our techniques do not rely on computing fast Fourier transforms. Our algorithm is based on a novel use of Bresenham's (1965) line-drawing algorithm and prefix sums to update the convolution incrementally as the kernel is moved from one position to another across the image.
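A minimal sketch of the prefix-sum idea behind fast binary convolution, simplified to an axis-aligned rectangular kernel (a degenerate convex polygon): a summed-area table gives the hit count under the kernel at every placement in time independent of the kernel area. The incremental, Bresenham-based handling of general k-sided convex kernels in the paper is not reproduced here.

```python
import numpy as np


def binary_convolution_rect(image01, kh, kw):
    """Count, for every placement of a kh x kw all-ones kernel, how many
    1-pixels of the binary image fall under it, using a summed-area table.
    Runtime per placement is O(1), independent of the kernel's area."""
    # Prefix sums with a zero border so s[y, x] = sum of image01[:y, :x].
    s = np.zeros((image01.shape[0] + 1, image01.shape[1] + 1), dtype=np.int64)
    s[1:, 1:] = np.cumsum(np.cumsum(image01, axis=0), axis=1)

    h, w = image01.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    # Inclusion-exclusion over the four corners of each placement.
    counts = (s[kh:kh + out_h, kw:kw + out_w]
              - s[:out_h, kw:kw + out_w]
              - s[kh:kh + out_h, :out_w]
              + s[:out_h, :out_w])
    return counts


if __name__ == "__main__":
    rng = np.random.default_rng(5)
    img = (rng.random((8, 10)) > 0.5).astype(np.int64)
    fast = binary_convolution_rect(img, 3, 3)
    # Brute-force check of one placement.
    assert fast[2, 4] == img[2:5, 4:7].sum()
    print(fast)
```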
14.
Mila Nikolova. Journal of Visual Communication and Image Representation, 2009, 20(4): 254-274
We propose several very fast algorithms to dejitter digital video images in one iteration. They are based on an essential disproportion between the magnitude of the second-order differences along the columns of a real-world image and that of all its jittered versions. The optimal row positions are found using non-smooth, and possibly non-convex, local criteria applied to the second-order differences between consecutive rows. The dejittering iteration involves a number of steps equal to the number of rows of the image. These algorithms are designed for gray-value and color natural images, as well as for noisy images. A reasonable version of these algorithms can be considered parameter-free. We propose specific error measures to assess the success of dejittering. We provide experiments with random and structured jitter. The obtained results outperform the existing methods by far, both in quality and in speed (ours takes around 1 second for a 512 × 512 image in Matlab). Our algorithms are a crucial step towards real-time dejittering of digital video sequences.
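A minimal sketch of a one-pass, row-by-row dejittering scheme in this spirit: each row's horizontal shift is chosen to minimize a non-smooth criterion on the second-order differences with the rows already placed above it. The search range, the L1 criterion, and the greedy scan are illustrative assumptions rather than the paper's exact algorithms.

```python
import numpy as np


def dejitter_rows(img, max_shift=5):
    """Greedy single-pass dejittering of a grayscale image.

    Each row (from the third one down) is shifted horizontally so that the
    second-order vertical difference with the two rows already placed above
    it has minimal L1 magnitude.
    """
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(2, h):
        best_shift, best_cost = 0, np.inf
        for s in range(-max_shift, max_shift + 1):
            candidate = np.roll(out[y], s)
            # Second-order difference along the column direction.
            d2 = candidate - 2.0 * out[y - 1] + out[y - 2]
            # Ignore pixels wrapped around by np.roll.
            cost = np.abs(d2[max_shift:w - max_shift]).sum()
            if cost < best_cost:
                best_cost, best_shift = cost, s
        out[y] = np.roll(out[y], best_shift)
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(6)
    xx = np.linspace(0, 4 * np.pi, 128)
    clean = np.tile(np.sin(xx) * 100 + 128, (64, 1))
    jitter = rng.integers(-3, 4, size=64)
    jitter[:2] = 0                        # leave the first two rows in place
    jittered = np.array([np.roll(row, j) for row, j in zip(clean, jitter)])
    restored = dejitter_rows(jittered)
    print("mean abs. error after dejittering:",
          np.abs(restored - clean).mean().round(3))
```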
15.
Measuring perceptual contrast in digital images (cited 1 time: 0 self-citations, 1 by others)
Gabriele Simone; Marius Pedersen; Jon Yngve Hardeberg. Journal of Visual Communication and Image Representation, 2012, 23(3): 491-506
In this paper we present a novel method to measure perceptual contrast in digital images. We start from a previous measure of contrast developed by Rizzi et al. [26], which performs a multilevel analysis. In the first part of the work, the study investigates the contribution of the chromatic channels and whether a more complex neighborhood calculation can improve this previous measure of contrast. Following this, we analyze in detail the contribution of each level, developing a weighted multilevel framework. Finally, we investigate Regions-of-Interest in combination with our measure of contrast. To evaluate the performance of our approach, we carried out a psychophysical experiment in a controlled environment and performed extensive statistical tests. Results show an improvement in correlation between measured contrast and observers' perceived contrast when the variances of the three color channels, taken separately, are used as weighting parameters for the local contrast maps. Using Regions-of-Interest as weighting maps does not improve the ability of contrast measures to predict perceived contrast in digital images. This suggests that Regions-of-Interest cannot be used to improve contrast measures, as contrast is an intrinsic factor and is judged from the global impression of the image. This indicates that further work on contrast measures should account for the global impression of the image while preserving the local information.
16.
Given an elongated object in a digital image, a thinning transform is described that simultaneously calculates the coordinates of the object's medial-line elements and, for each element, a value equal to the width of the object at that element's coordinates. The transform is being used in the area of quantitative fractography in photomicrographs of rock samples.
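A minimal sketch of one common way to obtain such a width-annotated medial line, using a morphological skeleton combined with a Euclidean distance transform (twice the distance at a skeleton pixel approximates the local object width); this pairing of scikit-image and SciPy routines is an assumption for illustration, not the transform described in the paper.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize


def medial_line_with_width(binary):
    """Return (ys, xs, widths): medial-line pixel coordinates of a binary
    object together with the approximate object width at each pixel."""
    dist = ndimage.distance_transform_edt(binary)   # distance to background
    skel = skeletonize(binary)
    ys, xs = np.nonzero(skel)
    widths = 2.0 * dist[ys, xs]                     # diameter ~ 2 x radius
    return ys, xs, widths


if __name__ == "__main__":
    # Synthetic elongated object: a horizontal bar that narrows to the right.
    obj = np.zeros((40, 100), dtype=bool)
    for x in range(10, 90):
        half = 8 - 6 * (x - 10) // 80
        obj[20 - half:20 + half + 1, x] = True
    ys, xs, widths = medial_line_with_width(obj)
    print("width at left end  ~", widths[np.argmin(xs)].round(1))
    print("width at right end ~", widths[np.argmax(xs)].round(1))
```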
17.
A hybrid BTC-VQ-DCT (block truncation coding, vector quantization, and discrete cosine transform) image coding algorithm is presented. The algorithm combines the simple computation and edge-preservation properties of BTC and the high fidelity and high compression ratio of adaptive DCT with the high compression ratio and good subjective performance of VQ, and it can be implemented with significantly lower coding delays than either VQ or DCT alone. The bit-map generated by BTC is decomposed into a set of vectors which are vector quantized. Since the space of the BTC bit-map is much smaller than that of the original 8-bit image, a lookup-table-based VQ encoder has been designed to "fast encode" the bit-map. Adaptive DCT coding using residual error feedback is implemented to encode the high-mean and low-mean subimages. The overall computational complexity of BTC-VQ-DCT coding is much lower than that of either DCT or VQ, while the fidelity performance is competitive. The algorithm has strong edge-preserving ability because BTC is implemented as a pre-compression decimation step. The total compression ratio is about 10:1.
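A minimal sketch of plain block truncation coding (the BTC stage only): each block is reduced to a bit-map plus two reconstruction levels chosen to preserve the block's first two sample moments. The VQ of the bit-map and the adaptive DCT of the mean sub-images described above are not reproduced here; the block size and moment-preserving rule are the standard textbook choices.

```python
import numpy as np


def btc_encode_block(block):
    """Classic moment-preserving BTC of a single block.

    Returns the bit-map (pixels at or above the block mean) together with
    the reconstruction levels for the low and high groups.
    """
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    q = bitmap.sum()                       # number of "high" pixels
    n = block.size
    if q in (0, n):                        # flat block: one level suffices
        return bitmap, mean, mean
    low = mean - std * np.sqrt(q / (n - q))
    high = mean + std * np.sqrt((n - q) / q)
    return bitmap, low, high


def btc_decode_block(bitmap, low, high):
    return np.where(bitmap, high, low)


if __name__ == "__main__":
    rng = np.random.default_rng(7)
    block = rng.integers(0, 256, size=(4, 4)).astype(float)
    bitmap, low, high = btc_encode_block(block)
    rec = btc_decode_block(bitmap, low, high)
    # First two sample moments are preserved by construction.
    print("mean:", block.mean().round(2), "->", rec.mean().round(2))
    print("std :", block.std().round(2), "->", rec.std().round(2))
```

Because the reconstruction snaps every pixel to one of two levels chosen per block, sharp edges inside a block survive largely intact, which is the edge-preserving behavior the abstract refers to.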
18.
Watermarking digital images for copyright protection (cited 6 times: 0 self-citations, 6 by others)
O'Ruanaidh, J. J. K.; Dowling, W. J.; Boland, F. M. IEE Proceedings - Vision, Image and Signal Processing, 1996, 143(4): 250-256
A watermark is an invisible mark placed on an image that is designed to identify both the source of the image and its intended recipient. The authors present an overview of watermarking techniques and demonstrate a solution to one of the key problems in image watermarking, namely how to hide robust, invisible labels inside grey-scale or colour digital images.
19.
20.
An automatic algorithm has been developed for high-speed detection of cavity boundaries in sequential 2-D echocardiograms using an optimization algorithm called simulated annealing (SA). The algorithm has three stages. (1) A predetermined window of size n × m is decimated to size n' × m' after low-pass filtering. (2) An iterative radial gradient algorithm is employed to determine the center of gravity (CG) of the cavity. (3) Sixty-four radii originating from the CG defined in stage 2 are bounded by the high-probability region; each bounded radius is defined as a link in a one-dimensional, 64-member cyclic Markov random field. This algorithm is unique in that it combines spatial and temporal information along with a physical model in its decision rule, whereas most other algorithms base their decisions on spatial data alone. This is the first implementation of a relaxation algorithm for edge detection in echocardiograms. Results obtained using this algorithm on real data have been highly encouraging.
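A minimal sketch of using simulated annealing to estimate cavity boundary radii modeled as a 1-D cyclic chain: each of the 64 radii is perturbed at random, and the move is accepted by the Metropolis rule applied to an energy combining an (assumed) image edge term and a smoothness prior between neighboring radii. The synthetic edge evidence, energy weights, and cooling schedule are illustrative assumptions, not the published algorithm.

```python
import numpy as np

N_RADII = 64
rng = np.random.default_rng(8)

# Synthetic "edge evidence": for each of the 64 angular directions, the
# radius at which the strongest cavity edge is observed (noisy circle).
true_radius = 30.0
edge_radius = true_radius + rng.normal(0.0, 2.0, N_RADII)


def energy(r, smooth_weight=2.0):
    """Data term pulls each radius toward the observed edge; the cyclic
    smoothness term penalizes differences between neighboring radii."""
    data = np.sum((r - edge_radius) ** 2)
    smooth = np.sum((r - np.roll(r, 1)) ** 2)
    return data + smooth_weight * smooth


# Simulated annealing over the cyclic field of radii.
r = np.full(N_RADII, 20.0)             # initial guess
temperature = 50.0
for it in range(20000):
    i = rng.integers(N_RADII)
    proposal = r.copy()
    proposal[i] += rng.normal(0.0, 1.0)
    delta = energy(proposal) - energy(r)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        r = proposal                   # Metropolis acceptance rule
    temperature *= 0.9997              # geometric cooling schedule

print("mean estimated radius:", r.mean().round(2), "(true:", true_radius, ")")
```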