Similar Articles
20 similar articles retrieved.
1.
Breast cancer is the most common cancer among women. In computer-aided diagnosis (CAD) systems, several studies have investigated the wavelet transform as a multiresolution tool for texture analysis, with the resulting features serving as inputs to a classifier. The polynomial classifier is attractive because it provides a single model for optimal separation of the classes. In this paper, a system is proposed for texture analysis and classification of lesions in mammographic images. Multiresolution features were extracted from the region of interest of a given image, computed with three different wavelet functions: Daubechies 8, Symlet 8 and bi-orthogonal 3.7. For classification, we used the polynomial classification algorithm to label mammograms as normal or abnormal, and compared it with other artificial-intelligence algorithms (decision tree, SVM, k-NN). A receiver operating characteristic (ROC) curve is used to evaluate the performance of the proposed system. Evaluated on 360 digitized mammograms from the DDSM database, the algorithm achieves an area under the ROC curve (Az) of 0.98 ± 0.03, and the polynomial classifier proved better than the other classification algorithms.
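As an illustration of the multiresolution feature-extraction step described above, the following sketch computes per-subband energies with the PyWavelets library; the helper name `wavelet_texture_features` and the random stand-in ROI are assumptions for this example, not code from the paper:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_texture_features(roi, wavelet="db8", level=3):
    """Per-subband energies of a 2-D wavelet decomposition as texture features."""
    coeffs = pywt.wavedec2(roi, wavelet, level=level)
    feats = [np.mean(np.square(coeffs[0]))]           # approximation subband
    for cH, cV, cD in coeffs[1:]:                     # 3 detail subbands per level
        feats.extend(np.mean(np.square(c)) for c in (cH, cV, cD))
    return np.asarray(feats)

rng = np.random.default_rng(0)
roi = rng.random((128, 128))                          # stand-in for a mammogram ROI
f_db8 = wavelet_texture_features(roi, "db8")          # Daubechies 8
f_bior = wavelet_texture_features(roi, "bior3.7")     # bi-orthogonal 3.7
```

With three decomposition levels this yields a 10-dimensional feature vector (one approximation energy plus three detail energies per level) that could be fed to any of the classifiers compared in the abstract.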

2.
3.
Reversible contrast mapping (RCM) and its modified versions are used extensively in reversible watermarking (RW) to embed secret information into digital content. RCM-based RW applies a simple integer transform to pairs of pixels, whose least significant bits (LSBs) are then used for data embedding. The transform is perfectly invertible even if the LSBs of the transformed pixels are lost during embedding. RCM offers a high embedding rate at relatively low visual (embedding) distortion, and its low computation cost and ease of hardware realization make it attractive for real-time implementation. To this end, this paper proposes a field programmable gate array (FPGA) based very large scale integration (VLSI) architecture of the RCM-RW algorithm for digital images that can serve media authentication in real-time environments. Two architectures are developed, one for 8 × 8 blocks and one for 32 × 32 blocks. The proposed architecture uses a 6-stage pipeline to speed up circuit operation. For a cover image with 32 × 32 blocks, it requires 9881 slices, 9347 slice flip-flops, 11,291 4-input LUTs and 3 BRAMs, and achieves a data rate of 1.0395 Mbps at an operating frequency as high as 98.76 MHz.
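The pairwise integer transform at the heart of RCM can be sketched in a few lines, following the commonly cited forward/inverse pair x′ = 2x − y, y′ = 2y − x; the function names and demo values are illustrative, and the LSB-embedding logic of the full scheme is omitted:

```python
def rcm_forward(x, y, L=255):
    """Forward RCM integer transform on a pixel pair: x' = 2x - y, y' = 2y - x."""
    xp, yp = 2 * x - y, 2 * y - x
    if not (0 <= xp <= L and 0 <= yp <= L):
        raise ValueError("pixel pair outside the RCM transform domain")
    return xp, yp

def rcm_inverse(xp, yp):
    """Exact inverse: 2x' + y' = 3x and x' + 2y' = 3y, so the division is exact."""
    return (2 * xp + yp) // 3, (xp + 2 * yp) // 3
```

Because 2x′ + y′ is always a multiple of 3, integer division recovers the original pair exactly, which is what makes the mapping reversible.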

4.
This paper presents a new hardware-oriented approach for the extraction of disparity maps from stereo images. The proposed method is based on the Adaptive Census Transform introduced here, which exploits adaptive support weights during the image transformation; the adaptively weighted sum of SADs is then used as the dissimilarity metric. Quality tests show that the proposed method reaches significantly better accuracy than alternative hardware-oriented approaches. To demonstrate its practical hardware feasibility, a specific architecture has been designed and implemented on a single FPGA chip. This VLSI implementation reaches a frame rate of up to 68 fps for 640 × 480 stereo images, using just 80,000 slices and 32 RAM blocks of a Virtex-6 chip.
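A plain (non-adaptive) census transform, the starting point the paper builds on, can be sketched as follows; the adaptive support weights and the weighted SAD aggregation are omitted, borders are handled by wrapping for brevity, and all names are illustrative:

```python
import numpy as np

def census_transform(img, win=3):
    """Plain census transform: one bit per neighbour, set where neighbour < centre."""
    r = win // 2
    out = np.zeros(img.shape, dtype=np.uint32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neigh = np.roll(np.roll(img, dy, axis=0), dx, axis=1)  # wrap at borders
            out = (out << 1) | (neigh < img).astype(np.uint32)
    return out

def hamming(a, b):
    """Dissimilarity between two census codes (bit count of the XOR)."""
    return bin(int(a) ^ int(b)).count("1")
```

Matching costs between left and right views are then Hamming distances between census codes, which maps naturally onto FPGA logic since only comparators and XOR/popcount circuits are needed.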

5.
In biometric systems, reference facial images captured during enrollment are commonly secured using watermarking, where invisible watermark bits are embedded into these images. Evolutionary Computation (EC) is widely used to optimize embedding parameters in intelligent watermarking (IW) systems. Traditional IW methods represent all blocks of a cover image as candidate embedding solutions of EC algorithms, and suffer from premature convergence when dealing with high-resolution grayscale facial images. For instance, optimizing the embedding for a 2048 × 1536 pixel grayscale facial image at 1 bit per 8 × 8 pixel block involves 49k variables represented with 293k binary bits. Such large-scale global optimization problems cannot be decomposed into smaller independent ones because watermarking metrics are calculated over the entire image. In this paper, a Blockwise Coevolutionary Genetic Algorithm (BCGA) is proposed for high-dimensional IW optimization of the embedding parameters of high-resolution images. BCGA is based on cooperative coevolution between candidate solutions at the block level, using a local Block Watermarking Metric (BWM). It is characterized by a novel elitism mechanism driven by local blockwise metrics, in which blocks with higher BWM values are selected to form candidate solutions of higher global fitness. The crossover and mutation operators of BCGA are also performed at the block level. Experimental results on the PUT face image database indicate a 17% improvement in fitness for BCGA compared to a classical GA. Owing to its improved exploration capabilities, BCGA converges in fewer generations, indicating an optimization speedup.

6.
Quantification of pavement crack data is one of the most important criteria in determining optimum pavement maintenance strategies. Multiresolution analysis such as wavelet decomposition provides very good analytical tools for different scales of pavement analysis and distress classification. This paper presents an automatic diagnosis system for detecting and classifying pavement crack distress based on the Wavelet–Radon transform (WR) and Dynamic Neural Network (DNN) threshold selection. The proposed system combines feature extraction using WR with classification using a neural network. The performance of the WR + DNN system is compared with that of a static neural network (SNN). In the test stage, the proposed method was applied to a pavement image database to evaluate system performance. The correct classification rate (CCR) of the proposed system is over 99%. This research demonstrates that the WR + DNN method can be used efficiently for fast automatic pavement distress detection and classification. The details of the image-processing technique and the characteristics of the system are also described.

7.
8.
Multiresolution wavelet analysis of pressure variations in a gas turbine compressor reveals precursors of stall and surge. Signals from eight pressure sensors positioned at various places within the compressor were recorded and digitized in three different operating modes under stationary conditions, with a recording interval of 1 ms over 5–6 s. At a scale of 32 intervals, the variance of the wavelet coefficients shows a remarkable drop of about 40% for more than 1 s prior to the development of the malfunction. A shuffled sample of the same pressure values does not show such a drop, demonstrating the dynamical origin of this effect. Higher-order correlation moments reveal different slopes in the two regions distinguished by their variance values. The log–log dependence of the moments does not show clear fractal behavior, because the scales of 16 and 32 intervals do not lie on the straight line of monofractals; this is a clear indication of the nonlinear response of the system at this scale. These results provide a means for automatic engine regulation, preventing possible failures.
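The shuffled-surrogate comparison can be mimicked on synthetic data: an autocorrelated signal has much less fine-scale wavelet energy than its shuffled (whitened) copy, which is the kind of variance contrast the abstract exploits. This sketch uses PyWavelets on an AR(1) surrogate, not the compressor data:

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
n, phi = 4096, 0.95
noise = rng.standard_normal(n)
sig = np.empty(n)                      # AR(1) surrogate "pressure" record
sig[0] = noise[0]
for t in range(1, n):
    sig[t] = phi * sig[t - 1] + noise[t]

def detail_variances(x, wavelet="db4", level=5):
    """Variance of the detail coefficients at each scale (coarsest first)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return [float(np.var(d)) for d in coeffs[1:]]    # finest scale is last

var_orig = detail_variances(sig)
var_shuf = detail_variances(rng.permutation(sig))    # shuffling destroys the dynamics
```

Shuffling preserves the marginal distribution but flattens the spectrum, so the finest-scale detail variance rises sharply relative to the original series, mirroring the dynamical-origin argument in the abstract.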

9.
In this paper, we present a method to recover the parameters governing the reflection of light from a surface using a single hyperspectral image. To do this, we view the image radiance as a combination of specular and diffuse reflection components and present a cost functional suitable for iterative least-squares optimisation. This optimisation process is quite general and can be applied to a number of reflectance models widely used in the computer vision and graphics communities. We elaborate on the use of these models in our optimisation process and provide a variant of the Beckmann–Kirchhoff model which incorporates the Fresnel reflection term. We show results on synthetic images and illustrate how the recovered photometric parameters can be employed for skin recognition in real-world imagery, where our estimated albedo yields a classification rate of 95.09 ± 4.26%, compared to 90.94 ± 6.12% for an alternative. We also show quantitative results for the estimation of the index of refraction, where our method delivers an average per-pixel angular error of 0.15°, a considerable improvement over an alternative that yields an error of 9.9°.

10.
Noise elimination is an important pre-processing step for magnetic resonance (MR) images used for clinical purposes. In the present study, the bilateral filter (BF), an edge-preserving method, was used for Rician noise removal in MR images. The choice of BF parameters affects denoising performance; therefore, as a novel approach, the parameters of BF were optimized using a genetic algorithm (GA). First, Rician noise with different variances (σ = 10, 20, 30) was added to simulated T1-weighted brain MR images. To find the optimum filter parameters, GA was applied to the noisy images over search regions of window size {3 × 3, 5 × 5, 7 × 7, 11 × 11, 21 × 21}, spatial sigma [0.1–10] and intensity sigma [1–60]. The peak signal-to-noise ratio (PSNR) served as the fitness value for optimization. After determining the optimal parameters, we evaluated the proposed BF parameters on both simulated and clinical MR images. To understand the importance of parameter selection in BF, we compared denoising with the proposed parameters against previously used BFs using quality metrics such as the mean squared error (MSE), PSNR, signal-to-noise ratio (SNR) and structural similarity index metric (SSIM). The quality of the denoised images with the proposed parameters was validated by both visual inspection and quantitative metrics. The experimental results showed that the BF with the proposed parameters outperformed BFs with previously proposed parameters in both edge preservation and removal of different levels of Rician noise from MR images. It can be concluded that the denoising performance of BF is highly dependent on optimal parameter selection.
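The parameter-search idea can be sketched with a naive bilateral filter and a PSNR fitness. For brevity this uses a tiny grid search over a few parameter triples and Gaussian noise as a stand-in for Rician noise, rather than a full GA; all names and data are illustrative:

```python
import numpy as np

def bilateral(img, win, sigma_s, sigma_r):
    """Brute-force bilateral filter (small images only)."""
    r = win // 2
    pad = np.pad(img, r, mode="reflect")
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    gs = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_s ** 2))   # spatial kernel
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + win, j:j + win]
            gr = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))  # range kernel
            w = gs * gr
            out[i, j] = (w * patch).sum() / w.sum()
    return out

def psnr(ref, x, peak=255.0):
    return 10 * np.log10(peak ** 2 / np.mean((ref - x) ** 2))

rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0.0, 255.0, 32), (32, 1))   # smooth synthetic image
noisy = clean + rng.normal(0.0, 20.0, clean.shape)      # Gaussian stand-in for Rician noise
# tiny grid search over (window, spatial sigma, intensity sigma); a GA would evolve these
best = max(
    [(w, ss, sr) for w in (3, 5) for ss in (1.0, 3.0) for sr in (10.0, 30.0, 60.0)],
    key=lambda p: psnr(clean, bilateral(noisy, *p)),
)
```

A GA would replace the grid with a population of parameter triples evolved by selection, crossover and mutation, but the fitness evaluation (PSNR of the filtered image against the clean reference) is the same.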

11.
Kam Leung Yeung, Li Li. Displays, 2013, 34(2): 165–170
We have previously shown that concurrent head movements impair head-referenced image motion perception when compensatory eye movements are suppressed (Li, Adelstein, & Ellis, 2009) [16]. In this paper, we examined the effect of the field of view (FOV) on perceiving world-referenced image motion during concurrent head movements. Participants rated the motion magnitude of a horizontally oscillating checkerboard image presented on a large screen while making yaw or pitch head movements, or holding their heads still. As the image motion was world-referenced, head motion elicited compensatory eye movements from the vestibulo-ocular reflex to maintain gaze on the display. The checkerboard image had either a large (73°H × 73°V) or a small (25°H × 25°V) FOV. We found that perceptual sensitivity to world-referenced image motion was reduced by 20% during yaw and pitch head movements compared to the veridical levels when the head was still, and this reduction did not depend on the display FOV size. Reducing the display FOV from 73°H × 73°V to 25°H × 25°V caused an overall underestimation of image motion by 7% across the head-movement and head-still conditions. We conclude that observers have reduced perceptual sensitivity to world-referenced image motion during concurrent head movements, independent of FOV size. The findings are applicable to the design of virtual-environment countermeasures that mitigate the perception of spurious motion arising from head-tracking system latency.

12.
This paper presents a novel approach to image retrieval, named multi-joint histogram based modelling (MJHM), in which joint correlation histograms are constructed between the motif and texton maps. First, the quantized image is divided into non-overlapping 2 × 2 grids. Each grid is then replaced by a scan-motif value and a texton value to construct the transformed motif and texton maps (images), respectively. The motif map minimizes the local gradient while traversing the 2 × 2 grid, and the texton map identifies the equality of grayscales within it. Finally, correlation histograms are constructed between the transformed motif and texton maps. The performance of the proposed method (MJHM) is tested in two experiments on the Corel-5K and Corel-10K benchmark databases. The results show significant improvements in precision, average retrieval precision (ARP), recall and average retrieval rate (ARR) compared to the multi-texton histogram (MTH), the smart content-based image retrieval system (CMCM) and other state-of-the-art techniques for image retrieval.
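One illustrative way to label each 2 × 2 grid by the equality of its quantized grayscales, in the spirit of the texton map described above (this is a simplified stand-in for the paper's exact texton definition; the function name and encoding are hypothetical):

```python
import numpy as np

def texton_map(img, levels=8):
    """Label each non-overlapping 2x2 grid by which of its quantized grayscales are equal."""
    q = (np.asarray(img, dtype=np.int64) * levels // 256).clip(0, levels - 1)
    H2, W2 = q.shape[0] // 2 * 2, q.shape[1] // 2 * 2
    a = q[0:H2:2, 0:W2:2]; b = q[0:H2:2, 1:W2:2]      # top-left, top-right
    c = q[1:H2:2, 0:W2:2]; d = q[1:H2:2, 1:W2:2]      # bottom-left, bottom-right
    # 4-bit code from pairwise-equality tests inside the grid
    return ((a == b).astype(np.int64)
            | ((c == d).astype(np.int64) << 1)
            | ((a == c).astype(np.int64) << 2)
            | ((b == d).astype(np.int64) << 3))
```

A retrieval system would then histogram these texton labels jointly with the motif labels of the same grids to obtain the correlation histograms used as the image signature.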

13.
The implicit Colebrook–White equation has been widely used to estimate the friction factor for turbulent fluid flow in rough pipes. In this paper, a state-of-the-art review of the currently available explicit alternatives to the Colebrook–White equation is presented. An extensive comparison test was carried out on a 20 × 500 grid spanning a wide range of relative roughness (ε/D) and Reynolds number (R) values (1 × 10⁻⁶ ≤ ε/D ≤ 5 × 10⁻²; 4 × 10³ ≤ R ≤ 10⁸), covering a large portion of the turbulent flow zone of the Moody diagram. Based on this comprehensive error analysis, the pairs of ε/D and R values at which the maximum absolute and maximum relative errors occur are identified. The best of these approximations provides friction factor estimates characterized by a mean absolute error of 5 × 10⁻⁴, a maximum absolute error of 4 × 10⁻³, a mean relative error of 1.3% and a maximum relative error of 5.8% over the entire range of ε/D and R values. For practical purposes, the complete results for the maximum and mean relative errors versus the 20 sets of ε/D values are also shown in two comparative figures. This error analysis makes it possible to identify the most accurate formula among all previous explicit models and shows its great flexibility for estimating the turbulent flow friction factor. Comparative analysis of the mean relative error profile revealed that the ranking of the six best-fitted equations agrees well with the model selection criteria reported in the recent literature for all performed simulations.
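The flavor of such a comparison can be seen by checking one well-known explicit formula, the Swamee–Jain approximation, against a fixed-point iteration of the implicit Colebrook–White equation (this illustrates the methodology on a single (ε/D, R) pair, not the paper's own 20 × 500 grid):

```python
import math

def colebrook(rel_rough, Re, iters=50):
    """Fixed-point iteration of the implicit Colebrook-White equation.

    Iterates s = -2 log10(eps/D / 3.7 + 2.51 s / Re), where s = 1 / sqrt(f).
    """
    s = 4.0
    for _ in range(iters):
        s = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * s / Re)
    return 1.0 / s ** 2

def swamee_jain(rel_rough, Re):
    """One widely used explicit alternative to Colebrook-White."""
    return 0.25 / math.log10(rel_rough / 3.7 + 5.74 / Re ** 0.9) ** 2

f_implicit = colebrook(1e-4, 1e6)     # reference value by iteration
f_explicit = swamee_jain(1e-4, 1e6)   # closed-form estimate
```

The fixed-point map is strongly contractive over the turbulent range, so a few dozen iterations give a reference value against which the relative error of any explicit formula can be measured, exactly the kind of error statistic the review tabulates.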

14.
Parallel Computing, 2014, 40(5–6): 144–158
One of the main difficulties with multi-point statistical (MPS) simulation based on annealing techniques or genetic algorithms is the excessive amount of time and memory required to achieve convergence. In this work we propose code optimizations and parallelization schemes for a genetic-based MPS code with the aim of speeding up execution. The code optimizations reduce cache misses in array accesses, avoid branch instructions and increase the locality of the accessed data. The hybrid parallelization scheme combines fine-grain parallelization of loops using a shared-memory programming model (OpenMP) with coarse-grain distribution of load among several computational nodes using a distributed-memory programming model (MPI). Convergence, execution time and speed-up results are presented using 2D training images of sizes 100 × 100 × 1 and 1000 × 1000 × 1 on a distributed shared-memory supercomputing facility.

15.
This article discusses the detection of faults occurring during friction stir welding, using the discrete wavelet transform of force and torque signals. The workpieces were AA1100 aluminum alloy plates 2.5 mm thick, 200 mm long and 80 mm wide. The presence of a defect in the weld causes sudden changes in the force signal (Z-load), and such abrupt changes are readily detected with the discrete wavelet transform. Statistical features such as the variance and the squared error of the detail coefficients are used to localize the defective zone, as they show clearer variations in the defective area than the detail coefficients themselves.
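The localization idea can be sketched with PyWavelets on a synthetic force signal containing an injected abrupt change; the local energy of the level-1 detail coefficients peaks near the defect (the data, defect position and variable names here are all illustrative):

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
n = 1024
force = np.sin(np.linspace(0.0, 20.0 * np.pi, n)) + 0.05 * rng.standard_normal(n)
force[600] += 3.0                              # injected abrupt change (stand-in for a defect)

cA, cD = pywt.dwt(force, "db4")                # single-level discrete wavelet transform
energy = np.convolve(cD ** 2, np.ones(8) / 8.0, mode="same")  # local detail energy
fault_idx = 2 * int(np.argmax(energy))         # map detail index back to the signal scale
```

Windowed statistics of the detail coefficients, as the abstract notes, are more robust locators than the raw coefficients because they aggregate the burst of high-frequency energy around the defect.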

16.
An electrochemical sensor based on a triazole (TA) self-assembled monolayer (SAM) modified gold electrode (TA SAM/Au) was fabricated, and the electrochemical behavior of epinephrine (EP) at the TA SAM/Au was studied. The TA SAM/Au shows excellent electrocatalytic activity for the oxidation of EP and accelerates the electron transfer rate; the diffusion coefficient is 1.135 × 10−6 cm2 s−1. Under the optimum experimental conditions (0.1 mol L−1 sodium borate buffer at pH 4.4, accumulation time 180 s, accumulation potential 0.6 V, scan rate 0.1 V s−1), the cathodic peak current of EP varies linearly with its concentration in the ranges 1.0 × 10−7 to 1.0 × 10−5 mol L−1 and 1.0 × 10−5 to 6.0 × 10−4 mol L−1 by square wave adsorptive stripping voltammetry (SWASV), with correlation coefficients of 0.9985 and 0.9996, respectively. The detection limit is as low as 1.0 × 10−8 mol L−1, and the TA SAM/Au can be used to determine EP in practical injection samples. Meanwhile, the oxidation peak potentials of EP and ascorbic acid (AA) are well separated, by about 200 ± 10 mV, at the TA SAM/Au, and the oxidation peak current increases approximately linearly with the concentration of both EP and AA in the range 2.0 × 10−5 to 1.6 × 10−4 mol L−1, allowing simultaneous determination of EP and AA.

17.
In this paper, a prediction model for wind farm power forecasting is proposed that combines the wavelet transform, chaotic time series analysis and the GM(1,1) method. The wavelet transform is used to decompose the wind farm power into several detail parts associated with high frequencies and an approximate part associated with low frequencies. Each high-frequency signal is characterized: if it is a chaotic time series, the weighted one-rank local-region method is used to predict it; otherwise, the GM(1,1) model is used. The GM(1,1) model is also used to predict the low-frequency approximate part. Finally, the forecast of the wind farm power is obtained by summing the predictions of all extracted high-frequency parts and the approximate part. The results show that the proposed method improves the prediction accuracy of wind farm power forecasting.
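The decompose–predict–recombine pipeline can be sketched with PyWavelets. Here the per-band predictor is a placeholder identity (the paper applies the weighted one-rank local-region method to chaotic bands and GM(1,1) to the rest), which keeps the recombination step verifiable; the series and names are illustrative:

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
power = np.cumsum(rng.standard_normal(512)) + 50.0   # surrogate wind-power series

coeffs = pywt.wavedec(power, "db4", level=3)         # [approx, cD3, cD2, cD1]

def forecast_band(band):
    # placeholder predictor: the paper applies chaotic local-region prediction
    # or GM(1,1) per band; identity keeps the sketch checkable end-to-end
    return band

forecast = pywt.waverec([forecast_band(c) for c in coeffs], "db4")
```

With the identity predictor the recombined series reproduces the original, confirming that summing the per-band predictions is equivalent to inverse-transforming the predicted coefficients.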

18.
In this paper, we propose a parallel algorithm for data classification and its application to Magnetic Resonance Image (MRI) segmentation. The classification method studied is the well-known c-means method. A parallel architecture is introduced in order to reduce the complexity of the corresponding algorithms, so that classification can serve as a pre-processing procedure. The proposed algorithm is designed for implementation on a parallel machine, the reconfigurable mesh computer (RMC). The image of size m × n to be processed must be stored on an RMC of the same size, one pixel per processing element (PE).
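The serial classification step (hard c-means) can be sketched in NumPy; the mapping onto the reconfigurable mesh, one pixel per PE, is beyond a few lines and is omitted. The deterministic percentile initialization is a convenience for reproducibility, not part of the paper:

```python
import numpy as np

def c_means(X, c, iters=20):
    """Hard c-means (k-means) on feature vectors X of shape (n, d)."""
    # deterministic spread initialization; production code would use random restarts
    centers = np.percentile(X, np.linspace(0, 100, c), axis=0)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == k].mean(0) if np.any(labels == k) else centers[k]
                            for k in range(c)])
    return labels, centers

rng = np.random.default_rng(5)
# surrogate 1-D "intensities" from two well-separated tissue classes
X = np.concatenate([rng.normal(0.0, 0.5, (100, 1)), rng.normal(10.0, 0.5, (100, 1))])
labels, centers = c_means(X, 2)
```

On the RMC, the distance computation and label assignment are the naturally parallel steps, since each PE holds one pixel and can compute its distances to the broadcast cluster centers independently.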

19.
Accurate contour estimation plays a significant role in classifying thyroid nodules and in estimating their shape, size and position. It helps reduce the number of false positives and improves the accurate detection and efficient diagnosis of thyroid nodules. This paper introduces an automated delineation method that integrates spatial information with neutrosophic clustering and level sets for accurate and effective segmentation of thyroid nodules in ultrasound images. The proposed method, named Spatial Neutrosophic Distance Regularized Level Set (SNDRLS), is based on Neutrosophic L-Means (NLM) clustering, which incorporates spatial information into the level-set evolution. SNDRLS takes as input a rough estimate of the region of interest (ROI), provided by Spatial NLM (SNLM) clustering, for precise delineation of one or more nodules. Its performance is compared with level-set, NLM clustering, Active Contour Without Edges (ACWE), Fuzzy C-Means (FCM) clustering and neutrosophic watershed segmentation methods on the same image dataset. To validate SNDRLS, manual demarcations from three expert radiologists are used as ground truth. SNDRLS yields the boundaries closest to the ground truth, as revealed by six assessment measures (true positive rate 95.45 ± 3.5%, false positive rate 7.32 ± 5.3%, overlap 93.15 ± 5.2%, mean absolute distance 1.8 ± 1.4 pixels, Hausdorff distance 0.7 ± 0.4 pixels and Dice metric 94.25 ± 4.6%). The experimental results show that SNDRLS can delineate multiple nodules in thyroid ultrasound images accurately and effectively, achieving automated nodule boundaries even for low-contrast, blurred and noisy images without any human intervention. Additionally, SNDRLS determines its controlling parameters adaptively from SNLM clustering.

20.
Texture classification based on the wavelet packet transform and the ant colony algorithm
A new texture classification method combining the wavelet packet transform with the ant colony algorithm is proposed. The wavelet packet transform is first used to extract texture feature vectors from texture images, and the ant colony algorithm is then used for training and classification. Experiments show that applying the wavelet packet transform and the ant colony algorithm to texture classification is an effective approach.
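The wavelet-packet feature-extraction step can be sketched with PyWavelets (the ant colony training/classification stage is omitted, and the function name and test image are illustrative):

```python
import numpy as np
import pywt

def wp_texture_features(img, wavelet="db2", level=2):
    """Energy of every 2-D wavelet-packet node at `level` as a texture signature."""
    wp = pywt.WaveletPacket2D(img, wavelet, maxlevel=level)
    return np.array([np.mean(np.square(node.data)) for node in wp.get_level(level)])

rng = np.random.default_rng(6)
feats = wp_texture_features(rng.random((64, 64)))    # 4**2 = 16 subband energies
```

Unlike the plain wavelet transform, the packet decomposition splits the detail subbands as well, giving 4^level subbands in 2-D and hence a richer texture feature vector for the classifier.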

