Similar Literature
 Found 20 similar documents (search time: 31 ms)
1.
Accurate contour estimation plays a significant role in the classification and in estimating the shape, size, and position of a thyroid nodule. It helps reduce the number of false positives and improves the accurate detection and efficient diagnosis of thyroid nodules. This paper introduces an automated delineation method that integrates spatial information with neutrosophic clustering and level sets for accurate and effective segmentation of thyroid nodules in ultrasound images. The proposed delineation method, named Spatial Neutrosophic Distance Regularized Level Set (SNDRLS), is based on Neutrosophic L-Means (NLM) clustering, which incorporates spatial information for level-set evolution. The SNDRLS takes as input a rough estimate of the region of interest (ROI) provided by Spatial NLM (SNLM) clustering for precise delineation of one or more nodules. The performance of the proposed method is compared with level set, NLM clustering, Active Contour Without Edges (ACWE), Fuzzy C-Means (FCM) clustering and neutrosophic-based watershed segmentation methods on the same image dataset. To validate the SNDRLS method, manual demarcations from three expert radiologists are employed as ground truth. The SNDRLS yields the boundaries closest to the ground truth compared to the other methods, as revealed by six assessment measures (true positive rate 95.45 ± 3.5%, false positive rate 7.32 ± 5.3%, overlap 93.15 ± 5.2%, mean absolute distance 1.8 ± 1.4 pixels, Hausdorff distance 0.7 ± 0.4 pixels and Dice metric 94.25 ± 4.6%). The experimental results show that the SNDRLS is able to delineate multiple nodules in thyroid ultrasound images accurately and effectively. The proposed method finds nodule boundaries automatically even for low-contrast, blurred, and noisy thyroid ultrasound images, without any human intervention. Additionally, the SNDRLS is able to determine its controlling parameters adaptively from SNLM clustering.
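Several of the methods compared above (FCM, NLM) belong to the fuzzy-partition clustering family. As a rough illustration of that family, here is a minimal sketch of plain Fuzzy C-Means — the classical baseline the paper compares against, not the neutrosophic or spatial variant it proposes; the fuzzifier m and iteration count are arbitrary choices.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Plain Fuzzy C-Means on samples X of shape (n, d).

    Returns (centers, U) where each row of the membership matrix U sums to 1.
    """
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]          # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                 # u_ik proportional to d_ik^(-2/(m-1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

On well-separated data the centers converge to the cluster means and the memberships become nearly crisp; the neutrosophic variants add indeterminacy handling on top of this update loop.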

2.
Cuff-less continuous blood pressure monitoring provides reliable and invaluable information about an individual's health condition. A conventional sphygmomanometer with a cuff measures blood pressure only intermittently, and the measurement process is sometimes inconvenient. In this work, a systematic approach with multi-parameter fusion is proposed to estimate non-invasive beat-to-beat systolic and diastolic blood pressure with high accuracy. The method involves real-time monitoring of the electrocardiogram (ECG) and photoplethysmogram (PPG), and extraction of the R peak from the ECG and relevant feature parameters from the synchronous PPG. It also covers the construction of a back-propagation neural network with fifteen neurons in the input layer, ten neurons in a single hidden layer, and two neurons in the output layer, with all neurons fully connected. The proposed method was validated on volunteers, with reference blood pressure (BP) taken from a Finometer (MIDI, Finapres Medical System, Netherlands). The results showed that the mean ± S.D. of the estimated systolic BP (SBP) and diastolic BP (DBP) against the reference were −0.41 ± 2.02 mmHg and 0.46 ± 2.21 mmHg, respectively. Thus, the continuous blood pressure algorithm based on a back-propagation neural network provides continuous BP with high accuracy.
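The 15-10-2 topology described above can be sketched as a small fully connected network. This is an illustrative sketch, not the authors' implementation: the weights are random placeholders, and the sigmoid activation, squared-error loss and learning rate are assumptions; real use would train on the extracted ECG/PPG features against the Finometer reference values.

```python
import numpy as np

rng = np.random.default_rng(0)

# 15 inputs (ECG/PPG features), 10 hidden neurons, 2 outputs (SBP, DBP),
# matching the topology in the abstract; weights are untrained placeholders.
W1 = rng.normal(scale=0.1, size=(15, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.1, size=(10, 2));  b2 = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Map a 15-dim feature vector to (SBP, DBP) estimates."""
    h = sigmoid(x @ W1 + b1)
    return h @ W2 + b2

def backprop_step(x, target, lr=0.01):
    """One gradient-descent update on 0.5*||y - target||^2 (the 'BP' in BP-NN)."""
    global W1, b1, W2, b2
    h = sigmoid(x @ W1 + b1)
    y = h @ W2 + b2
    err = y - target                    # dLoss/dy
    dW2 = np.outer(h, err)
    dz = (W2 @ err) * h * (1.0 - h)     # backprop through the sigmoid layer
    dW1 = np.outer(x, dz)
    W2 -= lr * dW2; b2 -= lr * err
    W1 -= lr * dW1; b1 -= lr * dz
```

Repeated calls to `backprop_step` on (feature, reference-BP) pairs drive the squared error down; the published model would additionally need the feature-extraction front end.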

3.
This paper presents a new hardware-oriented approach for the extraction of disparity maps from stereo images. The proposed method is based on the herein-named Adaptive Census Transform, which exploits adaptive support weights during the image transformation; the adaptively weighted sum of SADs is then used as the dissimilarity metric. Quality tests show that the proposed method reaches significantly better accuracy than alternative hardware-oriented approaches. To demonstrate its practical hardware feasibility, a specific architecture has been designed and implemented on a single FPGA chip. This VLSI implementation reaches a frame rate of up to 68 fps for 640 × 480 stereo images, using just 80,000 slices and 32 RAM blocks of a Virtex6 chip.
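For reference, the plain (non-adaptive) census transform that the Adaptive Census Transform builds on can be sketched as follows. This is the textbook 3 × 3 formulation with a Hamming-distance matching cost, not the paper's adaptive-weight variant.

```python
import numpy as np

def census_transform(img):
    """3x3 census transform: each interior pixel becomes an 8-bit code, one
    bit per neighbour, set when that neighbour is darker than the centre."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            nb = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
            out = (out << 1) | (nb < center).astype(np.uint8)  # append one bit
    return out

def hamming(a, b):
    """Per-pixel Hamming distance between two census images (matching cost)."""
    x = np.bitwise_xor(a, b)
    return np.unpackbits(x[..., None], axis=-1).sum(axis=-1)
```

A key property exploited by census-based stereo is invariance to monotonic brightness changes: adding a constant offset to an image leaves its census codes, and therefore the matching cost, unchanged.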

4.
This paper proposes a computer-aided diagnosis tool for the early detection of atherosclerosis. This pathology is responsible for major cardiovascular diseases, which are the main cause of death worldwide. Among preventive measures, the intima-media thickness (IMT) of the common carotid artery stands out as an early indicator of atherosclerosis and cardiovascular risk. In particular, IMT is evaluated by means of ultrasound scans. Usually, during the radiological examination, the specialist detects the optimal measurement area, identifies the layers of the arterial wall and manually marks pairs of points on the image to estimate the thickness of the artery. This manual procedure therefore entails subjectivity and variability in the IMT evaluation. To overcome this, this article proposes a fully automatic segmentation technique for ultrasound images of the common carotid artery. The proposed methodology is based on machine learning and artificial neural networks for the recognition of IMT intensity patterns in the images. For this purpose, a deep learning strategy has been developed to obtain abstract and efficient data representations by means of auto-encoders with multiple hidden layers. In particular, the considered deep architecture has been designed under the concept of the extreme learning machine (ELM). The correct identification of the arterial layers is achieved in a totally user-independent and repeatable manner, which not only improves the IMT measurement in daily clinical practice but also facilitates clinical research. A database of 67 ultrasound images has been used to validate the suggested system, in which the resulting automatic contours for each image have been compared with the average of four manual segmentations performed by two different observers (ground truth). Specifically, the IMT measured by the proposed algorithm is 0.625 ± 0.167 mm (mean ± standard deviation), whereas the corresponding ground-truth value is 0.619 ± 0.176 mm. Thus, our method shows a difference between automatic and manual measures of only 5.79 ± 34.42 μm. Furthermore, different quantitative evaluations reported in this paper indicate that this procedure outperforms other methods presented in the literature.

5.
Auralization through binaural transfer path analysis and synthesis is a useful tool to analyze how contributions from different sources affect the perception of sound. This paper presents a novel model based on the auralization of sound sources through the study of the behavior of the system with respect to frequency. The proposed approach is a combined model using the airborne source quantification (ASQ) technique for low-mid frequencies (≤2.5 kHz) and Evolutionary Product-Unit Neural Networks (EPUNNs) for high frequencies (>2.5 kHz), which improves overall accuracy. The accuracy of all models has been evaluated in terms of the Mean Squared Error (MSE) and the Standard Error of Prediction (SEP), with the combined model obtaining the smallest values at high frequencies. Moreover, the best prediction model was established based on sound quality metrics, with the proposed method showing better accuracy than the ASQ technique at high frequencies in terms of loudness, sharpness and one-third octave bands.

6.
7.
This paper presents a novel approach for image retrieval, named multi-joint histogram based modelling (MJHM), in which joint correlation histograms are constructed between the motif and texton maps. Firstly, the quantized image is divided into non-overlapping 2 × 2 grids. Then each grid is replaced by a scan motif and texton values to construct the transformed motif and texton maps (images), respectively. The motif-transformed map minimizes the local gradient and the texton-transformed map identifies the equality of grayscales while traversing the 2 × 2 grid. Finally, the correlation histograms are constructed between the transformed motif and texton maps. The performance of the proposed method (MJHM) is tested in two experiments on the Corel-5K and Corel-10K benchmark databases. The results show significant improvements in terms of precision, average retrieval precision (ARP), recall and average retrieval rate (ARR) as compared to the multi-texton histogram (MTH), the smart content based image retrieval system (CMCM) and other state-of-the-art techniques for image retrieval.

8.
In this paper, a prediction model is proposed for wind-farm power forecasting by combining the wavelet transform, chaotic time-series analysis and the GM(1, 1) method. The wavelet transform is used to decompose the wind-farm power into several detail parts associated with high frequencies and an approximate part associated with low frequencies. The characteristic of each high-frequency signal is identified: if it is a chaotic time series, the weighted one-rank local-region method is used to predict it; otherwise, the GM(1, 1) model is used. The GM(1, 1) model is also used to predict the low-frequency approximate part. Finally, the forecast for the wind-farm power is obtained by summing the predicted results of all the extracted high-frequency parts and the approximate part. The predicted results show that the proposed method can improve the prediction accuracy of wind-farm power forecasting.
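The GM(1, 1) grey model used above for the non-chaotic components is a standard construction: accumulate the series, fit the whitened equation dx(1)/dt + a·x(1) = b by least squares, and difference the exponential solution back to the original scale. A minimal sketch (the test series and step count are illustrative, not from the paper):

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey forecast for a short, positive time series x0.

    Fits x0[k] = -a * z1[k] + b by least squares, where z1 is the mean of
    consecutive accumulated sums, then differences the exponential solution
    of dx1/dt + a*x1 = b back to the original scale.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                    # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])         # mean generating sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[-steps:]                # out-of-sample forecasts
```

GM(1,1) works best on short, smooth, near-exponential series, which is why the paper reserves it for the low-frequency approximate part and the non-chaotic detail signals.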

9.
Precise segmentation and identification of thoracic vertebrae is important for many medical imaging applications, though it remains challenging due to the vertebra's complex shape and varied neighboring structures. In this paper, a new method based on learned bone-structure edge detectors and a coarse-to-fine deformable surface model is proposed to segment and identify vertebrae in 3D CT thoracic images. In the training stage, a discriminative classifier for object-specific edge detection is trained using steerable features, and statistical shape models for the 12 thoracic vertebrae are also learned. For run-time testing, we design a new coarse-to-fine, two-stage segmentation strategy: subregions of a vertebra first deform together as a group; then vertebra mesh vertices in a smaller neighborhood move group-wise to progressively drive the deformable model towards edge response maps by optimizing a probability cost function. In this manner, the smoothness and topology of the vertebra shapes are guaranteed. The algorithm performs successfully, with reliable mean point-to-surface errors of 0.95 ± 0.91 mm on 40 volumes. A vertebra identification scheme based on mean surface mesh matching is also proposed. We achieve a success rate of 73.1% using a single vertebra, and over 95% for 8 or more vertebrae, which is comparable to or slightly better than the state of the art [5].

10.
Many problems arise when characterizing a type 1 diabetic patient, such as model mismatches, noisy inputs, measurement errors and huge variability in the glucose profiles. In this work we introduce a new identification method based on interval analysis, where variability and model imprecision are represented by an interval model as parametric uncertainty. The minimization of a composite cost index is proposed, comprising: (1) the glucose envelope width predicted by the interval model, and (2) a Hausdorff-distance-based prediction error with respect to the envelope. The method is evaluated on clinical data consisting of insulin and blood glucose reference measurements from 12 patients, with four different lunchtime postprandial periods each. Following a “leave-one-day-out” cross-validation study, model prediction capabilities for validation days were encouraging (medians of: relative error = 5.45%, samples predicted = 57%, prediction width = 79.1 mg/dL). Using the days with maximum patient variability as identification days resulted in improved prediction capabilities for the identified model (medians of: relative error = 0.03%, samples predicted = 96.8%, prediction width = 101.3 mg/dL). The feasibility of interval model identification in the context of type 1 diabetes was demonstrated.

11.
This paper presents a comparative analysis of four nature-inspired algorithms for improving the training stage of a segmentation strategy based on Gaussian matched filters (GMF) for X-ray coronary angiograms. The statistical results reveal that the differential evolution (DE) method outperforms the other considered algorithms in terms of convergence to the optimal solution. From the potential solutions acquired by DE, the area (Az) under the receiver operating characteristic curve is used as the fitness function to establish the best GMF parameters. The GMF-DE method demonstrated high accuracy, with Az = 0.9402 on a training set of 40 angiograms. Moreover, to evaluate the performance of the coronary-artery segmentation method against the ground-truth vessels hand-labeled by a specialist, measures of sensitivity, specificity and accuracy were adopted. According to the experimental results, GMF-DE obtained a high coronary-artery segmentation rate compared with six state-of-the-art methods, providing an average accuracy of 0.9134 on a test set of 40 angiograms. Additionally, the experimental results in terms of segmentation accuracy have also shown that GMF-DE can be highly suitable for clinical decision support in cardiology.

12.
Information Fusion, 2007, 8(2): 177–192
A new quantitative metric is proposed to objectively evaluate the quality of fused imagery. The measured value of the proposed metric is used as feedback to a fusion algorithm so that the image quality of the fused image can potentially be improved. This new metric, called the ratio of spatial frequency error (rSFe), is derived from the definition of a previous measure termed “spatial frequency” (SF), which reflects local intensity variation. In this work, (1) the concept of SF is first extended by adding two diagonal SFs; then (2) a reference SF (SFR) is computed from the input images; and finally (3) the error SF (SFE, the difference between the fusion SF and the reference SF), or the ratio of SF error (rSFe = SFE/SFR), is used as a fusion quality metric. The rSFe (which can be positive or negative) indicates the direction of the fusion error: over-fused (rSFe > 0) or under-fused (rSFe < 0). Thus, the rSFe value can be back-propagated to the fusion algorithm (BP fusion), directing further parameter adjustments to achieve a better-fused image. The accuracy of the rSFe is verified against other quantitative measures such as the root mean square error (RMSE) and the image quality index (IQI), as well as against a qualitative perceptual evaluation based on a standard psychophysical paradigm. An advanced wavelet transform (aDWT) method that incorporates principal component analysis (PCA) and morphological processing into a regular DWT fusion algorithm is implemented with two adjustable parameters: the number of levels of DWT decomposition and the length of the selected wavelet. Results with the aDWT were compared to those with a regular DWT and with a Laplacian pyramid. After analyzing several inhomogeneous image groups, the experimental results showed that the proposed metric, rSFe, is consistent with RMSE and IQI, and is especially powerful and efficient for realizing iterative BP fusion to achieve better image quality. Human perceptual assessment strongly supported the assertion that the aDWT offers a significant improvement over the DWT and pyramid methods.
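The SF/rSFe computation described above can be sketched directly from the definitions: RMS first differences in four directions, with the diagonal differences scaled by 1/√2 (a common convention). The reference SF is built here from the element-wise larger-magnitude gradient of the two inputs; that particular SFR construction is an assumption, not a detail stated in the abstract.

```python
import numpy as np

def directional_grads(img):
    """Horizontal, vertical and two diagonal first differences."""
    img = np.asarray(img, dtype=float)
    gh = img[:, 1:] - img[:, :-1]
    gv = img[1:, :] - img[:-1, :]
    gd1 = (img[1:, 1:] - img[:-1, :-1]) / np.sqrt(2)   # main diagonal
    gd2 = (img[1:, :-1] - img[:-1, 1:]) / np.sqrt(2)   # anti-diagonal
    return gh, gv, gd1, gd2

def spatial_frequency(img):
    """Four-direction spatial frequency: RMS of the directional gradients."""
    return np.sqrt(sum(np.mean(g ** 2) for g in directional_grads(img)))

def rsfe(fused, in_a, in_b):
    """Ratio of spatial-frequency error: (SF_fused - SF_ref) / SF_ref.

    rSFe > 0 suggests over-fusion (excess detail/artifacts), rSFe < 0
    under-fusion (detail lost relative to the inputs).
    """
    ref = [np.where(np.abs(ga) >= np.abs(gb), ga, gb)
           for ga, gb in zip(directional_grads(in_a), directional_grads(in_b))]
    sf_r = np.sqrt(sum(np.mean(g ** 2) for g in ref))
    return (spatial_frequency(fused) - sf_r) / sf_r
```

Because rSFe is signed, a BP-fusion loop can use its sign to decide whether to increase or decrease detail-preserving parameters on the next iteration.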

13.
Dynamic time-linkage optimization problems (DTPs) are a special class of dynamic optimization problems (DOPs) with the feature of time-linkage: decisions taken now can influence future problem states. Although DTPs are common in practice, they have received little attention from the field of evolutionary optimization, where prediction is to date the major approach to solving them. However, existing studies have not addressed how to deal with unreliable predictions in the complete Black-Box Optimization (BBO) case. In this paper, the prediction approach EA + predictor, proposed by Bosman, is improved to handle this situation. A stochastic-ranking selection scheme based on prediction accuracy is designed to improve EA + predictor under unreliable prediction, where the prediction accuracy is based on the rank of the individuals rather than their fitness. Experimental results show that, compared with the original prediction approach, the performance of the improved algorithm is competitive.

14.
15.
This study describes a false-alarm-probability (FAP) bounded solution for detecting and quantifying the major Heart Rate Turbulence (HRT) parameters, including heart rate (HR) acceleration/deceleration, turbulence jump, compensatory pause value and HR recovery rate. To this end, the high-resolution multi-lead Holter electrocardiogram (ECG) signal is first pre-processed via the Discrete Wavelet Transform (DWT), and a fixed-size sliding window is then moved over the pre-processed trend. At each window position, the area under the excerpted segment is multiplied by its curve length to generate the Area Curve Length (ACL) metric, used as the decision statistic (DS) for ECG event detection and delineation. The detection-delineation algorithm was applied to various existing databases, yielding average sensitivity and positive predictivity of Se = 99.95% and P+ = 99.92% for the detection of QRS complexes, with average maximum delineation errors of 7.4 ms, 4.2 ms and 8.3 ms for the P-wave, QRS complex and T-wave, respectively. Because the heart-rate time series may include fast fluctuations that do not follow a premature ventricular contraction (PVC), causing a high false-alarm probability (false-positive detections) in HRT detection, a new method for discriminating PVCs from other beats using geometrical features is proposed, based on the binary two-dimensional Neyman-Pearson radius test (a FAP-bounded classifier). The statistical performance of the proposed HRT detection-quantification algorithm was Se = 99.94% and P+ = 99.85%, showing a marginal improvement for the detection and quantification of this phenomenon. In summary, the important merits of the proposed HRT detection algorithm are the marginal performance improvement of the ECG event detection-delineation process, high-performance PVC detection and isolation from noisy Holter data, and reliable robustness against strong Holter noise and artifacts.

16.
This study investigated the effects of upstream stations' flow records on the performance of artificial neural network (ANN) models for predicting daily watershed runoff. As a comparison, a multiple linear regression (MLR) analysis was also examined using various statistical indices. Five streamflow-measuring stations on the Cahaba River, Alabama, were selected as case studies. Two different ANN models, a multilayer feed-forward network trained with the Levenberg–Marquardt algorithm (LMFF) and a radial basis function (RBF) network, were introduced in this paper. These models were then used to forecast streamflows one day ahead. Correlation analysis was applied to determine the architecture of each ANN model in terms of input variables. Several statistical criteria (RMSE, MAE and coefficient of correlation) were used to check model accuracy against the observed data by means of K-fold cross-validation. Additionally, residual analysis was applied to the model results. The comparison revealed that using upstream records could significantly increase the accuracy of the ANN and MLR models in predicting daily streamflows (by around 30%). The comparison of the prediction accuracy of both ANN models (LMFF and RBF) with the linear regression method indicated that the ANN approaches were more accurate than MLR in predicting streamflow dynamics. The LMFF model improved the average root mean square error (RMSEave) and average mean absolute percentage error (MAPEave) of the multiple-linear-regression forecasts by about 18% and 21%, respectively. Although the RBF model performed better for the highest range of flow rates (flood events, RMSEave/RBF = 26.8 m3/s vs. RMSEave/LMFF = 40.2 m3/s), in general the results suggested that the LMFF method was somewhat superior to the RBF method in predicting watershed runoff (RMSE/LMFF = 18.8 m3/s vs. RMSE/RBF = 19.2 m3/s). Finally, statistical differences between measured and predicted medians were evaluated using the Mann-Whitney test, and differences in variances were evaluated using Levene's test.

17.
Reversible contrast mapping (RCM) and its various modified versions are used extensively in reversible watermarking (RW) to embed secret information into digital content. RCM-based RW applies a simple integer transform to pairs of pixels, whose least significant bits (LSBs) are used for data embedding. It is perfectly invertible even if the LSBs of the transformed pixels are lost during data embedding. RCM offers a high embedding rate at relatively low visual (embedding) distortion. Moreover, its low computational cost and ease of hardware realization make it attractive for real-time implementation. To this end, this paper proposes a field-programmable gate array (FPGA) based very-large-scale integration (VLSI) architecture of the RCM-RW algorithm for digital images that can serve the purpose of media authentication in a real-time environment. Two architectures are developed, one for (8 × 8) blocks and one for (32 × 32) blocks. The proposed architecture uses a 6-stage pipeline to speed up the circuit operation. For a cover image with a (32 × 32) block size, the proposed architecture requires 9,881 slices, 9,347 slice flip-flops, 11,291 4-input LUTs and 3 BRAMs, and achieves a data rate of 1.0395 Mbps at an operating frequency as high as 98.76 MHz.
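The RCM integer transform itself is simple enough to sketch: a pixel pair (x, y) maps to (2x − y, 2y − x), which is exactly invertible because 2x′ + y′ = 3x and x′ + 2y′ = 3y. The sketch below shows the no-embedding round trip; the published scheme additionally recovers pairs via ceiling functions after the LSBs are overwritten, which is not reproduced here.

```python
def rcm_forward(x, y):
    """RCM integer transform of a pixel pair: (x, y) -> (2x - y, 2y - x)."""
    return 2 * x - y, 2 * y - x

def rcm_inverse(xp, yp):
    """Exact inverse when no LSBs were overwritten: 2x' + y' = 3x, x' + 2y' = 3y,
    so integer division by 3 recovers the original pair without loss."""
    return (2 * xp + yp) // 3, (xp + 2 * yp) // 3

def in_domain(x, y, L=255):
    """Only pairs whose transform stays inside [0, L] are usable for embedding."""
    xp, yp = rcm_forward(x, y)
    return 0 <= xp <= L and 0 <= yp <= L
```

The `in_domain` check is what restricts embedding to suitable pixel pairs; pairs with strong contrast (e.g. 0 and 255) would overflow the 8-bit range after the transform and must be skipped or handled separately.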

18.
This paper proposes an integrated system for the segmentation and classification of four moving objects, including pedestrians, cars, motorcycles, and bicycles, from their side-views in a video sequence. Based on the use of an adaptive background in the red–green–blue (RGB) color model, each moving object is segmented with its minimum enclosing rectangle (MER) window by using a histogram-based projection approach or a tracking-based approach. Additionally, a shadow removal technique is applied to the segmented objects to improve the classification performance. For the MER windows with different sizes, a window scaling operation followed by an adaptive block-shifting operation is applied to obtain a fixed feature dimension. A weight mask, which is constructed according to the frequency of occurrence of an object in each position within a square window, is proposed to enhance the distinguishing pixels in the rescaled MER window. To extract classification features, a two-level Haar wavelet transform is applied to the rescaled MER window. The local shape features and the modified histogram of oriented gradients (HOG) are extracted from the level-two and level-one sub-bands, respectively, of the wavelet-transformed space. A hierarchical linear support vector machine classification configuration is proposed to classify the four classes of objects. Six video sequences are used to test the classification performance of the proposed method. The computer processing times of the object segmentation, object tracking, and feature extraction and classification approaches are 79 ms, 211 ms, and 0.01 ms, respectively. Comparisons with different well-known classification approaches verify the superiority of the proposed classification method.

19.
This paper describes a novel single-layer bi-material cantilever microstructure without a silicon (Si) substrate for focal plane array (FPA) application in an uncooled optomechanical infrared imaging system (UOIIS). The UOIIS, responding to a radiating infrared (IR) source in the 8–14 μm spectral range, acquires an IR image through a visible optical readout method. The temperature distribution of the IR source can be obtained by measuring the thermal–mechanical rotation-angle distribution of every pixel in the cantilever array, which consists of two materials with mismatched thermal expansion coefficients. To obtain high detection of the IR object, gold (Au) film is coated alternately on silicon nitride (SiNx) film in the flection beams of the cantilevers, and a thermal–mechanical model for this cantilever microstructure is proposed. The thermal and thermal–mechanical coupling-field characteristics of the cantilever array structure are optimized through numerical analysis and simulated using the finite element method. The simulated thermal–mechanical rotation angle and the experimentally tested thermal–mechanical sensitivity are 2.459 × 10−3 and 3.322 × 10−4 rad/K, respectively, generally in good agreement with the forecasts of the thermal–mechanical model and numerical analysis, which offers an effective reference for FPA structure-parameter design in UOIIS.

20.
Joint moment is one of the most important factors in human gait analysis. It can be calculated using multibody dynamics, but this is not always straightforward. This study had two main purposes: first, to develop a generic multi-dimensional wavelet neural network (WNN) as a real-time surrogate model for calculating lower-extremity joint moments and to compare it with the multibody dynamics approach; second, to compare the calculation accuracy of the WNN with a feed-forward artificial neural network (FFANN), a traditional intelligent predictive structure in biomechanics. To achieve these purposes, data from four patients walking under three different conditions were obtained from the literature. A total of 10 inputs, including eight electromyography (EMG) signals and two ground reaction force (GRF) components, were selected as the most informative inputs for the WNN based on the mutual information technique. The prediction ability of the network was tested at two different levels of inter-subject generalization. The WNN predictions were validated against outputs from the multibody dynamics method in terms of normalized root mean square error (NRMSE (%)) and cross-correlation coefficient (ρ). Results showed that the WNN can predict joint moments to a high level of accuracy (NRMSE < 10%, ρ > 0.94) compared to the FFANN (NRMSE < 16%, ρ > 0.89). A generic WNN could also calculate joint moments much faster and more easily than the multibody dynamics approach, based on GRFs and EMG signals, removing the need for motion capture. The WNN can therefore serve as a surrogate model for real-time gait biomechanics evaluation.
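The two validation metrics used above are standard and easy to state precisely. Note that the normalisation used for NRMSE below (by the reference range) is an assumption, since the abstract does not specify which normalisation the authors used.

```python
import numpy as np

def nrmse(pred, ref):
    """Normalised RMSE in percent, normalised by the reference signal's range
    (an assumed convention; normalising by the mean or std is also common)."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    rmse = np.sqrt(np.mean((pred - ref) ** 2))
    return 100.0 * rmse / (ref.max() - ref.min())

def cross_corr(pred, ref):
    """Zero-lag cross-correlation coefficient (Pearson's rho) between the
    predicted and reference joint-moment curves."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.corrcoef(pred, ref)[0, 1])
```

Against a reference joint-moment curve, a surrogate meeting the paper's reported quality would yield `nrmse` below 10 and `cross_corr` above 0.94.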


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号