Similar Documents
20 similar documents found.
1.
GPU-Based Isosurface Extraction and Rendering   Total citations: 4 (self-citations: 1, citations by others: 3)
吴玲达  杨超  陈鹏 《计算机应用研究》2008,25(11):3468-3470
Exploiting the parallelism of graphics hardware, hexahedral grid data are mapped to textures, isosurface patches are extracted from each hexahedral cell, and the patches are rendered to a texture to produce the final isosurface. Experiments on the visualization of a three-dimensional electromagnetic environment, implemented with the Cg shading language, show that the method effectively offloads work from the CPU and speeds up isosurface extraction, making it suitable for real-time applications.
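Since the abstract describes a per-cell extraction pipeline, a small sketch may help. The CPU-side Python fragment below illustrates only the per-cell isosurface test that the paper parallelizes on the GPU; it is not the authors' Cg implementation, and the spherical test field is an invented example.

```python
# Minimal CPU sketch of the per-cell isosurface test that the paper
# parallelizes on the GPU (not the authors' Cg implementation).
import numpy as np

def cells_crossing_isovalue(field, iso):
    """field: 3D scalar array sampled at hexahedral grid vertices.
    Returns a boolean array marking cells whose 8 corners straddle `iso`,
    i.e. the cells from which isosurface patches must be extracted."""
    # Gather the 8 corner values of every hexahedral cell.
    corners = np.stack([field[i:field.shape[0]-1+i,
                              j:field.shape[1]-1+j,
                              k:field.shape[2]-1+k]
                        for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    return (corners.min(axis=0) <= iso) & (corners.max(axis=0) >= iso)

# Example: a spherical field; cells near radius ~0.5 are selected.
x, y, z = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
mask = cells_crossing_isovalue(np.sqrt(x**2 + y**2 + z**2), 0.5)
print(mask.sum(), "cells intersect the isosurface")
```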

2.
A Finger Vein Pattern Extraction Method Using Directional Filtering   Total citations: 1 (self-citations: 1, citations by others: 0)
To extract finger vein patterns accurately and efficiently, a new extraction method based on directional filtering is proposed. Drawing on the characteristics of vein patterns, the method designs an orientation map and directional filters for the finger vein image, uses them to filter and enhance the image, and then extracts the vein pattern from the enhanced image. Compared with conventional binarization methods, applying directional filtering before binarization yields vein patterns with better connectivity and smoothness and with less noise and fewer spurious features; the method not only extracts vein patterns accurately from high-quality images but also performs well on low-quality ones.
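As a rough illustration of directional filtering for dark-line (vein) enhancement, the sketch below takes the maximum response over a small bank of oriented matched filters. This is a simplification of the paper's method, which builds an explicit orientation map first; kernel length, sigma and the number of orientations are illustrative assumptions.

```python
# Simplified sketch of directional filtering for dark-line (vein) enhancement.
import numpy as np
from scipy.ndimage import convolve, rotate

def oriented_line_kernel(length=15, sigma=2.0):
    """Matched filter for a dark line: negated Gaussian cross-section."""
    x = np.arange(length) - length // 2
    profile = -np.exp(-x**2 / (2 * sigma**2))
    k = np.tile(profile, (length, 1))   # line along the horizontal axis
    return k - k.mean()                 # zero mean: ignore background level

def enhance_veins(img, n_orientations=8):
    base = oriented_line_kernel()
    responses = [convolve(img, rotate(base, ang, reshape=False))
                 for ang in np.linspace(0, 180, n_orientations, endpoint=False)]
    return np.max(responses, axis=0)    # strongest directional response

# The binary vein pattern is obtained by thresholding the enhanced image.
```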

3.
4.
This work illustrates the contribution of persistent scatterer interferometry (PSI) from the radar satellites ERS (European Remote Sensing satellite) and ENVISAT (Environmental Satellite) to the updating of a pre-existing landslide inventory (LSI) map: the main purpose is to change or confirm the landslide state of activity and geometry and to identify new landslides. Radar data have been integrated with optical images and ancillary data in a 1320 km² river basin (Biferno Basin) located in central-eastern Italy. The geological setting of the area is characterized by clay and alternating clayey, silty and sandy formations that are affected by slow landslides. Field validation confirmed the results and the capability of multi-interferometric synthetic aperture radar data, integrated and coupled with conventional techniques, to support landslide investigation at the regional scale, thanks to the available archive of repeated satellite data, which provides measurements of ground displacement with millimetre-scale accuracy. In the study area, about 9% of the pre-existing LSI was modified by means of permanent scatterer (PS) information; of these landslides, 15% changed state of activity from dormant to active, and 95 new landslides were detected. The radar interpretation method applied in the Biferno Basin confirms its high capability for detecting and mapping landslides at the basin scale: the information acquired from radar interpretation is the basis of the proposed method for evaluating the state of activity and intensity of slow landslides. However, limitations clearly exist, and the method does not always support updating the LSI over the whole study area. We consider the methodology and procedure portable and suitable for different geological and geomorphological environments.

5.
In this paper, we propose a source localization algorithm based on a sparse Fast Fourier Transform (FFT)-based feature extraction method and spatial sparsity. We represent the sound source positions as a sparse vector by discretely segmenting the space with a circular grid. The location vector is related to the microphone measurements through a linear equation, which can be estimated at each microphone. For this linear dimensionality reduction, we utilize Compressive Sensing (CS) and a two-level FFT-based feature extraction method that combines two sets of audio signal features, covering both short-time and long-time properties of the signal. The proposed feature extraction method leads to a sparse representation of audio signals, achieving a significant reduction in the dimensionality of the signals. Compared to state-of-the-art methods, the proposed method improves accuracy while, in some cases, also reducing complexity.
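The spatial-sparsity step can be made concrete with a toy model. The sketch below links a circular candidate grid to microphone measurements through a simple 1/distance amplitude dictionary (an invented stand-in for the paper's FFT-based features) and recovers the sparse location vector with orthogonal matching pursuit.

```python
# Toy sketch of the spatial-sparsity idea: source positions on a circular
# grid form a sparse vector linked to microphone measurements by a linear
# model (a 1/distance amplitude model, not the paper's features).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
mics = rng.uniform(-1, 1, size=(8, 2))               # 8 microphone positions
angles = np.linspace(0, 2*np.pi, 60, endpoint=False)
grid = 2.0 * np.c_[np.cos(angles), np.sin(angles)]   # circular candidate grid

# Dictionary: expected magnitude at each mic for a source at each grid point.
A = 1.0 / np.linalg.norm(mics[:, None, :] - grid[None, :, :], axis=2)

s_true = np.zeros(len(grid)); s_true[17] = 1.0        # one active source
y = A @ s_true + 0.01 * rng.standard_normal(len(mics))

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=1).fit(A, y)
print(np.flatnonzero(omp.coef_))   # typically recovers grid index 17
```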

6.
Pattern Analysis and Applications - Image dehazing is a difficult problem because haze density varies with object depth. Though many pixel-based or color-space-based algorithms...

7.
Entity relation extraction can be applied in automatic question answering systems, digital libraries and many other fields. However, previous work on this topic has mainly focused on features from a sentence itself, without considering the links between sentences in the corpus. In this paper, we propose a concept model and derive a new, effective spatial feature from it. The added feature lets our feature space capture not only the inherent information of the sentence itself but also the semantic connections between sentences. Finally, we use ELM as the training classifier for entity relation extraction. Experimental results show that with the new feature, both the precision and the recall of relation extraction increase significantly. Moreover, using ELM significantly reduces the time needed for relation extraction, performing better than the traditional SVM-based method.
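ELM's speed advantage over SVM comes from training in closed form: the hidden layer is random and only the output weights are solved by least squares. A minimal sketch, independent of the paper's features, is:

```python
# Minimal Extreme Learning Machine (ELM) sketch: a random hidden layer
# whose output weights are solved in closed form by least squares; this is
# what makes training much faster than SVM, as the abstract notes.
import numpy as np

class ELMClassifier:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                   # random feature map
        T = np.eye(y.max() + 1)[y]                         # one-hot targets
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # closed-form solve
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)
```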

8.
Signal extraction deals with weighting the available observations in order to estimate a latent feature of interest. A signal extraction method is linear if the feature is measured by a possibly time-varying linear combination of the available observations. Linear methods play an important role since they are well understood, easy to apply, and a key ingredient in more elaborate nonlinear and non-Gaussian models. The focus is on the main methods for inference in parametric and semiparametric unobserved components models formulated as linear mixed models and state space models, and on establishing the connections between best linear unbiased prediction, penalised least squares and recursive methods of signal extraction. The methods are illustrated with reference to the traditional problem of extracting the cycle and the trend from economic time series.
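One of the connections mentioned, penalised least squares, can be shown with the classic Hodrick-Prescott trend filter, which extracts a smooth trend t minimizing ||y - t||² + λ||Δ²t||²; the cycle is the residual y - t. A minimal sketch:

```python
# Worked example of linear signal extraction by penalised least squares:
# the Hodrick-Prescott trend filter, solved in closed form.
import numpy as np

def hp_trend(y, lam=1600.0):
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # second-difference operator
    return np.linalg.solve(np.eye(n) + lam * (D2.T @ D2), y)

y = np.cumsum(np.random.default_rng(1).standard_normal(200))  # random walk
trend = hp_trend(y)       # smooth trend; y - trend is the cycle
```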

9.
We introduce a new technique for estimating the optical flow field from image sequences. As suggested by Fleet and Jepson (1990), we track contours of constant phase over time, since these are more robust to variations in lighting conditions and to deviations from pure translation than contours of constant amplitude. Our phase-based approach proceeds in three stages. First, the image sequence is spatially filtered using a bank of quadrature pairs of Gabor filters, and the temporal phase gradient is computed, yielding estimates of the velocity component in directions orthogonal to the filter pairs' orientations. Second, a component velocity is rejected if the corresponding filter pair's phase information is not linear over a given time span. Third, the remaining component velocities at a single spatial location are combined, and a recurrent neural network is used to derive the full velocity. We test our approach on several image sequences, both synthetic and real.
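The first stage rests on the identity v = -(∂φ/∂t)/(∂φ/∂x) for the phase φ of a quadrature filter's response. The 1D toy below (a translating sinusoid and a complex Gabor kernel, with invented parameters) recovers the velocity from the temporal phase step; it illustrates the principle only, not the full 2D method.

```python
# 1D sketch of the phase-gradient principle: component velocity is
# v = -(dphase/dt) / (dphase/dx) for a quadrature (complex Gabor) filter.
import numpy as np

x = np.arange(256, dtype=float)
v_true = 1.5                                         # pixels per frame
frames = [np.cos(0.3 * (x - v_true * t)) for t in range(3)]

t_k = np.arange(-16, 17)
gabor = np.exp(-t_k**2 / 50.0) * np.exp(1j * 0.3 * t_k)  # quadrature pair

resp = [np.convolve(f, gabor, mode='same') for f in frames]
dphi_dx = np.gradient(np.unwrap(np.angle(resp[1])))  # spatial phase gradient
dphi_dt = np.angle(resp[2] / resp[1])                # temporal phase step
print(np.median(-dphi_dt / dphi_dx))                 # ~ 1.5 pixels/frame
```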

10.
The enhancement of images corrupted by additive noise is considered. High-sequency-ordered Hadamard transform filtering (HSHTF) is used to recursively improve the enhancement of the image; at each step, the average error between the reconstructed and original images is determined. The HSHTF was applied to a generalized two-dimensional Wiener filter to improve the quality of the reconstructed image. Examples illustrate the effectiveness of the procedure.
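For orientation, the sketch below performs smoothing in the 2D Hadamard domain by hard-thresholding small coefficients. This is only a stand-in for the paper's recursive, generalized 2D Wiener filtering; the `keep` fraction is an arbitrary assumption.

```python
# Sketch of image smoothing in the Hadamard domain (illustrative only; the
# paper couples the transform with a generalised 2D Wiener filter).
import numpy as np
from scipy.linalg import hadamard

def hadamard_denoise(img, keep=0.1):
    n = img.shape[0]                     # img square, n a power of two
    H = hadamard(n).astype(float)
    coeffs = H @ img @ H / n             # forward 2D Hadamard transform
    thresh = np.quantile(np.abs(coeffs), 1 - keep)
    coeffs[np.abs(coeffs) < thresh] = 0  # keep largest-magnitude coefficients
    return H @ coeffs @ H / n            # inverse (H symmetric, H @ H = n*I)
```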

11.
Automatic extraction of retinal vessels is of great significance in medical diagnosis. Unfortunately, extracting vessels from retinal images with an uneven background is challenging, and accurately extracting vessels of different widths is difficult. To address these problems, this paper proposes a new dynamic multi-scale filtering method together with a dynamic threshold processing scheme. The image is first divided into sub-images to facilitate the analysis of grey-level features. Then, for each sub-image, the scales of the matched filter and the segmentation threshold are dynamically determined from Gaussian fits of the grey-level distribution. Compared with current multi-scale matched-filter algorithms that use uniform scales for the whole retinal image, the proposed method detects many fine vessels drowned by noise and avoids overestimating thin vessels, while improving segmentation accuracy in general.
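The "dynamic" part, choosing the threshold per sub-image from a Gaussian fit of the grey-level distribution, can be sketched as follows; the block size and factor k are illustrative assumptions, and the multi-scale matched filtering stage is omitted.

```python
# Sketch of the per-sub-image "dynamic" idea: estimate the grey-level
# Gaussian (mean, std) in each block and derive a local threshold from it,
# rather than one global threshold for the whole retinal image.
import numpy as np

def dynamic_thresholds(img, block=64, k=1.5):
    h, w = img.shape
    out = np.zeros(img.shape, dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            sub = img[i:i+block, j:j+block]
            mu, sigma = sub.mean(), sub.std()   # Gaussian fit of grey levels
            out[i:i+block, j:j+block] = sub < mu - k * sigma  # dark vessels
    return out
```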

12.
Coastline extraction from synthetic aperture radar (SAR) data is difficult because of speckle noise and strong signal returns from the wind-roughened and wave-modulated sea surface. The high resolution and weather-independence of SAR data enable better monitoring of coastal seas, so SAR coastline extraction has attracted much interest. The active contour method is an efficient algorithm for edge detection; however, applying it to high-resolution images is time-consuming. The current article presents an efficient approach to extracting coastlines from high-resolution SAR images. First, fuzzy clustering with spatial constraints is applied to the input SAR image; this clustering method is robust to noise and performs well on noisy images. Next, binarization is carried out on the fuzzification results using Otsu's method. Third, morphological filters are applied to the binary image to eliminate spurious segments left after binarization. To extract the coastline, an active contour level set method is then run from the initial contours on the input SAR image to refine the segmentation. Because the proposed approach is based on an active contour model, it does not require preprocessing for SAR speckle reduction. Another advantage is the ability to extract the coastline at the full resolution of the input SAR image without degrading it. The approach does not require manual initialization of the level set method, and the proposed initialization speeds up the level set evolution. Experimental results on low- and high-resolution SAR images showed good coastline extraction performance. A criterion based on the neighbourhood pixels of the coastline is proposed for expressing the accuracy of the method quantitatively.
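A condensed sketch of the middle of this pipeline (Otsu binarisation plus morphological clean-up, built from scikit-image primitives) is shown below; the paper's fuzzy clustering and level-set refinement are omitted, and the structuring-element and object-size parameters are assumptions.

```python
# Condensed sketch of the pre-segmentation steps: Otsu binarisation and
# morphological clean-up of a SAR intensity image.
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, remove_small_objects, disk
from skimage.segmentation import find_boundaries

def rough_coastline(sar):
    binary = sar > threshold_otsu(sar)            # land/sea split
    binary = binary_opening(binary, disk(3))      # remove speckle-like spurs
    binary = remove_small_objects(binary, min_size=500)
    return find_boundaries(binary, mode='outer')  # initial coastline contour

# The resulting contour would then seed the active-contour (level-set) stage.
```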

13.
This article presents a novel method of mean filtering that reduces the required number of additions and eliminates the need for division altogether. The time reduction is achieved using basic store-and-fetch operations and is irrespective of the image or neighbourhood size. The method has been tested on a variety of greyscale images and neighbourhood sizes with promising results, which indicate that the relative time requirement decreases as image size increases. The method's efficiency also improves significantly with increasing neighbourhood size, making it increasingly useful when dealing with large images.
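The abstract does not disclose the exact operations, but the spirit, a few additions per pixel and a fetch instead of a division, can be illustrated with an integral image plus a precomputed sum-to-mean lookup table. This is only our illustration of the idea, not the authors' algorithm.

```python
# Sketch in the spirit of the paper: window sums via an integral image
# (3 additions per pixel), division replaced by a precomputed
# sum -> mean lookup table ("store and fetch"). Floor rounding is used.
import numpy as np

def lut_mean_filter(img, r=1):
    side = 2 * r + 1
    area = side * side
    pad = np.pad(img, r, mode='edge').astype(np.int64)
    ii = pad.cumsum(0).cumsum(1)              # integral image
    ii = np.pad(ii, ((1, 0), (1, 0)))         # zero row/col for window sums
    h, w = img.shape
    sums = (ii[side:side+h, side:side+w] - ii[:h, side:side+w]
            - ii[side:side+h, :w] + ii[:h, :w])
    lut = (np.arange(255 * area + 1) // area).astype(np.uint8)  # sum -> mean
    return lut[sums]                          # fetch instead of divide

out = lut_mean_filter(np.random.default_rng(2).integers(0, 256, (64, 64)))
```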

14.
We propose an information filtering system that recommends documents via a user profile, using latent semantics obtained by singular value decomposition (SVD) and independent component analysis (ICA). In information filtering systems, it is useful to analyze the latent semantics of documents, and ICA is one method for doing so. We assume that topics are independent of each other; hence, when ICA is applied to documents, we obtain the topics they contain. By using SVD to remove noise before applying ICA, we can improve the accuracy of topic extraction. Representing the documents by these topics improves the recommendations made by the user profile. In addition, we construct the user profile with a genetic algorithm (GA) and evaluate it by 11-point average precision. We carried out an experiment on a test collection to confirm the advantages of the proposed method. This work was presented in part at the 10th International Symposium on Artificial Life and Robotics, Oita, Japan, February 4–6, 2005.
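A minimal sketch of the SVD-then-ICA pipeline on a toy term-document matrix, using scikit-learn (the matrix and component counts are invented):

```python
# SVD reduces dimensionality (suppressing noise), then ICA rotates the
# reduced space toward statistically independent topics.
import numpy as np
from sklearn.decomposition import TruncatedSVD, FastICA

rng = np.random.default_rng(3)
X = rng.poisson(0.3, size=(500, 2000)).astype(float)  # docs x terms (toy)

X_svd = TruncatedSVD(n_components=20, random_state=0).fit_transform(X)
topics = FastICA(n_components=10, random_state=0).fit_transform(X_svd)
# topics[d] is document d's loading on 10 independent topics; documents and
# the user profile are compared in this topic space instead of raw terms.
```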

15.
To address the shortcomings of the total variation (TV) model, namely its insensitivity to image detail and its tendency to blur edges while denoising, a new image denoising algorithm is proposed. After wavelet decomposition, image detail is concentrated mainly in the high-frequency subbands, and correlating wavelet coefficients across adjacent scales improves edge localization accuracy. The correlation of high-frequency wavelet coefficients is therefore used to control the diffusion of the TV model, preserving edge detail while denoising. Simulation experiments with three typical discretization schemes show that the denoised images have improved visual quality and a substantially higher signal-to-noise ratio.
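A compact sketch of the gating idea follows: an edge indicator built from the product (correlation) of detail responses at two adjacent scales slows TV-style diffusion near edges. Gaussian-derivative magnitudes stand in for the paper's wavelet high-frequency coefficients, and the step size and iteration count are assumptions.

```python
# Gated TV diffusion: diffuse less where adjacent-scale detail responses
# agree (true edges), more where they do not (noise).
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def tv_gated_denoise(img, n_iter=50, dt=0.1, eps=1e-6):
    u = img.astype(float).copy()
    d1 = gaussian_gradient_magnitude(u, sigma=1)   # fine-scale detail
    d2 = gaussian_gradient_magnitude(u, sigma=2)   # adjacent coarser scale
    corr = d1 * d2                                 # large only at true edges
    g = 1.0 / (1.0 + corr / corr.mean())           # diffusion gate in (0, 1]
    for _ in range(n_iter):
        ux, uy = np.gradient(u)
        mag = np.sqrt(ux**2 + uy**2 + eps)
        div = (np.gradient(g * ux / mag, axis=0)
               + np.gradient(g * uy / mag, axis=1))
        u += dt * div                              # gated TV diffusion step
    return u
```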

16.
When using traditional digital classification algorithms, a researcher typically encounters serious difficulty in identifying urban land cover classes from high-resolution data. The usual approach is to use spectral information alone, ignoring spatial information and the fact that groups of pixels need to be considered together as objects. We used QuickBird image data over a central region of the city of Phoenix, Arizona to examine whether an object-based classifier can accurately identify urban classes. To test whether spectral information alone suffices for urban classification, we examined whether spectra of the selected classes, taken from randomly selected points, can be effectively discriminated; the overall accuracy based on spectral information alone reached only about 63.33%. We then employed five classification procedures within the object-based paradigm, which separates spatially and spectrally similar pixels at different scales. The classifiers used to assign land covers to segmented objects include membership functions and the nearest neighbour classifier. The object-based classifier achieved a high overall accuracy (90.40%), whereas the most commonly used decision rule, the maximum likelihood classifier, produced a lower overall accuracy (67.60%). This study demonstrates that the object-based classifier is a significantly better approach than classical per-pixel classifiers. Further, it reviews the choice of parameters for segmentation and classification, the combined use of composite and original bands, the selection of scale levels, and the choice of classifiers. Strengths and weaknesses of the object-based prototype are presented, and we provide suggestions for avoiding or minimizing the uncertainties and limitations associated with the approach.
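A bare-bones object-based sketch, segment the image, describe each object by its mean band values, then label objects with a nearest-neighbour rule (one of the two decision rules the study uses), might look like this; SLIC segmentation and the parameter values are stand-in assumptions.

```python
# Object-based classification sketch: objects, per-object mean spectra,
# nearest-neighbour labelling.
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import KNeighborsClassifier

def classify_objects(image, train_feats, train_labels, n_segments=500):
    segments = slic(image, n_segments=n_segments, compactness=10)
    ids = np.unique(segments)
    feats = np.array([image[segments == s].mean(axis=0) for s in ids])
    knn = KNeighborsClassifier(n_neighbors=1).fit(train_feats, train_labels)
    labels = knn.predict(feats)            # one land-cover label per object
    return segments, dict(zip(ids, labels))
```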

17.
The semivariogram, as defined in geostatistics, is a powerful tool for texture extraction from remotely sensed images. However, the texture features traditionally extracted from a semivariogram are generally intended for pixel-based classification, and most studies have relied on the original computation mode of the semivariogram and on discrete semivariance values. This article describes a set of semivariogram texture features (STFs) based on the mean square root pair difference (SRPD) to improve the accuracy of object-oriented classification (OOC) of QuickBird images. The parameters for calculating the semivariogram, including directions, moving-window size, and lag distance, were first derived adaptively from semivariance analysis. Then, 22 STFs were extracted from the discrete and mean/standard-deviation semivariances, and 15 of them were selected by feature optimization. Five grey-level co-occurrence matrix (GLCM) texture features (mean, homogeneity, contrast, angular second moment, and entropy) were also calculated on the segmented image objects using the panchromatic band. A comparison of classification results demonstrates that the STFs described in this article usefully supplement the spectral information in OOC, and that the spectral + STFs classification method achieves a higher classification accuracy than the combination of spectral and GLCM features.
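A directional experimental semivariogram, from which such texture features are drawn, can be computed as below. The SRPD option reflects our reading of "mean square root pair difference" and should be checked against the article's definition; the maximum lag is an arbitrary assumption.

```python
# Directional experimental semivariogram over an image window.
import numpy as np

def directional_semivariogram(img, max_lag=20, axis=1, use_srpd=False):
    gamma = []
    for h in range(1, max_lag + 1):
        a = np.take(img, range(img.shape[axis] - h), axis=axis).astype(float)
        b = np.take(img, range(h, img.shape[axis]), axis=axis).astype(float)
        d = np.abs(a - b)
        # classical semivariance, or (assumed) square-root pair difference
        gamma.append(np.mean(np.sqrt(d)) if use_srpd else np.mean(d**2) / 2.0)
    return np.array(gamma)   # texture features are drawn from this curve,
                             # e.g. its values at selected lags
```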

18.
19.
Track-before-detect (TBD) algorithms are used in tracking systems where the object's signal is below the noise floor (low-SNR objects). They require a large number of computations and memory transfers for real-time signal processing, and GPGPUs are well suited as parallel processing devices for TBD algorithms. Owing to the lack of documentation for low-level GPGPU programming, finding optimal or suboptimal code directly is not possible, so high-level code optimization is necessary; an evolutionary approach based on a single parent and a single child, i.e., a local search, is considered. A brute-force search is not feasible, because there are N! code variants, where N is the number of motion vector components. The proposed evolutionary operator, LREI (local random extraction and insertion), reorders source code to reduce computation time through better organization of memory transfers and of the texture cache content. A starting point based on sorting, with minimal execution time as the metric, is proposed, and unbiased random and biased sorting techniques are compared experimentally. Tests show a significant improvement in computation speed, about 8% over conventional CUDA code. Optimizing the sample code takes about 1 h (1,000 iterations) for the considered recursive spatio-temporal TBD algorithm.
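The single-parent/single-child local search with an LREI-style operator can be sketched as follows; `compile_and_time` is a hypothetical hook that builds and benchmarks the reordered kernel source.

```python
# Schematic of the local search: extract one code line at random, reinsert
# it at a random position, keep the variant if measured runtime improves.
import random

def lrei_search(lines, compile_and_time, n_iter=1000, seed=0):
    rng = random.Random(seed)
    parent, parent_t = list(lines), compile_and_time(lines)
    for _ in range(n_iter):
        child = list(parent)
        stmt = child.pop(rng.randrange(len(child)))        # local random extraction
        child.insert(rng.randrange(len(child) + 1), stmt)  # ...and insertion
        child_t = compile_and_time(child)                  # hypothetical benchmark hook
        if child_t < parent_t:                             # greedy acceptance
            parent, parent_t = child, child_t
    return parent, parent_t
```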

20.
《Ergonomics》2012,55(4):613-626
Two experiments examined the effects of whole-body vibration on visual performance. The first experiment concerned alphanumeric reading performance and contrast thresholds for gratings subtending 7.5, 10 and 12.5 cycles per degree (c deg−1). Seated subjects were exposed to vertical sinusoidal whole-body vibration (4 Hz, 2.5 m s−2 r.m.s.). The greatest reading errors occurred with characters exhibiting high spatial complexity in their vertical axis. Reductions in contrast sensitivity due to vibration increased with increasing spatial frequency, the greatest loss occurring with horizontally orientated gratings.

In the second experiment, contrast thresholds for horizontally orientated gratings subtending 1.5 and 12.5 c deg−1 were obtained from ten subjects at five-minute intervals during a 60-minute whole-body vibration exposure (20 Hz, 1.7 m s−2 r.m.s.), a 20-minute pre-exposure and a 60-minute post-exposure period. There were no significant changes in contrast thresholds for gratings subtending 1.5 c deg−1 during or after vibration exposure. A large variation was found in the effect of vibration on performance with the higher-spatial-frequency grating, both during and after exposure. Significant correlations between vertical head motion and contrast sensitivity were obtained for five of the ten subjects, suggesting that time-dependent changes in seat-to-head transmissibility were partly responsible for the results. Other time-dependent changes were found with the high-spatial-frequency grating. Possible explanations are discussed.
