Similar documents
20 similar documents found
1.
Fully convolutional neural networks have in recent years been applied to several areas of deep learning: beyond simple image-classification tasks, they are used for object detection, semantic/image segmentation, and generative tasks based on generative adversarial networks. A typical fully convolutional network contains not only conventional convolutional layers but also deconvolutional layers, and both are compute-intensive. Most existing research focuses on optimizing the design of the convolutional layers, while acceleration of deconvolution has received little attention. This paper proposes a fully convolutional neural network accelerator with a bidirectional systolic dataflow that can process ordinary convolutional layers and deconvolutional layers efficiently on the same hardware. Several representative fully convolutional network models, such as DCGAN and Cascaded-FCN, were selected for the experiments. Compared with previous unoptimized acceleration schemes, the proposed accelerator achieves an average speedup of 2.8x and reduces energy consumption by 46.3%.
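The deconvolution layers mentioned above are transposed convolutions, which share the same multiply-accumulate pattern as ordinary convolution; that is what makes a single bidirectional datapath plausible. Below is a minimal NumPy sketch of 1-D convolution and its transposed counterpart, purely illustrative and not the accelerator's actual dataflow.

```python
import numpy as np

def conv1d(x, w, stride=1):
    """Plain 1-D valid convolution (cross-correlation), as in a conv layer."""
    n_out = (len(x) - len(w)) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + len(w)], w)
                     for i in range(n_out)])

def transposed_conv1d(y, w, stride=1):
    """1-D transposed convolution (a 'deconvolution' layer): each input sample
    scatters a scaled copy of the kernel into the up-sampled output."""
    out = np.zeros((len(y) - 1) * stride + len(w))
    for i, v in enumerate(y):
        out[i * stride:i * stride + len(w)] += v * w
    return out

x = np.arange(8, dtype=float)
w = np.array([1.0, 2.0, 1.0])
y = conv1d(x, w, stride=2)           # down-samples: 8 samples -> 3 outputs
x_up = transposed_conv1d(y, w, 2)    # up-samples back to a longer signal
print(y, x_up, sep="\n")
```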

2.
Postprocessing technique for blocking artifacts reduction in DCT domain
Zhao  Y. Cheng  G. Yu  S. 《Electronics letters》2004,40(19):1175-1176
An effective post-processing technique is proposed to reduce the blocking artifacts in the block discrete cosine transform (BDCT)-coded images. Simulation results indicate that the proposed scheme outperforms conventional post-processing techniques in both PSNR and visual quality.

3.
As the power grid becomes increasingly intelligent, high-dimensional smart grid data have become an important value resource of "Grid 2.0". This paper reviews the sources of smart grid big data and the big-data stream architecture, discusses traditional clustering methods for power data and their characteristics, and analyzes the features of high-dimensional smart grid data, including sparsity, the empty-space phenomenon, dimensionality effects, the hubness phenomenon, and outlier detection. Clustering methods for high-dimensional data and their practical applications are then discussed in terms of dimensionality reduction, indexing techniques, and the representation and evaluation of results.

4.
周冬  苏勇  黄烨 《信息技术》2013,(3):168-171
Traditional anomaly detection techniques are based on distance and density, and fast anomaly detection algorithms depend heavily on index structures or grid partitioning; they work well on low-dimensional data. For high-dimensional data, with its sparsity and empty-space phenomenon, index structures fail and the number of grid cells grows exponentially, so the performance of traditional algorithms degrades. This paper uses information entropy to determine the anomaly subspaces of high-dimensional data and applies the DBSCAN clustering algorithm within those subspaces, which shows good performance for anomaly detection on high-dimensional data.
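A minimal sketch of the pipeline described above, assuming scikit-learn and a placeholder entropy criterion (ranking dimensions by histogram entropy stands in for the paper's information-entropy rule for choosing the anomaly subspace); DBSCAN noise points are then reported as anomalies.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dimension_entropy(col, bins=10):
    """Shannon entropy of one dimension, estimated from a histogram."""
    counts, _ = np.histogram(col, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))                      # placeholder high-dimensional data

# Rank dimensions by entropy and keep k of them as the candidate anomaly subspace
# (a stand-in for the paper's information-entropy criterion).
k = 5
entropy = np.array([dimension_entropy(X[:, j]) for j in range(X.shape[1])])
subspace = np.argsort(entropy)[:k]

labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(X[:, subspace])
anomalies = np.where(labels == -1)[0]               # DBSCAN noise points = anomalies
print("subspace:", subspace, "num anomalies:", anomalies.size)
```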

5.
A technique is described that permits the rapid determination of all four noise parameters of a MESFET or HEMT at wafer level. The fully automated procedure, which has been implemented in the 2-8 GHz range, uses 16 accurately measured, very repeatable source impedance standards. The standards have been selected for optimum coverage of the input impedance plane to result in stable and rapidly convergent least-squares solutions for the minimum noise figure, optimum source impedance, and noise resistance of practical devices. The resultant system is very stable and produces accurate noise parameters for a wide range of devices.
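One standard way to turn such measurements into the four noise parameters is Lane's linearization of the noise-factor equation, solved by least squares over the measured source admittances. The sketch below assumes that formulation; the paper's exact fitting procedure may differ.

```python
import numpy as np

def noise_parameters(F, Gs, Bs):
    """Least-squares extraction of (Fmin, Rn, Gopt, Bopt) from noise-factor
    measurements F (linear, not dB) at source admittances Ys = Gs + j*Bs.

    Model (Lane's linearization):
        F = Fmin + Rn/Gs * ((Gs - Gopt)**2 + (Bs - Bopt)**2)
          = A*(Gs**2 + Bs**2)/Gs + B + C/Gs + D*Bs/Gs
    with A = Rn, B = Fmin - 2*Rn*Gopt, C = Rn*(Gopt**2 + Bopt**2), D = -2*Rn*Bopt.
    """
    M = np.column_stack([(Gs**2 + Bs**2) / Gs,
                         np.ones_like(Gs),
                         1.0 / Gs,
                         Bs / Gs])
    A, B, C, D = np.linalg.lstsq(M, F, rcond=None)[0]
    Rn = A
    Bopt = -D / (2 * A)
    Gopt = np.sqrt(C / A - Bopt**2)
    Fmin = B + 2 * A * Gopt
    return Fmin, Rn, Gopt, Bopt

# Synthetic check: 16 source admittance states spread over the input plane.
rng = np.random.default_rng(1)
Gs = rng.uniform(2e-3, 20e-3, 16)    # siemens
Bs = rng.uniform(-15e-3, 15e-3, 16)
Fmin_t, Rn_t, Gopt_t, Bopt_t = 1.6, 25.0, 8e-3, -3e-3
F = Fmin_t + Rn_t / Gs * ((Gs - Gopt_t)**2 + (Bs - Bopt_t)**2)
print(noise_parameters(F, Gs, Bs))   # should recover the four true parameters
```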

6.
Fluorescence loss in photobleaching (FLIP) is a method to study compartment connectivity in living cells. A FLIP sequence is obtained by alternately bleaching a spot in a cell and acquiring an image of the complete cell. Connectivity is estimated by comparing fluorescence signal attenuation in different cell parts. The measurements of the fluorescence attenuation are hampered by the low signal-to-noise ratio of the FLIP sequences, by sudden sample shifts and by sample drift. This paper describes a method that estimates the attenuation by modeling photobleaching as exponentially decaying signals. Sudden motion artifacts are minimized by registering the frames of a FLIP sequence to target frames based on the estimated model and by removing frames that contain deformations. Linear motion (sample drift) is reduced by minimizing the entropy of the estimated attenuation coefficients. Experiments on 16 in vivo FLIP sequences of muscle cells in Drosophila show that the proposed method results in fluorescence attenuations similar to the manually identified gold standard, but with standard deviations approximately 50 times smaller. As a result of this higher precision, cell compartment edges and details such as cell nuclei become clearly discernible. The main value of this method is that it uses a model of the bleaching process to correct motion and that the model-based fluorescence intensity and attenuation estimates can be interpreted easily. The proposed method is fully automatic and runs in approximately one minute per sequence, making it suitable for unsupervised batch processing of large data series.
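The core of the method is fitting an exponentially decaying model to each region's fluorescence trace. A minimal version of that step, assuming a single exponential with offset (the paper's exact model may differ), could look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def bleach_model(t, I0, k, c):
    """Fluorescence intensity during repeated bleaching: exponential decay plus offset."""
    return I0 * np.exp(-k * t) + c

# Synthetic time-intensity trace for one cell region (arbitrary units).
t = np.arange(0, 60, 1.0)                       # frame times
rng = np.random.default_rng(2)
I = bleach_model(t, 100.0, 0.05, 10.0) + rng.normal(0, 2.0, t.size)

(I0, k, c), _ = curve_fit(bleach_model, t, I, p0=(I.max(), 0.1, I.min()))
print(f"estimated attenuation coefficient k = {k:.3f} per frame")
```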

7.
This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26 ± 0.87 to 0.64 ± 0.46 pixels. Time-intensity curves are also improved after registration, with an average error reduced from 2.65 ± 7.89% to 0.87 ± 3.88% between registered data and the manual gold standard. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows an accuracy, a robustness and a computation speed adequate for use in a clinical environment.
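As a rough sketch of the ICA feature-extraction step, scikit-learn's FastICA (not necessarily the authors' implementation) can decompose the frames-by-pixels matrix into component images with their time-intensity courses, from which a time-varying reference image is re-synthesized:

```python
import numpy as np
from sklearn.decomposition import FastICA

# frames: (n_frames, height, width) perfusion series; synthetic placeholder here.
rng = np.random.default_rng(3)
n_frames, h, w = 100, 32, 32
frames = rng.random((n_frames, h, w))

X = frames.reshape(n_frames, -1)                 # one row per frame, one column per pixel
ica = FastICA(n_components=4, random_state=0)
time_courses = ica.fit_transform(X)              # (n_frames, 4) time-intensity behaviour
spatial_maps = ica.mixing_.T.reshape(4, h, w)    # component images (e.g. LV, RV, myocardium)

# Time-varying reference: re-synthesize each frame from the components only, so it
# mimics the contrast-driven intensity changes that the registration must ignore.
reference = ica.inverse_transform(time_courses).reshape(n_frames, h, w)
```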

8.
This paper presents a gradient-based optimization approach to achieve reduction of blocking artifacts in compressed JPEG images. This approach involves decomposing a JPEG image into 1-D signals once along the rows or columns and once along the columns or rows. The reduction of blocking artifacts is carried out per 1-D signal by an optimization formulation where the gradient of an original 1-D signal is approximated based on the gradient of a compressed signal. A fixed-weight and an adaptive-weight optimization formulation are considered and solved analytically. A restored image is reconstructed by aggregating recovered 1-D signals. The performance of the developed method is assessed by examining both gray-level and color images and by computing the three measures of PSNR, SSIM, and GBIM. Comparison results with five existing methods are also reported.
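One hedged way to realize the per-signal optimization described above is a fixed-weight quadratic objective that keeps the restored signal close to the compressed one while matching a target gradient whose block-boundary jumps have been damped; the paper's actual objective and weights may differ.

```python
import numpy as np

def deblock_1d(y, lam=4.0, block=8):
    """Fixed-weight quadratic sketch of gradient-based deblocking for one 1-D signal y.

    The target gradient g copies the compressed signal's gradient but shrinks the
    jumps at 8-sample block boundaries; the restored x solves
        min_x ||x - y||^2 + lam * ||D x - g||^2,
    i.e. the linear system (I + lam * D^T D) x = y + lam * D^T g.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # (n-1, n) first-difference operator
    g = np.diff(y).astype(float)
    g[block - 1::block] *= 0.25             # damp gradients across block edges
    A = np.eye(n) + lam * D.T @ D
    b = y + lam * D.T @ g
    return np.linalg.solve(A, b)

row = np.repeat([52.0, 60.0, 55.0, 63.0], 8)   # a row with visible block steps
print(np.round(deblock_1d(row), 1))
```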

9.
《现代电子技术》2017,(19):138-141
A data mining method based on association rules and multi-objective soft subspace clustering is proposed to effectively mine features from locally scattered text data in high-dimensional data sets. First, multi-objective soft subspace clustering combined with non-dominated sorting genetic optimization is used to optimize the weighted within-cluster compactness and weighted between-cluster separation functions, yielding the optimized objective functions and a non-dominated Pareto-optimal solution set; a weighted subspace partitioning method then performs feature clustering on the optimal solution set. Second, a feature-extraction and associated-text recognition method based on association rules identifies and classifies features within and between the clustered texts, thereby effectively mining the text data. Experiments show that the multi-objective soft subspace clustering mining method can effectively mine locally scattered text data in high-dimensional collections.
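The two objectives traded off in such methods are typically a weighted within-cluster compactness and a weighted between-cluster separation. The sketch below evaluates one common form of these for a candidate solution; the exact functions optimized in the paper may differ.

```python
import numpy as np

def soft_subspace_objectives(X, labels, centers, W, beta=2.0):
    """Evaluate weighted within-cluster compactness (to minimize) and weighted
    between-cluster separation (to maximize) for a soft-subspace clustering.

    X: (n, d) data, labels: (n,) cluster index, centers: (k, d),
    W: (k, d) per-cluster feature weights, each row summing to 1.
    """
    global_center = X.mean(axis=0)
    compact, separate = 0.0, 0.0
    for c in range(centers.shape[0]):
        members = X[labels == c]
        wb = W[c] ** beta
        compact += (wb * (members - centers[c]) ** 2).sum()
        separate += len(members) * (wb * (centers[c] - global_center) ** 2).sum()
    return compact, separate

# Tiny example: 2 clusters in 4-D, each relevant in a different feature subspace.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal([0, 0, 5, 5], 0.3, (20, 4)),
               rng.normal([5, 5, 0, 0], 0.3, (20, 4))])
labels = np.repeat([0, 1], 20)
centers = np.vstack([X[:20].mean(0), X[20:].mean(0)])
W = np.array([[0.1, 0.1, 0.4, 0.4], [0.4, 0.4, 0.1, 0.1]])
print(soft_subspace_objectives(X, labels, centers, W))
```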

10.
沈晓燕  王志功 《半导体学报》2014,35(9):095011-4
Interruption of nerve tracts is one of the major causes of dysfunction after spinal cord injury. The microelectronic neural bridge is a method to restore the function of interrupted neural pathways by using microelectronic chips to bypass the injured nerve tracts. A low-power, fully integrated microelectronic neural bridge chip is designed using CSMC 0.5-μm CMOS technology. The structure and the key points of the circuit design are introduced in detail. In order to meet the requirements for implantation, the circuit was modified to avoid the use of off-chip components, and fully monolithic integration is achieved. The operating voltage of the circuit is ±2.5 V, and the chip area is 1.21 × 1.18 mm². According to the characteristics of the neural signal, a time-domain method is used in testing. The passband of the microelectronic neural bridge system covers the whole frequency range of the neural signal, the power consumption is 4.33 mW, and the gain is adjustable. The design goals are achieved.

11.
We present a fully Bayesian approach to modeling in functional magnetic resonance imaging (FMRI), incorporating spatio-temporal noise modeling and haemodynamic response function (HRF) modeling. A fully Bayesian approach allows for the uncertainties in the noise and signal modeling to be incorporated together to provide full posterior distributions of the HRF parameters. The noise modeling is achieved via a nonseparable space-time vector autoregressive process. Previous FMRI noise models have either been purely temporal, separable or modeling deterministic trends. The specific form of the noise process is determined using model selection techniques. Notably, this results in the need for a spatially nonstationary and temporally stationary spatial component. Within the same full model, we also investigate the variation of the HRF in different areas of the activation, and for different experimental stimuli. We propose a novel HRF model made up of half-cosines, which allows distinct combinations of parameters to represent characteristics of interest. In addition, to adaptively avoid over-fitting we propose the use of automatic relevance determination priors to force certain parameters in the model to zero with high precision if there is no evidence to support them in the data. We apply the model to three datasets and observe matter-type dependence of the spatial and temporal noise, and a negative correlation between activation height and HRF time to main peak (although we suggest that this apparent correlation may be due to a number of different effects).
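The half-cosine HRF can be sketched as consecutive half-period cosine segments joining a small set of node amplitudes; the parameterization below is illustrative and not necessarily the authors' exact one.

```python
import numpy as np

def half_cosine_hrf(durations, amplitudes, dt=0.1):
    """Sketch of a half-cosine HRF: consecutive half-period cosine segments that
    connect the given node amplitudes over the given segment durations (seconds).

    With 4 segments and nodes (baseline, dip, peak, undershoot, baseline) this
    mimics an initial dip, rise to peak, fall, undershoot and return to baseline.
    """
    pieces = []
    for dur, (a0, a1) in zip(durations, zip(amplitudes[:-1], amplitudes[1:])):
        t = np.arange(0, dur, dt)
        # half cosine going smoothly from a0 to a1 over this segment
        pieces.append(a0 + (a1 - a0) * (1 - np.cos(np.pi * t / dur)) / 2)
    return np.concatenate(pieces)

# Example parameter set (illustrative values only, not fitted to data).
hrf = half_cosine_hrf(durations=[2.0, 4.0, 6.0, 6.0],
                      amplitudes=[0.0, -0.1, 1.0, -0.2, 0.0])
```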

12.
Fourier approximation and estimation of discriminant, regression, and density functions are considered. A preference order is established for the frequency weights in multiple Fourier expansions and the connection weights in single hidden-layer neural networks. These preferred weight vectors, called good weights (good lattice weights for estimation of periodic functions), are generalizations for arbitrary periods of the hyperbolic lattice points of Korobov (1959) and Hlawka (1962) associated with classes of smooth functions of period one in each variable. Although previous results on approximation and quadrature are affinely invariant to the scale of the underlying periods, some of our results deal with optimization over finite sets and strongly depend on the choice of scale. It is shown how to count and generate good lattice weights. Finite sample bounds on mean integrated squared error are calculated for ridge estimates of periodic pattern class densities. The bounds are combined with a table of cardinalities of good lattice weight sets to furnish classifier design with prescribed class density estimation errors. Applications are presented for neural networks and projection pursuit. A hyperbolic kernel gradient transform is developed which automatically determines the training weights (projection directions). Its sampling properties are discussed. Algorithms are presented for generating good weights for projection pursuit.
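For period-one functions, the good lattice weights referred to above are the hyperbolic-cross frequency vectors, i.e. integer vectors whose componentwise product of max(1, |k_j|) stays below a threshold. A brute-force enumerator for small cases (not the paper's counting/generation algorithm) is sketched below.

```python
from itertools import product

def hyperbolic_lattice_weights(dim, T):
    """Enumerate integer frequency vectors k with prod(max(1, |k_j|)) <= T,
    the hyperbolic (Korobov/Hlawka-style) lattice points used as 'good' weights."""
    bound = int(T)
    pts = []
    for k in product(range(-bound, bound + 1), repeat=dim):
        size = 1
        for kj in k:
            size *= max(1, abs(kj))
        if size <= T:
            pts.append(k)
    return pts

weights = hyperbolic_lattice_weights(dim=2, T=4)
print(len(weights), weights[:8])   # the count grows roughly like T * (log T)^(dim-1)
```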

13.
14.
The present work concerns the problem of refraction artifacts in ultrasonic transmission tomography. The reconstruction is improved by curved-ray methods, combined with algebraic reconstruction techniques. The problem of acoustic ray tracing and image interpolation has been carefully studied, and different reconstruction algorithms have been developed and compared. The effects of the geometrical characteristics of the set-up and of the studied medium (geometry and acoustical properties) on the reconstruction accuracy are considered. Some simulation results are presented which show an encouraging reduction of the refraction artifacts. The results have been confirmed by experiments carried out with agar-gel phantoms. The experimental device and procedure are described and straight- and curved-ray reconstructions are shown. Reconstruction quality can be improved significantly for refractive index variations of up to 10%, which seems sufficient for soft tissue imaging; yet there are some limiting factors, such as multipath propagation, if any, or the difficulty of choosing an initial value for the reconstruction.

15.
Virtual colonoscopy detects polyps by navigating along a colon centerline. Complete colon segmentation based on computed tomography (CT) data is a prerequisite to the computation of a complete colon centerline. There are two main problems impeding complete segmentation: overdistention/underdistention of the colon and the use of oral contrast agents. Overdistention produces loops in the segmented colon, while underdistention may cause the segmented colon to collapse into a series of disconnected segments. Use of oral contrast agents, which have high attenuation on CT, may add redundant structures (bones and small bowels) to the segmented colon. A fully automated colon segmentation method is proposed in this paper to address the two problems. We tested the proposed method in 170 cases, including 37 "moderate" and 133 "challenging" cases. Computer-generated centerlines were compared with human-generated centerlines (plotted by three radiologists). The proposed method achieved a 90.56% correct coverage rate with respect to the human-generated centerlines. We also compared the proposed method with two existing colon segmentation methods: Uitert's method and Nappi's method. The results of these two methods were 75.16% and 72.59% correct coverage rates, respectively. Our experimental results indicate that the proposed method could yield more complete colon centerlines than the existing methods.

16.
An effective analysis of clinical trials data involves analyzing different types of data such as heterogeneous and high dimensional time series data. The current time series analysis methods generally assume that the series at hand have sufficient length to apply statistical techniques to them. Other ideal-case assumptions are that data are collected in equal-length intervals and that, when time series are compared, their lengths are equal to each other. However, these assumptions are not valid for many real data sets, especially for clinical trials data sets. In addition, the data sources are different from each other, the data are heterogeneous, and the sensitivity of the experiments varies by the source. Approaches for mining time series data need to be revisited, keeping this wide range of requirements in mind. In this paper, we propose a novel approach for information mining that involves two major steps: applying a data mining algorithm over homogeneous subsets of data, and identifying common or distinct patterns over the information gathered in the first step. Our approach is implemented specifically for heterogeneous and high dimensional time series clinical trials data. Using this framework, we propose a new way of utilizing frequent itemset mining, as well as clustering and declustering techniques with novel distance metrics for measuring similarity between time series data. By clustering the data, we find groups of analytes (substances in blood) that are most strongly correlated. Most of the relationships that are already known are verified by the clinical panels, and, in addition, we identify novel groups that need further biomedical analysis. A slight modification to our algorithm results in an effective declustering of high dimensional time series data, which is then used for "feature selection." Using industry-sponsored clinical trials data sets, we are able to identify a small set of analytes that effectively models the state of normal health.
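The paper's distance metrics for unequal-length series are not reproduced here, but dynamic time warping is a common baseline for that situation and shows what such a comparison involves; this is a generic sketch, not the authors' metric.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D series of unequal length,
    a common baseline for comparing clinical time series before clustering."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two analyte traces sampled a different number of times.
print(dtw_distance(np.array([1.0, 2.0, 3.0, 2.5]),
                   np.array([1.1, 2.1, 2.9, 3.0, 2.4])))
```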

17.
18.
The recent development of more sophisticated remote-sensing systems enables the measurement of radiation in many more spectral intervals than was previously possible. An example of this technology is the AVIRIS system, which collects image data in 220 bands. The increased dimensionality of such hyperspectral data greatly enhances the data's information content, but provides a challenge to the current techniques for analyzing such data. Human experience in 3D space tends to mislead our intuition of geometrical and statistical properties in high-dimensional space, properties which must guide our choices in the data analysis process. Using Euclidean and Cartesian geometry, high-dimensional space properties are investigated in this paper, and their implication for high-dimensional data and its analysis is studied in order to illuminate the differences between conventional spaces and hyperdimensional space.
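One of the counter-intuitive high-dimensional properties at issue is that almost all of a hypersphere's volume lies in a thin outer shell; since volume scales as r^d, a few lines make this concrete (220 dimensions chosen to match the AVIRIS band count).

```python
# Fraction of a d-dimensional ball's volume lying in the outer 5% shell:
# 1 - (1 - eps)^d, because volume scales as r^d.
eps = 0.05
for d in (2, 3, 10, 100, 220):
    print(d, 1 - (1 - eps) ** d)
```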

19.
The analysis of natural linear structures, termed “lineaments” in satellite images, provides important information to the geologist. In the satellite imaging process, important features of the observed tridimensional scene, including geological lineaments, are mapped into the resulting 2D image as sharp radiation variations or edge elements (edgels). Edgels are detected by a first-order differentiation operator and are linked together with those in the vicinity on a basis of orientation continuity. Lineaments are mapped into remotely sensed satellite images as long and continuous quasilinear features and can be described as a connected sequence of edgels whose direction may change gradually along the sequence. Parts of the same lineament can be occluded by geomorphological features and must be linked together, a major drawback with local and small neighborhood detectors. The authors propose a cellular neural network (CNN) architecture to offer a large directional neighborhood to the lineament detection algorithm. The CNN uses a large circular neighborhood coupled with a directional-induced gradient field to link together edgels with similar and continuous orientation. Missing edgels are restored if a surrounding lineament is detected.

20.
In recent years, many investigators have proposed Gibbs prior models to regularize images reconstructed from emission computed tomography data. Unfortunately, hyperparameters used to specify Gibbs priors can greatly influence the degree of regularity imposed by such priors and, as a result, numerous procedures have been proposed to estimate hyperparameter values from observed image data. Many of these procedures attempt to maximize the joint posterior distribution on the image scene. To implement these methods, approximations to the joint posterior densities are required, because the dependence of the Gibbs partition function on the hyperparameter values is unknown. Here, the authors use recent results in Markov chain Monte Carlo (MCMC) sampling to estimate the relative values of Gibbs partition functions and, using these values, sample from joint posterior distributions on image scenes. This allows for a fully Bayesian procedure which does not fix the hyperparameters at some estimated or specified value, but enables uncertainty about these values to be propagated through to the estimated intensities. The authors utilize realizations from the posterior distribution for determining credible regions for the intensity of the emission source. The authors consider two different Markov random field (MRF) models: the power model and a line-site model. As applications, they estimate the posterior distribution of source intensities from computer simulated data as well as data collected from a physical single photon emission computed tomography (SPECT) phantom.
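The key computational step, estimating relative Gibbs partition-function values by MCMC, can be illustrated on a toy one-dimensional MRF: sample under one hyperparameter value and average an importance weight to estimate the ratio Z(beta2)/Z(beta1). This is a sketch of the principle only, not the authors' estimator or their power/line-site models.

```python
import numpy as np

def energy(x):
    """Toy 1-D MRF energy: squared neighbour differences plus a weak pull to zero."""
    return np.sum(np.diff(x) ** 2) + 0.1 * np.sum(x ** 2)

def metropolis_samples(beta, n_sites=20, n_iter=20000, step=0.5, seed=0):
    """Metropolis sampling from p(x) proportional to exp(-beta * U(x)) on a 1-D lattice."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_sites)
    samples = []
    for it in range(n_iter):
        i = rng.integers(n_sites)
        prop = x.copy()
        prop[i] += rng.normal(0, step)
        if rng.random() < np.exp(-beta * (energy(prop) - energy(x))):
            x = prop
        if it % 20 == 0:
            samples.append(x.copy())
    return samples

# Identity used for the ratio: Z(beta2)/Z(beta1) = E_beta1[ exp(-(beta2 - beta1) * U(x)) ]
beta1, beta2 = 1.0, 1.2
xs = metropolis_samples(beta1)
ratio = np.mean([np.exp(-(beta2 - beta1) * energy(x)) for x in xs])
print("estimated Z(beta2)/Z(beta1):", ratio)
```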
