Similar Literature
20 similar documents found
1.
The Hilbert curve is a one-to-one mapping between multidimensional space and one-dimensional (1-D) space. Because it preserves the high correlation of multidimensional points, it has received much attention in many areas. In image processing especially, the Hilbert curve is actively studied as a scan technique (Hilbert scan). Several Hilbert scan algorithms exist, but they usually impose strict implementation conditions. For example, they use recursive functions to generate scans, which makes the algorithms complex and difficult to implement in real-time systems. Moreover, the length of each side of the scanned region must be the same and a power of two, which greatly limits the applicability of the Hilbert scan. In this paper, to remove these constraints and generalize the Hilbert scan, an effective generalized three-dimensional (3-D) Hilbert scan algorithm is proposed. The proposed algorithm uses two simple look-up tables instead of recursive functions to generate a scan, which greatly reduces the computational complexity and saves memory. Furthermore, the experimental results show that the proposed generalized Hilbert scan also exploits the high correlation between neighboring lattice points in an arbitrarily-sized cuboid region and gives competitive performance in comparison with common scan techniques.
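As a minimal illustration of how such a scan can be generated without recursion, here is the classic bit-manipulation conversion from a 1-D Hilbert index to 2-D coordinates; this is a sketch only, not the paper's 3-D look-up-table algorithm:

```python
def hilbert_d2xy(order, d):
    """Map Hilbert index d to (x, y) on a 2**order x 2**order grid,
    iteratively (no recursion), following the classic bit-twiddling form."""
    x = y = 0
    t, s = d, 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:          # rotate the quadrant so sub-curves line up
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

# walking the curve visits every cell of an 8x8 grid exactly once
points = [hilbert_d2xy(3, d) for d in range(64)]
```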

2.
Surface electromyography (sEMG) gesture-recognition algorithms based on convolutional neural networks (CNNs) usually convert the 1-D sEMG into a 2-D EMG image as the CNN input. To address the small number of instantaneous sEMG samples and the loss of local temporal features incurred when converting 1-D sEMG into a 2-D EMG image, this paper proposes combining multivariate empirical mode decomposition (MEMD) with the Hilbert space-filling curve to raise the accuracy of gesture recognition. The open-source dataset NinaPro-DB1 is used as the experimental dataset; the sEMG is decomposed by the MEMD algorithm; the resulting intrinsic mode functions (IMFs) are used as the filling domain of the Hilbert curve (Hilb-IMFs) and mapped into 2-D EMG images; and DenseNet is selected as the basic network for gesture recognition. Experimental results show that the proposed method improves gesture-recognition accuracy by about 4% over conventional signal dimension-raising methods, verifying its effectiveness.
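A sketch of the Hilb-IMFs mapping step, assuming one IMF channel and a 64x64 target image; the MEMD decomposition itself is not shown, and hilbert_d2xy repeats the conversion from item 1 so the snippet stays self-contained:

```python
import numpy as np

def hilbert_d2xy(order, d):
    # same bit-twiddling index-to-(x, y) conversion as in the item-1 sketch
    x = y = 0
    t, s = d, 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def fill_image_along_hilbert(signal, order=6):
    """Lay a 1-D signal (e.g., one IMF from MEMD) onto a 2**order-square
    image along the Hilbert curve, so temporally adjacent samples stay
    spatially adjacent in the resulting EMG image."""
    n = 1 << order
    img = np.zeros((n, n), dtype=np.float32)
    for d in range(min(len(signal), n * n)):
        x, y = hilbert_d2xy(order, d)
        img[y, x] = signal[d]
    return img

imf = np.sin(np.linspace(0, 40 * np.pi, 64 * 64))   # stand-in IMF channel
emg_image = fill_image_along_hilbert(imf, order=6)
```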

3.
The Hilbert curve is an important method for mapping high-dimensional data down to one dimension. It offers good spatial clustering and spatial continuity and is widely used in geographic information systems, spatial databases, and information retrieval. Existing Hilbert encoding and decoding algorithms ignore the influence of the input data on encoding and decoding efficiency and therefore treat all inputs alike. This paper designs efficient state views and, combined with a fast set-bit detection algorithm, proposes a free-of-counting-leading-zeros Hilbert encoding algorithm (FZF-HE) and decoding algorithm (FZF-HD). They quickly identify the leading-zero part of the input, which requires no iterative computation, thus reducing the number of iterative queries and the algorithmic complexity and improving encoding and decoding efficiency. Experimental results show that FZF-HE and FZF-HD are slightly more efficient than existing algorithms on uniformly distributed data and far more efficient on skewed data.
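A sketch of the key observation, under the assumption that orders above the highest set bit of any coordinate contribute nothing to the code and can be bypassed; the actual FZF-HE/FZF-HD state views are not reproduced here:

```python
def orders_to_skip(x, y, z, order):
    """Orders above the highest set bit of any coordinate are all zero and
    need no iterative work, yet a naive order-wise coder still loops over
    them. x, y, z are Python ints; 'order' is the curve order."""
    highest = max(x.bit_length(), y.bit_length(), z.bit_length())
    return order - highest

print(orders_to_skip(3, 1, 0, order=10))   # 8 of 10 iterations skippable
```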

4.
Using a paired comparison paradigm, various gamut mapping algorithms were evaluated using simple rendered images and artificial gamut boundaries. The test images consisted of simple rendered spheres floating in front of a gray background. Using CIELAB as the device-independent color space, cut-off values for lightness and chroma, based on the statistics of the images, were chosen to reduce the gamuts of the test images. The gamut mapping algorithms consisted of combinations of clipping and mapping the original gamut in linear piecewise segments. Complete color-space compression in RGB and CIELAB was also tested. Each of the colored originals (R, G, B, C, M, Y, and Skin) was mapped separately in lightness and chroma. In addition, each algorithm was implemented either with saturation (C*/L*) allowed to vary or with saturation retaining the same values as in the original image. Pairs of test images with reduced color gamuts were presented to twenty subjects along with the original image. For each pair the subjects chose the test image that better reproduced the original. Rank orders and interval scales of algorithm performance with confidence limits were then derived. Clipping all out-of-gamut colors was the best method for mapping chroma. For lightness mapping, particular gamut mapping algorithms consistently produced images chosen as most like the original at both low and high lightness levels. The choice of device-independent color space may also influence which gamut mapping algorithms are best.
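A minimal sketch of the winning chroma strategy (clipping out-of-gamut colors toward the neutral axis at constant lightness and hue), assuming the artificial gamut boundary is a single C* cutoff as in the study's reduced gamuts:

```python
import numpy as np

def clip_chroma(lab, c_max):
    """Clip CIELAB pixels whose chroma exceeds c_max back to the gamut
    boundary, holding L* and hue angle fixed."""
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    C = np.hypot(a, b)                                   # chroma C* = sqrt(a^2 + b^2)
    scale = np.minimum(1.0, c_max / np.maximum(C, 1e-12))
    return np.stack([L, a * scale, b * scale], axis=-1)

lab_pixels = np.array([[50.0, 60.0, 80.0], [70.0, 10.0, 5.0]])
print(clip_chroma(lab_pixels, c_max=60.0))   # first pixel clipped, second kept
```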

5.
温媛媛  龙伟  高政 《电光与控制》2006,13(4):103-106
Most existing digital watermarking algorithms target grayscale images; watermarking for color images has not been fully studied, and the watermark capacity that can be embedded is limited. The high-capacity multi-channel watermarking algorithm proposed in this paper addresses this problem. Using a color image as the original carrier, the algorithm embeds a 2-D watermark image into the carrier through watermark compression coding, color-space conversion of the carrier image, blockwise discrete cosine transform of the color components, and selection of embedding positions guided by the human visual system, allowing a relatively large watermark image to be embedded. Experimental results show that the algorithm not only increases watermark capacity but is also robust against lossy attacks such as cropping, blurring, and sharpening.
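A hedged sketch of the blockwise-DCT embedding idea described above; the coefficient position, embedding strength, and single-bit payload are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_bit(block8x8, bit, strength=8.0):
    """Embed one watermark bit into a mid-frequency DCT coefficient of an
    8x8 block of one color component (position [4, 3] is an assumption)."""
    coeffs = dct2(block8x8.astype(np.float64))
    coeffs[4, 3] += strength if bit else -strength
    return idct2(coeffs)

block = np.full((8, 8), 128.0)        # stand-in block from one color channel
marked = embed_bit(block, bit=1)
```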

6.
In infrared imaging system design, wide-dynamic-range acquisition circuits are generally used to capture rich detail, while most current display devices are only 8-bit, so compressing wide-dynamic-range images into low-dynamic-range ones while preserving as much information as possible becomes the key problem. This paper studies the mainstream wide-dynamic-range infrared image processing algorithms, analyzes the strengths and weaknesses of three classes of algorithms (tone mapping, image layering, and gradient-domain methods), implements the classic algorithm of each class, compares them on the same infrared image, and then suggests improvements for each class: layering algorithms need lower time complexity while still suppressing halos and gradient reversal; gradient-domain algorithms need to suppress background noise while further enhancing detail.
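As a baseline from the "mapping" class, here is a minimal global log-mapping sketch that compresses a wide-dynamic-range frame to 8 bits; the layering and gradient-domain methods discussed above are far more elaborate:

```python
import numpy as np

def log_map_to_8bit(raw):
    """Global log tone mapping: normalize the wide-dynamic-range frame,
    compress with a log curve, and quantize to 8 bits for display."""
    raw = raw.astype(np.float64)
    lo, hi = raw.min(), raw.max()
    norm = (raw - lo) / max(hi - lo, 1.0)
    mapped = np.log1p(1023.0 * norm) / np.log(1024.0)   # maps [0, 1] -> [0, 1]
    return (255.0 * mapped).astype(np.uint8)

frame = np.random.randint(0, 2**14, size=(240, 320))    # stand-in 14-bit frame
display = log_map_to_8bit(frame)
```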

7.
A hyperspectral remote sensing image classification algorithm based on an artificial immune network with nonlinear kernel-space mapping is proposed. An artificial immune network model is constructed according to the basic principles of biological immune networks; a nonlinear kernel function maps the hyperspectral training samples into a high-dimensional space, refining the kernel-space similarity-based selection of target samples in the artificial immune network, reducing the number of antibodies the network needs to recognize samples, and improving classification accuracy and computational efficiency. To verify the algorithm, comparison experiments against several hyperspectral classification methods were carried out on two hyperspectral remote sensing datasets. The experiments show clear improvements in both classification accuracy and running time, making this an improved artificial-immune-network-based method for hyperspectral remote sensing image classification that is more accurate and faster.
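The abstract does not name its kernel; a sketch assuming the common RBF choice shows how kernel-space similarity between antibodies and samples can be evaluated without ever forming the high-dimensional mapping explicitly:

```python
import numpy as np

def rbf_similarity(antibodies, samples, gamma=0.5):
    """k(x, y) = exp(-gamma * ||x - y||^2) equals the inner product of the
    implicitly mapped vectors in the high-dimensional feature space, so it
    can serve as the kernel-space similarity for antibody selection."""
    d2 = ((antibodies[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)            # shape: (n_antibodies, n_samples)

antibodies = np.random.randn(10, 200)     # stand-in spectral vectors
samples = np.random.randn(50, 200)
K = rbf_similarity(antibodies, samples)
```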

8.
This paper proposes a significance coding algorithm based on a hierarchical model. Without changing the JPEG2000 context model, the algorithm performs hierarchical neighborhood coding according to the spatial clustering regions of significant coefficients. Experimental results show that the bitstream output by the new algorithm has better autocorrelation than the bitstreams produced by JPEG2000 and by Hilbert-curve scanning; the average bit rate of the new algorithm improves on the stripe scan of JPEG2000 and on Hilbert-curve scanning by 1.06% and 0.57%, respectively, and improves on the average bit rate obtained by the context quantization optimization of JPEG2000 by 1.08%.

9.
It is important to detect and extract the major cortical sulci from brain images, but manually annotating these sulci is a time-consuming task and requires the labeler to follow complex protocols. This paper proposes a learning-based algorithm for automated extraction of the major cortical sulci from magnetic resonance imaging (MRI) volumes and cortical surfaces. Unlike alternative methods for detecting the major cortical sulci, which use a small number of predefined rules based on properties of the cortical surface such as the mean curvature, our approach learns a discriminative model using the probabilistic boosting tree algorithm (PBT). PBT is a supervised learning approach which selects and combines hundreds of features at different scales, such as curvatures, gradients and shape index. Our method can be applied to either MRI volumes or cortical surfaces. It first outputs a probability map which indicates how likely each voxel lies on a major sulcal curve. Next, it applies dynamic programming to extract the best curve based on the probability map and a shape prior. The algorithm has almost no parameters to tune for extracting different major sulci. It is very fast (it runs in under 1 min per sulcus including the time to compute the discriminative models) due to efficient implementation of the features (e.g., using the integral volume to rapidly compute the responses of 3-D Haar filters). Because the algorithm can be applied to MRI volumes directly, there is no need to perform preprocessing such as tissue segmentation or mapping to a canonical space. The learning aspect of our approach makes the system very flexible and general. For illustration, we use volumes of the right hemisphere with several major cortical sulci manually labeled. The algorithm is tested on two groups of data, including some brains from patients with Williams Syndrome, and the results are very encouraging.
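A sketch of the integral-volume trick mentioned above, which makes any 3-D box sum (and hence a 3-D Haar response, a difference of box sums) O(1) after a single cumulative-sum pass:

```python
import numpy as np

def integral_volume(vol):
    """Summed-volume table: cumulative sums along each axis."""
    return vol.cumsum(0).cumsum(1).cumsum(2)

def box_sum(iv, z0, y0, x0, z1, y1, x1):
    """Sum of vol[z0:z1, y0:y1, x0:x1] (exclusive upper bounds) from the
    integral volume via 3-D inclusion-exclusion. Padding the table with a
    zero layer would avoid the boundary checks; omitted for brevity."""
    s = iv[z1-1, y1-1, x1-1]
    if z0 > 0: s -= iv[z0-1, y1-1, x1-1]
    if y0 > 0: s -= iv[z1-1, y0-1, x1-1]
    if x0 > 0: s -= iv[z1-1, y1-1, x0-1]
    if z0 > 0 and y0 > 0: s += iv[z0-1, y0-1, x1-1]
    if z0 > 0 and x0 > 0: s += iv[z0-1, y1-1, x0-1]
    if y0 > 0 and x0 > 0: s += iv[z1-1, y0-1, x0-1]
    if z0 > 0 and y0 > 0 and x0 > 0: s -= iv[z0-1, y0-1, x0-1]
    return s

vol = np.random.rand(8, 8, 8)
iv = integral_volume(vol)
assert np.isclose(box_sum(iv, 2, 1, 3, 6, 5, 7), vol[2:6, 1:5, 3:7].sum())
```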

10.
Spread-spectrum steganalysis based on a high-order Markov chain model of images
张湛  刘光杰  王俊文  戴跃伟  王执铨 《电子学报》2010,38(11):2578-2584
Spread-spectrum steganalysis is an important topic in information hiding research. This paper proposes a statistical distribution model of digital images based on high-order Markov chains. After comparing how well common image scanning methods construct high-order Markov chains, the Hilbert scan is adopted to build an n-th-order Markov chain model of digital images. An n-th-order Markov chain measure of the statistical security of image steganography is then proposed and proven to be bounded. Finally, by studying the influence of spread-spectrum steganography on the empirical transition matrix of the high-order Markov chain model, statistical features are extracted from the model and a support vector machine is used to analyze several common spread-spectrum image steganography methods. Experiments show that the proposed method detects spread-spectrum steganography well, and that detection accuracy increases with the model order.
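A first-order sketch of the empirical transition matrix underlying the model; the paper uses n-th-order chains over a Hilbert-scan pixel sequence, and first order is shown here only to keep the snippet small:

```python
import numpy as np

def empirical_transition_matrix(seq, levels=256):
    """First-order empirical transition matrix of a pixel sequence, e.g.,
    an image read out in Hilbert-scan order; rows are normalized to
    transition probabilities."""
    M = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(M, (seq[:-1], seq[1:]), 1.0)       # count adjacent transitions
    row = M.sum(axis=1, keepdims=True)
    return M / np.maximum(row, 1.0)

scan = np.random.randint(0, 256, size=10000)     # stand-in Hilbert-scan sequence
P = empirical_transition_matrix(scan)
```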

11.
We present an efficient algorithm to compute multidimensional spatially variant convolutions (or inner products) between N-dimensional signals and B-splines (or their derivatives) of any order and arbitrary sizes. The multidimensional B-splines are computed as tensor products of 1-D B-splines, and the input signal is expressed in a B-spline basis. The convolution is then computed using an adequate combination of integration and scaled finite differences, so that, for moderate and large scale values, the computational complexity does not depend on the scaling factor. To show the practical benefit of our spatially variant convolution approach, we present an adaptive noise filter that adjusts the kernel size to the local image characteristics and a high-sensitivity local ridge detector.
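The constant-cost recursion via integration and scaled finite differences is not reproduced here; as a simpler stand-in, this sketch builds a scaled 1-D B-spline kernel as repeated box convolutions, from which N-D kernels follow as tensor products:

```python
import numpy as np

def bspline_kernel(order, scale):
    """Sampled B-spline of degree 'order' at width 'scale' samples: the
    (order+1)-fold convolution of a discrete box. Kernels stay normalized
    because convolving unit-sum kernels multiplies their sums."""
    width = max(int(round(scale)), 1)
    box = np.ones(width) / width
    k = box
    for _ in range(order):
        k = np.convolve(k, box)
    return k

cubic = bspline_kernel(order=3, scale=9)   # a scaled cubic B-spline kernel
print(cubic.sum())                          # ~1.0
```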

12.
The Hilbert curve describes a one-to-one mapping between multidimensional space and 1-D space. Most traditional 3D Hilbert encoding and decoding algorithms work order by order, are unaware of differences between input data, and spend equivalent computing costs on all inputs, resulting in low efficiency. To solve this problem, in this paper we design efficient 3D state views for fast encoding and decoding. Based on the state views designed, a new encoding algorithm (JFK-3HE) and...

13.
A general cone-beam reconstruction algorithm
Considering the characteristics of the X-ray microscope system being developed at SUNY at Buffalo and the limitations of available cone-beam reconstruction algorithms, a general cone-beam reconstruction algorithm and several special versions of it are proposed and validated by simulation. The cone-beam algorithm allows various scanning loci, handles reconstruction of rod-shaped specimens which are common in practice, and facilitates near real-time reconstruction by providing the same computational efficiency and parallelism as L.A. Feldkamp et al.'s (1984) algorithm. Although the present cone-beam algorithm is not exact, it consistently gives satisfactory reconstructed images. Furthermore, it has several nice properties if the scanning locus meets some conditions. First, reconstruction within a midplane is exact using a planar scanning locus. Second, the vertical integral of a reconstructed image is equal to that of the actual image. Third, reconstruction is exact if an actual image is independent of rotation axis coordinate z. Also, the general algorithm can uniformize and reduce z-axis artifacts, if a helix-like scanning locus is used.

14.
Region-based image coding with multiple algorithms
The wide usage of small satellite imagery, especially its commercialization, makes data-based onboard compression not only meaningful but also necessary to resolve the bottleneck between the huge volume of data generated onboard and the very limited downlink bandwidth. The authors propose a method that encodes different regions with different algorithms, using three shape-adaptive image compression algorithms as candidates. The first is a JPEG-based algorithm; the second is based on the object-based wavelet transform method proposed by Katata et al. (1997); and the third applies Hilbert scanning to the regions of interest followed by a one-dimensional (1-D) wavelet transform. The three algorithms are also applied to the full image so that their performance on a whole rectangular image can be compared. The authors use eight Landsat TM multispectral images and another 12 small-satellite single-band images as their data set. The results show that these compression algorithms perform significantly differently on different regions.
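A sketch of the third candidate coder, assuming the PyWavelets library and substituting any 1-D linearization for the paper's Hilbert scan of the region:

```python
import numpy as np
import pywt

def code_region_1d(region_pixels, wavelet='haar', level=3):
    """Region-of-interest pixels are first linearized (the paper uses a
    Hilbert scan of the region; any 1-D ordering stands in here) and then
    given a 1-D wavelet transform. Quantization and entropy coding of the
    coefficients would follow."""
    data = np.asarray(region_pixels, dtype=np.float64)
    return pywt.wavedec(data, wavelet, level=level)

coeffs = code_region_1d(np.random.randint(0, 256, size=1024))
```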

15.
詹曙  方琪  杨福猛  常乐乐  闫婷 《电子学报》2016,44(5):1189-1195
To address the poor reconstruction quality or excessively long dictionary-training time of current dictionary-learning-based image super-resolution methods, this paper proposes a super-resolution reconstruction algorithm with improved dictionary learning in coupled feature spaces. The algorithm first clusters the training image patches with a Gaussian mixture model; an improved K-SVD dictionary learning algorithm with a modified dictionary-update step then quickly obtains the dictionary pairs and mapping matrices for the high- and low-resolution feature spaces. During reconstruction, the best-matching dictionary pair and mapping matrix are selected adaptively according to the likelihood of the test sample under each cluster. Finally, exploiting image non-local similarity combined with iterative back-projection, the reconstructed image is post-processed to obtain the best result. Experimental results demonstrate the effectiveness of the method.
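A sketch of the clustering and adaptive-selection steps, assuming scikit-learn's GaussianMixture; the patch dimensionality and number of clusters are made-up stand-ins:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Cluster vectorized training patches with a Gaussian mixture, then pick the
# best-matching cluster for a test patch by likelihood -- the adaptive
# dictionary-pair selection described above.
patches = np.random.randn(1000, 64)            # stand-in 8x8 feature patches
gmm = GaussianMixture(n_components=5, covariance_type='diag').fit(patches)

test = np.random.randn(1, 64)
best_cluster = int(np.argmax(gmm.predict_proba(test)))  # choose dictionary pair
```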

16.
In practical applications, the performance of scene-matching algorithms is affected by rotation, scaling, translation, and other factors. Appropriate image metrics and algorithm performance metrics are chosen as inputs and outputs, a multivariate logistic regression model is used to link the two, and regression analysis evaluates the quality of the matching algorithm. The method can evaluate algorithm performance under multiple combined influencing factors and determine the degree of correlation between each factor and the performance metrics, providing a basis for applying and improving the algorithms. The performance of two matching algorithms is quantitatively evaluated and compared via multivariate regression analysis, validating the evaluation model.
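A sketch of the regression step, assuming scikit-learn and a binary match-success indicator as the performance output; the feature names and synthetic data are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Regress a binary algorithm-performance indicator (match success) on image
# metrics such as rotation, scale, and translation.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # [rotation, scale, translation]
y = (X @ np.array([1.5, -0.8, 0.3]) + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print(model.coef_)        # sign/magnitude: each factor's link to performance
```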

17.
Recently, there has been much progress in algorithm development for image reconstruction in cone-beam computed tomography (CT). Current algorithms, including the chord-based algorithms, now accept minimal data sets for obtaining images on volume regions-of-interest (ROIs), thereby potentially allowing for reduction of X-ray dose in diagnostic CT. As these developments are relatively new, little effort has been directed at investigating the response of the resulting algorithm implementations to physical factors such as data noise. In this paper, we investigate the noise properties of ROI images reconstructed by chord-based algorithms for different scanning configurations. We find that, for the cases under study, the chord-based algorithms yield images of comparable quality. Additionally, in many situations, large data sets contain extraneous data that may not reduce the ROI-image variances.

18.
Fuzzy kernel clustering is an image segmentation technique that combines unsupervised clustering with fuzzy set concepts and is widely used in image segmentation. However, the algorithm is sensitive to initialization, depends heavily on the choice of initial cluster centers, and easily converges to local minima; when used for segmentation, the membership computation considers only the current pixel value and ignores relationships among neighboring pixels, so it handles noisy images poorly. An improved fuzzy kernel clustering segmentation algorithm is therefore proposed: data reduction first mines the data without losing its clustering structure; a characteristic kernel function introduced into the fuzzy kernel clustering algorithm then maps the reduced data into a high-dimensional nonlinear feature space for partitioning; finally, a parameter characterizing the neighboring pixels corrects the membership of the current pixel. Experimental results show that the proposed algorithm largely avoids convergence at local extrema and stagnation during iteration, reaches the best global clustering, clearly reduces the number of iterations, and is highly robust and insensitive to noise.
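For reference, here is the core membership update that kernel FCM modifies (by measuring distances in kernel space) and that the proposed algorithm further corrects with neighborhood information; neither extension is reproduced in this sketch:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Standard fuzzy c-means membership update:
    u_ik = 1 / sum_j (d_ik / d_jk) ** (2 / (m - 1)), rows summing to 1."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # (N, C)
    d = np.maximum(d, 1e-12)
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

X = np.random.rand(100, 3)          # stand-in pixel feature vectors
centers = np.random.rand(4, 3)
U = fcm_memberships(X, centers)     # each row sums to 1 across the 4 clusters
```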

19.
Comparison and analysis of ten color-feature-based image retrieval algorithms
Color features are widely used in content-based image retrieval. This paper experimentally compares ten color-feature-based image retrieval algorithms on the same image database. The results show that in both the HSI and MTM color spaces, the cumulative histogram method outperforms the ordinary histogram method, and this paper gives the first rigorous theoretical proof of this point. The results also show that the weighted distance method offers no clear overall improvement over the Euclidean distance, and the MTM space shows no advantage over HSI, while the central-moments method is simple and fast and, by tuning its weighting coefficients, can approach the retrieval accuracy of the cumulative histogram method. The experiments and analysis provide a useful reference for selecting and optimizing retrieval algorithms.
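A single-channel sketch of the cumulative-histogram comparison that came out on top; the bin count and quantization are assumptions:

```python
import numpy as np

def cumulative_hist_distance(img1, img2, bins=64):
    """L1 distance between the cumulative color histograms of two
    single-channel images; smaller means more similar."""
    h1, _ = np.histogram(img1, bins=bins, range=(0, 256))
    h2, _ = np.histogram(img2, bins=bins, range=(0, 256))
    c1 = np.cumsum(h1) / max(h1.sum(), 1)     # empirical CDFs
    c2 = np.cumsum(h2) / max(h2.sum(), 1)
    return np.abs(c1 - c2).sum()

a = np.random.randint(0, 256, size=(64, 64))
b = np.random.randint(0, 256, size=(64, 64))
print(cumulative_hist_distance(a, b))
```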

20.
This paper presents a new algorithm based on integrated congruence transform for efficient simulation of nonuniform transmission lines. The proposed algorithm introduces the concept of model-order reduction (MOR) via implicit usage of the Hilbert-space moments in distributed networks. The key idea in the proposed algorithm is the development of an orthogonalization procedure that does not require the explicit computation of the Hilbert-space moments in order to find their spanning orthogonal basis. The proposed orthogonalization procedure can thus be used to compute an orthogonal basis for any set of elements that are related through a differential operator in a generalized Hilbert space, without the need to have these elements in an explicit form. The proposed algorithm also addresses the problem of MOR of nonuniform transmission lines, through defining a weighted inner product and norm mappings over the Hilbert space of the moments. Numerical examples demonstrate more accurate numerical approximation capabilities over using the moments explicitly.
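A toy finite-dimensional analogue of the weighted orthogonalization: Gram-Schmidt under a weighted inner product <u, v>_w = sum(w * u * v); the paper's implicit, moment-free construction over a generalized Hilbert space is not reproduced:

```python
import numpy as np

def weighted_gram_schmidt(vectors, w):
    """Orthonormalize vectors under the weighted inner product defined by
    positive weights w, dropping numerically dependent directions."""
    basis = []
    for v in vectors:
        u = v.astype(np.float64).copy()
        for q in basis:
            u -= np.sum(w * u * q) * q        # remove projection onto q
        norm = np.sqrt(np.sum(w * u * u))
        if norm > 1e-12:
            basis.append(u / norm)
    return basis

rng = np.random.default_rng(1)
vecs = [rng.normal(size=50) for _ in range(5)]
w = np.linspace(0.5, 2.0, 50)       # positive weights define the inner product
Q = weighted_gram_schmidt(vecs, w)
print(round(float(np.sum(w * Q[0] * Q[1])), 12))   # ~0: orthogonal under <.,.>_w
```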
