Similar Documents
20 similar documents found.
1.
In multidimensional data analysis and processing, part of the data is often lost or unknown, and how to exploit the latent structure of the observed data to fill in these missing entries is a pressing problem. Most existing research on missing-data imputation targets low-dimensional data in matrix or vector form, and little work addresses data of three or more dimensions. To address this, a multidimensional data imputation algorithm based on tensor decomposition is proposed: it uses the structural properties and uniqueness of the CP decomposition model to effectively fill missing entries in multidimensional data. Experiments on repairing images stored as three-dimensional arrays with partially missing data, compared against the CP-WOPT algorithm, show that the proposed algorithm achieves higher accuracy and faster runtime.
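A minimal sketch of the general idea, not the paper's implementation: rank-R CP completion of a 3-way array by alternating least squares, re-imputing the masked cells with the current model at each sweep. The function names, rank, and iteration count are illustrative assumptions.

```python
import numpy as np

def unfold(X, mode):
    """Matricize a 3-way array along `mode` (C-order columns)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Kronecker product, shape (U.shape[0] * V.shape[0], rank)."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def cp_complete(T, mask, rank=5, n_iter=50, seed=0):
    """Fill the entries of T where mask == 0 using a rank-`rank` CP model."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
    X = np.where(mask, T, 0.0)
    for _ in range(n_iter):
        # re-impute the missing cells with the current CP estimate, then one ALS sweep
        X = np.where(mask, T, np.einsum('ir,jr,kr->ijk', A, B, C))
        A = np.linalg.lstsq(khatri_rao(B, C), unfold(X, 0).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(X, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(X, 2).T, rcond=None)[0].T
    return np.where(mask, T, np.einsum('ir,jr,kr->ijk', A, B, C))
```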

2.
Daily MODIS land surface temperature (LST) is affected by weather, and valid pixel information is severely missing, which matters most for data-scarce regions. Taking the Gurbantunggut Desert as the study area, a method was explored for spatially downscaling land surface temperature using AMSR-2 vertically polarized brightness temperature and vegetation indices, and the method was used to fill the missing MODIS pixels of 2018. (1) Through ten-fold cross-validation, training models were analysed for four machine-learning algorithms (Cubist, DBN, SVM, RF), ten band combinations, and two spatial scales (5 km, 10 km); the RF algorithm was clearly more accurate than the other three, and the C09 band combination validated better than the other combinations. (2) Two robust random forest downscaling models (5 km|RF|09, 10 km|RF|09) were built to downscale AMSR-2 brightness temperature to 1 km resolution; the 5 km|RF|09 model gave more reasonable retrievals, with R2 of 0.971 and 0.930 against MODIS and station validation, RMSE of 3.38 K and 4.71 K, and MAE of 2.51 K and 3.84 K, respectively. (3) The downscaled results fill the missing MODIS LST pixels and were applied to long-time-series LST analysis of the Gurbantunggut Desert, providing a scientific reference for data acquisition in data-scarce regions.
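A minimal sketch of the assumed workflow, not the authors' code: fit a random forest at the coarse scale on AMSR-2 brightness temperature plus a vegetation index, report ten-fold cross-validation, then predict at the fine scale and keep MODIS LST wherever it is valid. Variable names and the predictor set are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def train_downscaler(bt_coarse, ndvi_coarse, lst_coarse):
    """Coarse-pixel samples as 1-D arrays: brightness temperature, NDVI, LST."""
    X = np.column_stack([bt_coarse, ndvi_coarse])
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    print("10-fold CV R^2:", cross_val_score(rf, X, lst_coarse, cv=10).mean())
    return rf.fit(X, lst_coarse)

def fill_modis_gaps(rf, bt_fine, ndvi_fine, lst_modis):
    """Predict fine-scale LST everywhere, keep the MODIS value where it is not NaN."""
    X = np.column_stack([bt_fine.ravel(), ndvi_fine.ravel()])
    lst_pred = rf.predict(X).reshape(lst_modis.shape)
    return np.where(np.isnan(lst_modis), lst_pred, lst_modis)
```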

3.
《软件》2019,(4):18-24
To overcome the problem that, when data are acquired with InSAR, images are missing for some months, so that long-time-series deformation analysis of a study area yields a non-equidistant deformation series that cannot systematically reflect the deformation trend, interpolation algorithms were implemented in MATLAB and several interpolation methods were tested on two sets of surface-deformation monitoring data with different missing time spans. The results show that for data with few missing months, cubic spline interpolation fits best; for data with many missing months or an uneven distribution, piecewise cubic interpolation fits best; linear interpolation and nearest-neighbour interpolation fit poorly and are not suitable for interpolating the missing data. These results provide methodological guidance for cases where missing data affect deformation time-series analysis and are of strong practical significance.
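A minimal sketch mirroring the comparison described in the abstract, written in Python/SciPy rather than MATLAB: interpolate a non-equidistant deformation series with linear, nearest-neighbour, cubic-spline, and piecewise-cubic interpolators and score them against withheld epochs. The function name and the RMSE scoring are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator, interp1d

def compare_interpolators(t_known, d_known, t_missing, d_truth=None):
    """t_known/d_known: observed epochs and deformation; t_missing: gaps to fill."""
    methods = {
        "linear": interp1d(t_known, d_known, kind="linear"),
        "nearest": interp1d(t_known, d_known, kind="nearest"),
        "cubic spline": CubicSpline(t_known, d_known),
        "piecewise cubic": PchipInterpolator(t_known, d_known),
    }
    filled = {name: f(t_missing) for name, f in methods.items()}
    if d_truth is not None:                          # RMSE against withheld truth
        for name, vals in filled.items():
            print(name, np.sqrt(np.mean((vals - d_truth) ** 2)))
    return filled
```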

4.
《微型机与应用》2019,(11):47-53
With the arrival of the big-data era, the application value of multivariate time series has attracted increasing attention; however, missing data severely hinder their further exploitation. To address this, a multivariate missing-data imputation algorithm based on an improved recurrent neural network is proposed; through a decay mechanism, the algorithm obtains more useful hidden information and thus imputes multivariate missing data better. First, the multivariate data are preprocessed to obtain the input vectors of the network. Then, a decay mechanism is introduced on top of the long short-term memory (LSTM) unit, and two improved imputation models are proposed; the improved models capture more and better hidden information over long time intervals and apply a corresponding decay to the inputs. To evaluate the algorithm, simulation experiments were conducted on the Shanghai air-quality dataset and on the Activity Recognition system based on Multisensor data fusion (AReM) dataset. The results show that, compared with other algorithms, the proposed algorithm imputes missing data of multivariate time series better.
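A minimal sketch of a decay mechanism in the spirit of GRU-D, not the paper's exact models: a gap is filled with a mixture of the last observed value and the feature mean, weighted by a decay factor that shrinks as the time since the last observation grows. The decay parameters are fixed scalars here for illustration; in practice they would be learned.

```python
import numpy as np

def decayed_inputs(x, mask, w_gamma=1.0, b_gamma=0.0):
    """x: (T, D) series with NaN at gaps; mask: (T, D) with 1 where observed."""
    T, D = x.shape
    x_mean = np.nanmean(x, axis=0)
    delta = np.zeros(D)                              # time since last observation
    last = np.where(mask[0] == 1, x[0], x_mean)
    x_hat = np.empty_like(x)
    for t in range(T):
        if t > 0:
            delta = np.where(mask[t - 1] == 1, 1.0, delta + 1.0)
        gamma = np.exp(-np.maximum(0.0, w_gamma * delta + b_gamma))
        x_hat[t] = np.where(mask[t] == 1, x[t],
                            gamma * last + (1 - gamma) * x_mean)
        last = np.where(mask[t] == 1, x[t], last)    # update last observed value
    return x_hat
```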

5.
Since the k nearest neighbours used by the k-nearest-neighbour imputation algorithm (kNNI) may contain noise, a new missing-value imputation algorithm, mutual k-nearest-neighbour imputation (MkNNI), is proposed. A record is used to fill a missing value only if it is among the k nearest neighbours of the incomplete record and the incomplete record is also among its k nearest neighbours, which effectively prevents the k nearest neighbours selected by kNNI from containing noise. Experimental results show that the imputation accuracy of MkNNI is generally better than that of kNNI.
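A minimal sketch of the mutual-neighbour idea for a single missing cell (illustrative, not the paper's code): a complete row may donate its value only if it is among the k nearest neighbours of the incomplete row and the incomplete row is also among its k nearest neighbours.

```python
import numpy as np

def mknn_impute(X, row, col, k=5):
    """Impute X[row, col] from complete rows using mutual k-nearest neighbours."""
    complete = [i for i in range(len(X)) if i != row and not np.isnan(X[i]).any()]
    feats = [c for c in range(X.shape[1]) if c != col and not np.isnan(X[row, c])]
    dist = lambda a, b: np.linalg.norm(X[a][feats] - X[b][feats])
    knn_of_row = sorted(complete, key=lambda i: dist(row, i))[:k]
    donors = []
    for i in knn_of_row:
        candidates = [j for j in complete if j != i] + [row]
        knn_of_i = sorted(candidates, key=lambda j: dist(i, j))[:k]
        if row in knn_of_i:                          # mutual-neighbour check
            donors.append(i)
    donors = donors or knn_of_row                    # fall back to plain kNNI
    return float(np.mean([X[i, col] for i in donors]))
```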

6.
To address the shortcomings of Euclidean-distance imputation and the excessively high missing-data ratio in microarray datasets, a nearest-neighbour algorithm is proposed that imputes the microarray in an ordered fashion using the Mahalanobis distance, making full use of all valid information in the dataset to fill the missing values. Experimental results on real gene-expression datasets show that the improved nearest-neighbour algorithm clearly outperforms existing algorithms.
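A minimal sketch (illustrative, not the authors' ordered-imputation scheme): rank candidate donor rows by Mahalanobis rather than Euclidean distance, using the inverse covariance estimated from the complete rows, and average the k nearest values.

```python
import numpy as np

def mahalanobis_knn_fill(X, row, col, k=5):
    """Impute X[row, col] from complete rows ranked by Mahalanobis distance."""
    complete = np.array([i for i in range(len(X))
                         if i != row and not np.isnan(X[i]).any()])
    feats = [c for c in range(X.shape[1]) if c != col and not np.isnan(X[row, c])]
    Z = X[complete][:, feats]
    VI = np.linalg.pinv(np.cov(Z, rowvar=False))     # inverse covariance estimate
    diff = Z - X[row, feats]
    d2 = np.einsum('ij,jk,ik->i', diff, VI, diff)    # squared Mahalanobis distances
    nearest = complete[np.argsort(d2)[:k]]
    return X[nearest, col].mean()
```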

7.
Outlier mining is an important topic in data mining, and incomplete data presents a double difficulty for it. First, the EM and MI algorithms for filling missing data are extended to the mixed-missingness case, and an RE algorithm is proposed based on Weisberg's theory of imputing incomplete data. Then, by combining cluster analysis with the forward search algorithm, an algorithm superior to plain forward search is obtained. Finally, outlier mining on incomplete data is investigated on the basis of these imputation algorithms. Both the theory and worked examples show that the outlier-mining algorithm for incomplete data is effective and feasible.
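A minimal sketch of standard EM imputation under a multivariate normal model (illustrative background for the abstract; it is not the RE algorithm): alternate between re-estimating the mean and covariance and replacing each gap with its conditional mean given the observed entries of that row.

```python
import numpy as np

def em_impute(X, n_iter=30):
    """X: (n, d) float array with NaN marking missing values."""
    X = X.copy()
    miss = np.isnan(X)
    X[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])  # initial fill
    for _ in range(n_iter):
        mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
        for i in np.where(miss.any(axis=1))[0]:
            m, o = miss[i], ~miss[i]
            # conditional mean of the missing block given the observed block
            coef = cov[np.ix_(m, o)] @ np.linalg.pinv(cov[np.ix_(o, o)])
            X[i, m] = mu[m] + coef @ (X[i, o] - mu[o])
    return X
```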

8.
A cubic-spline interpolation algorithm based on particle swarm optimization is proposed, and the implementation steps and basic workflow of applying it to interpolate missing mine-pressure data are described in detail. The algorithm retains the good piecewise smoothness of cubic-spline interpolation while inheriting the few parameters and ease of implementation of particle swarm optimization. Application examples interpolating missing mine-pressure data from different working faces at the same site and at different sites show that the algorithm is effective for interpolating missing mine-pressure data, and comparisons with several commonly used missing-data interpolation methods show that it is more accurate and effective.

9.
Traditional methods for repairing missing values in time series usually assume that the data are generated by a linear dynamical system, whereas time series mostly behave nonlinearly. A time-series repair model based on a long short-term memory (LSTM) network with residual connections, called RSI-LSTM, is therefore proposed to effectively capture the nonlinear dynamics of the series and to mine the latent relation between the missing data and the most recent non-missing data. Specifically, an LSTM network models the nonlinear dynamics of the series, and residual connections are introduced to mine the relation between historical values and missing values, improving the repair capability of the model. RSI-LSTM is first used to repair the missing data of a univariate daily power-supply dataset; then, on the power-load dataset of problem A of the 9th Electrical Engineering Mathematical Modelling Contest, meteorological factors are introduced as multivariate inputs to RSI-LSTM to improve the repair of missing values. Two general multivariate time-series datasets are also used to verify the repair capability of the model. Experimental results show that, on both the univariate and the multivariate datasets, RSI-LSTM repairs missing values better than LSTM, with the mean squared error (MSE) reduced by 10% overall.
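A minimal sketch of an architecture in the spirit of the abstract (the name RSI-LSTM is reused loosely here; the layer sizes and the exact residual wiring are assumptions): an LSTM reads the history window and its output is added as a correction to the last observed value.

```python
import torch
import torch.nn as nn

class ResidualLSTMImputer(nn.Module):
    """LSTM regressor with a residual connection from the last observed value."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x_hist, last_observed):
        """x_hist: (B, T, D) history window; last_observed: (B, D)."""
        out, _ = self.lstm(x_hist)
        delta = self.head(out[:, -1])                # predicted correction
        return last_observed + delta                 # residual connection
```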

10.
With the development of personalized recommendation, recommender systems face more and more challenges, and traditional recommendation algorithms usually suffer from data sparsity and low accuracy. To address these problems, a recommendation algorithm that combines time-aware latent-factor filling with subgroup partitioning, K-TLFM (Time Based Latent Factor Model Integrated with k-means), is proposed. The algorithm fills the missing entries of the original user-item rating matrix with a latent factor model that incorporates a time factor, avoiding the error introduced by completing the matrix with the global mean or user/item means and effectively alleviating data sparsity, while the time factor captures how user preference drifts over time. After the missing ratings are filled, a bisecting k-means clustering algorithm partitions objects with similar preferences and interests into the same subgroup, and the recommendation list for a target user is generated with the chosen collaborative-filtering algorithm within the user's subgroup, improving recommendation efficiency and accuracy. Comparative experiments on recommendation performance on the MovieLens and Netflix datasets show that the algorithm achieves higher recommendation accuracy.
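A minimal sketch of a time-aware latent factor model (plain SGD matrix factorization with an item bias per time bin, in the spirit of timeSVD-style models; it is not the paper's exact K-TLFM, and the subgroup clustering stage is omitted).

```python
import numpy as np

def time_mf(ratings, n_users, n_items, n_bins, rank=20, lr=0.01, reg=0.05, epochs=20):
    """ratings: list of (user, item, time_bin, rating) tuples."""
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, rank))
    Q = rng.normal(scale=0.1, size=(n_items, rank))
    b_it = np.zeros((n_items, n_bins))               # item bias drifting over time
    mu = np.mean([r for *_, r in ratings])
    for _ in range(epochs):
        for u, i, t, r in ratings:
            err = r - (mu + b_it[i, t] + P[u] @ Q[i])
            b_it[i, t] += lr * (err - reg * b_it[i, t])
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return mu, b_it, P, Q
```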

11.
Methods to predict and fill Landsat 7 Scan Line Corrector (SLC)-off data gaps are diverse and their usability is case specific. An appropriate gap-filling method that can be used for seagrass mapping applications has not been proposed previously. This study compared gap-filling methods for filling SLC-off data gaps with images acquired from different dates at similar mean sea-level tide heights, covering the Sungai Pulai estuary area inhabited by seagrass meadows in southern Peninsular Malaysia. To assess the geometric and radiometric fidelity of the recovered pixels, three potential gap-filling methods were examined: (a) geostatistical neighbourhood similar pixel interpolator (GNSPI); (b) weighted linear regression (WLR) algorithm integrated with the Laplacian prior regularization method; and (c) the local linear histogram matching method. These three methods were applied to simulated and original SLC-off images. Statistical measures for the recovered images showed that GNSPI can predict data gaps over the seagrass, non-seagrass/water body, and mudflat site classes with greater accuracy than the other two methods. For optimal performance of the GNSPI algorithm, cloud and shadow in the primary and auxiliary images had to be removed by cloud removal methods prior to filling data gaps. The gap-filled imagery assessed in this study produced reliable seagrass distribution maps and should help with the detection of spatiotemporal changes of seagrasses from multi-temporal Landsat imagery. The proposed gap-filling method can thus improve the usefulness of Landsat 7 ETM+ SLC-off images in seagrass applications.

12.
Objective: Depth images are an important kind of visual perception data, and their quality is crucial to 3D vision systems. Depth images acquired by traditional methods are mostly restricted to particular scenes and are easily affected by noise and the environment, so part of the depth information is missing; repairing depth images therefore remains a problem worth studying and yet to be solved. To this end, this paper proposes a dual-scale sequential filling framework for depth-image inpainting. Method: First, a filling-priority estimation algorithm based on a fast approximation of conditional entropy is proposed. Second, maximum-likelihood estimation is used to obtain the optimal prediction of the missing depth values. Finally, the repair results are integrated at the pixel and superpixel scales, accurately filling the holes in the depth image. Result: Compared with seven methods on the mainstream Middlebury (MB) dataset, the proposed method achieves an average peak signal-to-noise ratio (PSNR) of 47.955 dB and an average structural similarity index (SSIM) of 0.9982; on the manually filled MB+ dataset, its average PSNR is 34.697 dB and its average SSIM is 0.9785, a clear advantage in depth repair over the other algorithms. The method also performs well in runtime comparisons, with high efficiency. In the ablation study, the proposed priority estimation, depth-value prediction, and dual-scale refinement are evaluated separately, verifying the effectiveness of each contribution. Conclusion: Experimental results show that the method has clear advantages over existing methods in robustness, accuracy, and efficiency.

13.
Traditional reversible data hiding algorithms based on prediction-error histogram shifting mostly scan the original image in a fixed order to embed data. This ignores the texture of the image itself, produces many pixels that are shifted without carrying data, and degrades the visual quality of the stego image. To solve this, a four-round embedding reversible data hiding algorithm based on median prediction is proposed to increase the embedding capacity while reducing the distortion of the stego image. Exploiting the strong correlation between adjacent pixels, a large number of pixels are concentrated at small error values, yielding a sharper prediction-error histogram and a higher embedding capacity. A complexity measure is defined for each pixel and the prediction errors are sorted by complexity, so that data are embedded preferentially in smooth regions of the image, which effectively reduces the number of uselessly shifted pixels and lowers the distortion of the stego image. Experimental results show that the maximum embedding rate of the algorithm reaches 0.3 bpp and that the peak signal-to-noise ratio at an embedding rate of 0.1 bpp reaches 55.15 dB; compared with the asymmetric-histogram and error-histogram-shifting algorithms, it offers higher embedding capacity and lower visual distortion.
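A minimal single-pass illustration of prediction-error histogram shifting with a median-of-neighbours predictor (the paper's four-round, complexity-sorted scheme is more elaborate, and overflow and location-map handling are omitted here).

```python
import numpy as np

def embed_bits(img, bits, peak=0):
    """Embed bits where the prediction error equals `peak`; shift larger errors."""
    img = img.astype(np.int32).copy()
    k = 0
    for y in range(1, img.shape[0]):
        for x in range(1, img.shape[1]):
            pred = int(np.median([img[y - 1, x], img[y, x - 1], img[y - 1, x - 1]]))
            err = int(img[y, x]) - pred
            if err == peak and k < len(bits):        # embeddable pixel
                img[y, x] += bits[k]
                k += 1
            elif err > peak:                         # shifted, carries no payload
                img[y, x] += 1
    return img, k                                    # stego image, bits embedded
```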

14.
An image segmentation method using cloud model and data field
To address the selection of the optimal threshold in automatic image segmentation, an image segmentation method based on the cloud model and data fields is proposed. The method introduces a data field to realise a nonlinear mapping from the grey-level feature space of the image to the potential-value space of the data field, and sets two different mass functions to form a relative data field and an absolute data field. Exploiting the characteristics of the two data fields and combining global and local statistics, an adaptive potential threshold partitions the image pixels into candidate background and object pixel sets. A backward cloud generator then produces cloud-model representations of the image background and the object, and according to the degree to which each pixel belongs to the background or the object cloud model, the final segmentation is obtained with the maximum-membership decision rule. Experimental results show that the method segments well, performs stably, and is reasonable and effective.

15.
ABSTRACT

Moderate Resolution Imaging Spectroradiometer (MODIS) has been employed for continuous monitoring of land surface dynamics to facilitate the examination of spatial aspects of the environment. Periodical generation of MODIS products enables temporal analysis, and the interpretation of temporal patterns requires information about image quality. The MODIS Scientific Data Set (SDS) provides information on image properties. Some research has utilized the SDS to assist in analysis and interpretation, particularly in supporting time series forecasting and estimating ‘invalid’ data from near-dates observation. Our research compares the usability and reliability information provided in the MODIS SDS for collections 5 and 6 to describe the spatio-temporal distribution of image quality. This research compared the ability of the MODIS collections to identify the extent of water and to differentiate forest from non-forest. Four sites representing tropical and temperate regions were selected in Brazil, Congo, Colorado (United States of America), and the European Alps. The robustness of usability and reliability information for assessing MODIS vegetation collections 5 and 6 was compared over these sites by using 16-day composite products over a year of observations (2015). The spatio-temporal distribution of invalid pixels and gaps derived from usability and reliability information was assessed by using TiSeG (Time series Generator) and GeoDa. Moran’s I indicated that a large number of invalid pixels and temporal gaps were clustered in a few areas. Collection 6 appears more consistent in the identification of waterbodies, either for inland water or ocean, but the error detection of ice fractions in two tropical sites tends to increase. Masking data by using Quality Assurance (QA)-SDS information improved the separability between forest and non-forest. This research demonstrated that evaluating the quality of image products using the SDS assisted the selection of period and location to better differentiate forest and non-forest. The seasonal fluctuation of separability metrics showed the importance of exploring temporal pattern for better understanding of the dynamics of forest cover.

16.
The analysis of airborne hyperspectral data is often affected by brightness gradients that are caused by directional surface reflectance. For line scanners these gradients occur in across-track direction and depend on the sensor's view-angle. They are greatest whenever the flight path is perpendicular to the sun-target-observer plane. A common way to correct these gradients is to normalize the reflectance factors to nadir view. This is especially complicated for data from spatially and spectrally heterogeneous urban areas and requires surface type specific models. This paper presents a class-wise empirical approach that is adapted to meet the needs of such images. Within this class-wise approach, empirical models are fit to the brightness gradients of spectrally pure pixels from classes after a spectral angle mapping (SAM). Compensation factors resulting from these models are then assigned to all pixels of the image, both in a discrete manner according to the SAM and in a weighted manner based on information from the SAM rule images. The latter scheme is designed in consideration of the great number of mixed pixels. The method is tested on data from the Hyperspectral Mapper (HyMap) that was acquired over Berlin, Germany. It proves superior to a common global approach based on a thorough assessment using a second HyMap image as reference. The weighted assignment of compensation factors is adequate for the correction of areas that are characterized by mixed pixels. No remainder of the original brightness gradient can be found in the corrected image, which can then be used for any subsequent qualitative and quantitative analyses. Thus, the proposed method enables the comparison and composition of airborne data sets with similar recording conditions and does not require additional field or laboratory measurements.

17.
Image registration is a fundamental procedure in image processing that aligns two or more images of the same scene taken from different times, different viewpoints, or even different sensors. It is reasonable to orientate two images by matching corresponding pixels or regions that are considered identical. Based on this concept, this paper proposes a novel image registration method that applies information theory to intensity difference data. An entropy-based objective function is then developed according to the histogram of the intensity difference. The intensity difference represents the absolute gray-level difference of the corresponding pixels between the reference and sensed images over the overlapped region. The proposed registration method is to align the sensed image onto the reference image by minimizing the entropy of the intensity difference through iteratively updating the parameters of the similarity transformation. For performance evaluation, the proposed method is compared with two existing registration methods in terms of eight test image sets. The experiment is divided into two scenarios. One is to investigate the sensitivity (i.e., robustness) of the objective functions in these three different methods; the other is to verify the effectiveness of the proposed method. Through the experimental results, the proposed method is shown to be very effective in image registration and outperforms the other two methods over the test image sets.
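A minimal sketch of the objective described in the abstract, under the assumption of 8-bit imagery: the Shannon entropy of the histogram of absolute grey-level differences over the overlap region, which a registration loop would minimise over the similarity-transform parameters.

```python
import numpy as np

def difference_entropy(ref, warped_sensed, n_bins=256):
    """Entropy (bits) of the absolute intensity difference between two images."""
    diff = np.abs(ref.astype(float) - warped_sensed.astype(float))
    hist, _ = np.histogram(diff, bins=n_bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                                     # drop empty bins
    return -np.sum(p * np.log2(p))
```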

18.
In this paper, we shall propose a new image steganographic technique capable of producing a secret-embedded image that is totally indistinguishable from the original image by the human eye. In addition, our new method avoids the falling-off-boundary problem by using pixel-value differencing and the modulus function. First, we derive a difference value from two consecutive pixels by utilizing the pixel-value differencing technique (PVD). The hiding capacity of the two consecutive pixels depends on the difference value. In other words, the smoother an area is, the less secret data can be hidden; on the contrary, the more edges an area has, the more secret data can be embedded. This way, the stego-image quality degradation is more imperceptible to the human eye. Second, the remainder of the two consecutive pixels can be computed by using the modulus operation, and then secret data can be embedded into the two pixels by modifying their remainder. In our scheme, there is an optimal approach to alter the remainder so as to greatly reduce the image distortion caused by the hiding of the secret data. The values of the two consecutive pixels are scarcely changed after the embedding of the secret message by the proposed optimal alteration algorithm. Experimental results have also demonstrated that the proposed scheme is secure against the RS detection attack.
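A minimal sketch of the pixel-value-differencing part only, using the commonly cited range table (the modulus-based embedding and the optimal remainder alteration are not reproduced here): the difference of a consecutive pixel pair selects a range whose width determines how many secret bits the pair can carry, so edge areas hold more data than smooth ones.

```python
# Classic PVD range table: widths 8, 8, 16, 32, 64, 128 grey levels.
RANGES = ((0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255))

def pvd_capacity(p1, p2, ranges=RANGES):
    """Number of secret bits a consecutive pixel pair (p1, p2) can hold."""
    d = abs(p1 - p2)
    for lo, hi in ranges:
        if lo <= d <= hi:
            return (hi - lo + 1).bit_length() - 1    # log2 of the range width
    return 0
```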

19.
To address the problems of poor image restoration during image encryption and decryption and a low data-embedding rate, which lead to poor image quality and low security during transmission, a reversible data hiding method for images based on an XOR-scrambling framework is proposed. The bitwise-XOR-and-scrambling scheme between adjacent pixels is analysed, and the original image is encrypted by bitwise XOR and pixel-position scrambling to obtain the initial encrypted image. Some pixels are selected according to the characteristics of the hiding key, the hidden data are embedded into the selected pixels by substitution, and the hidden data are later extracted with the encryption key. Finally, the encrypted image is decrypted by neighbourhood prediction, and the pixel fluctuation is used to decide whether the most significant bit of each neighbourhood block has been changed, restoring the original image. Simulation results show that the method restores images well and achieves a high data-embedding rate, effectively protecting the security of image transmission while preserving image quality, and has practical value.
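A minimal sketch of the XOR-plus-scrambling encryption step described in the abstract (the data embedding, extraction, and neighbourhood-prediction recovery stages are omitted); the key handling is illustrative and assumes 8-bit images.

```python
import numpy as np

def xor_scramble_encrypt(img, seed=12345):
    """img: uint8 array. XOR with a key stream, then permute pixel positions."""
    rng = np.random.default_rng(seed)
    key = rng.integers(0, 256, size=img.shape, dtype=np.uint8)
    flat = (img ^ key).ravel()
    perm = rng.permutation(flat.size)                # position scrambling
    return flat[perm].reshape(img.shape), key, perm

def xor_scramble_decrypt(enc, key, perm):
    flat = np.empty(enc.size, dtype=np.uint8)
    flat[perm] = enc.ravel()                         # undo the permutation
    return flat.reshape(enc.shape) ^ key
```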

20.
A study of the estimation of partial cloud cover within a pixel has been conducted in order to be able to use pixels partially contaminated with cloud in sea surface temperature determination.

The existing estimation methods, based on the least squares method with constraints of minimizing the mixing ratio and observation vector, are theoretically compared, and then an adaptive least squares method is proposed. In a comparative study the estimation accuracies for the proposed and other existing methods, including the maximum likelihood method, are compared with simulated and real satellite image data of NOAA AVHRR and MOS-1 VTIR. The results with the simulation data show that the maximum likelihood method is best, followed by the adaptive least squares method, the least squares method and the observation vector, while the results with the real VTIR data show that the proposed adaptive least squares method is best, followed by the least squares method, the maximum likelihood method and the observation vector, but there are no significant differences between all these methods.
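A minimal sketch of the linear-mixture idea behind the least-squares estimators compared in the abstract (not any of the specific constrained formulations): treat the observed radiances as a mixture of a clear-sky and a cloudy end-member and solve for non-negative mixing ratios.

```python
import numpy as np
from scipy.optimize import nnls

def cloud_fraction(obs, clear_endmember, cloud_endmember):
    """obs and end-members: radiance vectors over the sensor channels."""
    E = np.column_stack([clear_endmember, cloud_endmember])
    coeffs, _ = nnls(E, obs)                         # non-negative mixing ratios
    total = coeffs.sum()                             # normalise instead of forcing sum-to-one
    return coeffs[1] / total if total > 0 else 0.0
```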
