Similar Documents
1.
This paper develops a statistical regression method to estimate the instantaneous Downwelling Surface Longwave Radiation (DSLR) for cloud-free skies using only the satellite-based radiances measured at the Top Of the Atmosphere (TOA), and subsequently combines the DSLR with the MODIS land surface temperature/emissivity products (MOD11_L2) to estimate the instantaneous Net Surface Longwave Radiation (NSLR). The proposed method relates the DSLR directly to the TOA radiances in the MODIS Thermal InfraRed (TIR) channels, provided that the terrain altitude and the satellite Viewing Zenith Angle (VZA) are known. The simulation analysis shows that the instantaneous DSLR can be estimated by the proposed method with a Root Mean Square Error (RMSE) of 12.4 W/m2 for VZA = 0 and terrain altitude z = 0 km. Similar results are obtained for other VZAs and altitudes. Accounting for MODIS instrumental errors of 0.25 K for the TOA brightness temperatures in channels 28, 33 and 34, 0.05 K for channels 29 and 31, and 0.35 K for channel 36, the overall RMSE for the instantaneous DSLR increases to 13.1 W/m2. Moreover, the MODIS-derived DSLR and NSLR are compared with field measurements made at six sites of the Surface Radiation Budget Network (SURFRAD) in the United States, for days in 2006 with cloud-free conditions at the moment of MODIS overpass. The results show that the bias, RMSE and squared correlation coefficient (R2) between the MODIS-derived DSLR and the field-measured DSLR are 20.3 W/m2, 30.1 W/m2 and 0.91 respectively, and bias = 11.7 W/m2, RMSE = 26.1 W/m2 and R2 = 0.94 for NSLR. In addition, the scheme proposed by Bisht et al. [Bisht, G., Venturini, V., Islam, S., & Jiang, L. (2005). Estimation of the net radiation using MODIS (Moderate Resolution Imaging Spectroradiometer) data for clear-sky days. Remote Sensing of Environment, 97, 52-67], which requires the MODIS atmospheric profile product (MOD07) as well as the MODIS land surface temperature/emissivity products (MOD11_L2) as inputs, is used to estimate the instantaneous DSLR and NSLR for comparison with both the field measurements and the MODIS-derived DSLR and NSLR from our proposed method. The comparisons show that, at least for our cases, our method for estimating DSLR from the MODIS radiances at the TOA, and the resultant NSLR, gives results comparable to those estimated with Bisht et al.'s scheme.
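The regression step described in this abstract amounts to fitting a linear map from TOA brightness temperatures to DSLR. As a rough illustration only (the channel count, coefficients and data below are entirely synthetic, not the paper's actual regression), an ordinary-least-squares fit can be sketched as:

```python
import random

def fit_linear(X, y):
    """Ordinary least squares via the normal equations.
    X must include an intercept column; solved by Gauss-Jordan elimination."""
    n, m = len(X), len(X[0])
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(m)]
           for i in range(m)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(m):
            if r != col:
                f = xtx[r][col] / xtx[col][col]
                xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[col])]
                xty[r] -= f * xty[col]
    return [xty[i] / xtx[i][i] for i in range(m)]

random.seed(0)
# Hypothetical coefficients: intercept plus weights for 3 TIR channels.
true_coefs = [50.0, 0.8, 0.5, 0.3]
# Synthetic brightness temperatures around 270 K, plus noise on DSLR.
X = [[1.0] + [270 + random.uniform(-10, 10) for _ in range(3)]
     for _ in range(200)]
y = [sum(c * x for c, x in zip(true_coefs, row)) + random.gauss(0, 1)
     for row in X]
coefs = fit_linear(X, y)
```

With 200 noisy samples the channel weights are recovered to within a few percent, which is the spirit of calibrating such a regression against radiative-transfer simulations.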

2.
In this study, we propose a novel local outlier detection approach, called LOMA, for mining local outliers in high-dimensional data sets. To improve the efficiency of outlier detection, LOMA prunes irrelevant attributes and objects in the data set by analyzing attribute relevance against a sparse factor threshold. This pruning technique substantially reduces the size of the data set. The core of LOMA is a sparse-subspace search, which applies particle swarm optimization to the reduced data set. During this search, we introduce a sparse coefficient threshold to quantify how sparse the data objects in a subspace are; objects in sufficiently sparse subspaces are considered local outliers. The attribute relevance analysis also guides experts and users in identifying attributes that are useless for detecting outliers. In addition, our sparse-subspace-based outlier algorithm is a novel technique for local-outlier detection in a wide variety of applications. Experimental results on both synthetic and UCI data sets validate the effectiveness and accuracy of LOMA. In particular, LOMA achieves high mining efficiency and accuracy when the sparse factor threshold is set to a small value.

3.
This paper presents the development of a 2D high-order solver with the spectral difference method for unsteady incompressible Navier-Stokes equations, accelerated by a p-multigrid method. The solver is designed for unstructured quadrilateral elements. Time-marching methods cannot be applied directly to incompressible flows because the governing equations are not hyperbolic. An artificial compressibility method (ACM) is therefore employed so that the inviscid fluxes can be treated with traditional characteristics-based schemes. The viscous fluxes are computed using the averaging approach (Sun et al., 2007; Kopriva, 1998) [29] and [12]. A dual time stepping scheme handles physical time marching, and a p-multigrid method (Liang et al., 2009) [16] is implemented in conjunction with it for convergence acceleration. The incompressible SD method combined with the ACM (SD-ACM) is able to accurately simulate 2D steady and unsteady viscous flows.

4.
Anomaly detection is considered an important data mining task, aiming at the discovery of elements (known as outliers) that show significant deviation from the expected behavior. More specifically, given a set of objects, the problem is to return the suspicious objects that deviate significantly from typical behavior. As in the case of clustering, applying different criteria leads to different definitions of an outlier. In this work, we focus on distance-based outliers: an object x is an outlier if there are fewer than k objects lying at distance at most R from x. The problem offers significant challenges in a stream-based environment, where data arrive continuously and outliers must be detected on-the-fly. A few research works study the problem of continuous outlier detection, but none of them meets the requirements of modern stream-based applications, for the following reasons: (i) they demand a significant storage overhead, (ii) their efficiency is limited and (iii) they lack flexibility, in the sense that they assume a single configuration of the k and R parameters. In this work, we propose new algorithms for continuous outlier monitoring in data streams, based on sliding windows. Our techniques reduce the required storage overhead, are more efficient than previously proposed techniques and offer significant flexibility with regard to the input parameters. Experiments performed on real-life and synthetic data sets verify our theoretical study.
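The distance-based definition quoted above is directly implementable. A minimal sketch, using the naive O(n²) check over a static batch rather than the paper's sliding-window algorithms:

```python
def distance_outliers(points, k, R):
    """Flag each point with fewer than k other points within distance R
    (the distance-based outlier definition from the abstract).
    Naive quadratic batch check on 1-D data."""
    outliers = []
    for i, p in enumerate(points):
        neighbors = sum(1 for j, q in enumerate(points)
                        if j != i and abs(p - q) <= R)
        if neighbors < k:
            outliers.append(p)
    return outliers

data = [1.0, 1.1, 0.9, 1.2, 1.05, 10.0]     # 10.0 lies far from the cluster
print(distance_outliers(data, k=2, R=0.5))  # -> [10.0]
```

The streaming versions discussed in the abstract must maintain these neighbor counts incrementally as the window slides, which is where the storage and efficiency trade-offs arise.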

5.
When scanning an object using a 3D laser scanner, the collected point cloud is usually contaminated by numerous measurement outliers. These outliers can be sparse outliers, or isolated or non-isolated outlier clusters. Non-isolated outlier clusters pose a great challenge to automatic outlier detection, since such outliers are attached to scanned data points from the object surface and are difficult to distinguish from valid surface measurement points. This paper presents an effective outlier detection method based on the principle of majority voting. The method is able to detect non-isolated outlier clusters as well as the other types of outliers in a scanned point cloud. The key component is a majority voting scheme that cuts the connection between non-isolated outlier clusters and the scanned surface so that non-isolated outliers become isolated. An expandable boundary criterion is also proposed to remove isolated outliers and preserve valid point clusters more reliably than a simple cluster size threshold. The effectiveness of the proposed method has been validated by comparison with several existing methods on a variety of scanned point clouds.

6.
A nonrigid registration method is proposed to automatically align two images by registering two sets of sparse features extracted from them. Motivated by the paradigm of Robust Point Matching (RPM) algorithms [1] and [2], originally proposed for shape registration, we develop the Robust Hybrid Image Matching (RHIM) algorithm, which alternately optimizes feature correspondence and spatial transformation for image registration. RHIM is built to be robust to feature extraction errors. A novel dynamic outlier rejection approach is described for removing outliers, and a local refinement technique corrects inexactly matched correspondences arising from image noise and deformations. Experimental results demonstrate the robustness and accuracy of our method.

7.
Mining class outliers: concepts, algorithms and applications in CRM
Outliers, commonly referred to as exceptional cases, exist in many real-world databases. Detecting such outliers is important for many applications and has recently attracted much attention from the data mining research community. However, most existing methods are designed for mining outliers from a single dataset without considering the class labels of data objects. In this paper, we consider the class outlier detection problem: given a set of observations with class labels, find those that arouse suspicion, taking the class labels into account. By generalizing two pioneering contributions [Proc WAIM02 (2002); Proc SSTD03] in this field, we develop the notion of a class outlier and propose practical solutions by extending existing outlier detection algorithms to this case. Furthermore, potential applications in CRM (customer relationship management) are discussed. Finally, experiments on real datasets show that our method can find interesting outliers and is of practical use.

8.
Currently, there is a renewed interest in the use of optimal experimentation (adaptive control) in economics. Examples are found in [Amman and Kendrick, 1999], [Amman and Kendrick, 2003], [Cosimano, in press], [Cosimano and Gapen, 2005b], [Cosimano and Gapen, 2005a], [Cosimano and Gapen, 2006], [Tesfaselassie et al., 2007], [Tucci, 1997], [Wieland, 2000a] and [Wieland, 2000b]. In this paper we present the Beck & Wieland model [Beck, G., & Wieland, V. (2002). Learning and control in a changing economic environment. Journal of Economic Dynamics and Control, 26, 1359-1378] and the methodology to solve this model with time-varying parameters using the various control methods described in [Kendrick, 1981] and [Kendrick, 2002]. Furthermore, we provide numerical results using the DualPC software [Amman, H. M., & Kendrick, D. A. (1999). The DualI/DualPC software for optimal control models: User's guide. Working paper, Austin, TX 78712, USA: Center for Applied Research in Economics, University of Texas] and show first evidence that optimal experimentation, or dual control, may produce better results than expected optimal feedback.

9.
Traditional outlier detection algorithms struggle to learn the distribution patterns of outliers in extremely class-imbalanced, high-dimensional data sets, which results in low detection rates. To address this problem, a GAN-VAE algorithm combining a generative adversarial network (GAN) with a variational auto-encoder (VAE) is proposed. The algorithm first feeds the outliers into the VAE for training, learning their distribution pattern; it then trains the VAE jointly with the GAN to generate more potential outliers while learning the classification boundary between normal points and outliers; finally, the test data are fed into the trained GAN-VAE, an outlier score is computed for each object from the difference in relative density between normal points and outliers, and objects with high scores are judged to be outliers. Comparative experiments against six outlier detection algorithms on four real data sets show that GAN-VAE improves AUC, accuracy and F1-score by 5.64%, 5.99% and 13.30% on average, demonstrating that the GAN-VAE algorithm is effective and feasible.

10.
The authors propose a recursive protocol for group-oriented authentication with key exchange, in which a group of n entities can authenticate with each other and share a group session key. The proposed protocol has the following characteristics. First, it requires O(n) rounds of messages, O(log n) completion time, O(log n) waiting time, and O(n log n) communication overhead on average for the completion of the recursion. Second, it not only meets the five principles suggested by Diffie et al. [Diffie, W., van Oorschot, P.C., Wiener, M.J., 1992. Authentication and authenticated key exchange. Designs, Codes, and Cryptography 2 (2), 107-125] on the design of a secure key exchange protocol, but also achieves the properties of nondisclosure, independency, and integrity addressed by Janson and Tsudik [Janson, P., Tsudik, G., 1995. Secure and minimal protocols for authenticated key distribution. Computer Communications 18 (9), 645-653] for the authentication of the group session key. Third, we use BAN logic to describe the beliefs of the trustworthy entities involved in our authentication protocol and the evolution of these beliefs as a consequence of communication. Finally, the protocol is practical and efficient, because only one-way hash functions and exclusive-or (XOR) operations are used in its implementation.
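The efficiency claim rests on the cheapness of hash and XOR primitives. A toy sketch of those two building blocks only (this is NOT the paper's protocol; the nonce naming, the choice of SHA-256, and the XOR-combination of hashed contributions are all illustrative assumptions):

```python
import hashlib

def h(data: bytes) -> bytes:
    """One-way hash; SHA-256 is a stand-in (the paper does not fix a hash)."""
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise exclusive-or of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Toy combination of the two primitives: each of n entities contributes a
# hashed nonce, and the XOR of all contributions yields a value that every
# member's input affects. Order does not matter since XOR is commutative.
contributions = [h(f"nonce-{i}".encode()) for i in range(4)]
session_key = contributions[0]
for c in contributions[1:]:
    session_key = xor(session_key, c)
print(len(session_key))  # -> 32 (SHA-256 digest length)
```

A real group key exchange additionally needs the authentication rounds and freshness guarantees the abstract describes; the sketch shows only why the per-operation cost is low.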

11.
Uncertain data are common due to the increasing usage of sensors, radio frequency identification (RFID), GPS and similar devices for data collection. The causes of uncertainty include limitations of measurements, inclusion of noise, inconsistent supply voltage and delay or loss of data in transfer. In order to manage, query or mine such data, data uncertainty needs to be considered. Hence, this paper studies the problem of top-k distance-based outlier detection from uncertain data objects. In this work, an uncertain object is modelled by a probability density function of a Gaussian distribution. The naive approach of distance-based outlier detection makes use of a nested loop. This approach is very costly due to the expensive distance function between two uncertain objects. Therefore, a populated-cells list (PC-list) approach of outlier detection is proposed. Using the PC-list, the proposed top-k outlier detection algorithm needs to consider only a fraction of the dataset objects and hence quickly identifies candidate objects for top-k outliers. Two approximate top-k outlier detection algorithms are presented to further increase the efficiency of the top-k outlier detection algorithm. An extensive empirical study on synthetic and real datasets is also presented to prove the accuracy, efficiency and scalability of the proposed algorithms.
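The naive nested-loop ranking mentioned in this abstract can be sketched for ordinary, non-uncertain points; scoring each point by the distance to its m-th nearest neighbor is one common distance-based criterion (the paper's uncertain setting replaces this point-to-point distance with an expensive one between Gaussian densities, which is exactly what makes the nested loop costly):

```python
def top_k_outliers(points, k, m=2):
    """Rank 1-D points by the distance to their m-th nearest neighbor and
    return the k highest-ranked. Naive nested-loop scoring: a deterministic
    stand-in for the paper's uncertain-object setting."""
    def score(i):
        dists = sorted(abs(points[i] - points[j])
                       for j in range(len(points)) if j != i)
        return dists[m - 1]
    ranked = sorted(range(len(points)), key=score, reverse=True)
    return [points[i] for i in ranked[:k]]

data = [1.0, 1.2, 0.8, 1.1, 9.0, -7.0]   # two points far from the cluster
print(top_k_outliers(data, k=2))  # the two isolated points rank highest
```

Pruning schemes like the PC-list aim to avoid computing this score for the many objects that clearly cannot enter the top-k.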

12.

Data points situated near a cluster boundary are called boundary points, and they can represent useful information about the process generating the data. The existing methods of boundary point detection cannot differentiate boundary points from outliers, as they are affected by the presence of outliers as well as by the size and density of clusters in the dataset. They also require tuning of one or more parameters and prior knowledge of the number of outliers in the dataset for tuning. In this research, a boundary point detection method called BPF is proposed which can effectively differentiate boundary points from outliers and core points. BPF combines the well-known outlier detection method Local Outlier Factor (LOF) with a Gravity value to calculate the BPF score. Our proposed algorithm StaticBPF can detect the top-m boundary points in a given dataset. Importantly, StaticBPF requires tuning of only one parameter, the number of nearest neighbors (k), and can employ the same k used by LOF for outlier detection. This paper also extends BPF to streaming data and proposes StreamBPF. StreamBPF employs a grid structure for improving k-nearest neighbor computation and an incremental method of calculating BPF scores of a subset of data points in a sliding window over data streams. In evaluation, the accuracy of StaticBPF and the runtime efficiency of StreamBPF were evaluated on synthetic and real data, where they generally performed better than their competitors.


13.
A fuzzy index for detecting spatiotemporal outliers
The detection of spatial outliers helps extract important and valuable information from large spatial datasets. Most of the existing work in outlier detection views the condition of being an outlier as a binary property. However, for many scenarios, it is more meaningful to assign a degree of being an outlier to each object. The temporal dimension should also be taken into consideration. In this paper, we formally introduce a new notion of spatial outliers. We discuss the spatiotemporal outlier detection problem, and we design a methodology to discover these outliers effectively. We introduce a new index called the fuzzy outlier index, FoI, which expresses the degree to which a spatial object belongs to a spatiotemporal neighbourhood. The proposed outlier detection method can be applied to phenomena evolving over time, such as moving objects, pedestrian modelling or credit card fraud.

14.
Different methodologies to estimate the amplitude of the sea surface temperature diurnal variation (DV) and remove it from remotely sensed SST images have been proposed in recent years. Among these, the parameterization proposed by [Stuart-Menteth et al., 2004a] and [Stuart-Menteth et al., 2004b] [Stuart-Menteth, A.C., Robinson, I.S., & Weller, R.A. (2004a). Sensitivity of the diurnal warm layer to meteorological fluctuations Part 1: observations, submitted to Journal of Atmospheric and Ocean Science; Stuart-Menteth, A.C., Robinson, I.S., & Donlon, C.J. (2004b). Sensitivity of the diurnal warm layer to meteorological fluctuations Part 2: a new parameterisation for diurnal warming, submitted to Journal of Atmospheric and Ocean Science] and adopted by the GHRSST-PP (Donlon, 2004) [Donlon, C.J., and the GHRSST-PP Science Team, 2004: The GHRSST-PP data processing specification v1.0 (GDS v1.0, revision 1.5), GHRSST-PP Report N. 17, Published by the International GHRSST-PP Project Office, pp. 241] appeared the most promising, since it takes into account the wind and insolation variations during the day that effectively drive the SST diurnal cycle. This parameterization has been tested on 6 months of NOAA-16 AVHRR images acquired and processed at CNR with the Pathfinder algorithm. The tests revealed some limits to a correct estimation of the DV in low-wind regimes for any insolation condition, and in high insolation regimes (>600 W/m2) when the wind intensity increases or decreases by more than 2 m/s during the morning. The limits of applicability of the DV correction to NOAA-16 AVHRR data (at least for the Mediterranean area) were thus identified, and data outside these limits were flagged. However, some anomalous heating events were not corrected even with these constraints, due to the lack of accuracy in the wind field used for the correction. As a result, a strategy to flag residual outliers in the corrected daily images has been developed, based on comparison with an optimally interpolated night SST field of the previous day.

15.
The main purpose of outlier detection is to discover anomalous data in massive data sets. It offers two benefits: first, as a data preprocessing step, it reduces the influence of noise points on a model; second, detecting anomalies in specific scenarios, and mining the anomalous phenomena themselves, is valuable in its own right. Current mainstream methods such as LOF, KNN and ORCA cannot handle complex scenarios in which global outliers, local outliers and outlier clusters coexist. To address this, a new outlier detection model is proposed. To detect global outliers, local outliers and outlier clusters as comprehensively as possible, three base detectors, iForest, LOF and DBSCAN, are selected for their high sensitivity to global outliers, local outliers and outlier clusters respectively; their objective functions are modified, the framework's error-rate computation is corrected, and the detectors are fused into a new outlier detection model, ILD-BOOST. Experimental results show that the model fully accounts for global and local outliers as well as outlier clusters, and outperforms current mainstream outlier detection methods.

16.
The n-dimensional folded hypercube FQn, a variation of the hypercube proposed by El-Amawy and Latifi [A. El-Amawy, S. Latifi, Properties and performance of folded hypercubes, IEEE Transactions on Parallel and Distributed Systems 2(3) (1991) 31-42], is an (n + 1)-regular (n + 1)-connected graph. Conditional diagnosability, a new measure of diagnosability introduced by Lai et al. [Pao-Lien Lai, Jimmy J.M. Tan, Chien-Ping Chuang, Lih-Hsing Hsu, Conditional diagnosability measures for large multiprocessor systems, IEEE Transactions on Computers 54(2) (2005) 165-175], can better measure the diagnosability of regular interconnection networks. This paper determines that, under the PMC model, the conditional diagnosability of FQn, tc(FQn), is 4n − 3 when n = 5 or n ≥ 8; tc(FQ3) = 3 and tc(FQ4) = 7.
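FQn itself is easy to construct: take the n-cube and add an edge between every vertex and its bitwise complement. A small sketch verifying the (n + 1)-regularity stated in the abstract:

```python
def folded_hypercube_edges(n):
    """Edge set of FQ_n: hypercube edges (endpoints differ in exactly one
    bit) plus the 'folded' edges joining each vertex to its bitwise
    complement. Vertices are the integers 0 .. 2^n - 1."""
    mask = (1 << n) - 1
    edges = set()
    for v in range(1 << n):
        for b in range(n):
            edges.add(frozenset((v, v ^ (1 << b))))   # hypercube edge
        edges.add(frozenset((v, v ^ mask)))           # complement edge
    return edges

def degree(edges, v):
    return sum(1 for e in edges if v in e)

n = 4
E = folded_hypercube_edges(n)
# FQ_n is (n + 1)-regular, so every vertex should have degree n + 1.
print(all(degree(E, v) == n + 1 for v in range(1 << n)))  # -> True
```

The extra complement edges are what raise the regularity (and connectivity) from n to n + 1 relative to the plain hypercube.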

17.
In this paper, several diagnostic measures are proposed based on the case-deletion model for log-Birnbaum-Saunders regression models (LBSRM), complementing the recent work of Galea et al. [2004. Influence diagnostics in log-Birnbaum-Saunders regression models. J. Appl. Statist. 31, 1049-1064], who studied influence diagnostics for LBSRM mainly through local influence analysis. It is shown that the case-deletion model is equivalent to the mean-shift outlier model in LBSRM, and an outlier test is presented based on the mean-shift outlier model. Furthermore, we investigate a test of homogeneity for the shape parameter in LBSRM, a problem mentioned by both Rieck and Nedelman [1991. A log-linear model for the Birnbaum-Saunders distribution. Technometrics 33, 51-60] and Galea et al. [2004]. We obtain the likelihood ratio and score statistics for this test. Finally, a numerical example illustrates our methodology, and the properties of the likelihood ratio and score statistics are investigated through Monte Carlo simulations.

18.
Default logics are usually used to describe the regular behavior and normal properties of domain elements. In this paper we suggest, conversely, that the framework of default logics can be exploited for detecting outliers. Outliers are observations expressed by sets of literals that feature unexpected semantical characteristics. These sets of literals are selected among those explicitly embodied in the given knowledge base. Hence, essentially we perceive outlier detection as a knowledge discovery technique. This paper defines the notion of outlier in two related formalisms for specifying defaults: Reiter's default logic and extended disjunctive logic programs. For each of the two formalisms, we show that finding outliers is quite complex. Indeed, we prove that several versions of the outlier detection problem lie over the second level of the polynomial hierarchy. We believe that a thorough complexity analysis, as done here, is a useful preliminary step towards developing effective heuristics and exploring tractable subsets of outlier detection problems.

19.
The use of satellites to monitor the color of the ocean requires effective removal of the atmospheric signal. This can be performed by extrapolating the aerosol optical properties in the visible from the near-infrared (NIR) spectral region, assuming that seawater is totally absorbing in this latter part of the spectrum. However, the non-negligible water-leaving radiance in the NIR which is characteristic of turbid waters may lead to an overestimate of the atmospheric radiance in the whole visible spectrum, with increasing severity at shorter wavelengths. This may result in significant errors, if not complete failure, of various algorithms for the retrieval of chlorophyll-a concentration, inherent optical properties and biogeochemical parameters of surface waters. This paper presents results of an inter-comparison study of three methods that compensate for NIR water-leaving radiances and that are based on very different hypotheses: 1) the standard SeaWiFS algorithm (Stumpf et al., 2003; Bailey et al., 2010), based on a bio-optical model and an iterative process; 2) the algorithm developed by Ruddick et al. (2000), based on the spatial homogeneity of the NIR ratios of the aerosol and water-leaving radiances; and 3) the algorithm of Kuchinke et al. (2009), based on a fully coupled atmosphere-ocean spectral optimization inversion. They are compared using normalized water-leaving radiance nLw in the visible. The reference source for comparison is ground-based measurements from three AERONET-Ocean Color sites, one in the Adriatic Sea and two on the East Coast of the USA. Based on the matchup exercise, the best overall estimates of nLw are obtained with the latest SeaWiFS standard algorithm version, with relative error varying from 14.97% to 35.27% for λ = 490 nm and λ = 670 nm respectively. The least accurate estimates are given by the algorithm of Ruddick, the relative errors being between 16.36% and 42.92% for λ = 490 nm and λ = 412 nm, respectively. The algorithm of Kuchinke appears to be the most accurate at 412 nm (30.02%), 510 nm (15.54%) and 670 nm (32.32%), using its default optimization and bio-optical model coefficient settings. Similar conclusions are obtained for the aerosol optical properties (aerosol optical thickness τ(865) and the Ångström exponent α(510, 865)); those parameters are retrieved more accurately with the SeaWiFS standard algorithm (relative errors of 33% and 54.15% for τ(865) and α(510, 865)). A detailed analysis of the hypotheses of the methods is given to explain the differences between the algorithms. The determination of the aerosol parameters is critical for the algorithm of Ruddick et al. (2000), while the bio-optical model is critical for the algorithm of Stumpf et al. (2003) utilized in the standard SeaWiFS atmospheric correction, and both the aerosol and bio-optical models are critical for the coupled atmosphere-ocean algorithm of Kuchinke. The Kuchinke algorithm uses model aerosol-size distributions that differ from the real aerosol-size distributions pertaining to the measurements. In conclusion, the results show that, for the atmospheric and oceanic conditions of this study, the SeaWiFS atmospheric correction algorithm is the most appropriate for estimating the marine and aerosol parameters in the given turbid-water regions.

20.
Azariadis and Sapidis [Azariadis PN, Sapidis NS. Drawing curves onto a cloud of points for point-based modelling. Computer-Aided Design 2005;37(1):109-22] introduced a novel method of point directed projection (DP) onto a point cloud along an associated projection vector. This method is essentially based on a least-sum-of-squares idea, using a weight function to bound the influence of noise. One problem with their method is its lack of robustness to outliers. Here, we present a simple, robust, and efficient algorithm, robust directed projection (RDP), to guide the DP computation. Our algorithm is based on a robust statistical method for outlier detection: least median of squares (LMS). To approximate the LMS optimization effectively, the forward search technique is utilized. The algorithm presented here is better suited to detecting outliers than the DP approach and thus finds better projection points on the point cloud. One advantage of our algorithm is that it automatically ignores outliers during the directed projection phase.
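The LMS principle this abstract invokes, minimizing the median of squared residuals instead of their sum, can be illustrated with a 1-D location estimate. This is a toy example of the statistic only, not the paper's projection algorithm, and for simplicity the candidate centers are restricted to the data values themselves:

```python
def lms_location(values):
    """1-D least-median-of-squares location estimate: the candidate
    (restricted here to the data values) minimizing the median of
    squared residuals. Toy illustration of the LMS principle."""
    def median(xs):
        s = sorted(xs)
        k = len(s) // 2
        return s[k] if len(s) % 2 else (s[k - 1] + s[k]) / 2
    return min(values, key=lambda c: median([(v - c) ** 2 for v in values]))

data = [1.0, 1.1, 0.9, 1.2, 0.95, 100.0, 120.0]   # two gross outliers
mean = sum(data) / len(data)
robust = lms_location(data)
print(round(mean, 2), robust)  # mean is dragged up; LMS stays near 1
```

Because the median ignores the largest half of the residuals, up to nearly half the points can be arbitrary outliers without moving the estimate, which is the robustness property RDP exploits during projection.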
