Similar Articles
20 similar articles found; search time: 596 ms
1.
Most agricultural statistics are calculated per field, and it is well known that classification procedures for homogeneous objects produce better results than per-pixel classification. In this study, a multispectral segmentation method for automated delineation of agricultural field boundaries in remotely sensed images is presented. Edge information from a gradient edge detector is integrated with a segmentation algorithm. The multispectral edge detector uses all available multispectral information by adding the magnitudes and directions of edges derived from edge detection in single bands. The addition is weighted by edge direction to remove "noise" and to enhance the major direction. The resulting edges are combined with a segmentation method based on a simple ISODATA algorithm, where the initial centroids are determined by the distances to the edges from the edge-detection step. Since the number of regions produced by this procedure will most likely exceed the actual number of fields in the image, merging of regions is performed: by calculating the mean and covariance matrix for pixels of neighboring regions, regions with a high generalized likelihood-ratio test statistic are merged. In this way, information from several spectral bands (and/or different dates) can be used for delineating field borders with different characteristics. The introduction of the ISODATA classifier improves the output compared with a previously used region-growing procedure. Some results are compared with manually extracted field boundaries.
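The band-combination step of the edge detector can be sketched as follows. This is a simplified illustration in which per-band gradient magnitudes are summed into a single edge map; the paper's direction weighting, ISODATA seeding, and likelihood-ratio merging are omitted, and the two-band test image and `multispectral_edges` helper are hypothetical.

```python
import numpy as np

def multispectral_edges(bands):
    """Combine per-band gradient magnitudes into one edge map.

    bands: array of shape (n_bands, H, W).  A simplified sketch of the
    multispectral edge detector in the abstract (direction weighting
    is not reproduced here)."""
    mag = np.zeros(bands.shape[1:])
    for band in bands:
        gy, gx = np.gradient(band.astype(float))
        mag += np.hypot(gx, gy)   # accumulate edge strength across bands
    return mag

# Two-band toy image with a vertical field boundary at column 4.
img = np.zeros((2, 8, 8))
img[:, :, 4:] = 10.0
edges = multispectral_edges(img)
```

On this toy input, the combined map is strong along the shared boundary and zero inside the homogeneous "fields," which is exactly the property the segmentation step exploits.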

2.
The optimum procedure for locating a sync word periodically inserted in uncoded binary data received over a binary symmetric channel is based on the Hamming, or bit-distance, metric. This concise paper addresses the corresponding frame-sync problem for biorthogonally coded data transmitted over the additive white Gaussian noise (AWGN) channel. For conceptual convenience, the k-bit words from the decoder output are treated as "supersymbols" from an alphabet of dimension 2^k. It is argued that the optimum sync-word search over the decoded data stream is based on a supersymbol distance rule matched to the properties of the biorthogonally coded transmissions over the noisy channel. An optimum frame-sync acquisition algorithm based on this distance rule is formulated, and its performance is investigated. As an example, the performance of this optimum frame-sync algorithm is contrasted analytically with that of a Hamming-distance algorithm operating on decoded (32, 6) biorthogonal data, a case of interest to some recent unmanned American space missions.
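The uncoded baseline mentioned at the start of the abstract, a minimum-Hamming-distance search, can be sketched as below. The 8-bit sync word, frame layout, and `locate_sync` helper are illustrative assumptions; this is not the paper's supersymbol distance rule for biorthogonal codes.

```python
import numpy as np

SYNC = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # hypothetical 8-bit sync word

def locate_sync(rx, sync):
    """Slide the sync word over the received bits and return the offset
    of minimum Hamming distance -- the optimum rule for uncoded data on
    a binary symmetric channel."""
    n = len(sync)
    d = [np.count_nonzero(rx[i:i + n] != sync) for i in range(len(rx) - n + 1)]
    return int(np.argmin(d))

frame = np.tile([0, 1], 32)   # 64 bits of filler data
frame[20:28] = SYNC           # sync word inserted at offset 20
frame[22] ^= 1                # one channel bit error
offset = locate_sync(frame, SYNC)
```

Even with a bit error inside the sync word, the distance at the true offset (1) stays well below that of any other window, so the search still locks on.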

3.
A new network decomposition-optimization algorithm is presented and evaluated via computer simulation. The network is first partitioned into subnetworks on the basis of lines fitted via linear regression to the node locations weighted by their traffic loads. Each node is then connected via a data line of appropriate capacity to a concentrator-multiplexer. The concentrators are then regarded as nodes, and the process is repeated as often as required. The resulting singly connected network consists of a hierarchy of concentrators whose number, capacity, location, and interconnection are selected as an inherent part of the design procedure to minimize network cost. The linear regression clustering (LRC) design procedure is evaluated via computer simulation by comparing the costs and performance of the resulting networks with those produced by an algorithm based on the generally applicable network design approach of iterative local optimizations, or "search" procedures. The data supplied to the LRC and search algorithms include randomly generated node locations and traffic matrices, and specific (realistic) cost vs. capacity schedules for data lines and concentrators. Comparisons of network costs, queuing and transmission delays, network reliability, and network design costs show the LRC algorithm to be markedly superior to the search algorithm. The paper includes a brief discussion of the results and their implications.

4.
A tree-structured Markov random field model for Bayesian image segmentation
We present a new image segmentation algorithm based on a tree-structured binary MRF model. The image is recursively segmented in smaller and smaller regions until a stopping condition, local to each region, is met. Each elementary binary segmentation is obtained as the solution of a MAP estimation problem, with the region prior modeled as an MRF. Since only binary fields are used, and thanks to the tree structure, the algorithm is quite fast, and allows one to address the cluster validation problem in a seamless way. In addition, all field parameters are estimated locally, allowing for some spatial adaptivity. To improve segmentation accuracy, a split-and-merge procedure is also developed and a spatially adaptive MRF model is used. Numerical experiments on multispectral images show that the proposed algorithm is much faster than a similar reference algorithm based on "flat" MRF models, and its performance, in terms of segmentation accuracy and map smoothness, is comparable or even superior.

5.
This paper presents an innovative microwave technique, which is suitable for the detection of defects in nondestructive-test and nondestructive-evaluation (NDT/NDE) applications where a lot of a priori information is available. The proposed approach is based on the equations of the inverse scattering problem, which are solved by means of a minimization procedure based on a genetic algorithm. To reduce the number of problem unknowns, the available a priori information is efficiently exploited by introducing an updating procedure for the electric field computation based on the Sherman-Morrison-Woodbury formula. The results of a representative set of numerical experiments as well as comparisons with state-of-the-art methods are reported. They confirm the effectiveness, feasibility, and robustness of the proposed approach, which shows some interesting features from a computational point of view as well.

6.
This paper shows that an n×1 integer vector can be exactly recovered from its Hadamard transform coefficients, even when 0.5n·log2(n) of the (less significant) bits of these coefficients are removed. The paper introduces a fast "lossless" dequantization algorithm for this purpose. To investigate the usefulness of the procedure in data compression, the paper develops an embedded block image coding technique called "LHAD," based on the algorithm. The results show that lossless compression ratios close to the state of the art can be achieved, but that techniques such as CALIC and S+P still perform better.
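The transform this result builds on can be illustrated with a fast Walsh-Hadamard transform over the integers: the unnormalized transform matrix H satisfies H·H = n·I, so applying the transform twice and dividing by n recovers an integer vector exactly. This sketch shows only that exact round trip; the paper's bit-truncation ("lossless dequantization") step and the LHAD coder are not reproduced.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized, natural order).
    Length must be a power of two."""
    x = np.array(x, dtype=np.int64)   # work on an integer copy
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b   # butterfly step
        h *= 2
    return x

x = np.array([3, -1, 4, 1, -5, 9, 2, 6], dtype=np.int64)
coeffs = fwht(x)
# H.H = n*I, so the second application returns n*x; integer division
# by n is exact and recovers x with no rounding.
recovered = fwht(coeffs) // len(x)
```

All arithmetic stays in the integers, which is what makes "lossless" manipulation of the coefficient bits possible in the first place.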

7.
We propose a new stochastic algorithm for computing useful Bayesian estimators of hidden Markov random field (HMRF) models, which we call the exploration/selection/estimation (ESE) procedure. The algorithm is based on an optimization algorithm of O. François, called the exploration/selection (E/S) algorithm. The novelty consists of using the a posteriori distribution of the HMRF as the exploration distribution in the E/S algorithm. The ESE procedure computes the estimation of the likelihood parameters and the optimal number of region classes, according to global constraints, as well as the segmentation of the image. In our formulation, the total number of region classes is fixed, but classes are allowed or disallowed dynamically. This framework replaces the split-and-merge mechanism for regions that can be used in the context of image segmentation. The procedure is applied to the estimation of an HMRF color model for images, whose likelihood is based on multivariate distributions, with each component following a Beta distribution. In addition, a method for computing the maximum likelihood estimators of Beta distributions is presented. Experimental results on 100 natural images are reported. We also include a proof of convergence of the E/S algorithm in the case of nonsymmetric exploration graphs.

8.
Image segmentation by clustering
This paper describes a procedure for segmenting imagery using digital methods, based on a mathematical pattern recognition model. The technique does not require training prototypes but operates in an "unsupervised" mode. The features most useful for the given image to be segmented are retained by the algorithm without human interaction, by rejecting those attributes which do not contribute to homogeneous clustering in N-dimensional vector space. The basic procedure is a K-means clustering algorithm which converges to a local minimum in the average squared intercluster distance for a specified number of clusters. The algorithm iterates on the number of clusters, evaluating the clustering based on a parameter of clustering quality. The proposed parameter is a product of between-cluster and within-cluster scatter measures, which achieves a maximum value that is postulated to represent an intrinsic number of clusters in the data. At this value, feature rejection is implemented via a Bhattacharyya measure to make the image segments more homogeneous (thereby removing "noisy" features), and reclustering is performed. The resulting parameter of clustering fidelity is maximized, yielding segmented imagery that is psychovisually pleasing and culturally logical.
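The K-means loop at the core of this procedure can be sketched as follows. The cluster-quality parameter, the Bhattacharyya-based feature rejection, and the reclustering stages are omitted; the toy pixel data and the `kmeans` helper are illustrative, not the paper's implementation.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain K-means: assign each sample to its nearest center, then
    move each center to the mean of its samples, and repeat."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(0) for c in range(k)])
    return labels, centers

# Six "pixels" with two clearly separated intensity clusters.
pixels = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
labels, centers = kmeans(pixels, k=2)
```

On well-separated data like this, the loop converges in a couple of iterations; the paper's contribution is the outer search over the number of clusters and the feature set, not this inner loop.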

9.
This paper deals with the experimental validation of an algorithm, based on the Kirchhoff approximation, for the shape reconstruction of conducting objects from scattered field data. Measured data are collected in a controlled environment in reflection mode, with a finite observation domain and a multiview/multistatic/multifrequency configuration. The results show the effectiveness of the approach, which accounts for view diversity through a simple strategy and a threshold procedure.

10.
The paper presents a new method for natural-frequency and damping identification based on the artificial intelligence (AI) technique of particle swarm optimization (PSO). The identification is performed in the frequency domain. The algorithm performs two PSO-based steps and introduces some modifications in order to achieve quick convergence and low estimation error of the identified parameter values for multi-mode systems. The first stage of the algorithm concentrates on natural-frequency estimation. Using the information about the natural frequencies, the measurement data are filtered, and corrected dampings and amplitudes are calculated for each preliminarily identified mode. This allows the particles to be regrouped to the area around the proper parameter values. Particle regrouping is based on the physical properties of modally tested structures, which distinguishes the algorithm from other PSO-based algorithms with particle regrouping. In the second stage of the algorithm, the parameters of all modes are tuned together in order to adjust the estimates. The identification procedure and the corresponding algorithm are presented, and some SISO examples are provided. Results are compared with those obtained using selected, already-developed modal identification methods. The paper presents a practical application of an AI method to mechanical system identification.
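A minimal particle swarm optimizer, the building block of the two-stage procedure above, can be sketched as below. Here it merely minimizes a 1-D toy cost function standing in for the paper's frequency-domain modal-parameter fit; the inertia and acceleration constants and the `pso` helper are generic textbook choices, not the paper's tuned, regrouping variant.

```python
import numpy as np

def pso(f, lo, hi, n_particles=30, iters=100, seed=0):
    """Minimize f on [lo, hi] with a standard PSO: each particle is
    pulled toward its personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n_particles)      # particle positions
    v = np.zeros(n_particles)                 # particle velocities
    pbest, pbest_f = x.copy(), f(x)
    for _ in range(iters):
        g = pbest[np.argmin(pbest_f)]         # global best so far
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = f(x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
    return pbest[np.argmin(pbest_f)]

# Toy cost with a single minimum at x = 2.5 (a stand-in for one
# natural-frequency estimate in the first identification stage).
best = pso(lambda x: (x - 2.5) ** 2, 0.0, 10.0)
```

The paper's modifications (filtering guided by preliminary frequency estimates, physically motivated particle regrouping) sit on top of exactly this kind of loop.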

11.
The goal of this paper is to show that commercial sensors, whose frequency response is not specifically designed, can be effectively used to measure very fast transient fields by applying a proper reconstruction procedure based on knowledge of the sensor transfer function. To do this, it is necessary to characterize a structure supporting a transverse electromagnetic (TEM) field, which is then used to set up a calibration procedure for elementary magnetic field sensors. The approach is completely analytical and allows us to know rigorously the field inside the structure. From the knowledge of this field, the transfer function of the sensor, in amplitude and phase, is evaluated up to 2 GHz. The complete characterization of the sensor allows us to reconstruct the sensed field from its output voltage waveform. The calibration procedure is carried out in the time domain; therefore, the fast Fourier transform (FFT) algorithm is used to obtain the sensor transfer function, and an inverse FFT (IFFT) is used to retrieve the transient impinging field. An experimental validation of the procedure shows the consistency of the approach.
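The reconstruction step (FFT the measured output, divide by the calibrated transfer function, IFFT back) can be sketched as below. The first-order transfer function, sampling interval, and test pulse are invented for illustration and are not the paper's measured sensor response.

```python
import numpy as np

n = 256
freqs = np.fft.fftfreq(n, d=1e-10)        # 0.1 ns sampling (assumed)
H = 1.0 / (1.0 + 1j * freqs / 2e9)        # assumed first-order response,
                                          # 2 GHz corner (hypothetical)

t = np.arange(n) * 1e-10
field = np.exp(-((t - 5e-9) / 1e-9) ** 2)            # synthetic transient
measured = np.fft.ifft(np.fft.fft(field) * H)        # what the sensor outputs
recovered = np.fft.ifft(np.fft.fft(measured) / H).real  # spectral deconvolution
```

Because H is known and nonzero over the band of interest, the division undoes the sensor's filtering exactly (up to floating-point error); in practice the quality of the reconstruction is limited by how well H was calibrated.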

12.
A Novel Direction-Finding Algorithm for Directional Borehole Radar
A directional borehole radar system has been developed for the purpose of 3-D imaging of subsurface targets in a single-hole measurement. The radar system is equipped with a uniform circular array consisting of four dipole antennas as a receiver in order to realize azimuth bearing sensitivity. We propose a new direction-finding (DF) algorithm that is suitable for directional borehole radar measurement, and we apply this algorithm to actual field measurement data. The algorithm is based on the Adcock DF antenna principle, where the complex time series (analytic signal) expression, optimization, and a filtering procedure are incorporated to provide more accurate estimation. The algorithm was first verified in a transmission measurement in boreholes with a cross-hole configuration (15 m apart) by estimating the direction of the incident wave from a transmitter to the receiver. Finally, the algorithm was applied to single-hole measurement data to demonstrate the ability to detect the 3-D location of a subsurface tunnel located 5.5 m from the borehole. The result showed fairly good agreement with the actual location of the tunnel, i.e., an azimuth estimation error within 10°.

13.
游波, 张明敏. Journal of Signal Processing (《信号处理》), 2005, 21(1): 52-56
A new concept of probabilistic data association filtering under a multi-mode Bayesian criterion is proposed. Combining this method with hybrid matched-field and long-term-integration processing, an integrated detection-and-tracking algorithm within a Bayesian framework is established. Through continuous recursion in the time domain and data association in the spatial domain, passive localization and tracking of maneuvering targets is effectively achieved.

14.
This paper presents a procedure for the automated segmentation of multispectral Landsat TM images of farmland in Western Australia into field units. The segmentation procedure, named the canonically-guided region growing (CGRG) procedure, assumes that each field contains only one ground cover type and that the width of the minimum field of interest is known. The CGRG procedure segments images using a seeded region growing algorithm, but is novel in the method used to generate the internal field markers used as "seeds." These internal field markers are obtained from a multiband, local canonical eigenvalue image. Before the local transformation is applied, the original image is morphologically filtered to estimate both between-field variation and within-field variation in the image. Local computation of the canonical variate transform, using a moving window sized to fit just inside the smallest field of interest, ensures that the between- and within-field spatial variations in each image band are accommodated. The eigenvalues of the local transform are then used to discriminate between an area completely inside a field and one at a field boundary. The results obtained using CGRG and the methods of Lee (1997) and Tilton (1998) were numerically compared to "ideal" segmentations of a set of sample satellite images. The comparison indicates that the results of the CGRG are usually more accurate, in terms of field boundary position and degree of over- and under-segmentation, than either of the other procedures.

15.
A new approach is presented to determine atmospheric temperature profiles by combining measurements coming from different sources and taking into account evolution models derived from conventional meteorological observations. Using a historical database of atmospheric parameters and related microwave brightness temperatures, the authors have developed a data assimilation procedure based on the geostatistical Kriging method and Kalman filtering, suitable for processing satellite radiometric measurements available at each satellite pass, data from a ground-based radiometer, and temperature profiles from radiosondes released at specific times and locations. The Kalman filter technique and the geostatistical Kriging method, as well as principal component analysis, have proved very powerful in exploiting climatological a priori information to build spatial and temporal evolution models of the atmospheric temperature field. The use of both historical radiosoundings (RAOBs) and a radiative transfer code allowed the estimation of the statistical parameters that appear in the models themselves (covariance and cross-covariance matrices, observation matrix, etc.). The authors have developed an algorithm, based on a Kalman filter supplemented with a Kriging geostatistical interpolator, that shows a significant improvement in the accuracy of vertical-profile estimation with respect to a standard Kalman filter when applied to real satellite radiometric data.

16.
Interferometric SAR phase unwrapping using Green's formulation
Any method that permits retrieving full-range (unwrapped) phase values starting from their (-π, π) determination (wrapped phase) can be defined as a phase unwrapping technique. This paper presents a new procedure for phase unwrapping especially designed for interferometric synthetic aperture radar applications. The proposed algorithm is based on the use of Green's first identity. Results on simulated as well as real data are presented; both confirm the excellent performance of the procedure.
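The underlying problem can be illustrated in one dimension: a smooth phase ramp, once folded into (-π, π), is recovered by adding back the right multiples of 2π. This sketch uses numpy's simple itinerary-based 1-D unwrap as a baseline; the paper's method works on 2-D interferograms via Green's first identity and is considerably more robust to noise and residues.

```python
import numpy as np

true_phase = np.linspace(0, 6 * np.pi, 200)   # smooth monotone phase ramp
wrapped = np.angle(np.exp(1j * true_phase))   # folded into (-pi, pi]
unwrapped = np.unwrap(wrapped)                # add back 2*pi jumps
```

The 1-D case succeeds here because successive samples differ by much less than π; the hard part of interferometric SAR unwrapping is that 2-D noisy data offer no such guarantee, which is what motivates integral formulations like the one in this paper.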

17.
A procedure has been developed for the design of nonrecursive digital filters with prescribed passband and stopband amplitude characteristics. The proposed procedure is based on an efficient algorithm utilizing the simplex method of linear programming. The design algorithm yields an equiripple approximation and is an alternative to the one based on the Remez exchange algorithm. The design procedure allows exact specifications for arbitrary passband and stopband edges. Furthermore, no prior knowledge of the degree of the filter is required. To demonstrate the potential of the design algorithm, several examples with different requirements are worked out, and a representative sample is presented. The obtained results show that the design procedure performs very well when the various filter parameters are taken into consideration.
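The key observation behind an LP-based design is that the amplitude of a linear-phase (type-I) FIR filter, A(w) = a0 + Σ a_k·cos(kw), is linear in the coefficients, so "minimize δ subject to |A(w) − D(w)| ≤ δ on a frequency grid" is a linear program. The sketch below uses scipy's generic `linprog` solver with invented band edges and filter order; it is an illustration of the formulation, not the paper's simplex implementation.

```python
import numpy as np
from scipy.optimize import linprog

M = 8                                   # a0..aM (17-tap filter, assumed)
wp, ws = 0.3 * np.pi, 0.5 * np.pi       # assumed passband/stopband edges
grid_p = np.linspace(0, wp, 40)
grid_s = np.linspace(ws, np.pi, 40)
grid = np.concatenate([grid_p, grid_s])
D = np.concatenate([np.ones_like(grid_p), np.zeros_like(grid_s)])

C = np.cos(np.outer(grid, np.arange(M + 1)))   # A(w) = C @ a on the grid
# Variables: [a0..aM, delta]; objective: minimize delta.
c = np.zeros(M + 2)
c[-1] = 1.0
ones = np.ones((len(grid), 1))
A_ub = np.vstack([np.hstack([C, -ones]),        #  A(w) - D(w) <= delta
                  np.hstack([-C, -ones])])      # -A(w) + D(w) <= delta
b_ub = np.concatenate([D, -D])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (M + 2))
a, delta = res.x[:-1], res.x[-1]
```

At the optimum, δ is the peak approximation error, and the error touches ±δ at many grid points, i.e. the solution is (approximately) equiripple, which is exactly the property the abstract claims for the LP formulation.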

18.
In this paper, we present an algorithm for the automated removal of nonprecipitation-related echoes, such as atmospheric anomalous propagation (AP), in the lower elevations of meteorological-radar volume scans. The motivation for the development of this technique is the need for an objective quality control (QC) algorithm that minimizes human interaction. The algorithm uses both textural and intensity information obtained from the two lower-elevation reflectivity maps. The texture of the reflectivity maps is analyzed with the help of multifractals. Four multifractal exponents are computed for each pixel of the reflectivity maps and are compared to a "strict" and a "soft" threshold. Pixels with multifractal exponents larger than the strict threshold are marked as "nonrain," and pixels with exponents smaller than the soft threshold are marked as "rain." Pixels with all other exponent values are further examined using intensity information. We evaluate our QC procedure by comparison with the Tropical Rainfall Measurement Mission (TRMM) Ground Validation Project quality control algorithm that was developed by TRMM scientists. Comparisons are based on a number of selected cases where nonprecipitation echoes and a variety of rain events are present, and the results show that both algorithms are effective in eliminating nonprecipitation-related echoes while maintaining the rain pixels.

19.
A simple adaptive algorithm for real-time processing in antenna arrays
A new adaptation algorithm designed for real-time data processing in large antenna arrays is presented. The algorithm is used to determine the set of filter coefficients (weights) which minimizes the mean-square error in a multidimensional linear filter. The algorithm forms an estimate of the target signal, which is assumed to be of interest, in the presence of interfering noises. It is assumed that the direction of arrival and spectral density of the target signal are known a priori. No such information is assumed to be available regarding the structure of the interfering noise field. The a priori target information is incorporated directly into the adaptation procedure using a modified gradient descent technique. The mathematical convergence properties of the algorithm are presented and a computer simulation experiment is used as an illustration. It is shown that as the number of iterations becomes large, the expected value of the adaptive solution converges to the minimum mean-square-error solution. It is further shown that the variance of the adapted filter about the optimum solution can be made arbitrarily small by appropriate choice of a scalar constant in the algorithm. These results are based on the assumption that the array signals are Gaussian and that successive time samples are statistically uncorrelated. Thus, the new algorithm is shown to converge to the optimum processor in the limit as the number of adaptations becomes large. Any disadvantage which may arise in the use of such an asymptotically optimum system is offset by the extreme simplicity of the adaptive procedure. This simplicity should prove to be particularly useful in many of the practical array processing problems recently encountered in seismic and sonar data processing.
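The stochastic-gradient adaptation described above can be sketched with a plain LMS update: at each sample, the weights move a small step against the instantaneous error gradient, and on average they converge to the minimum mean-square-error solution. The tap count, step size, and synthetic target model are invented for illustration; the paper's array formulation with a priori steering information is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n, taps, mu = 4000, 4, 0.01
w_true = np.array([0.5, -0.3, 0.2, 0.1])        # hypothetical optimum weights
x = rng.standard_normal(n)
X = np.stack([np.roll(x, k) for k in range(taps)], axis=1)  # tapped delay line
d = X @ w_true + 0.01 * rng.standard_normal(n)  # target signal + small noise

w = np.zeros(taps)
for xi, di in zip(X, d):
    e = di - w @ xi      # instantaneous error
    w += mu * e * xi     # stochastic gradient step
```

Shrinking the step size mu reduces the steady-state variance of w about the optimum at the cost of slower convergence, which mirrors the "scalar constant" trade-off stated in the abstract.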

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号