Similar literature: 20 documents found.
1.
《Information Fusion》2000,1(1):5-15
Much of the research on multi-source fusion systems reported in the literature has concentrated on alternative methods of accomplishing fusion of information, such that the resulting fused output is in some sense better than any of the individual inputs. There is a wealth of literature expounding many variations on this theme of improving information quality, reliability, or robustness through a plethora of fusion concepts and tools. However, there has been little reported research aimed at providing an understanding of the causal relationship between the input information and the resulting fused output. This study explores this largely uncharted territory by conceptualizing fusion systems that are elucidative, i.e., systems that can, in some fashion, explain the results of the fusion process in terms of, for example, the relative influence of the different input information components (from the different sources) on the fused result. This new concept of elucidative fusion systems is illustrated by building such an elucidative property into a class of fusion systems operating on the principles of case-based reasoning. The potential for application to real-world problems is also demonstrated using the example of an audio-video system for recognition of spoken French vowels.

2.
Optimal decision fusion given sensor rules
When all the sensor decision rules are known, the optimal distributed decision fusion, which relies only on the joint conditional probability densities, can be derived for very general decision systems, including systems with interdependent sensor observations and any network structure. The result is also valid for m-ary Bayesian decision problems and for binary problems under the Neyman-Pearson criterion. Local decision rules of a sensor with communication from other sensors that are optimal for the sensor itself are also presented; they take the form of a generalized likelihood ratio test. Numerical examples reveal an interesting phenomenon: communication between sensors can improve the performance of a sensor's own decision, but cannot guarantee an improvement of the global fusion performance when the sensor rules are fixed before fusing.
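For the special case of conditionally independent sensor observations with known local detection and false-alarm probabilities, the optimal fusion rule reduces to the classical Chair-Varshney log-likelihood-ratio test. The sketch below illustrates only that simplified case, not the more general dependent-observation result described in the abstract; the probability values in the example are hypothetical.

```python
import numpy as np

def chair_varshney_fusion(u, pd, pf, threshold=0.0):
    """Fuse binary local decisions u[i] in {0, 1} from sensors with known
    detection probabilities pd[i] and false-alarm probabilities pf[i],
    assuming conditionally independent sensor observations."""
    u, pd, pf = map(np.asarray, (u, pd, pf))
    # Log-likelihood ratio of the observed decision vector under H1 vs. H0.
    llr = np.sum(u * np.log(pd / pf) + (1 - u) * np.log((1 - pd) / (1 - pf)))
    return int(llr > threshold)  # 1 = declare target present (H1)

# Hypothetical example: three sensors, two of which vote "target present".
print(chair_varshney_fusion(u=[1, 1, 0], pd=[0.9, 0.8, 0.7], pf=[0.1, 0.2, 0.3]))
```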

3.
This paper addresses the challenge of determining the position of a train accurately and in a timely manner, with specific consideration given to the integration of the global navigation satellite system (GNSS) and the inertial navigation system (INS). To overcome the growing INS errors during GNSS signal interruptions, as well as the uncertainty associated with process and measurement noise, a deep learning-based train positioning method is proposed. The method combines convolutional neural networks (CNN), long short-term memory (LSTM), and the invariant extended Kalman filter (IEKF) to enhance the perception of train positions. It effectively handles GNSS signal interruptions and mitigates the impact of noise. Experimental evaluation and comparisons with existing approaches illustrate the effectiveness and robustness of the proposed method.

4.
Primary exploration of nonlinear information fusion control theory
By introducing information fusion techniques into the control field, a new theory of information fusion control (IFC) is proposed. Based on the theory of information fusion estimation, optimal control of nonlinear discrete-time control systems is investigated. All information about the control strategy, including the ideal control strategy, the expected trajectory of the controlled object and the system dynamics, is regarded as measurement information about the control strategy. The problem of optimal control is thereby transformed into one of information fusion estimation. Firstly, the nonlinear information fusion estimation theorems are described. Secondly, an algorithm of the nonlinear IFC theory is derived in detail. Finally, simulation results of manipulator shift control are given, which show the feasibility and effectiveness of the presented algorithm.

5.
The wavelets used in image fusion can be categorized into three general classes: orthogonal, biorthogonal, and non-orthogonal. Although these wavelets share some common properties, each wavelet also has a unique image decomposition and reconstruction characteristic that leads to different fusion results. This paper compares image-fusion methods that use wavelets of these three classes and theoretically analyses the factors that lead to the different fusion results. Normally, when a wavelet transform alone is used for image fusion, the fusion result is not good. However, if a wavelet transform is integrated with a traditional fusion method, such as an IHS or PCA transform, better fusion results may be achieved. Therefore, this paper also discusses methods to improve wavelet-based fusion by integrating an IHS or a PCA transform. As the substitution in the IHS or PCA transform is limited to only one component, using the wavelet transform to improve or modify that component, and the IHS or PCA transform to fuse the image, makes the fusion process simpler and faster; this integration can also better preserve colour information. IKONOS and QuickBird image data are used to evaluate seven wavelet fusion methods (orthogonal wavelet fusion with decimation, orthogonal wavelet fusion without decimation, biorthogonal wavelet fusion with decimation, biorthogonal wavelet fusion without decimation, wavelet fusion based on the 'à trous' algorithm, integrated wavelet and IHS transformation, and integrated wavelet and PCA transformation). The fusion results are compared graphically, visually, and statistically, and show that the wavelet-integrated methods can improve the fusion result, reduce ringing or aliasing effects to some extent, and make the whole image smoother. Comparisons of the final results also show that the result is affected by the type of wavelet (orthogonal, biorthogonal, or non-orthogonal), decimation or undecimation, and the number of wavelet-decomposition levels.
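As a rough illustration of how a wavelet transform can be integrated with an IHS-style intensity component, the sketch below keeps the low-frequency (approximation) content of the multispectral intensity, injects the wavelet detail of the panchromatic image, and adds the resulting intensity change back to each band. It is a minimal additive wavelet-intensity scheme written with NumPy and PyWavelets under simplifying assumptions (co-registered inputs, even image dimensions, equal-weight intensity), not one of the seven methods evaluated in the paper.

```python
import numpy as np
import pywt

def wavelet_ihs_fusion(ms, pan, wavelet="db2"):
    """ms: (rows, cols, bands) multispectral image resampled to the PAN grid.
    pan: (rows, cols) panchromatic image.  Returns the fused image."""
    intensity = ms.mean(axis=2)                       # simple IHS-style intensity
    cA_i, _details_i = pywt.dwt2(intensity, wavelet)  # low-frequency part of intensity
    _cA_p, details_p = pywt.dwt2(pan, wavelet)        # high-frequency detail of PAN
    # New intensity: spectral (MS) approximation + spatial (PAN) detail.
    new_intensity = pywt.idwt2((cA_i, details_p), wavelet)
    new_intensity = new_intensity[: ms.shape[0], : ms.shape[1]]
    # Additive injection of the intensity change into every band.
    return ms + (new_intensity - intensity)[..., np.newaxis]

# Hypothetical example with random arrays standing in for co-registered imagery.
ms = np.random.rand(128, 128, 3)
pan = np.random.rand(128, 128)
print(wavelet_ihs_fusion(ms, pan).shape)
```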

6.
Remote sensing image fusion based on Bayesian linear estimation
A new remote sensing image fusion method based on statistical parameter estimation is proposed in this paper. More specifically, Bayesian linear estimation (BLE) is applied to observation models between remote sensing images with different spatial and spectral resolutions. The proposed method estimates only the mean vector and covariance matrix of the high-resolution multispectral (MS) images, instead of assuming the joint distribution between the panchromatic (PAN) image and the low-resolution multispectral image. Furthermore, the proposed method can enhance the spatial resolution of several principal components of the MS images, whereas the traditional Principal Component Analysis (PCA) method can enhance only the first principal component. Experimental results with real MS images and a PAN image from Landsat ETM demonstrate that the proposed method performs better than traditional methods based on statistical parameter estimation, the PCA-based method and the wavelet-based method.
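For reference, the Bayesian linear (linear minimum mean square error) estimator that underlies observation-model approaches of this kind uses only first- and second-order moments; the notation below is generic rather than the exact parameterization used in the paper. Here x is the quantity to be estimated (e.g. the high-resolution MS image) and y the observation:

```latex
\hat{x} = \mu_x + \Sigma_{xy}\,\Sigma_{yy}^{-1}\,(y - \mu_y),
\qquad
\operatorname{Cov}(x - \hat{x}) = \Sigma_{xx} - \Sigma_{xy}\,\Sigma_{yy}^{-1}\,\Sigma_{yx}.
```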

7.
8.
Based on the multi-sensor optimal information fusion criterion weighted by matrices in the linear minimum variance sense, and using white noise estimators, an optimal distributed fusion Kalman smoother is given for discrete multi-channel ARMA (autoregressive moving average) signals. The smoothing error cross-covariance matrices between any two sensors are derived for the measurement noises. Furthermore, the fusion smoother gives higher precision than any local smoother does.
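The matrix-weighted optimal fusion criterion in the linear minimum variance sense referred to here is usually stated as follows, where the local estimates are x̂_1, ..., x̂_L, Sigma is the block matrix of local estimation error cross-covariances P_ij, and e stacks L identity matrices; this is the generic form of the criterion rather than a derivation reproduced from the paper:

```latex
\hat{x}_o = \sum_{i=1}^{L} A_i\,\hat{x}_i,
\qquad \sum_{i=1}^{L} A_i = I_n,
\qquad
\begin{bmatrix} A_1 & \cdots & A_L \end{bmatrix}
  = \left(e^{\mathrm{T}}\Sigma^{-1}e\right)^{-1} e^{\mathrm{T}}\Sigma^{-1},
\qquad
P_o = \left(e^{\mathrm{T}}\Sigma^{-1}e\right)^{-1}.
```

The fused covariance P_o is no larger (in the matrix sense) than any local covariance P_ii, which is the sense in which such a fusion smoother improves on each local smoother.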

9.
With many remote-sensing instruments onboard satellites exploring the Earth's atmosphere, most data are processed into gridded daily maps. However, differences in the original spatial, temporal, and spectral resolution, as well as in format, structure, and temporal and spatial coverage, make merging, or fusing, the data difficult. The NASA Goddard Earth Sciences Data and Information Services Center (GES-DISC) has archived data products from various sensors in different formats, structures, and multi-temporal and spatial scales for ocean, land, and atmosphere. In this investigation, using Earth science data sets from multiple sources, an attempt was made to develop an optimal technique to merge atmospheric products and to provide interactive, online analysis tools for the user community. The merged/fused measurements provide a more comprehensive view of the atmosphere and improve coverage and accuracy compared with a single-instrument dataset. This paper describes ways of merging/fusing several NASA Earth Observing System (EOS) remote-sensing datasets available at GES-DISC. The applicability of various methods for merging total column ozone was investigated, with a view to implementing these methods in Giovanni, the online interactive analysis tool developed by GES-DISC. Ozone data fusion of Moderate Resolution Imaging Spectroradiometer (MODIS) Terra and Aqua Level-3 daily data sets was conducted, and the results were found to provide better coverage. Weighted averaging of the Terra and Aqua data sets, with subsequent interpolation through the remaining gaps using Optimal Interpolation (OI), was also conducted and found to produce better results. Ozone Monitoring Instrument (OMI) total column ozone is reliable and provides better results than the Atmospheric Infrared Sounder (AIRS) and MODIS; nevertheless, the agreement among these instruments is reasonable. The correlation between OMI and AIRS total column ozone is high (0.88), while the correlation between OMI and the MODIS Terra/Aqua fused total column ozone is 0.79.
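A minimal sketch of the weighted-averaging step is given below, assuming two co-gridded daily maps with missing values marked as NaN; the remaining gaps would then be filled by optimal interpolation, which is not shown. The function name and the default weights are hypothetical.

```python
import numpy as np

def weighted_merge(terra, aqua, w_terra=0.5, w_aqua=0.5):
    """Merge two co-gridded fields: weighted average where both have data,
    the single available value where only one does, NaN where neither does."""
    terra, aqua = np.asarray(terra, float), np.asarray(aqua, float)
    merged = np.full(terra.shape, np.nan)
    both = ~np.isnan(terra) & ~np.isnan(aqua)
    only_t = ~np.isnan(terra) & np.isnan(aqua)
    only_a = np.isnan(terra) & ~np.isnan(aqua)
    merged[both] = (w_terra * terra[both] + w_aqua * aqua[both]) / (w_terra + w_aqua)
    merged[only_t] = terra[only_t]
    merged[only_a] = aqua[only_a]
    return merged  # cells missing in both inputs stay NaN for later OI gap-filling
```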

10.
《Ergonomics》2012,55(6):775-797
In a simulated aircraft navigation task, a fusion technique known as triangulation was used to improve the accuracy and on-screen availability of location information from two separate radars. Three experiments investigated whether the reduced cognitive processing required to extract information from the fused environment led to impoverished retention of visual-spatial information. Experienced pilots and students completed various simulated flight missions and were required to make a number of location estimates. Following a retention interval, memory for the locations was assessed. Experiment 1 demonstrated, in an applied setting, that the retention of fused information was problematic, and Experiment 2 replicated this finding under laboratory conditions. Experiment 3 successfully improved the retention of fused information by limiting its availability within the interface, which, it is argued, shifted participants' strategies from over-reliance on the display as an external memory source to more memory-dependent interaction. These results are discussed within the context of intelligent interface design and effective human-machine interaction.

11.
Traditionally, loop nests are fused only when fusion does not violate the data dependences between them. This paper presents a new loop fusion algorithm that is capable of fusing loop nests in the presence of fusion-preventing anti-dependences. All the violated anti-dependences are removed by automatic array copying. As a case study, this aggressive loop fusion strategy is applied to a Jacobi solver. The performance of iterative methods is typically limited by the speed of the memory system. Fusing the two loop nests in the Jacobi solver into one reduces data cache misses and, consequently, improves the performance of both the sequential and parallel versions of the Jacobi program, as validated by our experimental results on an HP AlphaServer SC45 supercomputer.
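To illustrate removing a fusion-preventing anti-dependence by copying, the sketch below fuses the compute and copy-back loop nests of a plain Jacobi iteration: saved copies of the rows that would otherwise be overwritten preserve the old values the fused nest still needs. This is only a Python illustration of the general idea under simplified assumptions, not the authors' compiler algorithm or their implementation.

```python
import numpy as np

def jacobi_unfused(u, u_new):
    """Two separate loop nests: compute into u_new, then copy back into u."""
    n = u.shape[0]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            u_new[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1])
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            u[i, j] = u_new[i, j]

def jacobi_fused(u):
    """One fused loop nest: the anti-dependence on the old values of rows i-1 and i
    is removed by keeping copies of those rows before they are overwritten."""
    n = u.shape[0]
    prev = u[0].copy()
    for i in range(1, n - 1):
        cur = u[i].copy()  # old values of row i, saved before overwriting
        for j in range(1, n - 1):
            u[i, j] = 0.25 * (prev[j] + u[i + 1, j] + cur[j - 1] + cur[j + 1])
        prev = cur

# Hypothetical usage on a small grid.
u = np.random.rand(6, 6)
jacobi_fused(u)
```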

12.
Considerable research has been done on using information from multiple modalities, such as hand gestures, facial gestures or speech, for better interaction between humans and computers, and many promising human-computer interfaces (HCI) have been developed in recent years. However, most current HCI systems have a few drawbacks: firstly, they are highly dependent on the performance of the individual sensors. Secondly, the information fusion process for these sensors tends to ignore the semantic nature of the modalities, which may reinforce or clarify each other over time. Finally, they are not robust enough at representing the imprecise nature of human gestures, since individual gestures are highly ambiguous in themselves. In this paper, we propose an approach for the semantic fusion of different input modalities, based on transferable belief models. We show that this approach allows for a better representation of the ambiguity involved in recognizing gestures. Ambiguity is resolved by combining the beliefs of the individual sensors about the input information to form new extended concepts, based on a pre-defined domain-specific knowledge base represented by conceptual graphs. We apply this technique to a multimodal system consisting of a hand gesture recognition sensor and a brain-computer interface. It is shown that the technique can successfully combine individual gestures obtained from the two sensors to form meaningful concepts and resolve ambiguity. An advantage of this approach is that it remains robust even if one of the sensors is inefficient or has no input. Another important feature is its scalability: more input modalities, such as speech or facial gestures, can be easily integrated into the system at minimal cost to form a comprehensive HCI interface.
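In the transferable belief model, beliefs from two sources are typically combined with the unnormalized conjunctive rule (Dempster's rule without renormalization). A standard statement of that rule, not a detail taken from the paper, is, for basic belief assignments m1 and m2 over a frame Omega and any subset A of Omega:

```latex
m_{1 \cap 2}(A) \;=\; \sum_{\substack{B,\,C \,\subseteq\, \Omega \\ B \cap C \,=\, A}} m_1(B)\, m_2(C).
```

The mass assigned to the empty set measures the conflict between the sources; the classical Dempster rule would instead redistribute it by dividing the remaining masses by one minus that conflict.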

13.
The backbone of aiding inertial navigation systems with satellite navigation receivers is the data fusion of the subsystem outputs. Its conventional mechanization emerged from the classical, necessary aiding of inertial navigation systems and is based on linearized error models. Recent approaches instead directly use the nonlinear kinematic equations of rigid-body motion. These methods lead to a simpler system structure as well as to smaller estimation error variances. To verify this statement, the paper presents a systematization of the different fusion schemes and a comparison of results obtained from post-processed flight test data.

14.
Silicon-glass wafer bonding is realized with silicon hydrophilic fusion bonding technology. Tensile strength testing shows that the bonding strength is large enough for most applications of integrated circuits and transducers. The bonding strengths of 4 in., 525 μm thick #7740 glass to 4 in., 525 μm thick silicon and of 1.5 in., 1000 μm thick #7740 glass to 2 in., 380 μm thick silicon are both larger than 9 MPa, with an annealing temperature of 450°C.

15.
The aim of this study is to develop a new statistical data fusion method that operates on a per-field basis and can be applied before a per-field classification to increase per-field classification accuracy. The developed method is a model-based data fusion method that uses nested mixture distribution modelling and integrates two multiresolution, multispectral image datasets acquired by different imaging sensors operating in different spectral bands, with different spatial resolutions and different numbers of spectral bands. The statistical data fusion process involves two successive steps.

16.
This article presents the information-theoretic feature information interaction, a measure that can describe complex feature dependencies in multivariate settings. According to the theoretical development, feature interactions are more accurate than current bivariate dependence measures, owing to their stable and unambiguous definition. In experiments with artificial and real data, we first compare the empirical dependency estimates of correlation, mutual information and 3-way feature interaction. We then present feature selection and classification experiments that show the superior performance of interactions over bivariate dependence measures for the artificial data; for real-world data this goal is not yet achieved.
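For reference, the 3-way feature interaction mentioned above is commonly defined as interaction information, i.e. the change in the mutual information between two variables caused by conditioning on a third (sign conventions differ between authors); this is the textbook definition rather than a formula quoted from the article:

```latex
I(X;Y;Z) \;=\; I(X;Y \mid Z) \;-\; I(X;Y).
```

Under this convention the quantity is symmetric in X, Y and Z; positive values indicate synergy among the features and negative values indicate redundancy (the interpretation flips under the opposite sign convention).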

17.
This paper investigates the applicability and limitations of combining multi-sensor data through data fusion, to increase the usefulness of the datasets. This study focuses on merging daily mean aerosol optical thickness (AOT), as measured by the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Terra and Aqua satellites, to increase spatial coverage and produce complete fields to facilitate comparison with models and station data. The fusion algorithm used the maximum likelihood (ML) technique to merge the pixel values where available, and then the optimal interpolation method to fill the remaining gaps. The algorithm was applied to a regional AOT subset. The results illustrate that the fusion algorithm can produce complete AOT fields with reasonably good data values and acceptable errors. The cumulative semivariogram (CSV) was found to be sensitive to the spatial distribution and fraction of gap areas and, thus, useful for assessing the sensitivity of the fused data to spatial gaps.
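Under the common assumption of independent, unbiased Gaussian retrieval errors, the maximum likelihood merge of co-located Terra and Aqua AOT values reduces to inverse-variance weighting. The symbols below (tau for AOT, sigma squared for error variance) are introduced here for illustration and are not taken from the paper:

```latex
\hat{\tau} \;=\; \frac{\tau_{T}/\sigma_{T}^{2} \;+\; \tau_{A}/\sigma_{A}^{2}}
                      {1/\sigma_{T}^{2} \;+\; 1/\sigma_{A}^{2}},
\qquad
\sigma_{\hat{\tau}}^{2} \;=\; \left(\frac{1}{\sigma_{T}^{2}} + \frac{1}{\sigma_{A}^{2}}\right)^{-1}.
```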

18.
This paper addresses the problem of track fusion for unordered distributed sensors with unknown measurement noise. A robust Dempster-Shafer (D-S) fusion algorithm is proposed, which comprises three parts: local track estimation, track association, and state fusion. First, a labeling VB-PHD filter is derived to represent target states with track labels and the unknown measurement noises of the local sensors. Next, a heuristic D-S method is proposed to determine the relationship between local tracks and fused tracks, in which the accumulated information is taken into account. Finally, a fusion method is given for the state fusion results, which can fully utilize the local state estimates and the measurement noise information. Simulation results illustrate the high tracking precision and good robustness of the approach compared with traditional methods.

19.
The P-M equation proposed by Perona and Malik can not only generate a scale-space but also preserve edges while smoothing an image. In this paper, we exploit this property to construct a new multiscale decomposition method, by which an image can be decomposed into a sequence of detail images and a base image, and the initial image can be perfectly reconstructed by adding up these decomposed images. This decomposition method is applied to multisensor image fusion. The source images are first decomposed into detail images and a base image. Then, these images are combined according to the given fusion rules. Finally, the fused image is reconstructed by adding up the fused detail images and base image. Compared with conventional methods based on multiscale decomposition, experimental results on multifocus images, visible and infrared images, and medical images demonstrate the superiority of our method in terms of both visual inspection and objective measures.
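For context, the Perona-Malik (P-M) anisotropic diffusion equation referred to above evolves an image I(x, y, t) while inhibiting diffusion across strong edges; the two diffusivity functions shown are the ones originally proposed by Perona and Malik, with K an edge-threshold parameter:

```latex
\frac{\partial I}{\partial t} \;=\; \operatorname{div}\!\big( c(\lVert \nabla I \rVert)\, \nabla I \big),
\qquad
c(s) = e^{-(s/K)^{2}}
\quad \text{or} \quad
c(s) = \frac{1}{1 + (s/K)^{2}}.
```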

20.
In this work, we present a novel spectral-spatial classification framework for hyperspectral images (HSIs) that integrates algebraic multigrid (AMG), hierarchical segmentation (HSEG) and Markov random fields (MRF). The proposed framework makes two main contributions. First, an effective HSI segmentation method is developed by combining an AMG-based marker selection approach with the conventional HSEG algorithm to construct a set of unsupervised segmentation maps at multiple scales. To improve computational efficiency, the fast Fisher-Markov selector (FMS) algorithm is used for feature selection before image segmentation. Second, an improved MRF energy function is proposed for multiscale information fusion (MIF) that considers both spatial and inter-scale contextual information. Experiments were performed on two airborne HSIs to evaluate the performance of the proposed framework in comparison with several popular classification methods. The experimental results demonstrate that the proposed framework provides superior performance in terms of both qualitative and quantitative analysis.

