Similar Articles
1.
A two-pass adaptive filtering algorithm is proposed for cancellation of recurrent interferences such as the heart interference in biomedical signals. In the first pass, an average waveform of one period of the interference is estimated by event-synchronous (QRS-synchronous) averaging of the corrupted signal. In the second pass, an adaptive Schur recursive least squares (RLS) lattice filter cancels the interference, using the event-synchronously repeated average waveform of the interference as an artificial reference signal. One key feature of this approach is that the ECG is used only for QRS synchronization and not directly as a reference signal for adaptive filtering. The proposed algorithm can therefore be applied to interference problems where the ECG and the true interference are almost synchronous but show considerably different waveforms, as is usually the case with heart interference in biomedical signals. Both off-line and real-time implementations of the event-synchronous interference canceller are described. The method is applied to the cancellation of heart interference in magnetoencephalogram (MEG) signals and to the effective isolation of ventricular extrasystoles (VES) in magnetocardiogram (MCG) signals. Experimental results are shown. The new method typically attenuates the R-wave and T-wave interference components by a factor of 30 in amplitude without influencing the MEG events of interest.
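The two-pass idea translates almost directly into code: average the corrupted signal time-locked to the QRS events, tile that average at every event to build an artificial reference, and adaptively subtract it. The sketch below is a minimal illustration, not the paper's implementation; it uses a plain NLMS canceller as a stand-in for the Schur RLS lattice, and the function names and parameters (filter order, step size) are illustrative.

```python
import numpy as np

def qrs_synchronous_average(signal, qrs_idx, length):
    """Pass 1: average the corrupted signal over windows starting at each QRS event."""
    epochs = [signal[i:i + length] for i in qrs_idx if i + length <= len(signal)]
    return np.mean(epochs, axis=0)

def build_reference(avg_beat, qrs_idx, n_samples):
    """Repeat the averaged interference waveform at every QRS event (artificial reference)."""
    ref = np.zeros(n_samples)
    L = len(avg_beat)
    for i in qrs_idx:
        stop = min(i + L, n_samples)
        ref[i:stop] = avg_beat[:stop - i]
    return ref

def nlms_cancel(primary, reference, order=32, mu=0.5, eps=1e-8):
    """Pass 2: adaptive cancellation of the reference from the primary channel
    (NLMS stand-in for the Schur RLS lattice described in the abstract)."""
    w = np.zeros(order)
    out = np.copy(primary)
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]      # current reference tap vector
        y = w @ x                             # estimated interference
        e = primary[n] - y                    # cleaned sample
        w += mu * e * x / (x @ x + eps)       # normalized LMS weight update
        out[n] = e
    return out
```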

2.
We extend the signal space separation (SSS) method to decompose multichannel magnetoencephalographic (MEG) data into regions of interest inside the head. It has been shown that the SSS method can transform MEG data into a signal component generated by neurobiological sources and a noise component generated by external sources outside the head. In this paper, we show that the signal component obtained by the SSS method can be further decomposed, by a simple operation, into signals originating from deep and superficial sources within the brain. This is achieved with a beamspace scheme: a linear transformation that maximizes the power of the source space of interest. The efficiency and accuracy of the algorithm are demonstrated by experiments using both simulated and real MEG data.

3.
This paper describes the theoretical background of a new data-driven approach to encephalographic single-trial (ST) data analysis. Temporally constrained source extraction using sparse decomposition identifies signal topographies that closely match the shape characteristics of a reference signal, one response for each ST. The correlations between these ST topographies are computed for a formal Correlation Matrix Analysis (CMA) based on Random Matrix Theory (RMT). The RMT-CMA provides clusters of similar ST topographies in a completely unsupervised manner. These patterns are then classified into a deterministic set and noise using well-established RMT results. The method is applied to EEG and MEG data of somatosensory evoked responses (SERs), demonstrating its efficacy. The results show that the method can recover brain signals whose time course resembles the reference signal and can follow changes in strength and/or topography over time by simply stepping the reference signal through time.
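A minimal sketch of the RMT step, under the assumption that the single-trial topographies are standardized and their trial-by-trial correlation eigenvalues are compared against the Marchenko-Pastur bound for pure-noise correlations; the paper's exact CMA procedure may differ, and all names here are illustrative.

```python
import numpy as np

def rmt_cma(topos):
    """topos: (n_trials, n_channels) single-trial topographies.
    Correlate trials, then flag eigenvalues of the correlation matrix that exceed
    the Marchenko-Pastur upper bound; those indicate a deterministic (non-noise)
    cluster of similar topographies, the rest is treated as noise."""
    n_trials, n_channels = topos.shape
    z = topos - topos.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    C = (z @ z.T) / n_channels                  # trial-by-trial correlation matrix
    evals, evecs = np.linalg.eigh(C)
    q = n_trials / n_channels
    mp_upper = (1.0 + np.sqrt(q)) ** 2          # Marchenko-Pastur bound for unit-variance noise
    deterministic = evals > mp_upper
    return evals, evecs, deterministic
```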

4.
We propose a new group-theoretical approach to the problem of aligning time events in multichannel signal recordings. Such an alignment is an essential phase in the classification of transients in electroencephalogram/magnetoencephalogram (MEG) signals. A common reference frame is reconstructed by applying a time-translation transformation based on delayed mutual correlation functions of the individual events. The method is applied to MEG data sets recorded from epileptic patients showing paroxysmal interictal discharges.
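The delay estimation underlying the common reference frame can be illustrated with a plain cross-correlation peak search; the group-theoretical machinery of the paper is not reproduced here, and the circular shift used for alignment is only approximate at the signal edges.

```python
import numpy as np

def delay_of(y, x):
    """Samples by which event y lags event x, taken from the peak of their cross-correlation."""
    c = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    return int(np.argmax(c)) - (len(x) - 1)

def align_to_common_frame(events):
    """Time-translate every event onto the frame of the first one."""
    ref = np.asarray(events[0], float)
    aligned = [ref]
    for ev in events[1:]:
        d = delay_of(ev, ref)            # positive: ev occurs d samples later than ref
        aligned.append(np.roll(ev, -d))  # undo the delay (circular shift; edges are approximate)
    return np.array(aligned)
```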

5.
In magnetoencephalography (MEG) and electroencephalography (EEG), independent component analysis is widely applied to separate brain signals from artifact components. A number of methods have been proposed for the automatic or semiautomatic identification of artifact components, most of them based on amplitude statistics of the decomposed MEG/EEG signal. We present a fully automated approach based on amplitude and phase statistics of decomposed MEG signals for the isolation of biological artifacts such as ocular, muscle, and cardiac artifacts (CAs). The performance of different artifact identification measures was investigated. In particular, we show that phase statistics is a robust and highly sensitive measure for identifying strong and weak components attributable to cardiac activity, whereas a combination of different measures is needed to identify artifacts caused by ocular and muscle activity. With the introduction of a rejection performance parameter, we are able to quantify the rejection quality for eye blinks and CAs. We demonstrate on a set of MEG data the good performance of the fully automated procedure for the removal of cardiac, ocular, and muscle artifacts. The new approach allows routine application to clinical measurements with little effect on the brain signals.
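One common phase statistic is the phase-locking value between a decomposed component and a reference ECG channel; the sketch below uses it as an illustrative measure with an arbitrary threshold, which may differ from the exact statistic used in the paper.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(component, reference):
    """Mean resultant length of the instantaneous phase difference between an ICA
    component and a reference (e.g., ECG) channel; values near 1 indicate strong
    phase coupling, as expected for cardiac components."""
    phi_c = np.angle(hilbert(component - component.mean()))
    phi_r = np.angle(hilbert(reference - reference.mean()))
    return np.abs(np.mean(np.exp(1j * (phi_c - phi_r))))

def flag_cardiac_components(sources, ecg, threshold=0.2):
    """Return indices of components (rows of `sources`) whose PLV with the ECG
    exceeds an illustrative threshold."""
    plv = np.array([phase_locking_value(s, ecg) for s in sources])
    return np.where(plv > threshold)[0], plv
```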

6.
A multiresolution framework to MEG/EEG source imaging
A new method based on a multiresolution approach for solving the ill-posed problem of reconstructing brain electrical activity from electroencephalogram (EEG)/magnetoencephalogram (MEG) signals is proposed in a distributed source model. At each step of the algorithm, a regularized solution to the inverse problem is used to constrain the source space on the cortical surface to be scanned at a higher spatial resolution. We present the iterative procedure together with an extension of the ST-maximum a posteriori method [1] that integrates spatial and temporal a priori information into an estimator of the brain electrical activity. Results from EEG in a phantom-head experiment with a real human skull and from real MEG data on a healthy human subject are presented. The performance of the multiresolution method combined with a nonquadratic estimator is compared with commonly used dipolar methods and with the minimum-norm method, with and without multiresolution. In all cases, the proposed approach proved more efficient than the fixed-scale imaging approach, both in computational load and in result quality, for identifying sparse focal patterns of cortical current density.

7.
Automatic Dependent Surveillance-Broadcast (ADS-B) is the surveillance technology promoted by the International Civil Aviation Organization, and an increasing number of aircraft are equipped with ADS-B transceivers. Because ADS-B uses a broadcast scheme and airspace traffic keeps growing, different ADS-B signals will overlap (interleave) with one another; the same also happens between a direct signal and its multipath replicas. Both situations severely affect ADS-B decoding and cause information to be read incorrectly or lost. This paper proposes a single-antenna de-interleaving algorithm based on accumulation and decision: valid pulse-position detection on the single antenna and Hilbert-transform-based overlap detection first yield the start positions of the signals and the positions where they overlap, which gives the relative delay between the two signals; the data are then accumulated and classified with the K-means method, and the resulting bit decisions separate the interleaved signals.
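A loose sketch of the accumulate-and-decide idea, assuming the relative delay between the overlapping copies has already been estimated and ignoring the Mode S pulse-position structure of real ADS-B bits; the chip layout and the two-class K-means used here are illustrative.

```python
import numpy as np

def accumulate_chips(samples, start_positions, chip_len, n_chips):
    """Accumulate the same chip position across repeated, realigned copies of a message
    (e.g., a direct signal and a delayed replica shifted by the estimated delay)."""
    acc = np.zeros(n_chips)
    for s in start_positions:
        for k in range(n_chips):
            acc[k] += samples[s + k * chip_len: s + (k + 1) * chip_len].mean()
    return acc

def kmeans_bit_decision(acc, n_iter=50):
    """Two-class K-means on accumulated chip amplitudes: the high-amplitude cluster
    is read as bit 1, the low-amplitude cluster as bit 0."""
    lo, hi = acc.min(), acc.max()
    for _ in range(n_iter):
        assign = np.abs(acc - hi) < np.abs(acc - lo)          # True -> closer to the high centroid
        hi_new = acc[assign].mean() if assign.any() else hi
        lo_new = acc[~assign].mean() if (~assign).any() else lo
        if np.isclose(hi, hi_new) and np.isclose(lo, lo_new):
            break
        hi, lo = hi_new, lo_new
    return (np.abs(acc - hi) < np.abs(acc - lo)).astype(int)
```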

8.
Independent component analysis (ICA) is a technique that extracts statistically independent components from a set of measured signals. The technique enjoys numerous applications in biomedical signal analysis, especially in the analysis of electromagnetic (EM) brain signals. Standard implementations of ICA are restrictive mainly because of the square mixing assumption: for signal recordings with large numbers of channels, the large number of extracted sources makes the subsequent analysis laborious and highly subjective. There are many instances in neurophysiological analysis where there is strong a priori information about the signals being sought; temporally constrained ICA (cICA) can extract signals that are statistically independent yet constrained to be similar to some reference signal that incorporates such a priori information. We demonstrate this method on a synthetic dataset and on a number of artifactual waveforms identified in multichannel recordings of EEG and MEG. cICA repeatedly converges to the desired component within a few iterations, and subjective analysis shows the waveforms to be of the expected morphologies and with realistic spatial distributions. This paper shows that cICA can be applied with great success to EM brain signal analysis, with an initial application in automating artifact extraction in EEG and MEG.
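As a simplified stand-in for cICA, which optimizes independence subject to a closeness constraint to the reference, the sketch below runs ordinary FastICA and then selects the component most correlated with the reference; it illustrates the role of the reference signal, not the constrained optimization itself, and the component count is an arbitrary parameter.

```python
import numpy as np
from sklearn.decomposition import FastICA

def extract_reference_like_component(X, reference, n_components=20):
    """X: (n_channels, n_samples) recording; reference: (n_samples,) a priori template.
    Decompose with plain FastICA, then keep the component whose time course is most
    correlated (in absolute value) with the reference."""
    ica = FastICA(n_components=n_components, random_state=0)
    S = ica.fit_transform(X.T).T                           # components x samples
    r = np.array([abs(np.corrcoef(s, reference)[0, 1]) for s in S])
    best = int(np.argmax(r))
    return S[best], ica.mixing_[:, best], r[best]          # time course, topography, similarity
```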

9.
We introduce a spatial filtering method in the spherical harmonics domain for constraining magnetoencephalographic (MEG) multichannel measurements to any user-specified spherical region of interest (ROI) inside the head. The method relies on a linear transformation of the signal space separation inner coefficients that represent the MEG signal generated by sources located inside the head. The spatial filtering is achieved by constructing a spherical harmonics basis vector that depends on the center of the targeted ROI, and it does not require any discrete division of the head space into grids, as the traditional MEG spatial filtering approaches do. The validity and performance of the method are demonstrated with both simulated and actual bilateral auditory-evoked data.

10.
There have been tremendous advances in our ability to produce images of human brain function. Applications of functional brain imaging extend from improving our understanding of the basic mechanisms of cognitive processes to better characterization of pathologies that impair normal function. Magnetoencephalography (MEG) and electroencephalography (EEG) localize neural electrical activity using noninvasive measurements of external electromagnetic signals. Among the available functional imaging techniques, MEG and EEG uniquely have temporal resolutions below 100 ms. This temporal precision allows us to explore the timing of basic neural processes at the level of cell assemblies. MEG/EEG source localization draws on a wide range of signal processing techniques including digital filtering, three-dimensional image analysis, array signal processing, image modeling and reconstruction, blind source separation, and phase synchrony estimation. We describe the underlying models currently used in MEG/EEG source estimation and the various signal processing steps required to compute these sources. In particular, we describe methods for computing the forward fields for known source distributions, and parametric and imaging-based approaches to the inverse problem.
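Among the imaging-based inverse approaches surveyed here, the regularized minimum-norm estimate is the simplest to write down; a minimal sketch with an illustrative regularization parameter follows (the lead-field matrix G is assumed given by a forward model, y = G x + noise).

```python
import numpy as np

def minimum_norm_inverse(G, y, lam=1e-2):
    """Imaging-based inverse: Tikhonov-regularized minimum-norm estimate
    x_hat = G^T (G G^T + lam I)^-1 y for a lead-field matrix G (channels x sources)
    and a measurement vector y (channels,)."""
    n_ch = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_ch), y)
```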

11.
Electroencephalography (EEG) and magnetoencephalography (MEG) measurements are used to localize neural activity by solving the electromagnetic inverse problem. In this paper, we propose a new approach based on the particle filter implementation of the probability hypothesis density filter (PF-PHDF) to automatically estimate the unknown number of time-varying neural dipole sources and their parameters using EEG/MEG measurements. We also propose an efficient sensor scheduling algorithm to adaptively configure EEG/MEG sensors at each time step to reduce total power consumption. We demonstrate the improved performance of the proposed algorithms using simulated neural activity data. We map the algorithms onto a Xilinx Virtex-5 field-programmable gate array (FPGA) platform and show that it only takes 10 ms to process 100 data samples using 6,400 particles. Thus, the proposed system can support real-time processing of an EEG/MEG neural activity system with a sampling rate of up to 10 kHz.

12.
An important class of experiments in functional brain mapping involves collecting pairs of data corresponding to separate "Task" and "Control" conditions. The data are then analyzed to determine what activity occurs during the Task experiment but not in the Control. Here we describe a new method for processing paired magnetoencephalographic (MEG) data sets using our recursively applied and projected multiple signal classification (RAP-MUSIC) algorithm. In this method, the signal subspace of the Task data is projected onto the orthogonal complement of the Control data signal subspace to obtain a subspace that describes spatial activity unique to the Task. A RAP-MUSIC localization search is then performed on these projected data to localize the sources that are active in the Task but not in the Control data. In addition to dipolar sources, effective blocking of more complex sources, e.g., multiple synchronously activated dipoles or synchronously activated distributed source activity, is possible since these topographies are well described by the Control data signal subspace. Unlike previously published methods, the proposed method is shown to be effective in situations where the time series associated with Control and Task activity possess significant cross-correlation. The method also allows for straightforward determination of the estimated time series of the localized target sources. A multiepoch MEG simulation and a phantom experiment demonstrate the ability of this method to successfully identify sources and their time series in the Task data.
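A minimal sketch of the projection-then-scan idea: estimate the Control signal subspace, project the Task data onto its orthogonal complement, and score candidate lead fields by subspace correlation. The recursive RAP part of RAP-MUSIC is omitted, and the subspace ranks are left as user inputs.

```python
import numpy as np

def orth_complement_projector(U):
    """Projector onto the orthogonal complement of the column space of U (orthonormal columns)."""
    return np.eye(U.shape[0]) - U @ U.T

def task_unique_subspace(Y_task, Y_ctrl, rank_ctrl, rank_task):
    """Block the Control signal subspace, then estimate the Task-unique signal subspace."""
    Uc, _, _ = np.linalg.svd(Y_ctrl, full_matrices=False)
    P = orth_complement_projector(Uc[:, :rank_ctrl])
    Up, _, _ = np.linalg.svd(P @ Y_task, full_matrices=False)
    return P, Up[:, :rank_task]

def music_scan(leadfields, P, Us):
    """Subspace correlation of each projected candidate lead field with the Task-unique
    signal subspace; peaks indicate sources active in the Task but not in the Control."""
    scores = []
    for l in leadfields:                      # l: (n_channels,) topography of one candidate dipole
        lp = P @ l
        scores.append(np.linalg.norm(Us.T @ lp) / (np.linalg.norm(lp) + 1e-12))
    return np.array(scores)
```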

13.
Novel neuroimaging techniques have provided unprecedented information on the structure and function of the living human brain. Multimodal fusion of data from different sensors promises to radically improve this understanding, yet optimal methods have not been developed. Here, we demonstrate a novel method for combining multichannel signals. We show how this method can be used to fuse signals from the magnetometer and gradiometer sensors used in magnetoencephalography (MEG), and through extensive experiments using simulation, head phantom and real MEG data, show that it is both robust and accurate. This new approach works by assuming that the lead fields have multiplicative error. The criterion to estimate the error is given within a spatial filter framework such that the estimated power is minimized in the worst case scenario. The method is compared to, and found better than, existing approaches. The closed-form solution and the conditions under which the multiplicative error can be optimally estimated are provided. This novel approach can also be employed for multimodal fusion of other multichannel signals such as MEG and EEG. Although the multiplicative error is estimated based on beamforming, other methods for source analysis can equally be used after the lead-field modification.

14.
To reduce physiological artifacts in magnetoencephalographic (MEG) and electroencephalographic recordings, a number of methods have been applied in the past, such as principal component analysis, signal-space projection, regression using secondary information, and independent component analysis. The latter has become popular as it does not impose constraints such as orthogonality between artifact and signal or the need for a priori information. Applying the time-delayed decorrelation algorithm to raw data from a visual-stimulation MEG experiment, we show that several of the independent components can be attributed to the cardiac artifact. Calculating an average cardiac activity shows that physiologically different excitation states of the heart produce similar field distributions in the MEG sensor system. This is equivalent to differing spectral properties of cardiac field distributions in the raw data. As a consequence, the algorithm combines, e.g., the R peak and the T wave of the cardiac cycle into a single component, and a one-to-one assignment of each independent component to a physiological source is not justified in this case. To improve the signal quality of visually evoked fields, the multidimensional cardiac artifact subspace is suppressed from the data. To assess the preservation of the evoked signal after artifact suppression, a geometrical and a temporal measure are introduced. The suppression of cardiac and alpha-wave artifacts allows, in our experimental setting, the reduction of the number of epochs to one half while preserving the visually evoked signal.
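Suppressing a multidimensional artifact subspace amounts to projecting the data onto the orthogonal complement of the cardiac mixing-matrix columns; a minimal sketch follows, which may differ in detail from the paper's procedure.

```python
import numpy as np

def suppress_artifact_subspace(data, mixing_cols):
    """Project multichannel data (channels x samples) onto the orthogonal complement of
    the multidimensional cardiac-artifact subspace spanned by the mixing-matrix columns
    of the components attributed to the heart (e.g., R-peak and T-wave components
    found by time-delayed decorrelation)."""
    A = np.asarray(mixing_cols)                    # channels x n_cardiac_components
    P = np.eye(A.shape[0]) - A @ np.linalg.pinv(A) # orthogonal-complement projector
    return P @ data
```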

15.
Multiple Peer-to-Peer Communications Using a Network of Relays
We consider an ad hoc wireless network consisting of d source-destination pairs communicating, in a pairwise manner, via R relaying nodes. The relay nodes wish to cooperate, through a decentralized beamforming algorithm, in order to establish all the communication links from each source to its respective destination. Our communication strategy consists of two steps. In the first step, all sources transmit their signals simultaneously. As a result, each relay receives a noisy faded mixture of all source signals. In the second step, each relay transmits an amplitude- and phase-adjusted version of its received signal; that is, each relay multiplies its received signal by a complex coefficient and retransmits the resulting signal. Our goal is to obtain these complex coefficients (beamforming weights) by minimizing the total relay transmit power while the signal-to-interference-plus-noise ratios (SINRs) at the destinations are guaranteed to be above certain predefined thresholds. Although such a power minimization problem is not convex, we use semidefinite relaxation to turn it into a semidefinite programming (SDP) problem, which can be solved efficiently using interior point methods. Our numerical examples reveal that, for high network data rates, our space-division multiplexing scheme requires significantly less total relay transmit power than other orthogonal multiplexing schemes, such as time-division multiple access.

16.
The aim of this study was to assess whether independent component analysis (ICA) could be valuable for removing power-line noise, cardiac artifacts, and ocular artifacts from magnetoencephalogram (MEG) background activity. The MEGs were recorded from 11 subjects with a 148-channel whole-head magnetometer. We used a statistical criterion to estimate the number of independent components. A robust ICA algorithm then decomposed the MEG epochs, and several methods were applied to detect those artifacts. The whole process had previously been tested on synthetic data. We found that the line-noise components could easily be detected from their frequency spectrum. In addition, the ocular artifacts could be identified by their frequency characteristics and scalp topography. Moreover, the cardiac artifact was better recognized by its skewness value than by its kurtosis. Finally, the MEG signals were compared before and after artifact rejection to evaluate our method.
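The detection statistics can be sketched per component: a spectral peak near the mains frequency flags line noise, while skewness and kurtosis flag the cardiac artifact; the measures below are illustrative and the decision thresholds are left to the user.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def artifact_scores(sources, fs):
    """Per-component statistics for artifact screening. `sources` holds ICA components
    as rows, `fs` is the sampling rate in Hz."""
    scores = []
    for s in sources:
        freqs = np.fft.rfftfreq(len(s), d=1.0 / fs)
        spec = np.abs(np.fft.rfft(s - s.mean()))
        scores.append({
            "peak_freq_hz": float(freqs[np.argmax(spec)]),  # ~50/60 Hz suggests power-line noise
            "skewness": float(skew(s)),                     # strongly skewed components -> cardiac
            "kurtosis": float(kurtosis(s)),
        })
    return scores
```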

17.
Combined MEG and EEG source imaging by minimization of mutual information
Though very frequently assumed, the need for joint processing of simultaneous magnetoencephalography (MEG) and electroencephalography (EEG) recordings for functional brain imaging has never been clearly demonstrated. The latest generation of MEG instruments, however, allows the simultaneous recording of brain magnetic fields and electrical potentials on the scalp. The general concern regarding the fusion of MEG and EEG data is that the drawbacks of one modality will systematically spoil the performance of the other without any corresponding improvement; this is the case, for instance, for the estimation of deeper or radial sources with MEG. In this paper, we propose a method for cooperative processing of MEG and EEG in a distributed source model. First, the respective performance of each modality for estimating every dipole in the source pattern is evaluated using a conditional entropy criterion. Then, the algorithm preprocesses the MEG and EEG gain matrices to minimize the mutual information between these two transfer functions by selectively weighting the MEG and EEG lead fields. This new combined EEG/MEG modality brings major improvements to the localization of active sources, together with reduced sensitivity to perturbations in the data.

18.
An improvement to existing microphone-array speech enhancement algorithms based on automatic beamforming is proposed. The signals collected by the microphones are delay-compensated and summed using ABF (adaptive beamforming) to remove weakly coherent and incoherent noise; an eigenspace approximation method is then used to further remove the residual noise. A model-order selection method is applied to the eigenspace-decomposition-based speech enhancement: its "maximum stability" principle makes the order of the effective signal model largely insensitive to the SNR of the original signal, removing the arbitrariness and instability of conventional order selection. Simulation results show that combining adaptive beamforming with eigenspace approximation achieves good denoising performance.
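The delay-compensate-and-sum stage can be sketched as follows; the eigenspace post-filtering and the order-selection rule are not shown. Delays are estimated against a reference microphone by cross-correlation, and the circular shifts are approximate at the signal edges.

```python
import numpy as np

def estimate_delays(mics, ref_idx=0):
    """Per-channel delays (in samples) relative to the reference microphone,
    from cross-correlation peaks."""
    ref = mics[ref_idx] - mics[ref_idx].mean()
    delays = []
    for ch in mics:
        c = np.correlate(ch - ch.mean(), ref, mode="full")
        delays.append(int(np.argmax(c)) - (len(ref) - 1))
    return delays

def delay_and_sum(mics, delays):
    """Delay-compensate each channel and average, reinforcing the coherent speech
    while averaging down weakly coherent and incoherent noise."""
    aligned = [np.roll(ch, -d) for ch, d in zip(mics, delays)]
    return np.mean(aligned, axis=0)
```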

19.
Locally monotonic diffusion
Anisotropic diffusion affords an efficient, adaptive signal smoothing technique that can be used for signal enhancement, signal segmentation, and signal scale-space creation. This paper introduces a novel partial differential equation (PDE)-based diffusion method for generating locally monotonic signals. Unlike previous diffusion techniques that diverge or converge to trivial signals, locally monotonic (LOMO) diffusion converges rapidly to well-defined LOMO signals of the desired degree. The property of local monotonicity allows both slow and rapid signal transitions (ramp and step edges) while excluding outliers due to noise. In contrast with other diffusion methods, LOMO diffusion does not require an additional regularization step to process a noisy signal and uses no ad hoc thresholds or parameters. In the paper, we develop the LOMO diffusion technique and establish several salient properties, including stability and a characterization of the root signals. The convergence of the algorithm is well behaved (nonoscillatory) and is independent of the signal length, in contrast with the median filter. A special case of LOMO diffusion is identical to the optimal solution achieved via regression. Experimental results validate the claim that LOMO diffusion can produce denoised LOMO signals with low error using less computation than the median-order-statistic approach.

20.
This paper presents an analysis of the performance of the prewhitening beamformer when applied to magnetoencephalography (MEG) experiments involving dual (task and control) conditions. We first analyze the method's robustness to two types of violations of the prerequisites for the prewhitening method that may arise in real-life two-condition experiments. In one type of violation, some sources exist only in the control condition but not in the task condition. In the other, some signal sources exist in both the control and the task conditions and change intensity between the two. Our analysis shows that the prewhitening method is very robust to these nonideal conditions. We also present a theoretical analysis showing that the prewhitening method is considerably insensitive to overestimation of the signal-subspace dimensionality; therefore, the prewhitening beamformer does not require accurate estimation of the signal-subspace dimension. The results of our theoretical analyses are validated in numerical experiments and in experiments using a real MEG data set obtained during self-paced hand movements.
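A minimal sketch of a prewhitened minimum-variance beamformer, assuming a control-period covariance C_ctrl and a task-period covariance R_task are available; the signal-subspace handling analyzed in the paper is not reproduced, and the regularization constant is illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def prewhitened_beamformer_power(R_task, C_ctrl, leadfield):
    """Prewhiten the task-period covariance with the control-period covariance and
    return a minimum-variance beamformer output power for one candidate source.
    leadfield: (n_channels,) topography of the candidate source."""
    W = np.real(inv(sqrtm(C_ctrl)))               # whitening matrix from the control condition
    Rw = W @ R_task @ W                           # prewhitened task covariance
    lw = W @ leadfield                            # prewhitened lead field
    Rw_inv = inv(Rw + 1e-10 * np.eye(Rw.shape[0]))
    return float(1.0 / (lw @ Rw_inv @ lw))        # unit-gain minimum-variance output power
```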
