Similar Articles
20 similar articles found (search time: 46 ms)
1.
Scalp electric potentials (electroencephalogram; EEG) arise from the impressed current density generated by cortical pyramidal neurons undergoing post-synaptic processes. EEG neuroimaging consists of estimating the cortical current density from scalp recordings. We report a solution to this inverse problem that attains exact localization: exact low-resolution brain electromagnetic tomography (eLORETA). This non-invasive method yields high time-resolution intracranial signals that can be used for assessing functional dynamic connectivity in the brain, quantified by coherence and phase synchronization. However, these measures are non-physiologically high because of volume conduction and low spatial resolution. We present a new method that solves this problem by decomposing the measures into instantaneous and lagged components, the lagged part having an almost purely physiological origin.
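The instantaneous/lagged decomposition can be sketched numerically: squared coherence is estimated from segment-averaged cross-spectra, and the lagged component is the part that cannot be explained by the zero-lag (instantaneous) term that volume conduction inflates. The sketch below is a minimal stand-in assuming Welch-style averaging with rectangular windows; it is not the authors' eLORETA implementation, and all names are illustrative.

```python
import numpy as np

def coherence_components(x, y, nperseg=256):
    """Split squared coherence into ordinary and lagged parts
    (after the instantaneous/lagged decomposition idea)."""
    n = (len(x) // nperseg) * nperseg
    X = np.fft.rfft(x[:n].reshape(-1, nperseg), axis=1)
    Y = np.fft.rfft(y[:n].reshape(-1, nperseg), axis=1)
    Sxx = np.mean(np.abs(X) ** 2, axis=0)          # auto-spectra, segment-averaged
    Syy = np.mean(np.abs(Y) ** 2, axis=0)
    Sxy = np.mean(X * np.conj(Y), axis=0)          # cross-spectrum, segment-averaged
    coh2 = np.abs(Sxy) ** 2 / (Sxx * Syy)          # ordinary squared coherence
    # lagged part: imaginary cross-spectrum, normalized after removing the real (zero-lag) term
    lagged = np.imag(Sxy) ** 2 / (Sxx * Syy - np.real(Sxy) ** 2)
    return coh2, lagged
```

For two channels driven by the same source with no delay, the ordinary coherence is high but the lagged part stays near zero; a genuine transmission delay pushes the lagged part up.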

2.
Localizing the sources of electrical activity in the brain from electroencephalographic (EEG) data is an important tool for noninvasive study of brain dynamics. Generally, source localization involves a high-dimensional inverse problem that has an infinite number of solutions and thus requires additional constraints to yield a unique solution. In this article, we propose a novel method for EEG source localization. The proposed method is based on dividing the cerebral cortex of the brain into a finite number of "functional zones" corresponding to unitary functional areas in the brain. To specify the sparsity profile of human brain activity more concisely, the proposed approach groups the electrical current dipoles inside each functional zone. In this article, we investigate the use of Brodmann's areas as the functional zones, while sparse Bayesian learning is used to perform sparse approximation. Numerical experiments are conducted on a realistic head model obtained by segmenting MRI images of the head; it includes four major compartments, namely scalp, skull, cerebrospinal fluid (CSF), and brain, with relative conductivity values. Three different electrode setups are tested in the numerical experiments. The results demonstrate that the proposed approach is quite promising for solving the EEG source localization problem. In a noiseless environment with 71 electrodes, the proposed method was found to locate up to six simultaneously active sources with accuracy above 70%.
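Sparse Bayesian learning itself can be sketched with the classic EM / automatic-relevance-determination updates for y = A x + noise, where each source gets a variance hyperparameter γ_i that is driven to zero when the source is inactive. This is the generic ungrouped SBL iteration, not the functional-zone grouping proposed in the article; names and settings are illustrative.

```python
import numpy as np

def sbl(A, y, lam=1e-3, n_iter=100):
    """Minimal sparse Bayesian learning (EM / ARD) for y = A x + noise.
    gamma[i] is a per-source prior variance that shrinks toward zero
    for sources that do not help explain the data."""
    m, n = A.shape
    gamma = np.ones(n)
    for _ in range(n_iter):
        G = A * gamma                                    # A @ diag(gamma)
        Sigma_y = lam * np.eye(m) + G @ A.T              # marginal data covariance
        x = gamma * (A.T @ np.linalg.solve(Sigma_y, y))  # posterior mean of x
        # diagonal of the posterior covariance of x
        S = np.linalg.solve(Sigma_y, A)
        post_var = gamma - gamma**2 * np.einsum('ij,ji->i', A.T, S)
        gamma = x**2 + post_var                          # EM update of the hyperparameters
    return x
```

Grouping, as the article proposes, would tie a single γ to all dipoles in a zone instead of one γ per dipole.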

3.
Brain source imaging based on EEG aims to reconstruct the neural activities producing the scalp potentials. This includes solving the forward and inverse problems. The aim of the inverse problem is to estimate the activity of the brain sources from the measured data and the leadfield matrix computed in the forward step. Spatial filtering, also known as beamforming, is an inverse method that reconstructs the time course of a source at a particular location by weighting and linearly combining the sensor data. In this paper, we introduce a temporal assumption about the time course of the source, namely sparsity, into the Linearly Constrained Minimum Variance (LCMV) beamformer. This assumption is reasonable because not all brain sources are active all the time: epileptic spikes, for example, and some experimental protocols, such as electrical stimulation of a peripheral nerve, produce activity that is sparse in time. The sparse beamformer is developed by incorporating L1-norm regularization of the beamformer output into the cost function used to obtain the filter weights. We call this new beamformer SParse LCMV (SP-LCMV). We compared the performance of SP-LCMV with that of LCMV for both superficial and deep sources with different amplitudes using synthetic EEG signals. We also compared them in localizing and reconstructing sources underlying electrical median nerve stimulation. Results show that the proposed sparse beamformer can enhance reconstruction of sparse sources, especially sources with high-amplitude spikes.
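The standard LCMV filter the authors start from has a closed form: for a leadfield column l and data covariance C, w = C^-1 l (l^T C^-1 l)^-1, which passes the source at that location with unit gain while minimizing output variance. The sketch below implements that classic filter, plus a simple soft-threshold on the output as an illustrative stand-in for an L1 penalty; the actual SP-LCMV folds the L1 regularization into the weight estimation itself.

```python
import numpy as np

def lcmv_weights(C, L, reg=1e-6):
    """Classic LCMV weights w = C^-1 L (L^T C^-1 L)^-1 for leadfield L (m x k)."""
    m = C.shape[0]
    Cr = C + reg * np.trace(C) / m * np.eye(m)   # diagonal loading for stability
    Cinv_L = np.linalg.solve(Cr, L)
    return Cinv_L @ np.linalg.inv(L.T @ Cinv_L)

def soft_threshold(s, t):
    """Shrink small beamformer outputs toward zero (illustrative L1-style step)."""
    return np.sign(s) * np.maximum(np.abs(s) - t, 0.0)
```

The unit-gain constraint w^T l = 1 holds by construction, so the reconstructed time course keeps the source's amplitude while suppressing interference.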

4.
In this paper, we evaluate the performance of the block sparse Bayesian learning (BSBL) method for EEG source localization. By exploiting the internal block structure, BSBL solves the ill-posed inverse problem more efficiently than methods that do not consider block structure. Simulation experiments were conducted on a realistic head model obtained by segmentation of MRI images of the head. Two definitions of blocks were considered: Brodmann areas and automated anatomical labeling (AAL). The experiments were performed both with and without noise, at six noise levels with SNR values from 5 dB to 30 dB in 5 dB increments. The evaluation reveals several potential findings. First, BSBL is more likely to produce better source localization than sparse Bayesian learning (SBL); however, this holds only up to a limited number of simultaneously active areas. Experimental results show that for a 71-channel electrode setup, BSBL outperforms SBL for up to three simultaneously active blocks; with four or more simultaneously active blocks, SBL turns out to be marginally better, and the difference between them is statistically insignificant. Second, different anatomical block structures such as Brodmann areas or AAL do not seem to produce any significant difference in BSBL-based EEG source localization. Third, even when the block partitions are not known exactly, BSBL ensures better localization than SBL as long as block structure persists in the signal. © 2017 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 27, 46–56, 2017

5.
Epilepsy is a central nervous system disorder in which brain activity becomes abnormal. Electroencephalogram (EEG) signals, as recordings of brain activity, have been widely used for epilepsy recognition. To study epileptic EEG signals and develop artificial intelligence (AI)-assisted recognition, a multi-view transfer learning algorithm based on least squares regression (MVTL-LSR) is proposed in this study. Compared with most existing multi-view transfer learning algorithms, MVTL-LSR has two merits: (1) traditional transfer learning algorithms leverage knowledge from different sources, which poses a significant risk to data privacy; we therefore develop a knowledge transfer mechanism that protects the security of source-domain data while maintaining performance. (2) When utilizing multi-view data, we embed view weighting and manifold regularization into the transfer framework to measure the views' strengths and weaknesses and to improve generalization ability. In the experimental studies, 12 different simulated multi-view & transfer scenarios are constructed from epileptic EEG signals licensed and provided by the University of Bonn, Germany. Extensive experimental results show that MVTL-LSR outperforms the baselines. The source code will be available on .
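The least-squares-regression backbone of such methods reduces to ridge-regularized linear regression on ±1 labels. The sketch below shows only that building block; it omits everything specific to MVTL-LSR (view weighting, manifold regularization, privacy-preserving transfer), and all names are illustrative.

```python
import numpy as np

def lsr_fit(X, y, lam=1.0):
    """Ridge-regularized least squares: w = (X^T X + lam I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def lsr_predict(X, w):
    """Classify by the sign of the linear score."""
    return np.sign(X @ w)
```

Fitting ±1 labels by regularized least squares is the classic LSR classifier; multi-view variants fit one such regressor per view and combine them with learned view weights.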

6.
Electroencephalography (EEG) source localization of brain activity is of high diagnostic value. This work aims to improve the low-spatial-resolution scalp EEG measurement through noninvasive numerical procedures. An image-based boundary element method (BEM) is developed to reconstruct cortical brain potential distribution from the scalp input. The developed BEM circumvents a practical challenge of linking scan images to model-based computation by translating the scan surface tessellation directly into mesh discretization for the BEM. Related issues, such as the numerical regularization of the ill-posed inverse problem, which are crucial to achieving reliable solutions, are discussed. The numerical studies show that the developed BEM can effectively handle the potential reconstruction on a detailed brain surface from blurry scalp potential input, and may become a promising tool to aid clinical diagnosis of brain-related problems.
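Regularization of this kind of ill-posed inverse problem is most easily seen through the SVD: Tikhonov regularization damps the small singular values that would otherwise amplify measurement noise. A generic sketch, not the paper's BEM-specific scheme; the regularization parameter alpha is a free choice:

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """x = argmin ||A x - b||^2 + alpha^2 ||x||^2, computed via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s / (s**2 + alpha**2)   # filter factors: ~1/s for large s, -> 0 for tiny s
    return Vt.T @ (f * (U.T @ b))
```

The filter factors f_i = s_i / (s_i^2 + alpha^2) behave like 1/s_i for s_i much larger than alpha and go to zero for s_i much smaller than alpha, which is what stabilizes the reconstruction.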

7.
Localizing brain neural activity using the electroencephalography (EEG) neuroimaging technique is attracting increasing interest from neuroscience researchers and the medical community, because brain source localization has a variety of applications in diagnosing brain disorders. The problem is ill-posed in nature, since an infinite number of source configurations can produce the same potential at the head surface. Recently, a technique based on a Bayesian framework, called multiple sparse priors (MSP), was proposed as a solution. MSP develops the solution for source localization using the current densities associated with dipoles, expressed in terms of a prior source covariance matrix and a sensor covariance matrix. It then maximizes a free-energy cost function, under the assumption of a fixed number of hyperparameters, or patches, to obtain the elements of the prior source covariance matrix. This work aims to further enhance the free-energy maximization of MSP by considering a variable number of patches, leading to better estimation of brain sources in terms of localization error. The performance of the modified MSP with a variable number of patches is compared with the original MSP using simulated and real EEG data. The results show a significant improvement in localization error.

8.
陈果, 吕俊芳, 李静. 《计测技术》 (Metrology & Measurement Technology), 2003, (2): 1-3, 13
When a neurosurgeon surgically removes the lesion responsible for a brain disorder (the lesion can be regarded as a signal source), the lesion must first be localized precisely. This paper discusses the origin of EEG signals, the importance of EEG source localization, and methods for localizing EEG signal sources.

9.
This research investigated the imaging quality of two important methods widely used in electromagnetic inverse scattering problems. The two algorithms, time reversal (TR) and the linear sampling method (LSM), were compared in terms of point-imaging resolution and a correlation indicator of image quality. Comparisons were made in single- and multifrequency modes for 2D scenarios in free space. The comparisons revealed that the resolution of TR is much better than that of LSM. To compare the total reconstructed images, several cases were considered in order to reach a comprehensive conclusion; these simulations were based on experimental data. In this case, the comparisons showed that in terms of the correlation indicator, LSM surpasses TR.

10.
Electroencephalography (EEG) occupies an important place for studying human brain activity in general, and epileptic processes in particular, with appropriate time resolution. Scalp EEG or intracerebral EEG signals recorded in patients with drug-resistant partial epilepsy convey important information about epileptogenic networks that must be localized and understood prior to subsequent therapeutic procedures. However, this information, often subtle, is 'hidden' in the signals. It is precisely the role of signal processing to extract this information and to put it into a 'coherent and interpretable picture' that can participate in the therapeutic strategy. Nowadays, the panel of available methods is very wide depending on the objectives such as, for instance, the detection of transient epileptiform events, the detection and/or prediction of seizures, the recognition and/or the classification of EEG patterns, the localization of epileptic neuronal sources, the characterization of neural synchrony, the determination of functional connectivity, among others. The intent of this paper is to focus on a specific category of methods providing relevant information about epileptogenic networks from the analysis of spatial properties of EEG signals in the time and frequency domain. These methods apply to either interictal or ictal recordings and share the common objective of localizing the subsets of brain structures involved in both types of paroxysmal activity. Most of these methods were developed by our group and are routinely used during pre-surgical evaluation. Examples are detailed. Results, as well as limitations of the methods, are also discussed.

11.
We evaluate the use of linear and nonlinear inverse algorithms (maximum entropy method, low-resolution electromagnetic tomography, L1 and L2 norm methods) in the analysis of magnetic flux leakage (MFL) measurements commonly used for the detection of flaws and irregularities in gas and oil pipelines. We employed MFL data from a pipe with well-defined artificial surface-breaking flaws at the internal and external wall. Except for low-resolution electromagnetic tomography, all algorithms show, on average, similar accuracy in flaw extent estimation. Maximum entropy and the L1 norm tend to yield better results for smaller flaws, while the L2 norm performs slightly better for larger flaws. The errors of flaw location estimation are comparable for the maximum entropy and L2 norm algorithms. The L1 norm performs worse for flaws situated on the internal pipe wall. Linear methods (L2 norm) are easier to implement and require less computation time than nonlinear methods (maximum entropy method, L1 norm). In conclusion, inverse algorithms potentially provide a powerful means for the detection and characterization of flaws in MFL data.

12.
Through simulation studies on a realistic head model, the finite difference method was used to compute the scalp potential distributions produced by EEG sources located in deep gray matter and in the superficial cortex, and the effects of two different inhomogeneous conductivity models of white matter on the EEG forward problem were examined. The simulation results show that the deeper the EEG source lies in the brain, the more the scalp potential distribution is affected by white matter inhomogeneity. Anisotropic, inhomogeneous white matter conductivity also has some influence on the scalp potential distribution.

13.
This work focuses on interpolation methods proposed as solutions to the EEG source localization problem. First, a low-pass and a high-pass filter were applied to the EEG signal to remove artifacts. Then, classical interpolation techniques such as three-dimensional (3D) K-nearest neighbor and 3D spline were implemented. The major contribution of this article is a new interpolation method, the 3D multiquadratic technique, based on the Euclidean distances between the electrodes. The Euclidean distance was then replaced by the corresponding arc length to obtain a 3D spherical multiquadratic interpolation. Based on EEG recordings from 19 electrodes mounted on the scalp, these interpolation methods (3D K-nearest neighbor, 3D spline, 3D multiquadratic, and spherical multiquadratic) were applied to recordings of 15 healthy subjects at rest with eyes closed. The aim of EEG interpolation is to maximize the spatial resolution of EEG mapping by predicting the brain-activity distribution at 109 virtual points located on the scalp surface. The different interpolation methods were evaluated by the mean normalized root mean squared error (NRMSE) and processing time. The results showed that the multiquadratic and 3D spline interpolation methods gave the minimum NRMSE, but the multiquadratic method required the least processing time compared with the 3D K-nearest neighbor, 3D spline, and 3D spherical multiquadratic methods. Finally, spectral density variation maps of the different cerebral waves (delta, theta, alpha, and beta) with 128 electrodes were generated by applying the fast Fourier transform (FFT). © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 191–198, 2015
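A multiquadric interpolant is a radial-basis-function fit with kernel phi(r) = sqrt(r^2 + c^2) over inter-electrode distances (the spherical variant substitutes arc length for the Euclidean r). A minimal 3D sketch with an illustrative shape parameter c; function and variable names are assumptions, not the article's code:

```python
import numpy as np

def multiquadric_interp(pts, vals, query, c=1.0):
    """Fit weights so that sum_j w_j * sqrt(|p - p_j|^2 + c^2) matches vals
    at the electrode positions pts, then evaluate at the query points."""
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    w = np.linalg.solve(np.sqrt(D**2 + c**2), vals)      # interpolation weights
    Dq = np.linalg.norm(query[:, None, :] - pts[None, :, :], axis=-1)
    return np.sqrt(Dq**2 + c**2) @ w
```

The multiquadric system matrix is nonsingular for distinct points, so the fit reproduces the measured potentials exactly at the electrodes while smoothly filling in the virtual points between them.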

14.
Low-temperature superconductivity plays an important role in some specific biomedical applications, in particular in non-invasive imaging of human brain activity. Superconducting magnets are indispensable for functional magnetic resonance imaging (fMRI), which allows functional imaging of the brain with high spatial but poor temporal resolution. Superconducting quantum interference devices (SQUIDs) are the most sensitive magnetic field detectors. Up to a few hundred SQUIDs are nowadays used in modern whole-head magnetoencephalography (MEG) systems. They allow tracking brain activation with a superior temporal resolution of milliseconds, a prerequisite for monitoring brain dynamics and understanding information processing in the human brain. We introduce the prerequisites of MEG data acquisition and briefly review two established methods of biomagnetic signal processing: signal averaging, and subsequent source identification as a solution of the biomagnetic inverse problem. Besides these standard techniques, we discuss advanced methods for signal processing in MEG that take into account the frequency content of the recorded signal. We briefly refer to the prospects of Fourier analysis and the wavelet transform in MEG data analysis, and suggest matching pursuit as a promising tool for signal decomposition and reconstruction with high resolution in the time-frequency plane.
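Signal averaging, the first standard technique mentioned, relies on stimulus-locked activity adding coherently across trials while noise adds incoherently, so the noise amplitude in the average falls as 1/sqrt(N). A minimal sketch; the onset bookkeeping and names are illustrative:

```python
import numpy as np

def average_epochs(x, onsets, win):
    """Average stimulus-locked epochs of length `win` starting at each onset."""
    return np.mean([x[o:o + win] for o in onsets], axis=0)
```

With 400 trials the residual noise in the average is about 1/20 of the single-trial noise, which is why averaging remains the workhorse for extracting evoked fields.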

15.
Fused deposition modelling (FDM) is an extrusion-based rapid prototyping (RP) technique that can be used to fabricate tissue engineering scaffolds. The present work studies the melt flow behaviour (MFB) of poly-epsilon-caprolactone (PCL), a representative biomaterial, in FDM. The MFB significantly affects the quality of the scaffold, which depends not only on the pressure gradient, melt velocity, and temperature gradients, but also on physical properties such as the melt temperature and rheology. The MFB is studied using two methods: mathematical modelling and finite element analysis (FEA) in Ansys®. It is studied using accurate channel geometry by varying the filament velocity at the entry, and the nozzle diameters and angles at the exit. The comparative results of mathematical modelling and FEA suggest that the pressure drop and the melt-flow velocities depend on the flow-channel parameters. One inference of particular interest is the temperature gradient of the PCL melt, which shows that it liquefies within 35% of the channel length. These results help in understanding the MFB of biomaterials, which affects the quality of scaffolds built via FDM, and can also be used to predict the MFB of other biomaterials.
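The dependence of pressure drop on channel parameters can be grounded in the simplest limiting case: Hagen-Poiseuille flow of a Newtonian melt through a cylindrical channel, where dP = 8 mu L Q / (pi R^4). This is only a back-of-the-envelope check, not the paper's model (which handles the real nozzle geometry and the non-Newtonian PCL rheology); the numbers below are illustrative, not PCL data.

```python
import math

def poiseuille_dp(mu, L, Q, R):
    """Pressure drop (Pa) for laminar Newtonian flow in a cylindrical channel:
    mu = viscosity (Pa s), L = channel length (m),
    Q = volumetric flow rate (m^3/s), R = channel radius (m)."""
    return 8.0 * mu * L * Q / (math.pi * R ** 4)
```

The R^4 dependence is the key takeaway: halving the nozzle radius raises the required pressure drop sixteen-fold, which is why nozzle diameter dominates the flow-channel parameters.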

16.
Results are provided for modelling, in the Mathcad environment, digital algorithms for processing signals obtained in diamond X-ray luminescence separators with continuous excitation. The effectiveness of these algorithms is estimated, and they are compared with traditional analog methods of signal processing. The question of the temporal resolution of mineral luminescence signals is also considered.

17.
Fused deposition modelling (FDM) is one of the most significant techniques in additive manufacturing (AM); it refers to the process in which successive layers of material are deposited in a computer-controlled environment to create a three-dimensional object. The main limitations of using the FDM process in industrial applications are the narrow range of available materials, and that parts fabricated by FDM are used only as demonstration or conceptual parts rather than as functional parts. Recently, researchers have studied many ways to increase the range of materials available for the FDM process, which has widened the scope of FDM in various manufacturing sectors. Most of the research is focused on composite materials such as metal matrix composites, ceramic composites, natural fibre-reinforced composites, and polymer matrix composites. This article reviews the research carried out so far on developing samples from different composite materials and optimising their process parameters for FDM, in order to improve the mechanical and other desired properties of FDM components.

18.
Recently, a set of gradient-based optical proximity correction (OPC) and phase-shifting mask (PSM) optimization methods has been developed to solve for the inverse lithography problem under scalar imaging models, which are only accurate for numerical apertures (NAs) of less than approximately 0.4. However, as lithography technology enters the 45 nm realm, immersion lithography systems with hyper-NA (NA>1) are now extensively used in the semiconductor industry. For the hyper-NA lithography systems, the vector nature of the electromagnetic field must be taken into account, leading to the vector imaging models. Thus, the OPC and PSM optimization approaches developed under the scalar imaging models are inadequate to enhance the resolution in immersion lithography systems. This paper focuses on developing pixelated gradient-based OPC and PSM optimization algorithms under a vector imaging model. We first formulate the mask optimization framework, in which the imaging process of the optical lithography system is represented by an integrative and analytic vector imaging model. A gradient-based algorithm is then used to optimize the mask iteratively. Subsequently, a generalized wavelet penalty is proposed to keep a balance between the mask complexity and convergence errors. Finally, a set of methods is exploited to speed up the proposed algorithms.
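The gradient-based pixel optimization can be illustrated on a drastically simplified 1D coherent scalar model: aerial image I = (h * m)^2, a sigmoid resist model, and a quadratic loss against the target pattern, with the analytic gradient back-propagated through the convolution. This toy deliberately ignores the vector imaging model that is the paper's actual subject; every function name and constant is illustrative.

```python
import numpy as np

def optimize_mask(target, h, a=10.0, thresh=0.3, lr=0.05, n_iter=300):
    """Gradient descent on ||sigmoid(a*((h*m)^2 - thresh)) - target||^2 w.r.t. mask m."""
    m = target.astype(float).copy()              # initialize the mask at the target
    losses = []
    for _ in range(n_iter):
        E = np.convolve(m, h, mode='same')       # coherent field (odd-length kernel)
        I = E ** 2                               # aerial image intensity
        r = 1.0 / (1.0 + np.exp(-a * (I - thresh)))   # sigmoid resist response
        losses.append(float(np.sum((r - target) ** 2)))
        g_I = 2.0 * (r - target) * a * r * (1.0 - r)  # dLoss/dI
        g_E = 2.0 * E * g_I                           # chain rule through I = E^2
        g_m = np.convolve(g_E, h[::-1], mode='same')  # adjoint of the convolution
        m = np.clip(m - lr * g_m, 0.0, 1.0)           # keep transmission in [0, 1]
    return m, losses
```

The same structure (forward imaging model, differentiable resist, gradient step on the pixelated mask) carries over when the scalar convolution is replaced by a vector imaging model.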

19.
Fused deposition modelling (FDM) is one of the most commonly used additive manufacturing processes because of its environment-friendly nature and cost-effectiveness. However, it suffers from low surface quality due to its coarse layer resolution. The surface finish of FDM parts can be enhanced by post-processing chemical treatment with various solvents, which reduces surface roughness by dissolving the external surfaces of 3D-printed samples. Chemical treatment is an easy, fast, and economical technique. In the present investigation, the effect of chemical treatment on the surface roughness and tensile strength of acrylonitrile butadiene styrene (ABS) parts made using the FDM process is studied with two chemicals, namely acetone and 1,2-dichloroethane. Post chemical treatment dramatically improves the surface finish and dimensional accuracy of ABS specimens, but it reduces tensile strength. Better tensile strength is obtained with the acetone solvent, and a better surface finish with dichloroethane.

20.
Error localization plays an important role in modelling and in model-supported fault detection and diagnosis. Error localization comprises detection and a rough quantification. A review of methods is presented; both global and local methods exist. An attempt is made to overcome the difficulties arising with inverse problems by using sophisticated procedures. First, approaches with modified well-posed operators are considered; these are then extended by applying regularization methods, which yields generalized solutions. The generalized solutions are stable and unique with minimum norm.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号