Similar Articles (20 results)
1.
The frequent occurrence of ocular artefacts leads to serious problems in reading and analysing the electroencephalogram (EEG). These artefacts have high amplitude, and their frequency band overlaps with that of the genuine brain signal, so they are difficult to reduce with traditional filtering methods. In this paper, a novel ocular artefact removal method using artificial neural networks is described. In the proposed method, the number of radial basis function (RBF) neurons and the clustering of the input-output space are determined adaptively. Furthermore, the structure of the system and the parameters of the corresponding RBF units are trained automatically, and relatively fast adaptation is attained. Owing to its recursive least-squares estimation technique, the proposed system is suitable for real EEG applications. The advantages of the proposed method are demonstrated on EEG recordings by comparison with ICA-based systems. Our results show that the new system is preferable to other methods for ocular artefact reduction, achieving a better trade-off between removing artefacts and preserving inherent brain activity.
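As a rough illustration of the idea, the sketch below combines a small RBF layer with a recursive least-squares (RLS) readout to cancel an EOG-correlated artefact. The centres are picked at random rather than by the adaptive clustering the paper describes, and all signals are synthetic.

```python
import numpy as np

def rbf_features(X, centers, width):
    # Gaussian RBF activations for each row of X
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

class RLSReadout:
    """Recursive least-squares update for the linear output weights."""
    def __init__(self, n_features, lam=0.999, delta=100.0):
        self.w = np.zeros(n_features)
        self.P = np.eye(n_features) * delta
        self.lam = lam

    def update(self, phi, target):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)   # RLS gain vector
        err = target - self.w @ phi          # a-priori error
        self.w += k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return err

# synthetic demo: EEG = brain rhythm + an artefact driven by an EOG reference
rng = np.random.default_rng(0)
n, L = 2000, 5
eog = np.convolve(rng.standard_normal(n), np.ones(25) / 25, mode="same")
brain = np.sin(2 * np.pi * 10 * np.arange(n) / 250)   # 10 Hz rhythm at 250 Hz
eeg = brain + 3.0 * np.tanh(eog)                      # nonlinear propagation

# delay-embed the EOG reference; pick RBF centres at random (not adaptively)
X = np.stack([np.roll(eog, i) for i in range(L)], axis=1)
centers = X[rng.choice(n, 20, replace=False)]
width = np.sqrt(L) * X.std() + 1e-9

phi_all = rbf_features(X, centers, width)
rls = RLSReadout(n_features=20)
# the a-priori error tends to the brain signal once the net predicts the artefact
cleaned = np.array([rls.update(phi, y) for phi, y in zip(phi_all, eeg)])
print("residual artefact power:", np.var(cleaned[500:] - brain[500:]))
```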

2.
In this work, we present a method to extract high-amplitude artefacts from single-channel electroencephalogram (EEG) signals. The method, called local singular spectrum analysis (local SSA), is based on a principal component analysis (PCA) applied to clusters of the multidimensional signals obtained by embedding the signals in their time-delayed coordinates. Within each cluster, the largest eigenvalues are associated with the large-amplitude artefact component of the embedded signal. By reverting the clustering and embedding processes, the high-amplitude artefact can then be extracted; subtracting it from the original signal yields a corrected EEG signal. The algorithm is applied to segments of real EEG recordings containing paroxysmal epileptiform activity contaminated by large EOG artefacts. We show that the method can also be applied in parallel to correct all channels that exhibit high-amplitude artefacts, such as ocular movement interference or high-amplitude low-frequency baseline drifts. The extracted artefacts as well as the corrected EEG are presented.
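A minimal sketch of the pipeline, assuming k-means for the clustering step and one principal component per cluster; the window length, cluster count and the blink-like test signal are all illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def local_ssa_artifact(x, window=20, n_clusters=4, n_pc=1):
    """Extract a high-amplitude, low-dimensional component from a 1-D signal
    via clustering + PCA in delay coordinates (a local-SSA sketch)."""
    n = len(x) - window + 1
    # delay embedding: rows are lagged windows of the signal
    X = np.stack([x[i:i + window] for i in range(n)])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    Xa = np.zeros_like(X)
    for c in range(n_clusters):
        idx = labels == c
        mu = X[idx].mean(0)
        Xc = X[idx] - mu
        # the leading principal directions of each cluster capture the
        # high-amplitude artefact component
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        Xa[idx] = Xc @ Vt[:n_pc].T @ Vt[:n_pc] + mu
    # revert the embedding by diagonal (Hankel) averaging
    artefact = np.zeros(len(x)); counts = np.zeros(len(x))
    for i in range(n):
        artefact[i:i + window] += Xa[i]; counts[i:i + window] += 1
    return artefact / counts

# demo: a blink-like transient riding on faster EEG-like activity
t = np.linspace(0, 4, 1000)
eeg = 0.3 * np.sin(2 * np.pi * 12 * t)
drift = 2.0 * np.exp(-((t - 2) ** 2) / 0.1)
artefact = local_ssa_artifact(eeg + drift)
corrected = eeg + drift - artefact   # corrected EEG
```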

3.
Automated detection of different waveforms in physiological signals has been one of the most intensively studied applications of signal processing in clinical medicine. In recent years an increasing number of neural-network-based methods have been proposed. In this paper we present a radial basis function (RBF) network based method for the automated detection of different interference waveforms in epileptic EEG. This kind of artefact detector is especially useful as a preprocessing stage for automated EEG analysers, improving their general applicability. The results show that our neural-network-based classifier successfully detects artefacts at a rate of over 75%, while the correct classification rate for normal segments is about 95%.
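The sketch below shows one common way to build such an RBF network classifier: k-means prototypes, a Gaussian hidden layer and a linear readout. The per-segment features are hypothetical stand-ins; the abstract does not specify them.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

class RBFNetClassifier:
    """RBF network: k-means prototypes -> Gaussian layer -> linear readout."""
    def __init__(self, n_centers=12, gamma=None):
        self.n_centers, self.gamma = n_centers, gamma

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers_[None]) ** 2).sum(-1)
        return np.exp(-self.gamma_ * d2)

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_centers, n_init=10, random_state=0).fit(X)
        self.centers_ = km.cluster_centers_
        if self.gamma is None:
            # default width from the median inter-centre distance
            d = np.linalg.norm(self.centers_[:, None] - self.centers_[None], axis=-1)
            self.gamma_ = 1.0 / (2 * np.median(d[d > 0]) ** 2)
        else:
            self.gamma_ = self.gamma
        self.readout_ = LogisticRegression(max_iter=1000).fit(self._phi(X), y)
        return self

    def predict(self, X):
        return self.readout_.predict(self._phi(X))

# hypothetical features per EEG segment (e.g. variance, line length, kurtosis)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(3, 2, (60, 3))])
y = np.r_[np.zeros(200), np.ones(60)]   # 0 = normal segment, 1 = artefact
clf = RBFNetClassifier().fit(X, y)
print("train accuracy:", (clf.predict(X) == y).mean())
```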

4.
Routinely recorded electrocardiograms (ECGs) are often corrupted by different types of artefacts, and many efforts have been made to enhance their quality by reducing noise and artefacts. This paper addresses the problem of removing noise and artefacts from ECGs using independent component analysis (ICA). An ICA algorithm is tested on three-channel ECG recordings taken from human subjects, mostly in the coronary care unit. The results show that ICA can detect and remove a variety of noise and artefact sources in these ECGs. One difficulty with the application of ICA is determining the order of the independent components; a new technique based on simple statistical parameters is proposed to solve this problem in this application. The technique is successfully applied to the ECG data and offers potential for online ICA-based processing of ECGs.
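A sketch of this workflow with scikit-learn's FastICA, using kurtosis as one example of a "simple statistical parameter" for ordering components (the paper's exact criterion is not given in the abstract); the three-channel mixture is synthetic.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

# synthetic 3-channel "ECG": spiky beat-like source + baseline wander + noise
rng = np.random.default_rng(2)
t = np.arange(5000) / 500.0
ecg = np.sin(2 * np.pi * 1.2 * t) ** 15          # sharp quasi-periodic peaks
wander = 0.8 * np.sin(2 * np.pi * 0.2 * t)       # baseline artefact
noise = 0.1 * rng.standard_normal(len(t))
A = rng.uniform(0.5, 1.5, (3, 3))                # unknown mixing matrix
X = np.c_[ecg, wander, noise] @ A.T

ica = FastICA(n_components=3, random_state=0)
S = ica.fit_transform(X)                         # estimated source components

# order components by kurtosis: the spiky ECG-like source is the most
# super-Gaussian, while wander and noise score much lower
k = kurtosis(S, axis=0)
order = np.argsort(k)[::-1]
print("kurtosis per component:", np.round(k[order], 1))

# remove the artefact components: zero them and back-project to the channels
S_clean = S.copy()
S_clean[:, order[1:]] = 0.0
X_clean = ica.inverse_transform(S_clean)
```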

5.
Digital scans of analogue photographic film typically contain artefacts such as dust and scratches. Automated removal of these is an important part of the preservation and dissemination of photographs of historical and cultural importance. While state-of-the-art deep learning models have shown impressive results in general image inpainting and denoising, film artefact removal is an understudied problem. It has particularly challenging requirements, due to the complex nature of analogue damage, the high resolution of film scans, and potential ambiguities in the restoration. There are no publicly available high-quality datasets of real-world analogue film damage for training and evaluation, making quantitative studies impossible. We address the lack of ground-truth data for evaluation by collecting a dataset of 4K damaged analogue film scans paired with manually restored versions produced by a human expert, allowing quantitative evaluation of restoration performance. We have made the dataset available at https://doi.org/10.6084/m9.figshare.21803304. We construct a larger synthetic dataset of damaged images with paired clean versions using a statistical model of artefact shape and occurrence learnt from real, heavily damaged images. We carefully validate the realism of the simulated damage via a human perceptual study, showing that even expert users find our synthetic damage indistinguishable from real damage. In addition, we demonstrate that training with our synthetically damaged dataset leads to improved artefact segmentation performance when compared to previously proposed synthetic analogue damage overlays. The synthetically damaged dataset can be found at https://doi.org/10.6084/m9.figshare.21815844, and the annotated authentic artefacts along with the resulting statistical damage model at https://github.com/daniela997/FilmDamageSimulator. Finally, we use these datasets to train and analyse the performance of eight state-of-the-art image restoration methods on high-resolution scans. We compare both methods that directly perform the restoration task on scans with artefacts and methods that require a damage mask to be provided for the inpainting of artefacts. We modify the methods to process the inputs in a patch-wise fashion so that they operate on original high-resolution film scans.
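The final step, running a restoration model patch-wise over a full-resolution scan, typically looks like the sketch below: overlapping tiles with feathered blending to avoid seams. `restore_patch` is a placeholder for any of the benchmarked models, and the tile sizes are illustrative.

```python
import numpy as np

def restore_patchwise(image, restore_patch, patch=512, overlap=64):
    """Run a patch-level restoration function over a large scan.
    Overlapping tiles are blended with a linear feather to avoid seams."""
    H, W = image.shape[:2]
    out = np.zeros_like(image, dtype=np.float64)
    weight = np.zeros((H, W, 1))
    # pyramid-shaped blending weights, highest in the tile centre
    ramp = np.minimum(np.arange(patch) + 1, np.arange(patch)[::-1] + 1)
    feather = np.minimum.outer(ramp, ramp)[..., None].astype(np.float64)
    step = patch - overlap
    for y in range(0, max(H - overlap, 1), step):
        for x in range(0, max(W - overlap, 1), step):
            y0, x0 = min(y, H - patch), min(x, W - patch)
            tile = image[y0:y0 + patch, x0:x0 + patch]
            out[y0:y0 + patch, x0:x0 + patch] += restore_patch(tile) * feather
            weight[y0:y0 + patch, x0:x0 + patch] += feather
    return out / np.maximum(weight, 1e-9)

# usage with an identity "model" on a fake scan
scan = np.random.rand(1024, 1536, 3)
restored = restore_patchwise(scan, restore_patch=lambda p: p)
print(np.allclose(restored, scan))  # True: blending preserves content
```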

6.
Automating the detection of epileptic seizures could reduce the significant human resources necessary for the care of patients suffering from intractable epilepsy and offer improved solutions for closed-loop therapeutic devices such as implantable electrical stimulation systems. While numerous detection algorithms have been published, an effective detector in the clinical setting remains elusive. Seizure detection algorithms face significant challenges: epileptic EEG morphology can vary widely across the patient population, EEG recordings from the same patient can change over time, and recordings can be contaminated by artifacts that often resemble epileptic seizure activity. To be successful, an epileptic seizure detector must be able to adapt to these challenges. In this study, a novel detector is proposed based on a support vector machine assembly classifier (SVMA). The SVMA consists of a group of SVMs, each trained with a different weighting between the seizure and non-seizure data, and the user can selectively control the output of the SVMA classifier. The algorithm can improve detection performance compared with traditional methods by providing an effective tuning strategy for specific patients, and it demonstrates a clear advantage over threshold tuning. When compared with the detection performances reported by other studies using the publicly available epilepsy dataset hosted by the University of Bonn, the proposed SVMA detector achieved the best total accuracy of 98.72%. These results demonstrate the efficacy of the proposed SVMA detector and its potential in the clinical setting.
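A sketch of the assembly idea: several SVMs trained with different seizure-class weights, combined by a user-controlled vote threshold. The weight grid and the band-power-like features are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

class SVMAssembly:
    """Ensemble of SVMs trained with different seizure/non-seizure weights.
    `vote_threshold` lets the user trade sensitivity against specificity."""
    def __init__(self, weights=(0.25, 0.5, 1.0, 2.0, 4.0)):
        self.models = [SVC(kernel="rbf", class_weight={0: 1.0, 1: w})
                       for w in weights]

    def fit(self, X, y):
        for m in self.models:
            m.fit(X, y)
        return self

    def predict(self, X, vote_threshold=3):
        votes = np.sum([m.predict(X) for m in self.models], axis=0)
        return (votes >= vote_threshold).astype(int)

# hypothetical feature vectors per EEG epoch (e.g. band powers)
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (300, 4)), rng.normal(1.5, 1, (60, 4))])
y = np.r_[np.zeros(300, int), np.ones(60, int)]   # 1 = seizure epoch
svma = SVMAssembly().fit(X, y)
for thr in (1, 3, 5):   # lower threshold -> more sensitive detector
    pred = svma.predict(X, vote_threshold=thr)
    print(thr, "sensitivity:", pred[y == 1].mean(), "FPR:", pred[y == 0].mean())
```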

7.
The ElectroEncephaloGram (EEG) gives information about the electrical characteristics of the brain and is used in applications such as disease diagnosis, neuroscience and Brain Computer Interface (BCI) systems. Several artefact sources can disturb the brain signals in EEG measurements; signals caused by eye movements are the most important source of artefacts and must be removed in order to obtain a clean EEG signal. During the removal of Ocular Artefacts (OAs), preserving the original EEG signal is one of the most important considerations. Some methods require an ElectroOculoGram (EOG) reference signal to remove OAs, but long-term EOG measurement can disturb the subject. In this paper, a novel robust method is proposed that removes OAs from the EEG automatically, without an EOG reference signal, by combining Outlier Detection and Independent Component Analysis (OD-ICA). The OD-ICA method searches for OA patterns in all components instead of a single component. Moreover, it removes only the OA patterns and preserves the meaningful EEG signal, and no user intervention is needed; these advantages make the method robust. OD-ICA is tested on two real datasets. Relative Error (RE), Correlation Coefficient (CorrCoeff) and the percentage of detected OA patterns are used as performance measures. Three different Outlier Detection (OD) methods are evaluated: the Chauvenet criterion, Peirce's criterion and the adjusted box plot. Performance is compared between the proposed method and the conventional approach of zeroing the artefact-bearing component. The experimental results show that the proposed OD-ICA method effectively removes OAs from EEG signals while preserving the meaningful EEG signal.
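A compact sketch of the idea, with a plain Tukey-style fence standing in for the three outlier-detection rules named above; only the flagged samples within each independent component are replaced before back-projection.

```python
import numpy as np
from sklearn.decomposition import FastICA

def od_ica_clean(X, whisker=3.0):
    """OD-ICA sketch: look for outlier (ocular-artefact-like) samples in *all*
    ICA components and suppress only those samples, keeping the rest.
    A Tukey-style fence stands in for the paper's outlier-detection rules
    (Chauvenet criterion, Peirce's criterion, adjusted box plot)."""
    ica = FastICA(n_components=X.shape[1], random_state=0)
    S = ica.fit_transform(X)
    S_clean = S.copy()
    for j in range(S.shape[1]):
        q1, q3 = np.percentile(S[:, j], [25, 75])
        lo, hi = q1 - whisker * (q3 - q1), q3 + whisker * (q3 - q1)
        mask = (S[:, j] < lo) | (S[:, j] > hi)
        S_clean[mask, j] = np.median(S[:, j])   # replace only outlier samples
    return ica.inverse_transform(S_clean)

# demo: 4-channel EEG with a large blink-like transient mixed in
rng = np.random.default_rng(4)
t = np.arange(4000) / 250.0
sources = np.c_[np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 6 * t),
                rng.standard_normal(len(t)), np.zeros(len(t))]
sources[1000:1040, 3] = 40.0                     # ocular transient
X = sources @ rng.uniform(0.5, 1.5, (4, 4)).T
X_clean = od_ica_clean(X)
```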

8.

Context

Requirements Engineering (RE) is a critical discipline mostly driven by uncertainty, since it is influenced by the customer domain or by the development process model used. Volatile project environments restrict the choice of methods and the decision about which artefacts to produce in RE.

Objective

We aim to investigate RE processes in successful project environments to discover characteristics and strategies that allow us to elaborate RE tailoring approaches in the future.

Method

We perform a field study on a set of projects at one company. First, we investigate by content analysis which RE artefacts were produced in each project and to what extent they were produced. Second, we perform qualitative analysis of semi-structured interviews to discover project parameters that relate to the produced artefacts. Third, we use cluster analysis to infer artefact patterns and probable RE execution strategies, which are the responses to specific project parameters. Fourth, we investigate by statistical tests the effort spent in each strategy in relation to the effort spent in change requests to evaluate the efficiency of execution strategies.

Results

We identified three artefact patterns and corresponding execution strategies. Each strategy covers different project parameters that impact the creation of certain artefacts. The effort analysis shows that the strategies have no significant differences in their effort and efficiency.

Conclusions

In contrast to our initial assumption that increased effort in requirements engineering lowers the probability of change requests or project failures in general, our results show no statistically significant difference in the efficiency of the strategies. In addition, it turned out that many parameters considered to be the main causes of project failures can be handled successfully. Hence, practitioners can apply the artefact patterns and related project parameters to tailor the RE process according to individual project characteristics.

9.
The aim of this study is the classification of electroencephalogram (EEG) signals by combining model-based methods with least squares support vector machines (LS-SVMs). The LS-SVMs were implemented for the classification of two types of EEG signals (set A, recorded from healthy volunteers with eyes open, and set E, recorded from epilepsy patients during epileptic seizures). In order to extract features representing the EEG signals, spectral analysis was performed using three model-based methods (Burg autoregressive, AR; moving average, MA; and least squares modified Yule–Walker autoregressive moving average, ARMA). The research demonstrated that the Burg AR coefficients are features that represent the EEG signals well, and the LS-SVM trained on these features achieved high classification accuracy.
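A sketch of the feature-extraction and classification stages, assuming statsmodels' Burg estimator for the AR coefficients and using an ordinary RBF-kernel SVM as a stand-in for the LS-SVM; the two-class segments are synthetic surrogates for sets A and E.

```python
import numpy as np
from statsmodels.regression.linear_model import burg
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def burg_ar_features(segments, order=8):
    """Burg AR coefficients as the feature vector for each EEG segment."""
    return np.array([burg(s, order=order)[0] for s in segments])

# synthetic stand-ins for set A (healthy) and set E (seizure) segments:
# different dominant frequencies yield different AR coefficients
rng = np.random.default_rng(5)
def make_segment(freq):
    t = np.arange(512) / 173.61   # sampling rate of the Bonn EEG dataset
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(512)

segs = [make_segment(10) for _ in range(50)] + [make_segment(4) for _ in range(50)]
y = np.r_[np.zeros(50), np.ones(50)]
X = burg_ar_features(segs, order=8)

# plain RBF-kernel SVM as a stand-in for the LS-SVM used in the paper
scores = cross_val_score(SVC(kernel="rbf", C=10.0), X, y, cv=5)
print("CV accuracy:", scores.mean())
```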

10.
The problem of adaptive segmentation of time series with abrupt changes in their spectral characteristics is addressed. Such time series are encountered in various fields of time series analysis, such as speech processing, biomedical signal processing, image analysis and failure detection. Mathematically, these time series can often be modelled by zero-mean Gaussian autoregressive (AR) processes whose parameters, including the gain factor, remain constant for certain time intervals and then jump abruptly to new values. Identification of such processes requires adaptive segmentation: the times of the parameter jumps have to be estimated accurately to form the boundaries of “homogeneous” segments that can be described by stationary AR processes. In this paper, a new and effective method for sequential adaptive segmentation is proposed, based on the parallel application of two sequential parameter estimation procedures. The detection of a parameter change, as well as the estimation of the accurate position of a segment boundary, is performed by a sequence of suitable generalized likelihood ratio (GLR) tests. Flow charts and a block diagram of the algorithm are presented. The adjustment of the three control parameters of the procedure (the AR model order, the threshold of the GLR test and the length of a “test window”) is discussed with respect to various performance features. Simulation results demonstrate the good detection properties of the algorithm, in particular an excellent ability to locate segment boundaries even within a sequence of short segments. As an application to biomedical signals, the analysis of the human electroencephalogram (EEG) is considered and an example is shown.
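A simplified sketch of the GLR test at the heart of such segmenters: a reference window and a sliding test window are each fitted with an AR model, and a likelihood ratio on the prediction-error variances flags a boundary. The window lengths and threshold are illustrative, and the paper's full two-procedure, boundary-refinement logic is omitted.

```python
import numpy as np

def ar_fit_resid_var(x, order):
    """Least-squares AR fit; returns the prediction-error variance."""
    X = np.stack([x[i:len(x) - order + i] for i in range(order)], axis=1)
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ a)

def glr_segment(x, order=4, ref_len=200, win=100, thresh=20.0):
    """Sequential segmentation sketch: a growing reference window is compared
    with a sliding test window via a GLR statistic on AR prediction errors."""
    boundaries, start, t = [], 0, ref_len
    while t + win <= len(x):
        v_ref = ar_fit_resid_var(x[start:t], order)
        v_test = ar_fit_resid_var(x[t:t + win], order)
        v_pool = ar_fit_resid_var(x[start:t + win], order)
        n1, n2 = t - start, win
        # log generalized likelihood ratio for a change in error variance
        glr = (n1 + n2) * np.log(v_pool) - n1 * np.log(v_ref) - n2 * np.log(v_test)
        if glr > thresh:
            boundaries.append(t)
            start, t = t, t + ref_len   # restart after the detected boundary
        else:
            t += win
    return boundaries

# piecewise-stationary demo: the innovation variance jumps at sample 1000
rng = np.random.default_rng(6)
x = np.r_[rng.standard_normal(1000), 3.0 * rng.standard_normal(1000)]
print(glr_segment(x))   # boundary detected near sample 1000
```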

11.
12.
Electroencephalography (EEG) has recently been investigated as a biometric modality for automatic people recognition. Several studies have shown that brain signals possess subject-specific traits that allow distinguishing people. Nonetheless, extracting discriminative characteristics from EEG recordings can be challenging, owing to the significant presence of artifacts in the acquired data. In order to cope with this issue, in this paper we evaluate the effectiveness of some preprocessing techniques in automatically removing undesired EEG contributions, with the aim of improving the achievable recognition rates. Specifically, methods based on blind source separation and sample entropy estimation are investigated. An extensive set of experimental tests, performed on a large database comprising recordings taken from 50 healthy subjects during three distinct sessions spanning a period of about one month, in both eyes-closed and eyes-open conditions, is carried out to analyse the performance of the proposed approaches.
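Sample entropy, one of the criteria mentioned above, measures signal irregularity: regular signals score low, noisy or artefact-laden ones high. A direct implementation of the standard SampEn(m, r) definition is sketched below.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy SampEn(m, r): -log of the conditional probability that
    sequences matching for m points also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    n_t = len(x) - m                  # same template count for m and m + 1

    def count_matches(mm):
        # template vectors of length mm, Chebyshev distance, self-matches excluded
        T = np.stack([x[i:i + n_t] for i in range(mm)], axis=1)
        d = np.max(np.abs(T[:, None, :] - T[None, :, :]), axis=-1)
        return (np.sum(d <= r) - n_t) / 2     # unordered pairs only

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(7)
regular = np.sin(np.arange(1000) * 0.2)
noisy = rng.standard_normal(1000)
print(sample_entropy(regular), sample_entropy(noisy))  # low vs high
```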

13.
In the design of artefacts, tasks and environments for human use, the body dimensions of the target population are a critical element in the spatial optimisation of the design. This study examines how the choices designers make affect the ability of different user groups to interact safely and effectively with a designed artefact. Owing to the variability in body size and shape across demographic groups, heterogeneous user populations are unlikely to experience uniform levels of performance. The associated variability in the rate of unacceptable user conditions is referred to here as disproportionate disaccommodation. This is both an ethical and a performance concern that can be partially addressed through improved design practice. Three methods for incorporating user demographics and the corresponding variability in body size and shape are presented, and compared with a baseline strategy in terms of accommodation and cost.
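A toy Monte Carlo illustration of disproportionate disaccommodation, using made-up stature distributions for two hypothetical demographic groups; the numbers are not from any anthropometric survey.

```python
import numpy as np

rng = np.random.default_rng(8)
# hypothetical stature distributions (mean, sd in mm) for two groups
groups = {"group A": (1756, 69), "group B": (1628, 62)}
reach_limit = 1700   # e.g. the minimum stature a control placement assumes

for name, (mu, sd) in groups.items():
    stature = rng.normal(mu, sd, 100_000)
    disaccommodated = np.mean(stature < reach_limit)
    print(f"{name}: {disaccommodated:.1%} cannot reach the control")
# a single design point disaccommodates the two groups at very different
# rates -- this is disproportionate disaccommodation
```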

14.
Application of ICA to feature extraction from mental-task EEG
This paper briefly introduces the basic idea and algorithms of independent component analysis (ICA) and applies it to feature extraction from multichannel mental-task EEG (mental EEG). Experimental results show that ICA can successfully separate the various interference signals contained in the EEG, such as the electrocardiogram (ECG) and electrooculogram (EOG), thereby accomplishing the denoising preprocessing of the EEG. Furthermore, by analysing EEG signals recorded during different mental tasks with ICA, independent-component features corresponding to each mental task were found; these stable independent-component features provide a new approach for mental-task classification and brain-computer interface technology.

15.
Image acquisition systems integrated with laboratory automation produce multi-dimensional datasets. Pattern recognition methods offer an effective computational approach for the automatic analysis of such image datasets; in some cases it is advantageous to combine pattern recognition with image super-resolution procedures. In this paper, we define a method derived from pattern recognition techniques, combined with super-resolution algorithms, for the recognition of artefacts and noise in sets of images. The advantage of our approach is automatic artefact recognition, opening the possibility of building a general framework for artefact recognition that is independent of the specific application in which it is used.

16.
17.
白帅帅  陈超  魏玮  代璐瑶  刘烨  邱爽  何晖光 《自动化学报》2023,49(10):2084-2093
EEG-based lie detection relies on effective decoding of event-related potentials (ERPs), and current EEG analysis mainly uses hand-crafted features. In recent years, single-trial EEG classification has advanced considerably; end-to-end methods can automatically extract features from and classify the EEG, but they have been little studied or applied in lie detection, and single-trial decoding cannot be applied directly in deception-detection scenarios. In this study, a self-face information recognition experiment based on the Complex Trial Protocol (CTP) was designed, and EEG data were collected from 18 subjects. Different end-to-end single-trial ERP classification methods were investigated for lie detection, and, to address the problem that single-trial EEG decoding cannot be applied directly in practice, a bootstrap-like algorithm is proposed. Based on an assumption about the data distribution, the algorithm infers the true probe stimulus by comparing the performance of the models trained when each class of stimulus image is treated as the probe. Experimental results show that, in CTP-based lie detection using self-face information, the proposed bootstrap-like method outperforms the traditional probe-prediction method and achieves accurate lie prediction using only a small amount of EEG data.
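A sketch of the class-bootstrap inference described above: each candidate stimulus class is treated in turn as the probe, a "candidate vs irrelevant" classifier is trained, and the best-separating candidate is taken to be the true probe. The logistic-regression classifier and the ERP-like features are stand-ins, not the authors' end-to-end networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def infer_probe(X_by_class, X_irrelevant):
    """For each candidate class, train 'candidate vs irrelevant' and keep the
    cross-validated score; under the data-distribution assumption, the true
    probe is the candidate whose model separates best."""
    scores = {}
    for name, Xc in X_by_class.items():
        X = np.vstack([Xc, X_irrelevant])
        y = np.r_[np.ones(len(Xc)), np.zeros(len(X_irrelevant))]
        clf = LogisticRegression(max_iter=1000)
        scores[name] = cross_val_score(clf, X, y, cv=5).mean()
    return max(scores, key=scores.get), scores

# hypothetical single-trial ERP features: the probe evokes a P300-like shift
rng = np.random.default_rng(9)
X_irr = rng.normal(0, 1, (150, 8))
cands = {"face_1": rng.normal(0, 1, (30, 8)),     # irrelevant face
         "face_2": rng.normal(1.2, 1, (30, 8)),   # true probe (own face)
         "face_3": rng.normal(0, 1, (30, 8))}
probe, scores = infer_probe(cands, X_irr)
print("inferred probe:", probe, scores)
```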

18.
The enhancement of monitored biosignals plays a crucial role in successful computer-assisted diagnosis, so the development of effective preprocessing approaches remains in ongoing research demand. In the present article, a computational prototype for preprocessing short daytime polysomnographic (sdPSG) recordings based on advanced estimation techniques is introduced. The proposed model performs data segmentation, baseline correction, whitening, embedded-artefact removal and noise cancellation on multivariate sdPSG data sets. The methodological framework includes the Karhunen–Loève Transform (KLT), Blind Source Separation with Second-Order Statistics (BSS-SOS) and the Wavelet Packet Transform (WPT), chosen for low model order, time-to-diagnosis efficiency and modular autonomy. Data collected from 10 volunteer subjects were preprocessed by the model in order to evaluate the removal of noise and artefactual activity from the electroencephalographic (EEG) and electrooculographic (EOG) channels. Performance was assessed both qualitatively (visual inspection) and quantitatively, using the Signal-to-Interference Ratio (SIR), Root Mean Square Error (RMSE) and Signal-to-Noise Ratio (SNR). The model achieved complete artefact rejection in 80% of the preprocessed epochs, a residual error of 4 to 8 dB, and a signal-to-noise gain of 12 to 30 dB after denoising. In comparison with previous approaches, N-way ANOVA tests confirmed the system's improvement of electrophysiological signals for the subsequent processing and classification stages.
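The denoising stage of such a pipeline can be sketched with PyWavelets; the example below uses an ordinary discrete wavelet transform with soft universal thresholding as a simple stand-in for the paper's KLT/BSS-SOS/WPT chain, and computes the SNR-gain metric on a synthetic signal.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=5):
    """Soft-threshold wavelet denoising (a plain DWT approximation of the
    wavelet-packet stage of the pipeline)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # universal threshold estimated from the finest-scale detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

def snr_gain_db(clean, noisy, denoised):
    """Signal-to-noise improvement, one of the paper's quantitative metrics."""
    snr = lambda s, e: 10 * np.log10(np.sum(s ** 2) / np.sum((s - e) ** 2))
    return snr(clean, denoised) - snr(clean, noisy)

rng = np.random.default_rng(10)
t = np.arange(2048) / 256.0
clean = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 13 * t)
noisy = clean + 0.8 * rng.standard_normal(len(t))
denoised = wavelet_denoise(noisy)
print(f"SNR gain: {snr_gain_db(clean, noisy, denoised):.1f} dB")
```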

19.
We present the ADvanced Artefact Management System (ADAMS), a web-based system that integrates project management features, such as work-breakdown structure definition, resource allocation and schedule management, with artefact management features, such as artefact versioning, traceability management and artefact quality management. In this article we focus on the fine-grained artefact management approach adopted in ADAMS, which provides valuable support for high-level documentation and traceability management. In particular, the traceability layer in ADAMS is used to propagate events concerning changes to an artefact to the dependent artefacts, thereby also increasing context-awareness in the project. We also present the results of experimenting with the system in software projects developed at the University of Salerno. Copyright © 2010 John Wiley & Sons, Ltd.
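A minimal sketch of the event-propagation mechanism described above: change events travel along traceability links to all transitively dependent artefacts. The class and artefact names are hypothetical, not ADAMS's actual API.

```python
from collections import defaultdict

class TraceabilityLayer:
    """Sketch of event propagation over traceability links: a change to one
    artefact notifies every transitively dependent artefact."""
    def __init__(self):
        self.dependents = defaultdict(set)   # artefact -> artefacts tracing to it

    def add_trace(self, source, dependent):
        self.dependents[source].add(dependent)

    def notify_change(self, artefact, event):
        seen, stack = set(), [artefact]
        while stack:
            a = stack.pop()
            for dep in self.dependents[a] - seen:
                seen.add(dep)
                print(f"notify {dep}: {event} on {artefact}")
                stack.append(dep)

tl = TraceabilityLayer()
tl.add_trace("requirements.doc", "design.doc")
tl.add_trace("design.doc", "module_a.java")
tl.notify_change("requirements.doc", "new version checked in")
```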

20.
Electroencephalogram (EEG) recordings often suffer interference from different kinds of noise, including white noise, muscle activity and baseline drift, which severely limits their utility. Artificial neural networks (ANNs) are effective and powerful tools for removing such interference from EEGs. Several methods have been developed, but ANNs appear to be the most effective at reducing muscle and baseline contamination, especially when the contamination is greater in amplitude than the brain signal. This paper proposes an ANN filter for EEG recordings, developing a novel framework for investigating and comparing relative performance on real EEG recordings. The method is based on a growing ANN that adapts the number of nodes in the hidden layer, with the coefficient matrices optimised by the simultaneous perturbation method. The ANN improves on the results obtained with conventional EEG filtering techniques: wavelets, singular value decomposition, principal component analysis, adaptive filtering and independent component analysis. The system has been evaluated on a wide range of EEG signals. The present study introduces a new method of reducing all EEG interference signals in one step, with low EEG distortion and high noise reduction.
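The simultaneous perturbation (SPSA) update at the core of such training estimates the gradient from only two loss evaluations, whatever the number of weights. A toy sketch follows; the growing-node logic is omitted, and the toy filter is trained against a known clean target, which a real application would not have.

```python
import numpy as np

def spsa_step(w, loss, a=0.01, c=0.01, rng=np.random.default_rng(11)):
    """One simultaneous-perturbation step: a two-evaluation gradient estimate."""
    delta = rng.choice([-1.0, 1.0], size=w.shape)   # Rademacher perturbation
    g_hat = (loss(w + c * delta) - loss(w - c * delta)) / (2 * c) * (1 / delta)
    return w - a * g_hat

# toy use: fit a tiny one-hidden-layer filter to suppress added noise
rng = np.random.default_rng(12)
t = np.arange(512) / 128.0
clean = np.sin(2 * np.pi * 6 * t)
noisy = clean + 0.5 * rng.standard_normal(len(t))
X = np.stack([np.roll(noisy, i) for i in range(8)], axis=1)   # tapped delays

def unpack(w):  # 8 inputs -> 4 hidden (tanh) -> 1 output
    return w[:32].reshape(8, 4), w[32:36], w[36:40], w[40]

def predict(w, X):
    W1, b1, w2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ w2 + b2

def loss(w):
    # training target is the known clean signal (toy setting only)
    return np.mean((predict(w, X) - clean) ** 2)

w = 0.1 * rng.standard_normal(41)
for k in range(2000):   # standard SPSA gain decay schedules
    w = spsa_step(w, loss, a=0.05 / (1 + k) ** 0.602, c=0.05 / (1 + k) ** 0.101)
print("final MSE:", loss(w))
```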
