Similar Literature
20 similar documents found (search time: 16 ms).
1.
A new approach to the classification of common communication signals based on the relevance vector machine (RVM) was presented, and two signal classifiers built on the kernel methods of the support vector machine (SVM) and the RVM were compared and analyzed. First, several robust features were extracted from the signals as classifier inputs; the kernel trick was then used to map the feature vectors implicitly into a high-dimensional feature space, and multi-class RVM and SVM classifiers were designed to recognize AM, CW, SSB, MFSK, and MPSK signals. Simulation results showed that, with properly chosen parameters, the RVM and the SVM achieved comparable accuracy, but the RVM required less learning time and fewer basis functions, so its classification speed is much faster than that of the SVM.
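As an illustration of the SVM half of such a feature-based classifier (scikit-learn has no standard RVM), the minimal sketch below trains a multi-class RBF-kernel SVM on synthetic stand-in feature vectors; the toy feature generator and parameter values are assumptions, not the robust features or settings used in the paper.

```python
# Minimal sketch: multi-class RBF-kernel SVM on per-signal feature vectors.
# The 3-D "features" are synthetic placeholders, not the paper's feature set.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def toy_features(class_index, n=200):
    """Synthetic 3-D feature vectors standing in for the robust signal features."""
    center = np.array([class_index, 2.0 * class_index, -float(class_index)])
    return center + 0.3 * rng.standard_normal((n, 3))

# five modulation classes: AM, CW, SSB, MFSK, MPSK
X = np.vstack([toy_features(k) for k in range(5)])
y = np.repeat(np.arange(5), 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # kernel maps features implicitly
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```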

2.
Premature ventricular contraction (PVC) may lead to life-threatening cardiac conditions. Real-time automated PVC recognition approaches provide clinicians with useful tools for timely diagnosis if dangerous conditions surface in their patients. Based on the morphological differences of the PVC beats in the ventricular depolarization phase (QRS complex) and repolarization phase (mainly T-wave), two beat-to-beat template-matching procedures were implemented to identify them. Both templates were obtained by a probability-based approach and hence were fully data-adaptive. A PVC recognizer was then established by analyzing the correlation coefficients from the two template-matching procedures. Our approach was trained on 22 ECG recordings from the MIT-BIH arrhythmia database (MIT-BIH-AR) and then tested on another 22 nonoverlapping recordings from the same database. The PVC recognition accuracy was 98.2 %, with a sensitivity of 93.1 % and a positive predictivity of 81.4 %. To evaluate its robustness against noise, our approach was applied again to the above testing set, but this time the ECGs were not preprocessed. A comparable performance was still obtained. A good generalization capability was also confirmed by validating our approach on the independent St. Petersburg Institute of Cardiological Technics database. In addition, our performance was comparable with previously published, more complex approaches. In conclusion, we have developed a low-complexity data-adaptive PVC recognition approach with good robustness against noise and generalization capability. Its performance is comparable to other state-of-the-art methods, demonstrating a good potential for real-time application.
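A minimal sketch of the beat-to-beat template-matching idea follows: each beat is compared with a QRS template and a T-wave template via the correlation coefficient, and low correlations flag a PVC. The toy templates, waveforms, the 0.9 threshold, and the AND decision rule are illustrative assumptions, not the probability-based templates or classifier from the paper.

```python
# Sketch of correlation-based template matching for PVC recognition.
# Templates, waveforms, and the 0.9 threshold are illustrative placeholders.
import numpy as np

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def classify_beat(beat_qrs, beat_t, qrs_template, t_template, thr=0.9):
    r_qrs = corr(beat_qrs, qrs_template)
    r_t = corr(beat_t, t_template)
    return "PVC" if (r_qrs < thr and r_t < thr) else "normal"

# toy example: a normal beat matches both templates; a "PVC" beat is widened and inverted
t = np.linspace(0, 1, 100)
qrs_template = np.exp(-((t - 0.5) ** 2) / 0.002)   # narrow spike
t_template = np.exp(-((t - 0.5) ** 2) / 0.02)      # broader hump
pvc_qrs = np.exp(-((t - 0.5) ** 2) / 0.02)         # abnormally wide QRS
pvc_t = -t_template                                # inverted T-wave

print(classify_beat(qrs_template, t_template, qrs_template, t_template))  # normal
print(classify_beat(pvc_qrs, pvc_t, qrs_template, t_template))            # PVC
```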

3.
Detection of signals in chaos
In this paper, we present a new method for the detection of signals in “noise”, which is based on the premise that the “noise” is chaotic with at least one positive Lyapunov exponent. The method is naturally rooted in nonlinear dynamical systems and relies on neural networks for its implementation. We first present an introductory review of chaos. The subject matter selected for this part of the paper is written with emphasis on experimental studies of chaos using a time series. Specifically, we discuss the issues involved in the reconstruction of chaotic dynamics, attractor dimensions, and Lyapunov exponents. We describe procedures for the estimation of the correlation dimension and the largest Lyapunov exponent. The need for an adequate data length is stressed. In the second part of the paper we apply the chaos-based method to a difficult task: the radar detection of a small target in sea clutter.
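For readers unfamiliar with the chaos tools mentioned above, here is a minimal sketch of delay-coordinate reconstruction and the Grassberger–Procaccia correlation sum used to estimate the correlation dimension; the logistic map stands in for a chaotic time series, and the embedding dimension, delay, and radii are illustrative choices rather than values from the paper.

```python
# Sketch: delay embedding of a scalar series and the correlation-sum estimate
# of the correlation dimension. The logistic map is a stand-in "chaotic noise".
import numpy as np

def logistic_series(n, x0=0.4, r=3.9):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

x = logistic_series(1200)
pts = delay_embed(x, dim=3, tau=1)
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
n = len(pts)

radii = np.logspace(-2, -0.5, 8)
# C(r) = fraction of distinct point pairs closer than r (self-pairs excluded)
c = np.array([(np.sum(dists < r) - n) / (n * (n - 1)) for r in radii])
slope = np.polyfit(np.log(radii), np.log(c), 1)[0]
print("correlation-dimension estimate (slope of log C vs log r):", round(float(slope), 2))
```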

4.
For the compression of medical signals such as the electrocardiogram (ECG), excellent reconstruction quality of a highly compressed signal can be obtained by using a wavelet-based approach. The most widely used objective quality criterion for the compressed ECG is called the percent of root-mean-square difference (PRD). In this paper, given a user-specified PRD, an algorithm is proposed to meet the PRD demand by searching for an appropriate bit rate in an automatic, smooth, and fast manner for the wavelet-based compression. The bit rate searching is modeled as a root-finding problem for a one-dimensional function, where an unknown rate-distortion curve represents the function and the desired rate is the root to be sought. A solution derived from root-finding methods in numerical analysis is proposed. The proposed solution is incorporated in a well-known wavelet-based coding strategy called set partitioning in hierarchical trees. ECG signals taken from the MIT/BIH database are tested, and excellent results in terms of convergence speed, quality variation, and coding performance are obtained.
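The rate search can be pictured as one-dimensional root finding on the rate-distortion curve. The sketch below uses plain bisection with a placeholder prd_at_rate() function standing in for "compress with the wavelet coder at this rate, reconstruct, and compute the PRD"; the curve, bounds, and tolerance are assumptions, not the paper's actual root-finding scheme.

```python
# Sketch: find the bit rate whose PRD matches a user-specified target by
# bisection on a monotone (placeholder) rate-distortion curve.
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) /
                           np.sum(original ** 2))

def prd_at_rate(rate):
    # Placeholder: in practice, encode/decode at this rate and call prd() above.
    return 40.0 * np.exp(-rate / 0.75)

def find_rate_for_prd(target_prd, lo=0.05, hi=8.0, tol=1e-3):
    f = lambda r: prd_at_rate(r) - target_prd
    assert f(lo) > 0 > f(hi), "target PRD must lie between PRD(lo) and PRD(hi)"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

rate = find_rate_for_prd(target_prd=5.0)
print(f"rate ≈ {rate:.3f} bits/sample, PRD ≈ {prd_at_rate(rate):.2f} %")
```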

5.
Vibroarthrography (VAG) is an innovative, objective, noninvasive technique for obtaining diagnostic information concerning the articular cartilage of a joint. Knee VAG signals can be detected using a contact sensor over the skin surface of the knee joint during knee movement such as flexion and/or extension. These measured signals, however, contain significant interference caused by the muscle contraction that is required for knee movement. Quality improvement of VAG signals is an important subject, and crucial in computer-aided diagnosis of cartilage pathology. While simple frequency-domain high-pass (or band-pass) filtering could be used for minimizing muscle contraction interference (MCI), it could eliminate possible overlapping spectral components of the VAG signals. In this work, an adaptive MCI cancellation technique is presented as an alternative technique for filtering VAG signals. Methods of measuring the VAG and reference (MCI) signals are described, with details on MCI identification, characterization, and step-size optimization for the adaptive filter. The performance of the method is evaluated on simulated signals as well as signals obtained from human subjects under isotonic contraction.
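A minimal sketch of the adaptive-cancellation idea, assuming an LMS filter and synthetic signals: the primary channel carries the VAG signal plus leaked interference, the reference channel carries the MCI, and the filter adapts to subtract it. The filter length, step size, and leakage path are illustrative, not the optimized values from the paper.

```python
# Sketch of adaptive interference cancellation with an LMS filter and a
# reference (MCI) channel. Filter length, step size, and signals are toy values.
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.005):
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1: n + 1][::-1]   # most recent reference samples
        e = primary[n] - np.dot(w, x)                # error = interference-cancelled output
        w += 2.0 * mu * e * x                        # LMS weight update
        cleaned[n] = e
    return cleaned

rng = np.random.default_rng(1)
n = 4000
vag = 0.5 * np.sin(2 * np.pi * 0.05 * np.arange(n))               # stand-in for the VAG signal
mci = rng.standard_normal(n)                                      # reference: muscle-contraction interference
leak = 0.6 * mci + 0.3 * np.roll(mci, 1) + 0.1 * np.roll(mci, 2)  # interference as seen at the knee sensor
primary = vag + leak

cleaned = lms_cancel(primary, mci)
print("interference power before:", round(float(np.mean(leak ** 2)), 3))
print("residual power after LMS :", round(float(np.mean((cleaned[500:] - vag[500:]) ** 2)), 3))
```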

6.
A kernel based on the Bessel function of the first kind of order one is proposed to compute the time-frequency distributions of nonstationary signals. This kernel can suppress the cross terms of the distribution effectively. It is shown that the Bessel distribution (the time-frequency distribution using the Bessel kernel) meets most of the desirable properties with high time-frequency resolution. A numerical alias-free implementation of the distribution is presented. Examples of applications in time-frequency analysis of heart sound and Doppler blood flow signals are given to show that the Bessel distribution can be easily adapted to two very different signals for cardiovascular signal processing. By controlling a kernel parameter, this distribution can be used to compute the time-frequency representations of transient deterministic and random signals. The study confirms the potential of the proposed distribution in nonstationary signal analysis.

7.
In this paper, a new kernel-based deformable model is proposed for detecting deformable shapes. To incorporate valuable information for shape detection, such as edge orientations, into the shape representation, a novel scheme based on kernel methods has been utilized. The variation model of a deformable shape is established by a set of training samples of the shape represented in a kernel feature space. The proposed deformable model consists of two parts: a set of basis vectors describing the sample subspace, including the shape representations of the training samples, and a feasibility constraint generated by the one-class support vector machine to describe the feasible region of the training samples in the sample subspace. The aim of the proposed feasibility constraint is to avoid finding invalid shapes. By using the proposed deformable model, an efficient algorithm without initial solutions is developed for shape detection. The proposed approach was tested against real images. Experimental results show the effectiveness of the proposed deformable model and prove the feasibility of the proposed approach.
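The feasibility constraint can be sketched with a one-class SVM trained on valid training-shape representations, which then rejects candidate shapes that fall outside the learned region. The two-dimensional toy "shape parameters" and the nu/gamma settings below are assumptions for illustration only.

```python
# Sketch of the feasibility-constraint idea: a one-class SVM learns the region
# occupied by valid training shapes and rejects candidates outside it.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
train_shapes = rng.normal(loc=[1.0, -0.5], scale=0.2, size=(300, 2))  # toy valid-shape parameters

feasibility = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(train_shapes)

candidates = np.array([[1.05, -0.45],    # close to the training shapes -> feasible
                       [3.00,  2.00]])   # far away -> rejected as an invalid shape
print(feasibility.predict(candidates))   # +1 = feasible, -1 = infeasible
```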

8.
Presents a new, quantitative approach to measuring abnormal intra-QRS signals, using the high-resolution electrocardiogram (HRECG). These signals are conventionally known as QRS “notches and slurs.” They are measured qualitatively and form the basis for the ECG identification of myocardial infarction. The HRECG is used for detection of ventricular late potentials (LP), which are linked with the presence of a reentry substrate for ventricular tachycardia (VT) after a myocardial infarction. LP's are defined as signals from areas of delayed conduction which outlast the normal QRS period. The authors' objective is to quantify very low-level abnormal signals that may not outlast the normal QRS period. In this work, abnormal intra-QRS potentials (AIQP) were characterized by removing the predictable, smooth part of the QRS from the original waveform. This was represented as the impulse response of an ARX parametric model, with model order selected empirically from a training data set. AIQP were estimated using the residual of the modeling procedure. Critical AIQP parameters to separate VT and non-VT subjects were obtained using discriminant functions. Results suggest that AIQP indexes are a new predictive index of the HRECG for VT. The concept of abnormal intra-QRS potentials permits the characterization of pathophysiological signals contained wholly within the normal QRS period, but related to arrhythmogenesis. The new method may have other applications, such as detection of myocardial ischemia and improved ECG identification of the site of myocardial infarction, particularly in the absence of Q waves.

9.
Aineto, M.; Lawson, S. Electronics Letters, 1998, 34(19): 1813-1814
Four-channel real reverberation signals recorded in shallow water are appropriately combined with zero-Doppler synthetic signals, regarded as the contact signals, to evaluate the performance of a least-squares lattice filter when applied to the detection of zero-Doppler contacts buried in reverberation, for different levels of input signal-to-reverberation ratio.

10.
The problem of optimal detection of signal transients with unknown arrival times contaminated by additive Gaussian noise is considered. The transients are assumed to be time continuous and belong to a parameterized family, with the uncertainty about the parameters described by means of an a priori distribution. Under the assumption of a negligible probability that the independent transient observations overlap in time, a likelihood ratio is derived for the problem of detecting an unknown number of transients from the family, each transient with unknown arrival time. The uncertainty about the arrival times is assumed to be equal for all transients and is also described by means of a distribution. Numerical simulations of the performance of detecting a particular transient signal family are presented in the form of receiver operating characteristics (ROCs) for both the optimal detector and the classical generalized likelihood ratio test (GLRT). The results show that the optimal detector yields noticeable performance improvements over the GLRT. Moreover, the results show that the optimal detector may still outperform the GLRT when the true and modeled uncertainties about arrival times no longer agree.

11.
Detection of weak signals in non-Gaussian noise
A locally optimum detector structure is derived for the detection of weak signals in non-Gaussian environments. Optimum performance is obtained by employing a zero-memory nonlinearity prior to the matched filter that would be optimum for detecting the signal were the noise Gaussian. The asymptotic detection performance of the locally optimum detector under non-Gaussian conditions is derived and compared with that of the corresponding detector optimized for operation in Gaussian noise. Numerical results for the asymptotic detection performance are shown for signal detection in noise environments of practical interest.
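A minimal sketch of the locally optimum structure: each sample is passed through the zero-memory nonlinearity g(x) = -f'(x)/f(x) given by the noise density f and then correlated with the known signal. For Laplacian noise g(x) reduces to sign(x); the signal shape, amplitude, and Monte-Carlo setup below are illustrative and not taken from the paper.

```python
# Sketch: locally optimum (LO) detector = zero-memory nonlinearity + matched
# filter, compared with the plain linear (Gaussian-optimal) matched filter
# under Laplacian noise. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 512
s = np.cos(2 * np.pi * 0.05 * np.arange(n))   # known weak-signal shape

def lo_statistic(x, s):
    return np.dot(np.sign(x), s)              # ZNL for Laplacian noise, then correlate

def linear_statistic(x, s):
    return np.dot(x, s)                       # optimum only if the noise were Gaussian

def detection_rate(stat, amplitude, trials=2000):
    # threshold set for ~1% false-alarm rate from noise-only trials
    noise_only = np.array([stat(rng.laplace(size=n), s) for _ in range(trials)])
    thr = np.quantile(noise_only, 0.99)
    hits = np.array([stat(amplitude * s + rng.laplace(size=n), s) for _ in range(trials)])
    return float(np.mean(hits > thr))

print("LO detector Pd               :", detection_rate(lo_statistic, 0.15))
print("linear (Gaussian-optimal) Pd :", detection_rate(linear_statistic, 0.15))
```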

12.
Nonlinear distortion of a signal passing through a system may be caused by a number of factors. One of those factors, a limiter-like transfer function, is considered. The nonlinear distortion causes a change in the probability density function (PDF) of the signal. The PDF of the signal can be characterized by the coefficients of a fifth-order polynomial fitted to the PDF curve. The coefficients are used as a vector input to an artificial neural network trained to classify the vector as belonging to a distorted or undistorted audio signal. Results show that the artificial neural network is able to classify signals, with PDFs indicating the presence of significant high-amplitude components, into distorted or undistorted. A low-amplitude signal will not be distorted during its passage through a nonlinear system and therefore the output will be classified as "not distorted". This gives rise to what appear to be errors in the classification of signals. However, the technique developed identifies distortion in the signal and not in the system through which the signal has passed.
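A hedged sketch of the described pipeline: estimate the signal's PDF from a histogram, fit a fifth-order polynomial to the PDF curve, and feed the six coefficients to a small neural network that labels the signal as distorted or undistorted. The sinusoidal signal model, the clipping level, and the network size are assumptions, not the audio material or network used in the paper.

```python
# Sketch: fifth-order polynomial fit to the histogram-estimated PDF as the
# feature vector for a small distortion/no-distortion classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def pdf_poly_features(x, order=5, bins=50):
    pdf, edges = np.histogram(x, bins=bins, range=(-1.5, 1.5), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.polyfit(centers, pdf, order)            # 6 coefficients as the feature vector

def make_signal(distorted):
    t = np.arange(4096)
    x = 0.9 * np.sin(2 * np.pi * rng.uniform(0.001, 0.01) * t + rng.uniform(0, 2 * np.pi))
    x += 0.05 * rng.standard_normal(t.size)
    return np.clip(x, -0.6, 0.6) if distorted else x  # limiter-like transfer function

X = np.array([pdf_poly_features(make_signal(d)) for d in [False] * 200 + [True] * 200])
y = np.array([0] * 200 + [1] * 200)

idx = rng.permutation(len(y))
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
clf.fit(X[idx[:300]], y[idx[:300]])
print("test accuracy:", clf.score(X[idx[300:]], y[idx[300:]]))
```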

13.
Bayesian kernel methods for analysis of functional neuroimages
We propose an approach to analyzing functional neuroimages in which 1) regions of neuronal activation are described by a superposition of spatial kernel functions, the parameters of which are estimated from the data and 2) the presence of activation is detected by means of a generalized likelihood ratio test (GLRT). Kernel methods have become a staple of modern machine learning. Herein, we show that these techniques show promise for neuroimage analysis. In an on-off design, we model the spatial activation pattern as a sum of an unknown number of kernel functions of unknown location, amplitude, and/or size. We employ two Bayesian methods of estimating the kernel functions. The first is a maximum a posteriori (MAP) estimation method based on a Reversible-Jump Markov-chain Monte-Carlo (RJMCMC) algorithm that searches for both the appropriate model complexity and parameter values. The second is a relevance vector machine (RVM), a kernel machine that is known to be effective in controlling model complexity (and thus discouraging overfitting). In each method, after estimating the activation pattern, we test for local activation using a GLRT. We evaluate the results using receiver operating characteristic (ROC) curves for simulated neuroimaging data and example results for real fMRI data. We find that, while RVM and RJMCMC both produce good results, RVM requires far less computation time, and thus appears to be the more promising of the two approaches.

14.
Zhidkov, S.V. Electronics Letters, 2005, 41(25): 1383-1384
Code-division multiplexing (CDM) is a robust transmission scheme recently adopted for satellite multimedia broadcasting systems. A drawback of CDM is its high peak-to-average power ratio, which can be reduced by clipping the baseband CDM signal. However, clipping introduces in-band noise that may considerably degrade system performance. Proposed is an iterative decision-directed technique for detecting clipped CDM signals. The performance of the algorithm is studied by means of computer simulation. Simulation results show that the proposed technique can provide significant performance improvement compared to a conventional (linear) detection of clipped CDM signals.
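One way to picture the iterative decision-directed idea is sketched below: detect the data from the clipped composite, rebuild the unclipped composite from those tentative decisions, estimate the clipping distortion, subtract it, and detect again. The Walsh spreading, load, and clipping level are assumptions, channel noise is omitted, and this is not claimed to be the exact algorithm of the letter.

```python
# Sketch of decision-directed cancellation of clipping noise in a CDM signal.
# All parameters are illustrative; the channel is noiseless for simplicity.
import numpy as np

def hadamard(n):
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

rng = np.random.default_rng(6)
L, K, n_sym = 64, 64, 200                    # spreading length, users, symbols per user
codes = hadamard(L)[:K]                      # Walsh spreading codes, one row per user
bits = rng.choice([-1.0, 1.0], size=(K, n_sym))

s = codes.T @ bits                           # composite CDM signal, shape (L, n_sym)
clip_level = 0.8 * np.sqrt(K)                # aggressive clipping at 0.8x composite RMS
r = np.clip(s, -clip_level, clip_level)      # received (clipped) signal

def detect(x):
    return np.sign(codes @ x)                # despread and slice

det1 = detect(r)                                          # pass 1: detect from the clipped signal
s_hat = codes.T @ det1                                    # rebuild composite from tentative decisions
d_hat = np.clip(s_hat, -clip_level, clip_level) - s_hat   # estimated clipping distortion
det2 = detect(r - d_hat)                                  # pass 2: detect after cancelling it

print("bit errors, pass 1:", int(np.sum(det1 != bits)))
print("bit errors, pass 2:", int(np.sum(det2 != bits)))
```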

15.
16.
Search-efficient methods of detection of cyclostationary signals
Conventional signal processing methods that exploit cyclostationarity for the detection of weak signals in noise require fine resolution in cycle frequency for long integration time. Hence, in cases of weak-signal detection and broadband search, implementation problems arise, such as excessive computational complexity, storage, and search requirements. This paper introduces two new search-efficient methods of cycle detection, namely the autocorrelated cyclic autocorrelation (ACA) and the autocorrelated cyclic periodogram (ACP) methods. For a given level of performance reliability, the ACA and ACP methods allow a much larger resolution width in cycle frequency to be used in their implementations, compared to the conventional methods of cyclic spectral analysis. Thus, the amount of storage and search can be substantially reduced. Analyses of the two methods, performance comparison, and computer simulation results are presented.
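As background, the conventional quantity these detectors build on is the cyclic autocorrelation R_x^alpha(tau) = (1/N) * sum_n x[n+tau] x[n] exp(-j 2 pi alpha n). The sketch below evaluates it at tau = 0 for a BPSK-modulated carrier in noise, which produces a peak at the cycle frequency alpha = 2*f0; it illustrates the plain statistic, not the search-efficient ACA/ACP methods proposed in the paper, and all parameters are illustrative.

```python
# Sketch: cyclic autocorrelation of a BPSK-modulated carrier in noise,
# scanned over cycle frequency alpha. A peak appears at alpha = 2*f0.
import numpy as np

def cyclic_autocorr(x, alpha, tau=0):
    n = np.arange(len(x) - tau)
    return np.mean(x[n + tau] * x[n] * np.exp(-2j * np.pi * alpha * n))

rng = np.random.default_rng(4)
f0, n_sym, sps = 0.1, 256, 16                      # carrier (cycles/sample), symbols, samples/symbol
symbols = rng.choice([-1.0, 1.0], n_sym).repeat(sps)
n = np.arange(symbols.size)
x = symbols * np.cos(2 * np.pi * f0 * n) + rng.standard_normal(symbols.size)

alphas = np.linspace(0.05, 0.35, 121)
caf = np.abs([cyclic_autocorr(x, a) for a in alphas])
print("peak at alpha ≈", round(float(alphas[np.argmax(caf)]), 3), "(expected 2*f0 =", 2 * f0, ")")
```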

17.
Ventricular fibrillation (VF) is the most serious variety of arrhythmia and requires quick and accurate detection to save lives. In this paper, we propose a new time-domain algorithm, called threshold crossing sample count (TCSC), which is an improved version of the threshold crossing interval (TCI) algorithm for VF detection. The algorithm is based on an important feature of the VF signal which relies on the random behavior of the electrical heart vector. By two simple operations, comparison and counting, the technique calculates an effective measure which is used to separate life-threatening VF from other heart rhythms. To assess the performance of the algorithm, the method is applied to the complete MIT-BIH arrhythmia and CU databases, and promising performance is observed. Seven other classical and new VF detection algorithms, including TCI, have been simulated, and comparative performance results in terms of different quality parameters are presented. The TCSC algorithm yields the highest value of the area under the receiver operating characteristic curve (AUC). The new algorithm shows strong potential for clinical applications requiring fast and accurate detection of VF.
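A minimal sketch of the threshold-crossing sample-count idea: normalize each analysis window to unit peak amplitude and count the fraction of samples whose magnitude exceeds a fixed threshold; a VF-like oscillation stays above the threshold far longer than a spiky QRS rhythm. The 0.2 threshold follows common descriptions of TCSC, but the toy waveforms and window handling are assumptions rather than the paper's exact procedure.

```python
# Sketch of the TCSC measure on toy waveforms: fraction of samples above a
# fixed threshold after normalizing the window to unit peak amplitude.
import numpy as np

def tcsc_measure(x, threshold=0.2):
    x = x / np.max(np.abs(x))                      # normalize segment to unit peak
    return 100.0 * np.mean(np.abs(x) > threshold)  # percent of samples above threshold

fs, dur = 250, 3                                   # 250 Hz, 3-second analysis window
t = np.arange(fs * dur) / fs

# toy normal rhythm: narrow QRS-like spikes once per second on a flat baseline
nsr = np.zeros_like(t)
for beat in (0.5, 1.5, 2.5):
    nsr += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))

# toy VF: coarse irregular oscillation around 5 Hz
rng = np.random.default_rng(5)
vf = np.sin(2 * np.pi * 5 * t + 2 * np.pi * np.cumsum(0.02 * rng.standard_normal(t.size)))

print("TCSC (normal) =", round(tcsc_measure(nsr), 1), "%")   # small
print("TCSC (VF)     =", round(tcsc_measure(vf), 1), "%")    # large
```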

18.
Pritchard, J.A.S. Electronics Letters, 1985, 21(25): 1183-1185
A novel approach to the uncommitted, real-time demodulation of signals using statistical classifiers is described. Preliminary results are presented together with a brief discussion of current work.

19.
A correlation is presented between experimental data obtained under various operating conditions and a modified coherent rotation model which predicts the behavior of the residual signal and the sideband noise. The authors identify an operating regime that eliminates the residual signal and gives rise to a reduction in noise levels by a factor of five over previously reported results. Minimum detectable fields of 11±2 pT/√Hz at 1 Hz and 38±8 pT/√Hz at 0.2 Hz are achieved.

20.
Detection of non-Gaussian signals using integrated polyspectrum
We consider the problem of detecting an unknown, random, stationary, non-Gaussian signal in Gaussian noise of unknown correlation structure. The same framework applies if one desires to determine whether a given random signal is non-Gaussian. The most commonly used method for detection of random signals is the so-called energy detector, which cannot distinguish between Gaussian and non-Gaussian signals and requires knowledge of the noise power. Recently, the use of the bispectrum and/or trispectrum of the signal has been suggested for detection of non-Gaussian signals. The higher-order-spectra-based detectors do not require knowledge of the noise statistics if the noise is Gaussian. In this paper, we suggest the use of an integrated polyspectrum (bispectrum or trispectrum) to improve the computational efficiency of the detectors based on the polyspectrum and to possibly further enhance their detection performance. We investigate conditions under which use of the integrated polyspectrum is appropriate. The detector structure is derived, and its performance is evaluated via simulations and comparisons with several other existing approaches.
