1.
2.
A new approach for convolutive blind source separation (BSS) is proposed that explicitly exploits the second-order nonstationarity of signals and operates in the frequency domain. The algorithm accommodates a penalty function within the cross-power spectrum-based cost function and thereby converts the separation problem into a joint diagonalization problem with unconstrained optimization. This leads to a new member of the family of joint diagonalization criteria and a modification of the search direction of the gradient-based descent algorithm. Using this approach, not only can the degenerate solution induced by a null unmixing matrix and the effect of large errors within the elements of covariance matrices at low-frequency bins be automatically removed, but a unifying view of joint diagonalization with unitary or nonunitary constraints is also provided. Numerical experiments verify the performance of the new method and show that a suitable penalty function may lead the algorithm to faster convergence and better performance in separating convolved speech signals, particularly in terms of shape preservation and amplitude-ambiguity reduction, compared with conventional second-order algorithms for convolutive mixtures that exploit signal nonstationarity.
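The core idea of the abstract, joint diagonalization with a penalty term solved by gradient descent, can be sketched on real-valued covariance matrices. This is a generic illustration and not the authors' exact algorithm: the off-diagonal Frobenius cost, the log-determinant penalty (which rules out the degenerate all-zero unmixing matrix), the backtracking step size, and all dimensions below are assumptions made for the demo.

```python
import numpy as np

def off_cost(W, Rs, beta):
    """Off-diagonal Frobenius cost plus a -beta*log|det W| penalty."""
    J = -beta * np.log(abs(np.linalg.det(W)))
    for R in Rs:
        M = W @ R @ W.T
        J += np.sum((M - np.diag(np.diag(M))) ** 2)
    return J

def grad(W, Rs, beta):
    G = -beta * np.linalg.inv(W).T
    for R in Rs:
        M = W @ R @ W.T
        O = M - np.diag(np.diag(M))
        G += 4.0 * O @ W @ R          # gradient of the off-diagonal term (R symmetric)
    return G

def joint_diagonalize(Rs, beta=0.1, iters=200, lr=1e-2):
    W = np.eye(Rs[0].shape[0])
    J = off_cost(W, Rs, beta)
    for _ in range(iters):
        G, step = grad(W, Rs, beta), lr
        while step > 1e-12:           # backtracking keeps the cost monotonically decreasing
            W_try = W - step * G
            if abs(np.linalg.det(W_try)) > 1e-12 and off_cost(W_try, Rs, beta) < J:
                W, J = W_try, off_cost(W_try, Rs, beta)
                break
            step /= 2
    return W

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 2 * np.eye(3)                 # hypothetical mixing matrix
Rs = [A @ np.diag(rng.uniform(0.5, 2.0, 3)) @ A.T for _ in range(5)]
W = joint_diagonalize(Rs)
```

With a common mixing matrix A shared by all covariance matrices, the learned W drives their off-diagonal energy down while the penalty keeps W away from the trivial zero solution.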
3.
Ahmad Sanei Mark French 《International Journal of Adaptive Control and Signal Processing》2004,18(4):403-421
We consider standard robust adaptive control designs based on the dead-zone and projection modifications, and compare their performance with respect to a worst-case transient cost functional penalizing the L∞ norm of the output, control and control derivative. If a bound on the L∞ norm of the disturbance is known, it is shown that the dead-zone controller outperforms the projection controller if the a priori information on the uncertainty level is sufficiently conservative. The second result shows that the projection controller is superior to the dead-zone controller when the a priori information on the disturbance level is sufficiently conservative. For conceptual clarity the results are presented on a non-linear scalar system with a single uncertain parameter; generalizations are briefly discussed. Copyright © 2004 John Wiley & Sons, Ltd.
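The dead-zone idea can be sketched on a scalar system of the kind the authors use for clarity: adaptation of the uncertain parameter is frozen whenever the state lies inside a zone sized from the assumed disturbance bound, preventing the estimate from drifting on noise alone. All numbers below (plant parameter, gains, disturbance) are invented for the demo, and the Euler simulation is a sketch, not the paper's analysis.

```python
import numpy as np

def simulate_dead_zone(a=1.0, c=2.0, gamma=1.0, delta=0.15,
                       x0=1.0, h=0.01, T=20.0):
    """Plant: x' = a*x + u + d(t); control u = -(a_hat + c)*x.
    Dead-zone: adapt a_hat only while |x| exceeds delta (>= sup|d|)."""
    steps = int(T / h)
    x, a_hat = x0, 0.0
    xs = np.empty(steps)
    for k in range(steps):
        d = 0.1 * np.sin(h * k)               # bounded disturbance, |d| <= 0.1
        u = -(a_hat + c) * x
        if abs(x) > delta:                    # dead-zone gate on adaptation
            a_hat += h * gamma * x * x
        x += h * (a * x + u + d)
        xs[k] = x
    return xs, a_hat

xs, a_hat = simulate_dead_zone()
```

The state decays into a small neighbourhood of the origin while the parameter estimate stays bounded, which is exactly the drift the dead-zone is meant to prevent.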
4.
The variation of the number-average molecular weight (M̄n) of polyethylene terephthalate with respect to crystallization temperature and time, and solid-state polymerization (SSP) time, was studied using the response surface experimental design method. All experiments were conducted in a fluidized bed reactor. M̄n values were calculated with the Mark–Houwink equation after determining the intrinsic viscosity (IV) of the samples. Two suitable models were proposed for M̄n and IV, based on the regression coefficient. It was observed that M̄n increases with decreasing crystallization temperature and with increasing crystallization time and SSP time. Statistical calculations showed that SSP time is the most important parameter. To achieve maximum M̄n, the crystallization time, crystallization temperature and SSP time were determined to be 60 min, 160 °C and 8 h, respectively. Density measurements were used to study the overall crystallinity of the samples; the density results revealed that the percentage of crystallinity is not the only factor that affects the M̄n of the polymer. Differential scanning calorimetry was used to analyze the thermal properties of the samples. All samples showed two melting peaks, and the lower melting peak was found to be related to the isothermal crystallization temperature. Polarized light microscopy was used to study the spherulitic structures of polymer films after crystallization. The sample with the smallest spherulite size had the maximum M̄n, equal to 26,000 g/mol.
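The molecular-weight calculation mentioned in the abstract follows from inverting the Mark–Houwink relation [η] = K·Mᵃ. The constants below are typical literature values for PET in a phenol/tetrachloroethane solvent and are assumptions for illustration, not values taken from this study.

```python
def mn_from_iv(iv, K=4.68e-4, a=0.68):
    """Invert the Mark-Houwink equation [eta] = K * M**a.
    iv: intrinsic viscosity in dL/g; K, a: assumed solvent-specific constants."""
    return (iv / K) ** (1.0 / a)

m_low = mn_from_iv(0.50)    # molecular weight at a lower intrinsic viscosity
m_high = mn_from_iv(0.80)   # molecular weight at a higher intrinsic viscosity
```

Since a > 0, M̄n increases monotonically with IV, which is why the two quantities could be modelled jointly in the study.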
5.
Sparsity has been shown to be very useful in source separation of multichannel observations. However, in most cases the sources of interest are not sparse in their current domain, and one needs to sparsify them using a known transform or dictionary. If such a priori knowledge about the underlying sparse domain of the sources is not available, current algorithms will fail to successfully recover the sources. In this paper, we address this problem and attempt to give a solution by fusing dictionary learning into the source separation. We first define a cost function based on this idea and propose an extension of the denoising method in the work of Elad and Aharon to minimize it. Because such a direct extension is impractical, we then propose a feasible approach. In the proposed hierarchical method, a local dictionary is adaptively learned for each source along with the separation. This process improves the quality of source separation even in noisy situations. We also explore the possibility of adding global priors to the proposed method. The results of our experiments are promising and confirm the strength of the proposed approach.
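The dictionary-learning building block that the abstract fuses into separation can be sketched with a crude sparse-coding step and a MOD-style least-squares dictionary update. This is a generic stand-in, not the authors' hierarchical method or the Elad–Aharon (K-SVD) update; the hard-threshold coder and all sizes are assumptions.

```python
import numpy as np

def hard_threshold(C, k):
    """Keep the k largest-magnitude coefficients in each column."""
    out = np.zeros_like(C)
    idx = np.argsort(np.abs(C), axis=0)[-k:, :]
    cols = np.arange(C.shape[1])
    out[idx, cols] = C[idx, cols]
    return out

def learn_dictionary(Y, n_atoms, k, iters=25, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        C = hard_threshold(D.T @ Y, k)        # crude sparse coding
        D = Y @ np.linalg.pinv(C)             # MOD: least-squares dictionary update
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D

def recon_error(D, Y, k):
    C = hard_threshold(D.T @ Y, k)
    return np.linalg.norm(Y - D @ C) / np.linalg.norm(Y)

rng = np.random.default_rng(1)
D_true, _ = np.linalg.qr(rng.standard_normal((16, 16)))   # orthonormal ground-truth atoms
C_true = hard_threshold(rng.standard_normal((16, 200)), 3)
Y = D_true @ C_true                                       # synthetic sparse signals
D0 = rng.standard_normal((16, 16))
D0 /= np.linalg.norm(D0, axis=0)
D = learn_dictionary(Y, n_atoms=16, k=3, iters=25, seed=2)
```

In the paper's setting a loop of this kind would run per estimated source, interleaved with the separation updates, so the dictionary adapts to whatever domain each source happens to be sparse in.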
6.
Loukianos Spyrou Samaneh Kouchaki Saeid Sanei 《Journal of Signal Processing Systems》2018,90(2):273-284
Electroencephalography (EEG) signals arise as mixtures of various neural processes which occur in particular spatial, frequency, and temporal brain locations. In classification paradigms, algorithms are developed that can distinguish between these processes. In this work, we apply tensor factorisation to a set of EEG data from a group of epileptic patients and factorise the data into three modes: space, time, and frequency, with each mode containing a number of components or signatures. We train separate classifiers on various feature sets corresponding to complementary combinations of those modes and components and test the classification accuracy for each set. The relative influence of the respective spatial, temporal, or frequency signatures on the classification accuracy can then be analysed and useful interpretations can be made. Additionally, we show that tensor factorisation allows dimensionality reduction, by evaluating the classification performance with regard to the number of components in each mode and by rejecting components with insignificant contribution to the classification accuracy.
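A three-way (space × time × frequency) factorisation of the kind described can be sketched with a plain CP decomposition fitted by alternating least squares. This is a generic CP-ALS, not necessarily the factorisation model used in the paper, and the tensor sizes and rank are invented.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=100, seed=0):
    """Rank-'rank' CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, J * K)                       # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)    # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)    # mode-3 unfolding
    for _ in range(iters):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

rng = np.random.default_rng(0)
A0, B0, C0 = (rng.standard_normal((d, 3)) for d in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)         # exact rank-3 tensor
A, B, C = cp_als(T, rank=3)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

Each column triple (A[:, r], B[:, r], C[:, r]) plays the role of one spatial/temporal/spectral signature; dropping columns is the dimensionality reduction the abstract describes.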
7.
Tracey K. M. Lee Mohammed Belkhatir Saeid Sanei 《Multimedia Tools and Applications》2014,72(3):2833-2869
Global security concerns have driven a proliferation of video surveillance devices. Intelligent surveillance systems seek to discover possible threats automatically and raise alerts. Being able to identify the surveyed object can help determine its threat level. The current generation of devices provides digital video data to be analysed for time-varying features to assist in the identification process. Commonly, people queue up to access a facility and approach a video camera in full frontal view. In this environment, a variety of biometrics are available, for example gait, which includes temporal features such as stride period. Gait can be measured unobtrusively at a distance. The video data will also include face features, which are short-range biometrics. In this way, one can combine biometrics naturally using one set of data. In this paper we survey current techniques of gait recognition and modelling, together with the environment in which the research was conducted. We also discuss in detail the issues arising from deriving gait data, such as perspective and occlusion effects, together with the associated computer vision challenges of reliably tracking human movement. After highlighting these issues and challenges related to gait processing, we discuss frameworks combining gait with other biometrics. We then provide motivations for a novel paradigm in biometrics-based human recognition, namely the use of the fronto-normal view of gait as a far-range biometric combined with biometrics operating at a near distance.
8.
Hamed Azami Saeid Sanei Karim Mohammadi Hamid Hassanpour 《Digital Signal Processing》2013,23(4):1103-1114
Automatic segmentation of non-stationary signals such as the electroencephalogram (EEG), electrocardiogram (ECG) and brightness of galactic objects has many applications. In this paper an improved segmentation method for non-stationary signals, based on fractal dimension (FD) and evolutionary algorithms (EAs), is proposed. After a Kalman filter (KF) is used to reduce existing noise, the FD, which can detect changes in both the amplitude and frequency of the signal, is applied to reveal segments of the signal. To select two acceptable parameters of the FD, two established EAs, namely the genetic algorithm (GA) and the imperialist competitive algorithm (ICA), are used. The proposed approach is applied to synthetic multi-component signals, real EEG data, and brightness changes of galactic objects. The proposed methods are compared with well-known existing algorithms such as the improved nonlinear energy operator (INLEO), Varri's method, and the wavelet generalized likelihood ratio (WGLR) method. The simulation results demonstrate that segmentation using the KF, FD, and EAs achieves greater accuracy, which confirms the significance of this algorithm.
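The role of the fractal dimension as a change detector can be illustrated with Katz's estimator evaluated over sliding windows. The Katz formula and the synthetic two-regime signal below are generic assumptions, not the specific FD variant or parameter values chosen in the paper (absolute FD values from this estimator also depend on the sampling rate; only the contrast between windows matters here).

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension: responds to changes in both amplitude and frequency."""
    L = np.sum(np.abs(np.diff(x)))          # total curve length
    d = np.max(np.abs(x - x[0]))            # max distance from the first sample
    n = len(x) - 1
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

fs = 100
t = np.arange(0, 2.0, 1.0 / fs)
# two regimes: a 2 Hz sine for the first second, an 8 Hz sine for the second
sig = np.where(t < 1.0, np.sin(2 * np.pi * 2 * t), np.sin(2 * np.pi * 8 * t))
win = 50
fds = np.array([katz_fd(sig[i:i + win]) for i in range(0, len(sig) - win + 1, win)])
```

The jump in FD between the windows before and after t = 1 s marks the segment boundary, which is the cue the EA-tuned segmenter thresholds.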
9.
A gradient-based alternating minimization approach for optimization of the measurement matrix in compressive sensing
In this paper the problem of optimizing the measurement matrix in the compressive (also called compressed) sensing framework is addressed. In compressed sensing, a measurement matrix that has small coherence with the sparsifying dictionary (or basis) is of interest. Random measurement matrices have been used so far, since they present small coherence with almost any sparsifying dictionary. However, it has recently been shown that optimizing the measurement matrix to decrease the coherence is possible and can improve performance. Based on this conclusion, we propose an alternating minimization approach for this purpose, a variant of Grassmannian frame design modified by a gradient-based technique. The objective is to optimize an initially random measurement matrix into a matrix with smaller coherence than the initial one. We conducted several experiments to measure the performance of the proposed method and compare it with existing approaches. The results are encouraging and indicate improved reconstruction quality when the proposed method is used.
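The coherence-driven optimization can be sketched with an Elad-style alternation: shrink the large off-diagonal entries of the Gram matrix of the effective dictionary, then project back to the feasible rank. The shrinkage factor, the matrix sizes, and the implicit identity sparsifying basis below are assumptions for the demo, not the paper's exact gradient-based variant.

```python
import numpy as np

def coherence(Phi):
    """Mutual coherence: largest off-diagonal Gram entry of the normalized columns."""
    Pn = Phi / np.linalg.norm(Phi, axis=0)
    G = Pn.T @ Pn
    return np.max(np.abs(G - np.diag(np.diag(G))))

def optimize_measurement(M, iters=30, gamma=0.6):
    """Alternate Gram shrinkage with projection back to rank m (Elad-style sketch)."""
    m = M.shape[0]
    Phi = M / np.linalg.norm(M, axis=0)
    for _ in range(iters):
        G = Phi.T @ Phi
        G = np.diag(np.diag(G)) + gamma * (G - np.diag(np.diag(G)))  # shrink off-diagonals
        w, V = np.linalg.eigh(G)
        top = np.argsort(w)[::-1][:m]                  # keep the m largest eigenvalues
        Phi = (V[:, top] * np.sqrt(np.maximum(w[top], 0.0))).T
        Phi /= np.linalg.norm(Phi, axis=0) + 1e-12
    return Phi

rng = np.random.default_rng(0)
M = rng.standard_normal((15, 40))          # random initial measurement matrix
M_opt = optimize_measurement(M)
```

Because the shrunk Gram matrix generally has rank above m, the eigenvalue truncation is what forces the result back to a realizable 15 × 40 measurement matrix; the alternation trades these two constraints off.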
10.
Ehsan Ardjmand William A. Young II Omid Sanei Bajgiran Bizhan Aminipour 《International Journal of Production Research》2016,54(13):3885-3905
The profitability of every manufacturing plant depends on its pricing strategy and on a production plan that supports customer demand. In this paper, a new robust multi-product, multi-period model for planning and pricing is proposed. Demand is considered to be uncertain and price-dependent; thus, for each price, a range of demands is possible. Unsatisfied demand is considered lost, and hence no backlogging is allowed. The objective is to maximise the profit over the planning horizon, which consists of a finite number of periods. To solve the proposed model, a modified unconscious search (US) algorithm is introduced. Several artificial test problems, along with a real-case implementation of the model in a textile manufacturing plant, are used to show the applicability of the model and the effectiveness of the US in tackling this problem. The results show that the proposed model can improve the profitability of the plant and that the US is able to find high-quality solutions in a very short time compared to exact methods.
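The robust pricing idea, that each price induces a range of demands and the planner guards against the worst one, can be shown on a deliberately tiny single-period toy. The linear demand curve, the uncertainty band, all numbers, and the grid search (a stand-in for the unconscious search metaheuristic) are assumptions for illustration only.

```python
import numpy as np

def worst_case_profit(price, a=100.0, b=4.0, dev=10.0, unit_cost=5.0, capacity=80.0):
    """Toy robust pricing: demand is price-dependent and uncertain, lying in
    [a - b*price - dev, a - b*price + dev]; unmet demand is lost (no backlogging)."""
    d_low = max(a - b * price - dev, 0.0)    # adversarial (lowest) demand in the range
    return (price - unit_cost) * min(d_low, capacity)

prices = np.arange(5.0, 25.0, 0.25)
best_price = max(prices, key=worst_case_profit)
```

With these numbers the worst-case profit (price − 5)·(90 − 4·price) is concave in price, so the grid recovers its unique maximiser; the paper's multi-product, multi-period model replaces this one-line objective with a full planning problem that the US explores instead of a grid.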