1.
2.
A new approach for convolutive blind source separation (BSS) is proposed that explicitly exploits the second-order nonstationarity of signals and operates in the frequency domain. The algorithm incorporates a penalty function into the cross-power-spectrum-based cost function and thereby converts the separation problem into a joint diagonalization problem with unconstrained optimization. This leads to a new member of the family of joint diagonalization criteria and a modification of the search direction of the gradient-based descent algorithm. Using this approach, the degenerate solution induced by a trivial unmixing matrix and the effect of large errors within the elements of the covariance matrices at low-frequency bins are automatically removed, and, in addition, a unifying view of joint diagonalization under unitary or nonunitary constraints is provided. Numerical experiments verify the performance of the new method and show that a suitable penalty function can lead to faster convergence and better separation of convolved speech signals, in particular in terms of shape preservation and amplitude-ambiguity reduction, compared with conventional second-order-statistics-based algorithms for convolutive mixtures that exploit signal nonstationarity.
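The penalized joint-diagonalization idea can be illustrated with a minimal numpy sketch. This is not the paper's exact frequency-domain algorithm: it assumes real-valued symmetric covariance matrices, a plain off-diagonal-energy cost, and a log-determinant penalty (one common choice for excluding the degenerate zero solution); all sizes and gains are illustrative.

```python
import numpy as np

np.random.seed(0)
n, K = 3, 5

# Synthetic "covariance" matrices sharing one diagonalizer: C_k = A D_k A^T.
A = np.eye(n) + 0.3 * np.random.randn(n, n)
Cs = [A @ np.diag(np.random.rand(n) + 0.5) @ A.T for _ in range(K)]

def off_energy(W):
    """Sum of squared off-diagonal entries of W C_k W^T over all k."""
    total = 0.0
    for C in Cs:
        M = W @ C @ W.T
        total += np.sum((M - np.diag(np.diag(M))) ** 2)
    return total

W = np.eye(n)
beta, lr = 0.1, 0.002
initial = off_energy(W)
for _ in range(3000):
    grad = np.zeros((n, n))
    for C in Cs:
        M = W @ C @ W.T
        E = M - np.diag(np.diag(M))      # off-diagonal residual
        grad += 4.0 * E @ W @ C          # gradient of the Frobenius-norm cost
    grad -= beta * np.linalg.inv(W).T    # gradient of the -beta*log|det W| penalty
    W -= lr * grad
final = off_energy(W)
print(initial, final)
```

The penalty term keeps det(W) away from zero, so the trivial solution W = 0 cannot be reached even though the optimization is unconstrained.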
3.
We consider standard robust adaptive control designs based on the dead-zone and projection modifications, and compare their performance with respect to a worst-case transient cost functional penalizing the L∞ norm of the output, the control, and the control derivative. If a bound on the L∞ norm of the disturbance is known, it is shown that the dead-zone controller outperforms the projection controller when the a priori information on the uncertainty level is sufficiently conservative. The second result shows that the projection controller is superior to the dead-zone controller when the a priori information on the disturbance level is sufficiently conservative. For conceptual clarity the results are presented for a non-linear scalar system with a single uncertain parameter, and generalizations are briefly discussed. Copyright © 2004 John Wiley & Sons, Ltd.
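A dead-zone modification can be sketched on a scalar plant. The plant, gains, disturbance, and dead-zone width below are illustrative choices, not the paper's design: x' = θx + u + d with unknown θ, a certainty-equivalence control law, and adaptation frozen whenever |x| is small enough that the bounded disturbance could explain it.

```python
import math

# Scalar plant x' = theta*x + u + d with unknown theta and bounded disturbance.
theta = 1.0          # true (unknown) parameter
d_bound = 0.5        # assumed known bound on |d(t)|
k, gamma = 2.0, 2.0  # feedback and adaptation gains (illustrative)
delta = d_bound / k  # dead-zone width tied to the disturbance bound

dt, T = 1e-3, 10.0
x, theta_hat = 1.0, 0.0
max_abs_x = abs(x)
t = 0.0
while t < T:
    d = d_bound * math.sin(3.0 * t)
    u = -k * x - theta_hat * x          # certainty-equivalence control
    # Dead-zone: freeze adaptation when |x| <= delta, so the disturbance
    # cannot drive unbounded parameter drift.
    theta_hat_dot = gamma * x * x if abs(x) > delta else 0.0
    x += dt * (theta * x + u + d)
    theta_hat += dt * theta_hat_dot
    max_abs_x = max(max_abs_x, abs(x))
    t += dt
print(x, theta_hat, max_abs_x)
```

The adaptation law θ̂' = γx² is the standard Lyapunov-based choice for this plant; the dead-zone trades asymptotic accuracy (x only converges to a band of width related to δ) for a bounded parameter estimate.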
4.
The variation of the number-average molecular weight ($\overline{M}_n$) of polyethylene terephthalate with respect to crystallization temperature, crystallization time, and solid-state polymerization (SSP) time was studied using the response-surface experimental design method. All experiments were conducted in a fluidized bed reactor. $\overline{M}_n$ values were calculated from the Mark-Houwink equation after determining the intrinsic viscosity (IV) of the samples. Two suitable models were proposed for $\overline{M}_n$ and IV, based on the regression coefficient. It was observed that $\overline{M}_n$ increases with decreasing crystallization temperature and with increasing crystallization time and SSP time. Statistical calculations showed that SSP time is the most important parameter. To achieve the maximum $\overline{M}_n$, the crystallization time, crystallization temperature, and SSP time were determined to be 60 min, 160 °C, and 8 h, respectively. Density measurements were used to study the overall crystallinity of the samples; the density results revealed that the degree of crystallinity is not the only factor that affects the $\overline{M}_n$ of the polymer. Differential scanning calorimetry was used to analyze the thermal properties of the samples. All samples showed two melting peaks, and the lower melting peak was related to the isothermal crystallization temperature. Polarized light microscopy was used to study the spherulitic structures of the polymer films after crystallization. The sample with the smallest spherulite size had the maximum $\overline{M}_n$, equal to 26,000 g/mol.
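The Mark-Houwink step from IV to molecular weight takes only a few lines. The constants K and a below are one published parameter set for PET; they depend on solvent and temperature and may differ from the values used in the study.

```python
# Mark-Houwink relation: IV = K * M^a  =>  M = (IV / K) ** (1 / a)
K = 7.44e-4   # dL/g, illustrative literature value for PET
a = 0.648     # illustrative literature value for PET

def mn_from_iv(iv):
    """Molecular weight (g/mol) from intrinsic viscosity (dL/g)."""
    return (iv / K) ** (1.0 / a)

# With these constants, an IV of about 0.54 dL/g maps to roughly 26,000 g/mol.
print(round(mn_from_iv(0.541)))
```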
5.
Blind separation of image sources via adaptive dictionary learning (total citations: 1; self-citations: 0; citations by others: 1)
Sparsity has been shown to be very useful in the source separation of multichannel observations. In most cases, however, the sources of interest are not sparse in their current domain, and one needs to sparsify them using a known transform or dictionary. If such a priori knowledge about the underlying sparse domain of the sources is not available, current algorithms fail to recover the sources successfully. In this paper, we address this problem and propose a solution that fuses dictionary learning into the source separation. We first define a cost function based on this idea and propose an extension of the denoising method of Elad and Aharon to minimize it. Because such a direct extension is impractical, we then propose a feasible hierarchical approach in which a local dictionary is adaptively learned for each source alongside the separation. This process improves the quality of the source separation even in noisy situations. We also explore the possibility of adding global priors to the proposed method. The results of our experiments are promising and confirm the strength of the proposed approach.
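The dictionary-learning building block can be sketched for a single source. This is a minimal stand-in for the paper's method: a MOD-style dictionary update alternated with greedy s-sparse coding on synthetic data, with all sizes chosen for illustration.

```python
import numpy as np

np.random.seed(0)
n, n_atoms, n_samples, s = 8, 12, 200, 2

# Synthetic training data: sparse combinations of a ground-truth dictionary.
D_true = np.random.randn(n, n_atoms)
D_true /= np.linalg.norm(D_true, axis=0)
Z_true = np.zeros((n_atoms, n_samples))
for j in range(n_samples):
    idx = np.random.choice(n_atoms, s, replace=False)
    Z_true[idx, j] = np.random.randn(s)
X = D_true @ Z_true

def sparse_code(D, X, s):
    """Greedy s-sparse coding: pick the s most correlated atoms, then least squares."""
    Z = np.zeros((D.shape[1], X.shape[1]))
    for j in range(X.shape[1]):
        support = np.argsort(-np.abs(D.T @ X[:, j]))[:s]
        Z[support, j] = np.linalg.lstsq(D[:, support], X[:, j], rcond=None)[0]
    return Z

D = np.random.randn(n, n_atoms)
D /= np.linalg.norm(D, axis=0)
err0 = np.linalg.norm(X - D @ sparse_code(D, X, s))
for _ in range(20):
    Z = sparse_code(D, X, s)
    # MOD update: least-squares-optimal dictionary for the current codes.
    D = X @ Z.T @ np.linalg.pinv(Z @ Z.T)
    D /= np.linalg.norm(D, axis=0) + 1e-12
err = np.linalg.norm(X - D @ sparse_code(D, X, s))
print(err0, err)
```

In the hierarchical scheme described above, one such dictionary would be maintained per source and refined as the separation improves.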
6.
Electroencephalography (EEG) signals arise as mixtures of various neural processes that occur at particular spatial, frequency, and temporal brain locations. In classification paradigms, algorithms are developed that can distinguish between these processes. In this work, we apply tensor factorisation to a set of EEG data from a group of epileptic patients and factorise the data into three modes: space, time, and frequency, with each mode containing a number of components or signatures. We train separate classifiers on various feature sets corresponding to complementary combinations of those modes and components and test the classification accuracy for each set. The relative influence of the spatial, temporal, and frequency signatures on the classification accuracy can then be analysed and useful interpretations can be made. Additionally, we show that tensor factorisation enables dimensionality reduction, by evaluating the classification performance with respect to the number of components in each mode and by rejecting components with an insignificant contribution to the classification accuracy.
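The three-mode factorisation can be sketched as a rank-R CP (PARAFAC) decomposition fitted by alternating least squares on a small synthetic space x time x frequency tensor. Real EEG pipelines use dedicated tensor toolboxes; this only shows the core update, with illustrative dimensions.

```python
import numpy as np

np.random.seed(0)
I, J, K, R = 4, 5, 6, 2   # space x time x frequency, rank R

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

# Synthetic tensor with an exact rank-R CP structure.
A0, B0, C0 = (np.random.randn(d, R) for d in (I, J, K))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

X0 = T.reshape(I, -1)                     # mode-1 unfolding
X1 = np.moveaxis(T, 1, 0).reshape(J, -1)  # mode-2 unfolding
X2 = np.moveaxis(T, 2, 0).reshape(K, -1)  # mode-3 unfolding

A, B, C = (np.random.randn(d, R) for d in (I, J, K))
for _ in range(200):  # alternating least squares over the three factor matrices
    A = X0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
    B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
    C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))

T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(rel_err)
```

The columns of A, B, and C are the spatial, temporal, and frequency signatures; feature sets for the classifiers are then built from chosen combinations of these columns, and dropping columns that do not help accuracy performs the dimensionality reduction described above.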
7.
Global security concerns have driven a proliferation of video surveillance devices. Intelligent surveillance systems seek to discover possible threats automatically and raise alerts. Being able to identify the surveyed object can help determine its threat level. The current generation of devices provides digital video data that can be analysed for time-varying features to assist in the identification process. Commonly, people queue up to access a facility and approach a video camera in full frontal view. In this environment, a variety of biometrics are available, for example gait, which includes temporal features such as the stride period. Gait can be measured unobtrusively at a distance. The video data will also include face features, which are short-range biometrics. In this way, one can combine biometrics naturally using a single set of data. In this paper we survey current techniques for gait recognition and modelling, together with the environments in which the research was conducted. We also discuss in detail the issues arising from deriving gait data, such as perspective and occlusion effects, together with the associated computer vision challenges of reliably tracking human movement. After highlighting these issues and challenges, we discuss frameworks that combine gait with other biometrics. We then provide motivations for a novel paradigm in biometrics-based human recognition: the use of the fronto-normal view of gait as a far-range biometric combined with biometrics operating at a near distance.
8.
Automatic segmentation of non-stationary signals such as the electroencephalogram (EEG), the electrocardiogram (ECG), and the brightness of galactic objects has many applications. In this paper, an improved segmentation method for non-stationary signals based on fractal dimension (FD) and evolutionary algorithms (EAs) is proposed. After a Kalman filter (KF) is used to reduce the existing noise, the FD, which can detect changes in both the amplitude and the frequency of the signal, is applied to reveal the segments of the signal. To select two acceptable parameters of the FD, two well-established EAs, the genetic algorithm (GA) and the imperialist competitive algorithm (ICA), are used. The proposed approach is applied to synthetic multi-component signals, real EEG data, and brightness changes of galactic objects. The proposed methods are compared with well-known existing algorithms such as the improved nonlinear energy operator (INLEO), Varri's method, and the wavelet generalized likelihood ratio (WGLR) method. The simulation results demonstrate that segmentation using KF, FD, and EAs achieves greater accuracy, which confirms the significance of this algorithm.
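The FD-based segmentation step can be sketched with the Katz fractal dimension on a synthetic two-regime signal. Where the paper tunes the FD parameters with GA/ICA, the window length and step here are simply fixed by hand, and the boundary is taken at the largest FD jump.

```python
import math

def katz_fd(y):
    """Katz fractal dimension of a 1-D sequence (unit sample spacing)."""
    n = len(y) - 1
    L = sum(math.hypot(1.0, y[i + 1] - y[i]) for i in range(n))   # curve length
    d = max(math.hypot(i, y[i] - y[0]) for i in range(1, n + 1))  # planar extent
    return math.log(n) / (math.log(n) + math.log(d / L))

# Synthetic non-stationary signal: a slow oscillation, then a fast one at t = 1000.
N = 2000
sig = [math.sin(2 * math.pi * 5 * i / 1000) if i < 1000
       else math.sin(2 * math.pi * 50 * i / 1000) for i in range(N)]

win, step = 100, 50
starts = list(range(0, N - win + 1, step))
fds = [katz_fd(sig[s:s + win]) for s in starts]

# Segment boundary: largest jump in FD between consecutive windows.
jumps = [abs(fds[i + 1] - fds[i]) for i in range(len(fds) - 1)]
boundary = starts[jumps.index(max(jumps)) + 1]
print(boundary)
```

The FD rises when the local frequency rises, so the jump detector lands near the true change point at sample 1000; the KF denoising and EA parameter search of the paper would sit in front of this step.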
9.
In this paper the problem of optimizing the measurement matrix in the compressive (also called compressed) sensing framework is addressed. In compressed sensing, a measurement matrix that has small coherence with the sparsifying dictionary (or basis) is of interest. Random measurement matrices have been used so far, since they exhibit small coherence with almost any sparsifying dictionary. However, it has recently been shown that optimizing the measurement matrix to decrease the coherence is possible and can improve performance. Based on this conclusion, we propose an alternating minimization approach for this purpose, a variant of Grassmannian frame design modified by a gradient-based technique. The objective is to optimize an initially random measurement matrix into one with smaller coherence than the initial matrix. We conducted several experiments to measure the performance of the proposed method and compare it with that of existing approaches. The results are encouraging and indicate improved reconstruction quality when the proposed method is used.
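The quantity being minimized can be illustrated with a much simpler stand-in for the paper's algorithm: compute the mutual coherence of the equivalent dictionary D = ΦΨ, then run plain gradient descent on ||DᵀD − I||²_F, which pushes the Gram matrix toward the identity and tends to shrink the coherence. Ψ is taken as the identity here for simplicity, and all sizes and step sizes are illustrative.

```python
import numpy as np

np.random.seed(0)
m, n = 15, 40          # measurements x signal dimension
Psi = np.eye(n)        # sparsifying basis (identity here, for simplicity)

def mutual_coherence(D):
    """Largest absolute inner product between distinct normalized columns."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

Phi = np.random.randn(m, n) / np.sqrt(m)   # initially random measurement matrix
mu0 = mutual_coherence(Phi @ Psi)
lr = 0.005
for _ in range(1000):
    D = Phi @ Psi
    G = D.T @ D
    # Gradient of ||D^T D - I||_F^2 with respect to Phi (chain rule through D).
    Phi -= lr * 4.0 * D @ (G - np.eye(n)) @ Psi.T
mu = mutual_coherence(Phi @ Psi)
print(mu0, mu)
```

The off-diagonal Gram entries cannot all reach zero for m < n; they can only approach the Welch bound, which is why the paper's Grassmannian-frame view is the natural framework for this problem.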
10.
The profitability of every manufacturing plant depends on its pricing strategy and on a production plan that supports customer demand. In this paper, a new robust multi-product, multi-period model for planning and pricing is proposed. Demand is considered uncertain and price-dependent; thus, for each price, a range of demands is possible. Unsatisfied demand is considered lost, and hence no backlogging is allowed. The objective is to maximise the profit over a planning horizon consisting of a finite number of periods. To solve the proposed model, a modified unconscious search (US) algorithm is introduced. Several artificial test problems, along with a real-case implementation of the model in a textile manufacturing plant, are used to show the applicability of the model and the effectiveness of the US in tackling this problem. The results show that the proposed model can improve the profitability of the plant and that the US is able to find high-quality solutions in a very short time compared with exact methods.
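A toy single-product, single-period version of the model shows the robust pricing logic. Demand is linear in price with a multiplicative uncertainty band, unmet demand is lost, and the worst-case profit is maximised by exhaustive grid search standing in for the unconscious search metaheuristic; all coefficients are illustrative.

```python
# Toy robust pricing: demand is price-dependent and uncertain within a band.
a, b = 100.0, 2.0      # nominal demand curve: d(p) = a - b*p (illustrative)
c = 10.0               # unit production cost
eps = 0.2              # uncertainty: actual demand lies in [(1-eps)d, (1+eps)d]

def worst_case_profit(p, q):
    """Profit under the worst demand realization (lost sales, no backlog)."""
    d_lo = (1 - eps) * max(a - b * p, 0.0)
    return p * min(q, d_lo) - c * q

best_p, best_profit = None, float('-inf')
for i in range(0, 101):                    # grid search over the price
    p = 0.5 * i
    q = (1 - eps) * max(a - b * p, 0.0)    # produce to the worst-case demand
    profit = worst_case_profit(p, q)
    if profit > best_profit:
        best_p, best_profit = p, profit
print(best_p, best_profit)
```

Because profit is non-decreasing in realized demand for a fixed production level, producing exactly the lower demand bound is worst-case optimal here, and the search reduces to the price dimension; the full multi-product, multi-period model couples these decisions across periods, which is where a metaheuristic such as US earns its keep.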