1.
2.
In computer-aided diagnosis systems, change detection in retinal fundus image sequences is an important and challenging task. Because such sequences contain few sampled frames and suffer strong illumination interference, a robust background model is hard to obtain; to address this, a change detection method based on tensor robust principal component analysis (TRPCA) is proposed. Taking TRPCA as its model, the method obtains the change regions by augmenting the sequence background and then applying tensor decomposition: first, the image in the sequence closest to the normal state is selected as the background model; then, preprocessing expands this single-frame background into a multi-frame background so that the background model captures richer illumination variation; next, the whole sequence is modeled as a three-dimensional tensor; finally, total variation is used to constrain the spatio-temporal continuity of the background model and the change regions, and Tucker decomposition separates out the background model, yielding the change regions. Experimental results show that, compared with matrix robust principal component analysis (Matrix RPCA), Masked-RPCA, and TRPCA without the total variation constraint, the total-variation-constrained TRPCA method separates change regions more accurately and is more robust to vessel and illumination interference.
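The Tucker-based background separation step can be sketched as follows. This is a minimal NumPy illustration using plain truncated HOSVD, without the total variation constraint or the background augmentation described above; all function names and ranks are illustrative, not the paper's implementation:

```python
import numpy as np

def mode_unfold(T, mode):
    """Matricize tensor T along the given mode (mode-n unfolding)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_background(frames, ranks=(4, 4, 1)):
    """Toy sketch: approximate the low-rank background of an image
    sequence via truncated HOSVD (a simple form of Tucker decomposition).
    The change map is the residual between the sequence and the background."""
    T = np.stack(frames, axis=-1)                       # H x W x N tensor
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(mode_unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])                        # leading singular vectors
    # core = T x1 U1^T x2 U2^T x3 U3^T
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    # reconstruct the low-rank background from the core and factors
    B = core
    for mode, U in enumerate(factors):
        B = np.moveaxis(np.tensordot(U, np.moveaxis(B, mode, 0), axes=1), 0, mode)
    changes = np.abs(T - B)                             # residual = candidate change regions
    return B, changes
```

Keeping the temporal rank at 1 forces a single shared background across frames, which is the intuition behind separating a static background from sparse changes.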
3.
This paper addresses sampling period scheduling for Networked Control Systems (NCSs) with multiple control loops. A generalized exponential function, fitted with the TrueTime toolbox under the Matlab environment, is employed to describe Integral Absolute Error (IAE) performance as a function of the sampling period, and the sampling periods are scheduled to obtain the optimal integrated performance via the Kuhn–Tucker theorem, subject to the stability of every control loop and the bandwidth of the available network resources. Numerical examples are given to show the effectiveness of the method.
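The scheduling idea can be illustrated with a toy KKT-style allocation. The cost model below (per-loop IAE cost a_i·exp(b_i·h_i), bandwidth use u_i/h_i, total budget U) and all names are assumptions made for illustration, not the paper's exact formulation:

```python
import numpy as np

def schedule_periods(a, b, u, U, h_lo=1e-4, h_hi=1.0, iters=60):
    """Toy KKT-style sampling-period allocation (assumed cost model):
    minimize sum_i a_i*exp(b_i*h_i) subject to sum_i u_i/h_i <= U.
    Stationarity gives a_i*b_i*exp(b_i*h_i)*h_i^2 = lam*u_i; both sides
    are monotone in h_i and lam, so nested bisection finds the optimum."""
    a, b, u = (np.asarray(x, dtype=float) for x in (a, b, u))

    def h_of(lam):
        # per-loop bisection on h: LHS of stationarity is increasing in h
        lo, hi = np.full_like(a, h_lo), np.full_like(a, h_hi)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            grow = a * b * np.exp(b * mid) * mid**2 < lam * u
            lo = np.where(grow, mid, lo)
            hi = np.where(grow, hi, mid)
        return 0.5 * (lo + hi)

    # geometric bisection on the multiplier lam to meet the bandwidth budget
    lam_lo, lam_hi = 1e-12, 1e12
    for _ in range(iters):
        lam = np.sqrt(lam_lo * lam_hi)
        if (u / h_of(lam)).sum() > U:
            lam_lo = lam            # over budget: raise lam, slow loops down
        else:
            lam_hi = lam
    return h_of(np.sqrt(lam_lo * lam_hi))
```

Shorter sampling periods improve IAE but consume more bandwidth, so when the budget binds, the multiplier equalizes the marginal cost per unit of bandwidth across loops, which is exactly the KKT stationarity condition.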
4.
The purpose of the present study was to examine the prevalence of cyber-bullying through Facebook in a sample of 226 Greek university undergraduates, and to explore whether the Big Five personality traits, narcissism, attitudes toward Facebook, and technological knowledge and skills were predictive of such behavior. Participants completed a self-report questionnaire measuring the above constructs. Results indicated that almost one third of the sample reported engaging in Facebook bullying at least once during the past month, with male students reporting more frequent involvement than females. For males, bullying through Facebook was predicted by low Agreeableness and more time spent on Facebook; for females, none of the studied variables predicted engagement in Facebook bullying. Findings are discussed in terms of prevention and intervention strategies.
5.
In this paper, we present a hyperspectral image compression system based on the lapped transform and Tucker decomposition (LT-TD). In the proposed method, each band of a hyperspectral image is first decorrelated by a lapped transform, and the transformed coefficients of different frequencies are rearranged into three-dimensional (3D) wavelet sub-band structures. The 3D sub-bands are viewed as third-order tensors and decomposed by Tucker decomposition into a core tensor and three factor matrices. The core tensor preserves most of the energy of the original tensor and is encoded into bit-streams using a bit-plane coding algorithm. Comparative experiments are presented, together with an analysis of the factors that affect compression performance, such as the rank of the core tensor and the quantization of the factor matrices.
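The bit-plane encoding of the core tensor can be sketched roughly as below. This is a toy magnitude bit-plane coder in NumPy; the paper's actual coder and parameters are not specified here, so everything is illustrative:

```python
import numpy as np

def bitplane_encode(coeffs, n_planes=8):
    """Toy bit-plane coder sketch (assumed scheme, not the paper's):
    quantize nonzero coefficients to n_planes-bit magnitudes and emit the
    most-significant planes first, so the stream can be truncated early."""
    scale = np.abs(coeffs).max()
    q = np.round(np.abs(coeffs) / scale * (2**n_planes - 1)).astype(np.int64)
    signs = np.signbit(coeffs)
    planes = [((q >> p) & 1).astype(np.uint8) for p in range(n_planes - 1, -1, -1)]
    return signs, planes

def bitplane_decode(signs, planes):
    """Reassemble magnitudes from MSB-first bit planes and restore signs."""
    n_planes = len(planes)
    q = np.zeros(planes[0].shape, dtype=np.int64)
    for i, plane in enumerate(planes):
        q |= plane.astype(np.int64) << (n_planes - 1 - i)
    mag = q / (2**n_planes - 1)
    return np.where(signs, -mag, mag)
```

Because the most significant planes carry the largest coefficients, and the Tucker core concentrates most of the tensor's energy, truncating the stream after a few planes already yields a coarse reconstruction.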
6.
Analysis of high-dimensional data in modern applications such as neuroscience, text mining, spectral analysis, and chemometrics naturally calls for tensor decomposition methods. Tucker decompositions allow us to extract hidden factors (component matrices) with a different dimension in each mode and to investigate interactions among various modalities. Alternating least squares (ALS) algorithms have proved effective and efficient for most tensor decompositions, especially Tucker decomposition with orthogonality constraints. However, for nonnegative Tucker decomposition (NTD), standard ALS algorithms suffer from unstable convergence, demand high computational cost for large-scale problems due to matrix inversion, and often return suboptimal solutions. Moreover, they are quite sensitive to noise and can be relatively slow in the special case when the data are nearly collinear. In this paper, we propose a new algorithm for nonnegative Tucker decomposition based on constrained minimization of a set of local cost functions and hierarchical alternating least squares (HALS). The developed NTD-HALS algorithm updates components sequentially, hence avoids matrix inversion, and is suitable for large-scale problems. The proposed algorithm can also be regularized with additional constraint terms enforcing sparseness, orthogonality, smoothness, and especially discrimination. Extensive experiments confirm the validity and superior performance of the developed algorithm in comparison with existing algorithms.
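The key idea of HALS, sequential closed-form updates of one component at a time with no matrix inversion, is easiest to see in the matrix (NMF) special case. The sketch below is that simplified analogue, not the full NTD-HALS tensor algorithm:

```python
import numpy as np

def hals_nmf(X, r, iters=200, eps=1e-10):
    """HALS sketch for nonnegative matrix factorization X ≈ W @ H,
    the matrix analogue of the sequential mode-by-mode column updates
    in NTD-HALS (a simplified illustration, not the tensor algorithm)."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        # update each column of W in turn: a closed-form nonnegative
        # least-squares step using only a scalar division, no matrix inverse
        XHt, HHt = X @ H.T, H @ H.T
        for k in range(r):
            W[:, k] = np.maximum(
                eps, W[:, k] + (XHt[:, k] - W @ HHt[:, k]) / (HHt[k, k] + eps))
        # symmetric sequential updates for the rows of H
        WtX, WtW = W.T @ X, W.T @ W
        for k in range(r):
            H[k, :] = np.maximum(
                eps, H[k, :] + (WtX[k, :] - WtW[k, :] @ H) / (WtW[k, k] + eps))
    return W, H
```

Each inner update touches a single rank-one component, which is why HALS scales to large problems and stays stable where a full ALS subproblem would require inverting a Gram matrix.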
7.
Ledyard R Tucker, known as "Tuck" to generations of colleagues, students, and friends, died on August 16, 2004, at the age of 93 at his home in Savoy, Illinois. Tucker was one of the great pioneers in the history of psychometric methods. The impact of his extraordinary body of work remains evident in both applied and theoretical research today. This obituary discusses Tucker's life, his professional contributions, and his many achievements and awards. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
8.
Ge Sunan, Han Min. Acta Electronica Sinica (《电子学报》), 2014, 42(5): 992-997
To address instantaneous underdetermined blind source separation, a fourth-order cumulant tensor decomposition algorithm is proposed. First, the fourth-order cumulant covariance of the observed signals is constructed; exploiting the fact that the source signals are mutually independent with zero mean, this covariance is simplified and extended to the tensor domain, yielding a fourth-order cumulant tensor. A hierarchical alternating least squares algorithm is then applied to perform nonnegative Tucker decomposition of this tensor, recovering the parameters of the nonnegative Tucker model and hence the nonnegative mixing matrix, whose pseudo-inverse is computed to finally estimate the source signals. Simulation experiments on real speech and biomedical signals show that the method improves the estimation of both the source signals and the nonnegative mixing matrix.
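The first step, building the fourth-order cumulant of the observations, can be sketched as below. For zero-mean signals the empirical cumulant is the fourth moment minus the three pairwise covariance products (a standard formula; the nonnegative Tucker decomposition step is omitted here):

```python
import numpy as np

def fourth_order_cumulant(X):
    """Sketch: empirical fourth-order cumulant tensor of zero-mean
    observations X (channels x samples), the quantity the algorithm
    goes on to factorize with nonnegative Tucker decomposition."""
    X = X - X.mean(axis=1, keepdims=True)       # enforce zero mean
    n, T = X.shape
    C = X @ X.T / T                             # covariance matrix
    M4 = np.einsum('it,jt,kt,lt->ijkl', X, X, X, X) / T   # fourth moment
    # cum(i,j,k,l) = M4 - C_ij C_kl - C_ik C_jl - C_il C_jk
    cum = (M4
           - np.einsum('ij,kl->ijkl', C, C)
           - np.einsum('ik,jl->ijkl', C, C)
           - np.einsum('il,jk->ijkl', C, C))
    return cum
```

A useful sanity check is that the cumulant tensor of Gaussian data vanishes, which is exactly why fourth-order cumulants expose the non-Gaussian structure that mutual independence of the sources provides.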
9.
Presents the citation, biography, and selected bibliography for Ledyard R. Tucker, one of the 1987 recipients of the American Psychological Association's Awards for Distinguished Scientific Contributions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
10.
The constrained estimation in Cox's model for right-censored survival data is studied, and the asymptotic properties of the constrained estimators are derived using the Lagrangian method based on the Karush–Kuhn–Tucker conditions. A novel minorization–maximization (MM) algorithm is developed for calculating the maximum likelihood estimates of the regression coefficients subject to box or linear inequality restrictions in the proportional hazards model. The first M-step of the proposed MM algorithm constructs a surrogate function with a diagonal Hessian matrix, obtained by exploiting the convexity of the exponential and negative-logarithm functions. The second M-step maximizes this surrogate subject to the box constraints, which is equivalent to separately maximizing several one-dimensional concave functions, each with a lower-bound and an upper-bound constraint, and yields an explicit solution via a median function. The ascent property of the proposed MM algorithm under constraints is theoretically justified. Standard error estimation is also presented via a non-parametric bootstrap approach. Simulation studies compare the estimates with and without constraints, and two real data sets illustrate the proposed methods.
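The second M-step's median-function solution can be sketched in a few lines. With a concave surrogate whose Hessian is diagonal, each coordinate is maximized independently by a Newton step and then clipped to its box; this is a generic illustration of that step, not the paper's exact surrogate:

```python
import numpy as np

def box_mm_step(beta, grad, diag_hess, lo, hi):
    """One maximization step for a concave surrogate with a diagonal
    (negative) Hessian: each coordinate's unconstrained maximizer is a
    Newton step, and the box constraint is enforced coordinate-wise by
    median(lo, maximizer, hi), which is exactly a clip."""
    unconstrained = beta - grad / diag_hess   # per-coordinate maximizer
    return np.clip(unconstrained, lo, hi)     # median(lo, unconstrained, hi)
```

Because the surrogate separates across coordinates, the constrained maximizer of each one-dimensional concave piece is either the interior stationary point or the nearer bound, which is what the median/clip expresses.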