1.
We present a new image reconstruction method for Electrical Capacitance Tomography (ECT). ECT image reconstruction is generally ill-posed because the number of measurements is small while the image dimensions are large. Here, we present a sparsity-inspired approach that achieves better ECT image reconstruction from this small number of measurements. Our approach is based on Total Variation (TV) regularization, and we apply an efficient Split-Bregman Iteration (SBI) scheme to solve the resulting problem. We also propose three metrics to evaluate image reconstruction performance: a joint metric of positive reconstruction rate (PRR) and false reconstruction rate (FRR), the correlation coefficient, and a shape-and-location metric. Results on both synthetic and real data show that the proposed TV-SBI method better preserves image edges and better resolves distinct objects within reconstructed images than a representative state-of-the-art ECT reconstruction algorithm, Projected Landweber Iteration with Linear Back Projection initialization (LBP-PLI).
2.
Traditional greedy algorithms need to know the sparsity of the signal in advance, while the sparsity adaptive matching pursuit algorithm avoids this requirement at the expense of computation time. To overcome both problems, this paper proposes a variable step size sparsity adaptive matching pursuit (SAMPVSS). To select atoms, the algorithm constructs a set of candidate atoms by computing the correlation between the measurement matrix and the residual, and selects the atoms most correlated with the residual. To determine how many atoms to select in each iteration, the algorithm introduces an exponential function: at the beginning of the iterations, a larger step is used to estimate the sparsity of the signal; in the later iterations, the step size is set to one to improve reconstruction accuracy. Simulation results show that the proposed algorithm reconstructs both one-dimensional and two-dimensional signals well.
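The shrinking-step idea can be sketched as a stagewise matching pursuit whose step halves whenever the residual stops improving, eventually reaching step size one. This is a simplified stand-in for SAMPVSS (the exact exponential schedule, candidate rules, and test problem here are assumptions, not the paper's):

```python
import numpy as np

def samp_vss(A, y, step0=None, max_iter=50, tol=1e-8):
    """Sparsity-adaptive matching pursuit with a variable (shrinking)
    step size: a large step early to coarsely estimate the sparsity,
    step 1 later for accuracy. Simplified sketch, not the exact SAMPVSS."""
    m, n = A.shape
    step = step0 or max(1, m // 4)     # large initial step
    size = step                        # current support-size estimate
    support = np.array([], dtype=int)
    r = y.copy()
    for _ in range(max_iter):
        corr = np.abs(A.T @ r)         # correlate atoms with the residual
        cand = np.union1d(support, np.argsort(corr)[-size:]).astype(int)
        coef, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
        keep = cand[np.argsort(np.abs(coef))[-size:]]   # prune candidates
        coef_keep, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
        r_new = y - A[:, keep] @ coef_keep
        if np.linalg.norm(r_new) < np.linalg.norm(r):
            support, r = keep, r_new   # accept the refined support
        else:
            step = max(1, step // 2)   # shrink the step toward 1 ...
            size += step               # ... while growing the support size
        if np.linalg.norm(r) < tol:
            break
    x = np.zeros(n)
    if support.size:
        x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x

# Noiseless 2-sparse recovery from 50 measurements of a length-100 signal.
rng = np.random.default_rng(1)
m, n = 50, 100
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)         # unit-norm atoms
x_true = np.zeros(n)
x_true[[7, 42]] = [5.0, -4.0]
y = A @ x_true
x_hat = samp_vss(A, y)
```

Note that the sparsity level is never passed in; the algorithm adapts its support size, which is the point the abstract makes.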
3.
This paper addresses direction of arrival (DOA) estimation by exploiting sparsity-enforced recovery for co-prime arrays, which increases the available degrees of freedom. Applying the sparsity-based technique requires discretizing the potential DOA range, and every target must fall on the predefined grid; off-grid targets can severely degrade recovery performance. To this end, this paper takes off-grid DOAs into account and reformulates the sparse recovery problem with an unknown grid offset vector. By introducing a convex function that majorizes the given objective function, an iterative approach is developed that gradually amends the offset vector to reach the final DOA estimates. Numerical simulations verify the effectiveness of the proposed method in terms of detection ability, resolution, and root mean squared estimation error, compared with other state-of-the-art methods.
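The core off-grid idea is that the steering vector at an off-grid angle can be written to first order as a(theta + delta) ≈ a(theta) + delta * a'(theta), turning the unknown offset into an extra linear coefficient. The following minimal sketch applies that expansion for a single noiseless source on a uniform linear array; the paper's majorization-based, multi-target, co-prime-array method is considerably more elaborate, and the array, grid, and signal here are illustrative assumptions.

```python
import numpy as np

def steering(theta, M, d=0.5):
    # Steering vector of an M-element ULA with half-wavelength spacing.
    k = np.arange(M)
    return np.exp(2j * np.pi * d * k * np.sin(theta))

def steering_deriv(theta, M, d=0.5):
    # Derivative of the steering vector with respect to theta.
    k = np.arange(M)
    return (2j * np.pi * d * k * np.cos(theta)) * steering(theta, M, d)

def offgrid_doa_single(y, grid, M):
    # On-grid stage: pick the grid angle most correlated with the snapshot.
    corrs = [abs(np.vdot(steering(g, M), y)) for g in grid]
    i = int(np.argmax(corrs))
    # Off-grid stage: y ~ s*a(g) + (s*delta)*a'(g); solve both coefficients
    # by least squares and read off the offset delta from their ratio.
    a = steering(grid[i], M)
    da = steering_deriv(grid[i], M)
    B = np.column_stack([a, da])
    c, *_ = np.linalg.lstsq(B, y, rcond=None)
    delta = (c[1] / c[0]).real
    return grid[i] + delta

M = 16
grid = np.deg2rad(np.arange(-60.0, 61.0, 2.0))   # coarse 2-degree grid
theta_true = np.deg2rad(17.3)                    # deliberately off-grid
y = 1.7 * steering(theta_true, M)                # noiseless single snapshot
theta_hat = offgrid_doa_single(y, grid, M)
err_deg = abs(np.rad2deg(theta_hat) - 17.3)
```

Without the offset correction the best achievable error on this grid is 0.7 degrees; the first-order refinement brings it well below that.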
4.
Ying Wenhao, Wang Shitong. 《计算机科学》 (Computer Science), 2013, 40(8): 239-244, 257
Many pattern classification methods, such as support vector machines and the L2 kernel classifier, rely on kernel methods and are solved as quadratic programming (QP) problems. Computing the kernel matrix requires O(m2) space and solving the QP problem requires O(m3) time, which makes such methods perform very poorly on large-sample data. To address this, we propose the similarity-difference support vector machine (DSSVM). The algorithm seeks an optimal linear representation of a sample's similarity to a given class, and formulates a new optimization problem in terms of the sparsity of that linear representation and margin maximization in the sense of similarity differences. We also prove that the algorithm is equivalent to a center-constrained minimum enclosing ball problem, so that the fast learning theory of minimum enclosing balls can be used to extend DSSVM to the similarity-difference kernel support vector machine (DSCVM), which handles the classification of large-scale data sets well. Experiments demonstrate the effectiveness of both DSSVM and DSCVM.
5.
Recommender systems have been an area of rigorous research owing to their applications in domains ranging from academia to industry and e-commerce. A recommender system reduces information overload and improves decision making for customers in any arena, and recommending products that attract customers and meet their needs has become important in this competitive environment. Although there are many approaches to recommending items, collaborative filtering has emerged as an efficient mechanism for the task. In addition, many evolutionary methods can be incorporated to achieve better prediction accuracy and to handle the sparsity and cold-start problems. In this paper, we use unsupervised learning to address the problem of scalability: the recommendation engine reduces calculation time by matching the interest profile of the user against partitioned, much smaller training samples. Additionally, we explore finding global neighbours through transitive similarities and incorporate particle swarm optimization (PSO) to assign weights to various alpha estimates (including the proposed α7) that alleviate the sparsity problem. Our experimental study reveals that the particle-swarm-optimized alpha estimate significantly increases prediction accuracy over traditional collaborative filtering and the fixed-alpha scheme.
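To make the PSO-weighting step concrete, here is a minimal global-best particle swarm optimizer tuning the weights of a blend of two base predictors against known ratings. The toy ratings, the two base predictors, and the normalized weighting scheme are illustrative assumptions; the paper's alpha estimates and CF pipeline are richer than this sketch.

```python
import numpy as np

def pso(objective, dim, n_particles=20, n_iter=100, seed=0,
        inertia=0.7, c1=1.5, c2=1.5, bounds=(0.05, 1.0)):
    """Minimal particle swarm optimizer with a global-best topology."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy stand-in for the alpha-weighting task: blend a user-based and an
# item-based prediction of known ratings, with PSO choosing the weights.
truth = np.array([4.0, 3.0, 5.0, 2.0])
base = np.array([[4.5, 2.5, 4.0, 2.5],    # user-based predictions
                 [3.5, 3.5, 5.5, 1.5]])   # item-based predictions

def rmse_of_weights(w):
    pred = (w @ base) / w.sum()           # normalized weighted blend
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

w_best, best_rmse = pso(rmse_of_weights, dim=2)
```

On this toy problem the equal-weight blend has RMSE 0.125, and the analytic optimum of the normalized blend is about 0.095, so the PSO-chosen weights should land between the two.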
6.
For general convex problems, the convergence analysis of the dual averaging method requires a transformation into the dual space, which makes individual (last-iterate) convergence results difficult to obtain. This paper first gives a simple convergence analysis of the dual averaging method and proves that it attains the same optimal individual convergence rate O(ln t/t) as gradient descent. Unlike gradient descent, two typical step-size strategies are discussed, verifying that the dual averaging method allows flexible step-size strategies in individual convergence analysis. The individual convergence results are then extended to the stochastic setting, ensuring that the dual averaging method can effectively handle large-scale machine learning problems. Finally, experiments on L1-norm-constrained hinge loss problems verify the correctness of the theoretical analysis.
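For reference, the simple form of dual averaging accumulates all past subgradients and regularizes their average, rather than stepping from the current iterate as gradient descent does. The sketch below runs it on a tiny hinge-loss problem; as stated assumptions, the constraint is a box rather than the paper's L1-norm ball (to keep the projection one line), and the step schedule gamma*sqrt(t) and toy data are illustrative.

```python
import numpy as np

def dual_averaging(subgrad, x0, n_iter=500, gamma=1.0, radius=1.0):
    """Simple dual averaging: accumulate all past subgradients and set
    x_{t+1} = Proj(-g_sum / (gamma * sqrt(t))), here projecting onto a
    box of the given radius."""
    x = x0.copy()
    g_sum = np.zeros_like(x0)
    for t in range(1, n_iter + 1):
        g_sum += subgrad(x)
        x = np.clip(-g_sum / (gamma * np.sqrt(t)), -radius, radius)
    return x

# Toy problem: hinge loss on two separable 1-D points.
X = np.array([[2.0], [-1.5]])
y = np.array([1.0, -1.0])

def hinge_subgrad(w):
    margins = y * (X @ w)
    active = margins < 1.0                 # points violating the margin
    return -(active * y) @ X / len(y)      # a subgradient of the mean hinge loss

w_last = dual_averaging(hinge_subgrad, np.zeros(1))
final_loss = float(np.mean(np.maximum(0.0, 1.0 - y * (X @ w_last))))
```

The quantity checked is the loss of the last iterate w_last itself, not of an averaged iterate, which is exactly the individual-convergence question the abstract studies.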
7.
Classification based on Fisher's linear discriminant analysis (FLDA) is challenging when the number of variables greatly exceeds the number of given samples. The original FLDA must be carefully modified, and with high dimensionality, implementation issues such as reducing storage costs become crucially important. Methods for the high-dimension/small-sample-size problem are reviewed, and the one closest, in some sense, to the classical regular approach is chosen. The implementation of this method is improved with regard to computational and storage costs and numerical stability, by combining a variety of known and new implementation strategies. Experiments demonstrate the superiority, with respect to both overall costs and classification rates, of the resulting algorithm compared with other methods.
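One standard storage-saving device in this regime is to avoid ever forming the p-by-p within-class scatter matrix: since the centered data matrix has only n rows, a thin SVD gives the scatter's eigenstructure directly, and a ridge-regularized discriminant direction can be assembled in that low-dimensional basis. The sketch below illustrates that idea for two classes; it is an assumption-level illustration of the general technique, not the specific algorithm the paper selects and refines.

```python
import numpy as np

def flda_direction_highdim(X1, X2, reg=1e-3):
    """Two-class Fisher direction w = (S_w + reg*I)^{-1}(mu1 - mu2) when
    #features >> #samples, computed via a thin SVD of the centered data
    so that no p-by-p matrix is ever formed."""
    mu1, mu2 = X1.mean(0), X2.mean(0)
    Xc = np.vstack([X1 - mu1, X2 - mu2])        # centered data, n x p
    # Thin SVD: S_w = Xc^T Xc = V diag(s^2) V^T, with V given row-wise by Vt.
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    d = mu1 - mu2
    # Regularized inverse applied to d, split into the span of V and its
    # orthogonal complement (where S_w acts as zero).
    coeff = (Vt @ d) / (s ** 2 + reg)
    w = Vt.T @ coeff + (d - Vt.T @ (Vt @ d)) / reg
    return w / np.linalg.norm(w)

# 500 features, 20 samples per class; classes differ only in feature 0.
rng = np.random.default_rng(2)
p, n = 500, 20
X1 = rng.standard_normal((n, p))
X2 = rng.standard_normal((n, p))
X2[:, 0] += 3.0
w = flda_direction_highdim(X1, X2)
gap = abs(float((X1.mean(0) - X2.mean(0)) @ w))
```

The SVD costs O(n^2 p) time and O(np) memory, versus O(p^2) memory for the explicit scatter matrix, which is the storage concern the abstract raises.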
8.
We consider the problem of finding a linear combination of at most t out of K column vectors in a matrix, such that a target vector is approximated as closely as possible. The motivation of the model is to find a lower-dimensional representation of a given signal vector (target) while minimizing loss of accuracy. We point out the computational intractability of this problem, and suggest some local search heuristics for the unit norm case. The heuristics, all of which are based on pivoting schemes in a related linear program, are compared experimentally with respect to speed and accuracy.
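As a rough illustration of local search on this subset-selection problem, the sketch below starts from the t most correlated columns and then performs first-improvement single-column swaps until no swap reduces the least-squares residual. This is a generic local-search heuristic in the spirit of the paper, not its LP-pivoting schemes, and the test instance is an assumption.

```python
import numpy as np

def resid(A, b, idx):
    # Best least-squares fit of b using only the columns in idx.
    x, *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)
    return np.linalg.norm(A[:, idx] @ x - b)

def local_search_subset(A, b, t):
    """Choose at most t columns of A approximating b: greedy start from
    the most correlated columns, then first-improvement 1-swaps."""
    n = A.shape[1]
    idx = list(np.argsort(-np.abs(A.T @ b))[:t])    # greedy initialization
    best = resid(A, b, idx)
    improved = True
    while improved:                                  # strict decrease => terminates
        improved = False
        for pos in range(t):
            for j in range(n):
                if j in idx:
                    continue
                trial = idx.copy()
                trial[pos] = j
                r = resid(A, b, trial)
                if r < best - 1e-12:
                    idx, best, improved = trial, r, True
    return sorted(idx), best

# Planted instance: the target lies exactly in the span of two columns.
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 40))
b = 3.0 * A[:, 5] - 2.5 * A[:, 17]
idx, r = local_search_subset(A, b, t=2)
```

Each swap evaluates a small least-squares problem, so a pass costs O(t*K) solves; the paper's pivoting-based heuristics avoid re-solving from scratch.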
9.
The basic idea of compressed sensing is that the original signal is sparse or compressible in some transform domain, so that the sampling and compression steps of the Nyquist paradigm can be merged into one. The sparsity adaptive matching pursuit (SAMP) algorithm can reconstruct a signal when its sparsity is unknown, while the generalized orthogonal matching pursuit (gOMP) algorithm selects multiple atoms per iteration, which speeds up convergence. Building on the advantages of these two reconstruction algorithms, this paper proposes the Generalized Sparse Adaptive Matching Pursuit (gSAMP) algorithm. The proposed algorithm is compared with traditional greedy algorithms on objective metrics of the reconstructed image, such as peak signal-to-noise ratio, reconstruction time, and relative error, as well as on subjective visual quality. At a fixed compression ratio of 0.5, gSAMP reconstructs better than the traditional MP, OMP, ROMP, SAMP, and gOMP greedy reconstruction algorithms.
10.
Since their introduction by Jones and Nachtsheim in 2011 ("A Class of Three-Level Designs for Definitive Screening in the Presence of Second-Order Effects," Journal of Quality Technology, 43, 1–15), definitive screening designs (DSDs) have seen application in fields as diverse as bio-manufacturing, green energy production, and laser etching. One barrier to their routine adoption for screening is the difficulty practitioners experience in model selection when both main effects and second-order effects are active. Jones and Nachtsheim showed that for six or more factors, DSDs project onto designs in any three factors that can fit a full quadratic model. They also showed that DSDs have high power for detecting all the main effects, as well as one two-factor interaction or one quadratic effect, as long as the true effects are much larger than the error standard deviation. However, simulation studies of model selection strategies applied to DSDs can disappoint by failing to identify the correct set of active second-order effects when more than a few such effects are present. Standard model selection strategies such as stepwise regression, all-subsets regression, and the Dantzig selector are general tools that make no use of structural information about the design. It seems reasonable that a modeling approach exploiting the known structure of a designed experiment could perform better than general-purpose strategies. This article shows how to take advantage of the special structure of the DSD to obtain the most clear-cut analytical results possible.
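The special structure in question is the fold-over form of a DSD: the runs are a conference matrix C (zero diagonal, +/-1 elsewhere, orthogonal columns), its mirror image -C, and one center run. The sketch below builds the 13-run, 6-factor DSD; the Paley construction is one standard way to obtain a conference matrix (an assumption here, not necessarily how the article's designs were generated).

```python
import numpy as np

def paley_conference(q):
    """Symmetric conference matrix of order q+1 via the Paley construction,
    for q an odd prime with q = 1 (mod 4)."""
    residues = {(i * i) % q for i in range(1, q)}          # quadratic residues
    chi = np.array([0] + [1 if a in residues else -1 for a in range(1, q)])
    n = q + 1
    C = np.zeros((n, n), dtype=int)
    C[0, 1:] = 1                                           # border of ones
    C[1:, 0] = 1
    for i in range(q):
        for j in range(q):
            C[i + 1, j + 1] = chi[(j - i) % q]             # circulant core
    return C

def definitive_screening_design(C):
    """Jones-Nachtsheim fold-over structure: runs [C; -C; 0], giving a
    (2m+1)-run, m-factor, three-level design."""
    m = C.shape[0]
    return np.vstack([C, -C, np.zeros((1, m), dtype=int)])

C = paley_conference(5)               # 6 x 6: zero diagonal, +/-1 elsewhere
D = definitive_screening_design(C)    # 13 runs for 6 factors
```

The fold-over makes every main-effect column orthogonal to every quadratic and two-factor-interaction column, which is exactly the structure an analysis method can exploit.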