91.
We present a method for object class detection in images based on global shape. A distance measure for elastic shape matching is derived, which is invariant to scale and rotation and robust against non-parametric deformations. Starting from an over-segmentation of the image, the space of potential object boundaries is explored to find boundaries that have high similarity with the shape template of the object class to be detected. An extensive experimental evaluation is presented. The approach achieves a remarkable detection rate of 83-91% at 0.2 false positives per image on three challenging data sets.
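The abstract does not give the exact elastic shape distance, so the sketch below only illustrates the two invariances it requires, under the assumption that boundaries are ordered 2D point lists: a centroid-distance profile is divided by its mean for scale invariance, and the distance is minimized over circular shifts as a crude stand-in for rotation invariance. It is not the paper's measure.

```python
import numpy as np

def shape_descriptor(boundary, n_samples=64):
    """Resample a closed boundary and return its centroid-distance profile."""
    boundary = np.asarray(boundary, dtype=float)
    idx = np.linspace(0, len(boundary) - 1, n_samples).astype(int)
    pts = boundary[idx]
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return d / d.mean()                 # dividing by the mean removes scale

def shape_distance(desc_a, desc_b):
    """Minimum L2 distance over circular shifts (crude rotation invariance)."""
    return min(np.linalg.norm(desc_a - np.roll(desc_b, s)) for s in range(len(desc_b)))

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    circle = np.c_[np.cos(t), np.sin(t)]
    big_rotated_circle = 3.0 * np.c_[np.cos(t + 1.0), np.sin(t + 1.0)]
    ellipse = np.c_[2 * np.cos(t), np.sin(t)]
    a, b, c = map(shape_descriptor, (circle, big_rotated_circle, ellipse))
    print(shape_distance(a, b))   # ~0: same shape up to scale and rotation
    print(shape_distance(a, c))   # clearly larger: different shape
```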
92.
In this paper, an efficient feature extraction method named locally linear discriminant embedding (LLDE) is proposed for face recognition. It is well known that in classical locally linear embedding (LLE) a point can be linearly reconstructed from its neighbors, with the reconstruction weights subject to a sum-to-one constraint. These constrained weights therefore obey an important symmetry: for any particular data point, they are invariant to rotations, rescalings, and translations. The latter two transformations are introduced into the proposed method to strengthen the classification ability of the original LLE. Data points with different class labels are translated by different vectors, while those belonging to the same class are translated by the same vector. To cluster data with the same label more tightly, they are also rescaled to some extent. After translation and rescaling, the discriminability of the data is significantly improved. The proposed method is compared with related feature extraction methods such as the maximum margin criterion (MMC), as well as other supervised manifold-learning approaches, for example ensemble unified LLE and linear discriminant analysis (En-ULLELDA) and locally linear discriminant analysis (LLDA). Experimental results on the Yale and CMU PIE face databases show that the proposed method provides a better representation of the class information and obtains much higher recognition accuracies.
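A minimal sketch of the LLE reconstruction step this abstract builds on, assuming plain NumPy and a regularized closed-form solve: the weights are constrained to sum to one, and the final check demonstrates the translation invariance that LLDE exploits (moving a point and its neighbors by the same vector leaves the reconstruction error unchanged). LLDE's class-dependent translation and rescaling are not reproduced here.

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Sum-to-one reconstruction weights of x from its neighbors (classic LLE step)."""
    Z = neighbors - x                            # shift neighbors to the query point
    G = Z @ Z.T                                  # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(G))      # regularize for numerical stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                           # enforce the sum-to-one constraint

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=3)
    N = rng.normal(size=(5, 3))                  # 5 neighbors in 3-D
    w = lle_weights(x, N)
    err_before = np.linalg.norm(x - w @ N)
    t = np.array([10.0, -4.0, 2.5])              # translate point and neighbors together
    err_after = np.linalg.norm((x + t) - w @ (N + t))
    print(np.isclose(err_before, err_after))     # True: weights are translation invariant
```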
93.
In this paper, a novel one-dimensional correlation filter based class-dependence feature analysis (1D-CFA) method is presented for robust face recognition. Compared with the original CFA, which works in the two-dimensional (2D) image space, 1D-CFA encodes the image data as vectors. In 1D-CFA, a new correlation filter called the optimal extra-class origin output tradeoff filter (OEOTF), designed in a low-dimensional principal component analysis (PCA) subspace, is proposed for effective feature extraction. Experimental results on benchmark face databases such as FERET, AR, and FRGC show that OEOTF-based 1D-CFA consistently outperforms other state-of-the-art face recognition methods, demonstrating the effectiveness and robustness of the method.
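OEOTF itself is not specified in the abstract, so the sketch below only illustrates the surrounding CFA pipeline: project vectorised images into a PCA subspace, learn one filter per class, and use the vector of all filter responses as the feature. The per-class filters here are plain ridge-regression templates, a hypothetical stand-in for OEOTF, and the data are synthetic placeholders.

```python
import numpy as np

def pca_basis(X, k):
    """Mean and top-k principal directions of row-vectorised images X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T

def train_class_filters(Z, y, lam=1e-2):
    """One ridge-regression 'filter' per class: target 1 for its class, 0 otherwise
    (a stand-in for OEOTF, which is not described in the abstract)."""
    G = Z.T @ Z + lam * np.eye(Z.shape[1])
    return {c: np.linalg.solve(G, Z.T @ (y == c).astype(float)) for c in np.unique(y)}

def cfa_features(z, filters):
    """Class-dependence feature vector: response of every class-specific filter."""
    return np.array([z @ h for h in filters.values()])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    means = rng.normal(size=(6, 32 * 32))               # one synthetic "identity" per subject
    y = np.repeat(np.arange(6), 10)                     # 6 subjects, 10 images each
    X = means[y] + 0.3 * rng.normal(size=(60, 32 * 32)) # noisy vectorised face images
    mu, B = pca_basis(X, k=20)
    Z = (X - mu) @ B                                    # 1D-CFA operates on vectors in PCA space
    filters = train_class_filters(Z, y)
    probe = (X[7] - mu) @ B
    print(cfa_features(probe, filters).round(2))        # largest response at index 0, the probe's class
```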
94.
Instance-based learning (IBL), also called memory-based reasoning (MBR), is a commonly used non-parametric learning algorithm, and k-nearest neighbor (k-NN) learning is its most popular realization. Due to its usability and adaptability, k-NN has been successfully applied to a wide range of applications. In practice, however, two important model parameters must be set empirically: the number of neighbors (k) and the weights assigned to those neighbors. In this paper, we propose structured ways to set these parameters based on locally linear reconstruction (LLR). We then employ sequential minimal optimization (SMO) to solve the quadratic programming step involved in LLR for classification, reducing the computational complexity. Experimental results on 11 classification and eight regression tasks were promising enough to merit further investigation: not only did LLR outperform conventional weight allocation methods without much additional computational cost, but it was also found to be robust to the choice of k.
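A minimal sketch of LLR-weighted k-NN, with one stated simplification: the quadratic programming step is replaced by a regularized closed-form solve rather than the SMO solver the paper uses. The query point is reconstructed from its k nearest neighbors under a sum-to-one constraint, and those reconstruction weights replace uniform or distance-based voting weights (shown here for regression on synthetic data).

```python
import numpy as np

def llr_weights(x, neighbors, reg=1e-3):
    """Reconstruction weights of the query from its k nearest neighbors
    (equality-constrained least squares; the paper solves a QP with SMO instead)."""
    Z = neighbors - x
    G = Z @ Z.T
    G += reg * np.trace(G) * np.eye(len(G))      # regularize the near-singular Gram matrix
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()

def llr_knn_predict(x, X_train, y_train, k=5):
    """k-NN regression where neighbor votes use LLR weights instead of uniform weights."""
    idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    w = llr_weights(x, X_train[idx])
    return float(w @ y_train[idx])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
    print(llr_knn_predict(np.array([0.5]), X, y))   # should be close to sin(0.5) ≈ 0.48
```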
95.
This paper presents a statistical approach to estimating the performance of a superscalar processor. Traditional trace-driven simulators can take a large amount of time to evaluate the performance of a machine, especially as the number of instructions increases, and the result of such a simulation is typically tied to the particular trace that was run. Elements such as dependencies, delays, and stalls are all a direct result of the particular trace being run and can differ from trace to trace. This paper describes a model designed to separate simulation results from a specific trace. Rather than running a trace-driven simulation, a statistical model, specifically a Poisson distribution, is employed to predict how these types of delays affect performance. Through the use of this statistical model, a performance evaluation can be conducted using a general code model with specific stall rates rather than a particular code trace. The model allows simulations to quickly run tens of millions of instructions and evaluate the performance of a particular micro-architecture, while at the same time allowing the flexibility to change the structure of the architecture.
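The abstract does not give the model's parameters, so the following is a toy sketch under assumed values: stall events per instruction window are drawn from Poisson distributions with illustrative rates, each event costs a fixed penalty in cycles, and an IPC estimate falls out. It shows the flavor of a statistical (rather than trace-driven) evaluation, not the paper's calibrated model.

```python
import numpy as np

def simulate_ipc(n_instructions=10_000_000, window=1_000,
                 issue_width=4, stall_rates=None, stall_penalties=None, seed=0):
    """Toy statistical performance model: stall events per window are Poisson-distributed,
    and each event adds a fixed cycle penalty (illustrative assumptions only)."""
    if stall_rates is None:        # expected events per instruction window
        stall_rates = {"branch_misp": 8.0, "cache_miss": 3.0, "dep_stall": 20.0}
    if stall_penalties is None:    # cycles lost per event
        stall_penalties = {"branch_misp": 12, "cache_miss": 100, "dep_stall": 2}
    rng = np.random.default_rng(seed)
    n_windows = n_instructions // window
    base_cycles = n_instructions / issue_width       # ideal superscalar throughput
    stall_cycles = 0.0
    for kind, rate in stall_rates.items():
        events = rng.poisson(rate, size=n_windows)   # stall events in each window
        stall_cycles += events.sum() * stall_penalties[kind]
    return n_instructions / (base_cycles + stall_cycles)   # instructions per cycle

if __name__ == "__main__":
    print(f"estimated IPC: {simulate_ipc():.2f}")
```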
96.
Recently, a chaos-based image encryption scheme called RCES (also called RSES) was proposed. This paper analyses the security of RCES and points out that it is insecure against known/chosen-plaintext attacks: only one or two known/chosen plain-images are required for a successful attack. In addition, the security of RCES against brute-force attack was overestimated. Both theoretical and experimental analyses are given to show the performance of the suggested known/chosen-plaintext attacks. The insecurity of RCES is due to its special design, which makes it a typical example of an insecure image encryption scheme. A number of lessons are drawn from the reported cryptanalysis of RCES, and some common principles are suggested for ensuring a high level of security in an image encryption scheme.
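RCES itself is not reproduced here; the toy sketch below only illustrates why a single known plain-image can defeat a cipher whose keystream (or equivalent substitution) does not depend on the plaintext, which is the general weakness such known-plaintext attacks exploit. The XOR cipher is a deliberately simplified stand-in for the real scheme.

```python
import numpy as np

def toy_encrypt(img, keystream):
    """Toy image cipher: XOR with a plaintext-independent keystream (stand-in, not RCES)."""
    return img ^ keystream

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    keystream    = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
    known_plain  = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
    secret_plain = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
    known_cipher  = toy_encrypt(known_plain, keystream)
    secret_cipher = toy_encrypt(secret_plain, keystream)
    # Known-plaintext attack: one plain/cipher pair reveals an equivalent keystream ...
    recovered_keystream = known_plain ^ known_cipher
    # ... which decrypts any other image protected with the same key.
    print(np.array_equal(secret_cipher ^ recovered_keystream, secret_plain))  # True
```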
97.
Web proxy caches are used to reduce the strain of contemporary web traffic on web servers and network bandwidth providers. In this research, a novel approach to web proxy cache replacement is developed and analyzed which uses neural networks for replacement decisions. Neural networks are trained to classify cacheable objects from real-world data sets using information known to be important in web proxy caching, such as frequency and recency. Correct classification ratios between 0.85 and 0.88 are obtained both for data used for training and data not used for training. Our approach is compared with Least Recently Used (LRU), Least Frequently Used (LFU), and the optimal case, which always rates an object by its number of future requests. Performance is evaluated in simulation for various neural network structures and cache conditions. The final neural networks achieve hit rates that are 86.60% of the optimal in the worst case and 100% of the optimal in the best case; byte-hit rates are 93.36% of the optimal in the worst case and 99.92% of the optimal in the best case. We also examine the input-to-output mappings of individual neural networks and analyze the resulting caching strategy with respect to specific cache conditions.
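A minimal sketch of the replacement policy described, with two stated substitutions: a logistic-regression scorer stands in for the paper's neural network, and the frequency/recency features are synthetic. Cached objects are scored by their predicted probability of being requested again, and the lowest-scoring object is evicted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_scorer(X, y, lr=0.1, epochs=500):
    """Tiny logistic-regression scorer (a stand-in for the paper's neural network)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def eviction_victim(cache_features, w, b):
    """Evict the cached object with the lowest predicted chance of re-access."""
    return int(np.argmin(sigmoid(cache_features @ w + b)))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Synthetic per-object features: [log frequency, recency, log size].
    X = rng.normal(size=(2000, 3))
    y = (X[:, 0] - 0.5 * X[:, 1] + 0.2 * rng.normal(size=2000) > 0).astype(float)
    w, b = train_scorer(X, y)
    cache = rng.normal(size=(10, 3))            # 10 objects currently in the cache
    print("evict object", eviction_victim(cache, w, b))
```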
100.
Adaptive random testing (ART) has recently been proposed to enhance the failure-detection capability of random testing. In ART, test cases are not only randomly generated, but also evenly spread over the input domain, and various ART algorithms have been developed to spread test cases evenly in different ways. Previous studies have shown that some ART algorithms prefer to select test cases from the edge of the input domain rather than from the centre; that is, inputs do not have an equal chance of being selected as test cases. Since we do not know where the failure-causing inputs are prior to testing, it is not desirable for inputs to have different chances of being selected as test cases. In this paper, we therefore investigate how to enhance some ART algorithms by offsetting this edge preference, and propose a new family of ART algorithms. A series of simulations shows that the new algorithms not only select test cases more evenly, but also have better failure-detection capabilities.
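The new family of algorithms is not specified in the abstract, but the baseline it improves on, fixed-size-candidate-set ART, is easy to sketch: each new test case is the random candidate farthest from all executed tests, which is exactly the kind of algorithm that exhibits the edge preference discussed above. The border count at the end makes that bias visible on the unit square.

```python
import random

def min_dist(c, tests):
    """Distance from candidate c to its nearest already-executed test case."""
    return min(sum((a - b) ** 2 for a, b in zip(c, t)) ** 0.5 for t in tests)

def fscs_art(n_tests, candidates=10, dim=2, seed=0):
    """Fixed-size-candidate-set ART: pick the candidate farthest from all previous
    test cases (the classic variant whose edge preference the paper addresses)."""
    rng = random.Random(seed)
    tests = [[rng.random() for _ in range(dim)]]
    while len(tests) < n_tests:
        cands = [[rng.random() for _ in range(dim)] for _ in range(candidates)]
        tests.append(max(cands, key=lambda c: min_dist(c, tests)))
    return tests

if __name__ == "__main__":
    pts = fscs_art(100)
    border = sum(any(x < 0.1 or x > 0.9 for x in p) for p in pts)
    print(f"{border}/100 test cases fall in the outer 10% border of the unit square")
```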