921.
In this paper we propose a new algorithm to detect the irises of both eyes in a face image. The algorithm first detects the face region in the image and then extracts intensity valleys from the face region. Next, the algorithm extracts iris candidates from the valleys using the feature template of Lin and Wu (IEEE Trans. Image Process. 8 (6) (1999) 834) and the separability filter of Fukui and Yamaguchi (Trans. IEICE Japan J80-D-II (8) (1997) 2170). Finally, using the costs for pairs of iris candidates proposed in this paper, the algorithm selects the pair of candidates corresponding to the irises. The costs are computed using the Hough transform, the separability filter and template matching. In the experiments, the iris detection rate of the proposed algorithm was 95.3% for 150 face images of 15 persons without spectacles in the University of Bern database and 96.8% for 63 images of 21 persons without spectacles in the AR database.
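The separability filter named above scores how well two pixel regions (e.g. a candidate iris disk and its surrounding ring) separate into distinct intensity classes. A minimal sketch of a Fisher-style separability measure in the spirit of that filter (the region shapes and any thresholds are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def separability(region1, region2):
    """Fisher-like separability of two pixel regions: between-class
    variance divided by total variance. Values near 1 indicate a strong
    boundary (e.g. a dark iris against the bright sclera)."""
    r1 = np.asarray(region1, dtype=float).ravel()
    r2 = np.asarray(region2, dtype=float).ravel()
    allpix = np.concatenate([r1, r2])
    total_var = ((allpix - allpix.mean()) ** 2).sum()
    if total_var == 0:
        return 0.0  # perfectly uniform: no separation at all
    between = (len(r1) * (r1.mean() - allpix.mean()) ** 2
               + len(r2) * (r2.mean() - allpix.mean()) ** 2)
    return float(between / total_var)
```

Sliding such a two-region template over the valley points and keeping high-scoring locations yields iris candidates.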
922.
This paper presents an innovative approach, called the box method, to feature extraction for the recognition of handwritten characters. In this method, the binary image of the character is partitioned into a fixed number of subimages called boxes. The features consist of the vector distance (γ) from each box to a fixed point: the vector distances of all the pixels lying in a particular box from the fixed point are summed and then normalized by the number of pixels within that box. Both neural networks and fuzzy logic techniques are used for recognition, and recognition rates are found to be around 97 percent using neural networks and 98 percent using fuzzy logic. The method is independent of font and size, and with minor changes in preprocessing it can be adapted to any language.
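The γ feature described above can be sketched as follows; the choice of the image origin as the fixed point and the equal-sized box grid are illustrative assumptions:

```python
import numpy as np

def box_features(binary_img, rows, cols, fixed_point=(0.0, 0.0)):
    """Partition a binary character image into rows x cols boxes and, for
    each box, average the Euclidean distance of its foreground pixels to
    a fixed point (sum of distances normalized by pixel count)."""
    h, w = binary_img.shape
    fy, fx = fixed_point
    features = []
    for i in range(rows):
        for j in range(cols):
            box = binary_img[i * h // rows:(i + 1) * h // rows,
                             j * w // cols:(j + 1) * w // cols]
            ys, xs = np.nonzero(box)
            if len(ys) == 0:
                features.append(0.0)  # empty box contributes zero
                continue
            # shift local box coordinates to global image coordinates
            gy = ys + i * h // rows
            gx = xs + j * w // cols
            d = np.hypot(gy - fy, gx - fx)
            features.append(d.sum() / len(d))  # gamma for this box
    return np.array(features)
```

The resulting fixed-length vector (one γ per box) is what gets fed to the neural-network or fuzzy classifier.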
923.
One of the problems that hinders conventional methods for shape-from-shading is the presence of local specularities, which may be misidentified as high-curvature surface features. In this paper we address the problem of estimating the proportions of the Lambertian and specular reflection components in order to improve the quality of the surface normal information recoverable using shape-from-shading. The framework for our study is provided by the iterated conditional modes algorithm. We develop a maximum a posteriori probability (MAP) estimation method for estimating the mixing proportions of Lambertian and specular reflectance, and also for recovering local surface normals. The MAP estimation scheme has two model ingredients. First, there are separate conditional measurement densities which describe the distributions of surface normal directions for the Lambertian and specular reflectance components; we experimentally compare three different models for the specular component. The second ingredient is a smoothness prior which models the distribution of surface normal directions over local image regions. We demonstrate the utility of the method on real-world data, with ground truth provided by imagery obtained with crossed polaroid filters. This reveals not only that the method accurately estimates the proportion of specular reflection, but also that it yields good surface normal reconstruction in the proximity of specular highlights.
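To make the mixing idea concrete, here is a toy Phong-style shading model in which a pixel's brightness is a convex mixture of a Lambertian and a specular term; the mixing proportion is the kind of quantity the MAP scheme estimates. This is an illustrative sketch under simple assumptions, not the paper's actual measurement densities:

```python
import numpy as np

def mixed_reflectance(normal, light, view, alpha, shininess=20.0):
    """Toy mixture of Lambertian and specular shading. alpha is the
    specular mixing proportion; alpha = 0 gives pure Lambertian."""
    n, s, v = (np.asarray(x, dtype=float) for x in (normal, light, view))
    n, s, v = (n / np.linalg.norm(n),
               s / np.linalg.norm(s),
               v / np.linalg.norm(v))
    lambertian = max(0.0, float(n @ s))
    # mirror reflection of the light direction about the surface normal
    r = 2.0 * float(n @ s) * n - s
    specular = max(0.0, float(r @ v)) ** shininess
    return (1.0 - alpha) * lambertian + alpha * specular
```

Near a specular highlight the second term dominates, which is exactly why a pure Lambertian shape-from-shading model misreads such pixels as high-curvature features.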
924.
In this paper, Delaunay triangulation is applied to the extraction of text areas in a document image. By representing each connected component in the document image by its centroid, the page structure is described as a set of points in two-dimensional space. When Delaunay triangulation is imposed on these points, the text regions exhibit triangular features that distinguish them from image and drawing regions. For analysis, the Delaunay triangles are divided into four classes. The study reveals that specific triangles in text areas can be clustered together and identified as the text body. Using this method, text regions in a document image containing fragments can also be recognized accurately. Experiments show that the method is also very efficient.
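A minimal sketch of the first two steps, triangulating component centroids and extracting a per-triangle feature. Classifying by maximum edge length is an illustrative stand-in for the paper's four triangle classes (evenly spaced characters on a text line produce tightly clustered edge lengths, while drawings produce irregular ones):

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_features(centroids):
    """Delaunay-triangulate connected-component centroids and return the
    triangulation plus each triangle's maximum edge length."""
    tri = Delaunay(np.asarray(centroids, dtype=float))
    p = tri.points
    max_edges = []
    for a, b, c in tri.simplices:
        edges = [np.linalg.norm(p[a] - p[b]),
                 np.linalg.norm(p[b] - p[c]),
                 np.linalg.norm(p[c] - p[a])]
        max_edges.append(max(edges))
    return tri, np.array(max_edges)
```

Thresholding or clustering these lengths would then separate candidate text triangles from graphics triangles.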
925.
Techniques for color-based tracking of faces or hands often assume a static skin model, yet skin color, as measured by a camera, changes when the lighting changes. For robust skin pixel detection, therefore, an adaptive skin color model must be employed. We demonstrate a chromaticity-based constraint for selecting training pixels in a scene to update a dynamic skin color model under changing illumination conditions. The method makes use of the ‘skin locus’ of a camera, that is, the area in chromaticity space where skin chromaticity is observed under various lighting and camera calibration conditions. Skin color models derived from the technique are compared with those derived under a common spatial constraint, and are shown to be more consistent with a manually extracted per-frame ground-truth skin model, even as localization errors increase. The technique is applied to color-based face tracking in indoor and outdoor videos and is shown to succeed more often than other color model adaptation techniques.
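The chromaticity space referred to above is the standard normalized-RGB space; a sketch follows, where the rectangular locus bounds are hypothetical placeholders (a real skin locus is a camera-specific region measured under varied illumination):

```python
import numpy as np

def chromaticity(rgb):
    """Convert RGB pixel values to normalized (r, g) chromaticity,
    discarding overall intensity. Skin pixels fall in a compact
    'skin locus' in this space."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero on black pixels
    norm = rgb / s
    return norm[..., 0], norm[..., 1]

def in_skin_locus(r, g, r_range=(0.35, 0.60), g_range=(0.25, 0.38)):
    """Illustrative box-shaped locus test with assumed bounds."""
    return (r_range[0] <= r <= r_range[1]) and (g_range[0] <= g <= g_range[1])
```

Pixels passing the locus test under the current frame's illumination are the ones used to retrain the dynamic skin model.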
926.
Visual analysis of human motion is currently one of the most active research topics in computer vision. This strong interest is driven by a wide spectrum of promising applications in many areas such as virtual reality, smart surveillance, perceptual interface, etc. Human motion analysis concerns the detection, tracking and recognition of people, and more generally, the understanding of human behaviors, from image sequences involving humans. This paper provides a comprehensive survey of research on computer-vision-based human motion analysis. The emphasis is on three major issues involved in a general human motion analysis system, namely human detection, tracking and activity understanding. Various methods for each issue are discussed in order to examine the state of the art. Finally, some research challenges and future directions are discussed.
927.
This paper gives insight into how to improve the learning capabilities of multilayer feedforward neural networks with linear basis functions when the number of training patterns is limited, following the basic principle of the support vector machine (SVM): obtaining optimal separating hyperplanes. Furthermore, the paper analyses the characteristics of sigmoid-type activation functions, investigates the influence of the absolute sizes of variables on the convergence rate, classification ability and non-linear fitting accuracy of multilayer feedforward networks, and presents a way to select suitable activation functions. The proposed method effectively enhances the learning abilities of multilayer feedforward neural networks by introducing a sum-of-squares weight term into the networks' error functions and appropriately enlarging the variable components with the help of SVM theory. Finally, the effectiveness of the proposed method is verified through three classification examples and a non-linear mapping example.
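The sum-of-squares weight term mentioned above is the familiar weight-decay regularizer. A minimal sketch on a single linear layer (the layer shape and learning rate are illustrative assumptions, not the paper's network):

```python
import numpy as np

def regularized_error(W, X, y, lam):
    """Mean squared error plus the sum-of-squares weight term (weight
    decay) added to the network's error function."""
    pred = X @ W
    return float(np.mean((pred - y) ** 2) + lam * np.sum(W ** 2))

def gradient_step(W, X, y, lam, lr=0.1):
    """One gradient-descent step on the regularized error."""
    pred = X @ W
    grad = 2.0 * X.T @ (pred - y) / len(y) + 2.0 * lam * W
    return W - lr * grad
```

The `lam * sum(W**2)` penalty keeps the weights small, which in SVM terms corresponds to widening the margin of the separating hyperplane.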
928.
With the emergence of digital libraries, more and more documents are stored and transmitted over the Internet as compressed images, so it is valuable to develop a system capable of retrieving documents from these compressed document images. Targeting CCITT Group 4, a compression standard widely used for document images, we present an approach to retrieving documents directly from CCITT Group 4 compressed document images. The black and white changing elements are extracted directly from the compressed images to act as feature pixels, and connected components are detected simultaneously. Word boxes are then bounded by merging the connected components. A weighted Hausdorff distance is proposed to assign the word objects from both the query document and the database document to corresponding classes with an unsupervised classifier, while possible stop words are excluded. Document vectors are built from the occurrence frequencies of the word-object classes, and the pair-wise similarity of two document images is represented by the scalar product of their document vectors. Nine groups of articles from different domains are used to test the validity of the presented approach. Preliminary experiments with document images captured from students' theses show that the proposed approach achieves promising performance.
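The final retrieval step, frequency vectors over word-object classes compared by scalar product, can be sketched as follows; the integer class labels stand in for the classes produced by the unsupervised classifier:

```python
from collections import Counter

def document_vector(word_classes, num_classes):
    """Occurrence-frequency vector over word-object class labels."""
    counts = Counter(word_classes)
    total = sum(counts.values())
    return [counts.get(c, 0) / total for c in range(num_classes)]

def similarity(vec_a, vec_b):
    """Scalar product of two document vectors."""
    return sum(a * b for a, b in zip(vec_a, vec_b))
```

Documents whose word-class distributions overlap heavily score high, without ever decompressing the page images into text.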
929.
Imagination and innovation are among the energy sources the PM industry needs to maintain growth rates. Research in New Jersey shows that binders and lubricants have a role to play in pushing green densities higher…
930.
Several studies have examined the relative performance merits of the torus and hypercube taking into account the channel bandwidth constraints imposed by implementation technology. While the torus has been shown to outperform the hypercube under the constant wiring density constraint, the opposite conclusion has been reached when the constant pin-out constraint is considered. However, all these studies have assumed deterministic routing and have not taken into account the internal hardware cost of routers. This paper re-examines the performance merits of the torus and hypercube using both fully-adaptive and deterministic routing strategies. Moreover, it uses a new cost model which takes into account the internal hardware cost of routers.