181.
One of the important obstacles in image-based analysis of the human face is the 3D nature of the problem and the 2D nature of most imaging systems used for biometric applications. As a result, accuracy is strongly influenced by the viewpoint of the images, with frontal views being the most thoroughly studied. However, when fully automatic face analysis systems are designed, capturing frontal-view images cannot be guaranteed. Examples of this situation can be found in surveillance systems, car driver images, or wherever architectural constraints prevent placing a camera directly in front of the subject. Taking advantage of the fact that most facial features lie approximately on the same plane, we propose the use of projective geometry across different views. An active shape model constructed from frontal-view images can then be applied directly to the segmentation of pictures taken from other viewpoints. The proposed extension proves significantly more invariant to viewpoint than the standard approach. Validation of the method is presented on 360 images from the AV@CAR database, systematically divided into three rotations (to both sides) as well as upper and lower views due to nodding. The presented tests are among the largest quantitative results reported to date for face segmentation under varying poses.
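As an illustration of the planar assumption, the sketch below (not the authors' implementation) estimates a homography from a few anchor correspondences visible in both the frontal and the rotated view, then transfers the frontal ASM landmarks into the rotated view. The function names and the use of OpenCV are assumptions made for the example.

```python
# Hypothetical sketch: transfer frontal active-shape-model landmarks into a
# rotated view through a planar homography, assuming the main facial features
# (eyes, brows, nose base, mouth) lie approximately on one plane.
import numpy as np
import cv2

def map_asm_landmarks(frontal_pts, rotated_pts, asm_shape):
    """frontal_pts, rotated_pts: (N, 2) corresponding anchor points visible
    in both views; asm_shape: (M, 2) landmarks of the frontal ASM."""
    # Estimate the plane-to-plane mapping between the two views.
    H, _ = cv2.findHomography(frontal_pts.astype(np.float32),
                              rotated_pts.astype(np.float32), cv2.RANSAC)
    # Transfer every ASM landmark into the rotated view.
    src = asm_shape.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(src, H).reshape(-1, 2)
```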
182.
A novel facial expression classification (FEC) method is presented and evaluated. The classification process is decomposed into multiple two-class classification problems, a choice that is analytically justified, and unique sets of features are extracted for each problem. Specifically, for each two-class problem, an iterative feature selection process that uses a class separability measure is employed to create salient feature vectors (SFVs), where each SFV is composed of a selected feature subset. Subsequently, two-class discriminant analysis is applied to the SFVs to produce salient discriminant hyperplanes (SDHs), which are used to train the corresponding two-class classifiers. To properly integrate the two-class classification results and produce the FEC decision, a computationally efficient and fast classification scheme is developed. During each step of this scheme, the most reliable classifier is identified and utilized, and thus a more accurate final classification decision is produced. The JAFFE and MMI databases are used to evaluate the performance of the proposed salient-feature-and-reliable-classifier selection (SFRCS) methodology. Classification rates of 96.71% and 93.61% are achieved under the leave-one-sample-out evaluation strategy, and 85.92% under the leave-one-subject-out evaluation strategy.
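A minimal sketch of the two-class decomposition idea follows, under simplifying assumptions: features are ranked per class pair with a Fisher-ratio separability score and a linear discriminant is trained on the selected subset. The SFRCS reliability-based integration step is omitted, and all names below are illustrative.

```python
# Illustrative sketch (not the authors' exact SFRCS pipeline): decompose a
# multi-class expression problem into pairwise two-class problems, pick a
# salient feature subset per pair with a Fisher-ratio score, and train a
# two-class discriminant classifier on that subset.
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fisher_ratio(X, y):
    # Per-feature separability: (m1 - m2)^2 / (v1 + v2)
    X1, X2 = X[y == 0], X[y == 1]
    return (X1.mean(0) - X2.mean(0)) ** 2 / (X1.var(0) + X2.var(0) + 1e-12)

def train_pairwise(X, y, n_feats=20):
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])
        Xp, yp = X[mask], (y[mask] == b).astype(int)
        idx = np.argsort(fisher_ratio(Xp, yp))[::-1][:n_feats]   # salient feature subset
        clf = LinearDiscriminantAnalysis().fit(Xp[:, idx], yp)   # two-class discriminant
        models[(a, b)] = (idx, clf)
    return models
```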
183.
In knowledge discovery in a text database, extracting and returning a subset of information highly relevant to a user's query is a critical task. In a broader sense, this is essentially the identification of personalized patterns, which drives such applications as Web search engine construction, customized text summarization and automated question answering. The related problem of text snippet extraction has been studied previously in information retrieval. In those studies, common strategies for extracting and presenting text snippets that meet user needs either process document fragments that have been delimited a priori or use a sliding window of a fixed size to highlight the results. In this work, we argue that text snippet extraction can be generalized if the user's intention is better utilized. Our approach overcomes the rigidity of existing methods by dynamically returning more flexible start-end positions of text snippets, which are also semantically more coherent. This is achieved by constructing and using statistical language models that effectively capture the commonalities between a document and the user intention. Experiments indicate that our proposed solutions provide effective personalized information extraction services.
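The toy sketch below conveys the flavour of language-model-based snippet selection: every candidate start-end span is scored by how well a smoothed unigram model of the span explains the query terms (a stand-in for the user intention). It is a brute-force illustration, not the paper's models or estimation procedure.

```python
# Hypothetical sketch of dynamic start-end snippet selection with smoothed
# unigram language models; quadratic in document length, so only suitable
# for short documents or pre-selected passages.
from collections import Counter
import math

def lm(tokens, vocab, mu=0.5):
    # Additively smoothed unigram model over a fixed vocabulary
    c, n = Counter(tokens), len(tokens)
    return {w: (c[w] + mu) / (n + mu * len(vocab)) for w in vocab}

def best_snippet(doc_tokens, query_tokens, max_len=40):
    vocab = set(doc_tokens) | set(query_tokens)
    q = lm(query_tokens, vocab)
    best, best_score = (0, 0), float("-inf")
    for i in range(len(doc_tokens)):
        for j in range(i + 5, min(i + max_len, len(doc_tokens)) + 1):
            s = lm(doc_tokens[i:j], vocab)
            # Cross-entropy of the query model under the snippet model
            score = sum(q[w] * math.log(s[w]) for w in vocab)
            if score > best_score:
                best, best_score = (i, j), score
    return doc_tokens[best[0]:best[1]]
```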
184.
An ultrasound speckle reduction method is proposed in this paper. The filter, which enhances the power of anisotropic diffusion with the Smallest Univalue Segment Assimilating Nucleus (SUSAN) edge detector, is referred to as SUSAN-controlled anisotropic diffusion (SUSAN_AD). The SUSAN edge detector finds image features by using local information from a pseudo-global perspective. Thanks to the noise insensitivity and structure-preservation properties of SUSAN, better control can be provided over the subsequent diffusion process. To enhance the adaptability of SUSAN_AD, the parameters of the SUSAN edge detector are calculated from the statistics of a fully formed speckle (FFS) region. Different FFS estimation schemes are proposed for envelope-detected speckle images and log-compressed ultrasonic images. Adaptive diffusion threshold estimation and an automatic diffusion termination criterion are employed to enhance the robustness of the method. Both synthetic and real ultrasound images are used to evaluate the proposed method. The performance of SUSAN_AD is compared with that of four other existing speckle reduction methods. It is shown that the proposed method is superior to the others in both noise reduction and detail preservation.
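A minimal sketch of edge-controlled anisotropic diffusion in this spirit is given below; a simple gradient-magnitude response stands in for the SUSAN edge detector, and the diffusion threshold k is fixed rather than estimated from an FFS region as in the paper.

```python
# Minimal sketch of edge-gated diffusion: smooth where the (placeholder)
# edge response is weak, preserve structure where it is strong.
import numpy as np

def edge_controlled_diffusion(img, n_iter=30, k=0.05, lam=0.2):
    """img is assumed normalized to [0, 1]; k is a fixed edge threshold."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # Gradient magnitude acts as a stand-in for the SUSAN edge response
        edge = np.sqrt(dn ** 2 + de ** 2)
        g = np.exp(-(edge / k) ** 2)       # gates the diffusion near structure
        u += lam * g * (dn + ds + de + dw)
    return u
```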
185.
This paper proposes a generic methodology for segmentation and reconstruction of volumetric datasets based on a deformable model, the topological active volumes (TAV) model. This model, based on a polyhedral mesh, integrates features of region-based and boundary-based segmentation methods in order to fit the contours of the objects and model their inner topology. Moreover, it implements automatic procedures, the so-called topological changes, that alter the mesh structure and allow the segmentation of complex features such as pronounced curvatures or holes, as well as the detection of several objects in the scene. This work presents the TAV model and the segmentation methodology and explains how the changes in the TAV structure can improve the adjustment process. In particular, it focuses on increasing the mesh density in complex image areas in order to improve the adjustment to object surfaces. The suitability of the mesh structure and the segmentation methodology is analyzed, and the accuracy of the proposed model is demonstrated on both synthetic and real images.
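As a rough, hypothetical analogue of the mesh-density increase, the sketch below flags cells of a regular volumetric grid whose local gradient energy is high, i.e. where the object surface is likely to be complex; the actual TAV refinement operates on its polyhedral mesh and is considerably more involved.

```python
# Illustrative sketch (not the TAV implementation): flag grid cells with high
# image gradient energy as candidates for local mesh refinement.
import numpy as np

def cells_to_refine(volume, cell=8, thresh=None):
    gx, gy, gz = np.gradient(volume.astype(float))
    energy = gx ** 2 + gy ** 2 + gz ** 2
    nz, ny, nx = (s // cell for s in volume.shape)
    # Mean gradient energy inside each (cell x cell x cell) block
    blocks = energy[:nz * cell, :ny * cell, :nx * cell]
    cell_energy = blocks.reshape(nz, cell, ny, cell, nx, cell).mean(axis=(1, 3, 5))
    if thresh is None:
        thresh = cell_energy.mean() + cell_energy.std()
    return np.argwhere(cell_energy > thresh)   # indices of cells to subdivide
```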
186.
In this paper, we introduce an efficient algorithm for mining discriminative regularities in databases with mixed and incomplete data. Unlike previous methods, our algorithm does not apply an a priori discretization to numerical features; it extracts regularities from a set of diverse decision trees induced with a special procedure. Experimental results show that a classifier based on the regularities obtained by our algorithm attains higher classification accuracy while using fewer discriminative regularities than previous pattern-based classifiers. Additionally, we show that our classifier is competitive with traditional and state-of-the-art classifiers.
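A hedged sketch of tree-based regularity extraction follows: several decision trees are induced on random feature subsets and their nearly pure root-to-leaf paths are kept as candidate discriminative regularities. This only illustrates the general idea; it does not handle mixed or incomplete data as the authors' procedure does.

```python
# Sketch: mine candidate regularities (condition conjunctions that are mostly
# pure for one class) from diverse decision trees.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_regularities(X, y, n_trees=10, purity=0.95, rng=np.random.default_rng(0)):
    rules = []
    for _ in range(n_trees):
        # Diversity via a random subset of features per tree
        feats = rng.choice(X.shape[1], size=max(1, X.shape[1] // 2), replace=False)
        tree = DecisionTreeClassifier(max_depth=4).fit(X[:, feats], y)
        t = tree.tree_

        def walk(node, conds):
            if t.children_left[node] == -1:                 # leaf node
                counts = t.value[node][0]
                if counts.max() / counts.sum() >= purity:   # mostly one class
                    rules.append((conds, int(counts.argmax())))
                return
            f, thr = feats[t.feature[node]], t.threshold[node]
            walk(t.children_left[node], conds + [(f, "<=", thr)])
            walk(t.children_right[node], conds + [(f, ">", thr)])

        walk(0, [])
    return rules
```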
187.
Recently, Lin and Tsai [Secret image sharing with steganography and authentication, The Journal of Systems and Software 73 (2004) 405-414] and Yang et al. [Improvements of image sharing with steganography and authentication, The Journal of Systems and Software 80 (2007) 1070-1076] proposed secret image sharing schemes combining steganography and authentication based on Shamir's polynomials. The schemes divide a secret image into shadows which are then embedded in cover images in order to produce stego images for distribution among participants. To achieve better authentication ability, Chang et al. [Sharing secrets in stego images with authentication, Pattern Recognition 41 (2008) 3130-3137] proposed in 2008 an improved scheme which also enhances the visual quality of the stego images; its probability of successful verification for a fake stego block is 1/16. In this paper, we employ linear cellular automata, digital signatures, and hash functions to propose a novel (t,n)-threshold image sharing scheme with steganographic properties, in which a double authentication mechanism is introduced that can detect tampering with probability 255/256. Employing cellular automata instead of Shamir's polynomials not only reduces the computational complexity to O(n) but also obviates the need to modify pixels of cover images unnecessarily. Compared to the previous methods [C. Lin, W. Tsai, Secret image sharing with steganography and authentication, The Journal of Systems and Software 73 (2004) 405-414; C. Yang, T. Chen, K. Yu, C. Wang, Improvements of image sharing with steganography and authentication, The Journal of Systems and Software 80 (2007) 1070-1076; C. Chang, Y. Hsieh, C. Lin, Sharing secrets in stego images with authentication, Pattern Recognition 41 (2008) 3130-3137], we use fewer bits in each pixel of the cover images for embedding data, so that better visual quality is guaranteed. We further present experimental results.
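To give a feel for the cellular-automata primitive, the sketch below evolves a reversible linear memory cellular automaton over bytes, where each new configuration is a linear function of the current neighbourhood minus the previous configuration (mod 256); in CA-based sharing schemes, shares are typically drawn from successive configurations. The specific rule and parameters here are illustrative assumptions, not the rule of the cited scheme.

```python
# Reversible linear memory CA over bytes: because
# C_{t-1} = (C_t_left + C_t + C_t_right - C_{t+1}) mod 256,
# the evolution can be run backwards from any two consecutive configurations,
# which is what makes recovery of the initial (secret) data possible.
import numpy as np

def step(prev, curr):
    left, right = np.roll(curr, 1), np.roll(curr, -1)
    # C_{t+1} = (C_t_left + C_t + C_t_right - C_{t-1}) mod 256
    return ((left.astype(int) + curr + right - prev) % 256).astype(np.uint8)

def evolve(c0, c1, n_steps):
    """c0, c1: initial uint8 configurations; returns all configurations."""
    configs = [c0, c1]
    for _ in range(n_steps):
        configs.append(step(configs[-2], configs[-1]))
    return configs
```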
188.
189.
Many clustering approaches have been proposed in the literature, but most of them are sensitive to differences in cluster size, shape and density. In this paper, we present a graph-theoretical clustering method which is robust to these differences. Based on a graph composed of two rounds of minimum spanning trees (MSTs), the proposed method (2-MSTClus) classifies clustering problems into two groups, i.e. separated-cluster problems and touching-cluster problems, and identifies the two groups automatically. It contains two clustering algorithms which deal with separated clusters and touching clusters in two phases, respectively. In the first phase, two rounds of minimum spanning trees are employed to construct a graph and detect separated clusters, covering both distance-separated and density-separated clusters. In the second phase, touching clusters, which are subgroups produced in the first phase, are partitioned by comparing cuts on the two rounds of minimum spanning trees. The proposed method is robust to varied cluster sizes, shapes and densities, and can discover the number of clusters. Experimental results on synthetic and real datasets demonstrate the performance of the proposed method.
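The rough sketch below conveys the two-round MST idea under strong simplifications: a second MST is built after removing the edges of the first, unusually long edges are pruned from both trees, and connected components of the retained edges give the separated clusters. The paper's cut-comparison criterion and the touching-cluster phase are not reproduced.

```python
# Simplified two-round MST clustering sketch (separated-cluster phase only).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def two_round_mst_clusters(X, k=2.0):
    D = squareform(pdist(X))
    mst1 = minimum_spanning_tree(D).toarray()
    D2 = D.copy()
    D2[(mst1 > 0) | (mst1.T > 0)] = 0          # remove first-round edges (0 = no edge)
    mst2 = minimum_spanning_tree(D2).toarray()
    kept = np.zeros_like(D)
    for mst in (mst1, mst2):
        w = mst[mst > 0]
        # Prune edges much longer than the tree's typical edge length
        keep = (mst > 0) & (mst <= w.mean() + k * w.std())
        kept[keep] = mst[keep]
    _, labels = connected_components(kept, directed=False)
    return labels
```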
190.
Fusion of laser and vision for object detection has been accomplished by two main approaches: (1) independent integration of sensor-driven features or sensor-driven classifiers, or (2) a region of interest (ROI) is found by laser segmentation and an image classifier is used to label the projected ROI. Here, we propose a novel fusion approach based on semantic information and embodied at multiple levels. Sensor fusion is based on the spatial relationships of parts-based classifiers and is performed via a Markov logic network. The proposed system deals with partial segments, is able to recover depth information even if the laser fails, and models the integration through contextual information, characteristics not found in previous approaches. Experiments on pedestrian detection demonstrate the effectiveness of our method on data sets gathered in urban scenarios.
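The small sketch below shows the laser-to-image projection step that both the baseline ROI approach and a parts-based fusion rely on: the 3D points of a laser segment are projected through a calibrated pinhole camera matrix and their bounding box defines the image ROI. The matrix P and the function name are assumptions made for illustration.

```python
# Sketch: project a laser segment into the image plane and return its ROI.
import numpy as np

def laser_segment_to_roi(points_3d, P):
    """points_3d: (N, 3) laser points in camera coordinates;
    P: 3x4 camera projection matrix (assumed known from calibration)."""
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    uvw = (P @ homog.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]               # perspective division
    x0, y0 = uv.min(axis=0)
    x1, y1 = uv.max(axis=0)
    return int(x0), int(y0), int(x1), int(y1)   # image-plane bounding box
```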