201.
In this paper we propose a new circularity measure which defines the degree to which a shape differs from a perfect circle. The new measure is easy to compute and, being area based, is robust with respect to, e.g., noise or narrow intrusions. Also, it satisfies the following desirable properties:
it ranges over (0,1] and gives the measured circularity equal to 1 if and only if the measured shape is a circle;
it is invariant with respect to translations, rotations and scaling.
Compared with the most common circularity measure, which relates shape area to shape perimeter, the new measure performs better for shapes with boundary defects (which cause a large increase in perimeter) and for compound shapes. In contrast to the standard circularity measure, the new measure depends on the mutual position of the components inside a compound shape. The new measure also behaves consistently for shapes with very small (i.e., close to zero) measured circularity; it turns out that this property enables the new measure to assess the linearity of shapes. In addition, we propose a generalisation of the new measure so that shape circularity can be computed while controlling the impact of the relative position of points inside the shape. An additional advantage of the generalised measure is that it can be used to detect small irregularities in nearly circular shapes damaged by noise or during an extraction process in a particular image processing task.
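The abstract does not give the closed form of the measure, but a standard way to build an area-based circularity from second-order moments is C = m00² / (2π(μ20 + μ02)), which is 1 exactly for a disc, is translation/rotation/scale invariant, and decays toward 0 for elongated shapes. The sketch below (the names `moment_circularity` and `standard_circularity` are mine, not the paper's) contrasts it with the classic isoperimetric measure on a rasterized disc:

```python
import math

def standard_circularity(area, perimeter):
    """Classic isoperimetric measure 4*pi*A / P^2 (1 for a perfect circle).
    Sensitive to boundary defects, since they inflate the perimeter."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def moment_circularity(points):
    """Area-based measure from central second moments of a pixel set:
    C = m00^2 / (2*pi*(mu20 + mu02)).  Being an area integral, it is
    robust to noise and narrow intrusions on the boundary."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu = sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points)
    return n ** 2 / (2.0 * math.pi * mu)

# Rasterized disc of radius 20: the moment measure is close to 1,
# while a thin line of pixels scores near 0 (low "circularity" =
# the shape is nearly linear).
disc = [(x, y) for x in range(-25, 26) for y in range(-25, 26)
        if x * x + y * y <= 400]
line = [(i, 0) for i in range(100)]
```

Note how the near-zero score of `line` illustrates the linearity-measuring property claimed in the abstract.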
202.
One of the important obstacles in image-based analysis of the human face is the 3D nature of the problem versus the 2D nature of most imaging systems used for biometric applications. Because of this, accuracy is strongly influenced by the viewpoint of the images, with frontal views being the most thoroughly studied. However, when fully automatic face analysis systems are designed, capturing frontal-view images cannot be guaranteed. Examples of this situation include surveillance systems, car driver images, or architectural constraints that prevent placing a camera frontally to the subject. Taking advantage of the fact that most facial features lie approximately on the same plane, we propose the use of projective geometry across different views. An active shape model constructed from frontal-view images can then be applied directly to the segmentation of pictures taken from other viewpoints. The proposed extension proves significantly more invariant to viewpoint than the standard approach. Validation of the method is presented on 360 images from the AV@CAR database, systematically divided into three different rotations (to both sides), as well as upper and lower views due to nodding. The presented tests are among the largest quantitative results reported to date in face segmentation under varying poses.
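The core geometric step, mapping the frontal-view shape model into another view through a plane-to-plane homography, can be sketched as follows. The function name and the toy rotation are illustrative; in the paper the homography would be estimated between actual views:

```python
import numpy as np

def project_landmarks(H, pts):
    """Map 2D landmarks through a 3x3 plane-to-plane homography H.
    pts: (N, 2) array of frontal-view shape-model points."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

# A pure in-plane rotation is a special-case homography; in practice H is
# estimated from point correspondences between the frontal and rotated views.
theta = np.deg2rad(30.0)
H_rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
frontal = np.array([[1.0, 0.0], [0.0, 1.0]])
rotated = project_landmarks(H_rot, frontal)
```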
203.
A novel facial expression classification (FEC) method is presented and evaluated. The classification process is decomposed into multiple two-class classification problems, a choice that is analytically justified, and unique sets of features are extracted for each classification problem. Specifically, for each two-class problem, an iterative feature selection process that utilizes a class separability measure is employed to create salient feature vectors (SFVs), where each SFV is composed of a selected feature subset. Subsequently, two-class discriminant analysis is applied to the SFVs to produce salient discriminant hyper-planes (SDHs), which are used to train the corresponding two-class classifiers. To properly integrate the two-class classification results and produce the FEC decision, a computationally efficient and fast classification scheme is developed. During each step of this scheme, the most reliable classifier is identified and utilized, thus producing a more accurate final classification decision. The JAFFE and MMI databases are used to evaluate the performance of the proposed salient-feature-and-reliable-classifier selection (SFRCS) methodology. Classification rates of 96.71% and 93.61% are achieved under the leave-one-sample-out evaluation strategy, and 85.92% under the leave-one-subject-out evaluation strategy.
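The abstract does not name the class separability measure used for feature selection; a Fisher-style score (squared between-class mean gap over the sum of within-class variances) is a common choice, and the sketch below uses it for one two-class subproblem. All names and the toy data are illustrative, not the paper's:

```python
def fisher_scores(X, y):
    """Per-feature class-separability score for a two-class problem
    (labels 0/1): (mean difference)^2 / (var_0 + var_1).
    Higher score = more salient feature."""
    d = len(X[0])
    scores = []
    for j in range(d):
        a = [row[j] for row, label in zip(X, y) if label == 0]
        b = [row[j] for row, label in zip(X, y) if label == 1]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((v - ma) ** 2 for v in a) / len(a)
        vb = sum((v - mb) ** 2 for v in b) / len(b)
        scores.append((ma - mb) ** 2 / (va + vb + 1e-12))
    return scores

def select_salient(X, y, k):
    """Keep the indices of the k most separable features (one SFV)."""
    scores = fisher_scores(X, y)
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

# Toy data: feature 0 separates the classes, feature 1 is uninformative.
X = [[0, 5.0], [1, 5.1], [0.5, 4.9], [10, 5.05], [11, 4.95], [10.5, 5.0]]
y = [0, 0, 0, 1, 1, 1]
```

In the full method this selection would be iterated per two-class problem, followed by discriminant analysis on each selected subset.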
204.
In knowledge discovery in a text database, extracting and returning a subset of information highly relevant to a user's query is a critical task. In a broader sense, this is essentially the identification of certain personalized patterns that drive applications such as Web search engine construction, customized text summarization and automated question answering. The related problem of text snippet extraction has been studied previously in information retrieval. In those studies, common strategies for extracting and presenting text snippets either process document fragments that have been delimited a priori or use a sliding window of a fixed size to highlight the results. In this work, we argue that text snippet extraction can be generalized if the user's intention is better utilized. Our approach overcomes the rigidity of existing methods by dynamically returning more flexible start-end positions of text snippets, which are also semantically more coherent. This is achieved by constructing and using statistical language models which effectively capture the commonalities between a document and the user intention. Experiments indicate that our proposed solutions provide effective personalized information extraction services.
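A minimal sketch of the dynamic start-end idea: score every candidate span with a smoothed unigram language model of the query and keep the best-scoring one, rather than sliding a fixed-size window. The smoothing constant, length handling and tokenization here are assumptions, not the paper's exact model:

```python
import math

def best_snippet(doc_tokens, query_tokens, max_len=10):
    """Score every candidate span [i, j) with a smoothed unigram language
    model built from the span, and return the span maximizing the query
    log-likelihood; longer spans pay a larger smoothing denominator, so
    tight spans covering the query terms win."""
    best, best_score = (0, 1), float('-inf')
    vocab = set(doc_tokens) | set(query_tokens)
    for i in range(len(doc_tokens)):
        for j in range(i + 1, min(i + max_len, len(doc_tokens)) + 1):
            window = doc_tokens[i:j]
            counts = {}
            for t in window:
                counts[t] = counts.get(t, 0) + 1
            score = sum(math.log((counts.get(q, 0) + 0.1) /
                                 (len(window) + 0.1 * len(vocab)))
                        for q in query_tokens)
            if score > best_score:
                best_score, best = score, (i, j)
    return best

doc = "the cat sat on the mat while the dog chased a red ball".split()
span = best_snippet(doc, ["dog", "ball"], max_len=6)
```

Unlike a fixed window, the returned boundaries adapt to where the query terms actually cluster in the document.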
205.
An ultrasound speckle reduction method is proposed in this paper. The filter, which enhances the power of anisotropic diffusion with the Smallest Univalue Segment Assimilating Nucleus (SUSAN) edge detector, is referred to as the SUSAN-controlled anisotropic diffusion (SUSAN_AD). The SUSAN edge detector finds image features by using local information from a pseudo-global perspective. Thanks to the noise insensitivity and structure preservation properties of SUSAN, a better control can be provided to the subsequent diffusion process. To enhance the adaptability of the SUSAN_AD, the parameters of the SUSAN edge detector are calculated based on the statistics of a fully formed speckle (FFS) region. Different FFS estimation schemes are proposed for envelope-detected speckle images and log-compressed ultrasonic images. Adaptive diffusion threshold estimation and automatic diffusion termination criterion are employed to enhance the robustness of the method. Both synthetic and real ultrasound images are used to evaluate the proposed method. The performance of the SUSAN_AD is compared with four other existing speckle reduction methods. It is shown that the proposed method is superior to other methods in both noise reduction and detail preservation.
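As a sketch of the diffusion half of the method only, the classic Perona-Malik update is shown below; in SUSAN_AD the gradient-based conduction coefficient used here would be replaced by the SUSAN edge response, which is what gives the filter its edge control:

```python
import numpy as np

def diffuse(img, n_iter=20, kappa=0.5, lam=0.2):
    """Perona-Malik-style anisotropic diffusion with periodic boundaries.
    The conduction coefficient c = exp(-(|grad|/kappa)^2) is near 0 at
    strong edges (little smoothing) and near 1 in homogeneous regions,
    the same role the SUSAN edge map plays in SUSAN_AD."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences toward the four neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # edge-stopping weighted update; fluxes cancel pairwise, so the
        # image mean is conserved
        u += lam * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u
```

On a noisy homogeneous patch this reduces variance while leaving strong edges (differences much larger than `kappa`) nearly untouched.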
206.
This paper proposes a generic methodology for segmentation and reconstruction of volumetric datasets based on a deformable model, the topological active volumes (TAV). This model, based on a polyhedral mesh, integrates features of region-based and boundary-based segmentation methods in order to fit the contours of the objects and model their inner topology. Moreover, it implements automatic procedures, the so-called topological changes, that alter the mesh structure and allow the segmentation of complex features such as pronounced curvatures or holes, as well as the detection of several objects in the scene. This work presents the TAV model and the segmentation methodology and explains how changes in the TAV structure can improve the adjustment process. In particular, it focuses on increasing the mesh density in complex image areas in order to improve the adjustment to object surfaces. The suitability of the mesh structure and the segmentation methodology is analyzed, and the accuracy of the proposed model is demonstrated on both synthetic and real images.
207.
In this paper, we introduce an efficient algorithm for mining discriminative regularities in databases with mixed and incomplete data. Unlike previous methods, our algorithm does not apply an a priori discretization to numerical features; it extracts regularities from a set of diverse decision trees induced with a special procedure. Experimental results show that a classifier based on the regularities obtained by our algorithm attains higher classification accuracy, using fewer discriminative regularities than previous pattern-based classifiers. Additionally, we show that our classifier is competitive with traditional and state-of-the-art classifiers.
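Extracting one regularity per root-to-leaf path of an induced tree can be sketched like this. The tuple-based tree encoding is mine for illustration; in the paper the trees come from a special induction procedure, and the thresholds in the splits show why no a priori discretization is needed:

```python
def extract_regularities(node, path=()):
    """Walk a decision tree and emit one regularity (a conjunction of
    conditions -> class label) per leaf.  A node is either
    ('leaf', label) or ('split', feature, threshold, left, right),
    where the left branch means feature <= threshold.  Numerical
    thresholds come from the tree induction itself."""
    if node[0] == 'leaf':
        return [(path, node[1])]
    _, feat, thr, left, right = node
    return (extract_regularities(left, path + ((feat, '<=', thr),)) +
            extract_regularities(right, path + ((feat, '>', thr),)))

# A toy induced tree over two numerical features.
tree = ('split', 0, 2.5,
        ('leaf', 'A'),
        ('split', 1, 7.0, ('leaf', 'B'), ('leaf', 'A')))
rules = extract_regularities(tree)
```

Mining from several diverse trees, as the paper does, would simply union the rule sets and then filter for the discriminative ones.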
208.
Recently, Lin and Tsai [Secret image sharing with steganography and authentication, The Journal of Systems and Software 73 (2004) 405-414] and Yang et al. [Improvements of image sharing with steganography and authentication, The Journal of Systems and Software 80 (2007) 1070-1076] proposed secret image sharing schemes that combine steganography and authentication based on Shamir's polynomials. The schemes divide a secret image into shadows which are then embedded in cover images to produce stego images for distribution among participants. To achieve better authentication, Chang et al. [Sharing secrets in stego images with authentication, Pattern Recognition 41 (2008) 3130-3137] proposed an improved scheme in 2008 which also enhances the visual quality of the stego images; in their scheme, the probability that a fake stego block passes verification is 1/16. In this paper, we employ linear cellular automata, digital signatures, and hash functions to propose a novel (t,n)-threshold image sharing scheme with steganographic properties, in which a double authentication mechanism is introduced that can detect tampering with probability 255/256. Employing cellular automata instead of Shamir's polynomials not only improves the computational complexity to O(n) but also obviates the need to modify pixels of cover images unnecessarily. Compared to the previous methods [Lin and Tsai, 2004; Yang et al., 2007; Chang et al., 2008], we use fewer bits in each pixel of the cover images for embedding data, so that a better visual quality is guaranteed. We further present some experimental results.
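Why linear cellular automata give O(n)-per-step sharing and exact invertibility can be seen in a minimal second-order (memory) CA over GF(2). This is only an illustration of the mechanism; the authors' full (t,n)-threshold scheme with digital signatures and hash functions is not reproduced here:

```python
def rule150(cells):
    """One linear CA step over GF(2): each cell becomes the XOR of itself
    and its two neighbours (periodic boundary).  Cost is O(n) per step."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[i] ^ cells[(i + 1) % n]
            for i in range(n)]

def forward(prev, cur):
    """Second-order (memory) CA step: next = rule150(cur) XOR prev.
    Linearity over GF(2) makes the step exactly reversible."""
    return [a ^ b for a, b in zip(rule150(cur), prev)]

def backward(cur, nxt):
    """Invert the step: prev = rule150(cur) XOR next."""
    return [a ^ b for a, b in zip(rule150(cur), nxt)]

# Evolve a secret bit-row forward with a key row, then recover it exactly.
secret = [1, 0, 1, 1, 0, 0, 1, 0]
key    = [0, 1, 1, 0, 1, 0, 0, 1]
nxt = forward(secret, key)
recovered = backward(key, nxt)
```

Contrast with Shamir's polynomials, where share generation and recovery require polynomial evaluation and interpolation rather than a single linear pass.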
209.
210.
Many clustering approaches have been proposed in the literature, but most of them are sensitive to differences in cluster size, shape and density. In this paper, we present a graph-theoretical clustering method that is robust to such differences. Based on a graph composed of two rounds of minimum spanning trees (MSTs), the proposed method (2-MSTClus) classifies clustering problems into two groups, separated-cluster problems and touching-cluster problems, and identifies the two groups automatically. It contains two clustering algorithms which deal with separated clusters and touching clusters in two phases, respectively. In the first phase, two rounds of minimum spanning trees are employed to construct a graph and detect separated clusters, covering both distance-separated and density-separated clusters. In the second phase, touching clusters, which are subgroups produced in the first phase, are partitioned by comparing cuts on the two rounds of minimum spanning trees. The proposed method is robust to varied cluster sizes, shapes and densities, and can discover the number of clusters. Experimental results on synthetic and real datasets demonstrate the performance of the proposed method.
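The two rounds of edge-disjoint MSTs at the heart of 2-MSTClus can be sketched with Kruskal's algorithm, excluding the first-round edges when building the second round. This is a toy sketch of the graph construction only; the cluster-detection logic built on top of the graph is omitted:

```python
def kruskal(n, edges, banned=frozenset()):
    """Minimum spanning tree by Kruskal's algorithm with union-find;
    `banned` excludes first-round edges when building the second round."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):
        if (u, v) in banned:
            continue
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((u, v))
    return mst

def two_round_mst(points):
    """Build the graph used by 2-MSTClus: the union of two
    edge-disjoint rounds of minimum spanning trees."""
    n = len(points)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            dx = points[u][0] - points[v][0]
            dy = points[u][1] - points[v][1]
            edges.append(((dx * dx + dy * dy) ** 0.5, u, v))
    first = kruskal(n, edges)
    second = kruskal(n, edges, banned=frozenset(first))
    return first, second

pts = [(0, 0), (1, 0), (0, 1), (5, 5)]
t1, t2 = two_round_mst(pts)
```

The second-round tree adds alternative connections, so edges that appear long in both rounds are strong candidates for inter-cluster cuts.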
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号