81.
This paper analyzes why text-categorization performance suffers when mutual information is used for feature selection, attributing the problem largely to the method's tendency to favor rare features. On that basis, it proposes a mutual-information feature selection method based on dispersion and average term frequency. Experimental results show that the improved mutual-information method yields a clear gain in text-categorization performance.
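The rare-feature bias the abstract describes falls straight out of the pointwise mutual-information formula MI(t, c) = log P(t, c) / (P(t) P(c)). A minimal sketch, with an illustrative contingency-table layout and example counts that are not from the paper:

```python
import math

def mutual_information(n11, n10, n01, n00):
    """Pointwise MI between a term and a class, from a 2x2 contingency
    table: n11 = docs in the class containing the term, n10 = docs
    outside the class containing it, n01/n00 = the same for absence."""
    n = n11 + n10 + n01 + n00
    p_t = (n11 + n10) / n      # P(term)
    p_c = (n11 + n01) / n      # P(class)
    p_tc = n11 / n             # P(term, class)
    if p_tc == 0:
        return float("-inf")
    return math.log2(p_tc / (p_t * p_c))

# A rare term seen only inside one class outscores a frequent term that
# is genuinely informative -- the bias the paper's dispersion /
# average-frequency correction targets.
rare = mutual_information(3, 0, 497, 500)        # 3 docs, all in class
common = mutual_information(300, 200, 200, 300)  # 500 docs, 60% in class
```

Here `rare` scores log2(2) = 1.0 while the far more useful `common` term scores only about 0.26, which is the distortion the proposed correction is meant to offset.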
82.
Starting from an analysis of the defects of the traditional mutual-information method, this paper proposes a mutual-information feature selection algorithm based on a second TF*IDF weighting, which re-measures the importance of feature terms that occur in only one class and thereby resolves the problem that features with equal mutual-information values cannot be effectively distinguished. Validation with a Bayesian classifier shows that the algorithm improves both the efficiency and the accuracy of text categorization over the original method.
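For reference, the plain TF*IDF weight that such a second weighting pass builds on can be sketched as follows; the function name and counts are illustrative, and the paper's exact re-weighting scheme is not reproduced here:

```python
import math

def tfidf(tf, df, n_docs):
    """Plain TF*IDF: term frequency times inverse document frequency.
    A second TF*IDF-style pass over single-class terms can break ties
    among features with equal mutual-information scores."""
    return tf * math.log(n_docs / df)

# Two single-class terms with equal MI can still be ranked:
w_frequent = tfidf(5, 2, 100)  # appears 5 times, in 2 of 100 docs
w_rare = tfidf(1, 2, 100)      # appears once, same document frequency
```

A term present in every document gets weight zero (log(100/100) = 0), so the pass favors terms that are both frequent within their class and selective across the collection.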
83.
A Webpage Deduplication Strategy Based on Text Similarity   Cited: 1 (self-citations: 0, by others: 1)
To address the frequent occurrence of identical or near-identical content among webpage retrieval results, this paper proposes a deduplication method that computes similarity between webpages. The algorithm extracts a feature string from each page; building on earlier feature-code extraction work, it additionally extracts text-structure features, and obtains page similarity by comparing the differences between feature strings. Compared with similar methods, the approach lowers time complexity while achieving high recall and precision, making it suitable for large-scale webpage deduplication.
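The paper's feature string also folds in text-structure features, which are not reproduced here; as a generic stand-in, page similarity from extracted features can be sketched with character shingles and Jaccard similarity:

```python
def shingles(text, k=4):
    """Character k-grams -- a generic stand-in for the paper's feature
    string (which also incorporates text-structure features)."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def page_similarity(a, b, k=4):
    """Jaccard similarity of shingle sets; pages scoring above a chosen
    threshold would be treated as duplicates."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

sim = page_similarity("breaking news: market rallies",
                      "breaking news: markets rally")
```

Comparing fixed-size feature sets instead of full page texts is what keeps the time complexity low at large scale.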
85.
Feature selection for text categorization is a well-studied problem whose goal is to improve the effectiveness of categorization, the efficiency of computation, or both. Text categorization systems based on traditional term matching represent documents in a vector space model; however, this requires a high-dimensional space and does not take into account the semantic relationships between terms, which leads to poor categorization accuracy. The latent semantic indexing method can overcome this problem by replacing individual terms with statistically derived conceptual indices. To improve both the accuracy and the efficiency of categorization, in this paper we propose a two-stage feature selection method. First, we apply a novel feature selection method to reduce the dimensionality of the term space; then we construct a new semantic space between terms based on latent semantic indexing. In experiments on spam database categorization, we find that our two-stage feature selection method performs better.
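The two-stage pipeline, score-based term reduction followed by latent semantic indexing, can be sketched with a truncated SVD. This is a NumPy-based sketch; the scoring values, matrix sizes, and function name are placeholders, not the paper's:

```python
import numpy as np

def two_stage(term_doc, scores, keep, rank):
    """Stage 1: keep the top-`keep` terms by some feature-selection
    score.  Stage 2: LSI -- a rank-`rank` truncated SVD of the reduced
    term-document matrix, yielding low-dimensional document vectors in
    a concept space."""
    top = np.argsort(scores)[::-1][:keep]       # best-scoring terms
    reduced = term_doc[top, :]                  # stage 1: term cut
    u, s, vt = np.linalg.svd(reduced, full_matrices=False)
    return (s[:rank, None] * vt[:rank, :]).T    # documents x rank

rng = np.random.default_rng(0)
x = rng.random((50, 20))                 # 50 terms, 20 documents
docs = two_stage(x, rng.random(50), keep=30, rank=5)
```

Each document ends up as a short concept-space vector (here 5-dimensional), on which a classifier operates far more cheaply than on the original term space.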
86.
Text clustering is widely applied in many fields, but traditional clustering methods ignore semantics and therefore produce unsatisfactory clusters. This paper transforms the VSM model using semantic information: each dimension of the VSM is skewed according to term semantics, turning the original orthogonal coordinate system into an oblique one. Document feature vectors are then mapped into the transformed VSM before clustering, which relatively shrinks the semantic distance between semantically related feature vectors and thereby improves the clustering results.
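Skewing the VSM axes by term semantics amounts to measuring similarity under a term-term kernel S instead of the identity matrix. A minimal sketch, where the 2-term kernel values are invented for illustration:

```python
import numpy as np

def semantic_cosine(x, y, s):
    """Cosine similarity under a term-term semantic kernel S.
    S = I recovers the ordinary orthogonal VSM; off-diagonal entries
    'skew' the axes so documents using related terms score as closer."""
    num = x @ s @ y
    den = np.sqrt((x @ s @ x) * (y @ s @ y))
    return num / den

s = np.array([[1.0, 0.8],
              [0.8, 1.0]])     # two near-synonymous terms (invented)
x = np.array([1.0, 0.0])       # document using term 0 only
y = np.array([0.0, 1.0])       # document using term 1 only
plain = semantic_cosine(x, y, np.eye(2))   # orthogonal VSM: 0.0
skewed = semantic_cosine(x, y, s)          # oblique VSM: 0.8
```

Under the plain VSM the two documents are orthogonal; under the skewed model their shared meaning gives them high similarity, which is exactly the effect the clustering step exploits.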
87.
In video indexing and summarization, videotext is a very compact and accurate source of information. Most videotext detection and extraction methods deal only with static videotext on video frames; few handle motion videotext efficiently, since motion videotext is hard to extract well. In this paper, we propose a two-directional videotext extractor, called 2DVTE, developed as an integrated system to detect, localize and extract scrolling videotexts. First, detection uses edge information to classify regions as text or non-text. Second, for localization of scrolling videotext, we propose a two-dimensional projection profile method using horizontal and vertical edge map information. Considering the characteristics of Chinese text, the vertical edge map is used to localize candidate text regions and the horizontal edge map is used to refine them. Third, extraction consists of dual-mode adaptive thresholding and a multi-seed filling algorithm. Dual-mode adaptive thresholding produces a non-rectangular pattern that divides background and foreground more precisely. The multi-seed filling algorithm considers the minimum and maximum stroke length and four stroke directions, whereas the previous method considered only the minimum length and two directions; with this multi-seed exploitation of strokes, precise seeds are obtained and more refined videotext is produced. Considering high throughput and low complexity, we achieve a real-time system for detecting, localizing and extracting scrolling videotexts using only one frame, instead of the multi-frame integration used in other work. Experiments on various video sequences show that all horizontal and vertical scrolling videotexts are extracted precisely. We also compare with other methods; in our analysis, the performance of our algorithm is superior to existing methods in both speed and quality.
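The two-dimensional projection profile used for localization reduces to row and column sums of the edge map. A toy sketch, where the synthetic array stands in for a real edge-detection result:

```python
import numpy as np

def projection_profiles(edge_map):
    """Row and column sums of a binary edge map: the two-directional
    projection profile whose peaks mark edge-dense (text) bands."""
    return edge_map.sum(axis=1), edge_map.sum(axis=0)

edges = np.zeros((6, 8), dtype=int)   # synthetic edge map
edges[2:4, 1:7] = 1                   # one horizontal text band
rows, cols = projection_profiles(edges)
```

The peaks in `rows` pick out the band's vertical extent and the peaks in `cols` its horizontal extent, which is how the two profiles together box in a candidate text region.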
88.
In this paper, we present a methodology for segmenting handwritten documents into their distinct entities, namely text lines and words. Text line segmentation is achieved by applying the Hough transform on a subset of the document image's connected components. A post-processing step corrects possible false alarms, detects text lines that the Hough transform failed to create, and efficiently separates vertically connected characters using a novel method based on skeletonization. Word segmentation is addressed as a two-class problem: the distances between adjacent overlapped components in a text line are calculated using a combination of two distance metrics, and each distance is categorized as either an inter-word or an intra-word distance in a Gaussian mixture modeling framework. The proposed methodology is evaluated with a consistent and concrete evaluation scheme that uses suitable performance measures to compare the text line and word segmentation results against the corresponding ground-truth annotation. Its efficiency is demonstrated by experiments on two datasets: (a) the test set of the ICDAR2007 handwriting segmentation competition and (b) a set of historical handwritten documents.
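The two-class treatment of gap distances can be illustrated with a lightweight stand-in: the paper fits a two-component Gaussian mixture, while this sketch uses 1-D 2-means, which draws the same intra- vs inter-word boundary when the two populations are well separated (the pixel gaps below are invented):

```python
def inter_word_gaps(dists, iters=20):
    """Split gap distances in a text line into intra-word vs inter-word
    classes.  A two-component Gaussian mixture does this probabilistically;
    here 1-D 2-means serves as a simple stand-in for the two-class idea."""
    lo, hi = min(dists), max(dists)
    for _ in range(iters):
        intra = [d for d in dists if abs(d - lo) <= abs(d - hi)]
        inter = [d for d in dists if abs(d - lo) > abs(d - hi)]
        if intra:
            lo = sum(intra) / len(intra)   # mean of small (intra-word) gaps
        if inter:
            hi = sum(inter) / len(inter)   # mean of large (inter-word) gaps
    return [d for d in dists if abs(d - lo) > abs(d - hi)]

gaps = [2, 3, 2, 14, 3, 2, 15, 2]   # pixel gaps along a text line
```

Every gap classified as inter-word becomes a word boundary; the GMM version additionally gives a posterior probability per gap, which is useful near the decision boundary.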
90.
Finite mixture models have been applied to different computer vision, image processing and pattern recognition tasks. The majority of the work on finite mixture models has focused on mixtures for continuous data. However, many applications involve and generate discrete data, for which discrete mixtures are better suited. In this paper, we investigate the problem of discrete data modeling using finite mixture models. We propose a novel, well-motivated mixture that we call the multinomial generalized Dirichlet mixture. The novel model is compared with other discrete mixtures. We designed experiments involving spatial color image database modeling and summarization, and text classification, to show the robustness, flexibility and merits of our approach.