131.
Analysis of the Intelligent Processing Mechanisms of Sentence-Level Pinyin-to-Chinese-Character Conversion   (Cited by: 4; self-citations: 4, other citations: 4)
Sentence-level pinyin-to-Chinese-character conversion is an important topic in Chinese information processing and the core technology of both keyboard-based Chinese character input and speech input. Its main characteristic is that a dynamically entered pinyin string is first analyzed lexically to produce all possible Chinese sentences; these candidate sentences are then subjected to syntactic and semantic analysis according to context, the characters and words in each sentence are adjusted dynamically, and the best result is output. In recent years, sentence-level pinyin-to-character conversion systems have made extensive use of artificial intelligence techniques and machine translation theory in order to improve conversion accuracy and strengthen intelligent processing. This paper analyzes the core techniques used in such systems, namely knowledge support, automatic word segmentation, and dynamic adjustment; it discusses the processing methods and procedures of pinyin-to-character conversion, the structure of the knowledge base, the algorithms and implementation of automatic segmentation of pinyin strings, and the probabilistic model used for dynamic adjustment during conversion. The paper also analyzes the causes of errors that existing systems make during automatic segmentation and dynamic adjustment, and proposes improvements.
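The probabilistic dynamic-adjustment step described above is, at its core, a search for the most likely character sequence given a segmented pinyin string. A minimal sketch of that idea follows, using a toy lexicon and hypothetical bigram probabilities (none of which come from the paper) and a Viterbi-style search:

```python
import math

# Toy lexicon: pinyin syllable -> candidate characters (illustrative only).
CANDIDATES = {
    "zhong": ["中", "种", "重"],
    "wen": ["文", "问", "闻"],
}

# Hypothetical bigram probabilities P(next_char | prev_char); "<s>" marks sentence start.
BIGRAM = {
    ("<s>", "中"): 0.4, ("<s>", "种"): 0.1, ("<s>", "重"): 0.1,
    ("中", "文"): 0.6, ("中", "问"): 0.05,
    ("种", "文"): 0.05, ("重", "文"): 0.05,
}

def convert(syllables, floor=1e-6):
    """Viterbi search for the most probable character sequence."""
    # Each state maps the last chosen character to (log probability, sequence so far).
    beams = {"<s>": (0.0, [])}
    for syl in syllables:
        next_beams = {}
        for char in CANDIDATES[syl]:
            # Best previous state leading into this candidate character.
            best = max(
                (lp + math.log(BIGRAM.get((prev, char), floor)), seq + [char])
                for prev, (lp, seq) in beams.items()
            )
            next_beams[char] = best
        beams = next_beams
    return "".join(max(beams.values())[1])

print(convert(["zhong", "wen"]))  # -> 中文
```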
132.
The development of video applications for digital multimedia has highlighted the need for indexing tools that enable access to meaningful segments of video. The high cost of manual indexing creates a demand for automatic algorithms able to extract such indices with little intervention. In this paper we present new editing-model-based algorithms that automatically extract low-level features in a movie: camera shots and camera motion. Rules of film making are used to derive higher-level elements, such as shot-reverse-shot sequences. The algorithms have been tested on 20 h of movies, and a comparison with techniques in the literature is provided.
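Camera-shot extraction of the kind referenced here is commonly implemented by comparing color histograms of consecutive frames; a minimal sketch under that assumption (the OpenCV usage and threshold are illustrative, not the paper's editing-model method):

```python
import cv2

def detect_shot_cuts(video_path, threshold=0.5):
    """Flag frames whose gray-level histogram differs sharply from the previous frame."""
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Chi-square distance between consecutive histograms.
            dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CHISQR)
            if dist > threshold:
                cuts.append(idx)  # candidate shot boundary
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```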
133.
A Word-Document-Based Data Exchange Strategy and Its Implementation   (Cited by: 6; self-citations: 0, other citations: 6)
This paper analyzes the mismatch between data collection in current management information systems and traditional data reporting practices, and proposes a solution that uses Word documents for data collection and reporting. It studies a Word-document-based data exchange strategy, and implements both automatic data collection from Word documents and automatic generation of variable-format reports from the database system. The practicality and effectiveness of the solution were verified in the development of a real MIS system.
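Automatic data collection from Word documents, as described here, can be approximated today with the python-docx library (a modern stand-in, not the paper's implementation); a minimal sketch, assuming the submitted form's data lives in the document's first table:

```python
from docx import Document  # pip install python-docx

def collect_table_data(path):
    """Read the first table of a .docx form into a list of row dictionaries."""
    doc = Document(path)
    table = doc.tables[0]  # assumption: the form data is in the first table
    headers = [c.text.strip() for c in table.rows[0].cells]
    records = []
    for row in table.rows[1:]:
        values = [c.text.strip() for c in row.cells]
        records.append(dict(zip(headers, values)))
    return records

# Usage: records = collect_table_data("report_form.docx")
```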
134.
Presents solutions to several common faults encountered when using Word.
135.
This paper describes a method for drawing circuit diagrams for Word documents using the drawing package Microsoft Visio 2003. Circuit diagrams drawn this way look standard and attractive when pasted into a Word document, and the method is easy to learn and use, making it a practical drawing tool for teachers of electronics.
136.
Word prediction methodologies depend heavily on the statistical approach that uses the unigram, bigram, and trigram frequencies of words. However, constructing the N-gram model requires a very large amount of memory, beyond the capability of many existing computers, and approximation reduces the accuracy of word prediction. In this paper, we suggest using a cluster of computers to build an Optimal Binary Search Tree (OBST) for the statistical approach to word prediction. The OBST contains extra links so that the bigrams and trigrams of the language are represented. In addition, we suggest incorporating other enhancements to achieve optimal word-prediction performance. Our experimental results show that the suggested approach improves keystroke savings.
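The statistical side of this approach, predicting the next word from unigram, bigram, and trigram counts with back-off, can be sketched as follows (a toy illustration of N-gram prediction, not the paper's OBST-based data structure):

```python
from collections import Counter, defaultdict

class TrigramPredictor:
    """Predict the next word from trigram counts, backing off to bigrams and unigrams."""

    def __init__(self):
        self.uni = Counter()
        self.bi = defaultdict(Counter)
        self.tri = defaultdict(Counter)

    def train(self, tokens):
        self.uni.update(tokens)
        for a, b in zip(tokens, tokens[1:]):
            self.bi[a][b] += 1
        for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
            self.tri[(a, b)][c] += 1

    def predict(self, w1, w2, k=3):
        # Back off: trigram context first, then bigram, then global unigram frequency.
        for table in (self.tri[(w1, w2)], self.bi[w2], self.uni):
            if table:
                return [w for w, _ in table.most_common(k)]
        return []

p = TrigramPredictor()
p.train("the cat sat on the mat and the cat slept".split())
print(p.predict("the", "cat"))  # e.g. ['sat', 'slept']
```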
137.
Describes methods and practical experience in managing various process documents with general-purpose software such as Word and AutoCAD.
138.
The problem of extracting anatomical structures from medical images is both very important and difficult. In this paper we are motivated by a new paradigm in medical image segmentation, termed Citizen Science, which involves a volunteer effort from multiple, possibly non-expert, human participants. These contributors observe 2D images and generate their estimates of anatomical boundaries in the form of planar closed curves. The challenge, of course, is to combine these different estimates in a coherent fashion and to develop an overall estimate of the underlying structure. Treating these curves as random samples, we use statistical shape theory to generate joint inferences and analyze the data generated by the citizen scientists. The specific goals of this analysis are: (1) to find a robust estimate of the representative curve that provides an overall segmentation, (2) to quantify the level of agreement between segmentations, both globally (full contours) and locally (parts of contours), and (3) to automatically detect outliers and reduce their influence in the estimation. We demonstrate these ideas using a number of artificial examples and real applications in medical imaging, and summarize their potential use in future scenarios.
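A simplified version of goals (1) and (3), a representative contour plus outlier flagging, can be sketched by resampling each annotator's closed curve to a common parameterization and comparing each curve to the pointwise mean. This is a crude stand-in that ignores the rotation and reparameterization issues the paper's shape-theoretic approach handles properly:

```python
import numpy as np

def resample_closed(curve, n=100):
    """Resample a closed 2D curve (k x 2 array) to n points by arc length."""
    curve = np.vstack([curve, curve[:1]])  # close the loop
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0, s[-1], n, endpoint=False)
    x = np.interp(t, s, curve[:, 0])
    y = np.interp(t, s, curve[:, 1])
    return np.column_stack([x, y])

def representative_curve(curves, n=100, z_thresh=2.0):
    """Mean contour over annotations, flagging outlier curves by distance z-score."""
    sampled = np.stack([resample_closed(c, n) for c in curves])
    mean = sampled.mean(axis=0)
    # Average pointwise distance of each annotation to the mean contour.
    dists = np.linalg.norm(sampled - mean, axis=2).mean(axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    outliers = np.where(z > z_thresh)[0]
    # Re-estimate the representative contour without the flagged outliers.
    keep = np.setdiff1d(np.arange(len(curves)), outliers)
    return sampled[keep].mean(axis=0), outliers
```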
139.
This paper proposes a new and reliable segmentation approach based on a fusion framework for combining multiple region-based segmentation maps (with any number of regions) to provide a final improved (i.e., accurate and consistent) segmentation result. The core of this combination model is a consensus (cost) function derived from the recent information-theoretic variation of information criterion proposed by Meila, which quantifies the amount of information that is lost or gained in changing from one clustering to another. The resulting consensus energy-based segmentation fusion model can be efficiently optimized with an iterative steepest local energy descent strategy combined with a connectivity constraint. This framework of segmentation combination, relying on the fusion of inaccurate, quickly and roughly computed spatial clustering results, emerges as an appealing alternative to the complex segmentation models in use today. Experiments on the Berkeley Segmentation Dataset show that the proposed fusion framework compares favorably to previous techniques in terms of reliability scores.
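The consensus criterion named here, Meila's variation of information (VI), measures the information lost and gained between two clusterings: VI(A, B) = H(A) + H(B) - 2 I(A; B). A minimal sketch of computing VI for two integer label maps follows (illustrative; the paper's fusion optimization is not shown):

```python
import numpy as np

def variation_of_information(labels_a, labels_b):
    """VI(A, B) = H(A) + H(B) - 2 * I(A; B), from the joint label histogram."""
    a = np.asarray(labels_a).ravel()
    b = np.asarray(labels_b).ravel()
    n = a.size
    # Joint distribution over (cluster in A, cluster in B).
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1.0)
    joint /= n
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    h_a = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
    h_b = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
    mi = np.sum(joint[nz] * np.log(joint[nz] / np.outer(pa, pb)[nz]))
    return h_a + h_b - 2.0 * mi

# Clusterings identical up to relabeling give VI = 0; larger values mean more disagreement.
print(variation_of_information([0, 0, 1, 1], [1, 1, 0, 0]))  # 0.0
```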
140.
In this paper we present an algorithm to segment the nuclei of neuronal cells in confocal microscopy images, a key technical problem in many experimental studies in the field of neuroscience. We describe the whole procedure, from the original images to the segmented individual nuclei, paying particular attention to the binarization of the images, which is not straightforward because of the technical difficulty of visualizing nuclei as individual objects and because of incomplete and irregular staining. We have focused on dividing the clusters of nuclei that appear frequently in these images, and have developed a clump-splitting algorithm to separate touching or overlapping nuclei, allowing us to accurately account for both the number and size of the nuclei. The results presented in the paper show that the proposed algorithm performs well on different sets of images from different layers of the cerebral cortex.
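Binarization followed by clump splitting of touching nuclei is commonly implemented as an Otsu-threshold plus distance-transform-and-watershed pipeline; a minimal sketch using scikit-image (a generic stand-in, not the authors' algorithm):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_nuclei(image):
    """Binarize, then separate touching nuclei via distance transform + watershed."""
    binary = image > threshold_otsu(image)
    distance = ndi.distance_transform_edt(binary)
    # One marker per local maximum of the distance map (ideally one per nucleus).
    coords = peak_local_max(distance, min_distance=10, labels=binary)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=binary)
    return labels  # integer label image: 0 = background, 1..k = separated nuclei
```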