31.
In this paper we present a new radiosity algorithm based on the notion of a well-distributed ray set (WDRS). A WDRS is a set of rays, connecting mutually visible points and patches, that forms an approximate representation of the radiosity operator and the radiosity distribution. We propose an algorithm that constructs an optimal WDRS for a given accuracy and mesh. The construction is based on discrete importance sampling, as in previously proposed stochastic radiosity algorithms, and on quasi-Monte Carlo sampling. Quasi-Monte Carlo sampling leads to faster convergence, and because the sampling is deterministic, the well-distributed ray set can be represented very efficiently in computer memory. Like previously proposed stochastic radiosity algorithms, the new algorithm is well suited to computing the radiance distribution in very complex diffuse scenes, where it is not feasible to explicitly compute and store form factors as in classical radiosity algorithms. Experiments show that the new algorithm is often more efficient than previously proposed Monte Carlo radiosity algorithms by half an order of magnitude or more.
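The role of quasi-Monte Carlo sampling in the abstract above can be illustrated with a toy estimator. The sketch below uses a deterministic Halton point set in place of pseudorandom samples; the Halton construction and the pi-estimation target are illustrative stand-ins, not the paper's radiosity operator.

```python
import math

def halton(index, base):
    """Radical-inverse (van der Corput) digit reversal of `index` in `base`."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def estimate_pi(n, sampler):
    """Estimate pi from the area of a quarter circle, using `sampler(i)`
    to produce the i-th point in the unit square."""
    inside = 0
    for i in range(1, n + 1):
        x, y = sampler(i)
        inside += x * x + y * y <= 1.0
    return 4.0 * inside / n

# Deterministic quasi-Monte Carlo points: Halton sequence in bases 2 and 3.
qmc = lambda i: (halton(i, 2), halton(i, 3))
```

Because the point set is a fixed, reproducible sequence, only an index range needs to be stored to regenerate the samples, which is the property that lets a deterministic ray set be represented compactly in memory.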
32.
Speed-up fractal image compression with a fuzzy classifier
This paper presents a fractal image compression scheme incorporating a fuzzy classifier optimized by a genetic algorithm. Fractal image compression requires finding, for each range block, a matching domain block among all possible partitions of the image into subblocks. With suitable classification of the subblocks by a fuzzy classifier, the search time for this matching can be reduced, speeding up the encoding process. Implementation results show that by introducing three image classes and using a fuzzy classifier optimized by a genetic algorithm, the encoding process can be sped up by about 40% relative to an unclassified encoding system.
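To make the block-classification idea concrete, here is a minimal sketch. The paper uses a fuzzy classifier with parameters tuned by a genetic algorithm; the crisp variance thresholds and the three class names below are illustrative assumptions standing in for that classifier.

```python
def classify(block):
    """Toy 3-class block classifier by pixel variance: smooth / midrange /
    edge. (Fixed thresholds are illustrative; the paper's classifier is
    fuzzy and GA-optimized.)"""
    mean = sum(block) / len(block)
    var = sum((p - mean) ** 2 for p in block) / len(block)
    if var < 10:
        return "smooth"
    if var < 100:
        return "midrange"
    return "edge"

def best_match(range_block, domain_blocks):
    """Search only domain blocks in the same class as the range block,
    which shrinks the candidate set and speeds up encoding."""
    cls = classify(range_block)
    candidates = [d for d in domain_blocks if classify(d) == cls]
    return min(candidates,
               key=lambda d: sum((a - b) ** 2 for a, b in zip(range_block, d)),
               default=None)
```

The speedup comes from the pruning in `best_match`: a smooth range block is never compared against edge-like domain blocks at all.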
33.
Enhanced compression of soliton pulses in the presence of dispersion effects
Under the condition of group-velocity mismatch, a new method is proposed for improving the compression of soliton pulses. The study shows that introducing a suitable initial delay between the signal pulse and the pump pulse not only raises the compression ratio of the signal pulse when dispersion acts alone, but also improves the compression quality of the soliton pulse.
34.
A system-on-chip (SOC) usually consists of many memory cores with different sizes and functionality; they typically represent a significant portion of the SOC and therefore dominate its yield. Diagnosis of the memory cores for yield enhancement is thus a very important issue. In this paper we present two data compression techniques that can be used to speed up the transmission of diagnostic data from an embedded RAM built-in self-test (BIST) circuit with diagnostic support to the external tester. The proposed syndrome-accumulation approach compresses the faulty-cell address and March syndrome to about 28% of the original size on average under the March-17N diagnostic test algorithm. The key component of the compressor is a novel syndrome-accumulation circuit, which can be realized by a content-addressable memory. Experimental results show that the area overhead is about 0.9% for a 1Mb SRAM with 164 faults. A tree-based compression technique for word-oriented memories is also presented. By using a simplified Huffman coding scheme and partitioning each 256-bit Hamming syndrome into fixed-size symbols, the average compression ratio (size of original data to that of compressed data) is about 10, assuming 16-bit symbols. The additional hardware to implement the tree-based compressor is also very small. The proposed compression techniques effectively reduce both the memory diagnosis time and the tester storage requirement.
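The tree-based technique described above, simplified Huffman coding over fixed-width slices of a Hamming syndrome, can be sketched in software as follows. The 16-bit symbol width matches the abstract, but the sparse example syndrome is an illustrative assumption, and the real compressor is a hardware circuit rather than a program.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table (symbol -> bit string) from a symbol list."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single symbol gets a 1-bit code
        return {next(iter(freq)): "0"}
    # Heap entries: [weight, unique tiebreak id, partial code table].
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, [w1 + w2, counter, merged])
        counter += 1
    return heap[0][2]

def compress_syndrome(bits, symbol_width=16):
    """Split a Hamming-syndrome bit string into fixed-width symbols and
    Huffman-code them; returns (encoded bit string, compression ratio)."""
    symbols = [bits[i:i + symbol_width]
               for i in range(0, len(bits), symbol_width)]
    table = huffman_code(symbols)
    encoded = "".join(table[s] for s in symbols)
    return encoded, len(bits) / max(len(encoded), 1)

# A sparse 256-bit syndrome (few faulty bits) compresses well, because
# the all-zero symbol dominates and receives a very short code.
syndrome = "0" * 120 + "1" + "0" * 135
enc, ratio = compress_syndrome(syndrome)
```

Sparse syndromes are the common case for a memory with few faults, which is why fixed-width symbols plus Huffman coding reach the roughly 10:1 average ratio the abstract reports.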
35.
Hong Qingyue, Zhou Zhilin. 《电子器件》 (Electronic Devices), 1996, 19(3): 210-214
This paper describes two ECG data-compression methods implemented on a computer and details the effect of each method on the compression ratio and the distortion of the compressed ECG signal.
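The abstract does not name the two compression methods, so the sketch below only illustrates how the two reported figures of merit are typically computed for ECG compression: the compression ratio and a distortion measure (percent RMS difference, PRD). The 2:1 "turning-point-style" reducer is a hypothetical stand-in, not either of the paper's methods.

```python
import math

def turning_point(signal):
    """Toy 2:1 reducer: from each pair of samples, keep the one farther
    from the previously retained sample (a turning-point-style rule)."""
    if not signal:
        return []
    out = [signal[0]]
    for i in range(1, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        out.append(a if abs(a - out[-1]) >= abs(b - out[-1]) else b)
    return out

def reconstruct(compressed, length):
    """Sample-and-hold reconstruction back to the original length."""
    out = []
    for s in compressed:
        out += [s, s]
    return out[:length]

def prd(original, reconstructed):
    """Percent RMS difference, a standard ECG distortion measure."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o * o for o in original)
    return 100.0 * math.sqrt(num / den)
```

The compression ratio is simply `len(signal) / len(turning_point(signal))`, about 2 here; PRD then quantifies how much fidelity that ratio costs.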
36.
The biorthogonal transform is a block-based, low-complexity transform that, compared with the traditional discrete cosine transform, reduces blocking artifacts in the transformed image to some extent; it has therefore been adopted in JPEG XR, the latest still-image coding standard from the Joint Photographic Experts Group (JPEG). To remedy JPEG XR's inability to control the length of the output bitstream, this paper studies its coding techniques in depth and proposes a coding algorithm for a fixed compression ratio. The main idea is to apply embedded bit-plane coding to the biorthogonal transform coefficients, replacing the original quantization step, so that the length of the compressed bitstream can be controlled precisely.
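The rate-control idea, replacing quantization with embedded bit-plane coding so the stream can be cut at an exact length, can be sketched as below. This is not JPEG XR's actual coefficient scan: sign handling and entropy coding are omitted, and coefficients are assumed non-negative for brevity.

```python
def bitplane_encode(coeffs, target_bits):
    """Embedded bit-plane coding sketch: emit coefficient magnitudes from
    the most significant bit-plane down, then truncate the stream at
    exactly `target_bits` for precise rate control."""
    mags = [abs(c) for c in coeffs]
    top = max(mags).bit_length() - 1 if any(mags) else 0
    stream = []
    for plane in range(top, -1, -1):
        for m in mags:
            stream.append((m >> plane) & 1)
    return stream[:target_bits]

def bitplane_decode(stream, n_coeffs, top_plane):
    """Reconstruct magnitudes from a (possibly truncated) bit-plane stream;
    missing low planes simply leave the low bits at zero."""
    mags = [0] * n_coeffs
    for i, bit in enumerate(stream):
        plane = top_plane - i // n_coeffs
        mags[i % n_coeffs] |= bit << plane
    return mags
```

Because the code is embedded, any prefix of the stream decodes to a coarser but valid approximation, which is exactly what makes a fixed compression ratio achievable by truncation alone.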
37.
Existing methods for preserving boundary continuity in wavelet decomposition reinsert all removed knots and therefore forfeit the data-compression property. For boundary handling in curve wavelet decomposition, two continuity-preserving methods that retain data compression are proposed: the first inserts only the knots relevant to Cr continuity and then adjusts the corresponding control points to agree with those of the high-resolution curve; the second inserts no knots and directly adjusts the control points relevant to Cr continuity. For surfaces, a surface-splitting method and a T-surface method are proposed: the former splits the surface into a boundary part and a central part by inserting repeated knots, inserts knots into the boundary part, and adjusts the corresponding control points to coincide with the high-resolution surface; the latter constructs a T-mesh whose boundary part agrees with the high-resolution surface and whose central part agrees with the low-resolution surface. Finally, the T-surface method is extended to boundary handling for wavelet transforms of periodic surfaces. Experimental results show that the proposed methods both preserve boundary continuity of the model after wavelet decomposition and retain the data-compression property.
38.
In applications developed in VFP (Visual FoxPro), third-party compression software is used to compress and decompress data. Using the standard way of invoking the WinRAR compression utility, the paper implements, in program code, compression and decompression of arbitrarily selected data files.
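The same shell-out pattern the paper implements in VFP (build a WinRAR command line, then run it) looks like this in Python. The archive and file names are placeholders, and only WinRAR's basic `a` (add) and `x` (extract) verbs are assumed.

```python
import shutil
import subprocess

def rar_command(action, archive, *files, rar_exe="WinRAR"):
    """Build a WinRAR command line: 'a' adds files to an archive,
    'x' extracts them with full paths."""
    return [rar_exe, action, archive, *files]

def compress(archive, *files):
    """Shell out to WinRAR to compress files, mirroring the external-call
    approach used from VFP."""
    cmd = rar_command("a", archive, *files)
    if shutil.which(cmd[0]) is None:
        raise FileNotFoundError("WinRAR not found on PATH")
    subprocess.run(cmd, check=True)
```

Passing the command as a list (rather than one shell string) avoids quoting problems when file names contain spaces.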
39.
The Dictionary of Old English (DOE) computing systems have provided access since the 1970s to a database of approximately three million running words. These systems, designed for a variety of machines and written in a variety of languages, were until recently planned with computing-center billing algorithms in mind. With personal workstations, emphasis has shifted to building more elegant user interfaces and to providing the entire DOE database to editors around the world. While the shift from sequential files to random-access files and the provision of extensive development tools have changed some of the design process, error checking and protection of the database against accidental intrusion have remained central issues.

Richard L. Venezky is Unidel Professor of Educational Studies and Professor of Computer and Information Sciences at the University of Delaware. He was formerly professor and chair of Computer Sciences at the University of Wisconsin. His research interests include writing systems, literacy, knowledge representation, and computer-assisted instruction. Among his recent publications are The Subtle Danger: Reflections on the Literacy Abilities of America's Young Adults (Princeton, NJ: ETS, 1987) and Steps Towards a Modern History of American Reading Instruction (Review of Research in Education, 1986, vol. 13, 129–70).
40.
A test-data compression scheme based on an irrational-number dictionary code is proposed that effectively compresses chip test data. The scheme builds its dictionary from an irrational-number sequence; encoding requires only a compatibility check, with no filling of don't-care bits, which simplifies the procedure. In addition, the group of data with the highest local compression ratio is chosen as the final encoding result, guaranteeing a locally optimal compression effect. Experimental results on the Mintest sets for the ISCAS 89 benchmark circuits show that the scheme outperforms conventional compression methods such as Golomb codes, FDR codes, and hybrid fixed/variable-length codes.
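The compatibility check that replaces don't-care filling can be sketched as follows. The irrational-number dictionary construction and the local-optimality selection are not reproduced here; the dictionary entries and test cubes below are arbitrary placeholders.

```python
def compatible(cube, entry):
    """A test cube matches a dictionary entry if every specified bit
    agrees; 'X' (don't-care) positions match anything, so no bit
    filling is needed before encoding."""
    return len(cube) == len(entry) and all(
        c == "X" or c == e for c, e in zip(cube, entry))

def encode(cubes, dictionary):
    """Encode each cube as a short dictionary index when a compatible
    entry exists, else emit the cube literally (a simplified
    fixed/variable-length scheme)."""
    out = []
    for cube in cubes:
        for idx, entry in enumerate(dictionary):
            if compatible(cube, entry):
                out.append(("idx", idx))
                break
        else:
            out.append(("lit", cube))
    return out
```

Compression comes from the `("idx", …)` entries: an index costs far fewer bits than the cube it replaces, and the don't-care bits never have to be specified at all.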