By access type: subscription full text (2615 articles), free (281), domestic free (111).
By subject: Electrical engineering (168), General (153), Chemical industry (410), Metalworking (147), Machinery and instruments (150), Building science (271), Mining engineering (87), Energy and power (70), Light industry (194), Water conservancy engineering (41), Petroleum and natural gas (153), Weapons industry (13), Radio and electronics (289), General industrial technology (354), Metallurgy (152), Nuclear technology (24), Automation (331).
By year: 2024 (16), 2023 (57), 2022 (79), 2021 (120), 2020 (77), 2019 (91), 2018 (90), 2017 (119), 2016 (80), 2015 (97), 2014 (150), 2013 (182), 2012 (176), 2011 (197), 2010 (139), 2009 (137), 2008 (103), 2007 (119), 2006 (123), 2005 (124), 2004 (75), 2003 (81), 2002 (107), 2001 (81), 2000 (68), 1999 (54), 1998 (56), 1997 (48), 1996 (29), 1995 (29), 1994 (23), 1993 (21), 1992 (16), 1991 (8), 1990 (9), 1989 (5), 1988 (4), 1987 (2), 1986 (3), 1984 (1), 1980 (1), 1979 (1), 1977 (1), 1976 (2), 1975 (1), 1973 (1), 1943 (2), 1935 (1), 1934 (1).
A total of 3007 query results were found; entries 21-30 are listed below.
21.
22.
Addressing the problems with the 220 kV standby-power automatic transfer device at the No. 3 central substation of Xiangtan Iron and Steel (Xianggang), a new scheme and design concept are proposed. The feasibility of shortening the outage time from 3.5 s to 0.04-0.09 s under the new scheme is analyzed. A recommended automatic transfer scheme is also given for substations equipped with SF6 circuit breakers.
23.
The HaLoop approach to large-scale iterative data analysis
The growing demand for large-scale data mining and data analysis applications has led both industry and academia to design new types of highly scalable data-intensive computing platforms. MapReduce has enjoyed particular success. However, MapReduce lacks built-in support for iterative programs, which arise naturally in many applications including data mining, web ranking, graph analysis, and model fitting. This paper (an extended version of the VLDB 2010 paper "HaLoop: Efficient Iterative Data Processing on Large Clusters", PVLDB 3(1):285-296, 2010) presents HaLoop, a modified version of the Hadoop MapReduce framework designed to serve these applications. HaLoop allows iterative applications to be assembled from existing Hadoop programs without modification, and significantly improves their efficiency by providing inter-iteration caching mechanisms and a loop-aware scheduler to exploit these caches. HaLoop retains the fault-tolerance properties of MapReduce through automatic cache recovery and task re-execution. We evaluated HaLoop on a variety of real applications and real datasets. Compared with Hadoop, on average, HaLoop improved runtimes by a factor of 1.85 and shuffled only 4% as much data between mappers and reducers in the applications that we tested.
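For orientation, the kind of workload HaLoop targets is a driver loop that repeatedly runs a map-shuffle-reduce round over a large loop-invariant dataset until a fixpoint is reached. The following is a minimal, self-contained Python sketch of that pattern; the function names, the in-memory `cache`, and the fixpoint test are illustrative assumptions, not HaLoop's actual Hadoop/Java API.

```python
# Sketch: an iterative "MapReduce-style" driver loop in plain Python.
# In vanilla Hadoop each iteration is a separate job that re-reads the
# loop-invariant data; HaLoop's contribution is to cache such data between
# iterations and support loop-aware scheduling and termination checks.

def run_iterative_job(invariant_data, state, map_fn, reduce_fn,
                      max_iters=20, tol=1e-6):
    """state is a dict mapping keys to floats; map_fn yields (key, value) pairs."""
    cache = list(invariant_data)          # stands in for HaLoop's inter-iteration cache
    for _ in range(max_iters):
        # "map" phase over the cached invariant data plus the current state
        emitted = []
        for record in cache:
            emitted.extend(map_fn(record, state))
        # "shuffle": group emitted values by key
        groups = {}
        for key, value in emitted:
            groups.setdefault(key, []).append(value)
        # "reduce" phase produces the next iteration's state
        new_state = {key: reduce_fn(key, values) for key, values in groups.items()}
        # fixpoint test: stop when the state barely changes between iterations
        delta = sum(abs(new_state.get(k, 0.0) - state.get(k, 0.0))
                    for k in set(new_state) | set(state))
        state = new_state
        if delta < tol:
            break
    return state
```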
24.
In places where mobile users can access multiple wireless networks simultaneously, a multipath scheduling algorithm can benefit the performance of wireless networks and improve the experience of mobile users. However, existing literature shows that this may not be the case, especially for TCP flows. According to earlier investigations, two main factors degrade the performance of TCP flows in wireless networks. One is the occurrence of out-of-order packets due to different delays on multiple paths. The other is packet loss, which results from the limited bandwidth of wireless networks. To better exploit multipath scheduling for TCP flows, this paper presents a new scheduling algorithm named the Adaptive Load Balancing Algorithm (ALBAM) to split traffic across multiple wireless links within the ISP infrastructure. To address these two adverse impacts on TCP flows, ALBAM develops two techniques. First, ALBAM takes advantage of the bursty nature of TCP flows and performs scheduling at the flowlet granularity, where the packet interval is large enough to compensate for the different path delays. Second, ALBAM develops a Packet Number Estimation Algorithm (PNEA) to predict the buffer usage on each path. With PNEA, ALBAM can prevent buffer overflow and move a TCP flow to a less congested path before it suffers packet loss. Simulations show that ALBAM provides better performance for TCP connections than its counterparts.
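A rough Python sketch of the flowlet idea follows: a flow may switch paths only when the gap since its last packet exceeds the largest delay difference between paths, and the new path is the one with the smallest estimated backlog. The gap rule, the class name, and the crude queue estimate are assumptions for illustration, not the paper's exact PNEA formulation.

```python
import time

# Sketch of flowlet-granularity multipath scheduling in the spirit of ALBAM.

class FlowletScheduler:
    def __init__(self, paths, flowlet_gap):
        # flowlet_gap should exceed the largest delay difference between paths,
        # so consecutive flowlets cannot be reordered at the receiver.
        self.paths = paths                      # e.g. [{"rate": 1e6, "queued": 0.0}, ...]
        self.flowlet_gap = flowlet_gap
        self.last_pkt_time = {}                 # flow_id -> timestamp of last packet
        self.assigned_path = {}                 # flow_id -> current path index

    def estimated_backlog(self, idx, now):
        # Crude stand-in for buffer-occupancy estimation: bytes handed to the
        # path minus what it could have drained at its nominal rate since then.
        p = self.paths[idx]
        drained = p["rate"] * (now - p.get("last_update", now))
        p["queued"] = max(0.0, p["queued"] - drained)
        p["last_update"] = now
        return p["queued"]

    def schedule(self, flow_id, pkt_len):
        now = time.time()
        gap = now - self.last_pkt_time.get(flow_id, 0.0)
        if flow_id not in self.assigned_path or gap > self.flowlet_gap:
            # New flowlet: safe to switch paths without reordering, so pick
            # the path with the smallest estimated backlog.
            backlogs = [self.estimated_backlog(i, now) for i in range(len(self.paths))]
            self.assigned_path[flow_id] = backlogs.index(min(backlogs))
        idx = self.assigned_path[flow_id]
        self.paths[idx]["queued"] += pkt_len
        self.last_pkt_time[flow_id] = now
        return idx
```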
25.
The spectrum of a residuated lattice L is the set Spec(L) of all prime i-filters. It is well known that Spec(L) can be endowed with the spectral topology. The main scope of this paper is to introduce and study another topology on Spec(L), the so-called stable topology, which turns out to be coarser than the spectral one. With this in view, we introduce the notion of pure i-filter for a residuated lattice and the notion of normal residuated lattice. Thus, we generalize to the case of residuated lattices some results on MV-algebras (Belluce and Sessa in Quaest Math 23:269-277, 2000; Cavaccini et al. in Math Japonica 45(2):303-310, 1997) and BL-algebras (Eslami and Haghani in Kybernetika 45:491-506, 2009; Leustean in Central Eur J Math 1(3):382-397, 2003; Turunen and Sessa in Mult-Valued Log 6(1-2):229-249, 2001).
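As a notational reminder (the usual hull-kernel presentation, stated here as an assumption about conventions rather than a quotation of the paper), the spectral topology on Spec(L) is generated by basic open sets indexed by elements of L, and the stable topology studied above is a coarser topology on the same set:

```latex
% Basic open sets of the spectral (hull-kernel) topology on Spec(L):
\[
  D(a) \;=\; \{\, P \in \operatorname{Spec}(L) : a \notin P \,\}, \qquad a \in L,
\]
% with \{D(a) : a \in L\} a basis; the stable topology satisfies
\[
  \tau_{\mathrm{stable}} \;\subseteq\; \tau_{\mathrm{spec}} .
\]
```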
26.
This paper presents a novel online-learning visual servo controller that integrates an FCMAC (fuzzy cerebellar model articulation controller) with a proportional controller for position control of a manipulator end-effector. Since the FCMAC has good learning capability and fast learning speed, and can save considerable memory through fuzzy processing of the input-space division and memory-unit activation, it is used to develop an adaptive control law that learns the relationship between the image feature errors and the manipulator input; the aim of online learning in the FCMAC is to minimize the output of the proportional controller. Furthermore, the FCMAC requires no model of the robot manipulator or of the image feature extraction, which improves the capability of the proposed controller for tasks in uncertain environments. Finally, the proposed controller is shown to be effective in experiments and is compared with a BP neural network controller.
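The "learn until the proportional output vanishes" idea can be illustrated with a tiny feedback-error-learning style loop: a coarse lookup table absorbs the proportional command online so that, as learning converges, the P term shrinks. The tiling, gain, and learning rate below are illustrative assumptions, not the paper's FCMAC design.

```python
import numpy as np

# Minimal sketch: proportional controller plus an online-learned feedforward table.

class CMACLikeController:
    def __init__(self, n_bins=32, feat_range=(-1.0, 1.0), kp=0.5, lr=0.2):
        self.table = np.zeros(n_bins)     # one weight per coarse error bin
        self.n_bins = n_bins
        self.lo, self.hi = feat_range
        self.kp = kp                      # proportional gain
        self.lr = lr                      # online learning rate

    def _bin(self, error):
        x = np.clip((error - self.lo) / (self.hi - self.lo), 0.0, 0.999)
        return int(x * self.n_bins)

    def command(self, feature_error):
        i = self._bin(feature_error)
        u_p = self.kp * feature_error     # proportional term
        u_ff = self.table[i]              # learned feedforward term
        # Online update: the table absorbs the P output, so u_p -> 0 over time.
        self.table[i] += self.lr * u_p
        return u_p + u_ff
```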
27.
Study on the process for preparing electrode materials for electric double-layer capacitors from phenolic resin
High-specific-surface-area activated carbon electrode materials for electric double-layer capacitors were prepared from phenolic resin with NaOH as the activating agent, and the effects of process parameters such as carbonization temperature, activation temperature, activator dosage, and activation time on the specific capacitance of the activated carbon were examined. The results show that, at a carbonization temperature of 600 °C, an activation temperature of 900 °C, an alkali-to-carbon ratio of 4, and an activation time of 1 h, the resulting high-surface-area activated carbon reaches a specific capacitance of 58.8 F/g. Capacitors assembled from it exhibit good charge-discharge and cycling performance, supporting both rapid charge-discharge at high currents and slow charge-discharge at low currents; however, a distributed-capacitance effect caused by the high proportion of micropores is the main reason for the drop in discharge capacity at high currents.
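Specific capacitance values such as the 58.8 F/g quoted above are commonly derived from constant-current (galvanostatic) discharge curves via C = I·Δt/(ΔV·m). The snippet below states that standard relation with purely illustrative numbers; it is not the paper's measurement protocol, and whether normalization is per electrode or per cell is not specified here.

```python
# Standard relation for specific capacitance from a galvanostatic discharge.
def specific_capacitance(current_a, discharge_time_s, voltage_drop_v, mass_g):
    """C = I * dt / (dV * m), in farads per gram of active material."""
    return current_a * discharge_time_s / (voltage_drop_v * mass_g)

# Illustrative (assumed) numbers: a 0.05 A discharge over 100 s with a 1.0 V
# swing on 0.085 g of active material gives roughly 58.8 F/g.
print(specific_capacitance(0.05, 100.0, 1.0, 0.085))
```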
28.
Energy-saving monitoring of kilns is an important component of energy management. This paper discusses the significance, monitoring items, methods, and energy-saving approaches of energy-saving monitoring for ceramic tunnel kilns, for reference.
29.
Since indoor scenes change frequently in daily life, for example when furniture is rearranged, their 3D reconstructions should be flexible and easy to update. We present an automatic 3D scene update algorithm for indoor scenes that captures scene variation with RGBD cameras. We assume an initial scene has been reconstructed in advance, manually or in some other semi-automatic way, before the change, and we automatically update the reconstruction according to newly captured RGBD images of the changed real scene. The method starts with an automatic segmentation process without manual interaction, which benefits from accurate labeling learned from the initial 3D scene. After segmentation, objects captured by the RGBD camera are extracted to form a local updated scene. We formulate an optimization problem that compares this local scene with the initial scene to locate moved objects. The moved objects are then integrated with the static objects in the initial scene to generate a new 3D scene. We demonstrate the efficiency and robustness of our approach by updating the 3D scenes of several real-world environments.
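The "locate moved objects" step can be pictured with a toy, runnable Python sketch in which objects are reduced to labeled point clouds and flagged as moved when their centroid shifts beyond a threshold. The function names, the matching by label, and the 10 cm threshold are assumptions for illustration, not the authors' optimization formulation.

```python
import numpy as np

# Toy sketch: detect which labeled objects have moved between the initial
# reconstruction and a newly captured local scene.

def locate_moved_objects(initial, captured, threshold=0.10):
    """initial/captured: dict label -> (N, 3) point array; returns label -> offset."""
    moved = {}
    for label, new_pts in captured.items():
        if label not in initial:
            continue                       # newly appeared objects handled elsewhere
        old_c = initial[label].mean(axis=0)
        new_c = new_pts.mean(axis=0)
        offset = new_c - old_c
        if np.linalg.norm(offset) > threshold:
            moved[label] = offset          # static objects are simply kept as-is
    return moved

# Illustrative usage with synthetic data: the chair has shifted ~0.5 m along x.
initial = {"chair": np.random.rand(100, 3), "table": np.random.rand(100, 3)}
captured = {"chair": initial["chair"] + np.array([0.5, 0.0, 0.0]),
            "table": initial["table"] + np.random.normal(0, 0.005, (100, 3))}
print(locate_moved_objects(initial, captured))   # -> only "chair" is reported
```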
30.
Automatically identifying and extracting the target information of a webpage, especially its main text, is a critical task in many web content analysis applications, such as information retrieval and automated screen reading. However, compared with typical plain texts, the structures of information on the web are extremely complex and have no single fixed template or layout. At the same time, the number of presentation elements on web pages, such as dynamic navigational menus, flashing logos, and a multitude of ad blocks, has increased rapidly in the past decade. In this paper, we propose a statistics-based approach that integrates the concept of fuzzy association rules (FAR) with that of a sliding window (SW) to efficiently extract the main text content from web pages. Our approach involves two separate stages. In Stage 1, the original HTML source is pre-processed and features are extracted for every line of text; then, supervised learning is performed to detect fuzzy association rules in training web pages. In Stage 2, the HTML source preprocessing and text-line feature extraction are conducted in the same way as in Stage 1, after which each text line is tested against the extracted fuzzy association rules to decide whether it belongs to the main text. Next, a sliding window is applied to segment the web page into several potential topical blocks. Finally, a simple selection algorithm is used to select the important blocks, which are then united as the detected topical region (main text). Experimental results on real-world data show that the efficiency and accuracy of our approach are better than those of existing Document Object Model (DOM)-based and vision-based approaches.
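A compact, runnable Python sketch of the two-stage pipeline is given below: per-line features, a rule-based per-line decision, a sliding-window vote to form blocks, and a simple block selection. The hand-written thresholds here merely stand in for the learned fuzzy association rules and are assumptions, not the paper's trained model.

```python
import re

# Stage-1 stand-in: extract simple per-line features from raw HTML lines.
def line_features(line):
    text = re.sub(r"<[^>]+>", "", line)            # crude tag stripping
    return {"text_len": len(text.strip()),
            "link_density": line.count("<a ") / max(1, len(text.strip())),
            "punct": sum(text.count(c) for c in ",.;!?")}

# Stand-in for the learned fuzzy association rules: long lines with low link
# density and some punctuation tend to be main text.
def is_content_line(feat):
    return feat["text_len"] > 40 and feat["link_density"] < 0.02 and feat["punct"] >= 1

# Stage-2 stand-in: per-line decisions, sliding-window vote, block selection.
def extract_main_text(html_lines, window=3, min_hits=2):
    flags = [is_content_line(line_features(l)) for l in html_lines]
    blocks, current = [], []
    for i, line in enumerate(html_lines):
        hits = sum(flags[max(0, i - window + 1): i + 1])   # sliding-window vote
        if hits >= min_hits:
            current.append(line)
        elif current:
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    # Simple selection: keep the largest candidate block as the topical region.
    return max(blocks, key=len) if blocks else []
```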