101.
Robot manufacturers will be required to demonstrate objectively that all reasonably foreseeable hazards have been identified in any robotic product design that is to be marketed commercially. This is problematic for autonomous mobile robots because conventional methods, which were developed for automatic systems, do not help safety analysts identify non-mission interactions: interactions with environmental features that are not directly associated with the robot's design mission, yet may account for the majority of an autonomous robot's required tasks. In this paper we develop a new variant of preliminary hazard analysis that is explicitly aimed at identifying non-mission interactions by means of new sets of guidewords not normally found in existing variants. We develop the required features of the method and describe its application to several small trials conducted at Bristol Robotics Laboratory during 2011–2012.
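As an illustration only, a guideword-driven pass over non-mission environmental features could be organized as a systematic enumeration of prompts for the analyst. The guidewords and features below are hypothetical placeholders, not the sets proposed in the paper.

```python
from itertools import product

# Hypothetical guidewords prompting non-mission interaction hazards
# (illustrative only; not the guideword sets developed in the paper).
GUIDEWORDS = ["collides with", "blocks", "is misled by", "ignores", "damages"]

# Hypothetical environmental features outside the robot's design mission.
FEATURES = ["pet", "staircase", "glass door", "bystander", "loose cable"]

def enumerate_prompts(features, guidewords):
    """Yield one analyst prompt per (feature, guideword) combination."""
    for feature, guideword in product(features, guidewords):
        yield f"Robot {guideword} {feature}: record hazard, cause, and severity"

if __name__ == "__main__":
    for prompt in enumerate_prompts(FEATURES, GUIDEWORDS):
        print(prompt)
```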
102.
Cloud computing is a form of distributed computing in which multiple machines on the service side execute computations simultaneously. To handle the growing quantity of multimedia data, numerous large-scale multimedia storage and computing techniques have been developed for the cloud, and Hadoop plays a key role among them. Hadoop, a computing cluster built from low-priced hardware, can carry out parallel processing of petabytes of multimedia data and offers high reliability, efficiency, and scalability. Large-scale multimedia data processing involves not only the core techniques, Hadoop and MapReduce, but also data collection techniques such as the File Transfer Protocol and Flume, as well as distributed system configuration, automatic installation, and monitoring-platform construction and management. Only by integrating all of these techniques can a reliable large-scale multimedia data platform be offered. In this paper, we show how cloud computing can make a breakthrough by proposing a multimedia social network dataset on the Hadoop platform and implementing a prototype version; detailed specifications and design issues are discussed as well. An important finding is that multimedia social network analysis takes less time on the cloud Hadoop platform than on a single computer. The advantages of cloud computing over traditional data processing practices are demonstrated, applicable framework designs and tools for large-scale data processing are proposed, and experimental multimedia data, including data sizes and processing times, are reported.
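As a minimal sketch of the MapReduce style the abstract refers to (not the authors' actual pipeline), the mapper/reducer pair below counts uploads per user in a hypothetical tab-separated multimedia log; the same logic could be packaged as two Hadoop Streaming scripts, while the `__main__` block simulates the map-shuffle-reduce flow locally.

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Emit (user_id, 1) for each upload record of a hypothetical log
    formatted as: user_id \t media_id \t action."""
    for line in lines:
        parts = line.rstrip("\n").split("\t")
        if len(parts) >= 3 and parts[2] == "upload":
            yield parts[0], 1

def reducer(pairs):
    """Sum counts per user_id; input must be grouped by key, which
    Hadoop's shuffle phase guarantees (here we sort explicitly)."""
    for user_id, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield user_id, sum(count for _, count in group)

if __name__ == "__main__":
    # Local stand-in for the Hadoop shuffle: map, sort, reduce.
    sample = [
        "u1\tm9\tupload",
        "u2\tm3\tview",
        "u1\tm4\tupload",
    ]
    for user, total in reducer(mapper(sample)):
        print(user, total)
```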
103.
A new variant of Differential Evolution (DE), called ADE-Grid, is presented in this paper; it adapts the mutation strategy, crossover rate (CR), and scale factor (F) during the run. In ADE-Grid, learning automata (LA), which are powerful decision-making machines, are used to adaptively determine the proper values of CR and F and the suitable strategy for constructing the mutant vector of each individual. The proposed automata-based DE is able to maintain diversity among the individuals and encourage them to move toward several promising areas of the search space as well as the best position found so far. Numerical experiments are conducted on a set of twenty-four well-known benchmark functions and one real-world engineering problem. The performance comparison between ADE-Grid and other state-of-the-art DE variants indicates that ADE-Grid is a viable approach for optimization. The results also show that ADE-Grid improves the performance of DE in terms of both convergence speed and the quality of the final solution.
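For reference, the sketch below implements one generation of the classic DE/rand/1/bin operators that ADE-Grid adapts; the learning-automata adaptation of CR, F, and the mutation strategy is not shown, and the sphere objective is only a stand-in benchmark.

```python
import numpy as np

def sphere(x):
    """Stand-in benchmark objective (minimize)."""
    return float(np.sum(x ** 2))

def de_rand_1_bin(pop, fitness, objective, F=0.5, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin with greedy selection."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        # Pick three distinct individuals different from i.
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover with one dimension guaranteed from the mutant.
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True
        trial = np.where(cross, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial <= fitness[i]:
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pop = rng.uniform(-5, 5, size=(20, 10))
    fit = np.array([sphere(x) for x in pop])
    for _ in range(100):
        pop, fit = de_rand_1_bin(pop, fit, sphere, rng=rng)
    print("best fitness:", fit.min())
```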
104.
In the present study, a Group Method of Data Handling (GMDH) network was used to predict the scour depth below pipelines. The GMDH network was developed using back-propagation. The input parameters considered to influence scour depth were sediment size, pipeline geometry, and approach-flow characteristics. Training and testing of the GMDH networks were carried out on nondimensional data sets collected from the literature, covering the two main situations of pipeline scour experiments, namely clear-water and live-bed conditions. The test performance was compared with support vector machines (SVM) and existing empirical equations. The GMDH network trained with back-propagation predicted scour depth with lower error than the SVM and the empirical equations. The effects of the various input parameters on the scour depth were also investigated.
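A minimal sketch of the basic GMDH idea follows: each unit in a layer is a quadratic (Ivakhnenko) polynomial of one input pair, fitted by least squares and ranked on a validation set. The predictors and response below are synthetic placeholders, not the paper's data, and the back-propagation refinement used in the study is not shown.

```python
import numpy as np
from itertools import combinations

def quad_features(a, b):
    """Quadratic polynomial terms of one input pair (Ivakhnenko polynomial)."""
    return np.column_stack([np.ones_like(a), a, b, a * b, a ** 2, b ** 2])

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    """Fit one GMDH layer: a quadratic unit per input pair, ranked by
    validation RMSE; return the best units (rmse, i, j, coefficients)."""
    units = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        Phi = quad_features(X_train[:, i], X_train[:, j])
        coef, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)
        pred_val = quad_features(X_val[:, i], X_val[:, j]) @ coef
        rmse = float(np.sqrt(np.mean((pred_val - y_val) ** 2)))
        units.append((rmse, i, j, coef))
    units.sort(key=lambda u: u[0])
    return units[:keep]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical nondimensional predictors and a synthetic scour response.
    X = rng.uniform(0, 1, size=(200, 3))
    y = 0.6 * X[:, 0] * X[:, 1] + 0.3 * X[:, 2] ** 2 + 0.02 * rng.standard_normal(200)
    for rmse, i, j, _ in gmdh_layer(X[:150], y[:150], X[150:], y[150:]):
        print(f"inputs ({i},{j}) -> validation RMSE {rmse:.4f}")
```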
105.
In this paper, a novel algorithm for image encryption based on a hash function is proposed. In our algorithm, a 512-bit external secret key is used as the input of the Salsa20 hash function. First, the hash function is modified to generate a key stream better suited to image encryption. The final encryption key stream is then produced by correlating the key stream with the plaintext, yielding both key sensitivity and plaintext sensitivity. The scheme achieves high sensitivity, high complexity, and high security with only two rounds of diffusion. In the first round, the original image is partitioned horizontally into an array of 1,024 sections of size 8 × 8; in the second round, the same operation is applied vertically to the transpose of the resulting array. The main idea of the algorithm is to use the average of the image data for encryption: to encrypt each section, the average of the other sections is employed. The algorithm therefore uses different averages when encrypting different input images (even with the same hash-based sequence), which significantly increases the resistance of the cryptosystem against known/chosen-plaintext and differential attacks. It is demonstrated that the 2D correlation coefficient (CC), peak signal-to-noise ratio (PSNR), encryption quality (EQ), entropy, mean absolute error (MAE), and decryption quality satisfy the security and performance requirements (CC < 0.002177, PSNR < 8.4642, EQ > 204.8, entropy > 7.9974, and MAE > 79.35). The number of pixels change rate (NPCR) analysis reveals that when only one pixel of the plain-image is modified, almost all cipher pixels change (NPCR > 99.6125 %) and the unified average changing intensity is high (UACI > 33.458 %). Moreover, the proposed algorithm is very sensitive to small changes (e.g., modification of a single bit) in the external secret key (NPCR > 99.65 %, UACI > 33.55 %). The algorithm is shown to yield better security performance than other reported algorithms.
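For readers unfamiliar with the NPCR and UACI figures quoted above, the sketch below computes both metrics for two cipher images; the random arrays stand in for cipher images obtained from plain-images differing in a single pixel and are not the paper's data.

```python
import numpy as np

def npcr_uaci(cipher1, cipher2):
    """NPCR: percentage of pixel positions that differ between two cipher
    images.  UACI: mean absolute intensity difference normalized by 255,
    expressed as a percentage.  Both inputs are 8-bit arrays of equal shape."""
    c1 = cipher1.astype(np.int16)
    c2 = cipher2.astype(np.int16)
    npcr = float(np.mean(c1 != c2)) * 100.0
    uaci = float(np.mean(np.abs(c1 - c2)) / 255.0) * 100.0
    return npcr, uaci

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for two cipher images; a strong cipher should give
    # NPCR close to 99.6 % and UACI close to 33.4 %.
    a = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
    b = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
    npcr, uaci = npcr_uaci(a, b)
    print(f"NPCR = {npcr:.2f} %, UACI = {uaci:.2f} %")
```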
106.
We have developed novel catalysts for biomass gasification with much higher energy efficiency than conventional methods (no catalyst, dolomite, or a commercial steam-reforming Ni catalyst). Gasification of cellulose over the novel Rh/CeO2/SiO2 catalysts showed that the process consists of tar reforming and the combustion of solid carbon. We also tested Rh/CeO2/SiO2 in air gasification, pyrogasification, and steam reforming of cedar wood; it gave a higher syngas yield than the conventional steam-reforming Ni catalyst. Furthermore, we compared single- and dual-bed reactors. The single-bed reactor was effective for the gasification of cedar but unsuitable for rice straw because rapid deactivation was observed. Gasification of rice straw, jute stick, and bagasse using the fluidized dual-bed reactor with Rh/CeO2/SiO2 was also investigated. In particular, catalyst stability during rice straw gasification was clearly enhanced by the fluidized dual-bed reactor.
107.
Joining of sintered Si3N4 was performed using a high-temperature brazing technique. Ni-based brazing alloys having the same Ni:Cr ratio as AWS BNi-5 (Ni–18Cr–19Si, at.%) but different Si contents were used as the brazing filler metals. Joining experiments were performed at 1220 °C under a N2 partial pressure of 15 Pa for times between 5 and 15 min. The highest room-temperature four-point bend strength of the joints was 115 MPa, whereas 220 MPa was achieved when the joints were tested at 900 °C. The high strength of the experimental joints was attributed to the reduction in residual stresses and the formation of a CrN reaction layer at the ceramic/filler-metal interface.
108.
The cultivation of toxic lignocellulosic hydrolyzates has become a challenging research topic in recent decades. Although several cultivation methods have been proposed, numerous questions remain regarding their industrial application. The current work presents a solution to this problem with good potential for industrial-scale application. A toxic dilute-acid hydrolyzate was continuously cultivated using a high-cell-density flocculating yeast in single and serial bioreactors equipped with settlers to recycle the cells back to the bioreactors. No prior detoxification was necessary, as the flocs were able to detoxify the hydrolyzate in situ. The experiments were successfully carried out at dilution rates up to 0.52 h−1. The cell concentration inside the bioreactors was between 23 and 35 g-DW/L, while the concentration in the settler effluent was 0.32 ± 0.05 g-DW/L. An ethanol yield of 0.42–0.46 g/g consumed sugar was achieved, and the residual sugar was less than 6% of the initial 35.2 g/L of fermentable sugar (glucose, galactose, and mannose).
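From the figures quoted above, a rough back-of-the-envelope estimate of the effluent ethanol concentration can be made; this is a sketch only, and the paper's actual measured concentrations are not reproduced here.

```python
# Rough estimate of effluent ethanol from the reported figures (sketch only).
initial_sugar = 35.2          # g/L fermentable sugar (glucose, galactose, mannose)
residual_fraction = 0.06      # residual sugar was below 6 % of the initial sugar
yield_range = (0.42, 0.46)    # g ethanol per g consumed sugar

consumed = initial_sugar * (1 - residual_fraction)   # at least ~33.1 g/L consumed
ethanol = [y * consumed for y in yield_range]        # roughly 13.9 to 15.2 g/L
print(f"consumed sugar >= {consumed:.1f} g/L, "
      f"ethanol ~ {ethanol[0]:.1f}-{ethanol[1]:.1f} g/L")
```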
109.
In clustering algorithms, one of the main challenges is to solve the global allocation of the clusters rather than merely tuning the partition borders locally. Despite this, all external cluster validity indexes calculate only point-level differences between two partitions, without any direct information about how similar their cluster-level structures are. In this paper, we introduce a cluster-level index called the centroid index. The measure is intuitive, simple to implement, fast to compute, and applicable in the case of model mismatch as well. We expect it to generalize, to a certain extent, to clustering models beyond centroid-based k-means.
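The sketch below illustrates one common way such a cluster-level comparison can be realized: map each centroid of one solution to its nearest centroid in the other, count the centroids that receive no mapping ("orphans"), and take the larger of the two directional counts. This follows our reading of the centroid index and should be treated as illustrative rather than the paper's exact definition.

```python
import numpy as np

def orphan_count(src, dst):
    """Map every centroid in src to its nearest centroid in dst and count
    dst centroids that receive no mapping ("orphans")."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    mapped = np.zeros(len(dst), dtype=bool)
    mapped[nearest] = True
    return int((~mapped).sum())

def centroid_index(c1, c2):
    """Symmetric cluster-level dissimilarity: 0 means every centroid in one
    solution has a one-to-one counterpart in the other."""
    return max(orphan_count(c1, c2), orphan_count(c2, c1))

if __name__ == "__main__":
    a = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
    b = np.array([[0.2, -0.1], [4.9, 5.2], [9.8, 0.3]])   # same structure as a
    c = np.array([[0.0, 0.0], [0.5, 0.5], [10.0, 0.0]])   # one cluster misplaced
    print(centroid_index(a, b))   # expected 0
    print(centroid_index(a, c))   # expected 1
```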
110.
A comprehensive Arabic handwritten text database is an essential resource for Arabic handwritten text recognition research, especially given the lack of such a database. In this paper, we report our comprehensive Arabic offline Handwritten Text database (KHATT), consisting of 1000 handwritten forms written by 1000 distinct writers from different countries. The forms were scanned at 200, 300, and 600 dpi. The database contains 2000 randomly selected paragraphs from 46 sources, 2000 minimal-text paragraphs covering all the shapes of Arabic characters, and optionally written paragraphs on open subjects. The 2000 random text paragraphs consist of 9327 lines. The forms were randomly divided into 70%, 15%, and 15% sets for training, testing, and verification, respectively, enabling researchers to use the database and compare their results. A formal verification procedure was implemented to align the handwritten text with its ground truth at the form, paragraph, and line levels. The verified ground-truth database contains metadata describing the written text at the page, paragraph, and line levels, in text and XML formats. Tools to extract paragraphs from pages and to segment paragraphs into lines have been developed. In addition, we present experimental results on the database using two classifiers, viz. Hidden Markov Models (HMM) and our novel syntactic classifier.
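A reproducible 70/15/15 split of the kind described above could be produced as in the sketch below; the form identifiers and seed are hypothetical, and the published KHATT partition should be used when comparing results.

```python
import random

def split_forms(form_ids, seed=0):
    """Randomly divide form identifiers into 70 % training, 15 % testing,
    and 15 % verification sets (hypothetical IDs; illustrative only)."""
    rng = random.Random(seed)
    ids = list(form_ids)
    rng.shuffle(ids)
    n = len(ids)
    n_train = int(0.70 * n)
    n_test = int(0.15 * n)
    return ids[:n_train], ids[n_train:n_train + n_test], ids[n_train + n_test:]

if __name__ == "__main__":
    train, test, verify = split_forms(range(1, 1001))
    print(len(train), len(test), len(verify))   # 700 150 150
```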