51.
Information Systems and e-Business Management - Collaborative filtering (CF) is a popular and widely accepted recommendation technique. CF is an automated form of word-of-mouth communication...
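To make the word-of-mouth analogy above concrete, the following is a minimal user-based collaborative filtering sketch: a user's missing rating is estimated from the ratings of other users, weighted by cosine similarity. The ratings matrix and function names are invented for illustration and are not taken from the cited paper.

```python
# Minimal user-based collaborative filtering sketch (illustrative only; the
# ratings data and helper names are invented, not from the cited paper).
import numpy as np

ratings = np.array([      # rows = users, columns = items, 0 = not yet rated
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity over the items both users have rated."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

def predict(user, item):
    """Predict a rating as a similarity-weighted average of other users' ratings."""
    sims, vals = [], []
    for other in range(ratings.shape[0]):
        if other != user and ratings[other, item] > 0:
            sims.append(cosine_sim(ratings[user], ratings[other]))
            vals.append(ratings[other, item])
    if not sims or sum(sims) == 0:
        return 0.0
    return float(np.dot(sims, vals) / sum(sims))

print(predict(user=0, item=2))   # estimate user 0's rating for item 2
```

In practice the neighbourhood would be limited to the most similar users and the ratings mean-centred, but the weighting idea is the same.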
52.
Schloegel K, Karypis G, Kumar V. IEEE Transactions on Parallel and Distributed Systems, 2001, 12(5): 451-466
Current multilevel repartitioning schemes tend to perform well on certain types of problems while obtaining worse results on other types. We present two new multilevel algorithms for repartitioning adaptive meshes that improve the performance of multilevel schemes on the types of problems for which current schemes perform poorly, while maintaining similar or better results on those problems for which current schemes perform well. Specifically, we present a new scratch-remap scheme called Locally-matched Multilevel Scratch-remap (or simply LMSR) for repartitioning adaptive meshes. LMSR tries to compute a high-quality partitioning that has a large amount of overlap with the original partitioning. We show that LMSR generally decreases the data redistribution costs required to balance the load compared to current scratch-remap schemes. We also present a new diffusion-based scheme that we refer to as Wavefront Diffusion. In Wavefront Diffusion, the flow of vertices moves in a wavefront from overweight to underweight subdomains. We show that Wavefront Diffusion obtains significantly lower data redistribution costs while maintaining similar or better edge-cut results compared to existing diffusion algorithms. We also compare Wavefront Diffusion with LMSR and show that they provide a trade-off between edge-cut and data redistribution costs for a wide range of problems. Our experimental results on a Cray T3E, an IBM SP2, and a cluster of Pentium Pro workstations show that both schemes are fast and scalable. For example, both are capable of repartitioning a seven-million-vertex graph in under three seconds on 128 processors of a Cray T3E.
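To illustrate the direction of flow described for Wavefront Diffusion, here is a small, hedged sketch of diffusion-style load balancing over a subdomain graph: repeated sweeps push load from heavier to lighter neighbours, visiting subdomains in breadth-first (wavefront) order starting from the heaviest one. The graph, loads, and parameters are invented; this is not the LMSR or Wavefront Diffusion algorithm from the paper.

```python
# Toy wavefront-ordered diffusion of load between subdomains (illustrative sketch only).
from collections import deque

def wavefront_diffuse(adjacency, load, sweeps=10, alpha=0.5):
    """Diffuse load between subdomains, sweeping in BFS (wavefront) order."""
    load = dict(load)
    for _ in range(sweeps):
        # Build the wavefront: breadth-first order starting at the heaviest subdomain.
        start = max(load, key=load.get)
        order, seen, queue = [], {start}, deque([start])
        while queue:
            d = queue.popleft()
            order.append(d)
            for nbr in adjacency[d]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        # Push a fraction of each positive difference "downhill" along the wavefront.
        for d in order:
            for nbr in adjacency[d]:
                diff = load[d] - load[nbr]
                if diff > 0:
                    load[d] -= alpha * diff
                    load[nbr] += alpha * diff
    return load

adjacency = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
loads = {"A": 10.0, "B": 6.0, "C": 3.0, "D": 1.0}   # A is overweight, D is underweight
print({k: round(v, 2) for k, v in wavefront_diffuse(adjacency, loads).items()})
```

The real algorithms additionally account for edge-cut and data redistribution cost; this sketch only shows load moving outward in a wave from overweight to underweight subdomains.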
53.
Full instrumental rationality and perfect institutions are two cornerstone assumptions underlying neoclassical models. However, in the real world, these two assumptions never hold, especially not in developing countries. In this paper, we develop a game theoretical model to investigate whether relaxations of the full instrumental rationality and perfect institutions premise can explain the conflicts that have been occurring between the various principals in the Narok district in Kenya with regard to land tenure and use.
54.
On the Hardness of Approximating Multicut and Sparsest-Cut
55.
Scalable parallel data mining for association rules
Eui-Hong Han, Karypis G, Kumar V. IEEE Transactions on Knowledge and Data Engineering, 2000, 12(3): 337-352
The authors propose two new parallel formulations of the Apriori algorithm (R. Agrawal and R. Srikant, 1994), which is used for computing association rules. These new formulations, IDD and HD, address the shortcomings of two previously proposed parallel formulations, CD and DD. Unlike the CD algorithm, the IDD algorithm partitions the candidate set intelligently among processors to efficiently parallelize the step of building the hash tree. The IDD algorithm also eliminates the redundant work inherent in DD and requires substantially smaller communication overhead than DD. However, IDD suffers from an added cost due to the communication of transactions among processors. HD is a hybrid algorithm that combines the advantages of CD and DD. Experimental results on a 128-processor Cray T3E show that HD scales just as well as the CD algorithm with respect to the number of transactions, and scales as well as IDD with respect to increasing candidate set size.
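As a rough illustration of the candidate-partitioning idea behind IDD (each processor counts support only for its own share of the candidate itemsets instead of replicating the whole candidate set), here is a hedged single-process sketch. The deterministic owner() assignment, the toy transactions, and the helper names are invented; they do not reproduce the paper's IDD or HD schemes.

```python
# Sketch of candidate partitioning for parallel support counting (illustrative only).
from itertools import combinations
from collections import Counter

transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "c", "d"}, {"a", "b", "c", "d"}]
candidates = [frozenset(c) for c in combinations("abcd", 2)]   # candidate 2-itemsets
NUM_PROCS = 2

def owner(itemset):
    """Deterministically assign each candidate itemset to one simulated processor."""
    return sum(ord(ch) for ch in sorted(itemset)) % NUM_PROCS

# Each simulated processor counts support only for its own share of the candidates.
partition = {p: [c for c in candidates if owner(c) == p] for p in range(NUM_PROCS)}
counts = {p: Counter() for p in range(NUM_PROCS)}
for p, cands in partition.items():
    for t in transactions:            # in a real parallel run, transactions are streamed around
        for c in cands:
            if c <= t:                # the candidate is contained in the transaction
                counts[p][c] += 1

support = {c: n for p in counts for c, n in counts[p].items()}
print({"".join(sorted(c)): n for c, n in support.items()})
```

The point of the partitioning is that no processor has to hold or probe the entire candidate hash tree, which is what makes the counting step parallelizable.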
56.
57.
58.
59.
Durga Praveen Kumar D, Gantayet LM, Singh S, Rawat AS, Rana P, Rajasree V, Agarwalla SK, Chakravarthy DP. The Review of Scientific Instruments, 2012, 83(2): 025105
Temporal jitter in a magnetic-pulse-compression-based copper vapor laser (CVL) system is analyzed by considering the ripple present in the input dc power supply and in the magnetic core resetting power supply. It is shown that the jitter is a function of the ratio of the operating voltage to the designed voltage, the percentage ripple, and the total propagation delay of the magnetic pulse compression circuit. Experimental results from a CVL system operating at a repetition rate of 9 kHz are presented.
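As a first-order illustration of how those three quantities could combine, the sketch below assumes simple volt-second balance in each compression stage (hold-off time scaling like 1/V), so that supply ripple translates into a proportional spread of the total delay. All numbers are invented, and this is not the analysis from the cited paper.

```python
# First-order jitter estimate for a magnetic pulse compression chain (illustrative sketch;
# assumes volt-second balance, t_stage ~ 1/V, with invented numbers -- not the cited analysis).
V_design = 10e3      # designed charging voltage [V] (assumed)
V_op     = 9e3       # actual operating voltage [V] (assumed)
ripple   = 0.01      # 1 % ripple on the charging supply (assumed)
T_delay  = 3e-6      # designed total propagation delay of the compression chain [s] (assumed)

# Volt-second balance: each stage holds off until the integral of V dt reaches a fixed value,
# so the delay scales like 1/V and a small voltage change dV shifts it by roughly -T * dV / V.
T_op   = T_delay * V_design / V_op      # the delay stretches when running below design voltage
jitter = T_op * ripple                  # spread of the delay caused by the supply ripple
print(f"estimated delay {T_op * 1e6:.2f} us, jitter ~ {jitter * 1e9:.0f} ns")
```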
60.
Soft computing-based approaches have been developed to predict the specific energy consumption and stability margin of a six-legged robot ascending and descending gradient terrains. Three different neuro-fuzzy approaches and one neural network-based approach have been developed, and their performances are compared through computer simulations. A genetic algorithm-tuned multiple adaptive neuro-fuzzy inference system is found to perform better than the other three approaches for predicting both outputs. This could be due to the more exhaustive search carried out by the genetic algorithm in comparison with the back-propagation algorithm, and to the use of two separate adaptive neuro-fuzzy inference systems for the two different outputs. A designer may use the developed soft computing-based approaches to predict the specific energy consumption and stability margin of the robot for a given set of input parameters beforehand.
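The design choice described above (one model per output, tuned by a genetic algorithm rather than back-propagation) can be sketched as follows. The toy data, the tiny parametric stand-in for the fuzzy model, and the GA settings are all invented for illustration; this is not the GA-tuned ANFIS from the cited work.

```python
# Sketch of "one model per output, tuned by a plain genetic algorithm" (illustrative only).
import random
random.seed(0)

# Toy data: inputs = (terrain gradient, payload); the two outputs are predicted separately.
X = [(5, 1.0), (10, 1.5), (15, 2.0), (20, 2.5)]
y_energy    = [1.2, 2.0, 3.1, 4.5]     # stand-in for "specific energy consumption" (made up)
y_stability = [0.9, 0.7, 0.5, 0.3]     # stand-in for "stability margin" (made up)

def model(params, x):
    a, b, c = params                   # a tiny parametric stand-in for a fuzzy model
    return a * x[0] + b * x[1] + c

def mse(params, ys):
    return sum((model(params, x) - y) ** 2 for x, y in zip(X, ys)) / len(X)

def ga_tune(ys, pop_size=30, generations=200):
    """Tune one model's parameters with a simple genetic algorithm (one model per output)."""
    pop = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: mse(p, ys))
        parents = pop[: pop_size // 2]                 # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(p1, p2)]   # crossover
            child = [g + random.gauss(0, 0.1) for g in child]       # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda p: mse(p, ys))

for name, ys in [("energy", y_energy), ("stability", y_stability)]:
    best = ga_tune(ys)
    print(name, [round(g, 3) for g in best], round(mse(best, ys), 4))
```

Tuning each output with its own model, as in the sketch, mirrors the use of two separate inference systems mentioned above; the GA explores the parameter space globally instead of following gradients.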