61.
This paper considers an economic lot sizing model with constant capacity, non-increasing setup cost, and a convex inventory cost function. Algorithms with computational time of O(N×TDN) have been developed for solving the model, where N is the number of planning periods and TDN is the total demand. This study partially characterizes the optimal planning structure of the model. Based on this partial optimal structure, a new algorithm with computational time of O(N log N) has also been developed, and a computational study demonstrates that it is efficient.
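For orientation, the sketch below is a textbook Wagner-Whitin-style dynamic program for the simpler uncapacitated lot sizing problem with linear holding cost; it only illustrates the planning-period structure the abstract refers to and is not the paper's capacitated O(N×TDN) or O(N log N) algorithm. The function name and cost assumptions are illustrative.

```python
# Toy uncapacitated lot sizing DP (illustration only; the paper's model adds
# constant capacity, non-increasing setup cost, and convex inventory cost).
def lot_sizing_dp(demand, setup_cost, hold_cost):
    N = len(demand)
    best = [0.0] + [float("inf")] * N   # best[t] = min cost of covering periods 0..t-1
    for t in range(1, N + 1):
        for j in range(t):              # last setup occurs in period j and covers j..t-1
            holding = sum(hold_cost * (k - j) * demand[k] for k in range(j, t))
            best[t] = min(best[t], best[j] + setup_cost + holding)
    return best[N]

print(lot_sizing_dp([20, 30, 0, 40], setup_cost=50.0, hold_cost=1.0))
```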
62.
A new kind of molecularly imprinted polymer-modified graphite electrode was fabricated by a “grafting-to” approach incorporating the sol–gel technique, for detecting acute deficiency in serum ascorbic acid level (SAAL), which manifests as hypovitaminosis C. The modified electrode oxidizes ascorbic acid (AA) at a less positive potential (0.0 V) than earlier reported methods, giving a limit of detection as low as 6.13 ng mL−1 (RSD = 1.2%, S/N = 3). The diffusion coefficient (1.096 × 10−5 cm2 s−1), rate constant (7.308 s−1), and Gibbs free energy change (−12.59 kJ mol−1) due to analyte adsorption were also calculated to explore the kinetics of AA oxidation. The proposed sensor substantially enhances sensitivity, detecting ultra-trace levels of AA in the presence of other biologically important compounds (dopamine, uric acid, etc.) without cross interference or matrix complications from biological fluids and pharmaceutical samples.
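As a side note, the reported Gibbs free energy change of adsorption can be related to an adsorption equilibrium constant through the standard identity ΔG = −RT ln K. The sketch below applies that identity assuming T ≈ 298 K (the temperature is not stated in the abstract); it is a generic thermodynamic relation, not a reconstruction of the authors' kinetic analysis.

```python
import math

R = 8.314       # gas constant, J mol^-1 K^-1
T = 298.15      # assumed temperature in K (not given in the abstract)
dG = -12.59e3   # reported Gibbs free energy change of adsorption, J mol^-1

# Standard relation dG = -R * T * ln(K)  =>  K = exp(-dG / (R * T))
K = math.exp(-dG / (R * T))
print(f"Implied adsorption equilibrium constant K ~ {K:.0f}")   # roughly 160
```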
63.
Modern businesses face the challenge of effectively coordinating their supply chains from upstream to downstream services. Searching, scheduling, and coordinating a set of services drawn from a large number of service resources under various constraints and uncertainties is a complex problem. Existing approaches have relied on complete information about service requirements and resources, without adequately addressing the dynamics and uncertainties of the environment. Real-world situations are complicated by ambiguity in service requirements, uncertainty in the solutions offered by service providers, and interdependencies among the services to be composed. This paper investigates the complexity of supply chain formation and proposes an agent-mediated coordination approach. Each agent works as a broker for one service type, dedicated to selecting solutions for that service and to interacting with other agents to refine the decision making and achieve compatibility among the solutions. The coordination among agents concerns decision making at the strategic, tactical, and operational levels. At the strategic level, agents communicate and negotiate to form the supply chain; at the tactical level, agents use argumentation to communicate and understand each other's preferences and constraints; at the operational level, different strategies are used for selecting among the preferences. A prototype based on this approach has been implemented, with simulated experiments highlighting its effectiveness.
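As a toy illustration of the broker-per-service idea only (not the paper's negotiation or argumentation protocols), the sketch below lets each broker propose candidate solutions for its service type and keeps only the combinations that satisfy a pairwise compatibility constraint; the service names, data fields, and constraint are hypothetical.

```python
from itertools import product

# Hypothetical candidate solutions proposed by one broker agent per service type.
brokers = {
    "transport": [{"provider": "T1", "arrival_day": 3}, {"provider": "T2", "arrival_day": 5}],
    "warehouse": [{"provider": "W1", "ready_day": 4}, {"provider": "W2", "ready_day": 2}],
}

def compatible(plan):
    # Toy interdependency: goods must not arrive before the warehouse is ready.
    return plan["transport"]["arrival_day"] >= plan["warehouse"]["ready_day"]

feasible = []
for combo in product(*brokers.values()):
    plan = dict(zip(brokers, combo))
    if compatible(plan):
        feasible.append(plan)
print(feasible)
```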
64.
In this paper we formulate a least squares version of the recently proposed twin support vector machine (TSVM) for binary classification. This formulation leads to an extremely simple and fast algorithm for generating binary classifiers based on two non-parallel hyperplanes. Here we solve two modified primal problems of TSVM, instead of the two dual problems usually solved. We show that the solution of the two modified primal problems reduces to solving just two systems of linear equations, as opposed to solving two quadratic programming problems along with two systems of linear equations in TSVM. Classification using a nonlinear kernel also leads to systems of linear equations. Our experiments on publicly available datasets indicate that the proposed least squares TSVM has classification accuracy comparable to that of TSVM but with considerably less computational time. Since the linear least squares TSVM can easily handle large datasets, we further investigated its efficiency for text categorization applications. Computational results demonstrate the effectiveness of the proposed method over linear proximal SVM on all the text corpora considered.
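A minimal sketch of the linear least squares TSVM idea follows, assuming the standard two-linear-systems formulation with A holding the +1 class points, B the −1 class points, and c1, c2 as penalty parameters; a small ridge term is added for numerical stability and the kernel case is omitted. Treat it as an illustration of "two systems of linear equations", not as the authors' exact implementation.

```python
import numpy as np

def lstsvm_train(A, B, c1=1.0, c2=1.0, ridge=1e-6):
    """Fit two non-parallel hyperplanes, each by solving one linear system."""
    H = np.hstack([A, np.ones((A.shape[0], 1))])   # [A  e1]
    G = np.hstack([B, np.ones((B.shape[0], 1))])   # [B  e2]
    e1, e2 = np.ones(A.shape[0]), np.ones(B.shape[0])
    I = ridge * np.eye(H.shape[1])
    # Plane 1: close to class +1, unit distance from class -1 (in a least squares sense).
    z1 = -np.linalg.solve(G.T @ G + (H.T @ H) / c1 + I, G.T @ e2)
    # Plane 2: close to class -1, unit distance from class +1.
    z2 = np.linalg.solve(H.T @ H + (G.T @ G) / c2 + I, H.T @ e1)
    return (z1[:-1], z1[-1]), (z2[:-1], z2[-1])

def lstsvm_predict(x, planes):
    """Assign x to the class whose hyperplane is nearer."""
    (w1, b1), (w2, b2) = planes
    d1 = abs(x @ w1 + b1) / np.linalg.norm(w1)
    d2 = abs(x @ w2 + b2) / np.linalg.norm(w2)
    return 1 if d1 < d2 else -1

A = np.array([[1.0, 1.0], [1.2, 0.9]])       # toy +1 class
B = np.array([[-1.0, -1.0], [-0.8, -1.1]])   # toy -1 class
planes = lstsvm_train(A, B)
print(lstsvm_predict(np.array([0.9, 1.1]), planes))   # expected: 1
```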
65.
66.
Information Systems and e-Business Management - Collaborative filtering (CF) is a popular and widely accepted recommendation technique. CF is an automated form of word-of-mouth communication...
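Because the abstract only gives the word-of-mouth framing, a generic user-based collaborative filtering sketch is shown below for context (cosine similarity over a tiny, hypothetical rating matrix); it is not the method proposed in the article.

```python
import numpy as np

# Rows = users, columns = items; 0 means "not rated" (hypothetical data).
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def predict(user, item, R):
    """Predict a missing rating as a similarity-weighted average over other users."""
    target = R[user]
    num = den = 0.0
    for other, row in enumerate(R):
        if other == user or row[item] == 0:
            continue
        sim = row @ target / (np.linalg.norm(row) * np.linalg.norm(target))
        num += sim * row[item]
        den += abs(sim)
    return num / den if den else 0.0

print(round(predict(user=1, item=1, R=ratings), 2))
```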
67.
Current multilevel repartitioning schemes tend to perform well on certain types of problems while obtaining worse results on others. We present two new multilevel algorithms for repartitioning adaptive meshes that improve the performance of multilevel schemes on the types of problems where current schemes perform poorly, while maintaining similar or better results on those where they perform well. Specifically, we present a new scratch-remap scheme called Locally-matched Multilevel Scratch-remap (or simply LMSR) for repartitioning adaptive meshes. LMSR tries to compute a high-quality partitioning that has a large amount of overlap with the original partitioning. We show that LMSR generally decreases the data redistribution costs required to balance the load compared to current scratch-remap schemes. We also present a new diffusion-based scheme that we refer to as Wavefront Diffusion. In Wavefront Diffusion, the flow of vertices moves in a wavefront from overweight to underweight subdomains. We show that Wavefront Diffusion obtains significantly lower data redistribution costs while maintaining similar or better edge-cut results compared to existing diffusion algorithms. We also compare Wavefront Diffusion with LMSR and show that they provide a trade-off between edge-cut and data redistribution costs for a wide range of problems. Our experimental results on a Cray T3E, an IBM SP2, and a cluster of Pentium Pro workstations show that both schemes are fast and scalable. For example, both are capable of repartitioning a seven-million-vertex graph in under three seconds on 128 processors of a Cray T3E. Our schemes obtained relative speedups of between nine and 12 when the number of processors was increased by a factor of 16 on a Cray T3E.
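To make the diffusion idea concrete, here is a toy first-order diffusion load balancer over a subdomain adjacency graph: each subdomain repeatedly ships a fraction of its load surplus to lighter neighbours. It illustrates the general diffusion principle only and is not the Wavefront Diffusion or LMSR algorithm described above; the graph and parameters are made up.

```python
# Toy diffusion on a chain of four subdomains (hypothetical loads and topology).
def diffuse(load, neighbors, alpha=0.25, iters=50):
    load = dict(load)
    for _ in range(iters):
        flows = {}
        for u, nbrs in neighbors.items():
            for v in nbrs:
                if load[u] > load[v]:                   # ship load downhill only
                    flows[(u, v)] = alpha * (load[u] - load[v])
        for (u, v), f in flows.items():
            load[u] -= f
            load[v] += f
    return load

neighbors = {"P0": ["P1"], "P1": ["P0", "P2"], "P2": ["P1", "P3"], "P3": ["P2"]}
print(diffuse({"P0": 100.0, "P1": 20.0, "P2": 20.0, "P3": 20.0}, neighbors))
```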
68.
Full instrumental rationality and perfect institutions are two cornerstone assumptions underlying neoclassical models. However, in the real world these two assumptions never hold, especially not in developing countries. In this paper, we develop a game-theoretical model to investigate whether relaxing the full-instrumental-rationality and perfect-institutions premises can explain the conflicts over land tenure and use that have been occurring between the various principals in the Narok district in Kenya.
69.
70.
Scalable parallel data mining for association rules
The authors propose two new parallel formulations of the Apriori algorithm (R. Agrawal and R. Srikant, 1994), which is used for computing association rules. These new formulations, IDD and HD, address the shortcomings of two previously proposed parallel formulations, CD and DD. Unlike the CD algorithm, the IDD algorithm partitions the candidate set intelligently among processors to efficiently parallelize the step of building the hash tree. The IDD algorithm also eliminates the redundant work inherent in DD and requires substantially smaller communication overhead than DD, but it suffers from the added cost of communicating transactions among processors. HD is a hybrid algorithm that combines the advantages of CD and DD. Experimental results on a 128-processor Cray T3E show that HD scales just as well as the CD algorithm with respect to the number of transactions, and as well as IDD with respect to increasing candidate set size.
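For reference, a minimal single-process Apriori sketch (level-wise candidate generation and support counting) is given below; the IDD and HD formulations described above parallelize exactly these steps by partitioning candidates and transactions across processors, which this toy code does not attempt. The dataset is made up.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return every frequent itemset (frozenset) with its support count."""
    transactions = [frozenset(t) for t in transactions]
    level = {frozenset([i]) for t in transactions for i in t}   # 1-itemset candidates
    frequent, k = {}, 1
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        k += 1
        # Join frequent (k-1)-itemsets into k-itemset candidates (no subset pruning here).
        level = {a | b for a, b in combinations(survivors, 2) if len(a | b) == k}
    return frequent

T = [["bread", "milk"], ["bread", "beer", "eggs"], ["milk", "beer"], ["bread", "milk", "beer"]]
print(apriori(T, min_support=2))
```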