2,072 results found; search took 31 ms.
41.
Behavioral uncertainty of a supplier is a major challenge to a buyer operating in an e-procurement setting. Modeling suppliers' behavior from past transactions, estimating their possible future performance, and integrating this knowledge into the winner determination process can bring a new dimension to procurement process automation. We propose a state-space model to capture the uncertainty involved in long-term supplier behavior, where the states represent the performance level of a supplier. This behavioral aspect is then integrated with the winner determination process of a multi-attribute reverse auction for efficient supplier selection using parallel Markov decision processes (MDPs). We also propose an implementation framework to collect feedback on suppliers, generate an aggregate performance score, and integrate it with the winner determination process. Performance aggregation and winner determination with the help of an MDP make effective use of past performance information; in addition, the scheme updates performance information at regular intervals and alleviates the problem of maintaining a long history. We compare MDP-based selection with performance-score-based selection through a simulation experiment and observe that our scheme yields better buyer utility, selects the best suppliers, and fetches better-quality products. The benefits these attributes bring to the buyer increase the efficiency of the MDP-based selection process.
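The core idea of valuing a supplier by its long-run behavior can be illustrated with a minimal Markov model sketch: states are performance levels, and iterating the Bellman recursion estimates the discounted long-run utility of dealing with a supplier in each state. The transition matrix, rewards, and discount factor below are hypothetical, not from the paper; for simplicity this evaluates a single fixed policy (always award), rather than the full multi-attribute auction.

```python
# Policy-evaluation sketch for a supplier-performance Markov model.
# States: 0 = poor, 1 = average, 2 = good (hypothetical performance levels).
# P[s][t]: assumed probability a supplier moves from level s to level t.
P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.3, 0.6]]
R = [1.0, 3.0, 5.0]   # hypothetical immediate buyer utility at each level
gamma = 0.9           # discount factor for future transactions

V = [0.0, 0.0, 0.0]
for _ in range(500):  # iterate the Bellman recursion to (near) convergence
    V = [R[s] + gamma * sum(P[s][t] * V[t] for t in range(3))
         for s in range(3)]

# A buyer would prefer the supplier whose current state has the highest V.
print([round(v, 2) for v in V])
```

The winner determination step would then compare bids weighted by each bidder's state value rather than by the current bid alone.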
42.
This paper considers an economic lot sizing model with constant capacity, non-increasing setup costs, and a convex inventory cost function. Algorithms with computational time O(N × TD_N) have been developed for solving the model, where N is the number of planning periods and TD_N is the total demand. This study partially characterizes the optimal planning structure of the model. A new efficient algorithm with computational time O(N log N) has also been developed based on this partial optimal structure. Moreover, a computational study demonstrates that the new algorithm is efficient.
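For intuition about the dynamic-programming structure such algorithms refine, the classic uncapacitated Wagner-Whitin recursion is sketched below; the paper's capacitated model with convex inventory costs is more general, and the demands and costs here are made up for illustration.

```python
# Wagner-Whitin style O(N^2) DP for *uncapacitated* lot sizing (illustrative;
# the paper's capacitated, convex-cost model requires a more refined algorithm).
demand = [20, 50, 10, 40]   # hypothetical per-period demands
K = 90                      # setup cost per production run
h = 1.0                     # holding cost per unit per period

N = len(demand)
INF = float("inf")
f = [0.0] + [INF] * N       # f[t] = min cost to satisfy periods 1..t
for t in range(1, N + 1):
    for j in range(1, t + 1):   # last production run covers periods j..t
        hold = sum(h * (p - j) * demand[p - 1] for p in range(j, t + 1))
        f[t] = min(f[t], f[j - 1] + K + hold)

print(f[N])  # minimum total setup + holding cost → 250.0
```

Here the optimum produces for periods 1-3 in one run and period 4 in another; the O(N log N) result rests on exploiting structure in exactly this inner minimization.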
43.
Research into predicting the maximum depth of scour at grade-control structures such as sluice gates, weirs, and check dams has been mainly experimental in nature, and several investigators have proposed empirical relations for particular situations. These traditional scour prediction equations, although they offer some guidance on the likely magnitude of the maximum scour depth, are applicable only to a limited range of situations. It appears from the literature that no regression model for predicting maximum scour depth under all circumstances is currently available. This paper explores the potential of support vector machines for modeling scour from the available laboratory and field data obtained from earlier published studies. For comparison, a recently proposed empirical relation and a feed-forward back-propagation neural network model are also used. The support vector machine approach performs better than both the empirical relation and the back-propagation neural network on the laboratory data. The results also suggest an encouraging performance by support vector machines, in comparison to both the empirical relation and the neural network, in scaling up results from laboratory to field conditions for scour prediction.
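The kernel-regression idea behind such models can be sketched compactly. As a stand-in for full epsilon-insensitive SVR, the snippet below fits a Gaussian-kernel ridge regression (same kernel machinery, simpler loss) to a few made-up flow/scour pairs; the data, kernel width, and regularization are all illustrative assumptions.

```python
import math

# Gaussian-kernel ridge regression as a minimal stand-in for SVR
# (same kernel idea; the epsilon-insensitive loss of true SVR is omitted).
# xs: hypothetical normalized flow intensity; ys: hypothetical scour depth.
xs = [0.2, 0.5, 0.9]
ys = [0.10, 0.35, 0.80]
lam, gamma = 1e-3, 10.0

def k(a, b):
    return math.exp(-gamma * (a - b) ** 2)

# Solve (K + lam*I) alpha = y by Gaussian elimination with partial pivoting.
n = len(xs)
A = [[k(xs[i], xs[j]) + (lam if i == j else 0.0) for j in range(n)] + [ys[i]]
     for i in range(n)]
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    for r in range(col + 1, n):
        m = A[r][col] / A[col][col]
        for c in range(col, n + 1):
            A[r][c] -= m * A[col][c]
alpha = [0.0] * n
for r in range(n - 1, -1, -1):
    alpha[r] = (A[r][n] - sum(A[r][c] * alpha[c]
                              for c in range(r + 1, n))) / A[r][r]

def predict(x):
    return sum(alpha[j] * k(x, xs[j]) for j in range(n))

print(round(predict(0.5), 3))  # close to the observed 0.35
```

With small regularization the model nearly interpolates the training data; generalization from laboratory to field scales is exactly where the paper reports the kernel approach outperforming the empirical relations.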
44.
In this paper we formulate a least squares version of the recently proposed twin support vector machine (TSVM) for binary classification. This formulation leads to an extremely simple and fast algorithm for generating binary classifiers based on two non-parallel hyperplanes. We solve two modified primal problems of TSVM, instead of the two dual problems usually solved, and show that their solution reduces to solving just two systems of linear equations, as opposed to the two quadratic programming problems plus two systems of linear equations required by TSVM. Classification with a nonlinear kernel also leads to systems of linear equations. Our experiments on publicly available datasets indicate that the proposed least squares TSVM achieves classification accuracy comparable to TSVM with considerably less computational time. Since the linear least squares TSVM can easily handle large datasets, we further investigated its efficiency for text categorization applications. Computational results demonstrate the effectiveness of the proposed method over the linear proximal SVM on all the text corpora considered.
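The decision rule shared by TSVM and its least squares variant is easy to state: each class gets its own hyperplane, fitted to lie close to that class and far from the other, and a new point is assigned to the class of the nearer plane. The sketch below assumes hypothetical hyperplane coefficients, as if already obtained by solving the two linear systems described above.

```python
import math

# Decision rule of a (least squares) twin SVM: two non-parallel hyperplanes,
# one per class; a point is assigned to the class of the nearer plane.
# The (w, b) coefficients below are hypothetical fitted values.
planes = {
    "class +1": ((0.9, 0.1), -0.5),
    "class -1": ((0.1, -0.9), 0.4),
}

def distance(x, plane):
    # Perpendicular distance |w.x + b| / ||w|| from point x to the plane.
    (w, b) = plane
    return abs(w[0] * x[0] + w[1] * x[1] + b) / math.hypot(w[0], w[1])

def classify(x):
    return min(planes, key=lambda lbl: distance(x, planes[lbl]))

print(classify((0.6, 0.1)))  # → class +1
```

The speedup reported in the paper comes entirely from how the two (w, b) pairs are obtained: two linear solves rather than two quadratic programs.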
45.
In this digital era, with the Internet of Things (IoT) growing day by day, the use of resource-constrained devices is also increasing. Features such as low cost, low maintenance, and adaptability to hostile environments make wireless multimedia devices a natural choice among resource-constrained devices. For security, an end-user device must establish a session key with the server before transferring data. The mobile phone is one such device whose use as a wireless multimedia device has grown steadily in recent years. In 2013, Li et al. proposed an efficient scheme for wireless mobile communications and claimed it to be secure against various attacks. Recently, Shen et al. showed that the scheme of Li et al. is still vulnerable to the privileged insider attack and the stolen verifier attack, and proposed a scheme to withstand these and other attacks. However, in this paper we show that the scheme of Shen et al. still fails to preserve user anonymity and is susceptible to the session-specific temporary information attack and the replay attack. In addition, Shen et al.'s scheme requires more time due to its many operations. We therefore propose an efficient scheme that is secure against various known attacks; owing to its reduced time complexity, our scheme is a preferred choice for wireless mobile networks and hence for wireless multimedia systems.
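To make the session-key goal concrete, here is a generic nonce-based key-derivation sketch (explicitly not Li et al.'s, Shen et al.'s, or the proposed scheme): both parties hold a long-term shared secret and derive a fresh session key from exchanged nonces, so a replayed transcript yields a stale key. The secret and nonce sizes are illustrative assumptions.

```python
import hmac, hashlib, os

# Generic nonce-based session-key derivation sketch (NOT the paper's scheme).
shared_secret = b"pre-established long-term secret"   # hypothetical

def derive_session_key(secret, client_nonce, server_nonce):
    # HKDF-like keyed hash over both nonces; 32-byte output.
    return hmac.new(secret, client_nonce + server_nonce,
                    hashlib.sha256).digest()

client_nonce = os.urandom(16)
server_nonce = os.urandom(16)

key_client = derive_session_key(shared_secret, client_nonce, server_nonce)
key_server = derive_session_key(shared_secret, client_nonce, server_nonce)
assert key_client == key_server          # both ends agree on the session key

# A replayed message with an old nonce produces a different (useless) key:
old_key = derive_session_key(shared_secret, os.urandom(16), server_nonce)
print(key_client != old_key)             # → True
```

The attacks discussed in the abstract target exactly the properties this sketch hand-waves: how the long-term secret is protected, whether identities leak, and what happens when session-specific temporary values are exposed.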
46.
A malware mutation engine transforms a malicious program to create a different version of the program. Such engines are used at distribution sites, or within self-propagating malware, to create variation among the distributed copies. Program normalization removes the variety introduced by mutation engines and can thus simplify the problem of detecting variant strains. This paper introduces the "normalizer construction problem" (NCP) and formalizes a restricted form of the problem called "NCP=", which assumes a model of the engine is already known in the form of a term rewriting system. It is shown that even this restricted version of the problem is undecidable. A procedure is provided that can, in certain cases, automatically solve NCP= from the model of the engine. This procedure is analyzed in conjunction with term rewriting theory to produce a list of distinct classes of normalizer construction problems, which in turn yield a list of possible attack vectors. Three strategies are defined for approximate solutions of NCP=, together with an analysis of the risks they entail. A case study using the virus suggests the approximations may be effective in practice for countering mutated malware. R. Mathur is presently at McAfee AVERT Labs.
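A toy normalizer makes the term-rewriting framing concrete: each rule undoes one mutation pattern, rules are applied until a fixed point (a normal form), and two variants match if they normalize to the same program. The junk-code rules below are invented for illustration and model no real engine; they happen to terminate because every rule shrinks the program, whereas the paper shows the general problem is undecidable.

```python
# Toy normalizer built from a term rewriting system. Each rule removes one
# hypothetical junk-code mutation; rules are applied to a fixed point.
rules = [
    ("push eax; pop eax; ", ""),   # cancel an inserted push/pop pair
    ("nop; ", ""),                 # strip inserted no-ops
    ("xchg eax, eax; ", ""),       # strip another no-op form
]

def normalize(program):
    while True:
        reduced = program
        for lhs, rhs in rules:
            reduced = reduced.replace(lhs, rhs)
        if reduced == program:     # fixed point reached: normal form
            return program
        program = reduced

variant_a = "nop; mov eax, 1; push eax; pop eax; ret; "
variant_b = "mov eax, 1; xchg eax, eax; nop; nop; ret; "
print(normalize(variant_a) == normalize(variant_b))  # → True
```

Detection then compares normal forms instead of raw variants, which is why an attacker's "attack vectors" in the paper revolve around rule systems the normalizer cannot fully invert.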
47.
A rapid screening system for heterogeneous catalyst discovery has been developed by coupling an in-house designed and fabricated high-temperature vapor-phase pulse reactor on-line to a GC-MS. Combining gas chromatographic separation of the products with mass spectrometry allowed simultaneous identification and quantification of reaction products and substrate conversion. The system was employed to study the vapor-phase catalytic hydride transfer reduction (CHTR) of nitrobenzene, with methanol as hydrogen donor, over an MgO catalyst as a model reaction. Structural information on all the by-products formed was useful for understanding the reaction mechanism. The products obtained with the new screening technique were in good agreement with conventional bench-scale experiments. The rapid on-line screening provided an efficient methodology for optimizing reaction conditions such as catalyst loading, reaction temperature, and mole ratios; response surface methodology (RSM) was used to optimize reactant conversion and product selectivity.
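The response-surface step can be sketched in one dimension: measure conversion at a few settings, fit a quadratic surface, and read off the stationary point as the predicted optimum. The temperatures and conversions below are made-up numbers, not the paper's data.

```python
# Response-surface idea in one factor: fit a quadratic to conversion measured
# at three temperatures (hypothetical data) and locate the predicted optimum.
temps = [300.0, 350.0, 400.0]   # reaction temperature, K
conv  = [40.0, 70.0, 55.0]      # substrate conversion, %

# Exact quadratic y = b0 + b1*T + b2*T^2 through the three points,
# via Newton's divided differences.
d1 = (conv[1] - conv[0]) / (temps[1] - temps[0])
d2 = (conv[2] - conv[1]) / (temps[2] - temps[1])
b2 = (d2 - d1) / (temps[2] - temps[0])
b1 = d1 - b2 * (temps[0] + temps[1])

T_opt = -b1 / (2 * b2)          # vertex of the fitted parabola
print(round(T_opt, 1))          # → 358.3
```

A real RSM design would use more factors (loading, temperature, mole ratio), replicate runs, and a least-squares fit, but the vertex-of-a-fitted-surface logic is the same.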
48.
49.
In component-based development, software systems are built by assembling components already developed and prepared for integration. Complexity, reusability, dependability, and maintainability are the key aspects for estimating component quality, and the quality of an individual component influences the quality of the overall system. There is therefore a strong need to select the best-quality components, from both functional and nonfunctional standpoints. The present paper provides a critical analysis of metrics for various quality aspects of components and component-based systems, covering four main quality factors: complexity, dependency, reusability, and maintainability. A systematic study was applied to find as much literature as possible; a total of 49 papers were found suitable under the defined search criteria. The analysis provided in this paper has a distinct objective, as we focus on the efficiency and practical applicability of the approaches proposed in the selected papers. Each paper is evaluated against key parameters, viz. metrics definition, implementation technique, validation, usability, data source, comparative analysis, practicability, and extendibility. The paper critically examines the various quality aspects and their metrics for component-based systems; in some papers, the authors have also compared their results with other techniques. For characteristics like complexity and dependency, most of the proposed metrics are analytical. Soft computing and evolutionary approaches are either unused or little explored so far for these aspects, which may be a future concern for researchers; in addition, hybrid approaches such as neuro-fuzzy and neuro-genetic may also be examined for evaluating these aspects. However, concluding that one particular technique is better than the others may not be appropriate: a technique may excel for one characteristic on a given set of inputs and dataset, yet not on different inputs. The intention of the proposed work is to give a score to each metric proposed by the researchers based on the selected parameters, and certainly not to criticize any research contribution by the authors. Copyright © 2012 John Wiley & Sons, Ltd.
50.
In this paper, we propose a decentralized parallel computation model for global optimization using interval analysis. The model adapts to any number of processors, and the workload is automatically and evenly distributed among all processors by alternating message passing. The problems received by each processor are processed according to their local dominance properties, which avoids unnecessary interval evaluations. Further, the problem is treated as a whole at the start of the computation, so no initial decomposition scheme is required. Numerical experiments indicate that the model works well and is stable with different numbers of parallel processors, distributes the load evenly among the processors, and provides an impressive speedup, especially when the problem is time-consuming to solve.
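The underlying interval method is a branch-and-bound over boxes: interval arithmetic gives a guaranteed lower bound on f over each box, and any box whose lower bound exceeds the best value found so far is dominated and discarded. The serial sketch below (the paper distributes this work list across processors) uses a hypothetical one-dimensional objective f(x) = x^2 - 2x.

```python
# Minimal serial interval branch-and-bound for global minimization; the
# paper's model distributes the work list and dominance tests in parallel.
def bounds(lo, hi):
    """Guaranteed interval bounds of f(x) = x*x - 2*x over [lo, hi]."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    return sq_lo - 2.0 * hi, sq_hi - 2.0 * lo

best = float("inf")              # best verified objective value so far
work = [(-4.0, 4.0)]             # initial search box
while work:
    lo, hi = work.pop()
    f_lo, _ = bounds(lo, hi)
    if f_lo > best:              # dominance test: box cannot hold the minimum
        continue
    mid = (lo + hi) / 2.0
    best = min(best, mid * mid - 2.0 * mid)   # sample point tightens bound
    if hi - lo > 1e-6:           # bisect and keep searching
        work.extend([(lo, mid), (mid, hi)])

print(round(best, 4))            # true minimum is f(1) = -1
```

In the paper's decentralized model, each processor runs this loop on its own share of boxes and exchanges its local best bound by message passing, so dominance pruning stays effective without a central coordinator.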
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号