A total of 2,616 results matched the query; entries 51–60 are shown below.
51.
In glueless shared-memory multiprocessors, where cache coherence is usually maintained by a directory-based protocol, the fast access to on-chip components (caches and the network router, among others) contrasts with the much slower main memory. Unfortunately, directory-based protocols need to obtain the sharing status of every memory block before coherence actions can be performed. This information has traditionally been stored in main memory, and therefore these cache coherence protocols are far from optimal. In this work, we propose two alternative designs for the last-level private cache of glueless shared-memory multiprocessors: the lightweight directory and the SGluM cache. Our proposals completely remove directory information from main memory and store it in the home node’s L2 cache, thus reducing both the number of accesses to main memory and the directory memory overhead. The main characteristics of the lightweight directory are its simplicity and the significant improvement in execution time for most applications. Its drawback, however, is that the performance of some applications can be degraded. The SGluM cache, on the other hand, offers more modest improvements in execution time for all applications by adding extra structures that cope with the cases in which the lightweight directory fails.
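As a rough illustration of the idea shared by both proposals — keeping the sharing status next to the home node's L2 entry so that a directory lookup that hits on chip avoids a main-memory access — here is a minimal Python sketch. The class and method names are invented, and none of the protocol details (coherence states, invalidations, SGluM's extra structures) are modeled.

```python
# Minimal sketch (not the paper's actual design): directory information for a
# memory block is kept with the home node's L2 entry instead of in main memory.

class HomeNodeL2:
    """Toy home-node L2 that also holds per-block sharing information."""

    def __init__(self):
        # block address -> (data, set of sharer node ids)
        self.entries = {}
        self.memory_accesses = 0  # counts slow off-chip accesses

    def lookup_directory(self, block):
        """Return the sharer set for `block`, touching memory only on a miss."""
        if block in self.entries:
            # Fast path: sharing status is on-chip, no main-memory access needed.
            return self.entries[block][1]
        # Miss: fall back to main memory (slow) and allocate an L2 entry,
        # starting with an empty sharer set.
        self.memory_accesses += 1
        data = self._read_main_memory(block)
        self.entries[block] = (data, set())
        return self.entries[block][1]

    def add_sharer(self, block, node_id):
        self.lookup_directory(block).add(node_id)

    def _read_main_memory(self, block):
        return f"data@{block:#x}"  # stand-in for a real DRAM read


if __name__ == "__main__":
    home = HomeNodeL2()
    home.add_sharer(block=0x40, node_id=1)  # first request: one memory access
    home.add_sharer(block=0x40, node_id=2)  # second request: served on-chip
    print(home.entries[0x40][1], home.memory_accesses)  # {1, 2} 1
```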
52.
53.
54.
When interval-grouped data are available, the classical Parzen–Rosenblatt kernel density estimator has to be modified to obtain a computable and useful approach in this context. The new nonparametric grouped-data estimator requires the choice of a smoothing parameter. In this paper, two different bandwidth selectors for this estimator are analyzed. A plug-in bandwidth selector is proposed and its relative rate of convergence is obtained. Additionally, a bootstrap algorithm to select the bandwidth in this framework is designed. This method is easy to implement and does not require Monte Carlo simulation. Both proposals are compared through simulations in different scenarios. When the sample size is medium or large and grouping is not heavy, both bandwidth selection methods show similar and good performance. However, when the sample size is large and grouping is heavy, the bootstrap bandwidth selector leads to better results.
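To give a concrete idea of what a grouped-data kernel estimator looks like, the following NumPy sketch places each interval's count at the interval midpoint and uses a naive rule-of-thumb bandwidth. This is an illustrative simplification, not the estimator or the plug-in/bootstrap selectors analyzed in the paper.

```python
import numpy as np

def grouped_kde(x, edges, counts, h):
    """Kernel density estimate from interval-grouped data.

    Each interval's count is placed at its midpoint (an illustrative
    simplification, not the estimator analyzed in the paper).
    """
    x = np.asarray(x, dtype=float)
    mids = 0.5 * (np.asarray(edges[:-1]) + np.asarray(edges[1:]))
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    # Gaussian kernel evaluated at (x - midpoint) / h for every interval
    u = (x[:, None] - mids[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return (k @ counts) / (n * h)

# Example: grouped sample from a standard normal
rng = np.random.default_rng(0)
sample = rng.normal(size=1000)
edges = np.linspace(-4, 4, 17)               # 16 intervals of width 0.5
counts, _ = np.histogram(sample, bins=edges)
h = 1.06 * sample.std() * len(sample) ** (-1 / 5)  # naive rule of thumb, not a plug-in/bootstrap choice
grid = np.linspace(-4, 4, 9)
print(np.round(grouped_kde(grid, edges, counts, h), 3))
```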
55.
56.
Groupware applications have special features that, if taken into account from the very beginning, can considerably improve the quality of the system. Such features concern human-computer-human interaction, i.e. a further step in the human-computer interaction field: communication, collaboration, cooperation and coordination, time, space, and awareness are issues to be considered. This paper presents a novel approach to gathering requirements for groupware applications. The proposal is based on a methodology that uses templates to gather the information regarding the different types of requirements. The requirements templates have been extended with new information to account for the specific features of groupware applications. The information gathered is managed in a CASE tool we have developed, from which general and specific diagrams are then generated automatically or semi-automatically.
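The abstract does not list the paper's actual template fields, so the sketch below is only a hypothetical Python illustration of how a generic requirement template might be extended with groupware-specific attributes (interaction type, time/space mode, awareness needs). All field names are invented for illustration and are not taken from the paper's templates or CASE tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    """Generic requirement template fields (illustrative subset)."""
    identifier: str
    description: str
    stakeholders: List[str] = field(default_factory=list)

@dataclass
class GroupwareRequirement(Requirement):
    """Extension with groupware-specific information (hypothetical field names)."""
    interaction_type: str = "collaboration"  # communication / coordination / cooperation
    time_mode: str = "synchronous"           # synchronous / asynchronous
    space_mode: str = "distributed"          # co-located / distributed
    awareness_needs: List[str] = field(default_factory=list)

req = GroupwareRequirement(
    identifier="GR-01",
    description="Users see who is editing the shared document in real time.",
    stakeholders=["editor", "reviewer"],
    awareness_needs=["presence", "activity"],
)
print(req.interaction_type, req.awareness_needs)
```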
57.
The main objective of this paper is to relieve power system engineers from the burden of the complex and time-consuming process of power system stabilizer (PSS) tuning. To achieve this goal, the paper proposes an automatic, computerized PSS tuning procedure based on an iterative process that uses a linear matrix inequality (LMI) solver to find the PSS parameters. It is shown that PSS tuning can be written as a search problem over a non-convex feasible set. The proposed algorithm solves this feasibility problem using an iterative LMI approach and a suitable initial condition, corresponding to a PSS designed for nominal operating conditions only (quite a simple task, since the required phase compensation is uniquely defined). Some knowledge about PSS tuning is also incorporated into the algorithm through the specification of bounds defining the allowable PSS parameters. The application of the proposed algorithm to a benchmark test system and the nonlinear simulation of the resulting closed-loop models demonstrate its efficiency.
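The core building block of this kind of procedure is an LMI feasibility check: for a candidate set of PSS parameters, each closed-loop state matrix A_cl should admit a Lyapunov certificate P ≻ 0 with A_clᵀP + PA_cl ≺ 0. The sketch below shows such a check with CVXPY on a toy two-state model and a single scalar gain; the mapping from real PSS parameters to closed-loop matrices, and the paper's iterative search itself, are not reproduced here.

```python
import numpy as np
import cvxpy as cp

def lyapunov_feasible(a_cl, eps=1e-6):
    """Check whether A_cl admits P > 0 with A_cl^T P + P A_cl < 0 (an LMI feasibility problem)."""
    n = a_cl.shape[0]
    p = cp.Variable((n, n), symmetric=True)
    constraints = [
        p >> eps * np.eye(n),
        a_cl.T @ p + p @ a_cl << -eps * np.eye(n),
    ]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# Toy stand-in for "PSS parameter -> closed-loop matrix": a single output-feedback
# gain k acting on a 2x2 system at two operating points. The real mapping comes
# from the power-system model and is not reproduced here.
operating_points = [
    np.array([[0.0, 1.0], [-1.0, 0.3]]),  # lightly damped, unstable open loop
    np.array([[0.0, 1.0], [-2.0, 0.1]]),  # a second loading condition
]
b = np.array([[0.0], [1.0]])
c = np.array([[0.0, 1.0]])

for k in np.linspace(0.0, 5.0, 11):
    closed = [a - k * (b @ c) for a in operating_points]
    if all(lyapunov_feasible(a_cl) for a_cl in closed):
        print(f"gain k = {k:.1f} stabilizes all operating points")
        break
```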
58.
In this paper, a hybrid intelligent morphological approach is presented for stock market forecasting. It consists of a hybrid intelligent model composed of a Modular Morphological Neural Network (MMNN) and a Modified Genetic Algorithm (MGA), which searches for the minimum number of time lags needed for a correct time series representation, as well as for the initial weights, architecture, and number of modules of the MMNN. Each element of the MGA population is trained via the back-propagation (BP) algorithm to further improve the parameters supplied by the MGA. The proposed method first chooses the best-tuned prediction model for the time series representation and then performs a behavioral statistical test in an attempt to adjust the time-phase distortions that appear in financial time series. An experimental analysis is conducted with the proposed method using four real-world time series and five well-known performance measures, demonstrating consistently better performance of this kind of morphological system.
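The sketch below illustrates only the lag-selection part of such a pipeline, with a plain genetic algorithm over binary lag masks and the validation error of a least-squares autoregression as a stand-in fitness for the BP-trained MMNN. It is a simplified illustration under those assumptions, not the paper's MGA or morphological network.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_dataset(series, lags):
    """Build (X, y) pairs using only the selected time lags."""
    max_lag = max(lags)
    X = np.column_stack([series[max_lag - l:-l] for l in lags])
    return X, series[max_lag:]

def fitness(series, lag_mask, split=0.7):
    """Validation MSE of a least-squares AR model restricted to the chosen lags."""
    lags = [l + 1 for l, on in enumerate(lag_mask) if on]
    if not lags:
        return np.inf
    X, y = make_dataset(series, lags)
    X = np.column_stack([np.ones(len(X)), X])        # intercept
    cut = int(split * len(y))
    coef, *_ = np.linalg.lstsq(X[:cut], y[:cut], rcond=None)
    resid = y[cut:] - X[cut:] @ coef
    return float(np.mean(resid ** 2))

def genetic_lag_search(series, max_lag=12, pop=20, gens=30, p_mut=0.1):
    """Toy GA over binary lag masks (illustrative, not the paper's MGA)."""
    population = rng.integers(0, 2, size=(pop, max_lag))
    for _ in range(gens):
        scores = np.array([fitness(series, ind) for ind in population])
        parents = population[np.argsort(scores)[: pop // 2]]  # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, max_lag)
            child = np.concatenate([a[:cut], b[cut:]])         # one-point crossover
            flip = rng.random(max_lag) < p_mut                 # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        population = np.vstack([parents, np.array(children)])
    best = min(population, key=lambda ind: fitness(series, ind))
    return [l + 1 for l, on in enumerate(best) if on]

# Synthetic seasonal series (period 12) plus noise
t = np.arange(400)
series = np.sin(2 * np.pi * t / 12) + 0.1 * rng.normal(size=400)
print("selected lags:", genetic_lag_search(series))
```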
59.
The complexity of constraints is a major obstacle to constraint-based software verification. Automatic constraint solvers are fundamentally incomplete: input constraints often build on some undecidable theory or on a theory the solver does not support. This paper proposes and evaluates several randomized solvers to address this issue. We compared the effectiveness of a symbolic solver (CVC3), a random solver, two heuristic search solvers, and seven hybrid solvers (i.e., mixes of random, symbolic, and heuristic solvers). We evaluated the solvers on a benchmark generated by a concolic execution of 9 subjects. The performance of each solver was measured by its precision: the fraction of constraints for which the solver finds a solution, out of all constraints for which some solver finds a solution. As expected, symbolic solving subsumes the other approaches for the 4 subjects that only generate decidable constraints. For the remaining 5 subjects, which contain undecidable constraints, the hybrid solvers achieved the highest precision. We also observed that the solvers were complementary, which suggests alternating their use across iterations of a concolic execution driver.
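A toy version of a random solver and of the precision measure might look as follows. Constraints are expressed here as plain Python predicates, which sidesteps the concolic-execution and CVC3 machinery described in the paper, and the particular constraints are invented for illustration.

```python
import random

def random_solver(constraint, n_vars, tries=20_000, lo=-100, hi=100, seed=0):
    """Try random integer assignments until the predicate holds (or give up)."""
    rng = random.Random(seed)
    for _ in range(tries):
        assignment = [rng.randint(lo, hi) for _ in range(n_vars)]
        if constraint(*assignment):
            return assignment
    return None

# Toy constraint set: some are easy for random search, some effectively are not.
constraints = [
    (2, lambda x, y: x + y == 10),                      # linear, many solutions
    (2, lambda x, y: x * x + y * y < 50),               # nonlinear but dense
    (1, lambda x: x * x == 12345 * 12345),              # needle in a haystack
    (2, lambda x, y: x > 0 and y > 0 and x * y == 91),  # sparse nonlinear
]

solved = [random_solver(c, n) is not None for n, c in constraints]
# Precision here: fraction solved out of all constraints known to be satisfiable
# (all four above are satisfiable by construction).
print(f"precision = {sum(solved)}/{len(constraints)}")
```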
60.
Searching a dataset for elements that are similar to a given query element is a core problem in applications that manage complex data, and it has been aided by metric access methods (MAMs). A growing number of applications require indices that can be built quickly and rebuilt repeatedly, while also providing faster responses to similarity queries. The increase in main memory capacity and its falling cost also motivate the use of memory-based MAMs. In this paper, we propose the Onion-tree, a new and robust dynamic memory-based MAM that slices the metric space into disjoint subspaces to provide quick indexing of complex data. It introduces three major characteristics: (i) a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique that can change the leaf-node pivots during insertion operations; and (iii) extended range and k-NN query algorithms that support the new partitioning method, including a new visit order of the subspaces in k-NN queries. Performance tests with both real-world and synthetic datasets showed that the Onion-tree is very compact. Comparisons of the Onion-tree with the MM-tree and a memory-based version of the Slim-tree showed that the Onion-tree was always the fastest to build the index. The experiments also showed that the Onion-tree significantly improved range and k-NN query processing performance and was the most efficient MAM, followed by the MM-tree, which in turn outperformed the Slim-tree in almost all the tests.
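To give a flavor of disjoint-subspace metric indexing, the sketch below partitions points into distance rings around a single pivot and prunes whole rings in range queries via the triangle inequality. It is a generic illustration under these assumptions, not the Onion-tree's actual node layout, replacement technique, or k-NN visit order.

```python
import math

class RingIndex:
    """Toy single-pivot metric index: points are sliced into disjoint distance
    rings around one pivot; range queries prune whole rings via the triangle
    inequality. (Illustrative only -- not the Onion-tree's node structure.)"""

    def __init__(self, pivot, ring_width=1.0, metric=math.dist):
        self.pivot = pivot
        self.width = ring_width
        self.metric = metric
        self.rings = {}  # ring index -> list of points

    def insert(self, point):
        d = self.metric(point, self.pivot)
        self.rings.setdefault(int(d // self.width), []).append(point)

    def range_query(self, query, radius):
        dq = self.metric(query, self.pivot)
        hits = []
        for i, points in self.rings.items():
            lo, hi = i * self.width, (i + 1) * self.width
            # Triangle inequality: any match must lie in a ring overlapping
            # [dq - radius, dq + radius]; otherwise skip the whole ring.
            if hi <= dq - radius or lo > dq + radius:
                continue
            hits.extend(p for p in points if self.metric(p, query) <= radius)
        return hits

index = RingIndex(pivot=(0.0, 0.0), ring_width=0.5)
for p in [(0.1, 0.2), (1.0, 1.0), (2.0, 0.1), (0.4, 0.4), (3.0, 3.0)]:
    index.insert(p)
print(index.range_query(query=(0.0, 0.0), radius=1.0))
```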