Paid full text: 4448 articles
Free: 240 articles
Free (domestic): 2 articles
Electrical Engineering: 55 articles
General: 4 articles
Chemical Industry: 1162 articles
Metalworking: 64 articles
Machinery and Instruments: 79 articles
Building Science: 159 articles
Mining Engineering: 18 articles
Energy and Power: 177 articles
Light Industry: 791 articles
Water Resources Engineering: 24 articles
Petroleum and Natural Gas: 16 articles
Radio and Electronics: 212 articles
General Industrial Technology: 719 articles
Metallurgical Industry: 202 articles
Atomic Energy Technology: 21 articles
Automation Technology: 987 articles
2024: 14 articles
2023: 49 articles
2022: 201 articles
2021: 199 articles
2020: 124 articles
2019: 147 articles
2018: 186 articles
2017: 176 articles
2016: 179 articles
2015: 154 articles
2014: 193 articles
2013: 339 articles
2012: 329 articles
2011: 361 articles
2010: 283 articles
2009: 241 articles
2008: 253 articles
2007: 211 articles
2006: 163 articles
2005: 147 articles
2004: 105 articles
2003: 95 articles
2002: 93 articles
2001: 44 articles
2000: 39 articles
1999: 35 articles
1998: 44 articles
1997: 32 articles
1996: 25 articles
1995: 45 articles
1994: 18 articles
1993: 17 articles
1992: 12 articles
1991: 17 articles
1990: 13 articles
1989: 13 articles
1988: 9 articles
1987: 6 articles
1986: 5 articles
1985: 7 articles
1984: 11 articles
1983: 8 articles
1981: 3 articles
1980: 3 articles
1979: 7 articles
1978: 4 articles
1977: 5 articles
1976: 4 articles
1975: 8 articles
1974: 3 articles
A total of 4690 results were found (search time: 15 ms).
61.
62.
63.
Over the last 30 years, several dynamic memory managers (DMMs) have been proposed, including first fit, best fit, segregated fit and buddy systems. Since each DMM differs in performance, memory usage and energy consumption, software engineers often face difficult choices when selecting the most suitable approach for their applications. This issue has a special impact in the field of portable consumer embedded systems, which must execute a limited set of multimedia applications (e.g., 3D games, video players, signal processing software, etc.) demanding high performance and extensive memory usage at low energy consumption. Recently, we developed a novel methodology based on genetic programming to automatically design custom DMMs, optimizing performance, memory usage and energy consumption. Although this process is automatic and faster than state-of-the-art optimizations, it demands intensive computation, resulting in a time-consuming process. Parallel processing is therefore very useful, both to explore more solutions in the same time and to implement new algorithms. In this paper we present a novel parallel evolutionary algorithm for DMM optimization in embedded systems, based on the Discrete Event System Specification (DEVS) formalism over a Service Oriented Architecture (SOA) framework. Parallelism significantly improves the performance of the sequential exploration algorithm. On the one hand, when the number of generations is the same in both approaches, our parallel optimization framework reaches a speed-up of 86.40× compared with other state-of-the-art approaches. On the other hand, it improves the global quality (i.e., level of performance, low memory usage and low energy consumption) of the final DMM by 36.36% with respect to two well-known general-purpose DMMs and two state-of-the-art optimization methodologies.
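As a rough illustration of the kind of population-based, multi-objective search described above (not the authors' DEVS/SOA framework), the following Python sketch evaluates candidate DMM configurations in parallel and folds performance, memory usage and energy into a single weighted fitness. The candidate encoding, the simulate_dmm cost model and the weights are all hypothetical placeholders.

```python
# Hypothetical sketch: parallel fitness evaluation for candidate dynamic
# memory managers (DMMs). The encoding and cost model are illustrative only;
# the paper's actual system uses genetic programming over a DEVS/SOA framework.
import random
from concurrent.futures import ProcessPoolExecutor

POLICIES = ["first_fit", "best_fit", "segregated_fit", "buddy"]

def random_candidate():
    # A candidate DMM: an allocation policy plus a few tuning knobs.
    return {"policy": random.choice(POLICIES),
            "num_bins": random.choice([4, 8, 16, 32]),
            "coalesce": random.random() < 0.5}

def simulate_dmm(candidate):
    # Placeholder cost model standing in for a real trace-driven simulation.
    # Returns (execution_time, memory_used, energy); lower is better for all.
    base = {"first_fit": 1.0, "best_fit": 1.2,
            "segregated_fit": 0.8, "buddy": 0.9}[candidate["policy"]]
    time_cost = base / (1 + 0.02 * candidate["num_bins"])
    mem_cost = base * (1.3 if candidate["policy"] == "buddy" else 1.0)
    energy_cost = time_cost * (0.9 if candidate["coalesce"] else 1.0)
    return time_cost, mem_cost, energy_cost

def fitness(candidate, w=(0.4, 0.3, 0.3)):
    t, m, e = simulate_dmm(candidate)
    return w[0] * t + w[1] * m + w[2] * e   # weighted sum, lower is better

def evolve(pop_size=32, generations=20):
    population = [random_candidate() for _ in range(pop_size)]
    with ProcessPoolExecutor() as pool:
        for _ in range(generations):
            scores = list(pool.map(fitness, population))   # parallel evaluation
            ranked = [c for _, c in sorted(zip(scores, population),
                                           key=lambda p: p[0])]
            parents = ranked[: pop_size // 2]
            # Mutation-only reproduction keeps the sketch short.
            children = [dict(random.choice(parents),
                             num_bins=random.choice([4, 8, 16, 32]))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
    return min(population, key=fitness)

if __name__ == "__main__":
    print(evolve())
```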
64.
Real-time hierarchical stereo Visual SLAM in large-scale environments   (Total citations: 1; self-citations: 0; citations by others: 1)
In this paper we present a new real-time hierarchical (topological/metric) Visual SLAM system focused on the localization of a vehicle in large-scale outdoor urban environments. It is based exclusively on the visual information provided by a cheap wide-angle stereo camera. Our approach divides the whole map into local sub-maps identified by so-called fingerprints (vehicle poses). At the sub-map level (low-level SLAM), 3D sequential mapping of natural landmarks and the robot location/orientation are obtained using a top-down Bayesian method to model the dynamic behavior. A higher topological level (high-level SLAM) based on fingerprints has been added to reduce the globally accumulated drift while keeping real-time constraints. Using this hierarchical strategy, we maintain the local consistency of the metric sub-maps by means of the EKF, and global consistency by using the topological map and the MultiLevel Relaxation (MLR) algorithm. Experimental results for different large-scale outdoor environments are presented, showing an almost constant processing time.
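A minimal sketch of the hierarchical idea follows: local metric sub-maps anchored at fingerprint poses, plus a global topological graph of constraints between them. The data structures, thresholds and the trivial pose update below are hypothetical stand-ins for the paper's EKF sub-maps and MultiLevel Relaxation back end.

```python
# Hypothetical sketch of a hierarchical (topological/metric) SLAM map:
# local sub-maps anchored at "fingerprint" poses plus a global pose graph.
# The real system maintains each sub-map with an EKF and optimizes the
# topological level with MultiLevel Relaxation; both are stubbed out here.
import math

def compose(a, b):
    # Compose two planar poses: global = a composed with b.
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + bx * math.cos(ath) - by * math.sin(ath),
            ay + bx * math.sin(ath) + by * math.cos(ath),
            ath + bth)

class SubMap:
    def __init__(self, fingerprint_pose):
        self.origin = fingerprint_pose      # (x, y, theta) of the fingerprint
        self.landmarks = []                 # landmarks in local coordinates
        self.robot_local = (0.0, 0.0, 0.0)  # robot pose relative to the origin

    def update(self, odometry, observations):
        # Stand-in for the EKF predict/update cycle of the low-level SLAM.
        x, y, th = self.robot_local
        dx, dy, dth = odometry
        self.robot_local = (x + dx, y + dy, th + dth)
        self.landmarks.extend(observations)

class HierarchicalMap:
    MAX_LOCAL_DIST = 20.0   # metres travelled before opening a new sub-map

    def __init__(self):
        self.submaps = [SubMap((0.0, 0.0, 0.0))]
        self.edges = []       # topological constraints between fingerprints

    def step(self, odometry, observations):
        current = self.submaps[-1]
        current.update(odometry, observations)
        x, y, _ = current.robot_local
        if math.hypot(x, y) > self.MAX_LOCAL_DIST:
            # Close the sub-map, record a topological edge, open a new one.
            new_origin = compose(current.origin, current.robot_local)
            self.edges.append((len(self.submaps) - 1, len(self.submaps),
                               current.robot_local))
            self.submaps.append(SubMap(new_origin))

hmap = HierarchicalMap()
hmap.step((1.0, 0.0, 0.0), [(5.0, 2.0, 0.5)])
print(len(hmap.submaps), len(hmap.edges))
```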
65.
Hop is a multi-tier programming language in which the behavior of interacting servers and clients is expressed by a single program. Hop adheres to the standard web programming style, where servers elaborate HTML pages containing JavaScript code. This JavaScript code responds locally to the user's interactions but also (following the so-called Ajax style) requests services from remote servers. These services bring back new HTML fragments containing additional JavaScript code that replaces or modifies the state of the client. This paper presents a continuation-based denotational semantics for a sequential subset of Hop. Though restricted to a single server and a single client, this semantics takes into account the key feature of Hop, namely that the server elaborates client code to be run in the client's browser. This new client code dynamically requests services from the server which, again, elaborates new client code to be run in the client's browser. The semantics details the programming model advocated by Hop and provides a sound basis for future studies such as the security of web applications and web continuations.
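The general flavour of a continuation-based semantics can be illustrated with a toy continuation-passing evaluator for arithmetic expressions extended with a hypothetical "service" form that models asking a server for a value. This is only an analogy, not Hop's actual semantic equations or its syntax.

```python
# Toy continuation-passing evaluator, only to illustrate the general flavour
# of a continuation-based semantics; it is not Hop's denotational semantics.
# Expressions: ("num", n), ("add", e1, e2), ("service", name, e), where the
# hypothetical "service" form models requesting a value from the server.

def eval_cps(expr, env, k):
    tag = expr[0]
    if tag == "num":
        return k(expr[1])
    if tag == "add":
        # Evaluate the left operand, then the right, then resume with the sum.
        return eval_cps(expr[1], env,
                        lambda v1: eval_cps(expr[2], env,
                                            lambda v2: k(v1 + v2)))
    if tag == "service":
        # The continuation k captures "the rest of the client computation":
        # a server round-trip is modelled as calling k with the reply.
        argument = eval_cps(expr[2], env, lambda v: v)
        reply = env["services"][expr[1]](argument)
        return k(reply)
    raise ValueError(f"unknown form {tag!r}")

env = {"services": {"double": lambda v: 2 * v}}
prog = ("add", ("num", 1), ("service", "double", ("num", 20)))
print(eval_cps(prog, env, lambda v: v))   # -> 41
```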
66.
Although directory-based cache-coherence protocols are the best choice when designing chip multiprocessors with tens of cores on chip, the memory overhead introduced by the directory structure may not scale gracefully with the number of cores. Many approaches aimed at improving the scalability of directories have been proposed. However, they do not achieve perfect scalability and usually reduce the directory memory overhead by compressing coherence information, which in turn results in extra, unnecessary coherence messages and, therefore, wasted energy and some performance degradation. In this work, we present a distributed directory organization based on duplicate tags for tiled CMP architectures whose size is independent of the number of tiles in the system up to a certain number of tiles. We demonstrate that this number of tiles corresponds to the number of sets in the private caches. Additionally, we show that the area overhead of the proposed directory structure is 0.56% with respect to the on-chip data caches. Moreover, the proposed directory structure keeps the same information as a non-scalable full-map directory. Finally, we propose a mechanism that takes advantage of this directory organization to remove the network traffic caused by replacements. This mechanism reduces total traffic by 15% for a 16-core configuration compared to a traditional directory-based protocol.
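A back-of-the-envelope model of why a duplicate-tag directory bank can stop growing with the tile count once the number of tiles reaches the number of private-cache sets: as long as the bank-selection bits can be drawn from the private-cache set index, each bank only mirrors a fixed slice of every private cache. The parameters and the sizing formula below are an illustrative reconstruction, not the paper's configuration or analysis.

```python
# Illustrative sizing model for a duplicate-tag directory bank in a tiled CMP.
# Parameters are hypothetical; the point is that the per-bank structure is
# constant while num_tiles <= private-cache sets, and grows again beyond that.

def duplicate_tags_per_bank(num_tiles, l1_sets, l1_ways):
    if num_tiles <= l1_sets:
        # Bank-selection bits can be taken from the L1 set-index bits, so each
        # bank mirrors l1_sets // num_tiles sets of each of the num_tiles
        # private caches: num_tiles * (l1_sets // num_tiles) * l1_ways
        # == l1_sets * l1_ways, independent of the tile count.
        return l1_sets * l1_ways
    # With more tiles than sets, any of the l1_ways blocks of a set may map to
    # this bank, so in the worst case the structure grows with the tile count.
    return num_tiles * l1_ways

L1_SETS, L1_WAYS = 128, 4   # hypothetical private-cache geometry
for tiles in (4, 16, 64, 128, 256, 1024):
    print(tiles, duplicate_tags_per_bank(tiles, L1_SETS, L1_WAYS))
```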
67.
High-performance processor designs have evolved toward architectures that integrate multiple processing cores on the same chip. As the number of cores inside a Chip MultiProcessor (CMP) increases, the interconnection network has a significant impact on both overall performance and energy consumption, as previous studies have shown. Moreover, the wires used in such an interconnect can be designed with varying latency, bandwidth and power characteristics. In this work, we show how messages can be efficiently managed in a tiled CMP, from the point of view of both performance and energy, by combining address compression with a heterogeneous interconnect. In particular, our proposal applies an address compression scheme that dynamically compresses the addresses within coherence messages, creating significant area slack. This slack is exploited for wire-latency improvement by using a heterogeneous interconnection network comprised of a small set of very-low-latency wires for critical short messages in addition to baseline wires. Detailed simulations of a 16-core CMP show that our proposal obtains average improvements of 10% in execution time and 38% in the energy-delay² product of the interconnect. Additionally, a sensitivity analysis shows that our proposal performs well when either out-of-order (OoO) cores or caches with higher latencies are considered.
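A generic sketch of dynamic address compression for coherence messages: sender and receiver keep small, identically updated tables of recently seen upper address bits, so most messages carry a short index plus the lower bits instead of the full address. The table size, field widths and replacement policy are hypothetical, and this is not necessarily the paper's exact scheme; it only illustrates where the area/wire slack for a heterogeneous interconnect could come from.

```python
# Generic dynamic address-compression sketch (not necessarily the paper's
# exact scheme): both endpoints keep a small table of recently observed
# upper address bits; a hit lets the sender transmit a short index plus the
# lower bits, freeing wire width that a heterogeneous interconnect could
# spend on a few very-low-latency wires for critical short messages.

UPPER_SHIFT = 16          # lower 16 bits always sent verbatim (hypothetical)
TABLE_SIZE = 8            # 3-bit index (hypothetical)

class CompressorEndpoint:
    def __init__(self):
        self.table = []                       # most-recent upper-bit patterns

    def compress(self, addr):
        upper, lower = addr >> UPPER_SHIFT, addr & 0xFFFF
        if upper in self.table:
            idx = self.table.index(upper)
            self._touch(upper)
            return ("hit", idx, lower)        # short message: index + low bits
        self._touch(upper)
        return ("miss", upper, lower)         # full address; table is updated

    def decompress(self, msg):
        kind, a, lower = msg
        upper = self.table[a] if kind == "hit" else a
        self._touch(upper)                    # same update keeps tables in sync
        return (upper << UPPER_SHIFT) | lower

    def _touch(self, upper):
        if upper in self.table:
            self.table.remove(upper)
        self.table.insert(0, upper)           # LRU-style replacement
        del self.table[TABLE_SIZE:]

sender, receiver = CompressorEndpoint(), CompressorEndpoint()
for addr in (0xDEAD0010, 0xDEAD0040, 0xBEEF0000, 0xDEAD0080):
    msg = sender.compress(addr)
    assert receiver.decompress(msg) == addr
    print(msg)
```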
68.
In this paper, Bayesian network (BN) and ant colony optimization (ACO) techniques are combined to find the best path through a graph representing all available itineraries for acquiring a professional competence. The combination of these methods allows us to design a dynamic learning path, useful in a rapidly changing world. One of the most important advances in this work, apart from the variable amount of pheromone, is the automatic processing of the learning graph. This processing is carried out by the learning management system and helps towards understanding the learning process as a competence-oriented itinerary instead of a stand-alone course. The amount of pheromone is calculated by comparing the results obtained in the last completed course with the minimum score required, and by feeding this into the learning tree to obtain a relative impact on the path taken by the student. A BN is used to predict the probability of success, taking historical data and student profiles into account. Usually, these profiles are defined beforehand; however, in our approach, some characteristics of these profiles, such as the level of knowledge, are classified automatically through supervised and/or unsupervised learning. Using ACO and BN, a fitness function responsible for automatically selecting the next course in the learning graph is defined. This is done by generating a path that maximizes each user's probability of success on the course. The path can therefore change to adapt itself to learners' preferences and needs, taking into account the pedagogical weight of each learning unit and the social behaviour of the system.
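The core loop can be sketched as follows: ants walk the course graph, the transition probability combines pheromone with an estimated success probability (standing in for the Bayesian network), and the pheromone deposited after a course depends on how far the obtained score exceeded the minimum required. The graph, scores and constants below are hypothetical.

```python
# Hypothetical sketch of ACO-based selection of the next course in a
# learning graph. success_probability stands in for the paper's Bayesian
# network; the graph, scores and constants are made up for illustration.
import random

GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}   # course itinerary
pheromone = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}
MIN_SCORE = 0.5

def success_probability(student, course):
    # Placeholder for the BN prediction from historical data and the profile.
    return min(1.0, 0.5 + 0.5 * student["knowledge"].get(course, 0.0))

def choose_next(student, current, alpha=1.0, beta=2.0):
    options = GRAPH[current]
    if not options:
        return None
    weights = [pheromone[(current, v)] ** alpha *
               success_probability(student, v) ** beta for v in options]
    return random.choices(options, weights=weights)[0]

def complete_course(student, course, score):
    # Deposit pheromone in proportion to how far the score exceeded the
    # minimum required, so good outcomes reinforce the edges just taken.
    delta = max(0.0, score - MIN_SCORE)
    for u in GRAPH:
        if course in GRAPH[u]:
            pheromone[(u, course)] = 0.9 * pheromone[(u, course)] + delta
    student["knowledge"][course] = score

student = {"knowledge": {"A": 0.8}}
position = "A"
while position is not None:
    nxt = choose_next(student, position)
    if nxt is not None:
        complete_course(student, nxt, random.uniform(0.4, 1.0))
    position = nxt
print(student["knowledge"], pheromone)
```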
69.
Chvátal-Gomory cuts are among the most well-known classes of cutting planes for general integer linear programs (ILPs). When the constraint multipliers are restricted to either 0 or 1/2, such cuts are known as {0, 1/2}-cuts. It has been proven by Caprara and Fischetti (Math. Program. 74:221–235, 1996) that separation of {0, 1/2}-cuts is NP-hard. In this paper, we study ways to separate {0, 1/2}-cuts effectively in practice. We propose a range of preprocessing rules to reduce the size of the separation problem. The core of the preprocessing is a Gaussian-elimination-like procedure. To separate the most violated {0, 1/2}-cut, we formulate the (reduced) problem as an integer linear program. Some simple heuristic separation routines complete the algorithmic framework. Computational experiments on benchmark instances show that combining preprocessing with exact and/or heuristic separation is a very effective way to generate strong generic cutting planes for integer linear programs and to reduce the overall computation times of state-of-the-art ILP solvers.
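For intuition, a {0, 1/2}-cut takes a subset of rows of Ax ≤ b with multiplier 1/2, rounds down the combined coefficients and right-hand side, and is violated if the current fractional solution exceeds the rounded right-hand side. The brute-force sketch below (with a made-up two-variable system) only shows what a violated cut looks like; it is exponential in the number of rows and is nothing like the paper's preprocessing plus exact/heuristic separation machinery.

```python
# Brute-force illustration of {0, 1/2}-cut separation for a tiny ILP
# Ax <= b, x integer. For each subset of rows taken with multiplier 1/2,
# the cut is floor(u^T A) x <= floor(u^T b); it is violated if the current
# fractional solution x* exceeds the rounded right-hand side.
from itertools import combinations
from math import floor

A = [[2, 2],             # example system (hypothetical numbers)
     [1, 3]]
b = [3, 3]
x_star = [0.75, 0.75]    # fractional LP solution to be separated

def zero_half_cuts(A, b, x_star, eps=1e-6):
    m, n = len(A), len(A[0])
    cuts = []
    for r in range(1, m + 1):
        for rows in combinations(range(m), r):
            u = [0.5 if i in rows else 0.0 for i in range(m)]
            coeffs = [floor(sum(u[i] * A[i][j] for i in range(m)))
                      for j in range(n)]
            rhs = floor(sum(u[i] * b[i] for i in range(m)))
            lhs = sum(c * x for c, x in zip(coeffs, x_star))
            if lhs > rhs + eps:               # cut violated by x*
                cuts.append((coeffs, rhs, lhs - rhs))
    return cuts

for coeffs, rhs, violation in zero_half_cuts(A, b, x_star):
    print(f"{coeffs} . x <= {rhs}  (violated by {violation:.2f})")
```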
70.
We present new GPU-based techniques for implementing linear digital filters for real-time audio processing. Our solution for recursive filters is the first presented in the literature. We demonstrate the relevance of these algorithms to computer graphics by synthesizing, in real time, realistic sounds of colliding objects made of different materials, such as glass, plastic, and wood. The synthesized sounds can be parameterized by the object materials, velocities, and collision angles. Despite its flexibility, our approach uses very little memory, since it essentially requires only a set of coefficients representing the impulse response of each material's sound. These features make our approach an attractive alternative to traditional CPU-based techniques that rely on playback of pre-recorded sounds.
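A CPU reference of the underlying signal model (not the GPU implementation): a material is represented by a bank of second-order recursive (IIR) filters, i.e. damped resonances, excited by an impulse whose amplitude can be tied to the collision. The mode frequencies, decay times and gains below are made-up placeholders rather than measured material data.

```python
# CPU reference sketch of recursive-filter (IIR) sound synthesis: a bank of
# damped resonators (second-order recursive filters) excited by an impulse.
# Mode frequencies/decays are made-up placeholders, not measured materials;
# the paper's contribution is running such recursive filters on the GPU.
import numpy as np

SR = 44100  # sample rate in Hz

def resonator_coeffs(freq_hz, decay_s):
    # y[n] = 2 r cos(w) y[n-1] - r^2 y[n-2] + x[n]   (second-order recursion)
    r = np.exp(-1.0 / (decay_s * SR))
    w = 2.0 * np.pi * freq_hz / SR
    return 2.0 * r * np.cos(w), -r * r

def synthesize(modes, impulse_gain, duration_s=1.0):
    n = int(duration_s * SR)
    x = np.zeros(n)
    x[0] = impulse_gain            # impulsive excitation at the contact time
    out = np.zeros(n)
    for freq, decay, gain in modes:
        a1, a2 = resonator_coeffs(freq, decay)
        y1 = y2 = 0.0
        for i in range(n):         # the inherently sequential recursion
            y = a1 * y1 + a2 * y2 + x[i]
            out[i] += gain * y
            y2, y1 = y1, y
    return out / (np.max(np.abs(out)) + 1e-12)

# Hypothetical "glass-like" modes: (frequency Hz, decay seconds, gain).
glass_modes = [(1234.0, 0.8, 1.0), (2710.0, 0.5, 0.6), (4100.0, 0.3, 0.4)]
samples = synthesize(glass_modes, impulse_gain=0.9)
print(samples[:5])
```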