Full text (paid): 4,413 articles   Free: 178 articles   Free (domestic): 7 articles
By subject: Electrical Engineering 63; General 9; Chemical Industry 594; Metalworking 71; Machinery & Instrumentation 83; Building Science 306; Mining Engineering 15; Energy & Power 143; Light Industry 410; Hydraulic Engineering 58; Petroleum & Natural Gas 16; Radio & Electronics 378; General Industrial Technology 737; Metallurgical Industry 688; Atomic Energy Technology 24; Automation Technology 1,003
By year: 2023: 18; 2022: 32; 2021: 71; 2020: 54; 2019: 76; 2018: 108; 2017: 126; 2016: 110; 2015: 104; 2014: 127; 2013: 302; 2012: 247; 2011: 315; 2010: 237; 2009: 209; 2008: 279; 2007: 240; 2006: 229; 2005: 178; 2004: 132; 2003: 142; 2002: 140; 2001: 89; 2000: 88; 1999: 86; 1998: 125; 1997: 85; 1996: 85; 1995: 65; 1994: 55; 1993: 37; 1992: 43; 1991: 26; 1990: 33; 1989: 28; 1988: 27; 1987: 25; 1986: 17; 1985: 22; 1984: 26; 1983: 12; 1982: 15; 1981: 11; 1980: 13; 1979: 22; 1978: 11; 1977: 9; 1976: 22; 1974: 5; 1972: 7
A total of 4,598 query results were found (search time: 628 ms).
131.
A new semantics in terms of mean field equations is presented for WSCCS (Weighted Synchronous Calculus of Communicating Systems). The semantics captures the average behaviour of the system over time without computing the entire state space, thereby avoiding the state-space explosion problem and allowing easy investigation of models with large numbers of components. The new semantics is shown to be equivalent to the standard Discrete Time Markov Chain semantics of WSCCS as the number of processes tends to infinity. The method of deriving the semantics is illustrated with examples drawn from biology and from computing.
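As a rough illustration of the mean-field idea only (not the paper's WSCCS construction), the sketch below tracks the expected fraction of identical processes in each local state rather than enumerating the exponentially large joint state space; the two-state process and its transition probabilities are invented for the example.

```python
import numpy as np

# Hypothetical population of identical two-state processes (states: idle, busy).
# Instead of the 2**N joint state space, track only the expected occupancy of each state.
P = np.array([[0.9, 0.1],   # idle -> idle, idle -> busy
              [0.3, 0.7]])  # busy -> idle, busy -> busy

occupancy = np.array([1.0, 0.0])   # all processes start idle
for _ in range(50):
    occupancy = occupancy @ P      # one small vector update per step
    # For interacting processes, P itself would depend on the current occupancy.

print("long-run fractions (idle, busy):", occupancy)
```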
132.
We are interested in information management for decision support applications, especially those that monitor distributed, heterogeneous databases to assess time-critical decisions. Users of such applications can easily be overwhelmed with data that may change rapidly, may conflict, and may be redundant. Developers are faced with a dilemma: either filter out most information and risk excluding critical items, or gather possibly irrelevant or redundant information and overwhelm the decision maker. This paper describes a solution to this dilemma called decision-centric information monitoring (DCIM). First, we observe that decision support systems should monitor only information that can potentially change some decision. We present an architecture for DCIM that meets the requirements implied by this observation. We describe techniques for identifying the highest-value information to monitor and techniques for monitoring that information despite the autonomy, distribution, and heterogeneity of data sources. Finally, we present lessons learned from building LOOKOUT, which is to our knowledge the first implementation of a top-to-bottom system performing decision-centric information monitoring.
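The observation that only decision-changing information is worth monitoring can be caricatured in a few lines; the shipping-route decision, the utilities, and the value range below are hypothetical and are not drawn from LOOKOUT.

```python
# Hypothetical decision: choose a shipping route; utility depends on an uncertain delay.
# An information source is worth monitoring only if some value in its possible range
# would change which action is currently best.

def best_action(delay_hours, actions):
    return max(actions, key=lambda a: actions[a](delay_hours))

actions = {
    "route_A": lambda d: 100 - 2.0 * d,   # fast but delay-sensitive
    "route_B": lambda d: 80 - 0.5 * d,    # slower but robust to delay
}

current_estimate = 5.0
possible_range = [0.0, 30.0]              # extremes the monitored source might report
                                          # (checking endpoints suffices for linear utilities)

baseline = best_action(current_estimate, actions)
decision_relevant = any(best_action(d, actions) != baseline for d in possible_range)
print("monitor this source?", decision_relevant)
```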
133.
Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward-constrained regression (FCR) manner. The proposed algorithm selects significant kernels one at a time, while the leave-one-out (LOO) test score is minimized subject to a simple positivity constraint in each forward stage. The model parameter estimation in each forward stage is simply the solution of a jackknife parameter estimator for a single parameter, subject to the same positivity constraint check. For each selected kernel, the associated kernel width is updated via the Gauss-Newton method with the model parameter estimate fixed. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate the efficacy of the proposed approach.
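A minimal sketch of the general idea, assuming a toy 1-D data set: the full Parzen window estimate is the target, and kernels are added greedily one at a time. The uniform reweighting and plain squared-error score used here stand in for the paper's constrained regression and leave-one-out criterion.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(1, 1.0, 100)])  # toy 1-D data
h = 0.4                                                                  # kernel width

def gauss(centers, points, width):
    d = points[:, None] - centers[None, :]
    return np.exp(-0.5 * (d / width) ** 2) / (width * np.sqrt(2 * np.pi))

# Target: the full Parzen window estimate evaluated on the training points.
target = gauss(x, x, h).mean(axis=1)

# Greedy forward selection: add the single kernel that most reduces the squared error,
# refitting uniform weights over the selected kernels at each stage.
selected = []
for _ in range(6):
    best = None
    for j in range(len(x)):
        if j in selected:
            continue
        trial = selected + [j]
        estimate = gauss(x[trial], x, h).mean(axis=1)
        err = np.sum((target - estimate) ** 2)
        if best is None or err < best[0]:
            best = (err, j)
    selected.append(best[1])

print("sparse estimator uses", len(selected), "of", len(x), "kernels")
```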
134.
Large datasets typically contain coarse features composed of finer sub-features. Even if the shapes of the small structures are evident in a 3D display, the aggregate shapes they suggest may not be easily inferred. From previous studies in shape perception, it has not been clear whether physically-based illumination confers any advantage over local illumination for understanding scenes that arise in the visualization of large data sets containing features at two distinct scales. In this paper we show that physically-based illumination can improve perception for some static scenes of complex 3D geometry from flow fields. We perform human-subjects experiments to quantify the effect of physically-based illumination on participant performance for two tasks: selecting the closer of two streamtubes from a field of tubes, and identifying the shape of the domain of a flow field over different densities of tubes. We find that physically-based illumination influences participant performance as strongly as perspective projection, suggesting that physically-based illumination is indeed a strong cue to the layout of complex scenes. We also find that increasing the density of tubes for the shape-identification task improved participant performance under physically-based illumination but not under the traditional hardware-accelerated illumination model.
135.
Interactive history tools, ranging from basic undo and redo to branching timelines of user actions, facilitate iterative forms of interaction. In this paper, we investigate the design of history mechanisms for information visualization. We present a design space analysis of both architectural and interface issues, identifying design decisions and associated trade-offs. Based on this analysis, we contribute a design study of graphical history tools for Tableau, a database visualization system. These tools record and visualize interaction histories, support data analysis and communication of findings, and contribute novel mechanisms for presenting, managing, and exporting histories. Furthermore, we have analyzed aggregated collections of history sessions to evaluate Tableau usage. We describe additional tools for analyzing users' history logs and how they have been applied to study usage patterns in Tableau.
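One architectural decision such a design-space analysis weighs is how undo interacts with new actions; the sketch below shows a branching history in which acting after an undo creates a new branch rather than discarding the redo chain. The class and method names are illustrative, not Tableau's.

```python
class HistoryNode:
    """One recorded user action; children represent branches taken from this state."""
    def __init__(self, action, parent=None):
        self.action = action
        self.parent = parent
        self.children = []

class BranchingHistory:
    def __init__(self):
        self.root = HistoryNode("<initial state>")
        self.current = self.root

    def record(self, action):
        node = HistoryNode(action, parent=self.current)
        self.current.children.append(node)   # becomes a new branch after an undo
        self.current = node

    def undo(self):
        if self.current.parent is not None:
            self.current = self.current.parent

    def path(self):
        node, actions = self.current, []
        while node is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

h = BranchingHistory()
h.record("add filter: year > 2000")
h.record("switch to bar chart")
h.undo()
h.record("switch to scatter plot")   # branches off instead of erasing the bar-chart step
print(h.path())
```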
136.
Network data arises frequently in a wide variety of fields, and node-link diagrams are a very natural and intuitive representation of such data. In order for a node-link diagram to be effective, the nodes must be arranged well on the screen. While many graph layout algorithms exist for this purpose, they often have limitations such as high computational complexity or node colocation. This paper proposes a new approach to graph layout through the use of space-filling curves, which is very fast and guarantees that no nodes are colocated. The resulting layout is also aesthetic and satisfies several criteria for graph layout effectiveness.
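A rough sketch of the basic mechanism under simplifying assumptions: each node receives a distinct index along a space-filling curve (a Z-order/Morton curve here for brevity), and the index is decoded to 2-D coordinates, so distinct indices guarantee that no two nodes are colocated. A real layout would order the nodes by graph structure rather than by their position in the input list.

```python
def morton_decode(index, bits=8):
    """Map a 1-D index to (x, y) along a Z-order (Morton) space-filling curve."""
    x = y = 0
    for b in range(bits):
        x |= ((index >> (2 * b)) & 1) << b
        y |= ((index >> (2 * b + 1)) & 1) << b
    return x, y

def layout(nodes):
    # One distinct curve position per node; spreading the indices spreads the nodes.
    spacing = (2 ** 16) // max(len(nodes), 1)
    return {n: morton_decode(i * spacing) for i, n in enumerate(nodes)}

print(layout(["a", "b", "c", "d", "e"]))
```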
137.
138.
139.
In many applications, volumetric data sets are examined by displaying isosurfaces: surfaces where the data, or some function of the data, takes on a given value. Interactive applications typically use local lighting models to render such surfaces. This work introduces a method to precompute or lazily compute global illumination to improve interactive isosurface renderings. The precomputed illumination resides in a separate volume and includes direct light, shadows, and interreflections. Using this volume, interactive globally illuminated renderings of isosurfaces become feasible while still allowing dynamic manipulation of lighting, viewpoint, and isovalue.
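A hedged sketch of how a renderer might consume such a precomputed illumination volume: sample it with trilinear interpolation at an isosurface hit point and modulate a local diffuse term. The array shapes and the combination rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def trilinear(volume, p):
    """Trilinearly interpolate a scalar volume at continuous point p = (x, y, z)."""
    i0 = np.clip(np.floor(p).astype(int), 0, np.array(volume.shape) - 2)
    f = p - i0
    c = volume[i0[0]:i0[0] + 2, i0[1]:i0[1] + 2, i0[2]:i0[2] + 2]
    for axis in range(3):                       # collapse one axis per step
        c = c[0] * (1 - f[axis]) + c[1] * f[axis]
    return c

def shade(hit_point, normal, light_dir, illum_volume, albedo=0.8):
    # Local diffuse term modulated by the precomputed global-illumination factor.
    diffuse = max(np.dot(normal, light_dir), 0.0)
    gi = trilinear(illum_volume, hit_point)
    return albedo * diffuse * gi

illum = np.random.default_rng(1).uniform(0.2, 1.0, (32, 32, 32))   # stand-in volume
print(shade(np.array([10.3, 5.7, 20.1]),
            np.array([0.0, 1.0, 0.0]),
            np.array([0.0, 0.7071, 0.7071]),
            illum))
```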
140.
Many important science and engineering applications, such as regulating the temperature distribution over a semiconductor wafer and controlling the noise from a photocopy machine, require interpreting distributed data and designing decentralized controllers for spatially distributed systems. Developing effective computational techniques for representing and reasoning about these systems, which are usually modeled with partial differential equations (PDEs), is one of the major challenge problems for qualitative and spatial reasoning research.

This paper introduces a novel approach to decentralized control design, influence-based model decomposition, and applies it in the context of thermal regulation. Influence-based model decomposition uses a decentralized model, called an influence graph, as a key data abstraction representing the influences of controls on distributed physical fields. It serves as the basis for novel algorithms for control placement and parameter design for distributed systems with large numbers of coupled variables. These algorithms exploit physical knowledge of locality, linear superposability, and continuity, encapsulated in influence graphs representing the dependencies of field nodes on control nodes. The control placement design algorithms use influence graphs to decompose a problem domain so as to decouple the resulting regions. The decentralized control parameter optimization algorithms use influence graphs to efficiently evaluate thermal fields and to explicitly trade off computation, communication, and control quality. By leveraging the physical knowledge encapsulated in influence graphs, these control design algorithms are more efficient than standard techniques and produce designs explainable in terms of problem structure.
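The superposition idea behind an influence graph can be sketched in a few lines: for a linear field, precompute each control's unit response over the field nodes once, then evaluate any combination of control settings as a weighted sum without re-solving the PDE. The 1-D rod and Gaussian unit responses below are made-up stand-ins for a real thermal model.

```python
import numpy as np

# Field nodes along a 1-D rod; three heater controls at fixed locations.
field_x = np.linspace(0.0, 1.0, 50)
control_x = np.array([0.2, 0.5, 0.8])

# Influence graph as a matrix: column j holds the field response to a unit input on
# control j (a made-up Gaussian kernel here; a real system would derive it from the
# PDE once, offline).
influence = np.exp(-((field_x[:, None] - control_x[None, :]) ** 2) / 0.01)

def field(control_inputs):
    # Linear superposition: no PDE solve needed per evaluation.
    return influence @ control_inputs

print(field(np.array([1.0, 0.5, 2.0])).round(2))
```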
