  Full text (subscription)   1060 articles
  Free   42 articles
  Free (domestic)   2 articles
Electrical engineering   32 articles
General   8 articles
Chemical industry   320 articles
Metalworking   14 articles
Machinery and instrumentation   28 articles
Building science   59 articles
Mining engineering   6 articles
Energy and power   21 articles
Light industry   77 articles
Water conservancy   6 articles
Radio and electronics   92 articles
General industrial technology   160 articles
Metallurgical industry   66 articles
Atomic energy technology   5 articles
Automation technology   210 articles
  2023   14 articles
  2022   15 articles
  2021   27 articles
  2020   22 articles
  2019   20 articles
  2018   24 articles
  2017   16 articles
  2016   29 articles
  2015   39 articles
  2014   36 articles
  2013   49 articles
  2012   61 articles
  2011   70 articles
  2010   65 articles
  2009   53 articles
  2008   59 articles
  2007   50 articles
  2006   60 articles
  2005   39 articles
  2004   28 articles
  2003   23 articles
  2002   28 articles
  2001   17 articles
  2000   20 articles
  1999   15 articles
  1998   20 articles
  1997   24 articles
  1996   16 articles
  1995   24 articles
  1994   20 articles
  1993   18 articles
  1992   12 articles
  1991   9 articles
  1990   6 articles
  1989   10 articles
  1988   5 articles
  1987   5 articles
  1985   5 articles
  1983   6 articles
  1982   5 articles
  1981   7 articles
  1980   2 articles
  1979   3 articles
  1978   2 articles
  1977   4 articles
  1976   3 articles
  1975   2 articles
  1974   3 articles
  1972   2 articles
  1969   2 articles
Sorted by: 1,104 results found (search time: 937 ms)
31.
Rule cubes for causal investigations   (Total citations: 1; self-citations: 1; citations by others: 0)
With the complexity of modern vehicles tremendously increasing, quality engineers play a key role within today’s automotive industry. Field data analysis supports corrective actions in development, production and after sales support. We decompose the requirements and show that association rules, being a popular approach to generating explanative models, still exhibit shortcomings. Interactive rule cubes, which have been proposed recently, are a promising alternative. We extend this work by introducing a way of intuitively visualizing and meaningfully ranking them. Moreover, we present methods to interactively factorize a problem and validate hypotheses by ranking patterns based on expectations, and by browsing a cube-based network of related influences. All this is currently in use as an interactive tool for warranty data analysis in the automotive industry. A real-world case study shows how engineers successfully use it in identifying root causes of quality issues.
Axel Blumenstock
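The abstract above contrasts rule cubes with classical association rules for warranty analysis. As background, the following minimal sketch computes the standard support, confidence, and lift metrics of an association rule; the warranty records and attribute names (`opt_A`, `fail_X`) are purely hypothetical and are not from the paper's tool.

```python
# Hypothetical warranty records: each set holds the attributes of one claim
# (vehicle option codes plus an observed failure). Data is illustrative only.
records = [
    {"opt_A", "opt_B", "fail_X"},
    {"opt_A", "fail_X"},
    {"opt_B"},
    {"opt_A", "opt_B", "fail_X"},
    {"opt_B", "fail_X"},
]

def support(itemset, records):
    """Fraction of records containing every item of `itemset`."""
    return sum(itemset <= r for r in records) / len(records)

def rule_metrics(antecedent, consequent, records):
    """Support, confidence and lift of the rule antecedent -> consequent."""
    s_ab = support(antecedent | consequent, records)
    s_a = support(antecedent, records)
    s_b = support(consequent, records)
    confidence = s_ab / s_a
    lift = confidence / s_b          # lift > 1 hints at a positive association
    return s_ab, confidence, lift

s, c, l = rule_metrics({"opt_A"}, {"fail_X"}, records)
```

A lift well above 1 would flag `opt_A` as a candidate influence on `fail_X`, which is the kind of pattern the rule-cube rankings described above are designed to surface and validate.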
32.
Four experiments were conducted to test whether recent developments in display technology would suffice to eliminate the well-known disadvantages in reading from screen as compared with paper. Proofreading speed and performance were equal for a TFT-LCD and a paper display, but there were more symptoms of eyestrain in the screen condition accompanied by a strong preference for paper (Experiment 1). These results were replicated using a longer reading duration (Experiment 2). Additional experiments were conducted to test hypotheses about the reasons for the higher amount of eyestrain associated with reading from screen. Reduced screen luminance did not change the pattern of results (Experiment 3), but positioning both displays in equal inclination angles eliminated the differences in eyestrain symptoms and increased proofreading speed in the screen condition (Experiment 4). A paper-like positioning of TFT-LCDs seems to enable unimpaired reading without evidence of increased physical strain.

Practitioner Summary: Given the developments in screen technology, a re-assessment of the differences in proofreading speed and performance, well-being, and preference between computer screen and paper was conducted. State-of-the-art TFT-LCDs enable unimpaired reading, but a book-like positioning of screens seems necessary to minimise eyestrain symptoms.

33.
We investigate the complexity of preorder checking when the specification is a flat finite-state system whereas the implementation is either a non-flat finite-state system or a standard timed automaton. In both cases, we show that simulation checking is EXPTIME-hard, and for the case of a non-flat implementation, the result holds even if there is no synchronization between the parallel components and their alphabets of actions are pairwise disjoint. Moreover, we show that the considered problems become PSPACE-complete when the specification is assumed to be deterministic. Additionally, we establish that comparing a synchronous non-flat system with no hiding and a flat system is PSPACE-hard for any relation between trace containment and bisimulation equivalence, even if the flat system is assumed to be fixed.
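For flat finite-state systems, the simulation preorder the abstract refers to can be computed by greatest-fixpoint refinement: start from all state pairs and discard pairs that violate the simulation condition until nothing changes. The sketch below does this for two tiny hand-written labelled transition systems; the systems themselves are illustrative, not taken from the paper.

```python
def simulation(spec_trans, impl_trans, spec_states, impl_states):
    """Return all pairs (p, q) such that spec state q simulates impl
    state p, computed as a greatest fixpoint by iterated removal."""
    rel = {(p, q) for p in impl_states for q in spec_states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            for (a, p2) in impl_trans.get(p, []):
                # q must match every move p --a--> p2 by some q --a--> q2
                if not any(b == a and (p2, q2) in rel
                           for (b, q2) in spec_trans.get(q, [])):
                    rel.discard((p, q))
                    changed = True
                    break
    return rel

# Illustrative systems: the implementation has a nondeterministic
# 'a'-branch into a 'c'-loop that the specification cannot match.
spec = {0: [("a", 1)], 1: [("b", 1)]}
impl = {0: [("a", 1), ("a", 2)], 1: [("b", 1)], 2: [("c", 2)]}
rel = simulation(spec, impl, spec_states={0, 1}, impl_states={0, 1, 2})
```

Here only the pair (1, 1) survives: spec state 0 cannot simulate impl state 0, because one of the two 'a'-successors of impl state 0 can only do 'c'. On flat systems this refinement runs in polynomial time; the hardness results above concern non-flat (parallel) implementations, where the state space is exponential in the description.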
34.
Recent developments in cellular imaging now permit the minimally invasive study of protein interactions in living cells. These advances are of enormous interest to cell biologists, as proteins rarely act in isolation, but rather in concert with others in forming cellular machinery. Up until recently, all protein interactions had to be determined in vitro using biochemical approaches. This biochemical legacy has provided cell biologists with the basis to test defined protein-protein interactions not only inside cells, but now also with spatial resolution. More recent developments in TCSPC imaging are now also driving towards being able to determine protein interaction rates with similar spatial resolution, and together, these experimental advances allow investigators to perform biochemical experiments inside living cells. Here, we discuss some findings we have made along the way which may be useful for physiologists to consider.
35.
In the current study, a meshfree Lagrangian particle method for the Landau–Lifshitz Navier–Stokes (LLNS) equations is developed. The LLNS equations incorporate thermal fluctuation into macroscopic hydrodynamics by the addition of white noise fluxes whose magnitudes are set by a fluctuation–dissipation theorem. The study focuses on capturing the correct variance and correlations computed at equilibrium flows, which are compared with available theoretical values. Moreover, a numerical test for the random walk of a standing shock wave has been considered for capturing the shock location.
36.
Engineering frameworks are currently required to support the easy, low-cost, modular and integrated development of manufacturing systems addressing the emergent requirements of re-configurability, responsiveness and robustness. This paper discusses the integration of 2D/3D digital software tools with Petri net based service-oriented frameworks to allow the design, configuration, analysis, validation, simulation, monitoring and control of manufacturing systems in a virtual environment and their subsequent smooth migration into the real "physical" environment. An experimental case study was implemented to validate the proposed concepts, using the Continuum platform to design, compose, analyze, validate and simulate the Petri net based service-oriented manufacturing control system, and the Delmia Automation™ software suite to support the rapid prototyping and easy simulation of the designed control solution. The experimental results demonstrate several aspects of the proposed approach, notably the smooth migration between the design and operation phases, one of the main objectives of the work.
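The control models in the abstract above are place/transition Petri nets. The following minimal token-game sketch shows the firing rule such models rest on: a transition is enabled when every input place holds enough tokens, and firing consumes input tokens and produces output tokens. The net (a buffer, a machine resource, a done store) is a hypothetical example, not the paper's framework.

```python
# Minimal place/transition net: markings are dicts from place name to
# token count; a transition is given by its pre- and post-sets.

def enabled(marking, pre):
    """A transition is enabled if every input place has enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire the transition: consume the pre-set, produce the post-set."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# A trivial processing step: t_process takes a part from the buffer,
# occupies and releases the machine, and deposits the part in 'done'.
t_pre  = {"buffer": 1, "machine": 1}
t_post = {"done": 1, "machine": 1}

m0 = {"buffer": 2, "machine": 1, "done": 0}
m1 = fire(m0, t_pre, t_post)   # first part processed
m2 = fire(m1, t_pre, t_post)   # second part processed
```

After two firings the buffer is empty and the transition is no longer enabled, so the token game halts, which is exactly the kind of behavioural property (boundedness, deadlock, resource conservation) that the analysis and validation steps mentioned above check before migration to the physical system.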
37.
The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators into the LAMMPS molecular dynamics software for distributed memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle–particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.
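The particle–particle particle-mesh (PPPM) method mentioned above splits the Coulomb interaction into a short-range pair part and a long-range part solved on a mesh. The first mesh step is spreading point charges onto grid cells; the sketch below shows this charge assignment in 1D with cloud-in-cell (linear) weights on a periodic box. It is a toy illustration of the general technique, not the paper's GPU kernels.

```python
import numpy as np

def assign_charges_cic(positions, charges, n_cells, box):
    """Spread point charges onto a periodic 1D mesh using cloud-in-cell
    (linear) weights: each charge is shared between its two nearest cells
    in proportion to its distance from them."""
    h = box / n_cells                 # mesh spacing
    rho = np.zeros(n_cells)
    for x, q in zip(positions, charges):
        s = x / h                     # position in units of cells
        i = int(np.floor(s))
        f = s - i                     # fractional offset within cell i
        rho[i % n_cells] += q * (1.0 - f)
        rho[(i + 1) % n_cells] += q * f
    return rho

# Two opposite charges on an 8-cell periodic box of length 8.0
rho = assign_charges_cic([0.25, 7.5], [1.0, -1.0], n_cells=8, box=8.0)
```

The linear weights conserve total charge exactly, which is essential because the mesh density is subsequently Fourier-transformed to solve Poisson's equation before forces are interpolated back to the particles.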
38.
In an old weighing puzzle, there are n ≥ 3 coins that are identical in appearance. All the coins except one have the same weight, and that counterfeit one is a little bit lighter or heavier than the others, though it is not known in which direction. What is the smallest number of weighings needed to identify the counterfeit coin and to determine its type, using balance scales without measuring weights? This question was fully answered in 1946 by Dyson [The Mathematical Gazette 30 (1946) 231-234]. For values of n that are divisible by three, Dyson's scheme is non-adaptive and hence its later weighings do not depend on the outcomes of its earlier weighings. For values of n that are not divisible by three, however, Dyson's scheme is adaptive. In this note, we show that for all values n ≥ 3 there exists an optimal weighing scheme that is non-adaptive.
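The optimal number of weighings in this puzzle follows the classical information-theoretic bound: with k weighings one can handle up to (3^k − 3)/2 coins while both identifying the counterfeit and determining whether it is light or heavy. A short sketch of that bound, offered here as standard background rather than as the note's own construction:

```python
def min_weighings(n):
    """Smallest k with n <= (3**k - 3) / 2: the classical lower bound
    (achieved by Dyson's scheme) on the number of balance weighings
    needed to find the counterfeit among n coins and tell light from
    heavy, when the direction of the fake's deviation is unknown."""
    assert n >= 3
    k = 1
    while (3 ** k - 3) // 2 < n:
        k += 1
    return k
```

For example, the familiar 12-coin puzzle needs 3 weighings, while 13 coins already require 4 once the counterfeit's type must also be reported.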
39.
We present the software package FRESHS (http://www.freshs.org) for parallel simulation of rare events using sampling techniques from the 'splitting' family of methods. Initially, Forward Flux Sampling (FFS) and Stochastic Process Rare Event Sampling (SPRES) have been implemented. These two methods together make rare event sampling available for both quasi-static and full non-equilibrium regimes. Our framework provides a plugin system for software implementing the underlying physics of the system of interest. At present, example plugins exist for our framework to steer the popular MD packages GROMACS, LAMMPS and ESPResSo, but due to the simple interface of our plugin system, it is also easy to attach other simulation software or self-written code. Use of our framework does not require recompilation of the simulation program. The modular structure allows the flexible implementation of further sampling methods or physics engines and creates a basis for objective comparison of different sampling algorithms.
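Forward Flux Sampling, the first method named above, estimates a rare transition probability as a product of conditional crossing probabilities over a series of interfaces between the initial and final states. The toy sketch below applies the idea to a downhill-biased 1D random walk; the interfaces, step size, and walk dynamics are all hypothetical stand-ins for a real physics engine, not FRESHS code.

```python
import random

def ffs_estimate(interfaces, n_trials, step, seed=0):
    """Toy forward flux sampling for a biased 1D random walk started at
    x = 0 (region A is x < 0): estimate the probability of reaching the
    last interface before falling back into A, as the product of
    per-interface crossing probabilities."""
    rng = random.Random(seed)
    configs = [0.0]            # stored configurations at the current interface
    p_total = 1.0
    for lam in interfaces:
        successes = []
        for _ in range(n_trials):
            x = rng.choice(configs)        # restart from a stored config
            while 0.0 <= x < lam:          # run until crossing lam or returning to A
                x += step * (1 if rng.random() < 0.45 else -1)
            if x >= lam:
                successes.append(x)        # keep configs that reached lam
        if not successes:
            return 0.0
        p_total *= len(successes) / n_trials
        configs = successes
    return p_total

p = ffs_estimate(interfaces=[1.0, 2.0, 3.0], n_trials=200, step=0.5, seed=1)
```

Because each interface crossing is far more likely than the full transition, every stage collects enough successful configurations to seed the next one, which is why splitting methods can estimate probabilities that brute-force simulation would essentially never observe.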
40.
A fundamental tenet of the information systems (IS) discipline holds that: (a) a lack of formal power and influence over the organization targeted for change, (b) weak support from top management, and (c) organizational memories of prior failures are barriers to implementation success. Our research, informed by organizational influence theory, compellingly illustrates that such conditions do not necessarily doom a project to failure. In this paper, we present an analysis of how an IS implementation team designed and enacted a coordinated strategy of organizational influence to achieve implementation success despite these barriers. Our empirical analysis also found that technology implementation and change is largely an organizational influence process (OIP), and thus technical-rational approaches alone are inadequate for achieving success. Our findings offer managers important insights into how they can design and enact OIPs to effectively manage IS implementation. Further, we show how the theory of organizational influence can enhance understanding of IS implementation dynamics and advance the development of a theory of effective IS change agentry.