Article Search
  Paid full text   2421
  Free   201
  Free (domestic)   2
Electrical engineering   38
General   8
Chemical industry   765
Metal technology   39
Machinery and instrumentation   45
Building science   133
Mining engineering   35
Energy and power   55
Light industry   255
Hydraulic engineering   15
Radio electronics   180
General industrial technology   483
Metallurgical industry   149
Atomic energy technology   12
Automation technology   412
  2023   45
  2022   72
  2021   114
  2020   80
  2019   68
  2018   95
  2017   92
  2016   114
  2015   115
  2014   144
  2013   156
  2012   151
  2011   172
  2010   100
  2009   120
  2008   118
  2007   98
  2006   75
  2005   80
  2004   62
  2003   51
  2002   41
  2001   25
  2000   26
  1999   24
  1998   33
  1997   35
  1996   28
  1995   29
  1994   17
  1993   17
  1992   11
  1991   14
  1990   13
  1989   10
  1988   10
  1987   12
  1986   13
  1985   7
  1984   12
  1983   10
  1982   7
  1981   9
  1980   8
  1979   6
  1978   5
  1977   5
  1976   5
  1974   8
  1973   5
Sort by: 2624 results found (search time: 0 ms)
11.
Early-phase distributed system design can be accomplished using solution spaces that provide an interval of permissible values for each functional parameter. The feasibility property guarantees that all design requirements are fulfilled for every possible realization. Flexibility is a measure of the size of the intervals; higher flexibility benefits the design process. Two methods are available for identifying solution spaces. The direct method solves a computationally cheap optimization problem. The indirect method employs a sampling approach that requires relaxing the feasibility property by re-formulating it as a chance constraint. Even for high probabilities of fulfillment, \(P>0.99\), this yields substantial increases in flexibility, which offsets the risk of infeasibility. This work incorporates the chance-constraint formulation into the direct method for linear constraints by showing that its problem statement can be understood as a linear robust optimization problem. Approximations of chance constraints from the literature are transferred into the context of solution spaces, and from these we derive a theoretical value for the safety parameter \(\varOmega\). A further modification is presented for use cases where some intervals are already predetermined. A problem from vehicle safety is used to compare the modified direct and indirect methods and to discuss suitable choices of \(\varOmega\). We find that the modified direct method identifies solution spaces with similar flexibility while maintaining its cost advantage.
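To make the feasibility notions concrete, here is a minimal sketch (not the authors' formulation) contrasting the strict worst-case check of a linear constraint \(a^\top x \le b\) over an interval box with a relaxed, chance-constraint-style check that uses a safety parameter \(\varOmega\); for a standard safe approximation of this kind, the probability of violation is bounded by \(\exp(-\varOmega^2/2)\). All numbers and function names below are illustrative assumptions.

```python
import numpy as np

def worst_case_feasible(a, b, lower, upper):
    """Strict feasibility: a @ x <= b for every x in the box [lower, upper].
    The worst case of a linear form over a box is attained at a corner."""
    worst = np.sum(np.where(a >= 0.0, a * upper, a * lower))
    return worst <= b

def chance_feasible(a, b, lower, upper, omega):
    """Relaxed feasibility in the spirit of a Ben-Tal/Nemirovski-type safe
    approximation of the chance constraint P(a @ x <= b) >= p, for x varying
    independently and symmetrically inside the box: enforce the constraint at
    the box centre plus omega times the norm of the half-width contributions."""
    centre = 0.5 * (lower + upper)
    half_width = 0.5 * (upper - lower)
    return a @ centre + omega * np.linalg.norm(a * half_width) <= b

# Illustrative two-parameter example (assumed numbers, not from the paper).
a = np.array([1.0, 2.0])
b = 11.0
lower = np.array([0.0, 1.0])
upper = np.array([4.0, 4.0])

print(worst_case_feasible(a, b, lower, upper))          # False: strict check fails
print(chance_feasible(a, b, lower, upper, omega=1.0))   # True: relaxed check passes
```

For these numbers the strict check rejects the box while the relaxed check accepts it, which mirrors the flexibility gain the abstract attributes to the chance-constraint re-formulation.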
12.
We demonstrate controlled transport of superparamagnetic beads against the direction of a laminar flow. A permanent magnet assembles 200 nm magnetic particles into bead chains roughly 200 μm long that align parallel to the magnetic field lines. Due to a magnetic field gradient, the bead chains are attracted towards the wall of a microfluidic channel. Rotating the permanent magnet causes the bead chains to rotate in the opposite direction to the magnet. Owing to friction at the surface, the bead chains roll along the channel wall, even against the flow, up to a maximum counter-flow velocity of 8 mm s−1. Based on this approach, magnetic beads can be accurately manoeuvred within microfluidic channels. This counter-flow motion can be used efficiently in Lab-on-a-Chip systems, e.g. for implementing washing steps in DNA purification.
13.
14.
To understand the handling behaviour of a three-wheeled tilting vehicle, models of the vehicle with different levels of detail, corresponding to specific fields of investigation, have been developed. The proposed kinematics of the three-wheeler are then assessed and optimized with respect to the desired dynamic properties by applying a detailed multibody system model. The partially unstable nature of the vehicle's motion suggests the application of an analytically derived, simplified model that allows focusing on stability aspects and steady-state handling properties. These investigations reveal the necessity of employing a steer-by-wire control system to support the driver by stabilizing the motion of the vehicle. An additional basic vehicle model is therefore derived for control design, and an energy-efficient control strategy is presented. Numerical simulation results demonstrate the dynamic properties of the optimized kinematics and the control system, confirmed by successful test runs of a prototype.
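As a generic illustration of the stability analysis mentioned above (not the authors' vehicle model), a linearized model \(\dot{x}=Ax+Bu\) is asymptotically stable when all eigenvalues of \(A\) have negative real parts, and state feedback \(u=-Kx\) is one way a steer-by-wire controller can move unstable eigenvalues into the stable half-plane. The matrices and gain below are placeholder numbers, not parameters of the three-wheeler.

```python
import numpy as np

def is_stable(A):
    """A linear(ized) model x_dot = A x is asymptotically stable iff
    every eigenvalue of A has a strictly negative real part."""
    return bool(np.all(np.real(np.linalg.eigvals(A)) < 0))

# Placeholder open-loop dynamics (illustrative numbers only): one unstable mode.
A = np.array([[0.0, 1.0],
              [2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])

# Simple state feedback u = -K x with a hand-picked gain (assumed, not tuned
# for any real vehicle); the closed-loop system matrix is A - B K.
K = np.array([[6.0, 2.5]])
print(is_stable(A))          # False: the open loop has an unstable eigenvalue
print(is_stable(A - B @ K))  # True: feedback stabilizes it for these numbers
```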
15.
The IEEE 802.21 standard facilitates media independent handovers by providing higher-layer mobility management functions with common service primitives for all technologies. Soon after the base specification was published, several voices in the working group advocated broadening the scope of IEEE 802.21 beyond handovers. This paper updates the reader on the main challenges and functionalities required to create a Media Independence Service Layer, through the analysis of two scenarios being discussed within the working group: 1) wireless coexistence, and 2) heterogeneous wireless multihop backhaul networks.
16.
For evaluating visual-analytics tools, many studies confine themselves to scoring user insights into data. For participatory design of such tools, we propose a three-level methodology that makes more of users' insights. The Relational Insight Organizer (RIO) helps to understand how insights emerge and build on each other.
17.
We study four problems from the geometry of numbers: the shortest vector problem (Svp), the closest vector problem (Cvp), the successive minima problem (Smp), and the shortest independent vectors problem (Sivp). Extending and generalizing results of Ajtai, Kumar, and Sivakumar, we present probabilistic single exponential time algorithms for all four problems for all \(\ell_p\) norms. The results on Smp and Sivp are new for all norms. The results on Svp and Cvp generalize previous results of Ajtai et al. for the Euclidean \(\ell_2\) norm to arbitrary \(\ell_p\) norms. We achieve our results by introducing a new lattice problem, the generalized shortest vector problem (GSvp). We describe a single exponential time algorithm for GSvp, and polynomial time reductions from Svp, Cvp, Smp, and Sivp to GSvp, establishing single exponential time algorithms for the four classical lattice problems. This approach leads to a unified algorithmic treatment of the lattice problems Svp, Cvp, Smp, and Sivp.
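For reference (standard textbook definitions, not quoted from this paper), for a lattice \(\mathcal{L}\subset\mathbb{R}^n\) and \(1\le p\le\infty\):

\[
\begin{aligned}
\textsc{Svp}_p &: \text{ find } v\in\mathcal{L}\setminus\{0\} \text{ minimizing } \lVert v\rVert_p,\\
\textsc{Cvp}_p &: \text{ given a target } t\in\mathbb{R}^n, \text{ find } v\in\mathcal{L} \text{ minimizing } \lVert v-t\rVert_p,\\
\lambda_k^{(p)}(\mathcal{L}) &= \min\bigl\{r>0 : \mathcal{L} \text{ contains } k \text{ linearly independent vectors of } \ell_p\text{-norm} \le r\bigr\}.
\end{aligned}
\]

Smp asks for linearly independent \(v_1,\dots,v_n\in\mathcal{L}\) achieving the successive minima \(\lambda_1^{(p)},\dots,\lambda_n^{(p)}\), and Sivp asks for \(n\) linearly independent lattice vectors of norm at most \(\lambda_n^{(p)}(\mathcal{L})\).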
18.
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general-purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectories of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer.

Program summary

Program title: Phoogle-C/Phoogle-G
Catalogue identifier: AEEB_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 51 264
No. of bytes in distributed program, including test data, etc.: 2 238 805
Distribution format: tar.gz
Programming language: C++
Computer: Designed for Intel PCs. Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1
Operating system: Windows XP
Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures
RAM: 1 GB
Classification: 21.1
External routines: Charles Karney random number library; Microsoft Foundation Class library; NVIDIA CUDA library [1]
Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the paths of many photons within the medium. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer-grade graphics cards have opened the possibility of high-performance desktop parallel computing.
Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer-grade graphics card from NVIDIA.
Restrictions: The graphics-card implementation uses single-precision floating point numbers for all calculations. Only photon transport from an isotropic point source is supported. The graphics-card version has no user interface; the simulation parameters must be set in the source code. The desktop version has a simple user interface; however, some properties can only be accessed through an ActiveX client (such as Matlab).
Additional comments: The random number library used has an LGPL licence (http://www.gnu.org/copyleft/lesser.html).
Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium.
References:
[1] http://www.nvidia.com/object/cuda_home.html.
[2] S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
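To make the algorithm concrete, here is a minimal single-threaded sketch (in Python, for readability) of the kind of photon random walk such programmes implement: an isotropic point source in an infinite scattering medium, exponentially distributed step lengths, absorption by weight decay, and isotropic scattering. It follows the general Prahl-style Monte Carlo scheme but is not the distributed code; all parameter values and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_photons(n_photons=500, mu_a=0.1, mu_s=10.0, weight_cutoff=1e-3):
    """Random walk of photons launched isotropically from a point source in an
    infinite scattering medium. mu_a, mu_s are absorption and scattering
    coefficients (illustrative values, per mm). Returns the mean distance from
    the source at which a photon's weight is exhausted."""
    mu_t = mu_a + mu_s          # total interaction coefficient
    albedo = mu_s / mu_t        # fraction of weight surviving each interaction
    end_radius = []
    for _ in range(n_photons):
        pos = np.zeros(3)
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)          # isotropic launch direction
        weight = 1.0
        while weight > weight_cutoff:
            step = -np.log(1.0 - rng.random()) / mu_t   # exponential free path length
            pos += step * direction
            weight *= albedo                            # deposit absorbed weight
            direction = rng.normal(size=3)              # isotropic scattering
            direction /= np.linalg.norm(direction)      # (real codes use Henyey-Greenstein)
        end_radius.append(np.linalg.norm(pos))
    return float(np.mean(end_radius))

print(simulate_photons())
```

Because each photon history is independent, the outer loop parallelizes trivially, which is exactly what makes the method attractive on GPUs.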
19.
The existence of a representative volume element (RVE) for a class of quasi-brittle materials with a random heterogeneous microstructure under tensile, shear and mixed-mode loading is demonstrated by deriving traction–separation relations that are objective with respect to RVE size. A computational-homogenization-based multiscale crack modelling framework for quasi-brittle solids with complex random microstructure, implemented in an FE\(^2\) setting, is presented. The objectivity of the macroscopic response with respect to the micro-sample size is shown by numerical simulations. A homogenization scheme is thus devised that is objective with respect to both the macroscopic discretization and the microscopic sample size. Numerical examples, including a comparison with direct numerical simulation, demonstrate the performance of the proposed method.
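As background on the averaging step that any FE\(^2\) scheme relies on (a generic illustration, not the authors' crack-specific homogenization), the macroscopic stress is obtained as the volume average of the microscopic stresses over the RVE. The sketch below does this for stresses sampled at integration points with known volume weights; array names and values are assumptions.

```python
import numpy as np

def volume_average_stress(stresses, volumes):
    """Macroscopic (homogenized) stress as the volume average of microscopic
    stresses over an RVE: sigma_M = (1/V) * sum_i sigma_i * v_i.
    stresses: (n_points, 3, 3) stress tensors at integration points.
    volumes:  (n_points,) integration-point volume weights."""
    total_volume = volumes.sum()
    return np.einsum("i,ijk->jk", volumes, stresses) / total_volume

# Illustrative micro-sample: two integration points with made-up stresses (MPa).
stresses = np.array([
    [[100.0, 10.0, 0.0], [10.0, 20.0, 0.0], [0.0, 0.0, 5.0]],
    [[ 60.0,  5.0, 0.0], [ 5.0, 40.0, 0.0], [0.0, 0.0, 5.0]],
])
volumes = np.array([0.4, 0.6])

print(volume_average_stress(stresses, volumes))
```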
20.
The direct observation of cells over time using time-lapse microscopy can provide deep insights into many important biological processes. Reliable analyses of the motility, proliferation, invasive potential or mortality of cells are essential to many studies involving live-cell imaging and can aid in biomarker discovery and diagnostic decisions. Given the vast amount of image and time-series data produced by modern microscopes, automated analysis is key to capitalizing on the potential of time-lapse imaging devices. To provide fast and reproducible analyses of multiple aspects of cell behaviour, we developed TimeLapseAnalyzer. Apart from general-purpose image enhancement and segmentation procedures, this extensible, self-contained, modular cross-platform package provides dedicated modules for fast and reliable multi-target cell tracking, scratch wound healing analysis, cell counting and tube formation analysis in high-throughput screening of live-cell experiments. TimeLapseAnalyzer is freely available (MATLAB, open source) at http://www.informatik.uni-ulm.de/ni/mitarbeiter/HKestler/tla.
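As a toy illustration of the multi-target tracking task such tools address (a generic greedy nearest-neighbour linker, not TimeLapseAnalyzer's algorithm), the sketch below links detected cell centroids between consecutive frames when they lie within a maximum displacement; the coordinates and distance threshold are assumptions.

```python
import numpy as np

def link_frames(prev_centroids, next_centroids, max_dist=15.0):
    """Greedy nearest-neighbour linking of cell centroids between two frames.
    Returns (index_in_prev, index_in_next) pairs; unmatched cells are treated
    as having left the field of view or as newly appeared."""
    links, used = [], set()
    for i, p in enumerate(prev_centroids):
        dists = np.linalg.norm(next_centroids - p, axis=1)
        for j in np.argsort(dists):
            if dists[j] > max_dist:
                break                       # nothing close enough for this cell
            if int(j) not in used:
                links.append((i, int(j)))
                used.add(int(j))
                break
    return links

# Made-up centroids (pixels) for two consecutive frames.
frame_t  = np.array([[10.0, 12.0], [40.0, 41.0], [80.0, 15.0]])
frame_t1 = np.array([[12.0, 14.0], [43.0, 40.0], [120.0, 90.0]])

print(link_frames(frame_t, frame_t1))  # [(0, 0), (1, 1)]; cell 2 stays unmatched
```

Real trackers replace the greedy step with global assignment and handle division and gap closing, but the frame-to-frame linking idea is the same.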