41.
Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially in distributed settings where the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments with three different real-life datasets, using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
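As background for the TPUT/KLEE framework that the optimizations build on, here is a minimal single-machine sketch of the basic three-phase TPUT threshold idea for sum aggregation; in-memory dictionaries stand in for remote nodes, and the paper's contributions (operator trees, adaptive scan depths, source sampling) are not shown.

```python
from collections import defaultdict

def tput_topk(node_lists, k):
    """Simplified TPUT-style top-k aggregation over m 'nodes', each holding a
    dict item -> local score (sum aggregation; not the paper's optimized variant)."""
    m = len(node_lists)
    # Phase 1: each node reports its local top-k; take the k-th best partial sum.
    partial = defaultdict(float)
    for node in node_lists:
        for item, score in sorted(node.items(), key=lambda kv: -kv[1])[:k]:
            partial[item] += score
    tau1 = sorted(partial.values(), reverse=True)[k - 1]
    threshold = tau1 / m                      # "phase-1 bottom" spread over the nodes
    # Phase 2: nodes send every item whose local score reaches the threshold.
    partial = defaultdict(float)
    seen_at = defaultdict(set)
    for i, node in enumerate(node_lists):
        for item, score in node.items():
            if score >= threshold:
                partial[item] += score
                seen_at[item].add(i)
    tau2 = sorted(partial.values(), reverse=True)[k - 1]
    # Upper-bound pruning: unseen scores can add at most `threshold` per missing node.
    candidates = [it for it, s in partial.items()
                  if s + threshold * (m - len(seen_at[it])) >= tau2]
    # Phase 3: fetch exact scores for the surviving candidates only.
    exact = {it: sum(node.get(it, 0.0) for node in node_lists) for it in candidates}
    return sorted(exact.items(), key=lambda kv: -kv[1])[:k]

nodes = [{"a": 9, "b": 4, "c": 1}, {"a": 3, "b": 5, "d": 8}, {"b": 6, "c": 2, "d": 1}]
print(tput_topk(nodes, 2))   # [('b', 15), ('a', 12)]
```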
42.
The efficiency of a valve-less rectification micropump depends primarily on the microfluidic diodicity (the ratio of the backward pressure drop to the forward pressure drop). In this study, different rectifying structures, including the conventional nozzle/diffuser and Tesla structures, were investigated at very low Reynolds numbers (between 0.2 and 60). The rectifying structures were characterized with respect to their design, and a numerical approach for calculating their diodicity was illustrated. The microfluidic diodicity was evaluated numerically for different rectifying structures, including half-circle, semicircle, heart, triangle, bifurcation, nozzle/diffuser, and Tesla structures. The Lattice Boltzmann Method (LBM) was used to simulate the fluid flow at the microscale. The results suggest that at very low Reynolds numbers, rectification and multifunctional micropumping may be achievable using several of the presented structures. The results for the conventional structures agree with previously reported results.
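Purely for illustration of the diodicity definition, the snippet below computes Di from forward- and backward-direction pressure drops at matched flow rates; all numbers are made up, not results from the paper or from LBM runs.

```python
import numpy as np

# Diodicity Di = (backward pressure drop) / (forward pressure drop) at matched
# flow rates. The values below are hypothetical placeholders, e.g. as they might
# come out of forward/backward simulations of one rectifying structure.
reynolds = np.array([0.2, 1.0, 10.0, 60.0])
dp_forward = np.array([12.0, 60.0, 640.0, 4400.0])    # Pa (hypothetical)
dp_backward = np.array([12.1, 61.2, 668.0, 4840.0])   # Pa (hypothetical)

diodicity = dp_backward / dp_forward
for re, di in zip(reynolds, diodicity):
    print(f"Re = {re:5.1f}  ->  Di = {di:.3f}")        # Di > 1 favors forward pumping
```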
43.
In this study, we analyze the criticality of nodes in air transportation using techniques from three different domains, and thus three essentially different perspectives on criticality. First, we examine the unweighted structure of air transportation networks using recent methods from control theory (maximum matching and minimum dominating set). Second, complex network metrics (betweenness and closeness) are used with passenger traffic as weights. Third, a ticket-data-level analysis (origin-destination betweenness and outbound traffic with a transit threshold) is performed. Remarkably, the techniques identify different sets of critical nodes, while in general favoring high-degree nodes. Our evaluation on the international air transportation country network suggests that some countries, e.g., the United States, France, and Germany, are critical from all three perspectives. Other countries, e.g., the United Arab Emirates and Panama, have a very specific influence, controlling the passenger traffic of their neighboring countries. Furthermore, we assess the criticality of the country network using Multi-Criteria Decision Analysis (MCDA) techniques. The United States, Great Britain, Germany, and the United Arab Emirates are identified as non-dominated countries; sensitivity analysis shows that the United Arab Emirates is most sensitive to the preference information on outbound traffic. Our work is geared towards a better understanding of node criticality in air transportation networks and suggests future research directions on criticality in general transportation networks.
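A minimal sketch of the second (weighted complex-network) perspective, using networkx on a hypothetical country-level edge list; interpreting edge length as inverse passenger traffic is an assumption for this sketch, not necessarily the paper's exact weighting.

```python
import networkx as nx

# Hypothetical weighted country-level air transportation edges:
# (country_a, country_b, annual passengers in millions). Not data from the paper.
edges = [("US", "DE", 9.0), ("US", "FR", 7.5), ("DE", "FR", 6.0),
         ("DE", "AE", 4.0), ("AE", "PA", 0.5), ("PA", "US", 2.0)]

G = nx.Graph()
for a, b, passengers in edges:
    # networkx treats the weight as a distance, so use inverse traffic as the
    # edge length: heavily used links are "shorter".
    G.add_edge(a, b, passengers=passengers, dist=1.0 / passengers)

betweenness = nx.betweenness_centrality(G, weight="dist")
closeness = nx.closeness_centrality(G, distance="dist")
for c in G.nodes:
    print(f"{c}: betweenness={betweenness[c]:.3f}, closeness={closeness[c]:.3f}")
```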
44.
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of the dynamical mean-field self-consistency equations for (single-orbital, single-site) dynamical mean-field problems with arbitrary densities of states.

Program summary

Program title: dmft
Catalogue identifier: AEIL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: ALPS LIBRARY LICENSE version 1.1
No. of lines in distributed program, including test data, etc.: 899 806
No. of bytes in distributed program, including test data, etc.: 32 153 916
Distribution format: tar.gz
Programming language: C++
Operating system: The ALPS libraries have been tested on the following platforms and compilers:
  • Linux with GNU Compiler Collection (g++ version 3.1 and higher), and Intel C++ Compiler (icc version 7.0 and higher)
  • MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0)
  • IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers
  • Compaq Tru64 UNIX with Compaq C++ Compiler (cxx)
  • SGI IRIX with MIPSpro C++ Compiler (CC)
  • HP-UX with HP C++ Compiler (aCC)
  • Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher)
RAM: 10 MB–1 GB
Classification: 7.3
External routines: ALPS [1], BLAS/LAPACK, HDF5
Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions.
Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2].
Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper.
Running time: 60 s–8 h per iteration.
References:
  • [1] A. Albuquerque, F. Alet, P. Corboz, et al., J. Magn. Magn. Mater. 310 (2007) 1187.
  • [2] Rev. Mod. Phys., in press; http://arxiv.org/abs/1012.4474.
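As a schematic illustration of the solution method, the sketch below closes a single-site DMFT self-consistency loop for the Bethe lattice, where the hybridization is Delta(iw_n) = t^2 G(iw_n); a Hubbard-I style stub stands in for the CT-QMC impurity solver, which is the part the dmft codes actually implement.

```python
import numpy as np

beta, U, t, n_iwn = 10.0, 4.0, 1.0, 256
iwn = 1j * (2 * np.arange(n_iwn) + 1) * np.pi / beta   # fermionic Matsubara frequencies
mu = U / 2.0                                           # half filling

def impurity_solver(g0_inv):
    """Placeholder for a CT-QMC impurity solver: Hubbard-I self-energy at half
    filling. The real codes stochastically sample the partition-function expansion."""
    sigma = U / 2.0 + U**2 / (4.0 * iwn)
    return 1.0 / (g0_inv - sigma)

# DMFT self-consistency for the Bethe lattice (semicircular DOS, hopping t):
# Delta(iw_n) = t^2 * G(iw_n), G0^{-1}(iw_n) = iw_n + mu - Delta(iw_n).
g = np.zeros(n_iwn, dtype=complex)
for _ in range(50):
    g0_inv = iwn + mu - t**2 * g
    g_new = impurity_solver(g0_inv)
    if np.max(np.abs(g_new - g)) < 1e-8:
        break
    g = 0.5 * g + 0.5 * g_new          # linear mixing for stability
print("G(iw_0) =", g[0])
```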
45.
During summer and autumn 2007, an 11 GHz microwave radiometer was deployed in an experimental tree plantation in Sardinilla, Panama. The opacity of the tree canopy was derived from the incoming brightness temperatures received on the ground. A collocated eddy-covariance flux tower measured water vapor fluxes and meteorological variables above the canopy. In addition, xylem sap flow of trees was measured within the flux tower footprint. We observed considerable diurnal differences between measured canopy opacities and modeled theoretical opacities that were closely linked to the xylem sap flow. We speculate that dielectric changes in the leaves induced by the sap flow cause the observed diurnal changes. In addition, rain intercepted by the canopy and dew formation also modulated the diurnal opacity cycle. With an enhanced canopy opacity model accounting for water deposited on the leaves, we quantified the influence of canopy-stored water (i.e., intercepted water and dew) on the opacity. A time series of dew formation and rain interception was monitored directly over a period of two weeks. We found that during light rainfall up to 60% of the rain amount is intercepted by the canopy, whereas during periods of intense rainfall only 4% is intercepted. On average, 0.17 mm of dew formed during the night. Dew evaporation contributed 5% to the total water vapor flux measured above the canopy.
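To illustrate how canopy opacity can be derived from ground-received brightness temperatures, here is a sketch based on a textbook single-layer, scattering-free radiative-transfer model with hypothetical temperatures; the paper's enhanced model accounting for canopy-stored water is not reproduced here.

```python
import numpy as np

def canopy_opacity(tb_ground, t_canopy, t_sky, theta_deg=0.0):
    """Opacity tau from the downwelling brightness temperature measured below
    the canopy, using a single-layer, scattering-free model:
        T_B = T_can * (1 - exp(-tau/mu)) + T_sky * exp(-tau/mu),  mu = cos(theta).
    This is a generic textbook form, not necessarily the paper's model."""
    mu = np.cos(np.radians(theta_deg))
    transmissivity = (t_canopy - tb_ground) / (t_canopy - t_sky)
    return -mu * np.log(transmissivity)

# Hypothetical values: canopy at 300 K, clear-sky background 15 K, measured 160 K
print(canopy_opacity(tb_ground=160.0, t_canopy=300.0, t_sky=15.0))  # ~0.71 Np
```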
46.
Contention resolution schemes in optical burst switched (OBS) networks, as well as contention avoidance schemes, delay burst delivery and change the burst arrival sequence. A changed burst arrival sequence usually changes the packet arrival sequence and degrades the performance of upper-layer protocols, e.g., the throughput of the transmission control protocol (TCP). In this paper, we present and analyze a detailed burst reordering model for two widely applied burst assembly strategies: time-based and random selection. We apply the IETF reordering metrics and explicitly calculate three of them: the reordering ratio, the reordering extent metric, and the TCP-relevant metric. These metrics allow estimating the degree of reordering in a given network scenario: they estimate the buffer space required at the destination to resolve reordering and quantify the number of duplicate acknowledgements relevant for investigations of the transmission control protocol. We show that our model reflects the burst/packet reordering pattern of simulated OBS networks very well. Applying our model in a network emulation scenario enables investigations of real protocol implementations in network emulation environments. It therefore serves as a substitute for extensive TCP-over-OBS network simulations with a focus on burst reordering.
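For intuition, the sketch below computes a simplified reordering ratio and reordering extent from a packet arrival sequence, in the spirit of the IETF reordering metrics (RFC 4737); it is not a full implementation of the RFC definitions.

```python
def reordering_metrics(arrivals):
    """Simplified reordering metrics for a list of sequence numbers given in
    arrival order (assumes unique sequence numbers)."""
    reordered = 0
    extents = []
    max_seen = None
    arrival_index = {}                # sequence number -> arrival position
    for i, s in enumerate(arrivals):
        arrival_index[s] = i
        if max_seen is None or s >= max_seen:
            max_seen = s              # packet arrived in order
        else:
            reordered += 1            # a larger sequence number already arrived
            # extent: distance back to the earliest arrival with a larger seq. number
            earlier_larger = [j for t, j in arrival_index.items() if t > s]
            extents.append(i - min(earlier_larger))
    ratio = reordered / len(arrivals) if arrivals else 0.0
    return ratio, extents

# Example: packet 3 is overtaken by packets 4 and 5
print(reordering_metrics([1, 2, 4, 5, 3, 6]))  # ratio 1/6, extents [2]
```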
47.
The characterization of patients with acute coronary syndromes (ACS) at the molecular and cellular levels provides a novel vision for understanding the pathological and clinical expression of the disease. Recent advances in proteomic technologies permit the evaluation of systematic changes in protein expression in many biological systems and have been extensively applied to cardiovascular diseases (CVD). The cardiovascular system is in permanent intimate contact with blood, making blood-based biomarker discovery a particularly worthwhile approach. Thus, proteomics can potentially yield novel biomarkers reflecting CVD, establish earlier detection strategies, and monitor response to therapy. Here we review the different proteomic strategies used in the study of atherosclerosis and the novel proteins differentially expressed and secreted by atherosclerotic lesions, which constitute potential biomarkers (HSP-27, cathepsin D). Special attention is paid to MS imaging of atheroma plaque and the generation, for the first time, of 2-D images of lipids showing the distribution of these molecules in the different areas of atherosclerotic lesions. In addition, new potential biomarkers have been identified in plasma (amyloid A1α, transthyretin), circulating cells (the protein profile of monocytes from ACS patients) and individual cell constituents of atheroma plaques (endothelial cells, VSMCs, macrophages), which provide novel insights into vascular pathophysiology.
48.
The variability of fresh water availability in arid and semi-arid countries poses a serious challenge to farmers who depend on irrigation for crop growing. This has shifted the focus onto improving irrigation management and water productivity (WP) through controlled deficit irrigation (DI). DI can be conceived as a strategy to deal with these challenges, but more knowledge on the risks and opportunities of this strategy is urgently needed. The availability of simulation models that can reliably predict crop yield under the influence of soil, atmosphere, irrigation, and agricultural management practices is a prerequisite for deriving reliable and effective deficit irrigation strategies. In this context, this article discusses the performance of the crop models CropWat, PILOTE, Daisy, and APSIM when used as part of a stochastic simulation-based approach to improve WP, focusing primarily on the impact of climate variability. The stochastic framework consists of: (i) a weather generator for simulating regional impacts of climate variability; (ii) a tailor-made evolutionary optimization algorithm for optimal irrigation scheduling with limited water supply; and (iii) the above-mentioned models for simulating water transport and crop growth in a sound manner. The results are stochastic crop water production functions (SCWPFs), which can be used as basic tools for assessing the risk to potential yield posed by water stress and climate variability. Example simulations from India, Malawi, France, and Oman are presented, and the suitability of these crop models for use in a framework for optimizing WP is evaluated.
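A minimal sketch of how a stochastic crop water production function can be assembled: synthetic weather realizations are fed, together with a candidate irrigation amount, into a crop model, and yield quantiles are collected per irrigation level. The toy weather generator and the logistic yield response below are placeholders for the paper's weather generator and the CropWat/PILOTE/Daisy/APSIM models.

```python
import numpy as np

rng = np.random.default_rng(0)

def crop_model(irrigation_mm, seasonal_rain_mm):
    """Placeholder yield response (t/ha); a toy logistic curve, not one of the
    crop models evaluated in the paper."""
    water = irrigation_mm + seasonal_rain_mm
    return 8.0 / (1.0 + np.exp(-(water - 450.0) / 80.0))

def scwpf(irrigation_levels, n_weather=500):
    """Stochastic crop water production function: yield quantiles per irrigation
    amount over many synthetic weather realizations."""
    rain = rng.gamma(4.0, scale=75.0, size=n_weather)   # toy seasonal-rain generator
    quantiles = {}
    for irr in irrigation_levels:
        yields = np.array([crop_model(irr, r) for r in rain])
        quantiles[irr] = np.percentile(yields, [10, 50, 90])
    return quantiles

for irr, q in scwpf([0, 100, 200, 300]).items():
    print(f"irrigation {irr:3d} mm -> yield P10/P50/P90 = {q.round(2)}")
```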
49.
Helicopters are valuable because they can land at unprepared sites; however, current unmanned helicopters are unable to select or validate landing zones (LZs) and approach paths. For operation in unknown terrain it is necessary to assess the safety of an LZ. In this paper, we describe a lidar-based perception system that enables a full-scale autonomous helicopter to identify and land in previously unmapped terrain with no human input. We describe the problem, real-time algorithms, perception hardware, and results. Our approach extends the state of the art in terrain assessment by incorporating not only plane fitting but also factors such as terrain/skid interaction, rotor and tail clearance, wind direction, clear approach/abort paths, and ground paths. In results from urban and natural environments, we were able to successfully classify LZs from point cloud maps. We also present results from eight successful landing experiments with varying ground clutter and approach directions, in which the helicopter selected its own landing site and approach and then proceeded to land. To our knowledge, these experiments were the first demonstration of a full-scale autonomous helicopter that selected its own landing zones and landed.
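A sketch of the plane-fitting step of terrain assessment: fit a least-squares plane to a point-cloud patch and derive slope and roughness. The thresholds and the acceptance test are illustrative assumptions; the paper's full assessment additionally considers skid interaction, clearances, wind, and approach paths.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through an Nx3 point cloud patch."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, _ = coeffs
    residuals = points[:, 2] - A @ coeffs
    slope_deg = np.degrees(np.arctan(np.hypot(a, b)))   # tilt of the fitted plane
    roughness = residuals.std()                          # vertical scatter about the plane
    return slope_deg, roughness

def is_candidate_lz(points, max_slope_deg=5.0, max_roughness_m=0.05):
    """Toy acceptance test on one terrain patch; thresholds are illustrative only."""
    slope, rough = fit_plane(points)
    return slope <= max_slope_deg and rough <= max_roughness_m

# Example: a gently tilted, slightly noisy 2 m x 2 m patch
xy = np.random.default_rng(1).uniform(-1, 1, size=(500, 2))
z = 0.02 * xy[:, 0] + 0.01 * np.random.default_rng(2).normal(size=500)
print(is_candidate_lz(np.c_[xy, z]))   # True
```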
50.
We present a maximum margin parameter learning algorithm for Bayesian network classifiers that uses a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree-augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We also provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case in which discriminative classifiers usually first require mechanisms to complete the unknown feature values.
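A minimal sketch of margin-based parameter learning for a discrete naive Bayes classifier in which the CPTs stay normalized by parameterizing them as softmaxes of unconstrained logits; a plain (sub)gradient step on a multiclass hinge loss replaces the paper's conjugate gradient method, and the margin definition here is a simplified stand-in.

```python
import numpy as np

def log_softmax(v):
    v = v - v.max(axis=-1, keepdims=True)
    return v - np.log(np.exp(v).sum(axis=-1, keepdims=True))

def log_joint(prior_logits, cpt_logits, x):
    """log P(x, c) for every class c; CPTs are softmaxes of logits, so the
    probabilistic normalization is preserved by construction."""
    lp = log_softmax(prior_logits).copy()
    for j, v in enumerate(x):
        lp += log_softmax(cpt_logits[j])[:, v]
    return lp

def hinge_margin_step(prior_logits, cpt_logits, X, y, gamma=1.0, lr=0.1):
    """One (sub)gradient step on the multiclass hinge (margin) loss."""
    g_prior = np.zeros_like(prior_logits)
    g_cpt = [np.zeros_like(W) for W in cpt_logits]
    for x, c in zip(X, y):
        lj = log_joint(prior_logits, cpt_logits, x)
        rival = int(np.argmax(np.where(np.arange(len(lj)) == c, -np.inf, lj)))
        if gamma - (lj[c] - lj[rival]) > 0:          # margin violated
            g_prior[c] -= 1.0
            g_prior[rival] += 1.0
            for j, v in enumerate(x):
                for cls, sign in ((c, -1.0), (rival, +1.0)):
                    p = np.exp(log_softmax(cpt_logits[j][cls]))
                    onehot = np.zeros_like(p)
                    onehot[v] = 1.0
                    g_cpt[j][cls] += sign * (onehot - p)
    prior_logits -= lr * g_prior / len(X)
    for j, gj in enumerate(g_cpt):
        cpt_logits[j] -= lr * gj / len(X)

# Toy usage: 2 classes, 2 binary features; feature 0 determines the class.
rng = np.random.default_rng(0)
prior = np.zeros(2)
cpts = [rng.normal(size=(2, 2)) * 0.01 for _ in range(2)]
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 1, 1])
for _ in range(200):
    hinge_margin_step(prior, cpts, X, y)
print([int(log_joint(prior, cpts, x).argmax()) for x in X])  # expected [0, 0, 1, 1]
```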