Sort by:  2,696 results found (search time: 15 ms)
71.
The paper presents SwiftSeg, a novel technique for online time series segmentation and piecewise polynomial representation. The segmentation approach is based on a least-squares approximation of time series in sliding and/or growing time windows utilizing a basis of orthogonal polynomials. This allows the definition of fast update steps for the approximating polynomial, where the computational effort depends only on the degree of the approximating polynomial and not on the length of the time window. The coefficients of the orthogonal expansion of the approximating polynomial, obtained by means of the update steps, can be interpreted as optimal (in the least-squares sense) estimators for the average, slope, curvature, change of curvature, etc., of the signal in the time window considered. These coefficients, as well as the approximation error, may be used in a very intuitive way to define segmentation criteria. The properties of SwiftSeg are evaluated on several artificial and real benchmark time series, and it is compared to three different offline and online techniques to assess its accuracy and runtime. It is shown that SwiftSeg, which is suitable for many data-streaming applications, offers high accuracy at very low computational cost.
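The windowed least-squares idea can be illustrated with a short sketch. This is not the SwiftSeg implementation: the constant-time orthogonal-polynomial update steps are replaced by a plain polynomial fit, and the function name and error threshold are our own assumptions.

```python
import numpy as np

def segment_series(y, degree=1, max_error=0.5, min_len=4):
    """Greedy online segmentation sketch: grow a window, fit a
    least-squares polynomial, and close the current segment when the
    RMS approximation error exceeds max_error."""
    segments, start = [], 0
    n = len(y)
    end = start + min_len
    while end <= n:
        x = np.arange(end - start)
        coeffs = np.polyfit(x, y[start:end], degree)
        resid = y[start:end] - np.polyval(coeffs, x)
        rms = float(np.sqrt(np.mean(resid ** 2)))
        if rms > max_error:
            # close the segment just before the error bound was violated
            segments.append((start, end - 1))
            start = end - 1
            end = start + min_len
        else:
            end += 1
    segments.append((start, n))
    return segments
```

On a piecewise-linear signal, the cut lands near the slope change; a real streaming implementation would update the fit incrementally instead of refitting each window.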
72.
World lines     
In this paper we present World Lines, a novel interactive visualization that provides complete control over multiple heterogeneous simulation runs. In many application areas, decisions can only be made by exploring alternative scenarios. The goal of the suggested approach is to support users in this decision-making process. In this setting, the data domain is extended to a set of alternative worlds of which only one outcome will actually happen. World Lines integrate simulation, visualization, and computational steering into a single unified system that is capable of dealing with the extended solution space. World Lines represent simulation runs as causally connected tracks that share a common time axis. This setup enables users to intervene and add new information quickly. A World Line is introduced as a visual combination of user events and their effects in order to present a possible future. To quickly find the most attractive outcome, we suggest World Lines as the governing component in a system of multiple linked views and a simulation component. World Lines employ linking and brushing to enable comparative visual analysis of multiple simulations in linked views. Analysis results can be mapped to the various visual variables that World Lines provide in order to highlight the most compelling solutions. To demonstrate this technique we present a flooding scenario and show the usefulness of the integrated approach in supporting informed decision making.
73.
This study was part of an interdisciplinary research project on soil carbon and phytomass dynamics of boreal and arctic permafrost landscapes. The 45 ha study area was a catchment located in the forest tundra of northern Siberia, approximately 100 km north of the Arctic Circle. The objective of this study was to estimate aboveground carbon (AGC) and to assess and model its spatial variability. We combined multi-spectral high-resolution remote sensing imagery and sample-based field inventory data by means of the k-nearest neighbor (k-NN) technique and linear regression. Field data were collected by stratified systematic sampling in August 2006, with a total sample size of n = 31 circular nested sample plots of 154 m² for trees and shrubs and 1 m² for ground vegetation. Destructive biomass samples were taken on a sub-sample for fresh weight and moisture content. Species-specific allometric biomass models were constructed to predict dry biomass from diameter at breast height (dbh) for trees and from elliptic projection areas for shrubs. Quickbird data (standard imagery product), acquired shortly before the field campaign, and archived ASTER data (Level-1B product) from 2001 were geo-referenced, converted to calibrated at-sensor radiances, and used as carrier data. Spectral information of the pixels located within the inventory plots was extracted and analyzed as the reference set. Stepwise multiple linear regression was applied to identify suitable predictors from the set of variables comprising the original satellite bands, vegetation indices, and texture metrics. To produce thematic carbon maps, carbon values were predicted for all pixels of the investigated satellite scenes. For this prediction, we compared the k-NN distance-weighted classifier and multiple linear regression with respect to their predictions. The estimated mean value of aboveground carbon from stratified sampling in the field is 15.3 t/ha (standard error SE = 1.50 t/ha, SE% = 9.8%). Zonal prediction from the k-NN method with the Quickbird image as carrier is 14.7 t/ha, with a root mean square error RMSE = 6.42 t/ha (RMSEr = 44%) resulting from leave-one-out cross-validation. The k-NN approach allows mapping and analysis of the spatial variability of AGC. The results show high spatial variability, with AGC predictions ranging from 4.3 t/ha to 28.8 t/ha, reflecting the highly heterogeneous conditions in these permafrost-influenced landscapes. The means and totals of the linear regression and k-NN predictions revealed only small differences, but some regional distinctions were recognized in the maps.
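The distance-weighted k-NN prediction used for the carbon maps can be sketched as follows. This is an illustrative implementation, not the authors' code; the function name and parameters are assumptions.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3, eps=1e-9):
    """Distance-weighted k-NN regression sketch: each query pixel's
    value is a weighted mean of its k nearest reference plots in
    feature space, with weights proportional to inverse distance."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for q in np.atleast_2d(X_query).astype(float):
        d = np.sqrt(((X_train - q) ** 2).sum(axis=1))  # Euclidean distances
        idx = np.argsort(d)[:k]                        # k nearest plots
        w = 1.0 / (d[idx] + eps)                       # inverse-distance weights
        preds.append(float(np.sum(w * y_train[idx]) / np.sum(w)))
    return np.array(preds)
```

In the study's setting, X would hold spectral band values, vegetation indices, and texture metrics per pixel, and y the AGC values of the field plots; leave-one-out cross-validation then drops one plot at a time and predicts it from the rest.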
74.
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of the dynamical mean-field self-consistency equations for (single-orbital, single-site) dynamical mean-field problems with arbitrary densities of states.

Program summary

Program title: dmft
Catalogue identifier: AEIL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: ALPS LIBRARY LICENSE version 1.1
No. of lines in distributed program, including test data, etc.: 899 806
No. of bytes in distributed program, including test data, etc.: 32 153 916
Distribution format: tar.gz
Programming language: C++
Operating system: The ALPS libraries have been tested on the following platforms and compilers:
  • Linux with GNU Compiler Collection (g++ version 3.1 and higher), and Intel C++ Compiler (icc version 7.0 and higher)
  • MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0)
  • IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers
  • Compaq Tru64 UNIX with Compaq C++ Compiler (cxx)
  • SGI IRIX with MIPSpro C++ Compiler (CC)
  • HP-UX with HP C++ Compiler (aCC)
  • Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher)
RAM: 10 MB–1 GB
Classification: 7.3
External routines: ALPS [1], BLAS/LAPACK, HDF5
Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions.
Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2].
Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper.
Running time: 60 s–8 h per iteration.
References:
  [1] A. Albuquerque, F. Alet, P. Corboz, et al., J. Magn. Magn. Mater. 310 (2007) 1187.
  [2] http://arxiv.org/abs/1012.4474, Rev. Mod. Phys., in press.
75.
Vector fields are a common concept for the representation of many different kinds of flow phenomena in science and engineering. Methods based on vector field topology are known for their convenience for visualizing and analysing steady flows, but a counterpart for unsteady flows is still missing. However, a substantial body of relevant work aiming at such a solution is available. We give an overview of previous research leading towards topology-based and topology-inspired visualization of unsteady flow, pointing out the different approaches and methodologies involved as well as their relation to each other, taking classical (i.e. steady) vector field topology as our starting point. In particular, we focus on Lagrangian methods, space–time domain approaches, local methods, and stochastic and multifield approaches. Furthermore, we illustrate our review with practical examples for the different approaches.
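As a small illustration of the classical (steady) vector field topology that the survey takes as its starting point, first-order critical points are classified from the eigenvalues of the field's Jacobian. This is a minimal sketch, not code from the paper; the function name and labels are our own.

```python
import numpy as np

def classify_critical_point(jacobian):
    """Classify a first-order critical point of a 2D steady vector
    field via the Jacobian's eigenvalues: real parts determine
    attracting/repelling behaviour, imaginary parts rotation."""
    ev = np.linalg.eigvals(np.asarray(jacobian, dtype=float))
    re, im = ev.real, ev.imag
    if np.all(im == 0):                     # purely real eigenvalues
        if re[0] * re[1] < 0:
            return "saddle"
        return "repelling node" if re[0] > 0 else "attracting node"
    if np.allclose(re, 0):                  # purely imaginary pair
        return "center"
    return "repelling focus" if re[0] > 0 else "attracting focus"
```

The topological skeleton of a steady field connects such critical points by separatrices; the survey's subject is precisely what replaces this picture when the field is time-dependent.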
76.
We present a new dynamic programming algorithm that solves the minimum Steiner tree problem on graphs with k terminals in time O*(c^k) for any c > 2. This improves on the running time of the previously fastest parameterized algorithm, the Dreyfus–Wagner algorithm of order O*(3^k), and on the so-called "full set dynamic programming" algorithm, which solves rectilinear instances in time O*(2.38^k).
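For reference, the classic Dreyfus–Wagner dynamic program that the paper improves upon can be sketched as follows. This is the O*(3^k)-style baseline over the graph's metric closure, not the paper's new algorithm; names are our own.

```python
def steiner_tree_cost(n, edges, terminals):
    """Dreyfus–Wagner dynamic program: dp[S][v] is the cost of the
    cheapest tree spanning terminal subset S together with vertex v.
    Vertices are 0..n-1; edges are undirected (u, v, weight) triples."""
    INF = float("inf")
    # metric closure via Floyd–Warshall
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], float(w))
        d[v][u] = min(d[v][u], float(w))
    for m in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][m] + d[m][j] < d[i][j]:
                    d[i][j] = d[i][m] + d[m][j]

    k = len(terminals)
    full = (1 << k) - 1
    dp = [[INF] * n for _ in range(full + 1)]
    for i, t in enumerate(terminals):
        for v in range(n):
            dp[1 << i][v] = d[t][v]
    for S in range(1, full + 1):
        if S & (S - 1) == 0:
            continue  # singleton sets are the base case
        for v in range(n):
            sub = (S - 1) & S  # enumerate proper non-empty submasks of S
            while sub:
                cand = dp[sub][v] + dp[S ^ sub][v]  # merge two subtrees at v
                if cand < dp[S][v]:
                    dp[S][v] = cand
                sub = (sub - 1) & S
        # attach the merged subtree to every vertex along shortest paths
        dp[S] = [min(dp[S][u] + d[u][v] for u in range(n)) for v in range(n)]
    return min(dp[full])
```

The 3^k factor comes from the submask enumeration: every terminal is in S', in S \ S', or outside S, which is exactly the bound the paper's O*(c^k) algorithm beats.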
77.
We prove convergence in distribution for the profile (the number of nodes at each level), normalized by its mean, of random recursive trees when the limit ratio α of the level to the logarithm of the tree size lies in [0, e). Convergence of all moments is shown to hold only for α ∈ [0, 1] (with only convergence of finite moments when α ∈ (1, e)). When the limit ratio is 0 or 1, for which the limit laws are both constant, we prove asymptotic normality for α = 0 and a "quicksort-type" limit law for α = 1, the latter case additionally having a small range where there is no fixed limit law. Our tools are based on the contraction method and the method of moments. Similar phenomena also hold for other classes of trees; we apply our tools to binary search trees and give a complete characterization of the profile. The profiles of these random trees represent concrete examples for which the range of convergence in distribution differs from that of convergence of all moments.
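The object under study, the profile of a random recursive tree, is easy to illustrate by simulation. This Monte Carlo sketch is unrelated to the contraction-method proofs; the function names and seed are our own.

```python
import random

def recursive_tree_depths(n, seed=1):
    """Grow a random recursive tree on n nodes: node i attaches to a
    uniformly random earlier node. Returns the depth of every node."""
    rng = random.Random(seed)
    depths = [0]  # the root sits at level 0
    for i in range(1, n):
        parent = rng.randrange(i)
        depths.append(depths[parent] + 1)
    return depths

def profile(depths):
    """Profile of the tree: number of nodes at each level."""
    levels = [0] * (max(depths) + 1)
    for d in depths:
        levels[d] += 1
    return levels
```

The typical depth grows like log n, so the regime studied in the paper, levels of the form α·log n with α ∈ [0, e), covers the whole non-trivial part of the profile.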
78.
The present study investigated the relationship between the time of nocturnal onset of urinary 6-sulfatoxymelatonin (aMT6s) secretion and the timing of the steepest increase in nocturnal sleepiness ("sleep gate"), as determined by an ultrashort sleep-wake cycle test (7 min sleep, 13 min wake). Twenty-nine men (mean age 23.8 ± 2.7 years) participated. The ultrashort sleep-wake paradigm started at 0700 hr after a night of sleep deprivation and continued for 24 hr until 0700 hr the next day. Electrophysiological recordings were carried out during the 7-min sleep trials, which were then scored conventionally for sleep stages. Urinary aMT6s was measured every 2 hr. The results showed that the timing of the sleep gate was significantly correlated with the onset of aMT6s secretion. These results are discussed in light of the possible role of melatonin in sleep-wake regulation.
79.
The TR1C fragment of turkey skeletal muscle TnC (residues 12-87) comprises the two regulatory calcium binding sites of the protein. Complete assignments of the 1H-NMR resonances of the backbone and amino acid side chains of this domain in the absence of metal ions have been obtained using 2D 1H-NMR techniques. Sequential (i,i+1) and short-range (i,i+3) NOE connectivities define two helix-loop-helix calcium binding motifs, and long-range NOE connectivities indicate a short two-stranded beta-sheet formed between the two calcium binding loops. The two calcium binding sites are different in secondary structure. In terms of helix length, site II conforms to a standard "EF-hand" motif with the first helix ending one residue before the first calcium ligand and the second helix starting one residue after the beta-sheet. In site I, the first helix ends three residues before the first calcium ligand, and the second helix starts three residues after the beta-sheet. A number of long-range NOE connectivities between the helices define their relative orientation and indicate formation of a hydrophobic core between helices A, B, and D. The secondary structure and global fold of the TR1C fragment in solution in the calcium-free state are therefore very similar to those of the corresponding region in the crystal structure of turkey skeletal TnC [Herzberg, O., & James, M.N.G. (1988) J. Mol. Biol. 203, 761-779].
80.
Methods for standardized classification of epileptic seizures are important for both clinical practice and epidemiologic research. In this study, we developed a strategy for standardized classification using a semistructured telephone interview and operational diagnostic criteria. We interviewed 1,957 adults with epilepsy ascertained from voluntary organizations. To confirm and expand the seizure history, we also interviewed a first-degree relative for 67% of subjects and obtained medical records for 59%. Three lay reviewers used all available information to classify seizures. To assess reliability, each reviewer classified a sample of subjects assigned to the others. In addition, an expert physician classified a sample of subjects assigned to two of the reviewers. Agreement was "moderate-substantial" for generalized-onset seizures, both for the comparisons between pairs of lay reviewers and for the neurologist versus lay reviewers. Agreement was "substantial-almost perfect" for partial-onset seizures, both for pairs of lay reviewers and for the neurologist versus lay reviewers. These results suggest that seizures can be reliably classified by lay reviewers, using operational criteria applied to symptoms ascertained in a semistructured telephone interview.
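Agreement labels such as "moderate" and "substantial" conventionally refer to Cohen's kappa, a chance-corrected agreement statistic for two raters. A minimal sketch of the statistic follows; this is illustrative, not the study's analysis code.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters classifying the same subjects:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal label frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)
```

On the Landis–Koch scale, values of 0.41–0.60 are "moderate", 0.61–0.80 "substantial", and 0.81–1.00 "almost perfect", which matches the ranges the abstract reports.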

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号