Article Search
Paid full text: 7083 articles
Free: 291 articles
Free (domestic): 9 articles
Electrical engineering: 99 articles
General: 39 articles
Chemical industry: 1847 articles
Metalworking: 161 articles
Machinery and instruments: 148 articles
Building science: 464 articles
Mining engineering: 26 articles
Energy and power: 152 articles
Light industry: 752 articles
Water conservancy engineering: 34 articles
Petroleum and natural gas: 4 articles
Radio and electronics: 594 articles
General industrial technology: 1366 articles
Metallurgical industry: 577 articles
Atomic energy technology: 80 articles
Automation technology: 1040 articles
2023: 69 articles
2022: 115 articles
2021: 171 articles
2020: 140 articles
2019: 155 articles
2018: 148 articles
2017: 163 articles
2016: 220 articles
2015: 195 articles
2014: 270 articles
2013: 349 articles
2012: 375 articles
2011: 434 articles
2010: 312 articles
2009: 307 articles
2008: 319 articles
2007: 307 articles
2006: 278 articles
2005: 226 articles
2004: 171 articles
2003: 146 articles
2002: 151 articles
2001: 98 articles
2000: 104 articles
1999: 110 articles
1998: 174 articles
1997: 127 articles
1996: 91 articles
1995: 95 articles
1994: 82 articles
1993: 74 articles
1992: 82 articles
1991: 66 articles
1990: 58 articles
1989: 65 articles
1988: 66 articles
1987: 70 articles
1986: 58 articles
1985: 55 articles
1984: 62 articles
1983: 55 articles
1982: 48 articles
1981: 58 articles
1980: 47 articles
1979: 50 articles
1978: 48 articles
1977: 46 articles
1976: 61 articles
1975: 34 articles
1973: 36 articles
A total of 7383 results were found (search time: 19 ms).
81.
Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially for distributed settings, when the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments, with three different real-life datasets and using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
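As context for the TPUT framework mentioned above: TPUT gathers each node's local top-k scores, derives a lower bound on the k-th best aggregate score, and pushes a per-node threshold back so that only items that can still qualify are shipped before exact scores are fetched. The following sketch illustrates this three-phase idea for sum aggregation over in-memory lists; it omits the paper's operator trees, adaptive scan depths, and sampling, and the data is invented.

```python
from collections import defaultdict

def tput_topk(node_lists, k):
    """Simplified three-phase TPUT-style top-k aggregation (SUM scoring).

    node_lists: one dict per node, mapping item -> local score.
    Illustrative sketch only; the paper's optimized variants are not reproduced.
    """
    m = len(node_lists)

    # Phase 1: each node sends its local top-k; the coordinator forms partial sums.
    partial = defaultdict(float)
    for lst in node_lists:
        for item, score in sorted(lst.items(), key=lambda kv: -kv[1])[:k]:
            partial[item] += score
    # Lower bound tau1: the k-th largest partial sum seen so far.
    tau1 = sorted(partial.values(), reverse=True)[k - 1] if len(partial) >= k else 0.0

    # Phase 2: every node reports all items whose local score reaches tau1 / m.
    threshold = tau1 / m
    candidates = set(partial)
    for lst in node_lists:
        for item, score in lst.items():
            if score >= threshold:
                candidates.add(item)

    # Phase 3: fetch exact aggregate scores for the remaining candidates only.
    exact = {item: sum(lst.get(item, 0.0) for lst in node_lists) for item in candidates}
    return sorted(exact.items(), key=lambda kv: -kv[1])[:k]

if __name__ == "__main__":
    nodes = [
        {"a": 9.0, "b": 7.0, "c": 1.0},
        {"a": 3.0, "c": 8.0, "d": 6.0},
        {"b": 5.0, "d": 4.0, "e": 2.0},
    ]
    print(tput_topk(nodes, k=2))  # e.g. [('a', 12.0), ('b', 12.0)]
```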
82.
Four‐dimensional phase‐contrast magnetic resonance imaging (4D PC‐MRI) allows the non‐invasive acquisition of time‐resolved, 3D blood flow information. Stroke volumes (SVs) and regurgitation fractions (RFs) are two of the main measures to assess the cardiac function and severity of valvular pathologies. The flow rates in forward and backward direction through a plane above the aortic or pulmonary valve are required for their quantification. Unfortunately, the calculations are highly sensitive to the plane's angulation since orthogonally passing flow is considered. This often leads to physiologically implausible results. In this work, a robust quantification method is introduced to overcome this problem. Collaborating radiologists and cardiologists were carefully observed while estimating SVs and RFs in various healthy volunteer and patient 4D PC‐MRI data sets with conventional quantification methods, that is, using a single plane above the valve that is freely movable along the centerline. By default it is aligned perpendicular to the vessel's centerline, but free angulation (rotation) is possible. This facilitated the automation of their approach which, in turn, allows deriving statistical information about the plane angulation sensitivity. Moreover, the experts expect a continuous decrease of the blood flow volume along the vessel course. Conventional methods are often unable to produce this behaviour. Thus, we present a procedure to fit a monotonic function that ensures such physiologically plausible results. In addition, this technique was adapted for the usage in branching vessels such as the pulmonary artery. The performed informal evaluation shows the capability of our method to support diagnosis; a parameter evaluation confirms the robustness. Vortex flow was identified as one of the main causes for quantification uncertainties.
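The expectation of a continuously decreasing blood-flow volume along the vessel can be enforced with a monotonic fit. The paper's exact fitting procedure is not reproduced here; the sketch below uses isotonic regression from scikit-learn as one way to turn noisy per-plane stroke-volume estimates into a non-increasing curve (all values are illustrative).

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Illustrative per-plane stroke volumes (ml) sampled along the vessel centerline.
# Measurement noise makes the raw values non-monotonic.
centerline_pos = np.linspace(0.0, 1.0, 10)   # normalized position along the vessel
raw_sv = np.array([92, 90, 93, 88, 89, 85, 86, 83, 84, 80], dtype=float)

# Fit a monotonically non-increasing function of centerline position.
iso = IsotonicRegression(increasing=False)
fitted_sv = iso.fit_transform(centerline_pos, raw_sv)

print(np.round(fitted_sv, 1))  # non-increasing sequence of plausible stroke volumes
```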
83.
Enterprise Architecture Management (EAM) is discussed in academia and industry as a vehicle to guide IT implementations, alignment, compliance assessment, or technology management. Still, a lack of knowledge prevails about how EAM can be successfully used, and how positive impact can be realized from EAM. To determine these factors, we identify EAM success factors and measures through literature reviews and exploratory interviews and propose a theoretical model that explains key factors and measures of EAM success. We test our model with data collected from a cross-sectional survey of 133 EAM practitioners. In a confirmatory analysis of the model, the results confirm an impact of four distinct EAM success factors, ‘EAM product quality’, ‘EAM infrastructure quality’, ‘EAM service delivery quality’, and ‘EAM organizational anchoring’, on two important EAM success measures, ‘intentions to use EAM’ and ‘Organizational and Project Benefits’. We found the construct ‘EAM organizational anchoring’ to be a core focal concept that mediated the effect of success factors such as ‘EAM infrastructure quality’ and ‘EAM service quality’ on the success measures. We also found that ‘EAM satisfaction’ was irrelevant to determining or measuring success. We discuss implications for theory and EAM practice.
84.
This paper discusses approaches for the isolation of deep high aspect ratio through silicon vias (TSV) with respect to a Via Last approach for micro-electro-mechanical systems (MEMS). Selected TSV samples have depths in the range of 170–270 µm and a diameter of 50 µm. The investigations comprise the deposition of different layer stacks by means of subatmospheric and plasma enhanced chemical vapour deposition (PECVD) of tetraethyl orthosilicate, Si(OC2H5)4 (TEOS). Moreover, an etch-back approach and the selective deposition on SiN were also included in the investigations. With respect to the Via Last approach, the contact opening at the TSV bottom by means of a specific spacer-etching method has also been addressed within this paper. Step coverage values of up to 74 % were achieved for the best of those approaches. As an alternative to the SiO2-isolation liners, a polymer coating based on the CVD of Parylene F was investigated, which yields even higher step coverage in the range of 80 % at the lower TSV sidewall for a surface film thickness of about 1000 nm. Leakage current measurements were performed and values below 0.1 nA/cm2 at 10 kV/cm were determined for the Parylene F films, which represents a promising result for the intended application to Via Last MEMS-TSV.
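For orientation on the step-coverage figures: step coverage is commonly defined as the ratio of the liner thickness at the lower sidewall to the thickness deposited on the wafer surface, so the quoted 80 % at a 1000 nm surface film corresponds to roughly 800 nm on the sidewall. A minimal check, assuming this common definition rather than the authors' exact measurement positions:

```python
# Step coverage = sidewall film thickness / surface film thickness (common definition).
surface_thickness_nm = 1000.0   # Parylene F thickness on the wafer surface
step_coverage = 0.80            # ~80 % reported at the lower TSV sidewall

sidewall_thickness_nm = step_coverage * surface_thickness_nm
print(f"Estimated sidewall thickness: {sidewall_thickness_nm:.0f} nm")  # ~800 nm
```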
85.
An important question for the upcoming Semantic Web is how to best combine open world ontology languages, such as the OWL-based ones, with closed world rule-based languages. One of the most mature proposals for this combination is known as hybrid MKNF knowledge bases (Motik and Rosati, 2010 [52]), and it is based on an adaptation of the Stable Model Semantics to knowledge bases consisting of ontology axioms and rules. In this paper we propose a well-founded semantics for nondisjunctive hybrid MKNF knowledge bases that promises to provide better efficiency of reasoning, and that is compatible with both the OWL-based semantics and the traditional Well-Founded Semantics for logic programs. Moreover, our proposal allows for the detection of inconsistencies, possibly occurring in tightly integrated ontology axioms and rules, with only little additional effort. We also identify tractable fragments of the resulting language.
86.
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states.

Program summary

Program title: dmft
Catalogue identifier: AEIL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: ALPS LIBRARY LICENSE version 1.1
No. of lines in distributed program, including test data, etc.: 899 806
No. of bytes in distributed program, including test data, etc.: 32 153 916
Distribution format: tar.gz
Programming language: C++
Operating system: The ALPS libraries have been tested on the following platforms and compilers:
  • Linux with GNU Compiler Collection (g++ version 3.1 and higher), and Intel C++ Compiler (icc version 7.0 and higher)
  • MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0)
  • IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers
  • Compaq Tru64 UNIX with Compaq C++ Compiler (cxx)
  • SGI IRIX with MIPSpro C++ Compiler (CC)
  • HP-UX with HP C++ Compiler (aCC)
  • Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher)
RAM: 10 MB–1 GB
Classification: 7.3
External routines: ALPS [1], BLAS/LAPACK, HDF5
Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of “correlated electron” materials as auxiliary problems whose solution gives the “dynamical mean field” approximation to the self-energy and local correlation functions.
Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2].
Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper.
Running time: 60 s–8 h per iteration.
References:
  [1] A. Albuquerque, F. Alet, P. Corboz, et al., J. Magn. Magn. Mater. 310 (2007) 1187.
  [2] http://arxiv.org/abs/1012.4474, Rev. Mod. Phys., in press.
87.
During summer and autumn 2007, an 11 GHz microwave radiometer was deployed in an experimental tree plantation in Sardinilla, Panama. The opacity of the tree canopy was derived from incoming brightness temperatures received on the ground. A collocated eddy-covariance flux tower measured water vapor fluxes and meteorological variables above the canopy. In addition, xylem sapflow of trees was measured within the flux tower footprint. We observed considerable diurnal differences between measured canopy opacities and modeled theoretical opacities that were closely linked to xylem sapflow. It is speculated that dielectric changes in the leaves induced by the sapflow are causing the observed diurnal changes. In addition, canopy-intercepted rain and dew formation also modulated the diurnal opacity cycle. With an enhanced canopy opacity model accounting for water deposited on the leaves, we quantified the influence of canopy-stored water (i.e. intercepted water and dew) on the opacity. A time series of dew formation and rain interception was directly monitored during a period of two weeks. We found that during light rainfall up to 60% of the rain amount was intercepted by the canopy, whereas during periods of intense rainfall only 4% was intercepted. On average, 0.17 mm of dew was formed during the night. Dew evaporation contributed 5% to the total water vapor flux measured above the canopy.
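By way of background, canopy opacity can be retrieved from a downwelling brightness temperature measured below the canopy using a single-layer, scattering-free radiative-transfer model. The study's actual opacity model is more elaborate; the sketch below only illustrates this basic retrieval, and the temperatures and viewing angle are invented.

```python
import math

def canopy_opacity(tb_ground_k, t_sky_k, t_canopy_k, zenith_angle_deg):
    """Zenith opacity of a single, non-scattering canopy layer.

    tb_ground_k : downwelling brightness temperature measured below the canopy (K)
    t_sky_k     : downwelling sky brightness temperature above the canopy (K)
    t_canopy_k  : physical temperature of the canopy (K)
    """
    mu = math.cos(math.radians(zenith_angle_deg))
    # T_B = T_sky * t + T_canopy * (1 - t), with slant transmissivity t = exp(-tau / mu)
    transmissivity = (tb_ground_k - t_canopy_k) / (t_sky_k - t_canopy_k)
    return -mu * math.log(transmissivity)

# Illustrative values only.
print(round(canopy_opacity(tb_ground_k=120.0, t_sky_k=20.0,
                           t_canopy_k=300.0, zenith_angle_deg=30.0), 3))
```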
88.
Electrical borehole wall images represent micro-resistivity measurements at the borehole wall. The lithology reconstruction is often based on visual interpretation done by geologists. This analysis is very time-consuming and subjective. Different geologists may interpret the data differently. In this work, linear discriminant analysis (LDA) in combination with texture features is used for an automated lithology reconstruction of ODP (Ocean Drilling Program) borehole 1203A drilled during Leg 197. Six rock groups are identified by their textural properties in resistivity data obtained by a Formation MicroScanner (FMS). Although discriminant analysis can be used for multi-class classification, non-optimal decision criteria for certain groups could emerge. For this reason, we use a combination of 2-class (binary) classifiers to increase the overall classification accuracy. The generalization ability of the combined classifiers is evaluated and optimized on a testing dataset, where a classification rate of more than 80% for each of the six rock groups is achieved. The combined, trained classifiers are then applied to the whole dataset, obtaining a statistical reconstruction of the logged formation. Compared to a single multi-class classifier, the combined binary classifiers show better classification results for certain rock groups and more stable results in larger intervals of equal rock type.
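The combination of binary classifiers described above corresponds in spirit to a one-vs-one scheme: one LDA classifier per pair of rock groups, combined by voting. A minimal sketch with scikit-learn follows; the random features stand in for the texture features extracted from the FMS images, which are not available here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier

rng = np.random.default_rng(0)

# Placeholder data: 600 samples, 12 texture features, 6 rock groups.
X = rng.normal(size=(600, 12))
y = rng.integers(0, 6, size=600)
X += y[:, None] * 0.5  # make the classes weakly separable

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# One binary LDA classifier per pair of classes, combined by voting.
clf = OneVsOneClassifier(LinearDiscriminantAnalysis())
clf.fit(X_train, y_train)
print(f"Held-out classification rate: {clf.score(X_test, y_test):.2f}")
```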
89.
This paper presents an analytical method to derive the worst-case traffic pattern caused by a task graph mapped to a cache-coherent shared-memory system. Our analysis allows designers to rapidly evaluate the impact of different mappings of tasks to IP cores on the traffic pattern. The accuracy varies with the application’s data sharing pattern, and is around 65% in the average case and 1% in the best case when considering the traffic pattern as a whole. For individual connections, our method produces tight worst-case bandwidths.
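A greatly simplified piece of the underlying bookkeeping: given a task graph whose edges carry data volumes and a candidate mapping of tasks to cores, the traffic offered to each core-to-core connection is the sum of the volumes of all edges whose endpoints land on different cores. The sketch below shows only this aggregation; the paper's worst-case analysis of cache-coherent traffic is considerably more detailed, and the task graph here is invented.

```python
from collections import defaultdict

# Task graph edges: (producer task, consumer task, data volume in bytes per period).
task_graph = [
    ("t0", "t1", 4096),
    ("t0", "t2", 2048),
    ("t1", "t3", 1024),
    ("t2", "t3", 1024),
]

# One candidate mapping of tasks to cores to evaluate.
mapping = {"t0": "core0", "t1": "core1", "t2": "core0", "t3": "core1"}

def traffic_per_connection(edges, mapping):
    """Aggregate inter-core traffic for a given task-to-core mapping."""
    traffic = defaultdict(int)
    for src_task, dst_task, volume in edges:
        src_core, dst_core = mapping[src_task], mapping[dst_task]
        if src_core != dst_core:  # only off-core transfers generate network traffic
            traffic[(src_core, dst_core)] += volume
    return dict(traffic)

print(traffic_per_connection(task_graph, mapping))  # {('core0', 'core1'): 5120}
```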
90.
The datasets used in statistical analyses are often small in the sense that the number of observations n is less than 5 times the number of parameters p to be estimated. In contrast, methods of robust regression are usually optimized in terms of asymptotics with an emphasis on efficiency and maximal bias of estimated coefficients. Inference, i.e., determination of confidence and prediction intervals, is proposed as a complementary criterion. An analysis of MM-estimators leads to the development of a new scale estimate, the Design Adaptive Scale Estimate, and to an extension of the MM-estimate, the SMDM-estimate, as well as a suitable ψ-function. A simulation study shows and a real data example illustrates that the SMDM-estimate has better performance for small n/p and that the use of the new scale estimate and of a slowly redescending ψ-function is crucial for adequate inference.
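To make the role of a slowly redescending ψ-function concrete, the sketch below runs a plain iteratively reweighted least-squares M-estimation loop with Tukey bisquare weights and a MAD residual scale. It is not the SMDM-estimator or the Design Adaptive Scale Estimate proposed in the paper, only an illustration of how a redescending ψ downweights and eventually ignores gross outliers; the data is simulated.

```python
import numpy as np

def bisquare_weights(residuals, scale, c=4.685):
    """Tukey bisquare weights: redescending, exactly zero beyond c * scale."""
    u = residuals / (c * scale)
    w = (1.0 - u**2) ** 2
    w[np.abs(u) >= 1.0] = 0.0
    return w

def m_estimate(X, y, n_iter=30):
    """Simple IRLS M-estimator with a redescending psi (illustrative only)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # LS start (a robust start is used in practice)
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745  # MAD residual scale
        w = bisquare_weights(r, scale)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(1)
n = 30  # small sample with a few gross outliers (illustrative)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + 0.3 * rng.normal(size=n)
y[:3] += 15.0  # gross outliers
print(np.round(m_estimate(X, y), 2))  # close to the true coefficients despite the outliers
```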