Sorted by relevance: 6903 results found (search time: 15 ms)
82.
Recent advances in computing technology have brought multimedia information processing to prominence. The ability to digitize, store, retrieve, process, and transport analog information in digital form has changed the dimensions of information handling. Several architectural and network configurations have been proposed for efficient and reliable digital video delivery systems; however, these proposals address only subsets of the whole problem. In this paper, we discuss the characteristics of video services, including Cable Television, Pay-Per-View, and Video Repository Centers, as well as the requirements for Video On Demand services. With respect to these video services, we analyze two important video properties: image quality and response time. We present configurations of a Digital Video Delivery System (DVDS) built from three general system components — servers, clients, and connectivity — and analyze the pertinent issues in developing each component. We also present a DVDS architecture that can support the functionalities of the various video services. Lastly, we discuss data allocation strategies that impact the performance of interactive video on demand (IVOD), and present preliminary results from a study using a limited form of mirroring to support high-performance IVOD.
83.
Greenhouse gas inventories and emissions reduction programs require robust methods to quantify carbon sequestration in forests. We compare forest carbon estimates from Light Detection and Ranging (Lidar) data and QuickBird high-resolution satellite images, calibrated and validated by field measurements of individual trees. We conducted the tests at two sites in California: (1) 59 km² of secondary and old-growth coast redwood (Sequoia sempervirens) forest (Garcia-Mailliard area) and (2) 58 km² of old-growth Sierra Nevada forest (North Yuba area). Regression of aboveground live tree carbon density, calculated from field measurements, against Lidar height metrics and against QuickBird-derived tree crown diameter generated equations of carbon density as a function of the remote sensing parameters. Employing Monte Carlo methods, we quantified uncertainties of forest carbon estimates arising from uncertainties in field measurements, remote sensing accuracy, biomass regression equations, and spatial autocorrelation. Validation of QuickBird crown diameters against field measurements of the same trees showed significant correlation (r = 0.82, P < 0.05). Comparison of stand-level Lidar height metrics with field-derived Lorey's mean height also showed significant correlation (Garcia-Mailliard r = 0.94, P < 0.0001; North Yuba r = 0.89, P < 0.0001). Field measurements of five aboveground carbon pools (live trees, dead trees, shrubs, coarse woody debris, and litter) yielded aboveground carbon densities (mean ± standard error without Monte Carlo) as high as 320 ± 35 Mg ha⁻¹ (old-growth coast redwood) and 510 ± 120 Mg ha⁻¹ (red fir [Abies magnifica] forest), as great as or greater than tropical rainforest. Aboveground carbon in live trees, the pool detected by Lidar and QuickBird, accounted for 70–97% of the total. Large sample sizes in the Monte Carlo analyses of remote sensing data generated low estimates of uncertainty. Lidar showed lower uncertainty and higher accuracy than QuickBird, due to the high correlation of biomass with height and undercounting of trees by the crown detection algorithm. Lidar achieved uncertainties of < 1%, providing estimates of aboveground live tree carbon density (mean ± 95% confidence interval with Monte Carlo) of 82 ± 0.7 Mg ha⁻¹ in Garcia-Mailliard and 140 ± 0.9 Mg ha⁻¹ in North Yuba. The method that we tested, combining field measurements, Lidar, and Monte Carlo, can produce robust wall-to-wall spatial data on forest carbon.
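The Monte Carlo uncertainty propagation described above can be sketched in a few lines: draw the regression coefficients and the remote-sensing metric from their error distributions, recompute the carbon estimate many times, and summarize the spread. The linear calibration and every number below are illustrative assumptions, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical linear calibration: carbon density (Mg/ha) as a function of a
# Lidar canopy-height metric (m). Coefficients and standard errors are invented.
slope, slope_se = 3.2, 0.15
intercept, intercept_se = -5.0, 2.0
height, height_se = 28.0, 1.5   # one stand's height metric and its uncertainty

# Propagate all three error sources jointly by resampling.
n = 10_000
samples = (
    rng.normal(slope, slope_se, n) * rng.normal(height, height_se, n)
    + rng.normal(intercept, intercept_se, n)
)

mean = samples.mean()
lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"carbon density ≈ {mean:.1f} Mg/ha (95% CI {lo:.1f}–{hi:.1f})")
```

In the real study the resampling additionally covers field-measurement error and spatial autocorrelation; the same pattern extends by adding those terms to the sampled expression.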
84.
In the past, a five-mask LTPS CMOS process requiring only a single ion-doping step was used. Based on that process, all necessary components for the realization of a fully integrated AMOLED display using a 3T1C current-feedback pixel circuit have recently been developed. The integrated data driver is based on a newly developed LTPS operational amplifier, which does not require any compensation for Vth or mobility variations. Only one operational amplifier per column is used to perform digital-to-analog conversion as well as current control. In order to achieve high-precision analog behavior, the operational amplifier is embedded in a switched-capacitor network. In addition to circuit verification by simulation and analytical calculation, a 1-in. fully integrated AMOLED demonstrator was successfully built. To the best of the authors' knowledge, this is the first implementation of a fully integrated AMOLED display with current feedback.
85.
In silico models that predict the rate of human renal clearance for a diverse set of drugs that exhibit both active secretion and net re-absorption have been produced using three statistical approaches. Partial Least Squares (PLS) and Random Forests (RF) have been used to produce continuous models, whereas Classification And Regression Trees (CART) has only been used for a classification model. The best models generated from either PLS or RF are significant and can predict acids/zwitterions, bases, and neutrals with approximate average fold errors of 3, 3, and 4, respectively, for an independent test set that covers oral drug-like property space. These models contain information beyond any influence of plasma protein binding on the rate of renal clearance. CART has been used to generate a classification tree leading to a simple set of Renal Clearance Rules (RCR) that can be applied to man. The rules are influenced by lipophilicity and ion class and correctly predict 60% of an independent test set. These percentages increase to 71% and 79% for drugs with renal clearances of < 0.1 ml/min/kg and > 1 ml/min/kg, respectively. As far as the authors are aware, these are the first models in the literature that predict the rate of human renal clearance, and they can be used to manipulate molecular properties toward new drugs that are less likely to fail due to renal clearance.
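The "average fold error" quoted above is conventionally the geometric mean of the fold differences between predicted and observed clearance; a value of 3 means predictions fall within about 3-fold of the measured rate on average. A minimal sketch with made-up clearance values (not data from the paper):

```python
import numpy as np

def average_fold_error(observed, predicted):
    """Geometric-mean fold error: 10 ** mean(|log10(pred / obs)|).
    Equals 1.0 for perfect predictions; symmetric in over-/under-prediction."""
    log_ratios = np.abs(np.log10(np.asarray(predicted) / np.asarray(observed)))
    return 10 ** log_ratios.mean()

# Illustrative renal clearance values (ml/min/kg); invented for the example.
obs = [0.10, 0.50, 1.2, 2.0]
pred = [0.25, 0.40, 2.4, 1.0]
print(average_fold_error(obs, pred))  # roughly 2-fold error on average
```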
86.
The GreenCert™ system was developed to help farm and ranch owners quantify, standardize, pool, and market CO2 emissions offset (sequestration) credits derived from improved rangeland or cropland management. It combines a user-friendly interface with the CENTURY biogeochemical model, a GIS database of soil and climate parameters, and a Monte Carlo-based uncertainty estimation methodology. This paper focuses on uncertainty treatment, discussing sources of error, parameter distributions, and the Monte Carlo randomization approach, culminating in a sensitivity analysis of model parameters. Idealized crop and grazing scenarios were used to evaluate the uncertainty of modeled soil organic carbon stocks and stock changes stemming from variability in site and management parameters. Normalized sensitivity coefficients and an integrated index for relative sensitivity of the model to the ensemble of the tested variables indicate that environmental factors are the most important in determining the actual size of the soil carbon stock, but that management is a much more important determinant of short- to medium-term carbon fluxes. GreenCert™ uses the patented C-LOCK® approach to efficiently limit uncertainty in the most critical phase of the modelling process by maximizing the use of available management information, and quantifies the remaining uncertainty in an unbiased fashion using Monte Carlo parameter randomization.
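A normalized sensitivity coefficient of the kind used in such analyses can be computed by finite differences: S = (x / Y) · ∂Y/∂x, which makes coefficients comparable across parameters with different units. The toy soil-carbon function below is a hypothetical stand-in for CENTURY, not its actual equations.

```python
def toy_soc_model(clay_frac, carbon_input):
    # Hypothetical linear soil-organic-carbon response (Mg C/ha); illustrative only.
    return 40.0 * clay_frac + 12.0 * carbon_input

def normalized_sensitivity(model, args, i, rel_step=0.01):
    """Relative sensitivity S = (x / Y) * dY/dx via central finite differences."""
    x = args[i]
    h = x * rel_step
    up, dn = list(args), list(args)
    up[i] += h
    dn[i] -= h
    dy_dx = (model(*up) - model(*dn)) / (2 * h)
    return x * dy_dx / model(*args)

args = (0.25, 3.0)  # clay fraction, annual C input (illustrative values)
print([round(normalized_sensitivity(toy_soc_model, args, i), 3) for i in range(2)])
```

For this linear toy model the two coefficients sum to 1, showing directly which input dominates the modeled stock.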
87.
With the arrival of GPS, satellite remote sensing, and personal computers, the last two decades have witnessed rapid advances in the field of spatially-explicit marine ecological modeling. But with this innovation has come complexity. To keep up, ecologists must master multiple specialized software packages, such as ArcGIS for display and manipulation of geospatial data, R for statistical analysis, and MATLAB for matrix processing. This requires a costly investment of time and energy in learning computer programming, a high hurdle for many ecologists. To provide easier access to advanced analytic methods, we developed Marine Geospatial Ecology Tools (MGET), an extensible collection of powerful, easy-to-use, open-source geoprocessing tools that ecologists can invoke from ArcGIS without resorting to computer programming. Internally, MGET integrates Python, R, MATLAB, and C++, bringing the power of these specialized platforms to tool developers without requiring developers to orchestrate the interoperability between them. In this paper, we describe MGET's software architecture and the tools in the collection. Next, we present an example application: a habitat model for Atlantic spotted dolphin (Stenella frontalis) that predicts dolphin presence using a statistical model fitted with oceanographic predictor variables. We conclude by discussing the lessons we learned engineering a highly integrated tool framework.
88.
In many contemporary collaborative inquiry learning environments, chat is used as a means of communication. Still, it remains an open question whether chat is an appropriate medium to support the deep reasoning students need to perform in such environments. The purpose of the present study was to compare the impact of chat versus face-to-face communication on performance in a collaborative computer-supported modeling task. Forty-four students from 11th-grade pre-university education, working in dyads, were observed during modeling. Dyads communicated either face-to-face or through a chat tool. Students' reasoning during modeling was assessed by analyzing verbal protocols; in addition, we assessed the quality of student-built models. Results show that while model quality scores did not differ between conditions, students communicating through chat compressed their interactions, spending significantly less time on surface reasoning than students who communicated face-to-face.
89.
The general problem of answering top-k queries can be modeled using lists of data items sorted by their local scores. The main algorithm proposed so far for answering top-k queries over sorted lists is the Threshold Algorithm (TA). However, TA may still incur many useless accesses to the lists. In this paper, we propose two algorithms that are much more efficient than TA. First, we propose the best position algorithm (BPA). For any database instance (i.e. set of sorted lists), we prove that BPA stops at least as early as TA and that its execution cost is never higher than that of TA. We show that there are databases over which BPA executes top-k queries O(m) times faster than TA, where m is the number of lists, and that the execution cost of our algorithm can be (m−1) times lower than that of TA. Second, we propose the BPA2 algorithm, which is much more efficient than BPA: the number of accesses to the lists done by BPA2 can be about (m−1) times lower than that of BPA. We evaluated the performance of our algorithms through extensive experimental tests. The results show that over our test databases, BPA and BPA2 achieve significant performance gains in comparison with TA.
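For readers unfamiliar with the baseline, Fagin's Threshold Algorithm can be sketched as follows: make sorted accesses to the m lists in parallel, look up each newly seen item's score in every other list by random access, and stop once the k best aggregated scores all reach the threshold formed from the last scores seen under sorted access. This is a sketch of TA only, not of the BPA/BPA2 algorithms proposed in the paper; the list contents are invented.

```python
import heapq

def threshold_algorithm(lists, k):
    """TA over m lists of (item, score) sorted by descending score;
    the overall score is the sum of local scores."""
    m = len(lists)
    index = [dict(lst) for lst in lists]  # random-access lookup per list
    seen = set()
    top = []  # min-heap of (total_score, item), keeps the k best so far
    for depth in range(len(lists[0])):
        last = []  # last score seen in each list at this depth
        for j in range(m):
            item, score = lists[j][depth]
            last.append(score)
            if item not in seen:
                seen.add(item)
                total = sum(idx[item] for idx in index)  # random accesses
                heapq.heappush(top, (total, item))
                if len(top) > k:
                    heapq.heappop(top)
        threshold = sum(last)
        # Stop when the k-th best score is at least the threshold: no unseen
        # item can beat it.
        if len(top) == k and top[0][0] >= threshold:
            break
    return sorted(top, reverse=True)

# Two invented lists, integer local scores, sorted descending.
l1 = [("a", 9), ("b", 8), ("c", 5), ("d", 3)]
l2 = [("b", 9), ("a", 7), ("d", 6), ("c", 2)]
print(threshold_algorithm([l1, l2], 2))  # → [(17, 'b'), (16, 'a')]
```

BPA improves on this scheme by exploiting the best positions already seen in each list to form a tighter stopping condition, which is where its O(m) speedup over TA comes from.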
90.
In this letter, we propose a general framework for studying neural mass models defined by ordinary differential equations. By studying the bifurcations of the solutions to these equations and their sensitivity to noise, we establish an important relation, similar to a dictionary, between their behaviors and normal and pathological, especially epileptic, cortical patterns of activity. We then apply this framework to the analysis of two models that feature most phenomena of interest, the Jansen and Rit model, and the slightly more complex model recently proposed by Wendling and Chauvel. This model-based approach allows us to test various neurophysiological hypotheses on the origin of pathological cortical behaviors and investigate the effect of medication. We also study the effects of the stochastic nature of the inputs, which gives us clues about the origins of such important phenomena as interictal spikes, interictal bursts, and fast onset activity that are of particular relevance in epilepsy.
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号