991.
Bimodal biometrics has been found to outperform single biometrics and is usually implemented with fusion at the matching-score or decision level, even though such fusion exploits less information from the two biometric traits for personal authentication than fusion at the feature level. This paper proposes matrix-based complex PCA (MCPCA), a feature-level fusion method for bimodal biometrics that uses a complex matrix to represent the two biometric traits of one subject: the two images from the two traits are taken, respectively, as the real and imaginary parts of a complex matrix. MCPCA applies a novel and mathematically tractable algorithm for extracting features directly from complex matrices. We also show that MCPCA has a sound theoretical foundation and that the previous matrix-based PCA technique, two-dimensional PCA (2DPCA), is a special case of the proposed method. On the other hand, the features extracted by the method may contain a large number of data items (each real number in the obtained features is called one data item). To obtain features with fewer data items, we devised a two-step feature extraction scheme. Our experiments show that the proposed two-step scheme achieves higher classification accuracy than the 2DPCA and PCA techniques.
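The core construction described above — stacking two aligned biometric images as the real and imaginary parts of one complex matrix and diagonalizing the resulting image covariance — can be sketched roughly as follows. This is a minimal illustration assuming equally sized, pre-registered grayscale images; the function names and eigen-decomposition details are illustrative, not the authors' reference implementation.

```python
import numpy as np

def mcpca_projection(real_imgs, imag_imgs, n_components=8):
    """Minimal sketch of matrix-based complex PCA (MCPCA).

    real_imgs, imag_imgs: lists of equally sized 2-D arrays (the two
    biometric traits of each subject), already aligned and normalized.
    Returns a projection matrix whose columns are the leading
    eigenvectors of the complex image-covariance matrix.
    """
    # Fuse each image pair at the feature level as one complex matrix.
    Z = [r + 1j * i for r, i in zip(real_imgs, imag_imgs)]
    mean = sum(Z) / len(Z)

    # Image covariance (analogous to 2DPCA, but Hermitian-valued).
    cols = Z[0].shape[1]
    G = np.zeros((cols, cols), dtype=complex)
    for z in Z:
        d = z - mean
        G += d.conj().T @ d
    G /= len(Z)

    # Leading eigenvectors of the Hermitian matrix give the projection axes.
    eigvals, eigvecs = np.linalg.eigh(G)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order]

def extract_features(img_pair, W):
    """Project one complex sample onto the MCPCA axes."""
    z = img_pair[0] + 1j * img_pair[1]
    return z @ W   # rows x n_components complex feature matrix
```

Note that when the imaginary part is identically zero this covariance reduces to the ordinary 2DPCA image covariance, which is consistent with the claim that 2DPCA is a special case of MCPCA.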
992.
In this paper, we develop mathematical models for the simultaneous consideration of suitability and optimality in asset allocation. We use a hybrid approach that combines a behavioral survey, cluster analysis, the analytic hierarchy process, and fuzzy mathematical programming.
993.
One of the important obstacles in image-based analysis of the human face is the 3D nature of the problem versus the 2D nature of most imaging systems used for biometric applications. As a result, accuracy is strongly influenced by the viewpoint of the images, with frontal views being the most thoroughly studied. However, when fully automatic face analysis systems are designed, capturing frontal-view images cannot be guaranteed: examples include surveillance systems, car-driver images, and settings where architectural constraints prevent placing a camera frontally to the subject. Taking advantage of the fact that most facial features lie approximately on the same plane, we propose the use of projective geometry across different views. An active shape model constructed from frontal-view images can then be applied directly to the segmentation of pictures taken from other viewpoints. The proposed extension proves significantly more pose-invariant than the standard approach. The method is validated on 360 images from the AV@CAR database, systematically divided into three different rotations (to both sides) as well as upper and lower views due to nodding. The presented tests are among the largest quantitative results reported to date for face segmentation under varying poses.
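A minimal sketch of the planar-projective idea, under the assumption that a handful of stable landmarks can be matched between the frontal view and the rotated view: estimate a homography from those correspondences and warp the frontal-view shape model into the new view before fitting. The helper names and landmark choice are illustrative, not the paper's implementation.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography mapping src -> dst.
    src, dst: (N, 2) arrays of corresponding 2-D points, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)

def map_shape(points, H):
    """Apply the homography to a set of 2-D model points."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]

# Usage idea: estimate H from a few roughly coplanar landmarks (eye corners,
# nose base, mouth corners) detected in the non-frontal view, then warp the
# frontal-view active shape model with map_shape() before running the fit.
```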
994.
Document image binarization converts gray-level images into binary images, a capability that has become important on many portable devices in recent years, including PDAs and mobile camera phones. Given the limited memory and computational power of portable devices, reducing the computational complexity of an embedded system is a priority. This work presents an efficient document image binarization algorithm with low computational complexity and high performance. By integrating the advantages of global and local methods, the proposed algorithm divides the document image into several regions and then constructs a threshold surface from the diversity and intensity of each region to derive the binary image. Experimental results demonstrate that the proposed method provides a promising binarization outcome at low computational cost.
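The general region-based hybrid scheme can be sketched as below: tile the image, use a local statistic where a tile has enough contrast ("diversity"), fall back to a global value on flat tiles, and expand the per-tile values into a threshold surface. The tile size, contrast floor, and k parameter are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def hybrid_binarize(gray, tile=32, k=0.3, contrast_floor=12.0):
    """Minimal sketch of a hybrid global/local document binarization.

    gray: 2-D uint8 array. Tiles with sufficient local contrast get a
    local (mean - k*std) threshold; flat tiles use the global mean; the
    coarse per-tile thresholds are expanded into a full surface.
    """
    h, w = gray.shape
    global_t = gray.mean()
    ty = int(np.ceil(h / tile))
    tx = int(np.ceil(w / tile))
    surface = np.empty((ty, tx))
    for i in range(ty):
        for j in range(tx):
            block = gray[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            mean, std = block.mean(), block.std()
            # Diversity check: flat background regions use the global value.
            surface[i, j] = mean - k * std if std > contrast_floor else global_t
    # Nearest-neighbour expansion of the coarse threshold surface.
    full = np.kron(surface, np.ones((tile, tile)))[:h, :w]
    return (gray > full).astype(np.uint8) * 255  # text -> 0, background -> 255
```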
995.
Land use and land cover (LULC) maps from remote sensing are vital for monitoring, understanding and predicting the effects of the complex human-nature interactions that span local, regional and global scales. We present a method to map annual LULC at a regional spatial scale with source data and processing techniques that permit scaling to broader spatial and temporal scales, while maintaining a consistent classification scheme and accuracy. Using the Dry Chaco ecoregion in Argentina, Bolivia and Paraguay as a test site, we derived a suite of predictor variables for 2001 to 2007 from the MODIS 250 m vegetation index product (MOD13Q1). These variables included annual statistics of red, near-infrared and enhanced vegetation index (EVI) values, phenological metrics derived from EVI time series, and slope and elevation. For reference data, we visually interpreted percent cover of eight classes at locations with high-resolution QuickBird imagery in Google Earth. An adjustable majority-cover threshold was used to assign samples to a dominant class. When compared to field data, this imagery showed georeferencing error of less than 5% of the length of a MODIS pixel, while most class-interpretation error was due to confusion between agriculture and herbaceous vegetation. We used the Random Forests classifier to identify the best sets of predictor variables and percent-cover thresholds for discriminating our LULC classes; the best variable set included all predictor variables and a cover threshold of 80%. This optimal Random Forests model was used to map LULC for each year between 2001 and 2007, followed by a per-pixel, 3-year temporal filter that removes disallowed LULC transitions. Our sequence of maps had an overall accuracy of 79.3%, producer's accuracy from 51.4% (plantation) to 95.8% (woody vegetation), and user's accuracy from 58.9% (herbaceous vegetation) to 100.0% (water). We attribute map-class confusion to limited spectral information, sub-pixel spectral mixing, georeferencing error and human error in interpreting reference samples. We used our maps to assess woody vegetation change in the Dry Chaco from 2002 to 2006, a period of rapid deforestation driven by soybean and planted-pasture expansion. The method can readily be applied to other regions or continents to produce spatially and temporally consistent information on annual LULC.
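The labeling and classification steps described above map naturally onto a standard Random Forests workflow; the sketch below shows the majority-cover labeling rule and the classifier training, with scikit-learn standing in for the study's actual software and with an illustrative column layout for the predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def label_from_cover(cover_fracs, classes, threshold=0.8):
    """Assign a reference sample to a class only if one class covers at
    least `threshold` of the interpreted high-resolution footprint;
    otherwise discard it (return None). Mirrors the adjustable
    majority-cover threshold above; 0.8 is the value the study selected."""
    i = int(np.argmax(cover_fracs))
    return classes[i] if cover_fracs[i] >= threshold else None

def train_lulc_model(X, y, n_trees=500):
    """X: per-sample predictors, e.g. annual red/NIR/EVI statistics,
    phenology metrics, slope and elevation (column order is illustrative).
    y: class labels produced by label_from_cover()."""
    rf = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                n_jobs=-1, random_state=0)
    rf.fit(X, y)
    # Out-of-bag accuracy and feature importances can guide the choice of
    # the best predictor set, analogous to the variable-selection step above.
    return rf, rf.oob_score_, rf.feature_importances_
```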
996.
Greenhouse gas inventories and emissions-reduction programs require robust methods to quantify carbon sequestration in forests. We compare forest carbon estimates from Light Detection and Ranging (Lidar) data and QuickBird high-resolution satellite images, calibrated and validated by field measurements of individual trees. We conducted the tests at two sites in California: (1) 59 km² of secondary and old-growth coast redwood (Sequoia sempervirens) forest (Garcia-Mailliard area) and (2) 58 km² of old-growth Sierra Nevada forest (North Yuba area). Regression of aboveground live-tree carbon density, calculated from field measurements, against Lidar height metrics and against QuickBird-derived tree crown diameter generated equations of carbon density as a function of the remote sensing parameters. Employing Monte Carlo methods, we quantified uncertainties of the forest carbon estimates arising from uncertainties in field measurements, remote sensing accuracy, biomass regression equations, and spatial autocorrelation. Validation of QuickBird crown diameters against field measurements of the same trees showed significant correlation (r = 0.82, P < 0.05). Comparison of stand-level Lidar height metrics with field-derived Lorey's mean height showed significant correlation (Garcia-Mailliard r = 0.94, P < 0.0001; North Yuba r = 0.89, P < 0.0001). Field measurements of five aboveground carbon pools (live trees, dead trees, shrubs, coarse woody debris, and litter) yielded aboveground carbon densities (mean ± standard error without Monte Carlo) as high as 320 ± 35 Mg ha⁻¹ (old-growth coast redwood) and 510 ± 120 Mg ha⁻¹ (red fir [Abies magnifica] forest), as great as or greater than in tropical rainforest. Lidar and QuickBird detected the aboveground carbon in live trees, 70-97% of the total. Large sample sizes in the Monte Carlo analyses of the remote sensing data produced low estimates of uncertainty. Lidar showed lower uncertainty and higher accuracy than QuickBird, owing to the high correlation of biomass with height and to undercounting of trees by the crown-detection algorithm. Lidar achieved uncertainties of < 1%, providing estimates of aboveground live-tree carbon density (mean ± 95% confidence interval with Monte Carlo) of 82 ± 0.7 Mg ha⁻¹ in Garcia-Mailliard and 140 ± 0.9 Mg ha⁻¹ in North Yuba. The method that we tested, combining field measurements, Lidar, and Monte Carlo analysis, can produce robust wall-to-wall spatial data on forest carbon.
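The Monte Carlo propagation step can be sketched as follows, under simplifying assumptions: a linear carbon-versus-height regression (the study's actual allometric equations differ) and independent Gaussian errors for the regression coefficients and the remote-sensing heights. Each draw perturbs the inputs and recomputes the landscape-mean carbon density, from which a 95% interval is read off.

```python
import numpy as np

def mc_carbon_density(heights, fit_coefs, coef_cov, height_sd,
                      n_draws=10000, rng=None):
    """Minimal Monte Carlo sketch for carbon-density uncertainty.

    heights: array of Lidar height metrics (one per pixel or plot).
    fit_coefs, coef_cov: intercept/slope and covariance of an assumed
        linear carbon ~ height regression.
    height_sd: standard deviation of the remote-sensing height error.
    Returns the mean carbon density and a 95% confidence interval.
    """
    rng = np.random.default_rng(rng)
    heights = np.asarray(heights, dtype=float)
    means = np.empty(n_draws)
    for d in range(n_draws):
        a, b = rng.multivariate_normal(fit_coefs, coef_cov)       # regression error
        h = heights + rng.normal(0.0, height_sd, size=heights.shape)  # sensor error
        means[d] = np.mean(a + b * h)                             # landscape mean, Mg/ha
    lo, hi = np.percentile(means, [2.5, 97.5])
    return means.mean(), (lo, hi)
```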
997.
Hakjoo Oh, Kwangkeun Yi. Software, 2010, 40(8): 585-603
We present a simple algorithmic extension of the approximate call-strings approach that mitigates the substantial performance degradation caused by spurious interprocedural cycles. In a realistic setting, spurious interprocedural cycles are the key reason why the approximate call/return semantics used in both context-sensitive and context-insensitive static analysis can make the analysis much slower than expected. In approximate call-strings-based context-sensitive static analysis, because the number of distinguished contexts is finite, multiple call contexts are inevitably joined at the entry of a procedure and the output at the exit is propagated to multiple return sites. We found that these multiple returns frequently create a single large cycle (we call it a 'butterfly cycle') covering almost all parts of the program, and such a spurious cycle makes analyses very slow and inaccurate. Our simple algorithmic technique (within the fixpoint iteration algorithm) identifies and prunes these spurious interprocedural flows. The technique's effectiveness is demonstrated by experiments with a realistic C analyzer, reducing analysis time by 7-96%. Because the technique is algorithmic, it can be applied to existing analyses without changing the underlying abstract semantics, it is orthogonal to the underlying abstract semantics' context-sensitivity, and its correctness is straightforward. Copyright © 2010 John Wiley & Sons, Ltd.
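A toy sketch of the pruning idea follows: during the fixpoint iteration, remember for each (procedure, context) entry which call sites actually contributed a value, and propagate the exit state only back to those call sites' return nodes instead of to every recorded return site. This is an illustration of the general idea under my own simplified formulation, not the authors' algorithm; the names `join_at_entry` and `propagate_exit` are hypothetical.

```python
from collections import defaultdict

# (proc, ctx) -> set of call sites whose values actually reached this entry
entry_contributors = defaultdict(set)

def join_at_entry(proc, ctx, call_site, caller_state, entry_states, join):
    """Join a caller's abstract state into the callee entry and record
    the caller as a genuine contributor for this (proc, ctx)."""
    entry_contributors[(proc, ctx)].add(call_site)
    key = (proc, ctx)
    entry_states[key] = join(entry_states.get(key), caller_state)
    return entry_states[key]

def propagate_exit(proc, ctx, exit_state, return_sites):
    """Yield (return_site, state) pairs only for genuine contributors.

    Without this filter the exit state would flow to *every* return site
    recorded for (proc, ctx), which is what stitches together the large
    spurious 'butterfly' cycle; the filter prunes those flows."""
    for call_site, return_site in return_sites:
        if call_site in entry_contributors[(proc, ctx)]:
            yield return_site, exit_state
```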
998.
TORQUE is a simulation engine built on the 3D graphics library OpenGL. By introducing 3D modeling software, this paper implements scene construction for a simulation system, analyzes the storage characteristics of the STL model format, and designs and implements the reading and optimization of STL models, improving the 3D rendering quality and runtime efficiency of the whole system. Using the multibody force-analysis tool ADAM and taking a vehicle traveling in a complex environment as the research object, force models and equations of motion are constructed that describe the vehicle's running behavior fairly objectively; the influence of each factor on driving safety is analyzed, providing a reference for drivers and helping to improve vehicle driving safety.
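For the STL reading and optimization step, a typical preprocessing pass parses the binary STL layout (80-byte header, triangle count, then 50 bytes per facet) and deduplicates shared vertices into an indexed mesh, which is what makes rendering faster. The sketch below is written in Python for illustration; the system described above is implemented against TORQUE/OpenGL, and the indexed-mesh choice is an assumption about what "optimization" entails.

```python
import struct

def read_binary_stl(path):
    """Read a binary STL file and return an indexed mesh:
    (vertices, triangles) with shared vertices stored only once."""
    vertices, index_of, triangles = [], {}, []
    with open(path, 'rb') as f:
        f.read(80)                                    # 80-byte header, ignored
        (n_tris,) = struct.unpack('<I', f.read(4))    # triangle count
        for _ in range(n_tris):
            data = struct.unpack('<12fH', f.read(50)) # normal, 3 vertices, attribute
            tri = []
            for v in range(3):
                vert = data[3 + 3 * v: 6 + 3 * v]     # skip the facet normal
                if vert not in index_of:              # deduplicate shared vertices
                    index_of[vert] = len(vertices)
                    vertices.append(vert)
                tri.append(index_of[vert])
            triangles.append(tuple(tri))
    return vertices, triangles
```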
999.
This paper presents a method of developing LabVIEW programs on the Windows CE operating system using a touch-panel module tool, and applies it to motor vibration measurement and analysis, producing a portable motor vibration analysis device based on an embedded platform. The powerful mathematical functions of the LabVIEW software are used to perform spectrum analysis of the vibration signal, which serves as a basis for motor fault prevention and diagnosis and is of considerable practical significance.
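The spectrum-analysis step — windowing the sampled vibration signal, taking its FFT, and reading off the dominant frequency peaks — is shown below as a NumPy stand-in for the LabVIEW computation described above; the window choice and peak-picking are illustrative.

```python
import numpy as np

def vibration_spectrum(signal, fs):
    """One-sided amplitude spectrum of a vibration signal.

    signal: 1-D array of acceleration/velocity samples.
    fs: sampling rate in Hz.
    """
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    window = np.hanning(n)                          # reduce spectral leakage
    spec = np.fft.rfft((signal - signal.mean()) * window)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amps = 2.0 * np.abs(spec) / np.sum(window)      # approximate amplitude correction
    return freqs, amps

def dominant_peaks(freqs, amps, k=5):
    """Return the k strongest spectral lines; in practice these are compared
    against known fault signatures (e.g. peaks at 1x or 2x rotation speed)."""
    idx = np.argsort(amps)[::-1][:k]
    return sorted(zip(freqs[idx], amps[idx]))
```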
1000.
Mineral-processing tests usually rely on many repeated trials to achieve the required experimental precision, and the resulting volume of test data makes it difficult for mineral-processing engineers to extract valuable data and to mine the correlations among them. Factor analysis in SPSS is a statistical method for evaluating indicators through abstract factors: it extracts and condenses the overlapping information in the raw data into factors, thereby reducing the number of variables. Taking the data from repeated rougher-flotation tests on molybdenum tailings as the study variables, the results show that the method not only reduces the amount of analysis but also distills experimental regularities from the large body of data. It is therefore concluded that this approach is an effective way to analyze large amounts of mineral-processing data.
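The same dimension-reduction workflow can be sketched outside SPSS, for instance with scikit-learn: standardize the indicator columns, fit a factor model, and inspect the loadings and per-test factor scores. The column meaning and the number of factors are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

def factor_analyze(test_matrix, n_factors=2):
    """Factor analysis of repeated flotation-test data.

    test_matrix: rows are repeated rougher-flotation tests, columns are the
    measured indicators (e.g. feed grade, concentrate grade, recovery);
    both the column layout and n_factors are illustrative.
    Returns (loadings, scores): the variables-by-factors loading matrix and
    the factor scores for each test run.
    """
    X = StandardScaler().fit_transform(np.asarray(test_matrix, dtype=float))
    fa = FactorAnalysis(n_components=n_factors, random_state=0)
    scores = fa.fit_transform(X)      # factor scores for each test
    loadings = fa.components_.T       # variables x factors
    return loadings, scores
```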