81.
Dam operation monitoring is easily affected by the natural environment and monitoring conditions; it exhibits temporal and spatial variability, and the monitoring data are therefore uncertain. Based on the randomness and uncertainty analysis methods of cloud theory, combined with the idea of spatial data radiation, a cloud-drop probability density distribution estimation model is established, and the cloud probability density distribution function is then derived. The distribution characteristics of the parent spatial data are inferred from sample monitoring data, and a computation program based on the backward cloud algorithm and cloud transformation is designed. The cloud probability density distributions and cloud numerical characteristics of the piezometer monitoring data and displacement/deformation data of the Luhun Reservoir from 1979 to 1999 are analysed, yielding the data distribution characteristics and operating state of the dam over those 20 years. The analysis results show that cloud probability density estimation not only can effectively and reasonably assess the operating state of the dam, but can also use the cloud numerical characteristics to detect abnormal changes in the monitoring state and monitoring environment.
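The backward cloud generator at the heart of this procedure can be sketched in a few lines. The following is a minimal illustration, not the authors' program: it estimates the three cloud digital characteristics (expectation Ex, entropy En, hyper-entropy He) from a monitoring sample, using the standard first-order absolute-moment form of the backward cloud algorithm.

```python
import numpy as np

def backward_cloud(samples):
    """Estimate cloud digital characteristics (Ex, En, He) from a sample,
    e.g. piezometer readings. Uses the first-order absolute central moment
    for En and clamps the He estimate at zero."""
    x = np.asarray(samples, dtype=float)
    ex = x.mean()                                        # expectation Ex
    en = np.sqrt(np.pi / 2.0) * np.mean(np.abs(x - ex))  # entropy En
    he = np.sqrt(max(x.var(ddof=1) - en ** 2, 0.0))      # hyper-entropy He
    return ex, en, he
```

In practice each year of piezometer or displacement data would yield one (Ex, En, He) triple, and abnormal monitoring states show up as drift in these characteristics.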
82.
This article presents a new semidistance for functional observations that generalizes the Mahalanobis distance for multivariate datasets. The main characteristics of the functional Mahalanobis semidistance are shown. To illustrate the applicability of this measure of proximity between functional observations, new versions of several well-known functional classification procedures are developed using the functional Mahalanobis semidistance. A Monte Carlo study and the analysis of two real examples indicate that the classification methods used in conjunction with the functional Mahalanobis semidistance give better results than other well-known functional classification procedures. This article has supplementary material online.
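A minimal sketch of the idea (not the authors' implementation): for curves discretized on a common grid, keep the leading k functional principal components and standardize the score differences by the component variances. The function name and the choice k=3 are illustrative.

```python
import numpy as np

def functional_mahalanobis(train_curves, x, y, k=3):
    """Semidistance between discretized curves x and y based on the
    leading k principal components of the training sample of curves."""
    mu = train_curves.mean(axis=0)
    centered = train_curves - mu
    cov = centered.T @ centered / (len(train_curves) - 1)
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    lam = evals[::-1][:k]                     # top-k component variances
    phi = evecs[:, ::-1][:, :k]               # top-k components
    scores = (np.asarray(x) - np.asarray(y)) @ phi
    return float(np.sqrt(np.sum(scores ** 2 / lam)))
```

Truncating to k components is what makes this a semidistance rather than a distance: two curves differing only outside the retained components are at distance zero.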
83.
Measurement, 2015
A new approach of frequency shifting by rotating kernel is proposed to improve the performance of a spatial filtering velocimeter, used to provide accurate velocity information for a vehicle self-contained navigation system. A linear CMOS image sensor was employed both as a spatiotemporal differential spatial filter and as a photodetector. The filtering operation was performed entirely in an FPGA and realized by applying a rotating kernel to the pixel values of the image. Theoretical analysis showed that this method could double the maximum measurable velocity. The power spectrum of the output signal was obtained by fast Fourier transform (FFT) and corrected by a frequency spectrum correction algorithm known as energy centrobaric correction. The velocimeter was used to measure the moving velocities of a conveyor belt. Experimental results verified the method's ability to reduce the output signal frequency and the standard uncertainty of velocity measurement. In addition, the undesired component that frequency shifting introduces into the power spectrum of the output signal was investigated in depth, and a new method was proposed to eliminate it.
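The energy-centrobaric correction refines the raw FFT peak by taking a power-weighted centroid of the bins around it, recovering frequency resolution finer than one bin. A hedged sketch (the five-bin window and Hann taper are my assumptions, not taken from the paper):

```python
import numpy as np

def centroid_corrected_frequency(signal, fs):
    """Dominant-frequency estimate: locate the FFT power-spectrum peak,
    then refine it with the power-weighted centroid of nearby bins
    (the energy-centrobaric correction)."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmax(spec[1:])) + 1               # skip the DC bin
    lo, hi = max(k - 2, 1), min(k + 3, len(spec))  # 5-bin centroid window
    return float(np.sum(freqs[lo:hi] * spec[lo:hi]) / np.sum(spec[lo:hi]))
```

For a tone falling between two bins, the plain peak is off by up to half a bin (fs/n), while the centroid lands much closer to the true frequency.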
84.
Estimation of longitudinal models of relationship status between all pairs of individuals (dyads) in social networks is challenging due to the complex inter-dependencies among observations and lengthy computation times. To reduce the computational burden of model estimation, a method is developed that subsamples the "always-null" dyads in which no relationships develop throughout the period of observation. The informative sampling process is accounted for by weighting the likelihood contributions of the observations by the inverses of the sampling probabilities. This weighted-likelihood estimation method is implemented using Bayesian computation and evaluated in terms of its bias, efficiency, and speed of computation under various settings. Comparisons are also made to a full information likelihood-based procedure that is only feasible to compute when limited follow-up observations are available. Calculations are performed on two real social networks of very different sizes. The easily computed weighted-likelihood procedure closely approximates the corresponding estimates for the full network, even when using low sub-sampling fractions. The fast computation times make the weighted-likelihood approach practical and applicable to networks of any size.
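The weighting idea can be illustrated with the simplest possible model, a Bernoulli rate: keep the zeros (the "always-null" dyads) only with probability p and weight their likelihood contributions by 1/p. This toy sketch is my own construction, not the paper's estimator; it shows that the reweighted estimate remains approximately unbiased.

```python
import numpy as np

def subsampled_rate_estimate(y, keep_prob, rng):
    """Weighted-likelihood estimate of a Bernoulli rate when zero
    observations are retained only with probability keep_prob."""
    keep = (y == 1) | (rng.random(len(y)) < keep_prob)  # always keep the ones
    ys = y[keep].astype(float)
    w = np.where(ys == 1, 1.0, 1.0 / keep_prob)  # inverse-probability weights
    return float(np.sum(w * ys) / np.sum(w))     # weighted MLE of the rate
```

With, say, keep_prob = 0.1, only a tenth of the null observations enter the likelihood, yet the weighted estimate tracks the full-data rate.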
85.
Mohamed Abdellatif, Color Research and Application, 2015, 40(6): 564-576
The spectral overlap of color-sampling filters increases errors when a diagonal matrix transform is used for color correction, and it reduces color distinction. Spectral sharpening is a transformation of colors that was introduced to reduce color-constancy errors when the colors are collected through spectrally overlapping filters. Earlier color-constancy methods improved color precision when the illuminant color changed, but they overlooked color distinction. In this article, we introduce a new spectral sharpening technique that offers a good compromise between color precision and distinction, based on real physical constraints. The spectral overlap is measured by observing a gray reference chart with a set of real, spectrally disjoint filters selected by the user. The new sharpening method makes it possible to sharpen colors obtained by a sensor without knowing the camera response functions. Experiments with real images showed that the colors sharpened by the new method achieve good levels of both color precision and distinction. The color-constancy performance is compared with the data-based sharpening method in terms of both precision and distinction. © 2014 Wiley Periodicals, Inc. Col Res Appl, 40, 564-576, 2015
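One way to picture the construction: responses of the gray chart through the overlapping filters are mapped, by least squares, onto the responses of the user-selected spectrally disjoint filters. The sketch below is schematic only; the matrix shapes and function name are my own, not the article's.

```python
import numpy as np

def sharpening_transform(overlap_resp, disjoint_resp):
    """3x3 sharpening transform T mapping overlapping-filter responses
    (rows of overlap_resp, one row per gray-chart patch) onto the
    corresponding disjoint-filter responses, in the least-squares sense:
    T @ r_overlap ~ r_disjoint for each patch."""
    X, *_ = np.linalg.lstsq(overlap_resp, disjoint_resp, rcond=None)
    return X.T
```

After sharpening, a diagonal (von Kries-style) illuminant correction can be applied in the sharpened space, which is where the precision gain over a raw diagonal transform comes from.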
86.
Chafic Saide, Régis Lengelle, Paul Honeine, Cédric Richard, Roger Achkar, International Journal of Adaptive Control and Signal Processing, 2015, 29(11): 1391-1410
Nonlinear adaptive filtering has been extensively studied in the literature, using, for example, Volterra filters or neural networks. Recently, kernel methods have offered an interesting alternative because they provide a simple extension of linear algorithms to the nonlinear case. The main drawback of online system identification with kernel methods is that the filter complexity increases with time, a limitation resulting from the representer theorem, which states that all past input vectors are required. To overcome this drawback, a particular subset of these input vectors (called the dictionary) must be selected to ensure complexity control and good performance. Until now, all authors have considered that, after being introduced into the dictionary, elements stay unchanged even if, because of nonstationarity, they become useless for predicting the system output. The objective of this paper is to present an adaptation scheme for dictionary elements, which are considered here as adjustable model parameters, by deriving a gradient-based method under collinearity constraints. The main interest is to ensure better tracking performance. To evaluate our approach, dictionary adaptation is introduced into three well-known kernel-based adaptive algorithms: kernel recursive least squares, kernel normalized least mean squares, and kernel affine projection. The performance is evaluated on nonlinear adaptive filtering of simulated and real data sets. As confirmed by experiments, our dictionary adaptation scheme allows either complexity reduction or a decrease of the instantaneous quadratic error, or both simultaneously. Copyright © 2015 John Wiley & Sons, Ltd.
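For concreteness, here is a hedged sketch of one of the baselines the adaptation scheme is plugged into: kernel NLMS with a coherence criterion for growing the dictionary. Dictionary atoms stay fixed here; the paper's contribution is precisely to additionally adapt them by a constrained gradient step. All parameter values and names are illustrative.

```python
import numpy as np

def gauss_kernel(x, y, sigma=0.5):
    return float(np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2)))

def knlms(inputs, targets, mu=0.5, eps=1e-3, mu0=0.9, sigma=0.5):
    """Kernel NLMS with coherence-based dictionary sparsification.
    A new input joins the dictionary only if its maximal kernel value
    (coherence) with the existing atoms does not exceed mu0."""
    dic = [inputs[0]]
    alpha = np.zeros(1)
    preds = np.empty(len(inputs))
    for t, (x, d) in enumerate(zip(inputs, targets)):
        k = np.array([gauss_kernel(x, c, sigma) for c in dic])
        preds[t] = alpha @ k
        e = d - preds[t]
        if k.max() <= mu0:                 # novel enough: grow the dictionary
            dic.append(x)
            k = np.append(k, 1.0)          # kernel(x, x) = 1 for the Gaussian
            alpha = np.append(alpha, 0.0)
        alpha = alpha + mu * e * k / (eps + k @ k)   # normalized LMS step
    return preds, dic
```

The coherence threshold mu0 caps the dictionary size regardless of how many samples stream in, which is what keeps the filter complexity bounded.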
87.
88.
Cristian Podesta Natalie Coleman Amir Esmalian Faxi Yuan Ali Mostafavi 《Journal of the Royal Society Interface》2021,18(177)
This research establishes a methodological framework for quantifying community resilience based on fluctuations in a population''s activity during a natural disaster. Visits to points-of-interests (POIs) over time serve as a proxy for activities to capture the combined effects of perturbations in lifestyles, the built environment and the status of business. This study used digital trace data related to unique visits to POIs in the Houston metropolitan area during Hurricane Harvey in 2017. Resilience metrics in the form of systemic impact, duration of impact, and general resilience (GR) values were examined for the region along with their spatial distributions. The results show that certain categories, such as religious organizations and building material and supplies dealers had better resilience metrics—low systemic impact, short duration of impact, and high GR. Other categories such as medical facilities and entertainment had worse resilience metrics—high systemic impact, long duration of impact and low GR. Spatial analyses revealed that areas in the community with lower levels of resilience metrics also experienced extensive flooding. This insight demonstrates the validity of the approach proposed in this study for quantifying and analysing data for community resilience patterns using digital trace/location-intelligence data related to population activities. While this study focused on the Houston metropolitan area and only analysed one natural hazard, the same approach could be applied to other communities and disaster contexts. Such resilience metrics bring valuable insight into prioritizing resource allocation in the recovery process. 相似文献
89.
Chin-wei Huang, International Transactions in Operational Research, 2021, 28(1): 470-492
The purpose of this study is to develop a modification of the model developed by Chen and Zhu in 2004. Calculating stage and overall efficiencies precisely and consistently has become a major challenge for the two-stage DEA model, and most other models do not calculate the optimality of the intermediates. Although the Chen and Zhu model measures the optimality of intermediates, its calculated efficiency scores still have some shortcomings. The modified model, named the hybrid two-stage DEA model, fills the gap between calculating the optimality of intermediates and maintaining the consistency of overall efficiency scores. In addition to obtaining an accurate measurement of the optimality of intermediates, the model confines efficiency scores to a range from zero to one (a ratio efficiency score). In an empirical evaluation, we use data from 64 medical manufacturing firms to test the performance of the hybrid model and offer recommendations for the industry.
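As background for the modification, the single-stage building block (an input-oriented CCR program) can be written as a small linear program; two-stage models such as Chen and Zhu's chain two such programs through the intermediate measures. This sketch uses scipy and is illustrative only, not the hybrid model itself.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Minimize theta s.t. sum_j lam_j x_j <= theta * x_j0,
                        sum_j lam_j y_j >= y_j0, lam >= 0."""
    n = X.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # objective: minimize theta
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                  # input constraints
        A_ub.append(np.concatenate(([-X[j0, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(Y.shape[1]):                  # output constraints
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[j0, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, None)] * (n + 1), method="highs")
    return float(res.fun)
```

An efficient unit scores theta = 1; a score of 0.5 means the unit could, in principle, produce its outputs with half its inputs.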
90.
Solder paste printing (SPP) is a critical procedure in a surface mount technology (SMT) based assembly line and one of the major contributors to defects in printed circuit boards (PCBs). The quality of SPP is influenced by multiple factors, such as squeegee speed and pressure, stencil separation speed, cleaning frequency, and cleaning profile. During printing, the printer environment varies dynamically because of the physical change of the solder paste, which can alter the relationships between the printing results and the influential factors. To reduce printing defects, it is critical to understand these dynamic relationships. This research focuses on determining printing performance during printing by implementing a wavelet filtering-based temporal recurrent neural network. To reduce the noise in the solder paste inspection (SPI) data, this research applies a three-dimensional dual-tree complex wavelet transformation for low-pass noise filtering and signal reconstruction. A recurrent neural network is utilized to model the performance prediction with low noise interference. Both printing sequence and process setting information are considered in the proposed recurrent network model. The proposed approach is validated on a practical dataset and compared with other commonly used data mining approaches. The results show that the proposed wavelet-based multi-dimensional temporal recurrent neural network can effectively predict printing process performance and is a promising approach for reducing defects and controlling cleaning frequency. The proposed model is expected to advance current research on the application of smart manufacturing in surface mount technology.
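The paper's filter is a 3-D dual-tree complex wavelet transform; as a deliberately simplified stand-in, a one-level Haar transform with soft thresholding on the detail coefficients illustrates the denoise-and-reconstruct step on a 1-D series. The threshold value is an assumption, not a parameter from the paper.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoising: split the series
    into approximation and detail coefficients, shrink the (noise-
    dominated) details toward zero, then invert the transform."""
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % 2                     # even-length working segment
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)      # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)      # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty(n)
    out[0::2] = (a + d) / np.sqrt(2)            # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

The denoised SPI-like series would then feed the recurrent network, which sees the process trend rather than the measurement noise.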