952.
We propose an approach to detecting highly deformable shapes in images via manifold learning with regression. Our method does not require shape key points to be defined at high-contrast image regions, nor does it need an initial estimate of the shape; it requires only sufficiently representative training data and a rough initial estimate of the object's position and scale. We demonstrate the method on face shape learning and provide a comparison to a nonlinear Active Appearance Model. The method is extremely accurate, to nearly pixel precision, and is capable of accurately detecting the shapes of faces undergoing extreme expression changes. The technique is robust to occlusions such as glasses and gives reasonable results even at severely degraded image resolutions.
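The pipeline described above can be sketched in a few lines under heavy simplifying assumptions: here the shape "manifold" is linear (so PCA suffices for the embedding), the data are synthetic, and all names and dimensions are invented for illustration. The real method uses nonlinear manifold learning on face shapes; this only shows the embed-then-regress structure.

```python
import numpy as np

# Hypothetical sketch: learn a low-dimensional shape manifold, then regress
# from appearance features to manifold coordinates (synthetic linear data).
rng = np.random.default_rng(0)
n_train, n_points, n_feats = 200, 10, 30
coeffs = rng.normal(size=(n_train, 2))            # latent shape coordinates
basis = rng.normal(size=(2, 2 * n_points))        # shape basis (x,y flattened)
shapes = coeffs @ basis                           # training landmark vectors
W = rng.normal(size=(n_feats, 2))
features = coeffs @ W.T + 0.01 * rng.normal(size=(n_train, n_feats))

# 1) Embed training shapes: PCA gives 2-D manifold coordinates Z
mean_shape = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
Z = U[:, :2] * S[:2]

# 2) Ridge regression from (centered) appearance features to Z
F = features - features.mean(axis=0)
lam = 1e-3
A = np.linalg.solve(F.T @ F + lam * np.eye(n_feats), F.T @ Z)

# 3) Detect: map a feature vector onto the manifold, then back to a shape
z_hat = F[0] @ A
shape_hat = z_hat @ Vt[:2] + mean_shape
err = np.abs(shape_hat - shapes[0]).max()
```

Because the synthetic manifold is exactly linear and the feature noise is small, the recovered shape should match the true landmarks to within a small tolerance.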
953.
Vedaldi A, Zisserman A. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(3): 480-492.
Large-scale nonlinear support vector machines (SVMs) can be approximated by linear ones using a suitable feature map. The linear SVMs are in general much faster to learn and evaluate (test) than the original nonlinear SVMs. This work introduces explicit feature maps for the additive class of kernels, such as the intersection, Hellinger's, and χ2 kernels commonly used in computer vision, and enables their use in large-scale problems. In particular, we: 1) provide explicit feature maps for all additive homogeneous kernels, along with closed-form expressions for all common kernels; 2) derive corresponding approximate finite-dimensional feature maps based on a spectral analysis; and 3) quantify the error of the approximation, showing that the error is independent of the data dimension and decays exponentially fast with the approximation order for selected kernels such as χ2. We demonstrate that the approximations have performance indistinguishable from the full kernels yet greatly reduce the train/test times of SVMs. We also compare with two other approximation methods: the Nyström approximation of Perronnin et al., which is data dependent, and the explicit map of Maji and Berg for the intersection kernel, which, like our approximations, is data independent. The approximations are evaluated on a number of standard data sets, including Caltech-101, Daimler-Chrysler pedestrians, and INRIA pedestrians.
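The explicit map for the additive χ2 kernel can be sketched concretely. The homogeneous kernel-map construction samples the kernel's spectrum, which for χ2 is κ(ω) = sech(πω), at 2n+1 points with spacing L; the defaults below are illustrative choices, not the paper's recommended settings.

```python
import numpy as np

def chi2_feature_map(x, n=2, L=0.5):
    """Approximate explicit feature map for the additive chi^2 kernel
    k(x, y) = sum_i 2*x_i*y_i / (x_i + y_i), for non-negative x."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x + 1e-12)                  # guard against log(0)
    feats = [np.sqrt(x * L)]                  # w = 0 term, sech(0) = 1
    for j in range(1, n + 1):
        kappa = 1.0 / np.cosh(np.pi * j * L)  # spectrum sech(pi*w) at w = j*L
        amp = np.sqrt(2.0 * x * L * kappa)
        feats.append(amp * np.cos(j * L * logx))
        feats.append(amp * np.sin(j * L * logx))
    return np.concatenate(feats)              # (2n+1) * dim features

# The plain inner product of mapped vectors approximates the chi^2 kernel
x = np.array([0.3, 0.7])
y = np.array([0.5, 0.5])
exact = float(np.sum(2 * x * y / (x + y)))
approx = float(chi2_feature_map(x) @ chi2_feature_map(y))
```

scikit-learn's `AdditiveChi2Sampler` implements essentially this construction; even with only 5 features per input dimension, the error here is on the order of 1%, consistent with the exponential decay in the approximation order.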
954.
Zhu LL, Chen Y, Lin Y, Lin C, Yuille A. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(2): 359-371.
In this paper, we propose a Hierarchical Image Model (HIM) which parses images to perform segmentation and object recognition. The HIM represents the image recursively by segmentation and recognition templates at multiple levels of the hierarchy. This has advantages for representation, inference, and learning. First, the HIM has a coarse-to-fine representation which is capable of capturing long-range dependencies and exploiting different levels of contextual information (similar to how natural language models represent sentence structure in terms of hierarchical representations such as verb and noun phrases). Second, the structure of the HIM allows us to design a rapid inference algorithm, based on dynamic programming, which yields the first polynomial-time algorithm for image labeling. Third, we learn the HIM efficiently from a labeled data set using machine learning methods. We demonstrate that the HIM is comparable to state-of-the-art methods by evaluation on the challenging public MSRC and PASCAL VOC 2007 image data sets.
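The HIM's inference algorithm is elaborate, but the reason its hierarchy admits fast exact inference can be shown in miniature: on a tree-structured model, the minimum-energy labeling is found in polynomial time by bottom-up dynamic programming. The sketch below (not the HIM itself; all costs are invented) uses a tiny one-parent/four-children "quadtree" with unary costs and a Potts pairwise cost.

```python
import itertools
import numpy as np

n_labels = 3
rng = np.random.default_rng(1)
unary_parent = rng.random(n_labels)
unary_child = rng.random((4, n_labels))
potts = 0.4                                   # cost when labels disagree

# Bottom-up pass: each child sends the parent its best cost per parent label
msg = np.empty((4, n_labels))
for c in range(4):
    for lp in range(n_labels):
        msg[c, lp] = min(unary_child[c, lc] + potts * (lc != lp)
                         for lc in range(n_labels))

root_cost = unary_parent + msg.sum(axis=0)    # total cost per parent label
best_parent = int(np.argmin(root_cost))

# Top-down pass: decode each child's label given the chosen parent label
best_child = [min(range(n_labels),
                  key=lambda lc, c=c: unary_child[c, lc]
                  + potts * (lc != best_parent))
              for c in range(4)]
```

The DP visits each (node, label, neighbor-label) combination once, so the cost is polynomial in the number of nodes and labels, whereas exhaustive search over joint labelings is exponential.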
955.
Cluster Size Optimization in Sensor Networks with Decentralized Cluster-Based Protocols
Network lifetime and energy efficiency are the dominant considerations in designing cluster-based communication protocols for wireless sensor networks. This paper analytically derives the optimal cluster size that minimizes total energy expenditure in such networks, where all sensors communicate data to the base station through their elected cluster heads in a decentralized fashion. LEACH, LEACH-Coverage, and DBS are the three cluster-based protocols investigated here; none requires centralized support from any particular node. The analytical outcomes are given as closed-form expressions for various widely used network configurations, and extensive simulations on different networks confirm the analytical results. To provide a thorough understanding of the results, the cluster-count variability problem is identified and examined from the energy-consumption point of view.
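The paper's own closed-form expressions are not reproduced in the abstract, but the flavor of such results can be illustrated with the textbook first-order radio model used in LEACH-style analyses (free-space propagation inside clusters, multipath to the base station). The formula and parameter values below are the standard Heinzelman-style expression, not necessarily this paper's exact result.

```python
import math

def optimal_cluster_count(n_nodes, side_m, d_to_bs,
                          eps_fs=10e-12, eps_mp=0.0013e-12):
    """Closed-form optimal number of clusters under the first-order radio
    model, for N nodes uniformly deployed on an M x M field with the base
    station at distance d_BS:
        k_opt = sqrt(N / (2*pi)) * sqrt(eps_fs / eps_mp) * M / d_BS**2
    eps_fs, eps_mp: free-space and multipath amplifier energies (J/bit/m^2,
    J/bit/m^4), standard illustrative values."""
    return (math.sqrt(n_nodes / (2 * math.pi))
            * math.sqrt(eps_fs / eps_mp)
            * side_m / d_to_bs ** 2)

# e.g. 100 nodes on a 100 m field, base station 100 m away
k_opt = optimal_cluster_count(100, 100.0, 100.0)
```

Rounding `k_opt` to the nearest integer gives the target number of cluster heads per round; deviating from it increases either the intra-cluster or the head-to-base-station energy term.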
956.
Gualdi G, Prati A, Cucchiara R. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(8): 1589-1604.
The common paradigm employed for object detection is the sliding window (SW) search. This approach generates grid-distributed patches, at all possible positions and sizes, which are evaluated by a binary classifier; the tradeoff between computational burden and detection accuracy is the real critical point of sliding windows, and several methods have been proposed to speed up the search, such as adding complementary features. We propose a paradigm that differs from any previous approach in that it casts object detection as a statistical search, using Monte Carlo sampling to estimate the likelihood density function with Gaussian kernels. The estimation relies on a multistage strategy in which the proposal distribution is progressively refined by taking into account the feedback of the classifiers. The method can easily be plugged into a Bayesian-recursive framework to exploit the temporal coherency of target objects in videos. Several tests on pedestrian and face detection, on both images and videos, with different types of classifiers (cascades of boosted classifiers, soft cascades, and SVMs) and features (covariance matrices, Haar-like features, integral channel features, and histograms of oriented gradients) demonstrate that the proposed method provides higher detection rates and accuracy, as well as a lower computational burden, than sliding-window detection.
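The multistage refinement can be sketched as iterated importance sampling: draw candidate window states from a broad proposal, weight them by classifier score, and refit the proposal from the weighted samples. Everything below is a toy (the "classifier" is an invented score peaking at a known true state), meant only to show the sample-weight-refit loop, not the paper's actual estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a detector: peaks at the true object state (x, y, scale).
# In the real method this would be a cascade/SVM evaluated on image patches.
true_state = np.array([120.0, 80.0, 1.5])

def score(states):
    d2 = ((states - true_state) / np.array([20.0, 20.0, 0.5])) ** 2
    return np.exp(-0.5 * d2.sum(axis=1))

# Multistage search: broad Gaussian proposal, refined from classifier feedback
mean = np.array([100.0, 100.0, 1.0])
cov = np.diag([50.0 ** 2, 50.0 ** 2, 1.0 ** 2])
for stage in range(4):
    samples = rng.multivariate_normal(mean, cov, size=500)
    w = score(samples)
    w = w / w.sum()                       # normalized importance weights
    mean = w @ samples                    # refit proposal mean
    diff = samples - mean
    cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(3)  # refit covariance
```

Each stage concentrates the samples where the classifier responds, so far fewer windows are evaluated than an exhaustive grid would require.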
957.
Boudet S, Peyrodie L, Forzy G, Pinti A, Toumi H, Gallois P. Computer Methods and Programs in Biomedicine, 2012, 108(1): 234-249.
Adaptive Filtering by Optimal Projection (AFOP) is an automatic method for reducing ocular and muscular artifacts in electroencephalographic (EEG) recordings. This paper presents two additions to the method: an improvement in the stability of ocular artifact filtering and an adaptation of the method to filtering electrode artifacts. With these improvements, it is possible to reduce almost all common types of artifacts while preserving brain signals, particularly those characterising epilepsy. The generalised method divides the signal into several time-frequency windows and applies a different spatial filter to each. Two steps are required to define each spatial filter: the first defines the artifact's spatial projection using the Common Spatial Pattern (CSP) method, and the second defines the EEG spatial projection via regression. For this second step, a progressive orthogonalisation process is proposed to improve stability. The method was tested on long-duration EEG recordings of epileptic patients, with a neurologist quantifying the ratio of removed artifacts and the ratio of preserved EEG. Among the 330 artifact-contaminated pages used for evaluation, readability was judged better for 78% of pages, equal for 20%, and worse for 2%. Artifact amplitudes were reduced by 80% on average, while brain sources were preserved in amplitude from 70% to 95% depending on the type of waves (alpha, theta, delta, spikes, etc.). A blind comparison with manual Independent Component Analysis (ICA) was also carried out. The results show that the method is competitive and useful for routine clinical practice.
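The CSP step mentioned above can be sketched on synthetic data (this is not AFOP itself; the channel count, topography, and noise model are invented). CSP finds spatial filters w maximizing artifact variance relative to total variance, i.e. the generalized eigenproblem C_art w = λ (C_art + C_eeg) w, solved here by whitening.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ch, n_samp = 8, 5000
topo = rng.normal(size=(n_ch, 1))               # true artifact topography
eeg = rng.normal(size=(n_ch, n_samp))           # clean background activity
artifacted = eeg + topo @ (5.0 * rng.normal(size=(1, n_samp)))

C_eeg = eeg @ eeg.T / n_samp                    # covariance of clean epochs
C_art = artifacted @ artifacted.T / n_samp      # covariance of artifact epochs
C_tot = C_art + C_eeg

# Whiten C_tot; an ordinary eigendecomposition then yields the CSP filters
d, V = np.linalg.eigh(C_tot)
Wh = V @ np.diag(d ** -0.5) @ V.T               # C_tot^(-1/2)
s, U = np.linalg.eigh(Wh @ C_art @ Wh)
w = Wh @ U[:, -1]                               # top CSP spatial filter

# The corresponding spatial pattern should align with the artifact topography
pattern = C_tot @ w
cos = abs(pattern @ topo[:, 0]) / (np.linalg.norm(pattern)
                                   * np.linalg.norm(topo))
```

Projecting the recording onto the complement of the recovered pattern is then the artifact-removal step; AFOP's contribution is in how the EEG-side projection is defined and stabilised afterwards.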
958.
Zidelmal Z, Amirou A, Adnane M, Belouchrani A. Computer Methods and Programs in Biomedicine, 2012, 107(3): 490-496.
Electrocardiogram (ECG) signal processing and analysis provide crucial information about the functional status of the heart. The QRS complex is the most important component of the ECG signal, and its detection is the first step in all kinds of automatic feature extraction; a QRS detector must be able to detect a large number of different QRS morphologies. This paper examines the use of wavelet detail coefficients for the accurate detection of different QRS morphologies in the ECG. The method exploits the fact that the distribution of QRS energy across wavelet scales differs between normal and abnormal beats, and uses this property to discriminate true beats (normal and abnormal) from false detections. Significant performance enhancement is observed when the proposed approach is tested on the MIT-BIH arrhythmia database (MITDB): the obtained results show a sensitivity of 99.64% and a positive predictivity of 99.82%.
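The core idea, sharp QRS-like transients concentrating energy in mid-scale wavelet detail coefficients, can be shown with a toy detector. This is not the paper's algorithm: it uses a hand-rolled Haar DWT to stay dependency-free (a real implementation would use a proper wavelet library), the signal is synthetic, and the threshold factor is tuned to this toy noise level only.

```python
import numpy as np

def haar_approx(x):
    x = x[: len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2.0)

def haar_detail(x):
    x = x[: len(x) // 2 * 2]
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def detect_beats(sig, fs, level=3, k=100.0):
    """Toy detector: threshold the energy of level-`level` Haar detail
    coefficients at k times the median energy; return sample indices."""
    a = np.asarray(sig, dtype=float)
    for _ in range(level - 1):
        a = haar_approx(a)
    energy = haar_detail(a) ** 2
    thr = k * np.median(energy) + 1e-12
    hits = np.flatnonzero(energy > thr) * 2 ** level   # back to sample grid
    # merge hits closer together than a 200 ms refractory period
    return [int(h) for i, h in enumerate(hits)
            if i == 0 or h - hits[i - 1] > fs // 5]

# Synthetic "ECG": low-amplitude noise plus three sharp beats
fs = 256
sig = 0.01 * np.random.default_rng(4).normal(size=4 * fs)
for onset in (256, 460, 664):
    sig[onset:onset + 4] += np.array([0.5, 1.0, -0.8, 0.3])
beats = detect_beats(sig, fs)
```

The median-based threshold is what makes the toy robust: detail energy from background noise sits orders of magnitude below the transients, so the spikes separate cleanly.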
959.
In this paper we present the "R&W Simulator" (version 3.0), a Java simulator of Rescorla and Wagner's prediction-error model of learning. It can run whole experimental designs; compute and display the associative values of elemental and compound stimuli simultaneously; use extra configural cues in generating compound values; and change the US parameters across phases. The simulator produces both numerical and graphical outputs, and includes a function to export results to a spreadsheet. It is user-friendly, built with a graphical interface designed to let neuroscience researchers input data in their own "language". It is a cross-platform simulator, so it requires no special equipment, operating system, or support program, and needs no installation. The "R&W Simulator" (version 3.0) is available free.
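The rule such a simulator implements is compact: on each trial, every present stimulus's associative strength V moves in proportion to the shared prediction error λ − ΣV. A minimal sketch (not the simulator's code; the learning-rate values are arbitrary) reproduces the classic blocking design:

```python
def rw_trial(V, present, lam, alpha=0.3, beta=0.5):
    """One Rescorla-Wagner trial. V maps stimulus name -> associative
    strength, `present` lists this trial's stimuli, lam is the US magnitude,
    alpha/beta are CS and US learning-rate parameters."""
    error = lam - sum(V[s] for s in present)   # shared prediction error
    for s in present:
        V[s] += alpha * beta * error
    return V

# Blocking: A is trained first, then the AB compound with the same US
V = {"A": 0.0, "B": 0.0}
for _ in range(50):
    rw_trial(V, ["A"], lam=1.0)        # phase 1: A alone predicts the US
for _ in range(50):
    rw_trial(V, ["A", "B"], lam=1.0)   # phase 2: compound AB, same US
# A already predicts the US, so the error is ~0 and B learns almost nothing
```

This shared-error structure is exactly what distinguishes Rescorla-Wagner from per-stimulus learning rules, and is why blocking falls out of it for free.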
960.
Haidar A, Potocka E, Boulet B, Umpleby AM, Hovorka R. Computer Methods and Programs in Biomedicine, 2012, 108(1): 102-112.
A new stochastic computational method was developed to estimate the endogenous glucose production (EGP), the meal-related glucose appearance rate (R(a meal)), and the glucose disposal (R(d)) during the meal tolerance test. A prior probability distribution was adopted which assumes smooth glucose fluxes, with an individualized smoothness level, within the context of a Bayesian hierarchical model. The new method was contrasted with the maximum likelihood method using data collected in 18 subjects with type 2 diabetes who ingested a mixed meal containing [U-(13)C]glucose. Primed [6,6-(2)H(2)]glucose was infused in a manner that mimicked the expected endogenous glucose production. The mean EGP, R(a meal), and R(d) calculated by the two methods were nearly identical. However, maximum likelihood gave constant, nonphysiological postprandial EGP in two subjects, whilst the new method gave plausible EGP estimates in all subjects. Additionally, the two methods were compared on a simulated triple-tracer experiment in 12 virtual subjects. The accuracy of the EGP and R(a meal) estimates was similar [root mean square error (RMSE) 1.0±0.3 vs. 1.4±0.7 μmol/kg/min for EGP and 2.6±1.0 vs. 2.9±0.9 μmol/kg/min for R(a meal); new method vs. maximum likelihood; P=NS, paired t-test]. The accuracy of the R(d) estimates was significantly higher with the new method (RMSE 4.2±1.3 vs. 5.3±1.9; new method vs. ML method; P<0.01, paired t-test). We conclude that the new method increases the plausibility of the EGP estimates and improves the accuracy of the glucose disposal estimates compared with the maximum likelihood method.
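Why a smoothness prior helps in this kind of flux estimation can be illustrated on a toy deconvolution (this is not the paper's tracer model; the forward operator, noise level, and regularization weight are all invented): recovering an input flux u from noisy indirect measurements y = A u + e by plain least squares (the maximum likelihood analogue) amplifies noise, while penalizing second differences of u (the MAP analogue of a smoothness prior) suppresses it.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
t = np.linspace(0, 1, n)
u_true = np.exp(-0.5 * ((t - 0.4) / 0.12) ** 2)   # smooth flux profile
A = np.tril(np.ones((n, n))) / n                  # cumulative "kinetics"
y = A @ u_true + 0.005 * rng.normal(size=n)       # noisy measurements

# Maximum likelihood analogue: unregularized least squares
u_ml = np.linalg.lstsq(A, y, rcond=None)[0]

# MAP analogue: penalize second differences of u (smoothness prior)
D = np.diff(np.eye(n), 2, axis=0)                 # second-difference operator
lam = 1e-3
u_map = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ y)

rmse_ml = np.sqrt(np.mean((u_ml - u_true) ** 2))
rmse_map = np.sqrt(np.mean((u_map - u_true) ** 2))
```

The ML estimate effectively differentiates the noisy data, so its error is dominated by amplified noise; the smoothness-penalized estimate trades a small bias for a large variance reduction, mirroring the paper's finding that the Bayesian method gives more plausible and more accurate flux profiles.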