Sort order: 3,874 results found (search time: 15 ms)
71.
Benchmarking classifiers to optimally integrate terrain analysis and multispectral remote sensing in automatic rock glacier detection    Cited by: 2 (self-citations: 0, other: 2)
Alexander Brenning 《Remote sensing of environment》2009,113(1):239-247
The performance improvements that can be achieved by classifier selection and by integrating terrain attributes into land cover classification are investigated in the context of rock glacier detection. While exposed glacier ice can easily be mapped from multispectral remote-sensing data, the detection of rock glaciers and debris-covered glaciers is a challenge for multispectral remote sensing. Motivated by the successful use of digital terrain analysis in rock glacier distribution models, the predictive performance of a combination of terrain attributes derived from SRTM (Shuttle Radar Topography Mission) digital elevation models and Landsat ETM+ data for detecting rock glaciers in the San Juan Mountains, Colorado, USA, is assessed. Eleven statistical and machine-learning techniques are compared in a benchmarking exercise, including logistic regression, generalized additive models (GAM), linear discriminant techniques, the support vector machine, and bootstrap-aggregated tree-based classifiers such as random forests. Penalized linear discriminant analysis (PLDA) yields mapping results that are significantly better than all other classifiers, achieving a median false-positive rate (mFPR, estimated by cross-validation) of 8.2% at a sensitivity of 70%, i.e. when 70% of all true rock glacier points are detected. The GAM and standard linear discriminant analysis were second best (mFPR: 8.8%), followed by polyclass. For comparison, the predictive performance of the best three techniques is also evaluated using (1) only terrain attributes as predictors (mFPR: 13.1-14.5% for the best three techniques) and (2) only Landsat ETM+ data (mFPR: 19.4-22.7%), yielding significantly higher mFPR estimates at 70% sensitivity. The mFPR of the worst three classifiers was about one-quarter higher than that of the best three, and combining terrain attributes with multispectral data reduced the mFPR by more than one-half compared to remote sensing alone.
These results highlight the importance of combining remote-sensing and terrain data for mapping rock glaciers and other debris-covered ice, and of choosing the optimal classifier based on unbiased error estimators. The proposed benchmarking methodology is more generally suitable for comparing the utility of remote-sensing algorithms and sensors.
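As a rough, self-contained illustration of the evaluation criterion used above (the false-positive rate at a fixed 70% sensitivity), the following numpy sketch computes that quantity on synthetic classifier scores; the function name and data are our own illustration, not the authors' code.

```python
import numpy as np

def fpr_at_sensitivity(scores, labels, sensitivity=0.70):
    """False-positive rate when the decision threshold is chosen so that
    the given fraction of true positives is detected."""
    pos = np.sort(scores[labels == 1])[::-1]       # descending positive scores
    k = int(np.ceil(sensitivity * len(pos))) - 1   # index capturing 70% of positives
    thresh = pos[k]
    neg = scores[labels == 0]
    return float(np.mean(neg >= thresh))

rng = np.random.default_rng(0)
# synthetic scores: positives centred at 2, negatives at 0
scores = np.concatenate([rng.normal(2.0, 1.0, 200), rng.normal(0.0, 1.0, 800)])
labels = np.concatenate([np.ones(200), np.zeros(800)])
fpr = fpr_at_sensitivity(scores, labels, 0.70)
```

In a benchmarking exercise like the paper's, this statistic would be computed per cross-validation fold and the median taken across folds.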
72.
The role of spectral resolution and classifier complexity in the analysis of hyperspectral images of forest areas    Cited by: 1 (self-citations: 0, other: 1)
Michele Dalponte Lorenzo Bruzzone Loris Vescovo 《Remote sensing of environment》2009,113(11):2345-2355
Remote sensing hyperspectral sensors are important and powerful instruments for addressing classification problems in complex forest scenarios, as they allow a detailed characterization of the spectral behavior of the considered information classes. However, the processing of hyperspectral data is particularly complex, both from a theoretical viewpoint [e.g. problems related to the Hughes phenomenon (Hughes, 1968)] and from a computational perspective. Despite the many investigations of feature reduction and feature extraction for hyperspectral data presented in the literature, only a few studies have analyzed the role of spectral resolution in classification accuracy across different application domains. In this paper, we present an empirical study aimed at understanding the relationships among spectral resolution, classifier complexity, and classification accuracy obtained with hyperspectral sensors for the classification of forest areas. We considered two different test sets of images acquired by an AISA Eagle sensor over 126 bands with a spectral resolution of 4.6 nm, and subsequently degraded the spectral resolution to 9.2, 13.8, 18.4, 23, 27.6, 32.2 and 36.8 nm. A series of classification experiments was carried out with bands at each of the degraded spectral resolutions, and with bands selected by a feature-selection algorithm at the highest spectral resolution (4.6 nm). The classification experiments used three different classifiers: Support Vector Machine, Gaussian Maximum Likelihood with Leave-One-Out Covariance estimator, and Linear Discriminant Analysis. From the experimental results, important conclusions can be drawn about the choice of the spectral resolution of hyperspectral sensors applied to forest areas, also in relation to the complexity of the adopted classification methodology.
These outcomes also point the user towards a more efficient use of current instruments (e.g. programming of the spectral channels to be acquired) and classification techniques in forest applications, and can inform the design of future hyperspectral sensors.
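The band-degradation step described above (4.6 nm bands averaged down to coarser resolutions) can be sketched in a few lines of numpy; this is our illustrative reading of the procedure, not the authors' code.

```python
import numpy as np

def degrade_resolution(cube, factor):
    """Average groups of `factor` adjacent bands, e.g. factor=2 turns
    126 bands at 4.6 nm into 63 bands at 9.2 nm."""
    n_pix, n_bands = cube.shape
    n_keep = (n_bands // factor) * factor  # drop any leftover bands
    grouped = cube[:, :n_keep].reshape(n_pix, n_keep // factor, factor)
    return grouped.mean(axis=2)

cube = np.random.rand(100, 126)        # 100 pixels, 126 bands at 4.6 nm
coarse = degrade_resolution(cube, 2)   # 63 bands at 9.2 nm
```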
73.
SecondSkin estimates an appearance model for an object visible in a video sequence, without the need for complex interaction or any calibration apparatus. This model can then be transferred to other objects, allowing a non-expert user to insert a synthetic object into a real video sequence so that its appearance matches that of an existing object, and changes appropriately throughout the sequence. As the method does not require any prior knowledge about the scene, the lighting conditions, or the camera, it is applicable to video which was not captured with this purpose in mind. However, this lack of prior knowledge precludes the recovery of separate lighting and surface reflectance information. The SecondSkin appearance model therefore combines these factors. The appearance model does require a dominant light-source direction, which we estimate via a novel process involving a small amount of user interaction. The resulting model estimate provides exactly the information required to transfer the appearance of the original object to new geometry composited into the same video sequence.
74.
We propose a system that allows the user to design a continuous flow animation starting from a still fluid image. The basic idea is to apply the fluid motion extracted from a video example to the target image. The system first decomposes the video example into three components: an average image, a flow field, and residuals. The user then specifies equivalent information over the target image. The user manually paints the rough flow field, and the system automatically refines it using the estimated gradients of the target image. The user semi-automatically transfers the residuals onto the target image. The system then approximates the average image and synthesizes an animation on the target image by adding the transferred residuals and warping them according to the user-specified flow field. Finally, the system adjusts the appearance of the resulting animation by applying histogram matching. We designed animations of various pictures, such as rivers, waterfalls, fires, and smoke.
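The core warping step, advecting image content along the user-specified flow field, can be sketched as a simple backward warp; this nearest-neighbour version is a minimal stand-in for whatever interpolation the system actually uses.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp a greyscale image (H, W) by a per-pixel flow field
    (H, W, 2) holding (dx, dy), with nearest-neighbour sampling."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    return image[src_y, src_x]

img = np.arange(16.0).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0                     # uniform flow: one pixel to the right
frame = warp(img, flow)                # one animation frame
```

Repeating the warp frame after frame, then adding the transferred residuals, yields a continuous animation of the kind described above.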
75.
This paper presents a new approach to Particle Swarm Optimization, called the Michigan Approach PSO (MPSO), and its application to continuous classification problems as a Nearest Prototype (NP) classifier. In Nearest Prototype classifiers, a collection of prototypes must be found that accurately represents the input patterns; the classifier then assigns classes based on the nearest prototype in this collection. The MPSO algorithm processes training data to find those prototypes. In MPSO, each particle in a swarm represents a single prototype in the solution, and the algorithm uses modified movement rules with particle competition and cooperation that ensure particle diversity. The proposed method is tested on both artificial and real benchmark problems and compared with several algorithms of the same family. Results show that the particles are able to recognize clusters, find decision boundaries, and reach stable situations that also retain adaptation potential. The MPSO algorithm improves the accuracy of 1-NN classifiers, obtains results comparable to the best of the other classifiers, and improves on the accuracy reported in the literature for one of the problems.
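The Nearest Prototype classification rule at the heart of MPSO, where each particle encodes one prototype, reduces to a few lines; this sketch shows only the classification step, not the PSO movement rules.

```python
import numpy as np

def nearest_prototype_predict(X, prototypes, proto_labels):
    """Assign each sample the class of its nearest prototype
    (squared Euclidean distance)."""
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return proto_labels[d.argmin(axis=1)]

prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])   # one prototype per particle
proto_labels = np.array([0, 1])
X = np.array([[0.2, -0.1], [4.8, 5.3], [1.0, 1.0]])
pred = nearest_prototype_predict(X, prototypes, proto_labels)
```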
76.
Structure identification of Bayesian classifiers based on GMDH    Cited by: 1 (self-citations: 0, other: 1)
This paper introduces group method of data handling (GMDH) theory to Bayesian classification and proposes the GMBC algorithm for structure identification of Bayesian classifiers. The algorithm combines two structure-identification ideas, search-and-scoring and dependence analysis, and can accomplish adaptive structure identification. We experimentally test two versions of Bayesian classifiers (GMBC-BDe and GMBC-BIC) on 25 data sets. The results show that the structure identification of the two Bayesian classifiers, especially GMBC-BDe, is very effective, and that when the data sets contain substantial noise, the advantage of Bayesian classifiers learned by GMBC is more pronounced. Finally, for a classification domain without any prior information about the noise, we recommend adopting GMBC-BDe rather than GMBC-BIC.
77.
Julián Luengo Salvador García Francisco Herrera 《Expert systems with applications》2009,36(4):7798-7808
In this paper, we focus on the experimental analysis of the performance of artificial neural networks, using statistical tests on the classification task. In particular, we study whether the samples of results from multiple trials obtained by conventional artificial neural networks and support vector machines satisfy the necessary conditions for analysis with parametric tests. The study considers three sources of variation in classification experiments: random variation in the selection of test data, random variation in the selection of training data, and internal randomness in the learning algorithm. The results show that the fulfillment of these conditions is problem-dependent and indefinite, which justifies the need for non-parametric statistics in the experimental analysis.
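A standard non-parametric alternative of the kind argued for above is the Friedman test over a datasets-by-algorithms error matrix; the sketch below computes its chi-square statistic directly (assuming no tied errors within a dataset), as one illustrative option rather than the paper's specific procedure.

```python
import numpy as np

def friedman_statistic(errors):
    """Friedman chi-square for an (n_datasets, k_algorithms) error matrix;
    rank 1 = lowest error within each dataset (assumes no ties)."""
    n, k = errors.shape
    ranks = errors.argsort(axis=1).argsort(axis=1) + 1.0  # per-dataset ranks
    mean_ranks = ranks.mean(axis=0)
    return 12.0 * n / (k * (k + 1)) * (
        np.sum(mean_ranks ** 2) - k * (k + 1) ** 2 / 4.0)

rng = np.random.default_rng(1)
errors = rng.random((10, 3))           # 10 data sets, 3 classifiers
stat = friedman_statistic(errors)
```

If one algorithm wins on every dataset, the statistic is maximal for that n and k; random errors give a small value, consistent with the null hypothesis of equivalent algorithms.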
78.
To verify during requirements analysis whether software satisfies its non-functional quality requirements, this paper proposes a scenario-behavior-based approach to requirements modeling and quality-attribute checking. The approach first defines a behavior description language, BDL, capable of building precise, reasoning-friendly, and easily understood requirements behavior models. It then constructs a model-transformation function from the requirements behavior model to the state-transition model CCS. Based on the definition of bisimulation, the correctness of the transformation function is verified, and a trusted modeling and checking tool, MTS, is developed. Behavior models exported by this tool can be imported, together with quality-attribute expressions, into the property-checking tool CWB for quality checking. Finally, the approach is applied to model the behavior of a mobile-phone software upgrade requirement, successfully verifying the consistency, safety, behavioral trustworthiness, and behavioral non-termination of the phone software.
79.
Okan Tarhan Tursun Ahmet Oğuz Akyüz Aykut Erdem Erkut Erdem 《Computer Graphics Forum》2016,35(2):139-152
Reconstructing high dynamic range (HDR) images of a complex scene involving moving objects and dynamic backgrounds is prone to artifacts. A large number of methods, known as HDR deghosting algorithms, have been proposed to alleviate these artifacts. Currently, the quality of these algorithms is judged by subjective evaluations, which are tedious to conduct and quickly become outdated as new algorithms are proposed at a rapid pace. In this paper, we propose an objective metric that aims to simplify this process. Our metric takes a stack of input exposures and the deghosting result, and produces a set of artifact maps for different types of artifacts. These artifact maps can be combined to yield a single quality score. We performed a subjective experiment involving 52 subjects and 16 different scenes to validate the agreement of our quality scores with subjective judgements, and observed a concordance of almost 80%. Our metric also enables a novel application that we call hybrid deghosting, in which the outputs of different deghosting algorithms are combined to obtain a superior deghosting result.
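How the artifact maps collapse into a single quality score is not specified in this summary; as one plausible reading, a weighted average of per-pixel artifact severities can be inverted into a score, as in this sketch (the weights and the aggregation rule are our assumptions, not the paper's metric).

```python
import numpy as np

def quality_score(artifact_maps, weights=None):
    """Collapse per-pixel artifact maps (values in [0, 1], higher = worse)
    into one scalar score in [0, 1]; equal weights by default."""
    maps = np.stack(artifact_maps)
    if weights is None:
        weights = np.ones(len(maps)) / len(maps)
    severity = np.tensordot(weights, maps, axes=1).mean()
    return float(1.0 - severity)

ghosting = np.zeros((8, 8))            # no ghosting detected
noise = np.full((8, 8), 0.5)           # moderate noise artifacts
score = quality_score([ghosting, noise])
```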
80.
Kernel methods provide high performance in a variety of machine learning tasks. However, their success depends heavily on selecting the right kernel function and properly setting its parameters. Several sets of kernel functions based on orthogonal polynomials have been proposed recently. Besides their good error rates, these kernel functions have only one parameter, chosen from a small set of integers, which greatly facilitates kernel selection. Two sets of orthogonal polynomial kernel functions, the triangularly modified Chebyshev kernels and the triangularly modified Legendre kernels, are proposed in this study. Furthermore, we compare the construction methods of several orthogonal polynomial kernels and highlight the similarities and differences among them. Experiments on 32 data sets illustrate and compare these kernel functions in classification and regression scenarios. In general, the orthogonal polynomial kernels differ in accuracy, and most can match commonly used kernels such as the polynomial kernel, the Gaussian kernel, and the wavelet kernel. Compared with these universal kernels, each orthogonal polynomial kernel has a single, easily optimized parameter, and they store statistically significantly fewer support vectors in support vector classification. The newly presented kernels obtain better generalization performance for both classification and regression tasks.
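A plain Chebyshev polynomial kernel, K(x, z) = sum of T_i(x) T_i(z) for i = 0..n with scalar inputs in [-1, 1], can be built from the three-term recurrence; note that this sketch omits the triangular modification the paper proposes, and the single integer parameter n is the small-set choice the abstract refers to.

```python
import numpy as np

def chebyshev_kernel(x, z, n=3):
    """Sum of T_i(x) * T_i(z) for i = 0..n, using the Chebyshev
    recurrence T_{i+1}(v) = 2 v T_i(v) - T_{i-1}(v)."""
    def T(v):
        vals = [np.ones_like(v), v]
        for _ in range(2, n + 1):
            vals.append(2.0 * v * vals[-1] - vals[-2])
        return np.stack(vals[: n + 1])
    return (T(x) * T(z)).sum(axis=0)

# Gram matrix over a few points in [-1, 1]; as an explicit inner product of
# feature maps it is symmetric and positive semi-definite by construction
pts = np.array([-0.5, 0.0, 0.5])
G = chebyshev_kernel(pts[:, None], pts[None, :])
```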