991.
992.
Yujiao Mai, Jianping Hu, Zheng Yan, Shuangju Zhen, Shujun Wang, Wei Zhang. Computers in Human Behavior, 2012.
This study empirically investigated the structure and function of maladaptive cognitions related to Pathological Internet Use (PIU) among Chinese adolescents. To explore the structure of maladaptive cognitions, the study validated a Chinese Adolescents' Maladaptive Cognitions Scale (CAMCS) with two samples of adolescents (n1 = 293 and n2 = 609). Exploratory and confirmatory factor analyses revealed that the CAMCS comprises three distinct factors: "social comfort," "distraction," and "self-realization." To examine the function of maladaptive cognitions, the study tested an updated cognitive-behavioral model in a third sample of 1059 adolescents. Structural equation modeling verified both the direct effect of maladaptive cognitions on PIU and their mediating role in the relationships between distal factors (social anxiety and stressful life events) and PIU among Chinese adolescents. Theoretical and practical implications of these findings are discussed.
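The mediating role described above follows the standard indirect-effect logic: a distal factor predicts the mediator, and the mediator predicts the outcome. The sketch below illustrates that logic with simple-regression slopes on synthetic, noise-free data; the variable names and numbers are invented and are not the study's SEM estimates.

```python
# Toy mediation sketch: distal factor X (e.g. social anxiety) ->
# maladaptive cognitions M -> PIU score Y. Data are synthetic and noise-free
# so the path coefficients can be read off exactly.

def slope(xs, ys):
    """Least-squares slope of ys on xs (single predictor): cov / var."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

X = list(range(1, 11))          # distal factor
M = [0.5 * x for x in X]        # mediator: M = 0.5 * X
Y = [0.8 * m for m in M]        # outcome:  Y = 0.8 * M

a = slope(X, M)                 # a-path: X -> M
b = slope(M, Y)                 # b-path: M -> Y
indirect = a * b                # indirect (mediated) effect
print(a, b, indirect)           # 0.5 0.8 0.4
```

A full SEM additionally estimates the direct X -> Y path and all effects simultaneously; this sketch only shows why the indirect effect is the product of the two path coefficients.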
993.
Semi-naive Bayesian techniques seek to improve the accuracy of naive Bayes (NB) by relaxing the attribute independence assumption. We present a new type of semi-naive Bayesian operation, Subsumption Resolution (SR), which efficiently identifies occurrences of the specialization-generalization relationship and eliminates generalizations at classification time. We extend SR to Near-Subsumption Resolution (NSR) to delete near-generalizations in addition to generalizations. We develop two versions of SR: one that performs SR during training, called eager SR (ESR), and another that performs SR during testing, called lazy SR (LSR). We investigate the effect of ESR, LSR, NSR, and conventional attribute elimination (BSE) on NB and Averaged One-Dependence Estimators (AODE), a powerful alternative to NB. BSE imposes very high training time overheads on NB and AODE, accompanied by varying decreases in classification time overheads. ESR, LSR, and NSR impose high training time and test time overheads on NB. However, LSR imposes no extra training time overheads and only modest test time overheads on AODE, while ESR and NSR impose modest training and test time overheads on AODE. Our extensive experimental comparison on sixty UCI data sets shows that applying BSE, LSR, or NSR to NB significantly improves both zero-one loss and RMSE; applying BSE, ESR, or NSR to AODE significantly improves zero-one loss and RMSE; and applying LSR to AODE significantly improves zero-one loss. The Friedman and Nemenyi tests show that AODE with ESR or NSR has a significant zero-one loss and RMSE advantage over Logistic Regression, and a zero-one loss advantage over Weka's LibSVM implementation with a grid parameter search on categorical data. AODE with LSR has a zero-one loss advantage over Logistic Regression and comparable zero-one loss with LibSVM. Finally, we examine the circumstances under which the elimination of near-generalizations proves beneficial.
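The subsumption check at the core of SR can be sketched with simple counting: value xi is a generalization of value xj if every training instance containing xj also contains xi, in which case xi adds no information once xj is observed and can be dropped at classification time. The dataset and attribute names below are invented for illustration.

```python
# Sketch of Subsumption Resolution's detection step: count single values and
# value pairs in the training data, then flag values that are generalizations
# of another value present in the test instance.

from collections import Counter

train = [
    {"breed": "dalmatian", "type": "dog"},
    {"breed": "dalmatian", "type": "dog"},
    {"breed": "siamese",   "type": "cat"},
    {"breed": "beagle",    "type": "dog"},
]

single = Counter()   # counts of each (attribute, value)
pair = Counter()     # counts of co-occurring value pairs within one instance

for inst in train:
    items = sorted(inst.items())
    for av in items:
        single[av] += 1
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            pair[(items[i], items[j])] += 1

def generalizations(instance, min_count=2):
    """Values in `instance` subsumed by another of its values; SR drops these."""
    items = sorted(instance.items())
    dropped = set()
    for i in range(len(items)):
        for j in range(len(items)):
            if i == j:
                continue
            xi, xj = items[i], items[j]
            key = (min(xi, xj), max(xi, xj))
            # xi generalizes xj: whenever xj occurs, xi occurs too.
            if single[xj] >= min_count and pair[key] == single[xj]:
                dropped.add(xi)
    return dropped

print(generalizations({"breed": "dalmatian", "type": "dog"}))
# {('type', 'dog')}: "dalmatian" implies "dog", so "dog" is redundant
```

Eager SR would precompute these relationships once during training; lazy SR runs a check like this per test instance, which is why it adds no training overhead.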
994.
Hyperspectral band selection aims to determine an optimal subset of spectral bands for dimensionality reduction without loss of discriminability. Many conventional band selection approaches depend on a "statistical distance" measure between the probability distributions characterizing the sample classes. However, maximizing separability does not necessarily guarantee that a classification process yields the best classification accuracies. This paper presents a multidimensional local spatial autocorrelation (MLSA) measure that quantifies the spatial autocorrelation of hyperspectral image data. Based on the proposed spatial measure, a collaborative band selection strategy is developed that combines a spectral separability measure and a spatial homogeneity measure for hyperspectral band selection, without losing the spectral details useful in classification. The band subset selected by the proposed method shows both larger separability between classes and stronger spatial similarity within classes. Case studies in biomedical and remote sensing applications demonstrate that the MLSA-based band selection approach improves object classification accuracies in hyperspectral imaging compared with conventional approaches.
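A minimal per-band spatial autocorrelation score in the spirit of the above is a global Moran's I with rook (4-neighbour) adjacency. The real MLSA measure is multidimensional and local; this sketch only shows why a spatially homogeneous band scores higher than a noisy one, on invented toy bands.

```python
# Global Moran's I on a small 2D grid: positive for smooth spatial structure,
# negative for checkerboard-like noise. A band-selection heuristic could rank
# bands by this score and keep the most spatially homogeneous ones.

def morans_i(band):
    rows, cols = len(band), len(band[0])
    n = rows * cols
    mean = sum(sum(r) for r in band) / n
    den = sum((band[i][j] - mean) ** 2 for i in range(rows) for j in range(cols))
    num = wsum = 0.0
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    num += (band[i][j] - mean) * (band[ni][nj] - mean)
                    wsum += 1
    return (n / wsum) * (num / den)

smooth = [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6], [4, 5, 6, 7]]  # gradient
noisy  = [[1, 9, 1, 9], [9, 1, 9, 1], [1, 9, 1, 9], [9, 1, 9, 1]]  # checkerboard

print(morans_i(smooth) > morans_i(noisy))  # True
```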
995.
A novel algorithm for land use and land cover classification using RADARSAT-2 polarimetric SAR data
This study proposes a new four-component algorithm for land use and land cover (LULC) classification using RADARSAT-2 polarimetric SAR (PolSAR) data. The four components are polarimetric decomposition, PolSAR interferometry, object-oriented image analysis, and a decision tree algorithm. First, polarimetric decomposition supports the classification of PolSAR data by extracting polarimetric parameters related to the physical scattering mechanisms of the observed objects. Second, PolSAR interferometry extracts polarimetric interferometric information to support LULC classification. Third, object-oriented image analysis delineates image objects and extracts various textural and spatial features from them to improve classification accuracy. Finally, a decision tree algorithm provides an efficient way to select features and implement classification. To test the performance of the proposed method, it was compared with the Wishart supervised classification, which is based on the coherency matrix. The overall accuracy of the proposed method was 86.64%, whereas that of the Wishart supervised classification was 69.66%. The kappa value of the proposed method was 0.84, much higher than the 0.65 of the Wishart supervised classification. These results indicate that the proposed method performs much better than the Wishart supervised classification for LULC classification. Further investigation of the respective contributions of the four components shows that all four contribute importantly to the classification. Polarimetric information is significant for identifying different vegetation types and distinguishing vegetation from urban/built-up areas. The polarimetric interferometric information extracted from repeat-pass RADARSAT-2 images is important in reducing the confusion between urban/built-up areas and vegetation, and between barren/sparsely vegetated land and vegetation. Object-oriented image analysis is very helpful in reducing the effect of speckle in PolSAR images by classifying image objects rather than pixels, and the textural information extracted from image objects helps distinguish water from lawn. The decision tree algorithm achieves higher classification accuracy than the nearest neighbor classification implemented in Definiens Developer 7.0, and its accuracy is similar to that of support vector classification based on features selected using genetic algorithms. Compared with the nearest neighbor and support vector classifications, the decision tree algorithm is more efficient at selecting features and implementing classification. Furthermore, it provides clear classification rules that can be easily interpreted based on the physical meaning of the features used, which offers physical insight for LULC classification using PolSAR data.
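The overall accuracy and kappa figures reported in studies like the above are both derived from the confusion matrix; kappa corrects the observed agreement for the agreement expected by chance. The sketch below shows that computation on an invented 2x2 matrix, not the paper's actual results.

```python
# Overall accuracy (observed agreement) and Cohen's kappa from a confusion
# matrix whose rows are true classes and columns are predicted classes.

def accuracy_and_kappa(cm):
    total = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / total       # observed agreement
    row_sums = [sum(row) for row in cm]
    col_sums = [sum(cm[i][j] for i in range(len(cm))) for j in range(len(cm[0]))]
    pe = sum(r * c for r, c in zip(row_sums, col_sums)) / total ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)

cm = [[50, 10],
      [5, 35]]
po, kappa = accuracy_and_kappa(cm)
print(round(po, 2), round(kappa, 3))  # 0.85 0.694
```

A high overall accuracy with a much lower kappa signals that much of the agreement is attributable to chance or class imbalance, which is why both are reported.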
996.
The Fukunaga–Koontz Transform (FKT) is a well-known feature extraction method in statistical pattern recognition that seeks a set of vectors with the best representative power for one class and the poorest representative power for the other. Li and Savvides [1] propose a one-against-all strategy for multi-class problems, in which the two-class FKT method can be applied directly to find the representative vectors of each class. Motivated by the FKT method, in this paper we propose a new discriminant subspace analysis (DSA) method for multi-class feature extraction problems. To solve DSA, we propose an iterative algorithm for the joint diagonalization (JD) problem. Finally, we generalize the linear DSA method to handle nonlinear feature extraction problems via the kernel trick. To demonstrate the effectiveness of the proposed method for pattern recognition problems, we conduct extensive experiments on real data sets and show that it outperforms most commonly used feature extraction methods.
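The "best for one class, poorest for the other" property of FKT follows from whitening by the summed class scatter S1 + S2: the whitened scatters then share eigenvectors, and their eigenvalues sum to 1. The sketch below shows this in the simplest toy case, with diagonal scatter matrices (invented numbers) so the eigenvalues can be read off without a linear-algebra library.

```python
# FKT's complementary-eigenvalue property for diagonal class scatters:
# after whitening by S1 + S2, the eigenvalues of the whitened S1 and S2
# are a_i/(a_i+b_i) and b_i/(a_i+b_i), which sum to 1 per direction.

s1 = [4.0, 1.0, 0.25]   # diagonal of class-1 scatter
s2 = [1.0, 1.0, 4.0]    # diagonal of class-2 scatter

lam1 = [a / (a + b) for a, b in zip(s1, s2)]  # whitened class-1 eigenvalues
lam2 = [b / (a + b) for a, b in zip(s1, s2)]  # whitened class-2 eigenvalues

sums = [x + y for x, y in zip(lam1, lam2)]
print(sums)  # [1.0, 1.0, 1.0]
```

So the direction where class 1 has the largest whitened eigenvalue (here 0.8) is exactly the direction where class 2 has the smallest (0.2), which is why FKT's top eigenvectors for one class double as the bottom eigenvectors for the other.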
997.
Non-negative matrix factorization (NMF) and its variants have been explored over the last decade and remain attractive for their ability to extract non-negative basis images. However, most existing NMF-based methods are not ready to encode higher-order data information. One reason is that they do not directly or explicitly model structured data information during learning, so the extracted basis images may not completely describe the "parts" of an image [1] very well. To solve this problem, structured sparse NMF was recently proposed to learn structured basis images. It depends, however, on special prior knowledge: one must exhaustively define a set of structured patterns in advance. In this paper, we aim to perform structured sparsity learning as automatically as possible. To that end, we propose a pixel dispersion penalty (PDP), which effectively describes the spatial dispersion of pixels in an image without using any manually predefined structured patterns as constraints. In PDP, we treat each part-based feature pattern of an image as a cluster of non-zero pixels; that is, the non-zero pixels of a local pattern should be spatially close to each other. Furthermore, by incorporating the proposed PDP, we develop a spatial non-negative matrix factorization (Spatial NMF) and a spatial non-negative component analysis (Spatial NCA). In Spatial NCA, the non-negativity constraint is imposed only on the basis images and relaxed on the coefficients, so both subtractive and additive combinations of non-negative basis images are allowed when reconstructing an image. Extensive experiments validate the effectiveness of the proposed pixel dispersion penalty. We also show experimentally that Spatial NCA is more flexible for extracting non-negative basis images and obtains better and more stable performance.
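The intuition behind a pixel dispersion penalty, that the non-zero pixels of a part-based pattern should be spatially close, can be sketched as a value-weighted mean squared distance of non-zero pixels from their centroid. This is an illustrative surrogate for the idea, not the paper's exact penalty, and the toy basis images below are invented.

```python
# Dispersion score for a basis image: a compact blob of non-zero pixels
# scores low, the same pixel mass scattered to the corners scores high.

def dispersion(img):
    pts = [(i, j, v) for i, row in enumerate(img)
           for j, v in enumerate(row) if v != 0]
    w = sum(v for _, _, v in pts)
    ci = sum(i * v for i, _, v in pts) / w   # value-weighted centroid row
    cj = sum(j * v for _, j, v in pts) / w   # value-weighted centroid column
    return sum(v * ((i - ci) ** 2 + (j - cj) ** 2) for i, j, v in pts) / w

compact = [[0, 0, 0, 0],
           [0, 1, 1, 0],
           [0, 1, 1, 0],
           [0, 0, 0, 0]]
scattered = [[1, 0, 0, 1],
             [0, 0, 0, 0],
             [0, 0, 0, 0],
             [1, 0, 0, 1]]

print(dispersion(compact), dispersion(scattered))  # 0.5 4.5
```

Adding such a score to an NMF objective pushes the learned basis images toward spatially coherent parts without listing admissible patterns in advance.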
998.
999.
The support vector machine (SVM) was initially designed for binary classification. To extend SVM to the multi-class scenario, a number of classification models have been proposed, such as the one by Crammer and Singer (2001). However, the number of variables in Crammer and Singer's dual problem is the product of the number of samples (l) and the number of classes (k), which produces a large computational complexity. This paper presents a simplified multi-class SVM (SimMSVM) that reduces the size of the resulting dual problem from l × k to l by introducing a relaxed classification error bound. Experimental results demonstrate that the proposed SimMSVM approach can greatly speed up the training process while maintaining a competitive classification accuracy.
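The l × k dual size comes from the Crammer–Singer formulation's per-class margin constraints for every sample; its loss for one sample is the largest margin violation over all classes. The sketch below computes that loss on a toy two-class, two-feature weight matrix (all values invented).

```python
# Crammer-Singer multi-class hinge loss for one sample:
# loss = max_r ([r != y] + w_r . x) - w_y . x, which is 0 only when the
# correct class beats every other class by a margin of at least 1.

def cs_hinge(W, x, y):
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in W]
    return max((0 if r == y else 1) + s for r, s in enumerate(scores)) - scores[y]

W = [[2.0, 0.0],   # weight vector for class 0
     [0.0, 1.0]]   # weight vector for class 1
x = [1.0, 1.0]

print(cs_hinge(W, x, 0), cs_hinge(W, x, 1))  # 0.0 2.0
```

Enforcing this max as k - 1 separate dual constraints per sample yields the l × k variables; SimMSVM's relaxed bound keeps a single slack per sample, hence a dual of size l.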
1000.
Benign worms have attracted wide attention in worm research because they proactively defend against worm propagation and patch susceptible hosts. In this paper, two revised Worm-Anti-Worm (WAW) models are proposed for a cloud-based benign worm countermeasure. These Re-WAW models are based on the law of worm propagation and the two-factor model. One is the cloud-based benign Re-WAW model, which achieves effective worm containment. The other is the two-stage Re-WAW propagation model, which uses a proactive and passive switching defense strategy based on the ratio of benign to malicious worms; it is designed to avoid the network congestion and other potential risks caused by the proactive scanning of benign worms. Simulation results show that the cloud-based Re-WAW model significantly improves worm propagation containment. Cloud computing technology enables rapid delivery of massive numbers of initial benign worms, and the two-stage Re-WAW model gradually clears off the benign worms as the malicious worms are contained.
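Propagation models of this kind are typically systems of coupled differential equations integrated over time. The toy Euler simulation below captures the qualitative mechanism, a benign worm population that immunizes susceptible hosts and cleans infected ones, but all rate constants and the exact equations are invented for illustration and are not the Re-WAW models themselves.

```python
# Toy benign-vs-malicious worm dynamics (fractions of a normalized host
# population): S susceptible, I infected by the malicious worm, B protected
# by the benign worm. The benign worm spreads to susceptibles (beta_b) and
# cleans infected hosts (kappa).

def simulate(b0, beta=0.6, beta_b=0.5, kappa=0.8, dt=0.1, steps=500):
    s, i, b = 1.0 - 0.01 - b0, 0.01, b0
    for _ in range(steps):
        ds = -beta * s * i - beta_b * s * b
        di = beta * s * i - kappa * b * i
        db = beta_b * s * b + kappa * b * i
        s, i, b = s + dt * ds, i + dt * di, b + dt * db
    return i

no_benign = simulate(b0=0.0)     # malicious worm spreads unopposed
with_benign = simulate(b0=0.05)  # a seed population of benign worms

print(with_benign < no_benign)   # True: benign worms contain the outbreak
```

Cloud delivery of the initial benign worm population corresponds to a larger b0 at t = 0, which is exactly the lever the cloud-based Re-WAW model exploits.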