Search results: 3730 records in total.
121.
Feature selection plays an important role in data mining and pattern recognition, especially for large-scale data. Over the past years, various metrics have been proposed to measure the relevance between features. Because mutual information is nonlinear and can effectively capture dependencies between features, it is one of the most widely used measures in feature selection, and many promising feature selection algorithms based on mutual information with different parameters have been developed. In this paper, we first introduce a general criterion function for mutual information in feature selectors, which unifies the information measures used by most previous algorithms. In traditional selectors, mutual information is estimated over the whole sample space; this, however, cannot exactly represent the relevance among features. To cope with this problem, the second contribution of this paper is a new feature selection algorithm based on dynamic mutual information, which is estimated only on the unlabeled instances. To verify the effectiveness of our method, experiments were carried out on sixteen UCI datasets using four typical classifiers. The results indicate that our algorithm achieves better results than the other methods in most cases.
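The selection criterion above can be sketched as a simple filter: score each feature by its mutual information with the class label and keep the top-ranked ones. This is a minimal illustrative version for discrete features; the paper's dynamic variant re-estimates the mutual information only on the instances not yet recognized, which this sketch omits.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in nats from paired samples of discrete values."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(c / n * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def select_features(X, y, k):
    """Filter-style selection: rank features by I(feature; label), keep top k."""
    d = len(X[0])
    scores = sorted(((mutual_information([row[j] for row in X], y), j)
                     for j in range(d)), reverse=True)
    return [j for _, j in scores[:k]]

# Feature 0 copies the label; feature 1 is constant noise.
X = [[0, 7], [0, 7], [1, 7], [1, 7]]
y = [0, 0, 1, 1]
print(select_features(X, y, 1))  # [0]
```

On this toy input the perfectly informative feature scores I = ln 2 while the constant feature scores 0.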
122.
We present a machine learning tool for automatic texton-based joint classification and segmentation of mitochondria in MNT-1 cells imaged using ion-abrasion scanning electron microscopy (IA-SEM). For diagnosing signatures that may be unique to cellular states such as cancer, automatic tools requiring minimal user intervention need to be developed for the analysis and mining of high-throughput data from these large-volume data sets. Challenges for such a tool in 3D electron microscopy arise from the low contrast and signal-to-noise ratios (SNR) inherent to biological imaging. Our approach is based on block-wise classification of images into a trained list of regions. Given manually labeled images, our goal is to learn models that can localize novel instances of the regions in test datasets. Since datasets obtained using electron microscopes are intrinsically noisy, we improve the SNR of the data for automatic segmentation by applying a 2D texture-preserving filter to each slice of the 3D dataset. We investigate texton-based region features in this work. Classification is performed with a k-nearest neighbor (k-NN) classifier, support vector machines (SVMs), adaptive boosting (AdaBoost), and histogram matching using a NN classifier, and we study the computational complexity vs. segmentation accuracy tradeoff of these classifiers. Segmentation results demonstrate that our approach, using minimal training data, performs close to semi-automatic methods based on the variational level-set method and to manual segmentation carried out by an experienced user. Using our method, which we show to have minimal user intervention and high classification accuracy, we investigate quantitative parameters such as the volume of the cytoplasm occupied by mitochondria, the difference between the surface areas of the inner and outer membranes, and the mean mitochondrial width, quantities potentially relevant to distinguishing cancer cells from normal cells. To test the accuracy of our approach, these quantities are compared against manually computed counterparts. We also demonstrate the extension of these methods to segment 3D images obtained using electron tomography.
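The block-wise classification idea can be sketched as follows, assuming simple intensity histograms in place of the paper's texton features: each image block is summarized by a feature vector and labeled by its nearest training blocks.

```python
import numpy as np

def block_features(img, bs):
    """Split a 2D image into bs x bs blocks; return per-block intensity histograms."""
    h, w = img.shape
    feats, coords = [], []
    for i in range(0, h - bs + 1, bs):
        for j in range(0, w - bs + 1, bs):
            block = img[i:i + bs, j:j + bs]
            hist, _ = np.histogram(block, bins=8, range=(0, 256), density=True)
            feats.append(hist)
            coords.append((i, j))
    return np.array(feats), coords

def knn_label(train_feats, train_labels, feats, k=1):
    """Label each block by majority vote over its k nearest training blocks."""
    out = []
    for f in feats:
        d = np.linalg.norm(train_feats - f, axis=1)
        votes = [train_labels[i] for i in np.argsort(d)[:k]]
        out.append(max(set(votes), key=votes.count))
    return out
```

The block labels 'background' and 'organelle' in any usage are of course placeholders; the paper trains on manually labeled regions and additionally benchmarks SVM, AdaBoost, and histogram matching.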
123.
Combining a reduction technique with an iterative strategy, we propose a recursive reduced least squares support vector regression. The proposed algorithm chooses as support vectors the data that contribute most to the target function, while still considering all the constraints generated by the whole training set. It therefore needs fewer support vectors, whose number can be predefined arbitrarily, to construct a model with similar generalization performance. In comparison with other methods, our algorithm also achieves excellent parsimony. Numerical experiments on benchmark data sets confirm the validity and feasibility of the presented algorithm. In addition, the algorithm can be extended to classification.
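The base model that the recursive reduced algorithm sparsifies can be sketched as a plain LS-SVR, where training reduces to one linear solve and every training point receives a (non-sparse) coefficient. This is an illustrative dense version, not the paper's reduced algorithm, which would add support vectors greedily up to a preset budget.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvr_fit(X, y, C=10.0, gamma=1.0):
    """Plain LS-SVR: solve [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]."""
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, gamma) + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias, alpha coefficients

def lssvr_predict(X_train, b, alpha, X_new, gamma=1.0):
    """f(x) = sum_i alpha_i k(x, x_i) + b."""
    return rbf(X_new, X_train, gamma) @ alpha + b
```

Because the equality constraints make every alpha nonzero, the dense solution has as many support vectors as training points, which is exactly the lack of parsimony the recursive reduced variant addresses.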
124.
One of the simplest, and yet most consistently well-performing, classifiers is the naïve Bayes model (a special class of Bayesian network models). However, these models rely on the (naïve) assumption that all the attributes used to describe an instance are conditionally independent given the class of that instance. To relax this independence assumption, we have in previous work proposed a family of models, called latent classification models (LCMs). LCMs are defined for continuous domains and generalize the naïve Bayes model by using latent variables to model class-conditional dependencies between the attributes. In addition to providing good classification accuracy, the LCM has several appealing properties, including a relatively small parameter space that makes it less susceptible to over-fitting. In this paper we take a first step towards generalizing LCMs to hybrid domains by proposing an LCM for domains with binary attributes. We present algorithms for learning the proposed model, and we describe a variational approximation-based inference procedure. Finally, we empirically compare the accuracy of the proposed model to that of other classifiers on a number of different domains, including the problem of recognizing symbols in black-and-white images.
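The naïve Bayes baseline that LCMs relax can be sketched for binary attributes as follows; the latent variables and the variational inference of the proposed model are beyond a short example.

```python
import math

def train_bernoulli_nb(X, y):
    """Bernoulli naive Bayes with Laplace smoothing: per-class P(x_j = 1)."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(X)
        theta = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                 for j in range(len(X[0]))]
        model[c] = (prior, theta)
    return model

def predict(model, x):
    """Pick the class maximizing log P(c) + sum_j log P(x_j | c)."""
    def log_post(c):
        prior, theta = model[c]
        return math.log(prior) + sum(
            math.log(t if xj else 1 - t) for xj, t in zip(x, theta))
    return max(model, key=log_post)
```

The product over attributes inside `log_post` is precisely the conditional-independence assumption that an LCM replaces with latent-variable-mediated dependencies.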
125.
The performance improvements that can be achieved by classifier selection and by integrating terrain attributes into land cover classification are investigated in the context of rock glacier detection. While exposed glacier ice can easily be mapped from multispectral remote-sensing data, the detection of rock glaciers and debris-covered glaciers is a challenge for multispectral remote sensing. Motivated by the successful use of digital terrain analysis in rock glacier distribution models, the predictive performance of a combination of terrain attributes derived from SRTM (Shuttle Radar Topography Mission) digital elevation models and Landsat ETM+ data for detecting rock glaciers in the San Juan Mountains, Colorado, USA, is assessed. Eleven statistical and machine-learning techniques are compared in a benchmarking exercise, including logistic regression, generalized additive models (GAM), linear discriminant techniques, the support vector machine, and bootstrap-aggregated tree-based classifiers such as random forests. Penalized linear discriminant analysis (PLDA) yields mapping results that are significantly better than all other classifiers, achieving a median false-positive rate (mFPR, estimated by cross-validation) of 8.2% at a sensitivity of 70%, i.e. when 70% of all true rock glacier points are detected. The GAM and standard linear discriminant analysis were second best (mFPR: 8.8%), followed by polyclass. For comparison, the predictive performance of the best three techniques is also evaluated using (1) only terrain attributes as predictors (mFPR: 13.1-14.5% for the best three techniques) and (2) only Landsat ETM+ data (mFPR: 19.4-22.7%), both yielding significantly higher mFPR estimates at 70% sensitivity. The mFPR of the worst three classifiers was about one-quarter higher than that of the best three, and combining terrain attributes with multispectral data reduced the mFPR by more than one-half compared to remote sensing alone. These results highlight the importance of combining remote-sensing and terrain data for mapping rock glaciers and other debris-covered ice, and of choosing the optimal classifier based on unbiased error estimators. The proposed benchmarking methodology is more generally suitable for comparing the utility of remote-sensing algorithms and sensors.
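The headline metric, the false-positive rate at a fixed 70% sensitivity, can be computed from classifier scores as sketched below. This is a minimal single-split version without the paper's cross-validation loop over which the median is taken.

```python
import numpy as np

def fpr_at_sensitivity(scores, labels, target_sens=0.70):
    """FPR at the score threshold that first reaches the target sensitivity."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos = np.sort(scores[labels])[::-1]
    # Threshold chosen so that >= target_sens of positives score at or above it.
    k = int(np.ceil(target_sens * len(pos))) - 1
    thr = pos[k]
    neg = scores[~labels]
    return float((neg >= thr).mean())
```

Benchmarking classifiers against each other then amounts to comparing their `fpr_at_sensitivity` values under the same resampling scheme.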
126.
Remote sensing hyperspectral sensors are important and powerful instruments for addressing classification problems in complex forest scenarios, as they allow a detailed characterization of the spectral behavior of the considered information classes. However, the processing of hyperspectral data is particularly complex, both from a theoretical viewpoint (e.g. problems related to the Hughes phenomenon; Hughes, 1968) and from a computational perspective. Despite many previous investigations in the literature on feature reduction and feature extraction for hyperspectral data, only a few studies have analyzed the role of spectral resolution on classification accuracy in different application domains. In this paper, we present an empirical study aimed at understanding the relationship among spectral resolution, classifier complexity, and classification accuracy obtained with hyperspectral sensors for the classification of forest areas. We considered two different test sets characterized by images acquired by an AISA Eagle sensor over 126 bands with a spectral resolution of 4.6 nm, and subsequently degraded the spectral resolution to 9.2, 13.8, 18.4, 23, 27.6, 32.2 and 36.8 nm. A series of classification experiments were carried out with bands at each of the degraded spectral resolutions, and with bands selected by a feature selection algorithm at the highest spectral resolution (4.6 nm). The classification experiments used three different classifiers: Support Vector Machine, Gaussian Maximum Likelihood with Leave-One-Out Covariance estimator, and Linear Discriminant Analysis. From the experimental results, important conclusions can be drawn about the choice of the spectral resolution of hyperspectral sensors as applied to forest areas, also in relation to the complexity of the adopted classification methodology. The outcomes of these experiments can also direct users towards a more efficient use of current instruments (e.g. programming of the spectral channels to be acquired) and classification techniques in forest applications, as well as inform the design of future hyperspectral sensors.
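The degradation step can be simulated by aggregating adjacent bands: averaging pairs of the 126 bands at 4.6 nm gives 63 bands at roughly 9.2 nm, triples give 13.8 nm, and so on up to a factor of eight for 36.8 nm. A minimal sketch, under the assumption that coarser resolution is simulated by simple band averaging (the paper's exact degradation procedure may differ):

```python
import numpy as np

def degrade_spectral(cube, factor):
    """Average groups of `factor` adjacent bands (bands on the last axis).

    cube: hyperspectral image of shape (height, width, bands).
    Any trailing bands that do not fill a complete group are dropped.
    """
    h, w, b = cube.shape
    b_out = b // factor
    return cube[:, :, :b_out * factor].reshape(h, w, b_out, factor).mean(axis=3)
```

For a 126-band cube, `degrade_spectral(cube, 2)` yields 63 bands, and factors 3 through 8 reproduce the other resolutions listed above.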
127.
This paper presents a new approach to Particle Swarm Optimization, called Michigan Approach PSO (MPSO), and its application to continuous classification problems as a Nearest Prototype (NP) classifier. In nearest prototype classifiers, a collection of prototypes has to be found that accurately represents the input patterns; the classifier then assigns classes based on the nearest prototype in this collection. The MPSO algorithm processes the training data to find those prototypes. In MPSO each particle in the swarm represents a single prototype in the solution, and the algorithm uses modified movement rules with particle competition and cooperation that ensure particle diversity. The proposed method is tested on both artificial and real benchmark problems and compared with several algorithms of the same family. Results show that the particles are able to recognize clusters, find decision boundaries and reach stable configurations that also retain adaptation potential. The MPSO algorithm improves the accuracy of 1-NN classifiers, obtains results comparable to the best among other classifiers, and improves on the accuracy reported in the literature for one of the problems.
Pedro Isasi
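Once the swarm has converged, each particle contributes one prototype, and classification is plain nearest-prototype assignment, as in this sketch (the PSO search that places the prototypes is omitted):

```python
import numpy as np

def nearest_prototype_predict(prototypes, proto_labels, X):
    """Classify each row of X with the label of its closest prototype (Euclidean)."""
    P = np.asarray(prototypes, float)
    out = []
    for x in np.asarray(X, float):
        d = np.linalg.norm(P - x, axis=1)
        out.append(proto_labels[int(np.argmin(d))])
    return out
```

A 1-NN classifier is the special case where every training point is kept as a prototype; MPSO's point is that a much smaller, optimized prototype set can match or beat it.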
128.
Structure identification of Bayesian classifiers based on GMDH (total citations: 1; self-citations: 0; citations by others: 1)
This paper introduces group method of data handling (GMDH) theory into Bayesian classification and proposes the GMBC algorithm for structure identification of Bayesian classifiers. The algorithm combines the two structure-identification approaches of search-and-score and dependence analysis, and is able to carry out adaptive structure identification. We experimentally test two versions of Bayesian classifiers (GMBC-BDe and GMBC-BIC) on 25 data sets. The results show that the structure identification of the two Bayesian classifiers, especially GMBC-BDe, is very effective, and that when the data sets contain substantial noise, the advantage of Bayesian classifiers learned by GMBC is more pronounced. Finally, given a classification domain without any prior information about the noise, we recommend adopting GMBC-BDe rather than GMBC-BIC.
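A core ingredient of any search-and-score structure learner of the GMBC-BIC kind is the BIC score of a node's parent set. The sketch below is a generic version for complete discrete data, not the paper's GMDH procedure: it prefers the parent set that explains the child well without an excessive number of parameters.

```python
import math
from collections import Counter

def bic_for_parents(data, child, parents):
    """BIC of P(child | parents) under the multinomial MLE (higher is better).

    data: list of dicts mapping variable name -> discrete value.
    """
    n = len(data)
    counts = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
    loglik = sum(c * math.log(c / parent_counts[pa])
                 for (pa, _), c in counts.items())
    r = len(set(row[child] for row in data))  # child arity
    q = len(parent_counts)                    # observed parent configurations
    return loglik - 0.5 * q * (r - 1) * math.log(n)
```

On data where C is a copy of A and independent of B, the score ranks parent set {A} above both the empty set (poor fit) and {A, B} (extra parameters with no gain).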
129.
This paper deals with improvements to rule induction algorithms that resolve the ties that appear in special cases during the rule generation procedure for specific training data sets. A tie occurs in a decision-tree induction algorithm when the class prediction at a leaf node cannot be determined by majority voting. When there is such a conflict at a leaf node, we need to find the source of the problem and a solution to it. In this paper, we propose calculating an influence factor for each attribute, and we suggest an update procedure for the decision tree to deal with the problem and provide subsequent rectification steps. These improvements are demonstrated by experimental results on various data sets.
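The tie condition can be made concrete with a leaf-prediction routine that flags when majority voting fails; at that point the paper's influence-factor rectification would take over (the influence-factor computation itself is not reproduced here).

```python
from collections import Counter

def leaf_prediction(labels):
    """Majority vote at a leaf; returns (label, tied) so ties can be escalated.

    When `tied` is True the prediction is arbitrary and a tie-breaking
    procedure (e.g. an attribute influence factor) should decide instead.
    """
    counts = Counter(labels).most_common()
    top, top_count = counts[0]
    tied = len(counts) > 1 and counts[1][1] == top_count
    return top, tied
```

Returning the tie flag, rather than silently picking a class, is what lets the induction algorithm invoke its rectification steps only where they are needed.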
130.
In this paper, we focus on the experimental analysis of the performance of artificial neural networks on classification tasks using statistical tests. In particular, we have studied whether the samples of results from multiple trials obtained with conventional artificial neural networks and support vector machines satisfy the conditions necessary for analysis with parametric tests. The study considers three sources of variation in classification experiments: random variation in the selection of test data, random variation in the selection of training data, and internal randomness in the learning algorithm. The results obtained show that the fulfillment of these conditions is problem-dependent and uncertain, which justifies the need for non-parametric statistics in the experimental analysis.
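The recommendation of non-parametric statistics can be illustrated with an exact sign test on paired per-trial results, which makes no normality assumption about the accuracy samples (unlike a paired t-test):

```python
from math import comb

def sign_test(a, b):
    """Exact two-sided sign test on paired results (non-parametric).

    a, b: per-trial scores of two methods on the same trials.
    Returns the p-value for the null that neither method tends to win;
    tied trials are dropped, as is standard for the sign test.
    """
    wins = sum(x > y for x, y in zip(a, b))
    losses = sum(x < y for x, y in zip(a, b))
    n = wins + losses
    tail = sum(comb(n, k) for k in range(min(wins, losses) + 1))
    return min(1.0, 2 * tail / 2 ** n)
```

With eight trials all won by the same method, the exact two-sided p-value is 2/256 ≈ 0.008, small enough to reject the null without any distributional conditions to verify.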