101.
Federated learning addresses the data-fragmentation and data-isolation problems that privacy protection creates in distributed machine learning. In a federated learning system, participant nodes cooperate to train a model: each node trains a local model on its own data and uploads the trained local model to a server node for aggregation. In real deployments, the data distributions across nodes often differ substantially, which degrades the accuracy of the federated model. To mitigate the effect of non-IID data on model accuracy, this paper proposes a clustered federated learning framework that exploits the similarity of data distributions across nodes. Extensive experiments on the Synthetic, CIFAR-10 and FEMNIST benchmark datasets show that, compared with other federated learning methods, clustering by data distribution markedly improves model accuracy while requiring less computation.
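The core idea, grouping clients whose local data distributions are similar before aggregating, can be sketched as follows. This is a minimal illustration under assumptions not in the abstract: the greedy cosine-similarity clustering, the `threshold` parameter, and the plain size-weighted FedAvg aggregation are all choices made for the example, not the paper's algorithm.

```python
import math

def cosine(p, q):
    """Cosine similarity between two label-distribution vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    return dot / (math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q)))

def cluster_clients(label_dists, threshold=0.9):
    """Greedy clustering sketch: a client joins the first cluster whose
    running-mean centroid is within `threshold` cosine similarity."""
    clusters = []  # each: {"centroid": [...], "members": [client ids]}
    for cid, dist in enumerate(label_dists):
        for c in clusters:
            if cosine(c["centroid"], dist) >= threshold:
                c["members"].append(cid)
                n = len(c["members"])
                # update the centroid as a running mean over members
                c["centroid"] = [(x * (n - 1) + y) / n
                                 for x, y in zip(c["centroid"], dist)]
                break
        else:
            clusters.append({"centroid": list(dist), "members": [cid]})
    return clusters

def fedavg(weights, sizes):
    """Aggregate local model weights within a cluster, weighted by
    each client's local dataset size."""
    total = sum(sizes)
    return [sum(w[i] * s for w, s in zip(weights, sizes)) / total
            for i in range(len(weights[0]))]
```

Clients with near-identical label distributions end up in one cluster and are averaged together, so each cluster model sees effectively IID data.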
102.
This survey studies analysis tasks for graph representation methods. Starting from formal definitions, it first classifies graph representation learning methods into five categories by core technique: dimensionality reduction, matrix factorization, random walks, deep learning, and other representation learning methods. It then traces the development of each line of work through summary and comparative analysis, exposing the strengths and weaknesses of each category in depth. Combining an overview of the commonly used datasets, evaluation methods and application domains of graph representation learning, it analyses the field along four dimensions: dynamics, scalability, interpretability and analysability. Finally, it summarises open problems and future research directions for graph representation learning.
103.
A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for the TA, whose model comprises the parallel composition of submodels such as jobs and resources. The proposed methodology proceeds in two steps. The first decomposes the TA model into several submodels using a decomposability condition. The second combines the individual solutions of the subproblems for the decomposed submodels via the penalty function method. A feasible solution for the entire model is derived by iteratively solving the subproblem for each submodel. The methodology is applied to flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.
104.
Case-based reasoning (CBR) is one of the main methods in business forecasting: it predicts well and can explain its results. In business failure prediction (BFP), the number of failed enterprises is small relative to the number of non-failed ones, yet the loss when an enterprise fails is huge. It is therefore necessary to develop methods, trained on imbalanced samples, that forecast this small proportion of failed enterprises well while remaining accurate overall. Methods built on the assumption of balanced samples do not predict the minority class well on imbalanced samples consisting of minority (failed) and majority (non-failed) enterprises. This article develops clustering-based CBR (CBCBR), which integrates clustering analysis, an unsupervised process, with CBR, a supervised process, to retrieve information from both the minority and the majority more efficiently. In CBCBR, case classes are first generated by hierarchical clustering over the stored cases, and a class centre is computed for each clustered class by integrating the information of its cases. To predict the label of a target case, the nearest clustered case class is first retrieved by ranking the similarities between the target case and each class centre; the nearest neighbours of the target case within that class are then retrieved, and the labels of these nearest experienced cases are used for prediction. In an empirical experiment on two imbalanced samples from China, CBCBR was compared with classical CBR, a support vector machine, logistic regression and multivariate discriminant analysis. Compared with the other four methods, CBCBR performed significantly better in sensitivity for identifying minority samples while also achieving high total accuracy. The proposed approach makes CBR useful for imbalanced forecasting.
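The two-stage retrieval at the heart of CBCBR, nearest clustered class by centre and then nearest neighbours inside it, can be sketched roughly as follows. The Euclidean distance, the majority vote, and the helper names (`centre`, `predict`) are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import math
from collections import Counter

def euclid(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centre(cases):
    """Class centre: the mean of the cases in one clustered class."""
    n = len(cases)
    return [sum(c[i] for c in cases) / n for i in range(len(cases[0]))]

def predict(target, case_classes, labels_per_class, k=3):
    """Two-stage retrieval: pick the clustered case class with the
    nearest centre, then vote among the k nearest cases inside it."""
    centres = [centre(cls) for cls in case_classes]
    best = min(range(len(centres)), key=lambda i: euclid(target, centres[i]))
    cases, labels = case_classes[best], labels_per_class[best]
    ranked = sorted(range(len(cases)), key=lambda i: euclid(target, cases[i]))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]
```

Restricting the neighbour search to the nearest clustered class is what keeps minority cases from being swamped by the majority during retrieval.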
105.
In this paper, we design an adaptive iterative learning control method for a class of high-order nonlinear output-feedback discrete-time systems with random initial conditions and iteration-varying desired trajectories. An n-step-ahead predictor approach is employed to estimate future outputs. The discrete Nussbaum gain method is incorporated into the control design to handle unknown control directions. The proposed algorithm ensures that the tracking error converges to zero asymptotically along the iterative learning axis, except at the initial outputs affected by the random initial conditions. A numerical simulation demonstrates the efficacy of the presented control laws.
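As background for the convergence-along-the-iteration-axis idea the abstract describes, here is a minimal sketch of the simplest (P-type) iterative learning control update, u ← u + gain · e, applied trial after trial. This is deliberately far simpler than the paper's adaptive, Nussbaum-gain design: the static-gain plant and the fixed `gain` value are assumptions made purely for illustration.

```python
def ilc(desired, plant, gain=0.5, iterations=30):
    """P-type ILC sketch: run the whole trial, measure the tracking
    error, and correct the input signal for the next trial."""
    u = [0.0] * len(desired)
    e = list(desired)
    for _ in range(iterations):
        y = plant(u)                                   # one full trial
        e = [d - yi for d, yi in zip(desired, y)]      # tracking error
        u = [ui + gain * ei for ui, ei in zip(u, e)]   # learning update
    return u, e
```

For a plant with static gain g, the error contracts by |1 - gain·g| per trial, so the tracking error shrinks along the iteration axis rather than along time.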
106.
Noise is inevitably introduced during digital image acquisition, so image denoising remains an active research problem. Unlike local methods, which operate on local regions of an image, non-local methods exploit non-local information (even the whole image) to denoise. Owing to their superior performance, non-local methods have drawn increasing attention in the image denoising community. However, they generally do not handle complicated noise of varying levels and types well. Inspired by the observation in machine learning that multi-kernel methods are more robust and effective than single-kernel ones on complex problems, we establish a general non-local denoising model based on multi-kernel-induced measures (GNLMKIM for short), which provides a platform for analysing existing filters and designing new ones. With the help of GNLMKIM, we reinterpret two well-known non-local filters in a unified view and extend them to novel multi-kernel counterparts. Comprehensive experiments indicate that these new filters achieve encouraging denoising results in both visual effect and PSNR.
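A single-kernel, 1D special case of non-local filtering, where each sample is replaced by a kernel-weighted average over all samples and the weights come from patch distances, can be sketched like this. The Gaussian kernel, the `patch` and `h` parameters, and the 1D setting are simplifying assumptions for the sketch; GNLMKIM generalises the kernel-induced measure beyond this single-kernel case.

```python
import math

def nlm_denoise(signal, patch=1, h=0.5):
    """1D non-local means sketch: each sample becomes a weighted
    average over ALL samples, with weights from a Gaussian kernel
    applied to squared patch distances."""
    n = len(signal)

    def patch_at(i):
        # patch around sample i, clamped at the borders
        return [signal[min(max(j, 0), n - 1)]
                for j in range(i - patch, i + patch + 1)]

    out = []
    for i in range(n):
        pi = patch_at(i)
        weights = []
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(pi, patch_at(j)))
            weights.append(math.exp(-d2 / (h * h)))   # Gaussian kernel
        z = sum(weights)
        out.append(sum(w * s for w, s in zip(weights, signal)) / z)
    return out
```

Samples whose surrounding patches look alike reinforce each other wherever they occur in the signal, which is exactly the non-local property the abstract contrasts with local methods.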
107.
Attributing authorship of documents with unknown creators has been studied extensively for natural-language text such as essays and literature, but less so for non-natural languages such as computer source code. Previous attempts at source code authorship attribution can be categorised along two axes: the software features used for classification, either strings of n tokens/bytes (n-grams) or software metrics; and the classification technique that exploits those features, either information retrieval ranking or machine learning. The results of existing studies, however, are not directly comparable, as all use different test beds and evaluation methodologies, making it difficult to assess which approach is superior. This paper summarises all previous techniques for source code authorship attribution, implements feature sets motivated by the literature, and applies information retrieval ranking methods or machine classifiers to each approach. Importantly, all approaches are tested on identical collections spanning several programming languages and author types. Our conclusions are as follows: (i) ranking and machine classifier approaches are around 90% and 85% accurate, respectively, for a one-in-10 classification problem; (ii) the byte-level n-gram approach is best used with parameters different from those previously published; (iii) neural networks and support vector machines were the most accurate of the eight machine classifiers evaluated; (iv) n-gram features combined with machine classifiers show promise, but scalability problems must still be overcome; and (v) approaches based on information retrieval techniques are currently more accurate than those based on machine learning. Copyright © 2012 John Wiley & Sons, Ltd.
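The byte-level n-gram approach with information-retrieval-style ranking, which the paper finds most accurate, can be sketched in a few lines. The profile-intersection score and the choice of n=3 are illustrative assumptions; published systems use more refined similarity measures and normalisation.

```python
from collections import Counter

def ngram_profile(text, n=3):
    """Frequency profile of byte-level n-grams for a source file."""
    data = text.encode("utf-8")
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

def similarity(p, q):
    """Simple ranking score: total count of n-gram occurrences the
    two profiles share (multiset intersection)."""
    return sum(min(p[g], q[g]) for g in p if g in q)

def attribute(query, candidates, n=3):
    """Rank candidate authors' code profiles against the query code
    and return the best-scoring author."""
    qp = ngram_profile(query, n)
    scores = {author: similarity(qp, ngram_profile(code, n))
              for author, code in candidates.items()}
    return max(scores, key=scores.get)
```

Because the features are raw bytes, the same sketch works across programming languages, which is why the paper can test on identical multi-language collections.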
108.
In this paper we focus on two complementary approaches to significantly decrease the pre-training time of a deep belief network (DBN). First, we propose an adaptive step size technique to enhance the convergence of the contrastive divergence (CD) algorithm, thereby reducing the number of epochs needed to train the restricted Boltzmann machines (RBMs) that make up the DBN. Second, we present a highly scalable graphics processing unit (GPU) parallel implementation of the CD-k algorithm, which notably boosts training speed. Extensive experiments are conducted on the MNIST and HHreco databases. The results suggest that the maximum useful depth of a DBN is related to the number and quality of the training samples, and that the lower-level layer plays a fundamental role in building successful DBN models. Furthermore, the results contradict the preconceived idea that all layers should be pre-trained. Finally, we show that incorporating multiple back-propagation (MBP) layers remarkably improves the DBN's generalization capability.
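One step of the plain CD-1 update for a binary RBM, the positive phase minus the reconstruction phase, can be sketched as follows. Biases are omitted and the learning rate is fixed for brevity; the paper's actual contributions, an adaptive step size and a GPU-parallel CD-k, are precisely what this toy sketch leaves out.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cd1_update(W, v0, lr=0.1, rng=random.Random(0)):
    """One CD-1 step for a bias-free binary RBM:
    v0 -> h0 -> v1 -> h1, then W += lr * (v0 h0^T - v1 h1^T)."""
    nv, nh = len(W), len(W[0])

    def sample_h(v):
        return [1 if rng.random() < sigmoid(sum(v[i] * W[i][j] for i in range(nv)))
                else 0 for j in range(nh)]

    def sample_v(h):
        return [1 if rng.random() < sigmoid(sum(h[j] * W[i][j] for j in range(nh)))
                else 0 for i in range(nv)]

    h0 = sample_h(v0)   # positive (data) phase
    v1 = sample_v(h0)   # one-step reconstruction
    h1 = sample_h(v1)   # negative (model) phase
    for i in range(nv):
        for j in range(nh):
            W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])
    return W
```

Each weight moves by at most `lr` per step, which is why an adaptive step size (growing `lr` when updates keep agreeing in sign, shrinking it otherwise) can cut the number of epochs substantially.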
109.
For hyperspectral target detection, usually only some of the target pixels can be used as target signatures; can they be used to construct the most appropriate background subspace for detecting all probable targets? In this paper, a dynamic subspace detection (DSD) method that establishes a multiple-detection framework is proposed. In each detection procedure, blocks of pixels are chosen by random selection followed by analysis of the resulting detection performance distribution. Manifold analysis is then used to eliminate probable anomalous pixels and purify the subspace datasets; the remaining pixels construct the subspace for that detection procedure. The final detection results are enhanced by fusing the target occurrence frequencies across all detection procedures. Experiments with both synthetic and real hyperspectral images (HSI) validate the proposed DSD method, using several different state-of-the-art methods as basic detectors. Against several single detectors and multiple-detection methods as baselines, the DSD methods yield improved receiver operating characteristic curves and better separability between targets and backgrounds. The DSD methods also perform well with covariance-based detectors, showing their efficiency in selecting covariance information for detection.
110.
Pattern Recognition, 2014, 47(2): 789–805
This paper studies Fisher linear discriminants (FLDs) based on classification accuracies for imbalanced datasets. An optimal threshold is derived from a series of empirical formulas and is related not only to sample sizes but also to distribution regions. A mixed binary–decimal coding system is suggested to make very dense datasets sparse and enlarge the class margins, on condition that the neighborhood relationships of samples are nearly preserved. Within-class scatter matrices that are singular or nearly singular should be moderately reduced in dimensionality rather than perturbed with tiny values. The weight vectors can be further updated by an epoch-limited (three at most) iterative learning strategy, provided the current training error rates decrease accordingly. Putting these ideas together, this paper proposes a type of integrated FLD. Extensive experimental results on real-world datasets demonstrate that the integrated FLDs have clear advantages over conventional FLDs in learning and generalization performance on imbalanced datasets.