971.
Building on the advantages of the basic non-negative sparse coding (NNSC) model, and taking the prior class constraint of image features into account, a novel NNSC model is discussed here. In this model, the sparseness criterion is a two-parameter density estimation model, and the dispersion ratio of within-class to between-class scatter is used as the class constraint. With this NNSC model, image features can be extracted successfully, and the subsequent recognition task can be carried out well with different classifiers. Simulation results show that the proposed NNSC model is indeed effective for image feature extraction and recognition.
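A minimal sketch of basic NNSC may help fix ideas: minimise a reconstruction error plus an L1 sparseness penalty under non-negativity constraints. The class-constraint term of the paper is omitted, and all names and constants (`lam`, `lr`, `n_iter`) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

# Hedged sketch of basic non-negative sparse coding (NNSC):
# minimise ||X - W @ H||^2 + lam * sum(H)  with  W, H >= 0,
# using a multiplicative update for H and a projected gradient
# step for W (Hoyer-style). Constants are illustrative only.
def nnsc(X, n_components, lam=0.1, lr=0.01, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, n_components))
    H = rng.random((n_components, n))
    for _ in range(n_iter):
        # multiplicative update keeps H non-negative; lam enforces sparseness
        H *= (W.T @ X) / (W.T @ W @ H + lam + 1e-9)
        # projected gradient step for W, renormalised column-wise
        W -= lr * (W @ H - X) @ H.T
        W = np.maximum(W, 1e-9)
        W /= np.linalg.norm(W, axis=0, keepdims=True)
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(20, 50)))
W, H = nnsc(X, n_components=5)
```

The learned columns of `W` play the role of basis features; classification would then operate on the codes `H`.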
972.
973.
Linear discriminant analysis (LDA) is one of the most popular dimension reduction methods and has been widely used in many applications. In recent decades many LDA-based dimension reduction algorithms have been reported. Among these methods, orthogonal LDA (OLDA) is a well-known one, and several different implementations of OLDA have been proposed. In this paper, we propose a new and fast implementation of OLDA. Compared with the other OLDA implementations, ours is the fastest when the dimensionality d is larger than the sample size n. Based on this implementation, we then present an incremental OLDA algorithm which accurately updates the projection matrix of OLDA when new samples are added to the training set. The effectiveness of the proposed OLDA algorithm and its incremental version is demonstrated on several real-world data sets.
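For orientation, a generic (not the paper's fast d > n) OLDA can be sketched as classical LDA followed by orthogonalisation of the projection columns via QR, in the spirit of Ye's OLDA formulation. This is an assumption-laden illustration, not the proposed implementation.

```python
import numpy as np

# Generic OLDA sketch: build within-/between-class scatter matrices,
# take the leading LDA directions, then orthogonalise them with QR.
def olda(X, y):
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # eigenvectors of pinv(Sw) @ Sb give the LDA directions
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)[: len(classes) - 1]
    G = vecs[:, order].real
    Q, _ = np.linalg.qr(G)  # orthogonalise -> OLDA projection
    return Q

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (30, 4))])
y = np.array([0] * 30 + [1] * 30)
Q = olda(X, y)
```

The incremental version of the paper avoids recomputing this decomposition from scratch when samples arrive; the sketch above is the batch baseline it would update.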
974.
By combining the benefits of high-order networks and TSK (Takagi-Sugeno-Kang) inference systems, the Pi-Sigma network can handle nonlinear problems much more effectively: it has a more compact structure and faster computational speed. The aim of this paper is to present a gradient-based learning method for the Pi-Sigma network to train a TSK fuzzy inference system. Moreover, some strong convergence results are established on the basis of the weak convergence results, indicating that the sequence of weighted fuzzy parameters converges to a fixed point. Simulation results show that the modified learning algorithm is effective and support the theoretical results.
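The core Pi-Sigma idea, a product of linear "sigma" sums trained by gradient descent, can be sketched as follows. The TSK fuzzy layer of the paper is omitted, and the learning rate and network size are illustrative assumptions.

```python
import numpy as np

# Minimal Pi-Sigma unit: output = product of K linear summing units,
# trained by gradient descent on squared error (single sample here).
def pi_sigma_forward(W, x):
    sums = W @ x                 # K linear "sigma" sums
    return np.prod(sums), sums

def train_step(W, x, t, lr=0.05):
    y, sums = pi_sigma_forward(W, x)
    err = y - t
    for j in range(W.shape[0]):
        # d y / d w_j = (product of the other sums) * x
        others = np.prod(np.delete(sums, j))
        W[j] -= lr * err * others * x
    return W

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(2, 3))
x = np.array([0.5, -0.2, 1.0])
t = 0.8
e0 = (pi_sigma_forward(W, x)[0] - t) ** 2
for _ in range(200):
    W = train_step(W, x, t)
e1 = (pi_sigma_forward(W, x)[0] - t) ** 2
```

The convergence analysis in the paper concerns exactly this kind of gradient sequence, under much weaker assumptions than a single training sample.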
975.
Semantic role labeling (SRL) is a fundamental task in natural language processing that seeks a sentence-level semantic representation. The semantic role labeling procedure can be viewed as a process of competition between many order parameters, in which the strongest order parameter wins the competition and the desired pattern is recognized. To realize this integrative SRL, we use a synergetic neural network (SNN). Since the network parameters of the SNN directly influence the synergetic recognition performance, it is important to optimize them. In this paper, we propose an improved particle swarm optimization (PSO) algorithm based on a log-linear model and use it to effectively determine the network parameters. Our contributions are two-fold: first, a log-linear model is introduced into the PSO algorithm, which can effectively exploit the advantages of a variety of different knowledge sources and enhance the decision-making ability of the model. Second, we propose an improved SNN model based on the improved PSO and show its effectiveness on the SRL task. The experimental results show that the proposed model achieves higher performance for semantic role labeling, with more powerful global exploration ability and faster convergence speed, and indicate that the proposed model is promising for other natural language processing tasks.
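The baseline that the paper improves on is plain global-best PSO, which can be sketched as below. The log-linear fusion of knowledge sources is not reproduced; the hyper-parameters (`w`, `c1`, `c2`) are conventional defaults, not the authors' values.

```python
import numpy as np

# Plain global-best PSO minimising a test function: each particle
# tracks its personal best, the swarm tracks a global best, and
# velocities blend inertia with attraction to both.
def pso(f, dim, n_particles=20, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(f, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.apply_along_axis(f, 1, pos)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda x: np.sum(x ** 2), dim=3)
```

In the paper's setting, `f` would score SNN parameter vectors on SRL accuracy rather than a synthetic sphere function.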
976.
Inductive power transfer (IPT) systems facilitate contactless power transfer between two sides, across an air gap, through weak magnetic coupling. However, IPT systems constitute a high-order resonant circuit and, as such, are difficult to design and control. To address the control problems of bidirectional IPT systems, a neural-network-based proportional-integral-derivative (PID) control strategy is proposed in this paper. In the proposed neural PID method, the PID gains \(K_{P}\), \(K_{I}\) and \(K_{D}\) are treated as Gaussian potential function network (GPFN) weights and are adjusted by an online learning algorithm. In this manner, the neural PID controller has more flexibility and capability than a conventional PID controller with fixed gains. The convergence of the GPFN weight learning is guaranteed using the Lyapunov method. Simulations verify the effectiveness of the proposed controller.
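The control structure being adapted is an ordinary discrete PID loop; a toy version on a first-order plant is sketched below. In the paper the gains come from GPFN weights updated online, whereas here they are fixed scalars, and the plant and all constants are illustrative stand-ins.

```python
# Discrete PID loop on a toy first-order plant y' = -y + u.
# Gains are fixed here; the paper adapts them online via GPFN weights.
def simulate(Kp, Ki, Kd, setpoint=1.0, steps=400, dt=0.05):
    y, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt                 # integral term
        deriv = (err - prev_err) / dt     # derivative term
        u = Kp * err + Ki * integ + Kd * deriv
        y += dt * (-y + u)                # Euler step of the plant
        prev_err = err
    return y

y_final = simulate(Kp=2.0, Ki=1.0, Kd=0.1)
```

Replacing the three scalars with network outputs evaluated at the current operating point is what gives the neural PID its extra flexibility.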
977.
With the advent of low-cost 3D sensors and 3D printers, 3D surface reconstruction of scenes and objects has become an important research topic in recent years. In this work, we propose an automatic (unsupervised) method for 3D surface reconstruction from raw unorganized point clouds acquired with low-cost 3D sensors. We have modified the growing neural gas network, a suitable model because of its flexibility, rapid adaptation and excellent quality of representation, to perform 3D surface reconstruction of various real-world objects and scenes. Several improvements have been made to the original algorithm: colour and surface-normal information of the input data is considered during the learning stage, and complete triangular meshes are created instead of basic wire-frame representations. The proposed method is able to create 3D faces online, whereas existing 3D reconstruction methods based on self-organizing maps require post-processing steps to close the gaps and holes produced during reconstruction. A set of quantitative and qualitative experiments was carried out to validate the proposed method. It has been implemented and tested on real data and found to be effective at reconstructing noisy point clouds obtained with low-cost 3D sensors.
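A heavily simplified flavour of the underlying growing-neural-gas adaptation can be sketched as follows: find the two nearest units to each sample, connect them, pull the winner and its neighbours toward the input, and occasionally grow the network. The paper's colour/normal handling, error-driven insertion rule and triangle-mesh construction are not reproduced, and the constants are typical GNG-style defaults rather than the authors' values.

```python
import numpy as np

# Simplified GNG-like competitive learning on 2-D points.
def gng_fit(data, n_iter=500, eps_b=0.2, eps_n=0.006, seed=0):
    rng = np.random.default_rng(seed)
    units = rng.random((2, data.shape[1]))            # start with two units
    edges = set()
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(units - x, axis=1)
        s1, s2 = np.argsort(d)[:2]
        edges.add(tuple(sorted((int(s1), int(s2)))))  # connect two winners
        units[s1] += eps_b * (x - units[s1])          # move the winner
        for a, b in edges:                            # move its neighbours
            if s1 in (a, b):
                other = b if a == s1 else a
                units[other] += eps_n * (x - units[other])
        if t % 100 == 99 and len(units) < 20:         # crude periodic growth
            q = int(np.argmax(np.linalg.norm(units - x, axis=1)))
            units = np.vstack([units, (units[q] + x) / 2])
    return units, edges

pts = np.random.default_rng(1).random((200, 2))
units, edges = gng_fit(pts)
```

In the full method the edge set is what gets promoted from a wire-frame to a closed triangular mesh.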
978.
With the rapid development of visual digital media, the demand for better quality of service has increased the pressure on broadcasters to automate their error detection and restoration activities for preserving their archives. Digital dropout is one of the defects that affect archived visual material and tends to occur on a block-by-block basis (8 × 8 blocks). It is well established that the human visual system (HVS) is highly adapted to the statistics of its natural visual environment. Consequently, in this paper we formulate digital dropout detection as a classification problem that predicts block labels from statistical features. These features are indicative of perceptual quality relevant to human visual perception and allow pristine images to be distinguished from distorted ones. The idea is to extract discriminative block statistics from discrete cosine transform (DCT) coefficients and to determine an optimal neighborhood sampling strategy that enhances the discrimination ability of the block representation. Since this frame-based spatial approach is free of any motion-computation dependency, it works well even in the presence of fast-moving objects. Experiments on video archives evaluate the efficacy of the proposed technique.
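The feature-extraction step can be sketched as: tile the frame into 8 × 8 blocks, take each block's 2-D DCT, and summarise the AC coefficients with simple statistics for a classifier. The specific statistics chosen below are illustrative assumptions, not the paper's feature set.

```python
import numpy as np

# Orthonormal DCT-II matrix, built explicitly so no SciPy is needed.
def dct_matrix(n=8):
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    D = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0] *= 1 / np.sqrt(2)
    return D * np.sqrt(2 / n)

# Per-block statistics over the AC DCT coefficients of a grayscale frame.
def block_features(frame, bs=8):
    D = dct_matrix(bs)
    h, w = frame.shape
    feats = []
    for r in range(0, h - bs + 1, bs):
        for c in range(0, w - bs + 1, bs):
            coef = D @ frame[r:r + bs, c:c + bs] @ D.T  # 2-D DCT-II
            ac = coef.flatten()[1:]                     # drop the DC term
            feats.append([ac.mean(), ac.std(), np.abs(ac).max()])
    return np.array(feats)

frame = np.random.default_rng(0).random((32, 32))
F = block_features(frame)  # one feature row per 8x8 block
```

A trained classifier would then label each row of `F` as pristine or dropout-affected.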
979.
Audio fingerprinting allows us to label an unidentified music fragment using a previously generated database. The use of spectral landmarks aims at a robustness that tolerates a certain level of noise in the audio query. This family of audio identification algorithms has several configuration parameters whose values are usually chosen from the researcher's experience, previously published experimentation, or simple trial and error. In this paper we describe the full optimisation process of a landmark-based music recognition system using genetic algorithms. We encode the structure of the algorithm as a chromosome by transforming its most relevant parameters into genes and building an appropriate fitness evaluation method. The optimised parameters are used to set up a complete system, which is compared with a non-optimised one under an unbiased evaluation model.
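The chromosome-encoding idea can be sketched with a small elitist GA. The real fitness would be recognition accuracy on an audio database; here a synthetic fitness stands in, and every parameter name and range below is hypothetical, not taken from the paper.

```python
import random

# Hypothetical landmark-extraction parameters encoded as a chromosome.
BOUNDS = {"fft_size": (256, 4096), "peaks_per_frame": (1, 10),
          "fan_out": (1, 20)}
TARGET = {"fft_size": 2048, "peaks_per_frame": 5, "fan_out": 10}

def fitness(c):  # stand-in: closeness to a known-good configuration
    return -sum(abs(c[k] - TARGET[k]) / (hi - lo)
                for k, (lo, hi) in BOUNDS.items())

def evolve(pop_size=30, gens=60, seed=0):
    rng = random.Random(seed)
    rand = lambda: {k: rng.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}
    pop = [rand() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]                    # keep the best third
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = {k: rng.choice((a[k], b[k])) for k in BOUNDS}  # crossover
            if rng.random() < 0.2:                                 # mutation
                k = rng.choice(list(BOUNDS))
                lo, hi = BOUNDS[k]
                child[k] = min(hi, max(lo, child[k] + rng.gauss(0, (hi - lo) * 0.05)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Swapping the stand-in `fitness` for a run of the full recognition pipeline is what turns this toy loop into the paper's optimisation process.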
980.
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources. As a result, prior research in this area focuses on proposing automated approaches to accurately identify duplicate reports. However, no studies so far have attempted to quantify the actual effort that is spent on identifying duplicate issue reports. In this paper, we empirically examine the effort that is needed to manually identify duplicate reports in four open source projects, i.e., Firefox, SeaMonkey, Bugzilla and Eclipse-Platform. Our results show that: (i) more than 50% of the duplicate reports are identified within half a day, and most are identified without any discussion and with the involvement of very few people; (ii) a classification model built using a set of factors extracted from duplicate issue reports classifies duplicates according to the effort needed to identify them with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) factors that capture the developer's awareness of the duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since identifying a considerable share of duplicate reports (over 50%) appears to be a relatively trivial task for developers. To better assist developers, research on identifying duplicate issue reports should put greater emphasis on the effort-consuming duplicate issues.
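The modelling step, predicting an effort class from report-level factors and scoring the result with precision and recall, can be illustrated with a toy logistic regression. The data, the two factors and the model below are synthetic stand-ins; the paper's actual features and classifier are not reproduced here.

```python
import numpy as np

# Synthetic stand-ins for two of the paper's influential factors.
rng = np.random.default_rng(0)
n = 400
sim = rng.random(n)            # "textual similarity to prior reports"
awareness = rng.random(n)      # "developer awareness of duplicate peers"
X = np.column_stack([np.ones(n), sim, awareness])
# Synthetic label: 1 = duplicate identified quickly, with noise.
y = (2 * sim + awareness + rng.normal(0, 0.3, n) > 1.5).astype(float)

# Logistic regression fitted by batch gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

pred = (1 / (1 + np.exp(-X @ w)) > 0.5).astype(float)
tp = np.sum((pred == 1) & (y == 1))
precision = tp / max(pred.sum(), 1)
recall = tp / max(y.sum(), 1)
```

The paper reports exactly these two metrics (plus ROC area) per project, which is why an effort-aware evaluation needs both.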