181.
Knowledge of users' information goals is critical in website design, in analyzing the efficacy of such designs, and in ensuring effective user access to desired information. Determining the information goal is complex due to the subjective and latent nature of user information needs. This challenge is further exacerbated in media-rich websites, since the semantics of media-based information is context-dependent and emergent. A critical step in determining information goals lies in identifying content pages: the pages that contain the information the user seeks. We propose a method to automatically determine content pages by taking into account the organization of the website, the media-based information content, and the influence of a specific user browsing pattern. Given a browsing pattern, our method identifies putative content pages as the pages corresponding to local minima of page-content entropy values. For an (unknown) user information goal, this intuitively models the user's progressive transition from pages with generic information to those with specific information. Experimental investigations on media-rich sites demonstrate the effectiveness of the technique and underline its potential for modeling user information needs and actions on a media-rich web.
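The entropy-minima idea can be sketched as follows, using a bag-of-words term distribution as a stand-in for the paper's media-based page content (the function names and the text proxy are illustrative assumptions, not the authors' implementation):

```python
from collections import Counter
from math import log2

def page_entropy(terms):
    """Shannon entropy of the term distribution on one page:
    low entropy suggests specific content, high entropy generic content."""
    counts = Counter(terms)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def putative_content_pages(browsing_path):
    """Indices of pages whose entropy is a local minimum along the
    user's browsing pattern (candidate content pages)."""
    h = [page_entropy(p) for p in browsing_path]
    hits = []
    for i in range(len(h)):
        left = h[i - 1] if i > 0 else float("inf")
        right = h[i + 1] if i < len(h) - 1 else float("inf")
        if h[i] < left and h[i] < right:
            hits.append(i)
    return hits
```

A generic-to-specific-to-generic path thus yields the middle (specific) page as the putative content page.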
182.
Fuzzy grey relational analysis for software effort estimation
Accurate and credible software effort estimation is a challenge for both academic research and the software industry. Among the many software effort estimation models in existence, Estimation by Analogy (EA) remains one of the techniques preferred by software engineering practitioners because it mimics the human problem-solving approach. The accuracy of such a model depends on the characteristics of the dataset, which is subject to considerable uncertainty. The inherent uncertainty in software attribute measurement has a significant impact on estimation accuracy, because these attributes are measured based on human judgment and are often vague and imprecise. To overcome this challenge, we propose a new formal EA model based on the integration of fuzzy set theory with Grey Relational Analysis (GRA). Fuzzy set theory is employed to reduce uncertainty in the distance measure |x_o(k) - x_i(k)| between two tuples at the k-th continuous feature. GRA is a problem-solving method used to assess the similarity between two tuples with M features. Since these features need not all be continuous and may be of nominal or ordinal scale type, aggregating different forms of similarity measures increases uncertainty in the similarity degree. Thus GRA is mainly used to reduce uncertainty in the distance measure between two software projects for both continuous and categorical features. Both techniques are suitable when the relationship between effort and the other effort drivers is complex. Experimental results showed that the integration of GRA with fuzzy logic produced credible estimates compared with the results obtained using Case-Based Reasoning, Multiple Linear Regression, and Artificial Neural Network methods.  相似文献
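The role GRA plays here can be illustrated with the classical grey relational grade over continuous features (this is the textbook formulation with distinguishing coefficient zeta = 0.5, not the paper's fuzzified variant; function names are assumptions):

```python
def grey_relational_grades(reference, candidates, zeta=0.5):
    """Classical GRA: similarity of each candidate project to the
    reference project. delta[k] = |x_o(k) - x_i(k)| per continuous
    feature k; the grade is the mean grey relational coefficient."""
    deltas = [[abs(r - c) for r, c in zip(reference, cand)]
              for cand in candidates]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    grades = []
    for row in deltas:
        # Grey relational coefficient per feature, then average.
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

In an analogy-based estimator, the candidate projects with the highest grades would supply the effort values for the final estimate.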
183.
In this paper, the dynamic behaviors of a class of neural networks with time-varying delays are investigated. Less conservative sufficient conditions based on the p-norm and the ∞-norm are obtained to guarantee the existence and uniqueness of the equilibrium point of the addressed neural networks without impulsive control, by applying homeomorphism theory. Then, by utilizing inequality techniques, the Lyapunov functional method, and analytical arguments, new and useful criteria for the global exponential stability of the equilibrium point under the assumed impulsive control are derived, based on the p-norm and the ∞-norm, respectively. Finally, an example with a simulation is given to show the effectiveness of the obtained results.  相似文献
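The abstract does not write out the network class; a typical time-varying-delay network with impulsive control, of the kind such p-norm/∞-norm stability criteria are usually stated for, has the form (notation assumed, not taken from the paper):

```latex
\dot{x}_i(t) = -c_i x_i(t)
  + \sum_{j=1}^{n} a_{ij} f_j\bigl(x_j(t)\bigr)
  + \sum_{j=1}^{n} b_{ij} g_j\bigl(x_j(t - \tau_j(t))\bigr) + I_i,
  \qquad t \neq t_k,

\Delta x_i(t_k) = x_i(t_k^+) - x_i(t_k^-) = J_{ik}\bigl(x_i(t_k^-)\bigr),
```

where the $\tau_j(t)$ are the time-varying delays and the impulsive jumps $J_{ik}$ act at the instants $t_k$; the criteria then bound the network parameters so that trajectories converge exponentially to the unique equilibrium.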
184.
As readers of this journal will know, the Zugangserschwerungsgesetz has caused considerable and often very profound debate in Germany about the limits of legal and technological interference with the freedom of access to information, culminating in the President's temporary refusal to sign the law into force. In the UK, by contrast, a core aspect of this law, the technical prevention by ISPs of access to sites hosting illegal content, was introduced through the so-called "Cleanfeed" system as early as 1996, with little or no public debate, bypassing by and large all parliamentary procedure and scrutiny. This article has a threefold aim: first, it gives a brief account of the history and implementation of the UK Cleanfeed system; second, it explains some of its more unusual aspects by putting them into the historical and constitutional context of policing in the UK; and third, it highlights those experiences with the system that are of direct relevance to the German discussion.  相似文献
185.
The flow of a model non-polar liquid through small carbon nanotubes is studied using non-equilibrium molecular dynamics simulation. We explain how a membrane of small-diameter nanotubes can transport this liquid faster than a membrane consisting of larger-diameter nanotubes. This effect is shown to be back-pressure dependent, and the reasons for this are explored. The flow through the very smallest nanotubes is shown to depend strongly on the depth of the potential inside, suggesting atomic separation can be based on carbon interaction strength as well as physical size. Finally, we demonstrate how increasing the back-pressure can counter-intuitively result in lower exit velocities from a nanotube. Such studies are crucial for optimisation of nanotube membranes.
188.
For motion-compensated de-interlacing, the accuracy and reliability of the motion vectors have a significant impact on the performance of motion-compensated interpolation. To improve the robustness of the motion vectors, a novel motion estimation algorithm with center-biased diamond search, together with a parallel VLSI architecture for it, is proposed in this paper. Experiments show that it outperforms conventional motion estimation algorithms in terms of motion compensation error and robustness, and that the architecture overcomes irregular data flow and achieves high efficiency. It also efficiently reuses data and reduces control overhead, making it highly suitable for HDTV applications.  相似文献
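A center-biased diamond search of the kind named here can be sketched in a few lines: start at the zero-motion (center) candidate, walk the large diamond pattern until its best point is the center, then refine once with the small diamond. This is a generic block-matching sketch with SAD as the matching cost, not the paper's VLSI-oriented variant:

```python
import numpy as np

# Large and small diamond search patterns, both including the center.
LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0),
        (1, 1), (1, -1), (-1, 1), (-1, -1)]
SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def sad(cur, ref, bx, by, dx, dy, B):
    """Sum of absolute differences between the current BxB block at
    (bx, by) and the reference block displaced by (dx, dy)."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + B > h or y + B > w:
        return np.inf          # candidate falls outside the frame
    return np.abs(cur[bx:bx+B, by:by+B].astype(int)
                  - ref[x:x+B, y:y+B].astype(int)).sum()

def diamond_search(cur, ref, bx, by, B=8, max_iter=32):
    """Center-biased diamond search: zero-motion start, large diamond
    until its best point is the center, then one small-diamond refinement."""
    dx = dy = 0
    for _ in range(max_iter):
        costs = [(sad(cur, ref, bx, by, dx + u, dy + v, B), (dx + u, dy + v))
                 for u, v in LDSP]
        _, (nx, ny) = min(costs)
        if (nx, ny) == (dx, dy):   # best point is the center: stop walking
            break
        dx, dy = nx, ny
    costs = [(sad(cur, ref, bx, by, dx + u, dy + v, B), (dx + u, dy + v))
             for u, v in SDSP]
    _, (dx, dy) = min(costs)
    return dx, dy
```

The zero-motion start is what makes the search center-biased: stationary regions, common in de-interlacing, are resolved after a single pattern evaluation.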
189.
Recognition of Arabic (Indian) bank check digits using log-Gabor filters
In this paper we present a technique for the automatic recognition of Arabic (Indian) bank check digits based on features extracted using log-Gabor filters. The digits are classified using K-Nearest Neighbor (K-NN), Hidden Markov Model (HMM), and Support Vector Machine (SVM) classifiers. An extensive experimental evaluation is conducted on the CENPARMI data, a database consisting of 7390 samples of Arabic (Indian) digits for training and 3035 samples for testing, extracted from real bank checks. The data is normalized to a height of 64 pixels, maintaining the aspect ratio. Log-Gabor filters with several scales and orientations are used. In addition, the filtered images are segmented into regions of different sizes for feature extraction. Recognition rates of 98.95%, 98.75%, 98.62%, 97.21%, and 94.43% are achieved with the SVM, 1-NN, 3-NN, HMM, and NM classifiers, respectively. These results significantly outperform published work using the same database. The misclassified digits were evaluated subjectively, and the results indicate that human subjects misclassified one third of these digits. The experimental results, including the subjective evaluation of misclassified digits, indicate the effectiveness of the selected log-Gabor filter parameters, the implemented image segmentation technique, and the extracted features for practical recognition of Arabic (Indian) digits.  相似文献
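A log-Gabor filter bank is defined directly in the frequency domain, since the log-Gabor transfer function has no analytic spatial form: a log-normal radial band-pass multiplied by an angular Gaussian. A minimal sketch follows; the parameter values and the mean-magnitude feature are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def log_gabor_filter(rows, cols, f0=0.1, sigma_ratio=0.65,
                     theta0=0.0, sigma_theta=np.pi / 6):
    """Frequency-domain log-Gabor filter: log-normal radial band-pass
    centred at frequency f0, times an angular Gaussian at theta0."""
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0                       # avoid log(0) at DC
    radial = np.exp(-(np.log(radius / f0))**2
                    / (2 * np.log(sigma_ratio)**2))
    radial[0, 0] = 0.0                       # log-Gabor has no DC component
    theta = np.arctan2(fy, fx)
    # Wrapped angular distance to the filter orientation.
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-dtheta**2 / (2 * sigma_theta**2))
    return radial * angular

def log_gabor_features(img, scales=(0.1, 0.2),
                       orientations=(0.0, np.pi / 2)):
    """Mean response magnitude per (scale, orientation): a toy stand-in
    for the region-wise features described in the abstract."""
    F = np.fft.fft2(img)
    feats = []
    for f0 in scales:
        for th in orientations:
            g = log_gabor_filter(*img.shape, f0=f0, theta0=th)
            feats.append(np.abs(np.fft.ifft2(F * g)).mean())
    return np.array(feats)
```

In the paper's setting these responses would be computed per image region and concatenated into the feature vector fed to the classifiers.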
190.
This paper presents a method for autonomous topological modeling and localization in a home environment using only low-cost sonar sensors. The topological model is extracted from a grid map using cell decomposition and the normalized graph cut. The autonomous topological modeling involves the incremental extraction of subregions without predefining their number. A method of topological localization based on this model is proposed, wherein a current local grid map is compared with the original grid map. Localization is accomplished by obtaining a node probability from a relative motion model and rotation-invariant grid-map matching. The proposed method extracts a well-structured topological model of the environment, and the localization provides reliable node probabilities even with sparse and uncertain sonar data. Experimental results demonstrate the performance of the proposed topological modeling and localization in a real home environment.  相似文献
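The normalized-cut step used to split the map into subregions can be sketched as the standard two-way Shi-Malik relaxation: threshold the second-smallest generalized eigenvector of (D - W)v = lambda * D v. This is a toy sketch on an abstract affinity matrix W; the paper applies the cut to cells of the decomposed grid map:

```python
import numpy as np
from scipy.linalg import eigh

def normalized_cut(W):
    """Two-way normalized graph cut: split the nodes of affinity matrix W
    by thresholding the second-smallest generalized eigenvector of
    (D - W) v = lambda * D * v at its median."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    # eigh solves the symmetric generalized eigenproblem L v = lambda D v,
    # returning eigenvalues in ascending order.
    vals, vecs = eigh(L, D)
    fiedler = vecs[:, 1]                  # second-smallest eigenvector
    return (fiedler > np.median(fiedler)).astype(int)
```

Applied recursively, with a stopping criterion on the cut value, this yields the incremental extraction of subregions without fixing their number in advance.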