789 query results found (search time: 15 ms)
21.
In recent years we have witnessed several applications of frequent sequence mining, such as feature selection for protein sequence classification and mining block correlations in storage systems. In typical applications such as clustering, it is not the complete set but only a subset of discriminating frequent subsequences that is of interest. One approach to discovering this subset is to apply an existing frequent sequence mining algorithm to find the complete set of frequent subsequences and then identify the interesting ones. Unfortunately, mining the complete set of frequent subsequences is very time-consuming for large sequence databases. In this paper, we propose a new algorithm, CONTOUR, which directly and efficiently mines a subset of high-quality subsequences in order to cluster the input sequences. We mainly focus on designing effective search-space pruning methods to accelerate the mining process, and discuss how to construct an accurate clustering algorithm based on the result of CONTOUR. We conducted an extensive performance study to evaluate the efficiency and scalability of CONTOUR, and the accuracy of the frequent-subsequence-based clustering algorithm.
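The mining step this abstract builds on can be made concrete with a brute-force baseline. The sketch below is not CONTOUR itself (the paper's pruning methods are its contribution); it is a naive frequent-subsequence counter, with the function name and parameters chosen for illustration:

```python
from itertools import combinations

def frequent_subsequences(db, max_len, min_support):
    """Naively enumerate every subsequence of length <= max_len in each
    sequence and keep those whose support (number of sequences containing
    them) reaches min_support. This shows only the support counting that
    any frequent-subsequence miner must perform."""
    support = {}
    for seq in db:
        seen = set()                      # count each pattern once per sequence
        for k in range(1, max_len + 1):
            for sub in combinations(seq, k):
                seen.add(sub)
        for sub in seen:
            support[sub] = support.get(sub, 0) + 1
    return {s: c for s, c in support.items() if c >= min_support}
```

The cost of this enumeration grows combinatorially with sequence length, which is exactly why mining a small discriminating subset directly, as CONTOUR does, is attractive.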
22.
23.
A Web server, when overloaded, initially shows a severe degradation of goodput, which eventually settles as load increases further. Traditional performance models have failed to capture this behavior. In this paper, we propose an analytical model of the Web server, a two-stage layered queuing model, which is able to reproduce this behavior. We do so by explicitly modelling the overhead processing, the user abandonment and retry behavior, and the contention for resources, under both the FIFO and LIFO queuing disciplines. We show that LIFO provides better goodput in most overload situations. We compare our model predictions with experimental results from a test bed and find that they match the measurements well.
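The goodput effect described here can be reproduced with a toy single-server simulation. This is not the authors' layered queuing model; the fixed service time, the patience parameter, and all names are illustrative assumptions:

```python
import random

def simulate(arrival_rate, service_time, patience, discipline, horizon=2000.0, seed=0):
    """Toy single-server queue with user abandonment: a request counts
    toward goodput only if it finishes before the user's patience runs out."""
    rng = random.Random(seed)
    queue = []                                   # arrival times of waiting requests
    t, goodput = 0.0, 0
    next_arrival = rng.expovariate(arrival_rate)
    while t < horizon:
        while next_arrival <= t:                 # admit all arrivals up to now
            queue.append(next_arrival)
            next_arrival += rng.expovariate(arrival_rate)
        if not queue:
            t = next_arrival                     # idle until the next arrival
            continue
        arrived = queue.pop(0) if discipline == "FIFO" else queue.pop()
        t += service_time
        if t - arrived <= patience:              # user was still waiting
            goodput += 1
    return goodput
```

Under overload (arrival rate twice the service rate), LIFO keeps serving fresh requests within their patience window, while FIFO spends capacity on requests whose users have long given up, which is the intuition behind the paper's conclusion.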
24.
A malware mutation engine is able to transform a malicious program to create a different version of it. Such mutation engines are used at distribution sites or in self-propagating malware to create variation among the distributed programs. Program normalization is a way to remove the variety introduced by mutation engines, and can thus simplify the problem of detecting variant strains. This paper introduces the "normalizer construction problem" (NCP) and formalizes a restricted form of the problem, called "NCP=", which assumes that a model of the engine is already known in the form of a term rewriting system. It is shown that even this restricted version of the problem is undecidable. A procedure is provided that can, in certain cases, automatically solve NCP= from the model of the engine. This procedure is analyzed in conjunction with term rewriting theory to derive a list of distinct classes of normalizer construction problems, which in turn yields a list of possible attack vectors. Three strategies are defined for approximate solutions of NCP=, together with an analysis of the risks they entail. A case study using the virus suggests the approximations may be effective in practice for countering mutated malware. R. Mathur is presently at McAfee AVERT Labs.
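The core idea of normalization via term rewriting can be sketched with a toy string rewriter: run the mutation engine's rules in the normalizing direction until a fixpoint. The rule set below (junk-instruction removal) is hypothetical, and the step cap papers over exactly the termination issues that make the general problem undecidable:

```python
def normalize(program, rules, max_steps=1000):
    """Repeatedly apply rewrite rules (lhs -> rhs) until no rule fires,
    returning the normal form. `rules` is a list of (lhs, rhs) pairs
    modeling the mutation engine, applied in reverse."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            if lhs in program:
                program = program.replace(lhs, rhs, 1)
                break
        else:
            return program            # no rule applied: normal form reached
    raise RuntimeError("rewriting did not terminate within max_steps")
```

With rules such as deleting `nop;` and cancelling `push x;pop x;`, mutated variants of the same program collapse to one canonical form that a detector can match exactly.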
25.
The frequent difficulties encountered in the diagnosis of pediatric sarcomas, caused by the lack of observable differentiation at the light-microscopic level, have led to the routine use of immunohistochemistry in pediatric surgical pathology. To a large degree, the advent of this staining technique has led to the correct assessment of many perplexing lesions that would previously have been given inconclusive diagnoses. However, with increased usage and testing, it has become apparent that there are few, if any, "magic bullets" in immunohistochemistry for pediatric pathologists. Thus, it behooves diagnosticians to be careful in the use of this technique, to be aware of possible discrepancies in its results, and to remember the ancillary nature of its application. The following article reviews selected markers commonly used in pediatric surgical pathology, drawing on both previous reports and the author's perspective, and briefly considers several new phenotypic markers with potential utility for childhood sarcomas.
26.
Facility location decisions are usually determined by cost- and coverage-related factors, although empirical studies show that factors such as infrastructure, labor conditions and competition also play an important role in practice. The objective of this paper is to develop a multi-objective facility location model accounting for a wide range of factors affecting decision-making. The proposed model selects potential facilities from a set of pre-defined alternative locations according to the number of customers, the number of competitors and real-estate cost criteria. However, this requires a large amount of both spatial and non-spatial input data, which could be acquired from distributed data sources over the Internet. Therefore, a computational approach for processing input data and representing the modeling results is elaborated; it is capable of accessing and processing data from heterogeneous spatial and non-spatial data sources. The application of the elaborated data-gathering approach and facility location model is demonstrated on an example fast-food restaurant location problem.
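A minimal way to combine the three criteria named in the abstract is weighted-sum scalarization; the paper's model is genuinely multi-objective, so the sketch below is only the simplest stand-in, with hypothetical field names and weights:

```python
def rank_sites(sites, weights):
    """Rank candidate locations by a weighted sum of the three criteria:
    reward customers reached, penalize nearby competitors and real-estate
    cost. Each site is a dict with 'customers', 'competitors', 'cost'
    (illustrative field names)."""
    w_cust, w_comp, w_cost = weights
    def score(s):
        return w_cust * s["customers"] - w_comp * s["competitors"] - w_cost * s["cost"]
    return sorted(sites, key=score, reverse=True)
```

In a real pipeline the per-site counts would come from the spatial and non-spatial data sources the paper's data-gathering approach integrates.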
27.
The rapid growth in the performance of graphics hardware, coupled with recent improvements in its programmability, has led to its adoption in many non-graphics applications, including a wide variety of scientific computing fields. At the same time, a number of important dynamic optimal policy problems in economics are starved for computing power to help overcome the dual curses of complexity and dimensionality. We investigate whether computational economics may benefit from these new tools through a case study of an imperfect-information dynamic programming problem with a learning-and-experimentation trade-off, that is, a choice between controlling the policy target and learning system parameters. Specifically, we use a model of active learning and control of a linear autoregression with an unknown slope that has appeared in a variety of macroeconomic policy and other contexts. The endogeneity of posterior beliefs makes the problem difficult in that the value function need not be convex and the policy function need not be continuous. This complication makes the problem a suitable target for massively parallel computation using graphics processors (GPUs). Our findings are cautiously optimistic: the new tools let us easily achieve a factor-of-15 performance gain relative to an implementation targeting single-core processors. Further gains, up to a factor of 26, are also achievable but lie behind a learning and experimentation barrier of their own. Drawing upon our experience with the CUDA programming architecture and GPUs, we offer general lessons on how best to exploit future trends in parallel computation in economics.
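Why a dynamic program maps well onto GPUs can be seen in plain value iteration: each state's Bellman update reads only the previous value table, so the states can be updated in parallel, one thread per state. The sketch below is a generic serial version with illustrative names, not the paper's learning-and-experimentation model:

```python
def value_iteration(states, actions, reward, transition, beta=0.95, tol=1e-8):
    """Solve V(s) = max_a [ reward(s,a) + beta * V(transition(s,a)) ] on a
    discrete state grid. The inner comprehension over states is
    embarrassingly parallel; a GPU kernel would assign one thread per state."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(reward(s, a) + beta * V[transition(s, a)] for a in actions)
                 for s in states}     # independent updates over states
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new
```

In the paper's problem the "state" also carries posterior beliefs about the unknown slope, which is what makes the value function non-convex and the grids large enough for GPU parallelism to pay off.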
28.
Social media networks contain both content and context-specific information. Most existing methods work with only one of the two for the purpose of multimedia mining and retrieval. In reality, both content and context are rich sources of information for mining, and the full power of mining and processing algorithms can be realized only by combining the two. This paper proposes a new algorithm that mines both context and content links in social media networks to discover the underlying latent semantic space. This mapping of multimedia objects into latent feature vectors enables the use of any off-the-shelf multimedia retrieval algorithm. Compared to state-of-the-art latent methods in multimedia analysis, this algorithm effectively solves the problem of sparse context links by mining the geometric structure underlying the content links between multimedia objects. Specifically for multimedia annotation, we show that an effective algorithm can be developed to directly construct annotation models by simultaneously leveraging both context and content information, based on the latent structure between correlated semantic concepts. We conduct experiments on the Flickr data set, which contains user tags linked with images, and illustrate the advantages of our approach over state-of-the-art multimedia retrieval techniques.
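The idea of blending the two link types for annotation can be illustrated with a deliberately crude scorer: weight each neighbor's tags by a mix of content similarity and explicit context links. The paper instead learns a shared latent space; everything below, including the weighting scheme and names, is an illustrative assumption:

```python
def propagate_tags(tags, content_sim, context_links, alpha=0.5):
    """Score candidate tags for an untagged object. `tags` maps neighbor
    objects to their tag lists, `content_sim` gives content similarity to
    the target, and `context_links` is the set of neighbors sharing an
    explicit context link. Returns tags sorted by combined score."""
    scores = {}
    for obj, obj_tags in tags.items():
        w = alpha * content_sim.get(obj, 0.0) \
            + (1 - alpha) * (1.0 if obj in context_links else 0.0)
        for t in obj_tags:
            scores[t] = scores.get(t, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)
```

Even this toy version shows the benefit the abstract claims: when context links are sparse, the content-similarity term still supplies evidence, and vice versa.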
29.
These days, speech processing devices such as voice-controlled assistants, radios, and cell phones have gained popularity in the military, audio forensics, speech recognition, education and health sectors. In the real world, the speech signal during communication always contains background noise. A core task of speech-related applications, including speech communication, speech recognition, and speech coding, is voice activity detection (VAD). Noise-reduction schemes for speech communication can increase the quality of speech and improve working efficiency in military aviation. Most existing algorithms can improve the quality of speech but are unable to remove the background noise from it. This study provides researchers with a summary of the challenges of speech communication in background noise and suggests research directions for military personnel and workforces who operate in noisy environments. The results of the study reveal that a DSP-based voice activity detection and background-noise-reduction algorithm reduced the spurious values of the speech signal.
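The simplest form of the VAD decision the abstract refers to is a frame-energy threshold; real DSP detectors add smoothing and adaptive noise floors, so the sketch below (with assumed frame length and threshold) is only the bare decision rule:

```python
def energy_vad(samples, frame_len=160, threshold=0.02):
    """Frame-energy voice activity detection: split the signal into
    non-overlapping frames and flag a frame as speech when its mean
    squared amplitude exceeds the threshold."""
    flags = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        flags.append(energy > threshold)
    return flags
```

At 16 kHz sampling, a 160-sample frame is 10 ms, a common VAD frame size; a noise-reduction front end would then attenuate the frames flagged as non-speech.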
30.
A methodology has been developed in this study wherein a genetic algorithm (GA) is used to find a global optimal solution to a groundwater flow and contaminant transport problem, incorporating an artificial neural network (ANN) to evaluate the objective function within the genetic algorithm. The study shows that the ANN-GA technique can be used to find the uncertainties in output parameters due to imprecision in input parameters. The ANN-GA methodology is applied to five case studies, involving radial flow in a well, one-dimensional solute transport in steady uniform flow, two-dimensional heterogeneous steady flow, two-dimensional solute transport, and two-dimensional unsteady groundwater flow, to demonstrate the efficiency and effectiveness of the developed algorithm. The results show that, with this approach, one can successfully measure the uncertainty in groundwater flow and contaminant transport simulations and achieve a considerable reduction in computational effort compared to the vertex method that has been widely used in the past.
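The ANN-GA coupling can be sketched by passing the GA a cheap surrogate objective in place of the expensive simulator. The GA below is a minimal real-coded version with illustrative operators and parameters, not the authors' implementation; in their setting the `surrogate` argument would be the trained ANN:

```python
import random

def genetic_search(surrogate, bounds, pop=30, gens=60, seed=1):
    """Minimize `surrogate` over a 1-D interval with a tiny real-coded GA:
    truncation selection, arithmetic crossover, Gaussian mutation. The
    elite half of the population survives each generation unchanged."""
    rng = random.Random(seed)
    lo, hi = bounds
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=surrogate)
        parents = population[:pop // 2]              # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                      # arithmetic crossover
            child += rng.gauss(0, 0.1 * (hi - lo))   # Gaussian mutation
            children.append(min(hi, max(lo, child))) # clip to bounds
        population = parents + children
    return min(population, key=surrogate)
```

Because every objective evaluation hits the surrogate rather than a groundwater simulation, the thousands of evaluations a GA needs become cheap, which is the source of the computational savings the abstract reports.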