61.
In the recent past, wavelet packet (WP) based speech enhancement techniques have gained popularity because of their inherent ability to minimize noise, and WP-based techniques have proved more robust and efficient than short-time Fourier transform based methods. In the present work, a speech enhancement method using a Teager-energy-operated, equivalent rectangular bandwidth (ERB)-like WP decomposition is proposed. A twenty-four sub-band perceptual wavelet packet decomposition (PWPD) structure is implemented according to the auditory ERB scale; this decomposition structure is used because the centre frequencies of the ERB scale are distributed similarly to the frequency response of the human cochlea. The Teager energy operator is applied to estimate the threshold values for the PWPD coefficients, and Wiener filtering is then applied to remove low-frequency noise before the final reconstruction stage. The proposed method is evaluated on a Hindi sentence database corrupted with six noise conditions, and its performance is analysed with respect to several speech quality measures and output signal-to-noise ratio (SNR) levels. The results indicate that the proposed technique outperforms several traditional speech enhancement algorithms at all SNR levels.
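As a rough illustration of the Teager-energy-driven thresholding step described in this abstract, the sketch below computes the discrete Teager energy operator and uses it to set a shrinkage threshold for one sub-band of wavelet packet coefficients. The 24-band ERB-like PWPD tree and the Wiener post-filter are not reproduced; the RMS-of-Teager-energy threshold rule, the soft shrinkage, the border padding, and all function names are illustrative assumptions, not the authors' method.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[x](n) = x(n)^2 - x(n-1) * x(n+1)."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]              # replicate values at the borders
    return psi

def teo_threshold(coeffs, scale=1.0):
    """Illustrative TEO-driven threshold for one sub-band of WP coefficients:
    a scaled root-mean-square of the Teager energy (assumed rule, not the paper's)."""
    psi = np.abs(teager_energy(coeffs))
    return scale * np.sqrt(psi.mean())

def soft_threshold(coeffs, thr):
    """Standard soft shrinkage of the sub-band coefficients."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

# usage on one noisy sub-band (random numbers stand in for real PWPD coefficients)
band = np.random.randn(256)
denoised_band = soft_threshold(band, teo_threshold(band))
```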
62.
Solving shortest path problem using particle swarm optimization
This paper investigates the application of particle swarm optimization (PSO) to shortest path (SP) routing problems. A modified priority-based encoding, incorporating a heuristic operator that reduces the possibility of loop formation during path construction, is proposed for particle representation in PSO. Simulation experiments have been carried out on different network topologies with 15–70 nodes. The proposed PSO-based approach finds the optimal path with good success rates and also finds close sub-optimal paths with high certainty for all the tested networks, and its performance surpasses that of recently reported genetic-algorithm-based approaches to this problem.
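To make the priority-based particle representation concrete, here is a minimal sketch of how a particle (a vector of node priorities) could be decoded into a loop-free path: at each step the highest-priority unvisited neighbour is chosen, and marking nodes as visited plays the role of the loop-suppressing heuristic. The decoding rule, the function names, and the tiny example graph are assumptions for illustration; the standard PSO velocity and position updates that evolve the priority vectors are omitted.

```python
import numpy as np

def decode_path(priorities, adj, src, dst):
    """Decode one particle (a priority per node) into a loop-free path.

    At every step the unvisited neighbour with the highest priority is chosen;
    marking nodes as visited stands in for the loop-suppressing heuristic
    (an assumed reading of the operator, not its exact published form)."""
    path, visited, node = [src], {src}, src
    while node != dst:
        candidates = [v for v in adj[node] if v not in visited]
        if not candidates:
            return None                         # dead end: infeasible particle
        node = max(candidates, key=lambda v: priorities[v])
        path.append(node)
        visited.add(node)
    return path

def path_cost(path, w):
    """Fitness of a decoded path: the sum of its edge weights."""
    return sum(w[(u, v)] for u, v in zip(path, path[1:]))

# tiny 5-node example with one random particle
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
w = {(0, 1): 4, (0, 2): 1, (1, 2): 2, (1, 3): 5, (2, 4): 8, (3, 4): 1}
w.update({(v, u): c for (u, v), c in list(w.items())})
particle = np.random.rand(5)                    # one PSO particle = priority vector
path = decode_path(particle, adj, 0, 4)
print(path, path_cost(path, w) if path else None)
```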
63.
Atomic force microscopy (AFM) provides a unique opportunity to study live individual bacteria at the nanometer scale. In addition to providing accurate morphological information, AFM can be exploited to investigate membrane protein localization and molecular interactions on the surface of living cells. A prerequisite for these studies is the development of robust procedures for sample preparation. While such procedures are established for intact bacteria, they are only beginning to emerge for bacterial spheroplasts. Spheroplasts are useful research models for studying mechanosensitive ion channels, membrane transport, lipopolysaccharide translocation, solute uptake, and the effects of antimicrobial agents on membranes. Furthermore, given the similarities between spheroplasts and cell wall-deficient (CWD) forms of pathogenic bacteria, spheroplast research could be relevant in biomedical research. In this paper, a new technique for immobilizing spheroplasts on mica pretreated with aminopropyltriethoxysilane (APTES) and glutaraldehyde is described. Using this mounting technique, the indentation and cell elasticity of glutaraldehyde-fixed and untreated spheroplasts of E. coli in liquid were measured. These values are compared to those of intact E. coli. Untreated spheroplasts were found to be much softer than the intact cells and the silicon nitride cantilevers used in this study.
64.
Visual media data such as images are the raw data representation for many important applications. Reducing the dimensionality of raw visual media data is desirable, since high dimensionality degrades not only the effectiveness but also the efficiency of visual recognition algorithms. We present a comparative study of spatial interest pixels (SIPs), including eight-way (a novel SIP detector), Harris, and Lucas-Kanade detectors, whose extraction is an important step in reducing the dimensionality of visual media data. With extensive case studies, we show the usefulness of SIPs as low-level features of visual media data. A class-preserving dimension reduction algorithm (using the GSVD) is applied to further reduce the dimension of the feature vectors based on SIPs; the experiments show its superiority over PCA.
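For context on what an interest-pixel detector computes, the sketch below implements the classical Harris response, on which one of the compared detectors is based, and keeps the strongest responses as interest pixels. The window size, the top-n selection rule, and the function names are illustrative assumptions; the eight-way detector and the GSVD-based class-preserving dimension reduction are not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.05, win=5):
    """Classical Harris corner response R = det(M) - k * trace(M)^2,
    where M is the structure tensor averaged over a win x win window."""
    img = np.asarray(img, dtype=float)
    iy, ix = np.gradient(img)
    ixx = uniform_filter(ix * ix, win)
    iyy = uniform_filter(iy * iy, win)
    ixy = uniform_filter(ix * iy, win)
    return ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2

def top_sips(img, n=100):
    """Keep the n strongest responses as interest pixels (a crude selection rule)."""
    r = harris_response(img)
    flat = np.argsort(r, axis=None)[-n:]
    return np.column_stack(np.unravel_index(flat, r.shape))

# usage: (row, col) coordinates of 100 interest pixels in a random test image
points = top_sips(np.random.rand(128, 128))
print(points.shape)                             # (100, 2)
```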
65.
The execution model for mobile, dynamically-linked, object-oriented programs has evolved from fast interpretation to a mix of interpreted and dynamically compiled execution. The primary motivation for dynamic compilation is that compiled code executes significantly faster than interpreted code. However, dynamic compilation, which is performed while the application is running, introduces execution delay. In this paper we present two dynamic compilation techniques that enable high performance execution while reducing the effect of this compilation overhead. These techniques can be classified as (1) decreasing the amount of compilation performed, and (2) overlapping compilation with execution. We first present and evaluate lazy compilation, an approach used in most dynamic compilation systems in which individual methods are compiled on-demand upon their first invocation. This is in contrast to eager compilation, in which all methods in a class are compiled when a new class is loaded. In this work, we describe our experience with eager compilation, as well as the implementation and transition to lazy compilation. We empirically detail the effectiveness of this decision. Our experimental results using the SpecJVM Java benchmarks and the Jalapeño JVM show that, compared to eager compilation, lazy compilation results in 57% fewer methods being compiled and reductions in total time of 14 to 26%. Total time in this context is compilation plus execution time. Next, we present profile-driven, background compilation, a technique that augments lazy compilation by using idle cycles in multiprocessor systems to overlap compilation with application execution. With this approach, compilation occurs on a thread separate from that of application threads so as to reduce intermittent, and possibly substantial, delay in execution. Profile information is used to prioritize methods as candidates for background compilation. Methods are compiled according to this priority scheme so that performance-critical methods are invoked using optimized code as soon as possible. Our results indicate that background compilation can achieve the performance of off-line compiled applications and masks almost all compilation overhead. We show significant reductions in total time of 14 to 71% over lazy compilation. Copyright © 2001 John Wiley & Sons, Ltd.
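As a toy illustration of the lazy-versus-eager distinction (in Python rather than in a JVM), the sketch below wraps methods in stubs that "compile" them on their first invocation, while an eager mode pays the full cost up front; the profile-driven background variant described in the abstract would move the compile call onto a separate worker thread. The class, the simulated compilation delay, and the method names are assumptions for illustration only, not the Jalapeño implementation.

```python
import time

class ToyJIT:
    """Toy model of lazy (on-demand) versus eager method compilation;
    'compilation' here is just a delay plus a cache entry, not code generation."""

    def __init__(self):
        self.compiled = {}

    def compile(self, fn):
        time.sleep(0.01)                        # stand-in for compilation cost
        self.compiled[fn.__name__] = fn         # pretend this is optimized code
        return fn

    def lazy(self, fn):
        """Wrap fn with a stub that compiles it on its first invocation only."""
        def stub(*args, **kwargs):
            if fn.__name__ not in self.compiled:
                self.compile(fn)                # cost is paid at the first call
            return self.compiled[fn.__name__](*args, **kwargs)
        return stub

    def eager(self, *methods):
        """Compile every method up front, as at class-load time."""
        for fn in methods:
            self.compile(fn)

jit = ToyJIT()

@jit.lazy
def hot_method(x):
    return x * x

print(hot_method(3))                            # first call triggers 'compilation'
print(hot_method(4))                            # later calls reuse the cached version
```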
66.
The amount of nondeterminism in a nondeterministic finite automaton (NFA) is measured by counting the minimal number of guessing points a string w has to pass through on its way to an accepting state. NFAs with more nondeterminism can achieve greater savings in the number of states over their deterministic counterparts than NFAs with less nondeterminism. On the other hand, for some nontrivial infinite regular languages a deterministic finite automaton (DFA) can already be quite succinct, in the sense that NFAs need as many states (and even context-free grammars need as many nonterminals) as the minimal DFA has states. This research was supported in part by the National Science Foundation under Grant No. MCS 76-10076.
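The measure described above can be made concrete with a shortest-path search over configurations (state, input position), where a step costs 1 exactly when the automaton has more than one available move on the current symbol. The sketch below is an assumed reading of that measure (epsilon moves are not modelled, and the names are hypothetical), not the paper's formal definition.

```python
import heapq

def min_guesses(delta, start, accept, w):
    """Minimal number of guessing points on an accepting run of w.

    delta[(q, a)] is the set of successor states; a configuration counts as a
    guessing point when more than one move is available on the current symbol
    (an assumed reading of the measure; epsilon moves are not modelled here)."""
    dist = {(start, 0): 0}
    heap = [(0, start, 0)]
    while heap:
        g, q, i = heapq.heappop(heap)
        if g > dist.get((q, i), float("inf")):
            continue                            # stale heap entry
        if i == len(w):
            if q in accept:
                return g                        # cheapest accepting run found
            continue
        succ = delta.get((q, w[i]), set())
        step = 1 if len(succ) > 1 else 0        # a guess only where a choice exists
        for r in succ:
            if g + step < dist.get((r, i + 1), float("inf")):
                dist[(r, i + 1)] = g + step
                heapq.heappush(heap, (g + step, r, i + 1))
    return None                                 # w is not accepted at all

# example: NFA over {a, b} accepting exactly the strings that end in 'a'
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
print(min_guesses(delta, 0, {1}, "abab"))       # None (rejected)
print(min_guesses(delta, 0, {1}, "abaa"))       # 3 guessing points
```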
67.
Three results are established. The first is that every nondeterministic strict interpretation of a deterministic pushdown acceptor (dpda) has an equivalent, deterministic, strict interpretation. The second is that if M₁ and M₂ are two compatible strict interpretations of the dpda M, then there exist deterministic strict interpretations M′ and M″ such that L(M′) = L(M₁) ∩ L(M₂) and L(M″) = L(M₁) ∪ L(M₂). The third states that there is no dpda whose strict interpretations yield all the deterministic context-free languages. This author was supported in part by the National Science Foundation under Grant MCS-77-22323.
68.
Given an undirected graph G with edge costs and a specified set of terminals, let the density of any subgraph be the ratio of its cost to the number of terminals it contains. If G is 2-connected, does it contain smaller 2-connected subgraphs of density comparable to that of G? We answer this question in the affirmative by giving an algorithm to prune G and find such subgraphs of any desired size, incurring only a logarithmic factor increase in density (plus a small additive term). We apply our pruning techniques to give algorithms for two NP-hard problems on finding large 2-vertex-connected subgraphs of low cost; no previous approximation algorithm was known for either problem. In the k-2VC problem, we are given an undirected graph G with edge costs and an integer k; the goal is to find a minimum-cost 2-vertex-connected subgraph of G containing at least k vertices. In the Budget-2VC problem, we are given a graph G with edge costs and a budget B; the goal is to find a 2-vertex-connected subgraph H of G with total edge cost at most B that maximizes the number of vertices in H. We describe an O(log n · log k) approximation for the k-2VC problem, and a bicriteria approximation for the Budget-2VC problem that gives an O((1/ε) log² n) approximation while violating the budget by a factor of at most 2 + ε.
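To pin down the quantities the abstract reasons about, the sketch below evaluates the density of a candidate subgraph (total edge cost divided by the number of terminals it contains) and checks the 2-vertex-connectivity requirement with networkx. The function names and attribute name "cost" are assumptions; the pruning algorithm and the k-2VC/Budget-2VC approximations themselves are not reproduced.

```python
import networkx as nx

def density(G, nodes, terminals, cost="cost"):
    """Density of the induced subgraph: total edge cost / number of terminals in it.
    Only the quantity the pruning argument is stated in terms of, not the algorithm."""
    H = G.subgraph(nodes)
    t = len(terminals & set(H.nodes))
    if t == 0:
        return float("inf")
    return sum(d[cost] for _, _, d in H.edges(data=True)) / t

def is_feasible(G, nodes):
    """A candidate subgraph must be 2-vertex-connected (and nontrivial)."""
    H = G.subgraph(nodes)
    return H.number_of_nodes() >= 3 and nx.is_biconnected(H)

# tiny example: a 4-cycle with unit edge costs and two terminals
G = nx.cycle_graph(4)
nx.set_edge_attributes(G, 1, "cost")
terminals = {0, 2}
print(is_feasible(G, G.nodes), density(G, G.nodes, terminals))   # True 2.0
```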
69.
Facility location decisions are usually driven by cost- and coverage-related factors, although empirical studies show that factors such as infrastructure, labor conditions and competition also play an important role in practice. The objective of this paper is to develop a multi-objective facility location model that accounts for a wide range of factors affecting decision-making. The proposed model selects potential facilities from a set of pre-defined alternative locations according to the number of customers, the number of competitors and real-estate cost criteria. However, this requires a large amount of both spatial and non-spatial input data, which can be acquired from distributed data sources over the Internet. Therefore, a computational approach for processing the input data and representing the modeling results is elaborated; it is capable of accessing and processing data from heterogeneous spatial and non-spatial data sources. Application of the elaborated data-gathering approach and facility location model is demonstrated using the example of a fast-food restaurant location problem.
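A minimal sketch of ranking candidate locations by the three criteria the abstract names might look like the following. The linear weighted score, the weights, and the data fields are illustrative assumptions; the paper's model treats the criteria as separate objectives and draws the customer, competitor and real-estate data from distributed spatial and non-spatial sources rather than hard-coded records.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    customers: int       # customers within the assumed service radius
    competitors: int     # competing outlets within the same radius
    rent: float          # annualised real-estate cost

def score(s, w_cust=1.0, w_comp=50.0, w_rent=0.001):
    """Weighted aggregation of the three criteria named in the abstract.
    The linear form and the weights are illustrative assumptions, not the
    paper's multi-objective model."""
    return w_cust * s.customers - w_comp * s.competitors - w_rent * s.rent

candidates = [
    Site("A", customers=1200, competitors=3, rent=90_000),
    Site("B", customers=900,  competitors=0, rent=60_000),
    Site("C", customers=1500, competitors=6, rent=150_000),
]
best = sorted(candidates, key=score, reverse=True)[:2]   # pick the two best sites
print([s.name for s in best])                            # ['C', 'A']
```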
70.
Let s be a point source of light inside a polygon P of n vertices. A polygonal path from s to some point t inside P is called a diffuse reflection path if the turning points of the path lie on edges of P. A diffuse reflection path is said to be optimal if it has the minimum number of reflections on the path. The problem of computing a diffuse reflection path from s to t inside P has not been considered explicitly in the past. We present three different algorithms for this problem which produce suboptimal paths. For constructing such a path, the first algorithm uses a greedy method, the second algorithm uses a transformation of a minimum link path, and the third algorithm uses the edge–edge visibility graph of P. The first two algorithms are for polygons without holes, and they run in O(n + k log n) time, where k denotes the number of reflections in the constructed path. The third algorithm is for polygons with or without holes, and it runs in O(n²) time. The number of reflections in the path produced by this third algorithm can be at most three times that of an optimal diffuse reflection path. Though the combinatorial approach used in the third algorithm gives a better bound on the number of reflections, the first and second algorithms stand on the merit of their elegant geometric approaches based on local geometric information.
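To show the kind of combinatorial search an edge–edge visibility graph enables, the sketch below runs a breadth-first search from the edges visible from s to the edges visible from t, returning a bound on the number of reflections. This is only an illustration of the idea, not the paper's third algorithm: the geometric computation of the visibility relations and of the actual reflection points is omitted, and all names and the toy instance are assumptions.

```python
from collections import deque

def min_reflection_count(edge_vis, src_edges, dst_edges):
    """Breadth-first search over a precomputed edge-edge visibility graph.

    edge_vis[e] lists the polygon edges visible from edge e; src_edges and
    dst_edges are the edges visible from s and from t (if t is directly visible
    from s, zero reflections suffice and this search is not needed)."""
    dist = {e: 1 for e in src_edges}            # one bounce reaches each src edge
    q = deque(src_edges)
    while q:
        e = q.popleft()
        if e in dst_edges:
            return dist[e]                      # BFS order gives the minimal count
        for f in edge_vis.get(e, []):
            if f not in dist:
                dist[f] = dist[e] + 1
                q.append(f)
    return None                                 # no diffuse reflection path exists

# toy instance: a chain of edges 0..3; t's edge is reached after 3 bounces
edge_vis = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(min_reflection_count(edge_vis, src_edges={0}, dst_edges={2}))   # 3
```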