Article Search
By access type:
  Paid full text: 2865 articles
  Free: 182 articles
  Free (domestic): 6 articles
By subject area:
  Electrical engineering: 31 articles
  General: 13 articles
  Chemical industry: 728 articles
  Metalworking: 86 articles
  Machinery and instrumentation: 77 articles
  Building science: 114 articles
  Energy and power: 94 articles
  Light industry: 280 articles
  Water conservancy engineering: 9 articles
  Petroleum and natural gas: 2 articles
  Radio and electronics: 281 articles
  General industrial technology: 503 articles
  Metallurgical industry: 324 articles
  Atomic energy technology: 51 articles
  Automation technology: 460 articles
By publication year:
  2023: 31 articles
  2022: 44 articles
  2021: 80 articles
  2020: 56 articles
  2019: 51 articles
  2018: 69 articles
  2017: 79 articles
  2016: 79 articles
  2015: 90 articles
  2014: 125 articles
  2013: 196 articles
  2012: 162 articles
  2011: 254 articles
  2010: 189 articles
  2009: 159 articles
  2008: 156 articles
  2007: 123 articles
  2006: 111 articles
  2005: 102 articles
  2004: 77 articles
  2003: 78 articles
  2002: 65 articles
  2001: 41 articles
  2000: 34 articles
  1999: 52 articles
  1998: 117 articles
  1997: 63 articles
  1996: 60 articles
  1995: 24 articles
  1994: 44 articles
  1993: 26 articles
  1992: 19 articles
  1991: 14 articles
  1990: 9 articles
  1989: 7 articles
  1988: 7 articles
  1987: 10 articles
  1986: 13 articles
  1985: 8 articles
  1984: 7 articles
  1983: 8 articles
  1982: 10 articles
  1981: 6 articles
  1980: 8 articles
  1979: 6 articles
  1977: 10 articles
  1976: 12 articles
  1971: 7 articles
  1970: 6 articles
  1943: 6 articles
3053 results found (search time: 265 ms)
81.
82.
Solving large sparse linear systems is essential in numerous scientific domains. Several algorithms, based on direct or iterative methods, have been developed for parallel architectures. On distributed grids consisting of processors located at distant geographical sites, their performance may be unsatisfactory because they suffer from too many synchronizations and communications. The GREMLINS code has been developed for solving large sparse linear systems on distributed grids. It implements the multisplitting method, which consists of splitting the original linear system into several subsystems that can be solved independently. In this paper, the performance of the GREMLINS code obtained with several libraries for solving the linear subsystems is analyzed. Its performance is also compared with that of the widely used PETSc library, which enables one to develop portable parallel applications. Numerical experiments have been carried out both on local clusters and on distributed grids.
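As a rough illustration of the multisplitting idea, here is a minimal block-Jacobi-style sketch; it is not the GREMLINS implementation, and the diagonally dominant test matrix is made up so that the iteration converges.

```python
# Minimal block-Jacobi multisplitting sketch (illustrative only, not the
# GREMLINS code): the system A x = b is split into row blocks, each block
# is solved independently, and the blocks exchange their partial solutions
# between iterations.
import numpy as np

def multisplitting_solve(A, b, n_blocks=4, max_iter=200, tol=1e-8):
    n = A.shape[0]
    bounds = np.linspace(0, n, n_blocks + 1, dtype=int)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_new = x.copy()
        for k in range(n_blocks):
            lo, hi = bounds[k], bounds[k + 1]
            # Local subsystem: A[lo:hi, lo:hi] * x_block = b[lo:hi] - coupling terms
            rhs = b[lo:hi] - A[lo:hi, :] @ x + A[lo:hi, lo:hi] @ x[lo:hi]
            x_new[lo:hi] = np.linalg.solve(A[lo:hi, lo:hi], rhs)
        if np.linalg.norm(x_new - x) < tol * max(np.linalg.norm(x_new), 1.0):
            return x_new
        x = x_new
    return x

# Example on a small, strongly diagonally dominant system.
rng = np.random.default_rng(0)
A = rng.random((40, 40)) + 40 * np.eye(40)
b = rng.random(40)
x = multisplitting_solve(A, b)
print(np.allclose(A @ x, b, atol=1e-6))
```

In a grid setting, each block solve would run on a remote site and the exchange of partial solutions can be made asynchronous, which is what reduces the synchronization cost mentioned above.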
83.
A family of time-varying hyperbolic systems of balance laws is considered. The partial differential equations of this family can be stabilized by selecting suitable boundary conditions. For the stabilized systems, the classical technique of construction of Lyapunov functions provides a function which is a weak Lyapunov function in some cases, but not in others. We transform this function through a strictification approach to obtain a time-varying strict Lyapunov function. This allows us to establish asymptotic stability in the general case and a robustness property, of input-to-state stability (ISS) type, with respect to additive disturbances. Two examples illustrate the results.
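For context, the distinction between a weak and a strict Lyapunov function, and the ISS-type estimate mentioned above, can be summarized by the standard inequalities below (generic textbook forms, not the specific constructions of the paper).

```latex
% Weak Lyapunov function: the derivative along trajectories is only
% negative semi-definite,
\dot{V}(t) \le 0 .
% Strict Lyapunov function: the decay rate is definite,
\dot{V}(t) \le -\alpha\, V(t), \qquad \alpha > 0 ,
% which yields asymptotic stability and, with an additive disturbance d,
% an ISS-type estimate of the form
\dot{V}(t) \le -\alpha\, V(t) + \gamma\bigl(\lVert d(t)\rVert\bigr).
```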
84.
In this work, a new state-dependent sampling control enlarges the sampling intervals of state-feedback control. We consider the case of linear time-invariant systems and guarantee the exponential stability of the system origin for a chosen decay rate. The approach is based on LMIs obtained from sufficient Lyapunov–Razumikhin stability conditions and follows two steps. In the first step, we compute a Lyapunov–Razumikhin function that guarantees exponential stability for all time-varying sampling intervals up to some given bound. This value can be used as a lower bound of the state-dependent sampling function. In the second step, an off-line computation provides a mapping from the state space into the set of sampling intervals: the state space is divided into a finite number of regions, and to each of these regions is associated an allowable upper bound on the sampling intervals that guarantees the global (exponential or asymptotic) stability of the system. The results are based on sufficient conditions obtained using convex polytopes; they therefore involve some conservatism with respect to necessary and sufficient conditions. However, at each of the two steps, an optimization of the sampling upper bounds is proposed. The approach is illustrated with numerical examples from the literature, for which the number of actuations is shown to be reduced with respect to the periodic sampling case.
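A minimal sketch of the online part of such a scheme is given below. The plant, the feedback gain, the region partition and the per-region interval bounds are all assumed for illustration; in practice they would come from the two offline LMI steps described above.

```python
# Illustrative sketch of the *online* part of state-dependent sampling.
# The offline step is assumed to have produced a region -> maximum-interval
# map; the plant, gain and interval values below are made up.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # example LTI plant (assumed)
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.5]])                  # example state-feedback gain (assumed)

TAU_MIN = 0.05                              # lower bound from step 1 (assumed)
# Step 2 result: conic regions of the state space, each with its own
# allowable upper bound on the sampling interval (values assumed).
REGION_TAUS = [0.40, 0.15, 0.25, 0.10]

def region_of(x):
    """Map the state to one of four conic regions by the angle of x."""
    angle = np.arctan2(x[1], x[0])          # in (-pi, pi]
    return int((angle + np.pi) // (np.pi / 2)) % 4

def next_sampling_interval(x):
    """State-dependent sampling law: interval chosen from the region map."""
    return max(TAU_MIN, REGION_TAUS[region_of(x)])

# Simulate the sampled-data loop with zero-order-hold control.
x, t, samples = np.array([1.0, 0.0]), 0.0, 0
while t < 10.0:
    tau = next_sampling_interval(x)
    u = -K @ x                              # control held constant over [t, t+tau)
    # Exact discretization of x' = A x + B u over the interval tau.
    M = expm(np.block([[A, B], [np.zeros((1, 3))]]) * tau)
    x = M[:2, :2] @ x + M[:2, 2:] @ u
    t += tau
    samples += 1
print(f"{samples} control updates over 10 s "
      f"(periodic sampling at TAU_MIN would use {int(10 / TAU_MIN)})")
```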
85.
Genetic programming for multibiometrics   (total citations: 1; self-citations: 0; citations by others: 1)
Biometric systems suffer from several drawbacks: a biometric system generally provides good performance, except with some individuals, because its performance depends strongly on the quality of the capture… One solution to some of these problems is multibiometrics, where different biometric systems are combined (multiple captures of the same biometric modality, multiple feature-extraction algorithms, multiple biometric modalities…). In this paper, we are interested in the application of score-level fusion functions (i.e., we use a multibiometric authentication scheme which accepts or denies the claimant's access to an application). In the state of the art, fusing the scores provided by different biometric systems with a weighted sum (a linear classifier) or an SVM (a non-linear classifier) yields some of the best performances. We present a new method based on genetic programming that gives similar or better performance (depending on the complexity of the database). We derive a score-fusion function by assembling classical primitive functions (+, ∗, −, …). We have validated the proposed method on three significant biometric benchmark datasets from the state of the art.
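A toy illustration of score-level fusion is sketched below: a fusion expression assembled from the primitives + and ∗ is compared with a plain weighted sum on synthetic match scores. The data, the expression and the thresholds are made up and are not the authors' genetic-programming setup.

```python
# Toy score-level fusion comparison (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Synthetic scores from two biometric systems (higher = more likely genuine).
genuine = np.c_[rng.normal(0.7, 0.15, n), rng.normal(0.65, 0.2, n)]
impostor = np.c_[rng.normal(0.4, 0.15, n), rng.normal(0.35, 0.2, n)]

def weighted_sum(s, w=0.6):
    return w * s[:, 0] + (1 - w) * s[:, 1]

def gp_like_fusion(s):
    # One fusion expression a GP search could assemble from +, *:
    # s1 + s2 + s1*s2 (the product term rewards agreement between systems).
    return s[:, 0] + s[:, 1] + s[:, 0] * s[:, 1]

def accuracy(fuse, threshold):
    tp = np.mean(fuse(genuine) >= threshold)    # genuine users accepted
    tn = np.mean(fuse(impostor) < threshold)    # impostors rejected
    return 0.5 * (tp + tn)

print("weighted sum  :", accuracy(weighted_sum, threshold=0.55))
print("GP-like fusion:", accuracy(gp_like_fusion, threshold=1.3))
```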
86.
Dynamic loadings produce high stress waves leading to the fragmentation of brittle materials such as ceramics, concrete, glass and rocks. The main mechanism used to explain the change in the number of fragments with stress rate is a shielding phenomenon. Under quasi-static loading conditions, however, a weakest-link hypothesis may be applicable. Therefore, depending on the local strain or stress rate, different fragmentation regimes are observed. One regime corresponds to single fragmentation, for which a probabilistic approach is needed. Conversely, the multiple-fragmentation regime may be described by a deterministic approach. The transition between the two fragmentation regimes is discussed for high-performance concrete, glass and SiC ceramics.
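For reference, the weakest-link description invoked for the single-fragmentation regime is commonly written in the Weibull form below (a standard expression quoted for context, not a result of the paper).

```latex
% Weibull weakest-link failure probability of a volume V under a uniform
% stress sigma (sigma_0 and V_0 are scale parameters, m is the Weibull modulus):
P_F = 1 - \exp\!\left[-\frac{V}{V_0}\left(\frac{\sigma}{\sigma_0}\right)^{m}\right]
```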
87.
The differential SAR interferometry (DInSAR) technique has been applied to a test site near Vauvert (France) to detect and monitor ground deformation. This site corresponds to the location of an industrial exploitation of underground salt using the solution-mining technique. An area of subsidence has been observed using in situ measurements. Despite conditions unfavorable for InSAR because of the vegetation cover, we show that radar remote-sensing observations provide valuable information which substantially improves our knowledge of the phenomenon. An adaptive phase-filtering process has been used to improve the coherence level. In particular, our study shows that the geometry of the subsidence bowl is different from that previously assumed using ground-based techniques only. The size of the subsidence bowl (8 km) is larger than expected. This information will be useful for further modeling of the deformation and for improving the coverage of the in situ measurement networks. It also shows that radar interferometry can be used for the long-term monitoring of such sites and to anticipate potential environmental issues.
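For readers unfamiliar with DInSAR, the differential interferometric phase is related to the line-of-sight (LOS) ground displacement by the standard relation below (quoted for context; the sign convention varies between processing chains).

```latex
% Deformation phase versus line-of-sight displacement
% (lambda is the radar wavelength):
\Delta\varphi_{\mathrm{defo}} = -\frac{4\pi}{\lambda}\, d_{\mathrm{LOS}}
```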
88.
A module is a set of vertices H of a graph G=(V,E) such that each vertex of V∖H is either adjacent to all vertices of H or to none of them. A homogeneous set is a nontrivial module. A graph G_s=(V,E_s) is a sandwich for a pair of graphs G_t=(V,E_t) and G=(V,E) if E_t ⊆ E_s ⊆ E. In a recent paper, Tang et al. [Inform. Process. Lett. 77 (2001) 17-22] described an O(Δn²) algorithm for testing the existence of a homogeneous set in sandwich graphs of G_t=(V,E_t) and G=(V,E), and then extended it to an enumerative algorithm computing all these possible homogeneous sets. In this paper, we invalidate this latter algorithm by proving that there may be exponentially many such sets, even if we restrict our attention to strong modules. We then give a correct characterization of a homogeneous set of a sandwich graph.
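The module definition translates directly into a simple membership test. The sketch below (illustrative only, not an algorithm from the paper) checks whether a vertex set H is a module of a graph given as an adjacency dictionary.

```python
# Minimal module test: every vertex outside H must see either all of H
# or none of H.
def is_module(adj, H):
    """adj: dict mapping each vertex to the set of its neighbours."""
    H = set(H)
    for v in set(adj) - H:
        seen = adj[v] & H
        if seen and seen != H:      # v distinguishes two vertices of H
            return False
    return True

# Example: in the path a-b-c-d, {b, c} is not a module (a is adjacent to b
# but not to c), while any single vertex is trivially a module.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(is_module(adj, {'b', 'c'}))   # False
print(is_module(adj, {'d'}))        # True
```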
89.
Experience with a Hybrid Processor: K-Means Clustering   (total citations: 2; self-citations: 0; citations by others: 2)
We discuss hardware/software co-processing on a hybrid processor for a compute- and data-intensive multispectral imaging algorithm, k-means clustering. The experiments are performed on two models of the Altera Excalibur board, the first using the soft IP core 32-bit NIOS 1.1 RISC processor, and the second using the hard IP core ARM processor. In our experiments, we compare the performance of the sequential k-means algorithm with three different accelerated versions. We consider granularity and synchronization issues when mapping an algorithm to a hybrid processor. Our results show that a speedup of 11.8X is achieved by migrating computation to the Excalibur ARM hardware/software, compared with a software-only implementation on a gigahertz Pentium III. Speedup on the Excalibur NIOS is limited by the communication cost of transferring data from external memory through the processor to the customized circuits. This limitation is overcome on the Excalibur ARM, in which dual-port memories, accessible to both the processor and the configurable logic, have the biggest performance impact of all the techniques studied.
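For reference, the baseline sequential algorithm being accelerated can be sketched as a plain k-means loop (this is the generic algorithm, not the hardware/software-partitioned version evaluated in the paper).

```python
# Plain sequential k-means sketch on synthetic "multispectral" pixel vectors.
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest center for every pixel vector.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: recompute each center as the mean of its pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Example: cluster 10 000 synthetic pixels with 4 spectral bands into 8 classes.
pixels = np.random.default_rng(1).random((10_000, 4))
labels, centers = kmeans(pixels, k=8)
print(centers.shape)   # (8, 4)
```

The assignment step (distance computation) dominates the run time, which is why it is the natural candidate for migration to the configurable logic in the co-processing setup described above.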
90.
Combining FDI and AI approaches within causal-model-based diagnosis   (total citations: 1; self-citations: 0; citations by others: 1)
This paper presents a model-based diagnostic method designed in the context of process supervision. It has been inspired by both artificial intelligence and control theory. AI contributes tools for qualitative modeling, including causal modeling, whose aim is to split a complex process into elementary submodels. Control theory, within the framework of fault detection and isolation (FDI), provides numerical models for generating and testing residuals, and for taking into account model inaccuracies, unknown disturbances and noise. Consistency-based reasoning provides a logical foundation for diagnostic reasoning and clarifies fundamental assumptions, such as single fault and exoneration. The diagnostic method presented in the paper benefits from the advantages of all these approaches. Causal modeling enables the method to focus on the relations sufficient for fault isolation, which avoids combinatorial explosion. Moreover, it allows the model to be modified easily without changing any aspect of the diagnostic algorithm. The numerical submodels used to detect inconsistency benefit from the precise quantitative analysis of the FDI approach. The FDI models are studied in order to link this method with DX component-oriented reasoning. The recursive on-line use of this algorithm is explained and the concept of local exoneration is introduced.
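The residual-generation and fault-signature idea underlying the FDI part can be illustrated with a toy sketch; the measurements, thresholds and signature table below are hypothetical and are not taken from the paper.

```python
# Toy residual-based detection/isolation sketch (hypothetical model,
# thresholds and fault-signature table).
def residuals(measured, predicted):
    """Residuals = measured outputs minus outputs predicted by the submodels."""
    return {name: measured[name] - predicted[name] for name in predicted}

def fault_signature(res, thresholds):
    """Boolean signature: which residuals are significantly non-zero."""
    return tuple(abs(res[name]) > thresholds[name] for name in sorted(res))

# Hypothetical signature table: which faults affect which residuals
# (entries ordered like the output of fault_signature()).
SIGNATURES = {
    (True,  False, True ): "fault in component A",
    (False, True,  True ): "fault in component B",
    (False, False, False): "no fault detected",
}

measured   = {"r1": 1.30, "r2": 0.98, "r3": 2.10}
predicted  = {"r1": 1.00, "r2": 1.00, "r3": 1.80}
thresholds = {"r1": 0.10, "r2": 0.10, "r3": 0.10}

sig = fault_signature(residuals(measured, predicted), thresholds)
print(SIGNATURES.get(sig, "unexplained signature"))   # fault in component A
```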