Fee-based full text   1138 articles
Free   89 articles
Free (domestic)   1 article
Electrical engineering   6 articles
General   2 articles
Chemical industry   252 articles
Metalworking   25 articles
Machinery and instruments   31 articles
Building science   38 articles
Mining engineering   4 articles
Energy and power engineering   58 articles
Light industry   89 articles
Water conservancy engineering   10 articles
Petroleum and natural gas   3 articles
Radio and electronics   113 articles
General industrial technology   259 articles
Metallurgical industry   72 articles
Atomic energy technology   11 articles
Automation technology   255 articles
2024   1 article
2023   20 articles
2022   25 articles
2021   51 articles
2020   43 articles
2019   34 articles
2018   46 articles
2017   47 articles
2016   55 articles
2015   41 articles
2014   71 articles
2013   78 articles
2012   91 articles
2011   119 articles
2010   68 articles
2009   59 articles
2008   62 articles
2007   60 articles
2006   44 articles
2005   35 articles
2004   25 articles
2003   21 articles
2002   14 articles
2001   6 articles
2000   13 articles
1999   13 articles
1998   19 articles
1997   10 articles
1996   11 articles
1995   4 articles
1994   4 articles
1993   1 article
1992   4 articles
1991   3 articles
1990   1 article
1989   2 articles
1988   4 articles
1987   2 articles
1986   5 articles
1983   1 article
1982   1 article
1981   1 article
1980   1 article
1978   1 article
1977   6 articles
1976   1 article
1975   1 article
1974   1 article
1973   1 article
1972   1 article
Sort order: 1228 results found (search time: 31 ms)
21.
Topological dynamics of cellular automata (CA), inherited from classical dynamical systems theory, has essentially been studied in dimension 1. This paper focuses on higher-dimensional CA and aims to show that the situation is different and more complex starting from dimension 2. The main results are the existence of non-sensitive CA without equicontinuous points, the non-recursivity of sensitivity constants, the existence of CA having only non-recursive equicontinuous points, and the existence of CA having only countably many equicontinuous points. They all exhibit a difference between dimension 1 and higher dimensions. Thanks to these new constructions, we also extend undecidability results concerning topological classification previously obtained in the 1D case. Finally, we show that the set of sensitive CA is only $\varPi_2^0$ in dimension 1, but becomes $\varSigma_3^0$-hard in dimension 3.
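For reference, the two notions the results contrast can be stated with the standard topological-dynamics definitions, assumed here (the paper's exact conventions may differ), for a CA $F$ acting on configurations with metric $d$:

```latex
% Standard definitions assumed here (the paper's conventions may differ).
\begin{align*}
F \text{ is sensitive} &\iff \exists \varepsilon > 0\ \forall x\ \forall \eta > 0\
   \exists y:\ d(x, y) < \eta \ \wedge\ \exists n \ge 0:\ d(F^n(x), F^n(y)) \ge \varepsilon,\\
x \text{ is equicontinuous} &\iff \forall \varepsilon > 0\ \exists \eta > 0\
   \forall y:\ d(x, y) < \eta \implies \forall n \ge 0:\ d(F^n(x), F^n(y)) < \varepsilon.
\end{align*}
```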
22.
Deduction modulo is a way to combine computation and deduction in proofs by applying the inference rules of a deductive system (e.g., natural deduction or sequent calculus) modulo some congruence, assumed here to be presented by a set of rewrite rules. Using deduction modulo is equivalent to proving in a theory corresponding to the rewrite rules, and it leads to proofs that are often shorter and more readable. However, cuts may no longer be admissible. We define a new system, the unfolding sequent calculus, and prove its equivalence with the sequent calculus modulo, especially with respect to cut-free proofs. This allows us to show that it is even undecidable whether cuts can be eliminated in the sequent calculus modulo a given rewrite system. Then, to recover cut admissibility, we propose a procedure that completes the rewrite system so that the sequent calculus modulo the resulting system admits cuts. This is done by generalizing Knuth–Bendix completion in a non-trivial way, using the framework of abstract canonical systems. These results highlight the entanglement of computation and deduction and the power of abstract completion procedures. They also provide an effective way to obtain systems admitting cuts, thereby extending the applicability of deduction modulo in automated theorem proving.
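As a small illustration of ours (not an example from the paper), the axiom rule of the sequent calculus modulo identifies formulas up to the congruence $\equiv_R$ generated by the rewrite rules, so computation is folded into deduction:

```latex
% Axiom rule modulo a rewrite system R: the two formulas need only be
% congruent modulo R, not syntactically equal.
\frac{}{\Gamma, A \vdash_R B, \Delta}\ \text{(axiom)}\quad \text{if } A \equiv_R B
\qquad\text{e.g. with } R = \{\, x + 0 \to x \,\}:\quad
\frac{}{P(a+0) \vdash_R P(a)}
```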
23.
24.
This paper introduces an application of Computational Intelligence (CI) to Moving Picture Expert Group-4 (MPEG-4) video compression over IEEE 802.15.1 wireless communication, known as Bluetooth 1.2, in order to improve picture quality. The IEEE 802.15.1 standard uses the 2.4 GHz Industrial, Scientific and Medical frequency band, so it can be affected by noise and interference from neighboring wireless devices sharing the same carrier frequency. The noise and interference make it difficult to ascertain an accurate real-time transmission rate at the receiving end. Furthermore, the MPEG-4 codec is an object-oriented compression system and demands high bandwidth. It is therefore difficult to avoid excessive delay, image-quality degradation and/or data loss during MPEG-4 video transmission over standard systems. A new buffer, termed the 'buffer added', has been introduced at the input of the Bluetooth 1.2 device. This buffer is controlled by a rule-based fuzzy (RBF) logic controller at its input and a neural-fuzzy controller (NFC) at its output. The two new fuzzy rule sets manipulate and supervise the flow of video over the Bluetooth 1.2 standard. The computer simulation results compare non-CI video transmission over Bluetooth 1.2 with the proposed design, confirming that the application of the RBF and NFC does improve image quality, reduce data loss and reduce time delay.
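The abstract does not detail the controllers, so the following is a minimal hypothetical sketch of how a rule-based fuzzy controller might map the occupancy of such an intermediate buffer to a video sending-rate factor; the membership shapes, rule set and output levels are all invented for illustration:

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_rate_factor(occupancy):
    """Map buffer occupancy in [0, 1] to a sending-rate factor in [0, 1].

    Three hypothetical rules:
      IF occupancy is LOW    THEN rate is HIGH   (safe to send aggressively)
      IF occupancy is MEDIUM THEN rate is MEDIUM
      IF occupancy is HIGH   THEN rate is LOW    (throttle to avoid data loss)
    Defuzzified as a weighted average of rule outputs (Sugeno-style).
    """
    weights = [triangular(occupancy, -0.1, 0.0, 0.5),   # LOW
               triangular(occupancy, 0.2, 0.5, 0.8),    # MEDIUM
               triangular(occupancy, 0.5, 1.0, 1.1)]    # HIGH
    outputs = [1.0, 0.6, 0.2]  # hypothetical rate factors per rule
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.6

# A nearly full buffer yields a low sending-rate factor:
print(fuzzy_rate_factor(0.9))  # -> 0.2
```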
25.
Digital Elevation Models (DEMs) are used to compute the hydro-geomorphological variables required by distributed hydrological models. However, the resolution of the most precise DEMs is too fine to run these models over regional watersheds. DEMs therefore need to be aggregated to coarser resolutions, which affects both the representation of the land surface and the hydrological simulations. In the present paper, six algorithms (mean, median, mode, nearest neighbour, maximum and minimum) are used to aggregate the Shuttle Radar Topography Mission (SRTM) DEM from 3″ (90 m) to 5′ (10 km) in order to simulate the water balance of the Lake Chad basin (2.5 million km²). Each of these methods is assessed with respect to selected hydro-geomorphological properties that influence Terrestrial Hydrology Model with Biogeochemistry (THMB) simulations, namely the drainage network, the Lake Chad bottom topography and the floodplain extent. The results show that the mean and median methods produce a smoother representation of the topography. This smoothing removes the depressions governing the floodplain dynamics (floodplain area < 5000 km²) but eliminates the spikes and wells responsible for deviations in the drainage network. By contrast, the other aggregation methods yield a rougher relief representation that enables the simulation of a larger floodplain area (> 14,000 km² with the maximum or nearest-neighbour methods) but produces anomalies in the drainage network. An aggregation procedure based on a variographic analysis of the SRTM data is therefore suggested: the 3″ DEM is first filtered to smooth spikes and wells, then resampled to 5′ via the nearest-neighbour method so as to preserve the representation of depressions. With the resulting DEM, the drainage network, the Lake Chad bathymetric curves and the simulated floodplain hydrology are consistent with observations (3% underestimation of simulated evaporation volumes).
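As a rough illustration of the aggregation step (not the authors' code), here is a minimal numpy sketch, assuming the DEM is a 2-D array whose dimensions are multiples of the aggregation factor; factor 100 corresponds to 3″ → 5′, and the mode method is omitted for brevity:

```python
import numpy as np

def aggregate(dem, factor=100, method="mean"):
    """Aggregate a fine DEM to a coarser grid over non-overlapping blocks.

    factor=100 corresponds to 3" (90 m) -> 5' (10 km) as in the paper;
    dem must be a 2-D array whose dimensions are multiples of factor.
    'nearest' keeps the block-centre cell; 'mode' is omitted for brevity.
    """
    if method == "nearest":
        return dem[factor // 2::factor, factor // 2::factor]
    h, w = dem.shape
    blocks = (dem.reshape(h // factor, factor, w // factor, factor)
                 .transpose(0, 2, 1, 3)
                 .reshape(h // factor, w // factor, -1))
    reducers = {"mean": np.mean, "median": np.median,
                "max": np.max, "min": np.min}
    return reducers[method](blocks, axis=-1)

# Smooth (mean) versus rough (max) aggregation of a random 1000 x 1000 tile:
dem = np.random.rand(1000, 1000)
print(aggregate(dem, method="mean").shape)  # (10, 10)
print(aggregate(dem, method="max").mean() > aggregate(dem, method="mean").mean())  # True
```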
26.
Action Recognition Using a Bio-Inspired Feedforward Spiking Network (total citations: 2; self-citations: 0; citations by others: 2)
We propose a bio-inspired feedforward spiking network modeling two brain areas dedicated to motion (V1 and MT), and we show how the spiking output can be exploited in a computer vision application: action recognition. To analyze the spike trains, we consider two characteristics of the neural code: the mean firing rate of each neuron and the synchrony between neurons. Interestingly, we show that they carry relevant information for the action recognition task. We compare our results to Jhuang et al. (Proceedings of the 11th International Conference on Computer Vision, pp. 1–8, 2007) on the Weizmann database. In conclusion, we are convinced that spiking networks represent a powerful alternative framework for real vision applications, one that will benefit from recent advances in computational neuroscience.
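The two read-outs named in the abstract can be stated concretely; the sketch below assumes spike trains given as lists of spike times in seconds, and uses a simple coincidence-count synchrony index, which is only one of several possible definitions and not necessarily the paper's:

```python
def mean_firing_rate(spike_times, duration):
    """Mean firing rate of one neuron: spike count / observation time (Hz)."""
    return len(spike_times) / duration

def synchrony(train_a, train_b, window=0.005):
    """Fraction of spikes in train_a that have a spike of train_b within
    +/- window seconds: a crude coincidence index in [0, 1], only one of
    several possible synchrony measures."""
    if not train_a:
        return 0.0
    hits = sum(any(abs(ta - tb) <= window for tb in train_b) for ta in train_a)
    return hits / len(train_a)

# Two neurons observed for 1 s (spike times in seconds):
a = [0.010, 0.052, 0.300, 0.720]
b = [0.011, 0.297, 0.725, 0.900]
print(mean_firing_rate(a, 1.0))  # 4.0 Hz
print(synchrony(a, b))           # 0.75: 3 of the 4 spikes have a coincidence
```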
27.
Coastal water mapping from remote-sensing hyperspectral data suffers from poor retrieval performance when the targeted parameters have little effect on subsurface reflectance, largely because of the ill-posed nature of the inversion problem. For example, depth cannot be accurately retrieved for deep water, where the bottom influence is negligible. Similarly, for very shallow water it is difficult to estimate water quality because the subsurface reflectance is affected more by the bottom than by optically active water components.

Most methods based on radiative transfer model inversion do not consider the distribution of targeted parameters within the inversion process, thereby implicitly assuming that any parameter value in the estimation range has the same probability. In order to improve the estimation accuracy for the above limiting cases, we propose to regularize the objective functions of two estimation methods (maximum likelihood or ML, and hyperspectral optimization process exemplar, or HOPE) by introducing local prior knowledge on the parameters of interest. To do so, loss functions are introduced into ML and HOPE objective functions in order to reduce the range of parameter estimation. These loss functions can be characterized either by using prior or expert knowledge, or by inferring this knowledge from the data (thus avoiding the use of additional information).

This approach was tested on both simulated and real hyperspectral remote-sensing data. We show that the regularized objective functions are more peaked than their non-regularized counterparts when the parameter of interest has little effect on subsurface reflectance; as a result, the estimation accuracy of the regularized methods is higher in these limiting cases. In particular, when evaluated on real data, the regularized methods were able to estimate depths up to 20 m, whereas the corresponding non-regularized methods were accurate only up to 13 m on average for the same data.

This approach thus provides a way to handle such difficult estimation conditions. Furthermore, because no specific framework is required, it can be extended to any estimation method based on iterative optimization.
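As a hedged sketch of the general scheme (the exact ML and HOPE objectives and loss shapes are not given in the abstract; the quadratic penalty, toy forward model and prior depth below are invented for illustration), the data-misfit term is simply augmented with a loss that pulls estimates toward locally plausible parameter values:

```python
import numpy as np

def regularized_objective(params, r_obs, forward_model, prior, weight):
    """Data misfit plus a loss that narrows the parameter search range.

    params        : candidate parameters, e.g. np.array([depth_m])
    r_obs         : observed subsurface reflectance spectrum
    forward_model : radiative-transfer model r = f(params), assumed given
    prior         : locally plausible parameter values (expert- or data-derived)
    """
    misfit = np.sum((forward_model(params) - r_obs) ** 2)  # ML-style data term
    loss = weight * np.sum((params - prior) ** 2)          # hypothetical penalty
    return misfit + loss

# Toy forward model: reflectance decays with depth over 3 hypothetical bands.
f = lambda p: np.exp(-0.1 * p[0]) * np.array([0.30, 0.20, 0.10])
r_obs = f(np.array([18.0]))  # spectrum "observed" over 18 m of water
best = min(np.linspace(5, 25, 201),
           key=lambda d: regularized_objective(np.array([d]), r_obs, f,
                                               prior=np.array([17.0]),
                                               weight=1e-6))
print(round(best, 1))  # ~18.0: the penalty matters most where the misfit is flat
```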
28.
3D geological models, commonly built to manage natural resources, are strongly affected by uncertainty because most of the subsurface is inaccessible to direct observation. Appropriate ways to visualize uncertainty intuitively are therefore critical for making sound decisions. However, empirical assessments of uncertainty visualization for decision making are currently limited to 2D map data, while most geological entities are either surfaces embedded in 3D space or volumes. This paper first reviews a typical example of decision making under uncertainty in which uncertainty visualization methods can actually make a difference. The issue is illustrated on a real Middle East oil and gas reservoir, where the goal is to find the optimal location for a new appraisal well. In a second step, we propose a user study that goes beyond traditional 2D map data, using 2.5D pressure data for the purposes of well design. Our experiments study the quality of adjacent versus coincident representations of spatial uncertainty, compared with presenting the data without uncertainty; the quality of the representations is assessed in terms of decision accuracy. The study was conducted with a group of 123 graduate students specializing in geology.
29.
Efficient search of combinatorial maps using signatures (total citations: 1; self-citations: 0; citations by others: 1)
In this paper, we address the problem of computing canonical representations of n-dimensional combinatorial maps and of using them to search efficiently for a map in a database. We define two combinatorial map signatures: the first has quadratic space complexity and can be used to decide isomorphism with a new map in linear time, whereas the second has linear space complexity and can be used to decide isomorphism in quadratic time. We show that these signatures support efficient searching for a map in a database.
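As a simplified, hypothetical illustration of the canonical-representation idea (a 2-D variant, not the paper's exact construction): traverse the map from every possible starting dart, relabel darts in discovery order, and keep the lexicographically smallest encoding; two connected maps are then isomorphic iff their encodings coincide.

```python
def signature(sigma, alpha):
    """Canonical encoding of a connected 2-D combinatorial map.

    sigma, alpha: dicts sending each dart to its image under the vertex
    rotation and the edge involution. One linear traversal per starting
    dart gives quadratic work overall, echoing the paper's first signature.
    """
    def encode(start):
        order, seen, stack = [], {}, [start]
        while stack:
            d = stack.pop()
            if d in seen:
                continue
            seen[d] = len(seen)          # canonical label = discovery rank
            order.append(d)
            stack += [alpha[d], sigma[d]]
        # Encode each dart by the canonical labels of its two images.
        return tuple((seen[sigma[d]], seen[alpha[d]]) for d in order)
    return min(encode(d) for d in sigma)

def isomorphic(map_a, map_b):
    """Connected maps are isomorphic iff their canonical encodings coincide."""
    return signature(*map_a) == signature(*map_b)

# A triangle: 6 darts, alpha pairs the two darts of each edge, sigma rotates
# the darts around each vertex.
alpha = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}
sigma = {0: 5, 5: 0, 1: 2, 2: 1, 3: 4, 4: 3}
print(signature(sigma, alpha))
```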
30.
The idea of decomposed software pipelining is to decouple the software pipelining problem into a cyclic scheduling problem without resource constraints and an acyclic scheduling problem with resource constraints. In terms of loop transformation and code motion, the technique can be formulated as a combination of loop shifting and loop compaction. Loop shifting moves statements between iterations, thereby changing some loop-independent dependences into loop-carried dependences and vice versa. Loop compaction then schedules the body of the loop considering only loop-independent dependences, while taking into account the details of the target architecture. In this paper, we show how loop shifting can be optimized so as to minimize both the length of the critical path and the number of dependences for loop compaction. The first problem is well known and can be solved by an algorithm due to Leiserson and Saxe. We show that the second optimization (and its combination with the first) is also polynomially solvable with a fast graph algorithm, a variant of minimum-cost flow algorithms. Finally, we analyze the improvements obtained on loop compaction through experiments on random graphs.
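A hedged sketch of the loop-shifting step on a toy dependence graph (the paper solves this optimally via a Leiserson–Saxe-style retiming and a minimum-cost-flow variant; the brute-force search below only makes the objective concrete): a shift vector r changes the distance of edge u→v from d to d + r[v] - r[u], and edges whose new distance is 0 are the loop-independent ones that constrain loop compaction.

```python
from itertools import product

def critical_path(nodes, edges, shift):
    """Longest chain of loop-independent dependences after shifting.

    edges: (u, v, d) means v at iteration i + d depends on u at iteration i.
    A shift r turns the distance into d + r[v] - r[u]; only distance-0 edges
    constrain loop compaction (legal shifts keep every cycle's total distance
    positive, so the distance-0 subgraph is acyclic).
    """
    succ = {n: [] for n in nodes}
    for u, v, d in edges:
        if d + shift[v] - shift[u] == 0:
            succ[u].append(v)

    def longest(n):
        return 1 + max((longest(s) for s in succ[n]), default=0)

    return max(longest(n) for n in nodes)

def best_shift(nodes, edges, bound=2):
    """Brute-force the legal shift (all new distances >= 0) minimising the
    compaction critical path -- illustrative only; the paper achieves this
    polynomially with retiming and a min-cost-flow variant."""
    best = None
    for r in product(range(-bound, bound + 1), repeat=len(nodes)):
        shift = dict(zip(nodes, r))
        if all(d + shift[v] - shift[u] >= 0 for u, v, d in edges):
            cp = critical_path(nodes, edges, shift)
            if best is None or cp < best[0]:
                best = (cp, shift)
    return best

# Toy loop body: a chain a -> b -> c -> d of loop-independent dependences
# closed by a loop-carried edge d -> a of distance 2.
nodes = ["a", "b", "c", "d"]
edges = [("a", "b", 0), ("b", "c", 0), ("c", "d", 0), ("d", "a", 2)]
print(critical_path(nodes, edges, {n: 0 for n in nodes}))  # 4 before shifting
print(best_shift(nodes, edges)[0])                         # 2 after shifting
```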