162.
Regional frequency analysis is a more accurate tool for estimating precipitation quantiles than at-site frequency analysis, especially in regions with short rainfall time series. The use of meteorological information, combined with rainfall data analysis, can improve the selection of homogeneous regions. Starting from 1958, 198 meteorological configurations related to extreme events have been identified throughout the national territory of Italy. The reanalyzed meteorological data of the 40-Year Re-analysis Archive (ERA-40) of the European Centre for Medium-Range Weather Forecasts (ECMWF) have been analyzed to identify homogeneous regions with respect to the Convective Available Potential Energy (CAPE), the Q-vector Divergence (QD) and the Vertically Integrated Moisture Flux (VIMF). The latter index appears to be the best candidate for finding regional homogeneity inside areas where high frequency values of CAPE or QD are present. The paper presents an application based on the delimitation of homogeneous regions using climatic indexes for the island of Sicily. By applying the proposed methodology, seven homogeneous areas over Sicily were found. The consistency of the final results has been validated using a coupled approach based on the Valuation of Floods in Italy procedure (VAPI) and on the heterogeneity test of Hosking and Wallis (Water Resour Res 29:271–281, 1993, 1997).
163.
This paper addresses a visibility-based pursuit-evasion problem in which a team of mobile robots with limited sensing and communication capabilities must coordinate to detect any evaders in an unknown, multiply-connected planar environment. Our distributed algorithm to guarantee evader detection is built around maintaining complete coverage of the frontier between cleared and contaminated regions while expanding the cleared region. We detail a novel distributed method for storing and updating this frontier without building a map of the environment or requiring global localization. We demonstrate the functionality of the algorithm through simulations in realistic environments and through hardware experiments. We also compare Monte Carlo results for our algorithm to the theoretical optimum area cleared as a function of the number of robots available.
164.
The common paradigm employed for object detection is the sliding window (SW) search. This approach generates grid-distributed patches at all possible positions and sizes, which are evaluated by a binary classifier. The tradeoff between computational burden and detection accuracy is the real critical point of sliding windows; several methods have been proposed to speed up the search, for example by adding complementary features. We propose a paradigm that differs from any previous approach, since it casts object detection into a statistics-based search using Monte Carlo sampling to estimate the likelihood density function with Gaussian kernels. The estimation relies on a multistage strategy in which the proposal distribution is progressively refined by taking into account the feedback of the classifiers. The method can easily be plugged into a Bayesian-recursive framework to exploit the temporal coherency of the target objects in videos. Several tests on pedestrian and face detection, both on images and videos, with different types of classifiers (cascades of boosted classifiers, soft cascades, and SVMs) and features (covariance matrices, Haar-like features, integral channel features, and histograms of oriented gradients) demonstrate that the proposed method provides higher detection rates and accuracy, as well as a lower computational burden, than sliding window detection.
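The multistage refinement described in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's detector: the classifier is a hypothetical peaked score function standing in for a real pedestrian/face classifier, and the image size, sample count and kernel bandwidths are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "classifier": a synthetic peaked response around one
# true object state (x, y, scale), standing in for a real detector.
TRUE_POS = np.array([120.0, 80.0, 1.5])

def classifier_score(samples):
    # High response near the true object, decaying smoothly elsewhere.
    d = np.linalg.norm((samples - TRUE_POS) / np.array([200.0, 150.0, 2.0]), axis=1)
    return np.exp(-20.0 * d ** 2)

def monte_carlo_search(n_stages=4, n_samples=300,
                       bandwidth=np.array([40.0, 30.0, 0.5])):
    # Stage 0: broad uniform proposal over position and scale.
    samples = np.column_stack([
        rng.uniform(0, 320, n_samples),    # x
        rng.uniform(0, 240, n_samples),    # y
        rng.uniform(0.5, 3.0, n_samples),  # scale
    ])
    for _ in range(n_stages):
        w = classifier_score(samples)
        w = w / w.sum()
        # Resample proportionally to classifier feedback ...
        idx = rng.choice(n_samples, size=n_samples, p=w)
        # ... and perturb with shrinking Gaussian kernels: the refined proposal.
        samples = samples[idx] + rng.normal(0, 1, samples.shape) * bandwidth
        bandwidth = bandwidth * 0.5
    # Weighted mean of the final samples as the detection estimate.
    w = classifier_score(samples)
    return (samples * w[:, None]).sum(axis=0) / w.sum()

est = monte_carlo_search()
```

The key property is that later stages spend samples only where earlier classifier feedback was strong, which is what lets the search beat an exhaustive sliding window on cost.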
165.
In this work we present a new algorithm for accelerating the colour bilateral filter, based on a subsampling strategy working in the spatial domain. The basic idea is to use a suitable subset of samples of the entire kernel in order to obtain a good estimate of the exact filter values. The main advantages of the proposed approach are an excellent trade-off between visual quality and speed-up, a very low memory overhead, and a straightforward GPU implementation that allows real-time filtering. We show different applications of the proposed filter, in particular efficient cross-bilateral filtering, real-time edge-aware image editing and fast video denoising. We compare our method against the state of the art in terms of image quality, time performance and memory usage.
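The subsampling idea can be illustrated on a single-channel image: evaluate the bilateral sum over a random subset of the kernel's spatial offsets instead of all of them. This is a hedged sketch, not the paper's algorithm — the sampling scheme, window radius and sigmas here are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def bilateral_subsampled(img, radius=5, sigma_s=3.0, sigma_r=0.1, n_samples=None):
    """Brute-force bilateral filter on a 2-D float image.

    If n_samples is given, only a random subset of the spatial offsets
    inside the (2*radius+1)^2 window is used (the subsampling strategy),
    trading a small approximation error for fewer kernel taps."""
    offs = [(dy, dx) for dy in range(-radius, radius + 1)
                     for dx in range(-radius, radius + 1)]
    if n_samples is not None:
        idx = rng.choice(len(offs), size=n_samples, replace=False)
        offs = [offs[i] for i in idx]
    pad = np.pad(img, radius, mode="edge")
    h, w = img.shape
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for dy, dx in offs:
        shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
        ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))    # spatial weight
        wr = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))  # range weight
        num += ws * wr * shifted
        den += ws * wr
    return num / den

img = rng.random((32, 32))
full = bilateral_subsampled(img)                  # exact kernel: 121 taps
approx = bilateral_subsampled(img, n_samples=40)  # subsampled kernel: 40 taps
err = float(np.abs(full - approx).mean())
```

The GPU mapping the abstract mentions is natural here: each output pixel is independent, so one thread per pixel loops over the (small) shared offset subset.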
166.
For each sufficiently large n, there exists a unary regular language L such that both L and its complement L^c are accepted by unambiguous nondeterministic automata with at most n states, while the smallest deterministic automata for these two languages still require a superpolynomial number of states, at least \(e^{\Omega(\sqrt[3]{n\cdot\ln^{2}n})}\). Actually, L and L^c are “balanced” not only in the number of states but, moreover, they are accepted by nondeterministic machines sharing the same transition graph, differing only in the distribution of their final states. As a consequence, the gap between the sizes of unary unambiguous self-verifying automata and deterministic automata is also superpolynomial.
167.
This paper shows two examples of how the analysis of option pricing problems can lead to computational methods efficiently implemented in parallel. These computational methods outperform “general purpose” methods (for example, Monte Carlo or finite difference methods). The GPU implementation of two numerical algorithms to price two specific derivatives (continuous barrier options and realized variance options) is presented. These algorithms are implemented in CUDA subroutines ready to run on Graphics Processing Units (GPUs), and their performance is studied. The realization of these subroutines is motivated by the extensive use of the derivatives considered in the financial markets to hedge or to take risk, and by the interest of financial institutions in the use of state-of-the-art hardware and software to speed up the decision process. The performance of these algorithms is measured using the (CPU/GPU) speed-up factor, that is, the ratio between the (wall clock) times required to execute the code on a CPU and on a GPU. The choice of the reference CPU and GPU used to evaluate the speed-up factors presented is stated. The outstanding performance of the algorithms developed is due to the mathematical properties of the pricing formulae used and to the ad hoc software implementation. In the case of realized variance options, when the computation is done in single precision, the comparison between CPU and GPU execution times gives speed-up factors on the order of a few hundred. For barrier options, the corresponding speed-up factors are about fifteen to twenty. The CUDA subroutines to price barrier options and realized variance options can be downloaded from the website http://www.econ.univpm.it/recchioni/finance/w13. A more general reference to the work in mathematical finance of some of the authors and of their coauthors is the website http://www.econ.univpm.it/recchioni/finance/.
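The speed-up factor used in this abstract is just the ratio of wall-clock times. A minimal sketch of that measurement, with `time.sleep` calls standing in for the real CPU pricer and GPU kernel (both hypothetical placeholders):

```python
import time

def speedup_factor(run_cpu, run_gpu):
    """Speed-up factor = CPU wall-clock time / GPU wall-clock time."""
    t0 = time.perf_counter()
    run_cpu()
    t_cpu = time.perf_counter() - t0
    t0 = time.perf_counter()
    run_gpu()
    t_gpu = time.perf_counter() - t0
    return t_cpu / t_gpu

# Stand-in workloads: a slow "CPU pricer" and a fast "GPU pricer".
s = speedup_factor(lambda: time.sleep(0.10), lambda: time.sleep(0.01))
```

As the abstract notes, the number is meaningless without stating which reference CPU and GPU were timed, since the same code pair gives very different ratios on different hardware.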
168.
This work presents a distributed method for control centers to monitor the operating condition of a power network, i.e., to estimate the network state, and to ultimately determine the occurrence of threatening situations. State estimation has been recognized as a fundamental task for network control centers to operate a power grid safely and reliably. We consider (static) state estimation problems, in which the state vector consists of the voltage magnitude and angle at all network buses. We consider the state to be linearly related to network measurements, which include power flows, current injections, and voltage phasors at some buses. We admit the presence of several cooperating control centers, and we design two distributed methods for them to compute the minimum variance estimate of the state, given the network measurements. The two distributed methods rely on different modes of cooperation among control centers: in the first method an incremental mode of cooperation is used, whereas in the second a diffusive interaction is implemented. Our procedures, which require each control center to know only the measurements and the structure of a subpart of the whole network, are computationally efficient and scalable with respect to the network dimension, provided that the number of control centers also increases with the network cardinality. Additionally, a finite-memory approximation of our diffusive algorithm is proposed, and its accuracy is characterized. Finally, our estimation methods are exploited to develop a distributed algorithm to detect corrupted network measurements.
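For a linear measurement model z = Hx + noise, the minimum variance estimate the abstract refers to is the weighted least-squares solution x̂ = (HᵀR⁻¹H)⁻¹HᵀR⁻¹z. The sketch below illustrates it, together with a toy incremental-style split across two "control centers"; the network, H, and noise covariance are all synthetic stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear measurement model z = H x + noise:
# x stacks voltage states, H relates them to power-flow-type readings.
n_state, n_meas = 4, 10
H = rng.normal(size=(n_meas, n_state))
x_true = rng.normal(size=n_state)
R = np.diag(rng.uniform(0.01, 0.05, n_meas))  # measurement noise covariance
z = H @ x_true + rng.multivariate_normal(np.zeros(n_meas), R)

# Centralized minimum-variance (weighted least-squares) estimate:
#   x_hat = (H^T R^-1 H)^-1 H^T R^-1 z
W = np.linalg.inv(R)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

# Incremental flavor: two centers each hold half the measurements and
# exchange partial information matrices, which sum to the same estimate.
def info_pair(Hs, Rs, zs):
    Ws = np.linalg.inv(Rs)
    return Hs.T @ Ws @ Hs, Hs.T @ Ws @ zs

A1, b1 = info_pair(H[:5], R[:5, :5], z[:5])
A2, b2 = info_pair(H[5:], R[5:, 5:], z[5:])
x_hat_dist = np.linalg.solve(A1 + A2, b1 + b2)
```

The additive structure of the information matrices is exactly what makes incremental and diffusive cooperation possible: each center only contributes the terms built from its own subpart of the network.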
169.
Nowadays, especially after the recent financial downturn, companies are looking for much more efficient and creative business processes. They need to place better solutions in the market in less time and at lower cost. There is a general intuition that communication and collaboration, especially combined with the Web 2.0 approach within companies and ecosystems, can boost the innovation process, with positive impacts on business indicators. Open Innovation within an Enterprise 2.0 context is one of the most widely adopted paradigms for improving the innovation processes of enterprises, based on the collaborative creation and development of ideas and products. The key feature of this new paradigm is that knowledge is exploited in a collaborative way, flowing not only among internal sources, i.e. R&D departments, but also among external ones such as other employees, customers, partners, etc. In this paper we show how an ontology-based analysis of plain text can provide a semantic contextualization of content, supporting tasks such as finding the semantic distance between contents, and can help in creating relations between people with shared knowledge and interests. Throughout this paper we present the results obtained by the adoption of this technology in large corporate environments: Bankinter, a financial institution; Telefonica I+D, an international telecommunications firm; and Repsol, a major oil company in Spain.
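One simple way to realize a "semantic distance between contents" over ontology annotations is a set-overlap measure on the concepts extracted from each document. This is a hypothetical sketch, not the system described in the abstract — the Jaccard-style metric and the example concept labels are illustrative assumptions only.

```python
# Semantic distance between two documents, modeled as Jaccard distance
# over the sets of ontology concepts annotated in each document.
def semantic_distance(concepts_a, concepts_b):
    a, b = set(concepts_a), set(concepts_b)
    if not a and not b:
        return 0.0  # two empty annotations: identical by convention
    return 1.0 - len(a & b) / len(a | b)

# Documents sharing more concepts are closer (smaller distance),
# which can then be used to relate people with shared interests.
d = semantic_distance({"loan", "risk"}, {"loan", "credit"})
```

In a people-matching setting, the same measure applied to the concept sets of two employees' authored content gives a ranking of candidate collaborators.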