151.
Almost all binarization methods have a few parameters that require setting. However, they do not usually achieve their upper-bound performance unless the parameters are individually set and optimized for each input document image. In this work, a learning framework for the optimization of binarization methods is introduced, designed to determine the optimal parameter values for a document image. The framework, which works with any binarization method, has a standard structure and performs three main steps: (i) extract features, (ii) estimate optimal parameters, and (iii) learn the relationship between features and optimal parameters. First, an approach is proposed to generate numerical feature vectors from 2D data: the statistics of various maps are extracted and then combined, in a nonlinear way, into a final feature vector. The optimal behavior is learned using support vector regression (SVR). Although the framework works with any binarization method, two methods are considered as typical examples in this work: the grid-based Sauvola method, and Lu's method, which placed first in the DIBCO'09 contest. Experiments on the DIBCO'09 and H-DIBCO'10 datasets, and on combinations of these datasets, give promising results.
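The feature-to-parameter regression step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature extractor is reduced to a handful of global intensity statistics, the training data are synthetic, and the "optimal" parameter (a Sauvola-style k) is a made-up function of image contrast; scikit-learn's SVR stands in for the paper's SVR setup.

```python
import numpy as np
from sklearn.svm import SVR

def extract_features(img):
    """Toy stand-in for the paper's map statistics: a few global
    intensity statistics combined into one feature vector."""
    return np.array([img.mean(), img.std(), np.median(img),
                     np.percentile(img, 10), np.percentile(img, 90)])

rng = np.random.default_rng(0)
# Synthetic "document images" with varying contrast, and a made-up
# optimal parameter that depends on that contrast.
train_imgs = [rng.uniform(0, rng.uniform(100, 255), size=(32, 32))
              for _ in range(40)]
train_params = [0.2 + 0.001 * img.std() for img in train_imgs]

X = np.stack([extract_features(im) for im in train_imgs])
reg = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, train_params)

# Estimate the parameter for an unseen image from its features alone.
new_img = rng.uniform(0, 200, size=(32, 32))
k_hat = reg.predict(extract_features(new_img).reshape(1, -1))[0]
```

In the paper the regression targets come from step (ii), i.e. from actually searching for the parameters that maximize binarization quality on ground-truthed training images.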
152.
The main goal of the IEEE 802.11n standard is to achieve more than 100 Mbps of throughput at the MAC service access point. This high throughput is achieved via many enhancements in both the physical and MAC layers. One of the MAC enhancements is frame aggregation, in which multiple frames are concatenated into a single large frame before being transmitted. The 802.11n MAC layer defines two types of aggregation: aggregate MAC service data unit (A-MSDU) and aggregate MAC protocol data unit (A-MPDU). The A-MPDU outperforms the A-MSDU due to its larger aggregation size and subframe retransmission in erroneous channels. However, in error-free channels and under the same aggregation size, the A-MSDU performs better than the A-MPDU due to its smaller headers. Thus, adding a selective retransmission capability to the A-MSDU would improve system performance. In this paper, we propose an MSDU frame aggregation scheme that enables selective retransmission at the MSDU level without altering the original MAC header. In this scheme, an implicit sequence control mechanism is introduced to keep the frames in sequence and preserve their correct order at the receiver side. The results show that the proposed scheme improves system performance in terms of throughput and delay, even under highly erroneous channels.
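The selective-retransmission idea with implicit sequence control can be sketched as a small simulation. This is an illustrative model, not the 802.11n MAC: subframe position serves as the implicit sequence number, only corrupted subframes are resent, and the receiver restores the original order; the loss model and names are invented.

```python
import random

def send_amsdu_selective_retx(sdus, loss_prob, seed=1):
    """Simulate A-MSDU transmission where only the corrupted subframes
    are retransmitted; subframe position acts as an implicit sequence
    number so the receiver can restore the original order."""
    rng = random.Random(seed)
    pending = set(range(len(sdus)))     # implicit sequence numbers
    received = {}
    rounds = 0
    while pending:
        rounds += 1
        ok = {i for i in pending if rng.random() > loss_prob}
        for i in ok:
            received[i] = sdus[i]       # receiver slots by sequence number
        pending -= ok                   # only the failed subframes retry
    # Reassemble the payload in its original order.
    return [received[i] for i in range(len(sdus))], rounds

sdus = [f"MSDU-{i}".encode() for i in range(8)]
data, rounds = send_amsdu_selective_retx(sdus, loss_prob=0.3)
```

Without selective retransmission, a single corrupted subframe would force the whole aggregate to be resent; here the per-round cost shrinks as subframes succeed.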
153.
The Discrete Cosine Transform (DCT) is one of the most widely used techniques for image compression, and several algorithms have been proposed to implement the DCT-2D. The scaled SDCT algorithm is an optimization of the DCT-1D that gathers all the multiplications at the end. In this paper, in addition to a hardware implementation on an FPGA, an extended optimization is performed by merging these multiplications into the quantization block without any impact on image quality. A simplified quantization is also used to keep the performance of the whole chain high. Tests in the MATLAB environment have shown that the proposed approach produces images with nearly the same quality as those obtained with the JPEG standard. An FPGA-based implementation of the proposed approach is presented and compared to other state-of-the-art techniques. The target is an Altera Cyclone II FPGA using the Quartus synthesis tool. Results show that our approach outperforms the others in terms of processing speed, used resources and power consumption. A comparison is also made between this architecture and a distributed-arithmetic-based architecture.
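The merging step can be demonstrated numerically: if each row of the DCT matrix is split into a scale factor and a "cheap" part, the scale factors can be folded into the quantization divisors, so the quantized coefficients are unchanged while the per-coefficient multiplications disappear from the transform stage. This is a generic numpy sketch of the idea (orthonormal DCT-II, a flat illustrative quantization table), not the paper's SDCT implementation:

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix.
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0] /= np.sqrt(2.0)

# Split each DCT row into a scale factor and a "cheap" part:
# C = diag(s) @ P, so the 2-D DCT is Y = diag(s) (P X P^T) diag(s).
s = np.abs(C).max(axis=1)
P = C / s[:, None]

Q = np.full((N, N), 16.0)        # illustrative flat quantization table
Q_merged = Q / np.outer(s, s)    # scale factors folded into the divisors

rng = np.random.default_rng(0)
X = rng.integers(-128, 128, size=(N, N)).astype(float)

direct = np.round((C @ X @ C.T) / Q)          # full DCT, then quantize
merged = np.round((P @ X @ P.T) / Q_merged)   # scaled DCT, merged quantize
```

Element-wise, `direct[i,j] = round(s_i*s_j*(PXP^T)[i,j] / Q[i,j])` and `merged[i,j] = round((PXP^T)[i,j] / (Q[i,j]/(s_i*s_j)))`, which are the same number: the multiplications were moved, not removed from the mathematics.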
154.
Fast and accurate moving object segmentation in dynamic scenes is the first step in many computer vision applications. In this paper, we propose a new background modeling method for moving object segmentation based on dynamic matrices and spatio-temporal analysis of scenes, which copes with several challenges in this field. A new algorithm is also proposed to detect and remove cast shadows. A comparative study with quantitative evaluations shows that the proposed approach detects foreground robustly and accurately in videos recorded by a static camera under several challenging conditions. A highway control and management system called RoadGuard is presented to show the robustness of our method. The system can monitor highways by detecting unusual events, such as vehicles suddenly stopping on the road, vehicles parked in emergency zones, or illegal behavior such as leaving the road. Moreover, RoadGuard can help manage highways by recording the date and time of overloaded roads.
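As a point of reference for what a background model does, here is a deliberately simple running-average model on synthetic frames (not the paper's dynamic-matrix method, and with no shadow handling): the background estimate tracks slow changes, and pixels that deviate beyond a threshold are flagged as foreground.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average; alpha is the learning rate."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25):
    """Pixels that deviate strongly from the background model."""
    return np.abs(frame - bg) > thresh

rng = np.random.default_rng(0)
H, W = 24, 32
static = rng.uniform(60, 80, size=(H, W))   # static scene
bg = static.copy()

# Feed a few background-only frames (scene plus sensor noise).
for _ in range(20):
    bg = update_background(bg, static + rng.normal(0, 2, (H, W)))

# One frame with a bright "object" entering the scene.
frame = static + rng.normal(0, 2, (H, W))
frame[8:16, 10:20] += 100.0                 # moving object region
mask = foreground_mask(bg, frame)
```

Real methods, including the one proposed here, add spatio-temporal reasoning, per-pixel adaptivity and shadow removal on top of this basic detect-by-deviation principle.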
155.
Social commitments have been extensively and effectively used to represent and model business contracts among autonomous agents having competing objectives in a variety of areas (e.g., modeling business processes and commitment-based protocols). However, the formal verification of social commitments and their fulfillment is still an active research topic. This paper presents CTLC+, a modification of CTLC, a temporal logic of commitments for agent communication that extends computation tree logic (CTL) to allow reasoning about communicating commitments and their fulfillment. The verification technique is based on reducing the problem of model checking CTLC+ to the problems of model checking ARCTL (the combination of CTL with action formulae) and GCTL* (a generalized version of CTL* with action formulae), in order to use, respectively, the extended NuSMV symbolic model checker and the CWB-NC automata-based model checker as benchmarks. We also prove that the reduction techniques are sound, and that the complexity of model checking CTLC+ for concurrent programs, with respect to the size of the components of these programs and the length of the formula, is PSPACE-complete. This matches the complexity of model checking CTL for concurrent programs, as shown by Kupferman et al. We finally provide two case studies taken from the business domain, along with their respective implementations and experimental results, to illustrate the effectiveness and efficiency of the proposed technique. The first concerns the NetBill protocol and the second the Contract Net protocol.
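Model checking a branching-time formula ultimately reduces to fixed-point computations over the state space. As a much-simplified illustration (plain explicit-state CTL reachability, not CTLC+ or the ARCTL/GCTL* reductions themselves), the set of states satisfying EF goal — "a commitment-fulfillment state is reachable along some path" — is a least fixed point; all state names below are invented for the example:

```python
def check_EF(states, trans, goal):
    """States satisfying EF(goal): least fixed point Z = goal ∪ pre(Z),
    i.e. keep adding states with some successor already in Z."""
    Z = set(goal)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in Z and any(t in Z for t in trans.get(s, ())):
                Z.add(s)
                changed = True
    return Z

# Toy protocol: a commitment is created in s0 and fulfilled in s3;
# s2 is a sink where the commitment is never fulfilled.
states = {"s0", "s1", "s2", "s3"}
trans = {"s0": {"s1", "s2"}, "s1": {"s3"}, "s2": {"s2"}, "s3": {"s3"}}
fulfilled = {"s3"}
reach = check_EF(states, trans, fulfilled)
```

Symbolic checkers like NuSMV compute the same fixed points over BDD-encoded state sets rather than explicit ones, which is what makes the reductions in the paper practical.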
156.
Tree index structures are crucial components in data management systems. Existing tree index structures are designed with the implicit assumption that the underlying external memory storage is conventional magnetic hard disk drives. This assumption will soon be invalid, as flash memory storage is increasingly adopted as the main storage medium in mobile devices, digital cameras, embedded sensors, and notebooks. Though it is straightforward to port existing tree index structures to flash memory storage, that direct approach does not consider the unique characteristics of flash memory, i.e., slow write operations and the erase-before-update property, and would result in suboptimal performance. In this paper, we introduce FAST (Flash-Aware Search Trees) as a generic framework for flash-aware tree index structures. FAST distinguishes itself from all previous attempts at flash memory indexing in two aspects: (1) FAST is a generic framework that can be applied to a wide class of data-partitioning tree structures, including the R-tree and its variants, and (2) FAST achieves both efficiency and durability of read and write flash operations through memory flushing and crash recovery techniques. Extensive experimental results, based on an actual implementation of FAST inside the GiST index structure in PostgreSQL, show that FAST achieves better performance than its competitors.
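FAST's central trick — buffering tree-node updates in RAM and flushing them to flash in batches, so that slow flash writes are amortized — can be caricatured in a few lines. This is a toy sketch of the buffering idea only (no actual tree, no crash recovery); all names are invented:

```python
class FlashAwareIndex:
    """Toy sketch: node updates are collected in RAM and written to
    'flash' in one batch, so the number of (slow) flash writes drops
    from one-per-update to one-per-flush."""

    def __init__(self, flush_threshold=4):
        self.flash = {}                 # persisted node -> value
        self.buffer = {}                # in-memory pending updates
        self.flush_threshold = flush_threshold
        self.flash_writes = 0           # count of batched flash writes

    def update(self, node, value):
        self.buffer[node] = value
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flash.update(self.buffer)   # one batched write
            self.flash_writes += 1
            self.buffer.clear()

    def read(self, node):
        # Reads consult the buffer first: the most recent version wins.
        return self.buffer.get(node, self.flash.get(node))

idx = FlashAwareIndex(flush_threshold=4)
for i in range(10):
    idx.update(f"node{i}", i)
```

In FAST proper, the buffered entries are log records for tree nodes, and a crash-recovery log makes the in-memory buffer durable; the batching principle is the same.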
157.
This study sought to assess sediment contamination by trace metals (cadmium, chromium, cobalt, copper, manganese, nickel, lead and zinc), to localize contaminated sites and to identify the environmental risk for aquatic organisms in wadis of the Kebir Rhumel basin in northeastern Algeria. Water and surficial sediments (0-5 cm) were sampled in winter, spring, summer and autumn from 37 sites along permanent wadis of the Kebir Rhumel basin. Sediment trace metal contents were measured by Flame Atomic Absorption Spectroscopy. Median trace metal concentrations in sediments followed the decreasing order Mn > Zn > Pb > Cr > Cu > Ni > Co > Cd. Extreme values (dry weight) of the trace metals were as follows: 0.6-3.4 µg/g for Cd, 10-216 µg/g for Cr, 9-446 µg/g for Cu, 3-20 µg/g for Co, 105-576 µg/g for Mn, 10-46 µg/g for Ni, 11-167 µg/g for Pb, and 38-641 µg/g for Zn. Relative to worldwide natural concentrations, all sediments collected were considered contaminated by one or more elements. Comparing measured concentrations with American guidelines (Threshold Effect Level, TEL, and Probable Effect Level, PEL) showed that biological effects could be occasionally observed at the cadmium, chromium, lead and nickel levels found, but frequently observed at the copper and zinc levels. Sediment quality was shown to be excellent for cobalt and manganese but medium to bad for cadmium, chromium, copper, lead, nickel and zinc, regardless of site.
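The TEL/PEL comparison reduces to a three-way classification per element: below the TEL adverse biological effects are rarely expected, between the TEL and PEL occasionally, and above the PEL frequently. The helper below illustrates that logic; the threshold values in the example calls are placeholders, NOT the actual guideline TEL/PEL values:

```python
def sediment_quality(conc, tel, pel):
    """Classify a measured concentration (same units as tel/pel, e.g.
    µg/g dry weight) against guideline thresholds."""
    if conc < tel:
        return "effects rarely observed"
    if conc <= pel:
        return "effects occasionally observed"
    return "effects frequently observed"

# Illustrative call with made-up thresholds (NOT real TEL/PEL values):
verdict = sediment_quality(conc=120.0, tel=35.0, pel=90.0)
```

Applied element by element at each of the 37 sites, this yields the per-metal risk picture summarized in the abstract.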
158.
Landfill leachate is one of the most recalcitrant wastes for biotreatment and can be considered a potential source of contamination to surface and groundwater ecosystems. In the present study, Fenton oxidation was employed for the degradation of stabilized landfill leachate. Response surface methodology was applied to analyze, model and optimize the process parameters, i.e. pH and reaction time, as well as the initial concentrations of hydrogen peroxide and ferrous ion. Analysis of variance showed that good coefficients of determination were obtained (R² > 0.99), ensuring satisfactory agreement of the second-order regression model with the experimental data. The results indicated that pH and its quadratic effect were the main factors influencing Fenton oxidation. Furthermore, antagonistic effects between pH and the other variables were observed. The optimum H2O2 concentration, Fe(II) concentration, pH and reaction time were 0.033 mol/L, 0.011 mol/L, 3 and 145 min, respectively, with 58.3% COD, 79.0% color and 82.1% iron removals.
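The second-order (quadratic) response-surface model at the heart of RSM can be fitted with ordinary least squares. The sketch below uses synthetic data for two coded factors standing in for, say, pH and reaction time; the coefficients and noise level are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two coded factors in [-1, 1] and a synthetic response with a known
# second-order surface plus measurement noise.
x1 = rng.uniform(-1, 1, 60)
x2 = rng.uniform(-1, 1, 60)
y = 50 + 8 * x1 - 5 * x2 - 6 * x1**2 + 3 * x1 * x2 + rng.normal(0, 0.5, 60)

# Second-order design matrix: 1, x1, x2, x1^2, x2^2, x1*x2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Coefficient of determination, the R² reported in such studies.
yhat = X @ beta
r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
```

The sign and size of the quadratic term (here `beta[3]`) is what lets RSM detect curvature such as the dominant quadratic pH effect reported above, and the fitted surface is then optimized over the factor ranges.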
159.
160.
Power system stability is enhanced through a novel stabilizer developed around an adaptive fuzzy sliding-mode approach that applies a Nussbaum gain to nonlinear models of single-machine infinite-bus (SMIB) and multi-machine power systems subjected to a three-phase fault. The Nussbaum gain is used to avoid the positive-sign constraint and the controllability problem of the system. A comparative simulation study is presented to evaluate the achieved performance.
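A minimal illustration of why a Nussbaum gain removes the need to know the control-direction sign: for a scalar plant x' = a·x + b·u with the sign of b unknown, the adaptive law u = N(k)·x, k' = x² stabilizes the origin using the Nussbaum function N(k) = k²·cos(k), whose running average swings between +∞ and −∞ so the right sign is eventually "found". This toy simulation (first-order plant, forward-Euler integration, made-up constants) is far simpler than the paper's fuzzy sliding-mode stabilizer but shows the mechanism:

```python
import math

def nussbaum(k):
    """Nussbaum-type function: its running average has sup +inf and inf -inf."""
    return k * k * math.cos(k)

# Scalar plant x' = a*x + b*u; the controller does NOT know sign(b).
a, b = 1.0, -2.0          # try b = +2.0 as well: the law still stabilizes
x, k = 1.0, 0.0
dt, T = 1e-3, 10.0
for _ in range(int(T / dt)):
    u = nussbaum(k) * x   # Nussbaum-gain feedback
    x += dt * (a * x + b * u)
    k += dt * x * x       # adaptation: k grows while the state is large
```

The state may transiently grow while k sweeps through gain values of the wrong sign, then decays once b·N(k) becomes sufficiently negative and k freezes; that transient is exactly what the positive-sign constraint would otherwise forbid.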