991.
Harmonic cancellation strategies have recently been presented as a promising solution for the efficient on-chip implementation of accurate sinusoidal signal generators. Classical harmonic cancellation techniques consist of combining a set of time-shifted and scaled versions of a periodic signal in such a way that some harmonic components of the resulting signal cancel. This signal-manipulation strategy can be implemented easily with digital resources: a set of phase-shifted digital square-wave signals and a summing network that scales and combines them. A critical aspect of any practical implementation is the stringent accuracy required for the scaling-weight ratios between the phase-shifted signals: small deviations in these weights due to mismatch and process variations reduce the effectiveness of the technique and increase the magnitude of undesired harmonic components. In this work, different harmonic cancellation strategies are presented and analyzed with the goal of simplifying the practical on-chip implementation of the scaling weights. Statistical behavioral simulations demonstrate the feasibility of the proposed approach.
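The cancellation mechanism described in the abstract can be illustrated numerically. The sketch below (an illustrative example, not the paper's circuit-level implementation) sums a digital square wave with a copy delayed by one sixth of its period; the 3rd harmonic then sees the phasor factor 1 + exp(-j·3·π/3) = 0 and cancels, while the fundamental and 5th harmonic survive:

```python
import numpy as np

N = 1200                              # samples per period (divisible by 6)
t = np.arange(N)
sq = np.where(t < N // 2, 1.0, -1.0)  # digital square wave, 50% duty cycle

# Delay by T/6 (circular shift, since the signal is periodic)
sq_shift = np.roll(sq, N // 6)

# Equal-weight sum: harmonic k is scaled by |1 + exp(-j*k*pi/3)|,
# which vanishes for k = 3 (and 9, 15, ...)
y = sq + sq_shift

spec = np.abs(np.fft.rfft(y)) / N
fund, third = spec[1], spec[3]
print(third / fund)                   # ~0: 3rd harmonic cancelled
```

The same phasor argument shows why the weight ratios are critical: any gain mismatch between the two branches leaves a residual 3rd-harmonic component proportional to the weight error.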
992.
993.
Error correction code (ECC) and built-in self-repair (BISR) techniques have been widely used to improve the yield and reliability of embedded memories; their targets are transient faults and hard faults, respectively. Recently, ECC has also been considered a promising solution for correcting hard errors to further enhance memory fabrication yield. However, if the number of faulty bits within a codeword exceeds the protection capability of the adopted ECC scheme, the protection becomes void. To remedy this drawback, efficient logical-to-physical address remapping techniques are proposed in this paper. The goal is to reconstruct the constituent cells of codewords such that faulty cells are evenly distributed across different codewords. A heuristic algorithm suitable for built-in implementation is presented for address remapping analysis, and the corresponding built-in remapping analysis circuit is derived. It can easily be integrated into a conventional BISR module. A simulator is developed to evaluate hardware overhead and repair rate. According to experimental results, the repair rate can be improved significantly with negligible hardware overhead.
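The remapping idea — spreading faulty cells evenly across codewords so no codeword exceeds the ECC correction capability — can be sketched with a simple greedy heuristic. The paper's built-in analysis algorithm is not reproduced here; the function below is a hypothetical illustration of the balancing objective:

```python
def remap(fault_counts, words_per_codeword):
    """fault_counts[i] = number of faulty cells in physical word i.
    Greedily group words into codewords so total faults per codeword
    are balanced (most-faulty words placed first, into the
    least-loaded codeword that still has room)."""
    n_codewords = len(fault_counts) // words_per_codeword
    groups = [[] for _ in range(n_codewords)]
    load = [0] * n_codewords
    order = sorted(range(len(fault_counts)), key=lambda i: -fault_counts[i])
    for i in order:
        candidates = [g for g in range(n_codewords)
                      if len(groups[g]) < words_per_codeword]
        g = min(candidates, key=lambda g: load[g])
        groups[g].append(i)
        load[g] += fault_counts[i]
    return groups, load

faults = [2, 0, 0, 1, 1, 0, 0, 1]   # faulty-cell counts per physical word
groups, load = remap(faults, 2)
print(load)                          # faults spread evenly: [2, 1, 1, 1]
```

With a naive sequential mapping, words 3 and 4 would share one codeword (two faults in a single codeword); the greedy remap keeps every codeword at the minimum achievable fault count.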
994.
Time-frequency distributions (TFDs) allow direction-of-arrival (DOA) estimation algorithms to be used in scenarios where the total number of sources exceeds the number of sensors. The performance of such time-frequency (t-f) based DOA estimation algorithms depends on the resolution of the underlying TFD, as a higher-resolution TFD leads to better separation of sources in the t-f domain. This paper presents a novel DOA estimation algorithm that uses the adaptive directional t-f distribution (ADTFD) for the analysis of closely spaced signal components. The ADTFD optimizes the direction of the kernel at each point in the t-f domain to obtain a clear t-f representation, which is then exploited for DOA estimation. Moreover, the proposed methodology can also be applied to DOA estimation of sparse signals. Experimental results indicate that the proposed ADTFD-based DOA algorithm outperforms other fixed- and adaptive-kernel based DOA algorithms.
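The ADTFD itself is beyond a short example, but the subspace DOA estimation stage that such t-f front-ends feed can be illustrated with classical narrowband MUSIC on a half-wavelength uniform linear array. This is an assumed baseline for illustration, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 6, 200                        # sensors, snapshots
true_deg = [-20.0, 30.0]             # two sources

def steering(theta):
    # Half-wavelength ULA response for angle theta (radians)
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))

A = np.stack([steering(np.deg2rad(d)) for d in true_deg], axis=1)
S = rng.standard_normal((2, K)) + 1j * rng.standard_normal((2, K))
noise = 0.05 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
X = A @ S + noise                    # M x K sensor snapshots

R = X @ X.conj().T / K               # sample covariance
w, V = np.linalg.eigh(R)             # eigenvalues ascending
En = V[:, : M - 2]                   # noise subspace (M - #sources vectors)

grid = np.deg2rad(np.arange(-90.0, 90.0, 0.5))
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
              for t in grid])        # MUSIC pseudo-spectrum

peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i-1] and P[i] > P[i+1]]
top = sorted(sorted(peaks, key=lambda i: P[i])[-2:])
est = [round(np.rad2deg(grid[i]), 1) for i in top]
print(est)                           # close to [-20.0, 30.0]
```

A t-f based variant applies the same subspace machinery to covariance matrices built from selected t-f points, which is what makes the TFD resolution matter.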
995.
Glaucoma is a disease that damages the optic nerve head and can result in severe vision loss. Early detection and proper treatment by an ophthalmologist are the keys to preventing optic nerve damage and vision loss from glaucoma. Glaucoma screening is based on manual segmentation of the optic cup and disc to measure the vertical cup-to-disc ratio (CDR). However, obtaining the regions of interest from an expert ophthalmologist is difficult and often tedious, and in most cases unlabeled images far outnumber labeled ones. We propose an automatic glaucoma screening approach named Super Pixels for Semi-Supervised Segmentation (SP3S), a semi-supervised superpixel-by-superpixel classification method consisting of three main steps. The first step prepares the labeled and unlabeled data, applying the superpixel method and bringing in an expert to label superpixels. In the second step, we incorporate prior knowledge of the optic cup and disc by including color and spatial information. In the final step, semi-supervised learning with the Co-forest classifier is trained on only a small number of labeled superpixels and a large number of unlabeled superpixels to generate a robust classifier. To estimate the optic cup and disc regions, an active geometric shape model is used to smooth the disc and cup boundaries for the CDR calculation. The results obtained for glaucoma detection via automatic cup and disc segmentation establish the approach as a potential solution for glaucoma screening. Quantitatively and qualitatively, SP3S corresponds closely with expert segmentation, providing a useful tool for semi-automatic recognition of the optic cup and disc and a step toward better clinical management of glaucoma.
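Once cup and disc masks are available, the vertical CDR reduces to a ratio of vertical extents. A minimal sketch on synthetic binary masks (illustrative only; in the paper the masks come from the SP3S pipeline and shape-model smoothing):

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks."""
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))    # rows containing cup
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))  # rows containing disc
    cup_h = cup_rows.max() - cup_rows.min() + 1
    disc_h = disc_rows.max() - disc_rows.min() + 1
    return cup_h / disc_h

# Synthetic example: disc spans 100 rows, cup spans 60 rows -> CDR = 0.6
disc = np.zeros((200, 200), bool); disc[50:150, 60:160] = True
cup = np.zeros((200, 200), bool);  cup[70:130, 80:140] = True
print(vertical_cdr(cup, disc))    # 0.6
```

A larger CDR (commonly above roughly 0.6, thresholds vary by study) is the screening indicator the segmentation pipeline ultimately feeds.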
996.
In this paper, we design a variational model for restoring multiple-coil magnetic resonance images (MRI) corrupted by non-central chi distributed noise. The energy functional corresponding to the restoration problem is derived using the maximum a posteriori (MAP) estimator. Optimizing this functional yields the solution, which corresponds to the restored version of the image. The non-local total bounded variation prior serves as the regularization term in the functional derived through the MAP estimation process. Further, the split-Bregman iteration scheme is adopted for fast numerical computation of the model. The results are compared with state-of-the-art MRI restoration models using visual representations and statistical measures.
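The noise model motivating the functional can be checked numerically: the root-sum-of-squares combination of L coils, each with complex Gaussian noise, yields non-central chi statistics, which reduce to a central chi law with 2L degrees of freedom in signal-free background. A quick simulation of that premise (illustrative parameters):

```python
import numpy as np
from math import gamma, sqrt

rng = np.random.default_rng(0)
L, sigma, n = 2, 1.0, 100_000        # coils, per-channel noise std, voxels

# Signal-free (background) voxels: each coil sees complex Gaussian noise;
# the root-sum-of-squares magnitude follows a chi law with 2L dof.
re = sigma * rng.standard_normal((L, n))
im = sigma * rng.standard_normal((L, n))
mag = np.sqrt((re**2 + im**2).sum(axis=0))

# Chi-distribution mean with k = 2L degrees of freedom:
# sigma * sqrt(2) * gamma((k+1)/2) / gamma(k/2)
mean_theory = sigma * sqrt(2) * gamma((2 * L + 1) / 2) / gamma(L)
print(mag.mean(), mean_theory)       # empirical vs. theoretical, ~1.88
```

It is this non-Gaussian likelihood, rather than the usual additive-Gaussian one, that shapes the data-fidelity term obtained from the MAP derivation.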
997.
This paper presents a comprehensive cross-layer framework for the performance of the transmission control protocol (TCP) over a free-space optical (FSO) link that employs automatic repeat request (ARQ) and adaptive modulation and coding (AMC) schemes. Unlike conventional works in the FSO literature, we construct a Markov error model to accurately capture the effects of burst errors caused by atmospheric turbulence on cross-layer operations. Using the framework, we quantify the impact of different parameters and settings of ARQ, AMC, and the FSO link on TCP throughput. We also discuss several aspects of optimizing TCP performance.
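The burst-error behaviour that a Markov error model captures can be sketched with the classical two-state Gilbert-Elliott channel: a "good" and a "bad" state with different bit-error probabilities, switched by a Markov chain. The parameters below are illustrative, not the paper's fitted turbulence values:

```python
import random

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.3, e_good=1e-4, e_bad=0.2, seed=1):
    """Two-state Markov (Gilbert-Elliott) burst-error channel sketch.
    p_gb / p_bg: good->bad and bad->good transition probabilities;
    e_good / e_bad: per-bit error probability in each state."""
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in range(n_bits):
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        else:
            if rng.random() < p_gb:
                state_bad = True
        e = e_bad if state_bad else e_good
        errors.append(rng.random() < e)
    return errors

errs = gilbert_elliott(100_000)
ber = sum(errs) / len(errs)
print(ber)   # average BER between e_good and e_bad; errors arrive in bursts
```

Because errors cluster in the bad state, ARQ retransmissions and TCP timeouts behave very differently from the independent-error case even at the same average BER, which is the point of using a Markov model in the cross-layer analysis.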
998.
The rapid development of the Internet has brought many conveniences, but data transmission and storage face security issues such as privacy protection and integrity authentication. In this paper, an efficient speech watermarking algorithm is proposed for content authentication and recovery in the encrypted domain. The proposed system consists of speech encryption, watermark generation and embedding, and content authentication and recovery. In the encryption process, chaotic and block ciphers are combined to eliminate positional correlation and conceal statistical features. In the watermark embedding process, the approximation coefficients of an integer wavelet transform are used to generate the watermark, and the detail coefficients are reserved to carry it. Theoretical analysis and simulation results show that the proposed scheme has high security and excellent inaudibility. Compared with previous works, the proposed scheme can reliably detect de-synchronization attacks and locate the corresponding tampered areas without using synchronization codes. Meanwhile, the selective encryption does not influence the selective watermarking operation, and likewise the watermarking operation does not affect decryption of the encrypted speech. Additionally, tampered samples can be recovered without any auxiliary watermark information.
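The embedding idea — an invertible integer wavelet transform whose detail coefficients carry the watermark — can be sketched with an integer (lifting) Haar transform and LSB substitution in the detail band. This is a simplified stand-in for the paper's scheme: no encryption, no watermark generation from approximation coefficients, no authentication logic.

```python
import numpy as np

def int_haar_fwd(x):
    """Integer (lifting) Haar transform: approximation a, detail d."""
    x = np.asarray(x, dtype=np.int64)
    x0, x1 = x[0::2], x[1::2]
    d = x1 - x0
    a = x0 + (d >> 1)           # floor average; perfectly invertible
    return a, d

def int_haar_inv(a, d):
    x0 = a - (d >> 1)
    x1 = d + x0
    out = np.empty(2 * len(a), dtype=np.int64)
    out[0::2], out[1::2] = x0, x1
    return out

def embed(x, bits):
    a, d = int_haar_fwd(x)
    d = (d & ~1) | np.asarray(bits, dtype=np.int64)   # LSB of detail coeffs
    return int_haar_inv(a, d)

def extract(x):
    _, d = int_haar_fwd(x)
    return d & 1

samples = np.array([100, 103, 98, 97, 120, 119, 60, 64])
bits = np.array([1, 0, 1, 1])
wm = embed(samples, bits)
print(extract(wm))   # [1 0 1 1]
```

Because the lifting transform is integer-to-integer and exactly invertible, embedding perturbs each sample by at most a couple of quantization steps, which is the mechanism behind the inaudibility claim.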
999.
One of the major issues in LTE-Advanced (LTE-A) systems is poor capacity at the cell edge, mainly due to the interference experienced by users as a result of the aggressive frequency reuse usually implemented. Relaying offers an attractive solution by providing better links than those to the eNodeB (eNB) for terminals suffering from high path loss or high interference. However, adding relays complicates the resource allocation problem at the eNB, so more efficient schemes are needed. The problem is further aggravated by the reuse of resource blocks (RBs) by the relays to fully exploit the scarce spectrum, which in turn leads to intra-cell interference. In this paper, we study the joint power and resource allocation problem in LTE-A relay-enhanced cells that exploit spatial reuse. To guarantee fairness among users, a max-min fair optimization objective is used. This complex problem is solved using coordinate ascent and difference-of-two-convex-functions (DC) programming techniques, and the proposed scheme converges quickly to a local optimum. Simulation results show this to be a satisfactory solution, indicating an almost sevenfold increase in 10th-percentile capacity compared to previously proposed solutions.
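The max-min fairness objective can be illustrated with a much simpler baseline than the DC-programming solution: greedily handing each resource block to the currently worst-off user. This hypothetical sketch ignores power control, relays, and spatial-reuse interference, and exists only to show what "max-min fair" optimizes:

```python
def maxmin_rb_allocation(rates):
    """rates[u][b]: achievable rate of user u on resource block b.
    Greedy baseline: assign each RB to the user with the lowest
    accumulated rate so far, raising the minimum rate."""
    n_users, n_rbs = len(rates), len(rates[0])
    total = [0.0] * n_users
    assign = [None] * n_rbs
    for b in range(n_rbs):
        u = min(range(n_users), key=lambda u: total[u])
        assign[b] = u
        total[u] += rates[u][b]
    return assign, total

rates = [[3, 1, 2, 2],      # user 0's rate per RB
         [1, 4, 1, 3]]      # user 1's rate per RB
assign, total = maxmin_rb_allocation(rates)
print(assign, total)        # [0, 1, 0, 1] [5.0, 7.0]
```

A throughput-maximizing allocation could starve the cell-edge user entirely; the max-min objective instead trades total capacity for a higher worst-case (e.g. 10th-percentile) rate, which is the metric the paper reports.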
1000.
In this paper, we present a novel computationally efficient motion estimation (ME) algorithm for high-efficiency video coding (HEVC). The proposed algorithm searches a hexagonal pattern with a fixed number of search points at each grid and exploits the correlation between contiguous pixels within the frame. To reduce computational complexity, it employs pixel truncation, an adaptive search range, and sub-sampling, and avoids some of the asymmetrical prediction unit techniques. Simulation results obtained with the HM reference software (encoder_lowdelay_P_main and encoder_randomaccess_main profiles) show a 55.49% improvement in search points with approximately the same PSNR and around a 1% increase in bit rate compared to the Test Zonal Search (TZS) ME algorithm. With the proposed algorithm, the BD-PSNR loss for video sequences such as BasketballPass_416×240@50 and Johnny_1280×720@60 is 0.0804 dB and 0.0392 dB, respectively, compared to the HM reference software with the encoder_lowdelay_P_main profile.
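A generic hexagon-pattern block search (without the paper's pixel truncation, adaptive search range, or sub-sampling refinements, which are assumptions this sketch omits) walks a large six-point hexagon until the centre is the best candidate, then refines with a small four-point pattern:

```python
import numpy as np

def sad(ref, block, x, y, bs):
    # Sum of absolute differences at candidate position (x, y)
    return int(np.abs(ref[y:y+bs, x:x+bs].astype(int) - block.astype(int)).sum())

def hex_search(ref, block, cx, cy, bs, sr):
    large = [(-2, 0), (2, 0), (-1, -2), (1, -2), (-1, 2), (1, 2)]
    small = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    h, w = ref.shape
    def valid(x, y):
        return (0 <= x <= w - bs and 0 <= y <= h - bs
                and abs(x - cx) <= sr and abs(y - cy) <= sr)
    bx, by = cx, cy
    best = sad(ref, block, bx, by, bs)
    while True:                        # large hexagon: walk until centre wins
        cands = [(bx + dx, by + dy) for dx, dy in large if valid(bx + dx, by + dy)]
        if not cands:
            break
        cost, x, y = min((sad(ref, block, x, y, bs), x, y) for x, y in cands)
        if cost >= best:
            break
        best, bx, by = cost, x, y
    for dx, dy in small:               # small 4-point refinement
        x, y = bx + dx, by + dy
        if valid(x, y) and (c := sad(ref, block, x, y, bs)) < best:
            best, bx, by = c, x, y
    return bx - cx, by - cy, best

rng = np.random.default_rng(7)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
bs, cx, cy = 8, 20, 20
block = ref[cy:cy+bs, cx+2:cx+2+bs]    # true displacement is (+2, 0)
mv = hex_search(ref, block, cx, cy, bs, sr=7)
print(mv)                              # (2, 0, 0): exact match, zero SAD
```

The fixed, small candidate set per step is what keeps the search-point count low relative to exhaustive patterns such as TZS's zonal sweeps.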