Subscription full text   17,577 articles
Free   2,290 articles
Free (domestic)   1,325 articles
Electrical engineering   727 articles
Interdisciplinary   1,868 articles
Chemical industry   1,740 articles
Metalworking   662 articles
Machinery and instruments   750 articles
Building science   1,503 articles
Mining engineering   487 articles
Energy and power   220 articles
Light industry   181 articles
Water conservancy   268 articles
Petroleum and natural gas   1,300 articles
Weapons industry   93 articles
Radio and electronics   4,560 articles
General industrial technology   1,237 articles
Metallurgical industry   399 articles
Atomic energy technology   54 articles
Automation technology   5,143 articles
2024   60 articles
2023   168 articles
2022   319 articles
2021   388 articles
2020   410 articles
2019   376 articles
2018   337 articles
2017   499 articles
2016   559 articles
2015   676 articles
2014   1,126 articles
2013   1,056 articles
2012   1,427 articles
2011   1,411 articles
2010   1,133 articles
2009   1,169 articles
2008   1,164 articles
2007   1,276 articles
2006   1,153 articles
2005   1,017 articles
2004   845 articles
2003   840 articles
2002   745 articles
2001   569 articles
2000   474 articles
1999   392 articles
1998   345 articles
1997   282 articles
1996   190 articles
1995   195 articles
1994   142 articles
1993   110 articles
1992   65 articles
1991   50 articles
1990   43 articles
1989   32 articles
1988   25 articles
1987   13 articles
1986   14 articles
1985   29 articles
1984   19 articles
1983   9 articles
1982   10 articles
1981   8 articles
1980   5 articles
1978   2 articles
1965   2 articles
1963   3 articles
1956   1 article
1955   1 article
Sort order: 10,000 results in total (search time: 328 ms)
1.
Fast image codecs are a pressing need in applications that deal with large amounts of images. Graphics Processing Units (GPUs) are well suited to speed up most kinds of algorithms, especially those that allow fine-grain parallelism. Bitplane Coding with Parallel Coefficient processing (BPC-PaCo) is a recently proposed algorithm for the core stage of wavelet-based image codecs, tailored to the highly parallel architectures of GPUs. The algorithm provides complexity scalability, allowing faster execution at the expense of coding efficiency. Its main drawback is that the speedup and the loss in image quality are controlled only roughly, resulting in visible distortion at low and medium rates. This paper addresses this issue by integrating techniques of visually lossless coding into BPC-PaCo. The resulting method minimizes the visual distortion introduced in the compressed file, producing images of higher quality to a human observer. Experimental results also indicate speedups of 12% with respect to BPC-PaCo.
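As a rough illustration of the visually lossless idea mentioned above (not the authors' actual integration into BPC-PaCo), the sketch below decides how many wavelet bitplanes to code per subband so that the quantization error stays under an assumed visibility threshold; the threshold values and subband names are hypothetical.

```python
import numpy as np

# Hypothetical per-subband visibility thresholds (just-noticeable distortion),
# e.g. derived offline from a contrast sensitivity model. Values are illustrative.
VISIBILITY_THRESHOLDS = {"LL": 0.5, "HL1": 1.0, "LH1": 1.0, "HH1": 2.0}

def bitplanes_to_code(subband: np.ndarray, name: str) -> int:
    """Number of magnitude bitplanes that must be coded so that the
    quantization error stays below the subband's visibility threshold."""
    t = VISIBILITY_THRESHOLDS[name]
    max_mag = int(np.abs(subband).max())
    total_planes = max(max_mag.bit_length(), 1)
    # Dropping the k least significant bitplanes introduces an error < 2**k,
    # so keep dropping planes while that error stays below the threshold.
    dropped = 0
    while 2 ** (dropped + 1) <= t:
        dropped += 1
    return max(total_planes - dropped, 0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hh1 = rng.normal(0.0, 4.0, size=(64, 64)).round()
    print("bitplanes to code for HH1:", bitplanes_to_code(hh1, "HH1"))
```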
2.
Fire spread and growth on real-scale four-cushion mock-ups of residential upholstered furniture (RUF) were investigated with the goal of identifying whether changes in five classes of materials (barrier, flexible polyurethane foam, polyester fiber wrap, upholstery fabric, and sewing thread), referred to as factors, resulted in statistically significant changes in burning behavior. A fractional factorial experimental design plus practical considerations yielded a test matrix with 20 material combinations. Experiments were repeated a minimum of two times. Measurements included fire spread rates derived from video recordings and heat release rates (HRRs). A total of 13 experimental parameters (3 based on the videos and 10 on the HRR results), referred to as responses, characterized the measurements. Statistical analyses based on Main Effects Plots (main effects) and Block Plots (main effects and factor interactions) were used. The results showed that three of the factors had statistically significant effects on varying numbers of the 13 responses. The Barrier and Fabric factors had the strongest main effects, with roughly comparable magnitudes. Foam was statistically significant for fewer of the responses, and its overall effect was weaker than that of Barrier and Fabric. No statistically significant main effects were identified for Wrap or Thread. Multiple two-term interactions between factors were identified as statistically significant, with the Barrier*Fabric interaction producing the largest number of, and the strongest, statistically significant effects. The existence of two-term interactions means that approaches designed to predict the burning behavior of RUF will need to account for them.
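For readers unfamiliar with main-effects analysis, a minimal sketch of how a main effect is computed from a coded (-1/+1) factorial test matrix follows; the factor names echo the abstract, but the data values and the single response shown are invented for illustration.

```python
import pandas as pd

# Hypothetical subset of the test matrix: each factor coded -1/+1 (e.g. barrier
# absent/present), with peak heat release rate (kW) as one of the 13 responses.
data = pd.DataFrame({
    "barrier": [-1, -1, +1, +1, -1, +1],
    "fabric":  [-1, +1, -1, +1, +1, -1],
    "foam":    [+1, -1, -1, +1, -1, +1],
    "peak_hrr": [1800.0, 2100.0, 900.0, 1200.0, 2300.0, 850.0],
})

def main_effect(df: pd.DataFrame, factor: str, response: str) -> float:
    """Main effect = mean response at the high level minus mean at the low level."""
    high = df.loc[df[factor] == +1, response].mean()
    low = df.loc[df[factor] == -1, response].mean()
    return high - low

for f in ("barrier", "fabric", "foam"):
    print(f"{f:8s} main effect on peak HRR: {main_effect(data, f, 'peak_hrr'):+.1f} kW")
```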
3.
In this paper we combine video compression with modern image processing methods. We construct novel iterative filters for prediction signals built on Partial Differential Equation (PDE) based methods. The mathematical framework of the employed diffusion filter class is given and some desirable properties are stated. In particular, two types of diffusion filters are constructed: a uniform diffusion filter that uses a fixed filter mask, and a signal-adaptive diffusion filter that incorporates the structures of the underlying prediction signal. The latter has the advantage of not attenuating existing edges, while the uniform filter is less complex. The filters are embedded into software based on HEVC with the additional QTBT (Quadtree plus Binary Tree) and MTT (Multi-Type-Tree) block structures. In this setting, several measures to reduce the coding complexity of the tool are introduced, discussed, and tested thoroughly. The coding complexity is reduced by up to 70% while maintaining over 80% of the gain. Overall, the diffusion filter method achieves average bitrate savings of 2.27% for Random Access, with an average encoder runtime complexity of 119% and decoder runtime complexity of 117%. For individual test sequences, savings of 7.36% for Random Access are achieved.
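A minimal sketch of the uniform diffusion filter idea (a fixed filter mask applied iteratively, i.e. an explicit heat-equation step) is given below; the stencil, step size, and iteration count are assumptions, not the paper's exact filter.

```python
import numpy as np

def uniform_diffusion(pred: np.ndarray, iterations: int = 4, tau: float = 0.2) -> np.ndarray:
    """Iterative uniform (linear) diffusion of a prediction block: each step is an
    explicit discretization of the heat equation u_t = laplace(u) with a fixed
    5-point stencil, i.e. the same filter mask applied repeatedly."""
    u = pred.astype(np.float64, copy=True)
    for _ in range(iterations):
        # Replicate-pad so the block borders stay well defined.
        p = np.pad(u, 1, mode="edge")
        lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u
        u = u + tau * lap
    return u

if __name__ == "__main__":
    block = np.zeros((8, 8))
    block[:, 4:] = 255.0                       # hard edge inside a prediction block
    print(uniform_diffusion(block, iterations=2).round(1))
```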
4.
We explore a truncation-error criterion to steer adaptive step-length refinement and coarsening in incremental-iterative path-following procedures applied to problems in large-deformation structural mechanics. Elaborating on ideas proposed by Bergan and collaborators in the 1970s, we first describe an easily computable scalar stiffness parameter whose sign and rate of change provide reliable information on the local behavior and complexity of the equilibrium path. We then derive a simple scaling law that adaptively adjusts the length of the next step based on the rate of change of the stiffness parameter at previous points on the path. We show that this scaling is equivalent to keeping the local truncation error constant in each step. We demonstrate with numerical examples that our adaptive method follows a path with a significantly reduced number of points compared to an analysis of the same fidelity with uniform step length. A comparison with Abaqus illustrates that the truncation-error criterion effectively concentrates points around the smallest-scale features of the path, which is generally not possible with automatic incrementation based solely on local convergence properties.
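The paper derives a specific scaling law; the sketch below only illustrates the general mechanism of adapting the step length to the observed rate of change of a scalar stiffness parameter, using an assumed target-change heuristic and hypothetical clamp values.

```python
def next_step_length(dl: float, s_prev: float, s_curr: float,
                     target_ds: float = 0.05,
                     dl_min: float = 1e-4, dl_max: float = 1.0) -> float:
    """Adapt the path-following increment so the change of the scalar stiffness
    parameter per step stays near a target value (a stand-in for keeping the
    local truncation error constant). Heuristic sketch, not the paper's law."""
    rate = abs(s_curr - s_prev) / dl          # observed rate of change along the path
    if rate < 1e-12:                          # path is locally flat: allow the largest step
        return dl_max
    proposed = target_ds / rate               # step that would give the target change
    return min(max(proposed, dl_min), dl_max)

# Example: stiffness parameter dropping quickly near a limit point -> smaller next step
print(next_step_length(dl=0.1, s_prev=0.9, s_curr=0.4))    # refined step
print(next_step_length(dl=0.1, s_prev=0.9, s_curr=0.88))   # coarsened step
```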
5.
In compressive sampling theory, the least absolute shrinkage and selection operator (LASSO) is a representative problem. Nevertheless, its non-differentiable constraint impedes the use of Lagrange programming neural networks (LPNNs). In this article we present the ℓ1-LPNN model, a novel algorithm that tackles the LASSO minimization, together with the underlying theoretical support. First, we design a sequence of smooth constrained optimization problems by introducing a convenient differentiable approximation to the non-differentiable ℓ1-norm constraint. Next, we prove that the optimal solutions of the regularized intermediate problems converge to the optimal sparse signal of the LASSO. Then, for every regularized problem in the sequence, the ℓ1-LPNN dynamic model is derived and the asymptotic stability of its equilibrium state is established. Finally, numerical simulations are carried out to compare the performance of the proposed ℓ1-LPNN algorithm with both the LASSO-LPNN model and a standard digital method.
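A purely digital illustration of the smoothing idea (not the LPNN dynamics) is sketched below: the ℓ1 term is replaced by the differentiable surrogate sqrt(x² + ε²), ε is annealed towards zero, and each smoothed problem is solved by plain gradient descent; the penalized formulation, parameters, and annealing schedule are assumptions.

```python
import numpy as np

def lasso_smoothed(A, b, lam=0.05, eps_seq=(1e-1, 1e-2, 1e-3), iters=2000):
    """Penalized LASSO surrogate min ||Ax-b||^2/2 + lam*sum(sqrt(x^2+eps^2)),
    solved by gradient descent while eps is annealed towards zero (illustrative)."""
    m, n = A.shape
    lr = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from the data-term Lipschitz constant
    x = np.zeros(n)
    for eps in eps_seq:
        for _ in range(iters):
            # Gradient of the smoothed penalty is lam * x / sqrt(x^2 + eps^2).
            grad = A.T @ (A @ x - b) + lam * x / np.sqrt(x * x + eps * eps)
            x = x - lr * grad
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.normal(size=(40, 100))
    x_true = np.zeros(100)
    x_true[[3, 17, 60]] = [1.5, -2.0, 0.8]
    b = A @ x_true + 0.01 * rng.normal(size=40)
    x_hat = lasso_smoothed(A, b)
    print("largest recovered entries at indices:", np.argsort(-np.abs(x_hat))[:3])
```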
6.
This paper considers the state-dependent interference relay channel (SIRC), in which one of the two users may operate as a secondary user and the relay has noncausal access to the signals from both users. For the discrete memoryless SIRC, we first establish an achievable rate region by carefully merging the Han-Kobayashi rate-splitting technique, superposition encoding, and Gelfand-Pinsker encoding. Then, based on the achievable rate region we derive, the capacity of the SIRC is established in several scenarios, including (a) the weak interference regime, (b) the strong interference regime, and (c) the very strong interference regime; our capacity results thus subsume all previously known results in the literature. Next, the achievable rate region and the associated capacity results are evaluated for the case of additive Gaussian noise. Finally, numerical examples are presented to illustrate the value of the theoretical derivations.
7.
Facial Expression Recognition (FER) is an important subject of human-computer interaction and has long been a research area of great interest. Accurate Facial Expression Sequence Interception (FESI) and discriminative expression feature extraction are two major challenges for video-based FER. This paper proposes a FER framework for intercepted video sequences that uses feature point movement trends and feature block texture variation. First, feature points are marked by an Active Appearance Model (AAM) and the 24 most representative points are selected. Second, the facial expression sequence is intercepted from the face video by determining two key frames whose emotional intensities are minimum and maximum, respectively. Third, the trend curve representing the Euclidean distance variation between any two selected feature points is fitted, and the slopes at specific points on the trend curve are calculated. Finally, the Slope Set composed of the calculated slopes is combined with the proposed Feature Block Texture Difference (FBTD), which captures the texture variation of facial patches, to form the final expression features, which are input to a One-Dimensional Convolutional Neural Network (1DCNN) for FER. Five experiments are conducted, and average FER rates of 95.2%, 96.5%, and 97% on the Beihang University (BHU) facial expression database, the MMI facial expression database, and the combination of the two databases, respectively, demonstrate the significant advantages of the proposed method over existing ones.
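A simplified sketch of the distance-trend step is shown below: it computes the Euclidean distance between every pair of tracked points across frames and takes an endpoint slope per pair, whereas the paper fits a trend curve and evaluates slopes at specific points; array shapes and names are illustrative.

```python
import numpy as np

def distance_trends(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (num_frames, num_points, 2) tracked feature points.
    Returns a (num_pairs, num_frames) array: one Euclidean-distance trend
    curve per pair of selected points."""
    f, p, _ = landmarks.shape
    pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]
    trends = np.empty((len(pairs), f))
    for k, (i, j) in enumerate(pairs):
        trends[k] = np.linalg.norm(landmarks[:, i, :] - landmarks[:, j, :], axis=1)
    return trends

def trend_slopes(trends: np.ndarray) -> np.ndarray:
    """Slope of each trend curve between the neutral (first) and apex (last)
    frame of the intercepted expression sequence."""
    num_frames = trends.shape[1]
    return (trends[:, -1] - trends[:, 0]) / (num_frames - 1)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    seq = rng.normal(size=(30, 24, 2)).cumsum(axis=0)   # fake track of 24 selected points
    slopes = trend_slopes(distance_trends(seq))
    print("slope feature vector length:", slopes.size)  # C(24, 2) = 276
```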
8.
Computer entry and editing of Braille is a specialized application area of information processing and an important research topic in special education. In this work, Braille characters are treated as special symbols: a font library is built, a personalized code table is compiled, and both are embedded into a mainstream input method, enabling mixed typesetting of Braille and Chinese characters as well as one-handed Braille input. The system is easy to learn and remember, supports diverse Braille encodings, and embeds well into existing input methods; experiments show that Braille input efficiency can be improved by a factor of 5 to 6, giving the approach significant application value in Braille publishing, printing, and teaching. However, the compatibility of Braille characters across different platforms (such as smartphones) and operating systems still requires further research and development.
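As a toy illustration of the "personalized code table" idea, the sketch below maps short, hypothetical key codes to Unicode braille cells (U+2800 block); a real table would follow the national braille scheme and the input-method engine's own format.

```python
def braille_cell(*dots: int) -> str:
    """Compose a Unicode braille character from raised dot numbers (1-8):
    dot n sets bit n-1 in the Braille Patterns block starting at U+2800."""
    code = 0x2800
    for d in dots:
        code |= 1 << (d - 1)
    return chr(code)

# Hypothetical personalized code table: key sequence -> braille cell.
CODE_TABLE = {
    "b1":   braille_cell(1),          # dots 1     -> U+2801
    "b12":  braille_cell(1, 2),       # dots 1,2   -> U+2803
    "b145": braille_cell(1, 4, 5),    # dots 1,4,5 -> U+2819
}

if __name__ == "__main__":
    print(" ".join(CODE_TABLE.values()))
```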
9.
In this paper, low-cost and two-cycle hardware structures for the PRINCE lightweight block cipher are presented. The first is an area-constrained structure; the second is a high-speed implementation of the PRINCE cipher. The substitution box (S-box) and inverse substitution box (S-box−1) are the most complex blocks in the PRINCE cipher; they are designed here with an efficient structure that has a low critical-path delay. In the low-cost structure, the S-boxes and S-boxes−1 are shared between the round computations and the intermediate step of the PRINCE cipher, so the proposed architecture uses the lowest number of computational resources. The two-cycle implementation of the PRINCE cipher is built from a processing element (PE), a general and reconfigurable element, giving a regular structure with a minimum number of control signals. Implementation results for the proposed structures in 180-nm CMOS technology and on the Virtex-4 and Virtex-6 FPGA families are reported. Based on these results, the proposed structures achieve better critical-path delay and throughput than related works.
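The resource-sharing idea can be modeled in software as a single substitution layer that serves both the forward rounds and the inverse/middle computations by selecting the lookup table; the 4-bit S-box below is an arbitrary permutation used only for illustration, not the actual PRINCE table.

```python
# Illustrative 4-bit S-box (an arbitrary permutation, NOT the PRINCE S-box)
# and its inverse, derived once and reused, mirroring the shared S-box hardware.
SBOX = [0x6, 0xB, 0x5, 0x4, 0x2, 0xE, 0x7, 0xA,
        0x9, 0xD, 0xF, 0xC, 0x3, 0x1, 0x0, 0x8]
SBOX_INV = [0] * 16
for i, v in enumerate(SBOX):
    SBOX_INV[v] = i

def sub_layer(state: int, inverse: bool = False) -> int:
    """Apply the 4-bit S-box (or its inverse) to all 16 nibbles of a 64-bit state;
    one structure serves both directions by selecting the lookup table."""
    table = SBOX_INV if inverse else SBOX
    out = 0
    for nibble in range(16):
        out |= table[(state >> (4 * nibble)) & 0xF] << (4 * nibble)
    return out

if __name__ == "__main__":
    s = 0x0123456789ABCDEF
    assert sub_layer(sub_layer(s), inverse=True) == s   # S^-1(S(x)) == x
    print(hex(sub_layer(s)))
```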
10.
Laser pulse coding is one of the anti-jamming measures used in laser-guided weapons. Angle-deception jamming and high-repetition-frequency jamming are currently the main sources of active interference against semi-active laser-guided weapons. To study how different laser pulse coding schemes affect the resistance of semi-active laser guidance to these two kinds of jamming, this paper considers the recognition algorithm of the adversary's laser warning receiver and the decoding process of the friendly seeker, proposes evaluation methods based on the autocorrelation function and the normalized cross-correlation function, and simulates the main coding schemes currently in use. The simulation results show that resistance to angle-deception jamming is affected by the periodicity of the code sequence and the randomness of the pulse intervals; resistance to high-repetition-frequency jamming is affected by the randomness of the pulse intervals; and the LFSR state code outperforms the other coding schemes against both angle-deception and high-repetition-frequency jamming.
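A small sketch of the proposed evaluation metrics is given below: a pulse code is rendered as a 0/1 time series and its normalized (auto/cross-)correlation is computed, so that a strictly periodic code shows a high correlation sidelobe (easy for a warning receiver to predict) while a randomized-interval code does not; the time resolution, code lengths, and jitter range are assumptions.

```python
import numpy as np

def pulse_train(intervals_ms, resolution_ms=1.0, length_ms=2000):
    """Build a 0/1 time series from a sequence of pulse intervals (ms)."""
    train = np.zeros(int(length_ms / resolution_ms))
    t = 0.0
    for dt in intervals_ms:
        t += dt
        idx = int(t / resolution_ms)
        if idx >= train.size:
            break
        train[idx] = 1.0
    return train

def normalized_xcorr(a, b):
    """Normalized cross-correlation; high off-peak values mean the code is
    easy to predict or lock onto."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return np.correlate(a, b, mode="full") / a.size

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    periodic = pulse_train([50.0] * 40)                       # fixed-interval code
    jittered = pulse_train(50.0 + rng.uniform(-10, 10, 40))   # random-interval code
    for name, code in (("periodic", periodic), ("jittered", jittered)):
        ac = normalized_xcorr(code, code)
        sidelobe = np.sort(ac)[-2]            # largest value apart from the zero-lag peak
        print(f"{name}: highest autocorrelation sidelobe = {sidelobe:.2f}")
```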