Full-text access type
Paid full text | 16,134 articles |
Free | 1,832 articles |
Free (domestic) | 1,401 articles |
Subject category
Electrical engineering | 572 articles |
General | 2,673 articles |
Chemical industry | 126 articles |
Metalworking | 121 articles |
Machinery and instruments | 609 articles |
Architecture and construction | 310 articles |
Mining engineering | 103 articles |
Energy and power | 39 articles |
Light industry | 125 articles |
Hydraulic engineering | 75 articles |
Petroleum and natural gas | 101 articles |
Weapons industry | 91 articles |
Radio and electronics | 4,117 articles |
General industrial technology | 557 articles |
Metallurgy | 665 articles |
Atomic energy technology | 25 articles |
Automation technology | 9,058 articles |
Publication year
2024 | 24 articles |
2023 | 154 articles |
2022 | 299 articles |
2021 | 348 articles |
2020 | 315 articles |
2019 | 238 articles |
2018 | 232 articles |
2017 | 345 articles |
2016 | 402 articles |
2015 | 502 articles |
2014 | 881 articles |
2013 | 799 articles |
2012 | 1,084 articles |
2011 | 1,132 articles |
2010 | 997 articles |
2009 | 974 articles |
2008 | 1,168 articles |
2007 | 1,220 articles |
2006 | 1,170 articles |
2005 | 1,122 articles |
2004 | 900 articles |
2003 | 843 articles |
2002 | 674 articles |
2001 | 597 articles |
2000 | 479 articles |
1999 | 386 articles |
1998 | 320 articles |
1997 | 305 articles |
1996 | 202 articles |
1995 | 238 articles |
1994 | 180 articles |
1993 | 149 articles |
1992 | 101 articles |
1991 | 90 articles |
1990 | 54 articles |
1989 | 57 articles |
1988 | 40 articles |
1987 | 32 articles |
1986 | 28 articles |
1985 | 48 articles |
1984 | 35 articles |
1983 | 22 articles |
1982 | 11 articles |
1981 | 17 articles |
1980 | 15 articles |
1979 | 11 articles |
1978 | 13 articles |
1977 | 10 articles |
1975 | 15 articles |
1963 | 8 articles |
Sort order: 10,000 results found (search took 15 ms)
1.
Fast image codecs are a pressing need in applications that handle large volumes of images. Graphics Processing Units (GPUs) are well suited to accelerating most kinds of algorithms, especially those that permit fine-grained parallelism. Bitplane Coding with Parallel Coefficient processing (BPC-PaCo) is a recently proposed algorithm for the core stage of wavelet-based image codecs, tailored to the highly parallel architecture of GPUs. The algorithm provides complexity scalability, allowing faster execution at the expense of coding efficiency. Its main drawback is that the speedup and the loss in image quality are only roughly controlled, resulting in visible distortion at low and medium rates. This paper addresses this issue by integrating visually lossless coding techniques into BPC-PaCo. The resulting method minimizes the visual distortion introduced in the compressed file, yielding images that look better to a human observer. Experimental results also indicate speedups of 12% with respect to BPC-PaCo.
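As background on the bitplane decomposition that BPC-PaCo operates on, the following is a minimal sketch (not the BPC-PaCo algorithm itself, which codes the planes with parallel coefficient processing) of splitting integer wavelet coefficients into bitplanes and reconstructing from a truncated set of planes, which is where the rate/quality trade-off comes from:

```python
import numpy as np

def to_bitplanes(coeffs, num_planes=8):
    """Split non-negative integer coefficients into bitplanes,
    most-significant plane first."""
    return [(coeffs >> p) & 1 for p in range(num_planes - 1, -1, -1)]

def from_bitplanes(planes, total_planes=None):
    """Reassemble coefficients from bitplanes. Passing only the first k
    (most significant) of total_planes planes yields a coarser,
    lower-rate reconstruction -- the source of the rate/quality trade-off."""
    if total_planes is None:
        total_planes = len(planes)
    coeffs = np.zeros_like(planes[0])
    for i, plane in enumerate(planes):
        coeffs |= plane << (total_planes - 1 - i)
    return coeffs

coeffs = np.array([200, 13, 97, 5], dtype=np.int64)
planes = to_bitplanes(coeffs)           # 8 binary planes, MSB first
lossless = from_bitplanes(planes)       # exact reconstruction
coarse = from_bitplanes(planes[:4], 8)  # only the 4 most significant planes
```

Coding planes MSB-first means a bitstream truncated at any plane still decodes to the best available approximation of every coefficient.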
2.
3.
In this paper we combine video compression with modern image processing methods. We construct novel iterative filtering methods for prediction signals based on Partial Differential Equation (PDE) techniques. The mathematical framework of the employed diffusion filter class is given and some desirable properties are stated. In particular, two types of diffusion filters are constructed: a uniform diffusion filter that uses a fixed filter mask, and a signal-adaptive diffusion filter that incorporates the structure of the underlying prediction signal. The latter has the advantage of not attenuating existing edges, while the uniform filter is less complex. The filters are embedded into software based on HEVC with additional QTBT (Quadtree plus Binary Tree) and MTT (Multi-Type Tree) block structures. In this setting, several measures to reduce the coding complexity of the tool are introduced, discussed, and tested thoroughly. The coding complexity is reduced by up to 70% while maintaining over 80% of the gain. Overall, the diffusion filter method achieves average bitrate savings of 2.27% for the Random Access configuration, with an average encoder runtime of 119% and decoder runtime of 117% relative to the baseline. For individual test sequences, savings of up to 7.36% for Random Access are achieved.
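The uniform diffusion filter described above can be sketched as repeated explicit-scheme smoothing of a prediction block with a fixed Laplacian mask. The parameters below (tau, iteration count, block values) are illustrative choices, not those used in the paper:

```python
import numpy as np

def uniform_diffusion(block, iterations=5, tau=0.2):
    """Homogeneous diffusion of a 2-D prediction block with a fixed
    5-point Laplacian mask: u <- u + tau * Laplacian(u).
    Reflecting (Neumann) boundaries; tau <= 0.25 keeps the explicit
    scheme stable."""
    u = block.astype(np.float64)
    for _ in range(iterations):
        p = np.pad(u, 1, mode='edge')
        lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u)
        u = u + tau * lap
    return u

block = np.array([[0., 0., 0., 0.],
                  [0., 8., 8., 0.],
                  [0., 8., 8., 0.],
                  [0., 0., 0., 0.]])
smoothed = uniform_diffusion(block)
```

Because the mask is fixed, this variant is cheap but blurs edges; the signal-adaptive filter in the paper steers the diffusion to avoid exactly that.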
5.
Anis Zeglaoui Anouar Houmia Maher Mejai Radhouane Aloui 《International Journal of Adaptive Control and Signal Processing》2021,35(9):1842-1859
In compressive sampling theory, the least absolute shrinkage and selection operator (LASSO) is a representative problem. Nevertheless, its non-differentiable constraint impedes the use of Lagrange programming neural networks (LPNNs). In this article we present the ℓ1-LPNN model, a novel algorithm that tackles the LASSO minimization, together with the underlying theoretical support. First, we design a sequence of smooth constrained optimization problems by introducing a convenient differentiable approximation to the non-differentiable ℓ1-norm constraint. Next, we prove that the optimal solutions of the regularized intermediate problems converge to the optimal sparse signal of the LASSO. Then, for every regularized problem in the sequence, the ℓ1-LPNN dynamic model is derived, and the asymptotic stability of its equilibrium state is established. Finally, numerical simulations are carried out to compare the performance of the proposed ℓ1-LPNN algorithm with both the LASSO-LPNN model and a standard digital method.
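The smoothing idea, replacing the non-differentiable ℓ1 norm with a differentiable surrogate such as sum of sqrt(x_i^2 + eps) and solving the smoothed problem, can be illustrated as follows. This is a plain gradient-descent sketch under assumed parameters (eps, step size, iteration count), not the analog LPNN dynamics of the paper:

```python
import numpy as np

def smooth_l1(x, eps):
    """Differentiable surrogate for the l1 norm: sum_i sqrt(x_i^2 + eps)."""
    return np.sum(np.sqrt(x ** 2 + eps))

def lasso_smoothed(A, b, lam, eps=1e-4, iters=3000):
    """Gradient descent on the smoothed objective
    0.5 * ||Ax - b||^2 + lam * smooth_l1(x, eps).
    The step size accounts for the curvature of both terms."""
    x = np.zeros(A.shape[1])
    lr = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam / np.sqrt(eps))
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + lam * x / np.sqrt(x ** 2 + eps)
        x -= lr * grad
    return x

# Recover a sparse vector from noiseless random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.5, -2.0
b = A @ x_true
x_hat = lasso_smoothed(A, b, lam=0.01)
```

As eps shrinks, the surrogate approaches |x| and the minimizer approaches the LASSO solution, which mirrors the convergence argument in the abstract.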
6.
This paper considers the state-dependent interference relay channel (SIRC), in which one of the two users may operate as a secondary user and the relay has noncausal access to the signals from both users. For the discrete memoryless SIRC, we first establish an achievable rate region by carefully merging the Han-Kobayashi rate-splitting encoding technique, superposition encoding, and the Gelfand-Pinsker encoding technique. Then, based on the achievable rate region that we derive, the capacity of the SIRC is established in several scenarios, including (a) the weak interference regime, (b) the strong interference regime, and (c) the very strong interference regime; our capacity results thus subsume all previously known results in the literature. Next, the achievable rate region and the associated capacity results are evaluated for the case of additive Gaussian noise. Additionally, numerical examples are presented to illustrate the value of our theoretical derivations.
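For intuition about what such regions look like in the Gaussian case, the sketch below tests membership in the well-known capacity region of the classical two-user Gaussian interference channel under strong interference (no relay, no channel state); the SIRC region derived in the paper generalizes this kind of result:

```python
import math

def awgn_capacity(snr):
    """C(x) = 0.5 * log2(1 + x): capacity of a real AWGN channel."""
    return 0.5 * math.log2(1.0 + snr)

def in_strong_interference_region(r1, r2, p1, p2, a, b):
    """Membership test for the capacity region of the classical two-user
    Gaussian interference channel under strong interference (cross gains
    a, b >= 1): each user is limited by its single-user rate, plus a
    sum-rate constraint from decoding both messages at each receiver."""
    tol = 1e-12
    return (r1 <= awgn_capacity(p1) + tol and
            r2 <= awgn_capacity(p2) + tol and
            r1 + r2 <= min(awgn_capacity(p1 + a * p2),
                           awgn_capacity(p2 + b * p1)) + tol)
```

With unit powers and cross gains of 2, both users can simultaneously operate at their single-user rate of 0.5 bit per channel use, but no faster.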
7.
Maqbool Ali Jamil Hussain Sungyoung Lee Byeong Ho Kang Kashif Sattar 《Expert Systems》2020,37(1):e12401
The case-based learning (CBL) approach has gained attention in medical education as an alternative to traditional learning methodology. However, current CBL systems do not facilitate or provide computer-based domain knowledge to medical students for solving real-world clinical cases during CBL practice. To automate CBL, clinical documents are beneficial for constructing domain knowledge. In the literature, most systems and methodologies require a knowledge engineer to construct machine-readable knowledge. In view of these facts, we present a knowledge construction methodology (KCM-CD) to construct a domain knowledge ontology (i.e., structured declarative knowledge) from unstructured text in a systematic way using artificial intelligence techniques, with minimal intervention from a knowledge engineer. To combine the strengths of humans and computers, and to realize the KCM-CD methodology, an interactive case-based learning system (iCBLS) was developed. Finally, the resulting ontological model was evaluated to assess the quality of the domain knowledge in terms of a coherence measure. The results showed that the overall domain model has positive coherence values, indicating that the words in each branch of the domain ontology are correlated with one another and that the quality of the developed model is acceptable.
8.
Semantic search is gradually establishing itself as the next-generation search paradigm, one that meets a wider range of information needs better than traditional full-text search. At the same time, however, expanding search to cover document structure and external, formal knowledge sources (e.g., LOD resources) remains challenging, especially with respect to efficiency, usability, and scalability. This paper introduces Mímir, an open-source framework for integrated semantic search over text, document structure, linguistic annotations, and formal semantic knowledge. Mímir supports complex structural queries as well as basic keyword search. Exploratory search and sense-making are supported through information visualisation interfaces, such as co-occurrence matrices and term clouds. There is also an interactive retrieval interface, where users can save, refine, and analyse the results of a semantic search over time. The more well-studied precision-oriented information-seeking searches are also well supported. The generic and extensible nature of the Mímir platform is demonstrated through three different real-world applications, one of which required indexing and search over tens of millions of documents and fifty to a hundred times as many semantic annotations. Scaling up to over 150 million documents was also accomplished, via index federation and cloud-based deployment.
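The raw data behind a term co-occurrence matrix of the kind mentioned above can be gathered in a few lines; this is a generic sketch of document-level co-occurrence counting, not Mímir's implementation:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(documents):
    """Count how often each unordered pair of distinct terms occurs in
    the same document -- the raw data behind a term co-occurrence matrix."""
    counts = Counter()
    for doc in documents:
        terms = sorted(set(doc.lower().split()))
        for a, b in combinations(terms, 2):
            counts[(a, b)] += 1
    return counts

docs = ["semantic search over text",
        "semantic annotations improve search"]
counts = cooccurrence_counts(docs)
```

In a real system the pairs would come from an annotation index rather than whitespace tokenization, and the counts would feed a visualisation such as a heat map.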
9.
张居晓 《计算机技术与发展》2015,(1)
Computer entry and editing of Braille is a special application area of information processing and an important research topic in special education. In this work, Braille characters are treated as special symbols: a font is created, a customized code table is written, and both are embedded into a mainstream input method, enabling mixed typesetting of Braille and Chinese characters as well as one-handed Braille input. The system is easy to learn and remember, supports diverse Braille encodings, and embeds well into existing software. Experiments show that Braille input efficiency can be improved five- to six-fold, giving the system significant application value in Braille publishing, printing, and teaching. However, the compatibility of Braille characters across different platforms (such as smartphones) and operating systems still requires further research and development.
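Treating Braille cells as special symbols maps naturally onto the Unicode Braille Patterns block (U+2800 to U+28FF). The sketch below is a generic illustration of that mapping; the letter-to-dot table covers only a few letters of standard grade-1 Braille and is not the code table developed in the paper:

```python
def braille_char(dots):
    """Map a set of raised dots (numbered 1-8 in standard Braille cell
    order) to the corresponding Unicode Braille Patterns character:
    U+2800 plus a bitmask in which dot n sets bit n-1."""
    mask = 0
    for d in dots:
        mask |= 1 << (d - 1)
    return chr(0x2800 + mask)

# Grade-1 dot patterns for a few Latin letters (illustration only).
LETTER_DOTS = {"a": [1], "b": [1, 2], "c": [1, 4], "l": [1, 2, 3]}

def to_braille(text):
    return "".join(braille_char(LETTER_DOTS[ch]) for ch in text)
```

Because the Unicode block is platform-independent, such a mapping also hints at one route to the cross-platform compatibility issue the abstract raises.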
10.
Laser pulse coding is one of the anti-jamming measures used in laser-guided weapons. Angle-deception jamming and high-repetition-frequency (high-PRF) jamming are currently the main active jamming threats to semi-active laser-guided weapons. To study how different laser pulse coding schemes affect the resistance of semi-active laser-guided weapons to these two kinds of jamming, this paper considers both the recognition algorithm of the adversary's laser warning receiver and the decoding process of the seeker, and proposes evaluation methods based on the autocorrelation function and the normalized cross-correlation function. The main current coding schemes are simulated, and the results show that resistance to angle-deception jamming is affected by the periodicity of the code sequence and the randomness of the pulse intervals; resistance to high-PRF laser jamming is affected by the randomness of the pulse intervals; and the LFSR state code outperforms the other coding schemes against both angle-deception and high-PRF jamming.
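The autocorrelation-based evaluation described above can be sketched as follows: generate a binary sequence from a small LFSR (a hypothetical 4-bit register, not the specific state code evaluated in the paper) and compute its normalized circular autocorrelation; a single sharp peak at zero lag with uniformly low side lobes is the desirable property:

```python
import numpy as np

def lfsr_sequence(taps, state, length):
    """Output bits of a Fibonacci LFSR. `taps` are 0-based register
    positions XORed to form the feedback bit. The 4-bit register used
    below implements the primitive recurrence b_n = b_{n-4} XOR b_{n-3}
    (period 15); it is a toy register, not the paper's state code."""
    reg = list(state)
    bits = []
    for _ in range(length):
        bits.append(reg[-1])
        fb = 0
        for t in taps:
            fb ^= reg[t]
        reg = [fb] + reg[:-1]
    return np.array(bits, dtype=float)

def normalized_autocorrelation(x):
    """Circular autocorrelation of the zero-mean sequence, normalized so
    that lag 0 equals 1. A single sharp peak with uniformly small side
    lobes indicates good resistance to repeated-pulse jamming."""
    xc = x - x.mean()
    r = np.array([np.dot(xc, np.roll(xc, k)) for k in range(len(x))])
    return r / r[0]

seq = lfsr_sequence([3, 2], [1, 0, 0, 0], 15)  # one full period of an m-sequence
r = normalized_autocorrelation(seq)
```

For a maximal-length (m-)sequence such as this one, every off-peak lag of the normalized autocorrelation takes the same small negative value, which is the flat-side-lobe behaviour that makes LFSR-derived codes attractive here.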