Full-text access type

| Access type | Articles |
| --- | --- |
| Paid full text | 6,087 |
| Free | 617 |
| Free (domestic) | 524 |
Subject classification

| Subject | Articles |
| --- | --- |
| Electrical engineering | 714 |
| General | 654 |
| Chemical industry | 65 |
| Metalworking | 74 |
| Machinery and instruments | 290 |
| Architecture and construction | 35 |
| Mining engineering | 85 |
| Energy and power | 81 |
| Light industry | 71 |
| Hydraulic engineering | 17 |
| Petroleum and natural gas | 36 |
| Weapons industry | 44 |
| Radio electronics | 1,163 |
| General industrial technology | 134 |
| Metallurgy | 515 |
| Nuclear technology | 25 |
| Automation technology | 3,225 |
Publication year

| Year | Articles |
| --- | --- |
| 2024 | 12 |
| 2023 | 35 |
| 2022 | 93 |
| 2021 | 116 |
| 2020 | 145 |
| 2019 | 133 |
| 2018 | 122 |
| 2017 | 150 |
| 2016 | 146 |
| 2015 | 201 |
| 2014 | 273 |
| 2013 | 234 |
| 2012 | 405 |
| 2011 | 480 |
| 2010 | 363 |
| 2009 | 369 |
| 2008 | 490 |
| 2007 | 523 |
| 2006 | 518 |
| 2005 | 459 |
| 2004 | 342 |
| 2003 | 325 |
| 2002 | 248 |
| 2001 | 206 |
| 2000 | 168 |
| 1999 | 98 |
| 1998 | 88 |
| 1997 | 82 |
| 1996 | 42 |
| 1995 | 40 |
| 1994 | 45 |
| 1993 | 39 |
| 1992 | 35 |
| 1991 | 19 |
| 1990 | 17 |
| 1989 | 21 |
| 1988 | 13 |
| 1987 | 5 |
| 1986 | 15 |
| 1985 | 13 |
| 1984 | 9 |
| 1983 | 12 |
| 1982 | 7 |
| 1979 | 5 |
| 1964 | 6 |
| 1963 | 5 |
| 1962 | 4 |
| 1961 | 8 |
| 1955 | 5 |
| 1954 | 6 |
Sort order: 7,228 results found (search time: 416 ms)
1.
Bilingual word embeddings are usually obtained by mapping from the source-language space to the target-language space, learning the linear transformation that minimizes the distance between mapped source embeddings and their target counterparts. However, large parallel corpora are hard to obtain, which limits embedding accuracy. To address cross-lingual word embedding with unbalanced corpora and scarce bilingual data, this paper proposes a method based on a small dictionary and non-parallel corpora. First, the monolingual word vectors are normalized, and an orthogonal optimal linear transformation over the small-dictionary word pairs provides the initial value for gradient descent. Then the large source-language (English) corpus is clustered; with the help of the small dictionary, the source-language words corresponding to each cluster are identified, and the mean vector of each cluster together with the mean vectors of the corresponding source- and target-language words establishes new bilingual vector correspondences. These new correspondences are added to the small dictionary, generalizing and extending it. Finally, the extended dictionary is used to optimize the cross-lingual mapping model by gradient descent. Experiments on English–Italian, English–German and English–Finnish show that the method reduces the number of gradient-descent iterations and the training time while achieving good accuracy in cross-lingual word embedding. Similar articles
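The orthogonal initialization step described in this abstract can be sketched with the classic Procrustes solution: given dictionary pairs stacked as matrices, the orthogonal map minimizing the Frobenius distance comes from one SVD. This is a minimal NumPy sketch, not the paper's implementation; the toy vectors and dimensions are made up for illustration.

```python
import numpy as np

def normalize(m):
    # Length-normalize each word vector (the paper's first step).
    return m / np.linalg.norm(m, axis=1, keepdims=True)

def procrustes(X, Y):
    """Orthogonal W minimizing ||X @ W - Y||_F over dictionary pairs:
    X.T @ Y = U S Vt  =>  W = U @ Vt."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy example: a hypothetical 5-pair seed dictionary in 4 dimensions.
rng = np.random.default_rng(0)
X = normalize(rng.normal(size=(5, 4)))        # "source-language" vectors
R, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # a hidden true rotation
Y = X @ R                                     # "target-language" vectors
W = procrustes(X, Y)
print(np.allclose(X @ W, Y, atol=1e-8))       # the rotation is recovered
```

In the method described above, this W would only seed gradient descent; the cluster-mean pairs added to the dictionary then refine it.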
2.
To address the low efficiency, high cost and poor generalization of traditional rule-based and statistical Chinese résumé parsing, a feature-fusion method is proposed: word vectors produced by Word2Vec are concatenated with word vectors derived from character sequences modeled by a BLSTM (Bidirectional Long Short-Term Memory), and the fused representation is fed to a BLSTM-CRF (Conditional Random Fields) model to parse Chinese résumés. To improve parsing efficiency, the character-level vectors and the Word2Vec vectors are concatenated into a new word representation; the BLSTM then fuses contextual information and outputs scores for all possible label sequences to the CRF layer, which applies constraints between labels to find the optimal sequence. The network is trained with gradient descent and optimized with pre-trained word vectors and Dropout. Experimental results show that the proposed feature-fusion method outperforms traditional résumé-parsing methods. Similar articles
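The CRF decoding step this abstract refers to, finding the best label sequence given the BLSTM's per-token scores plus label-transition constraints, is Viterbi search. Below is a minimal sketch, assuming made-up emission and transition scores rather than anything from the paper.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Best label sequence given per-token emission scores (T x K)
    and label-to-label transition scores (K x K) -- CRF decoding."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cand[i, j] = score of ending at label j via previous label i
        cand = score[:, None] + transitions + emissions[t]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy 3-token, 2-label example with invented scores.
em = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 0.5]])
tr = np.array([[0.5, -1.0], [-1.0, 0.5]])  # favour staying in one label
print(viterbi(em, tr))
```

In the full model, `em` would come from the BLSTM over the fused character/Word2Vec features and `tr` would be learned CRF parameters.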
3.
Many models of spoken word recognition posit the existence of lexical and sublexical representations, with excitatory and inhibitory mechanisms used to affect the activation levels of such representations. Bottom-up evidence provides excitatory input, and inhibition from phonetically similar representations leads to lexical competition. In such a system, long words should produce stronger lexical activation than short words, for 2 reasons: Long words provide more bottom-up evidence than short words, and short words are subject to greater inhibition due to the existence of more similar words. Four experiments provide evidence for this view. In addition, reaction-time-based partitioning of the data shows that long words generate greater activation that is available both earlier and for a longer time than is the case for short words. As a result, lexical influences on phoneme identification are extremely robust for long words but are quite fragile and condition-dependent for short words. Models of word recognition must consider words of all lengths to capture the true dynamics of lexical activation. (PsycINFO Database Record (c) 2010 APA, all rights reserved) Similar articles
4.
Reading requires the orchestration of visual, attentional, language-related, and oculomotor processing constraints. This study replicates previous effects of frequency, predictability, and length of fixated words on fixation durations in natural reading and demonstrates new effects of these variables in a corpus of 144 sentences. Such evidence for distributed processing of words across fixation durations challenges psycholinguistic immediacy-of-processing and eye-mind assumptions. Most of the time the mind processes several words in parallel at different perceptual and cognitive levels. Eye movements can help to unravel these processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved) Similar articles
5.
6.
For the problem of computing long one-dimensional DFTs, this paper analyzes the computational structure and the parallelism of the algorithm and proposes an array coprocessor architecture. It describes how the DFT computation is organized on this coprocessor, gives the concrete algorithm steps, and analyzes the complexity of the DFT running on this array structure. This is of practical value for developing dedicated integrated coprocessor chips for DFT computation and for improving the performance of dedicated embedded systems. Similar articles
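The parallelism that an array structure can exploit for a long DFT is visible in the standard four-step (row-column) decomposition: a length n1*n2 transform splits into n2 independent column DFTs, a twiddle multiplication, and n1 independent row DFTs. This NumPy sketch illustrates that decomposition only; it is not the paper's coprocessor algorithm, and the sizes are made up.

```python
import numpy as np

def dft_long(x, n1, n2):
    """Four-step DFT of length n1*n2. The column and row transforms
    are mutually independent, i.e. parallelizable across an array."""
    A = np.asarray(x, dtype=complex).reshape(n1, n2)
    A = np.fft.fft(A, axis=0)            # n2 independent length-n1 DFTs
    k1 = np.arange(n1)[:, None]
    m2 = np.arange(n2)[None, :]
    A *= np.exp(-2j * np.pi * k1 * m2 / (n1 * n2))  # twiddle factors
    A = np.fft.fft(A, axis=1)            # n1 independent length-n2 DFTs
    return A.T.reshape(-1)               # output index k = k1 + n1*k2

x = np.arange(12, dtype=float)
print(np.allclose(dft_long(x, 3, 4), np.fft.fft(x)))
```

Each of the inner transforms touches only its own row or column, which is why mapping them onto separate processing elements needs only one transpose-like data exchange in between.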
7.
8.
9.
This paper proposes a software pipelining framework, CALiBeR (Cluster-Aware Load Balancing Retiming Algorithm), suitable for compilers targeting clustered embedded VLIW processors. CALiBeR can be used by embedded system designers to explore different code optimization alternatives, that is, high-quality customized retiming solutions for desired throughput and program memory size requirements, while minimizing register pressure. An extensive set of experimental results is presented, demonstrating that our algorithm compares favorably with one of the best state-of-the-art algorithms, achieving up to 50% improvement in performance and up to 47% improvement in register requirements. In order to empirically assess the effectiveness of clustering for high ILP applications, additional experiments are presented contrasting the performance achieved by software pipelined kernels executing on clustered and on centralized machines. Similar articles
10.
Yanqiu Shao Jiqing Han Ting Liu Yongzhen Zhao 《International Journal of Speech Technology》2007,10(1):45-55
In real speech, prosodic words (PWs), rather than lexical words (LWs), are the basic rhythmic units, and the naturalness of a Text-to-Speech (TTS) system is directly influenced by PW segmentation. Most PWs are combinations of several LWs. In this paper, three Lexical Combination Models are proposed to combine LWs into PWs: a Directed Acyclic Graph Model, a Segmentation Model and a Markov Model (MM). To cope with the situation where some long LWs should be segmented into two or more PWs, a Lexical Split Model (LSM) is applied to the long LWs. Experimental results show that the MM yields relatively stable results across various training data. The Transformation-Based Error-Driven Learning (TBED) algorithm, for its high performance on individual properties, is applied in combination with the MM to improve the precision of PW segmentation. Experiments show that among the three proposed models, the MM combined with TBED and LSM leads to the best performance, achieving a precision of 93.00% and a recall of 93.23%. A perception test indicates that speech using PWs as the lowest prosodic units sounds more natural and acceptable than speech using LWs.
This work was supported by NSFC Project (60503071), the 973 National Basic Research Program of China (2004CB318102), and the Postdoctoral Science Foundation of P. R. China (20070420275). Similar articles
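The lexical-combination idea in this last abstract, tagging each lexical word as either starting a new prosodic word (B) or continuing the current one (I) under a Markov model over adjacent tags, can be illustrated with a toy scorer. This is a hedged sketch, not the authors' model: the transition probabilities are invented, and real systems would use Viterbi rather than the exhaustive search shown here.

```python
import itertools

# Invented first-order transition probabilities between B/I tags:
# 'B' starts a new prosodic word, 'I' merges with the previous word.
trans = {('B', 'B'): 0.4, ('B', 'I'): 0.6,
         ('I', 'B'): 0.7, ('I', 'I'): 0.3}

def best_tagging(n):
    """Exhaustively score all taggings of n lexical words (first tag
    is always 'B') and return the most probable one."""
    best, best_p = None, -1.0
    for tail in itertools.product('BI', repeat=n - 1):
        tags = ('B',) + tail
        p = 1.0
        for a, b in zip(tags, tags[1:]):
            p *= trans[(a, b)]
        if p > best_p:
            best, best_p = tags, p
    return best

print(best_tagging(4))
```

With these made-up numbers the model prefers alternating merges, grouping four lexical words into two two-word prosodic words; a trained model would instead reflect corpus statistics, with TBED rules patching its systematic errors.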