153.
Multiversion databases store both current and historical data. Rows are typically annotated with timestamps representing the period during which the row is or was valid. We develop novel techniques to reduce index maintenance in multiversion databases, so that indexes can be used effectively for analytical queries over current data without placing a heavy burden on transaction throughput. To this end, we redesign persistent index data structures in the storage hierarchy to employ an extra level of indirection. The indirection level is stored on solid-state disks, which support very fast random I/O, so traversing the extra level of indirection incurs a relatively small overhead. The extra level of indirection dramatically reduces the number of magnetic-disk I/Os needed for index updates and localizes maintenance to indexes on updated attributes. Additionally, we batch insertions within the indirection layer to reduce physical disk I/Os for indexing new records. In this work, we further exploit SSDs by introducing novel DeltaBlock techniques for storing recent changes to data on SSDs. Using DeltaBlocks, we propose an efficient method for periodically flushing recently changed data from SSDs to HDDs such that, on the one hand, we keep track of every change (or delta) for every record and, on the other hand, we avoid redundantly storing the unchanged portion of updated records. By reducing the index-maintenance overhead on transactions, we enable operational data stores to create more indexes to support queries. We have developed a prototype of our indirection proposal by extending the widely used generalized search tree (GiST) open-source project, which is also employed in PostgreSQL. Our working implementation demonstrates that we can reduce index-maintenance and/or query-processing cost by a factor of 3. For the insertion of new records, our batching technique saves up to 90% of the insertion time. For updates, our prototype demonstrates that we can reduce the database size by up to 80%, even with modest space allocated for DeltaBlocks on SSDs.
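The core of the indirection idea can be sketched as follows. This is an illustrative model, not the authors' GiST implementation: all names (`IndirectionLayer`, `SecondaryIndex`, "LID") are hypothetical. Index entries reference a stable logical record ID (LID), and a single SSD-resident table maps each LID to the record's current physical location, so a row update touches one indirection entry instead of every index.

```python
class IndirectionLayer:
    """SSD-resident mapping from logical record IDs (LIDs) to physical rows."""
    def __init__(self):
        self.lid_table = {}  # LID -> physical row id (would live on SSD)
        self.next_lid = 0

    def insert(self, phys_rid):
        lid = self.next_lid
        self.next_lid += 1
        self.lid_table[lid] = phys_rid
        return lid

    def update(self, lid, new_phys_rid):
        # One small random SSD write replaces an HDD update in every index.
        self.lid_table[lid] = new_phys_rid

    def resolve(self, lid):
        return self.lid_table[lid]


class SecondaryIndex:
    """Index on one attribute; entries reference LIDs, never physical rows."""
    def __init__(self):
        self.entries = {}  # key -> set of LIDs

    def add(self, key, lid):
        self.entries.setdefault(key, set()).add(lid)

    def lookup(self, key, layer):
        # Extra hop through the indirection layer to reach physical rows.
        return {layer.resolve(lid) for lid in self.entries.get(key, ())}


# A new version of the row is written at physical slot 99; only the
# indirection entry changes, yet the index still finds the new version.
layer = IndirectionLayer()
idx = SecondaryIndex()
lid = layer.insert(7)
idx.add("alice", lid)
layer.update(lid, 99)              # row moved: index untouched
print(idx.lookup("alice", layer))  # {99}
```

The key property shown here is that an update to an indexed record costs one write to the LID table regardless of how many indexes reference the record.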
154.
In this paper, a reliable video communication system using adaptive hierarchical QAM (HQAM) is designed to provide optimized unequal error protection (UEP) for embedded video bitstreams. Based on the relative importance of the bits, the video bitstream is partitioned into two priorities, namely a High Priority (HP) and a Low Priority (LP) substream. The optimal value of the modulation (or hierarchical) parameter α of HQAM, which controls the relative error protection of these substreams, is then selected from a pre-designed look-up table. The proposed system adapts itself by tuning the optimal α to the varying channel conditions, without changing the modulation level. This is in contrast to conventional WiMAX and LTE systems, in which dynamic switching among multiple modulations is used to adapt to varying channel conditions. This paper proposes HQAM with adaptive α as an alternative to multiple modulation schemes. Moreover, for fixed average transmission power, the receiver demodulates symbols without knowledge of α. To further improve video quality and reduce the effect of erroneously received LP bits, the proposed system uses another level of adaptation, in which received LP bits are adaptively retained or discarded before decoding the video, depending on the channel conditions (or the optimized α). Simulation results show that the proposed system achieves significant improvement in video quality compared to a QAM-based EEP scheme and non-adaptive HQAM.
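A minimal sketch of how a hierarchy parameter shapes a 16-HQAM constellation, assuming a common textbook construction (per axis, the HP bit picks the half-plane at offset d1 and the LP bit an inner offset d2, with α = d1/d2). This is illustrative, not the paper's exact mapper; the function name and signature are hypothetical.

```python
def hqam16_symbol(hp_bits, lp_bits, alpha, d2=1.0):
    """Map (2 HP bits, 2 LP bits) to one complex 16-HQAM symbol.

    alpha = d1/d2 controls UEP: a larger alpha pushes the four quadrants
    apart, protecting HP bits at the expense of the intra-quadrant LP bits.
    """
    d1 = alpha * d2

    def axis(hp, lp):
        sign = 1 if hp == 0 else -1          # HP bit: which half-plane
        offset = d2 if lp == 0 else -d2      # LP bit: inner offset
        return sign * (d1 + offset)

    return complex(axis(hp_bits[0], lp_bits[0]),
                   axis(hp_bits[1], lp_bits[1]))


# With alpha = 2, d2 = 1: per-axis levels are +-1 and +-3, so the nearest
# points across a quadrant boundary sit 2*(d1 - d2) apart, while points
# within a quadrant sit 2*d2 apart. Raising alpha widens only the HP margin.
s = hqam16_symbol((0, 1), (1, 0), alpha=2.0)
print(s)  # (1-3j)
```

Note how the average symbol energy grows with α, which is why, for a fixed transmit power, the constellation would be renormalized in practice; that scaling is omitted here for clarity.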
156.
This paper presents a method for reconstructing unreliable spectral components of speech signals using the statistical distributions of the clean components. Our goal is to model the temporal patterns in the speech signal and take advantage of correlations between speech features in the time and frequency domains simultaneously. In this approach, a hidden Markov model (HMM) is first trained on clean speech data to model the temporal patterns that appear in sequences of spectral components. Using this model, and according to the probability of observing each noisy spectral component at each state, probability distributions for the noisy components are estimated. Then, by applying maximum a posteriori (MAP) estimation to these distributions, the final estimates of the unreliable spectral components are obtained. The proposed method is compared to a common missing-feature method based on probabilistic clustering of the feature vectors, and also to a state-of-the-art method based on sparse reconstruction. The experimental results exhibit significant improvement in recognition accuracy on a noise-polluted Persian corpus.
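A toy sketch of the bounded-MAP step common in missing-feature reconstruction, under simplifying assumptions that are mine, not the paper's: each unreliable component gets a Gaussian prior N(mu, sigma^2) from the active HMM state, and the noisy observation y upper-bounds the clean value (0 <= x <= y, since additive noise only adds energy). Under that truncated-Gaussian posterior, the MAP estimate is simply the prior mean clamped into [0, y]. All function names are illustrative.

```python
def map_reconstruct(mu, y_obs):
    """MAP estimate of a clean spectral value under a Gaussian prior
    with mean mu, truncated to the feasible interval [0, y_obs]."""
    return min(max(mu, 0.0), y_obs)


def reconstruct_frame(frame, reliable_mask, state_means):
    """Keep reliable components; replace unreliable ones by bounded MAP,
    using per-component prior means supplied by the active HMM state."""
    return [x if ok else map_reconstruct(mu, x)
            for x, ok, mu in zip(frame, reliable_mask, state_means)]


noisy = [3.0, 8.0, 2.0]
mask = [True, False, False]   # components 1 and 2 flagged unreliable
means = [0.0, 5.5, 4.0]       # prior means from the decoded HMM state
print(reconstruct_frame(noisy, mask, means))  # [3.0, 5.5, 2.0]
```

Component 1 takes its prior mean (5.5 < 8.0); component 2's prior mean (4.0) exceeds the observed bound, so it is clipped to 2.0. The temporal modeling in the paper enters through which HMM state, and hence which prior, is active at each frame.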
158.
This paper explores how different forms of anticipatory work contribute to reliability in high-risk space operations. It is based on ethnographic fieldwork, participant observation and interviews, supplemented with video recordings from a control room responsible for operating a microgravity greenhouse on the International Space Station (ISS). Drawing on examples from different stages of a biological experiment on the ISS, we demonstrate how engineers, researchers and technicians work to anticipate and proactively mitigate possible problems. Space research is expensive and risky. The experiments are planned over the course of many years by a globally distributed network of organizations. Owing to the inaccessibility of the ISS, every trivial detail that could possibly cause a problem is subject to scrutiny. We discuss what we label anticipatory work: practices constituted by an entanglement of cognitive, social and technical elements involved in anticipating and proactively mitigating everything that might go wrong. We show how the nature of anticipatory work changes between the planning and operational phases of an experiment. In the planning phase, operators inscribe their anticipation into technology and procedures. In the operational phase, we show how troubleshooting involves the ability to look ahead along the evolving temporal trajectory of ISS operations and to juggle pre-planned fixes along these trajectories. A key objective of this paper is to illustrate how anticipation is shared between humans and different forms of technology. Moreover, it illustrates the importance of including considerations of temporality in safety and reliability research.
159.
This paper presents a historical Arabic corpus named HAC. At this early, embryonic stage of the project, we report on the design, the architecture and some of the experiments we have conducted on HAC. The corpus, and accordingly the search results, are represented using a primary XML exchange format. This serves as an intermediate exchange tool within the project and allows the user to process the results offline using external tools. HAC is made up of Classical Arabic texts covering 1600 years of language use: the Quranic text, Modern Standard Arabic texts, as well as a variety of monolingual Arabic dictionaries. The development of this historical corpus assists linguists and Arabic language learners in effectively exploring, understanding and discovering interesting knowledge hidden in millions of instances of language use. We used techniques from the field of natural language processing to process the data, and a graph-based representation for the corpus. We provide researchers with an export facility to render further linguistic analysis possible.
160.
Semantic similarity has typically been measured between items of approximately similar size. As a result, similarity measures have largely ignored the fact that different types of linguistic item can have similar or even identical meanings, and are therefore designed to compare only one type of linguistic item. Furthermore, nearly all current similarity benchmarks within NLP contain pairs of approximately the same size, such as word or sentence pairs, preventing the evaluation of methods capable of comparing items of different sizes. To address this, we introduce a new semantic evaluation called cross-level semantic similarity (CLSS), which measures the degree to which the meaning of a larger linguistic item, such as a paragraph, is captured by a smaller item, such as a sentence. Our pilot CLSS task was presented as part of SemEval-2014, attracting 19 teams who submitted 38 systems. The CLSS data contains a rich mixture of pairs, spanning from paragraphs down to word senses, to fully evaluate similarity measures capable of comparing items of any type. Furthermore, the data sources were drawn from diverse corpora beyond just newswire, including domain-specific texts and social media. We describe the annotation process and its challenges, including a comparison with crowdsourcing, and identify the factors that make the dataset a rigorous assessment of a method's quality. We also examine in detail the systems participating in the SemEval task to identify the common factors associated with high performance and the aspects that proved difficult for all systems. Our findings demonstrate that CLSS poses a significant challenge for similarity methods and provides clear directions for future work on universal similarity methods that can compare any pair of items.