131.
Multiversion databases store both current and historical data. Rows are typically annotated with timestamps representing the period during which the row is or was valid. We develop novel techniques to reduce index maintenance in multiversion databases, so that indexes can be used effectively for analytical queries over current data without becoming a heavy burden on transaction throughput. To this end, we redesign persistent index data structures in the storage hierarchy to employ an extra level of indirection. The indirection level is stored on solid-state disks, which support very fast random I/Os, so traversing the extra level of indirection incurs relatively small overhead. This indirection dramatically reduces the number of magnetic-disk I/Os needed for index updates and localizes maintenance to indexes on updated attributes. Additionally, we batch insertions within the indirection layer to reduce physical disk I/Os for indexing new records. We further exploit SSDs by introducing novel DeltaBlock techniques for storing recent changes to data on SSDs. Using DeltaBlocks, we propose an efficient method to periodically flush recently changed data from SSDs to HDDs such that, on the one hand, we keep track of every change (or delta) for every record and, on the other hand, we avoid redundantly storing the unchanged portion of updated records. By reducing the index maintenance overhead on transactions, we enable operational data stores to create more indexes to support queries. We have developed a prototype of our indirection proposal by extending the widely used Generalized Search Tree (GiST) open-source project, which is also employed in PostgreSQL. Our working implementation demonstrates that we can reduce index maintenance and/or query processing cost by a factor of 3. For the insertion of new records, our batching technique saves up to 90% of the insertion time. For updates, our prototype demonstrates that we can reduce the database size by up to 80% even with modest space allocated for DeltaBlocks on SSDs.
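The core of the indirection idea above can be sketched in a few lines: indexes store a stable logical record ID (LID), and an SSD-resident table maps each LID to the record's current physical location, so a row-version move touches one table entry instead of every index. This is a minimal illustration under assumed names (`IndirectionStore`, `lid_to_rid`), not the paper's GiST-based implementation.

```python
class IndirectionStore:
    """Sketch of leaf-level indirection: indexes on any attribute hold a
    stable logical ID (LID); an SSD-resident table maps each LID to the
    current physical row ID. Updating a row's location then modifies only
    the LID table, not every index whose key did not change."""

    def __init__(self):
        self.lid_to_rid = {}   # SSD-resident mapping: LID -> physical row ID
        self.indexes = {}      # attribute -> {key: set of LIDs}
        self.next_lid = 0

    def insert(self, rid, record):
        lid = self.next_lid
        self.next_lid += 1
        self.lid_to_rid[lid] = rid
        for attr, key in record.items():
            self.indexes.setdefault(attr, {}).setdefault(key, set()).add(lid)
        return lid

    def update_location(self, lid, new_rid):
        # A new row version changed the record's physical location:
        # one entry changes, indexes on unmodified attributes are untouched.
        self.lid_to_rid[lid] = new_rid

    def lookup(self, attr, key):
        # Index traversal plus one extra indirection hop to physical rows.
        return {self.lid_to_rid[lid]
                for lid in self.indexes.get(attr, {}).get(key, set())}
```

Because the extra hop hits the SSD-resident table rather than magnetic disk, the lookup penalty stays small while index-update cost drops sharply.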
132.
In this paper, a reliable video communication system using adaptive hierarchical QAM (HQAM) is designed to provide optimized unequal error protection (UEP) to embedded video bitstreams. Based on the relative importance of bits, the video bitstream is partitioned into two priorities, namely High Priority (HP) and Low Priority (LP) substreams. The optimal value of the modulation (or hierarchical) parameter α of HQAM, which controls the relative error protection of these substreams, is then selected from a pre-designed look-up table. The proposed system adapts to varying channel conditions by adjusting the optimal α, without changing the modulation level. This is in contrast to conventional WiMAX and LTE systems, in which dynamic switching among multiple modulations is used to adapt to varying channel conditions. This paper proposes HQAM with adaptive α as an alternative to multiple modulation schemes. Moreover, for fixed average transmission power, the receiver demodulates symbols without knowledge of α. To further improve video quality and reduce the effect of erroneously received LP bits, the proposed system uses another level of adaptation, in which received LP bits are adaptively retained or discarded before the video is decoded, depending on the channel conditions (i.e., the optimized α). Simulation results show that the proposed system achieves significant improvement in video quality compared to a QAM-based equal error protection (EEP) scheme and non-adaptive HQAM.  相似文献
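The role of α can be illustrated with a hierarchical 16-QAM constellation: two HP bits select the quadrant and two LP bits select the point inside it, with α scaling the quadrant separation relative to the within-quadrant spacing. Larger α thus protects HP bits at the expense of LP bits. The parameterization below is a common illustrative one (valid for α > 1), not necessarily the exact mapping used in the paper.

```python
import itertools

def hqam16_constellation(alpha, d=1.0):
    """Hierarchical 16-QAM sketch: bits (hp_i, hp_q) pick the quadrant,
    bits (lp_i, lp_q) pick the point within it. alpha scales quadrant
    separation relative to the within-quadrant spacing d, so increasing
    alpha gives HP bits more protection and LP bits less."""
    points = {}
    for hp_i, hp_q, lp_i, lp_q in itertools.product((0, 1), repeat=4):
        # HP bit 0 -> negative half-plane, 1 -> positive (per axis);
        # LP bit shifts the point by +/- d inside the quadrant.
        i = (2 * hp_i - 1) * (alpha * d + (2 * lp_i - 1) * d)
        q = (2 * hp_q - 1) * (alpha * d + (2 * lp_q - 1) * d)
        points[(hp_i, hp_q, lp_i, lp_q)] = complex(i, q)
    return points
```

Since quadrant membership (and hence the HP bits) is determined by signs alone, a receiver can demodulate HP bits without knowing α, matching the paper's observation that demodulation does not require α at the receiver.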
133.
134.
This paper presents a method for reconstructing unreliable spectral components of speech signals using the statistical distributions of the clean components. Our goal is to model the temporal patterns in the speech signal and exploit correlations between speech features in the time and frequency domains simultaneously. In this approach, a hidden Markov model (HMM) is first trained on clean speech data to model the temporal patterns that appear in sequences of spectral components. Using this model, and according to the probability of observing the noisy spectral components in each state, probability distributions for the noisy components are estimated. Then, by applying maximum a posteriori (MAP) estimation to these distributions, the final estimates of the unreliable spectral components are obtained. The proposed method is compared to a common missing-feature method based on probabilistic clustering of the feature vectors, and to a state-of-the-art method based on sparse reconstruction. The experimental results exhibit a significant improvement in recognition accuracy on a noise-polluted Persian corpus.  相似文献
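A minimal sketch of the bounded-MAP idea common in missing-feature reconstruction: assuming additive noise, the clean log-spectral value cannot exceed the noisy observation, so with a Gaussian prior per HMM state the per-state MAP estimate is the prior mean clipped at the observation, and states are combined by their occupancy probabilities. This illustrates the general technique, not the authors' exact estimator.

```python
def bounded_map_estimate(y_obs, state_probs, state_means):
    """For one unreliable log-spectral component: each HMM state supplies
    a Gaussian prior over the clean value; under the bound clean <= y_obs
    (additive noise), the per-state MAP point is min(mean, y_obs), i.e.
    the mode of the truncated Gaussian. States are mixed by their
    posterior occupancy probabilities."""
    return sum(p * min(mu, y_obs)
               for p, mu in zip(state_probs, state_means))
```

For example, with two equally likely states whose prior means are 4.0 and 6.0 and an observed value of 5.0, the second state's mean is clipped to the bound before mixing.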
135.
136.
This paper explores how different forms of anticipatory work contribute to reliability in high-risk space operations. It is based on ethnographic fieldwork, participant observation and interviews, supplemented with video recordings from a control room responsible for operating a microgravity greenhouse at the International Space Station (ISS). Drawing on examples from different stages of a biological experiment on the ISS, we demonstrate how engineers, researchers and technicians work to anticipate and proactively mitigate possible problems. Space research is expensive and risky. The experiments are planned over the course of many years by a globally distributed network of organizations. Owing to the inaccessibility of the ISS, every trivial detail that could possibly cause a problem is subject to scrutiny. We discuss what we label anticipatory work: practices constituted by an entanglement of cognitive, social and technical elements involved in anticipating and proactively mitigating everything that might go wrong. We show how the nature of anticipatory work changes between the planning and operational phases of an experiment. In the planning phase, operators inscribe their anticipation into technology and procedures. In the operational phase, troubleshooting involves the ability to look ahead along the evolving temporal trajectory of ISS operations and to juggle pre-planned fixes along these trajectories. A key objective of this paper is to illustrate how anticipation is shared between humans and different forms of technology. Moreover, it illustrates the importance of including considerations of temporality in safety and reliability research.  相似文献
137.
This paper presents a historical Arabic corpus named HAC. At this early, embryonic stage of the project, we report on the design, the architecture and some of the experiments we have conducted on HAC. The corpus, and accordingly the search results, will be represented using a primary XML exchange format. This will serve as an intermediate exchange tool within the project and will allow the user to process the results offline using external tools. HAC is made up of Classical Arabic texts covering 1600 years of language use, including the Quranic text and Modern Standard Arabic texts, as well as a variety of monolingual Arabic dictionaries. The development of this historical corpus helps linguists and Arabic language learners effectively explore, understand, and discover interesting knowledge hidden in millions of instances of language use. We used techniques from the field of natural language processing to process the data, and a graph-based representation for the corpus. We provide researchers with an export facility to render further linguistic analysis possible.  相似文献
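To make the "primary XML exchange format" concrete, the sketch below builds a result document of the general shape such an export might take: hits wrapped with source and period metadata so they can be post-processed offline by external tools. The element and attribute names (`results`, `instance`, `period`, and so on) are purely illustrative assumptions, not HAC's actual schema.

```python
import xml.etree.ElementTree as ET

def build_result_doc(hits):
    """Serialize search hits into a hypothetical XML exchange document.
    Each hit is a dict with 'source', 'period' and 'context' keys; the
    schema here is an illustrative guess at what an intermediate
    exchange format for corpus query results could look like."""
    root = ET.Element("results", corpus="HAC")
    for h in hits:
        inst = ET.SubElement(root, "instance",
                             source=h["source"], period=h["period"])
        ET.SubElement(inst, "context").text = h["context"]
    return ET.tostring(root, encoding="unicode")
```

An offline consumer would parse this with any standard XML tooling, which is the point of using XML as the project-internal exchange layer.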
138.
Semantic similarity has typically been measured across items of approximately similar size. As a result, similarity measures have largely ignored the fact that different types of linguistic item can have similar or even identical meanings, and are therefore designed to compare only one type of linguistic item. Furthermore, nearly all current similarity benchmarks within NLP contain pairs of approximately the same size, such as word or sentence pairs, preventing the evaluation of methods capable of comparing different-sized items. To address this, we introduce a new semantic evaluation called cross-level semantic similarity (CLSS), which measures the degree to which the meaning of a larger linguistic item, such as a paragraph, is captured by a smaller item, such as a sentence. Our pilot CLSS task was presented as part of SemEval-2014, attracting 19 teams who submitted 38 systems. The CLSS data contains a rich mixture of pairs, spanning from paragraphs to word senses, to fully evaluate similarity measures capable of comparing items of any type. Furthermore, the data sources were drawn from diverse corpora beyond just newswire, including domain-specific texts and social media. We describe the annotation process and its challenges, including a comparison with crowdsourcing, and identify the factors that make the dataset a rigorous assessment of a method's quality. We also examine in detail the systems participating in the SemEval task to identify the common factors associated with high performance and the aspects that proved difficult for all systems. Our findings demonstrate that CLSS poses a significant challenge for similarity methods and provides clear directions for future work on universal similarity methods that can compare any pair of items.  相似文献
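SemEval similarity tasks are conventionally scored by correlating each system's predicted similarity values against the gold annotations; a plain Pearson correlation, sketched below as an illustrative metric (the task may also report rank correlations), rewards systems that preserve both the ordering and the spacing of the gold scores regardless of the item sizes being compared.

```python
def pearson(xs, ys):
    """Pearson correlation between predicted and gold similarity scores.
    Returns a value in [-1, 1]; 1 means the system's scores are a perfect
    positive linear function of the gold ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```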
139.
This study examines the development of an automated particle-tracking algorithm to predict the hindered Brownian motion of fluorescent nanoparticles within an evanescent-wave field created using total internal reflection fluorescence microscopy. The two-dimensional motion of the fluorescent nanoparticles was tracked with sub-pixel resolution by fitting the intensity distribution of the particles to a known Gaussian distribution, thus locating the particle center within a single pixel. Spherical yellow-green polystyrene nanoparticles (200, 500, and 1000 nm in diameter) were suspended in deionized water (control), 10 wt% d-glucose, and 10 wt% glycerol solutions, with 1 mM of NaCl added to each. The motion of the tracked nanoparticles was compared with theoretical tangential hindered Brownian motion to estimate particle diameters and fluid viscosity using a nonlinear regression technique. The automatic tracking algorithm was first validated by comparing the automated results with manually tracked particles 1 µm in size. Our results showed that both particle size and solution viscosity were accurately predicted from the experimental mean square displacement: the error of the particle-size prediction is below 10% and the error of the solution-viscosity prediction is less than 1%. The proposed automatic analysis tool could prove useful in bio-application fields such as single-protein tracking, drug delivery, and cytotoxicity studies, as well as in microfluidic areas such as particle tracking velocimetry and noninvasive viscometry.  相似文献
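The size-from-MSD step rests on two standard relations: for free 2-D diffusion, MSD(Δt) = 4DΔt, and Stokes–Einstein gives D = kB·T / (3π·η·d). The sketch below inverts them to recover a diameter from a fitted MSD slope; it is the unhindered baseline only, omitting the near-wall hindrance correction and nonlinear regression that the study applies.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def diameter_from_msd(msd_slope, temperature, viscosity):
    """Estimate particle diameter (m) from the fitted slope of the 2-D
    mean square displacement versus lag time (m^2/s).
    Uses MSD(dt) = 4*D*dt and Stokes-Einstein D = kB*T/(3*pi*eta*d);
    valid for free (unhindered) Brownian motion far from walls."""
    D = msd_slope / 4.0                                   # diffusion coefficient, m^2/s
    return K_B * temperature / (3 * math.pi * viscosity * D)
```

For a 200 nm sphere in water at room temperature this round-trips exactly, which is a quick sanity check on the unit handling.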
140.
In the beginning of the e-commerce era, retailers mostly adopted vertically integrated solutions to control the entire e-commerce value chain. However, they began to realize that to achieve agility, a better approach is to focus on certain core capabilities and create a partner ecosystem around them. From a technical point of view, this calls for a lightweight platform architecture with a small core of e-commerce functionality that can be extended by additional services from third-party providers. In a typical e-commerce ecosystem with the diverse information systems of network partners, integration and interoperability become critical factors for enabling seamless coordination among the partners. Furthermore, the increasing adoption of cloud computing technology results in more challenging integration scenarios involving cloud services. Thus, an e-commerce platform is required that suits these advanced needs for flexible and agile service integration. This paper therefore presents a reference architecture for a novel pluggable service platform for e-commerce. We survey currently available online shop platforms and integration platforms in the market. Based on the findings, and motivated by the literature on service-oriented design, we develop the architecture of a service-based pluggable platform for online retailers. This design is then instantiated as a prototype for an e-commerce returns-handling scenario to demonstrate the feasibility of our architecture.  相似文献
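The "small core plus pluggable services" idea can be sketched as a core that exposes named extension points and delegates to whatever partner services have registered against them. The extension-point names and handler signature below are illustrative assumptions, not the reference architecture's actual API.

```python
class PluggablePlatform:
    """Minimal sketch of a pluggable e-commerce core: the platform keeps
    only a registry of extension points (e.g. payment, shipping, returns);
    third-party services plug in by registering handlers, and the core
    dispatches to them without knowing their internals."""

    def __init__(self):
        self.services = {}  # extension point -> ordered list of handlers

    def register(self, extension_point, handler):
        self.services.setdefault(extension_point, []).append(handler)

    def dispatch(self, extension_point, order):
        # Delegate to every plugged-in service; an unused extension
        # point simply yields no results.
        return [handler(order)
                for handler in self.services.get(extension_point, [])]

# A hypothetical third-party returns-handling service plugs in like this:
platform = PluggablePlatform()
platform.register("returns", lambda order: f"return label for {order['id']}")
```

Keeping the core this thin is what allows partner services, including cloud-hosted ones, to be swapped or added without touching the platform itself.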
Copyright©北京勤云科技发展有限公司  京ICP备09084417号