91.
92.
Real-time services require reliable and fault-tolerant communication networks to support their stringent Quality of Service requirements. Multi-Topology Routing based IP Fast Re-route (MT-IPFRR) technologies provide seamless forwarding of IP packets during network failures by constructing virtual topologies (VTs) to re-route the disrupted traffic. Multiple Routing Configurations (MRC) is a widely studied MT-IPFRR technique. In this paper, we propose two heuristics, namely mMRC-1 and mMRC-2, to reduce the number of VTs required by MRC to provide full coverage for single link/node failures, and hence to decrease its operational complexity. Both heuristics are designed to construct VTs that are more robust against network partitioning by taking their topological characteristics into consideration. We perform extensive experiments on 3200 topologies with diverse structural properties using our automated topology generation and analysis tool. Numerical results show that the reduction in VT requirements grows to as much as 31.84% as the networks tend to have more hub nodes whose degree is much higher than that of the rest of the network.
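To make the MRC constraint concrete, the sketch below is an illustrative baseline, not the paper's mMRC-1/mMRC-2 heuristics: each VT isolates a set of links, every link must be isolated in some VT for full single-link coverage, and a VT is usable only if the graph stays connected once its isolated links are excluded from forwarding. The greedy packer, the `networkx` usage, and the toy topology are all assumptions made for illustration.

```python
# Minimal sketch of the MRC feasibility condition (assumed reading of the
# abstract, not the authors' heuristics): a VT is valid only if the graph
# remains connected after its isolated links are removed.
import networkx as nx

def vt_is_valid(topology_edges, isolated_edges):
    """True if the VT stays connected when its isolated links are excluded."""
    g = nx.Graph(topology_edges)
    g.remove_edges_from(isolated_edges)
    return nx.is_connected(g)

def greedy_vt_assignment(topology_edges):
    """Greedily pack links into as few VTs as possible; each link must be
    isolated in exactly one VT for full single-link failure coverage.
    (Bridges cannot be isolated in any valid VT; a real implementation
    must detect and handle them separately.)"""
    vts = []  # each VT is the set of links it isolates
    for edge in topology_edges:
        for vt in vts:
            if vt_is_valid(topology_edges, vt | {edge}):
                vt.add(edge)
                break
        else:
            vts.append({edge})
    return vts

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(len(greedy_vt_assignment(edges)), "virtual topologies")
```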
93.
Multiversion databases store both current and historical data. Rows are typically annotated with timestamps representing the period when the row is/was valid. We develop novel techniques to reduce index maintenance in multiversion databases, so that indexes can be used effectively for analytical queries over current data without being a heavy burden on transaction throughput. To achieve this end, we re-design persistent index data structures in the storage hierarchy to employ an extra level of indirection. The indirection level is stored on solid-state disks that can support very fast random I/Os, so that traversing the extra level of indirection incurs a relatively small overhead. The extra level of indirection dramatically reduces the number of magnetic disk I/Os that are needed for index updates and localizes maintenance to indexes on updated attributes. Additionally, we batch insertions within the indirection layer in order to reduce physical disk I/Os for indexing new records. In this work, we further exploit SSDs by introducing novel DeltaBlock techniques for storing the recent changes to data on SSDs. Using our DeltaBlock techniques, we propose an efficient method to periodically flush the recently changed data from SSDs to HDDs such that, on the one hand, we keep track of every change (or delta) for every record, and, on the other hand, we avoid redundantly storing the unchanged portion of updated records. By reducing the index maintenance overhead on transactions, we enable operational data stores to create more indexes to support queries. We have developed a prototype of our indirection proposal by extending the widely used generalized search tree open-source project, which is also employed in PostgreSQL. Our working implementation demonstrates that we can significantly reduce index maintenance and/or query processing cost by a factor of 3. For the insertion of new records, our novel batching technique can save up to 90% of the insertion time. For updates, our prototype demonstrates that we can significantly reduce the database size by up to 80% even with a modest space allocated for DeltaBlocks on SSDs.
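The core indirection idea can be sketched in a few lines. In this simplified reading (the names and structures below are illustrative, not the paper's API), secondary indexes map keys to stable logical IDs (LIDs), and only the SSD-resident indirection table maps a LID to the row's current physical location (RID), so an update rewrites one indirection entry instead of every index that references the row.

```python
# Hedged sketch of LID -> RID indirection; a dict stands in for the
# SSD-resident table, and tuples stand in for physical row addresses.

class IndirectionLayer:
    def __init__(self):
        self.lid_to_rid = {}   # SSD-resident mapping in the real design
        self.next_lid = 0

    def insert(self, rid):
        lid = self.next_lid
        self.next_lid += 1
        self.lid_to_rid[lid] = rid
        return lid

    def update(self, lid, new_rid):
        # The new row version lives at a new physical location; indexes on
        # unchanged attributes still point at the same LID and need no edit.
        self.lid_to_rid[lid] = new_rid

    def resolve(self, lid):
        return self.lid_to_rid[lid]

layer = IndirectionLayer()
name_index = {}                             # secondary index: key -> LID
lid = layer.insert(rid=("page1", 0))
name_index["alice"] = lid
layer.update(lid, new_rid=("page7", 3))     # index untouched by the update
print(layer.resolve(name_index["alice"]))   # ('page7', 3)
```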
94.
State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase, leading to a high startup cost. Apriori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system, which addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning on the initial data, which distributes triples by hashing on their subjects; this renders its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support the fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart: (1) starts faster than all existing systems; (2) processes thousands of queries before other systems become online; and (3) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in under a second.
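The initial placement described above is simple enough to sketch. In the toy version below (worker count, data, and hash choice are assumptions for illustration), triples are assigned to workers by hashing the subject, so any star join on a common subject can be answered within a single worker and runs fully in parallel across workers.

```python
# Sketch of subject-hash partitioning as described in the abstract.
import zlib
from collections import defaultdict

NUM_WORKERS = 4

def worker_of(subject):
    # Stable hash so placement is deterministic across runs and processes.
    return zlib.crc32(subject.encode()) % NUM_WORKERS

workers = defaultdict(list)
triples = [
    ("alice", "worksAt", "kaust"),
    ("alice", "knows", "bob"),
    ("bob", "worksAt", "mit"),
]
for s, p, o in triples:
    workers[worker_of(s)].append((s, p, o))

# All triples sharing a subject land on one worker, so a star query such as
# { alice worksAt ?x . alice knows ?y } needs no inter-worker traffic.
for w, part in sorted(workers.items()):
    print(w, part)
```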
95.
Citizens’ satisfaction is acknowledged as one of the most significant influences on e-government adoption and diffusion. This study examines the impact of information quality, system quality, trust, and cost on user satisfaction with e-government services. Using a survey, this study collected 1518 valid responses from e-government service adopters across the United Kingdom. Our empirical outcomes show that the factors identified in this study have a significant impact on U.K. citizens’ satisfaction with e-government services.
96.
97.
The success of Hidden Markov Models (HMMs) in speech recognition has motivated their adoption for handwriting recognition, especially online handwriting, which has a large similarity to the speech signal as a sequential process. Some languages, such as Arabic, Farsi and Urdu, include a large number of delayed strokes that are written above or below most letters, and are usually written delayed in time. These delayed strokes pose a modeling challenge for the conventional left-right HMM that is commonly used in Automatic Speech Recognition (ASR) systems. In this paper, we introduce a new approach for handling delayed strokes in Arabic online handwriting recognition using HMMs. We also show that several modeling approaches currently used in most state-of-the-art ASR systems, such as context-based tri-grapheme models, speaker adaptive training and discriminative training, can provide similar performance improvements for Handwriting Recognition (HWR) systems. Finally, we show that using a multi-pass decoder that applies the computationally less expensive models in the early passes can give an Arabic large-vocabulary HWR system a practical decoding time. We evaluated the performance of our proposed Arabic HWR system using two databases with small and large lexicons. For the small-lexicon data set, our system achieved results competitive with the best reported state-of-the-art Arabic HWR systems. For the large lexicon, our system achieved promising results (accuracy and time) for a vocabulary size of 64k words, with the possibility of adapting the models to specific writers for even better results.
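For reference, this is the kind of left-right HMM the abstract contrasts against: each state may only self-loop or advance to the next state, which models a strictly sequential signal and is exactly what delayed strokes violate. The sketch below is a generic Viterbi decode over such a structure, not the paper's recognizer; the transition probabilities and emission scores are made up.

```python
# Minimal Viterbi decode over a left-right HMM (illustrative, not the
# paper's Arabic HWR system).
import numpy as np

def viterbi_left_right(log_emis):
    """log_emis: (T, N) log-likelihoods of each frame under each state.
    Left-right structure: state i may go only to i (self-loop) or i+1."""
    T, N = log_emis.shape
    log_self, log_next = np.log(0.6), np.log(0.4)   # assumed transitions
    delta = np.full((T, N), -np.inf)
    delta[0, 0] = log_emis[0, 0]                    # must start in state 0
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        for j in range(N):
            stay = delta[t - 1, j] + log_self
            move = delta[t - 1, j - 1] + log_next if j > 0 else -np.inf
            back[t, j] = j if stay >= move else j - 1
            delta[t, j] = max(stay, move) + log_emis[t, j]
    path = [N - 1]                                  # must end in last state
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

rng = np.random.default_rng(0)
print(viterbi_left_right(np.log(rng.dirichlet(np.ones(3), size=8))))
```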
98.
In the context of information retrieval (IR) from text documents, the term weighting scheme (TWS) is a key component of the matching mechanism when using the vector space model. In this paper, we propose a new TWS that is based on computing the average term occurrences of terms in documents, and that also uses a discriminative approach based on the document centroid vector to remove less significant weights from the documents. We call our approach Term Frequency With Average Term Occurrence (TF-ATO). An analysis of commonly used document collections shows that test collections are not fully judged, as achieving that is expensive and may be infeasible for large collections. A document collection being fully judged means that every document in the collection acts as a relevant document to a specific query or a group of queries. The discriminative approach used in our proposal is a heuristic method for improving IR effectiveness and performance, and it has the advantage of not requiring previous knowledge about relevance judgements. We compare the performance of the proposed TF-ATO to the well-known TF-IDF approach and show that using TF-ATO results in better effectiveness on both static and dynamic document collections. In addition, this paper investigates the impact that stop-word removal and our discriminative approach have on TF-IDF and TF-ATO. The results show that both stop-word removal and the discriminative approach have a positive effect on both term-weighting schemes. More importantly, it is shown that using the proposed discriminative approach is beneficial for improving IR effectiveness and performance even with no information on the relevance judgements for the collection.
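The abstract does not spell out the TF-ATO formula, so the sketch below is one plausible reading under stated assumptions: each term's frequency is normalized by its average occurrence across the documents that contain it, and the centroid step then zeroes weights that fall at or below the collection centroid. Both choices are assumptions made for illustration, not the paper's exact definitions.

```python
# Hedged sketch of TF-ATO weighting plus the centroid-based
# discriminative step, under assumed formulas.
from collections import Counter

docs = [doc.lower().split() for doc in [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs",
]]

tf = [Counter(d) for d in docs]
vocab = sorted({t for d in docs for t in d})

def avg_term_occurrence(term):
    # Average count of the term over the documents that contain it.
    counts = [c[term] for c in tf if term in c]
    return sum(counts) / len(counts)

weights = [{t: c[t] / avg_term_occurrence(t) for t in c} for c in tf]

# Document centroid: mean weight of each term over all documents.
centroid = {t: sum(w.get(t, 0.0) for w in weights) / len(weights)
            for t in vocab}

# Discriminative step: drop weights at or below the centroid value.
discriminated = [{t: w for t, w in wv.items() if w > centroid[t]}
                 for wv in weights]
print(discriminated[0])
```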
99.
Envelope analysis is an effective method for characterizing impulsive vibrations in wired condition monitoring (CM) systems. This paper describes the implementation of envelope analysis on a wireless sensor node to obtain a more convenient and reliable CM system. To maintain CM performance under the resource constraints of a cost-effective ZigBee-based wireless sensor network (WSN), a low-cost Cortex-M4F microcontroller is employed as the core processor to implement the envelope analysis algorithm on the sensor node. The on-chip 12-bit analog-to-digital converter (ADC), working at a 10 kHz sampling rate, is adopted to acquire vibration signals measured by a wide-frequency-band piezoelectric accelerometer. The data processing flow inside the processor is optimized to accommodate the large memory usage of the fast Fourier transform (FFT) and Hilbert transform (HT). Thus, the envelope spectrum can be computed from a data frame of 2048 points, achieving a frequency resolution acceptable for identifying the characteristic frequencies of different bearing faults. Experimental evaluation results show that the embedded envelope analysis algorithm can successfully diagnose the simulated bearing faults, and that the data transmission throughput can be reduced by at least 95% per frame compared with transmitting the raw data, allowing a large number of sensor nodes to be deployed in the network for real-time monitoring.
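The signal chain above (Hilbert transform, then FFT of the envelope) can be demonstrated offline in a few lines. This sketch uses synthetic data rather than the node's ADC, matching the abstract's frame size and sampling rate; the fault and carrier frequencies are made up: an impulsive fault train amplitude-modulates a resonance carrier, and the envelope spectrum recovers the fault repetition frequency.

```python
# Minimal envelope-spectrum sketch with synthetic bearing-fault data.
import numpy as np
from scipy.signal import hilbert

fs, n = 10_000, 2048                  # 10 kHz sampling, 2048-point frame
t = np.arange(n) / fs
fault_hz, carrier_hz = 97.0, 3_000.0  # illustrative fault rate / resonance
impulses = (1 + np.sign(np.sin(2 * np.pi * fault_hz * t))) / 2
x = impulses * np.sin(2 * np.pi * carrier_hz * t)
x += 0.1 * np.random.default_rng(0).standard_normal(n)

envelope = np.abs(hilbert(x))             # analytic-signal magnitude
envelope -= envelope.mean()               # remove DC before the FFT
spectrum = np.abs(np.fft.rfft(envelope))
freqs = np.fft.rfftfreq(n, d=1 / fs)      # resolution fs/n ~= 4.9 Hz

peak = freqs[np.argmax(spectrum)]
print(f"dominant envelope frequency: {peak:.1f} Hz")  # near fault_hz
```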
100.
Bug fixing accounts for a large amount of the software maintenance resources. Generally, bugs are reported, fixed, verified and closed. However, in some cases bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on three large open source projects, namely Eclipse, Apache and OpenOffice. We structure our study along four dimensions: (1) the work habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug report dimension (e.g., the component in which the bug was found), (3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix) and (4) the team dimension (e.g., the experience of the bug fixer). We build decision trees using the aforementioned factors to predict re-opened bugs. We perform top-node analysis to determine which factors are the most important indicators of whether or not a bug will be re-opened. Our study shows that the comment text and the last status of the bug when it is initially closed are the most important factors related to whether or not a bug will be re-opened. Using a combination of these dimensions, we can build explainable prediction models that achieve a precision between 52.1% and 78.6% and a recall between 70.5% and 94.1% when predicting whether a bug will be re-opened. We find that the factors that best indicate which bugs might be re-opened vary by project. The comment text is the most important factor for the Eclipse and OpenOffice projects, while the last status is the most important one for Apache. These factors should be closely examined in order to reduce the maintenance cost due to re-opened bugs.
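As a shape for this kind of model (not the study's model, features, or data), the sketch below trains a decision tree on made-up features mirroring the four dimensions above and evaluates it with precision and recall, the metrics the abstract reports.

```python
# Toy decision-tree sketch with synthetic features; feature semantics
# are illustrative stand-ins for the paper's four dimensions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.integers(0, 7, n),        # work habits: weekday initially closed
    rng.integers(0, 20, n),       # bug report: component id
    rng.exponential(5.0, n),      # bug fix: days taken by the initial fix
    rng.integers(0, 100, n),      # team: fixer's prior fix count
])
# Synthetic label: quick fixes by inexperienced fixers re-open more often.
y = ((X[:, 2] < 1.0) & (X[:, 3] < 20))
y = (y ^ (rng.random(n) < 0.1)).astype(int)   # add label noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision", precision_score(y_te, pred, zero_division=0))
print("recall   ", recall_score(y_te, pred, zero_division=0))
```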