3.
We present an optimization-based unsupervised approach to automatic document summarization. In the proposed approach, text summarization is modeled as a Boolean programming problem. The model attempts to optimize three properties: (1) relevance: the summary should contain informative textual units that are relevant to the user; (2) redundancy: the summary should not contain multiple textual units that convey the same information; and (3) length: the summary is bounded in length. The proposed approach is applicable to both single- and multi-document summarization. In both tasks, documents are split into sentences during preprocessing, salient sentences are selected from the document(s), and the summary is generated by threading the selected sentences in the order in which they appear in the original document(s). We implemented our model on the multi-document summarization task. Comparing our method to several existing summarization methods on the open DUC2005 and DUC2007 data sets, we found that it improves the summarization results significantly. This is because, first, when extracting summary sentences, the method considers not only the relevance scores of sentences to the whole sentence collection but also how representative each sentence is of the topic; second, when generating a summary, it also addresses the repetition of information. The methods were evaluated using the ROUGE-1, ROUGE-2 and ROUGE-SU4 metrics. We also demonstrate that the summarization result depends on the similarity measure: combining symmetric and asymmetric similarity measures yields better results than using either separately.
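The selection step described in this abstract can be illustrated with a small sketch. The greedy relaxation below is a hypothetical stand-in for the paper's Boolean programming solver: it picks sentences one by one, trading relevance against redundancy (similarity to already-chosen sentences) under a word-length budget, then threads the selection in document order. The exact objective, scores, and solver used in the paper are not reproduced here.

```python
def summarize(sentences, relevance, similarity, max_len):
    """Greedy 0/1 sentence selection: repeatedly add the sentence with
    the best marginal gain (relevance minus maximum similarity to the
    sentences already chosen) while staying within max_len words, then
    return the selection in original document order."""
    chosen = []
    total_words = 0
    candidates = list(range(len(sentences)))
    while candidates:
        def gain(i):
            redundancy = max((similarity[i][j] for j in chosen), default=0.0)
            return relevance[i] - redundancy
        best = max(candidates, key=gain)
        words = len(sentences[best].split())
        if total_words + words > max_len or gain(best) <= 0:
            break  # budget exhausted or no sentence adds information
        chosen.append(best)
        total_words += words
        candidates.remove(best)
    # thread selected sentences in the order they appear in the source
    return [sentences[i] for i in sorted(chosen)]
```

With a toy instance of three sentences, the greedy pass keeps the most relevant sentence, skips a near-duplicate of it, and admits the dissimilar one that still fits the budget.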
4.
Private information retrieval (PIR) is an important privacy-protection problem in secure multi-party computation, but PIR protocols based on classical cryptography are vulnerable to new technologies such as quantum computing and cloud computing. The quantum private queries (QPQ) protocols available, however, have high complexity and are inefficient for large databases. Building on QKD technology, which is now mature, this paper proposes a novel QPQ protocol that uses key dilution and auxiliary parameters. Only N qubits need to be sent over the quantum channel to generate the raw key; then every k consecutive bits of the raw key are added bitwise to dilute it, and the resulting final key is used to encrypt the database. By flexibly adjusting the auxiliary parameters θ and k, privacy is secured and the query success ratio is improved. Feasibility and performance analyses indicate that the protocol has a high first-trial query success ratio, is easy to implement, and achieves a communication complexity of O(N).
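The key-dilution step the abstract mentions can be sketched concretely: collapsing each run of k consecutive raw-key bits into a single final-key bit by bitwise addition modulo 2 (XOR) reduces the fraction of final-key bits any one party fully knows. This is an illustrative reading of "the straight k bits in the raw key are added bitwise"; the protocol's full parameter handling (θ, the auxiliary parameters) is not modeled.

```python
def dilute_key(raw_key, k):
    """Collapse each window of k consecutive raw-key bits into one
    final-key bit via XOR (bitwise addition modulo 2). Assumes the
    raw-key length is a multiple of k for simplicity."""
    assert len(raw_key) % k == 0, "raw key length must be divisible by k"
    final_key = []
    for i in range(0, len(raw_key), k):
        bit = 0
        for b in raw_key[i:i + k]:
            bit ^= b  # add bitwise modulo 2
        final_key.append(bit)
    return final_key
```

Larger k dilutes more aggressively (stronger database privacy, shorter final key), which is the trade-off the protocol tunes.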
5.
Based on the multi-item Food Choice Questionnaire (FCQ) originally developed by Steptoe and colleagues (1995), the current study developed a single-item FCQ that provides an acceptable balance between practical needs and psychometric concerns. Studies 1 (N = 1851) and 2 (2a: N = 3290; 2b: N = 4723; 2c: N = 270) showed that the single-item FCQ scale has good convergent and discriminant validity. Generally, the results showed the highest correlations with the related multi-item dimensions (>0.40). Study 2 refined the scale: only the items for convenience (Study 2a), sensory appeal (Study 2b) and mood (Study 2c) needed to be revised, as Study 1 showed correlations between the multi-item and single-item versions below the threshold of 0.60. The results also showed comparable predictive validity. Both methods revealed similar association patterns between food motives and consumption behaviours (Fisher's z tests revealed agreement of 86.2% for Study 1, 92.9% for Study 2a and 100% for Studies 2b and 2c). Study 3 (N = 6062) gave an example of the added value of a context-specific application of the single-item FCQ: different motives were shown to be relevant across contexts, and the context-specific motives explained additional variance beyond the general multi-item FCQ. Studies 2b and 3 also demonstrated the performance of the single-item FCQ in an international context. In sum, the results indicate that the single-item FCQ can be used as a flexible and short substitute for the multi-item FCQ. The study also discusses the conditions that should be considered when using the single-item scale.
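The Fisher's z tests cited above compare the motive-consumption correlation produced by the single-item scale with the one produced by the multi-item scale. A minimal sketch of the standard independent-samples version of that test follows; whether the study used exactly this variant is an assumption.

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """z statistic for the difference between two independent Pearson
    correlations r1 (sample size n1) and r2 (sample size n2), using
    Fisher's r-to-z transform: z_r = atanh(r), SE = sqrt(1/(n1-3) + 1/(n2-3))."""
    z1 = math.atanh(r1)
    z2 = math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se
```

A |z| below roughly 1.96 means the two correlations do not differ at the 5% level, which is the sense in which the two scale versions "agree" for a given motive.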
6.
Clip-art image segmentation is widely used as an essential step in solving many vision problems such as colorization and vectorization. Many of these applications not only demand accurate segmentation results but also have little tolerance for time cost, which is the main challenge of this kind of segmentation. Most existing segmentation techniques prove insufficient for this purpose due to either high computation cost or low accuracy. To address these issues, we propose a novel segmentation approach, ECISER, which is well suited to this context. The basic idea of ECISER is to exploit the particular nature of cartoon images and connect image segmentation with aliased rasterization. Based on this relationship, a clip-art image can be quickly segmented into regions by re-rasterization of the original image together with several other computationally efficient techniques developed in this paper. Experimental results show that our method achieves dramatic computational speedups over the current state-of-the-art approaches while preserving almost the same quality of results.
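The kind of output ECISER produces, connected regions of flat color in a cartoon image, can be illustrated with a plain connected-component labeling pass. This sketch is only the baseline notion of region segmentation; it does not reproduce the paper's re-rasterization technique or its speedups.

```python
from collections import deque

def segment_regions(image):
    """Label 4-connected components of equal pixel values in a 2D grid
    (a list of rows). Returns (label grid, number of regions)."""
    h, w = len(image), len(image[0])
    labels = [[-1] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue  # already assigned to a region
            color = image[sy][sx]
            queue = deque([(sy, sx)])
            labels[sy][sx] = count
            while queue:  # BFS flood fill over same-color neighbors
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] == -1
                            and image[ny][nx] == color):
                        labels[ny][nx] = count
                        queue.append((ny, nx))
            count += 1
    return labels, count
```

Clip art is well suited to this formulation because regions are near-constant in color; the challenge the paper targets is doing the equivalent grouping fast on anti-aliased input.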
7.
The pharmacy service requires that some pharmacies are always available, so shifts have to be organized: a shift corresponds to a subset of pharmacies that must be open 24 hours a day during a particular week. Under the requirement that each pharmacy belongs to exactly one shift and the assumption that users minimize the distance to the closest open pharmacy during each shift, we want to determine a partition of the pharmacies into a given number of shifts such that the total distance covered by users is minimized. It may also be required that shift cardinalities be balanced. We discuss different versions of the problem and their computational complexity, showing that the problem is NP-hard in general. A set packing formulation is presented and solved by branch-and-price, together with a fast solution technique based on tabu search. Both have been applied to real and random instances, showing that (i) the set packing formulation is very tight and often exhibits no integrality gap; (ii) branch-and-price solves problems of practical relevance to optimality in a reasonable amount of time (on the order of minutes); and (iii) tabu search finds optimal or near-optimal solutions in seconds.
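The objective described here is easy to state in code: for a candidate partition, each user travels to the closest open pharmacy in every shift, and these distances are summed. The brute-force search below is only for tiny illustrative instances; the paper solves realistic ones with branch-and-price and tabu search, neither of which is reproduced.

```python
from itertools import product

def partition_cost(user_dist, shifts):
    """Total distance: user_dist[u][p] is the distance from user u to
    pharmacy p; shifts is a list of disjoint pharmacy-index lists.
    During each shift, every user visits that shift's closest pharmacy."""
    total = 0.0
    for row in user_dist:
        for shift in shifts:
            total += min(row[p] for p in shift)
    return total

def best_partition(user_dist, n_pharm, n_shifts):
    """Exhaustively try every assignment of pharmacies to shifts
    (n_shifts ** n_pharm options, tiny instances only) and return
    (best cost, best shift partition)."""
    best = (float("inf"), None)
    for assign in product(range(n_shifts), repeat=n_pharm):
        shifts = [[p for p in range(n_pharm) if assign[p] == s]
                  for s in range(n_shifts)]
        if any(not s for s in shifts):
            continue  # every shift needs at least one open pharmacy
        cost = partition_cost(user_dist, shifts)
        if cost < best[0]:
            best = (cost, shifts)
    return best
```

Even this toy version makes the combinatorial structure visible: the cost of a shift is a min over its members, so moving one pharmacy between shifts changes two mins at once, which is why local-search methods such as tabu search fit the problem well.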
9.
Greenish yellow organic light-emitting diodes (GYOLEDs) have steadily attracted researchers' attention, since they are important to daily life. However, their performance significantly lags behind that of OLEDs based on the three primary colors. Herein, for the first time, an ideal host-guest system has been demonstrated that achieves high-performance phosphorescent GYOLEDs with a guest concentration as low as 2%. The GYOLED exhibits a forward-viewing power efficiency of 57.0 lm/W at 1000 cd/m2, the highest among GYOLEDs, together with extremely low efficiency roll-off and low voltages. The origin of the high performance is unveiled: the combined mechanisms of host-guest energy transfer and direct exciton formation on the guest effectively furnish the greenish yellow emission. Then, by means of this ideal host-guest system, a simplified but high-performance hybrid white OLED (WOLED) has been developed. The WOLED exhibits an ultrahigh color rendering index (CRI) of 92, a maximum total efficiency of 27.5 lm/W and a low turn-on voltage of 2.5 V (at 1 cd/m2), unlocking a novel avenue to simultaneously achieving a simplified structure, ultrahigh CRI (>90), high efficiency and low voltage.
10.
Clustering is a solution for classifying enormous amounts of data when there is no prior knowledge of classes. With the emergence of concepts like cloud computing and big data and their vast applications in recent years, research on unsupervised solutions like clustering algorithms to extract knowledge from this avalanche of data has increased. Clustering time-series data has been used in diverse scientific areas to discover patterns that empower data analysts to extract valuable information from complex and massive datasets. For huge datasets, supervised classification is almost impossible, whereas clustering can address the problem with unsupervised approaches. This review focuses on time-series data, one of the most common data types in clustering problems, used in domains ranging from gene expression data in biology to stock market analysis in finance. It exposes the four main components of time-series clustering and aims to present an updated investigation of the trends in efficiency, quality and complexity of time-series clustering approaches over the last decade and to illuminate paths for future work.
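One of the components such a review surveys is the distance measure between series. As a concrete illustration, here is a minimal dynamic time warping (DTW) distance, a measure widely used when clustering time series that may be locally shifted or stretched. DTW is an illustrative choice on my part, not a measure singled out by this particular review.

```python
def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance with
    absolute-difference local cost. dp[i][j] is the cheapest alignment
    of a[:i] with b[:j]; each step matches a pair and moves by
    insertion, deletion, or diagonal match."""
    inf = float("inf")
    n, m = len(a), len(b)
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]
```

Paired with any standard clustering algorithm (e.g. hierarchical clustering over a DTW distance matrix), this gives the basic shape-based pipeline that much of the surveyed literature refines for efficiency and quality.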