Full-text access type
Paid full text | 7090 articles |
Free | 313 articles |
Free (domestic) | 21 articles |
Subject classification
Electrical engineering | 74 articles |
General | 25 articles |
Chemical industry | 1491 articles |
Metalworking | 132 articles |
Machinery & instrumentation | 146 articles |
Building science | 376 articles |
Mining engineering | 13 articles |
Energy & power | 269 articles |
Light industry | 434 articles |
Hydraulic engineering | 67 articles |
Petroleum & natural gas | 33 articles |
Radio & electronics | 592 articles |
General industrial technology | 1402 articles |
Metallurgical industry | 898 articles |
Atomic energy technology | 50 articles |
Automation technology | 1422 articles |
Publication year
2023 | 75 articles |
2022 | 155 articles |
2021 | 246 articles |
2020 | 166 articles |
2019 | 205 articles |
2018 | 252 articles |
2017 | 180 articles |
2016 | 217 articles |
2015 | 195 articles |
2014 | 248 articles |
2013 | 445 articles |
2012 | 389 articles |
2011 | 456 articles |
2010 | 328 articles |
2009 | 328 articles |
2008 | 360 articles |
2007 | 343 articles |
2006 | 260 articles |
2005 | 236 articles |
2004 | 198 articles |
2003 | 192 articles |
2002 | 166 articles |
2001 | 101 articles |
2000 | 110 articles |
1999 | 99 articles |
1998 | 250 articles |
1997 | 167 articles |
1996 | 101 articles |
1995 | 113 articles |
1994 | 91 articles |
1993 | 74 articles |
1992 | 59 articles |
1991 | 46 articles |
1990 | 38 articles |
1989 | 34 articles |
1988 | 40 articles |
1987 | 36 articles |
1986 | 23 articles |
1985 | 33 articles |
1984 | 23 articles |
1983 | 20 articles |
1982 | 30 articles |
1981 | 33 articles |
1980 | 23 articles |
1979 | 24 articles |
1978 | 20 articles |
1977 | 29 articles |
1976 | 47 articles |
1975 | 15 articles |
1972 | 15 articles |
Sort by: 7424 results found, search time 10 ms
91.
James E. Corter, Sven K. Esche, Constantin Chassapis, Jing Ma, Jeffrey V. Nickerson 《Computers & Education》2011
A large-scale, multi-year, randomized study compared learning activities and outcomes for hands-on, remotely-operated, and simulation-based educational laboratories in an undergraduate engineering course. Students (N = 458) worked in small-group lab teams to perform two experiments involving stress on a cantilever beam. Each team conducted the experiments in one of three lab formats (hands-on, remotely-operated, or simulation-based), collecting data either individually or as a team. Lab format and data-collection mode showed an interaction, such that for the hands-on lab format learning outcomes were higher when the lab team collected data sets working as a group rather than individually collecting data sets to be combined later, while for remotely-operated labs individual data collection was best. The pattern of time spent on various lab-related activities suggests that working with real instead of simulated data may induce higher levels of motivation. The results also suggest that learning with computer-mediated technologies can be improved by careful design and coordination of group and individual activities.
92.
Sven Groppe, Jinghua Groppe, Stefan Böttcher, Thomas Wycisk, Le Gruenwald 《Knowledge and Information Systems》2009,18(3):331-391
We have to deal with different data formats whenever data formats evolve or data must be integrated from heterogeneous systems. When such data are exchanged in XML, they cannot be shared freely among applications without transformation. A common approach is to convert the entire XML document from its source format to each application's target format using the transformation rules specified in XSLT stylesheets. In many cases, however, only a small part of the XML data, described by a user's query (application), actually needs to be transformed. In this paper, we present an approach that optimizes the execution time of an XSLT stylesheet for answering a given XPath query by modifying the stylesheet so that it (a) captures only the parts of the XML data that are relevant to the query and (b) processes only those XSLT instructions that are relevant to the query. We prove the correctness of our optimization approach, analyze its complexity, and present experimental results. The experiments show that our approach achieves the best execution times, especially when many cost-intensive XSLT instructions can be excluded from the stylesheet.
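The query-driven restriction described above, transforming only the query-relevant fragment instead of the whole document, can be sketched in plain Python. The sample document, the pruning predicate, and the `transform` function below are illustrative stand-ins, not the paper's XSLT machinery:

```python
import xml.etree.ElementTree as ET

DOC = """<library>
  <book genre="db"><title>XML Basics</title><price>30</price></book>
  <book genre="ai"><title>Deep Learning</title><price>50</price></book>
  <book genre="db"><title>XPath in Practice</title><price>25</price></book>
</library>"""

def prune_to_query(root, tag, attr, value):
    # Drop elements the query cannot touch before running the
    # (potentially expensive) transformation on the remainder.
    for child in list(root):
        if child.tag == tag and child.get(attr) != value:
            root.remove(child)
    return root

def transform(root):
    # Stand-in for an XSLT transformation: emit (title, price) pairs.
    return [(b.findtext("title"), int(b.findtext("price")))
            for b in root.iter("book")]

# Answer a query like //book[@genre='db'] on the pruned tree only.
root = prune_to_query(ET.fromstring(DOC), "book", "genre", "db")
print(transform(root))
```

The saving comes from skipping the transformation work for pruned subtrees, which mirrors the paper's motivation of excluding cost-intensive instructions that the query cannot observe.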
93.
One approach to limiting disclosure risk in public-use microdata is to release multiply-imputed, partially synthetic data sets. These are data on actual respondents, but with confidential data replaced by multiply-imputed synthetic values. A mis-specified imputation model can invalidate inferences based on the partially synthetic data, because the imputation model determines the distribution of synthetic values. We present a practical method to generate synthetic values when the imputer has only limited information about the true data generating process. We combine a simple imputation model (such as regression) with density-based transformations that preserve the distribution of the confidential data, up to sampling error, on specified subdomains. We demonstrate through simulations and a large scale application that our approach preserves important statistical properties of the confidential data, including higher moments, with low disclosure risk.
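A minimal sketch of the density-based transformation idea: a deliberately crude imputation model produces draws, which are then mapped through the empirical quantiles of the confidential values so the synthetic marginal matches the confidential one. All values below are simulated, and the Gaussian "model" is a stand-in for the regression-based imputers the abstract mentions:

```python
import random
import statistics

random.seed(0)

# Hypothetical confidential values (e.g., incomes) to be replaced.
confidential = sorted(random.lognormvariate(10, 0.5) for _ in range(200))

# Step 1: a deliberately simple imputation model, plain Gaussian noise
# around the mean, standing in for a regression-based imputer.
mu, sigma = statistics.mean(confidential), statistics.stdev(confidential)
raw_synthetic = [random.gauss(mu, sigma) for _ in range(200)]

# Step 2: density-based transformation. Replace each synthetic draw by
# the confidential value at the same rank, so the synthetic marginal
# matches the confidential one (here exactly, as a multiset).
ranks = sorted(range(len(raw_synthetic)), key=lambda i: raw_synthetic[i])
synthetic = [0.0] * len(raw_synthetic)
for quantile, i in enumerate(ranks):
    synthetic[i] = confidential[quantile]
```

Only the ranks of the synthetic records come from the imputation model; the value distribution, including higher moments, is inherited from the confidential data, which is the property the abstract emphasizes.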
94.
André Twele, Wenxi Cao, Simon Plank, Sandro Martinis 《International journal of remote sensing》2016,37(13):2990-3004
This article presents an automated Sentinel-1-based processing chain designed for flood detection and monitoring in near-real-time (NRT). Since no user intervention is required at any stage of the flood mapping procedure, the processing chain allows time-critical disaster information to be derived in less than 45 min after a new data set becomes available on the Sentinel Data Hub of the European Space Agency (ESA). Due to the systematic acquisition strategy and high repetition rate of Sentinel-1, the processing chain can be set up as a web-based service that regularly informs users about the current flood conditions in a given area of interest. The thematic accuracy of the processor has been assessed for two test sites of a flood situation at the border between Greece and Turkey, with encouraging overall accuracies between 94.0% and 96.1% and Cohen's kappa coefficients (κ) ranging from 0.879 to 0.910. The accuracy assessment, which was performed separately for the standard polarizations (VV/VH) of the interferometric wide swath (IW) mode of Sentinel-1, further indicates that under calm wind conditions, slightly higher thematic accuracies can be achieved by using VV instead of VH polarization data.
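The two accuracy measures reported above are standard and easy to compute from per-pixel reference and predicted labels. The following sketch does so for hypothetical water/non-water labels (the label vectors are made up for illustration, not taken from the study):

```python
def accuracy_and_kappa(reference, predicted):
    """Overall accuracy and Cohen's kappa for two label sequences."""
    n = len(reference)
    labels = sorted(set(reference) | set(predicted))
    observed = sum(r == p for r, p in zip(reference, predicted)) / n
    # Chance agreement from the marginal class frequencies.
    expected = sum(
        (reference.count(c) / n) * (predicted.count(c) / n) for c in labels
    )
    return observed, (observed - expected) / (1 - expected)

# Hypothetical per-pixel labels: 1 = water, 0 = non-water.
ref = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
pred = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]
acc, kappa = accuracy_and_kappa(ref, pred)
print(f"accuracy={acc:.2f}, kappa={kappa:.3f}")
```

Kappa discounts the agreement expected by chance, which is why it is the more conservative of the two figures quoted in the abstract.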
95.
96.
Simon Andrews, Helen Gibson, Konstantinos Domdouzis, Babak Akhgar 《Journal of Intelligent Information Systems》2016,47(2):287-312
During a crisis, citizens reach for their smartphones to report, comment on, and explore information surrounding the crisis. These actions often involve social media, and this data forms a large repository of real-time, crisis-related information. Law enforcement agencies and other first responders see this information as having untapped potential: it has the capacity to extend their situational awareness beyond the scope of a usual command and control centre. Despite this potential, the sheer volume of social media data, the speed at which it arrives, and its unstructured nature mean that making sense of this data is not a trivial task, and one that is not yet satisfactorily solved, both in crisis management and beyond. We therefore propose a multi-stage process to extract meaning from this data that will provide relevant and near real-time information to command and control to assist in decision support. This process begins with the capture of real-time social media data, the development of specific LEA- and crisis-focused taxonomies for categorisation and entity extraction, the application of formal concept analysis for aggregation and corroboration, and the presentation of this data via map-based and other visualisations. We demonstrate that this novel use of formal concept analysis in combination with context-based entity extraction has the potential to inform law enforcement and/or humanitarian responders about ongoing crisis events using social media data, in the context of the 2015 Nepal earthquake.
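The core derivation operators of formal concept analysis, the technique named above for aggregation and corroboration, can be sketched on a toy context of posts and extracted entities. The context below is hypothetical and far smaller than real crisis data:

```python
# Toy formal context: objects (posts) and their extracted attributes.
context = {
    "post1": {"earthquake", "kathmandu", "rescue"},
    "post2": {"earthquake", "kathmandu"},
    "post3": {"earthquake", "aid"},
}

def common_attributes(objects):
    """Derivation: attributes shared by every object in the set."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def matching_objects(attributes):
    """Derivation: objects possessing every attribute in the set."""
    return {o for o, attrs in context.items() if attributes <= attrs}

# A formal concept is a pair (extent, intent) closed under both maps.
intent = common_attributes({"post1", "post2"})
extent = matching_objects(intent)
print(extent, intent)
```

Each concept groups posts that corroborate the same combination of entities, which is how the aggregation step can surface, say, all posts mentioning both the earthquake and a specific location.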
97.
At the very core of most automated sorting systems (for example, at airports for baggage handling and in parcel distribution centers for sorting mail) we find closed-loop tilt tray sortation conveyors. In such a system, trays are loaded with cargo as they pass through loading stations, and are later tilted upon reaching the outbound container dedicated to a shipment's destination. This paper addresses the question of whether the simple decision rules typically applied in the real world when deciding which parcel should be loaded onto what tray are, indeed, a good choice. We formulate a short-term deterministic scheduling problem in which a finite set of shipments must be loaded onto trays such that the makespan is minimized. We consider different levels of flexibility in how shipments may be arranged on the feeding conveyors, and distinguish between unidirectional and bidirectional systems. In a comprehensive computational study, we compare sophisticated optimization procedures for this problem with widespread rules of thumb, and find that the latter perform surprisingly well. For almost all problem settings, some priority rule can be identified that leads to a low-single-digit optimality gap. In addition, we systematically evaluate the performance gains promised by different sorter layouts.
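The flavor of such priority rules can be illustrated with plain list scheduling: a priority key orders the shipments before greedy assignment to the earliest-free loading station. This is a simplified stand-in for the paper's tray-assignment model, with made-up loading times:

```python
import heapq

def makespan(load_times, stations, priority=None):
    """Greedy list scheduling: each shipment goes to the station that
    becomes free earliest; `priority` orders the shipment list."""
    order = sorted(load_times, key=priority) if priority else list(load_times)
    free = [0] * stations  # time at which each station becomes free
    heapq.heapify(free)
    for t in order:
        heapq.heappush(free, heapq.heappop(free) + t)
    return max(free)

# Made-up loading times for six shipments and two loading stations.
jobs = [4, 1, 1, 1, 1, 4]
fifo = makespan(jobs, stations=2)                        # arrival order
lpt = makespan(jobs, stations=2, priority=lambda t: -t)  # longest first
print(fifo, lpt)
```

Even in this tiny example the choice of priority rule changes the makespan, which is the kind of gap the paper's computational study measures against exact optimization.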
98.
Yunji Jung, Yulong Xi, Seoungjae Cho, Wei Song, Simon Fong, Kyungeun Cho 《Multimedia Tools and Applications》2017,76(9):11429-11447
The objective of this study is to solve the problem of user data not being precisely received from sensors because of sensing-region limitations in invoked reality (IR) space, distortion of colors or patterns by lighting, and blocking or overlapping of a user by other users. The sensing range is therefore expanded using multiple sensors in the IR space, and user feature data are accurately identified through user sensing. Specifically, multiple sensors are employed when not all user data can be sensed because a user overlaps with other users. In the proposed approach, all clients share the user feature data from the multiple sensors, so each client can recognize that the user is the same individual on the basis of the shared data. Furthermore, identification accuracy is improved by identifying user features based on colors and patterns that are less affected by lighting, enabling accurate identification of user feature data even under lighting changes. The proposed system was implemented based on system performance analysis standards, and its practicality and performance in identifying the same person were verified through an experiment.
99.
For the efficient analysis and optimization of flexible multibody systems, gradient information is often required. Next to simple and easy-to-implement finite difference approaches, analytical methods, such as the adjoint variable method, have been developed and are now well established for sensitivity analysis in multibody dynamics. They allow the computation of exact gradients and normally require less computational effort for large-scale problems. In the current work, we apply the adjoint variable method to flexible multibody systems with kinematic loops, which are modeled using the floating frame of reference formulation. Thereby, in order to solve ordinary differential equations only, the equations of motion are brought into minimal form using coordinate partitioning, and the constraint equations at position and velocity level are incorporated in the adjoint dynamics. For testing and illustrative purposes, the procedure is applied to compute the structural gradient for a flexible piston rod of a slider–crank mechanism.
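As a point of contrast with the adjoint approach, the finite-difference baseline mentioned above can be sketched for a toy scalar cost. The cost function is hypothetical, not the slider–crank model, and the analytic derivative plays the role of an exact (adjoint) gradient:

```python
def cost(p):
    # Hypothetical scalar cost depending on one design parameter p.
    return p**3 - 2.0 * p

def grad_exact(p):
    # Analytic derivative, playing the role of an exact adjoint gradient.
    return 3.0 * p**2 - 2.0

def grad_fd(f, p, h=1e-6):
    # Central finite difference: easy to implement, but it needs two
    # cost evaluations per parameter, which is what motivates adjoint
    # methods for systems with many design parameters.
    return (f(p + h) - f(p - h)) / (2.0 * h)

p = 1.5
print(grad_exact(p), grad_fd(cost, p))
```

For a full multibody simulation, each of those cost evaluations is a complete forward solve, so the per-parameter cost of finite differences grows quickly while the adjoint method needs only one extra (adjoint) solve.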
100.
Yongrui Qin, Quan Z. Sheng, Nickolas J. G. Falkner, Lina Yao, Simon Parkinson 《World Wide Web》2017,20(5):915-937
Since today’s real-world graphs, such as social network graphs, are evolving all the time, it is of great importance to perform graph computations and analysis on these dynamic graphs. Because many applications, such as social network link analysis in the presence of inactive users, need to handle failed links or nodes, decremental computation and maintenance for graphs is considered a challenging problem. Shortest path computation is one of the most fundamental operations for managing and analyzing large graphs. A number of indexing methods have been proposed to answer distance queries in static graphs. Unfortunately, there is little work on answering such queries for dynamic graphs. In this paper, we focus on the problem of computing the shortest path distance in dynamic graphs, particularly under decremental updates (i.e., edge deletions). We propose maintenance algorithms based on distance labeling, which can handle decremental updates efficiently. By exploiting properties of distance labeling in the original graphs, we are able to efficiently maintain distance labeling for the new graphs. We experimentally evaluate our algorithms using eleven large real-world graphs and confirm the effectiveness and efficiency of our approach. More specifically, our method can speed up index re-computation by up to an order of magnitude compared with the state-of-the-art method, Pruned Landmark Labeling (PLL).
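Distance queries over 2-hop distance labels, the index structure such maintenance algorithms update, work by minimizing over shared hubs. The labels below are hand-made for a small illustrative graph, not produced by PLL:

```python
# Hand-made 2-hop labels: each node stores distances to a few hub
# nodes, chosen so every node pair shares a hub on a shortest path.
labels = {
    "a": {"a": 0, "c": 2},
    "b": {"b": 0, "c": 1},
    "c": {"c": 0},
    "d": {"c": 2, "d": 0},
}

def query(u, v):
    """Shortest-path distance via the best common hub (inf if none)."""
    common = labels[u].keys() & labels[v].keys()
    return min((labels[u][h] + labels[v][h] for h in common),
               default=float("inf"))

print(query("a", "b"), query("a", "d"))
```

Decremental maintenance then amounts to repairing only the label entries whose underlying shortest paths used a deleted edge, rather than rebuilding the whole index; the paper's algorithms make this repair efficient.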