Full-Text Access Type
| Type | Articles |
| --- | --- |
| Paid full text | 9,116 |
| Free | 309 |
| Free (domestic) | 6 |
Subject Category
| Category | Articles |
| --- | --- |
| Electrical engineering | 89 |
| General | 12 |
| Chemical industry | 1,837 |
| Metalworking | 135 |
| Machinery & instruments | 167 |
| Building science | 540 |
| Mining engineering | 18 |
| Energy & power | 218 |
| Light industry | 687 |
| Hydraulic engineering | 74 |
| Petroleum & natural gas | 27 |
| Weapons industry | 1 |
| Radio & electronics | 524 |
| General industrial technology | 1,685 |
| Metallurgy | 1,928 |
| Atomic energy technology | 126 |
| Automation technology | 1,363 |
Publication Year
| Year | Articles |
| --- | --- |
| 2023 | 47 |
| 2022 | 100 |
| 2021 | 145 |
| 2020 | 109 |
| 2019 | 135 |
| 2018 | 179 |
| 2017 | 173 |
| 2016 | 196 |
| 2015 | 160 |
| 2014 | 243 |
| 2013 | 612 |
| 2012 | 388 |
| 2011 | 527 |
| 2010 | 364 |
| 2009 | 408 |
| 2008 | 430 |
| 2007 | 454 |
| 2006 | 432 |
| 2005 | 361 |
| 2004 | 310 |
| 2003 | 294 |
| 2002 | 247 |
| 2001 | 171 |
| 2000 | 174 |
| 1999 | 179 |
| 1998 | 161 |
| 1997 | 150 |
| 1996 | 154 |
| 1995 | 166 |
| 1994 | 133 |
| 1993 | 128 |
| 1992 | 138 |
| 1991 | 84 |
| 1990 | 90 |
| 1989 | 109 |
| 1988 | 103 |
| 1987 | 101 |
| 1986 | 98 |
| 1985 | 104 |
| 1984 | 103 |
| 1983 | 97 |
| 1982 | 62 |
| 1981 | 76 |
| 1980 | 60 |
| 1979 | 57 |
| 1978 | 76 |
| 1977 | 52 |
| 1976 | 48 |
| 1975 | 46 |
| 1974 | 43 |
Sort order: 9,431 results found (search time: 12 ms)
81.
82.
Bahador Saket, Carlos Scheidegger, Stephen G. Kobourov, Katy Börner. Computer Graphics Forum, 2015, 34(3): 441-450
We investigate the memorability of data represented in two different visualization designs. In contrast to recent studies that examine which types of visual information make visualizations memorable, we examine the effect of different visualizations on time and accuracy of recall of the displayed data, minutes and days after interaction with the visualizations. In particular, we describe the results of an evaluation comparing the memorability of two different visualizations of the same relational data: node-link diagrams and map-based visualization. We find significant differences in the accuracy of the tasks performed, and these differences persist days after the original exposure to the visualizations. Specifically, participants in the study recalled the data better when exposed to map-based visualizations as opposed to node-link diagrams. We discuss the scope of the study and its limitations, possible implications, and future directions.
83.
Kevin Curran, Michelle Murray, David Stephen Norrby, Martin Christian. New Review of Information Networking, 2013, 18(1-2): 47-59
Libraries, as we know them today, can be defined by the term Library 1.0. This describes the way resources are kept on shelves or on a computer behind a login. These resources can be taken from a shelf, checked out by the library staff, taken home for a certain length of time and absorbed, and then returned to the library for someone else to use. Library 1.0 is a one-directional service that takes people to the information they require. Library 2.0 – or L2, as it is now more commonly known – aims to take the information to the people by bringing the library service to the Internet and getting users more involved by encouraging feedback and participation. This paper presents an overview of Library 2.0 and introduces Web 2.0 concepts.
84.
Stephen J. Walsh, Amy L. McCleary, Carlos F. Mena, Yang Shao, Julie P. Tuttle, Augusto González, Rachel Atkinson. Remote Sensing of Environment, 2008, 112(5): 291-1941
In the Galapagos Islands of Ecuador, one of the greatest threats to the terrestrial ecosystem is the increasing number and areal extent of invasive species. Increased human presence on the islands has hastened the introduction of plant and animal species that threaten the native and endemic flora and fauna. Considerable research on invasive species in the Galapagos Islands has been conducted by the Charles Darwin Foundation. We complement that work through a spatially- and spectrally-explicit satellite assessment of an important invasive plant species (Psidium guajava — guava) on Isabela Island that integrates diverse remote sensing systems, data types, spatial and spectral resolutions, and analytical and image processing approaches. QuickBird and Hyperion satellite data are processed to characterize the areal extent and spatial structure of guava through the following approaches: (1) QuickBird data are classified through a traditional pixel-based approach (i.e., an unsupervised classification approach using the ISODATA algorithm), as well as an Object-Based Image Analysis (OBIA) approach; (2) multiple approaches for spectral “unmixing” of the Hyperion hyper-spectral data are assessed to construct spectral end-members from QuickBird data using linear and non-linear mixture modeling approaches; and (3) landscape pattern metrics are calculated and compared for the pixel-based, object-based, and spectral unmixing approaches. The spectral–spatial characteristics of guava are interpreted relative to management strategies for the control of guava and the restoration of natural ecosystems in the Galapagos National Park.
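The linear mixture modelling mentioned in this abstract can be sketched in a few lines: each pixel's spectrum is treated as a weighted sum of endmember spectra, and the abundance fractions are recovered by least squares. The endmember values below are invented for illustration; the actual workflow derives endmembers from QuickBird data and also evaluates non-linear models and constrained solvers.

```python
import numpy as np

def unmix_pixel(pixel, endmembers):
    """Estimate endmember abundance fractions for one pixel via
    unconstrained linear least squares (the simplest form of linear
    mixture modelling; real workflows add sum-to-one and
    non-negativity constraints)."""
    # endmembers: (n_bands, n_endmembers); pixel: (n_bands,)
    fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return fractions

# Hypothetical endmember spectra for guava, bare ground, and shadow
# (4 spectral bands; illustrative values only).
E = np.array([[0.40, 0.10, 0.05],
              [0.45, 0.12, 0.04],
              [0.30, 0.15, 0.03],
              [0.60, 0.20, 0.02]])

true_fracs = np.array([0.6, 0.3, 0.1])
mixed = E @ true_fracs                 # a perfectly mixed pixel
est = unmix_pixel(mixed, E)
print(np.round(est, 3))                # recovers the fractions exactly here
```

With noise-free, perfectly linear mixing the fractions are recovered exactly; real hyperspectral pixels require the constrained variants noted in the comments.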
85.
Eric M. Nielsen, Stephen D. Prince, Gregory T. Koeln. Remote Sensing of Environment, 2008, 112(11): 4061-4074
Although the impacts of wetland loss are often felt at regional scales, effective planning and management require a comparative assessment of local needs, costs, and benefits. Satellite remote sensing can provide spatially explicit, synoptic land cover change information to support such an assessment. However, a common challenge in conventional remote sensing change detection is the difficulty of obtaining phenologically and radiometrically comparable data from the start and end of the time period of interest. An alternative approach is to use a prior land cover classification as a surrogate for historic satellite data and to examine the self-consistency of class spectral reflectances in recent imagery. We produced a 30-meter resolution wetland change probability map for the U.S. mid-Atlantic region by applying an outlier detection technique to a base classification provided by the National Wetlands Inventory (NWI). Outlier-resistant measures – the median and median absolute deviation – were used to represent spectral reflectance characteristics of wetland class populations, and formed the basis for the calculation of a pixel change likelihood index. The individual scene index values were merged into a consistent region-wide map and converted to pixel change probability using a logistic regression calibrated through interpretation of historic and recent aerial photography. The accuracy of a regional change/no-change map produced from the change probabilities was estimated at 89.6%, with a Kappa of 0.779. The change probabilities identify areas for closer inspection of change cause, impact, and mitigation potential. With additional work to resolve confusion resulting from natural spatial heterogeneity and variations in land use, automated updating of NWI maps and estimates of areal rates of wetland change may be possible. We also discuss extensions of the technique to address specific applications such as monitoring marsh degradation due to sea level rise and mapping of invasive species.
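The outlier-resistant core of this approach — scoring each pixel by its distance from its class's median reflectance in units of the median absolute deviation — can be sketched as follows. The reflectance values are invented; the actual method works per band across Landsat imagery and feeds the index into a calibrated logistic regression.

```python
import numpy as np

def change_index(reflectance, class_median, class_mad):
    """Outlier-resistant change likelihood: how many median absolute
    deviations a pixel's reflectance lies from its mapped class's
    median (a sketch of the self-consistency idea; band weighting and
    the logistic calibration step are omitted)."""
    return np.abs(reflectance - class_median) / class_mad

# Single-band reflectance for pixels all mapped to one NWI class;
# one pixel has clearly changed since the base classification.
pixels = np.array([0.21, 0.22, 0.20, 0.23, 0.21, 0.55, 0.22])
med = np.median(pixels)
mad = np.median(np.abs(pixels - med))
idx = change_index(pixels, med, mad)
print(idx)  # the 0.55 pixel scores far higher than the rest
```

Because the median and MAD ignore the outlier itself, a changed pixel inflates its own index rather than the class statistics, which is why these measures are preferred over the mean and standard deviation here.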
86.
Radiometric normalization and image mosaic generation of ASTER thermal infrared data: An application to extensive sand sheets and dune fields (Total citations: 1; self-citations: 0; by others: 1)
Data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) have a significant advantage over previous datasets because of the combination of high spatial resolution (15-90 m) and enhanced multispectral capabilities, particularly in the thermal infrared (TIR) atmospheric window (8-12 μm) of the Earth where common silicate minerals are more easily identified. However, the 60 km swath width of ASTER can limit the effectiveness of accurately tracing large-scale features, such as eolian sediment transport pathways, over long distances. The primary goal of this paper is to describe a method for generating a seamless and radiometrically accurate ASTER TIR mosaic of atmospherically corrected radiance and from that, extract surface emissivity for arid lands, specifically, sand seas. The Gran Desierto in northern Sonora, Mexico was used as a test location for the radiometric normalization technique because of past remote sensing studies of the region, its compositional diversity, and its size. A linear approach was taken to transform adjacent image swaths into a direct linear relationship between image acquisition dates. Pseudo-invariant features (PIFs) were selected using a threshold of correlation between radiance values, and change-pixels were excluded from the linear regression used to determine correction factors. The degree of spectral correlation between overlapping pixels is directly related to the amount of surface change over time; therefore, the gain and offsets between scenes were based only on regions of high spectral correlation. The result was a series of radiometrically normalized radiance-at-surface images that were combined with a minimum of image edge seams present. These edges were subsequently blended to create the final mosaic. The advantages of this approach for TIR radiance (as opposed to emissivity) data include the ability to: (1) analyze data acquired on different dates (with potentially very different surface temperatures) as one seamless compositional dataset; (2) perform decorrelation stretches (DCS) on the entire dataset in order to identify and discriminate compositional units; and (3) separate brightness temperature from surface emissivity for quantitative compositional analysis of the surface, reducing seam-line error in the emissivity mosaic. The approach presented here is valid for any ASTER-related study of large geographic regions where numerous images spanning different temporal and atmospheric conditions are encountered.
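The pseudo-invariant-feature normalization described above reduces to fitting a gain and offset between overlapping scenes using only unchanged pixels. This sketch uses synthetic radiance values and a known change mask in place of the paper's spectral-correlation threshold for selecting PIFs.

```python
import numpy as np

def fit_gain_offset(ref, target, pif_mask):
    """Fit the linear relationship ref ≈ gain * target + offset using
    only pseudo-invariant pixels (a sketch; the actual method selects
    PIFs by thresholding spectral correlation between dates)."""
    gain, offset = np.polyfit(target[pif_mask], ref[pif_mask], 1)
    return gain, offset

rng = np.random.default_rng(0)
ref = rng.uniform(8.0, 12.0, 500)          # overlap radiance, scene A
target = (ref - 1.5) / 1.2                 # scene B: shifted calibration
changed = rng.random(500) < 0.1            # 10% of pixels changed on the ground
target[changed] += rng.uniform(2, 4, changed.sum())

pifs = ~changed                            # regress on invariant pixels only
gain, offset = fit_gain_offset(ref, target, pifs)
normalized = gain * target + offset        # scene B on scene A's radiometric scale
print(round(gain, 2), round(offset, 2))    # recovers gain 1.2, offset 1.5
```

Excluding the change pixels from the regression is the whole point: including them would bias the gain and offset toward the changed surfaces rather than the stable ones.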
87.
Georgia Frantzeskou, Stephen MacDonell. Journal of Systems and Software, 2008, 81(3): 447-460
The use of Source Code Author Profiles (SCAP) represents a new, highly accurate approach to source code authorship identification that is, unlike previous methods, language independent. While accuracy is clearly a crucial requirement of any author identification method, in cases of litigation regarding authorship, plagiarism, and so on, there is also a need to know why it is claimed that a piece of code is written by a particular author. What is it about that piece of code that suggests a particular author? What features in the code make one author more likely than another? In this study, we describe a means of identifying the high-level features that contribute to source code authorship identification using as a tool the SCAP method. A variety of features are considered for Java and Common Lisp and the importance of each feature in determining authorship is measured through a sequence of experiments in which we remove one feature at a time. The results show that, for these programs, comments, layout features and package-related naming influence classification accuracy whereas user-defined naming, an obvious programmer related feature, does not appear to influence accuracy. A comparison is also made between the relative feature contributions in programs written in the two languages.
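The remove-one-feature-at-a-time design can be sketched generically: score the full feature set, then re-score with each feature withheld and record the drop. The scorer below is a toy stand-in (the real study retrains the SCAP classifier on each reduced feature set), and the weights are invented to mirror the reported finding.

```python
def ablation_impact(features, accuracy_fn):
    """Measure each feature's contribution by dropping it and
    re-scoring; accuracy_fn is a hypothetical callable standing in
    for retraining and re-evaluating the classifier."""
    base = accuracy_fn(features)
    return {f: base - accuracy_fn([g for g in features if g != f])
            for f in features}

# Toy stand-in weights: pretend comments and layout carry most signal,
# user-defined naming none (as the abstract reports).
WEIGHTS = {"comments": 0.15, "layout": 0.10,
           "package_naming": 0.05, "user_naming": 0.0}

def toy_accuracy(feature_set):
    return 0.70 + sum(WEIGHTS[f] for f in feature_set)

impact = ablation_impact(list(WEIGHTS), toy_accuracy)
print(impact)  # user_naming contributes ~0 accuracy in this toy setup
```

The dictionary of drops directly answers the abstract's question of which features make one author more likely than another: a near-zero drop means the feature was not informative.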
88.
Multilayer hybrid visualizations to support 3D GIS (Total citations: 3; self-citations: 0; by others: 3)
In this paper, we present a unique hybrid visualization system for spatial data. Although some existing 3D GIS systems offer 2D views they are typically isolated from the 3D view in that they are presented in a separate window. Our system is a novel hybrid 2D/3D approach that seamlessly integrates 2D and 3D views of the same data. In our interface, multiple layers of information are continuously transformed between the 2D and 3D modes under the control of the user, directly over a base terrain. In this way, our prototype system is able to depict 2D and 3D views within the same window. This has advantages, since 2D and 3D visualizations can each be easier to interpret in different contexts. In this work we develop this concept of a hybrid visualization by presenting a comprehensive set of capabilities within our distinctive system. These include new facilities such as: hybrid landmark, 3D point, and chart layers, the grouping of multiple hybrid layers, layer painting, the merging of layer controls and consistent zooming functionality.
89.
Stephen Smith, David Petty, David Trustrum, Ashraf Labib, Ali Khan. Robotics and Computer-Integrated Manufacturing, 2008, 24(4): 579-584
During the late 1990s and early 2000s, the profile of global manufacturing experienced many changes. There is anecdotal evidence that many western manufacturing companies have chosen to expand their manufacturing base across geographical boundaries. The common reasons cited for these ventures are to exploit less expensive labour markets, to establish a presence in expanding markets, and to respond to the threat of new competition. Whilst a global manufacturing base can prove to have many cost and sales benefits, there are also many disadvantages. Logistics operations can often increase in complexity, leading to higher reliance on planning and effective interpretation of demand data. In response, systems modelling has re-emerged as a fertile research area after many years. Many modelling and simulation techniques have been developed, but these have had very limited practical success. The authors have identified that the majority of these simulation techniques rely upon a detailed market structure being known, when this is rarely the case. This paper describes the outcome of a research project to develop a pragmatic set of tools to gather, assess and verify supply chain structure data. A hybrid collection of technologies is utilised to assist these operations and to build a dynamic supply network model.
90.
Most search techniques within ILP require the evaluation of a large number of inconsistent clauses. However, acceptable clauses typically need to be consistent, and are only found at the “fringe” of the search space. A search approach is presented, based on a novel algorithm called QG (Quick Generalization). QG carries out a random-restart stochastic bottom-up search which efficiently generates a consistent clause on the fringe of the refinement graph search without needing to explore the graph in detail. We use a Genetic Algorithm (GA) to evolve and re-combine clauses generated by QG. In this QG/GA setting, QG is used to seed a population of clauses processed by the GA. Experiments with QG/GA indicate that this approach can be more efficient than standard refinement-graph searches, while generating similar or better solutions.

Editors: Ramon Otero, Simon Colton.
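The QG/GA division of labour — a cheap stochastic search seeding a genetic algorithm that then evolves and recombines the candidates — can be illustrated on a toy bitstring problem. Everything here (the target string, the hill-climbing seeder, the GA parameters) is invented for illustration; QG proper operates on clauses in a refinement graph, not bitstrings.

```python
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]      # stands in for an ideal "clause"

def fitness(c):
    return sum(a == b for a, b in zip(c, TARGET))

def qg_seed():
    """Stand-in for Quick Generalization: a cheap stochastic search
    returning an individual already better than random (a random
    candidate improved by a few accepted bit flips)."""
    c = [random.randint(0, 1) for _ in TARGET]
    for _ in range(3):
        i = random.randrange(len(c))
        flipped = c[:i] + [1 - c[i]] + c[i + 1:]
        if fitness(flipped) > fitness(c):
            c = flipped
    return c

def ga(pop, generations=30):
    """Elitist GA: keep the top half, refill with one-point crossover
    children and occasional mutation."""
    for _ in range(generations):
        pop = sorted(pop, key=fitness, reverse=True)
        parents = pop[:len(pop) // 2]
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.2:          # mutation
                j = random.randrange(len(TARGET))
                child[j] = 1 - child[j]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

seeds = [qg_seed() for _ in range(20)]
best = ga(seeds)
print(fitness(best))   # best fitness found (maximum possible is 8)
```

Because the GA is elitist, the seeded population's best individual is never lost, so the search can only improve on what the seeding step provides — mirroring the claim that QG/GA matches or betters standard refinement-graph search.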