Full-text availability (number of articles)
Paid full text | 10,163 |
Free | 370 |
Free (domestic) | 7 |
Subject classification (number of articles)
Electrical engineering | 96 |
General | 13 |
Chemical industry | 2,023 |
Metalworking | 141 |
Machinery and instruments | 178 |
Building science | 579 |
Mining engineering | 20 |
Energy and power | 232 |
Light industry | 753 |
Hydraulic engineering | 79 |
Petroleum and natural gas | 27 |
Weapons industry | 1 |
Radio and electronics | 613 |
General industrial technology | 1,832 |
Metallurgical industry | 2,405 |
Atomic energy | 132 |
Automation technology | 1,416 |
Publication year (number of articles)
2023 | 46 |
2022 | 81 |
2021 | 149 |
2020 | 115 |
2019 | 142 |
2018 | 184 |
2017 | 181 |
2016 | 203 |
2015 | 166 |
2014 | 252 |
2013 | 657 |
2012 | 401 |
2011 | 552 |
2010 | 388 |
2009 | 430 |
2008 | 448 |
2007 | 475 |
2006 | 461 |
2005 | 377 |
2004 | 333 |
2003 | 315 |
2002 | 264 |
2001 | 186 |
2000 | 195 |
1999 | 201 |
1998 | 269 |
1997 | 223 |
1996 | 199 |
1995 | 205 |
1994 | 170 |
1993 | 163 |
1992 | 156 |
1991 | 111 |
1990 | 117 |
1989 | 130 |
1988 | 128 |
1987 | 120 |
1986 | 118 |
1985 | 126 |
1984 | 117 |
1983 | 114 |
1982 | 85 |
1981 | 101 |
1980 | 72 |
1979 | 69 |
1978 | 89 |
1977 | 68 |
1976 | 66 |
1975 | 61 |
1974 | 49 |
Sort order: 10,000 results returned; search took 15 ms.
191.
Stephen J. Walsh, Amy L. McCleary, Carlos F. Mena, Yang Shao, Julie P. Tuttle, Augusto González, Rachel Atkinson 《Remote sensing of environment》2008,112(5):291-1941
In the Galapagos Islands of Ecuador, one of the greatest threats to the terrestrial ecosystem is the increasing number and areal extent of invasive species. Increased human presence on the islands has hastened the introduction of plant and animal species that threaten the native and endemic flora and fauna. Considerable research on invasive species in the Galapagos Islands has been conducted by the Charles Darwin Foundation. We complement that work through a spatially- and spectrally-explicit satellite assessment of an important invasive plant species (Psidium guajava — guava) on Isabela Island that integrates diverse remote sensing systems, data types, spatial and spectral resolutions, and analytical and image processing approaches. QuickBird and Hyperion satellite data are processed to characterize the areal extent and spatial structure of guava through the following approaches: (1) QuickBird data are classified through a traditional pixel-based approach (i.e., an unsupervised classification approach using the ISODATA algorithm), as well as an Object-Based Image Analysis (OBIA) approach; (2) multiple approaches for spectral “unmixing” of the Hyperion hyper-spectral data are assessed to construct spectral end-members from QuickBird data using linear and non-linear mixture modeling approaches; and (3) landscape pattern metrics are calculated and compared for the pixel-based, object-based, and spectral unmixing approaches. The spectral–spatial characteristics of guava are interpreted relative to management strategies for the control of guava and the restoration of natural ecosystems in the Galapagos National Park.
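The linear mixture modeling mentioned in step (2) treats each pixel spectrum as a weighted sum of end-member spectra and recovers the weights (abundance fractions). The sketch below is illustrative only: the end-member spectra, band count, and pixel values are invented, and it uses a plain least-squares solve where operational unmixing would typically add non-negativity and sum-to-one constraints.

```python
import numpy as np

# Hypothetical end-member spectra as columns (e.g. guava canopy, bare lava,
# dry grass) over four broad bands -- values are invented for illustration.
E = np.array([
    [0.05, 0.30, 0.20],
    [0.45, 0.28, 0.35],
    [0.30, 0.25, 0.40],
    [0.60, 0.22, 0.30],
])

def unmix(pixel, endmembers):
    """Linear unmixing: least-squares abundance estimate per end-member."""
    a, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    a = np.clip(a, 0.0, None)   # clamp small negative estimates
    return a / a.sum()          # normalize to abundance fractions

# A synthetic mixed pixel: 70% of end-member 0, 30% of end-member 1
y = 0.7 * E[:, 0] + 0.3 * E[:, 1]
abundances = unmix(y, E)
```

Because the synthetic pixel is an exact mixture, the recovered fractions are 0.7, 0.3, and 0.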
192.
Eric M. Nielsen, Stephen D. Prince, Gregory T. Koeln 《Remote sensing of environment》2008,112(11):4061-4074
Although the impacts of wetland loss are often felt at regional scales, effective planning and management require a comparative assessment of local needs, costs, and benefits. Satellite remote sensing can provide spatially explicit, synoptic land cover change information to support such an assessment. However, a common challenge in conventional remote sensing change detection is the difficulty of obtaining phenologically and radiometrically comparable data from the start and end of the time period of interest. An alternative approach is to use a prior land cover classification as a surrogate for historic satellite data and to examine the self-consistency of class spectral reflectances in recent imagery. We produced a 30-meter resolution wetland change probability map for the U.S. mid-Atlantic region by applying an outlier detection technique to a base classification provided by the National Wetlands Inventory (NWI). Outlier-resistant measures – the median and median absolute deviation – were used to represent spectral reflectance characteristics of wetland class populations, and formed the basis for the calculation of a pixel change likelihood index. The individual scene index values were merged into a consistent region-wide map and converted to pixel change probability using a logistic regression calibrated through interpretation of historic and recent aerial photography. The accuracy of a regional change/no-change map produced from the change probabilities was estimated at 89.6%, with a Kappa of 0.779. The change probabilities identify areas for closer inspection of change cause, impact, and mitigation potential. With additional work to resolve confusion resulting from natural spatial heterogeneity and variations in land use, automated updating of NWI maps and estimates of areal rates of wetland change may be possible. 
We also discuss extensions of the technique to address specific applications such as monitoring marsh degradation due to sea level rise and mapping of invasive species.
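The outlier-resistant index described above (median and median absolute deviation of class reflectances, converted to a change probability by logistic regression) can be sketched as a robust z-score followed by a logistic transform. The class reflectances and the logistic coefficients below are invented; in the paper the coefficients are calibrated against interpreted aerial photography.

```python
import numpy as np

def change_index(pixels, class_pixels):
    """Robust z-score: distance from the class median in units of the
    median absolute deviation (MAD); 1.4826 makes MAD comparable to a
    standard deviation under Gaussian noise."""
    med = np.median(class_pixels)
    mad = np.median(np.abs(class_pixels - med))
    return np.abs(pixels - med) / (1.4826 * mad)

def change_probability(index, b0=-4.0, b1=1.2):
    """Logistic transform of the index; b0/b1 are invented stand-ins for
    calibrated regression coefficients."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * index)))

# Reflectances of pixels mapped to one wetland class (invented values)
cls = np.array([0.20, 0.21, 0.19, 0.22, 0.20, 0.18, 0.21])
candidates = np.array([0.20, 0.55])    # the second pixel is a likely change
idx = change_index(candidates, cls)
prob = change_probability(idx)
```

The pixel that matches its class median gets a near-zero change probability; the spectral outlier gets a probability near one.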
193.
Radiometric normalization and image mosaic generation of ASTER thermal infrared data: An application to extensive sand sheets and dune fields
Data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) have a significant advantage over previous datasets because of the combination of high spatial resolution (15-90 m) and enhanced multispectral capabilities, particularly in the thermal infrared (TIR) atmospheric window (8-12 μm) of the Earth where common silicate minerals are more easily identified. However, the 60 km swath width of ASTER can limit the effectiveness of accurately tracing large-scale features, such as eolian sediment transport pathways, over long distances. The primary goal of this paper is to describe a method for generating a seamless and radiometrically accurate ASTER TIR mosaic of atmospherically corrected radiance and from that, extract surface emissivity for arid lands, specifically, sand seas. The Gran Desierto in northern Sonora, Mexico was used as a test location for the radiometric normalization technique because of past remote sensing studies of the region, its compositional diversity, and its size. A linear approach was taken to transform adjacent image swaths into a direct linear relationship between image acquisition dates. Pseudo-invariant features (PIFs) were selected using a threshold of correlation between radiance values, and change-pixels were excluded from the linear regression used to determine correction factors. The degree of spectral correlation between overlapping pixels is directly related to the amount of surface change over time; therefore, the gain and offsets between scenes were based only on regions of high spectral correlation. The result was a series of radiometrically normalized radiance-at-surface images that were combined with a minimum of image edge seams present. These edges were subsequently blended to create the final mosaic. 
The advantages of this approach for TIR radiance (as opposed to emissivity) data include the ability to: (1) analyze data acquired on different dates (with potentially very different surface temperatures) as one seamless compositional dataset; (2) perform decorrelation stretches (DCS) on the entire dataset in order to identify and discriminate compositional units; and (3) separate brightness temperature from surface emissivity for quantitative compositional analysis of the surface, reducing seam-line error in the emissivity mosaic. The approach presented here is valid for any ASTER-related study of large geographic regions where numerous images spanning different temporal and atmospheric conditions are encountered.
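The core of the normalization is a linear regression over stable pixels in the swath overlap, yielding a gain and offset that map one scene onto the other's radiometry. The sketch below simulates this on invented radiance values; note that the paper selects pseudo-invariant features by thresholding spectral correlation, whereas the residual screen here is a crude stand-in that likewise discards change-pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated at-surface radiance for the overlap region of two adjacent swaths:
# the target scene differs from the reference by an unknown gain and offset.
ref = rng.uniform(8.0, 11.0, 500)
target = 0.9 * ref + 0.4 + rng.normal(0.0, 0.02, 500)

# Screen for stable (pseudo-invariant) pixels via residuals of a first-pass fit.
g0, o0 = np.polyfit(target, ref, 1)
resid = ref - (g0 * target + o0)
stable = np.abs(resid) < 2.0 * resid.std()

# Final gain/offset from stable pixels only; apply to the whole target swath.
gain, offset = np.polyfit(target[stable], ref[stable], 1)
normalized = gain * target + offset   # target swath on the reference radiometry
```

With the simulated relationship target = 0.9·ref + 0.4, the fit recovers gain ≈ 1/0.9 and offset ≈ −0.4/0.9, so the normalized swath closely matches the reference.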
194.
Peter Potapov, Matthew C. Hansen, Stephen V. Stehman, Thomas R. Loveland, Kyle Pittman 《Remote sensing of environment》2008,112(9):3708-3719
Estimation of forest cover change is important for boreal forests, one of the most extensive forested biomes, due to its unique role in global timber stock, carbon sequestration and deposition, and high vulnerability to the effects of global climate change. We used time-series data from the MODerate Resolution Imaging Spectroradiometer (MODIS) to produce annual forest cover loss hotspot maps. These maps were used to assign all blocks (18.5 by 18.5 km) partitioning the boreal biome into strata of high, medium and low likelihood of forest cover loss. A stratified random sample of 118 blocks was interpreted for forest cover and forest cover loss using high spatial resolution Landsat imagery from 2000 and 2005. Area of forest cover gross loss from 2000 to 2005 within the boreal biome is estimated to be 1.63% (standard error 0.10%) of the total biome area, and represents a 4.02% reduction in year 2000 forest cover. The proportion of identified forest cover loss relative to regional forest area is much higher in North America than in Eurasia (5.63% versus 3.00%). Of the total forest cover loss identified, 58.9% is attributable to wildfires. The MODIS pan-boreal change hotspot estimates reveal significant increases in forest cover loss due to wildfires in 2002 and 2003, with 2003 being the peak year of loss within the 5-year study period. Overall, the precision of the aggregate forest cover loss estimates derived from the Landsat data and the value of the MODIS-derived map displaying the spatial and temporal patterns of forest loss demonstrate the efficacy of this protocol for operational, cost-effective, and timely biome-wide monitoring of gross forest cover loss.
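The design described above (strata of high/medium/low loss likelihood, a stratified random sample of blocks, and an area estimate with standard error) follows the standard stratified estimator. The per-stratum numbers below are invented for illustration and are NOT the paper's data.

```python
import math

# Invented per-stratum numbers: total blocks N_h, sampled blocks n_h,
# sample mean loss fraction ybar_h, and sample variance s2_h.
strata = [
    (1200,  40, 0.062, 0.0021),   # high-likelihood stratum
    (5400,  40, 0.015, 0.0006),   # medium
    (21000, 38, 0.002, 0.0001),   # low
]

N = sum(N_h for N_h, _, _, _ in strata)

# Stratified estimator of the biome-wide loss fraction and its standard error
# (with a finite-population correction per stratum).
estimate = sum(N_h * ybar for N_h, _, ybar, _ in strata) / N
variance = sum((N_h / N) ** 2 * (1 - n_h / N_h) * s2 / n_h
               for N_h, n_h, _, s2 in strata)
std_error = math.sqrt(variance)
```

Weighting each stratum by its share of blocks is what lets a small sample concentrated in likely-change strata yield a precise biome-wide estimate.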
195.
Georgia Frantzeskou, Stephen MacDonell 《Journal of Systems and Software》2008,81(3):447-460
The use of Source Code Author Profiles (SCAP) represents a new, highly accurate approach to source code authorship identification that is, unlike previous methods, language independent. While accuracy is clearly a crucial requirement of any author identification method, in cases of litigation regarding authorship, plagiarism, and so on, there is also a need to know why it is claimed that a piece of code is written by a particular author. What is it about that piece of code that suggests a particular author? What features in the code make one author more likely than another? In this study, we describe a means of identifying the high-level features that contribute to source code authorship identification, using the SCAP method as a tool. A variety of features are considered for Java and Common Lisp, and the importance of each feature in determining authorship is measured through a sequence of experiments in which we remove one feature at a time. The results show that, for these programs, comments, layout features and package-related naming influence classification accuracy, whereas user-defined naming, an obvious programmer-related feature, does not appear to influence accuracy. A comparison is also made between the relative feature contributions in programs written in the two languages.
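SCAP builds an author profile from the most frequent character n-grams of known code and attributes a disputed file to the author whose profile overlaps it most (the Simplified Profile Intersection). A toy sketch, with invented code snippets and arbitrary n-gram size and profile length:

```python
from collections import Counter

def profile(source: str, n: int = 3, top: int = 500) -> set:
    """SCAP-style author profile: the `top` most frequent character n-grams."""
    grams = Counter(source[i:i + n] for i in range(len(source) - n + 1))
    return {g for g, _ in grams.most_common(top)}

def similarity(unknown: str, author_profile: set) -> int:
    """Simplified Profile Intersection: size of the profile overlap."""
    return len(profile(unknown) & author_profile)

# Invented snippets standing in for known code by two authors.
alice = profile("for (int i = 0; i < n; i++) { total += data[i]; } // sum")
bob = profile("i = 0\nwhile i < n:\n    s = s + x[i]\n    i += 1  # accumulate")

unknown = "for (int j = 0; j < m; j++) { total += vals[j]; } // sum"
scores = {"alice": similarity(unknown, alice), "bob": similarity(unknown, bob)}
attributed = max(scores, key=scores.get)
```

Because n-grams capture layout, comment style, and naming conventions together, the method needs no language-specific parsing, which is what makes SCAP language independent.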
196.
Multilayer hybrid visualizations to support 3D GIS
In this paper, we present a unique hybrid visualization system for spatial data. Although some existing 3D GIS systems offer 2D views, they are typically isolated from the 3D view in that they are presented in a separate window. Our system is a novel hybrid 2D/3D approach that seamlessly integrates 2D and 3D views of the same data. In our interface, multiple layers of information are continuously transformed between the 2D and 3D modes under the control of the user, directly over a base terrain. In this way, our prototype system is able to depict 2D and 3D views within the same window. This has advantages, since 2D and 3D visualizations can each be easier to interpret in different contexts. In this work we develop this concept of a hybrid visualization by presenting a comprehensive set of capabilities within our distinctive system. These include new facilities such as: hybrid landmark, 3D point, and chart layers; the grouping of multiple hybrid layers; layer painting; the merging of layer controls; and consistent zooming functionality.
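One way to picture the continuous 2D-to-3D layer transformation is as interpolating each layer vertex between the flat map plane and its terrain-draped elevation under a user-controlled parameter. This is only a conceptual sketch (the paper does not specify its transformation this way), with an invented toy elevation model:

```python
def hybrid_position(x, y, terrain_height, t):
    """Blend a layer vertex between the flat 2D map plane (t = 0) and its
    terrain-draped 3D position (t = 1); intermediate t gives the transition."""
    return (x, y, t * terrain_height(x, y))

def terrain(x, y):
    return 100.0 + 5.0 * x - 2.0 * y   # toy elevation model

flat = hybrid_position(3.0, 4.0, terrain, 0.0)      # pure 2D view
halfway = hybrid_position(3.0, 4.0, terrain, 0.5)   # mid-transition
draped = hybrid_position(3.0, 4.0, terrain, 1.0)    # pure 3D view
```

Animating t smoothly between 0 and 1 per layer is what lets 2D and 3D depictions coexist in one window.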
197.
Stephen Smith, David Petty, David Trustrum, Ashraf Labib, Ali Khan 《Robotics and Computer-Integrated Manufacturing》2008,24(4):579-584
During the late 1990s and early 2000s, the profile of global manufacturing experienced many changes. There is anecdotal evidence that many western manufacturing companies have chosen to expand their manufacturing base across geographical boundaries. The common reasons cited for these ventures are to exploit less expensive labour markets, to establish a presence in expanding markets, and to respond to the threat of new competition. Whilst a global manufacturing base can prove to have many cost and sales benefits, there are also many disadvantages. Logistics operations can often increase in complexity, leading to higher reliance on planning and effective interpretation of demand data. In response, systems modelling has re-emerged as a fertile research area after many years. Many modelling and simulation techniques have been developed, but these have had very limited practical success. The authors have identified that the majority of these simulation techniques rely upon a detailed market structure being known, when this is rarely the case. This paper describes the outcome of a research project to develop a pragmatic set of tools to gather, assess and verify supply chain structure data. A hybrid collection of technologies is utilised to assist these operations and to build a dynamic supply network model.
198.
Most search techniques within ILP require the evaluation of a large number of inconsistent clauses. However, acceptable clauses typically need to be consistent, and are only found at the “fringe” of the search space. A search approach is presented, based on a novel algorithm called QG (Quick Generalization). QG carries out a random-restart stochastic bottom-up search which efficiently generates a consistent clause on the fringe of the refinement graph search without needing to explore the graph in detail. We use a Genetic Algorithm (GA) to evolve and re-combine clauses generated by QG. In this QG/GA setting, QG is used to seed a population of clauses processed by the GA. Experiments with QG/GA indicate that this approach can be more efficient than standard refinement-graph searches, while generating similar or better solutions.
Editors: Ramon Otero, Simon Colton.
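The QG/GA division of labour above (a stochastic generator seeding a GA population) can be sketched generically. Everything here is invented for illustration: clauses are reduced to bit masks over literals, QG is replaced by a random generator, and the fitness is a toy objective rather than the coverage-based scoring a real ILP system would use.

```python
import random

random.seed(1)

def quick_generalization():
    """Stand-in for QG: a random-restart generator producing a candidate
    clause, encoded as a bit mask over 12 literals (invented encoding)."""
    return [random.randint(0, 1) for _ in range(12)]

def fitness(clause):
    # Toy objective preferring shorter clauses; real ILP scores clauses by
    # coverage of positive and negative examples instead.
    return -sum(clause)

def ga(pop, generations=30):
    """Generational GA with elitism, one-point crossover, bit-flip mutation."""
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]          # elitism: keep the best half
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]           # one-point crossover
            i = random.randrange(len(child))
            child[i] ^= random.random() < 0.1   # occasional bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

seeded = [quick_generalization() for _ in range(20)]   # QG seeds the population
initial_best = max(fitness(c) for c in seeded)
best = ga(seeded)
```

Because the best half survives each generation, the final clause is never worse than the best seed, mirroring how the GA refines rather than discards QG's output.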
199.
A Markov chain Monte Carlo method has previously been introduced to estimate weighted sums in multiplicative weight update algorithms when the number of inputs is exponential. However, the original algorithm still required extensive simulation of the Markov chain in order to get accurate estimates of the weighted sums. We propose an optimized version of the original algorithm that produces exactly the same classifications while often using fewer Markov chain simulations. We also apply three other sampling techniques and empirically compare them with the original Metropolis sampler to determine how effective each is in drawing good samples in the least amount of time, in terms of accuracy of weighted sum estimates and in terms of Winnow’s prediction accuracy. We found that two other samplers (Gibbs and Metropolized Gibbs) were slightly better than Metropolis in their estimates of the weighted sums. For prediction errors, there is little difference between any pair of MCMC techniques we tested. Also, on the data sets we tested, we discovered that all approximations of Winnow have no disadvantage when compared to brute force Winnow (where weighted sums are exactly computed), so generalization accuracy is not compromised by our approximation. This is true even when very small sample sizes and mixing times are used.
An early version of this paper appeared as Tao and Scott (2003). 
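For reference, the exact ("brute force") Winnow learner that the MCMC machinery approximates computes its weighted sums directly: predict 1 when the sum of weights over active attributes reaches a threshold, then multiplicatively promote or demote on mistakes. A toy sketch learning a small disjunction (the attributes and data are invented):

```python
def winnow_predict(weights, x, theta):
    """Predict 1 iff the weighted sum over active attributes reaches theta."""
    return 1 if sum(w for w, xi in zip(weights, x) if xi) >= theta else 0

def winnow_update(weights, x, y, y_hat, alpha=2.0):
    """Multiplicative update: promote active weights on a false negative,
    demote them on a false positive; do nothing when correct."""
    if y_hat == y:
        return weights
    factor = alpha if y == 1 else 1.0 / alpha
    return [w * factor if xi else w for w, xi in zip(weights, x)]

# Learn the disjunction "x1 OR x3" over four boolean attributes (toy data).
n = 4
weights = [1.0] * n
theta = n / 2
data = [((1, 0, 0, 0), 1), ((0, 0, 1, 0), 1),
        ((0, 1, 0, 1), 0), ((0, 1, 0, 0), 0)]
for _ in range(10):
    for x, y in data:
        weights = winnow_update(weights, x, y, winnow_predict(weights, x, theta))
```

The MCMC variants in the paper replace the explicit sum in `winnow_predict` with a sampled estimate when the attribute space is exponentially large.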
200.
Brain reading using full brain support vector machines for object recognition: there is no "face" identification area
Over the past decade, object recognition work has confounded voxel response detection with potential voxel class identification. Consequently, the claim that there are areas of the brain that are necessary and sufficient for object identification cannot be resolved with existing associative methods (e.g., the general linear model) that are dominant in brain imaging methods. In order to explore this controversy we trained full brain (40,000 voxels) single TR (repetition time) classifiers on data from 10 subjects in two different recognition tasks on the most controversial classes of stimuli (house and face) and show 97.4% median out-of-sample (unseen TRs) generalization. This performance allowed us to reliably and uniquely assay the classifier's voxel diagnosticity in all individual subjects' brains. In this two-class case, there may be specific areas diagnostic for house stimuli (e.g., LO) or for face stimuli (e.g., STS); however, in contrast to the detection results common in this literature, neither the fusiform face area nor parahippocampal place area is shown to be uniquely diagnostic for faces or places, respectively.
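The voxel-diagnosticity assay rests on training a linear classifier over all voxels and inspecting the magnitude of each voxel's learned weight. This is not the authors' pipeline (they train full-brain SVMs on real fMRI data); the sketch below trains a linear SVM via the Pegasos subgradient method on synthetic "voxel" data in which only a few features carry class signal, then reads diagnosticity off the weights.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: 200 TRs x 100 voxels. Only voxels 10-14 carry class signal
# (say, houses vs. faces) -- a stand-in for diagnostic brain regions.
n, d = 200, 100
y = rng.integers(0, 2, n) * 2 - 1              # labels in {-1, +1}
X = rng.normal(0.0, 1.0, (n, d))
X[:, 10:15] += y[:, None] * 1.5                # informative voxels

def pegasos(X, y, lam=0.01, epochs=50):
    """Linear SVM trained with the Pegasos stochastic subgradient method."""
    w, t = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:          # margin violation: hinge step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

w = pegasos(X, y)
accuracy = float(np.mean(np.sign(X @ w) == y))
diagnostic = np.argsort(-np.abs(w))[:5]        # voxels with the largest |weight|
```

The informative voxels dominate the weight vector, which is the sense in which a full-brain classifier can localize diagnostic regions without a prior region-of-interest hypothesis.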