91.
This paper considers the theory of database queries on the complex value data model with external functions. Motivated by concerns regarding query evaluation, we first identify recursive sets of formulas, called embedded allowed, a class with the desirable properties of “reasonable” queries. We then show that all embedded allowed calculus (or fixpoint) queries are domain independent and continuous. We have developed an algorithm for translating embedded allowed queries into equivalent algebraic expressions, which serves as a basis for evaluating safe queries in all calculus-based query classes. Finally, we discuss the topic of “domain independent query programs”, compare the expressive power of the various complex value query languages and their embedded allowed versions, and discuss the relationship among safety, embedded allowedness, and domain independence in the various calculus-based queries.
92.
Validation of GOES and MODIS active fire detection products using ASTER and ETM+ data (Cited: 2; self-citations: 0; citations by others: 2)
Wilfrid Schroeder, Elaine Prins, Louis Giglio, Ivan Csiszar, Christopher Schmidt, Jeffrey Morisette, Douglas Morton. Remote Sensing of Environment, 2008, 112(5): 2711–2726
In this study we implemented a comprehensive analysis to validate the MODIS and GOES satellite active fire detection products (MOD14 and WFABBA, respectively) and to characterize their major sources of omission and commission errors, which have important implications for a large community of fire data users. Our analyses were primarily based on the use of 30 m resolution ASTER and ETM+ imagery as our validation data. We found that at the 50% true positive detection probability mark, WFABBA requires four times more active fire area than is necessary for MOD14 to achieve the same probability of detection, despite the 16× factor separating the nominal spatial resolutions of the two products. Approximately 75% and 95% of all fires sampled were omitted by the MOD14 and WFABBA instantaneous products, respectively, whereas an omission error of 38% was obtained for WFABBA when considering the 30-minute interval of the GOES data. Commission errors for MOD14 and WFABBA were found to be similar and highly dependent on the vegetation conditions of the areas imaged, with the larger commission errors (approximately 35%) estimated over regions of active deforestation. Nonetheless, the vast majority (> 80%) of the commission errors were indeed associated with recent burning activity whose scars could be visually confirmed in the higher resolution data. Differences in the thermal dynamics of vegetated and non-vegetated areas were found to produce a reduction of approximately 50% in the commission errors estimated towards the hours of maximum fire activity (i.e., early afternoon), which coincided with the MODIS/Aqua overpass. Lastly, we demonstrate the potential use of temporal metrics applied to the mid-infrared bands of MODIS and GOES data to reduce the commission errors found in the validation analyses.
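The omission and commission error rates quoted above are simple ratios over validation counts. A minimal illustration with hypothetical counts, not the paper's validation data:

```python
# Hypothetical illustration of the omission/commission rates used in
# fire-product validation; the counts below are invented examples.
def omission_error(missed_fires, reference_fires):
    """Fraction of reference (true) fires the product failed to detect."""
    return missed_fires / reference_fires

def commission_error(false_detections, total_detections):
    """Fraction of detections with no corresponding reference fire."""
    return false_detections / total_detections

print(omission_error(75, 100))    # 0.75, cf. ~75% omission for MOD14
print(commission_error(35, 100))  # 0.35, cf. ~35% over deforestation areas
```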
93.
Use of a dark object concept and support vector machines to automate forest cover change analysis (Cited: 5; self-citations: 0; citations by others: 5)
Chengquan Huang, Kuan Song, Sunghee Kim, Paul Davis, Jeffrey G. Masek. Remote Sensing of Environment, 2008, 112(3): 970–985
An automated method was developed for mapping forest cover change using satellite remote sensing data sets. This multi-temporal classification method consists of a training data automation (TDA) procedure and uses the advanced support vector machines (SVM) algorithm. The TDA procedure automatically generates training data using input satellite images and existing land cover products. The derived high quality training data allow the SVM to produce reliable forest cover change products. This approach was tested in 19 study areas selected from major forest biomes across the globe. In each area a forest cover change map was produced using a pair of Landsat images acquired around 1990 and 2000. High resolution IKONOS images and independently developed reference data sets were available for evaluating the derived change products in 7 of those areas. The overall accuracy values were over 90% for 5 areas, and were 89.4% and 89.6% for the remaining two areas. The user's and producer's accuracies of the forest loss class were over 80% for all 7 study areas, demonstrating that this method is especially effective for mapping major disturbances with low commission errors. IKONOS images were also available in the remaining 12 study areas but they were either located in non-forest areas or in forest areas that did not experience forest cover change between 1990 and 2000. For those areas the IKONOS images were used to assist visual interpretation of the Landsat images in assessing the derived change products. This visual assessment revealed that for most of those areas the derived change products likely were as reliable as those in the 7 areas where accuracy assessment was conducted. The results also suggest that images acquired during leaf-off seasons should not be used in forest cover change analysis in areas where deciduous forests exist. Being highly automatic and with demonstrated capability to produce reliable change products, the TDA-SVM method should be especially useful for quantifying forest cover change over large areas.
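The training-data automation idea can be sketched as harvesting confidently labeled pixels from an existing product and fitting a classifier to them. A dependency-free sketch in which a nearest-centroid classifier stands in for the SVM and all pixel data are synthetic (the paper's actual procedure uses Landsat bands and existing land-cover products):

```python
# Sketch of the TDA idea: train a classifier on pixels assumed already
# screened for agreement with a prior land-cover product. A trivial
# nearest-centroid model replaces the SVM; all data are synthetic.
def train_centroids(samples, labels):
    """Compute the mean feature vector (centroid) of each class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def classify(centroids, x):
    """Assign x to the class with the nearest centroid."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda y: dist2(centroids[y]))

# Synthetic "automated training pixels": (features, label) pairs
pixels = [([0.1, 0.8], "forest"), ([0.9, 0.2], "loss"),
          ([0.2, 0.7], "forest"), ([0.8, 0.1], "loss")]
model = train_centroids([p for p, _ in pixels], [l for _, l in pixels])
print(classify(model, [0.85, 0.15]))  # "loss"
```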
94.
Aniruddha Gokhale, Jaiganesh Balasubramanian, Gan Deng, Jeffrey Parsons, Douglas C. Schmidt. Science of Computer Programming, 2008, 73(1): 39–58
Distributed real-time and embedded (DRE) systems have become critical in domains such as avionics (e.g., flight mission computers), telecommunications (e.g., wireless phone services), tele-medicine (e.g., robotic surgery), and defense applications (e.g., total ship computing environments). These types of systems are increasingly interconnected via wireless and wireline networks to form systems of systems. A challenging requirement for these DRE systems involves supporting a diverse set of quality of service (QoS) properties, such as predictable latency/jitter, throughput guarantees, scalability, 24/7 availability, dependability, and security, that must be satisfied simultaneously in real time. Although increasing portions of DRE systems are based on QoS-enabled commercial-off-the-shelf (COTS) hardware and software components, the complexity of managing long lifecycles (often ∼15–30 years) remains a key challenge for DRE developers and system integrators. For example, substantial time and effort is spent retrofitting DRE applications when the underlying COTS technology infrastructure changes. This paper provides two contributions that help improve the development, validation, and integration of DRE systems throughout their lifecycles. First, we illustrate the challenges in creating and deploying QoS-enabled component middleware-based DRE applications and describe our approach to resolving these challenges based on a new software paradigm called Model Driven Middleware (MDM), which combines model-based software development techniques with QoS-enabled component middleware to address key challenges faced by developers of DRE systems, particularly composition, integration, and assured QoS for end-to-end operations.

Second, we describe the structure and functionality of CoSMIC (Component Synthesis using Model Integrated Computing), an MDM toolsuite that addresses key DRE application and middleware lifecycle challenges, including partitioning components to use distributed resources effectively, validating software configurations, assuring multiple simultaneous QoS properties in real time, and safeguarding against rapidly changing technology.
95.
We introduce a general discrete-time dynamic framework to value pilot project investments that reduce idiosyncratic uncertainty with respect to the final cost of a project. The model generalizes several settings introduced previously in the literature by incorporating both market and technical uncertainty and by differentiating between the commercial phase and the pilot phase of a project. In our model, the pilot phase requires N stages of investment for completion. With this distinction we are able to frame the problem as a compound perpetual Bermudan option. We work in an incomplete-markets setting where market uncertainty is spanned by tradable assets and technical uncertainty is idiosyncratic to the firm. The value of the option to invest, as well as the optimal exercise policy, is obtained by an approximate dynamic programming algorithm that relies on the independence of the state variables' increments. We prove the convergence of our algorithm and derive a theoretical bound on how the errors compound as the number of stages of the pilot phase is increased. We implement the algorithm for a simplified version of the model where revenues are fixed, providing an economic interpretation of the effects of the main parameters driving the model. In particular, we explore how the value of the investment opportunity and the optimal investment threshold are affected by changes in market volatility, technical volatility, the learning coefficient, the drift rate of costs, and the time to completion of a pilot stage.
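The staged structure of the pilot phase can be illustrated with a toy backward recursion: at each stage the firm trades the stage cost against the discounted value of continuing, with the option to abandon. This is a deliberately simplified deterministic sketch, not the paper's approximate dynamic programming algorithm; all parameters are invented:

```python
# Toy backward recursion over pilot stages. Deterministic and invented
# numbers throughout -- a sketch of the staged-option structure only.
def pilot_value(revenue, stage_cost, n_stages, discount=0.9):
    """Value today of an N-stage pilot followed by the commercial payoff.

    At each stage: pay stage_cost now, receive the discounted value of
    continuing; max(0, .) captures the option to abandon the project.
    """
    value = revenue  # payoff once all pilot stages are complete
    for _ in range(n_stages):
        value = max(0.0, discount * value - stage_cost)
    return value

print(pilot_value(revenue=100.0, stage_cost=5.0, n_stages=3))  # 59.35
```

Adding more stages or raising the stage cost lowers today's value, mirroring how the paper's error bound and option value depend on the number of pilot stages.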
96.
Richard Kendall, Jeffrey C. Carver, David Fisher, Dale Henderson, Andrew Mark, Douglass Post, Clifford E. Rhoades Jr., Susan Squires. IEEE Software, 2008, 25(4): 59–65
Computational science is increasingly supporting advances in scientific and engineering knowledge. The unique constraints of these types of projects result in a development process that differs from the one used by more traditional information technology projects. This article reports the results of the sixth case study conducted under the support of the DARPA High Productivity Computing Systems Program. The case study aimed to investigate the technical challenges of code development in this environment, understand the use of development tools, and document the findings as concrete lessons learned for other developers' benefit. The project studied here is a major component of a weather forecasting system of systems. It includes complex behavior and the interaction of several individual physical systems (such as the atmosphere and the ocean). This article describes the development of the code and presents important lessons learned.
97.
Cylindrical fibre actuators have been constructed by a coextrusion method using a thermoplastic polyurethane wall and a conductive grease filler. These actuators may be operated as single fibres or bundled together as actuating ropes. Key results include the validation of Carpi’s wall pressure model [F. Carpi, D.D. Rossi, Dielectric elastomer cylindrical actuators: electromechanical modelling and experimental evaluation, Mater. Sci. Eng. C-Biomimetic Supramol. Syst. 24 (2004) 555–562] and the proof-of-concept demonstration of a technique that can be used for producing inexpensive dielectric elastomer actuators on an industrial scale.
98.
A dynamic workflow framework for mass customization using web service and autonomous agent techniques (Cited: 3; self-citations: 1; citations by others: 2)
Daniel J. Karpowitz, Jordan J. Cox, Jeffrey C. Humpherys, Sean C. Warnick. Journal of Intelligent Manufacturing, 2008, 19(5): 537–552
Custom software development and maintenance is one of the key expenses associated with developing automated systems for mass customization. This paper presents a method for reducing the risk associated with this expense by developing a flexible environment for determining and executing dynamic workflow paths. Strategies for developing an autonomous agent-based framework and for identifying and creating web services for specific process tasks are presented. The proposed methods are outlined in two different case studies to illustrate the approach for both a generic process with complex workflow paths and a more specific sequential engineering process.
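The core pattern, process tasks exposed as independently registered services whose execution order is resolved at run time, can be sketched in a few lines. The task names and registry are illustrative stand-ins, not the paper's framework:

```python
# Hedged sketch of a dynamic workflow: tasks are registered callables
# (standing in for web services) and the execution path is chosen at
# run time. All names here are illustrative, not from the paper.
REGISTRY = {}

def task(name):
    """Decorator that registers a callable under a service name."""
    def register(fn):
        REGISTRY[name] = fn
        return fn
    return register

@task("design")
def design(order):
    return {**order, "designed": True}

@task("fabricate")
def fabricate(order):
    return {**order, "fabricated": True}

def run_workflow(order, path):
    """Execute the dynamically chosen sequence of task names."""
    for name in path:
        order = REGISTRY[name](order)
    return order

result = run_workflow({"id": 1}, ["design", "fabricate"])
print(result)  # order now carries both task results
```

Because the path is plain data, an agent can recompute or reorder it per customer without touching the task implementations, which is the maintenance saving the abstract describes.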
99.
This paper describes the implementation and benchmarking of a parallel version of the LISFLOOD-FP hydraulic model based on the OpenMP Application Programming Interface. The motivation behind the study was that reducing model run time through parallelisation would increase the utility of such models by expanding the domains over which they can be practically implemented, allowing previously inaccessible scientific questions to be addressed. Parallel speedup was calculated for 13 models distributed over seven study sites and implemented on one, two, four and, in selected cases, eight processor cores. The models represent a range of previous applications, from large-area, coarse-resolution models of the Amazon, to fine-resolution models of urban areas, to orders-of-magnitude smaller models of rural floodplains. Parallel speedups were greater for larger model domains, especially for models with more than 0.2–0.4 million cells, where parallel efficiencies of up to 0.75 were achieved on four and eight cores. A key advantage of using OpenMP and an explicit rather than implicit model was the ease of implementation and the minimal code changes required to run simulations in parallel.
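The speedup and efficiency figures reported above are simple ratios: speedup S_p = T_1 / T_p and efficiency E_p = S_p / p for p cores. A minimal illustration with made-up run times, not the paper's measurements:

```python
# Parallel speedup and efficiency as used in benchmarking studies like
# the one above. The run times below are hypothetical examples.
def speedup(t_serial, t_parallel):
    """S_p = T_1 / T_p: how many times faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, cores):
    """E_p = S_p / p: fraction of ideal linear scaling achieved."""
    return speedup(t_serial, t_parallel) / cores

# e.g. a model that runs in 400 s serially and 133 s on four cores:
print(round(efficiency(400.0, 133.0, 4), 2))  # 0.75
```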
100.
Yoram Bachrach, Ariel Parnes, Ariel D. Procaccia, Jeffrey S. Rosenschein. Autonomous Agents and Multi-Agent Systems, 2009, 19(2): 153–172
Decentralized reputation systems have recently emerged as a prominent method of establishing trust among self-interested agents in online environments. A key issue is the efficient aggregation of data in the system; several approaches have been proposed, but they are plagued by major shortcomings. We put forward a novel, decentralized data management scheme grounded in gossip-based algorithms. Rumor mongering is known to possess algorithmic advantages, and indeed, our framework inherits many of their salient features: scalability, robustness, a global perspective, and simplicity. We demonstrate that our scheme motivates agents to maintain a very high reputation, by showing that the higher an agent's reputation is above the threshold set by its peers, the more transactions it will be able to complete within a certain time unit. We analyze the relation between the amount by which an agent's average reputation exceeds the threshold and the time required to close a deal. This analysis is carried out both theoretically, and empirically through a simulation system called GossipTrustSim. Finally, we show that our approach is inherently impervious to certain kinds of attacks.

A preliminary version of this article appeared in the proceedings of IJCAI 2007.
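The gossip-style aggregation such schemes build on can be illustrated with pairwise averaging: repeated random exchanges drive every agent's local estimate toward the global mean while preserving the total. A toy sketch, not the paper's actual algorithm; all values are invented:

```python
# Toy gossip aggregation: each round, two random agents average their
# local reputation estimates. Estimates converge to the global mean.
# This is an illustrative stand-in, not the paper's scheme.
import random

def gossip_average(values, rounds=200, seed=42):
    vals = list(values)
    rng = random.Random(seed)
    n = len(vals)
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        avg = (vals[i] + vals[j]) / 2.0  # pairwise exchange
        vals[i] = vals[j] = avg          # sum is preserved
    return vals

ratings = [1.0, 4.0, 2.0, 5.0]  # local reputation reports
print(gossip_average(ratings))   # every entry near the mean, 3.0
```

The scalability and robustness cited in the abstract follow from this structure: no central aggregator is needed, and losing any single agent perturbs the estimates only slightly.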