101.
Oxidation rates in air at 1000–1250°C are reported for a series of Co-Cr-W alloys with 34–40 wt.% Cr and up to 10 wt.% W. Alloys with larger W contents exhibited slower oxidation rates, and their parabolic rate constants agreed well with those for binary and ternary, Cr2O3-protected, Ni-base and Co-base alloys in the Co-Cr and Ni-Cr-W systems. The resulting scales were characterized by optical and scanning electron metallography and by electron microprobe analysis. The favorable effect of W additions to a Cr2O3-forming Co-Cr base alloy was the opposite of that reported for Ni-Cr-W alloys. The resupply of Cr to a Cr-depleted matrix beneath a protective Cr2O3 scale is achieved by the dissolution (denuding) of Cr-rich second phases in the Co-Cr-W alloys. Thus, the internal oxidation of Cr beneath the Cr2O3 scale is avoided for high-W alloys. No catastrophic failure by liquid-phase formation was observed for high-W alloys oxidized for 20 hr at 1250°C.
102.
Hadar Averbuch-Elor, Yunhai Wang, Yiming Qian, Minglun Gong, Johannes Kopf, Hao Zhang, Daniel Cohen-Or. Computer Graphics Forum, 2015, 34(2): 131-142
We present a distillation algorithm which operates on a large, unstructured, and noisy collection of internet images returned from an online object query. We introduce the notion of a distilled set, which is a clean, coherent, and structured subset of inlier images. In addition, the object of interest is properly segmented out throughout the distilled set. Our approach is unsupervised, built on a novel clustering scheme, and solves the distillation and object segmentation problems simultaneously. In essence, instead of distilling the collection of images, we distill a collection of loosely cut-out foreground “shapes”, which may or may not contain the queried object. Our key observation, which motivated our clustering scheme, is that outlier shapes are expected to be random in nature, whereas inlier shapes, which tightly enclose the object of interest, tend to be well supported by similar shapes captured in similar views. We analyze the commonalities among candidate foreground segments, without aiming to analyze their semantics, but simply by clustering similar shapes and considering only the most significant clusters representing non-trivial shapes. We show that when tuned conservatively, our distillation algorithm is able to extract a near-perfect subset of true inliers. Furthermore, we show that our technique scales well in the sense that the precision rate remains high as the collection grows. We demonstrate the utility of our distillation results with a number of interesting graphics applications.
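The support-based inlier test underlying this clustering idea can be sketched as follows. This is a hypothetical simplification, not the authors' algorithm: shapes are represented as binary masks, similarity is intersection-over-union (IoU), and an inlier is any shape supported by enough near-duplicates, while unsupported (random) shapes are discarded.

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two boolean masks of equal shape
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def distill(shapes, thresh=0.6, min_support=2):
    # A shape is kept as an inlier if at least `min_support` other shapes
    # overlap it strongly; outlier shapes are assumed random and unsupported.
    keep = []
    for i, s in enumerate(shapes):
        support = sum(iou(s, t) >= thresh
                      for j, t in enumerate(shapes) if j != i)
        if support >= min_support:
            keep.append(i)
    return keep
```

With three near-identical masks and one random one, only the three supported masks survive the distillation.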
103.
104.
Inspired by the relational algebra of data processing, this paper addresses the foundations of data analytical processing from a linear algebra perspective. The paper investigates, in particular, how aggregation operations such as cross tabulations and data cubes essential to quantitative analysis of data can be expressed solely in terms of matrix multiplication, transposition and the Khatri–Rao variant of the Kronecker product. The approach offers a basis for deriving an algebraic theory of data consolidation, handling the quantitative as well as qualitative sides of data science in a natural, elegant and typed way. It also shows potential for parallel analytical processing, as the parallelization theory of such matrix operations is well acknowledged.
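As a concrete illustration of the idea (a toy sketch, not the paper's formalism; the column names and data are invented), a cross tabulation of two encoded attributes reduces to a product of their one-hot projection matrices, and a measure-weighted cross tab just inserts a diagonal matrix in the middle:

```python
import numpy as np

def onehot(col, k):
    # n×k projection matrix: row i selects the category of record i
    M = np.zeros((len(col), k))
    M[np.arange(len(col)), col] = 1.0
    return M

# Hypothetical tiny fact table: two categorical columns and one measure
city = np.array([0, 1, 0, 1, 1])   # encoded "city" attribute
item = np.array([0, 0, 1, 1, 0])   # encoded "item" attribute
qty  = np.array([3, 1, 2, 5, 4])   # quantity measure

C = onehot(city, 2)
I = onehot(item, 2)

counts = C.T @ I                   # cross tab counting records
totals = C.T @ np.diag(qty) @ I    # cross tab summing the measure
```

Here `counts[r, c]` is the number of records with city `r` and item `c`, and `totals` aggregates `qty` over the same cells, using nothing but matrix multiplication and transposition.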
105.
106.
Frank E. Curtis, Nicholas I.M. Gould, Hao Jiang, Daniel P. Robinson. Optimization Methods & Software, 2016, 31(1): 157-186
In this paper, we consider augmented Lagrangian (AL) algorithms for solving large-scale nonlinear optimization problems that execute adaptive strategies for updating the penalty parameter. Our work is motivated by the recently proposed adaptive AL trust region method by Curtis et al. [An adaptive augmented Lagrangian method for large-scale constrained optimization, Math. Program. 152 (2015), pp. 201–245]. The first focal point of this paper is a new variant of the approach that employs a line search rather than a trust region strategy; a critical algorithmic feature of the line search strategy is the use of convexified piecewise quadratic models of the AL function for computing the search directions. We prove global convergence guarantees for our line search algorithm that are on par with those for the previously proposed trust region method. A second focal point of this paper is the practical performance of the line search and trust region algorithm variants in Matlab software, as well as that of an adaptive penalty parameter updating strategy incorporated into the Lancelot software. We test these methods on problems from the CUTEst and COPS collections, as well as on challenging test problems related to optimal power flow. Our numerical experience suggests that the adaptive algorithms outperform traditional AL methods in terms of efficiency and reliability. As with traditional AL algorithms, the adaptive methods are matrix-free and thus represent a viable option for solving large-scale problems.
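A minimal sketch of the basic AL loop may help fix ideas: a first-order multiplier update plus an adaptive penalty increase when the constraint violation does not shrink fast enough. This is a toy gradient-descent version on an invented equality-constrained problem, not the paper's convexified-model line search method.

```python
import numpy as np

# Hypothetical toy problem: min x0^2 + x1^2  s.t.  x0 + x1 - 1 = 0
f  = lambda x: x[0] ** 2 + x[1] ** 2
gf = lambda x: 2 * x
c  = lambda x: x[0] + x[1] - 1.0
gc = np.array([1.0, 1.0])            # constraint gradient (constant here)

def solve_al(x, lam=0.0, mu=10.0, outer=20, inner=500, step=1e-2):
    for _ in range(outer):
        prev_viol = abs(c(x))
        # Inner loop: approximately minimize the AL function
        #   L(x) = f(x) + lam * c(x) + (mu / 2) * c(x)^2
        for _ in range(inner):
            grad = gf(x) + (lam + mu * c(x)) * gc
            x = x - step * grad
        lam += mu * c(x)             # first-order multiplier update
        if abs(c(x)) > 0.25 * prev_viol:
            mu *= 10.0               # adaptive penalty increase
    return x

x = solve_al(np.array([0.0, 0.0]))   # converges toward (0.5, 0.5)
```

The adaptive element is the last step: the penalty grows only when the multiplier update alone fails to reduce the constraint violation sufficiently, which is the flavor of strategy the paper studies in a far more sophisticated form.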
107.
Bingbing Nie, Taewung Kim, Yan Wang, Varun Bollapragada, Tom Daniel, Jeff R. Crandall. Multibody System Dynamics, 2016, 38(3): 297-316
Dimensional scaling approaches are widely used to develop multi-body human models in injury biomechanics research. Given the limited experimental data for any particular anthropometry, a validated model can be scaled to different sizes to reflect the biological variance of the population and used to characterize the human response. This paper compares two scaling approaches at the whole-body level: one is the conventional mass-based scaling approach, which assumes geometric similarity; the other is the structure-based approach, which assumes additional structural similarity by using idealized mechanical models to account for the specific anatomy and expected loading conditions. Given the use of exterior body dimensions and a uniform Young's modulus, the two approaches yielded close values of the scaling factors for most body regions, with, on average, a 1.5 % difference in force scaling factors and a 13.5 % difference in moment scaling factors. One exception was the thoracic modeling, with a 19.3 % difference in the deflection scaling factor. Two 6-year-old child models were generated from a baseline adult model as an application example and were evaluated using recent biomechanical data from cadaveric pediatric experiments. The scaled models predicted similar impact responses of the thorax and lower extremity, which were within the experimental corridors, and suggested further consideration of age-specific structural change of the pelvis. Towards improved scaling methods to develop biofidelic human models, this comparative analysis suggests further investigation of interior anatomical geometry and detailed biological material properties associated with the demographic range of the population.
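The conventional mass-based factors mentioned above follow from geometric similarity alone. A sketch of the standard dimensional-analysis result, assuming equal density and a uniform Young's modulus across subjects (the masses below are invented for illustration):

```python
def geometric_scaling(mass_ratio):
    # Geometric similarity: mass ~ density * L^3, so the length
    # factor is the cube root of the mass ratio.
    lam_L = mass_ratio ** (1.0 / 3.0)
    return {
        "length":     lam_L,
        "time":       lam_L,          # characteristic time scales with L
        "deflection": lam_L,          # deflections scale with length
        "force":      lam_L ** 2,     # equal stress and modulus => F ~ E * L^2
        "moment":     lam_L ** 3,     # moment ~ force * length
        "mass":       mass_ratio,
    }

# e.g. scaling a hypothetical 77 kg adult model to a 23 kg six-year-old
factors = geometric_scaling(23.0 / 77.0)
```

The structure-based approach compared in the paper modifies some of these exponents region by region (notably for thoracic deflection), which is where the reported 19.3 % discrepancy arises.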
108.
Bram Adams, Ryan Kavanagh, Ahmed E. Hassan, Daniel M. German. Empirical Software Engineering, 2016, 21(3): 960-1001
Reuse of software components, either closed or open source, is considered to be one of the most important best practices in software engineering, since it reduces development cost and improves software quality. However, since reused components are (by definition) generic, they need to be customized and integrated into a specific system before they can be useful. Since this integration is system-specific, the integration effort is non-negligible and increases maintenance costs, especially if more than one component needs to be integrated. This paper performs an empirical study of multi-component integration in the context of three successful open source distributions (Debian, Ubuntu and FreeBSD). Such distributions integrate thousands of open source components with an operating system kernel to deliver a coherent software product to millions of users worldwide. We empirically identified seven major integration activities performed by the maintainers of these distributions, documented how these activities are performed by the maintainers, and then evaluated and refined the identified activities with input from six maintainers of the three studied distributions. The documented activities provide a common vocabulary for component integration in open source distributions and outline a roadmap for future research on software integration.
109.
Martin Bresler, Daniel Průša, Václav Hlaváč. International Journal on Document Analysis and Recognition, 2016, 19(3): 253-267
We introduce a new, online, stroke-based recognition system for hand-drawn diagrams, which belong to a group of documents with an explicit structure obvious to humans but only loosely defined from the machine point of view. We propose a model for recognition by selection of symbol candidates, based on evaluation of relations between candidates using a set of predicates. It is suitable for simpler structures where the relations are explicitly given by symbols, arrows in the case of diagrams. Knowledge of a specific diagram domain is used; the two domains are flowcharts and finite automata. Although the individual pipeline steps are tailored for these, the system can readily be adapted for other domains. Our entire diagram recognition pipeline is outlined. Its core parts are text/non-text separation, symbol segmentation, symbol classification and structural analysis. Individual parts have been published by the authors previously and so are described briefly and referenced. Thorough evaluation on benchmark databases shows that the accuracy of the system reaches the state of the art and is ready for practical use. The paper brings several contributions: (a) the entire system and its state-of-the-art performance; (b) the methodology for exploring document structure when it is loosely defined; (c) the thorough experimental evaluation; (d) the new annotated database for online sketched flowcharts and finite automata diagrams.
110.
Peter Schrammel, Tom Melham, Daniel Kroening. International Journal on Software Tools for Technology Transfer (STTT), 2016, 18(3): 319-334
Testing of reactive systems is challenging because long input sequences are often needed to drive them into a state in which a desired feature can be tested. This is particularly problematic in on-target testing, where a system is tested in its real-life application environment and the amount of time required for resetting is high. This article presents an approach to discovering a test case chain: a single software execution that covers a group of test goals and minimizes overall test execution time. Our technique targets the scenario in which test goals for the requirements are given as safety properties. We give conditions for the existence and minimality of a single test case chain and minimize the number of test case chains if a single chain is infeasible. We report experimental results with our ChainCover tool for C code generated from Simulink models and compare it to state-of-the-art test suite generators.
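The chaining idea can be illustrated with a toy sketch: drive a small state machine through several goal states in one run, stitching shortest input segments between consecutive goals. The FSM and the greedy BFS stitching below are invented for illustration and are not the ChainCover algorithm, which reasons over safety properties and minimizes chains.

```python
from collections import deque

# Hypothetical reactive system as an explicit FSM: state -> {input: next_state}
fsm = {
    "idle":   {"start": "armed"},
    "armed":  {"fire": "active", "reset": "idle"},
    "active": {"stop": "idle"},
}

def shortest_inputs(src, dst):
    # BFS for the shortest input sequence driving the FSM from src to dst
    queue, seen = deque([(src, [])]), {src}
    while queue:
        state, path = queue.popleft()
        if state == dst:
            return path
        for inp, nxt in fsm[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [inp]))
    return None

def chain(goals, start="idle"):
    # Greedily stitch the goal states into one test case chain
    inputs, state = [], start
    for goal in goals:
        inputs += shortest_inputs(state, goal)
        state = goal
    return inputs

inputs = chain(["armed", "active"])   # one execution, no reset in between
```

The benefit over a conventional test suite is visible even here: covering both goals in one chain needs two inputs, whereas two separate test cases would each pay the cost of driving the system from its initial state.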