91.
In recent years, there has been a significant shift from rigid development (RD) toward agile. However, it has also been observed that agile methodologies are rarely followed in their pure form; hybrid processes combining RD and agile practices have emerged. In addition, agile adoption has been reported to bring both benefits and limitations. This exploratory study (a) identifies development models based on practitioners' usage of RD and agile practices; (b) identifies agile practice adoption scenarios by eliciting practice usage over time; and (c) prioritizes agile benefits and limitations in relation to (a) and (b). Practitioners provided answers through a questionnaire. The development models are determined using hierarchical cluster analysis. The use of practices over time is captured through an interactive board with practices and time-indication sliders. The study uses the extended hierarchical voting analysis framework to investigate benefit and limitation prioritization. Four types of development models and six adoption scenarios were identified. Overall, 45 practitioners participated in the prioritization study. A benefit common to all models and adoption patterns is knowledge and learning, while high demands on professional skills were perceived as the main limitation. Furthermore, significant variation in benefits and limitations was observed between models and adoption patterns. The most significant internal benefit categories of adopting agile are knowledge and learning, employee satisfaction, social skill development, and feedback and confidence. Demands on specific professional skills, scalability, and lack of suitability for particular product domains are the main limitations of agile practice usage. A balanced agile process makes it possible to achieve a large number of benefits, while, with respect to adoption, a big-bang transition from RD to agile leads to poorer quality than the alternatives.
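The study's development models are derived by hierarchical cluster analysis of practice-usage answers. A minimal sketch of that idea, assuming illustrative practice names, toy binary usage vectors, and simple single-linkage agglomeration (none of which come from the paper):

```python
# Hypothetical sketch: grouping practitioners into development models by
# agglomerative (hierarchical) clustering of their practice-usage profiles.
# The practices and usage vectors below are illustrative, not the study's data.
from itertools import combinations

def hamming(a, b):
    """Distance between two binary practice-usage vectors."""
    return sum(x != y for x, y in zip(a, b))

def agglomerate(profiles, n_clusters):
    """Single-linkage agglomerative clustering down to n_clusters groups."""
    clusters = [[i] for i in range(len(profiles))]
    while len(clusters) > n_clusters:
        # Merge the pair of clusters with the smallest single-link distance.
        i, j = min(
            combinations(range(len(clusters)), 2),
            key=lambda ij: min(
                hamming(profiles[a], profiles[b])
                for a in clusters[ij[0]] for b in clusters[ij[1]]
            ),
        )
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# 1 = practice used, 0 = not used, over
# [daily standup, TDD, sprint planning, big design up front]
respondents = [
    (1, 1, 1, 0),  # agile-leaning
    (1, 1, 1, 0),
    (1, 0, 1, 0),
    (0, 0, 0, 1),  # RD-leaning
    (0, 0, 1, 1),  # hybrid
]
print(agglomerate(respondents, 2))  # [[0, 1, 2], [3, 4]]
```

Cutting the dendrogram at different depths yields coarser or finer "development models", which is what makes the clustering hierarchical.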
92.
Damien Sulla-Menashe Mark A. Friedl Olga N. Krankina Alessandro Baccini Curtis E. Woodcock Adam Sibley Guoqing Sun Viacheslav Kharuk Vladimir Elsakov 《Remote sensing of environment》2011,115(2):392-403
The Northern Eurasian land mass encompasses a diverse array of land cover types including tundra, boreal forest, wetlands, semi-arid steppe, and agricultural land use. Despite the well-established importance of Northern Eurasia in the global carbon and climate system, the distribution and properties of land cover in this region are not well characterized. To address this knowledge and data gap, a hierarchical mapping approach was developed that encompasses the study area for the Northern Eurasia Earth System Partnership Initiative (NEESPI). The Northern Eurasia Land Cover (NELC) database developed in this study follows the FAO Land Cover Classification System and provides nested groupings of land cover characteristics, with separate layers for land use, wetlands, and tundra. The database implementation is substantially different from other large-scale land cover datasets that provide maps based on a single set of discrete classes. By providing a database consisting of nested maps and complementary layers, the NELC database offers a flexible framework that allows users to tailor maps to suit their needs. The methods used to create the database combine empirically derived climate–vegetation relationships with results from supervised classifications based on Moderate Resolution Imaging Spectroradiometer (MODIS) data. The hierarchical approach provides an effective framework for integrating climate–vegetation relationships with remote sensing-based classifications, and also allows sources of error to be characterized and attributed to specific levels in the hierarchy. The cross-validated accuracy was 73% for the land cover map, and 73% and 91% for the agriculture and wetland classifications, respectively. These results support the use of hierarchical classification and climate–vegetation relationships for mapping land cover at continental scales.
93.
Adam C. Woodbury Jason F. Shepherd Matthew L. Staten Steven E. Benzley 《Engineering with Computers》2011,27(1):95-104
Finite element mesh adaptation methods can be used to improve the efficiency and accuracy of solutions to computational modeling problems. In many applications involving hexahedral meshes, localized modifications which preserve a conforming all-hexahedral mesh are desired. Effective hexahedral refinement methods that satisfy these criteria have recently become available; however, due to hexahedral mesh topology constraints, little progress has been made in the area of hexahedral coarsening. This paper presents a new method to locally coarsen conforming all-hexahedral meshes. The method works on both structured and unstructured meshes and is not based on undoing previous refinement. Building upon recent developments in quadrilateral coarsening, the method utilizes hexahedral sheet and column operations, including pillowing, column collapsing, and sheet extraction. A general algorithm for automated coarsening is presented, and examples of models that have been coarsened with this new algorithm are shown. While the results are promising, further work is needed to improve the automated process.
94.
The polynomial-time solvable k-hurdle problem is a natural generalization of the classical s-t minimum cut problem in which we must select a minimum-cost subset S of the edges of a graph such that |p ∩ S| ≥ k for every s-t path p. In this paper, we describe a set of approximation algorithms for k-hurdle variants of the NP-hard multiway cut and multicut problems. For the k-hurdle multiway cut problem with r terminals, we give two results, the first being a pseudo-approximation algorithm that outputs a (k−1)-hurdle solution whose cost is at most that of an optimal solution for k hurdles. Second, we provide a 2(1 − 1/r)-approximation algorithm based on rounding the solution of a linear program, for which we give a simple randomized half-integrality proof that works for both edge and vertex k-hurdle multiway cuts and generalizes the half-integrality results of Garg et al. for the vertex multiway cut problem. We also describe an approximation-preserving reduction from vertex cover as evidence that it may be difficult to achieve an approximation ratio better than 2(1 − 1/r). For the k-hurdle multicut problem in an n-vertex graph, we provide an algorithm that, for any constant ε > 0, outputs a ⌈(1−ε)k⌉-hurdle solution of cost at most O(log n) times that of an optimal k-hurdle solution, and we obtain a 2-approximation algorithm for trees.
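The k-hurdle objective can be made concrete on a toy instance. The sketch below is a brute-force check only (the graph, costs, and enumeration are illustrative assumptions, not the paper's algorithms, which are polynomial-time or LP-based):

```python
# Hypothetical sketch of the k-hurdle objective on a toy graph: pick a
# minimum-cost edge set S so that every s-t path crosses S at least k times.
# Exhaustive search over edge subsets -- purely to make the definition concrete.
from itertools import combinations

edges = {  # edge -> cost
    ("s", "a"): 1, ("a", "t"): 1,
    ("s", "b"): 2, ("b", "t"): 2,
}

def all_paths(adj, s, t, seen=()):
    """Enumerate simple s-t paths as tuples of edges."""
    if s == t:
        yield ()
        return
    for v in adj.get(s, ()):
        if v not in seen:
            for rest in all_paths(adj, v, t, seen + (s,)):
                yield ((s, v),) + rest

def min_k_hurdle(edges, s, t, k):
    adj = {}
    for (u, v) in edges:
        adj.setdefault(u, []).append(v)
    paths = list(all_paths(adj, s, t))
    best = None
    for r in range(len(edges) + 1):
        for S in combinations(edges, r):
            # Feasible iff every s-t path meets S at least k times.
            if all(sum(e in S for e in p) >= k for p in paths):
                cost = sum(edges[e] for e in S)
                if best is None or cost < best:
                    best = cost
    return best

print(min_k_hurdle(edges, "s", "t", 1))  # 3: the classical min s-t cut
print(min_k_hurdle(edges, "s", "t", 2))  # 6: every path must be hit twice
```

With k = 1 this reduces to the ordinary minimum s-t cut; larger k forces each path to be "hurdled" multiple times, which is what the multiway and multicut variants generalize.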
95.
The reduction of Langmuir triple- and quadruple-probe data, i.e., the determination of the electron temperature T_e from the measured voltages and currents, requires the solution of an implicit transcendental equation in T_e at every point in time. Random errors and noise in the measurements occasionally preclude solution of the equation, resulting in an indeterminate temperature at those times. We present a method for overcoming this problem that uses the method of maximum likelihood. The experimental uncertainties, assumed to be normally distributed, are used in solving the implicit equation in T_e. At every point in time, a likelihood function is calculated, and the temperature that maximizes this function is taken to be the solution T_e. The uncertainty in the resulting measurement is taken to be the width of the likelihood function. Examples of this technique are shown.
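The procedure above can be sketched numerically. The implicit relation g() below is a stand-in, not the actual triple-probe equation; the grid, voltage, and uncertainty are assumed values. The point is the workflow: scan candidate temperatures, score each with a Gaussian likelihood built from the measurement uncertainty, and take the maximizer, with the curve's width as the error bar:

```python
# Hypothetical sketch of the maximum-likelihood reduction described above.
import math

def g(T_e, V_meas):
    # Stand-in implicit equation: exp(-V/T_e) - 0.5 = 0, whose exact root
    # is T_e = V / ln 2. The real probe equation is transcendental in T_e too.
    return math.exp(-V_meas / T_e) - 0.5

def max_likelihood_Te(V_meas, sigma_V, T_grid):
    best_T, best_L = None, -1.0
    curve = []
    for T in T_grid:
        # Likelihood that the residual of the implicit equation is explained
        # by the (assumed normal) measurement error.
        r = g(T, V_meas)
        L = math.exp(-0.5 * (r / sigma_V) ** 2)
        curve.append((T, L))
        if L > best_L:
            best_T, best_L = T, L
    # Half-width of the likelihood curve ~ uncertainty on T_e.
    half = [T for T, L in curve if L >= 0.5 * best_L]
    return best_T, (max(half) - min(half)) / 2

T_grid = [0.1 * i for i in range(1, 200)]
T_hat, dT = max_likelihood_Te(V_meas=2.0, sigma_V=0.05, T_grid=T_grid)
print(round(T_hat, 1))  # 2.9, close to the exact root 2.0 / ln 2 ≈ 2.885
```

Because every candidate temperature receives a finite likelihood, noisy samples that would make the implicit equation unsolvable still yield a (wide, honest) temperature estimate rather than an indeterminate one.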
96.
Fatigue is a serious issue for the rail industry, increasing inefficiency and accident risk. The performance of 20 train drivers in a rail simulator was investigated at low, moderate and high fatigue levels. Psychomotor vigilance (PVT), self-rated performance and subjective alertness were also assessed. Alertness, PVT reaction times, extreme speed violations (>25% above the limit) and penalty brake applications increased with increasing fatigue level. In contrast, fuel use, draft (stretch) forces and braking errors were highest at moderate fatigue levels. Thus, at high fatigue levels, errors involving a failure to act (errors of omission) increased, whereas incorrect responses (errors of commission) decreased. The differential effect of fatigue on error types can be explained through a cognitive disengagement with the virtual train at high fatigue levels. Interaction with the train reduced dramatically, and accident risk increased. Awareness of fatigue-related performance changes was moderate at best. These findings are of operational concern.
97.
We prove new lower bounds for learning intersections of halfspaces, one of the most important concept classes in computational learning theory. Our main result is that any statistical-query algorithm for learning the intersection of √n halfspaces in n dimensions must make 2^{Ω(√n)} queries. This is the first non-trivial lower bound on the statistical query dimension for this concept class (the previous best lower bound was n^{Ω(log n)}). Our lower bound holds even for intersections of low-weight halfspaces; in the latter case, it is nearly tight. We also show that the intersection of two majorities (low-weight halfspaces) cannot be computed by a polynomial threshold function (PTF) with fewer than n^{Ω(log n / log log n)} monomials. This is the first super-polynomial lower bound on the PTF length of this concept class, and it is nearly optimal. For intersections of k = ω(log n) low-weight halfspaces, we improve our lower bound to min{2^{Ω(√n)}, n^{Ω(k/log k)}}, which too is nearly optimal. As a consequence, intersections of even two halfspaces are not computable by polynomial-weight PTFs, the most expressive class of functions known to be efficiently learnable via Jackson's Harmonic Sieve algorithm. Finally, we report our progress on the weak learnability of intersections of halfspaces under the uniform distribution.
98.
The comparison of digital images to determine their degree of similarity is one of the fundamental problems of computer vision. Many techniques exist which accomplish this with a certain level of success, most of which involve either the analysis of pixel-level features or the segmentation of images into sub-objects that can be geometrically compared. In this paper we develop and evaluate a new variation of the pixel feature and analysis technique known as the color correlogram in the context of a content-based image retrieval system. Our approach is to extend the autocorrelogram by adding multiple image features in addition to color. We compare the performance of each index scheme with our method for image retrieval on a large database of images. The experiments show that our proposed method gives a significant improvement over histogram or color correlogram indexing, and it is also memory-efficient.
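The color autocorrelogram the paper builds on can be sketched directly. The toy image, quantization to two colors, and Chebyshev distance are illustrative assumptions; the real index is computed over quantized colors and several distances on full images:

```python
# Hypothetical sketch of a color autocorrelogram: for each quantized color c
# and distance d, the probability that a pixel at distance d from a color-c
# pixel also has color c. Captures spatial color correlation, unlike a histogram.
def autocorrelogram(img, colors, distances):
    h, w = len(img), len(img[0])
    result = {}
    for c in colors:
        for d in distances:
            same = total = 0
            for y in range(h):
                for x in range(w):
                    if img[y][x] != c:
                        continue
                    # Neighbors at Chebyshev (chessboard) distance exactly d.
                    for dy in range(-d, d + 1):
                        for dx in range(-d, d + 1):
                            if max(abs(dy), abs(dx)) != d:
                                continue
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w:
                                total += 1
                                same += img[ny][nx] == c
            result[(c, d)] = same / total if total else 0.0
    return result

img = [  # toy 3x3 image quantized to two colors
    [0, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
]
print(autocorrelogram(img, colors=[0, 1], distances=[1]))
```

Extending this index with features other than color, as the paper proposes, amounts to replacing the color-equality test with a test on a richer per-pixel feature vector.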
99.
Olle EW Deogracias MP Messamore JE McClintock SD Barron AG Anderson TD Johnson KJ 《Proteomics. Clinical applications》2007,1(10):1212-1220
Wegener's Granulomatosis (WG) is an idiopathic granulomatous autoimmune vasculitis that primarily affects small vessels and is associated with glomerulonephritis and pulmonary granulomatous vasculitis. Anti‐neutrophil cytoplasmic auto‐antibodies (cANCA) against proteinase‐3 are used to identify WG, but ANCA titers are not present in some patients with the localized disease. The objective of this study was to develop an antibody array to help identify protein expression patterns in serum from patients with WG compared with normal controls. The arrays were tested for limits of detection, background, and cross-reactivity using standard proteins. The arrays were hybridized with either normal patient serum (n = 30) or with serum samples from a population of WG patients (n = 26) that were age- and sex-matched. Data analysis and curve fitting of the standard dilution series yielded r² values and determined a sensitivity of <50 pg/mL for the majority of proteins. A total of 24 proteins were assessed. Several statistically significant increases (p < 0.05) were seen in the expression of angiotensin converting enzyme‐I, IFN‐γ, IL‐8, s‐ICAM‐1 and s‐VCAM in WG patients as compared to controls. Utilizing the antibody microarray technology has led to the identification of potential biomarkers of vascular injury in the serum of WG patients.
100.
Balancing systematic and flexible exploration of social networks
Social network analysis (SNA) has emerged as a powerful method for understanding the importance of relationships in networks. However, interactive exploration of networks is currently challenging because: (1) it is difficult to find patterns and comprehend the structure of networks with many nodes and links, and (2) current systems are often a medley of statistical methods and overwhelming visual output, which leaves many analysts uncertain about how to explore in an orderly manner. This results in exploration that is largely opportunistic. Our contributions are techniques to help structural analysts understand social networks more effectively. We present SocialAction, a system that uses attribute ranking and coordinated views to help users systematically examine numerous SNA measures. Users can (1) flexibly iterate through visualizations of measures to gain an overview, filter nodes, and find outliers, (2) aggregate networks using link structure, find cohesive subgroups, and focus on communities of interest, and (3) untangle networks by viewing different link types separately, or find patterns across different link types using a matrix overview. For each operation, a stable node layout is maintained in the network visualization so users can make comparisons. SocialAction offers analysts a strategy beyond opportunism, as it provides systematic, yet flexible, techniques for exploring social networks.
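The attribute-ranking step above can be sketched with a single SNA measure. Degree centrality stands in here for the "numerous SNA measures" the system supports, and the toy network is an illustrative assumption:

```python
# Hypothetical sketch of SocialAction-style attribute ranking: compute an SNA
# measure for every node, rank the nodes, and surface outliers for filtering.
def degree_centrality(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    n = len(deg)
    # Normalize by the maximum possible degree (n - 1).
    return {node: d / (n - 1) for node, d in deg.items()}

def rank_nodes(measure):
    """Attribute ranking: highest-scoring nodes first."""
    return sorted(measure.items(), key=lambda kv: -kv[1])

edges = [
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "carol"), ("eve", "dave"),
]
ranking = rank_nodes(degree_centrality(edges))
print(ranking[0][0])  # "alice" -- the hub surfaces at the top of the ranking
```

Iterating this ranking across different measures (betweenness, closeness, and so on) while keeping the node layout fixed is what lets an analyst explore systematically rather than opportunistically.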