1,103 results in total (search time: 15 ms)
11.
Leaching and characterisation studies were undertaken on two chromate-inhibited epoxy polyamide primers. Leaching was carried out in 5% (w/v) NaCl solutions at different pH values (1, 3, 5 and 7), and the amount of Cr released into solution was monitored over time. Cr release was initially high, but slowed as immersion time increased. Before and after immersion, the primers were characterised by several techniques, including electron microprobe analysis, X-ray microdiffraction, Raman spectroscopy, and positron annihilation lifetime spectroscopy. The unexposed primers were found to contain the inorganic phases SrCrO4, BaSO4 and TiO2 (anatase or rutile). Upon immersion, water uptake by the primers was observed, together with a decrease in their SrCrO4 content. These studies provide insights into the mechanism of chromate leaching from inhibited primers.
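The pH dependence reported above is consistent with the textbook solution chemistry of strontium chromate. A sketch of the governing equilibria (standard chemistry, not equations taken from the paper):

```latex
% Dissolution of the SrCrO4 pigment, the leachable chromate source:
\[
\mathrm{SrCrO_4(s)} \;\rightleftharpoons\; \mathrm{Sr^{2+}} + \mathrm{CrO_4^{2-}},
\qquad K_{sp} = [\mathrm{Sr^{2+}}][\mathrm{CrO_4^{2-}}]
\]
% At low pH, chromate is protonated (and dimerises at higher concentration),
% consuming CrO4^2- and pulling the dissolution to the right:
\[
\mathrm{CrO_4^{2-}} + \mathrm{H^+} \;\rightleftharpoons\; \mathrm{HCrO_4^-},
\qquad
2\,\mathrm{HCrO_4^-} \;\rightleftharpoons\; \mathrm{Cr_2O_7^{2-}} + \mathrm{H_2O}
\]
```

On this chemistry, the acidic leachates (pH 1-3) would be expected to show the fastest initial Cr release, with the rate tailing off as the pigment near the coating surface is depleted.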
12.
13.
In this paper, we consider augmented Lagrangian (AL) algorithms for solving large-scale nonlinear optimization problems that execute adaptive strategies for updating the penalty parameter. Our work is motivated by the recently proposed adaptive AL trust region method of Curtis et al. [An adaptive augmented Lagrangian method for large-scale constrained optimization, Math. Program. 152 (2015), pp. 201–245]. The first focal point of this paper is a new variant of the approach that employs a line search rather than a trust region strategy; a critical algorithmic feature of the line search strategy is the use of convexified piecewise quadratic models of the AL function for computing the search directions. We prove global convergence guarantees for our line search algorithm that are on par with those for the previously proposed trust region method. A second focal point is the practical performance of the line search and trust region variants implemented in Matlab, as well as that of an adaptive penalty parameter updating strategy incorporated into the Lancelot software. We test these methods on problems from the CUTEst and COPS collections, as well as on challenging test problems related to optimal power flow. Our numerical experience suggests that the adaptive algorithms outperform traditional AL methods in terms of efficiency and reliability. As with traditional AL algorithms, the adaptive methods are matrix-free and thus represent a viable option for solving large-scale problems.
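A minimal sketch of the AL line-search skeleton described above, on a toy equality-constrained problem (the search direction comes from a plain Newton model of the AL rather than the paper's convexified piecewise quadratic models, and the penalty update is a generic stall test rather than their adaptive steering rule; all constants and tolerances are illustrative assumptions):

```python
import numpy as np

# Toy equality-constrained problem:  min f(x)  s.t.  c(x) = 0
f  = lambda x: x[0]**2 + 2.0 * x[1]**2
g  = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])   # gradient of f
Hf = np.diag([2.0, 4.0])                            # Hessian of f
c  = lambda x: np.array([x[0] + x[1] - 1.0])        # equality constraint
J  = np.array([[1.0, 1.0]])                         # Jacobian of c (constant here)

def AL(x, y, mu):
    """Augmented Lagrangian: f(x) - y.c(x) + ||c(x)||^2 / (2*mu)."""
    return f(x) - y @ c(x) + 0.5 / mu * (c(x) @ c(x))

def AL_grad(x, y, mu):
    return g(x) - J.T @ y + (1.0 / mu) * (J.T @ c(x))

x, y, mu = np.zeros(2), np.zeros(1), 1.0
feas_prev = np.inf
for outer in range(50):
    # Inner loop: minimise the AL in x with model-based directions + line search
    for inner in range(50):
        grad = AL_grad(x, y, mu)
        if np.linalg.norm(grad) < 1e-10:
            break
        H = Hf + (1.0 / mu) * (J.T @ J)      # quadratic model of the AL
        d = np.linalg.solve(H, -grad)        # search direction from the model
        alpha, phi0, slope = 1.0, AL(x, y, mu), grad @ d
        while AL(x + alpha * d, y, mu) > phi0 + 1e-4 * alpha * slope:
            alpha *= 0.5                     # backtracking (Armijo) line search
        x = x + alpha * d
    # Outer loop: multiplier and penalty-parameter updates
    feas = np.linalg.norm(c(x))
    if feas < 1e-10:
        break
    y = y - c(x) / mu                        # first-order multiplier update
    if feas > 0.25 * feas_prev:
        mu *= 0.1                            # tighten penalty when progress stalls
    feas_prev = feas

print(x, y)   # expect x ~ [2/3, 1/3], y ~ [4/3] for this toy problem
```

Like the methods in the paper, this skeleton needs only gradient and Jacobian information per iteration; the explicit Hessian here is a convenience of the two-variable toy problem, not a requirement.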
14.
Statistical detection of mass malware has been shown to be highly successful. However, this type of malware is less interesting to cyber security officers of larger organizations, who are more concerned with detecting malware indicative of a targeted attack. Here we investigate the potential of statistically based approaches to detect such malware using a malware family associated with a large number of targeted network intrusions. Our approach is complementary to the bulk of statistical malware classifiers, which are typically based on measures of overall similarity between executable files. One problem with that approach is that a malicious executable sharing some, but limited, functionality with known malware is likely to be misclassified as benign. Here a new approach to malware classification is introduced that classifies programs based on their similarity with known malware subroutines. We illustrate that malware and benign programs can share a substantial amount of code, implying that classification should be based on malicious subroutines that occur infrequently, or not at all, in benign programs. Various approaches to accomplishing this task are investigated, and a particularly simple approach appears to be the most effective: compute the fraction of a program's subroutines that are similar to malware subroutines whose close matches do not occur in a larger benign set. If this fraction exceeds roughly 1.5%, the corresponding program can be classified as malicious at a 1-in-1000 false alarm rate. It is further shown that combining the local and overall similarity based approaches can lead to considerably better prediction due to the relatively low correlation of their predictions.
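A schematic of the decision rule described above. The 1.5% threshold comes from the abstract; the exact-hash notion of subroutine similarity is an illustrative stand-in (a real system would use fuzzy or structural matching on disassembled subroutines):

```python
import hashlib

def fingerprint(code: bytes) -> str:
    """Stand-in similarity key: identical bytes -> identical fingerprint.
    The paper's notion of 'similar' subroutines is richer than this."""
    return hashlib.sha256(code).hexdigest()

def suspicious_subroutines(malware_subs, benign_subs):
    """Malware subroutines whose likes were never seen in a benign corpus."""
    benign = {fingerprint(s) for s in benign_subs}
    return {fingerprint(s) for s in malware_subs} - benign

def is_malicious(program_subs, suspicious, threshold=0.015):
    """Flag the program if the fraction of its subroutines matching the
    'benign-free' malware set exceeds ~1.5% (the abstract's operating
    point for a 1-in-1000 false alarm rate)."""
    if not program_subs:
        return False
    hits = sum(fingerprint(s) in suspicious for s in program_subs)
    return hits / len(program_subs) > threshold

# Toy usage: byte strings stand in for disassembled subroutines
malware = [b"xor_loop_decrypt", b"connect_c2_and_beacon"]
benign  = [b"prologue_ret", b"printf_wrapper", b"xor_loop_decrypt"]
susp = suspicious_subroutines(malware, benign)     # only the C2 routine survives
sample = [b"connect_c2_and_beacon"] + [b"lib_%d" % i for i in range(50)]
print(is_malicious(sample, susp))                  # 1/51 ~ 2.0% -> True
```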
15.
Ecosystem energy has been shown to be a strong correlate of biological diversity at continental scales. Early efforts to characterize this association used the normalized difference vegetation index (NDVI) to represent ecosystem energy. While this spectral vegetation index covaries with measures of ecosystem energy such as net primary production, the covariation is known to degrade in areas of very low vegetation or of dense forest. Two of the new vegetation products from the MODIS sensor, derived by integrating spectral reflectance, climate data, and land cover, are thought to approximate primary productivity better than NDVI. In this study, we determine whether the new MODIS-derived measures of primary production, gross primary productivity (GPP) and net primary productivity (NPP), explain variation in bird richness better than the historically used NDVI. Moreover, we evaluate whether the two productivity measures covary more strongly with bird diversity in those vegetation conditions where the limitations of NDVI are well recognized.

Biodiversity was represented as native landbird species richness derived from the North American Breeding Bird Survey (BBS). Analyses included correlations among the predictor variables and univariate regressions between each predictor and bird species richness. Analyses were done at two levels: for all BBS routes across natural landscapes in North America, and for routes in 10 vegetation classes stratified by vegetated cover along a gradient from bare ground to herbaceous cover to tree cover. We found that NDVI, GPP and NPP were highly correlated and explained similar variation in bird species richness when analyzed across all samples in North America. However, when samples were stratified by vegetated cover, the correlation between NDVI and both productivity measures was weak for samples with bare ground and for dense forest. NDVI also explained substantially less variation in bird species richness than the productivity measures in areas with more bare ground and in areas of dense forest. We conclude that the MODIS productivity measures have higher utility in studies of the relationship between species richness and productivity, and that MODIS GPP and NPP improve on NDVI, especially for studies with large variation in vegetated cover and density.
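A sketch of the stratified univariate analysis described above, using synthetic placeholder data in place of the BBS/MODIS tables (all column names, distributions, and the NDVI saturation model are illustrative assumptions):

```python
import numpy as np
import pandas as pd
import scipy.stats as st

# Placeholder data standing in for BBS routes joined to MODIS products:
# one row per route with richness, NDVI, GPP, NPP, and a vegetated-cover class.
rng = np.random.default_rng(0)
n = 500
gpp = rng.gamma(4.0, 250.0, n)                        # pseudo-GPP (g C m-2 yr-1)
routes = pd.DataFrame({
    "gpp": gpp,
    "npp": 0.5 * gpp + rng.normal(0, 50, n),          # NPP tracks GPP
    "ndvi": np.tanh(gpp / 800.0) + rng.normal(0, 0.05, n),  # NDVI saturates
    "bird_richness": 20 + 0.02 * gpp + rng.normal(0, 5, n),
    "cover_class": pd.qcut(gpp, 10, labels=False),    # 10 cover strata
})

def univariate_r2(df, predictor, response="bird_richness"):
    """R^2 of a univariate linear regression, the comparison used in the study."""
    return st.linregress(df[predictor], df[response]).rvalue ** 2

# Continental-scale comparison across all routes
print({p: round(univariate_r2(routes, p), 3) for p in ("ndvi", "gpp", "npp")})

# Stratified comparison within each cover class: NDVI's explanatory power
# should drop in the sparsest and densest strata, where it loses signal
for cover, grp in routes.groupby("cover_class"):
    print(cover, {p: round(univariate_r2(grp, p), 3)
                  for p in ("ndvi", "gpp", "npp")})
```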
16.
We present an interactive algorithm for continuous collision detection between deformable models. We introduce multiple techniques to improve the culling efficiency and the overall performance of continuous collision detection. First, we present a novel formulation for continuous normal cones and use these normal cones to efficiently cull large regions of the mesh during self-collision tests. Second, we introduce the concept of “procedural representative triangles” to remove all redundant elementary tests between nonadjacent triangles. Finally, we exploit the mesh connectivity and introduce the concept of “orphan sets” to eliminate redundant elementary tests between adjacent triangle primitives. In practice, we can reduce the number of elementary tests by two orders of magnitude. These culling techniques have been combined with bounding volume hierarchies and can yield an order-of-magnitude performance improvement over prior collision detection algorithms for deformable models. We highlight the performance of our algorithm on several benchmarks, including cloth simulations, N-body simulations, and breaking objects.
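A sketch of the normal-cone culling idea in its discrete form (the paper's contribution is a continuous formulation that bounds the normals over the whole motion interval; that machinery, and the required boundary-contour test, are not reproduced here):

```python
import numpy as np

def normal_cone(normals):
    """Bounding cone (axis, half-angle) of a patch's unit normals.
    A simple two-pass approximation, not the minimal enclosing cone."""
    axis = np.mean(normals, axis=0)
    axis /= np.linalg.norm(axis)
    half_angle = max(np.arccos(np.clip(axis @ n, -1.0, 1.0)) for n in normals)
    return axis, half_angle

def can_cull_self_collision(normals):
    """If every normal of a mesh patch fits in a cone with half-angle < pi/2,
    the patch cannot fold back onto itself, so (up to a contour test on the
    patch boundary) its self-collision tests can be culled."""
    _, half_angle = normal_cone(normals)
    return half_angle < 0.5 * np.pi

# Nearly flat patch -> cullable; sharply folded patch -> must be tested
flat = [np.array([0.0, 0.0, 1.0]), np.array([0.1, 0.0, 1.0])]
flat = [n / np.linalg.norm(n) for n in flat]
folded = flat + [np.array([0.0, 0.0, -1.0])]
print(can_cull_self_collision(flat), can_cull_self_collision(folded))  # True False
```

In a full pipeline this test sits at each BVH node: when it succeeds, the entire subtree is skipped during self-collision traversal, which is where the reported culling gains come from.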
17.
The interpretation of pore dimensions based on physical ad-desorption analyses is central to the characterization of pore network structure. Several approaches have been proposed and are commonly employed in the analysis of physical adsorption and/or desorption to deduce the dimensions of the porous network. These approaches assume either theoretical models (e.g., BET, the Halsey equation as interpreted by Pierce et al., or the more recent analyses of microporosity) or standard isotherms as the basis for the sequential calculations required in estimating pore network dimensions. The subsequent representation of the pore dimensions, and the relationship between these size distributions and other experimental parameters (such as catalytic activity, adsorptivity or transport), thus depends explicitly on the model employed in the analyses. Each instrument currently available for the measurement of porous solid structure by sorption employs the same specific models for the relationship between the volume ad-desorbed and the dimensions of the porous network being characterized.

This paper analyzes the interpretation of pore dimensions based on the sequential calculations required in the analyses. A new approach is proposed, based on a modification of current practices that reflects Halsey's original theory for the thickness of the adsorbed layer as a function of P/P0. Further, the calculations of the incremental changes in the exposed surface area are discussed as they relate to pore network structure, and a method is proposed to infer differences in pore shape. Sorption data are analyzed by these new approaches, and the results are compared with those of the approaches currently employed. Analyses based on the modified approaches provide a dramatically more consistent interpretation of the sorption data and the corresponding pore network structures.
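For orientation, the standard forms the sequential calculation rests on are below: the Halsey expression for the statistical thickness t of the adsorbed layer and the Kelvin equation that converts a relative pressure into a core radius (standard forms for nitrogen at 77 K; the paper's modified thickness expression is not reproduced here):

```latex
% Halsey equation: statistical film thickness t as a function of P/P0
% (nitrogen at 77 K, t in angstroms)
\[
t(P/P_0) \;=\; 3.54 \left[ \frac{5}{\ln(P_0/P)} \right]^{1/3} \ \text{\AA}
\]
% Kelvin equation for the core radius r_K emptying at P/P0; the pore
% radius used in the sequential (BJH-type) calculation is r_p = r_K + t
\[
\ln\!\left(\frac{P}{P_0}\right) \;=\; -\,\frac{2\gamma V_m}{r_K R T}
\qquad\Longrightarrow\qquad
r_p = r_K + t(P/P_0)
\]
```

Because t enters every step of the sequential calculation, a different thickness model propagates through the whole inferred pore-size distribution, which is why the choice of model matters as the abstract argues.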
18.
Multiple grooved substrata with a groove depth of 5 μm were found to facilitate the healing of completely divided rat flexor tendons in vitro. Sections of tendons cultured on plain substrata showed only partial healing, with incompletely sealed epitenon layers and immature, thin collagen fibres. Tendons cultured on patterned substrata healed with complete restoration of the epitenon layer and reconstitution of the internal structure of the collagen fibres. Epitenon fibroblasts isolated from the surface of rat flexor tendons were shown to be more sensitive to topographical features than BHK fibroblasts of similar size: they remained more elongated and better aligned with the groove direction than BHK cells. Multiple grooved substrata also facilitated epitenon cell movement; cells moved at higher speed on patterned than on plain substrata. In summary, we conclude that the use of multiple grooved substrata promotes tendon healing in vitro and may find application in clinical tendon repair.
19.
20.
Web applications are fast becoming more widespread, larger, more interactive, and more essential to the international use of computers. It is well understood that web applications must be highly dependable, yet as a field we are just now beginning to understand how to model and test them. One straightforward technique is to model a web application as a finite state machine (FSM). However, large numbers of input fields, input choices, and the ability to enter values in any order combine to create a state space explosion problem. This paper evaluates a solution that uses constraints on the inputs to reduce the number of transitions, thus compressing the FSM. The paper presents an analysis of the potential savings of the compression technique and reports actual savings from two case studies.
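A back-of-the-envelope illustration of the explosion and of what an input constraint buys (generic combinatorics, not the paper's case-study figures):

```python
def unconstrained(k):
    """Model every subset of k order-independent input fields as its own FSM
    state: 2^k states, and k * 2^(k-1) 'fill one more field' transitions."""
    return 2 ** k, k * 2 ** (k - 1)

def constrained(k):
    """The constraint 'all k fields filled, in any order, before submit'
    collapses the subset lattice into one constraint-annotated transition."""
    return 2, 1

for k in (3, 5, 10, 20):
    s, t = unconstrained(k)
    cs, ct = constrained(k)
    print(f"k={k:2d}: {s:9d} states / {t:10d} transitions"
          f"  ->  {cs} states / {ct} transition")
```

Even at 20 inputs on a single page the unconstrained model exceeds a million states, while the constraint-annotated model stays constant in size, which is the kind of compression the paper quantifies on its two case studies.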