Search results: 6,868 matches found (search time: 515 ms)
151.
Dowson N Bowden R 《IEEE transactions on pattern analysis and machine intelligence》2008,30(1):180-185
Mutual Information (MI) is popular for registration via function optimisation. This work proposes an inverse compositional formulation of MI for Levenberg-Marquardt optimisation. This yields a constant Hessian, which may be pre-computed. Speed improvements of 15% were obtained, with convergence accuracies similar to those of the standard formulation.
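As a rough illustration of the similarity measure involved (not the paper's inverse compositional formulation), mutual information between two images can be estimated from a joint intensity histogram. The function name and binning scheme below are assumptions for the sketch:

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8, lo=0, hi=256):
    """Estimate MI (in nats) between two equal-length intensity sequences
    by binning them into a joint histogram."""
    assert len(a) == len(b)
    n = len(a)
    scale = bins / (hi - lo)
    # Joint histogram over (bin of a, bin of b) pairs.
    joint = Counter((int((x - lo) * scale), int((y - lo) * scale))
                    for x, y in zip(a, b))
    # Marginal histograms derived from the joint one.
    px, py = Counter(), Counter()
    for (i, j), c in joint.items():
        px[i] += c
        py[j] += c
    mi = 0.0
    for (i, j), c in joint.items():
        p = c / n
        mi += p * math.log(p / ((px[i] / n) * (py[j] / n)))
    return mi
```

For a perfectly dependent pair (e.g. an image compared with itself) MI equals the image's entropy; for a constant second image it is zero.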
152.
Marcelo Siqueira Longin Jan Latecki Nicholas Tustison Jean Gallier James Gee 《Journal of Mathematical Imaging and Vision》2008,30(3):249-274
We present here a new randomized algorithm for repairing the topology of objects represented by 3D binary digital images. By “repairing the topology”, we mean a systematic way of modifying a given binary image in order to produce a similar binary image which is guaranteed to be well-composed. A 3D binary digital image is said to be well-composed if, and only if, the square faces shared by background and foreground voxels form a 2D manifold. Well-composed images enjoy some special properties which can make such images very desirable in practical applications. For instance, well-known algorithms for extracting surfaces from and thinning binary images can be simplified and optimized for speed if the input image is assumed to be well-composed. Furthermore, some algorithms for computing surface curvature and extracting adaptive triangulated surfaces, directly from the binary data, can only be applied to well-composed images. Finally, we introduce an extension of the aforementioned algorithm to repairing 3D digital multivalued images. Such an algorithm finds application in repairing segmented images resulting from multi-object segmentations of other 3D digital multivalued images.
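In 2D, the well-composedness condition reduces to forbidding 2×2 checkerboard blocks of foreground and background pixels. A minimal sketch of that check (the 2D analogue only, not the paper's 3D repair algorithm; `is_well_composed_2d` is a hypothetical name):

```python
def is_well_composed_2d(img):
    """A 2D binary image is well-composed iff no 2x2 block forms a
    checkerboard, i.e. neither of the two diagonal 'critical
    configurations' occurs anywhere in the image."""
    rows, cols = len(img), len(img[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            a, b = img[r][c], img[r][c + 1]
            d, e = img[r + 1][c], img[r + 1][c + 1]
            # Checkerboard: equal diagonals, differing values.
            if a == e and b == d and a != b:
                return False
    return True
```

A repair step would then flip one pixel in each offending block; the 3D case additionally forbids analogous 2×2×2 configurations.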
153.
Nicholas P. Webb Hamish A. McGowan Stuart R. Phinn John F. Leys Grant H. McTainsh 《Environmental Modelling & Software》2009,24(2):214-227
This paper describes the development and validation of the Australian Land Erodibility Model (AUSLEM), designed to predict land susceptibility to wind erosion in western Queensland, Australia. The model operates at a 5 × 5 km spatial resolution on a daily time-step with inputs of grass and tree cover, soil moisture, soil texture and surficial stone cover. The system was implemented to predict land erodibility, i.e. susceptibility to wind erosion, for the period 1980–1990. Model performance was evaluated using cross-correlation analyses to compare trajectories of mean annual land erodibility at selected locations with trends in wind speed and observational records of dust events and a Dust Storm Index (DSI). The validation was conducted at four spatial length scales from 25 to 150 km using windows to represent potential dust source areas centered on and positioned around eight meteorological stations within the study area. The predicted land erodibility had strong correlations with dust-event frequencies at half of the stations. Poor correlations at the other stations were linked to the inability of the model to account for temporal changes in soil erodibility, and comparing trends in the land erodibility of regions with dust events whose source areas lie outside the regions of interest. The model agreement with dust-event frequency trends was found to vary across spatial scales and was highly dependent on land type characteristics around the stations and on the types of dust events used for validation.
154.
Efficient application identification and the temporal and spatial stability of classification schema
Motivated by the importance of accurate identification for a range of applications, this paper compares and contrasts the effective and efficient classification of network-based applications using behavioral observations of network-traffic and those using deep-packet inspection. Importantly, throughout our work we are able to make comparison with data possessing an accurate, independently determined ground-truth that describes the actual applications causing the network-traffic observed. In a unique study in both the spatial-domain (comparing across different network-locations) and the temporal-domain (comparing across a number of years of data), we illustrate the decay in classification accuracy across a range of application-classification mechanisms. Further, we document the accuracy of spatial classification without training data possessing spatial diversity. Finally, we illustrate the classification of UDP traffic. We use the same classification approach for both stateful flows (TCP) and stateless flows based upon UDP. Importantly, we demonstrate high levels of accuracy: greater than 92% for the worst circumstance regardless of the application.
155.
Hierarchical classification of protein function with ensembles of rules and particle swarm optimisation
Nicholas Holden Alex A. Freitas 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2009,13(3):259-272
This paper focuses on hierarchical classification problems where the classes to be predicted are organized in the form of a tree. The standard top-down divide and conquer approach for hierarchical classification consists of building a hierarchy of classifiers where a classifier is built for each internal (non-leaf) node in the class tree. Each classifier discriminates only between its child classes. After the tree of classifiers is built, the system uses them to classify test examples one class level at a time, so that when the example is assigned a class at a given level, only the child classes need to be considered at the next level. This approach has the drawback that, if a test example is misclassified at a certain class level, it will be misclassified at deeper levels too. In this paper we propose hierarchical classification methods to mitigate this drawback. More precisely, we propose a method called hierarchical ensemble of hierarchical rule sets (HEHRS), where different ensembles are built at different levels in the class tree and each ensemble consists of different rule sets built from training examples at different levels of the class tree. We also use a particle swarm optimisation (PSO) algorithm to optimise the rule weights used by HEHRS to combine the predictions of different rules into a class to be assigned to a given test example. In addition, we propose a variant of a method to mitigate the aforementioned drawback of top-down classification. These three types of methods are compared against the standard top-down hierarchical classification method in six challenging bioinformatics datasets, involving the prediction of protein function. Overall, HEHRS with the rule weights optimised by the PSO algorithm obtains the best predictive accuracy out of the four types of hierarchical classification method.
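The standard top-down scheme described above can be sketched as a walk down the class tree, one local classifier per internal node. The data structures and classifier interface here are assumptions for illustration, not the paper's HEHRS method:

```python
def classify_top_down(example, tree, classifiers, root="root"):
    """Top-down hierarchical classification: starting at the root, use the
    local classifier of each internal node to pick one child class, and
    descend until a leaf is reached.

    tree:        maps each internal node to its list of child classes
    classifiers: maps each internal node to a function example -> child class
    """
    node = root
    path = []
    while node in tree:                 # internal node: has children
        node = classifiers[node](example)
        path.append(node)
    return path                         # predicted class at each level
```

Note how a wrong choice at `root` commits every deeper level to the wrong subtree, which is exactly the error-propagation drawback the ensemble methods above aim to mitigate.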
156.
Wireless sensor networks are increasingly seen as a solution to the problem of performing continuous wide-area monitoring in many environmental, security, and military scenarios. The distributed nature of such networks and the autonomous behavior expected of them present many novel challenges. In this article, the authors argue that a new synthesis of electronic engineering and agent technology is required to address these challenges, and they describe three examples where this synthesis has succeeded. In more detail, they describe how these novel approaches address the need for communication and computationally efficient decentralized algorithms to coordinate the behavior of physically distributed sensors, how they enable the real-world deployment of sensor agent platforms in the field, and finally, how they facilitate the development of intelligent agents that can autonomously acquire data from these networks and perform information processing tasks such as fusion, inference, and prediction.
157.
A. Saleem C.B. Wong J. Pu P.R. Moore 《Simulation Modelling Practice and Theory》2009,17(10):1575-1586
This paper outlines a method to identify the friction parameters for servo-pneumatic systems using a mixed-reality environment. Acquiring system friction parameters accurately can be extremely difficult once the servo-system has been assembled, because of its highly nonlinear nature, which causes great difficulty in servo-pneumatic system modelling and control. In this research, a mixed-reality environment has been employed to determine the friction parameters effectively and efficiently through online identification. Traditionally, friction parameter identification is performed manually or automatically using classical optimization methods or modern ones such as neural networks. The advantages of the proposed method are the high accuracy of the estimated parameters, its simplicity and its speed. An experimental case study has been conducted, and the results show the accuracy and effectiveness of the proposed method.
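The paper's mixed-reality identification procedure is not reproduced here, but a common friction model such a procedure might estimate is Coulomb-plus-viscous, F = Fc·sign(v) + B·v, which is linear in its parameters and therefore fittable by ordinary least squares from velocity/force samples. This sketch is an assumption for illustration, not the authors' method:

```python
def fit_friction(v, F):
    """Fit F = Fc*sign(v) + B*v by least squares via the 2x2 normal
    equations; v and F are equal-length sequences of samples."""
    s = [1.0 if vi > 0 else -1.0 for vi in v]   # sign(v) regressor
    a11 = sum(si * si for si in s)
    a12 = sum(si * vi for si, vi in zip(s, v))
    a22 = sum(vi * vi for vi in v)
    b1 = sum(si * fi for si, fi in zip(s, F))
    b2 = sum(vi * fi for vi, fi in zip(v, F))
    det = a11 * a22 - a12 * a12
    Fc = (b1 * a22 - b2 * a12) / det
    B = (a11 * b2 - a12 * b1) / det
    return Fc, B
```

Real pneumatic friction (e.g. Stribeck or stick-slip effects) is far less tractable, which is precisely why online identification in a mixed-reality loop is attractive.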
158.
Long MM DeLucas LJ Smith C Carson M Moore K Harrington MD Pillion DJ Bishop SP Rosenblum WM Naumann RJ Chait A Prahl J Bugg CE 《Microgravity science and technology》1994,7(2):196-202
One of the major stumbling blocks that prevents rapid structure determination using x-ray crystallography is macromolecular crystal growth. There are many examples where crystallization takes longer than structure determination. In some cases, it is impossible to grow useful crystals on earth. Recent experiments conducted in conjunction with NASA on various Space Shuttle missions have demonstrated that protein crystals often grow larger and display better internal molecular order than their earth-grown counterparts. This paper reports results from three Shuttle flights using the Protein Crystallization Facility (PCF). The PCF hardware produced large, high-quality insulin crystals by using a temperature change as the sole means to affect protein solubility and thus, crystallization. The facility consists of cylinders/containers with volumes of 500, 200, 100, and 50 ml. Data from the three Shuttle flights demonstrated that larger, higher resolution crystals (as evidenced by x-ray diffraction data) were obtained from the microgravity experiments when compared to earth-grown crystals.
159.
Edge crossings in drawings of bipartite graphs
Systems engineers have recently shown interest in algorithms for drawing directed graphs so that they are easy to understand and remember. Each of the commonly used methods has a step which aims to adjust the drawing to decrease the number of arc crossings. We show that the most popular strategy involves an NP-complete problem regarding the minimization of the number of arcs in crossings in a bipartite graph. The performance of the commonly employed barycenter heuristic for this problem is analyzed. An alternative method, the median heuristic, is proposed and analyzed. The new method is shown to compare favorably with the old in terms of performance guarantees. As a bonus, we show that the median heuristic performs well with regard to the total length of the arcs in the drawing.
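The median heuristic described above can be sketched directly: order each free-layer vertex by the median position of its neighbours on the fixed layer, then count crossings to compare orderings. The even-degree/tie convention here is one common choice, and the names are assumptions:

```python
def median_order(free_layer, pos):
    """Median heuristic for one-sided crossing minimisation: sort the
    free-layer vertices by the median fixed-layer position of their
    neighbours.

    free_layer: maps each free vertex to its list of fixed-layer neighbours
    pos:        maps each fixed-layer vertex to its position (0, 1, 2, ...)
    """
    def median_key(v):
        ps = sorted(pos[u] for u in free_layer[v])
        return ps[len(ps) // 2] if ps else 0
    return sorted(free_layer, key=median_key)

def count_crossings(order, free_layer, pos):
    """Count arc crossings for a given left-to-right free-layer ordering:
    edges (u,a) and (v,b) with u before v cross iff pos[a] > pos[b]."""
    crossings = 0
    for i, u in enumerate(order):
        for v in order[i + 1:]:
            crossings += sum(1 for a in free_layer[u] for b in free_layer[v]
                             if pos[a] > pos[b])
    return crossings
```

On a small instance the heuristic recovers the crossing-free ordering when one exists; the NP-completeness result above says no polynomial algorithm is known to do this optimally in general.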
160.