Full-text access type
Paid full text | 36221 articles |
Free | 1451 articles |
Free domestically | 60 articles |
专业分类
电工技术 | 369篇 |
综合类 | 29篇 |
化学工业 | 7139篇 |
金属工艺 | 724篇 |
机械仪表 | 739篇 |
建筑科学 | 1977篇 |
矿业工程 | 114篇 |
能源动力 | 1054篇 |
轻工业 | 2891篇 |
水利工程 | 431篇 |
石油天然气 | 118篇 |
武器工业 | 5篇 |
无线电 | 2479篇 |
一般工业技术 | 6124篇 |
冶金工业 | 6678篇 |
原子能技术 | 271篇 |
自动化技术 | 6590篇 |
Publication year
2023 | 202 articles |
2022 | 349 articles |
2021 | 680 articles |
2020 | 462 articles |
2019 | 618 articles |
2018 | 780 articles |
2017 | 694 articles |
2016 | 832 articles |
2015 | 757 articles |
2014 | 1043 articles |
2013 | 2381 articles |
2012 | 1680 articles |
2011 | 2097 articles |
2010 | 1653 articles |
2009 | 1550 articles |
2008 | 1801 articles |
2007 | 1775 articles |
2006 | 1594 articles |
2005 | 1443 articles |
2004 | 1174 articles |
2003 | 1123 articles |
2002 | 1053 articles |
2001 | 704 articles |
2000 | 551 articles |
1999 | 600 articles |
1998 | 603 articles |
1997 | 586 articles |
1996 | 556 articles |
1995 | 583 articles |
1994 | 530 articles |
1993 | 514 articles |
1992 | 501 articles |
1991 | 289 articles |
1990 | 420 articles |
1989 | 391 articles |
1988 | 323 articles |
1987 | 356 articles |
1986 | 311 articles |
1985 | 419 articles |
1984 | 419 articles |
1983 | 318 articles |
1982 | 300 articles |
1981 | 282 articles |
1980 | 271 articles |
1979 | 270 articles |
1978 | 247 articles |
1977 | 227 articles |
1976 | 209 articles |
1975 | 194 articles |
1974 | 175 articles |
Sorted by: 10000 results found, search took 31 ms
991.
Marai GE, Grimm CM, Laidlaw DH. IEEE Transactions on Visualization and Computer Graphics, 2007, 13(5): 1095-1104
Orthopedists invest significant amounts of effort and time trying to understand the biomechanics of arthrodial (gliding) joints. Although new image acquisition and processing methods currently generate richer-than-ever geometry and kinematic data sets that are individual specific, the computational and visualization tools needed to enable the comparative analysis and exploration of these data sets lag behind. In this paper, we present a framework that enables the cross-data-set visual exploration and analysis of arthrodial joint biomechanics. Central to our approach is a computer-vision-inspired markerless method for establishing pairwise correspondences between individual-specific geometry. Manifold models are subsequently defined and deformed from one individual-specific geometry to another such that the markerless correspondences are preserved while minimizing model distortion. The resulting mutually consistent parameterization and visualization allow the users to explore the similarities and differences between two data sets and to define meaningful quantitative measures. We present two applications of this framework to human-wrist data: articular cartilage transfer from cadaver data to in vivo data and cross-data-set kinematics analysis. The method allows our users to combine complementary geometries acquired through different modalities and thus overcome current imaging limitations. The results demonstrate that the technique is useful in the study of normal and injured anatomy and kinematics of arthrodial joints. In principle, the pairwise cross-parameterization method applies to all spherical topology data from the same class and should be particularly beneficial in instances where identifying salient object features is a nontrivial task.
992.
We analyze generalization in XCSF and introduce three improvements. We begin by showing that the types of generalizations evolved by XCSF can be influenced by the input range. To explain these results we present a theoretical analysis of the convergence of classifier weights in XCSF which highlights a broader issue. In XCSF, because of the mathematical properties of the Widrow-Hoff update, the convergence of classifier weights in a given subspace can be slow when the spread of the eigenvalues of the autocorrelation matrix associated with each classifier is large. As a major consequence, the system's accuracy pressure may act before classifier weights are adequately updated, so that XCSF may evolve piecewise constant approximations, instead of the intended, and more efficient, piecewise linear ones. We propose three different ways to update classifier weights in XCSF so as to increase the generalization capabilities of XCSF: one based on a condition-based normalization of the inputs, one based on linear least squares, and one based on the recursive version of linear least squares. Through a series of experiments we show that while all three approaches significantly improve XCSF, least squares approaches appear to be best performing and most robust. Finally we show how XCSF can be extended to include polynomial approximations.
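The contrast drawn in this abstract, between the incremental Widrow-Hoff (LMS) update used by classic XCSF and a direct linear least-squares fit, can be illustrated with a minimal sketch. This is not the paper's implementation; the target function, learning rate, and sample ranges are illustrative assumptions.

```python
import numpy as np

# Hypothetical example: fit y = 2x + 1 on inputs in [0, 1], comparing the
# Widrow-Hoff (LMS) update used by classic XCSF classifiers with a batch
# linear least-squares solve. All constants here are assumptions.

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0

# Augment inputs with a constant bias term, as XCSF does (x0 term).
Xa = np.hstack([np.ones((X.shape[0], 1)), X])

# Widrow-Hoff: one gradient step per sample. Convergence slows when the
# eigenvalue spread of the input autocorrelation matrix is large.
w_lms = np.zeros(2)
eta = 0.2
for _ in range(50):  # multiple passes over the data
    for xi, yi in zip(Xa, y):
        w_lms += eta * (yi - xi @ w_lms) * xi

# Linear least squares: direct solve, no step-size tuning needed.
w_ls, *_ = np.linalg.lstsq(Xa, y, rcond=None)

print(w_lms, w_ls)  # both should approach [1, 2] (bias, slope)
```

On this noiseless toy problem both estimators recover the true coefficients; the abstract's point is that on harder subspaces the LMS weights may still be far from converged when XCSF's accuracy pressure kicks in, which the least-squares variants avoid.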
993.
Davis CH, Deerfield D, Wymore T, Stafford DW, Pedersen LG. Journal of Molecular Graphics & Modelling, 2007, 26(2): 401-408
A reaction path including transition states is generated for the Silverman mechanism [R.B. Silverman, Chemical model studies for the mechanism of Vitamin K epoxide reductase, J. Am. Chem. Soc. 103 (1981) 5939-5941] of action for Vitamin K epoxide reductase (VKOR) using quantum mechanical methods (B3LYP/6-311G**). VKOR, an essential enzyme in mammalian systems, acts to convert Vitamin K epoxide, formed by Vitamin K carboxylase, to its (initial) quinone form for cellular reuse. This study elaborates on a prior work that focused on the thermodynamics of VKOR [D.W. Deerfield II, C.H. Davis, T. Wymore, D.W. Stafford, L.G. Pedersen, Int. J. Quant. Chem. 106 (2006) 2944-2952]. The geometries of proposed model intermediates and transition states in the mechanism are energy optimized. We find that once a key disulfide bond is broken, the reaction proceeds largely downhill. An important step in the conversion of the epoxide back to the quinone form involves initial protonation of the epoxide oxygen. We find that the source of this proton is likely a free mercapto group rather than a water molecule. The results are consistent with the current view that the widely used drug Warfarin likely acts by blocking binding of Vitamin K at the VKOR active site and thereby effectively blocking the initiating step. These results will be useful for designing more complete QM/MM studies of the enzymatic pathway once three-dimensional structural data is determined and available for VKOR.
994.
995.
Biddiscombe J, Geveci B, Martin K, Moreland K, Thompson D. IEEE Transactions on Visualization and Computer Graphics, 2007, 13(6): 1376-1383
Pipeline architectures provide a versatile and efficient mechanism for constructing visualizations, and they have been implemented in numerous libraries and applications over the past two decades. In addition to allowing developers and users to freely combine algorithms, visualization pipelines have proven to work well when streaming data and scale well on parallel distributed-memory computers. However, current pipeline visualization frameworks have a critical flaw: they are unable to manage time-varying data. As data flows through the pipeline, each algorithm has access to only a single snapshot in time of the data. This prevents the implementation of algorithms that do any temporal processing such as particle tracing; plotting over time; or interpolation, fitting, or smoothing of time series data. As data acquisition technology improves, as simulation time-integration techniques become more complex, and as simulations save less frequently and regularly, the ability to analyze the time-behavior of data becomes more important. This paper describes a modification to the traditional pipeline architecture that allows it to accommodate temporal algorithms. Furthermore, the architecture allows temporal algorithms to be used in conjunction with algorithms expecting a single time snapshot, thus simplifying software design and allowing adoption into existing pipeline frameworks. Our architecture also continues to work well in parallel distributed-memory environments. We demonstrate our architecture by modifying the popular VTK framework and exposing the functionality to the ParaView application. We use this framework to apply time-dependent algorithms on large data with a parallel cluster computer and thereby exercise a functionality that previously did not exist.
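The core idea of the modification described here, letting a downstream temporal filter widen its request so the source delivers several time steps instead of one snapshot, can be sketched in a few lines. This is a loose illustration, not the VTK/ParaView API; every class and method name below is an assumption.

```python
# Hypothetical sketch of a time-aware, demand-driven pipeline: a temporal
# interpolation filter asks its upstream source for the two time steps that
# bracket the requested time, while an ordinary filter would still request
# a single snapshot. Names (Source, TemporalInterpolator, request_data)
# are illustrative assumptions, not the VTK API.

class Source:
    def request_data(self, times):
        # Produce one "snapshot" per requested time step
        # (here a scalar value, f(t) = t^2, standing in for a dataset).
        return {t: t * t for t in times}

class TemporalInterpolator:
    """Temporal filter: needs two bracketing snapshots per output time."""

    def __init__(self, upstream):
        self.upstream = upstream

    def request_data(self, times):
        out = {}
        for t in times:
            t0, t1 = int(t), int(t) + 1
            # The request is widened upstream: two steps, not one.
            snaps = self.upstream.request_data([t0, t1])
            a = t - t0
            out[t] = (1 - a) * snaps[t0] + a * snaps[t1]
        return out

pipe = TemporalInterpolator(Source())
print(pipe.request_data([1.5]))  # → {1.5: 2.5}
```

The design point mirrors the abstract: because the widening happens in the request phase, snapshot-oriented filters can sit in the same pipeline unchanged, each still seeing one time step at a time.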
996.
Scheidegger C, Vo H, Koop D, Freire J, Silva C. IEEE Transactions on Visualization and Computer Graphics, 2007, 13(6): 1560-1567
While there have been advances in visualization systems, particularly in multi-view visualizations and visual exploration, the process of building visualizations remains a major bottleneck in data exploration. We show that provenance metadata collected during the creation of pipelines can be reused to suggest similar content in related visualizations and guide semi-automated changes. We introduce the idea of query-by-example in the context of an ensemble of visualizations, and the use of analogies as first-class operations in a system to guide scalable interactions. We describe an implementation of these techniques in VisTrails, a publicly-available, open-source system.
997.
Zhen EY, Berna MJ, Jin Z, Pritt ML, Watson DE, Ackermann BL, Hale JE. Proteomics - Clinical Applications, 2007, 1(7): 661-671
Heart fatty acid binding protein (Fabp3) is a cytosolic protein expressed primarily in heart, and to a lesser extent in skeletal muscle, brain, and kidney. During myocardial injury, the Fabp3 level in serum is elevated rapidly, making it an ideal early marker for myocardial infarction. In this study, an MS-based selected reaction monitoring method (LC-SRM) was developed for quantifying Fabp3 in rat serum. Fabp3 was enriched first through an immobilized antibody, and the protein was digested on beads directly. A marker peptide of Fabp3 was quantified using LC-SRM with a stable isotope-labeled peptide standard. For six quality control samples with Fabp3 ranging from 0.256 to 25 ng, the average recovery following the procedure was about 73%, and the precision (%CV) between replicates was less than 7%. The Fabp3 concentrations in rat serum peaked 1 h after isoproterenol treatment, and returned to baseline levels 24 h after the dose. Elevated Fabp3 levels were also detected in rats administered a PPAR α/δ agonist, which has been shown to cause skeletal muscle necrosis. Fabp3 can be used as a biomarker for both cardiac and skeletal muscle necrosis. The cross-validation of the LC-SRM method with an existing ELISA method is described.
998.
Anna Korzynska, Wojciech Strojny, Andreas Hoppe, David Wertheim, Pawel Hoser. Pattern Analysis & Applications, 2007, 10(4): 301-319
This paper describes a segmentation method combining a texture-based technique with a contour-based method. The technique is designed to enable the study of cell behaviour over time by segmenting brightfield microscope image sequences. The technique was tested on artificial images based on images of living cells, on real sequences acquired from microscope observations of neutrophils and lymphocytes, and on a sequence of MRI images. The results of the segmentation are compared with the results of the watershed and snake segmentation methods. The results show that the method is both effective and practical.
999.
One of the cornerstones of expert performance in complex domains is the ability to perceive problem situations in terms of their task-relevant semantic properties. One such class of properties consists of phenomena that are defined in terms of patterns of change over time, i.e., events. A basic prerequisite for working towards tools to support event recognition is a method for understanding the events that expert practitioners find meaningful in a given field of practice. In this article we present the modified unit marking procedure (mUMP), a technique adapted from work on social perception to facilitate identification of the meaningful phenomena that observers attend to in a dynamic data array. The mUMP and associated data analysis techniques are presented with examples from a first-of-a-kind study where they were used to elicit and understand the events practitioners found meaningful in a scenario from an actual complex work domain.
David D. Woods
1000.
We describe a decision support system to distinguish among hematology cases directly from microscopic specimens. The system uses an image database containing digitized specimens from normal cases and four different hematologic malignancies. Initially, the nuclei and cytoplasmic components of the specimens are segmented using a robust color gradient vector flow active contour model. Using a few cell images from each class, the basic texture elements (textons) for the nuclei and cytoplasm are learned, and the cells are represented through texton histograms. We propose to use support vector machines on the texton-histogram-based cell representation and achieve a major improvement over the commonly used classification methods in texture research. Experiments with 3,691 cell images from 105 patients, which originated from four different hospitals, indicate more than 84% classification performance for individual cells and 89% for case-based classification for the five-class problem.
Oncel Tuzel
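The texton-histogram representation used in entry 1000 can be sketched compactly: each local feature vector is assigned to its nearest learned texton, and a cell is summarized by the normalized histogram of those assignments, which then feeds a classifier such as an SVM. This is a minimal illustration under stated assumptions, not the paper's implementation; the texton centers and feature vectors below are made up, and the distance metric (squared Euclidean) is an assumption.

```python
import numpy as np

# Hypothetical sketch: build a texton histogram for one cell, given a set
# of learned texton centers. In the paper the features would be filter
# responses over nucleus/cytoplasm pixels; here they are toy 2-D vectors.

def texton_histogram(features, textons):
    """features: (n, d) local feature vectors; textons: (k, d) centers.
    Returns the normalized histogram of nearest-texton assignments."""
    # Squared Euclidean distance from every feature to every texton.
    d2 = ((features[:, None, :] - textons[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)                     # nearest texton index
    hist = np.bincount(labels, minlength=len(textons)).astype(float)
    return hist / hist.sum()                       # normalize to sum to 1

textons = np.array([[0.0, 0.0], [1.0, 1.0]])       # two learned textons
feats = np.array([[0.1, 0.0], [0.9, 1.1],
                  [1.0, 0.9], [0.0, 0.2]])         # four local features
print(texton_histogram(feats, textons))            # → [0.5 0.5]
```

Because the histogram is a fixed-length vector regardless of how many pixels a cell covers, cells of different sizes become directly comparable inputs for the downstream support vector machine.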