991.
Orthopedists invest significant amounts of effort and time trying to understand the biomechanics of arthrodial (gliding) joints. Although new image acquisition and processing methods currently generate richer-than-ever geometry and kinematic data sets that are individual specific, the computational and visualization tools needed to enable the comparative analysis and exploration of these data sets lag behind. In this paper, we present a framework that enables the cross-data-set visual exploration and analysis of arthrodial joint biomechanics. Central to our approach is a computer-vision-inspired markerless method for establishing pairwise correspondences between individual-specific geometry. Manifold models are subsequently defined and deformed from one individual-specific geometry to another such that the markerless correspondences are preserved while minimizing model distortion. The resulting mutually consistent parameterization and visualization allow the users to explore the similarities and differences between two data sets and to define meaningful quantitative measures. We present two applications of this framework to human-wrist data: articular cartilage transfer from cadaver data to in vivo data and cross-data-set kinematics analysis. The method allows our users to combine complementary geometries acquired through different modalities and thus overcome current imaging limitations. The results demonstrate that the technique is useful in the study of normal and injured anatomy and kinematics of arthrodial joints. In principle, the pairwise cross-parameterization method applies to all spherical topology data from the same class and should be particularly beneficial in instances where identifying salient object features is a nontrivial task.
992.
We analyze generalization in XCSF and introduce three improvements. We begin by showing that the types of generalizations evolved by XCSF can be influenced by the input range. To explain these results we present a theoretical analysis of the convergence of classifier weights in XCSF which highlights a broader issue. In XCSF, because of the mathematical properties of the Widrow-Hoff update, the convergence of classifier weights in a given subspace can be slow when the spread of the eigenvalues of the autocorrelation matrix associated with each classifier is large. As a major consequence, the system's accuracy pressure may act before classifier weights are adequately updated, so that XCSF may evolve piecewise constant approximations, instead of the intended, and more efficient, piecewise linear ones. We propose three different ways to update classifier weights in XCSF so as to increase the generalization capabilities of XCSF: one based on a condition-based normalization of the inputs, one based on linear least squares, and one based on the recursive version of linear least squares. Through a series of experiments we show that while all three approaches significantly improve XCSF, least squares approaches appear to be best performing and most robust. Finally we show how XCSF can be extended to include polynomial approximations.
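The recursive least squares (RLS) weight update mentioned in this abstract can be sketched as follows. This is a minimal illustration of the standard RLS recursion for one linear classifier, not XCSF's actual implementation; all names are invented for the example.

```python
# One recursive-least-squares step for a single linear classifier with a
# 2-D augmented input x = (x0, x1) (e.g. a bias term plus one feature).
# w is the weight vector, P the running inverse-correlation matrix.
def rls_update(w, P, x, y, lam=1.0):
    """Return updated (w, P) after observing input x with target y."""
    # Px = P @ x (P stays symmetric throughout the recursion)
    Px = [P[0][0] * x[0] + P[0][1] * x[1],
          P[1][0] * x[0] + P[1][1] * x[1]]
    denom = lam + x[0] * Px[0] + x[1] * Px[1]
    k = [Px[0] / denom, Px[1] / denom]          # gain vector
    err = y - (w[0] * x[0] + w[1] * x[1])       # prediction error
    w = [w[0] + k[0] * err, w[1] + k[1] * err]  # weight correction
    # P <- (P - k (P x)^T) / lam
    P = [[(P[0][0] - k[0] * Px[0]) / lam, (P[0][1] - k[0] * Px[1]) / lam],
         [(P[1][0] - k[1] * Px[0]) / lam, (P[1][1] - k[1] * Px[1]) / lam]]
    return w, P

# Toy demonstration: fit the noiseless line y = 3x + 1.
w, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for i in range(10):
    w, P = rls_update(w, P, [1.0, float(i)], 1.0 + 3.0 * i)
print(w)  # converges toward [1.0, 3.0]
```

Unlike a fixed-rate Widrow-Hoff update, the recursion adapts its effective step per direction, which is why it is less sensitive to the eigenvalue spread discussed above.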
993.
A reaction path including transition states is generated for the Silverman mechanism [R.B. Silverman, Chemical model studies for the mechanism of Vitamin K epoxide reductase, J. Am. Chem. Soc. 103 (1981) 5939-5941] of action for Vitamin K epoxide reductase (VKOR) using quantum mechanical methods (B3LYP/6-311G**). VKOR, an essential enzyme in mammalian systems, acts to convert Vitamin K epoxide, formed by Vitamin K carboxylase, to its (initial) quinone form for cellular reuse. This study elaborates on a prior work that focused on the thermodynamics of VKOR [D.W. Deerfield II, C.H. Davis, T. Wymore, D.W. Stafford, L.G. Pedersen, Int. J. Quant. Chem. 106 (2006) 2944-2952]. The geometries of proposed model intermediates and transition states in the mechanism are energy optimized. We find that once a key disulfide bond is broken, the reaction proceeds largely downhill. An important step in the conversion of the epoxide back to the quinone form involves initial protonation of the epoxide oxygen. We find that the source of this proton is likely a free mercapto group rather than a water molecule. The results are consistent with the current view that the widely used drug Warfarin likely acts by blocking binding of Vitamin K at the VKOR active site and thereby effectively blocking the initiating step. These results will be useful for designing more complete QM/MM studies of the enzymatic pathway once three-dimensional structural data is determined and available for VKOR.
994.
995.
Pipeline architectures provide a versatile and efficient mechanism for constructing visualizations, and they have been implemented in numerous libraries and applications over the past two decades. In addition to allowing developers and users to freely combine algorithms, visualization pipelines have proven to work well when streaming data and scale well on parallel distributed-memory computers. However, current pipeline visualization frameworks have a critical flaw: they are unable to manage time varying data. As data flows through the pipeline, each algorithm has access to only a single snapshot in time of the data. This prevents the implementation of algorithms that do any temporal processing such as particle tracing; plotting over time; or interpolation, fitting, or smoothing of time series data. As data acquisition technology improves, as simulation time-integration techniques become more complex, and as simulations save less frequently and regularly, the ability to analyze the time-behavior of data becomes more important. This paper describes a modification to the traditional pipeline architecture that allows it to accommodate temporal algorithms. Furthermore, the architecture allows temporal algorithms to be used in conjunction with algorithms expecting a single time snapshot, thus simplifying software design and allowing adoption into existing pipeline frameworks. Our architecture also continues to work well in parallel distributed-memory environments. We demonstrate our architecture by modifying the popular VTK framework and exposing the functionality to the ParaView application. We use this framework to apply time-dependent algorithms on large data with a parallel cluster computer and thereby exercise a functionality that previously did not exist.
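The architectural change this abstract describes can be illustrated with a toy pipeline: a temporal filter declares that it needs a window of time steps, and the upstream request is satisfied by looping over time before the filter executes. This is a hedged sketch of the idea only; the class and method names below are invented and are not VTK's actual API.

```python
# Toy demand-driven pipeline with temporal requests (illustrative names).
class TimeSource:
    """Upstream source able to produce data for any requested time step."""
    def request_data(self, t):
        return {"time": t, "value": float(t * t)}  # toy time-varying field

class MovingAverageFilter:
    """Temporal filter: consumes a window of snapshots, not a single one."""
    def __init__(self, source, window=3):
        self.source, self.window = source, window
    def request_data(self, t):
        # Satisfy the temporal request by asking upstream for each step
        # in the window -- the key departure from single-snapshot pipelines.
        snaps = [self.source.request_data(t - i) for i in range(self.window)]
        avg = sum(s["value"] for s in snaps) / len(snaps)
        return {"time": t, "value": avg}

pipeline = MovingAverageFilter(TimeSource(), window=3)
print(pipeline.request_data(5)["value"])  # averages values at t = 5, 4, 3
```

A snapshot-only filter can sit in the same pipeline unchanged, since it simply issues a window of size one.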
996.
While there have been advances in visualization systems, particularly in multi-view visualizations and visual exploration, the process of building visualizations remains a major bottleneck in data exploration. We show that provenance metadata collected during the creation of pipelines can be reused to suggest similar content in related visualizations and guide semi-automated changes. We introduce the idea of query-by-example in the context of an ensemble of visualizations, and the use of analogies as first-class operations in a system to guide scalable interactions. We describe an implementation of these techniques in VisTrails, a publicly-available, open-source system.
997.
Heart fatty acid binding protein (Fabp3) is a cytosolic protein expressed primarily in heart, and to a lesser extent in skeletal muscle, brain, and kidney. During myocardial injury, the Fabp3 level in serum is elevated rapidly, making it an ideal early marker for myocardial infarction. In this study, an MS‐based selected reaction monitoring method (LC‐SRM) was developed for quantifying Fabp3 in rat serum. Fabp3 was enriched first through an immobilized antibody, and the protein was digested on beads directly. A marker peptide of Fabp3 was quantified using LC‐SRM with a stable isotope‐labeled peptide standard. For six quality control samples with Fabp3 ranging from 0.256 to 25 ng, the average recovery following the procedure was about 73%, and the precision (%CV) between replicates was less than 7%. The Fabp3 concentrations in rat serum peaked 1 h after isoproterenol treatment, and returned to baseline levels 24 h after the dose. Elevated Fabp3 levels were also detected in rats administered a PPAR α/δ agonist, which has been shown to cause skeletal muscle necrosis. Fabp3 can be used as a biomarker for both cardiac and skeletal muscle necrosis. The cross‐validation of the LC‐SRM method with an existing ELISA method is described.
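The stable-isotope-dilution quantification behind an SRM assay like this one reduces to a peak-area ratio against the spiked heavy-labeled standard. The sketch below shows that arithmetic only; the function name and all numbers are made up for illustration and are not taken from the study.

```python
# Hypothetical single-point stable-isotope-dilution calculation:
# analyte amount = (light/heavy peak-area ratio) x spiked standard amount.
def quantify(area_light, area_heavy, heavy_amount_ng, serum_volume_ml):
    """Return the analyte concentration in ng per mL of serum."""
    ratio = area_light / area_heavy           # response ratio vs. standard
    return ratio * heavy_amount_ng / serum_volume_ml

conc = quantify(area_light=42000, area_heavy=84000,
                heavy_amount_ng=5.0, serum_volume_ml=0.05)
print(conc)  # 0.5 * 5.0 / 0.05 = 50.0 ng/mL
```

In practice a calibration curve built from quality-control samples (such as the 0.256-25 ng range quoted above) would replace this single-point ratio.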
998.
This paper describes a segmentation method combining a texture based technique with a contour based method. The technique is designed to enable the study of cell behaviour over time by segmenting brightfield microscope image sequences. The technique was tested on artificial images, based on images of living cells and on real sequences acquired from microscope observations of neutrophils and lymphocytes as well as on a sequence of MRI images. The results of the segmentation are compared with the results of the watershed and snake segmentation methods. The results show that the method is both effective and practical.
Anna Korzynska
999.
One of the cornerstones of expert performance in complex domains is the ability to perceive problem situations in terms of their task-relevant semantic properties. One such class of properties consists of phenomena that are defined in terms of patterns of change over time, i.e., events. A basic prerequisite for working towards tools to support event recognition is a method for understanding the events that expert practitioners find meaningful in a given field of practice. In this article we present the modified unit marking procedure (mUMP), a technique adapted from work on social perception to facilitate identification of the meaningful phenomena which observers attend to in a dynamic data array. The mUMP and associated data analysis techniques are presented with examples from a first-of-a-kind study where they were used to elicit and understand the events practitioners found meaningful in a scenario from an actual complex work domain.
David D. Woods
1000.
We describe a decision support system to distinguish among hematology cases directly from microscopic specimens. The system uses an image database containing digitized specimens from normal and four different hematologic malignancies. Initially, the nuclei and cytoplasmic components of the specimens are segmented using a robust color gradient vector flow active contour model. Using a few cell images from each class, the basic texture elements (textons) for the nuclei and cytoplasm are learned, and the cells are represented through texton histograms. We propose to use support vector machines on the texton-histogram-based cell representation and achieve major improvement over the commonly used classification methods in texture research. Experiments with 3,691 cell images from 105 patients, originating from four different hospitals, indicate more than 84% classification performance for individual cells and 89% for case-based classification for the five-class problem.
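The texton-histogram representation described above can be sketched in a few lines: each pixel's filter response is assigned to its nearest learned texton, and the cell is summarized by the normalized histogram of those assignments. This is a minimal toy with scalar responses and hand-picked textons, not the paper's learned filter bank.

```python
# Toy texton-histogram representation: assign each response to its nearest
# texton center and return the normalized count per texton.
def texton_histogram(responses, textons):
    """Map filter responses to a normalized histogram over texton centers."""
    counts = [0] * len(textons)
    for r in responses:
        nearest = min(range(len(textons)), key=lambda i: abs(r - textons[i]))
        counts[nearest] += 1
    total = float(len(responses))
    return [c / total for c in counts]

hist = texton_histogram([0.1, 0.2, 0.9, 1.1], textons=[0.0, 1.0])
print(hist)  # two responses land nearest each texton -> [0.5, 0.5]
```

These histograms then become fixed-length feature vectors, which is what makes a standard classifier such as an SVM directly applicable.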
Oncel Tuzel