  Paid full text   4890 articles
  Free   275 articles
  Free in China   4 articles
Electrical Engineering   76 articles
General   1 article
Chemical Industry   1127 articles
Metalworking   75 articles
Machinery & Instrumentation   138 articles
Building Science   190 articles
Mining Engineering   2 articles
Energy & Power   137 articles
Light Industry   377 articles
Water Conservancy   30 articles
Petroleum & Natural Gas   7 articles
Weapons Industry   1 article
Radio & Electronics   452 articles
General Industrial Technology   830 articles
Metallurgical Industry   647 articles
Atomic Energy Technology   48 articles
Automation Technology   1031 articles
  2023   57 articles
  2022   164 articles
  2021   190 articles
  2020   93 articles
  2019   111 articles
  2018   128 articles
  2017   129 articles
  2016   188 articles
  2015   136 articles
  2014   187 articles
  2013   313 articles
  2012   269 articles
  2011   318 articles
  2010   255 articles
  2009   271 articles
  2008   245 articles
  2007   227 articles
  2006   160 articles
  2005   137 articles
  2004   150 articles
  2003   139 articles
  2002   93 articles
  2001   70 articles
  2000   66 articles
  1999   68 articles
  1998   212 articles
  1997   130 articles
  1996   122 articles
  1995   65 articles
  1994   70 articles
  1993   62 articles
  1992   34 articles
  1991   22 articles
  1990   14 articles
  1989   32 articles
  1988   13 articles
  1987   18 articles
  1986   17 articles
  1985   29 articles
  1984   21 articles
  1983   17 articles
  1982   15 articles
  1981   18 articles
  1980   7 articles
  1979   10 articles
  1978   7 articles
  1977   13 articles
  1976   16 articles
  1973   5 articles
  1967   8 articles
Sort order:   5,169 results found; search time 15 ms
51.
52.
53.
Examined the relationship between aphasia type and lesion site in 80 subcortical stroke patients. Subjects were classified as aphasic, dysarthric, or neither aphasic nor dysarthric. Lesion sites were identified by means of computerized tomography (CT) scan. No correlation between lesion site and category group was found: lesions of the same subcortical structures yielded different neurolinguistic impairments, whereas comparable linguistic patterns were observed with lesions of different deep areas. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
54.
This paper discusses several quantitative issues that arise in the analysis of health risks, beginning with principles such as de minimis and zero risk. It also provides a probabilistic definition of risk in terms of hazard, context, consequence, magnitude, and uncertainty, and uses this definition to investigate, through sensitivity analysis, the effect that uncertainty has on the results obtained. The results, from a case study of waterborne total arsenic, show that the choice of dose-response function causes more uncertainty than any other component of the risk analysis. Chemical carcinogenesis provides the basis for distinguishing the inability to know from uncertainty. The conclusion is that risk analysis keeps uncertainty and the inability to know separate; through this function, it provides a much-needed method for presenting information to decision makers and the public.
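As a rough illustration of the kind of sensitivity analysis the abstract describes, the sketch below holds the exposure fixed and varies only the dose-response model, showing how that single modelling choice can dominate the spread of the estimated risk. All numbers (dose, slope, threshold) are invented for illustration and are not taken from the arsenic case study.

    # A minimal sketch, assuming a fixed daily dose and two candidate
    # dose-response models; none of these numbers come from the paper.
    daily_dose = 0.002  # mg/kg-day of waterborne arsenic (assumed)

    def linear_no_threshold(dose, slope=1.5):
        # Risk proportional to dose all the way down to zero exposure.
        return slope * dose

    def threshold_model(dose, threshold=0.003, slope=1.5):
        # No excess risk until the dose exceeds the threshold.
        return slope * max(0.0, dose - threshold)

    for model in (linear_no_threshold, threshold_model):
        print(model.__name__, model(daily_dose))
    # Prints 0.003 for the linear model and 0.0 for the threshold model:
    # the model choice alone spans the entire range of the estimate.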
55.
The suppression of apoptosis may contribute to the carcinogenicity of the peroxisome proliferators (PPs), a class of non-genotoxic rodent hepatocarcinogens. Our previous work demonstrated that the PP nafenopin suppressed both spontaneous and transforming growth factor beta1 (TGFbeta1)-induced hepatocyte apoptosis, both in vivo and in vitro. Here, we extend these observations by demonstrating the ability of nafenopin to suppress apoptosis induced by other major candidates for the signalling of cell death in the liver. Treatment of rat or mouse hepatocyte monolayers with TGFbeta1 or the DNA-damaging drugs etoposide or hydroxyurea induced high levels of apoptosis. Western blot analysis did not support a role for either p53 or p21waf1 in etoposide-induced apoptosis in rat hepatocytes. Treatment of mouse hepatocytes with an agonistic anti-Fas antibody also induced high levels of apoptosis. Pre-addition of, and continued exposure to, nafenopin suppressed apoptosis induced by all of these stimuli. Overall, our studies demonstrate that the ability of nafenopin to protect hepatocytes from apoptosis is not restricted to a particular species or apoptotic stimulus. It is possible, therefore, that the PPs suppress apoptosis by acting on diverse signalling pathways; it seems more likely, however, that nafenopin suppresses hepatocyte apoptosis elicited by each death stimulus by impinging on a core apoptotic mechanism.
56.
Exploiting the salient features of capability-based addressing environments leads to a large number of small objects existing in memory at the same time. It is therefore necessary to make the mechanisms for object relocation efficient, and to avoid congestion of input/output devices due to swapping. In this paper, we present an approach to the management of a large virtual memory space aimed at solving these problems. We insert partial information concerning the physical allocation of each object into that object's virtual identifier. Objects are grouped into large swapping units, called pages, whose size is independent of the average object size. This enhances the efficiency of managing the relocation information, in terms of both memory requirements and access times. The allocation of objects to pages, and the movement of pages through the memory hierarchy, are controlled by user processes, so programs that know their own use of virtual memory can increase their locality of reference, reduce the number of swap operations, and reduce fragmentation.
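A minimal sketch of the central idea, embedding partial allocation information in an object's virtual identifier: here the low bits of the identifier carry a page hint that can be checked before falling back to a full relocation map. The field widths, the Python representation, and the fallback structure are all illustrative assumptions, not the paper's actual design.

    PAGE_BITS = 20  # assumed width of the embedded page-hint field

    def make_identifier(serial, page):
        # Pack a unique serial number and the allocating page into one value.
        return (serial << PAGE_BITS) | (page & ((1 << PAGE_BITS) - 1))

    def page_hint(identifier):
        # Recover the page hint without consulting any mapping table.
        return identifier & ((1 << PAGE_BITS) - 1)

    def resolve(identifier, resident, relocated):
        # resident: page -> set of identifiers currently stored in that page.
        # relocated: identifier -> new page, for objects that have moved.
        page = page_hint(identifier)
        if identifier in resident.get(page, set()):
            return page               # hint still valid: no table lookup
        return relocated[identifier]  # object was moved since creation

    oid = make_identifier(serial=42, page=7)
    assert page_hint(oid) == 7

When the hint is still valid the common-case lookup touches no shared table at all, which is the efficiency argument the abstract makes.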
57.
This paper extends the Bayesian direct deconvolution method to the case in which the convolution kernel depends on a few unknown nuisance parameters. An acceleration procedure is also proposed that drastically reduces the computational burden. Finally, the implementation of the method by means of the fast Fourier transform is fully discussed in the multidimensional case.
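To make the FFT implementation concrete, here is a minimal sketch of generic direct deconvolution in the Fourier domain with a Tikhonov-style regularizer; it illustrates the computational pattern only, not the paper's Bayesian estimator or its handling of nuisance parameters. The kernel, signal, and regularization weight are all assumed.

    import numpy as np

    def deconvolve_fft(y, h, lam=1e-3):
        # Recover x from y = h (*) x (circular convolution), working
        # entirely in the Fourier domain; lam damps ill-conditioned modes.
        H = np.fft.fft(h, n=len(y))
        X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam)
        return np.real(np.fft.ifft(X))

    # Blur a boxcar signal with a normalized Gaussian kernel, then invert.
    x = np.zeros(256)
    x[100:140] = 1.0
    h = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
    h /= h.sum()
    y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n=len(x))))
    x_hat = deconvolve_fft(y, h)
    # For multidimensional data, np.fft.fftn / ifftn replace fft / ifft.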
58.
Studying changes of shape is a common concern in many scientific fields. We address two problems here: (1) quantifying the deformation between two given shapes, and (2) transporting this deformation to morph a third shape. These operations can be performed with or without point correspondence, depending on the availability of a surface matching algorithm and on the type of mathematical procedure adopted. In computer vision, re-targeting emotions mapped on faces is a common application. We contrast four different methods for transporting a deformation toward a target once it has been estimated from the matching of two shapes. These methods come from very different fields, such as computational anatomy, computer vision and biology. We used large deformation diffeomorphic metric mapping and thin plate splines to estimate the deformations in a deformational trajectory of a human face experiencing different emotions. We then used naive transport (NT), linear shift (LS), direct transport (DT) and the fanning scheme (FS) to transport the estimated deformations toward four alien faces, each consisting of 240 homologous points forming a triangulation of 416 triangles. We used both local and global criteria for evaluating the performance of the four methods, such as how well the original deformation is maintained. We found DT, LS and FS very effective in recovering the original deformation, while NT fails in several respects to transport the shape change. As the best method may differ depending on the application, we recommend carefully testing different methods in order to choose the best one for any specific application.
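Of the four schemes, naive transport is simple enough to sketch directly: the per-landmark displacement estimated between a source shape and its deformed version is copied verbatim onto a target shape with homologous landmarks. The arrays below are random stand-ins for the faces; LS, DT and FS account for the local geometry of the target and are not reproduced here.

    import numpy as np

    def naive_transport(source, source_deformed, target):
        # Apply source -> source_deformed displacements directly to target.
        displacement = source_deformed - source   # (n, 3) landmark vectors
        return target + displacement

    # Homologous landmark sets, e.g. 240 points per face as in the paper.
    rng = np.random.default_rng(0)
    neutral = rng.normal(size=(240, 3))
    smiling = neutral + 0.05 * rng.normal(size=(240, 3))
    alien = rng.normal(size=(240, 3))
    alien_smiling = naive_transport(neutral, smiling, alien)

Because the displacement vectors ignore the target's own geometry, this is exactly where NT fails in the paper's comparison.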
59.
60.
The projection of a photographic data set onto a 3D model is a robust and widely applicable way to acquire appearance information for an object. The first step of this procedure is the alignment of the images on the 3D model. While any reconstruction pipeline aims at avoiding misregistration by improving camera calibrations and geometry, in practice a perfect alignment cannot always be reached. Depending on the way multiple camera images are fused on the object surface, the remaining misregistrations show up either as ghosting or as discontinuities at the transitions from one camera view to another. In this paper we propose a method, based on the computation of optical flow between overlapping images, to correct the local misalignment by determining the necessary displacement. The goal is to correct the symptoms of misregistration, instead of searching for a globally consistent mapping, which might not exist. The method scales well with the size of the data set (both photographic and geometric) and is largely independent of the characteristics of the 3D model (topology cleanliness, parametrization, density). It is robust and can handle real-world cases with difficult characteristics: low-level geometric detail and images that lack enough features for global optimization or manual methods. It can be applied to different mapping strategies, such as texture or per-vertex attribute encoding.
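A minimal sketch of the core step: estimate a dense flow field between two overlapping photographs and warp one toward the other so that the overlap agrees. OpenCV's Farneback flow is used here as a generic stand-in; the paper's own flow formulation and its projection onto the surface parametrization are not reproduced.

    import cv2
    import numpy as np

    def align_overlap(reference, moving):
        # Warp `moving` (grayscale, same size as `reference`) onto
        # `reference` using dense optical flow between the two images.
        flow = cv2.calcOpticalFlowFarneback(
            reference, moving, None,
            pyr_scale=0.5, levels=4, winsize=21,
            iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
        h, w = reference.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Sample `moving` at positions displaced by the flow field, so the
        # result lines up with `reference` instead of ghosting against it.
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        return cv2.remap(moving, map_x, map_y, cv2.INTER_LINEAR)

Correcting each overlap locally like this treats the symptom (visible ghosting) without requiring the globally consistent mapping that, as the abstract notes, may not exist.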