991.
Poly(lactic acid) (PLA) is an important polymer because of its significant biocompatibility and biodegradability. Supported H3PW12O40 (H3PW) on activated carbon was utilized for the catalytic polymerization of D,L-lactic acid, resulting in blends of PLA. The stability of the polymer was monitored by thermogravimetry (TGA), and the decomposition temperature (Td) was used to determine the optimal production conditions (i.e., temperature of 180 °C for 15 h; 0.1 wt.% catalyst; 20 wt.% H3PW/carbon calcined at 400 °C). The best catalyst was reused three times with good activity and recovery (95 %) and was analyzed to confirm the consistency of its Keggin structure, dispersion, and acidity, which are important parameters that affect the catalyst's activity. The obtained polymer was characterized by gel permeation chromatography (GPC), Fourier-transform infrared spectroscopy (FT-IR), 1H/13C nuclear magnetic resonance (NMR) spectroscopy, specific optical rotation ([α]D25), powder X-ray diffraction (XRD), and differential scanning calorimetry (DSC). The average molar mass of the polymer was 17,400 g mol−1. Blends of poly(lactic acid) with 85 % poly(L-lactic acid) stereospecific isomer were obtained.
Graphical Abstract: Stereoselective synthesis of 85 % PLLA from the polymerization of D,L-lactic acid using 12-tungstophosphoric acid supported on carbon as a catalyst.
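As a rough consistency check on the reported molar mass (this calculation is not from the abstract, and it assumes the reported value is a number-average molar mass), the implied average degree of polymerization follows from the PLA repeat-unit mass of about 72 g mol−1:

```latex
% Rough estimate of the average degree of polymerization (DP),
% assuming M_n = 17,400 g/mol and a PLA repeat unit (C3H4O2) of ~72.06 g/mol.
\[
  \mathrm{DP} \approx \frac{M_n}{M_0}
             = \frac{17{,}400\ \mathrm{g\,mol^{-1}}}{72.06\ \mathrm{g\,mol^{-1}}}
             \approx 241
\]
```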
992.
Two ductile iron grades, EN-GJS-600-3, a ferritic–pearlitic grade, and EN-GJS-600-10, a silicon-strengthened ferritic nodular iron grade, are studied in the very high cycle fatigue range using 20 kHz ultrasonic test equipment. Fatigue strengths and S-N curves are obtained, and fracture surfaces and microstructures are investigated. The ferritic grade with higher ductility displays a lower fatigue strength at 10⁸ load cycles than the ferritic–pearlitic grade, 142 and 167 MPa, respectively. Examination of fracture surfaces shows that fatigue failures are controlled by micropores in both ductile iron grades, while the graphite nodule distributions do not seem to explain the difference in fatigue strengths. Prediction of the fatigue strengths, using a model for ductile iron proposed by Endo and Yanase, indicates a large potential for improvement, in particular for the ferritic grade.
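The abstract does not reproduce the Endo–Yanase model itself; purely as background, defect-based fatigue-strength estimates for cast materials are often built on Murakami-type √area expressions of the following form (the surface-defect coefficients shown here are the classic Murakami values and are illustrative only, not taken from the paper):

```latex
% Murakami-type estimate of the fatigue limit from a small defect
% (surface-defect form; coefficients are illustrative, not the paper's model).
\[
  \sigma_w \approx \frac{1.43\,(\mathrm{HV} + 120)}{\left(\sqrt{\mathrm{area}}\right)^{1/6}}
\]
% sigma_w    : fatigue limit in MPa
% HV         : Vickers hardness of the matrix
% sqrt(area) : square root of the defect's projected area, in micrometres
```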
993.
Illumination is one of the key components in the creation of realistic renderings of scenes containing virtual objects. In this paper, we present a set of novel algorithms and data structures for visualization, processing, and rendering with real-world lighting conditions captured using High Dynamic Range (HDR) video. The presented algorithms enable rapid construction of general and editable representations of the lighting environment, as well as extraction and fitting of sampled reflectance to parametric BRDF models. For efficient representation and rendering of the sampled lighting environment function, we consider an adaptive (2D/4D) data structure for storing light field data on proxy geometry describing the scene. To demonstrate the usefulness of the algorithms, they are presented in the context of a fully integrated framework for spatially varying image-based lighting. We show reconstructions of example scenes and the resulting production-quality renderings of virtual furniture with spatially varying real-world illumination, including occlusions.
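For readers unfamiliar with image-based lighting, a minimal sketch of the basic integral such systems build on is shown below. This is not the paper's spatially varying HDR-video method or its 2D/4D data structure; it only estimates the diffuse irradiance at a point from a single equirectangular HDR environment map by Monte Carlo sampling.

```python
# Minimal sketch (not the paper's method): estimating diffuse irradiance
# at a surface point from one equirectangular HDR environment map.
import numpy as np

def diffuse_irradiance(env_map, normal, n_samples=4096, rng=None):
    """Monte Carlo estimate of E(n) = integral of L(w) * max(0, n.w) over directions.

    env_map : (H, W, 3) float array of linear HDR radiance values
              (equirectangular: rows = latitude, columns = longitude).
    normal  : (3,) unit surface normal.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = env_map.shape

    # Uniformly sample directions on the unit sphere.
    z = rng.uniform(-1.0, 1.0, n_samples)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    r = np.sqrt(1.0 - z * z)
    dirs = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

    # Look up radiance for each direction in the equirectangular map.
    theta = np.arccos(np.clip(dirs[:, 2], -1.0, 1.0))   # polar angle
    lon = np.arctan2(dirs[:, 1], dirs[:, 0])            # azimuth
    u = ((lon / (2.0 * np.pi)) % 1.0 * (w - 1)).astype(int)
    v = (theta / np.pi * (h - 1)).astype(int)
    radiance = env_map[v, u]                            # (N, 3)

    # Cosine-weighted contribution; pdf of uniform sphere sampling is 1/(4*pi).
    cos_term = np.clip(dirs @ np.asarray(normal), 0.0, None)
    return 4.0 * np.pi * np.mean(radiance * cos_term[:, None], axis=0)

# Sanity check: a constant white environment gives irradiance close to pi.
env = np.ones((64, 128, 3))
print(diffuse_irradiance(env, np.array([0.0, 0.0, 1.0])))
```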
994.
Over the last years, comparative genomics analyses have become more compute-intensive due to the explosive number of available genome sequences. Comparative genomics analysis is an important a priori step for experiments in various bioinformatics domains. This analysis can be used to enhance the performance and quality of experiments in areas such as evolution and phylogeny. A common phylogenetic analysis makes extensive use of Multiple Sequence Alignment (MSA) in the construction of phylogenetic trees, which are used to infer evolutionary relationships between homologous genes. Each phylogenetic analysis aims at exploring several different MSA methods to verify which execution produces trees with the best quality. This phylogenetic exploration may run for weeks, even when executed in High Performance Computing (HPC) environments. Although there are many approaches that model and parallelize phylogenetic analysis as scientific workflows, exploring all MSA methods becomes a complex and expensive task to perform. If scientists determine a priori the most adequate MSA method to use in the phylogenetic analysis, it would save time and, in some cases, financial resources. Comparative genomics analyses play an important role in optimizing phylogenetic analysis workflows. In this paper, we extend the SciHmm scientific workflow, aimed at determining the most suitable MSA method, to use it in a phylogenetic analysis. SciHmm uses SciCumulus, a cloud workflow execution engine, for parallel execution. Experimental results show that using SciHmm considerably reduces the total execution time of the phylogenetic analysis (up to 80%). Experiments also show that trees built with the MSA program selected by using SciHmm had higher quality than the remaining ones, as expected. In addition, the parallel execution of SciHmm shows that this kind of bioinformatics workflow has an excellent cost/benefit ratio when executed in cloud environments.
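A minimal sketch of the exploration step such a workflow automates is shown below. It is not SciHmm or SciCumulus; all helper functions are placeholders standing in for real tools (aligners invoked via subprocess, a tree builder such as RAxML, and a quality metric such as log-likelihood), and the method names are illustrative.

```python
# Minimal sketch (not SciHmm): try several MSA programs, build a tree from
# each alignment, score the trees, and keep the best method.

MSA_METHODS = ["mafft", "muscle", "clustalo", "probcons"]  # illustrative names

def run_msa(method: str, sequences: list[str]) -> list[str]:
    # Placeholder: a real workflow would invoke the aligner here.
    return sequences

def build_tree(alignment: list[str]) -> object:
    # Placeholder: a real workflow would build a phylogenetic tree here.
    return alignment

def tree_quality(tree: object) -> float:
    # Placeholder score: a real workflow might use the tree's log-likelihood.
    return 0.0

def select_best_msa(sequences: list[str], methods=MSA_METHODS):
    """Return the MSA method whose resulting tree scores highest."""
    scored = []
    for method in methods:
        alignment = run_msa(method, sequences)
        tree = build_tree(alignment)
        scored.append((tree_quality(tree), method))
    best_score, best_method = max(scored)
    return best_method, best_score
```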
995.
In recent years, research on movement primitives has gained increasing popularity. The original goals of movement primitives are based on the desire to have a sufficiently rich and abstract representation for movement generation, one which allows for efficient teaching, trial-and-error learning, and generalization of motor skills (Schaal 1999). Thus, motor skills in robots should be acquired in a natural dialog with humans, e.g., by imitation learning and shaping, while skill refinement and generalization should be accomplished autonomously by the robot. Such a scenario resembles the way we teach children and connects to the bigger question of how the human brain accomplishes skill learning. In this paper, we review how a particular computational approach to movement primitives, called dynamic movement primitives, can contribute to learning motor skills. We will address imitation learning, generalization, trial-and-error learning by reinforcement learning, movement recognition, and control based on movement primitives. But we also want to go beyond the standard goals of movement primitives. Stereotypical movement generation with movement primitives entails the prediction of sensory events in the environment. Indeed, all the sensory events associated with a movement primitive form an associative skill memory that has the potential to form a most powerful representation of a complete motor skill.
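For context, one widely used formulation of a discrete dynamic movement primitive for a single degree of freedom (following Ijspeert and Schaal's line of work; the paper's exact notation may differ) is a damped spring system driven by a learned forcing term:

```latex
% One common form of a discrete dynamic movement primitive (DMP);
% notation may differ from the paper's.
\begin{align*}
  \tau \dot{z} &= \alpha_z\bigl(\beta_z\,(g - y) - z\bigr) + f(x)
      && \text{(transformation system: } y \text{ position, } g \text{ goal)}\\
  \tau \dot{y} &= z \\
  \tau \dot{x} &= -\alpha_x\, x
      && \text{(canonical system: phase } x \text{ replaces explicit time)}\\
  f(x) &= \frac{\sum_i \psi_i(x)\, w_i}{\sum_i \psi_i(x)}\; x\,(g - y_0)
      && \text{(forcing term: learned weights } w_i\text{, basis functions } \psi_i)
\end{align*}
```

Imitation learning fits the weights \(w_i\) to a demonstrated trajectory, while reinforcement learning and generalization operate on the same parameters.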
996.
This is the second part of a large survey paper in which we analyze recent literature on Formal Concept Analysis (FCA) and some closely related disciplines using FCA. We collected 1072 papers published between 2003 and 2011 mentioning terms related to Formal Concept Analysis in the title, abstract, and keywords. We developed a knowledge browsing environment to support our literature analysis process. We use the visualization capabilities of FCA to explore the literature, to discover and conceptually represent the main research topics in the FCA community. In this second part, we zoom in on and give an extensive overview of the papers published between 2003 and 2011 which applied FCA-based methods for knowledge discovery and ontology engineering in various application domains. These domains include software mining, web analytics, medicine, biology, and chemistry data.
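As background for readers unfamiliar with FCA (this example is not from the survey): a formal context is a binary relation between objects and attributes, and a formal concept is an (extent, intent) pair that is closed under the two derivation operators. A small, self-contained brute-force sketch:

```python
# Minimal sketch (not from the survey): enumerate all formal concepts of a
# small formal context by closing every attribute subset.
from itertools import combinations

# Formal context: objects mapped to the attributes they have.
CONTEXT = {
    "frog":  {"needs_water", "lives_in_water", "can_move"},
    "fish":  {"needs_water", "lives_in_water", "can_move"},
    "reed":  {"needs_water", "lives_in_water", "has_chlorophyll"},
    "maize": {"needs_water", "has_chlorophyll"},
}
ATTRIBUTES = sorted(set().union(*CONTEXT.values()))

def extent(attrs):
    """Objects that have every attribute in `attrs` (derivation operator ')."""
    return {obj for obj, has in CONTEXT.items() if attrs <= has}

def intent(objs):
    """Attributes shared by all objects in `objs` (derivation operator ')."""
    shared = set(ATTRIBUTES)
    for obj in objs:
        shared &= CONTEXT[obj]
    return shared

def formal_concepts():
    """All (extent, intent) pairs with extent' = intent and intent' = extent."""
    concepts = set()
    for r in range(len(ATTRIBUTES) + 1):
        for attrs in combinations(ATTRIBUTES, r):
            b = intent(extent(set(attrs)))   # closure of the attribute set
            a = extent(b)
            concepts.add((frozenset(a), frozenset(b)))
    return concepts

for ext, inte in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(sorted(ext), "|", sorted(inte))
```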
997.
998.
With the introduction of automatic vehicle guidance (AV), mixed traffic scenarios involving both automatically and manually guided vehicles are to be expected, at least during a transition phase. To ensure the safety of motor vehicle transportation, it will be essential to develop a cooperative relationship between human drivers and AV. Research in this area is currently being done to gain insight into how human drivers make decisions, so that the same behaviours can be transferred to automatic vehicle guidance. A lot of research is being done to prepare for the introduction of AV, but there is still a lack of information on how individual road users make decisions in cooperative situations. Currently, there is no study that has tried to understand this decision-making process with the help of an online survey. For that reason, a questionnaire study on cooperative traffic situations (N = 281) was carried out and analysed with the Natural Decision Making (NDM) approach. By means of the NDM approach and the use of the recognition module, links between the planned action and the action expected of other road users were identified. Furthermore, it was possible to categorize individual communication signals as offensive or defensive and thus make predictions about the driver's intention. These findings can be used to derive design recommendations for automatic vehicle guidance in cooperative situations.
999.
Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques, including methods for advanced image-based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.
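One operation that recurs throughout this literature (stated here as general background, not as the report's own contribution) is the differential-rendering composite, which adds only the change the virtual object causes in a rendered local scene to the original photograph:

```latex
% Differential-rendering composite (Debevec-style image-based lighting);
% generic notation, not taken from the report.
\[
  L_{\text{final}} \;=\; L_{\text{photo}} \;+\; \bigl(L_{\text{obj}} - L_{\text{noobj}}\bigr)
\]
% L_photo  : the background photograph of the real scene
% L_obj    : rendering of the local scene with the virtual object
% L_noobj  : rendering of the same local scene without the virtual object
```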
1000.
Processor evolution has reached a critical point in time where it will soon be impossible to increase the clock frequency much further. Processor designers such as Motorola, Intel, and IBM have all realised that the only way to improve the FLOP/Watt ratio is to develop multi-core devices. One of the most recent examples of multi-core processors is the Sony/Toshiba/IBM Cell/B.E. multi-core processor. In terms of their suitability for parallel execution, Monte Carlo methods are often considered embarrassingly parallel. This paper describes how a common Monte Carlo based financial simulation can be calculated in parallel using the Cell/B.E. multi-core processor. The measured performance, with the achieved multi-core speed-up, is also presented. With this technology now readily available, financial simulations can be performed in a fraction of the time they used to take, within a limited power and volume budget and using commercially available hardware. The main challenge with multi-core devices is clearly programmability, and the work presented here describes how this challenge can be dealt with. A basic MPI library has been developed to handle the partitioning and communication of data. Thread creation follows a POSIX thread creation model. MPI together with POSIX makes the application portable between various multi-processor systems and multi-core devices. The conclusions indicate that a function-offload MPI implementation on the Cell/B.E. multi-core processor can efficiently be used to speed up the Monte Carlo solution of financial simulations. The conclusions are also applicable to other situations where an algorithm can easily be parallelized.
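The paper targets the Cell/B.E. with a custom MPI library and POSIX threads. Purely to illustrate why such simulations are embarrassingly parallel, the sketch below splits a Monte Carlo valuation of a European call option across worker processes; the geometric-Brownian-motion model, the example parameters, and the use of Python's multiprocessing are assumptions for illustration, not the paper's setup.

```python
# Illustrative only: parallel Monte Carlo pricing of a European call option
# under geometric Brownian motion, partitioned across worker processes.
import math
import numpy as np
from multiprocessing import Pool

S0, K, R, SIGMA, T = 100.0, 105.0, 0.05, 0.2, 1.0  # assumed example parameters

def partial_payoff_sum(args):
    """Simulate `n` terminal prices and return the summed discounted payoffs."""
    n, seed = args
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    st = S0 * np.exp((R - 0.5 * SIGMA**2) * T + SIGMA * math.sqrt(T) * z)
    return float(np.exp(-R * T) * np.maximum(st - K, 0.0).sum())

def price_call(n_paths=2_000_000, workers=4):
    """Split the paths across worker processes and average the payoffs."""
    chunk = n_paths // workers
    tasks = [(chunk, seed) for seed in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_payoff_sum, tasks))
    return total / (chunk * workers)

if __name__ == "__main__":
    print(f"Estimated call price: {price_call():.4f}")
```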