61.
62.
Edge matching puzzles have been with us for a long time and have traditionally been regarded both as a children's game and as an interesting mathematical divertimento. Their main characteristics have already been studied, and their worst-case complexity has been properly classified as NP-complete. Only recently, especially after one was used as the basis of a money-prized contest offering US$2 million to the first solver, have edge matching puzzles attracted mainstream attention from wider audiences, including, of course, computer scientists working on solving hard problems. Apart from the intrinsic, i.e. monetary, interest of such a contest, we consider these competitions an interesting opportunity to showcase SAT/CSP solving techniques on a real-world problem to a broad audience. This article studies the NP-complete problem known as the edge matching puzzle using SAT and CSP approaches. We focus, first and foremost, on providing a theoretical framework, including a generalized definition of the problem. We design and present algorithms for easy and fast generation of problem instances, with generators whose hardness is easily tunable. We then provide SAT and CSP models for the problem and study its complexity, both typical-case and worst-case. We also provide some specially crafted heuristics that substantially speed up solving, and study their effect.
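The abstract mentions fast instance generators with tunable hardness. A common construction, shown here as a minimal Python sketch under our own assumptions (it is not necessarily the generator described in the paper), is to color the edges of a solved n×n board at random, cut it into tiles, and scramble them; the number of colors relative to the board size is the natural hardness knob.

```python
import random

def generate_instance(n, colors, seed=None):
    """Generate an n x n edge matching instance with at least one solution.

    Each tile is a tuple (top, right, bottom, left) of color ids; 0 marks the
    frame color used on the outer border. This follows the usual
    'build a solved board, then cut and scramble' construction; it is a
    generic sketch, not the authors' generator.
    """
    rng = random.Random(seed)
    # Colors of the horizontal edges between rows and vertical edges between columns.
    horiz = [[rng.randint(1, colors) for _ in range(n)] for _ in range(n + 1)]
    vert = [[rng.randint(1, colors) for _ in range(n + 1)] for _ in range(n)]
    # The outer border gets the frame color 0.
    for c in range(n):
        horiz[0][c] = horiz[n][c] = 0
    for r in range(n):
        vert[r][0] = vert[r][n] = 0
    # Cut the solved board into tiles, then rotate and shuffle them.
    tiles = []
    for r in range(n):
        for c in range(n):
            tile = (horiz[r][c], vert[r][c + 1], horiz[r + 1][c], vert[r][c])
            k = rng.randrange(4)               # random rotation
            tiles.append(tile[k:] + tile[:k])
    rng.shuffle(tiles)
    return tiles

if __name__ == "__main__":
    for t in generate_instance(4, colors=5, seed=42):
        print(t)
```

Because the board is built in solved form before being cut up, every instance produced this way is guaranteed to have at least one solution.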
63.
A model for the Bunsen section of the Sulfur–Iodine thermo-chemical cycle is proposed, in which sulfur dioxide reacts with excess water and iodine to produce two demixing liquid aqueous phases (H2SO4-rich and HI-rich) in equilibrium. Given the mild temperature and pressure conditions, the UNIQUAC activity coefficient model combined with Engels' solvation model is used. The complete model is discussed, with HI solvation by water and by iodine as well as H2SO4 solvation by water, leading to very high complexity with almost a hundred parameters to be estimated from experimental data. Taking the water excess into account, a successful reduced model with only 15 parameters is proposed after defining new apparent species. Total dissociation of the acids and total solvation of H+ by water are the main assumptions. Results show good agreement with published experimental data between 25 °C and 120 °C.
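For reference, the overall Bunsen reaction modeled in this section is the well-known first step of the Sulfur–Iodine cycle (a textbook statement added here for context, not taken from the abstract itself):

\[
\mathrm{SO_2 + I_2 + 2\,H_2O \;\longrightarrow\; H_2SO_4 + 2\,HI}
\]

Run with excess water and iodine, the two acid products demix into the H2SO4-rich and HI-rich aqueous phases described above.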
64.
A conceptual design is presented for the I/S process for the production of hydrogen by water splitting using a high-temperature nuclear heat source. The process includes a countercurrent reactor being developed by CEA within the framework of an international collaboration (I-NERI project) with DOE at General Atomics (San Diego, CA). A ProsimPlus model of the flowsheet indicates that 600 kJ of high-temperature heat and 69 kJ of electric power are consumed per mole of H2 product (at an assumed pressure of 120 bar). The net thermal efficiency would be 38% (HHV basis) if electric power is available at a conversion efficiency of 45%.
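The quoted 38% figure can be checked with a one-line energy balance, taking the higher heating value of hydrogen as roughly 286 kJ/mol (our assumption; the abstract does not state the value used):

\[
\eta_{\mathrm{th}} \;=\; \frac{\mathrm{HHV}_{\mathrm{H_2}}}{Q_{\mathrm{heat}} + W_{\mathrm{elec}}/\eta_{\mathrm{elec}}}
\;\approx\; \frac{286}{600 + 69/0.45}
\;\approx\; \frac{286}{753}
\;\approx\; 0.38
\]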
65.
The HIx ternary system (H2O–HI–I2) is the latent source of hydrogen in the Sulfur–Iodine thermo-chemical cycle. After an analysis of the literature data and models, a homogeneous approach using the Peng–Robinson equation of state for both the vapor- and liquid-phase fugacity calculations is proposed for the first time to describe the phase equilibria of this system. The MHV2 mixing rule is used, with the UNIQUAC activity coefficient model combined with a description of hydrogen iodide solvation by water. This approach is theoretically consistent for HIx separation processes operating above the HI critical temperature. Model estimation is performed on selected literature vapor–liquid, liquid–liquid, vapor–liquid–liquid and solid–liquid equilibrium data for the ternary system and the three binary subsystems. Validation is done on the remaining literature data. Results agree well with the published data, but more experimental effort is needed to improve the modeling of the HIx system.
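For reference, the standard pure-component form of the Peng–Robinson equation of state referred to above is shown below; the mixture parameters a and b would come from the MHV2 mixing rule, which is not reproduced here.

\[
P \;=\; \frac{RT}{v - b} \;-\; \frac{a\,\alpha(T)}{v(v+b) + b(v-b)},
\qquad
a = 0.45724\,\frac{R^2 T_c^2}{P_c},
\quad
b = 0.07780\,\frac{R T_c}{P_c},
\]
\[
\alpha(T) = \bigl[1 + \kappa\,(1 - \sqrt{T/T_c})\bigr]^2,
\qquad
\kappa = 0.37464 + 1.54226\,\omega - 0.26992\,\omega^2 .
\]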
66.
Wear and corrosion are the most important forms of degradation that the surfaces of engineering parts must withstand. The need to protect and improve the mechanical characteristics of these surfaces can to some extent be met by coatings. Coatings are considered an excellent solution when resistance to corrosion or oxidation, or low friction, is demanded, but due to the complexity of selecting the appropriate one, engineers often avoid them. The need to consider qualitative and quantitative properties simultaneously renders classic material selection theories inadequate. An expert system for coating selection that can handle both qualitative and quantitative variables is presented in this paper. The mathematical model combines multi-criteria decision making (MCDM) theories with fuzzy set theory. The "Max-Min set" method is applied to calculate the ordering value of the alternatives, while the TOPSIS method is used to rank them. A numerical example is provided to illustrate the method. Finally, the process presented can easily be computerized to create the corresponding software.
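To make the ranking step concrete, here is a minimal sketch of the crisp TOPSIS core in Python. It is our own illustration under stated assumptions: the paper additionally uses fuzzy sets and the Max-Min set method, which are not reproduced here, and the criteria and numbers in the example are invented.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with the crisp TOPSIS core.

    matrix:  m alternatives x n criteria performance values
    weights: n criterion weights summing to 1
    benefit: n booleans, True if larger is better for that criterion
    """
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    v = w * m / np.linalg.norm(m, axis=0)
    # Ideal and anti-ideal points depend on whether a criterion is a benefit or a cost.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness), closeness

# Invented example: three candidate coatings rated on hardness (benefit),
# corrosion resistance (benefit) and cost (cost criterion).
if __name__ == "__main__":
    ranking, scores = topsis(
        [[1800, 8, 30], [2200, 6, 45], [1500, 9, 20]],
        weights=[0.4, 0.4, 0.2],
        benefit=[True, True, False],
    )
    print(ranking, scores.round(3))
```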
67.
Software Quality Journal - Quality requirements (QRs) are a key artifact needed to ensure the quality and success of a software system. Despite their importance, QRs rarely get the same degree of...
68.
A rewrite closure is an extension of a term rewrite system with new rules, usually deduced by transitivity. Rewrite closures have the nice property that all rewrite derivations can be transformed into derivations of a simple form. This property has been useful for proving decidability results in term rewriting. Unfortunately, when the term rewrite system is not linear, the construction of a rewrite closure is quite challenging. In this paper, we construct a rewrite closure for term rewrite systems that satisfy two properties: the right-hand side term of each rewrite rule contains no repeated variable (right-linear) and no variable occurring at depth greater than one (right-shallow). The left-hand side term is unrestricted; in particular, it may be non-linear. As a consequence of the rewrite closure construction, we are able to prove decidability of the weak normalization problem for right-linear right-shallow term rewrite systems. Proving this result also requires tree automata theory. We use the fact that right-shallow right-linear term rewrite systems are regularity preserving. Moreover, their set of normal forms can be represented by a tree automaton with disequality constraints, and emptiness for this class of automata, as well as for its generalization to reduction automata, is decidable. A preliminary version of this work was presented at LICS 2009 (Creus 2009).
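As an illustration of the class (our own example, not one taken from the paper), the rule below is right-linear and right-shallow: on the right-hand side, x occurs only once and only at depth one, while the left-hand side may be non-linear, as here, where x is repeated.

\[
f(x,\, g(x)) \;\rightarrow\; h(x,\, a)
\]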
69.
Degraders have demonstrated that compound-induced proximity to E3 ubiquitin ligases can prompt the ubiquitination and degradation of disease-relevant proteins. Hence, this pharmacology is becoming a promising alternative and complement to available therapeutic interventions (e.g., inhibitors). Degraders rely on protein binding instead of inhibition and therefore hold the promise of broadening the druggable proteome. Biophysical and structural biology approaches have been the cornerstone of understanding and rationalizing degrader-induced ternary complex formation. Computational models have now started to harness the experimental data from these approaches with the aim of identifying, and rationally helping to design, new degraders. This review outlines the current experimental and computational strategies used to study ternary complex formation and degradation and highlights the importance of effective crosstalk between these approaches in advancing the targeted protein degradation (TPD) field. As our understanding of the molecular features that govern drug-induced interactions grows, faster optimizations and superior therapeutic innovations for TPD and other proximity-inducing modalities are sure to follow.
70.