421.
Classical scheduling formulations typically assume static resource requirements and focus on deciding when to start the problem activities so as to optimize some performance metric. In many practical cases, however, the decision maker can choose the resource assignment as well as the starting times: this is a far-from-trivial task, with deep implications for the quality of the final schedule. Joint resource assignment and scheduling problems are incredibly challenging from a computational perspective. They have been the subject of active research in Constraint Programming (CP) and in Operations Research (OR) for a few decades, with quite different techniques. Both approaches report individual successes, but overall they perform equally well or (from a different perspective) equally poorly. In particular, despite the well-known effectiveness of global constraints for scheduling, comparable results for joint filtering of assignment and scheduling variables have not yet been achieved. Recently, hybrid approaches have been applied to this class of problems: most of them work by splitting the overall problem into an assignment and a scheduling subproblem, which are solved in an iterative and interactive fashion with a mix of CP and OR techniques, often reporting impressive speed-ups compared with pure CP and OR methods. Motivated by the success of hybrid algorithms on resource assignment and scheduling, we provide a cross-disciplinary survey of such problems, covering CP, OR, and hybrid approaches. The main effort is to identify key modeling and solution techniques: these may then be applied in the construction of new hybrid algorithms, or they may provide ideas for novel filtering methods (possibly based on decomposition, or on alternative representations of the domain store).
In detail, we take a constraint-based perspective and, following the equation CP = model + propagation + search, we give an overview of state-of-the-art models, propagation/bounding techniques, and search strategies.
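As a toy illustration of why the assignment and scheduling decisions interact, the following sketch enumerates joint machine assignments for a few hypothetical tasks whose durations depend on the chosen machine (all names and numbers invented; exhaustive enumeration stands in for the CP model + propagation + search loop the survey discusses):

```python
from itertools import product

# Hypothetical instance: each task's duration depends on the machine chosen.
durations = {  # durations[task][machine]
    "A": [2, 3],
    "B": [2, 2],
    "C": [3, 1],
}
tasks = list(durations)
machines = [0, 1]

def makespan(assignment):
    """Machines run their assigned tasks back to back; no precedences assumed."""
    end = [0, 0]
    for t in tasks:
        m = assignment[t]
        end[m] += durations[t][m]
    return max(end)

# Explore every joint assignment and keep the best makespan.
best = min(makespan(dict(zip(tasks, a)))
           for a in product(machines, repeat=len(tasks)))
print(best)
```

Even at this scale the joint choice matters: assigning each task to its individually fastest machine can overload one resource, while the enumerated optimum balances duration against load.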
422.
Multimedia Tools and Applications - The adoption of multimedia and multimodal applications inside museums and exhibitions is becoming a common practice. These installations proved to be...
423.
Data-flow models are attracting renewed attention because they lend themselves to efficient mapping on multi-core architectures. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs (SDFGs) onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms with no guarantee of global optimality. In this paper we propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications, specified as SDFGs, onto multi-core architectures. This is, to the best of our knowledge, the first complete algorithm for generic SDF graphs, including those with loops and a finite iteration bound. Our approach is based on Constraint Programming; it guarantees optimality and can handle realistic instances in terms of size and complexity. Extensive experiments on a large number of SDFGs demonstrate that our approach is effective and robust.
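A minimal sketch of the search space such a complete algorithm explores, assuming a toy actor set with invented execution times and ignoring dependency constraints (so the busiest core's load only lower-bounds the iteration period, and throughput is its reciprocal):

```python
from itertools import product

# Toy data-flow actors with invented execution times (time units).
exec_time = {"src": 2, "fir": 4, "snk": 1}
actors = list(exec_time)
cores = range(2)

def period_lower_bound(mapping):
    # With static-order scheduling, one iteration takes at least as long
    # as the busiest core's total workload.
    load = [0, 0]
    for a, c in mapping.items():
        load[c] += exec_time[a]
    return max(load)

# Complete enumeration of allocations; a real solver would prune with
# constraint propagation instead of visiting every mapping.
best = min(period_lower_bound(dict(zip(actors, m)))
           for m in product(cores, repeat=len(actors)))
print("throughput bound:", 1 / best)
```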
424.
Gold nanoparticles have unique properties that are highly dependent on their shape and size. Synthetic methods that enable precise control over nanoparticle morphology currently require shape-directing agents such as surfactants or polymers, which force growth in a particular direction by adsorbing to specific crystal facets. These auxiliary reagents passivate the nanoparticles' surface and thus decrease their performance in applications such as catalysis and surface-enhanced Raman scattering. Here, a surfactant- and polymer-free approach to achieving high-performance gold nanoparticles is reported. A theoretical framework is developed to elucidate the growth mechanism of nanoparticles in surfactant-free media, and it is applied to identify strategies for shape-controlled syntheses. Using the results of these analyses, a simple green-chemistry synthesis is designed for the four most commonly used morphologies: nanostars, nanospheres, nanorods, and nanoplates. The nanoparticles synthesized by this method outperform analogous particles with surfactant and polymer coatings in both catalysis and surface-enhanced Raman scattering.
425.
This work investigates a model reduction method applied to coupled multi-physics systems. The case in which a system of interest interacts with an external system is considered. An approximation of the Poincaré–Steklov operator is computed by simulating, in an offline phase, the external problem when the inputs are the Laplace–Beltrami eigenfunctions defined at the interface. In the online phase, only the reduced representation of the operator is needed to account for the influence of the external problem on the main system. An online basis enrichment is proposed in order to guarantee a precise reduced-order computation. Several test cases are proposed on different fluid–structure couplings. Copyright © 2016 John Wiley & Sons, Ltd.
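The offline/online split can be sketched as follows, with a small dense matrix standing in for the discretized external operator and unit vectors standing in for the interface basis functions (all values invented; the real method uses Laplace–Beltrami eigenfunctions and full external-problem solves):

```python
# Stand-in for the full operator mapping interface traces to responses.
S = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]

def apply_S(v):
    """Simulating the external problem for one interface input."""
    return [sum(S[i][j] * v[j] for j in range(3)) for i in range(3)]

# Offline phase: record the operator's action on each interface basis vector.
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
responses = [apply_S(phi) for phi in basis]

# Online phase: any trace expressed in the basis is handled by a linear
# combination of stored responses, with no further external solves.
coeffs = [0.5, 2.0]
approx = [sum(c * r[i] for c, r in zip(coeffs, responses)) for i in range(3)]
print(approx)
```

Accuracy is limited by how well the basis spans the actual traces, which is what the proposed online basis enrichment addresses.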
426.
The results of an investigation of the accuracy of monitor unit (MU) calculation in clinical shaped beams are presented. Measured doses at the reference depth on the beam central axis (isocentre), or on a beam axis representative of the irradiated area (when the isocentre lies under a block or near the edges of the block's shadow), were compared with the doses expected when calculating MUs by different methods normally used in clinical practice. Empirical (area-weighted, Wrede) and scatter-summation (Clarkson) methods, as well as a pencil-beam based algorithm, were applied. 40 irregular fields (6 MV X-rays, Clinac, Varian 6/100), divided into six categories, were considered. Dose measurements were performed with a NE2571 ionization chamber in an acrylic 30 x 30 x 30 cm3 phantom. The depths in acrylic were converted into water-equivalent depths through a correction factor derived from TMR measurements. The method of dose measurement in acrylic was found to be sufficiently accurate for the purpose of this study by comparing expected and measured doses in open square and rectangular fields (mean deviation +0.2%, SD = 0.5%). Results show that all the considered methods are sufficiently reliable for calculating MUs in clinical situations. Mean deviations between measured and expected dose values are around 0 for all the methods; standard deviations range from 1% for the Wrede method to 0.75% for the pencil-beam method. The differences between expected and measured doses were within 1% for about 3/4 of the fields when calculating MUs with all the considered methods. Maximum deviations range from 1.6% (pencil-beam) to 3% (Wrede). Slight differences among the methods of MU calculation were revealed within the different categories of blocked fields analysed.
The surprisingly good agreement between measured and expected dose values obtained with the empirical methods (area-weighted and Wrede) is probably due to the fact that the reference points were positioned in a "central" region of the unblocked areas.
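For orientation only, a schematic MU computation in the spirit of these methods: the prescribed dose is divided by the dose per MU under reference conditions scaled by depth and field-shape factors. All numeric factors below are invented and have no clinical validity:

```python
# Illustrative-only MU calculation; every factor is an assumed placeholder.
prescribed_dose = 2.0      # Gy at the calculation point
dose_per_mu_ref = 0.01     # Gy/MU under reference conditions (assumed)
tmr = 0.85                 # tissue-maximum ratio at the water-equivalent depth (assumed)
scatter_factor = 0.97      # total scatter factor for the shaped field (assumed)

# The empirical and scatter-summation methods differ mainly in how the
# scatter factor for the irregular field is obtained.
mu = prescribed_dose / (dose_per_mu_ref * tmr * scatter_factor)
print(round(mu, 1))
```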
427.
This paper proposes the control of monomer concentration as a novel improvement to the kinetic Tile Assembly Model (kTAM) that reduces the error rate in DNA self-assembly. Tolerance to errors in this process is very important for manufacturing scaffolds for highly dense ICs; the proposed technique significantly decreases error rates (i.e. it increases error tolerance) by controlling the concentration of the monomers (tiles) for the specific pattern to be assembled. Profiling shows that this feature is applicable to different tile sets. A stochastic analysis based on a new state model is presented and extended to the cases of single, double, and triple bonding. The kinetic trap model is modified to account for the different monomer concentrations. Different scenarios (dynamic and adaptive) for monomer control are proposed: in the dynamic (adaptive) control case, the concentration of each tile is set according to the current (average) demand during growth, as found by profiling the pattern. Evaluating the proposed schemes against a scheme with constant concentrations reveals significant error rate reductions. A significant advantage of the proposed schemes is that they do not entail overheads such as an increase in size or slower growth, while still achieving a significant reduction in error rate. Simulation results are provided.
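A hedged sketch of the concentration-dependent competition the modified kinetic trap model captures, with an invented rate constant, bond free-energy parameter, and concentrations: attachment rate scales with tile concentration, detachment decays exponentially with the number of correct bonds, so lowering a competing tile's concentration shrinks its share of attachment events:

```python
import math

# Invented kTAM-style parameters (arbitrary units).
k_f = 1.0e6    # forward rate constant
G_se = 8.0     # free energy per correct bond, in kT

def on_rate(concentration):
    # Attachment is proportional to the free-tile concentration.
    return k_f * concentration

def off_rate(bonds):
    # Detachment decays exponentially with the number of correct bonds.
    return k_f * math.exp(-bonds * G_se)

# Concentration control: the erroneous competitor is supplied at 1/5 the
# concentration of the correct tile, reducing its attachment share.
correct = on_rate(1.0e-6)
erroneous = on_rate(2.0e-7)
error_fraction = erroneous / (erroneous + correct)
print(error_fraction)
```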
428.
The concurrent production of heat and electricity within residential buildings using solid-oxide fuel cell (SOFC) micro-cogeneration devices has the potential to reduce primary energy consumption, greenhouse gas emissions, and air pollutants. A realistic assessment of this emerging technology requires the accurate simulation of the thermal and electrical production of SOFC micro-cogeneration devices concurrent with the simulation of the building, its occupants, and coupled plant components. The calibration of such a model using empirical data gathered from experiments conducted with a 2.8 kWAC SOFC micro-cogeneration device is demonstrated. The experimental configuration, the types of instrumentation employed, and the operating scenarios examined are treated. The propagation of measurement uncertainty into the derived quantities necessary for model calibration is demonstrated by focusing upon the SOFC micro-cogeneration system's gas-to-water heat exchanger. The calibration coefficients necessary to accurately simulate the thermal and electrical performance of this prototype device are presented, and the types of analyses enabled to study the potential of the technology are demonstrated.
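A minimal sketch of propagating measurement uncertainty into a derived quantity, using the heat-exchanger energy balance Q = m·cp·(T_out - T_in) with invented measurement values and first-order (quadrature) combination of independent uncertainties:

```python
import math

# Assumed measurements for a gas-to-water heat exchanger (values invented).
m_dot, u_m = 0.020, 0.0004    # mass flow, kg/s, and its standard uncertainty
cp = 4180.0                   # J/(kg K), treated as exact here
t_in, u_tin = 40.0, 0.1       # inlet temperature, degC
t_out, u_tout = 55.0, 0.1     # outlet temperature, degC

# Derived quantity: thermal power recovered by the water loop, W.
dT = t_out - t_in
q = m_dot * cp * dT

# First-order propagation: each term is (partial derivative) x (uncertainty),
# combined in quadrature under an independence assumption.
u_q = math.sqrt((cp * dT * u_m) ** 2
                + (m_dot * cp * u_tout) ** 2
                + (m_dot * cp * u_tin) ** 2)
print(q, u_q)
```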
429.
430.
This paper presents a novel statistical characterization for accurate timing and a new probabilistic analysis for estimating the leakage power in partially depleted silicon-on-insulator (PD-SOI) circuits in 100-nm BSIMSOI3.2 technology. This paper shows that the accuracy of modeling the leakage current in PD-SOI complementary metal-oxide-semiconductor (CMOS) circuits is improved by considering the interactions between the subthreshold leakage and the gate tunneling leakage, the stacking effect, the history effect, and the fan-out effect, along with a new input-independent method for estimating the leakage power based on a probabilistic approach. The proposed timing and leakage power estimation algorithms are implemented in MATLAB, HSPICE, and C. The proposed methodology is applied to ISCAS85 benchmarks, and the results show that the error is within 5%, compared with random simulation results.
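The input-independent, probabilistic flavor of such a leakage estimate can be sketched as follows: instead of simulating random input vectors, the expected leakage is computed as a probability-weighted sum over input states. The per-state leakage table and signal probabilities below are invented for a single 2-input gate:

```python
import math
from itertools import product

# Hypothetical per-input-state leakage of one 2-input gate (nA, invented);
# the spread reflects effects such as input-pattern-dependent stacking.
leak = {(0, 0): 5.0, (0, 1): 12.0, (1, 0): 10.0, (1, 1): 35.0}

# Signal probabilities of the two inputs, assumed independent.
p_one = (0.5, 0.5)

# Expected leakage = sum over states of P(state) * leak(state),
# with P(state) factored over the independent inputs.
expected = sum(
    leak[s] * math.prod(p if bit else 1 - p for bit, p in zip(s, p_one))
    for s in product((0, 1), repeat=2)
)
print(expected)
```

A full-circuit version would propagate signal probabilities through the netlist and sum the per-gate expectations, avoiding input-vector simulation entirely.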