  Paid full text   130 articles
  Free   3 articles
Electrical engineering   1 article
Chemical industry   26 articles
Metalworking   3 articles
Machinery & instruments   1 article
Building science   3 articles
Energy & power   15 articles
Light industry   1 article
Petroleum & natural gas   12 articles
Radio & electronics   18 articles
General industrial technology   17 articles
Metallurgy   16 articles
Automation technology   20 articles
  2022   2 articles
  2021   6 articles
  2020   2 articles
  2019   6 articles
  2018   2 articles
  2017   4 articles
  2016   5 articles
  2015   3 articles
  2014   5 articles
  2013   12 articles
  2012   3 articles
  2011   5 articles
  2010   2 articles
  2009   5 articles
  2008   3 articles
  2007   4 articles
  2006   3 articles
  2003   2 articles
  2002   3 articles
  2001   1 article
  2000   4 articles
  1999   5 articles
  1998   3 articles
  1997   8 articles
  1996   4 articles
  1995   2 articles
  1994   2 articles
  1993   3 articles
  1992   1 article
  1991   3 articles
  1990   2 articles
  1989   1 article
  1988   2 articles
  1987   5 articles
  1986   4 articles
  1985   1 article
  1981   2 articles
  1977   1 article
  1973   1 article
  1972   1 article
Sort order: 133 results found in total; search took 312 ms
1.
Data collection, both automatic and manual, lies at the heart of all empirical studies. The quality of data collected from software informs decisions on maintenance, testing and wider issues such as the need for system re-engineering. Of the two types, automatic data collection is preferable, yet there are numerous occasions when manual data collection is unavoidable. Very little evidence exists, however, to assess the error-proneness of the latter. Herein, we investigate the extent to which manual data collection for Java software agrees with its automatic counterpart for the same data. We investigate three hypotheses relating to the difference between automated and manual data collection. Five Java systems were used to support our investigation. Results showed that, as expected, manual data collection was error-prone, but nowhere near the extent we had initially envisaged. Key indicators of mistakes in manual data collection were found to be poor developer coding style, poor adherence to sound OO coding principles, and the existence of relatively large classes in some systems. Some interesting results were found relating to the collection of public class features and the types of error made during manual data collection. The study thus offers an insight into some of the typical problems associated with collecting data manually; more significantly, it highlights the effect that poorly written systems have on the quality of visually extracted data.
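As a hedged illustration of how such a manual-versus-automatic comparison might be quantified (not the authors' actual tooling), the sketch below tallies mismatches between manually recorded and automatically extracted class metrics; all class names and metric values are invented.

```python
# Hypothetical illustration: compare manually recorded class metrics against
# automatically extracted ones and summarise where the manual counts disagree.

# Metric records per Java class (values are invented for illustration).
automatic = {
    "OrderService": {"methods": 14, "public_attrs": 2},
    "Invoice":      {"methods": 9,  "public_attrs": 0},
    "ReportUtil":   {"methods": 31, "public_attrs": 5},
}
manual = {
    "OrderService": {"methods": 14, "public_attrs": 2},
    "Invoice":      {"methods": 8,  "public_attrs": 0},   # under-count
    "ReportUtil":   {"methods": 29, "public_attrs": 6},   # large class, two slips
}

mismatches = 0
comparisons = 0
for cls, auto_metrics in automatic.items():
    for metric, auto_value in auto_metrics.items():
        comparisons += 1
        manual_value = manual[cls][metric]
        if manual_value != auto_value:
            mismatches += 1
            print(f"{cls}.{metric}: manual={manual_value}, automatic={auto_value}")

print(f"error rate: {mismatches / comparisons:.1%} ({mismatches}/{comparisons})")
```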
2.
Bifurcated nozzles are used in the continuous casting of molten steel, where they influence the quality of the cast steel slabs. The present study performs two-dimensional (2-D) and three-dimensional (3-D) simulations of steady turbulent (K-ε) flow in bifurcated nozzles, using a finite-element (FIDAP) model that has previously been verified against water-model experiments. The effects of nozzle design and casting process operating variables on the characteristics of the jets exiting the nozzle are investigated. The nozzle design parameters studied include the shape, angle, height, width, and thickness of the ports and the bottom geometry. The process operating conditions include the inlet velocity profile and angle as well as port curvature caused by erosion or inclusion buildup. Results show that the jet angle is controlled mainly by the port angle but is steeper with larger port area and thinner walls. The degree of swirl is increased by larger or rounder ports. The effective port area, where there is no recirculation, is increased by smaller or curved ports. Flow asymmetry is more severe with skewed or angled inlet conditions or unequal port sizes. Turbulence levels in the jet are higher with higher casting speed and smaller ports.
3.
Silicon - In the present report, a photonic crystal-based micro-ring resonator (MRR) structure is proposed that is very compact in size, has a very fast response, and is employed for temperature...
4.
With lower costs and greater availability, heavy fuel oil appears to be an attractive alternative to the conventional gas oil used in industrial gas turbines. However, higher levels of radiation and smoke are expected, and this note reports on some preliminary tests made with a combustion chamber burning fuels of different carbon content, ranging from kerosine to a 25% blend of residual fuel oil in gas oil, at a chamber pressure of 10 atm. The combustion rig was equipped with a total-radiation pyrometer and a black-body furnace capable of measurement at different axial stations along the spray-stabilized flame. The presence of the residual fuel oil in the gas oil was found to promote significant increases in the mean levels of radiation, emissivity and smoke density, with a modest increase in liner temperature.
5.
Advanced material characterization of asphalt concrete is essential for realistic and accurate performance prediction of flexible pavements. However, such characterization requires rigorous testing regimes that involve mechanical testing of a large number of laboratory samples at various conditions and set-ups. Advanced measurement instrumentation, in addition to meticulous and accurate data analysis and analytical representation, is also of high importance. These steps, as well as the heterogeneous nature of asphalt concrete (AC), constitute major factors of inherent variability. Thus, it is imperative to model and quantify the variability of the needed asphalt material properties, mainly the linear viscoelastic response functions such as the relaxation modulus, \(E(t)\), and the creep compliance, \(D(t)\). The objective of this paper is to characterize the inherent uncertainty of both \(E(t)\) and \(D(t)\) over the time domain of their master curves. This is achieved through a probabilistic framework using Monte Carlo simulations and first-order approximations, utilizing \(E^{*}\) data for six AC mixes with at least eight replicates per mix. The study shows that the inherent variability, expressed as the coefficient of variation (COV), in \(E(t)\) and \(D(t)\) is low at small reduced times and increases with increasing reduced time. At small reduced times, the COVs of \(E(t)\) and \(D(t)\) are similar in magnitude; however, the differences become significant at large reduced times. Additionally, the probability distributions and COVs of \(E(t)\) and \(D(t)\) are mix dependent. Finally, a case study is considered in which the inherent uncertainty in \(D(t)\) is propagated forward to assess the effect of variability on the predicted number of cycles to fatigue failure of an asphalt mix.
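To make the propagation step concrete, the following is a minimal Python sketch of Monte Carlo uncertainty propagation through a master curve, assuming a sigmoidal form for \(E(t)\); the coefficient means and COVs are invented placeholders, not the study's mixes or data.

```python
import numpy as np

# Hypothetical sketch of Monte Carlo propagation of master-curve uncertainty:
# sample sigmoidal master-curve coefficients from assumed distributions,
# evaluate E(t) over reduced time, and report the coefficient of variation.
# The coefficient means/COVs below are invented, not the paper's values.

rng = np.random.default_rng(0)
n_samples = 5000
log_tr = np.linspace(-6, 6, 25)          # log10 of reduced time

# Sigmoidal master curve: log10 E(t) = delta + alpha / (1 + exp(beta + gamma*log_tr))
means = {"delta": 0.5, "alpha": 3.5, "beta": -1.0, "gamma": -0.5}
covs  = {"delta": 0.05, "alpha": 0.03, "beta": 0.10, "gamma": 0.08}

draws = {k: rng.normal(means[k], abs(means[k]) * covs[k], n_samples) for k in means}

log_E = (draws["delta"][:, None]
         + draws["alpha"][:, None]
         / (1.0 + np.exp(draws["beta"][:, None] + draws["gamma"][:, None] * log_tr)))
E = 10.0 ** log_E                         # relaxation modulus samples, shape (n_samples, n_times)

cov_E = E.std(axis=0) / E.mean(axis=0)    # COV of E(t) at each reduced time
for lt, c in zip(log_tr[::6], cov_E[::6]):
    print(f"log10(reduced time) = {lt:+.1f}  ->  COV[E(t)] = {c:.2%}")
```

The same propagation applied to \(D(t)\) samples would give the companion COV curve discussed in the abstract.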
6.
High-level language abstraction for reconfigurable computing   Total citations: 1 (self-citations: 0, citations by others: 1)
7.
This study aims to contribute to the definition of a methodology that can help select a relevant roughness parameter for describing the topography of orthopaedic bearing surfaces. In this investigation, the surface topography of a retrieved titanium alloy (TA6V) femoral head was characterized using visual inspection, optical microscopy and three-dimensional contacting profilometry. A numerical analysis of the roughness measurements was then undertaken to assess, as a first step, the values of the different roughness parameters of interest found in papers dealing with the topography of orthopaedic bearing surfaces. In a second step, Analysis of Variance (ANOVA) and the Computer-Based Bootstrap Method were combined to determine statistically, and without preconceived assumptions, which of those parameters is the most relevant for describing the different investigated worn regions of the studied femoral head.
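As a hedged illustration of how ANOVA and a bootstrap might be combined to rank candidate roughness parameters across worn regions (the parameter names, region labels, and measurement values below are synthetic placeholders, not the study's data):

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical sketch: for each candidate roughness parameter, bootstrap the
# one-way ANOVA F statistic across worn regions and rank parameters by how
# consistently they separate the regions. All data below are synthetic.

rng = np.random.default_rng(1)
regions = ["polar", "equatorial", "unworn"]

# Synthetic measurements: {parameter: {region: array of values}}
data = {
    "Sa":  {r: rng.normal(loc, 0.05, 30) for r, loc in zip(regions, [0.40, 0.55, 0.20])},
    "Sq":  {r: rng.normal(loc, 0.10, 30) for r, loc in zip(regions, [0.52, 0.60, 0.50])},
    "Ssk": {r: rng.normal(loc, 0.30, 30) for r, loc in zip(regions, [-0.1, 0.0, 0.1])},
}

n_boot = 2000
for param, groups in data.items():
    f_stats = []
    for _ in range(n_boot):
        resampled = [rng.choice(g, size=g.size, replace=True) for g in groups.values()]
        f_stats.append(f_oneway(*resampled).statistic)
    f_stats = np.array(f_stats)
    lo, hi = np.percentile(f_stats, [2.5, 97.5])
    print(f"{param}: median F = {np.median(f_stats):.1f}, 95% bootstrap CI = [{lo:.1f}, {hi:.1f}]")
```

A parameter whose bootstrap F distribution stays consistently high would be ranked as the more relevant descriptor of the worn regions.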
8.
This paper presents a performance analysis of a recently proposed preamble-based reduced-complexity (RC) two-stage synchronization technique. The preamble, composed of two identical subsequences, is first used to determine an uncertainty interval based on the Schmidl and Cox algorithm. Then, a differential correlation-based metric is computed using a new sequence obtained by element-wise multiplication of the preamble subsequence and a shifted version of it. This second step is performed to fine-tune the coarse time estimate by evaluating the differential correlation-based metric over an uncertainty interval of limited width around the coarse estimate, thus leading to a low computational load. In this paper, we first discuss some complexity issues of the RC approach compared with previously proposed algorithms. Then, we study the effect of the choice of training sequence class and length on the synchronization performance in the case of multipath channels. The impact of the uncertainty interval width on the trade-off between performance and complexity is also studied. The two-stage approach was found to provide performance almost equal to that obtained by the most efficient differential correlation-based benchmarks, yet with a much lower computational load, equivalent to that of sliding correlation-based approaches.
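A minimal Python sketch of the two-stage idea, assuming a Schmidl-Cox-style identical-halves metric for the coarse stage and a differential cross-correlation restricted to a window around it for the fine stage; the preamble, noise levels, and window width are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Hedged sketch of the two-stage idea described above (not the paper's exact
# algorithm): an identical-halves autocorrelation metric gives a coarse timing
# estimate, then a differential cross-correlation is evaluated only inside a
# small uncertainty window around it.

rng = np.random.default_rng(2)
L = 64                                                  # half-preamble length
half = np.exp(1j * 2 * np.pi * rng.random(L))           # random unit-modulus subsequence
preamble = np.concatenate([half, half])                 # two identical halves

true_start = 300
signal = np.concatenate([rng.normal(0, 0.3, true_start) + 1j * rng.normal(0, 0.3, true_start),
                         preamble,
                         rng.normal(0, 0.3, 200) + 1j * rng.normal(0, 0.3, 200)])
signal += rng.normal(0, 0.1, signal.size) + 1j * rng.normal(0, 0.1, signal.size)

# Stage 1: coarse estimate from the identical-halves autocorrelation metric.
def coarse_metric(r, d):
    p = np.sum(np.conj(r[d:d + L]) * r[d + L:d + 2 * L])
    e = np.sum(np.abs(r[d + L:d + 2 * L]) ** 2)
    return np.abs(p) ** 2 / (e ** 2 + 1e-12)

candidates = np.arange(signal.size - 2 * L)
coarse = candidates[np.argmax([coarse_metric(signal, d) for d in candidates])]

# Stage 2: differential correlation with the known sequence, restricted to a
# small uncertainty window around the coarse estimate.
diff_ref = half[1:] * np.conj(half[:-1])                # differential reference sequence
W = 16                                                  # assumed window half-width
window = np.arange(max(coarse - W, 0), min(coarse + W, signal.size - L))

def fine_metric(r, d):
    seg = r[d:d + L]
    return np.abs(np.sum(seg[1:] * np.conj(seg[:-1]) * np.conj(diff_ref)))

fine = window[np.argmax([fine_metric(signal, d) for d in window])]
print(f"true start = {true_start}, coarse = {coarse}, fine = {fine}")
```

Because the fine metric is only evaluated over the 2W-sample window, its cost stays close to that of a sliding correlation, which is the complexity trade-off the abstract highlights.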
9.
In this work, cascaded waste-heat recovery (WHR) is analyzed from the thermodynamic point of view. Typically, WHR is most effective with small gas turbines and older machines, which have a relatively higher design mass flow per kW and higher exhaust temperatures than new designs. The working fluid used in the WHR technology is propane, which vaporizes and condenses at low temperatures. The temperature of the heat source, the outlet pressure of the two expanders, and the mass flow rate of the working fluid are taken as the working variables of the technology. The effect of these variables on the thermal efficiency and power output is evaluated, and the results are analyzed and discussed. The results of the calculation are also compared with similar published studies. The overall efficiency, considering the gas turbine upstream, ranges from about 35% up to 39%. The highest efficiency and power output of the WHR alone, at a 900 K heat source temperature, 800 kPa condenser pressure, and 100 kg/s mass flow rate, are 30% and 18 MW, respectively, for two-expander WHR, and 18% and 9 MW, respectively, for single-expander WHR.
10.
Multi-robot systems have attracted attention in various applications as a means of replacing human operators. To achieve the intended goal, one of the main challenges for such systems is to ensure the integrity of localization by adding a sensor fault-diagnosis step to the localization task. In this paper, we present a framework that, in addition to localizing a group of robots, is able to detect and exclude faulty sensors from the group using an optimized thresholding method. The estimator has the informational form of the Kalman Filter (KF), namely the Information Filter (IF). A residual test based on the Kullback-Leibler divergence (KLD) between the predicted and the corrected distributions of the IF is developed. It is generated from two tests: the first acts on the means and the second deals with the covariance matrices. Thresholding using an entropy-based criterion and the Receiver Operating Characteristic (ROC) curve is discussed. Finally, the framework is validated on real experimental data from a group of robots.
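A minimal sketch of such a KLD residual between the filter's predicted and corrected Gaussian distributions, split into a mean term and a covariance term as described above; the state dimension, numerical values, and threshold are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch of a KLD residual between the predicted and corrected Gaussian
# distributions of an information/Kalman filter, split into a mean term and a
# covariance term. The state dimension, matrices, and threshold below are
# illustrative, not the paper's values.

def kld_gaussians(mu_p, P_p, mu_c, P_c):
    """KL(corrected || predicted) for multivariate Gaussians, returned in two parts."""
    k = mu_p.size
    P_p_inv = np.linalg.inv(P_p)
    diff = mu_p - mu_c
    mean_term = 0.5 * diff @ P_p_inv @ diff
    cov_term = 0.5 * (np.trace(P_p_inv @ P_c) - k
                      + np.log(np.linalg.det(P_p) / np.linalg.det(P_c)))
    return mean_term, cov_term

# Predicted and corrected estimates for a 3-state robot pose (x, y, heading).
mu_pred = np.array([1.00, 2.00, 0.10])
P_pred = np.diag([0.20, 0.20, 0.05])
mu_corr = np.array([1.05, 1.95, 0.11])           # nominal correction
P_corr = np.diag([0.10, 0.10, 0.03])

mean_t, cov_t = kld_gaussians(mu_pred, P_pred, mu_corr, P_corr)
residual = mean_t + cov_t
threshold = 2.0                                   # would be tuned, e.g. via a ROC curve
print(f"mean term = {mean_t:.3f}, covariance term = {cov_t:.3f}, "
      f"residual = {residual:.3f}, faulty = {residual > threshold}")
```

A faulty sensor would inflate either the mean term (biased correction) or the covariance term (inconsistent uncertainty), pushing the residual above the tuned threshold.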