Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
This paper describes the development of two undergraduate laboratory courses in semiconductor materials and devices. The project was supported through a National Science Foundation (NSF) Instrumentation and Laboratory Improvement (ILI) grant, with supplementary support from the NASA JOVE Program, Arkansas State University, and equipment vendors. The courses complement lecture courses, cover semiconductor growth, characterization, processing, and simple devices, and enhance intuition about abstract concepts. They consist of “Activity Sets”, each covering a particular topic and consisting of three related but distinct experiments, each performed by a team of two to four students. Each team orally presents the results of its experiment, and all results are then discussed to form overall Activity Set conclusions. Innovations include the use of compound-semiconductor thin-film samples grown directly by student teams using liquid-solution techniques, and team research on original topics during the second course. Emphasis is also placed on laboratory and chemical safety; technical communication through laboratory notebooks, oral presentations, and formal reports; and creative, team-oriented solutions to frequent experimental challenges. Students are provided “open-ended” experiences more typical of the “real world” than those in many instructional laboratories. The popular courses enhance student confidence, maturity, and marketability.

2.
Ozone has been found to be effective in many forms of water treatment. As concerns increase about the safety of alternative water-treatment methods (chlorination in particular), ozone, which is already used extensively in Europe, offers an effective option. This paper describes a new method of ozone generation particularly suited for use in water purification. Most current industrial ozone production is based on “silent” electrical discharges in a gap between concentric electrodes separated by a glass or ceramic dielectric barrier. The authors present experimental results obtained using a parallel-plate discharge geometry. The lower electrode consists of a grounded “pool” of still water separated by a discharge gap from an upper insulated planar electrode. When the electrode is energized by an AC high voltage, a multitude of “Taylor cones” forms on the water surface. The Taylor cones form and collapse randomly and continuously, depending on the electric field, and their tips provide points for electrical discharge pulses that initiate ozone generation. This method generates ozone in close proximity to the water surface. Laboratory experiments show efficiencies for gaseous ozone production as high as 110 g/kWh.

3.
This paper explores the concepts of steady-state, asymptotic, and transient response for linear time-invariant systems. While these concepts appear in many circuits-and-systems texts, precise definitions are often not given. In the paper the linear system is assumed to be a continuous-time single-input single-output (SISO) system, characterized by its transfer function W(s) or impulse response w(t). The forced response of the system is decomposed into three components: the “input” component, the “system” component, and the “interaction” component. For rational transfer functions and transformed inputs, the above concepts are developed in terms of these three components. The concepts are then extended to the case where either the system transfer function W(s) or the transformed input U(s) may be nonrational.
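As a brief worked illustration (assumed, not taken from the paper, whose precise three-component decomposition, including the “interaction” component, is not reproduced here), a first-order example shows how the poles of U(s) and of W(s) separate the asymptotic and transient parts of the forced response:

```latex
% Illustrative first-order example only: step input applied to a stable system.
W(s) = \frac{1}{s+1}, \qquad U(s) = \frac{1}{s},
\qquad
Y(s) = W(s)\,U(s) = \frac{1}{s(s+1)} = \frac{1}{s} - \frac{1}{s+1},
\qquad
y(t) = \underbrace{1}_{\text{pole of }U(s):\ \text{asymptotic}}
       \;-\;
       \underbrace{e^{-t}}_{\text{pole of }W(s):\ \text{transient}},
\qquad t \ge 0.
```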

4.
Many years ago, the National Electrical Code established a maximum setting for the instantaneous trip breaker of seven times motor full-load amperes (FLA), the theory being that this value was just above the typical 6× locked-rotor current. In a 1986 IEEE IAS paper, Scheda noted that the two most commonly encountered problems in applying high-efficiency motors are (1) replacing a standard motor with a high-efficiency motor, and (2) the need to use thermal sensors on larger machines, since higher settings for electronic control would violate the NEC. His paper concludes: “It is suggested that work be done in the industry and standards organizations to serve the needs arising from the use of electronic detection of instantaneous currents for motor protection.” This article addresses some of the work done in the industry to serve these needs and, in particular, how an “electronic detection” inverse-time circuit breaker can be appropriately applied where an instantaneous trip breaker would nuisance-trip.
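A back-of-the-envelope sketch of the quoted multipliers (the 100 A full-load current is a made-up example value):

```python
def nec_instantaneous_ceiling(fla_amps: float, multiplier: float = 7.0) -> float:
    """Maximum instantaneous-trip setting under the 7x-FLA ceiling cited above."""
    return multiplier * fla_amps

fla = 100.0                              # example motor full-load amperes (assumed)
print(nec_instantaneous_ceiling(fla))    # 700.0 A maximum setting
print(6.0 * fla)                         # 600.0 A typical locked-rotor current
```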

5.
Enhancing the human-computer interface of power system applications
This paper examines a topic of increasing importance: the interpretation of the massive amount of data available to power system engineers. The solutions currently adopted for presenting data in graphical interfaces are discussed. It is demonstrated that representations of electric diagrams can be considerably enhanced through adequate exploitation of the resources available in full-graphics screens and the use of basic concepts from human-factors research. Enhanced representations of electric diagrams are proposed and tested. The objective is to let the user “see” the behavior of the power system, allowing better interpretation of program data and results and improving the user's productivity.

6.
Congress included a two-sentence provision in the Omnibus 1999 Appropriations Bill [Public Law 105-277] directing the Office of Management and Budget (OMB) to amend OMB Circular A-110 to extend the Freedom of Information Act (FOIA) to “require Federal awarding agencies to ensure that all data produced under an award will be made available to the public under the FOIA”. The Circular applies to grants and other financial assistance provided to institutions of higher education, hospitals, and nonprofit institutions by all Federal agencies; the final revision will therefore affect the full range of research activities funded by the Federal Government. According to congressional floor statements made in support of the provision in P.L. 105-277, its aim is to “provide the public with access to federally funded research data” that are “used by the Federal Government in developing policy and rules” (Statement of Sen. Lott).

7.
The issue of point location is an important problem in computer graphics, and the study of efficient data structures and fast algorithms for it is an important research area for both the computer graphics and computational geometry disciplines. When filling the interior region of a planar polygon in computer graphics, it is necessary to identify which points lie within the interior region and which lie outside. Sutherland and Hodgman are credited with designing the first algorithm to solve the problem. Their approach uses vector construction and vector cross products, and it forms the basis of the “odd parity” rule. To determine whether a test point is inside or outside a given planar polygon, a ray is drawn from the test point extending to infinity in any direction that does not intersect a vertex. If the ray intersects the polygon outline an odd number of times, the point is considered to be inside the region; otherwise, it is outside. In three-dimensional space (3-space), Lee and Preparata propose an algorithm, but their approach is limited to point location relative to convex polyhedra with vertices in 3-space. Although the literature is rich in optimal data structures that reduce storage requirements and in efficient algorithms for fast execution, a proof of correctness for the general problem of point location relative to an arbitrary surface in 3-space has been absent. This paper argues that electromagnetic field theory and Gauss's Law constitute a fundamental basis for the “odd parity” rule, and it shows that the rule may be correctly extended to point location relative to any arbitrary closed surface in 3-space.
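A minimal sketch of the two-dimensional “odd parity” (ray-crossing) test described above; the function name and the horizontal-ray convention are illustrative choices, not taken from the paper:

```python
from typing import List, Tuple

def point_in_polygon(pt: Tuple[float, float],
                     poly: List[Tuple[float, float]]) -> bool:
    """Odd-parity (ray-crossing) test: cast a horizontal ray from `pt`
    toward +x and count how many polygon edges it crosses.
    An odd count means the point lies inside the polygon."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edge straddles the ray's y-level (half-open test avoids
        # double-counting a crossing at a shared vertex).
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's y-level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

# Unit square: (0.5, 0.5) is inside, (1.5, 0.5) is outside.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(point_in_polygon((0.5, 0.5), square))   # True
print(point_in_polygon((1.5, 0.5), square))   # False
```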

8.
The authors describe a differential diagnostic decision-support system to aid early detection in primary-care environments. In general, the authors are able to provide correct differential diagnostic support under primary-care conditions without any further financial investment. In some cases it is difficult to differentiate between the “glaucomatous” and “pathological” situation classes; the transition between these two situations is unclear. Initial tests of a combined crisp/fuzzy output refined the authors' results, and these are now under investigation. More data are needed for the “normal” class. An evaluation of the complete monitoring system will be started, including all components and the sampling of more data sets. For consecutive patient cases, time-dependent characteristic changes in the visual fields could be detected. Further development work will be done in this area, in parallel with the growth of the database.

9.
A novel framework for dynamic equivalencing of interconnected power systems, which the authors recently introduced in the context of classical swing-equation models, is extended in this paper to detailed models in structure-preserving differential/algebraic-equation form. The system is partitioned into a study area and one or more external areas on the basis of synchrony, a generalization of slow coherency that forms one leg of the framework. Retaining a detailed model for a single reference generator from each external area, the dynamics of the remaining external generators are then modally equivalenced in the style of selective modal analysis; this modal equivalencing is the other leg of the framework. The equivalenced external generators are thereby collectively replaced by a linear multi-port “admittance”, which is easily represented using controlled current injectors at the buses of the replaced generators. The rest of the system model can be retained in its original nonlinear dynamic form. The approach is tested, with encouraging results, on the familiar third-order, 10-machine, 39-bus New England model, using an implementation in the EUROSTAG simulation package.

10.
The authors discuss the underlying principles of image and video compression. The network model they use for image compression is the random neural network (RNN). This pulsed network model provides a somewhat more accurate representation of what occurs in “real” neurons. Signals in the form of pulse trains travel between neurons; these pulses can be either excitatory (“positive” pulses) or inhibitory (“negative” pulses). As in many naturally occurring neural nets, all pulses have the same magnitude, normalized to 1. A neuron in the RNN emits pulses at an instantaneous rate proportional to its degree of excitation and its firing rate. Besides being more accurate, the RNN is also useful because an algorithm has been designed that allows a fully recurrent RNN to be trained. This means it is possible to find good weights between neurons even when every neuron is connected to every other neuron, a full recurrence that is not easily accommodated in standard backpropagation networks.
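For readers unfamiliar with the model, the sketch below iterates the standard steady-state excitation equations of the random neural network (a generic illustration of the model, not the authors' compression network); the weights and arrival rates are made-up example values:

```python
import numpy as np

def rnn_excitation(W_plus, W_minus, Lambda, lam, iters=200):
    """Fixed-point iteration for the steady-state excitation probabilities
    q_i of a random neural network:
        q_i = lambda_plus_i / (r_i + lambda_minus_i)
    where lambda_plus_i  = Lambda_i + sum_j q_j * W_plus[j, i]
          lambda_minus_i = lam_i    + sum_j q_j * W_minus[j, i]
    and r_i is the total firing rate of neuron i."""
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)   # firing rates
    q = np.zeros_like(Lambda, dtype=float)
    for _ in range(iters):
        lam_plus = Lambda + q @ W_plus
        lam_minus = lam + q @ W_minus
        # Clip as a pragmatic safeguard; a stable network keeps q_i < 1.
        q = np.clip(lam_plus / (r + lam_minus), 0.0, 1.0)
    return q

# Example (made-up) three-neuron fully recurrent network.
W_plus = np.array([[0.0, 0.4, 0.2],
                   [0.3, 0.0, 0.3],
                   [0.2, 0.2, 0.0]])
W_minus = np.array([[0.0, 0.1, 0.1],
                    [0.1, 0.0, 0.1],
                    [0.1, 0.1, 0.0]])
Lambda = np.array([0.5, 0.2, 0.1])   # external excitatory arrival rates
lam = np.array([0.1, 0.1, 0.1])      # external inhibitory arrival rates
print(rnn_excitation(W_plus, W_minus, Lambda, lam))
```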

11.
To further improve the peak-regulation capability of end-use terminals and effectively weaken the impact of wind power output uncertainty on wind power accommodation, an optimization technique for wind power accommodation that accounts for differences in terminal “heat storage” is proposed. First, differences in terminal heat load are analyzed on the basis of users' electric load, heat load, and indoor heating temperature data; second, differences in terminal “heat storage” are analyzed on the basis of variables such as disturbance price, disturbance duration, and heating temperature; finally, scenario analysis is used to construct different spatial-distribution scenarios of terminal heat load, and simulations are carried out with the proposed technique. Comparing the results before and after regulation, the main conclusions are as follows: (1) under the near-end spatial distribution of heat load, wind power accommodation increases by up to 7.23%; (2) the room temperatures of different user types fluctuate within their respective acceptable ranges, with a maximum fluctuation of 6 °C for Class A users and a maximum heating room temperature of 30 °C for Class C users; (3) the improvements in wind power accommodation and in the combined heat and power plant's revenue demonstrate the technical and economic feasibility of the proposed strategy.

12.
The article deals only with simulation models that have stochastic, or random, input. Classical statistical methods for independent observations assume that each observation carries the maximum information, and they therefore compute the smallest confidence interval. Since stationary simulation output data carry less information, a confidence interval obtained by applying classical statistical computations to autocorrelated observations would be too small, leading one to conclude that the parameter estimate is much more precise than it actually is. To get around this problem, several methods have been suggested in the output-data-analysis literature. Two of the most widely accepted are: 1) the method of independent replications; and 2) the method of batch means. Both methods try to avoid autocorrelation by breaking the data into “independent” segments. The sample means of these segments are treated as i.i.d. and used to calculate confidence intervals. In the first method, several independent runs are executed; in the second, a single long simulation run is executed and divided into several “nearly uncorrelated” batches. The article specifically examines the Java Simulation (JSIM) Web-based environment, which has evolved to incorporate component-based technology. If component-based technology succeeds, the long-hoped-for gains in software development productivity may finally be realized.
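A minimal sketch of the batch-means calculation described above (the batch count, confidence level, and AR(1) test series are illustrative choices, not taken from the article):

```python
import numpy as np
from scipy import stats

def batch_means_ci(output, num_batches=20, confidence=0.95):
    """Confidence interval for the mean of a stationary (autocorrelated)
    simulation output series using the method of batch means: split the run
    into contiguous batches, treat the batch means as approximately i.i.d.,
    and apply a t-based interval."""
    n = len(output) // num_batches                  # equal-size batches (truncate remainder)
    batches = np.reshape(output[:n * num_batches], (num_batches, n))
    means = batches.mean(axis=1)                    # one mean per batch
    grand_mean = means.mean()
    sem = means.std(ddof=1) / np.sqrt(num_batches)  # standard error of the grand mean
    half_width = stats.t.ppf((1 + confidence) / 2, df=num_batches - 1) * sem
    return grand_mean, grand_mean - half_width, grand_mean + half_width

# Example: an AR(1) series as a stand-in for autocorrelated simulation output.
rng = np.random.default_rng(0)
x = np.zeros(10_000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.normal()
print(batch_means_ci(x))
```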

13.
A license to practice engineering is a privilege granted by a state to call oneself an “engineer” and to practice “engineering” before the public. In most states these are protected terms with very specific meaning to the public. To become licensed as an “engineer”, an individual must meet minimum standards of education, experience, and examination that demonstrate technical competence and concern for the welfare of the public. The educational requirement is provided by an institution offering an engineering program that has been reviewed and accredited by the Accreditation Board for Engineering and Technology (ABET). The degree awarded by such an institution allows one to claim to be a graduate of an accredited engineering program, but not to be an engineer. The experience requirement is satisfied by engaging in the “practice of engineering” as defined in the empowering statutes of each state engineering licensing board. The examination requirement is met by passing two eight-hour national examinations prepared by the National Council of Examiners for Engineering and Surveying (NCEES), which are offered twice each year. The philosophy and content of these national examinations are the subject of this article.

14.
The control of valuable technical information determines its beneficiaries. The “owner” may exploit the information for a profit or share it with colleagues and the general public. When used for a profit, the benefits focus on monetary considerations and the narrower interests of the “owner” of the information. When information is in the public domain, the benefits extend to everyone. Information, of course, exists whether there is an “owner” or not. Until discovered, information “belongs” to everyone but can be used by no one. After discovery, the ownership of the information does not change, but who can use it does. Thus, one who discovers information becomes its first custodian, but not its owner. In his thoughtful “GNU Manifesto” (1985), Professor Richard Stallman, a well-known computer scientist, points out that the desire to be rewarded for one's creativity does not justify depriving the world of that creativity, and that creativity is a social contribution only insofar as society is free to use the results. Indeed, if the initial custodians of valuable information deserve to be rewarded for their creativity, they also deserve to be punished if they restrict its use. Nevertheless, most legal systems recognize at least three methods for the control of information. Trade-secret law recognizes the right of the first custodian of information to keep it secret and to be protected from misappropriation by a subsequent custodian of the same information. The first custodian of information comprising a significant advance in a useful art is permitted to use the information, as embodied in the specific advancement, exclusively for a period of time in return for disclosing it promptly to the public in a patent.

15.
In a previous paper, a simple frequency-domain stability criterion was proposed for networks near the stability limit subjected to a three-phase fault with no loss of line. The criterion can be summarized as follows: if a system is stable, the phase angle of the Fourier transform of the network's transient voltage response exhibits clockwise polar-plot behaviour at all buses (i.e., for increasing frequency); if the system is unstable, it exhibits counter-clockwise behaviour in at least one location. Though these results are of interest, the criterion would be of greater practical use in mechanizing dynamic security analysis if it could be extended to the types of contingencies actually used in security analysis, namely “normal contingencies”. Normal contingencies are commonly defined as the loss of any element in a power system, either spontaneously or preceded by a fault, and such changes in topology affect post-contingency steady-state voltages in addition to their transient behaviour. The paper shows how such cases can be treated, thereby extending the applicable range of the criterion to normal contingencies.
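A rough sketch of how the rotation direction of a polar plot might be checked numerically; the single-pole frequency responses below merely stand in for transformed transient voltage responses, and the paper's exact criterion and implementation are not reproduced:

```python
import numpy as np

def net_rotation(spectrum):
    """Total signed phase change (radians) of a complex frequency response
    sampled at increasing frequencies. A negative total means the polar
    plot moves clockwise overall; a positive total means counter-clockwise."""
    z = np.asarray(spectrum)
    increments = np.angle(z[1:] / z[:-1])   # signed angle between successive points
    return increments.sum()

# Stand-ins for transformed transient responses (illustrative only):
# a single stable pole at s = -1 and a single unstable pole at s = +1.
w = np.linspace(0.01, 50.0, 5000)           # rad/s, increasing frequency
stable = 1.0 / (1j * w + 1.0)
unstable = 1.0 / (1j * w - 1.0)
print(net_rotation(stable))     # ~ -pi/2: clockwise, consistent with "stable"
print(net_rotation(unstable))   # ~ +pi/2: counter-clockwise, "unstable"
```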

16.
This article presents a new method for solving the “optimal power flow” problem in electric power systems. The method is fast and accurate. It has been implemented through two separate optimization models: MODINOP, the initial optimization model, which quickly finds a near-optimal solution; and MODFINOP, the final optimization model, which leads from that initial solution to the optimal one. The initial optimization model uses the principal component of the branch power flows as control variables; the network losses are computed at each optimization step and attached to the branch extremity buses as additional loads. The final optimization model improves the accuracy: it uses the state-variable changes as control variables and incorporates the losses in the objective function. The resulting optimization models are solved using linear programming, which is easy to implement and provides a fast solution. Applying these models to actual operating data from the Moroccan transmission network quickly yields (in a few seconds on a VAX 11/780) accurate results consistent with the real-time data and compatible with the results provided by other models.
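A toy illustration of the kind of linear-programming step such models rely on, here a two-generator economic dispatch solved with scipy.optimize.linprog; the costs, limits, and choice of variables are invented for illustration and do not reproduce MODINOP or MODFINOP:

```python
from scipy.optimize import linprog

# Toy LP: two generators serve a 150 MW load, with generator 2's output
# limited to 80 MW by a tie-line. Variables: x = [P_g1, P_g2] in MW.
cost = [20.0, 35.0]                      # $/MWh marginal costs (made-up)
A_eq = [[1.0, 1.0]]                      # power balance: P_g1 + P_g2 = load
b_eq = [150.0]
A_ub = [[0.0, 1.0]]                      # tie-line limit on generator 2
b_ub = [80.0]
bounds = [(0.0, 100.0), (0.0, 120.0)]    # generator output limits

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)                    # dispatch [100, 50] MW, cost 3750 $/h
```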

17.
Sometimes a user who is familiar with fixed-speed AC induction motors may specify a variable-speed AC requirement with a preconception about the number of motor poles. This is appropriate for an application in which the motor will be run “across-the-line” and is expected to run at the same speed and load as under inverter operation. If, on the other hand, “bypass” operation is not required, a more optimal choice of motor designs may be available. For example, an application requiring 3000 or 3600 RPM operation would demand a two-pole motor design if bypass (at 50 or 60 Hz) must be provided. When bypass is not required, however, a smaller motor can often be provided in a four-pole design (using a 100 or 120 Hz base frequency) compared with a two-pole configuration. Another aspect of applying adjustable-frequency power supplies to AC induction motors is that they allow an essentially infinite number of possible “base speeds”, including base speeds in excess of 3600 RPM. In this paper the author discusses the number of poles required for fixed-speed AC motors, DC motors, and adjustable-frequency AC motors. Motor performance issues are also discussed.
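The pole-count trade-off above follows from the standard synchronous-speed relation N = 120·f/P; a small sketch using the frequencies and pole counts mentioned in the abstract:

```python
def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    """Synchronous speed of an AC machine: N = 120 * f / P (RPM)."""
    return 120.0 * frequency_hz / poles

# Across-the-line at 60 Hz, a 3600 RPM requirement forces a two-pole design...
print(synchronous_speed_rpm(60, 2))    # 3600.0 RPM
# ...but with an inverter supplying a 120 Hz base frequency, a four-pole
# motor reaches the same speed.
print(synchronous_speed_rpm(120, 4))   # 3600.0 RPM
```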

18.
This paper addresses the issues of post-measurement processing of data collected in a power quality assessment study. Three broad types of post-measurement processing objectives are considered: enhancing accuracy, estimating data, and reducing the volume of the collected data. The methods used to enhance accuracy are bad-data identification and rejection; averaging is discussed as a method to “filter” measurement error. The methods used for data estimation are state-estimation techniques in both the time and frequency domains. The methods used to reduce the volume of the collected data are based on the calculation of marginal and conditional probabilities and expectations. The integrated use of these techniques in an instrumentation system for power quality assessment is discussed. The main suggested application is the measurement of harmonics.
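The averaging idea can be illustrated in a few lines; the nominal value, noise level, and sample counts below are arbitrary illustration values:

```python
import numpy as np

# Averaging N repeated measurements "filters" zero-mean measurement error:
# the standard error of the averaged estimate shrinks roughly as 1/sqrt(N).
rng = np.random.default_rng(1)
true_value = 230.0                       # e.g. an RMS voltage in volts (assumed)
noise_std = 2.0                          # per-measurement error (illustrative)

for n in (1, 16, 256):
    samples = true_value + rng.normal(0.0, noise_std, size=(10_000, n))
    estimates = samples.mean(axis=1)     # one averaged estimate per trial
    print(n, estimates.std())            # ~2.0, ~0.5, ~0.125
```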

19.
A number of issues related to successful power system design practices for maximum harmony between industrial energy users, their utility suppliers, and the local community are discussed in this article. In many cases, industrial power system engineers concentrate on “the plant side of the meter” and utility engineers concentrate on “the utility side of the meter”. The authors show, however, that it is important for the industrial power engineer to be aware of the utility perspective, and vice versa. If not, design flaws may not appear until production begins, and unanticipated problems at startup will require significant investments, not necessarily monetary, by all parties involved.

20.
闫湖  黄碧斌  卢毓东  洪博文  刘周斌 《中国电力》2018,51(5):160-165,178
In the big-data era, data value-added services will become a new blue ocean for building the market for customer-side distributed generation. Centered on the idea of “discovering the value of data and letting data create value”, and based on an analysis of the value transfer of customer-side distributed generation data and of the related stakeholders, an “Internet Plus” data value chain for customer-side distributed generation is constructed, and three classes of value-added services are proposed: data as a service, information as a service, and knowledge as a service. On this basis, the value of several typical value-added services is analyzed. To realize the value of the data and make value-added services a genuine source of profit for enterprises, a data-driven business model for customer-side distributed generation value-added services is proposed, covering four aspects: cost advantage, value creation, value positioning, and value capture.
