1.
The techniques of coding theory are used to improve the reliability of digital devices. Redundancy is added to the device through extra digits that are computed independently from the input digits. A decoding device examines the original outputs along with the redundant outputs. The decoder may correct any errors it detects; locate, without correcting, the defective logic gate or subsystem; or simply issue a general error warning. Majority voting and parity-bit checking are introduced, and computations are made for several binary addition circuits. A detailed summary of coding theory is presented, including a discussion of algebraic codes, binary group codes, nonbinary linear codes, and error-locating codes.
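The two introductory schemes named in the abstract, majority voting and parity-bit checking, can be sketched in a few lines. This is an illustrative sketch, not the paper's circuits; the function names are assumptions.

```python
def majority_vote(a, b, c):
    """Return the majority value of three redundant module outputs
    (triple modular redundancy masks any single module error)."""
    return (a & b) | (a & c) | (b & c)

def parity_bit(bits):
    """Even-parity check digit, computed independently from the input digits."""
    p = 0
    for b in bits:
        p ^= b
    return p

def parity_error(bits, check):
    """True if the stored check digit disagrees with the recomputed parity,
    i.e. an odd number of digits have flipped."""
    return parity_bit(bits) != check
```

Majority voting corrects a single module error outright, while a single parity bit only detects an odd number of bit errors without locating them, which is the trade-off between correction, location, and warning that the abstract describes.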
2.
The generalized stress-strength model which is prevalent in the current literature is perhaps the closest that analysts have come to a general physical model. To obtain a failure density function and associated hazard function, one must assume a certain probability distribution for the part strength and a particular amplitude distribution and frequency-of-occurrence distribution for the part stress. If one assumes a normal strength distribution and Poisson-distributed stress occurrence times with normally distributed amplitudes, this leads to an exponential failure density function and a constant hazard. Such a model is probably best suited for situations in which the part generally lasts a long time and fails only when an occasional large stress occurs. In many situations the failure of parts fits a different pattern: the part is operated at a nearly constant stress level, but the part strength gradually deteriorates with time. As time goes on, the rate of deterioration should increase sharply as wear-out is reached, causing an increase in hazard. A probabilistic model which fits this hypothesis is a constantly applied stress and a Rayleigh-distributed part strength. The parameter of the Rayleigh distribution is allowed to increase in an exponential fashion with time, which produces the strength-deterioration effect. Basically the failure rate turns out to depend on the square of the applied stress; however, if the strength-deterioration rate is allowed to be a function of the input stress, other behaviors are predicted.
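The wear-out model described above can be sketched numerically. The parameterization below (Rayleigh CDF F(x; t) = 1 − exp(−k(t)·x²/2) with k(t) = k0·exp(a·t), and the names `k0`, `a`, `s`) is an assumed illustration, not the paper's notation; it reproduces the two stated features, an exponentially growing hazard and the dependence on the square of the applied stress.

```python
import math

def reliability(t, s, k0=0.01, a=0.5):
    """R(t) = P(strength > s at time t), with Rayleigh strength CDF
    F(x; t) = 1 - exp(-k(t) * x**2 / 2) and k(t) = k0 * exp(a * t)."""
    return math.exp(-k0 * math.exp(a * t) * s ** 2 / 2.0)

def hazard(t, s, k0=0.01, a=0.5):
    """Closed-form hazard h(t) = -R'(t) / R(t).
    Note the s**2 factor: the failure rate depends on the square
    of the constantly applied stress, growing exponentially in time."""
    return (s ** 2 / 2.0) * k0 * a * math.exp(a * t)
```

Doubling the applied stress `s` quadruples the hazard at every time, and the exp(a·t) factor supplies the sharp wear-out rise.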
3.
In many modern complex systems the problem of achieving high reliability leads to the use of interchangeable modular components accompanied by a stock of spare parts. This paper examines, compares, and assesses several of the techniques presented in the literature for allocating the numbers of spares of each part type to be stocked in order to maximize the system reliability subject to constraints on resources (i.e., weight, volume, cost, etc.). The problem of optimum spares allocation is complicated because resources are consumed in discrete amounts and the expression for the system reliability is a nonlinear transcendental function. The classical dynamic programming algorithm produces all optimal spares allocations; however, the solution can become computationally intractable even with the aid of a modern high-speed digital computer. In the case of multiple constraints the computation-time problem is vastly exacerbated. In such a case one must turn to a procedure that yields a near-optimal solution in a reasonable amount of computer time. Two approximate methods discussed in this paper are the incremental-reliability-per-pound algorithm and the Lagrange multiplier algorithm. These algorithms are readily adaptable to handle multiple constraints. Computer programs are developed for each of the three optimization algorithms and are utilized to obtain the spares allocation for a few systems. The optimization theory presented is directly applicable to series or parallel systems. A concluding example illustrates how this can be extended to certain series-parallel systems.
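The incremental-reliability-per-pound heuristic can be sketched as a greedy loop: at each step, buy the single spare that yields the largest gain in log system reliability per pound of remaining weight budget. The per-part model R_i(n) = 1 − q_i^(n+1) (the part and all n spares must fail) is an assumed illustration, not necessarily the paper's model.

```python
import math

def greedy_spares(q, w, budget):
    """q[i]: per-mission failure probability of part type i;
    w[i]: weight of one spare of type i; budget: total spare weight.
    Returns the number of spares allocated to each part type."""
    n = [0] * len(q)
    used = 0.0
    while True:
        best, best_gain = None, 0.0
        for i in range(len(q)):
            if used + w[i] > budget:
                continue  # this spare no longer fits the weight constraint
            # Gain in log system reliability per pound from one more spare:
            # system R is a product, so log-gains of part types add.
            gain = (math.log(1.0 - q[i] ** (n[i] + 2))
                    - math.log(1.0 - q[i] ** (n[i] + 1))) / w[i]
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break  # nothing fits: budget exhausted
        n[best] += 1
        used += w[best]
    return n
```

The greedy loop is fast but only near-optimal, which matches the abstract's framing of it as an approximate alternative to exhaustive dynamic programming; multiple constraints can be handled by skipping any candidate spare that violates any of them.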
4.
A many-state Markov model has been developed for the purpose of providing performance criteria for computer software. The model provides estimates and closed-form predictions of the availability and of the most probable number of errors that will have been corrected at a given time in the operation of a large software package. The model is based on constant rates for error occurrence λ and error correction μ. An interesting application case is when λ and μ are functions of the state of debugging achieved. This case is discussed and solved numerically. Extensions and modifications of the basic model are briefly discussed.  相似文献   
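The constant-rate case reduces, in its simplest form, to a two-state up/down Markov chain with error-occurrence rate λ and error-correction rate μ, for which the availability has a well-known closed form. This sketch (the many-state model in the paper generalizes it) uses the assumed names `lam` and `mu`.

```python
import math

def steady_availability(lam, mu):
    """Limiting availability of a two-state Markov chain:
    up -> down at rate lam, down -> up at rate mu."""
    return mu / (lam + mu)

def availability(t, lam, mu):
    """Point availability A(t), starting in the 'up' state at t = 0:
    A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu) * t)."""
    a_inf = mu / (lam + mu)
    return a_inf + (lam / (lam + mu)) * math.exp(-(lam + mu) * t)
```

A(0) = 1 and A(t) decays exponentially to the steady-state value μ/(λ+μ); when λ and μ depend on the debugging state, as in the case the abstract highlights, no such closed form exists and the chain must be solved numerically.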
5.
This paper reports the results of an experiment whose objectives were to: (1) develop and utilize a set of terms for describing possible types of errors, their nature, and their frequency; (2) perform a pilot study to determine whether data of the type reported in this paper could be collected; (3) investigate the error density and its correspondence to predictions from previously reported data; and (4) develop data on how resources are expended in debugging. A program of approximately 4k machine instructions was chosen. Programmers were asked to fill out for each error, in addition to the regular Trouble Report (TR)/Correction Report (CR) form, a special Supplementary TR/CR form for the purposes of this experiment. Both the regular and the Supplementary forms were divided into two sections: each form consisted of a single sheet, the upper half being the Trouble Report (TR) and the lower half the Correction Report (CR). Sixty-three regular and Supplementary TR/CR forms were completed during the Test and Integration phase of the program; the number of errors reported thus amounted to a little over 1.5% of the total number of machine instructions in the program (in good agreement with the 1% to 2% range noted in previous studies). A large fraction of the errors was found by hand processing (without the aid of a computer), which was much cheaper than machine testing.
6.
7.
8.
Many practicing engineers model their systems using reliability diagrams, while others use fault-tree analysis. The theoretical equivalence of the two techniques is described. System reliability can be expressed in two ways, as a probability of success or as a probability of failure, in terms of the tie-sets (forward paths) of a reliability diagram. Similarly, one can write two other expressions in terms of the cut-sets of the system reliability diagram. If one uses the fault-tree analysis approach, the probability of failure is written in terms of element failures by applying the rules of symbolic logic (union and intersection). This equation is identical to the tie-set probability-of-failure equation. Also, by applying DeMorgan's theorem to the fault-tree probability-of-failure equation, one obtains the tie-set probability-of-success equation. Thus the two techniques are shown to be identical. The choice between the techniques is a matter of convenience and familiarity.  相似文献   
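The equivalence can be checked by brute force for a small system: the probability of success computed from the tie-sets and the probability of failure computed from the cut-sets must sum to one over the same state space. This is an illustrative sketch for an assumed two-parallel-then-series example, not the paper's derivation.

```python
from itertools import product

def _state_weight(comps, states, p):
    """Probability of one up/down state vector under independence."""
    weight = 1.0
    for c, s in zip(comps, states):
        weight *= p[c] if s else 1.0 - p[c]
    return weight

def prob_success(tie_sets, p):
    """P(system works) = P(some tie-set has all its components up)."""
    comps = sorted(p)
    total = 0.0
    for states in product([0, 1], repeat=len(comps)):
        up = {c for c, s in zip(comps, states) if s}
        if any(ts <= up for ts in tie_sets):
            total += _state_weight(comps, states, p)
    return total

def prob_failure(cut_sets, p):
    """P(system fails) = P(some cut-set has all its components down)."""
    comps = sorted(p)
    total = 0.0
    for states in product([0, 1], repeat=len(comps)):
        down = {c for c, s in zip(comps, states) if not s}
        if any(cs <= down for cs in cut_sets):
            total += _state_weight(comps, states, p)
    return total
```

For components `a` and `b` in parallel followed by `c` in series, the tie-sets are {a, c} and {b, c} and the cut-sets are {c} and {a, b}; the two functions then return complementary probabilities, which is DeMorgan's theorem checked numerically.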