Similar Documents
1.
Telephone switching systems require increasingly complex and bulky software. Accordingly, before the operational phase, France Telecom must evaluate software quality, and in particular reliability. This paper presents an experiment conducted at the CNET-Lannion A centre. The experiment was based on the use of three conventional reliability growth models during the qualification phase of one product (analytical and overall tests) and during the validation phase of a different product. Future prospects regarding the reliability data book and the use of the different models are also discussed.

2.
Design of a testability verification and evaluation system based on Pspice
This paper studies a testability verification and evaluation system developed on the basis of Pspice simulation. The system uses a PXI-bus signal switching module as its hardware platform and combines hardware with software, with configuration files used to implement information exchange between the software and hardware modules and dynamic invocation of resources. This approach frees testability verification and evaluation from dependence on the hardware system and on physical equipment, so that testability can be verified and evaluated without physical samples during the development, type-approval, or acceptance stages, providing a new and effective route for the testability verification and evaluation of radar equipment. The system has been shown to implement information exchange between software and hardware modules and dynamic invocation of resources.

3.
With increasing levels of integration of multiple processing cores and new features to support software functionality, recent generations of microprocessors face difficult validation challenges. The systematic validation approach starts with defining the correct behaviors of the hardware and software components and their interactions. This requires new modeling paradigms that support multiple levels of abstraction. Mutual consistency of models at adjacent levels of abstraction is crucial for manual refinement of models from the full chip level to production register transfer level, which is likely to remain the dominant design methodology of complex microprocessors in the near future. In this paper, we present microprocessor modeling and validation environment (MMV), a validation environment based on metamodeling, that can be used to create models at various abstraction levels and to generate most of the important validation collaterals, viz., simulators, checkers, coverage, and test generation tools. We illustrate the functionalities in MMV by modeling a 32-bit reduced instruction set computer processor at the system, instruction set architecture, and microarchitecture levels. We show by examples how consistency across levels is enforced during modeling and also how to generate constraints for automatic test generation.

4.
This paper presents the results of the software reliability evaluation of telecommunication equipment observed during its validation phase. Among the 2,146 collected failure reports, about 45% were discarded, mainly because of the redundancy or incompleteness of the information they contained. The statistical analysis of the selected failure reports made it possible to study the classes of identified defects, their distribution among the software components, and the failure modes. The evaluation of the software reliability measures was preceded by a trend analysis of the software reliability growth based on the Laplace test. Finally, the hyperexponential model was applied to follow the evolution of the number of software failures during the validation phase and to evaluate the software failure rate before the system entered operation, both with respect to the whole set of failures and with respect to the most critical failures only.
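A minimal sketch of the Laplace trend test mentioned above (the failure times below are hypothetical, not from the paper): for failure times t_1, ..., t_n observed over (0, T], the Laplace factor is u = (sum(t_i)/n - T/2) / (T * sqrt(1/(12n))); negative values indicate reliability growth, positive values reliability decay.

import math

def laplace_factor(failure_times, T):
    """Laplace trend test statistic for failure times observed on (0, T].
    Negative values suggest reliability growth, positive values decay."""
    n = len(failure_times)
    if n == 0:
        raise ValueError("at least one failure time is required")
    mean_time = sum(failure_times) / n
    return (mean_time - T / 2) / (T * math.sqrt(1.0 / (12 * n)))

# Hypothetical failure times (hours) over a 1000-hour validation window.
times = [40, 95, 180, 290, 430, 600, 810]
print(f"Laplace factor: {laplace_factor(times, T=1000):.2f}")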

5.
The model program for the LHC main dipoles is dedicated to the study and validation of design variants and assembly parameters to achieve reproducible performance and optimise components and assembly costs. The topics investigated in the last year include the material of the coil end spacers, the use of polyimide films from different manufacturers, the definition of optimum azimuthal and longitudinal coil pre-stress values, shimming of coil ends, collaring around the “cold bore” and different layouts of the yoke ends. This paper presents the main characteristics of such recent models, the results obtained during cold tests and the plans for the final phase of the model program for the LHC dipoles.

6.
During the software validation phase, one of the models most often used is the input domain-based model. It is based on statistical principles, and the errors found in this phase are not corrected; the software might be rejected if errors are discovered. The parameter of importance in this model is the proportion of elements in the program input domain for which the program fails. It is also a measure of correctness, and the results of program testing permit its estimation. In this paper, an approach based on the normal approximation for determining an upper s-confidence bound on this parameter is presented.
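A hedged sketch of such a bound under the normal approximation to the binomial (the counts below are illustrative, not taken from the paper): with r observed failures in n test runs, p_hat = r/n, and an upper (1 - alpha) bound is p_hat + z_(1-alpha) * sqrt(p_hat * (1 - p_hat) / n).

from statistics import NormalDist

def upper_confidence_bound(failures, runs, confidence=0.95):
    """Upper s-confidence bound on the failure proportion of the input domain,
    using the normal approximation to the binomial distribution."""
    p_hat = failures / runs
    z = NormalDist().inv_cdf(confidence)  # one-sided standard normal quantile
    half_width = z * (p_hat * (1 - p_hat) / runs) ** 0.5
    return min(1.0, p_hat + half_width)

# Illustrative values: 3 failing inputs observed in 500 random test cases.
print(f"95% upper bound: {upper_confidence_bound(3, 500):.4f}")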

7.
8.
This paper describes a program for computing optimal sampling schedules for multi-input, multi-output experiments designed for parameter estimation of physiological system models. The theory of the algorithm and details of its implementation are given. Practical applications of the software to models of glucose-insulin regulation, ketone body kinetics, and insulin kinetics are presented. The results document the potential of the software for designing experiments, and show that optimal design can considerably reduce the number of samples withdrawn from a patient in in vivo clinical studies.

9.
For the functional validation of heterogeneous embedded systems, a hardware/software (Hw/Sw) cosimulation methodology is mandatory. This paper deals with a distributed cosimulation environment for heterogeneous system prototyping. The cosimulation environment can handle all kinds of distributed architectures regardless of the communication scheme used, cosimulation at different levels of abstraction, and a smooth transition to the cosynthesis process. The approach can handle any number of hardware modules, software modules, and debugging tools, which can be used simultaneously. This flexibility is obtained thanks to an automatic cosimulation interface generation tool, which creates links between the Hw and Sw simulation environments. The resulting environment is very easy to use, and our cosimulation model has been validated on very large industrial examples. The experiments show that VHDL-C cosimulation is faster than classical simulation approaches.

10.
Closed-Loop Modeling in Future Automation System Engineering and Validation
This paper presents a new framework for the design and validation of industrial automation systems based on the systematic application of formal methods. The engineering methodology proposed in this paper is based on the component design of automated manufacturing systems from intelligent mechatronic components. The foundations of such components' information infrastructure are the new IEC 61499 architecture and the automation object concept. The paper illustrates how these architectures, in conjunction with other advanced technologies such as the Unified Modeling Language, Simulink, and net condition/event systems, form a framework that enables pick-and-place design, simulation, formal verification, and deployment with the support of a suite of software tools. The key feature of the framework is the inherent support of formal validation techniques, achieved on account of automated transformation among different system models. The paper appeals to developers of automation systems and automation software tools by showing a pathway to improved system development practices through the combination of several design and validation methodologies and technologies.

11.
Test and validation of embedded array blocks remains a major challenge in today's microprocessor design environment. The difficulty is twofold: the sizes of the arrays and the complexity of their timing and control. This paper describes a novel test generation methodology for the test and validation of microprocessor embedded arrays. Unlike traditional ATPG methods, our test generation method is based upon the high-level assertion specifications originally used for formal verification. The superiority of these assertion tests over traditional ATPG tests is discussed and shown through various experiments on recent PowerPC microprocessor designs.

12.
The input domain-based model is the one most often used during the software validation phase. The characteristic parameter of this model is the proportion of elements in the program input domain for which the program fails; it is also a measure of correctness. This paper is concerned with determining a critical value of this parameter for testing, at a given risk, the hypothesis that the software's failure proportion does not exceed a given maximum value.
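As a hedged illustration of this kind of acceptance test (the formulation and numbers are assumptions for illustration, not taken from the paper): if the hypothesis is that the failure proportion is at most p0, one runs n random test cases and rejects the software when the number of observed failures exceeds a critical value c, chosen so that the probability of rejecting a compliant program stays below the risk alpha.

from math import comb

def critical_failures(n, p0, alpha=0.05):
    """Smallest c such that, if the true failure proportion equals p0,
    observing more than c failures in n runs has probability at most alpha."""
    cdf = 0.0
    for c in range(n + 1):
        cdf += comb(n, c) * p0 ** c * (1 - p0) ** (n - c)  # P(X <= c)
        if 1.0 - cdf <= alpha:
            return c
    return n

# Illustrative numbers: 1000 random test cases, hypothesised maximum proportion 0.01.
print("reject if more than", critical_failures(1000, 0.01), "failures are observed")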

13.
Software reliability models are used to estimate the probability that software fails at a given time. They are fundamental for planning test activities and for ensuring the quality of the software being developed. Each project has a different reliability growth behavior, and although several models have been proposed to estimate reliability growth, none has proven to perform well across projects with different characteristics. Because of this, some authors have introduced Machine Learning techniques, such as neural networks, to obtain software reliability models. Neural network-based models, however, are not easily interpreted, and other techniques could be explored. In this paper, we explore an approach based on genetic programming, and also propose the use of boosting techniques to improve performance. We conduct experiments with reliability models based on time and on test coverage. The obtained results show some advantages of the introduced approach: the models adapt better to the reliability curve and can be used in projects with different characteristics.

14.
Extrapolating reliability from accelerated tests for technologies without field data always carries the risk that the accelerated tests do not show the mechanisms which dominate at operating conditions. In statistical terminology, such accelerated testing carries a risk of confounding. For linear models, there is theory which allows one to determine which models are confounded with others. This paper develops analogous theory for a simple kind of confounding model, evanescent processes, when kinetics is used as the basis of modeling accelerated testing. A heuristic for identifying simple evanescent processes that can give rise to disruptive alternatives (alternative models that reverse the decision which would be made based on modeling to date) is outlined. Then, we develop activity mapping, a tool for quantitatively identifying the parameter values of that evanescent process which can result in disruptive alternatives. Finally, we see how activity mapping can be used to identify experiments which help reveal such disruptive evanescent processes.

15.
This paper investigates: 1) the sensitivity of reliability-growth models to errors in the estimate of the operational profile (OP); and 2) the relation between this sensitivity and the testing accuracy for computer software. The investigation is based on the results of a case study in which several reliability-growth models are applied during the testing phase of a software system. The faults contained in the system are known in advance; this allows measurement of the software reliability growth and comparison with the estimates provided by the models. Measurement and comparison are repeated for various OPs, thus giving information about the effect of a possible error in the estimate of the OP. The results show that: 1) the predictive accuracy of the models is not heavily affected by errors in the estimate of the OP; and 2) this relation depends on the accuracy with which the software system has been tested.

16.
One notable advantage of the Model-Driven Architecture (MDA) method is that software developers can perform sufficient analysis and testing on software models in the design phase, which helps build high confidence in the expected software behavior and performance, especially for safety-critical real-time software. Most existing work on reliability analysis ignores the effects of task deadline requirements, which are critical properties of real-time software and thus cannot be ignored. Considering the contradictory relationship between deadline requirements and the time costs of fault tolerance in real-time tasks, this paper presents a novel reliability model, which takes schedulability as one of the major factors affecting reliability, to analyze the reliability of the task execution model in the real-time software design phase. The tasks in this reliability model have no restrictions on their distribution and thus can be placed on a multiprocessor or on a distributed system. Furthermore, the tasks also define fault arrival rates and fault-tolerant mechanisms to model the occurrence of non-permanent faults and the corresponding time costs of fault handling. By analyzing the probability that tasks remain schedulable in the worst-case execution scenario with faults occurring, reliability and schedulability are combined into a unified analysis framework, and two algorithms for reliability analysis are given. To make this reliability model more pragmatic, we also present an estimation technique for the fault arrival rate of each task. Through two case studies we show the detailed derivation process under static-priority scheduling, in a multiprocessor system and in the design process of avionics software respectively, and then analyze the factors affecting the reliability analysis through simulation experiments. When no assumptions about fault occurrences are made on the task model, this reliability model regresses to a generic schedulability model.

17.
In this paper, a new saliency detection model is proposed based on a space-to-frequency transformation. First, the equivalence of spatial filtering and spectral modulation is demonstrated to explain the intrinsic mechanism of typical frequency-based saliency models. Then a novel frequency-based saliency model is presented, based on the Fourier transformation of multiple spatial Gabor filters. In addition, a new saliency measure is proposed to implement the competition between saliency maps at multiple scales and the fusion of color channels. In the experiments, we use a set of typical psychological patterns and four popular human fixation datasets to test and evaluate the proposed model. A new energy-based criterion is also proposed to evaluate the performance of our model and is compared with five traditional saliency metrics for validation. Experimental results show that our model outperforms most of the competing models in salient object detection and human fixation prediction.
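A simplified sketch of the space-to-frequency idea (the filter choices and parameters below are assumptions for illustration, not the authors' exact model): apply a small Gabor filter bank by multiplication in the Fourier domain and combine the response magnitudes into a saliency map.

import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real Gabor kernel of shape (size, size) oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * x_theta / wavelength)

def frequency_domain_saliency(image, wavelengths=(8, 16), orientations=4):
    """Combine Gabor responses, computed by spectral modulation, into one map."""
    spectrum = np.fft.fft2(image)
    saliency = np.zeros_like(image, dtype=float)
    for lam in wavelengths:
        for k in range(orientations):
            kernel = gabor_kernel(31, lam, np.pi * k / orientations, sigma=lam / 2)
            padded = np.zeros_like(image, dtype=float)
            padded[:kernel.shape[0], :kernel.shape[1]] = kernel
            # Spatial filtering realised as modulation of the image spectrum.
            response = np.fft.ifft2(spectrum * np.fft.fft2(padded))
            saliency += np.abs(response)
    return saliency / saliency.max()

# Illustrative input: a random grayscale image containing a brighter square.
img = np.random.default_rng(0).random((128, 128))
img[40:60, 40:60] += 2.0
print(frequency_domain_saliency(img).shape)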

18.
This article focuses on a method for FPGA verification and testing. The advantages of the method are that it does not depend on the chip design or on a particular test machine, it is low cost, and it has a short development cycle. FPGA verification and test development techniques are studied on the basis of a PC, automatic test equipment (ATE), and in-house conversion software. The PC is mainly responsible for generating the bin file, and the in-house conversion software converts the bin file into a machine-readable atp file. The ATE imports the configuration file and completes the verification of signal inputs and outputs. Experiments were carried out on the Xilinx XCV1000 based on this approach; they show that the method is feasible and enables rapid test development and chip verification, and that it has good generality, so it can be used for the testing, study, and verification of other FPGA chips and can also be applied to different ATE machines.

19.
Censored software-reliability models
Nonfailure stops of software execution processes can be viewed as a type of censored data. They can occur in a wide range of computing systems (e.g., concurrent computing systems, data sampling systems, transaction processing systems) for technical or nontechnical reasons. Using existing software reliability models to deal with this type of censored software reliability data, viz., successive inter-stop times, where a stop can be a failure or a nonfailure stop, means that nonfailure stops are disregarded. This paper develops censored software reliability models, or censored forms of existing software reliability models, to account for nonfailure stops and to deal directly with censored software reliability data. The paper shows how to develop censored forms of the Jelinski-Moranda, Schick-Wolverton, Moranda Geometric, and Littlewood-Verrall models, and discusses the corresponding validation forms. Censored forms of other software reliability models can be developed in a similar way. Censored software reliability models reduce to noncensored software reliability models if no nonfailure stop occurs.
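A hedged sketch of the censoring idea for the Jelinski-Moranda case (an assumed formulation, not the paper's exact derivation): under JM the hazard after i faults have been removed is phi * (N - i); a failure stop contributes the exponential density to the likelihood, while a nonfailure stop is right-censored and contributes only the survival term exp(-rate * t).

import math

def censored_jm_log_likelihood(intervals, N, phi):
    """Log-likelihood of a right-censored Jelinski-Moranda model.
    `intervals` is a list of (t, failed) pairs in observation order, where
    `failed` is True for a failure stop and False for a nonfailure (censored) stop.
    N is the assumed initial number of faults, phi the per-fault hazard rate."""
    log_lik = 0.0
    faults_found = 0
    for t, failed in intervals:
        rate = phi * (N - faults_found)  # current failure rate
        if rate <= 0:
            raise ValueError("N must exceed the number of observed failures")
        log_lik += -rate * t             # survival term, contributed by every stop
        if failed:
            log_lik += math.log(rate)    # density term, failure stops only
            faults_found += 1
    return log_lik

# Hypothetical data: execution intervals in hours; False marks a nonfailure stop.
data = [(12.0, True), (7.5, False), (20.0, True), (15.0, False), (33.0, True)]
print(censored_jm_log_likelihood(data, N=10, phi=0.01))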

20.
Generalized frequency division multiplexing (GFDM) is the foremost contender for the physical layer of 5G communication. The flexible nature of GFDM allows multiple symbols in the time domain to superimpose, thereby creating a high peak-to-average power ratio (PAPR). The idea of selective mapping (SLM) is to use a sequence of phase rotation vectors to generate alternatives from the original GFDM signal; among them, the signal with the lowest PAPR is selected for transmission. This procedure requires mandatory side information (SI) estimation or transmission, which in turn decreases data reliability or efficiency. To address these issues, the authors present a modified pilot-assisted GFDM SLM system that requires no SI transmission and enables joint PAPR reduction and data recovery. In the proposed approach, a common modulating phase is used to modulate all subcarriers in a subsymbol, under the assumption that each subsymbol contains at least two pilots. This creates an inherent SI cancellation mechanism using the pilots, which are also employed for channel estimation. For practical validation of the proposed concepts, a software-defined radio (SDR) experimental setup was used, employing a universal software radio peripheral (USRP) 2953R as hardware and Labview as software. Experimental results show a significant reduction in out-of-band spectral leakage without disturbing the estimated channel response.
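A hedged sketch of the basic selective-mapping step itself (a generic multicarrier form with made-up dimensions, not the authors' pilot-assisted scheme): generate several phase-rotated copies of the frequency-domain symbols, take the inverse FFT of each, and keep the candidate with the lowest PAPR.

import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def slm_select(freq_symbols, num_candidates=8, seed=0):
    """Selective mapping: rotate the symbols by random phase vectors and keep
    the time-domain candidate with the lowest PAPR."""
    rng = np.random.default_rng(seed)
    best_signal, best_papr, best_phases = None, np.inf, None
    for _ in range(num_candidates):
        phases = np.exp(1j * rng.uniform(0, 2 * np.pi, freq_symbols.size))
        candidate = np.fft.ifft(freq_symbols * phases)
        p = papr_db(candidate)
        if p < best_papr:
            best_signal, best_papr, best_phases = candidate, p, phases
    return best_signal, best_papr, best_phases

# Illustrative: 128 random QPSK symbols on 128 subcarriers.
rng = np.random.default_rng(1)
qpsk = (rng.choice([-1, 1], 128) + 1j * rng.choice([-1, 1], 128)) / np.sqrt(2)
print("original PAPR: %.2f dB" % papr_db(np.fft.ifft(qpsk)))
print("SLM PAPR:      %.2f dB" % slm_select(qpsk)[1])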

