Found 20 similar documents; search took 219 ms
1.
Maria Elena De Giuli Dean Fantazzini Mario Alessandro Maggi 《Computational Economics》2008,31(2):161-180
In this paper we present a new model to assess firm value and default probability using bivariate contingent claim analysis and copula theory. First, we discuss a case that is infeasible given the current derivative market on corporate bonds, which involves univariate digital options to compute the risk-neutral probabilities. We then discuss a feasible model that considers risky interest rates instead. Moreover, within this framework we develop a new methodology to extract default probabilities from stock prices alone, going beyond the standard KMV-Merton model. Because the Merton model's state variable is not observable, numerical methods are required, but the results can be unstable with noisy risk data. We show how the null price can be used as a barrier to separate an operative firm from a defaulted one, and to estimate its default probability. We then present an empirical application with both operative and defaulted firms to show the advantages of our approach.
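As a point of reference for what the paper extends, the standard KMV-Merton default probability can be sketched in a few lines (all symbols and parameter values below are illustrative, not taken from the paper):

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_default_probability(V, F, mu, sigma, T):
    """Default probability in the standard Merton model.

    V: firm asset value, F: face value of debt (default barrier),
    mu: asset drift, sigma: asset volatility, T: horizon in years.
    """
    d2 = (math.log(V / F) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(-d2)  # probability assets fall below the barrier at T

# Example: a healthy firm with assets well above its debt barrier.
pd = merton_default_probability(V=150.0, F=100.0, mu=0.05, sigma=0.25, T=1.0)
```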
2.
This paper presents a methodology for the assessment of the performance of industrial hazardous materials emergency plans. This approach is based on a structuro‐functional model of the emergency plan, which highlights the plan's functions, resources, support services and interactions. A resource taxonomy is used to manage the complexity of the emergency response system. The model can be used both as a planning guide and for the analysis of the performance of the emergency response system, based on the risk assessment of the system. The failure probability is estimated through the plan's functions and resources fault trees. The failure severity of each function is determined by using the facility's hazard study. The failure criticality of each function is hence obtained.
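The fault-tree aggregation step can be illustrated with a minimal sketch, assuming independent basic events (the plan function and the probabilities below are invented):

```python
def or_gate(probs):
    # P(at least one basic event occurs), assuming independence.
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    # P(all basic events occur), assuming independence.
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical plan function: it fails if its key resource fails OR
# both of its redundant support services fail.
p_resource = 0.02
p_services = and_gate([0.1, 0.1])            # both services down
p_function = or_gate([p_resource, p_services])
```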
3.
《Software, IEEE》2004,21(4):86-88
New-product development is commonly risky, judging by the number of high-profile failures that continue to occur, especially in software engineering. We can trace many of these failures back to requirements-related issues. Triage is a technique the medical profession uses to prioritize treatment to patients on the basis of the severity of their symptoms. Trauma triage provides some tantalizing insights into how we might measure the risk of failure early, quickly, and accurately. For projects at significant risk, we could activate a "requirements trauma system" that includes specialists, processes, and tools designed to correct the issues and improve the probability that the project ends successfully. We explain these techniques and suggest how we can adapt them to help identify and quantify requirements-related risks.
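A hedged sketch of what a requirements "triage score" might look like; the symptoms, weights, and threshold are our invention, not the authors':

```python
# Each requirements-related symptom gets a severity weight; a project whose
# total score exceeds a threshold is routed to the "requirements trauma
# system" for specialist attention.
SYMPTOM_WEIGHTS = {
    "ambiguous_requirements": 3,
    "volatile_scope": 4,
    "missing_stakeholders": 5,
    "no_acceptance_criteria": 2,
}

def triage_score(symptoms):
    # Unknown symptoms contribute nothing to the score.
    return sum(SYMPTOM_WEIGHTS.get(s, 0) for s in symptoms)

def needs_trauma_system(symptoms, threshold=6):
    return triage_score(symptoms) >= threshold

at_risk = needs_trauma_system(["volatile_scope", "missing_stakeholders"])
```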
4.
Siti Noor Hasanah Ghazali Siti Salwah Salim Irum Inayat Siti Hafizah Ab Hamid 《计算机系统科学与工程》2018,33(3):169-185
In agile software development, project estimation often depends on group discussion and expert opinions. The literature claims that group discussion during risk analysis helps identify crucial issues that might affect development, testing, and implementation. However, risk prioritization often relies on individual expert judgment. Risk Poker, a lightweight risk-based testing technique in which risk analysis is performed through group discussion, has therefore been introduced in agile methods, on the premise that group discussion outperforms an individual analyst's estimation. Despite these claimed benefits, no study to date has empirically demonstrated Risk Poker's ability to improve the testing process. This research aims to close that gap by (i) deploying the Risk Poker technique as a risk-based strategy in the agile development lifecycle, and (ii) empirically evaluating the improvement of the proposed test process. To this end, Risk Poker is coupled with test coverage in an innovated testing process for an agile project following Scrum, in order to provide adequate coverage for the testing activity. A case study was conducted with six teams of undergraduate students estimating test coverage using Risk Poker for an e-commerce system. Three teams estimated their user stories using Risk Poker, while the remaining teams estimated individually and averaged their estimates to obtain a statistical combination. The results show that the proposed use of Risk Poker for risk analysis and test-coverage estimation outperformed the averaged statistical estimation of risk analysis for user stories.
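The contrast between a group consensus round and the averaged statistical combination can be sketched as follows (the convergence rule below is our simplification, not the study's protocol):

```python
import statistics

# A statistical combination simply averages independent estimates; a Risk
# Poker round instead re-discusses until the group's estimates converge.
def statistical_combination(estimates):
    return statistics.mean(estimates)

def has_consensus(cards, tolerance=1):
    # Consensus when the spread of played cards is within the tolerance;
    # otherwise the team would discuss and re-vote.
    return max(cards) - min(cards) <= tolerance

individual = [3, 5, 8, 5]            # story-point-style risk estimates
combined = statistical_combination(individual)
consensus_reached = has_consensus([5, 5, 5, 6])
```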
5.
6.
Goseva-Popstojanova K. Hassan A. Guedem A. Abdelmoez W. Nassar D.E.M. Ammar H. Mili A. 《IEEE Transactions on Software Engineering》2003,29(10):946-960
Risk assessment is an essential part of managing software development. Performing risk assessment during the early development phases enhances resource allocation decisions. In order to improve the software development process and the quality of software products, we need to be able to build risk analysis models based on data that can be collected early in the development process. These models will help identify the high-risk components and connectors of the product architecture, so that remedial actions may be taken in order to control and optimize the development process and improve the quality of the product. In this paper, we present a risk assessment methodology that can be used in the early phases of the software life cycle. We use the Unified Modeling Language (UML) and the commercial modeling environment Rational Rose RealTime (RoseRT) to obtain UML model statistics. First, for each component and connector in the software architecture, a dynamic heuristic risk factor is obtained and severity is assessed based on hazard analysis. Then, a Markov model is constructed to obtain scenario risk factors. The risk factors of use cases and the overall system risk factor are estimated from the scenario risk factors. Within our methodology, we also identify critical components and connectors that would require careful analysis, design, implementation, and more testing effort. The risk assessment methodology is applied to a pacemaker case study.
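A minimal sketch of the heuristic risk-factor idea, with invented complexity and severity numbers (in the paper these come from UML model statistics and hazard analysis):

```python
# A component's heuristic risk factor is its normalized dynamic complexity
# times its severity index; a scenario's risk aggregates the components it
# exercises, weighted by how often the scenario visits them.
components = {
    # name: (dynamic_complexity in [0, 1], severity in [0, 1])
    "heart_sensor": (0.6, 0.95),
    "pulse_generator": (0.4, 0.90),
    "ui_logger": (0.3, 0.10),
}

def component_risk(name):
    cpx, sev = components[name]
    return cpx * sev

def scenario_risk(names, visit_probs):
    # Weighted sum of component risk factors by visit probability.
    return sum(p * component_risk(c) for c, p in zip(names, visit_probs))

r = scenario_risk(["heart_sensor", "pulse_generator"], [0.7, 0.3])
```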
7.
8.
To ensure and improve the level of software safety, and observing that software safety is closely related to both the consequences of software failures and their probability of occurrence, an effective software safety testing method is proposed. A software failure severity parameter is injected into the JM model, and the severity of software failure consequences is downgraded accordingly; a formula for software safety reliability is derived according to the degree to which software failures affect system safety; and a software risk formula centered on software error severity and occurrence probability is established, reflecting software safety performance more intuitively. The weighted software defect severity variation matrix defined therein directly reflects the degree of software safety improvement. The improved JM model conducts the testing process with the goal of reducing software risk, which better matches the characteristics of software safety, and provides a feasible and credible method for the engineering practice of software safety testing.
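A hedged sketch of a Jelinski-Moranda-style hazard weighted by failure severity, in the spirit of the modification described (N0, phi, and the severity values are invented):

```python
# In the classic JM model the hazard before failure i is proportional to
# the number of remaining faults. Here each observed failure's contribution
# to accumulated risk is additionally scaled by its consequence severity.
N0 = 20        # assumed initial number of faults
phi = 0.05     # per-fault hazard contribution

def hazard(i):
    # Hazard rate between failure i-1 and failure i (i starts at 1).
    return phi * (N0 - (i - 1))

def weighted_risk(severities):
    # Risk accumulates hazard weighted by each failure's severity in [0, 1].
    return sum(s * hazard(i) for i, s in enumerate(severities, start=1))

risk = weighted_risk([0.9, 0.5, 0.2])  # severities of three observed failures
```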
9.
《Microprocessors and Microsystems》2003,27(5-6):265-276
We propose a software framework for a non-repudiation security service in e-commerce (electronic commerce) on the Internet. Within the framework, we propose a systematic design methodology that provides a security class concept. Our framework can be differentiated from others in two ways. First, unlike other frameworks, it targets the successful completion of e-commerce transactions by supporting a non-repudiation security service. Second, the proposed framework is based on a dynamic mapping mechanism that improves the performance of e-commerce transactions. We conducted a set of experiments to measure the performance of three software frameworks: SSL, SET, and our NA framework. The experimental results show that our framework improves the performance of e-commerce transactions while providing a high quality of security services for the desired transactions.
10.
Risk management is becoming increasingly important for railway companies in order to safeguard their passengers and employees while improving safety and reducing maintenance costs. However, in many circumstances, the application of probabilistic risk analysis tools may not give satisfactory results because the risk data are incomplete or involve a high level of uncertainty. This article presents the development of a risk management system for railway risk analysis using a fuzzy reasoning approach and a fuzzy analytic hierarchy decision-making process. In the system, the fuzzy reasoning approach (FRA) is employed to estimate the risk level of each hazardous event in terms of failure frequency, consequence severity and consequence probability. This allows imprecise or approximate information in the risk analysis process. The fuzzy analytic hierarchy process (fuzzy-AHP) is then incorporated into the risk model to exploit its advantage in determining the relative importance of the risk contributions, so that the risk assessment can be progressed from the hazardous event level to the hazard group level and finally to the railway system level. This risk assessment system can evaluate both qualitative and quantitative risk data and information associated with a railway system effectively and efficiently, providing railway risk analysts, managers and engineers with a method and tool to improve their safety management of railway systems and set safety standards. A case study on the risk assessment of shunting at Hammersmith depot illustrates the application of the proposed system.
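A minimal fuzzy-reasoning sketch in the spirit of the FRA step, using triangular memberships and a two-rule base (the membership points and rules are ours, not the system's):

```python
def tri(x, a, b, c):
    # Triangular membership: 0 outside (a, c), 1 at the peak b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def risk_level(freq, sev):
    # Inputs normalized to [0, 1]; "low" peaks at 0, "high" peaks at 1.
    low_f, high_f = tri(freq, -0.5, 0.0, 0.5), tri(freq, 0.5, 1.0, 1.5)
    low_s, high_s = tri(sev, -0.5, 0.0, 0.5), tri(sev, 0.5, 1.0, 1.5)
    high_risk = min(high_f, high_s)   # rule: frequency high AND severity high
    low_risk = min(low_f, low_s)      # rule: frequency low AND severity low
    total = high_risk + low_risk
    # Defuzzify as the share of rule strength pointing at "high risk".
    return high_risk / total if total > 0 else 0.5
```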
11.
Ming-Hung Shu Lin-Ying Hsu Bi-Min Hsu 《International journal of systems science》2013,44(9):1121-1131
Many manufacturing processes and production systems suffer from progressive degradation with usage and age and are subject to random failures resulting from such deterioration. Traditional models for evaluating the reliability and performance of a process/system use binary-state models, working success or failure, to classify the states of the process/system, which is unrealistic. This article proposes the classification of discrete multiple states of the deterioration process. A nonhomogeneous continuous-time Markov process is employed for modelling the process deterioration, because we assume that the time for which a process stays in a certain state depends not only on the current state but also on the time for which the process has been in that state. For major and minor deteriorations, we present symbolic solutions of several differential equations, using MATLAB, to estimate the probability of the process being in each state at time t. We contribute dynamic performance and cost measures for the state-age-dependent process deterioration to assess the severity of the deterioration at some point in time as well as the total severity that it causes over the entire process. The optimal setup time is determined in order to estimate the minimum total expected cost during the production period. A practical application of the proposed methodology is illustrated throughout this article.
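The state-probability computation can be sketched for a homogeneous special case (the paper's process is nonhomogeneous and solved symbolically in MATLAB; the rates below are invented):

```python
# States: 0 = good, 1 = minor deterioration, 2 = major deterioration
# (absorbing). Forward (Kolmogorov) equations integrated with explicit
# Euler steps; constant transition rates per unit time.
rates = {(0, 1): 0.5, (1, 2): 0.2}

def state_probs(t, dt=1e-3):
    p = [1.0, 0.0, 0.0]  # start in the good state
    for _ in range(round(t / dt)):
        f01 = rates[(0, 1)] * p[0] * dt   # flow good -> minor
        f12 = rates[(1, 2)] * p[1] * dt   # flow minor -> major
        p = [p[0] - f01, p[1] + f01 - f12, p[2] + f12]
    return p

p = state_probs(2.0)  # probabilities of being in each state at t = 2
```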
12.
13.
《Information and Software Technology》2003,45(7):373-388
Risk is the potential for realization of undesirable consequences of an event. Operational risk of software is the likelihood of untoward events occurring during operations due to software failures. The NASA IV&V Facility is an independent institution that conducts Independent Assessments for various NASA projects. Its responsibilities include, among others, the assessment of operational risks of software. In this study, we investigate Independent Assessments that are conducted very early in the software development life cycle. Existing risk assessment methods are largely based on checklists and analysis of a risk matrix, in which risk factors are scored according to their influence on the potential operational risk. These scores are then arithmetically aggregated into an overall risk score. However, only incomplete project information is available during the very early phases of the software life cycle, and thus a quantitative method, such as a risk matrix, must make arbitrary assumptions to assess operational risk. We have developed a fuzzy expert system, called the Research Prototype Early Assessment System, to support Independent Assessments of projects during the very early phases of the software life cycle. Fuzzy logic provides a convenient way to represent linguistic variables, subjective probability, and ordinal categories. To represent risk, subjective probability is more suitable than a quantitative objective probability of failure. Furthermore, fuzzy severity categories are more credible than numeric scores. We illustrate how fuzzy expert systems can infer useful results from the limited facts available about a current project and rules about software development. This approach can be extended to add the planned IV&V level, the history of past NASA projects, and rules from NASA experts.
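A tiny fuzzy-rule inference sketch of the kind described, with min for AND and max to combine rules (the categories, degrees, and rules are invented, not the prototype's):

```python
# Membership degrees of a hypothetical current project in each linguistic
# category, as an analyst might elicit them early in the life cycle.
facts = {
    ("requirements_maturity", "low"): 0.7,
    ("requirements_maturity", "high"): 0.2,
    ("team_experience", "low"): 0.4,
    ("team_experience", "high"): 0.6,
}

rules = [
    # (antecedents, consequent severity category)
    ([("requirements_maturity", "low"), ("team_experience", "low")], "high_risk"),
    ([("requirements_maturity", "high"), ("team_experience", "high")], "low_risk"),
]

def infer():
    out = {}
    for antecedents, consequent in rules:
        strength = min(facts[a] for a in antecedents)        # fuzzy AND
        out[consequent] = max(out.get(consequent, 0.0), strength)  # combine
    return out

result = infer()
```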
14.
Bedir Tekinerdogan Hasan Sozer 《Journal of Systems and Software》2008,81(4):558-575
With the increasing size and complexity of software in embedded systems, software has become a primary threat to their reliability. Several mature conventional reliability engineering techniques exist in the literature, but they have traditionally addressed failures in hardware components and usually assume the availability of a running system. Software architecture analysis methods aim to analyze the quality of software-intensive systems early, at the software architecture design level, before a system is implemented. We propose a Software Architecture Reliability Analysis Approach (SARAH) that benefits from mature reliability engineering techniques and scenario-based software architecture analysis to provide an early software reliability analysis at the architecture design level. SARAH defines the notion of a failure scenario model that is based on the Failure Modes and Effects Analysis (FMEA) method from the reliability engineering domain. The failure scenario model is applied to represent so-called failure scenarios, which are utilized to derive fault tree sets (FTS). Fault tree sets are utilized to provide a severity analysis for the overall software architecture and the individual architectural elements. Unlike conventional reliability analysis techniques, which prioritize failures based on criteria such as safety concerns, SARAH prioritizes failure scenarios based on severity from the end-user perspective. SARAH results in a failure analysis report that can be utilized to identify architectural tactics for improving the reliability of the software architecture. The approach is illustrated using an industrial case: analyzing the reliability of the software architecture of the next release of a Digital TV.
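The severity-based prioritization can be sketched FMEA-style (the scenario names, severities, and elements below are invented, not the Digital TV case's):

```python
# Each failure scenario carries a user-perceived severity and the
# architectural elements it touches; scenarios are ranked by severity, and
# an element inherits the worst severity among scenarios touching it.
failure_scenarios = [
    # (scenario, user_perceived_severity 1-10, affected elements)
    ("picture freezes during channel switch", 9, ["decoder", "tuner"]),
    ("EPG shows stale data", 4, ["epg_service"]),
    ("teletext rendering glitch", 2, ["teletext"]),
]

def element_severity(element):
    return max((sev for _, sev, elems in failure_scenarios if element in elems),
               default=0)

ranked = sorted(failure_scenarios, key=lambda s: s[1], reverse=True)
```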
15.
Ning-Cong Xiao Hong-Zhong Huang Zhonglai Wang Yu Liu Xiao-Ling Zhang 《Structural and Multidisciplinary Optimization》2012,46(6):803-812
Uncertainties exist widely in products and systems. In general, uncertainties are classified as epistemic or aleatory. This paper proposes a unified uncertainty analysis (UUA) method based on the mean value first order saddlepoint approximation (MVFOSPA), denoted MVFOSPA-UUA, to estimate a system's probabilities of failure considering both epistemic and aleatory uncertainties simultaneously. In this method, input parameters with epistemic uncertainty are modeled using interval variables, while input parameters with aleatory uncertainty are modeled using probability distributions or random variables. In order to calculate the lower and upper bounds of the system probabilities of failure, both the best-case and the worst-case scenarios of the system performance function need to be considered, and the proposed MVFOSPA-UUA method can handle these two cases easily. The proposed method is demonstrated to be more efficient, more robust and in some situations more accurate than existing methods such as uncertainty analysis based on the first order reliability method. The proposed method is demonstrated using several examples.
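A toy version of the bounding idea, with one aleatory and one epistemic input (this uses a plain normal CDF on a linear function, not the paper's MVFOSPA):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Performance function g = capacity - load. The capacity ~ N(mu, sigma) is
# aleatory; the load is epistemic, known only to lie in [load_lo, load_hi].
# Failure is g < 0; sweeping the interval endpoints gives the probability
# bounds (best case: smallest load, worst case: largest load).
mu, sigma = 10.0, 1.0
load_lo, load_hi = 7.0, 8.5

pf_lower = norm_cdf((load_lo - mu) / sigma)  # best-case failure probability
pf_upper = norm_cdf((load_hi - mu) / sigma)  # worst-case failure probability
```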
16.
Mauro Roisenberg Cíntia Schoeninger Reneu Rodrigues da Silva 《Expert systems with applications》2009,36(3):6282-6294
Petroleum exploration is an economic activity in which many billions of dollars are invested every year. Despite these enormous investments, it is still considered a classical example of decision-making under uncertainty. In this paper, a new hybrid fuzzy-probabilistic methodology is proposed and the implementation of a software tool for assessing the risk of petroleum prospects is described. The methodology is based on a fuzzy-probabilistic representation of uncertain geological knowledge, where the risk can be seen as a stochastic variable whose probability distribution depends on codified geological argumentation. The risk of each geological factor is calculated as a fuzzy set through a fuzzy system and then associated with a probability interval. The risk of the whole prospect is then calculated by simulation and fitted to a beta probability distribution. Finally, historical and direct hydrocarbon indicator data are incorporated into the model. The methodology is implemented in a prototype software tool called RCSUEX ("Certainty Representation of the Exploratory Success"). The results show that the method can be applied to systematize the argumentation and measure the probability of success of a petroleum accumulation discovery.
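The prospect-level simulation and beta fit can be sketched as follows (the factor intervals are invented; in the paper they come from the fuzzy system applied to codified geological argumentation):

```python
import random

# Each geological factor's chance of success is drawn from its uncertainty
# interval; the prospect probability is their product; the simulated
# distribution is summarized by method-of-moments beta parameters.
random.seed(1)
factor_intervals = [(0.5, 0.9), (0.3, 0.7), (0.6, 0.95)]  # (low, high)

samples = []
for _ in range(10000):
    p = 1.0
    for lo, hi in factor_intervals:
        p *= random.uniform(lo, hi)
    samples.append(p)

m = sum(samples) / len(samples)
v = sum((s - m) ** 2 for s in samples) / len(samples)
# Method-of-moments fit of a Beta(a, b) distribution.
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common
```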
17.
This article proposes an efficient simulation-based methodology to estimate the system reliability of large structures. The proposed method uses a hybrid approach: first, a probabilistic enumeration technique is used to identify a significant system failure sequence. This provides an initial sampling domain for an adaptive importance sampling procedure. As further simulations are performed, information about other significant sequences is incorporated to refine the sampling domain and to estimate the failure probability of the system. The adaptive sampling overcomes the restrictive assumptions of analytical techniques, yet achieves the robustness and accuracy of basic Monte Carlo simulation in an efficient manner. In this article, the proposed method is implemented using the ANSYS finite element software, and is applied to the system reliability estimation of two redundant structural systems, a six-story building frame and a transmission tower. Both ductile and brittle failure modes are considered. The method is structured in a modular form such that it can be easily applied to different types of problems and commercial software, thus facilitating practical application.
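The importance-sampling core can be sketched on a scalar toy problem (the article's adaptive, multi-failure-sequence scheme for structural systems is far richer):

```python
import math
import random

# Estimate the small failure probability P(X > t) with X ~ N(0, 1) by
# sampling from a normal density shifted to the failure region and
# reweighting each hit by the likelihood ratio.
random.seed(0)
t = 3.0        # failure threshold
shift = t      # center the sampling density on the threshold

def phi(x, mean=0.0):
    # Standard-normal-shaped density with the given mean, unit variance.
    return math.exp(-0.5 * (x - mean) ** 2) / math.sqrt(2 * math.pi)

n = 20000
acc = 0.0
for _ in range(n):
    x = random.gauss(shift, 1.0)               # sample the shifted density
    if x > t:                                  # indicator of failure
        acc += phi(x) / phi(x, mean=shift)     # likelihood ratio weight
pf = acc / n   # plain Monte Carlo would need millions of samples for this
```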
18.
This study presents a methodology to convert an RBDO problem requiring very high reliability into an RBDO problem requiring relatively low reliability by appropriately increasing the input standard deviations, for efficient computation in sampling-based RBDO. First, for linear performance functions with independent normal random inputs, an exact probability of failure is derived in terms of the ratio of the input standard deviation, denoted by $\boldsymbol{\delta}$. Then, the probability of failure estimation is generalized for other types of random inputs and performance functions. For this generalization, two types of coefficients need to be determined by equating the probability of failure and its sensitivities with respect to the input standard deviation at the given design point. The sensitivities of the probability of failure with respect to the standard deviation are obtained using the first-order score function for the standard deviation. To apply the proposed method to an RBDO problem, the concept of an equivalent target probability of failure, an increased target probability of failure corresponding to the increased input standard deviations, is also introduced. Numerical results indicate that the proposed method can accurately estimate the probability of failure as a function of the input standard deviation, compared with Monte Carlo simulation results. As anticipated, sampling-based RBDO using the equivalent target probability of failure finds the optimum design very efficiently while yielding a reasonably accurate optimum design, close to the one obtained using the original target probability of failure.
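In the linear-normal special case, scaling every input standard deviation by a common ratio delta divides the reliability index beta by delta, so Pf(delta) = Phi(-beta / delta). A small numeric sketch (beta is chosen by us for illustration):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

beta = 4.265  # reliability index giving roughly Pf = 1e-5 at delta = 1

def pf(delta):
    # Exact failure probability for a linear performance function with
    # independent normal inputs whose std deviations are scaled by delta.
    return norm_cdf(-beta / delta)

pf_original = pf(1.0)     # very small: hard to estimate by sampling
pf_equivalent = pf(2.0)   # equivalent, milder target with doubled stds
```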
19.
The aim of this study is to show how a timed Petri net framework can be used to model and analyze a supply chain (SC) network that is subject to various risks. The method is illustrated by an industrial case study. We first investigate the disruption factors of the SC network with a failure mode, effects and criticality analysis (FMECA) technique. We then integrate risk management procedures into the design, planning, and performance evaluation of supply chain networks through Petri net (PN) based simulation. The developed PN model provides an efficient environment for defining uncertainties in the system and evaluating the added value of risk mitigation actions. The findings of the case study show that system performance can be improved using risk management actions and that overall system costs can be reduced by mitigation scenarios.
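A minimal Petri-net firing sketch (the places, transition, and disruption idea are invented for illustration; the study's timed, simulation-based model is much richer):

```python
# Places hold tokens; a transition fires when all its input places are
# marked, consuming input tokens and producing output tokens. A disruption
# can be modeled by withholding the token from "supplier_ready".
marking = {"supplier_ready": 1, "order_placed": 1, "goods_shipped": 0}
transitions = {
    "ship": (["supplier_ready", "order_placed"], ["goods_shipped"]),
}

def fire(name):
    inputs, outputs = transitions[name]
    if all(marking[p] > 0 for p in inputs):
        for p in inputs:
            marking[p] -= 1
        for p in outputs:
            marking[p] += 1
        return True
    return False  # transition not enabled (e.g. supplier disrupted)

fired = fire("ship")
```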
20.
In this paper, we propose a secure and efficient software framework for a non-repudiation service based on an adaptive secure methodology in e-commerce (electronic commerce). First, we introduce an explicit security framework for e-commerce transactions called the notary service. The proposed framework supports non-repudiation of service for a successful e-commerce transaction in terms of generation, delivery, retrieval, and verification of the evidence for resolving disputes. Second, we propose an adaptive secure methodology to support secure and efficient non-repudiation of service in the proposed framework. Our adaptive secure methodology dynamically adapts security classes based on the nature and sensitivity of the interactions among participants. The security classes incorporate the security levels of cryptographic techniques with a degree of information sensitivity. As Internet e-businesses grow exponentially, the need for high-security-level categories to identify groups of connections or individual transactions is manifest, so the development of an efficient and secure methodology is in high demand. We have conducted extensive experiments on the performance of the proposed adaptive secure methodology. Experimental results show that it provides e-commerce transactions with a high quality of security services. Our software framework incorporating the adaptive secure methodology is compared with existing well-known e-commerce frameworks such as SSL (Secure Socket Layer) and SET (Secure Electronic Transaction).
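The class-selection idea can be sketched as a sensitivity-to-class mapping (the class names, thresholds, and mechanisms are invented, not the paper's):

```python
# Security classes ordered from strongest to weakest; an interaction's
# sensitivity score in [0, 1] selects the first class whose threshold it
# meets, trading cryptographic strength against transaction cost.
SECURITY_CLASSES = [
    # (min_sensitivity, class name, example mechanisms)
    (0.8, "class-3", "strong signatures + notarized evidence"),
    (0.5, "class-2", "signatures + timestamping"),
    (0.0, "class-1", "integrity check only"),
]

def select_class(sensitivity):
    for threshold, name, _mechanisms in SECURITY_CLASSES:
        if sensitivity >= threshold:
            return name

cls = select_class(0.9)  # highly sensitive interaction
```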