Similar Literature
20 similar articles found (search time: 46 ms)
1.
Most existing risk assessment models are statistical methods guided by static models; they do not account for the dynamic interactions among cyberspace elements, and known risk assessment tools do not support considering delays in security measures during risk analysis and assessment. To address these problems, this paper analyzes the causes of delays in protective security measures and proposes a dynamic information security risk assessment model that incorporates delay factors, making it possible to build more flexible risk assessment tools on top of the statistics obtained from a time-delay nonlinear model and the results of qualitative assessment. The model is used in a simulation study of how delayed security measures affect information security risk; the results show that taking timely security measures against threats can effectively reduce information security risk.
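A minimal sketch of the idea in Python (the dynamics and parameter values are illustrative assumptions, not the paper's time-delay nonlinear model): risk accumulates with threat activity and is damped by countermeasures that take effect only after a delay of lag steps.

    import numpy as np

    # Toy discrete-time model: risk grows with threat activity and is reduced
    # by countermeasures that react to the threat observed `lag` steps earlier.
    def simulate_risk(threat, lag, growth=0.08, mitigation=0.25):
        risk = np.zeros(len(threat))
        for t in range(1, len(threat)):
            response = threat[t - lag] if t >= lag else 0.0  # delayed reaction
            risk[t] = max(0.0, risk[t-1] + growth * threat[t] - mitigation * response)
        return risk

    rng = np.random.default_rng(0)
    threat = rng.uniform(0, 1, 200)
    print("peak risk, prompt response :", simulate_risk(threat, lag=1).max())
    print("peak risk, delayed response:", simulate_risk(threat, lag=40).max())

Consistent with the abstract's conclusion, the longer the response delay, the higher the peak accumulated risk.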

2.
Advanced analytical tools are enabling the estimation of new measures of operational reliability in power systems. In bid-based and bilateral contract markets, these estimates provide valuable information to market participants. Such information improves market efficiency through more informed decision-making. However, it may also result in opportunistic bidding strategies that adversely affect economic efficiency. The relationship between risk and bidding strategies is explored in the paper. The importance of the availability of a good information set on reliability is demonstrated analytically. Examples of information availability in restructured markets are given, and questions about the dimensions of market information policies are discussed. An illustrative example shows how private gains can be obtained from withholding information.

3.
International portfolios composed of domestic and foreign assets are popular investment tools for financial institutions in highly integrated global financial markets. However, past studies have focused on either domestic assets or foreign assets, but not both in the same context, and the risk management literature has paid little attention to controlling the market risk of international portfolios. In contrast to the existing portfolio literature, this paper considers both domestic and foreign assets, and provides an analytical value-at-risk (VaR) with common jump risk and exchange rate risk to manage the market risk of international portfolios over the subprime mortgage crisis. In general, the analytical solution can be used to calculate VaRs accurately, by the backtesting criterion, in terms of in-sample and out-of-sample fit for an international portfolio with common jumps.
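A Monte Carlo sketch of the same quantity (the paper derives an analytical VaR; the dynamics, parameter values and weights below are assumptions for illustration): one domestic asset plus one foreign asset whose return stacks the local asset return on the exchange rate return, with a common Poisson jump hitting both assets.

    import numpy as np

    rng = np.random.default_rng(1)
    n, dt = 100_000, 1 / 252
    mu = np.array([0.05, 0.07, 0.01])      # domestic asset, foreign asset, FX drifts
    sigma = np.array([0.20, 0.25, 0.10])   # diffusion volatilities
    w = np.array([0.6, 0.4])               # portfolio weights

    z = rng.standard_normal((n, 3))
    r = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    jump = rng.normal(-0.02, 0.05, n) * rng.poisson(3.0 * dt, n)  # common jump
    r_dom = r[:, 0] + jump
    r_for = r[:, 1] + r[:, 2] + jump       # foreign asset return plus FX return
    port = w[0] * r_dom + w[1] * r_for
    print("1-day 99% VaR:", -np.quantile(port, 0.01))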

4.
In this paper, we present a tool for the simulation of fluid models of high-speed telecommunication networks. The aim of such a simulator is to evaluate measures which cannot be obtained in reasonable time through standard tools or through analytical approaches. We follow an event-driven approach in which events are associated with rate changes in fluid flows. We show that, under some loose restrictions on the sources, this suffices to efficiently simulate the evolution in time of fairly complex models. Some examples illustrate the use of this approach and the gain that can be observed over standard simulation tools.
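A minimal event-driven fluid sketch (a single buffer with hypothetical rates; the paper's tool handles far more general models): the only simulated events are input-rate changes, and the buffer level is updated linearly between them.

    import heapq

    # Between rate-change events the buffer level evolves linearly,
    # so no per-packet bookkeeping is needed.
    def fluid_buffer(rate_changes, service_rate, horizon):
        events = list(rate_changes)            # (time, new input rate)
        heapq.heapify(events)
        t, level, in_rate, peak = 0.0, 0.0, 0.0, 0.0
        while events and events[0][0] <= horizon:
            t_next, new_rate = heapq.heappop(events)
            level = max(0.0, level + (in_rate - service_rate) * (t_next - t))
            peak = max(peak, level)
            t, in_rate = t_next, new_rate
        return peak

    # On/off source: 2.0 units/s for 1 s, silent for 1 s, served at 1.2 units/s.
    changes = [(i, 2.0 if i % 2 == 0 else 0.0) for i in range(10)]
    print("peak buffer level:", fluid_buffer(changes, service_rate=1.2, horizon=10))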

5.
The risk of non-fulfilment of a contract can harm public administration or even interrupt public services. Models that assist managers' decision making in the audit and control of contracts with a higher disqualification risk may therefore be important tools, with economic and even social repercussions. In this article, public contracts are classified with respect to the risk of non-compliance with their terms of delivery. The quantitative tools used are statistical and machine learning models, similar to those used for credit risk rating of loans. As explanatory variables, the models use data found in the electronic databases present in e-government implementations. A previously classified listing of suspended companies is used as a proxy for risky contracts, as it contains private companies which failed to meet their contractual obligations. The classification techniques utilized are logistic regression, k-nearest neighbours, discriminant analysis, support vector machines and random forests. Although the methods can be applied to any government with electronic procurement and contract systems, Brazilian data is used to illustrate the benefits of contract governance for emerging economies. It is concluded that the credit rating techniques used apply directly to contractual risk in public administration. On real public administration contract data, the classification algorithm with the best performance is k-nearest neighbours.
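A sketch of the comparison with scikit-learn (synthetic features stand in for the Brazilian e-procurement data; the class imbalance, feature count and scoring metric are assumptions):

    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Label 1 plays the role of "contract from a later-suspended company".
    X, y = make_classification(n_samples=2000, n_features=12,
                               weights=[0.9], random_state=0)
    models = {
        "logistic": LogisticRegression(max_iter=1000),
        "knn": KNeighborsClassifier(n_neighbors=7),
        "lda": LinearDiscriminantAnalysis(),
        "svm": SVC(),
        "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    for name, model in models.items():
        pipe = make_pipeline(StandardScaler(), model)
        auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name:14s} mean AUC = {auc:.3f}")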

6.
In empirical modeling, there have been two strands in the option pricing literature, namely parametric and nonparametric models. Often, the support for nonparametric methods is based on a benchmark such as the Black-Scholes (BS) model with constant volatility. In this paper, we study the stochastic volatility (SV) and stochastic volatility random jump (SVJ) models as parametric benchmarks against feedforward neural network (FNN) models, a class of neural network models. Our choice of FNN models is due to their well-studied universal approximation properties for an unknown function and its partial derivatives. Since the partial derivatives of an option pricing formula are risk pricing tools, an accurate estimate of the unknown option pricing function is essential for pricing and hedging. Our findings indicate that FNN models are robust option pricing tools that outperform their more sophisticated parametric counterparts in predictive settings. There are two routes to explaining the superiority of FNN models over parametric models in forecast settings: the nonnormality of return distributions and adaptive learning.
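A toy version of the FNN route (here the net is fitted to Black-Scholes prices rather than market data, so it only illustrates the universal-approximation point; the architecture and inputs are assumptions):

    import numpy as np
    from scipy.stats import norm
    from sklearn.neural_network import MLPRegressor

    def bs_call(S, K, T, r, sigma):
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

    rng = np.random.default_rng(2)
    m = rng.uniform(0.8, 1.2, 20_000)       # moneyness S/K (strike normalised to 1)
    T = rng.uniform(0.05, 1.0, 20_000)      # time to maturity in years
    price = bs_call(m, 1.0, T, r=0.02, sigma=0.2)

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    net.fit(np.column_stack([m, T]), price)
    print("FNN price:", net.predict([[1.05, 0.5]])[0])
    print("BS  price:", bs_call(1.05, 1.0, 0.5, 0.02, 0.2))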

7.
Queueing networks have been widely used to evaluate the performance of mainframe computer systems. In contrast, few results have been reported for modern open systems, so it was not clear whether queueing networks are useful for modern systems. We think this situation has partly been due to a lack of handy evaluation tools. This paper presents two tools that we developed to evaluate open system performance. One is a measuring tool that is capable of accurately obtaining the service times of system resources requested by an application transaction. The other is an estimating tool which calculates various performance measures based on queueing network models. This paper also describes a case study in which the performance of a medium-sized UNIX client-server system (up to 24 clients) is estimated using the tools and these estimates are then compared with experimental results. The estimates closely agree with the measured results and are accurate enough for practical applications. Based on this, we conclude that queueing network models are also useful for modern systems.
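The estimating side can be sketched with exact Mean Value Analysis for a closed single-class queueing network (the service demands and think time below are hypothetical, not the paper's measurements):

    # Exact MVA: demands[k] is the service demand (seconds) a transaction
    # places on resource k; think_time models the client between requests.
    def mva(demands, n_clients, think_time=0.0):
        q = [0.0] * len(demands)                   # mean queue lengths
        for n in range(1, n_clients + 1):
            resp = [d * (1 + qk) for d, qk in zip(demands, q)]
            x = n / (think_time + sum(resp))       # system throughput
            q = [x * rk for rk in resp]
        return x, resp

    # Hypothetical per-transaction demands: CPU, disk, network.
    x, resp = mva([0.010, 0.030, 0.005], n_clients=24, think_time=1.0)
    print(f"throughput = {x:.1f} tx/s, response time = {sum(resp)*1000:.1f} ms")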

8.
In regression models, not only the parameter estimates and significances of explanatory variables are of interest, but also the degree to which variation in the dependent variable can be explained by covariates. In recent publications, an R² measure based on deviance was recommended for Poisson regression models, one of the most frequently used modelling tools in epidemiological studies. However, when the sample size is small relative to the number of covariates in the model, simple R² measures may be seriously inflated and may need to be adjusted for the number of covariates in the model. We present a SAS macro that calculates adjustments to the R² measures in Poisson regression models based on the log-likelihood and on sums of squares. The proposed measures are applied to real data sets and their performance is discussed.
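A sketch of the deviance-based measure in Python rather than SAS (the simple degrees-of-freedom adjustment shown is one illustrative possibility, not necessarily the macro's formula):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n, k = 60, 3
    X = sm.add_constant(rng.standard_normal((n, k)))
    y = rng.poisson(np.exp(0.5 + X[:, 1]))

    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    null = sm.GLM(y, np.ones((n, 1)), family=sm.families.Poisson()).fit()

    r2_dev = 1 - fit.deviance / null.deviance            # deviance-based R²
    r2_adj = 1 - (fit.deviance / (n - k - 1)) / (null.deviance / (n - 1))
    print(f"R2_dev = {r2_dev:.3f}, adjusted = {r2_adj:.3f}")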

9.
10.
The ability of grocery retailers to have a single view of customers across all their grocery purchases remains elusive and has become increasingly important in recent years, especially in the United Kingdom, where competition has intensified, shopping habits and demographics have changed, and price sensitivity has increased following the 2008 recession. Numerous studies have been conducted on understanding independent items that are frequently bought together (association rule mining/frequent itemsets), with several measures proposed to aggregate item support and rule confidence with varying levels of accuracy, as these measures are highly context dependent. Uninorms were used as an alternative measure to aggregate support and confidence in analysing market basket data, using the UK grocery retail sector as a case study. Experiments were conducted on consumer panel data with the aim of comparing the uninorm against three other popular measures (Jaccard, Cosine and Conviction). It was found that the uninorm outperformed the other measures in its adherence to the fundamental monotonicity property of support in market basket analysis (MBA). Future work will include extending this analysis to provide a generalised model for market basket analysis.
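For illustration, the classical representable uninorm with neutral element 0.5 (sometimes called the 3-Pi operator) can aggregate a rule's support and confidence; the paper's uninorm may be parameterised differently:

    # U(x, y) = xy / (xy + (1-x)(1-y)); inputs above the neutral element 0.5
    # reinforce each other, inputs below it attenuate each other.
    def uninorm(x, y):
        denom = x * y + (1 - x) * (1 - y)
        return x * y / denom if denom > 0 else 1.0   # (0,1)/(1,0) by convention

    print(uninorm(0.12, 0.85))   # weak support tempers strong confidence
    print(uninorm(0.8, 0.7))     # 0.903: two strong inputs reinforce (> max)
    print(uninorm(0.2, 0.3))     # 0.097: two weak inputs attenuate (< min)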

11.
In some stock markets there exist dual-listed stocks issued by the same company. Although these stocks bear the same firm-specific risks and enjoy identical dividend and voting policies, they are priced differently. Some previous studies show that this seeming deviation from the law of one price can be resolved by allowing different expected returns and market prices of risk for investors holding heterogeneous beliefs. This paper provides empirical evidence for that argument by testing the expected return and market price of risk of Chinese A and B shares listed on the Shanghai and Shenzhen stock markets. Models with geometric Brownian motion dynamics are adopted. Multivariate GARCH models are also introduced to capture the time-varying volatility of stock returns. The results suggest that the pricing difference can be explained by the difference in expected returns between A and B shares. However, the difference between the market prices of risk is insignificant for both markets when GARCH models are adopted.
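A simplified sketch using the arch package (univariate GARCH(1,1) per series on synthetic returns; the paper fits multivariate GARCH to actual A/B-share data):

    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(4)
    returns_a = rng.standard_normal(1000)          # stand-in A-share returns (%)
    returns_b = 1.2 * rng.standard_normal(1000)    # stand-in B-share returns (%)

    for name, r in [("A shares", returns_a), ("B shares", returns_b)]:
        res = arch_model(r, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
        print(name, dict(res.params.round(4)))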

12.
Least squares support vector machines ensemble models for credit scoring
Due to the recent financial crisis and the regulatory concerns of Basel II, credit risk assessment has become one of the most important topics in the field of financial risk management. Quantitative credit scoring models are widely used tools for credit risk assessment in financial institutions. Although single support vector machines (SVM) have demonstrated good classification performance, a single classifier with a fixed group of training samples and parameter settings may have some kind of inductive bias. One effective way to reduce this bias is an ensemble model. In this study, several ensemble models based on least squares support vector machines (LSSVM) are put forward for credit scoring. The models are tested on two real-world datasets, and the results show that ensemble strategies help to improve performance to some degree and are effective for building credit scoring models.
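A compact LSSVM-plus-bagging sketch (the kernel, hyperparameters and synthetic data are assumptions; LSSVM training reduces to solving one linear system):

    import numpy as np

    def rbf(A, B, gamma=0.5):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)

    # LSSVM dual: solve [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y].
    def lssvm_fit(X, y, C=10.0):
        n = len(y)
        M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                      [np.ones((n, 1)), rbf(X, X) + np.eye(n) / C]])
        sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
        return sol[0], sol[1:]                       # bias b, coefficients alpha

    rng = np.random.default_rng(5)
    X = rng.standard_normal((300, 4))                # stand-in applicant features
    y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.standard_normal(300))

    votes = []
    for _ in range(15):                              # bagged LSSVM ensemble
        idx = rng.integers(0, len(y), len(y))        # bootstrap resample
        b, a = lssvm_fit(X[idx], y[idx])
        votes.append(np.sign(rbf(X, X[idx]) @ a + b))
    pred = np.sign(np.mean(votes, axis=0))           # majority vote
    print("training accuracy:", (pred == y).mean())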

13.
The IDEF methodology has been extensively used for modeling various processes. Qualitative and quantitative reliability analysis and risk assessment of IDEF models is of interest to industry for several reasons: it identifies critical activities in a process, improves process performance, and decreases downtime and the operating cost of the process. Formal tools and techniques are required to evaluate the risk associated with an IDEF3 model. In this paper, the fault tree analysis technique and minimal cut and path set generation algorithms are applied for reliability evaluation and risk assessment of the parent activities in an IDEF3 model. Structural and reliability importance measures for parent activities in an IDEF3 model, as well as for the elementary activities in a decomposed model, are presented.
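A MOCUS-style sketch of minimal cut set generation for a small AND/OR fault tree (the tree itself is a hypothetical stand-in for gates derived from an IDEF3 model):

    # Basic events are names absent from `tree`; gates map to (op, children).
    def cut_sets(gate, tree):
        if gate not in tree:
            return [frozenset([gate])]
        op, children = tree[gate]
        child_sets = [cut_sets(c, tree) for c in children]
        if op == "OR":                                 # union of alternatives
            result = [s for sets in child_sets for s in sets]
        else:                                          # AND: cross-product union
            result = [frozenset()]
            for sets in child_sets:
                result = [a | b for a in result for b in sets]
        unique = set(result)                           # keep only minimal sets
        return [s for s in unique if not any(o < s for o in unique)]

    tree = {"TOP": ("OR", ["G1", "E3"]), "G1": ("AND", ["E1", "E2"])}
    print([sorted(s) for s in cut_sets("TOP", tree)])  # cut sets {E1,E2} and {E3}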

14.
To address problems in current enterprise security risk management, such as the lack of quantitative means for formulating risk treatment plans and selecting management measures, and the excessive time consumed by manual risk analysis, an information security risk management method based on Markov logic networks is proposed. First, a Markov logic network is used to describe the dependencies among the components and services of the system under assessment; marginal inference over the network is then used to estimate system availability under different security management measures, providing a quantitative basis for selecting among those measures. A case study shows that the method provides a reliable quantitative basis for selecting security risk management measures for enterprise information systems and is simple to implement.
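A toy illustration of the inference step (world enumeration over a few Boolean variables with hand-picked weights; a real Markov logic network engine would use sampling or lifted inference on far larger models):

    import itertools, math

    # Variables: fw = firewall deployed, c = component compromised,
    # avail = dependent service available. Weighted formulas encode dependencies.
    def weight(fw, c, avail):
        w = 0.0
        if fw and not c: w += 2.0        # firewall tends to prevent compromise
        if c and not avail: w += 1.5     # compromise tends to break the service
        if not c and avail: w += 1.0     # no compromise favours availability
        return math.exp(w)

    def p_available(fw):
        worlds = list(itertools.product([0, 1], repeat=2))     # (c, avail)
        z = sum(weight(fw, c, a) for c, a in worlds)
        return sum(weight(fw, c, a) for c, a in worlds if a) / z

    print("P(available | firewall)   :", round(p_available(1), 3))
    print("P(available | no firewall):", round(p_available(0), 3))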

15.
Kemerer, C.F. IEEE Software, 1992, 9(3): 23-28.
Part of adopting an industrial process is going through a learning curve, which measures the rate at which the average unit cost of production decreases as the cumulative amount produced increases. It is argued that organizations buy integrated CASE tools only to leave them on the shelf because they misinterpret the learning curve and its effect on productivity. It is shown that learning-curve models can quantitatively document the productivity effect of integrated CASE tools by factoring out the learning costs, so that managers can use model results to estimate future projects with greater accuracy. Without this depth of understanding, managers are likely to make less-than-optimal decisions about integrated CASE and may abandon the technology too soon. The influence of learning curves on CASE tools and the adaptation of learning-curve models to integrated CASE are discussed, as are the three biggest tasks in implementing learning curves in integrated CASE settings: locating a suitable data site, collecting the data, and validating the results.
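The classic log-linear learning-curve model can be fitted in a few lines (the unit costs below are invented for illustration):

    import numpy as np

    # Model: cost = a * units**b, with b < 0 when average unit cost falls
    # as cumulative output grows; fit by linear regression in log-log space.
    units = np.array([1, 2, 3, 4, 5, 6, 7, 8])            # cumulative projects
    cost = np.array([100, 82, 74, 68, 64, 61, 58, 56])    # person-days each

    b, log_a = np.polyfit(np.log(units), np.log(cost), 1)
    print(f"fit: cost = {np.exp(log_a):.1f} * units^{b:.3f}")
    print(f"cost ratio per doubling of output = {2**b:.2%}")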

16.
A multi-scale framework for decision support is presented that uses a combination of experiments, models, communication, education and decision support tools to arrive at a realistic strategy to minimise diffuse pollution. Effective partnerships between researchers and stakeholders play a key part in the successful implementation of this strategy. The Decision Support Matrix (DSM) is introduced as a set of visualisations that can be used at all scales, both to inform decision making and as a communication tool in stakeholder workshops. A demonstration farm is presented and one of its fields is taken as a case study. Hydrological and nutrient flow path models are used for event-based simulation (TOPCAT), catchment-scale modelling (INCA) and field-scale flow visualisation (TopManage). One of the DSMs, the Phosphorus Export Risk Matrix (PERM), is discussed in detail. The PERM was developed iteratively as a point of discussion in stakeholder workshops and as a decision support and education tool. The resulting interactive PERM contains a set of questions and proposed remediation measures that reflect both expert and local knowledge. Education and visualisation tools such as GIS, risk indicators, TopManage and the PERM are found to be invaluable in communicating improved farming practice to stakeholders.

17.
Department editor Dave Kasik highlights some of the new computer graphics tools and products on the market. In this issue the featured tools and products are the "Smart Camera" precision position tracking system from WorldViz; the Nvidia Quadro FX 3600M professional GPU for notebook and laptop workstations; the OpticBook 4600 book scanner from Plustek Technology; a news item on the unveiling of the world's highest-resolution display system for scientific visualization at the University of California, San Diego; an advanced indicator from TransLumen; enhancements to the Knights Camelot CAD navigation tool from Magma Design Automation; and a 3D application from CityEngine that can generate models of cities up to 10 times faster than previous applications.

18.
Many real-time systems are safety- and security-critical systems and, as a result, tools and techniques for verifying them are extremely important. Simulating and testing such systems can be exceedingly time-consuming, and these techniques provide only probabilistic measures of correctness. There are a number of model-checking tools for real-time systems. Although they provide formal verification for models, we still need to implement these models. To increase confidence in real-time programs written in real-time Java, this paper proposes a model-based approach to the development of such programs. First, models can be mechanically verified, to check whether they satisfy particular properties, using current real-time model-checking tools. Then, programs can be derived from the model by following a systematic approach. We introduce the Timed Automata to RTSJ Tool (TART), a prototype tool to automatically generate real-time Java code from the model. Finally, we show the applicability of our approach by means of four examples: a gear controller, an audio/video protocol, a producer/consumer and the Fischer protocol.

19.
In the last decade, market financial forecasting has attracted great interest among researchers in pattern recognition. Usually, the data used to analyse the market, and then bet on its future trend, are provided as time series; this aspect, along with the high fluctuation of this kind of data, rules out the direct use of very efficient and popular classification tools such as the well-known convolutional neural network (CNN) models Inception, ResNet, AlexNet, and so on. This forces researchers to train new tools from scratch, which can be very time-consuming. This paper exploits an ensemble of CNNs, trained on Gramian angular field (GAF) images generated from time series related to the Standard & Poor's 500 index future, with the aim of predicting the future trend of the U.S. market. A multi-resolution imaging approach is used to feed each CNN, enabling the analysis of different time intervals for a single observation. A simple trading system based on the ensemble forecaster is used to evaluate the quality of the proposed approach. Our method outperforms the buy-and-hold (B&H) strategy in a time frame where the latter provides excellent returns. Both quantitative and qualitative results are provided.
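The GAF encoding itself is compact; a sketch of the Gramian angular summation field (GASF) variant on a synthetic price series (series length and data are assumptions):

    import numpy as np

    def gasf(series):
        x = np.asarray(series, dtype=float)
        x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
        phi = np.arccos(np.clip(x, -1, 1))                # value -> angle
        return np.cos(phi[:, None] + phi[None, :])        # n x n image

    prices = np.cumsum(np.random.default_rng(6).standard_normal(64))
    image = gasf(prices)
    print(image.shape)    # (64, 64): one such image per resolution feeds a CNN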

20.
The conditioning of strategies by the market environment and the simultaneous emergence of market structure in the presence of evolving trading strategies are investigated with major international stock indexes. Models for price forecasting and the evolution of trading strategies are examined under different time horizons. The results demonstrate that trading strategies can become performative in thin markets, thereby shaping the price dynamics, which in turn feed back into the strategies. The dominance of some (short-memory) traders in thin markets produces a better environment for learning profitable strategies with computational intelligence tools. The experiment conducted contradicts assertions that the long-term fitness of traders is not a function of accurate prediction but only of appropriate risk aversion through a stable saving rate. The stock traders' economic performance is found to be best with a 1-year forward time horizon, and it deteriorates significantly for tests with horizons exceeding 2 years, identifying frequent structural breaks. To model turmoil in an economic system with recurrent shocks, short-memory horizons are optimal, as older data is not informative about current or future states.

