Similar Documents
20 similar documents found (search time: 15 ms).
1.
2.
Ratios used in financial analysis suffer from several drawbacks, and the tools – ranging from linear least-squares regressions to neural networks – suggested as alternatives also have serious disadvantages. We propose an alternative approach, based on quantile regression techniques, which exploits financial information in a more efficient way, not achievable by conventional tools. Our proposal is applied to the ROA (return on assets) ratio, this being one of the most popular ratios among both economic analysts and researchers. An empirical analysis is carried out on real data. Results indicate that the quantile approach provides a more accurate assessment of the financial position of the firm.
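For illustration, a minimal sketch of the quantile-regression idea in Python with statsmodels: ROA is regressed on two hypothetical firm-level predictors (leverage and log firm size, not the study's actual variables) at several quantiles instead of only the conditional mean, using synthetic data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic, heteroscedastic ROA data: the leverage effect varies across quantiles.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "leverage": rng.uniform(0.1, 0.9, n),
    "log_size": rng.normal(10, 2, n),
})
df["roa"] = (0.05 - 0.03 * df["leverage"] + 0.002 * df["log_size"]
             + rng.normal(0, 0.02 * (1 + df["leverage"]), n))

# Fit the same specification at the 10th, 50th and 90th conditional quantiles.
for q in (0.10, 0.50, 0.90):
    res = smf.quantreg("roa ~ leverage + log_size", df).fit(q=q)
    print(f"q={q:.2f}", res.params.round(4).to_dict())
```

A tail quantile (e.g. q = 0.10) characterises weak firms directly, which is the kind of distributional information a single mean regression of ROA cannot provide.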

3.
The impact of Internet security breaches on firms has been a concern to both researchers and practitioners. One measure of the damage to the breached firm is the cumulative abnormal stock market return (CAR) observed when the attack is announced in the public media. To develop effective Internet security investment strategies for preventing such damage, firms need to understand the factors that lead to the occurrence of CAR. While previous research has used regression analysis to explore the relationship between firm and attack characteristics and the occurrence of CAR, in this paper we use decision tree (DT) induction to explore this relationship. The results of our DT-based analysis indicate that both attack and firm characteristics determine CAR. While each of our results is consistent with that of at least one previous study, no single previous study has provided evidence that both firm and attack characteristics are determinants of CAR. Further, the DT-based analysis provides an interpretable model in the form of understandable and actionable rules that decision makers can use. The DT-based approach thus provides insights beyond those offered by the regression approaches employed in previous research. The paper makes methodological, theoretical and practical contributions to understanding the predictors of damage when a firm is breached.
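A minimal sketch of the DT idea with scikit-learn: the breach records, feature names and target below are hypothetical stand-ins for the firm and attack characteristics the paper uses, and the tree is kept shallow so its rules stay readable.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical breach records: firm traits, attack traits, and whether a
# significant negative CAR was observed after the announcement.
data = pd.DataFrame({
    "firm_size":    [1, 3, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2],   # 1=small .. 3=large
    "is_ecommerce": [0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0],
    "dos_attack":   [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1],
    "data_theft":   [0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1],
    "negative_car": [1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1],   # target
})
X, y = data.drop(columns="negative_car"), data["negative_car"]

# A shallow tree keeps the induced rules interpretable and actionable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```

Each root-to-leaf path of the printed tree reads directly as an if-then rule over firm and attack characteristics, which is the interpretability advantage the abstract emphasises over regression coefficients.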

4.
Cost estimation is one of the most critical activities in the software life cycle. Over the past decades, a number of techniques have been proposed for cost estimation, and linear regression remains the most frequently applied method in the literature. However, several studies point out that linear regression is prone to low prediction accuracy, for reasons such as non-linearity and non-normality of the data. A less frequently addressed reason is multi-collinearity, which can lead to unstable regression coefficients and has been reported to be widespread across software engineering datasets. To tackle this problem and improve regression accuracy, we propose a holistic problem-solving approach (named the adaptive ridge regression system) integrating data transformation, multi-collinearity diagnosis, ridge regression and multi-objective optimization. The proposed system is tested on two real-world datasets and compared with OLS regression, stepwise regression and other machine learning methods. The results indicate that the adaptive ridge regression system can significantly improve the performance of regression on multi-collinear datasets and produce more explainable results than machine learning methods.
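A minimal sketch of the diagnosis-then-shrinkage idea on synthetic effort data (the dataset, feature names and use of cross-validated ridge are assumptions, not the paper's exact system):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RidgeCV
from sklearn.preprocessing import StandardScaler
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic cost data with two strongly collinear size-related features.
rng = np.random.default_rng(1)
n = 200
kloc = rng.normal(50, 10, n)
complexity = 0.8 * kloc + rng.normal(0, 2, n)     # collinear with kloc
team = rng.normal(8, 2, n)
effort = 3.0 * kloc + 1.5 * complexity + 5.0 * team + rng.normal(0, 20, n)
X = StandardScaler().fit_transform(np.column_stack([kloc, complexity, team]))

# Multi-collinearity diagnosis: VIF values well above 10 flag unstable OLS coefficients.
for i, name in enumerate(["kloc", "complexity", "team"]):
    print(name, "VIF =", round(variance_inflation_factor(X, i), 1))

# Ridge shrinks the collinear coefficients toward stable values; alpha chosen by cross-validation.
ols = LinearRegression().fit(X, effort)
ridge = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X, effort)
print("OLS coefficients  :", ols.coef_.round(2))
print("Ridge coefficients:", ridge.coef_.round(2), " alpha =", round(float(ridge.alpha_), 3))
```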

5.
A firm's operational efficiency plays a crucial role in determining its survival and growth, especially when the industry is going through a dynamic structural transformation driven by external changes. In this paper, we explore the effect of managerial and strategic parameters on the degree of operational efficiency achieved by firms in the Indian pharmaceutical industry using data envelopment analysis (DEA). During the period 1992–2002, the relaxation of import restrictions and foreign direct investment, along with a major change in the regulatory norms, resulted in increased competition from firms with superior resources in this industry. We use non-parametric DEA models and parametric methods such as regression analysis to determine the factors that have contributed to the internal operational efficiencies of these firms. The findings indicate that domestic firms, most of which are controlled by family-based governance structures, enjoy higher efficiencies than affiliates of multinational pharmaceutical majors. After controlling for firm size and initial efficiency levels, we find that firms with higher levels of innovation through higher R&D investment, and older establishments, are associated with higher efficiencies than their less R&D-intensive and younger counterparts, respectively.
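A minimal sketch of the DEA building block, assuming an input-oriented CCR (constant-returns) model solved as one linear program per firm; the two inputs, one output and the five-firm data are illustrative, not the study's:

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[20., 30.], [25., 25.], [40., 20.], [30., 50.], [50., 40.]])  # inputs per firm
Y = np.array([[100.], [120.], [150.], [110.], [160.]])                      # outputs per firm
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

for o in range(n):
    # Variables: [theta, lambda_1 .. lambda_n]; minimise theta (input contraction factor).
    c = np.r_[1.0, np.zeros(n)]
    # Inputs of the composite unit must not exceed theta times firm o's inputs.
    A_in = np.c_[-X[o, :].reshape(m, 1), X.T]
    b_in = np.zeros(m)
    # Outputs of the composite unit must reach firm o's outputs.
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[o, :]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    print(f"firm {o}: efficiency = {res.x[0]:.3f}")
```

A score of 1 means the firm lies on the efficient frontier; scores below 1 measure how far its inputs could be contracted. The second-stage regressions the abstract mentions would then relate these scores to governance, R&D intensity and firm age.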

6.
New product introduction is vital to a firm's growth and prosperity. A company needs a process for finding and developing new product ideas and, finally, for successfully introducing them into the marketplace. To address this problem, this study proposes a new approach that combines neural networks, the analytic hierarchy process (AHP) and linear programming (LP). A neural network is used to classify the feasibility of ideas/projects in new product development, and AHP is applied to evaluate the interdependence among projects. A linear programming model is then developed and used to allocate the firm's limited financial resources to the competing projects.
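As a rough sketch of two of the three components, the snippet below derives AHP priority weights from a pairwise-comparison matrix (principal eigenvector) and feeds them into an LP that allocates a limited budget across projects; the comparison matrix, project costs and budget are illustrative assumptions, and the neural-network screening stage is omitted.

```python
import numpy as np
from scipy.optimize import linprog

# AHP: pairwise comparisons among three candidate projects on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                            # priority weights of the projects
print("AHP weights:", w.round(3))

# LP: funding fractions x_i in [0, 1] maximising the weighted value under a budget cap.
cost = np.array([40.0, 25.0, 30.0])        # cost to fully fund each project
budget = 60.0
res = linprog(-w, A_ub=cost.reshape(1, -1), b_ub=[budget],
              bounds=[(0, 1)] * 3, method="highs")
print("funding fractions:", res.x.round(3))
```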

7.
The assessment of semantic similarity between terms is one of the challenging tasks in knowledge-based applications such as multimedia retrieval, automatic service discovery and emotion mining. By means of similarity estimation, the comprehension of textual resources becomes more feasible and accurate. Some studies have proposed integrating various assessment methods to take advantage of different semantic resources, but most of them simply employ averaging or regression training. In this paper, we address this problem by combining corpus-based similarity methods with WordNet-based methods using a differential evolution (DE) algorithm. Specifically, this DE-based approach conducts similarity assessment in a continuous vector space. It is validated against a variety of similarity approaches on multiple benchmark datasets. Empirical results demonstrate that our approach outperforms existing work and conforms more closely to human judgements of similarity. The results also show the expressiveness of continuous vectors learned by neural networks in capturing latent lexical semantics.
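A minimal sketch of the combination step under simplifying assumptions: three synthetic similarity measures stand in for the corpus-based and WordNet-based estimators, and SciPy's differential evolution searches for blending weights that maximise the Spearman correlation with (synthetic) human ratings.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_pairs = 60
gold = rng.uniform(0, 1, n_pairs)        # stand-in for human similarity ratings
# Three noisy similarity measures of differing quality for the same word pairs.
measures = np.column_stack([gold + rng.normal(0, s, n_pairs) for s in (0.15, 0.25, 0.35)])

def loss(w):
    # Negative Spearman correlation between the weighted blend and the gold ratings.
    blend = measures @ w
    return -spearmanr(blend, gold).correlation

result = differential_evolution(loss, bounds=[(1e-3, 1.0)] * 3, seed=2)
w = result.x / result.x.sum()
print("learned weights:", w.round(3), " Spearman:", round(-result.fun, 3))
```

The learned weights should favour the less noisy measures, which is the behaviour the paper relies on when blending heterogeneous semantic resources.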

8.
Resource allocation between exploration of emerging technological possibilities and exploitation of known technological possibilities involves a delicate trade-off. We develop a model to represent this trade-off under time pressure, where the firm's existing basis of survival is constantly challenged by competitors' innovation and imitation. We examine how an adaptive rule improves the balance between exploration and exploitation. Simulation experiments show that an adaptively rational decision rule, i.e. a step-by-step exploration of unknown opportunities based on feedback on returns, is more likely to increase firm survival under diverse conditions than an all-or-nothing approach to the unknown opportunities. Furthermore, our study suggests that the adaptively rational rule protects itself against excessive loss, while its potential pay-off remains unbounded above.

9.
We address the problem of porting parallel distributed applications from static homogeneous cluster environments to dynamic heterogeneous Grid resources. We introduce a generic technique for adaptive load balancing of parallel applications on heterogeneous resources and evaluate it using a case study application: a Virtual Reactor for simulation of plasma chemical vapour deposition. This application has a modular architecture with a number of loosely coupled components suitable for distribution over the Grid. It requires large parameter space exploration that allows using Grid resources for high-throughput computing. The Virtual Reactor contains a number of parallel solvers originally designed for homogeneous computer clusters that needed adaptation to the heterogeneity of the Grid. In this paper we study the performance of one of the parallel solvers, apply the technique developed for adaptive load balancing, evaluate the efficiency of this approach and outline an automated procedure for optimal utilization of heterogeneous Grid resources for high-performance parallel computing.
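A toy sketch of the adaptive rebalancing idea (not the Virtual Reactor solver itself): the work share of each heterogeneous node is adjusted in proportion to the throughput it achieved on the previous iteration, so the slowest node stops dominating the step time. Node speeds and problem size below are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
true_speed = np.array([1.0, 2.5, 0.7, 1.8])        # cells/second per node (unknown to us)
total_cells = 100_000
share = np.full(4, 0.25)                           # start with an even, homogeneous split

for it in range(5):
    work = share * total_cells
    t = work / (true_speed * rng.uniform(0.9, 1.1, 4))   # measured wall-clock time per node
    measured_speed = work / t
    share = measured_speed / measured_speed.sum()        # rebalance toward faster nodes
    print(f"iter {it}: step time = {t.max():8.1f} s, shares = {share.round(3)}")
```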

10.
The presence of parametric uncertainties decreases performance when controlling dynamic systems such as the DC motor. In this work, an adaptive control strategy is proposed to deal with parametric uncertainties in the speed-regulation task of the DC motor. This adaptive strategy is based on a bio-inspired optimization approach, where an optimization problem is stated and solved online using a modification of the differential evolution optimizer. This modification includes a mechanism that promotes exploration in the early generations and exploits the power of the DE/best class in the later generations of the algorithm to find suitable optimal control parameters for regulating the DC motor speed efficiently. A comparative statistical analysis with other bio-inspired adaptive strategies and with linear, adaptive and robust controllers shows the effectiveness of the proposed bio-inspired adaptive control approach in both simulation and experiments.
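A hedged, offline analogue of the idea: SciPy's stock differential evolution (rather than the paper's modified online DE) tunes PID gains for a first-order DC-motor speed model; the motor constants, voltage limit and ITAE cost are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def itae_cost(gains, K=2.0, tau=0.5, dt=0.01, T=3.0, w_ref=40.0):
    """Simulate speed regulation with a PID controller and return the ITAE cost."""
    kp, ki, kd = gains
    w, integ, prev_err, cost = 0.0, 0.0, w_ref, 0.0
    for step in range(int(T / dt)):
        err = w_ref - w
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = np.clip(kp * err + ki * integ + kd * deriv, 0.0, 24.0)  # supply-voltage limit
        w += dt * (K * u - w) / tau          # first-order speed dynamics
        cost += (step * dt) * abs(err) * dt  # time-weighted absolute error (ITAE)
        prev_err = err
    return cost

result = differential_evolution(itae_cost, bounds=[(0, 5), (0, 10), (0, 1)],
                                seed=0, maxiter=50, polish=False)
print("PID gains (kp, ki, kd):", result.x.round(3), " ITAE:", round(result.fun, 3))
```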

11.
Electrical impedance tomography (EIT) determines the resistivity distribution inside an inhomogeneous object by means of voltage and/or current measurements conducted at the object boundary. A genetic algorithm (GA) approach is proposed for the solution of the EIT inverse problem, in particular for the reconstruction of “static” images. Results of numerical EIT experiments solved by the GA approach (GA-EIT in the following) are presented and compared with those obtained by more established inversion methods, such as the modified Newton-Raphson and the double-constraint methods. The GA approach is relatively expensive in terms of computing time and resources, which at present limits the applicability of GA-EIT to static imaging. However, the continuous and rapid growth of computing resources makes the development of real-time dynamic imaging applications based on GAs conceivable in the near future.

12.
The “shadow costs” capability of linear programming (LP) provides office automation (OA) managers with an instrument for determining optimal future allocations of money to resources such as computer workstations, software, training, awareness sessions, and user support. The approach can be implemented at any stage of OA development and, based on current experience, predicts how a restricted future budget should be allocated among the listed resources in order to achieve the greatest return.
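A minimal sketch of the shadow-cost reading in Python (the returns, caps and budget are invented numbers, not the article's data): maximise the return on OA spending under a budget, then report the dual value of the budget constraint, i.e. the extra return one more dollar of budget would buy.

```python
import numpy as np
from scipy.optimize import linprog

resources = ["workstations", "software", "training", "awareness", "user support"]
ret = np.array([1.8, 1.5, 2.2, 1.2, 1.6])   # assumed return per dollar spent
cap = np.array([60., 40., 30., 20., 50.])   # assumed maximum useful spend ($k)
budget = 120.0

# Maximise ret.x  ->  minimise -ret.x, subject to total spend <= budget and 0 <= x <= cap.
res = linprog(-ret, A_ub=np.ones((1, 5)), b_ub=[budget],
              bounds=list(zip(np.zeros(5), cap)), method="highs")
print("allocation ($k):", dict(zip(resources, res.x.round(1))))

# The marginal on the budget row is the shadow cost; the sign is flipped because
# linprog minimised the negated objective.
print("budget shadow price:", round(-res.ineqlin.marginals[0], 3))
```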

13.
14.
The exponential growth in users' demands to access various resources while mobile has led to the popularity of the Vehicular Mobile Cloud. Vehicular users may access various resources on the road from the cloud, which acts as a service provider for them. Most existing proposals on the vehicular cloud use unicast sender-based data forwarding, which degrades overall performance with respect to metrics such as packet delivery ratio, end-to-end delay, and reliable data transmission. Most applications for the vehicular cloud have tight upper bounds with respect to reliable transmission. In view of the above, in this paper we formulate the problem of reliable data forwarding as a Bayesian Coalition Game (BCG) using Learning Automata concepts. Learning Automata (LA), stationed on the vehicles, are taken as the players in the game. To take adaptive decisions about reliable data forwarding, each player observes the moves of the other players, and a coalition game is formulated among the players for this purpose. For each action taken by a player in the game, it receives a reward or a penalty from the environment and updates its action probability vector accordingly. An adaptive Learning Automata based Contention Aware Data Forwarding (LACADF) scheme is also proposed. The proposed scheme is evaluated in different network scenarios with respect to parameters such as message overhead, throughput, and delay, by varying the density and mobility of the vehicles. The results obtained show that the proposed scheme outperforms conventional schemes with respect to the above metrics.
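A toy sketch of the learning-automaton core such schemes rely on: a linear reward-penalty update of the probability of choosing each forwarding neighbour, driven by delivery feedback. The fixed per-neighbour delivery probabilities below are a synthetic stand-in for the vehicular environment, and the coalition-game layer is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
delivery_prob = np.array([0.4, 0.85, 0.6])   # unknown reliability of three neighbours
p = np.full(3, 1 / 3)                        # action probability vector of the automaton
a, b = 0.05, 0.01                            # reward and penalty step sizes

for _ in range(2000):
    k = rng.choice(3, p=p)                   # pick a forwarding neighbour
    delivered = rng.random() < delivery_prob[k]
    if delivered:                            # reward: shift probability toward action k
        p = p + a * (np.eye(3)[k] - p)
    else:                                    # penalty: shift probability away from action k
        pk = p[k]
        p = b / 2 + (1 - b) * p              # the other two actions share the boost
        p[k] = (1 - b) * pk

print("action probabilities:", p.round(3))   # should favour the most reliable neighbour
```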

15.
《Information & Management》2001,39(3):191-210
Modeling and measurement issues have been considered the heart of the information technology (IT) productivity paradox problem. By collecting data from seven mortgage firms, this research attempts to shed light on the causal relationships and complementarity properties among IT and performance variables. The result is a multi-level business value model that connects the use of IT to a firm's profit. It is concluded that although there is a causal relationship between IT and profit, this relationship is indirect and complex. Due to the complementary nature of the relationships, this complexity is not reducible: all complementary factors must be in favorable conditions for IT investments to yield a positive return.

16.
With the continual emergence of new types of services such as video conferencing, online gaming, and interactive applications, how to use limited network resources for effective traffic control so as to guarantee the Quality of Service (QoS) of these services has become a pressing problem. Most existing QoS traffic control methods suffer from low utilization of network resources, poor reliability, coarse granularity, difficult implementation, and poor scalability. The separation of the control plane from the data plane proposed by Software Defined Networking (SDN) offers a new way to address such problems. This paper proposes a QoS traffic control method based on OpenFlow that uses adaptive multi-constraint QoS routing to improve the flexibility and reliability of QoS control, achieving efficient utilization of network resources and fine-grained control of service flows. Finally, we validate the effectiveness of the method in an Open vSwitch environment.
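Since the paper's controller code is not shown, here is a minimal sketch of one ingredient it needs, multi-constraint QoS path selection, using networkx: candidate paths are enumerated in order of total delay and the first one whose bottleneck bandwidth meets the flow's requirement is chosen. The topology and constraint values are invented, and installing the chosen path as OpenFlow rules on the switches is outside this sketch.

```python
import networkx as nx

G = nx.Graph()
G.add_edge("s1", "s2", delay=2, bw=100)   # delay in ms, bandwidth in Mb/s
G.add_edge("s2", "s4", delay=2, bw=20)
G.add_edge("s1", "s3", delay=5, bw=80)
G.add_edge("s3", "s4", delay=4, bw=80)

def qos_path(g, src, dst, min_bw):
    """Lowest-delay path whose bottleneck bandwidth is at least min_bw, or None."""
    for path in nx.shortest_simple_paths(g, src, dst, weight="delay"):
        bottleneck = min(g[u][v]["bw"] for u, v in zip(path, path[1:]))
        if bottleneck >= min_bw:
            return path
    return None

print(qos_path(G, "s1", "s4", min_bw=50))   # avoids the low-bandwidth s2-s4 link
```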

17.
Recent service perspectives (represented by service–dominant logic, service logic and service science; henceforth, service-logics) provide a mindset for understanding value co-creation as a mutual service process in which firms and customers integrate their resources. The idea that customer and firm resources should be jointly considered to properly explain perceived value is appealing, particularly in interactional settings such as e-commerce. However, we conduct a literature review and show that cross-sectional empirical e-commerce research intended to explain customer value perceptions continues to rely heavily on a unilateral approach (firm resources), which could be misleading. Subsequently, we identify possible barriers for considering service-logics in e-commerce research, which include the lack of a clear definition and classification of resources and an integrated set of valid and reliable measures of resources. We then take a step forward towards overcoming these barriers by providing a summary, a synthesis and new ideas or, at least, a new emphasis on the implications of existing ideas. The new idea/emphasis is that cross-sectional empirical e-commerce research should jointly consider customer resources and firm resources. No prior work has stressed this proposition. We provide a synthesis by re-organizing scarce and scattered service-logics-oriented existing literature on resources to offer a definition and a comprehensive framework for the classification of resources. Finally, we provide a summary by putting together valid and reliable measures of firm resources and customer resources that have been sparsely considered in the 69 studies selected. These measures could be used by researchers in order to model and test value co-creation processes in e-commerce B2C contexts.

18.
In developing an optimal operation schedule for dams and reservoirs, reservoir simulation is one of the critical steps that must be taken into consideration. For reservoir optimization models to be reliable and flexible, the underlying simulations must be very accurate. However, a major problem with such simulation is the nonlinear relationships that exist among some reservoir parameters. Some conventional methods use a linear approach to solve such problems, thereby producing less accurate simulations, especially at extreme values, and this greatly influences the efficiency of the model. One method that has been identified as a possible replacement for ANN and other common regression models currently used for most analyses involving nonlinear cases in hydrology and water-resources-related problems is the adaptive neuro-fuzzy inference system (ANFIS). This method, together with two different ANN approaches, namely the feedforward back-propagation neural network and the radial basis function neural network, was adopted in the current study to simulate the relationships among elevation, surface area and storage capacity for the Langat reservoir system, Malaysia. In addition, an autoregression (AR) model was developed for comparison with the proposed ANFIS and ANN models. The major finding of this study is that the proposed ANFIS model provides a more accurate simulation than the ANN and classical AR models. The results showed that the simulations obtained with ANFIS were indeed more accurate than those of ANN and AR; it is thus concluded that using ANFIS to simulate reservoir behaviour will give better predictions than new or existing regression models.

19.
This paper introduces a new hybrid nature-inspired algorithmic approach based on particle swarm optimization for solving one of the most popular supply chain management problems, the vehicle routing problem, which is considered one of the most well-studied problems in operations research. The proposed algorithm, hybrid particle swarm optimization (HybPSO), combines a particle swarm optimization (PSO) algorithm, the multiple phase neighborhood search–greedy randomized adaptive search procedure (MPNS–GRASP) algorithm, the expanding neighborhood search (ENS) strategy and a path relinking (PR) strategy. The algorithm is suitable for solving very large-scale vehicle routing problems, as well as other, more difficult combinatorial optimization problems, within a short computational time. It is tested on a set of benchmark instances and produces very satisfactory results. The algorithm ranks fifth among the 39 best-known and most effective algorithms in the literature and first among all nature-inspired methods that have been applied to this set of instances.
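The hybrid itself is not reproduced here; as background, a minimal sketch of the canonical PSO velocity/position update that HybPSO builds on, run on a simple continuous test function (the VRP encoding, MPNS–GRASP, ENS and path-relinking components are omitted):

```python
import numpy as np

rng = np.random.default_rng(5)
def objective(x):                    # stand-in objective; in HybPSO this is a VRP cost
    return np.sum(x ** 2, axis=-1)

n_particles, dim, w, c1, c2 = 30, 5, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
    x = x + v                                                    # position update
    val = objective(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best objective value found:", float(objective(gbest)))
```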

20.
王秋月  曹巍  史少晨 《计算机应用》2015,35(9):2553-2559
Federated search is an important technique for obtaining information from the large-scale deep Web. Given a user query, a major problem a federated search system must solve is source selection, i.e. selecting from a huge number of data sources the subset most likely to return relevant results. Most existing source-selection algorithms are based on keyword matching between the query and a set of sample documents from each source, and they usually cannot cope well with the information loss caused by small sample-document sets. To address this problem, a source-selection method based on the Latent Dirichlet Allocation (LDA) topic model is proposed. First, the LDA topic model is used to obtain the topic probability distributions of the data sources and the query; then all data sources are ranked by comparing the closeness of the two topic distributions. Mapping data sources and queries into a low-dimensional topic space mitigates the information loss caused by the sparsity of the high-dimensional term space. Experiments were carried out on the TREC FedWeb 2013 and 2014 Track test sets and the results were compared with those of the other participating methods. On the FedWeb 2013 test set the proposed method improves on the best participating result by 24%; on the FedWeb 2014 test set it improves on traditional keyword-matching methods based on small documents and on large documents by 22% and 43%, respectively. In addition, using document snippets instead of full documents substantially improves the efficiency of the system, further increasing the practicality and feasibility of the method.
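A minimal sketch of the selection step with scikit-learn, using toy sample documents in place of real FedWeb sources: LDA is fitted on the per-source sample text, topic distributions are inferred for the sources and the query, and sources are ranked by the similarity of their distributions to the query's.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-ins for the concatenated sample documents of three data sources.
sources = {
    "medline":  "gene protein cancer clinical trial therapy patient disease",
    "arxiv_cs": "algorithm neural network learning graph complexity bound proof",
    "imdb":     "movie actor director film award scene plot review",
}
query = "deep learning algorithm for graph data"

vec = CountVectorizer()
X = vec.fit_transform(list(sources.values()))
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

src_topics = lda.transform(X)                        # topic distribution per source
q_topics = lda.transform(vec.transform([query]))[0]  # topic distribution of the query

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for name, dist in sorted(zip(sources, src_topics),
                         key=lambda item: -cosine(item[1], q_topics)):
    print(name, round(cosine(dist, q_topics), 3))
```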
