Similar Articles
1.
Efficiency frontier analysis has been an important approach to evaluating firm performance in the private and public sectors. Many efficiency frontier analysis methods have been reported in the literature, but the assumptions behind each are restrictive, and each methodology has major limitations alongside its strengths. This study proposes two non-parametric efficiency frontier analysis sub-algorithms, based on (1) the Artificial Neural Network (ANN) technique and (2) ANN combined with Fuzzy C-Means, for measuring efficiency as a complementary tool to the techniques commonly used in previous efficiency studies. A normal probability plot is used to detect outliers and to select between the two methods. The proposed computational algorithms find a stochastic frontier from a set of input-output observations and do not require explicit assumptions about the functional structure of the stochastic frontier. For calculating efficiency scores, the algorithms use an approach similar to econometric methods. Moreover, the effect of a decision-making unit's (DMU's) returns to scale on its efficiency is included, and the unit used for the correction is selected according to its scale (under the constant-returns-to-scale assumption). In the second algorithm, the Fuzzy C-Means method is used to cluster DMUs and thereby increase their homogeneity. Two examples using real data are presented for illustration: the first, on the power generation sector, shows the superiority of Algorithm 2, while the second, on the auto industries of various developed countries, shows the superiority of Algorithm 1. Overall, we find that the proposed integrated algorithm based on ANN, Fuzzy C-Means and normalization provides more robust results and identifies more efficient units than conventional methods, since better performance patterns are explored.
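The Fuzzy C-Means step used to cluster DMUs can be illustrated with a minimal NumPy sketch (not the authors' implementation; the cluster count, fuzzifier `m` and tolerance below are illustrative choices):

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-6, seed=0):
    """Plain Fuzzy C-Means: returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # each row sums to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distance of every point to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                    # guard against zero distance
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))   # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

A DMU would then be assigned to the cluster with the highest membership (`U.argmax(axis=1)`), and the frontier estimated within each cluster.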

2.
Many efficiency frontier analysis methods have been reported in the literature, but each methodology has major limitations alongside its strengths. This study proposes a metaheuristic approach based on the adaptive neural network (ANN) technique, fuzzy C-means and numerical taxonomy (NT) for measuring efficiency, as a complementary tool to the techniques commonly used in previous efficiency studies. A homogeneity test, performed with NT, determines whether the DMUs are homogeneous. The proposed computational methods find a stochastic frontier from a set of input-output observations and do not require explicit assumptions about the functional structure of the stochastic frontier. For calculating efficiency scores, the algorithm uses an approach similar to econometric methods. Moreover, the effect of a decision-making unit's (DMU's) returns to scale on its efficiency is included, and the unit used for the correction is selected according to its scale (under the constant-returns-to-scale assumption). In the non-homogeneous case, the fuzzy C-means method is used to cluster DMUs and thereby increase their homogeneity. Two examples using real data are presented for illustration: the homogeneity test is positive in the first example, which deals with power generation, and negative in the second, which deals with the auto industries of various developed countries. Overall, we find that the proposed integrated algorithm based on ANN, fuzzy C-means and numerical taxonomy provides more robust results and identifies more efficient units than conventional methods, since better performance patterns are explored.

3.
Developing a decision support system (DSS) can overcome the issues associated with personnel attributes and specifications. Personnel specifications have the greatest impact on total efficiency, and identifying critical personnel attributes can enhance it. This study presents an intelligent integrated DSS for forecasting and optimizing complex personnel efficiency. The DSS assesses the impact of personnel attributes on efficiency through data envelopment analysis (DEA), an artificial neural network (ANN), rough set theory (RST), and the K-Means clustering algorithm. DEA plays two roles in this study: it provides data to the ANN, and it selects the best reduct based on the ANN results. A reduct is a minimum subset of features that completely discriminates all objects in a data set; reducts are computed by RST. The ANN also plays two roles in the integrated algorithm: its results are the basis for selecting the best reduct, and it is used to forecast total efficiency. Finally, the K-Means algorithm is used to develop the DSS. A procedure is proposed to build the DSS with the stated tools and a completed rule base. The DSS can help managers forecast and optimize efficiency using the selected attributes and the grouped inferred efficiency, and it is a useful tool for careful forecasting and planning. The proposed DSS is applied to an actual banking system, and its advantages are discussed.

4.
Two competing approaches to the measurement of efficiency are the stochastic frontier model and data envelopment analysis (DEA). Previous research has established that both models, applied to cross-sectional data, are adversely affected by measurement error. While the cross-sectional stochastic frontier model does not effectively handle statistical noise, panel data models do, because additional information from multiple time periods is incorporated into the estimation. A panel data DEA model that uses averaged data has been shown to smooth out measurement error effectively. In this paper, we compare the panel data models using simulated data.
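For reference, the DEA side of this comparison reduces to one linear program per DMU. Below is a minimal input-oriented CCR envelopment sketch, assuming SciPy is available (the panel variant discussed above would simply average each DMU's data over time periods before solving the same kind of program):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n, m) observed inputs, Y: (n, s) observed outputs.
    Solves: min theta  s.t.  X^T lam <= theta * x_o,  Y^T lam >= y_o,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # minimise theta; rest are lambdas
    # input constraints: -theta * x_o + X^T lam <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # output constraints: -Y^T lam <= -y_o
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.fun                               # the optimal theta
```

An efficiency of 1 means the DMU lies on the frontier; smaller values indicate the proportional input reduction needed to reach it.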

5.
Personnel specifications have the greatest impact on total efficiency; they can help us design the work environment and enhance total efficiency. Determining critical personnel attributes is a useful way to overcome the complications associated with multiple inputs and outputs. The proposed algorithm assesses the impact of personnel attributes on total efficiency through Data Envelopment Analysis (DEA), an Artificial Neural Network (ANN) and Rough Set Theory (RST). DEA plays two roles in the proposed integrated algorithm: it provides data to the ANN, and it selects the best reduct based on the ANN results. A reduct is a minimum subset of attributes that completely discriminates all objects in a data set; reducts are computed by RST. The ANN also plays two roles in the integrated algorithm: its results are the basis for selecting the best reduct, and it is used to forecast total efficiency. The proposed integrated approach is applied to an actual banking system, and its advantages are discussed.
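The reduct notion can be made concrete with a brute-force search over attribute subsets (illustrative only; practical RST implementations use discernibility matrices or heuristics rather than exhaustive search):

```python
from itertools import combinations

def is_consistent(rows, decisions, attrs):
    """True if the attribute subset discerns every pair of objects
    that carry different decision values."""
    seen = {}
    for row, d in zip(rows, decisions):
        key = tuple(row[a] for a in attrs)
        if key in seen and seen[key] != d:
            return False                       # two objects collide with different decisions
        seen[key] = d
    return True

def find_reduct(rows, decisions):
    """Smallest attribute subset that preserves consistency (brute force)."""
    n_attr = len(rows[0])
    for k in range(1, n_attr + 1):
        for attrs in combinations(range(n_attr), k):
            if is_consistent(rows, decisions, attrs):
                return attrs
    return tuple(range(n_attr))
```

In the algorithm above, candidate reducts found this way would then be ranked by the ANN's forecasting accuracy on each reduced attribute set.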

6.
This paper presents a flexible algorithm based on artificial neural networks (ANNs), genetic algorithms (GAs) and multivariate analysis for the performance assessment and optimization of complex production units (CPUs) with respect to machinery productivity indicators (MPIs). The multivariate techniques include data envelopment analysis (DEA), principal component analysis (PCA) and numerical taxonomy (NT). Two case studies show the applicability of the proposed approach. In the first case, the machinery productivity indicators are categorized into four standard classes: availability, machinery stoppage, random failure, and value added and production value. In the second case, the productivity of production units is evaluated in terms of health, safety, environment and ergonomics indicators. The flexible algorithm can handle both linearity and complexity in data sets, with ANN and GA applied to cover the nonlinearity and complexity of CPUs. The results are validated and verified by the internal mechanism of the algorithm. The algorithm is applied to a large set of production units to show its superiority and applicability over conventional approaches. Results show that, for non-linear data sets, the ANN outperforms the GA and conventional approaches. The flexible algorithm of this study may easily be extended to other units for the assessment and optimization of CPUs with respect to machinery indicators.

7.
This paper devises a genuine Knowledge Management (KM) performance measurement model in a stochastic setting based on Data Envelopment Analysis (DEA), Monte Carlo simulation and a Genetic Algorithm (GA). The proposed model evaluates KM using a set of proxy measures correlated with the major KM processes. The Data Collection Budget Allocation (DCBA) that maximizes model accuracy is determined using the GA. Additional data are generated and analyzed using a Monte-Carlo-enhanced DEA model to obtain the overall KM efficiency and the efficiency scores of the KM processes. The model is applied to evaluate KM performance in higher educational institutions. It is found that the GA greatly improves the accuracy of the model. Finally, compared with a conventional deterministic DEA model, the results from the proposed model are more useful for managers in determining future strategies to improve KM.

8.
In this paper, a novel multi-objective model is proposed for portfolio selection. The proposed model incorporates DEA cross-efficiency into the Markowitz mean-variance model and considers the return, risk and efficiency of the portfolio. To capture uncertainty, asset returns are modeled as trapezoidal fuzzy numbers. Because of the computational complexity of the proposed model, the second version of the non-dominated sorting genetic algorithm (NSGA-II) is applied. To illustrate its performance, the model is implemented for 52 firms listed on the stock exchange of Iran and the results are analyzed. The results show that the proposed model compares favorably with the Markowitz and DEA models because it considers return, risk and efficiency simultaneously.
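One common way to turn trapezoidal fuzzy asset returns into a crisp portfolio objective is the credibilistic expected value (a + b + c + d) / 4 of a trapezoidal number (a, b, c, d); whether the paper uses this exact defuzzification operator is an assumption, so the sketch below is illustrative only:

```python
def trapezoidal_expected(a, b, c, d):
    """Credibilistic expected value of a trapezoidal fuzzy number (a <= b <= c <= d).
    This is one standard defuzzification; the paper may use a different operator."""
    return (a + b + c + d) / 4.0

def portfolio_return(weights, fuzzy_returns):
    """Crisp expected return of a portfolio whose asset returns are
    trapezoidal fuzzy numbers given as (a, b, c, d) tuples."""
    return sum(w * trapezoidal_expected(*r) for w, r in zip(weights, fuzzy_returns))
```

The resulting crisp return would then enter the mean-variance/cross-efficiency objectives optimized by NSGA-II.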

9.
This paper addresses the range image registration problem for views with low overlap that may include substantial noise. The current state of the art in range image registration is best represented by the well-known iterative closest point (ICP) algorithm and its numerous variants. Although effective in many domains, this method suffers from two key limitations: it requires prealignment of the range surfaces to a reasonable starting point, and it is not robust to outliers arising from noise or low surface overlap. This paper proposes a new approach that avoids these problems through two key, novel contributions: a hybrid genetic algorithm (GA) technique, including hill climbing and parallel migration, combined with a new, robust evaluation metric based on surface interpenetration. Until now, interpenetration had been evaluated only qualitatively; we define the first quantitative measure for it. Because they search in a space of transformations, GAs can register surfaces even when overlap between them is low, and without prealignment. The novel GA search algorithm we present offers much faster convergence than prior GA methods, while the new robust evaluation metric ensures more precise alignments, even in the presence of significant noise, than mean squared error or other well-known robust cost functions. Thorough experimental results show the improvements realized by these two contributions.
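For context, the point-to-point ICP baseline that this work improves on can be sketched in a few lines of NumPy (brute-force closest-point matching plus a Kabsch/SVD rigid-alignment step; real implementations use k-d trees plus the prealignment and robustness refinements the paper addresses):

```python
import numpy as np

def icp(src, dst, iters=30):
    """Basic point-to-point ICP: returns (R, t) aligning src onto dst."""
    dim = src.shape[1]
    R, t = np.eye(dim), np.zeros(dim)
    cur = src.copy()
    for _ in range(iters):
        # brute-force closest points in dst for each current source point
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        match = dst[d.argmin(axis=1)]
        # Kabsch: best rigid transform taking cur onto match
        mu_s, mu_m = cur.mean(0), match.mean(0)
        H = (cur - mu_s).T @ (match - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.eye(dim)
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_m - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step           # accumulate transform
    return R, t
```

The GA approach in the paper searches the transformation space directly instead, which is what removes the dependence on a good starting alignment.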

10.
As a predictive application of data envelopment analysis (DEA), technology forecasting using DEA (TFDEA) measures the rate of frontier shift, from which the arrival of future technologies can be estimated. However, it is well known that DEA, and therefore TFDEA, may suffer from infeasible super-efficiency, especially under variable returns to scale. This study develops an extended TFDEA model based on the modified super-efficiency model proposed in the literature, which has the benefit of yielding radial super-efficiency scores equivalent to those of the original super-efficiency model whenever the latter is feasible. The previously published application to liquid crystal displays (LCDs) is revisited to illustrate the new model. The results show that the proposed approach makes reasonable forecasts for formerly infeasible targets as well as consistent forecasts for feasible targets.

11.
Data envelopment analysis (DEA) uses extreme observations to identify superior performance, making it vulnerable to outliers. This paper develops a unified model to identify both efficient and inefficient outliers in DEA. Finding both types is important because many analyses performed after measuring efficiency depend on the entire distribution of efficiency estimates; outliers distinguished by poor performance can therefore significantly alter the results. Besides allowing the identification of outliers, the method described is consistent with a relaxed set of DEA axioms. Several examples demonstrate the need to identify both efficient and inefficient outliers and the effectiveness of the proposed method. Applications of the model reveal that observations with low efficiency estimates are not necessarily outliers. In addition, a strategy to accelerate the computation is proposed that can also be applied to influential-observation detection.

12.
This study proposes an alternative to the conventional empirical analysis approach for evaluating the relative efficiency of distinct combinations of algorithmic operators and/or parameter values of genetic algorithms (GAs) for solving the pickup and delivery vehicle routing problem with soft time windows (PDVRPSTW). Our approach treats each combination as a decision-making unit (DMU) and adopts data envelopment analysis (DEA) to determine the relative and cross-efficiencies of each combination of GA operators and parameter values. To demonstrate the applicability and advantages of this approach, we implemented a number of combinations of the GA's three main algorithmic operators, namely selection, crossover and mutation, and employed DEA to evaluate and rank their relative efficiencies. The numerical results show that DEA is well suited to determining efficient combinations of GA operators; among the combinations considered, those using tournament selection and simple crossover are generally more efficient. The proposed approach can be adopted to evaluate the relative efficiency of other meta-heuristics, so it also contributes to algorithm development and evaluation for combinatorial optimization problems from the operational research perspective.

13.
Injection molding is an ideal manufacturing process for producing high volumes of products from both thermoplastic and thermosetting materials. Nevertheless, in some cases this process becomes a bottleneck that slows the production rate, so layout optimization plays a crucial role in increasing the efficiency of the production line. In this regard, a novel computer simulation-stochastic data envelopment analysis (CS-SDEA) algorithm is proposed in this paper for a single-row job-shop layout problem in an injection molding process. First, the system is modeled with discrete-event simulation, a powerful tool for analyzing complex stochastic systems. Then, because of the lack of information about some operational parameters, uncertainty theory is incorporated into the simulation model. Finally, an output-oriented stochastic DEA model is used to rank the outputs of the simulation model. The proposed CS-SDEA algorithm can model and optimize non-linear, stochastic and uncertain injection-process problems. The solution quality is illustrated by an actual case study in a refrigerator manufacturing company.

14.
In financial time series analysis, the unit root test is one of the most important research issues. This paper proposes a new, simple and efficient stochastic simulation algorithm for computing the Bayes factor to detect unit roots in stochastic volatility models. The proposed algorithm is based on a classical thermodynamic integration technique named path sampling. Simulation studies show that the test procedure is efficient for moderate sample sizes. Finally, the performance of the proposed approach is investigated in a Monte Carlo simulation study and illustrated with a time series of S&P 500 returns.
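Path sampling estimates a log ratio of normalizing constants as an integral, over a bridging path q_t = q0^(1-t) q1^t, of the expected log-density difference under q_t. The following self-contained toy sketch uses two unnormalized Gaussians for which the true answer is log 2; the paper applies the same idea to stochastic volatility likelihoods, not to this toy model:

```python
import math
import random

def path_sampling_log_bf(n_grid=20, n_samples=4000, seed=0):
    """Estimate log(Z1/Z0) for q0 = exp(-x^2/2) and q1 = exp(-x^2/8)
    by thermodynamic integration along q_t = q0^(1-t) q1^t.
    True value: log(sqrt(8*pi)/sqrt(2*pi)) = log 2."""
    rng = random.Random(seed)
    ts = [i / n_grid for i in range(n_grid + 1)]
    means = []
    for t in ts:
        # q_t is still Gaussian: precision tau = (1-t)*1 + t*(1/4)
        tau = (1 - t) * 1.0 + t * 0.25
        sd = 1.0 / math.sqrt(tau)
        # E_t[log q1 - log q0] = E_t[3 x^2 / 8], estimated by Monte Carlo
        u = sum(3.0 * rng.gauss(0.0, sd) ** 2 / 8.0
                for _ in range(n_samples)) / n_samples
        means.append(u)
    # trapezoid rule over t in [0, 1]
    return sum((means[i] + means[i + 1]) / 2.0 for i in range(n_grid)) / n_grid
```

In a real Bayes factor computation, the exact Gaussian sampling at each t would be replaced by MCMC draws from the tempered posterior.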

15.
Because of seasonal and monthly changes in electricity consumption and the difficulty of modeling it with conventional methods, a novel algorithm is proposed in this paper. This study presents an approach that uses an Artificial Neural Network (ANN), Principal Component Analysis (PCA), Data Envelopment Analysis (DEA) and ANOVA to estimate and predict electricity demand under seasonal and monthly changes in consumption. Pre-processing and post-processing techniques from the data mining field are used, and their impact on ANN performance is analyzed; a 680 ANN-MLP is constructed for this purpose. DEA is used to compare the constructed ANN models and the performance of the ANN learning algorithms, with the average, minimum, maximum and standard deviation of the mean absolute percentage error (MAPE) of each constructed ANN serving as the DEA inputs. DEA thus helps the user select an appropriate ANN model as an acceptable forecasting tool; in other words, various error measures are used to find a robust ANN learning algorithm. Moreover, PCA is used for input selection, and a preferred time series model is chosen from the linear (ARIMA) and nonlinear candidates. After the preferred ARIMA model is selected, the McLeod-Li test is applied to check the nonlinearity condition. If the nonlinearity condition is satisfied, the preferred nonlinear model is selected, compared with the preferred ARIMA model, and the best time series model is chosen. A new algorithm is then developed for time series estimation; in each case an ANN or a conventional time series model is selected for estimation and prediction. To show the applicability and superiority of the proposed ANN-PCA-DEA-ANOVA algorithm, data on Iranian electricity consumption from April 1992 to February 2004 are used. The results show that the proposed algorithm provides an accurate solution to the problem of estimating electricity consumption.

16.
The purpose of this paper is to estimate the efficiencies of, and to discuss the managerial implications for, 12 international airports in the Asia-Pacific region based on data from 1998-2006. We applied data envelopment analysis (DEA) and stochastic frontier analysis (SFA) to compute efficiency estimates, and the empirical results are discussed from both management and mathematical perspectives. From the management perspective, we suggest that airports should focus more on investment than on human resources; we also found that the inefficiency effects associated with airport production functions increased over the investigated period. From the mathematical perspective, we determined that deviations from the efficient frontiers of the production functions are largely attributable to technical inefficiency. Finally, the empirical results imply that using discretion to adjust the scale of the production function appears to improve efficiency. The main contribution of the paper is to show how DEA and SFA can be used together to complement each other.

17.
Combining genetic programming with a genetic algorithm makes it possible to determine both the structure and the parameters of a fermentation process model, yielding a concise and intuitive model expression. Using existing mechanistic knowledge of the fermentation process, a number of models are constructed to replace poor individuals in the initial population, improving population quality so that the good building blocks they contain can be exploited by the optimization process and the search time is shortened. In addition, to address the high likelihood that measurements obtained in industrial processes are contaminated by large noise or even outliers, a robust genetic programming method based on M-estimation is proposed. Experiments show that the models obtained by the improved genetic programming are comparatively simple and generalize well.
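An M-estimation fitness of the kind described can be sketched with the Huber loss, which grows only linearly for large residuals and so down-weights outliers relative to the usual squared-error fitness (the threshold `delta` and the function interface below are illustrative, not the paper's):

```python
def huber_loss(residual, delta=1.0):
    """Huber rho-function: quadratic near zero, linear in the tails."""
    r = abs(residual)
    if r <= delta:
        return 0.5 * r * r
    return delta * (r - 0.5 * delta)

def robust_fitness(model, xs, ys, delta=1.0):
    """M-estimation fitness for a candidate GP model: sum of Huber losses.
    Large outlier residuals contribute linearly, not quadratically."""
    return sum(huber_loss(model(x) - y, delta) for x, y in zip(xs, ys))
```

During evolution, individuals would be ranked by `robust_fitness` instead of the sum of squared errors, so a single corrupted measurement cannot dominate selection.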

18.
This paper presents a variational algorithm for feature-preserving mesh denoising. At the heart of the algorithm is a novel variational model composed of three components, fidelity, regularization and fairness, each designed with an intuitive role. In particular, the fidelity is formulated as an L1 data term, which makes the regularization process less dependent on the exact values of outliers and noise. The regularization is formulated as the total edge-length-weighted absolute supplementary angle of the dihedral angle, making the model capable of reconstructing meshes with sharp features. In addition, an augmented Lagrangian method is provided to solve the proposed variational model efficiently. Compared to the prior art, the new algorithm has crucial advantages in handling large-scale noise, noise along random directions, and different kinds of noise, including random impulsive noise, even in the presence of sharp features. Both visual and quantitative evaluations demonstrate the superiority of the new algorithm.

19.
In this paper, we propose an algorithm to calculate cross-efficiency scores using the equations that form the efficient frontier in data envelopment analysis (DEA). In many standard DEA models, each decision-making unit (DMU) is evaluated using the weights most advantageous to itself; many DMUs are then rated as efficient, and these efficient DMUs are not ranked by the models. Cross-efficiency evaluation ranks DMUs by applying the advantageous weights of all DMUs. Previously, cross-efficiency scores based on different ideas were calculated by solving multiple linear or nonlinear programming problems, but such nonlinear programs are often hard to solve. Therefore, by analysing the efficient frontier, we construct an algorithm to calculate alternative cross-efficiency scores.
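The conventional LP-based cross-efficiency computation that this frontier-based algorithm offers an alternative to can be sketched as follows, assuming SciPy (one CCR multiplier LP per DMU; this naive version ignores the non-uniqueness of optimal weights, which is one motivation for alternative formulations):

```python
import numpy as np
from scipy.optimize import linprog

def cross_efficiency(X, Y):
    """Naive CCR cross-efficiency. X: (n, m) inputs, Y: (n, s) outputs.
    Returns (E, avg) where E[d, j] is DMU j's efficiency under DMU d's
    optimal weights and avg is the column mean (the cross-efficiency score)."""
    n, m = X.shape
    s = Y.shape[1]
    E = np.zeros((n, n))
    for d in range(n):
        # variables: [v (input weights), u (output weights)]
        # maximise u.y_d  subject to  v.x_d = 1  and  u.y_j - v.x_j <= 0 for all j
        c = np.concatenate([np.zeros(m), -Y[d]])          # linprog minimises
        A_ub = np.hstack([-X, Y])
        A_eq = np.concatenate([X[d], np.zeros(s)]).reshape(1, -1)
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                      A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (m + s), method="highs")
        v, u = res.x[:m], res.x[m:]
        # efficiency of every DMU under d's weights (assumes v.x_j > 0)
        E[d] = (Y @ u) / (X @ v)
    return E, E.mean(axis=0)
```

Averaging each column of `E` gives a full ranking even among DMUs that are all self-rated efficient.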

20.
One of the primary concerns in target setting for electricity distribution companies is uncertainty in the input/output data. In this paper, an interactive robust data envelopment analysis (IRDEA) model is proposed to determine input and output target values for electricity distribution companies while accounting for perturbations in the data. Target setting is implemented with uncertain data, and the decision maker (DM) can search the efficient frontier and find targets based on his or her preferences. To search the frontier, the paper combines DEA with a multi-objective linear programming method such as STEM. The proposed method can handle uncertainty in the data and find target values according to the DM's preferences. To illustrate the ability of the proposed model, a numerical example is solved, and input and output target values for some electricity distribution companies in Iran are reported. The results indicate that the IRDEA model is suitable for target setting based on the DM's preferences under data uncertainty.

