Related Articles
20 related articles found (search time: 125 ms)
1.
2.
Consider a basic “price-only” supply chain interaction in which the “players” are a manufacturer and a retailer. The manufacturer sets the wholesale price ($w/unit) of a product she supplies to a retailer, who in turn sets the retail price ($p/unit) at which he sells to consumers. The product's demand curve is a function of p. The players play one of several non-cooperative games, such as the manufacturer-Stackelberg game. How should the players set their prices w and p? Most existing studies assume information symmetry, i.e., that the cost and market parameters are known equally and perfectly to both players. In reality, the retailer's knowledge of the manufacturing cost c is often controlled by the manufacturer. This paper explicitly considers the asymmetry of knowledge about c. This approach reveals interesting and surprising deviations from earlier symmetric-c-knowledge results. Moreover, it also ameliorates some of the internal inconsistencies within the symmetric-information framework. We also show how the effect of knowledge asymmetry varies with the shape of the demand curve and with the degree of relative dominance between the players. We find that under a linear demand curve a manufacturer should overstate c, an intuitively expected result. However, under an iso-elastic demand curve she benefits herself and the entire system by understating c, which is counter-intuitive. Also, under asymmetric c-knowledge, the simultaneous-decision (or “vertical Nash”) game becomes non-viable under a linear demand curve, but quite viable and desirable under an iso-elastic demand curve.
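As a hedged illustration of the price-only setting described above (symmetric information, linear demand, hypothetical parameter values; not the paper's asymmetric-c model), the sketch below computes the manufacturer-Stackelberg equilibrium and contrasts it with the integrated channel.

```python
# Minimal sketch of the basic "price-only" interaction under a LINEAR demand
# curve q = a - b*p, with symmetric cost knowledge.  All parameter values are
# hypothetical; this is not the paper's asymmetric-information model.

def manufacturer_stackelberg(a, b, c):
    """Manufacturer leads with wholesale price w; retailer reacts with p."""
    # Retailer best response: max (p - w)(a - b p)   ->  p(w) = (a/b + w) / 2
    # Manufacturer:           max (w - c)(a - b p(w)) ->  w*  = (a/b + c) / 2
    w = (a / b + c) / 2
    p = (a / b + w) / 2
    q = a - b * p
    return w, p, q, (w - c) * q, (p - w) * q

def integrated_channel(a, b, c):
    """Single decision maker: max (p - c)(a - b p)."""
    p = (a / b + c) / 2
    q = a - b * p
    return p, q, (p - c) * q

a, b, c = 100.0, 2.0, 10.0                      # hypothetical parameters
w, p, q, pi_m, pi_r = manufacturer_stackelberg(a, b, c)
p_i, q_i, pi_i = integrated_channel(a, b, c)
print(f"Stackelberg: w={w:.2f} p={p:.2f} q={q:.2f} profits m/r = {pi_m:.2f}/{pi_r:.2f}")
print(f"Integrated : p={p_i:.2f} q={q_i:.2f} total profit = {pi_i:.2f}")
```

Running the sketch shows the familiar double-marginalization gap between the decentralized and integrated totals, which is the baseline against which the asymmetric-knowledge results above are stated.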

3.
Currently, only roughly 2% to 5% of the containers arriving at US ports are scrutinized to determine whether they pose some type of danger or contain suspicious goods. Recently, concerns have been raised about attacks that could be mounted via container cargo, with potentially devastating economic, psychological and sociological effects. This paper is concerned with developing an inspection strategy that minimizes the total cost of inspection while maintaining a user-specified detection rate for “suspicious” containers. First, a general model for describing an inspection strategy is proposed: the strategy is regarded as an (n+1)-echelon decision tree in which, at each echelon, a decision has to be taken about which sensor to use, if any. Second, based on this decision-tree model, the paper presents a minimum-cost container inspection strategy that conforms to a pre-specified detection rate, under the assumption that different sensors with different reliability and cost characteristics can be used. To generate an optimal inspection strategy, an evolutionary optimization approach known as the probabilistic solution discovery algorithm is used.
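As a hedged illustration of how a fixed inspection strategy can be scored on the two criteria above, the sketch below computes the expected cost per container and the detection rate for a simple two-sensor sequential policy; the sensor costs, error rates and prevalence are hypothetical, and the paper's search over strategies is not reproduced.

```python
# Illustrative (hypothetical) evaluation of one simple sequential policy:
# sensor 1 screens every container, containers flagged by sensor 1 go to
# sensor 2, and containers flagged by both go to a costly manual check, which
# is assumed perfect.  The paper optimizes over such strategies; this only
# evaluates a fixed one.

def evaluate_policy(prevalence, sensors, manual_cost):
    """sensors: list of (cost, true_positive_rate, false_positive_rate)."""
    p_bad, p_good = prevalence, 1.0 - prevalence
    detect = 1.0        # P(flagged so far | suspicious)
    false_alarm = 1.0   # P(flagged so far | clean)
    expected_cost = 0.0
    for cost, tpr, fpr in sensors:
        # every container still "in play" is inspected by this sensor
        expected_cost += cost * (p_bad * detect + p_good * false_alarm)
        detect *= tpr
        false_alarm *= fpr
    # containers flagged by all sensors receive the manual inspection
    expected_cost += manual_cost * (p_bad * detect + p_good * false_alarm)
    return expected_cost, detect

sensors = [(5.0, 0.95, 0.10),    # (cost $, sensitivity, false-positive rate)
           (40.0, 0.98, 0.05)]
cost, detection_rate = evaluate_policy(prevalence=0.001, sensors=sensors,
                                       manual_cost=600.0)
print(f"expected cost per container = ${cost:.2f}, detection rate = {detection_rate:.3f}")
```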

4.
MODELS FOR IMPROVED EFFECTIVENESS BASED ON DEA EFFICIENCY RESULTS   (total citations: 8; self-citations: 0; by others: 8)
Following the characterization via Data Envelopment Analysis (DEA) of managerial units as efficient or inefficient, management will wish to increase profitability and/or control costs while becoming (or remaining) technically efficient in the DEA sense. This paper presents three families of models for achieving this and describes the managerial situations in which they are useful. The first addresses the management of an existing Decision Making Unit (DMU), and the second attempts to identify the desired “location” for a new DMU. The third addresses the aggregate of all DMUs, reallocating scarce resources among them for maximum overall organizational profitability and technical efficiency.
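The paper builds on DEA efficiency characterizations. As background only (not the paper's three model families), the sketch below solves the standard input-oriented CCR efficiency LP for hypothetical DMU data using scipy.

```python
# Background sketch: the standard input-oriented CCR (DEA) efficiency score
# for each DMU, solved as a linear program.  The data are hypothetical; the
# paper's models build on such scores rather than reproduce this basic step.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 7.0, 8.0, 4.0],     # inputs  (rows = inputs, cols = DMUs)
              [3.0, 3.0, 1.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])    # outputs (rows = outputs, cols = DMUs)

def ccr_efficiency(o):
    """min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # decision variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[:, [o]], X]                 # X lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]         # -Y lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambdas nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```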

5.
We deal with a system whose failures depend on several parallel effects, such as the time in use L and the mileage H. Manufacturer warranties are typically described by a two-dimensional region in the (L, H)-plane, and a proper determination of the warranty limits must be based on a two-dimensional distribution of time to failure on this plane. The aim of this paper is to demonstrate the possibility of designing individual warranties for a “nontypical” customer who has a very low or very high usage rate b = H/L, and to show a simple way to calculate warranty limits by minimizing the lifetime coefficient of variation. The latter is carried out by introducing the “best” combined time scale of the form K = (1 − ε)L + εH, which provides the minimal lifetime coefficient of variation.
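As a hedged numerical companion to the combined-scale idea above, the sketch below searches a grid of ε for the mixture K = (1 − ε)L + εH that minimizes the lifetime coefficient of variation; the failure data are synthetic and the normalization of L and H before mixing is an added assumption.

```python
# Sketch of choosing the combined time scale K = (1 - eps)*L + eps*H that
# minimizes the coefficient of variation of lifetimes.  The failure data are
# synthetic and eps is searched on a simple grid.
import numpy as np

rng = np.random.default_rng(0)
n = 500
usage_rate = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=n)   # km per day, varies by customer
L = rng.weibull(2.0, size=n) * 1000.0                              # days in use at failure
H = usage_rate * L                                                 # mileage at failure (km)

def cv(x):
    return np.std(x) / np.mean(x)

eps_grid = np.linspace(0.0, 1.0, 101)
# scale L and H to comparable units before mixing, so eps is dimensionless
Ln, Hn = L / np.mean(L), H / np.mean(H)
cvs = [cv((1 - e) * Ln + e * Hn) for e in eps_grid]
best = int(np.argmin(cvs))
print(f"best eps = {eps_grid[best]:.2f}, CV = {cvs[best]:.3f} "
      f"(CV in L alone: {cv(L):.3f}, CV in H alone: {cv(H):.3f})")
```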

6.
This paper provides a comparative analysis of two sets of alternative joint lot-sizing models for the general one-vendor, many-nonidentical-purchasers case. Specifically, the basic joint economic lot size (JELS) and individually responsible and rational decision (IRRD) models, as well as versions with simultaneous setup cost and order cost reduction, are explored. Models for the latter situation are derived using classical optimization techniques. A numerical example provides the basis for comparing the models with the results of independent optimization (IO). For the basic models, the previously reported advantages of IRRD are refuted. In the simultaneous-investment case, both the vendor and the purchasers realize significant savings over IO when the JELS policy is followed; this is not true for IRRD. This suggests that when an environment of co-operation between the parties has been established, the JELS is a superior policy.
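To make the IO-versus-JELS comparison concrete, the sketch below uses a deliberately simplified single-vendor, single-purchaser cost model (lot-for-lot shipments, linear holding costs); it illustrates the general idea only, not the paper's many-purchaser models, and all parameter values are hypothetical.

```python
# Deliberately simplified single-vendor / single-purchaser illustration of why
# a joint economic lot size (JELS) can dominate independent optimization (IO).
# Assumes lot-for-lot production and the simple cost model written below; the
# paper's one-vendor, many-purchaser models are richer than this sketch.
import math

D = 4800.0              # annual demand (units) -- hypothetical
S_b, h_b = 25.0, 5.0    # purchaser order cost and holding cost per unit-year
S_v, h_v = 400.0, 4.0   # vendor setup cost and holding cost per unit-year

def buyer_cost(Q):  return D * S_b / Q + Q * h_b / 2
def vendor_cost(Q): return D * S_v / Q + Q * h_v / 2

# Independent optimization: the purchaser picks its own EOQ, the vendor follows.
Q_io = math.sqrt(2 * D * S_b / h_b)
# JELS: minimize the system-wide cost in one shot.
Q_jels = math.sqrt(2 * D * (S_b + S_v) / (h_b + h_v))

for name, Q in (("IO", Q_io), ("JELS", Q_jels)):
    print(f"{name:5s} Q={Q:7.1f}  buyer={buyer_cost(Q):8.1f}  "
          f"vendor={vendor_cost(Q):8.1f}  total={buyer_cost(Q)+vendor_cost(Q):8.1f}")
```

In this toy instance the joint lot size cuts the system-wide cost even though the purchaser's own cost rises, which is why the environment of co-operation discussed above matters.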

7.
How should a video rental chain replenish its stock of new movies over time? Any such policy consists of two key dimensions: (i) the number of copies purchased; and (ii) when to remove a movie from the front shelves and replace it with a newly released one. We first analyze this bi-variate problem for an integrated chain. For decentralized chains, we show that a (wholesale) price-only contract cannot coordinate such a chain. We then consider a price-and-revenue-sharing contract. Such a contract can achieve coordination, but the unique price and share that are needed may not provide one of the parties with its desired profit (i.e., it will violate individual rationality). This situation has been reported in the case of Blockbuster Video and has led to litigation between Blockbuster and Disney Studios. We thus propose adding a third lever: a license fee (or subsidy) associated with each new movie. Such a contract can coordinate the channel and satisfy the individual-rationality requirements. In fact, all our results hold irrespective of whether or not the rental store is allowed to sell surplus copies of movies. We are able to compare the optimal decision-variable and coordinating-lever values, as well as the optimal profits, for the “rental only” and “sales + rental” models. Our numerical examples, which utilize empirical demand data, have significant managerial implications for increasing the effectiveness of the video rental industry.

8.
To quantitatively study supply chain vulnerability under production-cost disturbances and different power structures, game-theoretic models are constructed for two types of supply chain — an online-offline dual-channel supply chain and a direct-sales supply chain — under centralized decision making, manufacturer leadership, and retailer leadership. Vulnerability is measured by the length of the interval of manufacturer production-cost disturbances over which the supply chain's optimal decisions remain unchanged: the longer the interval, the lower the vulnerability. The results show that: (1) for the online-offline dual-channel supply chain, the cost-disturbance intervals under centralized decision making and under manufacturer leadership are of equal length, i.e., the vulnerability is the same, and it is greater than the vulnerability under retailer leadership; (2) for the direct-sales supply chain, the cost-disturbance interval under manufacturer leadership is longer than that under centralized decision making, i.e., the vulnerability under manufacturer leadership is lower than under centralized decision making, while the vulnerability under retailer leadership depends on the parameter βk. Finally, numerical simulations verify the feasibility of the method.

9.
The tendency of younger drivers to be more likely than older drivers to drive smaller cars has been an important consideration in a number of prior investigations of the relation between car size and traffic safety. The purpose of the present study is to quantify this effect on a firmer basis than hitherto by fitting data from seven independent sources to a unified general model. More specifically, when the exposure measures “per unit distance of travel” or “per registered car” are used in studies of car-mass effects on traffic safety, the exposure information often does not contain the variable driver age. This work develops a general procedure for disaggregating such exposure data into three driver (or owner) age categories: A1: 16–24; A2: 25–34; and A3: 35 years and older. Data from the seven sources are fitted to the equation

f(i, m) = Hi[1 + Gi(m/900 − 1)]

where m is the car mass in kg and f(i, m) is the fraction of cars of mass m which are driven (owned) by persons in age category Ai (i = 1, 2, 3). The form of this equation permits easy comparison of 900 kg and 1800 kg cars, those particular masses having been chosen for illustrative comparisons in earlier work. The seven sets of data are used to derive overall average values of the parameters H1 and G1. The data from all seven sources show consistent effects, which are summarized in one analytical expression that is well suited for use in future studies of car-size effects, because it reflects a synthesis of much prior data and permits sensitivity analyses to be performed conveniently.
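The fitted form above is easy to evaluate; the sketch below does so at the two reference masses, 900 and 1800 kg, using hypothetical Hi and Gi values chosen only so that the three fractions sum to one (they are not the paper's estimates).

```python
# Numerical illustration of the fitted form f(i, m) = H_i * (1 + G_i*(m/900 - 1)).
# The H_i and G_i values below are hypothetical stand-ins, not the paper's
# fitted parameters; G_3 is chosen so the three fractions sum to one at every mass.
H = [0.20, 0.25, 0.55]            # fractions of 900 kg cars driven by A1, A2, A3
G = [-0.60, -0.20, None]          # mass sensitivity for A1 and A2 (hypothetical)
G[2] = -(H[0] * G[0] + H[1] * G[1]) / H[2]   # enforce sum_i H_i * G_i = 0

def f(i, m):
    return H[i] * (1.0 + G[i] * (m / 900.0 - 1.0))

for m in (900.0, 1800.0):
    fracs = [f(i, m) for i in range(3)]
    print(f"m = {m:6.0f} kg:  A1={fracs[0]:.3f}  A2={fracs[1]:.3f}  "
          f"A3={fracs[2]:.3f}  (sum = {sum(fracs):.3f})")
```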

10.
The innovation process may be divided into three main parts: the front end (FE), the new product development (NPD) process, and commercialization. Every NPD process has a FE in which products and projects are defined. However, companies tend to begin the stages of the FE without a clear definition or analysis of the process for going from Opportunity Identification to Concept Generation; as a result, the FE process is often aborted or forced to restart. Koen’s model of the FE is composed of five phases. In each phase, several tools can be used by designers/managers to improve, structure, and organize their work. However, these tools tend to be selected and used in a heuristic manner, and some tools are more effective during certain phases of the FE than others. Using tools in the FE has a cost to the company in terms of time, space needed, people involved, etc. Hence, an economic evaluation of the cost of tool usage is critical, and there is furthermore a need to characterize the tools in terms of their influence on the FE. This paper focuses on decision support for managers/designers in assessing the cost of choosing and using tools in the core front end (CFE) activities identified by Koen, namely Opportunity Identification and Opportunity Analysis. This is achieved by first analyzing the influencing factors (firm context, industry context, macro-environment) along with data collected from managers, followed by the automatic construction of fuzzy decision support models (FDSMs) of the discovered relationships. The decision support focuses on the estimated investment needed for the use of tools during the CFE. The generation of FDSMs is carried out automatically using a specialized genetic algorithm, applied to learning data obtained from five experienced managers working for five different companies. The automatically constructed FDSMs accurately reproduced the managers' estimations on the learning data sets and were very robust when validated with held-out data sets. The developed models can easily be used for quick financial assessments of tools by the person responsible for the early stage of product development within a design team. The type of assessment proposed in this paper best suits product development teams in companies that are cost-focused and where the trade-offs between what (material), who (staff), and how long (time) to involve in CFE activities can vary considerably and hence largely influence financial performance later in the NPD process.

11.
To reveal, in a two-echelon supply chain consisting of a supplier and a contractor, the mechanism by which supply disruptions arising under the coupling between the two parties affect the supplier's optimal decisions and the contractor's cost, and to analyze the shock effect of such disruptions on the contractor's cost, this study considers a supplier facing a high risk of exiting the market. With only the mean and variance of output known, Scarf's “minimax” robust decision method is applied; by specifying the supply chain system's feedback under different decision scenarios, the game process of the two-echelon supply chain is analyzed under different exit thresholds. Numerical simulation of the theoretically grounded robust model shows that when the exit threshold α is 100 and the intermediate variable lies in the range 290–350, the supplier has an optimal decision but the contractor's optimal decision cannot be determined; compared with α = 0, once the order quantity reaches a certain level, i.e., at α = 140, the contractor's cost increases by 1.5%, indicating that the contractor's cost is positively correlated with the supplier's exit threshold: the higher the threshold, the greater the shock to the contractor's cost. These conclusions provide a new perspective on the mechanism by which supply disruptions act when a supply chain is unstable in its early stage and information is incomplete.
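The abstract above invokes Scarf's “minimax” (distribution-free) rule, which requires only the mean and variance of the random quantity. As general background rather than the paper's supplier-contractor model, the sketch below evaluates the classical Scarf order quantity for a newsvendor with hypothetical underage and overage costs.

```python
# Background sketch (not the paper's model): Scarf's distribution-free
# "minimax" order quantity for a newsvendor who knows only the demand mean and
# variance.  c_u is the unit underage (shortage) cost, c_o the unit overage
# cost; the parameter values are hypothetical.
import math

def scarf_order_quantity(mu, sigma, c_u, c_o):
    """Order quantity maximizing worst-case expected profit over all demand
    distributions with the given mean and variance (Scarf's rule); assumes
    ordering a positive quantity is worthwhile."""
    return mu + (sigma / 2.0) * (math.sqrt(c_u / c_o) - math.sqrt(c_o / c_u))

mu, sigma = 320.0, 40.0   # known mean and standard deviation
c_u, c_o = 9.0, 4.0       # hypothetical underage and overage costs
print(f"robust (minimax) order quantity = {scarf_order_quantity(mu, sigma, c_u, c_o):.1f}")
```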

12.
This paper focuses on risk assessment and multi-criteria decision-making methods applicable to deciding among candidate safety improvement strategies in the face of cost, safety and other uncertainties for NASA flight vehicles, launch vehicles and ground research facilities. Deciding on the best safety improvement strategy to implement on a spacecraft involves balancing safety against other quantifiable criteria such as technical feasibility, schedule, mass, performance, volume, and cost. A simplified example is used to illustrate the use of results derived from probabilistic risk assessment together with multi-criteria decision making in the face of uncertainties. The decision-making approaches investigated are intuition, cost/benefit ratio, expected impact, and the Analytic Hierarchy Process. A useful sensitivity study, termed Decision Trajectories, is also proposed in this paper. The example, based on the shuttle auxiliary power units (APUs), is limited to the following criteria: (1) the safety improvement of the proposed strategies, and (2) the associated recurring and non-recurring costs. These two criteria provide sufficient richness of domain to illustrate the technologies of risk assessment and decision making. Because readers may be unfamiliar with the work being conducted by NASA, this paper also provides the needed background.
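Among the decision-making approaches listed above is the Analytic Hierarchy Process. As a hedged illustration of its core step (not NASA's actual judgments), the sketch below derives criterion weights from a hypothetical pairwise comparison matrix via the principal eigenvector.

```python
# Illustration of one listed approach, the Analytic Hierarchy Process:
# criterion weights are taken as the normalized principal eigenvector of a
# pairwise comparison matrix.  The matrix below is hypothetical.
import numpy as np

# pairwise judgments: safety gain vs. non-recurring cost vs. recurring cost
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                     # normalized priority weights
ci = (eigvals[k].real - len(A)) / (len(A) - 1)   # consistency index
print("criterion weights:", np.round(w, 3), " consistency index:", round(ci, 3))
```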

13.
Among recent studies considering the splitting of an order among several suppliers (i.e., “multiple sourcing”), one group considered only the favorable effect of multiple suppliers on the effective lead-time demand and required safety stock, while its effects on the annual order and holding cost components have been ignored. Another group, which considers the effect of using two suppliers on all relevant cost components, imposes severe restrictions on the suppliers' lead-time distributions as well as on the proportion of the order split between the suppliers. Our primary purpose is to present easily solvable decision models for minimizing the sum of annual holding and ordering costs with two suppliers, subject to a maximum allowable stockout risk; restrictions on lead-time distributions and the order-split proportion are completely eliminated. Solving these models gives the optimal total order quantity, reorder point and proportion of split between the two suppliers. Numerical results from our models reveal some unexpected observations; e.g.: (i) in using two suppliers, the reduction of inventory carrying cost (a hitherto unrecognized component) is at least as important as, and often considerably more important than, the effect of safety-stock cost reduction; (ii) although intuitively one might use the suppliers with the shortest (mean) lead times, it is actually better to have two suppliers such that the second supplier's mean lead time is “suitably” larger than the first's; this could mean excluding the candidate with the lowest mean lead time as the second supplier; (iii) the optimal proportion of split varies with, among other factors, the difference in the suppliers' mean lead times.

14.
A simple tracking method is to look for U, X and V triple coincidences in a wire chamber, but the space resolution is in general not satisfactory. In this paper, we consider the use of a space correction for the inclined planes in a chamber. The resolution can be much improved for both the proportional chamber and the drift chamber. For proportional chambers, combining this method with the wire-spacing “drift distance” technique further improves the resolution.

15.
We consider the problem of using “safety capacity” to ensure due date integrity in a pull manufacturing system and quantify the basic tradeoff between lost revenue opportunity and overtime costs. In this context, we address the question of when it is economically attractive to use “under capacity scheduling” and the problem of setting economic production quotas.

We develop four models for addressing the quota setting problem. The first three assume that quota shortfalls cannot be carried over to the next regular time production period. Models 1 and 3 assume that these shortages are made up on overtime and incur fixed or fixed plus variable costs. Model 2 does not use a capacity buffer and treats shortages as lost sales. Finally, Model 4 assumes that shortages can be backlogged to the next regular time production period at a cost. For this model, we compute both an optimal quota and an overtime “trigger,” which represents the minimum shortage for which overtime is used. We give computational results that illustrate and contrast the various models.
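As a hedged Monte Carlo companion to the quota-setting models above (closest in spirit to Models 1 and 3, with shortfalls always made up on overtime), the sketch below grid-searches the quota that balances quota revenue against expected overtime cost; the capacity distribution and all cost figures are hypothetical.

```python
# Hedged Monte Carlo sketch of the quota-setting tradeoff: a higher quota
# earns more revenue but is more likely to require costly overtime.  Overtime
# is assumed dearer per unit than the margin r, so overcommitting is penalized.
# All parameter values and the output distribution are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
r = 30.0                  # margin earned per unit of quota (quota is always delivered)
F, v = 800.0, 45.0        # fixed and per-unit overtime cost
regular_output = rng.normal(loc=100.0, scale=15.0, size=20000)  # random regular-time output

def expected_profit(quota):
    shortfall = np.maximum(quota - regular_output, 0.0)
    overtime_cost = np.where(shortfall > 0, F + v * shortfall, 0.0)
    return r * quota - overtime_cost.mean()

quotas = np.arange(60, 121)
profits = [expected_profit(q) for q in quotas]
best = int(np.argmax(profits))
print(f"best quota = {quotas[best]} units, expected profit = {profits[best]:.0f}")
```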

16.
The decision as to whether a contaminated site poses a threat to human health and should be cleaned up relies increasingly upon the use of risk assessment models. However, the more sophisticated risk assessment models become, the greater the concern with the uncertainty in, and thus the credibility of, risk assessment. In particular, when there are several equally plausible models, decision makers are confused by model uncertainty and perplexed as to which model should be chosen for making decisions objectively. When the correctness of different models cannot easily be judged even after objective analysis has been conducted, the cost incurred during the risk assessment process has to be considered in order to make an efficient decision. To support an efficient and objective remediation decision, this study develops a methodology to cost the least required reduction of uncertainty and to use that cost measure in the selection of candidate models. The focus is on identifying the effort involved in reducing the input uncertainty to the point at which the uncertainty would not hinder the decision in each equally plausible model. First, the methodology combines a nested Monte Carlo simulation, rank correlation coefficients, and explicit decision criteria to identify the key uncertain inputs that would influence the decision, in order to reduce input uncertainty. It then calculates the cost of the required reduction of input uncertainty in each model by a convergence ratio, which measures the needed convergence level of each key input's spread. Finally, the most appropriate model can be selected based on the convergence ratio and cost. A case study of a contaminated site is used to demonstrate the methodology.
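As a hedged illustration of the input-screening step described above, the sketch below pushes uncertain inputs through a toy multiplicative risk model by Monte Carlo and ranks them by Spearman rank correlation with the computed risk; the model and distributions are placeholders, not the paper's contaminated-site model.

```python
# Sketch of the screening step: propagate uncertain inputs through a toy risk
# model by Monte Carlo, then rank them by Spearman rank correlation with the
# predicted risk.  The model and distributions are hypothetical placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 5000
inputs = {
    "source_concentration": rng.lognormal(np.log(120.0), 0.6, n),   # mg/kg
    "infiltration_rate":    rng.uniform(0.05, 0.40, n),             # m/yr
    "exposure_duration":    rng.triangular(5.0, 25.0, 40.0, n),     # yr
    "body_weight":          rng.normal(70.0, 10.0, n),              # kg
}

def toy_risk(x):
    # illustrative multiplicative dose/risk chain (placeholder model)
    dose = (x["source_concentration"] * x["infiltration_rate"] *
            x["exposure_duration"] / x["body_weight"])
    return 1e-6 * dose

risk = toy_risk(inputs)
ranking = sorted(((abs(spearmanr(v, risk)[0]), k) for k, v in inputs.items()),
                 reverse=True)
for rho, name in ranking:
    print(f"{name:22s} |rank correlation| = {rho:.2f}")
```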

17.
A new mechanistic approach (NMA) was used recently to examine the physical aspects of the LEFM (long) fatigue crack growth (FCG) process in crack-ductile materials in stages I and II. In this paper, the NMA is extended to examine both the physical and analytical aspects of the combined effects of Young's modulus, E, and stress ratio, R, in the same stages of the same materials. It is shown that (i) with the submicroscopic cleavage or reversed shear mechanism operating in its pure form, E is the most influential intrinsic “material” property controlling FCG; (ii) the E-dependence of da/dN is a natural consequence of the near-crack-tip displacement control proposed previously; and (iii) the demonstrated similarity of FCG curves and the existence of characteristic “pivot points” on these curves for a “class of materials” result from the influence of E, which continues even at higher R. A simple analytical model based on the “strain intensity factor,” K0, which contains the E-influence implicitly and controls da/dN in all materials irrespective of class, is proposed. Model-predicted, K0-based theoretical values of the threshold, the “Idealised Master Growth Curves (IMGCs),” and the mechanism transition point all agreed excellently with experimental data for at least three classes of materials, i.e. steels, Al-alloys and Ti-alloys, at extreme R-values of 0 and ≥ 0.6. The K0-parameter concept is used here to raise the status of the analysis of the E-effect from a simple “normalisation” to that of direct data “representation”. Using the NMA, existing empirical relations are given a sound theoretical base. In addition to aiding a clearer physical understanding of the FCG process, the unique IMGCs developed for different R-values are considered useful for quick, accurate and conservative life estimations and for the failure analyses usually required in the selection and design of materials.

18.
The plastic work required per unit area of fatigue crack propagation, U, was measured by cementing tiny foil strain gages ahead of propagating fatigue cracks and recording the stress-strain curves as the crack approached. Measurements of U and of the plastic zone size in aluminum alloys 2024-T4, 2219-T861, 2219 overaged, and Al-6.3 wt% Cu-T4, and in a binary Ni-base alloy with 7.2 wt% Al, are reported here. The results are discussed along with previously reported measurements of U in three steels and in 7050 aluminum alloy. When U is compared with the fatigue crack propagation rate at constant ΔK, along with strength and modulus, the conclusion is drawn that U is one of the parameters that determines the rate of fatigue crack propagation. The relation of U to microstructure is also discussed; “homogeneous” plastic deformation in the plastic zone ahead of the crack appears desirable.

19.
Solving Quadratic Assignment Problems by 'Simulated Annealing'   (total citations: 9; self-citations: 0; by others: 9)
Recently, an interesting analogy between problems in combinatorial optimization and statistical mechanics has been developed and has proven useful in solving certain traditional optimization problems such as computer design, partitioning, component placement, wiring, and traveling salesman problems. The analogy has resulted in a methodology, termed “simulated annealing,” which, in the process of iterating to an optimum, uses Monte Carlo sampling to occasionally accept solutions to discrete optimization problems which increase rather than decrease the objective function value. This process is counter to the normal 'steepest-descent' algorithmic approach. However, it is argued in the analogy that by taking such controlled uphill steps, the optimizing algorithm need not get “stuck” on inferior solutions.

This paper presents an application of the simulated annealing method to solve the quadratic assignment problem (QAP). Performance is tested on a set of “standard” problems, as well as some newly generated larger problems (n = 50 and n = 100). The results are compared to those from other traditional heuristics, e.g., CRAFT, biased sampling, and a revised Hillier procedure. It is shown that under certain conditions simulated annealing can yield higher quality (lower cost) solutions at comparable CPU times. However, the simulated annealing algorithm is sensitive to a number of parameters, some of whose effects are investigated and reported herein through the analysis of an experimental design.
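As a hedged, minimal version of the method discussed above, the sketch below applies simulated annealing to a small random QAP instance, including the occasional acceptance of cost-increasing swaps; the cooling schedule and instance are arbitrary rather than the paper's tuned settings or “standard” test problems.

```python
# Minimal simulated-annealing sketch for a small random QAP instance,
# illustrating the occasional acceptance of cost-increasing swaps.  The
# cooling schedule and parameters are arbitrary, not the paper's settings.
import numpy as np

rng = np.random.default_rng(7)
n = 12
F = rng.integers(0, 10, (n, n)); F = (F + F.T) // 2; np.fill_diagonal(F, 0)  # flows
D = rng.integers(1, 10, (n, n)); D = (D + D.T) // 2; np.fill_diagonal(D, 0)  # distances

def qap_cost(perm):
    # facility i is placed at location perm[i]
    return int(np.sum(F * D[np.ix_(perm, perm)]))

perm = rng.permutation(n)
cost = qap_cost(perm)
best_perm, best_cost = perm.copy(), cost
T = 100.0
for step in range(20000):
    i, j = rng.choice(n, size=2, replace=False)
    cand = perm.copy()
    cand[i], cand[j] = cand[j], cand[i]                   # swap two facilities
    delta = qap_cost(cand) - cost
    if delta <= 0 or rng.random() < np.exp(-delta / T):   # accept some uphill moves
        perm, cost = cand, cost + delta
        if cost < best_cost:
            best_perm, best_cost = perm.copy(), cost
    T *= 0.9995                                           # geometric cooling
print("best QAP cost found:", best_cost)
```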

20.
Determination of the optimum equipment replacement policy and time is of great economic importance. After a brief survey of the models that have been used for decision making, the paper looks at methods for detecting and quantifying growth of the failure frequency (peril rate) in repairable equipment. It examines the trend-detection methods of Laplace and Mann when the peril rate varies as a power of equipment age, and also applies them to some actual field failure data. An economic model is developed, based on total discounted future cost and providing for ongoing future technological growth. The cost comprises not only the conventional cost of ownership, but also the shortfall between an equipment's achieved benefit and that which would be achieved by an ideal equipment in the same demand environment. The inclusion of this shortfall, called 'incapacity cost', enables the replacement decision to be based not only on the deterioration of the equipment but also on its performance inadequacy and on the availability of technological improvement in present and future challengers. The formulation of the cost model is such that, for both a single-replacement finite planning horizon and an infinite horizon, the total discounted future cost is readily computed for a range of alternative replacement times and the optimum replacement programme thereby determined. The sensitivity of the total cost to the replacement times and the sensitivity of the optimum times to the variability of the assumed input data are easily examined. The application of the model to traffic signal equipment is described; in this application the total cost is shared between the nominal owner of the equipment and the community.
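As a hedged numerical companion to the economic model above, the sketch below computes the total discounted cost of repeated identical replacement cycles for a range of replacement ages, with an ownership cost and an 'incapacity cost' that grow with equipment age; the cost functions, growth rates and discount rate are hypothetical.

```python
# Hedged sketch of the replacement-age tradeoff: for each candidate
# replacement age T, sum the discounted ownership cost, a growing "incapacity
# cost" (shortfall vs. an improving ideal challenger), and the replacement
# capital cost, over identical repeating cycles.  All figures are hypothetical.
import numpy as np

A = 20000.0                      # acquisition cost of a replacement
rho = 0.08                       # annual discount rate
disc = 1.0 / (1.0 + rho)

def ownership(t):   return 1000.0 + 300.0 * t   # maintenance rises with age t (years)
def incapacity(t):  return 150.0 * t ** 1.5     # shortfall vs. ideal equipment grows with age

def pv_infinite_horizon(T):
    years = np.arange(1, T + 1)
    cycle = A + np.sum(disc ** years * (ownership(years) + incapacity(years)))
    return cycle / (1.0 - disc ** T)             # identical cycles repeated forever

costs = {T: pv_infinite_horizon(T) for T in range(2, 26)}
best_T = min(costs, key=costs.get)
print(f"optimum replacement age = {best_T} years "
      f"(total discounted cost = {costs[best_T]:,.0f})")
```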
