Similar documents
20 similar documents found (search time: 31 ms)
1.
Technical debt is a metaphor for delayed software maintenance tasks. Incurring technical debt may bring short-term benefits to a project, but such benefits are often achieved at the cost of extra work in the future, analogous to paying interest on the debt. Currently, technical debt is managed implicitly, if at all; on large systems, however, it is too easy to lose track of delayed tasks or to misunderstand their impact. We have therefore proposed a new approach to managing technical debt, which we believe helps software managers make informed decisions. In this study we explored the costs of the new approach by tracking technical debt management activities in an ongoing software project. The results provide insight into the impact of technical debt management on software projects. In particular, we found a significant start-up cost when beginning to track and monitor technical debt, but the cost of ongoing management soon declines to very reasonable levels.

2.
Sophisticated agents operating in open environments must make decisions that efficiently trade off the use of their limited resources between dynamic deliberative actions and domain actions. This is the meta-level control problem for agents operating in resource-bounded multi-agent environments. Control activities involve decisions on when to invoke scheduling and coordination of domain activities and how much effort to put into them. The focus of this paper is how to make effective meta-level control decisions. We show that meta-level control with bounded computational overhead allows complex agents to solve problems more efficiently than current approaches in dynamic open multi-agent environments. The meta-level control approach that we present is based on the decision-theoretic use of an abstract representation of the agent state. This abstraction concisely captures the critical information necessary for decision making while bounding the cost of meta-level control, and is appropriate for use in automatically learning meta-level control policies.

3.
We have developed a method of fabricating microfluidic device channels for bio-nanoelectronics systems using high-performance epoxy-based dry photopolymer films, or dry film resists (DFRs). The DFR used was Ordyl SY355 from Elga Europe. We recorded the developing and exposing processes as well as the time taken to make the channels, and from those records established accurate, repeatable procedures and times for DFR development and exposure, which were then used consistently in fabricating our channels. The channels were patterned and sandwiched between two glass substrates. In our work, the channel was formed for a colloidal particle separation system; the channels can handle continuous fluid flow and particle repositioning maneuvers using dielectrophoresis, which has shown successful results in separation.

4.
Retailers often need to replace soon-to-be-unseasonable products with new seasonable goods when the season changes. The trade-off for such activities involves choosing between the salvage loss of the unseasonable product and the profit of selling the seasonable product early. This article develops a periodic-review inventory model for planning the changes of seasonable goods with state-dependent demand and cost parameters. We show that the single-period optimal policy for product changes is a threshold policy based on the initial inventory of the unseasonable goods. The corresponding optimal inventory policy follows a Purchase-Keep-Dispose policy if the incumbent product is kept or a base-stock policy if the incumbent product is replaced. Numerically, we find that the structure of the multi-period optimal policy resembles that of the single-period model. We propose a heuristic to solve the multi-period model and demonstrate its effectiveness. Our research provides insights into dynamically managing seasonable goods.
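As a rough, self-contained illustration of the threshold structure described above (not the paper's actual model; all payoff parameters are hypothetical), the single-period replace-or-keep decision can be sketched as:

```python
def optimal_action(x, p_old, s, c_old, pi_new):
    """Single-period choice for an initial inventory x of the unseasonable product.
    keep:    keep selling the incumbent at margin p_old per unit (stylized as p_old * x)
    replace: salvage the x leftover units at value s (unit cost c_old, hence a loss)
             and earn the new product's expected profit pi_new."""
    keep_payoff = p_old * x
    replace_payoff = (s - c_old) * x + pi_new
    return 'replace' if replace_payoff >= keep_payoff else 'keep'

def replace_threshold(p_old, s, c_old, pi_new):
    """Indifference point: replacing is preferred exactly when
    x <= pi_new / (p_old - s + c_old), i.e. the optimal decision is a
    threshold policy in the initial inventory x."""
    return pi_new / (p_old - s + c_old)
```

With the hypothetical values p_old = 5, s = 1, c_old = 3, pi_new = 100, the threshold is 100/7 ≈ 14.3 units, so replacing is optimal at x = 10 but not at x = 20.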

5.
This paper considers resource allocation decisions in an unreliable multi-source multi-sink flow network, which applies to many real-world systems such as electric power systems, telecommunications, and transportation systems. Due to uncertainties of the components in such a network, transmitting resources successfully and economically through it is central to resource allocation decisions at resource-supplying (source) nodes. We study resource allocation decisions in an unreliable flow network for a range of demand configurations, constrained by demand-dependent and demand-independent cost considerations, under a reliability optimization objective. Solutions to these problems can be obtained by computing the resource allocation for each demand configuration independently. In contrast, we pursue an updating scheme that avoids the time-consuming enumeration of flow patterns required by independent computation of resource allocations for different demand configurations. We show that updating is attainable under both demand-independent and demand-dependent cost constraints when demand undergoes an incremental change, and demonstrate the proposed updating scheme with numerical examples.

6.
Many high-tech supply chains operate in a context of high process and market uncertainty due to shorter product life cycles. When introducing a new product, these supply chains must balance the cost of supply, including the cost of capacity and inventories, against revenue from the product's demand over its life cycle. However, in the early phase of introduction, after the earliest buyers purchase, there may be a demand gap for a period, followed by a sudden surge. To stay responsive and serve the market downstream after such gaps, two important decisions must be made: (a) the sizing of capacity, and (b) the level of collaboration. The intention of this paper is to show that the chosen level of collaboration significantly affects the management of the gap in the demand trajectory during new high-tech product diffusion. We study the impact of different collaboration strategies, namely vendor managed inventory (VMI), jointly managed inventory (JMI), and collaborative planning, forecasting and replenishment (CPFR), using system-dynamics-based simulation, and compare the results with a non-collaborative chain. Our results yield insights into the effectiveness of collaboration in managing the dynamics of the demand gap.

7.
Both researchers and practitioners recognize the importance of the interactions between financial and inventory decisions in the development of cost-effective supply chains. Moreover, achieving effective coordination among the supply chain players has become a pertinent research issue. This paper considers a three-level supply chain, consisting of a capital-constrained supplier, a retailer, and a financial intermediary (bank), coordinating their decisions to minimize total supply chain costs. Specifically, we consider a retailer managing its cash through the supplier's bank in return for permissible delay in payments from the supplier. The bank, benefiting from increasing its cash holdings with the retailer's cash deposits, offers the supplier a discount on its borrowing rate. We show that the proposed coordination mechanism achieves significant cost reduction, by up to 26.2%, compared to the non-coordinated model. We also find that, with coordination, the retailer orders in larger quantities than its economic order quantity, and that a higher return on cash for the retailer leads to a higher order quantity. Furthermore, we empirically validate the proposed coordination mechanism by showing that banks, retailers, and suppliers have much to gain through collaboration. Using COMPUSTAT datasets for the years 1950 through 2012, we determine the most important factors that affect the behavior of retailers and suppliers in granting and receiving trade credit. Our results indicate that engaging in such a coordination mechanism is a win-win situation for all parties involved.
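For reference alongside the finding that the coordinated retailer orders above its economic order quantity, here is a minimal sketch of the classic EOQ together with one stylized way a return earned on cash deposits can push the order quantity upward (the adjustment is purely illustrative, not the paper's model):

```python
from math import sqrt

def eoq(demand_rate, order_cost, holding_cost):
    """Classic economic order quantity: sqrt(2 * D * K / h)."""
    return sqrt(2 * demand_rate * order_cost / holding_cost)

def order_qty_with_cash_return(demand_rate, order_cost, holding_cost,
                               cash_return, unit_value):
    """Stylized effect noted in the abstract: a return earned on cash deposits
    offsets part of the holding cost, so the cost-minimizing order quantity
    rises above the plain EOQ. (Hypothetical adjustment for illustration.)"""
    effective_h = holding_cost - cash_return * unit_value
    if effective_h <= 0:
        raise ValueError("effective holding cost must remain positive")
    return sqrt(2 * demand_rate * order_cost / effective_h)
```

Any positive cash return that partially offsets holding cost makes `order_qty_with_cash_return` exceed `eoq`, matching the direction of the abstract's comparative-statics result.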

8.
刘亚珺, 李兵, 李增扬, 梁鹏, 吴闽泉. Computer Science (计算机科学), 2017, 44(11): 15-21, 40
Software technical debt borrows the economic concept of "debt" to describe technical compromises made during software development for short-term project gains. In the long run, however, technical debt affects software quality, cost, and development efficiency, so it must be managed effectively. Existing technical debt management tools are few in number and have various limitations, making effective management difficult. Mainstream integrated development environments are powerful and widely used, and can serve technical debt management. Taking the representative IDE Visual Studio 2015 Enterprise as the research subject, we use C# examples to investigate its ability to manage four categories of technical debt directly related to code, and compare it with four dedicated technical debt management tools, in order to provide technical debt management support for development teams' daily practice. The results show that Visual Studio offers better technical debt management capabilities and can apply multiple methods to manage, to varying degrees, the various categories of technical debt present in a project.

9.
This paper introduces a certainty-weighted detection system (CWDS) based on distributed decision makers that can classify a binary phenomenon as true, false, or uncertain. The CWDS is composed of two main blocks: the definite decision block (DDB), which provides a decision regarding the presence or absence of the phenomenon, and the uncertainty measure block (UMB), which provides a measure of uncertainty. The final decision, which may be definite (true or false) or uncertain, depends on characteristic parameters that define the region of uncertainty (RU_i and α) used by piecewise linear certainty functions in the DDB and in the UMB. The Bayes cost analysis is extended to include the cost of uncertain classifications and the cost of errors. A cost function is used to compare the CWDS to decision structures based on the Dempster-Shafer theory and fuzzy logic that also provide uncertain decisions. The CWDS performs similarly to a classical Bayes detection system when no uncertain classifications are provided. By changing the parameters RU_i and α, the CWDS can also be adjusted to perform similarly to the Dempster-Shafer and fuzzy structures. The differences between these approaches lie mainly in their characterization of uncertainty, and they can reduce the total costs below that of the Bayesian model if the cost of uncertain classifications is sufficiently smaller than the cost of errors. The performance of the CWDS was less sensitive to changes in the ratio of the cost of uncertain decisions to the cost of incorrect certain decisions, showing the CWDS to be more robust to system parameters than the fuzzy and Dempster-Shafer systems.
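A generic sketch of the kind of piecewise linear certainty function and three-way decision rule the abstract describes (parameter names and breakpoints are ours, not the paper's exact formulation of RU_i and α):

```python
def certainty(stat, ru_low, ru_high):
    """Piecewise linear certainty in [0, 1]: 0 below the region of uncertainty,
    rising linearly across [ru_low, ru_high], and 1 above it."""
    if stat <= ru_low:
        return 0.0
    if stat >= ru_high:
        return 1.0
    return (stat - ru_low) / (ru_high - ru_low)

def classify(stat, ru_low, ru_high, alpha):
    """Three-way decision: declare 'true' or 'false' only when the certainty
    is at least alpha away from the indifference value 0.5; otherwise the
    classification is reported as 'uncertain'."""
    c = certainty(stat, ru_low, ru_high)
    if c >= 0.5 + alpha:
        return 'true'
    if c <= 0.5 - alpha:
        return 'false'
    return 'uncertain'
```

Widening alpha enlarges the band of 'uncertain' outputs, which lowers total cost whenever the cost of an uncertain classification is sufficiently smaller than the cost of an error, as the abstract notes.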

10.
Constraint propagation is one of the techniques central to the success of constraint programming. To reduce search, fast algorithms associated with each constraint prune the domains of variables. With global (or non-binary) constraints, the cost of such propagation may be much greater than the quadratic cost for binary constraints. We therefore study the computational complexity of reasoning with global constraints. We first characterise a number of important questions related to constraint propagation. We show that such questions are intractable in general, and identify dependencies between the tractability and intractability of the different questions. We then demonstrate how the tools of computational complexity can be used in the design and analysis of specific global constraints. In particular, we illustrate how computational complexity can be used to determine when a lesser level of local consistency should be enforced, when constraints can be safely generalized, when decomposing constraints will reduce the amount of pruning, and when combining constraints is tractable.

11.
Moving toward high-confidence software that can meet ever-increasing demands for critical DOD applications will require planning, specifying, selecting, and managing the development and testing activities necessary to ensure the success of the software project. In order to trust the decisions being made, there must be evidence (i.e., an information base of data and facts) that the techniques and tools chosen for application on critical projects will perform as expected. Today, these expectations are mostly intuitive; there is little hard evidence available to guide acquisition managers and software developers in making the necessary decisions.

12.
We propose a flexible decision support scheme that could be used to manage wage negotiations between employers and employees. The scheme uses fuzzy inference systems and game theory concepts to arrive at decisions on future wage increases that could be more mutually agreeable. For example, rather than specifying a 5% yearly wage increase, we propose that the wage formula take into consideration the uncertain factors that are mostly difficult to predict and that could affect wage decisions. These include business revenues (or profit), inflation rate, number of competitors, cost of production, and other uncertain factors that may affect business operations. The accuracy of the fuzzy rule base and the game strategies will help to mitigate the adverse effects that a business may suffer from these uncertain factors. Based on our scheme, we propose that employers and employees calculate future wages using a fuzzy rule base and strategies that take these uncertain variables into consideration. The proposed approach is illustrated with a case study, and the procedure and methodology may be easily implemented by business organizations in their wage bargaining and decision processes.
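A toy sketch of a fuzzy wage rule of the kind described (the membership breakpoints, rule consequents, and zero-order Sugeno-style combination are all hypothetical, not taken from the paper):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def wage_increase(profit_growth, inflation):
    """Two hypothetical rules combined by a weighted average:
      R1: IF profit growth is high THEN raise about 6%
      R2: IF inflation is high     THEN raise about 4%
    Falls back to a modest 2% raise when no rule fires."""
    w1 = tri(profit_growth, 0.0, 0.15, 0.30)   # 'high profit growth' peaks near 15%
    w2 = tri(inflation, 0.0, 0.08, 0.16)       # 'high inflation' peaks near 8%
    if w1 + w2 == 0:
        return 0.02
    return (w1 * 0.06 + w2 * 0.04) / (w1 + w2)
```

The point of the sketch is only that the output tracks uncertain inputs (profit growth, inflation) rather than being a fixed percentage, which is the scheme's central idea.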

13.
Replica placement algorithms for mobile transaction systems
In distributed mobile systems, communication cost and disconnections are major concerns. In this paper, we address replica placement issues to achieve improved performance for systems supporting mobile transactions. We focus on handling correlated data objects and disconnections. Frequently, requests and/or transactions issued by mobile clients access multiple data objects, which should be considered together in terms of replica allocation. We discuss the replication cost model for correlated data objects and show that the problem of finding an optimal solution is NP-hard. We further adjust the replication cost model for disconnections. A heuristic "expansion-shrinking" algorithm is developed to efficiently make replica placement decisions. The algorithm obtains near-optimal solutions for the correlated data model and yields significant performance gains when disconnection is considered. Experimental studies show that the heuristic expansion-shrinking algorithm significantly outperforms general frequency-based replication schemes.
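For contrast with the paper's expansion-shrinking heuristic, the frequency-based baseline it is compared against can be sketched roughly as follows (cost parameters and the per-object independence assumption are illustrative; the paper's point is precisely that correlated objects should not be decided independently like this):

```python
def replication_benefit(read_freq, write_freq, remote_read_cost, update_cost):
    """Frequency-based rule of thumb: replicating an object at a site saves
    the cost of its remote reads but adds the cost of propagating every
    update to the new replica."""
    return read_freq * remote_read_cost - write_freq * update_cost

def place_replicas(objects):
    """objects: iterable of (name, read_freq, write_freq,
    remote_read_cost, update_cost). Replicate each object independently
    whenever its net benefit is positive."""
    return [name for name, r, w, rc, uc in objects
            if replication_benefit(r, w, rc, uc) > 0]
```

A read-heavy object gets replicated while a write-heavy one does not; the scheme ignores correlations between objects accessed by the same transaction, which is the gap the paper's cost model addresses.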

14.
15.
Statistical process control to improve coding and code review
Jacob, A.L.; Pillai, S.K. IEEE Software, 2003, 20(3): 50-55
A software process comprises activities such as estimation, planning, requirements analysis, design, coding, reviews, and testing, undertaken when creating a software product. Effective software process management involves proactively managing each of these activities. Statistical process control tools enable proactive software process management. One such tool, the control chart, can be used for managing, controlling, and improving the code review process.
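A minimal sketch of the control chart idea applied to defects found per code review, using the standard c-chart construction under a Poisson assumption (the paper's exact chart may differ):

```python
from math import sqrt

def c_chart_limits(defect_counts):
    """c-chart for defect counts per review: center line at the mean count,
    control limits at mean ± 3*sqrt(mean) (Poisson assumption, lower limit
    floored at zero). Returns (lcl, center, ucl)."""
    cbar = sum(defect_counts) / len(defect_counts)
    ucl = cbar + 3 * sqrt(cbar)
    lcl = max(0.0, cbar - 3 * sqrt(cbar))
    return lcl, cbar, ucl

def out_of_control(defect_counts):
    """Reviews whose defect counts fall outside the control limits, signaling
    that the review process (or the code entering review) deserves a look."""
    lcl, _, ucl = c_chart_limits(defect_counts)
    return [c for c in defect_counts if c < lcl or c > ucl]
```

For example, with counts of 4, 5, 6, 5, 4 and one review finding 30 defects, the mean is 9 and the upper limit 18, so only the 30-defect review is flagged.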

16.
Context: Variability management is a key activity in software product line engineering. This paper focuses on managing rationale information during the decision-making activities that arise during variability management. By decision-making we refer to systematic problem solving by considering and evaluating various alternatives. Rationale management is a branch of science that enables decision-making based on the argumentation of stakeholders while capturing the reasons and justifications behind these decisions.
Objective: Decision-making should be supported to identify variability in domain engineering and to resolve variation points in application engineering. We capture the rationale behind variability management decisions. The captured rationale information is useful to evaluate future changes of variability models as well as to handle future instantiations of variation points. We claim that maintaining rationale will enhance the longevity of variability models. Furthermore, decisions should be performed using formal communication between domain engineering and application engineering.
Method: We initiate the novel area of issue-based variability management (IVM) by extending variability management with rationale management. The key contributions of this paper are: (i) an issue-based variability management methodology (IVMM), which combines questions, options and criteria (QOC) and a specific variability approach; (ii) a meta-model for IVMM and a process for variability management; and (iii) a tool for the methodology, developed by extending an open-source rationale management tool.
Results: Rationale approaches (e.g. questions, options and criteria) guide distributed stakeholders when selecting choices for instantiating variation points. Similarly, rationale approaches also aid the elicitation of variability and the evaluation of changes. The rationale captured within the decision-making process can be reused to perform future decisions on variability.
Conclusion: IVMM was evaluated comparatively based on an experimental survey, which provided evidence that IVMM is more effective than a variability modeling approach that does not use issues.

17.
We consider the problem of managing a hybrid computing infrastructure whose processing elements comprise in-house dedicated machines, virtual machines acquired on demand from a cloud computing provider through short-term reservation contracts, and virtual machines made available by the remote peers of a best-effort peer-to-peer (P2P) grid. Each of these resources has a different cost basis and associated quality-of-service guarantees. The applications that run in this hybrid infrastructure are characterized by a utility function: the utility gained with the completion of an application depends on the time taken to execute it. We take a business-driven approach to managing this infrastructure, aiming to maximize the profit yielded, that is, the utility produced as a result of the applications that are run minus the cost of the computing resources used to run them. We propose a heuristic to be used by a contract planner agent that establishes the contracts with the cloud computing provider to balance the cost of running an application against the utility obtained with its execution, with the goal of producing a high overall profit. Our analytical results show that the simple heuristic proposed achieves very high relative efficiency in the use of the hybrid infrastructure. We also demonstrate that the ability to estimate the grid's behaviour is an important condition for making contracts that allow such relative efficiency values to be achieved. On the other hand, our simulation results with realistic error predictions show only a modest improvement in the profit achieved by the simple heuristic proposed, compared to a heuristic that does not consider the grid when planning contracts but uses it, and to another that is completely oblivious to the existence of the grid. This calls for the development of more accurate predictors for the availability of P2P grids, and more elaborate heuristics that can better deal with the several sources of non-determinism present in this hybrid infrastructure.

18.
We propose and motivate an alternative to the traditional error-based or cost-based evaluation metrics for the goodness of speaker detection performance. The metric that we propose is an information-theoretic one, which measures the effective amount of information that the speaker detector delivers to the user. We show that this metric is appropriate for the evaluation of what we call application-independent detectors, which output soft decisions in the form of log-likelihood ratios rather than hard decisions. The proposed metric is constructed via analysis and generalization of cost-based evaluation metrics. This construction gives the metric an interpretation as an expected cost, or as a total error rate, over a range of different application types. We further show how the metric can be decomposed into a discrimination component and a calibration component. We conclude with an experimental demonstration of the proposed technique by evaluating three speaker detection systems submitted to the NIST 2004 Speaker Recognition Evaluation.
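The information-theoretic metric described here is known in the literature as the log-likelihood-ratio cost, Cllr. A minimal sketch of its standard formulation (variable names are ours):

```python
from math import log2, exp

def cllr(target_llrs, nontarget_llrs):
    """Log-likelihood-ratio cost: the effective information loss (in bits) of a
    detector's soft LLR outputs, averaged over target and non-target trials.
    A perfectly calibrated, perfectly discriminating detector scores 0; an
    uninformative detector (llr = 0 on every trial) scores exactly 1 bit."""
    c_tar = sum(log2(1 + exp(-llr)) for llr in target_llrs) / len(target_llrs)
    c_non = sum(log2(1 + exp(llr)) for llr in nontarget_llrs) / len(nontarget_llrs)
    return 0.5 * (c_tar + c_non)
```

Because each trial's penalty is a proper scoring of the LLR itself, the metric rewards both discrimination and calibration, matching the decomposition mentioned in the abstract.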

19.
We present new results for the Stochastic Shortest Path problem when an unlimited number of hops may be used. Nodes and links in the network may be congested or uncongested, and their states change over time. The goal is to adaptively choose the next node to visit such that the expected cost to the destination is minimized. Since the state of a node may change, having the option to revisit a node can improve the expected cost; therefore, the option to use an unbounded number of hops may be required to achieve the minimum expected cost. We show that when revisits are prohibited, the optimal routing problem is NP-hard. We also prove properties of networks for which continual improvement may occur. We study the related routing problem which asks whether it is possible to determine the optimal next node based on the current node and state when an unlimited number of hops is allowed. We show that as the number of hops increases, this problem may not converge to a solution.

20.
In practice, machine schedules are usually subject to disruptions that have to be repaired by reactive scheduling decisions. The most popular predictive approach in the project management and machine scheduling literature is to leave idle times (time buffers) in schedules to cope with disruptions, i.e., the resources will be under-utilized. Therefore, preparing initial schedules by considering possible disruption times along with rescheduling objectives is critical for the performance of rescheduling decisions. In this paper, we show that if processing times are controllable, an anticipative approach can be used to form an initial schedule so that the limited capacity of production resources is utilized more effectively. To illustrate the anticipative scheduling idea, we consider a non-identical parallel machining environment where processing times can be controlled at a certain compression cost. When there is a disruption during execution of the initial schedule, a match-up time strategy is used such that the repaired schedule has to catch up with the initial schedule at some point in the future. This requires changing machine-job assignments and processing times for the rest of the schedule, which implies increased manufacturing costs. We show that by making anticipative job sequencing decisions based on failure and repair time distributions and the flexibility of jobs, one can repair schedules while incurring less manufacturing cost. Our computational results show that the match-up time strategy is very sensitive to the initial schedule and that the proposed anticipative scheduling algorithm can be very helpful in reducing rescheduling costs.
