Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
In the current “mass customization” scenario, product complexity is increasing significantly due to the need to respond as quickly and effectively as possible to many different customer needs while keeping costs under control. In this scenario, requirements management becomes a fundamental feature of the entire product lifecycle, as enterprises need a complete and clear picture of the market to succeed in developing and supporting the right, innovative product. Moreover, considering that the product lifecycle is characterized by many “trade-offs”, so that product features are often negotiated in order to satisfy conflicting requirements, it is important to support the “traceability” of the entire lifecycle “negotiation” process. For this reason, a PLM platform has to provide suitable methodologies and tools able to efficiently support the design and management of large sets of complex requirements. Requirements Management Tools (RMt) embedded in PLM solutions help keep specifications consistent, up to date, and accessible. At present, different solutions exist, but a shared, PLM-integrated one does not seem to be available. To fill this gap, this paper develops a user-based strategy, grounded in the Kano methodology and thus in “user satisfaction”, to define a structured set of guidelines supporting the design of the features of an integrated PLM requirements management tool.

2.
In this paper we study the effects of a change from the traditional request “How much effort is required to complete X?” to the alternative “How much can be completed in Y work-hours?”. Studies 1 and 2 report that software professionals receiving the alternative format provided much lower, and presumably more optimistic, effort estimates of the same software development work than those receiving the traditional format. Studies 3 and 4 suggest that the effect belongs to the family of anchoring effects. An implication of our results is that project managers and clients should avoid the alternative estimation request format.

3.
In traditional scheduling problems, the processing time of a given job is assumed to be constant regardless of whether the job is scheduled earlier or later. Recently, however, the phenomenon known as the “learning effect”, in which job processing times decline as workers gain experience, has been studied extensively. This paper discusses a bi-criteria scheduling problem in an m-machine permutation flowshop environment with varied learning effects on different machines. The objective is to minimize the weighted sum of the total completion time and the makespan. A dominance criterion and a lower bound are proposed to accelerate the branch-and-bound algorithm for deriving the optimal solution. In addition, near-optimal solutions are derived by adapting two well-known heuristic algorithms. The computational experiments reveal that the proposed branch-and-bound algorithm can effectively deal with problems of up to 16 jobs, and the proposed heuristic algorithms yield accurate near-optimal solutions.
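The objective described above can be made concrete with a short sketch. Assuming (as one common position-based formulation, not necessarily the paper's exact model) that the job in position r on machine i takes p_ij · r^a_i time units, with a machine-dependent learning index a_i ≤ 0, the weighted bi-criteria objective for a given permutation can be evaluated as follows; all names are illustrative:

```python
def flowshop_objective(perm, base_times, learn, w):
    """Weighted sum w*TCT + (1-w)*Cmax for an m-machine permutation
    flowshop with a position-based learning effect: the job placed in
    position r (1-indexed) on machine i takes base_times[i][j] * r**learn[i]
    time units, where learn[i] <= 0 is the learning index of machine i.
    (Illustrative formulation; the paper's exact model may differ.)"""
    m, n = len(base_times), len(perm)
    C = [[0.0] * n for _ in range(m)]  # completion times per machine/position
    for r, j in enumerate(perm):
        for i in range(m):
            p = base_times[i][j] * (r + 1) ** learn[i]
            ready = C[i][r - 1] if r > 0 else 0.0   # machine i free
            prev = C[i - 1][r] if i > 0 else 0.0    # job done on machine i-1
            C[i][r] = max(ready, prev) + p
    tct = sum(C[m - 1][r] for r in range(n))  # total completion time
    cmax = C[m - 1][n - 1]                    # makespan
    return w * tct + (1 - w) * cmax
```

A branch-and-bound or heuristic search would call such an evaluator on candidate permutations.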

4.
The cost of servicing a warranty depends, amongst other factors, on the type of repair performed under warranty. Although “all minimal repair” and “all replacement” policies are easy to implement and analyze, they are not always feasible and/or practical. Having a combination of different types of repair often leads to lower warranty servicing costs. In this article, to reduce the warranty servicing cost, we study a servicing strategy that involves performing imperfect repairs in place of some of the minimal repairs of an “all minimal repair” strategy; the effect of an imperfect repair is characterized by a drop in the conditional intensity function of the failure process. We consider both fixed and random degrees of repair. For a given type of product, we partition the warranty region so that the expected total warranty servicing cost is minimized. We provide a numerical illustration and a comparison with previously-studied repair-replacement strategies.

5.
The capacitated arc routing problem (CARP) is an important and practical problem in the OR literature. In short, the problem is to identify routes to service (e.g., pick up or deliver) demand located along the edges of a network such that the total cost of the routes is minimized. In general, a single route cannot satisfy the entire demand due to capacity constraints on the vehicles. CARP belongs to the set of NP-hard problems; consequently, numerous heuristic and metaheuristic solution approaches have been developed to solve it. In this paper an “ellipse rule” based heuristic is proposed for the CARP. This approach builds on the path-scanning heuristic, one of the most widely used greedy-add heuristics for this problem. The innovation consists essentially of selecting edges only inside ellipses when the vehicle is near the end of each route. This new approach was implemented and tested on three standard datasets and the solutions are compared against: (i) the original path-scanning heuristic; (ii) two other path-scanning heuristics; and (iii) the three best-known metaheuristics. The results indicate that the “ellipse rule” approach leads to improvements over the three path-scanning heuristics, reducing the average distance to the lower bound on the test problems by about 44%.
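The geometric filter behind such an “ellipse rule” can be sketched as follows: a candidate edge is kept only if it lies inside the ellipse whose foci are the vehicle's current position and the depot, i.e., only if the detour through it does not stretch the direct return distance by more than a chosen factor. The parameterisation and names here are illustrative assumptions, not the paper's exact rule:

```python
import math

def inside_ellipse(point, vehicle, depot, slack=1.2):
    """True iff `point` lies inside the ellipse with foci at the vehicle's
    current node and the depot: visiting `point` on the way back may
    lengthen the direct vehicle-to-depot distance by at most the factor
    `slack`.  (Illustrative parameterisation, not the paper's.)"""
    d = math.dist
    return d(vehicle, point) + d(point, depot) <= slack * d(vehicle, depot)
```

In a path-scanning loop, this test would be applied to the endpoints of unserviced edges once the vehicle's remaining capacity falls below a threshold, steering the route back towards the depot.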

6.
7.
Most dimension reduction techniques produce ordered coordinates, so that only the first few coordinates need be considered in subsequent analyses. The choice of how many coordinates to use is often made with a visual heuristic, i.e., by making a scree plot and looking for a “big gap” or an “elbow.” In this article, we present a simple and automatic procedure that accomplishes this goal by maximizing a profile likelihood function. We give a wide variety of both simulated and real examples.
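One well-known way to turn the scree-plot heuristic into a profile-likelihood maximization (in the spirit of the procedure described, though the exact model below is an assumption) is to treat the first q ordered eigenvalues and the remaining ones as two normal samples with separate means and a common variance, and pick the split q with the highest likelihood:

```python
import math

def profile_loglik(values):
    """Profile log-likelihood of each split point q: model the first q
    ordered values and the remaining p-q values as two normal samples
    with separate means and one pooled (MLE) variance.
    Returns the log-likelihood for q = 1 .. p-1."""
    p = len(values)
    out = []
    for q in range(1, p):
        g1, g2 = values[:q], values[q:]
        m1, m2 = sum(g1) / len(g1), sum(g2) / len(g2)
        ss = sum((x - m1) ** 2 for x in g1) + sum((x - m2) ** 2 for x in g2)
        var = max(ss / p, 1e-12)  # tiny floor avoids log(0)
        out.append(-0.5 * p * (math.log(2 * math.pi * var) + 1))
    return out

def elbow(values):
    """Number of coordinates to keep = argmax of the profile likelihood."""
    ll = profile_loglik(values)
    return 1 + max(range(len(ll)), key=ll.__getitem__)
```

On a scree sequence with a clear gap after the third eigenvalue, `elbow` picks 3 without any visual inspection.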

8.
We offer a variant of the maximal covering location problem to locate up to p signal-receiving stations. The “demands,” called geolocations, to be covered by these stations are distress signals and/or transmissions from any targets. The problem is complicated by several factors. First, to find a signal location, the signal must be received by at least three stations—two lines of bearing for triangulation and a third for accuracy. Second, signal frequencies vary by source and the included stations do not necessarily receive all frequencies. One must decide which listening frequencies are allocated to which stations. Finally, the range or coverage area of a station varies stochastically because of meteorological conditions. This problem is modeled using a multiobjective (or multicriteria) linear integer program (MOLIP), which is an approximation of a highly nonlinear integer program. As a solution algorithm, the MOLIP is converted to a two-stage network-flow formulation that reduces the number of explicitly enumerated integer variables. Non-inferior solutions of the MOLIP are evaluated by a value function, which identifies solutions that are similar to the more accurate nonlinear model. In all case studies, the “best” non-inferior solutions were about one to four standard deviations better than the sample mean of thousands of randomly located receivers with heuristic frequency assignments. We also show that a two-stage network-flow algorithm is a practical solution to an intractable nonlinear integer model. Most importantly, the procedure has been implemented in the field.

9.
Effective and timely maintenance actions can sustain and improve both system availability and product quality in automated manufacturing systems. However, arbitrarily stopping machines for maintenance will occupy their production time and may introduce system-level production losses. There may exist hidden opportunities during production, such that specific machines can be actively shut down for preventive maintenance without penalizing the system throughput. In this paper, the time intervals for such opportunities are defined as active maintenance opportunity windows (AMOWs). A Bernoulli model is developed to analytically estimate AMOWs in two-machine-one-buffer production lines. A recursive algorithm based on an aggregation method is used to estimate AMOWs in long lines. For balanced production lines, a heuristic algorithm is proposed to estimate AMOWs in real time. The effectiveness of the methods has been validated through numerical studies.

10.
A new hybrid assembly line design, called the parallel U-shaped assembly line system, is introduced and characterised along with numerical examples for the first time. Different from existing studies on U-shaped lines, we combine the advantages of two individual line configurations (namely parallel lines and U-shaped lines) and create an opportunity for assigning tasks to multi-line workstations located between two adjacent U-shaped lines, with the aim of maximising resource utilisation. Utilisation of crossover workstations, in which tasks from opposite areas of the same U-shaped line can be performed, is also one of the main advantages of U-shaped lines. As in traditional U-shaped line configurations, the newly proposed configuration also supports the utilisation of crossover workstations. An efficient heuristic algorithm is developed to find well-balanced solutions for the proposed line configurations. Test cases derived from existing studies and modified in accordance with the proposed system are solved using the proposed heuristic algorithm. The comparison of results obtained when the lines are balanced independently and when the lines are balanced together (in parallel to each other) clearly indicates that the parallelisation of U-shaped lines significantly decreases workforce requirements.

11.
In this paper, we develop an EPQ (economic production quantity) inventory model to determine the optimal buffer inventory for stochastic market demand during the preventive maintenance or repair of a manufacturing facility in an imperfect production system. Preventive maintenance, an essential element of the just-in-time structure, may cause shortages, which are reduced by buffer inventory. The products are sold with a free minimal repair warranty (FRW) policy. The production system may shift from an “in-control” state to an “out-of-control” state after a certain time that follows a probability density function. The defective (non-conforming) items produced in either state are reworked at a cost just after the regular production time. Finally, an expected cost function comprising the inventory cost, unit production cost, preventive maintenance cost and shortage cost is minimized analytically. We also develop a case in which both the buffer inventory and the production rate are decision variables, and the expected unit cost under the above cost functions is optimized. Numerical examples are provided to illustrate the behaviour and application of the model, and a sensitivity analysis with respect to key parameters of the system is carried out.
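For orientation, the deterministic EPQ baseline that such models extend (here without the maintenance, rework, warranty and buffer terms the abstract adds) is the classical finite-production-rate lot-size formula, a sketch of which follows; parameter names are illustrative:

```python
import math

def epq(demand, setup_cost, holding_cost, production_rate):
    """Classical economic production quantity:
        Q* = sqrt(2 D K / (h (1 - D/P)))
    with demand rate D, setup cost K, unit holding cost h and finite
    production rate P > D.  This is the textbook baseline only; the
    paper's model layers stochastic demand, maintenance and warranty
    costs on top of it."""
    rho = demand / production_rate
    return math.sqrt(2 * demand * setup_cost / (holding_cost * (1 - rho)))
```

For example, with D = 1000 units/year, K = 100, h = 5 and P = 2000, the optimal lot size is about 283 units.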

12.
A fuzzy clustering problem consists of assigning a set of patterns to a given number of clusters with respect to some criteria, such that each pattern may belong to more than one cluster with different degrees of membership. To solve it, we first propose a new local search heuristic, called Fuzzy J-Means, where the neighbourhood is defined by all possible centroid-to-pattern relocations. The “integer” solution is then moved to a continuous one by an alternate step, i.e., by finding centroids and membership degrees for all patterns and clusters. To alleviate the difficulty of being stuck in local minima of poor value, this local search is then embedded into the Variable Neighbourhood Search metaheuristic. Results on five standard test problems from the literature are reported and compared with those obtained with the well-known Fuzzy C-Means heuristic. It appears that the proposed methods obtain solutions of substantially better quality than the latter.
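For reference, the membership-update step of the classical Fuzzy C-Means baseline (the comparison method named above, not the paper's Fuzzy J-Means) can be sketched as follows, with fuzzifier m > 1 and illustrative names:

```python
import math

def fcm_memberships(points, centroids, m=2.0):
    """One membership-update step of classical Fuzzy C-Means:
        u[k][i] = 1 / sum_j (d_ki / d_kj) ** (2 / (m - 1)),
    where d_ki is the distance from pattern k to centroid i and m > 1
    is the fuzzifier.  Assumes no pattern coincides exactly with a
    centroid (otherwise that membership is set to 1 by convention)."""
    e = 2.0 / (m - 1.0)
    U = []
    for x in points:
        d = [math.dist(x, c) for c in centroids]
        U.append([1.0 / sum((d[i] / d[j]) ** e for j in range(len(d)))
                  for i in range(len(d))])
    return U
```

Each row of the returned matrix sums to 1; a pattern equidistant from two centroids gets membership 0.5 in each.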

13.
14.
In this paper, we consider the problem of scheduling sports competitions over several venues which are not associated with any of the competitors. A two-phase, constraint programming approach is developed, first identifying a solution that designates the participants and schedules each of the competitions, then assigning each competitor as the “home” or the “away” team. Computational experiments are conducted and the results are compared with an integer goal programming approach. The constraint programming approach achieves optimal solutions for problems with up to sixteen teams, and near-optimal solutions for problems with up to thirty teams.

15.
Theoretical comparisons of search strategies in branch-and-bound algorithms
Four known search strategies used in branch-and-bound algorithms (heuristic search, depth-first search, best-bound search, and breadth-first search) are theoretically compared from the viewpoint of the performance of the resulting algorithms. Heuristic search includes the other three as special cases. Since heuristic search is determined by a heuristic function h, we first investigate how the performance of the resulting algorithms depends on h. In particular, we show that heuristic search is stable in the sense that a slight change in h causes only a slight change in its performance. The best and the worst heuristic functions are clarified, and we also discuss how the heuristic function h should be modified to obtain a branch-and-bound algorithm with improved performance. Finally, properties and limitations of depth-first search, best-bound search, and breadth-first search, viewed as special cases of heuristic search, are considered. In particular, it is shown that the stability observed for heuristic search no longer holds for depth-first search.
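The unifying view of the four strategies as heuristic search can be sketched generically: the frontier is a priority queue ordered by the heuristic function h, so choosing h = the node's bound (negated, for maximisation) gives best-bound search, while h = negated depth approximates depth-first search. This is an illustrative skeleton under those assumptions, not the paper's formal model:

```python
import heapq

def branch_and_bound(root, children, bound, is_leaf, value, h):
    """Generic best-first branch-and-bound for maximisation.  The search
    strategy is fixed entirely by the priority function h (smaller h is
    explored first): e.g. h(n) = -bound(n) gives best-bound search.
    `bound(n)` must be an upper bound on every leaf value below n."""
    best_val, best = float("-inf"), None
    heap = [(h(root), 0, root)]
    tie = 1  # tie-breaker keeps heap entries comparable
    while heap:
        _, _, node = heapq.heappop(heap)
        if bound(node) <= best_val:
            continue  # prune: this subtree cannot beat the incumbent
        if is_leaf(node):
            if value(node) > best_val:
                best_val, best = value(node), node
            continue
        for c in children(node):
            if bound(c) > best_val:
                heapq.heappush(heap, (h(c), tie, c))
                tie += 1
    return best_val, best
```

A tiny 0/1 knapsack, with nodes `(next_item, weight, value)` and a loose remaining-value bound, exercises the skeleton end to end.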

16.
Many tasks require evaluating a specified Boolean expression φ over a set of probabilistic tests whose costs and success probabilities are each known. A strategy specifies when to perform which test, towards determining the overall outcome of φ. We are interested in finding the strategy with the minimum expected cost. As this task is typically NP-hard (for example, when tests can occur many times within φ, or when there are probabilistic correlations between the test outcomes), we consider those cases in which the tests are probabilistically independent and each appears only once in φ. In such cases, φ can be written as an and-or tree, where each internal node corresponds to either the “and” or “or” of its children, and each leaf node is a probabilistic test. In this paper we investigate “probabilistic and-or tree resolution” (PAOTR), namely the problem of finding optimal strategies for and-or trees. We first consider a depth-first approach: evaluate each penultimate rooted subtree in isolation, replace each such subtree with a single “mega-test”, and recurse on the resulting reduced tree. We show that the strategies produced by this approach are optimal for and-or trees of depth at most two but can be arbitrarily sub-optimal for deeper trees. Each depth-first strategy can be described by giving the linear relative order in which tests are to be executed, with the understanding that any test whose outcome becomes irrelevant is skipped. The class of linear strategies is strictly larger than that of depth-first strategies. We show that even the best linear strategy can also be arbitrarily sub-optimal. We next prove that an optimal strategy honors a natural partial order among tests with a common parent node (“leaf-sibling tests”), and use this to produce a dynamic programming algorithm that finds the optimal strategy in time O(d·2^d·(r+1)), where r is the maximum number of leaf-siblings and d is the number of leaf-parents; hence, for trees with a bounded number of internal nodes, this run-time is polynomial in the tree size. We also present another special class of and-or trees for which this task takes polynomial time. We close by presenting a number of other plausible approaches to PAOTR, together with counterexamples to show their limitations.
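The depth-two case where the depth-first approach is optimal reduces to ordering the independent tests under a single OR (dually, AND) node. A sketch of the expected-cost computation and the classical probability-to-cost ordering rule (standard textbook results, not code from the paper):

```python
def or_expected_cost(order, cost, prob):
    """Expected cost of evaluating an OR of independent probabilistic
    tests in the given order: test i is run only if every earlier test
    failed, so E = sum_i c_i * prod_{j before i} (1 - p_j)."""
    e, alive = 0.0, 1.0  # `alive` = probability that we reach this test
    for i in order:
        e += alive * cost[i]
        alive *= 1.0 - prob[i]
    return e

def best_or_order(cost, prob):
    """For a single OR node the classical optimal rule orders tests by
    p_i / c_i, largest first (the depth-two case where the depth-first
    approach described in the abstract is optimal)."""
    return sorted(range(len(cost)), key=lambda i: -prob[i] / cost[i])
```

With equal costs, the rule simply tries the most-likely-to-succeed test first, which minimizes the chance of paying for further tests.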

17.
The p-median problem (PMP) consists of locating p facilities (medians) so as to minimize the sum of distances from each client to its nearest facility. Interest in the large-scale PMP arises from applications in cluster analysis, where a set of patterns has to be partitioned into subsets (clusters) on the basis of similarity. In this paper we introduce a new heuristic for large-scale PMP instances, based on Lagrangean relaxation. It consists of three main components: subgradient column generation, combining subgradient optimization with column generation; a “core” heuristic, which computes an upper bound by solving a reduced problem defined by a subset of the original variables chosen on the basis of Lagrangean reduced costs; and an aggregation procedure that defines reduced-size instances by aggregating clients together with facilities. Computational results show that the proposed heuristic is able to compute good-quality lower and upper bounds for instances with up to 90,000 clients and potential facilities.

18.
In this article, an alternative approach to SRAM testing, the dynamic supply current test, is presented; it is used to cover resistive opens, a class of physical defects considered “hard to detect”. The investigation of its efficiency in unveiling open defects is based on an evaluation analysis carried out on a six-transistor (6T) SRAM cell designed in a 90 nm CMOS technology, where the parasitic components of the word lines, bit lines, and power supply lines are derived from a 4096-bit SRAM array. Three possible approaches to the dynamic supply current test are proposed and compared. Finally, the achieved results are analyzed and discussed.

19.
Finding the product of two polynomials is an essential and basic problem in computer algebra. While most previous results have focused on the worst-case complexity, we instead employ the technique of adaptive analysis to give an improvement in many “easy” cases. We present two adaptive measures and methods for polynomial multiplication, and also show how to effectively combine them to gain both advantages. One useful feature of these algorithms is that they essentially provide a gradient between existing “sparse” and “dense” methods. We prove that these approaches provide significant improvements in many cases but in the worst case are still comparable to the fastest existing algorithms.
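The “sparse” end of that gradient can be illustrated with a naive term-by-term product over exponent-to-coefficient maps, whose cost depends on the number of nonzero terms rather than on the degrees (an illustrative sketch, not the paper's algorithm):

```python
def sparse_mul(f, g):
    """Multiply two polynomials represented as {exponent: coefficient}
    dicts, touching only nonzero terms.  The cost is O(#f * #g)
    coefficient operations regardless of the degrees, which is the win
    on sparse ("easy") inputs; dense methods instead convolve full
    coefficient arrays of length deg+1."""
    h = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            e = ef + eg
            h[e] = h.get(e, 0) + cf * cg
    return {e: c for e, c in h.items() if c != 0}
```

For example, (x^100 + 1)(x^100 - 1) costs four coefficient multiplications here, versus ~200-entry arrays for a dense convolution.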

20.