Similar Documents
20 similar documents found (search time: 109 ms)
1.
2.
Sequential analysis as a sampling technique facilitates efficient statistical inference by requiring fewer observations than the fixed-sample method. The optimal stopping rule dictates the sample size and, consequently, the statistical inference drawn thereafter. In this research we propose three variants of existing multistage sampling procedures, which we call the (i) Jump and Crawl (JC), (ii) Batch Crawl and Jump (BCJ) and (iii) Batch Jump and Crawl (BJC) sequential sampling methods. We use the (i) normal, (ii) exponential, (iii) gamma and (iv) extreme value distributions for point estimation problems under bounded risk conditions. We highlight the efficacy of choosing the right adaptive sampling plan for the bounded risk problem for these four distributions, considering two different loss functions, namely the (i) squared error loss (SEL) and (ii) linear exponential (LINEX) loss functions. Our proposed methods are compared and analyzed against existing sequential sampling techniques, and the importance of this study is highlighted using extensive simulation runs.
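To make the bounded-risk stopping idea concrete, the sketch below implements a plain purely sequential rule for estimating a normal mean under squared error loss. It is not the JC/BCJ/BJC procedures of the paper; the risk bound, pilot size and loss constant are illustrative assumptions.

```python
import random
import statistics

def sequential_sample_size(draw, A=100.0, w=1.0, pilot=10, n_max=100000):
    """Purely sequential bounded-risk rule for a normal mean under squared
    error loss: stop at the first n >= pilot with n >= A * s_n^2 / w, where
    s_n^2 is the sample variance (illustrative sketch only)."""
    xs = [draw() for _ in range(pilot)]
    while len(xs) < n_max:
        n = len(xs)
        s2 = statistics.variance(xs)          # unbiased sample variance
        if n >= A * s2 / w:                   # estimated risk A*s2/n <= w
            return n, statistics.mean(xs)
        xs.append(draw())                     # "crawl": add one more observation
    return len(xs), statistics.mean(xs)

if __name__ == "__main__":
    random.seed(1)
    draw = lambda: random.gauss(5.0, 2.0)     # unknown mean 5, sd 2
    n, est = sequential_sample_size(draw)
    print(f"stopped at n={n}, estimate={est:.3f}")   # n should be near A*sigma^2/w = 400
```

The JC, BCJ and BJC variants studied in the paper differ in whether observations are added one at a time or in batches after an initial jump; the sketch above shows only the one-at-a-time case.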

3.
Testing concurrent programs is a challenging problem due to interleaving explosion: even for a fixed set of inputs, there is a huge number of concurrent runs that need to be tested to account for scheduler behavior. Testing all possible schedules is not practical. Consequently, most effective testing algorithms test only a selected subset of runs. For example, limiting testing to runs that contain data races or atomicity violations has been shown to capture a large proportion of concurrency bugs. In this paper we present a general approach to concurrent program testing based on automated planning techniques from artificial intelligence (AI). We propose a framework for predicting concurrent program runs that violate a collection of generic correctness specifications for concurrent programs, namely runs that contain data races, atomicity violations, or null-pointer dereferences. Our prediction is based on observing an arbitrary run of the program and using information collected from this run to model the behavior of the program and to predict new runs that contain bugs matching one of the above-noted violation patterns. We characterize the problem of predicting such new runs as an AI sequential planning problem with the temporally extended goal of achieving a particular violation pattern. In contrast to many state-of-the-art approaches, in our approach the feasibility of the predicted runs is guaranteed and, therefore, all generated runs are fully usable for testing. Moreover, our planning-based approach has the merit that it can easily accommodate a variety of violation patterns, which serve as the selection criteria for guiding search in the state space of concurrent runs; this is achieved by simply modifying the planning goal. We have implemented our approach using state-of-the-art AI planning techniques and tested it within the Penelope concurrent program testing framework [35]. Nevertheless, the approach is general and is amenable to a variety of program testing frameworks. Our experiments with a benchmark suite showed that our approach is very fast and highly effective, finding all known bugs.

4.
Recently, control charts plotting a statistic having a Student's t distribution have been proposed as an efficient solution for performing Statistical Process Control (SPC) in short production runs where the shift size of the in-control process mean from μ0 to μ1 is known a priori. The shift size is usually measured as a multiple δ of the in-control process standard deviation σ0; in practice, however, at the beginning of the production run both the value of the next shift δ and σ0 are unknown. As a consequence, when the actual shift size differs from the value assumed at the chart design stage, the performance of the control chart can be seriously affected. To overcome this problem, this paper investigates the statistical performance of the Shewhart, EWMA and CUSUM t charts for short production runs when the shift size is unknown and modeled by means of a statistical distribution. An extensive numerical analysis allows the properties of the three charts to be compared and discussed when uniform and triangular distributions are used by quality practitioners to fit the unknown shift size. An illustrative example demonstrates a practical implementation of the best performing of the three investigated charts.
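As a rough illustration of the t-chart mechanics (not the paper's design for an unknown shift size), the sketch below computes the subgroup t statistic and the EWMA recursion with asymptotic control limits; the smoothing constant λ and the limit width L are illustrative assumptions.

```python
import math
import statistics

def ewma_t_chart(subgroups, mu0, lam=0.2, L=2.7):
    """Hedged sketch of an EWMA chart on the Student's t statistic of each
    subgroup (not the exact design of the cited paper). Returns the EWMA
    values and the index of the first out-of-control subgroup, if any.
    Requires subgroup size n > 3 so that the t variance (n-1)/(n-3) exists."""
    n = len(subgroups[0])
    var_t = (n - 1) / (n - 3)                  # variance of a t variable with n-1 df
    sigma_z = math.sqrt(var_t * lam / (2 - lam))
    ucl, lcl = L * sigma_z, -L * sigma_z       # asymptotic control limits
    z, zs, signal = 0.0, [], None
    for i, x in enumerate(subgroups):
        t = (statistics.mean(x) - mu0) / (statistics.stdev(x) / math.sqrt(n))
        z = lam * t + (1 - lam) * z            # EWMA recursion
        zs.append(z)
        if signal is None and not (lcl <= z <= ucl):
            signal = i
    return zs, signal
```

The paper instead designs the charts against a uniform or triangular distribution for the unknown shift size δ; that integration over the shift distribution is not reproduced here.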

5.
Shape from incomplete silhouettes based on the reprojection error
Traditional shape-from-silhouette methods compute the 3D shape as the intersection of the back-projected silhouettes in 3D space, the so-called visual hull. However, silhouettes obtained with background subtraction techniques often contain missed-detection errors (produced by false negatives or occlusions), which lead to incomplete 3D shapes. Our approach deals with missed detections, false alarms, and noise in the silhouettes. We recover the voxel occupancy that describes the 3D shape by minimizing an energy based on an approximation of the error between the 2D projections of the shape and the silhouettes. Two variants of the projection, and hence of the energy, as a function of the voxel occupancy are proposed; one of them outperforms the other. The energy also includes a sparsity measure and a regularization term, and it takes into account the visibility of the voxels in each view in order to handle self-occlusions.
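The following minimal sketch illustrates the kind of reprojection-error energy involved, using axis-aligned soft projections of a voxel grid; it omits the regularization and visibility terms of the paper, and the projection variant and all names are illustrative assumptions.

```python
import numpy as np

def soft_projection(occ, axis):
    """Soft projection of a voxel-occupancy grid along one axis: a pixel is
    'filled' if any voxel on its ray is occupied, approximated smoothly as
    1 - prod(1 - v). One of several possible variants; an assumption here."""
    return 1.0 - np.prod(1.0 - occ, axis=axis)

def silhouette_energy(occ, silhouettes, axes, beta=0.01):
    """Reprojection-error energy: squared error between the projections and
    the observed (possibly incomplete) silhouettes, plus a sparsity term."""
    data_term = sum(np.sum((soft_projection(occ, ax) - sil) ** 2)
                    for sil, ax in zip(silhouettes, axes))
    return data_term + beta * occ.sum()

# toy example: a 16^3 grid observed along the three coordinate axes
occ = np.zeros((16, 16, 16)); occ[4:12, 4:12, 4:12] = 1.0
sils = [soft_projection(occ, ax) for ax in (0, 1, 2)]
sils[0][6:10, 6:10] = 0.0                      # simulate a missed detection
print(silhouette_energy(occ, sils, axes=(0, 1, 2)))
```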

6.
The performance of Multi-Radio Multi-Channel Wireless Mesh Networks (MRMC-WMNs) based on the IEEE 802.11 technology depends significantly on how the channels are assigned to the radios and how traffic is routed between the access points and the gateways. In this paper we propose an algorithmic approach to this problem, for which, as far as we know, no optimal polynomial-time solutions have been put forward in the literature. The core of our scheme is a sequential divide-and-conquer technique which divides the overall Joint Channel Assignment and Routing (JCAR) problem into a number of local optimization sub-problems that are executed sequentially. We propose a generalized scheme called Generalized Partitioned Mesh network traffic and interference aware channeL Assignment (G-PaMeLA), where the number of sub-problems equals the maximum number of hops to the gateway, and a customized version which takes advantage of knowledge of the topology. In both cases each sub-problem is formulated as an Integer Linear Programming (ILP) optimization problem. An optimal solution for each sub-problem can be found using a branch-and-cut method. The final solution is obtained after a post-processing phase, which improves network connectivity. The divide-and-conquer technique significantly reduces the execution time and makes our solution feasible for an operational WMN. With the help of a detailed packet-level simulation, the G-PaMeLA technique is compared with several state-of-the-art JCAR algorithms. Our results highlight that G-PaMeLA performs much better than the others in terms of packet loss rate, collision probability and fairness among traffic flows.

7.
A protein consists of atoms. Given a protein, the automatic recognition of depressed regions on its surface, often called docking sites or pockets, is important for analyzing the interaction between a protein and a ligand, and it facilitates the fast development of new drugs. Presented in this paper is a geometric approach for the detection of docking sites using the β-shape, which is based on the Voronoi diagram of the atoms in the Euclidean distance metric. We first propose a geometric construct called the β-shape, which represents the proximity among atoms on the surface of a protein. Then, using the β-shape, which takes the size differences among atoms into account, we present an algorithm to extract the pockets that are possible docking sites on the surface of a protein.

8.
In this paper, a metaheuristic inspired by the T-Cell model of the immune system (i.e., an artificial immune system) is introduced. The proposed approach (called DTC, for Dynamic T-Cell) is used to solve dynamic optimization problems and is validated using test problems taken from the specialized literature on dynamic optimization. Results are compared with artificial immune approaches representative of the state of the art in the area. Some statistical analyses are also performed in order to determine the sensitivity of the proposed approach to its parameters.

9.
This paper is concerned with the cost minimization of prestressed concrete beams using a special differential evolution-based technique. The optimum design is posed as a single-objective optimization problem in the presence of constraints formulated in accordance with the current European building code. The design variables include the geometrical dimensions that define the shape of the cross section and the amount of prestressing steel. A special (μ + λ)-constrained differential evolution method is used to solve the optimization problem. Its search mechanism relies on several mutation strategies, while an archiving-based adaptive tradeoff model is in charge of selecting a specific constraint-handling technique. Finally, numerical examples are included to illustrate the application of the presented approach.
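For orientation, here is a minimal DE/rand/1/bin sketch with a simple feasibility rule for constraint handling. It is not the paper's (μ + λ)-constrained DE with the archiving-based adaptive tradeoff model; the control parameters and the toy problem are illustrative assumptions.

```python
import random

def de_constrained(f, g, bounds, np_=30, F=0.7, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin with a feasibility rule (feasible beats
    infeasible, less violation beats more); g returns constraint values
    in the g(x) <= 0 convention."""
    rng = random.Random(seed)
    d = len(bounds)
    def clip(x): return [min(max(xi, lo), hi) for xi, (lo, hi) in zip(x, bounds)]
    def viol(x): return sum(max(0.0, gi) for gi in g(x))   # total constraint violation
    def better(a, b):
        va, vb = viol(a), viol(b)
        if va == 0 and vb == 0: return f(a) < f(b)
        return va < vb
    pop = [clip([rng.uniform(lo, hi) for lo, hi in bounds]) for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jr = rng.randrange(d)               # index forced to cross over
            trial = [a[j] + F * (b[j] - c[j]) if (rng.random() < CR or j == jr)
                     else pop[i][j] for j in range(d)]
            trial = clip(trial)
            if better(trial, pop[i]):
                pop[i] = trial
    return min(pop, key=lambda x: (viol(x), f(x)))

# toy usage: minimize x^2 + y^2 subject to x + y >= 1  (written as 1 - x - y <= 0)
best = de_constrained(lambda x: x[0]**2 + x[1]**2,
                      lambda x: [1 - x[0] - x[1]],
                      bounds=[(-5, 5), (-5, 5)])
print(best)    # should land near (0.5, 0.5)
```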

10.
An Approximation Algorithm for a Large-Scale Facility Location Problem
We developed a new practical optimization method that gives approximate solutions for large-scale real instances of the Uncapacitated Facility Location Problem. The optimization consists of two steps: application of the Greedy-Interchange heuristic using a small subset of warehouse candidates, and application of the newly developed heuristic named Balloon Search, which takes account of all warehouse candidates and runs in O(3n + 2n log n) expected time, where n is the number of nodes of the underlying graph. Our experiments on the spare parts logistics of a Japanese manufacturing company with 6000 customers and 380,000 warehouse candidates led us to conclude that the Greedy heuristic improved the total cost by 9%-11%, that the Interchange heuristic improved the total cost by an additional 0.5%-1.5%, and that Balloon Search improved it by a further 0.5%-1.5%.
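A minimal sketch of the classical Greedy and Interchange steps for the uncapacitated facility location problem is given below; Balloon Search itself is not reproduced, and the toy instance is an illustrative assumption.

```python
def total_cost(open_set, fixed, assign_cost):
    """Opening costs plus each customer's cheapest assignment to an open facility."""
    serve = sum(min(assign_cost[c][f] for f in open_set) for c in range(len(assign_cost)))
    return sum(fixed[f] for f in open_set) + serve

def greedy_interchange(fixed, assign_cost):
    """Sketch of Greedy followed by Interchange for uncapacitated facility location."""
    m, open_set = len(fixed), set()
    best = float("inf")
    # Greedy: open facilities one by one while the total cost keeps dropping.
    while True:
        cand = min((f for f in range(m) if f not in open_set),
                   key=lambda f: total_cost(open_set | {f}, fixed, assign_cost),
                   default=None)
        if cand is None:
            break
        c = total_cost(open_set | {cand}, fixed, assign_cost)
        if c >= best:
            break
        best, open_set = c, open_set | {cand}
    # Interchange: swap one open facility for one closed facility if it helps.
    improved = True
    while improved:
        improved = False
        for out in list(open_set):
            for inn in (f for f in range(m) if f not in open_set):
                trial = (open_set - {out}) | {inn}
                c = total_cost(trial, fixed, assign_cost)
                if c < best:
                    best, open_set, improved = c, trial, True
                    break                      # restart the scan after an improving swap
            if improved:
                break
    return open_set, best

# toy instance: 3 candidate warehouses, 3 customers (assign[customer][facility])
fixed = [4.0, 3.0, 5.0]
assign = [[1, 4, 6], [5, 2, 7], [6, 5, 1]]
print(greedy_interchange(fixed, assign))       # -> ({1}, 14.0)
```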

11.
This paper describes the multiobjective optimization of parts made with curvilinear fiber composites. Two structures are studied: a square plate and a fuselage-like section. The square plate is designed in two ways. First, classical lamination theory (CLT) is used to obtain the structural response of a plate with straight fibers designed for maximum buckling load and maximum stiffness. The same plate is then designed with curved fibers, using finite element analysis (FEA) to determine the structural response. Next, the fuselage-like section is designed using the same FEA approach. The problems have three to twelve variables. To allow the resulting Pareto front to be visualized more clearly, only two objectives are considered. The first two optimization problems are unconstrained, while the last one is constrained by two project requirements. To overcome the long computational run times of FEA, Kriging-based approaches are used. Three such approaches suitable for multiobjective problems are compared: (i) the efficient global optimization (EGO) algorithm applied to a single-objective function consisting of a weighted combination of the objectives, (ii) sequential maximization of the expected hypervolume improvement, and (iii) a novel approach proposed here based on sequential minimization of the variance of the predicted Pareto front. Comparison of the results using the inverted generational distance (IGD) metric revealed that approach (iii) had the best performance (mean) and the best robustness (standard deviation) in all the cases studied.
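To illustrate the Kriging-based infill idea behind approach (i), the sketch below runs EGO with the standard expected-improvement criterion on a weighted sum of two cheap stand-in objectives (the paper's objectives require FEA); the kernel, candidate pool and weights are assumptions, and the hypervolume- and variance-based criteria (ii)-(iii) are not reproduced.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(gp, X_cand, y_best):
    """EI of candidate points for a minimization problem under a fitted GP."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    imp = y_best - mu
    z = np.divide(imp, sigma, out=np.zeros_like(imp), where=sigma > 0)
    return np.where(sigma > 0, imp * norm.cdf(z) + sigma * norm.pdf(z), 0.0)

def ego_weighted_sum(objectives, weights, X_init, n_iter=20, seed=0):
    """EGO on a weighted sum of objectives: fit a Kriging (GP) model of the
    scalarized objective, then repeatedly add the candidate with maximum EI."""
    rng = np.random.default_rng(seed)
    scalar = lambda X: sum(w * f(X) for w, f in zip(weights, objectives))
    X = np.asarray(X_init, float)
    y = scalar(X)
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, y)
        X_cand = rng.uniform(0.0, 1.0, size=(256, X.shape[1]))   # random candidate pool
        ei = expected_improvement(gp, X_cand, y.min())
        x_new = X_cand[np.argmax(ei)]
        X = np.vstack([X, x_new])
        y = np.append(y, scalar(x_new[None, :]))
    return X[np.argmin(y)], y.min()

# toy usage with cheap stand-ins for the expensive FEA objectives
f1 = lambda X: np.sum((X - 0.3) ** 2, axis=1)
f2 = lambda X: np.sum((X - 0.7) ** 2, axis=1)
print(ego_weighted_sum([f1, f2], [0.5, 0.5], X_init=np.random.rand(10, 2)))
```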

12.
During the last decades, simulation software based on the Finite Element Method (FEM) has significantly contributed to the design of feasible forming processes. Coupling FEM to mathematical optimization algorithms offers a promising opportunity to design optimal metal forming processes rather than merely feasible ones. In this paper, Sequential Approximate Optimization (SAO) for optimizing forging processes is discussed. The algorithm incorporates time-consuming nonlinear FEM simulations. Three variants of the SAO algorithm, which differ in their sequential improvement strategies, have been investigated and compared to other optimization algorithms by application to two forging processes. The other algorithms taken into account are two iterative algorithms (BFGS and SCPIP) and a Metamodel Assisted Evolutionary Strategy (MAES). It is essential for sequential approximate optimization algorithms to implement an improvement strategy that uses as much of the information obtained during previous iterations as possible. If such a sequential improvement strategy is used, SAO provides a very efficient algorithm for optimizing forging processes using time-consuming FEM simulations.

13.
He Youwei, Sun Jinju, Song Peng, Wang Xuesong. Engineering with Computers, 2021, 38(3): 2001-2026.

The multi-objective efficient global optimization (MOEGO) algorithm, an extension of single-objective efficient global optimization intended to handle multiple objectives, is one of the most frequently studied surrogate-model-based optimization algorithms. However, the evaluation of the infill point obtained in each MOEGO update iteration by the simulation tool may fail. Such evaluation failures are critical to the sequential MOEGO method, as they lead to a premature halt of the optimization process due to the impossibility of updating the Kriging models approximating the objectives. In this paper, a novel strategy to prevent the premature halt of the sequential MOEGO method is proposed. The key point is to introduce an additional Kriging model to predict the possibility of a successful simulation at an unvisited point. Multi-objective expected-improvement-based criteria incorporating this success possibility are proposed. Experiments are performed on a set of six analytic problems, five low-fidelity airfoil shape optimization problems, and a high-fidelity axial flow compressor tandem cascade optimization problem. Results suggest that the proposed MOEGO-Kriging method is the only method that consistently performs well on analytic and practical problems. The methods using the least-squares support vector machine (LSSVM) or weighted LSSVM as the predictor of success possibility perform competitively or worse compared with MOEGO-Kriging. The penalty-based method, which assigns high objective values to failed evaluations in a minimization problem, yields the worst performance.


14.
Chen, S., Istepanian, R., and Luk, B. L., Digital IIR Filter Design Using Adaptive Simulated Annealing, Digital Signal Processing 11 (2001) 241–251. Adaptive infinite-impulse-response (IIR) filtering provides a powerful approach for solving a variety of practical problems. Because the error surface of IIR filters is generally multimodal, global optimization techniques are required in order to avoid local minima. We apply a global optimization method, called adaptive simulated annealing (ASA), to digital IIR filter design. An important advantage of the ASA is its simplicity in software programming. A simulation study involving a system identification application shows that the proposed approach is accurate and has a fast convergence rate, and the results obtained demonstrate that the ASA offers a viable tool for digital IIR filter design.
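The sketch below applies a plain simulated annealing loop (not Ingber's adaptive ASA) to the same kind of system-identification task: fitting a first-order IIR filter to the output of an unknown plant by minimizing the output MSE. The step size, cooling schedule and plant coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def iir_mse(theta, x, d):
    """MSE between the desired output d and the output of a 1st-order IIR
    filter H(z) = (b0 + b1 z^-1) / (1 + a1 z^-1) parameterized by theta."""
    b0, b1, a1 = theta
    y = lfilter([b0, b1], [1.0, a1], x)
    return float(np.mean((d - y) ** 2))

def simulated_annealing(cost, theta0, steps=5000, t0=1.0, seed=0):
    """Plain simulated annealing: random perturbations, Metropolis acceptance,
    a simple 1/(1+k) cooling schedule (not the adaptive ASA of the paper)."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, float)
    best, c, c_best = theta.copy(), cost(theta), cost(theta)
    for k in range(steps):
        temp = t0 / (1.0 + k)
        cand = theta + rng.normal(0, 0.1, size=theta.size)
        cand[2] = np.clip(cand[2], -0.99, 0.99)     # keep the pole inside the unit circle
        c_cand = cost(cand)
        if c_cand < c or rng.random() < np.exp(-(c_cand - c) / max(temp, 1e-12)):
            theta, c = cand, c_cand
            if c < c_best:
                best, c_best = theta.copy(), c
    return best, c_best

# system identification toy example: recover an unknown first-order IIR plant
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
d = lfilter([0.5, 0.3], [1.0, -0.7], x)             # unknown plant output
print(simulated_annealing(lambda th: iir_mse(th, x, d), [0.0, 0.0, 0.0]))
```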

15.

This paper presents the application of the Taguchi method (TM) to the design optimization of non-uniform circular antenna arrays (CAAs) for the suppression of sidelobe levels (SLLs). TM, a robust design approach, takes the signal-to-noise ratio and orthogonal array tools from the statistical design of experiments. These tools make it possible to reduce the number of design trials compared with a full factorial parametric analysis, thus increasing the convergence speed and generating more accurate solutions. TM is used to determine an optimal set of amplitudes and positions of the CAA for 8, 10, and 12 elements. Comparison of the results of the TM with those of the latest meta-heuristic algorithms in the literature reveals that the CAA design with TM provides the best SLL reduction performance in all cases.
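A minimal sketch of the orthogonal-array and signal-to-noise machinery that the Taguchi method relies on is shown below, using the standard L4(2^3) array and the "smaller is better" S/N ratio appropriate for sidelobe suppression; the actual CAA amplitude/position optimization is not reproduced.

```python
import numpy as np

# L4(2^3) orthogonal array: 4 trials covering 3 two-level factors so that
# every pair of factor levels appears equally often (levels coded 0/1).
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def sn_smaller_is_better(y):
    """Taguchi signal-to-noise ratio for a 'smaller is better' response
    (e.g., sidelobe level): SN = -10 log10(mean(y^2))."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y ** 2))

def best_levels(responses):
    """Pick, for each factor, the level with the higher mean S/N ratio
    over the trials of the orthogonal array."""
    sn = np.array([sn_smaller_is_better(r) for r in responses])   # one S/N per trial
    return [int(np.argmax([sn[L4[:, f] == lv].mean() for lv in (0, 1)]))
            for f in range(L4.shape[1])]
```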


16.
The single-source shortest paths problem with positive edge weights (SSSPP) is one of the more widely studied problems in operations research and theoretical computer science, on account of its wide applicability to practical situations. This problem was first solved in polynomial time by Dijkstra, who showed that by repeatedly extracting the vertex with the smallest distance from the source and relaxing its outgoing edges, the shortest path to each vertex is obtained. Variations of this general theme have led to a number of algorithms which work well in practice. At the heart of a Dijkstra implementation is the technique used to implement the priority queue. It is well known that Dijkstra's approach requires Ω(n log n) steps on a graph having n vertices, since it essentially sorts vertices by their distances from the source. Accordingly, the fastest implementation of Dijkstra's algorithm on a graph with n vertices and m edges should take Ω(m + n log n) time, and consequently the Dijkstra procedure for SSSPP using Fibonacci heaps is optimal in the comparison-based model. In this paper, we introduce a new data structure for implementing priority queues, called the two-level heap (TLH), and a new variant of Dijkstra's algorithm called Phased Dijkstra. We contrast the performance of Dijkstra's algorithm (both the simple and the phased variants) using a number of data structures to implement the priority queue and empirically establish that TLHs are far superior to Fibonacci heaps on every graph family considered. Our profiling includes both sparse and dense graphs.
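For reference, here is Dijkstra's algorithm with a binary heap (Python's heapq) as the priority queue; the two-level heap and the Phased Dijkstra variant introduced in the paper are not reproduced here.

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's algorithm with a lazy-deletion binary heap.
    graph: {u: [(v, w), ...]} with positive edge weights w."""
    dist = {source: 0.0}
    pq = [(0.0, source)]                     # (distance, vertex) priority queue
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                         # stale entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w                       # relax outgoing edge (u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

print(dijkstra({"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}, "s"))
# {'s': 0.0, 'a': 2.0, 'b': 3.0}
```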

17.
Statistical dependency analysis is the basis of all empirical science. A commonly occurring problem is to find the most significant dependency rules, which describe either positive or negative dependencies between categorical attributes. In medical science, for example, one is interested in genetic factors which can either predispose to or prevent diseases. The requirement of statistical significance is essential, because the discoveries should also hold in future data. Typically, the significance is estimated either by Fisher's exact test or the χ²-measure. The problem is computationally very difficult, because the number of all possible dependency rules increases exponentially with the number of attributes. As a solution, different kinds of restrictions and heuristics have been applied, but a general, scalable search method has been missing. In this paper, we introduce an efficient algorithm, called Kingfisher, for searching for the best non-redundant dependency rules with statistical significance measures. The rules can express either positive or negative dependencies between a set of positive attributes and a single consequent attribute. The algorithm itself is independent of the goodness measure used, but we concentrate on Fisher's exact test and the χ²-measure. The algorithm is based on an application of the branch-and-bound search strategy, supplemented by several pruning properties. In particular, we prove a new lower bound for Fisher's p and introduce a new effective pruning principle. According to our experiments on classical benchmark data, the algorithm is well scalable and can efficiently handle even dense and high-dimensional data sets. An interesting observation was that Fisher's exact test not only produced more reliable rules than the χ²-measure, but also made the search much faster.
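The significance measure itself is easy to illustrate: the sketch below computes the one-sided Fisher's exact test p-value of a candidate rule X → A from a 2×2 contingency table. The Kingfisher branch-and-bound search and its pruning bounds are not reproduced, and the toy data are illustrative.

```python
from scipy.stats import fisher_exact

def rule_p_value(data, antecedent, consequent):
    """One-sided Fisher's exact test p-value for the positive dependency rule
    'antecedent -> consequent' on binary transaction data (sets of attributes)."""
    a = sum(1 for t in data if antecedent <= t and consequent in t)      # X and A
    b = sum(1 for t in data if antecedent <= t and consequent not in t)  # X, not A
    c = sum(1 for t in data if not antecedent <= t and consequent in t)  # not X, A
    d = len(data) - a - b - c                                            # neither
    _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
    return p

# toy transactions: attribute sets per record
data = [{"x", "y", "a"}, {"x", "a"}, {"x", "y", "a"}, {"y"}, {"z"}, {"x", "a"}]
print(rule_p_value(data, antecedent={"x"}, consequent="a"))
```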

18.
A considerable amount of work has been done in data clustering research during the last four decades, and a myriad of methods has been proposed, focusing on different data types, proximity functions, cluster representation models, and cluster presentation. However, clustering remains a challenging problem due to its ill-posed nature: it is well known that off-the-shelf clustering methods may discover different patterns in a given set of data, mainly because every clustering algorithm has its own bias resulting from the optimization of different criteria. This bias becomes even more important as, in almost all real-world applications, data is inherently high-dimensional and multiple clustering solutions might be available for the same data collection. In this respect, the problems of projective clustering and clustering ensembles have recently been defined to deal with the high dimensionality and multiple clustering issues, respectively. Nevertheless, although these two issues are often encountered together, existing approaches to the two problems have been developed independently of each other. In our earlier work (Gullo et al., Proceedings of the International Conference on Data Mining (ICDM), 2009a) we introduced a novel clustering problem, called projective clustering ensembles (PCE): given a set (ensemble) of projective clustering solutions, the goal is to derive a projective consensus clustering, i.e., a projective clustering that complies with the information on the object-to-cluster and feature-to-cluster assignments given in the ensemble. In this paper, we extend our previous study and provide theoretical and experimental insights into the PCE problem. PCE is formalized as an optimization problem and is designed to satisfy desirable requirements of independence from the specific clustering ensemble algorithm, the ability to handle hard as well as soft data clustering, and different feature weightings. Two PCE formulations are defined: a two-objective optimization problem, in which the two objective functions respectively account for the object- and feature-based representations of the solutions in the ensemble, and a single-objective optimization problem, in which the object- and feature-based representations are embedded into a single function that measures the distance error between the projective consensus clustering and the projective ensemble. The significance of the proposed methods for solving the PCE problem is shown through an extensive experimental evaluation based on several datasets, in comparison with projective clustering and clustering ensemble baselines.

19.
The construction of a new generation of MEMS that includes micro-assembly steps in the current microfabrication process is a major challenge. It is necessary to develop new production means, called micromanufacturing systems, in order to perform these new assembly steps. The classical “top-down” approach, which consists of a functional analysis and a definition of the task sequences, is insufficient for micromanufacturing systems. Indeed, the technical and physical constraints of the microworld (e.g. the adhesion phenomenon) must be taken into account in order to design reliable micromanufacturing systems. A new method for designing micromanufacturing systems is presented in this paper. Our approach combines the general “top-down” approach with a “bottom-up” approach that takes technical constraints into account. The method makes it possible to build a modular architecture for micromanufacturing systems. In order to obtain this modular architecture, we have devised an original technique for identifying modules and a technique for associating modules. This work has been used to design the controller of an experimental robotic micro-assembly station.

20.
Based on the operational characteristics of small automated high-bay warehouses and the idea of sequential single-objective optimization, a new storage location assignment strategy is proposed. The multi-objective storage location optimization problem, which takes into account storage energy consumption, shelf stability, and operating efficiency, is converted into a single-objective optimization, and a mathematical model for storage location optimization is established. According to the characteristics of the model, a nested partitions algorithm is used to solve it. A numerical example shows that the proposed assignment strategy and optimization method can effectively handle the multi-objective storage location optimization problem, with a significant optimization effect.
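As a hedged illustration of the "sequential single-objective" idea (not the paper's mathematical model or its nested-partitions solver), the sketch below selects a storage location by optimizing the three criteria one after another, keeping only near-optimal candidates between stages; the tolerance and the toy scores are assumptions.

```python
def lexicographic_slot_choice(slots, criteria, tol=0.05):
    """Sequential single-objective selection: optimize the criteria one at a
    time, keeping only slots within a tolerance of the best value before
    moving on to the next criterion."""
    candidates = list(slots)
    for cost in criteria:                       # e.g. [energy, stability, travel]
        best = min(cost(s) for s in candidates)
        candidates = [s for s in candidates if cost(s) <= best + tol]
    return candidates[0]

# toy usage: three candidate slots scored by three criteria (lower is better)
slots = ["A1", "B2", "C3"]
energy    = {"A1": 0.2, "B2": 0.22, "C3": 0.5}.get
stability = {"A1": 0.8, "B2": 0.3,  "C3": 0.4}.get
travel    = {"A1": 0.5, "B2": 0.6,  "C3": 0.1}.get
print(lexicographic_slot_choice(slots, [energy, stability, travel]))   # -> "B2"
```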
