20 similar documents were found (search time: 15 ms)
1.
2.
《Computers & Operations Research》1998,25(12):1097-1106
In routing problems the aim is to determine a minimum-cost traversal of a graph satisfying some specified constraints. Most of these problems are NP-hard, and many different heuristic solution algorithms have been proposed. The name Monte Carlo (MC) applies to a set of heuristic procedures whose common feature is the use of random numbers to simulate a given process. The MC approach has not previously been applied to routing problems in the literature. The purpose of this paper is to demonstrate that MC methods can be useful in implementing heuristic algorithms for routing problems. In particular, we design an efficient MC heuristic algorithm for the well-known Rural Postman Problem (RPP), for which a set of instances with known optimal solutions is available in the literature. The RPP consists of finding a minimum-cost traversal of a specified arc subset of a graph. Since the RPP is NP-hard, heuristic algorithms are of interest both to handle large instances and to provide upper bounds that can be used in branch-and-cut procedures. In this paper we propose a heuristic algorithm for the RPP based on Monte Carlo methods: we simulate a vehicle travelling randomly over the graph, jumping from one node to another according to certain probabilities. Monte Carlo methods provide a simple approach to many different routing problems and are easily implemented in computer code. The application of this algorithm to a set of RPP instances taken from the literature demonstrates that, with appropriate probabilities, they are also efficient.
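The random-vehicle idea in this abstract lends itself to a very small illustration. The following Python sketch is not the authors' algorithm: the graph, the required-edge set, and the probability rule (inverse edge cost, boosted for unserved required edges) are illustrative assumptions, and the code simply keeps the cheapest feasible closed walk found over many random trials.

```python
import random

# Toy undirected graph: node -> {neighbour: edge cost}. Illustrative data only.
GRAPH = {
    0: {1: 2, 2: 3},
    1: {0: 2, 2: 1, 3: 4},
    2: {0: 3, 1: 1, 3: 2},
    3: {1: 4, 2: 2},
}
REQUIRED = {frozenset(e) for e in [(0, 1), (2, 3)]}  # edges that must be traversed

def random_tour(start=0, max_steps=50):
    """One Monte Carlo trial: walk randomly, biased towards cheap edges and
    towards required edges that have not been served yet."""
    node, cost, served, tour = start, 0, set(), [start]
    for _ in range(max_steps):
        if served == REQUIRED and node == start:
            return cost, tour
        neighbours = list(GRAPH[node].items())
        # Weight: inverse cost, boosted if the edge is required and unserved.
        weights = []
        for nxt, c in neighbours:
            w = 1.0 / c
            if frozenset((node, nxt)) in REQUIRED - served:
                w *= 5.0
            weights.append(w)
        nxt, c = random.choices(neighbours, weights=weights)[0]
        served |= {frozenset((node, nxt))} & REQUIRED
        cost += c
        node = nxt
        tour.append(node)
    return None  # trial failed to close a feasible tour in time

best = min(filter(None, (random_tour() for _ in range(2000))), default=None)
print("best cost and tour found:", best)
```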
3.
In heterogeneous computing systems, there is a need for scheduling solutions that can cope with the unavoidable uncertainty in individual task execution times when scheduling DAGs. When such uncertainties occur, static DAG scheduling approaches may suffer, and some rescheduling may be necessary. If the uncertainty in task execution times is modelled stochastically, this information can be used to improve static DAG scheduling considerably. In this paper, a novel DAG scheduling approach based on a Monte Carlo method is proposed to solve this stochastic scheduling problem. The approach is built on top of a classic static DAG scheduling heuristic and evaluated through extensive simulation. Empirical results show that the proposed approach achieves a significant improvement in average application performance at a reasonable execution-time cost.
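As a rough illustration of combining Monte Carlo sampling of task durations with a static list-scheduling heuristic, the sketch below (not the paper's method; the DAG, the normal duration model, and the two candidate priority orders are assumptions) estimates the expected makespan of each candidate schedule and keeps the better one.

```python
import random

# Toy DAG: task -> list of predecessors. Durations are uncertain, modelled here
# as (mean, std) of a normal distribution -- illustrative numbers only.
PREDS = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
DUR = {"A": (2, 0.5), "B": (4, 1.0), "C": (3, 1.5), "D": (1, 0.2)}
N_PROCS = 2

def makespan(order, durations):
    """List-schedule tasks in the given priority order on N_PROCS processors."""
    finish = {}                      # task -> finish time
    proc_free = [0.0] * N_PROCS      # next free time of each processor
    for task in order:               # 'order' must already respect precedence
        ready = max((finish[p] for p in PREDS[task]), default=0.0)
        i = min(range(N_PROCS), key=lambda k: proc_free[k])
        start = max(ready, proc_free[i])
        finish[task] = proc_free[i] = start + durations[task]
    return max(finish.values())

def expected_makespan(order, trials=5000):
    """Monte Carlo estimate of the mean makespan under uncertain durations."""
    total = 0.0
    for _ in range(trials):
        sample = {t: max(0.01, random.gauss(m, s)) for t, (m, s) in DUR.items()}
        total += makespan(order, sample)
    return total / trials

# Compare two precedence-respecting priority orders and keep the better one.
candidates = [["A", "B", "C", "D"], ["A", "C", "B", "D"]]
best = min(candidates, key=expected_makespan)
print("preferred schedule:", best)
```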
4.
T. Binoth, G. Dissertori, A. Denner, R. Frederix, S. Höche, P. Skands, T. Gleisberg, G. Heinrich, D. Maître, J. Huston, F. Maltoni, G. Passarino, S. Pozzorini, S. Schumann 《Computer Physics Communications》2010,181(9):1612-1622
Many highly developed Monte Carlo tools for the evaluation of cross sections based on tree matrix elements exist and are used by experimental collaborations in high energy physics. As the evaluation of one-loop matrix elements has recently been undergoing enormous progress, the combination of one-loop matrix elements with existing Monte Carlo tools is on the horizon. This would lead to phenomenological predictions at the next-to-leading order level. This note summarises the discussion of the next-to-leading order multi-leg (NLM) working group on this issue, which took place during the workshop on Physics at TeV Colliders at Les Houches, France, in June 2009. The result is a proposal for a standard interface between Monte Carlo tools and one-loop matrix element programs. Dedicated to the memory of, and in tribute to, Thomas Binoth, who led the effort to develop this proposal for Les Houches 2009. Thomas led the discussions, set up the subgroups, collected the contributions, and wrote and edited this paper. He made a promise that the paper would be on the arXiv the first week of January, and we are faithfully fulfilling his promise. In his honour, we would like to call this the Binoth Les Houches Accord.
5.
This work presents the current state of the art in techniques for tracking a number of objects moving in a coordinated and interacting fashion. Groups are structured objects characterised by particular motion patterns. A group can comprise a small number of interacting objects (e.g. pedestrians, sports players, a convoy of cars) or hundreds or thousands of components, such as crowds of people. Group object tracking is closely linked with extended object tracking, but it has particular features that differentiate it from extended objects. Extended objects, such as those in maritime surveillance, are characterised by their kinematic states and their size or volume. Both group and extended objects give rise to a varying number of measurements and require trajectory maintenance. An emphasis is given here to sequential Monte Carlo (SMC) methods and their variants. Methods for small groups and for large groups are presented, including Markov chain Monte Carlo (MCMC) methods, the random matrices approach and Random Finite Set Statistics methods. Efficient real-time implementations that can deal with the high dimensionality and provide high accuracy are discussed. Future trends and avenues are traced.
6.
Grid computing is an evolutionary technological achievement that takes advantage of wide-area communication and large-scale resource sharing. Despite these advantages, a grid cannot guarantee the stability of its resources, owing to the nature of the resources and to the diverse failures and error conditions that may appear. Consequently, grid dependability issues arise. In this paper, a grid computing environment with a star topology and direct access to resources is considered, consisting of a Resource Management System (RMS) and distributed Root Nodes (RNs). The distributed RNs are considered either operational or failed, while the RMS is susceptible to resource exhaustion, which can lead to software failures. The response waiting time from a distributed RN is also taken into consideration. Software rejuvenation is adopted to counteract RMS resource exhaustion, and its effect on the performance of the grid environment is studied extensively. A non-Markovian approach is used to model the system's evolution in time and to evaluate the proposed performance measures. Owing to the complicated structure of the system, analytic formulas for the proposed measures are not available, so Monte Carlo simulation methods are employed.
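A crude Monte Carlo illustration of the rejuvenation trade-off described here might look as follows. This is not the paper's non-Markovian model: the degradation process, failure threshold, and downtime costs are invented for the sketch, which only shows how availability can be estimated by simulation for different rejuvenation intervals.

```python
import random

def simulate_availability(rejuv_interval, horizon=10_000.0, trials=200):
    """Crude Monte Carlo estimate of RMS availability for a given rejuvenation
    interval. Degradation grows by random increments; crossing a threshold is a
    software failure (long repair), rejuvenation resets degradation (short stop).
    All numeric parameters are illustrative assumptions."""
    THRESHOLD, REJUV_COST, REPAIR_COST = 50.0, 2.0, 20.0
    total_up = 0.0
    for _ in range(trials):
        t, up, deg, next_rejuv = 0.0, 0.0, 0.0, rejuv_interval
        while t < horizon:
            step = random.expovariate(1.0)        # time to next degradation event
            if t + step >= next_rejuv:            # planned rejuvenation comes first
                up += next_rejuv - t
                t = next_rejuv + REJUV_COST       # short planned downtime
                deg, next_rejuv = 0.0, t + rejuv_interval
                continue
            t += step
            up += step
            deg += random.uniform(0.5, 1.5)       # resource exhaustion grows
            if deg >= THRESHOLD:                  # software failure of the RMS
                t += REPAIR_COST                  # long unplanned repair
                deg = 0.0
                next_rejuv = t + rejuv_interval
        total_up += up / t
    return total_up / trials

for interval in (10.0, 30.0, 60.0):
    print(interval, round(simulate_availability(interval), 4))
```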
7.
Ajay Jasra, Arnaud Doucet, Christopher C. Holmes 《Computational statistics & data analysis》2008,52(4):1765-1791
The methodology of interacting sequential Monte Carlo (SMC) samplers is introduced. SMC samplers are methods for sampling from a sequence of densities on a common measurable space using a combination of Markov chain Monte Carlo (MCMC) and sequential importance sampling/resampling (SIR) methodology. One of the main problems with SMC samplers when simulating from trans-dimensional, multimodal static targets is that the transition kernels do not mix, which leads to low particle diversity. In such situations poor Monte Carlo estimates may be derived. To deal with this problem, an interacting SMC approach for static inference is introduced. The method proceeds by running SMC samplers in parallel on, initially, different regions of the state space and then moving the corresponding samples onto the entire state space. Once the samplers reach a common space they are combined and allowed to interact. The method is intended to increase the diversity of the population of samples. It is established that interacting SMC samplers admit a Feynman-Kac representation; this provides a framework for the convergence results that are developed. In addition, the methodology is demonstrated on a trans-dimensional inference problem in Bayesian mixture modelling and, using adaptive methods, on a mixture modelling problem in population genetics.
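A stripped-down sketch of the ingredients involved (tempered densities, reweighting, resampling, and an MCMC move), with the "interaction" reduced to simply pooling two populations once they target the same density, is given below. The bimodal target, the tempering schedule, and the Gaussian bases are assumptions made for illustration, not the authors' construction.

```python
import math
import random

def target_logpdf(x):
    """Bimodal target: an equal mixture of N(-5,1) and N(5,1) (illustrative)."""
    a = math.exp(-0.5 * (x + 5) ** 2)
    b = math.exp(-0.5 * (x - 5) ** 2)
    return math.log(0.5 * (a + b)) - 0.5 * math.log(2 * math.pi)

def smc_sampler(centre, n=500, betas=(0.0, 0.1, 0.3, 0.6, 1.0), spread=10.0):
    """One SMC sampler: start from a wide Gaussian around `centre`, temper
    towards the common target, reweight, resample, and apply one
    random-walk Metropolis move per particle at each stage."""
    base = lambda x: -0.5 * ((x - centre) / spread) ** 2 - math.log(spread)
    xs = [random.gauss(centre, spread) for _ in range(n)]
    for b_prev, b in zip(betas, betas[1:]):
        # Incremental weights between successive tempered densities
        # pi_b(x) proportional to base(x)^(1-b) * target(x)^b.
        logw = [(b - b_prev) * (target_logpdf(x) - base(x)) for x in xs]
        m = max(logw)
        xs = random.choices(xs, weights=[math.exp(lw - m) for lw in logw], k=n)
        lp = lambda z: (1 - b) * base(z) + b * target_logpdf(z)
        moved = []
        for x in xs:                                    # one Metropolis move
            y = x + random.gauss(0.0, 1.0)
            if math.log(random.random()) < lp(y) - lp(x):
                x = y
            moved.append(x)
        xs = moved
    return xs

# Two samplers started in different regions of the state space; once both
# target the same final density their populations "interact" here in the
# simplest possible way, by being pooled.
pooled = smc_sampler(-5.0) + smc_sampler(5.0)
print("fraction of particles in the right-hand mode:",
      sum(x > 0 for x in pooled) / len(pooled))
```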
8.
Areeg Abdalla, James J. Buckley 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2007,11(10):991-996
We apply our new fuzzy Monte Carlo method to a certain fuzzy linear regression problem to estimate the best solution. The best solution is a vector of triangular fuzzy numbers, for the fuzzy coefficients in the model, which minimizes one of two error measures. We use a quasi-random number generator to produce random sequences of these fuzzy vectors which uniformly fill the search space. We consider an example problem and show this Monte Carlo method obtains the best solution for one error measure and is approximately best for the other error measure.
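To make the quasi-random search concrete, the sketch below uses a hand-rolled Halton generator to fill a box in coefficient space with candidate triangular fuzzy coefficient vectors and keeps the one with the smallest error. The data set, the search box, and the single error measure (squared modal error plus a spread penalty) are illustrative assumptions, not the two measures used in the paper.

```python
def halton(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# Tiny data set (x >= 0 so the triangular fuzzy arithmetic below stays simple).
DATA = [(1.0, 3.1), (2.0, 5.0), (3.0, 6.8), (4.0, 9.2)]
PRIMES = [2, 3, 5, 7, 11, 13]          # one base per search dimension

def error(params):
    """Illustrative error measure: squared distance of the modal value of the
    fuzzy prediction from y, plus a penalty on the total spread."""
    l0, m0, r0, l1, m1, r1 = params
    err = 0.0
    for x, y in DATA:
        lo, mid, hi = l0 + l1 * x, m0 + m1 * x, r0 + r1 * x
        err += (mid - y) ** 2 + 0.1 * (hi - lo)
    return err

def fuzzy_monte_carlo(n=100_000, lo=-2.0, hi=5.0):
    """Quasi-randomly fill the search space with candidate triangular fuzzy
    coefficient vectors and keep the one minimising the error measure."""
    best, best_err = None, float("inf")
    for i in range(1, n + 1):
        u = [lo + (hi - lo) * halton(i, p) for p in PRIMES]
        # Sort each (left, mode, right) triple so it is a valid triangular number.
        params = sorted(u[0:3]) + sorted(u[3:6])
        e = error(params)
        if e < best_err:
            best, best_err = params, e
    return best, best_err

coeffs, err = fuzzy_monte_carlo(20_000)
print("best fuzzy coefficients:", [round(c, 3) for c in coeffs], round(err, 3))
```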
9.
Areeg Abdalla, James J. Buckley 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2009,13(11):1027-1033
In this paper, we consider an optimization problem in fuzzy queuing theory that was first used in web planning. This fuzzy optimization problem has no solution algorithm, and approximate solutions were first produced by computing the fuzzy value of the objective function for only sixteen values of the fuzzy variables. We introduce our fuzzy Monte Carlo method, using a quasi-random number generator, to produce 100,000 random sequences of fuzzy vectors for the fuzzy variables, which yields a much better approximate solution.
10.
F. Bertoli 《International journal of control》2018,91(10):2387-2402
This work considers the stability of nonlinear stochastic receding horizon control when the optimal controller is only computed approximately. A number of general classes of controller approximation error are analysed, including deterministic and probabilistic errors and even controller sample-and-hold errors. In each case, it is shown that the controller approximation errors do not accumulate (even over an infinite time frame) and that the process converges exponentially fast to a small neighbourhood of the origin. In addition to this analysis, an approximation method for receding horizon optimal control based on Monte Carlo simulation is proposed. This method is derived via the Feynman-Kac formula, which gives a stochastic interpretation of the solution of a Hamilton-Jacobi-Bellman equation associated with the true optimal controller. It is shown, and this is a prime motivation for the study, that this particular controller approximation method practically stabilises the underlying nonlinear process.
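As a much simpler stand-in for the Monte Carlo approximation discussed here, the sketch below evaluates a coarse grid of candidate controls by averaging rollout costs of a toy stochastic plant and applies the best first control in receding-horizon fashion. It does not use the Feynman-Kac construction; the plant, cost, and control grid are assumptions.

```python
import random

def step(x, u):
    """Illustrative scalar nonlinear stochastic plant (an assumption, not the
    paper's model): x' = x + 0.1*(-x**3 + u) + noise."""
    return x + 0.1 * (-x ** 3 + u) + random.gauss(0.0, 0.05)

def rollout_cost(x0, u_seq, rollouts=200):
    """Monte Carlo estimate of the expected horizon cost of a control sequence."""
    total = 0.0
    for _ in range(rollouts):
        x, cost = x0, 0.0
        for u in u_seq:
            cost += x ** 2 + 0.1 * u ** 2
            x = step(x, u)
        total += cost + 10.0 * x ** 2          # terminal penalty
    return total / rollouts

def receding_horizon_control(x0, horizon=5, steps=30):
    """At every step, pick the best constant control over the horizon by Monte
    Carlo evaluation, apply only its first element, and repeat."""
    candidates = [u / 2.0 for u in range(-6, 7)]   # coarse control grid
    x = x0
    for _ in range(steps):
        u_star = min(candidates, key=lambda u: rollout_cost(x, [u] * horizon))
        x = step(x, u_star)                        # apply first control, re-plan
    return x

print("state after closed-loop run:", round(receding_horizon_control(2.0), 3))
```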
11.
Martin Feda 《The Visual computer》1996,12(8):390-405
Galerkin radiosity solves the integral rendering equation by projecting the illumination functions into a set of higher-order basis functions. This paper presents a Monte Carlo approach for Galerkin radiosity to compute the coefficients of the basis functions. The new approach eliminates the problems with edge singularities between adjacent surfaces present in conventional Galerkin radiosity, the time complexity is reduced from O(K^4) to O(K^2) for a K-order basis, and ideally specular energy transport can be simulated. As in conventional Galerkin radiosity, no meshing is required even for large or curved surfaces, thus reducing memory requirements, and no a posteriori Gouraud interpolation is necessary. The new algorithm is simple and can be parallelized on any parallel computer, including massively parallel systems.
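The Monte Carlo estimation of Galerkin coefficients can be illustrated on a toy configuration: two parallel, facing unit patches and a Legendre basis in one local coordinate per patch (both choices are assumptions made for the sketch, not the paper's setup). Each coefficient is a double integral of the radiosity kernel times two basis functions, estimated here by uniform sampling on the two patches.

```python
import math
import random

def legendre(k, t):
    """First few Legendre polynomials on [-1, 1]."""
    return [1.0, t, 0.5 * (3 * t * t - 1)][k]

def kernel(x, y):
    """Radiosity kernel cos(theta_x) cos(theta_y) / (pi * r^2) for two parallel,
    facing unit patches at z = 0 and z = 1 (illustrative geometry)."""
    dx, dy, dz = y[0] - x[0], y[1] - x[1], 1.0
    r2 = dx * dx + dy * dy + dz * dz
    return (dz * dz / r2) / (math.pi * r2)

def galerkin_coeff(i, j, samples=50_000):
    """Monte Carlo estimate of the Galerkin coefficient coupling basis function
    i on the receiver with basis function j on the emitter (basis in the local
    u-coordinate only, to keep the sketch one-dimensional per patch)."""
    acc = 0.0
    for _ in range(samples):
        x = (random.random(), random.random())     # point on receiver patch
        y = (random.random(), random.random())     # point on emitter patch
        acc += kernel(x, y) * legendre(i, 2 * x[0] - 1) * legendre(j, 2 * y[0] - 1)
    return acc / samples                             # both patch areas are 1

for i in range(3):
    print([round(galerkin_coeff(i, j), 4) for j in range(3)])
```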
12.
Areeg Abdalla, James J. Buckley 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2008,12(5):463-468
We apply our new fuzzy Monte Carlo method to a certain fuzzy linear regression problem to estimate the best solution. The best solution is a vector of crisp numbers, for the coefficients in the model, which minimizes one of two error measures. We use a quasi-random number generator to produce random sequences of these crisp vectors which uniformly fill the search space. We consider an example problem and show this Monte Carlo method obtains the best solution for both error measures.
13.
14.
Akila Gothandaraman, Gregory D. Peterson, G.L. Warren, Robert J. Hinde, Robert J. Harrison 《Parallel Computing》2008,34(4-5):278-291
Quantum Monte Carlo methods enable us to determine the ground-state properties of atomic or molecular clusters. Here, we present a reconfigurable computing architecture using Field Programmable Gate Arrays (FPGAs) to accelerate two computationally intensive kernels of a Quantum Monte Carlo (QMC) application applied to N-body systems. We focus on two key kernels of the QMC application: acceleration of potential energy and wave function calculations. We compare the performance of our application on two reconfigurable platforms. Firstly, we use a dual-processor 2.4 GHz Intel Xeon augmented with two reconfigurable development boards consisting of Xilinx Virtex-II Pro FPGAs. Using this platform, we achieve a speedup of 3× over a software-only implementation. Following this, the chemistry application is ported to the Cray XD1 supercomputer equipped with Xilinx Virtex-II Pro and Virtex-4 FPGAs. The hardware-accelerated application on one node of the high performance system equipped with a single Virtex-4 FPGA yields a speedup of approximately 25× over the serial reference code running on one node of the dual-processor dual-core 2.2 GHz AMD Opteron. This speedup is mainly attributed to the use of pipelining, the use of fixed-point arithmetic for all calculations and the fine-grained parallelism using FPGAs. We can further enhance the performance by operating multiple instances of our design in parallel.
15.
To achieve more robust and faster target tracking, and inspired by particle-filter-based tracking methods, a new target tracking method based on the Monte Carlo approach is proposed. The method first uses Monte Carlo techniques to sample the possible positions and scales of the target in the next frame; it then computes the similarity between each sample and the reference target; finally, the tracked target is obtained by estimating the target state. Experiments show that the method requires no target motion information and is particularly suitable for tracking targets that move flexibly. Compared with existing algorithms, it is not only simple to implement but also offers good robustness and generality.
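A minimal sketch of this sampling-and-similarity loop is given below. The Gaussian "similarity" function stands in for a real appearance model (e.g. a colour-histogram match), and the synthetic drifting target is invented for the example; the estimate at each frame is the similarity-weighted mean of the sampled candidate states.

```python
import math
import random

def similarity(candidate, frame_target):
    """Stand-in for an appearance similarity score (e.g. a histogram match):
    here simply a Gaussian in the distance between candidate and true target."""
    dx = candidate[0] - frame_target[0]
    dy = candidate[1] - frame_target[1]
    ds = candidate[2] - frame_target[2]
    return math.exp(-(dx * dx + dy * dy) / 50.0 - ds * ds / 0.02)

def track_frame(prev_state, frame_target, n_samples=500):
    """One tracking step: sample candidate (x, y, scale) states around the
    previous estimate, weight them by similarity, return the weighted mean."""
    samples, weights = [], []
    for _ in range(n_samples):
        cand = (prev_state[0] + random.gauss(0, 8.0),
                prev_state[1] + random.gauss(0, 8.0),
                prev_state[2] * math.exp(random.gauss(0, 0.05)))
        samples.append(cand)
        weights.append(similarity(cand, frame_target))
    z = sum(weights) or 1e-12
    return tuple(sum(w * s[k] for w, s in zip(weights, samples)) / z
                 for k in range(3))

# Synthetic sequence: the target drifts right and slowly grows.
state = (100.0, 100.0, 1.0)
for t in range(1, 6):
    truth = (100.0 + 4.0 * t, 100.0 + 1.0 * t, 1.0 + 0.02 * t)
    state = track_frame(state, truth)
    print(f"frame {t}: estimate {tuple(round(v, 1) for v in state)}")
```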
16.
Wireless sensor networks (WSNs) have been widely used in many fields. Node localization is a fundamental problem in WSNs and is the basis and prerequisite for many applications. Owing to the mobility of the sensor nodes, it is more challenging to locate nodes in mobile WSNs than in static ones. The existing localization schemes for mobile WSNs are almost all based on the Sequential Monte Carlo (SMC) localization method. SMC-based schemes may suffer from the low sampling efficiency that results from a large sampling area, which makes it difficult to achieve high localization accuracy and efficiency. Some schemes try to reduce the sampling area by further exploiting the position relationship with neighbouring common nodes, but we have found that the movements of the neighbouring beacon nodes have not been fully exploited. Addressing this issue, in this paper some new constraint rules are developed and some existing constraint rules are optimized by taking into account the moving distance and direction of neighbouring beacons. A series of distance constraint conditions is further created, by which the size of the sampling area can be further reduced and the samples can be filtered more accurately. The performance of our algorithm is evaluated by extensive simulation experiments. The simulation results show that the localization error and computation cost of our proposed algorithm are lower than those of existing schemes, even when the speed of the sensor nodes is relatively high.
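The sampling-area idea can be illustrated with a basic Monte-Carlo-box style step, shown below. It implements only the simple one-hop constraints (a bounding box from the heard beacons plus range filtering); the paper's additional rules based on beacon movement are not modelled, and the radio range and node/beacon positions are illustrative.

```python
import math
import random

RADIO_RANGE = 25.0

def localize(heard_beacons, n_samples=2000):
    """Monte-Carlo-box style localization: build a bounding box from the
    beacons the node can hear, draw samples inside it, keep only samples
    consistent with every beacon's radio range, and average the survivors.
    (A simplified one-hop version of SMC-style localization.)"""
    xs = [b[0] for b in heard_beacons]
    ys = [b[1] for b in heard_beacons]
    box = (max(x - RADIO_RANGE for x in xs), min(x + RADIO_RANGE for x in xs),
           max(y - RADIO_RANGE for y in ys), min(y + RADIO_RANGE for y in ys))
    kept = []
    while len(kept) < n_samples:
        p = (random.uniform(box[0], box[1]), random.uniform(box[2], box[3]))
        if all(math.hypot(p[0] - bx, p[1] - by) <= RADIO_RANGE
               for bx, by in heard_beacons):
            kept.append(p)
    return (sum(p[0] for p in kept) / n_samples,
            sum(p[1] for p in kept) / n_samples)

# Unknown node at (50, 40) hears three beacons; estimate its position.
beacons = [(40.0, 30.0), (65.0, 45.0), (45.0, 60.0)]
print("estimated position:", tuple(round(v, 1) for v in localize(beacons)))
```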
17.
It is well known that there is no analytic expression for the electrical capacitance of the unit cube. However, there are several Monte Carlo methods that have been used to numerically estimate this capacitance to high accuracy. These include a Brownian dynamics algorithm [H.-X. Zhou, A. Szabo, J.F. Douglas, J.B. Hubbard, A Brownian dynamics algorithm for calculating the hydrodynamic friction and the electrostatic capacitance of an arbitrarily shaped object, J. Chem. Phys. 100 (5) (1994) 3821–3826] coupled to the “walk on spheres” (WOS) method [M.E. Müller, Some continuous Monte Carlo methods for the Dirichlet problem, Ann. Math. Stat. 27 (1956) 569–589]; the Green’s function first-passage (GFFP) algorithm [J.A. Given, J.B. Hubbard, J.F. Douglas, A first-passage algorithm for the hydrodynamic friction and diffusion-limited reaction rate of macromolecules, J. Chem. Phys. 106 (9) (1997) 3721–3771]; an error-controlling Brownian dynamics algorithm [C.-O. Hwang, M. Mascagni, Capacitance of the unit cube, J. Korean Phys. Soc. 42 (2003) L1–L4]; an extrapolation technique coupled to the WOS method [C.-O. Hwang, Extrapolation technique in the “walk on spheres” method for the capacitance of the unit cube, J. Korean Phys. Soc. 44 (2) (2004) 469–470]; the “walk on planes” (WOP) method [M.L. Mansfield, J.F. Douglas, E.J. Garboczi, Intrinsic viscosity and the electrical polarizability of arbitrarily shaped objects, Phys. Rev. E 64 (6) (2001) 061401:1–061401:16; C.-O. Hwang, M. Mascagni, Electrical capacitance of the unit cube, J. Appl. Phys. 95 (7) (2004) 3798–3802]; and the random “walk on the boundary” (WOB) method [M. Mascagni, N.A. Simonov, The random walk on the boundary method for calculating capacitance, J. Comp. Phys. 195 (2004) 465–473]. Monte Carlo methods are convenient and efficient for problems whose solution includes singularities. In the calculation of the unit cube capacitance, there are edge and corner singularities in the charge density distribution. In this paper, we review the above Monte Carlo methods for computing the electrical capacitance of a cube and compare their effectiveness. We also provide a new result. We will focus our attention particularly on two Monte Carlo methods: WOP [M.L. Mansfield, J.F. Douglas, E.J. Garboczi, Intrinsic viscosity and the electrical polarizability of arbitrarily shaped objects, Phys. Rev. E 64 (6) (2001) 061401:1–061401:16; C.-O. Hwang, M. Mascagni, Electrical capacitance of the unit cube, J. Appl. Phys. 95 (7) (2004) 3798–3802; C.-O. Hwang, T. Won, Edge charge singularity of conductors, J. Korean Phys. Soc. 45 (2004) S551–S553] and the random WOB [M. Mascagni, N.A. Simonov, The random walk on the boundary method for calculating capacitance, J. Comp. Phys. 195 (2004) 465–473] methods.
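A compact walk-on-spheres sketch for this problem is given below. It uses the standard launch-sphere construction (capacitance, in units of 4πε0, equals the launch radius times the probability of hitting the cube before escaping to infinity); returning not-yet-escaped walkers to a uniform point on the launch sphere is a simplification that introduces a small bias, and the launch radius and absorption tolerance are choices made for the sketch.

```python
import math
import random

HALF = 0.5          # cube [-0.5, 0.5]^3, i.e. the unit cube centred at the origin
LAUNCH_R = 2.0      # radius of the launch sphere enclosing the cube
EPS = 1e-4          # absorption layer thickness

def dist_to_cube(p):
    """Distance from an exterior point to the surface of the unit cube."""
    return math.sqrt(sum(max(abs(c) - HALF, 0.0) ** 2 for c in p))

def random_on_sphere(radius):
    """Uniform random point on a sphere of the given radius around the origin."""
    g = [random.gauss(0.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in g))
    return [radius * c / n for c in g]

def one_walk():
    """Return True if a walker launched from the launch sphere hits the cube
    before escaping to infinity."""
    p = random_on_sphere(LAUNCH_R)
    while True:
        d = dist_to_cube(p)
        if d < EPS:
            return True                             # absorbed on the cube
        r = math.sqrt(sum(c * c for c in p))
        if r > LAUNCH_R:
            if random.random() > LAUNCH_R / r:
                return False                         # escaped to infinity
            # Otherwise put the walker back on the launch sphere. Using a
            # uniform point is a simplification; the exact re-entry density
            # is not uniform, so this introduces a small bias.
            p = random_on_sphere(LAUNCH_R)
            continue
        step = random_on_sphere(d)                   # walk-on-spheres jump
        p = [a + b for a, b in zip(p, step)]

def capacitance(n_walks=100_000):
    """C (in units of 4*pi*eps0) = launch radius times the hitting probability."""
    hits = sum(one_walk() for _ in range(n_walks))
    return LAUNCH_R * hits / n_walks

print("estimated capacitance of the unit cube:", round(capacitance(20_000), 4))
```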
18.
Financial Monte Carlo simulations are computationally intensive applications that must meet tight deadlines in terms of job completion times. The completion time might have a huge impact on the financial profits made from decisions derived from the simulation results. Naturally, there is a huge interest in being able to simulate as fast as possible. While single simulations can be done on one machine, decisions often depend on portfolios of simulations. Distributing the workload among resources is crucial to achieve low latency. In this article we present a combination of a middleware with a high-performance implementation of an Asian options evaluation code on the Cell Broadband Engine (CBE). We handle workload distribution with our PHASTGrid middleware and provide users with a web service interface to the whole infrastructure. The CBE is particularly suitable for Monte Carlo simulations. We implemented a well-known algorithm on both the CBE and the Intel x86 multicore architectures. Both codes are integrated in our middleware, allowing a direct comparison of the performance and scalability. In addition to the Monte Carlo simulation, we also use different applications and compare our middleware with Globus. Copyright © 2009 John Wiley & Sons, Ltd.
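For reference, a plain single-machine Monte Carlo pricer for an arithmetic-average Asian call under geometric Brownian motion is sketched below; the model and parameters are the standard textbook ones and are not taken from the article, which concerns distributing such workloads rather than the pricing kernel itself.

```python
import math
import random

def asian_call_mc(s0=100.0, strike=100.0, r=0.05, sigma=0.2, T=1.0,
                  n_steps=252, n_paths=50_000):
    """Monte Carlo price of an arithmetic-average Asian call under
    geometric Brownian motion (standard textbook setup; parameters are
    illustrative, not those used in the article)."""
    dt = T / n_steps
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, running = s0, 0.0
        for _ in range(n_steps):
            s *= math.exp(drift + vol * random.gauss(0.0, 1.0))
            running += s
        average = running / n_steps
        payoff_sum += max(average - strike, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

print("Asian call price estimate:", round(asian_call_mc(n_paths=10_000), 3))
```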
19.