Similar Documents
20 similar documents found (search time: 46 ms)
1.
Sokół and Lewiński (Struct Multidisc Optim 42:835–853, 2010) published a paper on Michell trusses for three forces in the plane, with an application to a class of symmetric trusses. Vázquez Espí and Cervera Bravo (2011) wrote a Discussion on the above paper, to which Sokół and Lewiński (2011) replied. In the current Discussion, some comments on the above exchange are offered, because it involves some fundamental issues of topology optimization.

2.
The commonly used one-step methods and linear multi-step methods all have a global error of the same order as the local truncation error (as defined in [1, 6, 8, 13, 15]). In fact, this is true of the entire class of general linear methods. In practice, this means that the order of the method is typically defined solely by order conditions derived by studying the local truncation error. In this work we investigate the interplay between the local truncation error and the global error, and develop a methodology for constructing explicit error-inhibiting block one-step methods (alternatively written as explicit general linear methods [2]). These error-inhibiting schemes are constructed so that the accumulation of the local truncation error over time is controlled, which results in a global error that is one order higher than the local truncation error. We delineate how to carefully choose the coefficient matrices so that the growth of the local truncation error is inhibited. We then use this theoretical understanding to construct several methods whose global error is of higher order than their local truncation error, and demonstrate their enhanced order of accuracy on test cases. These methods show that the error-inhibiting concept is realizable. Future work will further develop new error-inhibiting methods and analyze their computational efficiency and linear stability properties.
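To make the order distinction above concrete, the following minimal Python sketch (our illustration, not one of the paper's error-inhibiting schemes) measures the observed global order of forward Euler on a model problem; for standard methods it coincides with the order of the local truncation error, which is exactly the coupling the error-inhibiting construction is designed to break.

```python
import numpy as np

def euler_global_error(f, y0, y_exact, T, n_steps):
    """Integrate y' = f(t, y) with forward Euler; return the global error at T."""
    t, y, h = 0.0, y0, T / n_steps
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t += h
    return abs(y - y_exact(T))

# Model problem y' = -y, y(0) = 1, with exact solution e^{-t}.
f = lambda t, y: -y
y_exact = lambda t: np.exp(-t)

# Observed order log2(e_h / e_{h/2}) approaches 1 for Euler: the global
# error has the same order as the local truncation error.
for n in (40, 80, 160, 320):
    e1 = euler_global_error(f, 1.0, y_exact, 2.0, n)
    e2 = euler_global_error(f, 1.0, y_exact, 2.0, 2 * n)
    print(f"n={n:4d}  error={e1:.3e}  observed order={np.log2(e1 / e2):.2f}")
```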

3.
In this paper, we study the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):675–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for second-order elliptic problems. A priori error estimates under the energy norm are established for all four methods. Optimal error estimates under the \(L^2\) norm are obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples is carried out to illustrate the accuracy and capability of the schemes. Numerically we obtain optimal \((k+1)\)th-order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal \((k+1)\)th-order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.
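For context, optimal \((k+1)\)th-order claims of this kind are typically verified by computing experimental orders of convergence from errors on successively refined meshes. A minimal sketch follows; the error values are hypothetical stand-ins, not the paper's measurements.

```python
import math

def observed_orders(h, err):
    """Experimental orders of convergence log(e_i/e_{i+1}) / log(h_i/h_{i+1})."""
    return [math.log(err[i] / err[i + 1]) / math.log(h[i] / h[i + 1])
            for i in range(len(err) - 1)]

# Hypothetical L2 errors on a sequence of refined meshes for a k = 2 scheme;
# an optimal method should show order k+1 = 3.
h   = [1/4, 1/8, 1/16, 1/32]
err = [2.1e-3, 2.7e-4, 3.4e-5, 4.3e-6]
print(observed_orders(h, err))   # ≈ [2.96, 2.99, 2.98]
```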

4.
State-based formal methods [e.g. Event-B/RODIN (Abrial in Modeling in Event-B—system and software engineering. Cambridge University Press, Cambridge, 2010; Abrial et al. in Int J Softw Tools Technol Transf (STTT) 12(6):447–466, 2010)] for critical system development and verification are now well established, with track records including tool support and industrial applications. The focus of proof-based verification, in particular, is on safety properties. Liveness properties, which guarantee eventual or converging computation of some requirements, are less well dealt with: inductive reasoning about liveness is not explicitly supported, and liveness proofs are often complex and expensive, requiring a high skill level on the part of the verification engineer. Fairness-based temporal logic approaches have been proposed to address this, e.g. TLA (Lamport in ACM Trans Program Lang Syst 16(3):872–923, 1994) and that of Manna and Pnueli (Temporal verification of reactive systems—safety. Springer, New York, 1995). We contribute to this technology need by proposing a fairness-based method integrating temporal and first-order logic, proof, and tools for modelling and verification of safety and liveness properties. The method is based on an integration of Event-B and TLA. Building on our previous work (Méry and Poppleton in Integrated formal methods, 10th international conference, IFM 2013, Turku, Finland, pp 208–222, 2013. doi: 10.1007/978-3-642-38613-8_15), we present the method via three example population protocols (Angluin et al. in Distrib Comput 18(4):235–253, 2006). These were proposed as a theoretical framework for computability reasoning about Wireless Sensor Network and Mobile Ad-Hoc Network algorithms. Our examples present typical liveness and convergence requirements. We prove convergence results for the examples by integrated modelling and proof with Event-B/RODIN and TLA. We exploit existing proof rules, and define and apply three new proof rules; soundness proofs are also provided. During the process we observe certain repeating patterns in the proofs, which are easily identified and reused because of the explicit nature of the reasoning.

5.
Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate on the experimental role of computer simulations (Parker in Synthese 169(3):483–496, 2009; Morrison in Philos Stud 143(1):33–57, 2009), the nature of computer data (Barberousse and Vorms, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013; Humphreys, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013), and the explanatory power of computer simulations (Krohs in Int Stud Philos Sci 22(3):277–292, 2008; Durán in Int Stud Philos Sci 31(1):27–45, 2017). The aim of this article is to show that these authors are right in assuming that results of computer simulations are to be trusted when computer simulations are reliable processes. After a short reconstruction of the problem of epistemic opacity, the article elaborates extensively on computational reliabilism, a specified form of process reliabilism with computer simulations located at the center. The article ends with a discussion of four sources for computational reliabilism, namely, verification and validation, robustness analysis for computer simulations, a history of (un)successful implementations, and the role of expert knowledge in simulations.

6.
The evolutionary structural optimization (ESO) method developed by Xie and Steven (Comput Struct 49(5):885–896, 1993), an important branch of topology optimization, has undergone tremendous development over the past decades. Among all its variants, the convergent and mesh-independent bi-directional evolutionary structural optimization (BESO) method developed by Huang and Xie (Finite Elem Anal Des 43(14):1039–1049, 2007), which allows both material removal and addition, has become a widely adopted design methodology for both academic research and engineering applications because of its efficiency and robustness. This paper presents a comprehensive review of the development of ESO-type methods; in particular, the latest convergent and mesh-independent BESO method is highlighted. Recent applications of the BESO method to the design of advanced structures and materials are summarized. Compact Matlab codes using the BESO method for benchmark structural and material microstructural designs are also provided.
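As an illustration of the bi-directional idea — material can be removed and added in the same update — the following Python sketch shows a hard-kill, BESO-style volume update driven by ranking elemental sensitivities. It is our simplified stand-in (random numbers replace FE-based sensitivities, and the filtering and history-averaging that give the method its convergence and mesh-independence are omitted), not the Matlab code provided with the paper.

```python
import numpy as np

def beso_update(density, sensitivity, target_frac, er=0.02):
    """One hard-kill BESO-style update (illustrative sketch only).

    density     : 0/1 array of element design variables
    sensitivity : elemental sensitivity numbers (higher = more useful material)
    target_frac : final volume fraction
    er          : evolutionary ratio, the volume removed per iteration
    """
    current_frac = density.mean()
    next_frac = max(target_frac, current_frac * (1.0 - er))
    n_keep = int(round(next_frac * density.size))
    # Keep the n_keep elements with the largest sensitivities; because the
    # ranking is over ALL elements, material can be removed and added.
    threshold = np.sort(sensitivity)[::-1][n_keep - 1]
    return (sensitivity >= threshold).astype(float)

rng = np.random.default_rng(0)
rho = np.ones(100)
for _ in range(40):
    alpha = rng.random(100)          # stand-in for FE-based sensitivities
    rho = beso_update(rho, alpha, target_frac=0.5)
print(rho.mean())                    # settles at the 0.5 volume fraction
```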

7.
8.
We present a Matlab implementation of topology optimization for fluid flow problems in the educational computer code PolyTop (Talischi et al. 2012b). The underlying formulation is the well-established porosity approach of Borrvall and Petersson (2003), wherein a dissipative term is introduced to impede the flow in the solid (non-fluid) regions. Polygonal finite elements are used to obtain a stable low-order discretization of the governing Stokes equations for incompressible viscous flow. As a result, the same mesh represents the design field as well as the velocity and pressure fields that characterize its response. Owing to the modular structure of PolyTop, incorporating new physics, in this case modeling fluid flow, involves changes that are limited mainly to the analysis routine. We provide several numerical examples to illustrate the capabilities and use of the code. To illustrate the modularity of the present approach, we extend the implementation to accommodate alternative formulations and cost functions. These include topology optimization formulations where both viscosity and inverse permeability are functions of the design; and flow control where the velocity at a certain location in the domain is maximized in a prescribed direction.
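For readers unfamiliar with the porosity approach, the dissipative term enters through a design-dependent inverse permeability. A common convex interpolation in the spirit of Borrvall and Petersson (2003) is sketched below in Python; the bounds and the curvature parameter q are illustrative assumptions, not the paper's settings.

```python
def inverse_permeability(rho, alpha_min=0.0, alpha_max=1e4, q=0.1):
    """Convex interpolation of the Brinkman-type inverse permeability.

    rho = 1 is fluid (alpha ~ alpha_min, flow unimpeded); rho = 0 is solid
    (alpha ~ alpha_max, a large dissipative term impedes the flow). The
    parameter q controls the curvature of the interpolation.
    """
    return alpha_max + (alpha_min - alpha_max) * rho * (1.0 + q) / (rho + q)

for rho in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"rho={rho:.2f}  alpha={inverse_permeability(rho):.1f}")
```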

9.
This paper addresses robust and ultrafast pose tracking on mobile devices, such as smartphones and small drones. Existing methods, relying on either vision analysis or inertial sensing, are either too computationally heavy to achieve real-time performance on a mobile platform, or not sufficiently robust to address the unique challenges of mobile scenarios, including rapid camera motions and the long exposure times of mobile cameras. This paper presents a novel hybrid tracking system which utilizes on-device inertial sensors to greatly accelerate the visual feature tracking process and improve its robustness. In particular, our system adaptively resizes each video frame based on inertial sensor data and applies a highly efficient binary feature matching method to track the object pose in each resized frame with little accuracy degradation. This tracking result is revised periodically by a model-based feature tracking method (Hare et al. 2012) to reduce accumulated errors. Furthermore, an inertial tracking method and a solution for fusing its results with the feature tracking results are employed to further improve robustness and efficiency. We first evaluate our hybrid system using a dataset consisting of 16 video clips with synchronized inertial sensing data, and then assess its performance in a mobile augmented reality application. Experimental results demonstrate our method's superior performance to a state-of-the-art feature tracking method (Hare et al. 2012), a direct tracking method (Engel et al. 2014) and the Vuforia SDK (Ibañez and Figueras 2013), and show that it can run at more than 40 Hz on a standard smartphone. We will release the source code with the publication of this paper.
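The adaptive resizing idea can be pictured with a small heuristic: the faster the device rotates, the more the frame can be downscaled before feature matching, since rapid motion blurs fine detail anyway. The mapping below is entirely hypothetical — the thresholds and the linear ramp are our assumptions, not the paper's rule.

```python
def frame_scale(gyro_rate, slow=0.3, fast=3.0, min_scale=0.25):
    """Hypothetical resize heuristic: fast rotation (rad/s) -> smaller frame.

    Rapid motion blurs fine detail on long-exposure mobile cameras, so
    tracking a downscaled frame loses little accuracy while saving compute.
    """
    if gyro_rate <= slow:
        return 1.0
    if gyro_rate >= fast:
        return min_scale
    t = (gyro_rate - slow) / (fast - slow)
    return 1.0 + t * (min_scale - 1.0)   # linear interpolation in between

for w in (0.1, 1.0, 2.0, 4.0):
    print(f"gyro={w:.1f} rad/s  scale={frame_scale(w):.2f}")
```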

10.
11.
We introduce a family of generalized prolate spheroidal wave functions (PSWFs) of order \(-1,\) and develop new spectral schemes for second-order boundary value problems. Our technique differs from the differentiation approach based on PSWFs of order zero in Kong and Rokhlin (Appl Comput Harmon Anal 33(2):226–260, 2012); in particular, our orthogonal basis can naturally include homogeneous boundary conditions without the re-orthogonalization of Kong and Rokhlin (2012). More notably, it leads to diagonal systems or direct “explicit” solutions to 1D Helmholtz problems in various situations. Using a rule optimally pairing the bandwidth parameter and the number of basis functions as in Kong and Rokhlin (2012), we demonstrate that the new method significantly outperforms the Legendre spectral method in approximating highly oscillatory solutions. We also conduct a rigorous error analysis of this new scheme. The idea and analysis can be extended to generalized PSWFs of negative integer order for higher-order boundary value and eigenvalue problems.

12.
Some numerical algorithms for elliptic eigenvalue problems are proposed, analyzed, and numerically tested. The methods combine advantages of the two-grid algorithm (Xu and Zhou in Math Comput 70(233):17–25, 2001), the two-space method (Racheva and Andreev in Comput Methods Appl Math 2:171–185, 2002), the shifted inverse power method (Hu and Cheng in Math Comput 80:1287–1301, 2011; Yang and Bi in SIAM J Numer Anal 49:1602–1624, 2011), and the polynomial-preserving recovery enhancing technique (Naga et al. in SIAM J Sci Comput 28:1289–1300, 2006). Our new algorithms compare favorably with some existing methods and enjoy a superconvergence property.
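Of the ingredients listed, the shifted inverse power method is the easiest to picture in isolation. The dense-matrix Python sketch below is a toy illustration on a 1D Laplacian, not the paper's combined two-grid/two-space algorithm.

```python
import numpy as np

def shifted_inverse_power(A, shift, iters=30):
    """Shifted inverse power iteration: converges to the eigenpair of A
    whose eigenvalue is closest to `shift` (illustrative dense sketch)."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    M = A - shift * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(M, x)     # one inverse iteration step
        x /= np.linalg.norm(x)
    return x @ A @ x, x               # Rayleigh quotient and eigenvector

# FD discretization of -u'' on (0,1) with n interior points; its exact
# eigenvalues are (4/h^2) * sin^2(k*pi*h/2), the smallest being ~ pi^2.
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
lam, _ = shifted_inverse_power(A, shift=9.0)
print(lam, (4 / h**2) * np.sin(np.pi * h / 2) ** 2)
```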

13.
XGC1 and M3D-\(C^1\) are two fusion plasma simulation codes being developed at Princeton Plasma Physics Laboratory. XGC1 uses the particle-in-cell method to simulate gyrokinetic neoclassical physics and turbulence (Chang et al. Phys Plasmas 16(5):056108, 2009; Ku et al. Nucl Fusion 49:115021, 2009; Adams et al. J Phys 180(1):012036, 2009). M3D-\(C^1\) solves the two-fluid resistive magnetohydrodynamic equations with \(C^1\) finite elements (Jardin in J Comput Phys 200(1):133–152, 2004; Jardin et al. in J Comput Phys 226(2):2146–2174, 2007; Ferraro and Jardin in J Comput Phys 228(20):7742–7770, 2009; Jardin in J Comput Phys 231(3):832–838, 2012; Jardin et al. in Comput Sci Discov 5(1):014002, 2012; Ferraro et al. in Sci Discov Adv Comput, 2012; Ferraro et al. in International Sherwood Fusion Theory Conference, 2014). This paper presents the software tools and libraries that were combined to form the geometry and automatic meshing procedures for these codes. Specific consideration has been given to satisfying the mesh configuration and element shape quality constraints of XGC1 and M3D-\(C^1\).

14.
In this paper we develop a parallel method for solving possibly non-convex time-dependent Hamilton–Jacobi equations arising from optimal control and differential game problems. The subproblems are independent, so they can be implemented in an embarrassingly parallel fashion, which usually has an ideal parallel speedup. The algorithm is proposed to overcome the curse of dimensionality (Bellman in Adaptive control processes: a guided tour. Princeton University Press, Princeton, 1961; Dynamic programming. Princeton University Press, Princeton, 1957) when solving HJ PDEs. We extend the previous work of Chow et al. (Algorithm for overcoming the curse of dimensionality for certain non-convex Hamilton–Jacobi equations, projections and differential games, UCLA CAM report 16-27, 2016) and Darbon and Osher (Algorithms for overcoming the curse of dimensionality for certain Hamilton–Jacobi equations arising in control theory and elsewhere, UCLA CAM report 15-50, 2015), and apply a generalized Hopf formula to solve HJ PDEs involving time-dependent and possibly non-convex Hamiltonians. We suggest a coordinate descent method for the minimization procedure in the Hopf formula. This method is preferable when even the evaluation of the function value itself requires some computational effort, and also when handling higher-dimensional optimization problems. For the integral with respect to time inside the generalized Hopf formula, we suggest using a numerical quadrature rule. Together with our suggestion to perform numerical differentiation to minimize the number of calculation procedures in each iteration step, we are bound to have numerical errors in our computations. These errors can be effectively controlled by choosing an appropriate mesh size in time; the method does not use a mesh in space. The use of multiple initial guesses is suggested to overcome possibly multiple local extrema when non-convex Hamiltonians are considered. Our method is expected to have wide application in control theory and differential game problems, and elsewhere.
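A minimal sketch of the minimization step may help. For a time-independent convex Hamiltonian, the (non-generalized) Hopf formula reads \(u(x,t)=-\min_p\{g^*(p)+tH(p)-\langle x,p\rangle\}\), and the inner minimization can be done by coordinate descent with numerical derivatives, as suggested above. The Python code below illustrates this on a quadratic example with a known exact solution; it is a simplified stand-in for the paper's generalized, time-dependent setting, and the step size and sweep count are illustrative.

```python
import numpy as np

def hopf_value(x, t, H, g_star, sweeps=200, h=1e-4, step=0.5):
    """Evaluate u(x,t) = -min_p [ g*(p) + t*H(p) - <x,p> ] via coordinate
    descent with central-difference derivatives (sketch for the convex,
    time-independent case)."""
    obj = lambda p: g_star(p) + t * H(p) - x @ p
    p = np.zeros_like(x)
    for _ in range(sweeps):
        for i in range(len(p)):                # one coordinate at a time
            e = np.zeros_like(p); e[i] = h
            grad_i = (obj(p + e) - obj(p - e)) / (2 * h)
            p[i] -= step * grad_i
    return -obj(p)

# H(p) = |p|^2/2 and g(x) = |x|^2/2 (so g*(p) = |p|^2/2); the exact
# viscosity solution is u(x,t) = |x|^2 / (2*(1+t)).
x, t = np.array([1.0, -2.0, 0.5]), 0.7
H = g_star = lambda p: 0.5 * p @ p
print(hopf_value(x, t, H, g_star), (x @ x) / (2 * (1 + t)))
```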

15.
The objective of this paper is to focus on one of the “building blocks” of additive manufacturing technologies, namely selective laser-processing of particle-functionalized materials. Following a series of works by Zohdi (Int J Numer Methods Eng 53:1511–1532, 2002; Philos Trans R Soc Math Phys Eng Sci 361(1806):1021–1043, 2003; Comput Methods Appl Mech Eng 193(6–8):679–699, 2004; Comput Methods Appl Mech Eng 196:3927–3950, 2007; Int J Numer Methods Eng 76:1250–1279, 2008; Comput Methods Appl Mech Eng 199:79–101, 2010; Arch Comput Methods Eng 1–17. doi: 10.1007/s11831-013-9092-6, 2013; Comput Mech Eng Sci 98(3):261–277, 2014; Comput Mech 54:171–191, 2014; J Manuf Sci Eng ASME doi: 10.1115/1.4029327, 2015; CIRP J Manuf Sci Technol 10:77–83, 2015; Comput Mech 56:613–630, 2015; Introduction to computational micromechanics. Springer, Berlin, 2008; Introduction to the modeling and simulation of particulate flows. SIAM (Society for Industrial and Applied Mathematics), Philadelphia, 2007; Electromagnetic properties of multiphase dielectrics: a primer on modeling, theory and computation. Springer, Berlin, 2012), a laser-penetration model, in conjunction with a Finite Difference Time Domain method using an immersed microstructure method, is developed. Because optical, thermal and mechanical multifield coupling is present, a recursive, staggered, temporally-adaptive scheme is developed to resolve the internal microstructural fields. The time-step adaptation allows the numerical scheme to iteratively resolve the changing physical fields, refining the time-steps during phases of the process when the system undergoes large changes on a relatively small time-scale and enlarging them when the processes are relatively slow. The spatial discretization grids are uniform and dense enough to capture fine-scale changes in the fields. The microstructure is embedded into the spatial discretization, and the regular grid allows one to generate a matrix-free iterative formulation which is amenable to rapid computation with minimal memory requirements, making it ideal for laptop computation. Numerical examples are provided to illustrate the modeling and simulation approach, which, by design, is straightforward to implement computationally, so that it can be easily utilized by researchers in the field. More advanced conduction models, based on thermal relaxation, which are a key feature of fast-pulsing laser technologies, are also discussed.
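The time-step adaptation logic can be sketched generically: take a staggered step by iterating the coupled fields to a fixed point, then refine or enlarge \(\Delta t\) according to how hard that iteration had to work. The Python example below uses two scalar stand-in fields and illustrative controller constants (the iteration targets, growth/shrink factors and bounds are our assumptions, not the paper's values).

```python
def staggered_step(u, v, dt, tol=1e-10, max_iter=50):
    """One staggered implicit step for two coupled scalar fields (toy
    stand-ins for the thermal/optical/mechanical fields): iterate the
    coupled updates to a fixed point and report the iterations used."""
    u_new, v_new = u, v
    for k in range(1, max_iter + 1):
        u_next = u + dt * (-5.0 * u_new + 2.0 * v_new)   # field 1 update
        v_next = v + dt * (3.0 * u_new - 4.0 * v_new)    # field 2 update
        delta = abs(u_next - u_new) + abs(v_next - v_new)
        u_new, v_new = u_next, v_next
        if delta < tol:
            return u_new, v_new, k
    return u_new, v_new, max_iter

def adapt_dt(dt, iters_used, target=8, grow=1.2, shrink=0.5,
             dt_min=1e-9, dt_max=1e-3):
    """Refine the step when the coupled-field iteration works hard; enlarge
    it when convergence is fast (illustrative controller constants)."""
    if iters_used > target:
        dt *= shrink
    elif iters_used < target // 2:
        dt *= grow
    return min(max(dt, dt_min), dt_max)

t, dt, u, v = 0.0, 1e-4, 1.0, 0.0
while t < 0.05:
    u, v, k = staggered_step(u, v, dt)
    t += dt
    dt = adapt_dt(dt, k)
print(f"t={t:.3f}  u={u:.4f}  v={v:.4f}  final dt={dt:.2e}")
```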

16.
Despite a growing number of studies in stochastic dynamic network optimization, the field remains less well defined and unified than other areas of network optimization. Because approximation methods like approximate dynamic programming are needed, one of the most significant unsolved problems is the lack of adequate benchmarks: the values of the perfect information policy and the static policy are not sensitive to information propagation, while the myopic policy does not distinguish network effects in the value of flexibility. We propose a scalable reference policy value defined from theoretically consistent real option values based on sampled sequences, and estimate it using extreme value distributions. The reference policy is evaluated on an existing network instance with known sequences (the Sioux Falls network from Chow and Regan 2011a): the Weibull distribution demonstrates good fit and sampling consistency with more than 200 samples. The reference policy is further applied in computational experiments with two other types of adaptive network design: a facility location and timing problem on the Simchi-Levi and Berman (1988) network, and Hyytiä et al.’s (2012) dynamic dial-a-ride problem. The former experiment represents an application of a new problem class and the use of the reference policy as an upper bound for evaluating sampled policies, which can reach a 3 % gap with 350 samples. The latter experiment demonstrates that sensitivity to parameters may be greater than expected, particularly when benchmarked against the proposed reference policy.
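A sketch of the estimation step, under our own assumptions about the details: sampled policy values are fitted with a Weibull (extreme value) distribution and a tail quantile is read off as the reference value. Both the synthetic sample and the 0.99 quantile below are illustrative, not the study's procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Stand-in for sampled policy values over scenario sequences (hypothetical
# numbers; the study reports consistency with 200+ samples on Sioux Falls).
samples = rng.weibull(2.0, size=300) * 10.0 + 50.0

# Fit a 3-parameter Weibull and take an upper quantile as the estimate.
shape, loc, scale = stats.weibull_min.fit(samples)
reference_value = stats.weibull_min.ppf(0.99, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f} loc={loc:.2f} scale={scale:.2f} "
      f"reference≈{reference_value:.2f}")
```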

17.
This paper confirms that, as originally reported in Seneta (Journal of Applied Probability 41:177–187, 2004, p. 183), it is impossible to replicate the Madan et al. (European Finance Review 2:135–156, 1998) results using log daily returns on the S&P 500 Index from January 1992 to September 1994. This failure leads to a close investigation of the computational problems associated with finding maximum likelihood estimates of the parameters of the popular VG model. Standard econometric software such as R, programming languages such as Matlab®, and non-standard optimization software such as Ezgrad, described in Tucci (Journal of Economic Dynamics and Control 26:1739–1764, 2002), are all used. The complexity of the log-likelihood function is studied. It is shown to be very complicated, with many local optima, and it may be incredibly sensitive to very small changes in the sample used: adding or removing a single observation may cause huge changes both in the maximum of the log-likelihood function and in the estimated parameter values. An intuitive procedure which works nicely both when implemented in R and in Matlab® is presented.
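The sample-sensitivity experiment is easy to reproduce in outline: refit the MLE after deleting single observations and compare the estimates. The Python sketch below uses a Student-t negative log-likelihood as a stand-in objective (the VG density, which requires Bessel functions, is deliberately omitted), so the numbers are illustrative only.

```python
import numpy as np
from scipy import optimize, stats

def mle(nll, x0, data):
    """Maximize a likelihood by minimizing the negative log-likelihood."""
    res = optimize.minimize(nll, x0, args=(data,), method="Nelder-Mead")
    return res.x

# Stand-in objective: Student-t NLL on simulated "returns".
def nll(params, x):
    df, loc, scale = params
    if df <= 0 or scale <= 0:
        return np.inf
    return -stats.t.logpdf(x, df, loc=loc, scale=scale).sum()

rng = np.random.default_rng(1)
returns = stats.t.rvs(4.0, loc=0.0, scale=0.01, size=700, random_state=rng)

full_hat = mle(nll, [5.0, 0.0, 0.01], returns)
# Leave-one-out refits: large swings in the estimates flag the kind of
# sample sensitivity reported for the VG log-likelihood.
for i in (0, 350, 699):
    loo_hat = mle(nll, [5.0, 0.0, 0.01], np.delete(returns, i))
    print(i, np.round(loo_hat - full_hat, 5))
```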

18.
In this paper we investigate the problem of partitioning an input string T in such a way that compressing each part individually with a base compressor C yields a compressed output shorter than applying C over the entire T at once. This problem was introduced in Buchsbaum et al. (Proc. of 11th ACM-SIAM Symposium on Discrete Algorithms, pp. 175–184, 2000; J. ACM 50(6):825–851, 2003) in the context of table compression, and then further elaborated and extended to strings and trees by Ferragina et al. (J. ACM 52:688–713, 2005; Proc. of 46th IEEE Symposium on Foundations of Computer Science, pp. 184–193, 2005) and Mäkinen and Navarro (Proc. of 14th Symposium on String Processing and Information Retrieval, pp. 229–241, 2007). Unfortunately, the literature offers poor solutions: namely, we know either a cubic-time algorithm for computing the optimal partition based on dynamic programming (Buchsbaum et al. in J. ACM 50(6):825–851, 2003; Giancarlo and Sciortino in Proc. of 14th Symposium on Combinatorial Pattern Matching, pp. 129–143, 2003), or a few heuristics that do not guarantee any bounds on the efficacy of their computed partition (Buchsbaum et al. in Proc. of 11th ACM-SIAM Symposium on Discrete Algorithms, pp. 175–184, 2000; J. ACM 50(6):825–851, 2003), or algorithms that are efficient but work only in some specific scenarios (such as the Burrows-Wheeler Transform, see e.g. Ferragina et al. in J. ACM 52:688–713, 2005; Mäkinen and Navarro in Proc. of 14th Symposium on String Processing and Information Retrieval, pp. 229–241, 2007) and achieve compression performance that might be worse than the optimal partitioning by an \(\Omega(\log n/\log\log n)\) factor. Therefore, computing the optimal solution efficiently is still open (Buchsbaum and Giancarlo in Encyclopedia of Algorithms, pp. 939–942, 2008). In this paper we provide the first algorithm which computes, in \(O(n\log^{1+\varepsilon} n)\) time and \(O(n)\) space, a partition of T whose compressed output is guaranteed to be at most a factor \((1+\varepsilon)\) worse than the optimal one, where \(\varepsilon\) may be any positive constant fixed in advance. This result holds for any base compressor C whose compression performance can be bounded in terms of the zeroth- or k-th order empirical entropy of the text T. We also discuss extensions of our results to BWT-based compressors and to the compression booster of Ferragina et al. (J. ACM 52:688–713, 2005).
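For concreteness, the dynamic-programming baseline that the \((1+\varepsilon)\)-approximation improves on can be sketched in a few lines: with cost(i, j) the compressed size of the block T[i:j], the optimal partition satisfies best[j] = min over i of (best[i] + cost(i, j)). The toy cost below (a fixed per-block header plus an empirical-entropy term) is our illustrative assumption, not a real compressor.

```python
import math

def optimal_partition(n, cost):
    """DP for the optimal string partition: best[j] is the minimum total
    compressed size of T[0:j]. Makes O(n^2) calls to cost, hence cubic time
    overall when each call costs O(n) -- the exact baseline in the text."""
    INF = float("inf")
    best = [0.0] + [INF] * n
    cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(i, j)
            if c < best[j]:
                best[j], cut[j] = c, i
    bounds, j = [], n                  # recover the block boundaries
    while j > 0:
        bounds.append((cut[j], j))
        j = cut[j]
    return best[n], bounds[::-1]

T = "aaaaaaaabbbbbbbbababababab"

def cost(i, j):
    """Toy block cost: a 4-'bit' header plus an order-0 entropy estimate."""
    block = T[i:j]
    freq = {ch: block.count(ch) / len(block) for ch in set(block)}
    h = -sum(p * math.log2(p) for p in freq.values())
    return 4 + h * len(block)

total, blocks = optimal_partition(len(T), cost)
print(total, blocks)   # splits the homogeneous runs from the mixed tail
```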

19.
The Probabilistically Checkable Proof (PCP) theorem (Arora and Safra in J ACM 45(1):70–122, 1998; Arora et al. in J ACM 45(3):501–555, 1998) asserts the existence of proofs that can be verified by reading a very small part of the proof. Since the discovery of the theorem, there has been considerable work on improving the theorem in terms of the length of the proofs, culminating in the construction of PCPs of quasi-linear length by Ben-Sasson and Sudan (SICOMP 38(2):551–607, 2008) and Dinur (J ACM 54(3):241–250, 2007). One common theme in the aforementioned PCP constructions is that they all rely heavily on sophisticated algebraic machinery. The work of Dinur (2007) suggested an alternative approach for constructing PCPs, which gives a simpler and arguably more intuitive proof of the PCP theorem using combinatorial techniques. However, this combinatorial construction only yields PCPs of polynomial length, and is therefore inferior to the algebraic constructions in this respect. This gives rise to the natural question of whether the proof length of the algebraic constructions can be matched using the combinatorial approach. In this work, we provide a combinatorial construction of PCPs of length \({n\cdot\left(\log n\right)^{O(\log\log n)}}\), coming very close to the state-of-the-art algebraic constructions (whose proof length is \({n\cdot\left(\log n\right)^{O(1)}}\)). To this end, we develop a few generic PCP techniques which may be of independent interest. It should be mentioned that our construction does use low-degree polynomials at one point. However, our use of polynomials is confined to the construction of error-correcting codes with a certain simple multiplication property, and it is conceivable that such codes could be constructed without the use of polynomials. In addition, we provide a variant of the main construction that does not use polynomials at all and has proof length \({n^{4}\cdot\left(\log n\right)^{O(\log\log n)}}\). This is already an improvement over the aforementioned combinatorial construction of Dinur.

20.
We extend Hansen and Sargent's (Discounted linear exponential quadratic Gaussian control, 1994; IEEE Trans Autom Control 40:968–971, 1995; 2013) analysis of dynamic optimization with risk-averse agents in two directions. First, following Whittle (Risk-sensitive optimal control, 1990), we show that the optimal risk-averse policy is identified via a pessimistic choice mechanism and described by simple recursive formulae. Second, we investigate the continuous-time limit and show that sufficient conditions for the existence of optimal solutions coincide with those which apply under risk neutrality. Our analysis is conducted both under perfect and imperfect state observation. As an illustrative example, we analyze the optimal production policy of an entrepreneur running a monopolistic firm which faces a demand schedule subject to stochastic shocks, showing that risk aversion induces her to act more aggressively.
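For reference, one standard way to write the Hansen–Sargent risk-sensitive recursion and its pessimistic dual — our notation, stated as a sketch rather than the paper's exact formulation: with continuation value \(V_{t+1}\) and risk-sensitivity parameter \(\sigma<0\),
\[
\mathcal{R}_t(V_{t+1}) \;=\; \frac{2}{\sigma}\,\log E_t\!\left[\exp\!\left(\frac{\sigma}{2}\,V_{t+1}\right)\right],
\]
which admits the distorted-expectation representation
\[
\mathcal{R}_t(V_{t+1}) \;=\; \min_{m_{t+1}\ge 0,\; E_t[m_{t+1}]=1}\left\{\, E_t\!\left[m_{t+1}V_{t+1}\right] \;-\; \frac{2}{\sigma}\, E_t\!\left[m_{t+1}\log m_{t+1}\right]\right\},
\]
so the minimizing change of measure \(m_{t+1}\) implements the pessimistic choice mechanism, with the relative-entropy term (whose coefficient \(-2/\sigma>0\)) penalizing the distortion.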
