Similar Documents
20 similar documents found (search time: 156 ms)
1.
XGC1 and M3D-\(C^1\) are two fusion plasma simulation codes being developed at Princeton Plasma Physics Laboratory. XGC1 uses the particle-in-cell method to simulate gyrokinetic neoclassical physics and turbulence (Chang et al. Phys Plasmas 16(5):056108, 2009; Ku et al. Nucl Fusion 49:115021, 2009; Adams et al. J Phys Conf Ser 180:012036, 2009). M3D-\(C^1\) solves the two-fluid resistive magnetohydrodynamic equations with \(C^1\) finite elements (Jardin J Comput Phys 200(1):133–152, 2004; Jardin et al. J Comput Phys 226(2):2146–2174, 2007; Ferraro and Jardin J Comput Phys 228(20):7742–7770, 2009; Jardin J Comput Phys 231(3):832–838, 2012; Jardin et al. Comput Sci Discov 5(1):014002, 2012; Ferraro et al. Sci Discov Adv Comput, 2012; Ferraro et al. International Sherwood Fusion Theory Conference, 2014). This paper presents the software tools and libraries that were combined to form the geometry and automatic meshing procedures for these codes. Specific consideration has been given to satisfying the mesh configuration and element shape quality constraints of XGC1 and M3D-\(C^1\).

2.
The objective of this paper is to focus on one of the “building blocks” of additive manufacturing technologies, namely selective laser-processing of particle-functionalized materials. Following a series of works by Zohdi (Int J Numer Methods Eng 53:1511–1532, 2002; Philos Trans R Soc Math Phys Eng Sci 361(1806):1021–1043, 2003; Comput Methods Appl Mech Eng 193(6–8):679–699, 2004; Comput Methods Appl Mech Eng 196:3927–3950, 2007; Int J Numer Methods Eng 76:1250–1279, 2008; Comput Methods Appl Mech Eng 199:79–101, 2010; Arch Comput Methods Eng 1–17, doi:10.1007/s11831-013-9092-6, 2013; Comput Mech Eng Sci 98(3):261–277, 2014; Comput Mech 54:171–191, 2014; J Manuf Sci Eng ASME, doi:10.1115/1.4029327, 2015; CIRP J Manuf Sci Technol 10:77–83, 2015; Comput Mech 56:613–630, 2015; Introduction to computational micromechanics, Springer, Berlin, 2008; Introduction to the modeling and simulation of particulate flows, SIAM, Philadelphia, 2007; Electromagnetic properties of multiphase dielectrics: a primer on modeling, theory and computation, Springer, Berlin, 2012), a laser-penetration model is developed in conjunction with a Finite Difference Time Domain method using an immersed-microstructure approach. Because optical, thermal and mechanical multifield coupling is present, a recursive, staggered, temporally-adaptive scheme is developed to resolve the internal microstructural fields. The time step adaptation allows the numerical scheme to iteratively resolve the changing physical fields: it refines the time steps during phases of the process when the system undergoes large changes on a relatively small time scale, and enlarges them when the processes are relatively slow. The spatial discretization grids are uniform and dense enough to capture fine-scale changes in the fields. The microstructure is embedded into the spatial discretization, and the regular grid allows one to generate a matrix-free iterative formulation that is amenable to rapid computation with minimal memory requirements, making it ideal for laptop computation. Numerical examples illustrate the modeling and simulation approach, which, by design, is straightforward to implement computationally so that it can be easily utilized by researchers in the field. More advanced conduction models based on thermal relaxation, which are a key feature of fast-pulsing laser technologies, are also discussed.
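To make the time-stepping idea concrete, here is a minimal sketch of a recursive, staggered, temporally-adaptive loop of the kind the abstract describes; the field-update routines (`update_optical`, `update_thermal`, `update_mechanical`), tolerances and halving/doubling rules are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def staggered_adaptive_step(fields, dt, t_final, tol=1e-3,
                            dt_min=1e-9, dt_max=1e-3, max_iters=50):
    """Recursive-staggering time loop: within each step the coupled fields
    are updated one after another and the sweep is repeated until the
    iterates stop changing; the step size is refined when convergence is
    hard and enlarged when it is easy."""
    t = 0.0
    while t < t_final:
        prev = {k: v.copy() for k, v in fields.items()}
        for it in range(max_iters):
            old = {k: v.copy() for k, v in fields.items()}
            # staggered sweep: each solver sees the latest values of the others
            fields["optical"] = update_optical(fields, dt)        # assumed routine
            fields["thermal"] = update_thermal(fields, dt)        # assumed routine
            fields["mechanical"] = update_mechanical(fields, dt)  # assumed routine
            err = max(np.linalg.norm(fields[k] - old[k]) /
                      (np.linalg.norm(fields[k]) + 1e-30) for k in fields)
            if err < tol:
                break
        if err >= tol and dt > dt_min:
            fields = prev                 # reject: system changed too fast
            dt = max(dt / 2, dt_min)      # refine the time step and retry
            continue
        t += dt
        if it < max_iters // 4:           # converged easily: enlarge the step
            dt = min(2 * dt, dt_max)
    return fields
```

The rejection/retry branch is what lets the scheme track fast transients without committing the whole run to the smallest time step.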

3.
The Probabilistically Checkable Proof (PCP) theorem (Arora and Safra in J ACM 45(1):70–122, 1998; Arora et al. in J ACM 45(3):501–555, 1998) asserts the existence of proofs that can be verified by reading a very small part of the proof. Since the discovery of the theorem, there has been considerable work on improving the theorem in terms of the length of the proofs, culminating in the construction of PCPs of quasi-linear length by Ben-Sasson and Sudan (SICOMP 38(2):551–607, 2008) and Dinur (J ACM 54(3):241–250, 2007). One common theme in the aforementioned PCP constructions is that they all rely heavily on sophisticated algebraic machinery. The aforementioned work of Dinur (2007) suggested an alternative approach for constructing PCPs, which gives a simpler and arguably more intuitive proof of the PCP theorem using combinatorial techniques. However, this combinatorial construction only yields PCPs of polynomial length and is therefore inferior to the algebraic constructions in this respect. This gives rise to the natural question of whether the proof length of the algebraic constructions can be matched using the combinatorial approach. In this work, we provide a combinatorial construction of PCPs of length \({n\cdot\left(\log n\right)^{O(\log\log n)}}\), coming very close to the state-of-the-art algebraic constructions (whose proof length is \({n\cdot\left(\log n\right)^{O(1)}}\)). To this end, we develop a few generic PCP techniques which may be of independent interest. It should be mentioned that our construction does use low-degree polynomials at one point. However, our use of polynomials is confined to the construction of error-correcting codes with a certain simple multiplication property, and it is conceivable that such codes could be constructed without the use of polynomials. In addition, we provide a variant of the main construction that does not use polynomials at all and has proof length \({n^{4} \cdot\left(\log n\right)^{O(\log\log n)}}\). This is already an improvement over the aforementioned combinatorial construction of Dinur.

4.
We use self-reduction methods to prove strong information lower bounds on two of the most studied functions in the communication complexity literature: Gap Hamming Distance (GHD) and Inner Product (IP). In our first result we affirm the conjecture that the information cost of GHD is linear even under the uniform distribution, which strengthens the Ω(n) bound recently shown by Kerenidis et al. (2012) and answers an open problem of Chakrabarti et al. (2012). In our second result we prove that the information cost of \(IP_n\) is arbitrarily close to the trivial upper bound n as the permitted error tends to zero, again strengthening the Ω(n) lower bound recently proved by Braverman and Weinstein (Electronic Colloquium on Computational Complexity (ECCC) 18:164, 2011). Our proofs demonstrate that self-reducibility makes the connection between information complexity and communication complexity lower bounds a two-way connection. Whereas numerous past results (Chakrabarti et al. 2001; Bar-Yossef et al. J Comput Syst Sci 68(4):702–732, 2004; Barak et al. 2010) used information complexity techniques to derive new communication complexity lower bounds, we explore a generic way in which communication complexity lower bounds imply information complexity lower bounds in a black-box manner.

5.
Elements of Formal Semantics (EFS) has already been reviewed twice (Rett in Glossa 1(1):42, 2016; Erlewine in Comput Linguist 42(4):837–839, 2017), and the website for the work is accompanied by evaluative quotes from noted scholars. All are very positive concerning its clarity and its utility as an introduction to formal semantics for natural language. As I agree with these evaluations, my interest in reiterating them in slightly different words is limited. So my reviews of the content chapters will be accompanied by a Reflections section consisting of my own reflections on the foundations of model-theoretic semantics for natural language as laid out in EFS. The issues I address (alternate ways of accomplishing the tasks Winter treats) should not be included in an introductory work, but they may be helpful for those who teach classes for which EFS is an appropriate text. They might also help with queries about the content of the text by those using it. I note that a mark of a clear text is that it allows the reader to reflect on its content, not its presentation.

6.
We extend Hansen and Sargent's (Discounted linear exponential quadratic Gaussian control, 1994; IEEE Trans Autom Control 40:968–971, 1995; 2013) analysis of dynamic optimization with risk-averse agents in two directions. First, following Whittle (Risk-sensitive optimal control, 1990), we show that the optimal risk-averse policy is identified via a pessimistic choice mechanism and described by simple recursive formulae. Second, we investigate the continuous-time limit and show that sufficient conditions for the existence of optimal solutions coincide with those which apply under risk-neutrality. Our analysis is conducted both under perfect and imperfect state observation. As an illustrative example, we analyze the optimal production policy of an entrepreneur running a monopolistic firm that faces a demand schedule subject to stochastic shocks, showing that risk-aversion induces her to act more aggressively.
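For readers unfamiliar with the "pessimistic choice mechanism", the standard variational identity behind risk-sensitive control makes it explicit (a textbook formula stated here for orientation, not the authors' exact notation): for risk-sensitivity parameter \(\theta>0\) and continuation value \(V\),

\[ -\frac{1}{\theta}\log \mathbb{E}_{p}\left[e^{-\theta V}\right] \;=\; \min_{q}\Bigl\{\, \mathbb{E}_{q}[V] \;+\; \tfrac{1}{\theta}\, D_{\mathrm{KL}}(q \,\|\, p) \Bigr\}, \]

so the risk-averse agent evaluates the future as if a malevolent nature could distort the transition law \(p\) into \(q\) at an entropy cost, which is precisely Whittle-style pessimism.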

7.
Some numerical algorithms for elliptic eigenvalue problems are proposed, analyzed, and numerically tested. The methods combine advantages of the two-grid algorithm (Xu and Zhou in Math Comput 70(233):17–25, 2001), the two-space method (Racheva and Andreev in Comput Methods Appl Math 2:171–185, 2002), the shifted inverse power method (Hu and Cheng in Math Comput 80:1287–1301, 2011; Yang and Bi in SIAM J Numer Anal 49:1602–1624, 2011), and the polynomial preserving recovery enhancing technique (Naga et al. in SIAM J Sci Comput 28:1289–1300, 2006). Our new algorithms compare favorably with some existing methods and enjoy a superconvergence property.
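As a point of reference, the shifted inverse power method named above can be sketched in a few lines; this is a minimal dense-matrix illustration of the iteration (the shift, the toy matrices and the convergence test are assumptions for the example, not the paper's finite element setting).

```python
import numpy as np

def shifted_inverse_power(A, B, shift, tol=1e-10, max_iter=100):
    """Shifted inverse power iteration for the generalized eigenproblem
    A x = lambda B x: repeatedly solve (A - shift*B) y = B x, which
    amplifies the eigenvector whose eigenvalue is closest to the shift."""
    n = A.shape[0]
    x = np.random.default_rng(0).standard_normal(n)
    x /= np.linalg.norm(x)
    lam = shift
    for _ in range(max_iter):
        y = np.linalg.solve(A - shift * B, B @ x)
        x_new = y / np.linalg.norm(y)
        lam_new = (x_new @ A @ x_new) / (x_new @ B @ x_new)  # Rayleigh quotient
        if abs(lam_new - lam) < tol:
            return lam_new, x_new
        lam, x = lam_new, x_new
    return lam, x

# toy usage: 1D Laplacian-like stiffness matrix with identity mass matrix
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = np.eye(n)
lam, x = shifted_inverse_power(A, B, shift=0.001)
```

Each iteration costs one shifted linear solve, which is also the dominant cost when the matrices come from a finite element discretization.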

8.
We propose a new computing model called chemical reaction automata (CRAs) as a simplified variant of reaction automata (RAs) studied in recent literature (Okubo in RAIRO Theor Inform Appl 48:23–38, 2014; Okubo et al. in Theor Comput Sci 429:247–257, 2012a; Theor Comput Sci 454:206–221, 2012b). We show that CRAs working in the maximally parallel manner are computationally equivalent to Turing machines, while the computational power of CRAs working in the sequential manner coincides with that of the class of Petri nets; this is in marked contrast to the result that RAs (in both maximally parallel and sequential manners) have Turing-universal computing power (Okubo 2014; Okubo et al. 2012a). Intuitively, CRAs are defined as RAs without inhibitors functioning in each reaction, providing an offline model of computing by chemical reaction networks (CRNs). Thus, the main results in this paper not only strengthen the previous result on the Turing computability of RAs but also clarify the computing power of inhibitors in RA computation.
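To fix intuition, a reaction without inhibitors is just a pair of multisets (reactants, products); the sketch below shows one sequential-mode step over multiset configurations (the Counter encoding and helper names are illustrative assumptions, not the paper's formalism).

```python
from collections import Counter

# A reaction without inhibitors: (reactants, products), both multisets.
Reaction = tuple[Counter, Counter]

def enabled(config: Counter, reaction: Reaction) -> bool:
    """A reaction is enabled when the configuration contains its reactants."""
    reactants, _ = reaction
    return all(config[s] >= n for s, n in reactants.items())

def apply_sequential(config: Counter, reaction: Reaction) -> Counter:
    """Sequential mode: fire one enabled reaction exactly once."""
    reactants, products = reaction
    result = config.copy()
    result.subtract(reactants)
    result.update(products)
    return +result  # drop zero counts

# toy usage: a + b -> c applied to the configuration {a: 2, b: 1}
r: Reaction = (Counter({"a": 1, "b": 1}), Counter({"c": 1}))
cfg = Counter({"a": 2, "b": 1})
if enabled(cfg, r):
    cfg = apply_sequential(cfg, r)   # -> {a: 1, c: 1}
```

In the maximally parallel mode one would instead fire a non-extendable multiset of enabled reactions simultaneously, which is where the Turing-equivalence result applies.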

9.
The intuitionistic fuzzy set is capable of handling uncertainty with the counterpart falsities that exist in nature. A proximity measure is a convenient way to demonstrate the impractical significance of exact membership values in the intuitionistic fuzzy set. However, the related works of Pappis (Fuzzy Sets Syst 39(1):111–115, 1991), Hong and Hwang (Fuzzy Sets Syst 66(3):383–386, 1994), Virant (2000) and Cai (IEEE Trans Fuzzy Syst 9(5):738–750, 2001) modelled the measure not in the context of the intuitionistic fuzzy set but in Zadeh's fuzzy set instead. In this paper, we examine this problem and propose new notions of δ-equalities for intuitionistic fuzzy sets and for intuitionistic fuzzy relations. Two fuzzy sets are said to be δ-equal if they are equal to an extent of δ. δ-equalities have important applications in fuzzy statistics and fuzzy reasoning. Several characteristics of δ-equalities that were not discussed in previous works are also investigated. We apply δ-equalities to medical diagnosis, investigating a patient's diseases from symptoms. The idea is to use δ-equalities for intuitionistic fuzzy relations to find groups of intuitionistic fuzzified sets with certain equality or similarity degrees and then to combine them. Numerical examples are given to illustrate the validity of the proposed algorithm. Further, we conduct experiments on real medical datasets to check its efficiency and applicability to real-world problems. The results obtained also compare favorably with 10 existing diagnosis methods, namely De et al. (Fuzzy Sets Syst 117:209–213, 2001), Samuel and Balamurugan (Appl Math Sci 6(35):1741–1746, 2012), Szmidt and Kacprzyk (2004), Zhang et al. (Procedia Eng 29:4336–4342, 2012), Hung and Yang (Pattern Recogn Lett 25:1603–1611, 2004), Wang and Xin (Pattern Recogn Lett 26:2063–2069, 2005), Vlachos and Sergiadis (Pattern Recogn Lett 28(2):197–206, 2007), Zhang and Jiang (Inf Sci 178(6):4184–4191, 2008), Maheshwari and Srivastava (J Appl Anal Comput 6(3):772–789, 2016), and the Support Vector Machine (SVM).
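A small sketch of what a δ-equality degree might look like computationally, assuming a Cai-style definition extended to the intuitionistic case (two sets are δ-equal when both the membership and non-membership functions differ by at most 1 − δ); the paper's exact definition may differ, so treat this as an illustration of the concept only.

```python
def delta_equality(A, B):
    """Largest delta to which two intuitionistic fuzzy sets over the same
    finite universe are delta-equal.  A and B map each element to a pair
    (membership mu, non-membership nu).  Assumed definition: A and B are
    delta-equal when sup|mu_A - mu_B| <= 1 - delta and likewise for nu."""
    worst = max(max(abs(A[x][0] - B[x][0]), abs(A[x][1] - B[x][1]))
                for x in A)
    return 1.0 - worst

# toy usage: patient symptoms vs. a disease prototype (hypothetical data)
patient = {"fever": (0.8, 0.1), "cough": (0.6, 0.3)}
disease = {"fever": (0.7, 0.2), "cough": (0.7, 0.2)}
print(delta_equality(patient, disease))  # 0.9
```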

10.
In this paper, we study the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):675–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for second-order elliptic problems. A priori error estimates under the energy norm are established for all four methods. An optimal error estimate under the \(L^2\) norm is obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically, we obtain optimal \((k+1)\)th-order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated, and optimal \((k+1)\)th-order accuracy is obtained. Peak solutions with sharp transitions are captured well, and highly oscillatory wave solutions of the Helmholtz equation are well resolved.
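For orientation, the defining ingredient of the DDG method is its "direct" numerical flux for the solution derivative at a cell interface; in one dimension it takes the form (a sketch of the construction, with scheme parameters \(\beta_0,\beta_1\), not the exact flux analyzed in the paper):

\[ \widehat{u_x} \;=\; \beta_0 \frac{[u]}{\Delta x} \;+\; \{u_x\} \;+\; \beta_1\, \Delta x\, [u_{xx}], \]

where \([w]\) and \(\{w\}\) denote the jump and average of \(w\) across the interface; the interface-correction and symmetric variants add further jump terms involving the test function.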

11.
In this paper, a new numerical approximation is discussed for the two-dimensional distributed-order time-fractional reaction–diffusion equation. Combining the idea of the weighted and shifted Grünwald difference (WSGD) approximation (Tian et al. in Math Comput 84:1703–1727, 2015; Wang and Vong in J Comput Phys 277:1–15, 2014) in time, we establish an orthogonal spline collocation (OSC) method in space. A detailed analysis shows that the proposed scheme is unconditionally stable and convergent with convergence order \(\mathscr {O}(\tau ^2+\Delta \alpha ^2+h^{r+1})\), where \(\tau\), \(\Delta \alpha\), \(h\) and \(r\) are, respectively, the time step size, the step size in the distributed-order variable, the space step size, and the polynomial degree in space. Interestingly, we prove that the proposed WSGD-OSC scheme converges with second order in time, whereas previously proposed OSC schemes (Fairweather et al. in J Sci Comput 65:1217–1239, 2015; Yang et al. in J Comput Phys 256:824–837, 2014) can achieve at most a temporal accuracy whose order depends on the order of the fractional derivatives in the equations and is usually less than two. Some numerical results are also given to confirm our theoretical prediction.
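As a concrete illustration of the WSGD building block, the sketch below computes the second-order weights for a Riemann–Liouville fractional derivative of order α, assuming the common shift pair (p, q) = (1, 0); the paper's distributed-order quadrature and OSC spatial discretization are not reproduced here.

```python
import numpy as np

def grunwald_weights(alpha, N):
    """Standard Grünwald–Letnikov weights g_k = (-1)^k C(alpha, k),
    via the stable recurrence g_k = (1 - (alpha + 1)/k) * g_{k-1}."""
    g = np.empty(N + 1)
    g[0] = 1.0
    for k in range(1, N + 1):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g

def wsgd_weights(alpha, N):
    """Second-order WSGD weights for shifts (p, q) = (1, 0):
    w_0 = (alpha/2) g_0 and w_k = (alpha/2) g_k + ((2-alpha)/2) g_{k-1};
    Tian et al. (2015) give the general (p, q) combination."""
    g = grunwald_weights(alpha, N)
    w = (alpha / 2.0) * g
    w[1:] += ((2.0 - alpha) / 2.0) * g[:-1]
    return w

# The discrete operator then reads tau**(-alpha) * sum_k w[k] * u[n - k + 1].
```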

12.
In some recent works (Reis 2011; Fermé and Reis in J Philos Log 41:29–52, 2012; Fermé and Reis in Rev Symb Log 6:460–487, 2013) two new kinds of multiple contraction functions have been proposed, namely the system-of-spheres-based multiple contractions and the epistemic-entrenchment-based multiple contractions, as generalizations (to the case of multiple contraction) of the well-known classes of systems-of-spheres-based and epistemic-entrenchment-based (singleton) contractions. Additionally, a representation theorem for the class of epistemic-entrenchment-based multiple contractions has been proposed, and it has been shown that the two newly proposed constructions are equivalent, in the sense that a multiple contraction function is a system-of-spheres-based multiple contraction if and only if it is an epistemic-entrenchment-based multiple contraction. In this paper we present two axiomatic characterizations of those multiple contraction functions which differ from the one mentioned above and, in particular, make use of some more intuitive postulates.

13.
Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate on the experimental role of computer simulations (Parker in Synthese 169(3):483–496, 2009; Morrison in Philos Stud 143(1):33–57, 2009), the nature of computer data (Barberousse and Vorms, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013; Humphreys, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013), and the explanatory power of computer simulations (Krohs in Int Stud Philos Sci 22(3):277–292, 2008; Durán in Int Stud Philos Sci 31(1):27–45, 2017). The aim of this article is to show that these authors are right in assuming that results of computer simulations are to be trusted when computer simulations are reliable processes. After a short reconstruction of the problem of epistemic opacity, the article elaborates extensively on computational reliabilism, a specified form of process reliabilism with computer simulations located at the center. The article ends with a discussion of four sources for computational reliabilism, namely, verification and validation, robustness analysis for computer simulations, a history of (un)successful implementations, and the role of expert knowledge in simulations.

14.
The aim of Content-Based Image Retrieval (CBIR) is to find the set of images that best matches the query based on visual features. Most existing CBIR systems find similar images by low-level features, while Text-Based Image Retrieval (TBIR) systems find images with relevant tags regardless of the contents of the images. Generally, people are more interested in images that are similar both in contours and in high-level concepts. Therefore, we propose a new strategy called Iterative Search to meet this requirement. It mines knowledge from the images similar to the original query in order to compensate for the information lost in the feature extraction process. To evaluate the performance of the Iterative Search approach, we apply it to four different CBIR systems in our experiments (HOF: Zhou et al. in ACM International Conference on Multimedia, 2012, and Zhou and Zhang in ICONIP 2011, Shanghai, 2011; HOG: Dalal and Triggs in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005; GIST: Oliva and Torralba in Int J Comput Vis 42:145–175, 2001; CNN: Krizhevsky et al. in Adv Neural Inf Process Syst 25, 2012). The results show that Iterative Search improves the performance of the original CBIR features by about \(20\%\) on both the Oxford Buildings dataset and the Object Sketches dataset. Meanwhile, it is not restricted to any particular visual features.
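A minimal sketch of the iterative-search idea in feature space: retrieve, fold the features of the current top matches back into the query, and retrieve again. The mixing weight, round count and cosine-similarity scoring are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def iterative_search(query_vec, gallery, top_k=10, rounds=3, mix=0.5):
    """Iteratively expand the query with the mean feature of its current
    top matches, so later rounds can recover information lost in feature
    extraction.  `gallery` is an (N, d) array of unit-norm feature rows."""
    q = query_vec / np.linalg.norm(query_vec)
    for _ in range(rounds):
        scores = gallery @ q                      # cosine similarity
        top = np.argsort(-scores)[:top_k]         # indices of best matches
        expanded = gallery[top].mean(axis=0)      # average top-match features
        q = mix * q + (1.0 - mix) * expanded      # blend back into the query
        q /= np.linalg.norm(q)
    return np.argsort(-(gallery @ q))[:top_k]

# toy usage with random unit-norm features
rng = np.random.default_rng(0)
G = rng.standard_normal((1000, 128))
G /= np.linalg.norm(G, axis=1, keepdims=True)
hits = iterative_search(G[0], G)
```

Because the loop operates purely on feature vectors, the same wrapper can sit on top of HOF, HOG, GIST or CNN features alike, which matches the "not restricted to any particular visual features" claim.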

15.
16.
There are two prominent ways of formally modelling human belief. One is in terms of plain beliefs (yes-or-no beliefs, beliefs simpliciter), i.e., sets of propositions. The second one is in terms of degrees of beliefs, which are commonly taken to be representable by subjective probability functions. In relating these two ways of modelling human belief, the most natural idea is a thesis frequently attributed to John Locke: a proposition is or ought to be believed (accepted) just in case its subjective probability exceeds a contextually fixed probability threshold \(t<1\). This idea is known to have two serious drawbacks: first, it denies that beliefs are closed under conjunction, and second, it may easily lead to sets of beliefs that are logically inconsistent. In this paper I present two recent accounts of aligning plain belief with subjective probability: the Stability Theory of Leitgeb (Ann Pure Appl Log 164(12):1338–1389, 2013; Philos Rev 123(2):131–171, 2014; Proc Aristot Soc Suppl Vol 89(1):143–185, 2015a; The stability of belief: an essay on rationality and coherence. Oxford University Press, Oxford, 2015b) and the Probalogical Theory (or Tracking Theory) of Lin and Kelly (Synthese 186(2):531–575, 2012a; J Philos Log 41(6):957–981, 2012b). I argue that Leitgeb's theory may be too sceptical for the purposes of real life.
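A tiny arithmetic illustration of the first drawback: with a threshold of, say, t = 0.85, two independent propositions can each clear the threshold while their conjunction does not (the numbers are illustrative only).

```python
# Lockean thesis with threshold t: believe P iff Pr(P) > t.
t = 0.85
p_a = 0.9          # Pr(A): believed, since 0.9 > t
p_b = 0.9          # Pr(B): believed, since 0.9 > t
p_ab = p_a * p_b   # Pr(A and B) = 0.81, assuming A and B independent

print(p_a > t, p_b > t, p_ab > t)   # True True False
# A and B are each believed, yet their conjunction is not:
# Lockean belief is not closed under conjunction.
```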

17.
18.
The notion of contact algebra is one of the main tools in the region-based theory of space. It is an extension of Boolean algebra with an additional relation C called contact. The elements of the Boolean algebra are considered formal representations of spatial regions, as analogues of physical bodies, and the Boolean operations are considered operations for constructing new regions from given ones and for defining mereological relations between regions such as part-of, overlap and underlap. The contact relation is one of the basic mereotopological relations between regions, expressing a topological nature. It is also used to define other important mereotopological relations like non-tangential inclusion, dual contact, external contact and others. Most of these definitions are given by means of the operation of Boolean complementation. There are, however, some problems related to the motivation of the operation of Boolean complementation. In order to avoid these problems we propose a generalization of the notion of contact algebra, dropping the operation of complement and replacing the Boolean part of the definition by that of a distributive lattice. First steps in this direction were made in (Düntsch et al. Lect Notes Comput Sci 4136:135–147, 2006; Düntsch et al. J Log Algebraic Program 76:18–34, 2008), which presented the notion of a distributive contact lattice based on the contact relation as the only mereotopological relation. In this paper we consider as non-definable primitives the relations of contact, non-tangential inclusion and dual contact, extending considerably the language of distributive contact lattices. Part I of the paper is devoted to a suitable axiomatization of the new language, called extended distributive contact lattice (EDC-lattice), by means of universal first-order axioms true in all contact algebras. EDC-lattices may also be considered an algebraic tool for a certain subarea of mereotopology, called in this paper distributive mereotopology. The main result of Part I is a representation theorem stating that each EDC-lattice can be isomorphically embedded into a contact algebra, showing in this way that the presented axiomatization preserves the meaning of the mereotopological relations without considering Boolean complementation. Part II of the paper is devoted to the topological representation theory of EDC-lattices, transferring into the distributive case important results from the topological representation theory of contact algebras. It is shown that, under minor additional assumptions on the distributive lattices, such as extensionality of the definable relations of overlap or underlap, one can preserve the good topological interpretations of regions as regular closed or regular open sets in topological space.

19.
Building upon recent results obtained in Causley and Christlieb (SIAM J Numer Anal 52(1):220–235, 2014) and Causley et al. (Math Comput 83(290):2763–2786, 2014; Method of lines transpose: high order L-stable O(N) schemes for parabolic equations using successive convolution, 2015), we describe an efficient second-order, unconditionally stable scheme for solving the wave equation, based on the method of lines transpose (MOL\(^T\)) and the resulting semi-discrete (i.e. continuous in space) boundary value problem. In Causley and Christlieb (SIAM J Numer Anal 52(1):220–235, 2014), unconditionally stable schemes of high order were derived, and in Causley et al. (Method of lines transpose: high order L-stable O(N) schemes for parabolic equations using successive convolution, 2015) a high-order, fast \(\mathcal {O}(N)\) spatial solver was derived, which is matrix-free and based on dimensional splitting. In this work, we are interested in building a wave solver, and our main concern is the development of boundary conditions. We demonstrate all desired boundary conditions for a wave solver, including outflow boundary conditions, in 1D and 2D. The scheme works in a logically Cartesian fashion, and the boundary points are embedded into the regular mesh without incurring stability restrictions, so that boundary conditions are imposed without any reduction in the order of accuracy. We demonstrate how the embedded boundary approach works in the cases of Dirichlet and Neumann boundary conditions. Further, we develop outflow and periodic boundary conditions for the MOL\(^T\) formulation. Our solver is designed to couple with particle codes, so special attention is also paid to the implementation of point sources and of soft sources, which can be used to launch waves into waveguides.

20.
The semantics of progressive sentences presents a challenge to linguists and philosophers alike. According to a widely accepted view, the truth-conditions of progressive sentences rely essentially on a notion of inertia. Dowty (Word meaning and Montague grammar: the semantics of verbs and times in generative grammar and in Montague's PTQ, D. Reidel Publishing Company, Dordrecht, 1979) suggested inertia worlds to implement this “inertia idea” in a formal semantic theory of the progressive. The main thesis of the paper is that the notion of inertia went through a subtle but crucial change when worlds were replaced by events in Landman (Nat Lang Semant 1:1–32, 1992) and Portner (Language 74(4):760–787, 1998), and that this new, event-related concept of inertia results in a possibility-based theory of the progressive. An important case in point in the paper is a proof that, despite its surface structure, the theory presented in Portner (1998) does not implement the notion of inertia in Dowty (1979); rather, it belongs with Dowty's earlier 1977 theory, according to which the progressive is a possibility operator.
