Similar Documents
 20 similar documents retrieved (search time: 265 ms)
1.
XGC1 and M3D-\(C^1\) are two fusion plasma simulation codes being developed at Princeton Plasma Physics Laboratory. XGC1 uses the particle-in-cell method to simulate gyrokinetic neoclassical physics and turbulence (Chang et al. Phys Plasmas 16(5):056108, 2009; Ku et al. Nucl Fusion 49:115021, 2009; Adams et al. J Phys 180(1):012036, 2009). M3D-\(C^1\) solves the two-fluid resistive magnetohydrodynamic equations with \(C^1\) finite elements (Jardin J Comput Phys 200(1):133–152, 2004; Jardin et al. J Comput Phys 226(2):2146–2174, 2007; Ferraro and Jardin J Comput Phys 228(20):7742–7770, 2009; Jardin J Comput Phys 231(3):832–838, 2012; Jardin et al. Comput Sci Discov 5(1):014002, 2012; Ferraro et al. Sci Discov Adv Comput, 2012; Ferraro et al. International Sherwood Fusion Theory Conference, 2014). This paper presents the software tools and libraries that were combined to form the geometry and automatic meshing procedures for these codes. Specific consideration has been given to satisfying the mesh configuration and element shape quality constraints of XGC1 and M3D-\(C^1\).
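
As a concrete illustration of the kind of element shape quality constraint mentioned above, the sketch below computes a common triangle shape measure (twice the inradius over the circumradius, which equals 1 for an equilateral triangle and tends to 0 for degenerate elements). The metric and the function name are illustrative assumptions; the actual meshing tools may enforce different measures.

```python
# Hypothetical helper: a common triangle shape-quality measure of the kind a
# mesh generator can monitor; not necessarily the metric used by these tools.
import math

def triangle_quality(p0, p1, p2):
    """Return 2*r_in/r_circ in (0, 1]; 1 for an equilateral triangle."""
    a, b, c = math.dist(p1, p2), math.dist(p0, p2), math.dist(p0, p1)
    s = 0.5 * (a + b + c)                                        # semi-perimeter
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))  # Heron's formula
    r_in, r_circ = area / s, a * b * c / (4.0 * area)            # in- and circumradius
    return 2.0 * r_in / r_circ

print(triangle_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))  # ~1.0
```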

2.
We propose a new computing model called chemical reaction automata (CRAs) as a simplified variant of reaction automata (RAs) studied in recent literature (Okubo in RAIRO Theor Inform Appl 48:23–38, 2014; Okubo et al. in Theor Comput Sci 429:247–257, 2012a, Theor Comput Sci 454:206–221, 2012b). We show that CRAs working in the maximally parallel manner are computationally equivalent to Turing machines, while the computational power of CRAs in the sequential manner coincides with that of the class of Petri nets, which is in marked contrast to the result that RAs (in both maximally parallel and sequential manners) have the computing power of Turing universality (Okubo 2014; Okubo et al. 2012a). Intuitively, CRAs are defined as RAs without inhibitors functioning in each reaction, providing an offline model of computing by chemical reaction networks (CRNs). Thus, the main results in this paper not only strengthen the previous results on the Turing computability of RAs but also clarify the computing power of inhibitors in RA computation.
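
For readers unfamiliar with the model, the following minimal sketch shows one computation step of an inhibitor-free reaction system over multisets, in the sequential and in a (greedily chosen) maximally parallel manner. The Counter-based encoding, the function names, and the greedy choice of a maximal reaction multiset are simplifying assumptions for illustration, not the authors' formal definitions.

```python
# Simplified sketch (not the paper's formal definition): one step of an
# inhibitor-free reaction system acting on a multiset of molecules.
from collections import Counter

def enabled(reactants, config):
    """A reaction is enabled when all of its reactants are available."""
    return all(config[s] >= n for s, n in reactants.items())

def step_sequential(reactions, config):
    """Sequential manner: apply a single enabled reaction (here, the first one)."""
    for reactants, products in reactions:
        if enabled(reactants, config):
            return config - Counter(reactants) + Counter(products)
    return config

def step_max_parallel(reactions, config):
    """Maximally parallel manner: keep applying reactions until none is enabled."""
    remaining, produced = Counter(config), Counter()
    applied = True
    while applied:
        applied = False
        for reactants, products in reactions:
            if enabled(reactants, remaining):
                remaining -= Counter(reactants)
                produced += Counter(products)
                applied = True
    return remaining + produced

# Example reactions: a + b -> c and c -> a
reactions = [(Counter(a=1, b=1), Counter(c=1)), (Counter(c=1), Counter(a=1))]
print(step_max_parallel(reactions, Counter(a=2, b=2)))  # Counter({'c': 2})
```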

3.
The objective of this paper is to focus on one of the “building blocks” of additive manufacturing technologies, namely selective laser-processing of particle-functionalized materials. Following a series of works by Zohdi (Int J Numer Methods Eng 53:1511–1532, 2002; Philos Trans R Soc Math Phys Eng Sci 361(1806):1021–1043, 2003; Comput Methods Appl Mech Eng 193(6–8):679–699, 2004; Comput Methods Appl Mech Eng 196:3927–3950, 2007; Int J Numer Methods Eng 76:1250–1279, 2008; Comput Methods Appl Mech Eng 199:79–101, 2010; Arch Comput Methods Eng 1–17. doi: 10.1007/s11831-013-9092-6, 2013; Comput Mech Eng Sci 98(3):261–277, 2014; Comput Mech 54:171–191, 2014; J Manuf Sci Eng ASME doi: 10.1115/1.4029327, 2015; CIRP J Manuf Sci Technol 10:77–83, 2015; Comput Mech 56:613–630, 2015; Introduction to computational micromechanics. Springer, Berlin, 2008; Introduction to the modeling and simulation of particulate flows. SIAM (Society for Industrial and Applied Mathematics), Philadelphia, 2007; Electromagnetic properties of multiphase dielectrics: a primer on modeling, theory and computation. Springer, Berlin, 2012), a laser-penetration model, in conjunction with a Finite Difference Time Domain method using an immersed microstructure method, is developed. Because optical, thermal and mechanical multifield coupling is present, a recursive, staggered, temporally adaptive scheme is developed to resolve the internal microstructural fields. The time-step adaptation allows the numerical scheme to iteratively resolve the changing physical fields by refining the time steps during phases of the process when the system is undergoing large changes on a relatively small time scale, and it can also enlarge the time steps when the processes are relatively slow. The spatial discretization grids are uniform and dense enough to capture fine-scale changes in the fields. The microstructure is embedded into the spatial discretization, and the regular grid allows one to generate a matrix-free iterative formulation which is amenable to rapid computation with minimal memory requirements, making it ideal for laptop computation. Numerical examples are provided to illustrate the modeling and simulation approach, which, by design, is straightforward to implement computationally, in order to be easily utilized by researchers in the field. More advanced conduction models, based on thermal relaxation, which are a key feature of fast-pulsing laser technologies, are also discussed.
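
The time-step adaptation described above can be sketched generically as follows: within each step a staggered pass over the coupled fields is iterated, the step is refined when the coupling does not settle within a tolerance, and enlarged when it does. All function names, tolerances, and scaling factors below are placeholder assumptions rather than the paper's actual discretization.

```python
# Generic sketch of a temporally adaptive, staggered solve of coupled fields.
# update_fields is a user-supplied single staggered pass over all fields; the
# tolerances and step-scaling factors here are illustrative assumptions.
def adaptive_staggered_step(fields, update_fields, t, dt,
                            tol=1e-6, max_iters=20, shrink=0.5, grow=1.2, dt_max=1e-3):
    """Advance the coupled fields from t to t + dt, adapting dt to the coupling."""
    while True:
        new = dict(fields)
        for _ in range(max_iters):
            prev = new
            new = update_fields(prev, t, dt)                 # one staggered pass
            err = max(abs(new[f] - prev[f]) for f in new)    # coupling residual
            if err < tol:                                    # coupling resolved: accept step
                return new, t + dt, min(grow * dt, dt_max)   # and gently enlarge dt
        dt *= shrink                                         # fields changing too fast: refine dt
```

A production scheme would base the accept/refine decision on a per-field error indicator rather than the raw iteration increment, but the refine-when-stiff, enlarge-when-slow logic is the essence of the adaptation.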

4.
5.
Learning how to forecast is always important for traders, and divergent learning frequencies prevail among traders. The influence of the evolutionary frequency on learning performance has occasioned many studies of agent-based computational finance (e.g., Lettau in J Econ Dyn Control 21:1117–1147, 1997. doi: 10.1016/S0165-1889(97)00046-8; Szpiro in Complexity 2(4):31–39, 1997. doi: 10.1002/(SICI)1099-0526(199703/04)2:4<31::AID-CPLX8>3.0.CO;2-3; Cacho and Simmons in Aust J Agric Resour Econ 43(3):305–322, 1999. doi: 10.1111/1467-8489.00081). Although these studies all suggest that evolving less frequently and, hence, experiencing more realizations help learning, this implication may result from their common stationarity assumption. Therefore, we first attempt to approach this issue in a ‘dynamically’ evolving market in which agents learn to forecast endogenously generated asset prices. Moreover, in these studies’ market settings, evolving less frequently also meant having a longer time horizon. However, this is not true in many market settings that are even closer to real financial markets. The clarification that the evolutionary frequency and the time horizon are two separate notions leaves the effect of the evolutionary frequency on learning even more elusive and worthy of independent exploration. We find that the influence of a trader’s evolutionary frequency on his forecasting accuracy depends on all market participants and the resulting price dynamics. In addition, prior studies also commonly assume that traders have identical preferences, which is too strong an assumption to apply to a real market. Considering the heterogeneity of preferences, we find that converging to the rational expectations equilibrium is hardly possible, and we even suggest that agents in a slow-learning market learn frequently. We also apply a series of econometric tests to explain the simulation results.

6.
An intuitionistic fuzzy set is capable of handling uncertainty with the counterpart falsities which exist in nature. A proximity measure is a convenient way to demonstrate the impractical significance of membership values in an intuitionistic fuzzy set. However, the related works of Pappis (Fuzzy Sets Syst 39(1):111–115, 1991), Hong and Hwang (Fuzzy Sets Syst 66(3):383–386, 1994), Virant (2000) and Cai (IEEE Trans Fuzzy Syst 9(5):738–750, 2001) did not model the measure in the context of the intuitionistic fuzzy set but in Zadeh’s fuzzy set instead. In this paper, we examine this problem and propose new notions of δ-equalities for intuitionistic fuzzy sets and δ-equalities for intuitionistic fuzzy relations. Two fuzzy sets are said to be δ-equal if they are equal to an extent of δ. The applications of δ-equalities are important to fuzzy statistics and fuzzy reasoning. Several characteristics of δ-equalities that were not discussed in the previous works are also investigated. We apply the δ-equalities to the application of medical diagnosis to investigate a patient’s diseases from symptoms. The idea is to use δ-equalities for intuitionistic fuzzy relations to find groups of intuitionistic fuzzified sets with certain equality or similarity degrees and then to combine them. Numerical examples are given to illustrate the validity of the proposed algorithm. Further, we conduct experiments on real medical datasets to check its efficiency and applicability to real-world problems. The results obtained are also better in comparison with 10 existing diagnosis methods, namely De et al. (Fuzzy Sets Syst 117:209–213, 2001), Samuel and Balamurugan (Appl Math Sci 6(35):1741–1746, 2012), Szmidt and Kacprzyk (2004), Zhang et al. (Procedia Eng 29:4336–4342, 2012), Hung and Yang (Pattern Recogn Lett 25:1603–1611, 2004), Wang and Xin (Pattern Recogn Lett 26:2063–2069, 2005), Vlachos and Sergiadis (Pattern Recogn Lett 28(2):197–206, 2007), Zhang and Jiang (Inf Sci 178(6):4184–4191, 2008), Maheshwari and Srivastava (J Appl Anal Comput 6(3):772–789, 2016) and the Support Vector Machine (SVM).
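
To make the notion "equal to an extent of δ" concrete, the sketch below computes a plausible equality degree for two intuitionistic fuzzy sets given as (membership, non-membership) pairs over a finite universe. The specific formula (one minus the largest deviation in either component) is an assumption for illustration; the paper's formal definition of δ-equality may differ in detail.

```python
# Hedged sketch: one plausible way to compute the degree delta to which two
# intuitionistic fuzzy sets are equal; not necessarily the paper's definition.
def delta_equality(A, B):
    """A, B: dict mapping element -> (mu, nu). Returns an equality degree in [0, 1]."""
    dev = max(max(abs(A[x][0] - B[x][0]), abs(A[x][1] - B[x][1])) for x in A)
    return 1.0 - dev

A = {"fever": (0.8, 0.1), "cough": (0.6, 0.3)}
B = {"fever": (0.7, 0.2), "cough": (0.6, 0.3)}
print(delta_equality(A, B))  # 0.9 -> A and B are 0.9-equal under this convention
```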

7.
We extend Hansen and Sargent’s (Discounted linear exponential quadratic Gaussian control, 1994; IEEE Trans Autom Control 40:968–971, 1995; 2013) analysis of dynamic optimization with risk-averse agents in two directions. Firstly, following Whittle (Risk-sensitive optimal control, 1990), we show that the optimal risk-averse policy is identified via a pessimistic choice mechanism and described by simple recursive formulae. Secondly, we investigate the continuous-time limit and show that sufficient conditions for the existence of optimal solutions coincide with those which apply under risk-neutrality. Our analysis is conducted both under perfect and imperfect state observation. As an illustrative example, we analyze the optimal production policy of an entrepreneur running a monopolistic firm which faces a demand schedule subject to stochastic shocks, showing that risk-aversion induces her to act more aggressively.
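
Schematically, and in generic notation that is not necessarily the paper's, the risk-sensitive recursion underlying this class of problems replaces the expected continuation value by an exponential certainty equivalent:

```latex
V_t(x_t) \;=\; \max_{u_t}\Big\{ r(x_t,u_t)
  \;-\; \frac{\beta}{\theta}\,\log \mathbb{E}_t\!\left[ e^{-\theta\, V_{t+1}(x_{t+1})} \right] \Big\},
\qquad \theta > 0 .
```

As θ → 0 this recovers the risk-neutral Bellman equation, while for θ > 0 the log-expectation acts as a worst-case (distorted-belief) evaluation of the continuation value, which is the sense in which the optimal policy arises from a pessimistic choice mechanism.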

8.
Some numerical algorithms for elliptic eigenvalue problems are proposed, analyzed, and numerically tested. The methods combine advantages of the two-grid algorithm (Xu and Zhou in Math Comput 70(233):17–25, 2001), the two-space method (Racheva and Andreev in Comput Methods Appl Math 2:171–185, 2002), the shifted inverse power method (Hu and Cheng in Math Comput 80:1287–1301, 2011; Yang and Bi in SIAM J Numer Anal 49:1602–1624, 2011), and the polynomial preserving recovery enhancing technique (Naga et al. in SIAM J Sci Comput 28:1289–1300, 2006). Our new algorithms compare favorably with some existing methods and enjoy a superconvergence property.
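
In generic form (the paper's precise algorithms and their recovery enhancement may differ), one two-grid, shifted-inverse correction step for a discrete eigenproblem \(A u = \lambda M u\) reads:

```latex
\begin{aligned}
&\text{(i) coarse solve:}  && A_H u_H = \lambda_H M_H u_H,\\
&\text{(ii) fine-grid shifted-inverse solve:} && (A_h - \lambda_H M_h)\,\tilde u_h = M_h u_H,\\
&\text{(iii) update:} && u_h = \tilde u_h/\|\tilde u_h\|,\qquad
  \lambda_h = \frac{u_h^{\top} A_h u_h}{u_h^{\top} M_h u_h}.
\end{aligned}
```

The expensive eigenproblem is thus solved only on the coarse grid, while the fine grid contributes a single linear solve plus a Rayleigh-quotient update.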

9.
The latest-generation earth observation instruments on airborne and satellite platforms are currently producing an almost continuous high-dimensional data stream. This exponentially growing data poses a new challenge for real-time image processing and recognition. Making full and effective use of the spectral information and spatial structure information of high-resolution remote sensing images is the key to the processing and recognition of high-resolution remote sensing data. In this paper, the adaptive multipoint moment estimation (AMME) stochastic optimization algorithm is proposed for the first time, using finite lower-order moments and additional estimating points. This algorithm not only reduces the probability of reaching a local optimum in the learning process, but also improves the convergence rate of the convolutional neural network (LeCun et al. in Advances in neural information processing systems, 1990). Second, for remote sensing images characterized by complex backgrounds and small sensitive targets, we propose a feature extraction method named weighted pooling, which automatically discovers and locates small targets and assigns them high weights, to further improve the performance of real-time image recognition. We combine AMME and weighted pooling with the spatial pyramid representation (Harada et al. in Comput Vis Pattern Recognit 1617–1624, 2011) algorithm to form a new, multiscale, multilevel real-time image recognition model named weighted spatial pyramid networks (WspNet). Finally, we use the MNIST, ImageNet, and natural-disaster remote sensing data sets to test WspNet. Compared with other real-time image recognition models, WspNet achieves a new state of the art in convergence rate and image feature extraction relative to conventional stochastic gradient descent methods [such as AdaGrad, AdaDelta and Adam (Zeiler in Comput Sci, 2012; Kingma and Ba in Comput Sci, 2014; Duchi et al. in J Mach Learn Res 12(7):2121–2159, 2011)] and pooling methods [such as max-pooling, avg-pooling and stochastic-pooling (Zeiler and Fergus in Stochastic pooling for regularization of deep convolutional neural networks, 2013)].
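
To illustrate the idea of weighted pooling relative to max- or average-pooling, the sketch below pools each window with per-location weights so that small, high-response regions contribute more. The softmax weighting used here is an assumption for demonstration; the paper's weighting scheme may differ.

```python
# Illustrative sketch of a weighted pooling operation: instead of the max or
# mean of each window, pool with weights that favour strong activations.
import numpy as np

def weighted_pool(x, size=2):
    """x: (H, W) feature map; pools non-overlapping size x size windows."""
    H, W = x.shape
    out = np.zeros((H // size, W // size))
    for i in range(0, H - size + 1, size):
        for j in range(0, W - size + 1, size):
            win = x[i:i + size, j:j + size]
            w = np.exp(win - win.max())      # softmax weights emphasise salient locations
            w /= w.sum()
            out[i // size, j // size] = (w * win).sum()
    return out

print(weighted_pool(np.arange(16, dtype=float).reshape(4, 4)))
```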

10.
The semantics of progressive sentences presents a challenge to linguists and philosophers alike. According to a widely accepted view, the truth-conditions of progressive sentences rely essentially on a notion of inertia. Dowty (Word meaning and Montague grammar: the semantics of verbs and times in generative grammar and in Montague’s PTQ, D. Reidel Publishing Company, Dordrecht, 1979) suggested inertia worlds to implement this “inertia idea” in a formal semantic theory of the progressive. The main thesis of the paper is that the notion of inertia went through a subtle but crucial change when worlds were replaced by events in Landman (Nat Lang Semant 1:1–32, 1992) and Portner (Language 74(4):760–787, 1998), and that this new, event-related concept of inertia results in a possibility-based theory of the progressive. An important case in point in the paper is a proof that, despite its surface structure, the theory presented in Portner (1998) does not implement the notion of inertia in Dowty (1979); rather, it belongs together with Dowty’s earlier, 1977 theory, according to which the progressive is a possibility operator.

11.
In this paper, we study the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):675–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for second-order elliptic problems. An a priori error estimate under the energy norm is established for all four methods. An optimal error estimate under the \(L^2\) norm is obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically we obtain optimal \((k+1)\)th order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal \((k+1)\)th order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.
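
For orientation, the distinguishing ingredient of the DDG method in one space dimension is its numerical flux for the solution derivative at a cell interface (Liu and Yan 2009):

```latex
\widehat{u_x} \;=\; \beta_0\,\frac{[u]}{\Delta x} \;+\; \{u_x\} \;+\; \beta_1\,\Delta x\,[u_{xx}],
```

where \([w]\) and \(\{w\}\) denote the jump and the average of \(w\) across the interface and the coefficients \((\beta_0,\beta_1)\) must be chosen admissibly; the interface-correction and symmetric variants studied here additionally adjust how the test function is treated at cell interfaces.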

12.
Sokół and Lewiński (Struct Multidisc Optim 42:835–853, 2010) published a paper on Michell trusses for three forces in the plane, with an application to a class of symmetric trusses. Vazquez Espi and Cervera Bravo (2011) have written a Discussion on the above paper, which was replied to by Sokół and Lewiński (2011). In the current Discussion, some comments on the above exchange are offered, because it involves some fundamental issues of topology optimization.

13.
In this paper we investigate the problem of partitioning an input string T in such a way that compressing individually its parts via a base compressor C gets a compressed output that is shorter than applying C over the entire T at once. This problem was introduced in Buchsbaum et al. (Proc. of 11th ACM-SIAM Symposium on Discrete Algorithms, pp. 175–184, 2000; J. ACM 50(6):825–851, 2003) in the context of table compression, and then further elaborated and extended to strings and trees by Ferragina et al. (J. ACM 52:688–713, 2005; Proc. of 46th IEEE Symposium on Foundations of Computer Science, pp. 184–193, 2005) and Mäkinen and Navarro (Proc. of 14th Symposium on String Processing and Information Retrieval, pp. 229–241, 2007). Unfortunately, the literature offers poor solutions: namely, we know either a cubic-time algorithm for computing the optimal partition based on dynamic programming (Buchsbaum et al. in J. ACM 50(6):825–851, 2003; Giancarlo and Sciortino in Proc. of 14th Symposium on Combinatorial Pattern Matching, pp. 129–143, 2003), or a few heuristics that do not guarantee any bounds on the efficacy of their computed partition (Buchsbaum et al. in Proc. of 11th ACM-SIAM Symposium on Discrete Algorithms, pp. 175–184, 2000; J. ACM 50(6):825–851, 2003), or algorithms that are efficient but work in some specific scenarios (such as the Burrows–Wheeler Transform, see e.g. Ferragina et al. in J. ACM 52:688–713, 2005; Mäkinen and Navarro in Proc. of 14th Symposium on String Processing and Information Retrieval, pp. 229–241, 2007) and achieve compression performance that might be worse than the optimal partitioning by an \(\Omega(\log n/\log\log n)\) factor. Therefore, computing the optimal solution efficiently is still open (Buchsbaum and Giancarlo in Encyclopedia of Algorithms, pp. 939–942, 2008). In this paper we provide the first algorithm which computes, in \(O(n\log^{1+\varepsilon} n)\) time and \(O(n)\) space, a partition of T whose compressed output is guaranteed to be at most a factor (1+ε) worse than the optimal one, where ε may be any positive constant fixed in advance. This result holds for any base compressor C whose compression performance can be bounded in terms of the zeroth-order or the k-th order empirical entropy of the text T. We will also discuss extensions of our results to BWT-based compressors and to the compression booster of Ferragina et al. (J. ACM 52:688–713, 2005).
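
To make the baseline concrete, the sketch below is the exact dynamic program that the paper's approximation improves upon: dp[i] is the cheapest way to compress the prefix T[:i] as a concatenation of individually compressed pieces. Using zlib as the base compressor C is an assumption for illustration.

```python
# Exact (non-approximate) optimal partitioning by dynamic programming; the
# paper's (1+eps)-approximation avoids this cubic amount of work.
import zlib

def optimal_partition_cost(T: bytes) -> int:
    n = len(T)
    dp = [0] + [float("inf")] * n          # dp[i] = best cost for the prefix T[:i]
    for i in range(1, n + 1):
        for j in range(i):                 # last piece is T[j:i]
            dp[i] = min(dp[i], dp[j] + len(zlib.compress(T[j:i])))
    return dp[n]

print(optimal_partition_cost(b"abracadabra" * 20))
```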

14.
There are two prominent ways of formally modelling human belief. One is in terms of plain beliefs (yes-or-no beliefs, beliefs simpliciter), i.e., sets of propositions. The second one is in terms of degrees of beliefs, which are commonly taken to be representable by subjective probability functions. In relating these two ways of modelling human belief, the most natural idea is a thesis frequently attributed to John Locke: a proposition is or ought to be believed (accepted) just in case its subjective probability exceeds a contextually fixed probability threshold \(t<1\). This idea is known to have two serious drawbacks: first, it denies that beliefs are closed under conjunction, and second, it may easily lead to sets of beliefs that are logically inconsistent. In this paper I present two recent accounts of aligning plain belief with subjective probability: the Stability Theory of Leitgeb (Ann Pure Appl Log 164(12):1338–1389, 2013; Philos Rev 123(2):131–171, 2014; Proc Aristot Soc Suppl Vol 89(1):143–185, 2015a; The stability of belief: an essay on rationality and coherence. Oxford University Press, Oxford, 2015b) and the Probalogical Theory (or Tracking Theory) of Lin and Kelly (Synthese 186(2):531–575, 2012a; J Philos Log 41(6):957–981, 2012b). I argue that Leitgeb’s theory may be too sceptical for the purposes of real life.
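
A small worked example of the first drawback mentioned above: with a Lockean threshold, two propositions can each be believed while their conjunction is not, so belief is not closed under conjunction. The threshold and probabilities below are illustrative numbers.

```python
# Lockean thesis with threshold t: believe a proposition iff its probability exceeds t.
t = 0.9
p_a, p_b = 0.95, 0.92          # subjective probabilities of A and B (illustrative)
print(p_a > t, p_b > t)        # True True  -> A and B are each believed
print(p_a * p_b > t)           # False      -> A & B (assuming independence) is not believed
```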

15.
In quantum information theory, symmetric informationally complete positive operator-valued measures (SIC-POVMs) are relevant to quantum state tomography [8], quantum cryptography [15], and foundational studies [16]. In general, it is hard to construct SIC-POVMs, and only a few classes of them are known to exist. Moreover, we do not know whether there exists an infinite class of them. Many researchers have therefore tried to construct approximately symmetric informationally complete positive operator-valued measures (ASIC-POVMs). In this paper, we propose two new constructions of ASIC-POVMs for prime power dimensions using only multiplicative characters over finite fields.
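
For context, the standard defining conditions of a SIC-POVM in dimension \(d\), which ASIC-POVMs are required to satisfy only approximately, are:

```latex
\Pi_j = \frac{1}{d}\,|\psi_j\rangle\langle\psi_j|,\quad j=1,\dots,d^2,\qquad
\sum_{j=1}^{d^2} \Pi_j = I,\qquad
|\langle\psi_j|\psi_k\rangle|^2 = \frac{1}{d+1}\ \ (j\neq k).
```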

16.
In this paper we study an economy with a high degree of financialization in which (non-financial) firms need loans from commercial banks to finance production, service debt, and make long-term investments. Along the business cycle, the economy follows a Minsky base cycle with firms traversing through the various stages of financial fragility, i.e. hedge, speculative and Ponzi finance (cf. Minsky in The financial instability hypothesis: a restatement. Hyman P Minsky archive paper, vol 180, pp 541–552, 1978; Stabilizing an unstable economy. Yale University Press, 2nd edn 2008, McGraw-Hill, New York, 1986; The financial instability hypothesis. Economics working paper archive wp74. The Jerome Levy Economics Institute of Bard College, 1992). In the speculative financial stage, cash flows are insufficient to finance the repayment of principal but sufficient for paying interest, so banks are willing to roll over credits in order to prevent loan defaults. In the Ponzi financial position even interest payments cannot be served, but banks may still be willing to keep firms alive through “extend and pretend” loans, also known as zombie lending (Caballero et al. in Am Econ Rev 98(5):1943–1977, 2008). This lending behavior may cause credit bubbles with increasing leverage ratios. Empirical evidence suggests that recessions following such leveraging booms are more severe and can be associated with higher economic costs (Jordà et al. in J Money Credit Bank 45(s2):3–28, 2013; Schularick and Taylor in Am Econ Rev 102(2):1029–1061, 2012). We study macroprudential regulations aimed at: (i) the prevention and mitigation of credit bubbles, (ii) ensuring macro-financial stability, and (iii) limiting the ability of banks to create unsustainable debt bubbles. Our results show that limiting credit growth by using a non-risk-weighted capital ratio has slightly positive effects, while using loan eligibility criteria such as cutting off funding to all financially unsound firms (speculative and Ponzi) has strong positive effects.
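
The Minskyan classification used above reduces to comparing a firm's cash flow with its debt service, as in the sketch below. The field names are illustrative assumptions, not the model's actual variables.

```python
# Sketch of the hedge / speculative / Ponzi classification described in the abstract.
def financial_position(cash_flow, interest_due, principal_due):
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # can service both interest and principal
    if cash_flow >= interest_due:
        return "speculative"  # can pay interest only -> loans must be rolled over
    return "ponzi"            # cannot even pay interest -> 'extend and pretend' lending

print(financial_position(120, 30, 80))  # hedge
print(financial_position(50, 30, 80))   # speculative
print(financial_position(20, 30, 80))   # ponzi
```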

17.
The commonly used one-step methods and linear multi-step methods all have a global error that is of the same order as the local truncation error (as defined in [1, 6, 8, 13, 15]). In fact, this is true of the entire class of general linear methods. In practice, this means that the order of the method is typically defined solely by order conditions which are derived by studying the local truncation error. In this work we investigate the interplay between the local truncation error and the global error, and develop a methodology which defines the construction of explicit error inhibiting block one-step methods (alternatively written as explicit general linear methods [2]). These error inhibiting schemes are constructed so that the accumulation of the local truncation error over time is controlled, which results in a global error that is one order higher than the local truncation error. In this work, we delineate how to carefully choose the coefficient matrices so that the growth of the local truncation error is inhibited. We then use this theoretical understanding to construct several methods that have higher-order global error than local truncation error, and demonstrate their enhanced order of accuracy on test cases. These methods demonstrate that the error inhibiting concept is realizable. Future work will further develop new error inhibiting methods and will analyze the computational efficiency and linear stability properties of these methods.
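
Schematically, with the truncation-error convention used in the references cited above, a standard method whose local truncation error is of order \(p\) accumulates to a global error of the same order, whereas the error-inhibiting construction cancels the leading accumulated terms and gains one order:

```latex
\tau_n = O(\Delta t^{\,p})
\;\;\Longrightarrow\;\;
\underbrace{E_N = O(\Delta t^{\,p})}_{\text{standard one-step / GLM}}
\qquad\text{vs.}\qquad
\underbrace{E_N = O(\Delta t^{\,p+1})}_{\text{error-inhibiting scheme}}.
```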

18.
Robust and accurate detection of the pupil position is a key building block for head-mounted eye tracking and a prerequisite for applications on top, such as gaze-based human–computer interaction or attention analysis. Despite a large body of work, detecting the pupil in images recorded under real-world conditions is challenging given significant variability in the eye appearance (e.g., illumination, reflections, occlusions, etc.), individual differences in eye physiology, as well as other sources of noise, such as contact lenses or make-up. In this paper we review six state-of-the-art pupil detection methods, namely ElSe (Fuhl et al. in Proceedings of the ninth biennial ACM symposium on eye tracking research & applications, ACM, New York, NY, USA, pp 123–130, 2016), ExCuSe (Fuhl et al. in Computer analysis of images and patterns. Springer, New York, pp 39–51, 2015), Pupil Labs (Kassner et al. in Adjunct proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing (UbiComp), pp 1151–1160, 2014. doi: 10.1145/2638728.2641695), SET (Javadi et al. in Front Neuroeng 8, 2015), Starburst (Li et al. in Computer vision and pattern recognition-workshops, 2005. IEEE Computer society conference on CVPR workshops. IEEE, pp 79–79, 2005), and Świrski (Świrski et al. in Proceedings of the symposium on eye tracking research and applications (ETRA). ACM, pp 173–176, 2012. doi: 10.1145/2168556.2168585). We compare their performance on a large-scale data set consisting of 225,569 annotated eye images taken from four publicly available data sets. Our experimental results show that the algorithm ElSe (Fuhl et al. 2016) outperforms the other pupil detection methods by a large margin, thus offering robust and accurate pupil positions on challenging everyday eye images.
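
Comparisons of this kind are typically reported as detection rates under a pixel-error threshold: an estimate counts as correct if the predicted pupil centre lies within a given Euclidean distance of the annotated centre. The 5-pixel default below is a commonly used but here assumed choice, not necessarily this paper's exact setting.

```python
# Sketch of the detection-rate metric commonly used in pupil-detection benchmarks.
import math

def detection_rate(predicted, ground_truth, threshold_px=5.0):
    """predicted, ground_truth: equal-length lists of (x, y) pupil centres."""
    hits = sum(math.dist(p, g) <= threshold_px for p, g in zip(predicted, ground_truth))
    return hits / len(ground_truth)

print(detection_rate([(100, 60), (40, 42)], [(102, 61), (55, 50)]))  # 0.5
```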

19.
Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate on the experimental role of computer simulations (Parker in Synthese 169(3):483–496, 2009; Morrison in Philos Stud 143(1):33–57, 2009), the nature of computer data (Barberousse and Vorms, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013; Humphreys, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013), and the explanatory power of computer simulations (Krohs in Int Stud Philos Sci 22(3):277–292, 2008; Durán in Int Stud Philos Sci 31(1):27–45, 2017). The aim of this article is to show that these authors are right in assuming that results of computer simulations are to be trusted when computer simulations are reliable processes. After a short reconstruction of the problem of epistemic opacity, the article elaborates extensively on computational reliabilism, a specified form of process reliabilism with computer simulations located at the center. The article ends with a discussion of four sources for computational reliabilism, namely, verification and validation, robustness analysis for computer simulations, a history of (un)successful implementations, and the role of expert knowledge in simulations.

20.
To generate long-distance shared quantum correlations (QCs) for information processing in future quantum networks, we recently proposed the concept of the QC repeater and its kernel technique, named QC swapping. In addition, we extensively studied the QC swapping between two simple QC resources (i.e., a pair of Werner states) with four different methods to quantify QCs (Xie et al. in Quantum Inf Process 14:653–679, 2015). In this paper, we continue to treat the same issue by employing three other methods associated with relative entropies, i.e., the MPSVW method (Modi et al. in Phys Rev Lett 104:080501, 2010), the Zhang method (arXiv:1011.4333 [quant-ph]) and the RS method (Rulli and Sarandy in Phys Rev A 84:042109, 2011). We first derive analytic expressions of all QCs which occur during the swapping process and then reveal their properties regarding monotonicity and thresholds. Importantly, we find that a long-distance shared QC can indeed be generated from two short-distance ones via QC swapping. Finally, we give a simple comparison of our present results with our previous ones.
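
As background for the relative-entropy-based quantifiers employed here, the quantum relative entropy and the corresponding distance-like measure of correlations are, schematically,

```latex
S(\rho\,\|\,\sigma) = \operatorname{Tr}\!\big[\rho(\log\rho - \log\sigma)\big],
\qquad
Q(\rho) = \min_{\chi \in \mathcal{C}} S(\rho\,\|\,\chi),
```

where \(\mathcal{C}\) is a chosen set of correlation-free (e.g. classical or classical–quantum) reference states; the three methods compared in the paper differ essentially in how this reference set and the minimization are specified.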
