Similar Documents
20 similar documents found.
1.
XGC1 and M3D-\(C^1\) are two fusion plasma simulation codes being developed at Princeton Plasma Physics Laboratory. XGC1 uses the particle-in-cell method to simulate gyrokinetic neoclassical physics and turbulence (Chang et al. Phys Plasmas 16(5):056108, 2009; Ku et al. Nucl Fusion 49:115021, 2009; Adams et al. J Phys Conf Ser 180(1):012036, 2009). M3D-\(C^1\) solves the two-fluid resistive magnetohydrodynamic equations with \(C^1\) finite elements (Jardin J Comput Phys 200(1):133–152, 2004; Jardin et al. J Comput Phys 226(2):2146–2174, 2007; Ferraro and Jardin J Comput Phys 228(20):7742–7770, 2009; Jardin J Comput Phys 231(3):832–838, 2012; Jardin et al. Comput Sci Discov 5(1):014002, 2012; Ferraro et al. Sci Discov Adv Comput, 2012; Ferraro et al. International Sherwood Fusion Theory Conference, 2014). This paper presents the software tools and libraries that were combined to form the geometry and automatic meshing procedures for these codes. Specific consideration has been given to satisfying the mesh configuration and element shape quality constraints of XGC1 and M3D-\(C^1\).

2.
Some numerical algorithms for elliptic eigenvalue problems are proposed, analyzed, and numerically tested. The methods combine advantages of the two-grid algorithm (Xu and Zhou in Math Comput 70(233):17–25, 2001), the two-space method (Racheva and Andreev in Comput Methods Appl Math 2:171–185, 2002), the shifted inverse power method (Hu and Cheng in Math Comput 80:1287–1301, 2011; Yang and Bi in SIAM J Numer Anal 49:1602–1624, 2011), and the polynomial preserving recovery enhancing technique (Naga et al. in SIAM J Sci Comput 28:1289–1300, 2006). Our new algorithms compare favorably with some existing methods and enjoy the superconvergence property.
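For orientation, a minimal sketch of the two-grid shifted-inverse step that these algorithms combine (notation mine, following the Xu–Zhou and Yang–Bi papers cited above; details differ per variant): first solve the eigenvalue problem on a coarse mesh of size \(H\), finding \((\lambda_H, u_H)\) with
\[ a(u_H, v_H) = \lambda_H\, b(u_H, v_H) \quad \forall v_H \in V_H, \]
then perform one shifted-inverse correction on the fine mesh of size \(h \ll H\): find \(u^h \in V_h\) such that
\[ a(u^h, v_h) - \lambda_H\, b(u^h, v_h) = b(u_H, v_h) \quad \forall v_h \in V_h, \]
and update the eigenvalue via the Rayleigh quotient \(\lambda^h = a(u^h,u^h)/b(u^h,u^h)\). The recovery technique then post-processes \(u^h\) to boost the convergence order further.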

3.
We propose a new computing model called chemical reaction automata (CRAs) as a simplified variant of the reaction automata (RAs) studied in the recent literature (Okubo in RAIRO Theor Inform Appl 48:23–38, 2014; Okubo et al. in Theor Comput Sci 429:247–257, 2012a; Theor Comput Sci 454:206–221, 2012b). We show that CRAs working in the maximally parallel manner are computationally equivalent to Turing machines, while the computational power of CRAs working in the sequential manner coincides with that of the class of Petri nets, in marked contrast to the result that RAs (in both the maximally parallel and sequential manners) have Turing-universal computing power (Okubo 2014; Okubo et al. 2012a). Intuitively, CRAs are defined as RAs without inhibitors functioning in each reaction, providing an offline model of computing by chemical reaction networks (CRNs). Thus, the main results in this paper not only strengthen the previous result on the Turing computability of RAs but also clarify the computational power of inhibitors in RA computation.
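To make the sequential mode concrete, here is a minimal Python sketch of one computation step of a CRA-like system (the multiset representation and names are mine, not from the paper; note the absence of inhibitor sets, which is exactly what distinguishes CRAs from RAs):

from collections import Counter

def enabled(state, reactants):
    # A reaction is enabled when the current multiset contains all its reactants.
    return all(state[s] >= n for s, n in reactants.items())

def step_sequential(state, reaction):
    # Sequential manner: fire exactly one enabled reaction per step.
    reactants, products = reaction
    if not enabled(state, reactants):
        return None
    nxt = state.copy()
    nxt.subtract(reactants)
    nxt.update(products)
    return +nxt  # normalize: drop zero counts

# Example: the reaction a + b -> c applied to the multiset {a:2, b:1}
rule = (Counter({"a": 1, "b": 1}), Counter({"c": 1}))
print(step_sequential(Counter({"a": 2, "b": 1}), rule))  # Counter({'a': 1, 'c': 1})

Fired this way, the system behaves like a place/transition Petri net, consistent with the sequential-mode equivalence stated above.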

4.
In this paper, we study the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):675–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for second-order elliptic problems. A priori error estimates in the energy norm are established for all four methods. Optimal error estimates in the \(L^2\) norm are obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples is carried out to illustrate the accuracy and capability of the schemes. Numerically we obtain optimal \((k+1)\)th-order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal \((k+1)\)th-order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.
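For reference, the distinguishing ingredient of the DDG method of Liu and Yan is its numerical flux for the solution derivative across an element interface \(e\),
\[ \widehat{u_x} = \beta_0 \frac{[u]}{h_e} + \{u_x\} + \beta_1 h_e\, [u_{xx}], \]
where \([\cdot]\) and \(\{\cdot\}\) denote the jump and average across \(e\), \(h_e\) is a local mesh size, and the pair \((\beta_0,\beta_1)\) must satisfy an admissibility condition for stability; the interface-correction and symmetric variants differ in how the test function is treated at interfaces.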

5.
In this paper, a new numerical approximation is discussed for the two-dimensional distributed-order time fractional reaction–diffusion equation. Combining the idea of the weighted and shifted Grünwald difference (WSGD) approximation (Tian et al. in Math Comput 84:1703–1727, 2015; Wang and Vong in J Comput Phys 277:1–15, 2014) in time, we establish an orthogonal spline collocation (OSC) method in space. A detailed analysis shows that the proposed scheme is unconditionally stable and convergent with convergence order \(\mathscr {O}(\tau ^2+\Delta \alpha ^2+h^{r+1})\), where \(\tau\), \(\Delta \alpha\), \(h\) and \(r\) are, respectively, the time step size, the step size in the distributed-order variable, the space step size, and the polynomial degree in space. Interestingly, we prove that the proposed WSGD-OSC scheme converges with second order in time, whereas previously proposed OSC schemes (Fairweather et al. in J Sci Comput 65:1217–1239, 2015; Yang et al. in J Comput Phys 256:824–837, 2014) can achieve at most a temporal accuracy order that depends on the order of the fractional derivatives in the equations and is usually less than two. Some numerical results are also given to confirm our theoretical predictions.
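To recall the WSGD idea in one formula (standard notation, not taken verbatim from the paper): the shifted Grünwald operator
\[ A_{\tau,p}^{\alpha} u(t) = \tau^{-\alpha} \sum_{k=0}^{\infty} g_k^{(\alpha)}\, u\bigl(t-(k-p)\tau\bigr), \qquad g_k^{(\alpha)} = (-1)^k \binom{\alpha}{k}, \]
approximates the Riemann–Liouville derivative only to first order, but the weighted combination \(\lambda_p A_{\tau,p}^{\alpha} + \lambda_q A_{\tau,q}^{\alpha}\) with
\[ \lambda_p = \frac{\alpha - 2q}{2(p-q)}, \qquad \lambda_q = \frac{2p - \alpha}{2(p-q)} \]
cancels the leading error term and is second-order accurate in \(\tau\).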

6.
An intuitionistic fuzzy set is capable of handling uncertainty together with the counterpart falsities that exist in nature. A proximity measure is a convenient way to capture when differences between membership values in an intuitionistic fuzzy set are of no practical significance. However, the related works of Pappis (Fuzzy Sets Syst 39(1):111–115, 1991), Hong and Hwang (Fuzzy Sets Syst 66(3):383–386, 1994), Virant (2000) and Cai (IEEE Trans Fuzzy Syst 9(5):738–750, 2001) did not model the measure in the context of the intuitionistic fuzzy set but in Zadeh's fuzzy set instead. In this paper, we examine this problem and propose new notions of δ-equalities for intuitionistic fuzzy sets and δ-equalities for intuitionistic fuzzy relations. Two fuzzy sets are said to be δ-equal if they are equal to the extent δ. The applications of δ-equalities are important to fuzzy statistics and fuzzy reasoning. Several characteristics of δ-equalities that were not discussed in previous works are also investigated. We apply the δ-equalities to medical diagnosis, investigating a patient's diseases from symptoms. The idea is to use δ-equalities for intuitionistic fuzzy relations to find groups of intuitionistic fuzzy sets with certain equality or similarity degrees and then to combine them. Numerical examples are given to illustrate the validity of the proposed algorithm. Further, we conduct experiments on real medical datasets to check its efficiency and applicability to real-world problems. The results obtained also compare favorably with 10 existing diagnosis methods, namely De et al. (Fuzzy Sets Syst 117:209–213, 2001), Samuel and Balamurugan (Appl Math Sci 6(35):1741–1746, 2012), Szmidt and Kacprzyk (2004), Zhang et al. (Procedia Eng 29:4336–4342, 2012), Hung and Yang (Pattern Recogn Lett 25:1603–1611, 2004), Wang and Xin (Pattern Recogn Lett 26:2063–2069, 2005), Vlachos and Sergiadis (Pattern Recogn Lett 28(2):197–206, 2007), Zhang and Jiang (Inf Sci 178(6):4184–4191, 2008), Maheshwari and Srivastava (J Appl Anal Comput 6(3):772–789, 2016) and Support Vector Machine (SVM).
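As a concrete illustration, a minimal Python sketch of a δ-equality degree between two intuitionistic fuzzy sets (this definition extends Cai's fuzzy-set δ-equality to pairs of membership and non-membership degrees; the paper's exact formulation may differ):

def delta_equality(A, B):
    # A and B map each element of a common finite universe to a pair
    # (membership, non-membership); the result is the degree delta to
    # which the two sets are equal under this illustrative measure.
    return 1.0 - max(
        max(abs(A[x][0] - B[x][0]), abs(A[x][1] - B[x][1])) for x in A
    )

A = {"fever": (0.8, 0.1), "cough": (0.6, 0.3)}
B = {"fever": (0.7, 0.2), "cough": (0.6, 0.25)}
print(delta_equality(A, B))  # 0.9: A and B are 0.9-equal under this measure

Sets that are δ-equal for δ close to 1 can then be grouped and combined, as in the diagnosis algorithm described above.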

7.
8.
The latest generation of earth observation instruments on airborne and satellite platforms is currently producing an almost continuous high-dimensional data stream. This exponentially growing data volume poses a new challenge for real-time image processing and recognition. Making full and effective use of the spectral information and spatial structure of high-resolution remote sensing images is the key to processing and recognizing high-resolution remote sensing data. In this paper, the adaptive multipoint moment estimation (AMME) stochastic optimization algorithm is proposed for the first time, using finite lower-order moments and additional estimating points. This algorithm not only reduces the probability of reaching a local optimum during learning, but also improves the convergence rate of the convolutional neural network (LeCun et al. in Advances in neural information processing systems, 1990). Second, for remote sensing images characterized by complex backgrounds and small sensitive targets, we propose a feature extraction method named weighted pooling, which automatically discovers and locates small targets and assigns them high weights, further improving real-time image recognition performance. We combine AMME and weighted pooling with the spatial pyramid representation (Harada et al. in Comput Vis Pattern Recognit 1617–1624, 2011) to form a new multiscale, multilevel real-time image recognition model, which we name weighted spatial pyramid networks (WspNet). Finally, we test WspNet on the MNIST and ImageNet data sets and on remote sensing data sets of natural disasters. WspNet achieves a new state of the art in convergence rate and image feature extraction compared with conventional stochastic gradient descent methods [such as AdaGrad, AdaDelta and Adam (Zeiler in Comput Sci, 2012; Kingma and Ba in Comput Sci, 2014; Duchi et al. in J Mach Learn Res 12(7):2121–2159, 2011)] and pooling methods [such as max-pooling, avg-pooling and stochastic-pooling (Zeiler and Fergus in Stochastic pooling for regularization of deep convolutional neural networks, 2013)].
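Since the abstract does not spell out the weighting scheme, the following Python sketch shows one plausible reading of weighted pooling, with softmax weights standing in for the high weights the paper assigns to salient small-target activations (the function name and weighting are mine, not the paper's):

import numpy as np

def weighted_pool(window: np.ndarray) -> float:
    # Pool one window by a softmax-weighted average, so strong (small-target)
    # activations dominate -- an illustrative stand-in for the paper's
    # weighted pooling, whose exact weighting is not given in the abstract.
    w = np.exp(window - window.max())  # numerically stable softmax weights
    w /= w.sum()
    return float((w * window).sum())

window = np.array([[0.1, 0.2], [0.1, 3.0]])  # one 2x2 pooling window
print(weighted_pool(window))  # close to 3.0: dominated by the salient peak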

9.
Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate on the experimental role of computer simulations (Parker in Synthese 169(3):483–496, 2009; Morrison in Philos Stud 143(1):33–57, 2009), the nature of computer data (Barberousse and Vorms, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013; Humphreys, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013), and the explanatory power of computer simulations (Krohs in Int Stud Philos Sci 22(3):277–292, 2008; Durán in Int Stud Philos Sci 31(1):27–45, 2017). The aim of this article is to show that these authors are right in assuming that results of computer simulations are to be trusted when computer simulations are reliable processes. After a short reconstruction of the problem of epistemic opacity, the article elaborates extensively on computational reliabilism, a specified form of process reliabilism with computer simulations located at the center. The article ends with a discussion of four sources for computational reliabilism, namely, verification and validation, robustness analysis for computer simulations, a history of (un)successful implementations, and the role of expert knowledge in simulations.

10.
In this paper, a maximum-principle-satisfying finite volume compact scheme is proposed for solving scalar hyperbolic conservation laws. The scheme combines weighted essentially non-oscillatory (WENO) schemes with a class of compact schemes under a finite volume framework, in which the nonlinear WENO weights are coupled with lower-order compact stencils. The maximum-principle-satisfying polynomial rescaling limiter of Zhang and Shu (J Comput Phys 229:3091–3120, 2010; Proc R Soc A Math Phys Eng Sci 467:2752–2776, 2011) is adopted to construct the present schemes at each stage of an explicit Runge–Kutta method, without destroying high-order accuracy and conservativity. Numerical examples for one- and two-dimensional problems, including incompressible flows, are presented to demonstrate the good performance, maximum-principle preservation, essential non-oscillation, and high resolution of the proposed method.
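The adopted limiter is a simple linear rescaling toward the cell average: given the reconstruction polynomial \(p(x)\) on a cell with average \(\bar p\) and global bounds \(m \le u \le M\), it is replaced by
\[ \tilde p(x) = \theta\,\bigl(p(x) - \bar p\bigr) + \bar p, \qquad \theta = \min\left\{ \left|\frac{M - \bar p}{M_p - \bar p}\right|,\ \left|\frac{m - \bar p}{m_p - \bar p}\right|,\ 1 \right\}, \]
where \(M_p\) and \(m_p\) are the maximum and minimum of \(p\) over the relevant quadrature points of the cell. The rescaling preserves the cell average, hence conservation, and Zhang and Shu prove it does not degrade the order of accuracy.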

11.
We study the class of pseudo-BL-algebras in which every maximal filter is normal. We present an equational base for this class, and we extend these results to the class of basic pseudo hoops with fixed strong unit. This is a continuation of the research from Botur et al. (Soft Comput 16:635–644, doi: 10.1007/s00500-011-0763-7, 2012).

12.
State-based formal methods [e.g. Event-B/RODIN (Abrial in Modeling in Event-B—system and software engineering. Cambridge University Press, Cambridge, 2010; Abrial et al. in Int J Softw Tools Technol Transf (STTT) 12(6):447–466, 2010)] for critical system development and verification are now well established, with track records including tool support and industrial applications. Proof-based verification focuses in particular on safety properties. Liveness properties, which guarantee eventual or converging computations of some requirements, are less well dealt with: inductive reasoning about liveness is not explicitly supported, and liveness proofs are often complex and expensive, requiring high skill levels on the part of the verification engineer. Fairness-based temporal logic approaches have been proposed to address this, e.g. TLA (Lamport in ACM Trans Program Lang Syst 16(3):872–923, 1994) and that of Manna and Pnueli (Temporal verification of reactive systems—safety. Springer, New York, 1995). We contribute to this technology need by proposing a fairness-based method integrating temporal and first-order logic, proof, and tools for modelling and verification of safety and liveness properties. The method is based on an integration of Event-B and TLA. Building on our previous work (Méry and Poppleton in Integrated formal methods, 10th international conference, IFM 2013, Turku, Finland, pp 208–222, 2013. doi: 10.1007/978-3-642-38613-8_15), we present the method via three example population protocols (Angluin et al. in Distrib Comput 18(4):235–253, 2006). These were proposed as a theoretical framework for computability reasoning about Wireless Sensor Network and Mobile Ad-Hoc Network algorithms. Our examples present typical liveness and convergence requirements. We prove convergence results for the examples by integrated modelling and proof with Event-B/RODIN and TLA. We exploit existing proof rules, and define and apply three new proof rules; soundness proofs are also provided. During the process we observe certain repeating patterns in the proofs, which are easily identified and reused because of the explicit nature of the reasoning.
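For orientation, Lamport's WF1 rule (from the TLA paper cited above) illustrates the shape fairness-based liveness rules take: a leads-to property \(P \leadsto Q\) is derived from weak fairness via three non-temporal premises,
\[
\begin{array}{l}
P \wedge [\mathcal{N}]_v \Rightarrow (P' \vee Q') \\
P \wedge \langle \mathcal{N} \wedge \mathcal{A} \rangle_v \Rightarrow Q' \\
P \Rightarrow \mathrm{ENABLED}\, \langle \mathcal{A} \rangle_v \\
\hline
\Box[\mathcal{N}]_v \wedge \mathrm{WF}_v(\mathcal{A}) \Rightarrow (P \leadsto Q)
\end{array}
\]
where the premises are first-order proof obligations over states and actions, and only the conclusion is temporal.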

13.
14.
There are two prominent ways of formally modelling human belief. One is in terms of plain beliefs (yes-or-no beliefs, beliefs simpliciter), i.e., sets of propositions. The second one is in terms of degrees of beliefs, which are commonly taken to be representable by subjective probability functions. In relating these two ways of modelling human belief, the most natural idea is a thesis frequently attributed to John Locke: a proposition is or ought to be believed (accepted) just in case its subjective probability exceeds a contextually fixed probability threshold \(t<1\). This idea is known to have two serious drawbacks: first, it denies that beliefs are closed under conjunction, and second, it may easily lead to sets of beliefs that are logically inconsistent. In this paper I present two recent accounts of aligning plain belief with subjective probability: the Stability Theory of Leitgeb (Ann Pure Appl Log 164(12):1338–1389, 2013; Philos Rev 123(2):131–171, 2014; Proc Aristot Soc Suppl Vol 89(1):143–185, 2015a; The stability of belief: an essay on rationality and coherence. Oxford University Press, Oxford, 2015b) and the Probalogical Theory (or Tracking Theory) of Lin and Kelly (Synthese 186(2):531–575, 2012a; J Philos Log 41(6):957–981, 2012b). I argue that Leitgeb's theory may be too sceptical for the purposes of real life.
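A four-line computation illustrates the first drawback (the numbers are chosen for illustration only):

t = 0.8                  # Lockean threshold, fixed by context
p = [0.9, 0.9, 0.9]      # three independent propositions, each believed since 0.9 > t
conj = p[0] * p[1] * p[2]
print(conj, conj > t)    # 0.729 False: every conjunct is believed, their conjunction is not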

15.
We use self-reduction methods to prove strong information lower bounds on two of the most studied functions in the communication complexity literature: Gap Hamming Distance (GHD) and Inner Product (IP). In our first result we affirm the conjecture that the information cost of GHD is linear even under the uniform distribution, which strengthens the Ω(n) bound recently shown by Kerenidis et al. (2012), and answers an open problem from Chakrabarti et al. (2012). In our second result we prove that the information cost of \(\mathrm{IP}_n\) is arbitrarily close to the trivial upper bound \(n\) as the permitted error tends to zero, again strengthening the Ω(n) lower bound recently proved by Braverman and Weinstein (Electronic Colloquium on Computational Complexity (ECCC) 18:164, 2011). Our proofs demonstrate that self-reducibility makes the connection between information complexity and communication complexity lower bounds a two-way connection. Whereas numerous past results (Chakrabarti et al. 2001; Bar-Yossef et al. in J Comput Syst Sci 68(4):702–732, 2004; Barak et al. 2010) used information complexity techniques to derive new communication complexity lower bounds, we explore a generic way in which communication complexity lower bounds imply information complexity lower bounds in a black-box manner.
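For reference, the (internal) information cost discussed here is the standard one: for a protocol \(\Pi\) on inputs \((X,Y)\) drawn from a distribution \(\mu\),
\[ \mathrm{IC}_\mu(\Pi) = I(X;\Pi \mid Y) + I(Y;\Pi \mid X), \]
i.e. what each player learns about the other's input from the transcript; the information complexity of a function is the infimum of this quantity over all protocols computing it within the permitted error. Information cost lower-bounds communication cost, which is the direction this paper exploits in reverse.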

16.
Robust and accurate detection of the pupil position is a key building block for head-mounted eye tracking and a prerequisite for applications on top, such as gaze-based human–computer interaction or attention analysis. Despite a large body of work, detecting the pupil in images recorded under real-world conditions is challenging given significant variability in eye appearance (e.g., illumination, reflections, occlusions), individual differences in eye physiology, as well as other sources of noise, such as contact lenses or make-up. In this paper we review six state-of-the-art pupil detection methods, namely ElSe (Fuhl et al. in Proceedings of the ninth biennial ACM symposium on eye tracking research & applications, ACM. New York, NY, USA, pp 123–130, 2016), ExCuSe (Fuhl et al. in Computer analysis of images and patterns. Springer, New York, pp 39–51, 2015), Pupil Labs (Kassner et al. in Adjunct proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing (UbiComp), pp 1151–1160, 2014. doi: 10.1145/2638728.2641695), SET (Javadi et al. in Front Neuroeng 8, 2015), Starburst (Li et al. in Computer vision and pattern recognition-workshops, 2005. IEEE Computer society conference on CVPR workshops. IEEE, pp 79–79, 2005), and Świrski (Świrski et al. in Proceedings of the symposium on eye tracking research and applications (ETRA). ACM, pp 173–176, 2012. doi: 10.1145/2168556.2168585). We compare their performance on a large-scale data set consisting of 225,569 annotated eye images taken from four publicly available data sets. Our experimental results show that the algorithm ElSe (Fuhl et al. 2016) outperforms the other pupil detection methods by a large margin, thus offering robust and accurate pupil positions on challenging everyday eye images.
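Evaluation in this literature typically reports the detection rate as a function of the allowed pixel error between estimated and ground-truth pupil centres; a minimal Python sketch of that metric (the 5-pixel default and the sample data are assumptions, not values from the paper):

import numpy as np

def detection_rate(pred, truth, max_px=5.0):
    # Fraction of images whose predicted pupil centre lies within max_px
    # pixels of the ground-truth centre.
    err = np.linalg.norm(np.asarray(pred) - np.asarray(truth), axis=1)
    return float((err <= max_px).mean())

pred  = [(102.0, 54.0), (80.0, 60.0)]
truth = [(100.0, 55.0), (70.0, 61.0)]
print(detection_rate(pred, truth))  # 0.5: only the first estimate is within 5 px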

17.
We examine Gärdenfors’ theory of conceptual spaces, a geometrical form of knowledge representation (Conceptual spaces: The geometry of thought, MIT Press, Cambridge, 2000), in the context of the general Creative Systems Framework introduced by Wiggins (J Knowl Based Syst 19(7):449–458, 2006a; New Generation Comput 24(3):209–222, 2006b). Gärdenfors’ theory offers a way of bridging the traditional divide between symbolic and sub-symbolic representations, as well as the gap between representational formalism and meaning as perceived by human minds. We discuss how both these qualities may be advantageous from the point of view of artificial creative systems. We take music as our example domain, and discuss how a range of musical qualities may be instantiated as conceptual spaces, and present a detailed conceptual space formalisation of musical metre.
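A core ingredient of Gärdenfors' framework is that similarity decays exponentially with distance in a metric conceptual space whose quality dimensions carry context-dependent salience weights; a small Python sketch (the dimension names, weights and decay constant are illustrative, not from the paper):

import math

def similarity(x, y, weights, c=1.0):
    # Gardenfors-style similarity: exponential decay with weighted Euclidean
    # distance; weights encode the salience of each quality dimension and c
    # the decay rate (all values here are illustrative).
    d = math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y)))
    return math.exp(-c * d)

# Two points in a toy 2-D musical space (dimensions: brightness, roughness)
print(similarity((0.2, 0.7), (0.3, 0.6), weights=(1.0, 2.0)))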

18.
In this paper we investigate the problem of partitioning an input string T in such a way that compressing each part individually via a base compressor C yields a compressed output shorter than applying C over the entire T at once. This problem was introduced in Buchsbaum et al. (Proc. of 11th ACM-SIAM Symposium on Discrete Algorithms, pp. 175–184, 2000; J. ACM 50(6):825–851, 2003) in the context of table compression, and then further elaborated and extended to strings and trees by Ferragina et al. (J. ACM 52:688–713, 2005; Proc. of 46th IEEE Symposium on Foundations of Computer Science, pp. 184–193, 2005) and Mäkinen and Navarro (Proc. of 14th Symposium on String Processing and Information Retrieval, pp. 229–241, 2007). Unfortunately, the literature offers only poor solutions: namely, we know either a cubic-time algorithm for computing the optimal partition based on dynamic programming (Buchsbaum et al. in J. ACM 50(6):825–851, 2003; Giancarlo and Sciortino in Proc. of 14th Symposium on Combinatorial Pattern Matching, pp. 129–143, 2003), or a few heuristics that do not guarantee any bounds on the efficacy of their computed partition (Buchsbaum et al. in Proc. of 11th ACM-SIAM Symposium on Discrete Algorithms, pp. 175–184, 2000; J. ACM 50(6):825–851, 2003), or algorithms that are efficient but work only in specific scenarios (such as the Burrows-Wheeler Transform, see e.g. Ferragina et al. in J. ACM 52:688–713, 2005; Mäkinen and Navarro in Proc. of 14th Symposium on String Processing and Information Retrieval, pp. 229–241, 2007) and achieve compression performance that might be worse than the optimal partitioning by an \(\Omega(\log n/\log\log n)\) factor. Therefore, computing the optimal solution efficiently is still open (Buchsbaum and Giancarlo in Encyclopedia of Algorithms, pp. 939–942, 2008). In this paper we provide the first algorithm which computes, in \(O(n\log^{1+\varepsilon} n)\) time and \(O(n)\) space, a partition of T whose compressed output is guaranteed to be no more than \((1+\varepsilon)\) times worse than the optimal one, where ε may be any positive constant fixed in advance. This result holds for any base compressor C whose compression performance can be bounded in terms of the zeroth- or kth-order empirical entropy of the text T. We will also discuss extensions of our results to BWT-based compressors and to the compression booster of Ferragina et al. (J. ACM 52:688–713, 2005).
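For contrast with the paper's near-linear-time approximation, the exact dynamic program mentioned above is easy to state; a Python sketch with zlib standing in for the base compressor C (the helper names are mine):

import zlib

def optimal_partition_cost(T: bytes):
    # dp[i] = minimum total compressed size over all partitions of T[:i];
    # O(n^2) subproblems, each compressing a substring, hence cubic overall.
    n = len(T)
    size = lambda s: len(zlib.compress(s))  # |C(s)| for the stand-in compressor
    dp = [0] + [float("inf")] * n
    for i in range(1, n + 1):
        for j in range(i):
            dp[i] = min(dp[i], dp[j] + size(T[j:i]))
    return dp[n]

# A string whose two halves compress better separately than together
print(optimal_partition_cost(b"ab" * 32 + b"xyz" * 32))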

19.
The main purpose of this work is to provide a mathematical proof of our previously proposed orthogonal similarity transformation (OST)-based sensitivity analysis method (Zhao et al. in Struct Multidisc Optim 50(3):517–522, 2014a; Comput Methods Appl Mech Eng 273:204–218, 2014c); the proof is designed to show the method's computational effectiveness. A theoretical study of computational efficiency for both robust topology optimization and robust concurrent topology optimization problems shows the necessity of the OST-based sensitivity analysis method for practical problems. Numerical studies were conducted to demonstrate the computational accuracy of the OST-based sensitivity analysis method and its efficiency over the conventional method. The research leads us to conclude that the OST-based sensitivity analysis method can bring considerable computational savings when used for large-scale robust topology optimization problems, as well as robust concurrent topology optimization problems.

20.
Phononic crystals (PnC) with a specifically designed liquid-filled defect have recently been introduced as a novel sensor platform (Lucklum et al. in Sens Actuators B Chem 171–172:271–277, 2012). Sensors based on this principle feature a band gap covering the typical input span of the measurand as well as a narrow transmission peak within the band gap, where the frequency of maximum transmission is governed by the measurand. This approach has been applied to the determination of volumetric properties of liquids (Lucklum et al. in Sens Actuators B Chem 171–172:271–277, 2012; Oseev et al. in Sens Actuators B Chem 189:208–212, 2013; Lucklum and Li in Meas Sci Technol 20(12):124014, 2009) and has demonstrated attractive sensitivity. One way to improve sensitivity requires higher probing frequencies, in the range of 100 MHz and above. In this range, surface acoustic wave (SAW) devices are an established basis for sensors. We have performed first tests towards a PnC microsensor (Lucklum et al. in Towards a SAW based phononic crystal sensor platform. In: 2013 Joint European frequency and time forum and international frequency control symposium (EFTF/IFC), pp 69–72, 2013). The respective feature sizes of the PnC SAW sensor are in the range of 10 µm and below. Whereas such dimensions are state of the art for common MEMS materials, etching holes and cavities with the required diameter-to-depth aspect ratio into piezoelectric materials is still challenging. In this contribution we describe an improved technological process able to realize considerably deep and uniform holes in a SAW substrate.
