Similar articles
20 similar articles found.
1.
Danvy's functional unparsing problem (Danvy in J. Funct. Program. 8(6), 621–625, 1998) is to implement a type-safe "printf" function, which converts a sequence of heterogeneous arguments to a string according to a given format. The dual problem is to implement a type-safe "scanf" function, which extracts a sequence of heterogeneous arguments from a string by interpreting (Friedman and Wand in LFP, pp. 348–355, 1984 and in Essentials of Programming Languages, MIT Press, 2008) the same format as an equally heterogeneous sequence of patterns that binds zero or more variables. We derive multiple solutions to both problems (Wand in J. ACM 27(1), 164–180, 1980) from their formal specifications (Wand in Theor. Comput. Sci. 20(1), 3–32, 1982). On one hand, our solutions show how the Hindley-Milner type system, unextended, permits accessing heterogeneous sequences with the static assurance of type safety. On the other hand, our solutions demonstrate the use of control operators (Felleisen et al. in Proceedings of the 1988 ACM Conference on Lisp and Functional Programming, pp. 52–62, ACM Press, New York, 1988; Wand in POPL 85: Conference Record of the Annual ACM Symposium on Principles of Programming Languages, vol. 16, ACM Press, New York, 1985; Meyer and Wand in Logics of Programs, Lecture Notes in Computer Science, vol. 193, pp. 219–224, Springer, Berlin, 1985) to communicate with formats as coroutines (Wand in Proceedings of the 1980 ACM Conference on Lisp and Functional Programming, vol. 12, pp. 285–299, ACM Press, New York, 1980 and Haynes et al. in LFP, pp. 293–298, 1984).
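As an informal illustration of the continuation-passing structure behind functional unparsing, the following Python sketch builds a format from directives that each consume one argument. It is not the paper's code, and being dynamically typed it cannot reproduce the static type-safety guarantee that the Hindley-Milner treatment provides; all names here are illustrative.

```python
# Hedged sketch of continuation-passing "printf" directives (illustrative
# names; not the paper's code). Each directive takes a continuation expecting
# the accumulated string and threads the accumulator, consuming one argument
# per value directive.

def lit(text):
    # literal text: consumes no argument
    return lambda k: lambda acc: k(acc + text)

def num(k):
    # integer directive: consumes one int argument
    return lambda acc: lambda n: k(acc + str(n))

def txt(k):
    # string directive: consumes one str argument
    return lambda acc: lambda s: k(acc + s)

def fmt(*directives):
    # compose directives left to right, in the style of format composition
    composed = lambda k: k
    for d in directives:
        composed = (lambda f, g: lambda k: f(g(k)))(composed, d)
    return composed

def sprintf(format_):
    # start with the identity continuation and an empty accumulator
    return format_(lambda acc: acc)("")

# sprintf(fmt(lit("x = "), num, lit(", y = "), txt))(3)("hi")
#   -> "x = 3, y = hi"
```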

2.
Teachers and students face many challenges in shifting from traditional classroom cultures to enacting the Knowledge-Building Communities model (KBC model) supported by the CSCL environment, Knowledge Forum (Bereiter, 2002; Bereiter & Scardamalia, 1993; Scardamalia, 2002; Scardamalia & Bereiter, 2006). Enacting the model involves socializing students into knowledge work, similar to disciplinary communities. A useful construct in the field of the Learning Sciences for understanding knowledge work is “epistemic games” (Collins & Ferguson, 1993; Morrison & Collins, 1995; Perkins, 1997). We propose that a powerful means for supporting classroom enactments of the KBC model entails conceptualizing Knowledge Forum as a collective space for playing multi-player epistemic games. Participation in knowledge-building communities is then scaffolded through learning the moves of such games. We have designed scaffolding tools that highlight particular knowledge-building moves for practice and reflection as a means of supporting students and teachers in coming to understand how to collectively work together toward the progressive improvement of ideas. In order to examine our design theories in practice, we present research on Ideas First, a design-based research program involving enactments of the KBC model in Singaporean primary science classrooms (Bielaczyc & Ow, 2007, 2010; Ow & Bielaczyc, 2007, 2008).

3.
Matthias Möller 《Computing》2013, 95(5):425–448
This paper is concerned with the extension of the algebraic flux-correction (AFC) approach (Kuzmin in Computational fluid and solid mechanics, Elsevier, Amsterdam, pp 887–888, 2001; J Comput Phys 219:513–531, 2006; Comput Appl Math 218:79–87, 2008; J Comput Phys 228:2517–2534, 2009; Flux-corrected transport: principles, algorithms, and applications, 2nd edn. Springer, Berlin, pp 145–192, 2012; J Comput Appl Math 236:2317–2337, 2012; Kuzmin et al. in Comput Methods Appl Mech Eng 193:4915–4946, 2004; Int J Numer Methods Fluids 42:265–295, 2003; Kuzmin and Möller in Flux-corrected transport: principles, algorithms, and applications. Springer, Berlin, 2005; Kuzmin and Turek in J Comput Phys 175:525–558, 2002; J Comput Phys 198:131–158, 2004) to nonconforming finite element methods for the linear transport equation. Accurate nonoscillatory approximations to convection-dominated flows are obtained by stabilizing the continuous Galerkin method with solution-dependent artificial diffusion. Its magnitude is controlled by a flux limiter. This concept dates back to flux-corrected transport schemes. The unique feature of AFC is that all information is extracted from the system matrices, which are manipulated to satisfy certain mathematical constraints. AFC schemes have been devised with conforming $P_1$ and $Q_1$ finite elements in mind, but this is not a prerequisite. Here, we consider their extension to the nonconforming Crouzeix–Raviart element (Crouzeix and Raviart in RAIRO R3 7:33–76, 1973) on triangular meshes and its quadrilateral counterpart, the class of rotated bilinear Rannacher–Turek elements (Rannacher and Turek in Numer Methods PDEs 8:97–111, 1992). The underlying design principles of AFC schemes are shown to hold for (some variant of) both elements. However, numerical tests for a purely convective flow and a convection–diffusion problem demonstrate that flux-corrected solutions are overdiffusive for the Crouzeix–Raviart element. Good resolution of smooth and discontinuous profiles is attained with $Q_1^\mathrm{nc}$ approximations on quadrilateral meshes. A synthetic benchmark is used to quantify the artificial diffusion present in conforming and nonconforming high-resolution schemes of AFC type. Finally, the implementation of efficient sparse matrix–vector multiplications is addressed.
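For orientation, the generic AFC construction referred to above can be sketched as follows; the notation is assumed here and not quoted from the paper.

```latex
% K = (k_{ij}) is the high-order (Galerkin) transport operator, D the
% artificial diffusion operator built from it, and \alpha_{ij} the flux limiter.
\begin{align*}
  d_{ij} &= \max\{-k_{ij},\, 0,\, -k_{ji}\} = d_{ji}, \quad j \ne i, &
  d_{ii} &= -\sum_{j \ne i} d_{ij},\\
  L &= K + D \quad \text{(low-order, local extremum diminishing)}, &
  f_{ij} &= d_{ij}\,(u_i - u_j).
\end{align*}
% The limited antidiffusive fluxes \alpha_{ij} f_{ij}, with \alpha_{ij} \in [0,1],
% are added back to the low-order scheme so that accuracy is restored in smooth
% regions without creating new extrema near discontinuities.
```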

4.
The routing of traffic between Internet domains, or Autonomous Systems (ASes), a task known as interdomain routing, is currently handled by the Border Gateway Protocol (BGP, Rekhter and Li in RFC 4271 of the Internet Engineering Task Force, 2006). Using BGP, ASes can apply semantically rich routing policies to choose interdomain routes in a distributed fashion. This expressiveness in routing-policy choice supports domains' autonomy in network operations and in business decisions, but it comes at a price: The interaction of locally defined routing policies can lead to unexpected global anomalies, including route oscillations or overall protocol divergence (see, e.g., Varadhan et al. in Comput Networks 32(1):1–16, 2000). Networking researchers have addressed this problem by devising constraints on policies that guarantee BGP convergence without unduly limiting expressiveness and autonomy (see, e.g., Gao and Rexford in IEEE/ACM Trans Network 9(6):681–692, 2001; Griffin et al. in Proceedings of 9th ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM'03), pp. 61–72. ACM Press, New York, 2003). In addition to taking this engineering or "protocol-design" approach, researchers have approached interdomain routing from an economic or "mechanism-design" point of view. It is known that lowest-cost-path (LCP) routing can be implemented in an incentive-compatible, BGP-compatible manner (Feigenbaum et al. in Distribut Comput 18(1):61–72, 2005; Shneidman and Parkes in Proceedings of 23rd ACM Symposium on Principles of Distributed Computing (PODC'04), pp. 88–97. ACM Press, New York, 2004) but that several other natural classes of policies cannot (Feigenbaum et al. in Theor Comput Sci 378(2):175–189, 2007; Feigenbaum et al. in Distribut Comput 18(4):293–305, 2006). In this paper, we present the first example of a class of interdomain-routing policies that is more general than LCP routing and for which BGP itself is both incentive-compatible and guaranteed to converge. We also present several steps toward a general theory of incentive-compatible, BGP-compatible interdomain routing.
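As a rough illustration of policy-based path-vector route selection, the toy Python sketch below lets each AS repeatedly pick its most preferred loop-free path through a neighbor; the names and the ranking interface are assumptions, not BGP itself. For policies outside the convergence-safe classes discussed above, such a loop may oscillate, which is exactly the anomaly the abstract describes.

```python
# Toy path-vector selection, hedged: illustrative only, not the BGP specification.
def best_routes(topology, dest, rank):
    """topology: {as_: set of neighbour ASes}; rank(as_, path) -> preference
    value (lower is better) or None if the policy rejects the path."""
    routes = {as_: None for as_ in topology}
    routes[dest] = (dest,)
    changed = True
    while changed:                        # may not terminate for unsafe policies
        changed = False
        for as_ in topology:
            if as_ == dest:
                continue
            candidates = []
            for nbr in topology[as_]:
                path = routes.get(nbr)
                if path and as_ not in path:          # loop avoidance
                    extended = (as_,) + path
                    if rank(as_, extended) is not None:
                        candidates.append(extended)
            best = min(candidates, key=lambda p: rank(as_, p), default=None)
            if best != routes[as_]:
                routes[as_] = best
                changed = True
    return routes

# Lowest-cost-path (LCP) policies correspond to ranking by hop count:
# best_routes({1: {2, 3}, 2: {1, 3}, 3: {1, 2}}, dest=3,
#             rank=lambda a, p: len(p))
```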

5.
In this document, we present an alternative to the method introduced by Ebner (Pattern Recognit 60–67, 2003; J Parallel Distrib Comput 64(1):79–88, 2004; Color constancy using local color shifts, pp 276–287, 2004; Color Constancy, 2007; Mach Vis Appl 20(5):283–301, 2009) for computing the local space average color. We show that when the problem is framed as a linear system and the resulting series is solved, there is a solution based on LU decomposition that reduces the computing time by at least an order of magnitude.
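A minimal sketch of the linear-system view, assuming the usual update rule a ← (1 − p)·(neighbourhood average of a) + p·c for the local space average color a and a small mixing weight p (both assumptions here): rewriting the fixed point as (I − (1 − p)A)a = pc, with A the 4-neighbour averaging matrix, lets a sparse LU factorization replace the iteration.

```python
# Hedged sketch (not the paper's code): solve the local-space-average-color
# fixed point (I - (1 - p) A) a = p c per color band with a sparse LU solve.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def local_space_average_color(channel, p=0.01):
    """channel: (h, w) array for one color band; p: assumed mixing weight."""
    h, w = channel.shape
    path = lambda m: sp.diags([np.ones(m - 1), np.ones(m - 1)], [-1, 1])
    # 4-neighbour adjacency of the pixel grid (row-major ordering)
    adj = sp.kron(sp.identity(h), path(w)) + sp.kron(path(h), sp.identity(w))
    counts = np.asarray(adj.sum(axis=1)).ravel()      # 2, 3 or 4 neighbours
    avg = sp.diags(1.0 / counts) @ adj                # neighbourhood average A
    system = sp.identity(h * w) - (1.0 - p) * avg
    lu = spla.splu(sp.csc_matrix(system))             # LU factorization, reusable
    return lu.solve(p * channel.ravel()).reshape(h, w)
```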

6.
Wireless sensor networks (WSNs), one of the commercial wireless mesh networks (WMNs), are envisioned to provide an effective solution for sensor-based AmI (Ambient Intelligence) systems and applications. To enable seamless communication between AmI sensor networks and the most popular TCP/IP networks, the best solution model is to run TCP/IP directly on WSNs (Mulligan et al. 2009; Hui and Culler 2008; Han and Mam 2007; Kim et al. 2007; Xiaohua et al. 2004; Dunkels et al. 2004; Dunkels et al. 2004; Dunkels 2001; Dunkels et al. 2004). In this case, an IP assignment method is required to assign each sensor node a unique IP address. SIPA (Dunkels et al. 2004) is the best-known IP assignment method that uses spatial relations and locations of sensor nodes to assign their IP addresses. It has been applied in Contiki (Dunkels et al. 2004), a famous WSN operating system, to support the 6LowPAN protocol. In Chang et al. (2009), we proposed the SLIPA (Scan-Line IP Assignment) algorithm to improve the assignment success rate (ASR) obtained by SIPA. SLIPA can achieve a good ASR when sensor nodes are uniformly distributed. However, if sensor nodes are deployed under other distributions, the improvement is limited. This paper proposes a new spatial IP assignment method, called SLIPA-Q (SLIPA with equal-quantity partition), to improve SLIPA. Experiments show that, by testing the proposed method 1,000 times with 1,000 randomly deployed sensor nodes, the average ASR obtained by SLIPA-Q is more than twice that obtained by SLIPA. Under the same 88% ASR, the average numbers of sensor nodes that can be successfully assigned by SLIPA-Q, SLIPA, and SIPA are 950, 850, and 135, respectively. Compared to previous spatial IP assignment methods, SLIPA-Q achieves dramatic improvements in ASR when assigning IP addresses to a large set of sensor nodes.
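The sketch below is only a hypothetical illustration of the "equal-quantity partition" idea named above: split the deployed nodes into strips that each contain the same number of nodes and derive a hierarchical address from (strip, position within strip). It is not the published SLIPA-Q algorithm, and the address format and prefix are assumed.

```python
# Hypothetical illustration of an equal-quantity spatial partition;
# the published SLIPA-Q algorithm is not reproduced here.
def equal_quantity_addresses(nodes, strips, prefix="10.0"):
    """nodes: list of (x, y) positions; returns {(x, y): 'prefix.strip.host'}."""
    by_x = sorted(nodes, key=lambda p: p[0])          # order nodes by x coordinate
    size = -(-len(nodes) // strips)                   # ceil: equal node count per strip
    addresses = {}
    for s in range(strips):
        strip = sorted(by_x[s * size:(s + 1) * size], key=lambda p: p[1])
        for host, node in enumerate(strip, start=1):  # order within strip by y
            addresses[node] = f"{prefix}.{s + 1}.{host}"
    return addresses
```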

7.
This paper investigates the problem of the pth moment exponential stability for a class of stochastic recurrent neural networks with Markovian jump parameters. With the help of a Lyapunov function, stochastic analysis techniques, a generalized Halanay inequality and the Hardy inequality, some novel sufficient conditions for the pth moment exponential stability of the considered system are derived. The results obtained in this paper are new, and they complement and improve some of the previously known results (Liao and Mao, Stoch Anal Appl, 14:165–185, 1996; Wan and Sun, Phys Lett A, 343:306–318, 2005; Hu et al., Chaos Solitons Fractals, 27:1006–1010, 2006; Sun and Cao, Nonlinear Anal Real, 8:1171–1185, 2007; Huang et al., Inf Sci, 178:2194–2203, 2008; Wang et al., Phys Lett A, 356:346–352, 2006; Peng and Liu, Neural Comput Appl, 20:543–547, 2011). Moreover, a numerical example is also provided to demonstrate the effectiveness and applicability of the theoretical results.
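For reference, the stability notion used above is the standard one; the notation below is ours, not the paper's.

```latex
% The trivial solution is pth moment exponentially stable if there exist
% constants M >= 1 and \lambda > 0 such that, for every initial state x_0,
\[
  \mathbb{E}\,\bigl\|x(t; t_0, x_0)\bigr\|^{p}
  \;\le\; M\,\|x_0\|^{p}\, e^{-\lambda (t - t_0)}, \qquad t \ge t_0,
\]
% with p = 2 recovering the familiar mean-square exponential stability.
```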

8.
The stochastic collocation method (Babuška et al. in SIAM J Numer Anal 45(3):1005–1034, 2007; Nobile et al. in SIAM J Numer Anal 46(5):2411–2442, 2008a; SIAM J Numer Anal 46(5):2309–2345, 2008b; Xiu and Hesthaven in SIAM J Sci Comput 27(3):1118–1139, 2005) has recently been applied to stochastic problems that can be transformed into parametric systems. Meanwhile, the reduced basis method (Maday et al. in Comptes Rendus Mathematique 335(3):289–294, 2002; Patera and Rozza in Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, http://augustine.mit.edu, 2007; Rozza et al. in Arch Comput Methods Eng 15(3):229–275, 2008), primarily developed for solving parametric systems, has recently been used to deal with stochastic problems (Boyaval et al. in Comput Methods Appl Mech Eng 198(41–44):3187–3206, 2009; Arch Comput Methods Eng 17:435–454, 2010). In this work, we aim at comparing the performance of the two methods when applied to the solution of linear stochastic elliptic problems. Two important comparison criteria are considered: (1) convergence of the approximation error; (2) computational costs for both offline construction and online evaluation. Numerical experiments are performed for problems from low dimensions $O(1)$ to moderate dimensions $O(10)$ and to high dimensions $O(100)$. The main result stemming from our comparison is that the reduced basis method converges better in theory and faster in practice than the stochastic collocation method for smooth problems, and is more suitable for large-scale and high-dimensional stochastic problems when considering computational costs.
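The common setting for both methods can be summarized as follows; the notation is standard and assumed here, not quoted from the paper.

```latex
% A parametrized elliptic problem obtained after reducing the random input
% to a finite parameter vector y in \Gamma \subset R^N:
\[
  \text{find } u(y) \in V:\quad a\bigl(u(y), v; y\bigr) = f(v)
  \quad \forall v \in V, \qquad y \in \Gamma \subset \mathbb{R}^{N}.
\]
% Stochastic collocation samples y on a (sparse) grid and interpolates u(y);
% the reduced basis method projects onto span{u(y^1), ..., u(y^n)} for
% greedily selected parameters, trading an offline construction cost for
% cheap online evaluations.
```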

9.
10.
Hot embossing, a polymer molding process conceived by Forschungszentrum Karlsruhe, is one of the established replication processes for microstructures. The process is especially well suited for manufacturing small and medium series of microcomponents (SPIE Conference 1997; Polymer News 25:224–229, 2000; J Micromech Microeng 14:R1–14, 2004; Sensors Actuators 3:130–135, 2000). However, a wider application of the process is currently seriously hampered by the lack of adequate simulation tools for process optimization and part design. This situation is becoming more critical as the dimensions of the microstructures shrink from micron and submicron levels to the nanoscale and as productivity requirements dictate the enlargement of formats to process larger numbers of devices in parallel. Based on the current scientific work (Forschungszentrum Karlsruhe, FZKA-Bericht 7058 2003; DTIP Conference Montreux 2004; Microsystem Tech 10:432–437 2004), a German–Canadian cooperation has been started. The objective of this cooperation is to fill the gap mentioned above by developing reliable computer models and simulation tools for the hot embossing process and to incorporate these models in a user-friendly computer code. The present paper will give an overview of the activities in the project. The activities related to material characterization, especially the development of a viscoelastic material model, the characterization of friction between polymer and mold during demolding, the development of an 8-in. microstructured mold, and the fabrication of nanostructured molds, will be discussed.

11.
Wavelet frame based models for image restoration have been extensively studied for the past decade (Chan et al. in SIAM J. Sci. Comput. 24(4):1408–1432, 2003; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009; Elad et al. in Appl. Comput. Harmon. Anal. 19(3):340–358, 2005; Starck et al. in IEEE Trans. Image Process. 14(10):1570–1582, 2005; Shen in Proceedings of the international congress of mathematicians, vol. 4, pp. 2834–2863, 2010; Dong and Shen in IAS lecture notes series, Summer program on “The mathematics of image processing”, Park City Mathematics Institute, 2010). The success of wavelet frames in image restoration is mainly due to their capability of sparsely approximating piecewise smooth functions like images. Most of the wavelet frame based models designed in the past are based on the penalization of the ℓ1 norm of wavelet frame coefficients, which, under certain conditions, is the right choice, as supported by theories of compressed sensing (Candes et al. in Appl. Comput. Harmon. Anal., 2010; Candes et al. in IEEE Trans. Inf. Theory 52(2):489–509, 2006; Donoho in IEEE Trans. Inf. Theory 52:1289–1306, 2006). However, the assumptions of compressed sensing may not be satisfied in practice (e.g. for image deblurring and CT image reconstruction). Recently in Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), the authors propose to penalize the ℓ0 “norm” of the wavelet frame coefficients instead, and they have demonstrated significant improvements of their method over some commonly used ℓ1 minimization models in terms of the quality of the recovered images. In this paper, we propose a new algorithm, called the mean doubly augmented Lagrangian (MDAL) method, for ℓ0 minimization based on the classical doubly augmented Lagrangian (DAL) method (Rockafellar in Math. Oper. Res. 97–116, 1976). Our numerical experiments show that the proposed MDAL method is not only more efficient than the method proposed by Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), but can also generate recovered images of even higher quality. This study confirms the feasibility of using the ℓ0 “norm” for image restoration problems.
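The family of models being compared can be written generically as follows, with W a wavelet frame transform and A the degradation operator (blurring or projection); the notation is assumed here, not quoted from the paper.

```latex
\[
  \min_{u}\; \tfrac{1}{2}\,\|A u - f\|_2^2 \;+\; \|\lambda \cdot W u\|_0,
\]
% where the "l0 norm" counts the nonzero (weighted) frame coefficients of u;
% replacing it with the weighted l1 norm \|\lambda \cdot W u\|_1 gives the
% convex frame-based models referred to above.
```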

12.
This work addresses the problem of fault detection and diagnosis (FDD) for a quad-rotor mini air vehicle (MAV). Actuator faults are considered in this paper. The basic idea behind the proposed method is to estimate the fault signals using extended state observer theory. To estimate the faults, a polynomial observer (Aguilar et al. 2011; Mata-Machuca et al., Commun Nonlinear Sci Numer Simul 15(12):4114–4130, 2010, BioSystems 100(1):65–69, 2010) is presented, using the available measurements and known inputs of the system. In order to investigate the diagnosability properties of the system, a differential algebra approach is proposed (Cruz-Victoria et al., J Frankl Inst 345(2):102–118, 2008; Martinez-Guerra and Diop, IEE P-Contr Theor Ap 151(1):130–135, 2004). The effectiveness of the methodology is illustrated by means of numerical simulations.

13.
We address the verification problem for concurrent programs modeled as multi-pushdown systems (MPDS). In general, MPDS are Turing-powerful, and hence all of their basic decision problems are undecidable. Because of this, several subclasses of MPDS have been proposed and studied in the literature (Atig et al. in LNCS, Springer, Berlin, 2005; La Torre et al. in LICS, IEEE, 2007; Lange and Lei in Inf Didact 8, 2009; Qadeer and Rehof in TACAS, LNCS, Springer, Berlin, 2005). In this paper, we propose the class of bounded-budget MPDS, which are restricted in the sense that each stack can perform an unbounded number of context switches only if its depth is below a given bound, and a bounded number of context switches otherwise. We show that the reachability problem for this subclass is Pspace-complete and that LTL model checking is Exptime-complete. Furthermore, we propose a code-to-code translation that inputs a concurrent program \(P\) and produces a sequential program \(P'\) such that running \(P\) under the budget-bounded restriction yields the same set of reachable states as running \(P'\). Moreover, detecting (fair) non-terminating executions in \(P\) can be reduced to LTL model checking of \(P'\). By leveraging standard sequential analysis tools, we have implemented a prototype tool and applied it to a set of benchmarks, showing the feasibility of our translation.

14.
This paper proposes an iterative sealed-bid auction for selling multiple heterogeneous items to bidders interested in buying at most one item. It generalizes the single-item bisection auction (Grigorieva et al. Econ Theory, 30:107–118, 2007) to the environment with multiple heterogeneous items. We focus on the case with two items for sale. We show that the auction elicits a minimal amount of information on preferences required to find the Vickrey–Clarke–Groves outcome (Clarke, Public Choice, XI:17–33, 1971; Groves, Econometrica, 61:617–631, 1973; Vickrey, J Finance, 16:8–37, 1961), when there are two items for sale and an arbitrary number of bidders.
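As a hedged aside, the Vickrey–Clarke–Groves outcome that the iterative auction is designed to reach can be computed by brute force for two items and unit-demand bidders; the sketch below shows that target outcome only, not the iterative bisection mechanism itself, and all names are illustrative.

```python
# Brute-force VCG outcome for two heterogeneous items and unit-demand bidders.
from itertools import permutations

def vcg_two_items(values):
    """values: {bidder: (value_for_item1, value_for_item2)}; unit demand."""
    bidders = list(values)

    def best(exclude=frozenset()):
        # welfare-maximizing assignment: each item sold at most once,
        # each non-excluded bidder wins at most one item
        outcomes = [({}, 0.0)]
        active = [b for b in bidders if b not in exclude]
        for a, b in permutations(active, 2):
            outcomes.append(({a: 0, b: 1}, values[a][0] + values[b][1]))
        for a in active:
            outcomes.append(({a: 0}, values[a][0]))
            outcomes.append(({a: 1}, values[a][1]))
        return max(outcomes, key=lambda o: o[1])

    assignment, welfare = best()
    payments = {}
    for i, item in assignment.items():               # Clarke pivot payments
        others = welfare - values[i][item]
        _, welfare_without_i = best(exclude=frozenset({i}))
        payments[i] = welfare_without_i - others
    return assignment, payments

# Example: vcg_two_items({'a': (10, 3), 'b': (8, 6), 'c': (2, 7)})
#   -> item 1 to 'a' (pays 8), item 2 to 'c' (pays 6)
```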

15.
Proving that a dynamical system is chaotic is a central problem in chaos theory (Hirsch in Chaos, fractals and dynamics, 1985). In this note we apply the computational method developed in (Calude and Calude in Complex Syst 18:267–285, 2009; Calude and Calude in Complex Syst 18:387–401, 2010; Calude et al. in J Multi Valued Log Soft Comput 12:285–307, 2006) to show that Fermat's last theorem is in the lowest complexity class ${{\mathfrak C}_{U,1}}$. Using this result we prove the existence of a two-dimensional Hamiltonian system for which the proof that the system has a Smale horseshoe is in the class ${{\mathfrak C}_{U,1}}$, i.e. it is not too complex.

16.
K-anonymity (Samarati and Sweeney 1998; Samarati, IEEE Trans Knowl Data Eng, 13(6):1010–1027, 2001; Sweeney, Int J Uncertain, Fuzziness Knowl-Based Syst, 10(5):557–570, 2002) and its variants, l-diversity (Machanavajjhala et al., ACM TKDD, 2007) and t-closeness (Li et al. 2007) among others, are anonymization techniques for relational data and transaction data, which are used to protect privacy against re-identification attacks. A relational dataset D is k-anonymous if every record in D has at least k-1 other records with identical quasi-identifier attribute values. The combination of released data with external data will never allow the recipient to associate each released record with fewer than k individuals (Samarati, IEEE Trans Knowl Data Eng, 13(6):1010–1027, 2001). However, the current concept of k-anonymity on transaction data treats all items as quasi-identifiers. The anonymized data set contains groups of k identical transactions and suffers from lower data utility (He and Naughton 2009; He et al. 2011; Liu and Wang 2010; Terrovitis et al., VLDB J, 20(1):83–106, 2011; Terrovitis et al. 2008). To improve the utility of anonymized transaction data, this work proposes a novel anonymity concept for transaction data that contain both quasi-identifier items (QID) and sensitive items (SI). A transaction that contains sensitive items must have at least k-1 other identical transactions (Ghinita et al. IEEE TKDE, 33(2):161–174, 2011; Xu et al. 2008). For a transaction that does not contain a sensitive item, no anonymization is required. A transaction dataset that satisfies this property is said to be sensitive k-anonymous. Three algorithms, Sensitive Transaction Neighbors (STN), Gray Sort Clustering (GSC), and Nearest Neighbors for K-anonymization (K-NN), are developed. These algorithms achieve sensitive k-anonymity on transaction data by adding or deleting QID items and by only adding SI items. Additionally, a simple “privacy value” is proposed to evaluate the degree of privacy for different types of k-anonymity on transaction data. Extensive numerical simulations were carried out to demonstrate the characteristics of the proposed algorithms and to compare them with other types of k-anonymity approaches. The results show that each technique possesses its own advantages under different criteria such as running time, operation, and information loss. The results obtained here can be used as a guideline for selecting an anonymization technique for different data sets and applications.
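A minimal sketch of the anonymity test implied above, based on our reading of the definition rather than the paper's code: a transaction dataset is sensitive k-anonymous if every transaction containing at least one sensitive item has at least k-1 other identical transactions, while transactions without sensitive items are unconstrained.

```python
# Check sensitive k-anonymity of a transaction dataset (illustrative sketch).
from collections import Counter

def is_sensitive_k_anonymous(transactions, sensitive_items, k):
    """transactions: iterable of item collections; sensitive_items: a set."""
    groups = Counter(frozenset(t) for t in transactions)
    # every group that touches a sensitive item must have size >= k
    return all(count >= k
               for items, count in groups.items()
               if items & sensitive_items)

# is_sensitive_k_anonymous([{'bread', 'hiv-drug'}, {'bread', 'hiv-drug'},
#                           {'milk'}], {'hiv-drug'}, k=2)   # -> True
```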

17.
18.
High order path-conservative schemes have been developed for solving nonconservative hyperbolic systems in (Parés, SIAM J. Numer. Anal. 44:300–321, 2006; Castro et al., Math. Comput. 75:1103–1134, 2006; J. Sci. Comput. 39:67–114, 2009). Recently, it has been observed in (Abgrall and Karni, J. Comput. Phys. 229:2759–2763, 2010) that this approach may have some computational issues and shortcomings. In this paper, a modification to the high order path-conservative scheme in (Castro et al., Math. Comput. 75:1103–1134, 2006) is proposed to improve its computational performance and to overcome some of the shortcomings. This modification is based on the high order finite volume WENO scheme with subcell resolution and it uses an exact Riemann solver to catch the right paths at the discontinuities. An application to one-dimensional compressible two-medium flows of nonconservative or primitive Euler equations is carried out to show the effectiveness of this new approach.
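For orientation, the setting of these schemes can be written in standard notation (assumed here, not quoted from the paper): a nonconservative hyperbolic system together with the path-integral consistency condition that defines a path-conservative scheme.

```latex
\[
  \partial_t u + A(u)\,\partial_x u = 0, \qquad
  D^{-}(u_L, u_R) + D^{+}(u_L, u_R)
  = \int_0^1 A\bigl(\Phi(s; u_L, u_R)\bigr)\,\partial_s \Phi(s; u_L, u_R)\, ds,
\]
% where \Phi is the chosen path connecting the states u_L and u_R; selecting
% the right path at discontinuities is exactly what the exact Riemann solver
% mentioned above is used for.
```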

19.
We propose a uniform method to encode various types of trees succinctly. These families include ordered (ordinal), k-ary (cardinal), and unordered (free) trees. We show that the approach is intrinsically suitable for obtaining entropy-based encodings of trees (such as the degree-distribution entropy). Previously existing succinct encodings of trees use ad hoc techniques to encode each particular family of trees. Additionally, the succinct encodings obtained using the uniform approach improve upon the existing succinct encodings of each family of trees; in the case of ordered trees, the approach simplifies the encoding while supporting the full set of navigational operations. It also simplifies the implementation of many supported operations. The approach applied to k-ary trees yields a succinct encoding that supports both cardinal-type operations (e.g. determining the child labeled i) and the full set of ordinal-type operations (e.g. reporting the number of siblings to the left of a node). Previous work on succinct encodings of k-ary trees does not support both types of operations simultaneously (Benoit et al. in Algorithmica 43(4):275–292, 2005; Raman et al. in ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 233–242, 2002). For unordered trees, the approach achieves the first succinct encoding. The approach is based on two recursive decompositions of trees into subtrees. Recursive decomposition of a structure into substructures is a common technique in succinct encodings and has even been used to encode (ordered) trees (Geary et al. in ACM Trans. Algorithms 2(4):510–534, 2006; He et al. in ICALP, pp. 509–520, 2007) and dynamic binary trees (Munro et al. in ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 529–536, 2001; Storm in Representing dynamic binary trees succinctly, Master’s thesis, 2000). The main distinction of the approach in this paper is that a tree is decomposed into subtrees in such a manner that the subtrees are maximally isolated from each other. This intermediate decomposition result is interesting in its own right and has proved useful in other applications (Farzan et al. in ICALP (1), pp. 451–462, 2009; Farzan and Munro in ICALP (1), pp. 439–450, 2009; Farzan and Kamali in ICALP, 2011).
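As background on what "succinct" means in this context (standard counting, not a result of the paper): ordered trees on n nodes are counted by Catalan numbers, so the information-theoretic lower bound is roughly 2n bits, and an encoding is succinct when it matches this up to lower-order terms while still supporting navigation efficiently.

```latex
\[
  \log_2 C_{n-1}
  \;=\; \log_2\!\left(\frac{1}{n}\binom{2n-2}{\,n-1\,}\right)
  \;=\; 2n - O(\log n)\ \text{bits},
\]
% so a succinct encoding of ordered trees uses 2n + o(n) bits while still
% answering the navigational queries in constant (or near-constant) time.
```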

20.
In this paper, inspired by some types of $BL$-algebra filters (deductive systems) introduced in Haveshki et al. (Soft Comput 10:657–664, 2006), Kondo and Dudek (Soft Comput 12:419–423, 2008) and Turunen (Arch Math Log 40:467–473, 2001), we define residuated lattice versions of them and study them in connection with Van Gasse et al. (Inf Sci 180(16):3006–3020, 2010), Lianzhen and Kaitai (Inf Sci 177:5725–5738, 2007), and Zhu and Xu (Inf Sci 180:3614–3632, 2010). We also consider some relations between these filters and the quotient residuated lattices constructed via them.
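For orientation, the basic notion involved is the following standard definition, stated in our notation and not quoted from the paper.

```latex
% A (deductive) filter of a residuated lattice L is a subset F such that
\[
  1 \in F, \qquad x \in F \ \text{and}\ x \rightarrow y \in F
  \;\Longrightarrow\; y \in F;
\]
% equivalently, F is a nonempty upward-closed subset closed under the monoid
% operation. The quotient residuated lattice mentioned above comes from the
% congruence  x \sim_F y  iff  (x -> y) \in F and (y -> x) \in F.
```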
