Similar Literature
20 similar documents found (search time: 31 ms)
1.
The routing of traffic between Internet domains, or Autonomous Systems (ASes), a task known as interdomain routing, is currently handled by the Border Gateway Protocol (BGP; Rekhter and Li in RFC 4271 of the Internet Engineering Task Force, 2006). Using BGP, ASes can apply semantically rich routing policies to choose interdomain routes in a distributed fashion. This expressiveness in routing-policy choice supports domains' autonomy in network operations and in business decisions, but it comes at a price: the interaction of locally defined routing policies can lead to unexpected global anomalies, including route oscillations or overall protocol divergence (see, e.g., Varadhan et al. in Comput Networks 32(1):1–16, 2000). Networking researchers have addressed this problem by devising constraints on policies that guarantee BGP convergence without unduly limiting expressiveness and autonomy (see, e.g., Gao and Rexford in IEEE/ACM Trans Network 9(6):681–692, 2001; Griffin et al. in Proceedings of the 9th ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM'03), pp. 61–72. ACM Press, New York, 2003). In addition to taking this engineering or "protocol-design" approach, researchers have approached interdomain routing from an economic or "mechanism-design" point of view. It is known that lowest-cost-path (LCP) routing can be implemented in an incentive-compatible, BGP-compatible manner (Feigenbaum et al. in Distribut Comput 18(1):61–72, 2005; Shneidman and Parkes in Proceedings of the 23rd ACM Symposium on Principles of Distributed Computing (PODC'04), pp. 88–97. ACM Press, New York, 2004) but that several other natural classes of policies cannot (Feigenbaum et al. in Theor Comput Sci 378(2):175–189, 2007; Feigenbaum et al. in Distribut Comput 18(4):293–305, 2006).
In this paper, we present the first example of a class of interdomain-routing policies that is more general than LCP routing and for which BGP itself is both incentive-compatible and guaranteed to converge. We also present several steps toward a general theory of incentive-compatible, BGP-compatible interdomain routing.
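The lowest-cost-path (LCP) routing that this abstract generalizes can be illustrated with a plain shortest-path computation over an AS-level graph. This is a minimal sketch (the AS identifiers and link costs are invented for illustration); it shows only the LCP objective, not BGP's distributed route computation or the incentive analysis.

```python
import heapq

def lcp_routes(graph, source):
    """Dijkstra's algorithm: lowest-cost path from `source` to every AS.

    graph: {as_id: {neighbor_as: link_cost}}
    Returns {as_id: (total_cost, predecessor)}.
    """
    dist = {source: (0, None)}
    heap = [(0, source, None)]
    while heap:
        cost, node, pred = heapq.heappop(heap)
        if cost > dist.get(node, (float("inf"),))[0]:
            continue  # stale heap entry
        dist[node] = (cost, pred)
        for nbr, c in graph.get(node, {}).items():
            new = cost + c
            if new < dist.get(nbr, (float("inf"),))[0]:
                dist[nbr] = (new, node)
                heapq.heappush(heap, (new, nbr, node))
    return dist

# Toy AS graph (identifiers and costs invented).
ases = {
    "AS1": {"AS2": 1, "AS3": 4},
    "AS2": {"AS1": 1, "AS3": 1},
    "AS3": {"AS1": 4, "AS2": 1},
}
routes = lcp_routes(ases, "AS1")
```

With these costs, AS1 reaches AS3 via AS2 at total cost 2 rather than over the direct cost-4 link.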

2.
We present several variants of the sunflower conjecture of Erdős & Rado (J Lond Math Soc 35:85–90, 1960) and discuss the relations among them. We then show that two of these conjectures (if true) imply negative answers to the questions of Coppersmith & Winograd (J Symb Comput 9:251–280, 1990) and Cohn et al. (2005) regarding possible approaches for obtaining fast matrix-multiplication algorithms. Specifically, we show that the Erdős–Rado sunflower conjecture (if true) implies a negative answer to the "no three disjoint equivoluminous subsets" question of Coppersmith & Winograd (J Symb Comput 9:251–280, 1990); we also formulate a "multicolored" sunflower conjecture in ${\mathbb{Z}_3^n}$ and show that (if true) it implies a negative answer to the "strong USP" conjecture of Cohn et al. (2005) (although it does not seem to impact a second conjecture in Cohn et al. (2005) or the viability of the general group-theoretic approach). A surprising consequence of our results is that the Coppersmith–Winograd conjecture actually implies the Cohn et al. conjecture. The multicolored sunflower conjecture in ${\mathbb{Z}_3^n}$ is a strengthening of the well-known (ordinary) sunflower conjecture in ${\mathbb{Z}_3^n}$, and we show via our connection that a construction from Cohn et al. (2005) yields a lower bound of $(2.51\ldots)^n$ on the size of the largest multicolored 3-sunflower-free set, which beats the current best-known lower bound of $(2.21\ldots)^n$ of Edel (2004) on the size of the largest 3-sunflower-free set in ${\mathbb{Z}_3^n}$.
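The sunflower-free condition in ${\mathbb{Z}_3^n}$ can be checked mechanically: three distinct vectors form a sunflower exactly when, in every coordinate, the three entries are all equal or all distinct, which over {0, 1, 2} is equivalent to the coordinates summing to 0 mod 3. The brute-force sketch below (illustrative only, and feasible only for tiny n) verifies this condition and finds a largest sunflower-free subset for n = 2.

```python
from itertools import combinations, product

def is_sunflower(x, y, z):
    # In Z_3^n, three distinct vectors form a sunflower iff every
    # coordinate triple is all-equal or all-distinct; over {0,1,2}
    # that is the same as summing to 0 mod 3.
    return all((a + b + c) % 3 == 0 for a, b, c in zip(x, y, z))

def sunflower_free(vectors):
    """True if no three distinct vectors in the collection form a sunflower."""
    return not any(is_sunflower(x, y, z)
                   for x, y, z in combinations(vectors, 3))

# Exhaustively find a largest sunflower-free subset of Z_3^2.
points = list(product(range(3), repeat=2))
best = max((subset
            for r in range(len(points) + 1)
            for subset in combinations(points, r)
            if sunflower_free(subset)),
           key=len)
```

For n = 2 the maximum size found is 4, matching the cap-set number in dimension 2.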

3.
Given a graph G=(V,E), a vertex v of G is a median vertex if it minimizes the sum of the distances to all other vertices of G. The median problem consists of finding the set of all median vertices of G. In this note, we present self-stabilizing algorithms for the median problem in partial rectangular grids and relatives. Our algorithms are based on the fact that partial rectangular grids can be isometrically embedded into the Cartesian product of two trees, to which we apply the algorithm proposed by Antonoiu and Srimani (J. Comput. Syst. Sci. 58:215–221, 1999) and Bruell et al. (SIAM J. Comput. 29:600–614, 1999) for computing the medians in trees. Then we extend our approach from partial rectangular grids to a more general class of plane quadrangulations. We also show that the characterization of medians of trees given by Gerstel and Zaks (Networks 24:23–29, 1994) extends to cube-free median graphs, a class of graphs which includes these quadrangulations.
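The median-vertex definition above can be computed directly by brute force: run a BFS from every vertex and keep the vertices whose distance sum is minimal. This sketch illustrates only the definition, not the self-stabilizing algorithms of the paper.

```python
from collections import deque

def median_vertices(adj):
    """Return the set of median vertices of a connected graph.

    adj: {vertex: iterable of neighbors}.  A median vertex minimizes
    the sum of shortest-path distances to all other vertices,
    computed here by plain BFS from each vertex.
    """
    def dist_sum(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return sum(dist.values())

    sums = {v: dist_sum(v) for v in adj}
    best = min(sums.values())
    return {v for v, s in sums.items() if s == best}

# Path a-b-c-d-e: the middle vertex c is the unique median.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
        "d": ["c", "e"], "e": ["d"]}
```

For the path graph above, `median_vertices(path)` returns the singleton containing the middle vertex, consistent with the tree characterization cited in the abstract.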

4.
Blair et al. (2001) developed an extension of logic programming called set based logic programming. In the theory of set based logic programming, the atoms represent subsets of a fixed universe X, and one is allowed to compose the one-step consequence operator with a monotonic idempotent operator O so as to ensure that the analogue of stable models in the theory is always closed under O. Marek et al. (1992; Ann Pure Appl Logic 96:231–276, 1999) developed a generalization of Reiter's normal default theories, based on an underlying consistency property, that can be applied to both default theories and logic programs. In this paper, we show how to extend the normal logic programming paradigm of Marek, Nerode, and Remmel to set based logic programming. We also show how one can obtain a new semantics for set based logic programming based on a consistency property.
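In the classical case that set based logic programming generalizes, the one-step consequence operator is the familiar T_P of definite logic programs. As background, here is a minimal sketch of that classical operator and its least fixpoint; the set-based variant, where atoms denote subsets of X and T_P is composed with an operator O, is not implemented here.

```python
def one_step_consequence(program, interp):
    """T_P: one application of the one-step consequence operator of a
    definite logic program.  program: iterable of (head, body) pairs
    with body a frozenset of atoms; interp: a set of atoms."""
    return {head for head, body in program if body <= interp}

def least_model(program):
    """Iterate T_P from the empty interpretation up to its least fixpoint."""
    interp = set()
    while True:
        nxt = one_step_consequence(program, interp)
        if nxt == interp:
            return interp
        interp = nxt

# p.   q :- p.   r :- q.
prog = [("p", frozenset()),
        ("q", frozenset({"p"})),
        ("r", frozenset({"q"}))]
```

Since T_P is monotone for definite programs, the iteration from the empty set reaches the least model; for `prog` this is the set of all three atoms.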

5.
We apply the concept of asymptotic preserving schemes (SIAM J Sci Comput 21:441–454, 1999) to the linearized \(p\)-system and discretize the resulting elliptic equation using standard continuous Finite Elements instead of Finite Differences. The fully discrete method is analyzed with respect to consistency, and we compare it numerically with more traditional methods such as the implicit Euler method.
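As a point of reference for the comparison mentioned above, the implicit (backward) Euler method on the scalar decay equation u' = -a·u reads u_{n+1} = u_n/(1 + a·dt), which is stable for any step size. The sketch below is only this textbook baseline, not the asymptotic-preserving finite element scheme of the paper.

```python
import math

def implicit_euler_decay(a, u0, dt, steps):
    """Implicit (backward) Euler for the scalar test problem u' = -a*u.
    The implicit update u_{n+1} = u_n + dt*(-a*u_{n+1}) solves to
    u_{n+1} = u_n / (1 + a*dt), stable for any dt > 0."""
    u = u0
    for _ in range(steps):
        u = u / (1.0 + a * dt)
    return u

# A stiff parameter with a large step: the scheme stays bounded,
# mirroring the exact solution's decay.
approx = implicit_euler_decay(a=50.0, u0=1.0, dt=0.1, steps=10)
```

With a small step size the method converges to the exact solution at first order in dt.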

6.
The cubed-sphere grid is a spherical grid made of six quasi-Cartesian square-like patches. It was originally introduced in Sadourny (Mon Weather Rev 100:136–144, 1972). We extend to this grid the design of high-order finite-difference compact operators (Collatz, The numerical treatment of differential equations. Springer, Berlin, 1960; Lele, J Comput Phys 103:16–42, 1992). The present work is limited to the design of a fourth-order accurate spherical gradient. The treatment at the interface of the six patches relies on a specific interpolation system which is based on using great circles in an essential way. The main interest of the approach is a fully symmetric treatment of the sphere. We numerically demonstrate the accuracy of the approximate gradient on several test problems, including the cosine-bell test case of Williamson et al. (J Comput Phys 102:211–224, 1992) and a deformational test case reported in Nair and Lauritzen (J Comput Phys 229:8868–8887, 2010).
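The compact operators referred to above are, in one dimension, Padé-type schemes in which the derivative values are coupled through a tridiagonal system. The sketch below shows the classical fourth-order interior scheme from the compact finite-difference literature, with the exact derivative prescribed at the two endpoints to avoid one-sided boundary closures (an assumption made here for brevity; it is not how the paper treats patch interfaces).

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def compact_derivative(f, df_left, df_right, h):
    """Fourth-order Pade scheme at interior points:
       (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / (4h)
    The exact derivative is prescribed at both endpoints (a simplifying
    assumption to keep the sketch free of boundary closures)."""
    n = len(f)
    a = [0.25] * (n - 2)
    b = [1.0] * (n - 2)
    c = [0.25] * (n - 2)
    d = [3.0 * (f[i + 1] - f[i - 1]) / (4.0 * h) for i in range(1, n - 1)]
    d[0] -= 0.25 * df_left        # fold the known boundary values into the RHS
    d[-1] -= 0.25 * df_right
    interior = thomas(a, b, c, d)
    return [df_left] + interior + [df_right]

# Accuracy check on f(x) = sin(x) over [0, 1].
n, h = 21, 1.0 / 20
xs = [i * h for i in range(n)]
f = [math.sin(x) for x in xs]
df = compact_derivative(f, math.cos(xs[0]), math.cos(xs[-1]), h)
err = max(abs(d - math.cos(x)) for d, x in zip(df, xs))
```

On this coarse grid the maximum error is already far below what a second-order scheme would give, consistent with fourth-order accuracy.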

7.
A theoretical analysis tool, iterated optimal stopping, has been used as the basis of a numerical algorithm for American options under regime switching (Le and Wang in SIAM J Control Optim 48(8):5193–5213, 2010). Similar methods have also been proposed for American options under jump diffusion (Bayraktar and Xing in Math Methods Oper Res 70:505–525, 2009) and Asian options under jump diffusion (Bayraktar and Xing in Math Fin 21(1):117–143, 2011). An alternative method, local policy iteration, has been suggested in Huang et al. (SIAM J Sci Comput 33(5):2144–2168, 2011), and Salmi and Toivanen (Appl Numer Math 61:821–831, 2011). Worst case upper bounds on the convergence rates of these two methods suggest that local policy iteration should be preferred over iterated optimal stopping (Huang et al. in SIAM J Sci Comput 33(5):2144–2168, 2011). In this article, numerical tests are presented which indicate that the observed performance of these two methods is consistent with the worst case upper bounds. In addition, while these two methods seem quite different, we show that either one can be converted into the other by a simple rearrangement of two loops.
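The optimal-stopping structure underlying American option pricing already shows up in the simplest discrete setting: dynamic programming on a binomial tree with an early-exercise comparison at every node. This sketch is only that baseline illustration (a plain CRR tree, no regime switching or jumps), not the iterated-optimal-stopping or local-policy-iteration solvers discussed in the abstract.

```python
import math

def american_put_binomial(S0, K, r, sigma, T, steps):
    """Price an American put on a CRR binomial tree by backward
    dynamic programming, taking max(continuation, exercise) at
    each node -- the discrete optimal-stopping decision."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # payoff at maturity
    values = [max(K - S0 * u**j * d**(steps - j), 0.0)
              for j in range(steps + 1)]
    for step in range(steps - 1, -1, -1):
        for j in range(step + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(K - S0 * u**j * d**(step - j), 0.0)
            values[j] = max(cont, exercise)  # optimal stopping choice
    return values[0]

price = american_put_binomial(S0=100, K=100, r=0.05, sigma=0.2,
                              T=1.0, steps=200)
```

For these at-the-money parameters the American put is worth slightly more than its European counterpart, reflecting the value of early exercise.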

8.
This work contrasts Giovanni Sartor's view of inferential semantics of legal concepts (Sartor in Artif Intell Law 17:217–251, 2009) with a probabilistic model of theory formation (Kemp et al. in Cognition 114:165–196, 2010). The work further explores possibilities of implementing Kemp's probabilistic model of theory formation in the context of mapping legal concepts between two individual legal systems. For implementing the legal concept mapping, we propose a cross-categorization approach that combines three mathematical models: the Bayesian Model of Generalization (BMG; Tenenbaum and Griffiths in Behav Brain Sci 4:629–640, 2001), the probabilistic model of theory formation, i.e., the Infinite Relational Model (IRM) first introduced by Kemp et al. (The twenty-first national conference on artificial intelligence, 2006; Cognition 114:165–196, 2010), and its extended model, i.e., the normal-IRM (n-IRM) proposed by Herlau et al. (IEEE International Workshop on Machine Learning for Signal Processing, 2012). We apply our cross-categorization approach to datasets where legal concepts related to educational systems are respectively defined by the Japanese and the Danish authorities according to the International Standard Classification of Education. The main contribution of this work is the proposal of a conceptual framework for the cross-categorization approach that, inspired by Sartor (Artif Intell Law 17:217–251, 2009), attempts to explain a reasoner's inferential mechanisms.

9.
Teachers and students face many challenges in shifting from traditional classroom cultures to enacting the Knowledge-Building Communities model (KBC model) supported by the CSCL environment, Knowledge Forum (Bereiter, 2002; Bereiter & Scardamalia, 1993; Scardamalia, 2002; Scardamalia & Bereiter, 2006). Enacting the model involves socializing students into knowledge work, similar to disciplinary communities. A useful construct in the field of the Learning Sciences for understanding knowledge work is "epistemic games" (Collins & Ferguson, 1993; Morrison & Collins 1995; Perkins, 1997). We propose that a powerful means for supporting classroom enactments of the KBC model entails conceptualizing Knowledge Forum as a collective space for playing multi-player epistemic games. Participation in knowledge-building communities is then scaffolded through learning the moves of such games. We have designed scaffolding tools that highlight particular knowledge-building moves for practice and reflection as a means of supporting students and teachers in coming to understand how to collectively work together toward the progressive improvement of ideas. In order to examine our design theories in practice, we present research on Ideas First, a design-based research program involving enactments of the KBC model in Singaporean primary science classrooms (Bielaczyc & Ow, 2007, 2010; Ow & Bielaczyc, 2007, 2008).

10.
In a very recent paper, Peng and Liu (Neural Comput Appl 20:543–547, 2011) investigated the pth moment stability of stochastic Grossberg–Hopfield neural networks with Markov volatilities by means of Mao et al. (Bernoulli 6:73–90, 2000, Theorem 4.1). We should point out that Mao et al. (Bernoulli 6:73–90, 2000, Theorem 4.1) established pth moment exponential stability for a class of stochastic dynamical systems with constant delay; this theorem, however, cannot be applied to the case of variable time delays. It is also worth emphasizing that Peng and Liu (Neural Comput Appl 20:543–547, 2011) used Mao et al. (Bernoulli 6:73–90, 2000, Theorem 4.1) to discuss pth moment exponential stability for Grossberg–Hopfield neural networks with variable delays; there are therefore some gaps between Peng and Liu (Neural Comput Appl 20:543–547, 2011, Theorem 1) and Mao et al. (Bernoulli 6:73–90, 2000, Theorem 4.1). In this paper, we fill this gap. Moreover, a numerical example is provided to demonstrate the effectiveness and applicability of the theoretical results.
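For reference, the stability notion at issue, pth moment exponential stability, can be stated as follows (a standard textbook formulation, not quoted from either paper): the trivial solution of a stochastic delay system is pth moment exponentially stable if there exist constants $C > 0$ and $\lambda > 0$ such that

```latex
\mathbb{E}\,\lVert x(t; x_0) \rVert^{p}
\;\le\;
C \,\mathbb{E}\,\lVert x_0 \rVert^{p}\, e^{-\lambda t},
\qquad t \ge 0,
```

for every initial datum $x_0$.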

11.
Recent machine learning techniques have demonstrated their capability for identifying image categories using image features. Among these techniques, Support Vector Machines (SVM) achieve good results, for example in the PASCAL VOC challenge 2011 [8], particularly when they are associated with a kernel function [28, 35]. However, the image-categorization task is nowadays very challenging owing to the sizes of benchmark datasets and the number of categories to be classified. In such a context, a lot of effort has to be put into the design of the kernel functions and the underlying semantic features. In the remainder of the paper, we call semantic features the features describing the (semantic) content of an image. In this paper, we propose a framework that learns an effective kernel function using the Boosting paradigm to linearly combine weak kernels. We then use an SVM with this kernel to categorize image databases. More specifically, the method creates embedding functions that map images into a Hilbert space where they are better classified. Furthermore, our algorithm benefits from the boosting process to learn this kernel with a complexity linear in the size of the training set. Experiments are carried out on popular benchmarks and databases to show the properties and behavior of the proposed method. On the PASCAL VOC2006 database, we compare our method to simple early fusion, and on the Oxford Flowers databases we show that our method outperforms the best Multiple Kernel Learning (MKL) techniques in the literature.

12.
In this document, we present an alternative to the method introduced by Ebner (Pattern Recognit 60–67, 2003; J Parallel Distrib Comput 64(1):79–88, 2004; Color constancy using local color shifts, pp 276–287, 2004; Color Constancy, 2007; Mach Vis Appl 20(5):283–301, 2009) for computing the local space average color. We show that when the problem is framed as a linear system and the resulting series is solved, there is a solution based on LU decomposition that reduces the computing time by at least an order of magnitude.

13.
Wireless sensor networks (WSNs), one of the commercial wireless mesh networks (WMNs), are envisioned to provide an effective solution for sensor-based AmI (Ambient Intelligence) systems and applications. To enable seamless communication between AmI sensor networks and the most popular TCP/IP networks, the best solution model is to run TCP/IP directly on WSNs (Mulligan et al. 2009; Hui and Culler 2008; Han and Mam 2007; Kim et al. 2007; Xiaohua et al. 2004; Dunkels et al. 2004; Dunkels et al. 2004; Dunkels 2001; Dunkels et al. 2004). In this case, an IP assignment method is required to assign each sensor node a unique IP address. SIPA (Dunkels et al. 2004) is the best known IP assignment method that uses spatial relations and locations of sensor nodes to assign their IP addresses. It has been applied in Contiki (Dunkels et al. 2004), a famous WSN operating system, to support the 6LowPAN protocol. In Chang et al. (2009), we proposed the SLIPA (Scan-Line IP Assignment) algorithm to improve the assignment success rate (ASR) obtained by SIPA. SLIPA can achieve a good ASR when sensor nodes are uniformly distributed. However, if sensor nodes are deployed under other distributions, the improvements are limited. This paper proposes a new spatial IP assignment method, called SLIPA-Q (SLIPA with equal-quantity partition), to improve SLIPA. Experiments show that, by testing the proposed method 1,000 times with 1,000 randomly deployed sensor nodes, the average ASR obtained by SLIPA-Q is over two times that obtained by SLIPA. Under the same 88% ASR, the average numbers of sensor nodes that can be successfully assigned by SLIPA-Q, SLIPA, and SIPA are 950, 850, and 135, respectively. Compared to previous spatial IP assignment methods, SLIPA-Q achieves dramatic improvements in ASR when assigning IP addresses to a large set of sensor nodes.

14.
15.
Knuth–Bendix completion is a classical calculus in automated deduction for transforming a set of equations into a confluent and terminating set of directed equations which can be used to decide the induced equational theory. Multi-completion with termination tools constitutes an approach that differs from the classical method in two respects: (1) external termination tools replace the reduction order—a typically critical parameter—as proposed by Wehrman et al. (2006), and (2) multi-completion as introduced by Kurihara and Kondo (1999) is used to keep track of multiple orientations in parallel while exploiting sharing to boost efficiency. In this paper we describe the inference system, give the full proof of its correctness and comment on completeness issues. Critical pair criteria and isomorphisms are presented as refinements together with all proofs. We furthermore describe the implementation of our approach in the tool $\mathsf{mkbTT}$ , present extensive experimental results and report on new completions.
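The role of a confluent and terminating rule set can be seen in a toy setting: once every term has a unique normal form, equality in the induced equational theory is decided by comparing normal forms. The string-rewriting sketch below (an invented one-rule system, far simpler than Knuth–Bendix completion itself) illustrates exactly that use.

```python
def normal_form(term, rules):
    """Repeatedly apply directed rules (lhs -> rhs) until none applies.
    For a confluent and terminating system the result is unique, so
    two terms are equal in the induced equational theory iff their
    normal forms coincide -- the property completion aims to establish."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in term:
                term = term.replace(lhs, rhs, 1)
                changed = True
                break
    return term

# A toy confluent, terminating system over {a, b}: bubble 'a' leftward.
rules = [("ba", "ab")]
```

Here `normal_form("bab", rules)` and `normal_form("abb", rules)` coincide, so the two strings are equal in the induced theory.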

16.
We propose an effective procedure, the first one to our knowledge, for translating a proof term of the Calculus of Inductive Constructions (CIC) into a tactical expression of the high-level specification language of a CIC-based proof assistant like coq (Coq development team 2008) or matita (Asperti et al., J Autom Reason 39:109–139, 2007). This procedure, which should not be considered definitive at its present stage, is intended for translating the logical representation of a proof coming from any source, i.e. from a digital library or from another proof development system, into an equivalent proof presented in the proof assistant's editable high-level format. To testify to the effectiveness of our procedure, we report on its implementation in matita and on the translation of a significant set of proofs (Guidi, ACM Trans Comput Log 2009) from their logical representation as coq 7.3.1 (Coq development team 2002) CIC proof terms to their high-level representation as tactical expressions of matita's user interface language.

17.
18.
The class ${\mathcal{SLUR}}$ (Single Lookahead Unit Resolution) was introduced in Schlipf et al. (Inf Process Lett 54:133–137, 1995) as an umbrella class for efficient (poly-time) SAT solving, with linear-time SAT decision, while the recognition problem was not considered. Čepek et al. (2012) and Balyo et al. (2012) extended this class in various ways to hierarchies covering all of CNF (all clause-sets). We introduce a hierarchy ${\mathcal{SLUR}}_k$ which we argue is the natural “limit” of such approaches. The second source for our investigations is the class ${\mathcal{UC}}$ of unit-refutation complete clause-sets, introduced in del Val (1994) as a target class for knowledge compilation. Via the theory of “hardness” of clause-sets as developed in Kullmann (1999), Kullmann (Ann Math Artif Intell 40(3–4):303–352, 2004) and Ansótegui et al. (2008) we obtain a natural generalisation ${\mathcal{UC}}_k$ , containing those clause-sets which are “unit-refutation complete of level k”, which is the same as having hardness at most k. Utilising the strong connections to (tree-)resolution complexity and (nested) input resolution, we develop basic methods for the determination of hardness (the level k in ${\mathcal{UC}}_k$ ). A fundamental insight now is that ${\mathcal{SLUR}}_k = {\mathcal{UC}}_k$ holds for all k. We can thus exploit both streams of intuitions and methods for the investigations of these hierarchies. As an application we can easily show that the hierarchies from Čepek et al. (2012) and Balyo et al. (2012) are strongly subsumed by ${\mathcal{SLUR}}_k$ . Finally we consider the problem of “irredundant” clause-sets in ${\mathcal{UC}}_k$ . For 2-CNF we show that strong minimisations are possible in polynomial time, while already for (very special) Horn clause-sets minimisation is NP-complete. We conclude with an extensive discussion of open problems and future directions. 
We envisage the concepts investigated here to be the starting point for a theory of good SAT translations, which brings together the good SAT-solving aspects from ${\mathcal{SLUR}}$ together with the knowledge-representation aspects from ${\mathcal{UC}}$ , and expands this combination via notions of “hardness”.
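Unit resolution, the engine behind the ${\mathcal{SLUR}}$ and ${\mathcal{UC}}$ classes above, is simple to state: repeatedly assert the literal of any unit clause and simplify the clause set. A minimal sketch follows (clauses as sets of signed integers; this is only the propagation core, not the SLUR algorithm, which combines propagation with lookahead).

```python
def unit_propagate(clauses):
    """Unit propagation on a CNF formula given as a list of sets of
    signed integers (literal -v is the negation of variable v).
    Returns the derived assignment and the simplified clause set;
    producing the empty clause means the formula is refuted by
    unit resolution alone."""
    clauses = [frozenset(c) for c in clauses]
    assignment = set()
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        units = [u for u in units if u not in assignment]
        if not units:
            return assignment, clauses
        lit = units[0]
        assignment.add(lit)
        new = []
        for c in clauses:
            if lit in c:
                continue            # clause satisfied, drop it
            if -lit in c:
                c = c - {-lit}      # literal falsified, shrink the clause
            new.append(c)
        clauses = new

# (x1) and (not x1 or x2) and (not x2 or not x1):
# propagation derives the empty clause, i.e. a unit refutation.
assign, rest = unit_propagate([{1}, {-1, 2}, {-2, -1}])
```

A clause set is unit-refutation complete (level 1) when every inconsistency under partial assignments is detectable this way.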

19.
Stefan Kratsch, Algorithmica, 2012, 63(1–2):532–550
It has been observed in many places that constant-factor approximable problems often admit polynomial or even linear problem kernels for their decision versions, e.g., Vertex Cover, Feedback Vertex Set, and Triangle Packing. While there exist examples like Bin Packing, which does not admit any kernel unless P = NP, there apparently is a strong relation between these two polynomial-time techniques. We add to this picture by showing that the natural decision versions of all problems in two prominent classes of constant-factor approximable problems, namely MIN F+Π1 and MAX NP, admit polynomial problem kernels. Problems in MAX SNP, a subclass of MAX NP, are shown to admit kernels with a linear base set, e.g., the set of vertices of a graph. This extends results of Cai and Chen (J. Comput. Syst. Sci. 54(3): 465–474, 1997), stating that the standard parameterizations of problems in MAX SNP and MIN F+Π1 are fixed-parameter tractable, and complements recent research on problems that do not admit polynomial kernelizations (Bodlaender et al. in J. Comput. Syst. Sci. 75(8): 423–434, 2009).
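The classic example of a polynomial problem kernel is Buss's reduction for Vertex Cover: any vertex of degree greater than k must belong to every size-k cover, and after exhaustively applying this rule a yes-instance retains at most k² edges. A minimal sketch:

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover(k).
    Rule: a vertex of degree > k is in every cover of size <= k,
    so take it and decrement k.  Afterwards a yes-instance has at
    most k*k edges.  Returns (reduced_edges, k', forced_vertices),
    or None when no size-k cover can exist."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    changed = True
    while changed and k >= 0:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:
                forced.add(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None          # cannot be a yes-instance
    return edges, k, forced

# Star K_{1,5} with k = 1: the center has degree 5 > 1, is forced into
# the cover, and the instance reduces to the empty graph.
result = buss_kernel([(0, i) for i in range(1, 6)], k=1)
```

The size guarantee (at most k² edges in the reduced instance) is what makes this a polynomial kernel in the sense used in the abstract.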

20.
In this paper, inspired by some types of $BL$-algebra filters (deductive systems) introduced in Haveshki et al. (Soft Comput 10:657–664, 2006), Kondo and Dudek (Soft Comput 12:419–423, 2008) and Turunen (Arch Math Log 40:467–473, 2001), we define residuated lattice versions of these filters and study them in connection with Van Gasse et al. (Inf Sci 180(16):3006–3020, 2010), Lianzhen and Kaitai (Inf Sci 177:5725–5738, 2007) and Zhu and Xu (Inf Sci 180:3614–3632, 2010). We also consider some relations between these filters and the quotient residuated lattices constructed via them.
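As background for the filter variants mentioned above, the standard notion of a filter (deductive system) of a residuated lattice $L$ can be stated as follows (a textbook baseline definition, not taken from the paper): a nonempty subset $F \subseteq L$ is a filter if

```latex
1 \in F; \qquad
x, y \in F \;\Rightarrow\; x \odot y \in F; \qquad
(x \in F \text{ and } x \le y) \;\Rightarrow\; y \in F.
```

The variants studied in the cited papers strengthen or modify these closure conditions, and each filter induces a congruence and hence a quotient residuated lattice.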

