Similar Documents
1.
Numerous sophisticated local algorithms have been suggested in the literature for various fundamental problems. Notable examples are the MIS and $(\Delta+1)$-coloring algorithms by Barenboim and Elkin (Distrib Comput 22(5–6):363–379, 2010), by Kuhn (2009), and by Panconesi and Srinivasan (J Algorithms 20(2):356–374, 1996), as well as the $O(\Delta^2)$-coloring algorithm by Linial (SIAM J Comput 21:193–201, 1992). Unfortunately, most known local algorithms (including, in particular, the aforementioned ones) are non-uniform; that is, they generally require good estimates of one or more global parameters of the network, e.g., the maximum degree $\Delta$ or the number of nodes $n$. This paper provides a method for transforming a non-uniform local algorithm into a uniform one, and the resulting algorithm enjoys the same asymptotic running time as the original non-uniform algorithm. Our method applies to a wide family of both deterministic and randomized algorithms. Specifically, it applies to almost all state-of-the-art non-uniform algorithms for MIS and Maximal Matching, as well as to many results concerning the coloring problem (in particular, to all the aforementioned algorithms). To obtain our transformations we introduce a new distributed tool called pruning algorithms, which we believe may be of independent interest.
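To make concrete why such algorithms want the global parameter $\Delta$ in advance, the following minimal sketch shows sequential greedy $(\Delta+1)$-coloring, where the palette size is fixed by $\Delta$ before any vertex is colored. It is an illustration only, neither the distributed algorithms cited above nor the paper's pruning transformation.

```python
# Minimal sketch: sequential greedy (Delta+1)-coloring (illustration only;
# not the distributed algorithms cited above). The point: the palette
# {0, ..., Delta} is fixed by the global parameter Delta in advance.

def greedy_coloring(adj):
    """adj: dict mapping each vertex to the set of its neighbors."""
    delta = max((len(nbrs) for nbrs in adj.values()), default=0)
    palette = range(delta + 1)   # Delta+1 colors always suffice
    color = {}
    for v in adj:                # any vertex order works
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in palette if c not in used)
    return color

# Example: a 4-cycle has Delta = 2, so 3 colors suffice (2 are used here).
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(greedy_coloring(adj))      # {0: 0, 1: 1, 2: 0, 3: 1}
```

Each vertex has at most $\Delta$ neighbors, so some color in $\{0,\ldots,\Delta\}$ is always free; a uniform algorithm must get by without knowing $\Delta$ (or $n$) up front.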

2.
K-anonymity (Samarati and Sweeney 1998; Samarati, IEEE Trans Knowl Data Eng, 13(6):1010–1027, 2001; Sweeney, Int J Uncertain, Fuzziness Knowl-Based Syst, 10(5):557–570, 2002) and its variants, l-diversity (Machanavajjhala et al., ACM TKDD, 2007) and t-closeness (Li et al. 2007) among others, are anonymization techniques for relational and transaction data that are used to protect privacy against re-identification attacks. A relational dataset D is k-anonymous if every record in D has at least k-1 other records with identical quasi-identifier attribute values. Combining the released data with external data will then never allow the recipient to associate a released record with fewer than k individuals (Samarati, IEEE Trans Knowl Data Eng, 13(6):1010–1027, 2001). However, the current concept of k-anonymity on transaction data treats all items as quasi-identifiers; the anonymized data set contains groups of k identical transactions and suffers from low data utility (He and Naughton 2009; He et al. 2011; Liu and Wang 2010; Terrovitis et al., VLDB J, 20(1):83–106, 2011; Terrovitis et al. 2008). To improve the utility of anonymized transaction data, this work proposes a novel anonymity concept for transaction data that contain both quasi-identifier items (QID) and sensitive items (SI). A transaction that contains sensitive items must have at least k-1 other identical transactions (Ghinita et al., IEEE TKDE, 23(2):161–174, 2011; Xu et al. 2008); for a transaction that does not contain a sensitive item, no anonymization is required. A transaction dataset that satisfies this property is said to be sensitive k-anonymous. Three algorithms, Sensitive Transaction Neighbors (STN), Gray Sort Clustering (GSC), and Nearest Neighbors for K-anonymization (K-NN), are developed; they achieve sensitive k-anonymity on transaction data by adding or deleting QID items and by only adding SI. Additionally, a simple “privacy value” is proposed to evaluate the degree of privacy of different types of k-anonymity on transaction data. Extensive numerical simulations were carried out to demonstrate the characteristics of the proposed algorithms and to compare them to other types of k-anonymity approaches. The results show that each technique has its own advantages under different criteria such as running time, operation, and information loss, and the results can serve as a guideline for selecting an anonymization technique for a given data set and application.
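The core definition is easy to operationalize. Below is a minimal sketch of a k-anonymity check over quasi-identifier attributes; the attribute names and records are illustrative, not from the paper.

```python
from collections import Counter

def is_k_anonymous(records, qid_attrs, k):
    """True iff every combination of quasi-identifier values occurring in
    the dataset is shared by at least k records."""
    groups = Counter(tuple(r[a] for a in qid_attrs) for r in records)
    return all(count >= k for count in groups.values())

# Illustrative records (not from the paper): "zip" and "age" are QIDs.
records = [
    {"zip": "47677", "age": 29, "disease": "flu"},
    {"zip": "47677", "age": 29, "disease": "cancer"},
    {"zip": "47602", "age": 22, "disease": "flu"},
]
print(is_k_anonymous(records, ["zip", "age"], 2))  # False: ('47602', 22) occurs once
```

Sensitive k-anonymity relaxes this test: only the groups whose transactions contain a sensitive item need to reach size k, which is why it can retain more utility.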

3.
Teachers and students face many challenges in shifting from traditional classroom cultures to enacting the Knowledge-Building Communities model (KBC model) supported by the CSCL environment, Knowledge Forum (Bereiter, 2002; Bereiter & Scardamalia, 1993; Scardamalia, 2002; Scardamalia & Bereiter, 2006). Enacting the model involves socializing students into knowledge work, similar to disciplinary communities. A useful construct in the field of the Learning Sciences for understanding knowledge work is “epistemic games” (Collins & Ferguson, 1993; Morrison & Collins, 1995; Perkins, 1997). We propose that a powerful means for supporting classroom enactments of the KBC model entails conceptualizing Knowledge Forum as a collective space for playing multi-player epistemic games. Participation in knowledge-building communities is then scaffolded through learning the moves of such games. We have designed scaffolding tools that highlight particular knowledge-building moves for practice and reflection as a means of supporting students and teachers in coming to understand how to work together toward the progressive improvement of ideas. To examine our design theories in practice, we present research on Ideas First, a design-based research program involving enactments of the KBC model in Singaporean primary science classrooms (Bielaczyc & Ow, 2007, 2010; Ow & Bielaczyc, 2007, 2008).

4.
The TreeRank algorithm was recently proposed in [1] and [2] as a scoring-based method built on recursive partitioning of the input space. This tree induction algorithm builds orderings by recursively optimizing the Receiver Operating Characteristic (ROC) curve through a one-step optimization procedure called LeafRank. One aim of this paper is an in-depth analysis of the empirical performance of the variants of the TreeRank/LeafRank method. Numerical experiments based on both artificial and real data sets are provided. Further experiments using resampling and randomization, in the spirit of bagging and random forests [3, 4], are developed, and we show how they increase both stability and accuracy in bipartite ranking. Moreover, an empirical comparison with other efficient scoring algorithms such as RankBoost and RankSVM is presented on UCI benchmark data sets.
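The criterion that TreeRank targets through the ROC curve, and on which ranking methods such as RankBoost and RankSVM are compared, is the empirical AUC: the fraction of positive-negative pairs that a scoring rule orders correctly. A brute-force sketch of this criterion (not of the TreeRank algorithm itself):

```python
def empirical_auc(scores, labels):
    """Fraction of (positive, negative) pairs ranked correctly by the
    scores; ties count one half. Labels are 1 (positive) / 0 (negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    correct = sum(1.0 if p > q else 0.5 if p == q else 0.0
                  for p in pos for q in neg)
    return correct / (len(pos) * len(neg))

# 3 of the 4 positive-negative pairs are ordered correctly.
print(empirical_auc([0.9, 0.7, 0.4, 0.2], [1, 0, 1, 0]))  # 0.75
```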

5.
Matthias Möller, Computing 95(5):425–448, 2013
This paper is concerned with the extension of the algebraic flux-correction (AFC) approach (Kuzmin in Computational fluid and solid mechanics, Elsevier, Amsterdam, pp 887–888, 2001; J Comput Phys 219:513–531, 2006; Comput Appl Math 218:79–87, 2008; J Comput Phys 228:2517–2534, 2009; Flux-corrected transport: principles, algorithms, and applications, 2nd edn. Springer, Berlin, pp 145–192, 2012; J Comput Appl Math 236:2317–2337, 2012; Kuzmin et al. in Comput Methods Appl Mech Eng 193:4915–4946, 2004; Int J Numer Methods Fluids 42:265–295, 2003; Kuzmin and Möller in Flux-corrected transport: principles, algorithms, and applications. Springer, Berlin, 2005; Kuzmin and Turek in J Comput Phys 175:525–558, 2002; J Comput Phys 198:131–158, 2004) to nonconforming finite element methods for the linear transport equation. Accurate nonoscillatory approximations to convection-dominated flows are obtained by stabilizing the continuous Galerkin method with solution-dependent artificial diffusion whose magnitude is controlled by a flux limiter; this concept dates back to flux-corrected transport schemes. The unique feature of AFC is that all information is extracted from the system matrices, which are manipulated to satisfy certain mathematical constraints. AFC schemes have been devised with conforming $P_1$ and $Q_1$ finite elements in mind, but this is not a prerequisite. Here, we consider their extension to the nonconforming Crouzeix–Raviart element (Crouzeix and Raviart in RAIRO R3 7:33–76, 1973) on triangular meshes and its quadrilateral counterpart, the class of rotated bilinear Rannacher–Turek elements (Rannacher and Turek in Numer Methods PDEs 8:97–111, 1992). The underlying design principles of AFC schemes are shown to hold for (some variant of) both elements. However, numerical tests for a purely convective flow and a convection–diffusion problem demonstrate that flux-corrected solutions are overdiffusive for the Crouzeix–Raviart element. Good resolution of smooth and discontinuous profiles is attested for $Q_1^\mathrm{nc}$ approximations on quadrilateral meshes. A synthetic benchmark is used to quantify the artificial diffusion present in conforming and nonconforming high-resolution schemes of AFC type. Finally, the implementation of efficient sparse matrix–vector multiplications is addressed.
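The basic AFC manipulation, building artificial diffusion from the system matrix so that the low-order operator has no negative off-diagonal entries, can be sketched in a few lines. This is discrete upwinding in its simplest form, shown for illustration on a small dense matrix; the paper's contribution concerns the extension of such schemes to nonconforming elements, which this sketch does not cover.

```python
import numpy as np

def discrete_upwinding(K):
    """Build the artificial diffusion operator D for a discrete transport
    operator K: d_ij = max(0, -k_ij, -k_ji) for i != j, with zero row and
    column sums, so that L = K + D has no negative off-diagonal entries."""
    D = np.zeros_like(K)
    n = K.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            d = max(0.0, -K[i, j], -K[j, i])
            D[i, j] = D[j, i] = d
            D[i, i] -= d
            D[j, j] -= d
    return D

# Toy 2x2 operator: the negative off-diagonal entry is cured by D.
K = np.array([[1.0, -2.0],
              [0.5,  1.0]])
print(K + discrete_upwinding(K))   # [[-1.   0. ] [ 2.5 -1. ]]
```

The flux limiter then reintroduces as much of the removed diffusion as the mathematical constraints allow, which is where the conforming/nonconforming distinction studied in the paper enters.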

6.
In a very recent paper, Peng and Liu (Neural Comput Appl 20:543–547, 2011) investigated the pth moment stability of stochastic Grossberg–Hopfield neural networks with Markov volatilities by means of Theorem 4.1 of Mao et al. (Bernoulli 6:73–90, 2000). We point out that this theorem establishes pth moment exponential stability for a class of stochastic dynamical systems with constant delay, and therefore cannot be applied to the case of variable time delays. Peng and Liu (2011) nevertheless invoked it for Grossberg–Hopfield neural networks with variable delays, so there is a gap between Peng and Liu (Neural Comput Appl 20:543–547, 2011, Theorem 1) and Mao et al. (Bernoulli 6:73–90, 2000, Theorem 4.1). In this paper, we fill this gap. Moreover, a numerical example is provided to demonstrate the effectiveness and applicability of the theoretical results.
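For reference, the property at issue, stated in its standard textbook form (constants and norms vary by author):

```latex
% pth moment exponential stability. The trivial solution is pth moment
% exponentially stable if there exist C > 0 and \lambda > 0 such that
\mathbb{E}\,\|x(t;\xi)\|^{p} \;\le\; C\,\mathbb{E}\,\|\xi\|^{p}\,e^{-\lambda t},
\qquad t \ge 0;
% equivalently,
\limsup_{t\to\infty}\frac{1}{t}\log \mathbb{E}\,\|x(t)\|^{p} \;<\; 0.
```

The subtlety exploited in the paper is that a decay estimate proved under a constant delay does not automatically transfer to time-varying delays, since the delayed state can no longer be bounded by a fixed time shift.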

7.
We develop a novel and simple theoretical model of time-interleaved sequential lamination micromixers that improves on the model proposed by Nguyen and coworkers (Microfluid Nanofluid 1:373–375, 2005a; Lab Chip 5:1320–1326, 2005b; J Phys Conf Ser 34:136–141, 2006) based on the Taylor–Aris dispersion theory. The Nguyen model takes the nonuniform structure of the velocity profile into account through an effective dispersion coefficient. However, it is essentially a one-dimensional model that is not suitable to describe either (i) the behavior of mixing at short length scales, characterized by the growth of a mixing boundary layer near the channel walls, or (ii) the exponential decay of the concentration field occurring at larger length scales. The model we propose, which is based upon the theory of imaginary potential developed by Giona et al. (J Fluid Mech 513:221–237, 2004; Europhys Lett 83:34001, 2008; J Fluid Mech 639:291–341, 2009a), is able to provide quantitative predictions on the evolution of the $L^2$-norm of the concentration field as a function of the axial coordinate ξ, both at short and at asymptotic length scales. A quantitative comparison with the Nguyen model is illustrated and discussed. Finally, the coupling between parallel lamination and sequential segmentation is analyzed, and leads to unexpected and apparently counter-intuitive findings.
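For context, the classical Taylor–Aris result underlying the Nguyen model: for pressure-driven flow in a circular capillary of radius a, mean velocity U and molecular diffusivity D, axial spreading is governed by an enhanced effective coefficient (other channel cross-sections change only the numerical constant):

```latex
% Taylor–Aris effective axial dispersion (circular capillary of radius a):
D_{\mathrm{eff}} \;=\; D\left(1 + \frac{\mathrm{Pe}^{2}}{48}\right),
\qquad \mathrm{Pe} \;=\; \frac{U a}{D}.
```

A single effective coefficient of this kind is exactly the one-dimensional reduction whose short-length and asymptotic limitations the paper addresses.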

8.
The class ${\mathcal{SLUR}}$ (Single Lookahead Unit Resolution) was introduced in Schlipf et al. (Inf Process Lett 54:133–137, 1995) as an umbrella class for efficient (poly-time) SAT solving, with linear-time SAT decision, while the recognition problem was not considered. Čepek et al. (2012) and Balyo et al. (2012) extended this class in various ways to hierarchies covering all of CNF (all clause-sets). We introduce a hierarchy ${\mathcal{SLUR}}_k$, which we argue is the natural “limit” of such approaches. The second source for our investigations is the class ${\mathcal{UC}}$ of unit-refutation complete clause-sets, introduced in del Val (1994) as a target class for knowledge compilation. Via the theory of “hardness” of clause-sets as developed in Kullmann (1999), Kullmann (Ann Math Artif Intell 40(3–4):303–352, 2004) and Ansótegui et al. (2008) we obtain a natural generalisation ${\mathcal{UC}}_k$, containing those clause-sets which are “unit-refutation complete of level k”, which is the same as having hardness at most k. Utilising the strong connections to (tree-)resolution complexity and (nested) input resolution, we develop basic methods for the determination of hardness (the level k in ${\mathcal{UC}}_k$). A fundamental insight now is that ${\mathcal{SLUR}}_k = {\mathcal{UC}}_k$ holds for all k. We can thus exploit both streams of intuitions and methods for the investigation of these hierarchies. As an application we can easily show that the hierarchies from Čepek et al. (2012) and Balyo et al. (2012) are strongly subsumed by ${\mathcal{SLUR}}_k$. Finally we consider the problem of “irredundant” clause-sets in ${\mathcal{UC}}_k$. For 2-CNF we show that strong minimisations are possible in polynomial time, while already for (very special) Horn clause-sets minimisation is NP-complete. We conclude with an extensive discussion of open problems and future directions. We envisage the concepts investigated here to be the starting point for a theory of good SAT translations, which brings together the good SAT-solving aspects from ${\mathcal{SLUR}}$ with the knowledge-representation aspects from ${\mathcal{UC}}$, and expands this combination via notions of “hardness”.
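Unit resolution, the engine behind both SLUR and unit-refutation completeness, is simply the propagation of unit clauses to a fixed point. A minimal sketch (clauses as sets of nonzero integer literals, with negation by sign; a toy illustration, not the paper's hierarchy recognition):

```python
def unit_propagate(clauses):
    """Run unit resolution to a fixed point. Returns (result, assignment):
    'UNSAT' if the empty clause is derived, else 'OK' with the literals
    forced so far. Clauses are sets of nonzero ints; -v negates v."""
    clauses = [set(c) for c in clauses]
    assignment = set()
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return "OK", assignment
        lit = next(iter(unit))
        assignment.add(lit)
        new = []
        for c in clauses:
            if lit in c:
                continue            # clause satisfied, drop it
            c = c - {-lit}          # falsified literal removed
            if not c:
                return "UNSAT", assignment
            new.append(c)
        clauses = new

# (x1) AND (-x1 OR x2) AND (-x1 OR -x2): forces x1, then x2, then conflict.
print(unit_propagate([{1}, {-1, 2}, {-1, -2}]))  # ('UNSAT', {1, 2})
```

A clause-set is unit-refutation complete of level 0 exactly when every unsatisfiable instantiation is detected by this propagation alone; the levels k add bounded case-splitting on top.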

9.
We present several variants of the sunflower conjecture of Erdős & Rado (J Lond Math Soc 35:85–90, 1960) and discuss the relations among them. We then show that two of these conjectures (if true) imply negative answers to the questions of Coppersmith & Winograd (J Symb Comput 9:251–280, 1990) and Cohn et al. (2005) regarding possible approaches for obtaining fast matrix-multiplication algorithms. Specifically, we show that the Erdős–Rado sunflower conjecture (if true) implies a negative answer to the “no three disjoint equivoluminous subsets” question of Coppersmith & Winograd (J Symb Comput 9:251–280, 1990); we also formulate a “multicolored” sunflower conjecture in ${\mathbb{Z}_3^n}$ and show that (if true) it implies a negative answer to the “strong USP” conjecture of Cohn et al. (2005) (although it does not seem to impact a second conjecture in Cohn et al. (2005) or the viability of the general group-theoretic approach). A surprising consequence of our results is that the Coppersmith–Winograd conjecture actually implies the Cohn et al. conjecture. The multicolored sunflower conjecture in ${\mathbb{Z}_3^n}$ is a strengthening of the well-known (ordinary) sunflower conjecture in ${\mathbb{Z}_3^n}$, and we show via our connection that a construction from Cohn et al. (2005) yields a lower bound of $(2.51\ldots)^n$ on the size of the largest multicolored 3-sunflower-free set, which beats the current best-known lower bound of $(2.21\ldots)^n$ due to Edel (2004) on the size of the largest 3-sunflower-free set in ${\mathbb{Z}_3^n}$.
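For the reader's convenience, the objects in question: sets $S_1,\ldots,S_k$ form a k-sunflower with core Y if every pairwise intersection equals Y, so the petals $S_i \setminus Y$ are pairwise disjoint.

```latex
% k-sunflower with core Y (petals S_i \setminus Y pairwise disjoint):
S_i \cap S_j \;=\; Y \qquad \text{for all } i \neq j .
% Erdős–Rado (1960): any family of more than w!\,(k-1)^{w} sets of size w
% contains a k-sunflower; the sunflower conjecture asserts that C(k)^{w}
% sets already suffice, for some C(k) depending only on k.
```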

10.
We revisit from a fairness point of view the problem of online load balancing in the restricted assignment model and the 1-∞ model. We consider both a job-centric and a machine-centric view of fairness, as proposed by Goel et al. (In: Symposium on discrete algorithms, pp. 384–390, 2005). These notions are equivalent to the approximate notion of prefix competitiveness proposed by Kleinberg et al. (In: Proceedings of the 40th annual symposium on foundations of computer science, p. 568, 2001), as well as to the notion of approximate majorization, and they generalize the well-studied notion of max-min fairness. We resolve a question posed by Goel et al. (2005), proving that the greedy strategy is globally O(log m)-fair, where m denotes the number of machines. This result improves upon the analysis of Goel et al. (2005), who showed that the greedy strategy is globally O(log n)-fair, where n is the number of jobs. Typically n ≫ m, and therefore our improvement is significant. Our proof matches the known lower bound for the problem with respect to the measure of global fairness. The improved bound is obtained by analyzing, in a more accurate way, the more general restricted assignment model studied previously in Azar et al. (J. Algorithms 18:221–237, 1995). We provide an alternative bound which is never worse than the bounds of Azar et al. (1995), and is strictly better in many cases. The bound we prove is, in fact, much more general: it bounds the load on any prefix of the most loaded machines. As a corollary of this more general bound we find that the greedy algorithm results in an assignment that is globally O(log m)-balanced. The last result generalizes the previous result of Goel et al. (2005), who proved that the greedy algorithm yields an assignment that is globally O(log m)-balanced for the 1-∞ model.
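The greedy strategy analyzed here is simple to state: each arriving job is placed on the currently least-loaded machine among those allowed to run it. A minimal sketch of this rule in the restricted assignment model (the job weights and permission sets are illustrative):

```python
def greedy_restricted_assignment(jobs, num_machines):
    """jobs: list of (weight, allowed_machines). Each arriving job goes to
    the least-loaded machine in its allowed set (ties broken by index)."""
    load = [0.0] * num_machines
    assignment = []
    for weight, allowed in jobs:
        m = min(sorted(allowed), key=lambda i: load[i])
        load[m] += weight
        assignment.append(m)
    return assignment, load

# Illustrative instance: weights and permission sets are made up.
jobs = [(1.0, {0, 1}), (1.0, {0}), (2.0, {1, 2}), (1.0, {0, 1})]
print(greedy_restricted_assignment(jobs, 3))
# ([0, 0, 1, 0], [3.0, 2.0, 0.0])
```

The fairness results concern not just the maximum entry of the final load vector but every prefix of its sorted entries, which is what "globally O(log m)-fair/balanced" quantifies.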

11.
This paper investigates the pth moment exponential stability of a class of stochastic recurrent neural networks with Markovian jump parameters. With the help of a Lyapunov function, stochastic analysis techniques, the generalized Halanay inequality and the Hardy inequality, some novel sufficient conditions for the pth moment exponential stability of the considered system are derived. The results obtained in this paper are completely new and complement and improve some of the previously known results (Liao and Mao, Stoch Anal Appl, 14:165–185, 1996; Wan and Sun, Phys Lett A, 343:306–318, 2005; Hu et al., Chaos Solitons Fractals, 27:1006–1010, 2006; Sun and Cao, Nonlinear Anal Real, 8:1171–1185, 2007; Huang et al., Inf Sci, 178:2194–2203, 2008; Wang et al., Phys Lett A, 356:346–352, 2006; Peng and Liu, Neural Comput Appl, 20:543–547, 2011). Moreover, a numerical example is provided to demonstrate the effectiveness and applicability of the theoretical results.

12.
In this paper we provide improved approximation algorithms for the Min-Max Tree Cover and Bounded Tree Cover problems. Given a graph G=(V,E) with weights $w:E\rightarrow \mathbb{R}^{+}$, a set $T_1,T_2,\ldots,T_k$ of subtrees of G is called a tree cover of G if $V=\bigcup_{i=1}^{k} V(T_{i})$. In the Min-Max k-Tree Cover problem we are given a graph G and a positive integer k, and the goal is to find a tree cover with k trees such that the weight of the largest tree in the cover is minimized. We present a 3-approximation algorithm for this problem, improving on the two different 4-approximation algorithms presented in Arkin et al. (J. Algorithms 59:1–18, 2006) and Even et al. (Oper. Res. Lett. 32(4):309–315, 2004). The problem is known to have an APX-hardness lower bound of $\frac{3}{2}$ (Xu and Wen in Oper. Res. Lett. 38:169–173, 2010). In the Bounded Tree Cover problem we are given a graph G and a bound λ, and the goal is to find a tree cover with the minimum number of trees such that each tree has weight at most λ. We present a 2.5-approximation algorithm for this problem, improving on the 3-approximation bound in Arkin et al. (J. Algorithms 59:1–18, 2006).
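A folklore building block behind such tree-cover approximations is a splitting lemma: a tree whose edges each weigh at most λ can be cut into edge-disjoint pieces of weight in [λ, 2λ), plus at most one lighter leftover. The following bottom-up sketch implements one common form of this lemma; it is a simplification for illustration, not the paper's 3- or 2.5-approximation algorithm.

```python
def split_tree(tree, root, weight, lam):
    """Cut a tree into edge-disjoint pieces of weight in [lam, 2*lam),
    plus possibly one lighter leftover. Assumes every edge weight <= lam.
    tree: adjacency dict; weight[(u, v)] = weight[(v, u)] = edge weight."""
    pieces = []

    def residual(v, parent):
        bucket = 0.0                       # weight still hanging at v
        for c in tree[v]:
            if c == parent:
                continue
            r = residual(c, v) + weight[(v, c)]
            if r >= lam:                   # heavy child branch: its own piece
                pieces.append(r)           # r < 2*lam: residual < lam + edge <= lam
            else:
                bucket += r
                if bucket >= lam:          # bucket < 2*lam: two sub-lam parts
                    pieces.append(bucket)
                    bucket = 0.0
        return bucket                      # always < lam

    leftover = residual(root, None)
    if leftover > 0:
        pieces.append(leftover)
    return pieces

# A path 0-1-2-3-4 with unit edge weights, lam = 2:
tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
w = {(i, i + 1): 1.0 for i in range(4)}
w.update({(j, i): x for (i, j), x in list(w.items())})
print(split_tree(tree, 0, w, 2.0))         # [2.0, 2.0]
```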

13.
The stochastic collocation method (Babuška et al. in SIAM J Numer Anal 45(3):1005–1034, 2007; Nobile et al. in SIAM J Numer Anal 46(5):2411–2442, 2008a; SIAM J Numer Anal 46(5):2309–2345, 2008b; Xiu and Hesthaven in SIAM J Sci Comput 27(3):1118–1139, 2005) has recently been applied to stochastic problems that can be transformed into parametric systems. Meanwhile, the reduced basis method (Maday et al. in Comptes Rendus Mathematique 335(3):289–294, 2002; Patera and Rozza in Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations, Version 1.0, Copyright MIT, http://augustine.mit.edu, 2007; Rozza et al. in Arch Comput Methods Eng 15(3):229–275, 2008), primarily developed for solving parametric systems, has recently been used to deal with stochastic problems (Boyaval et al. in Comput Methods Appl Mech Eng 198(41–44):3187–3206, 2009; Arch Comput Methods Eng 17:435–454, 2010). In this work, we aim at comparing the performance of the two methods when applied to the solution of linear stochastic elliptic problems. Two important comparison criteria are considered: (1) convergence of the approximation error; (2) computational costs for both offline construction and online evaluation. Numerical experiments are performed for problems from low dimensions $O(1)$ to moderate dimensions $O(10)$ and to high dimensions $O(100)$. The main result stemming from our comparison is that the reduced basis method converges better in theory and faster in practice than the stochastic collocation method for smooth problems, and is more suitable for large-scale and high-dimensional stochastic problems when computational costs are taken into account.

14.
Reachability and shortest path problems are NL-complete for general graphs. They are known to be in L for graphs of tree-width 2 (Jakoby and Tantau in Proceedings of FSTTCS’07: The 27th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 216–227, 2007). In this paper, we improve these bounds for k-trees, where k is a constant. In particular, the main results of our paper are log-space algorithms for reachability in directed k-trees, and for computation of shortest and longest paths in directed acyclic k-trees. Besides the path problems mentioned above, we also consider the problem of deciding whether a k-tree has a perfect matching (decision version), and if so, finding a perfect matching (search version), and prove that these two problems are L-complete. These problems are known to be in P and in RNC for general graphs, and in SPL for planar bipartite graphs, as shown in Datta et al. (Theory Comput. Syst. 47:737–757, 2010). Our results settle the complexity of these problems for the class of k-trees. The results are also applicable for bounded tree-width graphs, when a tree-decomposition is given as input. The technique central to our algorithms is a careful implementation of the divide-and-conquer approach in log-space, along with some ideas from Jakoby and Tantau (Proceedings of FSTTCS’07: The 27th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 216–227, 2007) and Limaye et al. (Theory Comput. Syst. 46(3):499–522, 2010).

15.
In this paper, inspired by some types of $BL$-algebra filters (deductive systems) introduced in Haveshki et al. (Soft Comput 10:657–664, 2006), Kondo and Dudek (Soft Comput 12:419–423, 2008) and Turunen (Arch Math Log 40:467–473, 2001), we define residuated lattice versions of these filters and study them in connection with Van Gasse et al. (Inf Sci 180(16):3006–3020, 2010), Lianzhen and Kaitai (Inf Sci 177:5725–5738, 2007), and Zhu and Xu (Inf Sci 180:3614–3632, 2010). We also consider some relations between these filters and the quotient residuated lattices constructed via them.

16.
Danvy’s functional unparsing problem (Danvy in J. Funct. Program. 8(6), 621–625, 1998) is to implement a type-safe “printf” function, which converts a sequence of heterogeneous arguments to a string according to a given format. The dual problem is to implement a type-safe “scanf” function, which extracts a sequence of heterogeneous arguments from a string by interpreting (Friedman and Wand in LFP, pp. 348–355, 1984 and in Essentials of Programming Languages, MIT Press, 2008) the same format as an equally heterogeneous sequence of patterns that binds zero or more variables. We derive multiple solutions to both problems (Wand in J. ACM 27(1), 164–180, 1980) from their formal specifications (Wand in Theor. Comput. Sci. 20(1), 3–32, 1982). On one hand, our solutions show how the Hindley-Milner type system, unextended, permits accessing heterogeneous sequences with the static assurance of type safety. On the other hand, our solutions demonstrate the use of control operators (Felleisen et al. in Proceedings of the 1988 ACM Conference on Lisp and Functional Programming, pp. 52–62, ACM Press, New York, 1988; Wand in POPL 85: Conference Record of the Annual ACM Symposium on Principles of Programming Languages, vol. 16, ACM Press, New York, 1985; Meyer and Wand in Logics of Programs, Lecture Notes in Computer Science, vol. 193, pp. 219–224, Springer, Berlin, 1985) to communicate with formats as coroutines (Wand in Proceedings of the 1980 ACM Conference on Lisp and Functional Programming, vol. 12, pp. 285–299, ACM Press, New York, 1980 and Haynes et al. in LFP, pp. 293–298, 1984).
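The flavor of Danvy's solution can be conveyed even in an untyped language: a format is a function assembled from directives in continuation-passing style, and sprintf runs it against the identity continuation. The Python sketch below only illustrates this CPS structure; it deliberately lacks the static Hindley-Milner guarantees that are the point of the paper.

```python
# Danvy-style functional unparsing, CPS sketch (untyped illustration).
# A directive maps a continuation k to a function of the accumulated
# string; argument-consuming directives return curried functions that
# take the argument first.

def lit(s):                       # literal text: consumes no argument
    return lambda k: lambda acc: k(acc + s)

def STR(k):                       # like %s: consumes one string
    return lambda acc: lambda x: k(acc + x)

def INT(k):                       # like %d: consumes one integer
    return lambda acc: lambda n: k(acc + str(n))

def comp(f, g):                   # format composition: f, then g
    return lambda k: f(g(k))

def sprintf(fmt):                 # run the format on the identity continuation
    return fmt(lambda s: s)("")

fmt = comp(lit("Hello "),
      comp(STR,
      comp(lit("! You are "),
      comp(INT, lit(" years old.")))))

print(sprintf(fmt)("Alice")(42))  # Hello Alice! You are 42 years old.
```

In ML the same construction type-checks so that sprintf(fmt) has exactly the curried type dictated by the format, which is the "static assurance of type safety" referred to above.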

17.
In a previous paper, we laid out the vision of a novel graph query processing paradigm in which, instead of processing a visual query graph after its construction, visual query formulation and processing are interleaved by exploiting the latency offered by the GUI to filter irrelevant matches and prefetch partial query results [8]. Our recent attempts at implementing this vision [8, 9] show significant improvement in system response time (SRT) for subgraph queries. However, these efforts are designed specifically for graph databases containing a large collection of small or medium-sized graphs. In this paper, we propose a novel algorithm called QUBLE (QUery Blender for Large nEtworks) to realize this visual subgraph querying paradigm on very large networks (e.g., protein interaction networks, social networks). First, it decomposes a large network into a set of graphlets and supergraphlets using a minimum-cut-based graph partitioning technique. Next, it mines approximate frequent fragments and small infrequent fragments (SIFs) from them and identifies their occurrences in these graphlets and supergraphlets. Then, the indexing framework of [9] is enhanced so that the mined fragments can be exploited to index graphlets for efficient blending of visual subgraph query formulation and query processing. Extensive experiments on large networks demonstrate the effectiveness of QUBLE.

18.
Given a DNF formula f on n variables, the two natural size measures are the number of terms, or size, s(f), and the maximum width of a term, w(f). It is folklore that small DNF formulas can be made narrow: if a formula has m terms, it can be $\epsilon$-approximated by a formula of width $\log(m/\epsilon)$. We prove a converse, showing that narrow formulas can be sparsified. More precisely, any width-w DNF, irrespective of its size, can be $\epsilon$-approximated by a width-w DNF with at most $(w\log(1/\epsilon))^{O(w)}$ terms. We combine our sparsification result with the work of Luby & Velickovic (1991; Algorithmica 16(4/5):415–433, 1996) to give a faster deterministic algorithm for approximately counting the number of satisfying solutions to a DNF. Given a formula on n variables with poly(n) terms, we give a deterministic $n^{\tilde{O}(\log\log n)}$-time algorithm that computes an additive $\epsilon$-approximation to the fraction of satisfying assignments of f for $\epsilon = 1/\mathrm{poly}(\log n)$. The previous best result, due to Luby and Velickovic from nearly two decades ago, had a running time of $n^{\exp(O(\sqrt{\log\log n}))}$ (Luby & Velickovic 1991; Algorithmica 16(4/5):415–433, 1996).
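The folklore direction quoted above has a one-line proof and a one-line implementation: a term of width w is satisfied by a $2^{-w}$ fraction of assignments, so dropping every term wider than $\log_2(m/\epsilon)$ changes the accepted fraction by less than $\epsilon$ by a union bound. A sketch (terms encoded as dicts from variable index to required truth value; all names are illustrative):

```python
import math
from itertools import product

def narrow(terms, eps):
    """Drop every term wider than log2(m/eps). Each dropped term is
    satisfied by < eps/m of all assignments, so by a union bound the
    accepted fraction changes by less than eps."""
    threshold = math.log2(len(terms) / eps)
    return [t for t in terms if len(t) <= threshold]

def satisfying_fraction(terms, n):
    """Exact fraction by enumeration (exponential; sanity checks only)."""
    count = 0
    for bits in product([False, True], repeat=n):
        a = dict(enumerate(bits))
        if any(all(a[v] == b for v, b in t.items()) for t in terms):
            count += 1
    return count / 2 ** n

terms = [{0: True}, {1: True, 2: False, 3: True}]   # x0 OR (x1 & ~x2 & x3)
print(satisfying_fraction(terms, 4))                # 0.5625
print(satisfying_fraction(narrow(terms, 0.5), 4))   # 0.5, within eps = 0.5
```

The paper's converse, sparsifying a narrow DNF without increasing its width, is the hard direction and is not captured by this sketch.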

19.
This work contrasts Giovanni Sartor’s view of inferential semantics of legal concepts (Sartor in Artif Intell Law 17:217–251, 2009) with a probabilistic model of theory formation (Kemp et al. in Cognition 114:165–196, 2010). The work further explores possibilities of implementing Kemp’s probabilistic model of theory formation in the context of mapping legal concepts between two individual legal systems. For implementing the legal concept mapping, we propose a cross-categorization approach that combines three mathematical models: the Bayesian Model of Generalization (BMG; Tenenbaum and Griffiths in Behav Brain Sci 24:629–640, 2001), the probabilistic model of theory formation, i.e., the Infinite Relational Model (IRM), first introduced by Kemp et al. (The Twenty-First National Conference on Artificial Intelligence, 2006; Cognition 114:165–196, 2010), and its extension, the normal-IRM (n-IRM), proposed by Herlau et al. (IEEE International Workshop on Machine Learning for Signal Processing, 2012). We apply our cross-categorization approach to datasets in which legal concepts related to educational systems are defined by the Japanese and the Danish authorities, respectively, according to the International Standard Classification of Education. The main contribution of this work is a conceptual framework for the cross-categorization approach that, inspired by Sartor (Artif Intell Law 17:217–251, 2009), attempts to explain a reasoner’s inferential mechanisms.

20.
We first consider the problem of finding a maximum-size stable matching when incomplete lists and ties are both allowed, but ties are on one side only. For this problem we give a simple, linear-time 3/2-approximation algorithm, improving on the best known approximation factor of 5/3 of Irving and Manlove (J. Comb. Optim., doi:10.1007/s10878-007-9133-x, 2007). Next, we show how this extends to the Hospitals/Residents problem with the same ratio if the residents have strict orders. We also give a simple linear-time algorithm for the general problem with approximation factor 5/3, improving on the best known 15/8-approximation algorithm of Iwama, Miyazaki and Yamauchi (SODA ’07: Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 288–297, 2007). For the cases considered in this paper it is NP-hard to approximate within a factor of 21/19 by the result of Halldórsson et al. (ACM Transactions on Algorithms 3(3):30, 2007). Our algorithms not only give better approximation ratios than the cited ones, but are much simpler and run significantly faster. We may also drop a restriction used in Irving and Manlove (J. Comb. Optim., doi:10.1007/s10878-007-9133-x, 2007), and the analysis is substantially simpler. Preliminary versions of this paper appeared in (Király, Egres Technical Report TR-2008-04, www.cs.elte.hu/egres/, 2008; Király in Proceedings of MATCH-UP 2008: Matching Under Preferences–Algorithms and Complexity, Satellite Workshop of ICALP, July 6, 2008, Reykjavík, Iceland, pp. 36–45, 2008; Király in ESA 2008, Lecture Notes in Computer Science, vol. 5193, pp. 623–634, 2008). For related results obtained since then, see Sect. 5.
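For orientation, the classical deferred-acceptance algorithm that these approximation results build on: with complete strict preference lists it finds a stable matching in linear time, and the entire difficulty addressed in the paper comes from ties and incomplete lists. A textbook sketch (not Király's approximation algorithm):

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance with complete strict preference lists.
    men_prefs[m] / women_prefs[w] are preference-ordered lists."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}                              # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]      # best woman not yet proposed to
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])           # she trades up; old partner freed
            engaged[w] = m
        else:
            free.append(m)                    # rejected; will try next choice
    return engaged

prefs_m = {"a": ["x", "y"], "b": ["x", "y"]}
prefs_w = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(prefs_m, prefs_w))         # {'x': 'b', 'y': 'a'}
```

With ties broken arbitrarily this still outputs a weakly stable matching, but its size can be far from maximum; the paper's 3/2- and 5/3-approximations control exactly that loss.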
