Similar Documents
20 similar documents found.
1.
This work contrasts Giovanni Sartor’s view of the inferential semantics of legal concepts (Sartor in Artif Intell Law 17:217–251, 2009) with a probabilistic model of theory formation (Kemp et al. in Cognition 114:165–196, 2010). It further explores how Kemp’s probabilistic model of theory formation can be implemented for mapping legal concepts between two individual legal systems. To implement this legal concept mapping, we propose a cross-categorization approach that combines three mathematical models: the Bayesian Model of Generalization (BMG; Tenenbaum and Griffiths in Behav Brain Sci 4:629–640, 2001); the probabilistic model of theory formation, i.e., the Infinite Relational Model (IRM), first introduced by Kemp et al. (The twenty-first national conference on artificial intelligence, 2006; Cognition 114:165–196, 2010); and its extension, the normal-IRM (n-IRM), proposed by Herlau et al. (IEEE International Workshop on Machine Learning for Signal Processing, 2012). We apply our cross-categorization approach to datasets in which legal concepts related to educational systems are defined by the Japanese and the Danish authorities, respectively, according to the International Standard Classification of Education. The main contribution of this work is a conceptual framework for the cross-categorization approach that, inspired by Sartor (Artif Intell Law 17:217–251, 2009), attempts to explain a reasoner’s inferential mechanisms.
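To make the generalization component concrete, here is a minimal sketch of the Bayesian Model of Generalization in the spirit of Tenenbaum and Griffiths (2001); the toy hypothesis space, uniform prior, and example items are our own illustrative assumptions, not data from the paper:

    # Minimal BMG sketch: hypotheses are candidate concepts (sets of items);
    # the "size principle" likelihood P(data | h) = (1/|h|)^n favors small
    # consistent hypotheses. All names and data here are illustrative.
    from itertools import combinations

    items = ["primary", "lower_secondary", "upper_secondary", "bachelor"]
    hypotheses = [set(c) for r in range(1, len(items) + 1)
                  for c in combinations(items, r)]

    def posterior(observed, hypotheses):
        n = len(observed)
        w = [(1.0 / len(h)) ** n if observed <= h else 0.0 for h in hypotheses]
        z = sum(w)
        return [x / z for x in w]

    def p_generalize(y, observed, hypotheses):
        # P(y belongs to the concept | observed), averaged over hypotheses.
        post = posterior(observed, hypotheses)
        return sum(p for h, p in zip(hypotheses, post) if y in h)

    obs = {"primary", "lower_secondary"}
    print(p_generalize("upper_secondary", obs, hypotheses))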

2.
The last decade has seen an explosion in the number of people learning English as a second language (ESL). In China alone, the number is estimated to be over 300 million (Yang in Engl Today 22, 2006). Even in predominantly English-speaking countries, the proportion of non-native speakers can be very substantial. For example, the US National Center for Educational Statistics reported that nearly 10% of the students in the US public school population speak a language other than English and have limited English proficiency (National Center for Educational Statistics (NCES) in Public school student counts, staff, and graduate counts by state: school year 2000–2001, 2002). As a result, the last few years have seen a rapid increase in the development of NLP tools to detect and correct grammatical errors so that appropriate feedback can be given to ESL writers, a large and growing segment of the world’s population. As a byproduct of this surge in interest, there have been many NLP research papers on the topic, a Synthesis Series book (Leacock et al. in Automated grammatical error detection for language learners. Synthesis lectures on human language technologies. Morgan & Claypool, Waterloo, 2010), a recurring workshop (Tetreault et al. in Proceedings of the NAACL workshop on innovative use of NLP for building educational applications (BEA), 2012), and a shared task competition (Dale et al. in Proceedings of the seventh workshop on building educational applications using NLP (BEA), pp 54–62, 2012; Dale and Kilgarriff in Proceedings of the European workshop on natural language generation (ENLG), pp 242–249, 2011). Despite this growing body of work, several issues affecting annotation for, and evaluation of, ESL error detection systems have received little attention. In this paper, we describe these issues in detail and present our research on alleviating their effects.

3.
X-ray computed tomography (CT) plays an important role in cancer diagnosis and radiotherapy. However, the high imaging dose delivered to healthy organs during CT scans is a serious clinical concern. Imaging dose in CT scans can be reduced by reducing the number of X-ray projections. In this paper, we consider 2D CT reconstruction using a very small number of projections. Some regularization-based reconstruction methods have already been proposed in the literature for this task, such as total variation (TV) based reconstruction (Sidky and Pan in Phys. Med. Biol. 53:4777, 2008; Sidky et al. in J. X-Ray Sci. Technol. 14(2):119–139, 2006; Jia et al. in Med. Phys. 37:1757, 2010; Choi et al. in Med. Phys. 37:5113, 2010) and the balanced approach with wavelet frame based regularization (Jia et al. in Phys. Med. Biol. 56:3787–3807, 2011). For most of the existing methods, at least 40 projections are usually needed to obtain a satisfactory reconstruction. To keep the radiation dose as low as possible while increasing the quality of the reconstructed images, one needs to enhance the resolution of the projected image in the Radon domain without increasing the total number of projections. The goal of this paper is to propose a CT reconstruction model with wavelet frame based regularization and Radon domain inpainting. The proposed model simultaneously reconstructs a high-quality image and its corresponding high-resolution measurements in the Radon domain. In addition, we discovered that using the isotropic wavelet frame regularization proposed in Cai et al. (Image restorations: total variation, wavelet frames and beyond, 2011, preprint) is superior to using its anisotropic counterpart. Our proposed model, as well as the other models presented in this paper, is solved rather efficiently by the split Bregman algorithm (Goldstein and Osher in SIAM J. Imaging Sci. 2(2):323–343, 2009; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009). Numerical simulations and comparisons are presented at the end.
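For readers unfamiliar with the solver used throughout, the split Bregman iteration for a generic $\ell_1$-regularized problem $\min_u \|Wu\|_1 + \frac{\mu}{2}\|Au - f\|_2^2$ takes the following standard form (generic notation, not the paper’s specific model):

$$u^{k+1} = \arg\min_u\ \frac{\mu}{2}\|Au - f\|_2^2 + \frac{\lambda}{2}\|Wu - d^k + b^k\|_2^2,$$
$$d^{k+1} = \mathrm{shrink}\bigl(Wu^{k+1} + b^k,\ 1/\lambda\bigr), \qquad b^{k+1} = b^k + Wu^{k+1} - d^{k+1},$$

where $\mathrm{shrink}(x,\gamma) = \mathrm{sign}(x)\max(|x| - \gamma, 0)$ acts componentwise.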

4.
The class ${\mathcal{SLUR}}$ (Single Lookahead Unit Resolution) was introduced in Schlipf et al. (Inf Process Lett 54:133–137, 1995) as an umbrella class for efficient (poly-time) SAT solving, with linear-time SAT decision, while the recognition problem was not considered. Čepek et al. (2012) and Balyo et al. (2012) extended this class in various ways to hierarchies covering all of CNF (all clause-sets). We introduce a hierarchy ${\mathcal{SLUR}}_k$ which we argue is the natural “limit” of such approaches. The second source for our investigations is the class ${\mathcal{UC}}$ of unit-refutation complete clause-sets, introduced in del Val (1994) as a target class for knowledge compilation. Via the theory of “hardness” of clause-sets as developed in Kullmann (1999), Kullmann (Ann Math Artif Intell 40(3–4):303–352, 2004) and Ansótegui et al. (2008) we obtain a natural generalisation ${\mathcal{UC}}_k$, containing those clause-sets which are “unit-refutation complete of level k”, which is the same as having hardness at most k. Utilising the strong connections to (tree-)resolution complexity and (nested) input resolution, we develop basic methods for the determination of hardness (the level k in ${\mathcal{UC}}_k$). A fundamental insight now is that ${\mathcal{SLUR}}_k = {\mathcal{UC}}_k$ holds for all k. We can thus exploit both streams of intuitions and methods in the investigation of these hierarchies. As an application we can easily show that the hierarchies from Čepek et al. (2012) and Balyo et al. (2012) are strongly subsumed by ${\mathcal{SLUR}}_k$. Finally we consider the problem of “irredundant” clause-sets in ${\mathcal{UC}}_k$. For 2-CNF we show that strong minimisations are possible in polynomial time, while already for (very special) Horn clause-sets minimisation is NP-complete. We conclude with an extensive discussion of open problems and future directions. We envisage the concepts investigated here to be the starting point for a theory of good SAT translations, which combines the good SAT-solving aspects of ${\mathcal{SLUR}}$ with the knowledge-representation aspects of ${\mathcal{UC}}$, and expands this combination via notions of “hardness”.
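Since unit resolution is the engine behind both ${\mathcal{SLUR}}$ and ${\mathcal{UC}}$, a minimal sketch of unit-clause propagation on a clause-set may help (the representation is our own illustration of the basic mechanism, not the SLUR algorithm itself):

    # Unit-clause propagation on a CNF clause-set. Clauses are sets of
    # nonzero integers; -v denotes the negation of variable v. This
    # illustrates the basic mechanism only, not the SLUR algorithm.
    def unit_propagate(clauses):
        clauses = [set(c) for c in clauses]
        assignment = set()
        while True:
            units = [next(iter(c)) for c in clauses if len(c) == 1]
            units = [l for l in units if l not in assignment]
            if not units:
                return assignment, clauses      # fixed point reached
            for lit in units:
                if -lit in assignment:
                    return None, None           # conflict: unit refutation
                assignment.add(lit)
                # drop satisfied clauses, shrink clauses containing -lit
                clauses = [c - {-lit} for c in clauses if lit not in c]
                if any(len(c) == 0 for c in clauses):
                    return None, None           # empty clause derived

    # {x}, {-x or y}, {-y or z} propagates to the assignment {x, y, z}.
    print(unit_propagate([{1}, {-1, 2}, {-2, 3}]))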

5.
6.
This paper analyzes the role of common data problems when identifying structural breaks in small samples. Most notably, we survey the small sample properties of the most commonly applied endogenous break tests, developed by Brown et al. (J R Stat Soc B 37:149–163, 1975) and Zeileis (Stat Pap 45(1):123–131, 2004), Nyblom (J Am Stat Assoc 84(405):223–230, 1989) and Hansen (J Policy Model 14(4):517–533, 1992), and Andrews et al. (J Econ 70(1):9–38, 1996). Power and size properties are derived using Monte Carlo simulations. We find that the Nyblom test is on par with the commonly used F-type tests in small samples in terms of power. While the Nyblom test’s power decreases if the structural break occurs close to the margin of the sample, it proves far more robust to nonnormal distributions of the error term, which are found to matter strongly in small samples although being asymptotically irrelevant for all tests analyzed in this paper.
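As a concrete illustration of how such power properties are obtained, a Monte Carlo sketch for a Chow-type sup-F test of a mean break in a small sample might look as follows; the data-generating process, trimming, and critical value are our illustrative assumptions, not the paper’s exact setup:

    # Monte Carlo power of a sup-F test for a break in the mean.
    # DGP, trimming, and critical value are illustrative assumptions.
    import numpy as np

    def sup_f(y):
        n = len(y)
        ssr_full = np.sum((y - y.mean()) ** 2)
        stats = []
        for k in range(int(0.15 * n), int(0.85 * n)):   # 15% trimming
            ssr_split = (np.sum((y[:k] - y[:k].mean()) ** 2)
                         + np.sum((y[k:] - y[k:].mean()) ** 2))
            stats.append((ssr_full - ssr_split) / (ssr_split / (n - 2)))
        return max(stats)

    rng = np.random.default_rng(0)
    n, reps, crit = 50, 2000, 8.85      # crit: illustrative 5% value
    rejections = 0
    for _ in range(reps):
        y = rng.standard_normal(n)      # try rng.standard_t(3, n) for
        y[n // 2:] += 0.8               # nonnormal errors; break at mid-sample
        rejections += sup_f(y) > crit
    print("empirical power:", rejections / reps)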

7.
Combining the block transmission in Long and Liu (Phys Rev A 65:032302, 2002) and the double operations in Lin et al. (Opt Commun 282:4455, 2009), we propose a secure multiparty quantum secret sharing protocol with a collective eavesdropping-check character. In this protocol, only the boss needs to prepare Bell states and perform Bell state measurements, while all agents only perform local operations, which makes the protocol more feasible with current technology. Incidentally, we show that the other half of the secret messages in the Lin et al. protocol (Opt Commun 282:4455, 2009) can also be eavesdropped.
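For reference, the four Bell states that the boss prepares and measures are the standard maximally entangled two-qubit states

$$|\phi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\,(|00\rangle \pm |11\rangle), \qquad |\psi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\,(|01\rangle \pm |10\rangle).$$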

8.
9.
In this paper, inspired by the types of $BL$-algebra filters (deductive systems) introduced in Haveshki et al. (Soft Comput 10:657–664, 2006), Kondo and Dudek (Soft Comput 12:419–423, 2008) and Turunen (Arch Math Log 40:467–473, 2001), we define residuated lattice versions of them and study them in connection with Van Gasse et al. (Inf Sci 180(16):3006–3020, 2010), Lianzhen and Kaitai (Inf Sci 177:5725–5738, 2007), and Zhu and Xu (Inf Sci 180:3614–3632, 2010). We also consider relations between these filters and the quotient residuated lattices constructed via them.
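For orientation, the base notion being generalized is standard (quoted as textbook context, not from the paper): a nonempty subset $F$ of a residuated lattice $L$ is a filter (deductive system) if

$$1 \in F, \qquad x \in F \ \text{and}\ x \rightarrow y \in F \ \Longrightarrow\ y \in F,$$

equivalently, $x, y \in F \Rightarrow x \odot y \in F$ and $F$ is upward closed; the quotient is then built from the congruence $x \sim_F y \iff x \rightarrow y \in F$ and $y \rightarrow x \in F$.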

10.
Teachers and students face many challenges in shifting from traditional classroom cultures to enacting the Knowledge-Building Communities model (KBC model) supported by the CSCL environment, Knowledge Forum (Bereiter, 2002; Bereiter & Scardamalia, 1993; Scardamalia, 2002; Scardamalia & Bereiter, 2006). Enacting the model involves socializing students into knowledge work, similar to disciplinary communities. A useful construct in the field of the Learning Sciences for understanding knowledge work is “epistemic games” (Collins & Ferguson, 1993; Morrison & Collins, 1995; Perkins, 1997). We propose that a powerful means for supporting classroom enactments of the KBC model entails conceptualizing Knowledge Forum as a collective space for playing multi-player epistemic games. Participation in knowledge-building communities is then scaffolded through learning the moves of such games. We have designed scaffolding tools that highlight particular knowledge-building moves for practice and reflection as a means of supporting students and teachers in coming to understand how to collectively work toward the progressive improvement of ideas. In order to examine our design theories in practice, we present research on Ideas First, a design-based research program involving enactments of the KBC model in Singaporean primary science classrooms (Bielaczyc & Ow, 2007, 2010; Ow & Bielaczyc, 2007, 2008).

11.
Reachability and shortest path problems are NL-complete for general graphs. They are known to be in L for graphs of tree-width 2 (Jakoby and Tantau in Proceedings of FSTTCS’07: The 27th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 216–227, 2007). In this paper, we improve these bounds for k-trees, where k is a constant. In particular, the main results of our paper are log-space algorithms for reachability in directed k-trees, and for computation of shortest and longest paths in directed acyclic k-trees. Besides the path problems mentioned above, we also consider the problem of deciding whether a k-tree has a perfect matching (decision version) and, if so, finding a perfect matching (search version), and prove that these two problems are L-complete. These problems are known to be in P and in RNC for general graphs, and in SPL for planar bipartite graphs, as shown in Datta et al. (Theory Comput. Syst. 47:737–757, 2010). Our results settle the complexity of these problems for the class of k-trees. The results are also applicable to bounded tree-width graphs, when a tree decomposition is given as input. The technique central to our algorithms is a careful implementation of the divide-and-conquer approach in log-space, along with some ideas from Jakoby and Tantau (Proceedings of FSTTCS’07: The 27th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 216–227, 2007) and Limaye et al. (Theory Comput. Syst. 46(3):499–522, 2010).

12.
Matthias Möller, Computing 95(5):425–448, 2013
This paper is concerned with the extension of the algebraic flux-correction (AFC) approach (Kuzmin in Computational fluid and solid mechanics, Elsevier, Amsterdam, pp 887–888, 2001; J Comput Phys 219:513–531, 2006; Comput Appl Math 218:79–87, 2008; J Comput Phys 228:2517–2534, 2009; Flux-corrected transport: principles, algorithms, and applications, 2nd edn. Springer, Berlin, pp 145–192, 2012; J Comput Appl Math 236:2317–2337, 2012; Kuzmin et al. in Comput Methods Appl Mech Eng 193:4915–4946, 2004; Int J Numer Methods Fluids 42:265–295, 2003; Kuzmin and Möller in Flux-corrected transport: principles, algorithms, and applications. Springer, Berlin, 2005; Kuzmin and Turek in J Comput Phys 175:525–558, 2002; J Comput Phys 198:131–158, 2004) to nonconforming finite element methods for the linear transport equation. Accurate nonoscillatory approximations to convection-dominated flows are obtained by stabilizing the continuous Galerkin method with solution-dependent artificial diffusion, whose magnitude is controlled by a flux limiter. This concept dates back to flux-corrected transport schemes. The unique feature of AFC is that all information is extracted from the system matrices, which are manipulated to satisfy certain mathematical constraints. AFC schemes have been devised with conforming $P_1$ and $Q_1$ finite elements in mind, but this is not a prerequisite. Here, we consider their extension to the nonconforming Crouzeix–Raviart element (Crouzeix and Raviart in RAIRO R3 7:33–76, 1973) on triangular meshes and its quadrilateral counterpart, the class of rotated bilinear Rannacher–Turek elements (Rannacher and Turek in Numer Methods PDEs 8:97–111, 1992). The underlying design principles of AFC schemes are shown to hold for (some variant of) both elements. However, numerical tests for a purely convective flow and a convection–diffusion problem demonstrate that flux-corrected solutions are overdiffusive for the Crouzeix–Raviart element. Good resolution of smooth and discontinuous profiles is achieved with $Q_1^\mathrm{nc}$ approximations on quadrilateral meshes. A synthetic benchmark is used to quantify the artificial diffusion present in conforming and nonconforming high-resolution schemes of AFC type. Finally, the implementation of efficient sparse matrix–vector multiplications is addressed.
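The algebraic manipulation alluded to above can be summarized in generic notation (the standard AFC construction; not specific to the nonconforming elements studied here): given the Galerkin transport operator $K = (k_{ij})$, a discrete diffusion operator $D = (d_{ij})$ with

$$d_{ij} = d_{ji} = \max\{-k_{ij},\, 0,\, -k_{ji}\}, \qquad d_{ii} = -\sum_{j \neq i} d_{ij}$$

renders $K + D$ local extremum diminishing; the removed antidiffusive fluxes $f_{ij} = d_{ij}(u_i - u_j)$ are then reinserted in limited form $\bar{f}_{ij} = \alpha_{ij} f_{ij}$ with a solution-dependent limiter $\alpha_{ij} \in [0,1]$.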

13.
We extend the result of Zhang et al. (J Fuzzy Math 14:53, 2006), who discussed finite fuzzy relation equations with max–min and max–prod composition. In this article, the $\text{max-}*$ composition is used for a wide family of operations $*$. In particular, the families of solutions of two relation equations are compared.
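A minimal sketch of the classical max–min case may help fix ideas: the candidate greatest solution of $A \circ X = B$ is built from the Gödel implication, and the equation is solvable iff this candidate actually satisfies it (a classical construction going back to Sanchez; the test data are our own):

    # Greatest solution of the max-min relation equation A ∘ X = B.
    import numpy as np

    def maxmin(A, X):
        # (A ∘ X)[i, k] = max_j min(A[i, j], X[j, k])
        return np.max(np.minimum(A[:, :, None], X[None, :, :]), axis=1)

    def goedel(a, b):
        # Gödel implication: a -> b = 1 if a <= b else b
        return np.where(a <= b, 1.0, b)

    def greatest_solution(A, B):
        # X[j, k] = min_i goedel(A[i, j], B[i, k]); solvable iff A ∘ X = B
        X = np.min(goedel(A[:, :, None], B[:, None, :]), axis=0)
        return X, np.allclose(maxmin(A, X), B)

    A = np.array([[0.7, 0.4], [0.2, 0.9]])
    B = maxmin(A, np.array([[0.6, 0.3], [0.5, 0.8]]))   # solvable by design
    X, ok = greatest_solution(A, B)
    print(ok, X)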

14.
In this document, we present an alternative to the method introduced by Ebner (Pattern Recognit 60–67, 2003; J Parallel Distrib Comput 64(1):79–88, 2004; Color constancy using local color shifts, pp 276–287, 2004; Color Constancy, 2007; Mach Vis Appl 20(5):283–301, 2009) for computing the local space average color. We show that when the problem is framed as a linear system and the resulting series is solved, a solution based on LU decomposition reduces the computing time by at least an order of magnitude.
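To illustrate what framing the computation as a linear system can look like, here is a one-channel sketch: if the local average $a$ satisfies the fixed-point relation $a = (1-p)\,Na + p\,c$, with $N$ the 4-neighbor averaging operator and $c$ the input channel, then $(I - (1-p)N)\,a = p\,c$ can be solved once by sparse LU. The fixed-point form and the parameter $p$ are our assumptions modeled on Ebner’s iterative scheme, not his exact formulation:

    # Local space average (one channel) via a sparse system and LU solve.
    # The fixed-point equation a = (1 - p) * N a + p * c is an assumption
    # modeled on Ebner's iterative scheme, not his exact formulation.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    def local_average(channel, p=0.01):
        h, w = channel.shape
        n = h * w
        idx = np.arange(n).reshape(h, w)
        rows, cols = [], []
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-neighbors
            src = idx[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            dst = idx[max(-dy, 0):h - max(dy, 0), max(-dx, 0):w - max(dx, 0)]
            rows.append(src.ravel()); cols.append(dst.ravel())
        rows, cols = np.concatenate(rows), np.concatenate(cols)
        N = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
        deg = np.asarray(N.sum(axis=1)).ravel()
        N = sp.diags(1.0 / deg) @ N                         # row-normalize
        A = sp.eye(n) - (1.0 - p) * N                       # diagonally dominant
        return splu(A.tocsc()).solve(p * channel.ravel()).reshape(h, w)

    print(local_average(np.random.rand(64, 64)).shape)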

15.
In this paper, decision variables for the key-frame detection problem in a video are evaluated using statistical tools derived from the theory of design of experiments. The pixel-by-pixel intensity difference of consecutive video frames is used as the factor, or decision variable, for designing an experiment for key-frame detection. The determination of a key-frame is correlated with the different values of this factor. A novel concept of the meaningfulness of a video key-frame is also introduced to select the representative key-frame from a set of possible key-frames. The use of the concepts of design of experiments and the meaningfulness property to summarize a video is tested on a number of videos taken from the MUSCLE-VCD-2007 dataset. The performance of the proposed approach in detecting key-frames is found to be superior to competing approaches such as the PME-based methods (Liu et al., IEEE Trans Circuits Syst Video Technol 13(10):1006–1013, 2003; Mukherjee et al., IEEE Trans Circuits Syst Video Technol 17(5):612–620, 2007; Panagiotakis et al., IEEE Trans Circuits Syst Video Technol 19(3):447–451, 2009).
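Here is a minimal sketch of the underlying decision variable, the pixel-by-pixel intensity difference of consecutive frames, thresholded to flag candidate key-frames; the mean-plus-$k\sigma$ threshold is our illustrative assumption, not the paper’s design-of-experiments procedure:

    # Frame-difference decision variable for key-frame detection.
    # The mean + k*sigma threshold is an illustrative assumption.
    import numpy as np

    def candidate_keyframes(frames, k=2.0):
        # frames: (T, H, W) grayscale intensities
        d = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
        thresh = d.mean() + k * d.std()
        return np.flatnonzero(d > thresh) + 1   # flagged frame indices

    frames = np.random.rand(100, 48, 64)
    frames[40:] += 0.5                          # synthetic shot change at t=40
    print(candidate_keyframes(frames))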

16.
Wireless sensor networks (WSNs), a commercially important class of wireless mesh networks (WMNs), are envisioned to provide an effective solution for sensor-based AmI (Ambient Intelligence) systems and applications. To enable seamless communication between AmI sensor networks and the most popular TCP/IP networks, the best solution model is to run TCP/IP directly on WSNs (Mulligan et al. 2009; Hui and Culler 2008; Han and Mam 2007; Kim et al. 2007; Xiaohua et al. 2004; Dunkels 2001; Dunkels et al. 2004). In this case, an IP assignment method is required to assign each sensor node a unique IP address. SIPA (Dunkels et al. 2004) is the best known IP assignment method; it uses the spatial relations and locations of sensor nodes to assign their IP addresses. It has been applied in Contiki (Dunkels et al. 2004), a well-known WSN operating system, to support the 6LoWPAN protocol. In Chang et al. (2009), we proposed the SLIPA (Scan-Line IP Assignment) algorithm to improve the assignment success rate (ASR) obtained by SIPA. SLIPA achieves a good ASR when sensor nodes are uniformly distributed; however, if sensor nodes are deployed with other distributions, the improvement is limited. This paper proposes a new spatial IP assignment method, called SLIPA-Q (SLIPA with equal-quantity partition), to improve SLIPA. Experiments show that, testing the proposed method 1,000 times with 1,000 randomly deployed sensor nodes, the average ASR obtained by SLIPA-Q is over twice that obtained by SLIPA. Under the same 88% ASR, the average numbers of sensor nodes that can be successfully assigned by SLIPA-Q, SLIPA, and SIPA are 950, 850, and 135, respectively. Compared to previous spatial IP assignment methods, SLIPA-Q achieves dramatic improvements in ASR when assigning IP addresses to a large set of sensor nodes.
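The abstract does not spell out SLIPA-Q, but the idea of an equal-quantity scan-line partition can be illustrated as follows; this is entirely our own reconstruction of the concept, not the published algorithm:

    # Illustrative reconstruction of an equal-quantity scan-line partition:
    # sort nodes along x, cut into equally sized groups, assign sequential
    # host addresses within each group ordered by y. Our own sketch only.
    import numpy as np

    def assign_addresses(positions, groups=4, base="10.0"):
        order = np.argsort(positions[:, 0])                 # scan along x
        assignment = {}
        for g, chunk in enumerate(np.array_split(order, groups)):
            chunk = chunk[np.argsort(positions[chunk, 1])]  # order by y
            for host, node in enumerate(chunk, start=1):
                assignment[int(node)] = f"{base}.{g}.{host}"
        return assignment

    nodes = np.random.rand(20, 2) * 100     # 20 nodes in a 100 x 100 field
    print(assign_addresses(nodes))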

17.
We develop a novel and simple theoretical model of time-interleaved sequential lamination micromixers that improves on the model proposed by Nguyen and coworkers (Microfluid Nanofluid 1:373–375, 2005a; Lab Chip 5:1320–1326, 2005b; J Phys Conf Ser 34:136–141, 2006) based on Taylor–Aris dispersion theory. The Nguyen model takes into account the nonuniform structure of the velocity profile through an effective dispersion coefficient. However, it is essentially a one-dimensional model that is not suitable to describe (i) the behavior of mixing occurring at short length scales, characterized by the growth of a mixing boundary layer near the channel walls, or (ii) the exponential decay of the concentration field occurring at larger length scales. The model we propose, which is based upon the theory of imaginary potential developed by Giona et al. (J Fluid Mech 513:221–237, 2004; Europhys Lett 83:34001, 2008; J Fluid Mech 639:291–341, 2009a), is able to provide quantitative predictions for the evolution of the $L^2$-norm of the concentration field as a function of the axial coordinate ξ, both for short and asymptotic length scales. A quantitative comparison with the Nguyen model is illustrated and discussed. Finally, the coupling between parallel lamination and sequential segmentation is analyzed, leading to unexpected and apparently counter-intuitive findings.
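For orientation, the Taylor–Aris theory that the Nguyen model builds on replaces the molecular diffusivity $D$ with an effective axial dispersion coefficient; for pressure-driven flow in a circular tube of radius $a$ with mean velocity $\bar{u}$, the classical long-time result reads

$$D_{\mathrm{eff}} = D\left(1 + \frac{\mathrm{Pe}^2}{48}\right), \qquad \mathrm{Pe} = \frac{\bar{u}\,a}{D}.$$

(The prefactor depends on the channel cross-section; 48 is specific to the circular tube and is quoted here only as context.)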

18.
In this paper we provide improved approximation algorithms for the Min-Max Tree Cover and Bounded Tree Cover problems. Given a graph G=(V,E) with weights $w:E\rightarrow \mathbb{R}^{+}$, a set $T_1,T_2,\ldots,T_k$ of subtrees of G is called a tree cover of G if $V=\bigcup_{i=1}^{k} V(T_{i})$. In the Min-Max k-Tree Cover problem we are given a graph G and a positive integer k, and the goal is to find a tree cover with k trees such that the weight of the largest tree in the cover is minimized. We present a 3-approximation algorithm for this problem, improving the two different 4-approximation algorithms presented in Arkin et al. (J. Algorithms 59:1–18, 2006) and Even et al. (Oper. Res. Lett. 32(4):309–315, 2004). The problem is known to have an APX-hardness lower bound of $\frac{3}{2}$ (Xu and Wen in Oper. Res. Lett. 38:169–173, 2010). In the Bounded Tree Cover problem we are given a graph G and a bound λ, and the goal is to find a tree cover with the minimum number of trees such that each tree has weight at most λ. We present a 2.5-approximation algorithm for this problem, improving the 3-approximation bound in Arkin et al. (J. Algorithms 59:1–18, 2006).

19.
The stochastic collocation method (Babuška et al. in SIAM J Numer Anal 45(3):1005–1034, 2007; Nobile et al. in SIAM J Numer Anal 46(5):2411–2442, 2008a; SIAM J Numer Anal 46(5):2309–2345, 2008b; Xiu and Hesthaven in SIAM J Sci Comput 27(3):1118–1139, 2005) has recently been applied to stochastic problems that can be transformed into parametric systems. Meanwhile, the reduced basis method (Maday et al. in Comptes Rendus Mathematique 335(3):289–294, 2002; Patera and Rozza in Reduced basis approximation and a posteriori error estimation for parametrized partial differential equations Version 1.0. Copyright MIT, http://augustine.mit.edu, 2007; Rozza et al. in Arch Comput Methods Eng 15(3):229–275, 2008), primarily developed for solving parametric systems, has recently been used to deal with stochastic problems (Boyaval et al. in Comput Methods Appl Mech Eng 198(41–44):3187–3206, 2009; Arch Comput Methods Eng 17:435–454, 2010). In this work, we aim at comparing the performance of the two methods when applied to the solution of linear stochastic elliptic problems. Two important comparison criteria are considered: (1) convergence of the approximation error; (2) computational costs for both offline construction and online evaluation. Numerical experiments are performed for problems from low dimensions $O(1)$ to moderate dimensions $O(10)$ and to high dimensions $O(100)$. The main result stemming from our comparison is that the reduced basis method converges better in theory and faster in practice than the stochastic collocation method for smooth problems, and is more suitable for large-scale and high-dimensional stochastic problems when computational costs are taken into account.
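As a concrete instance of the problem class compared here, the model linear stochastic elliptic problem typically reads (a standard formulation, given for orientation): find $u$ such that

$$-\nabla \cdot \bigl(a(x,\omega)\,\nabla u(x,\omega)\bigr) = f(x) \ \text{in } D, \qquad u = 0 \ \text{on } \partial D,$$

where the random coefficient is parametrized, e.g., by a truncated Karhunen–Loève expansion $a(x,\omega) \approx a_0(x) + \sum_{i=1}^{N} \sqrt{\lambda_i}\,\phi_i(x)\,y_i(\omega)$, so that the dimension $N$ of the parameter space drives the comparison.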

20.
We present several variants of the sunflower conjecture of Erdős & Rado (J Lond Math Soc 35:85–90, 1960) and discuss the relations among them. We then show that two of these conjectures (if true) imply negative answers to the questions of Coppersmith & Winograd (J Symb Comput 9:251–280, 1990) and Cohn et al. (2005) regarding possible approaches for obtaining fast matrix-multiplication algorithms. Specifically, we show that the Erdős–Rado sunflower conjecture (if true) implies a negative answer to the “no three disjoint equivoluminous subsets” question of Coppersmith & Winograd (J Symb Comput 9:251–280, 1990); we also formulate a “multicolored” sunflower conjecture in ${\mathbb{Z}_3^n}$ and show that (if true) it implies a negative answer to the “strong USP” conjecture of Cohn et al. (2005) (although it does not seem to impact a second conjecture in Cohn et al. (2005) or the viability of the general group-theoretic approach). A surprising consequence of our results is that the Coppersmith–Winograd conjecture actually implies the Cohn et al. conjecture. The multicolored sunflower conjecture in ${\mathbb{Z}_3^n}$ is a strengthening of the well-known (ordinary) sunflower conjecture in ${\mathbb{Z}_3^n}$, and we show via our connection that a construction from Cohn et al. (2005) yields a lower bound of $(2.51\ldots)^n$ on the size of the largest multicolored 3-sunflower-free set, which beats the current best-known lower bound of $(2.21\ldots)^n$ (Edel 2004) on the size of the largest 3-sunflower-free set in ${\mathbb{Z}_3^n}$.
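For reference, a $k$-sunflower is a family of $k$ sets whose pairwise intersections are all equal to a common core; the classical Erdős–Rado bound and the conjecture at issue read (standard statements, quoted for orientation):

$$\text{every family of more than } p!\,(k-1)^p \text{ sets, each of size } p, \text{ contains a } k\text{-sunflower},$$

and the sunflower conjecture asserts that already $c(k)^p$ sets suffice, for a constant $c(k)$ depending only on $k$.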
