Similar Documents
20 similar documents found (search time: 31 ms)
1.
The present work initiates the study of the learnability of automatic indexable classes, which are classes of regular languages of a certain form. Angluin's tell-tale condition characterises when these classes are explanatorily learnable. Therefore, the more interesting question is when learnability holds for learners with complexity bounds, formulated in the automata-theoretic setting. The learners in question work iteratively, in some cases with an additional long-term memory, where the update function of the learner, mapping old hypothesis, old memory and current datum to new hypothesis and new memory, is automatic. Furthermore, the dependence of learnability on the indexing is also investigated. This work brings together the fields of inductive inference and automatic structures.
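To make the iterative learner interface concrete, here is a minimal Python sketch. It is illustrative only, not from the paper: the target family L_k = { a^n : n >= k } and the update rule are invented for this example, whereas real automatic learners compute the update with a finite automaton.

```python
def update(hypothesis, memory, datum):
    """Map (old hypothesis, old memory, current datum) to (new hypothesis, new memory)."""
    k = len(datum)                           # datum is a positive example a^n
    best = k if memory is None else min(memory, k)
    return f"L_{best}", best                 # conjecture L_best; memory stores best

hyp, mem = None, None
for datum in ["aaaa", "aa", "aaa"]:          # positive data for the target L_2
    hyp, mem = update(hyp, mem, datum)
print(hyp)                                   # -> L_2
```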

2.
The important challenge of evaluating XPath queries over XML streams has sparked much interest in the past few years. A number of algorithms have been proposed, supporting wider fragments of the query language and exhibiting better performance and memory utilization. Nevertheless, all the algorithms known to date use a prohibitively large amount of memory for certain types of queries. A natural question then is whether this memory bottleneck is inherent or just an artifact of the proposed algorithms. In this paper we initiate the first systematic and theoretical study of lower bounds on the amount of memory required to evaluate XPath queries over XML streams. We present a general lower bound technique which, given a query, specifies the minimum amount of memory that any algorithm evaluating the query on a stream must use. The lower bounds are stated in terms of new graph-theoretic properties of queries. The proofs are based on tools from communication complexity. We then exploit insights learned from the lower bounds to obtain a new algorithm for XPath evaluation on streams. The algorithm uses space close to the optimum. Our algorithm deviates from the standard paradigm of using automata or transducers, thereby avoiding the need to store large transition tables.
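As a toy illustration of table-free streaming evaluation (this is not the paper's algorithm), the following Python sketch evaluates the fixed descendant path //a//b over a stream of open/close tag events, keeping only an ancestor counter and a stack of open tags, i.e., O(document depth) memory:

```python
# Illustrative only -- not the paper's algorithm. Evaluating the fixed
# descendant path //a//b over a stream of (kind, tag) events with memory
# proportional to document depth, and no automaton transition table.
def matches(events):
    inside_a = 0                 # number of currently open <a> ancestors
    stack, hits = [], []
    for kind, tag in events:
        if kind == "open":
            stack.append(tag)
            if tag == "a":
                inside_a += 1
            elif tag == "b" and inside_a > 0:
                hits.append(len(stack))      # depth at which //a//b matched
        else:                                # "close"
            if stack.pop() == "a":
                inside_a -= 1
    return hits

stream = [("open", "a"), ("open", "c"), ("open", "b"),
          ("close", "b"), ("close", "c"), ("close", "a")]
print(matches(stream))                       # -> [3]
```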

3.
In analytical queries, a number of important operators like JOIN and GROUP BY are suitable for parallelization, and the GPU is an ideal accelerator given its parallel computing power. However, when data size grows to hundreds of gigabytes, one GPU card becomes insufficient due to the small capacity of global memory and the slow data transfer between host and device. A straightforward solution is to add more GPUs linked by high-bandwidth connectors, but this greatly increases cost. We utilize unified memory (UM), provided by NVIDIA CUDA (Compute Unified Device Architecture), to make it possible to accelerate large-scale queries on just one GPU, but we notice that the transfer performance between host and UM, which happens before kernel execution, is often significantly below the theoretical bandwidth. An important reason is that, in a single-GPU environment, data processing systems usually invoke only one or a static number of threads for data copy, leading to an inefficient transfer that heavily slows down overall performance. In this paper, we present D-Cubicle, a runtime module to accelerate data transfer between host-managed memory and unified memory. D-Cubicle boosts the actual transfer speed dynamically through a self-adaptive approach. In our experiments, taking data transfer into account, D-Cubicle processes 200 GB of data on a single GPU with 32 GB of global memory, achieving on average 1.43x, and at most 2.09x, the performance of the baseline system.
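The core idea, choosing the degree of copy parallelism per transfer instead of fixing it, can be sketched in plain Python threads. This is illustrative only; D-Cubicle operates on CUDA unified memory, and the chunk-size heuristic below is an assumption made for this sketch.

```python
# Illustrative only (CPython threads, not CUDA): a chunked copy whose degree
# of parallelism is chosen per transfer instead of being fixed -- the kind of
# dynamic tuning D-Cubicle applies to host-to-unified-memory transfers.
from concurrent.futures import ThreadPoolExecutor
import os

def parallel_copy(dst, src, n_threads=None):
    n = len(src)
    n_threads = n_threads or max(1, min(os.cpu_count() or 1, n // (8 << 20)))
    step = (n + n_threads - 1) // n_threads
    dview, sview = memoryview(dst), memoryview(src)
    def copy_chunk(i):
        dview[i:i + step] = sview[i:i + step]    # both slices clip identically
    with ThreadPoolExecutor(n_threads) as pool:
        list(pool.map(copy_chunk, range(0, n, step)))  # propagate exceptions

src = bytes(range(256)) * 4096                   # 1 MiB of test data
dst = bytearray(len(src))
parallel_copy(dst, src)
assert bytes(dst) == src
```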

4.
A computational model for learning languages in the limit from full positive data and a bounded number of queries to the teacher (oracle) is introduced and explored. Equivalence, superset, and subset queries are considered (for the latter we also consider a variant in which the learner tests every conjecture, but the number of negative answers is uniformly bounded). If the answer is negative, the teacher may provide a counterexample. We consider several types of counterexamples: arbitrary, least counterexamples, those whose size is bounded by the size of the positive data seen so far, and no counterexamples. A number of hierarchies based on the number of queries (answers) and the types of answers/counterexamples are established. The capabilities of learning with different types of queries are compared. In most cases, one or two queries of one type can sometimes do more than any bounded number of queries of another type. Still, surprisingly, a finite number of subset queries is sufficient to simulate the same number of equivalence queries when behaviourally correct learners do not receive counterexamples and may have an unbounded number of errors in almost all conjectures.

5.
A Business Process (BP for short) consists of a set of activities which, combined in a flow, achieve some business goal. A given BP may have a large, possibly infinite, number of possible execution flows (EX-flows for short), each having some probability of occurring at run time. This paper studies query evaluation over such probabilistic BPs. We focus on two important classes of queries, namely boolean queries that compute the probability that a random EX-flow of a BP satisfies a given property, and projection queries focusing on portions of EX-flows that are of interest to the user. For the latter queries the answer consists of the top-k instances of these portions that are most likely to occur at run time. We study the complexity of query evaluation for both kinds of queries, showing in particular that projection queries may be harder to evaluate than boolean queries. We present a picture of which combinations of BP classes and query features lead to PTIME algorithms and which to NP-hard or infeasible problems.
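A boolean query of the kind described can be pictured on a toy representation. In the hypothetical Python sketch below, the tuple encoding and the probabilities are invented, not the paper's model: a BP alternates activity nodes and probabilistic choice nodes, and the query computes the probability that a random EX-flow contains a given activity.

```python
# Toy encoding (assumed, not the paper's): ("act", name, next_flow) for an
# activity, ("choice", [(p, subflow), ...]) for a probabilistic branch,
# None for flow termination.
def prob_contains(flow, goal):
    if flow is None:
        return 0.0
    if flow[0] == "act":
        _, name, nxt = flow
        return 1.0 if name == goal else prob_contains(nxt, goal)
    _, branches = flow
    return sum(p * prob_contains(sub, goal) for p, sub in branches)

bp = ("choice", [(0.7, ("act", "approve", None)),
                 (0.3, ("act", "reject",
                        ("choice", [(0.5, ("act", "approve", None)),
                                    (0.5, None)])))])
print(prob_contains(bp, "approve"))   # -> 0.7 + 0.3 * 0.5 = 0.85
```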

6.
7.
In this paper we study the question of how many queries are needed to halve a given version space; in other words, how many queries are needed to extract from the learning environment the one bit of information that rules out fifty percent of the concepts which are still candidates for the unknown target concept. We relate this problem to the classical exact learning problem. For instance, we show that lower bounds on the number of queries needed to halve a version space also apply to randomized learners (whereas the classical adversary arguments do not readily apply). Furthermore, we introduce two new combinatorial parameters, the halving dimension and the strong halving dimension, which determine the halving complexity (modulo a small constant factor) for two popular models of query learning: learning by a minimum adequate teacher (equivalence queries combined with membership queries) and learning by counterexamples (equivalence queries alone). These parameters are finally used to characterize the additional power provided by membership queries (compared to the power of equivalence queries alone). All investigations are purely information-theoretic and ignore computational issues.
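For readers who want the version-space intuition in executable form, here is a standard halving-style sketch with membership queries (a generic illustration, not the paper's dimension constructions): each round queries the instance that splits the surviving concepts most evenly, so every answer removes roughly half of them.

```python
def halve(version_space, instances, oracle):
    """Shrink a version space (set of concepts as frozensets of instances)
    with membership queries until a single concept remains."""
    vs = set(version_space)
    queries = 0
    while len(vs) > 1:
        # pick the instance splitting the version space most evenly
        x = min(instances,
                key=lambda i: abs(sum(i in c for c in vs) - len(vs) / 2))
        answer = oracle(x)                       # membership query
        vs = {c for c in vs if (x in c) == answer}
        queries += 1
    return vs.pop(), queries

instances = range(4)
concepts = [frozenset(s) for s in ([0], [0, 1], [2], [1, 2, 3])]
target = frozenset([0, 1])
concept, q = halve(concepts, instances, lambda x: x in target)
print(concept, q)                                # -> frozenset({0, 1}) 2
```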

8.
We consider a system where users wish to find similar users. To model similarity, we assume the existence of a set of queries, and two users are deemed similar if their answers to these queries are (mostly) identical. Technically, each user has a vector of preferences (answers to queries), and two users are similar if their preference vectors differ in only a few coordinates. The preferences are unknown to the system initially, and the goal of the algorithm is to classify the users into classes of roughly the same preferences by asking each user to answer the least possible number of queries. We prove nearly matching lower and upper bounds on the maximal number of queries required to solve the problem. Specifically, we present an "anytime" algorithm that asks each user at most one query in each round, while maintaining a partition of the users. The quality of the partition improves over time: for n users and time T, groups of Õ(n/T) users with the same preferences will be separated (with high probability) if they differ in sufficiently many queries. We present a lower bound that matches the upper bound, up to a constant factor, for nearly all possible distances between user groups.
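The refinement step of such an anytime scheme is easy to sketch (illustrative Python only; the query selection and the probabilistic guarantees of the actual algorithm are omitted): each round poses one query coordinate and splits every current group by the answers.

```python
import random

def refine(partition, answers):
    """Split each group of users by their answer to this round's query."""
    new_parts = []
    for group in partition:
        yes = [u for u in group if answers[u]]
        no = [u for u in group if not answers[u]]
        new_parts += [g for g in (yes, no) if g]
    return new_parts

prefs = {"u1": [1, 1, 0], "u2": [1, 1, 0], "u3": [0, 1, 1]}  # preference vectors
partition = [list(prefs)]
for t in range(3):
    q = random.randrange(3)                       # one query per round
    partition = refine(partition, {u: prefs[u][q] == 1 for u in prefs})
print(partition)   # u1 and u2 stay together; u3 separates once q hits 0 or 2
```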

9.
The effect of learning from 2D and 3D educational content on memory has been studied using electroencephalography (EEG) brain signals. The hypothesis is that 3D materials are better than 2D materials for learning and memory recall. To test the hypothesis, we propose a classification system that predicts true or false recall for short-term memory (STM) and long-term memory (LTM) after learning from either 2D or 3D educational contents. For this purpose, EEG brain signals are recorded during learning and testing; the signals are then analysed in the time domain using different types of features in various frequency bands. The features are fed into a support vector machine (SVM)-based classifier. The experimental results indicate that learning and memory recall using 2D and 3D contents do not differ significantly for either the STM or the LTM.
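The pipeline shape (band-limited features into an SVM) can be sketched as follows; this is a hedged reconstruction on synthetic data, assuming NumPy and scikit-learn, with FFT band power standing in for the paper's actual feature set. Labels here are random, so accuracy is expected to sit near chance.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def band_power(epochs, fs=128):
    """Mean spectral power per epoch in the theta, alpha and beta bands."""
    freqs = np.fft.rfftfreq(epochs.shape[1], 1 / fs)
    psd = np.abs(np.fft.rfft(epochs, axis=1)) ** 2
    bands = [(4, 8), (8, 13), (13, 30)]          # Hz
    return np.column_stack([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                            for lo, hi in bands])

rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 256))         # 120 synthetic 2 s epochs
labels = rng.integers(0, 2, 120)                 # true/false recall
print(cross_val_score(SVC(kernel="rbf"), band_power(epochs), labels, cv=5).mean())
```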

10.
Huang Guangqiu (黄光球), Zhao Yu (赵煜). Journal of Computer Applications (《计算机应用》), 2009, 29(5): 1279-1284
Drawing on the storage, updating, and forgetting mechanisms of biological memory, an intrusion detection model based on biological memory principles is established. In this model, instantaneous memory, which decays and updates quickly and reacts sensitively to external information, is used to detect anomalous data promptly, so that intrusions can be blocked as early as possible; short-term memory and long-term memory adjust their strength according to the frequency of anomalous accesses and convert into each other to achieve the best memory effect. The capacity limit on short-term memory and the automatic update-and-cleanup mechanism of the memory base effectively reduce the resource consumption of the intrusion detection system, leaving more space for storing important information. Practical tests show that the model can track the latest dynamics in real time, selectively update and forget old information via the memory base, and make timely, efficient, and accurate judgments about unknown intrusion behaviours.
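The decay/promotion mechanics translate naturally into a few lines of Python. The sketch below is a loose illustration of the described memory stores; every constant (capacity, promotion threshold, decay, reinforcement) is an assumption, not a value from the paper.

```python
# Assumed constants for illustration only.
SHORT_TERM_CAP, PROMOTE_AT, DECAY, REINFORCE = 3, 5, 1, 3

short_term, long_term = {}, {}

def observe(pattern):
    """Record one anomalous access pattern and update both memory stores."""
    short_term[pattern] = short_term.get(pattern, 0) + REINFORCE
    for p in list(short_term):
        short_term[p] -= DECAY                    # gradual forgetting
        if short_term[p] >= PROMOTE_AT:
            long_term[p] = short_term.pop(p)      # promote to long-term memory
        elif short_term[p] <= 0:
            short_term.pop(p)                     # fully forgotten
    while len(short_term) > SHORT_TERM_CAP:       # capacity-limited store
        short_term.pop(min(short_term, key=short_term.get))

for p in ["scan", "scan", "scan", "probe", "scan", "flood"]:
    observe(p)
print(short_term, long_term)    # the frequent "scan" reaches long-term memory
```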

11.
Because early variable mnemonics were limited to as few as six to eight characters, many early programmers abbreviated concepts in their variable names. The past thirty years have seen a steady increase in permitted name length and, slowly, an increase in actual identifier length. However, in theory names can be too long for programmers to comprehend and manipulate effectively. Most obviously, in object-oriented programs, entity naming often involves chaining of method calls and field selectors (e.g., class.firstAssignment().name.trim()). While longer names bring the potential for better comprehension through more embedded sub-words, there are practical limits to their length given limited human memory resources. The driving hypothesis behind the presented study is that names used in modern programs have reached this limit. Thus, a goal of the study is to better understand the balance between longer, more expressive names and limited programmer memory resources. Statistical models derived from an experiment involving 158 programmers of varying degrees of experience show that longer names extracted from production code take more time to process and reduce correctness in a simple recall activity. This has clear negative implications for any attempt to read, and hence comprehend or manipulate, the source code found in modern software. The experiment also evaluates the advantage of identifiers having probable ties to a programmer's persistent memory. Combined, these results reinforce past proposals advocating the use of a limited, consistent, and regular vocabulary in identifier names. In particular, good naming limits individual name length and reduces the need for specialized vocabulary.

12.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen, or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

13.
Vertical partitioning is a design technique for reducing the number of disk accesses needed to execute a given set of queries by minimizing the number of irrelevant instance variables accessed. This is accomplished by grouping the frequently accessed instance variables into vertical class fragments. The complexity of object-oriented database models, with their subclass and class-composition hierarchies, complicates the definition and representation of vertical partitioning of the classes, which makes the problem of vertical partitioning in OODBs very challenging. In this paper, we develop a comprehensive analytical cost model for the processing of queries on vertically partitioned OODB classes. A set of analytical evaluation results is presented to show the effect of vertical partitioning and to study the trade-off between projection ratio and selectivity factor under sequential versus index access. Furthermore, an empirical experimental prototype supporting vertical class partitioning has been implemented on a commercial OODB tool kit to validate our analytical cost model.
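The flavour of such a cost model can be conveyed with a deliberately simplified Python sketch (the formula below is invented for illustration; the paper's model additionally accounts for subclass and composition hierarchies, selectivities, and index access): a query pays only for fragments containing at least one attribute it touches.

```python
def fragment_cost(fragments, query_attrs, n_objects, attr_size=8, page=4096):
    """Toy estimate of pages scanned by a query against a fragmentation."""
    cost = 0
    for frag in fragments:
        if query_attrs & set(frag):                 # fragment is relevant
            row = len(frag) * attr_size             # bytes per object
            cost += -(-n_objects * row // page)     # ceil: pages scanned
    return cost

attrs = {"id", "name", "salary", "photo"}
q = {"name", "salary"}
unpartitioned = [tuple(attrs)]
partitioned = [("id", "name", "salary"), ("photo",)]
print(fragment_cost(unpartitioned, q, 10_000))   # scans wide rows
print(fragment_cost(partitioned, q, 10_000))     # skips the "photo" fragment
```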

14.
Information and Computation, 2007, 205(10): 1551-1573
U-shaped learning is a learning behaviour in which the learner first learns a given target behaviour, then unlearns it, and finally relearns it. Such a behaviour, observed by psychologists, for example, in the learning of past tenses of English verbs, has been widely discussed among psychologists and cognitive scientists as a fundamental example of the non-monotonicity of learning. Previous theoretical literature has studied whether or not U-shaped learning, in the context of Gold's formal model of learning languages from positive data, is necessary for learning some tasks. It is clear that human learning involves memory limitations. In the present paper we consider, then, the question of the necessity of U-shaped learning for some learning models featuring memory limitations. Our results show that the question of the necessity of U-shaped learning in this memory-limited setting depends on delicate tradeoffs between the learner's ability to remember its own previous conjecture, to store some values in its long-term memory, and to make queries about whether or not items occur in previously seen data, as well as on the learner's choice of hypothesis space.

15.
A variety of methods for querying XML data have been proposed, but these traditional methods cannot fully exploit multiprocessors and multi-core processors. This paper proposes a parallel algorithm for XML queries that substantially improves the efficiency of querying XML data on shared-memory multiprocessor and multi-core systems.
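The decomposition idea can be sketched in a few lines. The Python below is only a schematic stand-in (CPython threads illustrate the per-subtree task split; true multi-core speedups would need processes or a runtime without a global interpreter lock):

```python
from concurrent.futures import ThreadPoolExecutor
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<root><part><b/><c><b/></c></part><part><b/></part><part><c/></part></root>")

def count_tag(subtree, tag="b"):
    """Evaluate the same tag query over one independent subtree."""
    return sum(1 for _ in subtree.iter(tag))

with ThreadPoolExecutor() as pool:               # one task per top-level subtree
    counts = list(pool.map(count_tag, list(doc)))
print(sum(counts))                               # -> 3
```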

16.
17.
This paper studies the complexity of learning classes of expressions in propositional logic from equivalence queries and membership queries. In particular, we focus on bounding the number of queries required to learn a class, ignoring computational complexity. This quantity is known to be captured by a combinatorial measure of concept classes known as the certificate complexity. The paper gives new constructions of polynomial-size certificates for monotone expressions in conjunctive normal form (CNF), for unate CNF functions, where each variable affects the function either positively or negatively but not both ways, and for Horn CNF functions. Lower bounds on certificate size for these classes are derived, showing that for some parameter settings the new certificate constructions are optimal. Finally, the paper gives an exponential lower bound on the certificate size for a natural generalization of these classes known as renamable Horn CNF functions, thus implying that the class is not learnable from a polynomial number of queries.

18.
We present a novel Hybrid Analysis (HA) technology which can efficiently and seamlessly integrate all static and run-time analysis of memory references into a single framework that is capable of performing all data dependence analysis and can generate the necessary information for most associated memory-related optimizations. We use HA to perform automatic parallelization by extracting run-time assertions from any loop and generating appropriate run-time tests that range from a low-cost scalar comparison to a full, reference-by-reference run-time analysis. Moreover, we can order the run-time tests in increasing order of complexity (overhead) and thus incur only the minimum necessary overhead. We accomplish this both by extending compile-time interprocedural analysis techniques and by incorporating speculative run-time techniques when necessary. Our solution bridges free compile-time techniques and exhaustive run-time techniques through a continuum of simple to complex solutions. We have implemented our framework in the Polaris compiler by introducing an innovative intermediate representation called RT_LMAD and a run-time library that can operate on it. Based on the experimental results obtained to date, we hope to automatically parallelize most and possibly all PERFECT codes, a significant accomplishment.
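The "continuum of run-time tests" can be pictured with a schematic sketch. Everything below, the predicates, the loop descriptor, and the escalation rule, is invented for illustration; it shows only the cheap-first ordering, not Polaris's RT_LMAD machinery.

```python
def can_parallelize(loop):
    """Try run-time tests from cheapest to most expensive; stop at the
    first one that proves independence (all predicates are hypothetical)."""
    tests = [
        ("scalar comparison", lambda l: l["stride"] != 0 and l["min_gap"] > 0),
        ("interval overlap", lambda l: l["read_range"][1] < l["write_range"][0]),
        ("per-reference check", lambda l: not set(l["reads"]) & set(l["writes"])),
    ]
    for name, test in tests:              # increasing cost, decreasing risk
        try:
            if test(loop):
                return True, name
        except KeyError:
            continue                      # information missing: escalate
    return False, "sequential fallback"

loop = {"stride": 1, "min_gap": 0,
        "read_range": (0, 9), "write_range": (10, 19),
        "reads": range(0, 10), "writes": range(10, 20)}
print(can_parallelize(loop))              # -> (True, 'interval overlap')
```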

19.
In a data stream management system, aggregate query results are kept in memory to form a stream cube (StreamCube) that supports fast and accurate online OLAP queries. Because memory is limited, an effective storage method is needed to store stream cubes covering larger time windows. This paper proposes methods for generating, pruning, and querying StreamQCTree, a stream cube based on the QC-Tree structure. The set of upper bounds in the QC-Tree structure is divided into basic upper-bound classes and additional upper-bound classes, and a cost model for the additional upper-bound classes is analyzed. Under a fixed storage budget, a dynamic node-materialization scheme guided by this model selects which additional upper-bound classes to materialize, so as to minimize the average query response time over the StreamQCTree. Experiments show that StreamQCTree can access the data cube efficiently and achieves good compression.
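The selection step, choosing which additional upper-bound classes to materialize under a fixed memory budget, resembles a budgeted benefit-density problem. The greedy Python sketch below is an invented illustration, not the paper's cost model:

```python
def choose_materialized(candidates, budget):
    """candidates: (name, size_bytes, saved_ms_per_query) triples.
    Greedily pick by benefit per byte until the space budget is exhausted."""
    chosen, used = [], 0
    for name, size, saved in sorted(candidates,
                                    key=lambda c: c[2] / c[1], reverse=True):
        if used + size <= budget:
            chosen.append(name)
            used += size
    return chosen, used

cands = [("ub_A", 400, 12.0), ("ub_B", 150, 9.0), ("ub_C", 300, 3.0)]
print(choose_materialized(cands, budget=500))   # -> (['ub_B', 'ub_C'], 450)
```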

20.
Goldsmith, Judy; Sloan, Robert H.; Turán, György. Machine Learning, 2002, 47(2-3): 257-295
The theory revision, or concept revision, problem is to correct a given, roughly correct concept. This problem is considered here in the model of learning with equivalence and membership queries. A revision algorithm is considered efficient if the number of queries it makes is polynomial in the revision distance between the initial theory and the target theory, and polylogarithmic in the number of variables and the size of the initial theory. The revision distance is the minimal number of syntactic revision operations, such as the deletion or addition of literals, needed to obtain the target theory from the initial theory. Efficient revision algorithms are given for three classes of disjunctive normal form expressions: monotone k-DNF, monotone m-term DNF and unate two-term DNF. A negative result shows that some monotone DNF formulas are hard to revise.
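For concreteness, the revision distance can be illustrated with a toy computation (a simplification invented here: it matches terms up positionally and counts literal additions and deletions, whereas the actual distance minimizes over sequences of revision operations):

```python
def revision_distance(initial, target):
    """Each formula: a list of terms, each term a frozenset of literals.
    Counts the literal deletions/additions per positionally matched term."""
    dist = 0
    for t_init, t_target in zip(initial, target):
        dist += len(t_init ^ t_target)     # literals to delete or add
    return dist

initial = [frozenset({"x1", "x2", "x3"}), frozenset({"x4"})]
target = [frozenset({"x1", "x3"}), frozenset({"x4", "x5"})]
print(revision_distance(initial, target))  # delete x2, add x5 -> 2
```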
