Found 20 similar documents (search time: 9 ms)
1.
Minoru Yokota Akira Yamamoto Kazuo Taki Hiroshi Nishikawa Shunichi Uchida 《New Generation Computing》1983,1(2):125-144
A Personal Sequential Inference Machine, called PSI, is a personal computer designed as a software development tool for the Fifth Generation Computer Systems (FGCS) project. PSI has a logic-based, high-level machine instruction set called Kernel Language Version 0 (KL0), and its machine architecture is specialized for direct execution of KL0. "Unification" and "backtracking", the principal operations in logic programming, are performed efficiently by PSI hardware/firmware; the estimated execution speed is 20K to 30K LIPS. The machine is also equipped with a large main memory of up to 40 bits × 16M words. This paper presents the key points of the design and the features of the machine architecture.
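The "unification" operation that PSI executes in hardware/firmware can be illustrated with a short sketch. This is an illustrative toy in Python, not the PSI firmware algorithm: variables are strings starting with an uppercase letter, and compound terms are tuples of the form ("functor", arg1, ...).

```python
def is_var(t):
    # Treat capitalized strings as logic variables, Prolog-style.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Return an extended substitution, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

s = unify(("f", "X", "b"), ("f", "a", "Y"), {})
# s == {"X": "a", "Y": "b"}
```

On failure (clashing functors or arities) the sketch returns None, which is where a Prolog machine such as PSI would trigger backtracking.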
2.
Yukio Kaneda Naoyuki Tamura Koichi Wada Hideo Matsuda Shumin Kuo Sadao Maekawa 《New Generation Computing》1986,4(1):51-66
The sequential Prolog machine PEK, currently under development, is described. PEK is an experimental machine designed for high-speed execution of Prolog programs. It is controlled by horizontal-type microinstructions and built from bit-slice microprocessor elements comprising a microprogram sequencer and ALU, with dedicated hardware circuits for unification and backtracking. The PEK machine consists of a host processor (MC68000) and a backend processor (the PEK engine). A Prolog interpreter has been developed on the machine and its performance evaluated: a single inference executes in 89 microinstructions, and execution speed is approximately 60–70 KLIPS.
3.
4.
5.
6.
Application of Sequential Pattern Mining to Personalized Services in E-Commerce    Cited: 1 (self-citations: 0, other citations: 1)
This paper analyzes the problems facing e-commerce and the characteristics of personalized services, proposes methods for applying Web usage mining to personalized e-commerce services, and surveys research on personalization based on Web mining. The mining process is described in detail, and a method combining sequential patterns with classification to realize personalized services is discussed. The personalization information obtained with these algorithms can accurately capture users' interest patterns and effectively update the organization of Web information resources, thereby improving the efficiency of network information services and providing users with adaptive, one-to-one intelligent personalized service.
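The core of sequential pattern mining as used in Web usage mining is counting how many user sessions contain a candidate page sequence in order, then keeping patterns above a minimum support. A minimal sketch, with hypothetical session data and page names (the paper's actual algorithm is not specified in the abstract):

```python
from itertools import permutations

def contains(session, pattern):
    # True if `pattern` occurs in `session` in order (not necessarily adjacent).
    it = iter(session)
    return all(page in it for page in pattern)

def frequent_patterns(sessions, length, min_support):
    pages = sorted({p for s in sessions for p in s})
    result = {}
    for pat in permutations(pages, length):  # candidate ordered patterns
        support = sum(contains(s, pat) for s in sessions)
        if support >= min_support:
            result[pat] = support
    return result

# Hypothetical clickstream sessions.
sessions = [
    ["home", "search", "item", "cart"],
    ["home", "item", "cart"],
    ["search", "item"],
]
print(frequent_patterns(sessions, 2, min_support=2))
```

Enumerating all permutations is exponential; real systems (e.g. AprioriAll or PrefixSpan families) grow candidates incrementally from shorter frequent patterns instead.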
7.
High computational complexity limits the application of the bilateral weighted fuzzy support vector machine to practical classification problems. To reduce this complexity, the sequential minimal optimization (SMO) algorithm is applied to the model: the whole quadratic programming problem is decomposed into a series of quadratic programming subproblems of size two, which are then solved. To test the performance of the SMO algorithm, numerical experiments were conducted on three real-world datasets and two artificial datasets. The results show that, compared with the traditional interior-point algorithm, SMO significantly reduces the computational cost of the model without any loss of test accuracy, making its practical application feasible.
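SMO's size-two subproblems have a closed-form solution. The sketch below shows the standard analytic update for one multiplier pair (i, j) in an ordinary soft-margin SVM dual; it is a simplification for illustration, not the bilateral weighted fuzzy variant of the paper (which changes the box constraints per sample).

```python
def smo_pair_update(alpha, y, K, E, i, j, C):
    """Analytically solve the 2-variable dual QP for multipliers i and j.

    alpha: current multipliers; y: labels in {-1, +1}; K: kernel matrix;
    E: prediction errors f(x_k) - y_k; C: box constraint.
    """
    # Feasible segment [L, H] from 0 <= alpha <= C and sum(y*alpha) fixed.
    if y[i] != y[j]:
        L = max(0.0, alpha[j] - alpha[i])
        H = min(C, C + alpha[j] - alpha[i])
    else:
        L = max(0.0, alpha[i] + alpha[j] - C)
        H = min(C, alpha[i] + alpha[j])
    eta = K[i][i] + K[j][j] - 2.0 * K[i][j]  # curvature along the constraint
    if eta <= 0 or L == H:
        return alpha  # skip degenerate pairs
    aj_new = alpha[j] + y[j] * (E[i] - E[j]) / eta
    aj_new = min(H, max(L, aj_new))          # clip to the feasible segment
    # Restore the equality constraint sum(y*alpha) = const.
    ai_new = alpha[i] + y[i] * y[j] * (alpha[j] - aj_new)
    out = list(alpha)
    out[i], out[j] = ai_new, aj_new
    return out
```

A full solver repeatedly selects violating pairs (e.g. by heuristics over the KKT conditions) and applies this update until convergence.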
8.
Karvel K Thornber 《Knowledge》1996,9(8):483-489
Analogic is the unique class of many-valued, non-monotonic logics that preserves the richness of inference in (Boolean) logic and the manipulability of the (Boolean) algebra underlying logic and, in addition, contains a number of unexpected, emergent properties that extend inference in non-trivial ways beyond the limits of logic. One such inference is rada (reductio ad absurdum, reasoning by contradiction, but now in the absence of the excluded middle); this is important to retain, since direct proofs of many theorems are not known. Another example is chaining: transitivity is uncommon in many-valued logics, but in analogic inferences can be carried out in either direction, even through weak links in the chain. The latter, impossible in logic, simulates intuitive leaps in reasoning. Protologic effects inferences using only (n+1) implications which require 2n implications in logic; indeed, protologic has no counterpart in logic or any other form of reasoning. Analogic is useful in formulating largely inferential problems, including document and pattern classification and retrieval. These inference properties, long sought in alternative logics by adding appropriate axioms or other implicit or explicit restrictions, are now available in analogic, which in turn results from removing an axiom and letting inferences become many-valued.
9.
Yunbo YANG Xiaolei DONG Zhenfu CAO Jiachen SHEN Shangmin DOU 《Frontiers of Computer Science》2023,17(5):175811
Oblivious Cross-Tags (OXT) [1] is the first efficient searchable encryption (SE) protocol for conjunctive queries in a single-writer single-reader framework. However, it trades security for efficiency by leaking partial database information to the server, and recent attacks show that such leakage can be used to recover the content of queried keywords. To address this, Lai et al. [2] propose Hidden Cross-Tags (HXT), which reduces the access-pattern leakage from the Keyword Pair Result Pattern (KPRP) to the Whole Result Pattern (WRP). However, WRP leakage can still be used to recover some additional content of queried keywords. This paper proposes Improved Cross-Tags (IXT), an efficient searchable encryption protocol that achieves access- and search-pattern hiding based on labeled private set intersection. We also prove that the proposed labeled private set intersection (PSI) protocol is secure against semi-honest adversaries, and that IXT is L-semi-honest secure (where L is the leakage function). Finally, we conduct experiments comparing IXT with HXT. The results show that the client-side storage and computation overheads of the search phase in IXT are much lower than those in HXT, and that IXT is scalable and applicable to datasets of various sizes.
10.
Grammar Inference: History and State of the Art    Cited: 5 (self-citations: 0, other citations: 5)
Grammar inference is an inductive learning problem in formal language theory: it studies how to derive a grammatical definition of a language, by inductive inference, from finite information about that language. This article surveys the history and current state of grammar inference research. It first presents the theoretical models of grammar inference, then reviews inference methods for the class of context-free grammars and its non-trivial subclasses, for hidden Markov models, and for stochastic context-free grammars, and finally outlines applications of grammar inference and its future directions.
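A concrete entry point into grammar inference from finite samples is the prefix tree acceptor (PTA), the trie-shaped automaton that exactly accepts the positive examples; state-merging algorithms such as RPNI then generalize it by merging compatible states. A minimal sketch of PTA construction:

```python
def build_pta(positive_samples):
    """Return (transitions, accepting) of a trie-shaped DFA over the samples."""
    transitions = {}   # (state, symbol) -> state
    accepting = set()
    next_state = 1     # state 0 is the start state
    for word in positive_samples:
        state = 0
        for symbol in word:
            if (state, symbol) not in transitions:
                transitions[(state, symbol)] = next_state
                next_state += 1
            state = transitions[(state, symbol)]
        accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, word):
    state = 0
    for symbol in word:
        if (state, symbol) not in transitions:
            return False
        state = transitions[(state, symbol)]
    return state in accepting

trans, acc = build_pta(["ab", "abb", "a"])
```

The PTA accepts exactly the sample set; generalization to the target language happens only in the subsequent merging phase.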
11.
12.
Latent Semantic Indexing and Chinese Information Retrieval    Cited: 4 (self-citations: 0, other citations: 4)
1. Introduction. Typical traditional information retrieval systems, such as the Boolean logic model and the vector space model, return results for a user's query based on keyword matching or vector-space similarity coefficients. However, the same concept may be expressed with different words (synonyms or near-synonyms), and the same word may carry different meanings in different contexts (polysemy), so query methods based on literal term matching fall short in both precision and completeness. Although synonym thesauri improve recall to some extent, they reduce query precision, and in practice the synonym dictionary must be continually updated to keep up with the system's changing requirements.
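The synonym failure mode described above is easy to demonstrate: with plain term-frequency vectors, a document and a query that use different words for the same concept get credit only for literally shared terms. A tiny sketch with a hypothetical vocabulary (latent semantic indexing addresses exactly this gap by projecting terms into a shared concept space):

```python
import math

def tf_vector(doc, vocab):
    # Raw term-frequency vector over a fixed vocabulary.
    return [doc.count(term) for term in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vocab = ["car", "automobile", "engine"]
doc = ["car", "engine"]            # document says "car"
query = ["automobile", "engine"]   # query uses the synonym "automobile"
sim = cosine(tf_vector(doc, vocab), tf_vector(query, vocab))
# only the shared term "engine" contributes; the synonym pair is invisible
```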
13.
14.
15.
In order to compare learning algorithms, experimental results reported in the machine learning literature often use statistical tests of significance to support the claim that a new learning algorithm generalizes better. Such tests should take into account the variability due to the choice of training set, not only that due to the test examples. Considering only test-example variability, as is often done, can lead to gross underestimation of the variance of the cross-validation estimator and to the wrong conclusion that the new algorithm is significantly better when it is not. We perform a theoretical investigation of the variance of a variant of the cross-validation estimator of the generalization error that accounts for the variability due to the randomness of the training set as well as the test examples. Our analysis shows that all variance estimators based only on the results of the cross-validation experiment must be biased. The analysis allows us to propose new estimators of this variance, and we show via simulations that hypothesis tests about the generalization error using these new variance estimators have better properties than tests involving the variance estimators currently in use and listed in Dietterich (1998). In particular, the new tests have correct size and good power: they do not reject the null hypothesis too often when it is true, but they tend to reject it frequently when it is false.
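The naive variance estimate the paper criticizes treats the k fold scores as independent, which they are not (their training sets overlap). The sketch below computes the k-fold estimate and this naive variance for contrast; the model (predict the mean of 1-D data) and the data are hypothetical, and the paper's improved estimators are not reproduced here.

```python
import random

def kfold_scores(data, k, fit, score):
    # Deterministic striped folds; a real experiment would shuffle first.
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        scores.append(score(fit(train), test))
    return scores

def mean(xs):
    return sum(xs) / len(xs)

def naive_variance_of_cv(scores):
    # Sample variance of the fold scores divided by k. Biased: fold scores
    # are NOT independent, since their training sets overlap.
    k = len(scores)
    m = mean(scores)
    return sum((s - m) ** 2 for s in scores) / (k - 1) / k

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(100)]
fit = lambda train: mean(train)                             # "model" = the mean
score = lambda model, test: mean([(x - model) ** 2 for x in test])  # MSE
scores = kfold_scores(data, 5, fit, score)
```

The cross-validation estimate is `mean(scores)`; the paper's point is that any variance estimate computed, like `naive_variance_of_cv`, from these k correlated numbers alone is necessarily biased.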
16.
As a format for exchanging information on the Web, XML documents are used ever more widely, and the security of published information poses new challenges for databases. Some security policies are now issued in the form of legal provisions, which requires effective means of verifying that accesses to XML documents are consistent with the security policy. Auditing can serve this purpose, but existing auditing methods can only audit the results of SQL queries, not XML document queries (XQuery or XPath); moreover, a malicious user may access sensitive information by drawing inferences from query results, so auditing of XQuery must itself be capable of inference. This paper first proposes a sound and practical auditing method and algorithm for XQuery, together with a corresponding query graph model (QGM). To give the audit basic inference capability, it then presents inference-auditing methods, algorithms, and corresponding query graph models for several typical constraints on XML documents. Experimental results show that the proposed framework for inference auditing of XML queries is practical and effective.
17.
The Semantic Web lacks support for explaining answers from web applications. When applications return answers, many users do not know what information sources were used, when they were updated, how reliable the sources were, or what information was looked up versus derived. Many users also do not know how implicit answers were derived. The Inference Web (IW) aims to take opaque query answers and make them more transparent by providing infrastructure for presenting and managing explanations. The explanations include information about where answers came from (knowledge provenance) and how they were derived (or retrieved). In this article we describe an infrastructure for IW explanations. The infrastructure includes: IWBase, an extensible web-based registry containing details about information sources, reasoners, languages, and rewrite rules; PML, the Proof Markup Language specification and API used for encoding portable proofs; the IW browser, a tool supporting navigation and presentation of proofs and their explanations; and a new explanation dialogue component. Source information in the IWBase is used to convey knowledge provenance. Representation and reasoning language axioms and rewrite rules in the IWBase are used to support proofs, proof combination, and Semantic Web agent interoperability. The Inference Web is in use by four Semantic Web agents, three of them using embedded reasoning engines fully registered in the IW. Inference Web also provides explanation infrastructure for a number of DARPA and ARDA projects.
18.
19.
Inference of high-dimensional grammars is discussed. Specifically, techniques for inferring tree grammars are briefly presented. The problem of inferring a stochastic grammar to model the behavior of an information source is also introduced, and techniques for carrying out the inference process are presented for a class of stochastic finite-state and context-free grammars. Possible practical applications of these methods are illustrated by examples.
20.