Similar Documents
1.
In this paper we separate many-one reducibility from truth-table reducibility for distributional problems in DistNP under the hypothesis that P ≠ NP. As a first example we consider the 3-Satisfiability problem (3SAT) with two different distributions on 3CNF formulas. We show that 3SAT with a version of the standard distribution is truth-table reducible but not many-one reducible to 3SAT with a less redundant distribution unless P = NP. We extend this separation result and define a distributional complexity class C with the following properties: (1) C is a subclass of DistNP, and this inclusion is proper unless P = NP. (2) C contains DistP, but it is not contained in AveP unless DistNP ⊆ AveZPP. (3) C has a ≤ᵖₘ-complete set. (4) C has a ≤ᵖₜₜ-complete set that is not ≤ᵖₘ-complete unless P = NP. This shows that, under the assumption that P ≠ NP, the two completeness notions differ on some nontrivial subclass of DistNP.
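For reference, a minimal sketch (standard definitions assumed, not quoted from the paper) of the two reducibilities being separated, stated for distributional problems (D, μ) and (E, ν):

```latex
% Minimal sketch (standard definitions assumed, not quoted from the paper) of the two
% reducibilities for distributional problems (D,\mu) and (E,\nu).
\begin{align*}
(D,\mu) \le^{\mathrm{p}}_{\mathrm{m}} (E,\nu) &\iff
  \exists f \in \mathrm{FP}\;\forall x\;\bigl[x \in D \Leftrightarrow f(x) \in E\bigr],
  \text{ with } \nu \text{ dominating } \mu \text{ via } f;\\
(D,\mu) \le^{\mathrm{p}}_{\mathrm{tt}} (E,\nu) &\iff
  \exists g,h \in \mathrm{FP}\;\forall x\;
  \bigl[x \in D \Leftrightarrow h\bigl(x, E(q_1),\dots,E(q_k)\bigr) = 1\bigr],
  \text{ where } (q_1,\dots,q_k) = g(x),
\end{align*}
% with the analogous domination requirement on the distribution of the queries.
```

The distinction exploited above is that a many-one reduction asks a single query whose answer is final, while a truth-table reduction may ask several non-adaptive queries and combine the answers with a polynomial-time evaluator.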

2.
3.

Suppose LC₁ and LC₂ are two machine learning classes, each based on a criterion of success. Suppose that, for every machine which learns a class of functions according to the LC₁ criterion of success, there is a machine which learns this class according to the LC₂ criterion. In the case where the converse does not hold, LC₁ is said to be separated from LC₂. It is shown that for many such separated learning classes from the literature a much stronger separation holds: (∀𝒞 ∈ LC₁)(∃𝒞′ ∈ LC₂ − LC₁)[𝒞 ⊆ 𝒞′]. It is also shown that there is a pair of separated learning classes from the literature for which the stronger separation above does not hold. A philosophical heuristic toward the design of artificially intelligent learning programs is presented with each strong separation result.

4.
This article introduces a novel path planning algorithm, called ν, that reduces the problem of robot path planning to optimisation of a probabilistic finite state automaton. The ν-algorithm makes use of the renormalised measure ν of regular languages to plan the optimal path for a specified goal. Although the underlying navigation model is probabilistic, the ν-algorithm yields path plans that can be executed in a deterministic setting with automated optimal trade-off between path length and robustness under dynamic uncertainties. The ν-algorithm has been experimentally validated on Segway Robotic Mobility Platforms in a laboratory environment.

5.
In machine-learning technologies, the support vector machine (SV machine, SVM) is a brilliant invention with many merits, such as freedom from local minima, the widest possible margins separating different clusters, and a solid theoretical foundation. In this paper, we first explore the linear separability relationships between the high-dimensional feature space H and the empirical kernel map U as well as between H and the space of kernel outputs K. Second, we investigate the relations of the distances between separating hyperplanes and SVs in H and U, and derive an upper bound for the margin width in K. Third, as an application, we show experimentally that the separating hyperplane in H can be slightly adjusted through U. The experiments reveal that existing SVM training can linearly separate the data in H with considerable success. The results in this paper allow us to visualize the geometry of H by studying U and K.
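As an illustration of the empirical kernel map U discussed above, the following sketch (standard scikit-learn calls; the dataset and parameters are arbitrary placeholders, not the paper's experiments) trains a linear SVM on the explicitly mapped features k(x, x₁), …, k(x, xₙ) and compares it with an ordinary kernel SVM operating implicitly in H.

```python
# Hedged sketch: empirical kernel map vs. implicit kernel SVM (standard scikit-learn APIs).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC, LinearSVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)  # placeholder dataset
gamma = 1.0

# Empirical kernel map: represent each point by its kernel values against the training set.
K_train = rbf_kernel(X, X, gamma=gamma)          # rows of K_train are the mapped features
emp_svm = LinearSVC(C=1.0, max_iter=10000).fit(K_train, y)

# Ordinary SVM working implicitly in the feature space H.
h_svm = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

print("accuracy in U (empirical kernel map):", emp_svm.score(K_train, y))
print("accuracy in H (implicit kernel SVM): ", h_svm.score(X, y))
```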

6.
Objective: Image registration is the key factor affecting stitching quality. Existing parallax image stitching methods do not resolve misregistration between matched feature point pairs, which easily produces unnatural stitching artifacts. To address this problem, a registration algorithm based on line-constrained moving least squares is proposed to reduce registration error and improve stitching quality. Method: First, SIFT (scale-invariant feature transform) feature points are computed for the target and reference images, matches between them are established with RANSAC (random sample consensus), and the best homography from the target image to the reference image is computed from these matches. Then, line-constrained moving least squares is used to register two image groups: (1) the target image and the reference image; (2) the homography-warped target image and the reference image. The first group is registered with a point-wise affine transformation, while the second group is registered with the homography followed by a point-wise affine transformation. Finally, in the overlap region, the max-flow/min-cut algorithm is used to find the optimal seam, the registration quality of the two groups is evaluated along the seam, and the better group is selected for blending. Results: Extensive tests on a self-collected image library and on public datasets show that the registration accuracy of the proposed algorithm exceeds 95% and the proportion of perspective distortion is below 17%. Compared with recent stitching methods, registration accuracy is improved by 3% and perspective distortion in the stitching results is reduced by 73%. Conclusion: Moving least squares can register feature points accurately but may distort structural objects in the image, whereas the line-constraint term preserves structure and suppresses distortion. Line-constrained moving least squares therefore balances structural integrity with the alignment accuracy of matched feature points, and the stitching method based on this registration model effectively reduces ghosting and double-image artifacts, yielding natural-looking results.
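As an illustration of the first stage described above (SIFT matching followed by a RANSAC homography), the following sketch uses standard OpenCV calls; the file names and thresholds are placeholders, and the line-constrained moving least squares registration itself is not reproduced here.

```python
# Hedged sketch of the first stage only: SIFT matching and a RANSAC homography from the
# target to the reference image (standard OpenCV calls; placeholders, not the paper's code).
import cv2
import numpy as np

target = cv2.imread("target.jpg", cv2.IMREAD_GRAYSCALE)       # placeholder file names
reference = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(target, None)
kp_r, des_r = sift.detectAndCompute(reference, None)

# Ratio-test matching of SIFT descriptors.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = [m for m, n in matcher.knnMatch(des_t, des_r, k=2) if m.distance < 0.75 * n.distance]

pts_t = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Best homography from target to reference, with RANSAC rejecting mismatched pairs.
H, inlier_mask = cv2.findHomography(pts_t, pts_r, cv2.RANSAC, ransacReprojThreshold=4.0)
warped = cv2.warpPerspective(target, H, (reference.shape[1], reference.shape[0]))
```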

7.
Unlike the customary describing functions, the describing series proposed by Teodorescu (1970, 1971) allow one to compute the equivalent gain vector n of any symmetric non-linearity (non-linearity vector z) by means of a linear transformation n = Hz, where H is a constant matrix, i.e. independent of the non-linearity shape. The inverse transformation z = H⁻¹n is useful for computing non-linear systems with desired steady-state and/or transient properties. Such transformations with constant matrices H, H⁻¹ can be combined with a recently proposed criterion for the non-existence of limit cycles in interconnected systems (Skar et al. 1981 a, b) to yield a straightforward design technique.
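A minimal numerical illustration of the linear structure n = Hz and its inverse z = H⁻¹n; the matrix H below is a random placeholder rather than the constant matrix of the describing-series construction.

```python
# Minimal numerical illustration of the linear structure n = H z and z = H^{-1} n.
# H below is a random placeholder, NOT the constant matrix of the describing-series method.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))          # stand-in for the shape-independent matrix H
z = rng.standard_normal(4)               # stand-in for the non-linearity vector z

n = H @ z                                # equivalent gain vector from the non-linearity
z_back = np.linalg.solve(H, n)           # inverse transformation z = H^{-1} n

assert np.allclose(z, z_back)
```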

8.
Given a parametric polynomial family p(s; Q) := {∑ₖ₌₀ⁿ aₖ(q)sᵏ : q ∈ Q}, Q ⊂ Rᵐ, the robust root locus of p(s; Q) is defined as the two-dimensional zero set {s ∈ C : p(s; q) = 0 for some q ∈ Q}. In this paper we are concerned with the problem of generating robust root loci for the parametric polynomial family p(s; E) whose polynomial coefficients depend polynomially on elements of the parameter vector q ∈ E, which lies in an m-dimensional ellipsoid E. More precisely, we present a computational technique for testing the zero inclusion/exclusion of the value set p(z; E) for a fixed point z in C, and then apply an integer-labelled pivoting procedure to generate the boundary of each subregion of the robust root locus. The proposed zero inclusion/exclusion test algorithm is based on using some simple sufficient conditions for the zero inclusion and exclusion of the value set p(z; E) and subdividing the domain E iteratively. Furthermore, an interval method is incorporated in the algorithm to speed up the process of zero inclusion/exclusion testing by reducing the number of zero inclusion test operations. To illustrate the effectiveness of the proposed algorithm for the generation of the robust root locus, an example is provided.
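For intuition only, the following sketch approximates the zero-inclusion question for the value set p(z; E) by Monte-Carlo sampling of the ellipsoid and a convex-hull membership check; the polynomial family, ellipsoid and sample count are toy placeholders, and the paper's subdivision/interval algorithm is not reproduced.

```python
# Hedged Monte-Carlo illustration of a zero-inclusion check for the value set p(z; E);
# this only samples E and uses a convex-hull heuristic, not the paper's algorithm.
import numpy as np
from scipy.spatial import Delaunay

def p(s, q):
    # toy family: p(s; q) = s^2 + (2 + q1) s + (1 + q1*q2); purely illustrative
    return s**2 + (2.0 + q[0]) * s + (1.0 + q[0] * q[1])

def value_set_contains_zero(z, center, radii, n_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    # sample q uniformly from the axis-aligned ellipsoid E (2-D placeholder)
    u = rng.standard_normal((n_samples, 2))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    r = rng.random(n_samples) ** 0.5
    q = center + radii * (u * r[:, None])
    vals = np.array([p(z, qi) for qi in q])
    pts = np.column_stack([vals.real, vals.imag])
    # heuristic: 0 is "included" if it lies in the convex hull of the sampled values
    return bool(Delaunay(pts).find_simplex(np.zeros(2)) >= 0)

print(value_set_contains_zero(z=-1.0 + 0.2j,
                              center=np.array([0.0, 0.0]),
                              radii=np.array([0.5, 0.5])))
```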

9.
Objective: Scene text recognition (STR) is an active research area in computer vision. Recently, vision Transformer (ViT) models based on multi-head self-attention have been proposed for STR to balance accuracy, speed and computational load. However, no mechanism guarantees that different self-attention heads actually capture diverse features, so ViT models with multi-head self-attention can perform poorly on highly diverse scene text. To address this problem, a novel orthogonality constraint is proposed to explicitly increase the diversity among self-attention heads, improving the ability of multi-head self-attention to capture information from different subspaces and further raising accuracy while preserving speed and computational efficiency. Method: First, an orthogonality constraint is imposed on the Q (query), K (key) and V (value) features of different self-attention heads, so that different heads attend to features in different query, key and value subspaces; attending to different subspaces explicitly drives the heads to capture more distinct features. An orthogonality constraint is also imposed on the linear projection weights that produce the Q, K and V features of different heads, which provides an orthogonal weight-space solution for learning Q, K and V and acts as implicit regularization during training. Results: Experiments compare the method with the baseline on seven datasets: accuracy improves by 0.5% on the regular-text dataset Street View Text (SVT), by 1.1% on the irregular-text dataset CUTE80 (CT), and by 0.5% overall across the seven public datasets. Conclusion: The proposed plug-and-play orthogonality constraint improves the feature-capturing ability of multi-head self-attention in STR and raises the recognition accuracy of ViT models on STR tasks. The code is available at https://github.com/lexiaoyuan/XViTSTR.
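To make the idea of an inter-head orthogonality constraint concrete, the following PyTorch sketch penalizes similarity between per-head query features (the K and V cases would be analogous); it is one plausible formulation under assumed tensor shapes, not the exact loss of the cited work (see its repository for the actual code).

```python
# Hedged sketch: an inter-head orthogonality penalty on per-head query features.
# One plausible formulation under assumed shapes, not the cited paper's exact loss.
import torch

def head_orthogonality_penalty(q: torch.Tensor) -> torch.Tensor:
    """q: (batch, heads, seq_len, head_dim) per-head query features."""
    b, h, n, d = q.shape
    q = torch.nn.functional.normalize(q.reshape(b, h, n * d), dim=-1)  # one vector per head
    gram = q @ q.transpose(1, 2)                    # (batch, heads, heads) cosine similarities
    off_diag = gram - torch.diag_embed(torch.diagonal(gram, dim1=1, dim2=2))
    return (off_diag ** 2).mean()                   # push distinct heads toward orthogonality

# usage: total_loss = recognition_loss + lambda_orth * head_orthogonality_penalty(q_heads)
q_heads = torch.randn(2, 8, 25, 64)                 # placeholder shapes
print(head_orthogonality_penalty(q_heads))
```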

10.
Delphi: geometry-based connectivity prediction in triangle mesh compression
Delphi is a new geometry-guided predictive scheme for compressing the connectivity of triangle meshes. Both compression and decompression algorithms traverse the mesh using the EdgeBreaker state machine. However, instead of encoding the EdgeBreaker clers symbols that capture connectivity explicitly, they estimate the location of the unknown vertex, v, of the next triangle. If the predicted location lies sufficiently close to the nearest vertex, w, on the boundary of the previously traversed portion of the mesh, then Delphi estimates that v coincides with w. When the guess is correct, a single confirmation bit is encoded. Otherwise, additional bits are used to encode the rectification of that prediction. When v coincides with a previously visited vertex that is not adjacent to the parent triangle (EdgeBreaker S case), the offset, which identifies the vertex v, must be encoded, mimicking the cut-border machine compression proposed by Gumhold and Strasser. On models where 97% of Delphi predictions are correct, the connectivity is compressed down to 0.19 bits per triangle. Compression rates decrease with the frequency of wrong predictions, but remain below 1.50 bits per triangle for all models tested.
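A sketch of the geometric prediction step only (parallelogram prediction of the unknown vertex and the confirm-or-rectify decision); the EdgeBreaker traversal, offset coding and the bit-level encoder are omitted, and the function names and tolerance are assumptions, not Delphi's actual implementation.

```python
# Hedged sketch of a Delphi-style prediction step only (EdgeBreaker traversal, offset
# coding and the actual entropy coder are omitted; names and tolerance are placeholders).
import numpy as np

def predict_vertex(gate_left, gate_right, opposite):
    """Parallelogram prediction of the unknown vertex across the gate edge."""
    return gate_left + gate_right - opposite

def encode_prediction(predicted, boundary_vertices, true_index, tolerance):
    """Return ('confirm',) when the nearest boundary vertex matches the true vertex,
    otherwise a rectification record identifying the correct vertex."""
    d = np.linalg.norm(boundary_vertices - predicted, axis=1)
    nearest = int(np.argmin(d))
    if d[nearest] <= tolerance and nearest == true_index:
        return ("confirm",)                 # a single confirmation bit in the real coder
    return ("rectify", true_index)          # extra bits to correct the guess

boundary = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0], [2.0, 0.0, 0.0]])
pred = predict_vertex(boundary[0], boundary[2], np.array([1.0, -1.0, 0.0]))
print(encode_prediction(pred, boundary, true_index=1, tolerance=1.0))   # ('confirm',)
```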

11.
Alternating-time temporal logic (ATL) is a logic for reasoning about open computational systems and multi-agent systems. It is well known that ATL model checking is linear in the size of the model. We point out, however, that the size of an ATL model is usually exponential in the number of agents. When the size of models is defined in terms of states and agents rather than transitions, it turns out that the problem is (1) Δ₃ᵖ-complete for concurrent game structures, and (2) Δ₂ᵖ-complete for alternating transition systems. Moreover, for “Positive ATL”, which allows negation only at the level of propositions, model checking is (1) Σ₂ᵖ-complete for concurrent game structures, and (2) NP-complete for alternating transition systems. We show a nondeterministic polynomial reduction from checking arbitrary alternating transition systems to checking turn-based transition systems. We also discuss the determinism assumption in alternating transition systems, and show that it can be easily removed. In the second part of the paper, we study the model checking complexity for formulae of ATL with imperfect information (ATLir). We show that the problem is Δ₂ᵖ-complete in the number of transitions and the length of the formula (thereby closing a gap in previous work of Schobbens in Electron. Notes Theor. Comput. Sci. 85(2), 2004). Then, we take a closer look and use the same fine-structure complexity measure as we did for ATL with perfect information. We get the surprising result that checking formulae of ATLir is also Δ₃ᵖ-complete in the general case, and Σ₂ᵖ-complete for “Positive ATLir”. Thus, model checking agents’ abilities for both perfect and imperfect information systems belongs to the same complexity class when a finer-grained analysis is used.

12.
13.
14.
2LS is a decidable many-sorted set-theoretic language involving one sort for elements and one sort for sets of elements. In this paper we extend 2LS with constructs for expressing monotonicity, additivity, and multiplicativity properties of set-to-set functions. We call the resulting language 2LSmf. We prove that 2LSmf is decidable by reducing the problem of determining the satisfiability of its sentences to the problem of determining the satisfiability of sentences of 2LS. Furthermore, we prove that the language 2LSmf is stably infinite with respect to the sort of elements. Therefore, by using a many-sorted version of the Nelson–Oppen combination method, 2LSmf can be combined with other languages modeling the sort of elements.
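For reference, the three properties of a set-to-set function f that 2LSmf adds can be written in standard set-theoretic form (assumed standard definitions, not quoted verbatim from the paper):

```latex
% Standard set-theoretic forms of the three properties of a set-to-set function f
% (assumed standard definitions, not quoted verbatim from the paper).
\begin{align*}
\text{monotone:}       &\quad \forall x\,\forall y\;\bigl(x \subseteq y \rightarrow f(x) \subseteq f(y)\bigr)\\
\text{additive:}       &\quad \forall x\,\forall y\;\bigl(f(x \cup y) = f(x) \cup f(y)\bigr)\\
\text{multiplicative:} &\quad \forall x\,\forall y\;\bigl(f(x \cap y) = f(x) \cap f(y)\bigr)
\end{align*}
```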

15.
In the study of cappable and noncappable properties of the recursively enumerable (r.e.) degrees, Lempp suggested a conjecture which asserts that for all r.e. degrees a and b, if a ≰ b then there exists an r.e. degree c such that c ≤ a, c ≰ b, and c is cappable. We shall prove in this paper that this conjecture holds under the condition that a is high. Working below a high r.e. degree h, we show that for any r.e. degree b with h ≰ b, there exist r.e. degrees a0 and a1 such that a0, a1 ≰ b, a0, a1 ≤ h, and a0 and a1 form a minimal pair.

16.
Chunking in Soar: The Anatomy of a General Learning Mechanism
In this article we describe an approach to the construction of a general learning mechanism based on chunking in Soar. Chunking is a learning mechanism that acquires rules from goal-based experience. Soar is a general problem-solving architecture with a rule-based memory. In previous work we have demonstrated how the combination of chunking and Soar could acquire search-control knowledge (strategy acquisition) and operator implementation rules in both search-based puzzle tasks and knowledge-based expert-systems tasks. In this work we examine the anatomy of chunking in Soar and provide a new demonstration of its learning capabilities involving the acquisition and use of macro-operators.

17.
Using the notion of modular decomposition we extend the class of graphs on which both the treewidth and the minimum fill-in can be solved in polynomial time. We show that if C is a class of graphs that are modularly decomposable into graphs that have a polynomial number of minimal separators, or into graphs formed by adding a matching between two cliques, then both the treewidth and the minimum fill-in on C can be solved in polynomial time. For graphs that are modularly decomposable into cycles we give algorithms that take O(n) and O(n³) time, respectively, for treewidth and minimum fill-in.

18.
Objective: Computing the real roots of polynomials has a wide range of applications. This work improves traditional clipping methods so that, in the case of multiple roots, computational stability is preserved while the convergence rate is raised significantly. Method: A cubic clipping method based on the space R³ is proposed. It inherits the advantages of traditional clipping root-finders, fully exploits the good numerical stability of the Bernstein basis, and provides a simple test for the existence of multiple roots, so that the multiple-root case can be reduced to the single-root case. Results: Compared with the existing cubic clipping methods based on R¹ and R², the proposed method achieves better approximation. For single roots, it attains convergence rate 5, the same as the R²-based cubic clipping method and slightly higher than the rate 4 of the R¹-based method; for roots of multiplicity m (≥ 2), it can theoretically attain convergence rate 5, clearly better than the 4/m or 5/m rates of the existing R¹- and R²-based methods. The cubic clipping methods based on R¹, R² and R³ have roughly the same computational complexity, O(n²). Conclusion: The proposed method can quickly detect the multiple-root case while achieving a higher convergence rate and better approximation.
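The following is not the paper's R³ cubic clipping method; it is a much simpler baseline sketch of root isolation in the Bernstein basis via de Casteljau subdivision, included only to illustrate the Bernstein-form stability that the abstract builds on. The example polynomial and tolerance are placeholders.

```python
# Hedged sketch: NOT the paper's R^3 cubic clipping. A simple baseline root isolator that
# works on Bernstein coefficients with de Casteljau subdivision, illustrating the
# numerical stability the abstract attributes to Bernstein-form computations.
import numpy as np

def de_casteljau_split(b):
    """Split Bernstein coefficients b on [0,1] into the two halves' coefficients."""
    b = np.array(b, dtype=float)
    left, right = [b[0]], [b[-1]]
    while len(b) > 1:
        b = 0.5 * (b[:-1] + b[1:])
        left.append(b[0])
        right.append(b[-1])
    return np.array(left), np.array(right)[::-1]

def isolate_root(b, lo=0.0, hi=1.0, tol=1e-12):
    """Bisect [lo, hi]; keep the half whose Bernstein coefficients straddle zero
    (a necessary condition for that half to contain a root)."""
    if hi - lo < tol:
        return 0.5 * (lo + hi)
    left, right = de_casteljau_split(b)
    mid = 0.5 * (lo + hi)
    if np.min(left) <= 0.0 <= np.max(left):
        return isolate_root(left, lo, mid, tol)
    return isolate_root(right, mid, hi, tol)

# p(t) = (t - 0.3)(t - 0.8) on [0,1], converted to degree-2 Bernstein coefficients:
# b0 = p(0), b1 = p(0) + p'(0)/2, b2 = p(1).
power = np.poly1d([1.0, -1.1, 0.24])
b = np.array([power(0.0), power(0.0) + power.deriv()(0.0) / 2.0, power(1.0)])
print(isolate_root(b))    # ≈ 0.3 (the root in the sub-intervals kept by the test)
```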

19.
Jiang Mengjuan, Li Fanzhang. Applied Intelligence, 2022, 52(10): 10965-10978

Humans can use acquired experience to learn new skills quickly and without forgetting the knowledge they already have. However, neural networks cannot perform continual learning the way humans do, because they easily fall into the stability-plasticity dilemma, which leads to catastrophic forgetting. Since meta-learning with already acquired knowledge as a prior can directly optimize the final goal, this paper proposes LGCMLA (Lie Group Continual Meta Learning Algorithm), an improvement of the CMLA (Continual Meta Learning Algorithm) proposed by Jiang et al. On the one hand, LGCMLA enhances the continuity between tasks by changing the inner-loop update rule (from using random initialization parameters for each task to using the updated parameters of the previous task for the subsequent task). On the other hand, it uses orthogonal groups to limit the parameter space and adopts natural Riemannian gradient descent to accelerate convergence. It not only corrects the poor convergence and stability of CMLA, but also further improves the generalization performance of the model and solves the stability-plasticity dilemma more effectively. Experiments on the miniImageNet, tieredImageNet and Fewshot-CIFAR100 (Canadian Institute For Advanced Research) datasets demonstrate the effectiveness of LGCMLA. In particular, compared to MAML (Model-Agnostic Meta-Learning) with the standard four-layer convolutional backbone, 1-shot and 5-shot accuracy under the 5-way setting on miniImageNet improve by 16.4% and 17.99%, respectively.
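For context, a minimal numpy sketch of one standard way to take a Riemannian gradient step constrained to the orthogonal group (tangent-space projection followed by a QR retraction); this illustrates the kind of orthogonality-constrained update the abstract refers to and is not the LGCMLA algorithm itself. All names and the test matrices are placeholders.

```python
# Hedged sketch of a Riemannian gradient step on the orthogonal group O(n) via
# tangent-space projection and QR retraction; illustrative only, not LGCMLA itself.
import numpy as np

def riemannian_step(W, euclid_grad, lr=0.1):
    """W: orthogonal parameter matrix; euclid_grad: dL/dW; returns an orthogonal update."""
    # Project the Euclidean gradient onto the tangent space at W (skew-symmetric part).
    A = euclid_grad @ W.T
    riem_grad = 0.5 * (A - A.T) @ W
    # Retract back onto the manifold with a QR decomposition.
    Q, R = np.linalg.qr(W - lr * riem_grad)
    return Q * np.sign(np.diag(R))            # fix column signs so the retraction is unique

rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # start from a random orthogonal matrix
G = rng.standard_normal((4, 4))                    # stand-in Euclidean gradient
W_next = riemannian_step(W, G)
print(np.allclose(W_next.T @ W_next, np.eye(4)))   # the update is still orthogonal
```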


20.