Similar documents
Found 20 similar documents (search time: 0 ms)
1.
Group Normalization
International Journal of Computer Vision - Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the...
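As a rough illustration of the technique named in the abstract, here is a minimal NumPy sketch of group normalization. The tensor layout (N, C, H, W), the group count, and the epsilon value are our own assumptions, and the learnable per-channel scale and shift parameters are omitted.

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    # x has shape (N, C, H, W); each group of C // num_groups channels
    # is normalized over its own (channels, H, W) statistics, per sample.
    N, C, H, W = x.shape
    g = x.reshape(N, num_groups, C // num_groups, H, W)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    return g.reshape(N, C, H, W)
```

Unlike batch normalization, the statistics here do not depend on the batch dimension, which is what makes the method usable with very small batches.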

2.
Orthonormality of the Discrete Cosine Transform
Building on a previously proposed class of definitions of the discrete cosine transform (DCT), this paper gives a concise proof of the orthonormality of the one-dimensional DCT, laying a foundation for proofs of similar problems.
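The orthonormality being proved can be checked numerically. Below is a sketch using the standard orthonormal DCT-II matrix; the choice of the DCT-II variant is our assumption, since the abstract does not name a specific member of the DCT class.

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II matrix: entry (k, n) for frequency k, sample n.
    n = np.arange(N)
    k = n.reshape(-1, 1)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)  # rescale the DC row so all rows have unit norm
    return C

C = dct_matrix(8)
# Orthonormality: the rows form an orthonormal basis, so C @ C.T is the identity.
assert np.allclose(C @ C.T, np.eye(8))
```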

3.
We present a new algorithm to compute the integral closure of a reduced Noetherian ring in its total ring of fractions. A modification, applicable in positive characteristic, where actually all computations are over the original ring, is also described. The new algorithm of this paper has been implemented in Singular, for localizations of affine rings with respect to arbitrary monomial orderings. Benchmark tests show that it is in general much faster than any other implementation of normalization algorithms known to us.

4.
Online writing is casual and informal, and morphs are a distinctive feature of this non-standard language. To evade censorship, express emotion, satirize, or entertain, people replace relatively formal, standard, or sensitive words with informal, non-sensitive substitutes; the new word that stands in for the original is called a morph. A morph and its corresponding original word (the target entity) coexist in informal and formal text respectively, and morphs may even seep into formal text. Morphs make writing livelier and help the associated events and news spread more widely. However, because a morph is usually a metaphor whose meaning is no longer the literal meaning of its surface characters, online text diverges sharply from formal text such as news. Identifying these morphs and resolving them to their target entities is therefore important for downstream natural language processing. This paper first introduces the definition and characteristics of morphs and the patterns by which they are generated, then surveys the main techniques and results in morph identification and normalization, and finally discusses future directions for the field.

5.
Normalization of DTDs
A well-designed DTD is essential for XML applications. This paper studies the problem from the perspective of eliminating data redundancy within documents. Functional dependencies, an important part of data semantics, are introduced into the XML setting; the functional dependencies given here may be either absolute or relative, and keys are just a special case of them. Logical implication and the corresponding inference rules are discussed, and the soundness and completeness of the rule set are proved. Based on functional dependencies, the paper proposes the concept of a normalized DTD and gives an algorithm for transforming a DTD into normalized form.

6.
This paper offers a systematic account of techniques to infer strong normalization from weak normalization that make use of syntactic translations from λ-terms to λI-terms. We present variants of such techniques due to Klop, Sørensen, Xi, Gandy, and Loader. We show that all the translations, in some cases via adjustments, are special cases of a generic scheme of translations, known as permutative inner interpretations. Having established this, which is easy, the fact that all the translations can be used to reduce strong normalization to weak normalization can be obtained from a single, general result concerning permutative inner interpretations. Furthermore, we show that each of the translations can be obtained as the composition of Klop's well-known translation and some other translation of independent interest. For instance, in the case of Xi's translation, this other translation is a thunkification translation. Finally, we compare the above techniques in some detail to the intimately related techniques of de Vrijer and Girard. A main contribution of the paper is to compare the techniques of Gandy, Klop, and Girard in detail. For instance, we prove the main property of Gandy's translation without reference to functionals, using instead ideas from Klop's translation.

7.
Schema mappings are high-level specifications that describe the relationship between database schemas. They are an important tool in several areas of database research, notably in data integration and data exchange. However, a concrete theory of schema mapping optimization including the formulation of optimality criteria and the construction of algorithms for computing optimal schema mappings is completely lacking to date. The goal of this work is to fill this gap. We start by presenting a system of rewrite rules to minimize sets of source-to-target tuple-generating dependencies. Moreover, we show that the result of this minimization is unique up to variable renaming. Hence, our optimization also yields a schema mapping normalization. By appropriately extending our rewrite rule system, we also provide a normalization of schema mappings containing equality-generating target dependencies. An important application of such a normalization is in the area of defining the semantics of query answering in data exchange, since several definitions in this area depend on the concrete syntactic representation of the mappings. This is, in particular, the case for queries with negated atoms and for aggregate queries. The normalization of schema mappings allows us to eliminate the effect of the concrete syntactic representation of the mapping from the semantics of query answering. We discuss in detail how our results can be fruitfully applied to aggregate queries.

8.
A rewrite closure is an extension of a term rewrite system with new rules, usually deduced by transitivity. Rewrite closures have the nice property that all rewrite derivations can be transformed into derivations of a simple form. This property has been useful for proving decidability results in term rewriting. Unfortunately, when the term rewrite system is not linear, the construction of a rewrite closure is quite challenging. In this paper, we construct a rewrite closure for term rewrite systems that satisfy two properties: the right-hand side term in each rewrite rule contains no repeated variable (right-linear) and contains no variable occurring at depth greater than one (right-shallow). The left-hand side term is unrestricted, and in particular, it may be non-linear. As a consequence of the rewrite closure construction, we are able to prove decidability of the weak normalization problem for right-linear right-shallow term rewrite systems. Proving this result also requires tree automata theory. We use the fact that right-shallow right-linear term rewrite systems are regularity preserving. Moreover, their set of normal forms can be represented with a tree automaton with disequality constraints, and emptiness of this kind of automata, as well as its generalization to reduction automata, is decidable. A preliminary version of this work was presented at LICS 2009 (Creus 2009).

9.
Relational Database Design and Normalization
The database schema directly determines the integrity, accuracy, and consistency of the data, and has a critical impact on database performance. The goal of relational database design is to choose, from the possible combinations of relation schemas, a set of relation schemas forming a database schema that stores no unnecessary duplicate information yet makes information easy to retrieve. In practice, a good database schema is obtained by designing schemas that satisfy a particular normal form. 3NF is generally considered the best balance among performance, extensibility, and data integrity, so database designs are usually required to reach 3NF.

10.
In addition to ordinary words and names, real text contains non-standard "words" (NSWs), including numbers, abbreviations, dates, currency amounts and acronyms. Typically, one cannot find NSWs in a dictionary, nor can one find their pronunciation by an application of ordinary "letter-to-sound" rules. Non-standard words also have a greater propensity than ordinary words to be ambiguous with respect to their interpretation or pronunciation. In many applications, it is desirable to "normalize" text by replacing the NSWs with the contextually appropriate ordinary word or sequence of words. Typical technology for text normalization involves sets of ad hoc rules tuned to handle one or two genres of text (often newspaper-style text) with the expected result that the techniques do not usually generalize well to new domains. The purpose of the work reported here is to take some initial steps towards addressing deficiencies in previous approaches to text normalization. We developed a taxonomy of NSWs on the basis of four rather distinct text types—news text, a recipes newsgroup, a hardware-product-specific newsgroup, and real-estate classified ads. We then investigated the application of several general techniques including n-gram language models, decision trees and weighted finite-state transducers to the range of NSW types, and demonstrated that a systematic treatment can lead to better results than have been obtained by the ad hoc treatments that have typically been used in the past. For abbreviation expansion in particular, we investigated both supervised and unsupervised approaches. We report results in terms of word-error rate, which is standard in speech recognition evaluations, but which has only occasionally been used as an overall measure in evaluating text normalization systems.

11.
12.
13.
Abstract

Sets of Thematic Mapper (TM) imagery taken over the Washington DC metropolitan area during the months of November, March and May were converted into a form of ground reflectance imagery. This conversion was accomplished by adjusting the incident sunlight and view angles and by applying a pixel-by-pixel correction for atmospheric effects. Seasonal colour changes of the area can be better observed when such normalization is applied to space imagery taken in time series. In normalized imagery, the grey scale depicts variations in surface reflectance and tonal signature of multi-band colour imagery can be directly interpreted for quantitative information of the target.
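The kind of conversion the abstract describes can be suggested by a simplified per-pixel sketch. All coefficients below (gain, offset, path radiance, solar irradiance) are illustrative placeholders, not actual TM calibration values, and real atmospheric correction is far more involved than a single path-radiance subtraction.

```python
import numpy as np

def to_reflectance(dn, gain, offset, sun_irradiance, sun_zenith_deg,
                   path_radiance=0.0):
    """Convert raw digital numbers to an approximate ground reflectance.

    Hypothetical coefficients only; shown to illustrate the pipeline:
    sensor calibration -> path-radiance correction -> sun-angle adjustment.
    """
    radiance = gain * dn + offset                   # sensor calibration
    radiance = radiance - path_radiance             # crude atmospheric correction
    cos_theta = np.cos(np.radians(sun_zenith_deg))  # incident sunlight angle
    return np.pi * radiance / (sun_irradiance * cos_theta)
```

Applied pixel-by-pixel to each band, this removes illumination and (very roughly) atmospheric variation, so that images from different seasons become directly comparable.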

14.
This paper describes a noisy-channel approach for the normalization of informal text, such as that found in emails, chat rooms, and SMS messages. In particular, we introduce two character-level methods for the abbreviation modeling aspect of the noisy channel model: a statistical classifier using language-based features to decide whether a character is likely to be removed from a word, and a character-level machine translation model. A two-phase approach is used; in the first stage the possible candidates are generated using the selected abbreviation model and in the second stage we choose the best candidate by decoding using a language model. Overall we find that this approach works well and is on par with current research in the field.
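The two-phase structure (candidate generation, then language-model decoding) can be sketched at word level with a toy example. The abbreviation lexicon and unigram probabilities below are entirely made up, and a real system would use the paper's character-level abbreviation models and full sequence decoding rather than per-token unigram scores.

```python
# Hypothetical abbreviation lexicon and unigram "language model" scores.
ABBREV = {"u": ["you", "u"], "r": ["are", "r"], "2": ["to", "too", "two", "2"]}
UNIGRAM = {"you": 0.05, "are": 0.04, "to": 0.06, "too": 0.01, "two": 0.005,
           "u": 1e-6, "r": 1e-6, "2": 1e-4}

def normalize_token(tok, lm=UNIGRAM):
    # Phase 1: generate candidates from the abbreviation model.
    candidates = ABBREV.get(tok, [tok])
    # Phase 2: choose the best candidate under the language model.
    return max(candidates, key=lambda w: lm.get(w, 1e-9))

assert [normalize_token(t) for t in "u r 2".split()] == ["you", "are", "to"]
```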

15.
The goal of relational database design is to obtain a set of relation schemas forming an optimal database schema, so that we neither store unnecessary duplicate data nor have difficulty querying data. These relation schemas should first be normalized using relational normalization theory to build a good logical structure, and then, as needed, denormalized from the perspective of improving database performance. Normalization and denormalization each have advantages and disadvantages, which must be weighed against actual requirements.

16.
This paper analyzes the anomalous data dependencies present in XML documents and proposes the normal forms and rules corresponding to normalization.

17.
Causality-graph theory is a reasoning method based on probability theory. Building on an analysis of the development of causality-graph theory and its open problems, this paper introduces fuzzy mathematics into the theory, yielding the fuzzy causality graph, which overcomes the difficulty of assigning precise probabilities in causality-graph analysis and extends the theory into the fuzzy domain. Focusing on event probabilities given as trapezoidal fuzzy numbers, the paper proposes operators for fuzzy causality graphs, derives a formula for fuzzy conditional probability, and discusses a normalization method for fuzzy probabilities. Finally, a simulation experiment on a nuclear power plant subsystem yields results consistent with reality, showing that the normalization method is feasible. The study shows that fuzzy causality graphs are effective for fault analysis and are more flexible and adaptable than the original causality-graph method.

18.
The goal of relational database design is to obtain a set of relation schemas forming an optimal database schema, so that we neither store unnecessary duplicate data nor have difficulty querying data. These relation schemas should first be normalized using relational normalization theory to build a good logical structure, and then, as needed, denormalized from the perspective of improving database performance. Normalization and denormalization each have advantages and disadvantages, which must be weighed against actual requirements.

19.
We present a systematic construction of a reduction-free normalization function. Starting from a reduction-based normalization function, i.e., the transitive closure of a one-step reduction function, we successively subject it to refocusing (i.e., deforestation of the intermediate reduced terms), simplification (i.e., fusing auxiliary functions), refunctionalization (i.e., Church encoding), and direct-style transformation (i.e., the converse of the CPS transformation). We consider two simple examples and treat them in detail: for the first one, arithmetic expressions, we construct an evaluation function; for the second one, terms in the free monoid, we construct an accumulator-based flatten function. The resulting two functions are traditional reduction-free normalization functions. The construction builds on previous work on refocusing and on a functional correspondence between evaluators and abstract machines. It is also reversible.
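The end product of the abstract's first example, an evaluation function for arithmetic expressions, can be suggested by a toy sketch. The nested-tuple representation and the single "add" operator are our own simplifications, not the paper's construction; the point is that the normalizer evaluates directly rather than repeatedly applying a one-step reduction.

```python
# A reduction-free normalizer for toy arithmetic expressions: instead of
# taking the transitive closure of one-step reduction, we evaluate directly.
# An expression is either an int literal or a tuple ("add", e1, e2).

def evaluate(expr):
    if isinstance(expr, int):
        return expr
    op, e1, e2 = expr
    assert op == "add", "only 'add' is supported in this sketch"
    return evaluate(e1) + evaluate(e2)

assert evaluate(("add", 1, ("add", 2, 3))) == 6
```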

20.
Artificial intelligence is now widely applied across many fields, with notable results. Data normalization is an important step in deploying AI applications: it helps prevent neural networks from modeling data incorrectly because of heterogeneous scales and units. In big-data scenarios, a considerable portion of data arrives at the training site sequentially as streams, so normalization in streaming settings is a pressing open problem. Existing surveys of normalization mostly cover only batch-data normalization and lack any summary of normalization methods for streaming data, limiting their usefulness as references. Building on batch-normalization research, this paper systematically collects and analyzes the literature on stream-data normalization, proposes a classification of normalization methods for streaming data, and divides normalization methods into batch-data methods and stream-data methods. It compares the principles and strengths of these methods and the main problems they address, and discusses future research directions for data normalization in different scenarios.
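One common building block for normalizing streaming data is an online estimate of the mean and variance, e.g. Welford's algorithm, which updates statistics one sample at a time without storing the stream. This minimal sketch illustrates the general idea only; it is not a specific method from the surveyed literature.

```python
class StreamingNormalizer:
    """Online z-score normalization via Welford's running mean/variance."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        # Incorporate one new sample; O(1) time and memory per sample.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def normalize(self, x):
        # Z-score of x under the statistics seen so far.
        if self.n < 2:
            return 0.0
        std = (self.m2 / (self.n - 1)) ** 0.5
        return (x - self.mean) / std if std > 0 else 0.0
```

Unlike batch normalization over a fixed dataset, the statistics here drift as new samples arrive, which is exactly the trade-off the stream-data methods in this survey have to manage.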


Copyright©北京勤云科技发展有限公司  京ICP备09084417号