Similar Documents
Found 20 similar documents (search time: 218 ms)
1.
In this paper, we present a fuzzy logic modulation classifier that works in nonideal environments in which it is difficult or impossible to use precise probabilistic methods. We first transform a general pattern classification problem into one of function approximation, so that fuzzy logic systems (FLS) can be used to construct a classifier; then, we introduce the concepts of fuzzy modulation type and fuzzy decision and develop a nonsingleton fuzzy logic classifier (NSFLC) by using an additive FLS as a core building block. Our NSFLC uses 2D fuzzy sets whose membership functions are isotropic, so that they are well suited for a modulation classifier (MC). We establish that our NSFLC, although completely based on heuristics, reduces to the maximum-likelihood modulation classifier (ML MC) in ideal conditions. In our application of the NSFLC to MC in a mixture of α-stable and Gaussian noises, we demonstrate that our NSFLC performs consistently better than the ML MC and gives the same performance as the ML MC when no impulsive noise is present.
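As a rough illustration of the idea (not the paper's actual NSFLC: the two candidate constellations, the Gaussian membership shape, and the width σ below are all assumptions), the sketch scores received I/Q samples against each candidate modulation type using isotropic 2D membership functions and picks the type with the largest additive score:

```python
import math
import random

def membership(point, center, sigma=0.35):
    """Isotropic 2D Gaussian membership grade (an assumed shape;
    the paper only requires the membership functions to be isotropic)."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

# Hypothetical candidate modulation types and their constellation points.
CONSTELLATIONS = {
    "BPSK": [(-1.0, 0.0), (1.0, 0.0)],
    "QPSK": [(-0.707, -0.707), (-0.707, 0.707),
             (0.707, -0.707), (0.707, 0.707)],
}

def fuzzy_classify(samples):
    """Additive-FLS-style decision: sum, over all received samples, the best
    membership grade of each candidate type, then pick the largest total."""
    scores = {mod: sum(max(membership(s, c) for c in points) for s in samples)
              for mod, points in CONSTELLATIONS.items()}
    return max(scores, key=scores.get)

# Noisy BPSK-like received samples.
random.seed(1)
rx = [(random.gauss(1.0, 0.2), random.gauss(0.0, 0.2)) for _ in range(50)] \
   + [(random.gauss(-1.0, 0.2), random.gauss(0.0, 0.2)) for _ in range(50)]
print(fuzzy_classify(rx))
```

Because the additive score degrades gracefully with outliers, this kind of heuristic decision is less sensitive to heavy-tailed (impulsive) noise than a likelihood built on a Gaussian assumption.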

2.
The polymorphic environment calculus is a polymorphic lambda calculus which enables us to treat environments as first-class citizens. In the calculus, environments are formalized as explicit substitutions, and the substitutions are included in the set of terms of the calculus. First, we introduce an untyped environment calculus, and we present a semantics of the calculus as a translation into the lambda calculus. Second, we propose a polymorphic type system for the environment calculus based on Damas-Milner's ML-polymorphic type system. In ML, polymorphism is allowed only in let-expressions; in the polymorphic environment calculus, polymorphism is provided with environment compositions. We prove a subject-reduction theorem for the type system. Third, a type-inference algorithm is given for the polymorphic environment calculus, and we establish its soundness, termination, and a principal-typing theorem.
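The key operation, environment composition, can be sketched concretely. Below is a minimal model (my own toy term representation, not the calculus itself: variables are strings, and shadowed bindings are simply dropped rather than properly α-renamed) showing that composing two environments behaves like applying them in sequence:

```python
def subst(term, env):
    """Apply an environment (an explicit substitution, modeled as a dict)
    to a term. Terms: str = variable, ('app', f, a), ('lam', x, body)."""
    if isinstance(term, str):
        return env.get(term, term)
    if term[0] == 'app':
        return ('app', subst(term[1], env), subst(term[2], env))
    if term[0] == 'lam':
        # Naive scoping: drop the binding shadowed by the lambda.
        inner = {k: v for k, v in env.items() if k != term[1]}
        return ('lam', term[1], subst(term[2], inner))

def compose(e1, e2):
    """Environment composition: applying compose(e1, e2) to a term
    is the same as applying e2 first and then e1."""
    out = {k: subst(v, e1) for k, v in e2.items()}
    for k, v in e1.items():
        out.setdefault(k, v)
    return out
```

Treating `compose` itself as a term of the language is what "environments as first-class citizens" means, and it is exactly where the calculus attaches polymorphism, in place of ML's let-expressions.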

3.
Morphologically rich languages pose a challenge for statistical machine translation (SMT). This challenge is magnified when translating into a morphologically rich language. In this work we address this challenge in the framework of a broad-coverage English-to-Arabic phrase-based statistical machine translation (PBSMT) system. We explore the largest-to-date set of Arabic segmentation schemes, ranging from full word form to fully segmented forms, and examine the effects on system performance. Our results show a difference of 2.31 BLEU points, averaged over all test sets, between the best and worst segmentation schemes, indicating that the choice of segmentation scheme has a significant effect on the performance of an English-to-Arabic PBSMT system in a large data scenario. We show that a simple segmentation scheme can perform as well as the best and more complicated segmentation schemes. An in-depth analysis of the effect of segmentation choices on the components of a PBSMT system reveals that text fragmentation has a negative effect on the perplexity of the language models and that aggressive segmentation can significantly increase the size of the phrase table and the uncertainty in choosing the candidate translation phrases during decoding. An investigation conducted on the output of the different systems reveals the complementary nature of the outputs and the great potential in combining them.
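The trade-off the abstract describes, smaller vocabulary but longer token sequences, can be seen even in a toy clitic segmenter (the affixes and pseudo-transliterated tokens below are purely illustrative, not the actual Arabic schemes studied):

```python
def segment(token, prefixes=("wa", "al"), suffix="ha"):
    """Toy clitic segmentation: split an assumed conjunction/article prefix
    and a pronoun suffix off a token, marking boundaries with '+'."""
    parts = []
    for p in prefixes:
        if token.startswith(p) and len(token) > len(p) + 2:
            parts.append(p + "+")
            token = token[len(p):]
    if token.endswith(suffix) and len(token) > len(suffix) + 2:
        parts.append(token[: -len(suffix)])
        parts.append("+" + suffix)
    else:
        parts.append(token)
    return parts

corpus = ["wabayt", "albayt", "baytha", "bayt", "wabaytha"]
segmented = [p for t in corpus for p in segment(t)]
print(len(set(corpus)), "surface types ->", len(set(segmented)), "segmented types")
print(len(corpus), "tokens ->", len(segmented), "tokens")
```

The vocabulary shrinks (fewer out-of-vocabulary phrase-table misses), while the token count doubles, which is precisely the fragmentation effect the paper links to worse language-model perplexity under aggressive segmentation.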

4.
5.
We present an unboxed operational semantics for an ML-style polymorphic language. Unlike conventional formalisms, the proposed semantics accounts for actual representations of run-time objects of various types, and supports a refined notion of polymorphism that allows polymorphic functions to be applied directly to values of various different representations. In particular, polymorphic functions can receive multi-word constants such as floating-point numbers without requiring them to be boxed (i.e., heap-allocated). This semantics will serve as an alternative basis for implementing polymorphic languages. The development of the semantics is based on the technique of type-inference-based compilation for polymorphic record operations [20]. We first develop a lower-level calculus, called a polymorphic unboxed calculus, that accounts for direct manipulation of unboxed values in a polymorphic language. This unboxed calculus supports efficient value binding through integer representation of variables. Unlike de Bruijn indexes, our integer representation of a variable corresponds to the actual offset to the value in a run-time environment consisting of objects of various sizes. Polymorphism is supported through an abstraction mechanism over argument sizes. We then develop an algorithm that translates ML into the polymorphic unboxed calculus by using type information obtained through type inference. At the time of polymorphic let binding, the necessary size abstractions are inserted so that a polymorphic function is translated into a function that is polymorphic not only in the type of the argument but also in its size. The ML type system is shown to be sound with respect to the operational semantics realized by the translation.
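The offset-based variable representation is easy to picture: unlike a de Bruijn index (position in a list of bindings), a variable here names a word offset into a flat environment whose entries have differing sizes. A minimal sketch (word sizes are my assumption, e.g. 1 word for an int or pointer, 2 words for an unboxed float):

```python
def layout(bindings):
    """Compute word offsets for an environment of (name, size_in_words)
    bindings. Variables are then represented by these integer offsets;
    a size-polymorphic function would recompute this per instantiation."""
    offsets, off = {}, 0
    for name, size in bindings:
        offsets[name] = off
        off += size
    return offsets, off

# x: 1-word int, f: 2-word unboxed float, y: 1-word pointer (assumed sizes)
offsets, total = layout([("x", 1), ("f", 2), ("y", 1)])
print(offsets, "total words:", total)
```

Abstracting over the `size` arguments is the paper's "abstraction mechanism over argument sizes": the same polymorphic function body works whether `f`'s slot holds one word or two, because offsets are computed from the sizes supplied at instantiation time.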

6.
Region-Based Memory Management
This paper describes a memory management discipline for programs that perform dynamic memory allocation and de-allocation. At runtime, all values are put into regions. The store consists of a stack of regions. All points of region allocation and de-allocation are inferred automatically, using a type and effect based program analysis. The scheme does not assume the presence of a garbage collector. The scheme was first presented in 1994 (M. Tofte and J.-P. Talpin, in “Proceedings of the 21st ACM SIGPLAN–SIGACT Symposium on Principles of Programming Languages,” pp. 188–201); subsequently, it has been tested in The ML Kit with Regions, a region-based, garbage-collection free implementation of the Standard ML Core language, which includes recursive datatypes, higher-order functions and updatable references (L. Birkedal, M. Tofte, and M. Vejlstrup (1996), in “Proceedings of the 23rd ACM SIGPLAN–SIGACT Symposium on Principles of Programming Languages,” pp. 171–183). This paper defines a region-based dynamic semantics for a skeletal programming language extracted from Standard ML. We present the inference system which specifies where regions can be allocated and de-allocated and a detailed proof that the system is sound with respect to a standard semantics. We conclude by giving some advice on how to write programs that run well on a stack of regions, based on practical experience with the ML Kit.
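The stack-of-regions store can be sketched in a few lines (a toy model of the runtime discipline only; the paper's real contribution, the type-and-effect inference that decides where `letregion` goes, is not modeled here):

```python
class RegionStack:
    """Toy stack-of-regions store: letregion-style allocation with no
    garbage collector. Each region is a growable list of values."""
    def __init__(self):
        self.regions = []

    def letregion(self, body):
        """Push a fresh region, run body with its index, then pop the
        region, deallocating everything placed in it at once."""
        self.regions.append([])
        try:
            return body(len(self.regions) - 1)
        finally:
            self.regions.pop()

    def alloc(self, rho, value):
        """Allocate value in region rho; return a (region, offset) pointer."""
        self.regions[rho].append(value)
        return (rho, len(self.regions[rho]) - 1)

store = RegionStack()
n = store.letregion(lambda r: (store.alloc(r, "cons"),
                               store.alloc(r, "nil"),
                               len(store.regions[r]))[-1])
print(n, store.regions)  # region contents are gone after letregion exits
```

The practical advice the paper closes with amounts to arranging programs so that values with similar lifetimes end up in the same region, making this stack discipline cheap.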

7.
A new numerical scheme is presented for computing the strict maximum likelihood (ML) solution of geometric fitting problems having an implicit constraint. Our approach is orthogonal projection of observations onto a parameterized surface defined by the constraint. Assuming a linearly separable nonlinear constraint, we show that a theoretically global solution can be obtained by iterative Sampson error minimization. Our approach is illustrated by ellipse fitting and fundamental matrix computation. Our method also encompasses optimal correction, e.g., computing perpendiculars to an ellipse and triangulating stereo images. A detailed discussion is given of technical and practical issues concerning our approach.
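For the ellipse-fitting case, the quantity being iterated is the Sampson error: a first-order approximation of the squared orthogonal distance from a point to a conic. A sketch of just that evaluation (the paper's full scheme re-minimizes this iteratively; only the error term is shown here):

```python
def sampson_error(conic, x, y):
    """Sampson (gradient-weighted) approximation of the squared orthogonal
    distance from (x, y) to the conic
        a x^2 + b x y + c y^2 + d x + e y + f = 0,
    i.e. f(x)^2 / ||grad f(x)||^2."""
    a, b, c, d, e, f = conic
    val = a * x * x + b * x * y + c * y * y + d * x + e * y + f
    gx = 2 * a * x + b * y + d      # df/dx
    gy = b * x + 2 * c * y + e      # df/dy
    return val * val / (gx * gx + gy * gy)

unit_circle = (1.0, 0.0, 1.0, 0.0, 0.0, -1.0)
print(sampson_error(unit_circle, 1.0, 0.0))   # on the conic: error 0
print(sampson_error(unit_circle, 2.0, 0.0))   # off the conic: positive
```

A single Sampson evaluation underestimates the true orthogonal distance for far-away points (here 0.5625 versus the true squared distance 1), which is why repeating the minimization, as the paper does, is needed to reach the strict ML solution.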

8.
Staging is a powerful language construct that allows a program at one stage of evaluation to manipulate and specialize a program to be executed at a later stage. We propose a new staged language calculus, ??ML??, which extends the programmability of staged languages in two directions. First, ??ML?? supports dynamic type specialization: types can be dynamically constructed, abstracted, and passed as arguments, while preserving decidable typechecking via a System F??-style semantics combined with a restricted form of ?? ?? -style runtime type construction. With dynamic type specialization the data structure layout of a program can be optimized via staging. Second, ??ML?? works in a context where different stages of computation are executed in different process spaces, a property we term staged process separation. Programs at different stages can directly communicate program data in ??ML?? via a built-in serialization discipline. The language ??ML?? is endowed with a metatheory including type preservation, type safety, and decidability as demonstrated constructively by a sound type checking algorithm. While our language design is general, we are particularly interested in future applications of staging in resource-constrained and embedded systems: these systems have limited space for code and data, as well as limited CPU time, and specializing code for the particular deployment at hand can improve efficiency in all of these dimensions. The combination of dynamic type specialization and staging across processes greatly increases the utility of staged programming in these domains. We illustrate this via wireless sensor network programming examples.

9.
Due to the ability of sensor nodes to collaborate, time synchronization is essential for many sensor network operations. With the aid of hardware capabilities, this work presents a novel time synchronization method, which employs a dual-clock delayed-message approach, for energy-constrained wireless sensor networks (WSNs). To conserve WSN energy, this study adopts the flooding time synchronization scheme based on one-way timing messages. Via the proposed approach, the maximum-likelihood (ML) estimation of time parameters, such as clock skew and clock offset, can be obtained for time synchronization. Additionally, with the proposed scheme, the clock skew and offset estimation problem will be transformed into a problem independent of random delay and propagation delay. The ML estimation of link propagation delay, which can be used for localization systems in the proposed scenario, is also obtained. In addition to good performance, the proposed method has low complexity.
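The core of skew/offset estimation from one-way timing messages can be sketched as a line fit: if the receiver timestamps satisfy r_i ≈ skew·s_i + offset plus i.i.d. Gaussian jitter, the ML estimate coincides with ordinary least squares (a simplified model; the paper's scheme additionally separates out random and propagation delay, which this sketch does not):

```python
def estimate_clock(send, recv):
    """Least-squares fit recv ≈ skew*send + offset. Under an assumed
    i.i.d. Gaussian jitter model this is the ML estimate of the clock
    parameters from one-way timing messages."""
    n = len(send)
    ms = sum(send) / n
    mr = sum(recv) / n
    cov = sum((s - ms) * (r - mr) for s, r in zip(send, recv))
    var = sum((s - ms) ** 2 for s in send)
    skew = cov / var
    offset = mr - skew * ms
    return skew, offset

# Noiseless example: true skew 1.001, true offset 5 s.
skew, offset = estimate_clock([0, 1, 2, 3], [5.0, 6.001, 7.002, 8.003])
print(skew, offset)
```

With noiseless timestamps the fit recovers the true parameters exactly; with jitter it converges as more flooding messages are collected, which is what keeps the per-message energy cost low.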

10.
In a heterogeneous database system, a query for one type of database system (i.e., a source query) may have to be translated to an equivalent query (or queries) for execution in a different type of database system (i.e., a target query). Usually, for a given source query, there is more than one possible target query translation. Some of them can be executed more efficiently than others by the receiving database system. Developing a translation procedure for each type of database system is time-consuming and expensive. We abstract a generic hierarchical database system (GHDBS) which has properties common to database systems whose schema contains hierarchical structures (e.g., System 2000, IMS, and some object-oriented database systems). We develop principles of query translation with GHDBS as the receiving database system. Translation into any specific system can be accomplished by a translation into the general system with refinements to reflect the characteristics of the specific system. We develop rules that guarantee correctness of the target queries, where correctness means that the target query is equivalent to the source query. We also provide rules that can guarantee a minimum number of target queries in cases when one source query needs to be translated to multiple target queries. Since the minimum number of target queries implies the minimum number of times the underlying system is invoked, efficiency is taken into consideration.

11.
This paper presents an identification scheme for sparse FIR systems with quantised data. We consider a general quantisation scheme, which includes the commonly deployed static quantiser as a special case. To tackle the sparsity issue, we utilise a Bayesian approach, where an ℓ1 a priori distribution for the parameters is used as a mechanism to promote sparsity. The general framework used to solve the problem is maximum likelihood (ML). The ML problem is solved by using a generalised expectation maximisation algorithm.
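The ℓ1 prior makes the MAP/ML problem an ℓ1-penalised least-squares (lasso-type) problem. A minimal sketch of that core, solved by iterative shrinkage-thresholding (this is a standard substitute solver on unquantised data, not the paper's generalised EM algorithm, which additionally handles the quantiser):

```python
def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 penalty."""
    return max(x - t, 0.0) if x > 0 else min(x + t, 0.0)

def ista(X, y, lam, step, iters=500):
    """Iterative shrinkage-thresholding for
        min_w 0.5*||y - X w||^2 + lam*||w||_1,
    i.e. the MAP estimate of FIR taps under an l1 (Laplacian) prior."""
    n = len(X[0])
    w = [0.0] * n
    for _ in range(iters):
        resid = [sum(Xi[j] * w[j] for j in range(n)) - yi
                 for Xi, yi in zip(X, y)]
        grad = [sum(X[i][j] * resid[i] for i in range(len(y)))
                for j in range(n)]
        w = [soft(w[j] - step * grad[j], step * lam) for j in range(n)]
    return w

# Tiny orthonormal example: second tap is below the threshold -> exactly 0.
w = ista([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.05], lam=0.1, step=0.5)
print(w)
```

Note how the small second coefficient is driven exactly to zero rather than merely shrunk, which is the mechanism by which the ℓ1 prior "promotes sparsity" in the estimated FIR taps.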

12.
We set out in this study to review a vast amount of recent literature on machine learning (ML) approaches to predicting financial distress (FD), including supervised, unsupervised and hybrid supervised–unsupervised learning algorithms. Four supervised ML models including the traditional support vector machine (SVM), recently developed hybrid associative memory with translation (HACT), hybrid GA-fuzzy clustering and extreme gradient boosting (XGBoost) were compared in prediction performance to the unsupervised classifier deep belief network (DBN) and the hybrid DBN-SVM model, whereby a total of sixteen financial variables were selected from the financial statements of the publicly-listed Taiwanese firms as inputs to the six approaches. Our empirical findings, covering the 2010–2016 sample period, demonstrated that among the four supervised algorithms, the XGBoost provided the most accurate FD prediction. Moreover, the hybrid DBN-SVM model was able to generate more accurate forecasts than the use of either the SVM or the classifier DBN in isolation.

13.
A blind adaptive scheme is proposed for joint maximum likelihood (ML) channel estimation and data detection of single-input multiple-output (SIMO) systems. The joint ML optimisation over channel and data is decomposed into an iterative optimisation loop. An efficient global optimisation algorithm called the repeated weighted boosting search is employed at the upper level to optimally identify the unknown SIMO channel model, and the Viterbi algorithm is used at the lower level to produce the maximum likelihood sequence estimation of the unknown data sequence. A simulation example is used to demonstrate the effectiveness of this joint ML optimisation scheme for blind adaptive SIMO systems.
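The lower-level step, Viterbi ML sequence estimation given a channel hypothesis, can be sketched for a toy single-antenna case (BPSK over an assumed known 2-tap channel; the paper's upper level would be searching over the taps `h` that this sketch takes as given):

```python
def viterbi_mlse(recv, h, symbols=(-1.0, 1.0)):
    """Viterbi ML sequence estimation for a 2-tap ISI channel
        r_k = h[0]*s_k + h[1]*s_{k-1} + noise,
    with BPSK symbols. State = previous symbol; metric = accumulated
    squared error. Returns the minimum-metric symbol sequence."""
    states = {s: (0.0, []) for s in symbols}  # unknown initial symbol
    for r in recv:
        nxt = {}
        for cur in symbols:
            nxt[cur] = min(
                (cost + (r - h[0] * cur - h[1] * prev) ** 2, path + [cur])
                for prev, (cost, path) in states.items()
            )
        states = nxt
    return min(states.values())[1]

# Noiseless received samples for the sequence +1, -1, +1, +1 (s_{-1} = +1).
print(viterbi_mlse([1.5, -0.5, 0.5, 1.5], (1.0, 0.5)))
```

In the noiseless case the zero-metric path is unique and the transmitted sequence is recovered exactly; the joint scheme alternates this detection step with the boosting-search update of the channel estimate.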

14.
The effective integration of MT technology into computer-assisted translation tools is a challenging topic both for academic research and the translation industry. In particular, professional translators consider the ability of MT systems to adapt to the feedback provided by them to be crucial. In this paper, we propose an adaptation scheme to tune a statistical MT system to a translation project using small amounts of post-edited texts, like those generated by a single user in even just one day of work. The same scheme can be applied on a larger scale in order to focus general purpose models towards the specific domain of interest. We assess our method on two domains, namely information technology and legal, and four translation directions, from English to French, Italian, Spanish and German. The main outcome is that our adaptation strategy can be very effective provided that the seed data used for adaptation is ‘close enough’ to the remaining text to be translated; otherwise, MT quality neither improves nor worsens, thus showing the robustness of our method.

15.
邹志斌, 李允, 张晓先. Computer Engineering (《计算机工程》), 2014, (1): 280–282, 286
An implementation of the TTCN-3 data system must not only conform to the TTCN-3 standard but also support features such as data-type compatibility. To address this problem, this paper presents a scheme for translating the TTCN-3 data system into Java. Exploiting Java's object-oriented features such as inheritance and polymorphism, and drawing on the abstract factory design pattern, the generated code is inspected and analysed to show that the scheme conforms to the TTCN-3 standard and clearly reflects the functional separation between data types and data values in the data system. The scheme supports compatibility between different data types and comparison between data values, and is easy to extend.  相似文献

16.
Compilers that have been formally verified in theorem provers are often not directly usable, because the formalization language is not a general-purpose programming language or the formalization contains non-executable constructs. This paper takes a comprehensive, though simplified, model of Java, formalized in the Isabelle proof assistant, as its starting point and shows how core functions in the translation process (type checking and compilation) are defined and proved correct. From these, Isabelle's program extraction facility generates ML code that can be directly interfaced with other, possibly “unsafe” code.

17.
18.
We develop a formal proof of the ML type inference algorithm, within the Coq proof assistant. We are much concerned with methodology and reusability of such a mechanization. This proof is an essential step toward the certification of a complete ML compiler. In this paper we present the Coq formalization of the typing system and its inference algorithm. We establish formally the correctness and the completeness of the type inference algorithm with respect to the typing rules of the language. We describe and comment on the mechanized proofs.
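The heart of the ML type inference algorithm being verified is first-order unification with an occurs check. A sketch of that subroutine (an illustration of the standard algorithm only, with an assumed tuple encoding of types; it is not the Coq development itself):

```python
def resolve(t, subst):
    """Chase a type variable through the current substitution."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear in type t under subst?"""
    t = resolve(t, subst)
    if t == v:
        return True
    return not isinstance(t, str) and any(occurs(v, a, subst) for a in t[1:])

def unify(t1, t2, subst=None):
    """Robinson unification. Types: str = type variable,
    ('->', a, b) = function type, ('int',), ('bool',) = constructors.
    Returns the extended substitution or raises TypeError."""
    if subst is None:
        subst = {}
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):
        if occurs(t1, t2, subst):
            raise TypeError("occurs check failed")
        subst[t1] = t2
        return subst
    if isinstance(t2, str):
        return unify(t2, t1, subst)
    if t1[0] != t2[0] or len(t1) != len(t2):
        raise TypeError("constructor clash")
    for a, b in zip(t1[1:], t2[1:]):
        subst = unify(a, b, subst)
    return subst

# Unifying a -> int with bool -> b forces a = bool, b = int.
print(unify(('->', 'a', ('int',)), ('->', ('bool',), 'b')))
```

Soundness (the returned substitution really unifies), completeness (it is most general), and termination of exactly this kind of routine are the properties the paper establishes mechanically in Coq.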

19.
This paper reports practical experience in implementing Alice, an extension of Standard ML, on top of an existing implementation of Oz. This approach yields a high-quality implementation with little effort. The combination is an advanced programming system for both Oz and Alice, which offers more than either language on its own. Many thanks go to Ulrike Becker-Kornstaedt, Thorsten Brunklaus, Tobias Müller, and Christian Schulte for their comments on a previous version of this paper. For the numerous discussions regarding the details of the translation scheme, thanks go to Andreas Rossberg, implementor of the Alice compiler frontend. Finally, I'd like to thank the anonymous referees for their comments.

20.
This paper presents a focused and comprehensive literature survey on the use of machine learning (ML) in antenna design and optimization. An overview of the conventional computational electromagnetics and numerical methods used to gain physical insight into the design of antennas is first presented. The major aspects of ML are then presented, with a study of its different learning categories and frameworks. An overview and mathematical briefing of regression models built with ML algorithms is then given, with a focus on those applied in antenna synthesis and analysis. An in-depth overview of the different research papers discussing the design and optimization of antennas using ML is then reported, covering the different techniques and algorithms applied to generate antenna parameters based on desired radiation characteristics and other antenna specifications. The various investigated antennas are sorted by antenna type and configuration to assist readers who wish to work with a specific type of antenna using ML.
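The surrogate-regression idea behind many of the surveyed methods can be shown in miniature: train a regression model to map an antenna feature to its resonant frequency, replacing expensive full-wave simulation. The sketch below (my own toy setup: the idealised patch formula f ≈ c/(2L√εr) stands in for the simulator, εr = 4.4 is an assumed FR-4 substrate, and the model is plain one-feature least squares rather than the ANNs/SVMs used in practice):

```python
import math

C = 3e8  # speed of light, m/s

def patch_resonance(length_m, eps_r):
    """Idealised resonant frequency of a rectangular patch (textbook
    approximation, fringing fields ignored): f = c / (2 L sqrt(eps_r))."""
    return C / (2.0 * length_m * math.sqrt(eps_r))

def fit_linear(xs, ys):
    """One-feature least squares y ~ w*x (no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# "Simulate" a few designs, then fit a surrogate on the feature 1/(L*sqrt(eps_r)).
lengths = [0.02, 0.03, 0.04]
feats = [1.0 / (L * math.sqrt(4.4)) for L in lengths]
freqs = [patch_resonance(L, 4.4) for L in lengths]
w = fit_linear(feats, freqs)

# Predict an unseen design without re-running the "simulator".
pred = w * (1.0 / (0.025 * math.sqrt(4.4)))
print(pred, patch_resonance(0.025, 4.4))
```

Real surveyed pipelines differ only in scale: the training data comes from full-wave EM solvers, and the regressor has enough capacity to capture effects the closed-form formula ignores.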


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号