Similar documents
20 similar documents found
1.
徐妮妮  于海艳  肖志涛 《计算机应用》2010,30(10):2777-2780
A decimation-in-frequency (DIF) multidimensional vector-radix fast Fourier transform (FFT) algorithm is presented. By applying vector-radix-2 decimation in frequency to each dimension of the multidimensional frequency-domain signal, the general form of the fast algorithm's butterfly operation is derived. The algorithm applies to any integer number of dimensions; when the number of dimensions is 1, it reduces to the well-known decimation-in-frequency radix-2 FFT. To ease programming, the three-dimensional vector-radix DIF FFT is worked through as an example, and an implementation flow is given that generalizes readily to any integer number of dimensions. A comparison of computational cost shows that the DIF multidimensional vector-radix FFT requires fewer operations than the separable (row-column) multidimensional FFT.
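As a point of reference for the one-dimensional special case mentioned above, the following minimal sketch implements the classical radix-2 decimation-in-frequency FFT in NumPy and checks it against numpy.fft.fft; it illustrates the DIF butterfly only and is not the paper's multidimensional vector-radix algorithm.

```python
import numpy as np

def dif_fft(x):
    """Radix-2 decimation-in-frequency FFT; the length of x must be a power of two."""
    x = np.asarray(x, dtype=complex)
    n = x.size
    if n == 1:
        return x
    half = n // 2
    w = np.exp(-2j * np.pi * np.arange(half) / n)   # twiddle factors W_N^k
    top = x[:half] + x[half:]                       # feeds the even-indexed outputs
    bottom = (x[:half] - x[half:]) * w              # feeds the odd-indexed outputs
    X = np.empty(n, dtype=complex)
    X[0::2] = dif_fft(top)                          # X[2k]
    X[1::2] = dif_fft(bottom)                       # X[2k+1]
    return X

x = np.random.rand(16)
assert np.allclose(dif_fft(x), np.fft.fft(x))
```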

2.
Generating realistic animation of plant motion with precomputed mechanics requires effective simplification and compression of the deformation data. Exploiting the naturally hierarchical structure of plants and their motion, a dynamic geometry simplification algorithm based on levels of visual perception is proposed. Following the plant's tree-shaped hierarchy, the motion of each "subtree" is decomposed into low- and high-frequency components: the "low-frequency motion" captures the subtree's dominant movement, while the "high-frequency motion" holds the motion detail. Traversing the plant hierarchy breadth-first and combining the "low-frequency motions" of all subtrees down to a given "depth" (level) yields an approximation of the original motion at that level. Increasing the traversal depth gives a progressively finer approximation of the plant motion; decreasing it yields a progressively more abstract version, which in effect realizes level-of-detail control of plant motion. Experiments show that the algorithm simplifies and compresses plant motion efficiently and is well suited to generating realistic animations of large-scale plant scenes.
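A minimal sketch of the low/high-frequency split described above, applied to a single subtree's motion curve; the synthetic signal and the Butterworth filter settings are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

t = np.linspace(0.0, 10.0, 1000)
motion = np.sin(0.5 * t) + 0.15 * np.sin(8.0 * t)   # one subtree's swing angle over time

b, a = butter(4, 0.05)          # low-pass filter (assumed order and normalized cutoff)
low = filtfilt(b, a, motion)    # "low-frequency motion": the dominant movement
high = motion - low             # "high-frequency motion": detail dropped at coarse levels
```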

3.
To meet the wide demand for the fast Fourier transform in digital signal processing, and based on an analysis of the algorithm, an implementation scheme for an 8-point radix-2 decimation-in-time FFT processor is presented. The design is synthesized targeting a Xilinx xc3s1500-series device and the program is simulated with Modelsim SE 6.0. The experimental results show that the processor functions correctly and achieves high computation speed and precision.
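For reference, the decimation-in-time counterpart of the DIF sketch above, computed for 8 points in NumPy; it shows the arithmetic such a processor performs and is not a hardware description of the paper's design.

```python
import numpy as np

def dit_fft(x):
    """Radix-2 decimation-in-time FFT; the length of x must be a power of two."""
    x = np.asarray(x, dtype=complex)
    n = x.size
    if n == 1:
        return x
    even = dit_fft(x[0::2])
    odd = dit_fft(x[1::2]) * np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + odd, even - odd])   # butterfly combine

x = np.arange(8, dtype=float)
assert np.allclose(dit_fft(x), np.fft.fft(x))
```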

4.
Decimation describes the process of removing entities (such as polygons) from a geometric representation. The goal is to intelligently reduce the number of primitives required to accurately model the problem of interest. The work described in the article was originally motivated by the need for efficient and robust decimation of volume tessellations, that is, unstructured tetrahedrizations. Existing surface-based decimation schemes do not generalize to volumes. The technique allows local, dynamic vertex removal from an unstructured tetrahedrization while preserving the initial tessellation topology and boundary geometry. The research focuses on vertex removal methodology, not on the formulation of decimation criteria. In practice, criteria for removing vertices are application specific. The basis of the algorithm is a unique and general method to classify a triangle with respect to a nonconvex polygon. The resulting decimation algorithm (applicable to both surface and volume tessellations) is robust and efficient because it avoids floating-point classification computations.

5.
A correction to "The decimation-in-frequency RB FFT algorithm"
This paper first corrects the errors in the paper "The decimation-in-frequency RB FFT algorithm", and then gives another matrix factorization formula for the RB FFT algorithm. Computation with this formula requires fewer operations than either of the two formulas given in [1].

6.
The development of an automated health monitoring framework is critical for aviation system safety, especially considering the expected increase in air traffic over the next decade. Conventional approaches such as model-based and exceedance methods have a low detection accuracy and are limited to specific applications. This paper proposes a robust real-time health monitoring framework for detecting performance anomalies, which may impact system safety during flight operations, with high accuracy and generalized applicability. The proposed monitoring framework utilizes sensor data from commercial flight data recorders to predict possible flight performance anomalies. Decimation, a signal processing technique, in conjunction with Savitzky-Golay filtering is utilized to preprocess the dataset and mitigate sampling rate and noise issues that prevent direct usage of historical flight data. Correlation-based feature subset selection is subsequently performed, and these features are used to train a support vector machine that predicts flight performance. With this model, performance anomalies in the test data are automatically detected based on deviations from the predicted flight behavior. The proposed monitoring framework was demonstrated to detect performance anomalies in real-time and exhibited accurate detection capabilities with high computational efficiency.
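A minimal sketch of the preprocessing-plus-prediction chain described above, using scipy and scikit-learn stand-ins; the synthetic signal, decimation factor, filter settings and one-step-ahead feature are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from scipy.signal import decimate, savgol_filter
from sklearn.svm import SVR

rng = np.random.default_rng(0)
raw = np.sin(np.linspace(0.0, 20.0, 4000)) + 0.1 * rng.standard_normal(4000)

x = decimate(raw, q=8)                                # bring the record to a common rate
x = savgol_filter(x, window_length=21, polyorder=3)   # suppress sensor noise

features = x[:-1].reshape(-1, 1)                      # previous sample (illustrative feature)
target = x[1:]                                        # next flight-parameter value

model = SVR(kernel="rbf").fit(features, target)
residual = target - model.predict(features)
anomaly = np.abs(residual) > 3.0 * residual.std()     # flag deviations from predicted behaviour
```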

7.
The goal of this paper is to motivate and define yet another sorted logic, called SND. All the previous sorted logics that can be found in the Artificial Intelligence literature have been designed to be used in (completely) automated deduction. SND has been designed to be used in interactive theorem proving. Because of this shift of focus, SND satisfies three innovative design requirements: it is defined on top of a natural deduction calculus; it is a definitional extension of that calculus; and it is implemented on top of the calculus's implementation. In turn, because of this, SND has various innovative technical properties, among them: it allows us to deal with free variables; it has no notion of well-sortedness, nor of well-sortedness being a prerequisite of well-formedness; and its implementation is such that, in the default mode, the system behaves exactly as with the original unsorted calculus.

8.

With the rapid development of wireless sensor networks and continuous improvements in artificial intelligence-based solutions, the concept of ambient assisted living has been encouraged and adopted, owing to its widespread applications in smart homes and healthcare. In this regard, human activity recognition (HAR) and classification has drawn numerous researchers' attention because it improves quality of life. Before such methods are used in real-time scenarios, however, their performance on activities of daily living must be analysed on a benchmark data set. To this end, this work adopts activity classification algorithms and improves their accuracy further; the resulting algorithms can serve as a benchmark against which others' performance is analysed. The raw 3-axis accelerometer data are first preprocessed to remove noise and make them suitable for training and classification: a sliding-window algorithm and linear and Gaussian filters are applied to the raw data. Naïve Bayes (NB) and Decision Tree (DT) classifiers are then used to classify human activities such as sitting, standing, walking, sitting down and standing up. The results show maximum accuracies of 89.5% and 99.9% for the NB and DT classifiers, respectively, with the Gaussian filter. Furthermore, the obtained results are compared with counterpart algorithms to demonstrate the approach's effectiveness.
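A minimal sketch of the classification step described above: Gaussian smoothing of 3-axis accelerometer data, a sliding window, and a comparison of Naive Bayes and Decision Tree classifiers; the synthetic data, window length and filter width are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
acc = rng.standard_normal((5000, 3))              # raw 3-axis accelerometer samples
labels = rng.integers(0, 5, 5000)                 # sitting, standing, walking, ...

acc = gaussian_filter1d(acc, sigma=2, axis=0)     # Gaussian filter for noise removal

win = 50                                          # sliding-window length (assumed)
starts = range(0, len(acc) - win, win)
X = np.array([acc[i:i + win].ravel() for i in starts])
y = np.array([np.bincount(labels[i:i + win]).argmax() for i in starts])

for clf in (GaussianNB(), DecisionTreeClassifier()):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```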


9.
Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many pattern recognition problems. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers. Therefore, reducing those correlations while keeping the classifiers' performance levels high is an important area of research. In this article, we explore Input Decimation (ID), a method which selects feature subsets for their ability to discriminate among the classes and uses these subsets to decouple the base classifiers. We provide a summary of the theoretical benefits of correlation reduction, along with results of our method on two underwater sonar data sets, three benchmarks from the Proben1/UCI repositories, and two synthetic data sets. The results indicate that input decimated ensembles outperform ensembles whose base classifiers use all the input features, randomly selected subsets of features, or features created using principal components analysis, on a wide range of domains.
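A minimal sketch of the Input Decimation idea: each base classifier sees only the features most correlated with one class, and the members' probability outputs are averaged; the synthetic data, subset size and base learner are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_keep = 15                                     # features kept per base classifier (assumed)
members = []
for c in np.unique(y_tr):
    indicator = (y_tr == c).astype(float)       # "is class c" target
    corr = np.abs(np.corrcoef(X_tr.T, indicator)[-1, :-1])
    subset = np.argsort(corr)[-n_keep:]         # features most correlated with class c
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, subset], y_tr)
    members.append((subset, clf))

# the decoupled members are combined by averaging their class-probability outputs
probs = np.mean([clf.predict_proba(X_te[:, s]) for s, clf in members], axis=0)
print("ensemble accuracy:", (probs.argmax(axis=1) == y_te).mean())
```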

10.
Correspondence discriminant analysis (CDA) is a multivariate statistical method derived from discriminant analysis which can be used on contingency tables. We have used CDA to separate Gram negative bacteria proteins according to their subcellular location. The high resolution of the discrimination obtained makes this method a good tool to predict subcellular location when this information is not known. The main advantage of this technique is its simplicity. Indeed, by computing two linear formulae on amino acid composition, it is possible to classify a protein into one of the three classes of subcellular location we have defined. The CDA itself can be computed with the ADE-4 software package that can be downloaded, as well as the data set used in this study, from the Pôle Bio-Informatique Lyonnais (PBIL) server at http://pbil.univ-lyon1.fr.
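A minimal sketch of the underlying idea: represent each protein by its 20-dimensional amino-acid composition and assign one of three location classes with a linear discriminant. Scikit-learn's LDA is used here as a generic stand-in for CDA/ADE-4, and the data are synthetic placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
composition = rng.dirichlet(np.ones(20), size=300)   # amino-acid frequencies per protein
location = rng.integers(0, 3, 300)                   # three subcellular-location classes

lda = LinearDiscriminantAnalysis(n_components=2).fit(composition, location)
scores = lda.transform(composition)                  # the two discriminant axes
predicted = lda.predict(composition)                 # assigned location class
```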

11.
We present in this paper a logic programming specification language and its application to the formal specification of PROLOG dialects (Marseille-Edinburgh like dialects or parallel logic programs). In particular it is used in the standardization work of PROLOG. The specification language is based on normal clauses (definite clauses with possibly negative literals in the body) whose semantics is the set of the (generalized) proof-trees. We restrict the specification language to stratified programs and ground proof-trees so that its semantics fits with most of the usual known semantics in logic programming. The specification language is fully declarative in the sense that it is written in a pure logical style. It is relatively easy to derive an executable specification from a specification written in such a language. The associated comments are part of the specification, and a methodology has been developed for writing them. Without the comments a formal specification cannot be understood; they are partly formal and serve only to help understand the axioms. They are a natural-language form of formal statements about the correctness and the completeness of the axioms with regard to some intended meaning. We show in this paper how this specification language can be used to specify dialects of PROLOG. The presented example is just a sample of PROLOG but is fully developed here. The specification language has already been used for real dialects such as PARLOG and standard PROLOG. This specification method is also interesting because it illustrates the power of logic programming for writing specifications. It seems to us that logic programming is generally considered as "impure" executable specification. Our purpose is to show that logic programming may also be used as a perhaps low-level but full specification language.

12.
A.  J.  I.  H.  L.J.  O.  A. 《Neurocomputing》2007,70(16-18):2853
Clustering algorithms have been successfully applied in several disciplines. One such application is the initialization of the radial basis function (RBF) centers of a neural network designed to solve functional approximation problems. The Clustering for Function Approximation (CFA) algorithm was presented as a new clustering technique that provides better results than the clustering algorithms traditionally used to initialize RBF centers. Although CFA outperforms those algorithms, it has several shortcomings: the way the input data are partitioned, the complex migration process, the algorithm's speed, the parameters that must be set to obtain good solutions, and the lack of a convergence guarantee. This paper proposes an improved version of the algorithm that uses fuzzy logic to resolve the problems of its predecessor. The experiments show that the new algorithm performs better than its predecessor and how important a correct initialization of the RBF centers is for obtaining small approximation errors.
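A minimal sketch of why centre initialization matters: cluster the inputs, place one Gaussian RBF on each centre, and solve for the output weights by least squares. KMeans is used as a generic stand-in for the (fuzzy) CFA variant proposed here; the width and problem sizes are assumed.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
x = rng.uniform(-3.0, 3.0, (300, 1))
y = np.sin(x).ravel() + 0.05 * rng.standard_normal(300)    # target function to approximate

centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(x).cluster_centers_
width = 0.5                                                 # common RBF width (assumed)
Phi = np.exp(-((x - centers.T) ** 2) / (2.0 * width ** 2))  # Gaussian design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)                 # output-layer weights
print("approximation RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```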

13.
The classical Early Prepare (EP) commit protocol, used in many commercial systems, is not suitable for use in multi-level secure (MLS) distributed database systems that employ a locking protocol for concurrency control. This is because EP requires that read locks are not released by a participant during their window of uncertainty; however, it is not possible for a locking protocol to provide this guarantee in an MLS system (since the read lock of a higher-level transaction on a lower-level data object must be released whenever a lower-level transaction wants to write the same data). The only available work in the literature, namely the Secure Early Prepare (SEP) protocol, overcomes this difficulty by aborting those distributed transactions that release their low-level read locks prematurely. We see this approach as being too restrictive. One of the major benefits of distributed processing is its robustness to failures, and SEP fails to take advantage of this. In this paper, we propose the Advanced Secure Early Prepare (ASEP) commit protocol to solve the above problem, together with a number of language primitives that can be used as system calls in distributed transactions. These primitives permit features like partial rollback and forward recovery to be incorporated within the transaction model, and allow a distributed transaction to proceed even when a participant has released its low-level read locks prematurely. This not only offers flexibility, but can also be used, if desired, by a sophisticated programmer to trade off consistency for atomicity of the distributed transaction.

14.
This paper investigates the benefits that the partial least squares (PLS) modelling approach offers engineers involved in the operation of fed-batch fermentation processes. It is shown that models developed using PLS can be used to provide accurate inference of quality variables that are difficult to measure on-line, such as biomass concentration. It is further shown that this model can be used to provide fault detection and isolation capabilities and that it can be integrated within a standard model predictive control framework to regulate the growth of biomass within the fermenter. This model predictive controller is shown to provide its own monitoring capabilities that can be used to identify faults within the process and also within the controller itself. Finally it is demonstrated that the performance of the controller can be maintained in the presence of fault conditions within the process.
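A minimal sketch of the PLS soft-sensor idea: regress a hard-to-measure quality variable (standing in for biomass concentration) on easily measured process variables and flag large residuals as possible faults. The synthetic data and number of latent variables are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
process_vars = rng.standard_normal((200, 8))                      # on-line measurements
biomass = process_vars @ rng.standard_normal(8) + 0.1 * rng.standard_normal(200)

pls = PLSRegression(n_components=3).fit(process_vars, biomass)
biomass_hat = pls.predict(process_vars).ravel()                   # inferred quality variable

residual = biomass - biomass_hat
fault = np.abs(residual) > 3.0 * residual.std()                   # simple monitoring flag
```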

15.
Continuous change and progress in logistics and production systems make improvements in decision making necessary. Simulation is a good support tool for this type of decision because it allows processes to be reproduced virtually in order to study their behavior, analyze the impact of possible changes, or compare different design alternatives without the high cost of full-scale experiments. Although process simulation is usually focused on industrial processes, over the last two decades new proposals have emerged to bring simulation techniques into software engineering. This paper describes a Systematic Literature Review (SLR) which returned 8070 papers (published from 2013 to 2019) by a systematic search in 4 digital libraries. After conducting this SLR, 36 Software Process Simulation Modeling (SPSM) works were selected as primary studies and were documented following a specific characterization scheme. This scheme characterizes each proposal according to the paradigm used and its technology base, as well as its future line of work. Our purpose is to identify trends and directions for future research on SPSM after identifying and studying which proposals on this topic have been defined, and the relationships and dependencies between these proposals, in the last five years. After finishing this review, it is possible to conclude that SPSM continues to be a topic that is very much addressed by the scientific community, but each contribution has been proposed with particular goals. This review also concludes that, among SPSM proposals of the last five years, the Agent-Based Simulation paradigm is trending upward while System Dynamics is trending downward. The Discrete-Event Simulation paradigm, for its part, appears to have strengthened its position in the research community in recent years as a basis for new approaches.

16.
We consider the problem of a learning mechanism (for example, a robot) locating a point on a line when it is interacting with a random environment which essentially informs it, possibly erroneously, which way it should move. In this paper we present a novel scheme by which the point can be learned using some recently devised learning principles. The heart of the strategy involves discretizing the space and performing a controlled random walk on this space. The scheme is shown to be epsilon-optimal and to converge with probability 1. Although the problem is solved in its generality, its application in nonlinear optimization has also been suggested. Typically, an optimization process involves working one's way toward the maximum (minimum) using the local information that is available. However, the crucial issue in these strategies is that of determining the parameter to be used in the optimization itself. If the parameter is too small the convergence is sluggish. On the other hand, if the parameter is too large, the system could erroneously converge or even oscillate. Our strategy can be used to determine the best parameter to be used in the optimization.

17.
《Information & Management》2004,41(4):399-411
Joint application development (JAD) is a facilitated group technique that can be used in systems requirements determination (SRD); it was designed to encourage team rapport and achieve synergy by leveraging the combined knowledge of participants. JAD has been reported to cure several problems of conventional SRD techniques and to shorten development schedules. However, its freely interacting meeting structure may curtail effectiveness by encouraging adverse group-related actions that challenge even the best facilitators. In this study, we integrated JAD and the nominal group technique (NGT), a popular group structure that has been used to reduce the effects of negative group dynamics on task-oriented objectives. We examined this integrated structure in a laboratory experiment to determine whether it could alleviate the problems that JAD has experienced during SRD. The results suggest that the integrated approach outperformed JAD in our test environment; it was as efficient as JAD alone, and it appeared to reduce the need for strong facilitation skills in group decision making.

18.
A frequently used algorithm for finding the convex hull of a simple polygon in linear running time has been recently shown to fail in some cases. Due to its simplicity the algorithm is, nevertheless, attractive. In this paper it is shown that the algorithm does in fact work for a family of simple polygons known as weakly externally visible polygons. Some application areas where such polygons occur are briefly discussed. In addition, it is shown that with a trivial modification the algorithm can be used to internally and externally triangulate certain classes of polygons in O(n) time.

19.
Latent Semantic Kernels
Kernel methods like support vector machines have successfully been used for text categorization. A standard choice of kernel function has been the inner product between the vector-space representation of two documents, in analogy with classical information retrieval (IR) approaches. Latent semantic indexing (LSI) has been successfully used for IR purposes as a technique for capturing semantic relations between terms and inserting them into the similarity measure between two documents. One of its main drawbacks, in IR, is its computational cost. In this paper we describe how the LSI approach can be implemented in a kernel-defined feature space. We provide experimental results demonstrating that the approach can significantly improve performance, and that it does not impair it.
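A minimal sketch of a latent semantic kernel: project TF-IDF document vectors onto the leading singular directions (LSI) and take inner products in that reduced space; the toy corpus and rank are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["the cat sat on the mat", "a dog chased the cat",
        "stock markets fell sharply", "investors sold their shares"]
tfidf = TfidfVectorizer().fit_transform(docs)

lsi = TruncatedSVD(n_components=2, random_state=0)
Z = lsi.fit_transform(tfidf)          # documents in the latent semantic space
K = Z @ Z.T                           # kernel matrix that an SVM could consume
print(np.round(K, 2))
```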

20.
When a neuronal spike train is observed, what can we deduce from it about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that, at least in principle, its unique global minimum can thus be found by gradient descent techniques. Many biological neurons are, however, known to generate a richer repertoire of spiking behaviors than can be explained in a simple integrate-and-fire model. For instance, such a model retains only an implicit (through spike-induced currents), not an explicit, memory of its input; an example of a physiological situation that cannot be explained is the absence of firing if the input current is increased very slowly. Therefore, we use an expanded model (Mihalas & Niebur, 2009), which is capable of generating a large number of complex firing patterns while still being linear. Linearity is important because it maintains the distribution of the random variables and still allows maximum likelihood methods to be used. In this study, we show that although convexity of the negative log-likelihood function is not guaranteed for this model, the minimum of this function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) usually reaches the global minimum.
