Similar Documents (20 results found)
1.
State space explosion is a key problem in the analysis of finite state systems. The sweep-line method is a state exploration method which uses a notion of progress to allow states to be deleted from memory when they are no longer required. This reduces the peak number of states that need to be stored, while still exploring the full state space. The technique shows promise but has never achieved reductions greater than about a factor of 10 in the number of states stored in memory for industrially relevant examples. This paper discusses sweep-line analysis of the connection management procedures of a new Internet standard, the Datagram Congestion Control Protocol (DCCP). As the intuitive approaches to sweep-line analysis are not effective, we introduce new variables to track progress. This creates further state explosion. However, when used with the sweep-line, the peak number of states is reduced by over two orders of magnitude compared with the original. Importantly, this allows DCCP to be analysed for larger parameter values. Somsak Vanit-Anunchai was partially supported by an Australian Research Council Discovery Grant (DP0559927) and Suranaree University of Technology. Guy Edward Gallasch was supported by an Australian Research Council Discovery Grant (DP0559927).  相似文献   
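To make the idea concrete, below is a minimal Python sketch of sweep-line exploration under a user-supplied monotonic progress mapping. The function names and the toy transition system are illustrative assumptions, not taken from the paper or from any CPN tool.

```python
import heapq
import itertools

def sweep_line_explore(initial, successors, progress):
    """Explore a state space in nondecreasing order of a progress measure,
    deleting stored states that fall behind the sweep-line.  Assumes a
    monotonic progress mapping (successors never have smaller progress)."""
    tie = itertools.count()                       # tie-breaker for the heap
    frontier = [(progress(initial), next(tie), initial)]
    stored = {initial}                            # states currently held in memory
    explored, peak = 0, 1
    while frontier:
        p, _, state = heapq.heappop(frontier)
        explored += 1
        # The sweep-line sits at the smallest progress still unexplored;
        # states strictly behind it can never be reached again, so free them.
        sweep = frontier[0][0] if frontier else p
        stored = {s for s in stored if progress(s) >= sweep}
        for succ in successors(state):
            if succ not in stored:
                stored.add(succ)
                heapq.heappush(frontier, (progress(succ), next(tie), succ))
        peak = max(peak, len(stored))
    return explored, peak

# Toy example: states are (phase, value) pairs; `phase` acts as the progress measure.
succ = lambda s: [(s[0] + 1, (s[1] * 3 + 1) % 5)] if s[0] < 10 else []
print(sweep_line_explore((0, 0), succ, progress=lambda s: s[0]))  # (states explored, peak stored)
```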

2.
Traditional methods for predicting gas concentration at the mining working face exploit only the temporal characteristics of the gas data and lack spatial prior information. Exploiting the spatio-temporal characteristics of the gas data instead, a deep-learning approach that combines Long Short-Term Memory with a fully connected neural network is used to build an LSTM-FC (Long Short-Term Memory-Fully Connected) prediction model for spatio-temporal gas concentration series. The LSTM handles the long-range temporal dependence of the gas series, while the fully connected network captures the spatial correlation between locations, mining the spatio-temporal structure of the gas data in depth; by predicting the gas values at different locations, a gas distribution map of the working face is constructed. Experimental results show that the LSTM-FC model markedly reduces the prediction error and improves prediction accuracy compared with other neural network prediction models.
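A minimal sketch of an LSTM-plus-fully-connected model of this general shape, written with the Keras API; the window length, sensor count, layer sizes and synthetic data are assumptions for illustration, not the configuration used in the paper.

```python
import numpy as np
import tensorflow as tf

# Illustrative dimensions (assumptions, not the paper's settings):
# 30 past time steps of readings from 8 spatially distributed sensors,
# predicting the next reading at each of the 8 locations.
WINDOW, N_SENSORS = 30, 8

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, N_SENSORS)),
    tf.keras.layers.LSTM(64),                      # temporal dependence
    tf.keras.layers.Dense(32, activation="relu"),  # spatial mixing (FC part)
    tf.keras.layers.Dense(N_SENSORS),              # next value per location
])
model.compile(optimizer="adam", loss="mse")

# Toy training data: random walks standing in for gas concentration series.
series = np.cumsum(np.random.randn(1000, N_SENSORS) * 0.01, axis=0)
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]))   # predicted concentrations, one per sensor location
```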

3.
The Hopfield model effectively stores a comparatively small number of initial patterns, about 15% of the size of the neural network. A greater value can be attained only in the Potts-glass associative memory model, in which neurons may exist in more than two states. Still greater memory capacity is exhibited by a parametric neural network based on the nonlinear optical signal transfer and processing principles. A formalism describing both the Potts-glass associative memory and the parametric neural network within a unified framework is developed. The memory capacity is evaluated by the Chebyshev–Chernov statistical method.  相似文献   
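For orientation, here is a small NumPy sketch of the classical binary Hopfield memory that serves as the baseline in this comparison (the Potts-glass and parametric networks are not reproduced); network size and pattern count are arbitrary and kept below the roughly 0.14N capacity limit.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 20                      # neurons, stored patterns (P/N = 0.10)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: W = (1/N) * sum of outer products, zero diagonal
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(probe, steps=20):
    """Synchronous updates until a fixed point (or the step limit) is reached."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Flip 10% of the bits of a stored pattern and check that it is restored.
noisy = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
noisy[flip] *= -1
print(np.mean(recall(noisy) == patterns[0]))   # fraction of bits recovered
```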

4.
In this paper, the concept of a long memory system for forecasting is developed. Pattern modelling and recognition systems are introduced as local approximation tools for forecasting. Such systems are used for matching the current state of the time-series with past states to make a forecast. In the past, this system has been successfully used for forecasting the Santa Fe competition data. In this paper, we forecast the financial indices of six different countries, and compare the results with neural networks on five different error measures. The results show that pattern recognition-based approaches in time-series forecasting are highly accurate, and that these are able to match the performance of advanced methods such as neural networks. Received: 2 April 1998 / Received in revised form: 1 February 1999 / Accepted: 16 February 1999
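A minimal sketch of the local-approximation idea: match the most recent window of the series against past windows and forecast from the values that followed the nearest matches. The window length, neighbour count and toy series are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def pattern_forecast(series, window=5, k=3):
    """Forecast the next value by nearest-neighbour matching of the
    current pattern (last `window` values) against all past patterns."""
    series = np.asarray(series, dtype=float)
    current = series[-window:]
    # every fully observed past window and the value that followed it
    past = np.stack([series[i:i + window] for i in range(len(series) - window)])
    followers = series[window:]
    nearest = np.argsort(np.linalg.norm(past - current, axis=1))[:k]
    return followers[nearest].mean()

# Toy example: a noisy sine wave standing in for a financial index.
t = np.arange(300)
x = np.sin(0.2 * t) + 0.05 * np.random.randn(300)
print(pattern_forecast(x))   # forecast of the next value
```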

5.
The optimized distance-based access methods currently available for multidimensional indexing in multimedia databases have been developed based on two major assumptions: a suitable distance function is known a priori and the dimensionality of the image features is low. It is not trivial to define a distance function that best mimics human visual perception regarding image similarity measurements. Reducing high-dimensional features in images using the popular principal component analysis (PCA) might not always be possible due to the non-linear correlations that may be present in the feature vectors. We propose in this paper a fast and robust hybrid method for non-linear dimension reduction of composite image features for indexing in large image databases. This method incorporates both PCA and non-linear neural network techniques to reduce the dimensions of feature vectors so that an optimized access method can be applied. To incorporate human visual perception into our system, we also conducted experiments that involved a number of subjects classifying images into different classes for neural network training. We demonstrate that not only can our neural network system reduce the dimensions of the feature vectors, but that the reduced dimensional feature vectors can also be mapped to an optimized access method for fast and accurate indexing. Received: 11 June 1998 / Accepted: 25 July 2000 / Published online: 13 February 2001
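A rough sketch of a two-stage reduction in this spirit: a linear PCA step followed by a nonlinear neural stage. Note that the paper trains its network with human classification judgements, whereas the unsupervised autoencoder below merely stands in for the nonlinear stage; all dimensions and data are invented.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.random((500, 128)).astype("float32")     # toy 128-d composite image features

# Stage 1: linear reduction with PCA (via SVD on centred data)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt[:40].T                           # keep 40 principal components

# Stage 2: nonlinear reduction with a small autoencoder
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(40,)),
    tf.keras.layers.Dense(16, activation="tanh"),
    tf.keras.layers.Dense(8, activation="tanh"),   # low-dimensional code for indexing
])
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="tanh"),
    tf.keras.layers.Dense(40),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_pca, X_pca, epochs=5, batch_size=32, verbose=0)

codes = encoder.predict(X_pca)                   # 8-d vectors fed to the access method
print(codes.shape)
```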

6.
In this paper, we sketch out a computational theory of spatial cognition motivated by navigational behaviours, ecological requirements, and neural mechanisms as identified in animals and man. Spatial cognition is considered in the context of a cognitive agent built around the action–perception cycle. Besides sensors and effectors, the agent comprises multiple memory structures, including a working memory and a long-term memory stage. Spatial long-term memory is modelled following the graph approach, treating recognizable places or poses as nodes and navigational actions as links. Models of working memory and its interaction with reference memory are discussed. The model provides an overall framework of spatial cognition which can be adapted to model different levels of behavioural complexity as well as interactions between working and long-term memory. A number of design questions for building cognitive robots are derived from comparison with biological systems and discussed in the paper.
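A minimal sketch of the graph-based spatial long-term memory described above: recognizable places as nodes, navigational actions as labelled edges, and route planning by breadth-first search. The places and actions are invented for illustration.

```python
from collections import deque

# Long-term spatial memory as a graph: place -> {action: next place}
place_graph = {
    "nest":   {"go_east": "meadow"},
    "meadow": {"go_east": "tree", "go_north": "pond"},
    "pond":   {"go_south": "meadow"},
    "tree":   {"go_west": "meadow", "go_north": "food"},
    "food":   {"go_south": "tree"},
}

def plan_route(graph, start, goal):
    """Breadth-first search over the place graph; returns a list of actions."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        place, actions = queue.popleft()
        if place == goal:
            return actions
        for action, nxt in graph[place].items():
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, actions + [action]))
    return None

print(plan_route(place_graph, "nest", "food"))   # ['go_east', 'go_east', 'go_north']
```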

7.
We propose a method that allows for a rigorous statistical analysis of neural responses to natural stimuli that are nongaussian and exhibit strong correlations. We have in mind a model in which neurons are selective for a small number of stimulus dimensions out of a high-dimensional stimulus space, but within this subspace the responses can be arbitrarily nonlinear. Existing analysis methods are based on correlation functions between stimuli and responses, but these methods are guaranteed to work only in the case of gaussian stimulus ensembles. As an alternative to correlation functions, we maximize the mutual information between the neural responses and projections of the stimulus onto low-dimensional subspaces. The procedure can be done iteratively by increasing the dimensionality of this subspace. Those dimensions that allow the recovery of all of the information between spikes and the full unprojected stimuli describe the relevant subspace. If the dimensionality of the relevant subspace indeed is small, it becomes feasible to map the neuron's input-output function even under fully natural stimulus conditions. These ideas are illustrated in simulations on model visual and auditory neurons responding to natural scenes and sounds, respectively.  相似文献   

8.
Model-checking enables the automated formal verification of software systems through the explicit enumeration of all the reachable states. While this technique has been successfully applied to industrial systems, it suffers from the state-space explosion problem because of the exponential growth in the number of states with respect to the number of interacting components. In this paper, we present a new reachability analysis algorithm, named Past-Free[ze], that reduces the state-space explosion problem by freeing parts of the state-space from memory. This algorithm relies on the explicit isolation of the acyclic parts of the system before analysis. The parallel composition of these parts drives the reachability analysis, the core of all model-checkers. During the execution, the past states of the system are freed from memory, making room for more future states. To enable counter-example construction, the past states can be stored on external storage. To show the effectiveness of the approach, the algorithm was implemented in the OBP Observation Engine and was evaluated both on a synthetic benchmark and on realistic case studies from the automotive and aerospace domains. The benchmark, composed of 50 test cases, shows that on average 75% of the state-space can be dropped from memory, thus enabling the exploration of up to 14 times more states than traditional approaches. Moreover, in some cases, the reachability analysis time can be reduced by up to 25%. In realistic settings, the use of Past-Free[ze] enabled the exploration of a state-space 4.5 times larger on the automotive case study, where almost 50% of the states are freed from memory. Moreover, this approach offers the possibility of analyzing an arbitrary number of interactions between the environment and the system-under-verification; for instance, in the case of the aerospace example, 1000 pilot/system interactions could be analyzed, unraveling an 80 GB state-space using only 10 GB of memory.
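A much-simplified sketch of the freeing idea, assuming each state exposes a "layer" index that never decreases along transitions (the role played in the paper by the isolated acyclic parts); the real Past-Free[ze] algorithm in the OBP Observation Engine is considerably more involved, and the toy system below is invented.

```python
from collections import defaultdict, deque

def past_free_explore(initial_states, successors, layer):
    """Layer-by-layer reachability, assuming `layer(s)` never decreases along
    any transition.  Only the states of the layer being explored are kept in
    memory; finished ("past") layers are freed (they could instead be spilled
    to external storage to allow counter-example reconstruction)."""
    pending = defaultdict(set)                  # layer index -> seed states
    for s in initial_states:
        pending[layer(s)].add(s)
    total, peak = 0, 0
    while pending:
        current = min(pending)
        visited = set()                         # duplicate detection for this layer only
        queue = deque(pending.pop(current))
        while queue:
            s = queue.popleft()
            if s in visited:
                continue
            visited.add(s)
            for t in successors(s):
                if layer(t) == current:
                    queue.append(t)
                else:                           # a "future" state: buffer it for later
                    pending[layer(t)].add(t)
            peak = max(peak, len(visited) + sum(len(v) for v in pending.values()))
        total += len(visited)
        # states of `current` are now in the past and are freed from memory here
    return total, peak

# Toy example: the first component is the layer; the second cycles within a layer.
def succ(s):
    layer_i, v = s
    nxt = [(layer_i, (v + 1) % 4)]              # cycle inside the layer
    if layer_i < 3:
        nxt.append((layer_i + 1, v))            # move to the next layer
    return nxt

print(past_free_explore([(0, 0)], succ, layer=lambda s: s[0]))  # (total states, peak in memory)
```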

9.
10.
The Levenberg-Marquardt (LM) learning algorithm is a popular algorithm for training neural networks; however, for large neural networks, it becomes prohibitively expensive in terms of running time and memory requirements. The most time-critical step of the algorithm is the calculation of the Gauss-Newton matrix, which is formed by multiplying two large Jacobian matrices together. We propose a method that uses backpropagation to reduce the time of this matrix-matrix multiplication. This reduces the overall asymptotic running time of the LM algorithm by a factor of the order of the number of output nodes in the neural network.  相似文献   
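A compact sketch of a plain Levenberg-Marquardt step on a toy curve-fitting problem. Forming the Gauss-Newton matrix J^T J explicitly, as done here, is exactly the step the paper accelerates with backpropagation, which is not reproduced; the damping factor is kept fixed for brevity, whereas a full implementation would adapt it.

```python
import numpy as np

def lm_step(params, residual_fn, jacobian_fn, damping):
    """One Levenberg-Marquardt update: solve (J^T J + damping*I) d = -J^T r."""
    r = residual_fn(params)
    J = jacobian_fn(params)                      # shape (n_residuals, n_params)
    A = J.T @ J + damping * np.eye(len(params))  # regularised Gauss-Newton matrix
    d = np.linalg.solve(A, -J.T @ r)
    return params + d

# Toy example: fit y = a*exp(b*x) to noisy data.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * x) + 0.01 * rng.standard_normal(50)

def residuals(p):
    a, b = p
    return a * np.exp(b * x) - y

def jacobian(p):
    a, b = p
    return np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])

p = np.array([1.0, 1.0])
for _ in range(20):
    p = lm_step(p, residuals, jacobian, damping=1e-2)
print(p)    # close to [2.0, 1.5]
```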

11.
There are many examples in science and engineering which are reduced to a set of partial differential equations (PDEs) through a process of mathematical modelling. Nevertheless, there exist many sources of uncertainty around the aforementioned mathematical representation. Moreover, finding exact solutions of those PDEs is not a trivial task, especially if the PDE is described in two or more dimensions. It is well known that neural networks can approximate a large set of continuous functions defined on a compact set to arbitrary accuracy. In this article, a strategy based on the differential neural network (DNN) for the non-parametric identification of a mathematical model described by a class of two-dimensional (2D) PDEs is proposed. The adaptive laws for the weights ensure the ‘practical stability’ of the DNN trajectories to the parabolic 2D-PDE states. To verify the qualitative behaviour of the suggested methodology, a non-parametric modelling problem for a distributed parameter plant is analysed here.

12.
Taking the Bayesian approach in solving the discrete-time parameter estimation problem has two major results: the unknown parameters are legitimately included as additional system states, and the computational objective becomes calculation of the entire posterior density instead of just its first few moments. This viewpoint facilitates intuitive analysis, allowing increased qualitative understanding of the system behavior. With the actual posterior density in hand, the true optimal estimate for any given loss function can be calculated. Although the computational burden of doing so might preclude online use, it does provide a clearly justified baseline for comparative studies. These points are demonstrated by analyzing a scalar problem with a single unknown, and by comparing an established point estimator's performance to the true optimal estimate.
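A minimal grid-based sketch in the spirit of the scalar example: one unknown parameter, a flat prior, and the full posterior carried through the recursion so that the optimal estimate under any loss can be read off. The system, noise level and grid below are assumptions, not the paper's problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar system with one unknown parameter a:  y[k] = a*y[k-1] + w[k],  w ~ N(0, q)
a_true, q = 0.7, 0.05
y = [1.0]
for _ in range(100):
    y.append(a_true * y[-1] + rng.normal(0.0, np.sqrt(q)))
y = np.array(y)

# Grid over the unknown parameter and a flat prior; the Bayesian recursion
# keeps the entire posterior density rather than a point estimate.
grid = np.linspace(-1.0, 1.0, 401)
posterior = np.full_like(grid, 1.0 / len(grid))
for k in range(1, len(y)):
    likelihood = np.exp(-(y[k] - grid * y[k - 1]) ** 2 / (2 * q))
    posterior *= likelihood
    posterior /= posterior.sum()            # renormalise after each observation

mmse = np.sum(grid * posterior)             # optimal estimate under squared-error loss
map_est = grid[np.argmax(posterior)]        # optimal estimate under 0-1 loss
print(mmse, map_est)                        # both close to 0.7
```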

13.
The exact dynamics of shallow loaded associative neural memories are generated and characterized. The Boolean matrix analysis approach is employed for the efficient generation of all possible state transition trajectories for parallel updated binary-state dynamic associative memories (DAMs). General expressions for the size of the basin of attraction of fundamental and oscillatory memories and the number of oscillatory and stable states are derived for discrete synchronous Hopfield DAMs loaded with one, two, or three even-dimensionality bipolar memory vectors having the same mutual Hamming distances between them. Spurious memories are shown to occur only if the number of stored patterns exceeds two in an even-dimensionality Hopfield memory. The effects of odd- versus even-dimensionality memory vectors on DAM dynamics and the effects of memory pattern encoding on DAM performance are tested. An extension of the Boolean matrix dynamics characterization technique to other, more complex DAMs is presented.  相似文献   
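Instead of the paper's Boolean matrix formalism, the following sketch simply enumerates every parallel-update transition of a small synchronous Hopfield DAM to locate fixed points, 2-cycles and basin sizes; the stored patterns and dimensionality are arbitrary, and basins are keyed by the attractor state first reached.

```python
import numpy as np
from itertools import product
from collections import Counter

N = 8
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def step(s):
    """One synchronous (parallel) update of all neurons."""
    out = np.sign(W @ np.array(s))
    out[out == 0] = 1
    return tuple(int(v) for v in out)

# Exhaustively generate the transition of every one of the 2^N states.
states = list(product([-1, 1], repeat=N))
nxt = {s: step(s) for s in states}

fixed = [s for s, t in nxt.items() if s == t]
on_two_cycles = [s for s, t in nxt.items() if s != t and nxt[t] == s]
print("fixed points:", len(fixed), " states on 2-cycles:", len(on_two_cycles))

# Basin sizes: iterate each state until it revisits a state (attractor reached).
basins = Counter()
for s in nxt:
    seen = []
    while s not in seen:
        seen.append(s)
        s = nxt[s]
    basins[s] += 1
print(basins.most_common(4))
```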

14.
The Hybrid neural Fuzzy Inference System (HyFIS) is a multilayer adaptive neural fuzzy system for building and optimizing fuzzy models using neural networks. In this paper, the fuzzy Yager inference scheme, which is able to emulate the human deductive reasoning logic, is integrated into the HyFIS model to provide it with a firm and intuitive logical reasoning and decision-making framework. In addition, a self-organizing gaussian Discrete Incremental Clustering (gDIC) technique is implemented in the network to automatically form fuzzy sets in the fuzzification phase. This clustering technique is no longer limited by the need to have prior knowledge about the number of clusters present in each input and output dimensions. The proposed self-organizing Yager based Hybrid neural Fuzzy Inference System (SoHyFIS-Yager) introduces the learning power of neural networks to fuzzy logic systems, while providing linguistic explanations of the fuzzy logic systems to the connectionist networks. Extensive simulations were conducted using the proposed model and its performance demonstrates its superiority as an effective neuro-fuzzy modeling technique.  相似文献   
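A minimal sketch of the incremental clustering idea used in the fuzzification phase: a new Gaussian fuzzy set is opened whenever a sample falls too far from every existing centre. The width, threshold and update rule below are illustrative assumptions, not the paper's gDIC specification.

```python
import numpy as np

class IncrementalGaussianClusters:
    """Form Gaussian fuzzy sets on one input dimension without fixing the
    number of clusters in advance (illustrative, not the paper's exact gDIC)."""
    def __init__(self, width=0.3, threshold=0.5):
        self.centres, self.counts = [], []
        self.width, self.threshold = width, threshold

    def membership(self, x, c):
        return np.exp(-((x - c) ** 2) / (2 * self.width ** 2))

    def update(self, x):
        if self.centres:
            best = max(range(len(self.centres)),
                       key=lambda i: self.membership(x, self.centres[i]))
            if self.membership(x, self.centres[best]) >= self.threshold:
                # absorb the sample: move the winning centre toward x
                self.counts[best] += 1
                self.centres[best] += (x - self.centres[best]) / self.counts[best]
                return best
        self.centres.append(float(x))           # too far from every cluster: new fuzzy set
        self.counts.append(1)
        return len(self.centres) - 1

clusterer = IncrementalGaussianClusters()
for x in np.concatenate([np.random.normal(0, 0.1, 50), np.random.normal(2, 0.1, 50)]):
    clusterer.update(x)
print(len(clusterer.centres), clusterer.centres[:2])   # roughly two fuzzy sets, near 0 and 2
```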

15.
The problem of multi-cell tracking plays an important role in studying dynamic cell cycle behaviors. In this paper, a novel ant system with multiple tasks is modeled for jointly estimating the number of cells and individual states in cell image sequences. In our ant system, in addition to pure cooperative mechanism used in traditional ant colony optimization algorithm, we model and investigate another two types of ant working modes, namely, dual competitive mode and interactive mode with cooperation and competition to evaluate the tracking performance on spatially adjacent cells. For adjacent ant colonies, dual competitive mode encourages ant colonies with different tasks to work independently, whereas the interactive mode introduces a trade-off between cooperation and competition. In simulations of real cell image sequences, the multi-tasking ant system integrated with interactive mode yielded better tracking results than systems adopting pure cooperation or dual competition alone, both of which cause tracking failures by under-estimating and over-estimating the number of cells, respectively. Furthermore, the results suggest that our algorithm can automatically and accurately track numerous cells in various scenarios, and is competitive with state-of-the-art multi-cell tracking methods.  相似文献   

16.
Multithreading is used for hiding long memory latency in uniprocessor and multiprocessor computer systems and aims at increasing system efficiency. In such an architecture, a number of threads are allocated to each processing element (PE), and whenever a running thread becomes suspended, the PE switches to another ready thread. In this paper, we discuss analytical modeling of coarsely multithreaded architectures and present two analytical models: (i) a deterministic model, where the timing parameters (e.g., context switching time, thread run length, and memory latency) are assumed to be constant, and (ii) a stochastic model, where the timing parameters are random variables. Both models provide a framework to study the dependence of the MTA efficiency on design parameters of the target architecture and its workload. The deterministic model, as well as an asymptotic bounding analysis of the stochastic model, makes it possible to determine upper bounds and some break points of the MTA efficiency, such as stability (saturation) points, whereas the stochastic model provides a more accurate prediction of the efficiency.
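A sketch of one common deterministic formulation of this kind of model (the paper's exact equations may differ in details such as where the switch cost is charged): each thread runs for R cycles, stalls for L cycles, and a context switch costs C cycles.

```python
def mta_efficiency(n_threads: int, run_length: float, switch_cost: float, latency: float) -> float:
    """Deterministic coarse-grained multithreading model: useful work per
    rotation divided by the rotation length, which is limited either by the
    threads themselves or by the memory latency."""
    period = max(n_threads * (run_length + switch_cost),
                 run_length + switch_cost + latency)
    return n_threads * run_length / period

def saturation_threads(run_length: float, switch_cost: float, latency: float) -> float:
    """Number of threads at which the memory latency is fully hidden."""
    return (run_length + switch_cost + latency) / (run_length + switch_cost)

# Example: R = 40 cycles between misses, C = 4 cycles per switch, L = 200 cycles latency.
for n in (1, 2, 4, 6, 8):
    print(n, round(mta_efficiency(n, 40, 4, 200), 3))
print("saturation at ~", round(saturation_threads(40, 4, 200), 2), "threads")
```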

17.
With the rapid development of society, people rely increasingly on embedded smart devices. Embedded systems operate in closed, volume-constrained environments; their processing units, memory units and other components are small and highly integrated, and different operating environments and usage frequencies have a significant impact on the reliability of these electronic components. Targeting the dynamic reliability of embedded systems during operation, an adaptive object-code generation method oriented towards system dynamic reliability is proposed. The method builds a system reliability evaluation model using a decision-tree learning algorithm and, guided by this model, designs a multi-path object-code generation scheme, so that the system can adaptively select the best execution path according to its actual operating state, avoiding unbalanced use of system resources and improving the reliability of each processing unit. Experiments show that the method reduces a program's peak utilization of a single processor from over 80% to under 30% and reduces the maximum-to-minimum memory-unit access ratio from 157.3 to 15.4, effectively balancing the use of processor cores and memory units.

18.
The use of dynamic dependence analysis spans several areas of software research including software testing, debugging, fault localization, and security. Many of the techniques devised in these areas require the execution of large test suites in order to generate profiles that capture the dependences that occurred between given types of program elements. When the aim is to capture direct and indirect dependences between finely granular elements, such as statements and variables, this process becomes highly costly due to (1) the large number of elements and (2) the transitive nature of the indirect dependence relationship. The focus of this paper is on computing dynamic dependences between variables, i.e., dynamic information flow analysis (DIFA), for two reasons: first, because the problem of tracking dependences between statements, i.e., dynamic slicing, has already been addressed by numerous researchers; and second, because DIFA is a more difficult problem given that the number of variables in a program is unbounded. We present an algorithm that, in the context of test suite execution, leverages the already computed dependences to efficiently compute subsequent dependences within the same or later test runs. To evaluate our proposed algorithm, we conducted an empirical comparative study that contrasted it, with respect to efficiency, to three other algorithms: (1) a naïve basic algorithm, (2) a memoization-based algorithm that does not leverage computed dependences from previous test runs, and (3) an algorithm that uses reduced ordered binary decision diagrams (roBDDs) to maintain and manage dependences. The results indicated that our new DIFA algorithm performed considerably better in terms of both runtime and memory consumption.
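A toy sketch of variable-level dependence tracking with memoised set unions shared across runs; it is a loose stand-in for the reuse idea described above, not the paper's algorithm, and the variable names and assignments are invented.

```python
class DIFATracker:
    """Toy dynamic information flow analysis (DIFA): record, for every variable,
    the set of variables its current value transitively depends on.  The shared
    `reuse` table memoises dependence-set unions so that identical combinations,
    in this run or in later test runs, are not recomputed."""
    def __init__(self, reuse=None):
        self.deps = {}                      # variable -> frozenset of source variables
        self.reuse = reuse if reuse is not None else {}

    def assign(self, target, *sources):
        """Record the effect of `target := f(*sources)`."""
        key = frozenset((s, self.deps.get(s, frozenset())) for s in sources)
        cached = self.reuse.get(key)
        if cached is None:
            cached = frozenset(sources).union(
                *(self.deps.get(s, frozenset()) for s in sources))
            self.reuse[key] = cached
        self.deps[target] = cached

shared = {}                                 # persists across test runs
run1 = DIFATracker(shared)
run1.assign("a")                            # a = input()
run1.assign("b", "a")                       # b = a + 1
run1.assign("c", "b", "a")                  # c = b * a
print(sorted(run1.deps["c"]))               # ['a', 'b']
```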

19.
A multi-resolution topological representation for non-manifold meshes
We address the problem of representing and processing 3D objects, described through simplicial meshes, which consist of parts of mixed dimensions, and with a non-manifold topology, at different levels of detail. First, we describe a multi-resolution model, that we call a non-manifold multi-tessellation (NMT), and we consider the selective refinement query, which is at the heart of several analysis operations on multi-resolution meshes. Next, we focus on a specific instance of a NMT, generated by simplifying simplicial meshes based on vertex-pair contraction, and we describe a compact data structure for encoding such a model. We also propose a new data structure for two-dimensional simplicial meshes, capable of representing both connectivity and adjacency information with a small memory overhead, which is used to describe the mesh extracted from an NMT through selective refinement. Finally, we present algorithms to efficiently perform updates on such a data structure.  相似文献   

20.
Lane, Terran; Brodley, Carla E. Machine Learning, 2003, 51(1): 73-107
This paper introduces the computer security domain of anomaly detection and formulates it as a machine learning task on temporal sequence data. In this domain, the goal is to develop a model or profile of the normal working state of a system user and to detect anomalous conditions as long-term deviations from the expected behavior patterns. We introduce two approaches to this problem: one employing instance-based learning (IBL) and the other using hidden Markov models (HMMs). Though not suitable for a comprehensive security solution, both approaches achieve anomaly identification performance sufficient for a low-level focus of attention detector in a multitier security system. Further, we evaluate model scaling techniques for the two approaches: two clustering techniques for the IBL approach and variation of the number of hidden states for the HMM approach. We find that over both model classes and a wide range of model scales, there is no significant difference in performance at recognizing the profiled user. We take this invariance as evidence that, in this security domain, limited memory models (e.g., fixed-length instances or low-order Markov models) can learn only part of the user identity information in which we're interested and that substantially different models will be necessary if dramatic improvements in user-based anomaly detection are to be achieved.  相似文献   
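A toy instance-based sketch in this spirit: profile a user with fixed-length command windows and score new sessions by their dissimilarity to the stored instances. The commands, window length and similarity measure are invented; the paper's actual similarity function, clustering-based scaling and HMM variant are not reproduced.

```python
def windows(seq, length=4):
    """Fixed-length sliding windows over a command sequence."""
    return [tuple(seq[i:i + length]) for i in range(len(seq) - length + 1)]

def similarity(w, profile):
    """Best positional match of window w against any stored instance."""
    return max(sum(a == b for a, b in zip(w, p)) / len(w) for p in profile)

def anomaly_score(session, profile, length=4):
    """Mean dissimilarity of a session's windows to the stored profile."""
    scores = [similarity(w, profile) for w in windows(session, length)]
    return 1.0 - sum(scores) / len(scores)

# "Normal" training sessions for one user (command names are invented).
normal = ["ls", "cd", "vim", "make", "ls", "cd", "vim", "make", "ls", "grep", "vim", "make"]
profile = set(windows(normal))

print(anomaly_score(["cd", "vim", "make", "ls", "cd", "vim"], profile))      # low: familiar behaviour
print(anomaly_score(["nmap", "ssh", "wget", "chmod", "nc", "rm"], profile))  # high: unfamiliar behaviour
```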
