Similar Documents
1.
Automata-theoretic representations have proven useful in the automatic and exact analysis of computing systems. We propose a new semantical mapping of π-Calculus processes into place/transition Petri nets. Our translation exploits the connections created by restricted names and can yield finite nets even for processes with unbounded name and process creation. The property of structural stationarity characterises the processes mapped to finite nets. We provide exact conditions for structural stationarity using novel characteristic functions. As an application of the theory, we identify a rich syntactic class of structurally stationary processes, called finite handler processes. Our Petri net translation facilitates the automatic verification of a case study modelled in this class.

2.
Hedges, Robert A., and Suter, Bruce W., Numerical Spread: Quantifying Local Stationarity, Digital Signal Processing 12 (2002) 628–643. One of the fundamental assumptions in signal processing is that of signal stationarity, i.e., that the statistics of all orders are not time dependent. Many real data sets are not stationary but can, however, be described as locally stationary; that is, they appear stationary over finite time intervals. We develop numerical spread as a means of quantifying local stationarity. Based on the theoretical spread introduced by W. Kozek and colleagues, the numerical spread provides a means for quantifying potential correlation between signal elements. Implementation of such a scheme on finite, discrete data requires the augmentation of the associated covariance matrix. Three augmentation methods were investigated: zero padding, circular extension, and edge replication. It was determined that the method of edge replication is most desirable. The theoretical techniques estimate the spread as the rectangular region of support of the associated expected ambiguity function oriented parallel to the axes. By applying Radon transform techniques we can produce a parameterized model which describes the orientation of the region of support, providing tighter estimates of the signal spread. Examples are provided that illustrate the utility of numerical spread and the enhancement resulting from the new methods.
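The three covariance-augmentation schemes compared above map directly onto standard array-padding modes. Below is a minimal NumPy sketch of that idea; the function name, padding width, and toy matrix are illustrative assumptions, not the authors' implementation.

```python
# Hedged illustration of the three covariance-matrix augmentation
# schemes compared above; the function name, padding width and toy
# matrix are assumptions, not the authors' implementation.
import numpy as np

def augment_covariance(C, pad, method="edge"):
    """Pad covariance matrix C on all sides by `pad` entries.

    method: 'zero'     -> zero padding
            'circular' -> circular (periodic) extension
            'edge'     -> edge replication (found most desirable above)
    """
    modes = {"zero": "constant", "circular": "wrap", "edge": "edge"}
    return np.pad(C, pad_width=pad, mode=modes[method])

# Toy 3x3 covariance matrix, padded by one entry per side.
C = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])
print(augment_covariance(C, 1, "edge"))
```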

3.
Vector autoregressive (VAR) modelling is one of the most popular approaches in multivariate time series analysis. Its parameters have a simple interpretation and provide an intuitive identification of relationships and Granger causality among time series. However, VAR modelling requires stationarity conditions which may not hold in many practical applications. Locally stationary, or time-dependent, models are attractive generalizations, and several univariate approaches have already been proposed. In this paper we propose an estimation procedure for time-varying vector autoregressive processes, based on wavelet expansions of the autoregressive coefficients. The asymptotic properties of the estimator are derived and illustrated by computer-intensive simulations. We also present an application to brain connectivity identification using functional magnetic resonance imaging (fMRI) data sets.
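To make the idea of wavelet-expanded autoregressive coefficients concrete, here is a hedged univariate sketch: an AR(1) rather than a full VAR, with a plain Haar basis and a least-squares fit standing in for the paper's estimator. The resolution level and the simulated signal are assumptions.

```python
# Hedged sketch (not the authors' estimator): fit a time-varying AR(1)
# by expanding the coefficient a(u), u = t/T, in a Haar basis and
# solving least squares over the basis coefficients.
import numpy as np

def haar_basis(u, J=3):
    """Haar scaling function plus wavelets up to level J, evaluated at u in [0,1)."""
    cols = [np.ones_like(u)]
    for j in range(J):
        for k in range(2 ** j):
            psi = np.where((u >= k / 2**j) & (u < (k + 0.5) / 2**j), 1.0, 0.0) \
                - np.where((u >= (k + 0.5) / 2**j) & (u < (k + 1) / 2**j), 1.0, 0.0)
            cols.append(2 ** (j / 2) * psi)
    return np.column_stack(cols)

rng = np.random.default_rng(0)
T = 2048
u = np.arange(T) / T
a_true = 0.8 * np.cos(np.pi * u)          # slowly varying AR coefficient
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true[t] * x[t - 1] + rng.standard_normal()

# Regress x_t on (basis at t) * x_{t-1}  ->  wavelet coefficients of a(.)
B = haar_basis(u[1:])
X = B * x[:-1, None]
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
a_hat = B @ coef                           # estimated a(t/T) on the grid
print(round(float(np.abs(a_hat - a_true[1:]).mean()), 3))  # small mean error
```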

4.
We consider the problem of polling web pages as a strategy for monitoring the world wide web. The problem consists of repeatedly polling a selection of web pages so that changes that occur over time are detected. In particular, we consider the case where we are constrained to poll a maximum number of web pages per unit of time, a constraint typically dictated by the available communication bandwidth and by processing-speed limitations. Since only a fraction of the web pages can be polled within a given unit of time, the issue at stake is one of determining which web pages are to be polled, and we attempt to do so in a manner that maximizes the number of changes detected. We solve the problem by first modelling it as a stochastic nonlinear fractional knapsack problem. We then present an online learning automata (LA) system, namely, the hierarchy of twofold resource allocation automata (H-TRAA), whose primitive component is a twofold resource allocation automaton (TRAA). Both the TRAA and the H-TRAA have been proven to be asymptotically optimal. Finally, we demonstrate empirically that the H-TRAA provides orders of magnitude faster convergence compared to the learning automata knapsack game (LAKG), which represents the state of the art for this problem. Further, in contrast to the LAKG, the H-TRAA scales sub-linearly. Based on these results, we believe that the H-TRAA also has tremendous potential to handle demanding real-world applications, particularly those which deal with the world wide web.
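The learning-automata machinery itself is beyond a short sketch, but the knapsack view of the polling problem can be illustrated. The following hedged Python sketch solves a deterministic, known-probability counterpart by greedy marginal gain (valid because the expected-detection gain is concave in the number of polls). It is not the authors' H-TRAA; the change probabilities and the budget are made-up inputs.

```python
# Deterministic stand-in for the fractional-knapsack view: allocate a
# per-time-unit budget of polls greedily by marginal expected gain.
import heapq

def allocate_polls(p, budget):
    """Polling page i k times detects a change with prob. 1 - (1-p[i])**k,
    a concave gain, so repeated greedy choice of the best marginal gain
    is optimal for integer budgets."""
    polls = [0] * len(p)
    heap = [(-pi, i) for i, pi in enumerate(p)]   # max-heap of marginal gains
    heapq.heapify(heap)
    for _ in range(budget):
        gain, i = heapq.heappop(heap)
        polls[i] += 1
        # next marginal gain for page i: p_i * (1 - p_i)**k
        heapq.heappush(heap, (gain * (1 - p[i]), i))
    return polls

print(allocate_polls([0.5, 0.2, 0.05], budget=6))   # -> [3, 3, 0]
```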

5.
Forecasting exchange-rate volatility has long been a focus for researchers studying financial markets. This paper extends a self-organizing mixture model (SOMAR), based on self-organizing neural network techniques, for forecasting the volatility of non-stationary exchange rates. SOMAR drops the traditional stationarity assumption by replacing global modelling with local modelling, so that globally non-stationary data become locally stationary. It is also a nonparametric regression model built on neural network techniques: by combining the simplicity of traditional regression models with the flexibility of neural network algorithms, the extended model (ESOMAR) improves adaptability to heterogeneous data. In experiments on forecasting exchange-rate volatility, ESOMAR outperforms traditional regression models and several models based on other neural network techniques, demonstrating its value for forecasting financial data.

6.
A quantity known as the Kemeny constant, which measures the expected number of links that a web surfer, starting from a random web page, needs to follow before reaching his/her desired location, coincides with the better-known notion of the expected time to mixing, i.e., to reaching stationarity of an ergodic Markov chain. In this paper we present a new formula for the Kemeny constant and we develop several perturbation results for the constant, including conditions under which it is a convex function. Finally, for chains whose transition matrix has a certain directed graph structure we show that the Kemeny constant depends only on the common length of the cycles and the total number of vertices, and not on the specific transition probabilities of the chain.
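The paper's new formula is not reproduced here, but the constant itself is computable from standard identities: with fundamental matrix Z = (I − P + 1πᵀ)⁻¹, one has K = trace(Z) − 1 = Σ_{λ≠1} 1/(1 − λ). A hedged NumPy sketch (note that conventions in the literature differ by an additive 1):

```python
# Compute the Kemeny constant of an ergodic chain via the fundamental
# matrix; the toy transition matrix below is an assumption.
import numpy as np

def kemeny_constant(P):
    n = P.shape[0]
    # stationary distribution: left eigenvector of P for eigenvalue 1
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    pi /= pi.sum()
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    return np.trace(Z) - 1.0

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(kemeny_constant(P))   # -> 3.0 for this toy chain
```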

7.
Short-term electric load forecasting with a neural network based on fuzzy rules is presented. In this network, fuzzy membership functions are represented using combinations of two sigmoid functions. A new scheme for augmenting the rule base is proposed. The network employs the outdoor temperature forecast as one of its input quantities, and the influence of imprecision in this quantity is investigated. The model is shown to be capable of making reasonable forecasts even on exceptional weekdays. Forecasting simulations were made with three different time series of electric load. In addition, the neuro-fuzzy method was tested at two electricity works, where it was used to produce forecasts with 1–24 hour lead times. The results of these one-month real-world tests are presented. Comparative forecasts were also made with the conventional Holt-Winters exponential smoothing method. The main result of the study is that the neuro-fuzzy method requires the time series to be stationary with respect to the training data in order to give clearly better forecasts than the Holt-Winters method.
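As a hedged illustration of the membership representation described above, here is one common way to combine two sigmoids into a bump-shaped fuzzy set; the functional form, parameters, and temperature grid are assumptions, not the paper's exact network.

```python
# Difference-of-two-sigmoids membership function, e.g. "mild temperature".
import numpy as np

def sigmoid(x, a, c):
    return 1.0 / (1.0 + np.exp(-a * (x - c)))

def dsig_membership(x, a1, c1, a2, c2):
    """Smooth membership that rises around c1 and falls around c2."""
    return sigmoid(x, a1, c1) - sigmoid(x, a2, c2)

temp = np.linspace(-20, 30, 11)   # toy outdoor-temperature grid (deg C)
print(np.round(dsig_membership(temp, a1=0.8, c1=0.0, a2=0.8, c2=15.0), 2))
```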

8.
In this paper, a model for websites is presented. The model is well suited to the formal verification of dynamic as well as static properties of the system. A website is defined as a collection of web pages which are semantically connected in some way; external web pages (related pages not belonging to the website) are treated as the environment of the system. We also present the logic used to specify properties of websites, and illustrate the kinds of properties that can be specified and verified by using a model-checking tool on the system. In this setting, we discuss some interesting properties which often need to be checked when designing websites. We have encoded the model using the specification language Maude, which allows us to use the Maude model-checking tool.

9.
Although the efficient identification of user access sessions from very large web logs is an unavoidable data preparation task for the success of higher-level web log mining, little attention has been paid to the algorithmic study of this problem. In this paper we consider two types of user access sessions: interval sessions and gap sessions. We design two efficient algorithms for finding these two types of sessions, with the help of data structures we propose. We present a theoretical analysis of the algorithms and prove that both have optimal time complexity as well as certain error-tolerant properties. We conduct an empirical performance analysis of the algorithms with web logs ranging from 100 megabytes to 500 megabytes. The empirical analysis shows that the algorithms take just a few seconds more than the baseline time, i.e., the time needed for reading the web log once sequentially from disk to RAM, testing whether each user access record is valid or not, and writing each valid user access record back to disk. The empirical analysis also shows that our algorithms are substantially faster than sorting-based session-finding algorithms. Finally, optimal algorithms for finding user access sessions from distributed web logs are also presented.
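For intuition, here is a hedged sketch of gap-based sessionization in its common textbook form: a new session starts when consecutive requests from the same user are separated by more than a threshold. The 30-minute gap is an assumption, and the paper's optimal single-pass algorithms and data structures are not reproduced.

```python
# Naive gap-session finder over (user, timestamp) records.
from collections import defaultdict

def gap_sessions(records, max_gap=1800):
    """records: iterable of (user_id, unix_timestamp) pairs, any order.
    Returns {user_id: [[t1, t2, ...], ...]}: a list of sessions per user."""
    by_user = defaultdict(list)
    for user, ts in records:
        by_user[user].append(ts)
    sessions = {}
    for user, times in by_user.items():
        times.sort()
        sess = [[times[0]]]
        for prev, cur in zip(times, times[1:]):
            if cur - prev > max_gap:
                sess.append([cur])        # gap too large: start a new session
            else:
                sess[-1].append(cur)
        sessions[user] = sess
    return sessions

log = [("u1", 0), ("u1", 100), ("u1", 4000), ("u2", 50)]
print(gap_sessions(log))   # u1 -> two sessions, u2 -> one
```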

10.
Sets and bags are closely related structures and have been studied in relational databases. A bag differs from a set in that it is sensitive to the number of times an element occurs, while a set is not. In this paper, we introduce the concept of a web bag in the context of a web warehouse called Whoweda (Warehouse of Web Data) which we are currently building. Informally, a web bag is a web table which allows multiple occurrences of identical web tuples. A web bag helps to discover useful knowledge from a web table, such as visible documents (or web sites), luminous documents and luminous paths. In this paper, we perform a cost-benefit analysis with respect to the storage, transmission and operational costs of web bags, and discuss the issues and implications of materializing web bags as opposed to web tables containing distinct web tuples. We have computed analytically the upper and lower bounds for the parameters which affect the cost of materializing web bags.
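A hedged toy illustration of the set/bag distinction that motivates web bags; Python's Counter stands in for a web table with duplicate tuples, and the file names are made up.

```python
# A bag keeps multiplicities that a set collapses; those counts are
# what expose frequently occurring ("luminous") documents.
from collections import Counter

web_tuples = ["a.html", "b.html", "a.html", "c.html", "a.html"]

web_set = set(web_tuples)        # duplicates collapse
web_bag = Counter(web_tuples)    # multiplicities preserved

print(web_set)                   # {'a.html', 'b.html', 'c.html'}
print(web_bag)                   # Counter({'a.html': 3, ...})
print([d for d, n in web_bag.items() if n >= 2])  # 'luminous' candidates
```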

11.
Huge amounts of various web items (e.g., images, keywords, and web pages) are being made available on the Web. The popularity of such web items continuously changes over time, and mining temporal patterns in the popularity of web items is an important problem that is useful for several Web applications; for example, temporal patterns in the popularity of web search keywords help web search enterprises predict future popular keywords, enabling them to make price decisions when marketing search keywords to advertisers. However, the presence of millions of web items makes it difficult to scale up previous techniques for this problem. This paper proposes an efficient method for mining temporal patterns in the popularity of web items. We treat the popularity of web items as time series and propose a novel measure, the gap measure, to quantify the dissimilarity between the popularity of two web items. To reduce the computational overhead of this measure, an efficient method using the Discrete Fourier Transform (DFT) is presented. We do not assume that the popularity of web items is periodic. For finding clusters of web items with similar popularity trends, we show the limitations of traditional clustering approaches and propose a scalable, efficient, density-based clustering algorithm using the gap measure. Our experiments using the popularity trends of web search keywords obtained from the Google Trends web site illustrate the scalability and usefulness of the proposed approach in real-world applications.
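The paper's gap measure is not reproduced here, but the DFT speedup it relies on follows a standard pattern: by Parseval's theorem, a distance computed on a truncated set of Fourier coefficients lower-bounds the full Euclidean distance between two series, so most non-neighbors can be pruned cheaply. A hedged sketch with made-up popularity curves:

```python
# Compare popularity time series through their first k DFT coefficients.
import numpy as np

def dft_distance(x, y, k=8):
    """Euclidean distance between the first k DFT coefficients."""
    X = np.fft.rfft(x - x.mean())[:k]
    Y = np.fft.rfft(y - y.mean())[:k]
    return np.linalg.norm(X - Y) / len(x)

t = np.arange(256)
trend_a = np.sin(2 * np.pi * t / 52)            # yearly-ish popularity
trend_b = np.sin(2 * np.pi * (t - 4) / 52)      # same shape, shifted
trend_c = np.random.default_rng(1).random(256)  # unrelated noise
print(dft_distance(trend_a, trend_b), dft_distance(trend_a, trend_c))
```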

12.
In this paper, we address the problem of detecting out-of-plane web vibrations by means of a single camera and a laser dot-pattern device. We have been motivated by the significant economic impact of web vibration phenomena which occur in winding/unwinding systems. Among many sources of disturbance, out-of-plane vibrations of an elastic moving web are well known to be one of the most limiting factors for velocity in the web transport industry. The new technique we propose for the contact-less estimation of out-of-plane web vibration properties during the winding process is the main contribution of this work. As far as we know, this is the first time a technique has been proposed to evaluate the vibrations of a moving web with a camera. Vibration frequencies are estimated from the distance variations of a web cross-section with respect to the camera. Experiments have been performed on a winding plant for elastic fabric with a web width of 10 cm. Distances from the web surface to the camera have been estimated along an image sequence, and the most significant frequencies have been extracted from the variations of this signal (forced and free vibrations) and compared to those provided by strain gauges and by a simple elastic string model in motion.

13.
The expansion of the World Wide Web (WWW) has created an increasing need for tools capable of supporting WWW authors in composing documents using the HyperText Markup Language (HTML). Currently, most web authors use tools which are basically ordinary text editors with additional features that facilitate the easy and correct use of HTML tags. This approach places the burden on the web author to design and then create the entire web site in a top-down fashion, without any explicit support for the structural design of the site. In this paper we discuss an alternative, structural approach to Web authoring, based on the use of the HyperTree hypermedia system as the central authoring tool. The advantages of using HyperTree are twofold. Firstly, web authors can manage a web site as a single complete hypermedia database; for example, HyperTree provides facilities such as the automatic creation of indices and the discovery of link inconsistencies, and it organizes the web pages in an easy-to-understand hierarchy without using any HTML directly. Secondly, web end-users can benefit from the use of HyperTree, since seeking information in structured web sites is generally less disorienting and incurs lower cognitive overhead. ©1997 John Wiley & Sons, Ltd.

14.
In this study, we use a newly developed and refined panel stationarity test with structural breaks to investigate the time-series properties of stock prices for the G-7 stock markets during the 2000–2007 period. The empirical results from numerous earlier panel-based unit root tests which do not take structural breaks into account indicate that stock prices for all the countries we study here are non-stationary; but when we employ the panel stationarity test with structural breaks, we find that the null hypothesis of I(0) stationarity in stock prices cannot be rejected for any of the G-7 stock markets. Our results indicate that the efficient market hypothesis does not hold in these G-7 stock markets.

15.
Many systems in nature, including biological systems, have very complex dynamics which generate random-looking time series. To better understand a particular dynamical system, it is often of interest to determine whether its behaviour is generated by deterministic subsystems (e.g. chaotic systems), stochastic subsystems, or both. Although there are now several approaches to determining this from time series data (e.g. correlation dimension and Lyapunov exponent calculations), these methods often require large amounts of stationary data (biological data are frequently nonstationary over long time scales), can misidentify certain systems, and can be subject to other technical problems. Alternatively, one can use methods that measure the complexity of a particular system, which seldom make assumptions about the system, such as the presence of stationarity. Additionally, mathematical and computational modelling techniques can be used to test different hypotheses about the dynamics of biological systems.

16.
In signal processing and in computational techniques for applied mathematics, linear filtering is an important way of turning a time series with independent innovations into a process with highly dependent signals. Stationary autoregressive processes are among the best-studied linear models, and their construction involves either full or soft (randomized) linear filters. The entropy rate (per-unit-time entropy) of a stationary time series is an important quantitative characteristic. When filtering, the entropy of n-dimensional blocks should change as the order increases, but the entropy rate of the process could remain invariant. This is the main issue addressed in the paper, where we prove that full or soft linear filtering of Gaussian or uniform innovations produces no change in the entropy rate of the process. That is, linear operators applied to independent innovations neither add entropy to nor remove entropy from the innovations family. Plug-in estimators of the entropy rate are also provided, and they can be used to characterize Gaussian or uniform stationary sources. A simulation study is conducted in order to validate the theoretical results and to give a statistical characterization of the plug-in estimator we propose for the entropy rate. Not only is it easy to calculate and precise, but it also fills the gap left by the lack of nonparametric estimators for the entropy rate of stationary AR processes.
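For Gaussian innovations the invariance claim can be checked numerically against Kolmogorov's formula, under which a stationary AR process driven by N(0, σ²) noise has entropy rate h = ½ log(2πeσ²), i.e., the entropy of a single innovation. A hedged sketch using a crude plug-in via a fitted AR(1), not the paper's estimator:

```python
# Entropy-rate invariance check: filter white Gaussian noise through an
# AR(1), fit phi by OLS, and compare plug-in vs true entropy rate.
import numpy as np

rng = np.random.default_rng(7)
sigma = 1.0
eps = rng.normal(0.0, sigma, 100_000)
x = np.zeros_like(eps)
for t in range(1, len(x)):            # AR(1) filter with phi = 0.9
    x[t] = 0.9 * x[t - 1] + eps[t]

phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])   # OLS for AR(1)
resid_var = np.var(x[1:] - phi_hat * x[:-1])

h_true = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
h_plug = 0.5 * np.log(2 * np.pi * np.e * resid_var)
print(round(h_true, 4), round(h_plug, 4))   # nearly identical values
```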

17.
The estimation of muscle fatigue using surface electromyography (SEMG) is highly relevant for evaluating ergonomic risk factors in occupational settings. Signal stationarity plays an important role in selecting an appropriate SEMG signal-processing method for fatigue evaluation: Fourier-based methods (mean or median frequency of the power spectrum) rely on the assumption that the signal under investigation is stationary. The stationarity of SEMG signals and its association with fatigue is rarely studied in the ergonomics literature. Therefore, this study aimed to understand the effect of fatigue on the stationarity of SEMG data. Ten participants performed 40 min of fatiguing upper-extremity exertions, and SEMG data were recorded from the right upper trapezius muscle. The SEMG data recorded under static and dynamic conditions at the beginning and at the end of the fatiguing exertions were used in the analysis. The stationarity analysis was performed for five window sizes of 128, 256, 512, 768 and 1024 ms using a modified reverse arrangement test. The results showed that muscle fatigue reduced the stationarity of the SEMG signal under both static and dynamic conditions. The relationship between muscle fatigue and the stationarity of the SEMG signal was found to be significant at the window size of 512 ms. A significantly greater fatigue-related decrease in stationarity was observed during dynamic exertions compared to static exertions.

Relevance to industry: The findings from the current study illustrate that the stationarity of SEMG signals could be used to quantify muscle fatigue under static and dynamic task conditions. These findings are useful to ergonomic practitioners conducting muscle fatigue estimation using SEMG.
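A hedged sketch of the basic (unmodified) reverse arrangement test for trend in a sequence of windowed signal features; the paper's modified variant is not reproduced, and the toy data are assumptions. The count of reverse arrangements in a trend-free sequence is approximately normal with known mean and variance (Bendat–Piersol).

```python
# Reverse arrangement test: count pairs (i, j), i < j, with x[i] > x[j].
import numpy as np

def reverse_arrangements(x):
    x = np.asarray(x)
    N = len(x)
    A = sum(int(x[i] > x[j]) for i in range(N) for j in range(i + 1, N))
    mu = N * (N - 1) / 4.0                  # mean under stationarity
    var = N * (N - 1) * (2 * N + 5) / 72.0  # variance under stationarity
    z = (A - mu) / np.sqrt(var)
    return A, z          # |z| > 1.96 suggests nonstationarity at the 5% level

rng = np.random.default_rng(3)
stationary = rng.normal(size=40)
trending = stationary + np.linspace(0, 3, 40)   # mimics a fatigue drift
print(reverse_arrangements(stationary)[1], reverse_arrangements(trending)[1])
```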

18.
The mountain clustering method and the subtractive clustering method are useful for finding cluster centers based on local density in object data. These methods have been extended to shell clustering. In this article, we propose a relational mountain clustering method (RMCM), which produces a set of (proto)typical objects as well as a crisp partition of the objects generating the relation, using a new concept that we call relational density. We exemplify RMCM by clustering several relational data sets that come from object data. Finally, RMCM is applied to web log analysis, where it produces useful user profiles from web log data. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 375–392, 2005.
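For reference, here is a hedged sketch of the classic object-data mountain method that RMCM generalizes; the grid resolution, the radii σ and β, and the toy points are assumptions, and the relational-density variant is not shown.

```python
# Mountain clustering: score grid candidates by a Gaussian density
# "mountain", pick the peak, subtract its influence, repeat.
import numpy as np

def mountain_centers(X, n_centers=2, sigma=1.0, beta=1.5, grid=15):
    lo, hi = X.min(0), X.max(0)
    axes = [np.linspace(lo[d], hi[d], grid) for d in range(X.shape[1])]
    V = np.array(np.meshgrid(*axes)).reshape(X.shape[1], -1).T  # candidates
    d2 = ((V[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    m = np.exp(-d2 / (2 * sigma**2)).sum(1)        # mountain function
    centers = []
    for _ in range(n_centers):
        k = int(np.argmax(m))
        centers.append(V[k])
        # destruct the mountain around the chosen peak
        m = m - m[k] * np.exp(-((V - V[k]) ** 2).sum(1) / (2 * beta**2))
    return np.array(centers)

pts = np.vstack([np.random.default_rng(0).normal(c, 0.3, (30, 2))
                 for c in ([0, 0], [3, 3])])
print(mountain_centers(pts))    # two centers, one near each cluster
```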

19.
Web video retagging
Tags associated with web videos play a crucial role in organizing and accessing large-scale video collections. However, the raw tag list (RawL) is usually incomplete, imprecise and unranked, which reduces the usability of tags. Meanwhile, compared with studies on improving the quality of web image tags, tags associated with web videos have not been studied to the same extent. In this paper, we propose a novel web video tag enhancement approach called video retagging, which aims at producing a more complete, precise, and ranked retagged tag list (RetL) for web videos. Given a web video, video retagging first collects its textually and visually related neighbor videos. All tags attached to the neighbors are treated as possibly relevant, and RetL is then generated by inferring the degree of relevance of the tags from both global and video-specific perspectives, using two different graph-based models. Two kinds of experiments, i.e., application-oriented video search and categorization, and user-based subjective studies, are carried out on a large-scale web video dataset. They demonstrate that in most cases RetL is better than RawL in terms of completeness, precision and ranking.
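The paper's two graph-based models are not reproduced here, but the core neighbor-voting intuition can be sketched: candidate tags are ranked by how many of a video's neighbors carry them. A hedged toy example; the video IDs and tags are made up.

```python
# Neighbor-voting tag ranking: a crude stand-in for graph-based retagging.
from collections import Counter

def retag(neighbor_ids, tags_by_video):
    """Rank candidate tags by how many neighbor videos carry them."""
    votes = Counter()
    for nid in neighbor_ids:
        votes.update(set(tags_by_video.get(nid, [])))
    return [tag for tag, _ in votes.most_common()]   # RetL, best first

tags_by_video = {"v1": ["cat", "funny"], "v2": ["cat", "kitten"],
                 "v3": ["music"]}
print(retag(["v1", "v2", "v3"], tags_by_video))
# 'cat' (2 votes) ranks first; single-vote tags follow in arbitrary order
```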
