Related Articles
20 related articles found.
1.
Du Haizhou, Duan Ziyi. Applied Intelligence, 2022, 52(3): 2496-2509

Multivariate time series often contain mixed inputs with complex correlations among them. Detecting change points in multivariate time series is important because it can surface anomalies early and reduce losses, yet it is challenging because many factors are involved, e.g., dynamic correlations and external factors. The performance of traditional methods typically scales poorly. In this paper, we propose Finder, a novel approach to change point detection via multivariate fusion attention networks. Our model consists of two key modules. First, in the time series prediction module, we employ multi-level attention networks based on the Transformer and integrate an external factor fusion component, achieving feature extraction and fusion of multivariate data. Second, in the change point detection module, a deep learning classifier is used to detect change points, improving efficiency and accuracy. Extensive experiments on two real-world datasets demonstrate the superiority and effectiveness of Finder; it outperforms state-of-the-art methods by up to 10.50% in F1 score.
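To make the two-module structure concrete, here is a minimal sketch assuming PyTorch: a Transformer encoder extracts features from fixed-length multivariate windows and a small classifier labels each window as containing a change point or not. This is not the authors' Finder architecture; the layer sizes, window pooling, and class count are illustrative assumptions.

```python
# Minimal sketch (not the authors' Finder code): Transformer feature extractor
# over multivariate windows, followed by a small change-point classifier.
import torch
import torch.nn as nn

class ChangePointDetector(nn.Module):
    def __init__(self, n_vars, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_vars, d_model)            # project variables to model dim
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.classifier = nn.Linear(d_model, 2)             # change vs. no change

    def forward(self, x):                                    # x: (batch, window, n_vars)
        h = self.encoder(self.embed(x))                      # (batch, window, d_model)
        return self.classifier(h.mean(dim=1))                # pool over time, classify window

model = ChangePointDetector(n_vars=8)
window = torch.randn(16, 50, 8)                              # 16 windows, 50 steps, 8 variables
logits = model(window)                                       # (16, 2) change-point scores
```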


2.
Liu Huafeng, Han Xiaofeng, Li Xiangrui, Yao Yazhou, Huang Pu, Tang Zhenmin. Multimedia Tools and Applications, 2019, 78(17): 24269-24283

Robust road detection is a key challenge in safe autonomous driving. Recently, with the rapid development of 3D sensors, more and more researchers are trying to fuse information across different sensors to improve road detection performance. Although much successful work has been done in this field, data fusion within a deep learning framework remains an open problem. In this paper, we propose a Siamese deep neural network based on FCN-8s to detect the road region. Our method uses data collected from a monocular color camera and a Velodyne-64 LiDAR sensor. We project the LiDAR point clouds onto the image plane to generate LiDAR images and feed them into one branch of the network; the RGB images are fed into the other branch. The feature maps that the two branches extract at multiple scales are fused before each pooling layer by inserting additional fusion layers. Extensive experimental results on the public KITTI ROAD dataset demonstrate the effectiveness of our proposed approach.
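The projection step that turns the point cloud into a LiDAR image can be illustrated with a short sketch, assuming the points are already in the camera frame and using a placeholder projection matrix (real values would come from the KITTI calibration files); this is not the authors' code.

```python
# Illustrative sketch: project LiDAR points onto the image plane to build a
# sparse depth image. The 3x4 projection matrix P is a placeholder.
import numpy as np

def lidar_to_depth_image(points, P, height, width):
    """points: (N, 3) LiDAR points already expressed in the camera frame."""
    pts = np.hstack([points, np.ones((points.shape[0], 1))])    # homogeneous coords
    proj = (P @ pts.T).T                                         # (N, 3)
    depth = proj[:, 2]
    keep = depth > 0                                             # points in front of the camera
    u = np.round(proj[keep, 0] / depth[keep]).astype(int)
    v = np.round(proj[keep, 1] / depth[keep]).astype(int)
    d = depth[keep]
    img = np.zeros((height, width), dtype=np.float32)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    img[v[inside], u[inside]] = d[inside]                        # sparse depth image
    return img

P = np.array([[700., 0., 600., 0.],                              # assumed intrinsics
              [0., 700., 180., 0.],
              [0., 0., 1., 0.]])
cloud = np.random.uniform([-10, -2, 2], [10, 2, 60], size=(5000, 3))
depth_img = lidar_to_depth_image(cloud, P, height=375, width=1242)
```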


3.
We present a compositional approach for specifying concurrent behavior of components with data states on the basis of interface theories. The dynamic aspects of a system are specified by modal input/output automata, whereas changing data states are specified by pre- and postconditions. The combination of the two formalisms leads to our notion of modal input/output automata with data constraints (MIODs). In this setting we study refinement and behavioral compatibility of MIODs. We show that compatibility is preserved by refinement and that refinement is compositional w.r.t. synchronous composition, thus satisfying basic requirements of an interface theory. We propose a semantic foundation of interface specifications where any MIOD is equipped with a model-theoretic semantics describing the class of its correct implementation models. Implementation models are formalized in terms of guarded input/output transition systems and the correctness notion is based on a simulation relation between an MIOD and an implementation model which relates not only abstract and concrete control states but also (abstract) data constraints and concrete data states. We show that our approach is compositional in the sense that locally correct implementation models of compatible MIODs compose to globally correct implementations, thus ensuring independent implementability.

4.
Planar Shape Detection and Regularization in Tandem
We present a method for planar shape detection and regularization from raw point sets. The geometric modelling and processing of man-made environments from measurement data often relies upon robust detection of planar primitive shapes. In addition, the detection and reinforcement of regularities between planar parts is a means to increase resilience to missing or defect-laden data as well as to reduce the complexity of models and algorithms down the modelling pipeline. The main novelty behind our method is to perform detection and regularization in tandem. We first sample a sparse set of seeds uniformly on the input point set, and then perform shape detection in parallel through region growing, interleaved with regularization through detection and reinforcement of regular relationships (coplanar, parallel and orthogonal). In addition to addressing the end goal of regularization, such reinforcement also improves data fitting and provides guidance for clustering small parts into larger planar parts. We evaluate our approach against a wide range of inputs and under four criteria: geometric fidelity, coverage, regularity and running times. Our approach compares well with available implementations such as the efficient random sample consensus-based approach proposed by Schnabel and co-authors in 2007.
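As an illustration of the regularization idea (not the paper's implementation), the sketch below snaps the normals of detected planes that are nearly parallel to a shared direction, one of the regular relationships the method reinforces; the angular tolerance is an assumption.

```python
# Small sketch of one regularization step: cluster nearly-parallel plane
# normals (up to sign) and snap each cluster to its mean direction.
import numpy as np

def snap_parallel(normals, tol_deg=5.0):
    """normals: (N, 3) unit normals of detected planes."""
    cos_tol = np.cos(np.radians(tol_deg))
    out = normals.copy()
    assigned = np.full(len(normals), -1)
    for i, n in enumerate(normals):
        if assigned[i] >= 0:
            continue
        group = np.abs(normals @ n) >= cos_tol          # planes nearly parallel to n
        assigned[group] = i
        signs = np.sign(normals[group] @ n)             # flip to agree with n
        mean_dir = (normals[group] * signs[:, None]).mean(axis=0)
        out[group] = signs[:, None] * (mean_dir / np.linalg.norm(mean_dir))
    return out

normals = np.array([[1, 0, 0.02], [0.99, 0.03, 0], [0, 1, 0], [0, 0.98, 0.05]], dtype=float)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(snap_parallel(normals))
```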

5.
We propose weighted modal transition systems, an extension of the well-studied specification formalism of modal transition systems that makes it possible to express both required and optional behaviours of the intended implementations. In our extension we decorate each transition with a weight interval that indicates the range of concrete weight values available to the potential implementations. In this way resource constraints can be modelled using the modal approach. We focus on two problems. First, we study the existence and construction of the largest common refinement of a number of finite deterministic specifications and show that this problem is PSPACE-complete. By constructing the most general common refinement, we allow for a stepwise and iterative construction of a common implementation. Second, we study a logical characterisation of the formalism and show that a formula in a natural weight extension of the logic CTL is satisfied by a given modal specification if and only if it is satisfied by all its refinements. The weight extension is general enough to express the different sorts of properties that we want our weights to satisfy.
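A toy sketch of the weight-interval idea follows: a refining transition may only narrow the weight interval of the transition it refines, so implementations pick concrete weights from the specified range. The tuple encoding of transitions is our own illustration, not the paper's formalism.

```python
# Toy sketch: a refinement may only narrow a specification's weight interval.
def interval_refines(sub, sup):
    """True if interval sub = (lo, hi) is contained in interval sup."""
    return sup[0] <= sub[0] and sub[1] <= sup[1]

spec_transition = ("idle", "send", (2, 10))      # weights between 2 and 10 allowed
refined_transition = ("idle", "send", (4, 6))    # a refinement narrows the range

assert interval_refines(refined_transition[2], spec_transition[2])
```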

6.
Many applications of wireless sensor networks monitor the physical world and report events of interest. To facilitate event detection in these applications, in this paper we propose a pattern-based event detection approach and integrate it into an in-network sensor query processing framework. Unlike existing threshold-based event detection, we abstract events into patterns in the sensory data and convert the problem of event detection into a pattern matching problem. We focus on single-node temporal patterns and define both general patterns and five types of basic patterns for event specification. Considering the limited storage on sensor nodes, we design an on-node cache manager to maintain the historical data required for pattern matching and develop event-driven processing techniques for queries in our framework. We have conducted experiments using patterns for events extracted from real-world datasets. The results demonstrate the effectiveness and efficiency of our approach.
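A small sketch of the single-node idea, with an assumed "sustained rise" pattern rather than the paper's pattern types: recent readings are kept in a bounded on-node cache and matched against a temporal pattern instead of a fixed threshold.

```python
# Illustrative sketch: match a temporal pattern against a bounded on-node cache.
from collections import deque

class NodeCache:
    def __init__(self, size=8):
        self.buf = deque(maxlen=size)           # bounded history, as on a sensor node

    def push(self, value):
        self.buf.append(value)

    def matches_rise(self, min_increase):
        """Assumed 'sustained rise' pattern: monotone increase by at least min_increase."""
        vals = list(self.buf)
        if len(vals) < 3:
            return False
        monotone = all(a <= b for a, b in zip(vals, vals[1:]))
        return monotone and (vals[-1] - vals[0]) >= min_increase

cache = NodeCache(size=5)
for reading in [20.1, 20.4, 21.0, 22.3, 23.8]:  # e.g. temperature samples
    cache.push(reading)
    if cache.matches_rise(min_increase=3.0):
        print("event: sustained temperature rise detected")
```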

7.
In this article, we introduce an approach for detecting evolving geophysical features within interferometric synthetic aperture radar (InSAR)-derived point cloud data sets. This approach is based on the availability of models describing both spatial and temporal behaviours of the geophysical features of interest. The model parameters are used to generate a multidimensional space that is then scanned with a user-defined resolution. For each point in the parameter space, a spatiotemporal template is reconstructed from the original model. This template is then used to scan the point cloud data set for regions matching the spatiotemporal behaviour.

We also introduce a proportional measure in which the residual for each point in the data set is compared to both the data and the template to provide a scale-invariant measure of the behavioural matching. The matching is evaluated for every point in the parameter space over a region of influence determined by the parameters. The resulting multidimensional space is then collapsed onto geographical coordinates to produce an overlay map identifying regions whose spatiotemporal behaviour matches the feature of interest.

We tailored our approach to the detection of subsidence behaviour, indicative of the development of sinkholes, modelled as Gaussian with amplitude linearly increasing with time. We verified the validity of our model using both synthetic and actual InSAR data sets. The latter was obtained by processing imagery of a region near Wink, Texas, containing ground truth sinkhole data.

We applied this framework to a 40 km × 40 km area of interest located in western Virginia and performed ground validation on a subset of the identified regions. The results show good agreement between the locations detected by our algorithm and the evidence of subsidence observed during the ground validation campaign.
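The template idea can be sketched compactly: subsidence is modelled as a spatial Gaussian whose amplitude grows linearly in time, and a candidate data patch is scored by a residual normalised by both the data and the template magnitudes. The parameter names, grid, and scoring formula below are simplifying assumptions, not the exact measure used in the article.

```python
# Simplified sketch: Gaussian sinkhole template with linearly growing amplitude,
# matched against a data patch with a residual-based, scale-aware score.
import numpy as np

def sinkhole_template(xs, ys, ts, rate, sigma):
    """Deformation d(x, y, t) = -rate * t * exp(-(x^2 + y^2) / (2 sigma^2))."""
    X, Y, T = np.meshgrid(xs, ys, ts, indexing="ij")
    return -rate * T * np.exp(-(X**2 + Y**2) / (2 * sigma**2))

def match_score(data, template):
    """Similarity in [0, 1]: 1 means a perfect match."""
    resid = np.linalg.norm(data - template)
    scale = np.linalg.norm(data) + np.linalg.norm(template) + 1e-12
    return 1.0 - resid / scale

xs = ys = np.linspace(-100, 100, 21)           # metres around the candidate centre
ts = np.linspace(0, 2, 10)                     # years of acquisitions
truth = sinkhole_template(xs, ys, ts, rate=12.0, sigma=30.0)
noisy = truth + np.random.normal(0, 1.0, truth.shape)

# scan a small grid of (rate, sigma) hypotheses and keep the best match
best = max(((match_score(noisy, sinkhole_template(xs, ys, ts, r, s)), r, s)
            for r in (6.0, 12.0, 24.0) for s in (15.0, 30.0, 60.0)))
print("best score %.3f at rate=%.1f mm/yr, sigma=%.1f m" % best)
```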

8.
9.
Zhou Yanjun, Ren Huorong, Li Zhiwu, Wu Naiqi, Al-Ahmari Abdulrahman M. Applied Intelligence, 2021, 51(7): 4874-4887

Because time series data are voluminous and non-stationary, a single-model-based method usually cannot detect anomalies in them satisfactorily. To overcome this problem, in this paper a combination-model-based approach is proposed that combines a similarity-measurement-based method and a model-based method for anomaly detection. First, a data representation step generates a compact form of the data to reduce its volume. Because anomalies are generally caused by changes in amplitude and shape, both the original time series and their amplitude-change data are used in the representation to capture shape and morphological features. The results of the data representation are then employed to build a model for anomaly detection. Experimental studies on a large number of datasets show that, compared with state-of-the-art methods, the proposed method significantly improves anomaly detection performance and offers higher anomaly resolution.
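A simplified sketch of the combination idea, using our own reduction rather than the paper's exact pipeline: each window is represented by a compact summary of both the raw values and their first differences (the amplitude change), and a window is scored by the distance of its representation to its nearest neighbour.

```python
# Simplified sketch: compact window representations from raw values and their
# first differences, scored by nearest-neighbour distance.
import numpy as np

def paa(window, segments=4):
    """Piecewise aggregate approximation: mean of equal-length segments."""
    return np.array([seg.mean() for seg in np.array_split(window, segments)])

def represent(series, win=32, segments=4):
    reps = []
    for start in range(0, len(series) - win + 1, win):
        w = series[start:start + win]
        reps.append(np.concatenate([paa(w, segments), paa(np.diff(w), segments)]))
    return np.array(reps)

def anomaly_scores(reps):
    """Distance of each window's representation to its nearest other window."""
    d = np.linalg.norm(reps[:, None, :] - reps[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 2048)) + 0.05 * rng.standard_normal(2048)
series[1000:1032] += 3.0                       # inject an amplitude anomaly
scores = anomaly_scores(represent(series))
print("most anomalous window:", int(scores.argmax()))
```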


10.
Nie Weizhi, Yan Yan, Song Dan, Wang Kun. Multimedia Tools and Applications, 2021, 80(11): 16205-16214

Emotion is a key element in video data. However, it is difficult to understand the emotions conveyed in such videos because video frames that express emotion are sparse. Meanwhile, some approaches proposed in recent years treat utterances as independent entities and ignore the inter-dependencies and relations among them; these approaches also neglect multi-modal feature fusion in the feature learning process. To address this problem, in this paper we propose an LSTM-based model that fully considers the relations among utterances and also handles multi-modal feature fusion during learning. Experiments on several popular datasets demonstrate the effectiveness of our approach.
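A minimal sketch of the idea described above, assuming PyTorch and made-up modality dimensions: per-utterance features from several modalities are fused, and an LSTM over the utterance sequence captures dependencies between utterances before emotion classification.

```python
# Minimal sketch: early fusion of modality features per utterance, then an LSTM
# across utterances to model their inter-dependencies before classification.
import torch
import torch.nn as nn

class UtteranceEmotionLSTM(nn.Module):
    def __init__(self, dim_text=100, dim_audio=74, dim_video=35,
                 hidden=128, n_emotions=6):
        super().__init__()
        self.fuse = nn.Linear(dim_text + dim_audio + dim_video, hidden)  # early fusion
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)            # across utterances
        self.out = nn.Linear(hidden, n_emotions)

    def forward(self, text, audio, video):             # each: (batch, n_utterances, dim)
        fused = torch.relu(self.fuse(torch.cat([text, audio, video], dim=-1)))
        context, _ = self.lstm(fused)                   # utterance-level context
        return self.out(context)                        # (batch, n_utterances, n_emotions)

model = UtteranceEmotionLSTM()
logits = model(torch.randn(2, 10, 100), torch.randn(2, 10, 74), torch.randn(2, 10, 35))
```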


11.

High-resolution imagery provides rich information useful for land-use and land-cover change detection; however, methods to exploit these data lag behind data collection technologies. In this article, we propose a novel object-oriented multi-scale hierarchical sampling (MSHS) change detection method for high-resolution satellite imagery. In our method, MSHS automatically produces multi-scale training samples and different sample combinations. After MSHS, the spectral, texture, and shape features of the training samples are fused to build the feature space. The sample combinations and their corresponding feature spaces are fed into Random Forest (RF) to train multiple change classifiers, and the RF change detection classifier with the minimum out-of-bag (OOB) error is selected, as sketched below. To validate the proposed method, we applied it to high-resolution satellite image data and compared the detection results with those of a single-scale sampling change detection method. The experimental results show that false alarm rates and missed detections of changed objects were lower with our method than with the single-scale sampling method. To demonstrate the scalability of the algorithm, different change detection methods were applied to three study sites; the results show that our method delivers high overall accuracy and F1-scores. Compared to traditional methods, our method makes full use of the multi-scale characteristics of ground objects. Rather than extending multi-scale feature vectors directly, it automatically increases the number of training samples at multiple scales without increasing the volume of manual processing, which improves the generalization of the RF model and makes it more robust.
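The classifier-selection step can be sketched with scikit-learn, using synthetic features in place of the MSHS sample combinations: one Random Forest is trained per candidate combination, and the classifier with the lowest out-of-bag (OOB) error is kept.

```python
# Sketch of the OOB-based selection: train one RF per sample combination and
# keep the one with the smallest out-of-bag error. Features here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def make_combination(n=400, n_features=12):
    """Placeholder for one multi-scale sample combination (spectra/texture/shape)."""
    X = rng.standard_normal((n, n_features))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.standard_normal(n) > 0).astype(int)
    return X, y

best_model, best_oob_error = None, np.inf
for X, y in (make_combination() for _ in range(3)):      # three candidate combinations
    rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0).fit(X, y)
    oob_error = 1.0 - rf.oob_score_
    if oob_error < best_oob_error:
        best_model, best_oob_error = rf, oob_error

print("selected classifier with OOB error %.3f" % best_oob_error)
```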

12.
13.

The problem of computing windows using relational expressions has been solved only in certain cases in which the chase semantics and the extension chase semantics of the database coincide. However, the general problem of computing windows under either chase semantics or extension chase semantics, but without restrictions, remained an open problem. In this paper we present a complete solution of the general problem, under extension chase semantics. Our solution is complete in the sense that it does not require any assumption on the database scheme or on the database state. It follows that our approach subsumes previous approaches, and we exhibit cases in which our approach correctly computes the windows, while previous approaches fail to do so. Moreover, the efficiency of our approach lies in the fact that it uses only those relation schemes and only those functional dependencies that are necessary in the computation of windows. The main technique employed by our approach is a least fixpoint construction using the notion of cover (a cover being a set of relation schemes satisfying certain properties). The proposed technique can be implemented using relational algebra plus recursion.

14.
In this paper, we provide stability guarantees for two frameworks based on the notion of functional maps: the framework of shape difference operators and that of analyzing and visualizing the deformations between shapes. We consider two types of perturbations in our analysis: one on the input shapes and the other on the change in scale. We formulate and theoretically justify the robustness that has been observed in practical implementations of these frameworks. Inspired by our theoretical results, we propose a pipeline for constructing shape difference operators on point clouds and show numerically that the results are robust and informative. In particular, we show that both the shape difference operators and the derived areas of highest distortion are stable with respect to changes in shape representation and change of scale. Remarkably, this is in contrast with the well-known instability of the eigenfunctions of the Laplace–Beltrami operator computed on point clouds compared to those obtained on triangle meshes.

15.
Many real-life critical systems are described with large models and exhibit both probabilistic and non-deterministic behaviour. Verification of such systems requires techniques that avoid the state space explosion problem. Symbolic model checking and compositional verification such as assume-guarantee reasoning are two promising techniques for overcoming this barrier. In this paper, we propose a probabilistic symbolic compositional verification approach (PSCV) to verify probabilistic systems in which each component is a Markov decision process (MDP). PSCV starts by implicitly encoding the system components using compact data structures. To establish the symbolic compositional verification process, we propose a sound and complete symbolic assume-guarantee reasoning rule. To attain completeness of this rule, we propose to model assumptions using interval MDPs. In addition, we give a symbolic MTBDD-learning algorithm to automatically generate the symbolic assumptions. Moreover, we propose using causality to generate small counterexamples in order to refine the conjectured assumptions. Experimental results suggest promising outlooks for our probabilistic symbolic compositional approach.

16.

Enabling information systems to face anomalies in the presence of uncertainty is a compelling and challenging task. In this work the problem of unsupervised outlier detection in large collections of data objects modeled by means of arbitrary multidimensional probability density functions is considered. We present a novel definition of uncertain distance-based outlier under the attribute-level uncertainty model, according to which an uncertain object is an object that always exists but whose actual value is modeled by a multivariate pdf. According to this definition, an uncertain object is declared to be an outlier on the basis of the expected number of its neighbors in the dataset. To the best of our knowledge this is the first work to consider the unsupervised outlier detection problem on data objects modeled by means of arbitrarily shaped multidimensional distribution functions. We present the UDBOD algorithm, which efficiently detects the outliers in an input uncertain dataset by taking advantage of three optimized phases: parameter estimation, candidate selection, and candidate filtering. An experimental campaign is presented, including a sensitivity analysis, a study of the effectiveness of the technique, a comparison with related algorithms, also in the presence of high-dimensional data, and a discussion of the behavior of our technique in real-world scenarios.
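A Monte Carlo sketch of the definition above, assuming simple Gaussian pdfs (this is not the UDBOD algorithm): an uncertain object is flagged as an outlier when its expected number of neighbours within radius eps falls below a threshold k.

```python
# Monte Carlo sketch: expected number of neighbours of an uncertain object,
# with each object's value drawn from its own Gaussian pdf.
import numpy as np

rng = np.random.default_rng(2)

def expected_neighbors(means, covs, idx, eps, samples=500):
    """Expected count of other objects within eps of object idx, by sampling."""
    draws = np.stack([rng.multivariate_normal(m, c, samples)
                      for m, c in zip(means, covs)])           # (n_obj, samples, dim)
    dist = np.linalg.norm(draws - draws[idx], axis=-1)         # pairwise sample distances
    within = (dist <= eps).mean(axis=1)                        # P(neighbour) per object
    within[idx] = 0.0                                          # exclude the object itself
    return within.sum()

means = np.vstack([rng.normal(0, 1, (20, 2)), [[8.0, 8.0]]])   # one far-away object
covs = [0.1 * np.eye(2)] * len(means)
for i in range(len(means)):
    if expected_neighbors(means, covs, i, eps=1.5) < 3:
        print("object %d flagged as uncertain outlier" % i)
```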


17.
The problem of anomaly detection in time series has received a lot of attention in the past two decades. However, existing techniques either cannot locate where the anomalies are within an anomalous time series, or they require users to provide the length of potential anomalies. To address these limitations, we propose a self-learning online anomaly detection algorithm that automatically identifies anomalous time series, as well as the exact locations where the anomalies occur in the detected time series. In addition, for multivariate time series, detecting anomalies is difficult due to the following challenges. First, anomalies may occur in only a subset of dimensions (variables). Second, the locations and lengths of anomalous subsequences may differ across dimensions. Third, some anomalies may look normal in each individual dimension but abnormal when dimensions are considered in combination. To mitigate these problems, we introduce a multivariate anomaly detection algorithm that detects anomalies and identifies the dimensions and locations of the anomalous subsequences. We evaluate our approaches on several real-world datasets, including two CPU manufacturing datasets from Intel. We demonstrate that our approach can successfully detect the correct anomalies without requiring any prior knowledge about the data.
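As a rough illustration of the kind of output such a detector produces (a plain sliding-window z-score, not the paper's self-learning algorithm), the sketch below reports, per dimension, the locations of subsequences that deviate from that dimension's recent behaviour.

```python
# Rough sketch: per-dimension localisation of anomalous subsequences using a
# sliding-window z-score; returns {dimension: [(start, end), ...]}.
import numpy as np

def anomalous_subsequences(series, win=20, z_thresh=4.0):
    """series: (T, D) multivariate time series."""
    T, D = series.shape
    found = {}
    for d in range(D):
        x = series[:, d]
        flags = np.zeros(T, dtype=bool)
        for t in range(win, T):
            hist = x[t - win:t]
            sigma = hist.std() + 1e-9
            flags[t] = abs(x[t] - hist.mean()) / sigma > z_thresh
        # merge consecutive flagged points into (start, end) locations
        spans, start = [], None
        for t, f in enumerate(flags):
            if f and start is None:
                start = t
            elif not f and start is not None:
                spans.append((start, t - 1))
                start = None
        if start is not None:
            spans.append((start, T - 1))
        if spans:
            found[d] = spans
    return found

rng = np.random.default_rng(3)
data = rng.standard_normal((500, 3))
data[300:310, 1] += 8.0                      # anomaly only in dimension 1
print(anomalous_subsequences(data))
```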

18.
A Roadmap to the Integration of Early Visual Modules
By examining the problem of image correspondence (binocular stereo and optical flow) and its relationship with other modules such as segmentation, shape and depth estimation, occlusion detection, and local signal processing, we argue that early visual modules are entangled in chicken-and-egg relationships, and that unraveling these necessitates a compositional approach. In this paper, we present compositional algorithms that can match images containing slanted surfaces and images having different contrast, while simultaneously solving other problems as part of the same process. Ultimately, our goal is to motivate the application of the compositional approach to unify many other early visual modules. Experimental results are presented on a wide variety of stereo and motion images, including images with contrast mismatch and images containing untextured slanted surfaces.

19.
This article presents an approach to enrich the MATLAB language with aspect-oriented modularity features, enabling developers to experiment with different implementation characteristics and to acquire runtime data and traces without polluting their base MATLAB code. We propose a language through which programmers configure the low-level data representation of variables and expressions. Examples include specifically tailored fixed-point data representations leading to more efficient support for the underlying hardware, e.g., digital signal processors and application-specific architectures without built-in floating-point units. This approach assists developers in adding handlers and monitoring features in a non-invasive way, as well as in configuring MATLAB functions with optimized implementations. Different aspect modules can be used to retarget common MATLAB code bases for different purposes and implementations. We validate the proposed approach with a set of representative examples in which it provides a simple way to explore a number of properties. Experimental results and collected aspect-oriented software metrics support the claims about its usefulness.

20.

The neighborhood problem appears in many applications of computational geometry, computational mechanics, etc. In all these situations, the main requirement for a competitive implementation is performance, which can only be attained on modern hardware by exploiting parallelism. However, whereas the performance of serial algorithms is fairly predictable, that of parallel methods depends on delicate issues that have a huge impact (cache memory, cache misses, memory alignment, etc.) but are not easy to control. Although there is no simple way to deal with these factors in shared-memory architectures, it is quite convenient to program parallel algorithms in which the data are segregated on a per-thread basis. With this objective in mind, we propose a strategy for developing parallel algorithms based on a two-level design, and apply it to efficiently solve the nearest-neighborhood problem. At the higher level, the proposed methods orchestrate the parallel algorithm and split the space into cells stored in a hash table; at the lower level, our methods hold serial search algorithms that are completely agnostic to their high-level counterpart. Using this strategy, we have developed a library combining different serial and parallel algorithms, optimized them, and assessed their performance. The analysis carried out allows us to better understand the main bottlenecks in the algorithmic solution of the nearest-neighborhood problem and to produce very fast implementations that improve on existing software.
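The two-level structure can be sketched in a few lines, here serially in Python purely to show the decomposition: the high level buckets points into grid cells stored in a hash map, and the low level runs a plain search restricted to the cells neighbouring the query.

```python
# Serial sketch of the two-level decomposition: a hash table of grid cells at
# the high level, and an agnostic serial search over neighbouring cells below.
import numpy as np
from collections import defaultdict
from itertools import product

class HashGridNN:
    def __init__(self, points, cell_size):
        self.pts = np.asarray(points)
        self.h = cell_size
        self.cells = defaultdict(list)                     # high level: hash table of cells
        for i, p in enumerate(self.pts):
            self.cells[tuple((p // self.h).astype(int))].append(i)

    def nearest(self, q):                                  # low level: serial search
        q = np.asarray(q)
        centre = tuple((q // self.h).astype(int))
        best_i, best_d = -1, np.inf
        for ring in range(1, 4):                           # grow the searched neighbourhood
            for off in product(range(-ring, ring + 1), repeat=len(centre)):
                for i in self.cells.get(tuple(c + o for c, o in zip(centre, off)), []):
                    d = np.linalg.norm(self.pts[i] - q)
                    if d < best_d:
                        best_i, best_d = i, d
            if best_i >= 0 and best_d <= ring * self.h:    # no closer point can lie outside
                return best_i, best_d
        return best_i, best_d

pts = np.random.rand(10000, 3)
grid = HashGridNN(pts, cell_size=0.05)
print(grid.nearest([0.5, 0.5, 0.5]))
```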

