Found 20 similar documents; search took 15 ms
1.
Abnormal crowd behavior detection is an important research issue in computer vision. However, complex real-life situations (e.g., severe occlusion and over-crowding) still challenge the effectiveness of previous algorithms. Recently, methods based on spatio-temporal cuboids have become popular in video analysis. To our knowledge, in existing methods the spatio-temporal cuboids are always extracted randomly from the video sequence, with the size of each cuboid and the total number of cuboids determined empirically; the extracted features therefore either contain redundant information or lose important information, which severely affects accuracy. In this paper, we propose an improved method in which the spatio-temporal cuboids are no longer determined arbitrarily but by the information contained in the video sequence: cuboids are extracted with adaptive sizes, and their total number and extraction positions are determined automatically. Moreover, to compute the similarity between two spatio-temporal cuboids of different sizes, we design a novel codebook data structure constructed as a set of two-level trees. Experimental results show that both the false positive and false negative detection rates are significantly reduced. Keywords: Codebook, Latent Dirichlet Allocation (LDA), social force model, spatio-temporal cuboid.
2.
Abnormal crowd behavior detection is an important research issue in computer vision. Traditional methods first extract local spatio-temporal cuboids from the video and then describe each cuboid with optical flow or gradient features. Unfortunately, under complex environmental conditions such as severe occlusion and over-crowding, the existing algorithms cannot be applied efficiently. In this paper, we derive high-frequency and spatio-temporal (HFST) features to detect abnormal crowd behaviors in videos. They are obtained by applying the wavelet transform to the planes of the cuboid that are parallel to the time direction; the high-frequency information characterizes the dynamic properties of the cuboid. The HFST features are applied to both global and local abnormal crowd behavior detection. For global detection, Latent Dirichlet Allocation is used to model the normal scenes; for local detection, multiple Hidden Markov Models with a competitive mechanism are employed. Comprehensive experimental results show that our approach greatly improves detection speed while achieving good accuracy in terms of the false positive and false negative detection rates.
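A hypothetical sketch of the high-frequency extraction step above, using a one-level Haar transform along the time axis of a cuboid (the paper's exact wavelet, decomposition level, and plane selection are not specified here, so this is an illustrative assumption):

```python
import numpy as np

def haar_highpass(cuboid):
    """One-level Haar wavelet detail coefficients along the time axis.

    cuboid: array of shape (T, H, W) with even T.
    Returns detail coefficients of shape (T // 2, H, W) that capture
    high-frequency (fast-changing) temporal content.
    """
    even = cuboid[0::2].astype(float)
    odd = cuboid[1::2].astype(float)
    return (even - odd) / np.sqrt(2.0)  # Haar detail (high-pass) band

def hfst_energy(cuboid):
    """Scalar descriptor: mean energy of the temporal detail band."""
    d = haar_highpass(cuboid)
    return float(np.mean(d ** 2))
```

A static cuboid yields zero high-frequency energy, while a flickering (dynamic) cuboid yields a large value, which is the property the HFST features exploit.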
3.
The effectiveness of the theory of fuzzy sets in detecting different regional boundaries of X-ray images is demonstrated. The algorithm first enhances the contrast among regions (having small changes in gray levels) using the contrast intensification (INT) operation, along with smoothing in the fuzzy property plane, before detecting edges. The property plane is extracted from the spatial domain using the S, π and (1 − π) functions and the fuzzifiers. Final edge detection is achieved using the max or min operator. The system performance under different parameter conditions is illustrated by application to an image of a radiograph of the wrist.
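The INT operation referenced above is the standard contrast-intensification operator from fuzzy image enhancement; a minimal sketch of it on a single membership value:

```python
def int_operator(mu):
    """Contrast intensification (INT) on a fuzzy membership value in [0, 1].

    Memberships below 0.5 are pushed toward 0 and memberships above 0.5
    toward 1, increasing contrast in the fuzzy property plane.
    """
    if mu <= 0.5:
        return 2.0 * mu * mu
    return 1.0 - 2.0 * (1.0 - mu) ** 2
```

Applying the operator repeatedly drives each membership toward 0, 0.5, or 1, which is what sharpens region boundaries before edge detection.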
4.
The detection of abnormal driving behaviors based on video surveillance systems is an important part of Intelligent Transportation Systems (ITS) and can help reduce disturbances to traffic flow and improve traffic safety. First, the study proposes a novel nonlinear sparse reconstruction method for abnormal driving behavior detection in video surveillance. A hybrid kernel function, formed by convexly combining a local radial basis function (RBF) kernel and a global homogeneous polynomial kernel, is applied in the sparse reconstruction method. Then, a novel Hybrid Kernel Orthogonal Matching Pursuit (HKOMP) algorithm is designed to solve the proposed sparse reconstruction model. Finally, the performance of the abnormal behavior detection method is tested on two datasets, i.e., a stop sign dataset and a car parking dataset. In addition, comparative experiments with five classical methods are carried out. The experimental results indicate that the proposed method outperforms the five comparison methods in terms of accuracy.
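The hybrid kernel can be sketched directly from the description above; the mixing weight `lam` and the kernel parameters `gamma` and `degree` are illustrative assumptions, not values from the paper:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Local kernel: radial basis function exp(-gamma * ||x - y||^2)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def poly_kernel(x, y, degree=2):
    """Global kernel: homogeneous polynomial (<x, y>)^degree."""
    return sum(a * b for a, b in zip(x, y)) ** degree

def hybrid_kernel(x, y, lam=0.5, gamma=1.0, degree=2):
    """Convex combination of the local RBF and global polynomial kernels."""
    return lam * rbf_kernel(x, y, gamma) + (1.0 - lam) * poly_kernel(x, y, degree)
```

The convex combination keeps the result a valid kernel, since any nonnegative weighted sum of positive semidefinite kernels is positive semidefinite.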
5.
To effectively monitor abnormal crowd behavior in the specific environment of public buses, a method for detecting abnormal crowd situations inside buses is proposed. A region of interest is established in the video images and preprocessed; moving targets are extracted using an improved ViBe algorithm, and a multi-scale sliding-window algorithm is introduced to determine the recognition regions; abnormal behavior recognition is then performed on the recognition regions over multiple consecutive frames using an improved convolutional neural network, and the recognition results are used to judge whether the crowd in the bus is abnormal. Comparison with traditional methods shows that the algorithm achieves a high detection accuracy of 93.5% and a low false detection rate of only 1.6%, making it of high reference value in practical applications.
6.
Colon cancer is the second major cause of cancer-related deaths in industrial nations. Computed tomographic colonography (CTC) has emerged in the last decade as a new, less invasive colon diagnostic alternative to the usually practiced optical colonoscopy. The overall goal is to increase the effectiveness of virtual endoscopic navigation in the existing computer-aided detection (CAD) system. The colonic/haustral folds serve as important landmarks for various associated tasks in virtual endoscopic navigation, such as prone–supine registration, colonic polyp detection and tenia coli extraction. In this paper, we present two different techniques, first in isolation and then in synergism, for the detection of haustral folds. Our input is volumetric CTC images. The first method, which uses a combination of heat diffusion and the fuzzy c-means algorithm (FCM), has a tendency toward over-segmentation. The second method, which employs level sets, suffers from under-segmentation. A synergistic combination, where the output of the first is used as input for the second, is shown to improve segmentation quality. Experimental results are presented on digital colon phantoms as well as real patient scans. The combined method has a total erroneous (over-segmentation plus under-segmentation) detection of (6.5 ± 2)% of the total number of folds per colon, compared with (12.5 ± 5)% for the diffusion-FCM-based method and (11.5 ± 3)% for the level set-based method. The p-values obtained from the associated ANOVA tests indicate that the performance improvements are statistically significant.
8.
This paper presents a new extension of fuzzy sets: R-fuzzy sets. The membership of an element of an R-fuzzy set is represented as a rough set. This new extension facilitates the representation of an uncertain fuzzy membership with a rough approximation. Based on our definition of R-fuzzy sets and their operations, the relationships between R-fuzzy sets and other fuzzy sets are discussed and some examples are provided.
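One possible encoding of such a rough membership, with union and intersection defined componentwise; this is a sketch under the assumption that the paper's operations follow the usual max/min extensions, which may differ from the authors' exact definitions:

```python
class RFuzzyMembership:
    """Membership of an element represented as a rough set: a pair
    (lower, upper) of approximations with 0 <= lower <= upper <= 1."""

    def __init__(self, lower, upper):
        if not (0.0 <= lower <= upper <= 1.0):
            raise ValueError("need 0 <= lower <= upper <= 1")
        self.lower = lower
        self.upper = upper

    def union(self, other):
        """Componentwise max, mirroring fuzzy-set union."""
        return RFuzzyMembership(max(self.lower, other.lower),
                                max(self.upper, other.upper))

    def intersection(self, other):
        """Componentwise min, mirroring fuzzy-set intersection."""
        return RFuzzyMembership(min(self.lower, other.lower),
                                min(self.upper, other.upper))
```

An ordinary fuzzy membership is recovered as the degenerate case lower == upper, which illustrates how R-fuzzy sets generalize plain fuzzy sets.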
9.
In this paper, a novel graph-based approach to the shape decomposition problem is presented. The shape is transformed into a visibility graph enriched with local neighborhood information. A two-step diffusion process is then applied to the visibility graph, efficiently enhancing the information provided and thus leading to a more robust and meaningful graph construction. Inspired by the notion of a clique as a strict cluster definition, the dominant sets algorithm is invoked, slightly modified to suit the specific problem of defining shape parts. The cluster cohesiveness and a node participation vector are two important outputs of the proposed graph partitioning method. As opposed to most existing techniques, the final number of clusters is determined automatically, by estimating the cluster cohesiveness on a random network generation process. Experimental results on several shape databases show the effectiveness of our framework for graph-based shape decomposition.
10.
Contour-based object detection can be formulated as a matching problem between model contour parts and image edge fragments. We propose a novel solution by treating this problem as one of finding dominant sets in weighted graphs. The nodes of the graph are pairs composed of model contour parts and image edge fragments, and the weights between nodes are based on shape similarity. Because of the high consistency between correct correspondences, the correct matching corresponds to a dominant set of the graph. Consequently, once a dominant set is determined, it provides a selection of correct correspondences. As the proposed method is able to find all the dominant sets, we can detect multiple objects in an image in one pass. Moreover, since our approach is based purely on shape, we also determine an optimal scale of the target object without the common enumeration of all possible scales. Both theoretical analysis and extensive experimental evaluation illustrate the benefits of our approach.
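This abstract and the previous one both rest on the dominant-sets framework, which is typically computed with replicator dynamics on the affinity matrix; a minimal sketch (the example matrix and iteration count are illustrative assumptions, not from either paper):

```python
def dominant_set(A, iters=200):
    """Approximate a dominant set of a weighted graph via replicator dynamics.

    A: symmetric affinity matrix (list of lists, zero diagonal).
    Returns the characteristic vector x on the simplex; entries well above
    zero mark the members of the dominant set.
    """
    n = len(A)
    x = [1.0 / n] * n  # start from the barycenter of the simplex
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        total = sum(x[i] * Ax[i] for i in range(n))
        if total == 0:
            break
        # Replicator update: grow entries whose payoff exceeds the average.
        x = [x[i] * Ax[i] / total for i in range(n)]
    return x
```

Nodes outside the dominant set have their weights driven toward zero, so thresholding the converged vector yields the cluster; peeling the selected nodes off and re-running recovers further dominant sets.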
11.
Planar curves “suggested” by sequences of points are considered. A method is discussed for refining the sequence iteratively so that the fairness of the curve is improved while maintaining the basic form and features. This involves the use of intrinsic coordinates and works by smoothing the curvature plot and then integrating twice to recover the curve. Modifications are made during this process to ensure that individual subsegments run correctly between their end points and join smoothly.
12.
In this work we consider a fuzzy-set-based approach to the issue of discovery in databases (database mining). The concept of linguistic summaries is described and shown to be a user-friendly way to present information contained in a database. We discuss methods for measuring the amount of information provided by a linguistic summary. The issue of conjecturing, i.e., how to decide which summaries may be informative, is discussed. We suggest two approaches to help us focus on relevant summaries. The first method, called the template method, makes use of linguistic concepts related to the domain of the attributes involved in the summaries. The second approach uses the mountain clustering method to help focus our summaries. © 1996 John Wiley & Sons, Inc.
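A linguistic summary such as "most records are young" is commonly scored with Zadeh's sigma-count; a minimal sketch (the piecewise-linear shape of the quantifier "most" is an illustrative assumption, not the paper's definition):

```python
def truth_of_summary(data, predicate, quantifier):
    """Degree of truth of 'Q objects in data satisfy predicate'.

    predicate: maps a record to a membership degree in [0, 1].
    quantifier: maps the proportion r in [0, 1] to a truth degree.
    """
    r = sum(predicate(d) for d in data) / len(data)  # sigma-count proportion
    return quantifier(r)

def most(r):
    """A piecewise-linear membership function for the quantifier 'most'."""
    if r <= 0.3:
        return 0.0
    if r >= 0.8:
        return 1.0
    return (r - 0.3) / 0.5
```

For example, with a 0/1 predicate satisfied by four of five records, the proportion is 0.8 and the summary "most" is fully true.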
13.
To address the high false negative and false positive rates of intrusion detection systems, an intrusion detection method based on neighborhood rough sets is proposed. The method introduces the neighborhood concept into rough set theory, which removes the need to discretize the data and thus reduces information loss. Experimental results show that the method selects more important attribute combinations, achieving a higher detection rate together with lower false negative and false positive rates.
14.
We present techniques for warping and blending (or subtracting) geometric textures onto surfaces represented by high resolution level sets. The geometric texture itself can be represented either explicitly as a polygonal mesh or implicitly as a level set. Unlike previous approaches, we can produce topologically connected surfaces with smooth blending and low distortion. Specifically, we offer two different solutions to the problem of adding fine-scale geometric detail to surfaces. Both solutions assume a level set representation of the base surface which is easily achieved by means of a mesh-to-level-set scan conversion. To facilitate our mapping, we parameterize the embedding space of the base level set surface using fast particle advection. We can then warp explicit texture meshes onto this surface at nearly interactive speeds or blend level set representations of the texture to produce high-quality surfaces with smooth transitions.
15.
A method for the analysis of the effects of subjective and objective tolerances in networks and systems is presented in this paper. The imprecision of the components is represented using fuzzy sets, and then the value of the desired attribute is computed. The resulting attribute is also fuzzy and is obtained in the form of a fuzzy set. This fuzzy set contains the extremal and other values of the desired attribute along with their grades of membership. Thus, apart from obtaining the extremal values, we get an overall picture of the attribute. Since this method is directly applicable to the analysis of subjective tolerances, networks representing humanistic systems may be analysed using it.
16.
The task of using Markov chains to develop a statistical behavioral model of a DS user for detecting abnormal activity is described. In order to verify the assumption that this method can be used with electronic health records, a program system was developed. Experiments with the system showed that the approach could be efficiently applied to abnormal action detection, for example, in data systems handling sensitive information.
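A minimal sketch of such a behavioral model: estimate action-to-action transition probabilities from normal sessions and flag sequences whose average log-likelihood is unusually low. The add-one smoothing and the threshold-free score below are illustrative assumptions, not details from the paper:

```python
from collections import defaultdict
import math

def fit_transitions(sequences, smoothing=1.0):
    """Estimate a first-order Markov transition table from normal sessions."""
    counts = defaultdict(lambda: defaultdict(float))
    states = set()
    for seq in sequences:
        states.update(seq)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1.0
    probs = {}
    for a in states:
        # Add-one smoothing so unseen transitions keep nonzero probability.
        total = sum(counts[a].values()) + smoothing * len(states)
        probs[a] = {b: (counts[a][b] + smoothing) / total for b in states}
    return probs

def log_likelihood(seq, probs):
    """Average log-probability of a sequence; low values flag abnormal activity."""
    lp = [math.log(probs[a][b]) for a, b in zip(seq, seq[1:])]
    return sum(lp) / len(lp)
```

In practice a threshold on the score, calibrated on held-out normal sessions, separates routine activity from suspicious action sequences.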
17.
We propose two fast methods for dominant point detection and polygonal representation of noisy and possibly disconnected curves, based on a study of the decomposition of the curve into a sequence of maximal blurred segments [2]. Starting from results of discrete geometry [3], [4], the notion of a maximal blurred segment of width ν [2] has been proposed, which is well adapted to possibly noisy curves. The first method uses a fixed parameter, namely the width of the considered maximal blurred segments. The second method is deduced from the first through a multi-width approach, yielding a non-parametric method that needs no threshold for working with noisy curves. Comparisons with other methods in the literature demonstrate the efficiency of our approach. Thanks to a recent result [5] concerning the construction of the sequence of maximal blurred segments, the complexity of the proposed methods is O(n log n). An application to vectorization is also given in this paper.
18.
The concept of dominance has recently attracted much interest in the context of skyline computation. Given an N-dimensional data set S, a point p is said to dominate q if p is better than q in at least one dimension and equal to or better than it in the remaining dimensions. In this article, we propose extending the concept of dominance for business analysis from a microeconomic perspective. More specifically, we propose a new form of analysis, called Dominant Relationship Analysis (DRA), which aims to provide insight into the dominant relationships between products and potential buyers. By analyzing such relationships, companies can position their products more effectively while remaining profitable. To support DRA, we propose a novel data cube called DADA (Data Cube for Dominant Relationship Analysis), which captures the dominant relationships between products and customers. Three types of queries called Dominant Relationship Queries (DRQs) are consequently proposed for analysis purposes: (1) Linear Optimization Queries (LOQ), (2) Subspace Analysis Queries (SAQ), and (3) Comparative Dominant Queries (CDQ). We designed efficient algorithms for computation, compression and incremental maintenance of DADA as well as for answering the DRQs using DADA. We conducted extensive experiments on various real and synthetic data sets to evaluate the technique of DADA and report results demonstrating the effectiveness and efficiency of DADA and its associated query-processing strategies.
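The dominance definition quoted above is easy to make concrete; a sketch assuming larger values are better in every dimension:

```python
def dominates(p, q):
    """True if p dominates q: p is at least as good in every dimension
    and strictly better in at least one (here, larger is better)."""
    strictly_better = False
    for a, b in zip(p, q):
        if a < b:
            return False
        if a > b:
            strictly_better = True
    return strictly_better

def skyline(points):
    """The skyline: points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Note that no point dominates itself, since the strict-improvement condition fails for identical points.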
19.
Malware detection is one of the most challenging problems in computer security. Recently, methods based on machine learning have become very popular for unknown and variant malware detection. To achieve successful learning, extracting discriminant and stable features is the most important prerequisite. In this paper, we propose a bilayer behavior abstraction method based on semantic analysis of dynamic API sequences. Operations on sensitive system resources and complex behaviors are abstracted in an interpretable way at different semantic layers. At the lower layer, raw API calls are combined to abstract low-layer behaviors via data dependency analysis. At the higher layer, low-layer behaviors are further combined to construct more complex high-layer behaviors with good interpretability. The extracted low-layer and high-layer behaviors are finally embedded into a high-dimensional vector space, so the abstracted behaviors can be used directly by many popular machine learning algorithms. In addition, to tackle the problem that benign programs are not adequately sampled, or that malware and benign programs are severely imbalanced, an improved one-class support vector machine (OC-SVM) named OC-SVM-Neg is proposed, which makes use of the available negative samples. Experimental results show that the proposed feature extraction method with OC-SVM-Neg outperforms binary classifiers in terms of false alarm rate and generalization ability.
20.
To address the problem that traditional manual inspection for unlicensed taxis is time-consuming, labor-intensive and inefficient, a new method for automatically detecting unlicensed taxis is proposed. On the Hadoop platform, vehicle gas-refueling data collected across the Xinjiang region via IoT technology is analyzed; temporal and spatial features of vehicle refueling are extracted; and a random forest algorithm is used to study the relationships among vehicles, drivers and gas stations, thereby discovering unlicensed taxis with abnormal refueling patterns. Experiments on a large-scale real dataset show that the proposed method achieves high accuracy on the unlicensed-taxi discovery problem and can help the relevant authorities improve detection efficiency.