Similar documents
20 similar documents found (search time: 31 ms)
1.
Shadows may occupy a significant portion of an image, especially in urban scenes. This research aims to detect shadows in high-resolution orbital images using morphological operators. To verify the contribution of preprocessing to this shadow detection methodology, we tested the median, morphological, bilateral and mean curvature filters to evaluate which one best mitigates image noise and improves detection performance. The study used 10 panchromatic WorldView-II images of the urban area of Presidente Prudente, in the state of Sao Paulo. Following the mathematical-morphology shadow detection methodology, we computed accuracy values for the images produced by each smoothing method in the preprocessing step. Finally, we evaluated all smoothing levels to select the most appropriate one, according to both accuracy and whether the images preserve the elements of interest. The results show that the bilateral filter performs satisfactorily, since it considers the spatial domain in the smoothing process while also incorporating the pixel intensity domain. We therefore conclude that the bilateral filter is a good alternative, given an adequate choice of parameters.
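The edge-preserving behaviour that makes the bilateral filter attractive here can be sketched in one dimension (a minimal illustration, not the filter configuration used in the study; the parameter values are arbitrary):

```python
import math

def bilateral_filter_1d(signal, sigma_space=1.0, sigma_range=25.0, radius=2):
    """Smooth while preserving edges: each neighbor's weight combines
    spatial closeness and intensity similarity (the range term)."""
    out = []
    n = len(signal)
    for i in range(n):
        num, den = 0.0, 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_space ** 2))
                 * math.exp(-((signal[i] - signal[j]) ** 2)
                            / (2 * sigma_range ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A step edge stays sharp: pixels across the edge get near-zero weight.
edge = [10, 10, 10, 200, 200, 200]
print(bilateral_filter_1d(edge))
```

Because the range term suppresses contributions from pixels with very different intensities, a shadow boundary is not blurred away during smoothing, which is exactly the property the abstract credits for the filter's good detection accuracy.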

2.
Abstract: Pedestrian detection techniques are important and challenging, especially in complex real-world scenes. They can be used for ensuring pedestrian safety, in ADASs (advanced driver assistance systems) and in safety surveillance systems. In this paper, we propose a novel approach for multi-person tracking-by-detection using deformable part models in a Kalman filtering framework. The Kalman filter is used to keep track of each person, and a unique label is assigned to each tracked individual; people can therefore enter and leave the scene at random. We demonstrate our results on the Caltech Pedestrian benchmark, which is two orders of magnitude larger than any other existing dataset and contains pedestrians varying widely in appearance, pose and scale. Complex situations, such as people occluding each other, are handled gracefully, and individuals can be tracked correctly after a group of people splits. Experiments confirm the real-time performance and robustness of our system in complex scenes. Our tracking model achieves a tracking accuracy of 72.8% and a tracking precision of 82.3%, and Kalman filtering further reduces false positives by 2.8%.
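The per-person tracking step can be illustrated with a minimal constant-velocity Kalman filter for one coordinate of a track (a generic textbook sketch, not the paper's exact state model; the noise parameters are arbitrary):

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of a track.
    State: [position, velocity]; measurement: position only."""
    def __init__(self, pos, dt=1.0, q=1e-2, r=1.0):
        self.x = [pos, 0.0]                 # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.dt, self.q, self.r = dt, q, r

    def predict(self):
        dt = self.dt
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        p00, p01 = self.P[0][0], self.P[0][1]
        p10, p11 = self.P[1][0], self.P[1][1]
        self.P = [[p00 + dt * (p10 + p01) + dt * dt * p11 + self.q,
                   p01 + dt * p11],
                  [p10 + dt * p11, p11 + self.q]]
        return self.x[0]

    def update(self, z):
        s = self.P[0][0] + self.r           # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        y = z - self.x[0]                   # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0][0], self.P[0][1]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [self.P[1][0] - k1 * p00, self.P[1][1] - k1 * p01]]
        return self.x[0]

kf = Kalman1D(pos=0.0)
for z in [1.0, 2.1, 2.9, 4.2, 5.0]:   # noisy detections moving right
    kf.predict()
    est = kf.update(z)
print(kf.x)   # position near the last detection, velocity near 1
```

In the full system one such filter (in 2-D, over bounding-box coordinates) runs per labeled person, and the predict step bridges frames where the detector misses the pedestrian.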

3.
This paper proposes a method to customize a wavelet function for the analysis of pupil diameter fluctuation in the detection of drowsiness states in a driving simulation. The methodology relies on genetic-algorithm-based optimization and lifting schemes, a flexible and fast implementation of the discrete wavelet transform. To customize the wavelet function, a clustering separability metric is employed as the fitness function so that the feature space created by the wavelet analysis exhibits the maximum class separability, favoring classification. A completely new wavelet function is thus created, with unique characteristics customized to pupil diameter fluctuation analysis. It is demonstrated that the customized wavelet function has distinctive frequency and temporal responses suited specifically to pupil diameter fluctuation analysis (i.e., it is application-dependent), and in classification it outperforms classical wavelet families, including Daubechies, Coiflet and Symlet, which are application-independent. The proposed method is thus useful for analyzing pupil fluctuation when evaluating sleepiness levels, as has been demonstrated in other applications.
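The separability fitness driving the genetic search can be illustrated with a Fisher-style ratio on a single feature (a hedged stand-in: the paper's actual clustering metric and feature space may differ, and the data below are invented):

```python
def fisher_separability(class_a, class_b):
    """Fisher-style class separability of one feature: squared distance
    between class means divided by the sum of within-class variances.
    Larger values mean the feature separates the classes better."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    return ((mean(class_a) - mean(class_b)) ** 2
            / (var(class_a) + var(class_b) + 1e-12))

alert  = [0.9, 1.1, 1.0, 0.95]   # hypothetical wavelet feature, alert drivers
drowsy = [2.0, 2.2, 1.9, 2.1]    # same feature, drowsy drivers
print(fisher_separability(alert, drowsy))
```

In the GA, each candidate lifting coefficient vector defines a wavelet, the wavelet produces features like those above, and a score of this kind is maximized over generations.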

4.
Acquiring a set of features that emphasize the differences between normal data points and outliers can drastically facilitate the task of identifying outliers. In this work, we present a novel non-parametric evaluation criterion for filter-based feature selection, designed with the final goal of outlier detection in mind. The proposed method seeks the subset of features that represents the inherent characteristics of the normal dataset while forcing outliers to stand out, making them more easily distinguishable by outlier detection algorithms. Experimental results on real datasets show the advantage of our feature selection algorithm over popular and state-of-the-art methods. We also show that the proposed algorithm overcomes the small-sample-space problem and performs well on highly imbalanced datasets. Furthermore, because the feature selection is highly parallelizable, we implement the algorithm on a graphics processing unit (GPU) to gain a significant speedup over the serial version. The benefits of the GPU implementation are two-fold: its performance scales very well with both the number of features and the number of data points.
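A filter-style criterion in this spirit can be sketched as follows: rank each feature by how far known outliers sit from the normal data, in units of the normal spread. This is a simplified, hypothetical score invented for illustration, not the paper's non-parametric criterion:

```python
def feature_scores(normals, outliers):
    """Score each feature: mean deviation of the outliers from the
    normal-data mean, normalized by the normal-data standard deviation.
    High-scoring features make outliers stand out."""
    n_feats = len(normals[0])
    scores = []
    for f in range(n_feats):
        vals = [row[f] for row in normals]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals)
        std = var ** 0.5
        if std == 0:
            std = 1e-9   # guard against constant features
        dev = sum(abs(row[f] - mu) / std for row in outliers) / len(outliers)
        scores.append(dev)
    return scores

normals = [[0.0, 5.0], [0.2, 4.9], [-0.1, 5.1], [0.1, 5.0]]
outliers = [[0.05, 9.0]]   # deviates only in feature 1
s = feature_scores(normals, outliers)
print(s)   # feature 1 scores much higher than feature 0
```

Keeping only the top-scoring features then feeds a cleaner representation to whatever downstream outlier detector is used.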

5.
In this paper, we propose a system that automatically detects and tracks hair regions of heads at video rate (30 frames per second) using both the color and depth information obtained from a Kinect. Our system has three characteristics: (1) a 6D feature vector describes both the 3D color feature and the 3D geometric feature of a pixel uniformly; (2) pixels are classified into foreground (e.g., hair) and background with the K-means clustering algorithm; (3) the cluster centers of foreground and background are selected and updated automatically before and during hair tracking. Our system can robustly track hair of any color or style in cluttered backgrounds where some objects have colors similar to the hair, or in environments where the illumination changes. Moreover, it can also track faces (or heads) if the face (= skin + hair) is selected as the foreground.
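The foreground/background split by K-means can be sketched with scalar features (the system uses 6D color+geometry vectors per pixel; this toy reduces each pixel to one value for brevity, and the data are invented):

```python
def kmeans_2cluster(values, iters=20):
    """Plain 2-means on scalar pixel features: assign each value to the
    nearer center, then move each center to its group's mean."""
    c0, c1 = min(values), max(values)   # simple initialization
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0 = sum(g0) / len(g0) if g0 else c0
        c1 = sum(g1) / len(g1) if g1 else c1
    return c0, c1

pixels = [12, 10, 14, 11, 200, 198, 205]   # dark hair vs. bright background
c_fg, c_bg = kmeans_2cluster(pixels)
print(c_fg, c_bg)
```

Re-running the assignment each frame with the previous centers as the starting point is what lets the cluster centers "update during tracking" as lighting changes.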

6.
Deduplication technology has been increasingly used to reduce storage costs. Though it has been successfully applied to backup and archival systems, existing techniques can hardly be deployed in primary storage systems because of the latency cost of detecting duplicated data: every unit has to be checked against a substantially large fingerprint index before it is written. In this paper we introduce Leach, a self-learning in-memory fingerprint cache for inline primary storage that reduces the write cost in deduplication systems. Leach is motivated by a characteristic of real-world I/O workloads: the access patterns of duplicated data are highly skewed. Leach adopts a splay tree to organize the on-disk fingerprint index, automatically learns the access patterns, and maintains hot working sets in cache memory, with the goal of servicing the majority of duplicate-data detections. Leveraging the working-set property, Leach also reduces the cost of splay operations on the fingerprint index and of cache updates. In comprehensive experiments on several real-world datasets, Leach outperforms the conventional LRU (least recently used) cache policy by reducing the number of cache misses, and significantly improves write performance without adversely affecting cache hits.
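The lookup path, an in-memory fingerprint cache in front of an on-disk index, can be sketched as follows. Note that Leach organizes the index as a splay tree and learns working sets; this sketch instead uses the plain LRU policy that Leach is compared against, with the disk index simulated by a dict:

```python
from collections import OrderedDict

class FingerprintCache:
    """LRU fingerprint cache in front of a (simulated) on-disk index.
    A hit avoids the expensive on-disk fingerprint lookup."""
    def __init__(self, capacity, disk_index):
        self.cap = capacity
        self.cache = OrderedDict()
        self.disk = disk_index          # fingerprint -> block address
        self.hits = self.misses = 0

    def lookup(self, fp):
        if fp in self.cache:
            self.hits += 1
            self.cache.move_to_end(fp)  # refresh recency
            return self.cache[fp]
        self.misses += 1                # pay the on-disk cost here
        addr = self.disk.get(fp)
        if addr is not None:
            self.cache[fp] = addr
            if len(self.cache) > self.cap:
                self.cache.popitem(last=False)  # evict least recent
        return addr

disk = {f"fp{i}": i for i in range(100)}
c = FingerprintCache(capacity=4, disk_index=disk)
for fp in ["fp1", "fp2", "fp1", "fp1", "fp3", "fp2"]:  # skewed accesses
    c.lookup(fp)
print(c.hits, c.misses)   # 3 hits, 3 misses
```

The skew in the access trace above is exactly what Leach exploits: a small hot set of fingerprints serves most duplicate detections, so keeping it memory-resident removes most index I/O from the write path.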

7.
Traditional similar-code detection approaches are limited in detecting semantically similar code, impeding their application in practice. In this paper, we improve both the traditional metrics-based approach and the graph-based approach, and present a combined metrics-based and graph-based approach. First, source code is represented as augmented system dependence graphs. Then, metrics-based candidate similar-code extraction filters out most of the dissimilar code pairs, lowering the computational complexity. After that, code normalization is performed on the candidate similar code to remove code variations, so that similar code can be detected at the semantic level. Finally, program matching is performed on the normalized control dependence trees to output semantically similar code. Experimental results show that our approach can detect similar code with code variations, and that it can be applied to large software.

8.
Outlier detection on data streams is an important task in data mining, and the challenge grows when the data are uncertain. This paper studies the problem of outlier detection on uncertain data streams. We propose Continuous Uncertain Outlier Detection (CUOD), which quickly determines the nature of uncertain elements by pruning, improving efficiency. Furthermore, we propose a pruning approach, Probability Pruning for Continuous Uncertain Outlier Detection (PCUOD), to reduce the detection cost. PCUOD estimates outlier probabilities and can effectively reduce the amount of calculation, and the cost of its incremental algorithm meets the demands of uncertain data streams. Finally, a new method for parameter-variable queries to CUOD is proposed, enabling the concurrent execution of different queries. To the best of our knowledge, this is the first work on outlier detection over uncertain data streams that can handle parameter-variable queries simultaneously. Our methods are verified on both real and synthetic data; the results show that they reduce both the required storage and the running time.
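The instance-level pruning idea can be sketched as follows (a simplified 1-D model where each uncertain element is a set of weighted possible values; this is not PCUOD's exact estimator, and all thresholds are invented):

```python
def outlier_prob(instances, data, r, k):
    """Probability that an uncertain element is an outlier: sum the
    probabilities of its possible instances that have fewer than k
    neighbors within radius r of the (certain) stream data."""
    p = 0.0
    for value, prob in instances:
        neighbors = sum(1 for x in data if abs(x - value) <= r)
        if neighbors < k:
            p += prob
    return p

def is_outlier_pruned(instances, data, r, k, threshold):
    """Pruned version: stop as soon as the accumulated probability
    already exceeds the threshold, skipping remaining instances."""
    p = 0.0
    for value, prob in instances:
        if sum(1 for x in data if abs(x - value) <= r) < k:
            p += prob
            if p > threshold:
                return True    # pruned: no need to finish the sum
    return p > threshold

data = [1.0, 1.1, 0.9, 5.0]
elem = [(5.2, 0.7), (1.0, 0.3)]   # probably far from the dense region
print(outlier_prob(elem, data, r=0.5, k=2))
```

The early exit is the essence of probability pruning: once the partial sum crosses the query threshold, the element's status is decided without evaluating every instance.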

9.
An innovative, dynamically reconfigurable radio-over-fiber (RoF) network equipped with an intelligent medium access control (MAC) protocol is proposed to provide broadband access to train passengers in high-speed railway mobile applications. The proposed RoF network architecture is based on a reconfigurable control station and remote access units (RAUs) equipped with a fixed filter and a tunable filter. The proposed hybrid frequency-division multiplexing / time-division multiple access (FDM/TDMA) MAC protocol realizes failure detection/recovery and dynamic wavelength allocation to the remote access units. Simulation results show that with the proposed MAC protocol, the control station can detect failures and recover, and that dynamic wavelength allocation can increase wavelength resource utilization to maintain network performance.

10.
The aim of software testing is to find faults in a program under test, so generating test data that can expose the faults of a program is very important. To date, studies on generating test data for path coverage have not performed well in detecting low-probability faults on the covered path. The automatic generation of test data for both path coverage and fault detection using genetic algorithms is the focus of this study. To this end, the problem is first formulated as a bi-objective optimization problem with one constraint: the objectives are the number of faults detected in the traversed path and the risk level of these faults, and the constraint is that the traversed path must be the target path. An evolutionary algorithm is employed to solve the formulated model, and several types of fault detection methods are given. Finally, the proposed method is applied to several real-world programs and compared with a random method and an evolutionary optimization method in three respects: the number of generations and the time needed to generate the desired test data, and the success rate of detecting faults. The experimental results confirm that the proposed method can effectively generate test data that not only traverse the target path but also detect the faults lying in it.
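The formulation can be sketched with a toy program containing a rarely-triggered fault: fitness rewards faults detected on the traversed path, and candidates that leave the target path are penalized to enforce the constraint. The crude mutation-only loop below stands in for the paper's evolutionary algorithm, and the program, paths, and fault are all invented:

```python
import random

def program_under_test(x):
    """Toy program: returns the branch path taken and whether a
    low-probability fault is triggered on that path."""
    path = []
    fault = 0
    if x > 0:
        path.append("A")
        if x % 97 == 0:       # rarely-taken faulty statement
            fault = 1
    else:
        path.append("B")
    return path, fault

def fitness(x, target_path):
    """Constraint: must traverse the target path (else penalized).
    Objective: number of faults detected on that path."""
    path, fault = program_under_test(x)
    if path != target_path:
        return -1
    return fault

random.seed(7)
pop = [random.randint(1, 1000) for _ in range(30)]
for _ in range(200):          # select the fittest, mutate to refill
    pop.sort(key=lambda x: fitness(x, ["A"]), reverse=True)
    parents = pop[:10]
    pop = parents + [max(1, p + random.randint(-5, 5))
                     for p in parents for _ in range(2)]
best = max(pop, key=lambda x: fitness(x, ["A"]))
print(best, fitness(best, ["A"]))
```

Random inputs almost never hit `x % 97 == 0`, which is the "low-probability fault on the covered path" situation the paper targets; the search keeps the population on path A while hunting for inputs that also trigger the fault.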

11.
The following methods are used to detect attacks in an intrusion detection system: an ANN (artificial neural network) for recognition, and a GA (genetic algorithm) for optimizing the ANN results. Using the KDD-CUP dataset, the applied methods achieve an accuracy of around 0.9998 in detecting threats. The ANN with GA requires 18 features.

12.
Aiming at the series of errors produced in processing and analyzing three-dimensional testing and reconstruction of highway pavements, this paper conducts a detailed analysis and computation of these process errors, including the calibration of signalized points, the centerline calculation of the light stripe, the accumulated system error, etc. After comparative experiments and analysis, it introduces the gravity method to calculate the camera's internal parameters and the gray centroid algorithm to extract the centers of the light stripes. Results show that the system deviation stabilizes at 2 mm, which meets the needs of engineering practice.
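The gray centroid extraction of a stripe center can be sketched on a single image row: the pixel positions are weighted by intensity, yielding a sub-pixel center. This is a generic illustration of the named algorithm; the row values are invented:

```python
def gray_centroid(intensities):
    """Sub-pixel center of a laser stripe across one image row:
    the intensity-weighted mean of the pixel positions."""
    total = sum(intensities)
    if total == 0:
        return None   # no stripe in this row
    return sum(i * v for i, v in enumerate(intensities)) / total

row = [0, 2, 10, 40, 90, 38, 9, 1]   # stripe peaks near index 4
print(gray_centroid(row))            # slightly left of 4
```

Because the weighting uses the full intensity profile rather than only the brightest pixel, the extracted centerline is stable against quantization and mild noise, which is what keeps the accumulated system error small.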

13.
The aim of a SON (self-organizing network) is to realize autonomic functioning of the wireless network through self-configuration, self-optimization and self-healing, which reduces human intervention and improves the user experience. Self-configuration is a process in which newly deployed eNodeBs are configured by automatic installation procedures to obtain the basic configuration necessary for system operation. Self-optimization is a process that continuously monitors the environment and automatically optimizes various parameters when the environment changes. Self-healing is a process that detects and localizes failures, then fixes the problems automatically. This paper gives a comprehensive introduction to SON functionalities and outlines the framework of self-configuration, self-optimization and self-healing. Concrete algorithms are proposed for self-optimization and self-healing, namely capacity and coverage optimization, and cell outage detection and compensation, respectively. Simulation results in various scenarios are provided to evaluate the performance of the proposed algorithms.

14.
For airborne radar, there are usually insufficient independent and identically distributed (IID) training data because of geometric considerations and terrain variations. The rank-reduction technique is one of the most effective approaches to circumvent this problem. In this study, we investigate four reduced-rank space-time adaptive detectors for airborne radar: the reduced-rank sample matrix inversion (RR-SMI), the reduced-rank adaptive matched filter (RR-AMF), the reduced-rank adaptive coherence estimator (RR-ACE), and the reduced-rank generalized likelihood ratio test (RR-GLRT). Their asymptotic analytical probabilities of detection (PDs) and false alarm (PFAs) are all derived, and all four detectors asymptotically attain a constant false alarm rate (CFAR). It is shown that these four reduced-rank detectors exhibit detection performance better than or comparable to that of two existing reduced-rank detectors proposed by Reed and Gau (RG1 and RG2). Moreover, the four reduced-rank detectors are more robust to changes in the power of clutter and noise than RG1 and RG2.

15.
This paper presents a simple electrocardiogram (ECG) processing algorithm for portable healthcare devices. The algorithm consists of the Haar wavelet transform (HWT), modulus maxima pair detection (MMPD) and peak position modification (PPM). To reduce the computational complexity, a novel multiplier-free structure is introduced to implement the HWT. In the MMPD, the HWT coefficient at scale 2^4 is processed to find candidate peak positions of the ECG. The PPM is designed to correct the time shift introduced by digital processing and to accurately determine the locations of peaks. New methods are proposed to improve the anti-jamming performance of the MMPD and PPM. Evaluated on the MIT-BIH arrhythmia database, the sensitivity (Se) of QRS detection is 99.53% and the positive prediction (Pr) of QRS detection is 99.70%. The QT database is chosen to fully validate the algorithm in complete delineation of the ECG waveform: the mean μ and standard deviation σ between the test results and the annotations are calculated, and most of the σ values satisfy the CSE limits, indicating that the results are stable and reliable. A detailed and rigorous computational complexity analysis is presented, taking the number of arithmetic operations for N input samples as the criterion. Without any multiplication operations, the number of addition operations is only about 16.33N. The algorithm thus achieves high detection accuracy with low computational complexity.
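A multiplier-free Haar step uses only additions and subtractions once the 1/√2 normalization is dropped, as in integer implementations; the sketch below is a generic unnormalized Haar decomposition, not the paper's exact hardware structure:

```python
def haar_step(x):
    """One multiplier-free Haar level on an even-length sequence:
    approximation = pairwise sums, detail = pairwise differences
    (the 1/sqrt(2) scaling is dropped, so only adds are needed)."""
    approx = [x[2 * i] + x[2 * i + 1] for i in range(len(x) // 2)]
    detail = [x[2 * i] - x[2 * i + 1] for i in range(len(x) // 2)]
    return approx, detail

def haar_scales(x, levels):
    """Iterate the step to reach coarser scales; e.g. 4 levels
    reach scale 2^4, where QRS candidates are sought."""
    details = []
    for _ in range(levels):
        x, d = haar_step(x)
        details.append(d)
    return x, details

sig = [1, 3, 2, 2, 5, 1, 0, 4]
a, ds = haar_scales(sig, 2)
print(a, ds)
```

Because a QRS complex produces a large positive/negative pair of detail coefficients at the appropriate scale, scanning the detail sequence for such modulus maxima pairs (the MMPD step) localizes candidate beats without any multiplications.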

16.
Our study proposes a new local model to accurately control an avatar using six inertial sensors in real time. Creating such a system to assist interactive control of a full-body avatar is challenging because the control signals from our performance interfaces are usually inadequate to completely determine the whole-body movement of human actors. We use a pre-captured motion database to construct a group of local regression models, which are used along with the control signals to synthesize whole-body human movement. By synthesizing a variety of human movements based on actors' control in real time, this study verifies the effectiveness of the proposed system. Compared with previous models, our proposed model synthesizes more accurate results. Our system is suitable for common use because it is much cheaper than commercial motion capture systems.
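A local model in this spirit can be sketched as k-nearest-neighbor regression from control signal to pose: predict the under-determined full-body pose by blending the database examples nearest to the current sensor reading. This is a toy 1-D stand-in for the paper's local regression models, with invented data:

```python
def knn_regression(database, query, k=3):
    """Local model: predict a pose parameter from a sparse control
    signal by averaging the k nearest database examples."""
    ranked = sorted(database, key=lambda ex: abs(ex[0] - query))
    nearest = ranked[:k]
    return sum(pose for _, pose in nearest) / k

# (control signal, full-body pose parameter) pairs from a motion database
db = [(0.0, 10.0), (0.1, 11.0), (0.2, 12.0), (0.9, 30.0), (1.0, 31.0)]
print(knn_regression(db, query=0.05))
```

The "local" part is the key design choice: instead of one global mapping from six sensors to a full skeleton, each prediction is made from the small neighborhood of motion-capture frames that resemble the current control input, which handles the many-to-one ambiguity of sparse sensing.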

17.
Image sequence processing and video encoding are extremely time-consuming problems whose time complexity depends on the image content. This paper presents a block motion estimation method for video coding with edge alignment. The method uses blocks of size 4 × 4, and its basic idea is to find the motion vector using the edge position in each video coding block. It finds motion vectors more accurately and faster than the known classical methods that evaluate all possibilities. The presented algorithm is compared with known classical algorithms using the peak signal-to-noise ratio as the evaluation function; for the comparison we also use parameters such as time, CPU usage, and the size of the compressed data. The comparison is made on benchmark data in the YUV color format. The results of our proposed method are comparable to, and in some cases better than, the results of the standard classical algorithms.
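The classical exhaustive baseline the method is compared against can be sketched with SAD block matching (2×2 blocks and a tiny search radius for brevity; the paper uses 4×4 blocks and accelerates the search using edge positions):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def block(frame, y, x, n=2):
    """Extract the n-by-n block with top-left corner (y, x)."""
    return [row[x:x + n] for row in frame[y:y + n]]

def full_search(prev, cur, y, x, n=2, radius=1):
    """Exhaustive search: try every displacement in the window and
    return the one minimizing SAD against the current block."""
    target = block(cur, y, x, n)
    best = (None, float("inf"))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= len(prev) - n and 0 <= xx <= len(prev[0]) - n:
                cost = sad(block(prev, yy, xx, n), target)
                if cost < best[1]:
                    best = ((dy, dx), cost)
    return best

prev = [[0, 0, 0, 0],
        [0, 9, 8, 0],
        [0, 7, 6, 0],
        [0, 0, 0, 0]]
cur  = [[9, 8, 0, 0],          # the 2x2 pattern moved up-left
        [7, 6, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(full_search(prev, cur, 0, 0))   # motion vector (1, 1), cost 0
```

Full search evaluates every candidate in the window, which is what makes it slow; restricting candidates to positions consistent with the block's edge alignment is the speedup the paper proposes.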

18.
MoCap (motion capture)-based animation is currently a hot topic in computer animation research. Based on an optical MoCap system, this paper proposes a novel cross-mapping-based facial expression simulation method. To overcome the false correlation between the upper and lower jaw introduced by the facial global RBF-based cross-mapping method, we construct a functional-partition-based RBF cross-mapping method. During model animation, enhanced markers are added and animated by our proposed skin motion mechanism. In addition, based on the enhanced markers, an improved RBF-based animation approach is presented to derive realistic facial animation. Furthermore, a pre-computing algorithm is presented to reduce the computational cost for real-time simulation. Experiments show that the method can not only map the MoCap data of one subject to different personalized faces but also generate realistic facial animation.

19.
Image categorization in massive image databases is an important problem. This paper proposes an approach to image categorization using a sparse set of salient semantic information and a hierarchical semantic label tree (HSLT) model. First, to provide more critical image semantics, the proposed sparse set of salient regions, taken only at the focuses of visual attention rather than over the entire scene, is formed by our saliency detection model, which incorporates low- and high-level features and Shotton's semantic texton forests (STFs). Second, we propose a new HSLT model that uses the sparse regional semantic information to automatically build a semantic image hierarchy, explicitly encoding a general-to-specific image relationship. Finally, we archive the image dataset using the hierarchical image semantics, which helps improve image organization and browsing. Extensive experimental results show that using semantic hierarchies as a hierarchical organizing framework provides better image annotation and organization, improves accuracy, and reduces human effort.

20.
This paper presents the authors' research results on HRI (human-robot interaction). The goal is to jointly estimate the arm position, the anatomical movements of the shoulder, and the accelerations of the arm with respect to the shoulder, and to visualize this movement in a 3-D virtual model to control a robot. The estimation algorithm uses a nonlinear observer and an optimization routine to fuse the sensor information. The global asymptotic convergence of the nonlinear observer is guaranteed. Extensive tests of the presented methodology with real-world data show the effectiveness of the proposed procedure.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号