Similar Documents
20 similar documents found.
1.
In this paper, a deep learning-based anomaly detection (DLAD) system is proposed to improve anomaly recognition in video processing. Our system achieves complete detection of abnormal events through four modules: a Background Estimation (BE) module, an Object Segmentation (OS) module, a Feature Extraction (FE) module, and an Activity Recognition (AR) module. First, the BE module generates an accurate background using a two-phase model. Once a high-quality background is available, the OS module extracts objects from the video, and an object-tracking process then follows each object through an overlapping-detection scheme. From the tracked objects, the FE module extracts useful features such as shape, wavelet, and histogram descriptors for abnormal-event detection. In the final step, the AR module classifies each event as abnormal or normal using a deep learning classifier. Experiments are performed on the UCSD benchmark dataset of abnormal activities, and comparisons with state-of-the-art methods validate the advantages of our algorithm: the proposed activity recognition system outperforms existing systems, achieving a better EER of 0.75% (versus 20%) and an 85% precision rate in the frame-level evaluation.
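The BE and OS stages of such a pipeline can be sketched with a running-average background model and threshold-based foreground segmentation (an illustrative simplification, not the authors' two-phase model; the blending factor and threshold below are assumed values):

```python
import numpy as np

def estimate_background(frames, alpha=0.05):
    """Running-average background estimate over a stack of frames."""
    bg = frames[0].astype(float)
    for f in frames[1:]:
        bg = (1 - alpha) * bg + alpha * f  # blend each new frame in
    return bg

def segment_objects(frame, background, thresh=30):
    """Foreground mask: pixels that differ strongly from the background."""
    return np.abs(frame.astype(float) - background) > thresh

# Synthetic example: a static scene with a bright moving block
rng = np.random.default_rng(0)
scene = rng.integers(90, 110, size=(20, 20))
frames = []
for t in range(10):
    f = scene.copy()
    f[5:8, t:t + 3] = 255          # moving bright object
    frames.append(f)

bg = estimate_background(frames)
mask = segment_objects(frames[-1], bg)
print(mask.sum())  # number of foreground pixels flagged
```

A full system would follow this with object tracking and feature extraction on the resulting masks.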

2.
3.
The efficiency of lossless compression algorithms for fixed-palette images (indexed images) may change if a different indexing scheme is adopted. Many lossless compression algorithms adopt a differential-predictive approach. Hence, if the spatial distribution of the indexes over the image is smooth, greater compression ratios may be obtained. Because of this, finding an indexing scheme that realizes such a smooth distribution is a relevant issue. Obtaining an optimal re-indexing scheme is suspected to be a hard problem, and only approximate solutions have been provided in the literature. In this paper, we restate the re-indexing problem as a graph optimization problem: an optimal re-indexing corresponds to the heaviest Hamiltonian path in a weighted graph. It follows that any algorithm which finds a good approximate solution to this graph-theoretical problem also provides a good re-indexing. We propose a simple and easy-to-implement approximation algorithm to find such a path. The proposed technique compares favorably with most of the algorithms proposed in the literature, both in terms of computational complexity and of compression ratio.
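The heaviest-path formulation can be approximated greedily: start from the heaviest edge and repeatedly attach the unused index with the heaviest edge to either end of the path (a simple sketch in this spirit, not necessarily the paper's algorithm; in practice the weights would come from index co-occurrence counts in the image, and the toy matrix here is made up):

```python
import numpy as np

def greedy_heavy_path(W):
    """Greedy approximation to the heaviest Hamiltonian path in a
    complete weighted graph given by the symmetric matrix W."""
    n = len(W)
    i, j = divmod(int(np.argmax(W)), n)      # start from the heaviest edge
    path, used = [i, j], {i, j}
    while len(path) < n:
        best, best_gain, at_tail = None, -1, True
        for v in range(n):
            if v in used:
                continue
            if W[path[-1]][v] > best_gain:   # extend at the tail
                best, best_gain, at_tail = v, W[path[-1]][v], True
            if W[path[0]][v] > best_gain:    # or at the head
                best, best_gain, at_tail = v, W[path[0]][v], False
        path.append(best) if at_tail else path.insert(0, best)
        used.add(best)
    return path

# Toy co-occurrence matrix for a 4-colour palette
W = np.array([[0, 9, 1, 2],
              [9, 0, 8, 1],
              [1, 8, 0, 7],
              [2, 1, 7, 0]])
print(greedy_heavy_path(W))  # -> [0, 1, 2, 3]
```

On this toy instance the greedy path 0-1-2-3 (weight 9 + 8 + 7) happens to be optimal; in general the heuristic only approximates the heaviest path.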

4.
In this letter, we propose an efficient algorithm that successfully removes impulse noise from corrupted images while preserving image details, and that requires no prior training. The algorithm consists of two steps: impulse noise detection and impulse noise cancellation. Extensive experimental results show that the proposed approach significantly outperforms many other well-known techniques for image noise removal.
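The detect-then-cancel structure can be illustrated with a minimal salt-and-pepper detector that replaces only the flagged pixels with the local 3x3 median (the thresholds are assumed values; this is not the letter's exact detector):

```python
import numpy as np

def remove_impulse_noise(img, low=10, high=245):
    """Step 1: flag extreme-valued pixels; Step 2: replace each
    flagged pixel with the median of its 3x3 neighbourhood."""
    noisy = (img <= low) | (img >= high)
    out = img.copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(noisy)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        out[y, x] = np.median(img[y0:y1, x0:x1])
    return out

img = np.full((5, 5), 128, dtype=np.uint8)
img[2, 2] = 255                    # one salt pixel
clean = remove_impulse_noise(img)
print(clean[2, 2])  # -> 128
```

Because only flagged pixels are touched, uncorrupted detail is left exactly as-is, which is the key advantage over blanket median filtering.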

5.
An iterative algorithm for X-ray CT fluoroscopy (cited 3 times: 0 self-citations, 3 by others)
X-ray computed tomography fluoroscopy (CTF) enables image guidance of interventions, synchronization of scanning with contrast bolus arrival, and motion analysis. However, filtered backprojection (FB), the current method for CTF image reconstruction, is subject to motion and metal artifacts from implants, needles, or other surgical instruments. Reduced target lesion conspicuity may result from increased image noise associated with reduced tube current. In this report, the authors adapt the row-action expectation-maximization (EM) algorithm for CTF. Because time-dependent variation in images is localized during CTF, the row-action EM-like algorithm allows rapid convergence. More importantly, this iterative CTF algorithm has fewer metal artifacts and better low-contrast performance than FB.
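Row-action EM is a variant of the classical ML-EM update x ← x · Aᵀ(y / Ax) / Aᵀ1; the classical form is easy to sketch on a toy system matrix (an illustration of the EM family on assumed noise-free data, not the authors' row-action implementation):

```python
import numpy as np

def em_iterations(A, y, n_iter=500):
    """Multiplicative ML-EM updates for y ~ A x with nonnegative x."""
    x = np.ones(A.shape[1])
    norm = A.T @ np.ones(A.shape[0])       # sensitivity term A^T 1
    for _ in range(n_iter):
        x = x * (A.T @ (y / (A @ x))) / norm
    return x

A = np.array([[1.0, 0.5],
              [0.3, 1.0]])                 # toy 2x2 system matrix
x_true = np.array([2.0, 3.0])
y = A @ x_true                             # noise-free projections
x = em_iterations(A, y)
print(np.round(x, 3))
```

For consistent, noise-free data the iterates converge to the exact solution; row-action variants restructure the same update to process one measurement row at a time, which is what enables the rapid localized convergence the abstract describes.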

6.
The purpose of this work is to develop patient-specific models for automatically detecting lung nodules in computed tomography (CT) images. It is motivated by significant developments in CT scanner technology and the burden that lung cancer screening and surveillance imposes on radiologists. We propose a new method that uses a patient's baseline image data to assist in the segmentation of subsequent images so that changes in size and/or shape of nodules can be measured automatically. The system uses a generic, a priori model to detect candidate nodules on the baseline scan of a previously unseen patient. A user then confirms or rejects nodule candidates to establish baseline results. For analysis of follow-up scans of that particular patient, a patient-specific model is derived from these baseline results. This model describes expected features (location, volume and shape) of previously segmented nodules so that the system can relocalize them automatically on follow-up. On the baseline scans of 17 subjects, a radiologist identified a total of 36 nodules, of which 31 (86%) were detected automatically by the system with an average of 11 false positives (FPs) per case. In follow-up scans 27 of the 31 nodules were still present and, using patient-specific models, 22 (81%) were correctly relocalized by the system. The system automatically detected 16 out of a possible 20 (80%) of new nodules on follow-up scans with ten FPs per case.

7.
Impulse noise reduction in corrupted images plays an important role in image processing; impulse noise also degrades image segmentation, object detection, edge detection, compression, and other tasks. Median and other nonlinear filters are commonly used for noise reduction, but these methods can destroy the natural texture and important information in the image, such as edges. In this paper, to eliminate impulse noise from noisy images, we use a two-step hybrid method based on cellular automata (CA) and fuzzy logic, called Fuzzy Cellular Automata (FCA). In the first step, noisy pixels are detected by the CA based on statistical information; then, using this information, each noisy pixel is corrected by the FCA. CA are typically used for systems of simple components, where the behavior of each component is defined and updated based on its neighbors. The proposed hybrid method is simple, robust, and parallel, and it effectively preserves the important details of the image. Evaluated on well-known grayscale test images and compared with other conventional and widely used algorithms, the proposed approach is more effective.

8.
The lungs exchange air with the external environment via the pulmonary airways. Computed tomography (CT) scanning can be used to obtain detailed images of the pulmonary anatomy, including the airways. These images have been used to measure airway geometry, study airway reactivity, and guide surgical interventions. Prior to these applications, airway segmentation is needed to identify the airway lumen in the CT images. Airway tree segmentation can be performed manually by an image analyst, but the complexity of the tree makes manual segmentation tedious and extremely time-consuming. We describe a fully automatic technique for segmenting the airway tree in three-dimensional (3-D) CT images of the thorax. We use grayscale morphological reconstruction to identify candidate airways on CT slices and then reconstruct a connected 3-D airway tree. After segmentation, we estimate airway branchpoints based on connectivity changes in the reconstructed tree. Compared to manual analysis on 3-mm-thick electron-beam CT images, the automatic approach has an overall airway branch detection sensitivity of approximately 73%.
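The grayscale morphological reconstruction used for per-slice candidate detection can be written as iterated dilation of a marker image clipped by a mask, repeated until stable (a minimal 2-D numpy sketch with a 3x3 structuring element; the blob geometry is an assumed example):

```python
import numpy as np

def dilate3x3(img):
    """3x3 grayscale dilation via padded, shifted maxima."""
    p = np.pad(img, 1, mode="edge")
    shifts = [p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(3) for dx in range(3)]
    return np.max(shifts, axis=0)

def reconstruct(marker, mask):
    """Grayscale reconstruction: dilate the marker repeatedly,
    never exceeding the mask, until nothing changes."""
    prev = None
    while prev is None or not np.array_equal(marker, prev):
        prev = marker
        marker = np.minimum(dilate3x3(marker), mask)
    return marker

# Mask with two bright blobs; the marker seeds only the left one
mask = np.zeros((7, 7), dtype=int)
mask[1:3, 1:3] = 5          # left blob
mask[4:6, 4:6] = 5          # right blob (not seeded)
marker = np.zeros_like(mask)
marker[1, 1] = 5
rec = reconstruct(marker, mask)
print(rec[1:3, 1:3].min(), rec[4:6, 4:6].max())  # -> 5 0
```

Only the region connected to the marker is recovered, which is how reconstruction isolates airway-like candidates from surrounding structures.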

9.
High-resolution X-ray computed tomography (CT) imaging is routinely used for clinical pulmonary applications. Since lung function varies regionally and because pulmonary disease is usually not uniformly distributed in the lungs, it is useful to study the lungs on a lobe-by-lobe basis. Thus, it is important to segment not only the lungs, but the lobar fissures as well. In this paper, we demonstrate the use of an anatomic pulmonary atlas, encoded with a priori information on the pulmonary anatomy, to automatically segment the oblique lobar fissures. Sixteen volumetric CT scans from 16 subjects are used to construct the pulmonary atlas. A ridgeness measure is applied to the original CT images to enhance the fissure contrast. Fissure detection is accomplished in two stages: an initial fissure search and a final fissure search. A fuzzy reasoning system is used in the fissure search to analyze information from three sources: the image intensity, an anatomic smoothness constraint, and the atlas-based search initialization. Our method has been tested on 22 volumetric thin-slice CT scans from 12 subjects, and the results are compared to manual tracings. Averaged across all 22 data sets, the RMS error between the automatically segmented and manually segmented fissures is 1.96 +/- 0.71 mm and the mean of the similarity indices between the manually defined and computer-defined lobe regions is 0.988. The results indicate a strong agreement between the automatic and manual lobe segmentations.

10.
Multimedia applications involving image retrieval demand fast and efficient response. The efficiency of search and retrieval of information in a database system depends on the index. Generally, a two-level indexing scheme in an image database can help to reduce the search space for a given query image. In such a scheme, the first level must significantly reduce the search space for the second stage of comparisons, must be computationally efficient, and must guarantee that no false negatives result. The second level of indexing involves more detailed analysis and comparison of potentially relevant images. In this paper, we present an efficient signature representation scheme for the first level of a two-level image indexing scheme, based on hierarchical decomposition of the image space into spatial arrangements of image features. Experimental results demonstrate that our signature representation scheme yields fewer matching signatures in the first level and significantly improves the overall computational time. As this scheme relies on corner points as the salient feature points describing an image's contents, we also compare results using several contemporary corner detection methods. Further, we formally prove that the proposed signature representation scheme not only produces fewer signatures but also yields no false negatives.

11.
Reversible data hiding in encrypted images is an effective technique to embed information in encrypted domain, without knowing the original content of the image or the encryption key. In this paper, a high-capacity reversible data hiding scheme for encrypted images based on MSB (most significant bit) prediction is proposed. Since the prediction is not always accurate, it is necessary to identify the prediction error and store this information in the location map. The stream cipher is then used to encrypt the original image directly. During the data hiding phase, up to three MSBs of each available pixel in the encrypted image are substituted by the bits of the secret message. At the receiving end, the embedded data can be extracted without any errors and the original image can be perfectly reconstructed by utilizing MSB prediction. Experimental results show that the scheme can achieve higher embedding capacity than most related methods.
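The embedding step can be sketched at the bit level: substitute the MSB of each pixel with a message bit, and read it back on extraction (a simplified single-MSB illustration without the prediction and location-map machinery; the pixel values and message are made up):

```python
import numpy as np

def embed_msb(pixels, bits):
    """Replace the most significant bit of each pixel with a message bit."""
    out = pixels.copy()
    for k, b in enumerate(bits):
        out[k] = (out[k] & 0x7F) | (b << 7)   # clear the MSB, set the message bit
    return out

def extract_msb(pixels, n):
    """Read back the first n embedded bits from the MSBs."""
    return [(int(p) >> 7) & 1 for p in pixels[:n]]

enc = np.array([200, 17, 90, 140], dtype=np.uint8)   # stand-in "encrypted" pixels
msg = [1, 0, 1, 1]
marked = embed_msb(enc, msg)
print(extract_msb(marked, 4))  # -> [1, 0, 1, 1]
```

In the full scheme, MSB prediction from neighbouring pixels is what lets the receiver also restore the original MSBs and hence the original image.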

12.
In this paper, a novel image encryption scheme based on two rounds of substitution–diffusion is proposed. Two main objectives have guided the design of this scheme: (a) robustness against the best-known types of attacks (statistical, chosen/known-plaintext, ciphertext-only, and brute-force attacks) and (b) efficiency in terms of computational complexity (i.e., reduced execution time) in order to meet the requirements of recent mobile applications. First, a dynamic key, changed for every input image, is generated and used as the basis for constructing the substitution and diffusion processes. Then, the encryption process is performed by the transmitter based on a non-linear S-box (substitution) and a matrix multiplication (diffusion), applied to each sub-matrix of the image. At the destination side, decryption is applied in the reverse order. We have conducted several series of experiments to evaluate the effectiveness of the proposed scheme. The obtained results validate the robustness of our scheme against all considered types of attacks and show an improvement in execution time over recent image-encryption schemes.
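One substitution-diffusion round can be sketched with a byte-permutation S-box and a matrix that is invertible over Z_256 (the key material and the 2x2 diffusion matrix are illustrative choices for the example, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(42)
SBOX = rng.permutation(256)                 # stand-in key-dependent byte substitution
INV_SBOX = np.argsort(SBOX)                 # inverse substitution table
D = np.array([[1, 1], [1, 2]])              # diffusion matrix, det = 1, invertible mod 256
D_INV = np.array([[2, -1], [-1, 1]])        # its inverse over Z_256

def encrypt_block(block):
    sub = SBOX[block]                       # substitution (confusion)
    return (D @ sub) % 256                  # diffusion

def decrypt_block(block):
    sub = (D_INV @ block) % 256             # undo diffusion
    return INV_SBOX[sub]                    # undo substitution

block = np.array([[10, 200], [55, 7]])
cipher = encrypt_block(block)
print(cipher)
```

Choosing a diffusion matrix with determinant 1 guarantees invertibility modulo 256, so the receiver can reverse the round exactly.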

13.
The existing probability-based reversible authentication schemes for demosaiced images embed authentication codes into the rebuilt components of image pixels. The original demosaiced image can be totally recovered if the marked image is unaltered. Although these schemes offer pixel-wise tamper detection, the generated authentication codes are irrelevant to the image pixels, leaving some intentional alterations undetectable. The proposed method pre-processes the rebuilt components of demosaiced images and hashes them to generate authentication codes. Guided by a randomly generated reference table, authentication codes are embedded into the rebuilt components of demosaiced images. Since the distortions of image pixels are sensitive to the embedded authentication codes, the proposed method further alters the pre-processed pixels to generate a set of authentication codes, and the one that minimizes the distortion is embedded to generate the marked demosaiced image. The results show that the proposed method offers better image quality than prior state-of-the-art works and is capable of detecting a variety of tampering attacks.

14.
Analytical models with parameters numerically extracted from I-V data have been used in simulation of MOS circuits. The equations are quasi-physical and the extracted parameters do not normally relate to any single identifiable physical mechanism. We have developed an extraction system that can provide a measure of the level of confidence in the extracted parameters; hence, these parameters may be reliably used in circuit simulation as well as process control. The algorithm described is model independent and can be used for any nonlinear least-squares parameter extraction problem.
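A common confidence measure for extracted parameters is the covariance of the least-squares fit; numpy's `polyfit` exposes it directly (a generic quadratic example with assumed synthetic data, not the authors' MOS model equations):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 5, 50)
true = [0.8, -1.5, 2.0]                    # true model coefficients (assumed)
y = np.polyval(true, x) + rng.normal(0, 0.05, x.size)   # noisy "measurements"

coef, cov = np.polyfit(x, y, 2, cov=True)
stderr = np.sqrt(np.diag(cov))             # per-parameter confidence measure
for c, s, t in zip(coef, stderr, true):
    print(f"estimate {c:+.3f} +/- {s:.3f} (true {t:+.1f})")
```

A small standard error relative to the estimate signals a well-determined parameter; a large one warns that the parameter should not be trusted in downstream circuit simulation.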

15.
The authors have developed a new classified vector quantizer (CVQ) using decomposition and prediction which does not need to store or transmit any side information. To obtain better quality in the compressed images, human visual perception characteristics are applied to the classification and bit allocation. This CVQ has been subjectively evaluated on a sequence of X-ray CT images and compared to a DCT coding method. Nine X-ray CT head images from three patients were compressed at 10:1 and 15:1 compression ratios and evaluated by 13 radiologists. The evaluation data were analyzed statistically with analysis of variance and Tukey's multiple comparison. Even though there are large variations in judging image quality among readers, the proposed algorithm showed significantly better quality than the DCT at a statistical significance level of 0.05. Only an interframe CVQ can reproduce the quality of the originals at 10:1 compression at the same significance level. While the CVQ can reproduce compressed images that are not statistically different from the originals in quality, the effect on diagnostic accuracy remains to be investigated.

16.
Long bone panoramas from fluoroscopic X-ray images (cited 6 times: 0 self-citations, 6 by others)
This paper presents a new method for creating a single panoramic image of a long bone from several individual fluoroscopic X-ray images. Panoramic images are useful preoperatively for diagnosis, and intraoperatively for long bone fragment alignment, for making anatomical measurements, and for documenting surgical outcomes. Our method composes individual overlapping images into an undistorted panoramic view that is the equivalent of a single X-ray image with a wide field of view. The correlations between the images are established from the graduations of a radiolucent ruler imaged alongside the long bone. Unlike existing methods, ours uses readily available hardware, requires a simple image acquisition protocol with minimal user input, and works with existing fluoroscopic C-arm units without modifications. It is robust and accurate, producing panoramas whose quality and spatial resolution are comparable to those of the individual images. The method has been successfully tested on in vitro and clinical cases.

17.
This paper proposes a hybrid approach (HybApp) for numerical transient solution of Markov availability models. It solves both stiff and nonstiff models using a heuristic that provides timely, inexpensive stiffness detection in the model. If the model is found stiff, then a stiff ordinary-differential-equation method is used to solve it from that point onwards; otherwise we continue to use uniformization. Numerical results show the advantages of HybApp.
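Uniformization, the nonstiff branch of the hybrid, computes the transient distribution as a Poisson-weighted sum of DTMC powers with P = I + Q/Lambda (a minimal sketch for an assumed two-state up/down availability model with illustrative failure and repair rates):

```python
import numpy as np
from math import exp, factorial

def uniformization(Q, pi0, t, k_max=100):
    """Transient state probabilities of a CTMC via uniformization."""
    lam = max(-Q[i, i] for i in range(len(Q)))   # uniformization rate
    P = np.eye(len(Q)) + Q / lam                 # embedded DTMC kernel
    pi = np.zeros_like(pi0, dtype=float)
    term = pi0.astype(float)                     # pi0 @ P^k, built incrementally
    for k in range(k_max):
        pi += exp(-lam * t) * (lam * t) ** k / factorial(k) * term
        term = term @ P
    return pi

# Two-state model: failure rate 0.1/h (up -> down), repair rate 1.0/h
Q = np.array([[-0.1, 0.1],
              [1.0, -1.0]])
pi0 = np.array([1.0, 0.0])                       # start in the 'up' state
pi = uniformization(Q, pi0, t=10.0)
print(np.round(pi, 4))                           # pi[0] is the point availability
```

Uniformization is cheap and numerically stable for nonstiff models, but the number of Poisson terms needed grows with Lambda times t, which is exactly why a stiff ODE method takes over once stiffness is detected.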

18.
A new framework for model-based lung tissue segmentation in three-dimensional thoracic CT images is proposed. In the first stage, a parametric model of the lung segmenting surface is created using a shape representation based on the level-set method. This model is constituted by the sum of a mean distance function and a number of weighted eigenshapes. Consequently, unlike other model-based segmentation methods, there is no need to specify any marker point in this model. In the second stage, the segmenting surface is varied so as to match the binarized input image. For this purpose, a region-based energy function is minimized with respect to the parameters, including the weights of the eigenshapes and the coefficients of a three-dimensional similarity transform. Finally, the resulting segmenting surface is post-processed to improve its fit to the lung borders of the input image. The experimental results demonstrated that the proposed framework outperforms its model-based counterparts in the model-matching stage; moreover, it performed slightly better in terms of final segmentation results.

19.
Investigation of three-dimensional (3-D) geometry and fluid-dynamics in human arteries is an important issue in vascular disease characterization and assessment. Thanks to recent advances in magnetic resonance (MR) and computed tomography (CT), it is now possible to address the problem of patient-specific modeling of blood vessels, in order to take into account interindividual anatomic variability of vasculature. Generation of models suitable for computational fluid dynamics is still commonly performed by semiautomatic procedures, in general based on operator-dependent tasks, which cannot be easily extended to a significant number of clinical cases. In this paper, we overcome these limitations making use of computational geometry techniques. In particular, 3-D modeling was carried out by means of 3-D level sets approach. Model editing was also implemented ensuring harmonic mean curvature vectors distribution on the surface, and model geometric analysis was performed with a novel approach, based on solving Eikonal equation on Voronoi diagram. This approach provides calculation of central paths, maximum inscribed sphere estimation and geometric characterization of the surface. Generation of adaptive-thickness boundary layer finite elements is finally presented. The use of the techniques presented here makes it possible to introduce patient-specific modeling of blood vessels at clinical level.

20.
Rectangular building extraction from stereoscopic airborne Radar images (cited 2 times: 0 self-citations, 2 by others)
With the recent availability of images recorded by airborne synthetic aperture radar (SAR) systems, automatic digital elevation model (DEM) results for urban structures have recently been published. This paper deals with automatic extraction of three-dimensional (3-D) buildings from stereoscopic high-resolution images recorded by the SAR airborne RAMSES sensor from the French Aerospace Research Center (ONERA). On these images, roofs are not very textured, whereas typical strong L-shaped echoes are visible. These returns generally result from dihedral corners between the ground and structures. They provide part of the building footprints and the ground altitude, but not the building heights. Thus, we present an adapted processing scheme in two steps. The first is stereoscopic structure extraction from L-shaped echoes: buildings are detected on each image using the Hough transform and then recognized during a stereoscopic refinement stage based on a criterion optimization. The second is height measurement: as most of the previously extracted footprints indicate the ground altitude, building heights are found by monoscopic and stereoscopic measures, and between structures, ground altitudes are obtained by a dense matching process. Experiments are performed on images representing an industrial area, and results are compared with a ground truth. Advantages and limitations of the method are brought out.
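The Hough-transform detection step can be illustrated on a synthetic binary edge image: each edge pixel votes for every (rho, theta) line through it, and strong linear echoes appear as accumulator peaks (a minimal line-only sketch, not the paper's L-shape detector; the test image is made up):

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Accumulate votes in (rho, theta) space for a binary edge image."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # rho range is [-diag, diag)
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in zip(*np.nonzero(edges)):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # one vote per theta
    return acc, diag

# Synthetic image containing one horizontal edge (one side of an 'L')
img = np.zeros((40, 40), dtype=int)
img[20, 5:35] = 1
acc, diag = hough_lines(img)
rho_i, theta_i = np.unravel_index(np.argmax(acc), acc.shape)
print(rho_i - diag, theta_i)   # -> 20 90 : the line y = 20 at theta = 90 degrees
```

An L-shaped echo would produce two such peaks at roughly perpendicular theta values, which is the cue the footprint-detection stage exploits.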


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号