Similar Literature
19 similar documents found.
1.
With the growth of big data, Hadoop has become one of the key tools for big data processing. In practice, Hadoop's I/O operations constrain system performance. Hadoop systems typically reduce I/O by compressing data in software, but software compression is slow, which motivates replacing it with a hardware compression accelerator. Because Hadoop runs on the Java Virtual Machine, it cannot invoke the underlying I/O hardware compression accelerator directly. By implementing Hadoop compressor/decompressor classes and designing a C++ dynamic link library, we solve the two key problems of obtaining the data to be compressed from the Hadoop system and streaming it to the I/O hardware compression accelerator, thereby integrating the accelerator into the Hadoop framework. Experimental results show that the I/O hardware compression accelerator achieves a per-hertz compression speed of 15.9 B/s/Hz, and its integration improves Hadoop system performance by a factor of two.
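A minimal C++ sketch of the JNI bridge idea this abstract describes, assuming a hypothetical Java class `example.HwCompressor` with a native `compressDirect` method; the accelerator call is a stand-in, since the real design would hand the buffer to the I/O hardware compressor driver:

```cpp
// Hypothetical native half of a Hadoop compressor bridge (the Java class
// name and method are illustrative, not from the paper).
#include <jni.h>
#include <cstring>

// Stand-in for the accelerator driver call; the real design streams the
// buffer to the I/O hardware compressor (e.g., through a device driver).
static size_t accelerator_compress(const char* src, size_t srcLen,
                                   char* dst, size_t dstCap) {
    size_t n = srcLen < dstCap ? srcLen : dstCap;  // placeholder copy
    std::memcpy(dst, src, n);
    return n;
}

extern "C" JNIEXPORT jint JNICALL
Java_example_HwCompressor_compressDirect(JNIEnv* env, jobject,
                                         jobject srcBuf, jint srcLen,
                                         jobject dstBuf, jint dstCap) {
    // Direct ByteBuffers let native code see Hadoop's data without an
    // extra copy through the JVM heap.
    char* src = static_cast<char*>(env->GetDirectBufferAddress(srcBuf));
    char* dst = static_cast<char*>(env->GetDirectBufferAddress(dstBuf));
    if (!src || !dst) return -1;
    return static_cast<jint>(accelerator_compress(
        src, (size_t)srcLen, dst, (size_t)dstCap));
}
```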

2.
In this paper we present a novel hardware architecture for real-time image compression implementing a fast, searchless iterated function system (SIFS) fractal coding method. In the proposed method and corresponding hardware architecture, domain blocks are fixed to a spatially neighboring area of range blocks in a manner similar to that given by Furao and Hasegawa. A quadtree structure, covering from 32 × 32 blocks down to 2 × 2 blocks, and even to single pixels, is used for partitioning. Coding of 2 × 2 blocks and single pixels is unique among current fractal coders. The hardware architecture contains units for domain construction, zig-zag transforms, range and domain mean computation, and a parallel domain-range match capable of concurrently generating a fractal code for all quadtree levels. With this efficient, parallel hardware architecture, the fractal encoding speed is improved dramatically. Additionally, attained compression performance remains comparable to traditional search-based and other searchless methods. Experimental results, with the proposed hardware architecture implemented on an Altera APEX20K FPGA, show that the fractal encoder can encode a 512 × 512 × 8 image in approximately 8.36 ms operating at 32.05 MHz. Therefore, this architecture is seen as a feasible solution to real-time fractal image compression.
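As a rough illustration of the searchless matching step, the sketch below fits one range block against its fixed neighboring domain block with a least-squares contrast/brightness pair; the quadtree control, zig-zag transform, and parallel matching units of the architecture are omitted:

```cpp
#include <cstdio>

// Fit range = s * domain + o in the least-squares sense; in a searchless
// coder the domain is simply the spatial neighbour of the range block.
struct FractalCode { double s, o; };

FractalCode codeBlock(const double* range, const double* domain, int n) {
    double mr = 0, md = 0;
    for (int i = 0; i < n; ++i) { mr += range[i]; md += domain[i]; }
    mr /= n; md /= n;
    double cov = 0, var = 0;
    for (int i = 0; i < n; ++i) {
        cov += (domain[i] - md) * (range[i] - mr);
        var += (domain[i] - md) * (domain[i] - md);
    }
    double s = var > 0 ? cov / var : 0.0;   // contrast scaling
    return { s, mr - s * md };              // o = brightness offset
}

int main() {
    double domain[4] = { 10, 20, 30, 40 };
    double range[4]  = {  6, 11, 16, 21 };  // = 0.5 * domain + 1
    FractalCode c = codeBlock(range, domain, 4);
    std::printf("s=%.2f o=%.2f\n", c.s, c.o);  // s=0.50 o=1.00
}
```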

3.
This paper describes a new software/hardware architecture for processing wide-area airborne camera images in real time. The images under consideration are acquired with the 3K camera system developed at DLR (German Aerospace Center), which consists of three off-the-shelf cameras, each delivering a 16-Mpixel image three times per second. One camera points at nadir, while the other two look to the sides. The main intended applications of the system are road traffic monitoring, determining the load on public road networks during mass events, and surveying damage in disaster areas in real time. All of this demands a fast image processing system on the aircraft, because the volume of original high-resolution images cannot be sent to the ground with current transfer systems. The on-board image processing system is distributed over a local network, with several modules running concurrently on each PC. To synchronize processes and guarantee access to commonly used data, a new distributed middleware for real-time image processing is introduced. Two sophisticated modules, one for orthorectification of images and one for traffic monitoring, are explained in more detail: orthorectification and mosaicking are executed on the fast graphics processing unit of one PC, whereas the traffic monitoring module runs on another PC in the on-board network. The resulting image data and evaluated traffic parameters are sent to a ground station in near real time and distributed to the involved users. The proposed software/hardware system thus makes it possible to support rescue and security forces in disaster areas or during mass events in near real time.

Ulrike Thomas studied computer science at the University of Edinburgh, Scotland, and at the Technical University of Braunschweig, Germany, until 2000. From 2000 to 2007 she was a research assistant at the Institute of Robotics and Process Control at the Technical University of Braunschweig, and in 2008 she received her Ph.D. in robotics. Since 2007 she has been a member of the research group "Photogrammetry and Image Analysis" led by Dr. Peter Reinartz at the Remote Sensing Technology Institute (IMF) of the German Aerospace Center (DLR).

Dominik Rosenbaum studied physics and astronomy at the University of Bonn and received his Ph.D. in physics from Bochum University in 2006. Since 2007 he has been responsible for developing algorithms and methods for extracting traffic parameters from aerial images in the unit "Photogrammetry and Image Analysis" at the Remote Sensing Technology Institute (IMF) at the German Aerospace Center (DLR).

Franz Kurz studied geodesy at the Technical University Munich, Germany, until 1999, and in 2003 received his Ph.D. from the Technical University Munich in the field of remote sensing for agricultural decision support systems. From 2003 to 2005 he worked as a researcher at the cartographic institute (ICC) in Barcelona, and since 2005 he has been a member of the research group "Photogrammetry and Image Analysis" at the German Aerospace Center (DLR). His research now focuses on image analysis, remote sensing, and photogrammetry, e.g. 3D reconstruction of urban areas from airborne optical images.

Sahil Suri completed his bachelor of information technology at Hamdard University, New Delhi, in 2004, followed by a two-year master's in geomatics engineering at the Indian Institute of Technology, Roorkee, in 2006. In 2005-2006 he held a DAAD (German Academic Exchange Service) fellowship to write his master's thesis at the Technical University of Dresden, Germany. Since September 2006 he has been working at the German Aerospace Center as a Ph.D. student. His research interests include remote sensing image processing related to image registration, fusion, and traffic-related studies.

Peter Reinartz received his diploma in physics in 1983 and his Ph.D. in civil engineering from the University of Hannover in 1989. He is head of the unit "Photogrammetry and Image Analysis" at the German Aerospace Center (DLR), Remote Sensing Technology Institute (IMF). He has more than 20 years of experience in image processing and remote sensing and over 120 publications in these fields. His main interests are direct georeferencing, stereo photogrammetry with spaceborne and airborne data, generation of digital elevation models, and interpretation of VHR data from space sensors such as Ikonos and Quickbird.

4.
In this article, the researcher introduces a hybrid chain code for shape encoding, as well as lossless and lossy bi-level image compression. The lossless and lossy mechanisms primarily depend on agent movements in a virtual world and are inspired by many agent-based models, including the Paths model, the Bacteria Food Hunt model, the Kermack–McKendrick model, and the Ant Colony model. These models influence the present technique in three main ways: the movements of agents in a virtual world, the directions of movements, and the paths where agents walk. The agent movements are designed, tracked, and analyzed to take advantage of the arithmetic coding algorithm used to compress the series of movements encountered by the agents in the system. For the lossless mechanism, seven movements are designed to capture all the possible directions of an agent and to provide more space savings after being encoded using the arithmetic coding method. The lossy mechanism incorporates the seven movements in the lossless algorithm along with extra modes, which allow certain agent movements to provide further reduction. Additionally, two extra movements that lead to more substitutions are employed in the lossless and lossy mechanisms. The empirical outcomes show that the present approach for bi-level image compression is robust and that compression ratios are much higher than those obtained by other methods, including JBIG1 and JBIG2, which are international standards in bi-level image compression. Additionally, a series of paired-samples t-tests reveals that the differences between the current algorithms’ results and the outcomes from all the other approaches are statistically significant.
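The paper's seven agent movements are not specified here, so the sketch below uses the classic 8-direction Freeman chain code to show the general idea of turning boundary walks into a symbol stream that an arithmetic coder could then compress:

```cpp
#include <cstdio>
#include <utility>
#include <vector>

// Map a unit step (dx, dy) to one of the 8 Freeman directions 0..7
// (image coordinates, y growing downward).
int freemanDir(int dx, int dy) {
    static const int dirs[3][3] = { {3, 2, 1},    // dy = -1
                                    {4,-1, 0},    // dy =  0
                                    {5, 6, 7} };  // dy = +1
    return dirs[dy + 1][dx + 1];
}

int main() {
    // A tiny closed boundary walk; each step becomes one symbol for the
    // arithmetic coder.
    std::vector<std::pair<int,int>> pts =
        { {0,0}, {1,0}, {1,1}, {0,1}, {0,0} };
    for (size_t i = 1; i < pts.size(); ++i)
        std::printf("%d ", freemanDir(pts[i].first  - pts[i-1].first,
                                      pts[i].second - pts[i-1].second));
    std::printf("\n");  // prints: 0 6 4 2
}
```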

5.
Tool path interpolation is an important part of Computerized Numerical Control (CNC) systems because it determines machining accuracy, tool-motion smoothness, and overall efficiency. The use of parametric curves to generate tool-motion trajectories on a workpiece for high-accuracy machining has become a standard data format for CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) and CNC systems. Splines, Bezier, B-spline, and NURBS (Non-Uniform Rational B-spline) curves are the common parametric techniques used for tool path design. However, reported works highlight the high computational load required for this type of interpolation, so at best only one interpolation algorithm is usually implemented. The contribution of this paper is the development of a hardware processing unit based on Field Programmable Gate Arrays (FPGA) for industrial CNC machines that is capable of implementing all four main interpolation techniques, allowing the required technique to be selected according to the application. Two CAD models are designed to test the CNC interpolation; experimental results show the efficiency of the proposed methodology.
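A small sketch of one of the four interpolation techniques, cubic Bezier evaluation by de Casteljau's algorithm; the FPGA unit would implement this arithmetic in hardware, and a real interpolator would derive the parameter steps from the programmed feed rate:

```cpp
#include <cstdio>

struct Pt { double x, y; };

// Evaluate a cubic Bezier curve at parameter u with de Casteljau's
// algorithm: repeated linear interpolation of the control polygon.
Pt bezier3(const Pt p[4], double u) {
    Pt q[4];
    for (int i = 0; i < 4; ++i) q[i] = p[i];
    for (int r = 1; r < 4; ++r)
        for (int i = 0; i < 4 - r; ++i) {
            q[i].x = (1 - u) * q[i].x + u * q[i + 1].x;
            q[i].y = (1 - u) * q[i].y + u * q[i + 1].y;
        }
    return q[0];
}

int main() {
    Pt ctrl[4] = { {0,0}, {1,2}, {3,2}, {4,0} };
    // Sample the tool path at a fixed parameter step; a production
    // interpolator would choose steps from the commanded feed rate.
    for (double u = 0.0; u <= 1.0; u += 0.25) {
        Pt p = bezier3(ctrl, u);
        std::printf("u=%.2f -> (%.3f, %.3f)\n", u, p.x, p.y);
    }
}
```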

6.
This paper presents an effective compression method suitable for transmitting still images over the public switched telephone network (PSTN). Because a compression algorithm reduces the number of pixels or the gray levels of a source picture, it reduces the memory needed to store the source information and the time needed to transmit it over a channel of limited bandwidth. We first review some current standards and then select the lossy DCT-based JPEG compression method, which our studies indicate is among the most suitable. However, it is not directly applicable to image transmission on ordinary telephone lines (PSTN) and must be modified considerably for our purposes. From Shannon's information theory, we know that for a given information source, such as an image, there is a coding technique that permits the source to be coded with an average code length as close to the entropy of the source as desired. We have therefore modified the Huffman coding technique and obtained a new optimized version that is fast and easily implemented. We then applied the DCT and the FDCT for data compression. We have written and analyzed C++ programs for image compression/decompression that achieve a very high compression ratio (50:1 or more) with an excellent SNR. In this paper, we present the necessary modifications to the Huffman coding algorithms and the results of simulations on typical images.
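To make the DCT stage concrete, here is a direct (unoptimized) 8x8 DCT-II of the kind applied per block in DCT-based JPEG; the paper's FDCT would compute the same coefficients with a fast factorization:

```cpp
#include <cstdio>
#include <cmath>

const int N = 8;

// Direct (O(N^4)) 2D DCT-II of an 8x8 block, as used per block in
// DCT-based JPEG; production coders use a fast factorized FDCT instead.
void dct8x8(const double in[N][N], double out[N][N]) {
    const double pi = std::acos(-1.0);
    for (int u = 0; u < N; ++u)
        for (int v = 0; v < N; ++v) {
            double sum = 0;
            for (int x = 0; x < N; ++x)
                for (int y = 0; y < N; ++y)
                    sum += in[x][y]
                         * std::cos((2 * x + 1) * u * pi / (2 * N))
                         * std::cos((2 * y + 1) * v * pi / (2 * N));
            double cu = (u == 0) ? std::sqrt(0.5) : 1.0;
            double cv = (v == 0) ? std::sqrt(0.5) : 1.0;
            out[u][v] = 0.25 * cu * cv * sum;
        }
}

int main() {
    double in[N][N], out[N][N];
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y) in[x][y] = 128;  // flat block
    dct8x8(in, out);
    // A flat block compacts into the DC coefficient: DC=1024.0, AC~0.
    std::printf("DC=%.1f AC(0,1)=%.3f\n", out[0][0], out[0][1]);
}
```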

7.
This work is an extension of a previous paper (presented at the Cyberworlds 2019 conference) introducing a new method for fractal compression of bitmap binary images. That work is now extended and enhanced through three new valuable features: (1) the bat algorithm is replaced by an improved version based on optimal forage strategy (OFS) and random disturbance strategy (RDS); (2) the inclusion of new similarity metrics; and (3) the consideration of a variable number of contractive maps, whose value can change dynamically over the population and over the iterations. The first feature improves the search capability of the method, the second one improves the reconstruction accuracy, and the third one computes the optimal number of contractive maps automatically. This new scheme is applied to a benchmark of two binary fractal images exhibiting a complex and irregular fractal shape. The graphical and numerical results show that the method performs very well, being able to reconstruct the input images with high accuracy. It also computes the optimal number of contractive maps in a fully automatic way. A comparative work with other alternative methods described in the literature is also carried out. It shows that the presented method outperforms the previous approaches significantly.
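For readers unfamiliar with contractive maps, the sketch below iterates a fixed three-map IFS (the Sierpinski triangle) on a small binary image; the paper's contribution is finding such map coefficients, and their optimal number, automatically with the improved bat algorithm:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// One contractive affine map: (x, y) -> (a*x + b*W, c*y + d*H).
struct Map { double a, b, c, d; };

int main() {
    const int W = 32, H = 32;
    // The three maps of the Sierpinski-triangle IFS; the paper instead
    // searches for such coefficients (and their number) automatically.
    const Map ifs[3] = { {0.5, 0.0,  0.5, 0.0},
                         {0.5, 0.5,  0.5, 0.0},
                         {0.5, 0.25, 0.5, 0.5} };
    std::vector<char> img(W * H, 0), next(W * H, 0);
    img[0] = 1;                              // any non-empty starting set
    for (int it = 0; it < 12; ++it) {        // iterate the Hutchinson map
        std::fill(next.begin(), next.end(), 0);
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                if (img[y * W + x])
                    for (const Map& m : ifs)
                        next[int(m.c * y + m.d * H) * W
                           + int(m.a * x + m.b * W)] = 1;
        img.swap(next);
    }
    for (int y = 0; y < H; ++y) {            // crude ASCII rendering
        for (int x = 0; x < W; ++x)
            std::putchar(img[y * W + x] ? '#' : ' ');
        std::putchar('\n');
    }
}
```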

8.
This paper presents a new knowledge-based system for extracting and identifying text-lines from various real-life mixed text/graphics compound document images. The proposed system first decomposes the document image into distinct object planes to separate homogeneous objects, including textual regions of interest, non-text objects such as graphics and pictures, and background textures. A knowledge-based text extraction and identification method obtains the text-lines with different characteristics in each plane. The proposed system offers high flexibility and expandability by merely updating new rules to cope with various types of real-life complex document images. Experimental and comparative results prove the effectiveness of the proposed knowledge-based system and its advantages in extracting text-lines with a large variety of illumination levels, sizes, and font styles from various types of mixed and overlapping text/graphics complex compound document images.

9.
A method for compressing large binary images is proposed for applications where spatial access to the image is required. The proposed method is a two-stage combination of forward-adaptive modeling and backward-adaptive context-based compression with re-initialization of statistics. The method improves compression performance significantly in comparison to a straightforward combination of JBIG and tiling. Only minor modifications to the QM-coder are required, and therefore existing software implementations can be easily utilized. Technical details of the modifications are provided.
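A sketch of the kind of context modeling involved: each pixel's neighborhood is packed into an index that selects adaptive statistics, and, per the proposal here, each tile would re-initialize its statistics table at its boundary. The 10-pixel template below is illustrative, not the exact JBIG template:

```cpp
#include <cstdio>
#include <vector>

// Build a 10-pixel context index used to select adaptive statistics in
// JBIG-style coding (template layout is illustrative only).
int context(const std::vector<int>& img, int w, int h, int x, int y) {
    auto px = [&](int cx, int cy) {          // out-of-image reads as 0
        return (cx < 0 || cy < 0 || cx >= w || cy >= h)
             ? 0 : img[cy * w + cx];
    };
    const int dx[10] = {-2,-1, 0, 1, 2,-2,-1, 0, 1,-1};
    const int dy[10] = {-2,-2,-2,-2,-2,-1,-1,-1,-1, 0};
    int ctx = 0;
    for (int i = 0; i < 10; ++i)
        ctx = (ctx << 1) | px(x + dx[i], y + dy[i]);
    return ctx;                              // 0..1023
}

int main() {
    int w = 4, h = 4;
    std::vector<int> img = { 1,1,0,0,
                             0,1,1,0,
                             0,0,1,1,
                             0,0,0,1 };
    // Each tile would keep its own 1024-entry statistics table and
    // re-initialize it at the tile boundary, as the paper proposes.
    std::printf("ctx(2,2)=%d\n", context(img, w, h, 2, 2));
}
```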

10.
11.
In this paper, we address a new approach to high-resolution reconstruction and enhancement of remote sensing (RS) imagery in near-real computational time, based on an aggregated hardware/software (HW/SW) co-design paradigm. The software design aims at an algorithmic-level decrease of the computational load of large-scale RS image enhancement tasks by incorporating into the fixed-point iterative reconstruction/enhancement procedures a convex convergence-enforcement regularization, constructed from proper projectors onto convex sets (POCS) in the solution domain. The established POCS-regularized iterative techniques are performed separately along the range and azimuth directions over the RS scene frame, making optimal use of the sparseness properties of the employed sensor system's modulation format. The hardware design employs a Xilinx Field Programmable Gate Array XC4VSX35-10ff668 and performs the image enhancement/reconstruction tasks in a computationally efficient parallel fashion that meets near-real-time imaging system requirements. Finally, we report simulation results and discuss implementation performance issues related to enhancement of real-world RS imagery, indicative of the significantly increased performance efficiency gained with the developed approach.
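A toy illustration of the POCS idea on a 1-D signal: alternate projections onto a data-consistency set and an amplitude-bound set, with a smoothing pass standing in for the paper's regularization; the actual projectors, and the separable range/azimuth processing, are far richer:

```cpp
#include <cstdio>
#include <vector>

// Toy POCS loop: project onto C1 (match the observed samples), then onto
// C2 (amplitudes in [0,1]), with a smoothing pass as a crude
// regularization surrogate.
int main() {
    const int n = 8;
    std::vector<double> x(n, 0.5);              // initial estimate
    const int    known[3] = { 0, 3, 6 };        // observed positions
    const double obs[3]   = { 0.2, 0.9, 0.4 };  // observed values
    for (int it = 0; it < 20; ++it) {
        for (int k = 0; k < 3; ++k) x[known[k]] = obs[k];  // onto C1
        for (double& v : x) {                              // onto C2
            if (v < 0) v = 0;
            if (v > 1) v = 1;
        }
        std::vector<double> y = x;              // smooth interior samples
        for (int i = 1; i < n - 1; ++i)
            x[i] = 0.25 * y[i - 1] + 0.5 * y[i] + 0.25 * y[i + 1];
    }
    for (double v : x) std::printf("%.3f ", v);
    std::printf("\n");
}
```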

12.
Deep Pipeline Optimization of the 5/3 Wavelet Lifting Structure
To meet the demands of real-time, high-speed, wavelet-based signal processing, a faster 5/3 wavelet transform is implemented on an FPGA. Static timing analysis identifies the main speed-limiting factors of the current 5/3 wavelet transform structure, and deep pipelining is used to cut the long combinational logic paths in the original structure, raising the maximum operating frequency. At the cost of only a small number of additional registers, the design reaches 250% of the original structure's speed and a data throughput of up to 300 Msamples/s, making it suitable for high-speed signal processing systems based on wavelet transforms and FPGAs.
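For reference, the arithmetic that each pipeline stage implements is the reversible integer 5/3 lifting transform (predict, then update), sketched below in C++ with symmetric boundary extension:

```cpp
#include <cstdio>
#include <vector>

// One level of the integer 5/3 lifting transform (predict then update);
// this is the per-sample arithmetic the FPGA pipeline stages implement.
void lift53(std::vector<int>& x) {
    int n = (int)x.size();                  // assume n even, n >= 4
    // predict: odd samples become high-pass details
    for (int i = 1; i < n; i += 2) {
        int l = x[i - 1];
        int r = (i + 1 < n) ? x[i + 1] : x[i - 1];  // mirror at edge
        x[i] -= (l + r) >> 1;
    }
    // update: even samples become the low-pass band
    for (int i = 0; i < n; i += 2) {
        int l = (i > 0) ? x[i - 1] : x[i + 1];      // mirror at edge
        int r = (i + 1 < n) ? x[i + 1] : x[i - 1];
        x[i] += (l + r + 2) >> 2;
    }
}

int main() {
    std::vector<int> x = { 10, 12, 14, 16, 20, 26, 34, 44 };
    lift53(x);
    for (int v : x) std::printf("%d ", v);  // interleaved low/high bands
    std::printf("\n");
}
```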

13.
Though the popular IEEE 802.11 DCF is designed primarily for wireless LAN (WLAN) environments, today it is widely used for wide area wireless mesh networking. The protocol parameters of IEEE 802.11, such as timeout values, interframe spaces, and slot durations, which suffice in a general WLAN environment, must be modified to operate efficiently in wide area wireless mesh networks. Current wide area wireless mesh deployments configure these parameters manually to their upper limits, which makes the networks operate at lower system efficiency. In this paper, we propose d802.11 (dynamic 802.11), which dynamically adapts the protocol parameters in order to operate at varying link distances. In fact, in 802.11, a transmitter can face an ACK/CTS timeout even when it has started receiving the ACK/CTS packet before the timeout expires. We present three strategies, (i) multiplicative timer backoff (MTB), (ii) additive timer backoff (ATB), and (iii) link RTT memoization (LRM), for adapting the ACK_TIMEOUT in d802.11 to varying link dimensions. Through extensive simulation experiments we observed significant performance improvements for the proposed strategies. We also theoretically modeled the maximum link throughput as a function of link dimension for the proposed system. Our results show that the LRM technique provides the best adaptation of all the schemes.
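A minimal sketch of the MTB strategy: on each ACK timeout the transmitter grows ACK_TIMEOUT multiplicatively until the ACK arrives in time. The constants are illustrative, not values from the paper:

```cpp
#include <cstdio>

// Multiplicative timer backoff (MTB) in the spirit of d802.11: double
// ACK_TIMEOUT on every timeout so the sender adapts to links longer
// than the WLAN defaults assume.
int main() {
    double timeout_us = 50.0;               // initial ACK_TIMEOUT
    const double cap_us = 1000.0;           // configured upper bound
    const double link_rtt_us = 330.0;       // unknown to the sender
    for (int attempt = 1; attempt <= 8; ++attempt) {
        bool ack_in_time = link_rtt_us <= timeout_us;
        std::printf("attempt %d: timeout=%.0fus -> %s\n", attempt,
                    timeout_us, ack_in_time ? "ACK" : "timeout");
        if (ack_in_time) break;
        timeout_us *= 2.0;                  // multiplicative increase
        if (timeout_us > cap_us) timeout_us = cap_us;
    }
}
```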

14.
This paper presents a Field Programmable Gate Array (FPGA) implementation for image/video compression using an improved block truncation coding (BTC) technique. The improvement comes from employing a Hopfield neural network (HNN) to calculate a cost function by which a block is classified as either a high- or a low-detail block. Different blocks are accordingly coded at different bit rates, resulting in better compression ratios. The paper formulates the use of the HNN within the BTC algorithm in a way that yields a viable FPGA implementation, exploiting the inherent parallelism of the BTC/HNN algorithm to provide efficient algorithm-to-architecture mapping. The Xilinx Virtex-E BTC implementation provides a processing speed of about 1.113 × 10⁶ pixels per second at a bit rate that varies between 1.25 and 2 bits/pixel, according to the image nature.
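The sketch below shows the core BTC quantizer for a single 4x4 block (mean, standard deviation, bitmap, and the two mean/variance-preserving reconstruction levels); the paper's HNN cost function, which decides the per-block bit rate, is not modeled here:

```cpp
#include <cstdio>
#include <cmath>

// Plain BTC of one 4x4 block: keep the block mean and standard
// deviation plus a 16-bit bitmap marking pixels above the mean.
int main() {
    int p[16] = { 120, 130, 125, 200, 118, 135, 210, 205,
                  122, 215, 208, 212, 220, 218, 216, 211 };
    double mean = 0, sq = 0;
    for (int v : p) { mean += v; sq += double(v) * v; }
    mean /= 16; sq /= 16;
    double sigma = std::sqrt(sq - mean * mean);
    int q = 0;                               // pixels at or above mean
    unsigned bitmap = 0;
    for (int i = 0; i < 16; ++i)
        if (p[i] >= mean) { bitmap |= 1u << i; ++q; }
    // Reconstruction levels preserving mean and variance; this sample
    // block guarantees 0 < q < 16, so the divisions are safe.
    double a = mean - sigma * std::sqrt(double(q) / (16 - q));
    double b = mean + sigma * std::sqrt(double(16 - q) / q);
    std::printf("mean=%.1f sigma=%.1f bitmap=%04x a=%.1f b=%.1f\n",
                mean, sigma, bitmap, a, b);
}
```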

15.
Design and Implementation of a TM/ETM Image Publishing System Based on ArcIMS and JSP
TM/ETM remote sensing imagery is widely used in scientific research, but domestic users currently face some difficulty in obtaining the data. As a mirror site of the global land cover database, we need to publish a set of TM/ETM images over the Internet for the convenience of domestic users, and to track the flow of the data so that future improvements can be targeted. ArcIMS is a solution for publishing map services on the Web, and JSP is a mature, widely used Web programming technology; combining ArcIMS and JSP to publish TM/ETM remote sensing imagery provides fast, intuitive data indexing that meets the needs of users at different levels. Based on project practice, this paper studies the overall design and implementation of the TM/ETM image publishing system and analyzes its key code.

16.
The purpose of this work is to automatically segment multi-region fluorodeoxyglucose (FDG) radioactivity uptake in fused positron emission tomography/computed tomography (PET/CT) images, irrespective of its location in the body. Color image processing is performed to filter and enhance the saturation components of the images. The proposed method of graph-cut image partitioning through kernel mapping of the image data is applied to the saturation-equalized components of the red, green, and blue model images. Energy minimization of the objective function comprises minimizing the data term within each segmentation region and smoothing the regularization term while preserving region boundaries. Hybrid kernel functions are used for partitioning via graph-cut iterations, and region parameters are computed through fixed-point iteration. Combining the strengths of global and local kernel functions makes the segmentation robust and accurate. Performance is assessed on different views of fused PET/CT images and evaluated qualitatively, quantitatively, and comparatively. The method can be applied to the analysis of certain image features, diagnosis, and display purposes.
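As a sketch of the kernel-induced data term, the snippet below evaluates D(x, mu) = K(x,x) - 2K(x,mu) + K(mu,mu) with RBF kernels at two scales; the mixing weights and scales are assumptions, since the paper's hybrid kernel is not specified here:

```cpp
#include <cmath>
#include <cstdio>

// RBF kernel on scalar intensities.
double rbf(double a, double b, double sigma) {
    double d = a - b;
    return std::exp(-d * d / (2 * sigma * sigma));
}

// Kernel-induced distance D(x, mu) = K(x,x) - 2K(x,mu) + K(mu,mu),
// mixed over a wide ("global") and a narrow ("local") kernel. The
// weights and scales below are illustrative assumptions.
double hybridDist(double x, double mu) {
    const double wGlobal = 0.6, wLocal = 0.4;
    auto dist = [&](double s) {
        return rbf(x, x, s) - 2 * rbf(x, mu, s) + rbf(mu, mu, s);
    };
    return wGlobal * dist(32.0) + wLocal * dist(8.0);
}

int main() {
    const double muUptake = 200.0, muBackground = 60.0; // region params
    const double samples[3] = { 190.0, 120.0, 70.0 };
    for (double x : samples)
        std::printf("x=%.0f  d(uptake)=%.3f  d(bg)=%.3f\n", x,
                    hybridDist(x, muUptake), hybridDist(x, muBackground));
    // Graph-cut iterations would use these distances as the data term.
}
```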

17.
Data compression techniques have long helped make effective use of disk, network, and other resources. Most compression utilities require explicit user action to compress and decompress file data, but in some systems the operating system compresses and decompresses file data transparently. A compressed file requires fewer sectors of disk storage, so incorporating data compression into a file system yields a larger effective disk space, while the additional time needed for compression and decompression is largely offset by the time gained through fewer disk accesses. In this paper we describe the design and implementation of a file system for the Linux kernel with on-the-fly data compression and decompression performed transparently to the user. We also present experimental results showing that the performance of our file system is comparable to that of Ext2fs, the native file system for Linux.
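A sketch of the space-accounting argument above, using zlib as a stand-in codec: compress a 4 KB block and count how many fewer 512-byte sectors it needs (compile with -lz):

```cpp
#include <cstdio>
#include <vector>
#include <zlib.h>

// Compress one file block and translate the byte savings into sector
// savings, the quantity a transparently compressing file system banks.
int main() {
    const size_t SECTOR = 512;
    std::vector<unsigned char> block(4096, 'A');   // highly compressible
    uLongf cap = compressBound(block.size());
    std::vector<unsigned char> out(cap);
    if (compress(out.data(), &cap, block.data(), block.size()) != Z_OK)
        return 1;
    size_t rawSectors  = (block.size() + SECTOR - 1) / SECTOR;
    size_t packSectors = (cap + SECTOR - 1) / SECTOR;
    std::printf("raw: %zu sectors, compressed: %zu sectors (%lu bytes)\n",
                rawSectors, packSectors, (unsigned long)cap);
}
```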

18.
19.