20 similar documents found (search time: 62 ms)
1.
Multi-dimensional sparse array operations arise in the atmospheric and ocean sciences, image processing, and many other fields, and have been extensively investigated. It is therefore important to design efficient data distribution schemes for multi-dimensional sparse arrays. In our previous work, we proposed two data distribution schemes, Compress Followed Send (CFS) and Encoding-Decoding (ED), for sparse arrays based on the traditional matrix representation (TMR) scheme. We also proposed another scheme, called extended Karnaugh map representation (EKMR), for representing sparse arrays; the EKMR scheme outperforms the TMR scheme for some sparse array operations. Hence, in this paper, we propose efficient data distribution schemes for EKMR-based sparse arrays. We first extend the CFS and ED schemes from TMR-based to EKMR-based sparse arrays. We then compare the performance of these two schemes with that of Send Followed Compress (SFC), an intuitive data distribution scheme for sparse arrays. Finally, we compare all three schemes for EKMR-based sparse arrays with their TMR-based counterparts. Both theoretical analysis and experimental tests were conducted, and both show that the ED scheme is superior to the CFS scheme, which in turn is superior to the SFC scheme, for most of the tested EKMR-based sparse arrays, and that all three schemes perform better on EKMR-based sparse arrays than on TMR-based sparse arrays in all test cases.
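The abstract does not define the EKMR layout, so the sketch below only illustrates the general idea under an assumed index mapping: a sparse 3D array is folded into a 2D layout (here, row = i, column = k·n2 + j) so that ordinary 2D compression machinery such as CRS can be applied to it. The mapping and all names are illustrative assumptions, not the paper's definition.

```python
# Illustrative sketch (not the paper's exact definition): EKMR(3)-style
# flattening of a sparse 3D array into a 2D layout, followed by a simple
# CRS (compressed row storage) compression of the 2D result.

def ekmr3_layout(entries, n1, n2, n3):
    """Map nonzeros of a 3D array A[i][j][k] (i<n1, j<n2, k<n3) to a 2D
    array of shape (n1, n2*n3): row = i, col = k*n2 + j (assumed layout)."""
    grid = {}
    for (i, j, k), v in entries.items():
        grid[(i, k * n2 + j)] = v
    return grid

def crs(grid, n_rows, n_cols):
    """Compress a sparse 2D dict {(r, c): v} into CRS arrays
    (linear scan for clarity, not efficiency)."""
    values, col_idx, row_ptr = [], [], [0]
    for r in range(n_rows):
        for c in range(n_cols):
            if (r, c) in grid:
                values.append(grid[(r, c)])
                col_idx.append(c)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

# Example: a 2x2x2 array with two nonzeros.
a3 = {(0, 1, 0): 5.0, (1, 0, 1): 7.0}
g = ekmr3_layout(a3, 2, 2, 2)
print(crs(g, 2, 4))   # ([5.0, 7.0], [1, 2], [0, 1, 2])
```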
2.
Data compression is a well-known method for improving the image composition time of parallel volume rendering on distributed-memory multicomputers. In this paper, we propose an efficient data compression scheme for image composition: the template run-length encoding (TRLE) scheme. Given an image of 2n×2n pixels, the TRLE scheme treats the image as n×n blocks of 2×2 pixels each. Since each pixel is either blank or non-blank, there are 16 possible templates for a block. To compress an image, the TRLE scheme encodes it block by block, similarly to run-length encoding. However, the TRLE scheme can filter out, or encode in very little space, blocks whose four pixels are all blank; that is, it can encode a partial image according to the shape of its non-blank pixels. To evaluate the performance of the TRLE scheme, we compare it with the BR, RLE, and BRLC schemes. Since a data compression scheme must cooperate with a data communication scheme, the implementation uses the binary-swap, parallel-pipelined, and rotate-tiling data communication schemes. Combining the four compression schemes with the three communication schemes yields twelve image composition methods, which we implemented on an IBM SP2 parallel machine. Four volume datasets are used as test samples, and the data computation and communication times are measured. The experimental results show that the TRLE compression scheme combined with the rotate-tiling communication scheme outperforms the other eleven image composition methods on all test samples.
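A minimal sketch of the block/template idea described in the abstract. The exact code words of TRLE are not given there, so the output format below is an assumption: each 2×2 block is reduced to a 4-bit template of blank/non-blank flags, and runs of all-blank blocks are collapsed.

```python
# Hedged sketch of a TRLE-style encoder: the 16 block templates come from
# the 4 blank/non-blank flags of a 2x2 block; runs of all-blank blocks
# (template 0) are run-length encoded. Output format is illustrative;
# a real encoder would also emit the non-blank pixel values.

def trle_encode(image):
    """image: 2n x 2n list of lists; 0 = blank pixel, nonzero = non-blank."""
    n = len(image) // 2
    out = []            # list of ('blank_run', count) or ('block', template)
    blank_run = 0
    for bi in range(n):
        for bj in range(n):
            px = [image[2*bi][2*bj],   image[2*bi][2*bj+1],
                  image[2*bi+1][2*bj], image[2*bi+1][2*bj+1]]
            template = sum((1 << k) for k, p in enumerate(px) if p != 0)
            if template == 0:
                blank_run += 1          # all-blank block: extend the run
            else:
                if blank_run:
                    out.append(('blank_run', blank_run))
                    blank_run = 0
                out.append(('block', template))
    if blank_run:
        out.append(('blank_run', blank_run))
    return out

img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 2],
       [0, 0, 3, 0]]
print(trle_encode(img))   # [('blank_run', 3), ('block', 7)]
```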
Corresponding author: Don-Lin Yang
3.
A hybrid multi-group approach for privacy-preserving data mining (Cited by: 6; self-citations: 6, others: 0)
In this paper, we propose a hybrid multi-group approach for privacy-preserving data mining. We make two contributions. First, we propose a hybrid approach. Previous work has used either the randomization approach or the secure multi-party computation (SMC) approach. These two approaches have complementary features: the randomization approach is much more efficient but less accurate, while the SMC approach is less efficient but more accurate. Our novel hybrid approach takes advantage of the strengths of both to balance the accuracy and efficiency constraints: compared to the two existing approaches, it achieves much better accuracy than the randomization approach and a much lower computation cost than the SMC approach. Second, we propose a multi-group scheme that gives the data miner flexible control over the balance between data mining accuracy and privacy. This scheme is motivated by the fact that existing randomization schemes, which randomize data at the individual-attribute level, can produce insufficient accuracy when the number of dimensions is high. We partition the attributes into groups and develop a scheme for group-based randomization that achieves better data mining accuracy. To demonstrate the effectiveness of the proposed general schemes, we have implemented them for the ID3 decision tree algorithm and the association rule mining problem, and we present experimental results.
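The abstract does not specify the randomization operator, so the sketch below illustrates only the group-based idea with simple additive noise shared within each attribute group; the grouping, the noise model, and all parameter names are assumptions for illustration.

```python
# Illustrative sketch of group-based randomization: attributes are
# partitioned into groups and each group is perturbed jointly, instead of
# randomizing every attribute independently. The additive-noise operator
# below is an assumption; the paper's actual operator may differ.
import random

def randomize_by_group(record, groups, noise_scale=1.0):
    """record: list of numeric attribute values.
    groups: list of lists of attribute indices (a partition)."""
    out = list(record)
    for group in groups:
        # One noise draw per group couples the perturbation of its members,
        # which is the key difference from per-attribute randomization.
        noise = random.gauss(0.0, noise_scale)
        for idx in group:
            out[idx] = record[idx] + noise
    return out

rec = [5.0, 3.2, 7.7, 1.1]
print(randomize_by_group(rec, groups=[[0, 1], [2, 3]]))
```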
Corresponding author: Wenliang Du
4.
Improvements on a dynamic adjustment mechanism in co-allocation data grid environments (Cited by: 3; self-citations: 3, others: 0)
Chao-Tung Yang, I-Hsien Yang, Kuan-Ching Li, Shih-Yu Wang. The Journal of Supercomputing, 2007, 40(3): 269-280
Several co-allocation strategies have been coupled and used to exploit rate differences among various client-server links and to address dynamic rate fluctuations by dividing files into multiple blocks of equal size. However, a major obstacle, the idle time of faster servers that must wait for the slowest server to deliver its final block, makes it important to reduce the differences in finishing time among replica servers. In this paper, we propose a dynamic co-allocation scheme, the Recursive-Adjustment Co-Allocation scheme, to improve data transfer performance in Data Grids. Our approach reduces the idle time spent waiting for the slowest server and decreases the data transfer completion time.
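A hedged sketch of the general recursive-adjustment idea as the abstract describes it; the block sizing, the "allocate half and adjust later" rule, and all names are assumptions, not the paper's algorithm. At each round, the remaining bytes are split across servers in proportion to their most recently measured rates, so faster servers receive proportionally more work and finishing times converge.

```python
# Illustrative sketch (not the paper's exact algorithm): recursively assign
# portions of the remaining file in proportion to each server's measured
# transfer rate, so that all servers tend to finish at the same time.

def recursive_allocate(total_bytes, rates, rounds=3):
    """rates: measured bytes/sec per replica server. Returns, per round,
    the byte share assigned to each server."""
    plan = []
    remaining = total_bytes
    for r in range(rounds):
        # Allocate only a fraction of the remainder each round, keeping
        # some work back so later rounds can correct for rate changes.
        chunk = remaining if r == rounds - 1 else remaining // 2
        total_rate = sum(rates)
        shares = [chunk * rate // total_rate for rate in rates]
        shares[-1] += chunk - sum(shares)   # give rounding residue to one server
        plan.append(shares)
        remaining -= chunk
    return plan

for shares in recursive_allocate(100_000_000, rates=[50_000, 30_000, 20_000]):
    print(shares)
```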
Corresponding author: Shih-Yu Wang
5.
To provide high data availability in peer-to-peer (P2P) DHTs, proper data redundancy schemes are required. This paper compares two popular schemes, replication and erasure coding. Unlike previous comparisons, we take user download behavior into account. Furthermore, we propose a hybrid redundancy scheme that shares user-downloaded files for subsequent accesses and utilizes erasure coding to adjust file availability. Comparative experiments on the three schemes show that replication saves more bandwidth than erasure coding, although it requires more storage space, when average node availability is higher than 47%; moreover, our hybrid scheme saves more maintenance bandwidth with an acceptable redundancy factor.
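For intuition about the replication-versus-erasure-coding trade-off, the following sketch computes the standard textbook availability formulas; this is a generic model, not the paper's experimental methodology. With node availability p, k full replicas give 1-(1-p)^k, while an (n, m) erasure code needs at least m of n fragments online.

```python
# Standard availability model (textbook formulas, not the paper's
# simulation): compare k-way replication with an (n, m) erasure code
# under independent node availability p.
from math import comb

def repl_availability(p, k):
    """At least one of k replicas is online."""
    return 1.0 - (1.0 - p) ** k

def ec_availability(p, n, m):
    """At least m of n fragments are online."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(m, n + 1))

for p in (0.3, 0.47, 0.7):
    # Same storage overhead: 4 replicas vs. an (8, 2) code (factor 4).
    print(f"p={p:.2f}  replication(k=4): {repl_availability(p, 4):.4f}  "
          f"erasure(n=8, m=2): {ec_availability(p, 8, 2):.4f}")
```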
Corresponding author: Fan Wu
6.
Bit arrays, or bitmaps, are used to significantly speed up set operations in several areas, such as data warehousing, information retrieval, and data mining, to name a few. However, bitmaps usually occupy considerable storage space and therefore require compression; consequently, there is a space-time tradeoff among compression schemes. The Word Aligned Hybrid (WAH) bitmap compression scheme trades some space for the ability to perform bitwise operations without first decompressing the bitmaps, and it has been recognized as the most efficient scheme in terms of computation time. In this paper we present Concise (Compressed ‘n’ Composable Integer Set), a new scheme that performs significantly better than WAH. In particular, compared to WAH, our algorithm reduces the required memory by up to 50% while having comparable computation time. Further, we show that Concise can be efficiently used to represent sets of integers in lieu of well-known data structures such as arrays, lists, hashtables, and self-balancing binary search trees. Extensive experiments over synthetic data show the effectiveness of our proposal.
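A minimal sketch of the word-aligned idea that WAH (and, with refinements, Concise) builds on. This simplified encoder is illustrative only and matches neither format exactly: the bitmap is cut into 31-bit groups, and each group becomes either a literal word or extends a fill word covering a run of all-0 or all-1 groups.

```python
# Simplified word-aligned bitmap compression in the spirit of WAH
# (illustrative; not the exact WAH or Concise word format). Each 31-bit
# group becomes a literal word, and runs of identical all-0/all-1 groups
# collapse into (fill_bit, run_length) fill words.

def wah_like_encode(bits):
    """bits: string of '0'/'1'. Returns a list of
    ('lit', 31-bit string) and ('fill', bit, n_groups) words."""
    pad = (-len(bits)) % 31           # pad to a multiple of 31 bits
    bits += '0' * pad
    groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]
    words = []
    for g in groups:
        if g == '0' * 31 or g == '1' * 31:
            fill_bit = g[0]
            if words and words[-1][0] == 'fill' and words[-1][1] == fill_bit:
                words[-1] = ('fill', fill_bit, words[-1][2] + 1)
            else:
                words.append(('fill', fill_bit, 1))
        else:
            words.append(('lit', g))
    return words

bm = '1' * 62 + '0' * 93 + '101' + '0' * 28
print(wah_like_encode(bm))
# [('fill', '1', 2), ('fill', '0', 3), ('lit', '101' + 28 zeros)]
```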
7.
We present a framework that uses data dependency information to automate load-balanced volume distribution and ray-task scheduling for parallel visualization of massive volumes. This dependency graph approach improves load balancing for both ray casting and ray tracing. The main bottlenecks in distributed volume rendering are moving data across the network and loading memory into rendering hardware; our load balancing solution combines static network distribution with dynamic ray-task scheduling. At the core of the dependency graph approach are the cell-tree and the flex-block tree, the latter introduced in this paper. The flex-block tree is similar to a kd-tree, except that its leaf nodes are cells containing a combination of empty space and tightly cropped subvolumes, or flex-blocks. A main contribution of this paper is the moving walls algorithm, which uses dynamic programming to create a flex-block partition. We show results for optimizing distributed ray-cast rendering using a time cost function, and we compare data distribution using the moving walls algorithm with distribution using a recursive solution and with a grid combined with a local kd-tree partition on each render node.
Corresponding author: Arie Kaufman
8.
A new method for data hiding in H.264/AVC streams is presented. The proposed method exploits the IPCM-encoded macroblocks during the intra prediction stage in order to hide the desired data. It is a blind data hiding scheme; that is, the message can be extracted directly from the encoded stream without needing the original host video. Moreover, the method exhibits the useful property that the compressed stream can be reused to hide different data numerous times without considerably affecting either the bit rate or the perceptual quality. This property allows data hiding directly in the compressed stream in real time. The method is well suited to covert communication and content authentication applications.
Corresponding author: Athanassios N. Skodras
9.
S. M. Farhad, Md. Mostofa Akbar, Md. Humayun Kabir. Multimedia Tools and Applications, 2009, 43(1): 63-90
Multicast Video-on-Demand (VoD) systems are scalable and cheap to operate. In such systems, a single stream is shared by a batch of common user requests. In this research, we propose a multicast communication technique for an enterprise network in which multimedia data are stored in distributed servers. We consider a novel patching scheme, called Client-Assisted Patching, in which the buffers of clients in a multicast group are used to patch the missing portion for clients who request the same movie shortly afterwards. This scheme significantly reduces the server load without requiring more client cache space than conventional patching schemes. Clients can join an existing multicast session without waiting for the next available server stream, which reduces service latency. Moreover, the system is more scalable and cost-effective than similar existing systems. Our simulation experiments confirm all these claims.
Corresponding author: Md. Humayun Kabir
10.
Ali Khoshgozaran, Ali Khodaei, Mehdi Sharifzadeh, Cyrus Shahabi. Knowledge and Information Systems, 2008, 17(3): 265-286
Vector data, and in particular road networks, are queried, hosted, and processed in many application domains, such as mobile computing. Many client systems, such as PDAs, prefer to receive query results in unrasterized format without incurring overhead in overall system performance or result size. While several general vector data compression schemes have been studied by different communities, we propose a novel approach to vector data compression that is easily integrated within a geospatial query processing system. It uses line aggregation to reduce the number of relevant tuples and Huffman compression to achieve a multi-resolution compressed representation of a road network database. Our experiments, performed on an end-to-end prototype, verify that our approach exhibits fast query processing on both the client and server sides as well as a high compression ratio.
Corresponding author: Cyrus Shahabi
11.
J. Bobin, Y. Moudden, J. Fadili, J.-L. Starck. Journal of Mathematical Imaging and Vision, 2009, 33(2): 149-168
Over the last decade, overcomplete dictionaries and the very sparse signal representations they make possible have attracted intense interest in signal processing theory. In a wide range of signal processing problems, sparsity has been a crucial property leading to high performance, and it has proved its efficiency in a wide range of inverse problems. As multichannel data are of growing interest, it seems essential to devise sparsity-based tools that account for such specific multichannel data. Here we address multichannel inverse problems, such as multichannel morphological component separation and inpainting, from the perspective of sparse representation. In this paper, we introduce a new sparsity-based multichannel analysis tool coined multichannel Morphological Component Analysis (mMCA). This new framework focuses on multichannel morphological diversity to better represent multichannel data. The paper presents conditions under which mMCA converges and recovers the sparse multichannel representation. Several experiments demonstrate the applicability of our approach to a set of multichannel inverse problems, such as morphological component decomposition and inpainting.
Corresponding author: J.-L. Starck
12.
Lorina Dascal, Adi Ditkowski, Nir A. Sochen. Journal of Mathematical Imaging and Vision, 2007, 29(1): 63-77
We analyze the discrete maximum principle for the Beltrami color flow. The Beltrami flow can display linear as well as nonlinear behavior according to the value of a parameter β, which represents the ratio between spatial and color distances. In general, standard schemes fail to satisfy the discrete maximum principle. In this work we show that a nonnegative second-order difference scheme can be built for this flow only for small β, i.e., for linear-like diffusion. Since this limitation is too severe, we construct a novel finite difference scheme which, although not nonnegative, satisfies the discrete maximum principle for all values of β. Numerical results support the analysis.
Corresponding author: Nir A. Sochen
13.
Data grids regularly deal with huge amounts of data, and it is a fundamental challenge to ensure efficient access to such widely distributed data sets. Creating replicas at suitable sites via a data replication strategy can increase system performance: it shortens data access time and reduces bandwidth consumption. In this paper, a dynamic data replication mechanism called Latest Access Largest Weight (LALW) is proposed. LALW selects a popular file for replication and calculates a suitable number of copies and suitable grid sites for replication. By associating a different weight with each historical data access record, the importance of each record is differentiated: a more recent record has a larger weight, indicating that it is more pertinent to the current data access situation. A grid simulator, OptorSim, is used to evaluate the performance of this dynamic replication strategy. The simulation results show that LALW successfully increases effective network usage; that is, the LALW replication strategy can identify a popular file and replicate it to a suitable site without unduly increasing the network burden.
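The abstract states only that more recent access records receive larger weights; the half-life decay and the popularity score below are assumptions chosen to illustrate that idea, not the paper's exact weighting function.

```python
# Illustrative weighting of historical access records in the spirit of
# LALW: newer records weigh more. The exponential half-life decay and
# the popularity score are assumptions; the paper's exact weighting is
# not given in the abstract.
import time

def popularity(access_log, now=None, half_life=3600.0):
    """access_log: list of (file_id, unix_timestamp) access records.
    Returns dict file_id -> weighted access count."""
    now = time.time() if now is None else now
    scores = {}
    for file_id, ts in access_log:
        age = now - ts
        weight = 0.5 ** (age / half_life)   # halves every `half_life` seconds
        scores[file_id] = scores.get(file_id, 0.0) + weight
    return scores

log = [('a', 1000.0), ('a', 4600.0), ('b', 1000.0), ('b', 1200.0)]
print(popularity(log, now=4600.0, half_life=3600.0))
# 'a': 1.5 (one fresh + one hour-old access); 'b': ~1.02 (two old accesses)
```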
Corresponding author: Ruay-Shiung Chang
14.
Modular redundancy and temporal redundancy are traditional techniques for increasing system reliability. With technology advances, slack time in a system can be used not only as temporal redundancy but also by energy management schemes to save energy. In this paper, we consider the combination of modular and temporal redundancy to achieve energy-efficient, reliable real-time service provided by multiple servers. We first propose an efficient adaptive parallel recovery scheme that appropriately processes service requests in parallel to increase the number of faults that can be tolerated, and thus system reliability. We then explore schemes to determine the optimal redundant configurations of the parallel servers that minimize system energy consumption for a given reliability goal, or maximize system reliability for a given energy budget. Our analysis shows that small requests, optimistic approaches, and parallel recovery favor lower levels of modular redundancy, while large requests, pessimistic approaches, and restricted serial recovery favor higher levels of modular redundancy.
Corresponding author: Daniel Mossé
15.
Matching high performance approximate inverse preconditioning to architectural platforms (Cited by: 1; self-citations: 0, others: 1)
K. M. Giannoutakis, G. A. Gravvanis, B. Clayton, A. Patil, T. Enright, J. P. Morrison. The Journal of Supercomputing, 2007, 42(2): 145-163
In this paper we examine the performance of parallel approximate inverse preconditioning for solving finite element systems, using a variety of clusters running the Message Passing Interface (MPI) communication library, the Globus toolkit, and the Open MPI open-source software. The techniques outlined in this paper contain parameters that can be varied to tune execution to the underlying platform. These parameters include the number of CPUs, the order of the linear system (n), and the “retention parameter” (δl) of the approximate inverse used as a preconditioner. Numerical results are presented for solving finite element sparse linear systems on platforms with various CPU types and counts, different compilers, different file system types, different MPI implementations, and different memory sizes.
Corresponding author: J. P. Morrison
16.
In this paper we propose a method to measure the semantic similarity of geographic classes organized as partition hierarchies within Naive Geography. The contribution of this work consists in extending and integrating the information content approach with the method for comparing concept attributes used in SymOntos, an ontology management system developed at IASI. As a result, this proposal allows us to address both concept similarity within the partition hierarchy and attribute similarity of geographic classes and, therefore, to narrow the gap among the different similarity approaches defined in the literature.
Corresponding author: Elaheh Pourabbas
17.
Shiguo Lian. Multimedia Tools and Applications, 2009, 43(1): 91-107
Commutative Watermarking and Encryption (CWE) provides a solution for interoperation between watermarking and encryption. It realizes the challenging operation of embedding a watermark directly into encrypted multimedia data, which avoids the decryption–watermarking–re-encryption triple. Until now, few CWE schemes have been reported, and they often obtain the commutative property by partitioning the multimedia data into independent parts (i.e., an encryption part and a watermarking part). Since the two parts are isolated, such schemes cannot remain secure against replacement attacks. To avoid this disadvantage, a novel quasi-commutative watermarking and encryption (QCWE) scheme based on quasi-commutative operations is proposed in this paper. In the proposed scheme, the encryption and watermarking operations are applied to the same data part; since the two operations are homogeneous, with commutative properties, their order can be exchanged. As an example, a scheme for MPEG2 video encryption and watermarking is presented, in which the DC coefficients in intra macroblocks are encrypted or watermarked by random modular addition, while the DC coefficients in other macroblocks and the signs of all AC coefficients are encrypted with a stream or block cipher. Analysis and experiments show that the scheme achieves high perceptual security and time efficiency, and that the watermarking and encryption operations can indeed be commuted. These properties make the scheme a suitable choice for efficient media content distribution. Additionally, the paper demonstrates the feasibility of constructing commutative watermarking and encryption schemes from homogeneous operations, which is expected to stimulate new research on this topic.
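The commutativity the abstract relies on is easy to see for random modular addition. The byte-level toy below is an illustration, not the paper's MPEG2 pipeline: encrypting with key k and watermarking with payload w are both additions mod 256, so either order yields (x + k + w) mod 256.

```python
# Toy illustration of quasi-commutativity via modular addition (not the
# paper's MPEG2 scheme): encryption adds a keystream byte, watermarking
# adds a payload byte, both mod 256, so the two operations commute.

def add_mod256(data, other):
    return bytes((a + b) % 256 for a, b in zip(data, other))

def sub_mod256(data, other):
    return bytes((a - b) % 256 for a, b in zip(data, other))

host = bytes([10, 200, 55, 123])      # stand-in for DC coefficients
key = bytes([17, 91, 240, 3])         # encryption keystream
mark = bytes([1, 0, 1, 1])            # watermark payload

enc_then_wm = add_mod256(add_mod256(host, key), mark)
wm_then_enc = add_mod256(add_mod256(host, mark), key)
assert enc_then_wm == wm_then_enc     # order does not matter

# Decryption can likewise happen before or after watermark extraction.
assert sub_mod256(enc_then_wm, key) == add_mod256(host, mark)
```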
Corresponding author: Shiguo Lian
18.
Agustin Ramirez-Agundis, Rafael Gadea-Girones, Ricardo Colom-Palero, Javier Diaz-Carmona. Journal of Real-Time Image Processing, 2007, 2(4): 271-280
This paper presents a scheme, and its Field Programmable Gate Array (FPGA) implementation, for an image compression system that combines the two-dimensional discrete wavelet transform (2D-DWT) with vector quantization (VQ). The 2D-DWT works in a non-separable fashion, using a parallel filter structure with distributed control to compute two resolution levels. The wavelet coefficients of the higher-frequency sub-bands are vector quantized using a multi-resolution codebook, while those of the lower-frequency sub-band at level two are scalar quantized and entropy encoded. VQ is carried out by self-organizing feature map (SOFM) neural nets operating in the recall phase; the codebooks are quickly generated off-line using the same nets in the training phase. The complete system, including the 2D-DWT, the multi-resolution codebook VQ, and the statistical encoder, was implemented on a Xilinx Virtex 4 FPGA and is capable of real-time compression of grayscale 512 × 512 pixel digital video. It offers high compression quality (PSNR values around 35 dB) and acceptable compression rates (0.62 bpp).
Corresponding author: Javier Diaz-Carmona
19.
In conventional wavelet coding schemes based on motion-compensated temporal filtering, where the group-of-picture structure and the low-pass frame position are fixed, variations in the motion activity of video sequences are not considered. In this paper, we propose an adaptive group-of-picture structure selection scheme, in which the group-of-picture size and the low-pass frame position are selected based on mutual information, and the temporal decomposition process is determined adaptively according to the selected group-of-picture structure. Extensive experiments compare the compression performance of the proposed method with the conventional motion-compensated temporal filtering encoding scheme and with the adaptive group-of-picture structure in the standard scalable video coding model. The proposed low-pass frame selection improves compression quality by about 0.3–0.5 dB over the conventional scheme on video sequences with high motion activity. In scenes with uneven motion activity, e.g., frequent shot cuts, the proposed adaptive group-of-picture sizing achieves better compression than the conventional scheme. Compared to the adaptive group-of-picture approach in the standard scalable video coding model, the proposed group-of-picture structure scheme yields improvements of about 0.2–0.8 dB on sequences with high motion activity or shot cuts.
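The abstract's selection criterion is mutual information between frames. The sketch below computes the standard MI estimate from a joint intensity histogram of two grayscale frames; this is a textbook estimator, and how the paper maps MI values to group-of-picture boundaries is not described in the abstract.

```python
# Textbook mutual-information estimate between two grayscale frames via
# a joint intensity histogram. The abstract says GOP size and low-pass
# frame position are chosen from such MI values; the thresholding policy
# itself is not given there.
import numpy as np

def frame_mutual_information(f1, f2, bins=32):
    """f1, f2: 2D uint8 arrays of equal shape. Returns MI in bits."""
    joint, _, _ = np.histogram2d(f1.ravel(), f2.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                         # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64), dtype=np.uint8)
b = np.clip(a.astype(int) + rng.integers(-8, 9, a.shape), 0, 255).astype(np.uint8)
print(frame_mutual_information(a, a))   # high: identical frames
print(frame_mutual_information(a, b))   # lower: noisy copy
```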
Corresponding author: Zhao-Guang Liu
20.
Mining top-K frequent itemsets from data streams (Cited by: 1; self-citations: 0, others: 1)
Frequent pattern mining on data streams has attracted much interest recently. However, it is not easy for users to determine a proper frequency threshold; it is more reasonable to ask users to set a bound on the result size. We therefore study the problem of mining the top-K frequent itemsets in data streams. We introduce a method based on the Chernoff bound that guarantees the output quality and also bounds the memory usage, and we propose a second algorithm based on the Lossy Counting algorithm. In most of our experiments with the two proposed algorithms, we obtain perfect solutions, and the memory space occupied by the algorithms is very small. In addition, we adapt both algorithms to handle the case where mining is restricted to a sliding window. The experiments show that the results are accurate.
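For context, the Lossy Counting algorithm that the second method builds on maintains approximate counts whose undercount is at most εN over a stream of N items. The sketch below is the classic single-item version; the paper's itemset and top-K extensions are not reproduced here.

```python
# Classic Lossy Counting (Manku & Motwani) for single items: after N
# stream items, every kept count undercounts its true frequency by at
# most eps*N. The paper extends this idea to itemsets and top-K; only
# the base algorithm is sketched here.
import math

class LossyCounter:
    def __init__(self, eps=0.01):
        self.eps = eps
        self.width = math.ceil(1 / eps)           # bucket width
        self.n = 0
        self.counts = {}                          # item -> [count, delta]

    def add(self, item):
        self.n += 1
        bucket = math.ceil(self.n / self.width)   # current bucket id
        if item in self.counts:
            self.counts[item][0] += 1
        else:
            self.counts[item] = [1, bucket - 1]   # delta = max undercount
        if self.n % self.width == 0:              # prune at bucket boundary
            self.counts = {i: cd for i, cd in self.counts.items()
                           if cd[0] + cd[1] > bucket}

    def frequent(self, support):
        """Items whose true frequency may be at least `support` (fraction)."""
        return {i: cd[0] for i, cd in self.counts.items()
                if cd[0] >= (support - self.eps) * self.n}

lc = LossyCounter(eps=0.1)
for x in "abacabadabra" * 50:
    lc.add(x)
# 'a' (true freq 0.5) is guaranteed to appear; 'b' (0.25) may appear,
# since it lies in the (support - eps, support) gray zone.
print(lc.frequent(0.3))
```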
Corresponding author: Ada Wai-Chee Fu