Similar Documents
Found 20 similar documents (search time: 0 ms).
1.
ABSTRACT

The recent trends in technology have made it possible to reproduce and share digital media more easily and rapidly. This leads to the problem of exploiting the media illegitimately. To alleviate this problem, many cryptographic techniques are used to secure data by encrypting them. However, the cloaked form of encrypted data attracts intruders and invites malicious attacks. For this reason, steganography has recently drawn more attention as a means of securing data. This article presents a new technique that embeds data in the intermediate significant bit (ISB) and least significant bit (LSB) planes of the cover image. The method employs chaotic maps to generate random locations for hiding the data bits, as well as a permutation order for encrypting the cover image. The cover image is first encrypted by applying the permutation order, and embedding is then carried out at the generated random locations. After embedding, the decrypted cover image is transmitted. This method provides two-level security for extracting the hidden data. Experimental outcomes (PSNR, MSE, NAE, and NCC) confirm that the method is proficient. The randomness of the values generated by the chaotic maps is assessed with the NIST standard test suite.
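The chaotic-map-driven LSB embedding described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the choice of the logistic map, the key parameters `x0` and `r`, and all function names are assumptions.

```python
def logistic_sequence(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x) to produce n chaotic values."""
    values, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        values.append(x)
    return values

def chaotic_permutation(n, x0=0.61, r=3.99):
    """Derive a key-dependent visiting order of n pixels by sorting chaotic values."""
    seq = logistic_sequence(x0, r, n)
    return sorted(range(n), key=lambda i: seq[i])

def embed_lsb(pixels, bits, x0=0.61, r=3.99):
    """Hide secret bits in the LSBs of chaotically chosen pixel positions."""
    stego = list(pixels)
    for bit, idx in zip(bits, chaotic_permutation(len(stego), x0, r)):
        stego[idx] = (stego[idx] & ~1) | bit
    return stego

def extract_lsb(pixels, nbits, x0=0.61, r=3.99):
    """Recover the bits; the receiver needs the same (x0, r) key."""
    order = chaotic_permutation(len(pixels), x0, r)
    return [pixels[idx] & 1 for idx in order[:nbits]]
```

Sorting the chaotic sequence yields a key-dependent permutation, so an attacker without `(x0, r)` cannot locate the embedded bits; the paper's permutation-based cover encryption would be applied before this step.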

2.
Contemporary multimedia and communication technology has made it possible to replicate and distribute digital media more easily and quickly. This ease of availability exposes transmitted digital data on the network to the risk of being copied or intercepted illegally. Many cryptographic techniques are in vogue to encrypt data before transmission and avert security problems. However, the disguised appearance of encrypted data makes an adversary suspicious and increases the chances of malicious attack. In such a scenario, data hiding has received significant attention as an alternative way to ensure data security. This paper presents a data hiding technique based on the concepts of scrambling and pseudorandom data hiding, providing two-layer security for the embedded data and good perceptual transparency of the stego images. The proposed system uses the novel concept of embedding the secret data in scrambled (encrypted) cover images. Data embedding is carried out in the intermediate significant bit and least significant bit planes of the encrypted image at predetermined locations pointed to by a Pseudorandom Address Space (PAS) and an Address Space Direction Pointer (ASDP). Experimental results prove the efficacy of the scheme vis-à-vis various parameters of interest.

3.
Multimedia Tools and Applications - This paper proposes a dual encoding approach with sequence folding for reversible data hiding in dual stego images. This method initially encodes the secret data...

4.
Li Qi, Yan Bin, Li Hui, Chen Na. Multimedia Tools and Applications, 2018, 77(23): 30749-30768
Multimedia Tools and Applications - Reversible data hiding (RDH) has to be conducted in the encrypted images when original images are encrypted for privacy protection in some open environments,...

5.
Password-based remote user authentication schemes using smart cards are designed to ensure that only a user who possesses both the smart card and the corresponding password can gain access to remote servers. Despite many research efforts, designing a secure password-based authentication scheme with user anonymity remains a challenging task. The author uses Kumari et al.'s scheme, which relies on non-public-key primitives, as a case study. He first presents a cryptanalysis showing that their scheme is vulnerable to user impersonation attack and provides neither forward secrecy nor user anonymity. From the case study, he concludes that public-key techniques are indispensable for constructing a two-factor authentication scheme with security attributes such as user anonymity, unlinkability, and forward secrecy under the non-tamper-resistance assumption of the smart card. The author then proposes a password-based authentication scheme using elliptic curve cryptography. Through informal and formal security analysis, he shows that the proposed scheme is secure against various known attacks, including those found in Kumari et al.'s scheme. Furthermore, he verifies the correctness of mutual authentication using BAN logic.

6.
Data hiding in images has evolved into one of the trusted methods of secure data communication, and numerous approaches using grayscale images as the cover media have been introduced over the years. Most of these methods hide data in the least significant bit planes of cover images, and many depend purely on data substitution algorithms that embed data in a fixed pattern. If the algorithm is known, the secret data can be recovered in a few attempts. With this in view, several key-based approaches have also been proposed. This paper proposes an efficient data embedding scheme using a key and an embedding pattern generated through the midpoint circle generation algorithm. The pattern can be applied to a carrier that is mapped onto a grid/image. The cryptosystem uses the concept of steganography and is computationally light and secure. The secret key is generated in such a way that the avalanche effect is ensured except in very rare cases. The proposed data embedding method is shown to be robust and highly secure while maintaining good hiding capacity and imperceptibility. It is applicable to data hiding in a generic grid of pixels or bits.
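A minimal sketch of the midpoint circle generation algorithm that could produce such an embedding pattern on a pixel grid; the key derivation and the embedding itself are the paper's contribution and are not reproduced here.

```python
def midpoint_circle(cx, cy, radius):
    """Return the grid points of a circle of the given radius centered at
    (cx, cy), computed with the integer-only midpoint circle algorithm."""
    points = set()
    x, y = radius, 0
    d = 1 - radius  # decision variable: is the midpoint inside the circle?
    while x >= y:
        # Plot one octant and mirror it into the other seven.
        for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)]:
            points.add((cx + px, cy + py))
        y += 1
        if d < 0:
            d += 2 * y + 1          # midpoint inside: keep x
        else:
            x -= 1
            d += 2 * (y - x) + 1    # midpoint outside: step x inward
    return sorted(points)
```

Each returned point could serve as one embedding position; varying the center and radius under control of the secret key would yield a different pattern per key.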

7.
Unlike reversible image data hiding, most reversible video data hiding schemes suffer from the particular problem that the distortion due to hidden data spreads and accumulates. In this paper, the distortion drift caused by reversible data hiding in compressed video is analyzed, and a lossless drift compensation scheme is proposed to restrain the distortion for the first time. To ensure reversibility, drift compensation signals are merged into the quantized DCT (Discrete Cosine Transform) coefficients of P-frames, and a corresponding recovery mechanism is presented as well. Experimental results demonstrate that the proposed lossless drift compensation scheme significantly improves video quality, and that the original compressed video can be recovered exactly after the hidden data and compensation signals are removed. In addition, the proposed scheme does not depend on any specific reversible data hiding method.

8.
Multimedia Tools and Applications - Big Data (BD) is a new technology that is rapidly growing in the telecommunications sector, especially in the contemporary field of wireless telecommunications....

9.
Multimedia Tools and Applications - In recent years, the dual stego-image reversible data embedding methods have been developed rapidly, e.g., exploiting modification direction, magic matrix, least...

10.
Multimedia Tools and Applications - It is well known that the world is becoming more interconnected with the help of the World Wide Web in a distributed environment. Persons and organizations want to...

11.
Information Systems, 2004, 29(5): 405-420
This paper discusses the effective processing of similarity search that supports time warping in large sequence databases. Time warping enables sequences with similar patterns to be found even when they are of different lengths. Prior methods for processing such similarity searches failed to employ multi-dimensional indexes without false dismissal, since the time warping distance does not satisfy the triangle inequality. They have to scan the entire database, thus suffering serious performance degradation on large databases. Another method, which uses the suffix tree and does not assume any distance function, also shows poor performance due to the large tree size. In this paper, we propose a novel method for similarity search that supports time warping. Our primary goal is to enhance search performance in large databases without permitting any false dismissal. To attain this goal, we have devised a new distance function, Dtwlb, which consistently underestimates the time warping distance and satisfies the triangle inequality. Dtwlb uses a 4-tuple feature vector that is extracted from each sequence and is invariant to time warping. For efficient processing of similarity search, we employ a multi-dimensional index that uses the 4-tuple feature vector as indexing attributes and Dtwlb as the distance function. We prove that our method does not incur false dismissal. To verify the superiority of our method, we have performed extensive experiments. The results reveal that our method achieves a significant speed-up, up to 43 times faster on a data set containing real-world S&P 500 stock data sequences, and up to 720 times on data sets containing a very large volume of synthetic data sequences. The performance gain increases (1) as the number of data sequences increases, (2) as the average length of the data sequences increases, and (3) as the tolerance in a query decreases. Considering the characteristics of real databases, these tendencies imply that our approach is suitable for practical applications.
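The key idea, a warping-invariant feature vector whose distance never exceeds the true time warping distance, can be sketched as follows. The exact definition of Dtwlb is in the paper; this sketch uses the common (first, last, max, min) 4-tuple as an assumed stand-in.

```python
def dtw(a, b):
    """Dynamic time warping distance with |x - y| as the local cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # repeat b[j-1]
                                 d[i][j - 1],      # repeat a[i-1]
                                 d[i - 1][j - 1])  # advance both
    return d[n][m]

def features(s):
    """4-tuple (first, last, max, min): invariant to time warping."""
    return (s[0], s[-1], max(s), min(s))

def lb_distance(a, b):
    """Lower bound: largest coordinate-wise gap between the feature tuples."""
    return max(abs(x - y) for x, y in zip(features(a), features(b)))
```

The bound is safe because a warping path always aligns first with first and last with last, and must align each sequence's extremum with some element of the other sequence; each coordinate gap therefore lower-bounds the total cost, so index pruning with it causes no false dismissal.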

12.
The rapidly increasing scale of data warehouses is challenging today's data analytical technologies. A conventional data analytical platform processes data warehouse queries using a star schema: it normalizes the data into a fact table and a number of dimension tables, and during query processing it selectively joins the tables according to users' demands. This model is space-economical. However, it faces two problems when applied to big data. First, join is an expensive operation, which prevents a parallel database or a MapReduce-based system from achieving efficiency and scalability simultaneously. Second, join operations have to be executed repeatedly, while numerous join results could actually be reused by different queries. In this paper, we propose a new query processing framework for data warehouses. It pushes the join operations partially to the pre-processing phase and partially to the post-processing phase, so that data warehouse queries can be transformed into massively parallelized filter-aggregation operations on the fact table. In contrast to conventional query processing models, our approach is efficient, scalable, and stable despite the large number of tables involved in the join. It is especially suitable for a large-scale parallel data warehouse. Our empirical evaluation on Hadoop shows that our framework exhibits linear scalability and outperforms some existing approaches by an order of magnitude.
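The transformation can be illustrated on a toy star schema. Everything below (tables, column names, predicates) is invented for illustration; the actual framework operates on Hadoop at a very different scale.

```python
# Hypothetical tiny star schema: fact rows reference dimension keys.
dim_store = {1: {"region": "east"}, 2: {"region": "west"}}
dim_item = {10: {"category": "toys"}, 11: {"category": "books"}}
fact = [
    {"store": 1, "item": 10, "sales": 5},
    {"store": 1, "item": 11, "sales": 3},
    {"store": 2, "item": 10, "sales": 7},
]

def keys_where(dim, attr, value):
    """Pre-processing step: resolve a dimension predicate to a key set once,
    instead of joining the dimension table into every query."""
    return {k for k, row in dim.items() if row[attr] == value}

def filter_aggregate(region, category):
    """The query becomes a pure filter-aggregation scan of the fact table,
    which parallelizes trivially across fact-table partitions."""
    stores = keys_where(dim_store, "region", region)
    items = keys_where(dim_item, "category", category)
    return sum(r["sales"] for r in fact
               if r["store"] in stores and r["item"] in items)
```

Because the scan touches only the fact table, the key sets can be broadcast to workers and the per-partition sums combined afterward, which is the sense in which joins move to the pre- and post-processing phases.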

13.
Research on an Efficient and Secure Data Persistence Layer in Web Development (cited 3 times: 0 self-citations, 3 by others)
This paper explains the importance of enterprise data, analyzes the advantages and disadvantages of currently popular persistence-layer solutions, and presents an XML-based design pattern for building a high-performance, highly secure, resilient, and extensible data persistence layer. A complete information-query business workflow is used to illustrate the main design ideas of the persistence framework.

14.

In this paper, a joint scheme and a separable scheme for reversible data hiding (RDH) in compressed and encrypted images, reserving room through a Kd-tree, are proposed. First, the plain cover image is losslessly compressed and encrypted with a lifting-based integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) encoding. Then, several shift operations are performed on the generated SPIHT bit-stream. The shifted bit-stream is restructured into small chunks and packed into a large square matrix. The binary square matrix is processed with a Kd-tree under random permutations, reserving uniform areas of ones and zeros for secret data hiding. After that, either a joint or a separable RDH scheme can operate in these reserved spaces. In the joint RDH scheme, the secret data are embedded in the reserved spaces before encryption with multiple chaotic maps; thus secret data extraction and cover image recovery are achieved together. In the separable RDH scheme, the secret data are embedded in the reserved spaces after encryption with multiple chaotic maps. Since message extraction and cover image recovery are performed separately, anyone who has the embedding key can extract the secret message from the marked encrypted copy but cannot recover the cover image. A complete encoding and decoding procedure of RDH for compressed and encrypted images is elaborated. The imperceptibility analysis shows that the proposed methods introduce no distortion, because the original cover image is not changed. The experimental results show that the proposed schemes perform well for secret data extraction and can restore the original image with 100% reversibility and much higher embedding capacity and security. The proposed schemes significantly outperform state-of-the-art RDH methods for compressed and encrypted images in the literature.


15.
Context: Software clustering is a key technique used in reverse engineering to recover a high-level abstraction of software when resources are limited. Very little research has explicitly discussed the problem of finding the optimum set of clusters in the design and how to penalize the formation of singleton clusters during clustering.
Objective: This paper attempts to enhance existing agglomerative clustering algorithms by introducing a complementary mechanism. To solve the architecture recovery problem, the proposed approach focuses on minimizing redundant effort and penalizing the formation of singleton clusters during clustering while maintaining the integrity of the results.
Method: An automated solution for cutting a dendrogram based on least-squares regression is presented in order to find the best cut level. A dendrogram is a tree diagram that shows the taxonomic relationships of clusters of software entities. Moreover, a factor to penalize clusters that would form singletons is introduced. Simulations were performed on two open-source projects. The proposed approach was compared against the exhaustive and highest-gap dendrogram cutting methods, as well as two well-known cluster validity indices, namely Dunn's index and the Davies-Bouldin index.
Results: When comparing our clustering results against the original package diagram, our approach achieved an average accuracy of 90.07% over two simulations after the utility classes were removed. Utility classes in the source code affect the accuracy of software clustering owing to their omnipresent behavior. The proposed approach also successfully penalized the formation of singleton clusters during clustering.
Conclusion: The evaluation indicates that the proposed approach can enhance the quality of the clustering results by guiding software maintainers through the cut-point selection process. The proposed approach can be used as a complementary mechanism to improve the effectiveness of existing clustering algorithms.

16.
Storage and retrieval of data in Wireless Sensor Networks (WSNs) have been found to be challenging issues in recent studies. The two principal approaches used in almost all proposed schemes in this field are centralized and decentralized. Many investigations have considered the security and communication issues in centralized schemes, where a central node exists as the sink and all collected data are sent to it to be processed, or the central node provides access to possible users. In decentralized systems, all nodes share equally the duties of the central node of centralized approaches. In cases where sensors cannot transmit data immediately, which is usually the case in distributed approaches, in-network storage and retrieval is an alternative with its own security and communication issues stemming from the properties of energy-sensitive sensors. In this paper, a semi-centralized scheme is introduced that is based on a number of traditional techniques, namely clustering, symmetric and asymmetric key management, and threshold secret sharing. The proposed scheme provides energy-efficient and secure in-network storage and retrieval that can be applied to WSNs. A predictive method is proposed to adaptively determine the proper parameters for the threshold secret sharing technique. Confidentiality, dependability, and integrity of the sensed data are enhanced in a distributed manner with fairly low communication and computation costs. Simulations were used to illustrate the effect of several network parameters on energy consumption and to derive optimization recommendations for the parameters of the proposed secret sharing scheme.
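The threshold secret sharing building block mentioned above can be sketched with Shamir's scheme: any k of n shares reconstruct the secret, while fewer reveal nothing. The paper's adaptive parameter selection is not reproduced here, and the field prime and RNG seed below are arbitrary illustrative choices.

```python
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus (illustrative choice)

def make_shares(secret, k, n, rng=random.Random(7)):
    """Split `secret` into n shares so that any k of them reconstruct it:
    evaluate a random degree-(k-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

In the in-network storage setting, each of n storage nodes would hold one share, so the sensed value survives the loss or compromise of up to n - k nodes.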

17.
18.
19.
Video surveillance applications need the video data center to provide elastic virtual machine (VM) provisioning. However, the workloads of the VMs are hard to predict for an online video surveillance service, and the unknown arrival workloads easily lead to workload skew among VMs. In this paper, we study how to balance workload skew in an online video surveillance system. First, we design the system framework for the online surveillance service, which consists of video capturing and analysis tasks. Second, we propose StreamTune, an online resource scheduling approach for workload balancing, to deal with irregular video analysis workloads using the minimum number of VMs. We aim to balance the workload skew on video analyzers in a timely manner without depending on any workload prediction method. Furthermore, we evaluate the performance of the proposed approach using a traffic surveillance application. The experimental results show that our approach adapts well to workload variation and achieves workload balance with fewer VMs.

20.
Geetha R, Geetha S. Multimedia Tools and Applications, 2020, 79(19-20): 12869-12890
Multimedia Tools and Applications - Reversible Data Hiding (RDH) is one of the popular and highly recommended methods to enhance medical data security or Electronic Patient Record (EPR) privacy. In...
