Visual Cryptography (VC) has gained traction in recent years as a means of securing visual information in transmission networks. It enables visual data (handwritten notes, photos, printed text, etc.) to be encrypted in such a way that decryption can be performed by the human visual system. Hence, no computational assistance is required to decrypt the secret images; they can be read with the naked eye. In this paper, a novel enhanced halftoning-based VC scheme is proposed that works for both binary and color images. A fake share is generated from a combination of random black and white pixels. The proposed algorithm consists of three stages: detection, encryption, and decryption. Halftoning, encryption, (2, 2) visual cryptography, and the novel idea of a fake share make the scheme more secure and robust. As a result, the authentic user recovers the restored original image, whereas a user who enters the wrong password obtains the combination of the fake share with a real share. Both color and binary images can be processed with minimal capacity requirements using the proposed scheme.
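The classic (2, 2) scheme the abstract builds on can be sketched briefly (helper names are hypothetical; the paper's halftoning, fake-share, and password stages are omitted). Each secret pixel expands to a 2x2 subpixel block: a white pixel gets identical blocks in both shares, a black pixel gets complementary blocks, so overlaying the shares (subpixel-wise OR) reveals the secret:

```python
import random

# Two complementary 2x2 subpixel patterns, flattened to length-4 lists.
BLOCKS = [[1, 0, 0, 1], [0, 1, 1, 0]]

def make_shares(secret):
    """secret: 2-D list of 0 (white) / 1 (black). Returns two shares."""
    share1, share2 = [], []
    for row in secret:
        r1, r2 = [], []
        for pixel in row:
            pattern = random.choice(BLOCKS)
            r1.append(pattern)
            if pixel == 0:                        # white: same block in both shares
                r2.append(pattern)
            else:                                 # black: complementary block
                r2.append([1 - b for b in pattern])
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(share1, share2):
    """Simulate overlaying transparencies: subpixel-wise OR."""
    return [[[a | b for a, b in zip(p1, p2)]
             for p1, p2 in zip(r1, r2)]
            for r1, r2 in zip(share1, share2)]

secret = [[0, 1], [1, 0]]
s1, s2 = make_shares(secret)
stacked = stack(s1, s2)
# Black secret pixels stack to an all-black 2x2 block; white pixels stay
# half-black, which is the contrast loss inherent to visual cryptography.
```

Either share alone is a uniformly random pattern, which is why a randomly generated fake share is indistinguishable from a real one until stacking.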
This paper presents a novel denoising approach based on smoothing linear and nonlinear filters combined with an optimization algorithm. The cuckoo search algorithm is employed to determine the optimal sequence of filters for each type of noise. The noise types removed from images by the proposed approach include Gaussian, speckle, and salt-and-pepper noise. The denoising behaviour of nonlinear filters and wavelet shrinkage thresholding methods has also been analysed and compared with the proposed approach. Results show the robustness of the proposed filter when compared with state-of-the-art methods in terms of peak signal-to-noise ratio and image quality index. Furthermore, a comparative analysis is provided between the cuckoo search algorithm and the genetic algorithm.
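The fitness evaluation at the heart of such an approach can be sketched as follows: a candidate solution is an ordered sequence of smoothing filters, scored by the PSNR of the denoised result (function and filter names here are hypothetical; the cuckoo search loop that evolves the sequences is omitted, and 1-D signals keep the sketch short):

```python
import math

def median_filter(x, k=3):
    """Nonlinear smoothing filter: sliding-window median with edge padding."""
    pad = k // 2
    padded = [x[0]] * pad + list(x) + [x[-1]] * pad
    return [sorted(padded[i:i + k])[pad] for i in range(len(x))]

def mean_filter(x, k=3):
    """Linear smoothing filter: sliding-window average with edge padding."""
    pad = k // 2
    padded = [x[0]] * pad + list(x) + [x[-1]] * pad
    return [sum(padded[i:i + k]) / k for i in range(len(x))]

def psnr(clean, denoised, peak=255.0):
    mse = sum((c - d) ** 2 for c, d in zip(clean, denoised)) / len(clean)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def fitness(sequence, clean, noisy):
    """Apply the filter sequence in order and score the result by PSNR."""
    out = noisy
    for f in sequence:
        out = f(out)
    return psnr(clean, out)

clean = [100.0] * 16
noisy = clean[:]
noisy[5] = 255.0          # a salt-and-pepper-style impulse

median_score = fitness([median_filter], clean, noisy)
mean_score = fitness([mean_filter], clean, noisy)
```

For this impulse noise the median filter scores higher than the mean filter, which is exactly the kind of noise-dependent ordering the optimizer is meant to discover.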
Big data technologies and a range of government open data initiatives provide the basis for discovering new insights into cities: how they are planned, how they are managed, and the day-to-day challenges they face in health, transport, and changing population profiles. The Australian Urban Research Infrastructure Network (AURIN – www.aurin.org.au) project is one example of such a big data initiative currently running across Australia. AURIN provides a single gateway offering online (live) programmatic access to over 2000 data sets from over 70 major and typically definitive data-driven organizations across federal and state government, industry, and academia. However, whilst open (public) data is useful for bringing data-driven intelligence to cities, more often than not it is the data that is not publicly accessible that is essential for understanding city challenges and needs. Such sensitive (unit-level) data carries unique requirements on access and usage to meet the privacy and confidentiality demands of the associated organizations. In this paper we highlight a novel geo-privacy-supporting solution, implemented as part of the AURIN project, that provides seamless and secure access to individual (unit-level) data from the Department of Health in Victoria. We illustrate this solution across a range of typical city challenges in localized contexts around Melbourne, and show how unit-level data can be combined with other data in a privacy-protecting manner. Unlike other secure data access and usage solutions that have been developed and deployed, the AURIN solution allows any researcher to access and use the data in a manner that meets all of the associated privacy and confidentiality concerns, without obliging them to obtain ethical approval or clear the other hurdles normally put in place on access to and use of sensitive data. This represents a paradigm shift in secure access to sensitive data with geospatial content.
Service Oriented Architecture, with underlying technologies like web services and web service orchestration, opens new vistas for integration among business processes operating in heterogeneous environments. However, such dynamic collaborations require a highly secure environment at each respective business partner site. Existing web services standards address the issue of security only on the service provider platform; the partner platforms to which sensitive information is released have until now been neglected. Remote attestation is a relatively new field of research that enables an authorized party to verify that a trusted environment actually exists on a partner platform. To incorporate this novel concept into the web services realm, a new mechanism called WS-Attestation has been proposed. This mechanism provides a structural paradigm upon which more fine-grained solutions can be built. In this paper, we present a novel framework, Behavioral Attestation for Web Services, in which XACML is built on top of WS-Attestation to enable more flexible remote attestation at the web services level. We propose a new type of XACML policy, called XACML behavior policy, which defines the expected behavior of a partner platform. Existing web service standards are used to incorporate remote attestation at the web services level, and a prototype is presented that implements XACML behavior policy using low-level attestation techniques.
The quality of user-generated content on World Wide Web media is a matter of serious concern for both creators and users. To measure the quality of content, webometric techniques are commonly used. More recently, bibliometric techniques, originally developed for scholarly data, have been applied to good effect to evaluate the quality of user-generated content. However, the application of bibliometric techniques to YouTube content has so far been limited to the h-index and g-index computed over views alone. This paper advocates for and demonstrates the adaptation of existing bibliometric indices, including the h-index, g-index, and m-index, to exploit both views and comments, and proposes three indices, hvc, gvc, and mvc, for YouTube video channel ranking. Empirical results show that the proposed indices, which use views together with comments, outperform existing approaches on a real-world YouTube dataset.
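The adaptation can be sketched as follows. An h-style index is computed over a per-video score; moving from views alone to views combined with comments only changes the score fed in (the equally weighted sum below is an illustrative assumption, not the paper's exact combination, and the channel data is hypothetical):

```python
def h_style_index(scores):
    """Largest h such that at least h items each have a score >= h."""
    scores = sorted(scores, reverse=True)
    h = 0
    for i, s in enumerate(scores, start=1):
        if s >= i:
            h = i
        else:
            break
    return h

# Per-video (views, comments) pairs for a hypothetical channel,
# rescaled to thousands so the index stays in a readable range.
videos = [(9.0, 4.0), (7.0, 1.0), (5.0, 0.5), (3.5, 1.0), (1.0, 0.1)]

h_views = h_style_index([v for v, _ in videos])       # views only
h_vc = h_style_index([v + c for v, c in videos])      # views + comments
```

Here the comment counts lift a fourth video over the threshold, so the combined index separates channels that views alone would tie.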
We present a scheme for the implementation of the three-qubit Grover algorithm using four-level superconducting quantum interference devices (SQUIDs) coupled to a superconducting resonator. The scheme is based on resonant and off-resonant interaction of the cavity field with the SQUIDs and the application of classical microwave pulses. We show that adjustment of SQUID level spacings during the gate operations, adiabatic passage, and second-order detuning are not required, which leads to faster implementation. We also show that the marked state can be searched for with high fidelity even in the presence of unwanted off-resonant interactions, level decay, and cavity dissipation.
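The paper targets a hardware implementation; as a sanity check on the algorithm itself, a classical state-vector simulation of three-qubit Grover search is small enough to write out in full (no quantum libraries, just the oracle phase flip and the inversion-about-the-mean diffusion step):

```python
import math

N = 8                     # 2**3 basis states for three qubits
marked = 5                # index of the state being searched for

# Uniform superposition produced by the initial Hadamards.
amps = [1 / math.sqrt(N)] * N

# Optimal iteration count is floor(pi/4 * sqrt(N)) = 2 for N = 8.
for _ in range(2):
    amps[marked] = -amps[marked]                  # oracle: phase flip
    mean = sum(amps) / N                          # diffusion: inversion
    amps = [2 * mean - a for a in amps]           # about the mean

prob_marked = amps[marked] ** 2                   # 121/128, about 0.945
```

After two iterations the marked state carries probability 121/128 ≈ 0.945, which is the ideal, noise-free ceiling against which the fidelity claims under level decay and cavity dissipation are measured.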
Object identification is a specialized type of recognition in which the category (e.g. cars) is known and the goal is to recognize an object's exact identity (e.g. Bob's BMW). Two special challenges characterize object identification. First, inter-object variation is often small (many cars look alike) and may be dwarfed by illumination or pose changes. Second, there may be many different instances of the category but few, or just one, positive "training" examples per object instance. Because variation among object instances may be small, a solution must locate possibly subtle object-specific salient features, like a door handle, while avoiding distracting ones such as specular highlights. With just one training example per object instance, however, standard modeling and feature selection techniques cannot be used. We describe an on-line algorithm that takes one image from a known category and builds an efficient "same" versus "different" classification cascade by predicting the most discriminative features for that object instance. Our method not only estimates the saliency and scoring function for each candidate feature, but also models the dependency between features, building an ordered sequence of discriminative features specific to the given image. Learned stopping thresholds make the identifier very efficient. To make this possible, category-specific characteristics are learned automatically in an off-line training procedure from labeled image pairs of the category. Our method, using the same algorithm for both cars and faces, outperforms a wide variety of other methods.
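The cascade-with-stopping-thresholds idea above can be sketched as follows: per-feature match scores accumulate in a fixed discriminative order, and learned thresholds let the identifier commit to "same" or "different" early (the scorer values and thresholds here are hypothetical stand-ins for the learned quantities the abstract describes):

```python
def cascade_identify(scores, accept_thresholds, reject_thresholds):
    """Accumulate per-feature match scores in order; stop as soon as the
    running total clears an accept threshold or falls below a reject one."""
    total = 0.0
    for score, acc, rej in zip(scores, accept_thresholds, reject_thresholds):
        total += score
        if total >= acc:
            return "same"
        if total <= rej:
            return "different"
    # No early exit: fall back to the sign of the accumulated evidence.
    return "same" if total >= 0 else "different"

# Thresholds loosen with depth: early exits demand strong evidence.
accept = [2.0, 3.0, 3.5]
reject = [-2.0, -3.0, -3.5]

strong_match = cascade_identify([2.5], accept, reject)            # early accept
mismatch = cascade_identify([-2.5], accept, reject)               # early reject
ambiguous = cascade_identify([0.5, -0.5, -0.4], accept, reject)   # runs full cascade
```

The efficiency claim comes from the early exits: clear matches and clear mismatches are decided after one feature, and only ambiguous pairs pay for the full feature sequence.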
This paper proposes a spam detection technique at the packet level (layer 3), based on classification of e-mail contents. Our proposal targets spam control implementations on middleboxes. E-mails are first pre-classified (pre-detected) as spam on a per-packet basis, without the need for reassembly. This, in turn, allows fast e-mail class estimation (spam detection) at receiving e-mail servers to support more effective spam handling on both inbound and outbound (relayed) e-mails. In this paper, the naïve Bayes classification technique is adapted to support both pre-classification and fast e-mail class estimation on a per-packet basis. We focus on evaluating the accuracy of spam detection at layer 3, considering the constraints of processing byte-streams over the network, including packet re-ordering, fragmentation, overlapping bytes, and varying packet sizes. Results show that the proposed layer-3 classification technique yields a false-positive rate below 0.5%, approximately equal to the performance attained at layer 7. This shows that classifying e-mails at the packet level can differentiate non-spam from spam with high confidence, making spam control implementations on middleboxes viable.
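The per-packet scoring idea can be sketched as follows: naive Bayes accumulates log-odds over whatever tokens appear in one packet's payload, with no need for the full message (the token probabilities below are illustrative; a real deployment estimates them from training mail and must handle the TCP-stream quirks the abstract lists, which this sketch ignores):

```python
import math

# P(token | class) tables, with a small floor for unseen tokens.
p_spam = {"viagra": 0.8, "free": 0.6, "meeting": 0.05}
p_ham = {"viagra": 0.01, "free": 0.2, "meeting": 0.5}
FLOOR = 0.01

def log_odds(tokens, prior_spam=0.5):
    """Log posterior odds of spam vs. non-spam for one packet's tokens.
    Positive => the packet looks like spam; negative => legitimate."""
    score = math.log(prior_spam / (1 - prior_spam))
    for t in tokens:
        score += math.log(p_spam.get(t, FLOOR) / p_ham.get(t, FLOOR))
    return score

spam_packet = log_odds(["free", "viagra"])
ham_packet = log_odds(["meeting", "free"])
```

Because the score is a sum over tokens, partial scores from individual packets can be added up at the receiving server, which is what makes the fast per-packet pre-classification composable.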
A microarray machine offers the capacity to measure the expression levels of thousands of genes simultaneously. It is used to collect information from tissue and cell samples regarding gene expression differences that could be useful for cancer classification. However, the pressing problems in the use of gene expression data are the availability of a huge number of genes relative to the small number of available samples, and the fact that many of the genes are not relevant to the classification. It has been shown that selecting a small subset of genes can lead to improved classification accuracy. Hence, this paper proposes a solution to these problems by using a multiobjective strategy in a genetic algorithm. The approach was tried on two benchmark gene expression data sets, where it obtained encouraging results compared with an approach that used a single-objective strategy in a genetic algorithm.

This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.
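The multiobjective comparison at the core of such a genetic algorithm can be sketched as follows: candidate gene subsets are compared by Pareto dominance on (classification accuracy, number of selected genes) rather than collapsed into a single fitness value (the candidate tuples below are illustrative, not taken from the paper's data sets):

```python
def dominates(a, b):
    """a, b: (accuracy, n_genes) tuples. a dominates b if it is no worse
    on both objectives (higher accuracy, fewer genes) and strictly
    better on at least one."""
    acc_a, n_a = a
    acc_b, n_b = b
    return (acc_a >= acc_b and n_a <= n_b) and (acc_a > acc_b or n_a < n_b)

def pareto_front(candidates):
    """Keep every candidate not dominated by any other."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# (accuracy, number of selected genes) for four candidate subsets.
candidates = [(0.95, 40), (0.95, 25), (0.90, 10), (0.80, 50)]
front = pareto_front(candidates)
```

Here (0.95, 40) is dominated by the equally accurate but smaller subset (0.95, 25), and (0.80, 50) by everything better on both axes, so the front keeps the genuine accuracy-versus-size trade-offs that a single-objective score would have hidden.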