Visual cryptography (VC) has gained traction in recent years as a means of securing visual information in transmission networks. It encrypts visual data (handwritten notes, photos, printed text, etc.) in such a way that decryption can be performed by the human visual system alone: no computational assistance is required, and the secret image can be recovered with the naked eye. In this paper, a novel enhanced halftoning-based VC scheme is proposed that works for both binary and color images. A fake share is generated from a combination of random black and white pixels. The proposed algorithm consists of three stages: detection, encryption, and decryption. Halftoning, encryption, (2, 2) visual cryptography, and the novel idea of a fake share make the scheme more secure and robust. As a result, an authentic user recovers the original image, whereas anyone who enters a wrong password obtains the fake share combined with a real share. Both color and binary images can be processed with minimal capacity using the proposed scheme.
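The (2, 2) scheme mentioned above can be illustrated with a minimal sketch (the halftoning, detection, and fake-share stages of the proposed algorithm are not reproduced here). Each secret pixel expands into a 2x2 block of subpixels; white pixels get identical patterns in both shares, black pixels get complementary ones, so stacking the transparencies makes black pixels fully dark while each share alone is uniformly random:

```python
import random

# Subpixel patterns for a (2, 2) scheme: each secret pixel expands into a
# 2x2 block of share subpixels (1 = black, 0 = white).
PATTERNS = [(1, 0, 0, 1), (0, 1, 1, 0)]

def make_shares(secret, rng=random):
    """secret: 2-D list of pixels (0 = white, 1 = black)."""
    share1, share2 = [], []
    for row in secret:
        r1, r2 = [], []
        for pixel in row:
            p = rng.choice(PATTERNS)
            r1.append(p)
            if pixel == 0:                       # white: identical patterns
                r2.append(p)
            else:                                # black: complementary patterns
                r2.append(tuple(1 - b for b in p))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(share1, share2):
    """Simulate overlaying transparencies: a subpixel is black if it is
    black in either share (logical OR)."""
    return [[tuple(a | b for a, b in zip(p1, p2))
             for p1, p2 in zip(row1, row2)]
            for row1, row2 in zip(share1, share2)]
```

Stacked black pixels come out as 4/4 black subpixels and white pixels as 2/4, which is the contrast the human visual system exploits; every block in a single share has exactly two black subpixels, so neither share leaks the secret.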
Standard genetic algorithms (SGAs) are investigated for optimising discrete-time proportional-integral-derivative (PID) controller parameters, using three tuning approaches, for a multivariable glass furnace process with loop interaction. Initially, SGAs are used to identify control-oriented models of the plant, which are subsequently used for controller optimisation. An individual tuning approach without loop interaction is considered first, to characterise the genetic operators and cost functions and to refine the search boundaries needed to attain the desired performance criteria. The second tuning approach optimises the controller parameters with loop interaction and individual cost functions. The third tuning approach utilises a modified cost function that includes the combined effect of both controlled variables, glass temperature and excess oxygen. This modified cost function is shown to improve control robustness and disturbance rejection under loop interaction.
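The general idea of GA-based PID tuning can be sketched on a hypothetical single-loop toy problem (the furnace model, the multivariable interaction, and the paper's actual cost functions are not reproduced here): a population of `[Kp, Ki, Kd]` candidates evolves under a squared-error cost evaluated by simulating a step response.

```python
import random

def plant_step(y, u, a=0.9, b=0.1):
    # Hypothetical first-order discrete plant: y[k+1] = a*y[k] + b*u[k].
    return a * y + b * u

def ise(gains, setpoint=1.0, steps=100, dt=1.0):
    """Integral of squared error for a discrete PID on the toy plant."""
    kp, ki, kd = gains
    y = integ = e_prev = 0.0
    cost = 0.0
    for _ in range(steps):
        e = setpoint - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        y = plant_step(y, u)
        cost += e * e
    return cost

def ga_tune(pop_size=30, gens=40, bounds=(0.0, 5.0), seed=1):
    """Elitist GA over [Kp, Ki, Kd]: averaging crossover, Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=ise)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]   # crossover
            if rng.random() < 0.3:                          # mutation
                i = rng.randrange(3)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.5)))
            children.append(child)
        pop = elite + children
    return min(pop, key=ise)
```

The search boundaries (`bounds`) play the role the abstract describes: tightening them around feasible gains is what makes the GA converge to the desired performance criteria.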
Big data technologies and a range of government open data initiatives provide the basis for discovering new insights into cities: how they are planned, how they are managed, and the day-to-day challenges they face in health, transport, and changing population profiles. The Australian Urban Research Infrastructure Network (AURIN – www.aurin.org.au) project is one example of such a big data initiative currently running across Australia. AURIN provides a single gateway with online (live) programmatic access to over 2000 data sets from over 70 major, and typically definitive, data-driven organizations across federal and state government, industry, and academia. However, while open (public) data is useful for bringing data-driven intelligence to cities, more often than not it is the data that is not publicly accessible that is essential to understanding city challenges and needs. Such sensitive (unit-level) data carries unique requirements on access and usage to meet the privacy and confidentiality demands of the associated organizations. In this paper we highlight a novel geo-privacy-supporting solution, implemented as part of the AURIN project, that provides seamless and secure access to individual (unit-level) data from the Department of Health in Victoria. We illustrate this solution across a range of typical city challenges in localized contexts around Melbourne, and show how unit-level data can be combined with other data in a privacy-protecting manner. Unlike other secure data access and usage solutions that have been developed and deployed, the AURIN solution allows any researcher to access and use the data in a manner that meets all of the associated privacy and confidentiality concerns, without obliging them to obtain ethical approval or clear the other hurdles normally placed on access to and use of sensitive data. This represents a paradigm shift in secure access to sensitive data with geospatial content.
Service-oriented architecture, with underlying technologies such as web services and web service orchestration, opens new vistas for integration among business processes operating in heterogeneous environments. However, such dynamic collaborations require a highly secure environment at each respective business partner site. Existing web services standards address the issue of security only on the service provider platform; the partner platforms to which sensitive information is released have until now been neglected. Remote attestation is a relatively new field of research that enables an authorized party to verify that a trusted environment actually exists on a partner platform. To incorporate this concept into the web services realm, a mechanism called WS-Attestation has been proposed, providing a structural paradigm upon which more fine-grained solutions can be built. In this paper, we present a novel framework, Behavioral Attestation for Web Services, in which XACML is built on top of WS-Attestation to enable more flexible remote attestation at the web services level. We propose a new type of XACML policy, called XACML behavior policy, which defines the expected behavior of a partner platform. Existing web service standards are used to incorporate remote attestation at the web services level, and a prototype is presented that implements XACML behavior policy using low-level attestation techniques.
The quality of user-generated content on World Wide Web media is a matter of serious concern for both creators and users. To measure content quality, webometric techniques are commonly used. More recently, bibliometric techniques, originally devised for scholarly data, have been introduced to good effect for evaluating the quality of user-generated content. However, the application of bibliometric techniques to YouTube content has so far been limited to the h-index and g-index computed from views alone. This paper advocates for, and demonstrates, the adaptation of existing bibliometric indices, including the h-index, g-index, and M-index, to exploit both views and comments, and proposes three indices, hvc, gvc, and mvc, for YouTube video channel ranking. Empirical results on a real-world YouTube dataset show that the proposed indices, which use views along with comments, outperform the existing approaches.
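The underlying h-index and g-index are simple to compute; a minimal sketch follows, with a hypothetical combined views-and-comments variant (the paper's actual hvc/gvc/mvc definitions and weighting are not reproduced here, so `hvc` below is only an illustrative assumption):

```python
def h_index(values):
    """Largest h such that at least h items score >= h."""
    vals = sorted(values, reverse=True)
    h = 0
    for i, v in enumerate(vals, start=1):
        if v >= i:
            h = i
        else:
            break
    return h

def g_index(values):
    """Largest g such that the top g items together score >= g**2."""
    vals = sorted(values, reverse=True)
    total, g = 0, 0
    for i, v in enumerate(vals, start=1):
        total += v
        if total >= i * i:
            g = i
    return g

def hvc(views, comments, alpha=0.5):
    """Hypothetical per-video score mixing views and comments, then
    ranked with the h-index machinery (illustrative only)."""
    scores = [alpha * v + (1 - alpha) * c for v, c in zip(views, comments)]
    return h_index(scores)
```

For example, a channel whose videos score `[10, 8, 5, 4, 3]` has h-index 4 (four videos with at least 4 each) and g-index 5 (the top five sum to 30, which is at least 25).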
We present a scheme for implementing the three-qubit Grover algorithm using four-level superconducting quantum interference devices (SQUIDs) coupled to a superconducting resonator. The scheme is based on resonant and off-resonant interaction of the cavity field with the SQUIDs and on the application of classical microwave pulses. We show that adjustment of SQUID level spacings during the gate operations, adiabatic passage, and second-order detuning are not required, which leads to faster implementation. We also show that the marked state can be searched with high fidelity even in the presence of unwanted off-resonant interactions, level decay, and cavity dissipation.
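The abstract Grover iteration being implemented (oracle phase flip followed by inversion about the mean) can be checked with a small statevector simulation; this sketch shows the idealised algorithm only, not the SQUID/cavity physics or decoherence effects the paper analyses:

```python
from math import sqrt

def grover_search(marked, n_qubits=3, iterations=2):
    """Statevector simulation of Grover's algorithm on n_qubits.
    For 3 qubits (N = 8), two iterations are optimal."""
    n = 2 ** n_qubits
    amp = [1.0 / sqrt(n)] * n          # uniform superposition (Hadamards)
    for _ in range(iterations):
        amp[marked] = -amp[marked]     # oracle: phase-flip the marked state
        mean = sum(amp) / n            # diffusion: inversion about the mean
        amp = [2.0 * mean - a for a in amp]
    return [a * a for a in amp]        # measurement probabilities
```

After two iterations the marked state is measured with probability about 0.945, which is the "high fidelity" baseline against which the paper's noisy implementation is judged.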
This paper proposes a spam detection technique at the packet level (layer 3), based on classification of e-mail contents. Our proposal targets spam control implementations on middleboxes. E-mails are first pre-classified (pre-detected) for spam on a per-packet basis, without the need for reassembly. This, in turn, allows fast e-mail class estimation (spam detection) at receiving e-mail servers to support more effective spam handling of both inbound and outbound (relayed) e-mails. In this paper, the naïve Bayes classification technique is adapted to support both pre-classification and fast e-mail class estimation on a per-packet basis. We focus on evaluating the accuracy of spam detection at layer 3, considering the constraints of processing byte streams over the network, including packet reordering, fragmentation, overlapping bytes, and varying packet sizes. Results show that the proposed layer-3 classification technique yields a false-positive rate below 0.5%, approximately equal to the performance attained at layer 7. This shows that classifying e-mails at the packet level can differentiate non-spam from spam with high confidence, making spam control on middleboxes viable.
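The key property that makes per-packet classification possible is that naïve Bayes scores are sums of independent per-word log-likelihoods, so any byte fragment of a message can be scored without reassembling the full e-mail. A toy sketch of that idea (not the paper's exact estimator, which also handles reordering, fragmentation, and overlapping bytes):

```python
import math
from collections import Counter

class PacketNB:
    """Word-level naive Bayes that scores any byte fragment of an
    e-mail body, so no TCP reassembly is needed (illustrative toy)."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.docs = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.counts[label].update(text.lower().split())
        self.docs[label] += 1

    def _log_likelihood(self, words, label):
        c = self.counts[label]
        total = sum(c.values())
        vocab = len(set(self.counts["spam"]) | set(self.counts["ham"]))
        score = math.log(self.docs[label] / sum(self.docs.values()))
        for w in words:
            # Laplace smoothing so unseen words do not zero the score.
            score += math.log((c[w] + 1) / (total + vocab))
        return score

    def classify_packet(self, payload):
        """Score a single packet payload fragment independently."""
        words = payload.lower().split()
        spam = self._log_likelihood(words, "spam")
        ham = self._log_likelihood(words, "ham")
        return "spam" if spam > ham else "ham"
```

A middlebox could accumulate these per-packet scores per flow and hand the receiving server a running spam estimate before the last packet arrives.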
A microarray machine offers the capacity to measure the expression levels of thousands of genes simultaneously. It is used to collect information from tissue and cell samples regarding gene expression differences that could be useful for cancer classification. However, the pressing problems in the use of gene expression data are the availability of a huge number of genes relative to the small number of available samples, and the fact that many of the genes are not relevant to the classification. It has been shown that selecting a small subset of genes can lead to improved classification accuracy. Hence, this paper proposes a solution to these problems by using a multiobjective strategy in a genetic algorithm. The approach was tried on two benchmark gene expression data sets and obtained encouraging results on those data sets compared with an approach that used a single-objective strategy in a genetic algorithm.

This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.
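The multiobjective strategy described above trades off two goals at once: maximising classification accuracy while minimising the number of selected genes. The core of such a strategy is keeping the Pareto-optimal (non-dominated) candidates; a minimal sketch of that selection step, assuming each candidate gene subset has already been scored:

```python
def pareto_front(candidates):
    """Return the non-dominated candidates under two objectives:
    maximise accuracy, minimise number of selected genes.
    candidates: list of (accuracy, n_genes) tuples."""
    front = []
    for a_acc, a_n in candidates:
        dominated = any(
            # b dominates a: at least as good on both, strictly better on one
            (b_acc >= a_acc and b_n <= a_n) and (b_acc > a_acc or b_n < a_n)
            for b_acc, b_n in candidates
        )
        if not dominated:
            front.append((a_acc, a_n))
    return front
```

A single-objective GA would collapse these two goals into one weighted score, whereas the multiobjective strategy preserves the whole front, e.g. keeping both a highly accurate 20-gene subset and a slightly less accurate 10-gene one.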
Gene expression technology, namely microarrays, offers the ability to measure the expression levels of thousands of genes simultaneously in biological organisms. Microarray data are expected to be of significant help in the development of an efficient cancer diagnosis and classification platform. A major problem with these data is that the number of genes greatly exceeds the number of tissue samples; the data also contain noisy genes. It has been shown in literature reviews that selecting a small subset of informative genes can lead to improved classification accuracy. Therefore, this paper aims to select a small subset of informative genes that are most relevant for cancer classification. To achieve this aim, an approach using two hybrid methods has been proposed. This approach is assessed and evaluated on two well-known microarray data sets, showing competitive results.

This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.