Multimedia Tools and Applications - Nowadays, web users frequently explore multimedia content to satisfy their information needs. The exploration approaches usually provide linear interaction...
Security threats are crucial challenges that deter mixed reality (MR) communication in medical telepresence. This research aims to improve security by reducing the chance of various attacks occurring during real-time data transmission in surgical telepresence, while also reducing the execution time of the cryptographic algorithm and preserving the quality of the media used. The proposed model is built around an enhanced RC6 algorithm combined with RC4: dynamic keys generated by RC6 are mixed with RC4 to create a dynamic S-box and permutation table, preventing various known attacks during real-time data transmission. A new key is created for every session, preventing an attacker from exploiting key reuse. The results obtained from our proposed system show better performance than the state of the art. Resistance to the tested attacks is measured through entropy; the Peak Signal-to-Noise Ratio (PSNR) of the encrypted image is lower than in the state of the art, and the structural similarity index (SSIM) is closer to zero. The execution time of the algorithm is decreased by an average of 20%. The proposed system focuses on preventing brute-force attacks during surgical telepresence data transmission. The paper proposes a framework that enhances the security of data transmission during surgeries with acceptable performance.
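The per-session dynamic S-box idea described above can be sketched with the standard RC4 key-scheduling algorithm (KSA). This is a minimal illustration only, assuming the RC4 KSA is the permutation-generation step; the function names and 16-byte key length are our assumptions, not the paper's exact construction.

```python
import os

def rc4_ksa_sbox(session_key: bytes) -> list:
    """Build a key-dependent S-box (a permutation of 0..255) via the RC4 KSA."""
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + session_key[i % len(session_key)]) % 256
        s[i], s[j] = s[j], s[i]  # swap driven by the key material
    return s

# A fresh random key for each session avoids reuse of the same key
# (and hence the same S-box) across transmissions.
session_key = os.urandom(16)  # assumed key length
sbox = rc4_ksa_sbox(session_key)
```

Because the S-box is regenerated from a fresh key every session, an attacker who recovers one session's permutation gains nothing against the next.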
Most image watermarking (IW) schemes exhibit low robustness because they hide information in the least significant bits (LSBs) of the cover image, while schemes that hide information in the most significant bits (MSBs) have low imperceptibility, since the resulting cover-image distortion is visible to an attacker. In this paper, a hybrid image watermarking scheme is proposed that integrates Robust Principal Component Analysis (R-PCA), the Discrete Tchebichef Transform (DTT), and Singular Value Decomposition (SVD). A grayscale watermark image is scrambled using a 2D Discrete Hyper-chaotic Encryption System (2D-DHCES) to improve robustness and security. The original cover image is decomposed into sparse components using R-PCA, the principal component is further decomposed using DTT, and the watermark is embedded in the cover image using SVD. In DTT, a small number of coefficients hold most of the energy and provide an optimally sparse representation of the significant image edges and features, which supports reliable retrieval of the watermark even after severe distortion-based channel attacks. The imperceptibility and robustness of the proposed method are corroborated against a variety of signal-processing channel attacks (salt-and-pepper noise, multi-directional shearing, cropping, frequency filtering, etc.). Visual and quantitative results show that the proposed image watermarking scheme is effective and delivers high tolerance against several image-processing and geometric attacks.
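The SVD embedding step alone can be sketched as follows. This is a generic singular-value embedding, a common textbook variant; the R-PCA and DTT stages are omitted, and the embedding strength `alpha` and the stored side information are our assumptions, not the paper's exact pipeline.

```python
import numpy as np

def embed_svd(cover: np.ndarray, mark: np.ndarray, alpha: float = 0.05):
    """Embed a (scrambled) watermark by perturbing the cover's singular values."""
    u, s, vt = np.linalg.svd(cover, full_matrices=False)
    # Perturb the singular-value matrix with the watermark, then re-decompose.
    uw, sw, vwt = np.linalg.svd(np.diag(s) + alpha * mark, full_matrices=False)
    watermarked = u @ np.diag(sw) @ vt
    return watermarked, (uw, vwt, s)  # side info needed at extraction time

def extract_svd(watermarked: np.ndarray, side, alpha: float = 0.05):
    """Recover the watermark using the stored side information."""
    uw, vwt, s = side
    _, sw, _ = np.linalg.svd(watermarked, full_matrices=False)
    d = uw @ np.diag(sw) @ vwt          # reconstruct diag(s) + alpha * mark
    return (d - np.diag(s)) / alpha
```

In the attack-free case the recovery is exact up to floating-point error; robustness under channel attacks is what the DTT/R-PCA stages are meant to add.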
Journal of Central South University - This work is concerned with the analysis of blood flow through inclined catheterized arteries having a balloon (angioplasty) with time-variant overlapping...
Reusing wastewater from oil-related industries is becoming increasingly important, especially in water-stressed oil-producing countries. Before oily wastewater can be discharged or reused, it must be properly treated, e.g., by membrane-based processes like ultrafiltration. A major issue of the applied membranes is their high fouling propensity. This paper reports on mitigating fouling inside ready-to-use ultrafiltration hollow-fiber modules used in a polishing step in oil/water separation. For this purpose, in-situ polyzwitterionic hydrogel coating was applied. The membrane performance was tested with oil nano-emulsions using a mini-plant system. The main factors influencing fouling were systematically investigated using statistical design of experiments.
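A statistical design of experiments, as mentioned above, can be sketched as a two-level full-factorial plan. The factor names below are hypothetical choices matching the fouling-study context, not the paper's actual factors or levels.

```python
from itertools import product

# Assumed factors and levels for illustration only.
factors = {
    "oil_concentration": ["low", "high"],
    "crossflow_velocity": ["low", "high"],
    "hydrogel_coating": ["uncoated", "coated"],
}

# Full factorial: every combination of factor levels becomes one run.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
# 2^3 = 8 experimental runs cover all level combinations.
```

A full factorial lets main effects and interactions (e.g., coating x oil concentration) be estimated; fractional designs reduce the run count when factors multiply.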
Hydrogels are polymeric materials widely used in medicine due to their similarity to the biological components of the body. Hydrogels are biocompatible materials that have the potential to promote cell proliferation and tissue support because of their hydrophilic nature, porous structure, and elastic mechanical properties. In this work, we demonstrate the microwave-assisted synthesis of three molecular-weight varieties of poly(ethylene glycol) dimethacrylate (PEGDMA) with different mechanical and thermal properties, and their rapid photocrosslinking using 1-hydroxy-cyclohexyl-phenyl-ketone (Irgacure 184) as a UV photoinitiator. The effects of the poly(ethylene glycol) molecular weight and degree of acrylation on the swelling, mechanical, and rheological properties of the hydrogels were investigated. The biodegradability of the PEGDMA hydrogels, as well as their ability to support cell growth and proliferation, was examined to assess their viability as scaffolds in tissue engineering. Altogether, these biomaterial hydrogel properties open the way for applications in regenerative medicine for functional scaffolds and tissues.
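The swelling behavior mentioned above is commonly quantified by the equilibrium swelling ratio from dry and swollen masses. This standard gravimetric formula is an assumption for illustration; the paper's exact metric may differ.

```python
def swelling_ratio(swollen_mass_g: float, dry_mass_g: float) -> float:
    """Percent mass gain of a hydrogel at equilibrium swelling:
    100 * (W_s - W_d) / W_d, with W_s the swollen and W_d the dry mass."""
    return 100.0 * (swollen_mass_g - dry_mass_g) / dry_mass_g

# Hypothetical example: a 0.20 g dry PEGDMA disc swelling to 1.10 g.
ratio = swelling_ratio(1.10, 0.20)  # 450% mass gain
```

Higher PEG molecular weight typically yields a looser network and a larger swelling ratio, which is one way the molecular-weight effect in the study would show up numerically.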
Deposition of diamond films onto various substrates can result in significant technological advantages in terms of functionality
and improved life and performance of components. Diamond is hard, wear resistant, chemically inert, and biocompatible. It
is considered to be the ideal material for surfaces of cutting tools and biomedical components. However, it is well known
that diamond deposition onto technologically important substrates, such as co-cemented carbides and steels, is problematic
due to carbon interaction with the substrate, low nucleation densities, and poor adhesion. Several papers previously published
in the relevant literature have reported the application of interlayer materials such as metal nitrides and carbides to provide
bonding between diamond and hostile substrates. In this study, the chemical vapor deposition (CVD) of polycrystalline diamond
on TiN/SiNx nanocomposite (nc) interlayers deposited at relatively low temperatures has been investigated for the first time. The nc layers were
deposited at 70 or 400 °C on Si substrates using a dual ion beam deposition system. The results showed that a preliminary
seeding pretreatment with diamond suspension was necessary to achieve high diamond nucleation densities and that the diamond
nucleation density was higher on nc films than on bare sc-Si subjected to the same pretreatment and CVD process parameters. TiN/SiNx layers synthesized at 70 or 400 °C underwent different nanostructure modifications during diamond CVD. The data also showed
that TiN/SiNx films obtained at 400 °C are preferable insofar as their use as interlayers between hostile substrates and CVD diamond
is concerned.
This paper was presented at the fourth International Surface Engineering Congress and Exposition held August 1–3, 2005 in
St. Paul, MN.
The effect of nickel and molybdenum concentrations on the phase transformation and mechanical properties of conventional 18Ni(350)
maraging steel has been investigated. Both of these elements act as strong austenite stabilizers. When the concentration of
molybdenum or nickel is greater than 7.5 or 24 wt%, respectively, the austenite phase remains stable up to room temperature.
In both molybdenum- and nickel-alloyed steels, the austenite phase could be transformed to martensite by either dipping the
material in liquid nitrogen or subjecting it to cold working. When 7.5 wt% Mo and 24 wt% Ni were added in combination, however,
the austenite phase obtained at room temperature did not transform to martensite when liquid-nitrogen quenched or even when
cold rolled to greater than 95% reduction. The aging response of these materials has also been investigated using optical,
scanning electron, and scanning transmission electron microscopy.
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources. As a result, prior research in this area focuses on proposing automated approaches to accurately identify duplicate reports. However, no studies have attempted to quantify the actual effort that is spent on identifying duplicate issue reports. In this paper, we empirically examine the effort needed to manually identify duplicate reports in four open source projects, i.e., Firefox, SeaMonkey, Bugzilla, and Eclipse-Platform. Our results show that: (i) more than 50% of the duplicate reports are identified within half a day, and most duplicate reports are identified without any discussion and with the involvement of very few people; (ii) a classification model built using a set of factors extracted from duplicate issue reports classifies duplicates according to the effort needed to identify them with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) factors that capture developer awareness of a duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since identifying a considerable proportion of duplicate reports (over 50%) appears to be a relatively trivial task for developers. To better assist developers, research on identifying duplicate issue reports should put greater emphasis on assisting developers with effort-consuming duplicate issues.
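The effort-based labelling and the evaluation metrics reported above can be sketched as follows. The half-day cut-off mirrors the abstract's threshold, but the field names and the two-class "fast"/"slow" framing are our assumptions, not the paper's exact setup.

```python
HALF_DAY_HOURS = 12  # assumed cut-off for "identified within half a day"

def label_effort(identification_hours: float) -> str:
    """Label a duplicate report by the effort spent identifying it."""
    return "fast" if identification_hours <= HALF_DAY_HOURS else "slow"

def precision_recall(pred, truth, positive="slow"):
    """Precision and recall for the effort class of interest."""
    tp = sum(p == positive == t for p, t in zip(pred, truth))
    fp = sum(p == positive != t for p, t in zip(pred, truth))
    fn = sum(t == positive != p for p, t in zip(pred, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Evaluating against the "slow" class reflects the paper's point: models are most useful when they single out the effort-consuming duplicates rather than the trivial majority.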
Reuse of software components, either closed or open source, is considered to be one of the most important best practices in software engineering, since it reduces development cost and improves software quality. However, since reused components are (by definition) generic, they need to be customized and integrated into a specific system before they can be useful. Since this integration is system-specific, the integration effort is non-negligible and increases maintenance costs, especially if more than one component needs to be integrated. This paper performs an empirical study of multi-component integration in the context of three successful open source distributions (Debian, Ubuntu and FreeBSD). Such distributions integrate thousands of open source components with an operating system kernel to deliver a coherent software product to millions of users worldwide. We empirically identified seven major integration activities performed by the maintainers of these distributions, documented how these activities are being performed by the maintainers, then evaluated and refined the identified activities with input from six maintainers of the three studied distributions. The documented activities provide a common vocabulary for component integration in open source distributions and outline a roadmap for future research on software integration.