Tremendous efforts have been made to investigate various materials to enhance the electrical performance of triboelectric nanogenerators (TENGs), but techniques to further enhance the performance of tribomaterials are still in demand. Therefore, we fabricated a bimetallic hybrid cryogel via a cheap and facile UV-radiation and in situ reduction method. The fabricated TENG device, comprising a porous hybrid bimetallic cryogel film containing silver and gold nanoparticles as the tribopositive material and polydimethylsiloxane (PDMS) as the tribonegative layer, with dimensions of 1 × 2 cm2, produced an output voltage of 262.14 V with a current density of 27.52 mA/m2 and a peak power density of 7.44 W/m2, which was sufficient to light up more than 120 white light-emitting diodes (LEDs). The porous, rough structure and the interaction of the nanoparticles were the reasons behind the performance enhancement of the tribopositive material. Thus, this study introduces a very stable and easily synthesized bimetallic hybrid cryogel as a tribopositive material to enhance the performance of tribomaterials for the design of high-performance TENG devices.
Cerebral Microbleeds (CMBs) are microhemorrhages caused by certain abnormalities of brain vessels. CMBs can be found in people with Traumatic Brain Injury (TBI), Alzheimer's disease, and in elderly individuals with a brain injury. Current research reveals that CMBs can be highly dangerous for individuals with dementia and stroke. CMBs seriously impact individuals' lives, which makes it crucial to recognize them in their initial phase to stop deterioration and help individuals lead a normal life. Existing work reports good results but often ignores the false-positive perspective in this research area. In this paper, an efficient approach is presented to detect CMBs from Susceptibility Weighted Images (SWI). The proposed framework consists of four main phases: (i) clustering of brain Magnetic Resonance Imaging (MRI) scans using k-means, (ii) false-positive reduction for better classification results, (iii) extraction of discriminative features specific to CMBs, and (iv) classification using a five-layer convolutional neural network (CNN). The proposed method is evaluated on a public dataset of 20 subjects and shows an accuracy of 98.9% with a 1.1% false-positive rate. The results show the superiority of the proposed work compared to existing state-of-the-art methods.
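The clustering stage in phase (i) can be sketched as a minimal 1-D k-means over voxel intensities. This is a pure-Python illustration only; the intensity values, number of clusters, and iteration count below are assumptions for the sketch, not the paper's configuration:

```python
def kmeans_1d(values, k=3, iters=20):
    """Minimal 1-D k-means: partition voxel intensities into k clusters."""
    # Initialize centroids evenly spread across the intensity range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value goes to its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Update step: each centroid becomes the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return labels, centroids

# Toy intensities: dark background, mid-gray tissue, bright CMB candidates.
intensities = [5, 8, 10, 120, 125, 130, 240, 245, 250]
labels, centroids = kmeans_1d(intensities, k=3)
```

In the full pipeline the resulting clusters would feed the false-positive reduction and CNN stages rather than being used directly.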
Social networking platforms provide a vital source for disseminating information across the globe, particularly in the case of disasters, and are a great means of finding first-hand accounts of an event. Twitter is one such platform that has been extensively utilized by the scientific community due to its unidirectional follower model. Identifying eyewitness tweets about an incident among the millions of tweets shared by Twitter users is a challenging task, and the research community has proposed diverse techniques to identify eyewitness accounts. A recent state-of-the-art approach proposed a comprehensive set of features for this purpose. However, this approach suffers from two limitations. Firstly, automatically extracting the feature-words for each feature identified by the approach remains a perplexing task. Secondly, not all identified features were incorporated in the implementation. This paper utilizes language structure, linguistics, and word relations to achieve automatic extraction of feature-words by creating grammar rules. Additionally, all identified features, including those left out by the state-of-the-art model, were implemented. A generic approach is taken to cover different disaster types, and the proposed approach was evaluated on earthquakes, floods, hurricanes, and wildfires. Using a static dictionary, the approach of Zahra et al. produced an F-score of 0.92 for eyewitness identification in the earthquake category, while the proposed approach secured an F-score of 0.81 in the same category. This can be considered a significant score given that no static dictionary is used.
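The grammar-rule extraction of feature-words can be sketched as a set of lexico-syntactic patterns applied to tweet text. The two patterns below (first-person pronoun plus perception verb, and first-person possessive plus a location noun) are illustrative assumptions, not the actual rules from the paper:

```python
import re

# Illustrative grammar rules: a first-person pronoun followed by a
# perception verb, or a first-person possessive with a nearby location,
# suggests an eyewitness account.
RULES = [
    re.compile(r"\bI\s+(?:just\s+)?(?:felt|saw|heard|witnessed)\b", re.I),
    re.compile(r"\bmy\s+(?:house|street|city|neighborhood)\b", re.I),
]

def extract_feature_words(tweet):
    """Return all feature-word spans matched by the grammar rules."""
    return [m.group(0) for rule in RULES for m in rule.finditer(tweet)]

def is_eyewitness(tweet):
    """Classify a tweet as eyewitness if any rule fires."""
    return bool(extract_feature_words(tweet))
```

A real system would derive such patterns from part-of-speech structure and word relations rather than a fixed list, which is precisely what distinguishes the grammar-rule approach from a static dictionary.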
The design and sustainability of reinforced concrete deep beams are still major issues in structural engineering despite modern advancements in the area. A proper understanding of shear stress characteristics can assist in providing safer designs and preventing failure in deep beams, consequently saving lives and property. In this investigation, a new intelligent model based on the hybridization of support vector regression with a bio-inspired optimization approach, the genetic algorithm (SVR-GA), is employed to predict the shear strength of reinforced concrete (RC) deep beams from dimensional, mechanical, and material properties. The adopted SVR-GA modelling approach is validated against three well-established artificial intelligence (AI) models: classical SVR, an artificial neural network (ANN), and gradient boosted decision trees (GBDTs). The comparative assessment gives a clear impression of the superior capability of the proposed SVR-GA model in predicting the shear strength of simply supported deep beams, and the simulated results of the SVR-GA model are very close to the experimental ones. Quantitatively, the proposed model attained a coefficient of determination (R2) of 0.95 during the testing phase, whereas the other comparable models generated relatively lower R2 values ranging from 0.884 to 0.941. All in all, the proposed SVR-GA model is an applicable and robust computer-aided technology for modelling RC deep beam shear strength that contributes to the knowledge base of material and structural engineering.
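The GA half of an SVR-GA hybrid can be sketched as a real-valued genetic search over the SVR hyperparameters (C, gamma). In this sketch a stand-in quadratic fitness function replaces the cross-validated SVR error, which is an assumption for illustration; a real implementation would train an SVR per chromosome and minimize its validation error:

```python
import random

random.seed(42)

def fitness(c, gamma):
    """Stand-in for cross-validated SVR error; lower is better.
    Illustrative surrogate with its optimum at C=10, gamma=0.1."""
    return (c - 10.0) ** 2 + (gamma - 0.1) ** 2

def genetic_search(pop_size=30, generations=60):
    # Random initial population of (C, gamma) chromosomes.
    pop = [(random.uniform(0.1, 100), random.uniform(0.001, 1))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p))
        elite = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            w = random.random()              # arithmetic crossover
            c = w * a[0] + (1 - w) * b[0]
            g = w * a[1] + (1 - w) * b[1]
            if random.random() < 0.2:        # Gaussian mutation
                c += random.gauss(0, 1.0)
                g += random.gauss(0, 0.05)
            children.append((max(c, 1e-3), max(g, 1e-4)))
        pop = elite + children
    return min(pop, key=lambda p: fitness(*p))

best_c, best_gamma = genetic_search()
```

The population converges toward the low-error region of the (C, gamma) plane, which is the mechanism by which the GA tunes the SVR in the hybrid model.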
Conductive polymeric blends (CPBs) of polystyrene and polyaniline (PS/PANI) were prepared in various compositions by the solution casting method, yielding films of ~250 micron thickness. The PS/PANI blend films were analyzed for electromagnetic interference (EMI) shielding characteristics in the microwave and near-infrared (NIR) regions and showed remarkable features. Most mobile telecommunications use the GHz frequency range; in the 9 GHz to 18 GHz range, a shielding effectiveness of 45 dB was measured. The CPBs were also analyzed in the NIR region and showed a transmittance of <1%. Microwaves and NIR radiation are the most abundant radiation in the environment and cause damage to human health; both types of radiation also cause serious damage to electronic devices.
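For orientation, shielding effectiveness in decibels is conventionally defined from the ratio of incident to transmitted power; the numerical interpretation below is added for illustration:

```latex
\mathrm{SE}\,(\mathrm{dB}) = 10 \log_{10}\!\frac{P_{\mathrm{incident}}}{P_{\mathrm{transmitted}}}
```

At 45 dB, the transmitted fraction is 10^-4.5, roughly 0.003% of the incident power, i.e. about 99.997% of the incident microwave power is blocked.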
In the machine learning literature, deep learning methods have been moving toward greater heights by giving due importance to both data representation and classification. The recently developed multilayered arc-cosine kernel leverages the possibility of extending deep learning features into kernel machines. Even though this kernel has been widely used in conjunction with support vector machines (SVMs) on small datasets, it does not seem to be a feasible solution for modern real-world applications that involve very large datasets. There are many avenues in which the scalability of deep kernel machines in handling large datasets needs to be evaluated. In the machine learning literature, the core vector machine (CVM) is used as a scaling-up mechanism for traditional SVMs. In the CVM, the quadratic programming problem involved in the SVM is reformulated as an equivalent minimum enclosing ball problem and then solved using a subset of the training samples (the core set) obtained by a faster \((1+\epsilon )\) approximation algorithm. This paper explores the possibility of using the principles of core vector machines as a scaling-up mechanism for a deep support vector machine with the arc-cosine kernel. Experiments on different datasets show that the proposed system gives high classification accuracy with reasonable training time compared to traditional core vector machines, deep support vector machines with the arc-cosine kernel, and a deep convolutional neural network.
Microsystem Technologies - The present work deals with comparative and robustness analysis of grey wolf optimization (GWO) based fractional order proportional–integral–derivative (FOPID)...
Debugging—the process of identifying, localizing and fixing bugs—is a key activity in software development. Due to issues such as non-determinism and difficulties of reproducing failures, debugging concurrent software is significantly more challenging than debugging sequential software. A number of methods, models and tools for debugging concurrent and multicore software have been proposed, but the body of work partially lacks a common terminology and a more recent view of the problems to solve. This suggests the need for a classification, and an up-to-date comprehensive overview of the area. This paper presents the results of a systematic mapping study in the field of debugging of concurrent and multicore software in the last decade (2005–2014). The study is guided by two objectives: (1) to summarize the recent publication trends and (2) to clarify current research gaps in the field. Through a multi-stage selection process, we identified 145 relevant papers. Based on these, we summarize the publication trend in the field by showing distribution of publications with respect to year, publication venues, representation of academia and industry, and active research institutes. We also identify research gaps in the field based on attributes such as types of concurrency bugs, types of debugging processes, types of research and research contributions. 
The main observations from the study are that during the years 2005–2014: (1) there is no focal conference or venue to publish papers in this area; hence, a large variety of conference and journal venues (90) are used to publish relevant papers in this area; (2) in terms of publication contribution, academia was more active in this area than industry; (3) most publications in the field address the data race bug; (4) bug identification is the most common stage of debugging addressed by articles in the period; (5) there are six types of research approaches found, with solution proposals being the most common one; and (6) the published papers essentially focus on four different types of contributions, with "methods" being the most common type. We can further conclude that there are still quite a number of aspects that are not sufficiently covered in the field, most notably including (1) exploring correction and fixing of bugs in terms of debugging process; (2) order violation, suspension and starvation in terms of concurrency bugs; (3) validation and evaluation research in the matter of research type; and (4) metrics in terms of research contribution. It is clear that the concurrent, parallel and multicore software community needs broader studies in debugging. This systematic mapping study can help direct such efforts.
A vehicular ad-hoc network (VANET) is characterized as a highly dynamic wireless network due to the dynamic connectivity of its nodes. To achieve better connectivity under such dynamic conditions, an optimal transmission strategy is required to direct the information flow between the nodes. Earlier studies on VANETs overlook the heterogeneity of vehicle types, traffic structure, and flow in density estimation and connectivity observation. In this paper, we propose a heterogeneous-traffic-flow-based dual-ring connectivity model to enhance both message dissemination and network connectivity. In the proposed model, the availability of different types of vehicles on the road, such as cars and buses, is exploited to build a new communication structure for moving vehicles in a VANET under cooperative transmission in heterogeneous traffic flow. The model is based on a dual-ring structure that forms primary and secondary rings of vehicular communication. During message dissemination, slow vehicles (buses) on the secondary ring provide a backup communication path for high-speed vehicles (cars) moving on the primary ring. The slow vehicles act as intermediate nodes in this connectivity model, helping improve network coverage and end-to-end data delivery. For the evaluation and implementation of the dual-ring model, a warning-energy-aware cluster-head clustering routing scheme is adopted that also caters for energy optimization. The implemented dual-ring message delivery scheme under the cluster-head-based routing technique shows improved network coverage and connectivity dynamics even under a multi-hop communication system.
Purpose
To quantify hepatocellular carcinoma (HCC) perfusion and flow with the fast exchange regime-allowed Shutter-Speed model (SSM) compared to the Tofts model (TM).
Materials and methods
In this prospective study, 25 patients with HCC underwent DCE-MRI. ROIs were placed in liver parenchyma, portal vein, aorta and HCC lesions. Signal intensities were analyzed employing dual-input TM and SSM models. ART (arterial fraction), Ktrans (contrast agent transfer rate constant from plasma to extravascular extracellular space), ve (extravascular extracellular volume fraction), kep (contrast agent intravasation rate constant), and τi (mean intracellular water molecule lifetime) were compared between liver parenchyma and HCC, and ART, Ktrans, ve and kep were compared between models using Wilcoxon tests and limits of agreement. Test–retest reproducibility was assessed in 10 patients.
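The TM parameters listed above are related through the standard Tofts equation, shown here in its single-input form for orientation (the study itself uses a dual-input variant in which arterial and portal venous input functions are weighted by ART):

```latex
C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{\mathrm{ep}}(t-\tau)}\, d\tau,
\qquad k_{\mathrm{ep}} = \frac{K^{\mathrm{trans}}}{v_e}
```

Here \(C_t\) is the tissue contrast agent concentration and \(C_p\) the plasma input function; the SSM additionally fits the intracellular water lifetime \(\tau_i\).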
Results
ART and ve obtained with the TM, and ART, ve, kep and τi obtained with the SSM, were significantly different between liver parenchyma and HCC (p < 0.04). Parameters showed variable reproducibility (CV range 14.7–66.5 % for both models). Liver Ktrans and ve, and HCC ve and kep, were significantly different when estimated with the two models (p < 0.03).
Conclusion
Our results show differences in perfusion parameters computed with the TM and the SSM. However, these differences are smaller than the parameters' test–retest reproducibility and may be of limited clinical significance.