Applying combinatorial methods to materials science offers the opportunity to accelerate the discovery of more efficient dielectric ceramics. High-throughput methods have the potential to investigate the effects of a wide range of dopants on dielectric properties, and to optimise existing systems, encouraging the short innovation cycles that the communications technology industry requires. The London University Search Instrument (LUSI) is a fully automated, high-throughput combinatorial robot capable of producing large numbers of sintered bulk ceramic samples of varying composition in one day, as combinatorial libraries on alumina substrates. Ba1−xSrxTiO3 (BST) libraries were produced by LUSI as a proof of principle, with x = 0–1 in steps of 0.1, and fired at 1350 and 1400 °C for 1 h. Part I of this paper described the manufacture and physical characterisation of the BST libraries, showing a regular change in composition with x across the libraries. In this second part, the dielectric properties of the BST libraries produced by LUSI are assessed at frequencies between 100 Hz and 1 MHz and at temperatures between 150 and 500 K. Local piezoelectric properties were also characterised by scanning probe microscopy (SPM). All measurements showed evidence of a clear functional gradient varying with x across the library, with measured εr corresponding to the values expected for BST.
The effects of log-normal pore size distributions on the rejection of uncharged solutes and NaCl at hypothetical nanofiltration membranes have been assessed theoretically. The importance of pore radius-dependent properties such as solvent viscosity and dielectric constant is increased by the introduction of a pore size distribution in calculations. However, the effect of porewise variation in viscosity is less apparent when considered at a defined applied pressure rather than at a defined flux, showing a further advantage of basing theoretical analysis of nanofiltration on applied pressure. Truncated pore size distributions gave better agreement than full distributions with experimental rejection data for a Desal-DK nanofiltration membrane. Such truncation is in agreement with the findings of atomic force microscopy (AFM). Analysis of uncharged solute rejection data alone could not give useful information about the membrane pore size distribution, and neither could such a distribution be obtained quantitatively directly from AFM images. However, using the shape of the distribution obtained by AFM in conjunction with experimental rejection data for an uncharged solute allows calculation of corrected distributions. Importantly, incorporating such a corrected pore size distribution in calculations of NaCl rejection gave better agreement with experimental data, compared to calculations assuming uniform pores, at high pressure, the region of industrial interest.
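The modelling idea in the abstract above can be sketched numerically. The following is a minimal illustration, not the paper's hydrodynamic model: it assumes a purely steric (Ferry) rejection mechanism, Hagen–Poiseuille flux weighting, and Monte Carlo sampling of the log-normal distribution, and every function name and parameter value is invented for the example.

```python
import math
import random

def ferry_rejection(r_solute, r_pore):
    """Purely steric (Ferry) rejection of an uncharged solute by a cylindrical pore."""
    if r_solute >= r_pore:
        return 1.0
    lam = r_solute / r_pore
    phi = (1.0 - lam) ** 2          # equilibrium partition coefficient
    return 1.0 - phi * (2.0 - phi)  # classic Ferry expression

def distributed_rejection(r_solute, r_mean, sigma, n=50000, r_max=None, seed=0):
    """Flux-weighted mean rejection over a log-normal pore-radius distribution.

    Each pore is weighted by its Hagen-Poiseuille volumetric flux (~r^4);
    r_max optionally truncates the distribution, as AFM observations suggest.
    """
    rng = random.Random(seed)
    mu = math.log(r_mean)
    num = den = 0.0
    for _ in range(n):
        r = math.exp(rng.gauss(mu, sigma))
        if r_max is not None and r > r_max:
            continue                 # truncated distribution: discard large pores
        w = r ** 4                   # Hagen-Poiseuille flux weight
        num += w * ferry_rejection(r_solute, r)
        den += w
    return num / den

uniform = ferry_rejection(0.3, 0.5)                        # all pores identical
spread = distributed_rejection(0.3, 0.5, 0.3)              # full log-normal
truncated = distributed_rejection(0.3, 0.5, 0.3, r_max=0.6)
```

Because the flux weighting emphasises the largest, least-selective pores, the full distribution predicts lower rejection than uniform pores, and truncating it raises the prediction again, consistent with the abstract's findings.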
Some basic aspects of the kinetics and mechanisms of anionic and cationic ring-opening polymerization of cyclic siloxanes are discussed in connection with their use in polymer synthesis. The emphasis is put on the polymerization of strained ring monomers, such as cyclic trisiloxanes, since these provide the possibility of tailoring the polymer structure. Much attention is devoted to association phenomena and to oligomer formation processes. This review is from the Second International Topical Workshop, Advances in Silicon-Based Polymer Science.
Advanced high strength steels are usually coated with a zinc layer for increased resistance against corrosion. During the resistance spot welding of zinc-coated steel grades, liquid metal embrittlement (LME) may occur; as a result, cracking inside and around the spot weld indentation is observable. The extent of LME cracking is influenced by a variety of factors. In this study, the impact of the electrode geometry used is investigated over a stepwise varied weld time, and a finite element simulation of spot welding is used to analyse and explain the observed effects. Results show significant differences, especially for strongly increased weld times. Based on identical overall dimensions, electrode geometries with a larger working plane allow for longer weld times while still preventing LME within the investigated material and maintaining accessibility.
Evolution-in-materio uses evolutionary algorithms to exploit properties of materials to solve computational problems without requiring a detailed understanding of such properties. We show that using a purpose-built hardware platform called Mecobo, it is possible to solve computational problems by evolving voltages and signals applied to an electrode array covered with a carbon nanotube–polymer composite. We demonstrate for the first time that this methodology can be applied to function optimization and also to the tone discriminator problem (TDP). For function optimization, we evaluate the approach on a suite of optimization benchmarks and obtain results that in some cases come very close to the global optimum or are comparable with those obtained using well-known software-based evolutionary approaches. We also obtain good results in comparison with prior work on the tone discriminator problem. In the case of the TDP, we also investigated the relative merits of different mixtures of materials and organizations of the electrode array.
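The software side of the evolutionary loop described above can be sketched as follows. This is a hedged illustration, not the paper's system: it runs a standard (1+λ) evolution strategy on the sphere benchmark, with all names and parameters invented for the example; in evolution-in-materio the fitness evaluation would instead be a measurement taken from the physical material via the Mecobo platform.

```python
import random

def sphere(x):
    """Benchmark objective with global optimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def evolve(objective, dim=5, lam=8, sigma=0.3, generations=300, seed=1):
    """Minimal (1+lambda) evolution strategy with decaying mutation strength.

    In evolution-in-materio the genotype would encode the voltages and
    signals applied to the electrode array, and `objective` would be a
    fitness measured from the material rather than a software function.
    """
    rng = random.Random(seed)
    parent = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    best = objective(parent)
    for _ in range(generations):
        for _ in range(lam):
            child = [xi + rng.gauss(0.0, sigma) for xi in parent]
            f = objective(child)
            if f <= best:           # elitist acceptance
                best, parent = f, child
        sigma *= 0.99               # shrink the step size over time
    return parent, best

solution, fitness = evolve(sphere)
```

The decaying mutation strength trades early exploration for late refinement, which is one common way such benchmarks approach the global optimum.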
A recent paper \cite{CaeCaeSchBar06} proposed a provably optimal, polynomial time method for performing near-isometric point pattern matching by means of exact probabilistic inference in a chordal graphical model. Its fundamental result is that the chordal graph in question is shown to be \emph{globally rigid}, implying that exact inference provides the \emph{same} matching solution as exact inference in a complete graphical model. This implies that the algorithm is optimal when there is no noise in the point patterns. In this paper, we present a new graph which is also globally rigid but has an advantage over the graph proposed in \cite{CaeCaeSchBar06}: its maximal clique size is smaller, rendering inference significantly more efficient. However, this graph is not chordal and thus standard Junction Tree algorithms cannot be directly applied. Nevertheless, we show that loopy belief propagation in such a graph converges to the optimal solution. This allows us to retain the optimality guarantee in the noiseless case, while substantially reducing both memory requirements and processing time. Our experimental results show that the accuracy of the proposed solution is indistinguishable from that of \cite{CaeCaeSchBar06} when there is noise in the point patterns.
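The underlying objective, an assignment that preserves all pairwise distances, can be stated as a brute-force baseline. The sketch below is not the paper's inference algorithm: it enumerates every assignment on a tiny invented instance, which is exactly the factorial-cost search that exact inference over a globally rigid sparse graph (or loopy belief propagation on it) is designed to avoid.

```python
import itertools
import math
import random

def pairwise_dists(points):
    """All pairwise distances, keyed by index pair (i < j)."""
    n = len(points)
    return {(i, j): math.dist(points[i], points[j])
            for i in range(n) for j in range(i + 1, n)}

def match_brute_force(template, scene):
    """Exhaustive near-isometric matching: choose the assignment of template
    points to scene points that best preserves every pairwise distance.

    The graphical-model formulations replace this factorial search with
    exact (or loopy) inference over a sparse, globally rigid graph.
    """
    d_t = pairwise_dists(template)
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(scene)), len(template)):
        cost = sum(abs(d - math.dist(scene[perm[i]], scene[perm[j]]))
                   for (i, j), d in d_t.items())
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best)

# Noiseless case: the scene is a shuffled copy of the template, so the
# minimum-cost assignment recovers the original correspondence exactly.
rng = random.Random(0)
template = [(rng.random(), rng.random()) for _ in range(5)]
scene = [template[i] for i in (3, 0, 4, 1, 2)]
match = match_brute_force(template, scene)
```

In the noiseless case the distance-preservation cost of the true correspondence is exactly zero, which is the setting in which the optimality guarantee discussed above applies.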
Parallel and distributed methods for evolutionary algorithms have concentrated on maintaining multiple populations of genotypes,
where each genotype in a population encodes a potential solution to the problem. In this paper, we investigate the parallelisation
of the genotype itself into a collection of independent chromosomes which can be evaluated in parallel. We call this multi-chromosomal evolution
(MCE). We test this approach using Cartesian Genetic Programming and apply MCE to a series of digital circuit design problems
to compare the efficacy of MCE with a conventional single chromosome approach (SCE). MCE can be readily used for many digital
circuits because they have multiple outputs. In MCE, an independent chromosome is assigned to each output. When we compare
MCE with SCE we find that MCE allows us to evolve solutions much faster. In addition, in some cases we were able to evolve
solutions with MCE that we were unable to evolve with SCE. In a case study, we investigate how MCE can be applied to a single-objective
problem in the domain of image classification, namely, the classification of breast X-rays for cancer. To apply MCE to this
problem, we identify regions of interest (RoI) from the mammograms, divide the RoI into a collection of sub-images and use
a chromosome to classify each sub-image. This problem allows us to evaluate various evolutionary mutation operators that swap pairs of chromosomes, either randomly or topographically, or reuse chromosomes in place of other chromosomes.
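The MCE decomposition can be illustrated with a deliberately simplified stand-in for CGP: treat each output column of a truth table as its own chromosome and evolve each independently. All details below (bitstring genomes, a (1+1) hill climber, the half-adder target) are invented for the illustration and are much simpler than the Cartesian Genetic Programming encoding used in the paper.

```python
import random

def evolve_column(target, rng, max_evals=500):
    """(1+1) hill climber over one output column of a truth table.

    Under MCE, each output of a multi-output circuit gets its own
    independently evolved chromosome like this one; under SCE a single
    genome must satisfy all output columns at once.
    """
    genome = [rng.randrange(2) for _ in range(len(target))]
    fit = sum(g == t for g, t in zip(genome, target))
    for _ in range(max_evals):
        if fit == len(target):
            break
        child = genome[:]
        child[rng.randrange(len(child))] ^= 1     # flip one bit
        f = sum(g == t for g, t in zip(child, target))
        if f >= fit:                              # accept non-worsening moves
            genome, fit = child, f
    return genome, fit

# Half adder over inputs 00, 01, 10, 11: evolve the sum and carry
# output columns as two independent chromosomes (the MCE decomposition).
rng = random.Random(0)
sum_genome, sum_fit = evolve_column([0, 1, 1, 0], rng)      # sum = XOR
carry_genome, carry_fit = evolve_column([0, 0, 0, 1], rng)  # carry = AND
```

The point of the decomposition is credit assignment: progress on one output can never be undone by a mutation aimed at another, which is one intuition for why MCE evolves multi-output circuits faster than a single shared genome.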
In this paper we present a novel methodology based on non-parametric deformable prototype templates for reconstructing the
outline of a shape from a degraded image. Our method is versatile and fast and has the potential to provide an automatic procedure
for classifying pathologies. We test our approach on synthetic and real data from a variety of medical and biological applications.
In these studies it is important to reconstruct accurately the shape of the object under investigation from very noisy data.
Here we assume that we have some prior knowledge about the object outline represented by a prototype shape. Our procedure
deforms this shape by means of non-affine transformations and the contour is reconstructed by minimizing a newly developed
objective function that depends on the transformation parameters. We introduce an iterative template deformation procedure
in which the scale of the deformation decreases as the algorithm proceeds. We compare our results with those from a Gaussian
Mixture Model segmentation and two state-of-the-art Level Set methods. This comparison shows that the proposed procedure performs
consistently well on both real and simulated data. As a by-product we develop a new filter that recovers the connectivity
of a shape.
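The deformation-and-minimisation loop described in the abstract can be sketched in a few lines. The example below substitutes a simple similarity transform for the paper's non-affine deformations and a greedy random search for its optimiser; only the coarse-to-fine idea, a deformation scale that decreases as the algorithm proceeds, is carried over. All names and constants are invented for the illustration.

```python
import math
import random

def transform(prototype, params):
    """Similarity transform (scale, rotation, translation) of a contour.
    The paper uses richer non-affine deformations; this is the simplest case."""
    s, theta, tx, ty = params
    c, q = math.cos(theta), math.sin(theta)
    return [(s * (c * x - q * y) + tx, s * (q * x + c * y) + ty)
            for x, y in prototype]

def objective(prototype, params, data):
    """Mean squared distance between the deformed template and the data points."""
    pts = transform(prototype, params)
    return sum((px - dx) ** 2 + (py - dy) ** 2
               for (px, py), (dx, dy) in zip(pts, data)) / len(data)

def fit(prototype, data, iters=3000, seed=0):
    """Greedy local search whose deformation scale shrinks as the
    algorithm proceeds, mirroring the coarse-to-fine iteration."""
    rng = random.Random(seed)
    params = [1.0, 0.0, 0.0, 0.0]
    best = objective(prototype, params, data)
    step = 0.5
    for _ in range(iters):
        cand = [p + rng.gauss(0.0, step) for p in params]
        f = objective(prototype, cand, data)
        if f < best:
            params, best = cand, f
        step *= 0.999   # decrease the deformation scale each iteration
    return params, best

# Prototype: unit circle sampled at 40 points.  Data: a scaled, rotated,
# shifted copy of it corrupted by Gaussian noise.
proto = [(math.cos(2 * math.pi * k / 40), math.sin(2 * math.pi * k / 40))
         for k in range(40)]
rng = random.Random(1)
data = [(x + rng.gauss(0.0, 0.05), y + rng.gauss(0.0, 0.05))
        for x, y in transform(proto, [1.5, 0.4, 2.0, -1.0])]
params, err = fit(proto, data)
start_err = objective(proto, [1.0, 0.0, 0.0, 0.0], data)
```

Large early deformations let the template jump toward the object, while the shrinking scale refines the contour against the noisy data, the same coarse-to-fine behaviour the iterative procedure above relies on.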
Francesco de Pasquale
received his Ph.D. in Applied Statistics from the University of Plymouth, United Kingdom, in 2004, with a thesis on Bayesian and template-based methods for image analysis. Since his degree in Physics, obtained at the University of Rome ‘La Sapienza’ in 1999, his work has focused on developing models and methods for Magnetic Resonance Imaging, in particular image registration, classification and segmentation in a Bayesian framework. After a two-year appointment as a Lecturer at the University of Plymouth from 2003 to 2004, he is now a postdoctoral researcher at the ITAB, Institute for Advanced Biomedical Technologies, University of Chieti, Italy, where he works on the analysis of fMRI and MEG data.
Julian Stander
was born in Plymouth, UK, in 1964. He received a BA in Mathematics with first-class honours from the University of Oxford in 1987, a Diploma in Mathematical Statistics with distinction from the University of Cambridge in 1988, and a PhD from the University of Bath in 1992. He has been a lecturer at the School of Mathematics and Statistics, University of Plymouth, since 1993, and was promoted to Reader in 2006. His fields of interest are applications of statistics, including image analysis, spatial modelling and disclosure limitation. He has published over 20 refereed journal articles.
Contemporary attackers, mainly motivated by financial gain, consistently devise sophisticated penetration techniques to access important information or data. The growing use of Internet of Things (IoT) technology in the contemporary convergence environment to connect to corporate networks and cloud-based applications only worsens this situation, as it facilitates the effortless emergence of multiple new attack vectors. As such, existing intrusion detection systems suffer from performance degradation, mainly because of insufficient consideration of these new attack surfaces and poorly modeled detection systems. To address this problem, we designed a blended threat detection approach that considers the possible impact and dimensionality of the new attack surfaces arising from the aforementioned convergence, which we collectively refer to as the internet of blended environment. The proposed approach encompasses an ensemble of heterogeneous probabilistic autoencoders that leverages the corresponding advantages of a convolutional variational autoencoder and a long short-term memory variational autoencoder. An extensive experimental analysis conducted on the TON_IoT dataset demonstrated 96.02% detection accuracy. Furthermore, the performance of the proposed approach was compared with various single-model (autoencoder)-based network intrusion detection approaches: the autoencoder, variational autoencoder, convolutional variational autoencoder, and long short-term memory variational autoencoder. The proposed model outperformed all compared models, demonstrating F1-score improvements of 4.99%, 2.25%, 1.92%, and 3.69%, respectively.
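The ensemble idea above, combining heterogeneous anomaly scorers so their advantages complement each other, can be sketched without any deep learning. The stand-in below replaces the paper's convolutional and LSTM variational autoencoders with a per-feature Gaussian scorer and a k-NN distance scorer, standardises each score on the training data, and averages them; every name, feature, and constant is invented for the example.

```python
import math
import random
import statistics

def gaussian_score(x, means, stds):
    """Squared z-distance per feature: a reconstruction-error-like score."""
    return sum(((xi - m) / s) ** 2 for xi, m, s in zip(x, means, stds))

def knn_score(x, train, k=5):
    """Distance to the k-th nearest training point."""
    return sorted(math.dist(x, t) for t in train)[k - 1]

class EnsembleDetector:
    """Averages standardized anomaly scores from two heterogeneous detectors.

    A schematic stand-in for the paper's ensemble: the CNN-VAE and LSTM-VAE
    are replaced here by a Gaussian scorer and a k-NN distance scorer.
    """

    def __init__(self, train, k=5):
        cols = list(zip(*train))
        self.means = [statistics.fmean(c) for c in cols]
        self.stds = [statistics.stdev(c) for c in cols]
        self.train, self.k = train, k
        # Standardize each detector's score with its training-set statistics
        # so that heterogeneous score scales can be averaged meaningfully.
        self.norms = []
        for detector in (self._gauss, self._knn):
            s = [detector(x) for x in train]
            self.norms.append((statistics.fmean(s), statistics.stdev(s)))

    def _gauss(self, x):
        return gaussian_score(x, self.means, self.stds)

    def _knn(self, x):
        return knn_score(x, self.train, self.k)

    def score(self, x):
        """Higher means more anomalous."""
        z = [(d(x) - m) / s
             for d, (m, s) in zip((self._gauss, self._knn), self.norms)]
        return sum(z) / len(z)

# Train on "benign traffic" drawn from a 2-D Gaussian; points far from it
# should receive a much higher ensemble score.
rng = random.Random(0)
normal_traffic = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(200)]
det = EnsembleDetector(normal_traffic)
```

Standardising each member's score before averaging is what lets detectors with very different score scales contribute equally, the same role the probabilistic framing plays for the heterogeneous autoencoders in the paper.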
Organizations are increasingly delegating customer inquiries to speech dialog systems (SDSs) to save personnel resources. However, customers often report frustration when interacting with SDSs due to poorly designed solutions. Despite these issues, design knowledge for SDSs in customer service remains elusive. To address this research gap, we employ the design science approach and devise a design theory for SDSs in customer service. The design theory, including 14 requirements and five design principles, draws on the principles of dialog theory and undergoes validation in three iterations using five hypotheses. A summative evaluation comprising a two-phase experiment with 205 participants yields positive results regarding the user experience of the artifact. This study contributes to design knowledge for SDSs in customer service and supports practitioners striving to implement similar systems in their organizations.