The effects of log-normal pore size distributions on the rejection of uncharged solutes and NaCl at hypothetical nanofiltration membranes have been assessed theoretically. The importance of pore radius-dependent properties such as solvent viscosity and dielectric constant is increased by the introduction of a pore size distribution into the calculations. However, the effect of porewise variation in viscosity is less apparent when considered at a defined applied pressure rather than at a defined flux, showing a further advantage of basing theoretical analysis of nanofiltration on applied pressure. Truncated pore size distributions gave better agreement with experimental rejection data for a Desal-DK nanofiltration membrane than full distributions. Such truncation is in agreement with the findings of atomic force microscopy (AFM). Analysis of uncharged solute rejection data alone could not give useful information about the membrane pore size distribution, nor could such a distribution be obtained quantitatively directly from AFM images. However, using the shape of the distribution obtained by AFM in conjunction with experimental rejection data for an uncharged solute allows calculation of corrected distributions. Importantly, incorporating such a corrected pore size distribution into calculations of NaCl rejection gave better agreement with experimental data at high pressure, the region of industrial interest, than calculations assuming uniform pores.
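A minimal sketch of the kind of calculation involved: sampling pore radii from a truncated log-normal distribution and flux-weighting them, assuming cylindrical pores with Hagen–Poiseuille \(r^4\) flux scaling. All numerical values and function names are illustrative, not taken from the paper.

```python
import numpy as np

def sample_truncated_lognormal(r_mean, sigma_g, r_max, n=100_000, seed=0):
    """Sample pore radii from a log-normal distribution truncated at r_max.

    r_mean  : geometric mean pore radius (nm)
    sigma_g : geometric standard deviation (dimensionless, > 1)
    r_max   : truncation radius (nm); samples above it are rejected
    """
    rng = np.random.default_rng(seed)
    radii = rng.lognormal(mean=np.log(r_mean), sigma=np.log(sigma_g), size=n)
    return radii[radii <= r_max]  # simple rejection-based truncation

# Flux through a cylindrical pore scales as r**4 (Hagen-Poiseuille), so the
# largest pores dominate transport and hence the observed rejection.
radii = sample_truncated_lognormal(r_mean=0.45, sigma_g=1.3, r_max=0.9)
flux_weights = radii**4 / np.sum(radii**4)
print("flux-weighted mean pore radius:", np.sum(flux_weights * radii))
```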
Some basic aspects of the kinetics and mechanisms of anionic and cationic ring-opening polymerization of cyclic siloxanes are discussed in connection with their use in polymer synthesis. The emphasis is put on the polymerization of strained-ring monomers, such as cyclic trisiloxanes, since these provide the possibility of tailoring the polymer structure. Much attention is devoted to association phenomena and to oligomer formation processes. This review is from the Second International Topical Workshop, Advances in Silicon-Based Polymer Science.
Evolution-in-materio uses evolutionary algorithms to exploit properties of materials to solve computational problems without requiring a detailed understanding of such properties. We show that using a purpose-built hardware platform called Mecobo, it is possible to solve computational problems by evolving voltages and signals applied to an electrode array covered with a carbon nanotube–polymer composite. We demonstrate for the first time that this methodology can be applied to function optimization and also to the tone discriminator problem (TDP). For function optimization, we evaluate the approach on a suite of optimization benchmarks and obtain results that in some cases come very close to the global optimum or are comparable with those obtained using well-known software-based evolutionary approaches. We also obtain good results in comparison with prior work on the tone discriminator problem. In the case of the TDP, we also investigated the relative merits of different mixtures of materials and organizations of the electrode array.
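To make the evolutionary loop concrete, here is a minimal software-only sketch of the kind of evolutionary search used for function optimization, with the material/hardware evaluation replaced by a standard benchmark objective (the sphere function); the Mecobo platform itself is not modelled, and all parameters are illustrative.

```python
import numpy as np

def sphere(x):
    """Benchmark objective (global optimum 0 at the origin)."""
    return float(np.sum(x**2))

def evolve(objective, dim=8, pop=20, gens=200, sigma=0.3, seed=1):
    """(mu + lambda)-style evolutionary search over a real-valued genome.

    In evolution-in-materio the genome would encode the voltages/signals
    applied to the material; here it is evaluated directly in software.
    """
    rng = np.random.default_rng(seed)
    population = rng.uniform(-5, 5, size=(pop, dim))
    for _ in range(gens):
        fitness = np.array([objective(ind) for ind in population])
        parents = population[np.argsort(fitness)[: pop // 2]]  # truncation selection
        children = parents + rng.normal(0, sigma, size=parents.shape)  # Gaussian mutation
        population = np.vstack([parents, children])
    best = min(population, key=objective)
    return best, objective(best)

best, fit = evolve(sphere)
print(f"best fitness after search: {fit:.3e}")
```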
A recent paper \cite{CaeCaeSchBar06} proposed a provably optimal, polynomial-time method for performing near-isometric point pattern matching by means of exact probabilistic inference in a chordal graphical model. Its fundamental result is that the chordal graph in question is shown to be globally rigid, implying that exact inference provides the same matching solution as exact inference in a complete graphical model. This implies that the algorithm is optimal when there is no noise in the point patterns. In this paper, we present a new graph which is also globally rigid but has an advantage over the graph proposed in \cite{CaeCaeSchBar06}: its maximal clique size is smaller, rendering inference significantly more efficient. However, this graph is not chordal, and thus standard Junction Tree algorithms cannot be directly applied. Nevertheless, we show that loopy belief propagation in such a graph converges to the optimal solution. This allows us to retain the optimality guarantee in the noiseless case, while substantially reducing both memory requirements and processing time. Our experimental results show that the accuracy of the proposed solution is indistinguishable from that of \cite{CaeCaeSchBar06} when there is noise in the point patterns.
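For orientation, the following is a generic max-product loopy belief propagation skeleton on a pairwise model with discrete states, of the kind such inference builds on; it assumes log-potentials supplied as arrays and is not the paper's specific globally rigid graph construction.

```python
import numpy as np

def loopy_max_product(unary, pairwise, edges, iters=50):
    """Max-product loopy BP on a pairwise model with discrete states.

    unary    : dict node -> (S,) array of log-potentials
    pairwise : dict (i, j) -> (S_i, S_j) array of log-potentials
    edges    : list of undirected (i, j) pairs
    Returns an assignment decoded from the max-marginal beliefs.
    """
    # one message per directed edge, over the states of the receiving node
    msgs = {(i, j): np.zeros(len(unary[j]))
            for (a, b) in edges for (i, j) in ((a, b), (b, a))}
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            theta = pairwise[i, j] if (i, j) in pairwise else pairwise[j, i].T
            incoming = unary[i] + sum(msgs[k, t] for (k, t) in msgs
                                      if t == i and k != j)
            m = np.max(theta + incoming[:, None], axis=0)
            new[(i, j)] = m - m.max()  # normalise to keep values bounded
        msgs = new
    beliefs = {i: unary[i] + sum(msgs[k, t] for (k, t) in msgs if t == i)
               for i in unary}
    return {i: int(np.argmax(b)) for i, b in beliefs.items()}
```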
Parallel and distributed methods for evolutionary algorithms have concentrated on maintaining multiple populations of genotypes, where each genotype in a population encodes a potential solution to the problem. In this paper, we investigate the parallelisation of the genotype itself into a collection of independent chromosomes which can be evaluated in parallel. We call this multi-chromosomal evolution (MCE). We test this approach using Cartesian Genetic Programming and apply MCE to a series of digital circuit design problems to compare the efficacy of MCE with a conventional single-chromosome approach (SCE). MCE can be readily used for many digital circuits because they have multiple outputs. In MCE, an independent chromosome is assigned to each output. When we compare MCE with SCE, we find that MCE allows us to evolve solutions much faster. In addition, in some cases we were able to evolve solutions with MCE that we were unable to with SCE. In a case study, we investigate how MCE can be applied to a single-objective problem in the domain of image classification, namely, the classification of breast X-rays for cancer. To apply MCE to this problem, we identify regions of interest (RoI) from the mammograms, divide the RoI into a collection of sub-images and use a chromosome to classify each sub-image. This problem allows us to evaluate various evolutionary mutation operators which can swap chromosome pairs either randomly or topographically, or reuse chromosomes in place of other chromosomes.
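A minimal sketch of the MCE idea, with the Cartesian Genetic Programming genotype replaced by a simple lookup-table chromosome so the example stays self-contained: one (1 + λ) search per output column, each evolvable independently (and hence in parallel). All parameters and names are illustrative.

```python
import numpy as np

def evolve_output(target_column, genome_len=32, lam=4, gens=500, seed=0):
    """(1 + lambda) evolution of one chromosome against one circuit output.

    For illustration the 'chromosome' is a lookup table over all input
    combinations rather than a Cartesian GP graph.
    """
    rng = np.random.default_rng(seed)
    parent = rng.integers(0, 2, size=genome_len)
    parent_fit = np.sum(parent == target_column)
    for _ in range(gens):
        for _ in range(lam):
            child = parent.copy()
            flip = rng.integers(0, genome_len, size=2)  # point mutations
            child[flip] ^= 1
            fit = np.sum(child == target_column)
            if fit >= parent_fit:  # accept equal fitness (neutral drift)
                parent, parent_fit = child, fit
        if parent_fit == genome_len:
            break
    return parent, parent_fit

# MCE: one independent chromosome per output column, evolvable in parallel.
truth_table = np.random.default_rng(42).integers(0, 2, size=(32, 3))  # 3 outputs
chromosomes = [evolve_output(truth_table[:, k], seed=k) for k in range(3)]
print([int(fit) for _, fit in chromosomes])
```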
In this paper we present a novel methodology based on non-parametric deformable prototype templates for reconstructing the outline of a shape from a degraded image. Our method is versatile and fast and has the potential to provide an automatic procedure for classifying pathologies. We test our approach on synthetic and real data from a variety of medical and biological applications. In these studies it is important to accurately reconstruct the shape of the object under investigation from very noisy data. Here we assume that we have some prior knowledge about the object outline, represented by a prototype shape. Our procedure deforms this shape by means of non-affine transformations, and the contour is reconstructed by minimizing a newly developed objective function that depends on the transformation parameters. We introduce an iterative template deformation procedure in which the scale of the deformation decreases as the algorithm proceeds. We compare our results with those from a Gaussian Mixture Model segmentation and two state-of-the-art Level Set methods. This comparison shows that the proposed procedure performs consistently well on both real and simulated data. As a by-product we develop a new filter that recovers the connectivity of a shape.
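A minimal sketch of the coarse-to-fine idea, assuming a prototype contour and an observed contour with point-to-point correspondence. The non-affine deformation here is a low-order Fourier perturbation of the radial coordinate, a stand-in for the paper's transformation, and the objective is a plain sum of squared distances rather than the newly developed objective function.

```python
import numpy as np
from scipy.optimize import minimize

def deform(prototype, params, scale):
    """Non-affine deformation: low-order Fourier perturbation of the
    contour's radial coordinate, with amplitude bounded by `scale`."""
    t = np.linspace(0, 2 * np.pi, len(prototype), endpoint=False)
    radial = sum(scale * amp * np.sin((k + 1) * t + phase)
                 for k, (amp, phase) in enumerate(zip(params[0::2], params[1::2])))
    centre = prototype.mean(axis=0)
    offsets = prototype - centre
    r = np.linalg.norm(offsets, axis=1)
    return centre + offsets * ((r + radial) / np.maximum(r, 1e-9))[:, None]

def fit_contour(prototype, observed, n_modes=4, scales=(0.5, 0.2, 0.05)):
    """Coarse-to-fine fit: the allowed deformation scale shrinks each round,
    and each round is warm-started from the previous solution."""
    params = np.zeros(2 * n_modes)
    for scale in scales:
        cost = lambda p, s=scale: np.sum((deform(prototype, p, s) - observed) ** 2)
        params = minimize(cost, params, method="Powell").x
    return deform(prototype, params, scales[-1])
```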
Francesco de Pasquale received his Ph.D. in Applied Statistics from the University of Plymouth, United Kingdom, in 2004, with a thesis on Bayesian and template-based methods for image analysis. Since obtaining his degree in Physics at the University of Rome ‘La Sapienza’ in 1999, his work has focused on developing models and methods for Magnetic Resonance Imaging, in particular image registration, classification and segmentation in a Bayesian framework. After a two-year appointment as a Lecturer at the University of Plymouth from 2003 to 2004, he is now a post-doctoral researcher at ITAB, the Institute for Advanced Biomedical Technologies, University of Chieti, Italy, where he works on the analysis of fMRI and MEG data.
Julian Stander was born in Plymouth, UK, in 1964. He received a BA in Mathematics with first-class honours from the University of Oxford in 1987, a Diploma in Mathematical Statistics with distinction from the University of Cambridge in 1988, and a PhD from the University of Bath in 1992. He has been a lecturer at the School of Mathematics and Statistics, University of Plymouth, since 1993, and was promoted to Reader in 2006. His fields of interest are applications of statistics, including image analysis, spatial modelling and disclosure limitation. He has published over 20 refereed journal articles.
Contemporary attackers, mainly motivated by financial gain, consistently devise sophisticated penetration techniques to access important information or data. The growing use of Internet of Things (IoT) technology in the contemporary convergence environment to connect to corporate networks and cloud-based applications only worsens this situation, as it allows multiple new attack vectors to emerge effortlessly. Existing intrusion detection systems consequently suffer performance degradation, mainly because of insufficient consideration of these new attack surfaces and poorly modeled detection systems. To address this problem, we designed a blended threat detection approach, considering the possible impact and dimensionality of new attack surfaces due to the aforementioned convergence. We collectively refer to the convergence of different technology sectors as the internet of blended environment. The proposed approach encompasses an ensemble of heterogeneous probabilistic autoencoders that leverage the corresponding advantages of a convolutional variational autoencoder and a long short-term memory variational autoencoder. An extensive experimental analysis conducted on the TON_IoT dataset demonstrated 96.02% detection accuracy. Furthermore, the performance of the proposed approach was compared with various single-model (autoencoder-based) network intrusion detection approaches: an autoencoder, a variational autoencoder, a convolutional variational autoencoder, and a long short-term memory variational autoencoder. The proposed model outperformed all compared models, demonstrating F1-score improvements of 4.99%, 2.25%, 1.92%, and 3.69%, respectively.
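A sketch of the ensemble scoring step, assuming two pretrained probabilistic autoencoders exposed through a hypothetical `.reconstruct` interface; the blending by z-scored reconstruction error is illustrative, not the paper's exact scoring rule.

```python
import numpy as np

def ensemble_anomaly_scores(x, models, weights=None):
    """Blend per-model reconstruction errors into one anomaly score.

    models  : list of objects with a .reconstruct(x) method (e.g. a trained
              convolutional VAE and an LSTM VAE) -- hypothetical interface
    weights : optional blending weights, defaults to uniform
    """
    weights = np.ones(len(models)) / len(models) if weights is None else np.asarray(weights)
    errors = []
    for m in models:
        recon = m.reconstruct(x)
        err = np.mean((x - recon) ** 2, axis=tuple(range(1, x.ndim)))  # per-sample MSE
        errors.append((err - err.mean()) / (err.std() + 1e-9))         # z-score per model
    return weights @ np.vstack(errors)

def classify(scores, threshold):
    """Flag samples whose blended score exceeds a validation-chosen threshold."""
    return scores > threshold
```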
Organizations are increasingly delegating customer inquiries to speech dialog systems (SDSs) to save personnel resources. However, customers often report frustration when interacting with SDSs due to poorly designed solutions. Despite these issues, design knowledge for SDSs in customer service remains elusive. To address this research gap, we employ the design science approach and devise a design theory for SDSs in customer service. The design theory, including 14 requirements and five design principles, draws on the principles of dialog theory and undergoes validation in three iterations using five hypotheses. A summative evaluation comprising a two-phase experiment with 205 participants yields positive results regarding the user experience of the artifact. This study contributes to design knowledge for SDSs in customer service and supports practitioners striving to implement similar systems in their organizations.
The time-dependent Stokes equations were solved in the vicinity of two spheres colliding in a viscous fluid with kinematic viscosity \(\nu\) to determine the rate of change of the hydrodynamic forces during the large accelerations associated with Hertzian mechanical contact of small duration \(\tau_{\rm c}\). It was assumed that the gap clearance remains finite during contact and is approximately equal to the height \(\sigma\) of surface micro-asperities. The initial condition corresponds to the steady-state axisymmetric solution of Cooley and O'Neill (Mathematika 16:37–49, 1969), and the initial value problem for the time-dependent Stokes streamfunction was solved using Laplace transform methods. Assuming that \(\sigma\) is small compared to the sphere radius \(a\), we used singular perturbation expansions and tangent-sphere coordinates to obtain an asymptotic solution for the viscous flow in the gap and around the moving sphere. The solution provides the dependence of the resistance, added mass and history forces on \(\sigma\), the sphere velocity and acceleration, and the ratio of the sphere diameters. We found that the relative importance of viscous and mechanical forces during contact depends on a new Stokes number \(St_{\rm c}=\sigma^2/(\nu\tau_{\rm c})\). Integration of Newton's equation for the motion of the sphere during mechanical contact showed that there is a critical \(St_{\rm c}=O(\sigma/a)\) for which there is no rebound at the end of contact.
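The rebound criterion can be illustrated by integrating Newton's equation for a lumped model of contact: a Hertzian elastic force plus a linear viscous resistance standing in for the asymptotic gap-flow force derived in the paper. All constants here are invented for illustration and do not reproduce the paper's asymptotic result.

```python
def contact_motion(v0, m=1.0, k=1e4, c=50.0, dt=1e-6):
    """Integrate Newton's equation for a sphere during mechanical contact.

    Hertzian elastic force ~ k * delta**1.5 opposes penetration delta;
    a lumped viscous resistance ~ c * v stands in for the gap-flow force.
    Returns the speed at separation; a vanishing value means effectively
    no rebound. Assumes v0 > 0 (sphere approaching).
    """
    delta, v = 0.0, v0  # penetration depth and penetration rate
    while True:
        a = (-k * max(delta, 0.0) ** 1.5 - c * v) / m
        v += a * dt
        delta += v * dt
        if delta <= 0.0:  # sphere has left contact
            return -v     # rebound speed

for c in (10.0, 200.0):   # weak vs strong viscous damping
    print(f"damping c={c}: rebound speed = {contact_motion(v0=1.0, c=c):.4f}")
```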
This paper proposes a scenario-based two-stage stochastic programming model with recourse for master production scheduling under demand uncertainty. We integrate the model into a hierarchical production planning and control system that is common in industrial practice. To reduce the problem of disaggregating the master production schedule, we use a relatively low aggregation level (compared to other work on stochastic programming for production planning). Consequently, we must consider many more scenarios to model demand uncertainty. Additionally, we modify standard modelling approaches for stochastic programming because they would otherwise produce many infeasible problems due to rolling planning horizons and interdependencies between master production scheduling and successive planning levels. To evaluate the performance of the proposed models, we generate a customer order arrival process, execute production planning in a rolling horizon environment and simulate the realisation of the planning results. In our experiments, the tardiness of customer orders can be nearly eliminated by using the proposed stochastic programming model, at the cost of increased inventory levels and additional capacity.
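A toy scenario-based two-stage model in the same spirit, written with PuLP: production quantities are first-stage (scenario-independent) decisions, while inventory and backorders are per-scenario recourse variables. All data and cost figures are invented, and the paper's actual model (rolling horizons, hierarchical coupling, disaggregation) is far richer.

```python
from pulp import LpProblem, LpVariable, lpSum, LpMinimize

# Toy two-stage model: one product, 3 periods, 3 demand scenarios.
periods = range(3)
scenarios = {"low": 0.25, "base": 0.5, "high": 0.25}        # probabilities
demand = {"low": [40, 50, 45], "base": [60, 70, 65], "high": [85, 95, 90]}
h, b, cap = 1.0, 10.0, 80                                    # holding cost, backorder cost, capacity

m = LpProblem("two_stage_MPS", LpMinimize)
x = [LpVariable(f"prod_{t}", 0, cap) for t in periods]       # first stage: shared by all scenarios
inv = {(s, t): LpVariable(f"inv_{s}_{t}", 0) for s in scenarios for t in periods}
back = {(s, t): LpVariable(f"back_{s}_{t}", 0) for s in scenarios for t in periods}

for s in scenarios:
    for t in periods:
        prev_inv = inv[s, t - 1] if t > 0 else 0             # start with empty inventory
        prev_back = back[s, t - 1] if t > 0 else 0
        # per-scenario inventory balance (recourse): carryover + production
        # covers demand, with surplus/shortfall carried forward
        m += prev_inv - prev_back + x[t] - demand[s][t] == inv[s, t] - back[s, t]

# expected second-stage cost over all scenarios
m += lpSum(p * lpSum(h * inv[s, t] + b * back[s, t] for t in periods)
           for s, p in scenarios.items())
m.solve()
print([v.value() for v in x])
```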