Similar Documents
20 similar documents found (search time: 62 ms)
1.
In this work, two different manufacturing approaches are presented that create water-repellency (hydrophobicity and superhydrophobicity) for acrylonitrile butadiene styrene (ABS) structures. In particular, this is the first study to render three-dimensional (3-D) printed ABS surfaces with internal flow paths superhydrophobic. The first approach uses standard wet chemical processing for surface preparation, after which a fluorocarbon layer is deposited by dip coating or vapor deposition. This first approach creates hydrophobic surfaces with roll-off angles of less than 30°. In the second approach, the ABS structures are dip-coated with a commercial rubber coating solution and subsequently surface-modified by reactive ion etching (RIE) with fluorinated gases to render the samples superhydrophobic, with roll-off angles as low as 6°. To further enhance their water-repellency, the dip-coating rubber solution is mixed with polytetrafluoroethylene (PTFE) colloidal dispersions to form a nanocomposite layer prior to the RIE process. The PTFE particles induce surface roughness as well as hydrophobicity. The modified surfaces created by the two approaches are further characterized by scanning electron microscopy and water drainage performance. Water drainage (prevention of water retention) is especially important for the high thermal efficiency of 3-D printed heat exchangers. However, water-repellency for ABS is also of interest for a broader range of applications that use this material.

2.
In this paper we present an approach for the Bayesian estimation of piecewise constant failure rates under the constraint that the constant value of the failure rate in an interval of time is greater than a function of its values in the prior intervals. We apply this approach to the estimation of piecewise constant failure rates for conditional IFR, IFRA and NBU distributions. The prior distribution for the failure rate in each interval is specified through gamma distributions, with functions of the failure rate values from the remaining intervals as location parameters. With this approach, the prior distribution parameters have interpretations through prior means and variances of the values of the piecewise constant failure rate. The posterior distributions and expected values can be found in terms of gamma functions, without the need for numerical integration. We apply this approach to a model for reliability estimation when two operational modes exist and the number of failures in each operational mode is unknown. Finally, a numerical example is presented in which simulations of posterior densities are carried out.
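To make the conjugate structure concrete, the sketch below performs the standard unconstrained gamma update for a piecewise constant failure rate: each interval's posterior is again a gamma distribution, with shape increased by the observed failure count and rate increased by the accumulated exposure. This shows only the unconstrained building block, not the authors' ordering constraint or their choice of location parameters; the function name and default hyperparameters are illustrative assumptions.

```python
import numpy as np

def piecewise_gamma_posterior(failure_times, censor_times, breakpoints, a0=1.0, b0=1.0):
    """Conjugate gamma update for a piecewise constant failure rate.

    breakpoints: interval edges [t0, t1, ..., tK] with t0 = 0.
    Returns posterior (shape, rate) arrays, one entry per interval.
    """
    K = len(breakpoints) - 1
    shape = np.full(K, a0)   # prior shape per interval
    rate = np.full(K, b0)    # prior rate per interval
    for k in range(K):
        lo, hi = breakpoints[k], breakpoints[k + 1]
        for t in failure_times:
            shape[k] += (lo < t <= hi)             # failure observed in interval k
            rate[k] += max(0.0, min(t, hi) - lo)   # exposure accumulated in interval k
        for t in censor_times:
            rate[k] += max(0.0, min(t, hi) - lo)   # censored units contribute exposure only
    return shape, rate
```

The Bayes estimate of the rate in interval k is then simply shape[k] / rate[k], available in closed form without numerical integration, which mirrors the closed-form posterior expectations described in the abstract.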

3.
IEEE Sensors Journal, 2009, 9(12): 1763–1771
Potentiometry with ion-selective electrodes (ISEs) provides a simple and cheap approach for estimating ionic activities. However, a well-known shortcoming of ISEs is their lack of selectivity. Recent works have suggested that smart sensor arrays equipped with a blind source separation (BSS) algorithm offer a promising solution to the interference problem. In fact, the use of blind methods eases the time-demanding calibration stages needed in typical approaches. In this work, we develop a Bayesian source separation method for processing the outputs of an ISE array. The major benefit of the Bayesian framework is the possibility of taking prior information into account, which can result in more realistic solutions. The inference stage is conducted by means of Markov chain Monte Carlo (MCMC) methods. The validity of our approach is supported by experiments with both artificial and real data.

4.
This paper investigates a meta-heuristic solution approach to the early/tardy single machine scheduling problem with a common due date and sequence-dependent setup times. The objective of this problem is to minimise the total earliness and tardiness of jobs assigned to a single machine. The popularity of just-in-time (JIT) and lean manufacturing scheduling approaches makes the minimisation of earliness and tardiness important and relevant. In this research, the early/tardy problem is solved by Meta-RaPS (meta-heuristic for randomised priority search), an iterative meta-heuristic: a generic, high-level strategy that modifies greedy algorithms by inserting a random element. Here, a greedy heuristic, the shortest adjusted processing time rule, is modified by Meta-RaPS, and the good solutions are improved by a local search algorithm. A comparison with existing early/tardy problem (ETP) solution procedures on well-known test problems shows that Meta-RaPS produces better solutions in terms of percentage difference from optimal. The results provide high-quality solutions in reasonable computation time, demonstrating the effectiveness of the simple and practical framework of Meta-RaPS.
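The core Meta-RaPS mechanism fits in a few lines: at each construction step the greedy rule is followed with some probability, and otherwise a job is drawn from all candidates whose priority lies within a restriction percentage of the best. The sketch below shows this randomised construction; the priority callback, default parameter values, and function names are illustrative assumptions, and the local search improvement pass is omitted.

```python
import random

def meta_raps_construct(jobs, priority, p=0.3, r=0.1):
    """One Meta-RaPS construction pass over a greedy sequencing rule.

    priority(job, partial_sequence): greedy score, lower is better and
    assumed positive (e.g. shortest adjusted processing time).
    p: probability of taking the pure greedy choice.
    r: restriction fraction defining the candidate pool otherwise.
    """
    sequence, remaining = [], list(jobs)
    while remaining:
        scored = sorted(remaining, key=lambda j: priority(j, sequence))
        if random.random() < p:
            pick = scored[0]                          # greedy-best job
        else:
            best = priority(scored[0], sequence)
            pool = [j for j in scored
                    if priority(j, sequence) <= best * (1 + r)]
            pick = random.choice(pool)                # any job close enough to best
        sequence.append(pick)
        remaining.remove(pick)
    return sequence
```

In practice the construction pass is repeated many times, keeping the best sequence found, and only the promising sequences are handed to local search, which is the overall loop the abstract describes.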

5.
Organic solar cells (OSCs) have made rapid progress as a sustainable energy source. However, record-breaking devices have not been compatible with large-scale solution processing, in particular because of the use of halogenated, environmentally threatening solvents. Here, slot-die fabrication with hydrocarbon-based solvents is used to realize highly efficient and environmentally friendly OSCs. Slot-die coating, which is highly compatible with roll-to-roll processing, is used to fabricate photoactive films from a halogenated solvent (chlorobenzene (CB)) and hydrocarbon solvents (1,2,4-trimethylbenzene (TMB) and ortho-xylene (o-XY)). Controlling the solution and substrate temperatures enables similar aggregation states in solution and similar kinetic processes during film formation. The blend film nanostructure is thereby optimized for the different solvents in the highly efficient PM6:Y6 blend, yielding similar morphologies and device efficiencies of 15.2%, 15.4%, and 15.6% for the CB, TMB, and o-XY solvents, respectively. This approach is successfully extended to other donor–acceptor combinations, demonstrating its excellent universality. The results combine a method to optimize the aggregation state and film-formation kinetics with the fabrication of OSCs using environmentally friendly solvents by slot-die coating, a critical finding for the future development of OSCs towards scalable production and high performance.

6.
Many algorithms for cell formation in cellular manufacturing have been developed over the past three decades. Some use binary data for cell formation, while others use production data such as operation sequences, processing times and production volumes. All these algorithms assume that the conversion from a job shop to cellular manufacturing is performed comprehensively, in other words that all the cells are formed at once. However, this is far from reality. In practice, cell formation is done incrementally, one cell after the other, rather than comprehensively. None of the algorithms developed so far addresses the issue of incremental cell formation. In this paper, the incremental cell formation problem is defined and various categories of the problem are described. One of these categories is selected for solution. Two methods, namely the branch and bound technique and a heuristic based on a multistage programming approach, are applied to solve the chosen problem. Data sets were generated to compare the two methods in terms of solution quality and computational time. It was found that the branch and bound technique gives solutions of superior quality but is computationally more demanding, whereas the multistage-programming heuristic is computationally far superior.

7.
Composites Science and Technology, 2006, 66(11–12): 1606–1614
Three methods to mix ceramic fillers, hydroxyapatite or β-tricalcium phosphate, with a polymer matrix, poly(L-lactic acid), are investigated as a first step prior to supercritical foaming to prepare porous composite structures for biomedical applications. First, the dry process consists of mixing ceramic powder and polymer pellets before a compression molding step. The second technique is based on the dispersion of ceramic fillers into a polymer–solvent solution. The third method is melt extrusion of a ceramic/polymer powder mixture. Each technique is first optimized by defining processing parameters suitable for the bioresorbable materials considered. A comparison of the three methods then shows that solvent or melt processing results in a more homogeneous filler distribution than the dry technique. Extrusion leads to composites with a higher modulus than solvent-prepared compounds and is a solvent-free approach; it is therefore selected to prepare ceramic/polymer blends before supercritical CO2 foaming.

8.
This paper reports a numerical approach for nonlinear problems in fluid mechanics based on the boundary element method (BEM), incorporating a particular solution (PS) technique and neural network (NN) methodology. The advantage of the present approach is that domain discretisation is completely eliminated, even for problems involving highly nonlinear viscoelastic materials. Two different integral equation (IE) formulations are described. The so-called direct formulation deals with physical variables (traction and velocity) as explicit unknowns, and the resulting IEs are therefore usually of the mixed type. In contrast, a new indirect formulation discussed in this paper always yields IEs of the second kind, in terms of surface single- and double-layer potentials, for certain problems with surfaces that are not closed. The discrete version of IEs of the second kind is numerically well-conditioned. The nonlinear problems of interest here arise in rheology, particularly the flow of polymeric liquids, whose constitutive equations are highly complex, nonlinear and usually implicit. Even with the BEM, one traditionally has to discretise the domain in order to evaluate the nonlinear effects. It is shown in this paper how domain discretisation can be completely eliminated from the discrete IE implementation with the help of particular solution and artificial NN techniques. The resulting method is illustrated with viscous and viscoelastic flow problems encountered in polymer processing analysis.

9.
In this paper, we present a branch and bound algorithm for the parallel batch scheduling of jobs with different processing times, release dates and unit sizes. There are identical machines with a fixed capacity, and the number of jobs in a batch cannot exceed the machine capacity. All jobs in a batch are processed together, and the processing time of a batch is given by the greatest processing time of the jobs it contains. We compare our method to a mixed integer program as well as a method from the literature that is capable of optimally solving instances with a single machine. Computational experiments show that our method is much more efficient than the other two in terms of the time needed to find the optimal solution.
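The defining structure of the problem is easy to state in code: a batch cannot start until all of its jobs are released, it runs for as long as its longest job, and it must respect the machine capacity. The sketch below is a plain schedule evaluator under those rules, the kind of routine one would use to check candidate solutions; it is not the branch and bound algorithm itself, and all names are illustrative.

```python
def evaluate_batch_schedule(machines_batches, capacity):
    """Completion times for a given batch schedule.

    machines_batches: one list of batches per machine, in processing order;
    each batch is a list of (processing_time, release_date) jobs.
    """
    completions = []
    for batches in machines_batches:
        t = 0.0
        for batch in batches:
            assert len(batch) <= capacity, "batch exceeds machine capacity"
            release = max(r for _, r in batch)    # batch waits for all its jobs
            duration = max(p for p, _ in batch)   # batch time = longest job in it
            t = max(t, release) + duration
            completions.append(t)
    return completions
```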

10.
A joint decision of cell formation and part scheduling is addressed for a cellular manufacturing system in which there may be multiple machines of each type and multiple parts of each type, and parts must be processed and transferred in batches. The joint decision problem is not only to assign batches and associated machine groups to cells, but also to sequence the processing of batches on each machine in order to minimise the total tardiness penalty cost. A nonlinear mixed integer programming model is proposed to formulate the problem. This model, with its nonlinear terms and integer variables, is difficult to solve efficiently for realistically sized problems. To solve the model for practical purposes, a scatter search approach with dispatching rules is proposed, which considers two different combination methods and two improvement methods to further expand the conceptual framework and implementation of scatter search so as to better fit the addressed problem. This scatter search approach interactively uses a combined dispatching rule to solve a scheduling sub-problem corresponding to each integer solution visited in the search process. A computational study is performed on a set of test problems of various dimensions, and the computational results demonstrate the effectiveness of the proposed approach.

11.
The problem of differentiating non-smooth functions of specimen displacements, measured during material removal, is discussed. This problem arises when employing the layer removal method, namely the method of rings and strips, for residual stress depth profiling. It is shown that this problem is ill-posed, and special solution methods are required in order to obtain a stable solution. The stability of the solution strongly affects the resulting accuracy of the residual stress evaluation in the investigated material. The present study discusses a numerical approach to solving such ill-posed problems. The proposed approach, based on Tikhonov regularization and a regularized finite difference method, provides a stable approximate solution, including its pointwise error estimation. The advantage of this approach is that it does not require any knowledge of the unknown exact solution; a pointwise error estimate of the measured data is the only prior information that must be available. In addition, the approach guarantees convergence of the approximate solution to the unknown exact one as the perturbation of the initial data approaches zero.
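As a concrete illustration of why regularization is needed here, the sketch below differentiates noisy, uniformly sampled data by Tikhonov's method: instead of differencing the data directly, it seeks the derivative whose cumulative integral best matches the data, while a first-difference penalty keeps the derivative from amplifying noise. This is the classical Tikhonov formulation of stable differentiation, related to but not identical to the paper's regularized finite differences; the function name and the rectangle-rule integration matrix are illustrative choices.

```python
import numpy as np

def tikhonov_derivative(y, h, alpha):
    """Estimate u ~ y' from noisy samples y on a uniform grid of spacing h
    by minimising ||K u - (y - y[0])||^2 + alpha * ||D u||^2."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    K = np.tril(np.full((n, n), h), -1)   # (K u)[i] ~ integral of u from t_0 to t_i
    D = np.diff(np.eye(n), axis=0)        # first-difference roughness penalty on u
    A = K.T @ K + alpha * (D.T @ D)       # normal equations of the penalised fit
    return np.linalg.solve(A, K.T @ (y - y[0]))
```

In the spirit of the paper, the weight alpha would be tied to the known pointwise error of the measured data, for example via the discrepancy principle, rather than chosen by hand.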

12.
In this paper, the proper generalized decomposition (PGD) is used for model reduction in the solution of an inverse heat conduction problem within the Bayesian framework. Two PGD reduced order models are proposed, and the approximation error model (AEM) is applied to account for the errors between the complete and reduced models. For the first PGD model, the direct problem solution is computed with a separate representation of each coordinate of the problem during the solution of the inverse problem. The second PGD model, on the other hand, is based on a generalized solution that integrates the unknown parameter as one of the coordinates of the decomposition. For the second model, the reduced solution of the direct problem is computed before the inverse problem, within the parameter space provided by the prior information about the parameters, which is required to be proper. The two reduced models are evaluated in terms of accuracy and reduction of computational time on a transient, three-dimensional, two-region inverse heat transfer problem. Both reduced models result in a substantial reduction of the computational time required for the solution of the inverse problem and, thanks to the approximation error model, provide accurate estimates of the unknown parameter.

13.
Localizing brain neural activity using the electroencephalography (EEG) neuroimaging technique is attracting an increasing response from neuroscience researchers and the medical community, because brain source localization has a variety of applications in the diagnosis of brain disorders. The problem is ill-posed in nature, since an infinite number of source configurations can produce the same potential at the head surface. Recently, a Bayesian technique called multiple sparse priors (MSP) was proposed as a solution. MSP expresses the source localization solution for the current densities associated with dipoles in terms of a prior source covariance matrix and a sensor covariance matrix. It then maximizes a free-energy cost function, under the assumption of a fixed number of hyperparameters or patches, to obtain the elements of the prior source covariance matrix. This work aims to enhance the free-energy maximization of MSP by allowing a variable number of patches, leading to better estimation of brain sources in terms of localization error. The performance of the modified MSP with a variable number of patches is compared with the original MSP on simulated and real EEG data. The results show a significant improvement in terms of localization errors.

14.
In this study, we consider a stochastic single machine scheduling problem in which setup times are both sequence-dependent and uncertain, while processing times and due dates are deterministic. Most studies in the literature consider uncertainty in processing times or due dates. However, in real-world applications (e.g. plastic moulding, appliance assembly), it is common to see varying setup times due to the availability of labour or setup tools. To capture this in machine scheduling, our objective is to minimise the total expected tardiness under uncertain sequence-dependent setup times. Several heuristics and dynamic programming algorithms have been developed for this NP-hard problem, but none of them provides an exact solution. In this study, a two-stage stochastic programming method is utilised for the optimal solution of the problem. In addition, a genetic algorithm approach is proposed to solve large instances approximately. Finally, the results of the stochastic approach are compared with the deterministic one to demonstrate the value of the stochastic solution.
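For intuition, the expected total tardiness of a candidate sequence under random setup times can be estimated by simple Monte Carlo averaging, which is also a natural fitness function for a genetic algorithm like the one mentioned above. The sketch below is such a generic sample-average evaluator; it is not the paper's two-stage stochastic program, and the sampler interface and names are assumptions.

```python
def expected_total_tardiness(sequence, proc, due, setup_sampler, n_samples=1000):
    """Sample-average estimate of expected total tardiness for a fixed sequence.

    proc[j], due[j]: deterministic processing time and due date of job j.
    setup_sampler(i, j): draws one setup time for job j following job i
    (i is None for the first job on the machine).
    """
    total = 0.0
    for _ in range(n_samples):
        t, prev = 0.0, None
        for j in sequence:
            t += setup_sampler(prev, j) + proc[j]   # one realisation of the schedule
            total += max(0.0, t - due[j])           # tardiness of job j in this draw
            prev = j
    return total / n_samples
```

For example, setup_sampler = lambda i, j: random.uniform(1.0, 3.0) models setups uniformly distributed between 1 and 3 time units.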

15.
We present an approach to uncertainty propagation in dynamic systems that exploits information provided by related experimental results along with their models. The approach relies on a solution mapping technique to approximate mathematical models by polynomial surrogate models. We use these surrogate models to formulate prediction bounds as polynomial optimizations, and recent results on polynomial optimization are then applied to solve the prediction problem. Two examples illustrating the key aspects of the proposed algorithm are given. The proposed algorithm offers a framework for collaborative data processing among researchers. This work was supported by the National Science Foundation, Information Technology Research Program, Grant No. CTS-0113985.
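The solution-mapping step amounts to fitting a low-order polynomial to a handful of expensive model runs over the parameter region; bounds on the surrogate then stand in for bounds on the model. The sketch below fits a quadratic surrogate in two parameters and bounds it by dense sampling, a deliberate simplification of the polynomial optimizations the paper actually solves; all names are illustrative.

```python
import numpy as np

def fit_quadratic_surrogate(model, samples):
    """Least-squares quadratic surrogate of a scalar model of two parameters."""
    X = np.asarray(samples, dtype=float)
    y = np.array([model(x) for x in X])
    basis = lambda x: np.array([1.0, x[0], x[1], x[0]**2, x[0]*x[1], x[1]**2])
    Phi = np.array([basis(x) for x in X])          # design matrix of monomials
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda x: basis(np.asarray(x, dtype=float)) @ coef

def surrogate_bounds(surrogate, lo, hi, n=60):
    """Crude prediction bounds by sampling the surrogate on a grid
    (the paper obtains rigorous bounds via polynomial optimization)."""
    vals = [surrogate((a, b))
            for a in np.linspace(lo[0], hi[0], n)
            for b in np.linspace(lo[1], hi[1], n)]
    return min(vals), max(vals)
```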

16.
Medical image processing is typically performed to diagnose a patient's brain tumor prior to surgery. In this study, a denoising and segmentation technique was developed to improve medical image processing. The proposed approach employs multiple modules. In the first module, the noisy brain tumor image is transformed into multiple low- and high-pass tetrolet coefficients. In the second module, the low-pass tetrolet coefficients are processed with a modified transform-based gamma correction method, while generalized cross-validation is applied to the high-pass tetrolet coefficients to obtain the best threshold value. In the third module, all enhanced coefficients are passed to a partial differential equation method. In the final module, the denoised image is segmented with Atanassov's intuitionistic fuzzy set histon-based fuzzy clustering, with centroid optimization by an elephant herding method. In this way, the tumor region is separated from the non-tumor region in magnetic resonance brain images. The method was assessed in terms of peak signal-to-noise ratio, mean square error, specificity, sensitivity, and accuracy. The experimental results showed that the suggested method is superior to traditional methods.

17.
The numerical solution of advection–diffusion equations has been a long-standing problem, and many numerical methods have to resort to artificial techniques to obtain stable and accurate solutions. In this paper, we present a meshless method based on thin plate radial basis functions (RBFs). The efficiency of the method in terms of computational processing time, accuracy and stability is discussed. The results are compared with findings from the dual reciprocity/boundary element and finite difference methods as well as with the analytical solution. Our analysis shows that the RBF method, with its simple implementation, generates excellent results and speeds up the computational processing time, independent of the shape of the domain and irrespective of the dimension of the problem.
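The building block of such a meshless method is an expansion in radial kernels centred on scattered nodes; for thin plate splines the kernel is phi(r) = r^2 log r. The sketch below constructs a thin-plate RBF interpolant on scattered 2-D points. A PDE solver would go further and collocate the advection–diffusion operator on this expansion; the code shows only the interpolation step, names are illustrative, and the plain kernel matrix is assumed nonsingular (production codes usually add a polynomial term to guarantee solvability).

```python
import numpy as np

def thin_plate_rbf(centers, values):
    """Scattered-data interpolant using thin plate splines phi(r) = r^2 log r."""
    centers = np.asarray(centers, dtype=float)

    def phi(r):
        # r^2 log r, with the removable singularity at r = 0 set to 0
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r > 0, r**2 * np.log(r), 0.0)

    # pairwise distances between centers build the kernel matrix
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    w = np.linalg.solve(phi(r), np.asarray(values, dtype=float))

    def interpolant(x):
        d = np.linalg.norm(np.asarray(x, dtype=float) - centers, axis=-1)
        return phi(d) @ w
    return interpolant
```

Because the kernel has no shape parameter to tune and no mesh to generate, the implementation stays this simple regardless of the domain geometry, which is the practical appeal the abstract emphasises.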

18.
Zhao M, Zhang W, Wang Z, Hou Q. Applied Optics, 2010, 49(32): 6286–6294
The deconvolution of blurred and noisy satellite images is an ill-posed inverse problem, which can be regularized under the Bayesian framework by introducing an appropriate image prior. In this paper, we derive a new image prior based on the state-of-the-art nonlocal means (NLM) denoising approach, within Markov random field theory. Inherited from NLM, the prior exploits the intrinsic high redundancy of satellite images and is able to encode the image's non-smooth information. Using this prior, we propose an inhomogeneous deconvolution technique for satellite images, termed nonlocal means-based deconvolution (NLM-D). Moreover, to make NLM-D unsupervised, we apply the L-curve approach to estimate the optimal regularization parameter. Experimentally, NLM-D preserves the image's non-smooth structures (such as edges and textures) and outperforms existing total variation-based and wavelet-based deconvolution methods in terms of both visual quality and signal-to-noise ratio.

19.
There is often significant risk and uncertainty associated with the development of manufacturing processes for large integrated composite structures. A good understanding of the outcome of the process is required so that process tooling and other process parameters can be designed appropriately and costly redesign and rework avoided. This paper presents an approach to risk reduction in composites processing using prior knowledge, prototype data, and model results. A Bayesian methodology is developed for combining this information into a probability density function of the process outcome that explicitly accounts for the reliability of the data sources. The use and effectiveness of the approach are demonstrated with a case study.
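One simple reading of such a fusion is precision-weighted combination of Gaussian information sources, with each source's variance inflated according to how reliable it is judged to be. The sketch below shows that idea for scalar sources (prior knowledge, prototype measurements, model predictions); it is only a minimal illustration of reliability-weighted Bayesian fusion, not the paper's full methodology, and all names are assumptions.

```python
import numpy as np

def fuse_sources(means, variances, reliabilities):
    """Precision-weighted fusion of Gaussian information sources.

    A reliability in (0, 1] scales down a source's precision (equivalently,
    inflates its variance) before the standard Gaussian combination.
    Returns the fused mean and variance of the process outcome.
    """
    prec = np.array([r / v for v, r in zip(variances, reliabilities)])
    mu = np.asarray(means, dtype=float)
    fused_var = 1.0 / prec.sum()          # combined precision adds up
    fused_mean = fused_var * (prec @ mu)  # precision-weighted average
    return fused_mean, fused_var

# Example: prior, prototype, and model estimates of a cured-part dimension
print(fuse_sources([10.2, 9.8, 10.5], [0.5, 0.2, 1.0], [1.0, 0.9, 0.6]))
```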

20.
Finite element-based formulations for flexible multibody systems are becoming increasingly popular, and as the complexity of the configurations to be treated increases, so does the computational cost. It therefore seems natural to investigate the applicability of parallel processing to this type of problem; domain decomposition techniques have been used extensively for this purpose. In this approach, the computational domain is divided into non-overlapping sub-domains, and the continuity of the displacement field across sub-domain boundaries is enforced via the Lagrange multiplier technique. In the finite element literature, this approach is presented as a mathematical algorithm that enables parallel processing. In this paper, the divided system is instead viewed as a flexible multibody system whose sub-domains are connected by kinematic constraints. Consequently, all the techniques applicable to the enforcement of constraints in multibody systems become applicable to the present problem. In particular, it is shown that combining the localized Lagrange multiplier technique with the augmented Lagrange formulation leads to interesting solution strategies. The proposed algorithm is compared with the well-known FETI approach with regard to convergence and efficiency. The present algorithm is relatively simple and offers improved convergence and efficiency characteristics. Finally, the proposed approach was implemented on a parallel computer.
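The augmented Lagrangian idea behind such interface coupling can be shown on a toy problem: two linear subdomain systems share one interface degree of freedom, each outer iteration solves the subdomains independently (the step that parallelises), and the multiplier is then updated from the remaining constraint violation. The sketch below assumes symmetric positive definite stiffness matrices and a single interface constraint; it illustrates the augmented Lagrangian mechanism only, not the paper's localized-multiplier formulation, and all names are illustrative.

```python
import numpy as np

def augmented_lagrangian(K1, f1, K2, f2, rho=10.0, iters=50):
    """Enforce the interface constraint u1[-1] == u2[0] between two
    subdomain systems K1 u1 = f1 and K2 u2 = f2 (K1, K2 SPD arrays)."""
    u1, u2, lam = np.zeros(len(f1)), np.zeros(len(f2)), 0.0
    for _ in range(iters):
        # Subdomain 1 solve: penalty couples its last DOF to the current u2[0]
        A1, b1 = K1.copy(), np.asarray(f1, dtype=float).copy()
        A1[-1, -1] += rho
        b1[-1] += rho * u2[0] - lam
        u1 = np.linalg.solve(A1, b1)
        # Subdomain 2 solve: penalty couples its first DOF to the new u1[-1]
        A2, b2 = K2.copy(), np.asarray(f2, dtype=float).copy()
        A2[0, 0] += rho
        b2[0] += rho * u1[-1] + lam
        u2 = np.linalg.solve(A2, b2)
        # Multiplier update driven by the remaining interface gap
        lam += rho * (u1[-1] - u2[0])
    return u1, u2, lam
```

The penalty rho accelerates convergence of the multiplier update without requiring it to go to infinity, which is the practical advantage of the augmented formulation over a pure penalty method.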

