Similar Documents

20 similar documents retrieved.
1.
Alphabetic optimality criteria, such as the D, A, and I criteria, require specifying a model to select optimal designs. They are not model-free, and the designs obtained by them may not be robust. Recently, many extensions of the D and A criteria have been proposed for selecting robust designs with high estimation efficiency. However, approaches for finding robust designs with high prediction efficiency are rarely studied in the literature. In this paper, we propose a compound criterion and apply the coordinate-exchange 2-phase local search algorithm to generate robust designs with high estimation, high prediction, or balanced estimation and prediction efficiency for projective submodels. Examples demonstrate that the designs obtained by our method have better projection efficiency than many existing designs.

2.
In this paper, we present a new scheme called the maximum log-likelihood sum (MLSUM) algorithm to simultaneously determine the number of closely spaced sources and their locations with uniform linear sensor arrays. Based on the principle of the maximum likelihood (ML) estimator and a newly proposed orthogonal-projection decomposition technique, the multivariate log-likelihood maximization problem is transformed into a multistage one-dimensional log-likelihood-sum maximization problem. The global-optimum solution of the approximated ML localization is obtained by simply maximizing a single one-dimensional log-likelihood function. The algorithm is applicable to coherent as well as incoherent sources. Computer simulations show that the MLSUM algorithm is clearly superior to MUSIC when the element SNR is low and/or the number of snapshots is small.
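The key point above is that, after the orthogonal-projection decomposition, each stage of MLSUM reduces to a one-dimensional log-likelihood search over a single direction of arrival. The sketch below illustrates only that one-dimensional building block for a single narrowband source on a half-wavelength uniform linear array; the function name and inputs are hypothetical, and the full multistage MLSUM procedure for multiple closely spaced sources is not reproduced.

```python
import numpy as np

def single_source_ml_doa(Y, d_over_lambda=0.5, grid=np.linspace(-90, 90, 1801)):
    """One-dimensional ML DOA grid search for a single source on a ULA.
    Y: complex snapshot matrix of shape (n_sensors, n_snapshots)."""
    n_sensors = Y.shape[0]
    R = Y @ Y.conj().T / Y.shape[1]                 # sample covariance matrix
    k = np.arange(n_sensors)
    best_theta, best_val = None, -np.inf
    for theta in grid:                               # angles in degrees
        a = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(np.radians(theta)))
        val = np.real(a.conj() @ R @ a) / n_sensors  # concentrated log-likelihood term
        if val > best_val:
            best_theta, best_val = theta, val
    return best_theta
```

For a single source this concentrated likelihood coincides with conventional beamforming; the contribution of MLSUM is the decomposition that lets the same kind of 1-D search be repeated stage by stage for several closely spaced sources.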

3.
In this study, a support vector machine (SVM)-based ensemble model was developed for reliability forecasting. The hyperparameters of the SVM were selected by applying a genetic algorithm. Input variables of the SVM model were selected by maximizing the mean entropy value. The diverse members of the ensemble were obtained by a k-means clustering algorithm, and one ensemble member was selected from each cluster by choosing the model closest to the cluster center. The optimum number of clusters was selected using the Davies–Bouldin index. The developed model was validated on a benchmark turbocharger data set, and a comparative study shows that the proposed method performs better than existing methods on benchmark data sets. A case study was conducted on a dumper operated at a coal mine in India. Time-to-failure historical data for the dumper were collected, and cumulative time to failure was calculated for reliability forecasting. The results demonstrate that the developed model predicts dumper failure with high accuracy (R2 = 0.97), and a comparison with other methods demonstrates the superiority of the proposed ensemble SVM model. Copyright © 2014 John Wiley & Sons, Ltd.
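A minimal sketch of the ensemble-member selection step described above, using scikit-learn's KMeans and Davies–Bouldin score. The function name and inputs are hypothetical: `models` is assumed to be a list of already-trained SVM candidates and `predictions` an (n_models x n_points) array of their outputs on a validation set; the GA-based hyperparameter tuning and entropy-based input selection are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def select_ensemble_members(models, predictions, k_range=range(2, 8)):
    """Cluster candidate models by their validation predictions, pick the
    cluster count with the best (lowest) Davies-Bouldin index, and keep the
    model closest to each cluster centre as an ensemble member."""
    models = np.asarray(models, dtype=object)
    best_k, best_db, best_km = None, np.inf, None
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(predictions)
        db = davies_bouldin_score(predictions, km.labels_)
        if db < best_db:
            best_k, best_db, best_km = k, db, km
    members = []
    for c in range(best_k):
        in_c = np.where(best_km.labels_ == c)[0]
        dist = np.linalg.norm(predictions[in_c] - best_km.cluster_centers_[c], axis=1)
        members.append(models[in_c[np.argmin(dist)]])          # closest to centre
    return members
```

Choosing one member per cluster keeps the ensemble small while preserving the diversity captured by the clustering, which is the design intent stated in the abstract.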

4.
This paper presents a heuristic for solving a single-machine scheduling problem with the objective of minimizing the total absolute deviation. Each job to be scheduled on the machine has a processing time pi and a preferred due date di. The total absolute deviation is defined as the sum of the earliness or tardiness of each job in a schedule S. This problem was proved to be NP-complete by Garey et al. [8]. We therefore developed a two-phase procedure to provide a near-optimal solution. First, a greedy heuristic is applied to the set of jobs N to generate a "good" initial sequence; from this sequence, Garey's local optimization algorithm produces an initial schedule. Then, a pairwise switching algorithm is applied to further reduce the total deviation of the schedule. The effectiveness of the two-phase procedure is evaluated empirically, and the solutions obtained from it are often better than those from other heuristic approaches.
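A minimal sketch of the second phase (pairwise switching) on top of an explicit objective function. The names are illustrative: `p[j]` and `d[j]` are the processing time and due date of job `j`; the greedy initialization and Garey's local optimization step are not shown.

```python
def total_deviation(seq, p, d):
    """Total absolute deviation of completion times from due dates."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += abs(t - d[j])
    return total

def pairwise_switch(seq, p, d):
    """Repeatedly swap any pair of positions that lowers the objective."""
    seq = list(seq)
    best = total_deviation(seq, p, d)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq)):
            for j in range(i + 1, len(seq)):
                seq[i], seq[j] = seq[j], seq[i]
                cost = total_deviation(seq, p, d)
                if cost < best:
                    best, improved = cost, True   # keep the improving swap
                else:
                    seq[i], seq[j] = seq[j], seq[i]  # undo the swap
    return seq, best
```

The loop keeps any improving swap it finds and terminates once a full pass over all pairs yields no further improvement, i.e. at a sequence that no single pairwise interchange can improve.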

5.
D-optimal fractions of three-level factorial designs for p factors are constructed for factorial effects models (2 ≤ p ≤ 4) and quadratic response surface models (2 ≤ p ≤ 5). These designs are generated using an exchange algorithm for maximizing |X'X| and an algorithm which produces D-optimal balanced array designs. The design properties for the DETMAX designs and the balanced array designs are tabulated. An example is given to illustrate the use of such designs.
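For reference, a bare-bones point-exchange pass in the spirit of the exchange algorithms mentioned above (a simplified sketch, not the exact DETMAX procedure): each design row is tentatively swapped with each candidate point, and the swap that most increases |X'X| is kept. All names are illustrative.

```python
import numpy as np

def exchange_pass(X_design, candidates):
    """One pass of a simple point-exchange search for D-optimality:
    try replacing each design row with each candidate row and keep the
    single swap that most increases det(X'X)."""
    best_det = np.linalg.det(X_design.T @ X_design)
    best_swap = None
    for i in range(X_design.shape[0]):
        for cand in candidates:
            trial = X_design.copy()
            trial[i] = cand
            det = np.linalg.det(trial.T @ trial)
            if det > best_det:
                best_det, best_swap = det, (i, np.array(cand))
    if best_swap is not None:
        X_design[best_swap[0]] = best_swap[1]
    return X_design, best_det
```

Repeating such passes until no swap improves the determinant yields a locally D-optimal design for the chosen model matrix.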

6.
The quest for novel deformable image sensors with outstanding optoelectronic properties and large-scale integration provides a great impetus to exploit more advanced flexible photodetector (PD) arrays. Here, 10 × 10 flexible PD arrays with a resolution of 63.5 dpi are demonstrated based on as-prepared perovskite arrays for photosensing and imaging. Large-scale, growth-controllable CH3NH3PbI3−xClx arrays are synthesized on a poly(ethylene terephthalate) substrate by using a two-step sequential deposition method with the developed Al2O3-assisted hydrophilic–hydrophobic surface treatment process. The flexible PD arrays with high detectivity (9.4 × 10^11 Jones), large on/off current ratio (up to 1.2 × 10^3), and broad spectral response exhibit excellent electrical stability under a large bending angle (θ = 150°) and superior folding endurance after hundreds of bending cycles. In addition, the device can capture a real-time light trajectory and detect a multipoint light distribution, indicating widespread potential in photosensing and imaging for optical communication, digital display, and artificial electronic skin applications.

7.
Ensemble methods are proposed as a means to extend Adaptive One-Factor-at-a-Time (aOFAT) experimentation. The proposed method executes multiple aOFAT experiments on the same system with minor differences in experimental setup, such as 'starting points'. Experimental conclusions are arrived at by aggregating the multiple individual aOFATs. A comparison is made to test the performance of the new method against a traditional form of experimentation, namely a single fractional factorial design that is equally resource intensive. The comparisons between the two experimental algorithms are conducted using a hierarchical probability meta-model and an illustrative case study. The case is a wet clutch system with the goal of minimizing drag torque. In this study, the proposed procedure consistently outperformed fractional factorial arrays across various experimental settings. At best, the proposed algorithm provides an expected value of improvement that is 15% higher than the traditional approach; at worst, the two methods are equally effective, and on average the improvement is about 10% higher with the new method. These findings suggest that running multiple adaptive experiments in parallel can be an effective way to improve the quality and performance of engineering systems, and the approach also provides a reasonable aggregation procedure by which to bring together the results of the many separate experiments. Copyright © 2011 John Wiley & Sons, Ltd.
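A minimal sketch of one aOFAT run and a simple vote-based aggregation across runs started from different points. This only illustrates the general idea: the response function `f`, the level sets, and the per-factor majority-vote aggregation rule are assumptions for illustration, not the aggregation procedure used in the paper.

```python
def aofat(f, start, levels):
    """Adaptive one-factor-at-a-time: vary one factor at a time and keep a
    new level only if the observed response improves (maximization)."""
    x = list(start)
    best = f(x)
    for k in range(len(x)):
        for lvl in levels[k]:
            if lvl == x[k]:
                continue
            trial = list(x)
            trial[k] = lvl
            y = f(trial)
            if y > best:
                x, best = trial, y
    return x, best

def ensemble_aofat(f, starts, levels):
    """Run several aOFATs from different starting points and aggregate the
    recommended settings by a per-factor majority vote."""
    settings = [aofat(f, s, levels)[0] for s in starts]
    aggregated = []
    for k in range(len(levels)):
        votes = [s[k] for s in settings]
        aggregated.append(max(set(votes), key=votes.count))  # most frequent level
    return aggregated
```

Each individual run is cheap and exploits main effects quickly; the aggregation across runs is what gives the ensemble its robustness to noise and to the choice of starting point.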

8.
A high-quality field emission electron source made of a highly ordered array of carbon nanotubes (CNTs) coated with a thin film of hexagonal boron nitride (h-BN) is fabricated using a simple and scalable method. This method offers the benefit of reproducibility, as well as the simplicity, safety, and low cost inherent in using B2O3 as the boron precursor. Results measured using h-BN-coated CNT arrays are compared with uncoated control arrays. The optimal thickness of the h-BN film is found to be 3 nm. As a result of the incorporation of h-BN, the turn-on field is found to decrease from 4.11 to 1.36 V μm−1, which can be explained by the significantly lower emission barrier that is achieved due to the negative electron affinity of h-BN. Meanwhile, the total emission current is observed to increase from 1.6 to 3.7 mA, due to a mechanism that limits the self-current of any individual emitting tip. This phenomenon also leads to improved emission stability and uniformity. In addition, the lifetime of the arrays is improved as well. The h-BN-coated CNT array-based field emitters proposed in this work may open new paths for the development of future high-performance vacuum electronic devices.

9.
In this paper we apply the balancing reduction method to derive reduced-order models for linear systems having multiple delays. The time-domain balanced realization is achieved through computing the controllability and observability gramians in the frequency domain. With the variable transformation s = i tan(θ/2), the gramians of linear multi-delay systems can be accurately evaluated by solving first-order differential equations over a finite domain. The proposed approach is computationally superior to using the two-dimensional realization of delay differential systems.
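To make the role of the transformation concrete, the sketch below approximates the controllability gramian of a multi-delay system in the frequency domain, assuming the standard frequency-domain expression for that gramian; the substitution ω = tan(θ/2) maps the infinite frequency axis onto the finite interval (−π, π), which is the idea exploited above (there by integrating first-order differential equations in θ rather than by the plain quadrature used here). Function and variable names are illustrative.

```python
import numpy as np

def controllability_gramian_delay(A0, delay_terms, B, n_theta=4000):
    """Approximate P = (1/(2*pi)) * integral of H(i*w) H(i*w)^* dw for
        x'(t) = A0 x(t) + sum_k Ak x(t - tau_k) + B u(t),
    where H(s) = (s*I - A0 - sum_k Ak exp(-s*tau_k))^(-1) B, using the
    substitution w = tan(theta/2) so the integral runs over theta in (-pi, pi).
    `delay_terms` is a list of (Ak, tau_k) pairs."""
    n = A0.shape[0]
    thetas = np.linspace(-np.pi, np.pi, n_theta + 2)[1:-1]   # avoid theta = +/- pi
    dtheta = thetas[1] - thetas[0]
    P = np.zeros((n, n), dtype=complex)
    for th in thetas:
        w = np.tan(th / 2.0)
        Aw = A0 + sum(Ak * np.exp(-1j * w * tau) for Ak, tau in delay_terms)
        H = np.linalg.solve(1j * w * np.eye(n) - Aw, B)
        P += (H @ H.conj().T) * 0.5 * (1.0 + w * w)          # dw = 0.5*(1+w^2) dtheta
    return (P * dtheta / (2.0 * np.pi)).real
```

The Jacobian factor 0.5*(1+w^2) keeps the transformed integrand bounded at the ends of the θ interval, which is precisely why the change of variable makes the evaluation over a finite domain practical.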

10.
There has been a great amount of publicity about Taguchi methods, which employ deterministic sampling techniques for robust design. Also given wide exposition in the literature is tolerance design, which achieves similar objectives but employs random sampling techniques. The question arises as to which approach, random or deterministic, is more suitable for robust design of integrated circuits. Robust design is a two-step process, and quality analysis (the first step) involves the estimation of 'quality factors', which measure the effect of noise on the quality of system performance. This paper concentrates on the quality analysis of integrated circuits. A comparison is made between the deterministic sampling technique based on Taguchi's orthogonal arrays and the random sampling technique based on the Monte Carlo method, the objective being to determine which of the two gives more reliable (i.e. more consistent) estimates of quality factors. The results indicate that the Monte Carlo method gave estimates of quality that were at least 40 per cent more consistent than those from orthogonal arrays. The accuracy of quality prediction by Taguchi's orthogonal arrays is strongly affected by the choice of parameter quantization levels, a disadvantage, since there is a very large number (theoretically infinite) of choices of quantization levels for each parameter of an integrated circuit. The cost of the Monte Carlo method is independent of the dimensionality (number of designable parameters), being governed only by the confidence levels required for quality factors, whereas the size of orthogonal array required for a given problem depends partly on the number of circuit parameters. Two integrated circuits, a 7-parameter CMOS voltage reference and a 20-parameter bipolar operational amplifier, were employed in the investigation. Quality factors of interest included performance variability, acceptability (relative to customer specifications), and deviation from target.
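For concreteness, a minimal Monte Carlo quality-analysis loop of the kind compared in this study. The names (`simulate`, `nominal`, `sigma`, `spec`) and the Gaussian parameter-noise model are assumptions for illustration; `simulate` stands in for a circuit simulation that returns one scalar performance measure.

```python
import numpy as np

def monte_carlo_quality(simulate, nominal, sigma, spec, n_samples=1000, seed=0):
    """Monte Carlo quality analysis: perturb circuit parameters with random
    noise, simulate each sample, and estimate quality factors such as
    performance variability and acceptability (yield against a spec window)."""
    rng = np.random.default_rng(seed)
    nominal = np.asarray(nominal, dtype=float)
    perf = np.array([simulate(nominal + rng.normal(0.0, sigma, nominal.size))
                     for _ in range(n_samples)])
    variability = perf.std(ddof=1)                              # spread of performance
    acceptability = np.mean((perf >= spec[0]) & (perf <= spec[1]))  # fraction in spec
    return {"mean": perf.mean(),
            "variability": variability,
            "acceptability": acceptability}
```

Variability and acceptability here correspond to two of the quality factors listed above; a deviation-from-target factor could be added analogously from the same set of samples.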

11.
A novel experimental procedure is introduced to determine phase fractions and the distribution of individual phases in TiAl-based two-phase alloys using the focused ion beam (FIB) technique. Two γ-titanium aluminide alloys, one with a fine-grained duplex and one with a nearly lamellar microstructure, are examined. The special FIB-based preparation procedure results in high-contrast ion beam-induced images for all investigated alloys and allows the phase contents to be quantified easily by automated microstructural analysis. Fine two-phase structures, e.g. lamellar colonies in γ-TiAl, can be imaged in high resolution with respect to the different phases. To validate the FIB-derived data, we compare them to results obtained with another method for determining phase fractions, electron back-scatter diffraction (EBSD). This direct comparison shows that the FIB-based technique generally provides slightly higher α2-fractions and thus helps to overcome the limited lateral resolution near grain boundaries and interfaces associated with the conventional EBSD approach. Our study demonstrates that the FIB-based technique is a simple, fast, and more accurate way to determine high-resolution microstructural characteristics with respect to different phase constitutions in two-phase TiAl alloys and other materials with fine, lamellar microstructures.

12.
The design of magnetic resonance micro-coil arrays with low cross-talk among the coils is a main challenge in improving the effectiveness of magnetic resonance micro-imaging, because electrical cross-talk, which is mainly due to inductive coupling, perturbs the sensitivity profile of the array and causes image artifacts. In this work, a capacitive decoupling network with N(M − 1) + (N − 1)(M − 2) capacitors is proposed to reduce the inductive coupling in an N × M array. A 3 × 3 array of optimized micro-coils is designed using finite element simulations, and all the elements needed for the array equivalent circuit are extracted in order to evaluate the effectiveness of the proposed decoupling method by assessing the reduction of the coupled signals after employing the capacitive network in the circuit. The results for the designed array show that the high cross-talk level is reduced by a factor of 2.2–3.4 after employing the capacitive network. With this decoupling method, the adjacent coils in each row and in the inner columns can be decoupled properly, while the minimum decoupling belongs to the outer columns because of the lack of all the necessary decoupling capacitances for these columns. The main advantages of the proposed decoupling method are its efficiency and ease of design, which facilitate the design of dense arrays with properly decoupled coils, especially the inner coils, which are more strongly coupled because of their neighbors. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 353–359, 2013
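As a quick check on the capacitor count (using the stated formula with N rows and M columns), the 3 × 3 array considered above requires

$$N(M-1) + (N-1)(M-2) = 3\cdot 2 + 2\cdot 1 = 8$$

decoupling capacitors.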

13.
This paper details a multigrid-accelerated cut-cell non-conforming Cartesian mesh methodology for the modelling of inviscid compressible and incompressible flow. This is done via a single equation set that describes sub-, trans-, and supersonic flows. Cut-cell technology is developed to furnish body-fitted meshes with an overlapping mesh as starting point, and in a manner which is insensitive to surface definition inconsistencies. Spatial discretization is effected via an edge-based vertex-centred finite volume method. An alternative dual-mesh construction strategy, similar to the cell-centred method, is developed. Incompressibility is dealt with via an artificial compressibility algorithm, and stabilization achieved with artificial dissipation. In compressible flow, shocks are captured via pressure switch-activated upwinding. The solution process is accelerated with full approximation storage (FAS) multigrid where coarse meshes are generated automatically via a volume agglomeration methodology. This is the first time that the proposed discretization and solution methods are employed to solve a single compressible–incompressible equation set on cut-cell Cartesian meshes. The developed technology is validated by numerical experiments. The standard discretization and alternative methods were found equivalent in accuracy and computational cost. The multigrid implementation achieved decreases in CPU time of up to one order of magnitude. Copyright © 2007 John Wiley & Sons, Ltd.

14.
A number of operational situations exist in which certain facilities are available and where a number of commodities must be processed on some or all of these facilities. The paper describes an algorithm to generate schedules which are near optimal or optimal with respect to the total processing time of all the commodities, the idle time of facilities, and the production rate. Thus, these schedules are characterized by near-minimal or minimal total processing time and idle time of facilities and near-maximal or maximal production rate. Usually this algorithm does not result in the desired schedule after the first application; it is therefore proposed to generate a set "D" of schedules from which the desired schedule can be selected. A decision rule determines the optimal number of elements belonging to set D.

In order to justify the concept of the algorithm for determining the schedules mentioned above, an analysis is given of the decision tree associated with the sequencing model, in terms of the probabilities related to the nodes in the decision tree.

15.
An a priori error estimator for the generalized-α time-integration method is developed to solve structural dynamic problems efficiently. Since the proposed error estimator is computed using only information from the previous and current time steps, the time-step size can be adaptively selected without a feedback process, which is required by most conventional a posteriori error estimators. This paper shows that the automatic time-stepping algorithm using the a priori estimator performs time integration more efficiently than algorithms using an a posteriori estimator. In particular, the proposed error estimator can be usefully applied to large-scale structural dynamic problems, because it helps to save computation time. To verify the efficiency of the algorithm, several examples are numerically investigated. Copyright © 2003 John Wiley & Sons, Ltd.
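For context, once an error estimate is available (whether a priori or a posteriori), the time-step update itself is typically a simple controller of the following form. This is a generic rule sketched under assumed names, not the estimator proposed in the paper; `p` is the assumed order of accuracy of the error estimate.

```python
def next_time_step(dt, error, tol, p=2, fmin=0.5, fmax=2.0, safety=0.9):
    """Generic step-size controller: scale the step by (tol/error)^(1/(p+1)),
    clipped to avoid abrupt changes between consecutive steps."""
    if error <= 0.0:
        return dt * fmax                      # no measurable error: grow the step
    factor = safety * (tol / error) ** (1.0 / (p + 1))
    return dt * min(fmax, max(fmin, factor))
```

The advantage claimed above is that an a priori estimate feeds this selection before the step is taken, avoiding the reject-and-redo feedback loop that a posteriori estimators require.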

16.
A new algorithm is presented for solving the Navier–Stokes equations with an implicit finite difference formulation; it can be used to solve two-dimensional incompressible flows by formulating the problem in terms of only one variable, the stream function. Two algebraic equations with 11 unknowns are obtained from the discretized mathematical model through the ADI method. An original algorithm is developed which allows a reduction from the original 11 unknowns to five and the use of the Pentadiagonal Matrix Algorithm (PDMA) in each of the equations. An iterative cycle of calculations is implemented to assess the accuracy and speed of convergence of the algorithm. The required relaxation parameter is obtained analytically in terms of the size of the grid and the value of the Reynolds number by imposing the diagonal dominance condition on the resulting pentadiagonal matrices. The algorithm is tested by solving two classical steady fluid mechanics problems: cavity-driven flow with Re = 100, 400, and 1000, and flow in a sudden expansion with expansion ratio H/h = 2 and Re = 50, 100, and 200. The results obtained for the stream function are compared with values obtained by different available numerical methods in order to evaluate the accuracy and the CPU time required by the proposed algorithm. Copyright © 2002 John Wiley & Sons, Ltd.
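The PDMA exploits the five-diagonal structure so that each line solve costs O(n) rather than the cost of a dense solve. A minimal sketch of such a solve follows, here delegated to SciPy's banded solver rather than a hand-coded PDMA; the array names are illustrative, with `upper1`/`upper2` and `lower1`/`lower2` holding the first and second super- and sub-diagonals.

```python
import numpy as np
from scipy.linalg import solve_banded

def solve_pentadiagonal(lower2, lower1, main, upper1, upper2, rhs):
    """Solve a pentadiagonal system A x = rhs using banded storage.
    main has length n; upper1/lower1 have length n-1; upper2/lower2 have length n-2."""
    n = len(main)
    ab = np.zeros((5, n))
    # SciPy banded storage: ab[u + i - j, j] = A[i, j] with (l, u) = (2, 2)
    ab[0, 2:] = upper2      # A[i, i+2]
    ab[1, 1:] = upper1      # A[i, i+1]
    ab[2, :] = main         # A[i, i]
    ab[3, :-1] = lower1     # A[i+1, i]
    ab[4, :-2] = lower2     # A[i+2, i]
    return solve_banded((2, 2), ab, rhs)
```

Within the ADI cycle described above, one such pentadiagonal solve would be performed per grid line and per sweep direction.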

17.
This paper presents a multilevel algorithm for balanced partitioning of unstructured grids. The grid is partitioned such that the number of interface elements is minimized and each partition contains an equal number of grid elements. The partition refinement of the proposed multilevel algorithm is based on an iterative tabu search procedure. In iterative partition refinement algorithms, tie-breaking in the selection of maximum-gain vertices affects the performance considerably. A new tie-breaking strategy for the iterative tabu search algorithm is proposed that leads to improved partitioning quality. Numerical experiments are carried out on various unstructured grids in order to evaluate the performance of the proposed algorithm. The partition results are compared with those produced by the well-known partitioning package Metis and by a k-means clustering algorithm, and are shown to be superior in terms of edge cut, partition balance, and partition connectivity. Copyright © 2015 John Wiley & Sons, Ltd.

18.
A reduced order model (ROM) based on the proper orthogonal decomposition (POD)/Galerkin projection method is proposed as an alternative discretization of the linearized compressible Euler equations. It is shown that the numerical stability of the ROM is intimately tied to the choice of inner product used to define the Galerkin projection. For the linearized compressible Euler equations, a symmetry transformation motivates the construction of a weighted L2 inner product that guarantees certain stability bounds satisfied by the ROM. Sufficient conditions for well-posedness and stability of the present Galerkin projection method applied to a general linear hyperbolic initial boundary value problem (IBVP) are stated and proven. Well-posed and stable far-field and solid wall boundary conditions are formulated for the linearized compressible Euler ROM using these more general results. A convergence analysis employing a stable penalty-like formulation of the boundary conditions reveals that the ROM solution converges to the exact solution with refinement of both the numerical solution used to generate the ROM and of the POD basis. An a priori error estimate for the computed ROM solution is derived, and examined using a numerical test case. Published in 2010 by John Wiley & Sons, Ltd.
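A minimal sketch of building a POD basis that is orthonormal in a weighted L2 inner product <u, v> = u^T W v, which is the ingredient the stability argument above hinges on. The snapshot matrix, weight matrix, and function name are assumptions for illustration.

```python
import numpy as np

def pod_basis(snapshots, W, n_modes):
    """POD modes orthonormal in the weighted inner product <u, v> = u^T W v.
    snapshots: columns are solution snapshots; W: symmetric positive definite."""
    L = np.linalg.cholesky(W)                          # W = L L^T
    U, s, _ = np.linalg.svd(L.T @ snapshots, full_matrices=False)
    Phi = np.linalg.solve(L.T, U[:, :n_modes])         # Phi^T W Phi = I
    return Phi, s
```

The reduced operator is then formed by projecting in the same inner product, e.g. Phi.T @ W @ A @ Phi, so that the orthonormality of the basis is consistent with the inner product used in the Galerkin projection.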

19.
Both principal components analysis (PCA) and orthogonal regression deal with finding a p-dimensional linear manifold minimizing a scale of the orthogonal distances of the m-dimensional data points to the manifold. The main conceptual difference is that in PCA p is estimated from the data, to attain a small proportion of unexplained variability, whereas in orthogonal regression p equals m − 1. The two main approaches to robust PCA are using the eigenvectors of a robust covariance matrix and searching for the projections that maximize or minimize a robust (univariate) dispersion measure. This article is more akin to the second approach. Rather than finding the components one by one, however, we directly undertake the problem of finding, for a given p, a p-dimensional linear manifold minimizing a robust scale of the orthogonal distances of the data points to the manifold. The scale may be either a smooth M-scale or a "trimmed" scale. An iterative algorithm is developed that is shown to converge to a local minimum. A strategy based on random search is used to approximate a global minimum. The procedure is shown to be faster than other high-breakdown-point competitors, especially for large m. The case where p = m − 1 yields orthogonal regression. For PCA, a computationally efficient method to choose p is given. Comparisons based on both simulated and real data show that the proposed procedure is more robust than its competitors.
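A simplified concentration-step sketch of the "trimmed" variant: fit the manifold to a subset, rank all points by orthogonal distance, keep the closest fraction, and refit until the subset stabilizes. This illustrates the flavour of the procedure only; the smooth M-scale version and the article's random multi-start strategy are not reproduced, and all names are hypothetical.

```python
import numpy as np

def trimmed_orthogonal_fit(X, p, keep=0.75, n_iter=50, seed=0):
    """Fit a p-dimensional linear manifold to the rows of X by iteratively
    minimizing a trimmed scale of orthogonal distances."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    h = int(keep * n)                      # number of points kept (assumes h > p)
    idx = rng.choice(n, size=h, replace=False)
    for _ in range(n_iter):
        sub = X[idx]
        center = sub.mean(axis=0)
        _, _, Vt = np.linalg.svd(sub - center, full_matrices=False)
        basis = Vt[:p]                     # rows span the fitted manifold
        centered = X - center
        resid = centered - centered @ basis.T @ basis
        dist = np.linalg.norm(resid, axis=1)   # orthogonal distances of all points
        new_idx = np.argsort(dist)[:h]         # concentrate on the h closest points
        if set(new_idx) == set(idx):
            break
        idx = new_idx
    return center, basis, dist
```

Setting p = m − 1 in such a fit corresponds to the orthogonal regression case discussed above.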

20.
The possibility of reconstructing two-dimensional electron-density profiles in the ionosphere with ionospheric tomography is significant. However, due to the nature of the imaging system, there are several resolution degradation parameters. In order to compensate for these degradation parameters, a priori information must be used. This article introduces the orthogonal decomposition algorithm for image reconstruction, which uses the a priori information to generate a set of orthogonal basis functions for the source domain. This algorithm consists of two simple steps: orthogonal decomposition and recombination. In the development of the algorithm, it is shown that the degradation parameters of the imaging system result in correlations among projections of orthogonal functions. Gram–Schmidt orthogonalization is used to compensate for these correlations, producing a matrix that measures the degradation of the system. Any set of basis functions can be used, and depending upon this choice, the nature of the algorithm varies greatly. Choosing the basis functions of the source domain to be the Fourier kernels produces an algorithm capable of isolating individual frequency components of individual projections. This particular choice of basis functions also results in an algorithm that strongly resembles the direct Fourier method, but without requiring the use of inverse Fourier transforms.
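The orthogonalization step at the heart of the method is ordinary Gram–Schmidt. A minimal sketch is given below for generic row vectors (in the article it is applied to projections of the chosen basis functions to build the degradation matrix); the function name and tolerance are assumptions.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Classical Gram-Schmidt: turn a collection of row vectors into an
    orthonormal set, dropping vectors that are numerically dependent."""
    basis = []
    for v in np.asarray(vectors, dtype=float):
        w = v - sum(np.dot(v, q) * q for q in basis)   # remove existing components
        norm = np.linalg.norm(w)
        if norm > tol:
            basis.append(w / norm)
    return np.array(basis)
```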
