1.
Technometrics, 2013, 55(4): 527-541
Computer simulation is often used to study complex physical and engineering processes. Although a computer simulator can often be viewed as an inexpensive way to gain insight into a system, it can still be computationally costly. Much of the recent work on the design and analysis of computer experiments has focused on scenarios where the goal is to fit a response surface or to optimize a process. In this article we develop a sequential methodology for estimating a contour from a complex computer code. The approach uses a stochastic process model as a surrogate for the computer simulator. The surrogate model and its associated uncertainty are key components of a new criterion used to identify the computer trials aimed specifically at improving the contour estimate. The proposed approach is applied to the exploration of a contour for a network queuing system. Issues related to the practical implementation of the proposed approach are also addressed.
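The abstract does not spell out the selection criterion; the sketch below only illustrates the general idea, pairing a Gaussian process surrogate with a score that favors candidates whose prediction sits near the target contour level but remains uncertain. The function `next_contour_point` and the score itself are assumptions, not the paper's criterion.

```python
# Illustrative only: pick the next simulator run for contour estimation
# with a GP surrogate. Score is high where the prediction is near the
# target level and the predictive uncertainty is large.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def next_contour_point(gp, candidates, level):
    mean, std = gp.predict(candidates, return_std=True)
    std = np.maximum(std, 1e-12)                   # guard against zero variance
    score = std * norm.pdf((mean - level) / std)   # near-contour and uncertain
    return candidates[np.argmax(score)]

# Toy usage: 1-D "simulator", target contour f(x) = 0.5.
f = lambda x: np.sin(3 * x).ravel()
X = np.linspace(0, 2, 6).reshape(-1, 1)
gp = GaussianProcessRegressor().fit(X, f(X))
grid = np.linspace(0, 2, 201).reshape(-1, 1)
x_next = next_contour_point(gp, grid, level=0.5)   # run simulator here, refit, repeat
```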
2.
Ferruh Özbudak, Henning Stichtenoth. Applicable Algebra in Engineering, Communication and Computing, 2002, 13(1): 53-56
We present a simple construction of long linear codes from shorter ones. Our approach is related to the product code construction; it generalizes and simplifies substantially the recent “Propagation Rule” by Niederreiter and Xing. Many optimal codes can be produced by our method.
Received: June 23, 2000
3.
Sequential experiments composed of initial experiments and follow-up experiments are widely adopted for economical computer emulations. Many kinds of Latin hypercube designs with good space-filling properties have been proposed for designing the initial computer experiments. However, little work based on Latin hypercubes has focused on the design of the follow-up experiments. Although some constructions of nested Latin hypercube designs can be adapted to sequential designs, the size of the follow-up experiments needs to be a multiple of that of the initial experiments. In this article, a general method for constructing sequential designs of flexible size is proposed, which allows the combined designs to have good one-dimensional space-filling properties. Moreover, the sampling properties and a type of central limit theorem are derived for these designs. Several improvements of these designs are made to achieve better space-filling properties. Simulations are carried out to verify the theoretical results. Supplementary materials for this article are available online.
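As a rough illustration of the flexible-size idea (not the article's construction), the sketch below augments an initial Latin hypercube with a follow-up of arbitrary size by placing new points in the least-occupied one-dimensional bins of the combined design; `augment_lhd` and its binning rule are assumptions.

```python
# Naive follow-up augmentation that keeps each one-dimensional
# projection of the combined design well spread.
import numpy as np

def augment_lhd(initial, n_new, rng=None):
    rng = np.random.default_rng(rng)
    n_old, d = initial.shape
    n_tot = n_old + n_new
    new = np.empty((n_new, d))
    for j in range(d):
        # Count occupancy of n_tot equal bins in dimension j,
        # then fill the emptiest bins with the new points.
        occupied = np.floor(initial[:, j] * n_tot).astype(int)
        counts = np.bincount(occupied, minlength=n_tot)
        empty = np.argsort(counts)[:n_new]     # least-occupied bins
        rng.shuffle(empty)                     # decouple the dimensions
        new[:, j] = (empty + rng.random(n_new)) / n_tot
    return np.vstack([initial, new])

# Example: 10-run initial LHD in 3 dimensions, then 7 follow-up runs.
rng = np.random.default_rng(1)
init = (np.argsort(rng.random((10, 3)), axis=0) + rng.random((10, 3))) / 10
combined = augment_lhd(init, 7, rng=3)
```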
4.
The calibration of computer models using physical experimental data has received considerable interest in the last decade. Recently, multiple works have addressed the functional calibration of computer models, where the calibration parameters are functions of the observable inputs rather than the fixed values traditionally treated in the literature. While much of the recent work on functional calibration has focused on estimation, sequential design for functional calibration remains an open question; addressing it is the focus of this article. We investigate different sequential design approaches and show that the simple separate design approach has merit in practical use when designing for functional calibration. Analysis is carried out on multiple simulated and real-world examples.
5.
Rumen N. Daskalov, T. Aaron Gulliver. Applicable Algebra in Engineering, Communication and Computing, 1999, 9(6): 547-558
Let [n, k, d; q]-codes be linear codes of length n, dimension k, and minimum Hamming distance d over GF(q). Let d_5(n, k) be the maximum possible minimum Hamming distance of a linear [n, k, d; 5]-code for given values of n and k. In this paper, forty-four new linear codes over GF(5) are constructed and a table of d_5(n, k), k ≤ 8, n ≤ 100, is presented.
6.
Patrick Fitzpatrick, Sylvia M. Jennings. Applicable Algebra in Engineering, Communication and Computing, 1998, 9(3): 211-220
We compare the key equation solving algorithm introduced by Fitzpatrick to the Berlekamp-Massey algorithm. Our main result is that the two algorithms have the same computational complexity. It follows that in practice Fitzpatrick's algorithm improves on Berlekamp-Massey, since it uses less storage and has a simpler control structure. We also give an improved version of Fitzpatrick's algorithm and a new, simplified proof of the central inductive step in the argument.
Received: June 6, 1997
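For reference, here is a compact Berlekamp-Massey implementation specialized to GF(2), the baseline side of the comparison; Fitzpatrick's algorithm itself is not reproduced here, and this sketch says nothing about the storage comparison made in the paper.

```python
# Berlekamp-Massey over GF(2): shortest LFSR generating a bit sequence.
def berlekamp_massey_gf2(s):
    """Return (L, c): LFSR length L and connection polynomial c generating s."""
    c, b = [1], [1]          # current and previous connection polynomials
    L, m = 0, 1              # LFSR length; shift since b was last replaced
    for n, bit in enumerate(s):
        d = bit              # discrepancy: s[n] vs. what the LFSR predicts
        for i in range(1, L + 1):
            d ^= c[i] & s[n - i]
        if d == 0:
            m += 1
            continue
        t = c[:]                                   # save c before updating
        c += [0] * max(0, len(b) + m - len(c))     # room for x^m * b
        for i, bi in enumerate(b):
            c[i + m] ^= bi                         # c <- c + x^m * b over GF(2)
        if 2 * L <= n:                             # the LFSR must lengthen
            L, b, m = n + 1 - L, t, 1
        else:
            m += 1
    return L, c[:L + 1]

# Example: berlekamp_massey_gf2([0, 0, 1, 1]) -> (3, [1, 1, 0, 1]),
# i.e. s[n] = s[n-1] XOR s[n-3].
```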
7.
8.
O. Shirakura, M. Yamada, M. Hashimoto, S. Ishimaru, K. Takayama, T. Nagai. Drug Development and Industrial Pharmacy, 1991, 17(4): 471-483
The effect of the binder solution (amount and composition) on the mean particle size and size distribution of granules was investigated using a computer optimization technique. The granules were manufactured by two continuous processes, granulation and sizing, using a high-speed mixer granulator and a hammer mill, respectively. The particle size distribution of the granules varied markedly with the amount and composition of the binder solution, and the distribution pattern was well described by a log-normal distribution model. To design the optimal particles, a computer optimization technique was applied to the experimental results obtained in this study. The technique proved useful for finding the optimal formula at practical scale.
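The log-normal model mentioned above is straightforward to reproduce; a minimal sketch follows, where the size data and the fixed zero location parameter are assumptions for illustration.

```python
# Fit a log-normal model to granule sizes (placeholder data, micrometres).
import numpy as np
from scipy import stats

sizes_um = np.array([120, 150, 180, 210, 260, 320, 410, 540, 700, 900])
shape, loc, scale = stats.lognorm.fit(sizes_um, floc=0)  # loc=0: pure log-normal
gm, gsd = scale, np.exp(shape)   # geometric mean and geometric std. deviation
print(f"geometric mean = {gm:.0f} um, GSD = {gsd:.2f}")
```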
9.
Najib Ahmed Mohammed, Ali Mohammed Mansoor, Rodina Binti Ahmad, Saaidal Razalli Bin Azzuhri. Computers, Materials & Continua, 2022, 71(1): 573-592
Mission-critical machine-type communication (mcMTC), also referred to as ultra-reliable low-latency communication (URLLC), has become a research hotspot. It is primarily characterized by communication that provides ultra-high reliability and very low latency while concurrently transmitting short commands to a massive number of connected devices. While reductions in physical (PHY) layer overhead and improvements in channel coding techniques are pivotal for reducing latency and improving reliability, the current wireless standards dedicated to supporting mcMTC rely heavily on adopting the bottom layers of general-purpose wireless standards and customizing only the upper layers. mcMTC has a significant technical impact on the design of all layers of the communication protocol stack. In this paper, an innovative bottom-up approach to mcMTC applications is proposed at the PHY layer, aimed at improving transmission reliability by implementing an ultra-reliable channel coding scheme in the PHY layer of the IEEE 802.11a standard with short-packet transmission systems in mind. To this end, we analyzed and compared the channel coding performance of convolutional codes (CCs), low-density parity-check (LDPC) codes, and polar codes in wireless networks under short data packet transmission. The Viterbi decoding algorithm (VA), the logarithmic belief propagation (Log-BP) algorithm, and the cyclic redundancy check-aided successive cancellation list (CRC-SCL) decoding algorithm were adopted for the CCs, LDPC codes, and polar codes, respectively. On this basis, a new PHY layer for mcMTC is proposed. The reliability of the proposed approach was validated by simulation in terms of bit error rate (BER) and packet error rate (PER) versus signal-to-noise ratio (SNR). The simulation results demonstrate that the reliability of the IEEE 802.11a standard is significantly improved, reaching PER = 10⁻⁵ or better with the implementation of polar codes. The results also show that general-purpose wireless networks can provide short-packet mcMTC with the necessary modifications.
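The coded simulations above are standard-specific, but the underlying BER-vs-SNR Monte Carlo loop is generic. A minimal sketch for uncoded BPSK over AWGN follows; the coded chains (CC/Viterbi, LDPC/Log-BP, polar/CRC-SCL) would replace the bare mapper and hard-decision detector, and all run lengths here are arbitrary.

```python
# Monte Carlo BER vs Eb/N0 for uncoded BPSK over AWGN, with the exact
# closed form 0.5*erfc(sqrt(Eb/N0)) as a sanity check.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)
n_bits = 200_000
for snr_db in range(0, 9, 2):
    ebn0 = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits)
    x = 1.0 - 2.0 * bits                                # BPSK: 0 -> +1, 1 -> -1
    y = x + rng.normal(scale=np.sqrt(1 / (2 * ebn0)), size=n_bits)
    ber = np.mean((y < 0).astype(int) != bits)          # hard-decision detector
    print(f"Eb/N0 = {snr_db} dB: BER = {ber:.2e}, "
          f"theory = {0.5 * erfc(np.sqrt(ebn0)):.2e}")
```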
10.
Matthias Hwai Yong Tan. Technometrics, 2015, 57(4): 468-478
Robust parameter design with computer experiments is becoming increasingly important for product design. Existing methodologies for this problem mostly aim at finding optimal control factor settings. However, in some cases, the objective of the experimenter may be to understand how the noise and control factors contribute to variation in the response. The functional analysis of variance (ANOVA) and variance decompositions of the response, in addition to the mean and variance models, help achieve this objective. Estimation of these quantities is not easy, and few methods are able to quantify the estimation uncertainty. In this article, we show that the use of an orthonormal polynomial model of the simulator leads to simple formulas for the functional ANOVA and variance decompositions, and for the mean and variance models. We show that estimation uncertainty can be taken into account in a simple way by first fitting a Gaussian process model to the experiment data and then approximating it with the orthonormal polynomial model. This leads to a joint normal distribution for the polynomial coefficients that quantifies estimation uncertainty. Supplementary materials for this article are available online.
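Why orthonormal polynomials make the functional ANOVA simple can be seen in a toy version: with a basis that is orthonormal under the input distribution, each input's variance contribution is just a sum of squared coefficients. The sketch below uses a degree-2 tensor Legendre basis on [-1, 1] and a made-up test function; it illustrates the decomposition only, not the paper's GP-approximation step.

```python
# Main-effect variance shares from an orthonormal Legendre surrogate.
import numpy as np
from numpy.polynomial import legendre

def orthonormal_legendre(deg, x):
    # P_k scaled so that E[phi_k(U)^2] = 1 for U ~ Uniform(-1, 1)
    return np.column_stack(
        [legendre.legval(x, [0] * k + [1]) * np.sqrt(2 * k + 1)
         for k in range(deg + 1)])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = 2 * X[:, 0] + X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1]   # toy "simulator"

# Tensor-product design matrix: phi_i(x1) * phi_j(x2), i, j <= 2.
B1, B2 = orthonormal_legendre(2, X[:, 0]), orthonormal_legendre(2, X[:, 1])
idx = [(i, j) for i in range(3) for j in range(3)]
A = np.column_stack([B1[:, i] * B2[:, j] for i, j in idx])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Orthonormality: total variance is the sum of squared non-constant coefs.
total = sum(c**2 for c, (i, j) in zip(coef, idx) if (i, j) != (0, 0))
main1 = sum(c**2 for c, (i, j) in zip(coef, idx) if i > 0 and j == 0)
main2 = sum(c**2 for c, (i, j) in zip(coef, idx) if i == 0 and j > 0)
print(f"main-effect shares: x1 = {main1/total:.2f}, x2 = {main2/total:.2f}")
```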
11.
12.
Rachel T. Silvestrini. Quality and Reliability Engineering International, 2015, 31(3): 399-410
Classical D-optimal design is used to create experimental designs for situations in which an underlying system model is known or assumed known. The D-optimal strategy can also be used to add experimental runs to an existing design. This paper presents a study of variable choices related to sequential D-optimal design and how those choices influence the D-efficiency of the resulting complete design. The variables studied are total sample size, initial experimental design size, step size, whether or not to include center points in the initial design, and the complexity of the initial model assumption. The results indicate that increasing the total sample size improves the D-efficiency of the design; that less effort should be placed on the initial design, especially when the true underlying system model is not known; and that it is better to start by assuming a simpler model form rather than a complex one, provided the experimenter can reach the true model form during the sequential experiments. Copyright © 2013 John Wiley & Sons, Ltd.
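The sequential step itself is simple to state: at each stage, add the candidate run that maximizes det(X'X) of the augmented design. A greedy sketch for a first-order model in two factors follows; the grid, initial design, and model matrix are all illustrative choices.

```python
# Greedy sequential D-optimal augmentation over a candidate grid.
import numpy as np
from itertools import product

def model_matrix(pts):
    pts = np.atleast_2d(pts)          # first-order model: 1, x1, x2
    return np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])

def add_d_optimal_runs(design, candidates, n_add):
    design = list(design)
    for _ in range(n_add):
        def d_crit(c):
            X = model_matrix(design + [c])
            return np.linalg.det(X.T @ X)      # D-criterion of augmented design
        design.append(max(candidates, key=d_crit))
    return np.array(design)

grid = [np.array(p) for p in product(np.linspace(-1, 1, 5), repeat=2)]
initial = [np.array([-1.0, -1.0]), np.array([1.0, -1.0]), np.array([-1.0, 1.0])]
augmented = add_d_optimal_runs(initial, grid, n_add=3)
print(augmented)
```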
13.
When recognition is performed using the distance between face fractal codes, a large amount of time is spent on the iteration and distance computations for every face image in the database. To overcome this drawback, this paper proposes locating the eyes with the horizontal high-frequency subband and extracting them from the face, and then builds a fast face recognition algorithm based on the distance between eye fractal codes. The algorithm discards most images whose eye fractal code distance is large; an analysis of recognition time complexity shows that the time required depends mainly on the size of the eye region and on the number of images retained for the final recognition stage. Experiments on the ORL and YALE face databases show that, when the images retained for final recognition amount to about 20% of all faces in the database, the proposed algorithm maintains an average recognition rate of about 90%, essentially on par with the eigenface method and with direct use of face fractal code distances.
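A heavily simplified sketch of the eye-localization step only, using the horizontal high-frequency subband of a wavelet decomposition: the Haar wavelet, the band-width heuristic, and the function name are all assumptions, and the fractal coding and matching stages are omitted entirely.

```python
# Locate the eye row band from horizontal wavelet detail energy.
import numpy as np
import pywt

def eye_row_band(face_gray):
    """Return (top, bottom) image rows of the strongest horizontal-detail band."""
    _, (cH, _, _) = pywt.dwt2(face_gray.astype(float), "haar")  # horizontal details
    energy = (cH ** 2).sum(axis=1)       # per-row energy of the subband
    r = int(np.argmax(energy))           # the eye band gives the strongest response
    half = max(1, cH.shape[0] // 8)      # assumed half-width of the band
    top, bot = max(0, r - half), min(cH.shape[0], r + half)
    return 2 * top, 2 * bot              # map subband rows back to image rows
```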
14.
Stochastic differential equations (SDEs) are used as statistical models in many disciplines. However, intractable likelihood functions for SDEs make inference challenging, and we need to resort to simulation-based techniques to estimate and maximize the likelihood function. While importance sampling methods have allowed for the accurate evaluation of likelihoods at fixed parameter values, the question of how to find the maximum likelihood estimate remains. In this article, we propose an efficient Gaussian-process-based method for exploring the parameter space using estimates of the likelihood from an importance sampler. Our technique accounts for the inherent Monte Carlo variability of the estimated likelihood and does not require knowledge of gradients. The procedure adds potential parameter values by maximizing the so-called expected improvement, leveraging the fact that the likelihood function is assumed to be smooth. Our simulations demonstrate that our method has significant computational and efficiency gains over existing grid- and gradient-based techniques. The method is applied to the estimation of ocean circulation from Lagrangian drift data in the South Atlantic Ocean.
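The expected-improvement score the procedure maximizes has a standard closed form under the GP's normal predictive distribution. A minimal sketch follows (maximization convention; the importance sampler that supplies the likelihood estimates is out of scope here).

```python
# Standard expected-improvement formula for maximization.
import numpy as np
from scipy.stats import norm

def expected_improvement(mean, std, best):
    """E[max(f - best, 0)] when f ~ N(mean, std^2)."""
    std = np.maximum(std, 1e-12)          # guard against zero variance
    z = (mean - best) / std
    return (mean - best) * norm.cdf(z) + std * norm.pdf(z)

# Usage: score a grid of candidate parameters with the GP's predictive
# mean/std, run the importance sampler at argmax EI, refit, repeat.
```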
15.
This paper introduces CADCB, a software package developed by the authors. Operating through interactive human-computer dialogue, it can quickly select the optimal carton structure according to user requirements, accurately calculate the structural dimensions of that carton, and perform carton strength analysis and layout design, among other functions. In addition, it offers versatile input and output of data and graphics, and is equipped with an extensible mathematical model library and graphics library for corrugated cartons.
16.
R. Jha, F. Pettersson, G. S. Dulikravich, H. Saxen. Materials and Manufacturing Processes, 2015, 30(4): 488-510
Data-driven models were constructed for the mechanical properties of multi-component Ni-based superalloys, based on systematically planned, limited experimental data, using a number of evolutionary approaches. Novel alloy design was carried out by optimizing the two conflicting requirements of maximizing tensile stress and time-to-rupture with a genetic-algorithm-based multi-objective optimization method. The procedure resulted in a number of optimized alloys having superior properties. The results were corroborated by a rigorous thermodynamic analysis, and the alloys found were further classified in terms of their expected levels of hardenability, creep, and corrosion resistance, along with the two original objectives that were optimized. A number of hitherto unknown alloys with potentially superior properties in terms of all the attributes ultimately emerged from these analyses. This work focuses on providing experimentalists with linear correlations among the design variables and between the design variables and the desired properties, qualitative non-linear correlations between the design variables and the desired properties, and a quantitative measure of the effect of the design variables on the desired properties. Pareto-optimized predictions obtained from the various data-driven approaches were screened for thermodynamic equilibrium, and the results were further classified for additional properties.
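At the core of the multi-objective step is non-dominated (Pareto) filtering; a self-contained sketch with hypothetical objective values follows, where both objectives (tensile stress and time-to-rupture) are to be maximized.

```python
# Keep the candidates for which no other candidate is at least as good
# on every objective and strictly better on at least one.
import numpy as np

def pareto_mask(F):
    """F: (n, m) objective matrix, larger is better. Boolean keep-mask."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated = np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

# Hypothetical (tensile stress, time-to-rupture) values for five alloys.
F = np.array([[900, 120], [950, 100], [880, 150], [940, 130], [910, 90]])
print(F[pareto_mask(F)])   # the non-dominated alloys among the candidates
```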
17.
This study employed a piezoelectric-sensor-based structural health monitoring (SHM) technique to monitor debonding defects in real time. A carbon fiber-reinforced polymer (CFRP) concrete beam specimen was fabricated, and debonding damage was introduced in three successive steps. As the damage level increased, electromechanical impedance and guided-wave signals were measured at each damage level from the piezoelectric sensor array surface-mounted on the CFRP. A damage metric based on the root mean square deviation (RMSD) was investigated to quantify the variations in the signals between the intact and progressively damaged conditions. To improve debonding damage localization, a new damage metric obtained by superposing the damage-sensitive features extracted from both the impedance and guided-wave signals was introduced. Polynomial curve fitting was performed on the superposed damage metric values, and the location corresponding to the highest peak of the curve was determined. This location was then compared with the actual inflicted damage points to confirm the effectiveness of the proposed damage localization technique. Further research issues for real-world implementation of the proposed approach are discussed in light of these experimental results.
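The RMSD metric has a standard form, sqrt(sum((d - b)^2) / sum(b^2)) for a damaged signature d against a baseline b. The sketch below computes it per sensor and locates the peak of a polynomial fit, as described above; the signal data, sensor positions, and polynomial degree are placeholders, and the superposition of the impedance and guided-wave metrics is reduced to a single modality.

```python
# Per-sensor RMSD metric and polynomial-peak damage localization.
import numpy as np

rng = np.random.default_rng(0)
sensor_x = np.linspace(0.0, 1.0, 6)               # sensor positions (m), placeholder
base = rng.normal(size=(6, 256))                  # baseline signatures, placeholder
bump = np.exp(-((sensor_x - 0.6) ** 2) / 0.02)    # synthetic damage near x = 0.6
dmg = base + 0.3 * bump[:, None] * rng.normal(size=(6, 256))

def rmsd(b, d):
    """Root mean square deviation of damaged signature d vs baseline b."""
    return np.sqrt(np.sum((d - b) ** 2) / np.sum(b ** 2))

metric = np.array([rmsd(b, d) for b, d in zip(base, dmg)])
coeffs = np.polyfit(sensor_x, metric, deg=2)      # degree 2 is an assumption
xx = np.linspace(0, 1, 500)
x_damage = xx[np.argmax(np.polyval(coeffs, xx))]  # curve peak -> estimated location
print(f"estimated debond location: x = {x_damage:.2f} m")
```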
18.
For synthetic aperture radar (SAR) images severely affected by speckle noise, a fast region-based MRF segmentation algorithm built on an edge-preserving representation (EPR) is proposed. The EPR-based representation of a SAR image consists of two parts: an anisotropic-diffusion speckle-reduction algorithm and the watershed transform. In the presence of speckle, this representation effectively suppresses over-segmentation and accurately localizes object edges at region boundaries. Combining the EPR-based representation with a region-level MRF greatly reduces the search space of the optimization process, yielding accurate classification results and statistical properties while reducing both computation and segmentation errors. The proposed algorithm was applied to the segmentation of a synthetic image corrupted with various noise levels and of a SAR sea-ice image, and the experimental results demonstrate its effectiveness. Compared with existing region-based MRF methods, the new algorithm saves about 50% of the computation time while improving segmentation accuracy, especially in regions with strong speckle noise.
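Of the pipeline's ingredients, the anisotropic diffusion stage is the most self-contained; a Perona-Malik-style sketch is below. The parameters and the exponential conduction function are illustrative choices, and the watershed and region-MRF stages are not shown.

```python
# Perona-Malik anisotropic diffusion: smooth speckle, preserve edges.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=15.0, lam=0.2):
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conduction: small across edges
    for _ in range(n_iter):
        # Differences toward the four neighbours (periodic boundaries via
        # np.roll; good enough for a sketch).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```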
19.
An Overall Design Strategy for Urban Color in Tianjin
This paper examines the patterns of urban color in Tianjin in light of historical evolution, climatic conditions, and technological progress; proposes methods for planning and protecting Tianjin's urban color; summarizes the spatial distribution of the city's colors; and offers specific recommendations for future color design.
20.
Human-Machine Interface Design for Embedded Computer Systems
Based on the characteristics of embedded computer systems and drawing on ergonomics theory, this paper studies design criteria and standardization methods for human-machine interfaces in embedded computer systems, and presents a general structural framework for such interfaces.