

Struct Multidisc Optim (2015) 51:347–368
DOI 10.1007/s00158-014-1128-5

RESEARCH PAPER

Development and validation of a dynamic metamodel based on stochastic radial basis functions and uncertainty quantification

Silvia Volpi · Matteo Diez · Nicholas J. Gaul · Hyeongjin Song · Umberto Iemma · K. K. Choi · Emilio F. Campana · Frederick Stern

Received: 26 August 2013 / Revised: 16 May 2014 / Accepted: 3 June 2014 / Published online: 7 August 2014
© Springer-Verlag Berlin Heidelberg 2014

S. Volpi · M. Diez · N. J. Gaul · H. Song · K. K. Choi · F. Stern (✉)
The University of Iowa, Iowa City, IA, USA
e-mail: [email protected]

S. Volpi · U. Iemma
Department of Engineering, Roma Tre University, Rome, Italy

M. Diez · E. F. Campana
CNR-INSEAN, National Research Council-Marine Technology Research Institute, Rome, Italy

Abstract  A dynamic radial basis function (DRBF) metamodel is derived and validated, based on stochastic RBF and uncertainty quantification (UQ). A metric for assessing metamodel efficiency is developed and used. The validation includes comparisons with a dynamic implementation of Kriging (DKG) and static metamodels for both deterministic test functions (with dimensionality ranging from two to six) and industrial UQ problems with analytical and numerical benchmarks, respectively. DRBF extends standard RBF using stochastic kernel functions defined by an uncertain tuning parameter whose distribution is arbitrary and whose effects on the prediction are determined using UQ methods. Auto-tuning based on curvature, adaptive sampling based on prediction uncertainty, parallel infill, and multiple response criteria are used. Industrial problems are two UQ applications in ship hydrodynamics using high-fidelity computational fluid dynamics for the high-speed Delft catamaran with stochastic operating and environmental conditions: (1) calm water resistance, sinkage and trim with variable Froude number; and (2) mean value and root mean square of resistance and heave and pitch motions with variable regular head wave. The number of high-fidelity evaluations required to achieve prescribed error levels is considered as the efficiency metric, focusing on fitting accuracy and UQ variables. DKG is found more efficient for fitting low-dimensional test functions and one-dimensional UQ, whereas DRBF has a greater efficiency for fitting higher-dimensional test functions and two-dimensional UQ.

Keywords  Simulation-based design · Dynamic metamodels · Uncertainty quantification · Radial basis function networks · Kriging

1 Introduction

Simulation-based design (SBD) of complex engineering systems requires high-fidelity solvers to guarantee the accuracy of the solution. Real-world problems are affected by different sources of uncertainty (environmental, operational, geometrical) and therefore need uncertainty quantification (UQ) methods. Combining design optimization and UQ into stochastic SBD, such as robust and reliability-based design optimization, requires a high number of function evaluations and large computational resources. This represents a significant challenge from the algorithmic and technological viewpoints, requiring efficient computational methods and high-performance computer systems.

The application of surrogate models, i.e. metamodels, alleviates the computational cost by reducing the number of high-fidelity evaluations needed. Metamodels have been widely used in several engineering contexts, such as structural optimization (Jansson et al. 2003), aeronautics and aerospace (Sobieszczanski-Sobieski and Haftka 1997), and ground vehicles (Yang et al. 2005), including stochastic applications and UQ (Giunta et al. 2006; Kennedy et al. 2006).

The choice of the metamodelling technique is based on accuracy and efficiency. The latter is based on the number of high-fidelity evaluations required, since the computational cost of the metamodelling algorithm is deemed negligible in comparison. Radial basis functions (RBF, Hardy 1971) have been demonstrated accurate and efficient in several applications such as analytical test problems (Jin et al. 2001), stochastic search in optimization (Regis and Shoemaker 2007; Regis 2011) and UQ (Loeven et al. 2007; He et al. 2013). Accuracy and efficiency of Kriging (Matheron 1963) have been demonstrated for several applications including optimization subject to uncertainty (Jin et al. 2003); moreover, extensions of Kriging to stochastic approaches including uncertain basis/correlation functions and tuning parameters have been addressed in Bayesian Kriging, using UQ (Pilz and Spock 2008; Gramacy and Lee 2008). Although RBF and Kriging are found adequate for most problems, it is difficult to predict their effectiveness for new applications. Metamodel performance is problem dependent and determined by different factors such as the degree of non-linearity, the problem dimensionality, the noisy or smooth behavior of the function and the approach used for training (Jin et al. 2001).

In order to develop accurate and efficient methods for metamodel-based analysis and optimization, research has recently moved from standard (or static) metamodelling techniques to function-adaptive approaches, also referred to as dynamic metamodels. A dynamic metamodel is able to improve its fitting capability by exploiting the information that becomes available during the analysis process. Two main characteristics identify metamodels as dynamic: auto-tuning of the metamodel itself and an adaptive sampling technique.

In auto-tuning, the metamodel itself is not defined a priori. Auto-tuning can be applied considering several degrees of freedom, from tuning parameters to the choice of the metamodel itself. Auto-tuning approaches have been applied to auto-configure RBF networks: Mullur and Messac (2005) propose an extended RBF approach, where more than one basis function per data point is used, resulting in an under-determined system of equations; Acar and Rais-Rohani (2009) and Zhou et al. (2011) present weighted sums of multiple metamodels, updating the weights at each iteration to improve accuracy; Billings and Zheng (1995) propose a global optimization process using genetic algorithms; Sarimveis et al. (2004) make use of a genetic algorithm to minimize the prediction error and dynamically auto-configure RBF neural networks; Meng et al. (2009) present self-adaptive RBF neural networks using differential evolution. Auto-tuning applications to Kriging have been shown in Peri (2009), Zhao et al. (2011) and Song et al. (2013). Auto-tuning involves an optimization procedure, for which the choice of the degrees of freedom and the minimization algorithm represents a critical issue.

Adaptive sampling supports the design of experiments (DoE) used for training, which is no longer defined a priori but dynamically updated using available information. The purpose of performing an adaptive DoE is to add training points where it is most useful and to use the minimum number of high-fidelity evaluations to represent the function. Li et al. (2010) show a classification of sampling techniques, setting apart non-adaptive DoE from adaptive DoE. The latter can follow different approaches, depending on the application and the aim of the analysis. Relevant issues for efficient adaptive DoE are the possibility to add more than one training point per iteration, in order to take advantage of parallel computing systems (referred to as parallel infill, Forrester and Keane 2009), and the capability of managing more than one function at a time (when the relevant outputs are multiple).

Several metrics are used to evaluate metamodel accuracy. R-square, relative average absolute error and relative maximum absolute error (Jin et al. 2001), root mean square error (Jones et al. 1998) and relative root mean square error (Zhao et al. 2011) are some of the most widely used. These are based on the Lp norm (suitably normalized) of the difference between observations and predictions. Metrics based on absolute (p = 1) and square (p = 2) errors give an assessment of the global fitting, whereas maximum errors (p = ∞) focus on local differences; their application to metamodel comparison gives similar trends, especially for smooth functions and predictions. Metrics and methods for validation of metamodel-based UQ have been presented by Mousaviraad et al. (2013), providing errors for function fitting, expected value (EV), standard deviation (SD) and cumulative distribution function (CDF) versus numerical benchmarks obtained by computational fluid dynamics (CFD). Metrics for accuracy do not directly provide a quantification of metamodel efficiency, which is of primary importance when high-fidelity solvers are used.

The objective of the present work is the development and validation of an efficient dynamic RBF (DRBF) metamodel based on stochastic kernel functions. A metric for assessing metamodel efficiency is developed and used. The validation includes comparison with an existing dynamic Kriging (DKG) method (Zhao et al. 2011; Song et al. 2013), available through collaboration among co-authors, and static metamodels for both deterministic test functions (with dimensionality ranging from two to six) and industrial UQ problems (Diez et al. 2014; He et al. 2013) with analytical and numerical benchmarks, respectively. Development and validation of DKG are beyond the scope of the current work, which focuses on DRBF and uses a well-established implementation of DKG for comparison.

DRBF extends standard RBF using stochastic kernel functions defined by an uncertain tuning parameter whose distribution is arbitrary and whose effects on the prediction are determined using UQ methods, similarly to Bayesian Kriging. Auto-tuning based on curvature, adaptive sampling based on prediction uncertainty, parallel infill, and multiple response criteria are used. Since DRBF is aimed at UQ in ship hydrodynamics problems with low dimensionality, test functions with dimensionality ranging from two to six are used. Industrial problems are two UQ applications in ship hydrodynamics making use of high-fidelity CFD for the high-speed Delft catamaran with stochastic operating and environmental conditions: (1) calm water resistance, sinkage and trim for variable Froude number (Fr) (Diez et al. 2014); and (2) mean value and root mean square (RMS) deviation from the mean of resistance and heave and pitch motions for variable regular head wave (He et al. 2013). The number of evaluations required to achieve prescribed error levels is considered as the efficiency metric, focusing on fitting capability and UQ variables. An appendix provides the UQ equations for EV, SD, CDF and stochastic uncertainty Uf, where f indicates the output function; they are used both for determining the prediction uncertainty stemming from the stochastic tuning parameter in DRBF and for the industrial UQ problems.

The paper is organized as follows. Section 2 introduces the proposed approach for DRBF and the validation metric used. Section 3 presents the static metamodels and the dynamic implementation of Kriging used for comparison. Deterministic and stochastic analytical and numerical benchmarks for validation of the current methodology are presented in Section 4, whereas the associated numerical results are discussed in Section 5. Final remarks and future work are given in Section 6. Finally, Appendices A and B provide the UQ methods used and the equations for the analytical test functions, respectively.

2 Stochastic dynamic radial basis functions and evaluation metrics

Given a training set T of M points {x_i}, i = 1, ..., M, with associated function (f) evaluations y_i = f(x_i), standard RBF (with centers coincident with x_i) provides the prediction f̂ at x as per

\hat{f}(x) = \sum_{i=1}^{M} w_i \, \varphi(\|x - x_i\|)    (1)

where φ is the kernel function and the w_i are the coefficients of the combination. These are the solution of the linear system, which provides exact predictions at the training points:

A w = y    (2)

where the elements of A are a_ij = φ(‖x_i − x_j‖), with x_i, x_j ∈ T, w = {w_j} and y = {y_i}. The ε power of the Euclidean distance is used as the kernel function, with ε treated as a tuning parameter:

\varphi(\|x - x_i\|) = \|x - x_i\|^{\varepsilon} = \left[\sum_{k=1}^{n} (x_k - x_{k,i})^2\right]^{\varepsilon/2}    (3)

where n is the number of independent variables x_k, k = 1, ..., n. The proposed methodology consists in considering a stochastic sample of RBF predictions S, defined assuming the tuning parameter ε as a stochastic exponent, following a uniform distribution, as per

S = \{\hat{f}(x, \varepsilon);\ x \in D,\ \varepsilon \sim \mathrm{unif}[\varepsilon_{\min}; \varepsilon_{\max}]\}    (4)

RBF has been widely applied using linear and cubic kernels, corresponding to ε = 1 (polyharmonic spline of first order) and ε = 3 (polyharmonic spline of third order), respectively (Gutmann 2001; Forrester and Keane 2009). This suggests the range of ε to be defined within ε_min = 1 and ε_max = 3. Note that the choice of the distribution for ε is arbitrary and, from a Bayesian viewpoint, represents the degree of belief in the definition of the tuning parameter.

The prediction provided by the metamodel is given at each x by the EV of f̂ over ε:

\hat{f}(x) = \mathrm{EV}[\hat{f}(x, \varepsilon)]    (5)

which is solved by UQ, using (39) with ξ = ε. The metamodel stochastic uncertainty U_f̂(x) is quantified at each x by the 95%-confidence band of f̂(x), using UQ as per (38) and (41). Equations (39) and (41) are solved by the Monte Carlo (MC) method, with a random sample {ε_i}, i = 1, ..., N_ε, drawn from unif[1; 3].
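As an illustration of (1)-(5), the following is a minimal Python/NumPy sketch of the stochastic-RBF prediction: for each Monte Carlo sample of ε the interpolation system (2) is solved and the prediction (1) is evaluated; the MC sample of predictions is then summarized by its mean (the metamodel prediction of (5)) and a 95% band taken here simply from the 2.5/97.5 percentiles, which is one possible reading of the uncertainty of (38) and (41) (the appendix equations are not shown in this excerpt). Function and variable names are illustrative and not taken from the authors' code.

```python
import numpy as np

def drbf_predict(X_train, y_train, X_eval, n_eps=1000, eps_range=(1.0, 3.0), seed=0):
    """Stochastic RBF (DRBF) prediction: EV and a 95% band over the tuning exponent eps."""
    rng = np.random.default_rng(seed)
    eps_samples = rng.uniform(*eps_range, size=n_eps)            # eps ~ unif[1, 3], eq. (4)
    # pairwise Euclidean distances: training-training and evaluation-training
    d_tt = np.linalg.norm(X_train[:, None, :] - X_train[None, :, :], axis=-1)
    d_et = np.linalg.norm(X_eval[:, None, :] - X_train[None, :, :], axis=-1)
    preds = np.empty((n_eps, X_eval.shape[0]))
    for k, eps in enumerate(eps_samples):
        A = d_tt ** eps                                          # kernel matrix of eq. (2)
        w = np.linalg.solve(A, y_train)                          # exact interpolation weights
        preds[k] = (d_et ** eps) @ w                             # eq. (1) at the evaluation points
    f_hat = preds.mean(axis=0)                                   # eq. (5): EV over eps
    lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)
    return f_hat, hi - lo                                        # prediction and band width

# usage sketch on a 1D toy function
if __name__ == "__main__":
    X = np.linspace(0.0, 1.0, 7)[:, None]
    y = np.sin(2.0 * np.pi * X[:, 0])
    Xq = np.linspace(0.0, 1.0, 101)[:, None]
    f_hat, U_f = drbf_predict(X, y, Xq)
```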

2.1 Auto-tuning

The RBF capability in function approximation is known to be sensitive to curvature and non-linearities. Hence, the following approaches are proposed to scale the variable domain:

1. Non-adaptive. Each independent variable is scaled to the interval [0; 1].
2. Adaptive. Each independent variable is scaled according to the curvature of the function.

The latter is performed with the introduction of a scaling factor resulting from the evaluation of the curvature. The main idea is that of having a maximum second derivative with the same value for all variables. Accordingly, adaptive scaling is performed only when n ≥ 2.

An analytical expression of the second derivative is available from (1); however, it is strongly influenced by the local behavior of the metamodel and the quality of the approximation deteriorates for higher derivatives. Hence, a finite-differences method is applied hereafter. Accordingly, the RBF kernel is defined as

\varphi(\|x - x_i\|) = \left[\sum_{k=1}^{n} c_k^2 (x_k - x_{k,i})^2\right]^{\varepsilon/2}    (6)

with c_k given by

c_k = \left(\max_{x}\left|\frac{\partial^2 \hat{f}}{\partial x_k^2}\right| - 1\right) r_f + 1    (7)

where r_f is a relaxation factor. Herein, the identification of the maximum value of the second derivative in (7) is performed using a deterministic particle swarm optimization algorithm (Campana et al. 2009) over x. Note that if a non-adaptive normalization of the independent variables is used, c_k becomes

c_k = \frac{1}{\max\{x_k\} - \min\{x_k\}}    (8)
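The scaling factors can be computed, for instance, as in the sketch below (hypothetical helper, not the authors' implementation): the maximum curvature of the current metamodel along each variable is estimated by central finite differences over a simple candidate grid, rather than by the particle swarm optimizer used in the paper, and the relaxation factor r_f blends it with unity as in (7); without adaptive scaling the factor reduces to the range normalization of (8).

```python
import numpy as np

def adaptive_scaling_factors(predict, bounds, rf=0.25, n_grid=11, rel_step=1e-3):
    """Curvature-based scaling factors c_k of eq. (7); predict maps an (n,) point to a scalar."""
    bounds = np.asarray(bounds, dtype=float)                 # shape (n, 2): [min, max] per variable
    n = bounds.shape[0]
    axes = [np.linspace(lo, hi, n_grid) for lo, hi in bounds]
    grid = np.array(np.meshgrid(*axes, indexing="ij")).reshape(n, -1).T
    c = np.ones(n)
    for k in range(n):
        h = rel_step * (bounds[k, 1] - bounds[k, 0])
        step = np.zeros(n)
        step[k] = h
        # central finite-difference estimate of the second derivative along variable k
        d2 = [(predict(x + step) - 2.0 * predict(x) + predict(x - step)) / h**2 for x in grid]
        c[k] = (np.max(np.abs(d2)) - 1.0) * rf + 1.0         # eq. (7)
    return c

def non_adaptive_scaling_factors(bounds):
    """Range normalization of eq. (8)."""
    bounds = np.asarray(bounds, dtype=float)
    return 1.0 / (bounds[:, 1] - bounds[:, 0])
```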
2.2 Adaptive sampling

An initial training set T is built by evaluating the function at M_0 = 2n + 1 points: one training point is set at the center of the domain and the other training points are set at the center of each boundary hyper-face. Predictions are made available as the EV of f̂ (5) with the related metamodel stochastic uncertainty U_f̂ (38). Additional training points are placed where the metamodel stochastic uncertainty is largest:

x_{M+1} = \mathrm{argmax}_{x}\,[U_{\hat{f}}(x)]    (9)

Equation (9) is used to increase the size of T and update the metamodel iteratively, as shown in Fig. 1. For the solution of (9), the same deterministic particle swarm optimization used for (7) is applied.

Fig. 1 DRBF model: uncertainty-based adaptive sampling

It may be noted that the approach proposed for adaptive sampling is similar to considering the maximum mean square error (MMSE) in Kriging (Sacks et al. 1989). This can be extended taking into account a global metric, by integration of U_f̂ over the domain, similarly to Kriging's integrated mean square error (IMSE, Sacks et al. 1989). Herein, the maximum value of U_f̂ through (9) is preferred, since it is relatively easy to use and implement, leaving alternative sampling criteria for future work.
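A compact sketch of the resulting uncertainty-driven loop is shown below. The interface is assumed: `evaluate` is the expensive solver, and `fit_predict` returns the metamodel prediction and uncertainty at given points (e.g. the `drbf_predict` sketch shown earlier); the maximization of (9) is carried out here over a random candidate set instead of the deterministic particle swarm optimizer used by the authors.

```python
import numpy as np

def adaptive_sampling(evaluate, fit_predict, bounds, budget, X0, n_cand=2000, seed=0):
    """Uncertainty-based adaptive sampling, eq. (9): add points where U_fhat is largest."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    X = np.atleast_2d(np.array(X0, dtype=float))            # initial DoE, e.g. the 2n+1 points above
    y = np.array([evaluate(x) for x in X])
    while X.shape[0] < budget:
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_cand, bounds.shape[0]))
        _, U = fit_predict(X, y, cand)                       # metamodel uncertainty at candidates
        x_new = cand[np.argmax(U)]                           # eq. (9)
        X = np.vstack([X, x_new])
        y = np.append(y, evaluate(x_new))
    return X, y
```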
2.3 Parallel infill

The parallel infill is performed by applying (9) sequentially, using a group of I dummy predictions f̂, as follows.

Step 1. For i = 1 : I, do
    identify x_{M+i} as per (9);
    predict f̂(x_{M+i}) using the metamodel;
    add [x_{M+i}; f̂(x_{M+i})] to the training set T.
    End
Step 2. For i = 1 : I, do (in parallel)
    evaluate the function f(x_{M+i});
    add [x_{M+i}; f(x_{M+i})] to the training set T.
    End
(10)

This method makes effective use of parallel computing resources; however, its accuracy is affected by the number of training points per group, I, and may not be as high as using a purely sequential scheme, I = 1 (Forrester and Keane 2009).

2.4 Multiple response criteria

In order to perform adaptive sampling when high-fidelity simulations provide multiple responses, two criteria are formulated and applied.

U_ave: new training points are placed based on the maximum average value of the uncertainty among the functions:

x_{M+1} = \mathrm{argmax}_{x}\,[\bar{U}_{\hat{f}}(x)]    (11)

where

\bar{U}_{\hat{f}}(x) = \frac{1}{m}\sum_{j=1}^{m} U_{\hat{f}_j}(x)\, R_j^{-1}    (12)

where U_f̂j indicates the stochastic uncertainty of the j-th function, R_j = max{f_j} − min{f_j}, and m is the number of responses.

U_max: new training points are defined based on the maximum absolute uncertainty among the functions:

x_j = \mathrm{argmax}_{x}\,[U_{\hat{f}_j}(x)]    (13)

x_{M+1} = \mathrm{argmax}_{x_j}\,[U_{\hat{f}_j}(x_j)\, R_j^{-1}]    (14)

Note that in the case of multiple responses, (2) may be rewritten in the compact form

A W = Y    (15)

where W = [w_1 | ... | w_m] and Y = [y_1 | ... | y_m], with subscripts indicating different output responses. The above system of equations may be solved at once, taking advantage of a single factorization of the matrix A.
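Both criteria reduce to a few lines; in the sketch below (illustrative names) `U` is an m-by-N array of the uncertainties U_f̂j evaluated at N candidate points, `R` the vector of response ranges R_j = max f_j − min f_j, and `cand` the candidate points. For the block system (15), note that a single factorization can be reused for all responses, e.g. by passing a right-hand-side matrix with one column per response to a standard linear solver.

```python
import numpy as np

def next_point_uave(cand, U, R):
    """U_ave criterion, eqs. (11)-(12): maximize the range-normalized average uncertainty."""
    u_bar = (U / R[:, None]).mean(axis=0)
    return cand[np.argmax(u_bar)]

def next_point_umax(cand, U, R):
    """U_max criterion, eqs. (13)-(14): per-response maxima, then the largest normalized one."""
    idx = np.argmax(U, axis=1)                          # x_j = argmax_x U_fhat_j(x), eq. (13)
    scores = U[np.arange(U.shape[0]), idx] / R          # U_fhat_j(x_j) R_j^-1, eq. (14)
    return cand[idx[np.argmax(scores)]]
```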
2.5 Evaluation metrics

In order to investigate the effectiveness of metamodels, function predictions and metamodel-based results are systematically validated. For each training set, the fitting error is computed as the normalized error between predictions and benchmark values:

E(x) = \frac{f(x) - \hat{f}(x)}{\max\{f\} - \min\{f\}}    (16)

For a given validation set V = {x_i}, i = 1, ..., P, the normalized root mean square error is given by

E_{\mathrm{RMS}} = \sqrt{\frac{1}{P}\sum_{i=1}^{P} E(x_i)^2}    (17)

For the current UQ applications, metamodel-based estimators are evaluated following Mousaviraad et al. (2013) and the errors for EV, SD and CDF are defined by

E_{\mathrm{EV}} = \frac{\mathrm{EV} - \mathrm{EV}^V}{\mathrm{EV}^V}    (18)

E_{\mathrm{SD}} = \frac{\mathrm{SD} - \mathrm{SD}^V}{\mathrm{SD}^V}    (19)

E_{\mathrm{CDF}} = \sqrt{\frac{1}{K}\sum_{k=1}^{K}\left[\mathrm{CDF}^V(y_k) - \mathrm{CDF}(y_k)\right]^2}    (20)

where EV, SD and CDF are evaluated using (39-41) (Appendix A), substituting ξ with x. Superscript V indicates validation values, obtained using f instead of the metamodel predictions f̂. Finally, the average UQ error is defined as

E_{\mathrm{UQ}} = \frac{|E_{\mathrm{EV}}| + |E_{\mathrm{SD}}| + E_{\mathrm{CDF}}}{3}    (21)

When multiple output functions are assessed, f_j, j = 1, ..., m, all errors in (17-21) are studied by their average among multiple responses:

\bar{E}_X = \frac{1}{m}\sum_{j=1}^{m} E_{X,j}    (22)

where E_X,j is respectively E_RMS, E_EV, E_SD, E_CDF and finally E_UQ for output function f_j. Ē_RMS and Ē_UQ are used to provide an overall assessment of accuracy with focus on function fitting and UQ, respectively; the convergence of such parameters versus the training set size M provides an insight into the metamodel efficiency. Thus, the number M required to achieve specified errors is introduced as an important metric, providing the computational cost. Error levels of 5, 2.5 and 1.25% are considered, since they are comparable to the typical uncertainty of CFD outputs due to iterative, grid and time-step convergence:

M_5%      the minimum number of training points required to achieve Ē_X < 5%
M_2.5%    the minimum number of training points required to achieve Ē_X < 2.5%
M_1.25%   the minimum number of training points required to achieve Ē_X < 1.25%
M_ave     the average of the above, M_ave = (M_5% + M_2.5% + M_1.25%)/3    (23)

where M_ave is used as an overall index of metamodel efficiency.
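The metrics (16)-(23) translate directly into code; the sketch below (illustrative names, benchmark values denoted by the suffix `_v`) computes the normalized RMS fitting error, the average UQ error, and one simple reading of the threshold counts M_5%, M_2.5%, M_1.25% from a history of the averaged error versus training set size.

```python
import numpy as np

def e_rms(f_true, f_pred):
    """Normalized RMS fitting error, eqs. (16)-(17), in percent."""
    e = (f_true - f_pred) / (f_true.max() - f_true.min())
    return 100.0 * np.sqrt(np.mean(e ** 2))

def e_uq(ev, sd, cdf, ev_v, sd_v, cdf_v):
    """Average UQ error, eqs. (18)-(21), in percent; cdf and cdf_v are sampled at the same y_k."""
    e_ev = (ev - ev_v) / ev_v
    e_sd = (sd - sd_v) / sd_v
    e_cdf = np.sqrt(np.mean((np.asarray(cdf_v) - np.asarray(cdf)) ** 2))
    return 100.0 * (abs(e_ev) + abs(e_sd) + e_cdf) / 3.0

def m_levels(train_sizes, avg_errors, levels=(5.0, 2.5, 1.25)):
    """M_5%, M_2.5%, M_1.25% and M_ave, eq. (23); None marks levels never reached."""
    ms = [next((m for m, e in zip(train_sizes, avg_errors) if e < lev), None) for lev in levels]
    m_ave = None if None in ms else sum(ms) / len(ms)
    return ms, m_ave
```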
3 Static metamodels and dynamic Kriging used for comparison

3.1 Static metamodels

The following static metamodels are used for comparison: k-th order inverse distance weighting, IDW (Shepard 1968); radial basis function network with multiquadric and inverse multiquadric kernels, RBF MQ/IMQ (Buhmann 2003); k-th order polyharmonic spline, PHS (Wahba 1990); least-square support vector machine with multiquadric and inverse multiquadric kernels, LS-SVM MQ/IMQ (Suykens et al. 2002); ordinary Kriging with linear and exponential correlation functions, OKG lin./exp. (Peri 2009); an implementation of stochastic RBF without adaptive scaling and sampling, implementing power-law, multiquadric and inverse multiquadric kernels, SRBF P/MQ/IMQ; and DKG without adaptive sampling. A summary of the static techniques used is given in Table 1.

Table 1 Summary of the static metamodels used for comparison

Technique | Acronym | Kernel | Tuning parameter (stochastic-speed) | Tuning parameter (stochastic-wave)
Inverse distance weighting | IDW | 1/r^k | k = 2, 4, 6 | k = 2, 4, 6
Radial basis function network | RBF | [1 + (α r)^2]^0.5 (MQ) | α = 10; 100; 1,000 | α = 1; 10; 100
 | | [1 + (α r)^2]^−0.5 (IMQ) | α = 5; 10; 15 | α = 0.75; 1.0; 1.25
Polyharmonic spline | PHS | r^k, odd k; r^k log(r), even k | k = 1, 2, 3 | k = 1, 2, 3
Least-square support vector machine | LS-SVM | [1 + (α r)^2]^0.5 (MQ) | α = 10; 100; 1,000 | α = 1; 10; 100
 | | [1 + (α r)^2]^−0.5 (IMQ) | α = 5; 10; 15 | α = 0.75; 1.0; 1.25
Ordinary Kriging | OKG | 1 − (α r) (lin.) | not applied | α = 0.25; 0.5; 1
 | | exp(−α r) (exp.) | not applied | α = 0.25; 0.5; 1
Stochastic radial basis functions | SRBF | r^ε (P) | not applied | ε ∈ [1; 3]
 | | [1 + (α r)^2]^0.5 (MQ) | not applied | α ∈ [1; 100]
 | | [1 + (α r)^2]^−0.5 (IMQ) | not applied | α ∈ [0.75; 1.25]
Dynamic Kriging (without adaptive sampling) | DKG | auto-selected | auto-selected | auto-selected

3.2 Dynamic Kriging

The response f̂ is modeled in two parts, namely the mean structure and the stochastic process:

\hat{f} = \phi^{T}\beta + Z    (24)

Z is a Gaussian random process with zero mean and covariance given by

C(x_1, x_2) = \sigma^2 \rho(\theta, x_1, x_2)    (25)

where σ² is the process variance, ρ is the spatial correlation function and θ collects the correlation function parameters. The prediction provided by the Kriging model is

\hat{f}(x) = \phi^{T}\beta + r^{T} R^{-1}(y - F\beta)    (26)

where y are the observations at the training points, y = {f(x_i)}, R = {ρ(θ, x_i, x_j)}, r(x) = {ρ(θ, x, x_i)}, F = [φ(γ, x_1), φ(γ, x_2), ..., φ(γ, x_M)]^T, and φ are the basis functions with tuning parameters collected in γ. Using a least-squares estimation, β is approximated by β̂ as

\hat{\beta}(x) = (F^{T} R^{-1} F)^{-1} F^{T} R^{-1} y    (27)

3.2.1 Auto-tuning

The dynamic Kriging presents an automatic selection of the basis functions, of the correlation function and of the correlation function parameters. Note that if auto-tuning is not performed, the expected value of the prediction and the intrinsic uncertainty stemming from the choice of the basis and correlation functions, including γ and θ, may be evaluated by UQ as in Bayesian Kriging. Herein, the mean structure F is selected by minimizing the cross-validation error among ordinary (OKG) and first/second-order universal Kriging (UKG) methods; the best ρ(θ, x_1, x_2) and θ are identified using maximum likelihood estimation (MLE, Martin and Simpson 2005). Assuming a Gaussian distribution, the log-likelihood of the model parameters is defined as

l = -\frac{M}{2}\ln(2\pi\sigma^2) - \frac{1}{2}\ln(|R|) - \frac{1}{2\sigma^2}(y - F\beta)^{T} R^{-1}(y - F\beta)    (28)

Computing the derivative with respect to β and σ² and using the approximation in (27) leads to an estimation of the process variance

\hat{\sigma}^2 = \frac{1}{M}(y - F\hat{\beta})^{T} R^{-1}(y - F\hat{\beta})    (29)

and then of l. The goal is to find the optimal θ that maximizes the likelihood function. To solve the optimization problem a generalized pattern search (GPS, Torczon 1995) is applied; GPS is sensitive to the initial guess, therefore a genetic algorithm (GA) is used to get a best-candidate initial guess. The advantage of using derivative-free approaches such as GPS and GA is that gradient information of the log-likelihood is not required and the method performs well also in the presence of noisy functions.
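For reference, a minimal sketch of (27)-(29) is given below for a fixed correlation parameter vector θ, using a Gaussian correlation function and a constant mean (ordinary Kriging); this only illustrates the likelihood that the GA/GPS search maximizes and is not the authors' DKG implementation, which also auto-selects the mean structure and basis functions.

```python
import numpy as np

def gaussian_correlation(X, theta):
    """R_ij = exp(-sum_k theta_k (x_ik - x_jk)^2), one common choice of rho(theta, x1, x2)."""
    diff = X[:, None, :] - X[None, :, :]
    return np.exp(-np.sum(np.asarray(theta) * diff ** 2, axis=-1))

def log_likelihood(theta, X, y, nugget=1e-10):
    """Concentrated log-likelihood of eq. (28) evaluated at beta_hat (27) and sigma_hat^2 (29)."""
    M = X.shape[0]
    R = gaussian_correlation(X, theta) + nugget * np.eye(M)   # small nugget for numerical stability
    F = np.ones((M, 1))                                       # constant mean structure (OKG)
    Ri_y = np.linalg.solve(R, y)
    Ri_F = np.linalg.solve(R, F)
    beta = np.linalg.solve(F.T @ Ri_F, F.T @ Ri_y)            # eq. (27)
    resid = y - (F @ beta).ravel()
    sigma2 = resid @ np.linalg.solve(R, resid) / M            # eq. (29)
    _, logdet = np.linalg.slogdet(R)
    # at the optimum the last term of eq. (28) equals M/2
    return -0.5 * (M * np.log(2.0 * np.pi * sigma2) + logdet + M)
```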

When the training set size is small, MLE can be inaccurate; therefore a penalized MLE (PMLE) is introduced (Li and Sudjianto 2005). The log-likelihood function is modified by adding a penalty term, such as

Q = -\frac{M}{2}\ln(2\pi\sigma^2) - \frac{1}{2}\ln(|R|) - \frac{1}{2\sigma^2}(y - F\beta)^{T} R^{-1}(y - F\beta) - M\sum_{i=1}^{N}\lambda|\theta_i|    (30)

λ is the penalty function parameter and is identified using a GPS with the cross-validation error.

3.2.2 Adaptive sampling

M_0 initial training points are distributed in the design space using a Latin centroidal Voronoi tessellation (LCVT, Saka et al. 2007) sampling, then the Kriging model is built. Once the metamodel is constructed, it is used to predict the function and to approximately bound the errors in such predictions. The prediction mean square error (MSE) from the Kriging model (Sacks et al. 1989),

\mathrm{MSE}[\hat{f}(x)] = \sigma^2\left(1 - \left[\phi^{T}(x)\;\; r^{T}(x)\right]\begin{bmatrix} 0 & F^{T} \\ F & R \end{bmatrix}^{-1}\begin{bmatrix} \phi(x) \\ r(x) \end{bmatrix}\right)    (31)

is taken as the metric of accuracy. Larger values of the prediction MSE are associated with larger uncertainty in the prediction (Booker et al. 1999); since the current DKG has been developed for general purposes, namely to minimize the prediction variance over the domain, the MMSE is used to determine the choice of new training points:

x_{M+1} = \mathrm{argmax}_{x}\{\mathrm{MSE}[\hat{f}(x)]\}    (32)

As an alternative metric to MMSE, one could use the integrated value of MSE over the domain, IMSE. This provides a global MSE-based metric and has been used for efficient DoE in several Kriging applications (see, e.g., Buslig et al. 2014). Herein, MMSE is preferred for its straightforward implementation and ease of use, in analogy with the choice made for adaptive sampling with DRBF.

3.2.3 Parallel infill

In the previous section, training points are selected one at a time following a sequential sampling approach. If parallel computing is available, adding multiple training points by parallel infill may be more convenient than one-at-a-time sampling. However, if multiple training points are selected only by prediction MSE, many training points could be clustered in some small region. This is because test points near the test point with the largest prediction MSE also have a large prediction MSE. Therefore, multiple training points need to be distributed considering the distances to existing and new training points. Thus, I new training points are selected based on the following steps (a sketch is given after the list):

Step 1. The prediction MSE (PMSE) is calculated using the current DoE training points.
Step 2. The test point with the largest PMSE is inserted. Set i = 1.
Step 3. If i = I, go to Step 5. Otherwise, go to Step 4.
Step 4. The nearest distance D(x) from existing training points is calculated for the test points. The test point with the largest PMSE(x) · D(x) is inserted. Set i = i + 1. Go to Step 3.
Step 5. At all selected test points, responses are evaluated in parallel.    (33)

It may be noted that a common procedure to implement the parallel infill with Kriging is that of using I dummy predictions, similarly to the approach shown for DRBF in Section 2.3. This approach requires a number of I − 1 sequential evaluations of the Kriging model and could be computationally expensive, depending on the problem dimension and the value of I. For this reason, herein the use of PMSE together with the nearest distance D from existing training points is preferred, allowing for a good level of accuracy at a reasonable computational cost. Issues connected with the parallel infill of Kriging models are beyond the scope of the current work, and are not further addressed.
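A sketch of this selection is given below (illustrative; `pmse` holds the prediction MSE of (31) at the candidate/test points, assumed available from a fitted Kriging model, and `X_train` are the existing training points).

```python
import numpy as np

def select_infill_points(cand, pmse, X_train, n_new):
    """Parallel-infill selection of (33): largest PMSE first, then largest PMSE(x)*D(x)."""
    selected = [int(np.argmax(pmse))]                      # Step 2: largest PMSE
    existing = np.atleast_2d(np.array(X_train, dtype=float))
    for _ in range(n_new - 1):                             # Steps 3-4
        pts = np.vstack([existing, cand[selected]])        # existing plus already-selected points
        D = np.min(np.linalg.norm(cand[:, None, :] - pts[None, :, :], axis=-1), axis=1)
        score = np.asarray(pmse) * D
        score[selected] = -np.inf                          # do not reselect the same test point
        selected.append(int(np.argmax(score)))
    return cand[selected]                                  # Step 5: evaluate these in parallel
```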

3.2.4 Multiple response criterion

If multiple responses are assessed, the maximum normalized PMSE among the multiple responses is used. Thus, new training points are inserted for the more highly non-linear responses, similarly to (13) and (14) for DRBF.

4 Analytical and numerical benchmarks for deterministic and stochastic validation problems

4.1 Test functions fitting

Test functions (Lucidi and Piccioni 1989; Ali et al. 2005; Taddy et al. 2009) are used as analytical benchmarks and are presented in Appendix B. These have dimensionality ranging from two (low dimensionality) to six (medium dimensionality), with different degrees of non-linearity. Among the 2D functions, three are polynomial (two of fourth order and one of sixth order), one is a combination of a fourth-order polynomial and a trigonometric function, and one is trigonometric. Among the 3D functions, one is a fourth-order polynomial and one is exponential.

The 4D functions include two fourth-order polynomials, whereas the 6D function is exponential. The number of independent variables and their bounds are summarized in Table 2.

4.2 Uncertainty quantification problems for high-speed catamaran

The two ship hydrodynamics problems represent extensions of the basic resistance, sinkage and trim and seakeeping (resistance, heave and pitch) deterministic problems to UQ problems. In ship design, resistance and seakeeping are usually evaluated using towing tank tests; however, CFD is recently replacing the build-and-test approach with SBD, which offers more detailed analysis and innovative optimized designs. The formulation of resistance and seakeeping as UQ problems is relatively new and preparatory to the development of stochastic optimization approaches. The present research builds on previous Delft catamaran studies of deterministic single and multiple objective optimizations for resistance (Kandasamy et al. 2013) and resistance and seakeeping (Tahara et al. 2012) and UQ for calm water (Diez et al. 2014) and seakeeping (He et al. 2013) including comparison of several static metamodels.

4.2.1 Uncertainty quantification for high-speed catamaran in calm water with stochastic speed

The first ship hydrodynamics problem is the one-dimensional UQ of the Delft catamaran performance in calm water, presented in Diez et al. (2014). Main particulars and conditions are shown in Table 3, whereas the hull geometry is presented in Fig. 2. The towing tank model scale (Kandasamy et al. 2013) is assumed for the current problem. The model has two degrees of freedom (it is free to sink and trim). The speed U [m/s] is taken as an operational uncertainty. Accordingly, the Froude number (used as non-dimensional speed, Fr = U/√(g·Lpp), with g the acceleration of gravity [m/s²] and Lpp the length between perpendiculars [m]) is assumed to have a normal distribution with expected value EV(Fr) = 0.5 and standard deviation SD(Fr) = 0.05. The distribution is truncated to its 95% confidence interval, whose lower and upper bounds are approximately equal to EV(Fr) ± 2SD(Fr), respectively.

The numerical benchmark consists in f_i(Fr), EV(f_i), SD(f_i), and CDF(f_i) with i = 11, ..., 13. Specifically, f11 = CT = RT/(0.5ρU²S) (where RT is the total resistance [N], ρ is the water density [kg/m³], and S is the static wetted area) is the non-dimensional total resistance, f12 = σ = z/Lpp (where z is the stationary sinkage at the center of gravity G [m]) is the non-dimensional sinkage, and f13 = τ is the stationary trim angle [rad]. Output functions and input variable bounds are summarized in Table 4. Numerical benchmark values are given by a converged MC with Latin hypercube sampling, using N = 257 URANS computations. Benchmark functions are shown in Fig. 3.

4.2.2 Uncertainty quantification for high-speed catamaran in stochastic regular wave

The second ship hydrodynamics problem is the two-dimensional UQ of the Delft catamaran performance in regular head waves, as presented in He et al. (2013). A full-scale ship is considered and the main particulars and conditions are included in Table 3. The ship (Fig. 2) is free to heave and pitch. A design speed corresponding to Fr = 0.5 is assumed and the stochastic wave conditions pertain to sea state 6, described by the Bretschneider spectrum. Specifically, the wave period and height, T and H, follow a joint probability density function which depends on the spectrum parameters. The focus of the analysis is on the output variables: x-force coefficient Cx = −Fx/(0.5ρU²S) (where Fx is the force in the x direction), heave ζ = z/Lpp
(used as non-dimensional speed, Fr = U/ gLpp , with (where Fx is the force in x direction), heave ζ = z/Lpp

Table 2 Test functions


Function | No. of variables | Variable bounds | Max. training set size, Mmax | Validation set size, P

f1 Branin-Hoo 2 −5 < x1 < 10; 0 < x2 < 15 100 10,000


f2 Six-hump camel back 2 −2 < x1 < 2; −1 < x2 < 1 100 10,000
f3 Rosenbrock 2 −2 < x1 < 2; −1.5 < x2 < 2 100 10,000
f4 Quartic 2 −2 < x1 < 2; −1 < x2 < 1 100 10,000
f5 Shubert 2 −4 < xk < 2, ∀k 250 10,000
f6 Rosenbrock 3 −2 < xk < 2, ∀k 150 15,625
f7 Hartman 3 0 < xk < 1, ∀k 150 15,625
f8 Rosenbrock 4 −2 < xk < 2, ∀k 300 160,000
f9 Styblinski-Tang 4 −5 < xk < 5, ∀k 300 160,000
f10 Hartman 6 0 < xk < 1, ∀k 500 531,441

Table 3 Delft catamaran main particulars and simulation conditions

Main particular/condition | Symbol | Value, stochastic-speed | Value, stochastic-wave
Length overall [m] | Loa | 3.822 | 105.4
Length between perpendiculars [m] | Lpp | 3.627 | 100.0
Breadth overall | B/Lpp | 0.313 | 0.313
Breadth demi-hull | b/Lpp | 0.080 | 0.080
Draught at mid-ship | T/Lpp | 0.050 | 0.050
Distance between center of demi-hulls | s/Lpp | 0.234 | 0.234
Longitudinal center of gravity | Lcg/Lpp | 0.527 | 0.527
Vertical center of gravity | Kg/Lpp | 0.074 | 0.113
Pitch radius of gyration | ρy/Lpp | not used | 0.261
Froude number | Fr | [0.402; 0.598] | 0.5
Reynolds number | Re | 1.019 × 10^7 | 7.144 × 10^6
Wave period [s] | T | not used | [2.2; 17.7]
Wave height [m] | H | not used | [0.5; 6.4]

and pitch (θ) motions. The following global output parameters are assessed: the time mean and the root mean square deviation from the mean (RMS) of the output time histories η(t), evaluated as

\bar{\eta} = \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} \eta(t)\,dt, \qquad \eta_{\mathrm{RMS}} = \left[\frac{1}{t_2 - t_1}\int_{t_1}^{t_2}\left(\eta(t) - \bar{\eta}\right)^2 dt\right]^{0.5}    (34)

where t_2 − t_1 = T.
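For a sampled time history, (34) amounts to the short routine below (a sketch assuming the signal is sampled over one encounter period; trapezoidal integration is used).

```python
import numpy as np

def time_mean_and_rms(t, eta):
    """Time mean and RMS deviation from the mean of a time history, eq. (34)."""
    t = np.asarray(t, dtype=float)
    eta = np.asarray(eta, dtype=float)
    dt = np.diff(t)
    trapz = lambda f: np.sum(0.5 * (f[1:] + f[:-1]) * dt)   # trapezoidal rule
    T = t[-1] - t[0]
    eta_mean = trapz(eta) / T
    eta_rms = np.sqrt(trapz((eta - eta_mean) ** 2) / T)
    return eta_mean, eta_rms
```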
The numerical benchmark is defined using a fully non-linear irregular wave URANS statistically converged computation, and is used to validate UQ methods based on regular wave models. The numerical benchmark to validate metamodel-based UQ is defined using converged Markov-chain MC with N = 129 regular wave URANS computations and consists in f_i(T, H), EV(f_i), SD(f_i), and CDF(f_i) with i = 14, ..., 19. Specifically, f14 = C̄x, f15 = Cx,RMS, f16 = ζ̄, f17 = ζRMS, f18 = θ̄ and f19 = θRMS, as summarized in Table 4. Benchmark functions are shown in Fig. 4. These are made available for adaptive sampling by a thin plate spline model.

Fig. 2 Delft catamaran geometry and dimensions

5 Numerical results

Test function results are presented showing: convergence of the DRBF MC method for EV and U_f̂ for f1; the average fitting error Ē_RMS, using different relaxation factors (r_f = 0, 0.25, 0.5, 0.75, 1.0 as per (7)) and non-adaptive scaling as per (8); the breakdown of results among test functions, comparing DRBF and DKG; and the effects of sequential sampling (no parallel infill, I = 1) and parallel infill using I = 5 and I = 10 (10). Table 2 shows the maximum training set size Mmax and the size of the validation set {x_i}, i = 1, ..., P, the latter defined by a regularly distributed Cartesian grid; the values of Mmax and P are chosen according to the problem dimensionality and the degree of non-linearity.

Catamaran problems are assessed presenting: adaptive sampling and fitting error E; average Ē_RMS and Ē_UQ, using different criteria for multiple responses and different relaxation factors; breakdown of results among functions, comparing DRBF and DKG; effects of sequential sampling (I = 1) and parallel infill (I = 5 and 10); and convergence of Ē_RMS and Ē_UQ versus M, with comparison to static metamodels from earlier work. No scaling approach is applied for the stochastic-speed case since the problem is one-dimensional, whereas different relaxation factors (r_f = 0, 0.25, 0.5, 0.75, 1.0) and non-adaptive scaling are used for the stochastic-wave problem. The maximum training set size is set to Mmax = 33 and 65 (Table 4), according to Diez et al. (2014) and He et al. (2013) for the two catamaran problems, respectively.

5.1 Test functions fitting

Figure 5 shows the convergence of the MC method used to build the DRBF.

Table 4 Delft catamaran UQ problems

Problem | Function | No. of variables | Variable bounds | Max. training set size, Mmax | Validation set size, P
Stochastic-speed | f11 resistance, CT; f12 sinkage, σ; f13 trim, τ | 1 | 0.402 < Fr < 0.598 | 33 | 128
Stochastic-wave | f14 x-force mean, C̄x; f15 x-force RMS, Cx,RMS; f16 heave mean, ζ̄; f17 heave RMS, ζRMS; f18 pitch mean, θ̄; f19 pitch RMS, θRMS | 2 | 2.2 < T < 17.7; 0.5 < H < 6.4 | 65 | 64

f1 is shown as an example; however, the plot in Fig. 5 displays the typical behavior of all the problems considered. EV and U_f̂ are shown for x = x_{M0+1}, given by (9), using a number of training points N_ε up to 1,000. Variations of EV are within ±1% of the final value for all N_ε > 200, whereas U_f̂ is within ±1% of its final value for N_ε > 800. Accordingly, N_ε = 1,000 is selected hereafter. The cost of the resulting MC procedure is reasonable for the test functions and negligible for UQ, if compared to the cost of the CFD simulations.

Table 5 shows a comparison among scaling approaches as performed by DRBF, using the average fitting error Ē_RMS. Although the performance differences are not large, using r_f = 0.25 is found to be the best approach and is used for the comparison with DKG. The associated M_ave is found 4.7% smaller than the average over all scaling approaches. Non-adaptive domain normalization (8) provides the slowest convergence.

M_5%, M_2.5%, M_1.25% and M_ave for each test function are presented in Fig. 6. A grey bar is used whenever the required training set size exceeds the evaluation budget as per Table 2, and therefore indicates that the specific error level was not achieved. Both metamodels achieve E_RMS < 1.25% for test functions f1 (Branin-Hoo, 2D), f3 (Rosenbrock, 2D), f4 (Quartic, 2D), f6 (Rosenbrock, 3D), f8 (Rosenbrock, 4D) and f9 (Styblinski-Tang, 4D). Both DRBF and DKG do not achieve errors < 2.5% for f5 (Shubert, 2D). DRBF is not able to achieve errors < 1.25% for f2 (Six-hump camel back, 2D) and errors < 2.5% for f7 (Hartman, 3D), whereas DKG does not achieve errors < 5% for f10 (Hartman, 6D). Overall, DRBF shows a better performance for f5 and f10, likely due to the trigonometric and medium-dimensional transcendental nature of the functions, whereas DKG performs better for test functions f1−f4 and f6−f9, providing mostly a sudden convergence. Table 6 shows the average performance of DRBF and DKG using both sequential sampling and parallel infill. For I = 1 the metamodels provide similar performance in achieving Ē_RMS < 5%. DKG requires a smaller training set size for achieving Ē_RMS < 2.5% and 1.25%, and is found to be the best method on average, as per the M_ave values. Sequential sampling (I = 1) and parallel infill with I = 5 and 10 present similar trends and close results, revealing the high scalability of the approach used. The relative increase in the number of evaluations required with I = 5 and 10 compared to I = 1 is fairly small.

5.2 Delft catamaran with stochastic speed

5.2.1 Fitting

Figure 7 shows the normalized fitting error E versus Fr. Black diamonds indicate the training set {x_i}, i = 1, ..., M, with M = 33, using DRBF by U_ave and U_max and DKG by U_max. The error presents similar trends and is within ±1% for all functions and methods. The error peaks are given by DKG for f11 (close to 1%) and f12 (−1%).
Fig. 3 Stochastic-speed problem: output functions provided by URANS (Diez et al. 2014)

Fig. 4 Stochastic-wave problem: output functions provided by URANS (He et al. 2013)

Generally, errors are larger for f11 and f12, and very small for f13. Comparing Fig. 7 with Fig. 3 shows that DRBF training points are mostly located where the average curvature is greater; training points are found more evenly distributed with DKG.

Figure 8 presents the fitting performance for each function, comparing DRBF and DKG. DRBF sampling by U_ave and U_max gives the same results in terms of M_5%, M_2.5%, M_1.25% and M_ave. Both metamodels achieve E_RMS < 1.25% for each function. DRBF shows equal or better performance than DKG for M_5% in f11, f12 and f13, and for M_2.5% in f11 and f12. Overall, performances do not differ by more than 2 function evaluations. Table 6 shows the average fitting error Ē_RMS over all functions. Using sequential sampling (I = 1), the metamodels provide very close performance; M_ave ≈ 6 for both DRBF and DKG with a maximum difference of one function evaluation. Since the number of function evaluations needed to achieve convergence is less than 7, using the parallel infill with I = 5 and I = 10 is not needed.

Table 7 shows a summary of the results with comparison to earlier studies based on static metamodels, with M = 3, 5, 9, 17, 33 (Diez et al. 2014). Figure 9 (a) shows the convergence of Ē_RMS versus the training set size M. The average error and range of the static approaches are shown by diamonds and error bars. DRBF and DKG present similar trends. Table 8 shows the percentile of the average error provided by DRBF and DKG compared to other static metamodels, for M = 3, 5, 9, 17 and 33. The error difference with the best metamodel is also shown. Specifically, DRBF is found the best metamodel for M = 33, whereas DKG is found the best metamodel for M = 9; for M = 17 the metamodels provide the same results. For M ≥ 5 DRBF and DKG present an error difference with the best metamodel always less than 1.65%. The best static metamodel is, on average, LS-SVM with IMQ kernel and α = 5.

5.2.2 Uncertainty quantification

Figure 10 presents the UQ performance for each function, comparing DRBF and DKG. DRBF sampling by U_ave and U_max gives the same results in terms of M_5%, M_2.5%, M_1.25% and M_ave. DRBF is the most effective in achieving E_UQ < 5% for f11, providing M_5%, M_2.5%, M_1.25% equal to 4, 6 and 7, whereas DKG shows a sudden convergence with M_5%, M_2.5%, M_1.25% equal to 5.

Fig. 5 Convergence of the Monte Carlo method in (39) (a) and (41) (b) for test function f1

Table 5 DRBF relaxation factor studies for adaptive scaling

Problem | Error assessed | Multiple-resp. approach | Metric | rf = 0 | rf = 0.25 | rf = 0.50 | rf = 0.75 | rf = 1 | Non-adapt. scaling
Test functions | ĒRMS | n.a. | M5% | 106.3 | 90.9 | 90.1 | 89.4 | 106.1 | 107.1
Test functions | ĒRMS | n.a. | M2.5% | 156.6 | 147.4 | 153.1 | 159.2 | 151.7 | 162.1
Test functions | ĒRMS | n.a. | M1.25% | 185.8 | 178.3 | 180.1 | 184 | 184.3 | 191.0
Test functions | ĒRMS | n.a. | Mave | 149.6 | 138.9 | 141.1 | 144.2 | 147.4 | 153.4
Stochastic-wave | ĒRMS | Uave | M5% | 27 | 33 | 23 | 27 | 28 | 59
Stochastic-wave | ĒRMS | Uave | M2.5% | 46 | 48 | 44 | 44 | - | -
Stochastic-wave | ĒRMS | Uave | M1.25% | - | - | - | - | - | -
Stochastic-wave | ĒRMS | Uave | Mave | 46.0 | 48.7 | 44.0 | 45.3 | 52.7 | 63.0
Stochastic-wave | ĒRMS | Umax | M5% | 34 | 40 | 38 | 36 | 29 | 54
Stochastic-wave | ĒRMS | Umax | M2.5% | 41 | 49 | 44 | 53 | 56 | -
Stochastic-wave | ĒRMS | Umax | M1.25% | - | - | - | - | - | -
Stochastic-wave | ĒRMS | Umax | Mave | 46.7 | 51.3 | 49.0 | 51.3 | 50.0 | 61.3
Stochastic-wave | ĒUQ | Uave | M5% | 23 | 19 | 23 | 24 | 28 | 22
Stochastic-wave | ĒUQ | Uave | M2.5% | 43 | 35 | 43 | 27 | 44 | 51
Stochastic-wave | ĒUQ | Uave | M1.25% | 60 | 42 | 50 | - | - | -
Stochastic-wave | ĒUQ | Uave | Mave | 42.0 | 32.0 | 38.7 | 38.7 | 45.7 | 46.0
Stochastic-wave | ĒUQ | Umax | M5% | 24 | 21 | 17 | 27 | 28 | 24
Stochastic-wave | ĒUQ | Umax | M2.5% | 34 | 40 | 42 | 49 | 46 | 38
Stochastic-wave | ĒUQ | Umax | M1.25% | 50 | 49 | 58 | 62 | - | -
Stochastic-wave | ĒUQ | Umax | Mave | 36.0 | 36.7 | 39.0 | 46.0 | 46.3 | 42.3

Both DRBF and DKG have a sudden convergence for f12 and f13, with DRBF performing better for f12 and DKG for f13. Table 6 shows the average UQ error, Ē_UQ, over all functions. Using sequential sampling (I = 1), DRBF needs 4, 5 and 7 evaluations to achieve average errors < 5%, 2.5% and 1.25%, respectively. DKG reveals a sudden convergence, showing M_5%, M_2.5%, M_1.25% equal to 5. Overall performances by M_ave are very close for DRBF and DKG. Since the number of function evaluations needed to achieve convergence is less than 7, using the parallel infill with I = 5 and I = 10 is not needed.

Finally, Table 7 shows a summary of the results with comparison to static metamodels from Diez et al. (2014). The convergence of Ē_UQ versus M is depicted in Fig. 9 (b). The average error and range of the static approaches are shown by diamonds and error bars. DRBF and DKG present similar trends. Generally, errors are small and few evaluations (< 7) are required to achieve errors < 1.25%. Table 8 shows error percentiles and differences with the best metamodel. Specifically, DRBF is found the best metamodel for M = 9 and 17, whereas DKG is found the best metamodel for M = 33. For M ≥ 9 DRBF presents an error difference with the best metamodel always less than or equal to 0.04%. For M ≥ 5 the error difference of DKG with the best metamodel is always less than or equal to 0.04%. The best static metamodel is, on average, LS-SVM with MQ kernel and α = 5.


Fig. 6 Test functions: performance of DRBF and DKG for each function

Table 6 Overall performance summary

Problem | Error assessed | rf | Metric | DRBF (I=1) | DKG (I=1) | DRBF (I=5) | DKG (I=5) | DRBF (I=10) | DKG (I=10)
Test functions | ĒRMS | 0.25 | M5% | 91 | 96 | 100 | 95 | 112 | 99
Test functions | ĒRMS | 0.25 | M2.5% | 147 | 105 | 152 | 109 | 156 | 115
Test functions | ĒRMS | 0.25 | M1.25% | 178 | 116 | 181 | 119 | 176 | 123
Test functions | ĒRMS | 0.25 | Mave | 138.6 | 105.6 | 144.6 | 107.5 | 147.8 | 112.5
Stochastic-speed | ĒRMS | n.a. | M5% | 5 | 6 | 8 | 8 | 13 | 13
Stochastic-speed | ĒRMS | n.a. | M2.5% | 5 | 6 | 8 | 8 | 13 | 13
Stochastic-speed | ĒRMS | n.a. | M1.25% | 7 | 6 | 13 | 8 | 13 | 13
Stochastic-speed | ĒRMS | n.a. | Mave | 5.7 | 6.0 | 9.7 | 8.0 | 13.0 | 13.0
Stochastic-speed | ĒUQ | n.a. | M5% | 4 | 5 | 8 | 8 | 13 | 13
Stochastic-speed | ĒUQ | n.a. | M2.5% | 5 | 5 | 8 | 8 | 13 | 13
Stochastic-speed | ĒUQ | n.a. | M1.25% | 7 | 5 | 13 | 8 | 13 | 13
Stochastic-speed | ĒUQ | n.a. | Mave | 5.3 | 5.0 | 9.7 | 8.0 | 13.0 | 13.0
Stochastic-wave | ĒRMS | 0.5 (Uave) | M5% | 23 | 28 | 35 | 35 | 35 | 35
Stochastic-wave | ĒRMS | 0.5 (Uave) | M2.5% | 44 | 54 | 45 | - | 55 | -
Stochastic-wave | ĒRMS | 0.5 (Uave) | M1.25% | - | - | - | - | - | -
Stochastic-wave | ĒRMS | 0.5 (Uave) | Mave | 44 | 49 | 48 | 55 | 52 | 55
Stochastic-wave | ĒUQ | 0.25 (Uave) | M5% | 19 | 28 | 20 | 45 | 25 | 25
Stochastic-wave | ĒUQ | 0.25 (Uave) | M2.5% | 35 | 51 | 45 | 45 | 45 | -
Stochastic-wave | ĒUQ | 0.25 (Uave) | M1.25% | 42 | 61 | 50 | - | 55 | -
Stochastic-wave | ĒUQ | 0.25 (Uave) | Mave | 32 | 47 | 38 | 52 | 42 | 52

5.3 Delft catamaran with stochastic regular wave

5.3.1 Fitting

Figures 11, 12 and 13 show the normalized fitting error E as a function of T and H for each function, using DRBF by U_ave and U_max and DKG by U_max. Black diamonds indicate the training points used, with M = 65. Normalized errors are found greater than in the stochastic-speed case and range from −15 to 15%. Errors are generally found larger for DKG than DRBF, especially for f14 and f19. Comparing Fig. 4 with Figs. 11, 12 and 13 shows that the training points for DRBF are located in those regions where the curvature is higher, whereas they are more uniformly distributed using DKG. DRBF training points by U_ave and U_max present similar trends.

Fig. 7 Stochastic-speed problem: DRBF and DKG normalized errors, E%



Fig. 8 Stochastic-speed problem: fitting performance of DRBF and DKG for each function

Table 5 shows a comparison among scaling approaches as performed by DRBF, using the average fitting error Ē_RMS. Both the U_ave and U_max approaches for multiple-response sampling are presented. Sampling by U_ave with r_f = 0.5 is identified as the best overall approach and is used for the comparison with DKG. The associated M_ave is found 13.3% smaller than the overall average.

Figure 14 presents E_RMS for each function, comparing DRBF and DKG. DRBF is found the most effective metamodel for f14, f15, f18 and f19, whereas DRBF and DKG are comparable for f16 and f17. Table 6 shows the average fitting performance over all functions, in terms of Ē_RMS. Using sequential sampling (I = 1), both DRBF and DKG do not achieve an error lower than 1.25%; overall, DRBF is found to perform better than DKG, requiring on average five evaluations fewer. Sequential sampling and parallel infill with I = 5 and 10 present similar trends, with an increase in the number of evaluations required by the parallel infill with I = 5 and I = 10, compared to I = 1.

Table 9 shows a summary of the present results with comparison to static metamodels used in earlier research, with M = 9, 17, 33, 65 (He et al. 2013). Figure 15 (a) shows the convergence of Ē_RMS versus M for DRBF and DKG, with comparison to static metamodels. The average error and range of the static approaches are shown by diamonds and error bars. DRBF and DKG outperform static metamodels for M > 33.

Table 7 Summary of results for stochastic-speed (average errors are given in %)

Techniques | ERMS at M = 3, 5, 9, 17, 33, Average | EUQ at M = 3, 5, 9, 17, 33, Average

Static
IDW (k =2) 13.3 5.73 3.95 2.92 1.91 5.56 10.4 5.9 3.96 4.02 3.08 5.46
IDW (k =4) 15.5 7.84 4.31 2.39 1.25 6.26 16.5 6.22 2.00 0.66 0.34 5.15
IDW (k = 6) 17.1 9.09 5.02 2.78 1.41 7.09 16.8 7.56 2.44 0.81 0.29 5.57
RBF (MQ, α =10) 7.47 0.64 1.23 1.74 1.18 2.45 7.59 0.73 0.73 2.41 0.97 2.48
RBF (MQ, α =100) 12.8 4.74 2.24 1.01 0.47 4.25 9.55 6.22 3.52 2.18 0.34 4.36
RBF (MQ, α =1000) 11.8 3.45 1.17 0.45 0.24 3.42 7.12 3.57 2.09 0.29 0.15 2.64
RBF (IMQ, α =5) 6.77 0.61 0.87 4.78 4.86 3.58 6.56 0.50 0.66 4.16 3.04 2.98
RBF (IMQ, α =10) 7.51 0.94 1.27 5.84 2.69 3.65 6.96 0.64 0.85 4.64 1.98 3.01
RBF (IMQ, α =15) 8.87 1.59 0.97 18.1 1.36 6.18 7.26 1.04 0.72 18.0 1.13 5.62
PHS (k =1) 11.6 3.33 1.07 0.41 0.21 3.33 6.98 3.44 0.83 0.20 0.15 2.32
PHS (k =2) 11.4 3.76 1.46 0.49 0.23 3.46 9.65 5.32 2.58 0.42 0.14 3.62
PHS (k =3) 76.1 17.7 4.26 1.30 0.38 19.9 43.9 12.3 7.35 2.48 0.35 13.3
LS-SVM (MQ, α =10) 6.84 9.23 0.64 0.44 0.56 3.54 6.44 7.41 0.30 0.30 0.37 2.96
LS-SVM (MQ, α =100) 10.2 2.91 1.10 0.44 0.50 3.04 6.95 3.09 2.05 0.30 0.22 2.52
LS-SVM (MQ, α =1000) 11.5 3.24 1.04 0.39 0.20 3.26 6.88 3.36 0.78 0.19 0.10 2.26
LS-SVM (IMQ, α =5) 6.95 1.48 0.88 0.63 0.52 2.09 6.62 1.95 0.67 0.52 0.51 2.05
LS-SVM (IMQ, α =10) 8.49 1.56 0.67 0.39 0.32 2.29 8.32 2.62 0.64 0.20 0.21 2.39
LS-SVM (IMQ, α =15) 9.52 2.58 1.01 0.38 0.25 2.75 8.76 3.62 1.00 0.27 0.16 2.76
Dynamic
DRBF 13.6 2.03 0.36 0.27 0.17 3.28 11.1 2.43 0.23 0.17 0.13 2.80
DKG 13.0 2.24 0.35 0.27 0.18 3.21 17.5 0.60 0.27 0.21 0.09 3.74

Fig. 9 Stochastic-speed problem: convergence of average fitting (a) and UQ (b) errors comparing dynamic and static metamodels

Table 8 Comparison of DRBF and DKG performance with static metamodels (P % refers to the percentile)

Problem | M | DRBF ERMS: P (%) | DRBF ERMS: E − Ebest | DRBF EUQ: P (%) | DRBF EUQ: E − Ebest | DKG ERMS: P (%) | DKG ERMS: E − Ebest | DKG EUQ: P (%) | DKG EUQ: E − Ebest
Stochastic-speed | 3 | 15.8 | 6.83 | 21.1 | 4.62 | 21.1 | 6.25 | 5.26 | 11.1
Stochastic-speed | 5 | 68.4 | 1.42 | 68.4 | 1.93 | 63.2 | 1.63 | 94.7 | 0.04
Stochastic-speed | 9 | 94.7 | 0.01 | 100 | 0.00 | 100 | 0.00 | 78.9 | 0.04
Stochastic-speed | 17 | 100 | 0.00 | 100 | 0.00 | 100 | 0.00 | 94.7 | 0.04
Stochastic-speed | 33 | 100 | 0.00 | 89.5 | 0.04 | 94.7 | 0.01 | 100 | 0.00
Stochastic-wave | 9 | 10.3 | 4.17 | 24.1 | 4.85 | 3.44 | 7.47 | 20.7 | 5.19
Stochastic-wave | 17 | 31.0 | 1.22 | 20.7 | 3.92 | 0.00 | 13.0 | 0.00 | 15.8
Stochastic-wave | 33 | 79.3 | 0.75 | 82.8 | 0.33 | 31.0 | 1.03 | 72.4 | 1.05
Stochastic-wave | 65 | 100 | 0.00 | 100 | 0.00 | 96.6 | 0.62 | 96.6 | 0.31

Table 8 shows the percentile of the average error provided by DRBF and DKG compared to other static metamodels, for M = 9, 17, 33 and 65, also providing the error difference with the best metamodel. Specifically, DRBF is found the best metamodel for M = 65. For M ≥ 33 the dynamic metamodels present an error difference with the best metamodel always less than 0.75%. The best static metamodel is, on average, LS-SVM with MQ kernel and α = 1.

Fig. 10 Stochastic-speed problem: performance of DRBF and DKG for each function

5.3.2 Uncertainty quantification

Table 5 shows a comparison among scaling approaches as performed by DRBF, using the average UQ error Ē_UQ. Both the U_ave and U_max approaches for multiple-response sampling are presented. Sampling by U_ave with r_f = 0.25 is identified as the overall best approach and is used for the comparison with DKG.

Fig. 11 Stochastic-wave problem: DRBF (Uave ) normalized error, E%

Fig. 12 Stochastic-wave problem: DRBF (Umax ) normalized error, E%

Fig. 13 Stochastic-wave problem: DKG (Umax ) normalized error, E%

The associated M_ave is found 21.5% smaller than the overall average over the different scaling approaches.

Figure 16 presents the results for each function. The most effective performance with DRBF is found for f14 and f15; DKG provides the best performance for f17.

Fig. 14 Stochastic-wave problem: fitting performance of DRBF and DKG for each function

Table 6 shows the average UQ error, Ē_UQ, over all functions. Using sequential sampling (I = 1), DRBF and DKG present similar trends, achieving Ē_UQ < 1.25%; overall, DRBF is found more efficient than DKG, requiring on average 14 function evaluations fewer. Using sequential and parallel infill with I = 5 and 10, DRBF provides similar trends.

Table 9 Summary of results for stochastic-wave (average errors are given in %)

Techniques | ERMS at M = 9, 17, 33, 65, Average | EUQ at M = 9, 17, 33, 65, Average

Static
IDW (k =2) 14.0 12.1 11.3 8.96 11.6 17.9 16.4 13.7 7.37 13.8
IDW (k =4) 13.4 10.1 8.43 5.77 9.42 11.7 6.89 4.92 2.12 6.40
IDW (k =6) 14.1 10.5 8.91 6.44 9.97 9.88 6.22 3.87 1.78 5.44
RBF (MQ, α =1) 11.3 7.28 3.99 2.78 6.33 10.3 5.95 2.23 1.54 5.01
RBF (MQ, α =10) 11.5 7.39 4.51 2.99 6.58 12.1 5.51 4.05 1.20 5.71
RBF (MQ, α =100) 11.5 7.49 4.67 3.12 6.70 12.5 5.79 4.29 1.37 5.98
RBF (IMQ, α =0.75) 11.4 8.14 4.43 2.73 7.67 14.0 8.37 4.15 1.34 6.96
RBF (IMQ, α =1.0) 17.7 9.84 5.33 2.86 8.94 16.7 10.5 5.21 2.30 8.66
RBF (IMQ, α =1.25) 20.1 11.6 6.42 3.26 10.4 19.1 12.3 6.19 2.83 10.1
PHS (k =1) 11.6 7.50 4.69 3.14 6.72 12.5 5.82 4.32 1.37 6.01
PHS (k =2) 14.5 7.46 4.13 2.89 7.24 13.2 6.20 2.33 1.28 5.74
PHS (k =3) 18.4 8.43 4.45 2.95 8.55 16.0 6.80 3.34 1.29 6.85
LS-SVM (MQ, α =1) 11.3 7.06 3.96 2.82 6.28 10.1 5.73 2.25 1.43 4.88
LS-SVM (MQ, α =10) 11.3 7.10 4.40 2.93 6.42 12.2 5.41 3.04 1.19 5.46
LS-SVM (MQ, α =100) 11.3 7.19 4.56 3.06 6.54 12.5 5.73 4.19 1.30 5.94
LS-SVM (IMQ, α =0.75) 12.4 7.46 4.19 2.57 6.66 13.1 6.71 3.25 1.31 6.09
LS-SVM (IMQ, α =1.0) 13.1 7.99 4.67 2.62 7.09 14.5 8.48 4.05 1.17 7.06
LS-SVM (IMQ, α =1.25) 13.7 8.64 5.24 2.82 7.60 15.9 10.5 6.08 1.30 8.46
OKG (lin., α =0.25) 11.3 7.20 4.58 3.08 6.55 12.6 5.75 4.23 1.32 5.96
OKG (lin., α =0.5) 11.3 7.20 4.58 3.08 6.55 12.6 5.75 4.23 1.32 5.96
OKG (lin., α =1.0) 11.3 7.20 4.58 3.08 6.55 12.6 5.75 4.23 1.32 5.96
OKG (exp., α =0.25) 11.4 7.20 4.60 3.08 6.56 12.7 5.82 4.26 1.30 6.02
OKG (exp., α =0.5) 11.4 7.20 4.61 3.07 6.57 12.8 5.84 4.27 1.29 6.05
OKG (exp., α =1.0) 11.5 7.20 4.65 3.07 6.60 13.0 6.00 4.37 1.30 6.16
SRBF (P) 11.4 7.30 4.18 2.90 6.45 10.4 5.10 2.43 1.14 4.75
SRBF (MQ) 11.5 7.45 4.60 3.07 6.65 12.3 5.69 4.21 1.29 5.88
SRBF (IMQ) 17.7 9.80 5.33 2.86 8.93 16.6 10.4 5.18 2.31 8.63
DKG (without adaptive sampling) 12.9 6.95 3.63 2.29 6.44 10.9 5.90 2.51 1.26 5.15
Dynamic
DRBF 15.4 8.17 4.38 1.45 7.35 14.7 9.02 2.56 0.78 6.77
DKG 18.7 19.9 4.66 2.07 11.4 15.1 20.9 3.28 1.09 10.1

Fig. 15 Stochastic-wave problem: convergence of average fitting (a) and UQ (b) errors comparing dynamic and static metamodels

DKG is found quite affected by the parallel infill, showing different performance trends when varying I. The overall efficiency decreases as I increases.

Table 9 shows a summary of the present results with comparison to static metamodels (He et al. 2013). Figure 15 (b) shows the convergence of Ē_UQ versus M for DRBF and DKG, with comparison to static metamodels. Average errors using the static approaches are shown by a diamond, whereas an error bar indicates their range. Table 8 shows the percentile of the average error provided by DRBF and DKG compared to other static metamodels and the error difference with the best metamodel. DRBF is found the best metamodel for M = 33. For M ≥ 33 DRBF presents an error difference with the best metamodel always less than or equal to 0.3%. For M ≥ 33 the error difference of DKG with the best metamodel is always less than or equal to 1%. The best static metamodel is, on average, SRBF with the P kernel.

6 Conclusions and future work

A dynamic metamodel based on stochastic RBF has been derived and validated by comparison with an existing DKG method and static metamodels used in earlier research. A metric for the evaluation of the efficiency of the metamodels has been introduced and applied to both deterministic test functions (with dimensionality ranging from two to six) and ship hydrodynamics UQ problems with analytical and numerical benchmarks, respectively.

Assessing test function fitting, DRBF is found the most effective for trigonometric and medium-dimensional functions, whereas DKG has the best fitting capability when applied to polynomial and low-dimensional functions. Overall, the average number of training points required, M_ave, equals 139 for DRBF and 106 for DKG (Table 6).

Assessment of the Delft catamaran performance (total resistance, sinkage and trim) in calm water with stochastic speed reveals that the multiple response criterion has no significant effect on DRBF. Relatively few training points are needed by DRBF and DKG for getting small fitting errors. Specifically, M_ave for the average fitting error Ē_RMS equals 5.66 using DRBF and 6 using DKG (Table 6). Comparing to static metamodels used in earlier research, dynamic approaches are found the most accurate for M ≥ 9 (Tables 7 and 8). The best static approach has been found, on average, to be LS-SVM with the IMQ kernel. Also for the UQ analyses, DRBF and DKG need few training points to achieve fairly good accuracy. Specifically, M_ave for the average uncertainty-quantification error Ē_UQ equals 5.33 using DRBF and 5 using DKG (Table 6).
Fig. 16 Stochastic-wave problem: performance of DRBF and DKG for each function

Assessment of Delft catamaran performance (time mean and RMS of x-force, heave and pitch motions) in a stochastic regular wave shows that DRBF has the most effective performance using the multiple response criterion Uave. Both DRBF and DKG have fairly good fitting performance compared to static metamodels. Specifically, DRBF shows Mave = 44 for the average fitting error ĒRMS, whereas Mave = 49 using DKG (Table 6). Compared to static metamodels, the error difference between the dynamic and the best metamodels is always less than or equal to 0.75% for M ≥ 33 (Tables 8 and 9). DRBF is found the best metamodel for M = 65. The best static approach has been found on average to be LS-SVM with the MQ kernel. DRBF is found more efficient than DKG for the UQ analyses. Both metamodels perform well compared to static metamodels. Specifically, DRBF presents Mave = 32 for the average uncertainty-quantification error ĒUQ, whereas Mave = 46.7 using DKG (Table 6). Compared to static metamodels, the error difference between the dynamic and the best metamodels is always less than or equal to 1.05% for M ≥ 33 (Tables 8 and 9). DRBF is found the most accurate overall for M = 65. The best static approach has been found on average to be SRBF with the P kernel.
Overall, training points for DRBF are located in high-curvature regions, whereas they are more uniformly distributed using DKG. Since DRBF locates training points where the prediction uncertainty is larger, current results indicate that the metamodel stochastic uncertainty is larger in high-curvature regions. Generally, accuracy of the metamodel prediction in high-curvature regions is difficult to achieve and many training points are required. Thus, the metamodel stochastic uncertainty is an effective metric for adaptive sampling. DKG fills the domain more uniformly, which is reasonable since the prediction MSE in (31) depends on the training-point distribution.
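To make the uncertainty-driven infill logic concrete, the following minimal Python sketch selects the next group of I infill points as the candidates where the metamodel prediction uncertainty is largest. It is purely illustrative: the candidate set, the uncertainty callable, and the example profile are placeholders and do not reproduce the DRBF or DKG implementations.

```python
import numpy as np

def select_infill_points(candidates, prediction_uncertainty, I=5):
    """Conceptual uncertainty-driven (parallel) infill: pick the I candidates
    where the metamodel prediction uncertainty is largest. The uncertainty
    callable (e.g., a stochastic-RBF uncertainty or a Kriging MSE) is a
    placeholder supplied by the user, not part of the original paper.
    """
    u = np.asarray([prediction_uncertainty(x) for x in candidates])
    best = np.argsort(u)[::-1][:I]          # indices of the I most uncertain candidates
    return [candidates[i] for i in best]

# Example with a made-up 1D uncertainty profile and I = 5 parallel infill points
candidates = np.linspace(0.0, 1.0, 101)
fake_uncertainty = lambda x: np.exp(-((x - 0.7) ** 2) / 0.01) + 0.2 * x
new_points = select_infill_points(candidates, fake_uncertainty, I=5)
```

In practice, a spacing constraint (or re-evaluation of the uncertainty after each selection) would be added so that the I parallel infill points do not cluster in a single high-uncertainty region.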
In conclusion, the introduction of a metric based on the number of evaluations required has allowed for a straightforward assessment of metamodel efficiency, which is of primary interest when high-fidelity solvers are used. DRBF has been found to have a greater efficiency for fitting trigonometric and medium-dimensional functions and for two-dimensional UQ. The use of an adaptive scaling with rf ranging from 0 to 0.5 has been found efficient compared to a non-adaptive scaling approach; however, the optimal value of rf is problem dependent. DKG is found more effective for fitting polynomial and low-dimensional functions and for one-dimensional UQ. In general, as the training set size increases, dynamic approaches are found more efficient than the static metamodels used in earlier research. In addition, comparison of fitting with uncertainty-quantification errors reveals similar trends. Fitting errors have been found generally larger than those found in uncertainty quantification. Errors for the stochastic-wave problem (2D) have been found nearly twice those for the stochastic-speed problem (1D). Finally, the use of parallel infill with groups of 5 and 10 training points is found affordable, showing an acceptable loss of efficiency compared to the gain in wall-clock time.
Future research includes the use of multiple kernels and their automatic selection in order to auto-configure the DRBF network, and the application of DRBF and DKG to design optimization problems in ship hydrodynamics, including deterministic and stochastic applications with global and local optimization algorithms. The possibility to extend current adaptive sampling approaches, based on maximum uncertainty or MSE, to integral metrics will be addressed in future studies. Applicability of the present methodology to large problems (with a number of independent variables greater than 10) will also be investigated in future work. Comparison with Bayesian Kriging methods is of interest and will be evaluated in the future. In addition, a model test campaign to collect experimental benchmark values for UQ of the Delft catamaran in waves is in progress.

Acknowledgments The present research is supported by the US Navy Office of Naval Research, Grant N00014-11-1-0237, and Office of Naval Research Global, NICOP Grant N62909-11-1-7011, under the administration of Dr. Ki-Han Kim and Dr. Woei-Min Lin, and by the Italian Flagship Project RITMARE, coordinated by the Italian National Research Council and funded by the Italian Ministry of Education, within the National Research Program 2011-2013.

Appendix A: Uncertainty quantification

UQ studies assess the effects of the uncertain parameters ξ, with probability density function φ(ξ), on the relevant outputs f, quantifying EV, SD, CDF and the 95%-confidence band of the CDF, herein called output stochastic uncertainty Uf, as

EV(f) = \int f(\xi)\,\phi(\xi)\,d\xi   (35)

SD(f) = \sqrt{\int \left[ f(\xi) - EV(f) \right]^2 \phi(\xi)\,d\xi}   (36)

CDF(y) = \int H\left[ y - f(\xi) \right] \phi(\xi)\,d\xi   (37)

U_f = CDF^{-1}(0.975) - CDF^{-1}(0.025)   (38)

H(·) is the Heaviside step function. Using the Monte Carlo (MC) method with samples {ξ_i}, i = 1, …, N, drawn from φ, (35)–(37) are solved respectively by

EV(f) = \frac{1}{N} \sum_{i=1}^{N} f(\xi_i)   (39)

SD(f) = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} \left[ f(\xi_i) - EV(f) \right]^2}   (40)

CDF(y) = \frac{1}{N} \sum_{i=1}^{N} H\left[ y - f(\xi_i) \right]   (41)
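For illustration, a minimal Python sketch of the MC estimators (39)–(41) and the resulting output stochastic uncertainty (38) is given below; the output function, the sampling density, and the sample size are placeholder choices for the example and are not taken from the paper.

```python
import numpy as np

def mc_uq(f, sample_xi, N=10_000, seed=0):
    """Monte Carlo UQ of a scalar output f(xi), cf. Eqs. (38)-(41).

    f         : callable mapping one parameter sample xi to a scalar output
    sample_xi : callable returning N samples drawn from the density phi
    """
    rng = np.random.default_rng(seed)
    xi = sample_xi(rng, N)                    # {xi_i}, i = 1..N, distributed as phi
    y = np.array([f(x) for x in xi])

    ev = y.mean()                             # EV(f), Eq. (39)
    sd = y.std(ddof=1)                        # SD(f), Eq. (40), with the 1/(N-1) factor
    # Empirical CDF, Eq. (41): CDF(y0) = (1/N) sum_i H[y0 - f(xi_i)]
    cdf = lambda y0: np.mean(np.heaviside(y0 - y, 1.0))
    # Output stochastic uncertainty, Eq. (38), via the empirical 2.5% / 97.5% quantiles
    u_f = np.quantile(y, 0.975) - np.quantile(y, 0.025)
    return ev, sd, cdf, u_f

# Example: one uniformly distributed uncertain parameter (placeholder choice)
ev, sd, cdf, u_f = mc_uq(
    f=lambda x: np.sin(3 * x) + 0.1 * x**2,
    sample_xi=lambda rng, n: rng.uniform(-1.0, 1.0, n),
)
print(f"EV = {ev:.4f}, SD = {sd:.4f}, U_f = {u_f:.4f}, CDF(0.5) = {cdf(0.5):.3f}")
```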
Appendix B: Analytical formulation of test functions

This appendix provides the analytical formulation used for the test functions.

Branin-Hoo function (2D)

f_1(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10   (42)

Six-hump camelback function (2D)

f_2(x) = \left( 4 - 2.1 x_1^2 + \frac{x_1^4}{3} \right) x_1^2 + x_1 x_2 + \left( 4 x_2^2 - 4 \right) x_2^2   (43)

Rosenbrock function (2D)

f_3(x) = (1 - x_1)^2 + 100 \left( x_2 - x_1^2 \right)^2   (44)

Quartic function (2D)

f_4(x) = \frac{x_1^4}{4} - \frac{x_1^2}{2} + \frac{x_1}{10} + \frac{x_2^2}{2}   (45)

Shubert function (2D)

f_5(x) = \left\{ \sum_{i=1}^{5} i \cos\left[ (i+1) x_1 + i \right] \right\} \left\{ \sum_{i=1}^{5} i \cos\left[ (i+1) x_2 + i \right] \right\}   (46)

Extended Rosenbrock function (3D)

f_6(x) = \sum_{i=1}^{2} \left[ (1 - x_i)^2 + 100 \left( x_{i+1} - x_i^2 \right)^2 \right]   (47)

Hartman function (3D)

f_7(x) = - \sum_{i=1}^{4} a_i \exp\left\{ - \sum_{j=1}^{3} b_{ij} \left( x_j - d_{ij} \right)^2 \right\}   (48)

with

a = \begin{Bmatrix} 1.0 \\ 1.2 \\ 3.0 \\ 3.2 \end{Bmatrix}, \quad
b = \begin{Bmatrix} 3.0 & 10.0 & 30.0 \\ 0.1 & 10.0 & 35.0 \\ 3.0 & 10.0 & 30.0 \\ 0.1 & 10.0 & 35.0 \end{Bmatrix}, \quad
d = \begin{Bmatrix} 0.3689 & 0.1170 & 0.2673 \\ 0.4699 & 0.4387 & 0.7470 \\ 0.1091 & 0.8732 & 0.5547 \\ 0.03815 & 0.5743 & 0.8828 \end{Bmatrix}   (49)

Extended Rosenbrock function (4D)

f_8(x) = \sum_{i=1}^{3} \left[ (1 - x_i)^2 + 100 \left( x_{i+1} - x_i^2 \right)^2 \right]   (50)

Styblinski-Tang function (4D)

f_9(x) = \frac{\sum_{i=1}^{4} \left( x_i^4 - 16 x_i^2 + 5 x_i \right)}{2}   (51)

Hartman function (6D)

f_{10}(x) = - \sum_{i=1}^{4} a_i \exp\left\{ - \sum_{j=1}^{6} b_{ij} \left( x_j - d_{ij} \right)^2 \right\}   (52)

with

a = \begin{Bmatrix} 1.0 \\ 1.2 \\ 3.0 \\ 3.2 \end{Bmatrix}, \quad
b = \begin{Bmatrix} 10.0 & 3.0 & 17.0 & 3.5 & 1.7 & 8.0 \\ 0.05 & 10.0 & 17.0 & 0.1 & 8.0 & 14.0 \\ 3.0 & 3.5 & 1.7 & 10.0 & 17.0 & 8.0 \\ 17.0 & 8.0 & 0.05 & 10.0 & 0.1 & 14.0 \end{Bmatrix}, \quad
d = \begin{Bmatrix} 0.1312 & 0.1696 & 0.5569 & 0.0124 & 0.8283 & 0.5886 \\ 0.2329 & 0.4135 & 0.8307 & 0.3736 & 0.1004 & 0.9991 \\ 0.2348 & 0.1451 & 0.3522 & 0.2883 & 0.3047 & 0.6650 \\ 0.4047 & 0.8828 & 0.8732 & 0.5743 & 0.1091 & 0.0381 \end{Bmatrix}   (53)
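As a transcription aid, a minimal Python sketch of two of the test functions above is given below: the Branin-Hoo function (42) and the 3D Hartman function (48)–(49), using the coefficients listed in (49). The check points in the usage lines are the commonly reported global minimizers of these standard functions and are not part of the original formulation.

```python
import numpy as np

def branin_hoo(x):
    """Branin-Hoo test function, Eq. (42); x = (x1, x2)."""
    x1, x2 = x
    return ((x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5 / np.pi * x1 - 6) ** 2
            + 10 * (1 - 1 / (8 * np.pi)) * np.cos(x1) + 10)

# Coefficients of the 3D Hartman function, Eq. (49)
A = np.array([1.0, 1.2, 3.0, 3.2])
B = np.array([[3.0, 10.0, 30.0],
              [0.1, 10.0, 35.0],
              [3.0, 10.0, 30.0],
              [0.1, 10.0, 35.0]])
D = np.array([[0.3689, 0.1170, 0.2673],
              [0.4699, 0.4387, 0.7470],
              [0.1091, 0.8732, 0.5547],
              [0.03815, 0.5743, 0.8828]])

def hartman3(x):
    """3D Hartman test function, Eq. (48), with coefficients from Eq. (49)."""
    x = np.asarray(x)
    return -np.sum(A * np.exp(-np.sum(B * (x - D) ** 2, axis=1)))

# Quick checks at the usual global minimizers (approximate reference values)
print(branin_hoo((np.pi, 2.275)))          # ~0.398 for the standard Branin-Hoo
print(hartman3((0.1146, 0.5556, 0.8525)))  # ~-3.86 for the standard 3D Hartman
```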
References

Acar E, Rais-Rohani M (2009) Ensemble of metamodels with optimized weight factors. Struct Multidisc Optim 37(3):279–294. doi:10.1007/s00158-008-0230-y
Ali MM, Khompatraporn C, Zabinsky Z (2005) A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. J Glob Optim 31(4):635–672. doi:10.1007/s10898-004-9972-2
Billings SA, Zheng GL (1995) Radial basis function network configuration using genetic algorithms. Neural Netw 8(6):877–890. doi:10.1016/0893-6080(95)00029-Y
Booker AJ, Dennis JE, Frank PD, Serafini DB, Torczon V, Trosset MW (1999) A rigorous framework for optimization of expensive functions by surrogates. Struct Optim 17(1):1–13. doi:10.1007/BF01197708
Buhmann MD (2003) Radial basis functions: theory and implementations, vol 12. Cambridge University Press

Buslig L, Baccou J, Picheny V (2014) Construction and efficient implementation of adaptive objective-based designs of experiments. Math Geosci 46(3):285–313. doi:10.1007/s11004-013-9481-2
Campana EF, Liuzzi G, Lucidi S, Peri D, Piccialli V, Pinto A (2009) New global optimization methods for ship design problems. Optim Eng 10(4):533–555. doi:10.1007/s11081-009-9085-3
Diez M, He W, Campana EF, Stern F (2014) Uncertainty quantification of Delft catamaran resistance, sinkage and trim for variable Froude number and geometry using metamodels, quadrature and Karhunen–Loève expansion. J Mar Sci Technol 19(2):143–169. doi:10.1007/s00773-013-0235-0
Forrester A, Keane A (2009) Recent advances in surrogate-based optimization. Prog Aerosp Sci 45(1-3):50–79. doi:10.1016/j.paerosci.2008.11.001
Giunta AA, McFarland JM, Swiler LP, Eldred MS (2006) The promise and peril of uncertainty quantification using response surface approximations. Struct Infrastruct Eng 2(3-4):175–189. doi:10.1080/15732470600590507
Gramacy RB, Lee HKH (2008) Bayesian treed Gaussian process models with an application to computer modeling. J Am Stat Assoc 103(483):1119–1130. doi:10.1198/016214508000000689
Gutmann HM (2001) A radial basis function method for global optimization. J Glob Optim 19(3):201–227. doi:10.1023/A:1011255519438
Hardy RL (1971) Multiquadric equations of topography and other irregular surfaces. J Geophys Res 76(8):1905–1915. doi:10.1029/JB076i008p01905
He W, Diez M, Zou Z, Campana EF, Stern F (2013) URANS study of Delft catamaran total/added resistance, motions and slamming loads in head sea including irregular wave and uncertainty quantification for variable regular wave and geometry. Ocean Eng 74:189–217. doi:10.1016/j.oceaneng.2013.06.020
Jansson T, Nilsson L, Redhe M (2003) Using surrogate models and response surfaces in structural optimization with application to crashworthiness design and sheet metal forming. Struct Multidisc Optim 25(2):129–140. doi:10.1007/s00158-002-0279-y
Jin R, Chen W, Simpson TW (2001) Comparative studies of metamodeling techniques under multiple modelling criteria. Struct Multidisc Optim 23(1):1–13. doi:10.1007/s00158-001-0160-4
Jin R, Du X, Chen W (2003) The use of metamodeling techniques for optimization under uncertainty. Struct Multidisc Optim 25(2):99–116. doi:10.1007/s00158-002-0277-0
Jones DR, Schonlau M, Welch WJ (1998) Efficient global optimization of expensive black-box functions. J Glob Optim 13(4):455–492. doi:10.1023/A:1008306431147
Kandasamy M, Peri D, Tahara Y, Wilson W, Miozzi M, Georgiev S, Milanov E, Campana EF, Stern F (2013) Simulation based design optimization of waterjet propelled Delft catamaran. Int Shipbuild Prog 60(1):277–308. doi:10.3233/ISP-130098
Kennedy MC, Anderson CW, Conti S, O'Hagan A (2006) Case studies in Gaussian process modelling of computer codes. Reliab Eng Syst Saf 91(10–11):1301–1309. doi:10.1016/j.ress.2005.11.028
Li G, Aute V, Azarm S (2010) An accumulative error based adaptive design of experiments for offline metamodeling. Struct Multidisc Optim 40(1–6):137–155. doi:10.1007/s00158-009-0395-z
Li R, Sudjianto A (2005) Analysis of computer experiments using penalized likelihood in Gaussian kriging models. Technometrics 47(2):111–120. doi:10.1198/004017004000000671
Loeven GJA, Witteveen JAS, Bijl H (2007) A probabilistic radial basis function approach for uncertainty quantification. In: Proceedings of the NATO RTO-MP-AVT-147 Computational Uncertainty in Military Vehicle Design Symposium
Lucidi S, Piccioni M (1989) Random tunneling by means of acceptance-rejection sampling for global optimization. J Optim Theory Appl 62(2):255–277. doi:10.1007/BF00941057
Martin JD, Simpson TW (2005) Use of kriging models to approximate deterministic computer models. AIAA J 43(4):853–863. doi:10.2514/1.8650
Matheron G (1963) Principles of geostatistics. Econ Geol 58(8):1246–1266. doi:10.2113/gsecongeo.58.8.1246
Meng K, Dong ZY, Wong KP (2009) Self-adaptive radial basis function neural network for short-term electricity price forecasting. IET Gener Transm Distrib 3(4):325–335. doi:10.1049/iet-gtd.2008.0328
Mousaviraad SM, He W, Diez M, Stern F (2013) Framework for convergence and validation of stochastic uncertainty quantification and relationship to deterministic verification and validation. Int J Uncertain Quantif 3(5):371–395. doi:10.1615/Int.J.UncertaintyQuantification.2012003594
Mullur AA, Messac A (2005) Extended radial basis functions: more flexible and effective metamodeling. AIAA J 43(6):1306–1315. doi:10.2514/1.11292
Peri D (2009) Self-learning metamodels for optimization. Ship Technol Res 56:94–108
Pilz J, Spock G (2008) Why do we need and how should we implement Bayesian kriging methods. Stoch Environ Res Risk Assess 22(5):621–632. doi:10.1007/s00477-007-0165-7
Regis RG (2011) Stochastic radial basis function algorithms for large-scale optimization involving expensive black-box objective and constraint functions. Comput Oper Res 38(5):837–853. doi:10.1016/j.cor.2010.09.013
Regis RG, Shoemaker CA (2007) A stochastic radial basis function method for the global optimization of expensive functions. INFORMS J Comput 19(4):497–509. doi:10.1287/ijoc.1060.0182
Sacks J, Welch W, Mitchell T, Wynn H (1989) Design and analysis of computer experiments. Stat Sci 4(4):409–435
Saka Y, Gunzburger M, Burkardt J (2007) Latinized, improved LHS, and CVT point sets in hypercubes. Int J Numer Anal Model 4(3-4):729–743
Sarimveis H, Alexandridis A, Mazarakis S, Bafas G (2004) A new algorithm for developing dynamic radial basis function neural network models based on genetic algorithms. Comput Chem Eng 28(1):209–217. doi:10.1016/S0098-1354(03)00169-8
Shepard D (1968) A two-dimensional interpolation function for irregularly-spaced data. In: Proceedings of the 1968 23rd ACM National Conference, ACM, pp 517–524. doi:10.1145/800186.810616
Sobieszczanski-Sobieski J, Haftka RT (1997) Multidisciplinary aerospace design optimization: survey of recent developments. Struct Optim 14(1):1–23. doi:10.1007/BF01197554
Song H, Choi KK, Lamb D (2013) A study on improving the accuracy of kriging models by using correlation model/mean structure selection and penalized log-likelihood function. In: 10th World Congress on Structural and Multidisciplinary Optimization, Orlando, Florida
Suykens J, Van Gestel T, De Brabanter J, De Moor B, Vandewalle J (2002) Least squares support vector machines. World Scientific
Taddy MA, Lee HKH, Gray GA, Griffin JD (2009) Bayesian guided pattern search for robust local optimization. Technometrics 51(4):389–401. doi:10.1198/TECH.2009.08007
Tahara Y, Kobayashi H, Kandasamy M, He W, Peri D, Diez M, Campana EF, Stern F (2012) CFD-based multiobjective stochastic optimization of a waterjet propelled high speed ship. In: 29th Symposium on Naval Hydrodynamics, Gothenburg, Sweden

Torczon V (1995) Pattern search methods for nonlinear optimization. SIAG/OPT Views and News 6:7–11
Wahba G (1990) Spline models for observational data, vol 59. SIAM
Yang RJ, Wang N, Tho CH, Bobineau JP, Wang BP (2005) Metamodeling development for vehicle frontal impact simulation. J Mech Des 127(5):1014–1020. doi:10.1115/1.1906264
Zhao L, Choi KK, Lee I (2011) Metamodeling method using dynamic kriging for design optimization. AIAA J 49(9):2034–2046. doi:10.2514/1.J051017
Zhou XJ, Ma YZ, Li XF (2011) Ensemble of surrogates with recursive arithmetic average. Struct Multidisc Optim 44(5):651–671. doi:10.1007/s00158-011-0655-6

