


Struct Multidisc Optim (2009) 39:439–457
DOI 10.1007/s00158-008-0338-0

RESEARCH PAPER

Multiple surrogates: how cross-validation errors can help us to obtain the best predictor

Felipe A. C. Viana · Raphael T. Haftka · Valder Steffen Jr.

Received: 26 May 2008 / Revised: 5 September 2008 / Accepted: 2 November 2008 / Published online: 8 January 2009
© Springer-Verlag 2008

Abstract Surrogate models are commonly used to replace expensive simulations of engineering problems. Frequently, a single surrogate is chosen based on past experience. This approach has generated a collection of papers comparing the performance of individual surrogates. Previous work has also shown that fitting multiple surrogates and picking one based on cross-validation errors (PRESS in particular) is a good strategy, and that cross-validation errors may also be used to create a weighted surrogate. In this paper, we discuss how PRESS (obtained either from the leave-one-out or from the k-fold strategy) is employed to estimate the RMS error, and whether to use the best PRESS solution or a weighted surrogate when a single surrogate is needed. We also study the minimization of the integrated square error as a way to compute the weights of the weighted average surrogate. We found that it pays to generate a large set of different surrogates and then use PRESS as a criterion for selection. We found that (1) in general, PRESS is good for filtering out inaccurate surrogates; and (2) with a sufficient number of points, PRESS may identify the best surrogate of the set. Hence the use of cross-validation errors for

F. A. C. Viana (B) · R. T. Haftka
Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32611, USA
e-mail: fchegury@ufl.edu

R. T. Haftka
e-mail: haftka@ufl.edu

V. Steffen Jr.
School of Mechanical Engineering, Federal University of Uberlandia, Uberlandia, MG 38400, Brazil
e-mail: [email protected]

choosing a surrogate and for calculating the weights of weighted surrogates becomes more attractive in high dimensions (when a large number of points is naturally required). However, it appears that the potential gains from using weighted surrogates diminish substantially in high dimensions. We also examined the utility of using all the surrogates for forming the weighted surrogates versus using a subset of the most accurate ones. This decision is shown to depend on the weighting scheme. Finally, we also found that PRESS as obtained through the k-fold strategy successfully estimates the RMSE.

Keywords Multiple surrogate models · Weighted average surrogates · Cross-validation errors · Prediction sum of squares

1 Introduction

Despite advances in computer throughput, the computational cost of complex high-fidelity engineering simulations often makes it impractical to rely exclusively on simulation for design optimization (Jin et al. 2001). In addition, these advances do not seem to reduce the time for a state-of-the-art simulation; instead, they tend to be used to add complexity to the modeling (Venkataraman and Haftka 2004). To reduce the computational cost, surrogate models, also known as meta-models, are often used in place of the actual simulation models. With advances in computer throughput, the cost of fitting a given surrogate drops in relation to the cost of simulations. Consequently, more sophisticated and more expensive surrogates have become popular. Surrogates such as radial basis neural networks (Smith 1993; Cheng and Titterington 1994), kriging models (Sacks et al. 1989;


Lophaven et al. 2002), and support vector regression (Smola and Scholkopf 2004; Clarke et al. 2005), which require optimization in the fitting process, increasingly replace the traditional polynomial response surfaces (Box et al. 1978; Myers and Montgomery 1995), which only require the solution of a system of linear equations.

It is difficult to predict the performance of a surrogate for a new problem. In most cases, the practice is to use a surrogate model of preference based on past experience. Alternatively, generating a large and diverse set of surrogates reduces the chances of using poorly fitted surrogates. Zerpa et al. (2005) used multiple surrogates for optimization of an alkaline–surfactant–polymer flooding process, incorporating a local weighted average model of the individual surrogates. Goel et al. (2007) explored different approaches in which the weights associated with each surrogate model are determined based on the global cross-validation error measure called prediction sum of squares (PRESS). Acar and Rais-Rohani (2008) discussed variations in the choice of weights using cross-validation errors, and studied weight selection via optimization. PRESS can also be used to identify the surrogate most likely to be the most accurate. Lin et al. (2002) and Meckesheimer et al. (2002) presented investigations on the use of cross-validation for RMSE estimation.

The objectives of this paper are (1) to investigate the effectiveness of PRESS for selecting the best surrogate in a diverse set, and (2) to explore two weighting schemes based on minimization of the mean integrated square error and their potential to improve upon the best PRESS surrogate. The rest of the paper is organized as follows. Sections 2 and 3 present the methodology of the investigation. Section 4 defines the numerical experiments used for the investigation, and Section 5 presents results and discussion. Finally, the paper closes by recapitulating salient points and concluding remarks. Three appendices present (1) an overview of the surrogate modeling techniques used; (2) basic concepts about boxplots; and (3) how reducing the cost of PRESS computation in kriging affects the accuracy of the RMSE estimation.

2 Background

Surrogates are fitted to function values at p points, which are known as the design of experiments (DOE). The accuracy of the surrogates is then evaluated on the entire domain, with one of the main error metrics being the root mean square error (RMSE).

2.1 Root mean square error

We denote by y(x) the actual simulation at the point x = [x1 x2 ... x_ndv]^T, and by e(x) = y(x) − ŷ(x) the error associated with the prediction of the surrogate model, ŷ(x). The actual root mean square error (RMSE) in the design domain with volume V is given by:

$$RMSE_{actual} = \sqrt{\frac{1}{V} \int_V e^2(\mathbf{x}) \, d\mathbf{x}} . \quad (1)$$

In this paper, when we check the accuracy of a surrogate, we compute the RMSE by Monte Carlo integration at a large number p_test of test points:

$$RMSE = \sqrt{\frac{1}{p_{test}} \sum_{i=1}^{p_{test}} e_i^2} , \quad (2)$$

where e_i = y_i − ŷ_i is the error associated with the prediction, ŷ_i, compared to the actual simulation, y_i, at the i-th test point.

We use five different Latin hypercube designs (McKay et al. 1979), created by the MATLAB Latin hypercube function lhsdesign, set with the "maxmin" option with ten iterations. The RMSE is taken as the mean of the values for the five designs, and the uncertainty about this value is approximately the standard deviation divided by √5.¹

For comparing surrogates based only on the data at the p points of the design of experiments (DOE), we use cross-validation errors.

2.2 Cross-validation errors

A cross-validation error is the error at a data point when the surrogate is fitted to a subset of the data points not including that point. When the surrogate is fitted to all the other p − 1 points (the so-called leave-one-out strategy), we obtain the vector of cross-validation errors, e. This vector is also known as the PRESS vector (PRESS stands for prediction sum of squares). Figure 1 illustrates the cross-validation errors for polynomial response surface and kriging surrogates.

The RMSE is estimated from the PRESS vector:

$$PRESS_{RMS} = \sqrt{\frac{1}{p} \, \mathbf{e}^T \mathbf{e}} . \quad (3)$$

¹ We performed a comparison with the RMSE computation using the whole set of points. We found that the uncertainty can also be adequately estimated from the standard deviation of the entire set divided by the square root of the number of points.
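As a concrete illustration of (3), the leave-one-out computation can be sketched in a few lines. This is a minimal sketch: the `fit`/`predict` callables and names below are ours, not the toolboxes used in the paper.

```python
import numpy as np

def press_rms(X, y, fit, predict):
    """Leave-one-out PRESS_RMS of eq. (3): refit the surrogate p times,
    each time predicting the one point that was left out."""
    p = len(y)
    e = np.empty(p)  # vector of cross-validation errors (the PRESS vector)
    for i in range(p):
        mask = np.arange(p) != i
        model = fit(X[mask], y[mask])                # fit to the other p - 1 points
        e[i] = y[i] - predict(model, X[i:i + 1])[0]  # error at the left-out point
    return np.sqrt(e @ e / p)

# Illustration with a degree-2 polynomial surrogate in one variable
fit = lambda X, y: np.polyfit(X[:, 0], y, 2)
predict = lambda coef, X: np.polyval(coef, X[:, 0])

X = np.linspace(0.0, 2.0 * np.pi, 8).reshape(-1, 1)
y = np.sin(X[:, 0])
print(press_rms(X, y, fit, predict))  # RMSE estimate for the quadratic fit
```

For surrogates whose fitting requires numerical optimization (e.g., kriging), each of the p refits repeats that optimization, which is what motivates the k-fold variation.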


Fig. 1 The cross-validation error (CVE) at the third point of the DOE is exemplified by fitting a polynomial response surface (PRS) and a kriging model (KRG) to five data points of the function sin(x). After repeating this process for all p points, the square root of the mean square error is used as an estimator of the RMSE

The leave-one-out strategy is computationally expensive for a large number of points. We then use a variation of the k-fold strategy (Kohavi 1995) to overcome this problem. According to the classical k-fold strategy, after dividing the available data (p points) into p/k clusters, each fold is constructed using a point randomly selected (without replacement) from each of the clusters. Of the k folds, a single fold is retained as the validation data for testing the model, and the remaining k − 1 folds are used as training data. The cross-validation process is then repeated k times, with each of the k folds used exactly once as validation data. Note that k-fold turns out to be leave-one-out when k = p. We implemented the strategy by (1) extracting p/k points of the set using a "maximin" criterion (maximization of the minimum inter-distance), and (2) removing these points from the set and repeating step (1) with the remaining points. Each set of extracted points is used for validation and the remaining points for fitting. This process is repeated k times, each time with a different validation set. An advantage of the k-fold strategy is that it can be applied to any surrogate technique. For KRG models, a less computationally expensive way of computing PRESS is to freeze the correlation parameters at their values for the original surrogate. We have studied this approach, and the results (shown in Appendix 3) show that there is substantial loss of correlation between PRESSRMS and RMSE.
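The maximin-based fold extraction described above can be sketched as follows. The greedy selection heuristic is our own illustrative choice, since the paper does not specify the exact maximin algorithm; function names are ours.

```python
import numpy as np

def maximin_subset(X, idx, size):
    """Greedily pick `size` indices from `idx`, maximizing the minimum
    distance to the points already picked (an illustrative heuristic)."""
    idx = list(idx)
    chosen = [idx.pop(0)]  # seed with the first remaining point
    while len(chosen) < size and idx:
        d = [min(np.linalg.norm(X[j] - X[c]) for c in chosen) for j in idx]
        chosen.append(idx.pop(int(np.argmax(d))))
    return chosen

def kfold_maximin(X, k):
    """Split p points into k folds of p/k points each by repeatedly
    extracting a maximin subset from the remaining points (Section 2.2).
    Assumes p is divisible by k."""
    p = len(X)
    remaining = list(range(p))
    folds = []
    for _ in range(k):
        fold = maximin_subset(X, remaining, p // k)
        remaining = [j for j in remaining if j not in fold]
        folds.append(fold)
    return folds
```

Each fold is then used once for validation while the surrogate is fitted to the points of the other k − 1 folds.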

3 Ensemble of surrogates

3.1 Selection based on PRESS

Since PRESSRMS is an estimator of the RMSE, one possible way of using multiple surrogates is to select the model with the best (i.e., smallest) PRESS value (the BestPRESS surrogate). Because the quality of fit depends on the data points, the BestPRESS surrogate may vary from DOE to DOE. This strategy may include surrogates based on the same methodology, such as different instances of kriging (e.g., kriging models with different regression and/or correlation functions). The main benefit of a diverse and large set is the increased chance of avoiding (1) poorly fitted surrogates and (2) DOE dependence of the performance of individual surrogates. Obviously, success when using BestPRESS relies on the diversity of the set of surrogates and on the quality of the PRESS estimator.

3.2 Weighted average surrogate

Alternatively, a weighted average surrogate (WAS) intends to take advantage of n surrogates in the hope of canceling errors in prediction through proper weight selection in the linear combination of the models:

$$\hat{y}_{WAS}(\mathbf{x}) = \sum_{i=1}^{n} w_i(\mathbf{x}) \, \hat{y}_i(\mathbf{x}) = \mathbf{w}^T(\mathbf{x}) \, \hat{\mathbf{y}}(\mathbf{x}) , \quad (4)$$

$$\sum_{i=1}^{n} w_i(\mathbf{x}) = \mathbf{1}^T \mathbf{w}(\mathbf{x}) = 1 , \quad (5)$$

where ŷWAS(x) is the response predicted by the WAS model, wi(x) is the weight associated with the ith surrogate at x, and ŷi(x) is the response predicted by the ith surrogate. The constraint in (5) ensures that if all surrogates provide the same prediction, so does ŷWAS(x).

The weights and the predicted responses can be written in vector form as w(x) and ŷ(x). In this work we study weight selection based on PRESS, which is a global measure, so w(x) = w, ∀x.
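Equations (4)-(5) amount to a dot product between a weight vector that sums to one and the vector of individual predictions. A minimal sketch (function name ours):

```python
import numpy as np

def was_predict(weights, predictions):
    """Weighted average surrogate prediction, eqs. (4)-(5):
    y_WAS = w^T y_hat, with the weights summing to one."""
    w = np.asarray(weights, dtype=float)
    y_hat = np.asarray(predictions, dtype=float)
    assert np.isclose(w.sum(), 1.0), "weights must satisfy 1^T w = 1"
    return float(w @ y_hat)

# Three surrogates predicting at the same point x
print(was_predict([0.5, 0.3, 0.2], [1.0, 1.2, 0.8]))
```

Note that when all surrogates agree, constraint (5) guarantees the WAS returns that common value.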

In a specific DOE, when considering a set of surrogates, if not all are used, we assume surrogates are added to the ensemble one at a time based on the rank given by PRESSRMS. Then, the first one to be picked is BestPRESS.

3.2.1 Heuristic computation of the weights

Goel et al. (2007) proposed a heuristic scheme for calculation of the weights, namely the PRESS weighted average surrogate (PWS). In PWS, the weights are computed as:

$$w_i = \frac{w_i^*}{\sum_{j=1}^{n} w_j^*} , \qquad w_i^* = \left( E_i + \alpha E_{avg} \right)^{\beta} , \qquad E_{avg} = \frac{1}{n} \sum_{i=1}^{n} E_i , \qquad \beta < 0 , \; \alpha < 1 , \quad (6)$$

where E_i is given by the PRESSRMS of the ith surrogate.


The two parameters, α and β, control the importance of averaging and the importance of the individual PRESS values, respectively. Goel et al. (2007) suggested α = 0.05 and β = −1.
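Equation (6) with the suggested α = 0.05 and β = −1 can be sketched as follows (function and argument names are ours):

```python
import numpy as np

def pws_weights(press_rms_values, alpha=0.05, beta=-1.0):
    """PWS weights of Goel et al. (2007), eq. (6): each surrogate gets
    w*_i = (E_i + alpha * E_avg)^beta, then the weights are normalized
    so they sum to one."""
    E = np.asarray(press_rms_values, dtype=float)
    w_star = (E + alpha * E.mean()) ** beta
    return w_star / w_star.sum()

# Smaller PRESS_RMS (more accurate surrogate) -> larger weight
print(pws_weights([1.0, 2.0, 4.0]))
```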

3.2.2 Computation of the weights for minimum RMSE

Using an ensemble of neural networks, Bishop (1995) proposed a weighted average surrogate obtained by approximating the covariance between surrogates from residuals at training or test points. Here, as in Acar and Rais-Rohani (2008), we opt instead for basing Bishop's approach on minimizing the mean square error (MSE):

$$MSE_{WAS} = \frac{1}{V} \int_V e_{WAS}^2(\mathbf{x}) \, d\mathbf{x} = \mathbf{w}^T \mathbf{C} \mathbf{w} , \quad (7)$$

where eWAS(x) = y(x) − ŷWAS(x) is the error associated with the prediction of the WAS model, and the integral, taken over the domain of interest, permits the calculation of the elements of C as:

$$c_{ij} = \frac{1}{V} \int_V e_i(\mathbf{x}) \, e_j(\mathbf{x}) \, d\mathbf{x} , \quad (8)$$

where ei(x) and ej(x) are the errors associated with the predictions given by surrogate models i and j, respectively.

C plays the same role as the covariance matrix in Bishop's formulation. However, we approximate C by using the vectors of cross-validation errors, e:

$$c_{ij} \simeq \frac{1}{p} \, \mathbf{e}_i^T \mathbf{e}_j , \quad (9)$$

where p is the number of data points and the sub-indexes i and j indicate different surrogate models.

Given the C matrix, the optimal weighted surrogate (OWS) is obtained from minimization of the MSE:

$$\min_{\mathbf{w}} \; MSE_{WAS} = \mathbf{w}^T \mathbf{C} \mathbf{w} , \quad (10)$$

subject to:

$$\mathbf{1}^T \mathbf{w} = 1 . \quad (11)$$

The solution is obtained using Lagrange multipliers as:

$$\mathbf{w} = \frac{\mathbf{C}^{-1} \mathbf{1}}{\mathbf{1}^T \mathbf{C}^{-1} \mathbf{1}} . \quad (12)$$

The solution may include negative weights as well as weights larger than one. Allowing this freedom was found to amplify errors coming from the approximation of the C matrix (9). One way to enforce positivity is to solve (12) using only the diagonal elements of C, which are more accurately approximated than the off-diagonal terms. We denote this approach OWSdiag. It is worth observing that when α = 0 and β = −2, PWS gives the same weights as OWSdiag. We also studied the possibility of adding the constraint wi ≥ 0 to the optimization problem; however, it was not sufficient to overcome the effect of poor approximations of the C matrix. OWS is recommended for the cases when an accurate approximation of C is available, and OWSdiag for a less accurate one.

Whether employing BestPRESS or one of the above-mentioned WAS schemes, the computational cost of using an ensemble of surrogates depends on the calculation of the cross-validation errors. Since each surrogate of the ensemble is fitted as many times as the number of points in the data set, the larger the set the higher the cost.
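A sketch of (9)-(12), including the OWSdiag variant, follows; function names are ours, not from the paper's toolboxes.

```python
import numpy as np

def approx_C(cv_errors):
    """Approximate C from the cross-validation error vectors, eq. (9):
    c_ij ~ (1/p) e_i^T e_j. `cv_errors` has shape (n_surrogates, p)."""
    E = np.asarray(cv_errors, dtype=float)
    return E @ E.T / E.shape[1]

def ows_weights(C, diag_only=False):
    """OWS weights, eq. (12): w = C^{-1} 1 / (1^T C^{-1} 1).
    With diag_only=True (OWSdiag) only the diagonal of C is kept,
    which guarantees positive weights."""
    C = np.asarray(C, dtype=float)
    if diag_only:
        C = np.diag(np.diag(C))
    ones = np.ones(C.shape[0])
    c_inv_1 = np.linalg.solve(C, ones)  # C^{-1} 1 without forming C^{-1}
    return c_inv_1 / (ones @ c_inv_1)
```

With diag_only=True, the weights reduce to w_i ∝ 1/c_ii = PRESSRMS,i^−2, which is why PWS with α = 0 and β = −2 gives the same weights as OWSdiag.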

3.2.3 Should we use all surrogates?

When forming a weighted surrogate, we may use all m surrogates we have created; alternatively, we may use just a subset of the n best ones (n ≤ m). With an exact C matrix, there is no reason not to use them all. However, with the approximation of (9), it is possible that adding inaccurate surrogates will lead to a loss of accuracy. When we use a partial set, we add the surrogates according to the rank given by PRESSRMS. BestPRESS (the surrogate with the lowest PRESSRMS) is the first one to be picked, which corresponds to n = 1. When the worst PRESSRMS-ranked surrogate is added to the ensemble, n = m.

This can be shown to be the best approach with an exact or a diagonal C matrix.

Note that when n < m (i.e., not all surrogates are used), it is expected that the set of surrogates in the ensemble can change with the DOE, since the performance of individual surrogates may vary from one DOE to another.

3.2.4 Combining accuracy and diversity

For both selection and combination, the best-case scenario would be to have a set of surrogates that are different in terms of prediction values (ŷ(x)) but similar in terms of prediction accuracy (RMSE). This would increase the chance that the WAS would allow error cancellation. In the present work, even though we generated a substantial number of surrogates with different statistical models, loss functions, and shape functions, we usually missed this goal. That is, comparably accurate surrogates were often highly correlated. Future


Table 1 Information about the set of 24 basic surrogates (see Appendix 1 for a review on the surrogate models)

Surrogates 1–6, kriging model (KRG): KRG-Poly0-Exp, KRG-Poly0-Gauss, KRG-Poly1-Exp, KRG-Poly1-Gauss, KRG-Poly2-Exp, KRG-Poly2-Gauss. Poly0, Poly1, and Poly2 indicate zero-, first-, and second-order polynomial regression models, respectively. Exp and Gauss indicate the general exponential and the Gaussian correlation model, respectively. In all cases, θ0 = 10 × 1_{ndv×1} and 10^−2 ≤ θi ≤ 200, i = 1, 2, ..., ndv. We chose 6 different kriging surrogates by varying the regression and correlation models.

Surrogate 7, polynomial response surface (PRS): PRS2, a full model of degree 2.

Surrogate 8, radial basis neural network (RBNN): Goal = (0.05 ȳ)² and Spread = 1/3.

Surrogates 9–24, support vector regression (SVR): SVR-ANOVA-E-Full, SVR-ANOVA-E-Short01, SVR-ANOVA-E-Short02, SVR-ANOVA-Q, SVR-ERBF-E-Full, SVR-ERBF-E-Short01, SVR-ERBF-E-Short02, SVR-ERBF-Q, SVR-GRBF-E-Full, SVR-GRBF-E-Short01, SVR-GRBF-E-Short02, SVR-GRBF-Q, SVR-Spline-E-Full, SVR-Spline-E-Short01, SVR-Spline-E-Short02, SVR-Spline-Q. ANOVA, ERBF, GRBF, and Spline indicate the kernel function (the ERBF and GRBF kernel functions were set with σ = 0.5). E and Q indicate the loss function, ε-insensitive and quadratic, respectively. Full and Short refer to different values for the regularization parameter, C, and for the insensitivity, ε. Full adopts C = ∞ and ε = 10^−4, while Short01 and Short02 use the selection of values according to Cherkassky and Ma (2004): for Short01, ε = σy/√k, and for Short02, ε = 3σy√(ln(k)/k); for both, C = 100 max(|ȳ + 3σy|, |ȳ − 3σy|), where ȳ and σy are the mean value and the standard deviation of the function values at the design data, respectively. We chose 16 different SVR surrogates by varying the kernel function, the loss function (ε-insensitive or quadratic), and the SVR parameters (C and ε).

For support vector regression, details about the kernels are given in Table 8.


Table 2 Selection of surrogates according to different criteria

Surrogate   Definition
BestRMSE    Most accurate surrogate for a given DOE (basis of comparison for all other surrogates).
BestPRESS   Surrogate with the lowest PRESSRMS for a given DOE.
PWS         Weighted surrogate with heuristic computation of weights.
OWSideal    Most accurate weighted surrogate, based on the true C matrix. Depending on the surrogate selection, OWSideal may be less accurate than BestRMSE.
OWS         Weighted surrogate based on an approximation of the C matrix using cross-validation errors.
OWSdiag     Weighted surrogate based on the main diagonal elements of the approximated C matrix (using cross-validation errors).

BestRMSE and OWSideal are defined based on testing points; all others are obtained using data points.

research may need to consider the problem of how to generate a better set of basic surrogates.

4 Numerical experiments

4.1 Basic surrogates and derived surrogates

Table 1 gives details about the 24 different basic surrogates used during the investigation (see Appendix 1 for a short theoretical review). The toolbox of Lophaven et al. (2002), the native neural networks MATLAB toolbox (Mathworks contributors 2004), and the code developed by Gunn (1997) were used to execute the KRG, RBNN, and SVR algorithms, respectively. The SURROGATES toolbox of Viana and Goel (2007) was used to execute the PRS and WAS algorithms, and also for easy manipulation of all these different codes.

No attempt was made to improve the predictions of any surrogate by fine-tuning their respective parameters (such as the initial θ parameter in kriging models).

Table 2 summarizes the surrogates that can be selected from the set using different criteria. These include the best choice of surrogate that would be selected if we had perfect knowledge of the function (BestRMSE), the surrogate selected based on the lowest PRESS error (BestPRESS), as well as the various weighted surrogates. Of these, the ideal weighted surrogate has weights based on perfect knowledge, and it provides a bound on the gains possible by using WAS.

4.2 Performance measures

The numerical experiments are intended to (1) measure how efficient PRESSRMS is as an estimator of the RMSE (and consequently, how good PRESSRMS is for identifying the surrogate with the smallest RMSE), and (2) explore how much the RMSE can be further reduced by the WAS. The first objective is quantified by the correlation between PRESSRMS and RMSE across the 24 surrogates. For both objectives, we compare each basic surrogate, BestPRESS, and

WAS model with the best surrogate of the set in a specific DOE (BestRMSE). We define %difference as the percent gain of choosing a specific model over BestRMSE:

$$\%difference = 100 \, \frac{RMSE_{BestRMSE} - RMSE_{Surr}}{RMSE_{BestRMSE}} , \quad (13)$$

where RMSE_BestRMSE is the RMSE of the best surrogate of that specific DOE (i.e., BestRMSE) and RMSE_Surr is the RMSE of the surrogate we are interested in (either a single surrogate or a WAS model). When %difference > 0 there is a gain in using the specific surrogate, and when %difference < 0 there is a loss.

For each basic surrogate and also for BestPRESS, it is expected that %difference ≤ 0, which means that there may be losses, and the best-case scenario is when one of the basic surrogates (hopefully BestPRESS) coincides with BestRMSE. When considering BestPRESS, the smaller the loss, the better the ability of PRESSRMS to select the best surrogate of the set. For the WAS, in a particular DOE, we add surrogates according to the rank given by the PRESSRMS value (i.e., we always start from BestPRESS). Thus, the %difference may start negative, and as we increase the number of surrogates

Table 3 Parameters used in Hartman function

Hartman3:
B = [ 3.0  10.0  30.0
      0.1  10.0  35.0
      3.0  10.0  30.0
      0.1  10.0  35.0 ]
D = [ 0.3689   0.1170  0.2673
      0.4699   0.4387  0.7470
      0.1091   0.8732  0.5547
      0.03815  0.5743  0.8828 ]

Hartman6:
B = [ 10.0   3.0   17.0   3.5   1.7   8.0
       0.05 10.0   17.0   0.1   8.0  14.0
       3.0   3.5    1.7  10.0  17.0   8.0
      17.0   8.0    0.05 10.0   0.1  14.0 ]
D = [ 0.1312  0.1696  0.5569  0.0124  0.8283  0.5886
      0.2329  0.4135  0.8307  0.3736  0.1004  0.9991
      0.2348  0.1451  0.3522  0.2883  0.3047  0.6650
      0.4047  0.8828  0.8732  0.5743  0.1091  0.0381 ]


Table 4 Specifications for the DOEs and test points for the test functions

Test problem           No. of design   No. of points    No. of points for test
                       variables       for fitting      (in each of the 5 DOEs)
Branin–Hoo             2               12, 20, and 42   2,000
Camelback              2               12               2,000
Hartman3               3               20               2,000
Hartman6               6               56               2,000
Extended Rosenbrock    9               110 and 220      2,500
Dixon–Price            12              182              4,000

in the ensemble, it is expected that %difference turns positive, which expresses the potential of the WAS to be better than the best surrogate of the set.
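The %difference metric of (13) is straightforward to compute; a minimal sketch (function name ours):

```python
def pct_difference(rmse_best, rmse_surr):
    """%difference of eq. (13): positive means the surrogate in question
    beats BestRMSE for that DOE; negative means a loss."""
    return 100.0 * (rmse_best - rmse_surr) / rmse_best

print(pct_difference(2.0, 1.5))  # a surrogate better than BestRMSE
print(pct_difference(2.0, 2.5))  # a surrogate worse than BestRMSE
```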

4.3 Test functions

To test the effectiveness of the various approaches, we employ a set of analytical functions widely used as benchmark problems in optimization (e.g., Dixon and Szegö 1978; Lee 2007). These are:

1. Branin–Hoo function (two variables):

$$y(\mathbf{x}) = \left( x_2 - \frac{5.1 x_1^2}{4\pi^2} + \frac{5 x_1}{\pi} - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos(x_1) + 10 , \qquad -5 \le x_1 \le 10 , \; 0 \le x_2 \le 15 . \quad (14)$$

2. Camelback function (two variables):

$$y(\mathbf{x}) = \left( \frac{x_1^4}{3} - 2.1 x_1^2 + 4 \right) x_1^2 + x_1 x_2 + \left( 4 x_2^2 - 4 \right) x_2^2 , \qquad -3 \le x_1 \le 3 , \; -2 \le x_2 \le 2 . \quad (15)$$

3. Hartman functions (three and six variables):

$$y(\mathbf{x}) = -\sum_{i=1}^{q} a_i \exp \left( -\sum_{j=1}^{m} b_{ij} \left( x_j - d_{ij} \right)^2 \right) , \qquad 0 \le x_j \le 1 , \; j = 1, 2, \ldots, m . \quad (16)$$

We use two instances: Hartman3 (m = 3) and Hartman6 (m = 6), with 3 and 6 variables, respectively. For both, q = 4 and a = [1.0 1.2 3.0 3.2]. Other parameters are given in Table 3.

4. Extended Rosenbrock function (nine variables):

$$y(\mathbf{x}) = \sum_{i=1}^{m-1} \left[ \left( 1 - x_i \right)^2 + 100 \left( x_{i+1} - x_i^2 \right)^2 \right] , \qquad -5 \le x_i \le 10 , \; i = 1, 2, \ldots, m = 9 . \quad (17)$$

Fig. 2 Plot of test functions: (a) Branin-Hoo, (b) Camelback, (c) Hartman3, (d) Hartman6, (e) Extended Rosenbrock, (f) Dixon-Price. See Appendix 2 for details about boxplots


5. Dixon–Price function (12 variables):

$$y(\mathbf{x}) = \left( x_1 - 1 \right)^2 + \sum_{i=2}^{m} i \left( 2 x_i^2 - x_{i-1} \right)^2 , \qquad -10 \le x_i \le 10 , \; i = 1, 2, \ldots, m = 12 . \quad (18)$$

As stated before, the quality of fit depends on the training data (function values at the DOE). As a consequence, performance measures may vary from DOE to DOE. Thus, for all test problems, a set of 100 different DOEs was used as a way of averaging out the DOE dependence of the results. They were created by the MATLAB Latin hypercube function lhsdesign, set with the "maxmin" option with 1,000 iterations. Table 4 shows details about the data set generated for each test function. Naturally, the number of points used to fit surrogates increases with dimensionality. We also use the Branin–Hoo and the extended Rosenbrock functions to investigate what happens in low and high dimensions, respectively, if we can afford more points. For the computation of the cross-validation errors, in most cases we use the leave-one-out strategy (k = p in the k-fold strategy, see Section 2). However, for the Dixon–Price function, due to the high cost of the leave-one-out strategy for the 24 surrogates for all 100 DOEs, we adopted the k-fold strategy with k = 14 instead. This means that the surrogate is fitted 14 times, each time with 13 points left out (that is, fitted to the remaining 169 points). For all problems, we use five different Latin hypercube designs for evaluating the accuracy of the surrogate by Monte Carlo integration of the RMSE. These DOEs are also created by the MATLAB

Fig. 3 Boxplot of the correlation coefficient between the vectors of PRESSRMS and RMSE (24 surrogates each) for the low-dimensional problems (this coefficient is computed for all 100 DOEs): (a) Branin-Hoo, 12 points; (b) Branin-Hoo, 20 points; (c) Branin-Hoo, 42 points; (d) Camelback, 12 points; (e) Hartman3, 20 points. In parentheses are the median, mean, and standard deviation of the correlation coefficient, respectively. PRESSRMS is calculated via the k-fold strategy (which reduces to the leave-one-out strategy when k is equal to the number of points used for fitting). It can be seen that, except when we use the smallest value of k, there is no significant disadvantage of the k-fold relative to the leave-one-out strategy. See Appendix 2 for details about boxplots



Latin hypercube function lhsdesign, but set with the "maximin" option with ten iterations. The RMSE is taken as the mean of the values for the five DOEs.
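The lhsdesign call described above (random Latin hypercube candidates, keeping the one with the best maximin spread) can be sketched as follows; this is a plain numpy stand-in for the MATLAB routine, not the code used in the paper:

```python
import numpy as np

def lhs_maximin(p, m, iterations=1000, seed=None):
    """Draw `iterations` random Latin hypercube designs of p points in [0, 1]^m
    and keep the one whose minimum pairwise distance is largest (the 'maximin'
    criterion of MATLAB's lhsdesign)."""
    rng = np.random.default_rng(seed)
    best, best_d = None, -np.inf
    for _ in range(iterations):
        # one random LHS: an independent permutation of the p strata per dimension
        strata = rng.permuted(np.tile(np.arange(p), (m, 1)), axis=1).T
        u = (strata + rng.random((p, m))) / p
        diff = u[:, None, :] - u[None, :, :]
        d = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(p, 1)].min()
        if d > best_d:
            best, best_d = u, d
    return best
```

Each column contains exactly one point per stratum, so the marginal distributions stay uniform while the maximin search spreads the points apart.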

Figure 2 illustrates the complexity of the problems. For the two-dimensional cases, plots using the test points reveal the presence of high gradients. For all other cases, the test points were used to obtain box plots of the functions, which show variation in the values of the functions by more than one order of magnitude.

5 Results and discussion

We begin with a short study on the use of the k-fold strategy. When the number of folds is small, the accuracy of the surrogate fit by omitting a fold can be much poorer than that of the original surrogate, so that PRESSRMS is likely to be much larger than the RMSE. Since we mostly used a number of points which is twice that of the polynomial coefficients, two folds would fit a polynomial with the same number of points as the number of coefficients, which is likely to be much less accurate than using all the points. So in this study, we varied the values of k, starting from the smallest value of k that divides the p points into more than two folds. For example, if p = 12, this value is k = 3 (which generates folds of four points each). Then we use all possible values of k up to p (when the

k-fold turns into the leave-one-out strategy). Due to the computational cost of the PRESS errors, we divided the test problems into two sets:

1. Low-dimensional problems (Branin–Hoo, Camelback, and Hartman3). We compute the correlations between RMSE and PRESSRMS for different k values. For a given DOE, the correlation is computed between the vectors of RMSE and PRESSRMS values for the different surrogates. The correlation measures the ability of PRESS to substitute for the exact RMSE for choosing the best surrogate (the closer the correlation is to 1 the better). This is repeated for all DOEs.

2. High-dimensional problems (Hartman6, extended Rosenbrock, and Dixon–Price). The cost of performing PRESS for several k values is high. To keep the computational cost low, we do the study only for the least expensive surrogate, i.e., the PRS (degree = 2), and we calculate the ratio between PRESSRMS and RMSE for each DOE (the closer the ratio is to 1 the better).
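Both studies rest on the PRESS_RMS quantity; a sketch of its k-fold computation (the `fit_predict` interface is hypothetical, standing in for the paper's surrogate toolbox):

```python
import numpy as np

def press_rms(X, y, fit_predict, k):
    """k-fold PRESS_RMS: split the p points into k folds; for each fold, fit the
    surrogate to the remaining points and record the errors at the left-out
    points.  k = p reproduces the leave-one-out strategy; e.g., k = 14 with
    p = 182 leaves 13 points out per fit, as done for the Dixon-Price function."""
    p = len(y)
    idx = np.arange(p)          # deterministic folds; shuffle beforehand if desired
    e = np.empty(p)
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        e[fold] = y[fold] - fit_predict(X[train], y[train], X[fold])
    return np.sqrt(np.mean(e ** 2))
```

The per-fold errors are pooled over all p points before the root-mean-square is taken, so each point contributes exactly once.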

Figure 3 shows that in low dimensions the use of the k-fold strategy does not drastically affect the correlation between RMSE and PRESSRMS, even though the smallest value of k always presents the worst correlation coefficient between PRESSRMS and RMSE. It means that the

Fig. 4 Boxplot of the PRESSRMS/RMSE ratio for PRS (degree = 2) for the high-dimensional problems: (a) Hartman6, 56 points; (b) extended Rosenbrock, 110 points; (c) extended Rosenbrock, 220 points; (d) Dixon–Price, 182 points. This ratio is computed for all 100 DOEs. PRESSRMS is calculated via the k-fold strategy (which equals the leave-one-out strategy when k is equal to the number of points used for fitting). In parentheses are the median, mean, and standard deviation of the ratio, respectively. As expected, the ratio becomes better as k approaches the number of points. In these cases, it does not pay much to perform more than 30 fits for the cross-validations (i.e., k > 30). See Appendix 2 for details about boxplots.



Table 5 Frequency, in number of DOEs (out of 100), of best RMSE and best PRESSRMS for each basic surrogate, for Branin–Hoo (12, 20, and 42 points), Camelback (12 points), Hartman3 (20 points), Hartman6 (56 points), extended Rosenbrock (110 and 220 points), and Dixon–Price (182 points); the table body (one row per surrogate, plus the "Top 3 surrogates" totals) could not be recovered from the source. The numbers indicate the identity as in Table 1. It can be seen that (1) for both RMSE and PRESSRMS, the best surrogate depends on the problem and also on the DOE; and (2) especially in the case of the high-dimensional problems, the best three surrogates according to both RMSE and PRESSRMS tend to be the same (boldface). As the number of points increases most surrogates fall behind, i.e., the advantage of the top surrogates is more pronounced.



Fig. 5 Correlation between PRESSRMS and RMSE: (a) Branin–Hoo, 12 points; (b) Branin–Hoo, 20 points; (c) Branin–Hoo, 42 points; (d) Camelback, 12 points; (e) Hartman3, 20 points; (f) Hartman6, 56 points; (g) extended Rosenbrock, 110 points; (h) extended Rosenbrock, 220 points; (i) Dixon–Price, 182 points. In a given DOE, the correlation is computed between the sets of PRESSRMS and RMSE values. The correlation appears to improve with the number of points.

Fig. 6 Frequency of success in selecting BestRMSE (out of 100 experiments). The success of using PRESS for surrogate selection increases with the number of points.



poor correlation has to do with the number of points used to fit the set of surrogates. This is clearly seen in the Branin–Hoo example, Fig. 3a–c, where more points improve the correlation. Figure 4 illustrates that in high dimensions, increasing the value of k improves the quality of the information given by the cross-validation errors. The best scenario is when k = p. However, the ratio seems to be acceptable when each fold has around 10% of the p points (i.e., k = 8 when p = 56; k = 11 when p = 110; and k = 14 when p = 182). This agrees with Meckesheimer et al. (2002).

Returning to the discussion of PRESSRMS as a criterion for surrogate selection, Table 5 shows the frequency of best RMSE and best PRESSRMS for each of the surrogates in all test problems. It can be observed that the best surrogate depends (1) on the problem, i.e., no single surrogate or even modeling technique is always the best; and (2) on the DOE, i.e., for the same problem, the surrogate that performs best can vary from DOE to DOE. In addition, as the number of points increases, there is better agreement between RMSE and PRESSRMS. In particular, the top three surrogates (boldface in Table 5) are identified better. However, for the Branin–Hoo problem, we note a deterioration in identifying the best surrogate when going from 20 to 42 points. This is because at high density of

Table 6 %difference in the RMSE, defined in (13), of the best three basic surrogates (according to how often they have the best PRESSRMS, see Table 5) and BestPRESS for each test problem

Problem                           Surrogate   Freq. of best PRESSRMS   Median   Mean    SD

Branin–Hoo (12 points)            9           19                       −3       −21     36
                                  17          32                       −18      −27     33
                                  20          18                       −17      −19     18
                                  BestPRESS   –                        −26      −43     55

Branin–Hoo (20 points)            2           79                       0        −13     31
                                  4           9                        −14      −29     55
                                  9           5                        −152     −189    148
                                  BestPRESS   –                        −3       −31     61

Branin–Hoo (42 points)            2           31                       −12      −26     57
                                  4           28                       −3       −21     56
                                  6           41                       −21      −42     55
                                  BestPRESS   –                        −18      −37     51

Camelback (12 points)             1           15                       −40      −42     30
                                  7           18                       −8       −12     13
                                  9           11                       −24      −31     25
                                  BestPRESS   –                        −29      −35     29

Hartman3 (20 points)              2           11                       −12      −22     26
                                  8           31                       −5       −33     206
                                  18          13                       −23      −26     17
                                  BestPRESS   –                        −21      −28     31

Hartman6 (56 points)              17          18                       −7.9     −10.2   9.7
                                  18          53                       −0.1     −1.7    2.6
                                  20          9                        −3.5     −4.0    3.4
                                  BestPRESS   –                        −1.9     −5.2    8.2

Extended Rosenbrock (110 points)  5           32                       −0.09    −0.47   0.85
                                  6           41                       0.00     −0.19   1.02
                                  7           23                       −0.13    −0.52   0.94
                                  BestPRESS   –                        −0.02    −1.03   4.15

Extended Rosenbrock (220 points)  5           39                       −1.28    −3.55   7.85
                                  6           26                       −2.10    −6.55   15.11
                                  22          14                       −3.75    −9.19   17.54
                                  BestPRESS   –                        −2.80    −7.17   16.49

Dixon–Price (182 points)          5           21                       0.00     0.00    0.00
                                  6           40                       0.00     −0.01   0.05
                                  7           39                       0.00     0.00    0.00
                                  BestPRESS   –                        0.00     0.00    0.03

For the basic surrogates, the numbers indicate the identity as in Table 1. The negative sign of %difference indicates a loss in terms of RMSE for the specific surrogate compared with the best surrogate. The loss decreases with increasing number of points. At a low number of points, BestPRESS may not even be as good as the third best surrogate. In high dimension, BestPRESS is as good as the best of the three.
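The %difference entries can be read as a normalized RMSE comparison. Equation (13) itself appears in an earlier section, so the normalization below (relative to the best surrogate of the set) is an assumption consistent with the signs reported here, not a quotation:

```python
def pct_difference(rmse_surrogate, rmse_best):
    """Assumed form of the %difference of (13): negative values indicate a loss
    in RMSE relative to the best surrogate of the set, positive values a gain."""
    return 100.0 * (rmse_best - rmse_surrogate) / rmse_best
```

For instance, a surrogate whose RMSE is 20% larger than that of the best surrogate of the set gets a %difference of −20.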



points, the trend for kriging becomes less important, so surrogates 2, 4, and 6 have very similar performance. In addition, for KRG surrogates in high dimensions, the correlation model is less important since points are so sparse; hence there are several almost identical surrogates.

Figure 5 shows histograms of the correlations between RMSE and PRESSRMS. Figure 6 complements Fig. 5 and shows that as the number of points increases, there is better agreement between the selections given by RMSE and PRESSRMS. In particular, the top three surrogates are identified better. For the extended Rosenbrock with 220 points, the number of surrogates that are almost equally accurate is larger than three. This explains why the number of times that BestRMSE is within the best three PRESS-ranked surrogates drops when compared to the case with 110 points. Altogether, when there are few points (low-dimensional problems, i.e., two and three variables), PRESSRMS is good for filtering out bad surrogates; when there are more points (high-dimensional problems, i.e., six, nine, and 12 variables), PRESSRMS can also identify the subset of the best surrogates.

Table 6 provides the mean, median, and standard deviation of the %difference in the RMSE for the best three surrogates and BestPRESS. Since a single fixed surrogate cannot beat the best surrogate of the set

Fig. 7 %difference in the RMSE, defined in (13), when using weighted average surrogates: (a) OWSideal for Branin–Hoo, 12 points; (b) OWSideal for extended Rosenbrock, 110 points; (c) OWSideal for all test problems (using all 24 surrogates); (d) median of the %difference of WAS for Branin–Hoo, 12 points; (e) median of the %difference of WAS for extended Rosenbrock, 110 points. a, b illustrate the effect of adding surrogates to the ensemble (picked one at a time according to the PRESSRMS ranking) for the Branin–Hoo and extended Rosenbrock functions (best cases in low and high dimensions, respectively). c shows that the gain in using OWSideal decreases to around 10% for problems in high dimension (BH, CB, H3, H6, ER, and DP are the initials of the test problems, and in parentheses is the number of points in the DOEs). For d, e, in parentheses are the median and standard deviation when using five surrogates (after which there is no improvement in practice). While in theory the surrogate with best RMSE (BestRMSE) can be beaten (OWSideal), in practice the quality of the information given by the cross-validation errors makes this very difficult in low dimensions. In high dimension the quality permits gain, but it is very limited. See Appendix 2 for details about boxplots.



(which in fact varies from DOE to DOE), there cannot be any gain. As the loss approaches zero, BestPRESS becomes more reliable in selecting the best surrogate of the set, instead of being a hedge against selecting an inaccurate surrogate. It can be observed that (1) in low dimensions, the poor quality of the information given by PRESSRMS makes some surrogates outperform BestPRESS (i.e., %difference closer to zero, or smaller loss); and (2) in high dimension, the quality of the information given by PRESSRMS is better and, as a consequence, BestPRESS becomes hard to beat. For Dixon–Price, for example, the best three surrogates practically coincide with BestPRESS, so the %difference is close to zero for them.

Next, we study whether we can do any better by using a weighted average surrogate rather than BestPRESS. For a given DOE, surrogates are added according to their rank based on PRESSRMS. We start the study considering the best possible performance of OWS, i.e., based on exact computation of the C matrix (which may not be possible in real-world applications, but is easy to compute in our set of analytical functions). Figure 7a, b exemplify what ideally happens with the %difference in the RMSE as we add surrogates to the WAS for the Branin–Hoo and extended Rosenbrock functions fitted with 12 and 110 points, respectively. It is seen that we can potentially gain by adding more surrogates, but even the ideal potential gain levels off after a while. Disappointingly, Fig. 7c shows that the maximum possible gain decreases with dimensionality. Keeping the Branin–Hoo and extended Rosenbrock functions, Fig. 7d, e compare the ideal gain with the gain obtained with information based on cross-validation errors. It can be observed that while in theory (that is, with OWSideal) BestPRESS as well as the surrogate with best RMSE (BestRMSE) can be beaten, in practice none of the WAS schemes is able to substantially improve on the results of BestPRESS, i.e., by more than 10% gain. In both low and high dimensions, the best scenario is given by OWSdiag, which appears to tolerate well the use of a large number of surrogates. PWS is not able to handle the addition of poorly fitted surrogates. For this reason, the remainder of the paper does not include PWS. OWS is unstable in low dimensions, while presenting small gains and the risk of losses in high dimensions.
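The OWS weights come from minimizing the (estimated) integrated square error of the weighted average subject to the weights summing to one. The closed form below is the standard solution of that constrained problem; treating it as the paper's exact formulation of OWS (whose equations appear in an earlier section) is an assumption. OWS_diag keeps only the diagonal of C:

```python
import numpy as np

def ows_weights(C, diag_only=False):
    """Minimize w^T C w subject to sum(w) = 1, where C holds the (cross-)error
    terms of the surrogates: w = C^{-1} 1 / (1^T C^{-1} 1).  diag_only=True
    mimics the OWS_diag variant, which discards the off-diagonal terms of C."""
    C = np.asarray(C, dtype=float)
    if diag_only:
        C = np.diag(np.diag(C))
    w = np.linalg.solve(C, np.ones(C.shape[0]))
    return w / w.sum()
```

With a diagonal C, a surrogate with a small error measure simply receives a proportionally larger weight; the off-diagonal terms are what make full OWS sensitive to noise in the cross-validation estimates.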

Table 7 summarizes the information about the %difference in the RMSE for all test problems. It is then clear that in low dimension very little can be done to improve on BestPRESS (see mean and median). For the high-dimensional problems, OWSdiag seems to handle the uncertainty in the cross-validation errors better than OWS. However, the gains are limited to between

Table 7 %difference in the RMSE, defined in (13), of the WAS schemes and BestPRESS for each test problem when using all 24 surrogates

Columns: Branin–Hoo (12 points), Branin–Hoo (20 points), Branin–Hoo (42 points), Camelback (12 points), Hartman3 (20 points), Hartman6 (56 points), extended Rosenbrock (110 points), extended Rosenbrock (220 points), Dixon–Price (182 points, 24 surrogates), Dixon–Price (182 points, 21 surrogates)

OWSideal   Median   62     69     66     54     34     12.3   20.8   25.8   9.2    30.5
           Mean     61     68     66     55     34     12.7   21.1   24.9   9.7    30.7
           SD       10     10     12     9      7      3.3    4.6    3.9    2.4    2.5
BestPRESS  Median   −26    −3     −18    −29    −21    −1.9   0.0    −3.2   0.00   0.0
           Mean     −43    −31    −37    −35    −28    −5.2   −1.0   −7.6   0.00   −2.6
           SD       55     61     51     29     31     8.2    4.2    16.6   0.03   5.2
OWSdiag    Median   −35    −24    −34    −23    −18    −2.7   6.1    11.4   −7.0   4.4
           Mean     −43    −41    −51    −42    −24    −3.9   5.3    9.1    −7.3   4.1
           SD       39     55     61     131    26     5.2    4.2    11.8   4.9    4.2
OWS        Median   −119   −249   −61    −84    −182   −31    6      15.6   −9     17.2
           Mean     −144   −356   −104   −181   −205   −37    −13    13.6   −153   15.5
           SD       99     374    151    769    120    241    82     13.5   1396   10.7

For the Branin–Hoo function, the number of points used for fitting the surrogates is varied. For the Dixon–Price function, both the full set of surrogates and a partial set, obtained when the first three best surrogates are left out, are considered and used. The sign of %difference indicates that there is loss (−) or gain (+) in using the specific WAS scheme when compared with the best surrogate of the set in terms of RMSE.



Fig. 8 %difference in the RMSE, defined in (13), for the overall best six surrogates and BestPRESS: (a) Branin–Hoo, 12 points; (b) Branin–Hoo, 20 points; (c) Branin–Hoo, 42 points; (d) extended Rosenbrock, 110 points; (e) extended Rosenbrock, 220 points. In parentheses are the median, mean, and standard deviation. Values closer to zero indicate better fits. Numbers on the abscissa indicate the surrogate number in Table 1. It can be seen that BestPRESS has performance comparable with the best three surrogates of the set.

10% and 20%. As noted earlier, the behavior for the Branin–Hoo function when going from 12 to 42 points is anomalous, because surrogates 2, 4, and 6 become very similar and dependent on the DOE. While PRESS is less reliable in identifying the most accurate surrogate, this matters less because, as can be seen from Fig. 8, the differences are minuscule for these surrogates. The results of BestPRESS for the extended Rosenbrock function shown in Table 7 are also counter-intuitive. Figure 8 shows that, unlike the case of 220 points, for 110 points the first three surrogates are all much better than the remaining surrogates. This makes selection less risky for 110 points.

6 Conclusions

In this paper, we have explored the use of multiple surrogates for minimizing the RMSE in meta-modeling. We explored (1) the generation of a large set of surrogates and the use of PRESS errors as a criterion for surrogate selection; and (2) a weighted average surrogate based on the minimization of the integrated square error (in contrast to heuristic schemes).

The study allows the conclusion that the benefits of both strategies depend on dimensionality and the number of points:

• With a sufficient number of points, PRESSRMS becomes very good for ranking the surrogates according to prediction accuracy. In general, PRESSRMS is good for filtering out inaccurate surrogates, and as the number of points increases PRESSRMS can also identify the best surrogate of the set (or another equally accurate surrogate).

• As the dimension increases, the possible gains from a weighted surrogate diminish even though our ability to approximate the ideal weights improves. In the two-dimensional problems, OWSideal presented a median gain of 60%; while in practice, there was a loss of 25% and no improvement over BestPRESS. For the extended Rosenbrock function (nine variables), OWSideal presented a median



gain of 20%; while in practice, there was a gain of just 6% (for both OWSdiag and OWS; however, the former presents better performance when considering the mean and standard deviation).

Therefore, we can say that using multiple surrogates and PRESSRMS for identifying the best surrogate is a good strategy that becomes ever more useful with an increasing number of points. On the other hand, the use of a weighted average surrogate does not seem to have the potential for substantial error reductions, even with the large number of points that improves our ability to approximate the ideal weighted surrogate.

Additionally, we can point out the following findings:

• For a large number of points, PRESSRMS as obtained through the k-fold strategy successfully estimates the RMSE (as shown in Fig. 5i). In the set of test problems, there is very little improvement when performing more than 30 fits for the cross-validations (i.e., k > 30).

• At high point density (possible in low dimensions) the choice of the trend function (or regression function, as it is also sometimes called) for kriging is not important. At very low point density (characteristic of high dimensions) the choice of correlation function is not important. Both situations lead to the emergence of multiple almost identical kriging surrogates.

• When using a WAS, OWSdiag seems to be the best choice, unless a large number of points allows the use of OWS. In the cases that we have studied, OWSdiag was able to absorb the addition of poorly fitted surrogates, up to the point of using all created surrogates.

• While we have been able to generate a large number of diverse surrogates, we were less successful in generating a substantial number of diverse surrogates with comparably high accuracy. This challenge is left for future research.

Acknowledgements This research was supported by the National Science Foundation, USA (grant no. 0423280) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, Brazil (proc. no. 3327/06-0). The authors gratefully acknowledge these grants.

Appendix 1. Surrogate modeling techniques

The principal features of the surrogate modeling techniques used in this study are described in the following sections.

1.1 Kriging (KRG)

KRG is named after the pioneering work of the South African mining engineer D. G. Krige. It estimates the value of a function as a combination of a known function f_i(x) (e.g., a linear model such as a polynomial trend) and departures (representing low and high frequency variation components, respectively) of the form:

y(x) = \sum_{i=1}^{m} \beta_i f_i(x) + z(x) , (19)

where z(x) is assumed to be a realization of a stochastic process Z(x) with mean zero, process variance σ², and spatial covariance function given by:

\mathrm{cov}\left( Z(x_i), Z(x_j) \right) = \sigma^2 R(x_i, x_j) , (20)

where R(x_i, x_j) is the correlation between x_i and x_j. The conventional KRG models interpolate the training data; this is an important characteristic to keep in mind when dealing with noisy data. In addition, KRG is a flexible technique, since different instances can be created by choosing different pairs of f_i(x) and correlation functions. Finally, complexity and the lack of commercial software may hinder this technique from gaining popularity in the near term (Simpson et al. 1998).

The Matlab code developed by Lophaven et al. (2002) was used to execute the KRG algorithm. More details about KRG are provided in Sacks et al. (1989), Simpson et al. (1998), and Lophaven et al. (2002).
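A minimal numeric sketch of (19)-(20), assuming a constant trend (f_1(x) = 1) and a Gaussian correlation with a fixed θ; DACE instead estimates θ by maximum likelihood, which is the expensive step revisited in Appendix 3:

```python
import numpy as np

def krg_fit_predict(X, y, Xnew, theta=10.0):
    """Ordinary-kriging sketch: a generalized-least-squares constant trend beta
    plus the correlated departure z(x) of (19), with Gaussian correlation
    R(x_i, x_j) = exp(-theta * ||x_i - x_j||^2) playing the role of (20)."""
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-theta * d2)
    n = len(X)
    R = corr(X, X) + 1e-10 * np.eye(n)          # tiny nugget for conditioning
    ones = np.ones(n)
    beta = ones @ np.linalg.solve(R, y) / (ones @ np.linalg.solve(R, ones))
    return beta + corr(Xnew, X) @ np.linalg.solve(R, y - beta * ones)
```

The predictor reproduces the training data, which is the interpolation property noted above.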

1.2 Polynomial response surface (PRS)

The PRS approximation is one of the most well-established meta-modeling techniques. In PRS modeling, a polynomial function is used to approximate the actual function. A second-order polynomial model can be expressed as:

y(x) = \beta_0 + \sum_{i=1}^{m} \beta_i x_i + \sum_{i=1}^{m} \sum_{j=1}^{m} \beta_{ij} x_i x_j , (21)

The set of coefficients can be obtained by least squares and, according to PRS theory, the estimates are unbiased and have minimum variance. Another characteristic is that it is possible to identify the significance of different design factors directly from the coefficients in the normalized regression model (in practice, using t-statistics). In spite of these advantages, there is always a



Fig. 9 Radial basis neural network architecture

drawback when applying PRS to model highly nonlinear functions. Even though higher-order polynomials can be used, it may be difficult to obtain sufficient sample data to estimate all of the coefficients in the polynomial equation.

See Box et al. (1978) and Myers and Montgomery (1995) for more details about PRS.
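Fitting (21) by least squares reduces to building a design matrix with the constant, linear, and second-order terms; a numpy sketch:

```python
import numpy as np

def prs2_design(X):
    """Rows of [1, x_i, x_i * x_j (i <= j)] for the second-order model of (21)."""
    p, m = X.shape
    cols = [np.ones(p)] + [X[:, i] for i in range(m)] \
         + [X[:, i] * X[:, j] for i in range(m) for j in range(i, m)]
    return np.column_stack(cols)

def prs2_fit_predict(X, y, Xnew):
    """Least-squares estimate of the beta coefficients, then evaluation at Xnew."""
    beta, *_ = np.linalg.lstsq(prs2_design(X), y, rcond=None)
    return prs2_design(Xnew) @ beta
```

For m variables the model has 1 + m + m(m + 1)/2 coefficients, which is why the study mostly uses about twice that many points (e.g., 6 coefficients and 12 points for the two-variable problems).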

1.3 Radial basis neural networks (RBNN)

RBNN is an artificial neural network that uses radial basis functions as transfer functions. RBNNs consist of two layers: a hidden radial basis layer and an output linear layer, as shown in Fig. 9. The output of the network is thus:

y(x) = \sum_{i=1}^{N} a_i \, \rho(x, c_i) , (22)

where N is the number of neurons in the hidden layer, c_i is the center vector for neuron i, and the a_i are the weights of the linear output neuron. The norm is typically taken to be the Euclidean distance, and the basis function is taken to be the following:

\rho(x, c_i) = \exp\left( -\beta \, \| x - c_i \|^2 \right) , (23)

where exp(·) is the exponential function. RBNNs may require more neurons than standard feedforward/backpropagation networks, but often they can be designed in a fraction of the time it takes to train standard feedforward networks. A crucial drawback for some applications is the need for many training points.

The native neural networks Matlab toolbox (Mathworks contributors 2004) was used to execute the RBNN algorithm. RBNN is comprehensively presented in Smith (1993) and Cheng and Titterington (1994).
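Evaluating (22)-(23) for given centers and weights takes only a couple of lines; choosing the centers and weights is what the MATLAB toolbox automates, so the training step is omitted from this sketch:

```python
import numpy as np

def rbnn_predict(X, centers, a, beta):
    """Output of the two-layer network of (22)-(23):
    y(x) = sum_i a_i * exp(-beta * ||x - c_i||^2)."""
    d2 = ((np.atleast_2d(X)[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-beta * d2) @ a
```

One simple training choice (not the toolbox's) is to place one center at each training point and solve the resulting linear system for the a_i, which again yields an interpolating surrogate.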

1.4 Support vector regression (SVR)

SVR is a particular implementation of support vector machines (SVM). In SVR, the aim is to find y(x) that has at most a deviation of magnitude ε from each of the training data. Mathematically, the SVR model is given by:

y(x) = \sum_{i=1}^{p} (a_i - a_i^{*}) \, K(x_i, x) + b , (24)

where K(x_i, x) is the so-called kernel function, the x_i are the different points of the original DOE, and x is the point of the design space at which the surrogate is evaluated. Parameters a_i, a_i^{*}, and b are obtained during the fitting process. Table 8 lists the kernel functions used in this work.

During the fitting process, SVR minimizes an upper bound on the expected risk, unlike empirical risk minimization techniques, which minimize the error on the training data. This is done by using alternative loss functions. Figure 10 shows two of the most common possible loss functions. Figure 10a corresponds to the conventional least squares error criterion. Figure 10b illustrates the loss function used in this work, which is given by the following equation:

\mathrm{Loss}(x) = \begin{cases} 0 , & \text{if } | y(x) - \hat{y}(x) | \le \varepsilon \\ | y(x) - \hat{y}(x) | - \varepsilon , & \text{otherwise} \end{cases} (25)

Table 8 Example of kernel functions

Gaussian radial basis function (GRBF):
K(x, x') = \exp\left( -\frac{\| x - x' \|^2}{2\sigma^2} \right)

Exponential radial basis function (ERBF):
K(x, x') = \exp\left( -\frac{\| x - x' \|}{2\sigma^2} \right)

Splines:
K(x, x') = 1 + \langle x, x' \rangle + \frac{1}{2} \langle x, x' \rangle \min(x, x') - \frac{1}{6} \left( \min(x, x') \right)^3

Anova-spline (Anova):
K(x, x') = \prod_i K_i(x_i, x'_i), where
K_i(x_i, x'_i) = 1 + x_i x'_i + (x_i x'_i)^2 + (x_i x'_i)^2 \min(x_i, x'_i) - x_i x'_i (x_i + x'_i) \left( \min(x_i, x'_i) \right)^2 + \frac{1}{3} \left( x_i^2 + 4 x_i x'_i + x_i'^2 \right) \left( \min(x_i, x'_i) \right)^3 - \frac{1}{2} (x_i + x'_i) \left( \min(x_i, x'_i) \right)^4 + \frac{1}{5} \left( \min(x_i, x'_i) \right)^5



Fig. 10 Loss functions: (a) quadratic; (b) ε-insensitive

The implication is that in SVR the goal is to find a function that has at most ε deviation from the training data. In other words, the errors are considered zero as long as they are less than ε.
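The ε-insensitive loss of (25) in code, written in the standard |e| − ε form used in the SVR literature:

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps):
    """Zero inside the +/- eps tube around the data, growing linearly outside it."""
    e = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    return np.where(e <= eps, 0.0, e - eps)
```

Deviations smaller than ε contribute nothing to the fitting objective, which is precisely why the errors are "considered zero" inside the tube.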

Besides ε, fitting the SVR model involves a regularization parameter, C. Parameter C determines the compromise between the model complexity and the degree to which deviations larger than ε are tolerated in the optimization formulation. If C is too large, the tolerance is small and the tendency is to obtain the most complex SVR model possible. An open issue in SVR is the choice of the values of the parameters for both the kernel and loss functions.

The Matlab code developed by Gunn (1997) was used to execute the SVR algorithm. To learn more about SVR, see Gunn (1997), Clarke et al. (2005), and Smola and Scholkopf (2004).

Finally, the SURROGATES ToolBox is used for easy manipulation of all these different models. The SURROGATES ToolBox, developed by Viana and Goel (2007), integrates several open-source tools, providing a general-purpose MATLAB library of multidimensional function approximation methods.

Appendix 2. Box plots

In a box plot, the box is defined by lines at the lower quartile (25%), median (50%), and upper quartile (75%) values. Lines extend from each end of the box to show the coverage of the rest of the data; they are drawn at a distance of 1.5 times the inter-quartile range in each direction, or at the limit of the data if that falls within 1.5 times the inter-quartile range. Outliers are data with values beyond the ends of the lines, marked by a "+" sign for each point.
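The whisker and outlier rules described above can be sketched as:

```python
import numpy as np

def boxplot_stats(data):
    """Quartiles, whisker ends, and outliers per the 1.5 x inter-quartile-range
    rule described above, with whiskers clipped to the data range."""
    data = np.asarray(data, dtype=float)
    q1, med, q3 = np.percentile(data, [25, 50, 75])
    fence = 1.5 * (q3 - q1)
    lo = data[data >= q1 - fence].min()          # lowest point inside the lower fence
    hi = data[data <= q3 + fence].max()          # highest point inside the upper fence
    outliers = data[(data < lo) | (data > hi)]   # plotted with a '+' sign
    return (q1, med, q3), (lo, hi), outliers
```

This matches the MATLAB boxplot convention used for the figures in this paper (up to the quartile interpolation rule, which can differ slightly between implementations).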

Appendix 3. Reducing the costs of PRESS computation in kriging

As the number of points increases, the computation of PRESS via the leave-one-out approach may become prohibitive. One way to tackle this issue is to use the k-fold strategy (see the section about cross-validation errors). However, for kriging, another alternative is to avoid the costly estimation of the correlation parameters and keep them the same as those found for the model fit to all data. We used the extended Rosenbrock function to study the potential of the second approach.

Figure 11a shows the correlation coefficient between the vector of PRESSRMS, obtained with repeated computation of the correlation parameters, and RMSE, but only for the KRG models (a total of six, shown in Table 1). Figure 11b shows the same plot but with the correlation parameters kept the same as for the original fit (avoiding the costly optimization). Finally, Fig. 11c

Fig. 11 Performance of the PRESS computation: repeated versus frozen correlation parameters. (a) Correlation between PRESSRMS and RMSE for the leave-one-out strategy with repeated computation of the correlation parameters. (b) Correlation between PRESSRMS and RMSE for the leave-one-out strategy with frozen correlation parameters. (c) Ratio of the computational time of the PRESS computation with frozen correlation parameters (t) to that with repeated computation (T). t and T have mean values of 23 and 850 s, respectively. Savings in time may hurt the correlation with the RMSE.



compares the computational times. It can be seen that the savings in time may hurt the correlation with the RMSE. Ultimately, this would also translate into poorer data for the computation of the weights in a WAS strategy.

References

Acar E, Rais-Rohani M (2008) Ensemble of metamodels with optimized weight factors. In: 49th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, Schaumburg, IL, AIAA 2008-1884

Bishop CM (1995) Neural networks for pattern recognition. Oxford University Press, Oxford, pp 364–371

Box GEP, Hunter WG, Hunter JS (1978) Statistics for experimenters. Wiley, New York

Cheng B, Titterington DM (1994) Neural networks: a review from a statistical perspective. Stat Sci 9:2–54

Cherkassky V, Ma Y (2004) Practical selection of SVM parameters and noise estimation for SVM regression. Neural Netw 17(1):113–126

Clarke SM, Griebsch JH, Simpson TW (2005) Analysis of support vector regression for approximation of complex engineering analyses. J Mech Des 127:1077–1087

Dixon LCW, Szegö GP (1978) Towards global optimization 2. North-Holland, Amsterdam, The Netherlands

Goel T, Haftka RT, Shyy W, Queipo NV (2007) Ensemble of surrogates. Struct Multidisc Optim 33:199–216

Gunn SR (1997) Support vector machines for classification and regression. Technical Report, Image Speech and Intelligent Systems Research Group, University of Southampton, UK

Jin R, Chen W, Simpson TW (2001) Comparative studies of metamodelling techniques under multiple modeling criteria. Struct Multidisc Optim 23:1–13

Kohavi R (1995) A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Proceedings of the fourteenth international joint conference on artificial intelligence, pp 1137–1143

Lee J (2007) A novel three-phase trajectory informed search methodology for global optimization. J Glob Optim 38(1):61–77

Lin Y, Mistree F, Tsui K-L, Allen JK (2002) Metamodel validation with deterministic computer experiments. In: 9th AIAA/ISSMO symposium on multidisciplinary analysis and optimization, Atlanta, GA, AIAA-2002-5425

Lophaven SN, Nielsen HB, Søndergaard J (2002) DACE—a MATLAB kriging toolbox. Technical Report IMM-TR-2002-12, Informatics and Mathematical Modelling, Technical University of Denmark

Mathworks Contributors (2004) MATLAB® the language of technical computing, version 7.0 Release 14, The MathWorks Inc.

Mckay MD, Beckman RJ, Conover WJ (1979) A comparison of three methods for selecting values of input variables from a computer code. Technometrics 21:239–245

Meckesheimer M, Barton RR, Simpson TW, Booker A (2002) Computationally inexpensive metamodel assessment strategies. AIAA J 40(10):2053–2060

Myers RH, Montgomery DC (1995) Response surface methodology: process and product optimization using designed experiments. Wiley, New York

Sacks J, Welch WJ, Mitchell TJ, Wynn HP (1989) Design and analysis of computer experiments. Stat Sci 4(4):409–435

Simpson TW, Peplinski J, Koch PN, Allen JK (1998) On the use of statistics in design and the implications for deterministic computer experiments. In: Proceedings of the design theory and methodology (DTM'97), Paper No. DETC97/DTM-3881, ASME

Smith M (1993) Neural networks for statistical modeling. Von Nostrand Reinhold, New York

Smola AJ, Scholkopf B (2004) A tutorial on support vector regression. Stat Comput 14:199–222

Venkataraman S, Haftka RT (2004) Structural optimization complexity: what has Moore's law done for us? Struct Multidisc Optim 28:375–387

Viana FAC, Goel T (2007) SURROGATES ToolBox, http://fchegury.googlepages.com

Zerpa L, Queipo NV, Pintos S, Salager J (2005) An optimization methodology of alkaline–surfactant–polymer flooding processes using field scale numerical simulation and multiple surrogates. J Pet Sci Eng 47:197–208