



The Open Civil Engineering Journal, 2008, 2, 51-62


Limit Analysis of Masonry Structures Accounting for Uncertainties in Constituent Materials Mechanical Properties

Denis Benasciutti 1 and Gabriele Milani *,2

1 DIEGM, Dipartimento di Ingegneria Elettrica Gestionale Meccanica, via delle Scienze 208, 33100 Udine (Italy) and 2 IBK-ETHZ, Institut f. Baustatik und Konstruktion, Swiss Federal Institute of Technology, Wolfgang-Pauli-Str. 15, 8093 Zürich, Switzerland

Abstract: The uncertainty often observed in the experimental strengths of masonry constituent materials makes the selection of appropriate inputs for the finite element limit analysis of complex masonry buildings a critical task, and requires modeling the building ultimate load as a random variable. The most direct approach to solving limit analysis problems in the presence of random input parameters is the use of extensive Monte Carlo (MC) simulations. Nevertheless, when MC methods are used to estimate the collapse load cumulative distribution of a masonry structure, large scale linear programming problems must be tackled numerically many times, precluding the practical use of large scale MC simulations. To reduce the computational cost of a traditional MC approach, in the present paper direct computer calculations are replaced with inexpensive Response Surface (RS) models. In particular, RS models are used for the limit analysis of a masonry structure loaded in- and out-of-plane, assuming the input mechanical properties as random parameters. Two different RS models are analyzed, derived respectively from small scale (20 replicates) MC and Latin Hypercube (LH) simulations. The accuracy of the estimated RS models, as well as the good estimates of the collapse load cumulative distributions obtained via polynomial RS models in comparison with large scale MC simulations, show how the proposed approach can be a useful tool in problems of technical interest.

Key Words: Masonry, Limit analysis, Monte Carlo simulations, Latin Hypercube.

INTRODUCTION

The structural analysis of masonry buildings under seismic actions is generally a complex task, requiring complicated nonlinear analyses, nowadays performed with finite element (FE) methods.

The collapse load of a given building is clearly a function of the geometry, of the external actions, of the material properties and finally of the environmental conditions (humidity, temperature). In the technical literature, many different methods based on FE simulations can be found for the evaluation of the ultimate loads of engineering structures (see for instance [1-5]). A suitable way, which requires a relatively low computational cost, is the use of limit analysis theorems in combination with FE simulations. Such an approach is able to give important information at failure, as for instance collapse loads, failure mechanisms and, at least on critical sections, the stress distribution. The hypotheses at the base of the method are statically applied loads, infinite ductility and associated flow rules for the constituent materials. These hypotheses hold well for steel structures (see Olsen [4]), but it has been shown that quite reliable results can be obtained also for concrete (Olsen [6]) and masonry (Sutcliffe et al. [7] and Milani et al. [2]) materials that, as is well known, exhibit a finite ductility.

*Address correspondence to this author at the Institut f. Baustatik und Konstruktion, Swiss Federal Institute of Technology (ETH), Wolfgang-Pauli-Str. 15, 8093 Zürich, Switzerland (Formerly: ENDIF, Engineering Department in Ferrara, University of Ferrara, via Saragat 1, 44100 Ferrara, Italy); E-mail: [email protected], [email protected]

Nevertheless, an analysis at collapse for masonry structures remains a very difficult task, both from a theoretical and a numerical point of view, essentially because brickwork is constituted by an assemblage of bricks between which thin mortar joints are laid. At present, the three approaches most utilized in practice to tackle engineering problems involving the study of masonry structures rely on micro-modeling [1, 8], macro-modeling [9] and homogenization [10]. An alternative, technically meaningful methodology is finally represented by the so-called macro-elements approach presented, for instance, in [11].

While micro-modeling is limited to small structures [12], since a separate modeling of bricks and joints is required in the framework of finite elements, in macro-modeling the heterogeneous material is substituted with a macroscopic homogeneous fictitious one, obtained essentially from experimental data fitting. Although macro-modeling is suitable for large scale structures, it requires a difficult calibration of its mechanical properties, usually obtained by means of costly experimental campaigns [13].

In light of these considerations, homogenization theory seems particularly attractive, since it is able to reproduce the macroscopic behavior of masonry at failure requiring only the knowledge of the mechanical properties of the constituent materials (always available at low cost), once a suitable repetitive unit cell is identified.

In the framework of homogenization and limit analysis, for a fixed building geometry and for a given set of applied external actions, the collapse load y of a masonry structure resulting from numerical FE calculations will only depend on the input material properties (bricks and mortar), according to the mathematical model:

$$y = h(\mathbf{x}) \qquad (1)$$


where vector $\mathbf{x} = (x_1, x_2, \ldots, x_m)$ collects all the relevant input material parameters (e.g. cohesion, friction angle, cut-off stress, joint and brick compressive strengths, etc.).

The function $h(\cdot)$ just introduced to formalize the input/output relation of the computer FE simulation, even though highly non-linear, is strictly deterministic, i.e. replicated calculations with the same input parameters give the same output collapse loads.

Considering that the material strength properties of single masonry constituents (e.g. mortar, bricks) determined by experimental testing often show a very large variability [1, 14, 15], the selection of the appropriate values to use in numerical simulations can become a critical task. For instance, experimental data are quite often so scattered that simply using mean values could give unsafe results. In addition, the uncertainty in material strength properties also induces a corresponding variability in the resulting collapse load. On the other hand, in building practice the so-called characteristic values approach is often used. Such an approach consists of the following procedure: a deterministic structural analysis of the structure at hand is performed using as material mechanical properties a single strength value corresponding to a small probability (usually 5%, denoting the material strength characteristic value) of a Gaussian distribution obtained from a very small experimental data set (for instance, 5 cubic compressive strengths in the case of concrete structures). Therefore, it is clear that no information on the collapse load probability distribution can be deduced from such a single deterministic structural analysis.

As a consequence, a reliable approach would require a probabilistic analysis for estimating the probability distribution of the building limit load (given the probability distributions of the input strength properties of the single masonry constituents) and then calculating failure probabilities and safety levels.

However, the explicit determination of the collapse load probability distribution requires the explicit knowledge of the function $h(\cdot)$, which unfortunately is rarely known in closed form.

A possible alternative is the utilization of an approximate probabilistic analysis based on extensive Monte Carlo (MC) simulations, in which sampling-based techniques are used to study how uncertainty propagates from random input parameters to the calculated analysis results, by repetitively selecting values for the random input variables and then calculating the corresponding analysis outputs. Specifically, in the MC method the input values are chosen at random from their own domains of definition, which makes the sampled values more likely to occur in regions associated with high probabilities, usually around the mean of the distribution. Hence, large sample sizes (i.e. many simulation runs) are needed to assure a reasonable coverage of the range of each input variable (and of the calculated analysis output), as well as to achieve a sufficient statistical convergence of the estimated output probability distribution.
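As an illustration, the following minimal Python sketch propagates input uncertainty through a placeholder collapse-load function; the true $h(\cdot)$ of Eq. (1) is a full FE limit analysis, so the closed-form expression used here is a purely hypothetical stand-in (the means and standard deviations are those of Table 1).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def h(c, tan_phi):
    # Hypothetical stand-in for the FE limit analysis, NOT the paper's code.
    return 40.0 * c + 30.0 * c * tan_phi

n = 3000                                   # large MC sample size
c = rng.normal(0.1457, 0.034, size=n)      # cohesion [N/mm^2]
tan_phi = rng.normal(0.75, 0.045, size=n)  # tangent of friction angle [-]
y = h(c, tan_phi)                          # sampled collapse loads

print(f"mean = {y.mean():.3f}, std = {y.std():.3f}")
```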

Considering that in numerical limit analyses of complex buildings a single deterministic simulation can take many hours of calculation (see [16]), the large sample sizes required by extensive MC simulations become prohibitive for practical applications.

To alleviate the overall computational cost, a possibility is to replace the MC method with improved sampling-based techniques, which guarantee the same statistical convergence with smaller sample sizes. As an example, in the last decades the Latin Hypercube (LH) method has received increasing attention in many different research areas and applications where the use of large samples is not computationally practicable [17-19]. However, even though the LH method can help in reducing the total number of simulation runs required, in some applications the overall simulation time could still remain unacceptably high.

A completely different approach which can drastically reduce the total computation time is to adopt the so-called response surface (RS) techniques, see Fig. (1), to construct a surrogate model (or metamodel), which can be used as an inexpensive mathematical approximation of the actual, yet time-consuming, computer simulation [20-24]. More precisely, the true, but generally unknown, function $h(\cdot)$ introduced in Eq. (1), formalizing the input/output relationship of a FE limit analysis, is replaced by an approximation $\hat{h}(\cdot)$, which is calibrated on observed outputs $y_{c,k} = h(\mathbf{x}_{c,k})$, resulting from running computer calculations on a small set of optimally selected design input points $\mathbf{x}_{c,k}$.

Assuming that the overall approximation error is sufficiently small and technically acceptable, the decisive advantage of RS techniques is that the sample sizes required for an accurate calibration are significantly lower (e.g. two orders of magnitude) than those used in extensive MC simulations.

Once the RS model has been calibrated, it can be used as a proxy of direct computer calculations in extensive MC simulations, which are used to generate large sets of analysis outputs from which the output probability distribution is estimated.

Different RS models exist, which differ with respect to their relative accuracy and complexity. Even though many studies have investigated which combination of design scheme and RS technique would give the best level of accuracy, no general rules valid for all engineering problems were found [20-22, 24]. On the other hand, aspects other than accuracy were indicated as equally important: robustness (accuracy and stability across different types of problems), efficiency (computational effort required in metamodel calibration), transparency (existence of explicit relationships between inputs and outputs) and conceptual simplicity (ease of implementation). As an example, the classical polynomial RS approximation was indicated as the optimal choice due to its great simplicity and fairly good accuracy [21].

With the aim of investigating the potentiality of polynomial RS models in replacing actual computer simulations in the homogenized limit analysis of complex masonry buildings with random input parameters, this work will attempt:

1. to investigate the accuracy and efficiency of polynomial RS models as an inexpensive replacement of direct computer simulations;


2. to study the correlation between RS accuracy and design scheme, by referring to the MC and the LH techniques for generating input points for RS fitting;

3. to evaluate the accuracy of the estimated output probability distribution when polynomial RS models are used in place of direct computer calculations in extensive MC simulations.

The utilization of limit analysis in combination with RS approximation seems particularly adequate, being related to the fact that the response of limit analysis is smooth with respect to variations of the material parameters. Obviously, the extension of this statement to experimentally observed data is, in principle, not possible, due to the practical impossibility of performing extensive experimental MC tests on real scale masonry structures.

Numerical simulations concerning an example of relevant technical interest, an in- and out-of-plane loaded masonry wall, are presented.

First, a quadratic polynomial RS model is constructed on a small set of observed analysis outputs $y_{c,k}$, resulting from running computer simulations on a small set of input points $\mathbf{x}_{c,k}$ (at most 20 or 30), generated by either the MC method or the LH technique.

The estimated polynomial RS model is then used as a proxy of direct computer calculations in extensive MC simulations with large sample sizes (equal to 3000), used to assess the collapse load probability distribution. The size of the large MC samples was chosen as a necessary compromise between the high computational cost of all numerical simulations and the required accuracy in the estimate of the collapse load probability distribution, as well as considering 0.5% as a sufficient value for the failure probability in technical applications. On the other hand, considering the large variability of input material properties due to usually scarce experimental tests, a higher sample size would unnecessarily increase the overall computational cost without improving the accuracy of the tail estimates of the collapse load probability distribution.

A comparison between the probability distribution estimated via RS model and the one obtained by direct computer calculations is presented.

The same large MC samples are also used as validation points to compare the fitting performance of RS models constructed from MC or LH design points.

The presented results show how the use of polynomial RS models with MC simulations can drastically reduce the overall computation time, while providing acceptable levels of accuracy, which assures quite good estimates of the collapse load probability distribution.

MASONRY HOMOGENIZED FAILURE SURFACES

Failure loads of complex 3D masonry structures can be obtained with a relatively low computational cost by means of a recently presented FE limit analysis approach [10], in which the masonry failure surface is obtained through a micro-mechanical model based on homogenization theory applied in the rigid-plastic case.

Fig. (1). Schematic representation of extensive Monte Carlo simulations (via direct computer calculations or via RS model) used to estimate the collapse load probability distribution. Construction and validation of the polynomial RS model is also shown.

A detailed description of the equilibrated micro-mechanical model adopted is reported in [1, 3, 10] and the reader is referred there for an exhaustive discussion. In this section, only the basic idea of the model proposed is recalled in order to show how FE homogenized limit analysis Monte Carlo simulations have been performed at a structural level.

In Fig. (2-a), a masonry wall constituted by a periodic arrangement of bricks and mortar disposed in running bond texture is reported. Following the general procedure proposed by Suquet in [25], homogenization techniques combined with limit analysis can be used for the evaluation of the masonry homogenized strength domain $S^{hom}$ for combined in- and out-of-plane loads. In the framework of the lower bound limit analysis theorem (i.e. assuming associated flow rules for the constituent materials and imposing equilibrium equations and admissibility conditions), it can be shown that the frontier $\partial S^{hom}$ of $S^{hom}$ is obtained solving the following linear programming problem (see also Fig. (2)):

$$
\partial S^{hom} = \max\,(\mathbf{M},\,\mathbf{N}) \quad \text{such that} \quad
\begin{cases}
\mathbf{N} = \dfrac{1}{Y} \displaystyle\int_{Y \times [-h/2,\,h/2]} \boldsymbol{\sigma}\, dV & (a) \\[4pt]
\mathbf{M} = \dfrac{1}{Y} \displaystyle\int_{Y \times [-h/2,\,h/2]} y_3\, \boldsymbol{\sigma}\, dV & (b) \\[4pt]
\operatorname{div} \boldsymbol{\sigma} = \mathbf{0} & (c) \\
[\![\boldsymbol{\sigma}]\!]\, \mathbf{n}^{int} = \mathbf{0} & (d) \\
\boldsymbol{\sigma}\, \mathbf{n} \ \text{anti-periodic on}\ \partial Y_l & (e) \\
\boldsymbol{\sigma}(\mathbf{y}) \in S^m \ \forall\, \mathbf{y} \in Y^m; \quad \boldsymbol{\sigma}(\mathbf{y}) \in S^b \ \forall\, \mathbf{y} \in Y^b & (f)
\end{cases}
\qquad (2)
$$

In the previous equation the following symbols have been used:

- $\mathbf{N}$ and $\mathbf{M}$, which represent the macroscopic in-plane (membrane forces) and out-of-plane (bending moments and torsion) tensors;

Fig. (2). Proposed micro-mechanical model. -a: elementary cell identification. -b: subdivision of the elementary cell in layers along thickness and subdivision of each layer in sub-domains.


- $\boldsymbol{\sigma}$, which is the projection of the local stress tensor along directions $y_1$ and $y_2$, see Fig. (2);

- $\mathbf{n}$, the outward unit vector orthogonal to the $\partial Y_l$ surface, Fig. (2-a);

- $\partial Y_l$, which is the internal boundary of the elementary cell, Fig. (2-a);

- $[\![\boldsymbol{\sigma}]\!]$, representing the jump of micro-stresses across any discontinuity surface of normal $\mathbf{n}^{int}$, Fig. (2-b);

- $S^m$ and $S^b$, denoting respectively the strength domains of mortar and bricks;

- $Y$, which is the cross section of the 3D elementary cell with $y_3 = 0$ (see Fig. (2)); $Y$ also denotes its area, $V$ is the elementary cell volume, $h$ represents the wall thickness and $\mathbf{y} = (y_1 \;\; y_2 \;\; y_3)$ is the position of a point in the local frame of reference.

As shown in Fig. (2-b), in the model the unit cell is subdivided into a fixed number of layers along its thickness. For each layer the out-of-plane components $\sigma_{i3}$ ($i = 1, 2, 3$) of the micro-stress tensor $\boldsymbol{\sigma}$ are set to zero (i.e. a typical plane stress condition is adopted for each layer), so that only the in-plane components $\sigma_{ij}$ ($i, j = 1, 2$) are considered active. Furthermore, $\sigma_{ij}$ ($i, j = 1, 2$) are kept constant along the thickness $\Delta_{i_L}$ of each layer, i.e. in each layer $\sigma_{ij} = \sigma_{ij}(y_1, y_2)$. For each layer, each fourth of the representative volume element is subdivided into nine geometrical elementary entities (sub-domains), so that the entire elementary cell is subdivided into 36 sub-domains (see [10] and Fig. (2-b) for further details).

For each sub-domain $k$ and layer $i_L$, polynomial distributions of degree $(m)$ in the variables $(y_1, y_2)$ are a priori assumed for the stress components. Since stresses are polynomial expressions, the generic $ij$-th component can be written as follows:

$$
\sigma_{ij}^{(k,i_L)} = \mathbf{X}(\mathbf{y})\, \mathbf{S}_{ij}^{(k,i_L)T}\,, \qquad \mathbf{y} \in Y^{(k,i_L)} \qquad (3)
$$

where:

- $\mathbf{X}(\mathbf{y}) = [\,1 \;\; y_1 \;\; y_2 \;\; y_1^2 \;\; y_1 y_2 \;\; y_2^2 \;\; \ldots\,]$;

- $Y^{(k,i_L)}$ represents the $k$-th sub-domain of layer $i_L$;

- $\mathbf{S}_{ij}^{(k,i_L)} = \left[\,S_{ij}^{(k,i_L)(1)} \;\; S_{ij}^{(k,i_L)(2)} \;\; S_{ij}^{(k,i_L)(3)} \;\; S_{ij}^{(k,i_L)(4)} \;\; S_{ij}^{(k,i_L)(5)} \;\; S_{ij}^{(k,i_L)(6)} \;\; \ldots\,\right]$ is a vector representing the unknown stress parameters of sub-domain $k$ of layer $i_L$.

The imposition of equilibrium inside each sub-domain, the continuity of the stress vector on interfaces and the anti-periodicity of $\boldsymbol{\sigma}\mathbf{n}$ permit a strong reduction in the number of independent stress parameters (see [10] for further details), allowing the stress vector $\tilde{\boldsymbol{\sigma}}^{(k,i_L)}$ of layer $i_L$ inside each sub-domain to be written as $\tilde{\boldsymbol{\sigma}}^{(k,i_L)} = \tilde{\mathbf{X}}^{(k,i_L)}(\mathbf{y})\, \tilde{\mathbf{S}}^{(i_L)}$, where $\tilde{\mathbf{S}}^{(i_L)}$ is the vector of linearly independent unknown stress parameters of layer $i_L$, $k$ is a sub-domain and $\tilde{\mathbf{X}}^{(k,i_L)}$ is a $3 \times n_s$ matrix which depends only on the geometry of the sub-domain ($n_s$ is the length of vector $\tilde{\mathbf{S}}^{(i_L)}$).

Once an equilibrated polynomial field in each layer is obtained (here fourth-order polynomials are used), the proposed in- and out-of-plane model requires a subdivision of the wall thickness into $n_L$ layers (Fig. (2-b)), with a constant thickness $\Delta_{i_L} = h/n_L$. This allows the following simple non-linear optimization problem to be derived:

$$
\max\,\{\lambda\} \quad \text{such that} \quad
\begin{cases}
\tilde{\mathbf{N}} = \displaystyle\sum_{k,\,i_L} \int \tilde{\boldsymbol{\sigma}}^{(k,i_L)}\, dV & (a) \\[4pt]
\tilde{\mathbf{M}} = \displaystyle\sum_{k,\,i_L} \int y_3\, \tilde{\boldsymbol{\sigma}}^{(k,i_L)}\, dV & (b) \\[4pt]
\boldsymbol{\Sigma} = \left[\tilde{\mathbf{N}} \;\; \tilde{\mathbf{M}}\right] & (c) \\
\boldsymbol{\Sigma} = \lambda\, \mathbf{n}_\Sigma & (d) \\
\tilde{\boldsymbol{\sigma}}^{(k,i_L)} = \tilde{\mathbf{X}}^{(k,i_L)}(\mathbf{y})\, \tilde{\mathbf{S}}^{(i_L)} & (e) \\
\tilde{\boldsymbol{\sigma}}^{(k,i_L)} \in S^{(k,i_L)} & (f) \\
k = 1, \ldots, \text{number of sub-domains}; \quad i_L = 1, \ldots, \text{number of layers} & (g)
\end{cases}
\qquad (4)
$$

In equation (4), $\lambda$ represents the so-called load multiplier and $S^{(k,i_L)}$ denotes the non-linear strength domain of the constituent material (mortar or brick) corresponding to the $k$-th sub-domain and $i_L$-th layer. As a rule, $\lambda$ is an ultimate moment, an ultimate membrane action or a combination of moments and membrane actions. In order to recover the $\partial S^{hom}$ surface point by point, a fixed direction $\mathbf{n}_\Sigma$ in the six dimensional space of membrane actions ($\tilde{\mathbf{N}} = [N_{xx}\ N_{xy}\ N_{yy}]$) and bending + torsion moments ($\tilde{\mathbf{M}} = [M_{xx}\ M_{xy}\ M_{yy}]$) is chosen. For each $\mathbf{n}_\Sigma$, a failure load $\lambda$ at the cell level is computed. The knowledge of $\mathbf{n}_\Sigma$ and $\lambda$ permits a point of the masonry failure surface in six dimensions to be found. Changing the direction $\mathbf{n}_\Sigma$, a new $\lambda$ is calculated from the optimization problem, hence a new point of the failure surface is collected. In this manner, repeating the procedure for a suitable number of different directions, several points of $\partial S^{hom}$ can be calculated. With the hypotheses assumed in [10], $\partial S^{hom}$ is convex. Therefore, a final Delaunay tessellation permits a lower bound linear approximation of $\partial S^{hom}$ to be found.
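As a sketch of this direction-scanning logic, the snippet below replaces the six dimensional optimization problem (4) with a hypothetical two-dimensional elliptical strength domain, and uses a convex hull in place of the Delaunay tessellation; only the scan-and-linearize idea is retained.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Assumed elliptical domain (x/4)^2 + (y/2)^2 = 1, standing in for S^hom.
A = np.diag([1.0 / 4.0**2, 1.0 / 2.0**2])

def failure_load(n_sigma):
    # Largest lambda such that lambda * n_sigma lies on the boundary.
    return 1.0 / np.sqrt(n_sigma @ A @ n_sigma)

# Scan a suitable number of fixed directions n_sigma.
thetas = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
dirs = np.column_stack([np.cos(thetas), np.sin(thetas)])
points = np.array([failure_load(n) * n for n in dirs])

# Piecewise-linear approximation of the collected boundary points.
hull = ConvexHull(points)
print(f"{len(hull.simplices)} linear facets approximate the surface")
```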



A linearization with 80 planes of such a failure surface is implemented in the FE limit analysis code described in the following section, for performing the upper bound homogenized limit analyses presented in this paper.

Of course, the failure surface obtained with the homogenization procedure presented above depends on the mechanical properties assumed for both joints and bricks; therefore, the collapse load of the structures considered in Examples 2 and 3 of this paper also indirectly depends on both brick and mortar mechanical properties. Finally, it is worth noting that, in principle, the fracture energy of the constituent materials should ideally also be considered as a random variable (see [15]), although in limit analysis the effect of fracture energy is disregarded.

3D KINEMATIC FE LIMIT ANALYSIS

The upper bound approach developed in this paper is fully described in [3] and the reader is referred there for a detailed description of the numerical model. Here, only the bases of the proposed procedure are reported. The formulation uses three-noded triangular elements with linear interpolation of the velocity field inside each element, so that three velocity unknowns per node $i$, say $w_{xx}^i$, $w_{yy}^i$ and $w_{zz}^i$ (respectively 2 in-plane velocities and 1 out-of-plane velocity, see Fig. (3-a)), are introduced for each element $E$, meaning that the velocity field is linear inside an element.

A possible jump of velocities on interfaces between adjoining elements is supposed to occur, with linear interpolation of the jump along the interface. The introduction of a jump of displacements is useful to obtain reliable collapse loads for friction materials (see [5]). In this framework, for each interface between coplanar adjacent elements, four additional unknowns are introduced ($\Delta\mathbf{u}_I = [\Delta v_1\ \Delta u_1\ \Delta v_2\ \Delta u_2]^T$), representing the normal ($\Delta v_i$) and tangential ($\Delta u_i$) jumps of velocities (with respect to the discontinuity direction) evaluated on nodes $i = 1$ and $i = 2$ of the interface (see Fig. (3-b)). For any pair of nodes on the interface between two adjacent and coplanar triangles $R$ and $K$, the tangential and normal velocity jumps can be written in terms of the Cartesian nodal velocities of elements $R$-$K$ (see Fig. (3) for details), so that a system of four linear equations of the form $\mathbf{A}_{11}^{eq}\,\mathbf{w}^R + \mathbf{A}_{12}^{eq}\,\mathbf{w}^K + \mathbf{A}_{13}^{eq}\,\Delta\mathbf{u}_I = \mathbf{0}$ can be written, $\mathbf{w}^R$ and $\mathbf{w}^K$ being the $9 \times 1$ vectors that collect the velocities of elements $R$ and $K$ respectively, and $\mathbf{A}_{1j}^{eq}$ ($j = 1, 2, 3$) matrices which depend only on the interface orientation $\vartheta_I$ (Fig. (3)).

Since the velocity interpolation is kept linear inside each triangular element, only three equality constraints representing the plastic flow in the continuum (obeying an associated flow rule) are introduced for each element, in the form $\dot{\boldsymbol{\varepsilon}}_{pl}^E = \dot{\lambda}^E\, \partial S^{hom}/\partial \boldsymbol{\Sigma}$, where $\dot{\boldsymbol{\varepsilon}}_{pl}^E$ is the plastic strain rate vector of element $E$, $\dot{\lambda}^E \geq 0$ is the plastic multiplier, and $S^{hom}$ is the homogenized (non) linear failure surface of masonry in the six dimensional space of membrane ($N_{11}, N_{12}, N_{22}$) and bending ($M_{11}, M_{12}, M_{22}$) actions, i.e. $\boldsymbol{\Sigma} = (N_{11}, N_{12}, N_{22}, M_{11}, M_{12}, M_{22})$. For each element, the plastic flow in the continuum may be written in the form $\mathbf{A}_{11}^{eq}\,\mathbf{w}^E + \mathbf{A}_{12}^{eq}\,\dot{\boldsymbol{\lambda}}^E = \mathbf{0}$, where $\mathbf{w}^E$ is the vector of element velocities and $\dot{\boldsymbol{\lambda}}^E$ is an $m \times 1$ vector of plastic multiplier rates, one for each plane of the linearized failure surface.

Fig. (3-a). Triangular plate and shell element used for the upper bound FE limit analysis. -b: discontinuity of the in-plane velocity field.



Denoting with $\mathbf{w}_{zz,E} = [w_{zz}^{i(E)}\ \ w_{zz}^{j(E)}\ \ w_{zz}^{k(E)}]^T$ the element $E$ out-of-plane nodal velocities and with $\dot{\boldsymbol{\Theta}}^E = [\dot{\theta}_i^E\ \ \dot{\theta}_j^E\ \ \dot{\theta}_k^E]$ the side normal rotation rates, it is possible to show that $\dot{\boldsymbol{\Theta}}^E$ and $\mathbf{w}_{zz,E}$ are linked by the compatibility equation (Fig. (4)) $\dot{\boldsymbol{\Theta}}^E = \mathbf{B}^E\, \mathbf{w}_{zz,E}$, where $\mathbf{B}^E$ is a $3 \times 3$ matrix that depends only on the geometry of element $E$.

The total internal power dissipated $P^{in}$ is constituted by the power dissipated in the continuum, $P_E^{in}$, and the power dissipated on interfaces, $P_I^{in}$. It is interesting to note that out-of-plane plastic dissipation occurs only along each interface $I$ between two adjacent triangles $R$ and $K$, or on a boundary side $B$ of an element $Q$ (see Fig. (4)). Therefore $P_E^{in}$ can be evaluated for each triangle $E$ of area $A_E$ taking into account only in-plane actions.

Let us assume that a linear approximation (with $m$ hyper-planes) of the masonry failure surface, in the form $S^{hom} \simeq \{\mathbf{A}^{in}\boldsymbol{\Sigma} \leq \mathbf{b}^{in}\}$, is at disposal, obtained by solving, as already discussed in the previous Section, a number of linear programming problems (4). Here, $\mathbf{A}^{in}$ is an $m \times 6$ matrix of coefficients of each hyper-plane and $\mathbf{b}^{in}$ is an $m \times 1$ vector of the right hand sides of the linear approximation.

As the homogenized (linearized) failure surface is constituted by $m$ hyper-planes of equation $A_{xx}^q N_{xx} + A_{yy}^q N_{yy} + A_{xy}^q N_{xy} + B_{xx}^q M_{xx} + B_{yy}^q M_{yy} + B_{xy}^q M_{xy} = C_E^q$, with $1 \leq q \leq m$, an estimate of $P_E^{in}$ can easily be obtained as $P_E^{in} = \sum_{q=1}^{m} A_E\, \dot{\lambda}_E^{(q)}\, C_E^q$ with curvature rate tensor equal to zero, $\dot{\lambda}_E^{(q)}$ being the plastic multiplier rate of triangle $E$ associated to the $q$-th hyper-plane of the linearized failure surface.

On the other hand, for an interface $I$ between adjoining elements of length $\Gamma$ and orientation $\vartheta_I$, a rotation operator is applied to the frontier of the linearized homogenized admissible domain in order to obtain, with a limited computational effort, $m$ equations (one for each hyper-plane) of the form $A_{tt}^q N_{tt} + A_{nn}^q N_{nn} + A_{tn}^q N_{tn} + B_{tt}^q M_{tt} + B_{nn}^q M_{nn} + B_{tn}^q M_{tn} = C_I^q$, representing $\partial\tilde{S}^{hom}$ in the $n$-$t$ interface frame of reference defined in Fig. (3).

In this way, the power dissipated $P_I^{in}$ along an interface $I$ can easily be estimated as reported by Krabbenhoft et al. [26], and the reader is referred there for a detailed discussion of the kinematic hypotheses adopted for the evaluation of $P_I^{in}$.

After some assemblage operations (see for instance [4, 14, 26]), the following linear programming problem is ob-tained, where the objective function is the total internal power dissipated:

$$
\min \left\{ \sum_{I=1}^{n_I} P_I^{in} + \sum_{E=1}^{n_E} P_E^{in} - \mathbf{P}_0^T\, \mathbf{w} \right\}
\quad \text{such that} \quad
\begin{cases}
\mathbf{A}^{eq}\, \mathbf{U} = \mathbf{b}^{eq} \\
\dot{\boldsymbol{\lambda}}^{I,ass} \geq \mathbf{0}, \quad \dot{\boldsymbol{\lambda}}^{E,ass} \geq \mathbf{0} \\
\dot{\boldsymbol{\Theta}}^{ass} = \dot{\boldsymbol{\Theta}}^{+} - \dot{\boldsymbol{\Theta}}^{-}, \quad \dot{\boldsymbol{\Theta}}^{+} \geq \mathbf{0}, \quad \dot{\boldsymbol{\Theta}}^{-} \geq \mathbf{0}
\end{cases}
\qquad (5)
$$

In equation (5) the following symbols are used:

- $\mathbf{U}$ is the vector of global unknowns, collecting the vector of assembled nodal velocities ($\mathbf{w}$), the vector of assembled element plastic multiplier rates ($\dot{\boldsymbol{\lambda}}^{E,ass}$), the vector of assembled jumps of velocities on interfaces ($\Delta\mathbf{u}^{I,ass}$), the vector of assembled interface plastic multiplier rates ($\dot{\boldsymbol{\lambda}}^{I,ass}$) and the vector of interface and boundary out-of-plane rotation rates ($\dot{\boldsymbol{\Theta}}^{ass}$).

Fig. (4). Rotation along an interface between adjacent triangles or in correspondence of a boundary side.



- $\mathbf{A}^{eq}$ is the overall constraints matrix and collects normalization conditions, velocity boundary conditions, relations between velocity jumps on interfaces and element velocities, constraints for plastic flow in velocity discontinuities and constraints for plastic flow in the continuum.

- $n_E$ and $n_I$ are the total number of elements and interfaces, respectively.

- $\mathbf{P}_0$ is the vector of (equivalent nodal) permanent loads.
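As a minimal illustration of this problem class (a tiny hypothetical instance, not an actual FE assembly), such a linear program can be solved with an off-the-shelf solver:

```python
from scipy.optimize import linprog

# Variables: one free velocity-like unknown and two non-negative
# plastic multiplier rates, mimicking the structure of Eq. (5).
c = [-1.0, 2.0, 3.0]                # linear objective coefficients
A_eq = [[1.0, 0.0, 0.0],            # normalization of the external power
        [1.0, -1.0, -1.0]]          # toy coupling between velocity and rates
b_eq = [1.0, 0.0]
bounds = [(None, None), (0.0, None), (0.0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x, res.fun)
```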

POLYNOMIAL RESPONSE SURFACE (RS) MODELING

Given the vector $\mathbf{x}_i$ of analysis inputs for the $i$-th computer simulation, a quadratic polynomial RS model has the form:

$$
y_{p,i} = c_0 + \sum_{q=1}^{m} c_q\, x_{i,q} + \sum_{q=1}^{m} \sum_{r=q}^{m} c_{qr}\, x_{i,q}\, x_{i,r} \qquad (6)
$$

where $m$ is the total number of input variables (i.e. random material properties), $x_{i,q}$ is the value of the $q$-th input variable for the $i$-th computer run, and the $c_q$ are (unknown) coefficients.

Given $n$ independent observed analysis outputs $y_i$, $i = 1, \ldots, n$, the estimation problem can be formulated in compact matrix notation as:

$$\mathbf{y} \simeq \mathbf{X}\, \hat{\mathbf{c}} \qquad (7)$$

where $\mathbf{y}$ is the vector of observed outputs and $\mathbf{X}$ is the so-called design matrix, while the unique least-squares estimator of the unknown coefficients is:

$$\hat{\mathbf{c}} = \left(\mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{X}^T \mathbf{y} \qquad (8)$$
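A minimal sketch of Eqs. (6)-(8) for two input variables follows; the function names are illustrative, and a numerically stable least-squares solver is used instead of forming the normal-equations inverse explicitly.

```python
import numpy as np

def design_matrix(x1, x2):
    # Columns: 1, x1, x2, x1^2, x1*x2, x2^2 (quadratic model, m = 2).
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1**2, x1 * x2, x2**2])

def fit_rs(x1, x2, y):
    # Least-squares estimate of the coefficients, Eq. (8).
    c_hat, *_ = np.linalg.lstsq(design_matrix(x1, x2), y, rcond=None)
    return c_hat

def predict_rs(c_hat, x1, x2):
    # RS prediction y_p, Eq. (6).
    return design_matrix(x1, x2) @ c_hat
```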

A RS model is fitted on a set $y_{c,k}$, $k = 1, \ldots, n_c$, of observed analysis outputs, derived from computer calculations on a set of optimally selected values $\mathbf{x}_{c,k}$ for the relevant input variables. Different design of experiments strategies can be used to sample the input variables for RS fitting [27].

Several studies in the literature, in trying to discover which design of experiments would provide the best accuracy, found that no method can be recognized as the best one [22, 23] and one should always refer to the specific problem under study [22, 24], although according to Simpson et al. [23] the basic requirement for an experimental design in deterministic computer analyses is space filling.

In the present work, the input design points are generated by two different sampling schemes: the classical MC method and the LH technique, the latter being considered for its relative simplicity with respect to other existing techniques, and because it provides a better space filling compared to the MC method.

Experimental design: Monte Carlo and Latin Hypercube method

In the classical MC method, the input design random variables are selected at random from their domains of definition, hence the sampled values have a higher probability of occurring in regions with higher probabilities (e.g. close to the mean value of the distribution).

To assure a more uniform sampling from the interval of each random variable, the LH technique has been proposed as an improvement to the classical MC method [17, 19]. In the LH method, to obtain a sample of size $n_c$ for $m$ input random variables $\mathbf{x} = (x_1, x_2, \ldots, x_m)$, we first divide the definition domain of each variable into $n_c$ disjoint intervals of equal probability, according to the corresponding probability distribution of each variable. Then, we extract a sample value from each interval, leading to a set of $n_c$ sampled values for each variable. Compared to the classical MC method, the LH design assures that each of the input random variables has all portions of its range represented, thus providing a more uniform sampling of the input design space. Finally, the $n_c$ samples for vector $\mathbf{x}$ are obtained by combining all the values previously sampled according to $n_c$ random permutations. Special techniques are used to impose desired correlations among several variables, see Refs. [17, 28, 29].

VALIDATION OF FITTED RS MODEL

To quantify the prediction accuracy of an estimated RS model, we calculate at $n$ specified validation points the difference between the true analysis output $y_i$ from a direct computer simulation and the value $y_{p,i}$ predicted by the polynomial RS model. Following some existing references [22, 24, 30], we refer to the Root Mean Square Error:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(y_i - y_{p,i}\right)^2} \qquad (9)$$

to the Mean Absolute Error:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \frac{\left|y_i - y_{p,i}\right|}{y_i} \qquad (10)$$

and to the Maximum Absolute Relative Error:

$$\mathrm{MARE} = \max_{i=1,\ldots,n} \left( \frac{\left|y_i - y_{p,i}\right|}{y_i} \right) \qquad (11)$$

Note that while RMSE and MAE provide an average measure of the overall and local prediction accuracy, respectively, MARE quantifies the absolute worst relative prediction error.
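These error measures translate directly into code; the following sketch assumes arrays of observed outputs y and RS predictions y_p at the validation points.

```python
import numpy as np

def rs_errors(y, y_p):
    rmse = np.sqrt(np.mean((y - y_p) ** 2))   # Eq. (9)
    mae = np.mean(np.abs(y - y_p) / y)        # Eq. (10)
    mare = np.max(np.abs(y - y_p) / y)        # Eq. (11)
    err_pct = 100.0 * (y - y_p) / y           # Eq. (12)
    return rmse, mae, mare, err_pct
```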

In order to capture the trend of the prediction error as a function of the observed output values $y_i$, in addition to the errors introduced above we also consider the percentage relative error:

$$\mathrm{err}_i\,(\%) = 100 \left( \frac{y_i - \hat{y}_i}{y_i} \right) \qquad (12)$$

For each fitted RS model a value of the RMSE, MAE and MARE error metrics is obtained, as well as a set of percentage relative errors, which depend on the particular set of MC and LH calibration points used to construct the RS model, and also on the particular set of validation points considered.

In fact, due to the random sampling technique adopted to generate the calibration points, different small replicated MC or LH samples lead, in general, to different polynomial RS models (even with the same degree). On the other hand, the authors found that the use of MC and LH sampling performs better than the use of a simple regular input grid, as a consequence of the over-fitting of the polynomial approximation, see Ref. [20].

Fig. (5). Perpendicular masonry walls subjected to horizontal action. Geometry (-a) and mesh used (-b). A typical deformed shape at collapse obtained by means of the limit analysis FE procedure adopted is also reported ($P_N$ is the in-plane plastic dissipation evaluated at node N, and N is the node of maximum dissipation).

NUMERICAL SIMULATIONS: AN EXAMPLE OF TECHNICAL INTEREST

A meaningful example of practical interest is examined in this section with the aim of illustrating the use of polynomial RS models as a computationally inexpensive alternative to direct computer calculations in extensive MC simulations, used to estimate the collapse load probability distribution of masonry buildings having random material properties.

Let us consider two perpendicular masonry walls of dimensions $L_1 = 500$ cm, $L_2 = 300$ cm, $H = 300$ cm, $t = 30$ cm and with perfect interlocking, as shown in Fig. (5). Such walls are subjected to a constant vertical load, due to typical dead and live loads, assumed equal to $P_0 = 120$ N/mm, and to an increasing horizontal load depending on a load multiplier $y = \Lambda$, simulating an equivalent static seismic action.

Several homogenized limit analysis FE simulations are performed on the structure by means of the triangular discretization shown in Fig. (5-b). Masonry is supposed to be constituted by Italian common bricks of dimensions 250 mm × 120 mm × 55 mm disposed in running bond texture, with a joint thickness equal to 10 mm.

We assume for the joints a Mohr-Coulomb failure criterion with cohesion $c$ and friction angle $\tan(\Phi)$, both modeled as independent normally distributed random variables, with mean and standard deviation (listed as bold numbers in Table 1) deduced from some experimental data reported in Ref. [15].

As a first step, a polynomial RS model is fitted on a small set of $n_c$ observed collapse load values $y_{c,k}$, obtained from input values $\mathbf{x}_{c,k}$ sampled by either the MC method or the LH technique, see Fig. (1). More precisely, for each input pair $(c_i, \tan(\Phi)_i)$, a 3D homogenized limit analysis is performed, so obtaining a set of $n_c$ collapse loads $y_i$ used to construct a polynomial RS model. As suggested in [21], the number $n_c$ of calibration points can be correlated to the number of input random variables; in our example, $n_c$ equals 20, since only two input variables are taken into account. The independence among input random variables for LH samples is imposed by the procedure described in Ref. [29]. An example of a fitted RS model with the set of LH calibration points is shown in Fig. (6).

As a second step, the fitted RS model is used in place of direct computer simulations in extensive MC simulations with large sample sizes (3000). The size of the large MC samples was chosen as a necessary compromise between the high computational cost of all numerical simulations and the required accuracy in the estimate of the collapse load probability distribution, as well as considering 5% as a sufficient value for the failure probability in technical applications. On the other hand, considering the large variability of input material properties due to usually scarce experimental tests, a higher sample size would unnecessarily increase the overall computational cost without improving the accuracy of the tail estimates of the collapse load probability distribution.

As can be seen in Table 1, there is quite good agreement between the theoretical mean and standard deviation and those calculated on each large MC sample, which confirms the correctness of the sizes adopted for the large MC samples.

In the third step, the $n$ collapse load values $y_i$, $i = 1, \ldots, n$, resulting from the large MC simulations are used to compute the empirical cumulative distribution [31]:


$$\hat{P}(y) = \frac{1}{n} \sum_{i=1}^{n} u\left(y - y_i\right) \qquad (13)$$

(where $u(x) = 1$ for $x > 0$ and zero elsewhere is an indicator function), which is an estimator of the collapse load probability distribution $P(y)$. Note that $\hat{P}(y)$ depends on the particular MC outputs $y_i$ considered, hence it is affected by a statistical uncertainty. Replicated MC simulations are then used to estimate a mean empirical cumulative distribution [17]:

$$\bar{P}(y) = \frac{1}{n_r} \sum_{r=1}^{n_r} \hat{P}_r(y) \qquad (14)$$

$n_r$ being the number of replicates (in our example, $n_r = 3$).

Table 1. Theoretical Mean and Standard Deviation (Bold Numbers) of the Random Input Parameters Compared with Those Calculated on the Three Large MC Samples

                              Cohesion [N/mm2]    Tangent of Friction Angle [-]
                              x1 = c              x2 = tan(Φ)
Mean value (a)                0.1457              0.75
  MC sample 1                 0.143               0.753
  MC sample 2                 0.143               0.752
  MC sample 3                 0.142               0.751
Standard deviation (a)        0.034               0.045
  MC sample 1                 0.036               0.048
  MC sample 2                 0.037               0.046
  MC sample 3                 0.036               0.048

(a) The theoretical mean and standard deviation (bold numbers) are estimated from the experimental data reported in [15].
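Returning to Eqs. (13)-(14), a minimal sketch of the empirical and mean cumulative distributions is given below, with hypothetical normal samples standing in for the collapse loads returned by the simulations.

```python
import numpy as np

def ecdf(sample, y_grid):
    # P_hat(y) = (1/n) * #{y_i <= y}, evaluated on a grid of y values.
    return np.searchsorted(np.sort(sample), y_grid, side="right") / len(sample)

rng = np.random.default_rng(3)
y_grid = np.linspace(0.0, 80.0, 200)
# Three replicated samples (n_r = 3), hypothetical stand-ins for the MC runs.
replicates = [rng.normal(40.0, 8.0, 3000) for _ in range(3)]
mean_cdf = np.mean([ecdf(s, y_grid) for s in replicates], axis=0)
```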

Following the scheme of Fig. (1), direct computer calculations on the same large MC samples are also performed, in order to compare the collapse load empirical distribution obtained via direct simulations with the one calculated via the fitted RS models. It is worth noting that each of the 3000-sample MC simulations required 37 h 27 min to be performed on an Intel Pentium 3 GHz PC equipped with 1 GB RAM, whereas the construction of the polynomial RS model and the evaluation of the predicted outputs required only 12 min.

In Fig. (7) we show the variability of the collapse load empirical cumulative distribution obtained from the three 3000-sample MC simulations via direct computer calculations, compared with the mean probability distribution.

Fig. (7). Comparison of the empirical cumulative distribution from three replicated Monte Carlo samples of size 3000 with the mean empirical cumulative distribution.

Fig. (6). Polynomial RS model with the 20 LH calibration points.

[Fig. (7) plots P(y) on a logarithmic scale (10^-3 to 10^0) versus y = Λ [kN/m] from 0 to 80; curves: mean cumulative distribution and the three sample cumulative distributions.]

Limit Analysis of Masonry Structures Accounting for Uncertainties The Open Civil Engineering Journal, 2008, Volume 2 61

On the other hand, in Fig. (8) the probability distributions provided by direct computer simulations and by the estimated RS models are represented. Furthermore, in the same figure, the mean percentage prediction errors $\mathrm{err}_i\,(\%)$ for the RS models are depicted. As can be noticed from Fig. (8), reported in logarithmic scale along the y-axis in order to amplify the phenomenon, the RS models are able to give reliable results approximately for probabilities greater than 1%, thus demonstrating that the proposed approach can be used for practical applications.

Fig. (8). Comparison of the empirical distribution functions obtained in large Monte Carlo simulations via direct computer simulations and via polynomial RS models. The theoretical cumulative distribution is also shown (dashed line). The average relative percentage error calculated on 10 replicated RS models is also shown.

As a rule, the probability distributions from RS models tend to be lower than those from direct MC simulations. This discrepancy for low collapse load values is easily justifiable by considering that, when the collapse load tends to zero (i.e. when the cohesion and friction angle of the mortar joints are very low), the least-squares approximation used to calibrate the RS model tends to assign negligible optimization weights to low values of the collapse load (at least if such weights are compared to those relative to high values of $y$). On the other hand, the prediction error becomes small (i.e. below 10%) for cumulated probabilities equal to or greater than 1%, which seems acceptable in practical applications.

As a final step, a comparison of the prediction accuracy of the RS models constructed on MC and LH calibration points is also examined in terms of the error measures introduced previously. As done in Ref. [21], the same three replicated large MC sets previously used to estimate the output probability distribution are used as validation points. To account for variability in RS model fitting, the mean and variance of all error measures are calculated on 10 independently replicated MC-RS and LH-RS polynomials. Note that while the mean of the errors indicates the average accuracy of a RS model, the variance illustrates the variability (i.e. robustness) of the prediction accuracy [21].

Fig. (9). Comparison of the mean of the fitting errors calculated on 10 replicated polynomial RS models constructed from MC and LH design points.

As can be seen in Figs. (9) and (10), it seems quite clear that the overall prediction accuracy of LH-RS models is almost systematically better than that of MC-RS models. In particular, LH-RS models show both a better overall accuracy (in terms of RMSE and MAE) and a better local fitting performance, quantified by MARE. In addition, LH-RS models show a better accuracy for low $y$ values, corresponding to the left-end tail of the collapse load distribution, as confirmed by the slightly lower mean percentage error observed for low $y$ values (see top of Fig. (8)).

Fig. (10). Comparison of the variance of the fitting errors calculated on 10 replicated polynomial RS models constructed from MC and LH design points.

[Figs. (9) and (10) show grouped bar charts of RMSE, MAE and MARE (mean and variance of the errors, respectively, with some groups scaled by x10/x100 and x1E4/x1E6) on the three validation sets MC1-MC3, for RS models from MC points versus RS models from LH points. Fig. (8) shows the mean relative error (%) of the MC-RS and LH-RS models and the empirical CDFs P(y) versus y = Λ [kN/m] (logarithmic scale) from computer simulations and from the MC-RS / LH-RS models.]


REFERENCES

[1] G. Milani, P.B. Lourenço, and A. Tralli, “Homogenised limit analysis of masonry walls. Part II: structural examples”, Comput. Struct., vol. 84, pp. 181-195, January 2006.
[2] G. Milani, P.B. Lourenço, and A. Tralli, “Homogenization approach for the limit analysis of out-of-plane loaded masonry walls”, J. Struct. Eng., vol. 132(10), pp. 1650-1663, October 2006.
[3] G. Milani, P.B. Lourenço, and A. Tralli, “3D homogenized limit analysis of masonry buildings under horizontal loads”, Eng. Struct., vol. 29(11), pp. 3134-3148, November 2007.
[4] P.C. Olsen, “Rigid-plastic finite element analysis of steel plates, structural girders and connections”, Comput. Meth. Appl. Mech. Eng., vol. 191, pp. 761-781, December 2001.
[5] S.W. Sloan, and P.W. Kleeman, “Upper bound limit analysis using discontinuous velocity fields”, Comput. Meth. Appl. Mech. Eng., vol. 127(1-4), pp. 293-314, November 1995.
[6] P.C. Olsen, “Evaluation of triangular elements in rigid-plastic finite element analysis of reinforced concrete”, Comput. Meth. Appl. Mech. Eng., vol. 179, pp. 1-17, August 1999.
[7] D.J. Sutcliffe, H.S. Yu, and A.W. Page, “Lower bound limit analysis of unreinforced masonry shear walls”, Comput. Struct., vol. 79, pp. 1295-1312, June 2001.
[8] H.R. Lotfi, and B.P. Shing, “Interface model applied to fracture of masonry structures”, J. Struct. Eng., vol. 120, pp. 63-80, January 1994.
[9] P.B. Lourenço, R. de Borst, and J.G. Rots, “A plane stress softening plasticity model for orthotropic materials”, Int. J. Numer. Methods Eng., vol. 40, pp. 4033-4057, November 1997.
[10] G. Milani, P.B. Lourenço, and A. Tralli, “Homogenised limit analysis of masonry walls. Part I: failure surfaces”, Comput. Struct., vol. 84, pp. 166-180, January 2006.
[11] G. Magenes, and G.M. Calvi, “In-plane seismic response of brick masonry walls”, Earth. Eng. Struct. Dynamics, vol. 26, pp. 1091-1112, November 1997.
[12] G. Milani, F.A. Zuccarello, R.S. Olivito, and A. Tralli, “Heterogeneous upper-bound finite element limit analysis of masonry walls out-of-plane loaded”, Comput. Mech., vol. 40(6), pp. 911-931, November 2007.
[13] P.B. Lourenço, “Anisotropic softening model for masonry plates and shells”, J. Struct. Eng., vol. 126(9), pp. 1008-1016, September 2000.
[14] A. Brencich, and L. Gambarotta, “Mechanical response of solid clay brickwork under eccentric loading. Part I: Unreinforced masonry”, Mater. Struct., vol. 38(2), pp. 257-266, March 2005.
[15] R. van der Pluijm, Out-of-plane bending of masonry. Behaviour and strength, PhD Thesis, Eindhoven University of Technology, 1999.
[16] M. Kobiyama, “The reduction of computational time of the Monte Carlo method, applied to radiative heat transfer with variable properties, by using a fixed properties loop”, Comput. Mech., vol. 5(1), pp. 33-39, January 1989.
[17] J.C. Helton, and F.J. Davies, “Latin Hypercube and the propagation of the uncertainty in analyses of complex systems”, Reliab. Eng. Syst. Saf., vol. 81, pp. 23-69, July 2003.
[18] P. Hradil, J. Žák, D. Novák, and D. Lavicky, “Stochastic analysis of historical masonry structures”, In: Historical Constructions, P.B. Lourenço and P. Roca (Eds.), Guimarães, Portugal, November 2001.
[19] M.D. McKay, W.J. Conover, and R.J. Beckman, “A comparison of three methods for selecting values of input variables in the analysis of output from a computer code”, Technometrics, vol. 42(1), pp. 55-61, February 2000.
[20] A.A. Giunta, and L.T. Watson, “A comparison of approximation modeling techniques: polynomial versus interpolating models”, In: 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, St. Louis, MO, 1998, Paper AIAA: 1998-4758.
[21] R. Jin, W. Chen, and T.W. Simpson, “Comparative studies of metamodelling techniques under multiple modeling criteria”, Struct. Multidiscip. Optim., vol. 23, pp. 1-13, December 2001.
[22] T.W. Simpson, D.K.J. Lin, and W. Chen, “Sampling strategies for computer experiments: design and analysis”, Int. J. Rel. Appl., vol. 2(3), pp. 209-240, January 2002.
[23] T.W. Simpson, J.D. Peplinski, P.N. Koch, and J.K. Allen, “Metamodels for computer-based engineering design: survey and recommendations”, Eng. Comput., vol. 17, pp. 129-150, July 2001.
[24] L.P. Swiler, R. Slepoy, and A.A. Giunta, “Evaluation of sampling methods in constructing response surface approximations”, In: 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Newport, Rhode Island, 2006.
[25] P. Suquet, “Analyse limite et homogeneisation”, Comptes Rendus de l'Academie des Sciences, Series IIB, Mechanics, vol. 296, pp. 1355-1358, 1983 [in French].
[26] K. Krabbenhoft, A.V. Lyamin, M. Hjiaj, and S.W. Sloan, “A new discontinuous upper bound limit analysis formulation”, Int. J. Numer. Methods Eng., vol. 63, pp. 1069-1088, June 2005.
[27] A.A. Giunta, S.F. Wojtkiewicz, and M.S. Eldred, “Overview of modern design of experiments methods for computational simulations”, In: 41st AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, 2003, Paper AIAA: 2003-0649.
[28] A. Florian, “An efficient sampling scheme: Updated Latin Hypercube Sampling”, Probab. Eng. Mech., vol. 7, pp. 123-130, April-June 1992.
[29] D.E. Huntington, and C.S. Lyrintzis, “Improvements to and limitations of Latin Hypercube sampling”, Probab. Eng. Mech., vol. 13(4), pp. 245-253, October 1998.
[30] P. Ramu, N.H. Kim, and R.T. Haftka, “Error amplification in failure probability estimates due to small errors in response surface”, SAE paper 2007-01-0549, 2007.
[31] A. Mood, F. Graybill, and D. Boes, Introduction to the Theory of Statistics, McGraw-Hill, 1974.

Received: March 30, 2008 Revised: April 22, 2008 Accepted: April 22, 2008

© Benasciutti and Milani; Licensee Bentham Open.

This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.5/), which permits unrestrictive use, distribution, and reproduction in any medium, provided the original work is properly cited.