
    Methods for quantifying the uncertainty of production forecasts

    a comparative study

F.J.T. Floris, Delft Geoscience Research Centre¹, P.O. Box 5028, 2600 GA Delft, The Netherlands

M.D. Bush, BPAmoco, Chertsey Road, Sunbury-on-Thames, Middlesex, TW17 7LN, UK

M. Cuypers, ELF Exploration UK, 30 Buckingham Gate, London SW1E 6NN, UK

F. Roggero, Institut Français du Pétrole, 2 Avenue President Angot, Helioparc, 64000 Pau, France

A-R. Syversveen, Norwegian Computing Centre, P.O. Box 114 Blindern, Gaustadalleen 23, 0314 Oslo,

    Norway

    Abstract

    This paper presents a comparison study in which several partners have applied methods to quantify uncertainty on

    production forecasts for reservoir models conditioned to both static and dynamic well data. A synthetic case study

    was set up, based on a real field case. All partners received well porosity/permeability data and historic production

    data. Noise was added to both data types. A geological description was given to guide the parameterization of the

reservoir model. Partners were asked to condition their reservoir models to these data and to estimate the probability distribution of total field production at the end of the forecast period. The various approaches taken by the partners were categorized. Results showed that for a significant number of approaches the truth case was outside the predicted range. The choice of parameterization and of initial reservoir models had the largest influence on the prediction range,

    whereas the choice of reservoir simulator introduced a bias in the predicted range.

    Introduction

Traditionally, reservoir development decisions are based on a production forecast from a single history-matched reservoir model. To assess risk, a few runs are made to check the sensitivity of the forecast.

    However, formal quantification of risk requires full sampling of the forecast probability density function

for the quantity to be forecast.

Several papers have appeared on the generation of full pdfs in the geosciences. The reduction in

    uncertainty of hydrocarbon pore volume due to structural and porosity / permeability uncertainty is

    studied in Berteig et al. (1988). Methods for quantifying the pdf of hydrocarbon volumes in place due to

    top structure uncertainty are given in Abrahamsen et al. (1992), Samson et al. (1996) and Floris &

    Peersmann (1998). Uncertainty in production from fields without history matching is described in Lia et

    al. (1997). Landa & Horne (1997) investigate the reduction in uncertainty on reservoir description for a

    synthetic case where saturation maps from 4D seismic have been included as history data.

    In this paper, we focus on production forecast uncertainty quantification (PUNQ) methods. During the

execution of the EC sponsored PUNQ project (Bos, 2000) a number of new methods have been developed, which have been published previously. The aim of this paper is to report on an integrated

    case study to which all methods were applied.

    Work flow for uncertainty quantification

    The general workflow contains a number of standard components. Instead of organizing this paper in

    terms of the separate work flows, we choose to first categorize these workflow components as building

    blocks and then build the work flows from them. The general outline of the work flows is given in Figure

    1.

¹ The Delft Geoscience Research Centre is a collaboration between Delft University of Technology and the Netherlands Organisation for Applied Scientific Research TNO.


    Parameterization

    In the Bayesian inversion approach, the key is to condition reservoir models to all available data. This

    conditioning is done through parameters present in the reservoir model. In this study we have

    concentrated on the spatial distribution of porosity and permeability. Their spatial distribution can be

parameterized in various ways. The parameterization approaches used in this study are explained below.

    Grid blocks

    The most general approach is to consider all grid block values as independent parameters. In this model

the porosity and the directional permeabilities for all 1761 active grid blocks are used as parameters,

    resulting in about 5000 parameters. Through such an approach no preconceived idea about the geology is

    incorporated in the reservoir model. All knowledge must be inferred from the well and production data

    only. The main problems with this approach are the large number of parameters and the lack of spatial

    continuity present in the resulting reservoir models.

Regions

The use of homogeneous regions is a way to reduce the number of parameters. Through a proper selection of regions, some geological or reservoir engineering concepts can be incorporated in the reservoir model. Regions can either be used to follow geological layers or genetic units within layers, or can be used to characterize drainage areas of wells. With regions, fewer parameters are needed to characterize the reservoir model, but preconceived ideas about the characteristics of regions may be inappropriate. The assumption of homogeneity within a region may not be justifiable and may lead to abrupt changes between the boundaries of regions. The region approach has, however, been the standard approach in reservoir history matching.
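As a minimal illustration of the region idea (all values hypothetical), expanding a few region parameters onto the grid is a simple lookup, and the abrupt jumps at region boundaries are directly visible:

```python
import numpy as np

# Hypothetical 1D "grid" of 10 cells split into 3 regions; each region
# carries a single permeability parameter, so 3 parameters describe
# all 10 cells instead of 10 independent values.
region_of_cell = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])
region_perm = np.array([120.0, 45.0, 300.0])  # mD, one value per region

# Expanding the region parameters onto the grid is a lookup.
perm_grid = region_perm[region_of_cell]

# The field is homogeneous within each region, with abrupt jumps at
# the region boundaries (cells 2->3 and 6->7), as noted in the text.
```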

    Pilot points

With the advent of geostatistics, a new class of spatial parameterization approaches has emerged. They are known as pilot point or master point approaches. The developments for characterizing aquifers in a hydrogeological context were started by de Marsily et al. (1984) with an application to well interference testing. They used only kriging solutions to the transmissivity fields. The approach was extended to the use of Gaussian Random Fields by Ramarao et al. (1995) and Gomez-Hernandez et al. (1996). Floris (1996) describes the application to hydrocarbon reservoir characterization involving multi-phase production data. A number of pilot points are used to build smooth, spatially correlated corrections to porosity / permeability fields. The method produces continuously varying heterogeneous reservoir models, controlled by a limited number of pilot points. The pilot points are traditionally pre-fixed, but a method for selecting a limited number of pilot points based on a combination of geological uncertainty and sensitivity to production data has also been developed (Cuypers et al., 1998).
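The mechanics can be sketched as follows; a Gaussian kernel stands in for the kriging step of the actual method, and all locations and perturbation values are hypothetical:

```python
import numpy as np

def pilot_point_correction(xs, pilot_x, pilot_dz, length=0.2):
    """Smooth, spatially correlated correction field built from a few
    pilot-point perturbations. A normalized Gaussian kernel stands in
    for the kriging step of the actual pilot-point method."""
    w = np.exp(-0.5 * ((xs[:, None] - pilot_x[None, :]) / length) ** 2)
    w /= w.sum(axis=1, keepdims=True)      # interpolation weights
    return w @ pilot_dz                    # weighted perturbations

# Hypothetical 1D grid on [0, 1] with a homogeneous prior log-perm field.
xs = np.linspace(0.0, 1.0, 51)
base_logk = np.full_like(xs, 2.0)

# Two pilot points: raise log-perm near x = 0.2, lower it near x = 0.8.
corr = pilot_point_correction(xs, np.array([0.2, 0.8]), np.array([0.5, -0.5]))
logk = base_logk + corr                    # continuously varying field
```

Moving the two perturbation values is then enough to steer the whole field, which is what makes the parameterization compact.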

Global parameters

A final class of parameters comprises those that cannot be linked to a particular spatial location, called global or underlying parameters. Examples are stochastic parameters such as mean values, standard deviations or correlation lengths, or object parameters such as channel width and height. Also, for property fields which are built from a weighted sum of basis functions, the weighting coefficients generally are not localized and thus are global parameters. In the adaptive chain approach by Holden (1998) and the gradual deformation method by Hu (2000), the coefficients in the linear combination of property fields used to generate a new field are a form of global parameters.
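The gradual deformation idea, for instance, can be illustrated in a few lines: a single global parameter theta steers a whole field while preserving its Gaussian statistics (a sketch of the principle, not the implementation of Hu, 2000):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent standard Gaussian realizations of a property field.
m1 = rng.standard_normal(100_000)
m2 = rng.standard_normal(100_000)

def gradual_deformation(m1, m2, theta):
    """Linear combination of two Gaussian fields. Because
    cos(theta)^2 + sin(theta)^2 = 1, the result is again standard
    Gaussian for every theta, so the single global parameter theta
    can steer the whole field during optimization."""
    return np.cos(theta) * m1 + np.sin(theta) * m2

m = gradual_deformation(m1, m2, 0.7)  # mean ~ 0, std ~ 1 preserved
```

Optimizing over theta thus explores a continuous family of geologically consistent fields at the cost of one scalar parameter.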


    Objective function

    In order to measure the extent to which a reservoir model is conditioned to the available history data, a

    measure must be defined to quantify the mismatch between the simulated response of the reservoir model

    and the history data. This measure is called the objective function.

    Least Squares norm

    Traditionally, the goal of history matching is to define a reservoir model that optimally reproduces the

    observed production history of a field. The mismatch between the simulated production data and the

observed history data can be quantified using a Sum of Squares norm. For each quantity, e.g. Bottom Hole Pressure (BHP), Gas Oil Ratio (GOR) and Water Cut (WCT), and for each time step t_k, the difference between the simulated value, o_sim, and the observed value, o_obs, can be calculated. Squaring these differences and summing them up results in the SoS norm. The number of data available for each quantity may differ; for example, BHP will be measured more frequently than GOR or WCT when using permanent pressure gauges. To avoid the more frequently measured observations overshadowing the significance of other observations, the SoS value can be divided by the total number of values available,

  SoS(o_obs, p) = 1/(n_w n_p n_t) · Σ_{i,j,k} w_ijk [ (o_ij^obs(t_k) − o_ij^sim(t_k; p)) / σ_ijk ]²    (1)

Subscripts i and j run over the wells and the production data types, k runs over the report times, and n_w, n_p, n_t are the respective numbers of samples. The symbol p denotes the parameters, σ_ijk denotes the model plus measurement error, and w_ijk denotes the extra weighting factors.

When the differences between simulated and historic responses are normalized by the measurement plus modeling error, a normalized SoS norm results. If all weights w equal unity, then a normalized SoS of 1 implies that on average the simulated data are within the error band around the historic data. Variation of the weights w can be used to assign more significance to particular production data, for example on reservoir engineering grounds.
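Equation (1) is straightforward to express in code. The sketch below uses hypothetical array shapes and checks the property stated above: simulated data lying exactly one error band from the observations give a normalized SoS of 1.

```python
import numpy as np

def normalized_sos(obs, sim, sigma, w):
    """Normalized Sum-of-Squares mismatch of equation (1). The arrays
    have shape (n_w, n_p, n_t): wells x data types x report times;
    sigma holds the model-plus-measurement errors and w the extra
    weighting factors."""
    n = obs.size  # n_w * n_p * n_t
    return np.sum(w * ((obs - sim) / sigma) ** 2) / n

# Hypothetical shapes: 6 wells, 3 data types (BHP, GOR, WCT), 10 times.
obs = np.zeros((6, 3, 10))
sigma = np.full_like(obs, 2.0)
sim = obs + sigma          # off by exactly one error band everywhere
w = np.ones_like(obs)      # unit weights
```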

    Likelihood function

The definition of a Bayesian likelihood function relies on the specification of a model for the uncertainty associated with the observed production. When the measurement plus modeling errors are assumed to be independently Gaussian distributed, the likelihood function f(o_obs | p) follows formally as,

  f(o_obs | p) = c · exp( −½ Σ_{i,j,k} [ (o_ij^obs(t_k) − o_ij^sim(t_k; p)) / σ_ijk ]² )    (2)

where c is a normalization constant. This likelihood function expresses how likely it is that the historic data can be explained by the reservoir model for which the likelihood holds. If it is low, the historic data are unlikely to come from the reservoir model. If it is high, then the reservoir model response fits the historic data well. Note that the Bayesian formalism does not allow subjective weighting of the mismatches as is done in the SoS objective function (both the weighting with the number of samples and the extra weights), unless this is formally brought in as a model assumption in the production uncertainty model.

    Posterior distribution

    It is generally believed that conditioning reservoir models on the least squares or likelihood function

    alone is mathematically ill-posed. Several solutions to the optimization problem may occur. Moreover,

    even for a single solution production data is often insensitive to some parameters (e.g. permeability in


    unswept areas) or only sensitive to a combination of parameters (e.g. harmonic mean of permeability

    data). Consequently, individual parameter values can be changed quite dramatically without deterioration

    of the history match. Note that parameter values, which are insensitive during the history matching

period, may become sensitive during the forecasting period. Through the Bayesian prior function, the ill-posedness of the optimization process can be reduced or resolved. The general formula for the Bayesian posterior distribution is given by

  f(p | o) = c · f(o | p) · f(p)    (3)

where f(o | p) is the likelihood function and f(p) is the prior function. When both the prior distributions on the parameters and the production uncertainty are assumed to be Gaussian, including the prior results in an extra Sum of Squares term (here denoted in vector notation),

  f(p | o) = c · exp( −½ [ Σ_{i,j,k} ( (o_ij^obs(t_k) − o_ij^sim(t_k; p)) / σ_ijk )² + (p − p̄)ᵀ C_p⁻¹ (p − p̄) ] )    (4)

where p̄ is the vector of expectations of p and C_p is the prior covariance matrix.
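For a toy linear forward model, the negative logarithm of posterior (4) can be evaluated as the sum of the data mismatch term and the prior term (a sketch with hypothetical numbers, not the study's model):

```python
import numpy as np

def neg_log_posterior(p, obs, sigma, simulate, p_mean, C_p):
    """Negative log of posterior (4), up to a constant: half the data
    mismatch plus half the prior term (p - p_mean)^T C_p^-1 (p - p_mean).
    `simulate` is any forward model mapping parameters to predictions."""
    resid = (obs - simulate(p)) / sigma
    dp = p - p_mean
    return 0.5 * (np.sum(resid ** 2) + dp @ np.linalg.solve(C_p, dp))

# Hypothetical linear toy model: two parameters, three observations.
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
simulate = lambda p: G @ p
obs = np.array([1.0, 2.0, 3.0])
sigma = np.ones(3)
p_mean = np.zeros(2)
C_p = np.eye(2)

# Matching the data exactly leaves only the prior penalty term.
val = neg_log_posterior(np.array([1.0, 2.0]), obs, sigma, simulate, p_mean, C_p)
```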

    Optimization algorithms

    In order to do the conditioning to production data, most approaches employ an optimization technique.

    One technique is to adjust the parameters manually and inspect the improvement on the match visually.

    However, here we have quantified the mismatch between observation and simulation data in terms of the

    objective function, allowing the use of optimization algorithms.

Gradient optimization

The most powerful tool for optimizing smooth functions is gradient optimization. Many variations on the theme of gradient optimization exist. The steepest descent technique follows the direction opposite to the gradient vector. The step size must be determined by, for example, a line search or a trust region approach. Conjugate gradients employ only mutually orthogonalized gradient directions. Levenberg-Marquardt switches from steepest descent to a Gauss-Newton approach, which also employs the second derivative, when the optimum is approached. The dog-leg method by Powell (1972) uses another combination of the steepest descent direction and the direction proposed by the Gauss-Newton method to further improve the optimization.

In general, for numerical simulators the gradient expressions are not available in closed form. Several methods exist to obtain the gradient information. The most straightforward approach is to calculate the sensitivity coefficients for each of the parameters by finite differences. Each iteration of the optimization process then requires N+1 simulation runs, where N is the number of parameters. This is a very time-consuming process if many parameters are present in the reservoir model. Using the Broyden (1965) technique, the gradient information can also be updated using only the evaluation of the optimization function for each new set of parameters. Thus, each iteration in the process requires only one extra simulation run. The gradient only needs to be initialized, requiring N extra up-front simulation runs. In recent years some reservoir simulators calculate the gradient directly, based on the finite difference equations present within the reservoir simulator. For Simulator 3, a reservoir simulator used in this study (see Table 2), the extra time needed to calculate the gradient with respect to a single parameter is equivalent to approximately 30 % of a traditional simulation run.
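The contrast between the two gradient strategies can be sketched on a toy quadratic objective standing in for the simulator: finite differences cost N extra evaluations, whereas a Broyden-type secant update reuses a single new function value:

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Finite-difference gradient: N extra evaluations of f."""
    fx = f(x)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def broyden_update(g, dx, df):
    """Secant-type (Broyden) update of a gradient estimate using only
    the new function value; afterwards g satisfies g @ dx == df."""
    return g + ((df - g @ dx) / (dx @ dx)) * dx

# Hypothetical smooth objective standing in for the simulator response.
f = lambda x: x[0] ** 2 + 3.0 * x[1] ** 2
x0 = np.array([1.0, 1.0])

g0 = fd_gradient(f, x0)                  # initialization: N extra runs
x1 = x0 + np.array([0.1, -0.05])         # one optimization step
g1 = broyden_update(g0, x1 - x0, f(x1) - f(x0))  # one extra run only
```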


    Genetic Algorithms

    The main problem with the gradient optimization technique is the danger of getting trapped in local

    minima. Two techniques that perform global optimization are simulated annealing (e.g. Hegstad et al,

    1994) and genetic algorithms (Goldberg, 1989). Tests have shown that for our case the SA and GA

    methods required approximately the same number of simulation runs to arrive at an optimal solution. In

    this work we have pursued the Genetic Algorithms, because they are easily parallelised and they can

    cope with multiple optima.

    In the Genetic Algorithm a population of random samples is evolved for a number of generations. During

    the evolution of the population, rules inspired by genetics are used to select the best fitting reservoir

    models. At the end of the optimization a population results which may consist of a number of optimal

    clusters.
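A bare-bones real-coded GA conveys the idea (this is a sketch, not the project's optimization code; the operators and settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def genetic_minimize(f, n_params, pop_size=40, n_gen=60,
                     lo=-2.0, hi=2.0, sigma=0.1):
    """Bare-bones real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. The final population may cluster around one or
    more optima."""
    pop = rng.uniform(lo, hi, size=(pop_size, n_params))
    for _ in range(n_gen):
        fit = np.array([f(ind) for ind in pop])
        children = []
        for _ in range(pop_size):
            i, j = rng.integers(pop_size, size=2)      # tournament 1
            p1 = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)      # tournament 2
            p2 = pop[i] if fit[i] < fit[j] else pop[j]
            a = rng.random()                           # blend crossover
            child = a * p1 + (1.0 - a) * p2
            child += rng.normal(0.0, sigma, n_params)  # mutation
            children.append(child)
        pop = np.array(children)
    fit = np.array([f(ind) for ind in pop])
    return pop[np.argmin(fit)], pop

# Hypothetical objective with its optimum at 0.5 in each of 3 parameters.
best, final_pop = genetic_minimize(lambda x: np.sum((x - 0.5) ** 2), 3)
```

Because every fitness evaluation is independent, the inner loop parallelises naturally over the population, which is the advantage noted above.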

Table 1. Naming convention in Uncertainty Quantification methods.

ML²                    | Maximum Likelihood value
MAP²                   | Maximum A Posteriori value
ML+                    | Maximum Likelihood value + local characterization of the likelihood function
MAP+                   | Maximum A Posteriori value + local characterization of the posterior distribution
multi-ML / multi-MAP   | multiple ML or MAP values using different initial models
multi-ML+ / multi-MAP+ | multiple ML or MAP values using different initial models + local characterization of the objective function around each peak
Oliver-prod            | Oliver approach using only samples from the production data in the objective function
Oliver-full            | Oliver approach using samples from both the prior and the production data in the objective function
Posterior sampling     | statistical sampling from the full posterior distribution

    Uncertainty quantification

Having defined the parameters to be used in the reservoir model and a way to minimize the objective function, a single conditioned reservoir model results. This model is called the ML (Maximum Likelihood) solution when only the sum of squares or the likelihood function is used in the objective function, or the MAP (Maximum A Posteriori) solution if the prior term is included. Production

    for the conditioned model can be forecast from the ML or MAP parameter values. The next step is to

    consider the approaches used in quantifying forecast uncertainty. Table 1 summarizes the naming

    convention.

    ML+/MAP+

Having obtained the ML/MAP solution, it is possible to locally characterize the objective function around this solution and transfer this information into forecast uncertainty. Such an approach is called an ML+/MAP+ approach. An example is local linearization of the posterior. The Scenario Test Method

    (STM) is another such approach (Roggero, 1997). In the STM, a search for extreme high and low

    forecasts is invoked, starting from the ML/MAP solution. As a constraint in the search, the objective

    function value may not drop below a threshold value. The lower the threshold value is set, the more

    extreme forecasts may be produced. This method is believed to result in a characterization of the pdf

envelope around a single ML/MAP solution. A third approach is the use of a Genetic Algorithm. As detailed before, the GA results in a population of reservoir models. Near every optimum a set of individuals may be present, which contains information about the posterior locally around that optimum.

² These methods give single estimates and thus do not quantify uncertainty. They are included for naming convention only.

    Multi-ML/Multi-MAP

Another class of approaches starts from multiple initial reservoir models. This approach is termed multi-ML or multi-MAP, depending on which objective function is optimized. If the objective function is truly multi-modal, then different conditioned models are expected, which will result in different forecasts.

    During the PUNQ project, a GA optimization code used by BPAmoco has been adapted, such that

    starting from a single initial population the final population may be centered around a number of distinct

    optima. Thus, with the resulting GA an initial random population may zoom into a number of distinct

    optima.

    Multi-ML+/multi-MAP+

    These approaches are a combination of the previous two classes, where multiple ML/MAP solutions are

    found and local characterization is performed.

    Posterior sampling

    The above multi-ML+/multi-MAP+ approaches only give a local characterization of the objective

    function around a number of optima. When the space between the optima carries a significant

    probability, this approach gives a restricted view of the possible reservoir models, and consequently may

    result in an underestimation of the range of uncertainty as well as a severe bias in the forecasted mean.

    Only through sampling of the complete posterior distribution can the full uncertainty be quantified. A

    statistically correct way of sampling from the posterior distribution is obtained by using the class of

Markov-Chain Monte-Carlo (MCMC) techniques (Hegstad & Omre, 1997). A straightforward MCMC technique is one where reservoir models are repeatedly proposed from the prior model. After reservoir simulation, the likelihood for the reservoir model is determined. This likelihood is used as a weighting factor for the reservoir model forecast. The drawback of this technique is that many prior samples may be needed before a reservoir model with a reasonable likelihood occurs, each requiring a time-consuming reservoir simulation run.

    In order to improve this, a sequence of linked reservoir models can be generated, converging to samples

    from the posterior distribution. Each new model is generated by making limited alterations to the current

    model, creating a Markov-Chain. Still many samples may be needed in order to generate a useful number

    of posterior samples. Further improvement can be gained by letting the proposed new model depend on a

    set of previous models. To implement this, the GA approach can be used to select parent models and

    generate child realizations from them. This leads to a new version of MCMC termed Adaptive Genetic

    MCMC.
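The basic Markov-Chain mechanism can be sketched for a one-parameter toy posterior; each proposal is a limited alteration of the current model, accepted with the usual Metropolis rule (illustrative only, not the Adaptive Genetic variant):

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis(log_post, p0, n_steps=20_000, step=0.5):
    """Plain Metropolis chain: each proposal is a limited alteration of
    the current model, accepted with probability min(1, posterior ratio)."""
    p, lp = p0, log_post(p0)
    chain = np.empty(n_steps)
    for t in range(n_steps):
        q = p + rng.normal(0.0, step)        # propose a small change
        lq = log_post(q)
        if np.log(rng.random()) < lq - lp:   # accept / reject
            p, lp = q, lq
        chain[t] = p
    return chain

# Hypothetical one-parameter posterior: Gaussian, mean 1.0, std 0.5.
log_post = lambda p: -0.5 * ((p - 1.0) / 0.5) ** 2
chain = metropolis(log_post, p0=0.0)
samples = chain[5_000:]                      # discard burn-in
```

In the reservoir setting, each call of log_post hides a full simulation run, which is why reducing the chain length matters so much.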

    Oliver

In the approach suggested by Oliver et al. (1996), the advantages of the previous approaches are merged. The approach aims at sampling from the complete posterior distribution, but uses an optimization technique to reduce the number of reservoir simulation runs needed. In the approach, a sample is drawn

    from the prior reservoir model. Concurrently, a sample is also drawn from the production data. This

    production sampling is done because the production data contains observation errors and the reservoir

    model contains modeling error. Pairs of prior reservoir samples and production samples are subsequently

    history matched. The matching criterion is formed by both the mismatch between the production sample

    and the simulated production data and the deviation of the reservoir model from the sampled prior

    reservoir model used as starting point for the optimization. The latter term regularizes the ill-posed

    character of the optimization problem. It can be proved that this approach indeed leads to a correct

    sampling of the posterior distribution for Gaussian models for the reservoir geology and linear models for


the fluid flow. Oliver et al. (1996) do show that for a well test model, which is non-linear, the approach

    still compares well with results from MCMC sampling.

Application of Oliver's approach to full-field reservoir engineering problems can be found in Zhan Wu (1998) and Floris & Bos (1998). For such multi-phase problems there is no proof that the approach

    correctly samples from the posterior distribution.
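For the linear-Gaussian case where the proof holds, the Oliver approach reduces to a few lines: perturb both the prior model and the data, then solve the regularized least-squares problem. The scalar toy below (hypothetical numbers) reproduces the analytic posterior:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar linear-Gaussian toy: d = G m + noise. Numbers are hypothetical.
G = np.array([[1.0], [2.0]])
sigma = 0.5                       # data error std
m_prior, var_prior = 0.0, 1.0     # Gaussian prior on the one parameter
d_obs = np.array([1.0, 2.2])

# Analytic posterior, available because the model is linear-Gaussian.
prec_post = (G.T @ G)[0, 0] / sigma**2 + 1.0 / var_prior
var_post = 1.0 / prec_post
mean_post = var_post * ((G.T @ d_obs)[0] / sigma**2 + m_prior / var_prior)

def rml_sample():
    """One Oliver-style sample: perturb both the prior model and the
    data, then minimize |d* - G m|^2 / sigma^2 + (m - m*)^2 / var_prior,
    which has a closed-form minimizer in this linear case."""
    m_star = m_prior + np.sqrt(var_prior) * rng.standard_normal()
    d_star = d_obs + sigma * rng.standard_normal(2)
    return ((G.T @ d_star)[0] / sigma**2 + m_star / var_prior) / prec_post

samples = np.array([rml_sample() for _ in range(4_000)])
```

Each sample costs one optimization instead of a long chain; for non-linear multi-phase flow the minimization must be done numerically and, as noted above, the sampling is then only approximate.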

    PUNQ-S3 truth case description

    The PUNQ-S3 case has been taken from a reservoir engineering study on a real field performed by one of

    the industrial partners in the PUNQ project. It was qualified as a small-size industrial reservoir

    engineering model. The model contains 19x28x5 grid blocks, of which 1761 blocks are active. A top

    structure map of the field is shown in Figure 2. The field is bounded to the east and south by a fault, and

    by a strong aquifer to the north and west. A small gas cap is located in the center of the dome shaped

    structure. The field initially had six production wells located around the gas oil contact. Due to the strong

    aquifer, there are no injection wells. The geometry of the field has been modeled using corner-point

geometry.

The porosity / permeability fields were regenerated in order to have more control over the underlying

    geological / geostatistical model. The corresponding geological description is given in Appendix A. A

    geostatistical model based on Gaussian Random Fields has been used to generate the porosity /

    permeability fields. The fields were generated independently for each of the five layers. Geostatistical

    parameters, such as means and variograms were chosen to be as consistent as possible with the

    geological model. Collocated co-simulation was used to correlate the porosity and permeability fields

    statistically within each layer. To generate the fields, the Fortran program SGCOSIM from Stanford

    University was used. It is part of an extension to the GSLIB software library (Deutsch & Journel, 1992).

    Figure 3 shows the resulting fields for permeability in each of the five layers. The porosity fields have

    similar characteristics.

The reservoir engineering model was completed with the PVT and aquifer data from the original reservoir model and with power law relative permeability functions. There is no capillary pressure in the

    model. The production scheduling resembles the history in the original model, i.e. a first year of extended

    well testing, followed by a three year shut-in period, before field production commences. The well testing

    year consists of four three-monthly production periods, each having its own production rate. During field

    production, two weeks of each year are used for each well to do a shut-in test to collect shut-in pressure

    data. A fixed fluid production constraint is imposed on the wells. After falling below a limiting bottom

    hole pressure, they will switch to BHP-constraint. Using this completed reservoir engineering model, a

    synthetic history is generated using the Simulator 1. The total simulation period is 16.5 years. Pressure,

    water-cut and gas-oil ratio curves have been generated for each of the wells. Figure 4 shows a typical set

of production curves. The total oil recovery after the simulation period is 3.87 × 10⁶ Sm³. The complete data set suitable for Simulator 1 is publicly available on the internet at www.nitg.tno.nl/punq.

Gaussian noise was added to the well porosities / permeabilities and to the synthetic production data. The standard deviation on poro / perm values was set to 15 %. For the production data the Gaussian noise was

    correlated in time to mimic the more systematic character of errors in such data. The noise level on the

    shut-in pressures was 3 times smaller than on the flowing pressure, respectively 1 bar and 3 bar, to reflect

    the more accurate shut-in pressures. The noise level on the GOR was set at 10 % before gas breakthrough

    and 25 % after gas breakthrough, reflecting the difference between the solution and the free gas situation.

    Similarly, Gaussian noise of 2 % before and 5 % after water breakthrough was used for the WCT.
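Time-correlated noise of the kind described here can be generated with, for example, an AR(1) recursion; the sketch below uses the flowing (3 bar) and shut-in (1 bar) BHP noise levels, with an illustrative correlation parameter not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(4)

def correlated_noise(n, std, corr=0.9):
    """Time-correlated Gaussian noise via an AR(1) recursion with
    stationary standard deviation `std`; `corr` (illustrative, not from
    the study) sets how systematic consecutive errors are."""
    e = np.empty(n)
    e[0] = rng.normal(0.0, std)
    innov_std = std * np.sqrt(1.0 - corr**2)
    for t in range(1, n):
        e[t] = corr * e[t - 1] + rng.normal(0.0, innov_std)
    return e

# Flowing BHP noise (3 bar) vs the more accurate shut-in noise (1 bar).
flowing_noise = correlated_noise(2_000, std=3.0)
shut_in_noise = correlated_noise(2_000, std=1.0)
```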

    Each of the partners in the project was given the noisy well porosities / permeabilities and synthetic

    production history of the first 8 years (Note that this history includes 1 year of well testing, 3 years of

    field shut-in, and covers 4 years of actual field production). The synthetic production data consisted of

    the BHP, WCT and GOR for each of the six wells. Within the history period, two wells show gas

    breakthrough and one well shows the onset of water breakthrough. All partners were asked to forecast the


    total oil production after 16.5 years including uncertainty estimates, using the Bayesian formalism. Note

    that none of the partners were given the exact porosity / permeability grids, only the geological

    description. Each of the partners used his own workflow to infer these grids. In a second stage, five

    incremental wells were defined. For the truth case, the extra wells resulted in an incremental recovery of

1.46 × 10⁶ Sm³. The partners were again asked to forecast the incremental recovery as a probability

    distribution function.

Table 2. Approaches used by the different partners.

Partner + Approach | Parameter Domains | Independent Parameter(s) | Spatial technique | Uncertainty Quantification | Optimization | Reservoir Simulator³
TNO-1 | Homogeneous layers | PORO, Kv, Kh, uncorrelated | Piecewise Constant | Oliver-full | Dog-leg + Broyden gradients | Simulator 1
TNO-2 | Homogeneous drainage area regions | PORO, Kv, Kh, uncorrelated | Piecewise Constant | Oliver-full | Dog-leg + Broyden gradients | Simulator 1
TNO-3 | Homogeneous flow path regions | PORO | Piecewise Constant | Oliver-full | Dog-leg + Broyden gradients | Simulator 1
Amoco-Iso | Fixed pilot points, isotropic variogram | PORO, Kv, Kh, correlated statist. | GRF | multi-ML+, start from random prior models | GA | Simulator 1
Amoco-Aniso | Fixed pilot points, anisotropic variogram | PORO, Kv, Kh, correlated statist. | GRF | multi-ML+, start from random prior models | GA | Simulator 1
Elf | Pilot points selected | PORO | GRF | multi-ML, start from sampled prior models | GN-SD hybrid + finite diff. gradients | Simulator 1
NCC-GA | Global parameters acting on whole grid | PORO, Kv, Kh, correlated | GRF | multi-ML | GA | Simulator 2
NCC-AG MCMC | Global parameters acting on whole grid | PORO, Kv, Kh, correlated statist. | GRF | Posterior sampling | GA | Simulator 2
IFP-STM | Fixed pilot points | PORO | Kriging | Scenario Test Method (ML+) | Gauss-Newton + simulator gradients | Simulator 3
IFP-Oliver | Fixed pilot points | PORO | Kriging | Oliver-Prod, start from kriged solution | Gauss-Newton + simulator gradients | Simulator 3
NCC-Oliver | Fixed pilot points | PORO, Kv, Kh, correlated statist. | GRF | Oliver-full | Gauss-Newton + simulator gradients | Simulator 3

    Results

    General setup

    In order to be able to compare results from the different approaches, a fair measure for assessing forecast

    uncertainty is required. This measure is determined by the objective function, but is also influenced by

    the assumptions in the model. In the Bayesian context, the objective function has a prior geological

component and a production SoS or likelihood component. With the different parameterizations used by the partners, prior pdfs cannot be made equivalent, so it was decided that a weak prior should be used.

    The production component was preset so that partners used the same objective function. The objective

    function used in most approaches is given by equation (1). Only for the NCC-MCMC case was the

    likelihood function (4) used. The production data comprised BHP, GOR and WCT data from all six

    wells. For the standard deviations, ijk, the noise levels applied to the synthetic case were used. The

    weights,wijk, were set to 4 just prior and after breakthrough in the wells, with gas or water breakthrough

    to give more weight to the occurrence of the breakthrough. The wells for which no water breakthrough

    was observed received a zero WCT data point, again with a weight of 4, to penalize reservoir models that

    3

    The simulators used are numbered as follows 1=Eclipse, 2=More, 3=Athos.

  • 7/25/2019 Methods for Quantifying the Incertainty of Production Forecasts a Comparative Study

    9/22

    Methods for quantifying uncertainty on production forecasts Page 9

    showed early breakthrough. WCT and GOR data after breakthrough and all pressure data were given a

    weight of 1.
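    Equation (1) itself is not reproduced on this page; the following is a minimal sketch, consistent with the description above, of a weighted least-squares objective over wells i, data types j (BHP, GOR, WCT) and report times k. The function name and array layout are illustrative assumptions, not the study's code.

    ```python
    import numpy as np

    def objective(d_obs, d_sim, sigma, w):
        """Weighted sum-of-squares mismatch over all wells i, data types j
        and report times k, in the spirit of equation (1).

        All arguments are arrays of identical shape (i, j, k); sigma holds
        the noise standard deviations sigma_ijk and w the weights w_ijk
        (4 around breakthrough, 1 elsewhere)."""
        r = (d_obs - d_sim) / sigma   # normalized residuals
        return float(np.sum(w * r ** 2))

    # toy usage: a perfect history match gives zero mismatch
    d = np.ones((6, 3, 10))           # 6 wells; BHP, GOR, WCT; 10 report times
    assert objective(d, d, sigma=np.full(d.shape, 0.5), w=np.ones(d.shape)) == 0.0
    ```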

    We note that it is hard to isolate the influence of the model assumptions, for example to differentiate between a Gaussian Random Field model and a zone model for the spatial character of the porosity/permeability field, or between a deterministic poro/perm relationship, a stochastic one, and independent porosity and permeability. This hampers the comparison of the uncertainty ranges for forecasts from the various approaches.

    Base Case

    Table 2 shows which selections were made in the workflow for the quantification of production forecast uncertainty. Figure 5 shows the cumulative pdf curves generated by the various approaches used by the partners. Figure 6 displays a summary in terms of the forecasted low-median-high ranges. The results are classified into three groups: the TNO curves, the IFP/NCC-Oliver curves and the Amoco/Elf/other-NCC curves.
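    The low-median-high ranges summarized in Figure 6 were obtained by ordering the forecast samples and taking the 10 %, 50 % and 90 % models. A sketch of that computation (the sample values below are illustrative, not the study's forecasts):

    ```python
    import numpy as np

    def forecast_range(samples):
        """Return the (P10, P50, P90) models from an ensemble of
        cumulative-production forecasts, as done for Figure 6."""
        ordered = np.sort(np.asarray(samples, dtype=float))
        return tuple(np.percentile(ordered, [10, 50, 90]))

    # illustrative forecasts in million Sm3
    low, median, high = forecast_range([3.4, 3.6, 3.5, 3.8, 3.7])
    assert low <= median <= high
    ```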

    TNO group

    The TNO curves are generated using piecewise homogeneous regions. The digits labeling the different curves correspond to the number of layers (first digit), the number of regions within each layer (middle digit, if present) and the parameter types (last digit), i.e. φ alone, or φ, kh and kv, respectively. The TNO curves show larger uncertainty ranges than the other curves. Although their shapes are similar, they show a mutual shift along the horizontal axis. Initially the wider range was attributed to the use of production sampling in the Oliver approach. Testing this hypothesis showed that dropping the production sampling does not significantly affect the results; other partners' results confirmed this. Further testing led to the final conclusion that the use of homogeneous regions is not justified in this case. The homogeneous region models do not lead to satisfactory history matches and result in large spreads in the production forecast. Parameterizations based on heterogeneous models, such as the pilot point approaches, are more appropriate.
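    As a hedged sketch of the pilot-point idea (de Marsily et al., 1984; RamaRao et al., 1995): property values at a few pilot locations are treated as the free parameters, and the full heterogeneous field is obtained by kriging from the fixed well data plus the pilot points, so the optimizer deforms a heterogeneous field through a handful of unknowns. Simple kriging with an exponential covariance, with illustrative coordinates and values:

    ```python
    import numpy as np

    def simple_kriging(xy_data, values, xy_grid, crange=1000.0, sill=1.0):
        """Simple-kriging estimate (zero mean assumed) on xy_grid, conditioned
        on `values` at the xy_data locations; exponential covariance."""
        def cov(a, b):
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            return sill * np.exp(-3.0 * d / crange)
        weights = np.linalg.solve(cov(xy_data, xy_data), cov(xy_data, xy_grid))
        return weights.T @ values

    # well porosities are fixed observations; the pilot value is the free parameter
    wells = np.array([[0.0, 0.0], [2000.0, 0.0]])
    pilots = np.array([[1000.0, 500.0]])
    grid = np.array([[1000.0, 0.0], [500.0, 250.0]])

    def field(pilot_values):
        data = np.vstack([wells, pilots])
        vals = np.concatenate([[0.18, 0.22], pilot_values])  # wells stay fixed
        return simple_kriging(data, vals, grid)

    # changing only the pilot value deforms the whole interpolated field
    assert not np.allclose(field(np.array([0.05])), field(np.array([0.25])))
    ```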

    Amoco / Elf / NCC-GA / NCC-MCMC group

    The Amoco-Anisotropic curve and the Elf curve are practically identical. The main difference between

    the approaches that generated the curves is the use of a different optimizer. Apparently, it is not critical

    in this case to use global optimization or to consider multiple clusters. This result suggests that the

    posterior surface may look like a plateau and not a landscape of individually identifiable peaks. The

    Amoco-Isotropic curve shows a narrower uncertainty range than the Anisotropic curve because the

    variogram used for the anisotropic case has a longer range in the NW-SE direction that accounts for the

    presence of elongated channels. With the longer range, less variability is present in the reservoir porosity

    / permeability fields.
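    As an illustration of the mechanism: under geometric anisotropy the lag vector is rescaled by the directional ranges before the variogram is evaluated, so a longer range along the channel azimuth makes pairs aligned with the channels better correlated. The azimuth convention, ranges and exponential variogram below are assumptions for illustration, not the variograms actually used:

    ```python
    import numpy as np

    def aniso_lag(dx, dy, azimuth_deg, range_major, range_minor):
        """Anisotropy-scaled lag: rotate the lag vector (dx east, dy north)
        into the (major, minor) frame and divide by the directional ranges."""
        a = np.deg2rad(azimuth_deg)                 # azimuth clockwise from north
        h_major = dx * np.sin(a) + dy * np.cos(a)   # along the channel azimuth
        h_minor = dx * np.cos(a) - dy * np.sin(a)   # across the channels
        return np.hypot(h_major / range_major, h_minor / range_minor)

    def exp_variogram(h, sill=1.0):
        """Exponential variogram of the scaled lag h (effective range at h = 1)."""
        return sill * (1.0 - np.exp(-3.0 * h))

    # a pair separated along the long-range direction is better correlated
    # (smaller variogram value) than under the isotropic model
    h_aniso = aniso_lag(1000.0, 0.0, azimuth_deg=90.0, range_major=3000.0, range_minor=1000.0)
    h_iso = aniso_lag(1000.0, 0.0, azimuth_deg=90.0, range_major=1000.0, range_minor=1000.0)
    assert exp_variogram(h_aniso) < exp_variogram(h_iso)
    ```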

    The NCC-MCMC curve has a smaller range; two reasons are put forward to explain this. Firstly, in the MCMC approach the reservoir porosity/permeability fields are linear combinations of Gaussian random fields, thus retaining Gaussianity. This extra constraint reduces the uncertainty ranges produced in the reservoir models. Porosity/permeability fields generated with the pilot-point approaches do not satisfy Gaussian constraints. Secondly, after detailed analysis of the results of the various approaches (Omre et al., 1999), it was noted that the optimization step used in most approaches leads to the generation of extreme parameter values (many parameters end up at the prescribed geological/physical limits). These extreme values do result in good history matches, but give wider spreads in the forecasts. Since MCMC results generally show much smoother optimized porosity/permeability fields, their production forecast range is smaller. NCC used the posterior distribution from equation (4) for the dynamic data conditioning in the NCC-GA and NCC-MCMC results. Since this criterion is weaker than the least-squares norm from equation (1), because of the lack of normalization, one would expect wider ranges for these NCC curves.


    Since the ranges are smaller, we may conclude that the exact form of the objective function does not significantly influence the result.
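    The Gaussianity-preserving construction used in the MCMC approach can be illustrated numerically: a linear combination z = alpha*z1 + sqrt(1 - alpha^2)*z2 of two independent standard Gaussian fields is again standard Gaussian for any alpha (cf. the gradual deformation scheme of Hu, 2000). The field sizes and alpha below are arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    z1 = rng.standard_normal(200_000)   # two independent standard Gaussian "fields"
    z2 = rng.standard_normal(200_000)

    alpha = 0.3
    z = alpha * z1 + np.sqrt(1.0 - alpha ** 2) * z2   # linear combination

    # mean ~ 0 and variance ~ 1 are preserved for any alpha in [-1, 1],
    # so the combined field honors the same Gaussian model as z1 and z2
    assert abs(z.mean()) < 0.02
    assert abs(z.var() - 1.0) < 0.02
    ```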

    IFP / NCC-Oliver group

    The two IFP curves and the NCC-Oliver curve have median values which are shifted to lower values. This systematic shift is attributed to the use of another reservoir simulator for these approaches, i.e. Simulator 3 versus Simulator 1. Note that there is no apparent shift between the approaches that use Simulator 2 and those that use Simulator 1. In the incremental study we will see that this shift vanishes when incremental recovery is considered. The uncertainty range in the IFP-Oliver curve is very small. This can be attributed to the fact that no prior sampling was used in the IFP-Oliver approach, and that the production sampling does not contribute very much to the uncertainty range. In the NCC-Oliver approach prior sampling is used, resulting in an increase in the forecasted range. Within this group, the STM curve has the largest range. This is remarkable, because STM is believed to quantify the envelope of the forecast pdf around a single posterior peak. Again, this result is acceptable if the posterior surface looks like a large plateau.

    Comparison to the truth case

    Figure 6 shows that five out of the eleven estimated uncertainty ranges do not include the truth case value, thus confirming the general experience that uncertainty is often underestimated. Note that the TNO-2 approach with its large uncertainty range still does not include the truth case, and that NCC-MCMC with its small uncertainty range does include the truth case. Again, this indicates the importance of using properly history-matched models and appropriate models for heterogeneity.
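    Mechanically, a forecast range covers the truth case when the truth value lies between the low and high models; a wide range can still miss while a narrow one can hit, as the TNO-2 and NCC-MCMC curves show. A sketch with hypothetical numbers (not the study's values):

    ```python
    def covers(low, high, truth):
        """True when the truth-case value lies inside the forecast range."""
        return low <= truth <= high

    # illustrative ranges (million Sm3), not the study's numbers: a wide but
    # biased range can miss the truth while a narrow, well-centred one hits it
    ranges = {"wide-but-biased": (2.8, 3.5), "narrow-but-centred": (3.75, 3.95)}
    truth = 3.87
    n_covered = sum(covers(lo, hi, truth) for lo, hi in ranges.values())
    assert n_covered == 1
    ```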

    Incremental case

    All partners were asked to generate production forecasts for the conditioned reservoir models, now including five incremental wells, and to quantify the incremental recovery: the difference between the total oil production of the base case and of the case with the incremental wells included. Again, the result should be a cumulative distribution function.

    Figure 7 shows the cdf curves of incremental recovery for the incremental wells case. These curves can be split into the TNO curve and all other curves. Again, we see that the homogeneous region parameterization leads to a severe bias in the results. All other curves tend to be in agreement. Note especially that the IFP curve now lines up with the other curves. The shift between Simulator 3 and Simulators 1 and 2 is negated because it occurs both in the base case and in the case including the incremental wells. In calculating the incremental recovery, the difference between the two results is taken, thus canceling the shift.
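    The cancellation can be made explicit: if a simulator adds an (approximately) constant bias to both the base-case and the incremental-wells forecast of each realization, the per-realization difference removes it. A sketch with hypothetical numbers:

    ```python
    import numpy as np

    base = np.array([3.4, 3.5, 3.6])     # base-case forecasts, million Sm3 (hypothetical)
    infill = np.array([4.3, 4.5, 4.6])   # forecasts with the 5 incremental wells

    bias = -0.2                          # hypothetical systematic simulator shift
    incremental = (infill + bias) - (base + bias)   # the bias cancels in the difference

    assert np.allclose(incremental, infill - base)
    ```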

    Comparison to the truth case

    Most approaches underestimate the incremental recovery of the truth case. The truth case value lies on the high side of five of the curves. The NCC curves all have ranges which do not cover the truth case value. The fact that the truth case value lies on the upside of the curves can be explained in hindsight. The well locations were chosen in such a way that the recovery from the truth case permeability field would be optimized. For an ensemble of permeability fields, these well locations are not optimal and will produce at a lower average rate.

    Discussion

    Before drawing conclusions from the results, some discussion is needed regarding the comparison of the results from the various partners. In order to make an objective comparison between the various uncertainty quantification approaches possible, the prior and likelihood measures were prescribed. However, especially the results from the NCC approach make it clear that the uncertainty ranges are not only influenced by the information used to define the objective function, but also by the underlying assumptions made and the techniques used in the reservoir modeling. These can be seen as an implicit form


    of prior information. In this study the important assumptions are related to the fluid flow model and the

    spatial distribution of porosity and permeability. Making a particular choice for the fluid flow simulator

    and for modeling the spatial distributions will affect the uncertainty quantification in an implicit way.

    This makes comparison of uncertainty ranges somewhat ambiguous. In particular, it is not formally

    possible to compare all results with a single true uncertainty range. Each approach has its own

    underlying set of assumptions, resulting in its own uncertainty range. Therefore, we can only make

    observations and not judgements in terms of the correctness of estimates.

    Conclusions

    From the PUNQ-S3 integrated case study we conclude the following:

    - Using porosity and permeability multipliers for homogeneous layers or regions resulted in large uncertainty ranges and a significant bias in the production forecast, due to the poor quality of the history matches obtained with these models. Heterogeneous models parameterized by pilot points gave more consistent results.

    - Production sampling (as introduced by Oliver et al. (1996)) did not contribute significantly to the forecast uncertainty range. Omitting the sampling of several prior geological models led to too narrow uncertainty ranges around the forecasts.

    - The use of different reservoir simulators to model the same reservoir led to a systematic difference in the production forecast for one of the (three) simulators. Fortunately, when considering incremental recovery, the difference was negated.

    - Approaches which include an optimization step showed a tendency to predict larger uncertainty ranges, because the conditioned porosity/permeability fields contain significantly more extreme high and/or low values.

    - Results suggest that the multi-dimensional posterior distribution looks more like a large plateau than like many isolated peaks.

    - Comparison with the cumulative production forecast of the truth case showed that five out of the eleven predicted forecast ranges did not include the truth case value.

    - All approaches underestimated the incremental recovery of the truth case. The latter is probably because the locations of the incremental wells were optimally designed for the truth case.

    We realize that these conclusions may be specific to the current test case. For confirmation, the above

    case study should in principle be repeated for many truth cases.

    Acknowledgements

    The European Commission is acknowledged for partly funding this project in the Joule-III Non-Nuclear Energy Programme. We thank all other partners in the PUNQ project for the lively discussions and valuable contributions to this study. We thank Elf for supplying the field case which acted as a valuable integration tool.

    References

    ABRAHAMSEN, P., EGELAND, T., LIA, O., OMRE, H., 1992, An integrated approach to prediction of hydrocarbon in place and recoverable reserve with uncertainty measures, Paper SPE 24276, presented at 1st SPE Europ. Petr. Comp. Conf., Stavanger, 25–27 May.

    BERTEIG, V., HALVORSEN, K.B., OMRE, H., 1988, Prediction of hydrocarbon pore volume with uncertainties, Paper SPE 18325, presented at SPE Annual Technical Conference & Exhibition, Houston, 2–5 Oct.


    BOS, C.F.M., 2000, Production forecasting with uncertainty quantification, final report of EC project, NITG-TNO report NITG 99-255-A, Jan.

    BROYDEN, C.G., 1965, A class of methods for solving nonlinear equations, Math. Comp. 19, pp. 577–593.

    CUYPERS, M., DUBRULE, O., LAMY, P., BISSEL, R., 1998, Optimal choice of inversion parameters for history matching with the pilot point method, Proc. ECMOR VI conf., Peebles, 8–11 Sep.

    DEUTSCH, C.V., JOURNEL, A., 1992, GSLIB: Geostatistical Software Library and User's Guide, Oxford University Press, Oxford.

    FLORIS, F.J.T., 1996, Direct conditioning of Gaussian random fields to dynamic production data, Proc. ECMOR V conf., Leoben, 3–6 Sep.

    FLORIS, F.J.T., BOS, C.F.M., 1998, Quantification of uncertainty reduction by conditioning to dynamic production data, Proc. ECMOR VI conf., Peebles, 8–11 Sep.

    FLORIS, F.J.T., PEERSMANN, M.R.H.E., 1998, Uncertainty estimation in volumetrics for supporting hydrocarbon E&P decision making, J. of Petroleum Geoscience, Vol. 4, No. 1, pp. 33–40.

    GOLDBERG, D.E., 1989, Genetic algorithms in search, optimization, and machine learning, Addison-Wesley, Reading.

    HEGSTAD, B.K., OMRE, H., TJELMELAND, H., TYLER, K., 1994, Stochastic simulation and conditioning by annealing in reservoir description, in Armstrong and Dowd (eds.), Geostatistical Simulations, Kluwer Academic Publishers, pp. 43–55.

    HEGSTAD, B.K., OMRE, H., 1997, Uncertainty assessment in history matching and forecasting, in Baafi and Schofield (eds.), Geostatistics Wollongong '96, Vol. I, Kluwer Academic Publishers, pp. 585–596.

    HOLDEN, L., 1998, Adaptive chains, Tech. Rep. SAND 11/98, Norwegian Computing Center, Oslo, Norway.

    HU, L.-Y., 2000, Gradual deformation and iterative calibration of Gaussian-related stochastic models, Math. Geol., 32 (1), pp. 87–108.

    LANDA, J.L., HORNE, R.N., 1997, A procedure to integrate well test data, reservoir performance history and 4-D seismic information into a reservoir description, Paper SPE 38653, presented at SPE Annual Technical Conference & Exhibition, San Antonio, 5–8 Oct.

    LIA, O., OMRE, H., TJELMELAND, H., HOLDEN, L., EGELAND, T., 1997, Uncertainty in reservoir production forecasts, AAPG Bulletin, Vol. 81, No. 5.

    MARSILY, G. de, LAVEDAN, G., BOUCHER, M., FASANINO, G., 1984, Interpretation of interference tests in a well field using geostatistical techniques to fit the permeability distribution in a reservoir model, in Geostatistics for Natural Resources Characterization, eds. G. Verly et al., Part 2, D. Reidel Publ. Comp., pp. 831–849.

    OLIVER, D., HE, N., REYNOLDS, A.C., 1996, Conditioning permeability fields to pressure data, Proc. ECMOR V conf., Leoben, 3–6 Sep.

    OMRE, H., TJELMELAND, H., WIST, H.T., 1999, Uncertainty in history matching: model specification and sampling algorithms, NTNU-Trondheim internal report, Statistics No. 6.

    POWELL, M.J.D., 1972, in Numerical methods for non-linear algebraic equations (ed. W. Murray), Academic Press, London and New York, p. 29.

    RAMARAO, B.S., MARSH LAVENUE, A., DE MARSILY, G., MARIETTA, M.G., 1995, Pilot point methodology for automated calibration of an ensemble of conditionally simulated transmissivity fields: 1. Theory and computational experiments, Water Resources Research, Vol. 31, No. 3, pp. 475–493.

    ROGGERO, F., 1997, Direct selection of stochastic model realizations constrained to historical data, Paper SPE 38731, presented at the 1997 SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5–8 October.


    SAMSON, P., DUBRULE, O., EULER, N., 1996, Quantifying the impact of structural uncertainties on gross-rock volume estimates, Paper SPE 35535, presented at European 3-D Reservoir Modelling Conference, Stavanger, 16–17 Apr.

    TJELMELAND, H., 1997, A note on the Bayesian approach to history matching of reservoir characteristics, Proceedings of IAMG'97: The Third Annual Conference of the International Association for Mathematical Geology, ed. Pawlowsky-Glahn, V., International Center for Numerical Methods in Engineering (CIMNE), Barcelona, Spain, Vol. 2, pp. 772–777.

    WEN, X.-H., GOMEZ-HERNANDEZ, J.J., CAPILLA, J.E., SAHUQUILLO, A., 1996, Significance of conditioning to piezometric head data for predictions of mass transport in groundwater modeling, Math. Geology, Vol. 28, No. 7.

    WU, Z., REYNOLDS, A.C., OLIVER, D., 1998, Conditioning geostatistical models to two-phase production data, Paper SPE 49003, presented at SPE Annual Technical Conference & Exhibition, New Orleans, 27–30 Sep.

    Website

    http://www.nitg.tno.nl/punq


    Appendix A. Geological description.

    This appendix gives a geological description of the heterogeneities that occur, based on knowledge of the regional geology (such as paleoslope, paleo water depth, gross environments of deposition, size and shape of sedimentary bodies, and structural trends and style), which is normally known from adjacent fields and wildcat wells. In the geological interpretation, the layer thicknesses, which are all on the order of 5 meters, played an important role.

    Sediments were deposited in a deltaic coastal plain environment. Layers 1, 3, and 5 consist of fluvial channel fills encased in floodplain mudstone. Layer 2 represents marine or lagoonal clay with some distal mouthbar deposits. Layer 4 represents a mouthbar or lagoonal delta encased in lagoonal clays.

    Layers 1, 3, and 5 have linear streaks of highly porous sandstone (φ > 20 %), with an azimuth somewhere between 110 and 170 degrees (SE). These sandstone streaks of about 800 m width are embedded in a low-porosity shale matrix (φ < 5 %). The width and the spacing of the streaks vary somewhat between the layers; a summary is given in Table 3.

    In layer 2, marine or lagoonal shales occur, in which distal mouthbar or distal lagoonal delta deposits occur. They translate into a low-porosity (φ < 5 %), shaly sediment with some irregular patches of somewhat higher porosity (φ > 5 %).

    Layer 4 contains mouthbars or lagoonal deltas within lagoonal clays, so a flow unit is expected which consists of an intermediate-porosity region (φ ~ 15 %) with an approximately lobate shape embedded in a low-porosity matrix (φ < 5 %). The lobate shape is usually expressed as an ellipse (ratio of the axes = 3:2) with the longest axis perpendicular to the paleocurrent (which is between 110 and 170 degrees SE).
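    As a numerical sketch of this geometry: a point lies inside the mouthbar ellipse when its coordinates, rotated into the ellipse frame and scaled by the semi-axes, fall within the unit disc. The center, long-axis length and azimuth convention (clockwise from north) below are illustrative assumptions:

    ```python
    import numpy as np

    def in_mouthbar(x, y, cx, cy, long_axis, paleocurrent_deg):
        """True when (x, y) lies inside an ellipse centred at (cx, cy) with
        axis ratio 3:2 and the long axis perpendicular to the paleocurrent
        azimuth (measured clockwise from north; x east, y north)."""
        a = long_axis / 2.0                       # semi-major axis
        b = a * 2.0 / 3.0                         # 3:2 axis ratio
        t = np.deg2rad(paleocurrent_deg + 90.0)   # long-axis direction
        u, v = np.sin(t), np.cos(t)               # unit vector along the long axis
        dx, dy = x - cx, y - cy
        major = dx * u + dy * v                   # coordinate along the long axis
        minor = -dx * v + dy * u                  # coordinate across it
        return (major / a) ** 2 + (minor / b) ** 2 <= 1.0

    assert in_mouthbar(0.0, 0.0, 0.0, 0.0, long_axis=3000.0, paleocurrent_deg=140.0)
    assert not in_mouthbar(5000.0, 0.0, 0.0, 0.0, long_axis=3000.0, paleocurrent_deg=140.0)
    ```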

    Table 3. Expected sedimentary facies with estimates of width and spacing of the major flow units for each layer.

    Layer  Facies          Width        Spacing
    1      Channel fill    800 m        2-5 km
    2      Lagoonal shale  -            -
    3      Channel fill    1000 m       2-5 km
    4      Mouthbar        500-5000 m   10 km
    5      Channel fill    2000 m       4-10 km


    List of tables

    Table 1. Classification of Uncertainty Quantification methods.

    Table 2. Approaches used by the different partners.

    Table 3. Expected sedimentary facies with estimates for width and spacing for major flow units for each

    layer.

    List of Figures

    Figure 1. General work flow used for the quantification of forecast uncertainty.

    Figure 2. Top structure map of the PUNQ-S3 case. The field contains both oil and gas. Black dots indicate the six initial wells located around the gas-oil contact. White dots indicate additional wells added in a later phase.

    Figure 3. Horizontal permeability fields in the five layers for the synthetic PUNQ-S3 case.

    Figure 4. Typical dynamic production data for a well showing bottom hole pressure (BHP), oil

    production rate (OPR), gas-oil ratio (GOR) and water cut (WCT). After a 1 year extended production test

    and 3 years field shut-in, field production starts at a fixed oil rate. History data runs until year 8 and is

    forecasted until year 16.5. This well shows the start of water production. Two other wells show gas

    breakthrough.

    Figure 5. Cumulative distribution functions for the total oil production forecasted up to 16.5 years. The curves can be split up into three classes: the TNO curves, the IFP / NCC-Oliver curves and the Amoco / Elf / other-NCC curves.

    Figure 6. Summary of low-median-high ranges of cumulative production at 16.5 years for all approaches. The ranges were generated by ordering the samples and taking the 10 %, 50 % and 90 % models.

    Figure 7. Cumulative distribution functions for incremental recovery at 16.5 years using 5 more wells.

    The curves can be split up into two classes, the TNO curve and the other curves.


    Figure 1. General work flow used for the quantification of forecast uncertainty.

    [Figure 1 workflow elements: parameterization + prior pdfs; prior sampling; reservoir model; reservoir simulation; history data + error data; production data; production sampling; objective function calculation; updating of reservoir model; history matched reservoir model; forecasting.]



    Figure 2. Top Structure map of the PUNQ-S3 case. The field contains both oil and gas. Black dots

    indicate six initial wells located around the gas-oil contact. White dots indicate additional wells added in

    a later phase.


    Figure 3. Horizontal permeability fields in the five layers for the synthetic PUNQ-S3 case.


    Figure 4. Typical dynamic production data for a well, showing bottom hole pressure (BHP), oil production rate (OPR), gas-oil ratio (GOR) and water cut (WCT). After a 1-year extended production test and 3 years of field shut-in, field production starts at a fixed oil rate. History data runs until year 8 and is forecasted until year 16.5. This well shows the start of water production. Two other wells show gas breakthrough.


    Figure 5. Cumulative distribution functions for the total oil production forecasted up to 16.5 years. The curves can be split up into three classes: the TNO curves, the Amoco / Elf / other-NCC curves and the IFP / NCC-Oliver curves.

    [Figure 5 axes: cumulative oil production after 16.5 years (million Sm3, 3 to 4.2) versus cumulative distribution function (0 to 1). Curves: TNO-1 (5x3 pars), TNO-2 ((5x6)x3 pars), TNO-3 ((5x14)x1 pars), Amoco Isotropic, Amoco Anisotropic, Elf, NCC-GA, NCC-MCMC, IFP-STM, IFP-Oliver, NCC-Oliver, truth case.]


    Figure 6. Summary of low-median-high ranges of cumulative production at 16.5 years for all approaches. The ranges were generated by ordering the samples and taking the 10 %, 50 % and 90 % models.


    Figure 7. Cumulative distribution functions for incremental recovery at 16.5 years using 5 more wells.

    The curves can be split up into two classes, the TNO curve and the other curves.

    [Figure 7 axes: incremental oil production after 16.5 years (million Sm3, 0 to 2) versus cumulative distribution function (0 to 1). Curves: TNO-2 ((5x6)x3 pars), Amoco Isotropic, Amoco Anisotropic, Elf, NCC-GA, NCC-MCMC, IFP-STM, NCC-Oliver, truth case.]