

Approaches to optimal aquifer management and intelligent control in a multiresolutional decision support system

Shlomo Orr · Alexander M. Meystel

Abstract Despite remarkable new developments in stochastic hydrology and adaptations of advanced methods from operations research, stochastic control, and artificial intelligence, solutions of complex real-world problems in hydrogeology have been quite limited. The main reason is the ultimate reliance on first-principle models that lead to complex, distributed-parameter partial differential equations (PDE) on a given scale. While the addition of uncertainty, and hence stochasticity or randomness, has increased insight and highlighted important relationships between uncertainty, reliability, risk, and their effect on the cost function, it has also (a) introduced additional complexity that demands prohibitive computer power even for just a single uncertain/random parameter; and (b) led to the recognition of our inability to assess the full uncertainty even when including all uncertain parameters. A paradigm shift is introduced: an adaptation of new methods of intelligent control that will relax the dependency on rigid, computer-intensive, stochastic PDE and will shift the emphasis to a goal-oriented, flexible, adaptive, multiresolutional decision support system (MRDS) with strong unsupervised learning (oriented towards anticipation rather than prediction) and highly efficient optimization capability, which could provide the needed solutions to real-world aquifer management problems. The article highlights the links between past developments and future optimization/planning/control of hydrogeologic systems.

Résumé Malgré de remarquables nouveaux développements en hydrologie stochastique ainsi que de remarquables adaptations de méthodes avancées pour la recherche opérationnelle, le contrôle stochastique et l'intelligence artificielle, les solutions pour les problèmes complexes en hydrogéologie sont restées assez limitées. La principale raison est l'ultime confiance dans les modèles qui conduisent à des équations aux dérivées partielles complexes à paramètres distribués (PDE) à une échelle donnée. Alors que l'accumulation d'incertitudes et, par conséquent, la stochasticité ou l'aléa a augmenté la perspicacité et a mis en lumière d'importantes relations entre l'incertitude, la fiabilité, le risque et leur effet sur les coûts de fonctionnement, elle a également (a) introduit une complexité additionnelle qui exige une puissance de calcul prohibitive, même pour un seul paramètre incertain/aléatoire; et (b) conduit à la reconnaissance de notre incapacité à maîtriser l'incertitude totale, même en introduisant tous les paramètres incertains. Un changement de paradigme est introduit: une adaptation de nouvelles méthodes de contrôle intelligent qui relâchera la dépendance aux PDE stochastiques, rigides et exigeantes en calcul, et qui déplacera l'accent vers un système multirésolutionnel d'aide à la décision (MRDS) flexible, adaptatif et orienté vers les objectifs, avec un fort apprentissage non supervisé (orienté vers l'anticipation plutôt que la prédiction) et une capacité d'optimisation très efficace, qui pourrait apporter les solutions nécessaires aux problèmes réels de gestion des aquifères. Cet article met en lumière les liens entre les développements passés et les futurs moyens d'optimisation, de gestion et de contrôle des systèmes hydrogéologiques.

Resumen A pesar de nuevos avances notables en hidrología estocástica y las adaptaciones de métodos avanzados de investigación de operaciones, control estocástico e inteligencia artificial, las soluciones de problemas complejos del mundo real en hidrogeología han sido bastante limitadas. La principal razón es la dependencia definitiva en modelos de primeros principios que conducen a ecuaciones diferenciales parciales complejas de parámetros distribuidos (PDE) a una escala dada. Mientras que la adición de incertidumbre y, por lo tanto, de estocasticidad o aleatoriedad ha incrementado la profundidad y resaltado relaciones importantes entre la incertidumbre, la confiabilidad, el riesgo y su efecto en la función de costo, la adición también ha (a) introducido complejidad adicional que resulta en una potencia computacional excesiva aun para un solo parámetro incierto/aleatorio; y (b) llevado a reconocer nuestra incapacidad para evaluar la incertidumbre completa aun cuando se incluyen todos los parámetros inciertos. Se introduce un cambio paradigmático: una adaptación de nuevos métodos de control inteligente que relajará la dependencia en PDE estocásticas, rígidas y de uso computacional intensivo, cambiando el énfasis hacia un sistema multiresolucional de apoyo a las decisiones (MRDS) adaptativo, flexible y orientado a objetivos, con fuerte aprendizaje sin supervisión (orientado a la anticipación más que a la predicción) y con una fuerte capacidad de optimización eficiente, lo cual podría aportar las soluciones necesarias a los problemas reales de manejo de acuíferos. El artículo resalta los vínculos entre desarrollos pasados y el futuro control/planificación/optimización de sistemas hidrogeológicos.

Received: 20 June 2004 / Accepted: 8 December 2004 / Published online: 26 February 2005

© Springer-Verlag 2005

S. Orr (✉)
MRDS Inc., 5900 W. 25th Avenue, Kennewick, WA 99338, USA
e-mail: [email protected]
Tel.: +1-509-736-3111
Fax: +1-415-276-1998

A. M. Meystel
ECE Department, Drexel University,
3141 Chestnut Street, Philadelphia, PA 19104, USA

Hydrogeol J (2005) 13:223–246 DOI 10.1007/s10040-004-0424-3



Introduction

Advances in optimal aquifer management over the last few decades have spanned from "decision making" to "optimization", "planning", and later, "control" techniques, all of which could be unified as "planning–control–optimization".

Planning/control/optimization in hydrogeology is a domain of intersection between complex natural systems, human industrial practice, and theoretical knowledge, which is extremely difficult to analyze and further develop; this is because, from a knowledge organization perspective, this area of practical knowledge has been in disarray. Knowledge related to subsurface operations (groundwater remediation, water resources utilization, in-situ leaching, heap leaching, oil reservoir management) relates to many scales of representation, yet this fact has not been taken into account in an organized manner for solving aquifer management problems. Fortunately, the rapid development of aquifer management optimization and control in recent years has laid the foundation for such integration. These advances can be roughly divided into three major categories:

(1) Operations research (OR)
– Decision analysis (Massmann and Freeze 1987a, 1987b; Ben-Zvi et al. 1988; Freeze et al. 1992; Freeze and Gorelick 1999), and worth of information (Tucciarelli and Pinder 1991; James and Gorelick 1994).
– Stochastic optimization (optimization under uncertainty)—using first-order approximations, small perturbation methods, or Monte Carlo simulation–optimization (e.g., Maddock 1973; Gorelick 1987; Wagner and Gorelick 1987, 1989; Chan 1993, 1994; Morgan et al. 1993; Ranjithan et al. 1993; Wang and Ahlfeld 1994; Zheng and Wang 1999; Guyaguler and Horne 2001).

(2) Control theories
– Deterministic control—deterministic feedback (DF), differential dynamic programming (DDP) (Jones et al. 1987; Jones 1992).
– Stochastic control—dynamic dual control or differential control, combined with differential calculus, small perturbations, and Kalman filter or other stochastic inverse models (Andricevic and Kitanidis 1990; Lee and Kitanidis 1991, 1996; Georgakakos and Vlatsa 1991; Culver and Shoemaker 1992, 1993; Whiffen and Shoemaker 1993; Andricevic 1993; Philbrick and Kitanidis 1998, 2000).

(3) Artificial intelligence (AI; also called "soft computing" and "machine learning") and search (optimization) algorithms, particularly artificial neural networks (ANN; e.g., Boger and Guterman 1997; Boger 2002; Rogers and Dowla 1994; Bhattacharya et al. 2003), genetic algorithms (GA; e.g., Rogers and Dowla 1994; McKinney et al. 1994; Maskey et al. 2000, 2002), fuzzy logic (FL; e.g., Dou et al. 1997; Wong et al. 2002; Lobbrecht and Solomatine 2002), simulated annealing (Dougherty and Marryott 1991), tabu search (Zheng and Wang 1999), and kriging interpolation (of response surfaces in the state space, generated by sensitivity coefficients; e.g., Landa and Guyaguler 2003).

It should be mentioned that the distinction between decision analysis and optimization/control seems rather arbitrary. Freeze and Gorelick (1999) suggested that "the fundamental difference between these two approaches lies in the fact that decision analysis considers a broad suite of technological strategies from which one of many predetermined design alternatives is selected as the best, while stochastic optimization determines the optimal pump-and-treat design but considers only one technological strategy at a time". In other words, decision analysis is a crude (low-resolution) process of searching for a discrete set of values for the decision variables by calculating the objective function for every discrete design alternative. Later in the text, a framework that unifies these two approaches is suggested.

On another front, the recent extension of uncertainty theories to the conceptual models themselves (or model structure) by Neuman (2003) and Ye et al. (2004) is an important milestone that enables the next step in stochastic optimization/control in hydrogeology and other areas. The following review does not include recent developments in optimal watershed and surface-water management, but merely touches on a few recent advances in optimal aquifer management, with some parallels in oil reservoir management, in order to highlight the links between past developments and future optimization–planning–control of hydrogeologic systems.

Brief overview of stochastic optimal management

Optimization under uncertainty

Perhaps the first pioneering work in stochastic optimal management was the stochastic farm irrigation management analyzed by Maddock (1973), where crop prices, pumping costs, interest rate, consumptive use, and aquifer parameters such as transmissivity and storativity were considered uniform but uncertain (or random), with the goal to maximize net revenues. Groundwater flow responses were modeled as unit response functions (or a response matrix), with well pumping and drawdown being part of the decision variables, while pumping costs depended on both lift and pumping rates. In Maddock's work (ibid), the sensitivity analyses of the objective function to all hydrologic and economic parameters (using a regret function to represent net revenue losses) revealed higher sensitivity to economic parameters such as crop prices than to aquifer model parameters.
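
As an illustrative sketch (not taken from Maddock's study), the response-matrix idea reduces the management model to a small linear program: drawdown at each control point is a linear combination of the pumping rates through unit response coefficients, and net revenue is maximized subject to drawdown limits. All coefficients and bounds below are hypothetical placeholders.

```python
# Minimal sketch of a response-matrix (unit response function) management model:
# drawdown is a linear function of pumping, s = R q, so net revenue can be
# maximized with linear programming. Numbers are illustrative placeholders only.
import numpy as np
from scipy.optimize import linprog

R = np.array([[0.020, 0.010, 0.005],     # unit response: drawdown (m) at control
              [0.010, 0.030, 0.010],     # point i per unit pumping (m3/d) at well j
              [0.005, 0.010, 0.025]])
benefit = np.array([0.30, 0.28, 0.25])   # net revenue per unit pumped ($/m3)
s_max = np.array([2.0, 2.5, 2.0])        # allowable drawdown at each control point (m)
q_cap = 200.0                            # pumping capacity of each well (m3/d)

# linprog minimizes, so negate the benefits to maximize total net revenue
res = linprog(c=-benefit, A_ub=R, b_ub=s_max,
              bounds=[(0.0, q_cap)] * 3, method="highs")
print("optimal pumping rates:", res.x)
print("maximum net revenue  :", -res.fun)
```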

Feinerman et al. (1985) were the first to link the effect of spatial variability with optimal management—i.e., stochastic optimization based on geostatistics (or random fields) of soil spatial variability for irrigation management. The authors considered crop yield as a function of irrigation water quantity and a 2D, normally distributed, random soil function, and searched for an optimal irrigation level that maximizes the expected net profit (the difference between spatially-averaged yield revenue and irrigation cost) under deterministic, risk-neutral, and risk-averse decision-making scenarios. The uncertainty in the yield function was expressed in terms of the first two statistical moments of the perturbed soil function, and was used explicitly in the objective function. The authors concluded that proper characterization of the spatial heterogeneity of soil and crop yield could reduce irrigation water applications.

Tung (1986) used an analytical simulation model with a random/perturbed, uniform hydraulic conductivity and storativity, using first-order uncertainty analysis, with the goal of maximum pumping (under transient conditions) from a confined, uniform aquifer subject to probabilistic drawdown constraints. Following Maddock (1972), unit response functions were used to quantify and relate aquifer response/drawdown to parameter randomness. Sensitivity analyses indicated that optimal pumping rates were sensitive to aquifer transmissivity, while almost independent of storativity. Tung (1986) was also the first to introduce the method of chance constraints (or reliability, transforming a probabilistic water quality constraint into a deterministic one).
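
A minimal sketch of the chance-constraint transformation, assuming the constrained quantity (e.g., drawdown) is approximately normally distributed: the probabilistic requirement P(s ≤ s_max) ≥ r is replaced by the deterministic equivalent mean_s + z_r·std_s ≤ s_max, where z_r is the standard normal quantile at reliability r. The numbers are illustrative only.

```python
# Sketch of a chance-constraint transformation (cf. Tung 1986): the probabilistic
# requirement P(s <= s_max) >= r becomes mean_s + z_r * std_s <= s_max,
# assuming s is approximately normally distributed. Values are illustrative.
from scipy.stats import norm

def deterministic_equivalent(mean_s, std_s, reliability):
    """Left-hand side of the deterministic-equivalent drawdown constraint."""
    z_r = norm.ppf(reliability)            # standard normal quantile for reliability r
    return mean_s + z_r * std_s

s_max, r = 2.0, 0.95                       # allowable drawdown (m), target reliability
lhs = deterministic_equivalent(mean_s=1.6, std_s=0.3, reliability=r)
print(f"constraint satisfied at r={r}: {lhs <= s_max} (lhs = {lhs:.2f} m)")
```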

Wagner and Gorelick (1987) were the first to solve a stochastic groundwater quality management problem of pump-and-treat design (particularly, to find an optimal pumping schedule needed to reduce contaminant concentration to an acceptable level) based on flow and transport simulations, using a geostatistical formulation of uncertainty in hydraulic conductivity, effective porosity, and longitudinal and transverse dispersivities. Similar to the earlier works by Tung and Maddock, the authors used a first-order approximation (a first-order Taylor series expansion of perturbations about the mean parameters and state variables, following Dettinger and Wilson 1981), essentially linearizing the stochastic flow and transport equations in order to allow formulations of direct relationships between parameter uncertainty, expressed by the first two moments (mean and covariance), and the uncertainty/moments of the state variables (hydraulic heads and contaminant concentrations, assuming normal distribution of the latter), which then propagate to the objective function. Parameters were estimated using nonlinear multiple regression, while risk was evaluated using chance-constrained optimization. They found that parameter uncertainty significantly increased pumping requirements (and hence, cost).
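
The following is a minimal first-order (first-order second-moment) propagation sketch of the kind of linearization described above, not a reproduction of the cited formulation: the model is linearized about the mean parameters so that the head covariance is approximated by J·C_p·J^T, with J the sensitivity (Jacobian) of heads with respect to parameters. The forward model and all numbers are hypothetical stand-ins.

```python
# Minimal first-order (FOSM) uncertainty propagation sketch: linearize h = f(p)
# about the mean parameters, so that Cov[h] ~= J Cov[p] J^T with J_ij = dh_i/dp_j.
# 'forward_model' and all numbers are placeholders for a real flow/transport simulator.
import numpy as np

def forward_model(p):
    # stand-in for a simulation returning heads at two locations
    return np.array([10.0 - 0.8 * p[0] + 0.1 * p[1],
                      8.0 - 0.2 * p[0] + 0.5 * p[1]])

p_mean = np.array([2.0, 1.0])                  # mean parameter values (illustrative)
C_p = np.array([[0.25, 0.05],                  # parameter covariance matrix
                [0.05, 0.10]])

eps = 1e-4                                     # Jacobian by central finite differences
J = np.column_stack([
    (forward_model(p_mean + eps * e) - forward_model(p_mean - eps * e)) / (2 * eps)
    for e in np.eye(len(p_mean))])

C_h = J @ C_p @ J.T                            # first-order head covariance
print("head standard deviations:", np.sqrt(np.diag(C_h)))
```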

It should be mentioned that first-order approximations constitute the most simplified stochastic models possible, while higher-order perturbations are a major tool in stochastic theories (for an extended description of these and other methods, see, for example, Dagan and Neuman 1997, and Zhang 2002). Briefly, first-order approximations are limited to very small variability in parameter values, stationary fields, steady state flow, and infinite/unbounded domains, and consequently, cannot solve complex, real-world problems. Perturbation methods improve on the first-order linearization approach by extending the variables in Taylor series to second and higher orders (typically as power series of a small perturbation parameter) and thereby lead to better stochastic approximations, in terms of both accuracy and control (in the latter, the ability to evaluate hedging or offset). However, most of the limitations of first-order approximations apply to perturbation methods as well (i.e., low-variance, stationary fields; unbounded domains; steady state flow; e.g., Orr 1993). Since convergence of the series that result from perturbation methods cannot be guaranteed, neither can the validity of the corresponding solutions (Beran 1968; Dagan 1989); on the other hand, partial sums of asymptotic power series (also called "semi-convergent series") can be very good approximations even while the complete series expansion diverges (Arfken 1985; Hinch 1991). Nevertheless, simplified first-order approximations and, more so, perturbation methods are elegant, efficient, provide insight, and with some intuition, may be extrapolated to high variances (large uncertainty), non-uniform fields, and unsteady flows.
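
In symbols, the generic form of such an expansion (written here only to fix ideas, with the standard deviation of the log-conductivity as the small parameter) is

h(\mathbf{x}) \;=\; h^{(0)}(\mathbf{x}) \;+\; \sigma_Y\, h^{(1)}(\mathbf{x}) \;+\; \sigma_Y^{2}\, h^{(2)}(\mathbf{x}) \;+\; \cdots, \qquad \sigma_Y \ll 1,

where \sigma_Y is the standard deviation of Y = \ln K and the h^{(n)} are successively higher-order corrections to the mean head; truncating after the linear term recovers the first-order approximation discussed above.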

Neuman and Orr (1993) further developed a stochastic (integro-differential) approximation for highly heterogeneous, non-uniform, bounded fields with high variance (see also Orr and Neuman 1994); their method was applied by Guadagnini and Neuman (1999), and further extended to transient flow by Tartakovsky and Neuman (1998). Although these stochastic theories and approximations include random boundary conditions and random sources/sinks, they are still limited to one uncertain parameter and simple geometries (yet with significant numerical difficulty) and hence, cannot yet be applied to complex, real-world problems.

As was pointed out by Orr (2002a, 2002b), the traditional stochastic PDE approach suffers from several drawbacks and inconsistencies, such as (a) ignoring uncertainty in the models themselves and/or the model structure, all of which remain rigid; (b) employing additional (now statistical) models with new parameters that are also uncertain and need to be evaluated; (c) using interpretations of well tests that assume homogeneity on a "near well" scale as the basis for conditional (stochastic) simulations; (d) using a single "dominant" parameter, on a single scale, as the only random property (otherwise, computations are prohibitive even for limited cases); (e) assuming a PDF based on sparse spatial data; (f) assuming deterministic boundary conditions despite the significant uncertainty in them; (g) inability to capture the physics on all scales, which leads to inability to simulate phenomena such as front instability, where micro-scale processes (capillary and viscosity differences) and variations trigger and promote fingering and bypassing on a grand scale. Nevertheless, the stochastic approach is an important basis for approximations, correlations, and physical interpretations, including understanding and highlighting of the gaps and limitations of these interpretations. A partial solution to the problem of model uncertainty was suggested by Neuman (2003), and experimented with by Ye et al. (2004).

The Monte Carlo approach

In order to overcome the limitations of the first-order approximation, Wagner and Gorelick (1989) used Monte Carlo simulations (MCS), which have become the most popular stochastic optimization and decision analysis approach in management of groundwater and oil reservoirs. In this straightforward (though computer-heavy) method, multiple equally probable realizations (or worlds) of the uncertain parameter (distributed, perturbed hydraulic conductivity) are generated and employed in one or more configurations of remediation schemes; under this so-called stacking approach, the goal of the repeated simulation–optimization is to find an optimal setting (pumping and injection rates/scheduling, new well locations) that satisfies a set of constraints (plume containment) with minimum cost. In this particular work, Wagner and Gorelick searched for the set of remediation wells (in this case, only one remediation well) and minimum pumping sequences/rates that would contain the plume for all (170) realizations, which implies 100% reliability (i.e., r=1) or over-design. The authors also experimented with relationships between pumping rate in one well (which implies cost) and reliability.
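
A minimal sketch of the multiple-realization ("stacking") idea, not of the cited study itself: generate equally probable parameter realizations, evaluate a candidate pumping design against each one, and take reliability as the fraction of realizations in which the constraint (here a surrogate containment criterion) is met; requiring all realizations to pass corresponds to r = 1. The containment function and numbers below are hypothetical.

```python
# Sketch of the stacking approach: reliability of a candidate design is the
# fraction of equally probable realizations for which the constraint is met.
# 'simulate_containment' is a hypothetical stand-in for a flow/transport run.
import numpy as np

rng = np.random.default_rng(0)

def simulate_containment(pump_rate, log_K):
    # toy physics: higher conductivity requires more pumping to contain the plume
    required = 50.0 * np.exp(log_K - 2.0)
    return pump_rate >= required

def reliability(pump_rate, realizations):
    hits = sum(simulate_containment(pump_rate, k) for k in realizations)
    return hits / len(realizations)

realizations = rng.normal(loc=2.0, scale=0.5, size=500)   # equally probable log-K values
for q in (40.0, 60.0, 90.0):
    print(f"pumping {q:5.1f} m3/d -> reliability {reliability(q, realizations):.2f}")
```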

It should be mentioned, however, that in addition to the simplifications implied by the stochastic approach (mentioned above), MCS must assume certain statistical properties that are very difficult to evaluate from field data, and then require generating and running many pseudo-random simulations (500–1,000; see Orr 1993) on large, high-resolution grids, which results in an enormous computational burden for predictions alone and becomes practically prohibitive for optimization, unless far-reaching simplifications are made, as was done above (by Wagner and Gorelick) and by Guyaguler and Horne (2001), where only two parameters (permeability and porosity) were considered random, and only a limited number of MCS were performed. Of course, repeating this chain of simulations and optimization as soon as new information arrives is practically impossible under the current scheme.

In terms of decision-making, MCS can only deal with the evaluation of a proposed strategy or policy, and there is still the issue of how to find the strategy in the first place, i.e., how to optimize. There is a difference between MCS methods and optimization with multiple realizations; perhaps a two-step optimization process, where the optimal strategy is found first, followed by MCS, is a better approach. As will be shown later, such an approach implies a two-resolution decision support system.

Gorelick (1997) developed a combined simulation–optimization model to test the sensitivity of well locations and pumping rates under "optimal" design to uncertainty. Chance-constraints or reliability were not investigated, however. Different variations on the trade-off between cost minimization (or profit maximization), reliability, and risk, reflected by the relative number of realizations that satisfy the requirements, have been exercised since then from both optimization and decision analysis perspectives, with decision/control variables such as well locations and pumping rates. Massmann and Freeze (1987a, 1987b) developed a general design framework for a waste management facility based on risk-cost-benefit analysis, using a 2D analytical steady flow and advective transport model that showed the effect of uncertainty on management decisions, as well as the power and promise of the Bayesian approach. In the design of a groundwater control system for an open pit mine, Sperling et al. (1992) used MCS to calculate probability of failure (and hence, risk) as the number of realizations resulting in failure over the total number of realizations.

Morgan et al. (1993) used an integer/binary indicator approach to count realizations that violated specified constraints, such that an optimal pumping strategy in a capture zone design (pump and treat) for reliability r (0<r<1) is the design that satisfies rN realizations, where N is the total number of MCS. In their mixed integer chance constraint programming (MICCP), Morgan et al. (ibid) used linear programming and selective elimination of constraints to solve the plume containment problem at each level of reliability. Chan (1993) analyzed reliability as a function of the number of realizations, theoretically, using Bayesian statistics. Using MCS (again), Chan (1994) compared a reliability method similar to that of Morgan et al. (1993) (allowing (1−r)N simulations to violate the constraints) to a method that penalizes constraint violations in the objective function directly. Ranjithan et al. (1993) used ANN as a screening tool to identify the most critical realizations (worst-case scenarios) for capture zone design. Wagner et al. (1992) also used the equivalent varying penalty on constraint violations (in the objective function) as a function of expected reliability (see next section). Guyaguler and Horne (2001) used a utility function (see Freeze et al. 1992) to calculate the trade-off between risk and projected NPV for an operating oil field.

Stochastic vs. deterministic optimization

A comprehensive field-scale test of groundwater pump and treat (P&T) optimization was performed by the US DOD (see http://www.frtr.gov/estcp/estcp.htm and Zhang et al. 2004). The project evaluated the benefits of applying deterministic flow and transport models and optimization (i.e., simulation–optimization) versus a typical trial-and-error approach. The goal was to determine the best combination of well locations and pumping rates for a P&T system, with emphasis on long-term operating cost reduction and/or improved performance of these systems with respect to compliance objectives (e.g., achieving cleanup standards in less time). Transport simulation–optimization was compared with a trial-and-error approach for three sites in the US: one in NE Oregon (operating), one in Utah (operating), and one in Nebraska (planned). For all three sites, there were two groups applying optimization algorithms and one group applying trial-and-error as a control measure. In each case, the groups applying the optimization algorithms found improved solutions relative to the trial-and-error group. At least theoretically, the deterministic optimization used demonstrated substantial potential cost savings at all three sites—by 5–50% (average improvement of about 20%), compared to currently used trial-and-error practices. Despite the fact that uncertainty was not accounted for, this is already encouraging. Yet, the heavy reliance on a simplified flow and transport (PDE) model leaves some uncertainty about the actual potential improvement. It should be mentioned, however, that uncertainty in actually quantifying potential improvement applies to stochastic methods as well, since (as will be discussed later) the complete uncertainty could never be evaluated.

In the studies by Wagner et al. (1992) and by Wagner and Gorelick (1987), the optimal design obtained using a deterministic approach (using mean parameter values) was found to be significantly (about 20%) more expensive than the one obtained by the stochastic approach. At least in part, the increased cost could be a result of considering mean parameter values (or estimates) as the effective (or equivalent) values—e.g., the geometric mean in the case of 2D log-normally distributed hydraulic conductivity.
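
A quick numerical illustration (not from the cited studies) of why the choice of "mean" matters: for a log-normal conductivity sample, the arithmetic mean overstates, and the harmonic mean understates, the 2D effective value, which equals the geometric mean.

```python
# Illustrative only: arithmetic, geometric, and harmonic means of a log-normal
# conductivity sample; in 2D the effective conductivity is the geometric mean.
import numpy as np

rng = np.random.default_rng(1)
K = np.exp(rng.normal(loc=0.0, scale=1.0, size=100_000))   # log-normal sample

arithmetic = K.mean()
geometric = np.exp(np.log(K).mean())
harmonic = 1.0 / (1.0 / K).mean()
print(f"arithmetic {arithmetic:.2f}  geometric {geometric:.2f}  harmonic {harmonic:.2f}")
```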

To varying degrees, differences between costs projected by deterministic feedback control (using mean parameter values rather than effective flow parameters) and costs projected by stochastic dual control were found by Andricevic and Kitanidis (1990), and Lee and Kitanidis (1991). The smaller difference between deterministic and stochastic solutions in the former work, as well as the overall uncertainty reduction, is attributed to the dynamic optimization used by the authors. The "dual control" is due to (a) feedback from monitored heads and concentrations, and (b) updated parameter estimation (particularly of hydraulic conductivities). The latter was performed using an extended Kalman filter, which, in turn, improves/adjusts model predictions (the model being linked flow and transport equations/PDE) at every time step (and in later works, whenever new information arrives), thereby reducing both uncertainty and cost. (For a review of Kalman filtering in groundwater, see, for example, Eigbe et al. 1998.) The stochastic part in these works (due to parameter uncertainty) is based on asymptotic small perturbation approximations, which implies (a) small variance of the uncertain parameters (hydraulic conductivity, dispersivity) and/or state variables (heads, concentrations); and (b) solving relatively simplified (rather than real-world) problems. It is not clear, however, how well the extended Kalman filter could calibrate the deterministic (and erroneous) model to provide better predictions; therefore, it is hard to judge the source of the difference in costs between the stochastic and the deterministic (yet dynamic/adaptive) control. Despite the severe practical limitations of this approach, these attempts to use optimal dynamic control of aquifer systems represent a milestone that calls for further exploration and extension (later in the text).
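
As a minimal sketch of the measurement-update step underlying such filter-based updating (a linear Kalman update rather than the full extended filter, which would additionally linearize the flow/transport model about the current estimate), with a toy two-element state holding a head and a log-conductivity; all values are hypothetical.

```python
# Minimal (linear) Kalman measurement-update sketch illustrating the updating step
# used in dual-control schemes; an extended Kalman filter obtains H by linearizing
# the model about the current estimate. The state and numbers are toy values.
import numpy as np

def kalman_update(x, P, z, H, R):
    """Update state estimate x and covariance P with observation z = H x + noise."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - H @ x)              # corrected estimate
    P_new = (np.eye(len(x)) - K @ H) @ P     # reduced uncertainty
    return x_new, P_new

x = np.array([10.0, 2.0])                    # [head at a node, log-conductivity]
P = np.diag([1.0, 0.5])                      # prior covariance
H = np.array([[1.0, 0.0]])                   # only the head is observed
R = np.array([[0.05]])                       # measurement-error covariance
z = np.array([9.4])                          # new head observation

x, P = kalman_update(x, P, z, H, R)
print("updated estimate:", x)
print("updated covariance:\n", P)
```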

The advantages of stochastic optimization over deterministic optimization can be summarized as: (a) the inclusion of reliability and risk considerations (otherwise, the reliability is 50% and the attitude is risk-neutral); (b) the inclusion of the effect of uncertainty on cost; and (c) the implication for the worth of information, particularly the effect of uncertainty on risk and network design—pointing to areas where data could reduce cost or increase profit. In a recent work by Zheng and Wang (1999), running 200 MCS (of random hydraulic conductivity realizations) and searching for the best well locations and pumping rates under uncertainty in hydraulic conductivity, the authors found that when the variance of the log-conductivity was doubled, the mean optimal cost estimate remained almost unchanged (increasing from $2.1M to $2.3M), while the maximum cost (related to maximum reliability) almost doubled. Optimal well location was also incorporated in optimal groundwater remediation design by Wang and Ahlfeld (1994), using Hermite interpolation functions within the finite element solution (of the PDE), which is relatively limited compared to the integrated approach of Zheng and Wang. The latter approach was made possible by employing a highly efficient forward solution-updating procedure (though limited to linear flow systems) that eliminated the need for repeated (many thousands of) simulations (for creating the search space) as part of the optimization.

Even under the deterministic approach, major efforts have been made to reduce the intensive forward simulations (that create the state-space) within the PDE simulation–optimization process. For example, Gordon et al. (2000; in the context of optimal aquifer management under salinization conditions) used "state sensitivity" coefficients—the partial derivatives of heads and concentrations with respect to pumping rates, calculated off-line using a full-scale finite element simulator—at points relevant to the management problem, within the search algorithm (in their case, a bundle-trust algorithm). Other authors computed these state sensitivity coefficients (in the context of simulation–optimization) by direct differentiation (Chang et al. 1992; Ahlfeld et al. 1988a; Xiang et al. 1995). More aggressive shortcuts using sensitivity coefficients to overcome the computational burden due to optimization/search are discussed later in the text.
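
To illustrate what such state-sensitivity coefficients represent (this sketch uses simple rate perturbations rather than the direct differentiation of the discretized equations employed in the works cited above, which is cheaper): each column of the matrix holds the change in heads per unit change in one well's pumping rate. The simulator and numbers are hypothetical.

```python
# Sketch of state-sensitivity coefficients dh/dq obtained by perturbing each
# pumping rate and re-running a (hypothetical) simulator.
import numpy as np

def simulate_heads(q):
    # stand-in for a finite-element/finite-difference flow model
    A = np.array([[0.020, 0.008],
                  [0.008, 0.030]])
    return np.array([12.0, 11.0]) - A @ q

def state_sensitivities(q, dq=1.0):
    h0 = simulate_heads(q)
    cols = []
    for j in range(len(q)):
        q_pert = q.copy()
        q_pert[j] += dq
        cols.append((simulate_heads(q_pert) - h0) / dq)   # column j: dh/dq_j
    return np.column_stack(cols)

q = np.array([100.0, 80.0])          # current pumping rates (m3/d)
print(state_sensitivities(q))        # rows: head locations, columns: wells
```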

Searching for the best new well location has been a major challenge in oil reservoir management (Rian and Hage 1994; Aanonsen et al. 1995; Bittencourt and Horne 1997; Pan and Horne 1998; Centilmen et al. 1999; Guyaguler et al. 2000; Guyaguler and Horne 2001). The multiple-realizations approach was applied by Guyaguler and Horne (2001) in conjunction with the PUNQ project (e.g., Bush et al. 2002), where MCS with a single uncertain parameter (permeability) were followed by stochastic optimization, in an attempt to find the best well placement with respect to net present value (NPV) over the next 16 years. Due to the site complexity and nonlinear behavior, a complete set of 3D simulations was necessary to generate the search space in the optimization process; to overcome the prohibitive computational burden, only a few calibrated realizations were actually employed in the complete optimization process. The results indicated (a) up to 100% difference between projected production scenarios (with mean projected production of $250M); and (b) deviation of as much as 15% between the results of calibrated, statistically similar production simulations and a simulation using mean parameter values. Given the nonlinearity (in the parameters), the uncertainty in other parameters, and the uncertainty in the conceptual model itself (as well as in every interpretation and decision along the path of building the conceptual model), a non-optimal decision reached this way could deviate by an order of magnitude from the true optimum.

Optimizing with multiple objectives

Looking closely at subsurface remediation and oil reservoir management problems, it becomes apparent that most operations have more than one objective, and often several conflicting objectives, such as minimizing cost and maximizing reliability (or minimizing risk) or minimizing the contaminant mass remaining within a certain period/zone. In the works discussed in previous sections, multiple objectives were transformed into a single cost function either by a weighted linear combination (weighted sum) of the objective functions, or by turning the remaining objectives into constraints (and, as was noted earlier, like all constraints, the latter can be transformed into a penalty function added to the objective function). As was pointed out by Horn et al. (1994) and Erickson et al. (2002), penalties and weights have been problematic, at least with GA solutions, which are usually very sensitive to small changes in the penalty function coefficients and other weighting factors.
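
A minimal sketch of the two reductions mentioned above (a weighted sum of objectives and a penalty term standing in for a constraint); the sensitivity of the result to the weights and penalty coefficient is precisely what motivates the Pareto-based methods discussed next. All functions and coefficients are illustrative placeholders.

```python
# Sketch of the two common reductions to a single objective: (1) a weighted sum
# of objectives, and (2) a penalty term replacing a constraint. Illustrative only.
import numpy as np

def cost(q):            return 1.5 * np.sum(q)                      # pumping/treatment cost
def mass_remaining(q):  return 100.0 / (1.0 + 0.02 * np.sum(q))     # surrogate second objective
def containment_violation(q):                                        # > 0 means constraint violated
    return max(0.0, 120.0 - np.sum(q))

def scalarized(q, w_cost=1.0, w_mass=2.0, penalty=50.0):
    return (w_cost * cost(q)
            + w_mass * mass_remaining(q)
            + penalty * containment_violation(q) ** 2)

q = np.array([60.0, 70.0])
print("composite objective:", scalarized(q))
```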

Several GA variations have been used to find all possible trade-offs among multiple conflicting objectives. Such solutions are non-dominated, in that there are no other solutions superior in all attributes ("Pareto domination" of one solution by another means that the latter is superior in at least one objective and no worse in all other objectives). In the objective-function (or attribute) space, the set of non-dominated solutions lies on a surface (or a curve, if only two objectives exist) known as the "Pareto optimal frontier" (the trade-off surface, also called the "Pareto optimal set", "non-dominated frontier", "efficient points", and "admissible points"); for each point on that curve, none of the objective functions can be further minimized without increasing some of the remaining objective functions; every such value of a decision variable is referred to as Pareto optimal (the "best that could be achieved without disadvantaging at least one group").

Ritzel et al. (1994) applied two variations of GA, a Pareto GA (Goldberg 1989) and a vector-evaluated GA (VEGA; Schaffer 1984), to a multi-objective groundwater pollution containment problem with uncertain hydraulic conductivity, but a certain location of the contaminated plume; the latter is considered captured if the local hydraulic gradients (down-gradients) at given check pairs are directed inward (i.e., containment rather than remediation). The decision variables were pumping rates at pre-selected potential well locations. The goal was to minimize containment design cost, while maximizing its reliability. The reliability was represented by the fraction of the number of MCS (assuming a certain random hydraulic conductivity field) that led to containment indicated by gradients at all check-pairs. The cost consisted of both fixed well installation cost and variable cost associated with pumping. The Pareto GA used a ranking scheme that ordered the population according to the degree of domination of each containment design. The VEGA is based on a search for multiple solutions to multiobjective problems simultaneously by selecting a fraction of the next population based on the associated values of each objective function. As mentioned by Horn et al. (1994), the VEGA seemed capable of finding only extreme points on the Pareto front, where one objective is maximal, since it never selects according to trade-offs among objectives. Indeed, in their independent comparison, Ritzel et al. (1994) concluded that (a) the Pareto GA was superior to the VEGA in finding the largest portion of the Pareto optimal solutions; and (b) the trade-off curve found by the Pareto GA was similar to the one obtained by another optimization technique, the mixed integer chance constrained programming (MICCP) of Morgan et al. (1993). However, according to Ritzel et al. (ibid), the MICCP is much slower (it has to be solved many times in order to generate the trade-off curve) and, practically, does not allow including fixed cost components.

Cieniawski et al. (1995) investigated the performance of four multi-objective approaches on a groundwater monitoring problem, with the objectives of maximum reliability of the monitoring system and minimum contaminant plume size at the time of first detection. Reliability was represented by the percentage of simulated plumes (in MCS with uncertain/random hydraulic conductivity) that are detected by the well network (hence, maximizing reliability is equivalent to minimizing the number of undetected plumes). The contaminated area was defined by the area contained (in 2D) when a plume is first detected, summed over all plumes. A weighted GA, VEGA, Pareto GA, and a VEGA/Pareto-GA combination were compared against simulated annealing. The VEGA/Pareto-GA proved to be more computationally efficient and more successful in generating the largest portion of the trade-off curve (Pareto front) than the other GA methods.

Following Horn et al. (1994), Erickson et al. (2002) used the niched Pareto genetic algorithm (NPGA) to optimize groundwater remediation by pump and treat, with the goals of minimizing cost (including the water treatment cost that depends on flow rates) as well as the mass remaining in the aquifer at the end of the remediation period, while assuring plume containment in a homogeneous (uniform) hypothetical aquifer (i.e., deterministic optimization). Niching was suggested by Goldberg (1989) as a means to prevent the GA from converging to a single point on the Pareto front; the NPGA extends the traditional GA through the use of Pareto domination ranking (using Pareto domination tournament selection, where candidates for the next generation are selected) and fitness sharing (where the population is distributed over a number of different local peaks in the search space, with each peak receiving a fraction of the population proportional to the relative peak height; the winning candidates are those that promote the dispersal of candidate designs along the Pareto front). Erickson et al. (2002) considered three test scenarios of pumping rates from 2, 5, and 15 wells with fixed locations as decision variables. A single-objective GA (SGA) and a random search (RS) were applied along with the NPGA. With 15 decision variables, the NPGA was superior to both the SGA and RS, generating a better trade-off curve (in one case, with 75% less mass remaining than the RS solution). In the 15-well scenario, the NPGA generated the full span of Pareto optimal designs with 30% less computational effort than required by the SGA. In fact, the RS failed to find any Pareto optimal solution. In addition, for an optimal population size (of about 100 when limiting the run to 2,000 objective function evaluations), the NPGA was found to be robust to all other algorithm parameters (tournament size and niche radius).

Another milestone attempt to optimize sampling design (network design) under uncertainty in hydraulic conductivity was made by Wagner (1995b), who attempted to evaluate the trade-off between the various costs of different data types and the contribution of those data to the improvement of model reliability, where reliability is measured by the reduction of prediction uncertainty; the latter was quantified by the "size" of the estimated covariance of model prediction errors, expressed by the trace of the covariance matrix. In order to determine the covariance matrix of prediction errors, a first-order approximation was used to linearize the flow and transport equations (PDE with slightly perturbed hydraulic conductivity), enabling direct relationships between parameter uncertainty (expressed by its estimated covariance matrix) and prediction uncertainty. Parameter uncertainty and model prediction uncertainty were coupled with optimization, with the goal of identifying the mix of hydrogeologic information (heads, concentrations, and hydraulic conductivity "measurement" locations) that will minimize prediction uncertainty for a given data collection budget. One of the stumbling blocks in the methodology noticed by Wagner was that, while "overcoming" the inherent nonlinearity in the PDE (with respect to model parameters), a difficulty arises due to the need for a-priori uncertainty estimates of both parameters and model predictions; that is, the evaluation of these covariances implies the need for adaptive, dynamic (evolving) optimization—of the kind described in the next section.
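
A minimal sketch of the design criterion described above (not Wagner's actual formulation): for each candidate mix of new measurements, apply a linearized Bayesian update to the parameter covariance, and score the design by the trace of the resulting prediction-error covariance; the affordable design with the smallest trace is preferred. All matrices are hypothetical placeholders.

```python
# Sketch of scoring candidate data-collection designs by the trace of the
# linearized prediction-error covariance after a Bayesian parameter update.
import numpy as np

C_p = np.diag([0.4, 0.3, 0.2])               # prior parameter covariance (illustrative)
J_pred = np.array([[0.8, 0.1, 0.3],          # sensitivity of predictions to parameters
                   [0.2, 0.7, 0.4]])

def posterior_param_cov(C, H, R):
    """Linear Bayesian update of parameter covariance for observations z = H p + noise."""
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)
    return (np.eye(C.shape[0]) - K @ H) @ C

# two hypothetical designs, defined by what the new data are sensitive to
designs = {
    "heads only":    (np.array([[1.0, 0.0, 0.2]]), np.array([[0.05]])),
    "heads + conc.": (np.array([[1.0, 0.0, 0.2], [0.1, 0.9, 0.0]]), np.diag([0.05, 0.10])),
}
for name, (H, R) in designs.items():
    C_post = posterior_param_cov(C_p, H, R)
    score = np.trace(J_pred @ C_post @ J_pred.T)   # "size" of prediction uncertainty
    print(f"{name:13s} trace of prediction covariance = {score:.3f}")
```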

Wagner (1995b) compared two optimization methods: a branch-and-bound algorithm and a GA (another combinatorial search method, simulated annealing, was used earlier for network design optimization by Christakos and Killam 1993). A series of synthetic examples was used to explore the effect of different sampling scenarios. Trade-off curves were computed using a traditional GA and manual (sequential) changes in sampling budget. The numerical experiments emphasized the importance of network design for prediction reliability (in this case, using a rigid PDE model with simplistic/limited uncertainty) and the complex trade-off relationships between cost of information, types of data, and prediction uncertainty. It was found that the GA method identified near-optimal solutions with just a small fraction of the computational effort needed by the branch-and-bound algorithm. For the particular model used (including goal definition and all simplifications), it was concluded that the network design model was able to efficiently identify the mix of hydrogeologic information that maximizes information return for a given sampling budget. The next step would be the incorporation of such network design models in a comprehensive remediation optimization, or better, in a dynamic control system (described next). Other recent works worth mentioning on multi-objective optimization are those by Loughlin et al. (2001), Kumar and Ranjithan (2002), and Reed and Minsker (2004).

It is apparent from all of the above works that reducing uncertainty leads to reducing cost; hence, there is a need for continuous, optimal monitoring, updating, re-analyzing, and re-modeling. The main efforts in this direction have branched into (a) decision analysis and (b) dynamic, sequential optimization/control. In the following, the focus is on the latter.

Dynamic control of aquifer management

As has been implied earlier, optimization of dynamic systems under uncertainty is very difficult because the number of alternatives explodes in any realistic representation of the system, particularly where dynamic programming is employed in stochastic problems with numerous state variables. This phenomenon is often described graphically as the "dimensionality curse" (Philbrick and Kitanidis 1998; Yakowitz 1982).

Under the dynamic control approach, inverse modeling (i.e., updating/calibrating the uncertain parameters in a flow and transport model) is done in parallel with the optimization process, while feedback control rules enable changing of pumping rates in response to changing hydraulic heads. In the following, recent developments along this path are briefly reviewed.

Following Tse et al. (1973) and Bar-Shalom (1981), Andricevic and Kitanidis (1990) combined an adaptive dual control method of joint feedback and parameter estimation (using an extended Kalman filter) with stochastic control derivation, with the goal of minimizing remediation/containment cost while optimizing both sampling and control actions. By using the method of small perturbations, the objective function was divided into a deterministic and a stochastic part. Differential dynamic programming was used to compute the deterministic control (similar to Murray and Yakowitz 1979; Yakowitz 1986; Jones et al. 1987), while the solution of the stochastic part of the objective function was obtained analytically using stochastic control techniques applied to the governing flow and transport equations (PDE), with a challenging mathematical derivation that requires a twice differentiable cost function. The on-line parameter estimation fed into the flow equation and enabled updating of both state variable estimates and state covariances. The method was applied to a hypothetical 1D aquifer, and did not account explicitly for inequality constraints. In terms of the formulation of the objective function, the following highlights are worth mentioning: (a) the goal is to minimize the average (estimated, probabilistic) cost function; (b) the cost function is separable in stages, and according to the dynamic programming approach, whatever the initial state and initial decision are, the remaining (future) decisions/solutions should constitute an optimal solution based on the current state. That is, the problem is reduced to finding a current optimal control variable, given an objective function over the remaining (future) periods, and given the current information state, which includes all relevant a priori knowledge of the system and its history of observations and control; probabilistically, this information state is the conditional probability density function of the state at the current period conditioned on all past information; consequently, the cost function depends on uncertainty directly. The two hidden elements in this approach are: (a) the Bayesian approach, and (b) learning (from past experience).
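
Written in the standard stochastic dynamic-programming form (a generic restatement, not the specific derivation of the cited work), the optimal expected cost-to-go from the current information state I_k satisfies the recursion

J_k^{*}(I_k) \;=\; \min_{u_k}\; \mathbb{E}\!\left[\, g_k(x_k, u_k) \;+\; J_{k+1}^{*}(I_{k+1}) \,\middle|\, I_k \right],

where u_k is the control at stage k (e.g., pumping rates), g_k the stage cost, x_k the system state, and I_k the information state, i.e., the conditional distribution of the state given all past observations and controls; only the current-period control needs to be found, with future decisions presumed optimal given whatever information will then be available.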

Georgakakos and Vlatsa (1991) suggested a mathematical formulation of the stochastic control problem (of capture zone optimization) almost identical to that of Andricevic and Kitanidis (1990). Their work was classified as a discrete "open loop feedback" (OLF) control method, with the basic premise of using all available information to estimate all present and future uncertainties, solving the management problem over the designated control horizon, applying the optimal control action (pumping or injection) during the current time period, and repeating this process at the next decision time. Here too, the flow equations were treated as a dynamical state-space system using finite element and finite difference techniques, considering both transmissivities and boundary conditions as uncertain/random, and hence perturbed (slightly), in a simplified, two-layer aquifer system. The goal was to minimize pumping (and treatment) costs while maintaining hydraulic heads close to target values. The results (a) provided insight into system response under uncertainty; (b) quantified trade-offs between satisfying goals and minimizing uncertainty; and (c) emphasized the effect of management decisions at any stage on model predictions in the next step. The authors showed that the problem's complexity could be considerably reduced by splitting the performance index terms into mean head values and head covariance elements. In addition, explicit optimization with respect to mean heads, combined with sensitivity analysis, appeared to be an effective management approach.
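
A minimal sketch of the open-loop feedback cycle described above, with every model deliberately reduced to a toy placeholder: at each decision time the uncertain parameter is re-estimated from the latest observations, pumping is optimized over the remaining horizon, only the first period's action is applied, and the loop repeats.

```python
# Sketch of an open-loop feedback (receding-horizon) control cycle: re-estimate,
# optimize over the remaining horizon, apply the first control, observe, repeat.
# Every function and number here is a toy placeholder, not a real aquifer model.
import numpy as np
from scipy.optimize import minimize

def estimate_parameter(observations):
    return np.mean(observations) if observations else 1.0      # toy estimator

def horizon_cost(controls, param):
    heads = 10.0 - param * 0.05 * np.cumsum(controls)          # toy head response
    return 0.1 * np.sum(controls) + np.sum((heads - 9.0) ** 2) # pumping cost + head target

def observe(param, rng):
    return param + rng.normal(scale=0.05)                      # noisy feedback

rng = np.random.default_rng(3)
true_param, observations, horizon = 1.2, [], 6
for period in range(horizon):
    p_hat = estimate_parameter(observations)
    steps_left = horizon - period
    res = minimize(horizon_cost, x0=np.full(steps_left, 5.0),
                   args=(p_hat,), bounds=[(0.0, 20.0)] * steps_left)
    u_now = res.x[0]                        # apply only the current period's action
    observations.append(observe(true_param, rng))
    print(f"period {period}: estimate {p_hat:.2f}, applied control {u_now:.2f}")
```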

The work of Lee and Kitanidis (1991) extended the methodology of the earlier work of Andricevic and Kitanidis (1990) on optimal estimation and scheduling of aquifer remediation under uncertainty, by allowing more complexity to be introduced, particularly 2D spatial variability and various constraints. Again, the advantages of the method include (a) real-time (dynamic) feedback from measurements; (b) joint (on-line) parameter estimation–optimization; and (c) stochastic optimization that accounts for uncertainty. Subject to constraints and a specified reliability of meeting water quality requirements for a current period, the method minimizes the expected value of the cost in the next (remaining) periods. Here too, an extended Kalman filter was incorporated on line to improve the accuracy of the estimated state variables and parameters using updated information. A comparison between (adaptive) deterministic feedback control and the stochastic control formulated by Lee and Kitanidis showed a clear cost reduction using the stochastic control formulation, with an increasing difference as the uncertainty increases. Despite the improvement (on Andricevic and Kitanidis 1990) in its ability to handle more complex problems (2D rather than 1D problems and more general constraints), this very attractive method is not yet suitable for complex real-world problems. One of the important insights that emerged in this implementation of stochastic control is the "probing" and "caution" effects highlighted by Bar-Shalom (1981); the effect of the stochastic/perturbation part in the dual-control example of Lee and Kitanidis is that of sensitivity analysis and system excitation (the "probing" effect) followed by measurements and gaining information about system parameters, which resulted in a substantial improvement (similar to the effect of multiple pumping tests). According to the authors, the dual control anticipates how the actual (future) state will deviate from the estimated state currently in hand, and steers the system to mitigate possible losses (the "caution" effect). These two effects (of probing and anticipation/caution) imply yet another effect—that of learning.

Chang et al. (1992) used differential dynamic programming to determine the benefits of time-varying optimal groundwater pumping policies, with the goal of reducing groundwater concentrations (of a contaminant) to acceptable levels. It was demonstrated that static pumping policies would cost 45–75% more than policies that allow time-varying pumping rates, where the management model can "track" the contaminant plume.

Culver and Shoemaker (1992) extended the work of Chang et al. (1992) to include management periods that are greater than the simulation time steps (and hence, more practical and cost saving). Culver and Shoemaker (1993) used quasi-Newton approximated second derivatives of the transition function (which models/transforms the system from one state to the next; in the groundwater contamination case, it consists of the matrices generated by the finite element model at each time step) in order to reduce the number of iterations needed for convergence and the overall computational time in the differential dynamic programming.

A more substantial use of the second derivatives of the transition function in a constrained differential dynamic programming (DDP, in a complete form) was made by Whiffen and Shoemaker (1993), with respect to a general-case pump and treat remediation, including pumping scheduling and finding the best well location. The authors used these derivatives to generate feedback laws with the aid of the penalty function method (which converts the constrained optimal control problem to unconstrained optimization, and consequently allows flexibility in the response of the feedback laws to violation of constraints). These feedback laws describe relationships between required corrections of the control variables and weighted deviations of observed states from the predicted states. The goal was to find the relationships between the second derivatives of the transition function and evolutionary feedback laws, where the latter relate deviations from (and hence, required corrections to) the optimal pumping schedule to deviations of heads and concentrations from their anticipated states, through weights discovered/assigned to these state deviations. The method requires, as a first step, employing a deterministic model and an initial "optimal" pumping policy, which enables building the first transition function and finding relationships between control and state deviations. The feedback laws are obtained by adjusting the relative weight assigned to each penalty function (corresponding to each control variable).

Notice that if the evolutionary nature of the feedback laws is disregarded, the simple linear relationships expressed by the feedback laws (between observed deviation and required action) resemble the inverse of the response functions of Maddock (1973) and of other similar action-response functions (described later in the text). Notice also that while the transition function is derived from the flow and transport model (which is an elaborate response function model), the feedback law represents cause–effect rules (much like the inverse of the transition function) that compensate for model errors, regardless of the source of the errors. It is also interesting to note that the evolution of the feedback laws over time has an element of memory and learning (from past cause–effect relationships).
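Stripped of its evolutionary character, such a feedback law is simply a weighted linear map from state deviations to control corrections. The fragment below illustrates only that relationship; the weight matrix W is a hypothetical placeholder, whereas Whiffen and Shoemaker (1993) derive the weights from the penalty functions and the second derivatives of the transition function.

```python
import numpy as np

def corrected_control(u_planned, x_observed, x_predicted, W):
    """Adjust a planned control (e.g., pumping rates) in proportion to the
    weighted deviation of observed heads/concentrations from the model's
    predictions: u = u_planned + W @ (x_observed - x_predicted)."""
    deviation = np.asarray(x_observed) - np.asarray(x_predicted)
    return np.asarray(u_planned) + W @ deviation

# Illustrative numbers only (two wells, three observation points):
u_next = corrected_control(u_planned=np.array([40.0, 25.0]),
                           x_observed=np.array([10.2, 9.7, 11.0]),
                           x_predicted=np.array([10.0, 10.0, 11.0]),
                           W=0.5 * np.ones((2, 3)))
```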

The results of the simplistic example provided by Whiffen and Shoemaker (1993) proved to be robust and efficient in terms of reducing cost (by 4–51% compared with optimization without a feedback law) as well as required computer time, for up to 25% deviations from the mean of the parameter (i.e., uncertainty up to CV=0.25). The authors noted that larger fluctuations/uncertainty would have to be tested to determine further robustness. Other weaknesses are (a) exclusion of network design (or worth of information); (b) the dependency on a twice-differentiable transition function associated with the specific DDP method; and (c) lack of a link between uncertainty and reliability or risk. On the other hand, this is the first control/optimization method that frees itself not only from the need for a rigorous, well-defined statistical/uncertainty model (with assumed PDF, correlation structure, etc.) but also from both parameter and model errors, yet without neglecting uncertainty, and indirectly, reducing it, which makes this work a milestone that calls for continuation.

Coupling of optimization with network design has been explored by many investigators, including Maddock (1973; mentioned above), Massmann and Freeze (1987a, 1987b), Tucciarelli and Pinder (1991), and James and Gorelick (1994). In the context of dynamic control, Andricevic (1993) coupled sequential development of the groundwater withdrawal management with sampling strategies, dynamically. The withdrawal design is solved using a closed-loop stochastic control (dual control) method that includes anticipation of future observation locations, as well as decomposition of the objective function into deterministic and stochastic parts. The inclusion of uncertainty in the objective function leads to a trade-off between the cost of new wells and uncertainty reduction. The sampling network design method sequentially selects new measurement locations based on the combined effect of head (state variable) uncertainty at that location, and the sensitivity of the objective function to that uncertainty. More specifically, new sampling locations are selected based on the product of the sensitivity of the stochastic part of the objective function and the modeled (predicted) head variance at that location, i.e., the sensitivity (of the cost function) to head uncertainty is weighted by the magnitude of the prediction error—and vice versa (the prediction error is weighted by the sensitivity of the objective function to this error). In addition, the Bayesian approach was employed to condition new measurements on existing information. The head uncertainty is evaluated by a first-order, second-moment groundwater flow model, where the head uncertainty is linked to uncertainty in hydraulic conductivity, boundary conditions, recharge, and leakage. The hypothetical example showed that a sampling strategy designed solely on the basis of reducing uncertainty in hydraulic heads would not necessarily result in minimizing the overall cost function.
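The selection rule just described amounts to scoring each candidate location by the product of its predicted head variance and the sensitivity of the (stochastic part of the) objective function to that uncertainty, and sampling where the product is largest. The few lines below are a paraphrase only, with hypothetical numbers, not the actual algorithm of Andricevic (1993).

```python
import numpy as np

def next_sampling_location(head_variance, cost_sensitivity):
    """Rank candidate locations by (predicted head variance) x (sensitivity of
    the objective function to that variance) and return the best index."""
    score = np.asarray(head_variance) * np.asarray(cost_sensitivity)
    return int(np.argmax(score)), score

# Four hypothetical candidate locations
var_h = np.array([0.8, 2.5, 1.1, 0.3])   # predicted head variance
sens = np.array([1.2, 0.4, 1.5, 2.0])    # sensitivity of the cost to that variance
best, scores = next_sampling_location(var_h, sens)  # best == 2 for these numbers
```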

A similar conclusion on the best new observation well locations under uncertainty was drawn in the work of James and Gorelick (1994), who explored the worth of data collection for groundwater remediation design, extending the Bayesian approach of Freeze et al. (1992) to characterize (or locate) a contaminant plume and contain it via pump and treat. A hypothetical contamination problem was used, where uncertainty in plume location and extent is caused by uncertainty in source location, source release rate over time, and aquifer heterogeneity. The goal was to find the optimum number and best locations for constructing a sequence of observation wells that will minimize the total remediation and sampling cost. The simplified problem assumed 2D steady-state advective transport, simple sorption/retardation, and remediation costs proportional to discharge rate. The contaminant plume was considered binary (1 if any concentration is detected, and 0 otherwise). The authors did not consider well construction cost or its compensation by converting observation wells into pumping wells later in the process. The methodology was a step-wise approach; the prior analysis included MCS, considering hydraulic conductivity, source location along a landfill boundary, and source release time as random variables, in order to first map probabilities of plume location and estimate initial remediation cost. Preposterior analyses consisting of simple indicator kriging were used to evaluate the cost-effectiveness of a new sample (one at a time). Posterior analyses considered hypothetical new sample information, which led to revised predictions of plume location, associated probabilities, and remediation cost, using MCS (again) and discarding all the simulations that “missed” the “measured” (binary) concentration at the new sample location. This process repeated itself, adding monitoring wells until the cost of an additional monitoring well exceeded the expected value of sample information.
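The posterior-analysis step, in which realizations inconsistent with a new binary sample are discarded and the plume-probability map re-estimated, can be sketched as a rejection step over the Monte Carlo ensemble. The arrays below are hypothetical stand-ins, not data from the cited study.

```python
import numpy as np

def condition_on_binary_sample(realizations, cell, detected):
    """Keep only Monte Carlo realizations whose simulated plume agrees with a
    new binary observation (1 = contaminant detected at `cell`, 0 = not),
    then recompute the plume-presence probability map from the survivors.

    realizations : array of shape (n_realizations, n_cells) with 0/1 entries
    """
    keep = realizations[:, cell] == detected
    survivors = realizations[keep]
    prob_map = survivors.mean(axis=0)   # posterior probability of plume presence
    return survivors, prob_map

# Hypothetical 4-realization, 5-cell ensemble; a new sample at cell 2 detects nothing
ens = np.array([[1, 1, 0, 0, 0],
                [1, 1, 1, 0, 0],
                [0, 1, 0, 0, 0],
                [1, 1, 1, 1, 0]])
posterior_ens, p_plume = condition_on_binary_sample(ens, cell=2, detected=0)
```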

The results of James and Gorelick (ibid) indicated that the optimum number of observation wells was particularly sensitive to the mean hydraulic conductivity, as well as its variance (hence, to uncertainty), the annual discount rate, operating cost, and unit sample cost, but insensitive to the correlation length of the random hydraulic conductivity field. It was found that randomly located samples were not cost-effective, while, interestingly, zones of greater uncertainty in plume presence were not the best candidates for sampling locations. It is interesting to note that despite the gross simplifications and severe limitations, the sequential use of the Bayesian approach in the work of James and Gorelick (ibid) leads (naturally) to a gradual increase in resolution and improvement of the management model (both groundwater monitoring and decision making) regarding remediation under uncertainty. That is, new sample information decreases uncertainty and increases accuracy and resolution of the delineated plume, as well as the resolution in the state space, which enables moving both the monitoring system and the remediation closer to optimality. Moreover, the works by Andricevic, James and Gorelick, and others have made it clear that (a) characterization and monitoring could result in a return on investment even at advanced stages of aquifer management, and (b) modeling, optimization/control, and characterization are inseparable, and therefore should be integrated from the start in order to solve any real-world hydrogeologic problem.

Artificial intelligence in hydrogeology

Artificial intelligence (AI; also termed “soft computing” and “machine learning”) methods such as artificial neural networks (ANN), genetic algorithms (GA), fuzzy logic (FL), probabilistic reasoning, simulated annealing, and tabu search have been used increasingly in hydrogeology and reservoir management (e.g., Wong et al. 2002), frequently in the context of integration. Good reviews, examples, and many good references on artificial neural networks (ANN) and genetic algorithms (GA) are provided by several authors, including Goldberg (1989), Rogers and Dowla (1994), Ranjithan et al. (1993), Zheng and Wang (2002), and Boger (2002). In the following, only a few features are highlighted.

A few comments

1. Like any other model, ANN requires calibration or parameter estimation, which implies inverse solutions, particularly with respect to the assignment of (a) connection weights and (b) the number of neurons in the hidden layer. GA, conjugate gradient methods, and other optimization methods have been used to search for the best set of connection weights.

2. Efficient training of large-scale ANN was published by Guterman (1994), starting from non-random initial connection weights. Boger and Guterman (1997) published algorithms to optimize the number of hidden neurons, and to analyze the trained ANN for subsequent dimensionality reduction, leading to an optimal set of inputs after repeated retraining and dimensionality reduction. These developments were then successfully applied to modeling the operation of a large wastewater treatment plant (Boger 1992). Bhattacharya et al. (2003) augmented ANN with Reinforcement Learning, in order to shift the emphasis to goal-oriented learning, and close the loop of agent-environment decision processes (so far, only for surface-water management; more on this concept in the following sections).

3. With respect to GA, as was noticed by Guyaguler et al. (2000), some of the same characteristics that make the GA robust and powerful (e.g., the fact that it does not need to search from all states at all times) also make it slow and inexact in the refinement of the solution. Typically, the GA makes rapid initial progress during the search, but has problems locating the final optimal solution, and thus requires hybridization with another, local optimization method, such as the highly efficient Gauss-Newton or Levenberg-Marquardt algorithms (a minimal sketch of such hybridization follows this list).

4. Despite a common claim that GA is a “global optimization” technique, there is no proof or reason to believe that this is indeed so, and some examples testify to the contrary (e.g., Kessler and Goldberg 2004). Nevertheless, Maskey et al. (2002) discuss GA and other search algorithms aiming at global optimization. One of these algorithms, “adaptive cluster covering” (ACCO), resembles two levels of resolution, using concepts of grouping and clustering (discussed later).
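As a minimal sketch of the hybridization mentioned in point 3, the fragment below couples a deliberately plain GA with a local Nelder-Mead polish from SciPy; the objective is a toy function standing in for a remediation-cost evaluation, and none of the settings are tuned.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def cost(x):
    # Toy stand-in for an expensive remediation-cost evaluation
    return np.sum((x - 1.7) ** 2) + 0.3 * np.sum(np.sin(5 * x) ** 2)

def simple_ga(n_gen=60, pop_size=30, n_dim=2, bounds=(-5.0, 5.0)):
    """Very plain GA: tournament selection, blend crossover, Gaussian mutation."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, n_dim))
    for _ in range(n_gen):
        fit = np.array([cost(ind) for ind in pop])
        new_pop = []
        for _ in range(pop_size):
            a, b = rng.integers(pop_size, size=2)
            p1 = pop[a] if fit[a] < fit[b] else pop[b]      # tournament parent 1
            a, b = rng.integers(pop_size, size=2)
            p2 = pop[a] if fit[a] < fit[b] else pop[b]      # tournament parent 2
            w = rng.uniform(size=n_dim)
            child = w * p1 + (1 - w) * p2                   # blend crossover
            child += rng.normal(scale=0.1, size=n_dim)      # mutation
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    fit = np.array([cost(ind) for ind in pop])
    return pop[np.argmin(fit)]

x_ga = simple_ga()                                    # rapid but coarse progress
x_opt = minimize(cost, x_ga, method="Nelder-Mead").x  # local refinement of the GA result
```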


The works by Rogers and Dowla (1994) and Rogers et al. (1995; and later, Guyaguler et al. 2000, and Maskey et al. 2000) constitute a milestone in the use of AI tools in hydrogeology, particularly in finding a compromise between extensively complex and extensively simplified approaches by combining ANN and other “proxies” with GA to design optimal aquifer remediation and best new well placement. Rogers and Dowla ‘trained’ an ANN by using multiple deterministic flow and transport simulations of a complex aquifer under a pump and treat operation, then used the efficient ANN as a replacement (“proxy”) for the cumbersome, slow simulator. Due to the limited extrapolation power of the ANN, many model runs are needed for the training to cover the expected span of possibilities (in the search space), to enable optimization of pumping and injection schedules. Although the method used is deterministic, as pointed out by the authors, it could in principle become stochastic, by generating multiple realizations and running Monte Carlo simulations (MCS), and/or combining it with the method of Ranjithan et al. (1993) to find the most critical realizations. However, this would still result in either prohibitive computer power or suboptimal solutions. In any case, the works by Rogers and co-workers are a milestone and a step in the right direction.
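The proxy idea can be sketched in a few lines: run the expensive simulator on a set of candidate schedules, train a small network on those runs, and then search over the cheap surrogate. The sketch below uses a toy analytic function in place of the flow-and-transport simulator and a scikit-learn MLP in place of the ANN used by Rogers and Dowla; all names and settings are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def slow_simulator(q):
    """Toy stand-in for a flow-and-transport run: maps a pumping schedule q
    (two wells) to a cleanup cost. The real simulator would take hours."""
    return (q[0] - 30) ** 2 + 2 * (q[1] - 55) ** 2 + 10 * np.sin(0.3 * q[0])

# 1. Build a training set from many (expensive) simulator runs
Q_train = rng.uniform(0, 100, size=(400, 2))
y_train = np.array([slow_simulator(q) for q in Q_train])

# 2. Train the proxy ANN on the simulator's input-output pairs
proxy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
proxy.fit(Q_train, y_train)

# 3. Optimize over the cheap proxy (brute-force sampling here for brevity;
#    Rogers and Dowla used a GA at this stage)
Q_cand = rng.uniform(0, 100, size=(20000, 2))
best_schedule = Q_cand[np.argmin(proxy.predict(Q_cand))]
```

Because such a network interpolates much better than it extrapolates, the training runs must span the region of the search space the optimizer is allowed to visit, which is exactly the limitation noted above.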

There is a parallel between the use of ANN and other proxies, and earlier developments such as the response functions of Maddock (1973). Wagner (1995a) noticed that the neural network approach presented by Rogers and Dowla bears some similarity to the multiple regression approach used by Alley (1986), in which a regression model is developed to map the output model predictions as functions of the input pumping rates, and these regression equations are then used in an optimization model to determine optimal management strategies. Along this line, Landa and Guyaguler (2003) used sensitivity coefficients as proxies for a reservoir simulator, producing “response surfaces”, followed by kriging interpolation to enhance the search; since kriging implies smoothing, the method implies reducing the resolution of the search space.

Eidi et al. (1994) used a polynomial function as a proxy for a reservoir simulator, while admitting that accuracy becomes an issue when the model is complicated and there is an extensive or difficult-to-match production history. Local “production curves” or “well performance models” have also been used in some complex cases of two-phase oil–gas reservoirs as proxies for complex, highly nonlinear, time-consuming models, using cause–effect relationships between production, fluid composition, and pressures in producing and injection wells, over time, using curve fitting/regression (Stoisits et al. 1999). Again, although this expedites the computations by orders of magnitude, it has very limited extrapolation power.

A few conclusions: uncertainty, barriers, and partial solutions

The collection of the variety of current methods used for optimal management of groundwater and oil reservoirs could be organized and summarized roughly by the diagram shown in Fig. 1a. The most common subset of these methods is sketched in Fig. 1b. The adaptations of methods from the different disciplines of operations research, stochastic control theories, and artificial intelligence have enriched the field of hydrogeology with new insight, such as:

(a) the effect of uncertainty (even if just in one parameter) on optimal management and cost;

(b) the inseparability of the various components of optimal remediation management, such as optimal scheduling and best new well location for either pumping/injection or new monitoring wells;

(c) the inseparability between optimal management and characterization;

(d) how parameter uncertainty is related to reliability and risk;

(e) how parameter uncertainty translates to additional cost;

(f) the effect of probing the system, system anticipation, and the “caution” that follows it, as well as the similarity between the effects of sensitivity analyses, random perturbations, and response functions and their “inverse”—weighted feedback laws;

(g) the ability to compensate for unknown model errors by determining appropriate weighted feedback policies, particularly under dynamic feedback control;

(h) the hidden forms of memory and learning that exist in some statistical models (e.g., Bayesian statistics), particularly where recursive/evolutionary information processing takes place, as is the case in some dynamic control systems, and particularly where such processing results in corresponding feedback;

(i) the strength of Bayesian approaches in both estimation and uncertainty reduction.

Notice that the emphasis has shifted from model predictions to system anticipation, that is, the anticipation of the effect that a control action would have on the goal (the effect on the cost function). Likewise, sensitivity analysis (of predictions to parameter variations) has shifted to sensitivity of the cost/objective function to parameter and state uncertainties, which changes the experimental design and overall planning.

However, despite all of the powerful new methods and the progress that has been made in modeling, optimization, and control of uncertain hydrogeologic systems, the ability to solve real-world problems is still very limited. The main reason is that all the methods introduced so far rely heavily on solutions of a rigid model that consists of coupled partial differential equations (PDE), which originally required an accurate knowledge of all the hydraulic, geochemical, and thermodynamic properties of the heterogeneous aquifer in order to predict system behavior and enable problem solution. Obviously, if this requirement is not met, these equations are bound to mislead the modeler. The stochastic approach has been trying to overcome this fault, only to reveal (a) how wrong models are when disregarding uncertainty (e.g., Neuman and Guadagnini 1999), and (b) that the uncertainty could never be captured more than partially (Neuman 2003). Yet, modelers cannot depart from these equations because they reflect the basic physical principles of flow, transport, and energy transfer, help to understand cause and effect, and translate control action to state response, even if just roughly. Unable or unwilling to find a replacement for these equations/models, modelers accept that all model parameters, as well as the models themselves, are uncertain, and therefore random, or stochastic, and consequently end up with stochastic PDE and stochastic control. Even when assuming that the stochastic models capture the mean world/aquifer behavior and the uncertainty associated with it (a quite pretentious assumption), modelers are stuck with either overly simplistic models or with prohibitive computer power.

Notice that the most advanced stochastic control methods (above) cannot handle more than one random variable at a time, and cannot handle complex 3D geometries with data gaps, or multiphase flow, or chemical/biological reactions, which leaves all the above stochastic solutions merely as exercises that (at best) provide insight rather than solutions to complex real-world problems, unless a highly simplified model is being used (e.g., Lee and Kitanidis 1996).

Neuman (2003) admits that “Hydrologic analyses typically rely on a single conceptual model of site hydrogeology. Yet hydrologic environments are complex, rendering them prone to multiple interpretations and mathematical descriptions. Adopting only one of these may lead to statistical bias and underestimation of uncertainty”. Neuman (2003) and, later, Ye et al. (2004), proposed a Maximum Likelihood Bayesian averaging of a set of acceptable models in a framework that incorporates site characterization and monitoring data in order to enable optimal combination of prior information (knowledge and data) and model predictions. Neuman’s strategy is based on a comprehensive strategy suggested by Neuman and Wierenga (2003), in which they classify uncertainties as arising from “incomplete definitions of (a) the conceptual framework that determines model structure, (b) spatial and temporal variations in hydrologic variables that are either not fully captured by the available data or not fully resolved by the model, and (c) the scaling behavior of hydrogeologic variables”. Therefore, “There thus appears to be no way to assess the uncertainty of hydrologic predictions in an absolute sense, only in a conditional or relative sense” (Neuman 2003). In other words, modelers cannot truly quantify total uncertainty, only partial (e.g., parameter) uncertainty, in some relative sense; however, modelers could (and should) still strive to reduce uncertainty.

Fig. 1 a The union of all methods used for optimal aquifer management (ANN artificial neural networks; FL fuzzy logic; GA genetic algorithms). b A schematic view of the typical approach to aquifer optimization (NN neural networks; MCS Monte Carlo simulations; NPV net present value; PDE partial differential equation)

Based on the strategy of Neuman and Wierenga and on the well-established Bayesian Model Averaging (BMA; e.g., Hoeting et al. 1999), Neuman (2003) developed a methodology that combines the predictions of several competing models (deterministic or stochastic) and assesses their joint predictive uncertainty. The general effect of the BMA (as in any averaging) is that of decreasing resolution and bounding the uncertainty (reducing it with respect to the most uncertain model that is still acceptable). In the following section, a multiresolutional framework is introduced, which could integrate Neuman’s Bayesian approach with the advanced characterization/sampling and optimization/control (of aquifer management) described above in a natural and efficient way.
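The bookkeeping behind such averaging is simple to state: the combined prediction is the posterior-weighted mean of the individual model predictions, and the combined variance adds a between-model spread to the weighted within-model variances. The sketch below shows only this combination rule, with hypothetical weights, and is not an implementation of MLBMA itself.

```python
import numpy as np

def bma_combine(means, variances, weights):
    """Bayesian model averaging of K competing model predictions.

    means, variances : length-K arrays of each model's predictive mean/variance
    weights          : posterior model probabilities p(M_k | data), summing to 1
    """
    means, variances, weights = map(np.asarray, (means, variances, weights))
    mean_bma = np.sum(weights * means)
    # total variance = weighted within-model part + between-model part
    var_bma = np.sum(weights * variances) + np.sum(weights * (means - mean_bma) ** 2)
    return mean_bma, var_bma

# Three competing site models with hypothetical posterior weights
m, v = bma_combine(means=[12.0, 14.5, 13.2],
                   variances=[1.0, 2.2, 0.8],
                   weights=[0.5, 0.2, 0.3])
```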

One of the major contributions of the stochastic approach is in quantifying the effect of conditioning and uncertainty reduction via correlations among variables. As will be shown in the following sections, our proposed approach, which is also stochastic in nature, extends beyond linear statistical correlations to include many types of associations among variables on all relevant scales, in unique multiresolutional (MR) knowledge representations, which maximize the information hidden in interdependencies among these variables, on all levels of resolution, free from any particular single-scale model. By maximizing the extraction of information, the MR approach effectively reduces uncertainty and compensates for missing and corrupted information.
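One concrete way to see the difference between linear correlation and the broader associations referred to here is to compare the Pearson coefficient with a crude mutual-information estimate on a nonlinearly related pair of variables; the example below is purely illustrative and independent of any particular MR implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def mutual_information(x, y, bins=20):
    """Crude histogram estimate of mutual information (in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A strongly associated but non-monotonic pair: the correlation is near zero,
# the mutual information is clearly positive.
x = rng.normal(size=5000)
y = x ** 2 + 0.1 * rng.normal(size=5000)
r = np.corrcoef(x, y)[0, 1]
mi = mutual_information(x, y)
```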

The future optimization/control system

Review of current approach

The current structure of aquifer management and characterization is depicted in Fig. 1b. This is a structure of serial, segregated, and isolated “disciplines” that process the information from geophysics to geology and site characterization, filtered into flow and transport models, and then (finally) to optimization/planning.

Figure 1b illustrates the process of current model construction, with the different subsystems that participate in the planning, analysis, interpretations, modeling, and optimization of groundwater remediation and aquifer management in general. Among the items not included in the figures are remediation alternatives, well pattern design, injection and pumping, airflow or air sparging, slurry walls, and well construction, all of which are interdependent and are linked to decision making.

Blocks 1 and 2, and, to some extent, Block 3, represent site characterization, construction of the conceptual model(s), and subsequent aquifer management. Note that while geophysics (Block 1) dominates the initial characterization stage, resistivity, micro-gravity, gamma rays, and various waveform methods are used at all stages of aquifer remediation development. Typically, the conceptual model (Block 3) of the flow and transport characteristics at the site is an undeclared part of the modeling; this is where all the geology is filtered, upscaled, and translated into the model’s parameters, which inherently entails averaging and discarding of information (acting as a low-band filter), including small-scale features that may be crucial (in which case, their large-scale influences would be modeled as new large-scale parameters—e.g., a dispersion coefficient). Most of the assumptions and decisions related to the representation of a contaminated aquifer are made at the conceptual model stage, and are subject to the modeler’s understanding and experience.

Block 4 is the current quantitative “brain”/predictor, typically a solver of coupled PDE plus constitutive relationships, or, less commonly, cell models (implying ordinary differential equations, or ODE, representation) and simplifying proxies (as mentioned above). The embedded upscaling in these models, and the loss of information associated with it, obstruct the ability of inverse modeling to feed back from the groundwater model (4) to the geological model (2), except for special cases where a certain disparity exists between the measuring window and the scale of the geologic feature, and where only a piece of the puzzle is missing (e.g., in a pumping test, with an appropriate monitoring system), or in dual-porosity systems where the rock properties are well characterized on all scales. Typical inverse or calibration procedures determine some local aquifer parameters that fit a particular (and hence, uncertain) model. Subsequent interpretations of geologic features based on inverse modeling or model calibration are, therefore, speculative. This is why, even in a relatively simple aquifer test, where analytical inverse procedures are used to determine aquifer parameters on a local scale, problems of identifiability and nonuniqueness already emerge and hinder parameter estimation, while geologic interpretations based on such tests are ambiguous and frequently speculative.

Though the need to integrate characterization and aquifer management has long been recognized, progress in this direction has been slow, and fragmented results still dominate the field. The main reason for this is the need to translate information among the subsystems—from geophysics to geology, from geology to conceptual models, and then to flow and transport properties (on a particular scale), and to perform all of these translations on different scales of information, with different geometric and stratigraphic representations, and with overwhelming uncertainty at each stage.

Neuman and Wierenga (2003) proposed a systematic and comprehensive approach that considers all the above stages of model building, including site characterization, hydrogeologic conceptualization, development of the conceptual-mathematical model structure, and parameter estimation based on monitored system behavior, while accounting jointly for the uncertainties that arise at each stage. As was mentioned above, their analysis confirms the inability to assess prediction uncertainty in an absolute sense—only in a relative sense; nevertheless, the insight obtained and the subsequent corrective measures to reduce uncertainty justify striving to quantify it, even if only in a relative sense. Ultimately, however, the goal is to reduce uncertainty in order to reach the true optimum—whether uncertainty can be assessed or not.

In an example of heap leaching management of a relatively well-defined and controlled hydrogeologic operation, Orr (2002b) and Orr and Vesselinov (2002) modeled basic leaching processes (flow and transport) in hypothetical heaps (constructed geostatistically based on knowledge of the effect of heap construction methods on heap structure), in order to recommend heap structure and irrigation scheduling and patterns that would maximize leaching efficiency. This exercise showed that despite the power, beauty, accuracy, and sophistication of the high-resolution finite element, two-phase flow and transport models used, the models could only predict general behavior patterns—unable to provide everyday optimal heap leaching management. The only tool that could possibly provide such daily optimization would be a closed-loop control system, preferably intelligent, on several scales, using high-resolution resistivity (HRR; see Fink 2000, 2001, and Ferré et al. 2004) to map moisture variations within the heap, over time.

The future control system for hydrogeology

Given all the milestones and knowledge accumulated to date, it is possible to shape the future control/optimization system that will provide the needed breakthrough from pure sciences to practice, and solve real-world hydrogeologic problems within a reasonable budget, time, and computer power. How could all of the accumulated knowledge and new insight be integrated, while recognizing the elusive uncertainty and unavoidable limitations? This question has been asked in other areas of engineering, particularly the area of robotics and intelligent control. In fact, it seems that every problem in the world ends (or should end) with optimization, just like a robot that needs to learn and move and find the best path from point A to point B.

One of the first attempts to combine machine intelligence (particularly ANN and FL) with real-time control (i.e., fundamental intelligent control) in water resources was made by Lobbrecht and Solomatine (2002) for urban flood control. Formally, intelligent control is the intersection between operations research, control theories, and artificial intelligence (illustrated in Fig. 2), benefiting from the advantages of these methods and eliminating their limitations. Under the intelligent control framework, decision analysis, games, image and pattern recognition, adaptation to uncertain media, self-organization, and planning and scheduling of operations are all included without requiring a preferred mathematical model. Much of the information is presented in a descriptive manner, and the initial assumptions and conceptual models are challenged during the process of problem solving, and then during the process of control.

The leading methodology within intelligent control is the multiresolutional approach, specifically, the multiresolutional decision support system (MRDS; Meystel 1986, 1991, 1995, 2003; Meystel and Uzzaman 2000; Meystel and Albus 2002), which is capable of controlling “hard to compute”, complex, multi-scale, hierarchical systems and processes such as control (planning and optimization) of characterization and remediation of contaminated heterogeneous aquifers. In particular, the MRDS is capable of continually integrating hydraulic and chemistry data with geologic and geophysical information on all relevant scales and system levels, in a unified, adaptive, multiresolutional representation built to direct the aquifer management to its optimal goals.

This future system will integrate the inseparable site characterization and aquifer management, and will optimize control of all groundwater remediation operations, including tasks such as optimizing new well placement/location, pumping and injection scheduling, real-time in-situ crystallization (Ziegenbalg 2000), in-situ leaching and heap leaching, and managing reactive barriers or biological curtains, with goals such as minimum cost, maximum yield, and maximum information. The use of the autonomous multiresolutional (MR) approach in unmanned machines, spray casting robots, and power plants has resulted in substantial reduction of data complexity and substantial increase in computation efficiency (by several orders of magnitude). Other advantages of the MR approach are the ability to integrate interpretations and predictions of existing models, such as PDE-based models, geostatistics, and various artificial intelligence methods, within a multiresolutional knowledge structure. The latter accounts directly for various cross-associations among the different variables, on different scales and levels of resolution—not just the statistical linear dependence (i.e., correlations) employed in geostatistics. Consequently, the future tool will not be limited to the rigid serial structure of Geophysics → Geology → Modeling, and thus could benefit from the advantages of such a narrow modeling path, while eliminating its limitations, including problems of upscaling (from the geologic model to the hydrologic model), feedback (from the flow and transport model back to the geological model), misrepresentation (by uncertain models with uncertain parameters), and overall loss of information. This effective tool will be based on the original multiresolutional methodology, in a manner similar to other successful applications in the areas of robotics and power plant optimization² (Meystel and Albus 2002; Albus et al. 1993; Meystel and Uzzaman 2000; Corson et al. 1995). MRDS has been developed over the years as a concept for planning and control of large, complex, and under-represented systems. These systems are known for their decomposability and the advantage of considering the results of decomposition as a hierarchical system. Such hierarchical structures dominate both natural phenomena and man-made operations.

Fig. 2 Intelligent control as the integration of three disciplines (Saridis 1985)

In the multiresolutional world of interdependent and nested hierarchies of spatial scales, time scales, and management of operations, both knowledge and decision-making can be processed in the most efficient way. The fact that many natural geologic media tend to arrange themselves in hierarchical scales of heterogeneity has been established and addressed in many publications—e.g., Cushman (1984, 1986, 1990), Dagan (1986, 1989), Mei and Auriault (1989), and Neuman (1990). Such natural hierarchical organization calls for an MR knowledge representation. In fact, model structure, particularly boundary conditions, is simply a low-resolution feature in a multiresolutional world. In that sense, the maximum likelihood Bayesian model averaging (MLBMA) approach suggested by Neuman (2003) implies transitions from high- to low-resolution representations.

In terms of aquifer management, particularly remediation operations, the MR hierarchical management system can be sketched as shown in Fig. 3, with three levels of resolution: at the highest level of the hierarchy, the remediation management is seen as an inseparable whole; at the lower levels, the remediation management is seen as sequential-parallel processes. Notice that this is in contrast with the serial process shown in Fig. 1b.

At the lowest level of the hierarchy, each of the processes is decomposed into components or sub-processes. In fact, all systems can be represented in this way: as a hierarchy of processes and sub-processes. Indeed, groundwater remediation, oil reservoirs, in-situ and heap leaching operations, and water resources in general are hierarchical in several ways. These complex systems include subsystems such as the vadose zone and aquifers, hierarchies of geologic media (from formation-scale to below pore-scale), sub-subsystems of wells and well drilling, and time scales that span between minutes (during drilling) and many years of remediation operation.

The multiresolutional architecture

There are multiple advantages in representing the World as a set of sub-Worlds, each with its individual scope and individual level of detail. Constructing this representation and using it for the purposes of decision-making is called multiresolutional analysis. Tools such as artificial neural networks (ANN), fuzzy logic (FL), and genetic algorithms (GA) have been used as computational components within the MRDS structure (e.g., Meystel and Albus 2002), but none is essential in the system. Other computational alternatives can be chosen, such as game-theoretic algorithms of stochastic approximation instead of ANN; JSM logic instead of FL; and evolutionary programming instead of simple GA (e.g., Meystel and Andrusenko 2001).

The future hydrogeological management system will be based on computational architectures that consist of one or multiple intelligent agents capable of making optimal decisions for water resources management. In simple terms, the multiresolutional decision support system (MRDS) could be conceptualized as a pyramid of knowledge and decision-making, with the highest resolution at the bottom (the raw data) and the lowest resolution at the top (the overall generalized understanding of the phenomena or process, and the decision-making level). The MRDS consists of two major parts: (a) learning/modeling (bottom-up in the pyramid), and (b) search/optimization (top-down). The learning is done by generalization of experiences, which leads to the evolution of acquired knowledge in a multiresolutional (MR) representation. This includes consecutive operations such as collection of experiences (storing, selecting, enhancing, and clustering experiences) → hypothesis formation → generation of rules → emergence of concepts → relational database → entity-relational model formation, all of which repeat each time a new experience arrives (Meystel 1997, 2002). The search/optimization/planning is done via instantiation (or refinement). The process of planning consists of choosing the desirable behavior by anticipating admissible alternatives among possible behaviors, and selecting the best of them by comparing tentative trajectories in the state space. This process starts from the goal/objective at the top, determining a feasible trajectory there → “enveloping” a bounded domain at the level below (higher resolution) → searching for the optimal string at that level, and repeating this top-down process until reaching the optimum at the highest resolution (e.g., Meystel 1995, 1997). This process is shown in Figs. 4, 5, 6, 7, and 8, below (more on these in the following). In hydrogeology, this “high resolution” would typically end at pumping and injection wells, although the knowledge database includes higher spatial resolutions.

Fig. 3 Nested hierarchies of aquifer management (schematic, partial)—a proto-image of MRDS

As a goal-oriented model, the MRDS consists of control loops, as shown in Figs. 4 and 5, close in spirit to the dynamic control loops described by Kitanidis and others (as discussed earlier), where the flow and transport (PDE) models are now included in the world model of the MRDS. Here, the learning (from experiences) is focused on the goal (e.g., minimum cost), while building its provisional models and improving them with each iteration in the control loops. Each of these elementary loops of functioning acts like a goal-oriented agent that tries to shift the system to its optimal state. In other words, the MRDS consists of building highly efficient multiresolutional Knowledge Representations (World Models based on all data, and including devices for unsupervised learning) with closure at each level of resolution (Figs. 4 and 5). It employs an MR Behavior Generation Subsystem (for Planning and Control), including the “probing”, anticipation, and “caution” principles mentioned earlier, in an efficient, MR way. Meystel and his co-workers (e.g., Meystel and Albus 2002) have shown planning and learning to be joint processes in the MRDS architectures, which provide for rapid and extremely accurate functioning, with superb computational efficiency.

Fig. 4 Nested hierarchy of intelligent systems

Fig. 5 Elementary loop of functioning at a level (an agent)

Thus, the future decision support system (DSS) will be based on (a) learning and knowledge gained by organizing information, storing and generalizing experiences, and (b) construction of decisions (including plans and controls) that ensure functioning of a goal-oriented system with self-improving performance. The MR approach gives an opportunity to minimize the computational complexity in a subset of DSS that organizes knowledge through generalization and instantiation/refinement, and uses nested MR search for converging to the best solution. This concept, which was introduced for planning and control purposes by Meystel (1986), has been explored in depth in subsequent works (Meystel 1991, 1995, 1997, 2002, 2003; Albus and Meystel 2001; Meystel and Albus 2002). More than all other machine-learning techniques, the MRDS, particularly the MR generalization and search, seems to imitate the human brain in the processing of information and decision-making.

Yet, applying these advances to aquifer management presents major challenges due to (a) the multidisciplinary and nonlocal nature of aquifer systems, with multi-dimensional complexity; (b) significant information gaps; and (c) control variables limited to a single scale. Nevertheless, given the richness of geophysical information on several scales, hidden relationships between geophysical and hydraulic/transport responses could be discovered, as has been partially exercised using geostatistics.

The hydrogeologic control system of the future will break through the technical barrier presented by the rigid representation process epitomized by the traditional groundwater models (as illustrated in Fig. 1b). The latter will be used mainly as preliminary approximations, thus freeing the system from its rigid structure and exaggerated expectations—in particular, the expectation to capture or represent the full complexity of the real world and provide reliable predictions. This approach has not been possible or even conceivable until now. The success of the new development will be measured by the degree to which it can operate independently of the operational models included in it.

The reduction of complexity via reduction of “multiplicity” can only be done by virtue of grouping and representing the group by a single image and/or symbol. This semiotic principle of substituting sets of information units by a single information unit has emerged because of the need to reduce computational load (Meystel 1986, 1995, 1997, 2002). All of these processes of planning and control are suitable for the new development of MRDS for aquifer management.

The system of representation, based upon recursive grouping/decomposition (see below), incorporates and uses the algorithms of generalization (bottom-up) and instantiation (top-down) in different incarnations that depend on factors such as the information or the subsystem where the results are applied. Thus, the learning system must employ the same tools: labeling the entities (e.g., clusters) in order to deal with concise notations (symbols), grouping the entities, and decomposing them if information details are required. Learning systems use the same computational mechanisms.

Fig. 6 Conceptual structure of multiresolutional consecutive refinement

Fig. 7 Top-down refinement: an illustration of the MR-S3 algorithm

MRDS presumes a multiresolutional architecture of large/complex systems that typically include nested hierarchies, as implied by Fig. 3 and shown in Fig. 4 (ibid). Nestedness is a property of the elementary loops of functioning (ELFs), shown as a joint nested system in Fig. 4 and as a single loop in Fig. 5. It can be seen that the loop of closure of a level of resolution contains WORLD (ENVIRONMENT) → SENSORS → SENSORY PROCESSING (PERCEPTION) → WORLD MODEL (Knowledge Representation) → BEHAVIOR GENERATION (Planning-Control) → ACTUATORS → and back to the ENVIRONMENT.
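Read as pseudocode, the closure of a single resolution level is a sense-interpret-model-plan-act cycle. The skeleton below names the stages exactly as in the loop above, with every component left as a hypothetical stub; it is a structural sketch, not a working controller.

```python
def elementary_loop(world, sensors, perception, world_model, behavior, actuators,
                    goal, max_cycles=100):
    """One level's loop of closure: WORLD -> SENSORS -> SENSORY PROCESSING ->
    WORLD MODEL -> BEHAVIOR GENERATION -> ACTUATORS -> back to the WORLD.
    Every argument stands in for a real subsystem at that resolution level."""
    for _ in range(max_cycles):
        raw = sensors(world)                 # measurements (heads, HRR, chemistry, ...)
        state = perception(raw)              # filtering / interpretation
        world_model.update(state)            # learning: revise the representation
        if world_model.goal_reached(goal):
            break
        action = behavior(world_model, goal) # planning/control at this level
        actuators(world, action)             # pumping, injection, sampling, ...
    return world_model
```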

The multiresolutional analysis (a) allows development of multiresolutional hierarchies, (b) protects from paradoxes, (c) allows for inter-level disambiguation, and (d) determines complex relationships among variables. As a dynamic learning and control system, the MRDS combines planning with on-line error compensation; it requires learning of both the system (e.g., remediation) and the environment (surface operations, human interaction) to be part of the control process. All of these features reduce systematic and random errors on all time scales, and hence reduce overall uncertainty.

A few computational blocks

Algorithms of multiresolutional knowledge representation employ procedures of (a) selective space reduction by focusing attention (FA); (b) searching for the group candidates using combinatorial search (CS); (c) lower-resolution object formation by grouping (G); and (d) decision-making (making a choice among alternatives). The triplet FA–CS–G (or GFACS) shown in Fig. 6 works as an algorithm of consecutive refinement (multiresolutional search in the state space, or MR-S3) if considered top-down (as shown in Fig. 7), and as generalization (learning) if considered bottom-up, when the triangles of the high- and low-resolution levels in Fig. 6 are swapped. In the top-down refinement case, combinatorial search is performed in order to find one string (minimum cost) out of multiple possible strings, while grouping amounts to constructing an envelope around the vicinity of the minimum-cost string, as shown in Fig. 7. In the learning/bottom-up case, grouping implies generalization. The MR-S3 shown in Fig. 6 also describes nested dynamic programming (NDP).²
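A one-dimensional caricature of the top-down refinement can make the mechanics concrete: sample the current envelope coarsely (combinatorial search), keep the best candidate (grouping around the minimum-cost string), shrink the envelope around it (focusing attention), and repeat at higher resolution. The choice of cost function and the factor by which the envelope shrinks below are arbitrary and purely illustrative.

```python
import numpy as np

def mr_s3_search(cost, lo, hi, levels=5, points_per_level=11):
    """Multiresolutional search in a 1-D state space: at each level, sample the
    current envelope coarsely, keep the minimum-cost point, then tighten the
    envelope around it and repeat at higher resolution."""
    for _ in range(levels):
        grid = np.linspace(lo, hi, points_per_level)   # coarse discretization
        costs = np.array([cost(x) for x in grid])
        best = grid[np.argmin(costs)]                  # minimum-cost candidate
        half_width = (hi - lo) / 4.0                   # new, tighter envelope
        lo, hi = best - half_width, best + half_width
    return best

# Toy cost with several local minima; the nested search homes in on one optimum
f = lambda x: (x - 2.3) ** 2 + 0.5 * np.sin(8 * x)
x_star = mr_s3_search(f, lo=-10.0, hi=10.0)
```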

Fig. 8 Block-diagram of the decision-making process in MRDS

² Optimization cannot exist without a representation, which is a model/simulator based on characterization.


Comment 1. It was demonstrated that for each problem, there exists a set of resolutions that minimizes computational complexity.

Comment 2. As the resolution increases, the set of variables may change. Transformation of variables from level to level is performed as a part of Descriptive Information Analysis.

Information processing in MRDS

The building of the computational blocks of this new representation, and the subsequent optimization/control, are shown in Fig. 8, which includes three distinct fields of computational activities within MRDS:

Field I—Collecting information
This is the collection of sources of information, including all possible field data and their interpretations, and their storage in the required zone. This field incorporates as many models of knowledge representation as feasible (geophysical interpretations, geological models, flow and transport models, proxies, etc.). As these models are constructed in different fields of knowledge, and as they vary in their geometric and stratigraphic representations and coordinates, they are heterogeneous. However, the outputs of the various laws, models, domain expert experiences, and relevant direct operational observations are stored in a homogeneous (unified) form: they are presented as meaningful implications that allow for interpretation by a human operator. This integration of heterogeneous sources of information, which is the major challenge in hydrogeology, is implemented in the MR data and knowledge representation; in the schematic description of Fig. 8, it is included in both Field I and Field II.

Field II—Maintaining the multiresolutional state space
Here, the outputs of multiple models and the results of testing are integrated in the multiresolutional form. All stored information has an attached probability tag that could originate from measurement errors, from geostatistical interpolation and other statistical models (e.g., Bayesian updating), or from fuzzy logic. Getting into the same cluster can reinforce the “likelihood” of the concrete implication, while getting into different clusters can reduce the validity that was allocated to the statement in the information repository prior to clustering. This is consistent with the approach of changing the model (as a process of learning) rather than calibrating a pre-determined model.

Field III—Searching and disambiguating search results
In this field, the multiresolutional search in the state space is run (S3 search). The process is conducted both top-down and bottom-up, which allows for rapid disambiguation and convergence to the correct representation (thereby maximizing the use of information and reducing uncertainty). The search (optimization process) follows the scheme in Fig. 7. The output (optimization) depends on the specific goal. The answer to any question related to the desired goal is given by MRDS in the form of a set of recommended modes of operation that provide for minimum cost (or other objectives) and the measure of probability of this evaluation (e.g., in the form of an uncertainty zone).

Notice that the three fields in Fig. 8 are not temporal “stages” of computation, nor consecutive “phases” of it, but rather three “work fields” that function independently and that are attached to each other by interfaces that can be organized depending on the particular needs of the process. It has been shown that MR-S3 search is the most efficient computational tool currently available. Storing and processing information in a multiresolutional form not only supports the most efficient processing algorithms but also minimizes possible discrepancies and contradictions in future results and recommendations.

In case a particular model proves to be valid for a particular subsystem, it should be included within the set of information flows of Field I. The ability of the MRDS to include a particular operational model of the system (or subsystem) enables judgment of the significance of the model’s contribution to the goal, and enables a comparison of this model with alternative models.

Conclusions

Recent developments in stochastic hydrology and adaptations of advanced methods from the disciplines of operations research, stochastic control, and artificial intelligence in hydrogeology have provided insight into (a) the effect of uncertainty on optimization of aquifer management; (b) the inseparability of information gain (e.g., with drilling a new well) and optimization; (c) the relationships between uncertainty, reliability, and risk; (d) the possible use of proxies (ANN, response functions) to replace complex PDE models in order to enhance the search and increase computational efficiency; (e) the importance of dynamic feedback control/optimization; (f) the element of memory or learning through recursive, evolving equations in such control systems; and (g) the effect of probing the system (by perturbing the input), learning it (through recursive functions), anticipation (hence, caution), and reducing uncertainty (via weighted feedback) without actually quantifying it. The adaptation of control theories has shifted our emphasis from model sensitivities and general uncertainty reduction to sensitivity of the cost function (to parameter and state variable uncertainties), with corresponding implications for experimental design and planning.

Despite all of these remarkable new developments, actual solutions of complex real-world problems in hydrogeology have been quite limited. The main reason is the ultimate reliance on first-principle models that lead to complex, highly uncertain, distributed-parameter partial differential equations (PDE) on a given scale. The addition of uncertainty, and hence stochasticity or randomness, has increased insight and highlighted important relationships between uncertainty, reliability, risk, and their effect on the cost function. However, it has also (a) introduced additional complexity that results in prohibitive computer power even for just a single uncertain/random parameter; and (b) led to the recognition of our inability to assess the complete uncertainty even when including all model uncertainties. Thus, modelers are left with the urgent need for deterministic solutions in a stochastic world, with limited ability to assess the uncertainty, while reducing it is costly.

Meanwhile, recent developments in intelligent systems have provided new tools that can be adapted in hydrogeology. These tools and features, which have been developed and employed extensively in the areas of robotics and unmanned vehicles, can be summarized as follows:

1. The main function of intelligence is global optimization of complex systems with inadequate, incomplete, or insufficient representation, and under incomplete specifications of an uncertain environment (typical of real-world problems in hydrogeology).

2. Intelligent control has been created to combine the advantages and eliminate the limitations of operations research, control theories, and artificial intelligence; it has proven to be a computationally efficient procedure for directing a complex system towards its goals.

3. Intelligent control systems (MRDS) employ generalization, focusing attention, and combinatorial search algorithms as their primary operators, which leads to a multi-level structure. In practice, the number of levels of control depends on the size of the problem (typically, 3–6 levels).

4. The multiresolutional system of knowledge organization has been shown to reduce representation complexity and computational load (by several orders of magnitude).

5. In MRDS, planning/control problems are solved via MR-S3 search with randomized state-space discretization (or tessellation), with a density of points that reflects the uncertainty in information as a measure of stochastic optimization. MRDS uses different randomized sampling at different levels of resolution; together, these levels have demonstrated efficient multiresolutional strategies of decision-making under uncertainty.

6. Representations at each level of resolution do not require construction of any analytical models besides the MR storage of experiences obtained during prior functioning. However, in hydrogeology, where data are often sparse, such models are needed at least in the initial phase.

7. Tools such as ANN, FL, and GA can be used as computational components within the MRDS structure, although none is essential in the system. Other tools, such as multiresolutional versions of abductive inference and plausible reasoning (finding implications without full evidential support), lead to different implications when performing at several resolution levels simultaneously.

8. Multiresolutional knowledge organization with disambiguation and restructuring is applied at several levels of resolution, simultaneously, at all stages of information updating.

9. Since the MR representation builds and supports unsupervised learning that contains self-oriented information, MRDS systems are autonomous (which implies that one could use them either in a closed control loop or just as an “advisor”).

10. The MRDS is capable of accepting descriptive and text information, not just numeric data, while both types of information are treated based upon the same algorithms.

These accomplishments became possible once rigid differential-integral calculus stopped dominating the application areas (particularly robotics), and analytical/numerical models were used just as interpretations rather than complete representations. The paradigm shift is more difficult in hydrogeologic systems, which (unlike man-made machines) are not information-intensive and are mechanically unprepared for automation (possibly with the exception of real-time directional drilling). However, there is an urgent need to fill the gap between stochastic optimization/control theories (though remarkably advanced) and actual field operations and heterogeneous field data, and to provide hydrogeologists with problem-solving tools. Our exploration (above) of current methods in hydrogeology has hinted at remarkable progress in hydrogeology—from stochastic theories to optimization and control of hydrogeologic systems, yet with heavy reliance on computer-intensive deterministic and stochastic (PDE, physically-based) models that render real-world solutions weak, at best. Interestingly, as was revealed in our exploration, a few works have already hinted at a different system, in which different types of intelligence will be combined to overcome the weakness of rigid PDE models, including stochastic PDE, particularly for optimal management purposes.

Given the emergence of intelligent control systems as powerful tools in other engineering applications, the adaptation of intelligent control concepts and tools seems to be the obvious next step in hydrogeology. It is now possible to develop an intelligent system whose learning power and goal-oriented prediction/anticipation capability will surpass current ANN. This multiresolutional decision support system will break the barrier between the different geo-disciplines and will combine characterization with simulation and optimization. The new system will benefit from the advantages, and eliminate the limitations, of current PDE models (deterministic or stochastic) by integrating their predictions as interpretations, and by using these models as initiators, gap fillers, and general guides within an ever-changing multiresolution knowledge representation. The new modeling-optimization/planning/control system will take advantage of all accumulated experiences and knowledge from all disciplines, on all scales. It will increase the speed of convergence to optimal solutions by orders of magnitude, and will reduce uncertainty by (a) extending the associations among variables on all relevant scales far beyond the linear statistical correlations currently used in the traditional stochastic approach; (b) maximizing the information hidden in interdependencies among these variables on all scales, and determining the complex relationships among these variables, independent of any particular single-scale model; (c) continuously updating models; and (d) continuously checking consistency across levels and disambiguating. The success of the new development will be measured by the degree to which it can operate independently of the operational models included in it.
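As an illustration of points (a) and (b), the following hypothetical sketch contrasts linear correlation with a simple histogram-based mutual-information estimate; the variable names and synthetic data are assumptions for illustration only, and only numpy is required. A purely symmetric (nonlinear) dependence between a recharge anomaly and drawdown yields near-zero linear correlation but clearly positive mutual information, the kind of association a traditional linear-correlation analysis would miss.

    # Illustrative sketch: a nonlinear association measure (a crude plug-in
    # mutual-information estimate from a 2-D histogram) versus Pearson correlation.
    # Synthetic, hypothetical data; not the authors' method.

    import numpy as np

    rng = np.random.default_rng(0)


    def mutual_information(x, y, bins=16):
        """Histogram-based estimate of mutual information (in nats)."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()                      # joint probabilities
        px = pxy.sum(axis=1, keepdims=True)            # marginal of x
        py = pxy.sum(axis=0, keepdims=True)            # marginal of y
        nz = pxy > 0                                   # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))


    # Drawdown responds to the magnitude of a recharge anomaly: a symmetric,
    # nonlinear dependence with essentially zero linear correlation.
    recharge_anomaly = rng.normal(size=5000)
    drawdown = recharge_anomaly ** 2 + 0.1 * rng.normal(size=5000)

    r = np.corrcoef(recharge_anomaly, drawdown)[0, 1]      # near zero
    mi = mutual_information(recharge_anomaly, drawdown)    # clearly positive
    print(f"Pearson r = {r:+.3f}, mutual information = {mi:.3f} nats")

In an MR setting, the same association measure could be evaluated on block-averaged versions of the variables at each resolution level, so that dependencies visible only at a coarse scale (or only at a fine scale) are captured without committing to any single-scale model.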

Acknowledgement We would like to acknowledge Dr. Cliff Voss (Executive Editor), Mr. Perry Olcott (Managing Editor), and the three reviewers, particularly Dr. Peter Kitanidis, for their excellent and constructive comments; as well as several individuals who helped us bridge among disciplines, particularly Mr. Tom Anderson (RMOTC), Dr. Xian-Juan Wen (ChevronTexaco), Dr. Dong Zhang (U. of Oklahoma), Dr. S.P. Neuman (U. of Arizona), and Dr. Larry Lake (U. of Texas).
