
Mathematical Models in Neuroscience: Approaches to Experimental Design and Reliable Parameter Determination

Denis Shchepakin, Leonid Kalachev, and Michael Kavanaugh

Contents

Introduction
Chemical Kinetics Schemes and the Law of Mass Action
Characteristic Scales and Model Non-dimensionalization
Brief Review of Asymptotic Analysis and Asymptotic Algorithm for Model Reduction
Quasi-Steady-State Approximation and Michaelis–Menten–Henri Kinetics
NMDAR Desensitization: Background Information and General Model
Kinetic Model of NMDAR and Experiment Design
Initial Conditions for NMDAR Experiments
Reduction of the NMDAR Model in Case of Experiments with High Concentration of D-Serine
Reduction of the NMDAR Model in Experiments with High Concentration of L-Glutamate
Reduction of the NMDAR Model in Experiments with High Concentrations of D-Serine and L-Glutamate
Reduction of the NMDAR Model After the Pulse
Reliable NMDAR Model Parameter Estimation
Model Fitting to Data
Conclusion
References

Abstract

Overparametrization of models in natural sciences, including neuroscience, is a problem that is widely recognized but often not addressed in experimental studies. The systematic reduction of complex models to simpler ones for which the parameters may be reliably estimated is based on asymptotic model reduction procedures taking into account the presence of vastly different time scales in the natural phenomena being studied. The steps of the reduction process, which are reviewed here, include basic model formulation (e.g., using the law of mass action applied routinely for problems in neuroscience, biological and chemical kinetics, and other fields), model non-dimensionalization using characteristic scales (of times, species concentrations, etc.), application of an asymptotic algorithm to produce a reduced model, and analysis of the reduced model (including suggestions for experimental design and fitting the reduced model to experimental data). In addition to the review of some classical results and basic examples, we illustrate how the approach can be used in a more complex realistic case to produce several reduced kinetic models for N-methyl-D-aspartate receptors, a subtype of glutamate receptor expressed on neurons in the brain, with models applied to different experimental protocols. Simultaneous application of the reduced models to fitting the data obtained in a series of specially designed experiments allows for a stepwise estimation of parameters of the original conventional model which is otherwise overparameterized with respect to the existing data.

D. Shchepakin (✉) · L. Kalachev · M. Kavanaugh
University of Montana, Missoula, MT, USA
e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]

© Springer Nature Switzerland AG 2020
B. Sriraman (ed.), Handbook of the Mathematics of the Arts and Sciences, https://doi.org/10.1007/978-3-319-70658-0_134-1

Keywords

Neurotransmitter transport · Receptors and transporters · Chemical kinetics schemes · Asymptotic methods · Systems of differential equations · Model reduction

Introduction

Mathematical models are applied extensively to studies of fundamental problems in neuroscience. One of the best known is the Hodgkin–Huxley model, which provided a biophysically accurate, mechanistic account of the generation of the action potential in the squid giant axon (Hodgkin and Huxley, 1952). Later development of a reduced version of this model, known as the FitzHugh–Nagumo model, provided a simplified and useful tool for studying excitable systems (FitzHugh, 1955; Nagumo et al., 1962). In this work, the attention is focused on the analysis of kinetic models describing the action of neurotransmitter binding to receptors and transporters in the brain. Some of the models discussed in this chapter are widely known, comparatively simple, and generic, while others are more realistic but at the same time more complex, e.g., modeling neurotransmitter diffusion and transport in the synaptic cleft and extracellular space in the brain (Rusakov and Kullmann, 1998; Savtchenko and Rusakov, 2007). The classical results on model derivation and reduction techniques presented below are of general interest and may be used to handle numerous applications not only in neuroscience but also in biological and chemical kinetics, pharmacokinetics, and other fields of study dealing with complex models and the need for reliable model parameter identification.


The current chapter is organized as follows. First, the well-known law of mass action (Voit et al., 2015) is briefly discussed, and some simple models derived using this law are introduced as an illustration. Then the basic ideas behind model non-dimensionalization are addressed; these ideas are then illustrated via their application to a generic neurotransmitter transporter model mentioned above. Next, the classical results of asymptotic analysis and the steps of a reduction algorithm based on the so-called boundary function method (Vasil'eva et al., 1995) are reviewed. Following this, the asymptotic algorithm is applied to the example of a generic transporter model to produce the so-called Michaelis–Menten–Henri approximation (Henri, 1903; Michaelis and Menten, 1913), and the idea of the quasi-steady-state approximation (Segel and Slemrod, 1989; Stiefenhofer, 1998) is briefly discussed.

Finally, the formulation of a realistic general model related to the N-methyl-D-aspartate receptor (NMDAR), one of the major subtypes of glutamate receptors on neurons, is presented. This receptor plays a critical role in synaptic plasticity, development, learning, and memory (Traynelis et al., 2010). Disruptions of its function are associated with such disorders as epilepsy, depression, schizophrenia, ischemic brain injury, and others. NMDARs have been a topic of numerous studies; over the last two decades several mathematical models have been proposed and applied to explain the dynamics of the ion currents mediated by the NMDAR ion channel; see, e.g., Benveniste et al. (1990), Nahum-Levy et al. (2001), and Iacobucci and Popescu (2018). However, the conclusions on receptor kinetics based on these models were typically limited due to model overparameterization with respect to the available data. For this more realistic, and thus more complex, example, it is shown how designing the experiments in accordance with model predictions resolves the issue of model overparameterization. Application of the algorithm suggests a series of experiments which have to be performed to reliably estimate model parameters. The boundary function method, one of the asymptotic methods, is used to obtain the simplified versions of the general model corresponding to some particular experimental setups. As a proof of concept, the application of the algorithm to simulated data, which mimics the data from real experiments, is presented. A brief discussion of the approach concludes the chapter.

Chemical Kinetics Schemes and the Law of Mass Action

Consider the following hypothetical kinetic scheme describing a chemical (biochemical) reaction, where A, B, C represent some chemical species:

nA + mB + jC —(k+)→ sA + pB + rC,    nA + mB + jC ←(k−)— sA + pB + rC,    (1)

where n, m, j, s, p, and r are the integers representing the number of molecules of each species taking part in the corresponding forward–reverse reactions with the reaction rate constants k+ (forward) and k− (reverse); arrows indicate the directions of particular reactions.


Let [A], [B], and [C] represent the concentrations of A, B, C, respectively, in appropriate dimensional units of measurement. Under the assumption that the species are well mixed in a fixed volume, the kinetic scheme (1) describes the time-dependent species transformations (with no spatial dependence) which lead to changes of species concentrations in the mix. The law of mass action states that the rates of change of species concentrations depend on the respective concentrations according to the following formulas:

d[A]/dt = k+(s − n)[A]^n[B]^m[C]^j + k−(n − s)[A]^s[B]^p[C]^r,    (2)

d[B]/dt = k+(p − m)[A]^n[B]^m[C]^j + k−(m − p)[A]^s[B]^p[C]^r,    (3)

d[C]/dt = k+(r − j)[A]^n[B]^m[C]^j + k−(j − r)[A]^s[B]^p[C]^r.    (4)

Here t stands for time; below, without loss of generality, we will use the terms time, time variable, and independent variable interchangeably. For the case where, e.g., n = 1, m = 1, j = 0, s = 0, p = 0, and r = 1, the following kinetic scheme can be written (compare with (1)):

A + B —(k+)→ C,    A + B ←(k−)— C,    (5)

describing the case of a generic receptor if A represents a neurotransmitter, B stands for a free receptor, and C corresponds to a bound receptor, so that the entire scheme is just representing a binding–unbinding reaction. From (5) we obtain (compare with (2), (3), and (4)):

d[A]/dt = d[B]/dt = −d[C]/dt = −k+[A][B] + k−[C],    (6)

for which the law of mass action just states that the rates of change of species concentrations during the reactions (5) are proportional to the products of corresponding species concentrations taking part in the reactions under consideration. To solve (6) for the time-dependent concentrations of species, corresponding initial conditions, i.e., the concentrations of species at time t = 0, must be specified. In particular, consider the case where all the receptors at the initial instant of time are unbound:

[A](0) = [A]∗, [B](0) = [B]∗, [C](0) = [C]∗ = 0. (7)

Problem (6) and (7) can be easily solved. First, a conservation relationship for the receptors, free and bound, may be derived; indeed, from (6), since d[B]/dt + d[C]/dt = 0, it follows that [B](t) + [C](t) = const = [B]∗. Then, it can be easily seen that the total concentration of neurotransmitter, free and bound, is also conserved: d[A]/dt + d[C]/dt = 0, and thus, it follows that [A](t) + [C](t) = const = [A]∗. These two conservation relationships can be used to substitute the system of three differential equations (6) by one equation with a corresponding initial condition:

d[C]/dt = k+([A]∗ − [C])([B]∗ − [C]) − k−[C],    [C](0) = 0.    (8)

The solution of the constant coefficient Riccati-type differential equation (8) with zero initial condition can be written out. Indeed, the stable steady state [C]st. of this equation is given by the expression:

[C]st. = ([A]∗ + [B]∗ + k−/k+)/2 − D > 0,

where D = √(([A]∗ + [B]∗ + k−/k+)²/4 − [A]∗[B]∗), and the formula for the time-dependent solution is

[C](t) = [C]st. + 1/[1/(2D) − (1/[C]st. + 1/(2D)) exp(2k+Dt)].    (9)

When [C](t) is known, the expressions for the other species' concentrations, [A](t) = [A]∗ − [C](t) and [B](t) = [B]∗ − [C](t), can also be immediately written out.
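The closed-form solution (9) can be checked against a direct numerical integration of (8); below is a minimal Python sketch of such a check, assuming hypothetical values for k+, k−, [A]∗, and [B]∗ (any consistent units).

```python
# A minimal sketch (not from the original text): verify the closed-form solution (9)
# against a numerical solution of (8), using hypothetical parameter values.
import numpy as np
from scipy.integrate import solve_ivp

k_plus, k_minus = 2.0, 1.0      # hypothetical rate constants k+, k-
A0, B0 = 1.0, 0.5               # hypothetical initial concentrations [A]*, [B]*

# Stable steady state [C]st. and the constant D from the text
D = np.sqrt((A0 + B0 + k_minus / k_plus) ** 2 / 4.0 - A0 * B0)
C_st = (A0 + B0 + k_minus / k_plus) / 2.0 - D

def C_exact(t):
    # Closed-form solution (9) of the Riccati-type equation (8)
    w = 1.0 / (2.0 * D) - (1.0 / C_st + 1.0 / (2.0 * D)) * np.exp(2.0 * k_plus * D * t)
    return C_st + 1.0 / w

def rhs(t, y):
    # Equation (8): d[C]/dt = k+([A]* - [C])([B]* - [C]) - k-[C]
    C = y[0]
    return [k_plus * (A0 - C) * (B0 - C) - k_minus * C]

t_eval = np.linspace(0.0, 5.0, 200)
sol = solve_ivp(rhs, (0.0, 5.0), [0.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)
print("max |numerical - closed form| =", np.max(np.abs(sol.y[0] - C_exact(t_eval))))
```

The printed discrepancy is limited by the integration tolerances, confirming that (9) indeed solves (8) with zero initial condition.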

Unfortunately, only the simplest models, e.g., (6) and (7), have explicit solutions like (9). Even slight modifications of the kinetic scheme (5) produce models which do not have explicit analytic solutions.

Consider the following example, dealing with the kinetics of a generic transporter, where A represents a neurotransmitter (e.g., located outside a cell), B stands for a free transporter located on a cell membrane, and C corresponds to a bound transporter; the scheme below describes a binding–unbinding reaction combined with the transfer of neurotransmitter through a cell membrane:

A + B —(k+)→ C,    A + B ←(k−)— C,    C —(λ)→ B.    (10)

The corresponding system of differential equations for the concentrations of species now has the form (here once again the law of mass action is used with an additional, transfer, reaction characterized by the rate constant λ; note that in (10) the product of the transfer reaction, i.e., the neurotransmitter which had penetrated through the cell membrane and ended up inside a cell, was omitted):

d[A]/dt = −k+[A][B] + k−[C],    (11)

d[B]/dt = −k+[A][B] + k−[C] + λ[C],    (12)


d[C]/dt = k+[A][B] − k−[C] − λ[C].    (13)

The system (11), (12), and (13) is supplied with the same initial conditions (7), where now [B] and [C] stand for the concentrations of free and bound transporters. The conservation of transporters can once again be derived from the system (11), (12), and (13) following the steps similar to those applied in the case of system (6): [B](t) + [C](t) = [B]∗. The neurotransmitters, however, are not conserved anymore. Elimination of [B] from (11), (12), and (13) leads to the following problem:

d[A]/dt = −k+[A]([B]∗ − [C]) + k−[C],    (14)

d[C]/dt = k+[A]([B]∗ − [C]) − (k− + λ)[C],    (15)

[A](0) = [A]∗,    [C](0) = 0,    (16)

which does not have an explicit analytic solution.

In the next section the discussion of characteristic time scales is presented, and the topic of model non-dimensionalization is addressed.

As a side note, it is important to mention that the law of mass action is also often used in applications dealing with interactions of individuals, such as epidemic models, including the well-known susceptible–infected–recovered (SIR) model describing infectious disease propagation and its numerous generalizations (Murray, 1993).
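Since (14), (15), and (16) admit no explicit solution, in practice the problem is integrated numerically. A minimal sketch, assuming hypothetical values for k+, k−, λ, [A]∗, and [B]∗, is given below.

```python
# A short sketch (hypothetical parameter values) integrating the transporter
# model (14)-(16) numerically, since no explicit analytic solution exists.
import numpy as np
from scipy.integrate import solve_ivp

k_plus, k_minus, lam = 5.0, 1.0, 2.0   # hypothetical k+, k-, lambda
A_star, B_star = 1.0, 0.1              # hypothetical [A]*, [B]*

def rhs(t, y):
    A, C = y
    B = B_star - C                                  # conservation of transporters
    dA = -k_plus * A * B + k_minus * C              # equation (14)
    dC = k_plus * A * B - (k_minus + lam) * C       # equation (15)
    return [dA, dC]

sol = solve_ivp(rhs, (0.0, 20.0), [A_star, 0.0], t_eval=np.linspace(0, 20, 400))
print("free neurotransmitter remaining at t = 20:", sol.y[0, -1])
```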

Characteristic Scales and Model Non-dimensionalization

Usually complex models of real-life phenomena involve simultaneous description of several processes occurring on vastly different characteristic time scales. To better understand the relationships between different parameters in the original statement of the problem and to find out which parameter combinations are actually important, it is usually advisable to perform model non-dimensionalization before starting the analysis.

Natural phenomena in neuroscience commonly involve multiple processes that occur on different time scales: it may happen that some reactions are fast because the reaction rate constants have large numerical values compared to the others, but we may also observe situations where the reaction rate constants are moderate, but the concentrations of some species participating in the reactions are much higher compared to the others.

For the model (14), (15), and (16), consider a realistic situation, which may be implemented experimentally, where the initial concentration of neurotransmitter [A]∗ is much higher than the initial concentration of free transporters [B]∗, i.e., [A]∗ ≫ [B]∗. It is very natural to introduce new units of measurement for the various variables, which are characteristic of the model under consideration. The new non-dimensional variables will be equal to the old dimensional ones divided by the dimensional constants corresponding to the new units of measurement. In particular, the concentration of neurotransmitter can be measured in units of its initial concentration, and the bound transporter concentration can be naturally measured in units of the initial free transporter concentration. For the non-dimensional concentrations U and V of neurotransmitters and bound transporters, respectively, as well as for the non-dimensional time t̄, we write (note that now 0 ≤ U ≤ 1 and 0 ≤ V ≤ 1):

U = [A]/[A]∗,    V = [C]/[B]∗,    t̄ = t/σ,    (17)

where σ is a characteristic time scale to be determined.

After substitution of (17) into (14), (15), and (16), the model will be written as

dU/dt̄ = −k+[B]∗σU(1 − V) + k−([B]∗/[A]∗)σV,    (18)

dV/dt̄ = k+[A]∗σU(1 − V) − (k− + λ)σV,    (19)

U(0) = 1,    V(0) = 0.    (20)

The choice of σ must lead to fewer model parameter combinations in (18), (19), and (20); it must also reflect the characteristic time of interest, i.e., determine the time interval over which, e.g., the experimental data needs to be collected for model parameter identification. Since in the case of low initial concentration of transporters the availability of free transporters determines the rate of the overall reaction (this is the so-called rate limiting step of the process), the following choice of measurement unit for the time variable can be made: σ = 1/(k+[B]∗). Then (18) and (19) may be written as

dU/dt̄ = −U(1 − V) + (k−/(k+[A]∗))V,    (21)

dV/dt̄ = ([A]∗/[B]∗)U(1 − V) − ([A]∗/[B]∗)((k− + λ)/(k+[A]∗))V.    (22)

The following non-dimensional so-called small parameter 0 < ε ≪ 1 and non-dimensional rate constants K and γ can now be introduced:

ε = [B]∗/[A]∗, K = (k− + λ)/(k+[A]∗), γ = λ/(k+[A]∗). (23)

Under the condition that both non-dimensional rate constants of reactions in (23) are moderate, and thus, the corresponding characteristic reaction times are of the same order of magnitude, the system (21) and (22) is now transformed to

dU/dt̄ = −U(1 − V) + (K − γ)V,    (24)

ε dV/dt̄ = U(1 − V) − KV.    (25)

This system has to be analyzed together with the initial conditions (20) on a non-dimensional time interval 0 ≤ t̄ ≤ T, where T is a moderate number (i.e., it does not depend on ε). In what follows, the notation t is once again used for the non-dimensional time instead of t̄. The review of the basic ideas of asymptotic analysis is presented in the next section. This has to be done before the reduction procedure is applied to the original model (24), (25), and (20) with a small parameter (multiplying the derivative in one of the equations) in order to produce a simpler model containing only one differential equation approximating the behavior of the solution of the original problem on a finite time interval.
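As a numerical illustration of the scaling, the sketch below computes σ, ε, K, and γ from hypothetical dimensional values satisfying [A]∗ ≫ [B]∗; the numbers themselves carry no special meaning.

```python
# A numerical illustration (hypothetical values) of the scaling (17) and (23).
k_plus, k_minus, lam = 5.0, 1.0, 2.0   # hypothetical k+, k-, lambda
A_star, B_star = 1.0, 0.01             # hypothetical [A]*, [B]*, with [A]* >> [B]*

sigma = 1.0 / (k_plus * B_star)        # characteristic time scale
eps   = B_star / A_star                # small parameter
K     = (k_minus + lam) / (k_plus * A_star)
gamma = lam / (k_plus * A_star)

print(f"sigma = {sigma:.2f}, eps = {eps:.3f}, K = {K:.2f}, gamma = {gamma:.2f}")
# Here eps = 0.01 << 1 while K and gamma are O(1), so the nondimensional system
# takes the singularly perturbed form (24)-(25).
```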

Brief Review of Asymptotic Analysis and Asymptotic Algorithm for Model Reduction

As was illustrated in the previous section, in the presence of different characteristic time scales, the non-dimensionalized models may contain small parameters multiplying different terms in the corresponding equations. As a result, in many cases, such so-called perturbed problems may be reduced to simpler ones containing fewer equations and fewer parameters. Such reductions are often needed to determine the optimal number of parameter combinations which can be reliably estimated from the experimental data collected on a particular time scale related to the duration of an experiment and to the frequency with which the data were collected.

Model reduction procedures cannot be applied automatically without additional analysis, i.e., some conditions have to be checked, and only if these conditions are satisfied is the reduction possible. It is important to point out that asymptotic results are very practical; they are not just pure mathematical exercises. The possibility of asymptotic model reduction is related to the fact that the actual processes in the complex systems being studied may be comparatively fast or slow. This asymptotic approach provides useful tools allowing one to better understand the behavior of such realistic systems in neuroscience and in numerous other applied fields in the limit where the fast processes are assumed to be happening instantaneously and the slow processes are approximated by the ones where changes within characteristic time intervals of interest do not happen at all.

First, it is important to introduce a number of definitions. Without loss of generality, some of the definitions, formulations, and terminology presented below are intentionally simplified to make understanding of the material easier. While an exact mathematical definition of regularly and singularly perturbed problems may be given for a general case, a corresponding intuitive notion will be presented here for simple particular situations relevant to the current discussion, where models are formulated in terms of scalar ordinary differential equations or systems of differential equations, which are considered on a closed time interval 0 ≤ t ≤ T = const. Consider two problems: the original one, so-called perturbed, which contains terms multiplied by a small parameter ε, and the other, so-called reduced, obtained from the perturbed problem by setting in it the terms with the small parameter to zero (the reduced problem is also called unperturbed). Assume that the solutions xε(t, ε) and x0(t) of the perturbed and the unperturbed problems, respectively, are found. Then, the two solutions have to be compared to check if they are, in some sense, close to each other for ε sufficiently small. For the case of scalar equations, if

max_{0≤t≤T} |xε(t, ε) − x0(t)| → 0 as ε → 0,    (26)

then the original problem is called regularly perturbed. Otherwise, it is called singularly perturbed. The above definition can be extended naturally to systems of equations and to problems defined on open domains. In these more general cases, the definition of perturbation type will depend on the choice of a norm, which is often implied by the context of the problem. For the situations involving the systems of equations discussed in this chapter, we use the Euclidean norm: for any n-dimensional vector Z = {z1, . . . , zn} we define ‖Z‖ = √(z1² + . . . + zn²).

Next, the notions of asymptotic approximation and asymptotic algorithm are defined. Suppose a problem for which (26) is not satisfied is considered (which means that the corresponding problem is singularly perturbed). If a function X(t, ε) can be found such that

max_{0≤t≤T} |xε(t, ε) − X(t, ε)| → 0 as ε → 0,

then X is called a uniform asymptotic approximation to x in the closed interval 0 ≤ t ≤ T. An asymptotic algorithm is then simply a method of constructing X(t, ε).

The function X(t, ε) is called the asymptotic approximation of x(t, ε) with accuracy of order ε^k if

max_{0≤t≤T} |xε(t, ε) − X(t, ε)| = O(ε^k).

Here the following notation is used: α = O(ε^k) means that there exists a constant M > 0 such that the norm ‖α‖ ≤ Mε^k for sufficiently small ε. In the case of scalar α, from the norm definition it follows that ‖α‖ = |α|. The quantity α is said to be moderate if α = O(1).

For differential equation models, and in many other problems involving model perturbations, the asymptotic algorithms consist of constructing the so-called asymptotic series, such that their truncations provide asymptotic approximations to the solution of the original perturbed problems with various orders of accuracy. In a large number of practical applications, the leading order asymptotic approximations are of particular interest, as well as the limits of these leading order approximations as ε → 0. Usually, when the reduced models are constructed, the analysis of their solutions indicates that their qualitative (and quantitative) behavior is similar, and often practically identical, to that of the solutions of the original models.

In the remainder of this section the reduction algorithm and the main asymptotic results (in somewhat simplified form) will only be discussed for the models formulated in terms of systems of ordinary differential equations

dX/dt = Ẋ = h(X, t),    X(0) = X^0,    0 ≤ t ≤ T₀,    dim X = N,

which, after non-dimensionalization, can be written in the form (note that the coefficient values in the non-dimensionalized equations are now just numbers which can be compared to each other; multiplication or division by a small parameter 0 < ε ≪ 1 can be used to scale the coefficients which are either small or large):

U̇ = f(U, V, t, ε),
εV̇ = g(U, V, t, ε),    (27)

0 ≤ t ≤ T = O(1).

Usually, in many practical applications the small parameter 0 < ε ≪ 1 is chosen to be inversely proportional to rate constants of fast reactions. Here X = (U, V)′, where the prime ( ′ ) is used to denote the transpose, with

dim U + dim V = N.

We also use the notation h = (f, g)′, with

dim f = dim U,    dim g = dim V.

Vector functions f and g are assumed to be once continuously differentiable; f = O(1), g = O(1). The initial conditions for U and V are

U(0) = U^0,    V(0) = V^0.    (28)

Next, the basic ideas of the boundary function method, one of the asymptotic analysis algorithms, are applied to the analysis of (27) and (28); see, e.g., Vasil'eva et al. (1995). The asymptotic approximation of the solution to problem (27) and (28), uniform in the interval t ∈ [0, T], is sought in the following form:

U(t) = U0(t) + Π0U(τ) + O(ε), V (t) = V0(t) + Π0V (τ) + O(ε), (29)


where τ = t/ε is the stretched independent variable. Here U0(t) and V0(t) denote the leading terms of the so-called regular part of the asymptotic expansion, which approximates the solution of the original system in the interior of the interval 0 < t ≤ T, and Π0U(τ) and Π0V(τ) denote the so-called boundary layer functions which are needed to describe the fast transition from an arbitrary initial condition to an attracting (or stable) slow manifold to which the solutions (trajectories) described by the regular functions belong. Since the boundary functions are expected to be important only in a small vicinity of the initial instant of time t = 0 where the initial conditions are prescribed (i.e., within the initial boundary layer), these functions must decay to zero as τ → ∞ (together with ε → 0); otherwise the slow manifold will turn out to be not attracting, and the solution of the original system (27) will tend to infinity in absolute value on a fast time scale.

The problems for the terms of the asymptotic approximation (29) are obtained by substituting (29) into (27) and (28) and equating expressions multiplying like powers of ε separately for the regular and boundary functions. The functions f and g on the right-hand side of (27) must be represented in a form similar to (29). For example, for f the following equivalent representation can be written out (a similar expression holds for g):

f(U(t), V(t), t, ε) = f(U0(t) + Π0U(τ) + O(ε), V0(t) + Π0V(τ) + O(ε), t, ε)
  = f(U0(t) + O(ε), V0(t) + O(ε), t, ε)
  + [f(U0(ετ) + Π0U(τ) + O(ε), V0(ετ) + Π0V(τ) + O(ε), ετ, ε)
  − f(U0(ετ) + O(ε), V0(ετ) + O(ε), ετ, ε)]
  = f0(t) + Π0f(τ) + O(ε),    (30)

with

f0(t) = f (U0(t), V0(t), t, 0), (31)

Π0f (τ) = f (U0(0) + Π0U(τ), V0(0) + Π0V (τ), 0, 0) − f0(0). (32)

Note once again that the goal is to reduce the original system in the time interval of order O(1); f and g are bounded functions of order O(1) on this interval.

The problem for Π0U defined this way has the form:

dΠ0U/dτ = 0,    Π0U(∞) = 0.    (33)

Its solution is Π0U ≡ 0.

For the regular functions U0 and V0, one can write:

0 = g(U0, V0, t, 0); (34)


dU0/dt = f(U0, V0, t, 0),    U0(0) = U^0.    (35)

Note that equation (34) is not a differential equation anymore.

Condition 1. Assume that (34) can be resolved with respect to V0 in the interval t ∈ [0, T]. This solution can be written in the form

V0(U0, t) = g−1(U0, t, 0). (36)

This expression is then substituted into (35):

dU0/dt = f(U0, V0(U0, t), t, 0),    U0(0) = U^0.    (37)

Also assume that a solution of (37) exists for all t ∈ [0, T].

It is important to mention that the solution (36) of (34) might not be unique (e.g., such a situation is observed in a number of problems of chemical kinetics with complex kinetic schemes). Then, in the case of multiple solutions V0 of (34), it is assumed that these solutions are isolated (which means that each such solution has some vicinity where other solutions are not present). The final choice of the solution (36) is discussed below. Equation (34) (or expression (36)) describes some lower dimensional manifold. The actual solution of the original problem (27) and (28) will lie asymptotically close to that manifold outside some narrow initial layer (located in the vicinity of t = 0) if the manifold is attracting. The conditions which guarantee the stability of such a manifold are formulated below. The slow dynamics of the system on that manifold is described by (37).

Assume that the choice of a solution V0(t) to (34) is made. Then, with V0 chosen, the problem (37) for U0(t) on the interval t ∈ [0, T] can be solved. The function V0 given by (36) is then completely defined when U0 is known. However, the leading order regular function V0 does not, in general, satisfy the initial condition from (28): V(0) = V^0. Thus, the boundary function Π0V(τ) is needed, which satisfies that initial condition together with V0(t):

V0(0) + Π0V(0) = V^0.    (38)

The equation for Π0V(τ) has the form:

dΠ0V/dτ = g(U0(0), V0(0) + Π0V, 0, 0).    (39)

For more details, see, e.g., the previously mentioned Vasil'eva et al. (1995).

By virtue of (34), the point Π0V = 0 is the steady state of equation (39). For the boundary function Π0V(τ) to decay to zero as τ → ∞ (and, thus, for U0(t), V0(t) to approximate the exact solution of (27) and (28) in the vicinity of the lower dimensional stable manifold mentioned above), the steady state Π0V = 0 has to be asymptotically stable for all t ∈ [0, T]. The condition that guarantees the stability of the lower dimensional manifold mentioned above can be formulated as follows. Let λi(t) (i = 1, . . . , dim V) denote the eigenvalues of the matrix gV(U0(t), V0(U0(t), t), t, 0).

Condition 2. Assume that

Reλi(t) < 0.

The asymptotic stability of the steady state of (39) and the fact that the manifold (36) is attracting immediately follow from Condition 2.

Condition 2 has to be checked for every possible solution of (34) in order to determine whether a particular slow manifold is stable or not. In cases where (34) has multiple roots, only solutions V0 expressed by (36) which satisfy Condition 2 can, in principle, be used for the asymptotic reduction of the original problem.
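For a concrete illustration, Conditions 1 and 2 can be checked symbolically for the generic transporter model, where g(U, V) = U(1 − V) − KV in (25). Below is a minimal sketch using sympy; the symbolic route is merely a convenience here, since the computation is elementary.

```python
# A sketch (not from the original text) of checking Conditions 1 and 2 for the
# transporter model (24)-(25), where g(U, V) = U(1 - V) - K V.
import sympy as sp

U, V, K = sp.symbols("U V K", positive=True)
g = U * (1 - V) - K * V

# Condition 1: solve g = 0 for the slow-manifold root V0(U)
V0 = sp.solve(sp.Eq(g, 0), V)[0]
print("V0(U) =", sp.simplify(V0))                  # U/(K + U)

# Condition 2: the derivative g_V on the manifold must have negative real part
g_V = sp.diff(g, V).subs(V, V0)
print("g_V on the manifold =", sp.simplify(g_V))   # -(K + U) < 0 for U, K > 0
```

Since g_V = −(K + U) is strictly negative for positive U and K, the slow manifold is attracting, which is consistent with the Michaelis–Menten–Henri reduction carried out in the next section.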

It is important to mention that even when Condition 2 is satisfied, not every solution of (39) and (38) will necessarily tend to the zero steady state. The initial condition (38), which may be rewritten as

Π0V(0) = V^0 − V0(0),    (40)

must belong to a domain of attraction of the steady state Π0V = 0. This requirement may be formulated as follows.

Condition 3. Assume that the solution of (39) and (40) exists for τ ≥ 0 and tends to zero as τ → ∞.

This condition allows one to make a unique choice of the root V0 of (34), expressed in terms of U0 in (36). That is, Condition 3 specifies the root of (34) which, after substitution into (39), has a domain of attraction containing the initial condition (40).

The formulations of somewhat simplified versions of the theorems which are used for justification of the asymptotic reduction procedures applied to particular systems (found, e.g., in neuroscience, chemical kinetics, etc.) can now be presented.

Theorem 1 (Vasil'eva et al. (1995)). Under Conditions 1–3 and for sufficiently small ε, the original problem (27) and (28) has a unique solution and

max_{0≤t≤T} ‖U(t, ε) − U0(t)‖ ≤ Mε,

max_{0≤t≤T} ‖V(t, ε) − V0(U0(t), t) − Π0V(t/ε)‖ ≤ Mε,

where M > 0 is some constant independent of ε.


Taking the limit in the above relationships as ε → 0 produces the result known as Tikhonov's theorem on passage to the limit.

Theorem 2 (Tikhonov (1948, 1950)). Under Conditions 1–3 the problem (27) and (28) has a unique solution and

lim_{ε→0} U(t, ε) = U0(t),    0 ≤ t ≤ T;

lim_{ε→0} V(t, ε) = V0(t),    0 < t ≤ T.

Note that the second limit is valid on an open interval which does not contain the point t = 0. Theorems 1 and 2 indicate which conditions must be checked for a particular system to determine if the asymptotic reduction is at all possible and what the dimension of the reduced model will be. Although the result of the theorem on passage to the limit is asymptotic (i.e., its formulation involves the limits as ε → 0), the constructed leading order approximation will be close to the solution of the original problem for small fixed values of ε as well (on time intervals of order O(1)).

In the next section the asymptotic reduction procedure is applied to the problem (24), (25), and (20).

Quasi-Steady-State Approximation and Michaelis–Menten–Henri Kinetics

The asymptotic reduction procedure described in the previous section, as well as its numerous extensions and modifications, is usually referred to as the quasi-steady-state approximation (also known in chemical kinetics as the principle of quasi-stationary concentrations).

It can be easily seen that the system of equations (24) and (25) representing the generic transporter model is of the form (27). To more easily compare the steps of the asymptotic algorithm applied to (24), (25), and (20) with those presented for the general case in the previous section, let us rewrite here the system of equations and the initial conditions for the neurotransmitter concentration U(t) and for the bound transporter concentration V(t) once again:

dU/dt = −U(1 − V) + (K − γ)V,    (41)

ε dV/dt = U(1 − V) − KV,    (42)

U(0) = 1,    V(0) = 0,    (43)


0 ≤ t ≤ T .

The asymptotic approximation of the solution of (41), (42), and (43) is sought in the form (29); in this section the leading order approximation is constructed. Following the steps of the asymptotic reduction procedure, the expressions and the equations for the terms of the leading order approximation can be written out. From the problem (33) for the boundary function Π0U it follows that

Π0U ≡ 0. (44)

Equation (34) now becomes

0 = U0(1 − V0) − KV0,

immediately producing the expression (compare with (36)):

V0 = U0/(K + U0).    (45)

The problem (35) for the regular function U0 becomes:

dU0/dt = −γU0/(K + U0),    U0(0) = 1.    (46)

The problem (39) and (40) for the boundary function Π0V can now be written as

dΠ0V/dτ = −(1 + K)Π0V,    Π0V(0) = −V0(0) = −1/(K + 1).    (47)

The solution of (47) is

Π0V (τ) = −1/(K + 1) exp[−(1 + K)τ ]. (48)

Thus, the leading order asymptotic approximation of the solution of the original perturbed problem (41), (42), and (43) has the form:

U(t) = U0(t) + O(ε), (49)

V (t) = U0(t)/(K + U0(t)) − 1/(K + 1) exp[−(1 + K)τ ] + O(ε), (50)

where U0 is the solution of (46). The equation in the reduced model (46) and its solution (which describes approximately the time-dependent concentration of neurotransmitter and which does not have an explicit analytical representation) are usually referred to as Michaelis–Menten–Henri kinetics (approximation). The solution (49) and (50) of the reduced model describes the behavior of the solution of the original system (41) and (42) in the vicinity of a slow manifold given (approximately) by (45). The regular terms in (49) and (50) approximate the behavior of the solution of the original model outside some initial layer located near t = 0.

It is important to mention that asymptotic approximations capture, both qualitatively and quantitatively, the actual features of the solutions of the original model problems which are determined by the presence of different characteristic time scales associated with various processes described by the models. In Fig. 1 the numerical solutions of (41), (42), and (43) are shown for several choices of the small parameter value: ε = 0.25, 0.1, 0.05, 0.01. It is seen in this figure how the shapes of the solution curves change as the small parameter values tend to zero. To illustrate the capabilities of the presented asymptotic approach, in Fig. 2 the numerical solution of (41), (42), and (43) is compared with the leading order approximation (49) and (50) of the solution constructed by the boundary function method; to produce the graphs a particular choice of the small parameter value was made: ε = 0.1. Further reduction of the small parameter value will lead to the asymptotic solution curves being practically indistinguishable from the numerical solution curves.

Fig. 1 The numerical solutions of the original model system (41) and (42) with initial conditions (43) for different choices of the small parameter: ε = 0.25, 0.1, 0.05, 0.01 (upper panel: U; lower panel: V; both plotted against time)


Fig. 2 Comparison of the leading order approximation (49) and (50) with the numerical solution of the original model (41), (42), and (43) for a particular choice of the small parameter value: ε = 0.1 (upper panel: U; lower panel: V; both plotted against time)
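A comparison of the kind shown in Figs. 1 and 2 is straightforward to reproduce. The sketch below integrates the full system (41)–(43) and the reduced equation (46) numerically and evaluates the leading order approximation (49) and (50); the values of K, γ, and ε are hypothetical and chosen only for illustration.

```python
# A sketch (hypothetical K, gamma, eps) comparing the full system (41)-(43)
# with the leading order approximation (49)-(50), as in Fig. 2.
import numpy as np
from scipy.integrate import solve_ivp

K, gamma, eps = 0.6, 0.4, 0.1

def full_rhs(t, y):
    U, V = y
    dU = -U * (1 - V) + (K - gamma) * V          # equation (41)
    dV = (U * (1 - V) - K * V) / eps             # equation (42)
    return [dU, dV]

def reduced_rhs(t, y):
    U0 = y[0]
    return [-gamma * U0 / (K + U0)]              # Michaelis-Menten-Henri equation (46)

t_eval = np.linspace(0.0, 1.0, 400)
full = solve_ivp(full_rhs, (0, 1), [1.0, 0.0], t_eval=t_eval, rtol=1e-8)
red  = solve_ivp(reduced_rhs, (0, 1), [1.0], t_eval=t_eval, rtol=1e-8)

U0 = red.y[0]
tau = t_eval / eps
V_asym = U0 / (K + U0) - np.exp(-(1 + K) * tau) / (K + 1)   # approximation (50)

print("max |U - U0|     =", np.max(np.abs(full.y[0] - U0)))
print("max |V - V_asym| =", np.max(np.abs(full.y[1] - V_asym)))
```

Consistent with Theorem 1, the reported discrepancies are of order ε and shrink further if ε is decreased.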

NMDAR Desensitization: Background Information and General Model

Neurons are the cellular units of the nervous system that receive, integrate, and transmit information to each other and to target tissues like muscles and glands. Information in the nervous system is generally encoded by electrical pulses. Action potentials, or spikes, are brief membrane potential changes that are actively generated and conducted through neuronal fibers called axons to the synapse, a specialized structure between neurons that mediates chemical communication. When a spike in the presynaptic neuron reaches the presynaptic axon terminal, this triggers release of neurotransmitter molecules into the synaptic gap. The transmitter diffuses across the synapse and binds to specialized receptors in the postsynaptic neuron, which in turn leads to regeneration of a small electrical signal, or postsynaptic potential, in the neighboring cell. The neurotransmitter concentration rapidly decays to reset the synapse for the next signal. The clearance of neurotransmitter occurs by enzymatic degradation or diffusion and reuptake of neurotransmitter molecules by transmembrane proteins called transporters. The receptor-mediated postsynaptic potentials are continuously integrated, and if a depolarizing membrane potential change of sufficient amplitude is reached, the postsynaptic cell will generate a spike that will be actively conducted down its axon, triggering neurotransmitter release onto an additional postsynaptic neuron. Let us note that the described situation is an example of an excitatory synapse. There are also inhibitory synapses in which the release of neurotransmitter and activation of the postsynaptic receptors causes a hyperpolarizing response that decreases the probability that the postsynaptic cell will fire an action potential. The effect of a neurotransmitter on the postsynaptic cell, i.e., excitation or inhibition, depends on the neurotransmitter and receptor subtype. We will focus on one particular type of excitatory receptor, the N-methyl-D-aspartate receptor (NMDAR), which binds the most abundant excitatory neurotransmitter in the mammalian central nervous system, L-glutamate. The NMDA receptor is composed of four subunits, typically two GluN1 subunits and two GluN2 subunits (Traynelis et al., 2010). Binding of synaptic glutamate to GluN2 subunits normally activates the receptor and opens an ion channel integral to the complex. However, an additional requirement for receptor activation is occupancy of the GluN1 subunits by a co-agonist, either D-serine or glycine. The latter co-agonists are normally present at ambient levels sufficient to allow a significant fraction of NMDARs to be activated by synaptically released glutamate. In the context of this discussion, we will ignore glycine and consider D-serine as the GluN1 ligand required for synaptic NMDAR activity in the cortex (Henneberger et al. 2010; Le Bail et al. 2014; Mothet et al. 2015; Radzishevsky et al. 2013).

Glutamate receptors play crucial roles in synaptic plasticity, one form of which is known as NMDAR-dependent long-term potentiation (LTP), which is widely considered to be a major cellular and molecular substrate for memory induction and storage. Calcium entry through repetitively activated NMDA receptors triggers recruitment of more glutamate receptors to the postsynaptic membrane and results in LTP, a persistent potentiation of the subsequent response to glutamate (Collingridge et al., 1983; Nicoll, 2017).

Several models have been proposed in order to study NMDAR kinetics. Some variation of the model that is considered here was originally proposed by Benveniste et al. (1990); see Fig. 3.

Here R denotes the receptor, S denotes D-serine, and G denotes L-glutamate. Each Ki is an equilibrium constant for the corresponding reaction: Ki = ki^+/ki^−, where ki^+ and ki^− are the forward and reverse reaction rate constants, respectively. The model assumes that molecules of L-glutamate and D-serine can bind to the receptor in any order, which explains the nine states. Upon binding all four molecules necessary for the activation of the receptor, the full-ligand state G2RS2 can either undergo a conformational change and open, allowing for the inward current (conductive state G2R∗S2), or enter a so-called desensitized state, which still has all four molecules bound to it but does not conduct any current (non-conductive state G2R′S2). The model seems intuitive except perhaps for the desensitized state. Let us now provide some insight into the reasons why this state was included.


Fig. 3 Model of the NMDAR receptor proposed by Benveniste et al. (1990)

In response to continuous exposure of NMDAR to L-glutamate and D-serine or glycine, the inward current exhibits the following dynamics. After a sharp spike the current slowly decays from its peak to a smaller steady-state value in the continuous presence of ligands. This phenomenon of decremental loss of response to the presence of the ligand molecules is called desensitization. The nature of the desensitization phenomenon has puzzled researchers and appears to involve multiple mechanisms. For example, the kinetics of onset and recovery from desensitization is faster with increasing D-serine or glycine concentration. However, some portion of desensitization was also reported to be independent of the amount of glycine used in experiments. The former is called glycine-dependent desensitization, while the latter is referred to as glycine-independent desensitization. For more detailed discussion see, e.g., Mayer et al. (1989), Benveniste et al. (1990), Sather et al. (1990), Tong and Jahr (1994), and Krupp et al. (1998).

In terms of the model, glycine-dependent desensitization could be explained if ligand molecules negatively influence each other's binding through allosteric interactions. For example, if binding of glutamate reduces the affinity of the receptor for glycine, then a saturating pulse of glutamate in the presence of non-saturating glycine would displace the co-agonist glycine, and desensitization would occur during the pre-steady-state phase of glycine unbinding. Comparing the values of the Ki's would verify the described mechanism. In order to account for glycine-independent desensitization, the model introduces a separate desensitized state G2R′S2 that causes current decay independently of the ligand concentrations. For further discussion see Benveniste et al. (1990) and Lester et al. (1993). Let us note that, generally speaking, the receptor can have multiple open and desensitized states, which is suggested by some empirically derived models (Iacobucci and Popescu, 2018). However, the study of these models as well as of their combinations, e.g., Schorge et al. (2005), falls outside the scope of this discussion.

The model presented in Fig. 3, as well as some more complicated ones, has been published over the past two decades; see, e.g., the already mentioned Benveniste et al. (1990), Clements and Westbrook (1991), Lester and Jahr (1992), Nahum-Levy et al. (2001), Schorge et al. (2005), and Iacobucci and Popescu (2018). However, each of these models on its own is overparameterized with respect to the available data. It is possible to find some values of the parameters for the overparameterized models that would fit the data precisely. But the confidence intervals for at least some of these values would be very wide, meaning that the estimates are not actually statistically reliable. Unfortunately, sometimes such unreliable results are still reported and get used in further research. There are several approaches that could be used to deal with this problem. For example, some of the values could be estimated reliably from separate experiments, or the model itself could be modified under additional assumptions. In any case, one needs to adjust the model to fit the available data. It is shown below that adjusting an experimental protocol appropriately allows one to reduce the model presented in Fig. 3 using the asymptotic methods described earlier in this chapter. As a result, several smaller and simpler models will be derived, each corresponding to a particular experimental setup. These resulting models will not be overparameterized with respect to the experimental data and will allow for a reliable estimation of a subset of parameters. The parameter estimates will allow one to get some insight into the nature of glycine-sensitive desensitization.

Kinetic Model of NMDAR and Experiment Design

The law of mass action discussed previously dictates how chemical kinetics schemes like (1) can be written in the form of a system of differential equations (2), (3), and (4). Using the same approach and similar notations for species concentrations, the model of NMDAR shown in Fig. 3 can be written as the following system of differential equations:

(a) d[R]/dt = −k1^+[R][S] + k1^−[RS] − k2^+[R][G] + k2^−[GR],

(b) d[RS]/dt = k1^+[R][S] − k1^−[RS] − k3^+[RS][S] + k3^−[RS2] − k4^+[RS][G] + k4^−[GRS],

(c) d[RS2]/dt = k3^+[RS][S] − k3^−[RS2] − k6^+[RS2][G] + k6^−[GRS2],

(d) d[GR]/dt = k2^+[R][G] − k2^−[GR] − k5^+[GR][S] + k5^−[GRS] − k8^+[GR][G] + k8^−[G2R],

(e) d[GRS]/dt = k4^+[RS][G] − k4^−[GRS] + k5^+[GR][S] − k5^−[GRS] − k7^+[GRS][S] + k7^−[GRS2] − k10^+[GRS][G] + k10^−[G2RS],

(f) d[GRS2]/dt = k6^+[RS2][G] − k6^−[GRS2] + k7^+[GRS][S] − k7^−[GRS2] − k12^+[GRS2][G] + k12^−[G2RS2],

(g) d[G2R]/dt = k8^+[GR][G] − k8^−[G2R] − k9^+[G2R][S] + k9^−[G2RS],

(h) d[G2RS]/dt = k9^+[G2R][S] − k9^−[G2RS] + k10^+[GRS][G] − k10^−[G2RS] − k11^+[G2RS][S] + k11^−[G2RS2],

(i) d[G2RS2]/dt = k11^+[G2RS][S] − k11^−[G2RS2] + k12^+[GRS2][G] − k12^−[G2RS2] − k13^+[G2RS2] + k13^−[G2R′S2] − k14^+[G2RS2] + k14^−[G2R∗S2],

(j) d[G2R′S2]/dt = k13^+[G2RS2] − k13^−[G2R′S2],

(k) d[G2R∗S2]/dt = k14^+[G2RS2] − k14^−[G2R∗S2].    (51)

It can be easily checked that the sum of all derivatives in (51) is zero, as expected: the receptors can bind and unbind ligands and undergo conformational changes, but in the system under consideration, they can neither appear nor disappear. Therefore, the total concentration of receptors in all states is constant, and it will be denoted as [R]total. Now, the variables in the system can be normalized by dividing the concentration of each state by this constant: [R]new = [R]/[R]total, [RS]new = [RS]/[R]total, etc. Each new variable now denotes the fraction of all receptors being in the corresponding state. In what follows, the old notations are used to denote the normalized variables, e.g., [RS] will be used instead of [RS]new, etc.
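Because the passage from the kinetic scheme in Fig. 3 to system (51) is purely mechanical, the right-hand side can also be assembled programmatically from a list of reactions. The sketch below does this in Python; the function name nmdar_rhs and all rate-constant values are hypothetical, and [S] and [G] are treated as externally clamped concentrations. The final line checks numerically that the derivatives sum to zero, i.e., that the total receptor fraction is conserved.

```python
# A sketch (hypothetical names and values): the right-hand side of system (51)
# built from the scheme in Fig. 3 via the law of mass action.
import numpy as np

STATES = ["R", "RS", "RS2", "GR", "GRS", "GRS2",
          "G2R", "G2RS", "G2RS2", "G2R'S2", "G2R*S2"]
IDX = {name: i for i, name in enumerate(STATES)}

# Each entry: (source state, bound ligand or None, product state, reaction index i),
# contributing a net flux k_i^+ [source][ligand] - k_i^- [product].
REACTIONS = [
    ("R",    "S", "RS",     1), ("R",    "G", "GR",     2),
    ("RS",   "S", "RS2",    3), ("RS",   "G", "GRS",    4),
    ("GR",   "S", "GRS",    5), ("RS2",  "G", "GRS2",   6),
    ("GRS",  "S", "GRS2",   7), ("GR",   "G", "G2R",    8),
    ("G2R",  "S", "G2RS",   9), ("GRS",  "G", "G2RS",  10),
    ("G2RS", "S", "G2RS2", 11), ("GRS2", "G", "G2RS2", 12),
    ("G2RS2", None, "G2R'S2", 13), ("G2RS2", None, "G2R*S2", 14),
]

def nmdar_rhs(x, S, G, k_plus, k_minus):
    """Mass-action derivatives of the receptor-state fractions x for system (51)."""
    ligand = {"S": S, "G": G, None: 1.0}
    dx = np.zeros(len(STATES))
    for src, lig, prod, i in REACTIONS:
        flux = k_plus[i] * x[IDX[src]] * ligand[lig] - k_minus[i] * x[IDX[prod]]
        dx[IDX[src]] -= flux
        dx[IDX[prod]] += flux
    return dx

# Consistency check: the total receptor fraction is conserved, so the derivatives
# must sum to zero for any state vector and any (hypothetical) rate constants.
rng = np.random.default_rng(0)
k_plus = {i: rng.uniform(0.5, 5.0) for i in range(1, 15)}
k_minus = {i: rng.uniform(0.5, 5.0) for i in range(1, 15)}
x = rng.random(len(STATES)); x /= x.sum()
print("sum of derivatives:", nmdar_rhs(x, S=1.0, G=1.0, k_plus=k_plus, k_minus=k_minus).sum())
```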

Before the boundary function method can be applied, a small parameter has to be introduced in the model. Using the experimental technique of patch clamp, control of the concentration of ligands that interact with NMDA receptors on the cell surface can be maintained by fast solution exchange, and the concentrations of D-serine and L-glutamate can be rapidly and independently switched. At the same time the experimentalist is able to record the flow of ions, i.e., current, across the cell membrane. In other words, one can clearly see and measure the activation of NMDARs in response to perturbation of ligand concentration. In the model depicted in Fig. 3, there is only one conducting state (G2R∗S2) responsible for the flow of ions. Therefore, during the experiment the only measurable variable is [G2R∗S2]. By raising or lowering the concentration of an NMDAR ligand, one can effectively increase or decrease the rate of state transitions involving this ligand. If the difference in the ligands' concentrations during experiments can be made high enough, it will produce the difference in reaction rates necessary to introduce a small parameter.


The following protocol for the experiments is used. One of the ligands (e.g., D-serine) is present continuously throughout the whole experiment in the vicinity of the NMDARs. After some initial time period, the system will reach its steady state. Since both ligands are required for NMDAR to activate, [G2R∗S2] = 0 and no changes in the basal current will be seen. Next, a second ligand (in this example, it is L-glutamate) is introduced into the vicinity of the receptors. Then a sharp inward current followed by its decay due to NMDAR desensitization is observed. After the system has reached its new steady state, the switch back to the solution with only one initial ligand (in this example, it is D-serine) will occur. During this phase a relaxation of the inward current will be observed, i.e., the current will decay to its basal level as [G2R∗S2] goes to zero. Naturally, L-glutamate and D-serine could be switched in this example. In order to avoid confusion, it is convenient to define two types of experiments:

1. Type 1 experiment is an experiment where D-serine is continuously present in the bath and L-glutamate is applied in a pulse-like manner. This is the experiment described as an example above.

2. Type 2 experiment is an experiment where L-glutamate is continuously present in the bath and D-serine is applied in a pulse-like manner.

The experimental protocol consists of three distinct phases: phase one is observed before the pulse of the second ligand, phase two takes place during the pulse when both ligands are present in the system, and phase three takes place after the pulse. Each phase can be modeled with (51) with different choices of the constant values of [S] and [G]. As was noted, the patch clamp technique allows for rapid switching of the solution, so the ligand introduced in a pulse-like manner ([G] and [S] for Type 1 and Type 2 experiments, respectively) can be modeled as a step function, while the background ligand is simply a constant. Rather than dealing with discontinuous right-hand sides of system (51), one can solve it consecutively three times, with the end state reached during the previous phase used as the initial condition for the next phase (a short computational sketch of this consecutive solution is given below). Before the small parameter needed for model reduction is introduced, the choice of the initial conditions for the experiments of both types has to be formalized.
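A minimal sketch of this phase-by-phase solution, reusing rhs(), STATES, kp, km, and y0 from the sketch above (again with placeholder concentrations, not values from the chapter):

```python
# Sketch: solve (51) phase by phase; the end state of each phase seeds the next one.
from scipy.integrate import solve_ivp

def run_phase(y_init, t_span, S, G, kp, km):
    sol = solve_ivp(rhs, t_span, y_init, args=(S, G, kp, km), method="LSODA")
    return sol, sol.y[:, -1]

# Type 1 experiment: D-serine always in the bath, L-glutamate applied as a pulse.
S_bath, G_pulse = 10.0, 1.0                                             # placeholder concentrations
_, y_pre   = run_phase(y0,    (0.0, 50.0), S_bath, 0.0,     kp, km)     # phase 1: pre-pulse steady state
sol2, y_on = run_phase(y_pre, (0.0, 5.0),  S_bath, G_pulse, kp, km)     # phase 2: pulse
sol3, _    = run_phase(y_on,  (0.0, 10.0), S_bath, 0.0,     kp, km)     # phase 3: after the pulse
current2 = -sol2.y[STATES.index("G2RsS2")]   # only the open state G2R*S2 is observable
```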

Initial Conditions for NMDAR Experiments

During the first phase, NMDA receptors are exposed to only one type of ligand, and the system is allowed to reach its steady state before the second phase begins. As was noted, [G2R*S2] = 0 during the time period corresponding to the first phase; therefore, no dynamics related to the flow of ions can be observed, and no useful information can be extracted except for the basal current level. However, since the system has reached its steady state, the state of the system is known exactly at the moment when the second ligand is introduced. That is, for the first phase the time-dependent solution of system (51) does not need to be found; rather, its steady state has to be determined and then used as the initial condition for the second phase.

Consider the Type 1 experiment. During the first phase only D-serine is present, i.e., [G] = 0. Thus, there are only three non-zero variables: [R], [RS], and [RS2]. Setting the corresponding left-hand sides in the equations (a)–(c) of system (51) to zero, one can write the algebraic "equilibrium" equations for the corresponding steady-state values ([R]ss, [RS]ss, [RS2]ss) as follows:

$$
\begin{aligned}
0 &= -k_1^{+}[R]_{ss}[S] + k_1^{-}[RS]_{ss}, \\
0 &= k_1^{+}[R]_{ss}[S] - k_1^{-}[RS]_{ss} - k_3^{+}[RS]_{ss}[S] + k_3^{-}[RS_2]_{ss}, \\
0 &= k_3^{+}[RS]_{ss}[S] - k_3^{-}[RS_2]_{ss}.
\end{aligned}
$$

And thus,

$$
\begin{aligned}
[R]_{ss} &= \frac{1}{1 + K_1[S] + K_1K_3[S]^2}, \\
[RS]_{ss} &= \frac{K_1[S]}{1 + K_1[S] + K_1K_3[S]^2}, \\
[RS_2]_{ss} &= \frac{K_1K_3[S]^2}{1 + K_1[S] + K_1K_3[S]^2},
\end{aligned}
\tag{52}
$$

where $K_i = k_i^{+}/k_i^{-}$ are the equilibrium constants. Analogously, in the Type 2 experiments [S] = 0 and L-glutamate is always present, which leads to the following non-trivial states: [R], [GR], [G2R]. The corresponding steady-state values ([R]ss, [GR]ss, [G2R]ss) may be easily derived from equations (a), (d), (g) of system (51) after setting their left-hand sides to zero:

$$
\begin{aligned}
[R]_{ss} &= \frac{1}{1 + K_2[G] + K_2K_8[G]^2}, \\
[GR]_{ss} &= \frac{K_2[G]}{1 + K_2[G] + K_2K_8[G]^2}, \\
[G_2R]_{ss} &= \frac{K_2K_8[G]^2}{1 + K_2[G] + K_2K_8[G]^2}.
\end{aligned}
\tag{53}
$$
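These pre-pulse steady states are what seed the second phase. As an illustration (not from the chapter), they are easy to evaluate; the helper below implements (52) and (53). The example values of K1, K3, and [S] are chosen only to mimic Table 1 and the saturating 10 mM D-serine used later, and no units are implied.

```python
# Sketch: phase-1 steady-state fractions (52)-(53), used as phase-2 initial data.
def serine_only_steady_state(K1, K3, S):
    """[R]ss, [RS]ss, [RS2]ss when only D-serine is present, eq. (52)."""
    Z = 1.0 + K1 * S + K1 * K3 * S**2
    return 1.0 / Z, K1 * S / Z, K1 * K3 * S**2 / Z

def glutamate_only_steady_state(K2, K8, G):
    """[R]ss, [GR]ss, [G2R]ss when only L-glutamate is present, eq. (53)."""
    Z = 1.0 + K2 * G + K2 * K8 * G**2
    return 1.0 / Z, K2 * G / Z, K2 * K8 * G**2 / Z

# With K1 = K3 = 0.72 (cf. Table 1) and [S] = 10, most of the receptors sit in RS2
# before the glutamate pulse, consistent with the leading-order value 1 in (64).
print(serine_only_steady_state(0.72, 0.72, 10.0))
```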

Reduction of the NMDAR Model in Case of Experiments with High Concentration of D-Serine

The experimental technique allows the experimentalist to control the concentrations of the substances the NMDA receptors are exposed to throughout the experiment. In order to introduce a small parameter and apply the boundary function method, consider the case of an experiment with a high concentration of D-serine and a low to moderate concentration of L-glutamate. A small parameter ε can then be introduced as follows:

$$
[S] \gg 1, \qquad \overline{[S]} = \varepsilon\,[S], \qquad \overline{[S]} = O(1), \qquad 0 < \varepsilon \ll 1.
\tag{54}
$$

Substitution of (54) into (51) and multiplication of both sides of equations (a)–(i) by ε, after some rearrangement and recombination of equations, produces a system of the form (27), where

$$
\begin{aligned}
U &= \bigl([R] + [RS] + [RS_2],\; [GR] + [GRS] + [GRS_2],\; [G_2R] + [G_2RS] + [G_2RS_2],\; [G_2R'S_2],\; [G_2R^{*}S_2]\bigr)', \\
V &= \bigl([R],\, [RS],\, [GR],\, [GRS],\, [G_2R],\, [G_2RS]\bigr)'.
\end{aligned}
\tag{55}
$$

The part of the singularly perturbed system (27) for V consists of the equations (a), (b), (d), (e), (g), (h) of the rescaled system (51); the part of the system (27) for U includes the equations (j), (k) of the system (51) and the three additional equations obtained as follows: the equation for ([R] + [RS] + [RS2]) is produced by summing up (a), (b), (c); the one for ([GR] + [GRS] + [GRS2]) by summing up (d), (e), (f); and the one for ([G2R] + [G2RS] + [G2RS2]) by summing up equations (g), (h), (i) of the rescaled system (51).

Next, the boundary function method can be applied: as discussed earlier, the asymptotic approximation of the solution of system (51) is sought in the form (29). The elements of the vector function V0, which is the leading order approximation of the regular part of the asymptotic solution for V(t), have to be constructed first. For V0 = ([R]0, [RS]0, [GR]0, [GRS]0, [G2R]0, [G2RS]0)′, the system of the type (34) can be immediately written out and solved to produce the following:

$$
[R]_0 = 0, \quad [RS]_0 = 0, \quad [GR]_0 = 0, \quad [GRS]_0 = 0, \quad [G_2R]_0 = 0, \quad [G_2RS]_0 = 0.
\tag{56}
$$

Now the equations for the elements of the vector function U0, which is the leading order approximation of the regular part of the asymptotic solution for U(t), can be written out. It turns out that, in addition to equations for ([R]0 + [RS]0 + [RS2]0), ([GR]0 + [GRS]0 + [GRS2]0), and ([G2R]0 + [G2RS]0 + [G2RS2]0), the equations for [RS2]0, [GRS2]0, and [G2RS2]0 can be derived as well. Indeed, for ([R]0 + [RS]0 + [RS2]0) the equation is:

$$
\frac{d}{dt}\bigl([R]_0 + [RS]_0 + [RS_2]_0\bigr) = -k_2^{+}[R]_0[G] + k_2^{-}[GR]_0 - k_4^{+}[RS]_0[G] + k_4^{-}[GRS]_0 - k_6^{+}[RS_2]_0[G] + k_6^{-}[GRS_2]_0.
\tag{57}
$$

From (57) and (56) it follows that

$$
\frac{d[RS_2]_0}{dt} = -k_6^{+}[RS_2]_0[G] + k_6^{-}[GRS_2]_0,
\tag{58}
$$

which defines [RS2]0 through a differential equation. In a similar manner, the following differential equations can be obtained for [GRS2]0 and [G2RS2]0:

$$
\frac{d[GRS_2]_0}{dt} = k_6^{+}[RS_2]_0[G] - k_6^{-}[GRS_2]_0 - k_{12}^{+}[GRS_2]_0[G] + k_{12}^{-}[G_2RS_2]_0,
\tag{59}
$$

and

$$
\frac{d[G_2RS_2]_0}{dt} = k_{12}^{+}[GRS_2]_0[G] - k_{12}^{-}[G_2RS_2]_0 - k_{13}^{+}[G_2RS_2]_0 + k_{13}^{-}[G_2R'S_2]_0 - k_{14}^{+}[G_2RS_2]_0 + k_{14}^{-}[G_2R^{*}S_2]_0.
\tag{60}
$$

The remaining two equations for the elements [G2R′S2]0 and [G2R*S2]0 of the vector function U0 (satisfying (35)) are given by:

$$
\begin{aligned}
\frac{d[G_2R'S_2]_0}{dt} &= k_{13}^{+}[G_2RS_2]_0 - k_{13}^{-}[G_2R'S_2]_0, \\
\frac{d[G_2R^{*}S_2]_0}{dt} &= k_{14}^{+}[G_2RS_2]_0 - k_{14}^{-}[G_2R^{*}S_2]_0.
\end{aligned}
\tag{61}
$$

The system (58)–(61) for the leading order regular functions [RS2]0, [GRS2]0, [G2RS2]0, [G2R′S2]0, and [G2R*S2]0 can be solved once the corresponding initial conditions are specified, which can be done after the leading order boundary functions are determined. According to (33), Π0U ≡ 0, i.e.,

Π0[R] + Π0[RS] + Π0[RS2] ≡ 0,

Π0[GR] + Π0[GRS] + Π0[GRS2] ≡ 0,

Π0[G2R] + Π0[G2RS] + Π0[G2RS2] ≡ 0,

Π0[G2R′S2] ≡ 0,

Π0[G2R∗S2] ≡ 0.

(62)

And from (39) (compare with the rescaled system (51)), it follows that

$$
\begin{aligned}
\frac{d\Pi_0[R]}{d\tau} &= -k_1^{+}\overline{[S]}\,\Pi_0[R], \\
\frac{d\Pi_0[RS]}{d\tau} &= k_1^{+}\overline{[S]}\,\Pi_0[R] - k_3^{+}\overline{[S]}\,\Pi_0[RS], \\
\frac{d\Pi_0[GR]}{d\tau} &= -k_5^{+}\overline{[S]}\,\Pi_0[GR], \\
\frac{d\Pi_0[GRS]}{d\tau} &= k_5^{+}\overline{[S]}\,\Pi_0[GR] - k_7^{+}\overline{[S]}\,\Pi_0[GRS], \\
\frac{d\Pi_0[G_2R]}{d\tau} &= -k_9^{+}\overline{[S]}\,\Pi_0[G_2R], \\
\frac{d\Pi_0[G_2RS]}{d\tau} &= k_9^{+}\overline{[S]}\,\Pi_0[G_2R] - k_{11}^{+}\overline{[S]}\,\Pi_0[G_2RS].
\end{aligned}
\tag{63}
$$

Note that the solutions of (63) can be found analytically once the corresponding initial conditions are specified. The initial conditions for (58)–(61) and (63) are found simultaneously from (38). However, the result varies depending on the experimental design. Consider the Type 1 experiment. As was discussed previously, in that case only [R], [RS], and [RS2] have nonzero initial conditions, which are given by the formulas (52). Since some of the initial conditions $V^0$ contain the small parameter ε, they cannot be used right away as was done in (28). Rather, the initial conditions first have to be expanded in powers of ε using Taylor series, and then the leading order terms of the corresponding expansions are used for determining the leading order functions. From the first equation of (52) it follows that

$$
[R]_{ss} = \frac{1}{1 + K_1[S] + K_1K_3[S]^2}
= \frac{1}{1 + K_1\overline{[S]}/\varepsilon + K_1K_3\overline{[S]}^{2}/\varepsilon^{2}}
= 0\cdot\varepsilon^{0} + 0\cdot\varepsilon^{1} + O(\varepsilon^{2}).
$$

Analogously, from the second and the third equations of (52), one gets

$$
\begin{aligned}
[RS]_{ss} &= 0\cdot\varepsilon^{0} + \frac{\varepsilon^{1}}{K_3\overline{[S]}} + O(\varepsilon^{2}), \\
[RS_2]_{ss} &= 1\cdot\varepsilon^{0} - \frac{\varepsilon^{1}}{K_3\overline{[S]}} + O(\varepsilon^{2}).
\end{aligned}
\tag{64}
$$
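For completeness (this intermediate step is not written out in the chapter), the first of these expansions follows from the second equation of (52) by substituting $[S] = \overline{[S]}/\varepsilon$:

$$
[RS]_{ss} = \frac{K_1\overline{[S]}/\varepsilon}{1 + K_1\overline{[S]}/\varepsilon + K_1K_3\overline{[S]}^{2}/\varepsilon^{2}}
= \frac{\varepsilon\,K_1\overline{[S]}}{\varepsilon^{2} + \varepsilon\,K_1\overline{[S]} + K_1K_3\overline{[S]}^{2}}
= \frac{\varepsilon}{K_3\overline{[S]}} + O(\varepsilon^{2}),
$$

and the expansion of $[RS_2]_{ss}$ is obtained in the same way.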

Thus, all the elements of $V^0$ in the leading order approximation are zero except for the one corresponding to the variable [RS2], which turns out to be equal to 1. Then equation (38), together with (56), yields

[R]0(0) + Π0[R](0) = Π0[R](0) = 0,

[RS]0(0) + Π0[RS](0) = Π0[RS](0) = 0,

[GR]0(0) + Π0[GR](0) = Π0[GR](0) = 0,

[GRS]0(0) + Π0[GRS](0) = Π0[GRS](0) = 0,

[G2R]0(0) + Π0[G2R](0) = Π0[G2R](0) = 0,

[G2RS]0(0) + Π0[G2RS](0) = Π0[G2RS](0) = 0.

(65)

The system (63) together with the zero initial conditions (65) produces the trivial solutions: Π0[R](τ) = 0, Π0[RS](τ) = 0, Π0[GR](τ) = 0, Π0[GRS](τ) = 0, Π0[G2R](τ) = 0, and Π0[G2RS](τ) = 0. Then, by virtue of (62), the initial conditions for (58)–(61) will be

[RS2]0(0) + Π0[RS2](0) = [RS2]0(0) = 1,

[GRS2]0(0) + Π0[GRS2](0) = [GRS2]0(0) = 0,

[G2RS2]0(0) + Π0[G2RS2](0) = [G2RS2]0(0) = 0,

[G2R′S2]0(0) = 0,

[G2R∗S2]0(0) = 0.

(66)

To summarize, the leading order approximation of the solution of the system (51) during the second phase of the Type 1 experiment with a high concentration of D-serine and a low to moderate concentration of L-glutamate is given by the regular functions only, which are defined by (56), (58)–(61) with the initial conditions (66). It is worthwhile to note that the Type 2 experiment in this case leads to a slightly more complicated analysis involving non-zero boundary functions and initial conditions different from (66). However, that case is not important for our discussion.
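A minimal computational sketch of this reduced Type 1 model is given below; it is an illustration added here, not part of the chapter. The rate values are the "true" values later listed in Table 1, while the L-glutamate concentration, time span, and units are arbitrary placeholders.

```python
# Sketch: the reduced Type 1 (high D-serine) model (58)-(61) with initial conditions (66).
import numpy as np
from scipy.integrate import solve_ivp

def reduced_high_serine(t, y, G, k6p, k6m, k12p, k12m, k13p, k13m, k14p, k14m):
    RS2, GRS2, G2RS2, G2RpS2, G2RsS2 = y
    return [
        -k6p * RS2 * G + k6m * GRS2,                                    # (58)
        k6p * RS2 * G - k6m * GRS2 - k12p * GRS2 * G + k12m * G2RS2,    # (59)
        k12p * GRS2 * G - k12m * G2RS2
            - k13p * G2RS2 + k13m * G2RpS2
            - k14p * G2RS2 + k14m * G2RsS2,                             # (60)
        k13p * G2RS2 - k13m * G2RpS2,                                   # (61), G2R'S2
        k14p * G2RS2 - k14m * G2RsS2,                                   # (61), G2R*S2
    ]

y0 = [1.0, 0.0, 0.0, 0.0, 0.0]                                # initial conditions (66)
params = (0.01, 4.0, 0.97, 4.0, 8.25, 3.68, 3.0, 83.8, 83.8)  # [G], k6+-, k12+-, k13+-, k14+-
sol = solve_ivp(reduced_high_serine, (0.0, 20.0), y0, args=params, method="LSODA")
open_fraction = sol.y[4]    # [G2R*S2]_0(t): the only experimentally visible state
```

The Type 2 reduced system (69) with initial conditions (71), derived in the next section, can be coded in exactly the same way, with the D-serine concentration [S] in place of [G] and the rates k9±, k11± in place of k6±, k12±.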

Reduction of the NMDAR Model in Experiments with High Concentration of L-Glutamate

The system (51) is symmetric with respect to D-serine and L-glutamate, which is easily seen from the chemical kinetics scheme depicted in Fig. 3. As a result, the reduction procedure in the case of a high concentration of L-glutamate and a low to moderate concentration of D-serine is almost identical. A small parameter ε is now introduced as follows:

$$
[G] \gg 1, \qquad \overline{[G]} = \varepsilon\,[G], \qquad \overline{[G]} = O(1), \qquad 0 < \varepsilon \ll 1.
\tag{67}
$$

Substitution of (67) into (51) and multiplication of both sides of equations (a)–(i) by ε once again, after some rearrangement and recombination of equations similar to that performed in the previous section, produces a system of the form (27), where

$$
\begin{aligned}
U &= \bigl([R] + [GR] + [G_2R],\; [RS] + [GRS] + [G_2RS],\; [RS_2] + [GRS_2] + [G_2RS_2],\; [G_2R'S_2],\; [G_2R^{*}S_2]\bigr)', \\
V &= \bigl([R],\, [GR],\, [RS],\, [GRS],\, [RS_2],\, [GRS_2]\bigr)'.
\end{aligned}
\tag{68}
$$

Repeating the derivation steps performed earlier for the case of high concentration of D-serine and low to moderate concentration of L-glutamate, we get the following equations and values for the leading order regular functions:

$$
\begin{aligned}
&[R]_0 = 0, \quad [RS]_0 = 0, \quad [RS_2]_0 = 0, \quad [GR]_0 = 0, \quad [GRS]_0 = 0, \quad [GRS_2]_0 = 0, \\
&\frac{d[G_2R]_0}{dt} = -k_9^{+}[G_2R]_0[S] + k_9^{-}[G_2RS]_0, \\
&\frac{d[G_2RS]_0}{dt} = k_9^{+}[G_2R]_0[S] - k_9^{-}[G_2RS]_0 - k_{11}^{+}[G_2RS]_0[S] + k_{11}^{-}[G_2RS_2]_0, \\
&\frac{d[G_2RS_2]_0}{dt} = k_{11}^{+}[G_2RS]_0[S] - k_{11}^{-}[G_2RS_2]_0 - k_{13}^{+}[G_2RS_2]_0 + k_{13}^{-}[G_2R'S_2]_0 - k_{14}^{+}[G_2RS_2]_0 + k_{14}^{-}[G_2R^{*}S_2]_0, \\
&\frac{d[G_2R'S_2]_0}{dt} = k_{13}^{+}[G_2RS_2]_0 - k_{13}^{-}[G_2R'S_2]_0, \\
&\frac{d[G_2R^{*}S_2]_0}{dt} = k_{14}^{+}[G_2RS_2]_0 - k_{14}^{-}[G_2R^{*}S_2]_0.
\end{aligned}
\tag{69}
$$

The initial conditions for the differential equations in (69) are determined together with the initial conditions for the leading order boundary functions. Here the case of the Type 2 experiment is considered. Similar to the expansion of type (64) of (52) presented in the previous section, now the expansion of (53) in powers of ε has to be used:

$$
\begin{aligned}
[R]_{ss} &= 0\cdot\varepsilon^{0} + 0\cdot\varepsilon^{1} + O(\varepsilon^{2}), \\
[GR]_{ss} &= 0\cdot\varepsilon^{0} + \frac{\varepsilon^{1}}{K_8\overline{[G]}} + O(\varepsilon^{2}), \\
[G_2R]_{ss} &= 1\cdot\varepsilon^{0} - \frac{\varepsilon^{1}}{K_8\overline{[G]}} + O(\varepsilon^{2}).
\end{aligned}
\tag{70}
$$

All the boundary layer functions in the leading order approximation turn out to be zero in this case as well. Taking this into account, together with (70), the following initial conditions for the equations in (69) are produced:

$$
[G_2R]_0(0) = 1, \quad [G_2RS]_0(0) = 0, \quad [G_2RS_2]_0(0) = 0, \quad [G_2R'S_2]_0(0) = 0, \quad [G_2R^{*}S_2]_0(0) = 0.
\tag{71}
$$

To summarize, the leading order approximation of the solution of the system (51) during the second phase of the Type 2 experiment with a high concentration of L-glutamate and a low to moderate concentration of D-serine is given by the regular functions only, which are defined by the differential equations in (69) with the initial conditions (71). As before, the Type 1 experiment in this case would lead to a more complicated analysis compared to the Type 2 experiment and would involve non-zero boundary functions.

Reduction of the NMDAR Model in Experiments with High Concentrations of D-Serine and L-Glutamate

Consider the case where both ligands are present at high concentrations. Now some of the boundary layer functions will be non-zero for either the Type 1 or the Type 2 experiment. The model reduction procedure in this case stays the same as in the previously studied cases. A small parameter ε is introduced as follows:

$$
[S] \gg 1, \quad [G] \gg 1, \quad \overline{[S]} = \varepsilon\,[S], \quad \overline{[G]} = \varepsilon\,[G], \quad \overline{[S]} = O(1), \quad \overline{[G]} = O(1), \quad 0 < \varepsilon \ll 1.
\tag{72}
$$

After substitution of (72) into (51), multiplication of both sides of equations (a)–(i) by ε, and some rearrangement of equations, a system of the form (27) is produced, where

$$
\begin{aligned}
U &= \bigl([R] + [RS] + [RS_2] + [GR] + [GRS] + [GRS_2] + [G_2R] + [G_2RS] + [G_2RS_2],\; [G_2R'S_2],\; [G_2R^{*}S_2]\bigr)', \\
V &= \bigl([R],\, [RS],\, [RS_2],\, [GR],\, [GRS],\, [GRS_2],\, [G_2R],\, [G_2RS]\bigr)'.
\end{aligned}
\tag{73}
$$

After performing the analysis similar to that presented in the previous sections, it turns out that only three functions of the regular part of the asymptotics in the leading order approximation are not identically zero:

$$
\begin{aligned}
&[R]_0 = 0, \quad [RS]_0 = 0, \quad [RS_2]_0 = 0, \quad [GR]_0 = 0, \quad [GRS]_0 = 0, \quad [GRS_2]_0 = 0, \quad [G_2R]_0 = 0, \quad [G_2RS]_0 = 0, \\
&\frac{d[G_2RS_2]_0}{dt} = -k_{13}^{+}[G_2RS_2]_0 + k_{13}^{-}[G_2R'S_2]_0 - k_{14}^{+}[G_2RS_2]_0 + k_{14}^{-}[G_2R^{*}S_2]_0, \\
&\frac{d[G_2R'S_2]_0}{dt} = k_{13}^{+}[G_2RS_2]_0 - k_{13}^{-}[G_2R'S_2]_0, \\
&\frac{d[G_2R^{*}S_2]_0}{dt} = k_{14}^{+}[G_2RS_2]_0 - k_{14}^{-}[G_2R^{*}S_2]_0.
\end{aligned}
\tag{74}
$$

The boundary functions in the leading order approximation are defined by the following system:

$$
\begin{aligned}
\frac{d\Pi_0[R]}{d\tau} &= -k_1^{+}\overline{[S]}\,\Pi_0[R] - k_2^{+}\overline{[G]}\,\Pi_0[R], \\
\frac{d\Pi_0[RS]}{d\tau} &= k_1^{+}\overline{[S]}\,\Pi_0[R] - k_3^{+}\overline{[S]}\,\Pi_0[RS] - k_4^{+}\overline{[G]}\,\Pi_0[RS], \\
\frac{d\Pi_0[RS_2]}{d\tau} &= k_3^{+}\overline{[S]}\,\Pi_0[RS] - k_6^{+}\overline{[G]}\,\Pi_0[RS_2], \\
\frac{d\Pi_0[GR]}{d\tau} &= k_2^{+}\overline{[G]}\,\Pi_0[R] - k_5^{+}\overline{[S]}\,\Pi_0[GR] - k_8^{+}\overline{[G]}\,\Pi_0[GR], \\
\frac{d\Pi_0[GRS]}{d\tau} &= k_4^{+}\overline{[G]}\,\Pi_0[RS] + k_5^{+}\overline{[S]}\,\Pi_0[GR] - k_7^{+}\overline{[S]}\,\Pi_0[GRS] - k_{10}^{+}\overline{[G]}\,\Pi_0[GRS], \\
\frac{d\Pi_0[GRS_2]}{d\tau} &= k_6^{+}\overline{[G]}\,\Pi_0[RS_2] + k_7^{+}\overline{[S]}\,\Pi_0[GRS] - k_{12}^{+}\overline{[G]}\,\Pi_0[GRS_2], \\
\frac{d\Pi_0[G_2R]}{d\tau} &= k_8^{+}\overline{[G]}\,\Pi_0[GR] - k_9^{+}\overline{[S]}\,\Pi_0[G_2R], \\
\frac{d\Pi_0[G_2RS]}{d\tau} &= k_9^{+}\overline{[S]}\,\Pi_0[G_2R] + k_{10}^{+}\overline{[G]}\,\Pi_0[GRS] - k_{11}^{+}\overline{[S]}\,\Pi_0[G_2RS],
\end{aligned}
\tag{75}
$$

and

$$
\begin{aligned}
&\Pi_0[R] + \Pi_0[RS] + \Pi_0[RS_2] + \Pi_0[GR] + \Pi_0[GRS] + \Pi_0[GRS_2] + \Pi_0[G_2R] + \Pi_0[G_2RS] + \Pi_0[G_2RS_2] \equiv 0, \\
&\Pi_0[G_2R'S_2] \equiv 0, \\
&\Pi_0[G_2R^{*}S_2] \equiv 0.
\end{aligned}
\tag{76}
$$

Here the Type 1 experiment is considered, but the Type 2 experiment case is completely symmetrical and can be analyzed in an analogous manner. The initial conditions for (75) are now of the form:

[R]0(0) + Π0[R](0) = Π0[R](0) = 0,

[RS]0(0) + Π0[RS](0) = Π0[RS](0) = 0,

[RS2]0(0) + Π0[RS2](0) = Π0[RS2](0) = 1,

[GR]0(0) + Π0[GR](0) = Π0[GR](0) = 0,

[GRS]0(0) + Π0[GRS](0) = Π0[GRS](0) = 0,

[GRS2]0(0) + Π0[GRS2](0) = Π0[GRS2](0) = 0,

[G2R]0(0) + Π0[G2R](0) = Π0[G2R](0) = 0,

[G2RS]0(0) + Π0[G2RS](0) = Π0[G2RS](0) = 0,

(77)

which means that all the elements of the vector function Π0V are equal to zero except for the following:

$$
\begin{aligned}
\Pi_0[RS_2] &= e^{-k_6^{+}\overline{[G]}\tau}, \\
\Pi_0[GRS_2] &= \frac{k_6^{+}}{k_{12}^{+} - k_6^{+}}\left(e^{-k_6^{+}\overline{[G]}\tau} - e^{-k_{12}^{+}\overline{[G]}\tau}\right).
\end{aligned}
\tag{78}
$$

From (76) and (78) the expression for Π0[G2RS2] can be derived:

$$
\Pi_0[G_2RS_2] = -\Pi_0[RS_2] - \Pi_0[GRS_2]
= \frac{k_6^{+}k_{12}^{+}}{k_{12}^{+} - k_6^{+}}\left(\frac{1}{k_{12}^{+}}\,e^{-k_{12}^{+}\overline{[G]}\tau} - \frac{1}{k_6^{+}}\,e^{-k_6^{+}\overline{[G]}\tau}\right).
\tag{79}
$$

It is important to note that since $\overline{[G]}\tau = [G]t$, the boundary functions (78) can be expressed in terms of the original L-glutamate concentration and the original, non-stretched time t. Next, taking into account the fact that, according to (78) and (76), the initial value Π0[G2RS2](0) = −1, the initial conditions for (74) can be obtained:

$$
[G_2RS_2]_0(0) = -\Pi_0[G_2RS_2](0) = 1, \qquad [G_2R'S_2]_0(0) = 0, \qquad [G_2R^{*}S_2]_0(0) = 0.
\tag{80}
$$

To summarize, the leading order approximation of the solution of the system (51) during the second phase of the Type 1 experiment with high concentrations of L-glutamate and D-serine is given by the regular functions, defined by the differential equations (74) with the initial conditions (80), and the boundary functions (78) and (79), with all other boundary functions being zero.
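For illustration only (not part of the chapter), the non-zero leading-order boundary functions (78)–(79) are simple enough to write down directly; the sketch below expresses them in the original time t via $\overline{[G]}\tau = [G]t$ and assumes $k_{12}^{+} \ne k_6^{+}$. Any numerical inputs used with it would be placeholders.

```python
# Sketch: the non-zero leading-order boundary functions (78)-(79).
import numpy as np

def pi0_RS2(t, G, k6p):
    return np.exp(-k6p * G * t)

def pi0_GRS2(t, G, k6p, k12p):
    return k6p / (k12p - k6p) * (np.exp(-k6p * G * t) - np.exp(-k12p * G * t))

def pi0_G2RS2(t, G, k6p, k12p):
    # by (76) the three boundary terms must cancel, so this is minus their sum
    return -pi0_RS2(t, G, k6p) - pi0_GRS2(t, G, k6p, k12p)
```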

Reduction of the NMDAR Model After the Pulse

So far, the first two phases of the experiments were discussed in detail. In order to discuss the strategy for reliable parameter estimation of the NMDAR model, the third phase of the experiments now needs to be addressed. It turns out that the models developed so far are sufficient for describing this phase as well.

After the pulse containing a ligand ends in each experiment, that ligand's concentration effectively becomes zero, and it is present in the system only in bound form. For example, in the Type 1 experiment, after the pulse of L-glutamate ends and the system is switched to a glutamate-free solution, i.e., [G] = 0, L-glutamate is still bound in a certain fraction of the receptor states, i.e., GR, G2R, GRS, G2RS, etc. Once again, consider all three experiments discussed previously.

In the Type 1 experiment with high D-serine and low L-glutamate, the conditions after the end of the pulse do not actually change: D-serine is still present at a high concentration (it is present throughout the entire Type 1 experiment) and L-glutamate is low (in fact, its concentration is zero). Therefore, the behavior of the system can be described by the very same model: the leading order regular part of the approximation is described by the equations (56) and (58)–(61), and the leading order approximation of the boundary layer part is defined by the equations (62)–(63). The end state of the second phase, which serves as the initial condition for the third phase, is now different from (52). That could potentially lead to the appearance of non-zero boundary functions during the third phase of the experiment. It can be explained why this does not happen. As shown before, during the second phase of the experiment all the boundary functions, Π0U and Π0V, were zero, i.e., the states U and V were approximated by the regular functions U0 and V0 alone with O(ε) accuracy. This means that they satisfy (56), (58)–(61) at the end state of the second phase. Therefore, the fractions of the NMDAR states [R], [RS], [GR], [GRS], [G2R], and [G2RS] must all be equal to 0 + O(ε) at the end of the second phase of the experiment, which is the beginning of the third phase. Thus, similar to the way in which the solution of (62), (63), and (65) was found, it can be shown that all the boundary functions Π0U and Π0V are identically zero during the third phase of the experiment. Since the Type 2 experiment with low D-serine and high L-glutamate is symmetric to the one discussed above, a similar conclusion on the leading order boundary functions being zero can be drawn for that experiment as well.

In the Type 1 experiment with high D-serine and high L-glutamate, the conditions do change after the end of the pulse: the concentration of L-glutamate switches from being high to being zero, i.e., low. Therefore, the correct model for this case consists of (56), (58)–(63). The initial condition for the third phase in this case corresponds to the end state of the second phase of the experiment, which might include the functions (78) if the pulse is short and these functions do not have time to decay to negligible values during the pulse. As before, the description of the NMDAR states' fractions during the third phase of the experiment does not involve any non-zero boundary functions.

Reliable NMDAR Model Parameter Estimation

Now the models that were developed for the various experimental setups are ready to be used to reliably estimate some of the parameters of the full model depicted in Fig. 3. Each of the derived models approximates the full model under the specified experimental conditions. The reduced models contain overlapping subsets of the parameters of the full model, which suggests the following algorithm for parameter estimation. Performing the experiments in the correct order allows one to estimate parameters in a stepwise manner, using at each step the parameter values estimated during the previous steps. Suppose that the three previously discussed experiments were performed in the following sequence: (A) the Type 1 experiment with high concentrations of both ligands was performed first, then (B) the Type 1 experiment with high D-serine and low L-glutamate concentration, and finally (C) the Type 2 experiment with low D-serine and high L-glutamate concentration. It is important to mention once again that the useful data are collected only during the second and the third phases of each experiment. The process of parameter estimation should proceed as follows:

1. The values of k13± and k14± have to be estimated from the second phase of experiment (A) using (74) with the initial conditions (80). Note that although not all the boundary functions, i.e., (78), (79), are zero, the recorded current is determined by the variable [G2R*S2], for which the leading order approximation has a zero boundary layer function.

2. The values of k6± and k12± are to be estimated from the third phase of experiment (A) and from all the phases of experiment (B). In both cases (56), (58)–(61) are used for describing the regular parts of the corresponding asymptotic approximations. In modeling the third phase of (A), the end state of the second phase of this experiment is used as the initial condition. Therefore, the corresponding asymptotic solution might include the boundary layer functions, as was already discussed. However, since (62) has to be satisfied, the leading order boundary function for [G2R*S2] is going to be zero again. Thus, the system of equations for the leading order regular functions is sufficient for describing the approximate behavior of the model in this case. The approximation of the solution of the model describing the experimental setup (B) has zero leading order boundary functions, and the initial conditions for the corresponding leading order regular functions are given by (66).

3. The values of k9± and k11± are estimated from all the phases of experiment (C) using the leading order regular functions defined by the differential equations (69) with the initial conditions (71). All the leading order boundary functions in this case are zero. (A small computational sketch of the first of these fitting steps is given after this list.)
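The sketch below is an illustration added here, not taken from the chapter: a synthetic phase-2 current of experiment (A) is generated from the reduced model (74), (80) using the "true" rates of Table 1 plus noise, and k13±, k14± are then recovered by nonlinear least squares. The current scale factor, noise level, and time grid are arbitrary choices.

```python
# Sketch of estimation step 1: fit k13+-, k14+- to a phase-2 current trace of
# experiment (A) with the reduced model (74) and initial conditions (80).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def open_state_fraction(t_eval, k13p, k13m, k14p, k14m):
    """Leading-order open-state fraction [G2R*S2]_0(t) from (74), (80)."""
    def f(t, y):
        G2RS2, G2RpS2, G2RsS2 = y
        return [-(k13p + k14p) * G2RS2 + k13m * G2RpS2 + k14m * G2RsS2,
                k13p * G2RS2 - k13m * G2RpS2,
                k14p * G2RS2 - k14m * G2RsS2]
    sol = solve_ivp(f, (0.0, float(t_eval[-1])), [1.0, 0.0, 0.0],
                    t_eval=t_eval, method="LSODA")
    return sol.y[2]

def residuals(theta, t_data, i_data):
    k13p, k13m, k14p, k14m, scale = theta
    return scale * open_state_fraction(t_data, k13p, k13m, k14p, k14m) - i_data

rng = np.random.default_rng(0)
t_data = np.linspace(0.0, 3.0, 300)
i_data = (-5.0 * open_state_fraction(t_data, 3.68, 3.00, 83.80, 83.80)   # "true" rates, cf. Table 1
          + 0.05 * rng.standard_normal(t_data.size))

fit = least_squares(residuals, x0=[1.0, 1.0, 50.0, 50.0, -1.0],
                    args=(t_data, i_data),
                    bounds=([0.0, 0.0, 0.0, 0.0, -np.inf], np.inf))
print(fit.x)   # estimates of k13+, k13-, k14+, k14- and the current scale factor
```

Confidence intervals such as those reported in Table 1 could then be obtained, for example, from the Jacobian returned by the fit or by refitting resampled data.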

Let us reiterate once again that the variable [G2R*S2] corresponds to the fraction of the only NMDAR state which is observable in the experiments through the measurements of the ionic currents. The leading order regular part of its approximation, [G2R*S2]0, is defined for each experimental setup as a variable in the corresponding system of differential equations, and the leading order boundary function Π0[G2R*S2] ≡ 0 for each of the experiments, which follows from (33). Since in the leading order (29) is just a sum of the regular and the boundary functions, finding the boundary layer functions for the rest of the variables, which may turn out to be non-zero, is needed in order to obtain the initial conditions, through relationship (38), for the system of differential equations for the regular functions, e.g., as was done in (77).

It is interesting to notice another fact about the systems of differential equations for the leading order regular functions, i.e., the reduced models given by systems (58)–(61), (69), and (74). As was shown, they describe the effective behavior of the full system depicted in Fig. 3. If one takes a closer look at these systems, it becomes clear that each of them corresponds to a separate chemical kinetic scheme. That is, the leading order approximations of the full model can be effectively considered as following their own chemical reaction schemes. These schemes can be seen in Fig. 4, and, in fact, they turn out to be parts of the full scheme from Fig. 3.

Fig. 4 Chemical kinetics schemes (a), (b), and (c) for the reduced models of the NMDAR receptor corresponding to experimental setups (A), (B), and (C), respectively. (a) is the experiment with high concentrations of both ligands, with D-serine being present throughout the whole experiment and L-glutamate applied in a pulse-like manner. (b) is the same experiment as (a), but the concentration of L-glutamate is low. And (c) is the experiment with a high concentration of L-glutamate continuously present throughout the experiment and a low concentration of D-serine applied in a pulsatile manner


Model Fitting to Data

The approach described in the previous sections guarantees that the models whose parameters have to be reliably estimated from the data are not overparameterized, i.e., the number of parameters in the reduced models derived at each step of the process is sufficiently low, and the parameters may be reliably estimated in a consecutive manner. The real experimental data for the various specially designed experimental setups discussed in this chapter, needed to perform the parameter estimation procedure and to answer the question about the nature of NMDAR desensitization, are not currently available. Therefore, simulated data are used here to prove the concept and to illustrate the application of the described procedure. As was previously mentioned, there exist several studies which introduce overparameterized models with particular values of parameters (which are not unique due to overparameterization) that fit certain experimental data sets perfectly. Although the parameter estimates presented in those studies are not statistically reliable, i.e., the corresponding confidence intervals for the parameter values are very wide, these parameter values can be used to generate artificial data which mimic the data used in the studies mentioned above. In order to simulate the data, the full model system with parameter values taken from Nahum-Levy et al. (2001) was solved, and some noise was added to the solution of the system to produce the simulated data set. The model parameters were then estimated consecutively using the reduced models according to the proposed algorithm outlined in this chapter. The MATLAB (R2018a) software package was used for all the numerical computations. The results of the model fitting to data, which produce the model parameter values together with the corresponding 95% confidence intervals, are shown in Table 1. Examples of fitted model solutions are presented in Fig. 5.
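Purely as an illustration of this simulate-then-fit workflow (the chapter's own computations were done in MATLAB with the Nahum-Levy et al. (2001) parameter values, which are not reproduced here), a noisy trace can be generated from the full model by reusing rhs(), STATES, kp, km, y_pre, S_bath, and G_pulse from the earlier sketches; the current scale and noise level below are arbitrary.

```python
# Sketch: a noisy simulated phase-2 current trace generated from the full model (51).
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
t_grid = np.linspace(0.0, 5.0, 500)
sol = solve_ivp(rhs, (0.0, 5.0), y_pre, args=(S_bath, G_pulse, kp, km),
                method="LSODA", t_eval=t_grid)
open_state = sol.y[STATES.index("G2RsS2")]            # fraction of receptors in G2R*S2
simulated_current = -5.0 * open_state + 0.05 * rng.standard_normal(t_grid.size)
# simulated_current can now be fitted with the reduced models, step by step for
# each experimental setup, as in the estimation sketch above.
```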

Table 1 True parameter values (used to produce the simulated data) and estimated model parameter values with their corresponding 95% confidence intervals. Equilibrium constants K1, K3, K2, and K8, which do not enter the statements of the reduced models and have to be estimated from the full model, might still be numerically unstable to estimate: see, e.g., the confidence interval for K2

Parameter | True value | Estimated value
k13+      | 3.68       | 3.65 ± 0.02
k13−      | 3.00       | 3.00 ± 0.01
k14+      | 83.80      | 81.93 ± 0.63
k14−      | 83.80      | 83.45 ± 0.19
k6+       | 4.00       | 4.00 ± 0.01
k6−       | 0.97       | 1.42 ± 0.23
k12+      | 4.00       | 4.04 ± 0.02
k12−      | 8.25       | 8.17 ± 0.00
k9+       | 1.70       | 1.70 ± 0.01
k9−       | 2.35       | 2.36 ± 0.17
k11+      | 1.70       | 1.70 ± 0.02
k11−      | 19.90      | 19.73 ± 0.02
K1        | 0.7234     | 0.5537 ± 0.08
K3        | 0.7234     | 0.7720 ± 0.03
K2        | 4.1237     | 15.74 ± 60.78
K8        | 4.1237     | 3.82 ± 0.26


[Fig. 5: four panels (a)–(d), each showing current (vertical axis) versus time in seconds (horizontal axis, 0–20 s).]

Fig. 5 (a) Example of a simulated current (black) in the Type 1 experiment with both ligands at their saturating levels (10 mM for both). The remaining panels illustrate other examples of the simulated currents (grey) and the corresponding fitted current curves obtained using the reduced models' solutions (black). (b) The Type 1 experiment with both ligands at their saturating levels (10 mM for both). (c) The Type 1 experiment with a saturating concentration of D-serine (10 mM) and a low concentration of L-glutamate (10 μM). (d) The Type 2 experiment with a saturating concentration of L-glutamate (10 mM) and a low concentration of D-serine (10 μM)

Thus, from the analysis performed using the simulated data, the conclusion can be drawn that the long-lived desensitized state is, indeed, present (since k13+ is significantly greater than zero) and that the fully liganded state G2RS2 is more likely to unbind an L-glutamate or a D-serine molecule than the states GRS2 or G2RS, respectively (since k12− is significantly greater than k6− and k11− is significantly greater than k9−). Thus, the two independent sources of desensitization indicated by the simulated data were correctly identified: (1) the presence of the long-lived non-conductive state G2R′S2 and (2) D-serine-dependent desensitization.


Conclusion

The problem of overparameterization may lead to unreliable predictions which are not always easy to recognize. After a model is constructed and its parameters are estimated by fitting it to data, one tends to believe that, provided the model is not purely empirical, this gives an understanding of the underlying mechanism of the modeled process. However, in the case of an overparameterized model, a subset of the parameters might not be estimated reliably; that is, the actual values of these parameters can be varied drastically without a significant change to the overall fit of the experimental data. In other words, the gathered experimental data are not sufficient to estimate some of the parameters. Given that increasing the number of parameters in a model will always result in a better (or at least not worse) fit of the data, it is easy to see how such unreliable estimates may get reported. The tendency to use previously derived models with reported parameter values as parts of new, more complex models is understandable, as the amount of data, the number of scientific publications, and the complexity of studies continue to increase. However, problems arise when overparameterized models and the corresponding parameter values are used to make predictions outside of the conditions for which the experimental data were gathered. In the case of non-overparameterized models, as long as the assumptions of the original model hold for the more complex cases, this approach is completely valid. However, in the case of overparameterized models, the previously irrelevant, semi-random parameter values, i.e., the ones which were impossible to estimate reliably in the original simpler studies, may become important for making predictions with the new, more complex models, and the use of these unreliable parameter estimates will then lead to wrong predictions of the complex models.

The value of asymptotic methods as a mathematical tool for analyzing models of real-life phenomena cannot be overstated. In this chapter, a generalized asymptotic analysis approach was presented that allows one to reduce complex models containing a large number of parameters to simpler models with fewer parameters, for which statistically reliable estimates may be found by fitting model solutions to available experimental data. The presence of vastly different characteristic temporal and spatial scales in the complex phenomena described by the original models is used to naturally introduce small parameters that facilitate the development of algorithms for constructing asymptotic approximations that mimic the qualitative and quantitative features of the original complex model solutions. Since the rates of reactions are proportional to the concentrations (or products of concentrations) of the reacting species, experimental designs which keep some of the species' concentrations large compared to the others produce (by design!) the fast and slow characteristic time scales associated with the corresponding reactions, some of which are now either fast or slow. This, in turn, allows one to apply the asymptotic reduction algorithms described in this chapter to produce reliable model parameter estimates using the available data from such specially designed experiments.

A chemical kinetic model of the N-methyl-D-aspartate receptor was considered, which is a variation of models considered before (e.g., see Benveniste et al. 1990; Clements and Westbrook 1991; Lester and Jahr 1992; Nahum-Levy et al. 2001). This is an example of a model that is overparameterized with respect to the available experimental data. It was shown that model reduction is possible for specially designed experimental conditions. The method yielded several separate and simpler models for a number of different experimental designs. The resulting reduced models have overlapping sets of parameters, which allows for reliable parameter estimation in a stepwise manner if the models are fitted to the data in the right order. The described procedure allows for the estimation of a subset of the parameters of the original model while avoiding the overparameterization issue. Some of the individual reduced models had been proposed previously (e.g., Clements and Westbrook 1991; Lester and Jahr 1992). However, no single reduced model taken by itself can be used to reliably estimate the full set of parameters. That is, for reliable parameter estimation, all the reduced models must be used in conjunction.

Let us note that only the time-dependent and spatially independent behavior of the system of neurotransmitter actions was considered in the original model. An example of model reduction for a case which involves spatial dependence, related to a description of neurotransmitter action in a synaptic cleft in the presence of generic receptors and transporters, may be found in Kalachev (2006).

References

Benveniste M, Clements J, Vyklicý L, Mayer ML (1990) A kinetic analysis of the modulation of N-methyl-D-aspartic acid receptors by glycine in mouse cultured hippocampal neurones. J Physiol 428:333–357

Clements JD, Westbrook GL (1991) Activation kinetics reveal the number of glutamate and glycine binding sites on the N-methyl-D-aspartate receptor. Neuron 7(4):605–613

Collingridge GL, Kehl SJ, McLennan H (1983) Excitatory amino acids in synaptic transmission in the Schaffer collateral-commissural pathway of the rat hippocampus. J Physiol 334:33–46

FitzHugh R (1955) Mathematical models of threshold phenomena in the nerve membrane. Bull Math Biophys 17(4):257–278

Henneberger C, Papouin T, Oliet SH, Rusakov DA (2010) Long-term potentiation depends on release of D-serine from astrocytes. Nature 463(7278):232–236

Henri V (1903) Lois générales de l'Action des diastases. Hermann, Paris

Hodgkin A, Huxley A (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117:500–544

Iacobucci GJ, Popescu GK (2018) Kinetic models for activation and modulation of NMDA receptor subtypes. Curr Opin Physiol 2:114–122

Kalachev L (2006) Reduced model of neurotransmitter transport in the presence of generic receptors and transporters. J Phys Conf Ser 55:114–129

Krupp JJ, Vissel B, Heinemann SF, Westbrook GL (1998) N-terminal domains in the NR2 subunit control desensitization of NMDA receptors. Neuron 20:317–327

Le Bail M, Martineau M, Sacchi S, Yatsenko N, Radzishevsky I, Conrod S, Ait Ouares K, Wolosker H, Pollegioni L, Billard JM, Mothet JP (2014) Identity of the NMDA receptor coagonist is synapse specific and developmentally regulated in the hippocampus. Proc Natl Acad Sci USA 112(2):204–313

Lester RA, Jahr CE (1992) NMDA channel behavior depends on agonist affinity. J Neurosci 12(2):635–643

Lester RA, Tong G, Jahr CE (1993) Interactions between the glycine and glutamate binding sites of the NMDA receptor. J Neurosci 13(3):1088–1096

Mayer ML, Vyklicky L Jr, Clements J (1989) Regulation of NMDA receptor desensitization in mouse hippocampal neurons by glycine. Nature 338(6214):425–427

Michaelis L, Menten M (1913) Die Kinetik der Invertinwirkung. Biochem Zeitsch 49:333–369

Mothet JP, Le Bail M, Billard JM (2015) Time and space profiling of NMDA receptor coagonist functions. J Neurochem 135:210–225

Murray J (1993) Mathematical biology. Springer, Berlin/Heidelberg

Nagumo J, Arimoto S, Yoshizawa S (1962) An active pulse transmission line simulating nerve axon. Proc IRE 50(10):2061–2070

Nahum-Levy R, Lipinski D, Shavit S, Benveniste M (2001) Desensitization of NMDA receptor channels is modulated by glutamate agonists. Biophys J 80:2152–2166

Nicoll RA (2017) A brief history of long-term potentiation. Neuron 93(2):281–290

Radzishevsky I, Sason H, Wolosker H (2013) D-serine: physiology and pathology. Curr Opin Clin Nutr Metab Care 16(1):72–75

Rusakov D, Kullmann D (1998) Geometric and viscous components of the tortuosity of the extracellular space in the brain. Proc Natl Acad Sci USA 95(15):8975–8980

Sather W, Johnson JW, Henderson G, Ascher P (1990) Glycine-insensitive desensitization of NMDA responses in cultured mouse embryonic neurons. Neuron 4:725–731

Savtchenko L, Rusakov D (2007) The optimal height of the synaptic cleft. Proc Natl Acad Sci USA 104(6):1823–1828

Segel L, Slemrod M (1989) The quasi-steady state assumption: a case study in perturbation. SIAM Rev 31:446–477

Schorge S, Elenes S, Colquhoun D (2005) Maximum likelihood fitting of single channel NMDA activity with a mechanism composed of independent dimers of subunits. J Physiol 569(Pt 2):395–418

Stiefenhofer M (1998) Quasi-steady-state approximation for chemical reaction networks. J Math Biol 36:593–609

Tikhonov A (1948) On the dependence of the solutions of differential equations on a small parameter. Mat Sb (NS) 22(64)2:193–204

Tikhonov A (1950) On systems of differential equations containing parameters. Mat Sb (NS) 27(69)1:147–156

Tong G, Jahr CE (1994) Regulation of glycine-insensitive desensitization of the NMDA receptor in outside-out patches. J Neurophysiol 72(2):754–761

Traynelis SF, Wollmuth LP, McBain CJ, Menniti FS, Vance KM, Ogden KK, Hansen KB, Yuan H, Myers SJ, Dingledine R (2010) Glutamate receptor ion channels: structure, regulation, and function. Pharmacol Rev 62(3):405–496

Vasil'eva A, Butuzov V, Kalachev L (1995) The boundary function method for singular perturbation problems. SIAM studies in applied mathematics. SIAM, Philadelphia

Voit E, Martens H, Omholt S (2015) 150 years of the mass action law. PLoS Comp Biol 11(1):e1004012