Research Article

Adaptive Mesh Refinement and Adaptive Time Integration for Electrical Wave Propagation on the Purkinje System

Wenjun Ying¹ and Craig S. Henriquez²

¹ Department of Mathematics, MOE-LSC and Institute of Natural Sciences, Shanghai Jiao Tong University, Minhang, Shanghai 200240, China
² Departments of Biomedical Engineering and Computer Science, Duke University, Durham, NC 27708-0281, USA

Correspondence should be addressed to Wenjun Ying; wying@sjtu.edu.cn

Received 19 November 2014; Accepted 4 February 2015

Academic Editor: Rodrigo W. dos Santos

Hindawi Publishing Corporation, BioMed Research International, Volume 2015, Article ID 137482, 14 pages. http://dx.doi.org/10.1155/2015/137482

Copyright © 2015 W. Ying and C. S. Henriquez. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local changes of membrane properties. Numerical examples demonstrating the efficiency and accuracy of the adaptive algorithm are presented.

1. Introduction

One of the long-recognized challenges in modeling cardiac dynamics [1-4] is developing efficient and accurate algorithms that can accommodate the widely varying scales in both space and time [5]. The electrical wave fronts typically occupy only a small fraction of the domain, are very sharp (in space), and change very rapidly (in time), while the electrical potential, in the region away from the wave fronts, is spatially broad and changes more slowly. With standard numerical methods on uniform grids, very small mesh parameters and very small timesteps must be used to correctly resolve the fine details of the sharp and rapidly changing wave fronts. These discretization parameters are often chosen heuristically and are fixed throughout the simulation, even if conditions change. Adaptive mesh refinement (AMR) methods have been proposed as a solution, in which coarse grids and large timesteps are used in the area where the electrical potential is changing slowly, and fine grids and small timesteps are applied only in the region where the sharp electrical waves are located and the action potential changes very rapidly. Using this approach, the numbers of grid nodes and timesteps used with the adaptive algorithm are to some extent optimized. The original AMR algorithm was first proposed by Berger and Oliger for hyperbolic equations [6] and shock hydrodynamics [7]. The methods have been applied to cardiac simulations by Cherry et al. [8, 9] and Trangenstein and Kim [10].

The Berger-Oliger AMR algorithm is a hierarchical and recursive integration method for time-dependent partial differential equations. It starts time integration on a relatively coarse grid with a large timestep. The coarse grid is locally refined further if the computed solution in part of the domain is estimated to have large errors. Better solutions are obtained by continuing time integration on the fine grid with a smaller timestep until both coarse and fine grids reach the same time, called synchronization of levels. The fine grid may be locally refined further and is dynamically changing, which leads to an algorithm that is adaptive in both space and time.

The standard implementation of Berger-Oliger's AMR algorithm uses block-structured grids and assumes that the underlying grids are logically rectangular and can be mapped onto a single index space. While the method has been shown to provide computation and accuracy advantages, it is challenging to apply to domains with complex geometry.

In this paper, we present an AMR algorithm that can be used for unstructured grids. The method is applied in both idealized and realistic tree-like domains similar to that found in the His-Purkinje system. The algorithm is based on that proposed by Trangenstein and Kim [10], where an operator splitting technique is used to separate the space-independent reaction part from the linear diffusion part during time integration. Both the reaction and the diffusion parts are integrated with an implicit scheme. This allows larger and adaptive timesteps. As the AMR algorithm naturally provides a hierarchy of multilevel grids, the linear systems resulting from space discretization of the linear diffusion on the adaptively refined grids are solved by a standard geometric multigrid solver. The results show that a uniform coarse discretization can lead to conduction failure or changes in dynamics in some parts of the branching network when compared to a uniform fine grid. The results also show that the AMR scheme with adaptive time integration (AMR-ATI) can yield results as accurate as the uniform fine grid but with a speedup of 15 times.

The remainder of the work is organized as follows. Section 2 describes the partial differential equations which model electrical wave propagation on the Purkinje system. Sections 3 and 4, respectively, present the time integration and space discretization for the reaction-diffusion equations. The adaptive mesh refinement and adaptive time integration procedures are outlined in Sections 5 and 6. In Section 7, some simulation results are presented with the AMR-ATI algorithm.

2. Differential Equations

Suppose that we are given a fiber network of the Purkinje system with N_V vertices and N_E edges. See Figure 1 for an idealized branch of the Purkinje system. Let d_i be the number of edges connected to vertex V_i, for each i = 1, 2, ..., N_V. We call d_i the degree of the vertex V_i. A vertex V_i with d_i = 1 is called a leaf-vertex. Otherwise it is called a nonleaf-vertex. Denote the jth edge E_j in the fiber network by E_j = [x_0^{(j)}, x_1^{(j)}]. Denote the edges that have vertex V_i as the common endpoint by E_{i_1}, E_{i_2}, ..., E_{i_{d_i}}. Denote by x_{b_{i_1}}^{(i_1)}, x_{b_{i_2}}^{(i_2)}, ..., x_{b_{i_{d_i}}}^{(i_{d_i})} the endpoints of the adjacent edges which overlap with vertex V_i. The subscript b_{i_r} is either 0 or 1, for r = 1, 2, ..., d_i.

Figure 1: An idealized branch of the Purkinje system (vertices labeled 0-15).

On each edge E_j of the fiber network, the distribution of the electric/action potential is determined by the conservation of electric currents and can be described by a partial differential equation coupled with a set of ordinary differential equations. Consider

\frac{a}{2}\frac{\partial J(t,x)}{\partial x} + C_m\frac{\partial u(t,x)}{\partial t} + I_{\mathrm{ion}}(u,\mathbf{q}) = I_{\mathrm{stim}}(t,x), \quad x_0^{(j)} < x < x_1^{(j)},
\frac{d\mathbf{q}(t,x)}{dt} = \mathbf{M}(u,\mathbf{q}),  (1)

with

J(t,x) = -\frac{1}{R}\frac{\partial u(t,x)}{\partial x}  (2)

as the flux. Here a (units: cm) denotes a typical radius of the fiber, C_m (units: μF/cm²) is the membrane capacitance constant, and R (units: kΩ·cm) is the electrical resistivity. The vector q denotes a vector of dimensionless gating variables. The functions I_ion(u, q) and M(u, q) are typically nonlinear, describing the membrane dynamics of the fiber. One specific model is the Hodgkin-Huxley equations [11].

We assume the electric potential u(t, x) is continuous,

u(t, x_{b_{i_1}}^{(i_1)}) = u(t, x_{b_{i_2}}^{(i_2)}) = \cdots = u(t, x_{b_{i_{d_i}}}^{(i_{d_i})}),  (3)

and the electric flux is conserved,

\sum_{r=1}^{d_i} (-1)^{b_{i_r}}\, J(t, x_{b_{i_r}}^{(i_r)}) = 0,  (4)

at each vertex V_i at any time t > 0. At a leaf-vertex V_i, as we have d_i = 1, the assumption (4) means the no-flux boundary condition is imposed.

Combined with some appropriate initial conditions for the potential function u(t, x) and the gating variables q(t, x), the differential equations above can be uniquely solved.
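As a concrete, purely illustrative picture of the bookkeeping implied by (3) and (4), the following C++ sketch stores the fiber network as lists of vertices and edges, computes the degree d_i of each vertex, and flags leaf-vertices where the no-flux condition applies. The type and member names are hypothetical and are not taken from the authors' implementation.

#include <cstdio>
#include <vector>

// One edge E_j = [x0, x1] of the fiber network; v0 and v1 are the indices of the
// vertices that the endpoints x_0^{(j)} and x_1^{(j)} coincide with.
struct Edge {
    int v0, v1;        // endpoint vertex indices (subscript b_{ir} = 0 or 1)
    double x0, x1;     // physical coordinates of the endpoints along the edge
};

struct FiberNetwork {
    int numVertices = 0;
    std::vector<Edge> edges;

    // Degree d_i of every vertex: the number of edges meeting at V_i.
    std::vector<int> degrees() const {
        std::vector<int> d(numVertices, 0);
        for (const Edge& e : edges) { ++d[e.v0]; ++d[e.v1]; }
        return d;
    }
};

int main() {
    // A tiny Y-shaped example: vertices 0, 1, 2 are leaves, vertex 3 is the branching point.
    FiberNetwork net;
    net.numVertices = 4;
    net.edges = { {0, 3, 0.0, 1.0}, {1, 3, 0.0, 0.8}, {2, 3, 0.0, 1.2} };

    std::vector<int> d = net.degrees();
    for (int i = 0; i < net.numVertices; ++i) {
        // d_i = 1 marks a leaf-vertex, where (4) reduces to the no-flux condition;
        // at a nonleaf-vertex, (3) gives (d_i - 1) continuity equations and (4) one more.
        std::printf("vertex %d: degree %d (%s)\n", i, d[i], d[i] == 1 ? "leaf" : "nonleaf");
    }
    return 0;
}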

3. Time Integration and Operator Splitting

We will adapt the method of lines to temporally integrate the differential equations (1). Let t^n be the discrete times at which the equations will be discretized. At each timestep, from t^n to t^{n+1}, we use an operator splitting technique to advance the space-dependent part

\frac{a}{2}\frac{\partial J(t,x)}{\partial x} + C_m\frac{\partial u(t,x)}{\partial t} = 0, \quad x_0^{(j)} < x < x_1^{(j)},  (5)

separately from the space-independent part

C_m\frac{\partial u(t,x)}{\partial t} + I_{\mathrm{ion}}(u,\mathbf{q}) = I_{\mathrm{stim}}(t,x),  (6a)
\frac{d\mathbf{q}(t,x)}{dt} = \mathbf{M}(u,\mathbf{q}),  (6b)

in the differential equations (1). Note that the space-dependent part (5) is simply a linear diffusion equation. The space-independent part (6a) and (6b) is simply a set of ordinary differential equations (ODEs).

With respect to time integration, both the linear diffusion and the nonlinear reaction parts can in principle be integrated with any standard ODE solver. In this work, we adapt implicit time integration schemes to integrate both the linear diffusion equation and the nonlinear ODEs, or so-called reaction equations. An implicit scheme allows relatively larger timesteps than those imposed by the stability restriction associated with an explicit scheme.

In the next section, we will focus on the space discretization of the linear diffusion equation. For simplicity, we assume the linear diffusion equation (5) is discretized in time with the backward Euler method:

\frac{a}{2}\frac{\partial J(t^{n+1},x)}{\partial x} + C_m\frac{u(t^{n+1},x) - u(t^n,x)}{\Delta t} = 0, \quad x_0^{(j)} < x < x_1^{(j)},  (7)

with

J(t^{n+1},x) = -\frac{1}{R}\frac{\partial u(t^{n+1},x)}{\partial x}.  (8)

Here Δt = t^{n+1} - t^n.

4. Space Discretization with the Finite Element Method

For conciseness, we will omit the time-dependency of the electric potential u(t^{n+1}, x) and the electric flux J(t^{n+1}, x). Let

u(x) \equiv u(t^{n+1}, x), \quad J(x) \equiv J(t^{n+1}, x), \quad b(x) \equiv \kappa\, u(t^n, x),  (9)

with κ = 2C_m/(aΔt). Note that b(x) is known in the timestep from t^n to t^{n+1}. The semidiscrete equation (7) can then be rewritten as

-\frac{d}{dx}\left[\frac{1}{R}u'(x)\right] + \kappa\, u(x) = J'(x) + \kappa\, u(x) = b(x), \quad x_0^{(j)} < x < x_1^{(j)}.  (10)
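For completeness, (10) follows from (7) and (8) by dividing (7) through by a/2:

\frac{\partial J(t^{n+1},x)}{\partial x} + \frac{2C_m}{a\,\Delta t}\bigl(u(t^{n+1},x) - u(t^n,x)\bigr) = 0,

so with u(x) = u(t^{n+1}, x), J(x) = -\frac{1}{R}u'(x), κ = 2C_m/(aΔt), and b(x) = κ u(t^n, x), the relation J'(x) + κ u(x) = b(x) in (10) is obtained.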

In this work, we further discretize (10) with the continuous piecewise linear finite element method.

For simplicity, in this section we only illustrate the discretization of the ODE (10) with the finite element method for the case of uniform mesh refinement.

Suppose the jth edge E_j in the fiber network is partitioned into a uniform grid, denoted by G_j. Assume the grid G_j has (m_j + 1) nodes, denoted by {ξ_i^{(j)}}_{i=0}^{m_j}, with ξ_i^{(j)} = x_0^{(j)} + i h^{(j)} and h^{(j)} = (x_1^{(j)} - x_0^{(j)})/m_j.

Let u_i^{(j)} be the unknown potential variable associated with the grid node ξ_i^{(j)} and u_h^{(j)} = (u_0^{(j)}, u_1^{(j)}, ..., u_{m_j}^{(j)})^T the vector of unknown potential variables. Let

\varphi_i^{(j)}(\xi) = \begin{cases} \dfrac{\xi - \xi_{i-1}^{(j)}}{h^{(j)}} & \text{if } \xi_{i-1}^{(j)} < \xi < \xi_i^{(j)}, \\[4pt] \dfrac{\xi_{i+1}^{(j)} - \xi}{h^{(j)}} & \text{if } \xi_i^{(j)} < \xi < \xi_{i+1}^{(j)}, \\[4pt] 0 & \text{otherwise} \end{cases}  (11)

be the continuous piecewise linear finite element basis function associated with the grid node ξ_i^{(j)}, for each i = 1, ..., m_j - 1. At the endpoints ξ_0^{(j)} = x_0^{(j)} and ξ_{m_j}^{(j)} = x_1^{(j)}, the associated basis functions read

\varphi_0^{(j)}(\xi) = \begin{cases} \dfrac{\xi_1^{(j)} - \xi}{h^{(j)}} & \text{if } \xi_0^{(j)} < \xi < \xi_1^{(j)}, \\[4pt] 0 & \text{otherwise}, \end{cases}
\qquad
\varphi_{m_j}^{(j)}(\xi) = \begin{cases} \dfrac{\xi - \xi_{m_j-1}^{(j)}}{h^{(j)}} & \text{if } \xi_{m_j-1}^{(j)} < \xi < \xi_{m_j}^{(j)}, \\[4pt] 0 & \text{otherwise}. \end{cases}  (12)

Assume the finite element solution takes the form u_h^{(j)}(x) = \sum_{i=0}^{m_j} u_i^{(j)} \varphi_i^{(j)}(x), which is a linear combination of the basis functions.

We introduce the electric flux J(x) = -R^{-1} u'(x) at the edge endpoints ξ_0^{(j)} = x_0^{(j)} and ξ_{m_j}^{(j)} = x_1^{(j)} as two extra unknowns. In terms of the basis functions {φ_i^{(j)}(x)}_{i=0}^{m_j}, the finite element equations equivalent to the second-order ODE (10) are given by

\int_{x_0^{(j)}}^{x_1^{(j)}} \left[\frac{1}{R}\frac{d u_h^{(j)}(x)}{dx}\frac{d \varphi_i^{(j)}(x)}{dx} + \kappa\, u_h^{(j)}(x)\, \varphi_i^{(j)}(x)\right] dx + J(x_1^{(j)})\,\varphi_i(x_1^{(j)}) - J(x_0^{(j)})\,\varphi_i(x_0^{(j)}) = \int_{x_0^{(j)}}^{x_1^{(j)}} b(x)\,\varphi_i^{(j)}(x)\, dx  (13)

for i = 0, 1, ..., m_j. At individual grid nodes, they explicitly read

\int_{\xi_0^{(j)}}^{\xi_1^{(j)}} \left[\frac{1}{R}\frac{d u_h^{(j)}(x)}{dx}\frac{d \varphi_0^{(j)}(x)}{dx} + \kappa\, u_h^{(j)}(x)\, \varphi_0^{(j)}(x)\right] dx - J(x_0^{(j)})\,\varphi_0^{(j)}(x_0^{(j)}) = \int_{\xi_0^{(j)}}^{\xi_1^{(j)}} b(x)\,\varphi_0^{(j)}(x)\, dx,  (14)

\int_{\xi_{i-1}^{(j)}}^{\xi_{i+1}^{(j)}} \left[\frac{1}{R}\frac{d u_h^{(j)}(x)}{dx}\frac{d \varphi_i^{(j)}(x)}{dx} + \kappa\, u_h^{(j)}(x)\, \varphi_i^{(j)}(x)\right] dx = \int_{\xi_{i-1}^{(j)}}^{\xi_{i+1}^{(j)}} b(x)\,\varphi_i^{(j)}(x)\, dx \quad \text{for } i = 1, 2, ..., m_j - 1,  (15)

\int_{\xi_{m_j-1}^{(j)}}^{\xi_{m_j}^{(j)}} \left[\frac{1}{R}\frac{d u_h^{(j)}(x)}{dx}\frac{d \varphi_{m_j}^{(j)}(x)}{dx} + \kappa\, u_h^{(j)}(x)\, \varphi_{m_j}^{(j)}(x)\right] dx + J(x_1^{(j)})\,\varphi_{m_j}^{(j)}(x_1^{(j)}) = \int_{\xi_{m_j-1}^{(j)}}^{\xi_{m_j}^{(j)}} b(x)\,\varphi_{m_j}^{(j)}(x)\, dx.  (16)

By further discretizing each integral in the finite element system above with the composite trapezoidal rule, we can get a set of linear equations in the following form:

A^{(j)} u_h^{(j)} + J^{(j)} = h^{(j)} b^{(j)},  (17)

with A^{(j)} = (a_{rs}^{(j)})_{(m_j+1)\times(m_j+1)} as the finite element stiffness matrix, b^{(j)} = (b_r^{(j)})_{m_j+1} as the current vector, and J^{(j)} as the flux vector. The stiffness matrix A^{(j)} is tridiagonal and symmetric positive definite. In each row, at most three entries right on the diagonal (a_{r,r-1}^{(j)}, a_{r,r}^{(j)}, and a_{r,r+1}^{(j)}) are nonzero. The flux vector J^{(j)} has the following form:

J^{(j)} = (-J(x_0^{(j)}), 0, ..., 0, J(x_1^{(j)}))^T,  (18)

where most entries are zeros except the first and the last ones.

As the fluxes J(x_0^{(j)}) and J(x_1^{(j)}) through the endpoints of each edge E_j are unknown, the tridiagonal system (17) involves two more unknowns than equations. So for the global system to be uniquely solvable, we need in total 2N_E extra equations/conditions.

Fortunately, at a nonleaf-vertex V_i, which has degree d_i > 1, we have (d_i - 1) equations by the continuity (3) of potentials plus one more equation by the conservation (4) of electric fluxes. At a leaf-vertex V_i, which has d_i = 1, the electric flux conservation (4) provides exactly one boundary condition. In total we have \sum_{i=1}^{N_V} d_i = 2N_E additional equations. The number of equations in the final system is thus the same as that of unknowns. Normally the system is well-determined.
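The identity \sum_{i=1}^{N_V} d_i = 2N_E is the usual handshake count: each edge has two endpoints, and each endpoint contributes one to the degree of the vertex it meets. As a quick check on a small example (not from the paper), a Y-shaped branch with N_V = 4 vertices and N_E = 3 edges has degrees (1, 1, 1, 3), so \sum_i d_i = 6 = 2N_E, exactly matching the six extra flux unknowns introduced at the edge endpoints.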

The overall system on the fiber network involves both the potentials at the grid nodes and the electric fluxes at the fiber vertices as unknowns. In practical computation, we eliminate the electric fluxes to get a linear system with the discrete potentials as the only unknowns. As a matter of fact, noting that φ_0^{(j)}(x_0^{(j)}) = 1 and φ_{m_j}^{(j)}(x_1^{(j)}) = 1, from (14) and (16) we see that the electric flux J(x_0^{(j)}) can be simply written out in terms of u_0^{(j)} and u_1^{(j)}, and the electric flux J(x_1^{(j)}) can be simply written out in terms of u_{m_j-1}^{(j)} and u_{m_j}^{(j)}. Plugging the explicit expressions of the electric fluxes into the conservation condition (4), we get an equation involving the unknown potentials only. After eliminating the electric fluxes, we have a well-determined system which has N_V + \sum_{j=1}^{N_E}(m_j - 1) equations and unknowns. It can be easily verified that the coefficient matrix of the resulting system is a strictly diagonally dominant and symmetric positive definite M-matrix [12].

In the case when the edge E_j is partitioned into a locally refined grid, or a composite grid, the ODE (10) can be similarly discretized with the continuous piecewise linear finite element method. The coefficient matrix A^{(j)} of the resulting system associated with each edge E_j is also a tridiagonal, strictly diagonally dominant, and symmetric positive definite M-matrix. The overall system of discrete equations on the fiber network is well-determined too. The electric fluxes at the fiber vertices can also be eliminated so that the resulting system has the discrete potentials as the only unknowns and the coefficient matrix is a diagonally dominant M-matrix.
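To make (17) concrete, the following is a minimal, self-contained C++ sketch (not the authors' code) that assembles and solves the tridiagonal system for a single edge, assuming a uniform grid, trapezoidal (lumped) quadrature for the mass and load terms, and no-flux edge ends so that the vertex coupling can be ignored; all names and parameter values are illustrative only.

#include <vector>
#include <cstdio>

// Solve a tridiagonal system with the Thomas algorithm.
// lower[i], diag[i], upper[i] are the sub-, main-, and super-diagonal entries of row i.
std::vector<double> solveTridiagonal(std::vector<double> lower, std::vector<double> diag,
                                     std::vector<double> upper, std::vector<double> rhs) {
    const int n = static_cast<int>(diag.size());
    for (int i = 1; i < n; ++i) {            // forward elimination
        double w = lower[i] / diag[i - 1];
        diag[i] -= w * upper[i - 1];
        rhs[i]  -= w * rhs[i - 1];
    }
    std::vector<double> u(n);
    u[n - 1] = rhs[n - 1] / diag[n - 1];      // back substitution
    for (int i = n - 2; i >= 0; --i)
        u[i] = (rhs[i] - upper[i] * u[i + 1]) / diag[i];
    return u;
}

int main() {
    // Hypothetical parameters (units as in Section 2): radius a (cm), C_m (uF/cm^2),
    // R (kOhm*cm), timestep dt (msec), edge length L (cm), m elements on the edge.
    const double a = 0.008, Cm = 1.0, R = 0.5, dt = 0.05, L = 0.5;
    const int m = 50;
    const double h = L / m;
    const double kappa = 2.0 * Cm / (a * dt);          // as defined after (9)

    std::vector<double> uOld(m + 1, -84.6);             // previous-step potential u(t^n, x)
    for (int i = 0; i <= m; ++i)
        if (i * h > 0.4 * L) uOld[i] = 25.4;            // a crude "stimulated" right part

    // Assemble the tridiagonal rows: the stiffness term (1/R) phi_i' phi_k' is integrated
    // exactly, the mass and load terms with the trapezoidal rule (mass lumping).
    std::vector<double> lower(m + 1, 0.0), diag(m + 1, 0.0), upper(m + 1, 0.0), rhs(m + 1, 0.0);
    for (int i = 0; i <= m; ++i) {
        bool boundary = (i == 0 || i == m);
        diag[i] = (boundary ? 1.0 : 2.0) / (R * h) + kappa * h * (boundary ? 0.5 : 1.0);
        if (i > 0) lower[i] = -1.0 / (R * h);
        if (i < m) upper[i] = -1.0 / (R * h);
        rhs[i] = kappa * uOld[i] * h * (boundary ? 0.5 : 1.0);   // b(x) = kappa * u(t^n, x)
    }

    std::vector<double> uNew = solveTridiagonal(lower, diag, upper, rhs);
    std::printf("u(t^{n+1}) at left end, middle, right end: %g %g %g\n",
                uNew[0], uNew[m / 2], uNew[m]);
    return 0;
}

In the full method, of course, the endpoint rows keep the flux unknowns of (18) and are coupled across edges through (3) and (4) as described above; the stand-alone edge here only illustrates the structure of one tridiagonal block.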

In this work, the grids for the space discretization are generated by local and adaptive mesh refinement (see Section 5 for details). The discrete finite element equations on the fiber network are solved with a V-cycle multigrid method. Basic components of the V-cycle iteration on the fiber network, including presmoothing, residual restriction, coarse grid correction, correction prolongation, and postsmoothing, are essentially the same as those of the V-cycle iteration for the composite grid equations on a single interval. We refer the readers to Ying's thesis work [13] for details of the multilevel/multigrid iteration.

5. Adaptive Mesh Refinement

The adaptive mesh refinement (AMR) algorithm applied for this study discretizes the fiber network into a hierarchy of dynamically, locally, and adaptively refined grids. See Figure 2 for a few composite grids, each of which consists of five locally refined grids, at different times.

The AMR algorithm follows Berger-Oliger's approach in timestepping [6-10]. It uses a multilevel approach to recursively integrate the reaction-diffusion system on the composite grids. It first integrates the system with a large timestep on a coarse level grid. Next, a locally refined fine grid is generated based on available coarse data. Then the algorithm integrates the system on the fine level grid with a small timestep. Error estimation is achieved by Richardson extrapolation [14]. By the recursive nature of the algorithm, fine grids are dynamically and locally created based on the data on coarse grids. On fine grids, smaller timesteps are used for time integration. The AMR algorithm synchronizes adjacent coarse and fine levels from time to time. At the moment of synchronization, typical routines such as mesh regridding and data up-/downscaling are performed.

Let K be the maximum number of mesh refinement levels, and let ℓ_k, with k ∈ {1, 2, ..., K}, be the kth level of mesh refinement. Algorithm 1 below gives a brief description of the central part of the adaptive mesh refinement procedure, which recursively advances the data at the current level ℓ_k and its finer levels.

Figure 2: Adaptively refined grids from a simulation which uses five refinement levels.

Algorithm 1 (advance(level ℓ_k, timestep Δt_k)).

Step 1. Integrate the differential equation at the level ℓ_k by its timestep Δt_k with the Strang operator splitting technique:
(a) Integrate the split reaction equation by a half timestep Δt_k/2.
(b) Integrate the split diffusion equation by a full timestep Δt_k.
(c) Integrate the split reaction equation by a half timestep Δt_k/2.

Step 2. If the level ℓ_k has not been refined, with k < K, or it is time to regrid the grid on the finer level ℓ_{k+1}, refine the current level ℓ_k or regrid the finer level ℓ_{k+1} after error estimation. Scale data down from the current level ℓ_k to the finer level ℓ_{k+1}.

Step 3. Set the timestep Δt_{k+1} = Δt_k/2 and integrate the finer level ℓ_{k+1} by two timesteps by recursively calling the function itself, "advance(level ℓ_{k+1}, timestep Δt_{k+1})".

Step 4. If the finer level ℓ_{k+1} and the current level ℓ_k are synchronized, reaching the same time, scale data up from the finer level ℓ_{k+1} to the current level ℓ_k.
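The recursion in Algorithm 1 can be made concrete with the following schematic, runnable C++ sketch using three levels. The "integration" call only prints what it advances, so the output reproduces the ordering 1, 2, 3, 4, ... illustrated in Figure 3; the structure (Strang split, two recursive half-steps, upscaling at synchronization) follows the text, while all type and function names are hypothetical, not the authors' code.

#include <cstdio>
#include <vector>

struct Level {
    double time = 0.0;   // current time reached by this refinement level
};

static int stepCounter = 0;

void strangStep(Level& lv, int k, double dt) {
    // Step 1 of Algorithm 1: reaction half step, diffusion full step, reaction half step.
    // (Stubbed out here; a real implementation would update the level's solution.)
    lv.time += dt;
    std::printf("%2d: advance level %d by dt = %g (now at t = %g)\n",
                ++stepCounter, k, dt, lv.time);
}

void advance(std::vector<Level>& levels, int k, double dt_k) {
    strangStep(levels[k], k, dt_k);                     // Step 1
    if (k + 1 < static_cast<int>(levels.size())) {
        // Step 2 (error estimation, regridding, and downscaling omitted in this sketch).
        double dt_fine = 0.5 * dt_k;                    // Step 3: two finer half-size steps
        advance(levels, k + 1, dt_fine);
        advance(levels, k + 1, dt_fine);
        // Step 4: levels k and k+1 are now synchronized; fine data would be upscaled here.
    }
}

int main() {
    std::vector<Level> levels(3);                       // levels l0, l1, l2
    double dt0 = 1.0;                                   // coarsest timestep
    advance(levels, 0, dt0);                            // first coarse step
    advance(levels, 0, dt0);                            // second coarse step
    return 0;
}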

Figure 3 illustrates the recursive advancing, or integration, of three different mesh refinement levels. Coarse levels are advanced/integrated before fine levels. Each level has its own timestep of different size: a coarser level has a larger timestep size and a finer level has a smaller timestep. The algorithm first advances level ℓ_0 by Δt_0 (indicated by "1"), next advances level ℓ_1 by Δt_1 = Δt_0/2 (indicated by "2"), and then advances level ℓ_2 by two steps with Δt_2 = Δt_1/2 (indicated by "3" and "4"). Upon the synchronization of levels ℓ_1 and ℓ_2, data on the fine level ℓ_2 are upscaled to the coarser level ℓ_1. The recursive integration continues with steps "5", "6", "7", and so forth.

Figure 3: Recursive advancing of three mesh refinement levels.

The design of the AMR algorithm used in this work was based on a few assumptions proposed by Trangenstein and Kim [10].

(i) First, we assume that the number of elements in the base grid, which describes the computational domain and is provided by the user, is relatively small. This will allow linear systems on the grid to be solved very quickly in the middle of a multilevel (composite grid) iteration and will also improve the performance of the adaptive algorithm. The base grid is fixed during the process of adaptive mesh refinement. Other grids on fine levels are in general dynamically and recursively generated by local or uniform refinement of those on coarse levels.

(ii) Second, we assume that the region covered by the grid on a fine level is contained in the interior of that on the coarser level, unless both coarse and fine grids coincide with the physical boundary of the computational domain. This assumption can prevent recursive searching for element neighbors. This assumption is called proper level nesting.

(iii) Third, we assume that if an element of a coarse grid is refined in any part of its physical space, it must be refined everywhere. As a result, the boundary of the main grid on a fine level aligns with the boundary of a subgrid of that on the previous coarser level. By a subgrid of a grid we mean the union of a subset of its elements. This assumption is called alignment of level grids.

(iv) Fourth, after the data on a coarse level grid are advanced by some time, we assume that the data on its finer level are advanced by as many timesteps as required by stability and accuracy to reach exactly the same time as the coarse level. This assumption implies that a coarse level is integrated before its finer levels and that the timestep on a coarse level is an integer multiple of that on the next finer level. It also implies that the time stepping algorithm must be applied recursively within each timestep on all but the finest level. This assumption is called synchronization of advancement.

(v) Fifth, by the time a fine level and its coarser level are synchronized, we assume that the data on the fine level are more accurate than those on the coarse level. So the data on a fine level must be upscaled to its coarser level before the coarse level is further advanced by another coarse timestep. This assumption is called fine data preference.

(vi) Sixth, as stated, all grids except the coarsest one in the AMR algorithm are changing dynamically. It is necessary for a coarse level to regrid its finer level from time to time. But it would be extremely costly to tag elements and regrid levels every coarse timestep, so we would rather perform regridding infrequently. At a coarse level, the number of timesteps between times of regridding its finer level is called the regrid interval. In the AMR algorithm, the regrid interval may be chosen to be an integer divisor of the refinement ratio [13], which could be any even integer number in the implementation. This assumption is called infrequent regridding.

(vii) Finally, there is one more assumption on the adaptive algorithm, called finite termination. This means that the user shall specify a maximum number of refinement levels. Once the refinement reaches the maximum level, no further mesh refinement is performed.

As determined by the nature of the Purkinje system, the grids resulting from space discretization of the computational domain are essentially unstructured. This restricts direct application of Berger-Oliger's original AMR algorithm, which requires the underlying grid to be Cartesian or logically rectangular. The AMR algorithm adapted here for the Purkinje system works with unstructured grids. It represents an unstructured grid by lists of edges and nodes. A grid node is allowed to have multiple connected edges, and the degree of a node could be greater than two. The unstructured grid representation can naturally describe the Purkinje system, which primarily is a tree-like structure but has some loops inside.

Finally, it is worth mentioning that in the AMR algorithm there is a critical parameter called the AMR tolerance, denoted by tolAMR, which is used by the Richardson extrapolation process for tagging coarse grid elements. The AMR tolerance closely influences the efficiency and accuracy of the algorithm. The larger the tolerance is, the more efficient the algorithm is but the less accurate the solution is. The smaller the tolerance is, the more accurate the solution is but the less efficient the algorithm is. Due to the limitation of space, the detailed explanations of the Richardson extrapolation, the AMR tolerance, and other components of the AMR algorithm, such as tagging and buffering of coarse grid elements, implementation of the V-cycle multigrid iteration on the adaptively refined grids, and the data up-/downscaling between coarse and fine grids, are all omitted. We refer interested readers to Ying's dissertation [13].

6. Adaptive Time Integration

The AMR algorithm can automatically recognize when and where the wave fronts start to leave or enter the computational domain due to external current/voltage stimulation or self-excitation. Once the algorithm finds that the wave fronts have left the computational domain and determines that there is no need to make any further mesh refinement, it turns mesh refinement off and works with adaptive time integration (ATI) only. The implementation of timestep size control for the adaptive time integration is standard. In each timestep, the ODEs/reactions are integrated with a full timestep once and independently integrated with a half timestep twice. Then the two solutions are compared, and an estimation of the relative numerical error is obtained by the standard Richardson extrapolation technique [13]. If the estimated (maximum) relative error, denoted by E_max, is greater than a maximum relative tolerance rtol_ATI^{(max)}, the timestep size is rejected and a new timestep is estimated by

\Delta t^{(\mathrm{new})} = \gamma \cdot \left(\frac{\mathrm{rtol}_{\mathrm{ATI}}^{(\mathrm{max})}}{E_{\max}}\right)^{1/(p+1)} \cdot \Delta t^{(\mathrm{old})},  (19)

and the Richardson extrapolation process is repeated. Otherwise the solution with two half timesteps is simply accepted (actually, an extrapolated solution could be used for better accuracy). Here the coefficient γ = 0.8 is called a safety factor, which ensures that the new timestep size will be strictly smaller than the old one, to avoid repeated rejection of timesteps. The constant p in the exponent of (19) is the accuracy order of the overall scheme.

If the suggested timestep size is less than a threshold timestep, which usually indicates that there is an abrupt/quick change of solution states, a local change of membrane properties, or the arrival of an external stimulus, the adaptive mesh refinement process is automatically turned back on again. In the implementation, the threshold timestep is not explicitly specified by the user. It is set as the timestep that is used on the base grid.

In addition, during the ATI period, if the estimated relative error E_max is less than a minimum relative tolerance rtol_ATI^{(min)}, a slightly larger timestep

\Delta t^{(\mathrm{new})} = 1.1\,\Delta t^{(\mathrm{old})}  (20)

is suggested for integration in the next step. The adaptive time integration does not allow the timestep size to be significantly increased within two consecutive steps, so that the implicit solver has a good initial guess. For the same reason, a maximum timestep size Δt^{(max)} is imposed during the adaptive time integration. This usually guarantees that the Newton solver used converges within a few iterations.
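The following C++ sketch (not the authors' code) shows the accept/reject logic of (19) and the growth rule (20) in isolation; "integrateReactions" stands for one implicit integration of the split reaction ODEs over a given timestep and is a hypothetical callback, and the tolerance, safety-factor, order, and maximum-timestep values are the ones quoted in this paper.

#include <cmath>
#include <algorithm>
#include <functional>
#include <vector>
#include <cstdio>

using State = std::vector<double>;

struct AtiController {
    double rtolMax = 1e-2, rtolMin = 1e-4;   // rtol_ATI^(max), rtol_ATI^(min)
    double gamma = 0.8;                      // safety factor
    int p = 2;                               // accuracy order of the overall scheme
    double dtMax = 1.0;                      // maximum allowed timestep (msec)

    // Advance "state" by one accepted step, adapting dt; returns the dt actually used.
    double step(State& state, double& dt,
                const std::function<State(const State&, double)>& integrateReactions) const {
        while (true) {
            State full  = integrateReactions(state, dt);          // one full step
            State half  = integrateReactions(state, 0.5 * dt);
            State twice = integrateReactions(half, 0.5 * dt);     // two half steps

            double errMax = 0.0;                                  // Richardson-style relative error
            for (std::size_t i = 0; i < state.size(); ++i)
                errMax = std::max(errMax, std::fabs(twice[i] - full[i]) /
                                          (std::fabs(twice[i]) + 1e-12));

            if (errMax > rtolMax) {                               // reject: shrink dt via (19)
                dt *= gamma * std::pow(rtolMax / errMax, 1.0 / (p + 1));
                continue;
            }
            double dtUsed = dt;
            state = twice;                                        // accept the two-half-step solution
            if (errMax < rtolMin) dt = std::min(1.1 * dt, dtMax); // gentle growth, as in (20)
            return dtUsed;
        }
    }
};

int main() {
    // Toy usage: a linear "reaction" y' = -lambda*y integrated by implicit Euler.
    AtiController ctrl;
    double lambda = 5.0;
    auto implicitEuler = [&](const State& y, double dt) {
        State out(y.size());
        for (std::size_t i = 0; i < y.size(); ++i) out[i] = y[i] / (1.0 + lambda * dt);
        return out;
    };
    State y = {1.0};
    double t = 0.0, dt = 0.05;
    while (t < 2.0) t += ctrl.step(y, dt, implicitEuler);
    std::printf("y(2) approx %g with final dt = %g\n", y[0], dt);
    return 0;
}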

7. Results

The AMR algorithm proposed in the work was implemented in custom codes written in C++. The simulations presented in this section were all performed in double precision on a dual Xeon 3.6 GHz computer. No data was output when the programs were run for timing studies.

In all of the numerical studies, the electrical resistivity R and the membrane capacitance C_m are fixed to be constant, 0.5 kΩ·cm and 1 μF/cm², respectively. The membrane dynamics are described by the Beeler-Reuter model [15]. No-flux (homogeneous Neumann-type) boundary conditions were applied at the leaf-nodes of the fiber network.

During the time integration for the time-dependent PDEs, a second-order operator splitting technique, the Strang splitting, is applied in each timestep. The ODEs resulting from operator splitting are integrated with a second-order singly diagonally implicit Runge-Kutta (SDIRK2) scheme. The linear diffusion equation is temporally integrated with the Crank-Nicolson scheme and spatially discretized with the continuous piecewise linear finite element method. The finite element system on locally refined grids is solved with a V-cycle multigrid/multilevel iterative method in each timestep at each refinement level. (In the V-cycle composite grid iteration, the multigrid prolongation is done by piecewise linear interpolation. The multigrid restriction is implemented in a way such that the restriction matrix is the transpose of the prolongation matrix. At the coarsest level, as the coefficient matrix of the discrete system is a symmetric and positive definite M-matrix, the linear equations are solved with a symmetric successive overrelaxation preconditioned conjugate gradient method.) The resulting scheme has second-order global accuracy if the tolerances in each component of the adaptive algorithm are consistently selected [13].
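To make the reaction-step integrator concrete, here is a minimal sketch (not the authors' code) of one SDIRK2 step applied to a scalar cubic reaction that stands in for the Beeler-Reuter ionic currents; the two-stage coefficients and the scalar Newton iteration are standard, while all names and parameter values are illustrative.

// One step of the two-stage, second-order, L-stable SDIRK scheme
//   Y1 = y + h*g*f(Y1),  Y2 = y + h*(1-g)*f(Y1) + h*g*f(Y2),  y_{n+1} = Y2,
// with g = 1 - sqrt(2)/2. A scalar cubic reaction replaces the ionic model
// purely for illustration; each implicit stage is solved by Newton's method.
#include <cmath>
#include <cstdio>

double f(double u)  { const double a0 = 0.1; return -u * (u - 1.0) * (u - a0); }
double df(double u) { const double a0 = 0.1; return -(3.0 * u * u - 2.0 * (1.0 + a0) * u + a0); }

// Solve Y = c + hg*f(Y) for Y by Newton's method, starting from the guess Y = c.
double solveStage(double c, double hg) {
    double Y = c;
    for (int it = 0; it < 20; ++it) {
        double r  = Y - c - hg * f(Y);          // residual
        double Jr = 1.0 - hg * df(Y);           // derivative of the residual
        double dY = r / Jr;
        Y -= dY;
        if (std::fabs(dY) < 1e-12) break;
    }
    return Y;
}

double sdirk2Step(double y, double h) {
    const double g = 1.0 - std::sqrt(2.0) / 2.0;
    double Y1 = solveStage(y, h * g);
    double Y2 = solveStage(y + h * (1.0 - g) * f(Y1), h * g);
    return Y2;                                   // stiffly accurate: y_{n+1} = Y2
}

int main() {
    double y = 0.2, h = 0.05;
    for (int n = 0; n < 200; ++n) y = sdirk2Step(y, h);
    std::printf("u after 200 steps: %g\n", y);
    return 0;
}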

In the adaptive simulations, the mesh refinement ratio is fixed to be two, and the critical AMR tolerance tolAMR is set as 0.02. The maximum and minimum relative tolerances, which are used in the ATI period, are chosen to be rtol_ATI^{(max)} = 10^{-2} and rtol_ATI^{(min)} = 10^{-4}, respectively. We restrict the timestep sizes used in the adaptive time integration so that they are not greater than one millisecond, that is, Δt^{(max)} = 1 msec. The value of p in the exponent of (19) is equal to two, as the scheme has second-order accuracy.

Example 2. Simulations on a two-dimensional branch with thickness/radius-variable and nonuniform branch segments. Voltage stimulation is applied from the right.

The two-dimensional branch shown in Figure 4 has a dimension of 4 cm × 2 cm, which is bounded by the rectangular domain [0, 4] × [1, 3] cm². This two-dimensional branch has variable (piecewise constant) radius. Branch segments "6" and "7" have the smallest radius, equal to 20 μm. Branch segment "1" has the largest radius, equal to 160 μm and eight times that of branch segment "6". The radius of branch segment "2" is six times that of "6". Branch segments "3" and "4" have the same radius, four times that of "6". The radius of branch segment "5" is twice that of branch segment "6".

A voltage stimulation is applied from the right end of branch segment "1" through a discontinuous initial action potential, which is given as follows:

V_m(x, y) = \begin{cases} 25.4 \text{ mV} & \text{if } x > 3.5, \\ -84.6 \text{ mV} & \text{otherwise}. \end{cases}  (21)

The base grid used by the simulation with adaptive mesh refinement is a coarse partition of the branch structure (Figure 4) and has 240 edges, with minimum element size equal to 0.0078 cm and maximum element size equal to 0.078 cm. Note that this is a highly nonuniform grid, as the ratio of the maximum to the minimum element size is much greater than one.

Figure 4: A typical branch in the heart conduction system is highly nonuniform in the sense that the lengths of branch segments, each of which is bounded by two adjacent leaves or branching vertices, may vary significantly. In this figure, branch segments "1" and "4" are not directly connected. Instead, they are connected through a very short branch segment. Similarly, branch segments "4" and "6" are also connected through a very short segment.

Actually, in the simulation, the base grid was generated by three uniform bisections from a much coarser grid which only has 30 edges. This makes the multigrid iterations for linear systems more efficient, even though its contribution to the speedup of the overall algorithm is marginal, as most (>90%) of the computer time used by the simulation was spent integrating nonlinear ODEs/reactions. The adaptive simulation effectively uses three refinement levels. To apply the Richardson extrapolation technique, the second level grid was created from uniform bisection of the base grid. In fact, only the grids at the third level were dynamically changing. They are regridded every other timestep.

In the adaptive simulation, the timestep sizes vary from Δt = 0.0098 msec to Δt = 1.0 msec. The AMR process was automatically turned off at time t = 42.85 msec, as the algorithm determines there is no need to make spatially local refinement. After that, the ATI process is activated. The simulation on the time interval [0, 400] msec used 991 secs in computer time.

The two-dimensional branch is also partitioned into a fine quasi-uniform grid, which has 726 edges, with minimum element size equal to 0.0104 cm and maximum element size equal to 0.0127 cm. The simulation with this fine quasi-uniform grid and timestep Δt = 0.00635 msec on the same time interval as the adaptive one used about 14060 secs in computer time. We call this one the "uniform" simulation.

Another simulation, simply working with the coarse grid which is used as the base grid for the AMR one, is also performed. We call this the "no-refinement" simulation. The timestep is fixed to be Δt = 0.039 msec. This one used 767 secs in computer time.

In each of the simulations above, the action potential is initiated at the right end of branch segment "1" and propagates across the whole computational domain. We collected action potentials at the seven marked points (see Figure 4) during the simulations. See Figures 5 and 6 for traces of action potentials at the marked points "4" and "7", respectively. The action potentials from the adaptive and uniform mesh refinement simulations match very well, and their difference at the peak of the upstroke phase is uniformly bounded by 0.1 mV. The action potential from the "no-refinement" simulation has the least accurate results. Potential oscillation is even observed at point "7" (see Figure 6) in the "no-refinement" simulation.

Figure 5: Traces of action potentials during the simulation period [0, 400] msec at marked point "4" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation. The dotted curves were from the uniform simulation. The dashed curves were from the "no-refinement" simulation. In these simulations, a voltage stimulation is applied from the right side of the radius-variable branch structure. The right plot is a close-up of the left plot.

Example 3. Simulations on a two-dimensional branch whose segments have piecewise constant radii. Voltage stimulation is applied from the top.

We once again run three simulations, respectively with adaptive mesh refinement, uniform mesh refinement, and no refinement. The computational domain and parameters are all the same as those used previously. The only difference now is to stimulate the tissue from the top instead of from the right.

Figure 6: Traces of action potentials during the simulation period [0, 400] msec at the point marked "7" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation. The dotted curves were from the uniform simulation. The dashed curves were from the "no-refinement" simulation. In these simulations, a voltage stimulation is applied from the right side of the thickness/radius-variable branch structure. The right plot is a close-up of the left plot.

The voltage stimulation is applied from the upper part of branch "7" through a discontinuous initial action potential, which is given as follows:

V_m(x, y) = \begin{cases} 25.4 \text{ mV} & \text{if } y > 2.8, \\ -84.6 \text{ mV} & \text{otherwise}. \end{cases}  (22)

Figure 7: Traces of action potentials during the simulation period [0, 400] msec at marked points "7" and "6" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation. The dotted curves were from the uniform simulation. The dashed curves were from the "no-refinement" simulation. In these simulations, a voltage stimulation is applied from the top of the radius-variable branch structure. The left plot corresponds to the marked point "7", and the right plot corresponds to the marked point "6".

The action potential successfully propagates across the whole computational domain in each of the adaptive and uniform refinement simulations, while it fails to pass across branching points where the fiber radius changes from small to large in the "no-refinement" simulation (see Figure 7).

To verify the accuracy of the solution from the adaptive simulation, action potentials at the seven marked points (see Figure 4) are also collected and compared with those from the simulation with the fine and quasi-uniform grid. It is observed that the adaptive and uniform results match very well, too, and their difference at the peak of the upstroke phase is bounded by 0.15 mV.

Figure 8: The idealized Purkinje system of the heart.

The simulation with adaptive mesh refinement used 1004 secs in computer time. The timestep sizes used in the adaptive simulation vary from Δt = 0.0098 msec to Δt = 1.0 msec. The adaptive mesh refinement was automatically turned off at time t = 44.02 msec.

The one with the uniform grid and timestep Δt = 0.00635 msec used 14050 secs in computer time.

Example 4. Simulations on a three-dimensional branch whose segments have piecewise constant radii.

The three-dimensional Purkinje system used in the following simulations was originally from 3Dscience.com. The fiber radius of the system was modified to be piecewise constant for simplicity. The radius of each branch segment takes one of the five values, as in the two-dimensional case. The minimum and the maximum radius of the fiber are, respectively, 20 μm and 160 μm. Intermediate values are 40, 80, and 120 μm. See Figure 8 for an illustration of the topological structure, where the variable radii are not drawn to scale.

The idealized Purkinje system has a dimension of 2.04 × 2.16 × 2.98 cm³. It is bounded by the rectangular domain [-0.39, 1.65] × [4.05, 6.21] × [-1.31, 1.67] cm³. The maximum of the shortest paths from the top-most vertex to the other leaf-vertices, computed by Dijkstra's algorithm, is about 5.29 cm.

As in the two-dimensional examples, in each of the simulations with the idealized Purkinje system, a voltage stimulation is applied from the top branch through a discontinuous initial action potential given as follows:

V_m(x, y, z) = \begin{cases} 25.4 \text{ mV} & \text{if } z > 1.372, \\ -84.6 \text{ mV} & \text{otherwise}. \end{cases}  (23)

The base grid used by the adaptive simulation is a coarse partition of the Purkinje system and has 1412 edges/elements, with minimum element size equal to 0.006 cm and maximum element size equal to 0.075 cm. This is also a highly nonuniform grid, as the ratio of the maximum to the minimum element size is up to 12.5. Once again, in this simulation the base grid was actually generated from uniform bisection of a coarser grid which has 706 edges/elements, as this makes the multigrid iterations for linear systems more efficient. The adaptive simulation effectively uses three refinement levels. Similar to the two-dimensional case, the second level grid was created from uniform bisection of the base grid, and only the grids at the third level were dynamically changing, regridded every other timestep.

The timestep sizes used in the adaptive simulation vary from 0.00935 msec to 1.0 msec. The adaptive mesh refinement was automatically turned off at time t = 42.74 msec. The simulation on the time interval [0, 400] msec used 6267 secs in computer time.

The three-dimensional Purkinje system is also partitioned into a fine and quasi-uniform grid, which has 6311 edges/elements, with minimum element size equal to 0.0095 cm and maximum element size equal to 0.016 cm. The simulation with this fine quasi-uniform grid and timestep Δt = 0.0082 msec used about 96960 secs in computer time.

We also run a simulation with timestep Δt = 0.0374 msec on the coarse grid that is used as the base grid for the adaptive simulation. This "no-refinement" simulation used 4580 secs in computer time.

It is observed that, in each of the adaptive, uniform refinement, and no-refinement simulations, the action potential successfully propagates and goes through the whole domain. The solution from the adaptive simulation is in very good agreement with that from the uniform simulation. We collected action potentials at sixteen points located in different regions of the domain and found that the corresponding potential traces almost overlap. Their difference at the peak of the upstroke phases is uniformly bounded by 0.4 mV. See Figures 9, 10, 11, and 12 for some visualized results, which correspond to action potentials at the four marked points shown in Figure 8. Figure 13 shows four plots of the action potential at different times by the AMR-ATI simulation. The adaptive simulation uses roughly the same computer time as the "no-refinement" one but yields much more accurate results, and it gains about a 15.5 times speedup over the uniform one.

8. Discussion

This paper demonstrates the promising application of an algorithm that is adaptive in both space and time for simulating wave propagation on the His-Purkinje system.

In this work, only the numerical results with membrane dynamics described by the Beeler-Reuter model are presented. The adaptive algorithm, however, is by no means restricted to this simple model. In fact, in our implementation the algorithm successfully works with other physically realistic models such as the Luo-Rudy dynamic model and the DiFrancesco-Noble model. As the advantage of the algorithm is due to the application of the operator splitting technique, in principle any standard cardiac model of reaction-diffusion type for modeling wave propagation can be easily incorporated into the adaptive algorithm.

Figure 9: Traces of action potentials during the simulation period [0, 400] msec at the point marked "3" in Figure 8. The right plot is a close-up of the left one.

In addition, the adaptive algorithm can also be straightforwardly applied to modeling signal propagation on the neural system of the body, or even the brain.

Other than its advantages, admittedly the algorithm has its own limitations. For example, the efficiency of the adaptive algorithm heavily relies on the existence of a relatively coarse base grid which describes the computational domain. The speedup of an adaptive mesh refinement algorithm is roughly bounded by the ratio of the maximum number of degrees of freedom (DOFs) of the grid with the highest resolution to the mean DOFs of all grids that have ever been created during the adaptive process; the existence of a coarse base grid makes this ratio, and hence the speedup over a simulation on the grid with the highest resolution, as large as possible. The use of a relatively coarse base grid also makes the multigrid/multilevel solver for linear systems more efficient. It is noticeable that in our simulations the base grids were all selected to have the number of edges/elements as small as possible.

Figure 10: Traces of action potentials during the simulation period [0, 400] msec at the point marked "20" in Figure 8. The right plot is a close-up of the left one.

Figure 11: Traces of action potentials during the simulation period [0, 400] msec at the point marked "76" in Figure 8. The right plot is a close-up of the left one.

In the AMR algorithm [10, 13] designed for uniform or quasi-uniform grids, the timestep size is typically selected to be proportional to the minimum of the element sizes in a grid (divided by an estimated wave speed), in order that the numerical waves or discontinuities will never propagate more than one element within a single timestep. As this strategy guarantees that the fronts of traveling waves are always bounded and tracked by fine grids, the ODEs resulting from operator splitting applied to the reaction-diffusion systems are only integrated on the fine grids, while the solution in other regions, covered by coarser level grids, is simply obtained by time interpolation.

Figure 12: Traces of action potentials during the simulation period [0, 400] msec at the point marked "109" in Figure 8. The right plot is a close-up of the left one.

Figure 13: Four plots of the action potential at different times from the AMR-ATI simulation of the Beeler-Reuter model with Strang splitting (red denotes activated potential and blue denotes inactivated potential).

However, a typical branch in the heart conduction system, or the whole system on its own, is highly nonuniform in the sense that the lengths of branch segments, each of which is bounded by two adjacent leaves or branching vertices, may vary significantly (see Figure 4). This determines that the coarse partition of the domain is also highly nonuniform, since the algorithm requires the number of edges/elements to be as small as possible for efficiency considerations. On this kind of computational domain, the determination of timesteps based on minimum element sizes is found to be less efficient. Here, in the proposed adaptive algorithm, timestep sizes are instead chosen to be proportional to the maximum of the element sizes in a grid. This alternative strategy is experimentally proved to be efficient and accurate enough in simulating electric waves that travel moderately fast, even though its validity needs further investigation, in particular when the fiber radius is much larger than those currently used.

In the simulations presented in this work, error estimations in both the AMR and ATI processes were all based on the Richardson extrapolation process, which is in general more expensive but more rigorous and reliable than other simpler ones such as the gradient detector. If the gradient detector is used, the second coarsest grid in the AMR process does not need to be uniformly bisected and can also be changing dynamically, and the ATI period can use less computer time. The potential gain in algorithm efficiency and the possible loss in solution accuracy with the alternative use of a gradient detector can be studied and compared in a future work.

Finally, we make a remark about the parallelization of the space-time adaptive mesh refinement and adaptive time integration algorithm on the Purkinje system. Our numerical experiments in this work indicate that a large fraction (about 80%) of the computer time in the simulation is spent in the time integration for the reaction part, while the V-cycle multigrid iteration, as well as the mesh refinement part, uses only a small fraction (less than 20%). The dominance of the computer time in the split reaction part is partially due to the one-dimensional nature of the Purkinje network system (mesh refinement and grid generation on one-dimensional structures are much easier than the ones with multiple dimensions). It is obvious that this dominance will be beneficial for the parallelization of the algorithm, since the split reaction part (ODEs) is spatially independent, highly parallelizable, and can even be massively parallelized. Of course, the time consumption on different parts of the operator splitting depends on the specific cardiac models. With a more complicated cardiac model, the reaction part will consume more computer time than the diffusion part, and the corresponding parallel algorithm is expected to have better efficiency. Admittedly, with the reaction part fully parallelized, further improvement in parallel efficiency will depend on the parallelization of the V-cycle multigrid iteration, which solves linear systems over the whole fiber network and needs time for data communication between different CPUs or multicores. The parallel performance of the AMR-ATI algorithm deserves further investigation.

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

Research of the first author was supported in part by theNational Science Foundation of USA under Grant DMS-0915023 and is supported by the National Natural ScienceFoundation of China under Grants DMS-11101278 and DMS-91130012 Research of the first author is also supported by theYoungThousand Talents Program of China

References

[1] E J Vigmond and C Clements ldquoConstruction of a computermodel to investigate sawtooth effects in the Purkinje systemrdquoIEEE Transactions on Biomedical Engineering vol 54 no 3 pp389ndash399 2007

[2] E J Vigmond R W dos Santos A J Prassl M Deo and GPlank ldquoSolvers for the cardiac bidomain equationsrdquo Progress inBiophysics andMolecular Biology vol 96 no 1ndash3 pp 3ndash18 2008

[3] S Linge J Sundnes M Hanslien G T Lines and A TveitoldquoNumerical solution of the bidomain equationsrdquo PhilosophicalTransactions of the Royal Society A Mathematical Physical andEngineering Sciences vol 367 no 1895 pp 1931ndash1950 2009

[4] P PathmanathanMO Bernabeu R Bordas et al ldquoA numericalguide to the solution of the bidomain equations of cardiacelectrophysiologyrdquoProgress in Biophysics andMolecular Biologyvol 102 no 2-3 pp 136ndash155 2010

[5] C S Henriquez ldquoA brief history of tissue models for cardiacelectrophysiologyrdquo IEEE Transactions on Biomedical Engineer-ing vol 61 no 5 pp 1457ndash1465 2014

[6] M J Berger and J Oliger ldquoAdaptive mesh refinement for hyper-bolic partial differential equationsrdquo Journal of ComputationalPhysics vol 53 no 3 pp 484ndash512 1984

[7] M J Berger and P Colella ldquoLocal adaptive mesh refinement forshock hydrodynamicsrdquo Journal of Computational Physics vol82 no 1 pp 64ndash84 1989

[8] E M Cherry H S Greenside and C S Henriquez ldquoA space-time adaptive method for simulating complex cardiac dynam-icsrdquo Physical Review Letters vol 84 no 6 article 1343 2000

[9] E M Cherry H S Greenside and C S Henriquez ldquoEffi-cient simulation of three-dimensional anisotropic cardiac tissueusing an adaptive mesh refinement methodrdquo Chaos vol 13 no3 pp 853ndash865 2003

[10] J A Trangenstein and C Kim ldquoOperator splitting and adaptivemesh refinement for the Luo-Rudy I modelrdquo Journal of Compu-tational Physics vol 196 no 2 pp 645ndash679 2004

[11] A L Hodgkin and A F Huxley ldquoA quantitative descriptionof membrane current and its application to conduction andexcitation in nerverdquoThe Journal of Physiology vol 117 no 4 pp500ndash544 1952

[12] R S VargaMatrix Iterative Analysis Springer Series in Compu-tational Mathematics Springer Berlin Germany 2nd edition2000

14 BioMed Research International

[13] W Ying Department of Mathematics Duke University Dur-ham NC USA 2005

[14] C Brezinski and M R Zaglia Extrapolation Methods Theoryand Practice vol 2 of Studies in Computational MathematicsNorth-Holland Publishing AmsterdamTheNetherlands 1991

[15] G W Beeler and H Reuter ldquoReconstruction of the actionpotential of ventricular myocardial fibresrdquoThe Journal of Phys-iology vol 268 no 1 pp 177ndash210 1977

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

Page 2: Research Article Adaptive Mesh Refinement and …downloads.hindawi.com/journals/bmri/2015/137482.pdfResearch Article Adaptive Mesh Refinement and Adaptive Time Integration for Electrical

2 BioMed Research International

the underlying grids are logically rectangular and can be mapped onto a single index space. While the method has been shown to provide computation and accuracy advantages, it is challenging to apply to domains with complex geometry.

In this paper, we present an AMR algorithm that can be used for unstructured grids. The method is applied in both idealized and realistic tree-like domains similar to those found in the His-Purkinje system. The algorithm is based on that proposed by Trangenstein and Kim [10], where an operator splitting technique is used to separate the space-independent reactions part from the linear diffusion part during time integration. Both the reactions and the diffusion parts are integrated with an implicit scheme. This allows larger and adaptive timesteps. As the AMR algorithm naturally provides a hierarchy of multilevel grids, the linear systems resulting from space discretization of the linear diffusion on the adaptively refined grids are solved by a standard geometric multigrid solver. The results show that uniform coarse discretization can lead to conduction failure or changes in dynamics in some parts of the branching network when compared to a uniform fine grid. The results also show that the AMR scheme with adaptive time integration (AMR-ATI) can yield results as accurate as the uniform fine grid but with a speedup of 15 times.

The remainder of the work is organized as follows. Section 2 describes the partial differential equations which model electrical wave propagation on the Purkinje system. Sections 3 and 4, respectively, present the time integration and space discretization for the reaction-diffusion equations. The adaptive mesh refinement and adaptive time integration procedures are outlined in Sections 5 and 6. In Section 7, some simulation results are presented with the AMR-ATI algorithm.

2. Differential Equations

Suppose that we are given a fiber network of the Purkinje system with $N_V$ vertices and $N_E$ edges. See Figure 1 for an idealized branch of the Purkinje system. Let $d_i$ be the number of edges connected to vertex $V_i$, for each $i = 1, 2, \ldots, N_V$. We call $d_i$ the degree of the vertex $V_i$. A vertex $V_i$ with $d_i = 1$ is called a leaf-vertex; otherwise it is called a nonleaf-vertex. Denote the $j$th edge $E_j$ in the fiber network by $E_j = [x_0^{(j)}, x_1^{(j)}]$. Denote the edges that have vertex $V_i$ as the common endpoint by $E_{i_1}, E_{i_2}, \ldots, E_{i_{d_i}}$, and denote by $x_{b_{i_1}}^{(i_1)}, x_{b_{i_2}}^{(i_2)}, \ldots, x_{b_{i_{d_i}}}^{(i_{d_i})}$ the endpoints of the adjacent edges which overlap with vertex $V_i$. The subscript $b_{i_r}$ is either 0 or 1, for $r = 1, 2, \ldots, d_i$.

On each edge $E_j$ of the fiber network, the distribution of the electric/action potential is determined by the conservation of electric currents and could be described by a partial differential equation coupled with a set of ordinary differential equations. Consider
$$\frac{a}{2}\frac{\partial J(t,x)}{\partial x} + C_m \frac{\partial u(t,x)}{\partial t} + I_{\mathrm{ion}}(u,\mathbf{q}) = I_{\mathrm{stim}}(t,x) \quad \text{for } x_0^{(j)} < x < x_1^{(j)},$$
$$\frac{d\mathbf{q}(t,x)}{dt} = \mathbf{M}(u,\mathbf{q}). \tag{1}$$

Figure 1: An idealized branch of the Purkinje system (vertices labeled 0-15).

with
$$J(t,x) = -\frac{1}{R}\frac{\partial u(t,x)}{\partial x} \tag{2}$$
as the flux. Here $a$ (units: cm) denotes a typical radius of the fiber, $C_m$ (units: μF/cm²) is the membrane capacitance constant, and $R$ (units: kΩ·cm) is the electrical resistivity. The vector $\mathbf{q}$ denotes a vector of dimensionless gating variables. The functions $I_{\mathrm{ion}}(u,\mathbf{q})$ and $\mathbf{M}(u,\mathbf{q})$ are typically nonlinear, describing the membrane dynamics of the fiber. One specific model is the Hodgkin-Huxley equations [11].

We assume the electric potential $u(t,x)$ is continuous,
$$u(t, x_{b_{i_1}}^{(i_1)}) = u(t, x_{b_{i_2}}^{(i_2)}) = \cdots = u(t, x_{b_{i_{d_i}}}^{(i_{d_i})}), \tag{3}$$
and the electric flux is conserved,
$$\sum_{r=1}^{d_i} (-1)^{b_{i_r}}\, J(t, x_{b_{i_r}}^{(i_r)}) = 0, \tag{4}$$
at each vertex $V_i$ at any time $t > 0$. At a leaf-vertex $V_i$, as we have $d_i = 1$, the assumption (4) means the no-flux boundary condition is imposed.

Combined with some appropriate initial conditions for the potential function $u(t,x)$ and the gating variables $\mathbf{q}(t,x)$, the differential equations above can be uniquely solved.
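As a quick consistency check (not stated explicitly above but following directly from the model, assuming the resistivity $R$ is constant along an edge), substituting the flux definition (2) into the current balance (1) recovers the familiar cable-equation form:
$$C_m \frac{\partial u}{\partial t} = \frac{a}{2R}\frac{\partial^2 u}{\partial x^2} - I_{\mathrm{ion}}(u,\mathbf{q}) + I_{\mathrm{stim}}(t,x), \qquad \frac{d\mathbf{q}}{dt} = \mathbf{M}(u,\mathbf{q}).$$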

3. Time Integration and Operator Splitting

We will adapt the method of lines to temporally integrate the differential equations (1). Let $t^n$ be the discrete times at which the equations will be discretized. At each timestep, from $t^n$ to $t^{n+1}$, we use an operator splitting technique to advance the space-dependent part
$$\frac{a}{2}\frac{\partial J(t,x)}{\partial x} + C_m \frac{\partial u(t,x)}{\partial t} = 0 \quad \text{for } x_0^{(j)} < x < x_1^{(j)}, \tag{5}$$
separately from the space-independent part
$$C_m \frac{\partial u(t,x)}{\partial t} + I_{\mathrm{ion}}(u,\mathbf{q}) = I_{\mathrm{stim}}(t,x), \tag{6a}$$
$$\frac{d\mathbf{q}(t,x)}{dt} = \mathbf{M}(u,\mathbf{q}), \tag{6b}$$
in the differential equations (1). Note that the space-dependent part (5) is simply a linear diffusion equation. The space-independent part (6a) and (6b) is simply a set of ordinary differential equations (ODEs).

With respect to time integration, both the linear diffusion and the nonlinear reaction parts can, in principle, be integrated with any standard ODE solver. In this work, we adopt implicit time integration schemes to integrate both the linear diffusion equation and the nonlinear ODEs, or so-called reaction equations. An implicit scheme allows relatively larger timesteps than those imposed by the stability restriction associated with an explicit scheme.

In the next section, we will focus on the space discretization of the linear diffusion equation. For simplicity, we assume the linear diffusion equation (5) is discretized in time with the backward Euler method:
$$\frac{a}{2}\frac{\partial J(t^{n+1},x)}{\partial x} + C_m \frac{u(t^{n+1},x) - u(t^n,x)}{\Delta t} = 0 \quad \text{for } x_0^{(j)} < x < x_1^{(j)}, \tag{7}$$
with
$$J(t^{n+1},x) = -\frac{1}{R}\frac{\partial u(t^{n+1},x)}{\partial x}. \tag{8}$$
Here $\Delta t = t^{n+1} - t^n$.

4. Space Discretization with the Finite Element Method

For conciseness, we will omit the time-dependency of the electric potential $u(t^{n+1},x)$ and the electric flux $J(t^{n+1},x)$. Let
$$u(x) \equiv u(t^{n+1},x), \qquad J(x) \equiv J(t^{n+1},x), \qquad b(x) \equiv \kappa\, u(t^n,x), \tag{9}$$
with $\kappa = 2C_m/(a\,\Delta t)$. Note that $b(x)$ is known in the timestep from $t^n$ to $t^{n+1}$. The semidiscrete equation (7) can then be rewritten as
$$-\frac{d}{dx}\left[\frac{1}{R}u'(x)\right] + \kappa\, u(x) = J'(x) + \kappa\, u(x) = b(x) \quad \text{for } x_0^{(j)} < x < x_1^{(j)}. \tag{10}$$
In this work, we further discretize (10) with the continuous piecewise linear finite element method.

For simplicity, in this section we only illustrate the discretization of the ODE (10) with the finite element method for the case of uniform mesh refinement.

Suppose the $j$th edge $E_j$ in the fiber network is partitioned into a uniform grid, denoted by $\mathcal{G}_j$. Assume the grid $\mathcal{G}_j$ has $(m_j+1)$ nodes, denoted by $\{\xi_i^{(j)}\}_{i=0}^{m_j}$, with $\xi_i^{(j)} = x_0^{(j)} + i\,h^{(j)}$ and $h^{(j)} = (x_1^{(j)} - x_0^{(j)})/m_j$.

Let $u_i^{(j)}$ be the unknown potential variable associated with the grid node $\xi_i^{(j)}$ and $\mathbf{u}_h^{(j)} = (u_0^{(j)}, u_1^{(j)}, \ldots, u_{m_j}^{(j)})^T$ the vector of unknown potential variables. Let
$$\varphi_i^{(j)}(\xi) = \begin{cases} \dfrac{\xi - \xi_{i-1}^{(j)}}{h^{(j)}} & \text{if } \xi_{i-1}^{(j)} < \xi < \xi_i^{(j)},\\ \dfrac{\xi_{i+1}^{(j)} - \xi}{h^{(j)}} & \text{if } \xi_i^{(j)} < \xi < \xi_{i+1}^{(j)},\\ 0 & \text{otherwise}, \end{cases} \tag{11}$$
be the continuous piecewise linear finite element basis function associated with the grid node $\xi_i^{(j)}$, for each $i = 1, \ldots, m_j - 1$. At the endpoints $\xi_0^{(j)} = x_0^{(j)}$ and $\xi_{m_j}^{(j)} = x_1^{(j)}$, the associated basis functions read
$$\varphi_0^{(j)}(\xi) = \begin{cases} \dfrac{\xi_1^{(j)} - \xi}{h^{(j)}} & \text{if } \xi_0^{(j)} < \xi < \xi_1^{(j)},\\ 0 & \text{otherwise}, \end{cases} \qquad \varphi_{m_j}^{(j)}(\xi) = \begin{cases} \dfrac{\xi - \xi_{m_j-1}^{(j)}}{h^{(j)}} & \text{if } \xi_{m_j-1}^{(j)} < \xi < \xi_{m_j}^{(j)},\\ 0 & \text{otherwise}. \end{cases} \tag{12}$$
Assume the finite element solution takes the form $u_h^{(j)}(x) = \sum_{i=0}^{m_j} u_i^{(j)} \varphi_i^{(j)}(x)$, which is a linear combination of the basis functions.

We introduce the electric flux $J(x) = -R^{-1}u'(x)$ at the edge endpoints $\xi_0^{(j)} = x_0^{(j)}$ and $\xi_{m_j}^{(j)} = x_1^{(j)}$ as two extra unknowns. In terms of the basis functions $\{\varphi_i^{(j)}(x)\}_{i=0}^{m_j}$, the finite element equations equivalent to the second-order ODE (10) are given by
$$\int_{x_0^{(j)}}^{x_1^{(j)}} \left[\frac{1}{R}\frac{d}{dx}u_h^{(j)}(x)\,\frac{d}{dx}\varphi_i^{(j)}(x) + \kappa\, u_h^{(j)}(x)\,\varphi_i^{(j)}(x)\right] dx + J(x_1^{(j)})\,\varphi_i^{(j)}(x_1^{(j)}) - J(x_0^{(j)})\,\varphi_i^{(j)}(x_0^{(j)}) = \int_{x_0^{(j)}}^{x_1^{(j)}} b(x)\,\varphi_i^{(j)}(x)\, dx \tag{13}$$
for $i = 0, 1, \ldots, m_j$. At individual grid nodes they explicitly read
$$\int_{\xi_0^{(j)}}^{\xi_1^{(j)}} \left[\frac{1}{R}\frac{d}{dx}u_h^{(j)}(x)\,\frac{d}{dx}\varphi_0^{(j)}(x) + \kappa\, u_h^{(j)}(x)\,\varphi_0^{(j)}(x)\right] dx - J(x_0^{(j)})\,\varphi_0^{(j)}(x_0^{(j)}) = \int_{\xi_0^{(j)}}^{\xi_1^{(j)}} b(x)\,\varphi_0^{(j)}(x)\, dx, \tag{14}$$
$$\int_{\xi_{i-1}^{(j)}}^{\xi_{i+1}^{(j)}} \left[\frac{1}{R}\frac{d}{dx}u_h^{(j)}(x)\,\frac{d}{dx}\varphi_i^{(j)}(x) + \kappa\, u_h^{(j)}(x)\,\varphi_i^{(j)}(x)\right] dx = \int_{\xi_{i-1}^{(j)}}^{\xi_{i+1}^{(j)}} b(x)\,\varphi_i^{(j)}(x)\, dx \quad \text{for } i = 1, 2, \ldots, m_j - 1, \tag{15}$$
$$\int_{\xi_{m_j-1}^{(j)}}^{\xi_{m_j}^{(j)}} \left[\frac{1}{R}\frac{d}{dx}u_h^{(j)}(x)\,\frac{d}{dx}\varphi_{m_j}^{(j)}(x) + \kappa\, u_h^{(j)}(x)\,\varphi_{m_j}^{(j)}(x)\right] dx + J(x_1^{(j)})\,\varphi_{m_j}^{(j)}(x_1^{(j)}) = \int_{\xi_{m_j-1}^{(j)}}^{\xi_{m_j}^{(j)}} b(x)\,\varphi_{m_j}^{(j)}(x)\, dx. \tag{16}$$

By further discretizing each integral in the finite element system above with the composite trapezoidal rule, we can get a set of linear equations in the following form:
$$\mathbf{A}^{(j)}\mathbf{u}_h^{(j)} + \mathbf{J}^{(j)} = h^{(j)}\mathbf{b}^{(j)}, \tag{17}$$
with $\mathbf{A}^{(j)} = (a_{rs}^{(j)})_{(m_j+1)\times(m_j+1)}$ as the finite element stiffness matrix, $\mathbf{b}^{(j)} = (b_r^{(j)})_{m_j+1}$ as the current vector, and $\mathbf{J}^{(j)}$ as the flux vector. The stiffness matrix $\mathbf{A}^{(j)}$ is tridiagonal and symmetric positive definite. In each row, at most three entries around the diagonal ($a_{r,r-1}^{(j)}$, $a_{rr}^{(j)}$, and $a_{r,r+1}^{(j)}$) are nonzero. The flux vector $\mathbf{J}^{(j)}$ has the following form:
$$\mathbf{J}^{(j)} = \left(-J(x_0^{(j)}),\ 0,\ \ldots,\ 0,\ J(x_1^{(j)})\right)^T, \tag{18}$$

where most entries are zeros except the first and the last ones. As the fluxes $J(x_0^{(j)})$ and $J(x_1^{(j)})$ through the endpoints of each edge $E_j$ are unknown, the tridiagonal system (17) involves two more unknowns than equations. So, for the global system to be uniquely solvable, we need totally $2N_E$ extra equations/conditions.

Fortunately, at a nonleaf-vertex $V_i$, which has degree $d_i > 1$, we have $(d_i - 1)$ equations by the continuity (3) of potentials plus one more equation by the conservation (4) of electric fluxes. At a leaf-vertex $V_i$, which has $d_i = 1$, the electric flux conservation (4) provides exactly one boundary condition. Totally, we have $\sum_{i=1}^{N_V} d_i = 2N_E$ additional equations. The number of equations in the final system is thus the same as that of unknowns. Normally the system is well-determined.

The overall system on the fiber network involves both the potentials at the grid nodes and the electric fluxes at the fiber vertices as unknowns. In practical computation, we eliminate the electric fluxes to get a linear system with the discrete potentials as the only unknowns. As a matter of fact, noting that $\varphi_0^{(j)}(x_0^{(j)}) = 1$ and $\varphi_{m_j}^{(j)}(x_1^{(j)}) = 1$, from (14) and (16) we see the electric flux $J(x_0^{(j)})$ can be simply written out in terms of $u_0^{(j)}$ and $u_1^{(j)}$, and the electric flux $J(x_1^{(j)})$ can be simply written out in terms of $u_{m_j-1}^{(j)}$ and $u_{m_j}^{(j)}$. Plugging the explicit expressions of the electric fluxes into the conservation condition (4), we get an equation involving the unknown potentials only. After eliminating the electric fluxes, we have a well-determined system which has $N_V + \sum_{j=1}^{N_E}(m_j - 1)$ equations and unknowns. It can be easily verified that the coefficient matrix of the resulting system is a strictly diagonally dominant and symmetric positive definite M-matrix [12].
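To make the single-edge system concrete, the sketch below assembles the tridiagonal equations obtained from (13)-(16) with the composite trapezoidal quadrature on one uniform edge and solves them with the Thomas algorithm, under the simplifying assumption of no-flux ends, J(x_0) = J(x_1) = 0, which corresponds to an isolated edge with two leaf-vertices. The variable names and the standalone solver are illustrative only and are not taken from the authors' code.

#include <vector>
#include <cstdio>

// Backward-Euler/FEM step on one isolated edge: J'(x) + kappa*u = b(x),
// piecewise linear elements, trapezoidal quadrature, no-flux at both ends.
// (Illustrative sketch; not the authors' implementation.)
std::vector<double> solveEdgeStep(const std::vector<double>& uOld,
                                  double h, double R, double kappa) {
    const int n = static_cast<int>(uOld.size());    // n = m_j + 1 nodes
    std::vector<double> lo(n, 0.0), di(n, 0.0), up(n, 0.0), rhs(n, 0.0);
    const double s = 1.0 / (R * h);                  // element stiffness scale

    for (int i = 0; i < n; ++i) {
        const bool end = (i == 0 || i == n - 1);
        di[i] = (end ? s : 2.0 * s) + kappa * h * (end ? 0.5 : 1.0);
        if (i > 0)     lo[i] = -s;
        if (i < n - 1) up[i] = -s;
        rhs[i] = kappa * h * (end ? 0.5 : 1.0) * uOld[i];   // b(x) = kappa*u^n
    }

    // Thomas algorithm: forward elimination followed by back substitution.
    for (int i = 1; i < n; ++i) {
        const double w = lo[i] / di[i - 1];
        di[i]  -= w * up[i - 1];
        rhs[i] -= w * rhs[i - 1];
    }
    std::vector<double> u(n);
    u[n - 1] = rhs[n - 1] / di[n - 1];
    for (int i = n - 2; i >= 0; --i)
        u[i] = (rhs[i] - up[i] * u[i + 1]) / di[i];
    return u;
}

int main() {
    // 4 elements on a 1 cm edge: h = 0.25 cm; R = 0.5 kOhm*cm;
    // kappa = 2*C_m/(a*dt) with C_m = 1 uF/cm^2, a = 0.002 cm, dt = 0.1 ms.
    std::vector<double> u0 = {-84.6, -84.6, -84.6, 0.0, 25.4};
    std::vector<double> u1 = solveEdgeStep(u0, 0.25, 0.5, 2.0 * 1.0 / (0.002 * 0.1));
    for (double v : u1) std::printf("%g\n", v);
    return 0;
}

In the full algorithm the same assembly is done per edge, the two boundary rows keep the unknown endpoint fluxes, and those fluxes are then eliminated through the vertex conditions (3) and (4) as described above.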

In the case when the edge $E_j$ is partitioned into a locally refined grid or a composite grid, the ODE (10) can be similarly discretized with the continuous piecewise linear finite element method. The coefficient matrix $\mathbf{A}^{(j)}$ of the resulting system associated with each edge $E_j$ is also a tridiagonal, strictly diagonally dominant, and symmetric positive definite M-matrix. The overall system of discrete equations on the fiber network is well-determined too. The electric fluxes at the fiber vertices can also be eliminated so that the resulting system has the discrete potentials as the only unknowns and the coefficient matrix is a diagonally dominant M-matrix.

In this work, the grids for the space discretization are generated by local and adaptive mesh refinement (see Section 5 for details). The discrete finite element equations on the fiber network are solved with a V-cycle multigrid method. Basic components of the V-cycle iteration on the fiber network, including presmoothing, residual restriction, coarse grid correction, correction prolongation, and postsmoothing, are essentially the same as those of the V-cycle iteration for the composite grid equations on a single interval. We refer the readers to Ying's thesis work [13] for details of the multilevel/multigrid iteration.

5. Adaptive Mesh Refinement

The adaptive mesh refinement (AMR) algorithm applied for this study discretizes the fiber network into a hierarchy of dynamically, locally, and adaptively refined grids. See Figure 2 for a few composite grids, each of which consists of five locally refined grids, at different times.

The AMR algorithm follows Berger-Oliger's approach in timestepping [6-10]. It uses a multilevel approach to recursively integrate the reaction-diffusion system on the composite grids. It first integrates the system with a large timestep on a coarse level grid. Next, a locally refined fine grid is generated based on available coarse data. Then the algorithm integrates the system on the fine level grid with a small timestep. Error estimation is achieved by the Richardson extrapolation [14]. By the recursive nature of the algorithm, fine grids are dynamically and locally created based on the data on coarse grids. On fine grids, smaller timesteps are used for time integration. The AMR algorithm synchronizes adjacent coarse and fine levels from time to time. At the moment of synchronization, typical routines such as mesh regridding and data up-/downscaling are performed.

Let $K$ be the maximum number of mesh refinement levels and let $\ell_k$, with $k \in \{1, 2, \ldots, K\}$, be the $k$th level of mesh refinement. Algorithm 1 below gives a brief description of the central part of the adaptive mesh refinement procedure, which recursively advances the data at the current level $\ell_k$ and its finer levels.

Figure 2: Adaptively refined grids from a simulation using five refinement levels.

Algorithm 1 (advance(level $\ell_k$, timestep $\Delta t_k$)).

Step 1. Integrate the differential equation at the level $\ell_k$ by its timestep $\Delta t_k$ with the Strang operator splitting technique:
(a) integrate the split reaction equation by a half timestep $\Delta t_k/2$;
(b) integrate the split diffusion equation by a full timestep $\Delta t_k$;
(c) integrate the split reaction equation by a half timestep $\Delta t_k/2$.

Step 2. If the level $\ell_k$, with $k < K$, has not been refined or it is time to regrid the grid on the finer level $\ell_{k+1}$, refine the current level $\ell_k$ or regrid the finer level $\ell_{k+1}$ after error estimation. Scale data down from the current level $\ell_k$ to the finer level $\ell_{k+1}$.

Step 3. Set the timestep $\Delta t_{k+1} = \Delta t_k/2$ and integrate the finer level $\ell_{k+1}$ by two timesteps by recursively calling the function itself, "advance(level $\ell_{k+1}$, timestep $\Delta t_{k+1}$)".

Step 4. If the finer level $\ell_{k+1}$ and the current level $\ell_k$ are synchronized, reaching the same time, scale data up from the finer level $\ell_{k+1}$ to the current level $\ell_k$.
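A minimal sketch of this recursion is given below, assuming a refinement ratio of two and a fixed regrid interval. The Level type and the routines it calls (integrateStrangStep, regridFinerLevel, downscale, upscale) are hypothetical placeholders for the corresponding components of the algorithm, not the authors' actual interfaces; the stubs only trace the order in which levels are advanced.

#include <cstdio>
#include <memory>

// Hypothetical per-level container: the real data structures would hold the
// composite grid, the solution, and the membrane state of one refinement level.
struct Level {
    int index;                                   // k: position in the hierarchy
    int stepsSinceRegrid = 0;
    static constexpr int regridInterval = 2;     // infrequent regridding
    std::unique_ptr<Level> finer;                // proper level nesting

    explicit Level(int k) : index(k) {}

    void integrateStrangStep(double t, double dt) {
        // Placeholder for: reaction dt/2, diffusion dt, reaction dt/2.
        std::printf("level %d: advance [%g, %g]\n", index, t, t + dt);
    }
    void regridFinerLevel() { /* Richardson tagging + fine-grid generation */ }
    void downscale()        { /* interpolate coarse data to the finer level */ }
    void upscale()          { /* copy fine data back: fine data preference  */ }
};

// Algorithm 1: recursively advance level k and all finer levels from t by dt.
void advance(Level& level, double t, double dt, int maxLevels) {
    level.integrateStrangStep(t, dt);                        // Step 1

    const bool canRefine = level.index + 1 < maxLevels;      // finite termination
    if (canRefine && (level.stepsSinceRegrid++ % Level::regridInterval == 0)) {
        level.regridFinerLevel();                            // Step 2
        if (level.finer) level.downscale();
    }
    if (level.finer) {                                       // Steps 3 and 4
        advance(*level.finer, t,            0.5 * dt, maxLevels);
        advance(*level.finer, t + 0.5 * dt, 0.5 * dt, maxLevels);
        level.upscale();              // levels are now synchronized at t + dt
    }
}

int main() {
    Level base(0);                                   // three-level hierarchy
    base.finer = std::make_unique<Level>(1);
    base.finer->finer = std::make_unique<Level>(2);
    advance(base, 0.0, 1.0, 3);                      // one coarse timestep
    return 0;
}

Running the stub prints the same visiting order as illustrated in Figure 3: the coarse level first, then the intermediate level, then two substeps on the finest level, and so on.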

Figure 3 illustrates the recursive advancing, or integration, of three different mesh refinement levels. Coarse levels are advanced/integrated before fine levels. Each level has its own timestep of different size: a coarser level has a larger timestep size and a finer level has a smaller timestep. The algorithm first advances level $\ell_0$ by $\Delta t_0$ (indicated by "1"), next advances level $\ell_1$ by $\Delta t_1 = \Delta t_0/2$ (indicated by "2"), and then advances level $\ell_2$ by two steps with $\Delta t_2 = \Delta t_1/2$ (indicated by "3" and "4"). Upon the synchronization of levels $\ell_1$ and $\ell_2$, data on the fine level $\ell_2$ are upscaled to the coarser level $\ell_1$. The recursive integration continues with Steps "5", "6", "7", and so forth.

Figure 3: Recursive advancing of three mesh refinement levels.

The design of the AMR algorithm used in this work was based on a few assumptions proposed by Trangenstein and Kim [10].

(i) First, we assume that the number of elements in the base grid, which describes the computational domain and is provided by the user, is relatively small. This will allow linear systems on the grid to be solved very quickly in the middle of a multilevel (composite grid) iteration and will also improve the performance of the adaptive algorithm. The base grid is fixed during the process of adaptive mesh refinement. Other grids on fine levels are in general dynamically and recursively generated by local or uniform refinement of those on coarse levels.

(ii) Second, we assume that the region covered by the grid on a fine level is contained in the interior of that on the coarser level, unless both coarse and fine grids coincide with the physical boundary of the computational domain. This assumption can prevent recursive searching for element neighbors. This assumption is called proper level nesting.

(iii) Third, we assume that if an element of a coarse grid is refined in any part of its physical space, it must be refined everywhere. As a result, the boundary of the main grid on a fine level aligns with the boundary of a subgrid of that on the previous coarser level. By a subgrid of a grid, we mean the union of a subset of its elements. This assumption is called alignment of level grids.

(iv) Fourth, after the data on a coarse level grid are advanced by some time, we assume that the data on its finer level are advanced by as many timesteps as required by stability and accuracy to reach exactly the same time as the coarse level. This assumption implies that a coarse level is integrated before its finer levels and that the timestep on a coarse level is an integer multiple of that on the next finer level. It also implies that the timestepping algorithm must be applied recursively within each timestep on all but the finest level. This assumption is called synchronization of advancement.

(v) Fifth, by the time a fine level and its coarser level are synchronized, we assume that the data on the fine level are more accurate than those on the coarse level. So the data on a fine level must be upscaled to its coarser level before the coarse level is further advanced by another coarse timestep. This assumption is called fine data preference.

(vi) Sixth, as stated, all grids except the coarsest one in the AMR algorithm are changing dynamically. It is necessary for a coarse level to regrid its finer level from time to time. But it would be extremely costly to tag elements and regrid levels every coarse timestep, so we would rather perform regridding infrequently. At a coarse level, the number of timesteps between times of regridding its finer level is called the regrid interval. In the AMR algorithm, the regrid interval may be chosen to be an integer divisor of the refinement ratio [13], which could be any even integer number in the implementation. This assumption is called infrequent regridding.

(vii) Finally, there is one more assumption on the adaptive algorithm, called finite termination. This means that the user shall specify a maximum number of refinement levels. Once the refinement reaches the maximum level, no further mesh refinement is performed.

As determined by the nature of the Purkinje system, the grids resulting from space discretization of the computational domain are essentially unstructured. This restricts direct application of Berger-Oliger's original AMR algorithm, which requires the underlying grid to be Cartesian or logically rectangular. The AMR algorithm adapted here for the Purkinje system works with unstructured grids. It represents an unstructured grid by lists of edges and nodes. A grid node is allowed to have multiple connected edges, and the degree of a node could be greater than two. The unstructured grid representation can naturally describe the Purkinje system, which primarily is a tree-like structure but has some loops inside.
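A bare-bones version of such an edge/node representation might look as follows; the field and function names are illustrative assumptions, not the data structures actually used in the authors' code.

#include <cstddef>
#include <vector>

// Unstructured network grid stored as plain lists of nodes and edges.
// A node may belong to any number of edges, so its degree can exceed two,
// which accommodates branching vertices and the occasional loop.
struct NetworkGrid {
    struct Node {
        double x, y, z;                       // position in space
        std::vector<std::size_t> edges;       // indices of incident edges
    };
    struct Edge {
        std::size_t n0, n1;                   // endpoint node indices
        double radius;                        // fiber radius on this segment (cm)
        int cells;                            // current 1D subdivision of the edge
    };

    std::vector<Node> nodes;
    std::vector<Edge> edges;

    std::size_t addNode(double x, double y, double z) {
        nodes.push_back({x, y, z, {}});
        return nodes.size() - 1;
    }
    std::size_t addEdge(std::size_t a, std::size_t b, double radius) {
        edges.push_back({a, b, radius, 1});
        nodes[a].edges.push_back(edges.size() - 1);
        nodes[b].edges.push_back(edges.size() - 1);
        return edges.size() - 1;
    }
    // Degree d_i of a node; d_i == 1 identifies a leaf-vertex (no-flux end).
    std::size_t degree(std::size_t i) const { return nodes[i].edges.size(); }
    bool isLeaf(std::size_t i) const { return degree(i) == 1; }
};

int main() {
    NetworkGrid g;                               // tiny Y-shaped branch
    std::size_t a = g.addNode(0, 0, 0), b = g.addNode(1, 0, 0);
    std::size_t c = g.addNode(2, 1, 0), d = g.addNode(2, -1, 0);
    g.addEdge(a, b, 0.016);                      // 160 um trunk
    g.addEdge(b, c, 0.002);                      // 20 um daughters
    g.addEdge(b, d, 0.002);
    return g.isLeaf(b) ? 1 : 0;                  // b is a branching vertex, not a leaf
}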

Finally, it is worth mentioning that in the AMR algorithm there is a critical parameter, called the AMR tolerance and denoted by $\mathrm{tol}_{\mathrm{AMR}}$, which is used by the Richardson extrapolation process for tagging coarse grid elements. The AMR tolerance closely influences the efficiency and accuracy of the algorithm. The larger the tolerance is, the more efficient the algorithm is, but the less accurate the solution is. The smaller the tolerance is, the more accurate the solution is, but the less efficient the algorithm is. Due to the limitation of space, the detailed explanations of the Richardson extrapolation, the AMR tolerance, and other components of the AMR algorithm, such as tagging and buffering of coarse grid elements, implementation of the V-cycle multigrid iteration on the adaptively refined grids, and the data up-/downscaling between coarse and fine grids, are all omitted. We refer interested readers to Ying's dissertation [13].

6. Adaptive Time Integration

The AMR algorithm can automatically recognize when and where the wave fronts start to leave or enter the computational domain due to external current/voltage stimulation or self-excitation. Once the algorithm finds that the wave fronts have left the computational domain and determines that there is no need to make any further mesh refinement, it will turn the mesh refinement off and work with adaptive time integration (ATI) only. The implementation of timestep size control for the adaptive time integration is standard. In each timestep, the ODEs/reactions are integrated with a full timestep once and independently integrated with a half timestep twice. Then the two solutions are compared, and an estimation of the relative numerical error is obtained by the standard Richardson extrapolation technique [13]. If the estimated (maximum) relative error, denoted by $E_{\max}$, is greater than a maximum relative tolerance $\mathrm{rtol}^{(\max)}_{\mathrm{ATI}}$, the timestep is rejected and a new timestep will be estimated by
$$\Delta t^{(\mathrm{new})} = \gamma \cdot \left(\frac{\mathrm{rtol}^{(\max)}_{\mathrm{ATI}}}{E_{\max}}\right)^{1/(p+1)} \cdot \Delta t^{(\mathrm{old})}, \tag{19}$$
and the Richardson extrapolation process is repeated. Otherwise, the solution with two half timesteps is simply accepted (actually, an extrapolated solution could be used for better accuracy). Here the coefficient $\gamma = 0.8$ is called a safety factor, which ensures that the new timestep size will be strictly smaller than the old one to avoid repeated rejection of timesteps. The constant $p$ in the exponent of (19) is the accuracy order of the overall scheme.

If the suggested timestep size is less than a threshold timestep, which usually indicates that there is an abrupt/quick change of solution states, a local change of membrane properties, or arrival of an external stimulus, the adaptive mesh refinement process is then automatically turned back on again. In the implementation, the threshold timestep is not explicitly specified by the user. It is set as the timestep that is used on the base grid.

In addition, during the ATI period, if the estimated relative error $E_{\max}$ is less than a minimum relative tolerance $\mathrm{rtol}^{(\min)}_{\mathrm{ATI}}$, a slightly larger timestep
$$\Delta t^{(\mathrm{new})} = 1.1\,\Delta t^{(\mathrm{old})} \tag{20}$$
is suggested for integration in the next step. The adaptive time integration does not allow the timestep size to be significantly increased within two consecutive steps, so that the implicit solver has a good initial guess. For the same reason, a maximum timestep size $\Delta t^{(\max)}$ is imposed during the adaptive time integration. This usually guarantees that the Newton solver used converges within a few iterations.
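The following sketch shows one plausible realization of this controller, written under the stated choices γ = 0.8, p = 2, the two relative tolerances, the growth factor 1.1, and a base-grid threshold timestep; the function names and the surrounding control flow are assumptions for illustration rather than the authors' code.

#include <algorithm>
#include <cmath>

// Outcome of one adaptive-time-integration (ATI) step attempt.
struct StepDecision {
    bool   accepted;    // keep the two-half-step solution?
    bool   resumeAMR;   // suggested dt fell below the base-grid threshold
    double dtNext;      // timestep to use for the retry or the next step
};

// Richardson-based step-size control, cf. (19) and (20).
// eMax   : estimated maximum relative error of the current step
// dt     : timestep size that was just tried
// dtBase : timestep used on the base grid (threshold for re-enabling AMR)
// dtMax  : maximum allowed timestep (1 msec in the reported simulations)
StepDecision controlTimestep(double eMax, double dt,
                             double dtBase, double dtMax) {
    const double gamma   = 0.8;     // safety factor
    const int    p       = 2;       // accuracy order of the overall scheme
    const double rtolMax = 1e-2;    // rtol_ATI^(max)
    const double rtolMin = 1e-4;    // rtol_ATI^(min)

    StepDecision d{true, false, dt};
    if (eMax > rtolMax) {
        // Reject: shrink the step according to (19) and redo the step.
        d.accepted = false;
        d.dtNext   = gamma * std::pow(rtolMax / eMax, 1.0 / (p + 1)) * dt;
        if (d.dtNext < dtBase) d.resumeAMR = true;   // abrupt change detected
    } else if (eMax < rtolMin) {
        // Very smooth solution: allow a mild growth, cf. (20).
        d.dtNext = std::min(1.1 * dt, dtMax);
    }
    return d;
}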

7. Results

The AMR algorithm proposed in this work was implemented in custom codes written in C++. The simulations presented in this section were all performed in double precision on a dual Xeon 3.6 GHz computer. No data was output when the programs were run for timing studies.

In all of the numerical studies, the electrical resistivity $R$ and the membrane capacitance $C_m$ are fixed to be constant, 0.5 kΩ·cm and 1 μF/cm², respectively. The membrane dynamics are described by the Beeler-Reuter model [15]. No-flux (homogeneous Neumann-type) boundary conditions were applied at the leaf-nodes of the fiber network.

During the time integration of the time-dependent PDEs, a second-order operator splitting technique, the Strang splitting, is applied in each timestep. The ODEs resulting from operator splitting are integrated with a second-order singly diagonally implicit Runge-Kutta (SDIRK2) scheme. The linear diffusion equation is temporally integrated with the Crank-Nicolson scheme and spatially discretized with the continuous piecewise linear finite element method. The finite element system on locally refined grids is solved with a V-cycle multigrid/multilevel iterative method in each timestep at each refinement level. (In the V-cycle composite grid iteration, the multigrid prolongation is done by piecewise linear interpolation. The multigrid restriction is implemented in a way such that the restriction matrix is the transpose of the prolongation matrix. At the coarsest level, as the coefficient matrix of the discrete system is a symmetric and positive definite M-matrix, the linear equations are solved with a symmetric successive overrelaxation preconditioned conjugate gradient method.) The resulting scheme has second-order global accuracy if the tolerances in each component of the adaptive algorithm are consistently selected [13].
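For reference, a compact sketch of a two-stage, stiffly accurate SDIRK2 step (the variant with γ = 1 − 1/√2) applied to a single reaction ODE is shown below; it uses a scalar Newton iteration per stage and is only an illustration of the type of scheme named here, not a transcription of the authors' implementation.

#include <cmath>
#include <functional>

// One SDIRK2 step for y' = f(t, y): two implicit stages with the same
// diagonal coefficient gamma = 1 - 1/sqrt(2); second order and L-stable.
// dfdy is the partial derivative of f with respect to y (used by Newton).
double sdirk2Step(double t, double y, double dt,
                  const std::function<double(double, double)>& f,
                  const std::function<double(double, double)>& dfdy) {
    const double gamma = 1.0 - 1.0 / std::sqrt(2.0);

    // Solve the stage equation k = f(tk, base + gamma*dt*k) by Newton's method.
    auto solveStage = [&](double tk, double base) {
        double k = f(tk, base);                        // explicit predictor
        for (int it = 0; it < 20; ++it) {
            const double ys = base + gamma * dt * k;
            const double g  = k - f(tk, ys);           // stage residual
            const double dg = 1.0 - gamma * dt * dfdy(tk, ys);
            const double dk = -g / dg;
            k += dk;
            if (std::fabs(dk) < 1e-12 * (1.0 + std::fabs(k))) break;
        }
        return k;
    };

    const double k1 = solveStage(t + gamma * dt, y);
    const double k2 = solveStage(t + dt, y + (1.0 - gamma) * dt * k1);
    return y + dt * ((1.0 - gamma) * k1 + gamma * k2);
}

In the splitting context, a step like this would be applied node by node to the membrane ODEs (6a) and (6b), which is what makes the reaction part spatially independent and easy to parallelize.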

In the adaptive simulations, the mesh refinement ratio is fixed to be two, and the critical AMR tolerance $\mathrm{tol}_{\mathrm{AMR}}$ is set as 0.02. The maximum and minimum relative tolerances, which are used in the ATI period, are chosen to be $\mathrm{rtol}^{(\max)}_{\mathrm{ATI}} = 10^{-2}$ and $\mathrm{rtol}^{(\min)}_{\mathrm{ATI}} = 10^{-4}$, respectively. We restrict the timestep sizes used in the adaptive time integration so that they are not greater than one millisecond, that is, $\Delta t^{(\max)} = 1$ msec. The value of $p$ in the exponent of (19) is equal to two, as the scheme has second-order accuracy.

Example 2. Simulations on a two-dimensional branch with thickness/radius-variable and nonuniform branch segments. Voltage stimulation is applied from the right.

The two-dimensional branch shown in Figure 4 has a dimension of 4 cm × 2 cm, which is bounded by the rectangular domain [0, 4] × [1, 3] cm². This two-dimensional branch has variable (piecewise constant) radius. Branch segments "6" and "7" have the smallest radius, equal to 20 μm. Branch segment "1" has the largest radius, equal to 160 μm and eight times that of branch segment "6". The radius of branch segment "2" is six times that of "6". Branch segments "3" and "4" have the same radius, four times that of "6". The radius of branch segment "5" is twice that of branch segment "6".

A voltage stimulation is applied from the right end of branch segment "1" through a discontinuous initial action potential, which is given as follows:
$$V_m(x,y) = \begin{cases} 25.4\ \text{mvolts} & \text{if } x > 3.5,\\ -84.6\ \text{mvolts} & \text{otherwise}. \end{cases} \tag{21}$$

The base grid used by the simulation with adaptive mesh refinement is a coarse partition of the branch structure (Figure 4) and has 240 edges, with minimum element size equal to 0.0078 cm and maximum element size equal to 0.078 cm. Note that this is a highly nonuniform grid, as the ratio of the maximum to the minimum element size is much greater than one. Actually, in the simulation, the base grid was generated by three times uniform bisection from a much coarser grid which only has 30 edges. This makes the multigrid iterations for linear systems more efficient, even though its contribution to the speedup of the overall algorithm is marginal, as most (>90%) of the computer time used by the simulation was spent integrating nonlinear ODEs/reactions. The adaptive simulation effectively uses three refinement levels. To apply the Richardson extrapolation technique, the second level grid was created from uniform bisection of the base grid. In fact, only the grids at the third level were dynamically changing. They are regridded every other timestep.

Figure 4: A typical branch in the heart conduction system is highly nonuniform in the sense that the lengths of branch segments, each of which is bounded by two adjacent leaves or branching vertices, may vary significantly. In this figure, branch segments "1" and "4" are not directly connected. Instead, they are connected through a very short branch segment. Similarly, branch segments "4" and "6" are also connected through a very short segment.

In the adaptive simulation, the timestep sizes vary from Δt = 0.0098 msecs to Δt = 1.0 msec. The AMR process was automatically turned off at time t = 42.85 msecs, as the algorithm determines there is no need to make spatially local refinement. After that, the ATI process is activated. The simulation on the time interval [0, 400] msecs totally used 991 secs in computer time.

The two-dimensional branch is also partitioned into a fine quasi-uniform grid, which has 726 edges, with minimum element size equal to 0.0104 cm and maximum element size equal to 0.0127 cm. The simulation with this fine quasi-uniform grid and timestep Δt = 0.00635 msecs on the same time interval as the adaptive one used about 14060 secs in computer time. We call this one the "uniform" simulation.

Another simulation, simply working with the coarse grid which is used as the base grid for the AMR one, is also performed. We call this the "no-refinement" simulation. The timestep is fixed to be Δt = 0.039 msecs. This one used 767 secs in computer time.

In each of the simulations above, the action potential is initiated at the right end of branch segment "1" and propagates across the whole computational domain. We collected action potentials at the seven marked points (see Figure 4) during the simulations. See Figures 5 and 6 for traces of action potentials at the marked points "4" and "7", respectively. The action potentials from the adaptive and uniform mesh refinement simulations match very well, and their difference at the peak of the upstroke phase is uniformly bounded by 0.1 mV. The action potential from the "no-refinement" simulation has the least accurate results; potential oscillation is even observed at point "7" (see Figure 6) in the "no-refinement" simulation.

Figure 5: Traces of action potentials during the simulation period [0, 400] msecs at marked point "4" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation. The dotted curves were from the uniform simulation. The dashed curves were from the "no-refinement" simulation. In these simulations, a voltage stimulation is applied from the right side of the radius-variable branch structure. The right plot is a close-up of the left plot.

Example 3. Simulations on a two-dimensional branch whose segments have piecewise constant radii. Voltage stimulation is applied from the top.

We once again run three simulations, respectively, with adaptive mesh refinement, uniform mesh refinement, and no refinement. The computational domain and parameters are all the same as those used previously. The only difference now is to stimulate the tissue from the top instead of from the right.

Figure 6: Traces of action potentials during the simulation period [0, 400] msecs at the point marked as "7" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation. The dotted curves were from the uniform simulation. The dashed curves were from the "no-refinement" simulation. In these simulations, a voltage stimulation is applied from the right side of the thickness/radius-variable branch structure. The right plot is a close-up of the left plot.

The voltage stimulation is applied from the upper part of branch "7" through a discontinuous initial action potential, which is given as follows:
$$V_m(x,y) = \begin{cases} 25.4\ \text{mvolts} & \text{if } y > 2.8,\\ -84.6\ \text{mvolts} & \text{otherwise}. \end{cases} \tag{22}$$

Figure 7: Traces of action potentials during the simulation period [0, 400] msecs at marked points "7" and "6" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation. The dotted curves were from the uniform simulation. The dashed curves were from the "no-refinement" simulation. In these simulations, a voltage stimulation is applied from the top of the radius-variable branch structure. The left plot corresponds to the marked point "7" and the right plot corresponds to the marked point "6".

The action potential successfully propagates across the whole computational domain in each of the adaptive and uniform refinement simulations, while it fails to pass across branching points where the fiber radius changes from small to large in the "no-refinement" simulation (see Figure 7).

To verify the accuracy of the solution from the adaptive simulation, action potentials at the seven marked points (see Figure 4) are also collected and compared with those from the simulation with the fine and quasi-uniform grid. It is observed that the adaptive and uniform results match very well too, and their difference at the peak of the upstroke phase is bounded by 0.15 mvolts.

Figure 8: The idealized Purkinje system of the heart.

The simulation with adaptive mesh refinement used 1004 secs in computer time. The timestep sizes used in the adaptive simulation vary from Δt = 0.0098 msecs to Δt = 1.0 msec. The adaptive mesh refinement was automatically turned off at time t = 44.02 msecs.

The one with the uniform grid and timestep Δt = 0.00635 msecs used 14050 secs in computer time.

Example 4. Simulations on a three-dimensional branch whose segments have piecewise constant radii.

The three-dimensional Purkinje system used in the following simulations was originally from 3Dscience.com. The fiber radius of the system was modified to be piecewise constant for simplicity. The radius of each branch segment takes one of the five values, as in the two-dimensional case. The minimum and the maximum radius of the fiber are, respectively, 20 μm and 160 μm. Intermediate values are 40, 80, and 120 μm. See Figure 8 for an illustration of the topological structure, where the ratios of the variable radii are not correctly shown.

The idealized Purkinje system has a dimension of 2.04 × 2.16 × 2.98 cm³. It is bounded by the rectangular domain [−0.39, 1.65] × [4.05, 6.21] × [−1.31, 1.67] cm³. The maximum of the shortest paths from the top-most vertex to the other leaf-vertices, computed by Dijkstra's algorithm, is about 5.29 cm.

As in the two-dimensional examples, in each of the simulations with the idealized Purkinje system, a voltage stimulation is applied from the top branch through a discontinuous initial action potential given as follows:
$$V_m(x,y,z) = \begin{cases} 25.4\ \text{mvolts} & \text{if } z > 1.372,\\ -84.6\ \text{mvolts} & \text{otherwise}. \end{cases} \tag{23}$$

The base grid used by the adaptive simulation is a coarse partition of the Purkinje system and has 1412 edges/elements, with minimum element size equal to 0.006 cm and maximum element size equal to 0.075 cm. This is also a highly nonuniform grid, as the ratio of the maximum to the minimum element size is up to 12.5. Once again, in this simulation the base grid was actually generated from uniform bisection of a coarser grid which has 706 edges/elements, as this will make the multigrid iterations for linear systems more efficient. The adaptive simulation effectively uses three refinement levels. Similar to the two-dimensional case, the second level grid was created from uniform bisection of the base grid, and only the grids at the third level were dynamically changing, regridded every other timestep.

The timestep sizes used in the adaptive simulation vary from 0.00935 msecs to 1.0 msec. The adaptive mesh refinement was automatically turned off at time t = 42.74 msecs. The simulation on the time interval [0, 400] msecs totally used 6267 secs in computer time.

The three-dimensional Purkinje system is also partitioned into a fine and quasi-uniform grid, which has 6311 edges/elements, with minimum element size equal to 0.0095 cm and maximum element size equal to 0.016 cm. The simulation with this fine quasi-uniform grid and timestep Δt = 0.0082 msecs used about 96960 secs in computer time.

We also run a simulation with timestep Δt = 0.0374 msecs on the coarse grid that is used as the base grid for the adaptive simulation. This "no-refinement" simulation used 4580 secs in computer time.

It is observed that in each of the adaptive, uniform refinement, and no-refinement simulations, the action potential successfully propagates and goes through the whole domain. The solution from the adaptive simulation is in very good agreement with that from the uniform simulation. We collected action potentials at sixteen points located in different regions of the domain and found that the corresponding potential traces almost overlap. Their difference at the peak of the upstroke phases is uniformly bounded by 0.4 mvolts. See Figures 9, 10, 11, and 12 for some visualized results, which correspond to action potentials at the four marked points shown in Figure 8. Figure 13 shows four plots of the action potential at different times by the AMR-ATI simulation. The adaptive simulation roughly uses the same computer time as the "no-refinement" one but yields much more accurate results and gains about a 15.5 times speedup over the uniform one.

8. Discussion

This paper demonstrates the promising application of a both space and time adaptive algorithm for simulating wave propagation on the His-Purkinje system.

In this work, only the numerical results with membrane dynamics described by the Beeler-Reuter model are presented. The adaptive algorithm, however, is by no means restricted to this simple model. In fact, in our implementation the algorithm successfully works with other physically realistic models, such as the Luo-Rudy dynamic model and the DiFrancesco-Noble model. As the advantage of the algorithm is due to the application of the operator splitting technique, in principle any standard cardiac model of reaction-diffusion type for modeling wave propagation can be easily incorporated into the adaptive algorithm.

Figure 9: Traces of action potentials during the simulation period [0, 400] msecs at the point marked as "3" in Figure 8. The right plot is a close-up of the left one.

In addition, the adaptive algorithm can also be straightforwardly applied to modeling signal propagation on the neural system of the body or even the brain.

Other than its advantages, admittedly the algorithm has its own limitations. For example, efficiency of the adaptive algorithm heavily relies on the existence of a relatively coarse base grid which describes the computational domain. As the speedup of an adaptive mesh refinement algorithm is roughly bounded by the ratio of the number of degrees of freedom (DOFs) of the grid with the highest resolution to the mean DOFs of all grids that have ever been created during the adaptive process, the existence of a coarse base grid makes the ratio, and hence the speedup over a simulation on the grid with the highest resolution, as large as possible. The use of a relatively coarse base grid also makes the multigrid/multilevel solver for linear systems more efficient. It is noticeable that in our simulations the base grids were all selected to have the number of edges/elements as small as possible.

Figure 10: Traces of action potentials during the simulation period [0, 400] msecs at the point marked as "20" in Figure 8. The right plot is a close-up of the left one.

Figure 11: Traces of action potentials during the simulation period [0, 400] msecs at the point marked as "76" in Figure 8. The right plot is a close-up of the left one.

In the AMR algorithm [10, 13] designed for uniform or quasi-uniform grids, the timestep size is typically selected to be proportional to the minimum of the element sizes in a grid (scaled by an estimated wave speed), in order that the numerical waves or discontinuities will never propagate more than one element within a single timestep. As this strategy guarantees that the fronts of traveling waves are always bounded and tracked by fine grids, the ODEs resulting from operator splitting applied to the reaction-diffusion systems are only integrated on the fine grids, while the solution in other regions, covered by coarser level grids, is simply obtained by time interpolation.

Figure 12: Traces of action potentials during the simulation period [0, 400] msecs at the point marked as "109" in Figure 8. The right plot is a close-up of the left one.

Figure 13: Four plots of the action potential at different times by the AMR-ATI simulation (red denotes activated potential and blue denotes inactivated potential).

However, a typical branch in the heart conduction system, or the whole system on its own, is highly nonuniform in the sense that the lengths of branch segments, each of which is bounded by two adjacent leaves or branching vertices, may vary significantly (see Figure 4). This determines that the coarse partition of the domain is also highly nonuniform, since the algorithm requires the number of edges/elements to be as small as possible for efficiency considerations. On this kind of computational domain, the determination of timesteps based on minimum element sizes is found to be less efficient. Here, in the proposed adaptive algorithm, timestep sizes are instead chosen to be proportional to the maximum of the element sizes in a grid. This alternative strategy is experimentally proved to be efficient and accurate enough in simulating electric waves that travel moderately fast, even though its rigorous justification needs further investigation, in particular when the fiber radius is much larger than those currently used.

In the simulations presented in this work, error estimations in both the AMR and ATI processes were all based on the Richardson extrapolation process, which is in general more expensive but more rigorous and reliable than other simpler ones such as the gradient detector. If the gradient detector is used, the second coarsest grid in the AMR process does not need to be uniformly bisected and can also be changing dynamically, and the ATI period can use less computer time. The potential gain in algorithm efficiency and the possible loss in solution accuracy with the alternative use of a gradient detector can be studied and compared in a future work.

Finally, we make a remark about the parallelization of the space-time adaptive mesh refinement and adaptive time integration algorithm on the Purkinje system. Our numerical experiments in this work indicate that a large fraction (about 80%) of the computer time in the simulation is spent in the time integration of the reaction part, while the V-cycle multigrid iteration, as well as the mesh refinement part, uses only a small fraction (less than 20%). The dominance of the computer time in the split reaction part is partially due to the one-dimensional nature of the Purkinje network system (mesh refinement and grid generation on one-dimensional structures are much easier than the ones in multiple dimensions). It is obvious that this dominance will be beneficial for the parallelization of the algorithm, since the split reaction part (ODEs) is spatially independent, highly parallelizable, and can even be massively parallelized. Of course, the time consumption on different parts of the operator splitting depends on the specific cardiac models. With a more complicated cardiac model, the reaction part will consume more computer time than the diffusion part, and the corresponding parallel algorithm is expected to have better efficiency. Admittedly, with the reaction part fully parallelized, further improvement in parallel efficiency will depend on the parallelization of the V-cycle multigrid iteration, which solves linear systems over the whole fiber network and needs time for data communication between different CPUs or multicores. The parallel performance of the AMR-ATI algorithm deserves further investigation.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Research of the first author was supported in part by the National Science Foundation of USA under Grant DMS-0915023 and is supported by the National Natural Science Foundation of China under Grants DMS-11101278 and DMS-91130012. Research of the first author is also supported by the Young Thousand Talents Program of China.

References

[1] E J Vigmond and C Clements ldquoConstruction of a computermodel to investigate sawtooth effects in the Purkinje systemrdquoIEEE Transactions on Biomedical Engineering vol 54 no 3 pp389ndash399 2007

[2] E J Vigmond R W dos Santos A J Prassl M Deo and GPlank ldquoSolvers for the cardiac bidomain equationsrdquo Progress inBiophysics andMolecular Biology vol 96 no 1ndash3 pp 3ndash18 2008

[3] S Linge J Sundnes M Hanslien G T Lines and A TveitoldquoNumerical solution of the bidomain equationsrdquo PhilosophicalTransactions of the Royal Society A Mathematical Physical andEngineering Sciences vol 367 no 1895 pp 1931ndash1950 2009

[4] P PathmanathanMO Bernabeu R Bordas et al ldquoA numericalguide to the solution of the bidomain equations of cardiacelectrophysiologyrdquoProgress in Biophysics andMolecular Biologyvol 102 no 2-3 pp 136ndash155 2010

[5] C S Henriquez ldquoA brief history of tissue models for cardiacelectrophysiologyrdquo IEEE Transactions on Biomedical Engineer-ing vol 61 no 5 pp 1457ndash1465 2014

[6] M J Berger and J Oliger ldquoAdaptive mesh refinement for hyper-bolic partial differential equationsrdquo Journal of ComputationalPhysics vol 53 no 3 pp 484ndash512 1984

[7] M J Berger and P Colella ldquoLocal adaptive mesh refinement forshock hydrodynamicsrdquo Journal of Computational Physics vol82 no 1 pp 64ndash84 1989

[8] E M Cherry H S Greenside and C S Henriquez ldquoA space-time adaptive method for simulating complex cardiac dynam-icsrdquo Physical Review Letters vol 84 no 6 article 1343 2000

[9] E M Cherry H S Greenside and C S Henriquez ldquoEffi-cient simulation of three-dimensional anisotropic cardiac tissueusing an adaptive mesh refinement methodrdquo Chaos vol 13 no3 pp 853ndash865 2003

[10] J A Trangenstein and C Kim ldquoOperator splitting and adaptivemesh refinement for the Luo-Rudy I modelrdquo Journal of Compu-tational Physics vol 196 no 2 pp 645ndash679 2004

[11] A L Hodgkin and A F Huxley ldquoA quantitative descriptionof membrane current and its application to conduction andexcitation in nerverdquoThe Journal of Physiology vol 117 no 4 pp500ndash544 1952

[12] R S VargaMatrix Iterative Analysis Springer Series in Compu-tational Mathematics Springer Berlin Germany 2nd edition2000

14 BioMed Research International

[13] W Ying Department of Mathematics Duke University Dur-ham NC USA 2005

[14] C Brezinski and M R Zaglia Extrapolation Methods Theoryand Practice vol 2 of Studies in Computational MathematicsNorth-Holland Publishing AmsterdamTheNetherlands 1991

[15] G W Beeler and H Reuter ldquoReconstruction of the actionpotential of ventricular myocardial fibresrdquoThe Journal of Phys-iology vol 268 no 1 pp 177ndash210 1977

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

Page 3: Research Article Adaptive Mesh Refinement and …downloads.hindawi.com/journals/bmri/2015/137482.pdfResearch Article Adaptive Mesh Refinement and Adaptive Time Integration for Electrical


in the differential equations (1). Note that the space-dependent part (5) is simply a linear diffusion equation. The space-independent part (6a) and (6b) is simply a set of ordinary differential equations (ODEs).

With respect to time integration, both the linear diffusion and the nonlinear reaction parts can in principle be integrated with any standard ODE solver. In this work we adopt implicit time integration schemes to integrate both the linear diffusion equation and the nonlinear ODEs, or so-called reaction equations. An implicit scheme allows relatively larger timesteps than those imposed by the stability restriction associated with an explicit scheme.

In the next section we will focus on the space discretization of the linear diffusion equation. For simplicity we assume the linear diffusion equation (5) is discretized in time with the backward Euler method:

$$\frac{a}{2}\,\frac{\partial J(t_{n+1}, x)}{\partial x} + C_m\,\frac{u(t_{n+1}, x) - u(t_n, x)}{\Delta t} = 0 \quad \text{for } x_0^{(j)} < x < x_1^{(j)}, \tag{7}$$

with

$$J(t_{n+1}, x) = -\frac{1}{R}\,\frac{\partial u(t_{n+1}, x)}{\partial x}. \tag{8}$$

Here $\Delta t = t_{n+1} - t_n$.

4. Space Discretization with the Finite Element Method

For conciseness we will omit the time-dependency of the electric potential $u(t_{n+1}, x)$ and the electric flux $J(t_{n+1}, x)$. Let

$$u(x) \equiv u(t_{n+1}, x), \quad J(x) \equiv J(t_{n+1}, x), \quad b(x) \equiv \kappa\, u(t_n, x), \tag{9}$$

with $\kappa = 2C_m/(a\,\Delta t)$. Note that $b(x)$ is known in the timestep from $t_n$ to $t_{n+1}$. The semidiscrete equation (7) can then be rewritten as

$$-\frac{d}{dx}\left[\frac{1}{R}\,u'(x)\right] + \kappa\, u(x) = J'(x) + \kappa\, u(x) = b(x) \quad \text{for } x_0^{(j)} < x < x_1^{(j)}. \tag{10}$$

In this work we further discretize (10) with the continuous piecewise linear finite element method.

For simplicity, in this section we only illustrate the discretization of the ODE (10) with the finite element method for the case of uniform mesh refinement.

Suppose the $j$th edge $E_j$ in the fiber network is partitioned into a uniform grid, denoted by $\mathcal{G}_j$. Assume the grid $\mathcal{G}_j$ has $(m_j + 1)$ nodes, denoted by $\{\xi_i^{(j)}\}_{i=0}^{m_j}$, with $\xi_i^{(j)} = x_0^{(j)} + i\,h^{(j)}$ and $h^{(j)} = (x_1^{(j)} - x_0^{(j)})/m_j$.

Let $u_i^{(j)}$ be the unknown potential variable associated with the grid node $\xi_i^{(j)}$ and $\mathbf{u}_h^{(j)} = (u_0^{(j)}, u_1^{(j)}, \ldots, u_{m_j}^{(j)})^T$ the vector of unknown potential variables. Let

$$\varphi_i^{(j)}(\xi) = \begin{cases} \dfrac{\xi - \xi_{i-1}^{(j)}}{h^{(j)}} & \text{if } \xi_{i-1}^{(j)} < \xi < \xi_i^{(j)}, \\[2mm] \dfrac{\xi_{i+1}^{(j)} - \xi}{h^{(j)}} & \text{if } \xi_i^{(j)} < \xi < \xi_{i+1}^{(j)}, \\[2mm] 0 & \text{otherwise} \end{cases} \tag{11}$$

be the continuous piecewise linear finite element basis function associated with the grid node $\xi_i^{(j)}$, for each $i = 1, \ldots, m_j - 1$. At the endpoints $\xi_0^{(j)} = x_0^{(j)}$ and $\xi_{m_j}^{(j)} = x_1^{(j)}$, the associated basis functions read

$$\varphi_0^{(j)}(\xi) = \begin{cases} \dfrac{\xi_1^{(j)} - \xi}{h^{(j)}} & \text{if } \xi_0^{(j)} < \xi < \xi_1^{(j)}, \\[2mm] 0 & \text{otherwise}, \end{cases} \qquad \varphi_{m_j}^{(j)}(\xi) = \begin{cases} \dfrac{\xi - \xi_{m_j-1}^{(j)}}{h^{(j)}} & \text{if } \xi_{m_j-1}^{(j)} < \xi < \xi_{m_j}^{(j)}, \\[2mm] 0 & \text{otherwise}. \end{cases} \tag{12}$$
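As a small illustration of (11)-(12), the sketch below evaluates such a hat basis function on a uniform edge grid. The function name and argument list are illustrative assumptions and do not come from the authors' code.

```cpp
#include <cmath>

// Evaluate the continuous piecewise linear ("hat") basis function phi_i^(j)
// of (11)-(12) at a point xi lying on the j-th edge [x0, x0 + m*h].
// (Illustrative sketch; names and interface are assumptions.)
double hatBasis(int i, double x0, double h, double xi)
{
    const double node = x0 + i * h;           // grid node xi_i^(j)
    const double dist = std::fabs(xi - node); // distance to the node
    if (dist >= h) return 0.0;                // outside the support of phi_i
    return 1.0 - dist / h;                    // equals 1 at the node, 0 at its neighbors
}
```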

Assume the finite element solution takes the form $u_h^{(j)}(x) = \sum_{i=0}^{m_j} u_i^{(j)} \varphi_i^{(j)}(x)$, which is a linear combination of the basis functions. We introduce the electric flux $J(x) = -R^{-1} u'(x)$ at the edge endpoints $\xi_0^{(j)} = x_0^{(j)}$ and $\xi_{m_j}^{(j)} = x_1^{(j)}$ as two extra unknowns. In terms of the basis functions $\{\varphi_i^{(j)}(x)\}_{i=0}^{m_j}$, the finite element equations equivalent to the second-order ODE (10) are given by

$$\int_{x_0^{(j)}}^{x_1^{(j)}} \left[\frac{1}{R}\,\frac{d}{dx}u_h^{(j)}(x)\,\frac{d}{dx}\varphi_i^{(j)}(x) + \kappa\, u_h^{(j)}(x)\,\varphi_i^{(j)}(x)\right] dx + J(x_1^{(j)})\,\varphi_i^{(j)}(x_1^{(j)}) - J(x_0^{(j)})\,\varphi_i^{(j)}(x_0^{(j)}) = \int_{x_0^{(j)}}^{x_1^{(j)}} b(x)\,\varphi_i^{(j)}(x)\,dx \tag{13}$$

for $i = 0, 1, \ldots, m_j$. At individual grid nodes they explicitly read

$$\int_{\xi_0^{(j)}}^{\xi_1^{(j)}} \left[\frac{1}{R}\,\frac{d}{dx}u_h^{(j)}(x)\,\frac{d}{dx}\varphi_0^{(j)}(x) + \kappa\, u_h^{(j)}(x)\,\varphi_0^{(j)}(x)\right] dx - J(x_0^{(j)})\,\varphi_0^{(j)}(x_0^{(j)}) = \int_{\xi_0^{(j)}}^{\xi_1^{(j)}} b(x)\,\varphi_0^{(j)}(x)\,dx, \tag{14}$$


$$\int_{\xi_{i-1}^{(j)}}^{\xi_{i+1}^{(j)}} \left[\frac{1}{R}\,\frac{d}{dx}u_h^{(j)}(x)\,\frac{d}{dx}\varphi_i^{(j)}(x) + \kappa\, u_h^{(j)}(x)\,\varphi_i^{(j)}(x)\right] dx = \int_{\xi_{i-1}^{(j)}}^{\xi_{i+1}^{(j)}} b(x)\,\varphi_i^{(j)}(x)\,dx \quad \text{for } i = 1, 2, \ldots, m_j - 1, \tag{15}$$

$$\int_{\xi_{m_j-1}^{(j)}}^{\xi_{m_j}^{(j)}} \left[\frac{1}{R}\,\frac{d}{dx}u_h^{(j)}(x)\,\frac{d}{dx}\varphi_{m_j}^{(j)}(x) + \kappa\, u_h^{(j)}(x)\,\varphi_{m_j}^{(j)}(x)\right] dx + J(x_1^{(j)})\,\varphi_{m_j}^{(j)}(x_1^{(j)}) = \int_{\xi_{m_j-1}^{(j)}}^{\xi_{m_j}^{(j)}} b(x)\,\varphi_{m_j}^{(j)}(x)\,dx. \tag{16}$$

By further discretizing each integral in the finite element system above with the composite trapezoidal rule, we can get a set of linear equations in the following form:

$$\mathbf{A}^{(j)}\mathbf{u}_h^{(j)} + \mathbf{J}^{(j)} = h^{(j)}\mathbf{b}^{(j)}, \tag{17}$$

with $\mathbf{A}^{(j)} = (a_{rs}^{(j)})_{(m_j+1)\times(m_j+1)}$ as the finite element stiffness matrix, $\mathbf{b}^{(j)} = (b_r^{(j)})_{m_j+1}$ as the current vector, and $\mathbf{J}^{(j)}$ as the flux vector. The stiffness matrix $\mathbf{A}^{(j)}$ is tridiagonal and symmetric positive definite. In each row, at most three entries right on the diagonal ($a_{r,r-1}^{(j)}$, $a_{r,r}^{(j)}$, and $a_{r,r+1}^{(j)}$) are nonzero. The flux vector $\mathbf{J}^{(j)}$ has the following form:

$$\mathbf{J}^{(j)} = \left(-J(x_0^{(j)}),\, 0,\, \ldots,\, 0,\, J(x_1^{(j)})\right)^T, \tag{18}$$

where most entries are zeros except the first and the last ones. As the fluxes $J(x_0^{(j)})$ and $J(x_1^{(j)})$ through the endpoints of each edge $E_j$ are unknown, the tridiagonal system (17) involves two more unknowns than equations. So for the global system to be uniquely solvable we need totally $2N_E$ extra equations/conditions.

Fortunately, at a nonleaf-vertex $V_i$, which has degree $d_i > 1$, we have $(d_i - 1)$ equations by the continuity (3) of potentials plus one more equation by the conservation (4) of electric fluxes. At a leaf-vertex $V_i$, which has $d_i = 1$, the electric flux conservation (4) provides exactly one boundary condition. Totally we have $\sum_{i=1}^{N_V} d_i = 2N_E$ additional equations. The number of equations in the final system is thus the same as that of unknowns. Normally the system is well-determined.

The overall system on the fiber network involves both the potentials at the grid nodes and the electric fluxes at the fiber vertices as unknowns. In practical computation we eliminate the electric fluxes to get a linear system with the discrete potentials as the only unknowns. As a matter of fact, noting that $\varphi_0^{(j)}(x_0^{(j)}) = 1$ and $\varphi_{m_j}^{(j)}(x_1^{(j)}) = 1$, from (14) and (16) we see that the electric flux $J(x_0^{(j)})$ can be simply written out in terms of $u_0^{(j)}$ and $u_1^{(j)}$, and the electric flux $J(x_1^{(j)})$ can be simply written out in terms of $u_{m_j-1}^{(j)}$ and $u_{m_j}^{(j)}$. Plugging the explicit expressions of the electric fluxes into the conservation condition (4), we get an equation involving the unknown potentials only. After eliminating the electric fluxes we have a well-determined system which has $N_V + \sum_{j=1}^{N_E}(m_j - 1)$ equations and unknowns. It can be easily verified that the coefficient matrix of the resulting system is a strictly diagonally dominant and symmetric positive definite M-matrix [12].
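To make the assembly behind (17) concrete, the following sketch builds the tridiagonal matrix and right-hand side for one uniformly partitioned edge using piecewise linear elements and the composite trapezoidal rule. The names (EdgeSystem, assembleEdgeSystem) are illustrative assumptions rather than the authors' implementation, and the right-hand side is accumulated directly by the trapezoidal rule rather than in the scaled form $h^{(j)}\mathbf{b}^{(j)}$ of (17); the unknown endpoint fluxes are left to be eliminated afterwards through the vertex conditions (3)-(4), as described above.

```cpp
#include <vector>

// Illustrative sketch (not the authors' code): per-edge assembly of the
// tridiagonal finite element system on a uniform grid of m elements.
struct EdgeSystem {
    std::vector<double> lower, diag, upper; // tridiagonal matrix A^(j)
    std::vector<double> rhs;                // right-hand side (flux terms excluded)
};

EdgeSystem assembleEdgeSystem(int m,        // number of elements on the edge
                              double h,     // uniform element size h^(j)
                              double R,     // electrical resistivity
                              double kappa, // kappa = 2*Cm/(a*dt)
                              const std::vector<double>& b) // b_i = kappa*u(t_n, xi_i)
{
    EdgeSystem s;
    s.lower.assign(m + 1, 0.0);
    s.diag.assign(m + 1, 0.0);
    s.upper.assign(m + 1, 0.0);
    s.rhs.assign(m + 1, 0.0);

    const double k = 1.0 / (R * h);   // element stiffness contribution
    for (int e = 0; e < m; ++e) {     // loop over elements [xi_e, xi_{e+1}]
        // stiffness part: (1/R) * integral of phi_i' * phi_j'
        s.diag[e]     += k;  s.diag[e + 1] += k;
        s.upper[e]    -= k;  s.lower[e + 1] -= k;
        // reaction part kappa * integral of u_h * phi_i by the trapezoidal rule
        s.diag[e]     += kappa * h / 2.0;
        s.diag[e + 1] += kappa * h / 2.0;
        // right-hand side, also by the trapezoidal rule
        s.rhs[e]      += h * b[e]     / 2.0;
        s.rhs[e + 1]  += h * b[e + 1] / 2.0;
    }
    // The unknown boundary fluxes J(x0) and J(x1) still enter the first and last
    // equations; they are eliminated when the edge systems are coupled into the
    // global network system through the vertex continuity/conservation conditions.
    return s;
}
```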

In the case when the edge $E_j$ is partitioned into a locally refined grid or a composite grid, the ODE (10) can be similarly discretized with the continuous piecewise linear finite element method. The coefficient matrix $\mathbf{A}^{(j)}$ of the resulting system associated with each edge $E_j$ is also a tridiagonal, strictly diagonally dominant, and symmetric positive definite M-matrix. The overall system of discrete equations on the fiber network is well-determined too. The electric fluxes at the fiber vertices can also be eliminated so that the resulting system has the discrete potentials as the only unknowns and the coefficient matrix is a diagonally dominant M-matrix.

In this work the grids for the space discretization are generated by local and adaptive mesh refinement (see Section 5 for details). The discrete finite element equations on the fiber network are solved with a V-cycle multigrid method. Basic components of the V-cycle iteration on the fiber network, including presmoothing, residual restriction, coarse grid correction, correction prolongation, and postsmoothing, are essentially the same as those of the V-cycle iteration for the composite grid equations on a single interval. We refer the readers to Ying's thesis work [13] for details of the multilevel/multigrid iteration.

5. Adaptive Mesh Refinement

The adaptive mesh refinement (AMR) algorithm applied for this study discretizes the fiber network into a hierarchy of dynamically, locally, and adaptively refined grids. See Figure 2 for a few composite grids, each of which consists of five locally refined grids, at different times.

The AMR algorithm follows Berger-Oliger's approach in timestepping [6–10]. It uses a multilevel approach to recursively integrate the reaction-diffusion system on the composite grids. It first integrates the system with a large timestep on a coarse level grid. Next a locally refined fine grid is generated based on available coarse data. Then the algorithm integrates the system on the fine level grid with a small timestep. Error estimation is achieved by the Richardson extrapolation [14]. By the recursive nature of the algorithm, fine grids are dynamically and locally created based on the data on coarse grids. On fine grids smaller timesteps are used for time integration. The AMR algorithm synchronizes adjacent coarse and fine levels from time to time. At the moment of synchronization, typical routines such as mesh regridding and data up-/downscaling are performed.

Let $K$ be the maximum number of mesh refinement levels and let $\ell_k$ with $k \in \{1, 2, \ldots, K\}$ be the $k$th level of mesh refinement. Algorithm 1 below gives a brief description of the central part of the adaptive mesh refinement procedure, which recursively advances the data at the current level $\ell_k$ and its finer levels.


Figure 2: Adaptively refined grids from a simulation which uses five refinement levels.

Algorithm 1 (advance(level $\ell_k$, timestep $\Delta t_k$)).

Step 1. Integrate the differential equation at the level $\ell_k$ by its timestep $\Delta t_k$ with the Strang operator splitting technique:

(a) Integrate the split reaction equation by a half timestep $\Delta t_k/2$.
(b) Integrate the split diffusion equation by a full timestep $\Delta t_k$.
(c) Integrate the split reaction equation by a half timestep $\Delta t_k/2$.

Step 2. If the level $\ell_k$ has not been refined (with $k < K$) or it is time to regrid the grid on the finer level $\ell_{k+1}$, refine the current level $\ell_k$ or regrid the finer level $\ell_{k+1}$ after error estimation. Scale data down from the current level $\ell_k$ to the finer level $\ell_{k+1}$.

Step 3. Set the timestep $\Delta t_{k+1} = \Delta t_k/2$ and integrate the finer level $\ell_{k+1}$ by two timesteps by recursively calling the function itself, "advance(level $\ell_{k+1}$, timestep $\Delta t_{k+1}$)".

Step 4. If the finer level $\ell_{k+1}$ and the current level $\ell_k$ are synchronized, reaching the same time, scale data up from the finer level $\ell_{k+1}$ to the current level $\ell_k$.
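A compact sketch of the recursion in Algorithm 1 is given below. The Level interface (strangStep, timeToRegridFiner, and the data transfer routines) is hypothetical and only mirrors the four steps above; its placeholder bodies are trivial so that the fragment compiles and is not to be read as the authors' implementation.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical per-level interface mirroring Algorithm 1 (placeholder bodies).
struct Level {
    bool refined = false;                          // has a finer level been created?
    void strangStep(double /*dt*/) {}              // Step 1: reaction dt/2, diffusion dt, reaction dt/2
    bool timeToRegridFiner() const { return false; }
    void refineOrRegridFiner() { refined = true; } // Step 2: error estimation + (re)gridding
    void downscaleTo(Level& /*finer*/) {}          // coarse -> fine data transfer
    void upscaleFrom(const Level& /*finer*/) {}    // fine -> coarse data transfer
};

// Recursive advance of level k and all of its finer levels (cf. Algorithm 1).
void advance(std::vector<Level>& levels, std::size_t k, double dt)
{
    Level& current = levels[k];
    current.strangStep(dt);                                     // Step 1
    const bool hasFiner = (k + 1 < levels.size());
    if (hasFiner && (!current.refined || current.timeToRegridFiner())) {
        current.refineOrRegridFiner();                          // Step 2
        current.downscaleTo(levels[k + 1]);
    }
    if (hasFiner) {
        advance(levels, k + 1, dt / 2.0);                       // Step 3: two half steps
        advance(levels, k + 1, dt / 2.0);
        current.upscaleFrom(levels[k + 1]);                     // Step 4: synchronization
    }
}
```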

Figure 3 illustrates the recursive advancing or integration of three different mesh refinement levels. Coarse levels are advanced/integrated before fine levels. Each level has its own timestep of different size: a coarser level has a larger timestep size and a finer level has a smaller timestep. The algorithm first advances level $\ell_0$ by $\Delta t_0$ (indicated by "1"), next advances level $\ell_1$ by $\Delta t_1 = \Delta t_0/2$ (indicated by "2"), and then advances level $\ell_2$ by two steps with $\Delta t_2 = \Delta t_1/2$ (indicated by "3" and "4"). Upon the synchronization of levels $\ell_1$ and $\ell_2$, data on the fine level $\ell_2$ are upscaled to the coarser level $\ell_1$. The recursive integration continues with Steps "5", "6", "7", and so forth.

Figure 3: Recursive advancing of three mesh refinement levels.

The design of the AMR algorithm used in this work was based on a few assumptions proposed by Trangenstein and Kim [10]:

(i) First, we assume that the base grid, which describes the computational domain and is provided by the user, has a relatively small number of elements. This will allow linear systems on the grid to be solved very quickly in the middle of a multilevel (composite grid) iteration and will also improve the performance of the adaptive algorithm. The base grid is fixed during the process of adaptive mesh refinement. Other grids on fine levels are in general dynamically and recursively generated by local or uniform refinement of those on coarse levels.

(ii) Second, we assume that the region covered by the grid on a fine level is contained in the interior of that on the coarser level, unless both coarse and fine grids coincide with the physical boundary of the computational domain. This assumption can prevent recursive searching for element neighbors. This assumption is called proper level nesting.

(iii) Third, we assume that if an element of a coarse grid is refined in any part of its physical space, it must be refined everywhere. As a result, the boundary of the main grid on a fine level aligns with the boundary of a subgrid of that on the previous coarser level. By a subgrid of a grid we mean the union of a subset of its elements. This assumption is called alignment of level grids.

(iv) Fourth, after the data on a coarse level grid are advanced by some time, we assume that the data on its finer level are advanced by as many timesteps as required by stability and accuracy to reach exactly the same time as the coarse level. This assumption implies that a coarse level is integrated before its finer levels and the timestep on a coarse level is an integer multiple of that on the next finer level. It also implies that the timestepping algorithm must be applied recursively within each timestep on all but the finest level. This assumption is called synchronization of advancement.

(v) Fifth, by the time a fine level and its coarser level are synchronized, we assume that data on the fine level are more accurate than those on the coarse level. So the data on a fine level must be upscaled to its coarser level before the coarse level is further advanced by another coarse timestep. This assumption is called fine data preference.


(vi) Sixth, as stated, all grids except the coarsest one in the AMR algorithm are changing dynamically. It is necessary for a coarse level to regrid its finer level from time to time. But it would be extremely costly to tag elements and regrid levels every coarse timestep. So we would rather regrid infrequently. At a coarse level, the number of timesteps between times of regridding its finer level is called the regrid interval. In the AMR algorithm the regrid interval may be chosen to be an integer divisor of the refinement ratio [13], which could be any even integer number in the implementation. This assumption is called infrequent regridding.

(vii) Finally, there is one more assumption on the adaptive algorithm, called finite termination. This means that the user shall specify a maximum number of refinement levels. Once the refinement reaches the maximum level, no further mesh refinement is performed.

As determined by the nature of the Purkinje system, the grids resulting from space discretization of the computational domain are essentially unstructured. This restricts direct application of Berger-Oliger's original AMR algorithm, which requires the underlying grid to be Cartesian or logically rectangular. The AMR algorithm adapted here for the Purkinje system works with unstructured grids. It represents an unstructured grid by lists of edges and nodes. A grid node is allowed to have multiple connected edges, and the degree of a node could be greater than two. The unstructured grid representation can naturally describe the Purkinje system, which primarily is a tree-like structure but has some loops inside.

Finally, it is worth mentioning that in the AMR algorithm there is a critical parameter called the AMR tolerance, denoted by $\mathrm{tol}_{\mathrm{AMR}}$, which is used by the Richardson extrapolation process for tagging coarse grid elements. The AMR tolerance closely influences efficiency and accuracy of the algorithm. The larger the tolerance is, the more efficient the algorithm is but the less accurate the solution is. The smaller the tolerance is, the more accurate the solution is but the less efficient the algorithm is. Due to the limitation of space, the detailed explanations of the Richardson extrapolation, the AMR tolerance, and other components of the AMR algorithm, such as tagging and buffering of coarse grid elements, implementation of the V-cycle multigrid iteration on the adaptively refined grids, and the data up-/downscaling between coarse and fine grids, are all omitted. We refer interested readers to Ying's dissertation [13].
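As a hedged illustration of how Richardson-extrapolation-based tagging against $\mathrm{tol}_{\mathrm{AMR}}$ might look for nodal values on a coarse grid, consider the sketch below. The relative scaling of the error and the buffering width are assumptions introduced here for illustration; the authors' exact criterion is documented in [13].

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Tag coarse-grid nodes whose Richardson error estimate exceeds tolAMR.
// uOneStep  : coarse solution advanced by one step of size dt
// uTwoSteps : coarse solution advanced by two steps of size dt/2
std::vector<bool> tagForRefinement(const std::vector<double>& uOneStep,
                                   const std::vector<double>& uTwoSteps,
                                   double tolAMR, int bufferWidth = 1)
{
    const int n = static_cast<int>(uOneStep.size());
    std::vector<bool> tagged(n, false);
    for (int i = 0; i < n; ++i) {
        const double err = std::fabs(uTwoSteps[i] - uOneStep[i]);    // Richardson estimate
        const double scale = std::max(1.0, std::fabs(uTwoSteps[i])); // relative scaling (assumed)
        if (err / scale > tolAMR) tagged[i] = true;
    }
    // Buffer the tagged set so the moving wave front stays inside the fine grid.
    std::vector<bool> buffered = tagged;
    for (int i = 0; i < n; ++i)
        if (tagged[i])
            for (int k = -bufferWidth; k <= bufferWidth; ++k)
                if (i + k >= 0 && i + k < n) buffered[i + k] = true;
    return buffered;
}
```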

6. Adaptive Time Integration

The AMR algorithm can automatically recognize when and where the wave fronts start to leave or enter the computational domain due to external current/voltage stimulation or self-excitation. Once the algorithm finds that the wave fronts have left the computational domain and determines that there is no need to make any further mesh refinement, it will turn the mesh refinement off and work with adaptive time integration (ATI) only. The implementation of timestep size control for the adaptive time integration is standard. In each timestep the ODEs/reactions are integrated with a full timestep once and independently integrated with a half timestep twice. Then the two solutions are compared, and an estimation of the relative numerical error is obtained by the standard Richardson extrapolation technique [13]. If the estimated (maximum) relative error, denoted by $E_{\max}$, is greater than a maximum relative tolerance $\mathrm{rtol}^{(\max)}_{\mathrm{ATI}}$, the timestep is rejected and a new timestep size is estimated by

$$\Delta t^{(\mathrm{new})} = \gamma \cdot \left(\frac{\mathrm{rtol}^{(\max)}_{\mathrm{ATI}}}{E_{\max}}\right)^{1/(p+1)} \cdot \Delta t^{(\mathrm{old})}, \tag{19}$$

and the Richardson extrapolation process is repeated. Otherwise the solution with two half timesteps is simply accepted (actually an extrapolated solution could be used for better accuracy). Here the coefficient $\gamma = 0.8$ is called a safety factor, which ensures that the new timestep size will be strictly smaller than the old one to avoid repeated rejection of timesteps. The constant $p$ in the exponent of (19) is the accuracy order of the overall scheme.

If the suggested timestep size is less than a threshold timestep, which usually indicates that there is an abrupt change of solution states, a local change of membrane properties, or the arrival of an external stimulus, the adaptive mesh refinement process is then automatically turned back on again. In the implementation the threshold timestep is not explicitly specified by the user. It is set as the timestep that is used on the base grid.

In addition, during the ATI period, if the estimated relative error $E_{\max}$ is less than a minimum relative tolerance $\mathrm{rtol}^{(\min)}_{\mathrm{ATI}}$, a slightly larger timestep

$$\Delta t^{(\mathrm{new})} = 1.1\,\Delta t^{(\mathrm{old})} \tag{20}$$

is suggested for integration in the next step. The adaptive time integration does not allow the timestep size to be increased significantly within two consecutive steps, so that the implicit solver has a good initial guess. For the same reason, a maximum timestep size $\Delta t^{(\max)}$ is imposed during the adaptive time integration. This usually guarantees that the Newton solver used converges within a few iterations.
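Putting (19) and (20) together with the maximum-timestep cap, the timestep controller of the ATI period can be sketched as follows. The function name and the way acceptance is reported are illustrative assumptions; the default constants ($\gamma = 0.8$, $p = 2$, and the tolerances) are the values quoted later in Section 7.

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the ATI timestep controller based on (19)-(20); not the authors' code.
double nextTimestep(double dtOld, double Emax, bool& rejectStep,
                    double rtolMax = 1.0e-2, double rtolMin = 1.0e-4,
                    double dtMax = 1.0, int p = 2, double gamma = 0.8)
{
    if (Emax > rtolMax) {                      // reject and shrink the step, see (19)
        rejectStep = true;
        return gamma * std::pow(rtolMax / Emax, 1.0 / (p + 1)) * dtOld;
    }
    rejectStep = false;
    if (Emax < rtolMin)                        // grow the step slowly, see (20)
        return std::min(1.1 * dtOld, dtMax);
    return std::min(dtOld, dtMax);             // otherwise keep the current step size
}
```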

7. Results

The AMR algorithm proposed in this work was implemented in custom codes written in C++. The simulations presented in this section were all performed in double precision on a dual-Xeon 3.6 GHz computer. No data was output when the programs were run for timing studies.

In all of the numerical studies the electrical resistivity $R$ and the membrane capacitance $C_m$ are fixed to be constant, 0.5 kΩ·cm and 1 μF/cm², respectively. The membrane dynamics are described by the Beeler-Reuter model [15]. No-flux (homogeneous Neumann-type) boundary conditions were applied at the leaf-nodes of the fiber network.


During the time integration for the time-dependent PDEs, a second-order operator splitting technique, the Strang splitting, is applied in each timestep. The ODEs resulting from operator splitting are integrated with a second-order singly diagonally implicit Runge-Kutta (SDIRK2) scheme. The linear diffusion equation is temporally integrated with the Crank-Nicolson scheme and spatially discretized with the continuous piecewise linear finite element method. The finite element system on locally refined grids is solved with a V-cycle multigrid/multilevel iterative method in each timestep at each refinement level. (In the V-cycle composite grid iteration, the multigrid prolongation is done by the piecewise linear interpolation. The multigrid restriction is implemented in a way so that the restriction matrix is the transpose of the prolongation matrix. At the coarsest level, as the coefficient matrix of the discrete system is a symmetric and positive definite M-matrix, the linear equations are solved with a symmetric successive overrelaxation preconditioned conjugate gradient method.) The resulting scheme has second-order global accuracy if the tolerances in each component of the adaptive algorithm are consistently selected [13].
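In operator notation, one timestep of this scheme can be summarized by the second-order Strang composition below, where $\mathcal{R}_\tau$ denotes the SDIRK2 advance of the split reaction ODEs by $\tau$ and $\mathcal{D}_\tau$ the Crank-Nicolson advance of the linear diffusion equation by $\tau$ (the symbols $\mathcal{R}$ and $\mathcal{D}$ are introduced here only for compactness and are not the authors' notation):

$$u^{n+1} = \mathcal{R}_{\Delta t/2}\,\mathcal{D}_{\Delta t}\,\mathcal{R}_{\Delta t/2}\, u^{n}.$$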

In the adaptive simulations the mesh refinement ratio is fixed to be two and the critical AMR tolerance $\mathrm{tol}_{\mathrm{AMR}}$ is set as 0.02. The maximum and minimum relative tolerances which are used in the ATI period are chosen to be $\mathrm{rtol}^{(\max)}_{\mathrm{ATI}} = 10^{-2}$ and $\mathrm{rtol}^{(\min)}_{\mathrm{ATI}} = 10^{-4}$, respectively. We restrict the timestep sizes used in the adaptive time integration so that they are not greater than one millisecond, that is, $\Delta t^{(\max)} = 1$ msec. The value of $p$ in the exponent of (19) is equal to two as the scheme has second-order accuracy.

Example 2. Simulations on a two-dimensional branch with thickness/radius-variable and nonuniform branch segments. Voltage stimulation is applied from the right.

The two-dimensional branch shown in Figure 4 has a dimension of 4 cm × 2 cm, being bounded by the rectangular domain [0, 4] × [1, 3] cm². This two-dimensional branch has variable (piecewise constant) radius. Branch segments "6" and "7" have the smallest radius, equal to 20 μm. Branch segment "1" has the largest radius, equal to 160 μm and eight times that of branch segment "6". The radius of branch segment "2" is six times that of "6". Branch segments "3" and "4" have the same radius, four times that of "6". The radius of branch segment "5" is twice that of branch segment "6".

A voltage stimulation is applied from the right end of branch segment "1" through a discontinuous initial action potential, which is given as follows:

$$V_m(x, y) = \begin{cases} 25.4\ \text{mV} & \text{if } x > 3.5, \\ -84.6\ \text{mV} & \text{otherwise.} \end{cases} \tag{21}$$

The base grid used by the simulation with adaptive mesh refinement is a coarse partition of the branch structure (Figure 4) and has 240 edges, with minimum element size equal to 0.0078 cm and maximum element size equal to 0.078 cm. Note that this is a highly nonuniform grid as the ratio of the maximum to the minimum element size is much greater than one. Actually, in the simulation the base grid was generated by three times uniform bisection from a much coarser grid which only has 30 edges. This makes the multigrid iterations for linear systems more efficient, even though its contribution to the speedup of the overall algorithm is marginal as most (>90%) of the computer time used by the simulation was spent integrating nonlinear ODEs/reactions. The adaptive simulation effectively uses three refinement levels. To apply the Richardson extrapolation technique, the second level grid was created from uniform bisection of the base grid. In fact, only the grids at the third level were dynamically changing. They are regridded every other timestep.

Figure 4: A typical branch in the heart conduction system is highly nonuniform in the sense that the lengths of branch segments, each of which is bounded by two adjacent leaves or branching vertices, may vary significantly. In this figure, branch segments "1" and "4" are not directly connected. Instead, they are connected through a very short branch segment. Similarly, branch segments "4" and "6" are also connected through a very short segment.

In the adaptive simulation the timestep sizes vary from $\Delta t = 0.0098$ msec to $\Delta t = 1.0$ msec. The AMR process was automatically turned off at time $t = 42.85$ msec, as the algorithm determines there is no need to make spatially local refinement. After that, the ATI process is activated. The simulation on the time interval [0, 400] msec totally used 991 secs in computer time.

The two-dimensional branch is also partitioned into a fine quasi-uniform grid which has 726 edges, with minimum element size equal to 0.0104 cm and maximum element size equal to 0.0127 cm. The simulation with this fine quasi-uniform grid and timestep $\Delta t = 0.00635$ msec on the same time interval as the adaptive one used about 14060 secs in computer time. We call this the "uniform" simulation.

Another simulation, simply working with the coarse grid which is used as the base grid for the AMR one, is also performed. We call this the "no-refinement" simulation. The timestep is fixed to be $\Delta t = 0.039$ msec. This one used 767 secs in computer time.

In each of the simulations above, the action potential is initiated at the right end of branch segment "1" and propagates across the whole computational domain. We collected action potentials at those seven marked points (see Figure 4) during the simulations. See Figures 5 and 6 for traces of action potentials at the marked points "4" and "7", respectively. The action potentials from the adaptive and uniform mesh refinement simulations match very well, and their difference at the peak of the upstroke phase is uniformly bounded by 0.1 mV. The action potential from the "no-refinement" simulation is the least accurate. Potential oscillation is even observed at point "7" (see Figure 6) in the "no-refinement" simulation.

Figure 5: Traces of action potentials during the simulation period [0, 400] msec at marked point "4" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation. The dotted curves were from the uniform simulation. The dashed curves were from the "no-refinement" simulation. In these simulations a voltage stimulation is applied from the right side of the radius-variable branch structure. The right plot is a close-up of the left plot.

Figure 6: Traces of action potentials during the simulation period [0, 400] msec at the point marked as "7" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation. The dotted curves were from the uniform simulation. The dashed curves were from the "no-refinement" simulation. In these simulations a voltage stimulation is applied from the right side of the thickness/radius-variable branch structure. The right plot is a close-up of the left plot.

Example 3. Simulations on a two-dimensional branch whose segments have piecewise constant radii. Voltage stimulation is applied from the top.

We once again run three simulations, respectively with adaptive mesh refinement, uniform mesh refinement, and no refinement. The computational domain and parameters are all the same as those used previously. The only difference now is to stimulate the tissue from the top instead of from the right.

The voltage stimulation is applied from the upper part of branch "7" through a discontinuous initial action potential, which is given as follows:

$$V_m(x, y) = \begin{cases} 25.4\ \text{mV} & \text{if } y > 2.8, \\ -84.6\ \text{mV} & \text{otherwise.} \end{cases} \tag{22}$$


Figure 7: Traces of action potentials during the simulation period [0, 400] msec at marked points "7" and "6" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation. The dotted curves were from the uniform simulation. The dashed curves were from the "no-refinement" simulation. In these simulations a voltage stimulation is applied from the top of the radius-variable branch structure. The left plot corresponds to the marked point "7" and the right plot corresponds to the marked point "6".

The action potential successfully propagates across the whole computational domain in each of the adaptive and uniform refinement simulations, while it fails to pass across branching points where the fiber radius changes from small to large in the "no-refinement" simulation (see Figure 7).

To verify accuracy of the solution from the adaptive simulation, action potentials at the seven marked points (see Figure 4) are also collected and compared with those from the simulation with the fine and quasi-uniform grid. It is observed that the adaptive and uniform results match very well too, and their difference at the peak of the upstroke phase is bounded by 0.15 mV.

Figure 8: The idealized Purkinje system of the heart.

The simulation with adaptive mesh refinement used 1004 secs in computer time. The timestep sizes used in the adaptive simulation vary from $\Delta t = 0.0098$ msec to $\Delta t = 1.0$ msec. The adaptive mesh refinement was automatically turned off at time $t = 44.02$ msec.

The one with the uniform grid and timestep $\Delta t = 0.00635$ msec used 14050 secs in computer time.

Example 4. Simulations on a three-dimensional branch whose segments have piecewise constant radii.

The three-dimensional Purkinje system used in the following simulations was originally from 3Dscience.com. The fiber radius of the system was modified to be piecewise constant for simplicity. The radius of each branch segment takes one of five values, as in the two-dimensional case. The minimum and the maximum radius of the fiber are, respectively, 20 μm and 160 μm. Intermediate values are 40, 80, and 120 μm. See Figure 8 for an illustration of the topological structure, where the ratios of the variable radii are not shown to scale.

The idealized Purkinje system has a dimension of 2.04 × 2.16 × 2.98 cm³. It is bounded by the rectangular domain [−0.39, 1.65] × [4.05, 6.21] × [−1.31, 1.67] cm³. The maximum of the shortest paths from the top-most vertex to other leaf-vertices, computed by Dijkstra's algorithm, is about 5.29 cm.

As in the two-dimensional examples, in each of the simulations with the idealized Purkinje system a voltage stimulation is applied from the top branch through a discontinuous initial action potential given as follows:

$$V_m(x, y, z) = \begin{cases} 25.4\ \text{mV} & \text{if } z > 1.372, \\ -84.6\ \text{mV} & \text{otherwise.} \end{cases} \tag{23}$$


The base grid used by the adaptive simulation is a coarse partition of the Purkinje system and has 1412 edges/elements, with minimum element size equal to 0.006 cm and maximum element size equal to 0.075 cm. This is also a highly nonuniform grid as the ratio of the maximum to the minimum element size is up to 12.5. Once again, in this simulation the base grid was actually generated from uniform bisection of a coarser grid which has 706 edges/elements, as this makes the multigrid iterations for linear systems more efficient. The adaptive simulation effectively uses three refinement levels. Similar to the two-dimensional case, the second level grid was created from uniform bisection of the base grid and only the grids at the third level were dynamically changing, regridded every other timestep.

The timestep sizes used in the adaptive simulation vary from 0.00935 msec to 1.0 msec. The adaptive mesh refinement was automatically turned off at time $t = 42.74$ msec. The simulation on the time interval [0, 400] msec totally used 6267 secs in computer time.

The three-dimensional Purkinje system is also partitioned into a fine and quasi-uniform grid, which has 6311 edges/elements with minimum element size equal to 0.0095 cm and maximum element size equal to 0.016 cm. The simulation with this fine quasi-uniform grid and timestep $\Delta t = 0.0082$ msec used about 96960 secs in computer time.

We also run a simulation with timestep $\Delta t = 0.0374$ msec on the coarse grid that is used as the base grid for the adaptive simulation. This "no-refinement" simulation used 4580 secs in computer time.

It is observed that in each of the adaptive, uniform refinement, and no-refinement simulations the action potential successfully propagates and goes through the whole domain. The solution from the adaptive simulation is in very good agreement with that from the uniform simulation. We collected action potentials at sixteen points located in different regions of the domain and found that the corresponding potential traces almost overlap. Their difference at the peak of the upstroke phases is uniformly bounded by 0.4 mV. See Figures 9, 10, 11, and 12 for some visualized results, which correspond to action potentials at the four marked points shown in Figure 8. Figure 13 shows four plots of the action potential at different times by the AMR-ATI simulation. The adaptive simulation roughly uses the same computer time as the "no-refinement" one but yields much more accurate results and gains about 15.5 times speedup over the uniform one.

8. Discussion

This paper demonstrates the promising application of an algorithm that is adaptive in both space and time for simulating wave propagation on the His-Purkinje system.

In this work only the numerical results with membrane dynamics described by the Beeler-Reuter model are presented. The adaptive algorithm, however, is by no means restricted to this simple model. In fact, in our implementation the algorithm successfully works with other physically realistic models such as the Luo-Rudy dynamic model and the DiFrancesco-Noble model. Owing to the application of the operator splitting technique, in principle any standard cardiac model of reaction-diffusion type for modeling wave propagation can be easily incorporated into the adaptive algorithm.

Figure 9: Traces of action potentials during the simulation period [0, 400] msec at the point marked as "3" in Figure 8. The right plot is a close-up of the left one.

In addition, the adaptive algorithm can also be straightforwardly applied to modeling signal propagation on the neural system of the body or even the brain.

Other than its advantages, admittedly the algorithm has its own limitations. For example, efficiency of the adaptive algorithm heavily relies on the existence of a relatively coarse base grid which describes the computational domain. As the speedup of an adaptive mesh refinement algorithm is roughly bounded by the ratio of the maximum degrees of freedom (DOFs) of the grid with the highest resolution to the mean DOFs of all grids that have ever been created during the adaptive process, the existence of a coarse base grid makes this ratio, and hence the speedup over a simulation on the grid with the highest resolution, as large as possible. The use of a relatively coarse base grid also makes the multigrid/multilevel solver for linear systems more efficient. It is noticeable that in our simulations the base grids were all selected to have the number of edges/elements as small as possible.

Figure 10: Traces of action potentials during the simulation period [0, 400] msec at the point marked as "20" in Figure 8. The right plot is a close-up of the left one.

Figure 11: Traces of action potentials during the simulation period [0, 400] msec at the point marked as "76" in Figure 8. The right plot is a close-up of the left one.

In the AMR algorithm [10, 13] designed for uniform or quasi-uniform grids, the timestep size is typically selected to be proportional to the minimum of the element sizes in a grid (divided by an estimated wave speed), in order that the numerical waves or discontinuities will never propagate more than one element within a single timestep. As this strategy guarantees that the fronts of traveling waves are always bounded and tracked by fine grids, the ODEs resulting from operator splitting applied to the reaction-diffusion systems are only integrated on the fine grids, while the solution in other regions covered by coarser level grids is simply obtained by time interpolation.

Figure 12: Traces of action potentials during the simulation period [0, 400] msec at the point marked as "109" in Figure 8. The right plot is a close-up of the left one.

Figure 13: Four plots of the action potential at different times by the AMR-ATI simulation with the Beeler-Reuter model and Strang splitting (red denotes activated potential and blue denotes inactivated potential).

However, a typical branch in the heart conduction system, or the whole system on its own, is highly nonuniform in the sense that the lengths of branch segments, each of which is bounded by two adjacent leaves or branching vertices, may vary significantly (see Figure 4). This determines that the coarse partition of the domain is also highly nonuniform, since the algorithm requires the number of edges/elements to be as small as possible for efficiency considerations. On this kind of computational domain, the determination of timesteps based on minimum element sizes is found to be less efficient. Here, in the proposed adaptive algorithm, timestep sizes are instead chosen to be proportional to the maximum of the element sizes in a grid. This alternative strategy is experimentally proved to be efficient and accurate enough in simulating electric waves that travel moderately fast, even though its validity needs further investigation, in particular when the fiber radius is much larger than those currently used.

In the simulations presented in this work, error estimations in both the AMR and ATI processes were all based on the Richardson extrapolation process, which is in general more expensive but more rigorous and reliable than other simpler ones such as the gradient detector. If the gradient detector is used, the second coarsest grid in the AMR process does not need to be uniformly bisected and can also be changing dynamically, and the ATI period can use less computer time. The potential gain in algorithm efficiency and the possible loss in solution accuracy with the alternative use of a gradient detector can be studied and compared in a future work.

Finally, we make a remark about the parallelization of the space-time adaptive mesh refinement and adaptive time integration algorithm on the Purkinje system. Our numerical experiments in this work indicate that a large fraction (about 80%) of the computer time in the simulation is spent in the time integration for the reaction part, while the V-cycle multigrid iteration as well as the mesh refinement part uses only a small fraction (less than 20%). The dominance of the computer time in the split reaction part is partially due to the one-dimensional nature of the Purkinje network system (mesh refinement and grid generation on one-dimensional structures are much easier than the ones with multiple dimensions). It is obvious that this dominance will be beneficial for the parallelization of the algorithm, since the split reaction part (ODEs) is spatially independent, highly parallelizable, and can even be massively parallelized. Of course, the time consumption on different parts of the operator splitting depends on the specific cardiac models. With a more complicated cardiac model, the reaction part will consume more computer time than the diffusion part, and the corresponding parallel algorithm is expected to have better efficiency. Admittedly, with the reaction part fully parallelized, further improvement in parallel efficiency will depend on the parallelization of the V-cycle multigrid iteration, which solves linear systems over the whole fiber network and needs time in data communication between different CPUs or multicores. The parallel performance of the AMR-ATI algorithm deserves further investigation.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Research of the first author was supported in part by the National Science Foundation of the USA under Grant DMS-0915023 and is supported by the National Natural Science Foundation of China under Grants DMS-11101278 and DMS-91130012. Research of the first author is also supported by the Young Thousand Talents Program of China.

References

[1] E. J. Vigmond and C. Clements, "Construction of a computer model to investigate sawtooth effects in the Purkinje system," IEEE Transactions on Biomedical Engineering, vol. 54, no. 3, pp. 389–399, 2007.

[2] E. J. Vigmond, R. W. dos Santos, A. J. Prassl, M. Deo, and G. Plank, "Solvers for the cardiac bidomain equations," Progress in Biophysics and Molecular Biology, vol. 96, no. 1–3, pp. 3–18, 2008.

[3] S. Linge, J. Sundnes, M. Hanslien, G. T. Lines, and A. Tveito, "Numerical solution of the bidomain equations," Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 367, no. 1895, pp. 1931–1950, 2009.

[4] P. Pathmanathan, M. O. Bernabeu, R. Bordas et al., "A numerical guide to the solution of the bidomain equations of cardiac electrophysiology," Progress in Biophysics and Molecular Biology, vol. 102, no. 2-3, pp. 136–155, 2010.

[5] C. S. Henriquez, "A brief history of tissue models for cardiac electrophysiology," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1457–1465, 2014.

[6] M. J. Berger and J. Oliger, "Adaptive mesh refinement for hyperbolic partial differential equations," Journal of Computational Physics, vol. 53, no. 3, pp. 484–512, 1984.

[7] M. J. Berger and P. Colella, "Local adaptive mesh refinement for shock hydrodynamics," Journal of Computational Physics, vol. 82, no. 1, pp. 64–84, 1989.

[8] E. M. Cherry, H. S. Greenside, and C. S. Henriquez, "A space-time adaptive method for simulating complex cardiac dynamics," Physical Review Letters, vol. 84, no. 6, article 1343, 2000.

[9] E. M. Cherry, H. S. Greenside, and C. S. Henriquez, "Efficient simulation of three-dimensional anisotropic cardiac tissue using an adaptive mesh refinement method," Chaos, vol. 13, no. 3, pp. 853–865, 2003.

[10] J. A. Trangenstein and C. Kim, "Operator splitting and adaptive mesh refinement for the Luo-Rudy I model," Journal of Computational Physics, vol. 196, no. 2, pp. 645–679, 2004.

[11] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," The Journal of Physiology, vol. 117, no. 4, pp. 500–544, 1952.

[12] R. S. Varga, Matrix Iterative Analysis, Springer Series in Computational Mathematics, Springer, Berlin, Germany, 2nd edition, 2000.

[13] W. Ying, Ph.D. thesis, Department of Mathematics, Duke University, Durham, NC, USA, 2005.

[14] C. Brezinski and M. R. Zaglia, Extrapolation Methods: Theory and Practice, vol. 2 of Studies in Computational Mathematics, North-Holland Publishing, Amsterdam, The Netherlands, 1991.

[15] G. W. Beeler and H. Reuter, "Reconstruction of the action potential of ventricular myocardial fibres," The Journal of Physiology, vol. 268, no. 1, pp. 177–210, 1977.

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

Page 4: Research Article Adaptive Mesh Refinement and …downloads.hindawi.com/journals/bmri/2015/137482.pdfResearch Article Adaptive Mesh Refinement and Adaptive Time Integration for Electrical

4 BioMed Research International

int

120585(119895)

119894+1

120585(119895)

119894minus1

[1

119877

119889

119889119909119906(119895)

ℎ(119909)

119889

119889119909120593(119895)

119894(119909) + 120581119906

(119895)

ℎ(119909) 120593(119895)

119894(119909)] 119889119909

= int

120585(119895)

119894+1

120585(119895)

119894minus1

119887 (119909) 120593(119895)

119894(119909) 119889119909 for 119894 = 1 2 119898

119895minus 1

(15)

int

120585(119895)

119898119895

120585(119895)

119898119895minus1

[1

119877

119889

119889119909119906(119895)

ℎ(119909)

119889

119889119909120593(119895)

119898119895

(119909) + 120581119906(119895)

ℎ(119909) 120593(119895)

119898119895

(119909)] 119889119909

+ 119869 (119909(119895)

1) 120593(119895)

119898119895

(119909(119895)

1) = int

120585(119895)

119898119895

120585(119895)

119898119895minus1

119887 (119909) 120593(119895)

119898119895

(119909) 119889119909

(16)

By further discretizing each integral in the finite elementsystem above with the composite trapezoidal rule we can geta set of linear equations in the following form

A(119895)u(119895)ℎ+ J(119895) = ℎ(119895)b(119895) (17)

with A(119895) = (119886(119895)

119903119904)(119898119895+1)times(119898

119895+1)

as the finite element stiffnessmatrixb(119895) = (119887119895

119903)119898119895+1as the current vector and J(119895) as the flux

vector The stiffness matrix A(119895) is tridiagonal and symmetricpositive definite In each row at most three entries right onthe diagonal (119886(119895)

119903119903minus1 119886(119895)119903119903 and 119886

(119895)

119903119903+1) are nonzero The flux

vector J(119895) has the following form

J(119895) = (minus119869 (119909(119895)0) 0 0 119869 (119909

(119895)

1))119879

(18)

where most entries are zeros except the first and the last onesAs the fluxes 119869(119909(119895)

0) and 119869(119909

(119895)

1) through the endpoints

of each edge 119864119895are unknown the tridiagonal system (17)

involves two more unknowns than equations So for theglobal system to be uniquely solvable we need totally 2119873

119864

extra equationsconditionsFortunately at a nonleaf-vertex 119881

119894 which has degree 119889

119894gt

1 we have (119889119894minus1) equations by the continuity (3) of potentials

plus one more equation by the conservation (4) of electricfluxes At a leaf-vertex 119881

119894 which has 119889

119894= 1 the electric flux

conservation (4) provides exactly one boundary conditionTotally we have sum119873119881

119894=1119889119894= 2119873

119864additional equations The

number of equations in the final system is thus the same asthat of unknowns Normally the system is well-determined

The overall system on the fiber network involves both the potentials at the grid nodes and the electric fluxes at the fiber vertices as unknowns. In practical computation, we eliminate the electric fluxes to get a linear system with the discrete potentials as the only unknowns. As a matter of fact, noting that $\varphi^{(j)}_0(x^{(j)}_0) = 1$ and $\varphi^{(j)}_{m_j}(x^{(j)}_1) = 1$, from (14) and (16) we see that the electric flux $J(x^{(j)}_0)$ can be simply written out in terms of $u^{(j)}_0$ and $u^{(j)}_1$, and the electric flux $J(x^{(j)}_1)$ can be simply written out in terms of $u^{(j)}_{m_j-1}$ and $u^{(j)}_{m_j}$. Plugging the explicit expressions of the electric fluxes into the conservation condition (4), we get an equation involving the unknown potentials only. After eliminating the electric fluxes, we have a well-determined system which has $N_V + \sum_{j=1}^{N_E} (m_j - 1)$ equations and unknowns. It can be easily verified that the coefficient matrix of the resulting system is a strictly diagonally dominant and symmetric positive definite M-matrix [12].
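As a worked illustration of this elimination, using only the structure of (17) and (18) quoted above (so this is our reading, not a formula stated in the paper), the first and last rows of the edge system give the fluxes explicitly:

$$J(x^{(j)}_0) = a^{(j)}_{00}\,u^{(j)}_0 + a^{(j)}_{01}\,u^{(j)}_1 - h^{(j)} b^{(j)}_0, \qquad J(x^{(j)}_1) = h^{(j)} b^{(j)}_{m_j} - a^{(j)}_{m_j,m_j-1}\,u^{(j)}_{m_j-1} - a^{(j)}_{m_j,m_j}\,u^{(j)}_{m_j},$$

and substituting these expressions into the flux conservation condition (4) at each vertex yields an equation in the nodal potentials alone.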

In the case when the edge $E_j$ is partitioned into a locally refined grid or a composite grid, the ODE (10) can be similarly discretized with the continuous piecewise linear finite element method. The coefficient matrix $\mathbf{A}^{(j)}$ of the resulting system associated with each edge $E_j$ is also a tridiagonal, strictly diagonally dominant, and symmetric positive definite M-matrix. The overall system of discrete equations on the fiber network is well-determined too. The electric fluxes at the fiber vertices can also be eliminated so that the resulting system has the discrete potentials as the only unknowns and the coefficient matrix is a diagonally dominant M-matrix.

In this work, the grids for the space discretization are generated by local and adaptive mesh refinement (see Section 5 for details). The discrete finite element equations on the fiber network are solved with a V-cycle multigrid method. Basic components of the V-cycle iteration on the fiber network, including presmoothing, residual restriction, coarse grid correction, correction prolongation, and postsmoothing, are essentially the same as those of the V-cycle iteration for the composite grid equations on a single interval. We refer the readers to Ying's thesis work [13] for details of the multilevel multigrid iteration.

5. Adaptive Mesh Refinement

The adaptive mesh refinement (AMR) algorithm applied for this study discretizes the fiber network into a hierarchy of dynamically, locally, and adaptively refined grids. See Figure 2 for a few composite grids, each of which consists of five locally refined grids, at different times.

The AMR algorithm follows Berger-Oliger's approach in timestepping [6-10]. It uses a multilevel approach to recursively integrate the reaction-diffusion system on the composite grids. It first integrates the system with a large timestep on a coarse level grid. Next, a locally refined fine grid is generated based on the available coarse data. Then the algorithm integrates the system on the fine level grid with a small timestep. Error estimation is achieved by the Richardson extrapolation [14]. By the recursive nature of the algorithm, fine grids are dynamically and locally created based on the data on coarse grids. On fine grids, smaller timesteps are used for time integration. The AMR algorithm synchronizes adjacent coarse and fine levels from time to time. At the moment of synchronization, typical routines such as mesh regridding and data up-/downscaling are performed.

Let $K$ be the maximum number of mesh refinement levels and let $\ell_k$ with $k \in \{1, 2, \ldots, K\}$ be the $k$th level of mesh refinement. Algorithm 1 below gives a brief description of the central part of the adaptive mesh refinement procedure, which recursively advances the data at the current level $\ell_k$ and its finer levels.


Figure 2: Adaptively refined grids from a simulation which uses five refinement levels.

Algorithm 1 (advance(level $\ell_k$, timestep $\Delta t_k$)).

Step 1. Integrate the differential equation at the level $\ell_k$ by its timestep $\Delta t_k$ with the Strang operator splitting technique:

(a) Integrate the split reaction equation by a half timestep $\Delta t_k/2$.
(b) Integrate the split diffusion equation by a full timestep $\Delta t_k$.
(c) Integrate the split reaction equation by a half timestep $\Delta t_k/2$.

Step 2. If the level $\ell_k$ has not been refined, with $k < K$, or it is time to regrid the grid on the finer level $\ell_{k+1}$, refine the current level $\ell_k$ or regrid the finer level $\ell_{k+1}$ after error estimation. Scale data down from the current level $\ell_k$ to the finer level $\ell_{k+1}$.

Step 3. Set the timestep $\Delta t_{k+1} = \Delta t_k/2$ and integrate the finer level $\ell_{k+1}$ by two timesteps by recursively calling the function itself, "advance(level $\ell_{k+1}$, timestep $\Delta t_{k+1}$)".

Step 4. If the finer level $\ell_{k+1}$ and the current level $\ell_k$ are synchronized, reaching the same time, scale data up from the finer level $\ell_{k+1}$ to the current level $\ell_k$.
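A minimal C++ sketch of this recursive structure is given below. The Level type, its member functions, and the regrid-interval test are hypothetical placeholders; only the control flow of Algorithm 1 is intended to be faithful.

```cpp
// Sketch of Algorithm 1: recursive advancement of refinement levels.
// The Level type and its member functions are hypothetical placeholders.
struct Level {
    int k = 0;                      // refinement level index (0 = base grid)
    Level* finer = nullptr;         // next finer level, if it exists

    void integrateReaction(double /*dt*/)  { /* split reaction (ODE) step, e.g. SDIRK2 */ }
    void integrateDiffusion(double /*dt*/) { /* split diffusion (PDE) step, e.g. Crank-Nicolson */ }
    bool timeToRegrid() const               { return false; /* regrid-interval check */ }
    void refineOrRegridFiner()              { /* tag, buffer, and (re)generate the finer grid */ }
    void scaleDownTo(Level& /*fine*/)       { /* coarse -> fine data transfer */ }
    void scaleUpFrom(const Level& /*fine*/) { /* fine -> coarse data transfer */ }
};

// advance(level l_k, timestep dt_k), following Steps 1-4 of Algorithm 1.
void advance(Level& level, double dt, int maxLevels) {
    // Step 1: Strang splitting on the current level.
    level.integrateReaction(0.5 * dt);
    level.integrateDiffusion(dt);
    level.integrateReaction(0.5 * dt);

    // Step 2: refine the current level or regrid the finer level after
    // error estimation, then scale data down to the finer level.
    if (level.k + 1 < maxLevels && (level.finer == nullptr || level.timeToRegrid())) {
        level.refineOrRegridFiner();
    }
    if (level.finer != nullptr) {
        level.scaleDownTo(*level.finer);

        // Step 3: advance the finer level by two half-size timesteps (recursion).
        advance(*level.finer, 0.5 * dt, maxLevels);
        advance(*level.finer, 0.5 * dt, maxLevels);

        // Step 4: the levels are now synchronized; upscale the fine data.
        level.scaleUpFrom(*level.finer);
    }
}
```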

Figure 3 illustrates the recursive advancing, or integration, of three different mesh refinement levels. Coarse levels are advanced/integrated before fine levels. Each level has its own timestep of different size: a coarser level has a larger timestep size and a finer level has a smaller timestep. The algorithm first advances level $\ell_0$ by $\Delta t_0$ (indicated by "1"), next advances level $\ell_1$ by $\Delta t_1 = \Delta t_0/2$ (indicated by "2"), and then advances level $\ell_2$ by two steps with $\Delta t_2 = \Delta t_1/2$ (indicated by "3" and "4"). Upon the synchronization of levels $\ell_1$ and $\ell_2$, data on the fine level $\ell_2$ are upscaled to the coarser level $\ell_1$. The recursive integration continues with Steps "5", "6", "7", and so forth.

The design of the AMR algorithm used in this work was based on a few assumptions proposed by Trangenstein and Kim [10].

Figure 3: Recursive advancing of three mesh refinement levels.

(i) First, we assume that the number of elements in the base grid, which describes the computational domain and is provided by the user, is relatively small. This will allow linear systems on the grid to be solved very quickly in the middle of a multilevel (composite grid) iteration and will also improve the performance of the adaptive algorithm. The base grid is fixed during the process of adaptive mesh refinement. Other grids on fine levels are in general dynamically and recursively generated by local or uniform refinement of those on coarse levels.

(ii) Second, we assume that the region covered by the grid on a fine level is contained in the interior of that on the coarser level, unless both coarse and fine grids coincide with the physical boundary of the computational domain. This assumption can prevent recursive searching for element neighbors. This assumption is called proper level nesting.

(iii) Third, we assume that if an element of a coarse grid is refined in any part of its physical space, it must be refined everywhere. As a result, the boundary of the main grid on a fine level aligns with the boundary of a subgrid of that on the previous coarser level. By a subgrid of a grid we mean the union of a subset of its elements. This assumption is called alignment of level grids.

(iv) Fourth, after the data on a coarse level grid are advanced by some time, we assume that the data on its finer level are advanced by as many timesteps as required by stability and accuracy to reach exactly the same time as the coarse level. This assumption implies that a coarse level is integrated before its finer levels and that the timestep on a coarse level is an integer multiple of that on the next finer level. It also implies that the time stepping algorithm must be applied recursively within each timestep on all but the finest level. This assumption is called synchronization of advancement.

(v) Fifth, by the time a fine level and its coarser level are synchronized, we assume that the data on the fine level are more accurate than those on the coarse level. So the data on a fine level must be upscaled to its coarser level before the coarse level is further advanced by another coarse timestep. This assumption is called fine data preference.


(vi) Sixth, as stated, all grids except the coarsest one in the AMR algorithm are changing dynamically. It is necessary for a coarse level to regrid its finer level from time to time, but it would be extremely costly to tag elements and regrid levels every coarse timestep, so we would rather regrid infrequently. At a coarse level, the number of timesteps between times of regridding its finer level is called the regrid interval. In the AMR algorithm, the regrid interval may be chosen to be an integer divisor of the refinement ratio [13], which could be any even integer number in the implementation. This assumption is called infrequent regridding.

(vii) Finally, there is one more assumption on the adaptive algorithm, called finite termination. This means that the user shall specify a maximum number of refinement levels. Once the refinement reaches the maximum level, no further mesh refinement is performed.

As determined by the nature of the Purkinje system, the grids resulting from space discretization of the computational domain are essentially unstructured. This restricts direct application of Berger-Oliger's original AMR algorithm, which requires the underlying grid to be Cartesian or logically rectangular. The AMR algorithm adapted here for the Purkinje system works with unstructured grids. It represents an unstructured grid by lists of edges and nodes. A grid node is allowed to have multiple connected edges, and the degree of a node could be greater than two. The unstructured grid representation can naturally describe the Purkinje system, which is primarily a tree-like structure but has some loops inside.

Finally, it is worth mentioning that in the AMR algorithm there is a critical parameter, called the AMR tolerance and denoted by tolAMR, which is used by the Richardson extrapolation process for tagging coarse grid elements. The AMR tolerance closely influences the efficiency and accuracy of the algorithm: the larger the tolerance is, the more efficient the algorithm is but the less accurate the solution is; the smaller the tolerance is, the more accurate the solution is but the less efficient the algorithm is. Due to the limitation of space, detailed explanations of the Richardson extrapolation, the AMR tolerance, and other components of the AMR algorithm, such as tagging and buffering of coarse grid elements, implementation of the V-cycle multigrid iteration on the adaptively refined grids, and the data up-/downscaling between coarse and fine grids, are all omitted. We refer interested readers to Ying's dissertation [13].
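Although the detailed tagging procedure is deferred to [13], the fragment below sketches one plausible way a Richardson-style comparison against tolAMR could drive element tagging; the error estimate and every identifier here are assumptions for illustration, not the authors' implementation.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch: tag coarse elements whose Richardson-style error
// estimate exceeds tolAMR. coarseOneStep holds the solution advanced by one
// coarse step; coarseTwoHalfSteps holds the solution advanced by two half
// steps; both are sampled at the coarse element centers.
std::vector<bool> tagElements(const std::vector<double>& coarseOneStep,
                              const std::vector<double>& coarseTwoHalfSteps,
                              double tolAMR) {
    std::vector<bool> tagged(coarseOneStep.size(), false);
    for (std::size_t e = 0; e < coarseOneStep.size(); ++e) {
        double errEstimate = std::abs(coarseTwoHalfSteps[e] - coarseOneStep[e]);
        tagged[e] = (errEstimate > tolAMR);  // mark element e for refinement
    }
    return tagged;
}
```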

6. Adaptive Time Integration

The AMR algorithm can automatically recognize when and where the wave fronts start to leave or enter the computational domain due to external current/voltage stimulation or self-excitation. Once the algorithm finds that the wave fronts have left the computational domain and determines that there is no need to make any further mesh refinement, it turns the mesh refinement off and works with adaptive time integration (ATI) only. The implementation of timestep size control for the adaptive time integration is standard. In each timestep, the ODEs/reactions are integrated with a full timestep once and independently integrated with a half timestep twice. Then the two solutions are compared and an estimate of the relative numerical error is obtained by the standard Richardson extrapolation technique [13]. If the estimated (maximum) relative error, denoted by $E_{\max}$, is greater than a maximum relative tolerance $\mathrm{rtol}^{(\max)}_{\mathrm{ATI}}$, the timestep is rejected and a new timestep is estimated by

$$\Delta t^{(\mathrm{new})} = \gamma \cdot \left(\frac{\mathrm{rtol}^{(\max)}_{\mathrm{ATI}}}{E_{\max}}\right)^{1/(p+1)} \cdot \Delta t^{(\mathrm{old})}, \qquad (19)$$

and the Richardson extrapolation process is repeated. Otherwise, the solution with two half timesteps is simply accepted (actually, an extrapolated solution could be used for better accuracy). Here the coefficient $\gamma = 0.8$ is called a safety factor, which ensures that the new timestep size will be strictly smaller than the old one, to avoid repeated rejection of timesteps. The constant $p$ in the exponent of (19) is the accuracy order of the overall scheme.

If the suggested timestep size is less than a threshold timestep, which usually indicates that there is an abrupt/quick change of solution states, a local change of membrane properties, or the arrival of an external stimulus, the adaptive mesh refinement process is automatically turned back on again. In the implementation, the threshold timestep is not explicitly specified by the user. It is set as the timestep that is used on the base grid.

In addition, during the ATI period, if the estimated relative error $E_{\max}$ is less than a minimum relative tolerance $\mathrm{rtol}^{(\min)}_{\mathrm{ATI}}$, a slightly larger timestep

$$\Delta t^{(\mathrm{new})} = 1.1\,\Delta t^{(\mathrm{old})} \qquad (20)$$

is suggested for integration in the next step. The adaptive time integration does not allow the timestep size to be significantly increased within two consecutive steps, so that the implicit solver has a good initial guess. For the same reason, a maximum timestep size $\Delta t^{(\max)}$ is imposed during the adaptive time integration. This usually guarantees that the Newton solver used converges within a few iterations.
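To make the control logic above concrete, here is a minimal sketch of one ATI step under the stated rules (reject and shrink via (19), grow by the factor 1.1 via (20), clamp at the maximum timestep, and fall back to AMR below the base-grid timestep); the error-estimation routine and all identifiers are hypothetical placeholders.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical ATI timestep controller for one step. estimateRelativeError
// stands in for the full-step vs. two-half-step Richardson comparison.
struct AtiResult { double dtNext; bool reactivateAMR; };

AtiResult controlTimestep(double dt, double dtBase, double dtMax,
                          double rtolMax, double rtolMin, int p,
                          double (*estimateRelativeError)(double dt)) {
    const double gamma = 0.8;                        // safety factor in (19)
    double Emax = estimateRelativeError(dt);
    while (Emax > rtolMax) {                         // reject: shrink via (19)
        dt = gamma * std::pow(rtolMax / Emax, 1.0 / (p + 1)) * dt;
        if (dt < dtBase) return {dtBase, true};      // below threshold: turn AMR back on
        Emax = estimateRelativeError(dt);            // repeat the Richardson comparison
    }
    double dtNext = dt;
    if (Emax < rtolMin) dtNext = 1.1 * dt;           // grow slightly via (20)
    dtNext = std::min(dtNext, dtMax);                // never exceed dt^(max)
    return {dtNext, false};
}
```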

7. Results

The AMR algorithm proposed in this work was implemented in custom codes written in C++. The simulations presented in this section were all performed in double precision on a dual-Xeon 3.6 GHz computer. No data was output when the programs were run for timing studies.

In all of the numerical studies, the electrical resistivity $R$ and the membrane capacitance $C_m$ are fixed to be the constants 0.5 kΩ·cm and 1 μF/cm², respectively. The membrane dynamics are described by the Beeler-Reuter model [15]. No-flux (homogeneous Neumann-type) boundary conditions were applied at the leaf-nodes of the fiber network.


During the time integration of the time-dependent PDEs, a second-order operator splitting technique, the Strang splitting, is applied in each timestep. The ODEs resulting from operator splitting are integrated with a second-order singly diagonally implicit Runge-Kutta (SDIRK2) scheme. The linear diffusion equation is temporally integrated with the Crank-Nicolson scheme and spatially discretized with the continuous piecewise linear finite element method. The finite element system on locally refined grids is solved with a V-cycle multigrid/multilevel iterative method in each timestep at each refinement level. (In the V-cycle composite grid iteration, the multigrid prolongation is done by piecewise linear interpolation. The multigrid restriction is implemented in such a way that the restriction matrix is the transpose of the prolongation matrix. At the coarsest level, as the coefficient matrix of the discrete system is a symmetric and positive definite M-matrix, the linear equations are solved with a symmetric successive overrelaxation preconditioned conjugate gradient method.) The resulting scheme has second-order global accuracy if the tolerances in each component of the adaptive algorithm are consistently selected [13].
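As an illustration of the time integrators named above (not necessarily the exact variants used by the authors), a common two-stage, second-order, L-stable SDIRK scheme has the Butcher tableau

$$\begin{array}{c|cc} \gamma & \gamma & 0 \\ 1 & 1-\gamma & \gamma \\ \hline & 1-\gamma & \gamma \end{array}, \qquad \gamma = 1 - \tfrac{\sqrt{2}}{2},$$

and one Crank-Nicolson step for the split diffusion equation $u_t = \mathcal{L}u$ reads $(I - \tfrac{\Delta t}{2}\mathcal{L})\,u^{n+1} = (I + \tfrac{\Delta t}{2}\mathcal{L})\,u^{n}$.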

In the adaptive simulations, the mesh refinement ratio is fixed to be two and the critical AMR tolerance tolAMR is set as 0.02. The maximum and minimum relative tolerances, which are used in the ATI period, are chosen to be $\mathrm{rtol}^{(\max)}_{\mathrm{ATI}} = 10^{-2}$ and $\mathrm{rtol}^{(\min)}_{\mathrm{ATI}} = 10^{-4}$, respectively. We restrict the timestep sizes used in the adaptive time integration so that they are not greater than one millisecond, that is, $\Delta t^{(\max)} = 1$ msec. The value of $p$ in the exponent of (19) is equal to two, as the scheme has second-order accuracy.
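Gathered for reference, the stated simulation parameters might be collected in a small configuration structure like the one below; the structure and field names are illustrative only.

```cpp
// Illustrative collection of the parameter values quoted in the text.
struct SimulationConfig {
    // Physical parameters
    double resistivityR  = 0.5;     // kOhm * cm
    double capacitanceCm = 1.0;     // microF / cm^2
    // Adaptive mesh refinement
    int    refinementRatio = 2;
    double tolAMR          = 0.02;
    // Adaptive time integration
    double rtolMaxATI = 1e-2;
    double rtolMinATI = 1e-4;
    double dtMax      = 1.0;        // msec
    int    accuracyOrderP = 2;      // p in (19)
    double safetyGamma    = 0.8;    // gamma in (19)
};
```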

Example 2. Simulations on a two-dimensional branch with thickness/radius-variable and nonuniform branch segments. Voltage stimulation is applied from the right.

The two-dimensional branch shown in Figure 4 has a dimension of 4 cm × 2 cm and is bounded by the rectangular domain [0, 4] × [1, 3] cm². This two-dimensional branch has variable (piecewise constant) radius. Branch segments "6" and "7" have the smallest radius, equal to 20 μm. Branch segment "1" has the largest radius, equal to 160 μm and eight times that of branch segment "6". The radius of branch segment "2" is six times that of "6". Branch segments "3" and "4" have the same radius, four times that of "6". The radius of branch segment "5" is twice that of branch segment "6".

A voltage stimulation is applied from the right end of branch segment "1" through a discontinuous initial action potential, which is given as follows:

$$V_m(x, y) = \begin{cases} 25.4\ \text{mV}, & \text{if } x > 3.5,\\ -84.6\ \text{mV}, & \text{otherwise.} \end{cases} \qquad (21)$$
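For instance, the discontinuous initial condition (21) could be imposed node by node with a small helper like the following; the node container and field names are hypothetical.

```cpp
#include <vector>

// Hypothetical application of the initial action potential (21) to grid nodes.
struct Node { double x, y, Vm; };

void applyInitialCondition(std::vector<Node>& nodes) {
    for (Node& n : nodes) {
        n.Vm = (n.x > 3.5) ? 25.4 : -84.6;  // mV, per equation (21)
    }
}
```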

The base grid used by the simulation with adaptive mesh refinement is a coarse partition of the branch structure (Figure 4) and has 240 edges, with minimum element size equal to 0.0078 cm and maximum element size equal to 0.078 cm. Note that this is a highly nonuniform grid, as the ratio of the maximum to the minimum element size is much greater than one.


Figure 4: A typical branch in the heart conduction system is highly nonuniform in the sense that the lengths of branch segments, each of which is bounded by two adjacent leaves or branching vertices, may vary significantly. In this figure, branch segments "1" and "4" are not directly connected. Instead, they are connected through a very short branch segment. Similarly, branch segments "4" and "6" are also connected through a very short segment.

Actually, in the simulation the base grid was generated by three rounds of uniform bisection from a much coarser grid, which only has 30 edges. This makes the multigrid iterations for linear systems more efficient, even though its contribution to the speedup of the overall algorithm is marginal, as most (>90%) of the computer time used by the simulation was spent integrating the nonlinear ODEs/reactions. The adaptive simulation effectively uses three refinement levels. To apply the Richardson extrapolation technique, the second level grid was created from uniform bisection of the base grid. In fact, only the grids at the third level were dynamically changing. They are regridded every other timestep.

In the adaptive simulation, the timestep sizes vary from $\Delta t = 0.0098$ msecs to $\Delta t = 1.0$ msec. The AMR process was automatically turned off at time $t = 42.85$ msecs, as the algorithm determines there is no need to make spatially local refinement. After that, the ATI process is activated. The simulation on the time interval [0, 400] msecs used 99.1 secs in computer time in total.

The two-dimensional branch is also partitioned into a fine quasi-uniform grid, which has 726 edges, with minimum element size equal to 0.0104 cm and maximum element size equal to 0.0127 cm. The simulation with this fine quasi-uniform grid and timestep $\Delta t = 0.00635$ msecs on the same time interval as the adaptive one used about 1406.0 secs in computer time. We call this the "uniform" simulation.

Another simulation, simply working with the coarse grid that is used as the base grid for the AMR one, is also performed. We call this the "no-refinement" simulation. The timestep is fixed to be $\Delta t = 0.039$ msecs. This one used 76.7 secs in computer time.

In each of the simulations above, the action potential is initiated at the right end of branch segment "1" and propagates across the whole computational domain. We collected action potentials at the seven marked points (see Figure 4) during the simulations. See Figures 5 and 6 for traces of action potentials at the marked points "4" and "7", respectively. The action potentials from the adaptive and uniform mesh refinement simulations match very well, and their difference at the peak of the upstroke phase is uniformly bounded by 0.1 mV. The action potential from the "no-refinement" simulation is the least accurate.



Figure 5: Traces of action potentials during the simulation period [0, 400] msecs at marked point "4" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation. The dotted curves were from the uniform simulation. The dashed curves were from the "no-refinement" simulation. In these simulations, a voltage stimulation is applied from the right side of the radius-variable branch structure. The right plot is a close-up of the left plot.

Potential oscillation is even observed at point "7" (see Figure 6) in the "no-refinement" simulation.

Example 3. Simulations on a two-dimensional branch whose segments have piecewise constant radii. Voltage stimulation is applied from the top.

We once again run three simulations, respectively, with adaptive mesh refinement, uniform mesh refinement, and no refinement. The computational domain and parameters are all the same as those used previously.


Figure 6: Traces of action potentials during the simulation period [0, 400] msecs at the point marked as "7" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation. The dotted curves were from the uniform simulation. The dashed curves were from the "no-refinement" simulation. In these simulations, a voltage stimulation is applied from the right side of the thickness/radius-variable branch structure. The right plot is a close-up of the left plot.

The only difference now is that the tissue is stimulated from the top instead of from the right.

The voltage stimulation is applied from the upper part of branch "7" through a discontinuous initial action potential, which is given as follows:

$$V_m(x, y) = \begin{cases} 25.4\ \text{mV}, & \text{if } y > 2.8,\\ -84.6\ \text{mV}, & \text{otherwise.} \end{cases} \qquad (22)$$



Figure 7: Traces of action potentials during the simulation period [0, 400] msecs at marked points "7" and "6" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation. The dotted curves were from the uniform simulation. The dashed curves were from the "no-refinement" simulation. In these simulations, a voltage stimulation is applied from the top of the radius-variable branch structure. The left plot corresponds to the marked point "7" and the right plot corresponds to the marked point "6".

The action potential successfully propagates across the whole computational domain in each of the adaptive and uniform refinement simulations, while it fails to pass across branching points where the fiber radius changes from small to large in the "no-refinement" simulation (see Figure 7).

To verify the accuracy of the solution from the adaptive simulation, action potentials at the seven marked points (see Figure 4) are also collected and compared with those from the simulation with the fine and quasi-uniform grid. It is observed that the adaptive and uniform results again match very well.


Figure 8: The idealized Purkinje system of the heart.

Their difference at the peak of the upstroke phase is bounded by 0.15 mV.

The simulation with adaptive mesh refinement used 100.4 secs in computer time. The timestep sizes used in the adaptive simulation vary from $\Delta t = 0.0098$ msecs to $\Delta t = 1.0$ msec. The adaptive mesh refinement was automatically turned off at time $t = 44.02$ msecs.

The one with the uniform grid and timestep $\Delta t = 0.00635$ msecs used 1405.0 secs in computer time.

Example 4. Simulations on a three-dimensional branch structure whose segments have piecewise constant radii.

The three-dimensional Purkinje system used in the following simulations was originally from 3Dscience.com. The fiber radius of the system was modified to be piecewise constant for simplicity. The radius of each branch segment takes one of the same five values as in the two-dimensional case. The minimum and the maximum radius of the fiber are, respectively, 20 μm and 160 μm. Intermediate values are 40, 80, and 120 μm. See Figure 8 for an illustration of the topological structure, where the ratios of the variable radii are not shown to scale.

The idealized Purkinje system has a dimension of 2.04 × 2.16 × 2.98 cm³. It is bounded by the rectangular domain [−0.39, 1.65] × [4.05, 6.21] × [−1.31, 1.67] cm³. The maximum of the shortest paths from the top-most vertex to the other leaf-vertices, computed by Dijkstra's algorithm, is about 5.29 cm.

As in the two-dimensional examples, in each of the simulations with the idealized Purkinje system a voltage stimulation is applied from the top branch through a discontinuous initial action potential given as follows:

$$V_m(x, y, z) = \begin{cases} 25.4\ \text{mV}, & \text{if } z > 1.372,\\ -84.6\ \text{mV}, & \text{otherwise.} \end{cases} \qquad (23)$$


The base grid used by the adaptive simulation is a coarse partition of the Purkinje system and has 1412 edges/elements, with minimum element size equal to 0.006 cm and maximum element size equal to 0.075 cm. This is also a highly nonuniform grid, as the ratio of the maximum to the minimum element size is up to 12.5. Once again, in this simulation the base grid was actually generated from uniform bisection of a coarser grid, which has 706 edges/elements, as this makes the multigrid iterations for linear systems more efficient. The adaptive simulation effectively uses three refinement levels. Similar to the two-dimensional case, the second level grid was created from uniform bisection of the base grid, and only the grids at the third level were dynamically changing, regridded every other timestep.

The timestep sizes used in the adaptive simulation vary from 0.00935 msecs to 1.0 msec. The adaptive mesh refinement was automatically turned off at time $t = 42.74$ msecs. The simulation on the time interval [0, 400] msecs used 626.7 secs in computer time in total.

The three-dimensional Purkinje system is also partitioned into a fine and quasi-uniform grid, which has 6311 edges/elements, with minimum element size equal to 0.0095 cm and maximum element size equal to 0.016 cm. The simulation with this fine quasi-uniform grid and timestep $\Delta t = 0.0082$ msecs used about 9696.0 secs in computer time.

We also ran a simulation with timestep $\Delta t = 0.0374$ msecs on the coarse grid that is used as the base grid for the adaptive simulation. This "no-refinement" simulation used 458.0 secs in computer time.

It is observed that, in each of the adaptive refinement, uniform refinement, and no-refinement simulations, the action potential successfully propagates and goes through the whole domain. The solution from the adaptive simulation is in very good agreement with that from the uniform simulation. We collected action potentials at sixteen points located in different regions of the domain and found that the corresponding potential traces almost overlap. Their difference at the peak of the upstroke phases is uniformly bounded by 0.4 mV. See Figures 9, 10, 11, and 12 for some visualized results, which correspond to action potentials at the four marked points shown in Figure 8. Figure 13 shows four plots of the action potential at different times by the AMR-ATI simulation. The adaptive simulation uses roughly the same computer time as the "no-refinement" one but yields much more accurate results, and it gains about a 15.5 times speedup over the uniform one.

8. Discussion

This paper demonstrates the promising application of a both space and time adaptive algorithm for simulating wave propagation on the His-Purkinje system.

In this work, only numerical results with membrane dynamics described by the Beeler-Reuter model are presented. The adaptive algorithm, however, is by no means restricted to this simple model. In fact, in our implementation the algorithm successfully works with other physically realistic models.


Figure 9: Traces of action potentials during the simulation period [0, 400] msecs at the point marked as "3" in Figure 8. The right plot is a close-up of the left one.

Examples include the Luo-Rudy dynamic model and the DiFrancesco-Noble model. Since the advantage of the algorithm stems from the application of the operator splitting technique, in principle any standard cardiac model of reaction-diffusion type for modeling wave propagation can be easily incorporated into the adaptive algorithm.

In addition, the adaptive algorithm can also be straightforwardly applied to modeling signal propagation on the neural system of the body or even the brain.

Other than its advantages, admittedly the algorithm has its own limitations. For example, the efficiency of the adaptive algorithm heavily relies on the existence of a relatively coarse base grid which describes the computational domain.



Figure 10: Traces of action potentials during the simulation period [0, 400] msecs at the point marked as "20" in Figure 8. The right plot is a close-up of the left one.


Figure 11: Traces of action potentials during the simulation period [0, 400] msecs at the point marked as "76" in Figure 8. The right plot is a close-up of the left one.

As the speedup of an adaptive mesh refinement algorithm is roughly bounded by the ratio of the maximum degrees of freedom (DOFs) of the grid with the highest resolution to the mean DOFs of all grids that have ever been created during the adaptive process, the existence of a coarse base grid makes this ratio, and hence the speedup over a simulation on the grid with the highest resolution, as large as possible. The use of a relatively coarse base grid also makes the multigrid/multilevel solver for linear systems more efficient. It is noticeable that in our simulations the base grids were all selected to have the number of edges/elements as small as possible.
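Stated loosely as a formula (our paraphrase of the claim above, not an expression from the paper), the attainable speedup behaves like

$$\text{speedup} \;\lesssim\; \frac{N_{\text{finest}}}{\overline{N}}, \qquad \overline{N} = \frac{1}{K}\sum_{k=1}^{K} N_k,$$

where $N_{\text{finest}}$ is the number of DOFs of the grid with the highest resolution and the $N_k$ are the DOFs of the grids created during the adaptive process.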

In the AMR algorithm [10, 13] designed for uniform or quasi-uniform grids, the timestep size is typically selected to be proportional to the minimum of the element sizes in a grid (multiplied by an estimated wave speed), in order that the numerical waves or discontinuities never propagate more than one element within a single timestep. As this strategy guarantees that the fronts of traveling waves are always bounded and tracked by fine grids, the ODEs resulting from operator splitting applied to the reaction-diffusion systems are only integrated on the fine grids.



Figure 12: Traces of action potentials during the simulation period [0, 400] msecs at the point marked as "109" in Figure 8. The right plot is a close-up of the left one.


Figure 13: Four plots of the action potential at different times by the AMR-ATI simulation (red denotes activated potential and blue denotes inactivated potential).


In the other regions, covered by coarser level grids, the solution is simply obtained by time interpolation.

However, a typical branch in the heart conduction system, or the whole system on its own, is highly nonuniform in the sense that the lengths of branch segments, each of which is bounded by two adjacent leaves or branching vertices, may vary significantly (see Figure 4). This determines that the coarse partition of the domain is also highly nonuniform, since the algorithm requires the number of edges/elements to be as small as possible for efficiency considerations. On this kind of computational domain, the determination of timesteps based on minimum element sizes is found to be less efficient. Here, in the proposed adaptive algorithm, timestep sizes are instead chosen to be proportional to the maximum of the element sizes in a grid. This alternative strategy is experimentally proved to be efficient and accurate enough in simulating electric waves that travel moderately fast, even though its rigidity needs further investigation, in particular when the fiber radius is much larger than those currently used.
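Written schematically (our reading of the two strategies, with $c$ an estimated wave speed; this is not a formula given in the paper), the conventional choice and the choice adopted here are

$$\Delta t_k \;\propto\; \frac{\min_{e \in \ell_k} h_e}{c} \qquad \text{versus} \qquad \Delta t_k \;\propto\; \frac{\max_{e \in \ell_k} h_e}{c},$$

where $h_e$ denotes the size of element $e$ on level $\ell_k$.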

In the simulations presented in this work, error estimations in both the AMR and ATI processes were all based on the Richardson extrapolation process, which is in general more expensive but more rigorous and reliable than simpler alternatives such as a gradient detector. If a gradient detector is used, the second coarsest grid in the AMR process does not need to be uniformly bisected and can also change dynamically, and the ATI period can use less computer time. The potential gain in algorithm efficiency and the possible loss in solution accuracy with the alternative use of a gradient detector can be studied and compared in a future work.

Finally, we make a remark about the parallelization of the space-time adaptive mesh refinement and adaptive time integration algorithm on the Purkinje system. Our numerical experiments in this work indicate that a large fraction (about 80%) of the computer time in the simulation is spent in the time integration for the reaction part, while the V-cycle multigrid iteration as well as the mesh refinement part uses only a small fraction (less than 20%). The dominance of the computer time in the split reaction part is partially due to the one-dimensional nature of the Purkinje network system (mesh refinement and grid generation on one-dimensional structures are much easier than those in multiple dimensions). It is obvious that this dominance will be beneficial for the parallelization of the algorithm, since the split reaction part (ODEs) is spatially independent, highly parallelizable, and can even be massively parallelized. Of course, the time consumption of the different parts of the operator splitting depends on the specific cardiac model. With a more complicated cardiac model, the reaction part will consume more computer time than the diffusion part and the corresponding parallel algorithm is expected to have better efficiency. Admittedly, with the reaction part fully parallelized, further improvement in parallel efficiency will depend on the parallelization of the V-cycle multigrid iteration, which solves linear systems over the whole fiber network and needs time for data communication between different CPUs or multicores. The parallel performance of the AMR-ATI algorithm deserves further investigation.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Research of the first author was supported in part by the National Science Foundation of the USA under Grant DMS-0915023 and is supported by the National Natural Science Foundation of China under Grants DMS-11101278 and DMS-91130012. Research of the first author is also supported by the Young Thousand Talents Program of China.

References

[1] E. J. Vigmond and C. Clements, "Construction of a computer model to investigate sawtooth effects in the Purkinje system," IEEE Transactions on Biomedical Engineering, vol. 54, no. 3, pp. 389-399, 2007.

[2] E. J. Vigmond, R. W. dos Santos, A. J. Prassl, M. Deo, and G. Plank, "Solvers for the cardiac bidomain equations," Progress in Biophysics and Molecular Biology, vol. 96, no. 1-3, pp. 3-18, 2008.

[3] S. Linge, J. Sundnes, M. Hanslien, G. T. Lines, and A. Tveito, "Numerical solution of the bidomain equations," Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 367, no. 1895, pp. 1931-1950, 2009.

[4] P. Pathmanathan, M. O. Bernabeu, R. Bordas, et al., "A numerical guide to the solution of the bidomain equations of cardiac electrophysiology," Progress in Biophysics and Molecular Biology, vol. 102, no. 2-3, pp. 136-155, 2010.

[5] C. S. Henriquez, "A brief history of tissue models for cardiac electrophysiology," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1457-1465, 2014.

[6] M. J. Berger and J. Oliger, "Adaptive mesh refinement for hyperbolic partial differential equations," Journal of Computational Physics, vol. 53, no. 3, pp. 484-512, 1984.

[7] M. J. Berger and P. Colella, "Local adaptive mesh refinement for shock hydrodynamics," Journal of Computational Physics, vol. 82, no. 1, pp. 64-84, 1989.

[8] E. M. Cherry, H. S. Greenside, and C. S. Henriquez, "A space-time adaptive method for simulating complex cardiac dynamics," Physical Review Letters, vol. 84, no. 6, article 1343, 2000.

[9] E. M. Cherry, H. S. Greenside, and C. S. Henriquez, "Efficient simulation of three-dimensional anisotropic cardiac tissue using an adaptive mesh refinement method," Chaos, vol. 13, no. 3, pp. 853-865, 2003.

[10] J. A. Trangenstein and C. Kim, "Operator splitting and adaptive mesh refinement for the Luo-Rudy I model," Journal of Computational Physics, vol. 196, no. 2, pp. 645-679, 2004.

[11] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," The Journal of Physiology, vol. 117, no. 4, pp. 500-544, 1952.

[12] R. S. Varga, Matrix Iterative Analysis, Springer Series in Computational Mathematics, Springer, Berlin, Germany, 2nd edition, 2000.

[13] W. Ying, Ph.D. dissertation, Department of Mathematics, Duke University, Durham, NC, USA, 2005.

[14] C. Brezinski and M. R. Zaglia, Extrapolation Methods: Theory and Practice, vol. 2 of Studies in Computational Mathematics, North-Holland Publishing, Amsterdam, The Netherlands, 1991.

[15] G. W. Beeler and H. Reuter, "Reconstruction of the action potential of ventricular myocardial fibres," The Journal of Physiology, vol. 268, no. 1, pp. 177-210, 1977.


Page 5: Research Article Adaptive Mesh Refinement and …downloads.hindawi.com/journals/bmri/2015/137482.pdfResearch Article Adaptive Mesh Refinement and Adaptive Time Integration for Electrical

BioMed Research International 5

Figure 2 Adaptively refined grids from a simulation which use fiverefinement levels

Algorithm 1 (advance(level ℓ119896 timestep Δ119905

119896))

Step 1 Integrate the differential equation at the level ℓ119896by its

timestep Δ119905119896with the Strang operator splitting technique

(a) Integrate the split reaction equation by a half timestepΔ1199051198962

(b) Integrate the split diffusion equation by a full timestepΔ119905119896

(c) Integrate the split reaction equation by a half timestepΔ1199051198962

Step 2 If the level ℓ119896has not been refined with 119896 lt 119870 or

it is time to regrid the grid on the finer level ℓ119896+1

refinethe current level ℓ

119896or regrid the finer level ℓ

119896+1after error

estimation Scale data down from the current level ℓ119896to the

finer level ℓ119896+1

Step 3 Set the timestep Δ119905119896+1

= Δ1199051198962 and integrate the finer

level ℓ119896+1

by two timesteps by recursively calling the functionitself ldquoadvance(level ℓ

119896+1 timestep Δ119905

119896+1)rdquo

Step 4 If the finer level ℓ119896+1

and the current level ℓ119896are

synchronized reaching the same time scale data up from thefiner level ℓ

119896+1to the current level ℓ

119896

Figure 3 illustrates the recursive advancing or integrationof three different mesh refinement levels Coarse levels areadvancedintegrated before fine levels Each level has its owntimestep of different size a coarser level has a larger timestepsize and a finer level has a smaller timestep The algorithmfirst advances level ℓ

0byΔ1199050(indicated by ldquo1rdquo) next advances

level ℓ1by Δ1199051= Δ11990502 (indicated by ldquo2rdquo) and then advances

level ℓ2by two steps with Δ119905

2= Δ11990512 (indicated by ldquo3rdquo and

ldquo4rdquo) Upon the synchronization of levels ℓ1and ℓ2 data on the

fine level ℓ2are upscaled to the coarser level ℓ

1 The recursive

integration continues with Steps ldquo5rdquo ldquo6rdquo ldquo7rdquo and so forthThe design of the AMR algorithm used in this work was

based on a few assumptions proposed by Trangenstein andKim [10]

(i) First we assume the number of elements in the basegrid which describes the computational domain andis provided by the user and is relatively small Thiswill allow linear systems on the grid to be solved veryquickly in the middle of a multilevel (composite grid)

Time

UpscaleDownscale Upscale

1

3

2 5 9

4 6 7 10 11

12

13 14

8

1200011

1200012

1200010

Figure 3 Recursive advancing of three mesh refinement levels

iteration andwill also improve the performance of theadaptive algorithm The base grid is fixed during theprocess of adaptive mesh refinement Other grids onfine levels are in general dynamically and recursivelygenerated by local or uniform refinement of those oncoarse levels

(ii) Second we assume that the region covered by the gridon a fine level is contained in the interior of that onthe coarser level unless both coarse and fine gridscoincide with physical boundary of the computa-tional domainThis assumption can prevent recursivesearching for element neighbors This assumption iscalled proper level nesting

(iii) Third we assume that if an element of a coarse gridis refined in any part of its physical space it must berefined everywhere As a result the boundary of themain grid on a fine level aligns with the boundary ofa subgrid of that on the previous coarser level By asubgrid of a grid we mean the union of a subset of itselements This assumption is called alignment of levelgrids

(iv) Fourth after the data on a coarse level grid areadvanced by some time we assume that the data onits finer level are advanced by as several timesteps asrequired by stability and accuracy to reach exactlythe same time as the coarse level This assumptionimplies that a coarse level is integrated before its finerlevels and the timestep on a coarse level is an integermultiple of that on the next finer level It also impliesthat the time stepping algorithm must be appliedrecursively within each timestep on all but the finestlevel This assumption is called synchronization ofadvancement

(v) Fifth by the time a fine level and its coarser level aresynchronized we assume that data on the fine levelis more accurate than that on the coarse level So thedata on a fine levelmust be upscaled to its coarser levelbefore the coarse level is further advanced by anothercoarse timestep This assumption is called fine datapreference

6 BioMed Research International

(vi) Sixth as stated all grids except the coarsest one inthe AMR algorithm are changing dynamically It isnecessary for a coarse level to regrid its finer levelfrom time to time But it will be extremely costly totag elements and regrid levels every coarse timestepSo we would rather make regridding infrequently Ata coarse level the number of timesteps between timesof regridding its finer level is called regrid intervalIn the AMR algorithm the regrid interval may bechosen to be an integer divisor of the refinement ratio[13] which could be any even integer number in theimplementation This assumption is called infrequentregridding

(vii) Finally there is one more assumption on the adaptivealgorithm called finite termination This means thatthe user shall specify a maximum number of refine-ment levels Once the refinement reaches the maxi-mum level no further mesh refinement is performed

As determined by the nature of the Purkinje system thegrids resulting from space discretization of the computationaldomain are essentially unstructured This restricts directapplication of Berger-Oligerrsquos originalAMRalgorithmwhichrequires the underlying grid to be Cartesian or logicallyrectangular The AMR algorithm adapted here for the Purk-inje system works with unstructured grids It represents anunstructured grid by lists of edges and nodes A grid nodeis allowed to have multiple connected edges and the degreeof a node could be greater than two The unstructured gridrepresentation can naturally describe the Purkinje systemwhich primarily is a tree-like structure but has some loopsinside

Finally it is worthmentioning that in theAMRalgorithmthere is a critical parameter called AMR tolerance anddenoted by tolAMR which is used by the Richardson extrap-olation process for tagging coarse grid elements The AMRtolerance closely influences efficiency and accuracy of thealgorithm The larger the tolerance is the more efficient thealgorithm is but the less accurate the solution is The smallerthe tolerance is the more accurate the solution is but theless efficient the algorithm is Due to the limitation of spacethe detailed explanation of the Richardson extrapolationthe AMR tolerance and other components of the AMRalgorithm such as tagging and buffering of coarse gridelements implementation of the V-cycle multigrid iterationon the adaptively refined grids and the data up-downscalingbetween coarse and fine grids are all omitted We referinterested readers to Yingrsquos dissertation [13]

6 Adaptive Time Integration

The AMR algorithm can automatically recognize when andwhere the wave fronts start to leave or enter the computa-tional domain due to external currentvoltage stimulation orself-excitation Once the algorithm finds that the wave frontshave left the computational domain and determines that thereis no need to make any further mesh refinement it will turnit off and work with adaptive time integration (ATI) onlyTheimplementation of timestep size control for the adaptive time

integration is standard In each timestep the ODEsreactionsare integrated with a full timestep once and independentlyintegrated with a half timestep twice Then the two solutionsare compared and an estimation of relative numerical error isobtained by the standard Richardson extrapolation technique[13] If the estimated (maximum) relative error denoted by119864max is greater than amaximum relative tolerance rtol(max)

ATI the timestep size was rejected and a new timestep will beestimated by

Δ119905(new)

= 120574 sdot (rtol(max)

ATI119864max

)

1(119901+1)

sdot Δ119905(old)

(19)

and the Richardson extrapolation process is repeated Other-wise the solution with two half timesteps is simply accepted(actually an extrapolated solution could be used for betteraccuracy) Here the coefficient 120574 = 08 is called a safetyfactor which ensures that the new timestep size will bestrictly smaller than the old one to avoid repeated rejectionof timesteps The constant 119901 in the exponent of (19) is theaccuracy order of the overall scheme

If the suggested timestep size is less than a thresholdtimestep which usually indicates that there is an abruptquickchange of solution states a local change of membraneproperties or arrival of an external stimulus the adaptivemesh refinement process is then automatically turned backon again In the implementation the threshold timestep is notexplicitly specified by the user It is set as the timestep that isused on the base grid

In addition during the ATI period if the estimatedrelative error 119864max is less than aminimum relative tolerancertol(min)

ATI a slightly larger timestep

Δ119905(new)

= 11Δ119905(old)

(20)

is suggested for integration in the next step The adaptivetime integration does not like timestep size to be significantlyincreased within two consecutive steps for the implicit solverto have a good initial guess For the same reason a maximumtimestep size Δ119905(max) is imposed during the adaptive timeintegration This usually guarantees that the Newton solverused converges within a few iterations

7 Results

The AMR algorithm proposed in the work was implementedin custom codes written in C++ The simulations presentedin this section were all performed in double precision on adual Xeon 36GHz computer No data was output when theprograms were run for timing studies

In all of the numerical studies the electrical resistivity 119877and the membrane capacitance 119862

119898are fixed to be constant

05 kΩsdotcm and 1120583Fcm2 respectivelyThemembrane dynam-ics are described by the Beeler-Reuter model [15] No-flux(homogeneous Neumann-type) boundary conditions wereapplied at the leaf-nodes of the fiber network

BioMed Research International 7

During the time integration for the time-dependentPDEs a second-order operator splitting technique the Strangsplitting is applied in each timestep The ODEs resultingfrom operator splitting are integrated with a second-ordersingly diagonally implicit Runge-Kutta (SDIRK2) schemeThe linear diffusion equation is temporally integrated withthe Crank-Nicolson scheme and spatially discretizedwith thecontinuous piecewise linear finite element methodThe finiteelement system on locally refined grids is solved with a V-cycle multigridmultilevel iterative method in each timestepat each refinement level (In the V-cycle composite griditeration the multigrid prolongation is done by the piecewiselinear interpolationThemultigrid restriction is implementedin a way so that the restriction matrix is the transpose of theprolongation matrix At the coarsest level as the coefficientmatrix of the discrete system is a symmetric and positivedefiniteM-matrix the linear equations are solvedwith a sym-metric successive overrelaxation preconditioned conjugategradient method) The resulting scheme has second-orderglobal accuracy if the tolerances in each component of theadaptive algorithm are consistently selected [13]

In the adaptive simulations the mesh refinement ratio isfixed to be two and the critical AMR tolerance tolAMR is set as002Themaximum andminimum relative tolerances whichare used in the ATI period are chosen to be rtol(max)

ATI = 10minus2

and rtol(min)ATI = 10

minus4 respectively We restrict the timestepsizes used in the adaptive time integration so that they arenot greater than one millisecond that is Δ119905(max)

= 1msecThe value of 119901 in the exponent of (19) is equal to two as thescheme has second-order accuracy

Example 2 Simulations on a two-dimensional branch withthicknessradius-variable and nonuniform branch segmentsVoltage stimulation is applied from right

The two-dimensional branch shown in Figure 4 has adimension of 4 cmtimes 2 cm which is bounded by the rectangu-lar domain [0 4] times [1 3] cm2 This two-dimensional branchhas variable (piecewise constant) radius Branch segmentsldquo6rdquo and ldquo7rdquo have the smallest radius equal to 20 120583m Branchsegment ldquo1rdquo has the largest radius equal to 160120583m andeight times that of branch segment ldquo6rdquo The radius of branchsegment ldquo2rdquo is six times that of ldquo6rdquo Branch segments ldquo3rdquo andldquo4rdquo have the same radius four times that of ldquo6rdquoThe radius ofbranch segment ldquo5rdquo is twice that of branch segment ldquo6rdquo

A voltage stimulation is applied from the right end ofbranch segment ldquo1rdquo through a discontinuous initial actionpotential which is given as follows

119881119898(119909 119910) =

254mvolts if 119909 gt 35

minus846mvolts otherwise(21)

The base grid used by the simulation with adaptive meshrefinement is a coarse partition of the branch structure (Fig-ure 4) and has 240 edges with minimum element size equalto 00078 cm and maximum element size equal to 0078 cmNote that this is a highly nonuniform grid as the ratio of themaximum to theminimum element size is much greater than

1

2

3

45

6

7

Figure 4 A typical branch in the heart conduction system is highlynonuniform in the sense that the lengths of branch segments eachof which is bounded by two adjacent leaves or branching verticesmay vary significantly In this figure branch segments ldquo1rdquo and ldquo4rdquoare not directly connected Instead they are connected through avery short branch segment Similarly branch segments ldquo4rdquo and ldquo6rdquoare also connected through a very short segment

one Actually in the simulation the base grid was generatedby three times uniform bisection from a much coarser gridwhich only has 30 edges This makes the multigrid iterationsfor linear systems more efficient even though its contributionto the speedup of the overall algorithm is marginal as most(gt90) of the computer time used by the simulation wasspent integrating nonlinear ODEsreactions The adaptivesimulation effectively uses three refinement levels To applythe Richardson extrapolation technique the second level gridwas created from uniform bisection of the base grid In factonly the grids at the third level were dynamically changingThey are regridded every other timestep

In the adaptive simulation the timestep sizes vary fromΔ119905 = 00098msecs to Δ119905 = 10msec The AMR process wasautomatically turned off at time 119905 = 4285msecs as the algo-rithm determines there is no need to make spatially localrefinement After that the ATI process is activated Thesimulation on the time interval [0 400]msecs totally used991 secs in computer time

The two-dimensional branch is also partitioned into afine quasi-uniform grid which has 726 edges withminimumelement size equal to 00104 cm and maximum element sizeequal to 00127 cm The simulation with this fine quasi-uniform grid and timestep Δ119905 = 000635msecs on the sametime interval as the adaptive one used about 14060 secs incomputer time We call this one ldquouniformrdquo simulation

Another simulation simply working with the coarse gridwhich is used as the base grid for the AMR one is alsoperformed We call this as ldquono-refinementrdquo simulation Thetimestep is fixed to be Δ119905 = 0039msecs This one used767 secs in computer time

In each of the simulations above the action potentialis initiated at the right end of branch segment ldquo1rdquo andpropagates across the whole computational domain Wecollected action potentials at those seven marked points (seeFigure 4) during the simulations See Figures 5 and 6 fortraces of action potentials at the marked points ldquo4rdquo and ldquo7rdquorespectively The action potentials from the adaptive anduniform mesh refinement simulations match very well andtheir difference at the peak of the upstroke phase is uniformlybounded by 01mV The action potential from the ldquono-refinementrdquo simulation has least accurate results Potential

Figure 5: Traces of action potentials during the simulation period [0, 400] ms at marked point "4" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation, the dotted curves from the uniform simulation, and the dashed curves from the "no-refinement" simulation. In these simulations a voltage stimulation is applied from the right side of the radius-variable branch structure. The right plot is a close-up of the left plot. (Both panels plot potential in mV against time in ms.)

Potential oscillation is even observed at point "7" (see Figure 6) in the "no-refinement" simulation.

Example 3. Simulations on a two-dimensional branch whose segments have piecewise constant radii. Voltage stimulation is applied from the top.

We once again run three simulations, with adaptive mesh refinement, uniform mesh refinement, and no refinement, respectively. The computational domain and parameters are all the same as those used previously.

Figure 6: Traces of action potentials during the simulation period [0, 400] ms at the point marked as "7" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation, the dotted curves from the uniform simulation, and the dashed curves from the "no-refinement" simulation. In these simulations a voltage stimulation is applied from the right side of the thickness/radius-variable branch structure. The right plot is a close-up of the left plot.

The only difference now is that the tissue is stimulated from the top instead of from the right.

The voltage stimulation is applied from the upper part of branch "7" through a discontinuous initial action potential, which is given as follows:

$$
V_m(x, y) =
\begin{cases}
25.4\ \text{mV}, & \text{if } y > 2.8,\\
-84.6\ \text{mV}, & \text{otherwise.}
\end{cases}
\tag{22}
$$

Figure 7: Traces of action potentials during the simulation period [0, 400] ms at marked points "7" and "6" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation, the dotted curves from the uniform simulation, and the dashed curves from the "no-refinement" simulation. In these simulations a voltage stimulation is applied from the top of the radius-variable branch structure. The left plot corresponds to the marked point "7" and the right plot corresponds to the marked point "6".

The action potential successfully propagates across the whole computational domain in each of the adaptive and uniform refinement simulations, while it fails to pass across branching points where the fiber radius changes from small to large in the "no-refinement" simulation (see Figure 7).

To verify the accuracy of the solution from the adaptive simulation, action potentials at the seven marked points (see Figure 4) are also collected and compared with those from the simulation with the fine, quasi-uniform grid. It is observed that the adaptive and uniform results again match very well.

Figure 8: The idealized Purkinje system of the heart. The points marked "3", "20", "76", and "109" are labeled in the figure.

Their difference at the peak of the upstroke phase is bounded by 0.15 mV.

The simulation with adaptive mesh refinement used 1004 secs of computer time. The timestep sizes used in the adaptive simulation vary from Δt = 0.0098 ms to Δt = 1.0 ms. The adaptive mesh refinement was automatically turned off at time t = 44.02 ms.

The one with the uniform grid and timestep Δt = 0.00635 ms used 14050 secs of computer time.

Example 4. Simulations on a three-dimensional branch whose segments have piecewise constant radii.

The three-dimensional Purkinje system used in the following simulations was originally from 3Dscience.com. The fiber radius of the system was modified to be piecewise constant for simplicity. The radius of each branch segment takes one of the same five values as in the two-dimensional case: the minimum and the maximum radii of the fiber are, respectively, 20 μm and 160 μm, and the intermediate values are 40, 80, and 120 μm. See Figure 8 for an illustration of the topological structure, where the ratios of the variable radii are not drawn to scale.

The idealized Purkinje system has a dimension of 2.04 × 2.16 × 2.98 cm³. It is bounded by the rectangular domain [−0.39, 1.65] × [4.05, 6.21] × [−1.31, 1.67] cm³. The maximum of the shortest paths from the top-most vertex to the other leaf vertices, computed by Dijkstra's algorithm, is about 5.29 cm.
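Such a path length can be obtained with a single Dijkstra pass over the edge-weighted fiber graph. The sketch below is an illustration under an assumed adjacency-list layout (it is not the authors' code): it returns the maximum of the shortest-path distances from a source vertex to all leaf (degree-one) vertices.

```cpp
// Dijkstra's algorithm on the fiber network; edge weights are segment lengths.
// Returns the longest of the shortest paths from `source` to all leaf vertices.
#include <algorithm>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

double maxShortestPathToLeaves(
    const std::vector<std::vector<std::pair<int, double>>>& adj, int source) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(adj.size(), INF);
    using Item = std::pair<double, int>;                     // (distance, vertex)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[source] = 0.0;
    pq.push({0.0, source});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;                           // stale queue entry
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    double longest = 0.0;
    for (std::size_t v = 0; v < adj.size(); ++v)
        if (adj[v].size() == 1 && dist[v] < INF)             // leaf vertex
            longest = std::max(longest, dist[v]);
    return longest;
}
```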

As in the two-dimensional examples, in each of the simulations with the idealized Purkinje system a voltage stimulation is applied from the top branch through a discontinuous initial action potential, given as follows:

$$
V_m(x, y, z) =
\begin{cases}
25.4\ \text{mV}, & \text{if } z > 1.372,\\
-84.6\ \text{mV}, & \text{otherwise.}
\end{cases}
\tag{23}
$$


The base grid used by the adaptive simulation is a coarse partition of the Purkinje system and has 1412 edges/elements, with minimum element size equal to 0.006 cm and maximum element size equal to 0.075 cm. This is also a highly nonuniform grid, as the ratio of the maximum to the minimum element size is up to 12.5. Once again, in this simulation the base grid was actually generated from uniform bisection of a coarser grid, which has 706 edges/elements, as this makes the multigrid iterations for linear systems more efficient. The adaptive simulation effectively uses three refinement levels. Similar to the two-dimensional case, the second-level grid was created from uniform bisection of the base grid, and only the grids at the third level were dynamically changing, regridded every other timestep.

The timestep sizes used in the adaptive simulation vary from 0.00935 ms to 1.0 ms. The adaptive mesh refinement was automatically turned off at time t = 42.74 ms. The simulation on the time interval [0, 400] ms used 6267 secs of computer time in total.

The three-dimensional Purkinje system is also partitioned into a fine, quasi-uniform grid, which has 6311 edges/elements with minimum element size equal to 0.0095 cm and maximum element size equal to 0.016 cm. The simulation with this fine quasi-uniform grid and timestep Δt = 0.0082 ms used about 96960 secs of computer time.

We also run a simulation with timestep Δt = 0.0374 ms on the coarse grid that is used as the base grid for the adaptive simulation. This "no-refinement" simulation used 4580 secs of computer time.

It is observed that in each of the adaptive, uniform refinement, and no-refinement simulations the action potential successfully propagates through the whole domain. The solution from the adaptive simulation is in very good agreement with that from the uniform simulation. We collected action potentials at sixteen points located in different regions of the domain and found that the corresponding potential traces almost overlap; their difference at the peak of the upstroke phases is uniformly bounded by 0.4 mV. See Figures 9, 10, 11, and 12 for some visualized results, which correspond to action potentials at the four marked points shown in Figure 8. Figure 13 shows four plots of the action potential at different times from the AMR-ATI simulation. The adaptive simulation uses roughly the same computer time as the "no-refinement" one but yields much more accurate results, and it gains about a 15.5 times speedup over the uniform one.
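The quoted factor follows directly from the reported run times:

$$
\frac{96960\ \text{s (uniform)}}{6267\ \text{s (adaptive)}} \approx 15.5.
$$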

8. Discussion

This paper demonstrates the promising application of an algorithm that is adaptive in both space and time for simulating wave propagation on the His-Purkinje system.

In this work, only numerical results with the membrane dynamics described by the Beeler-Reuter model are presented. The adaptive algorithm, however, is by no means restricted to this simple model. In fact, in our implementation the algorithm also works successfully with other physically realistic models.

Figure 9: Traces of action potentials during the simulation period [0, 400] ms at the point marked as "3" in Figure 8. The right plot is a close-up of the left one.

These include the Luo-Rudy dynamic model and the DiFrancesco-Noble model. Because the operator splitting technique isolates the membrane kinetics from the diffusion, in principle any standard cardiac model of reaction-diffusion type for modeling wave propagation can be easily incorporated into the adaptive algorithm.

In addition, the adaptive algorithm can also be straightforwardly applied to modeling signal propagation in the neural system of the body or even the brain.

Besides its advantages, the algorithm admittedly has its own limitations. For example, the efficiency of the adaptive algorithm relies heavily on the existence of a relatively coarse base grid that describes the computational domain.

Figure 10: Traces of action potentials during the simulation period [0, 400] ms at the point marked as "20" in Figure 8. The right plot is a close-up of the left one.

Figure 11: Traces of action potentials during the simulation period [0, 400] ms at the point marked as "76" in Figure 8. The right plot is a close-up of the left one.

The speedup of an adaptive mesh refinement algorithm is roughly bounded by the ratio of the degrees of freedom (DOFs) of the grid with the highest resolution to the mean DOFs of all grids created during the adaptive process; the existence of a coarse base grid makes this ratio, and hence the speedup over a simulation on the highest-resolution grid, as large as possible. The use of a relatively coarse base grid also makes the multigrid/multilevel solver for linear systems more efficient. Notably, in our simulations the base grids were all selected to have as small a number of edges/elements as possible.

In the AMR algorithm [10, 13] designed for uniform or quasi-uniform grids, the timestep size is typically selected to be proportional to the minimum of the element sizes in a grid (scaled by an estimated wave speed) so that the numerical waves or discontinuities never propagate more than one element within a single timestep. As this strategy guarantees that the fronts of traveling waves are always bounded and tracked by fine grids, the ODEs resulting from operator splitting applied to the reaction-diffusion systems are only integrated on the fine grids.

Figure 12: Traces of action potentials during the simulation period [0, 400] ms at the point marked as "109" in Figure 8. The right plot is a close-up of the left one.

Figure 13: Four plots of the action potential at different times from the AMR-ATI simulation (red denotes activated potential and blue denotes inactivated potential). Each panel is computed with the Beeler-Reuter model with Strang splitting.


The solution in the other regions, which are covered by coarser-level grids, is simply obtained by time interpolation.

However, a typical branch in the heart conduction system, or the whole system on its own, is highly nonuniform in the sense that the lengths of branch segments, each of which is bounded by two adjacent leaves or branching vertices, may vary significantly (see Figure 4). The coarse partition of the domain is therefore also highly nonuniform, since the algorithm requires the number of edges/elements to be as small as possible for efficiency. On this kind of computational domain, determining timesteps from the minimum element size is found to be less efficient. In the proposed adaptive algorithm, timestep sizes are instead chosen to be proportional to the maximum of the element sizes in a grid, as sketched below. This alternative strategy is experimentally shown to be efficient and accurate enough for simulating electric waves that travel moderately fast, even though its rigor needs further investigation, in particular when the fiber radius is much larger than those currently used.
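The following is a minimal sketch of that alternative rule under stated assumptions; the chooseTimestep name, the proportionality constant, and the treatment of the wave-speed estimate are illustrative, not the authors' exact choices.

```cpp
// Pick the grid timestep from the LARGEST element size and an estimated wave
// speed, instead of the usual minimum-element-size rule; cap it by the maximum
// allowed timestep. Illustrative sketch only.
#include <algorithm>
#include <vector>

double chooseTimestep(const std::vector<double>& elementSizes,  // cm
                      double waveSpeed,                         // cm/ms, estimated
                      double dtMax) {                           // ms, e.g. 1.0
    double hMax = *std::max_element(elementSizes.begin(), elementSizes.end());
    double dt = hMax / waveSpeed;   // time for the front to cross the largest element
    return std::min(dt, dtMax);
}
```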

In the simulations presented in this work, the error estimations in both the AMR and ATI processes were all based on the Richardson extrapolation process, which is in general more expensive but more rigorous and reliable than simpler indicators such as a gradient detector. If a gradient detector were used, the second coarsest grid in the AMR process would not need to be uniformly bisected and could also change dynamically, and the ATI period could use less computer time. The potential gain in algorithm efficiency and the possible loss in solution accuracy with the alternative use of a gradient detector can be studied and compared in future work.
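During the ATI period this Richardson estimate drives the accept/reject timestep control of (19) and (20). Below is a minimal sketch of that logic, using the tolerances and constants quoted in the paper (γ = 0.8, p = 2, rtol_ATI^(max) = 10⁻², rtol_ATI^(min) = 10⁻⁴, Δt^(max) = 1 ms); the caller is assumed to have computed the relative error estimate E_max from the comparison of one full step against two half steps.

```cpp
// Timestep control for adaptive time integration: reject and shrink the step
// when the Richardson error estimate exceeds the maximum relative tolerance;
// otherwise accept it, growing the step slightly when the estimate is very small.
#include <algorithm>
#include <cmath>

struct StepDecision { double dtNext; bool accepted; };

StepDecision controlTimestep(double Emax, double dtOld,
                             double rtolMax = 1e-2, double rtolMin = 1e-4,
                             double gamma = 0.8, int p = 2,
                             double dtMax = 1.0 /* ms */) {
    if (Emax > rtolMax) {
        // Equation (19): dt_new = gamma * (rtolMax / Emax)^(1/(p+1)) * dt_old.
        double dtNew = gamma * std::pow(rtolMax / Emax, 1.0 / (p + 1)) * dtOld;
        return {dtNew, false};                               // rejected: retry with dtNew
    }
    double dtNew = (Emax < rtolMin) ? 1.1 * dtOld : dtOld;   // equation (20)
    return {std::min(dtNew, dtMax), true};                   // accepted
}
```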

Finally, we make a remark about the parallelization of the space-time adaptive mesh refinement and adaptive time integration algorithm on the Purkinje system. Our numerical experiments in this work indicate that a large fraction (about 80%) of the computer time in the simulation is spent in the time integration of the reaction part, while the V-cycle multigrid iteration, as well as the mesh refinement part, uses only a small fraction (less than 20%). The dominance of the computer time in the split reaction part is partially due to the one-dimensional nature of the Purkinje network system (mesh refinement and grid generation on one-dimensional structures are much easier than in multiple dimensions). This dominance is clearly beneficial for the parallelization of the algorithm, since the split reaction part (ODEs) is spatially independent, highly parallelizable, and can even be massively parallelized, as sketched after this paragraph. Of course, the time consumption of the different parts of the operator splitting depends on the specific cardiac model. With a more complicated cardiac model, the reaction part will consume more computer time than the diffusion part, and the corresponding parallel algorithm is expected to have better efficiency. Admittedly, with the reaction part fully parallelized, further improvement in parallel efficiency will depend on the parallelization of the V-cycle multigrid iteration, which solves linear systems over the whole fiber network and needs time for data communication between different CPUs or multicores. The parallel performance of the AMR-ATI algorithm deserves further investigation.
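The sketch below illustrates why the split reaction step parallelizes so well: each grid node carries its own membrane state and is advanced independently. The CellState fields and the placeholder integrator are illustrative assumptions; they are neither the Beeler-Reuter equations nor the authors' code.

```cpp
// The reaction (ODE) half-step advances every node independently, so a simple
// parallel-for suffices; the placeholder below stands in for the real SDIRK2
// integration of the ionic model at one node.
#include <vector>

struct CellState { double Vm; double gates[6]; double Ca; };

// Placeholder membrane update (NOT the actual Beeler-Reuter kinetics).
inline void integrateMembraneModel(CellState& s, double dt) {
    s.Vm += 0.0 * dt;   // a real implementation advances Vm, the gates, and Ca
}

void reactionStep(std::vector<CellState>& cells, double dt) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(cells.size()); ++i)
        integrateMembraneModel(cells[i], dt);   // no coupling between nodes here
    // The subsequent diffusion step couples the nodes and is handled by the
    // V-cycle multigrid solver, which requires inter-process communication.
}
```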

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Research of the first author was supported in part by the National Science Foundation of the USA under Grant DMS-0915023 and is supported by the National Natural Science Foundation of China under Grants DMS-11101278 and DMS-91130012. Research of the first author is also supported by the Young Thousand Talents Program of China.

References

[1] E. J. Vigmond and C. Clements, "Construction of a computer model to investigate sawtooth effects in the Purkinje system," IEEE Transactions on Biomedical Engineering, vol. 54, no. 3, pp. 389–399, 2007.

[2] E. J. Vigmond, R. W. dos Santos, A. J. Prassl, M. Deo, and G. Plank, "Solvers for the cardiac bidomain equations," Progress in Biophysics and Molecular Biology, vol. 96, no. 1–3, pp. 3–18, 2008.

[3] S. Linge, J. Sundnes, M. Hanslien, G. T. Lines, and A. Tveito, "Numerical solution of the bidomain equations," Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 367, no. 1895, pp. 1931–1950, 2009.

[4] P. Pathmanathan, M. O. Bernabeu, R. Bordas et al., "A numerical guide to the solution of the bidomain equations of cardiac electrophysiology," Progress in Biophysics and Molecular Biology, vol. 102, no. 2-3, pp. 136–155, 2010.

[5] C. S. Henriquez, "A brief history of tissue models for cardiac electrophysiology," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1457–1465, 2014.

[6] M. J. Berger and J. Oliger, "Adaptive mesh refinement for hyperbolic partial differential equations," Journal of Computational Physics, vol. 53, no. 3, pp. 484–512, 1984.

[7] M. J. Berger and P. Colella, "Local adaptive mesh refinement for shock hydrodynamics," Journal of Computational Physics, vol. 82, no. 1, pp. 64–84, 1989.

[8] E. M. Cherry, H. S. Greenside, and C. S. Henriquez, "A space-time adaptive method for simulating complex cardiac dynamics," Physical Review Letters, vol. 84, no. 6, article 1343, 2000.

[9] E. M. Cherry, H. S. Greenside, and C. S. Henriquez, "Efficient simulation of three-dimensional anisotropic cardiac tissue using an adaptive mesh refinement method," Chaos, vol. 13, no. 3, pp. 853–865, 2003.

[10] J. A. Trangenstein and C. Kim, "Operator splitting and adaptive mesh refinement for the Luo-Rudy I model," Journal of Computational Physics, vol. 196, no. 2, pp. 645–679, 2004.

[11] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," The Journal of Physiology, vol. 117, no. 4, pp. 500–544, 1952.

[12] R. S. Varga, Matrix Iterative Analysis, Springer Series in Computational Mathematics, Springer, Berlin, Germany, 2nd edition, 2000.

[13] W. Ying, Ph.D. dissertation, Department of Mathematics, Duke University, Durham, NC, USA, 2005.

[14] C. Brezinski and M. R. Zaglia, Extrapolation Methods: Theory and Practice, vol. 2 of Studies in Computational Mathematics, North-Holland Publishing, Amsterdam, The Netherlands, 1991.

[15] G. W. Beeler and H. Reuter, "Reconstruction of the action potential of ventricular myocardial fibres," The Journal of Physiology, vol. 268, no. 1, pp. 177–210, 1977.


Page 6: Research Article Adaptive Mesh Refinement and …downloads.hindawi.com/journals/bmri/2015/137482.pdfResearch Article Adaptive Mesh Refinement and Adaptive Time Integration for Electrical

6 BioMed Research International

(vi) Sixth as stated all grids except the coarsest one inthe AMR algorithm are changing dynamically It isnecessary for a coarse level to regrid its finer levelfrom time to time But it will be extremely costly totag elements and regrid levels every coarse timestepSo we would rather make regridding infrequently Ata coarse level the number of timesteps between timesof regridding its finer level is called regrid intervalIn the AMR algorithm the regrid interval may bechosen to be an integer divisor of the refinement ratio[13] which could be any even integer number in theimplementation This assumption is called infrequentregridding

(vii) Finally there is one more assumption on the adaptivealgorithm called finite termination This means thatthe user shall specify a maximum number of refine-ment levels Once the refinement reaches the maxi-mum level no further mesh refinement is performed

As determined by the nature of the Purkinje system thegrids resulting from space discretization of the computationaldomain are essentially unstructured This restricts directapplication of Berger-Oligerrsquos originalAMRalgorithmwhichrequires the underlying grid to be Cartesian or logicallyrectangular The AMR algorithm adapted here for the Purk-inje system works with unstructured grids It represents anunstructured grid by lists of edges and nodes A grid nodeis allowed to have multiple connected edges and the degreeof a node could be greater than two The unstructured gridrepresentation can naturally describe the Purkinje systemwhich primarily is a tree-like structure but has some loopsinside

Finally it is worthmentioning that in theAMRalgorithmthere is a critical parameter called AMR tolerance anddenoted by tolAMR which is used by the Richardson extrap-olation process for tagging coarse grid elements The AMRtolerance closely influences efficiency and accuracy of thealgorithm The larger the tolerance is the more efficient thealgorithm is but the less accurate the solution is The smallerthe tolerance is the more accurate the solution is but theless efficient the algorithm is Due to the limitation of spacethe detailed explanation of the Richardson extrapolationthe AMR tolerance and other components of the AMRalgorithm such as tagging and buffering of coarse gridelements implementation of the V-cycle multigrid iterationon the adaptively refined grids and the data up-downscalingbetween coarse and fine grids are all omitted We referinterested readers to Yingrsquos dissertation [13]

6 Adaptive Time Integration

The AMR algorithm can automatically recognize when andwhere the wave fronts start to leave or enter the computa-tional domain due to external currentvoltage stimulation orself-excitation Once the algorithm finds that the wave frontshave left the computational domain and determines that thereis no need to make any further mesh refinement it will turnit off and work with adaptive time integration (ATI) onlyTheimplementation of timestep size control for the adaptive time

integration is standard In each timestep the ODEsreactionsare integrated with a full timestep once and independentlyintegrated with a half timestep twice Then the two solutionsare compared and an estimation of relative numerical error isobtained by the standard Richardson extrapolation technique[13] If the estimated (maximum) relative error denoted by119864max is greater than amaximum relative tolerance rtol(max)

ATI the timestep size was rejected and a new timestep will beestimated by

Δ119905(new)

= 120574 sdot (rtol(max)

ATI119864max

)

1(119901+1)

sdot Δ119905(old)

(19)

and the Richardson extrapolation process is repeated Other-wise the solution with two half timesteps is simply accepted(actually an extrapolated solution could be used for betteraccuracy) Here the coefficient 120574 = 08 is called a safetyfactor which ensures that the new timestep size will bestrictly smaller than the old one to avoid repeated rejectionof timesteps The constant 119901 in the exponent of (19) is theaccuracy order of the overall scheme

If the suggested timestep size is less than a thresholdtimestep which usually indicates that there is an abruptquickchange of solution states a local change of membraneproperties or arrival of an external stimulus the adaptivemesh refinement process is then automatically turned backon again In the implementation the threshold timestep is notexplicitly specified by the user It is set as the timestep that isused on the base grid

In addition during the ATI period if the estimatedrelative error 119864max is less than aminimum relative tolerancertol(min)

ATI a slightly larger timestep

Δ119905(new)

= 11Δ119905(old)

(20)

is suggested for integration in the next step The adaptivetime integration does not like timestep size to be significantlyincreased within two consecutive steps for the implicit solverto have a good initial guess For the same reason a maximumtimestep size Δ119905(max) is imposed during the adaptive timeintegration This usually guarantees that the Newton solverused converges within a few iterations

7 Results

The AMR algorithm proposed in the work was implementedin custom codes written in C++ The simulations presentedin this section were all performed in double precision on adual Xeon 36GHz computer No data was output when theprograms were run for timing studies

In all of the numerical studies the electrical resistivity 119877and the membrane capacitance 119862

119898are fixed to be constant

05 kΩsdotcm and 1120583Fcm2 respectivelyThemembrane dynam-ics are described by the Beeler-Reuter model [15] No-flux(homogeneous Neumann-type) boundary conditions wereapplied at the leaf-nodes of the fiber network

BioMed Research International 7

During the time integration for the time-dependentPDEs a second-order operator splitting technique the Strangsplitting is applied in each timestep The ODEs resultingfrom operator splitting are integrated with a second-ordersingly diagonally implicit Runge-Kutta (SDIRK2) schemeThe linear diffusion equation is temporally integrated withthe Crank-Nicolson scheme and spatially discretizedwith thecontinuous piecewise linear finite element methodThe finiteelement system on locally refined grids is solved with a V-cycle multigridmultilevel iterative method in each timestepat each refinement level (In the V-cycle composite griditeration the multigrid prolongation is done by the piecewiselinear interpolationThemultigrid restriction is implementedin a way so that the restriction matrix is the transpose of theprolongation matrix At the coarsest level as the coefficientmatrix of the discrete system is a symmetric and positivedefiniteM-matrix the linear equations are solvedwith a sym-metric successive overrelaxation preconditioned conjugategradient method) The resulting scheme has second-orderglobal accuracy if the tolerances in each component of theadaptive algorithm are consistently selected [13]

In the adaptive simulations the mesh refinement ratio isfixed to be two and the critical AMR tolerance tolAMR is set as002Themaximum andminimum relative tolerances whichare used in the ATI period are chosen to be rtol(max)

ATI = 10minus2

and rtol(min)ATI = 10

minus4 respectively We restrict the timestepsizes used in the adaptive time integration so that they arenot greater than one millisecond that is Δ119905(max)

= 1msecThe value of 119901 in the exponent of (19) is equal to two as thescheme has second-order accuracy

Example 2 Simulations on a two-dimensional branch withthicknessradius-variable and nonuniform branch segmentsVoltage stimulation is applied from right

The two-dimensional branch shown in Figure 4 has adimension of 4 cmtimes 2 cm which is bounded by the rectangu-lar domain [0 4] times [1 3] cm2 This two-dimensional branchhas variable (piecewise constant) radius Branch segmentsldquo6rdquo and ldquo7rdquo have the smallest radius equal to 20 120583m Branchsegment ldquo1rdquo has the largest radius equal to 160120583m andeight times that of branch segment ldquo6rdquo The radius of branchsegment ldquo2rdquo is six times that of ldquo6rdquo Branch segments ldquo3rdquo andldquo4rdquo have the same radius four times that of ldquo6rdquoThe radius ofbranch segment ldquo5rdquo is twice that of branch segment ldquo6rdquo

A voltage stimulation is applied from the right end ofbranch segment ldquo1rdquo through a discontinuous initial actionpotential which is given as follows

119881119898(119909 119910) =

254mvolts if 119909 gt 35

minus846mvolts otherwise(21)

The base grid used by the simulation with adaptive meshrefinement is a coarse partition of the branch structure (Fig-ure 4) and has 240 edges with minimum element size equalto 00078 cm and maximum element size equal to 0078 cmNote that this is a highly nonuniform grid as the ratio of themaximum to theminimum element size is much greater than

1

2

3

45

6

7

Figure 4 A typical branch in the heart conduction system is highlynonuniform in the sense that the lengths of branch segments eachof which is bounded by two adjacent leaves or branching verticesmay vary significantly In this figure branch segments ldquo1rdquo and ldquo4rdquoare not directly connected Instead they are connected through avery short branch segment Similarly branch segments ldquo4rdquo and ldquo6rdquoare also connected through a very short segment

one Actually in the simulation the base grid was generatedby three times uniform bisection from a much coarser gridwhich only has 30 edges This makes the multigrid iterationsfor linear systems more efficient even though its contributionto the speedup of the overall algorithm is marginal as most(gt90) of the computer time used by the simulation wasspent integrating nonlinear ODEsreactions The adaptivesimulation effectively uses three refinement levels To applythe Richardson extrapolation technique the second level gridwas created from uniform bisection of the base grid In factonly the grids at the third level were dynamically changingThey are regridded every other timestep

In the adaptive simulation the timestep sizes vary fromΔ119905 = 00098msecs to Δ119905 = 10msec The AMR process wasautomatically turned off at time 119905 = 4285msecs as the algo-rithm determines there is no need to make spatially localrefinement After that the ATI process is activated Thesimulation on the time interval [0 400]msecs totally used991 secs in computer time

The two-dimensional branch is also partitioned into afine quasi-uniform grid which has 726 edges withminimumelement size equal to 00104 cm and maximum element sizeequal to 00127 cm The simulation with this fine quasi-uniform grid and timestep Δ119905 = 000635msecs on the sametime interval as the adaptive one used about 14060 secs incomputer time We call this one ldquouniformrdquo simulation

Another simulation simply working with the coarse gridwhich is used as the base grid for the AMR one is alsoperformed We call this as ldquono-refinementrdquo simulation Thetimestep is fixed to be Δ119905 = 0039msecs This one used767 secs in computer time

In each of the simulations above the action potentialis initiated at the right end of branch segment ldquo1rdquo andpropagates across the whole computational domain Wecollected action potentials at those seven marked points (seeFigure 4) during the simulations See Figures 5 and 6 fortraces of action potentials at the marked points ldquo4rdquo and ldquo7rdquorespectively The action potentials from the adaptive anduniform mesh refinement simulations match very well andtheir difference at the peak of the upstroke phase is uniformlybounded by 01mV The action potential from the ldquono-refinementrdquo simulation has least accurate results Potential

8 BioMed Research International

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

Pote

ntia

l (m

V)

Time (ms)

4

6

8

10

12

14

16

10 15 20 25 30 35 40 45 50

AdaptiveNo-refinement

Uniform

(b)

Figure 5 Traces of action potentials during the simulation period[0 400]msecs at marked point ldquo4rdquo in the two-dimensional branchshown in Figure 4 The solid curves were from the adaptive simu-lation The dotted curves were from the uniform simulation Thedashed curves were from the ldquono-refinementrdquo simulation In thesesimulations a voltage stimulation is applied from the right side ofthe radius-variable branch structure The right plot is a close-up ofthe left plot

oscillation is even observed at point ldquo7rdquo (see Figure 6) in theldquono-refinementrdquo simulation

Example 3 Simulations on a two-dimensional branch whosesegments have piecewise constant radii Voltage stimulationis applied from top

We once again run three simulations respectively withadaptive uniform mesh refinement and no-refinement Thecomputational domain and parameters are all the same as

Time (ms)

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

minus80

minus60

minus40

minus20

AdaptiveNo-refinement

Uniform

(a)

0

2

4

6

8

10

12

14

16

20 30 40 50 60 70 80

AdaptiveNo-refinement

Uniform

Pote

ntia

l (m

V)

Time (ms)

(b)

Figure 6 Traces of action potentials during the simulation period[0 400]msecs at the point marked as ldquo7rdquo in the two-dimensionalbranch shown in Figure 4 The solid curves were from the adaptivesimulation The dotted curves were from the uniform simulationThe dashed curves were from the ldquono-refinementrdquo simulation Inthese simulations a voltage stimulation is applied from the right sideof the thicknessradius-variable branch structure The right plot is aclose-up of the left plot

those used previouslyThe only difference now is to stimulatethe tissue from top instead of from right

The voltage stimulation is applied from the upper part ofbranch ldquo7rdquo through a discontinuous initial action potentialwhich is given as follows

119881119898(119909 119910) =

254mvolts if 119910 gt 28

minus846mvolts otherwise(22)

BioMed Research International 9

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(b)

Figure 7 Traces of action potentials during the simulation period[0 400]msecs at marked points ldquo7rdquo and ldquo6rdquo in the two-dimensionalbranch shown in Figure 4 The solid curves were from the adaptivesimulation The dotted curves were from the uniform simulationThe dashed curves were from the ldquono-refinementrdquo simulation Inthese simulations a voltage stimulation is applied from the top ofthe radius-variable branch structureThe left plot corresponds to themarked point ldquo7rdquo and the right plot corresponds to themarked pointldquo6rdquo

The action potential successfully propagates across thewhole computational domain in each of the adaptive anduniform refinement simulations while it fails to pass acrossbranching points where the fiber radius changes from smallto large in the ldquono-refinementrdquo simulation (see Figure 7)

To verify accuracy of the solution from the adaptivesimulation action potentials at the seven marked points (seeFigure 4) are also collected and compared with those fromthe simulation with the fine and quasi-uniform grid It isobserved that the adaptive and uniform results match very

3

20

76109

Figure 8 The idealized Purkinje system of the heart

well too and their difference at the peak of the upstroke phaseis bounded by 015mvolts

The simulation with adaptive mesh refinement used1004msecs in computer time The timestep sizes used in theadaptive simulation vary from Δ119905 = 00098msecs to Δ119905 =10msec The adaptive mesh refinement was automaticallyturned off at time 119905 = 4402msecs

The one with the uniform grid and timestep Δ119905 =

000635msecs used 14050msecs in computer time

Example 4 Simulations on a three-dimensional branchwhose segments have piecewise constant radii

The three-dimensional Purkinje system used in the fol-lowing simulations was originally from 3Dsciencecom Thefiber radius of the system was modified to be piecewiseconstant for simplicity The radius of each branch segmenttakes one of the five values as the two-dimensional caseThe minimum and the maximum radius of the fiber arerespectively 20 120583m and 160 120583m Intermediate values are40 80 and 120 cm See Figure 8 for an illustration of thetopological structure where the ratios of variable radii are notcourteously shown

The idealized Purkinje system has a dimension 204 times

216 times 298 cm3 It is bounded by the rectangular domain[minus039 165]times[405 621]times[minus131 167] cm3Themaximumof the shortest paths from the top-most vertex to other leaf-vertices computed by Dijkstrarsquos algorithm is about 529 cm

As in the two-dimensional examples in each of thesimulations with the idealized Purkinje system a voltagestimulation is applied from the top branch through a discon-tinuous initial action potential given as follows

119881119898(119909 119910 119911) =

254mvolts if 119911 gt 1372minus846mvolts otherwise

(23)

10 BioMed Research International

The base grid used by the adaptive simulation is a coarsepartition of the Purkinje system and has 1412 edgeselementswithminimum element size equal to 0006 cm andmaximumelement size equal to 0075 cm This is also a highly nonuni-form grid as the ratio of the maximum to the minimumelement size is up to 125 Once again in this simulation thebase grid was actually generated from uniform bisection of acoarser grid which has 706 edgeselements as this will makethe multigrid iterations for linear systems more efficient Theadaptive simulation effectively uses three refinement levelsSimilar to the two-dimensional case the second level gridwascreated from uniform bisection of the base grid and only thegrids at the third level were dynamically changing regriddedevery other timestep

The timestep sizes used in the adaptive simulation varyfrom 000935msecs to 10msec The adaptive mesh refine-ment was automatically turned off at time 119905 = 4274msecsThe simulation on the time interval [0 400]msecs totallyused 6267 secs in computer time

The three-dimensional Purkinje system is also par-titioned into a fine and quasi-uniform grid which has6311 edgeselements with minimum element size equal to00095 cm and maximum element size equal to 0016 cmThe simulation with this fine quasi-uniform grid and time-step Δ119905 = 00082msecs used about 96960 secs in computertime

We also run a simulation with timestep Δ119905 =

00374msecs on the coarse grid that is used as the basegrid for the adaptive simulation This ldquono-refinementrdquosimulation used 4580 secs in computer time

It is observed that in each of the adaptive uniform refine-ment and no-refinement simulations the action potentialsuccessfully propagates and goes through the whole domainThe solution from the adaptive simulation is in very goodagreement with that from the uniform simulation We col-lected action potentials at sixteen points located in differentregions of the domain and found that the correspondingpotential traces almost overlap Their difference at the peakof the upstroke phases is uniformly bounded by 04mvoltsSee Figures 9 10 11 and 12 for some visualized results whichcorrespond to action potentials at the four marked pointsshown in Figure 8 Figure 13 shows four plots of the actionpotential at different times by the AMR-ATI simulation Theadaptive simulation roughly uses the same computer timeas the ldquono-refinementrdquo one but yields much more accurateresults and gains about 155 times speedup over the uniformone

8 Discussion

This paper demonstrates the promising application of aboth space and time adaptive algorithm for simulating wavepropagation on the His-Purkinje system

In this work only the numerical results with membranedynamics described by Beeler-Reuter model are presentedThe adaptive algorithm however is by no means restrictedto the simple model In fact in our implementation thealgorithm successfully works with other physically realistic

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

minus5

0

5

10

15

10 20 30 40 50 60

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

(b)

Figure 9 Traces of action potentials during the simulation period[0 400]msecs at the point marked as ldquo3rdquo in Figure 8 The right plotis a close-up of the left one

models such as Luo-Rudy dynamic model and DiFrancesco-Noble model As the advantage of the algorithm due toapplication of the operator splitting technique principallyany standard cardiac model of reaction-diffusion type formodeling wave propagation can be easily incorporated intothe adaptive algorithm

In addition the adaptive algorithm can also be straight-forwardly applied to modeling signal propagation on theneural system of the body or even the brain

Other than its advantages admittedly the algorithm hasits own limitations For example efficiency of the adaptivealgorithm heavily relies on the existence of a relatively coarsebase grid which describes the computational domain As thespeedup of an adaptivemesh refinement algorithm is roughly

BioMed Research International 11

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

5

10

15

20

25

20 30 40 50 60 70 80

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

(b)

Figure 10 Traces of action potentials during the simulation period [0 400]msecs at the point marked as ldquo20rdquo in Figure 8 The right plot is aclose-up of the left one

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

0

2

4

6

8

10

12

14

16

18

20 30 40 50 60 70 80

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

(b)

Figure 11 Traces of action potentials during the simulation period [0 400]msecs at the point marked as ldquo76rdquo in Figure 8 The right plot is aclose-up of the left one

bounded by the ratio of the maximum degree of freedoms(DOFs) of the grid with the highest resolution to the meanDOFs of all grids that have ever been created during theadaptive process the existence of a coarse base grid makesthe ratio as large as possible and hence its speedup over asimulation on the grid with highest resolution The use of arelatively coarse base grid alsomakes themultigridmultilevelsolver for linear systems more efficient It is noticeable that inour simulations the base grids were all selected to have thenumber of edgeselements as small as possible

In the AMR algorithm [10 13] designed for uniform orquasi-uniform grids the timestep size is typically selected tobe proportional to the minimum of element sizes (multipliedby an estimated wave speed) in a grid in order that thenumerical waves or discontinuities will never propagatemorethan one element within a single timestep As this strategyguarantees that the fronts of traveling waves are alwaysbounded and tracked by fine grids the ODEs resulting fromoperator splitting applied to the reaction-diffusion systemsare only integrated on the fine grids while the solution in

12 BioMed Research International

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

0

5

10

15

10 20 30 40 50 60

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

(b)

Figure 12 Traces of action potentials during the simulation period [0 400]msecs at the point marked as ldquo109rdquo in Figure 8 The right plot isa close-up of the left one

Time = 000 Time = 851

Time = 265Time = 175

Beeler-Reuter model with Strang splitting Beeler-Reuter model with Strang splitting

Beeler-Reuter model with Strang splittingBeeler-Reuter model with Strang splitting

Figure 13 Four plots of the action potential at different times by the AMR-ATI simulation (red denotes activated potential and blue denotesinactivated potential)

BioMed Research International 13

other regions covered by coarser level grids is simply obtainedby time interpolation

However a typical branch in the heart conduction systemor the whole system on its own is highly nonuniform in thesense that the lengths of branch segments each of which isbounded by two adjacent leaves or branching vertices mayvary significantly (see Figure 4) which determines that thecoarse partition of the domain is also highly nonuniformsince the algorithm requires the number of edgeselementsas small as possible for the efficiency considerations Onthis kind of computational domain the determination oftimesteps based onminimum element sizes is found to be lessefficient Here in the proposed adaptive algorithm timestepsizes are instead chosen to be proportional to the maximumof the element sizes in a grid This alternative strategy isexperimentally proved to be efficient and accurate enoughin simulating electric waves that travel moderately fast eventhough its rigidity needs further investigation in particularwhen the fiber radius is much larger than those currentlyused

In the simulations presented in this work error estima-tions in both the AMR and ATI processes were all basedon the Richardson extrapolation process which is in generalmore expensive but more rigorous and reliable than othersimpler ones such as the gradient detector If the gradientdetector is used the second coarsest grid in the AMR processdoes not need to be uniformly bisected and can also bechanging dynamically and the ATI period can use lesscomputer timeThe potential gain in algorithm efficiency andthe possible loss in solution accuracy with the alternative useof a gradient detector can be studied and compared in a futurework

Finally we make a remark about the parallelization ofthe space-time adaptive mesh refinement and adaptive timeintegration algorithm on the Purkinje system Our numericalexperiments in this work indicate that a large fraction (about80) of the computer time in the simulation is spent inthe time integration for the reaction part while the V-cyclemultigrid iteration as well as the mesh refinement part usesonly a small fraction (less than 20) The dominance of thecomputer time in the split reaction part is partially due tothe one-dimensional nature of the Purkinje network system(mesh refinement and grid generation on one-dimensionalstructures are much easier than the ones with multipledimensions) It is obvious that this dominance will be ben-eficial for the parallelization of the algorithm since the splitreaction part (ODEs) is spatially independent highly par-allelizable and can even be massively parallelized Of coursethe time consumption on different parts of the operatorsplitting depends on the specific cardiac models With amore complicated cardiac model the reaction part willconsume more computer time than the diffusion part andthe corresponding parallel algorithm is expected to havebetter efficiency Admittedly with the reaction part fullyparallelized further improvement in parallel efficiency willdepend on the parallelization of the V-cycle multigrid itera-tion which solves linear systems over thewhole fiber networkand needs time in data communication between different

CPUs or multicores The parallel performance of the AMR-ATI algorithm deserves further investigation

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

Research of the first author was supported in part by theNational Science Foundation of USA under Grant DMS-0915023 and is supported by the National Natural ScienceFoundation of China under Grants DMS-11101278 and DMS-91130012 Research of the first author is also supported by theYoungThousand Talents Program of China

References

[1] E J Vigmond and C Clements ldquoConstruction of a computermodel to investigate sawtooth effects in the Purkinje systemrdquoIEEE Transactions on Biomedical Engineering vol 54 no 3 pp389ndash399 2007

[2] E J Vigmond R W dos Santos A J Prassl M Deo and GPlank ldquoSolvers for the cardiac bidomain equationsrdquo Progress inBiophysics andMolecular Biology vol 96 no 1ndash3 pp 3ndash18 2008

[3] S Linge J Sundnes M Hanslien G T Lines and A TveitoldquoNumerical solution of the bidomain equationsrdquo PhilosophicalTransactions of the Royal Society A Mathematical Physical andEngineering Sciences vol 367 no 1895 pp 1931ndash1950 2009

[4] P PathmanathanMO Bernabeu R Bordas et al ldquoA numericalguide to the solution of the bidomain equations of cardiacelectrophysiologyrdquoProgress in Biophysics andMolecular Biologyvol 102 no 2-3 pp 136ndash155 2010

[5] C S Henriquez ldquoA brief history of tissue models for cardiacelectrophysiologyrdquo IEEE Transactions on Biomedical Engineer-ing vol 61 no 5 pp 1457ndash1465 2014

[6] M J Berger and J Oliger ldquoAdaptive mesh refinement for hyper-bolic partial differential equationsrdquo Journal of ComputationalPhysics vol 53 no 3 pp 484ndash512 1984

[7] M J Berger and P Colella ldquoLocal adaptive mesh refinement forshock hydrodynamicsrdquo Journal of Computational Physics vol82 no 1 pp 64ndash84 1989

[8] E M Cherry H S Greenside and C S Henriquez ldquoA space-time adaptive method for simulating complex cardiac dynam-icsrdquo Physical Review Letters vol 84 no 6 article 1343 2000

[9] E M Cherry H S Greenside and C S Henriquez ldquoEffi-cient simulation of three-dimensional anisotropic cardiac tissueusing an adaptive mesh refinement methodrdquo Chaos vol 13 no3 pp 853ndash865 2003

[10] J A Trangenstein and C Kim ldquoOperator splitting and adaptivemesh refinement for the Luo-Rudy I modelrdquo Journal of Compu-tational Physics vol 196 no 2 pp 645ndash679 2004

[11] A L Hodgkin and A F Huxley ldquoA quantitative descriptionof membrane current and its application to conduction andexcitation in nerverdquoThe Journal of Physiology vol 117 no 4 pp500ndash544 1952

[12] R S VargaMatrix Iterative Analysis Springer Series in Compu-tational Mathematics Springer Berlin Germany 2nd edition2000

14 BioMed Research International

[13] W Ying Department of Mathematics Duke University Dur-ham NC USA 2005

[14] C Brezinski and M R Zaglia Extrapolation Methods Theoryand Practice vol 2 of Studies in Computational MathematicsNorth-Holland Publishing AmsterdamTheNetherlands 1991

[15] G W Beeler and H Reuter ldquoReconstruction of the actionpotential of ventricular myocardial fibresrdquoThe Journal of Phys-iology vol 268 no 1 pp 177ndash210 1977

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

Page 7: Research Article Adaptive Mesh Refinement and …downloads.hindawi.com/journals/bmri/2015/137482.pdfResearch Article Adaptive Mesh Refinement and Adaptive Time Integration for Electrical

BioMed Research International 7

During the time integration for the time-dependentPDEs a second-order operator splitting technique the Strangsplitting is applied in each timestep The ODEs resultingfrom operator splitting are integrated with a second-ordersingly diagonally implicit Runge-Kutta (SDIRK2) schemeThe linear diffusion equation is temporally integrated withthe Crank-Nicolson scheme and spatially discretizedwith thecontinuous piecewise linear finite element methodThe finiteelement system on locally refined grids is solved with a V-cycle multigridmultilevel iterative method in each timestepat each refinement level (In the V-cycle composite griditeration the multigrid prolongation is done by the piecewiselinear interpolationThemultigrid restriction is implementedin a way so that the restriction matrix is the transpose of theprolongation matrix At the coarsest level as the coefficientmatrix of the discrete system is a symmetric and positivedefiniteM-matrix the linear equations are solvedwith a sym-metric successive overrelaxation preconditioned conjugategradient method) The resulting scheme has second-orderglobal accuracy if the tolerances in each component of theadaptive algorithm are consistently selected [13]

In the adaptive simulations the mesh refinement ratio isfixed to be two and the critical AMR tolerance tolAMR is set as002Themaximum andminimum relative tolerances whichare used in the ATI period are chosen to be rtol(max)

ATI = 10minus2

and rtol(min)ATI = 10

minus4 respectively We restrict the timestepsizes used in the adaptive time integration so that they arenot greater than one millisecond that is Δ119905(max)

= 1msecThe value of 119901 in the exponent of (19) is equal to two as thescheme has second-order accuracy

Example 2 Simulations on a two-dimensional branch withthicknessradius-variable and nonuniform branch segmentsVoltage stimulation is applied from right

The two-dimensional branch shown in Figure 4 has adimension of 4 cmtimes 2 cm which is bounded by the rectangu-lar domain [0 4] times [1 3] cm2 This two-dimensional branchhas variable (piecewise constant) radius Branch segmentsldquo6rdquo and ldquo7rdquo have the smallest radius equal to 20 120583m Branchsegment ldquo1rdquo has the largest radius equal to 160120583m andeight times that of branch segment ldquo6rdquo The radius of branchsegment ldquo2rdquo is six times that of ldquo6rdquo Branch segments ldquo3rdquo andldquo4rdquo have the same radius four times that of ldquo6rdquoThe radius ofbranch segment ldquo5rdquo is twice that of branch segment ldquo6rdquo

A voltage stimulation is applied from the right end ofbranch segment ldquo1rdquo through a discontinuous initial actionpotential which is given as follows

119881119898(119909 119910) =

254mvolts if 119909 gt 35

minus846mvolts otherwise(21)

The base grid used by the simulation with adaptive mesh refinement is a coarse partition of the branch structure (Figure 4) and has 240 edges, with minimum element size equal to 0.0078 cm and maximum element size equal to 0.078 cm. Note that this is a highly nonuniform grid, as the ratio of the maximum to the minimum element size is much greater than one. In fact, in the simulation the base grid was generated by three uniform bisections of a much coarser grid with only 30 edges; this makes the multigrid iterations for the linear systems more efficient, even though its contribution to the speedup of the overall algorithm is marginal, as most (>90%) of the computer time used by the simulation was spent integrating the nonlinear ODEs/reactions. The adaptive simulation effectively uses three refinement levels. To apply the Richardson extrapolation technique, the second level grid was created from uniform bisection of the base grid. Only the grids at the third level were dynamically changing; they are regridded every other timestep.

Figure 4: A typical branch in the heart conduction system is highly nonuniform in the sense that the lengths of branch segments, each of which is bounded by two adjacent leaves or branching vertices, may vary significantly. In this figure, branch segments "1" and "4" are not directly connected; instead, they are connected through a very short branch segment. Similarly, branch segments "4" and "6" are also connected through a very short segment.

In the adaptive simulation, the timestep sizes vary from Δt = 0.0098 msec to Δt = 1.0 msec. The AMR process was automatically turned off at time t = 42.85 msec, as the algorithm determines that there is no longer any need for spatially local refinement; after that, the ATI process is activated. The simulation on the time interval [0, 400] msec used 99.1 secs of computer time in total.

The two-dimensional branch is also partitioned into a fine quasi-uniform grid, which has 726 edges with minimum element size equal to 0.0104 cm and maximum element size equal to 0.0127 cm. The simulation with this fine quasi-uniform grid and timestep Δt = 0.00635 msec, on the same time interval as the adaptive one, used about 1406.0 secs of computer time. We call this the "uniform" simulation.

Another simulation, working simply with the coarse grid that serves as the base grid for the AMR one, is also performed; we call this the "no-refinement" simulation. Its timestep is fixed to be Δt = 0.039 msec. This one used 76.7 secs of computer time.

In each of the simulations above, the action potential is initiated at the right end of branch segment "1" and propagates across the whole computational domain. We collected action potentials at the seven marked points (see Figure 4) during the simulations; see Figures 5 and 6 for traces of the action potentials at the marked points "4" and "7", respectively. The action potentials from the adaptive and uniform mesh refinement simulations match very well, and their difference at the peak of the upstroke phase is uniformly bounded by 0.1 mV. The action potential from the "no-refinement" simulation is the least accurate; potential oscillation is even observed at point "7" (see Figure 6) in the "no-refinement" simulation.

Figure 5: Traces of action potentials during the simulation period [0, 400] msec at marked point "4" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation, the dotted curves from the uniform simulation, and the dashed curves from the "no-refinement" simulation. In these simulations, a voltage stimulation is applied from the right side of the radius-variable branch structure. The right plot is a close-up of the left plot.

Example 3. Simulations on a two-dimensional branch whose segments have piecewise constant radii. Voltage stimulation is applied from the top.

We once again run three simulations, respectively with adaptive mesh refinement, uniform mesh refinement, and no refinement. The computational domain and parameters are all the same as those used previously; the only difference is that the tissue is now stimulated from the top instead of from the right.

Figure 6: Traces of action potentials during the simulation period [0, 400] msec at the point marked as "7" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation, the dotted curves from the uniform simulation, and the dashed curves from the "no-refinement" simulation. In these simulations, a voltage stimulation is applied from the right side of the thickness/radius-variable branch structure. The right plot is a close-up of the left plot.

The voltage stimulation is applied from the upper part of branch "7" through a discontinuous initial action potential, which is given as follows:

\[
V_m(x, y) =
\begin{cases}
25.4\ \mathrm{mV}, & \text{if } y > 2.8, \\
-84.6\ \mathrm{mV}, & \text{otherwise}.
\end{cases}
\tag{22}
\]

Figure 7: Traces of action potentials during the simulation period [0, 400] msec at marked points "7" and "6" in the two-dimensional branch shown in Figure 4. The solid curves were from the adaptive simulation, the dotted curves from the uniform simulation, and the dashed curves from the "no-refinement" simulation. In these simulations, a voltage stimulation is applied from the top of the radius-variable branch structure. The left plot corresponds to the marked point "7" and the right plot corresponds to the marked point "6".

The action potential successfully propagates across the whole computational domain in each of the adaptive and uniform refinement simulations, while it fails to pass across branching points, where the fiber radius changes from small to large, in the "no-refinement" simulation (see Figure 7).

To verify the accuracy of the solution from the adaptive simulation, action potentials at the seven marked points (see Figure 4) are also collected and compared with those from the simulation with the fine and quasi-uniform grid. It is observed that the adaptive and uniform results again match very well, and their difference at the peak of the upstroke phase is bounded by 0.15 mV.

Figure 8: The idealized Purkinje system of the heart (the labels "3", "20", "76", and "109" mark the recording points used in Figures 9-12).

The simulation with adaptive mesh refinement used 100.4 secs of computer time. The timestep sizes used in the adaptive simulation vary from Δt = 0.0098 msec to Δt = 1.0 msec. The adaptive mesh refinement was automatically turned off at time t = 44.02 msec.

The one with the uniform grid and timestep Δt = 0.00635 msec used 1405.0 secs of computer time.

Example 4. Simulations on a three-dimensional branch whose segments have piecewise constant radii.

The three-dimensional Purkinje system used in the following simulations was originally from 3Dscience.com. The fiber radius of the system was modified to be piecewise constant for simplicity. The radius of each branch segment takes one of the same five values as in the two-dimensional case: the minimum and maximum radii of the fiber are, respectively, 20 μm and 160 μm, and the intermediate values are 40, 80, and 120 μm. See Figure 8 for an illustration of the topological structure, where the variable radii are not drawn to scale.

The idealized Purkinje system has a dimension of 2.04 × 2.16 × 2.98 cm³. It is bounded by the rectangular domain [−0.39, 1.65] × [4.05, 6.21] × [−1.31, 1.67] cm³. The maximum of the shortest paths from the top-most vertex to the other leaf vertices, computed by Dijkstra's algorithm, is about 5.29 cm.
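The quoted figure of about 5.29 cm, the longest of the shortest root-to-leaf paths, can be obtained with a single Dijkstra pass over the edge-weighted fiber network. The sketch below shows the standard computation; the adjacency structure and variable names are illustrative, not the paper's data structures.

```python
import heapq

def max_root_to_leaf_distance(adjacency, root, leaves):
    """Dijkstra's algorithm from the root, then the maximum distance over leaves.

    adjacency : dict mapping vertex -> list of (neighbor, edge_length_cm) pairs
    root      : top-most vertex of the Purkinje tree
    leaves    : iterable of leaf vertices
    """
    dist = {root: 0.0}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adjacency.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return max(dist.get(leaf, float("inf")) for leaf in leaves)
```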

As in the two-dimensional examples, in each of the simulations with the idealized Purkinje system a voltage stimulation is applied from the top branch through a discontinuous initial action potential, given as follows:

\[
V_m(x, y, z) =
\begin{cases}
25.4\ \mathrm{mV}, & \text{if } z > 1.372, \\
-84.6\ \mathrm{mV}, & \text{otherwise}.
\end{cases}
\tag{23}
\]


The base grid used by the adaptive simulation is a coarse partition of the Purkinje system and has 1412 edges/elements, with minimum element size equal to 0.006 cm and maximum element size equal to 0.075 cm. This is also a highly nonuniform grid, as the ratio of the maximum to the minimum element size is up to 12.5. Once again, the base grid in this simulation was actually generated from uniform bisection of a coarser grid with 706 edges/elements, as this makes the multigrid iterations for the linear systems more efficient. The adaptive simulation effectively uses three refinement levels. Similar to the two-dimensional case, the second level grid was created from uniform bisection of the base grid, and only the grids at the third level were dynamically changing, regridded every other timestep.

The timestep sizes used in the adaptive simulation vary from 0.00935 msec to 1.0 msec. The adaptive mesh refinement was automatically turned off at time t = 42.74 msec. The simulation on the time interval [0, 400] msec used 626.7 secs of computer time in total.

The three-dimensional Purkinje system is also partitioned into a fine and quasi-uniform grid, which has 6311 edges/elements with minimum element size equal to 0.0095 cm and maximum element size equal to 0.016 cm. The simulation with this fine quasi-uniform grid and timestep Δt = 0.0082 msec used about 9696.0 secs of computer time.

We also run a simulation with timestep Δt = 0.0374 msec on the coarse grid that is used as the base grid for the adaptive simulation. This "no-refinement" simulation used 458.0 secs of computer time.

It is observed that in each of the adaptive, uniform refinement, and no-refinement simulations the action potential successfully propagates through the whole domain. The solution from the adaptive simulation is in very good agreement with that from the uniform simulation. We collected action potentials at sixteen points located in different regions of the domain and found that the corresponding potential traces almost overlap; their difference at the peak of the upstroke phases is uniformly bounded by 0.4 mV. See Figures 9, 10, 11, and 12 for some visualized results, which correspond to the action potentials at the four marked points shown in Figure 8. Figure 13 shows four plots of the action potential at different times from the AMR-ATI simulation. The adaptive simulation uses roughly the same computer time as the "no-refinement" one but yields much more accurate results, and it gains about a 15.5 times speedup over the uniform one.

8. Discussion

This paper demonstrates the promising application of a both space and time adaptive algorithm for simulating wave propagation on the His-Purkinje system.

In this work, only numerical results with membrane dynamics described by the Beeler-Reuter model are presented. The adaptive algorithm, however, is by no means restricted to this simple model; in our implementation, the algorithm successfully works with other physically realistic models such as the Luo-Rudy dynamic model and the DiFrancesco-Noble model. Because the advantage of the algorithm comes from the application of the operator splitting technique, in principle any standard cardiac model of reaction-diffusion type for modeling wave propagation can be easily incorporated into the adaptive algorithm.

Figure 9: Traces of action potentials during the simulation period [0, 400] msec at the point marked as "3" in Figure 8. The right plot is a close-up of the left one.

In addition, the adaptive algorithm can also be straightforwardly applied to modeling signal propagation on the neural system of the body or even the brain.

Figure 10: Traces of action potentials during the simulation period [0, 400] msec at the point marked as "20" in Figure 8. The right plot is a close-up of the left one.

Figure 11: Traces of action potentials during the simulation period [0, 400] msec at the point marked as "76" in Figure 8. The right plot is a close-up of the left one.

Besides its advantages, the algorithm admittedly has its own limitations. For example, the efficiency of the adaptive algorithm relies heavily on the existence of a relatively coarse base grid describing the computational domain. The speedup of an adaptive mesh refinement algorithm is roughly bounded by the ratio of the maximum number of degrees of freedom (DOFs), attained on the grid with the highest resolution, to the mean DOFs of all grids that have ever been created during the adaptive process; the existence of a coarse base grid makes this ratio, and hence the speedup over a simulation on the grid with the highest resolution, as large as possible. The use of a relatively coarse base grid also makes the multigrid/multilevel solver for the linear systems more efficient. It is notable that in our simulations the base grids were all selected to have as small a number of edges/elements as possible.
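As a rough, after-the-fact check of this bound, the ratio can be estimated from the grid sizes recorded at each timestep of a run. The following sketch is only an illustrative estimate under that assumption; the list of per-timestep DOF counts is a hypothetical input, not something the paper reports.

```python
def estimated_amr_speedup(dofs_per_step):
    """Rough upper bound on AMR speedup: DOFs of the finest (largest) grid
    divided by the mean DOFs over all grids used during the adaptive run."""
    finest = max(dofs_per_step)
    mean = sum(dofs_per_step) / len(dofs_per_step)
    return finest / mean

# Illustrative use with made-up per-timestep composite-grid sizes:
# estimated_amr_speedup([260, 310, 295, 2800, 320])  # -> finest/mean ratio
```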

Figure 12: Traces of action potentials during the simulation period [0, 400] msec at the point marked as "109" in Figure 8. The right plot is a close-up of the left one.

Figure 13: Four plots of the action potential at different times from the AMR-ATI simulation of the Beeler-Reuter model with Strang splitting (red denotes activated potential and blue denotes inactivated potential).

In the AMR algorithms [10, 13] designed for uniform or quasi-uniform grids, the timestep size is typically selected in proportion to the minimum of the element sizes in a grid (scaled by an estimated wave speed), so that numerical waves or discontinuities never propagate more than one element within a single timestep. As this strategy guarantees that the fronts of traveling waves are always enclosed and tracked by fine grids, the ODEs resulting from operator splitting applied to the reaction-diffusion system are integrated only on the fine grids, while the solution in other regions, covered by coarser level grids, is simply obtained by time interpolation.

However, a typical branch in the heart conduction system, or the whole system itself, is highly nonuniform in the sense that the lengths of the branch segments, each of which is bounded by two adjacent leaves or branching vertices, may vary significantly (see Figure 4). This forces the coarse partition of the domain to be highly nonuniform as well, since for efficiency the algorithm requires the number of edges/elements to be as small as possible. On this kind of computational domain, choosing timesteps based on the minimum element size is found to be less efficient. In the proposed adaptive algorithm, the timestep sizes are instead chosen to be proportional to the maximum of the element sizes in a grid. This alternative strategy proves, experimentally, to be efficient and accurate enough for simulating electric waves that travel moderately fast, even though its validity needs further investigation, in particular when the fiber radius is much larger than those considered here.
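Expressed as code, the difference between the conventional choice and the strategy adopted here is only which element size enters the CFL-like bound. The wave-speed estimate and safety factor below are placeholder parameters introduced for illustration, not values from the paper.

```python
def timestep_from_grid(element_sizes_cm, wave_speed_cm_per_ms, safety=1.0,
                       use_max=True):
    """CFL-like timestep bound: an element size divided by an estimated wave speed.

    use_max=True  selects the strategy used here (maximum element size);
    use_max=False selects the conventional choice (minimum element size).
    Returns a timestep in msec.
    """
    h = max(element_sizes_cm) if use_max else min(element_sizes_cm)
    return safety * h / wave_speed_cm_per_ms
```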

In the simulations presented in this work, the error estimations in both the AMR and ATI processes were all based on the Richardson extrapolation process, which is in general more expensive but more rigorous and reliable than simpler alternatives such as a gradient detector. If a gradient detector is used, the second coarsest grid in the AMR process does not need to be uniformly bisected and can also change dynamically, and the ATI period can use less computer time. The potential gain in algorithm efficiency and the possible loss in solution accuracy with the alternative use of a gradient detector can be studied and compared in future work.
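A gradient detector of the kind mentioned above can be as simple as flagging every element whose potential jump per unit length exceeds a threshold. The sketch below illustrates the idea; the threshold value and the array layout are assumptions for illustration only.

```python
import numpy as np

def flag_by_gradient(v_nodes, edges, lengths, threshold_mV_per_cm=200.0):
    """Flag elements whose potential gradient magnitude exceeds a threshold.

    v_nodes : potentials at the grid nodes (mV)
    edges   : integer array of shape (n_elements, 2) with node index pairs
    lengths : element lengths (cm), shape (n_elements,)
    Returns a boolean array marking the elements to refine.
    """
    jumps = np.abs(v_nodes[edges[:, 0]] - v_nodes[edges[:, 1]])
    return (jumps / lengths) > threshold_mV_per_cm
```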

Finally, we make a remark about the parallelization of the space-time adaptive mesh refinement and adaptive time integration algorithm on the Purkinje system. Our numerical experiments in this work indicate that a large fraction (about 80%) of the computer time in the simulation is spent in the time integration of the reaction part, while the V-cycle multigrid iteration together with the mesh refinement part uses only a small fraction (less than 20%). The dominance of the split reaction part in the computer time is partially due to the one-dimensional nature of the Purkinje network system (mesh refinement and grid generation on one-dimensional structures are much easier than in multiple dimensions). This dominance will clearly be beneficial for parallelization of the algorithm, since the split reaction part (ODEs) is spatially independent, highly parallelizable, and can even be massively parallelized. Of course, the time consumption of the different parts of the operator splitting depends on the specific cardiac model: with a more complicated cardiac model, the reaction part will consume even more computer time than the diffusion part, and the corresponding parallel algorithm is expected to have better efficiency. Admittedly, with the reaction part fully parallelized, further improvement in parallel efficiency will depend on the parallelization of the V-cycle multigrid iteration, which solves linear systems over the whole fiber network and requires data communication between different CPUs or cores. The parallel performance of the AMR-ATI algorithm deserves further investigation.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Research of the first author was supported in part by the National Science Foundation of the USA under Grant DMS-0915023 and is supported by the National Natural Science Foundation of China under Grants DMS-11101278 and DMS-91130012. Research of the first author is also supported by the Young Thousand Talents Program of China.

References

[1] E. J. Vigmond and C. Clements, "Construction of a computer model to investigate sawtooth effects in the Purkinje system," IEEE Transactions on Biomedical Engineering, vol. 54, no. 3, pp. 389-399, 2007.

[2] E. J. Vigmond, R. W. dos Santos, A. J. Prassl, M. Deo, and G. Plank, "Solvers for the cardiac bidomain equations," Progress in Biophysics and Molecular Biology, vol. 96, no. 1-3, pp. 3-18, 2008.

[3] S. Linge, J. Sundnes, M. Hanslien, G. T. Lines, and A. Tveito, "Numerical solution of the bidomain equations," Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 367, no. 1895, pp. 1931-1950, 2009.

[4] P. Pathmanathan, M. O. Bernabeu, R. Bordas et al., "A numerical guide to the solution of the bidomain equations of cardiac electrophysiology," Progress in Biophysics and Molecular Biology, vol. 102, no. 2-3, pp. 136-155, 2010.

[5] C. S. Henriquez, "A brief history of tissue models for cardiac electrophysiology," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1457-1465, 2014.

[6] M. J. Berger and J. Oliger, "Adaptive mesh refinement for hyperbolic partial differential equations," Journal of Computational Physics, vol. 53, no. 3, pp. 484-512, 1984.

[7] M. J. Berger and P. Colella, "Local adaptive mesh refinement for shock hydrodynamics," Journal of Computational Physics, vol. 82, no. 1, pp. 64-84, 1989.

[8] E. M. Cherry, H. S. Greenside, and C. S. Henriquez, "A space-time adaptive method for simulating complex cardiac dynamics," Physical Review Letters, vol. 84, no. 6, article 1343, 2000.

[9] E. M. Cherry, H. S. Greenside, and C. S. Henriquez, "Efficient simulation of three-dimensional anisotropic cardiac tissue using an adaptive mesh refinement method," Chaos, vol. 13, no. 3, pp. 853-865, 2003.

[10] J. A. Trangenstein and C. Kim, "Operator splitting and adaptive mesh refinement for the Luo-Rudy I model," Journal of Computational Physics, vol. 196, no. 2, pp. 645-679, 2004.

[11] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," The Journal of Physiology, vol. 117, no. 4, pp. 500-544, 1952.

[12] R. S. Varga, Matrix Iterative Analysis, Springer Series in Computational Mathematics, Springer, Berlin, Germany, 2nd edition, 2000.

[13] W. Ying, Department of Mathematics, Duke University, Durham, NC, USA, 2005.

[14] C. Brezinski and M. R. Zaglia, Extrapolation Methods: Theory and Practice, vol. 2 of Studies in Computational Mathematics, North-Holland Publishing, Amsterdam, The Netherlands, 1991.

[15] G. W. Beeler and H. Reuter, "Reconstruction of the action potential of ventricular myocardial fibres," The Journal of Physiology, vol. 268, no. 1, pp. 177-210, 1977.

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

Page 8: Research Article Adaptive Mesh Refinement and …downloads.hindawi.com/journals/bmri/2015/137482.pdfResearch Article Adaptive Mesh Refinement and Adaptive Time Integration for Electrical

8 BioMed Research International

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

Pote

ntia

l (m

V)

Time (ms)

4

6

8

10

12

14

16

10 15 20 25 30 35 40 45 50

AdaptiveNo-refinement

Uniform

(b)

Figure 5 Traces of action potentials during the simulation period[0 400]msecs at marked point ldquo4rdquo in the two-dimensional branchshown in Figure 4 The solid curves were from the adaptive simu-lation The dotted curves were from the uniform simulation Thedashed curves were from the ldquono-refinementrdquo simulation In thesesimulations a voltage stimulation is applied from the right side ofthe radius-variable branch structure The right plot is a close-up ofthe left plot

oscillation is even observed at point ldquo7rdquo (see Figure 6) in theldquono-refinementrdquo simulation

Example 3 Simulations on a two-dimensional branch whosesegments have piecewise constant radii Voltage stimulationis applied from top

We once again run three simulations respectively withadaptive uniform mesh refinement and no-refinement Thecomputational domain and parameters are all the same as

Time (ms)

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

minus80

minus60

minus40

minus20

AdaptiveNo-refinement

Uniform

(a)

0

2

4

6

8

10

12

14

16

20 30 40 50 60 70 80

AdaptiveNo-refinement

Uniform

Pote

ntia

l (m

V)

Time (ms)

(b)

Figure 6 Traces of action potentials during the simulation period[0 400]msecs at the point marked as ldquo7rdquo in the two-dimensionalbranch shown in Figure 4 The solid curves were from the adaptivesimulation The dotted curves were from the uniform simulationThe dashed curves were from the ldquono-refinementrdquo simulation Inthese simulations a voltage stimulation is applied from the right sideof the thicknessradius-variable branch structure The right plot is aclose-up of the left plot

those used previouslyThe only difference now is to stimulatethe tissue from top instead of from right

The voltage stimulation is applied from the upper part ofbranch ldquo7rdquo through a discontinuous initial action potentialwhich is given as follows

119881119898(119909 119910) =

254mvolts if 119910 gt 28

minus846mvolts otherwise(22)

BioMed Research International 9

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(b)

Figure 7 Traces of action potentials during the simulation period[0 400]msecs at marked points ldquo7rdquo and ldquo6rdquo in the two-dimensionalbranch shown in Figure 4 The solid curves were from the adaptivesimulation The dotted curves were from the uniform simulationThe dashed curves were from the ldquono-refinementrdquo simulation Inthese simulations a voltage stimulation is applied from the top ofthe radius-variable branch structureThe left plot corresponds to themarked point ldquo7rdquo and the right plot corresponds to themarked pointldquo6rdquo

The action potential successfully propagates across thewhole computational domain in each of the adaptive anduniform refinement simulations while it fails to pass acrossbranching points where the fiber radius changes from smallto large in the ldquono-refinementrdquo simulation (see Figure 7)

To verify accuracy of the solution from the adaptivesimulation action potentials at the seven marked points (seeFigure 4) are also collected and compared with those fromthe simulation with the fine and quasi-uniform grid It isobserved that the adaptive and uniform results match very

3

20

76109

Figure 8 The idealized Purkinje system of the heart

well too and their difference at the peak of the upstroke phaseis bounded by 015mvolts

The simulation with adaptive mesh refinement used1004msecs in computer time The timestep sizes used in theadaptive simulation vary from Δ119905 = 00098msecs to Δ119905 =10msec The adaptive mesh refinement was automaticallyturned off at time 119905 = 4402msecs

The one with the uniform grid and timestep Δ119905 =

000635msecs used 14050msecs in computer time

Example 4 Simulations on a three-dimensional branchwhose segments have piecewise constant radii

The three-dimensional Purkinje system used in the fol-lowing simulations was originally from 3Dsciencecom Thefiber radius of the system was modified to be piecewiseconstant for simplicity The radius of each branch segmenttakes one of the five values as the two-dimensional caseThe minimum and the maximum radius of the fiber arerespectively 20 120583m and 160 120583m Intermediate values are40 80 and 120 cm See Figure 8 for an illustration of thetopological structure where the ratios of variable radii are notcourteously shown

The idealized Purkinje system has a dimension 204 times

216 times 298 cm3 It is bounded by the rectangular domain[minus039 165]times[405 621]times[minus131 167] cm3Themaximumof the shortest paths from the top-most vertex to other leaf-vertices computed by Dijkstrarsquos algorithm is about 529 cm

As in the two-dimensional examples in each of thesimulations with the idealized Purkinje system a voltagestimulation is applied from the top branch through a discon-tinuous initial action potential given as follows

119881119898(119909 119910 119911) =

254mvolts if 119911 gt 1372minus846mvolts otherwise

(23)

10 BioMed Research International

The base grid used by the adaptive simulation is a coarsepartition of the Purkinje system and has 1412 edgeselementswithminimum element size equal to 0006 cm andmaximumelement size equal to 0075 cm This is also a highly nonuni-form grid as the ratio of the maximum to the minimumelement size is up to 125 Once again in this simulation thebase grid was actually generated from uniform bisection of acoarser grid which has 706 edgeselements as this will makethe multigrid iterations for linear systems more efficient Theadaptive simulation effectively uses three refinement levelsSimilar to the two-dimensional case the second level gridwascreated from uniform bisection of the base grid and only thegrids at the third level were dynamically changing regriddedevery other timestep

The timestep sizes used in the adaptive simulation varyfrom 000935msecs to 10msec The adaptive mesh refine-ment was automatically turned off at time 119905 = 4274msecsThe simulation on the time interval [0 400]msecs totallyused 6267 secs in computer time

The three-dimensional Purkinje system is also par-titioned into a fine and quasi-uniform grid which has6311 edgeselements with minimum element size equal to00095 cm and maximum element size equal to 0016 cmThe simulation with this fine quasi-uniform grid and time-step Δ119905 = 00082msecs used about 96960 secs in computertime

We also run a simulation with timestep Δ119905 =

00374msecs on the coarse grid that is used as the basegrid for the adaptive simulation This ldquono-refinementrdquosimulation used 4580 secs in computer time

It is observed that in each of the adaptive uniform refine-ment and no-refinement simulations the action potentialsuccessfully propagates and goes through the whole domainThe solution from the adaptive simulation is in very goodagreement with that from the uniform simulation We col-lected action potentials at sixteen points located in differentregions of the domain and found that the correspondingpotential traces almost overlap Their difference at the peakof the upstroke phases is uniformly bounded by 04mvoltsSee Figures 9 10 11 and 12 for some visualized results whichcorrespond to action potentials at the four marked pointsshown in Figure 8 Figure 13 shows four plots of the actionpotential at different times by the AMR-ATI simulation Theadaptive simulation roughly uses the same computer timeas the ldquono-refinementrdquo one but yields much more accurateresults and gains about 155 times speedup over the uniformone

8 Discussion

This paper demonstrates the promising application of aboth space and time adaptive algorithm for simulating wavepropagation on the His-Purkinje system

In this work only the numerical results with membranedynamics described by Beeler-Reuter model are presentedThe adaptive algorithm however is by no means restrictedto the simple model In fact in our implementation thealgorithm successfully works with other physically realistic

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

minus5

0

5

10

15

10 20 30 40 50 60

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

(b)

Figure 9 Traces of action potentials during the simulation period[0 400]msecs at the point marked as ldquo3rdquo in Figure 8 The right plotis a close-up of the left one

models such as Luo-Rudy dynamic model and DiFrancesco-Noble model As the advantage of the algorithm due toapplication of the operator splitting technique principallyany standard cardiac model of reaction-diffusion type formodeling wave propagation can be easily incorporated intothe adaptive algorithm

In addition the adaptive algorithm can also be straight-forwardly applied to modeling signal propagation on theneural system of the body or even the brain

Other than its advantages admittedly the algorithm hasits own limitations For example efficiency of the adaptivealgorithm heavily relies on the existence of a relatively coarsebase grid which describes the computational domain As thespeedup of an adaptivemesh refinement algorithm is roughly

BioMed Research International 11

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

5

10

15

20

25

20 30 40 50 60 70 80

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

(b)

Figure 10 Traces of action potentials during the simulation period [0 400]msecs at the point marked as ldquo20rdquo in Figure 8 The right plot is aclose-up of the left one

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

0

2

4

6

8

10

12

14

16

18

20 30 40 50 60 70 80

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

(b)

Figure 11 Traces of action potentials during the simulation period [0 400]msecs at the point marked as ldquo76rdquo in Figure 8 The right plot is aclose-up of the left one

bounded by the ratio of the maximum degree of freedoms(DOFs) of the grid with the highest resolution to the meanDOFs of all grids that have ever been created during theadaptive process the existence of a coarse base grid makesthe ratio as large as possible and hence its speedup over asimulation on the grid with highest resolution The use of arelatively coarse base grid alsomakes themultigridmultilevelsolver for linear systems more efficient It is noticeable that inour simulations the base grids were all selected to have thenumber of edgeselements as small as possible

In the AMR algorithm [10 13] designed for uniform orquasi-uniform grids the timestep size is typically selected tobe proportional to the minimum of element sizes (multipliedby an estimated wave speed) in a grid in order that thenumerical waves or discontinuities will never propagatemorethan one element within a single timestep As this strategyguarantees that the fronts of traveling waves are alwaysbounded and tracked by fine grids the ODEs resulting fromoperator splitting applied to the reaction-diffusion systemsare only integrated on the fine grids while the solution in

12 BioMed Research International

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

0

5

10

15

10 20 30 40 50 60

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

(b)

Figure 12 Traces of action potentials during the simulation period [0 400]msecs at the point marked as ldquo109rdquo in Figure 8 The right plot isa close-up of the left one

Time = 000 Time = 851

Time = 265Time = 175

Beeler-Reuter model with Strang splitting Beeler-Reuter model with Strang splitting

Beeler-Reuter model with Strang splittingBeeler-Reuter model with Strang splitting

Figure 13 Four plots of the action potential at different times by the AMR-ATI simulation (red denotes activated potential and blue denotesinactivated potential)

BioMed Research International 13

other regions covered by coarser level grids is simply obtainedby time interpolation

However a typical branch in the heart conduction systemor the whole system on its own is highly nonuniform in thesense that the lengths of branch segments each of which isbounded by two adjacent leaves or branching vertices mayvary significantly (see Figure 4) which determines that thecoarse partition of the domain is also highly nonuniformsince the algorithm requires the number of edgeselementsas small as possible for the efficiency considerations Onthis kind of computational domain the determination oftimesteps based onminimum element sizes is found to be lessefficient Here in the proposed adaptive algorithm timestepsizes are instead chosen to be proportional to the maximumof the element sizes in a grid This alternative strategy isexperimentally proved to be efficient and accurate enoughin simulating electric waves that travel moderately fast eventhough its rigidity needs further investigation in particularwhen the fiber radius is much larger than those currentlyused

In the simulations presented in this work error estima-tions in both the AMR and ATI processes were all basedon the Richardson extrapolation process which is in generalmore expensive but more rigorous and reliable than othersimpler ones such as the gradient detector If the gradientdetector is used the second coarsest grid in the AMR processdoes not need to be uniformly bisected and can also bechanging dynamically and the ATI period can use lesscomputer timeThe potential gain in algorithm efficiency andthe possible loss in solution accuracy with the alternative useof a gradient detector can be studied and compared in a futurework

Finally we make a remark about the parallelization ofthe space-time adaptive mesh refinement and adaptive timeintegration algorithm on the Purkinje system Our numericalexperiments in this work indicate that a large fraction (about80) of the computer time in the simulation is spent inthe time integration for the reaction part while the V-cyclemultigrid iteration as well as the mesh refinement part usesonly a small fraction (less than 20) The dominance of thecomputer time in the split reaction part is partially due tothe one-dimensional nature of the Purkinje network system(mesh refinement and grid generation on one-dimensionalstructures are much easier than the ones with multipledimensions) It is obvious that this dominance will be ben-eficial for the parallelization of the algorithm since the splitreaction part (ODEs) is spatially independent highly par-allelizable and can even be massively parallelized Of coursethe time consumption on different parts of the operatorsplitting depends on the specific cardiac models With amore complicated cardiac model the reaction part willconsume more computer time than the diffusion part andthe corresponding parallel algorithm is expected to havebetter efficiency Admittedly with the reaction part fullyparallelized further improvement in parallel efficiency willdepend on the parallelization of the V-cycle multigrid itera-tion which solves linear systems over thewhole fiber networkand needs time in data communication between different

CPUs or multicores The parallel performance of the AMR-ATI algorithm deserves further investigation

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

Research of the first author was supported in part by theNational Science Foundation of USA under Grant DMS-0915023 and is supported by the National Natural ScienceFoundation of China under Grants DMS-11101278 and DMS-91130012 Research of the first author is also supported by theYoungThousand Talents Program of China

References

[1] E J Vigmond and C Clements ldquoConstruction of a computermodel to investigate sawtooth effects in the Purkinje systemrdquoIEEE Transactions on Biomedical Engineering vol 54 no 3 pp389ndash399 2007

[2] E J Vigmond R W dos Santos A J Prassl M Deo and GPlank ldquoSolvers for the cardiac bidomain equationsrdquo Progress inBiophysics andMolecular Biology vol 96 no 1ndash3 pp 3ndash18 2008

[3] S Linge J Sundnes M Hanslien G T Lines and A TveitoldquoNumerical solution of the bidomain equationsrdquo PhilosophicalTransactions of the Royal Society A Mathematical Physical andEngineering Sciences vol 367 no 1895 pp 1931ndash1950 2009

[4] P PathmanathanMO Bernabeu R Bordas et al ldquoA numericalguide to the solution of the bidomain equations of cardiacelectrophysiologyrdquoProgress in Biophysics andMolecular Biologyvol 102 no 2-3 pp 136ndash155 2010

[5] C S Henriquez ldquoA brief history of tissue models for cardiacelectrophysiologyrdquo IEEE Transactions on Biomedical Engineer-ing vol 61 no 5 pp 1457ndash1465 2014

[6] M J Berger and J Oliger ldquoAdaptive mesh refinement for hyper-bolic partial differential equationsrdquo Journal of ComputationalPhysics vol 53 no 3 pp 484ndash512 1984

[7] M J Berger and P Colella ldquoLocal adaptive mesh refinement forshock hydrodynamicsrdquo Journal of Computational Physics vol82 no 1 pp 64ndash84 1989

[8] E M Cherry H S Greenside and C S Henriquez ldquoA space-time adaptive method for simulating complex cardiac dynam-icsrdquo Physical Review Letters vol 84 no 6 article 1343 2000

[9] E M Cherry H S Greenside and C S Henriquez ldquoEffi-cient simulation of three-dimensional anisotropic cardiac tissueusing an adaptive mesh refinement methodrdquo Chaos vol 13 no3 pp 853ndash865 2003

[10] J A Trangenstein and C Kim ldquoOperator splitting and adaptivemesh refinement for the Luo-Rudy I modelrdquo Journal of Compu-tational Physics vol 196 no 2 pp 645ndash679 2004

[11] A L Hodgkin and A F Huxley ldquoA quantitative descriptionof membrane current and its application to conduction andexcitation in nerverdquoThe Journal of Physiology vol 117 no 4 pp500ndash544 1952

[12] R S VargaMatrix Iterative Analysis Springer Series in Compu-tational Mathematics Springer Berlin Germany 2nd edition2000

14 BioMed Research International

[13] W Ying Department of Mathematics Duke University Dur-ham NC USA 2005

[14] C Brezinski and M R Zaglia Extrapolation Methods Theoryand Practice vol 2 of Studies in Computational MathematicsNorth-Holland Publishing AmsterdamTheNetherlands 1991

[15] G W Beeler and H Reuter ldquoReconstruction of the actionpotential of ventricular myocardial fibresrdquoThe Journal of Phys-iology vol 268 no 1 pp 177ndash210 1977

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

Page 9: Research Article Adaptive Mesh Refinement and …downloads.hindawi.com/journals/bmri/2015/137482.pdfResearch Article Adaptive Mesh Refinement and Adaptive Time Integration for Electrical

BioMed Research International 9

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(b)

Figure 7 Traces of action potentials during the simulation period[0 400]msecs at marked points ldquo7rdquo and ldquo6rdquo in the two-dimensionalbranch shown in Figure 4 The solid curves were from the adaptivesimulation The dotted curves were from the uniform simulationThe dashed curves were from the ldquono-refinementrdquo simulation Inthese simulations a voltage stimulation is applied from the top ofthe radius-variable branch structureThe left plot corresponds to themarked point ldquo7rdquo and the right plot corresponds to themarked pointldquo6rdquo

The action potential successfully propagates across thewhole computational domain in each of the adaptive anduniform refinement simulations while it fails to pass acrossbranching points where the fiber radius changes from smallto large in the ldquono-refinementrdquo simulation (see Figure 7)

To verify accuracy of the solution from the adaptivesimulation action potentials at the seven marked points (seeFigure 4) are also collected and compared with those fromthe simulation with the fine and quasi-uniform grid It isobserved that the adaptive and uniform results match very

3

20

76109

Figure 8 The idealized Purkinje system of the heart

well too and their difference at the peak of the upstroke phaseis bounded by 015mvolts

The simulation with adaptive mesh refinement used1004msecs in computer time The timestep sizes used in theadaptive simulation vary from Δ119905 = 00098msecs to Δ119905 =10msec The adaptive mesh refinement was automaticallyturned off at time 119905 = 4402msecs

The one with the uniform grid and timestep Δ119905 =

000635msecs used 14050msecs in computer time

Example 4 Simulations on a three-dimensional branchwhose segments have piecewise constant radii

The three-dimensional Purkinje system used in the fol-lowing simulations was originally from 3Dsciencecom Thefiber radius of the system was modified to be piecewiseconstant for simplicity The radius of each branch segmenttakes one of the five values as the two-dimensional caseThe minimum and the maximum radius of the fiber arerespectively 20 120583m and 160 120583m Intermediate values are40 80 and 120 cm See Figure 8 for an illustration of thetopological structure where the ratios of variable radii are notcourteously shown

The idealized Purkinje system has a dimension 204 times

216 times 298 cm3 It is bounded by the rectangular domain[minus039 165]times[405 621]times[minus131 167] cm3Themaximumof the shortest paths from the top-most vertex to other leaf-vertices computed by Dijkstrarsquos algorithm is about 529 cm

As in the two-dimensional examples in each of thesimulations with the idealized Purkinje system a voltagestimulation is applied from the top branch through a discon-tinuous initial action potential given as follows

119881119898(119909 119910 119911) =

254mvolts if 119911 gt 1372minus846mvolts otherwise

(23)

10 BioMed Research International

The base grid used by the adaptive simulation is a coarsepartition of the Purkinje system and has 1412 edgeselementswithminimum element size equal to 0006 cm andmaximumelement size equal to 0075 cm This is also a highly nonuni-form grid as the ratio of the maximum to the minimumelement size is up to 125 Once again in this simulation thebase grid was actually generated from uniform bisection of acoarser grid which has 706 edgeselements as this will makethe multigrid iterations for linear systems more efficient Theadaptive simulation effectively uses three refinement levelsSimilar to the two-dimensional case the second level gridwascreated from uniform bisection of the base grid and only thegrids at the third level were dynamically changing regriddedevery other timestep

The timestep sizes used in the adaptive simulation varyfrom 000935msecs to 10msec The adaptive mesh refine-ment was automatically turned off at time 119905 = 4274msecsThe simulation on the time interval [0 400]msecs totallyused 6267 secs in computer time

The three-dimensional Purkinje system is also par-titioned into a fine and quasi-uniform grid which has6311 edgeselements with minimum element size equal to00095 cm and maximum element size equal to 0016 cmThe simulation with this fine quasi-uniform grid and time-step Δ119905 = 00082msecs used about 96960 secs in computertime

We also run a simulation with timestep Δ119905 =

00374msecs on the coarse grid that is used as the basegrid for the adaptive simulation This ldquono-refinementrdquosimulation used 4580 secs in computer time

It is observed that in each of the adaptive uniform refine-ment and no-refinement simulations the action potentialsuccessfully propagates and goes through the whole domainThe solution from the adaptive simulation is in very goodagreement with that from the uniform simulation We col-lected action potentials at sixteen points located in differentregions of the domain and found that the correspondingpotential traces almost overlap Their difference at the peakof the upstroke phases is uniformly bounded by 04mvoltsSee Figures 9 10 11 and 12 for some visualized results whichcorrespond to action potentials at the four marked pointsshown in Figure 8 Figure 13 shows four plots of the actionpotential at different times by the AMR-ATI simulation Theadaptive simulation roughly uses the same computer timeas the ldquono-refinementrdquo one but yields much more accurateresults and gains about 155 times speedup over the uniformone

8 Discussion

This paper demonstrates the promising application of aboth space and time adaptive algorithm for simulating wavepropagation on the His-Purkinje system

In this work only the numerical results with membranedynamics described by Beeler-Reuter model are presentedThe adaptive algorithm however is by no means restrictedto the simple model In fact in our implementation thealgorithm successfully works with other physically realistic

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

minus5

0

5

10

15

10 20 30 40 50 60

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

(b)

Figure 9 Traces of action potentials during the simulation period[0 400]msecs at the point marked as ldquo3rdquo in Figure 8 The right plotis a close-up of the left one

models such as Luo-Rudy dynamic model and DiFrancesco-Noble model As the advantage of the algorithm due toapplication of the operator splitting technique principallyany standard cardiac model of reaction-diffusion type formodeling wave propagation can be easily incorporated intothe adaptive algorithm

In addition the adaptive algorithm can also be straight-forwardly applied to modeling signal propagation on theneural system of the body or even the brain

Other than its advantages admittedly the algorithm hasits own limitations For example efficiency of the adaptivealgorithm heavily relies on the existence of a relatively coarsebase grid which describes the computational domain As thespeedup of an adaptivemesh refinement algorithm is roughly

BioMed Research International 11

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

5

10

15

20

25

20 30 40 50 60 70 80

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

(b)

Figure 10 Traces of action potentials during the simulation period [0 400]msecs at the point marked as ldquo20rdquo in Figure 8 The right plot is aclose-up of the left one

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

0

2

4

6

8

10

12

14

16

18

20 30 40 50 60 70 80

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

(b)

Figure 11 Traces of action potentials during the simulation period [0 400]msecs at the point marked as ldquo76rdquo in Figure 8 The right plot is aclose-up of the left one

bounded by the ratio of the maximum degree of freedoms(DOFs) of the grid with the highest resolution to the meanDOFs of all grids that have ever been created during theadaptive process the existence of a coarse base grid makesthe ratio as large as possible and hence its speedup over asimulation on the grid with highest resolution The use of arelatively coarse base grid alsomakes themultigridmultilevelsolver for linear systems more efficient It is noticeable that inour simulations the base grids were all selected to have thenumber of edgeselements as small as possible

In the AMR algorithm [10 13] designed for uniform orquasi-uniform grids the timestep size is typically selected tobe proportional to the minimum of element sizes (multipliedby an estimated wave speed) in a grid in order that thenumerical waves or discontinuities will never propagatemorethan one element within a single timestep As this strategyguarantees that the fronts of traveling waves are alwaysbounded and tracked by fine grids the ODEs resulting fromoperator splitting applied to the reaction-diffusion systemsare only integrated on the fine grids while the solution in

12 BioMed Research International

0

20

0 50 100 150 200 250 300 350 400

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

minus80

minus60

minus40

minus20

(a)

0

5

10

15

10 20 30 40 50 60

Pote

ntia

l (m

V)

Time (ms)

AdaptiveNo-refinement

Uniform

(b)

Figure 12 Traces of action potentials during the simulation period [0 400]msecs at the point marked as ldquo109rdquo in Figure 8 The right plot isa close-up of the left one

Time = 000 Time = 851

Time = 265Time = 175

Beeler-Reuter model with Strang splitting Beeler-Reuter model with Strang splitting

Beeler-Reuter model with Strang splittingBeeler-Reuter model with Strang splitting

Figure 13 Four plots of the action potential at different times by the AMR-ATI simulation (red denotes activated potential and blue denotesinactivated potential)

BioMed Research International 13

other regions covered by coarser level grids is simply obtainedby time interpolation

However a typical branch in the heart conduction systemor the whole system on its own is highly nonuniform in thesense that the lengths of branch segments each of which isbounded by two adjacent leaves or branching vertices mayvary significantly (see Figure 4) which determines that thecoarse partition of the domain is also highly nonuniformsince the algorithm requires the number of edgeselementsas small as possible for the efficiency considerations Onthis kind of computational domain the determination oftimesteps based onminimum element sizes is found to be lessefficient Here in the proposed adaptive algorithm timestepsizes are instead chosen to be proportional to the maximumof the element sizes in a grid This alternative strategy isexperimentally proved to be efficient and accurate enoughin simulating electric waves that travel moderately fast eventhough its rigidity needs further investigation in particularwhen the fiber radius is much larger than those currentlyused

In the simulations presented in this work error estima-tions in both the AMR and ATI processes were all basedon the Richardson extrapolation process which is in generalmore expensive but more rigorous and reliable than othersimpler ones such as the gradient detector If the gradientdetector is used the second coarsest grid in the AMR processdoes not need to be uniformly bisected and can also bechanging dynamically and the ATI period can use lesscomputer timeThe potential gain in algorithm efficiency andthe possible loss in solution accuracy with the alternative useof a gradient detector can be studied and compared in a futurework

Finally, we make a remark about parallelization of the space-time adaptive mesh refinement and adaptive time integration algorithm on the Purkinje system. Our numerical experiments in this work indicate that a large fraction (about 80%) of the computer time in a simulation is spent in the time integration of the reaction part, while the V-cycle multigrid iteration and the mesh refinement together use only a small fraction (less than 20%). The dominance of the reaction part is partially due to the one-dimensional nature of the Purkinje network (mesh refinement and grid generation on one-dimensional structures are much easier than in multiple dimensions). This dominance is beneficial for parallelization, since the split reaction part (ODEs) is spatially independent, highly parallelizable, and can even be massively parallelized. Of course, how the computer time is distributed among the parts of the operator splitting depends on the specific cardiac model; with a more complicated model, the reaction part will consume even more time relative to the diffusion part, and the corresponding parallel algorithm is expected to have better efficiency. Admittedly, with the reaction part fully parallelized, further improvement in parallel efficiency will depend on the parallelization of the V-cycle multigrid iteration, which solves linear systems over the whole fiber network and needs time for data communication between different CPUs or cores. The parallel performance of the AMR-ATI algorithm deserves further investigation.
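To illustrate why the split reaction part is so amenable to parallelization, the sketch below advances independent per-node ODEs concurrently. The node dynamics are a deliberately trivial placeholder rather than the Beeler-Reuter model, and the authors' implementation is not parallelized in this way; the sketch only shows the structure such a parallel reaction step could take.

```python
from concurrent.futures import ProcessPoolExecutor


def integrate_reaction(node_state, dt):
    """Advance the membrane ODEs at a single node over dt.  Placeholder
    dynamics (forward-Euler relaxation toward rest); a real Beeler-Reuter
    update with gating variables would replace this body."""
    v, v_rest = node_state
    return (v + dt * (v_rest - v), v_rest)


def reaction_step_parallel(states, dt, workers=4):
    """The split reaction part couples no neighboring nodes, so every node
    can be advanced concurrently; only the diffusion solve is global."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(integrate_reaction, states, [dt] * len(states)))


if __name__ == "__main__":
    # Three nodes: (membrane potential, resting potential) in mV.
    states = [(-20.0, -84.0), (10.0, -84.0), (-80.0, -84.0)]
    print(reaction_step_parallel(states, dt=0.05))
```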

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

Research of the first author was supported in part by the National Science Foundation of USA under Grant DMS-0915023 and is supported by the National Natural Science Foundation of China under Grants DMS-11101278 and DMS-91130012. Research of the first author is also supported by the Young Thousand Talents Program of China.

References

[1] E. J. Vigmond and C. Clements, "Construction of a computer model to investigate sawtooth effects in the Purkinje system," IEEE Transactions on Biomedical Engineering, vol. 54, no. 3, pp. 389–399, 2007.

[2] E. J. Vigmond, R. W. dos Santos, A. J. Prassl, M. Deo, and G. Plank, "Solvers for the cardiac bidomain equations," Progress in Biophysics and Molecular Biology, vol. 96, no. 1–3, pp. 3–18, 2008.

[3] S. Linge, J. Sundnes, M. Hanslien, G. T. Lines, and A. Tveito, "Numerical solution of the bidomain equations," Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 367, no. 1895, pp. 1931–1950, 2009.

[4] P. Pathmanathan, M. O. Bernabeu, R. Bordas, et al., "A numerical guide to the solution of the bidomain equations of cardiac electrophysiology," Progress in Biophysics and Molecular Biology, vol. 102, no. 2-3, pp. 136–155, 2010.

[5] C. S. Henriquez, "A brief history of tissue models for cardiac electrophysiology," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1457–1465, 2014.

[6] M. J. Berger and J. Oliger, "Adaptive mesh refinement for hyperbolic partial differential equations," Journal of Computational Physics, vol. 53, no. 3, pp. 484–512, 1984.

[7] M. J. Berger and P. Colella, "Local adaptive mesh refinement for shock hydrodynamics," Journal of Computational Physics, vol. 82, no. 1, pp. 64–84, 1989.

[8] E. M. Cherry, H. S. Greenside, and C. S. Henriquez, "A space-time adaptive method for simulating complex cardiac dynamics," Physical Review Letters, vol. 84, no. 6, article 1343, 2000.

[9] E. M. Cherry, H. S. Greenside, and C. S. Henriquez, "Efficient simulation of three-dimensional anisotropic cardiac tissue using an adaptive mesh refinement method," Chaos, vol. 13, no. 3, pp. 853–865, 2003.

[10] J. A. Trangenstein and C. Kim, "Operator splitting and adaptive mesh refinement for the Luo-Rudy I model," Journal of Computational Physics, vol. 196, no. 2, pp. 645–679, 2004.

[11] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," The Journal of Physiology, vol. 117, no. 4, pp. 500–544, 1952.

[12] R. S. Varga, Matrix Iterative Analysis, Springer Series in Computational Mathematics, Springer, Berlin, Germany, 2nd edition, 2000.

[13] W. Ying, Department of Mathematics, Duke University, Durham, NC, USA, 2005.

[14] C. Brezinski and M. R. Zaglia, Extrapolation Methods: Theory and Practice, vol. 2 of Studies in Computational Mathematics, North-Holland Publishing, Amsterdam, The Netherlands, 1991.

[15] G. W. Beeler and H. Reuter, "Reconstruction of the action potential of ventricular myocardial fibres," The Journal of Physiology, vol. 268, no. 1, pp. 177–210, 1977.


