

LOW-RANK SOLVERS FOR FRACTIONAL DIFFERENTIAL EQUATIONS

TOBIAS BREITEN∗, VALERIA SIMONCINI† , AND MARTIN STOLL‡

Abstract. Many problems in science and technology can be cast using differential equations with both fractional time and spatial derivatives. To accurately simulate natural phenomena using this technology, fine spatial and temporal discretizations are required, leading to large-scale linear systems or matrix equations, especially whenever more than one space dimension is considered. The discretization of fractional differential equations typically involves dense matrices with a Toeplitz or, in the variable coefficient case, Toeplitz-like structure. We combine the fast evaluation of Toeplitz matrices and their circulant preconditioners with state-of-the-art linear matrix equation methods to efficiently solve these problems, both in terms of CPU time and memory requirements. Additionally, we illustrate how these techniques can be adapted when variable coefficients are present. Numerical experiments on typical differential problems with fractional derivatives in both space and time show the effectiveness of the approaches.

Key words. Fractional calculus, Fast solvers, Sylvester equations, Preconditioning, Low-rank methods, Tensor equations

AMS subject classifications. 65F08, 65F10, 65F50, 92E20, 93C20

1. Introduction. The study of integrals and derivatives of arbitrary order, so-called fractional order, is an old topic in mathematics going back to Euler and Leibniz (see [17] for historical notes). Despite its long history in mathematics, it was not until recently that this topic gained mainstream interest outside the mathematical community. This surging interest is mainly due to the inadequacy of traditional models to describe many real-world phenomena. The well-known anomalous diffusion process is one typical such example [32]. Other applications of fractional calculus are viscoelasticity, for example using the Kelvin-Voigt fractional derivative model [23, 53], electrical circuits [21, 40], electro-analytical chemistry [49], and image processing [54].

With the increase of problems using fractional differential equations there is corresponding interest in the development and study of accurate, fast, and reliable numerical methods for their solution. There are various formulations for the fractional derivative, mainly divided into derivatives of Caputo or Riemann-Liouville type (see the definitions in Section 2). So far much of the numerical analysis has focused on ways to discretize these equations using either tailored finite difference [38, 30] or finite element [14, 29] methods. Of particular importance is the preconditioning of the linear system, which can be understood as additionally employing an approximation of the discretized operator. For some types of fractional differential equations preconditioning has recently been considered (see [27, 33]). Another approach, recently studied by Burrage et al., considers approximations to matrix functions to solve the discretized system (see [5] for details). In this paper we focus on the solution of the discretized equations when a finite difference approximation is used. Our work here is motivated by some recent results in [39], where the discretization via finite differences is considered in a purely algebraic framework.

Our aim is to exploit a novel linear algebra formulation of the discretized differential equations, which allows us to efficiently and reliably solve the resulting very large algebraic equations. The new framework captures the intrinsic matrix structure in the space and time directions typical of the problem; the corresponding matrix or tensor equations are solved with recently developed fast iterative methods, which determine accurate low-rank approximations in the solution manifold.

∗Institute for Mathematics and Scientific Computing, Heinrichstr. 36/III, University of Graz, Austria ([email protected]). Most of this work was completed while this author was with the Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg.
†Dipartimento di Matematica, Università di Bologna, Piazza di Porta S. Donato 5, 40127 Bologna, Italy, and IMATI-CNR, Pavia ([email protected]).
‡Numerical Linear Algebra for Dynamical Systems, Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstr. 1, 39106 Magdeburg, Germany ([email protected]).

We therefore structure the paper as follows. Section 2.1 provides some background on both Caputo and Riemann-Liouville fractional derivatives. In Section 3 we introduce different problems, all of fractional order. The problems are either space-, time- or space-time-fractional. Additionally, we introduce variable and constant coefficients. We start with a problem revealing the basic matrix structure that is obtained when tailored finite difference methods are applied. Later, a more complicated setup will lead not only to a simple structured linear system but rather to a (linear) Sylvester matrix equation. An even further structured equation is obtained when we consider a two-dimensional (in space) setup. We also consider a problem that combines two-dimensional spatial derivatives of fractional order with a time-fractional derivative, which in turn leads to a tensor-structured equation. In Section 4 we turn to constructing fast solvers for the previously obtained matrix equations. We start by introducing circulant approximations to the Toeplitz matrices that are obtained as the discretization of an instance of a fractional derivative. These circulant matrices are important as preconditioners for matrices of Toeplitz type but can also be used with the tailored matrix equation solvers presented in Section 4.2, in either direct or preconditioned form. Section 4.3 provides some of the eigenvalue analysis needed to show that our methods perform robustly within the given parameters. This is then followed by a tensor-valued solver in Section 4.4. The numerical results given in Section 5 illustrate the competitiveness of our approaches.

Throughout the manuscript the following notation will be used. matlab notation will be used whenever possible; for a matrix U = [u_1, . . . , u_n] ∈ R^{m×n}, vec(U) = [u_1^T, . . . , u_n^T]^T ∈ R^{nm} denotes the vector obtained by stacking all columns of U one after the other; A ⊗ B denotes the Kronecker product of A and B.

2. Fractional calculus and Grünwald formulae.

2.1. The fractional derivative. In fractional calculus there are competing concepts for the definition of fractional derivatives. The Caputo and the Riemann-Liouville fractional derivatives [38] are both used here, and we use this section to briefly recall their definitions.

Consider a function f = f(t) defined on an interval [a, b]. Assuming that f is sufficiently often continuously differentiable, the Caputo derivative of real order α with n − 1 < α ≤ n is defined as (see, e.g., [39, Formula (3)], [6, Section 2.3])

{}^{C}_{a}D^{\alpha}_{t} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_a^t \frac{f^{(n)}(s)\,ds}{(t-s)^{\alpha-n+1}}, \qquad (2.1)

where Γ(x) is the gamma function. Following the discussion in [39], the Caputo derivative is frequently used for the derivative with respect to time. We also define the Riemann-Liouville derivative (see, e.g., [39, Formulas (6-7)]): assuming that f is integrable for t > a, a left-sided fractional derivative of real order α with n − 1 < α ≤ n is defined as

{}^{RL}_{a}D^{\alpha}_{t} f(t) = \frac{1}{\Gamma(n-\alpha)} \left(\frac{d}{dt}\right)^{n} \int_a^t \frac{f(s)\,ds}{(t-s)^{\alpha-n+1}}, \quad a < t < b. \qquad (2.2)

Analogously, a right-sided Riemann-Liouville fractional derivative is given by

{}^{RL}_{t}D^{\alpha}_{b} f(t) = \frac{(-1)^{n}}{\Gamma(n-\alpha)} \left(\frac{d}{dt}\right)^{n} \int_t^b \frac{f(s)\,ds}{(s-t)^{\alpha-n+1}}, \quad a < t < b. \qquad (2.3)


If one is further interested in computing the symmetric Riesz derivative of order α, one can simply take the half-sum of the left- and right-sided Riemann-Liouville derivatives (see, e.g., [39, Formula (5)]), that is,

\frac{d^{\alpha} f(t)}{d|t|^{\alpha}} = {}_{t}D^{\alpha}_{R} f(t) = \frac{1}{2}\left({}^{RL}_{a}D^{\alpha}_{t} f(t) + {}^{RL}_{t}D^{\alpha}_{b} f(t)\right). \qquad (2.4)

Here a connection to both the left-sided and the right-sided derivatives is made¹. In the remainder of this paper we want to illustrate that fractional differential equations using these formulations, together with certain discretization approaches, lead to similar structures at the discrete level. Our goal is to give guidelines and offer numerical schemes for the efficient and accurate evaluation of problems of various forms. Detailed introductions to fractional differential equations can be found in [43, 38].

2.2. Numerical approximation. The discretization of fractional derivatives is often done by finite difference schemes of Grünwald-Letnikov type. Assume that for a one-dimensional problem the spatial domain is given by x ∈ [a, b]. We here follow [30] for the introduction of the basic methodology. According to [30] the following holds for the left-sided derivative (see (3) in [30] or (7.10) in [38]):

{}^{RL}_{a}D^{\alpha}_{x} f(x,t) = \lim_{M\to\infty} \frac{1}{h^{\alpha}} \sum_{k=0}^{M} g_{\alpha,k}\, f(x-kh, t), \qquad (2.5)

where h = (x−a)/M and g_{\alpha,k} is given by

g_{\alpha,k} = \frac{\Gamma(k-\alpha)}{\Gamma(-\alpha)\,\Gamma(k+1)} = (-1)^{k}\binom{\alpha}{k}. \qquad (2.6)

For the efficient computation of the coefficients g_{\alpha,k} one can use the following recurrence relation² ([38]):

g_{\alpha,0} = 1, \qquad g_{\alpha,k} = \left(1 - \frac{\alpha+1}{k}\right) g_{\alpha,k-1}. \qquad (2.7)
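For illustration, the recurrence (2.7) can be cross-checked against the Gamma-function formula (2.6); the following Python sketch does so for a sample non-integer order (function name and parameters are ours, not part of the paper):

```python
import numpy as np
from math import gamma

def grunwald_coeffs(alpha, n):
    """Coefficients g_{alpha,k}, k = 0..n, via the recurrence (2.7)."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g

alpha = 1.5
g = grunwald_coeffs(alpha, 6)

# Cross-check against the Gamma-function formula (2.6); alpha must be
# non-integer here so Gamma is evaluated away from its poles.
g_ref = np.array([gamma(k - alpha) / (gamma(-alpha) * gamma(k + 1))
                  for k in range(7)])
assert np.allclose(g, g_ref)
```

This is the same computation behind the matlab cumprod one-liner quoted in footnote 2 below.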

This finite difference approximation is of first order [30]. One can analogously obtain an expression for the right-sided derivative via

{}^{RL}_{x}D^{\alpha}_{b} f(x,t) = \lim_{M\to\infty} \frac{1}{h^{\alpha}} \sum_{k=0}^{M} g_{\alpha,k}\, f(x+kh, t), \qquad (2.8)

where h = (b−x)/M.

Alternatively, a shifted Grünwald-Letnikov formula is introduced in [30]. This discretization shows advantages regarding stability when the fractional derivative is part of an unsteady differential equation. The basis of the discretized operator used later is the following shifted expression, introduced in [30, Theorem 2.7]:

{}^{RL}_{a}D^{\alpha}_{x} f(x,t) = \lim_{M\to\infty} \frac{1}{h^{\alpha}} \sum_{k=0}^{M} g_{\alpha,k}\, f(x-(k-1)h, t).

¹In this work we are not debating which of these derivatives is the most suitable for the description of a natural phenomenon.
²For a matlab implementation this can be efficiently computed using y = cumprod([1, 1 − ((α + 1)./(1 : n))]), where n is the number of coefficients.

3. Some model problems and discretizations. In what follows, we introduce some prototypes of fractional differential equations. We start with space fractional problems with constant coefficients. These will then be expanded by an additional time fractional derivative. Similarities and differences in the non-constant coefficient case are pointed out. We emphasize that the subsequent examples are neither the only relevant formulations nor the only possible ways to discretize each problem. Instead, we show that classical discretization techniques typically lead to dense but highly structured problems which, when considered as matrix equations, allow for an efficient numerical treatment.

3.1. Space fractional problems. Consider the one-dimensional fractional diffusion equation

\frac{du(x,t)}{dt} - {}_{x}D^{\beta}_{R} u(x,t) = f(x,t), \quad (x,t) \in (0,1) \times (0,T],
u(0,t) = u(1,t) = 0, \quad t \in [0,T],
u(x,0) = 0, \quad x \in [0,1], \qquad (3.1)

with spatial differentiation parameter β ∈ (1, 2). Equation (3.1) is discretized in time by an implicit Euler scheme of step size τ to give

\frac{u^{n+1} - u^{n}}{\tau} - {}_{x}D^{\beta}_{R} u^{n+1} = f^{n+1}, \qquad (3.2)

where u^{n+1} := u(x, t_{n+1}) and f^{n+1} := f(x, t_{n+1}) denote the values of u(x,t) and f(x,t) at time t_{n+1} = (n+1)τ. According to (2.4), the Riesz derivative is defined as the weighted average of the left- and right-sided Riemann-Liouville derivatives. Hence, let us first consider the one-sided analogue of (3.2), i.e.,

\frac{u^{n+1} - u^{n}}{\tau} - {}^{RL}_{a}D^{\beta}_{x} u^{n+1} = f^{n+1}. \qquad (3.3)

The stable discretization of this problem is introduced in [30, Formula (18)], where, using u^{n+1}_{i} := u(x_i, t_{n+1}), the derivative is approximated via

{}^{RL}_{a}D^{\beta}_{x} u^{n+1}_{i} \approx \frac{1}{h_x^{\beta}} \sum_{k=0}^{i+1} g_{\beta,k}\, u^{n+1}_{i-k+1}.

Here, h_x = (b−a)/(n_x+1) and n_x is the number of interior points in space. Incorporating the homogeneous

Dirichlet boundary conditions, an approximation to (3.2) in matrix form is given by

\frac{1}{\tau}\begin{pmatrix} u^{n+1}_{1}-u^{n}_{1} \\ u^{n+1}_{2}-u^{n}_{2} \\ \vdots \\ u^{n+1}_{n_x}-u^{n}_{n_x} \end{pmatrix}
= h_x^{-\beta}\underbrace{\begin{pmatrix}
g_{\beta,1} & g_{\beta,0} & 0 & \cdots & 0 \\
g_{\beta,2} & g_{\beta,1} & g_{\beta,0} & \ddots & \vdots \\
\vdots & \ddots & \ddots & g_{\beta,0} & 0 \\
g_{\beta,n_x-1} & \ddots & \ddots & g_{\beta,1} & g_{\beta,0} \\
g_{\beta,n_x} & g_{\beta,n_x-1} & \cdots & g_{\beta,2} & g_{\beta,1}
\end{pmatrix}}_{T^{n_x}_{\beta}}
\underbrace{\begin{pmatrix} u^{n+1}_{1} \\ u^{n+1}_{2} \\ \vdots \\ u^{n+1}_{n_x} \end{pmatrix}}_{u^{n+1}}
+ \underbrace{\begin{pmatrix} f^{n+1}_{1} \\ f^{n+1}_{2} \\ \vdots \\ f^{n+1}_{n_x} \end{pmatrix}}_{f^{n+1}}.


We now approximate the symmetric Riesz derivative as the weighted sum of the left- and right-sided Riemann-Liouville fractional derivatives (see also [39, Formula (28)]), to obtain the differentiation matrix

L^{n_x}_{\beta} = \frac{1}{2}\left(T^{n_x}_{\beta} + (T^{n_x}_{\beta})^{T}\right).

While we simply take the weighted average of the discretized operators, the justification of the weighted two-sided average is laid out in [31, Equation (16)], where all c_i are constant at 1/2 and all f_i are zero. Using this notation it is easy to see that the implicit Euler method for solving (3.2) requires the solution of the algebraic linear system

\left(I_{n_x} - \tau L^{n_x}_{\beta}\right) u^{n+1} = u^{n} + \tau f^{n+1} \qquad (3.4)

at every time-step. We discuss the efficient solution of this system in Section 4. We here focus on the shifted version of the Grünwald formulae but remark that all techniques presented in this paper are also applicable when the unshifted version is used.
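For illustration, one implicit Euler step (3.4) can be carried out as follows. This is a dense Python sketch with made-up problem data; the function name is ours, and we absorb the scaling h_x^{-β} into L^{n_x}_β:

```python
import numpy as np
from scipy.linalg import toeplitz

def riesz_matrix(beta, nx, h):
    """L_beta = (T_beta + T_beta^T)/2, with T_beta the shifted Grunwald
    Toeplitz matrix (first column g_1..g_nx, first row g_1, g_0, 0, ...)."""
    g = np.empty(nx + 1)
    g[0] = 1.0
    for k in range(1, nx + 1):
        g[k] = (1.0 - (beta + 1.0) / k) * g[k - 1]
    T = h ** (-beta) * toeplitz(g[1:], np.r_[g[1], g[0], np.zeros(nx - 2)])
    return 0.5 * (T + T.T)

beta, nx, tau = 1.5, 50, 1e-3
h = 1.0 / (nx + 1)
L = riesz_matrix(beta, nx, h)
A = np.eye(nx) - tau * L

# One implicit Euler step: (I - tau*L) u^{n+1} = u^n + tau*f^{n+1}.
u_old = np.zeros(nx)
f_new = np.ones(nx)
u_new = np.linalg.solve(A, u_old + tau * f_new)
```

In practice one would not form A densely or factor it directly; Section 4 replaces this solve by FFT-based matrix-vector products and preconditioned iterations.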

Next, for Ω = (a_x, b_x) × (a_y, b_y), consider the two-dimensional version of (3.1):

\frac{du(x,y,t)}{dt} - {}_{x}D^{\beta_1}_{R} u(x,y,t) - {}_{y}D^{\beta_2}_{R} u(x,y,t) = f(x,y,t), \quad (x,y,t) \in \Omega \times (0,T],
u(x,y,t) = 0, \quad (x,y,t) \in \Gamma \times [0,T],
u(x,y,0) = 0, \quad (x,y) \in \Omega, \qquad (3.5)

where β_1, β_2 ∈ (1, 2) and Γ denotes the boundary of Ω. Using again the shifted Grünwald finite difference for the spatial derivatives gives, in the x-direction,

{}_{x}D^{\beta_1}_{R} u(x,y,t) = \frac{1}{\Gamma(-\beta_1)} \lim_{n_x\to\infty} \frac{1}{h_x^{\beta_1}} \sum_{k=0}^{M_x} \frac{\Gamma(k-\beta_1)}{\Gamma(k+1)}\, u(x-(k-1)h_x, y, t)

and, in the y-direction,

{}_{y}D^{\beta_2}_{R} u(x,y,t) = \frac{1}{\Gamma(-\beta_2)} \lim_{n_y\to\infty} \frac{1}{h_y^{\beta_2}} \sum_{k=0}^{M_y} \frac{\Gamma(k-\beta_2)}{\Gamma(k+1)}\, u(x, y-(k-1)h_y, t).

With the previously defined weights and employing an implicit Euler method in time, we obtain the following equation:

\frac{1}{\tau}\left(u^{n+1} - u^{n}\right) = \underbrace{\left(I_{n_y} \otimes L^{n_x}_{\beta_1} + L^{n_y}_{\beta_2} \otimes I_{n_x}\right)}_{L^{n_x n_y}_{\beta_1,\beta_2}} u^{n+1} + f^{n+1}. \qquad (3.6)

Note that for the numerical approximation we have used h_x = (b_x - a_x)/(n_x + 1) and h_y = (b_y - a_y)/(n_y + 1), with n_x + 2 and n_y + 2 degrees of freedom in the two spatial dimensions. Again, the boundary degrees of freedom have been eliminated. To proceed with the implicit time stepping, one now has to solve the large linear system of equations

\left(I_{n_x n_y} - \tau L^{n_x n_y}_{\beta_1,\beta_2}\right) u^{n+1} = u^{n} + \tau f^{n+1} \qquad (3.7)


at each time step t_n. Due to the structure of L^{n_x n_y}_{\beta_1,\beta_2} we have

I_{n_x n_y} - \tau L^{n_x n_y}_{\beta_1,\beta_2} = I_{n_y} \otimes \left(\tfrac{1}{2} I_{n_x} - \tau L^{n_x}_{\beta_1}\right) + \left(\tfrac{1}{2} I_{n_y} - \tau L^{n_y}_{\beta_2}\right) \otimes I_{n_x}.

In other words, the vector u^{n+1} ∈ R^{n_x n_y} is the solution of a Kronecker-structured linear system. On the other hand, interpreting u^{n+1}, u^{n}, f^{n+1} as vectorizations of matrices, i.e.,

u^{n+1} = vec(U^{n+1}), \quad u^{n} = vec(U^{n}), \quad f^{n+1} = vec(F^{n+1}), \qquad U^{n+1}, U^{n}, F^{n+1} \in R^{n_x \times n_y},

this can be rewritten as a Sylvester equation of the form

\left(\tfrac{1}{2} I_{n_x} - \tau L^{n_x}_{\beta_1}\right) U^{n+1} + U^{n+1} \left(\tfrac{1}{2} I_{n_y} - \tau L^{n_y}_{\beta_2}\right)^{T} = U^{n} + \tau F^{n+1}. \qquad (3.8)

Note that L^{n_x}_{\beta_1} and L^{n_y}_{\beta_2} are both symmetric but, in general, different. In Section 4 we shall exploit this key connection to efficiently determine a numerical approximation to U^{n+1} and thus to u^{n+1}.
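The equivalence between the Kronecker-structured system and the Sylvester form (3.8) can be verified numerically. The Python sketch below uses small random symmetric stand-ins for the matrices (1/2)I − τL (only the structure matters here, not the fractional-derivative entries):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
nx, ny = 6, 5
# Stand-ins for (1/2)I - tau*L: symmetric, shifted to be well conditioned.
A1 = rng.standard_normal((nx, nx)); A1 = A1 + A1.T + 10 * np.eye(nx)
A2 = rng.standard_normal((ny, ny)); A2 = A2 + A2.T + 10 * np.eye(ny)
RHS = rng.standard_normal((nx, ny))

# Kronecker form: (I_ny (x) A1 + A2 (x) I_nx) vec(U) = vec(RHS),
# with vec() stacking columns (column-major, as in the paper).
K = np.kron(np.eye(ny), A1) + np.kron(A2, np.eye(nx))
u = np.linalg.solve(K, RHS.flatten(order="F"))

# Sylvester form (3.8): A1 U + U A2^T = RHS.
U = solve_sylvester(A1, A2.T, RHS)
assert np.allclose(U.flatten(order="F"), u)
```

The point of the rewriting is exactly this: the n_x n_y × n_x n_y system collapses to an n_x × n_y matrix equation, which the solvers of Section 4 exploit.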

3.2. Time fractional problems. We now assume that the time derivative is of fractional order as well. Hence, in a one-dimensional space let us consider

{}^{C}_{0}D^{\alpha}_{t} u(x,t) - {}_{x}D^{\beta}_{R} u(x,t) = f(x,t), \quad (x,t) \in (0,1) \times (0,T],
u(0,t) = u(1,t) = 0, \quad t \in [0,T],
u(x,0) = 0, \quad x \in [0,1], \qquad (3.9)

where α ∈ (0, 1) and β ∈ (1, 2). We again approximate the Riesz derivative as the weighted sum of the left- and right-sided Riemann-Liouville fractional derivatives used before. The Caputo time-fractional derivative {}^{C}_{0}D^{\alpha}_{t} u(x,t) is then approximated using the unshifted Grünwald-Letnikov approximation to give

T^{n_t+1}_{\alpha} \begin{pmatrix} u(x,t_0) \\ u(x,t_1) \\ \vdots \\ u(x,t_{n_t}) \end{pmatrix} - I_{n_t+1} \begin{pmatrix} {}_{x}D^{\beta}_{R} u(x,t_0) \\ {}_{x}D^{\beta}_{R} u(x,t_1) \\ \vdots \\ {}_{x}D^{\beta}_{R} u(x,t_{n_t}) \end{pmatrix} = I_{n_t+1} \begin{pmatrix} f(x,t_0) \\ f(x,t_1) \\ \vdots \\ f(x,t_{n_t}) \end{pmatrix}, \qquad (3.10)

where

T^{n_t+1}_{\alpha} := \tau^{-\alpha}\begin{pmatrix}
g_{\alpha,0} & 0 & \cdots & \cdots & 0 \\
g_{\alpha,1} & g_{\alpha,0} & \ddots & & \vdots \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
\vdots & & \ddots & g_{\alpha,0} & 0 \\
g_{\alpha,n_t} & \cdots & \cdots & g_{\alpha,1} & g_{\alpha,0}
\end{pmatrix}
= \begin{pmatrix}
\tau^{-\alpha} g_{\alpha,0} & 0 \\
\tau^{-\alpha}\begin{pmatrix} g_{\alpha,1} \\ \vdots \\ g_{\alpha,n_t} \end{pmatrix} & T^{n_t}_{\alpha}
\end{pmatrix}.

Here, T^{n_t+1}_{\alpha} represents the discrete Caputo derivative. In case of a non-zero initial condition u(x,t_0) = u_0 we get

T^{n_t}_{\alpha} \begin{pmatrix} u(x,t_1) \\ u(x,t_2) \\ \vdots \\ u(x,t_{n_t}) \end{pmatrix} - I_{n_t} \begin{pmatrix} {}_{x}D^{\beta}_{R} u(x,t_1) \\ {}_{x}D^{\beta}_{R} u(x,t_2) \\ \vdots \\ {}_{x}D^{\beta}_{R} u(x,t_{n_t}) \end{pmatrix} = \begin{pmatrix} f(x,t_1) \\ f(x,t_2) \\ \vdots \\ f(x,t_{n_t}) \end{pmatrix} - \begin{pmatrix} g_{\alpha,1} \\ g_{\alpha,2} \\ \vdots \\ g_{\alpha,n_t} \end{pmatrix} u(x,t_0). \qquad (3.11)


If we now discretize in space, this leads to the following algebraic linear system in Kronecker form:

\left( (T^{n_t}_{\alpha} \otimes I_{n_x}) - (I_{n_t} \otimes L^{n_x}_{\beta}) \right) u = f, \qquad (3.12)

where u = [u_1^T, u_2^T, . . . , u_{n_t}^T]^T ∈ R^{n_x n_t}. Similarly, we define f = [f_1^T, f_2^T, . . . , f_{n_t}^T]^T. Introducing the matrices U = [u_1, u_2, . . . , u_{n_t}] and F = [f_1, . . . , f_{n_t}], we can rewrite (3.12) as

U (T^{n_t}_{\alpha})^{T} - L^{n_x}_{\beta} U = F, \qquad (3.13)

which shows that U is the solution of a Sylvester matrix equation (see, e.g., [26]), with T^{n_t}_{\alpha} lower triangular and L^{n_x}_{\beta} a dense symmetric matrix.

Finally, if we have fractional derivatives in both a two-dimensional space and time, we consider

{}^{C}_{0}D^{\alpha}_{t} u(x,y,t) - {}_{x}D^{\beta_1}_{R} u(x,y,t) - {}_{y}D^{\beta_2}_{R} u(x,y,t) = f(x,y,t), \quad (x,y,t) \in \Omega \times (0,T],
u(x,y,t) = 0, \quad (x,y,t) \in \Gamma \times [0,T],
u(x,y,0) = 0, \quad (x,y) \in \Omega. \qquad (3.14)

Following the same steps as before, we obtain a space-time discretization written as the following algebraic linear system:

\left( T^{n_t}_{\alpha} \otimes I_{n_x n_y} - I_{n_t} \otimes L^{n_x n_y}_{\beta_1,\beta_2} \right) u = f. \qquad (3.15)

The space-time coefficient matrix now has a double tensor structure, making the numerical solution of the associated equation much more complex than in the previous cases. An effective solution strategy resorts to recently developed algorithms that use approximate tensor computations; see Section 4.4.

3.3. Variable coefficients. In this section we consider a more general FDE, which involves separable variable coefficients. To simplify the presentation, we provide the details only for the two-dimensional space-fractional example, since the other cases can be obtained in a similar manner. We thus consider the case when p_+(x_1, x_2) = p_+^{(1)}(x_1)\, p_+^{(2)}(x_2), and similarly for the other coefficients. The left- and right-sided Riemann-Liouville derivatives enter the equation with different coefficients, therefore the block

\left( p_+ \, {}^{RL}_{0}D^{\beta_1}_{x_1} + p_- \, {}^{RL}_{x_1}D^{\beta_1}_{1} + q_+ \, {}^{RL}_{0}D^{\beta_2}_{x_2} + q_- \, {}^{RL}_{x_2}D^{\beta_2}_{1} \right) u(x_1, x_2, t)

becomes

\left( P_{+,1} T_{\beta_1} U P_{+,2} + P_{-,1} T^{T}_{\beta_1} U P_{-,2} + Q_{+,1} U T_{\beta_2} Q_{+,2} + Q_{-,1} U T^{T}_{\beta_2} Q_{-,2} \right)

or, in Kronecker notation,

\left( P_{+,2} \otimes P_{+,1} T_{\beta_1} + P_{-,2} \otimes P_{-,1} T^{T}_{\beta_1} + Q_{+,2} T^{T}_{\beta_2} \otimes Q_{+,1} + Q_{-,2} T_{\beta_2} \otimes Q_{-,1} \right) vec(U). \qquad (3.16)

A simpler case occurs when the coefficients only depend on one spatial variable; for instance, the following data was used in [51]:

\left( {}^{C}_{0}D^{\alpha}_{t} - p_+ \, {}^{RL}_{0}D^{\beta_1}_{x_1} - p_- \, {}^{RL}_{x_1}D^{\beta_1}_{1} - q_+ \, {}^{RL}_{0}D^{\beta_2}_{x_2} - q_- \, {}^{RL}_{x_2}D^{\beta_2}_{1} \right) u(x_1, x_2, t) = 0

with

p_+ = \Gamma(1.2)\, x_1^{\beta_1}, \quad p_- = \Gamma(1.2)(2 - x_1)^{\beta_1}, \quad q_+ = \Gamma(1.2)\, x_2^{\beta_2}, \quad q_- = \Gamma(1.2)(2 - x_2)^{\beta_2}.

After the Grünwald-Letnikov discretization, the system matrix A reads

A = T^{n_t}_{\alpha} \otimes I_{n_2 n_1} - I_{n_t} \otimes I_{n_2} \otimes \left( P_+ T_{\beta_1} + P_- T^{T}_{\beta_1} \right) - I_{n_t} \otimes \left( Q_+ T_{\beta_2} + Q_- T^{T}_{\beta_2} \right) \otimes I_{n_1}, \qquad (3.17)

where T_{\beta} is the one-sided derivative matrix and P_+, P_-, Q_+, Q_- are diagonal matrices with the grid values of p_+, p_-, q_+, q_-, respectively. Now using

L^{n_x,P}_{\beta_1} = P_+ T_{\beta_1} + P_- T^{T}_{\beta_1}

we obtain the following form of the two-dimensional problem:

A = T_{\alpha} \otimes I_{n_1 n_2} - I_{n_t} \otimes L^{n_x,P}_{\beta_1} \otimes I_{n_2} - I_{n_t} \otimes I_{n_1} \otimes L^{n_x,Q}_{\beta_2}. \qquad (3.18)

In the more general case when the coefficients are sums of separable terms, that is,

p_+(x_1, x_2) = \sum_{j=1}^{r} p_{+,j}(x_1)\, p_{+,j}(x_2),

the equation can be rewritten accordingly. For instance, assuming for simplicity that all coefficients are of order r, we obtain

\sum_{j=1}^{r} \left( P^{(j)}_{+,2} \otimes P^{(j)}_{+,1} T_{\beta_1} + P^{(j)}_{-,2} \otimes P^{(j)}_{-,1} T^{T}_{\beta_1} + Q^{(j)}_{+,2} T^{T}_{\beta_2} \otimes Q^{(j)}_{+,1} + Q^{(j)}_{-,2} T_{\beta_2} \otimes Q^{(j)}_{-,1} \right). \qquad (3.19)

Our methodology can be applied to this case as well. An even more general form is obtained if the coefficients are all of different form, which results in affine decompositions of different orders. For reasons of exposition we will not discuss this case further.

The structure of the linear systems in the variable coefficient case is very similar to the constant coefficient case but also shows crucial differences. While the matrix L^{n_x}_{\beta_1} is a symmetric Toeplitz matrix, the matrix L^{n_x,P}_{\beta_1} is neither symmetric nor of Toeplitz structure. Nonetheless, the fact that it is a sum of terms associated with Toeplitz matrices will be exploited in our solution strategy.

4. Matrix equations solvers. This section is devoted to the introduction of the methodology that allows us to efficiently solve the problems presented in Section 3. In all cases, we start our discussion by considering the constant coefficient case, and then continue with the separable variable coefficient case. We saw that the discretization of all problems leads to systems with a special structure, namely the Toeplitz matrices discussed in the next section. We there recall efficient ways to work with Toeplitz matrices, in particular techniques for fast multiplication, and introduce approximations to Toeplitz matrices that can be used as preconditioners.

We then proceed by introducing methods that are well suited for the numerical solution of large-scale linear matrix equations, and in particular of Sylvester equations, as they occur when we have an additional time-fractional or two-dimensional problem. This is followed by a brief discussion of a tensor-based approach for the time-fractional problem with two space dimensions. We shall mainly report on the recently developed KPIK method [44], which seemed to provide fully satisfactory performance on the problems tested; the code is used as a stand-alone solver. We additionally use this method as a preconditioner within a low-rank Krylov subspace solver.

Lastly, we discuss the tensor-structured problem and introduce a suitable solver based on recently developed tensor techniques [36, 35].

As a general remark for all solvers we are going to survey, we mention that they are all related to Krylov subspaces. Given a matrix A and a vector r_0, the Krylov subspace of size l is defined as

K_l(A, r_0) = span\{ r_0, A r_0, . . . , A^{l-1} r_0 \}.

As l increases, the space dimension grows, and the spaces are nested, namely K_l(A, r_0) ⊆ K_{l+1}(A, r_0). In the following we shall also consider wide generalizations of this original definition, from the use of matrices in place of r_0 to the use of sequences of shifted and inverted matrices instead of A.
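For illustration, an orthonormal basis of K_l(A, r_0) can be built with the Arnoldi process; the following is a generic Python sketch (it is not the extended/rational Krylov basis used by KPIK, and the function name is ours):

```python
import numpy as np

def krylov_basis(A, r0, l):
    """Orthonormal basis of K_l(A, r0) via the Arnoldi process
    with modified Gram-Schmidt orthogonalization."""
    n = r0.size
    Q = np.zeros((n, l))
    Q[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(l - 1):
        w = A @ Q[:, j]
        for i in range(j + 1):              # orthogonalize against previous vectors
            w -= (Q[:, i] @ w) * Q[:, i]
        Q[:, j + 1] = w / np.linalg.norm(w)  # assumes no breakdown for this sketch
    return Q

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 20))
r0 = rng.standard_normal(20)
Q = krylov_basis(A, r0, 5)
assert np.allclose(Q.T @ Q, np.eye(5))
```

The nestedness mentioned above is visible here: the first l columns of the basis for K_{l+1} coincide with the basis for K_l.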

4.1. Computations with Toeplitz matrices. We briefly discuss the properties of Toeplitz matrices and possible solvers. As Ng points out in [34], many direct solution strategies exist that can solve Toeplitz systems efficiently and recursively; we mention [28, 13, 16, 1] among others. One is nevertheless interested in iterative solvers for Toeplitz matrices, as this further reduces the complexity. Additionally, as we want to use the Toeplitz solver within a possible preconditioner for the Sylvester equations, we are not necessarily interested in computing the solution to full accuracy. Note that for convenience we simply use n for the dimension of the matrices in the following general discussion about Toeplitz matrices; it will be clear from the application and the discussion in Section 3 what the specific value of n is. Let us consider a basic Toeplitz matrix of the form

T = \begin{pmatrix}
t_0 & t_{-1} & \cdots & t_{2-n} & t_{1-n} \\
t_1 & t_0 & t_{-1} & & t_{2-n} \\
\vdots & t_1 & t_0 & \ddots & \vdots \\
t_{n-2} & & \ddots & \ddots & t_{-1} \\
t_{n-1} & t_{n-2} & \cdots & t_1 & t_0
\end{pmatrix}.

Circulant matrices, which take the generic form

C = \begin{pmatrix}
c_0 & c_{n-1} & \cdots & c_2 & c_1 \\
c_1 & c_0 & c_{n-1} & & c_2 \\
\vdots & c_1 & c_0 & \ddots & \vdots \\
c_{n-2} & & \ddots & \ddots & c_{n-1} \\
c_{n-1} & c_{n-2} & \cdots & c_1 & c_0
\end{pmatrix},

are special Toeplitz matrices, as each column of a circulant matrix is a circular shift of its preceding column.

It is well known that a circulant matrix C can be diagonalized using the Fourier matrix F³. In more detail, the diagonalization of C is written as C = F^H Λ F, where F is again the Fourier matrix and Λ is a diagonal matrix containing the eigenvalues (see [7, 34]). In order to efficiently compute the matrix-vector multiplication with C, the matrix-vector multiplications with F and Λ need to be available. The evaluation of F and F^H times a vector can be done via the Fast Fourier Transform (FFT, [8, 15]). The computation of the diagonal elements of Λ employs one more FFT. Overall, this means that the matrix-vector multiplication with C can be replaced by applications of the FFT.

³In matlab this matrix can be computed via F=fft(eye(n)).

The n × n Toeplitz matrices mentioned above are not circulant, but they can be embedded into a 2n × 2n circulant matrix as follows:

\begin{pmatrix} T & B \\ B & T \end{pmatrix} \begin{pmatrix} y \\ 0 \end{pmatrix},

with

B = \begin{pmatrix}
0 & t_{n-1} & \cdots & t_2 & t_1 \\
t_{1-n} & 0 & t_{n-1} & & t_2 \\
\vdots & t_{1-n} & 0 & \ddots & \vdots \\
t_{-2} & & \ddots & \ddots & t_{n-1} \\
t_{-1} & t_{-2} & \cdots & t_{1-n} & 0
\end{pmatrix}.
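The embedding above can be sketched in a few lines: the first column of the 2n × 2n circulant is (t_0, . . . , t_{n-1}, 0, t_{1-n}, . . . , t_{-1}), and a product T y falls out of one circular convolution, i.e., three FFTs. The following Python sketch (random test data, function name ours) checks this against a dense product:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec_fft(c, r, x):
    """T x for the Toeplitz matrix with first column c = (t_0,...,t_{n-1})
    and first row r = (t_0, t_{-1},...,t_{1-n}), via circulant embedding."""
    n = len(c)
    # First column of the circulant [[T, B], [B, T]]; the corner entry
    # (here 0) is arbitrary since it multiplies the zero padding.
    circ_col = np.r_[c, 0.0, r[:0:-1]]
    x_ext = np.r_[x, np.zeros(n)]
    y = np.fft.ifft(np.fft.fft(circ_col) * np.fft.fft(x_ext))
    return y[:n].real

rng = np.random.default_rng(2)
n = 8
c = rng.standard_normal(n)
r = np.r_[c[0], rng.standard_normal(n - 1)]
x = rng.standard_normal(n)
assert np.allclose(toeplitz_matvec_fft(c, r, x), toeplitz(c, r) @ x)
```

Each product costs O(n log n) instead of the O(n^2) of the dense multiplication.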

This new structure allows one to exploit the FFT in matrix-vector multiplications with T. Note that these techniques can also be used in the variable coefficient case, as multiplications with

I - L^{n_x,P}_{\beta_1} = I - \left( P_+ T_{\beta_1} + P_- T^{T}_{\beta_1} \right)

can be performed using FFTs and simple diagonal scalings with P_- and P_+.
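A Python sketch of this operator application, using SciPy's FFT-based `matmul_toeplitz` for the two Toeplitz products plus elementwise scaling by the diagonal coefficients (the coefficient values below are random placeholders, not the FDE data):

```python
import numpy as np
from scipy.linalg import toeplitz, matmul_toeplitz

rng = np.random.default_rng(3)
n = 64
c = rng.standard_normal(n)                     # first column of T_{beta1}
r = np.r_[c[0], rng.standard_normal(n - 1)]    # first row of T_{beta1}
p_plus = rng.uniform(1, 2, n)                  # grid values of p_+, p_-
p_minus = rng.uniform(1, 2, n)

def apply_operator(x):
    """(I - (P+ T + P- T^T)) x using FFT-based Toeplitz products."""
    Tx = matmul_toeplitz((c, r), x)
    TTx = matmul_toeplitz((r, c), x)           # T^T: swap first column and row
    return x - (p_plus * Tx + p_minus * TTx)

x = rng.standard_normal(n)
T = toeplitz(c, r)
dense = x - (np.diag(p_plus) @ T + np.diag(p_minus) @ T.T) @ x
assert np.allclose(apply_operator(x), dense)
```

This is exactly the kind of matrix-free application that the iterative solvers below rely on in the variable coefficient case.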

For the constant coefficient case, i.e., the Toeplitz case, there exists a variety of different preconditioners [34]. Here we focus on a classical circulant preconditioner introduced by Strang in [48]. The idea is to approximate the Toeplitz matrix T by a circulant C that can in turn be easily inverted by means of the FFT machinery. The diagonals c_j of this C are determined as

    c_j = t_j,       0 ≤ j ≤ ⌊n/2⌋,
    c_j = t_{j−n},   ⌊n/2⌋ < j < n,
    c_j = c_{n+j},   0 < −j < n.

Here ⌊n/2⌋ denotes the largest integer less than or equal to n/2. Note that other circulant approximations are possible but will not be discussed here [34].
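In code, the first column of the Strang circulant can be assembled as follows (our sketch; col[k] = t_k and row[k] = t_{-k} hold the diagonals of T):

```python
import numpy as np

def strang_first_column(col, row):
    # Keep the central diagonals of T and wrap them around periodically:
    # c_j = t_j for 0 <= j <= n/2,  c_j = t_{j-n} for n/2 < j < n.
    n = len(col)
    m = n // 2
    c = np.empty(n)
    c[:m + 1] = col[:m + 1]
    for j in range(m + 1, n):
        c[j] = row[n - j]          # t_{j-n} = t_{-(n-j)}
    return c
```

Applying C^{-1} then again costs only FFTs, e.g. via np.fft.ifft(np.fft.fft(r) / np.fft.fft(c)).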

The computational strategy described above can be used to solve the linear system in (3.4) associated with Problem 1, namely

    (I_{nx} − τ L^{nx}_β) u^{n+1} = u^n + f.

In [52] it was shown that the coefficient matrix I_{nx} − τ L^{nx}_β is a strictly diagonally dominant M-matrix. This allows us to use a symmetric Krylov subspace solver such as the Conjugate Gradient method (cg) [20], which requires matrix-vector products with the coefficient matrix; for more details on iterative Krylov subspace solvers we refer the reader to [19, 41, 46]. The coefficient


matrix has Toeplitz structure, therefore the circulant approximation C ≈ I_{nx} − τ L^{nx}_β can be used as a preconditioner for cg. For the fast convergence of cg it is sufficient that the eigenvalues of the preconditioned matrix form a small number of tiny clusters, which is known to be the case for the Strang circulant preconditioner, since it gives a single eigenvalue cluster around 1 [27].

The numerical treatment is more complicated in the variable coefficient case. The authors of [11] point out that the use of circulant preconditioners for this case is not as effective and suggest the use of the following approximation:

    −L^{nx,P}_{β1} = −(P+ T_{β1} + P− T_{β1}^T) ≈ P+ A + P− A^T =: W,    (4.1)

where W ≈ −L^{nx,P}_{β1} is used for preconditioning purposes whenever a system involving L^{nx,P}_{β1} needs to be solved, and

    A = (1/h_x^{β1}) [  2  −1            ]
                     [ −1   2  ...       ]
                     [     ...  ...  −1  ]
                     [          −1    2  ],

which is simply the central difference approximation to the 1D Laplacian. This approximation is good when the fractional differentiation parameter is close to two, i.e., β1 ≥ 1.5. Alternatively, for smaller differentiation parameters one could use

    A = (1/h_x^{β1}) [ 1  −1          ]
                     [    1  ...      ]
                     [       ...  −1  ]
                     [            1   ],

as suggested in [11]. We can use a nonsymmetric Krylov solver such as Gmres [42], where the matrix multiplication is performed using the FFT-based approach presented above, and the tridiagonal matrix P+ A + P− A^T as preconditioner, applied by means of its LU factorization.
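A sketch of the preconditioner assembly (our function names; dense matrices for readability, whereas an implementation would store only the three diagonals of W; the order=1 variant reflects our reading of the bidiagonal alternative above):

```python
import numpy as np

def variable_coeff_preconditioner(p_plus, p_minus, h, beta, order=2):
    # W = P+ A + P- A^T, cf. (4.1); order=2 uses the central difference
    # Laplacian, order=1 the first-order difference alternative.
    n = len(p_plus)
    if order == 2:
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**beta
    else:
        A = (np.eye(n) - np.eye(n, k=1)) / h**beta
    return np.diag(p_plus) @ A + np.diag(p_minus) @ A.T
```

Since W is tridiagonal, each preconditioner application inside Gmres reduces to one forward and one backward sweep of its LU factorization.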

One of the major concerns is carrying the approaches for the one-dimensional case over to the higher dimensional settings. We now present techniques well established for matrix equations and show how they can be used for the solution of fractional differential equations in more than one dimension or when the time derivative is also of fractional order.

We focus in particular on the connection to matrix equation solvers, which we introduce in the next section. In addition, we mention the possible use of low-rank versions [47, 25] of well-known Krylov subspace solvers, which have become popular as they allow for the solution of tensor-valued equations while maintaining the low-rank nature of the solution. This approach is certainly applicable in our case when combined with a suitable preconditioner. For a more detailed discussion we refer to [25].

4.2. Numerical solution of the Sylvester equation. The numerical solution of linear matrix equations of the form

AU + UB = C (4.2)


arises in a large variety of applications; we refer the reader to [45] for a detailed description. Note that in general, A and B are allowed to have different dimensions, so that the right-hand side C and the unknown solution U are rectangular matrices. A unique solution is ensured if A and −B have disjoint spectra. Robust numerical procedures for solving (4.2) when A and B have modest dimensions - up to a few hundreds - have been widely tested, and the Bartels-Stewart algorithm has emerged as the method of choice [3]. The method relies on a full Schur decomposition of the two matrices, and then on a backward substitution of the transformed problem.

We mention here another method that can be used when either A or B is small and already in triangular form, as is the case for equation (3.13) in Problem 2 when nt is small. Let U = [u_1, . . . , u_{nB}], where nB is the size of B, and C = [c_1, . . . , c_{nB}]. Explicit inspection shows that if B is upper triangular, then the first column of U can be obtained by solving the shifted system (A + B_{11} I) u_1 = c_1. All subsequent columns of U may be obtained with a backward substitution as

    (A + B_{ii} I) u_i = c_i − Σ_{k=1}^{i−1} u_k B_{ki},    i = 2, . . . , nB.

Each of these systems may be solved by cg equipped with a circulant preconditioner, as for Problem 1. This strategy is appealing when nB, the size of B, is small.

Generically, however, we consider the case where the coefficient matrices A and B are both extremely large, typically dense, making full spectral decompositions and backward solves prohibitively expensive in terms of both computational and memory requirements. On the other hand, C usually has much lower rank than the problem dimension, which makes low-rank approximation procedures more appealing. Note that the matrix C in our case reflects the right-hand side of the FDE, and as this often has a certain smoothness, we perform a truncation via a truncated singular value decomposition to obtain a low-rank representation. For example, instead of solving (3.13) we replace the right-hand side F by a low-rank approximation F ≈ F̃ with F̃ = W1 W2^T, where the matrices W1, W2 only have a small number of columns and are computed via a truncated SVD of F. Based on the low-rank nature of the right-hand side of the Sylvester equation, efficient methods seek an approximation Ũ ≈ U as the product of low-rank matrices, Ũ = V Y W^T, for some matrix Y, and V, W having much fewer columns than rows. These low-rank approximations are of special interest, since in general U is dense, and thus hard or impossible to store when A and B are truly large, as in our cases. Among these approaches are Krylov subspace projection and ADI methods, the latter being a particular Krylov subspace method [45]. We consider the following general projection methods for (4.2): given two approximation spaces Range(V) and Range(W), an approximation Ũ = V Y W^T is determined by requiring that the residual matrix R := A Ũ + Ũ B − C satisfies4

    V^T R W = 0.

Assuming that both V and W have orthonormal columns, and using Ũ = V Y W^T, the condition above gives the reduced matrix equation

    (V^T A V) Y + Y (W^T B W) − V^T C W = 0;

for V^T A V and W^T B W of small size, this equation can be efficiently solved by the Bartels-Stewart method, giving the solution Y. Different choices of Range(V) and Range(W) lead to different

4 It can be shown that this requirement corresponds to an orthogonality (Galerkin) condition of the residual vector for the equation in Kronecker form, with respect to the space spanned by Range(W ⊗ V) [45].


approximate solutions. The quality of such an approximation depends on whether certain spectral properties of the coefficient matrices A and B are well represented in the two approximation spaces. Among the most successful choices are Rational Krylov subspaces: assuming C can be written as C = C1 C2^T, Rational Krylov subspaces generate the two spaces

    Range(V) = Range([C1, (A − σ1 I)^{−1} C1, (A − σ2 I)^{−1} C1, . . .])

and

    Range(W) = Range([C2, (B^T − η1 I)^{−1} C2, (B^T − η2 I)^{−1} C2, . . .]),

for specifically selected shifts σi, ηi, i = 1, 2, . . .. Note that a shift σ of multiplicity k can be used, as long as terms with powers (A − σI)^{−j}, j = 1, . . . , k are included in the basis. In our numerical experience we found the choice σi, ηi ∈ {0, ∞} particularly effective: for Range(V) this choice corresponds to an approximation space generated by powers of A and A^{−1}, and it was first proposed under the name of Extended Krylov subspace [12]. In [44] it was shown for B = A^T that such a space can be generated progressively as

    EK(A, C1) = Range([C1, A^{−1} C1, A C1, A^{−2} C1, A^2 C1, A^{−3} C1, . . .])

and expanded until the approximate solution Ũ is sufficiently good. Note that in a standard implementation that sequentially generates EK(A, C1), two "blocks" of new vectors are added at each iteration, one block multiplied by A, and one "multiplied" by A^{−1}. The block size depends on the number of columns in C1, although as the iteration proceeds this number could decrease, in case rank deficiency occurs. The implementation of the resulting projection method with EK(A, C1) as approximation space was called KPIK in [44] for the Lyapunov matrix equation, that is (4.2) with B = A^T and C1 = C2.

The procedure in [44] can be easily adapted to the case of general B, so that also the space EK(B^T, C2) is constructed [45]. For consistency we shall also call KPIK our implementation for the Sylvester equation.

The effectiveness of the procedure can be measured in terms of both the dimension of the approximation space needed to achieve the required accuracy and the computational time. The two are tightly related. Indeed, the generation of EK(A, C1) requires solving systems with A, whose cost is problem dependent. Therefore, the larger the space, the higher this cost, increasing the overall computational time. The space dimension determines the rate of convergence of the approximate solution Ũ towards U; how the accuracy improves as the space dimension increases was recently analyzed in [22] for KPIK applied to the Lyapunov equation, and in [4] as a particular case of Rational Krylov space methods applied to the Sylvester equation; see Section 4.3.

From a computational standpoint, our application problem is particularly demanding because the iterative generation of the extended space requires solving systems with A and B, whose size can be very large; see the numerical experiments. To alleviate this computation, in our implementation the inner systems with A and B are solved inexactly, that is, by means of a preconditioned iterative method, with a sufficiently high accuracy so as to roughly maintain the KPIK rate of convergence expected with the exact (to machine precision) application of A^{−1} and B^{−1}. In our numerical experiments we shall call iKPIK the inexact version of the method. Note that at this point we need to distinguish between the constant and the variable coefficient case. In the constant coefficient case we inexactly solve for the Toeplitz matrices by using the previously mentioned circulant-preconditioned cg method. In the variable coefficient case we can use the preconditioned Gmres approach that uses the classical finite difference matrix as the preconditioner.

Finally, we observe that our stopping criterion for the whole procedure is based on the residual norm. In accordance with other low-rank approximation methods, the Frobenius norm of the residual matrix, namely ‖R‖² = Σ_{i,j} R_{ij}², can be computed without explicitly storing the whole


residual matrix R. Indeed, using C1 = V γ1 and C2 = W γ2, we have

    R = [AV, V] [ 0   Y         ] [AW, W]^T = Q1 ρ1 [ 0   Y         ] ρ2^T Q2^T,
                [ Y  −γ1 γ2^T  ]                    [ Y  −γ1 γ2^T  ]

where the two skinny QR factorizations [AV, V] = Q1 ρ1 and [AW, W] = Q2 ρ2 can be updated as the space expands5. Therefore,

    ‖R‖ = ‖ ρ1 [ 0   Y         ] ρ2^T ‖,
               [ Y  −γ1 γ2^T  ]

whose storage and computational costs do not depend on the problem size, but only on the approximation space dimensions.

4.3. Considerations on the convergence of the iterative solvers in the constant coefficient case. The performance of the iterative methods discussed so far depends, in the symmetric case, on the distribution of the eigenvalues of the coefficient matrices, and in the nonsymmetric case, on more complex spectral information such as the field of values. We start by providing a simple but helpful bound on the spectrum of the matrix obtained after discretizing the fractional derivatives.

Lemma 1. For 1 < β < 2, the spectrum of the matrix Lβ := (1/2)(Tβ + Tβ^T) is contained in the open interval (−2β h^{−β}, 0).

Proof. From [27], for 1 < β < 2, we recall the following useful properties of the coefficients g_{β,k}:

    g_{β,0} = 1,  g_{β,1} = −β < 0,  g_{β,2} > g_{β,3} > · · · > 0,

    Σ_{k=0}^{∞} g_{β,k} = 0,  Σ_{k=0}^{n} g_{β,k} < 0  for all n ≥ 1.

We can now adapt the proof in [27] to get an estimate of the eigenvalues of Lβ = (1/2)(Tβ + Tβ^T). Recall the structure of Tβ:

    Tβ = h^{−β} [ g_{β,1}  g_{β,0}                                    ]
                [ g_{β,2}  g_{β,1}  g_{β,0}                           ]
                [ g_{β,3}  g_{β,2}  g_{β,1}  g_{β,0}                  ]
                [   ...      ...      ...      ...      ...           ]
                [   ...                ...    g_{β,1}  g_{β,0}        ]
                [ g_{β,n}  g_{β,n−1}   ...    g_{β,2}  g_{β,1}        ].

Hence, the Gershgorin circles of Lβ are all centered at h^{−β} g_{β,1} = −β h^{−β}. Moreover, the largest radius is obtained in row ⌊n/2 + 1⌋ and thus is at most (depending on n being odd or even)

    r_max = h^{−β} Σ_{k=0, k≠1}^{⌊n/2+1⌋} g_{β,k} < −h^{−β} g_{β,1} = β h^{−β}.

This now implies σ(Lβ) ⊂ (−2β h^{−β}, 0).

5 The residual norm computation could be made even cheaper in the exact case, since then the skinny QR factorization would not have to be done explicitly [45].


The result shows that as the spatial mesh is refined, the eigenvalues of Lβ spread out on the negative half of the real line, and preconditioning is needed for fast convergence of the iterative scheme for Problem 1. The eigenvalue properties are also crucial to assess the performance of the Extended Krylov subspace method described in the previous section, which we use as a solver in both Problem 2 and Problem 3. Assume that C = c1 c2^T. In [4] it was shown, for A and B symmetric, that the residual Frobenius norm is bounded as

    ‖R‖_F / (‖c1‖ ‖c2‖) ≤ 4 max{γ_{mA,A,B}, γ_{mB,B,A}} + ξ (γ_{mA,A,B} + γ_{mB,B,A}),

where γ_{mA,A,B} and γ_{mB,B,A} are quantities associated with the approximation spaces in A and in B of dimension mA and mB, respectively, and ξ is an explicit constant independent of the approximation space dimensions. For A = B, which in our Problem 3 is obtained for instance whenever β1 = β2, it holds (see [22])

    γ_{mA,A,B} = γ_{mB,B,A} = ((κ^{1/4} − 1)/(κ^{1/4} + 1))^{2 mA},   κ = λ_max(A)/λ_min(A),

where κ is the condition number of A. In the notation of Problem 3, the rate of convergence of the Extended Krylov subspace method when β1 = β2 is thus driven by κ^{1/4}, where κ is the condition number of (1/2) I − Lβ.

Finally, if one were to consider the (symmetric) Kronecker formulation of the Sylvester equation in (3.13), that is

    (I1 ⊗ A + B ⊗ I2) vec(U) = vec(C),

the following estimate for the eigenvalues of the coefficient matrix would hold:

    σ(I1 ⊗ A + B ⊗ I2) ⊂ ( 1, 1 + 2β1 τ / h^{β1} + 2β2 τ / h^{β2} ).

The assertion follows from the fact that the eigenvalues ν of I1 ⊗ A + B ⊗ I2 are given by the eigenvalues of A and B via ν_{i,j} = λ_i(A) + µ_j(B).
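This Kronecker eigenvalue relation is easy to verify numerically on small random symmetric matrices (the sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3)); A = A + A.T
B = rng.standard_normal((4, 4)); B = B + B.T
# M = I1 (x) A + B (x) I2 -- its eigenvalues are all sums lambda_i(A) + mu_j(B)
M = np.kron(np.eye(4), A) + np.kron(B, np.eye(3))
sums = np.add.outer(np.linalg.eigvalsh(B), np.linalg.eigvalsh(A)).ravel()
assert np.allclose(np.sort(np.linalg.eigvalsh(M)), np.sort(sums))
```

The two Kronecker terms commute, so they are simultaneously diagonalizable and the spectra simply add.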

4.4. Tensor-valued equation solver. The efficient solution of tensor-valued equations has recently seen much progress regarding the development of effective methods and their corresponding analysis [50, 2, 35]. General introductions to this very active field of research can be found in [18, 24] and in the literature mentioned there.

Although a variety of solvers for tensor-valued equations exists, we do not aim at providing a survey of the available methods, but rather focus on the dmrg (Density Matrix Renormalization Group) method as presented in [35], which is part of the tensor train (TT) toolbox [36, 35, 37]. The guiding principle is that in the equation A x = f, the tensor-structured matrix A and the right-hand side are approximated by a simpler (low tensor rank) matrix A_TT and a vector f_TT, respectively. The linear system problem is then formulated as a minimization problem, i.e.,

    min ‖f_TT − A_TT x_TT‖.    (4.3)


Alternatively, the iterative solver amen [10] has shown great potential and is a suitable candidate to solve our fractional problem as well. In our context, the tensor A is given by

    A = T^{nt}_α ⊗ I_{nx} ⊗ I_{ny} − I_{nt} ⊗ ( (1/h^{β1}) I_{ny} ⊗ L^{nx}_{β1} + L^{ny}_{β2} ⊗ (1/h^{β2}) I_{nx} )
      = (T^{nt}_α ⊗ I_{nx} ⊗ I_{ny}) − (I_{nt} ⊗ (1/h^{β1}) I_{ny} ⊗ L^{nx}_{β1}) − (I_{nt} ⊗ L^{ny}_{β2} ⊗ (1/h^{β2}) I_{nx}).

One of the most used decompositions is the canonical decomposition of a tensor A ([24]), given by

    A(i1, i2, i3) = Σ_{α=1}^{r} U1(i1, α) U2(i2, α) U3(i3, α),

with A(i1, i2, i3) the entry (i1, i2, i3) of the tensor A, canonical factors Ui, i = 1, 2, 3, and canonical rank r. While this is already a significant reduction compared to the original tensor, employing this decomposition can be numerically inefficient. Instead, we use the recently introduced tensor train (TT) format: A is approximated by

    A ≈ A_TT(i1, i2, i3) = G1(i1) G2(i2) G3(i3),

where Gk(ik) is an r_{k−1} × r_k matrix for each fixed ik. Note that we find a tensor A_TT of low tensor rank that approximates the full TT decomposition of the tensor A.

In order to solve this tensor-valued equation we now employ the dmrg algorithm with the TT decomposition as introduced by Oseledets in [35]. Before briefly describing the method we note that these techniques have been developed before in quantum systems simulation using different terminology; there the TT format is known as the matrix product state (MPS) format (see [18] for references). As pointed out earlier, the dmrg method transforms the problem of solving the linear system into a least squares problem (4.3), where all full tensors are replaced by their TT approximations of lower tensor rank, i.e., f_TT is the (TT) tensor approximation of f. The dmrg method can be understood as a modified alternating least squares method. While a standard alternating least squares algorithm fixes all but one core and then optimizes the functional, the TT dmrg method fixes all but two cores Gk, Gk+1 to compute a solution. The algorithm then moves to the next pair of cores, and so on. For more details we refer to Oseledets [35].

Additionally, one can use a TT version of well-known Krylov methods such as gmres ([42]) adapted to the TT framework (see [9]), but we have not employed this method here.

5. Numerical results. In this section we report on our numerical experience with the algorithms proposed in the previous sections. All tests are performed on a Linux Ubuntu compute server using 4 Intel Xeon E7-8837 CPUs running at 2.67 GHz, each equipped with 256 GB of RAM, using MATLAB 2012.

We illustrate the effectiveness of our proposed methods by testing the convergence for all problems presented earlier, both in the constant and the variable coefficient case. Our goal is to obtain robustness with respect to the discretization parameter in both the temporal and the spatial dimensions. Additionally, we test all problems for a variety of different orders of differentiation to illustrate that the methods are suitable for reasonable parameter choices.

As this work discusses various formulations and several solution strategies, we summarize the suggested solution methods in Table 5.1.


Fig. 5.1: Comparison of numerical solutions when using two different values for β, with zero initial condition and zero Dirichlet boundary condition (510 and 10 space and time points, respectively).

    Problem setup                Solution method
    Time + 1D FDE (3.4)          PCG
    Time + 2D FDE (3.8)          iKPIK
    Frac. Time + 1D FDE (3.12)   iKPIK
    Frac. Time + 2D FDE (3.15)   dmrg

Table 5.1: Overview of the solvers. The iterative solution in the constant coefficient case always uses circulant-preconditioned cg; in the variable coefficient case, a preconditioned nonsymmetric solver.

5.1. Fractional in space: Constant and variable coefficients.

One-dimensional example. We use a zero initial condition and a zero Dirichlet boundary condition. The result for two different values of β is shown in Figure 5.1 and is simply given to illustrate that the parameter β has a significant influence on the solution u. Table 5.2 shows the results for a variety of discretization levels with two values of β. For this problem the forcing term was given by

    f = 80 sin(20x) cos(10x).

It can be seen that the Strang preconditioner [48] performs exceptionally well, with only 6 to 9 iterations needed for convergence and no dependence on the mesh size. A tolerance of 10^{−6} for the relative residual was used as the stopping criterion. The time step is chosen as τ = hx/2.

For the variable coefficient case we consider the parameters p+ = Γ(1.2) x1^{β1} and p− = Γ(1.2) (2 − x1)^{β1}. It is easily seen from Table 5.3 that the iteration numbers are very robust using the approximation by the difference matrices, and that timings remain very moderate.


    DoF        β = 1.3           β = 1.7
               avg. its (time)   avg. its (time)
    32768      6.0 (1.52)        7.0 (1.83)
    65536      6.0 (2.27)        7.0 (2.37)
    131072     6.0 (4.05)        7.6 (5.40)
    262144     6.0 (14.91)       8.4 (20.44)
    524288     6.0 (36.31)       8.9 (50.19)
    1048576    6.0 (63.69)       9.0 (91.49)

Table 5.2: Constant coefficient case: average PCG iteration numbers for 10 time steps and two values of β, as the mesh is refined; in parentheses is the total CPU time (in secs).

    DoF        β = 1.3           β = 1.7
               avg. its (time)   avg. its (time)
    16384      7.0 (0.48)        6.3 (0.38)
    32768      6.8 (0.82)        6.0 (0.82)
    65536      6.4 (1.52)        6.0 (1.21)
    131072     5.9 (2.97)        5.9 (2.81)
    262144     5.4 (5.77)        6.0 (6.62)
    524288     5.1 (12.14)       6.0 (13.42)
    1048576    4.8 (40.03)       6.3 (48.15)

Table 5.3: Variable coefficient case: average GMRES iteration numbers for 8 time steps and two values of β, as the mesh is refined; in parentheses is the total CPU time (in secs).

Two-dimensional example. This problem now includes a second spatial dimension together with a standard derivative in time. The spatial domain is the unit square. The problem is characterized by a zero Dirichlet condition with a zero initial condition. The time-dependent forcing term is given as

    F = 100 sin(10x) cos(y) + sin(10t) x y,

and the solution for this setup is illustrated in Figure 5.2, where we show the solution at two different time steps.

Table 5.4 shows the average ikpik iteration numbers alongside the total computing time. The tolerance for iKPIK is set to 10^{−6}, to illustrate that we can compute the solution very accurately, and the tolerance for the inner iterative solver is chosen to be 10^{−12}. The digits summarized in Table 5.4 give the average number of ikpik iterations over 8 time steps. The variable coefficients are given by

    p+ = Γ(1.2) x1^{β1},  p− = Γ(1.2) (2 − x1)^{β1},  q+ = Γ(1.2) x2^{β2},  q− = Γ(1.2) (2 − x2)^{β2}.

Note that the largest problem given in Table 5.4 corresponds to 33,554,432 unknowns. A further increase in the matrix dimensions would require storing the vector of unknowns in a low-rank format.


(a) First time-step    (b) Tenth time-step

Fig. 5.2: First and tenth time-step solutions of the 2D fractional differential equation.

5.2. Time and space fractional. Additionally, we consider the challenging case when the temporal derivative is also of fractional type.

One spatial dimension. Here the forcing term was defined by f = 8 sin(10x), while a zero Dirichlet condition on the spatial interval was used. Table 5.5 summarizes the results for ikpik. Here we discretized both the temporal and the spatial domain using the same number of elements. For the variable coefficient case we use the finite difference approximation presented earlier. We noticed that the convergence of the outer iKPIK solver was dependent on the accuracy of the inner solves. Due to the effectiveness of the preconditioner, we set a tight tolerance of 10^{−13} for the inner scheme, and use a tolerance of 10^{−6} for the iKPIK convergence. When stating the degrees of freedom for each


                  Variable Coeff.                    Constant Coeff.
    nx     ny     β1 = 1.3        β1 = 1.7          β1 = 1.3        β1 = 1.7
                  β2 = 1.7        β2 = 1.9          β2 = 1.7        β2 = 1.9
                  it (CPU time)   it (CPU time)     it (CPU time)   it (CPU time)
    1024   1024   2.3 (19.35)     2.5 (15.01)       2.9 (9.89)      2.5 (18.51)
    1024   2048   2.8 (47.17)     2.9 (22.25)       3.0 (23.07)     3.2 (22.44)
    2048   2048   3.0 (76.72)     2.6 (36.01)       2.0 (51.23)     2.3 (34.43)
    4096   4096   3.0 (171.30)    2.6 (199.82)      2.0 (164.20)    2.2 (172.24)

Table 5.4: The solver ikpik, for a variety of meshes and two different values of β in both the constant and the variable coefficient cases. Shown are the iteration numbers and the total CPU time.

dimension, one has to note that implicitly this method is solving a linear system of tensor form that has dimensionality nt·nx × nt·nx, so for the largest example shown in Table 5.5 this leads to roughly 70 million unknowns. We see that in the constant coefficient case the method performs very

    Variable Coeff.
    nt     nx     β = 1.7, α = 0.5    β = 1.1, α = 0.9    β = 1.9, α = 0.2
                  it (CPU time, MI)   it (CPU time, MI)   it (CPU time, MI)
    1024   1024   10 (0.33, 31)       43 (1.87, 39)       4 (0.14, 15)
    2048   2048   11 (0.59, 33)       57 (4.92, 48)       4 (0.21, 15)
    4096   4096   13 (2.64, 36)       74 (30.52, 60)      4 (0.54, 15)
    8192   8192   14 (5.51, 39)       95 (91.13, 75)      4 (1.31, 16)

    Constant Coeff.
    nt     nx     β = 1.7, α = 0.5    β = 1.1, α = 0.9    β = 1.9, α = 0.2
                  it (CPU time, MI)   it (CPU time, MI)   it (CPU time, MI)
    1024   1024   6 (0.17, 18)        14 (0.39, 15)       4 (0.13, 17)
    2048   2048   6 (0.27, 18)        16 (0.79, 18)       4 (0.21, 17)
    4096   4096   6 (0.65, 20)        18 (2.52, 19)       4 (0.45, 17)
    8192   8192   7 (2.43, 20)        20 (6.51, 20)       4 (1.25, 20)

Table 5.5: Preconditioned low-rank bicg and ikpik, for a variety of meshes and different values of α and β. Shown are the iteration numbers, the elapsed CPU time, and the maximum iteration numbers for the inner solver (MI).

robustly with respect to mesh refinement, and that the iteration numbers needed for iKPIK remain of moderate size when the differentiation parameter is changed. From the results for the variable


coefficient case it can be seen that

    −L^{nx,P}_{β1} ≈ P+ A + P− A^T    (5.1)

is less effective in this setup.

In Figure 5.3 we illustrate the performance of both the preconditioner and the iKPIK method for a variety of differentiation parameters in the variable coefficient setup. It can be seen that iKPIK often needs only a small number of iterations, whereas the preconditioner is not always as effective as the one in the constant coefficient case.

Two dimensions. In this section we briefly illustrate how the tensorized problems can be solved. Our implementation is based on the recently developed tensor-train format [36, 35, 37]. The forcing term is given by f = 100 sin(10x) cos(y), while zero Dirichlet conditions are imposed, together with a zero initial condition. The approximations of the right-hand side f_TT and the tensor A_TT use the round function within the TT-toolbox. The tolerance for this was set to 10^{−6}. The dmrg method recalled in Section 4.4 ([35]) was used, with a convergence tolerance set to 10^{−6}. Figure 5.4 illustrates the different behaviour of the solution for two different values of the temporal differentiation order.

We next illustrate that the performance of dmrg for our tensor equation with constant coefficients is robust with respect to changes in the system parameters, such as the orders of differentiation and varying meshes. Table 5.6 shows the dmrg iteration numbers for 4 different mesh sizes, both in time and space, and three different choices of differentiation orders. The iteration numbers are constant with the mesh size, while they show a very slight increase with the different discretization order. It is also possible to include a preconditioner within dmrg, but we

    nt     ny     nx     β1 = 1.3, β2 = 1.5   β1 = 1.7, β2 = 1.9   β1 = 1.9, β2 = 1.1
                         α = 0.3              α = 0.7              α = 0.2
                         it                   it                   it
    512    256    512    7                    8                    8
    256    512    512    7                    8                    8
    1024   512    2048   7                    8                    9
    1024   1024   4096   9                    9                    9

Table 5.6: Performance of dmrg as the mesh is refined, for different values of the differentiation orders.

have not done so here. We also want to illustrate the performance of the tensor method for use with a variable coefficient setup as presented earlier. For this we consider the coefficients

    p+ = Γ(1.2) x1^{β1} x2,  p− = Γ(1.2) (2 − x1)^{β1} x2,  q+ = Γ(1.2) x2^{β2} x1,  q− = Γ(1.2) (2 − x2)^{β2} x1,

and show the results in Table 5.7. Here the newly developed amen solver [10] showed slightly better performance than dmrg, but was also not tailored to our problem, i.e., no preconditioning was incorporated.


(a) GMRES iterations    (b) KPIK iterations

Fig. 5.3: Results for a mesh with 1024 degrees of freedom both in space and time. We put a mesh of width 0.05 on both the differentiation parameters in time and space, α ∈ [0.05, 1], β ∈ [1.05, 2]. We switch between preconditioners at β = 1.3.

6. Conclusions. We pointed out in the introduction that FDEs will be a crucial tool in mathematical modelling in the coming years. We have introduced four model problems that use the well-established Grunwald-Letnikov scheme (and some of its descendants). These model problems were chosen such that their main numerical features are to be found in many FDE problems. In particular, we derived the discrete linear systems and matrix equations of Sylvester type that represent


(a) α = 0.3    (b) α = 0.7

Fig. 5.4: Problem 4. Solution at time-step 2 (below) and time-step 20 (above), for two different orders of differentiation for the derivative in time.

    nt     ny     nx     β1 = 1.3, β2 = 1.5   β1 = 1.7, β2 = 1.9   β1 = 1.9, β2 = 1.1
                         α = 0.3              α = 0.2              α = 0.7
                         it                   it                   it
    512    256    512    14                   14                   15
    512    512    512    14                   13                   14
    1024   512    1024   16                   18                   15
    1024   1024   1024   16                   20                   14

Table 5.7: Performance of amen as the mesh is refined, for different values of the differentiation orders.

the discretized version of the space-, time- and space-time-fractional derivatives for both constant and variable coefficients. For all problems it was crucial to notice the Toeplitz or Toeplitz-like structure of the discrete differentiation operator. While the simplest model problem only required a circulant preconditioner in combination with the preconditioned cg method, the more involved problems needed further study. For realistic discretization levels it was no longer possible to explicitly store the approximate solution vectors. We thus considered low-rank matrix equation solvers, and in particular focused on the successful kpik method in its inexact version iKPIK. This method was extremely effective when we again used the circulant-preconditioned Krylov solver to solve any linear systems involving a differentiation matrix. The last and most challenging problem was then solved using recently developed tensor methodology, and while future research still needs to be devoted to understanding these solvers better, the numerical results are very promising.

Acknowledgements. Martin Stoll would like to thank Anke Stoll for introducing him to fractional calculus. Additionally, he is indebted to Akwum Onwunta and Sergey Dolgov for help with the TT Toolbox. The research of the second author was partially supported by the Italian research fund INdAM-GNCS 2013 “Strategie risolutive per sistemi lineari di tipo KKT con uso di informazioni strutturali”.

REFERENCES

[1] G. Ammar and W. Gragg, Superfast solution of real positive definite Toeplitz systems, SIAM Journal on Matrix Analysis and Applications, 9 (1988), pp. 61–76.
[2] J. Ballani and L. Grasedyck, A projection method to solve linear systems in tensor format, Numerical Linear Algebra with Applications, 20 (2013), pp. 27–43.
[3] R. H. Bartels and G. W. Stewart, Algorithm 432: Solution of the matrix equation AX + XB = C, Communications of the ACM, 15 (1972), pp. 820–826.
[4] B. Beckermann, An error analysis for rational Galerkin projection applied to the Sylvester equation, SIAM J. Numer. Anal., 49 (2011), pp. 2430–2450.
[5] K. Burrage, N. Hale, and D. Kay, An efficient implicit FEM scheme for fractional-in-space reaction-diffusion equations, SIAM Journal on Scientific Computing, 34 (2012), pp. A2145–A2172.
[6] M. Caputo, Elasticità e dissipazione, Zanichelli, Bologna, 1969.
[7] M. Chen, On the solution of circulant linear systems, SIAM J. Numer. Anal., 24 (1987), pp. 668–683.
[8] J. Cooley and J. Tukey, An algorithm for the machine calculation of complex Fourier series, Mathematics of Computation, 19 (1965), pp. 297–301.
[9] S. Dolgov, TT-GMRES: solution to a linear system in the structured tensor format, Russian Journal of Numerical Analysis and Mathematical Modelling, 28 (2013), pp. 149–172.
[10] S. V. Dolgov and D. V. Savostyanov, Alternating minimal energy methods for linear systems in higher dimensions. Part II: Faster algorithm and application to nonsymmetric systems, arXiv preprint arXiv:1304.1222, (2013).
[11] M. Donatelli, M. Mazza, and S. Serra-Capizzano, Spectral analysis and structure preserving preconditioners for fractional diffusion equations, tech. rep., 2015.
[12] V. Druskin and L. Knizhnerman, Extended Krylov subspaces: approximation of the matrix square root and related functions, SIAM J. Matrix Anal. Appl., 19 (1998), pp. 755–771.
[13] J. Durbin, The fitting of time-series models, Revue de l'Institut International de Statistique, (1960), pp. 233–244.
[14] G. Fix and J. Roof, Least squares finite-element solution of a fractional order two-point boundary value problem, Computers & Mathematics with Applications, 48 (2004), pp. 1017–1033.
[15] M. Frigo and S. G. Johnson, FFTW: An adaptive software architecture for the FFT, in Proc. 1998 IEEE Intl. Conf. Acoustics, Speech and Signal Processing, vol. 3, IEEE, 1998, pp. 1381–1384.
[16] I. Gohberg and I. Feldman, Convolution equations and projection methods for their solution, vol. 41, AMS Bookstore, 2005.
[17] R. Gorenflo and F. Mainardi, Fractional calculus: integral and differential equations of fractional order, tech. rep., 2008.
[18] L. Grasedyck, D. Kressner, and C. Tobler, A literature survey of low-rank tensor approximation techniques, GAMM-Mitteilungen, 36 (2013), pp. 53–78.
[19] A. Greenbaum, Iterative methods for solving linear systems, vol. 17 of Frontiers in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997.
[20] M. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Stand., 49 (1952), pp. 409–436.
[21] I. Jesus, J. Machado, and J. Cunha, Fractional electrical impedances in botanical elements, Journal of Vibration and Control, 14 (2008), pp. 1389–1402.
[22] L. Knizhnerman and V. Simoncini, Convergence analysis of the extended Krylov subspace method for the Lyapunov equation, Numerische Mathematik, 118 (2011), pp. 567–586.
[23] R. Koeller, Applications of fractional calculus to the theory of viscoelasticity, ASME Journal of Applied Mechanics, 51 (1984), pp. 299–307.
[24] T. Kolda and B. Bader, Tensor decompositions and applications, SIAM Review, 51 (2009), pp. 455–500.
[25] D. Kressner and C. Tobler, Krylov subspace methods for linear systems with tensor product structure, SIAM J. Matrix Anal. Appl., 31 (2010), pp. 1688–1714.
[26] P. Lancaster and M. Tismenetsky, Theory of matrices, vol. 2, Academic Press, New York, 1969.
[27] S.-L. Lei and H.-W. Sun, A circulant preconditioner for fractional diffusion equations, Journal of Computational Physics, 242 (2013), pp. 713–725.
[28] N. Levinson, The Wiener RMS (root mean square) error criterion in filter design and prediction, J. Math. and Phys., 25 (1946), pp. 261–278.
[29] F. Liu, V. Anh, and I. Turner, Numerical solution of the space fractional Fokker–Planck equation, Journal of Computational and Applied Mathematics, 166 (2004), pp. 209–219.
[30] M. Meerschaert and C. Tadjeran, Finite difference approximations for fractional advection–dispersion flow equations, Journal of Computational and Applied Mathematics, 172 (2004), pp. 65–77.
[31] M. M. Meerschaert and C. Tadjeran, Finite difference approximations for two-sided space-fractional partial differential equations, Applied Numerical Mathematics, 56 (2006), pp. 80–90.
[32] R. Metzler and J. Klafter, The random walk's guide to anomalous diffusion: a fractional dynamics approach, Physics Reports, 339 (2000), pp. 1–77.
[33] T. Moroney and Q. Yang, A banded preconditioner for the two-sided, nonlinear space-fractional diffusion equation, Computers & Mathematics with Applications, 66 (2013), pp. 659–667.
[34] M. Ng, Iterative methods for Toeplitz systems, Oxford University Press, 2004.
[35] I. Oseledets, DMRG approach to fast linear algebra in the TT-format, Comput. Methods Appl. Math., 11 (2011), pp. 382–393.
[36] I. Oseledets, Tensor-train decomposition, SIAM Journal on Scientific Computing, 33 (2011), pp. 2295–2317.
[37] I. Oseledets and E. Tyrtyshnikov, Breaking the curse of dimensionality, or how to use SVD in many dimensions, SIAM Journal on Scientific Computing, 31 (2009), pp. 3744–3759.
[38] I. Podlubny, Fractional differential equations: an introduction to fractional derivatives, fractional differential equations, to methods of their solution and some of their applications, vol. 198, Elsevier, 1998.
[39] I. Podlubny, A. Chechkin, T. Skovranek, Y. Chen, and B. Vinagre Jara, Matrix approach to discrete fractional calculus II: Partial fractional differential equations, Journal of Computational Physics, 228 (2009), pp. 3137–3153.
[40] I. Podlubny, I. Petras, B. Vinagre, P. O'Leary, and L. Dorcak, Analogue realizations of fractional-order controllers, Nonlinear Dynamics, 29 (2002), pp. 281–296.
[41] Y. Saad, Iterative methods for sparse linear systems, Society for Industrial and Applied Mathematics, Philadelphia, PA, 2003.
[42] Y. Saad and M. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869.
[43] S. Samko, A. Kilbas, and O. Marichev, Fractional integrals and derivatives, Gordon and Breach Science Publishers, Yverdon, 1993.
[44] V. Simoncini, A new iterative method for solving large-scale Lyapunov matrix equations, SIAM J. Sci. Comput., 29 (2007), pp. 1268–1288.
[45] V. Simoncini, Computational methods for linear matrix equations, tech. rep., Università di Bologna, March 2013.
[46] V. Simoncini and D. Szyld, Recent computational developments in Krylov subspace methods for linear systems, Numer. Linear Algebra Appl., 14 (2007), pp. 1–61.
[47] M. Stoll and T. Breiten, A low-rank in time approach to PDE-constrained optimization, SIAM Journal on Scientific Computing, 37 (2015), pp. B1–B29.
[48] G. Strang, A proposal for Toeplitz matrix calculations, Studies in Applied Mathematics, 74 (1986), pp. 171–176.
[49] C. Tarley, G. Silveira, W. dos Santos, G. Matos, E. da Silva, M. Bezerra, M. Miro, and S. Ferreira, Chemometric tools in electroanalytical chemistry: methods for optimization based on factorial design and response surface methodology, Microchemical Journal, 92 (2009), pp. 58–67.
[50] C. Tobler, Low-rank tensor methods for linear systems and eigenvalue problems, PhD thesis, Eidgenössische Technische Hochschule ETH Zürich, Nr. 20320, 2012.
[51] H. Wang and N. Du, Fast alternating-direction finite difference methods for three-dimensional space-fractional diffusion equations, Journal of Computational Physics, 258 (2014), pp. 305–318.
[52] H. Wang, K. Wang, and T. Sircar, A direct O(N log² N) finite difference method for fractional diffusion equations, Journal of Computational Physics, 229 (2010), pp. 8095–8104.
[53] C. Wex, A. Stoll, M. Fröhlich, S. Arndt, and H. Lippert, How preservation time changes the linear viscoelastic properties of porcine liver, Biorheology, 50 (2013), pp. 115–131.
[54] P. Yi-Fei, Application of fractional differential approach to digital image processing, Journal of Sichuan University (Engineering Science Edition), 3 (2007), p. 022.