Crank–Nicolson method




In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations.[1] It is a second-order method in time. It is implicit in time and can be written as an implicit Runge–Kutta method, and it is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the mid 20th century.[2]


For diffusion equations (and many other equations), it can be shown that the Crank–Nicolson method is unconditionally stable.[3] However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of time step Δt times the thermal diffusivity to the square of the space step, Δx², is large (typically larger than 1/2 per Von Neumann stability analysis). For this reason, whenever large time steps or high spatial resolution is necessary, the less accurate backward Euler method is often used, which is both stable and immune to oscillations.[citation needed]





The method




The Crank–Nicolson stencil for a 1D problem.


The Crank–Nicolson method is based on the trapezoidal rule, giving second-order convergence in time. For example, in one dimension, if the partial differential equation is


\frac{\partial u}{\partial t} = F\!\left(u,\, x,\, t,\, \frac{\partial u}{\partial x},\, \frac{\partial^2 u}{\partial x^2}\right)

then, letting u(iΔx, nΔt) = u_i^n, the equation for the Crank–Nicolson method is a combination of the forward Euler method at n and the backward Euler method at n + 1 (note, however, that the method itself is not simply the average of those two methods, as the backward Euler equation has an implicit dependence on the solution):


\frac{u_i^{n+1} - u_i^n}{\Delta t} = F_i^n\!\left(u,\, x,\, t,\, \frac{\partial u}{\partial x},\, \frac{\partial^2 u}{\partial x^2}\right) \qquad \text{(forward Euler)}

\frac{u_i^{n+1} - u_i^n}{\Delta t} = F_i^{n+1}\!\left(u,\, x,\, t,\, \frac{\partial u}{\partial x},\, \frac{\partial^2 u}{\partial x^2}\right) \qquad \text{(backward Euler)}

\frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{1}{2}\left[F_i^{n+1}\!\left(u,\, x,\, t,\, \frac{\partial u}{\partial x},\, \frac{\partial^2 u}{\partial x^2}\right) + F_i^n\!\left(u,\, x,\, t,\, \frac{\partial u}{\partial x},\, \frac{\partial^2 u}{\partial x^2}\right)\right] \qquad \text{(Crank–Nicolson)}.

Note that this is an implicit method: to get the "next" value of u in time, a system of algebraic equations must be solved. If the partial differential equation is nonlinear, the discretization will also be nonlinear, so that advancing in time will involve the solution of a system of nonlinear algebraic equations, though linearizations are possible. In many problems, especially linear diffusion, the algebraic problem is tridiagonal and may be efficiently solved with the tridiagonal matrix algorithm, which gives a fast \mathcal{O}(n) direct solution as opposed to the usual \mathcal{O}(n^3) for a full matrix.
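As an illustration of why advancing in time means solving a system, one Crank–Nicolson step for a semi-discrete linear system du/dt = Au amounts to solving (I − Δt/2 A)u^{n+1} = (I + Δt/2 A)u^n. The following Python sketch shows this; the matrix A (a standard second-difference operator) and all parameter values are assumptions made for the example, not taken from the text above.

import numpy as np

def crank_nicolson_step(A, u, dt):
    """Advance u by one step of size dt for the linear system du/dt = A u."""
    n = A.shape[0]
    lhs = np.eye(n) - 0.5 * dt * A        # implicit (backward Euler) half
    rhs = (np.eye(n) + 0.5 * dt * A) @ u  # explicit (forward Euler) half
    return np.linalg.solve(lhs, rhs)

# Example: A is a second-difference diffusion operator (illustrative values).
n, dx, a, dt = 50, 0.02, 1.0, 1e-4
A = a / dx**2 * (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                 + np.diag(np.ones(n - 1), -1))
u = np.exp(-100.0 * (np.linspace(0.0, 1.0, n) - 0.5) ** 2)  # initial profile
u = crank_nicolson_step(A, u, dt)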



Example: 1D diffusion


The Crank–Nicolson method is often applied to diffusion problems. As an example, for linear diffusion,


\frac{\partial u}{\partial t} = a \frac{\partial^2 u}{\partial x^2}

applying a finite-difference spatial discretization to the right-hand side, the Crank–Nicolson discretization is then


\frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{a}{2(\Delta x)^2}\left( (u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1}) + (u_{i+1}^{n} - 2u_i^{n} + u_{i-1}^{n}) \right)

or, letting r = \frac{a\,\Delta t}{2(\Delta x)^2}:


-r u_{i+1}^{n+1} + (1 + 2r) u_i^{n+1} - r u_{i-1}^{n+1} = r u_{i+1}^{n} + (1 - 2r) u_i^{n} + r u_{i-1}^{n}.

Given that the terms on the right-hand side of the equation are known, this is a tridiagonal problem, so that u_i^{n+1} may be efficiently solved by using the tridiagonal matrix algorithm instead of a much more costly matrix inversion.
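A short Python sketch of this scheme is given below; see it only as an illustration. It assumes homogeneous Dirichlet boundaries (u = 0 at both ends) and illustrative parameter values, and solves the tridiagonal system with the Thomas algorithm (tridiagonal matrix algorithm) mentioned above.

import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (all 1D arrays)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Illustrative grid and diffusivity (not taken from the article).
nx, nt = 51, 200
dx, dt, a_diff = 1.0 / (nx - 1), 1e-4, 1.0
r = a_diff * dt / (2 * dx**2)

u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))      # initial condition
for _ in range(nt):
    # Right-hand side: r*u_{i+1}^n + (1 - 2r)*u_i^n + r*u_{i-1}^n.
    rhs = r * u[2:] + (1 - 2 * r) * u[1:-1] + r * u[:-2]
    lower = -r * np.ones(nx - 2); lower[0] = 0.0
    diag = (1 + 2 * r) * np.ones(nx - 2)
    upper = -r * np.ones(nx - 2); upper[-1] = 0.0
    u[1:-1] = thomas(lower, diag, upper, rhs)       # u[0] = u[-1] = 0 stay fixed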


A quasilinear equation, such as (this is a minimalistic example and not general)


\frac{\partial u}{\partial t} = a(u) \frac{\partial^2 u}{\partial x^2}

would lead to a nonlinear system of algebraic equations, which could not be easily solved as above; however, it is possible in some cases to linearize the problem by using the old value for a, that is, a_i^n(u) instead of a_i^{n+1}(u). Other times, it may be possible to estimate a_i^{n+1}(u) using an explicit method while maintaining stability.
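A minimal sketch of this lagged-coefficient linearization is shown below. The diffusivity a(u) and all parameter values are made up for illustration, and Dirichlet boundaries are assumed; the point is only that freezing a at the old time level keeps each step a linear solve.

import numpy as np

def cn_step_lagged(u, dx, dt, a_of_u):
    a = a_of_u(u)                        # diffusivity frozen at time level n
    r = a * dt / (2 * dx**2)             # r now varies with position i
    n = len(u)
    A = np.eye(n)
    b = u.copy()
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r[i], 1 + 2 * r[i], -r[i]
        b[i] = r[i] * u[i + 1] + (1 - 2 * r[i]) * u[i] + r[i] * u[i - 1]
    return np.linalg.solve(A, b)         # rows 0 and n-1 keep u fixed (Dirichlet)

u = np.linspace(0.0, 1.0, 41)
u_next = cn_step_lagged(u, dx=0.025, dt=1e-4, a_of_u=lambda u: 1.0 + u**2)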



Example: 1D diffusion with advection for steady flow, with multiple channel connections


This approach is often used for contamination problems in streams or rivers under steady flow conditions when information is available in one dimension only. The problem can frequently be simplified to a one-dimensional problem and still yield useful information.


Here we model the concentration of a solute contaminant in water. This problem is composed of three parts: the known diffusion equation (D_x chosen as constant), an advective component (which means the system is evolving in space due to a velocity field), which we choose to be a constant U_x, and a lateral interaction between longitudinal channels (k):








\frac{\partial C}{\partial t} = D_x \frac{\partial^2 C}{\partial x^2} - U_x \frac{\partial C}{\partial x} - k(C - C_N) - k(C - C_M) \qquad (1)




where C is the concentration of the contaminant, and subscripts N and M correspond to the previous and next channels.


The Crank–Nicolson method (where i represents position and j time) transforms each component of the PDE into the following:








\frac{\partial C}{\partial t} \Rightarrow \frac{C_i^{j+1} - C_i^j}{\Delta t} \qquad (2)










\frac{\partial^2 C}{\partial x^2} \Rightarrow \frac{1}{2(\Delta x)^2}\left( (C_{i+1}^{j+1} - 2C_i^{j+1} + C_{i-1}^{j+1}) + (C_{i+1}^{j} - 2C_i^{j} + C_{i-1}^{j}) \right) \qquad (3)










\frac{\partial C}{\partial x} \Rightarrow \frac{1}{2}\left( \frac{C_{i+1}^{j+1} - C_{i-1}^{j+1}}{2\Delta x} + \frac{C_{i+1}^{j} - C_{i-1}^{j}}{2\Delta x} \right) \qquad (4)










C \Rightarrow \frac{1}{2}(C_i^{j+1} + C_i^j) \qquad (5)










C_N \Rightarrow \frac{1}{2}(C_{Ni}^{j+1} + C_{Ni}^j) \qquad (6)










C_M \Rightarrow \frac{1}{2}(C_{Mi}^{j+1} + C_{Mi}^j) \qquad (7)




Now we create the following constants to simplify the algebra:


\lambda = \frac{D_x \Delta t}{2\Delta x^2}

\alpha = \frac{U_x \Delta t}{4\Delta x}

\beta = \frac{k \Delta t}{2}

and substitute (2), (3), (4), (5), (6), (7), α, β and λ into (1). We then put the new time terms on the left (j + 1) and the present time terms on the right (j) to get:


-\beta C_{Ni}^{j+1} - (\lambda + \alpha) C_{i-1}^{j+1} + (1 + 2\lambda + 2\beta) C_i^{j+1} - (\lambda - \alpha) C_{i+1}^{j+1} - \beta C_{Mi}^{j+1} = \beta C_{Ni}^{j} + (\lambda + \alpha) C_{i-1}^{j} + (1 - 2\lambda - 2\beta) C_i^{j} + (\lambda - \alpha) C_{i+1}^{j} + \beta C_{Mi}^{j}.

To model the first channel, we realize that it can only be in contact with the following channel (M), so the expression is simplified to:


-(\lambda + \alpha) C_{i-1}^{j+1} + (1 + 2\lambda + \beta) C_i^{j+1} - (\lambda - \alpha) C_{i+1}^{j+1} - \beta C_{Mi}^{j+1} = (\lambda + \alpha) C_{i-1}^{j} + (1 - 2\lambda - \beta) C_i^{j} + (\lambda - \alpha) C_{i+1}^{j} + \beta C_{Mi}^{j}.

In the same way, to model the last channel, we realize that it can only be in contact with the previous channel (N), so the expression is simplified to:


-\beta C_{Ni}^{j+1} - (\lambda + \alpha) C_{i-1}^{j+1} + (1 + 2\lambda + \beta) C_i^{j+1} - (\lambda - \alpha) C_{i+1}^{j+1} = \beta C_{Ni}^{j} + (\lambda + \alpha) C_{i-1}^{j} + (1 - 2\lambda - \beta) C_i^{j} + (\lambda - \alpha) C_{i+1}^{j}.

To solve this linear system of equations, boundary conditions must first be given at the beginning of the channels:


C_0^j: initial condition for the channel at the present time step
C_0^{j+1}: initial condition for the channel at the next time step
C_{N0}^j: initial condition for the previous channel to the one analyzed at the present time step
C_{M0}^j: initial condition for the next channel to the one analyzed at the present time step.


For the last cell of the channels (z) the most convenient condition becomes an adiabatic one, so


\left.\frac{\partial C}{\partial x}\right|_{x=z} = \frac{C_{i+1} - C_{i-1}}{2\Delta x} = 0.

This condition is satisfied if and only if (regardless of a null value)


C_{i+1}^{j+1} = C_{i-1}^{j+1}.

Let us solve this problem (in a matrix form) for the case of 3 channels and 5 nodes (including the initial boundary condition). We express this as a linear system problem:


[\mathbf{AA}][C^{j+1}] = [\mathbf{BB}][C^{j}] + [d],

where



\mathbf{C}^{j+1} = \begin{bmatrix} C_{11}^{j+1} \\ C_{12}^{j+1} \\ C_{13}^{j+1} \\ C_{14}^{j+1} \\ C_{21}^{j+1} \\ C_{22}^{j+1} \\ C_{23}^{j+1} \\ C_{24}^{j+1} \\ C_{31}^{j+1} \\ C_{32}^{j+1} \\ C_{33}^{j+1} \\ C_{34}^{j+1} \end{bmatrix} \quad \text{and} \quad \mathbf{C}^{j} = \begin{bmatrix} C_{11}^{j} \\ C_{12}^{j} \\ C_{13}^{j} \\ C_{14}^{j} \\ C_{21}^{j} \\ C_{22}^{j} \\ C_{23}^{j} \\ C_{24}^{j} \\ C_{31}^{j} \\ C_{32}^{j} \\ C_{33}^{j} \\ C_{34}^{j} \end{bmatrix}.

Now we must realize that AA and BB should be arrays made of four different subarrays (remember that only three channels are considered for this example but it covers the main part discussed above).



\mathbf{AA} = \begin{bmatrix} AA1 & AA3 & 0 \\ AA3 & AA2 & AA3 \\ 0 & AA3 & AA1 \end{bmatrix} \quad \text{and} \quad \mathbf{BB} = \begin{bmatrix} BB1 & -AA3 & 0 \\ -AA3 & BB2 & -AA3 \\ 0 & -AA3 & BB1 \end{bmatrix},

where the elements mentioned above correspond to the following arrays, together with an additional 4×4 array of zeros. Note that the sizes of AA and BB are 12×12:



\mathbf{AA1} = \begin{bmatrix} 1+2\lambda+\beta & -(\lambda-\alpha) & 0 & 0 \\ -(\lambda+\alpha) & 1+2\lambda+\beta & -(\lambda-\alpha) & 0 \\ 0 & -(\lambda+\alpha) & 1+2\lambda+\beta & -(\lambda-\alpha) \\ 0 & 0 & -2\lambda & 1+2\lambda+\beta \end{bmatrix},

\mathbf{AA2} = \begin{bmatrix} 1+2\lambda+2\beta & -(\lambda-\alpha) & 0 & 0 \\ -(\lambda+\alpha) & 1+2\lambda+2\beta & -(\lambda-\alpha) & 0 \\ 0 & -(\lambda+\alpha) & 1+2\lambda+2\beta & -(\lambda-\alpha) \\ 0 & 0 & -2\lambda & 1+2\lambda+2\beta \end{bmatrix},

\mathbf{AA3} = \begin{bmatrix} -\beta & 0 & 0 & 0 \\ 0 & -\beta & 0 & 0 \\ 0 & 0 & -\beta & 0 \\ 0 & 0 & 0 & -\beta \end{bmatrix},

\mathbf{BB1} = \begin{bmatrix} 1-2\lambda-\beta & \lambda-\alpha & 0 & 0 \\ \lambda+\alpha & 1-2\lambda-\beta & \lambda-\alpha & 0 \\ 0 & \lambda+\alpha & 1-2\lambda-\beta & \lambda-\alpha \\ 0 & 0 & 2\lambda & 1-2\lambda-\beta \end{bmatrix} \quad \text{and} \quad

\mathbf{BB2} = \begin{bmatrix} 1-2\lambda-2\beta & \lambda-\alpha & 0 & 0 \\ \lambda+\alpha & 1-2\lambda-2\beta & \lambda-\alpha & 0 \\ 0 & \lambda+\alpha & 1-2\lambda-2\beta & \lambda-\alpha \\ 0 & 0 & 2\lambda & 1-2\lambda-2\beta \end{bmatrix}.

The d vector here is used to hold the boundary conditions. In this example it is a 12×1 vector:


\mathbf{d} = \begin{bmatrix} (\lambda+\alpha)(C_{10}^{j+1} + C_{10}^{j}) \\ 0 \\ 0 \\ 0 \\ (\lambda+\alpha)(C_{20}^{j+1} + C_{20}^{j}) \\ 0 \\ 0 \\ 0 \\ (\lambda+\alpha)(C_{30}^{j+1} + C_{30}^{j}) \\ 0 \\ 0 \\ 0 \end{bmatrix}.

To find the concentration at any time, one must iterate the following equation:


[C^{j+1}] = [\mathbf{AA}]^{-1}\left([\mathbf{BB}][C^{j}] + [d]\right).
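For illustration, the block matrices and the iteration above can be assembled as follows in Python. The physical parameters, the constant inlet concentrations, and the choice of four unknown nodes per channel are assumptions made for this sketch. Note also that the matrices given above satisfy BB = 2I − AA (blockwise, BB1 = 2I − AA1 and BB2 = 2I − AA2), which the sketch exploits.

import numpy as np

# Illustrative parameters (not from the article).
Dx, Ux, k = 0.01, 0.1, 0.05          # diffusivity, velocity, lateral exchange
dx, dt = 1.0, 0.1
lam = Dx * dt / (2 * dx**2)
alp = Ux * dt / (4 * dx)
bet = k * dt / 2

def diag_block(diag_val):
    """4x4 block with the given diagonal, the advection-diffusion
    off-diagonals, and the adiabatic end (C_{i+1} = C_{i-1}) in the last row."""
    B = diag_val * np.eye(4)
    for i in range(3):
        B[i, i + 1] = -(lam - alp)
        B[i + 1, i] = -(lam + alp)
    B[3, 2] = -2 * lam
    return B

AA1 = diag_block(1 + 2 * lam + bet)      # first/last channel: one neighbour
AA2 = diag_block(1 + 2 * lam + 2 * bet)  # middle channel: two neighbours
AA3 = -bet * np.eye(4)                   # coupling between channels
Z = np.zeros((4, 4))

AA = np.block([[AA1, AA3, Z], [AA3, AA2, AA3], [Z, AA3, AA1]])
BB = 2 * np.eye(12) - AA                 # BB1 = 2I - AA1, BB2 = 2I - AA2, ...

# Boundary vector d: inlet concentrations held constant in time here.
C_in = np.array([1.0, 0.0, 0.0])         # contaminant enters channel 1 only
d = np.zeros(12)
d[[0, 4, 8]] = (lam + alp) * 2 * C_in    # (lambda+alpha)(C_in^{j+1} + C_in^{j})

C = np.zeros(12)                          # channels initially clean
for _ in range(100):
    C = np.linalg.solve(AA, BB @ C + d)   # C^{j+1} = AA^{-1}(BB C^{j} + d)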


Example: 2D diffusion


When extending into two dimensions on a uniform Cartesian grid, the derivation is similar and the results may lead to a system of band-diagonal equations rather than tridiagonal ones. The two-dimensional heat equation


\frac{\partial u}{\partial t} = a\left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right)

can be solved with the Crank–Nicolson discretization of


u_{i,j}^{n+1} = u_{i,j}^{n} + \frac{1}{2}\frac{a\Delta t}{(\Delta x)^2}\Big[ (u_{i+1,j}^{n+1} + u_{i-1,j}^{n+1} + u_{i,j+1}^{n+1} + u_{i,j-1}^{n+1} - 4u_{i,j}^{n+1}) + (u_{i+1,j}^{n} + u_{i-1,j}^{n} + u_{i,j+1}^{n} + u_{i,j-1}^{n} - 4u_{i,j}^{n}) \Big],

assuming that a square grid is used, so that Δx = Δy. This equation can be simplified somewhat by rearranging terms and using the CFL number


\mu = \frac{a\Delta t}{(\Delta x)^2}.

For the Crank–Nicolson numerical scheme, a low CFL number is not required for stability; however, it is required for numerical accuracy. We can now write the scheme as:


(1 + 2\mu) u_{i,j}^{n+1} - \frac{\mu}{2}\left( u_{i+1,j}^{n+1} + u_{i-1,j}^{n+1} + u_{i,j+1}^{n+1} + u_{i,j-1}^{n+1} \right) = (1 - 2\mu) u_{i,j}^{n} + \frac{\mu}{2}\left( u_{i+1,j}^{n} + u_{i-1,j}^{n} + u_{i,j+1}^{n} + u_{i,j-1}^{n} \right).

Solving such a linear system directly is not practical, due to the extremely high time complexity of solving it by means of Gaussian elimination or even the Strassen algorithm. Hence, an alternating-direction implicit (ADI) method can be implemented to solve the numerical PDE, whereby one dimension is treated implicitly and the other explicitly for half of the assigned time step, and vice versa for the remaining half of the time step. The benefit of this strategy is that the implicit solver only requires the tridiagonal matrix algorithm. The difference between the true Crank–Nicolson solution and the ADI-approximated solution has an order of accuracy of O(Δt⁴) and hence can be ignored with a sufficiently small time step.[4]
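A short Python sketch of one way to implement this alternating-direction idea (the Peaceman–Rachford splitting) for the 2D heat equation is shown below. For brevity the one-dimensional half-step systems are solved with a dense solver rather than the tridiagonal algorithm, and the grid size, diffusivity and time step are illustrative assumptions.

import numpy as np

# Illustrative grid, diffusivity and time step (Dirichlet boundaries u = 0).
n, a, dt = 32, 1.0, 1e-4
dx = 1.0 / (n + 1)
mu = a * dt / dx**2                            # CFL number defined above

# 1D second-difference operator on the interior points.
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
I = np.eye(n)
A_impl = I - 0.5 * mu * L                      # implicit in one direction
A_expl = I + 0.5 * mu * L                      # explicit in the other direction

x = np.linspace(dx, 1.0 - dx, n)
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # initial condition

for _ in range(100):
    # First half step: implicit in x (first index), explicit in y.
    u = np.linalg.solve(A_impl, u @ A_expl.T)
    # Second half step: implicit in y (second index), explicit in x.
    u = np.linalg.solve(A_impl, (A_expl @ u).T).T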



Application in financial mathematics



Because a number of other phenomena can be modeled with the heat equation (often called the diffusion equation in financial mathematics), the Crank–Nicolson method has been applied to those areas as well.[5] Particularly, the Black–Scholes option pricing model's differential equation can be transformed into the heat equation, and thus numerical solutions for option pricing can be obtained with the Crank–Nicolson method.


The importance of this for finance is that option pricing problems, when extended beyond the standard assumptions (e.g., incorporating changing dividends), cannot be solved in closed form, but can be solved using this method. Note, however, that for non-smooth final conditions (which occur for most financial instruments), the Crank–Nicolson method is not satisfactory, as numerical oscillations are not damped. For vanilla options, this results in oscillation in the gamma value around the strike price. Therefore, special damping initialization steps are necessary (e.g., fully implicit finite difference method).
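As an illustration of this practice (a sketch only, not a statement of any standard implementation), the Python code below prices a European call with a θ-scheme applied directly to the Black–Scholes PDE on an S-grid: θ = 1/2 gives Crank–Nicolson, while the first few steps use θ = 1 (fully implicit) to damp the oscillations caused by the kinked payoff. All parameters, grid sizes and boundary choices are assumptions made for the example.

import numpy as np

# Illustrative parameters: European call, damped (fully implicit) start.
S_max, K, r, sigma, T = 200.0, 100.0, 0.05, 0.2, 1.0
nS, nt = 200, 100
dS, dt = S_max / nS, T / nt
S = np.linspace(0.0, S_max, nS + 1)

# Spatial operator L V = 0.5*sigma^2*S^2*V'' + r*S*V' - r*V at interior nodes.
i = np.arange(1, nS)
a = 0.5 * sigma**2 * S[i]**2 / dS**2
b = r * S[i] / (2 * dS)
L = np.zeros((nS - 1, nS + 1))
L[np.arange(nS - 1), i - 1] = a - b
L[np.arange(nS - 1), i] = -2 * a - r
L[np.arange(nS - 1), i + 1] = a + b

V = np.maximum(S - K, 0.0)                    # payoff at maturity
for m in range(nt):
    theta = 1.0 if m < 4 else 0.5             # implicit damping steps, then CN
    tau = (m + 1) * dt                        # time to maturity after this step
    A = np.eye(nS + 1)
    A[1:-1, :] -= theta * dt * L              # implicit part (interior rows)
    rhs = V.copy()
    rhs[1:-1] += (1 - theta) * dt * (L @ V)   # explicit part
    rhs[0] = 0.0                              # V(0, t) = 0 for a call
    rhs[-1] = S_max - K * np.exp(-r * tau)    # deep in-the-money asymptote
    V = np.linalg.solve(A, rhs)
# V now approximates the option value at t = 0 on the grid S.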



See also



  • Financial mathematics

  • Trapezoidal rule



References




  1. ^ Tuncer Cebeci (2002). Convective Heat Transfer. Springer. ISBN 0-9668461-4-1.


  2. ^ Crank, J.; Nicolson, P. (1947). "A practical method for numerical evaluation of solutions of partial differential equations of the heat conduction type". Proc. Camb. Phil. Soc. 43 (1): 50–67. doi:10.1007/BF02127704.


  3. ^ Thomas, J. W. (1995). Numerical Partial Differential Equations: Finite Difference Methods. Texts in Applied Mathematics. 22. Berlin, New York: Springer-Verlag. ISBN 978-0-387-97999-1. Example 3.3.2 shows that Crank–Nicolson is unconditionally stable when applied to u_t = a u_{xx}.


  4. ^ "Multi -Dimensional Parabolic Problems" (PDF). Computer Science Department. RPI. Retrieved 29 May 2016.


  5. ^ Wilmott, P.; Howison, S.; Dewynne, J. (1995). The Mathematics of Financial Derivatives: A Student Introduction. Cambridge Univ. Press. ISBN 0-521-49789-2.



External links




  • Numerical PDE Techniques for Scientists and Engineers, open access Lectures and Codes for Numerical PDEs

  • An example of how to apply and implement the Crank-Nicolson method for the Advection equation









