Class: Base system
This class simply initializes a system, defines the grid, and contains the basic functionality for evolving it in time.
See the ComFiT Library Reference below for a complete list of class methods and their usage.
Types of functions
There are five different types of functions:

- conf_-functions: Configure the state of the system, for instance by setting an initial condition. Output nothing.
- evolve_-functions: Evolve the state in time according to some equation of motion.
- calc_-functions: Calculate something from the state and return that which has been calculated.
- plot_-functions: Functions tailored to plot specific things. Output the axes and figure.
- get_-functions: Functions that return a component of a tensor field. Relevant for symmetric and antisymmetric tensors, where it is not convenient to save all elements.
General keywords and parameters
The only required input argument to the BaseSystem class is the dim argument, which specifies the dimension of the system. In some cases, the default values of other parameters depend on the value of dim; such values are represented by curly brackets, listing the defaults for dim = 1, 2, and 3 from top to bottom.

These are the optional keywords for the BaseSystem class.
Keyword | Definition | Default value |
---|---|---|
xmin | Minimum value of \(x\) of the simulation domain | \(0\) |
ymin | Minimum value of \(y\) of the simulation domain | \(0\) |
zmin | Minimum value of \(z\) of the simulation domain | \(0\) |
xmax | Maximum value of \(x\) of the simulation domain | \(101\) |
ymax | Maximum value of \(y\) of the simulation domain | \(\left \lbrace \begin{array}{c} 1 \\ 101 \\ 101 \\ \end{array} \right \rbrace\) |
zmax | Maximum value of \(z\) of the simulation domain | \(\left \lbrace \begin{array}{c} 1 \\ 1 \\ 101 \\ \end{array} \right \rbrace\) |
xRes | Resolution of the \(x\)-axis | \(101\) |
yRes | Resolution of the \(y\)-axis | \(\left \lbrace \begin{array}{c} 1 \\ 101 \\ 101 \\ \end{array} \right \rbrace\) |
zRes | Resolution of the \(z\)-axis | \(\left \lbrace \begin{array}{c} 1 \\ 1 \\ 101 \\ \end{array} \right \rbrace\) |
dx | Spacing between points on the \(x\)-axis. Trumps xRes if provided; xmax will be modified to match. | \(\frac{\texttt{xmax}-\texttt{xmin}}{\texttt{xRes}} = 1\) |
dy | Spacing between points on the \(y\)-axis. Trumps yRes if provided. | \(\frac{\texttt{ymax}-\texttt{ymin}}{\texttt{yRes}} = 1\) |
dz | Spacing between points on the \(z\)-axis. Trumps zRes if provided. | \(\frac{\texttt{zmax}-\texttt{zmin}}{\texttt{zRes}} = 1\) |
xlim | List or tuple consisting of the lower and upper limit for the simulation domain in the \(x\)-direction. Trumps xmin and xmax if provided. | \((\texttt{xmin},\texttt{xmax}) = (0,101)\) |
ylim | List or tuple consisting of the lower and upper limit for the simulation domain in the \(y\)-direction. Trumps ymin and ymax if provided. | \((\texttt{ymin},\texttt{ymax}) = \left \lbrace \begin{array}{c} (0,1) \\ (0,101) \\ (0,101) \\ \end{array} \right \rbrace\) |
zlim | List or tuple consisting of the lower and upper limit for the simulation domain in the \(z\)-direction. Trumps zmin and zmax if provided. | \((\texttt{zmin},\texttt{zmax}) = \left \lbrace \begin{array}{c} (0,1) \\ (0,1) \\ (0,101) \\ \end{array} \right \rbrace\) |
time | Float specifying the time of initialization | \(0\) |
a0 | Characteristic length scale associated with the system, in units of which all plots will be scaled. This is also the default width used with the coarse-graining operation. | \(1\) |
X | 2D numpy array with position coordinates. Typically used when the coordinates are not a regular grid. | None |
Y | 2D numpy array with position coordinates. Typically used when the coordinates are not a regular grid. | None |
Z | 2D numpy array with position coordinates. Typically used when the coordinates are not a regular grid. | None |
From these keywords, a number of useful parameters are constructed, given in the table below.

Parameter | Definition | Value |
---|---|---|
x | Numpy array with dimensions \(\left \lbrace \begin{array}{l} \texttt{xRes} \\ \texttt{xRes}\times 1 \\ \texttt{xRes}\times 1 \times 1 \\ \end{array} \right \rbrace\) consisting of the grid points from xmin to (including) xmax-dx. | |
y | Numpy array with dimensions \(\left \lbrace \begin{array}{l} 1 \\ 1 \times \texttt{yRes} \\ 1 \times \texttt{yRes} \times 1 \\ \end{array} \right \rbrace\) consisting of the grid points from ymin to (including) ymax-dy. | |
z | Numpy array with dimensions \(\left \lbrace \begin{array}{l} 1 \\ 1 \\ 1 \times 1 \times \texttt{zRes} \\ \end{array} \right \rbrace\) consisting of the grid points from zmin to (including) zmax-dz. | |
xmidi | Index of the mid \(x\)-value. In the case of an odd xRes, this midpoint index will not hit the middle exactly but undershoot by dx/2. | |
xmid | The \(x\) value given by xmidi. | |
ymidi | Index of the mid \(y\)-value. In the case of an odd yRes, this midpoint index will not hit the middle exactly but undershoot by dy/2. | |
ymid | The \(y\) value given by ymidi. | |
zmidi | Index of the mid \(z\)-value. In the case of an odd zRes, this midpoint index will not hit the middle exactly but undershoot by dz/2. | |
zmid | The \(z\) value given by zmidi. | |
size_x | Size of the \(x\)-axis | \(\texttt{xmax} - \texttt{xmin}\) |
size_y | Size of the \(y\)-axis | \(\texttt{ymax} - \texttt{ymin}\) (1 if dim \(<2\)) |
size_z | Size of the \(z\)-axis | \(\texttt{zmax} - \texttt{zmin}\) (1 if dim \(<3\)) |
size_min | Minimum value of the simulation domain | \(\left \lbrace \begin{array}{c} \texttt{size_x} \\ \texttt{min}(\texttt{size_x}, \texttt{size_y})\\ \texttt{min}(\texttt{size_x}, \texttt{size_y}, \texttt{size_z}) \\ \end{array} \right \rbrace\) |
size_max | Maximum value of the simulation domain | \(\left \lbrace \begin{array}{c} \texttt{size_x} \\ \texttt{max}(\texttt{size_x}, \texttt{size_y})\\ \texttt{max}(\texttt{size_x}, \texttt{size_y}, \texttt{size_z}) \\ \end{array} \right \rbrace\) |
dV | Volume element of the grid | \(\left \lbrace \begin{array}{l} \texttt{dx} \\ \texttt{dx} \times \texttt{dy} \\ \texttt{dx} \times \texttt{dy} \times \texttt{dz} \\ \end{array} \right \rbrace\) |
volume | Volume of the simulation domain | \(\texttt{size_x} \times \texttt{size_y} \times \texttt{size_z}\) |
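As a quick illustration of these constructed parameters, here is a sketch inspecting a default two-dimensional system (the printed shapes and spacings follow from the tables above):

```python
import comfit as cf

bs = cf.BaseSystem(2)             # two-dimensional system with default parameters
print(bs.x.shape, bs.y.shape)     # (101, 1) (1, 101)
print(bs.dx, bs.dy, bs.dV)        # 1.0 1.0 1.0
```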
Note that even though variables like yRes, zRes, etc. are defined in cases where they are not relevant, such as for a \(1\)-dimensional system, they play no significant role in any calculations in such situations.

Periodic boundary conditions mean that xmax and xmin are identified as the same point.
Fourier transformations
ComFiT is based on so-called spectral methods, which means using Fourier transformations to solve differential equations. There are some subtleties to how the Fourier transformations work, which we will explain here. If you prefer a video explanation, you can watch the following video.
Infinite system
For an infinite system, the Fourier transformation of a function \(g(\mathbf{r})\) is given by

$$g_{\mathfrak f}(\mathbf k) = \frac{1}{(2\pi)^d} \int d^d r\, g(\mathbf r)\, e^{-\mathfrak i \mathbf k \cdot \mathbf r},$$

and the inverse Fourier transformation is given by

$$g(\mathbf r) = \int d^d k\, g_{\mathfrak f}(\mathbf k)\, e^{\mathfrak i \mathbf k \cdot \mathbf r}.$$

The factor \(\frac{1}{(2\pi)^d}\) in the definition of \(g_{\mathfrak f}\) is a convention. It is useful when thinking of \(g_{\mathfrak f}(\mathbf k)\) as the weight of the corresponding different Fourier components. If \(g(\mathbf r)\) is a real function, then \(g_{\mathfrak f}(-\mathbf k) = g_{\mathfrak f}^*(\mathbf k)\). Thus, in principle, to calculate a derivative of a function, one can take the Fourier transformation, multiply by \(\mathfrak i \mathbf k\), and then take the inverse Fourier transformation,

$$\partial_x g(\mathbf r) = \int d^d k\, \mathfrak i k_x\, g_{\mathfrak f}(\mathbf k)\, e^{\mathfrak i \mathbf k \cdot \mathbf r},$$
which is why we can calculate derivatives of a field g
in ComFiT using
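A minimal sketch (assuming bs is a BaseSystem instance and that bs.ifft is the inverse of bs.fft, which appears in the example further down):

```python
import numpy as np
import comfit as cf

bs = cf.BaseSystem(1, xRes=101)
g = np.sin(2 * np.pi * bs.x / bs.size_x)              # a smooth periodic test field
g_f = bs.fft(g)                                       # Fourier transform of g
dg_dx = np.real(bs.ifft(1j * bs.k[0] * g_f))          # spectral derivative dg/dx
```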
where 1j
is how to write the imaginary unit in Python.
In fact, the combination 1j*bs.k[0]
is used so often that it is saved in its own property bs.dif[0]
, so it is more common to see the derivative calculated as
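Continuing the sketch above:

```python
dg_dx = np.real(bs.ifft(bs.dif[0] * bs.fft(g)))       # same derivative, using the stored factor bs.dif[0]
```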
In reality, however, we are working with a periodic grid, which means that the Fourier transformation is not exactly the same as the one defined above. We will cover this next.
Periodic grid
In this section, we will show why we can calculate a numerical derivative as given above.
In one dimension, on a periodic grid, a function is defined on a grid with \(N\) points, where it is assumed that the \((N+1)\)th point (\(x_N\)) would be the same as the first point (\(x_0\)).

Numerically, the discrete Fourier transformation and its inverse are given by the following expressions.
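Assuming the same convention as NumPy's FFT, on which ComFiT relies, these read

$$g_{\mathfrak f m} = \sum_{n=0}^{N-1} g(x_n)\, e^{-\mathfrak i 2\pi m n / N}, \qquad g(x_n) = \frac{1}{N}\sum_{m=0}^{N-1} g_{\mathfrak f m}\, e^{\mathfrak i 2\pi m n / N}.$$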
Difference between \(g_{\mathfrak f}\) and \(g_{\mathfrak f m}\)
The Fourier transform \(g_{\mathfrak f}\) is a function of the wavenumber \(\mathbf k\), while \(g_{\mathfrak f m}\) is a function of the index \(m\).
\(n\) is related to the values \(x_n\) by
$$x_n = x_0 + n \Delta x,$$
so
$$n = \frac{x_n - x_0}{\Delta x}.$$
Inserting this into the Fourier transform, we get
$$g_{\mathfrak f m} = \sum_{n=0}^{N-1} g(x_n)\, e^{-\mathfrak i \frac{2\pi m}{N \Delta x} (x_n - x_0)}.$$
Now, we define
$$k_m = \frac{2\pi m}{N \Delta x},$$
i.e.,
$$g_{\mathfrak f m} = \sum_{n=0}^{N-1} g(x_n)\, e^{-\mathfrak i k_m (x_n - x_0)}.$$
So we see that \(g_{\mathfrak f m}\) can be thought of as the function \(g_{\mathfrak f}\), the Fourier transform of \(g\), evaluated at the points \(k_m\).
Written in terms of \(k_m\), this is the discrete Fourier transformation. Correspondingly, the inverse discrete Fourier transformation can be written as
$$g(x_n) = \frac{1}{N}\sum_{m=0}^{N-1} g_{\mathfrak f m}\, e^{\mathfrak i k_m (x_n - x_0)}.$$
From this expression, we see that if we multiply \(g_{\mathfrak f m}\) by \(\mathfrak i k_m\), we get
$$\frac{1}{N}\sum_{m=0}^{N-1} \mathfrak i k_m\, g_{\mathfrak f m}\, e^{\mathfrak i k_m (x_n - x_0)} = \partial_x g(x_n),$$
which justifies the numerical derivative given in the previous section.
The Nyquist frequency
The function calc_wavenums
calculates the wavenumbers corresponding to the input position vectors given by x
.
```python
# In ComFiT 1.8.7
def calc_wavenums(
    self,
    x: np.ndarray
) -> np.ndarray:
    """Calculates the wavenumbers corresponding to the input position vectors given by x.

    Parameters
    ----------
    x : numpy.ndarray
        1D array of x-positions.

    Returns
    -------
    numpy.ndarray
        1D array of wavenumbers with all the modes for the given x-array,
        assuming periodicity from x[0] to x[0] over n intervals.

    Examples
    --------
    >>> x = np.array([-10, -5, 0, 5, 10])
    >>> k = instance_of_BaseSystem.calc_wavenums(self, x)
    >>> print(k)
    [ 0.          0.25132741  0.50265482 -0.50265482 -0.25132741]
    """
    n = len(x)

    high = (n - 1) // 2
    low = - (n // 2)

    l = n * (x[1] - x[0])

    k = np.concatenate((np.arange(0, high + 1), np.arange(low, 0))) * 2 * np.pi / l

    return k
```
However, there is a slight difference between the wavenumbers calculated here and \(k_m\) in the previous section. The wavenumbers are the same up to \(k_{N/2}\) (which corresponds to the so-called Nyquist frequency), but beyond that the wavenumbers are negative. The reason is that, for a function defined on a grid with \(N\) points, the highest wavenumber that can be resolved is \(k_{N/2}\), and wavenumbers \(k_m\) above it are effectively the same as negative wavenumbers. We will explain this next.
The Nyquist frequency is given by
$$f_{NQ} = \frac{1}{2 \Delta x},$$
which corresponds to the wavenumber
$$k_{NQ} = 2\pi f_{NQ} = \frac{\pi}{\Delta x}.$$
The value of \(m\) corresponding to this frequency follows from \(k_m = \frac{2\pi m}{N \Delta x} = k_{NQ}\), i.e., \(m = N/2\).
Inserting \(- k_m\), where \(k_m< k_{NQ}\), into the Fourier transform, we get
$$g_{\mathfrak f}(-k_m) = \sum_{n=0}^{N-1} g(x_n)\, e^{\mathfrak i k_m (x_n - x_0)} = \sum_{n=0}^{N-1} g(x_n)\, e^{-\mathfrak i k_{N-m} (x_n - x_0)} = g_{\mathfrak f (N-m)},$$
where we have used that
$$e^{\mathfrak i k_m (x_n - x_0)} = e^{\mathfrak i 2\pi m n/N} = e^{-\mathfrak i 2\pi (N-m) n/N} = e^{-\mathfrak i k_{N-m} (x_n - x_0)}.$$
This shows that finding the Fourier spectrum above the Nyquist frequency corresponds to finding the amplitude of negative wavenumbers.
So, in the calc_wavenums function, the first \(\lceil N/2 \rceil\) \(k\)-values are set to \([0, 1, \ldots, \lfloor (N-1)/2 \rfloor] \cdot \frac{2\pi}{N \Delta x}\) and the remaining values to \([-\lfloor N/2 \rfloor, \ldots, -1] \cdot \frac{2\pi}{N \Delta x}\).
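As an illustration (not taken from the ComFiT source), this ordering is the same as NumPy's fftfreq convention scaled by \(2\pi\):

```python
import numpy as np

x = np.arange(0, 10.0, 1.0)                           # 10 grid points, dx = 1
k = 2 * np.pi * np.fft.fftfreq(len(x), d=x[1] - x[0])
# ordering: [0, 1, 2, 3, 4, -5, -4, -3, -2, -1] * 2*pi/10
```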
Plotting a Fourier field
From a numerical point of view, when calculating derivatives, we can use the discrete Fourier transformations directly, as outlined above.
For physical applications, however, the Fourier transformation defined on an infinite domain is more useful.
When plotting Fourier fields (i.e., when passing the fourier=True parameter to the relevant plot function), we therefore slightly modify g_f.
We will detail this next.
We can write the inverse discrete Fourier transformation as follows
The sum is a numerical approximation of the infinite inverse Fourier transform of \(g_{\mathfrak f}(\mathbf k)\), if we make the connection
Connection between the discrete and infinite Fourier transformation
This is why, when the fourier=True parameter is passed to the plot functions, the field is modified in the _check_if_fourier_and_adjust function in base_system_plot as follows:
```python
# In ComFiT 1.9.0
dkx = self.k[0][1]-self.k[0][0]
phase_shift = 1/(self.xRes*dkx)*np.exp(1j*self.k[0]*self.xmin)

if self.dim > 1:
    dky = self.k[1][0,1]-self.k[1][0,0]
    phase_shift = phase_shift*1/(self.yRes*dky)*np.exp(1j*self.k[1]*self.ymin)

if self.dim > 2:
    dkz = self.k[2][0,0,1]-self.k[2][0,0,0]
    phase_shift = phase_shift*1/(self.zRes*dkz)*np.exp(1j*self.k[2]*self.zmin)

field = np.fft.fftshift(phase_shift*field, axes=range(-self.dim, 0))
```
before passing the field to be plotted.
This adjusts the discrete transform to approximate the continuous Fourier transform.
The fftshift
function is used to shift the zero frequency component to the center of the array.
Example
```python
# In ComFiT 1.9.0
import comfit as cf
import numpy as np

bs = cf.BaseSystem(1, xlim=[-10,10], xRes=101)

field = 0.5*bs.calc_Gaussian(position=1)
field_f = bs.fft(field)

fig, ax = bs.plot_subplots(1,2)
bs.plot_field(field, ax=ax[0], fig=fig, title='Real field')
bs.plot_complex_field(field_f, fourier=True, ax=ax[1], fig=fig, title='Fourier transform')
bs.plot_save(fig)
```
Notice that since the field is real, the Fourier transform is symmetric in amplitude and opposite in phase around the zero frequency.
Coarse-graining
A common and useful method is that of coarse-graining, which is defined as
where \(\mathcal K(\mathbf r'-\mathbf r)\) is a Gaussian kernel given by
From a numerical point of view, this is done in Fourier space since, by the convolution theorem,
Thus, we need the Fourier transform of \(\mathcal K\), which is
This is why ComFiT includes a function that calculates \(\mathcal K_{\mathfrak f}\).
Typically, a field is coarse-grained with a given width using a piece of code like the following.
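A minimal sketch of such a coarse-graining step, building the Gaussian kernel by hand and assuming bs.ifft as the inverse of bs.fft (the dedicated ComFiT helper may differ in detail):

```python
import numpy as np
import comfit as cf

bs = cf.BaseSystem(2, xRes=101, yRes=101)
field = np.sin(bs.x / 2) * np.cos(bs.y / 3)           # some field to coarse-grain

width = bs.a0                                         # coarse-graining width (length scale a0)
k2 = sum(ki**2 for ki in bs.k)                        # |k|^2 on the Fourier grid
kernel_f = np.exp(-0.5 * width**2 * k2)               # Gaussian kernel in Fourier space
field_coarse = np.real(bs.ifft(kernel_f * bs.fft(field)))
```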
The Gaussian function itself is so useful that it can be calculated directly with the calc_Gaussian method.
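For instance, mirroring the usage in the plotting example further up:

```python
import comfit as cf

bs = cf.BaseSystem(1, xlim=[-10, 10], xRes=101)
gaussian = bs.calc_Gaussian(position=1)               # a Gaussian centered at x = 1
```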
Vortex fields
A general feature that will be reused is that of vortex fields. An angle field is a field where each point in space corresponds to an angle \(\theta \in \mathcal S^n\). A vortex is a topological defect in an angle field, around which the circulation is some integer multiple of the covering of \(\mathcal S^n\).
Angle field of a single vortex in two dimensions
In two dimensions, the angle field takes values \(\theta \in [-\pi,\pi \rangle\) and a vortex is a point \(\mathbf r_0\). The angle field produced by the vortex has a circulation which is an integer multiple of \(2\pi\), i.e.,
where \(s_n\) is the integer charge of the vortex. A possible angle field for a vortex positioned at \((x_0,y_0)\) is given by
Angle field of a vortex ring in three dimensions
For a ring vortex in three dimensions centered at \(\mathbf{r_0}\) with radius \(R\), an angle field with the correct topological charge is given by first calculating the auxiliary quantities
and then calculating
These expressions are based on the geometry depicted in the following figure.
Vortex ring angle field explanation: Geometry of a vortex ring in the plane given by \(\vec n\). \(\mathcal N'\) is the plane normal to the tangent vector \(\vec t'\) at \(\vec r'\) upon which we impose a Cartesian coordinate system to determine the angles \(\theta_1\), \(\theta_2\) that are used to construct the (inset) initial angle field. Figure reprinted from Ref.3 with permission.
The angle field is then given by
and is implemented in the function calc_angle_field_vortex_ring
.
Periodic boundary conditions: Numerical implementation of angle fields
Apart from the angle field of a single vortex, the other fields are compatible with periodic boundary conditions. The expressions for these fields, however, are really only valid for an infinite region. When they are imposed on a periodic domain, spurious boundary effects result, especially if one of the vortices is placed near the edge of the simulation domain. By simply inserting the vortices directly, we get what is shown in panel (a) of the following figure.
Numerical implementation of periodic angle fields: The angle field of panel (a) has been filtered by the field \(F\) with \(w=0.2x_{\textrm{max}}\) to produce the periodic field given in panel (c). This field is simply rolled to produce a different position for the dipole in panel (d).
This field is not periodic on the domain. This typically causes the unintentional nucleation of vortices and strain on the boundary. We therefore seek to modify the fields so that they don't "see" the periodic boundary conditions.
In order to produce a field that is periodic on the domain, we transform the field \(\theta\) to a complex field \(\eta = e^{i \theta}\). The argument of this complex field has the correct winding configuration of the vortex dipole. However, we want to smooth this field so that it goes to \(1\) (\(\theta=0)\) at the borders. To do so, we introduce the filter function
where \(r^2 = (x-x_{\textrm{mid}})^2 + (y-y_{\textrm{mid}})^2\), which is a function that is zero in the center region and goes to \(1\) at infinity over a width of \(w\). The filtered field \(\tilde \eta\) is then obtained by smoothly interpolating from the original angle field according to
\(\tilde \eta\) is shown in Figure (c). The value of \(w\) and \(R\) can be adjusted and are found in the source code as width
and radius
, respectively.
From this section, it is clear that the initial vortex dipole should not be too large. We have therefore included a warning that is issued if one attempts to initialize a dipole with a separation larger than a certain threshold.
Numerical integration scheme
The systems of interest for this code are those that can be written in the form
$$\partial_t \psi = \omega \psi + N(\psi),$$
where \(\omega\) is a linear differential operator and \(N\) is a non-linear operator (a function of \(\psi\)). The following table shows some examples from the models that we will discuss in the following chapters.
Model | \(\omega\) | \(\omega_{\mathfrak f}(\mathbf{k})\) | \(N\) |
---|---|---|---|
Quantum Mechanics | \(\frac{1}{2}i \nabla^2\) | \(-\frac{1}{2} i \mathbf{k}^2\) | \(- i V\) |
BEC | \((i+\gamma) (1+\frac{1}{2}\nabla^2)\) | \((i+\gamma) (1-\frac{1}{2}\mathbf k^2)\) | \(- (i + \gamma) (V_{ext} + \psi \psi^*)\psi\) |
Active Nematic | \(\frac{K}{\gamma} \nabla^2 +\frac{AB}{\gamma}\) | \(-\frac{K}{\gamma} k^2 +\frac{AB}{\gamma}\) | \(- \mathbf{u}\cdot \nabla Q + Q \Omega -\Omega Q - \frac{2A}{\gamma}Q^2_{kk}Q\) |
Table: Examples of time evolution operators, non-conserved.
In the following, we will explain the method of exponential time differencing for stiff systems, introduced in Ref.1. This will result in two integration schemes: the exponential time differencing second-order Runge-Kutta (ETD2RK) scheme and the fourth-order ETD4RK scheme. As in Ref.1, we will show an intuitive way to obtain the former and only recite the expressions for the latter.
The ETD2RK scheme
To see how we obtain the ETD2RK scheme, we take the Fourier transformation of the time evolution equation and get
$$\partial_t \psi_{\mathfrak f} = \omega_{\mathfrak f} \psi_{\mathfrak f} + N_{\mathfrak f},$$
where \(\psi_{\mathfrak f}(\mathbf{k})\) is the Fourier transform of \(\psi(\mathbf{r})\) and \(N_{\mathfrak f}\) that of \(N(\psi)\). Multiplying by the integrating factor \(e^{-\omega_{\mathfrak f} t}\) and integrating from \(t\) to \(t+ \Delta t\), one finds
$$e^{-\omega_{\mathfrak f}(t+\Delta t)} \psi_{\mathfrak f}(t+\Delta t) = e^{-\omega_{\mathfrak f} t} \psi_{\mathfrak f}(t) + \int_t^{t+\Delta t} d\tau\, e^{-\omega_{\mathfrak f} \tau} N_{\mathfrak f}(\tau).$$
We multiply by \(e^{ \omega_{\mathfrak f}(t+ \Delta t)}\) and get
$$\psi_{\mathfrak f}(t+\Delta t) = e^{\omega_{\mathfrak f}\Delta t} \psi_{\mathfrak f}(t) + e^{\omega_{\mathfrak f}(t+\Delta t)} \int_t^{t+\Delta t} d\tau\, e^{-\omega_{\mathfrak f}\tau} N_{\mathfrak f}(\tau).$$
This is an exact result; however, the last integral is unknown. In order to calculate it, we approximate \(N_{\mathfrak f}(t+\tau) \approx N_{\mathfrak f 0} + \frac{\Delta N_{\mathfrak f}}{\Delta t} \tau\), where \(N_{\mathfrak f 0} = (N(\psi(t)))_{\mathfrak f}\) and \(\Delta N_{\mathfrak f} = N_{\mathfrak f}(t+\Delta t)-N_{\mathfrak f}(t)\). We also change the integration limits from \(\tau \in [t,t+\Delta t]\) to \(\tau \in [0,\Delta t]\), which gives
$$\psi_{\mathfrak f}(t+\Delta t) \approx e^{\omega_{\mathfrak f}\Delta t} \psi_{\mathfrak f}(t) + \frac{e^{\omega_{\mathfrak f}\Delta t}-1}{\omega_{\mathfrak f}} N_{\mathfrak f 0} + \frac{e^{\omega_{\mathfrak f}\Delta t}-1-\omega_{\mathfrak f}\Delta t}{\omega_{\mathfrak f}^2 \Delta t} \Delta N_{\mathfrak f}.$$
To find \(\psi_{\mathfrak f} (t+\Delta t)\), we would need to know the value \(N_{\mathfrak f} (t+\Delta t)\) before finding the state \(\psi(t+\Delta t)\). To get around this, we first find a predicted state \(\psi_a\) by setting \(\Delta N_{\mathfrak f}=0\) and evaluating the equation above. This lets us calculate an approximate \(\Delta N_{\mathfrak f} = N_{\mathfrak f a} - N_{\mathfrak f 0}\), which we then use to evolve \(\psi\). This is the ETD2RK scheme.
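Collecting terms, a sketch of the resulting two-step update reads as follows. The names \(I_{\mathfrak f 0}\), \(I_{\mathfrak f 1}\), and \(I_{\mathfrak f 2}\) for the integrating factors are used here to connect with the discussion below; the exact definitions in the source code may differ in detail.

$$I_{\mathfrak f 0} = e^{\omega_{\mathfrak f}\Delta t}, \qquad I_{\mathfrak f 1} = \frac{e^{\omega_{\mathfrak f}\Delta t}-1}{\omega_{\mathfrak f}}, \qquad I_{\mathfrak f 2} = \frac{e^{\omega_{\mathfrak f}\Delta t}-1-\omega_{\mathfrak f}\Delta t}{\omega_{\mathfrak f}^2\Delta t},$$

$$\psi_{\mathfrak f a} = I_{\mathfrak f 0}\, \psi_{\mathfrak f}(t) + I_{\mathfrak f 1}\, N_{\mathfrak f 0}, \qquad \psi_{\mathfrak f}(t+\Delta t) = \psi_{\mathfrak f a} + I_{\mathfrak f 2}\, \left(N_{\mathfrak f a} - N_{\mathfrak f 0}\right).$$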
Note that \(N_{\mathfrak f}\) is a non-linear function of the field variable \(\psi\), but can also be an explicit variable of time \(t\), i.e. \(N_{\mathfrak f}(\psi,t)\).
Therefore, in the code, it has to be encoded as a function of these two variables calc_nonlinear_evolution_function_f(self, psi, t)
.
For numerical purposes, it is useful to calculate the small \(\omega_{\mathfrak f}\) limit. We expand the exponential in its Taylor series and keep the leading order term to get:
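Assuming the integrating factors sketched above, the leading-order limits are

$$I_{\mathfrak f 1} \to \Delta t, \qquad I_{\mathfrak f 2} \to \frac{\Delta t}{2} \qquad \text{as } \omega_{\mathfrak f} \to 0.$$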
In \(I_{\mathfrak f 1}\) and \(I_{\mathfrak f 2}\) there is a division by \(0\) when \(\omega_{\mathfrak f} = 0\). To avoid numerical issues related to this, we use the above limits when \(|\omega_{\mathfrak f}|\) is smaller than a tolerance. We don't use the limit for \(I_{\mathfrak f 0}\), since it doesn't contain a division by \(0\). The function evolve_ETD2RK_loop defined in the base system class performs an ETD2RK step. This function is called by the evolvers discussed in the model chapter if the method is defined as method = "ETD2RK". This is the default solver if method is not set. The integrating factors for a given \(\omega_{\mathfrak f}(\mathbf{k})\) can be found with the function calc_evolution_integrating_factors_ETD2RK, where the variable tol gives the tolerance below which the factors are replaced by their leading-order Taylor expansion.
Note that all solvers defined in the class BaseSystem update the time variable self.t to allow for time dependence in the non-linear term.
The ETD4RK scheme
Following Ref.1, we may generalize the method to a fourth order Runge-Kutta as follows
where
Algorithm: The ETD4RK scheme
In the small \(\omega_{\mathfrak f}\) limit, we have
As in the ETD2RK case, \(I_{\mathfrak f 1}\), \(I_{\mathfrak f 3}\), \(I_{\mathfrak f 4}\), and \(I_{\mathfrak f 5}\) contain a division by \(0\) when \(\omega_{\mathfrak f} = 0\).
We therefore replace these coefficients with their limits when \(|\omega_{\mathfrak f}|\) is smaller than a tolerance.
This has been important in order to make the code stable for some of the systems.
In the same way as for the ETD2RK scheme, this is implemented in the function
self.evolve_ETD4RK_loop(self, integrating_factors_f, non_linear_evolution_function, field, field_f)
This function is called by the evolvers discussed in the model chapter if the method is defined as method = "ETD4RK". The integrating factors are found with
self.calc_evolution_integrating_factors_ETD4RK(self, omega_f, tol=10**(-4))
.
The fully non-linear limit
It is both interesting and enlightening to see the fully non-linear limit of these equations, i.e., the limit in which \(\omega_{\mathfrak f} = 0\), \(N_{\mathfrak f} = \partial_t \psi_{\mathfrak f} \equiv \dot{\psi}_{\mathfrak f}\), and the small \(\omega_{\mathfrak f}\) approximations are exact. For the ETD2RK scheme, the resulting update is a two-stage Runge-Kutta method called Heun's method.
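Explicitly, setting \(\omega_{\mathfrak f} = 0\) in the update sketched above gives the standard Heun form

$$\psi_a = \psi(t) + \Delta t\, N(\psi(t)), \qquad \psi(t+\Delta t) = \psi(t) + \frac{\Delta t}{2}\left[N(\psi(t)) + N(\psi_a)\right].$$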
The ETD4RK scheme becomes
Note that this is not the typical Runge-Kutta 4 method, due to the differences in calculating \(\psi_{\mathfrak f c}\). The reason is that a straight-forward generalization of the Runge-Kutta 4 method will not produce a fourth-order method in the general case 1.
The fully linear limit
If \(N=0\), the evolution equation reduces to
$$\partial_t \psi_{\mathfrak f} = \omega_{\mathfrak f} \psi_{\mathfrak f},$$
which is solved exactly by
$$\psi_{\mathfrak f}(\mathbf k, t) = e^{\omega_{\mathfrak f}(\mathbf k)\, t}\, \psi_{\mathfrak f}(\mathbf k, 0).$$
An example of this is the Schrödinger equation, for which \(\omega_{\mathfrak f} = -\frac{1}{2} i \mathbf k^2\), so we get
$$\psi_{\mathfrak f}(\mathbf k, t) = e^{-\frac{1}{2} i \mathbf k^2 t}\, \psi_{\mathfrak f}(\mathbf k, 0).$$
This is an exact expression, of course, so you may evolve this free-particle solution to any time.
Testing
In order to test the numerical methods, we study the simplest model of a field equation with a (non-linear) forcing term, namely the heat equation
where \(T\) is the temperature in Celsius, and \(f(\mathbf r)\) is a forcing term, which we model as
which represents a heating element with temperature \(T_0\) placed at \(\mathbf r_0\).
As a benchmark, we use solve_ivp from scipy.integrate to solve the equation using a finite-difference method.
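A minimal one-dimensional sketch of such a benchmark (the forcing term and parameters here are illustrative, not the values used in ComFiT's tests):

```python
import numpy as np
from scipy.integrate import solve_ivp

N, dx = 101, 1.0
x = np.arange(N) * dx
T0, x0, w = 10.0, x[N // 2], 5.0
envelope = np.exp(-(x - x0)**2 / (2 * w**2))          # localized heating element at x0

def rhs(t, T):
    # periodic Laplacian by central finite differences
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    return lap + (T0 - T) * envelope                  # element relaxes T towards T0

sol = solve_ivp(rhs, (0, 50), np.zeros(N), t_eval=[50])
T_final = sol.y[:, -1]
```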
The solutions match to a satisfactory degree, but a more thorough investigation into how the accuracy of the framework and integration methods scale with spatial and temporal resolution will be performed in the future.
Tests are included in test_base_system.py
, but for visual examination, here are animations of the initial condition \(T=0\) in all three dimensions
Algorithms for tracking defects
To be written
Calculating the velocity
The equations for the velocity are taken from Ref.2, simplified using Mathematica, and then translated into Python code using ChatGPT.
1. Cox, S. M., & Matthews, P. C. (2002). Exponential Time Differencing for Stiff Systems. Journal of Computational Physics, 176(2), 430–455. https://doi.org/10.1006/jcph.2002.6995
2. Skogvoll, V., Rønning, J., Salvalaglio, M., & Angheluta, L. (2023). A unified field theory of topological defects and non-linear local excitations. npj Computational Materials, 9(1), Article 1. https://doi.org/10.1038/s41524-023-01077-6
3. Skogvoll, V., Angheluta, L., Skaugen, A., Salvalaglio, M., & Viñals, J. (2022). A phase field crystal theory of the kinematics of dislocation lines. Journal of the Mechanics and Physics of Solids, 166, 104932. https://doi.org/10.1016/j.jmps.2022.104932