Commit 4ccb8b67 authored by Christoph Grueninger's avatar Christoph Grueninger

[handbook] Rewrite Newton in a nutshell.

The emphasis is now on the elements of the linear system
and on making it clearer how the physical equations are
related to the linear system.

git-svn-id: svn:// 2fb0f335-1f38-0410-981e-8018bf24f1b0
parent 461a792b
\section{Assembling the linear system}
Coming back to the example of Chapter \ref{flow}, the following mass conservation
equation is to be solved.
The physical system is implemented as the mathematical differential equation in
local operators. \Dumux generates the linear system automatically. Read on to
learn what is done internally.
% \subsection{Newton's algorithm}
The differential equations are implemented in residual form: all terms are moved
to the left-hand side and summed up. The terms contain values of the primary
variables, which are part of the solution vector $\textbf{u}$. The sum of the terms
is called the residual $\textbf{r}(\textbf{u})$, which is a function of the solution. For
the mass conservation equation above, the residual form reads
\begin{equation}
\underbrace{
\phi \frac{\partial \varrho_\alpha S_\alpha}{\partial t}
-
\text{div} \left(
\varrho_\alpha \frac{k_{r\alpha}}{\mu_\alpha} \mbox{\bf K}
\left(\grad\, p_\alpha - \varrho_{\alpha} \mbox{\bf g} \right)
\right) - q_\alpha}_{=:\, \textbf{r}(\textbf{u})}
= 0 \; .
\end{equation}
Because of the nonlinear dependencies in there (even in this comparatively simple
equation), this is a difficult task. However, for finding roots of
difficult equations there is a handy method: \textsc{Newton}'s method.
When using a fully coupled numerical model, one timestep essentially consists
of the application of the \textsc{Newton} algorithm to solve the nonlinear system.
One step of the \textsc{Newton} method can be formalized as follows:
\begin{align}
\textbf{u}^{r+1} &= \textbf{u}^r - \left(\textbf{f}^\prime (\textbf{u}^r) \right)^{-1} \textbf{f}(\textbf{u}^r) \\
\Leftrightarrow \quad {\textbf{f}^{\prime}(\textbf{u}^r)} \left( \textbf{u}^{r+1}-\textbf{u}^r \right) &= -\textbf{f}(\textbf{u}^r) \\
\Leftrightarrow \quad \underbrace{\textbf{f}^{\prime}(\textbf{u}^r)}_{\textnormal{Jacobian}}
\left( \textbf{u}^r - \textbf{u}^{r+1} \right) &= \textbf{f}(\textbf{u}^r) \label{NewtonAsUsed}
\end{align}
\noindent with
\begin{itemize}
\item $\phantom{a}^r$: last iteration, $\phantom{a}^{r+1}$: current iteration,
\item $\phantom{a}^\prime$: derivative,
\item $\textbf{u}$: vector of unknowns, the actual primary variables,
\item $\textbf{f}(\textbf{u}^r)$: function of the vector of unknowns.
\end{itemize}
1-D example with slope $m$:
\begin{align*}
m &= \frac{y(u^{r+1})-y(u^{r})}{u^{r+1}-u^{r}} \; , \\
\textnormal{for a root of the function:} \quad
m &= - \frac{y(u^{r})}{u^{r+1}-u^{r}} \; .
\end{align*}
We seek the value of $\textbf{u}$ (generally a vector of unknowns) for which
$\textbf{f}$ becomes zero, so the quantity of interest is $\textbf{u}^{r+1}$.
But the linear solver (BiCGSTAB, Pardiso, \dots) solves systems of the form
\begin{equation}
A\textbf{x} = \textbf{b} \; . \label{GenSysEq}
\end{equation}
Comparing (\ref{GenSysEq}) with (\ref{NewtonAsUsed}) leads to:
\begin{itemize}
\item $\textbf{b} = \textbf{f}(\textbf{u}^r)$: the right-hand side, known from the last
iteration. Here, $\textbf{f}(\textbf{u}^r)$ is called the residual. It is obtained by
evaluating the balance equations with the primary variables as obtained from the
last iteration step.
\item $A = \textbf{f}^{\prime}(\textbf{u}^r)$: the coefficient matrix or \textsc{Jacobian}.
It is obtained by numerical differentiation: the balance equations are evaluated
at the last solution plus $\epsilon$ and at the last solution minus $\epsilon$,
and the difference is divided by $2\epsilon$.
\item $\textbf{x} = (\textbf{u}^{r} - \textbf{u}^{r+1})$: the change of the solution,
cf.\ (\ref{NewtonAsUsed}); this is what the linear solver finds as a solution.
\end{itemize}
This is equivalent to stating that the implemented algorithm solves for the change
of the solution. In other words: the iteration continues until $\textbf{u}$ no longer
changes from one \textsc{Newton} iteration to the next (not to be confused with a
timestep!).
In the rest of \Dumux (everywhere besides the solver), not the change of the
solution but the actual solution is used. Therefore the outcome
of the linear solver needs to be reformulated, as done in \verb+updateMethod.update(*this, u, *uOld, model);+.
In this function the ``change in solution'' is converted into the ``solution''.
Afterwards the quantity \verb+*u+ stands for the solution.
In residual form, the mass conservation equation from above reads
\begin{equation*}
\underbrace{
\phi \frac{\partial \varrho_\alpha S_\alpha}{\partial t}
-
\text{div} \left(
\varrho_\alpha \frac{k_{r\alpha}}{\mu_\alpha} \mbox{\bf K}
\left(\grad\, p_\alpha - \varrho_{\alpha} \mbox{\bf g} \right)
\right) - q_\alpha}_{=:\, \textbf{r}(\textbf{u})}
= 0 \; .
\end{equation*}
We don't know the solution $\textbf{u}$, so we use the iterative Newton algorithm to
obtain a good estimate of $\textbf{u}$. We start with an initial guess $\textbf{u}^0$ and
calculate its residual $\textbf{r}(\textbf{u}^0)$. To minimize the error, we calculate
the derivative of the residual with respect to the solution. This is the Jacobian
matrix
\begin{equation*}
J_{\textbf{r}(\textbf{u}^i)}
= \left(\frac{\text{d}}{\text{d}\textbf{u}^i_m}\textbf{r}(\textbf{u}^i)_n\right)_{m,n}
\end{equation*}
with $i$ denoting the Newton iteration step.
Each column is the derivative of the residual with respect to the $m$th entry of $\textbf{u}^i$.
The Jacobian indicates the direction in which the residual increases. By solving the
linear system
\begin{equation*}
J_{\textbf{r}(\textbf{u}^i)} \cdot \textbf{x}^i = \textbf{r}(\textbf{u}^i)
\end{equation*}
we calculate the direction of maximum growth $\textbf{x}^i$. We subtract it from
our current solution to get a new, better solution
$\textbf{u}^{i+1} = \textbf{u}^i - \textbf{x}^i$.
We repeat the calculation of the Jacobian $J_{\textbf{r}(\textbf{u}^i)}$ and the
direction of maximum growth $\textbf{x}^i$ until our solution becomes good enough.