diff --git a/doc/handbook/4_newtoninanutshell.tex b/doc/handbook/4_newtoninanutshell.tex
index a258e99a0cfbf69d30557c9eda058e86fa35289a..e9901443cd2b0945fff2508d44d7dccea64246ce 100644
--- a/doc/handbook/4_newtoninanutshell.tex
+++ b/doc/handbook/4_newtoninanutshell.tex
@@ -1,82 +1,48 @@
 \section{Assembling the linear system}
 \label{sc_linearsystem}
-
-Coming back to the example of chapter \ref{flow} the following mass conservation
-equation is to be solved:
-\begin{align}
+The physical system is described by differential equations, which are
+implemented in local operators. \Dumux assembles the resulting linear system
+automatically. Read on to learn what is done internally.
+
+% \subsection{Newton's algorithm}
+The differential equations are implemented in residual form: all terms are
+moved to the left-hand side and summed up. These terms contain the primary
+variables, which form the solution vector $\textbf{u}$. The sum of the terms
+is called the residual $\textbf{r}(\textbf{u})$, which is a function of the
+solution. For example:
+\begin{align*}
 \underbrace{
   \phi \frac{\partial \varrho_\alpha S_\alpha}{\partial t}
  -
- \text{div} \left\{
+ \text{div} \left(
  \varrho_\alpha \frac{k_{r\alpha}}{\mu_\alpha} \mbox{\bf K}
  \left(\grad\, p_\alpha - \varrho_{\alpha} \mbox{\bf g} \right)
- \right\} - q_\alpha} _
-{\textbf{f}(\textbf{u}^r)}
-= 0 \; .
-\end{align}
-
-Because of the nonlinear dependencies (even in this comparativly simple equation)
-in there, this is a really difficult task. However, for finding roots of of
-difficult equations there is a really handy method out there: \textsc{Newton}'s method.
-
-When using a fully coupled numerical model, one timestep essentially consists
-of the application of the \textsc{Newton} algorithm to solve the nonlinear system.
-
-One step of the \textsc{Newton} method can be formalized as follows:
-
-\textsc{Newton} method:
-\begin{subequations}
-\begin{align}
-\label{NewtonGen}
-\textbf{u}^{r+1} &= \textbf{u}^r -  \left(\textbf{f}^\prime (\textbf{u}^r) \right)^{-1} \textbf{f}(\textbf{u}^r) \\
-\Leftrightarrow {\textbf{f}^{\prime}(\textbf{u}^r)}  ( \textbf{u}^{r+1}-\textbf{u}^r) &= -\textbf{f}(\textbf{u}^r) \\
-\Leftrightarrow \underbrace{\textbf{f}^{\prime}(\textbf{u}^r)}_{\textnormal{Jacobian}}
-                ( \textbf{u}^r - \textbf{u}^{r+1}) &= \textbf{f}(\textbf{u}^r) \label{NewtonAsUsed}
-\end{align}
-\end{subequations}
-
-\noindent with
-\begin{itemize}
-\item $\phantom{a}^r$: last iteration, $\phantom{a}^{r+1}$: current iteration,
-\item $\phantom{a}^\prime$: derivative
-\item $\textbf{u}$: vector of unknowns, the actual primary variables
-\item $\textbf{f}(\textbf{u}^r)$: function of vector of unknowns
-\end{itemize}
-
-1-D example with slope $m$:
-\begin{equation}
- m= \frac{y(u^{r+1})-y(u^{r})}{u^{r+1}-u^{r}}
- \textnormal{ for a root of a function: }
- m= - \frac{y(u^{r})}{u^{r+1}-u^{r}}
-\end{equation}
-The value of u (generally a vector of unknowns) for which f becomes zero is searched
-for. Therefore the quantity of interest is $\textbf{u}^{r+1}$.
-
-But the (BiCGSTAB / Pardiso / ...) linear solver solves systems of the form:
-\begin{equation}
-\label{GenSysEq}
- A\textbf{x} = \textbf{b} .
-\end{equation}
-
-Comparing (\ref{GenSysEq}) with (\ref{NewtonAsUsed}) leads to:
-\begin{itemize}
-\item $\textbf{b} = \textbf{f}(\textbf{u}^r)$ r.h.s. as it is known from the last
-iteration. Here, $\textbf{f}(\textbf{u}^r)$ is called residual. It is obtained by
-evaluating the balance equations with the primary variables, as obtained from the
-last iteration step.
-\item $A=\textbf{f}^{\prime}(\textbf{u}^r)$ coefficient matrix or \textsc{Jacobian}.
-It is obtained by numerical differentiation. Evaluating the balance equations at
-the last solution + eps, then evaluating the balance equations at the last solution
-- eps, division by 2eps: numerical differentiation complete.
-\item $\textbf{x} = (\textbf{u}^{r+1} - \textbf{u}^{r})$ this is what the linear
-solver finds as an solution.
-\end{itemize}
-
-This is equivalent to stating that the implemented algorithm solves for the change
-of the solution. Or in other words: until the $\textbf{u}$ does not change with one more \textsc{Newton-}iteration (do not confuse with timestep!).
-
-In the rest of Dumux (everywhere besides in the solver), not the change of the
-solution is looked for, but the actual solution is used. Therefore the outcome
-of the linear solver needs to be reformulated as done in \verb+updateMethod.update(*this, u, *uOld, model);+.
-In this function the ``change in solution'' is changed to ``solution''.
-Afterwards the quantity \verb+*u+ stands for the solution.
+ \right) - q_\alpha} _
+{\textbf{r}(\textbf{u})}
+= 0
+\end{align*}
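+
+Note that the time derivative must be discretized before the residual can be
+evaluated. As an illustrative sketch (not the literal \Dumux implementation),
+an implicit Euler discretization with time step size $\Delta t$ turns the
+storage term into a difference quotient:
+\begin{align*}
+\phi \frac{(\varrho_\alpha S_\alpha)(\textbf{u}) - (\varrho_\alpha S_\alpha)(\textbf{u}^{\text{old}})}{\Delta t}
+ -
+ \text{div} \left(
+ \varrho_\alpha \frac{k_{r\alpha}}{\mu_\alpha} \mbox{\bf K}
+ \left(\grad\, p_\alpha - \varrho_{\alpha} \mbox{\bf g} \right)
+ \right) - q_\alpha
+= \textbf{r}(\textbf{u}) \; ,
+\end{align*}
+where $\textbf{u}^{\text{old}}$ denotes the solution of the previous time step.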
+
+We don't know the solution $\textbf{u}$, so we use the iterative Newton
+algorithm to obtain a good estimate of it. We start with an initial guess
+$\textbf{u}^0$ and calculate its residual $\textbf{r}(\textbf{u}^0)$. To
+minimize the error, we calculate the derivative of the residual with respect
+to the solution. This is the Jacobian matrix
+\begin{align*}
+  \frac{\text{d}}{\text{d}\textbf{u}}\textbf{r}(\textbf{u}^i)
+  = J_{\textbf{r}(\textbf{u}^i)}
+  = \left(\frac{\text{d}\,\textbf{r}(\textbf{u}^i)_m}{\text{d}\textbf{u}^i_n}\right)_{m,n}
+\end{align*}
+with $i$ denoting the Newton iteration step. The entry in row $m$ and column
+$n$ is the derivative of the $m$th component of the residual with respect to
+the $n$th component of $\textbf{u}^i$.
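+
+\Dumux obtains the Jacobian by numerical differentiation: the residual is
+evaluated at the last solution shifted by a small $\varepsilon$ in a single
+component, in both directions, and the difference is divided by $2\varepsilon$
+(a central-difference sketch; the actual implementation may differ in detail):
+\begin{align*}
+  \left(J_{\textbf{r}(\textbf{u}^i)}\right)_{m,n} \approx
+  \frac{\textbf{r}(\textbf{u}^i + \varepsilon\, \textbf{e}_n)_m
+      - \textbf{r}(\textbf{u}^i - \varepsilon\, \textbf{e}_n)_m}{2\varepsilon} \; ,
+\end{align*}
+where $\textbf{e}_n$ denotes the $n$th unit vector.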
+
+The Jacobian describes how the residual changes when the solution is varied.
+By solving the linear system
+\begin{align*}
+  J_{\textbf{r}(\textbf{u}^i)} \cdot \textbf{x}^i = \textbf{r}(\textbf{u}^i)
+\end{align*}
+we calculate the Newton update $\textbf{x}^i$. We subtract it from our
+current estimate to obtain a new, better estimate
+$\textbf{u}^{i+1} = \textbf{u}^i - \textbf{x}^i$.
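+
+As a 1-D illustration, consider the hypothetical scalar residual
+$r(u) = u^2 - 2$ with derivative $r^\prime(u) = 2u$ and root $\sqrt{2}$.
+Starting from the initial guess $u^0 = 1$:
+\begin{align*}
+  r^\prime(u^0)\, x^0 = r(u^0)\colon&\quad 2\, x^0 = -1
+    &\Rightarrow\quad u^1 &= 1 - (-0.5) = 1.5 \; , \\
+  r^\prime(u^1)\, x^1 = r(u^1)\colon&\quad 3\, x^1 = 0.25
+    &\Rightarrow\quad u^2 &= 1.5 - 0.08\overline{3} \approx 1.4167 \; .
+\end{align*}
+After only two iterations, the estimate is already close to
+$\sqrt{2} \approx 1.4142$.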
+
+We repeat the calculation of the Jacobian $J_{\textbf{r}(\textbf{u}^i)}$ and
+the update $\textbf{x}^i$ until the solution is good enough, i.e. until a
+convergence criterion such as a sufficiently small residual or update is
+fulfilled.
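The complete iteration can be sketched in a few lines of Python. This is an illustrative stand-alone example, not \Dumux code: the residual, Jacobian, and the hand-written $2 \times 2$ solver are hypothetical stand-ins for the assembled system and the linear solver (BiCGSTAB, ...).

```python
def solve2x2(J, r):
    """Solve the 2x2 linear system J x = r via Cramer's rule
    (stand-in for the linear solver)."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(r[0] * J[1][1] - r[1] * J[0][1]) / det,
            (J[0][0] * r[1] - J[1][0] * r[0]) / det]

def newton(residual, jacobian, u, tol=1e-12, max_iter=50):
    """Newton loop: solve J(u^i) x^i = r(u^i), then u^{i+1} = u^i - x^i."""
    for _ in range(max_iter):
        r = residual(u)
        if max(abs(c) for c in r) < tol:   # convergence: small residual norm
            break
        x = solve2x2(jacobian(u), r)       # Newton update x^i
        u = [u[0] - x[0], u[1] - x[1]]     # u^{i+1} = u^i - x^i
    return u

# Hypothetical residual r(u) = (u0^2 + u1^2 - 4, u0 - u1),
# with root u0 = u1 = sqrt(2), and its analytic Jacobian.
residual = lambda u: [u[0] ** 2 + u[1] ** 2 - 4.0, u[0] - u[1]]
jacobian = lambda u: [[2.0 * u[0], 2.0 * u[1]], [1.0, -1.0]]
solution = newton(residual, jacobian, [1.0, 2.0])
```

Starting from the guess $(1, 2)$, the loop converges in a handful of iterations to $u_0 = u_1 = \sqrt{2}$.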