Commit 5cfd24af authored by Martin Utz's avatar Martin Utz Committed by Martin Schneider
Add the first draft of parrallel section

The new content is included, but the formulation has to be improved.
parent b5e780f3
Merge request !1338: [handbook] Update for 3.0
@@ -148,6 +148,7 @@ in deeper modifications of underlying \Dumux models, classes, functions, etc.
 \input{5_stepsofasimulation}
 \input{5_propertysystem}
 \input{5_grids}
+\input{5_parallel}
 \bibliographystyle{plainnat}
 \bibliography{dumux-handbook}
\section{Parallel Computation}
\label{sec:parallelcomputation}
\Dumux also supports parallel computation. The parallel version requires an external
MPI library, for example OpenMPI, MPICH, or Intel MPI.
Depending on the grid manager, METIS or ParMETIS can additionally be used for partitioning.
In the following, we show how to prepare a model and run it in parallel, using the
incompressible 2p model in
\texttt{dumux/test/porousmediumflow/2p/implicit/incompressible} as an example.
\subsection{Preparing the Model}
If the parallel AMGBackend is not already set in your application,
you have to switch from the sequential solver backend to the parallel AMG backend.
First, include the header file of the parallel AMGBackend
\begin{lstlisting}
#include <dumux/linear/amgbackend.hh>
\end{lstlisting}
and remove the header file of the sequential backend
\begin{lstlisting}
#include <dumux/linear/seqsolverbackend.hh>
\end{lstlisting}
Second, change the linear solver to the AMG backend
\begin{lstlisting}
using LinearSolver = Dumux::AMGBackend<TypeTag>;
\end{lstlisting}
and recompile your application.
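The two source modifications above can be sketched as a toy shell session. The file name \texttt{problem\_snippet.hh} and the sequential solver class shown are illustrative stand-ins, not the actual files of the test case:

```shell
# Toy sketch of the two edits described above, applied to a stand-in
# header file (file name and sequential solver class are illustrative).
cat > problem_snippet.hh <<'EOF'
#include <dumux/linear/seqsolverbackend.hh>
using LinearSolver = Dumux::ILU0BiCGSTABBackend<TypeTag>;
EOF

# Swap the sequential include and solver alias for the parallel AMG ones.
sed -i 's|linear/seqsolverbackend.hh|linear/amgbackend.hh|' problem_snippet.hh
sed -i 's|ILU0BiCGSTABBackend|AMGBackend|' problem_snippet.hh
cat problem_snippet.hh
```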
\subsection{Starting the Parallel Computation}
The parallel simulation is started with \texttt{mpirun}, followed by \texttt{-np},
the number of cores that should be used, and the executable:
\begin{lstlisting}
mpirun -np n_cores executable
\end{lstlisting}
On HPC clusters you usually have to use a queuing system (e.g. SLURM).
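On a SLURM-managed cluster, a minimal batch script could look like the following sketch; the job name, task count, wall time, and executable name are placeholders that must be adapted to your cluster and application:

```shell
#!/bin/bash
# Hypothetical SLURM batch script; all values below are placeholders.
#SBATCH --job-name=dumux_2p_parallel
#SBATCH --ntasks=8
#SBATCH --time=01:00:00

# SLURM sets SLURM_NTASKS to the value requested via --ntasks.
mpirun -np $SLURM_NTASKS ./executable
```

The script would be submitted with \texttt{sbatch}.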
\subsection{Handling Parallel Results}
The results should not differ between parallel and serial execution. As in
the serial case, you get vtu-files as output. However, there is an additional
variable \texttt{"process rank"} that shows the rank of the MPI process
that computed each part of the domain.
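As a concrete illustration of where this variable appears, the sketch below creates a minimal vtu-like fragment (not real \Dumux output; file name and values are made up) and locates the \texttt{"process rank"} data array in it:

```shell
# Toy sketch (not actual DuMux output): write a minimal vtu-like file
# and check for the "process rank" cell data array that parallel runs add.
cat > example-output.vtu <<'EOF'
<VTKFile type="UnstructuredGrid">
  <CellData>
    <DataArray type="Float32" Name="process rank" format="ascii">
      0 0 1 1
    </DataArray>
  </CellData>
</VTKFile>
EOF
grep -q 'Name="process rank"' example-output.vtu && echo "process rank field found"
```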