Commit b411ac0a authored by Utz's avatar Utz Committed by Martin Schneider

Add the section parallel

Modify the sections "input and output" and "external tools".
parent b16bd7ab
......@@ -147,7 +147,7 @@ in deeper modifications of underlying \Dumux models, classes, functions, etc.
\input{5_spatialdiscretizations}
\input{5_stepsofasimulation}
\input{5_propertysystem}
\input{5_grids}
\input{5_inputoutput}
\input{5_parallel}
\bibliographystyle{plainnat}
......
......@@ -30,9 +30,41 @@ The basic Git commands are:
\subsection{Gnuplot}
\label{gnuplot}
A gnuplot interface is available to plot or visualize results during a simulation run.
This is achieved with the help of the class provided in \texttt{io/gnuplotinterface.hh}.
Have a look at the tests including this header for examples of how to use the interface.
To use the gnuplot interface, you have to make some modifications in your problem file.
First, you have to include the corresponding header file:
\begin{lstlisting}[style=DumuxCode]
#include <dumux/io/gnuplotinterface.hh>
\end{lstlisting}
Second, you have to define an instance of the class \texttt{GnuplotInterface} (e.g. called \texttt{gnuplot\_}) in the private part of your problem class.
\begin{lstlisting}[style=DumuxCode]
Dumux::GnuplotInterface<double> gnuplot_;
\end{lstlisting}
Plotting is usually dealt with in a function \texttt{postTimeStep}, which first extracts the variables to be plotted (in the example below \texttt{x\_} and \texttt{y\_}). The actual plotting is then done using the methods of the gnuplot interface.\\
Example:
\begin{lstlisting}[style=DumuxCode]
gnuplot_.resetPlot(); // reset the plot
gnuplot_.setXRange(0.0, 72000.0); // specify xmin and xmax
gnuplot_.setYRange(0.0, 1.0); // specify ymin and ymax
gnuplot_.setXlabel("time [s]"); // set xlabel
gnuplot_.setYlabel("mole fraction mol/mol"); // set ylabel
// set the x-values, y-values, the name of the data file and the gnuplot options
gnuplot_.addDataSetToPlot(x_, y_, "N2_left.dat", options);
gnuplot_.plot("mole_fraction_N2"); // set the name of the output file
\end{lstlisting}
It is also possible to add several data sets to one plot by calling \texttt{addDataSetToPlot()} more than once.
For more information, have a look at a test including the gnuplot interface header or at
the header file itself (\texttt{dumux/io/gnuplotinterface.hh}).
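As a rough sketch, the pieces fit together as shown below; the member function \texttt{postTimeStep} and the members \texttt{x\_}, \texttt{y\_} and \texttt{gnuplot\_} follow the example above, while \texttt{time\_} and \texttt{currentMoleFraction\_} are purely illustrative placeholders for the sampled quantities:
\begin{lstlisting}[style=DumuxCode]
// called by the time loop after each time step
void postTimeStep()
{
    // append the current time and the quantity of interest
    // (illustrative names) to the data vectors
    x_.push_back(time_);
    y_.push_back(currentMoleFraction_);

    gnuplot_.resetPlot();
    gnuplot_.setXlabel("time [s]");
    gnuplot_.setYlabel("mole fraction [mol/mol]");
    // "with lines" is a plain gnuplot plotting style
    gnuplot_.addDataSetToPlot(x_, y_, "N2_left.dat", "with lines");
    gnuplot_.plot("mole_fraction_N2");
}
\end{lstlisting}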
\subsection{Gstat}
......
......@@ -4,9 +4,9 @@
This section summarizes some ideas about grid generation and the grid formats that can be used by \Dumux
for input and output.
In general,
\Dumux can read grids from file or construct grids inside the code. All grids are constructed inside a so-called \texttt{GridCreator}, which is a \Dumux property.
Note that some \texttt{GridCreator}s are already available in \Dumux, so, e.g., the
construction of a structured grid is fairly easy. We will subsequently introduce the supported file formats, the standard \texttt{GridCreator} and its capabilities,
and briefly mention how to customize and deal with other common grid formats.
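For illustration, with the standard grid creator a structured grid can typically be described entirely in the input file. The following is a minimal sketch; the exact group and key names depend on the grid creator and \Dumux version in use:
\begin{lstlisting}
[Grid]
LowerLeft = 0 0     # lower left corner of the domain
UpperRight = 60 40  # upper right corner of the domain
Cells = 24 16       # number of cells in x and y direction
# alternatively, read an unstructured grid from a file:
# File = ./grids/mygrid.msh
\end{lstlisting}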
\subsection{Supported grid file formats}
......@@ -140,7 +140,7 @@ in dumux-devel at dumux-devel/util/gridconverters/Documentation\_ICEM\_CFD\_crea
\subsection{Output formats}
The default output format for \Dumux is the VTK file format. Additionally, it is possible
to generate plots with the gnuplot interface.
\subsubsection{VTK file format}
\Dumux allows writing out simulation results via the VTK writer.
......@@ -151,7 +151,7 @@ The *.pvd file groups the single *.vtu files and contains additionally the timesteps.
It is also the main file for the visualization with ParaView.
The VTK file format is also supported by other common visualization programs such as VisIt and Tecplot.
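For illustration, a *.pvd file is a small XML collection file that lists the *.vtu files of the single output steps together with their time stamps; a sketch (file names and times are exemplary) looks like:
\begin{lstlisting}
<VTKFile type="Collection" version="0.1">
  <Collection>
    <DataSet timestep="0" part="0" file="example-00000.vtu"/>
    <DataSet timestep="4320" part="0" file="example-00001.vtu"/>
  </Collection>
</VTKFile>
\end{lstlisting}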
\subsubsection{Gnuplot interface}
\Dumux provides a gnuplot interface, which can be used to plot results and to directly generate
image files (e.g. png). To use the gnuplot interface, gnuplot has to be installed. An example can be
found in \texttt{test/io/gnuplotinterface}. For more information see Section~\ref{gnuplot}.
\section{Parallel Computation}
\label{sec:parallelcomputation}
Multicore processors are standard nowadays, and parallel programming is the key to gaining
performance from modern computers. This section explains how \Dumux can be used
on multicore systems, ranging from the user's desktop computer to high performance
computing clusters.
There are different concepts and methods for parallel programming, which are
often grouped into \textit{shared-memory} and \textit{distributed-memory}
approaches. The parallelization in \Dumux is based on the
\textit{Message Passing Interface} (MPI) and is usually called MPI parallelization.
It is the MPI parallelization that allows the user to run
\Dumux applications in parallel on a desktop computer, a laptop or
large high performance clusters. However, the chosen \Dumux
model must support parallel computations, which is the case for most \Dumux applications.
The main idea behind the MPI parallelization is the concept of \textit{domain
decomposition}. For parallel simulations, the computational domain is split into
subdomains, and one process (\textit{rank}) is used to solve the local problem of each
subdomain. During the global solution process, some data exchange between the
ranks/subdomains is needed. MPI is used to send data to other ranks and to receive
data from other ranks.
Most grid managers contain their own domain decomposition methods to split the
computational domain into subdomains. Some grid managers also support external
tools like METIS or ParMETIS for partitioning.
Before \Dumux can be started in parallel, an
MPI library (e.g. OpenMPI, MPICH or IntelMPI)
must be installed on the system, and all \Dune modules and \Dumux must be recompiled.
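On the command line, this could look like the following sketch, where the options file name is exemplary:
\begin{lstlisting}[style=Bash]
# check that an MPI library is available
mpirun --version
# reconfigure and rebuild all DUNE modules and DuMux
./dune-common/bin/dunecontrol --opts=<options_file> all
\end{lstlisting}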
\subsection{Prepare a Parallel Application}
Not all parts of \Dumux can be used in parallel. One example is the linear solvers
of the sequential backend. However, with the AMG backend, \Dumux provides
a parallel solver backend based on Algebraic MultiGrid (AMG) that can be used in
parallel.
If an application does not already use the AMG backend, the user must switch the
backend so that the application can also run in parallel; the following steps use the
incompressible 2p test (\texttt{dumux/test/porousmediumflow/2p/implicit/incompressible}) as an example.
First, the header file for the parallel AMG backend must be included
\begin{lstlisting}[style=DumuxCode]
#include <dumux/linear/amgbackend.hh>
\end{lstlisting}
so that the backend can be used. The header file of the sequential backend
\begin{lstlisting}[style=DumuxCode]
#include <dumux/linear/seqsolverbackend.hh>
\end{lstlisting}
can be removed.
Second, the linear solver must be switched to the AMG backend
\begin{lstlisting}[style=DumuxCode]
using LinearSolver = Dumux::AMGBackend<TypeTag>;
\end{lstlisting}
and the application must be recompiled.
\subsection{Run a Parallel Application}
The parallel simulation is started with the \textbf{mpirun} command.
\begin{lstlisting}[style=Bash]
mpirun -np <n_cores> <executable_name>
\end{lstlisting}
\texttt{-np} sets the number of cores (\texttt{n\_cores}) that should be used for the
computation. On a cluster, you usually have to use a queuing system (e.g. SLURM) to
submit a job.
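As an example, a minimal SLURM job script could look like the following sketch; the job name and resource values are exemplary and cluster-specific:
\begin{lstlisting}[style=Bash]
#!/bin/bash
#SBATCH --job-name=dumux_parallel
#SBATCH --ntasks=8
#SBATCH --time=01:00:00
srun ./<executable_name>
\end{lstlisting}
Such a script is then submitted with \texttt{sbatch}.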
\subsection{Handling Parallel Results}
For most models, the results should not differ between parallel and serial
runs. However, parallel computations are not naturally deterministic.
A typical case where one cannot assume deterministic behavior are models in which
small differences in the solution can cause large differences in the results
(e.g. some turbulent flow problems). Nevertheless, it is reasonable to expect that
the simulation results do not depend on the number of cores, so non-deterministic
behavior should always be questioned. Typical reasons for erroneous non-deterministic
behavior are errors in the parallel computation of boundary conditions or missing/reduced
data exchange in higher-order gradient approximations.
For serial computations, \Dumux produces single vtu-files as its default output format.
During a simulation, one vtu-file is written for every output step.
In the parallel case, one vtu-file is created for each step and processor.
For parallel computations, an additional variable ``process rank'' is written
into the files. The process rank allows the user to inspect the subdomains
after the computation.
\subsection{MPI scaling}
For parallel computations, the number of cores must be chosen
carefully. Using too many cores will not always lead to more performance, but
can result in bad efficiency. One reason is that for small subdomains, the
communication between the subdomains becomes the limiting factor for parallel computations.
The user should test the MPI scaling (the relation between the number of cores and the
computation time) for each specific application to ensure a fast and efficient use of the
given resources.
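A simple scaling test can be scripted by timing the same simulation for different numbers of cores, for example:
\begin{lstlisting}[style=Bash]
# run the same simulation with 1, 2, 4 and 8 cores
# and record the runtime of each run
for n in 1 2 4 8; do
  echo "running with $n cores"
  time mpirun -np $n ./<executable_name>
done
\end{lstlisting}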