Commit 3ab21f73 authored by Martin Utz, committed by Timo Koch

[handbook] Correct some mistakes in input/output and parallel

parent ac702b5e
This section summarizes some ideas about grid generation and grid formats that can be used by \Dumux
for input and output formats.
In general,
\Dumux can read grids from files and construct grids inside the code with a \texttt{GridCreator}.
All grids are constructed inside a so-called \texttt{GridManager}.
Note that some \texttt{GridCreator}s are already available in \Dumux, so e.g.
construction of a structured grid is fairly easy. We will subsequently introduce the supported file formats,
the standard \texttt{GridCreator} and its capabilities
and briefly mention how to customize and deal with common other grid formats.
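As an illustration, a structured grid can typically be described entirely in the input file. The following is a minimal sketch; the parameter group and names follow the usual conventions for the default grid creator, but the exact names depend on the chosen grid type:

```ini
[Grid]
LowerLeft = 0 0      # lower left corner of the domain
UpperRight = 1 1     # upper right corner of the domain
Cells = 10 10        # number of cells in each coordinate direction
```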
\subsection{Supported grid file formats}
For all available parameters see the Doxygen documentation.
% TODO
\subsection{Output formats}
The default output format for \Dumux is the VTK file format. Additionally, it is possible
to generate plots with gnuplot directly from \Dumux.
\subsubsection{VTK file format}
\Dumux can write out simulation results via the \texttt{vtkWriter}.
For every output step, a single *.vtu file is created. For parallel simulations, one file
per output step is generated for each processor.
The *.pvd file groups the single *.vtu files and additionally contains the time step information.
It is also the main file for the visualisation with ParaView.
The VTK file format is also supported by other common visualisation programs like VisIt and Tecplot.
\subsubsection{Customize the VTK output}
Using the respective \texttt{initOutputModule} function of the model \texttt{IOFields}, a default
set of variables is stored in the VTK files. It is also possible to add further variables
using the method \texttt{addField} of the \texttt{vtkWriter}, e.g. a variable called \texttt{temperatureExact}:
\begin{lstlisting}[style=DumuxCode]
vtkWriter.addField(problem->getExactTemperature(), "temperatureExact");
\end{lstlisting}
The first input argument of this method is the value of the additional variable, provided by a method of the corresponding problem.
If it does not already exist, the user has to provide this method.
\begin{lstlisting}[style=DumuxCode]
//! get the analytical temperature
const std::vector<Scalar>& getExactTemperature()
{ return temperatureExact_; }
\end{lstlisting}
The second input argument is the name of the additional variable (as it should be written in the VTK file).
The example above is taken from:\\ \texttt{test/porousmediumflow/1pnc/implicit/test\_1p2cni\_convection\_fv.cc}
\subsubsection{Gnuplot interface}
\Dumux provides a gnuplot interface, which can be used to plot results and directly generate
image files (e.g. png). To use the gnuplot interface, gnuplot has to be installed. For more information see \ref{gnuplot}.
performance from modern computers. This section explains how \Dumux can be used
on multicore systems, ranging from the users desktop computer to high performance
computing clusters.
There are different concepts and methods for parallel programming, which are
often grouped in \textit{shared-memory} and \textit{distributed-memory}
approaches. The parallelization in \Dumux is based on the
\textit{Message Passing Interface} (MPI), which is usually called MPI parallelization (distributed-memory approach).
It is the MPI parallelization that allows the user to run
\Dumux applications in parallel on a desktop computer, the user's laptop or
large high performance clusters. However, the chosen \Dumux
model must support parallel computations, which is the case for most \Dumux applications.
The main idea behind the MPI parallelization is the concept of \textit{domain
decomposition}. For parallel simulations, the computational domain is split into
subdomains and one process (\textit{rank}) is used to solve the local problem of each
subdomain. During the global solution process, some data exchange between the
ranks/subdomains is needed. MPI is used to send data to other ranks and to receive
data from other ranks.
Most grid managers contain their own domain decomposition methods to split the
computational domain into subdomains. Some grid managers also support external
tools like METIS, ParMETIS, PTScotch or ZOLTAN for partitioning.
Before \Dumux can be started in parallel, an
MPI library (e.g. OpenMPI, MPICH or IntelMPI) has to be installed on the system.
Not all parts of \Dumux can be used in parallel. One example is the linear solvers
of the sequential backend. However, with the AMG backend \Dumux provides
a parallel solver backend based on Algebraic Multi Grid (AMG) that can be used in
parallel.
If an application does not already use the AMG backend, the
user must switch the backend to AMG to run the application in parallel.
First, the header file for the parallel AMG backend must be included
\begin{lstlisting}[style=DumuxCode]
#include <dumux/linear/amgbackend.hh>
\end{lstlisting}
so that the backend can be used. The header file of the sequential backend
\begin{lstlisting}[style=DumuxCode]
#include <dumux/linear/seqsolverbackend.hh>
\end{lstlisting}
is no longer needed and can be removed. The linear solver must then be switched to the AMG backend
\begin{lstlisting}[style=DumuxCode]
using LinearSolver = Dumux::AMGBackend<TypeTag>;
\end{lstlisting}
and the application must be compiled.
\subsection{Run a Parallel Application}
The starting procedure for parallel simulations depends on the chosen MPI library.
Most MPI implementations use the \textbf{mpirun} command
\begin{lstlisting}[style=Bash]
mpirun -np <n_cores> <executable_name>
\end{lstlisting}
where \textit{-np} sets the number of cores (\texttt{n\_cores}) that should be used for the
computation. On a cluster you usually have to use a queuing system (e.g. SLURM) to
submit a job.
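A job script for SLURM might look as follows; the job name, resource limits and the executable name are placeholders that depend on the cluster and the application:

```bash
#!/bin/bash
#SBATCH --job-name=dumux_sim     # job name shown in the queue (placeholder)
#SBATCH --ntasks=32              # number of MPI processes
#SBATCH --time=02:00:00          # wall-clock time limit

# executable and input file names are placeholders
srun ./my_dumux_app params.input
```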
\subsection{Handling Parallel Results}
For most models, the results should not differ between parallel and serial
runs. However, parallel computations are not naturally deterministic.
A typical case where one can not assume a deterministic behaviour are models where
small differences in the solution can cause large differences in the results
(e.g. for some turbulent flow problems). Nevertheless, it is useful to expect that
the simulation results do not depend on the number of cores. Therefore, you should double-check
whether the model is really non-deterministic. Typical reasons for a wrongly non-deterministic
behaviour are errors in the parallel computation of boundary conditions or missing/reduced
data exchange in higher-order gradient approximations. Also keep in mind that
for iterative solvers, differences in the solution can occur due to the error threshold.
For serial computations, \Dumux produces single *.vtu files as the default output format.
into the file. The process rank allows the user to inspect the subdomains
after the computation.
\subsection{MPI scaling}
For parallel computations the number of cores must be chosen
carefully. Using too many cores will not always lead to more performance, but
can result in bad efficiency. One reason is that for small subdomains, the
communication between the subdomains becomes the limiting factor for parallel computations.
The user should test the MPI scaling (relation between the number of cores and the computation time)
for each specific application to ensure a fast and efficient use of the given resources.