A test problem / application consists of the following files (a typical layout is sketched below):
* a property file (`properties.hh`)
* a problem file (often `problem.hh`)
* a spatial parameters file (often `spatialparams.hh`)
* an input file (often `params.input`)
* a main file (often `main.cc`)
* a build system file (`CMakeLists.txt`)
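A typical directory layout for such a test case (the directory name is chosen for illustration):
```
injection2p/
├── CMakeLists.txt
├── main.cc
├── params.input
├── problem.hh
├── properties.hh
└── spatialparams.hh
```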
## Example: gas injection / immiscible two-phase flow
Mass balance:
$\begin{equation}
\phi \frac{\partial (\varrho_\alpha S_\alpha)}{\partial t} + \nabla \cdot \left( \varrho_\alpha \boldsymbol{v}_\alpha \right) - q_\alpha = 0
\end{equation}$
Momentum balance:
$\begin{equation}
\boldsymbol{v}_\alpha = - \frac{k_{r\alpha}}{\mu_\alpha} \mathbf{K} \left(\nabla\, p_\alpha - \varrho_{\alpha} \mathbf{g} \right)
\end{equation}$
# Properties <small>(`properties.hh`)</small>
Sets all properties of the current problem. The injection test case inherits from the 2p model:
```cpp
namespace Dumux::Properties {

namespace TTag {
struct Injection2p { using InheritsFrom = std::tuple<TwoP>; };
} // end namespace TTag

// Set/Overwrite properties within the namespace Dumux::Properties

} // end namespace Dumux::Properties
```
## The properties file
Often specifies the discretization method:
```cpp
namespace Dumux::Properties {
// define the TypeTag for this problem with a cell-centered two-point
// flux approximation spatial discretization.
namespace TTag {
struct Injection2p { using InheritsFrom = std::tuple<TwoP>; };
struct Injection2pCC { using InheritsFrom = std::tuple<Injection2p, CCTpfaModel>; };
} // end namespace TTag
} // end namespace Dumux::Properties
```
## The properties file
Specifies the grid and the problem type:
```cpp
// Set the grid type
template<class TypeTag>
struct Grid<TypeTag, TTag::Injection2p>
{ using type = Dune::YaspGrid<2>; };

// Set the problem type
template<class TypeTag>
struct Problem<TypeTag, TTag::Injection2p>
{ using type = InjectionProblem2P<TypeTag>; };
```
```cpp
// Set the spatial parameters
template<class TypeTag>
struct SpatialParams<TypeTag, TTag::Injection2p>
{
private:
    using FVGridGeometry = GetPropType<TypeTag, Properties::FVGridGeometry>;
    using Scalar = GetPropType<TypeTag, Properties::Scalar>;
public:
    using type = InjectionSpatialParams<FVGridGeometry, Scalar>;
};
```
```cpp
// Set fluid configuration
template<class TypeTag>
struct FluidSystem<TypeTag, TTag::Injection2p>
{
private:
    using Scalar = GetPropType<TypeTag, Properties::Scalar>;
    using Policy = FluidSystems::H2ON2DefaultPolicy</*fastButSimplifiedRelations=*/true>;
public:
    using type = FluidSystems::H2ON2<Scalar, Policy>;
};
```
The properties file may set many more properties depending on the utilized model and test case.
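For example, a caching property can be overwritten for the test's type tag (a sketch; `EnableGridVolumeVariablesCache` is one of DuMux's caching properties, the value chosen here is illustrative):
```cpp
// sketch: cache volume variables instead of recomputing them on the fly
template<class TypeTag>
struct EnableGridVolumeVariablesCache<TypeTag, TTag::Injection2p>
{ static constexpr bool value = true; };
```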
# The problem <small>(`problem.hh`)</small>
## The problem file
A problem in DuMu$^\mathsf{x}$ implements a specific model scenario:
```cpp
template<class TypeTag>
class InjectionProblem2P : public PorousMediumFlowProblem<TypeTag>
{
    // Details of the model scenario
    // - boundary conditions
    // - initial conditions
    // - etc.
};
```
Defining the types of boundary conditions:
```cpp
BoundaryTypes boundaryTypesAtPos(const GlobalPosition &globalPos) const
{
BoundaryTypes bcTypes;
// Use Dirichlet boundary conditions on the left
if (globalPos[0] < eps_)
bcTypes.setAllDirichlet();
// Use Neumann boundary conditions on the rest of the boundary
else
bcTypes.setAllNeumann();
return bcTypes;
}
```
## The problem file
```cpp
// Dirichlet boundaries: use the initial values as boundary values
PrimaryVariables dirichletAtPos(const GlobalPosition& globalPos) const
{ return initialAtPos(globalPos); }

NumEqVector neumannAtPos(const GlobalPosition& globalPos) const
{
    NumEqVector values(0.0);
    if (injectionActive() && onInjectionBoundary(globalPos))
    {
        // inject nitrogen; negative values mean injection
        // units kg/(s*m^2)
        values[Indices::conti0EqIdx + FluidSystem::N2Idx] = -1e-4;
        values[Indices::conti0EqIdx + FluidSystem::H2OIdx] = 0.0;
    }
    return values;
}
```
## The problem file
```cpp
PrimaryVariables initialAtPos(const GlobalPosition& globalPos) const
{
    PrimaryVariables values(0.0);
    // water density at 20°C (cf. temperatureAtPos) and atmospheric pressure
    const Scalar densityW = FluidSystem::H2O::liquidDensity(293.15, 1.0e5);
    // hydrostatic pressure profile (aquiferDepth_: assumed member storing the domain depth)
    const Scalar pw = 1.0e5 - densityW*this->gravity()[dimWorld-1]*(aquiferDepth_ - globalPos[dimWorld-1]);
    values[Indices::pressureIdx] = pw;
    return values;
}
```
## The problem file
```cpp
NumEqVector sourceAtPos(const GlobalPosition& globalPos) const
{
    // no source terms in this scenario
    return NumEqVector(0.0);
}
```
# Spatial Parameters <small>(`spatialparams.hh`)</small>
```cpp
template<class FVGridGeometry, class Scalar>
class InjectionSpatialParams : ...

// return the intrinsic permeability, depending on the position
auto permeabilityAtPos(const GlobalPosition& globalPos) const
{
    if (isInAquitard_(globalPos))
        return aquitardK_;
    return aquiferK_;
}
```
```cpp
Scalar porosityAtPos(const GlobalPosition& globalPos) const
{
// here, either the aquitard or the aquifer porosity is returned
if (isInAquitard_(globalPos))
return aquitardPorosity_;
return aquiferPorosity_;
}
```
## The spatial parameters
Capillary pressure - saturation relationship:
More information follows in a later lecture on the material system!
```cpp
auto fluidMatrixInteractionAtPos(const GlobalPosition& globalPos) const
{
if (isInAquitard_(globalPos))
return makeFluidMatrixInteraction(aquitardPcKrSwCurve_);
return makeFluidMatrixInteraction(aquiferPcKrSwCurve_);
}
```
## The spatial parameters
```cpp
Scalar temperatureAtPos(const GlobalPosition& globalPos) const
{
// constant temperature of 20°C
return 273.15 + 20.0;
}
```
# Runtime parameters <small>(`params.input`)</small>
DUNE INI syntax:
```ini
[Grid]
LowerLeft = 0 0
UpperRight = 60 40
Cells = 24 16
[Problem]
Name = test
```
Input files are specified as arguments to the executable:
`./myexecutable params.input`
If no input file is given, it defaults to the file `params.input` or `myexecutablename.input`.
Parameters can be overwritten in the command line via:
```bash
./executable params.input -Problem.Name myNewName
```
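In the code, runtime parameters are then retrieved with `getParam` after `Parameters::init` has run (a minimal sketch; the key `Problem.InjectionDuration` and its default value are hypothetical):
```cpp
#include <string>
#include <dumux/common/parameters.hh>

// required parameter: throws if it cannot be found
const auto name = getParam<std::string>("Problem.Name");
// optional parameter with a default value (hypothetical key)
const auto tInject = getParam<double>("Problem.InjectionDuration", 1e5);
```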
# The main file <small>(`2pmain.cc`)</small>
## The main source file
* Each problem has a specific main file (`test_name.cc` or `main.cc`) which sets up the program structure and calls assembler and solvers to assemble and solve the PDEs.
* Depending on the complexity of the problem, the main file can be set up to solve a linear or a non-linear problem, either stationary or instationary.
* The main file usually includes the problem, the solvers, the assembler, the VTK output module and the gridmanager.
## The main source file
```cpp
// define the type tag for this problem
using TypeTag = Properties::TTag::Injection2pCC;
// maybe initialize MPI and/or multithreading backend
Dumux::initialize(argc, argv);
const auto& mpiHelper = Dune::MPIHelper::instance();
// print dumux start message
if (mpiHelper.rank() == 0)
DumuxMessage::print(/*firstCall=*/true);
// parse command line arguments and input file
Parameters::init(argc, argv);
```
## The main source file
```cpp
// try to create a grid (from the given grid file or the input file)
GridManager<GetPropType<TypeTag, Properties::Grid>> gridManager;
gridManager.init();
// we compute on the leaf grid view
const auto& leafGridView = gridManager.grid().leafGridView();
```
`GridGeometry` and `Problem` instantiation:
```cpp
// (represents the chosen discretization scheme)
using GridGeometry = GetPropType<TypeTag, Properties::GridGeometry>;
auto gridGeometry = std::make_shared<GridGeometry>(leafGridView);
// the problem (initial and boundary conditions)
using Problem = GetPropType<TypeTag, Properties::Problem>;
auto problem = std::make_shared<Problem>(gridGeometry);
```
Initial solution and secondary variables:
```cpp
// the solution vector
using SolutionVector = GetPropType<TypeTag, Properties::SolutionVector>;
SolutionVector x(gridGeometry->numDofs());

// the grid variables (secondary variables)
using GridVariables = GetPropType<TypeTag, Properties::GridVariables>;
auto gridVariables = std::make_shared<GridVariables>(problem, gridGeometry);
gridVariables->init(x);
```
## The main source file
Setting up [VTK](https://vtk.org/) output:
```cpp
// initialize the vtk output module
VtkOutputModule<GridVariables, SolutionVector> vtkWriter(*gridVariables, x, problem->name());

// add velocity output
using VelocityOutput = GetPropType<TypeTag, Properties::VelocityOutput>;
vtkWriter.addVelocityOutput(std::make_shared<VelocityOutput>(*gridVariables));

// register model-specific output fields
using IOFields = GetPropType<TypeTag, Properties::IOFields>;
IOFields::initOutputModule(vtkWriter);
```
## The main source file
Differences for various problem cases:
* Stationary linear problem
* Stationary non-linear problem
* Instationary non-linear problem
## The main source file
```cpp
// the assembler for stationary problems
using Assembler = FVAssembler<TypeTag, DiffMethod::numeric>;
auto assembler = std::make_shared<Assembler>(problem, gridGeometry, gridVariables);
// the discretization matrices for stationary linear problems
using JacobianMatrix = GetPropType<TypeTag,
Properties::JacobianMatrix>;
auto A = std::make_shared<JacobianMatrix>();
auto r = std::make_shared<SolutionVector>();
assembler->setLinearSystem(A, r);
assembler->assembleJacobianAndResidual(x);
```
## The main source file
```cpp
// the linear solver
using LinearSolver = ILUBiCGSTABIstlSolver<LinearSolverTraits<GridGeometry>,
                                           LinearAlgebraTraitsFromAssembler<Assembler>>;
auto linearSolver = std::make_shared<LinearSolver>(
    gridGeometry->gridView(), gridGeometry->dofMapper());
// we solve Ax = -r to save update and copy
(*r) *= -1.0;
linearSolver->solve(*A, x, *r);
// the grid variables need to be up to date for subsequent output
gridVariables->update(x);
```
## The main source file
For non-linear problems, use the `NewtonSolver`:
```cpp
auto assembler = ...; // as before
auto linearSolver = ...; // as before
using NewtonSolver = Dumux::NewtonSolver<Assembler, LinearSolver>;
NewtonSolver nonLinearSolver(assembler, linearSolver);
// linearize & solve
nonLinearSolver.solve(x);
```
## The main source file
For instationary problems, pass a `TimeLoop` and a solution vector
carrying the solution on the previous time step to the assembler:
```cpp
auto timeLoop = std::make_shared<TimeLoop<Scalar>>(0.0, dt, tEnd);
timeLoop->setMaxTimeStepSize(maxDt);

using Assembler = FVAssembler<TypeTag, DiffMethod::numeric>;
auto assembler = std::make_shared<Assembler>(
    problem, gridGeometry, gridVariables, timeLoop, xOld);

// assemble linear/non-linear problem as before
// ...
```
```cpp
timeLoop->start(); do
{
// Calculate solution within each time step
} while (!timeLoop->finished());
```
## The main source file
```cpp
// time loop
timeLoop->start(); do
{
// linearize & solve
nonLinearSolver.solve(x, *timeLoop);
// make the new solution the old solution
xOld = x;
gridVariables->advanceTimeStep();
```
## The main source file
Advancing the time loop to the next step:
```cpp
// advance the time loop to the next step
timeLoop->advanceTimeStep();
// report statistics of this time step
timeLoop->reportTimeStep();
// set new dt as suggested by the newton solver
timeLoop->setTimeStepSize(
    nonLinearSolver.suggestTimeStepSize(timeLoop->timeStepSize())
);
} while (!timeLoop->finished());

timeLoop->finalize(leafGridView.comm());
```
# Build system <small>(`CMakeLists.txt`)</small>
## Build system - what is CMake?
* Open-source build system tool developed by Kitware.
* Offers a one-tool-solution to all building tasks, like configuring, building, linking, testing and packaging.
* Is a build system generator: It supports a set of backends called generators.
* Is portable and supports cross-platform compilation.
* Is controlled by ONE rather simple language.
* Every directory in a project contains a file called `CMakeLists.txt`, written in the CMake language; you can think of these as a distributed configure script (see the sketch below). Upon configure, the top-level `CMakeLists.txt` is executed.
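As a first orientation, a `CMakeLists.txt` might contain (a sketch using commands introduced later in this section; directory, file, and target names are hypothetical):
```cmake
add_subdirectory(mysubdir)                           # recurse into a subdirectory
dune_symlink_to_source_files(FILES params.input)     # link the input file into the build dir
dumux_add_test(NAME test_injection SOURCES main.cc)  # add a test executable to the test suite
```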
## Build system - configuring
* Configure build time compiler parameters / linking information.
* Create "targets" that can be built to create executables.
## Build system - configuring
* Build with the script `dune-common/bin/dunecontrol <options>`, which takes care of all dependencies and the modular DUNE structure.
* Option `all`: build all libraries and executables.
* Option `--opts=<optionfile.opts>`: specify e.g. compiler flags; for DuMu$^\mathsf{x}$, you may use `dumux/cmake.opts`.
* Option `--build-dir=<build directory>` specify path for out-of-source build; Default: every module contains its own build directory `build-cmake/`.
* You have to reconfigure (possibly deleting all build directories first) whenever a dependency changes or a Dune library is updated.
## Invoking `dunecontrol`
```bash
./dune-common/bin/dunecontrol --opts=dumux/cmake.opts all
```
## Build system
Important basic commands:
* Use `add_subdirectory` for recursively adding subdirectories.
* The subdirectory has to contain a `CMakeLists.txt` file (can be empty).
* Executables are added via `add_executable(<name> source1 [source2 ...])`.
* Tests are added via `dumux_add_test(...)`, which also adds a test executable to the test suite.
* Symlinks can be added via `dune_symlink_to_source_files(FILES file1 [file2 ...])`.
## Build system
Simplest incorporation of a test by defining name, source file and command line arguments:
```cmake
dumux_add_test(
NAME test_2p_incompressible_box
SOURCES test_2p_fv.cc
CMD_ARGS test_2p.input
)
```
## Build system
Add extra compile definitions and commands:
```cmake
dumux_add_test(
NAME test_2p_incompressible_box
SOURCES test_2p_fv.cc
COMPILE_DEFINITIONS TYPETAG=TwoPIncompressibleBox
COMMAND ${CMAKE_SOURCE_DIR}/bin/testing/runtest.py
CMD_ARGS --script fuzzy
--files ${CMAKE_SOURCE_DIR}/test/references/lensbox-reference.vtu
${CMAKE_CURRENT_BINARY_DIR}/2p_box-00007.vtu
--command "${CMAKE_CURRENT_BINARY_DIR}/test_2p_incompressible_box
               test_2p.input -Problem.Name 2p_box"
)
```

Extra: create a linked parameter file in the `build-cmake` folder, add an executable separately, and set its compile definitions:
```cmake
dune_symlink_to_source_files(FILES "params.input")
# using tpfa
add_executable(test_2p_incompressible_tpfa EXCLUDE_FROM_ALL main.cc)
target_compile_definitions(
test_2p_incompressible_tpfa PUBLIC TYPETAG=TwoPIncompressibleTpfa
)
```
## Build system
* See also the Dune build system documentation on [https://www.dune-project.org/sphinx/core/](https://www.dune-project.org/sphinx/core/) for comprehensive CMake online documentation.
# Parallelism
## Distributed memory parallelism with MPI
* MPI stands for Message Passing Interface.
* The main idea is the concept of domain decomposition.
* Each local subdomain is solved on an individual process (rank).
* MPI manages the communication between the ranks.
* Most solvers in DuMu$^\mathsf{x}$ are capable of parallel solving.
## Distributed memory parallelism with MPI
Run with:
```bash
mpirun -np [n_cores] [executable_name]
```
Handling results:
* Each rank creates its own `*.vtu`/`*.vtp` file.
* These are combined into `*.pvtu`/`*.pvtp` files for each time step.
* A normal `*.pvd` file is created from the `*.pvtu`/`*.pvtp` files.
## Shared-memory parallelism and multi-threaded applications
* DuMu$^\mathsf{x}$ can exploit parallelism with the shared-memory model.
* Used in the `Dumux::FVAssembler` by default to assemble the residual and stiffness matrix.
* Is enabled when a multi-threading backend is found.
* The backend is selected by CMake during configuration and stored in `DUMUX_MULTITHREADING_BACKEND`.
* Possible backends are `OpenMP`, `TBB`, C++ parallel algorithms, and `Kokkos`.
## Switching off multi-threading
Simply add the following to your input file:
```ini
[Assembly]
Multithreading = false
```
Important for working on clusters: the number of threads can also be restricted via the environment variable `DUMUX_NUM_THREADS`, as shown below.
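For example:
```bash
# run with the assembly restricted to two threads
DUMUX_NUM_THREADS=2 ./executable params.input
```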
# Exercises
## Exercises:
Exercise about setting boundary conditions, the problem file, etc.:
* Go to [https://git.iws.uni-stuttgart.de/dumux-repositories/dumux-course/tree/master/exercises/exercise-basic](https://git.iws.uni-stuttgart.de/dumux-repositories/dumux-course/tree/master/exercises/exercise-basic) and check out the README
Exercise for the main files:
* Go to [https://git.iws.uni-stuttgart.de/dumux-repositories/dumux-course/tree/master/exercises/exercise-mainfile](https://git.iws.uni-stuttgart.de/dumux-repositories/dumux-course/tree/master/exercises/exercise-mainfile) and check out the README