Mass balance equations:
$$
\begin{aligned}
\frac{\partial \left(\phi \varrho_\alpha S_\alpha \right)}{\partial t}
-
\nabla \cdot \boldsymbol{v}_\alpha
-
q_\alpha = 0, \quad \alpha \in \lbrace w, n \rbrace.
\end{aligned}
$$
Momentum balance equations (multiphase Darcy's law):
$$
\begin{aligned}
\boldsymbol{v}_\alpha = \varrho_\alpha \frac{k_{r\alpha}}{\mu_\alpha} \mathbf{K} \left(\nabla\, p_\alpha - \varrho_{\alpha} \mathbf{g} \right), \quad \alpha \in \lbrace w, n \rbrace.
\end{aligned}
$$
## Gas injection / immiscible two-phase flow
$$
\begin{aligned}
\frac{\partial \left(\phi \varrho_\alpha S_\alpha \right)}{\partial t}
-
\nabla \cdot \left( \varrho_\alpha \frac{k_{r\alpha}}{\mu_\alpha} \mathbf{K} \left(\nabla\, p_\alpha - \varrho_{\alpha} \mathbf{g} \right) \right)
-
q_\alpha = 0, \quad \alpha \in \lbrace w, n \rbrace.
\end{aligned}
$$
* $p_w$, $p_n$: wetting and non-wetting fluid phase pressure
* $\varrho_\alpha$, $\mu_\alpha$: fluid phase density and dynamic viscosity
* $\phi$, $\mathbf{K}$, $k_{r\alpha}$: porosity, intrinsic permeability, relative permeability
* $S_w$, $S_n$: wetting and non-wetting saturation
* $\mathbf{g}$, $q_\alpha$: gravitational acceleration, source term
$$
\begin{aligned}
\frac{\partial \left(\phi \varrho_\alpha S_\alpha \right)}{\partial t}
-
\nabla \cdot \left( \varrho_\alpha \frac{k_{r\alpha}}{\mu_\alpha} \mathbf{K} \left(\nabla\, p_\alpha - \varrho_{\alpha} \mathbf{g} \right) \right)
-
q_\alpha = 0
\end{aligned}
$$
* Constitutive relations: $p_n := p_w + p_c$, $p_c := p_c(S_w)$, $k_{r\alpha}$ = $k_{r\alpha}(S_w)$
* Physical constraint (no free space): $S_w + S_n = 1$
* Primary variables: $p_w$, $S_n$ (wetting phase pressure, non-wetting phase saturation)
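The constitutive relations must be supplied by a parameterized model. As an illustrative example (the actual curves used in an application are configurable), the Brooks–Corey capillary pressure reads:

$$
p_c(S_w) = p_e \, \bar{S}_w^{-1/\lambda},
\qquad
\bar{S}_w := \frac{S_w - S_{wr}}{1 - S_{wr}},
$$

with entry pressure $p_e$, pore-size distribution parameter $\lambda$, and residual wetting saturation $S_{wr}$.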
# A DuMu^x^ application
## A DuMu^x^ application
The source code for a
typical DuMu^x^ application consists of:
* Property specializations (e.g. `properties.hh`)
* Problem class definition (e.g. `problem.hh`) and often a spatial parameter class definition (e.g. `spatialparams.hh`)
* A parameter configuration file (e.g. `params.input`)
* The main program (e.g. `main.cc`)
* `CMakeLists.txt` (CMake build system configuration)
Details on properties in [Part II: Property system](./properties.html)
##
`*properties*.hh`
Create tag, choose model and discretization scheme:

```cpp
namespace Dumux::Properties::TTag {
// the application type tag (choose any name)
struct Injection2pCC { using InheritsFrom = std::tuple<TwoP, CCTpfaModel>; };
} // end namespace Dumux::Properties::TTag
```
Specialize the `Grid` property for tag `TTag::Injection2pCC` to set the grid type:

```cpp
namespace Dumux::Properties {
template<class TypeTag>
struct Grid<TypeTag, TTag::Injection2pCC>
{ using type = Dune::YaspGrid<2>; }; // Yet Another Structured Parallel Grid
} // end namespace Dumux::Properties
```
Specialize the `Problem` property to set the problem type:

```cpp
template<class TypeTag>
struct Problem<TypeTag, TTag::Injection2pCC>
{ using type = InjectionProblem2P<TypeTag>; };
```
The problem class `InjectionProblem2P` is discussed
in one of the following sections.
##
Specialize the `SpatialParams` property to set the spatial parameter type:

```cpp
template<class TypeTag>
struct SpatialParams<TypeTag, TTag::Injection2pCC>
{
    using GridGeometry = GetPropType<TypeTag, Properties::GridGeometry>;
    using Scalar = GetPropType<TypeTag, Properties::Scalar>;
    using type = InjectionSpatialParams<GridGeometry, Scalar>;
};
```
Specialize the `FluidSystem` property to set the fluid system type:

```cpp
template<class TypeTag>
struct FluidSystem<TypeTag, TTag::Injection2pCC>
{
    using Scalar = GetPropType<TypeTag, Properties::Scalar>;
    using Policy = FluidSystems::H2ON2DefaultPolicy<
        /*fastButSimplifiedRelations=*/true>;
    using type = FluidSystems::H2ON2<Scalar, Policy>;
};
```
A "problem" class in DuMu^x^ implements a specific scenario:

```cpp
template<class TypeTag>
class InjectionProblem2P : public PorousMediumFlowProblem<TypeTag>
{
    // defines, e.g.,
    // - boundary conditions
    // - initial conditions
    // - source terms
    // - scenario name (for output)
};
```

Inherit from the base class `PorousMediumFlowProblem` and only override
scenario-specific functions (static polymorphism).
##
```cpp
BoundaryTypes boundaryTypesAtPos(const GlobalPosition &globalPos) const
{
BoundaryTypes bcTypes;
// Use Dirichlet boundary conditions on the left
if (globalPos[0] < eps_)
bcTypes.setAllDirichlet();
// Use Neumann boundary conditions on the rest of the boundary
else
bcTypes.setAllNeumann();
return bcTypes;
}
```
Dirichlet boundary condition values (only called on Dirichlet boundaries):

```cpp
PrimaryVariables dirichletAtPos(const GlobalPosition& globalPos) const
{ return initialAtPos(globalPos); }
```

`PrimaryVariables` is an array of primary variables (here, the size of the array is 2).
##
Neumann boundary condition values / boundary fluxes (only called on Neumann boundaries):

```cpp
NumEqVector neumannAtPos(const GlobalPosition& globalPos) const
{
    NumEqVector values(0.0);
    if (injectionActive() && onInjectionBoundary(globalPos))
    {
        values[Indices::conti0EqIdx + FluidSystem::N2Idx] = -1e-4;
        values[Indices::conti0EqIdx + FluidSystem::H2OIdx] = 0.0;
    }
    return values;
}
```

`NumEqVector` is an array of equations (here, the size of the array is 2).
Initial condition values:

```cpp
PrimaryVariables initialAtPos(const GlobalPosition& globalPos) const
{
    PrimaryVariables values(0.0);
    const Scalar densityW
        = FluidSystem::H2O::liquidDensity(temperature(), 1.0e5);
    // hydrostatic pressure distribution
    const Scalar pw
        = 1.0e5 - densityW*this->gravity()[dimWorld-1]
                  *(aquiferDepth_ - globalPos[dimWorld-1]);
    values[Indices::pressureIdx] = pw;
    return values;
}
```
Source/sink terms (zero in this scenario):

```cpp
NumEqVector sourceAtPos(const GlobalPosition& globalPos) const
{ return NumEqVector(0.0); } // no sources or sinks
```

##

`*spatialparams*.hh`

The spatial parameters class:

```cpp
template<class FVGridGeometry, class Scalar>
class InjectionSpatialParams
: public FVPorousMediumFlowSpatialParamsMP<
    FVGridGeometry, Scalar,
    InjectionSpatialParams<FVGridGeometry, Scalar>
> {
    // parameters that are typically position-dependent
    // e.g. porosity, permeability
};
```
Inherit from `FVPorousMediumFlowSpatialParamsMP` where
`FV`: finite volumes, `MP`: multi-phase flow.
##
A function returning the intrinsic permeability $\mathbf{K}$:
```cpp
auto permeabilityAtPos(const GlobalPosition& globalPos) const
{
    // here, either aquitard or aquifer permeability are returned
    if (isInAquitard_(globalPos))
        return aquitardK_;
    return aquiferK_;
}
```

A function returning the porosity:
```cpp
Scalar porosityAtPos(const GlobalPosition& globalPos) const
{
// here, either aquitard or aquifer porosity are returned
if (isInAquitard_(globalPos))
return aquitardPorosity_;
return aquiferPorosity_;
}
```
A function returning a parameterized constitutive law
describing fluid-matrix interactions
($p_c(S_w)$, $k_r(S_w)$):

```cpp
auto fluidMatrixInteractionAtPos(const GlobalPosition& globalPos) const
{
if (isInAquitard_(globalPos))
return makeFluidMatrixInteraction(aquitardPcKrSwCurve_);
return makeFluidMatrixInteraction(aquiferPcKrSwCurve_);
}
```
Set the (constant) temperature field in the domain:
```cpp
Scalar temperatureAtPos(const GlobalPosition& globalPos) const
{ return 273.15 + 30.0; } // isothermal: 30°C everywhere
```
# Runtime parameters
## Runtime parameters
Dune INI syntax (`[Group]` and `Key = Value` pairs):

```ini
[Grid]
LowerLeft = 0 0
UpperRight = 60 40
Cells = 24 16
[Problem]
Name = test
```
See [Part I: Runtime parameters](./params.html) for details.
# The main program

## The main program

* Each problem has one main file (`test_name.cc` or `main.cc`)
* Contains the `main` function (mandatory in C/C++ programs)
Calling `Dumux::initialize` is mandatory
at the beginning of each DuMu^x^ program to make sure the
environment is correctly set up.
```cpp
int main(int argc, char** argv)
{
// initialize MPI+X backends (mandatory)
Dumux::initialize(argc, argv);
```

```cpp
// parse command line arguments and input file
Dumux::Parameters::init(argc, argv);
```
See [Part I: Runtime parameters](./params.html) for details.
Define an alias for the test problem type tag:

```cpp
using TypeTag = Properties::TTag::Injection2pCC;
```
The tag (tag alias) is used to extract types and values
via the property system (details on properties in [Part II: Property system](./properties.html)).
##
```cpp
// create a grid (take parameters from input file)
GridManager<GetPropType<TypeTag, Properties::Grid>> gridManager;
gridManager.init();

// obtain the grid and a grid view
const auto& grid = gridManager.grid();
const auto& leafGridView = grid.leafGridView();
```
More details on the grid in [Part I: Grid](./grid.html).
`GridGeometry` and `Problem` instantiation:
```cpp
// (represents the chosen discretization scheme)
using GridGeometry = GetPropType<TypeTag, Properties::GridGeometry>;
auto gridGeometry = std::make_shared<GridGeometry>(leafGridView);
// the problem (initial and boundary conditions)
using Problem = GetPropType<TypeTag, Properties::Problem>;
auto problem = std::make_shared<Problem>(gridGeometry);
```
Initial solution and secondary variables:

```cpp
// the solution vector
using SolutionVector
    = GetPropType<TypeTag, Properties::SolutionVector>;
SolutionVector x(gridGeometry->numDofs());

// the grid variables (secondary variables)
using GridVariables = GetPropType<TypeTag, Properties::GridVariables>;
auto gridVariables
    = std::make_shared<GridVariables>(problem, gridGeometry);
gridVariables->init(x);
```
Setting up [VTK](https://vtk.org/) output:

```cpp
VtkOutputModule<GridVariables, SolutionVector>
    vtkWriter(*gridVariables, x, problem->name());

// optionally add a velocity post-processor
using VelocityOutput
    = GetPropType<TypeTag, Properties::VelocityOutput>;
vtkWriter.addVelocityOutput(
    std::make_shared<VelocityOutput>(*gridVariables));

// add input/output defaults for the chosen model
using IOFields = GetPropType<TypeTag, Properties::IOFields>;
IOFields::initOutputModule(vtkWriter);
```

The remaining steps depend on the type of problem:

* Steady-state linear problems
* Steady-state non-linear problems
* Time-dependent non-linear problems
Assembling the system for a steady-state linear problem:

```cpp
// the assembler for stationary problems
using Assembler = FVAssembler<TypeTag, DiffMethod::numeric>;
auto assembler = std::make_shared<Assembler>(
    problem, gridGeometry, gridVariables);

// the discretization matrices for stationary linear problems
using JacobianMatrix
    = GetPropType<TypeTag, Properties::JacobianMatrix>;
auto A = std::make_shared<JacobianMatrix>();
auto r = std::make_shared<SolutionVector>();
assembler->setLinearSystem(A, r);
assembler->assembleJacobianAndResidual(x);
```
Solving the linear system:

```cpp
using LinearSolver = ILUBiCGSTABIstlSolver<
    LinearSolverTraits<GridGeometry>,
    LinearAlgebraTraitsFromAssembler<Assembler>>;
auto linearSolver = std::make_shared<LinearSolver>(
    gridGeometry->gridView(), gridGeometry->dofMapper());

(*r) *= -1.0; // solve Ax = -r to save update and copy
linearSolver->solve(*A, x, *r);
```
##
Simplified code for a steady-state linear problem using `Dumux::LinearPDESolver`:
```cpp
auto assembler = ...; // construct as before
auto linearSolver = ...; // construct as before
using Solver = LinearPDESolver<Assembler, LinearSolver>;
Solver solver(assembler, linearSolver);
// assemble & solve
solver.solve(x);
```
##
For steady-state non-linear problems, use `Dumux::NewtonSolver`:

```cpp
auto assembler = ...; // construct as before
auto linearSolver = ...; // construct as before
using Solver = NewtonSolver<Assembler, LinearSolver>;
Solver solver(assembler, linearSolver);
```
For time-dependent problems, pass a `TimeLoop` instance
and a solution vector carrying the solution
on the previous/initial time level to the assembler:

```cpp
auto timeLoop = std::make_shared<TimeLoop<Scalar>>(0.0, dt, tEnd);
timeLoop->setMaxTimeStepSize(maxDt);

using Assembler = FVAssembler<TypeTag, DiffMethod::numeric>;
auto assembler = std::make_shared<Assembler>(
    problem, gridGeometry, gridVariables,
    timeLoop, xOld // <-- new arguments
);

// assemble linear/non-linear problem as before
// ...
```

The time loop body:
```cpp
// time loop
timeLoop->start(); do
{
// linearize & solve
nonLinearSolver.solve(x, *timeLoop);
// make the new solution the old solution
xOld = x;
gridVariables->advanceTimeStep();
```
Advancing the time loop to the next step:

```cpp
    // advance the time loop to the next step
    timeLoop->advanceTimeStep();
    // report statistics of this time step
    timeLoop->reportTimeStep();
    // set new dt as suggested by the newton solver
    timeLoop->setTimeStepSize(
        nonLinearSolver.suggestTimeStepSize(timeLoop->timeStepSize())
    );
} while (!timeLoop->finished());

timeLoop->finalize(leafGridView.comm());
```
# Build system (CMake)
## Typical C++ build steps
1. configure
2. compile
3. install (copy programs to some location)
Step 3 is usually skipped by DuMu^x^ developers.
But you can of course "install" Dune modules and DuMu^x^.
## Build system - CMake

* [Open-source build system tool developed by Kitware](https://cmake.org/)
* Offers a one-tool-solution to all building tasks, like configuring, building, linking, testing and packaging
* Is portable and supports cross-platform compilation
* Build file generator: supports various backends called generators (default: GNU Makefiles)
* Control files written in the CMake language
* Every directory in a project contains a file called `CMakeLists.txt`, which is written in the CMake language
* Upon configure, the top-level `CMakeLists.txt` is executed
* Subfolders are included with the `add_subdirectory(...)` function
* Configure build-time compiler parameters / linking information
* Create "targets" that can be built to create executables
* Find and link external libraries / software dependencies
## Build system - dunecontrol
* For Dune, CMake is triggered via the `dunecontrol` script
```sh
./dune-common/bin/dunecontrol [options] <command>
```
* useful commands:
- `configure`: configure libraries, variables, paths, files
- `make all`: build libraries
- `all`: includes `configure` and `make all`
* useful options:
- `--opts=<optionfile.opts>` specify e.g. compiler flags; DuMu^x^ provides [`dumux/cmake.opts`](https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/blob/master/cmake.opts).
- `--build-dir=<build directory>` specify path for out-of-source build; Default: every module contains its own build directory called `build-cmake/`
- `--only <module>` apply the command only to specific modules (comma-separated list)
## Build system - dunecontrol
You may have to reconfigure (rerun `dunecontrol`)
whenever a dependency changes or a Dune library is updated.
CMake caches configuration results. To reconfigure, you may need to remove the cache first:
```sh
./dune-common/bin/dunecontrol bexec rm -r CMakeFiles CMakeCache.txt
```
This removes the CMake cache files from all Dune modules.
(Use `--only <module>` to only remove caches of specific modules.)
## Build system - basic CMake functions
* Use `add_subdirectory` for recursively adding subdirectories
* Subdirectory must contain a `CMakeLists.txt` (can be empty)
* Executables are added via `add_executable(<name> source1 [source2 ...])`
* Tests are added via `dumux_add_test`, which also adds the test executable to the test suite
* Symlinks can be added via `dune_symlink_to_source_files`
`CMakeLists.txt` for an example application: create a test target `test_2p_incompressible_box`
by defining name, source file and command line arguments:

```cmake
dumux_add_test(
    NAME test_2p_incompressible_box
    SOURCES test_2p_fv.cc
    CMD_ARGS test_2p.input
)
```
Add extra compile definitions and commands (typical
for DuMu^x^ regression tests):

```cmake
dumux_add_test(
    NAME test_2p_incompressible_box
    SOURCES test_2p_fv.cc
    COMPILE_DEFINITIONS TYPETAG=TwoPIncompressibleBox
    COMMAND ${CMAKE_SOURCE_DIR}/bin/testing/runtest.py
    CMD_ARGS --script fuzzy
             --files ${CMAKE_SOURCE_DIR}/test/references/lensbox-reference.vtu
                     ${CMAKE_CURRENT_BINARY_DIR}/2p_box-00007.vtu
             --command "${CMAKE_CURRENT_BINARY_DIR}/test_2p_incompressible_box
                        test_2p.input -Problem.Name 2p_box"
)
```
Extra: Create linked parameter file in `build-cmake` folder, separately add executable and set compile definitions for an executable:
```cmake
dune_symlink_to_source_files(FILES "params.input")
# using tpfa
add_executable(test_2p_incompressible_tpfa EXCLUDE_FROM_ALL main.cc)
target_compile_definitions(
test_2p_incompressible_tpfa PUBLIC TYPETAG=TwoPIncompressibleTpfa
)
```
Useful resources:
* [CMake online documentation](https://cmake.org/cmake/help/latest/)
* [Dune CMake online documentation](https://www.dune-project.org/sphinx/core/) for CMake functions available with Dune.
# Parallelism
## Distributed memory parallelism with MPI
* MPI stands for Message Passing Interface
* MPI manages communication between processes that run in parallel
## Distributed memory parallelism with MPI
* Main idea is the concept of domain decomposition
* The grid manages the grid partitioning
* Each local subdomain is assembled on an individual process (rank)
* Solvers have to handle decomposed vectors/matrices
* Most solvers in DuMu^x^ (iterative Krylov subspace methods based on dune-istl) are capable of MPI parallel solving
## Distributed memory parallelism with MPI
Run with:
```sh
mpirun -np [n_cores] [executable_name]
```
## Distributed memory parallelism with MPI
Partitioned VTK output files:
* Each rank creates its own `*.vtu`/`*.vtp` file
* Combined into `*.pvtu`/`*.pvtp` files for each time step
* `*.pvd` groups all `*.pvtu`/`*.pvtp` files into a time series
## Shared-memory parallelism

* DuMu^x^ can exploit parallelism with the shared memory model
* Used in the `Dumux::FVAssembler` by default to assemble the residual and stiffness matrix
* Is enabled when a multi-threading backend is found
* The backend is selected by CMake during configuration and stored in `DUMUX_MULTITHREADING_BACKEND`
* Possible examples are `OpenMP`, `TBB`, C++ parallel algorithms, and `Kokkos`
## Shared-memory parallelism
**Restrict the number of threads**
The number of threads can be restricted via the environment variable `DUMUX_NUM_THREADS`:
```sh
DUMUX_NUM_THREADS=2 ./executable
```
## Shared-memory parallelism
**Multi-threaded assembly**
Control use of multi-threading during assembly in input file:
```ini
[Assembly]
Multithreading = false
```
## Exercises
Basic application setup:
* Go to [Exercise basic](https://git.iws.uni-stuttgart.de/dumux-repositories/dumux-course/tree/master/exercises/exercise-basic#exercise-basics-dumux-course)
* Go to [Exercise main file](https://git.iws.uni-stuttgart.de/dumux-repositories/dumux-course/-/tree/master/exercises/exercise-mainfile#exercise-mainfiles-dumux-course)