From 3a9ad1fe9bb6e61da2ab4eb9c0cea1122655af08 Mon Sep 17 00:00:00 2001
From: Martin Schneider <martin.schneider@iws.uni-stuttgart.de>
Date: Fri, 24 Mar 2023 10:43:30 +0100
Subject: [PATCH] [example][diffusion] Fix typos

---
 examples/README.md               | 4 ++--
 examples/diffusion/README.md     | 8 ++++----
 examples/diffusion/doc/_intro.md | 8 ++++----
 examples/diffusion/doc/model.md  | 2 +-
 examples/diffusion/model.hh      | 2 +-
 5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/examples/README.md b/examples/README.md
index 50b47011ee..beb935118d 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -15,8 +15,8 @@ You learn how to
 * setup a new simple model equation (diffusion equation)
 * read parameters from a configuration file
 * create a type tag and specialize properties for it
-* generate a randomly distributed intial field (with MPI parallelism)
-* solve a time-depedent diffusion problem in parallel
+* generate a randomly distributed initial field (with MPI parallelism)
+* solve a time-dependent diffusion problem in parallel
 
 __Model equations:__ A diffusion equation model fully developed and contained within the example<br />
 __Discretization method:__ Vertex-centered finite volumes / control-volume finite elements (Lagrange, P1) (`BoxModel`)
diff --git a/examples/diffusion/README.md b/examples/diffusion/README.md
index d54937c4e1..31d2018ff9 100644
--- a/examples/diffusion/README.md
+++ b/examples/diffusion/README.md
@@ -19,7 +19,7 @@ __The main points illustrated in this example are__
 ## Equation and problem description
 
 The scalar diffusion equation on a domain $\Omega \subset \mathbb{R}^2$
-with boundary $\partial\Omega = \Gamma_D \cup \Gamma_N$ composed of Dirichlet and Neummann boundaries
+with boundary $\partial\Omega = \Gamma_D \cup \Gamma_N$ composed of Dirichlet and Neumann boundaries
 reads
 
 ```math
@@ -93,18 +93,18 @@ The simulation result will look something like this.
 By default Dumux will try to speed up the assembly by using shared memory parallelism
 if a suitable backend has been found on your system (one of TBB, OpenMP, Kokkos, C++ parallel algorithms).
 You can limit the number of threads by prepending your executable with `DUMUX_NUM_THREADS=<number>`.
-If you also want to use distributed memory parallelsim with MPI (works better for solvers at the moment),
+If you also want to use distributed memory parallelism with MPI (works better for solvers at the moment),
 run the executable with your MPI environment. Each MPI process will use multi-threading if
 `DUMUX_NUM_THREADS` is larger than $1$.
 
-Running the example with four MPI processes (distribution memory parallelsim)
+Running the example with four MPI processes (distribution memory parallelism)
 each with two threads (shared memory parallelism):
 
 ```sh
 DUMUX_NUM_THREADS=2 mpirun -np 4 ./example_diffusion
 ```
 
-You can set the parameter `Grid.Overlap` to some non-zero integer in `param.input`
+You can set the parameter `Grid.Overlap` to some non-zero integer in `params.input`
 to turn the domain decomposition into an overlapping decomposition where
 `Grid.Overlap` specifies the number of grid cells in the overlap between processes.
 This can help to increase the convergence speed of the linear solver.
diff --git a/examples/diffusion/doc/_intro.md b/examples/diffusion/doc/_intro.md
index 5ca71136bd..2f759145fb 100644
--- a/examples/diffusion/doc/_intro.md
+++ b/examples/diffusion/doc/_intro.md
@@ -17,7 +17,7 @@ __The main points illustrated in this example are__
 ## Equation and problem description
 
 The scalar diffusion equation on a domain $\Omega \subset \mathbb{R}^2$
-with boundary $\partial\Omega = \Gamma_D \cup \Gamma_N$ composed of Dirichlet and Neummann boundaries
+with boundary $\partial\Omega = \Gamma_D \cup \Gamma_N$ composed of Dirichlet and Neumann boundaries
 reads
 
 ```math
@@ -91,18 +91,18 @@ The simulation result will look something like this.
 By default Dumux will try to speed up the assembly by using shared memory parallelism
 if a suitable backend has been found on your system (one of TBB, OpenMP, Kokkos, C++ parallel algorithms).
 You can limit the number of threads by prepending your executable with `DUMUX_NUM_THREADS=<number>`.
-If you also want to use distributed memory parallelsim with MPI (works better for solvers at the moment),
+If you also want to use distributed memory parallelism with MPI (works better for solvers at the moment),
 run the executable with your MPI environment. Each MPI process will use multi-threading if
 `DUMUX_NUM_THREADS` is larger than $1$.
 
-Running the example with four MPI processes (distribution memory parallelsim)
+Running the example with four MPI processes (distribution memory parallelism)
 each with two threads (shared memory parallelism):
 
 ```sh
 DUMUX_NUM_THREADS=2 mpirun -np 4 ./example_diffusion
 ```
 
-You can set the parameter `Grid.Overlap` to some non-zero integer in `param.input`
+You can set the parameter `Grid.Overlap` to some non-zero integer in `params.input`
 to turn the domain decomposition into an overlapping decomposition where
 `Grid.Overlap` specifies the number of grid cells in the overlap between processes.
 This can help to increase the convergence speed of the linear solver.
diff --git a/examples/diffusion/doc/model.md b/examples/diffusion/doc/model.md
index 145552f4ad..292a56b6f4 100644
--- a/examples/diffusion/doc/model.md
+++ b/examples/diffusion/doc/model.md
@@ -56,7 +56,7 @@ Box method which is based on $P_1$ basis functions (piece-wise linears)
 and the degrees of freedom are on the nodes. Each node is associate with
 exactly one sub control volume (`scv`) per element and several ($2$ in $\mathbb{R}^2$)
 sub control volume faces (`scvf`). In the local residual, we can implement the
-constribution for one `scv` (storage and source terms) or one `scvf` (flux terms).
+contribution for one `scv` (storage and source terms) or one `scvf` (flux terms).
 
 Let's have a look at the class implementation.
 
diff --git a/examples/diffusion/model.hh b/examples/diffusion/model.hh
index c2c9b67dd3..df4f97c723 100644
--- a/examples/diffusion/model.hh
+++ b/examples/diffusion/model.hh
@@ -58,7 +58,7 @@ struct DiffusionModel {};
 // and the degrees of freedom are on the nodes. Each node is associate with
 // exactly one sub control volume (`scv`) per element and several ($2$ in $\mathbb{R}^2$)
 // sub control volume faces (`scvf`). In the local residual, we can implement the
-// constribution for one `scv` (storage and source terms) or one `scvf` (flux terms).
+// contribution for one `scv` (storage and source terms) or one `scvf` (flux terms).
 //
 // Let's have a look at the class implementation.
 //
--
GitLab
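
As a usage note for the `Grid.Overlap` parameter that the corrected README text refers to: DuMux reads runtime parameters from an ini-style file (here `params.input`), where the dotted name `Grid.Overlap` maps to a key `Overlap` inside a `[Grid]` group. A minimal sketch follows; the value `1` is purely illustrative and not taken from the patch.

```ini
# params.input (sketch): request a one-cell overlap between process subdomains,
# turning the domain decomposition into an overlapping decomposition
[Grid]
Overlap = 1
```

As the README text above notes, a non-zero overlap can help the linear solver converge faster when running with MPI.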