diff --git a/examples/README.md b/examples/README.md
index 50b47011eeeba901709d82b575e03fdb5cfedad0..beb935118d8db8a5d64f7bcedeb2a8c74f17f499 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -15,8 +15,8 @@ You learn how to
-* setup a new simple model equation (diffusion equation)
+* set up a new simple model equation (diffusion equation)
 * read parameters from a configuration file
 * create a type tag and specialize properties for it
-* generate a randomly distributed intial field (with MPI parallelism)
-* solve a time-depedent diffusion problem in parallel
+* generate a randomly distributed initial field (with MPI parallelism)
+* solve a time-dependent diffusion problem in parallel
 
 __Model equations:__ A diffusion equation model fully developed and contained within the example<br />
 __Discretization method:__ Vertex-centered finite volumes / control-volume finite elements (Lagrange, P1) (`BoxModel`)
diff --git a/examples/diffusion/README.md b/examples/diffusion/README.md
index d54937c4e16e64e59fa8b14a10393212affde86b..31d2018ff90e580df82cab146e87eec0f1fbd4d3 100644
--- a/examples/diffusion/README.md
+++ b/examples/diffusion/README.md
@@ -19,7 +19,7 @@ __The main points illustrated in this example are__
 ## Equation and problem description
 
 The scalar diffusion equation on a domain $\Omega \subset \mathbb{R}^2$
-with boundary $\partial\Omega = \Gamma_D \cup \Gamma_N$ composed of Dirichlet and Neummann boundaries
+with boundary $\partial\Omega = \Gamma_D \cup \Gamma_N$ composed of Dirichlet and Neumann boundaries
 reads
 
 ```math
@@ -93,18 +93,26 @@ The simulation result will look something like this.
 By default Dumux will try to speed up the assembly by using shared memory parallelism if a suitable
 backend has been found on your system (one of TBB, OpenMP, Kokkos, C++ parallel algorithms).
 You can limit the number of threads by prepending your executable with `DUMUX_NUM_THREADS=<number>`.
-If you also want to use distributed memory parallelsim with MPI (works better for solvers at the moment),
+If you also want to use distributed memory parallelism with MPI (works better for solvers at the moment),
 run the executable with your MPI environment. Each MPI process will use multi-threading if
 `DUMUX_NUM_THREADS` is larger than $1$.
 
-Running the example with four MPI processes (distribution memory parallelsim)
+Running the example with four MPI processes (distributed memory parallelism)
 each with two threads (shared memory parallelism):
 
 ```sh
 DUMUX_NUM_THREADS=2 mpirun -np 4 ./example_diffusion
 ```
 
-You can set the parameter `Grid.Overlap` to some non-zero integer in `param.input`
+You can set the parameter `Grid.Overlap` to some non-zero integer in `params.input`
 to turn the domain decomposition into an overlapping decomposition where
 `Grid.Overlap` specifies the number of grid cells in the overlap between processes.
 This can help to increase the convergence speed of the linear solver.
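+
+For example, an overlap of one cell could be set in `params.input` like this
+(a minimal sketch showing only the relevant group; the value is just an example):
+
+```ini
+[Grid]
+Overlap = 1  # any non-zero overlap turns the decomposition into an overlapping one
+```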
diff --git a/examples/diffusion/doc/_intro.md b/examples/diffusion/doc/_intro.md
index 5ca71136bd313a8ff721785681b24784ef940aa4..2f759145fbf6a2f13b9d58e41b86149e717be436 100644
--- a/examples/diffusion/doc/_intro.md
+++ b/examples/diffusion/doc/_intro.md
@@ -17,7 +17,7 @@ __The main points illustrated in this example are__
 ## Equation and problem description
 
 The scalar diffusion equation on a domain $\Omega \subset \mathbb{R}^2$
-with boundary $\partial\Omega = \Gamma_D \cup \Gamma_N$ composed of Dirichlet and Neummann boundaries
+with boundary $\partial\Omega = \Gamma_D \cup \Gamma_N$ composed of Dirichlet and Neumann boundaries
 reads
 
 ```math
@@ -91,18 +91,26 @@ The simulation result will look something like this.
 By default Dumux will try to speed up the assembly by using shared memory parallelism if a suitable
 backend has been found on your system (one of TBB, OpenMP, Kokkos, C++ parallel algorithms).
 You can limit the number of threads by prepending your executable with `DUMUX_NUM_THREADS=<number>`.
-If you also want to use distributed memory parallelsim with MPI (works better for solvers at the moment),
+If you also want to use distributed memory parallelism with MPI (works better for solvers at the moment),
 run the executable with your MPI environment. Each MPI process will use multi-threading if
 `DUMUX_NUM_THREADS` is larger than $1$.
 
-Running the example with four MPI processes (distribution memory parallelsim)
+Running the example with four MPI processes (distributed memory parallelism)
 each with two threads (shared memory parallelism):
 
 ```sh
 DUMUX_NUM_THREADS=2 mpirun -np 4 ./example_diffusion
 ```
 
-You can set the parameter `Grid.Overlap` to some non-zero integer in `param.input`
+You can set the parameter `Grid.Overlap` to some non-zero integer in `params.input`
 to turn the domain decomposition into an overlapping decomposition where
 `Grid.Overlap` specifies the number of grid cells in the overlap between processes.
 This can help to increase the convergence speed of the linear solver.
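+
+For example, an overlap of one cell could be set in `params.input` like this
+(a minimal sketch showing only the relevant group; the value is just an example):
+
+```ini
+[Grid]
+Overlap = 1  # any non-zero overlap turns the decomposition into an overlapping one
+```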
diff --git a/examples/diffusion/doc/model.md b/examples/diffusion/doc/model.md
index 145552f4adc96c1441426d42f2d7af7d3804599a..292a56b6f410c59206d0f439bd15714401c91d50 100644
--- a/examples/diffusion/doc/model.md
+++ b/examples/diffusion/doc/model.md
@@ -56,7 +56,7 @@ Box method which is based on $P_1$ basis functions (piece-wise linears)
-and the degrees of freedom are on the nodes. Each node is associate with
+and the degrees of freedom are on the nodes. Each node is associated with
 exactly one sub control volume (`scv`) per element and several ($2$ in $\mathbb{R}^2$)
 sub control volume faces (`scvf`). In the local residual, we can implement the
-constribution for one `scv` (storage and source terms) or one `scvf` (flux terms).
+contribution for one `scv` (storage and source terms) or one `scvf` (flux terms).
 
 Let's have a look at the class implementation.
 
diff --git a/examples/diffusion/model.hh b/examples/diffusion/model.hh
index c2c9b67dd3b0e1cb0c5ad925a49bbed887f87129..df4f97c7233129fa133f310d0769ec32cdcfad4e 100644
--- a/examples/diffusion/model.hh
+++ b/examples/diffusion/model.hh
@@ -58,7 +58,7 @@ struct DiffusionModel {};
-// and the degrees of freedom are on the nodes. Each node is associate with
+// and the degrees of freedom are on the nodes. Each node is associated with
 // exactly one sub control volume (`scv`) per element and several ($2$ in $\mathbb{R}^2$)
 // sub control volume faces (`scvf`). In the local residual, we can implement the
-// constribution for one `scv` (storage and source terms) or one `scvf` (flux terms).
+// contribution for one `scv` (storage and source terms) or one `scvf` (flux terms).
 //
 // Let's have a look at the class implementation.
 //