dumux-repositories issues https://git.iws.uni-stuttgart.de/groups/dumux-repositories/-/issues 2022-05-25T10:55:26Z
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/1155 Dumux Day 25.05.2022: How to make the Dumux day more interesting 2022-05-25T10:55:26Z Maziar Veyskarami
- Challenges a developer may face:
1. Some issues/merge requests are not descriptive enough. That could demotivate the developer to work on them.
2. The discussion about some issues becomes too technical during the Dumux day meeting. That can be intimidating to others with a different area of expertise.
3. Fear of failure deters people from contributing to unfamiliar issues.
4. No solid/detailed description of some parts of the code.
5. Strict guidelines
- The collected ideas:
1. The issues/merge requests should give more details and describe the issue in a proper way.
2. When the discussion during the Dumux day main meeting becomes so technical that it concerns only part of the group, it should be interrupted and continued by the interested people in a separate meeting.
3. Dumux day is about learning things outside your own area. To realize that goal, and to help those whose fear of failure prevents them from contributing, we can assign a task not to a single person but to a small group (2 or 3 people). The group should consist of more experienced and less experienced members.
4. After clarifying what each part of the code aims to do, we can add a description for developers to the handbook and at least cover the classes which are used in the main file.
Another idea is to record or write down the tutorials given by experienced members of the group and keep them for the new members joining us in future.
5. Everybody can develop and implement their code in a separate module. However, if they want to integrate the module into Dumux, they must follow the guidelines. By doing so, we prevent inconsistencies and bugs in the future. We recommend using the guidelines even in your private module. In addition, following the guidelines can be seen as a learning process that improves your programming skills.
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/908 [disc] Implementation of nonlinear fv schemes 2022-03-28T09:10:12Z Martin Schneider 3.6
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/884 Group arguments in assembly routines 2021-03-09T12:19:15Z Timo Koch timokoch@math.uio.no
I think the fv element geometry, element volume variables, and flux variables cache belong together from an element-wise assembly point of view.
This could be expressed, e.g., by grouping these objects together. This might reduce the number of arguments of some functions.
It's not completely trivial since there are several sensible combinations possible.
I added a suggestion for an element view in !2125.
Suggestions for improvement? What speaks for/against this?
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/826 Diffusion confusion (implementation) 2022-05-25T10:55:30Z Timo Koch timokoch@math.uio.no
We decided at some point that diffusion laws, e.g. `FicksLaw`, should compute all component fluxes in one go. This means two things:
* we can now more efficiently compute the diffusive fluxes depending on the law (Fick / Maxwell-Stefan)
* `FicksLaw` now depends on the equation system
__Example:__
1. If I want to neglect diffusion in one phase, I can set the diffusion coefficient to zero. However, the diffusive fluxes are then still computed and only thrown away in the custom local residual.
2. The Richards model is an immiscible two-phase two-component model but the air phase is never balanced. To integrate this in the current framework, we introduced `BalanceEqOpts::mainComponentIsBalanced(phaseIdx)` which is overloaded for the Richards model and used in Fick's law. In this case the dependency is actually there in the code in form of the additional dependency on `BalanceEqOpts`.
__One thought for a possible solution:__
If we had a class `DiffusionFlux` replacing the current `FicksLaw`, we could have a custom implementation `RichardsDiffusionFlux` which takes care of the special requirements. `DiffusionFlux` would be a class on the level of `LocalResidual`, containing only physics/equations. Internally it could use something like `FicksLaw` (a new implementation that only contains the transmissibility part and discretization specifics) to compute the actual individual fluxes. `DiffusionFlux` may need to be specialized on the law type (Fick / Maxwell-Stefan) but not on the discretization.
Maybe this would essentially be a code renaming/reordering, but that needs further investigation. 3.6
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/1128 Flexible hydrostatic reconstruction in Riemannproblem 2022-02-20T09:00:20Z Leopold Stadler
There exist different hydrostatic reconstructions for modelling shallow water flows over uneven terrain. The reconstruction is applied to obtain a state on the left/right side of an edge in order to compute the flow with a Riemann solver.
In DuMux, the hydrostatic reconstruction of Audusse et al. is implemented, which is a good choice for modelling flow in rivers. However, the method has its limitations for flows with small water depths and large slope variations (e.g. rainfall-runoff modelling). A rainfall-runoff benchmark showed that the results can be strongly improved by using the reconstruction method of Chen and Noelle, but this method can be problematic if the slope gradient becomes equal to the water height.
Currently, the hydrostatic reconstruction is hard-wired into the `riemannProblem`. The structure of the code is a bit complex/messy since `ShallowWaterFlux` calls the `riemannProblem`, in which the reconstruction is performed before the `exactRiemann` (exact Riemann solver) is called.
It would be nice to provide different kinds of reconstruction.
@utz what do you think?
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/976 Discussion: naming of boundary condition functions / conditions 2021-03-10T19:30:25Z Timo Koch timokoch@math.uio.no
Problem: we currently need to implement `dirichlet` and `neumann` functions, and we have the corresponding boundary condition types `dirichlet` and `neumann`. However, I believe we use `neumann` much more generally than in its original meaning. For example, Robin/Cauchy boundary conditions can be realized in the `neumann` function.
Essentially `neumann` corresponds to the integrand of all boundary integrals in the equations. It's always a weakly enforced boundary condition. `dirichlet` corresponds to setting a fixed value and is strongly or weakly enforced depending on the discretization scheme.
Alternative naming schemes could be
* `boundaryFluxes` (for `neumann`) and BC type `flux`
* `boundaryValues` (for `dirichlet`) and BC type `value`/`fixed`
or only changing `neumann`
* `boundaryFluxes` (for `neumann`) and BC type `flux`
* `dirichlet` and BC type `dirichlet`
Other ideas and opinions?
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/951 [freeflow] Introduce full shear stress terms, or document assumptions properly 2022-05-10T12:07:41Z Ned Coltman
We ran into a few discussions this summer/fall regarding terms that we may have either not implemented in the Navier-Stokes environment, or terms that we have neglected but whose assumptions we have not documented properly.
These should be included in either both the new and the old staggered discretization, or only the new staggered one.
These terms include:
- the dilatation term
```math
\tau = \mu (\nabla v + \nabla v^T) + ( \lambda \nabla \cdot v ) I
```
- with the Stokes hypothesis
```math
\lambda = -\frac{2}{3} \mu
```
- the second term of the linear eddy viscosity reynolds stress: [cfdOnline](https://www.cfd-online.com/Wiki/Linear_eddy_viscosity_models)
```math
\tau_t = 2 \mu_t S - 2/3 \rho k \delta_{ij}
```
- any others?
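For reference, combining the dilatation term with the Stokes hypothesis (which in its usual sign convention reads `\lambda = -2/3 \mu`) gives the full stress tensor

```math
\tau = \mu (\nabla v + \nabla v^T) - \frac{2}{3} \mu (\nabla \cdot v) I
```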
From what I've seen, some of this is already underway. This issue is only a location to collect problems and track progress. 3.6
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/940 Clean incorporation of new time integration methods 2021-03-25T09:23:04Z Dennis Gläser
@timok, @bernd, @kweis, @martins and I are currently discussing/developing the incorporation of a generic time integration framework in Dumux. In general, time handling is currently problematic in Dumux and leads to a bug in MultiDomain (#792, #619).
The following work plan is currently envisioned to incorporate the features into an `Experimental` namespace while guaranteeing that the current features and tests on master still work:
1. [x] introduce new grid variables concept, where they represent the complete state of a simulation - thus, not only secondary but also primary variables and possibly a time level. (see !2285)
2. introduce new assembly concept <br>
- [x] make `NewtonSolver`, `PDESolver` accept both assemblers that assemble around given `SolutionVectors` or more generic `Variables` (see !2291) <br>
- [x] add time step methods (see !2296)<br>
- [ ] add generic version of `FVAssembler`, which assembles around `Variables` and uses the time integration methods (see !2519)<br>
- [ ] Introduce solution state (name is to be discussed) class that substitutes elementSolution during the assembly - that is, as argument to volume variables updates and in spatial parameters interfaces. The concept of this state class is to carry time information in addition to the element solution, that can be used within user interfaces. (!2520)
- [ ] Introduce context class, that wraps the local views after bind in order to pass that into the user interfaces. That reduces the number of arguments in a bunch of interfaces, and moreover, we usually have interfaces like `function(element, fvGeometry, elemVolVars,...)`, but `fvGeometry` makes little sense if not bound to `element` anyway and it also carries the bound element. With the same argument `elemVolVars` are basically unusable if you don't have the `scvs` at hand to access the corresponding volume variables. So in all those interfaces it makes sense to group the arguments. This is also introduced together with the solution state in !2520
- [ ] Extend problem/parameter/volvars interfaces to make it possible to inject some container with additionally required data, which in `MultiDomain` could be used to pass the coupling data. This is probably a lot of work and requires some thought regarding compatibility.<br>
- [ ] port the above concepts to `MultiDomain` (first goal: make `test_el2p` work, fixing the main bug)
Edit 25.03: We may postpone the introduction of the additional container to hold the coupling context for now and first realize multidomain in the new experimental framework but still with the context stored centrally in `CouplingManager` (an outdated but working draft is in !2448). This way, we can still reuse most interfaces in non-experimental namespace. Afterwards, we could address the issue of the central context separately, which would probably involve quite some interface changes... With either approach, the bug of non-converging Newton solver for poromechanics is addressed. Getting the context out of `CouplingManager` would additionally address thread safety in thread-parallel runs.
Problems that might need to be solved:
- The `assemble()` functions in the assembler now receive non-const GridVariables (introduced in 3d7068043ddb1cc7249aa0cd7d95ffa077733524). That was necessary because in the case of global caching we actually deflect the variables that come in. We should maybe think of a concept to circumvent this, and adapt !2519 accordingly.
Intermediate solutions/developments or related stuff, which should be deleted in case we favour the propositions above:
!2297, !2448, !2476, !2498, !2134, !2281, `feature/timestepmethods (now deleted)`
Things that should be revisited and adapted once this is ready:
!2130 Dennis Gläser
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/933 Discussion: Unify LocalAssemblers and SubDomainCouplingAssemblers 2021-05-08T12:35:57Z Kilian Weishaupt
From a brief glance at `BoxLocalAssembler` and `SubDomainBoxLocalAssembler` it seems that there is a large degree of code duplication. I think it should be possible to make `SubDomainBoxLocalAssembler` inherit from `BoxLocalAssembler`.
The only critical parts are some calls to the `couplingManager`. These could be replaced by lambda calls which do nothing for the non-coupling assembler. The coupled assembler could then call the base class' function with a specialized lambda which, e.g., updates the coupled variables.
We could evaluate the possibility of streamlining our code on the next Dumux day.
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/881 some interfaces in FVLocalResidual receive problem as function argument 2021-06-07T13:06:20Z Dennis Gläser
In the base local residual for finite-volume schemes, we have some interfaces that receive a problem instance and some that use the private variable `problem_`, which is set in the class constructor. For some functions, there exist two overloads: one receiving a problem and one using the private variable.
I find this a bit confusing at the moment. The functions are not static, so you need the local residual object - which was instantiated with a specific problem instance, i.e. we have the constructor
```cpp
//! the constructor
FVLocalResidual(const Problem* problem,
const TimeLoop* timeLoop = nullptr)
: problem_(problem)
, timeLoop_(timeLoop)
{}
```
What I find even more surprising is that `Problem` comes out of the property system, so I can potentially only call the interfaces receiving a problem with another instance of the same problem type.
So, I am wondering if the functions receiving a problem instance were designed to allow evaluations for any problem, independent of how the class was instantiated? But then I'd assume the functions to be static, or to be free functions that temporarily instantiate a local residual using the provided problem and then call these functions on that local object without passing `problem`.
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/814 Extract "constraint solvers" from volume variables 2022-03-28T09:20:08Z Timo Koch timokoch@math.uio.no
The computation of the secondary variables from the primary variables is currently often coded inside the volume variables' `update` function. For the purpose of potential code reuse, readability, and testability, it is IMO better to move this functionality to constraint solver classes, even if the computation is quite simple. This is sometimes done in the form of the `computeFluidState` function. However, there is no general interface for all models.
If the constraint solver is in a separate class, it is much easier to write unit tests. Volume variables have a lot of dependencies, but the constraint solver can easily be tested, and mock objects are simple to construct for values obtained from, e.g., the spatial params or some physical law. 3.6
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/761 Cleanup explicit flash of implicit 2p2c model 2022-03-28T09:25:21Z Beatrix Becker
The volume variables of the 2p2c model have an explicit flash implemented directly in the volume variables themselves.
* In general I like the idea of a specialized 2p2c flash that is easy to understand and fast, but it shouldn't be included in the volumevariables. I would propose implementing it as a separate class in a separate header, like the other constraintsolvers.
* The flash is only used if `useConstraintSolver` is false and the default is true. For the 2p2c model I would make the flash the default, since it is a faster calculation than using the more general `MiscibleMultiPhaseComposition` constraintsolver which solves a linear system of equations. Maybe we should even completely delete `useConstraintSolver` because in my opinion the solver has no benefit here, it solves the same equations, just less efficiently.
* For the case of one phase we may use the `ComputeFromReferencePhase` constraintsolver since it does exactly what the flash does.
* I don't think the flash is currently tested, so this should be added. It should have the same result as the other constraintsolvers.
What do you think? Another solution could be to delete the flash code and always use the solvers that we already have, but as mentioned above, I prefer having a 2p2c-specific flash.
There are a few points that I'm not sure of, maybe @holle can comment on this:
* In my opinion this flash is not as correct as it could be because it uses the assumption that vapor pressure of the liquid component and partial pressure of the liquid component in the gas phase are the same. This is only the case if we neglect the presence of other components in the gas phase. There is an equally quick method to calculate the mass fractions without using this assumption, see the 2p2c flash of the sequential models (note: pre release 3.4)
* It seems that the flash assumes that we deal with one liquid and one gas phase and that the liquid phase is the first phase. I think the 2p2c flash of the sequential models doesn't have this constraint.
* There seems to be a bug in the case that only the first phase is present, in the calculation of the mole fraction of the first component in the second phase: a multiplication with the mole fraction of the first component in the first phase is probably missing. 3.6
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/707 Generic implementation of L2-norm calculation using generic L2-projection 2022-02-14T12:29:10Z Timo Koch timokoch@math.uio.no
With !1609 we get a generic L2-projection. In order to use it for interpolation between two arbitrary grids (arbitrary function spaces already work), we need to implement
* [x] 2d-2d intersections (!1625)
* [x] 3d-3d intersections (!2977)
That would be a great tool for convergence tests.
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/661 Possibly include update of Box flux variables cache 2020-07-29T08:46:49Z Dennis Gläser
The flux variable caches for the box scheme are always assumed to be solution-independent. We should think of a way to support user-defined, solution-dependent caches.
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/524 (Re-)Implement CFL criterion 2021-03-25T08:35:28Z Timo Koch timokoch@math.uio.no
As a first step of porting the decoupled/sequential models to the new structure, it would be a good thing to implement a CFL criterion for the time step control of the current porousmediumflow models. A good starting point is the 1p_tracer test (https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/tree/master/test/porousmediumflow/tracer/1ptracer), which uses an explicit Euler scheme for the transport but currently has a constant time step that is small enough for the test. A CFL criterion would be a big improvement.
@martins Maybe you are the best to deal with this?
Martin Schneider
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/436 Minimize private alias declarations and static constants? 2020-03-18T09:22:07Z Bernd Flemisch
Currently, each Dumux class that takes a TypeTag as template parameter typically contains several/many private alias declarations and static constant definitions extracted from the TypeTag. This may happen hundreds of lines above the first usage of the declared names.
An alternative would be to put the declarations as close as possible to the place where they are used. If the declarations are used as function parameter / return types, one could use template parameters / auto instead.
The expected benefit would be an improved readability of the code and the avoidance of unused declarations and definitions.
In order to discuss this, I set up !741. Please have a look and share your opinions here.
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/1161 Possibly wrong velocity output for rotational symmetric domains 2022-05-28T10:11:08Z Timo Koch timokoch@math.uio.no
Reported by @yue:
calculateVelocity seems to be missing the extrusion area.
```cpp
void calculateVelocity(...) const
{
    ...
    if constexpr (isBox && dim == 1)
    {
        ...
        for (auto&& scvf : scvfs(fvGeometry))
        {
            if (scvf.boundary())
                continue;

            // instantiate the flux variables
            FluxVariables fluxVars;
            fluxVars.init(problem_, element, fvGeometry, elemVolVars, scvf, elemFluxVarsCache);

            // get the volume flux divided by the area of the
            // subcontrolvolume face in the reference element
            Scalar localArea = scvfReferenceArea_(geomType, scvf.index());
            Scalar flux = fluxVars.advectiveFlux(phaseIdx, upwindTerm) / localArea;

            const auto& insideVolVars = elemVolVars[scvf.insideScvIdx()];
            flux /= insideVolVars.extrusionFactor();
            tmpVelocity *= flux;

            const int eIdxGlobal = gridGeometry_.elementMapper().index(element);
            velocity[eIdxGlobal] = tmpVelocity;
        }
        return;
    }
```
**Proposed strategy for resolution**
Add velocity output to rotational symmetric test and make sure it's correct.
**Possible fix**
*There might be a factor `Extrusion::area(scvf)/scvf.area()` missing. Care has to be taken in case there is a mapping from local to global coordinates.* 3.5
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/1160 Prototypical math models for testing and as examples 2022-05-25T19:40:19Z Timo Koch timokoch@math.uio.no
__Proposed feature__:
Many models in Dumux are relatively general and support many variations. This can also often be distracting both when understanding the code and when thinking about general concepts and structure. I propose (something similar has been proposed in dumux-repositories/dumux-course#17) to add some models with simple structure. For example:
* Poisson's equation
* Helmholtz' equation
* Heat equation
* Wave equation
* Burgers' equation
* Allen-Cahn equation
* Cahn-Hilliard equation(s)
* (Multidomain) Incompressible Stokes equations
These prototypical models have the advantage that we can mostly assume scalar constant parameters (no need for fluid systems and so on), a small number of primary variables (e.g. 1 scalar), and a simple local residual. Parameter names would be generic and would not imply specific physics.
__The goal would be to use such models__
* to demonstrate (to users and developers) what the essential and minimal ingredients of a new model are
* to use them as starting point for new models
* to test if our software components work well in isolation and are small enough (testing). If they turn out not to be these models should be good candidates to think about better abstractions (development).
* to have simple demonstrators for teaching/outreach
* to use them directly as tools/components
__Open questions:__
* In which folder would such models go?
* Do we even want to hard-code some models for one spatial discretization scheme to further simplify?
_Examples where a procedure like this helped to improve the code:_
* #867
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/1159 Wrong return type for function outsideScvfWithSameIntegrationPoint 2022-05-25T18:22:25Z Yue Wang
The [code](https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/blob/master/dumux/discretization/facecentered/staggered/geometryhelper.hh#L179) here may return `otherScvf` as a reference to a local variable if caching is disabled.
```
template<class FVElementGeometry, class SubControlVolumeFace>
static const SubControlVolumeFace& outsideScvfWithSameIntegrationPoint(const FVElementGeometry& fvGeometry, const SubControlVolumeFace& scvf)
{
    const auto& lateralOrthogonalScvf = fvGeometry.lateralOrthogonalScvf(scvf);
    assert(!lateralOrthogonalScvf.boundary());
    const int offset = (dim == 2) ? 3 : 5;
    const auto otherLocalIdx = isOdd_(scvf.localIndex()) ? scvf.localIndex() - offset : scvf.localIndex() + offset;

    auto outsideFVGeometry = localView(fvGeometry.gridGeometry());
    const auto outsideElementIdx = fvGeometry.scv(lateralOrthogonalScvf.outsideScvIdx()).elementIndex();
    outsideFVGeometry.bindElement(fvGeometry.gridGeometry().element(outsideElementIdx));

    for (const auto& otherScvf : scvfs(outsideFVGeometry))
    {
        if (otherScvf.localIndex() == otherLocalIdx)
            return otherScvf;
    }

    DUNE_THROW(Dune::InvalidStateException, "No outside scvf found");
}
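
// Possible fix (sketch, untested): `outsideFVGeometry` is a local object,
// so the `otherScvf` found in its scvfs range is destroyed when the function
// returns, and returning it by const reference dangles when caching is
// disabled. Returning by value, i.e.
//
//     static SubControlVolumeFace outsideScvfWithSameIntegrationPoint(...)
//     { ...; return otherScvf; } // returns a copy
//
// would avoid the dangling reference.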
```
3.5
https://git.iws.uni-stuttgart.de/dumux-repositories/dumux/-/issues/1158 test_1pnc_maxwellstefan_tpfa (Failed) 2022-05-25T16:31:36Z Yue Wang
**Bug report**
The test_1pnc_maxwellstefan_tpfa needs 20 time steps on my laptop, but the reference is compared with the 19th output in the makefile. After I changed the compared output in the makefile, there is still a small discrepancy leading to failure.
**Environment**:
- Dune version: 2.8
- DuMux version: release/3.5
- OS Version: macOS 11.6.5
- Compiler Version: gcc 11.3.0
- Others:
Output
```
ctest -R test_1pnc_maxwellstefan_tpfa --rerun-failed --output-on-failure
Test project /Users/ouetsu/dumuxday/dumux/build-cmake
Start 383: test_1pnc_maxwellstefan_tpfa
1/1 Test #383: test_1pnc_maxwellstefan_tpfa .....***Failed 1.75 sec
/Users/ouetsu/dumuxday/dumux/build-cmake/test/porousmediumflow/1pnc/1p3c/test_1pnc_maxwellstefan_tpfa-00020.vtu
In the beginning the Universe was created. This has made a lot of people very angry and has been widely regarded as a bad move.!
- Douglas Adams, HGttG
Reading parameters from file params.input.
Computed bounding box tree with 1799 nodes for 900 grid entites in 0.000222 seconds.
problem uses mole fractions
-- Using the default temperature of 293.15 in the entire domain. Overload temperatureAtPos() in your spatial params class to define a custom temperature field.Or provide the preferred domain temperature via the SpatialParams.Temperature parameter.
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.013 seconds.
Colored 900 elements with 7 colors in 0.000518 seconds.
Newton solver configured with the following options and parameters:
-- Newton.EnableShiftCriterion = true (relative shift convergence criterion)
-- Newton.MaxRelativeShift = 1e-11
-- Newton.MinSteps = 2
-- Newton.MaxSteps = 18
-- Newton.TargetSteps = 10
-- Newton.RetryTimeStepReductionFactor = 0.5
-- Newton.MaxTimeStepDivisions = 10
Newton iteration 1 done, maximum relative shift = 1.2000e-02
Newton iteration 2 done, maximum relative shift = 6.2859e-03
Newton iteration 3 done, maximum relative shift = 3.8102e-06
Newton iteration 4 done, maximum relative shift = 1.2834e-11
Newton iteration 5 done, maximum relative shift = 1.7347e-16
Assemble/solve/update time: 0.034(56.20%)/0.026(43.60%)/0.00012(0.20%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.014 seconds.
[ 0%] Time step 1 done in 0.06 seconds. Wall clock time: 0.07426, time: 1, time step size: 1
Newton iteration 1 done, maximum relative shift = 2.1265e-02
Newton iteration 2 done, maximum relative shift = 8.3938e-05
Newton iteration 3 done, maximum relative shift = 6.9125e-09
Newton iteration 4 done, maximum relative shift = 7.0832e-14
Assemble/solve/update time: 0.015(44.14%)/0.019(55.68%)/6.1e-05(0.18%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0092 seconds.
[ 0%] Time step 2 done in 0.049 seconds. Wall clock time: 0.1179, time: 2.4167, time step size: 1.4167
Newton iteration 1 done, maximum relative shift = 2.4593e-02
Newton iteration 2 done, maximum relative shift = 1.4577e-04
Newton iteration 3 done, maximum relative shift = 2.1208e-08
Newton iteration 4 done, maximum relative shift = 4.2510e-13
Assemble/solve/update time: 0.014(45.79%)/0.016(54.05%)/4.8e-05(0.16%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0092 seconds.
[ 0%] Time step 3 done in 0.04 seconds. Wall clock time: 0.1575, time: 4.5417, time step size: 2.125
Newton iteration 1 done, maximum relative shift = 2.6520e-02
Newton iteration 2 done, maximum relative shift = 2.1205e-04
Newton iteration 3 done, maximum relative shift = 4.0424e-08
Newton iteration 4 done, maximum relative shift = 9.3259e-14
Assemble/solve/update time: 0.013(43.73%)/0.016(56.03%)/6.8e-05(0.23%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0089 seconds.
[ 0%] Time step 4 done in 0.039 seconds. Wall clock time: 0.19582, time: 7.7292, time step size: 3.1875
Newton iteration 1 done, maximum relative shift = 2.6662e-02
Newton iteration 2 done, maximum relative shift = 2.6987e-04
Newton iteration 3 done, maximum relative shift = 3.8616e-08
Newton iteration 4 done, maximum relative shift = 1.3961e-13
Assemble/solve/update time: 0.012(41.74%)/0.017(57.99%)/7.9e-05(0.27%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0094 seconds.
[ 0%] Time step 5 done in 0.038 seconds. Wall clock time: 0.2344, time: 12.51, time step size: 4.7812
Newton iteration 1 done, maximum relative shift = 2.5129e-02
Newton iteration 2 done, maximum relative shift = 3.2645e-04
Newton iteration 3 done, maximum relative shift = 3.8414e-08
Newton iteration 4 done, maximum relative shift = 2.4670e-12
Assemble/solve/update time: 0.012(44.47%)/0.015(55.35%)/4.9e-05(0.18%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.009 seconds.
[ 1%] Time step 6 done in 0.037 seconds. Wall clock time: 0.27058, time: 19.682, time step size: 7.1719
Newton iteration 1 done, maximum relative shift = 2.2381e-02
Newton iteration 2 done, maximum relative shift = 3.3690e-04
Newton iteration 3 done, maximum relative shift = 7.9237e-08
Newton iteration 4 done, maximum relative shift = 2.3762e-12
Assemble/solve/update time: 0.012(45.95%)/0.014(53.87%)/4.7e-05(0.18%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0088 seconds.
[ 1%] Time step 7 done in 0.036 seconds. Wall clock time: 0.3059, time: 30.44, time step size: 10.758
Newton iteration 1 done, maximum relative shift = 2.3015e-02
Newton iteration 2 done, maximum relative shift = 3.0702e-04
Newton iteration 3 done, maximum relative shift = 9.6664e-08
Newton iteration 4 done, maximum relative shift = 1.0971e-11
Newton iteration 5 done, maximum relative shift = 1.1657e-15
Assemble/solve/update time: 0.015(41.69%)/0.021(57.99%)/0.00011(0.32%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0088 seconds.
[ 1%] Time step 8 done in 0.045 seconds. Wall clock time: 0.35098, time: 46.577, time step size: 16.137
Newton iteration 1 done, maximum relative shift = 2.3471e-02
Newton iteration 2 done, maximum relative shift = 2.3345e-04
Newton iteration 3 done, maximum relative shift = 6.3069e-08
Newton iteration 4 done, maximum relative shift = 1.1634e-11
Newton iteration 5 done, maximum relative shift = 9.4369e-16
Assemble/solve/update time: 0.015(43.41%)/0.019(56.41%)/6.1e-05(0.18%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.01 seconds.
[ 2%] Time step 9 done in 0.043 seconds. Wall clock time: 0.39514, time: 69.437, time step size: 22.86
Newton iteration 1 done, maximum relative shift = 2.2485e-02
Newton iteration 2 done, maximum relative shift = 1.8337e-04
Newton iteration 3 done, maximum relative shift = 4.9848e-08
Newton iteration 4 done, maximum relative shift = 7.6620e-12
Assemble/solve/update time: 0.012(40.34%)/0.017(59.44%)/6.4e-05(0.22%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0097 seconds.
[ 3%] Time step 10 done in 0.039 seconds. Wall clock time: 0.43413, time: 101.82, time step size: 32.385
Newton iteration 1 done, maximum relative shift = 2.2842e-02
Newton iteration 2 done, maximum relative shift = 1.7489e-04
Newton iteration 3 done, maximum relative shift = 5.3192e-08
Newton iteration 4 done, maximum relative shift = 2.1477e-13
Assemble/solve/update time: 0.012(44.81%)/0.015(54.88%)/8.6e-05(0.31%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0092 seconds.
[ 4%] Time step 11 done in 0.037 seconds. Wall clock time: 0.47081, time: 150.4, time step size: 48.578
Newton iteration 1 done, maximum relative shift = 2.3759e-02
Newton iteration 2 done, maximum relative shift = 1.7493e-04
Newton iteration 3 done, maximum relative shift = 5.4479e-08
Newton iteration 4 done, maximum relative shift = 2.9345e-13
Assemble/solve/update time: 0.012(43.70%)/0.015(55.89%)/0.00011(0.41%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0089 seconds.
[ 6%] Time step 12 done in 0.037 seconds. Wall clock time: 0.50748, time: 223.27, time step size: 72.867
Newton iteration 1 done, maximum relative shift = 2.3978e-02
Newton iteration 2 done, maximum relative shift = 1.8186e-04
Newton iteration 3 done, maximum relative shift = 5.9748e-08
Newton iteration 4 done, maximum relative shift = 3.3276e-13
Assemble/solve/update time: 0.012(40.62%)/0.018(59.11%)/8.2e-05(0.27%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0089 seconds.
[ 9%] Time step 13 done in 0.039 seconds. Wall clock time: 0.54644, time: 332.57, time step size: 109.3
Newton iteration 1 done, maximum relative shift = 2.4229e-02
Newton iteration 2 done, maximum relative shift = 1.9660e-04
Newton iteration 3 done, maximum relative shift = 6.2956e-08
Newton iteration 4 done, maximum relative shift = 7.8290e-13
Assemble/solve/update time: 0.012(43.12%)/0.015(56.60%)/7.8e-05(0.29%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0094 seconds.
[ 14%] Time step 14 done in 0.036 seconds. Wall clock time: 0.58331, time: 496.52, time step size: 163.95
Newton iteration 1 done, maximum relative shift = 2.4639e-02
Newton iteration 2 done, maximum relative shift = 2.2045e-04
Newton iteration 3 done, maximum relative shift = 8.7346e-08
Newton iteration 4 done, maximum relative shift = 1.3128e-12
Assemble/solve/update time: 0.013(39.48%)/0.02(60.35%)/5.6e-05(0.17%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0091 seconds.
[ 21%] Time step 15 done in 0.042 seconds. Wall clock time: 0.62524, time: 742.45, time step size: 245.93
Newton iteration 1 done, maximum relative shift = 2.4849e-02
Newton iteration 2 done, maximum relative shift = 2.6070e-04
Newton iteration 3 done, maximum relative shift = 1.5927e-07
Newton iteration 4 done, maximum relative shift = 4.2347e-11
Newton iteration 5 done, maximum relative shift = 1.5543e-15
Assemble/solve/update time: 0.017(42.04%)/0.023(57.71%)/0.0001(0.25%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0096 seconds.
[ 31%] Time step 16 done in 0.049 seconds. Wall clock time: 0.67486, time: 1111.3, time step size: 368.89
Newton iteration 1 done, maximum relative shift = 2.4659e-02
Newton iteration 2 done, maximum relative shift = 3.1316e-04
Newton iteration 3 done, maximum relative shift = 2.1819e-07
Newton iteration 4 done, maximum relative shift = 8.1444e-11
Newton iteration 5 done, maximum relative shift = 2.7649e-15
Assemble/solve/update time: 0.016(36.62%)/0.027(63.20%)/7.8e-05(0.18%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.01 seconds.
[ 45%] Time step 17 done in 0.053 seconds. Wall clock time: 0.72813, time: 1633.9, time step size: 522.6
Newton iteration 1 done, maximum relative shift = 2.6992e-02
Newton iteration 2 done, maximum relative shift = 3.3205e-04
Newton iteration 3 done, maximum relative shift = 4.3039e-07
Newton iteration 4 done, maximum relative shift = 1.6188e-11
Newton iteration 5 done, maximum relative shift = 3.7192e-15
Assemble/solve/update time: 0.016(40.42%)/0.024(59.43%)/6.2e-05(0.15%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0095 seconds.
[ 66%] Time step 18 done in 0.051 seconds. Wall clock time: 0.77863, time: 2374.3, time step size: 740.34
Newton iteration 1 done, maximum relative shift = 3.2737e-02
Newton iteration 2 done, maximum relative shift = 2.8143e-04
Newton iteration 3 done, maximum relative shift = 2.3822e-07
Newton iteration 4 done, maximum relative shift = 4.4269e-11
Newton iteration 5 done, maximum relative shift = 4.0246e-15
Assemble/solve/update time: 0.016(37.61%)/0.026(62.25%)/5.9e-05(0.14%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0094 seconds.
[ 95%] Time step 19 done in 0.052 seconds. Wall clock time: 0.83014, time: 3423.1, time step size: 1048.8
Newton iteration 1 done, maximum relative shift = 5.3190e-03
Newton iteration 2 done, maximum relative shift = 2.0091e-06
Newton iteration 3 done, maximum relative shift = 4.3137e-10
Newton iteration 4 done, maximum relative shift = 1.9984e-15
Assemble/solve/update time: 0.013(43.82%)/0.017(56.01%)/5.1e-05(0.17%)
Writing output for problem "test_1pnc_maxwellstefan_tpfa". Took 0.0094 seconds.
[100%] Time step 20 done in 0.04 seconds. Wall clock time: 0.87, time: 3600, time step size: 176.9
Simulation took 0.87 seconds on 1 processes.
The cumulative CPU time was 0.87 seconds.
Forty-two. I checked it very thoroughly, and that quite definitely is the answer. I think the problem, to be quite honest with you, is that you've never actually known what the question is.
- Douglas Adams, HGttG
Fuzzy comparison...
Comparing /Users/ouetsu/dumuxday/dumux/test/references/test_1pnc_maxwellstefan_tpfa-reference.vtu and /Users/ouetsu/dumuxday/dumux/build-cmake/test/porousmediumflow/1pnc/1p3c/test_1pnc_maxwellstefan_tpfa-00020.vtu
... with a maximum relative error of 0.01 and a maximum absolute error of 1.5e-07*max_abs_parameter_value.
Data differs in parameter: delp
Difference is too large: 1.54% -> between: -0.000176918 and -0.000174189 Info for delp: max_abs_parameter_value=0.0211596 and min_abs_parameter_value=0.000174189.
Data differs in parameter: velocity_Gas (m/s)_0
Difference is too large: 1.28% -> between: 6.40399e-06 and 6.32196e-06 Info for velocity_Gas (m/s)_0: max_abs_parameter_value=6.44928e-06 and min_abs_parameter_value=2.75056e-07.
Fuzzy comparison done (not equal)
0% tests passed, 1 tests failed out of 1
Label Time Summary:
1pnc = 1.75 sec*proc (1 test)
porousmediumflow = 1.75 sec*proc (1 test)
Total Test time (real) = 1.96 sec
The following tests FAILED:
383 - test_1pnc_maxwellstefan_tpfa (Failed)
Errors while running CTest
```
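For reference, the failing criterion at the end of the log can be reproduced by hand. The sketch below is my reading of how the fuzzy comparison combines the two tolerances from the log (a pair passes if it is within the absolute tolerance `1.5e-07 * max_abs_parameter_value`, or if its relative difference stays below 1%); it is not the actual dumux comparison script, and the function name `fuzzy_equal` is made up for illustration.

```python
def fuzzy_equal(a, b, rel_tol=0.01, abs_tol_factor=1.5e-07, max_abs_value=1.0):
    """Return True if a and b agree within the absolute tolerance
    (scaled by the parameter's largest magnitude) or within rel_tol
    relative difference."""
    diff = abs(a - b)
    if diff <= abs_tol_factor * max_abs_value:  # absolute criterion
        return True
    return diff / max(abs(a), abs(b)) <= rel_tol  # relative criterion

# Values reported for 'delp' in the log above:
# diff = 2.729e-06, relative difference ≈ 1.54% > 1% -> fails
print(fuzzy_equal(-0.000176918, -0.000174189, max_abs_value=0.0211596))
# prints: False
```

Plugging in the reported pair for `velocity_Gas (m/s)_0` (6.40399e-06 vs 6.32196e-06) likewise gives a relative difference of about 1.28%, matching the log, so both parameters exceed the 1% tolerance by a small margin rather than by orders of magnitude.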