Tricky tests
Some of our tests fail depending on, e.g., the machine or the linear solver used. For these tests, small changes appear to make a large difference. This could be a bug in the test, but it could also indicate a very sensitive system; in the latter case, more stable tests need to be defined. Below is a list of tests that failed on my computer but did not fail on the buildbot. Please add further tests that show such behavior in the comments.
- test_zeroeq (works with SuperLU, not with Umfpack)
- test_el1p2c
- test_boxadaptive2p (works with SuperLU, not with Umfpack)
- test_2cstokes2p2c
- test_2cnistokes2p2cni
- test_2cnistokes2p2cni_boundarylayer
- test_2czeroeq2p2c
- test_2cnizeroeq2p2cni
- test_forchheimer2p
- test_boxmpnckinetic
- test_cc2pncmin
- test_stokes
- lens2pexercise3
- co2plumeshapeexercise
- fuelcell
- naplinfiltration3p3c
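To check whether one of the tests above fails intermittently on your machine, it can help to re-run it several times from the build tree. The following is a minimal sketch assuming a configured CMake build directory; the test name is a placeholder picked from the list, and the repeat count is arbitrary. CTest's `--repeat-until-fail` re-runs a passing test, which can expose machine- or solver-dependent failures.

```shell
#!/bin/sh
# Sketch: re-run a possibly flaky test to probe solver/machine sensitivity.
# Assumptions: CMake's ctest is installed and we are in a configured
# DuMuX build directory (CTestTestfile.cmake present).
TEST_NAME=test_zeroeq   # placeholder: pick any test from the list above

if command -v ctest >/dev/null 2>&1 && [ -f CTestTestfile.cmake ]; then
    # -R selects tests by regex; --repeat-until-fail 5 re-runs up to 5 times
    ctest -R "$TEST_NAME" --output-on-failure --repeat-until-fail 5
else
    echo "run this from a configured build directory to repeat $TEST_NAME"
fi
```

Running the same test with different linear solvers (e.g. SuperLU vs. Umfpack, as noted for test_zeroeq and test_boxadaptive2p) and comparing the results is another way to distinguish a test bug from a genuinely sensitive system.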