Some of our tests fail depending on, e.g., the machine or the solvers used. For those tests, small changes seem to make a large difference. This could be a bug in the test, or it could be a very sensitive system. In the latter case, more stable tests need to be defined. Below is a list of the tests that failed on my machine (but did not fail on the buildbot). Please add further tests that show such behavior in the comments.
- test_zeroeq (works with SuperLU, not with Umfpack)
- test_boxadaptive2p (works with SuperLU, not with Umfpack)
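One way to make such solver-sensitive tests more stable is to compare results with a fuzzy tolerance instead of requiring bitwise-identical output, so that round-off differences between SuperLU and Umfpack do not trip the test. The sketch below is illustrative only: the function name and tolerance values are assumptions, not part of our actual test harness.

```python
# Hypothetical sketch: compare two solution vectors entry-wise with a
# combined relative/absolute tolerance instead of exact equality.
def fuzzy_equal(a, b, rel_tol=1e-7, abs_tol=1e-12):
    """Return True if every entry of a and b agrees within tolerance."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        # Accept a difference up to the larger of the absolute tolerance
        # and the relative tolerance scaled by the larger magnitude.
        if abs(x - y) > max(abs_tol, rel_tol * max(abs(x), abs(y))):
            return False
    return True

# Example: results from two solvers differing only in round-off noise.
superlu_result = [1.00000001, 2.0, -3.5]
umfpack_result = [1.00000002, 2.0, -3.5]
print(fuzzy_equal(superlu_result, umfpack_result))  # True
```

With such a comparison, only differences beyond the chosen tolerances count as failures, which should separate genuine test bugs from harmless solver-dependent noise.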