Filter duplicate points from the post-processing plot-over-line script
The plot-over-line filter from ParaView, accessible through the post-processing scripts, adds points at a fixed resolution, ignoring the underlying discretization. This leads to superfluous points inside an element, although the first or last point of the element would suffice (we only have constant or linear function spaces).
It would be nice to include a filter that reduces the file size of both the CSV files and the resulting plots.
I wrote a script that did the trick for me, but it is limited to constant function spaces and requires that the last five characters of a line differ between elements.
```python
import sys

# input and output files
inp = open(sys.argv[1], 'r')
out = open(sys.argv[2], 'w')

# remember the previous two lines; always write the very first one
prev2 = inp.readline()
prev1 = inp.readline()
out.write(prev2)

# parse every line
for line in inp:
    # compare the last 5 characters to detect equal entries
    if line[-5:] == prev2[-5:]:
        prev1 = line
    else:
        # value changed: write the last point of the previous run
        out.write(prev1)
        prev2 = prev1
        prev1 = line
out.write(prev1)
out.close()
inp.close()
```
Good idea. Couldn't you compare the gradient instead of the plain value? Then you would also be prepared for the linear case. By the way, there are only .csv files and no "plot" files. What does "# store time step sizes of failed Newton steps" do?
The comment was a copy-paste oversight; I fixed it. By "plot files" I meant the plots generated from the CSVs with Gnuplot or TikZ: the plot ends up with many x's or o's, and generating it slows down.
You are right about the linear case; my script was just a quick-and-dirty solution written to get to bed before dawn.
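For the gradient idea, here is a minimal sketch of a collinearity filter that would cover both the constant and the linear case. This is an illustration, not the script from this issue: it assumes the CSV rows have already been parsed into `(x, value)` tuples, and the column indices and tolerance are placeholders to adapt to the actual post-processing output.

```python
def filter_collinear(rows, x_col=0, y_col=1, tol=1e-9):
    """Drop interior points of straight segments.

    A point is redundant if the slope of the segment before it equals
    the slope of the segment after it (up to tol), i.e. it lies on the
    line through its neighbours.  Constant runs are the slope-zero case,
    so this handles constant and linear function spaces alike.
    """
    if len(rows) <= 2:
        return list(rows)
    kept = [rows[0]]
    for prev, cur, nxt in zip(rows, rows[1:], rows[2:]):
        dx1 = cur[x_col] - prev[x_col]
        dx2 = nxt[x_col] - cur[x_col]
        # cross-multiplied slope comparison avoids division by zero
        if abs((cur[y_col] - prev[y_col]) * dx2
               - (nxt[y_col] - cur[y_col]) * dx1) > tol:
            kept.append(cur)
    kept.append(rows[-1])
    return kept


# example: a linear ramp followed by a constant plateau collapses
# to the three corner points
points = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0), (1.5, 1.0), (2.0, 1.0)]
print(filter_collinear(points))
```

Unlike the character comparison above, this keeps element boundaries even when neighbouring elements happen to share a value, as long as the slope changes there.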