Before this, the only option for ordering the legend was by specifying
explicit -L/--add-label labels. This works for the most part, but
doesn't cover the case where you don't know the parameterization of the
input data.
And we already have -s/-S flags in other csv scripts, so it makes sense
to adopt them in plot.py/plotmpl.py to allow sorting by one or more
explicit fields.
Note that -s/-S can be combined with explicit -L/--add-label labels
to order datasets that share the same sort field:
$ ./scripts/plot.py bench.csv \
    -bBLOCK_SIZE \
    -xn \
    -ybench_readed \
    -ybench_proged \
    -ybench_erased \
    --legend \
    -sBLOCK_SIZE \
    -L'*,bench_readed=bs=%(BLOCK_SIZE)s' \
    -L'*,bench_proged=' \
    -L'*,bench_erased='
---
Unfortunately this conflicted with -s/--sleep, which is a common flag in
the ascii-art scripts. -s/--sleep was bound to conflict with -s/--sort
eventually, so I came up with some alternatives:
- -s/--sleep -> -~/--sleep
- -S/--coalesce -> -+/--coalesce
But I'll admit I'm not the happiest about these...
Two new tricks:
1. Hide the cursor while redrawing the ring buffer.
2. Build up the entire redraw in RAM first, and render everything in a
single write call.
These _mostly_ get rid of the cursor flickering issues in rapidly
updating scripts.
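Roughly the trick, as a from-scratch sketch (the escape codes are
standard VT100/xterm, but redraw and its arguments are made up for
illustration, and assume we've already drawn len(lines) rows):

    import sys

    def redraw(lines):
        # build the entire redraw in RAM first
        frame = ['\x1b[?25l']                   # hide the cursor
        frame.append('\x1b[%dA' % len(lines))   # jump back to the top
        for line in lines:
            frame.append('\x1b[K%s\r\n' % line) # clear + redraw row
        frame.append('\x1b[?25h')               # show the cursor again
        # render everything in a single write call
        sys.stdout.write(''.join(frame))
        sys.stdout.flush()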
This mirrors how -H/--height and -W/--width work, with -n-1 using the
terminal height - 1 for the output.
This is very useful for carving out space for the shell prompt and other
things, without sacrificing automatic sizing.
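Something like this, as a sketch (resolve_lines is a made-up name, not
the script's actual code):

    import shutil

    # non-positive sizes are relative to the terminal, so 0 is the
    # full terminal height (automatic sizing) and -1 is terminal
    # height - 1, leaving a row for the shell prompt
    def resolve_lines(lines):
        if lines <= 0:
            return max(shutil.get_terminal_size().lines + lines, 0)
        return lines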
It's a mess but it's working. Still a number of TODOs to clean up...
This adopts all of the changes in dbgbmap.py/dbgbmapd3.py (block
grouping, nested curves, Canvas, Attrs, etc.):
- Like dbgbmap.py, we now group by block first before applying
space-filling curves, using nested space-filling curves to render
byte-level operations.
Python's ft.lru_cache really shines here.
The previous behavior is still available via -u/--contiguous.
- Adopted most features in dbgbmap.py, such as --to-scale, -t/--tiny,
custom --title strings, etc.
- Adopted Attrs so now chars/coloring can be customized with
-./--add-char, -,/--add-wear-char, -C/--add-color,
-G/--add-wear-color.
- Renamed -R/--reset -> --volatile, which is a much better name.
- Wear is now colored cyan -> white -> red, which is a bit more
visually interesting. And we weren't using cyan in any other scripts
yet.
In addition to the new stuff, there were a few simplifications:
- We no longer support sub-char -n/--lines with -:/--dots or
-⣿/--braille. Too complicated, required Canvas state hacks to get
working, and wasn't super useful.
We probably want to avoid doing too much cleverness with -:/--dots and
-⣿/--braille since we can't color sub-chars.
- Dropped -@/--blocks byte-level range stuff. This was just not worth
the amount of complexity it added. -@/--blocks is now limited to
simple block ranges. High-level scripts should stick to high-level
options.
- No fancy/complicated Bmap class. The bmap object is just a dict of
TraceBlocks which contain RangeSets for relevant operations.
Actually the new RangeSet class deserves a mention, but this commit
message is probably already too long.
RangeSet is a decently efficient set of, well, ranges, that can be
merged and queried (see the sketch after this list). In a lower-level
language it would be implemented as a binary tree, but in Python we
just use a sorted list, since a pure-Python tree probably isn't going
to beat Python's C-level O(n) list operations anyways.
- Wear is tracked at the block level, no reason to overcomplicate this.
- We no longer resize based on new info. Instead we either expect a
-b/--block-size argument or wait until first bd init call.
We can probably drop the block size in BD_TRACE statements now, but
that's a TODO item.
- Instead of one amalgamated regex, we use string searches to figure out
the bd op and then smaller regexes to parse. Lesson learned here:
Python's string search is very fast (compared to regex).
- We do _not_ support labels on blocks like we do in treemap.py/
codemap.py. It's less useful here and would just be more hassle.
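To give a flavor of the RangeSet idea, a from-scratch sketch (not the
actual class) built on a sorted list of disjoint, half-open
(start, stop) ranges:

    import bisect

    class RangeSet:
        def __init__(self):
            self.ranges = []  # sorted, disjoint (start, stop) tuples

        def add(self, start, stop):
            # find the window of ranges overlapping/touching [start,stop)
            i = bisect.bisect_left(self.ranges, (start, start))
            if i > 0 and self.ranges[i-1][1] >= start:
                i -= 1
            j = i
            # merge the whole window into a single range
            while j < len(self.ranges) and self.ranges[j][0] <= stop:
                start = min(start, self.ranges[j][0])
                stop = max(stop, self.ranges[j][1])
                j += 1
            self.ranges[i:j] = [(start, stop)]

        def __contains__(self, x):
            i = bisect.bisect_right(self.ranges, (x, float('inf')))
            return (i > 0
                and self.ranges[i-1][0] <= x < self.ranges[i-1][1])

    rs = RangeSet()
    rs.add(0, 4)
    rs.add(8, 12)
    rs.add(4, 8)  # touches both, everything merges
    assert rs.ranges == [(0, 12)]
    assert 11 in rs and 12 not in rs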
I also tried to reorganize main a bit to mirror the simple two-main
approach in dbgbmap.py and other ascii-rendering scripts, but it's a bit
difficult here since trace info is very stateful. Building up main
functions in the main main function seemed to work well enough:
main -+-> main_ -> trace__ (main thread)
      '-> draw_ -> draw__  (daemon thread)
---
You may note some weirdness going on with flags. That's me trying to
avoid upcoming flag conflicts.
I think we want -n/--lines in more scripts, now that it's relatively
self-contained, but this conflicts with -n/--namespace-depth in
codemap[d3].py, and risks conflict with -N/--notes in csv.py which may
end up with namespace-related functionality in the future.
I ended up hijacking -_, which then conflicted with -_/--add-line-char
in plot.py, but that's ok because we also want a common "secondary
char" flag for wear in tracebd.py... Long story short, I ended up
moving a bunch of flags around:
- added -n/--lines
- -n/--namespace-depth -> -_/--namespace-depth
- -N/--notes -> -N/--notes (unchanged)
- -./--add-char -> -./--add-char (unchanged)
- -_/--add-line-char -> -,/--add-line-char
- added -,/--add-wear-char
- -C/--color -> -C/--add-color
- added -G/--add-wear-color
Worth it? Dunno.
This only failed if "-" was used as an argument (for stdin/stdout), so
the issue was pretty hard to spot.
openio is a heavily copy-pasted function, so it makes sense to just add
the import os to openio directly. Otherwise this mistake will likely
happen again in the future.
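For reference, the pattern looks roughly like this (a sketch, not the
exact function):

    def openio(path, mode='r', buffering=-1):
        # import os locally, so copy-pasting openio into another
        # script can't silently lose the import again
        import os
        if path == '-':
            # dup stdin/stdout so closing the result doesn't close
            # the real stdin/stdout
            if 'r' in mode:
                return os.fdopen(os.dup(0), mode, buffering)
            else:
                return os.fdopen(os.dup(1), mode, buffering)
        else:
            return open(path, mode, buffering)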
You forget one script, running in the background, hogging a whole
core, and suddenly watch's default 2 second sleep time makes a lot more
sense...
One of the main motivators for watch.py _was_ shorter sleep times,
short enough to render realtime animations (watch is limited to 0.1
seconds for some reason?), but this doesn't mean it needs to be the
default. This can still be accomplished by explicitly specifying
-s/--sleep, and we probably don't want the default to hog all the CPU.
The use case for fast sleeps has been mostly replaced by -k/--keep-open
anyways.
For tailpipe.py and tracebd.py it's a bit less clear, but we probably
don't need to be spamming open calls 10 times a second.
Moved the local import hack behind if __name__ == "__main__".
These scripts aren't really intended to be used as python libraries.
Still, it's useful to import them for debugging and to get access to
their juicy internals.
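i.e. something along these lines, at the very top of the script (a
sketch; main here is a stand-in for the real script body):

    # prevent local imports, but only when run as a script, so the
    # script can still be imported as a module for debugging
    if __name__ == "__main__":
        __import__('sys').path.pop(0)

    import struct  # now finds the stdlib, not a sibling struct.py

    def main():
        print('script things, with juicy internals importable')

    if __name__ == "__main__":
        main()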
This seems like a more fitting name now that this script has evolved
into more of a general purpose high-level CSV tool.
Unfortunately this does conflict with the standard csv module in Python,
breaking every script that imports csv (which is most of them).
Fortunately, Python is flexible enough to let us remove the current
directory before imports with a bit of an ugly hack:
    # prevent local imports
    __import__('sys').path.pop(0)
These scripts are intended to be standalone anyways, so this is probably
a good pattern to adopt.
This matches the style used in C, which is good for consistency:
a_really_long_function_name(
        double_indent_after_first_newline(
            single_indent_nested_newlines))
We were already doing this for multiline control-flow statements, simply
because I'm not sure how else you could indent this without making
things really confusing:
if a_really_long_function_name(
        double_indent_after_first_newline(
            single_indent_nested_newlines)):
    do_the_thing()
This was the only real difference style-wise between the Python code and
C code, so now both should be following roughly the same style (80 cols,
double-indent multiline exprs, prefix multiline binary ops, etc).
This lets you view the first n lines of output instead of the last n
lines, as though the output was piped through head.
This is how the standard watch command works, and can be more useful
when most of the information is at the top, such as in our dbg*.py
scripts (watch.py was originally used as a sort of inotifywait-esque
build runner, which is the main reason it's different).
To make this work, RingIO (renamed from LinesIO) now uses terminal
height as a part of its canvas rendering. This has the added benefit of
more rigorously enforcing the canvas boundaries, but risks breaking when
not associated with a terminal. But that raises the question: does
RingIO even make sense without a terminal?
Worst case you can bypass all of this with -z/--cat.
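The head-vs-tail part itself is simple enough; a minimal sketch,
assuming a head flag (made up here; RingIO's real interface is more
involved):

    import collections
    import shutil

    class RingIO:
        def __init__(self, head=False):
            self.head = head
            # the terminal height bounds the canvas
            self.maxlen = shutil.get_terminal_size().lines - 1
            self.lines = collections.deque(maxlen=self.maxlen)

        def write(self, s):
            for line in s.splitlines():
                if self.head and len(self.lines) == self.maxlen:
                    return  # head mode: keep only the first n lines
                # tail mode: a full deque drops the oldest line
                self.lines.append(line)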
Based loosely on Linux's perf tool, perfbd.py uses trace output with
backtraces to aggregate and show the block device usage of all
functions in a program, propagating block device operation costs up
the backtrace for each operation.
This, combined with --trace-period and --trace-freq for
sampling/filtering trace events, allows the bench-runner to record the
general cost of block device operations with very little overhead.
Adopted this as the default side-effect of make bench, replacing
cycle-based performance measurements which are less important for
littlefs.
This provides 2 things:
1. perf integration with the bench/test runners - This is a bit tricky
with perf as it doesn't have its own way to combine perf measurements
across multiple processes. perf.py works around this by writing
everything to a zip file, using flock to synchronize (see the sketch
after this list). As a plus, free compression!
2. Parsing and presentation of perf results in a format consistent with
the other CSV-based tools. This actually ran into a surprising number of
issues:
- We need to process raw events to get the information we want, and
this ends up being a lot of data (~16MiB at 100Hz uncompressed), so we
parallelize the parsing of each decompressed perf file.
- perf reports raw addresses post-ASLR. It does provide sym+off which
is very useful, but to find the source of static functions we need to
reverse the ASLR by finding the delta that produces the best
symbol<->addr matches.
- This isn't related to perf, but decoding dwarf line-numbers is
really complicated. You basically need to write a tiny VM.
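The flock trick in 1 might look something like this (a sketch of the
idea; append_perf is a made-up name, and the zip is assumed to be
shared by all runner processes):

    import fcntl
    import zipfile

    def append_perf(zip_path, name, data):
        # take an exclusive flock so concurrent runner processes
        # serialize their appends to the shared zip file
        with open(zip_path, 'ab') as lockf:
            fcntl.flock(lockf, fcntl.LOCK_EX)
            try:
                # mode 'a' creates the zip if needed and appends
                # otherwise, and deflate = free compression
                with zipfile.ZipFile(
                        zip_path, 'a', zipfile.ZIP_DEFLATED) as z:
                    z.writestr(name, data)
            finally:
                fcntl.flock(lockf, fcntl.LOCK_UN)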
This also turns on perf measurement by default for the bench-runner,
but at a low frequency (100 Hz). This can be decreased or removed in
the future if it causes any slowdown.
- Changed multi-field flags to action=append instead of
comma-separated (see the example after this list).
- Dropped short-names for geometries/powerlosses
- Renamed -Pexponential -> -Plog
- Allowed omitting the 0 for -W0/-H0/-n0 and made -j0 consistent
- Better handling of --xlim/--ylim
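The first change is just argparse's append action, e.g. (-D/--define
is a real runner flag, though the define names here are arbitrary):

    import argparse

    parser = argparse.ArgumentParser()
    # each -D appends another define, instead of one flag taking a
    # comma-separated list
    parser.add_argument('-D', '--define', action='append', default=[])

    args = parser.parse_args(['-DSIZE=1024', '-DDEPTH=4'])
    assert args.define == ['SIZE=1024', 'DEPTH=4']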
Instead of trying to align to block boundaries, tracebd.py now just
aliases to whatever dimensions are provided.
Also reworked how scripts handle default sizing. Now using reasonable
defaults with 0 being a placeholder for automatic sizing. The addition
of -z/--cat makes it possible to pipe directly to stdout.
Also added support for dots/braille output which can capture more
detail, though care needs to be taken to not rely on accurate coloring.
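For the curious, braille output boils down to packing a 2x4 grid of
pixels into the Unicode braille block (a sketch; the actual Canvas
code is fancier):

    # Unicode braille dot -> bit mapping, by (row, column)
    BRAILLE_BITS = [
        [0x01, 0x08],
        [0x02, 0x10],
        [0x04, 0x20],
        [0x40, 0x80]]

    def braille_char(pixels):
        # pixels is 4 rows x 2 columns of truthy values
        mask = 0
        for y in range(4):
            for x in range(2):
                if pixels[y][x]:
                    mask |= BRAILLE_BITS[y][x]
        return chr(0x2800 + mask)

    # 8 sub-char "pixels" per char, but only one color per char
    print(braille_char([
        [1, 0],
        [1, 1],
        [0, 1],
        [1, 1]]))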
These are really just different flavors of test.py and test_runner.c
without support for power-loss testing, but with support for measuring
the cumulative number of bytes read, programmed, and erased.
Note that the existing define parameterization should work perfectly
fine for running benchmarks across various dimensions:
./scripts/bench.py \
    runners/bench_runner \
    bench_file_read \
    -gnor \
    -DSIZE='range(0,131072,1024)'
Also added a couple basic benchmarks as a starting point.
- Added the littlefs license note to the scripts.
- Adopted parse_intermixed_args everywhere for more consistent arg
handling.
- Removed argparse's implicit help text formatting as it does not
work with parse_intermixed_args and breaks sometimes.
- Used string concatenation for argparse help text everywhere;
backslashed line continuations only work with argparse because it
strips redundant whitespace.
- Consistent argparse formatting.
- Consistent openio mode handling.
- Consistent color argument handling.
- Adopted functools.lru_cache in tracebd.py.
- Moved unicode printing behind --subscripts in tracebd.py, making all
scripts ascii by default.
- Renamed pretty_asserts.py -> prettyasserts.py.
- Renamed struct.py -> struct_.py, since the original name conflicts
with Python's built-in struct module in horrible ways.
Based on a handful of local hacky variations, this sort of trace
rendering is surprisingly useful for getting an understanding of how
different filesystem operations interact with the underlying
block-device.
At some point it would probably be good to reimplement this in a
compiled language. Parsing and tracking the trace output quickly
becomes a bottleneck with the amount of trace output the tests
generate.
Note also that since tracebd.py runs on trace output, it can also be
used to debug logged block-device operations post-run.
This mostly involved futzing around with some of the less intuitive
parts of Unix's named-pipes behavior.
This is a bit important since the tests can quickly generate several
gigabytes of trace output.
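The classic gotcha, presumably among those encountered here: a FIFO
reader sees EOF whenever the last writer closes, so tailing one means
reopening in a loop (sketch):

    import sys

    def tail_fifo(path):
        while True:
            # open blocks until a writer shows up
            with open(path, 'r') as f:
                for line in f:
                    sys.stdout.write(line)
            # EOF only means all writers closed, reopen and keep going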