This only failed if "-" was used as an argument (for stdin/stdout), so
the issue was pretty hard to spot.
openio is a heavily copy-pasted function, so it makes sense to just add
the import os to openio directly. Otherwise this mistake will likely
happen again in the future.
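Roughly the shape of the fix (a simplified sketch, not the exact openio
from the scripts):

def openio(path, mode='r', buffering=-1):
    # keep the import local, so copy-pasting openio carries its
    # dependencies along
    import os
    import sys
    if path == '-':
        # '-' means stdin/stdout, dup the fd so closing the returned
        # file doesn't close the real stdin/stdout
        if 'r' in mode:
            return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
        else:
            return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
    else:
        return open(path, mode, buffering)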
Moved local import hack behind if __name__ == "__main__"
These scripts aren't really intended to be used as python libraries.
Still, it's useful to import them for debugging and to get access to
their juicy internals.
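Roughly the pattern, with main() standing in for each script's real
entry point:

def main():
    # ... the script's actual work goes here ...
    return 0

if __name__ == "__main__":
    # prevent local imports (so "import csv" finds the stdlib csv module,
    # not our local csv.py), but only when run as a script
    __import__('sys').path.pop(0)

    import sys
    sys.exit(main())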
This ended up being a pretty in-depth rework of prettyasserts.py to
adopt the shared Parser class. But now prettyasserts.py should be both
more robust and faster.
The tricky parts:
- The Parser class eagerly munches whitespace by default. This is
usually a good thing, but for prettyasserts.py we need to keep track
of the whitespace somehow in order to write it to the output file.
The solution here is a little bit hacky. Instead of complicating the
Parser class, we implicitly add a regex group for whitespace when
compiling our lexer (see the sketch after this list).
Unfortunately this does make last-minute patching of the lexer a bit
messy (for things like -p/--prefix, etc), thanks to Python's
re.Pattern class not being extendable. To work around this, the Lexer
class keeps track of the original patterns to allow recompilation.
- Since we no longer tokenize in a separate pass, we can't use the
None token to match any unmatched tokens.
Fortunately this can be worked around with sufficiently ugly regex.
See the 'STUFF' rule.
It's a good thing Python has negative lookaheads.
On the flip side, this means we no longer need to explicitly specify
all possible tokens when multiple tokens overlap.
- Unlike stack.py/csv.py, prettyasserts.py needs multi-token lookahead.
Fortunately this has a pretty straightforward solution with the
addition of an optional stack to the Parser class.
We can even have a bit of fun with Python's with statements (though I
do wish with statements could have else clauses, so we wouldn't need
double nesting to catch parser exceptions).
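A rough sketch of the first two tricks (the rules here are made up for
illustration, not prettyasserts.py's actual lexer):

import re

# keep rules as (name, pattern) pairs so the lexer can be patched and
# recompiled later (re.Pattern itself can't be extended)
rules = [
    ('ASSERT', r'\bassert\b'),
    ('LPAREN', r'\('),
    ('RPAREN', r'\)'),
    # 'STUFF' swallows any run of characters that isn't whitespace and
    # isn't the start of another token, thanks to a negative lookahead
    ('STUFF',  r'(?:(?!\bassert\b|[()\s]).)+'),
]

def compile_lexer(rules):
    # the implicit whitespace group means no input text is ever dropped
    return re.compile('|'.join(
            [r'(?P<WS>\s+)']
            + ['(?P<%s>%s)' % (name, pattern) for name, pattern in rules]))

lexer = compile_lexer(rules)
for m in lexer.finditer('foo(); assert(a == b);'):
    print(m.lastgroup, repr(m.group()))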
---
In addition to adopting the new Parser class, I also made sure to
eliminate intermediate string allocation through heavy use of Python's
io.StringIO class.
This, plus Parser's cheap shallow chomp/slice operations, gives
prettyasserts.py a much needed speed boost.
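A minimal sketch of the technique (not the actual emitter):

import io

# accumulate output in an io.StringIO so each write appends a small
# piece, instead of rebuilding an ever-growing string
out = io.StringIO()
for piece in ['assert', '(', 'a', ' ', '==', ' ', 'b', ')', ';', '\n']:
    out.write(piece)
print(out.getvalue(), end='')  # => assert(a == b);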
(Honestly, the original prettyasserts.py was pretty naive, with the
assumption that it wouldn't be the bottleneck during compilation. This
turned out to be wrong.)
These changes cut total compile time in ~half:
                                     real       user       sys
before (time make test-runner -j):   0m56.202s  2m31.853s  0m2.827s
after (time make test-runner -j):    0m26.836s  1m51.213s  0m2.338s
Keep in mind this includes both prettyasserts.py and gcc -Os (and other
Makefile stuff).
This was flipped in b5e264b.
Inferring the type from the right-hand side is tempting here, but the
right-hand side is often a constant, which gets a bit funky in C.
Consider:
assert(lfs->cfg->read != NULL);
gcc: warning: ISO C forbids initialization between function pointer
and ‘void *’ [-Wpedantic]
assert(err < 0ULL);
gcc: warning: comparison of unsigned expression in ‘< 0’ is always
false [-Wtype-limits]
Preferring the left-hand type should hopefully avoid these issues most
of the time.
This seems like a more fitting name now that this script has evolved
into more of a general purpose high-level CSV tool.
Unfortunately this does conflict with the standard csv module in Python,
breaking every script that imports csv (which is most of them).
Fortunately, Python is flexible enough to let us remove the current
directory before imports with a bit of an ugly hack:
# prevent local imports
__import__('sys').path.pop(0)
These scripts are intended to be standalone anyways, so this is probably
a good pattern to adopt.
This matches the style used in C, which is good for consistency:
a_really_long_function_name(
        double_indent_after_first_newline(
            single_indent_nested_newlines))
We were already doing this for multiline control-flow statements, simply
because I'm not sure how else you could indent this without making
things really confusing:
if a_really_long_function_name(
        double_indent_after_first_newline(
            single_indent_nested_newlines)):
    do_the_thing()
This was the only real difference style-wise between the Python code and
C code, so now both should be following roughly the same style (80 cols,
double-indent multiline exprs, prefix multiline binary ops, etc).
Because of course ternary operators would cause problems.
The two problems:
LFS_ASSERT((exists) ? !err : err == LFS_ERR_NOENT);
lfsr_file_sync(&lfs, &file) => (zombie) ? 0 : LFS_ERR_NOENT;
We could work around these with parentheses, but with different assert
parsers floating around this issue is likely to crop up again in the
future.
Fortunately this just required separate "sep" vs "term" rules and a bit
stricter parsing.
The move to lfs_memcmp/lfs_strcmp highlighted an interesting hole in
prettyasserts.py: the lack of support for custom memcmp/strcmp symbols.
Rather than just adding more flags for an increasing number of symbols,
I've added -p/--prefix and -P/--prefix-insensitive to generate relevant
symbols based on a prefix. In littlefs's case, we use -Plfs_, which
matches both lfs_memcmp and LFS_ASSERT (and LFS_MEMCMP and lfs_assert,
but we can ignore those):
$ ./scripts/prettyasserts.py -Plfs_ lfs.t.c -o lfs.t.a.c
Don't worry, you can still provide explicit symbols, but only via
long-form flags. This gets a bit noisy:
$ ./scripts/prettyasserts.py \
        --assert=LFS_ASSERT \
        --unreachable=LFS_UNREACHABLE \
        --memcmp=lfs_memcmp \
        --strcmp=lfs_strcmp \
        lfs.t.c -o lfs.t.a.c
This commit also finally gives prettyasserts.py's symbols actual word
boundaries, instead of the big error-prone hack of sorting by size.
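A hypothetical sketch of the prefix expansion (not the actual
implementation):

import re

# expand a prefix into word-bounded patterns for the symbols we care
# about, with -P/--prefix-insensitive ignoring case
def prefix_patterns(prefix, insensitive=True):
    names = ['assert', 'unreachable', 'memcmp', 'strcmp']
    flags = re.IGNORECASE if insensitive else 0
    # \b gives real word boundaries, instead of relying on sorting
    # symbols by size
    return {name: re.compile(r'\b%s%s\b' % (re.escape(prefix), name), flags)
            for name in names}

patterns = prefix_patterns('lfs_')
assert patterns['memcmp'].search('if (lfs_memcmp(a, b, n) != 0) {')
assert patterns['assert'].search('LFS_ASSERT(err >= 0);')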
- -n/--no-defaults - disable default patterns
The default patterns can be brought back explicitly with:
- -a/--assert - enable assert pattern
- -u/--unreachable - enable unreachable pattern
- -A/--arrow - enable arrow patterns
Technically the default configuration is equivalent to the following:
$ ./scripts/prettyasserts.py \
        -a assert \
        -a __builtin_assert \
        -u unreachable \
        -u __builtin_unreachable \
        -A \
        input.a.c -o output.c
This isn't really useful for littlefs, but may be useful elsewhere.
The main benefit is control over error reporting and avoiding the dive
into stdlib layers when debugging thanks to __builtin_trap().
This changes -p/--pattern -> -a/--assert, and adds -u/--unreachable.
The main star of the show is the adoption of __builtin_trap() for
aborting on assert failure. I discovered this GCC/Clang extension
recently and it integrates much, _much_ better with GDB.
With stdlib's abort(), GDB drops you off in several layers of internal
stdlib functions, which is a pain to navigate out of to get to where the
assert actually happened. With __builtin_trap(), GDB stops immediately,
making debugging quick and easy.
This is great! The pain of debugging should come from understanding
the error, not just getting to it.
---
Also tweaked a few things with the internal print functions to make
reading the generated source easier, though I realize this is a rare
thing to do.
We end up passing intmax_t pointers around without a cast, which
results in a warning; adding the cast fixes it. Since this is in the
printing logic, not the actual comparison, hiding warnings with the
cast is not a concern here.
I also flipped the type we compare with to use the right-hand side. The
pretty-assert code already treats the right-hand side as the "expected"
value (I wonder if this is an English language quirk), so I think it
makes sense to use the right-hand side as the "expected" type.
- Fixed prettyasserts.py parsing when '->' is in expr
- Made prettyasserts.py failures not crash (yay dynamic typing)
- Fixed the initial state of the emubd disk file to match the internal
state in RAM
- Fixed true/false getting changed to True/False in test.py/bench.py
defines
- Fixed accidental substring matching in plot.py's --by comparison
- Fixed a missed LFS_BLOCk_CYCLES in test_superblocks.toml
- Changed test.py/bench.py -v to only show commands being run
Including the test output is still possible with test.py -v -O-, making
the implicit inclusion redundant and noisy.
- Added license comments to bench_runner/test_runner
Based loosely on Linux's perf tool, perfbd.py uses trace output with
backtraces to aggregate and show the block device usage of all functions
in a program, propagating block device operation cost up the backtrace
for each operation.
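A rough sketch of the propagation step (the function names and costs
here are made up, not perfbd.py's actual internals):

import collections

# each trace event carries a cost (bytes read/programmed/erased) and a
# backtrace; charging the cost to every frame lets callers see the cost
# of their callees
def aggregate(events):
    self_cost = collections.defaultdict(int)
    cumulative_cost = collections.defaultdict(int)
    for cost, backtrace in events:
        # backtrace[0] is the function that performed the operation
        self_cost[backtrace[0]] += cost
        # set() avoids double-counting recursive frames
        for frame in set(backtrace):
            cumulative_cost[frame] += cost
    return self_cost, cumulative_cost

self_cost, cumulative_cost = aggregate([
        (16, ['bd_read', 'dir_fetch', 'file_open']),
        (16, ['bd_read', 'dir_fetch', 'file_open']),
        (512, ['bd_prog', 'file_flush', 'file_close'])])
print(cumulative_cost['file_open'])  # => 32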
Combined with --trace-period and --trace-freq for sampling/filtering
trace events, this allows the bench-runner to record the general cost
of block device operations with very little overhead.
Adopted this as the default side-effect of make bench, replacing
cycle-based performance measurements which are less important for
littlefs.
This provides 2 things:
1. perf integration with the bench/test runners - This is a bit tricky
with perf as it doesn't have its own way to combine perf measurements
across multiple processes. perf.py works around this by writing
everything to a zip file, using flock to synchronize (see the sketch
after this list). As a plus, free compression!
2. Parsing and presentation of perf results in a format consistent with
the other CSV-based tools. This actually ran into a surprising number of
issues:
- We need to process raw events to get the information we want. This
ends up being a lot of data (~16MiB at 100Hz uncompressed), so we
parallelize the parsing of each decompressed perf file.
- perf reports raw addresses post-ASLR. It does provide sym+off which
is very useful, but to find the source of static functions we need to
reverse the ASLR by finding the delta that produces the best
symbol<->addr matches.
- This isn't related to perf, but decoding DWARF line numbers is
really complicated. You basically need to write a tiny VM.
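A rough sketch of the flock+zip idea in item 1 (not perf.py's actual
code, the function here is hypothetical):

import fcntl
import zipfile

# each test/bench process appends its perf results to a shared zip,
# with flock serializing the appends so parallel runners don't corrupt
# the archive (and the zip gives us compression for free)
def append_record(zip_path, name, data):
    with open(zip_path, 'ab') as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            with zipfile.ZipFile(zip_path, 'a',
                    compression=zipfile.ZIP_DEFLATED) as z:
                z.writestr(name, data)
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)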
This also turns on perf measurement by default for the bench-runner, but at a
low frequency (100 Hz). This can be decreased or removed in the future
if it causes any slowdown.
- Added the littlefs license note to the scripts.
- Adopted parse_intermixed_args everywhere for more consistent arg
handling (see the sketch after this list).
- Removed argparse's implicit help text formatting as it does not
work with parse_intermixed_args and breaks sometimes.
- Used string concatenation for argparse help text everywhere;
backslashed line continuations only work with argparse because it
strips redundant whitespace.
- Consistent argparse formatting.
- Consistent openio mode handling.
- Consistent color argument handling.
- Adopted functools.lru_cache in tracebd.py.
- Moved unicode printing behind --subscripts in tracebd.py, making all
scripts ASCII by default.
- Renamed pretty_asserts.py -> prettyasserts.py.
- Renamed struct.py -> struct_.py, the original name conflicts with
Python's built-in struct module in horrible ways.
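A rough sketch of these argparse conventions (the flags and help text
are made up, not from a specific script):

import argparse
import sys

def main():
    parser = argparse.ArgumentParser(
            description="Example of the shared argparse conventions.")
    parser.add_argument(
            'csv_paths', nargs='*',
            help="Input *.csv files.")
    parser.add_argument(
            '-v', '--verbose', action='store_true',
            # help text uses string concatenation, not backslashed line
            # continuations, so it doesn't rely on argparse stripping
            # redundant whitespace
            help="Show commands being run. "
                "Extra output is left to other flags.")
    # parse_intermixed_args lets flags and positional args be freely mixed
    args = parser.parse_intermixed_args()
    print(args)
    return 0

if __name__ == "__main__":
    sys.exit(main())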