- Added support for negative numbers in the leb16 encoding with an
optional 'w' prefix.
- Changed prettyasserts.py rule to .a.c => .c, allowing other .a.c files
in the future.
- Updated .gitignore with missing generated files (tags, .csv).
- Removed suite-namespacing of test symbols; these are no longer needed.
- Changed test define overrides to have higher priority than explicit
defines encoded in test ids, so that:
  ./runners/bench_runner bench_dir_open:0f1g12gg2b8c8dgg4e0 -DREAD_SIZE=16
behaves as expected. Without this it's not easy to experiment with known
failing test cases.
- Fixed issue where the -b flag ignored explicit test/bench ids.
When you add a function to every benchmark suite, you know it should
probably be provided by the benchmark runner itself. That being said,
randomness in tests/benchmarks is a bit tricky because it needs to be
strictly controlled and reproducible.
No global state is used, allowing tests/benches to maintain multiple
randomness streams, which can be useful for checking results during a run.
There's an argument for having global prng state in that the prng could
be preserved across power-loss, but I have yet to see a use for this,
and it would add a significant requirement to any future test/bench runner.
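To make the stream idea concrete, here's a minimal sketch in Python (the
runner itself is C, and the prng and names here are illustrative, not the
runner's actual API). Because each stream is explicitly seeded, a bench can
create one stream to generate data and an identically-seeded second stream
to verify that data later in the same run:
  class Prng:
      # xorshift32; any small deterministic prng works here
      def __init__(self, seed):
          self.state = seed or 1
      def next(self):
          x = self.state
          x ^= (x << 13) & 0xffffffff
          x ^= x >> 17
          x ^= (x << 5) & 0xffffffff
          self.state = x
          return x

  # two streams seeded identically stay in lockstep, so data produced from
  # one stream can be re-derived and checked from the other mid-run
  write_prng = Prng(42)
  check_prng = Prng(42)
  data = [write_prng.next() & 0xff for _ in range(16)]
  assert data == [check_prng.next() & 0xff for _ in range(16)]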
Two recently introduced flags, -fcallgraph-info=su for stack analysis and
-ftrack-macro-expansions=0 for cleaner prettyasserts.py warnings, are
unfortunately not supported by Clang.
The override vars in the Makefile meant it wasn't actually possible to
remove these flags for Clang testing, so this commit changes those vars
to normal, non-overriding vars. This means `make CFLAGS=-Werror` and
`CFLAGS=-Werror make` behave _very_ differently: command-line variables
replace the Makefile's own assignments, while environment variables are
themselves overridden by the Makefile's assignments. This is just an
unfortunate quirk of make that needs to be worked around.
- Moved to Ubuntu 22.04
This notably means we no longer have to bend over backwards to
install GCC 10!
- Changed shell in gha to include the verbose/undefined flags, making
debugging gha a bit less painful
- Adopted the new test.py/test_runners framework, which means no more
heavy recompilation for different configurations. This reduces the test job
runtime from >1 hour to ~15 minutes, while increasing the number of
geometries we are testing.
- Added exhaustive powerloss testing. Because of time constraints this is
limited to at most 1pls for general tests and 2pls for a subset of useful
tests.
- Limited coverage measurements to `make test`
Originally I tried to maximize coverage numbers by including coverage
from every possible source, including the more elaborate CI jobs which
provide an extra level of fuzzing.
But this missed the purpose of coverage measurements, which is to find
areas where test cases can be improved. We don't want to improve coverage
by just shoving more fuzz tests into CI, we want to improve coverage by
adding specific, intentional test cases that, if they fail, highlight
the reason for the failure.
With this perspective, maximizing coverage measurement in CI is
counter-productive. This change means the reported coverage will always be
less than the actual CI coverage, but it acts as a more useful metric.
This also simplifies coverage collection, so that's an extra plus.
- Added benchmarks to CI
Note this doesn't suffer from inconsistent CPU performance because our
benchmarks are based on purely simulated read/prog/erase measurements.
- Updated the generated markdown table to include line+branch coverage
info and benchmark results.
- Fixed prettyasserts.py parsing when '->' is in expr
- Made prettyasserts.py failures not crash (yay dynamic typing)
- Fixed the initial state of the emubd disk file to match the internal
state in RAM
- Fixed true/false getting changed to True/False in test.py/bench.py
defines
- Fixed accidental substring matching in plot.py's --by comparison
- Fixed a missed LFS_BLOCK_CYCLES in test_superblocks.toml
- Changed test.py/bench.py -v to only show commands being run
Including the test output is still possible with test.py -v -O-, making
the implicit inclusion redundant and noisy.
- Added license comments to bench_runner/test_runner
Based loosely on Linux's perf tool, perfbd.py uses trace output with
backtraces to aggregate and show the block device usage of all functions
in a program, propagating block device operation cost up the backtrace
for each operation (a rough sketch of this follows below).
Combined with --trace-period and --trace-freq for sampling/filtering
trace events, this allows the bench-runner to record the general cost of
block device operations with very little overhead.
Adopted this as the default side-effect of make bench, replacing
cycle-based performance measurements which are less important for
littlefs.
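The core aggregation step is simple: charge each traced block device
operation to every frame in its backtrace, so callers accumulate the cost
of everything they (transitively) call. A rough sketch of that propagation,
using a made-up trace-record shape (perfbd.py's actual parsing and output
are more involved):
  from collections import defaultdict

  # hypothetical trace records: (op, byte count, backtrace), with the
  # backtrace ordered from innermost to outermost frame
  trace = [
      ('read', 16,  ['lfs_bd_read', 'lfs_dir_fetch', 'lfs_mount']),
      ('prog', 512, ['lfs_bd_prog', 'lfs_file_flush', 'lfs_file_close']),
      ('read', 16,  ['lfs_bd_read', 'lfs_dir_fetch', 'lfs_dir_find']),
  ]

  costs = defaultdict(lambda: defaultdict(int))
  for op, size, backtrace in trace:
      # propagate the operation's cost up the backtrace so each function
      # accumulates the block device usage of everything beneath it
      for frame in backtrace:
          costs[frame][op] += size

  for frame, ops in sorted(costs.items()):
      print(frame, dict(ops))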
- Changed multi-field flags to action=append instead of comma-separated.
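For reference, the difference in argparse terms, using a hypothetical
--define flag to stand in for the affected flags:
  import argparse
  parser = argparse.ArgumentParser()
  # before: one comma-separated string the script had to split itself,
  #   e.g. --define BLOCK_SIZE=512,READ_SIZE=16
  # after: action='append' collects each repeated flag into a list,
  #   e.g. --define BLOCK_SIZE=512 --define READ_SIZE=16
  parser.add_argument('-D', '--define', action='append', default=[])
  args = parser.parse_args(['-DBLOCK_SIZE=512', '-DREAD_SIZE=16'])
  print(args.define)  # ['BLOCK_SIZE=512', 'READ_SIZE=16']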
- Dropped short-names for geometries/powerlosses
- Renamed -Pexponential -> -Plog
- Allowed omitting the 0 for -W0/-H0/-n0 and made -j0 consistent
- Better handling of --xlim/--ylim
Without this, redundant permutations can easily happen with runtime
overrides because the different define layers aren't aware of each
other. This causes problems for collecting benchmark results.
These are really just different flavors of test.py and test_runner.c
without support for power-loss testing, but with support for measuring
the cumulative number of bytes read, programmed, and erased.
Note that the existing define parameterization should work perfectly
fine for running benchmarks across various dimensions:
  ./scripts/bench.py \
      runners/bench_runner \
      bench_file_read \
      -gnor \
      -DSIZE='range(0,131072,1024)'
Also added a couple basic benchmarks as a starting point.