So:
$ ./scripts/code.py lfs.o -o- -q
Becomes:
$ ./scripts/code.py lfs.o -o-
The original intention of -o/-O _not_ being exclusive (i.e. the table
is still rendered unless disabled with -q/--quiet) was to allow results
to be written to csv files and rendered as tables in a single pass.
But this was never useful. Heck, we're not even using this in our
Makefile right now because it would make the rule dependencies more
complicated than it's worth. Even for long-running result scripts
(perf.py, perfbd.py, etc), most of the work is building that csv file;
the cost of rendering a table in a second pass is negligible.
In every case I've used -o/-O, I've also wanted -q/--quiet, and almost
always forget this on the first run. So might as well make the expected
behavior the actual behavior.
---
As a plus, this lets us simplify some of the scripts a bit, by replacing
visibility filters with -o/-O-dependent by-fields.
This makes it so scripts with complex fields will still output all
fields to output csv/json files, while only showing a user-friendly
subset unless -f/--field is explicitly provided.
While internal fields are often too much information to show by default,
csv/json files are expected to go to other scripts, not humans. So more
information is more useful up until you actually hit a performance
bottleneck.
And if you _do_ somehow manage to hit a performance bottleneck, you can
always limit the output with explicit -f/--field flags.
With this, we apply the same result modifiers (exprs/defines/hot/etc) to
both the input results and -d/--diff results. So if both start with the
same format, diffing/hotifying/etc should work as expected.
This is really the only way I can see -d/--diff results working with
result modifiers in a way that makes sense.
The downside of this is that you can't save results with some complex
operation applied, and then diff while applying the same operation,
since most of the newer operations (hotify) are _not_ idempotent.
Fortunately the two alternatives are not unreasonable:
1. Save results _without_ the operation applied, since the operation
will be applied to both the input and diff results.
This is a bit asymmetric, but should work.
2. Apply the operation to the input and then pipe to csv.py for diffing.
This used to "just work" when we did _not_ apply operations to output
csv/json, but this was really just equivalent to 1.
I think the moral of the story is you can solve any problem with enough
chained csv.py calls.
It's just too unintuitive to filter after exprs.
Note this is consistent with how exprs/mods are evaluated. Exprs/mods
can't reference other exprs/mods because csv.py is only single-pass, so
allowing defines to reference exprs/mods would be surprising.
And the solution to needing these sort of post-expr/mod references is
the same for defines: You can always chain multiple csv.py calls.
The reason defines were changed to evaluate after expr eval was that
this seemed inconsistent with other result scripts, but this is not
actually the case. Other result scripts simply don't have exprs/mods, so
filtering in fold is the same as filtering during collection. Note that
even in fold, filtering is done _before_ the actual fold/sum operation.
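For example, assuming a hypothetical amor expr built from bench results
(cycles/amor are made-up fields here, meas/n are not), filtering on the
expr's output just takes a second csv.py pass:

$ ./scripts/csv.py bench.csv -bmeas -famor='cycles/n' -o amor.csv -q
$ ./scripts/csv.py amor.csv -bmeas -famor -Damor=0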
---
Also fixed a recursive-define regression when folding. Counter-
intuitively, we _don't_ want to recursively apply define filters. If we
do, the results will just end up too confusing to be useful.
This should either have checked diff_result==None, or we should be
mapping diff_result=None => diff_result_=None. To be safe I've done
both.
This was a nasty typo and I only noticed because ctx.py stopped printing
"cycle detected" for our linked-lists (which are expected to be cyclic).
It felt weird that adding hidden fields required changing existing
flags unrelated to the field you actually want to affect, and the
upper/lower flag thing seems to work well for -s/-S sooo...
- Replaced -l/--label with -B/--hidden-by for by fields that can
be hidden from the table renderer.
- Added -F/--hidden-field as a similar thing for field fields.
- Better integrated -i/--enumerate into by fields, now these actually
maintain relative order. And of course added a matching
-I/--hidden-enumerate flag.
The only downside is this is eating a lot of flag names... But one of
the nice things about limiting this complexity to csv.py is it avoids
these flag names cluttering up the other result scripts.
---
The -F/--hidden-field flag I'm not so sure about, since field exprs
can't really reference each other (single pass). But it does provide
symmetry with -B/--hidden-by, and reserves the name in case hidden field
fields are more useful in the future.
Unfortunately it _is_ annoyingly inconsistent with other hidden fields
(-S/--sort, -D/--define, etc) in that it does end up in output csvs...
But this script is already feeling way over-engineered as is.
Kind of a complicated corner case, but this shows up if you try to sort
by fields as numbers and not as strings. In theory this is possible by
creating a hidden sort field with a typed expr:
$ ./scripts/csv.py test.csv -bi -bfunction -Si=i
But we weren't typechecking sort fields that already exist in the by
fields, since these are usually strings.
The fix is to make sure all exprs are in the typechecked fields, even
if they are already in by fields. There's no real cost to this.
---
Note this version does _not_ typecheck i, and sorts by string:
$ ./scripts/csv.py test.csv -bi -bfunction -Si
This raises the question, should we always sort by string by default?
I don't think so. It's easy to miss the difference, and a typecheck
error is a lot safer than incorrect sorting.
So this will sort by number, with i as a hidden field:
$ ./scripts/csv.py test.csv -bfunction -Si
If you want to sort by string with a hidden field, this is still
possible with -l/--label:
$ ./scripts/csv.py test.csv -bi -lfunction -Si
There's an ordering issue with hotifying and folding when we have
multiple foldable results with children. This was hard to notice since
most of the recursive scripts have unique results, but it _is_ an issue
for perf.py/perfbd.py, which rely on result folding to merge samples.
The fix is to fold _before_ hotifying.
We could fold multiple times to avoid changing the behavior of the
result scripts, but instead I've just moved the folding in the table
renderer up into the relevant main functions. This means 1. we only fold
once, and 2. folding affects outputted csv/json files.
I'm a bit on the fence about this behavior change, but it is a bit more
consistent with how -r/--hot, -z/--depth, etc, affect both table and
csv/json results.
Maybe we should move towards the table renderer always reflecting the
csv/json results? Most csv/json usage is with -q/--quiet anyways...
---
This does create a new risk in that the table renderer can hide results
if they aren't folded first.
To hopefully avoid this I've added an assert in the table renderer if it
notices results being hidden.
I guess in addition to its other utilities, csv.py is now also turning
into a sort of man database for some of the more complicated APIs in the
scripts:
./csv.py --help
./csv.py --help-exprs
./csv.py --help-mods
It's a bit minimal, but better than nothing.
Also dropped the %c modifier because this never actually worked.
This gives csv.py access to a hidden feature in our table renderer used
by some of the other scripts: fields that affect by-field grouping, but
aren't actually printed.
For example, this prevents summing same-named functions in different
files, but only shows the function name in the table render:
$ ./scripts/csv.py lfs.code.csv -bfile -bfunction -lfunction
function            size
lfs_alloc            398
lfs_alloc_discard     31
lfs_alloc_findfree    77
...
This is especially useful when enumerating results. For example, this
prevents any summing without extra table noise:
$ ./scripts/csv.py lfs.code.csv -i -bfunction -fsize -lfunction
function            size
lfs_alloc            398
lfs_alloc_discard     31
lfs_alloc_findfree    77
...
I also tweaked the -b/--by field defaults a bit to better account for
enumerate/label fields.
This removes most of the special behavior around how -r/--hot and
-i/--enumerate interact. This does mean -r/--hot risks folding results
if -i/--enumerate is not specified, but this is _technically_ a valid
operation.
For most of the recursive result scripts, I've replaced the "i" field
with separate "z" and "i" fields for depth and field number, which I
think is a bit more informative/useful.
I've also added a default-hidden "off" field to structs.py/ctx.py, since
we have that info available. I considered replacing "i" with this, but
decided against it since non-zero offsets for union members would risk
being confusing/mistake prone.
Guh
This may have been more work than I expected. The goal was to allow
passing recursive results (callgraph info, structs, etc) between
scripts, which is simply not possible with csv files.
Unfortunately, this raised a number of questions: What happens if a
script receives recursive results? -d/--diff with recursive results?
How to prevent folding of ordered results (structs, hot, etc) in piped
scripts? etc.
And I ended up with a significant rewrite of most of the result scripts'
internals.
Key changes:
- Most result scripts now support -O/--output-json in addition to
-o/--json, with -O/--output-json including any recursive results in
the "children" field.
- Most result scripts now support both csv and json as input to relevant
flags: -u/--use, -d/--diff, -p/--percent. This is accomplished by
looking for a '[' as the first character to decide if an input file is
json or csv (roughly sketched after this list).
Technically this breaks if your json has leading whitespace, but why
would you ever keep whitespace around in json? The human-editability
of json was already ruined the moment comments were disallowed.
- csv.py requires all fields to be explicitly defined, so added
-i/--enumerate, -Z/--children, and -N/--notes. At least we can provide
some reasonable defaults so you shouldn't usually need to type out the
whole field.
- Notably, the rendering scripts (plot.py, treemapd3.py, etc) and
test/bench scripts do _not_ support json. csv.py can always convert
to/from json when needed.
- The table renderer now supports diffing recursive results, which is
nice for seeing how the hot path changed in stack.py/perf.py/etc.
- Moved the -r/--hot logic up into main, so it also affects the
outputted results. Note it is impossible for -z/--depth to _not_
affect the outputted results.
- We now sort in one pass, which is in theory more efficient.
- Renamed -t/--hot -> -r/--hot and -R/--reverse-hot, matching -s/-S.
- Fixed an issue with -S/--reverse-sort where only the short form was
actually reversed (I misunderstood what argparse passes to Action
classes).
- csv.py now supports json input/output, which is funny.
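The json-vs-csv sniffing is roughly this (a sketch, not the exact code):

    import csv, json

    def read_results(path):
        with open(path) as f:
            # json results are a list of objects, so the first
            # character is '['; anything else we treat as csv
            if f.read(1) == '[':
                f.seek(0)
                return json.load(f)
            f.seek(0)
            return list(csv.DictReader(f))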
In addition to providing more functionality for creating -b/--by fields,
this lets us remove strings from the expr parser. Strings had no
well-defined operations and could best be described as an "ugly wart".
Maybe we'll reintroduce string exprs in the future, but for now csv.py's
-f/--field fields will be limited to numeric values.
As an extra plus, no more excessive quoting when injecting new -b/--by
fields.
---
This also fixed sorting on non-field fields, which was apparently
broken. Or at least mostly useless since it was defaulting to string
sorting.
Apparently __builtins__ is a CPython implementation detail, and behaves
differently when executed vs imported???
import builtins is the correct way to go about this.
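For reference:

    import builtins

    # __builtins__ is the builtins module when run as __main__, but
    # (usually) the module's __dict__ when imported; the builtins
    # module itself is always the same either way
    assert builtins.abs is abs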
Moved local import hack behind if __name__ == "__main__"
These scripts aren't really intended to be used as python libraries.
Still, it's useful to import them for debugging and to get access to
their juicy internals.
Instead of trying to be too clever, this just adds a bunch of small
flags to control parts of table rendering:
- --no-header - Don't show the header.
- --small-header - Don't show by field names.
- --no-total - Don't show the total.
- -Q/--small-table - Equivalent to --small-header + --no-total.
Note that -Q/--small-table replaces the previous -Y/--summary +
-c/--compare hack, while also allowing a similar table style for
non-compare results.
This ended up being a pretty in-depth rework of prettyasserts.py to
adopt the shared Parser class. But now prettyasserts.py should be both
more robust and faster.
The tricky parts:
- The Parser class eagerly munches whitespace by default. This is
usually a good thing, but for prettyasserts.py we need to keep track
of the whitespace somehow in order to write it to the output file.
The solution here is a little bit hacky. Instead of complicating the
Parser class, we implicitly add a regex group for whitespace when
compiling our lexer (roughly sketched after this list).
Unfortunately this does make last-minute patching of the lexer a bit
messy (for things like -p/--prefix, etc), thanks to Python's
re.Pattern class not being extendable. To work around this, the Lexer
class keeps track of the original patterns to allow recompilation.
- Since we no longer tokenize in a separate pass, we can't use the
None token to match any unmatched tokens.
Fortunately this can be worked around with sufficiently ugly regex.
See the 'STUFF' rule.
It's a good thing Python has negative lookaheads.
On the flip side, this means we no longer need to explicitly specify
all possible tokens when multiple tokens overlap.
- Unlike stack.py/csv.py, prettyasserts.py needs multi-token lookahead.
Fortunately this has a pretty straightforward solution with the
addition of an optional stack to the Parser class.
We can even have a bit of fun with Python's with statements (though I
do wish with statements could have else clauses, so we wouldn't need
double nesting to catch parser exceptions).
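Roughly, the lexer side of this looks something like the following
sketch (with made-up rules, not the actual implementation):

    import re

    class Lexer:
        def __init__(self, rules):
            # keep the original patterns around so rules can be
            # patched (-p/--prefix, etc) and the lexer recompiled
            self.rules = list(rules)
            self.recompile()

        def recompile(self):
            # implicitly add a whitespace group, so whitespace
            # survives lexing and can be written back out verbatim
            rules = self.rules + [('WS', r'\s+')]
            self.pattern = re.compile('|'.join(
                '(?P<%s>%s)' % (name, p) for name, p in rules))

        def lex(self, text):
            for m in self.pattern.finditer(text):
                yield m.lastgroup, m.group()

    # a negative lookahead plays the role of the 'STUFF' rule,
    # matching anything the other rules don't
    lexer = Lexer([
        ('ASSERT', r'\bassert\b'),
        ('STUFF',  r'(?:(?!\bassert\b|\s).)+'),
    ])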
---
In addition to adopting the new Parser class, I also made sure to
eliminate intermediate string allocation through heavy use of Python's
io.StringIO class.
This, plus Parser's cheap shallow chomp/slice operations, gives
prettyasserts.py a much needed speed boost.
(Honestly, the original prettyasserts.py was pretty naive, with the
assumption that it wouldn't be the bottleneck during compilation. This
turned out to be wrong.)
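The io.StringIO pattern is the usual one (a trivial sketch):

    import io

    # repeated str += str copies the whole string every time (O(n^2));
    # io.StringIO appends in amortized O(1) and joins once at the end
    out = io.StringIO()
    for chunk in ['assert', '(', 'a', ' == ', 'b', ')', ';']:
        out.write(chunk)
    output = out.getvalue()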
These changes cut total compile time in ~half:
                                          real       user      sys
before (time make test-runner -j):   0m56.202s  2m31.853s  0m2.827s
after  (time make test-runner -j):   0m26.836s  1m51.213s  0m2.338s
Keep in mind this includes both prettyasserts.py and gcc -Os (and other
Makefile stuff).
It's a bit funny: the motivation for a new Parser class came from the
success of simple regex + space munching in csv.py, but adopting Parser
in csv.py makes sense for a couple reasons:
- Consistency and better code sharing with other scripts that need to
parse things (stack.py, prettyasserts.py?).
- Should be more efficient, since we avoid copying the entire string
every time we chomp/slice.
Though I don't think this really matters for the size of csv.py's
exprs...
- No need to write every regex twice! Since Parser remembers the last
match.
Now that cycle detection is always done at result collection time, we
don't need this in the table renderer itself.
This had a tendency to cause problems for non-function scripts (ctx.py,
structs.py).
God, I wish Python had an OrderedSet.
This is a fix for duplicate "cycle detected" notes when using -t/--hot.
This mix of merging both _hot_notes and _notes in the HotResult class is
tricky when the underlying container is a list.
The order is unlikely to be guaranteed anyways, when different results
with different notes are folded.
And if we ever want more control over the order of notes in result
scripts we can always change this back later.
- Error on no/insufficient files.
Instead of just returning no results. This is more useful when
debugging complicated bash scripts.
- Use elf magic to allow any file order in perfbd.py/stack.py.
This was already implemented in stack.py, now also adopted in
perfbd.py.
Elf files always start with the magic string "\x7fELF", so we can use
this to figure out the types of input files without needing to rely on
argument order (sketched after this list).
This is just one less thing to worry about when invoking these
scripts.
- Prevented childrenof memoization from hiding the source of a
detected cycle.
- Deduplicated multiple cycle detected notes.
- Fixed note rendering when last column does not have a notes list.
Currently this only happens when entry is None (no results).
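The magic sniff itself is tiny, something like this (hypothetical
helper names):

    def is_elf(path):
        # elf files always start with the magic "\x7fELF"
        with open(path, 'rb') as f:
            return f.read(4) == b'\x7fELF'

    def split_inputs(paths):
        # any argument order works, we just sniff each file
        objs = [p for p in paths if is_elf(p)]
        others = [p for p in paths if not is_elf(p)]
        return objs, others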
Without this, naming a column i/children/notes in csv.py could cause
things to break. Unlikely for children/notes, but very likely for i,
especially when benchmarking.
Unfortunately namedtuple makes this tricky. I _want_ to just rename
these to _i/_children/_notes and call the problem solved, but namedtuple
reserves all underscore-prefixed fields for its own use.
As a workaround, the table renderer now looks for _i/_children/_notes at
the _class_ level, as an optional name of which namedtuple field to use.
This way Result types can stay lightweight namedtuples while including
extra table rendering info without risk of conflicts.
This also makes the HotResult type a bit more funky, but that's not a
big deal.
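Roughly (a sketch, field names hypothetical):

    from collections import namedtuple

    # namedtuple rejects _-prefixed *field* names, but plain class
    # attributes are fine, so the renderer looks the real field name
    # up on the class and falls back to no children
    class HotResult(namedtuple('HotResult',
            ['function', 'size', 'children_'])):
        __slots__ = ()
        _children = 'children_'

    def children_of(result):
        name = getattr(type(result), '_children', None)
        return getattr(result, name) if name is not None else []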
This extends the recursive part of the table renderer to sort children
by the optional "i" field, if available.
Note this only affects children entries. The top-level entries are
strictly ordered by the relevant "by" fields. I just haven't seen a use
case for this yet, and not sorting "i" at the top-level reduces the
number of things that can go wrong for scripts without children.
---
This also rewrites -t/--hot to take advantage of children ordering by
injecting a totally-not-hacky HotResult subclass.
Now -t/--hot should be strictly ordered by the call depth! Though note
entries that share "by" fields are still merged...
This also gives us a way to introduce the "cycle detected" note and
respect -z/--depth, so overall a big improvement for -t/--hot.
We don't really need padding for the notes on the last column of tables,
which is where row-level notes end up.
This may seem minor, but not padding here avoids quite a bit of
unnecessary line wrapping in small terminals.
- Adopted higher-level collect data structures:
- high-level DwarfEntry/DwarfInfo class
- high-level SymInfo class
- high-level LineInfo class
Note these had to be moved out of function scope due to pickling
issues in perf.py/perfbd.py. These were only function-local to
minimize scope leak, so this fortunately was an easy change.
- Adopted better list-default patterns in Result types:
def __new__(..., children=None):
return Result(..., children if children is not None else [])
A classic python footgun (see the sketch after this list).
- Adopted notes rendering, though this is only used by ctx.py at the
moment.
- Reverted to sorting children entries, for now.
Unfortunately there's no easy way to sort the result entries in
perf.py/perfbd.py before folding. Folding is going to make a mess
of more complicated children anyways, so another solution is
needed...
And some other shared miscellany.
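For reference, the footgun pattern (a minimal sketch):

    from collections import namedtuple

    class Result(namedtuple('Result', ['name', 'size', 'children'])):
        __slots__ = ()
        # children=[] as a default would be evaluated once and shared
        # by every Result, so appending to one result's children would
        # mutate all of them; None is the safe sentinel
        def __new__(cls, name, size, children=None):
            return super().__new__(cls, name, size,
                children if children is not None else [])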
$ ./scripts/csv.py lfs.code.csv -bfunction -fsize -S
... blablabla ...
TypeError: cannot unpack non-iterable NoneType object
The issue was argparse's const defaults bypassing the type callback, so
the sort field ends up with None when it expects a tuple (well
technically a tuple tuple).
This is only an issue for csv.py because csv.py's sort fields can
contain exprs.
The fact that our scripts' table renderer was slightly different for
recursive scripts (stack.py, perf.py) and non-recursive scripts
(code.py, structs.py) was a ticking time bomb, one innocent edit away
from breaking half the scripts.
This makes the table renderer consistent across all scripts, allowing for
easy copy-pasting when editing at the cost of some unused code in
scripts.
One hiccup with this though is the difference in cycle detection
behavior between scripts:
- stack.py:
    lfsr_bd_sync
    '-> lfsr_bd_prog
        '-> lfsr_bd_sync <-- cycle!
- structs.py:
    lfsr_bshrub_t
    '-> u
        '-> bsprout
            '-> u <-- not a cycle!
To solve this the table renderer now accepts a simple detect_cycles
flag, which can be set per-script.
This makes the -p/--percent flag a bit more consistent with -d/--diff
and -c/--compare, both of which change the printing strategy based on
additional context.
This showcases the sort of high-level result printing where -c/--compare
is useful:
$ make summary-diff
        code            data       stack          structs
BEFORE  57057           0          3056           1476
AFTER   68864 (+20.7%)  0 (+0.0%)  3744 (+22.5%)  1520 (+3.0%)
There was one hiccup though: how to hide the name of the first field.
It may seem minor, but the missing field name really does help
readability when you're staring at a wall of CLI output.
It's a bit of a hack, but this can now be controlled with -Y/--summary,
which has the sole purpose of disabling the first field name if mixed
with -c/--compare.
-c/--compare is already a weird case for the summary row anyways...
Example:
$ ./scripts/csv.py lfs.code.csv \
        -bfunction -fsize \
        -clfsr_rbyd_appendrattr
function                           size
lfsr_rbyd_appendrattr              3598
lfsr_mdir_commit                   5176 (+43.9%)
lfsr_btree_commit__.constprop.0    3955 (+9.9%)
lfsr_file_flush_                   2729 (-24.2%)
lfsr_file_carve                    2503 (-30.4%)
lfsr_mountinited                   2357 (-34.5%)
... snip ...
I don't think this is immediately useful for our code/stack/etc
measurement scripts, but it's certainly useful in csv.py for comparing
results at a high level.
And by useful I mean it replaces a 40-line long awk script that has
outgrown its original purpose...
This may be a (very javascript-esque) mistake, but implicit conversion
to strings is useful when mixing fields and strings in -b/--by field
exprs:
$ ./scripts/csv.py input.csv -bcase='"test"+n' -fn
Note that this now (mostly) matches the behavior when the n field is
unspecified:
$ ./scripts/csv.py input.csv -bcase='"test"+n'
Er... well... mostly. When we specify n as a field, csv.py does
typecheck and parse the field, which ends up sort of canonicalizing the
field, unlike omitting n which leaves n as a string... But at least if
the field was already canonicalized the behavior matches...
It may also be better to force all -b/--by expr inputs to strings first,
but this would require us to know which expr came from where. It also
wouldn't solve the canonicalization problem.
So in:
$ ./scripts/csv.py input.csv -fa='b?c:d'
c and d must have matching types or else an error is raised.
This requires an explicit definition for the ternary operator since it's
a special case in that the type of b does not matter.
Compare to a 3-arg max call:
$ ./scripts/csv.py input.csv -fa='int(b)?float(c):float(d)' # ok
$ ./scripts/csv.py input.csv -fa='max(int(b),float(c),float(d))' # error
The main benefit of this is allowing the sort order to be controlled by
fields that don't necessarily need to be printed:
./scripts/csv.py input.csv -ba -sb -fc
By default this sorts lexicographically, but this can be changed by
providing an expression:
./scripts/csv.py input.csv -ba -sb='int(b)' -fc
Note that sort fields do _not_ change inferred by fields, this allows
sort flags to be added to existing queries without changing the results
too much:
./scripts/csv.py input.csv -fc
./scripts/csv.py input.csv -sb -fc
The issue here is quite nuanced, but becomes a problem when you want to
both:
1. Filter results by a given field: -Dmeas=write
2. Output a new value for that field: -bmeas='"write+amor"'
If you didn't guess from the example, this comes up often in scripts
dealing with bench results, where we often find ourselves wanting to
append/merge modified results based on the raw measurements.
Fortunately the fix is relatively easy: We already filter by defines
in our collect function, so we don't really need to filter by defines
again when folding.
Folding occurs after expr evaluation, but collect occurs before, so this
limits filtering to the input fields _before_ expr evaluation.
This does mean we no longer filter on the output of exprs, but I don't
know if such behavior was ever intentionally desired. Worst case it can
be emulated by stacking multiple csv.py calls, which may be annoying,
but is at least well-intentioned and well-defined.
---
Note that the other result scripts, code.py, stack.py, etc, are a bit
different in that they rely on fold-time filtering for filtering
generated results. This may deserve a refactor at some point, but since
these scripts don't also evaluate exprs, it's not an immediate problem.
This may make some mathematician mad, but these are informative scripts.
Returning +-inf is much more useful than erroring when dealing with
several hundred rows of results.
And hey, if it's good enough for IEEE 754, it's good enough for us :)
Also fixed a division operator mismatch in RFrac that was causing
problems.
Not sure if this is an old habit from Python 2, or just because it looks
nicer next to __mul__, __mod__, etc, but in Python 3 this should be
__truediv__ (or __floordiv__), not __div__.
The only reason RFloats reused RInt's operator definitions was to save a
few keystrokes. But this dependency is unnecessary and will get in the
way if we ever add a script that only uses RFloats.
So now the available field exprs can be queried with --help-exprs:
$ ./scripts/csv.py --help-exprs
uops:
  +a     Non-negation
  -a     Negation
  !a     1 if a is zero, otherwise 0
bops:
  a * b  Multiplication
  a / b  Division
... snip ...
I was a bit torn on if this should be named --help-exprs or --list-exprs
to match test.py/bench.py, but decided on --help-exprs since it's
querying something "inside" the script, whereas test.py/bench.py's
--list-cases is querying something "outside" the script.
Internally this uses Python's docstrings, which is a nice language
feature to lean on.
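Leaning on docstrings looks roughly like this (hypothetical layout):

    def op_not(a):
        '!a     1 if a is zero, otherwise 0'
        return 1 if a == 0 else 0

    def op_neg(a):
        '-a     Negation'
        return -a

    # --help-exprs just walks the known ops and prints their docstrings
    def help_exprs(ops):
        for op in ops:
            print('  %s' % op.__doc__)

    help_exprs([op_neg, op_not])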
Mainly for consistency with int operators, though it's unclear if either
mod is useful in the context of csv.py and related scripts.
This may be worth reverting at some point.
Now, by default, an error is raised if any branch of an expr has an
inconsistent type.
This isn't always what we want. The ternary operator, for example,
doesn't really care if the condition's type doesn't match the branch
arms. But it's a good default, and special cases can always override the
type function with their own explicit typechecking.
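The ternary's override might look something like this (a sketch with
made-up class shapes):

    class Lit:
        def __init__(self, v):
            self.v = v
        def type(self):
            return type(self.v)

    class IfElse:
        def __init__(self, cond, a, b):
            self.cond, self.a, self.b = cond, a, b

        def type(self):
            # default typechecking would require all subexprs to
            # match; the ternary only needs the two branches to agree,
            # the condition is typechecked but its type is ignored
            self.cond.type()
            a, b = self.a.type(), self.b.type()
            if a != b:
                raise TypeError('mismatched types %r and %r' % (a, b))
            return a

    IfElse(Lit(1), Lit(2.0), Lit(3.0)).type()   # ok, float
    IfElse(Lit(1), Lit(2),   Lit(3.0)).type()   # error!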
There's a bit of a push and pull when it comes to typechecking CSV
fields in our scripts. On one hand, we want the flexibility to accept
scripts with various mismatched fields, on the other hand, we _really_
want to know if a typo caused a field to be quietly replaced with all
zeros...
I _think_ it's safe to say: if no fields across _all_ input files match
a requested field, we should error.
But I may end up wrong about this. Worst case we can always revert in
the future, maybe with an explicit flag to ignore missing fields.
- Updated the example in the header comment.
The previous example was way old, from back when fields were separated
by commas! Introduced in 20ec0be87 in 2022 according to git blame.
- Renamed a couple internal RExpr classes:
- Not -> NotNot
- And -> AndAnd
- Or -> OrOr
- Ife -> IfElse
This is mainly to leave room for bitwise operators in case we ever
want to add them.
- Added isinf, isnan, isint, etc:
- isint(a)
- isfloat(a)
- isfrac(a)
- isinf(a)
- isnan(a)
In theory useful for conditional exprs based on the field's type.
- Accept +-nan as a float literal.
Niche, but seems necessary for completeness. Unfortunately this does
mean a field named nan (or inf) may cause problems...
I still think the 24 (23+1) char minimum is a good default for 2 column
output such as help text, especially if you don't have automatic width
detection. But our result scripts need to be a bit more flexible.
Consider:
$ make summary
                         code  data  stack  structs
TOTAL                   68864     0   3744     1520
Vs:
$ make summary
        code  data  stack  structs
TOTAL  68864     0   3744     1520
Up until now we were just kind of working around this with cut -c 25- in
our Makefile, but now that our result scripts automatically scale the
table widths, they should really just default to whatever is the most
useful.
- Allow single-arg frac:
- frac(a) => a/a
- frac(a, b) => a/b
This was already supported internally.
- Implicitly cast to frac in frac ops:
- ratio(3) => ratio(3/3) => 1.0 (100%)
- total(3) => total(3/3) => 3
This makes a bit more sense than erroring.
This now returns 1.0 if the total part of the fraction is 0.
There may be a better way to handle this, but the intention is for 0/0
to map to 100% for things like code coverage (cov.py), test coverage
(test.py), etc.
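In other words, something like (a sketch):

    def ratio(hits, total):
        # 0/0 maps to 100%, covering zero coverable lines counts as
        # full coverage
        if total == 0:
            return 1.0
        return hits / total

    assert ratio(0, 0) == 1.0
    assert ratio(3, 4) == 0.75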