This adds __csv__ methods to all Csv* classes to indicate how to write
csv/json output, and adopts Python's default float repr. As a plus, this
also lets us use "inf" for infinity in csv/json files, avoiding
potential unicode issues.
Before this we were reusing __str__ for both table rendering and
csv/json writing, which rounded to a single decimal digit! This made
float output pretty much useless outside of trivial cases.
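For illustration, a minimal sketch of the split (the real Csv* classes
are more involved, this is just the idea):

    import collections

    class CsvFloat(collections.namedtuple('CsvFloat', 'a')):
        def __str__(self):
            # table rendering, rounded for humans
            return '%.1f' % self.a

        def __csv__(self):
            # csv/json writing, Python's default repr
            # (note repr(float('inf')) is just 'inf')
            return repr(self.a)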
---
Note Python's repr only prints the shortest string that round-trips
(1/10 -> 0.1), so the decimal result may still not be exact, but this is
probably fine for our somewhat hack-infested csv scripts.
Globs in CLI attrs (-L'*=bs=%(bs)s' for example) have been remarkably
useful. It makes sense to extend this to the other flags that match
against CSV fields, though this does add complexity to a large number of
smaller scripts.
- -D/--define can now use globs when filtering:
$ ./scripts/code.py lfs.o -Dfunction='lfsr_file_*'
-D/--define already accepted a comma-separated list of options, so
extending this to globs makes sense (see the sketch after these
bullets).
Note this differs from test.py/bench.py's -D/--define. Globbing in
test.py/bench.py wouldn't really work since -D/--define is generative,
not matching. But there are already other differences such as integer
parsing, ranges, etc. It's not worth making these perfectly consistent
as they are really two different tools that just happen to look the
same.
- -c/--compare now matches with globs when finding the compare entry:
$ ./scripts/code.py lfs.o -c'lfs*_file_sync'
This is quite a bit less useful than -D/--define, but makes sense for
consistency.
Note -c/--compare just chooses the first match. It doesn't really make
sense to compare against multiple entries.
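The matching itself is roughly fnmatch over the comma-separated list (a
sketch, names here are hypothetical):

    import fnmatch

    def define_matches(patterns, value):
        # -Dfunction='lfsr_file_*' -> patterns = 'lfsr_file_*'
        return any(
            fnmatch.fnmatchcase(value, p)
            for p in patterns.split(','))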
This raised the question of globs in the field specifiers themselves
(-f'bench_*' for example), but I'm rejecting this for now as I need to
draw the complexity/scope line _somewhere_, and I'm worried it's already way
over on the too-complex side.
So, for now, field names must always be specified explicitly. Globbing
field names would add too much complexity. Especially considering how
many flags accept field names in these scripts.
I don't know how I completely missed that this doesn't actually work!
Using del _does_ work in Python's repl, but it makes sense the repl may
differ from actual function execution in this case.
The problem is Python still thinks the relevant builtin is a local
variable after deletion, raising an UnboundLocalError instead of
performing a global lookup. In theory this would work if the variable
could be made global, but since global/nonlocal statements are lifted,
Python complains with "SyntaxError: name 'list' is parameter and
global".
And that's A-Ok! Intentionally shadowing language builtins already puts
this code deep into ugly hacks territory.
- CsvInt.x -> CsvInt.a
- CsvFloat.x -> CsvFloat.a
- Rev.x -> Rev.a
This matches CsvFrac.a (paired with CsvFrac.b), and avoids confusion
with x/y variables such as Tile.x and Tile.y.
The other contender was .v, since these are cs*v* related types, but
sticking with .a gets the point across that the name really doesn't have
any meaning.
There's also some irony that we're forcing namedtuples to have
meaningless names, but it is useful to have a quick accessor for the
internal value.
This prefix was extremely arbitrary anyways.
The prefix Csv* has slightly more meaning than R*, since these scripts
interact with .csv files quite a bit, and it avoids confusion with
rbyd-related things such as Rattr, Ralt, etc.
This affects the table renderers as well as csv.py's ratio expr.
This is a bit more correct, handwaving 0/0 (mapping 0/0 -> 100% is
useful for cov.py, please don't kill me mathematicians):
frac(1,0) => 1/0 (∞%)
frac(0,0) => 0/0 (100.0%)
frac(0,1) => 0/1 (0.0%)
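As a sketch, the mapping is roughly this (the real logic lives in the
frac/percent rendering, names here are hypothetical):

    def ratio(a, b):
        if b == 0:
            # 0/0 -> 100%, n/0 -> inf%
            return 1.0 if a == 0 else float('inf')
        return a / b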
So now the result scripts always require -d/--diff to diff:
- before: ./scripts/csv.py a.csv -pb.csv
- after: ./scripts/csv.py a.csv -db.csv -p
For a couple reasons:
- Easier to toggle
- Simpler internally to only have one diff path flag
- The previous behavior was a bit unintuitive
So:
all_ = all; del all
Instead of:
import builtins
all_, all = all, builtins.all
The del exposes the globally scoped builtin we accidentally shadow.
This requires less magic, and no module imports, though tbh I'm
surprised it works.
It also works in the case where you change a builtin globally, but
that's a bit too crazy even for me...
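Why this works, as a module-level sketch (the shadowed all here is
hypothetical):

    def all(xs):            # accidental shadow of the builtin
        pass

    all_ = all; del all     # keep a reference, then unshadow
    assert all([True])      # name lookup falls through to builtins again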
For the same reason we output all field fields by default: Because
machines can process more information than humans can.
Worst case, by fields can still be limited via explicit -b/--by flags.
This only failed if "-" was used as an argument (for stdin/stdout), so
the issue was pretty hard to spot.
openio is a heavily copy-pasted function, so it makes sense to just add
the import os to openio directly. Otherwise this mistake will likely
happen again in the future.
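For reference, the pattern in question is roughly this (a sketch, the
real copies vary a bit):

    import os
    import sys

    def openio(path, mode='r', buffering=-1):
        # allow '-' for stdin/stdout, duplicating the fd so close() is safe
        if path == '-':
            if 'r' in mode:
                return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
            else:
                return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
        else:
            return open(path, mode, buffering)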
I forgot that this is still useful for erroring scripts, such as
stack.py when checking for recursion.
Technically this is possible with -o/dev/null, but that's both
unnecessarily complicated and includes the csv encoding cost for no
reason.
-!/--everything has been useful enough to warrant a short form flag,
and -! is unlikely to conflict with other flags while also getting the
point across that this is a bit of an unusual option.
Now that I'm looking into some higher-level scripts, being able to merge
results without first renaming everything is useful.
This gives most scripts an implicit prefix for field fields, but _not_
by fields, allowing easy merging of results from different scripts:
$ ./scripts/stack.py lfs.ci -o-
function,stack_frame,stack_limit
lfs_alloc,288,1328
lfs_alloc_discard,8,8
lfs_alloc_findfree,16,32
...
At least now these have better support in scripts with the addition of
the --prefix flag (this was tricky for csv.py), which allows explicit
control over field field prefixes:
$ ./scripts/stack.py lfs.ci -o- --prefix=
function,frame,limit
lfs_alloc,288,1328
lfs_alloc_discard,8,8
lfs_alloc_findfree,16,32
...
$ ./scripts/stack.py lfs.ci -o- --prefix=wonky_
function,wonky_frame,wonky_limit
lfs_alloc,288,1328
lfs_alloc_discard,8,8
lfs_alloc_findfree,16,32
...
So:
$ ./scripts/code.py lfs.o -o- -q
Becomes:
$ ./scripts/code.py lfs.o -o-
The original intention of -o/-O _not_ being exclusive (aka table is
still rendered unless disabled with -q/--quiet) was to allow results to
be written to csv files and rendered to tables in a single pass.
But this was never useful. Heck, we're not even using this in our
Makefile right now because it would make the rule dependencies more
complicated than it's worth. Even for long-running result scripts
(perf.py, perfbd.py, etc), most of the work is building that csv file,
the cost of rendering a table in a second pass is negligible.
In every case I've used -o/-O, I've also wanted -q/--quiet, and almost
always forget this on the first run. So might as well make the expected
behavior the actual behavior.
---
As a plus, this lets us simplify some of the scripts a bit by replacing
visibility filters with -o/-O dependent by-fields.
This makes it so scripts with complex fields still write all fields to
outputted csv/json files, while only showing a user-friendly
subset unless -f/--field is explicitly provided.
While internal fields are often too much information to show by default,
csv/json files are expected to go to other scripts, not humans. So more
information is more useful up until you actually hit a performance
bottleneck.
And if you _do_ somehow manage to hit a performance bottleneck, you can
always limit the output with explicit -f/--field flags.
With this, we apply the same result modifiers (exprs/defines/hot/etc) to
both the input results and -d/--diff results. So if both start with the
same format, diffing/hotifying/etc should work as expected.
This is really the only way I can see -d/--diff results working with
result modifiers in a way that makes sense.
The downside of this is that you can't save results with some complex
operation applied, and then diff while applying the same operation,
since most of the newer operations (hotify) are _not_ idempotent.
Fortunately the two alternatives are not unreasonable:
1. Save results _without_ the operation applied, since the operation
will be applied to both the input and diff results.
This is a bit asymmetric, but should work.
2. Apply the operation to the input and then pipe to csv.py for diffing.
This used to "just work" when we did _not_ apply operations to output
csv/json, but this was really just equivalent to 1..
I think the moral of the story is you can solve any problem with enough
chained csv.py calls.
It's just too unintuitive to filter after exprs.
Note this is consistent with how exprs/mods are evaluated. Exprs/mods
can't reference other exprs/mods because csv.py is only single-pass, so
allowing defines to reference exprs/mods is surprising.
And the solution to needing these sort of post-expr/mod references is
the same for defines: You can always chain multiple csv.py calls.
The reason defines were changed to evaluate after expr eval was that
this seemed inconsistent with other result scripts, but this is not
actually the case. Other result scripts simply don't have exprs/mods, so
filtering in fold is the same as filtering during collection. Note that
even in fold, filtering is done _before_ the actual fold/sum operation.
---
Also fixed a recursive-define regression when folding. Counter-
intuitively, we _don't_ want to recursively apply define filters. If we
do the results will just end up too confusing to be useful.
This should either have checked diff_result==None, or we should be
mapping diff_result=None => diff_result_=None. To be safe I've done
both.
This was a nasty typo and I only noticed because ctx.py stopped printing
"cycle detected" for our linked-lists (which are expected to be cyclic).
There's an ordering issue with hotifying and folding when we have
multiple foldable results with children. This was hard to notice since
most of the recursive scripts have unique results, but it _is_ an issue
for perf.py/perfbd.py, which rely on result folding to merge samples.
The fix is to fold _before_ hotifying.
We could fold multiple times to avoid changing the behavior of the
result scripts, but instead I've just moved the folding in the table
renderer up into the relevant main functions. This means 1. we only fold
once, and 2. folding affects outputted csv/json files.
I'm a bit on the fence about this behavior change, but it is a bit more
consistent in how -r/--hot, -z/--depth, etc, affect both table and
csv/json results.
Maybe we should move towards the table renderer always reflecting the
csv/json results? Most csv/json usage is with -q/--quiet anyways...
---
This does create a new risk in that the table renderer can hide results
if they aren't folded first.
To hopefully avoid this I've added an assert in the table renderer if it
notices results being hidden.
This gives csv.py access to a hidden feature in our table renderer used
by some of the other scripts: fields that affect by-field grouping, but
aren't actually printed.
For example, this prevents summing same-named functions in different
files, while only showing the function name in the table render:
$ ./scripts/csv.py lfs.code.csv -bfile -bfunction -lfunction
function            size
lfs_alloc            398
lfs_alloc_discard     31
lfs_alloc_findfree    77
...
This is especially useful when enumerating results. For example, this
prevents any summing without extra table noise:
$ ./scripts/csv.py lfs.code.csv -i -bfunction -fsize -lfunction
function            size
lfs_alloc            398
lfs_alloc_discard     31
lfs_alloc_findfree    77
...
I also tweaked -b/--by field defaults to account for enumerate/label
fields a bit better.
This removes most of the special behavior around how -r/--hot and
-i/--enumerate interact. This does mean -r/--hot risks folding results
if -i/--enumerate is not specified, but this is _technically_ a valid
operation.
For most of the recursive result scripts, I've replaced the "i" field
with separate "z" and "i" fields for depth and field number, which I
think is a bit more informative/useful.
I've also added a default-hidden "off" field to structs.py/ctx.py, since
we have that info available. I considered replacing "i" with this, but
decided against it since non-zero offsets for union members would risk
being confusing/mistake prone.
Guh
This ended up being more work than I expected. The goal was to allow
passing recursive results (callgraph info, structs, etc) between
scripts, which is simply not possible with csv files.
Unfortunately, this raised a number of questions: What happens if a
script receives recursive results? -d/--diff with recursive results?
How to prevent folding of ordered results (structs, hot, etc) in piped
scripts? etc.
And I ended up with a significant rewrite of most of the result scripts'
internals.
Key changes:
- Most result scripts now support -O/--output-json in addition to
-o/--json, with -O/--output-json including any recursive results in
the "children" field.
- Most result scripts now support both csv and json as input to relevant
flags: -u/--use, -d/--diff, -p/--percent. This is accomplished by
looking for a '[' as the first character to decide if an input file is
json or csv (see the sketch after this list).
Technically this breaks if your json has leading whitespace, but why
would you ever keep whitespace around in json? The human-editability
of json was already ruined the moment comments were disallowed.
- csv.py requires all fields to be explicitly defined, so added
-i/--enumerate, -Z/--children, and -N/--notes. At least we can provide
some reasonable defaults so you shouldn't usually need to type out the
whole field.
- Notably, the rendering scripts (plot.py, treemapd3.py, etc) and
test/bench scripts do _not_ support json. csv.py can always convert
to/from json when needed.
- The table renderer now supports diffing recursive results, which is
nice for seeing how the hot path changed in stack.py/perf.py/etc.
- Moved the -r/--hot logic up into main, so it also affects the
outputted results. Note it is impossible for -z/--depth to _not_
affect the outputted results.
- We now sort in one pass, which is in theory more efficient.
- Renamed -t/--hot -> -r/--hot and -R/--reverse-hot, matching -s/-S.
- Fixed an issue with -S/--reverse-sort where only the short form was
actually reversed (I misunderstood what argparse passes to Action
classes).
- csv.py now supports json input/output, which is funny.
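The sniff is roughly this (a sketch, with the whole file read into
memory for simplicity):

    import csv
    import io
    import json

    def read_results(f):
        data = f.read()
        if data.startswith('['):
            return json.loads(data)
        return list(csv.DictReader(io.StringIO(data)))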
Unfortunately the import sys in the argparse block was hiding missing
sys imports.
The mistake was assuming the import sys would be scoped to that if
block, but Python has no block scope, so the import leaked into the
module's globals...
Apparently __builtins__ is a CPython implementation detail, and behaves
differently when executed vs imported??? (It's the builtins module in
__main__, but a plain dict in imported modules.)
import builtins is the correct way to go about this.
Moved local import hack behind if __name__ == "__main__"
These scripts aren't really intended to be used as python libraries.
Still, it's useful to import them for debugging and to get access to
their juicy internals.
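Roughly the shape this takes, at the very top of each script:

    if __name__ == "__main__":
        # prevent local imports (our csv.py shadows the stdlib csv module)
        __import__('sys').path.pop(0)

    import csv  # resolves to the stdlib csv even when run as a script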
Instead of trying to be too clever, this just adds a bunch of small
flags to control parts of table rendering:
- --no-header - Don't show the header.
- --small-header - Don't show by field names.
- --no-total - Don't show the total.
- -Q/--small-table - Equivalent to --small-header + --no-total.
Note that -Q/--small-table replaces the previous -Y/--summary +
-c/--compare hack, while also allowing a similar table style for
non-compare results.
This reverts per-result source file mapping, and tears out a bunch of
messy dwarf parsing code. Results from the same .o file are now mapped
to the same source file.
This was just way too much complexity for slightly better result->file
mapping, which risked losing results accidentally mapped to the wrong
file.
---
I was originally going to revert all the way back to relying strictly on
the .o name and --build-dir (490e1c4) (this is the simplest solution),
but after poking around in dwarf-info a bit, I realized we do have
access to the original source file in DW_TAG_compile_unit's
DW_AT_comp_dir + DW_AT_name.
This is much simpler/more robust than parsing objdump --dwarf=rawline,
and avoids needing --build-dir in a bunch of scripts.
---
This also reverts stack.py to rely only on the .ci files. These seem as
reliable as DW_TAG_compile_unit while simplifying things significantly.
Symbol mapping used to be a problem, but this was fixed by using the
symbol in the title field instead of the label field (which strips some
optimization suffixes?).
Now that cycle detection is always done at result collection time, we
don't need this in the table renderer itself.
This had a tendency to cause problems for non-function scripts (ctx.py,
structs.py).
God, I wish Python had an OrderedSet.
This is a fix for duplicate "cycle detected" notes when using -t/--hot.
Merging both _hot_notes and _notes in the HotResult class is tricky
when the underlying container is a list.
The order is unlikely to be guaranteed anyways, when different results
with different notes are folded.
And if we ever want more control over the order of notes in result
scripts we can always change this back later.
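In the meantime, dicts preserve insertion order (Python >= 3.7), so
they can stand in for an OrderedSet when deduplicating:

    def dedup_notes(notes):
        # an ordered set in disguise
        return list(dict.fromkeys(notes))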
- Error on no/insufficient files.
Instead of just returning no results. This is more useful when
debugging complicated bash scripts.
- Use elf magic to allow any file order in perfbd.py/stack.py.
This was already implemented in stack.py, now also adopted in
perfbd.py.
Elf files always start with the magic string "\x7fELF", so we can use
this to figure out the types of input files without needing to rely on
argument order (see the sketch after this list).
This is just one less thing to worry about when invoking these
scripts.
- Prevented childrenof memoization from hiding the source of a
detected cycle.
- Deduplicated multiple cycle detected notes.
- Fixed note rendering when last column does not have a notes list.
Currently this only happens when entry is None (no results).
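The elf sniff mentioned above is roughly:

    def is_elf(path):
        with open(path, 'rb') as f:
            return f.read(4) == b'\x7fELF'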
Without this, naming a column i/children/notes in csv.py could cause
things to break. Unlikely for children/notes, but very likely for i,
especially when benchmarking.
Unfortunately namedtuple makes this tricky. I _want_ to just rename
these to _i/_children/_notes and call the problem solved, but namedtuple
reserves all underscore-prefixed fields for its own use.
As a workaround, the table renderer now looks for _i/_children/_notes at
the _class_ level, as an optional name of which namedtuple field to use.
This way Result types can stay lightweight namedtuples while including
extra table rendering info without risk of conflicts.
This also makes the HotResult type a bit more funky, but that's not a
big deal.
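A hypothetical sketch of the class-level indirection:

    import collections

    class Result(collections.namedtuple(
            'Result', ['function', 'size', 'children_'])):
        # tell the table renderer which namedtuple field holds children
        _children = 'children_'

    r = Result('lfs_alloc', 398, [])
    assert getattr(r, type(r)._children) == []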
This extends the recursive part of the table renderer to sort children
by the optional "i" field, if available.
Note this only affects children entries. The top-level entries are
strictly ordered by the relevant "by" fields. I just haven't seen a use
case for this yet, and not sorting "i" at the top-level reduces the
number of things that can go wrong for scripts without children.
---
This also rewrites -t/--hot to take advantage of children ordering by
injecting a totally-not-hacky HotResult subclass.
Now -t/--hot should be strictly ordered by the call depth! Though note
entries that share "by" fields are still merged...
This also gives us a way to introduce the "cycle detected" note and
respect -z/--depth, so overall a big improvement for -t/--hot.
We don't really need padding for the notes on the last column of tables,
which is where row-level notes end up.
This may seem minor, but not padding here avoids quite a bit of
unnecessary line wrapping in small terminals.
- Adopted higher-level collect data structures:
- high-level DwarfEntry/DwarfInfo class
- high-level SymInfo class
- high-level LineInfo class
Note these had to be moved out of function scope due to pickling
issues in perf.py/perfbd.py. These were only function-local to
minimize scope leak so this fortunately was an easy change.
- Adopted better list-default patterns in Result types:
    def __new__(cls, ..., children=None):
        return super().__new__(cls, ...,
            children if children is not None else [])
  A classic python footgun (see the sketch below).
- Adopted notes rendering, though this is only used by ctx.py at the
moment.
- Reverted to sorting children entries, for now.
Unfortunately there's no easy way to sort the result entries in
perf.py/perfbd.py before folding. Folding is going to make a mess
of more complicated children anyways, so another solution is
needed...
And some other shared miscellany.
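For reference, the footgun: a mutable default is evaluated once at def
time and shared across every call:

    def bad(children=[]):
        children.append(1)
        return children

    bad()  # -> [1]
    bad()  # -> [1, 1], the same list!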
It looks like the failure case in our scripts' subprocess stderr
handling was not tested well during a fix to stderr blocking (a735bcd).
This code was attempting to print stderr only if an error occurred, but
with stderr=None this just results in a NoneType TypeError.
In retrospect, completely hiding stderr is kind of shitty if a
subprocess fails, but it doesn't seem possible to read from both stdout
and stderr with Python's APIs without getting stuck when stderr's
buffer is full.
It might be possible to work around this with either multithreading,
select calls, or a temp file, but I'm not sure slightly less verbose
scripts are worth the added complexity in every single subprocess call.
For now just reverting to unconditionally forwarding stderr from the
child process. This is the simplest/most robust option.
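A sketch of the simple approach, assuming we only need to parse the
child's stdout (the cmd here is hypothetical):

    import subprocess
    import sys

    cmd = ['objdump', '--dwarf=info', 'lfs.o']
    proc = subprocess.Popen(cmd,
        stdout=subprocess.PIPE,
        stderr=None,  # None = inherit, stderr is forwarded unconditionally
        universal_newlines=True)
    for line in proc.stdout:
        pass  # parse line
    proc.wait()
    if proc.returncode != 0:
        sys.exit(proc.returncode)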
- stack.py:collect -> collect + collect_cov
- perf.py:collect_syms_and_lines -> collect_syms + collect_dwarf_lines
- perfbd.py:collect_syms_and_lines -> collect_syms + collect_dwarf_lines
This should hopefully lead to both better readability and better code
reuse.
Note collect_dwarf_lines is a bit different than collect_dwarf_files in
code.py/data.py/etc, but the extra complexity of collect_dwarf_lines is
probably not worth sharing here.
The fact that our scripts' table renderer was slightly different for
recursive scripts (stack.py, perf.py) and non-recursive scripts
(code.py, structs.py) was a ticking time bomb, one innocent edit away
from breaking half the scripts.
This makes the table renderer consistent across all scripts, allowing
easy copy-pasting when editing, at the cost of some unused code in some
scripts.
One hiccup with this though is the difference in cycle detection
behavior between scripts:
- stack.py:
    lfsr_bd_sync
    '-> lfsr_bd_prog
        '-> lfsr_bd_sync <-- cycle!
- structs.py:
    lfsr_bshrub_t
    '-> u
        '-> bsprout
            '-> u <-- not a cycle!
To solve this the table renderer now accepts a simple detect_cycles
flag, which can be set per-script.
This makes the -p/--percent flag a bit more consistent with -d/--diff
and -c/--compare, both of which change the printing strategy based on
additional context.
This showcases the sort of high-level result printing where -c/--compare
is useful:
$ make summary-diff
        code            data       stack          structs
BEFORE  57057           0          3056           1476
AFTER   68864 (+20.7%)  0 (+0.0%)  3744 (+22.5%)  1520 (+3.0%)
There was one hiccup though: how to hide the name of the first field.
It may seem minor, but the missing field name really does help
readability when you're staring at a wall of CLI output.
It's a bit of a hack, but this can now be controlled with -Y/--summary,
which has the sole purpose of disabling the first field name if mixed
with -c/--compare.
-c/--compare is already a weird case for the summary row anyways...
Example:
$ ./scripts/csv.py lfs.code.csv \
-bfunction -fsize \
-clfsr_rbyd_appendrattr
function                           size
lfsr_rbyd_appendrattr              3598
lfsr_mdir_commit                   5176 (+43.9%)
lfsr_btree_commit__.constprop.0    3955 (+9.9%)
lfsr_file_flush_                   2729 (-24.2%)
lfsr_file_carve                    2503 (-30.4%)
lfsr_mountinited                   2357 (-34.5%)
... snip ...
I don't think this is immediately useful for our code/stack/etc
measurement scripts, but it's certainly useful in csv.py for comparing
results at a high level.
And by useful I mean it replaces a 40-line long awk script that has
outgrown its original purpose...
This may make some mathematician mad, but these are informative scripts.
Returning +-inf is much more useful than erroring when dealing with
several hundred rows of results.
And hey, if it's good enough for IEEE 754, it's good enough for us :)
Also fixed a division operator mismatch in RFrac that was causing
problems.
Not sure if this is an old habit from Python 2, or just because it looks
nicer next to __mul__, __mod__, etc, but in Python 3 this should be
__truediv__ (or __floordiv__), not __div__.
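A minimal sketch, assuming RFrac wraps an a/b pair:

    class RFrac:
        def __init__(self, a, b):
            self.a, self.b = a, b

        # Python 3 dispatches a/b to __truediv__; a stray __div__ is
        # silently never called
        def __truediv__(self, other):
            return RFrac(self.a*other.b, self.b*other.a)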
I still think the 24 (23+1) char minimum is a good default for 2 column
output such as help text, especially if you don't have automatic width
detection. But our result scripts need to be a bit more flexible.
Consider:
$ make summary
                         code  data  stack  structs
TOTAL                   68864     0   3744     1520
Vs:
$ make summary
        code  data  stack  structs
TOTAL  68864     0   3744     1520
Up until now we were just kind of working around this with cut -c 25- in
our Makefile, but now that our result scripts automatically scale the
table widths, they should really just default to whatever is the most
useful.
- RInt/RFloat now accept implicitly castable types (mainly
RInt(RFloat(x)) and RFloat(RInt(x))).
- RInt/RFloat/RFrac are now "truthy", i.e. they implement __bool__.
- More operator support for RInt/RFloat/RFrac:
- __pos__ => +a
- __neg__ => -a
- __abs__ => abs(a)
- __div__ => a/b
- __mod__ => a%b
These work in Python, but are mainly used to implement expr eval in
csv.py.
This seems like a more fitting name now that this script has evolved
into more of a general purpose high-level CSV tool.
Unfortunately this does conflict with the standard csv module in Python,
breaking every script that imports csv (which is most of them).
Fortunately, Python is flexible enough to let us remove the current
directory before imports with a bit of an ugly hack:
# prevent local imports
__import__('sys').path.pop(0)
These scripts are intended to be standalone anyways, so this is probably
a good pattern to adopt.