- Adopted higher-level collect data structures:
- high-level DwarfEntry/DwarfInfo class
- high-level SymInfo class
- high-level LineInfo class
Note these had to be moved out of function scope due to pickling
issues in perf.py/perfbd.py. These were only function-local to
minimize scope leak so this fortunately was an easy change.
- Adopted better list-default patterns in Result types:
    def __new__(..., children=None):
        return Result(..., children if children is not None else [])
  A classic Python footgun (see the sketch after this list).
- Adopted notes rendering, though this is only used by ctx.py at the
moment.
- Reverted to sorting children entries, for now.
Unfortunately there's no easy way to sort the result entries in
perf.py/perfbd.py before folding. Folding is going to make a mess
of more complicated children anyways, so another solution is
needed...
And some other shared miscellany.
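For context, the footgun above is Python's mutable default arguments: a
default like children=[] is evaluated once at definition time and then
shared by every call that omits it. A rough sketch, with made-up Result
fields rather than the actual Result types in the scripts:

    import collections as co

    # the footgun: children=[] is created once, so every BadResult built
    # without an explicit children list shares the same list object
    class BadResult(co.namedtuple('BadResult', ['name', 'size', 'children'])):
        def __new__(cls, name, size, children=[]):
            return super().__new__(cls, name, size, children)

    a = BadResult('a', 4)
    b = BadResult('b', 8)
    a.children.append(BadResult('x', 1))
    assert b.children is a.children  # oops, b sees a's children

    # the safer pattern: default to None, build a fresh list per call
    class Result(co.namedtuple('Result', ['name', 'size', 'children'])):
        def __new__(cls, name, size, children=None):
            return super().__new__(cls, name, size,
                    children if children is not None else [])

    c = Result('c', 4)
    d = Result('d', 8)
    c.children.append(Result('y', 1))
    assert d.children == []  # d gets its own list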
- Dropped --internal flag, structs.py includes all structs now.
No reason to limit structs.py to public structs if ctx.py exists.
- Added struct/union/enum prefixes to results (enums were missing in
ctx.py).
- Only sort children layers if explicitly requested. This should
preserve field order, which is nice.
- Adopted more advanced FileInfo/DwarfInfo classes.
- Adopted table renderer changes (notes rendering).
- Sorting struct fields by name? Eh, that's not a big deal.
- Sorting function params by name? Okay, that's really annoying.
The compromise here is to sort only the top-level results by name,
leaving recursive results in the order returned by collect by default.
Recursive results should usually have a well-defined order.
This should be extendable to the other result scripts as well.
This is a bit more readable and better matches the names used in the C
code (lfs_config vs struct lfs_config).
The downside is we now have fields with spaces in them, which may cause
problems for naive parsers.
ctx.py reports functions' "contexts", i.e. the sum of the size of all
function parameters and indirect structs, recursively dereferencing
pointers when possible.
The idea is this should give us a rough lower bound on the amount of
state that needs to be allocated to call the function:
$ ./scripts/ctx.py lfs.o lfs_util.o -Dfunction=lfsr_file_write -z3 -s
function              size
lfsr_file_write        596
|-> lfs                436
|   '-> lfs_t          432
|-> file               152
|   '-> lfsr_file_t    148
|-> buffer               4
'-> size                 4
TOTAL                  596
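Conceptually, each parameter contributes its own size plus the size of
whatever it points at, recursively. A rough sketch of that sum, using a
made-up in-memory type model rather than ctx.py's actual DWARF plumbing,
with sizes matching the lfsr_file_write output above:

    import collections as co

    # made-up stand-in for what DWARF gives us: a type's size plus the
    # types of anything it points at
    Type = co.namedtuple('Type', ['name', 'size', 'pointees'])

    def ctx_size(t, seen):
        # a type's size plus everything reachable through pointers,
        # counting each type only once so cycles terminate
        if t.name in seen:
            return 0
        seen.add(t.name)
        return t.size + sum(ctx_size(p, seen) for p in t.pointees)

    lfs_t       = Type('lfs_t',       432, [])
    lfsr_file_t = Type('lfsr_file_t', 148, [])
    params = [
        Type('lfs',    4, [lfs_t]),        # pointer to lfs_t
        Type('file',   4, [lfsr_file_t]),  # pointer to lfsr_file_t
        Type('buffer', 4, []),             # buffer pointer, size not chased
        Type('size',   4, []),             # plain 4-byte value
    ]

    seen = set()
    print(sum(ctx_size(p, seen) for p in params))  # -> 596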
---
The long story short is that structs.py, while very useful for
introspection, has not been useful as a general metric.
Sure, it can give you a rough idea of the impact of small changes to
struct sizes, but it's not uncommon for larger changes to add/remove
structs that have no real impact on the user-facing RAM usage. There are
some structs we care about (lfs_t) and some we don't (lfsr_data_t).
Internal-only structs should already be measured by stack.py.
Which raises the question: how do we know which structs we care about?
The idea here is to look at function parameters and chase pointers. This
gives a complicated, but I think reasonable, heuristic. Fortunately the
DWARF debug info gives us all the necessary info.
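For a sense of what DWARF provides here, the following sketch walks a
function's parameters and chases pointer types using the pyelftools
library. This is only an illustration of the available info; ctx.py
itself goes through the shared DwarfInfo/collect machinery mentioned
above:

    from elftools.elf.elffile import ELFFile

    def name_of(die):
        attr = die.attributes.get('DW_AT_name')
        return attr.value.decode() if attr else '(anon)'

    def size_of(die):
        # rough size lookup: follow typedefs/qualifiers until something
        # carries a DW_AT_byte_size
        while die is not None:
            if 'DW_AT_byte_size' in die.attributes:
                return die.attributes['DW_AT_byte_size'].value
            if 'DW_AT_type' not in die.attributes:
                return 0
            die = die.get_DIE_from_attribute('DW_AT_type')
        return 0

    with open('lfs.o', 'rb') as f:
        dwarf = ELFFile(f).get_dwarf_info()
        for cu in dwarf.iter_CUs():
            for die in cu.iter_DIEs():
                if die.tag != 'DW_TAG_subprogram':
                    continue
                print(name_of(die))
                for p in die.iter_children():
                    if (p.tag != 'DW_TAG_formal_parameter'
                            or 'DW_AT_type' not in p.attributes):
                        continue
                    t = p.get_DIE_from_attribute('DW_AT_type')
                    print('  %s: %d bytes' % (name_of(p), size_of(t)))
                    # pointer params also tell us what they point at,
                    # which is what lets us chase and sum indirect structs
                    if (t.tag == 'DW_TAG_pointer_type'
                            and 'DW_AT_type' in t.attributes):
                        pointee = t.get_DIE_from_attribute('DW_AT_type')
                        print('    -> %s: %d bytes' % (
                            name_of(pointee), size_of(pointee)))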
Some notes:
- This does _not_ include buffer sizes. Buffer sizes are user
configurable, so it's sort of up to the user to account for these.
- We only count structs once if we find a cycle (lfsr_file_t.o for
  example). We can't really do any better, and this at least provides a
  lower bound for complex data structures.
- We sum all params/fields within a function, but take the max across
  all functions. Note this prevents common types (lfs_t for example)
  from being counted more than once (see the sketch after these notes).
- We only include global functions (based on the symbol flag). In theory
the context of all internal functions should end up in stack.py.
This can be overridden with --everything.
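To make the sum/max note concrete, the aggregation roughly behaves like
this sketch (the per-function numbers other than lfsr_file_write's are
made up):

    def total_ctx(ctxs):
        # ctxs maps function name -> that function's context, each found
        # by summing its params/fields and chasing pointers, counting
        # each type once per function
        #
        # the reported total is the max across functions, not the sum,
        # so a common type like lfs_t isn't charged once per function
        # that takes an lfs_t pointer
        return max(ctxs.values(), default=0)

    print(total_ctx({
        'lfsr_file_write': 596,
        'lfsr_file_read':  592,  # made-up number
        'lfsr_mount':      460,  # made-up number
    }))  # -> 596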
Note this doesn't replace structs.py. structs.py is still useful for
looking at all structs in the system. ctx.py should just be more useful
for comparing builds at a high level.