Replacing -R/--aspect-ratio, --to-ratio now calculates the width/height
_before_ adding decoration such as headers, stack info, etc.
I was toying around with generalizing -R/--aspect-ratio to include
decorations, but when Wolfram Alpha spat out this mess for the
post-header formula:
    header*r - sqrt(4*v*r + padding^2*r)
w = -------------------------------------
                      2
I decided maybe a generalized -R/--aspect-ratio is a _bit_ too
complicated for what are supposed to be small standalone Python
scripts...
---
Also fixed the scaling formula, which should've taken the sqrt _after_
multiplying by the aspect ratio:
w = sqrt(v*r)
I only noticed while trying to solve for the more complicated
post-decoration formula; the difference is pretty minor.
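For reference, the fixed scaling looks roughly like this (a sketch, the
helper name is just illustrative):

import math as m

# v = target area in chars, r = desired width/height ratio
def to_ratio(v, r):
    w = m.sqrt(v*r)    # sqrt _after_ multiplying by the ratio
    h = m.sqrt(v/r)
    return round(w), round(h)

print(to_ratio(1920, 80/24))   # -> (80, 24)
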
Crashing on invalid input isn't the _worst_ behavior, but with a few
tweaks we can make these scripts more-or-less noop in such cases. This
is useful when running with -k/--keep-open since intermediate file
states often contain garbage.
(Ironically one of the precise problems littlefs is trying to solve.)
Also added a special case to treemap.py/codemap.py to not output the
canvas if there's nothing to show and height is implicit. Otherwise the
history mode with -n/--lines ends up filled with blank lines.
Note this makes -H1 subtly different from no -H/--height, with -H1
printing a blank line if there is nothing to show. The -H1 behavior may
also be useful in niche cases where you want that part of the screen
cleared.
---
This was found while trying to run codemap.py -k -n5 during compilation.
GCC writes object files incrementally, and this was breaking our script.
The notable exception being plot.py, where line-level history doesn't
really make sense.
These scripts all default to height=1, and -n/--lines can be useful for
viewing changes over time.
In theory you could achieve something similar to this with tailpipe.py,
but you would lose the header info, which is useful.
---
Note, as a point of simplicity, we do _not_ show sub-char history like
we used to in tracebd.py. That was way too complicated for what it was
worth.
This simplifies attrs a bit, and scripts can always override
__getitem__ if they want to provide lazy attr generation.
The original intention of accepting functions was to make lazy attr
generation easier, but while tinkering around with the idea I realized
the actual attr mapping/generation would be complicated enough that
you'd probably want a full class anyways.
All of our scripts are only using dict attrs anyways. And lazy attr
generation is probably a premature optimization for the same reason
everyone's ok with Python's slices being O(n).
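If a script ever does want lazy attrs, overriding __getitem__ is
enough. A hypothetical sketch:

class LazyAttrs(dict):
    def __getitem__(self, key):
        if key not in self:
            # compute and cache the attr on first access
            self[key] = '<%s>' % key
        return super().__getitem__(key)

attrs = LazyAttrs(block_size=4096)
print(attrs['block_size'])   # 4096, a normal dict attr
print(attrs['total'])        # '<total>', generated lazily
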
Reading Wikipedia:
> Later terminals added the ability to directly specify the "bright"
> colors with 90–97 and 100–107.
So if we want to stick to one pattern, we should probably go with
brightness as a separate modifier.
This shouldn't noticeably change any script, unless your terminal
interprets 90-97m colors differently from 1;30-37m, in which case things
should be more consistent now.
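For the curious, these are the two spellings in question, both of which
should render as bright/bold red on most terminals:

print('\x1b[1;31mbright red via bold modifier (1;31m)\x1b[m')
print('\x1b[91mbright red via direct color (91m)\x1b[m')
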
This mirrors how -H/--height and -W/--width work, with -n-1 using the
terminal height - 1 for the output.
This is very useful for carving out space for the shell prompt and other
things, without sacrificing automatic sizing.
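The relative sizing is roughly this (a sketch, assuming shutil for the
terminal size):

import shutil

def lines(n):
    # negative values are relative to the terminal height,
    # so -n-1 -> terminal height - 1
    if n < 0:
        n = max(shutil.get_terminal_size().lines + n, 1)
    return n

print(lines(-1))   # terminal height - 1
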
This allows for combining braille/dots with custom chars for specific
elements:
$ ./scripts/codemap.py lfs.o -H16 -: -.lfsr_rbyd_appendrattr=A
Note this is already how plot.py works, letting braille/dots take
priority in the new scripts/reworks was just an oversight.
So percentages now include unused blocks, instead of being derived from
only blocks in use.
This is a bit inconsistent with tracebd.py, where we show ops as
percentages of all ops, but it's more useful:
- mdir+btree+data gives you the total usage, which is useful if you want
to know how full the disk is. You can't get this info from in-use
percentages.
Note that the total field is sticking around, so you can show the total
usage directly if you provide your own title string:
$ ./scripts/dbgbmap.py disk \
--title="bd %(block_size)sx%(block_count)s, %(total_percent)s"
- You can derive the in-use percentages from total percentages if you
need them: in-use-mdir = mdir/(mdir+btree+data).
Maybe this should be added to the --title fields, but I can't think of
a good name at the moment...
Attempting to make tracebd.py consistent with dbgbmap.py doesn't really
make sense either: showing op percentages of the total bmap would
usually give extremely small numbers.
At least dbgbmap.py is consistent with tracebd.py's --wear percentage,
which is out of all erase state in the bmap.
- Create a grid with dashes even in -%/--usage mode.
This was surprisingly annoying since it breaks the existing
1 block = 1 char assumption.
- Derive percentages from in-use blocks, not all blocks. This matches
behavior of tracebd.py's percentages (% read/prog/erase).
Though not tracebd.py's percent wear...
- Added mdir/btree/data counts/percentages to dbgbmapd3.py, for use in
custom --title strings and the newly added --title-usage.
Because why not. Unlike dbgbmap.py, performance is not a concern at
all, and the consistency between these two scripts helps
maintainability.
Case in point: also fixed a typo from copying the block_count
inference between scripts.
It's a mess but it's working. Still a number of TODOs to clean up...
This adopts all of the changes in dbgbmap.py/dbgbmapd3.py, block
grouping, nested curves, Canvas, Attrs, etc:
- Like dbgbmap.py, we now group by block first before applying space
filling curves, using nested space filling curves to render byte-level
operations.
Python's ft.lru_cache really shines here.
The previous behavior is still available via -u/--contiguous.
- Adopted most features in dbgbmap.py, so --to-scale, -t/--tiny, custom
--title strings, etc.
- Adopted Attrs so now chars/coloring can be customized with
-./--add-char, -,/--add-wear-char, -C/--add-color,
-G/--add-wear-color.
- Renamed -R/--reset -> --volatile, which is a much better name.
- Wear is now colored cyan -> white -> red, which is a bit more
visually interesting. And we're not using cyan in any scripts yet.
In addition to the new stuff, there were a few simplifications:
- We no longer support sub-char -n/--lines with -:/--dots or
-⣿/--braille. Too complicated, required Canvas state hacks to get
working, and wasn't super useful.
We probably want to avoid doing too much cleverness with -:/--dots and
-⣿/--braille since we can't color sub-chars.
- Dropped -@/--blocks byte-level range stuff. This was just not worth
the amount of complexity it added. -@/--blocks is now limited to
simple block ranges. High-level scripts should stick to high-level
options.
- No fancy/complicated Bmap class. The bmap object is just a dict of
TraceBlocks which contain RangeSets for relevant operations.
Actually the new RangeSet class deserves a mention but this commit
message is probably already too long.
RangeSet is a decently efficient set of, well, ranges, that can be
merged and queried (a rough sketch follows this list). In a lower-level
language it would be implemented as a binary tree, but in Python we
just use a sorted list, since we're probably not going to beat O(n)
list operations.
- Wear is tracked at the block level, no reason to overcomplicate this.
- We no longer resize based on new info. Instead we either expect a
-b/--block-size argument or wait until first bd init call.
We can probably drop the block size in BD_TRACE statements now, but
that's a TODO item.
- Instead of one amalgamated regex, we use string searches to figure out
the bd op and then smaller regexes to parse. Lesson learned here:
Python's string search is very fast (compared to regex).
- We do _not_ support labels on blocks like we do in treemap.py/
codemap.py. It's less useful here and would just be more hassle.
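And here's the promised sketch of the sorted-list idea behind RangeSet
(the real class has a few more features):

import bisect

class RangeSet:
    def __init__(self):
        # sorted, non-overlapping, half-open (start, stop) pairs
        self.ranges = []

    def add(self, start, stop):
        i = bisect.bisect_left(self.ranges, (start,))
        # absorb any overlapping/adjacent neighbors
        if i > 0 and self.ranges[i-1][1] >= start:
            i -= 1
        j = i
        while j < len(self.ranges) and self.ranges[j][0] <= stop:
            start = min(start, self.ranges[j][0])
            stop = max(stop, self.ranges[j][1])
            j += 1
        self.ranges[i:j] = [(start, stop)]

    def __contains__(self, x):
        i = bisect.bisect_right(self.ranges, (x, float('inf'))) - 1
        return i >= 0 and self.ranges[i][0] <= x < self.ranges[i][1]

rs = RangeSet()
rs.add(0, 16)
rs.add(16, 32)
print(rs.ranges, 20 in rs)   # [(0, 32)] True
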
I also tried to reorganize main a bit to mirror the simple two-main
approach in dbgbmap.py and other ascii-rendering scripts, but it's a bit
difficult here since trace info is very stateful. Building up main
functions in the main main function seemed to work well enough:
main -+-> main_ -> trace__ (main thread)
      '-> draw_ -> draw__  (daemon thread)
---
You may note some weirdness going on with flags. That's me trying to
avoid upcoming flag conflicts.
I think we want -n/--lines in more scripts, now that it's relatively
self-contained, but this conflicts with -n/--namespace-depth in
codemap[d3].py, and risks conflict with -N/--notes in csv.py which may
end up with namespace-related functionality in the future.
I ended up hijacking -_, which then conflicted with -_/--add-line-char
in plot.py, but that's ok because we also want a common "secondary
char" flag for wear in tracebd.py... Long story short, I ended up
moving a bunch of flags around:
- added -n/--lines
- -n/--namespace-depth -> -_/--namespace-depth
- -N/--notes -> -N/--notes
- -./--add-char -> -./--add-char
- -_/--add-line-char -> -,/--add-line-char
- added -,/--add-wear-char
- -C/--color -> -C/--add-color
- added -G/--add-wear-color
Worth it? Dunno.
This is actually faster than a byte-wise xor in Python:
parity.py disk (1MiB) w/ crc32c lib: 0m0.027s
parity.py disk (1MiB) w/o crc32c lib: 0m0.051s
There's probably some other library that can do this even faster, but
parity.py is not a critical script.
By default, we don't actually do anything if we find an invalid gcksum,
so there's no reason to calculate it every time.
Though this performance improvement may not be very noticeable:
dbgbmap.py w/ crc32c lib w/ no_ck --no-ckdata: 0m0.221s
dbgbmap.py w/ crc32c lib w/o no_ck --no-ckdata: 0m0.269s
dbgbmap.py w/o crc32c lib w/ no_ck --no-ckdata: 0m0.388s
dbgbmap.py w/o crc32c lib w/o no_ck --no-ckdata: 0m0.490s
dbgbmap.old.py: 0m0.231s
Note that there's no point in adopting this in dbgbmapd3.py: 1. svg
rendering dominates (probably, I haven't measured this), and 2. we
default to showing the littlefs mount string instead of mdir/btree/data
percentages.
Jumping from a simple Python implementation to the fully hardware
accelerated crc32c library basically deletes any crc32c related
bottlenecks:
crc32c.py disk (1MiB) w/ crc32c lib: 0m0.027s
crc32c.py disk (1MiB) w/o crc32c lib: 0m0.844s
This uses the same try-import trick we use for inotify_simple, so we get
the speed improvement without losing portability.
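The pattern is roughly this, assuming the pypi crc32c package (the
pure-Python fallback here is just a naive bitwise stand-in, the
scripts' own impl may differ):

try:
    import crc32c
    def crc32c_(data, crc=0):
        return crc32c.crc32c(data, crc)
except ImportError:
    # fall back to a naive, but portable, pure-Python crc32c
    def crc32c_(data, crc=0):
        crc ^= 0xffffffff
        for b in data:
            crc ^= b
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82f63b78 if crc & 1 else 0)
        return crc ^ 0xffffffff
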
---
In dbgbmap.py:
dbgbmap.py w/ crc32c lib: 0m0.273s
dbgbmap.py w/o crc32c lib: 0m0.697s
dbgbmap.py w/ crc32c lib --no-ckdata: 0m0.269s
dbgbmap.py w/o crc32c lib --no-ckdata: 0m0.490s
dbgbmap.old.py: 0m0.231s
The bulk of the runtime is still in Rbyd.fetch, but this is now
dominated by leb128 decoding, which makes sense. We do ~twice as many
fetches in the new dbgbmap.py in order to calculate the gcksum (which
we then ignore...).
Checking every data block for errors really slows down dbgbmap.py, which
is unfortunate for realtime rendering.
To be fair, the real issue is our naive crc32c impl, but the mindset of
these scripts is if you want speed you really shouldn't be using Python
and should rewrite the script in Rust/C/something (see prettyasserts for
example). You _could_ speed things up with a table-based crc32c, but at
that point you should probably just find C-bindings for crc32c (maybe
optional like inotify?... actually that's not a bad idea...).
At least --no-ckmeta/--no-ckdata allow for the previous behavior of not
checking for relevant errors for a bit of speed.
---
Note that --no-ckmeta currently doesn't really do anything. I toyed with
adding a non-fetching Rbyd.fetchtrunk method, but this seems out of
scope for these scripts.
This better matches what you would expect from a function called
bd.read, at least in the context of littlefs, while also decreasing the
state (seek) we have to worry about.
Note that bd.readblock already behaved mostly like this, and is
preferred by every class except for Bptr.
So no more __getitem__, __contains__, or __iter__ for Rbyd, Btree, Mdir,
Mtree, Lfs.File, etc.
These were way too error-prone, especially when accidental unpacking
triggered unintended disk traversal and weird error states. We didn't
even use the implicit behavior because we preferred the full name for
heavy disk operations.
The motivation for this was Python not catching this bug, which is a bit
silly:
rid, rattr, *path_ = rbyd
And made it slightly darker to match arrows in light mode.
Just trying to make the separator look a bit nicer, but it's tricky
since this is the only non-tile non-text element.
This is a rework of dbgbmap.py to match dbgbmapd3.py, adopt the new
Rbyd/Lfs class abstractions, as well as Canvas, -k/--keep-open, etc.
Some of the main changes:
- dbgbmap.py now reports corrupt/conflict blocks, which can be useful
for debugging.
Note though that you will probably get false positives if running with
-k/--keep-open while something is writing to the disk. littlefs is
powerloss safe, not multi-write safe! Very different problem!
- dbgbmap.py now groups by blocks before mapping to the space filling
curve. This matches dbgbmapd3.py and I think is more intuitive now
that we have a bmap tiling algorithm.
-%/--usage still works, but is rendered as a second space filling
curve _inside_ the block tile. Different blocks can end up with
slightly different sizes due to rounding, but it's not the end of the
world.
I wasn't originally going to keep it around, but ended up caving, so
you can still get the original byte-level curve via -u/--contiguous.
- Like the other ascii rendering scripts, dbgbmap.py now supports
-k/--keep-open and friends as a thin main wrapper. This just makes it
a bit easier to watch a realtime bmap without needing to use watch.py.
- --mtree-only is supported, but filtering via --mdirs/--btrees/--data
is _not_ supported. This was too much complexity for a minor feature,
and doesn't cover other niche blocks like corrupted/conflict or parity
in the future.
- Things are more customizable thanks to the Attr class. For example,
you can now use the littlefs mount string as the title via
--title-littlefs.
- Support for --to-scale and -t/--tiny mode, if you want to scale based
on block_size.
One of the bigger differences dbgbmapd3.py -> dbgbmap.py is that
dbgbmap.py still supports -%/--usage. Should we backport -%/--usage to
dbgbmapd3.py? Uhhhh...
This ends up a funny example of raster graphics vs vector graphics. A
pixel-level space filling curve is easy with raster graphics, but with
an svg you'd need some sort of pixel -> path wrapping algorithm...
So no -%/--usage in dbgbmapd3.py for now.
Also just ripped out all of the -@/--blocks byte-level range stuff. Way
too complicated for what it was worth. -@/--blocks is limited to simple
block ranges now. High-level scripts should stick to high-level options.
One last thing to note is the adoption of "if '%' in label__" checks
before applying punescape. I wasn't sure if we should support punescape
in dbgbmap.py, since it's quite a bit less useful here, and may be
costly due to the lazy attr generation. Adding this simple check avoids
the cost and consistency question, so I adopted it in all scripts.
This matches the coloring in dbglfs.py for other erroneous conditions,
and also matches how we color hidden items when shown.
Also fixed some minor bugs in grm printing.
This can be useful when you just want to check for errors.
The only exception being dbgblock.py/dbgcat.py, since these don't really
have a concept of an error.
For more aggressive checking of filesystem state. These should match the
behavior of LFS_M_CKMETA/CKDATA in lfs.c.
Also tweaked dbgbmapd3.py (and eventually dbgbmap.py) to match, though we
don't need new flags there since we're already checking every block in
the filesystem.
These were hard to read, especially in light mode (which I use the
least). They're still hard to read, but hopefully a bit less so:
- Decreased opacity of unfocused tiles 0.7 -> 0.5
- Don't unfocus unused blocks in dbgbmapd3.py
- Softened arrow color in light mode #000000 -> #555555
- Added Lfs.traverse for full filesystem traversal
- Added Rbyd.shrub flag so we can tell if an Rbyd is a shrub
- Removed redundant leaves from paths in leaf iters
Like codemapd3.py this includes an interactive UI for viewing the
underlying filesystem graph, including:
- mode-tree - Shows all reachable blocks from a given block
- mode-branches - Shows immediate children of a given block
- mode-references - Shows parents of a given block
- mode-redund - Shows sibling blocks in redund groups (This is
currently just mdir pairs, but the plan is to add more)
This is _not_ a full filesystem explorer, so we don't embed all block
data/metadata in the svg. That's probably a project for another time.
However we do include interesting bits such as trunk addresses,
checksums, etc.
An example:
# create a filesystem image
$ make test-runner -j
$ ./scripts/test.py -B test_files_many -a -ddisk -O- \
-DBLOCK_SIZE=1024 \
-DCHUNK=10 \
-DSIZE=2050 \
-DN=128 \
-DBLOCK_RECYCLES=1
... snip ...
done: 2/2 passed, 0/2 failed, 164pls!, in 0.16s
# generate bmap svg
$ ./scripts/dbgbmapd3.py disk -b1024 -otest.svg \
-W1400 -H750 -Z --dark
updated test.svg, littlefs v0.0 1024x1024 0x{26e,26f}.d8 w64.128, cksum 41ea791e
And open test.svg in a browser of your choice.
Here's what the current colors mean:
- yellow => mdirs
- blue => btree nodes
- green => data blocks
- red => corrupt/conflict issue
- gray => unused blocks
But like codemapd3.py the output is decently customizable. See -h/--help
for more info.
And, just like codemapd3.py, this is based on ideas from d3 and
brendangregg's flamegraphs:
- d3 - https://d3js.org
- brendangregg's flamegraphs - https://github.com/brendangregg/FlameGraph
Note we don't actually use d3... the name might be a bit confusing...
---
One interesting change from the previous dbgbmap.py is the addition of
"corrupt" (bad checksum) and "conflict" (multiple parents) blocks, which
can help find bugs.
You may find the "conflict" block reporting a bit strange. Yes it's
useful for finding block allocation failures, but won't naturally formed
dags in file btrees also be reported as "conflicts"?
Yes, but the long-term plan is to move away from dags and make littlefs
a pure tree (for block allocator and error correction reasons). This
hasn't been implemented yet, so for now dags will result in false
positives.
---
Implementation wise, this script was pretty straightforward given prior
dbglfs.py and codemapd3.py work.
However there was an interesting case of https://xkcd.com/1425:
- Traverse the filesystem and build a graph - easy
- Tile a rectangle with n nice looking rectangles - uhhh
I toyed around with an analytical approach (something like block width =
sqrt(canvas_width*canvas_height/n) * block_aspect_ratio), but ended up
settling on an algorithm that divides the number of columns by 2 until
we hit our target aspect ratio.
This algorithm seems to work quite well, runs in only O(log n), and
perfectly tiles the grid for powers-of-two. Honestly the result is
better than I was expecting.
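Roughly the idea (a sketch, the actual code in dbgbmapd3.py differs a
bit):

# n blocks on a width x height canvas, r = target block aspect ratio
def tile(n, width, height, r=1):
    columns = max(n, 1)
    while columns > 1:
        rows = (n + columns - 1) // columns       # ceil(n/columns)
        if width/columns >= (height/rows)*r:      # wide enough yet?
            break
        columns = (columns + 1) // 2              # halve and retry
    rows = (n + columns - 1) // columns
    return columns, rows

print(tile(64, 100, 50))   # -> (8, 8)
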
This fixes an issue where shrub trunks were never printed even with
-i/--internal.
While only showing mdir/shrub/btree/bptr addresses on block changes is
nice in theory, it results in shrub trunks never being printed because
the mdir -> shrub block doesn't change.
Checking for changes in block type, not just the block address, avoids
this.
I'm trying to avoid having classes with different implementations across
scripts, as it makes updating things error-prone, but at the same time
copying all the tree renderers to all dbg scripts would be a bit much.
Monkey-patching the TreeArt class in relevant scripts seems like a
reasonable compromise.
These are pretty script specific, so probably shouldn't be in the
abstract littlefs classes. This also avoids the tree renderers getting
copied into scripts that don't need them (mtree -> dbglfs.py, dbgbmap.py
in the future, etc).
This also makes TreeArt consistent with JumpArt and LifetimeArt.
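The shape of the monkey-patching, with made-up method names:

# in the shared/abstract code, TreeArt knows nothing script-specific
class TreeArt:
    def __init__(self, branches):
        self.branches = branches

# in a script that needs it, patch in the script-specific renderer
def mtreerepr(self):
    return '\n'.join('+-> %s' % branch for branch in self.branches)
TreeArt.mtreerepr = mtreerepr

print(TreeArt(['mdir 0x{0,1}', 'mdir 0x{2,3}']).mtreerepr())
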
This just organizes things a bit better and makes dbg_log less of a
monolith:
- JumpArt - Encapsulates ascii jump rendering (-j/--jumps)
- LifetimeArt - Encapsulates ascii lifetime rendering (-g/--lifetimes)
So, instead of trying to be clever with python's tuple globbing, just
rely on lazy tuple unpacking and a whole bunch of if statements.
This is more verbose, but less magical. And generally, the less magic
there is, the easier things are to read.
This also drops the always-tupled lookup_ variants, which were
cluttering up the various namespaces.
Also tweaked how we fetch shrubs, adding Rbyd.fetchshrub and
Btree.fetchshrub instead of overloading the bd argument.
Oh, and also added --trunk to dbgmtree.py and dbglfs.py. Actually
_using_ --trunk isn't advised, since it will probably just result in a
corrupted filesystem, but these scripts are for accessing things that
aren't normally allowed anyways.
The reason for dropping the list/tuple distinction is that it was a
big ugly hack, unpythonic, and likely to catch users (and myself) by
surprise. Now, Rbyd.fetch and friends always require separate
block/trunk arguments, and the exercise of deciding which trunk to use
is left up to the caller.
Why not, -e/--exec seems useful/general purpose enough to deserve a
shortform flag. Especially since much of our testing involves emulation.
The only risk of conflicts is with -e/--error-* in other scripts, but
the _whole point_ of test.py is to error on failure, so I don't think
this will be an issue.
Note that -E may be more useful for environment variables in the future.
I feel like -e/--exec was more common in other programs, but I've only
found sed -e and perl -e so far. Most programs stick to -c/--command
(bash, python) which would conflict with -c/--compile here.
So:
$ ./scripts/dbgflags.py -l LFS_I
Is equivalent to:
$ ./scripts/dbgflags.py -l I
This matches some of the implicit prefixing during name lookup:
$ ./scripts/dbgflags.py LFS_I_SYNC
$ ./scripts/dbgflags.py I_SYNC
$ ./scripts/dbgflags.py SYNC
So:
all_ = all; del all
Instead of:
import builtins
all_, all = all, builtins.all
The del re-exposes the builtin we accidentally shadowed at global
scope. This requires less magic, and no module imports, though tbh I'm
surprised it works.
It also works in the case where you change a builtin globally, but
that's a bit too crazy even for me...
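A quick demonstration that the trick behaves, with a made-up
script-local all():

# a script accidentally shadows the builtin with its own variant
def all(iterable, key=lambda x: x):
    return not any(not key(x) for x in iterable)

all_ = all   # keep the local variant under a non-shadowing name
del all      # name lookup now falls through to the builtin again

assert all([1, 2, 3])                        # builtin
assert all_([1, 2, 3], key=lambda x: x > 0)  # local variant
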
The inconsistency between inner/non-inner (-i/--inner) views was a bit
too confusing.
At least now the bptr rendering in dbglfs.py matches behavior, showing
the bptr tag -> bptr jump even when not showing inner nodes.
If the point of these renderers is to show all jumps necessary to reach
a given piece of data, hiding bptr jumps only sometimes is somewhat
counterproductive...
I'm starting to regret these reworks. They've been a big time sink. But
at least these should be much easier to extend with the future planned
auxiliary trees?
New classes:
- Bptr - A representation of littlefs's data-only block pointers.
Extra fun is the lazily checked Bptr.__bool__ method, which should
prevent slowing down scripts that don't actually verify checksums.
- Config - The set of littlefs config entries.
- Gstate - The set of littlefs gstate.
I may have had too much fun with Config and Gstate. Not only do these
provide lookup functions for config/gstate, but known config/gstate
get lazily parsed classes that can provide easy access to the relevant
metadata.
These even abuse Python's __subclasses__, so all you need to do to add
a new known config/gstate is extend the relevant Config.Config/
Gstate.Gstate class.
The __subclasses__ API is a weird but powerful one (a rough sketch
follows this list).
- Lfs - The big one, a high-level abstraction of littlefs itself.
Contains subclasses for known files: Lfs.Reg, Lfs.Dir, Lfs.Stickynote,
etc, which can be accessed by path, did+name, mid, etc. It even
supports iterating over orphaned files, though it's expensive (but
incredibly valuable for debugging!).
Note that all file types can currently have attached bshrubs/btrees.
In the existing implementation only reg files should actually end up
with bshrubs/btrees, but the whole point of these scripts is to debug
things that _shouldn't_ happen.
I intentionally gave up on providing depth bounds in Lfs. Too
complicated for something so high-level.
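And the promised sketch of the __subclasses__ trick, with made-up
tags/classes:

class Gstate:
    tag = None

    @classmethod
    def known(cls):
        # subclasses register themselves just by being defined
        return {sub.tag: sub for sub in cls.__subclasses__()}

# adding a new known gstate is just a matter of subclassing
class GstateGrm(Gstate):
    tag = 0x100

class GstatePoly(Gstate):
    tag = 0x200

print(Gstate.known())   # -> {256: GstateGrm, 512: GstatePoly}
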
One noteworthy change is not recursing into directories by default. This
hopefully avoids overloading new users and matches the behavior of most
other Linux/Unix tools.
This adopts -r/--recurse/--file-depth for controlling how far to recurse
down directories, and -z/--depth/--tree-depth for controlling how far to
recurse down tree structures (mostly files). I like this API. It's
consistent with -z/--depth in the other dbg scripts, and -r/--recurse is
probably intuitive for most Linux/Unix users.
To make this work we did need to change -r/--raw -> -x/--raw. But --raw
is already a bit of a weird name for what really means "include a hex
dump".
Note that -z/--depth/--tree-depth does _not_ imply --files. Right now
only files can contain tree structures, but this will change when we get
around to adding the auxiliary trees.
This also adds the ability to specify a file path to use as the root
directory, though we need the leading slash to disambiguate file paths
and mroot addresses.
---
Also tagrepr has been tweaked to include the global/delta names,
toggleable with the optional global_ kwarg.
Rattr now has its own lazy parsers for did + name. A more organized
codebase would probably have a separate Name type, but it just wasn't
worth the hassle.
And the abstraction classes have all been tweaked to require the
explicit Rbyd.repr() function for a CLI-friendly representation. Relying
on __str__ hurt readability and debugging, especially since Python
prefers __str__ over __repr__ when printing things.
The main difference between -t/--tree and -R/--tree-rbyd is that only
the latter shows all internal jumps (unconditional alt->alt), so it
makes sense to also hide internal branches (rbyd->rbyd).
Note that we already hide the rbyd->block branches in dbglfs.py.
Also added color-ignoring comparison operators to our internal
TreeBranch struct. This fixes an issue where our non-inner branch
merging logic could end up with identical branches with different
colors, resulting in different colorings per run. Not the end of the
world, but something we want to avoid.
This requires an additional traversal of the mtree just to precalculate
the mrid width (mbits provides an upper-bound, but the actual number of
mrids in any given mdir may be much less), but it makes the output look
nicer.
This is where the high-level structure of littlefs starts to reveal
itself.
This is also where a lot of really annoying Mtree vs Btree API questions
come to a head, like should Mtree.lookup return an Mdir or an Rattr?
What about Btree.lookup? What gets included in the returned path in all
of these? Well, at least this is an interesting exercise in rethinking
littlefs's internal APIs...
New classes:
- Mid - A representation of littlefs's metadata ids. I've just gone
ahead and included the block_size-dependent mbits as a field in every
Mid instance to try to make Mid operations easier.
It's not like we care about one extra word of storage in Python.
- Mdir - Again, we intentionally _don't_ inherit Rbyd to try to reduce
type errors, though Mdirs really are just Rbyds in this design.
- Mtree - The skeleton of littlefs. Tricky bits include traversing the
mroot chain and handling mroot-inlined mdirs. Note mroots are included
in the mdir/mid iteration methods.
Getting the tree renderers all working again was a real pain in the ass.
Now that these are contained in the Rattr class, including the
tag/weight just clutters these APIs and makes things more confusing.
To make this more convenient, I've added __iter__ methods that allow
unpacking both the Rattr and Ralt classes. These more-or-less represent
tag+weight+data tuples anyways.
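The unpacking convenience, roughly (fields simplified here):

class Rattr:
    def __init__(self, tag, weight, data):
        self.tag, self.weight, self.data = tag, weight, data

    def __iter__(self):
        # allow tag, weight, data = rattr
        return iter((self.tag, self.weight, self.data))

tag, weight, data = Rattr(0x0421, 0, b'littlefs')
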
Like the Rbyd class, Btree serves as an abstraction for littlefs's
btrees in Python.
New classes:
- Btree - btree abstraction, note this does _not_ inherit from Rbyd. I
find that sort of inheritance too error-prone. Instead Btree
_contains_ the root rbyd, which can always be accessed via Btree.rbyd.
If you want low-level root-rbyd details, just access Btree.rbyd.
Though most fields that are relevant to the Btree are also forwarded
via Python's @property properties (a rough sketch follows this list).
- Bd - This just serves as a handle for the disk file that includes
block_size/block_count metadata.
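And the promised sketch of the containment + forwarding, with stand-in
fields:

import collections as co

# stand-in for the real Rbyd class
Rbyd = co.namedtuple('Rbyd', ['block', 'trunk', 'weight'])

class Btree:
    def __init__(self, rbyd):
        # contain the root rbyd rather than inherit from it
        self.rbyd = rbyd

    # forward the btree-relevant fields as read-only properties
    @property
    def block(self):
        return self.rbyd.block

    @property
    def weight(self):
        return self.rbyd.weight

btree = Btree(Rbyd(block=0x26e, trunk=0xd8, weight=64))
print(btree.block, btree.rbyd.trunk)
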
One important change to note is the adoption of required vestigial names
in all btree nodes (yes, this script was written... checks notes...
2 years ago... even the same month huh). This means we don't need the
parent name mapping, so the non-inner btree printing code no longer
needs to be extremely confusing at all times.
Also adopted the Rbyd class and friends, and backported Bd to
dbgrbyd.py.
Also tried to give a couple useful algorithms their own self-contained
functions, mainly:
- pathdelta - for emulating a traversal over exhaustive paths
- treerepr - for the common ascii tree rendering code
Just some minor tweaks:
- rbydaddr: Return list instead of tuple, note we rely on the type
distinction in Rbyd.fetch now.
- tagrepr: Rename w -> weight.
This reworks dbgrbyd.py to use the Rbyd class (well, a rewrite of the
Rbyd class) as an abstraction of littlefs's rbyd disk structure in
Python.
Duplicating common classes/functions across these scripts has proven
useful for sharing code without preventing these scripts from being
standalone (a problem for _actual_ code sharing, relative imports, etc).
And, because of how these scripts were written, dbgrbyd.py humorously
ended up the only script not sharing the Rbyd class.
I'm also trying to make the actual Rbyd abstraction a bit more concrete
now that the filesystem's design has had some time to mature. This means
more classes for things like Rattrs that reduce the sheer number of
tuples that were flying around.
New classes:
- Rattr - rbyd attrs, tag + weight + data, this includes all relevant
offsets which is useful for rendering hexdumps/etc.
- Ralt - rbyd alt pointers, useful for building tree representations.
- Rbyd - rbyd abstraction, including lookup/traversal methods
Note also that while the Rbyd class replaces most of the dbg_tree logic,
dbg_log is still pretty low-level and abstractionless.
---
Eventually I hope to have well defined classes for Btrees, Mdirs, Files,
etc, to make it easier to write more interesting debug scripts such as
dbgbmap.py.
Separating Btree, Mdirs, etc also means we shouldn't need the hacky
btree_lookup/tree_lookup methods in every script anymore. Having those
in dbgrbyd.py would've been a bit weird.
Might as well, since we already need to find this to calculate stack
info.
I've been considering adding -z/--depth to these scripts as well, but
that would require quite a bit more work. It's probably not worth the
added complexity/headache. Depth termination would need to happen on the
javascript side, and we'd still need cycle detection anyways.
But an error code is easy to add.
This drops the option to read tags from a disk file. I don't think I've
ever used this, and it requires quite a bit of circuitry to implement.
Also dropped -s/--string, because most tags can't be represented as
strings?
And tweaked -x/--hex flags to correctly parse spaces in arguments, so
now these are equivalent:
- ./scripts/dbgtag.py -x 00 03 00 08
- ./scripts/dbgtag.py -x "00 03 00 08"
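The parsing is roughly just (the helper name here is made up):

def parsehex(args):
    # join all -x arguments, then split on whitespace, so both
    # "-x 00 03" and "-x '00 03'" end up the same
    return bytes(int(x, 16) for x in ' '.join(args).split())

assert (parsehex(['00', '03', '00', '08'])
    == parsehex(['00 03 00 08'])
    == b'\x00\x03\x00\x08')
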
I mean, why not. dbgblock.py is already a bit special compared to the
other dbg scripts:
$ ./scripts/dbgblock.py disk -b4096 0 1 -n16
block 0x0, size 16, cksum a90f45b6
00000000: 68 69 21 0e 00 03 00 08 6c 69 74 74 6c 65 66 73 hi!.....littlefs
block 0x1, size 16, cksum 01e5f5e4
00000000: 68 69 21 0c 80 03 00 08 6c 69 74 74 6c 65 66 73 hi!.....littlefs
This matches dbgcat.py, which is useful when switching between the two
for debugging pipelines, etc.
We want dbgblock.py/dbgcat.py to be as identical as possible, and if you
removed the multiple blocks from dbgcat.py you'd have to really start
asking why it's named dbgCAT.py.