Tree deletion is such a pain. It always seems like an easy addition to
the core algorithm but always comes with problems.
The initial plan for deletes was to iterate through all tags, tombstone
them, and then adjust weights as needed. This accomplishes deletes with
little change to the rbyd algorithm, but adds a complex traversal inside
the commit logic. Doable in one commit, but complex. It also risks weird,
unintuitive corner cases, since the cost of deletion grows with the
number of tags being deleted (O(m log n)).
But this rbyd data structure is a tree, so in theory it's possible to
delete a whole range of tags in a single O(log n) operation.
---
This is a proof-of-concept range deletion algorithm for rbyd trees.
Note, this does not preserve rbyd's balancing properties! But it is no
worse than tombstoning. This is acceptable for littlefs as any
unbalanced trees will be rebalanced during compaction.
The idea is to use the same underlying dhara algorithm, where we follow
a search path and save any alt pointers not taken, except now we follow
both search paths that form the outside of the range, and only keep the
outside edges.
For example, a tree:
.-------o-------.
| |
.---o---. .---o---.
| | | |
.-o-. .-o-. .-o-. .-o-.
| | | | | | | |
a b c d e f g h
To delete the range d-e, we would search for d, and search for e:
********o********
* *
.---***** *****---.
| * * |
.-o-. .-*** ***-. .-o-.
| | | * * | | |
a b c d e f g h
And keep the outside edges:
.--- ---.
| |
.-o-. .- -. .-o-.
| | | | | |
a b c f g h
But how do we combine the outside edges? The simpler option is to do
both searches separately, one after the other. This would end up with a
tree like this:
.---------o
| |
.-o-. .---o
| | | |
a b c o---------.
| |
o---. .-o-.
| | | |
_ f g h
But this horribly throws off the balance of our tree! It's worse than
tombstoning, and gets worse with more tags.
An alternative strategy, which is used here, is to alternate edges as we
descend the tree. This is unfortunately more complex, and requires ~2x
the RAM, but better preserves the balance of our tree. It isn't perfect,
because we lose color information, but we can leave that up to
compaction:
.---------o
| |
.-o-. o---------.
| | | |
a b .---o .-o-.
| | | |
c o---. g h
| |
_ f
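
To make the edge-keeping idea concrete outside of rbyd's log encoding,
here is a rough in-memory analogue: a single recursive descent over a
plain binary search tree that drops everything in [lo, hi] and stitches
the surviving outside subtrees back together. None of this is littlefs
code (the node struct, remove_range, and join are invented for
illustration), and like the rbyd version it makes no attempt to
rebalance, leaving that to a later cleanup pass:

    #include <stdio.h>
    #include <stdlib.h>

    // toy in-memory node, not rbyd's on-disk alt pointers
    struct node {
        int key;
        struct node *left, *right;
    };

    // naive join: hang r off the rightmost node of l, no rebalancing
    static struct node *join(struct node *l, struct node *r) {
        if (!l) return r;
        if (!r) return l;
        struct node *t = l;
        while (t->right) {
            t = t->right;
        }
        t->right = r;
        return l;
    }

    // delete every key in [lo, hi], keeping only the outside edges of
    // the two search paths
    static struct node *remove_range(struct node *n, int lo, int hi) {
        if (!n) {
            return NULL;
        } else if (n->key < lo) {
            // n and its left subtree are entirely outside the range,
            // only the right edge can still intersect [lo, hi]
            n->right = remove_range(n->right, lo, hi);
            return n;
        } else if (n->key > hi) {
            // mirror case, only the left edge can still intersect
            n->left = remove_range(n->left, lo, hi);
            return n;
        } else {
            // n falls inside the range, drop it and join whatever
            // survives of its children; freeing the interior costs
            // O(m) here, rbyd simply stops referencing it, which is
            // what keeps the real operation O(log n)
            struct node *l = remove_range(n->left, lo, hi);
            struct node *r = remove_range(n->right, lo, hi);
            free(n);
            return join(l, r);
        }
    }

    static struct node *insert(struct node *n, int key) {
        if (!n) {
            struct node *m = malloc(sizeof(struct node));
            m->key = key;
            m->left = m->right = NULL;
            return m;
        }
        if (key < n->key) {
            n->left = insert(n->left, key);
        } else {
            n->right = insert(n->right, key);
        }
        return n;
    }

    static void print_inorder(const struct node *n) {
        if (!n) {
            return;
        }
        print_inorder(n->left);
        printf("%c ", (char)n->key);
        print_inorder(n->right);
    }

    int main(void) {
        // same keys as the example above: a..h
        struct node *root = NULL;
        for (const char *p = "dbfacegh"; *p; p++) {
            root = insert(root, *p);
        }
        // delete the range d-e
        root = remove_range(root, 'd', 'e');
        print_inorder(root);
        printf("\n");  // prints: a b c f g h
        return 0;
    }

The unbalanced join is the same compromise described above: the result
stays a valid tree, and balance is someone else's problem.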
I also hope this can be merged into lfs_rbyd_append, deduplicating the
entire core rbyd append algorithm.
Considered adding --ignore-errors to watch.py, but it doesn't really
make sense with watch.py's implementation. watch.py would need to not update
in realtime, which conflicts with other use cases.
It's quite lucky a spare bit is free in the tag encoding; this means we
don't need a reserved length value as originally planned. We end up using
all of the bits that overlap the alt pointer encoding, which is nice and
unexpected.
It turns out statefulness works quite well with this algorithm (the
prototype was in Haskell, which created some artificial problems; it may
have just been too high-level a language for this near-instruction-level
algorithm).
This bias makes it so that tag lookups always find a tag >= the
requested tag, never one below it, unless we are at the end of the tree.
This makes tree traversal trivial, which is quite nice.
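
As a quick sanity check of how that simplifies iteration, here is a
minimal stand-in where a sorted array plays the role of the tree; the
tag values and lookup_ge are made up for this sketch and are not
littlefs API:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    // a sorted array stands in for the rbyd tree here
    static const uint32_t tags[] = {0x101, 0x102, 0x210, 0x211, 0x300};
    #define TAG_COUNT (sizeof(tags)/sizeof(tags[0]))

    // find the first tag >= the requested tag, false at end of "tree"
    static bool lookup_ge(uint32_t tag, uint32_t *found) {
        for (size_t i = 0; i < TAG_COUNT; i++) {
            if (tags[i] >= tag) {
                *found = tags[i];
                return true;
            }
        }
        return false;
    }

    int main(void) {
        // with the >= bias, traversal is just "ask for one past the
        // tag we just got back"
        uint32_t tag = 0;
        uint32_t found;
        while (lookup_ge(tag, &found)) {
            printf("tag 0x%03x\n", (unsigned)found);
            tag = found + 1;
        }
        return 0;
    }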
Need to remove ntag now; it's no longer needed.
- Moved alt encoding 0x1 => 0x4, which can lead to slightly better
  lookup tables. The perturb bit now takes the same place as the color
  bit, which means both can be ignored in readonly operations.
- Dropped lfs_rbyd_fetchmatch; asking each lfs_rbyd_fetch to accept NULL
  isn't that bad.
New encoding:
tags:
iiii iiiiiii iiiiiTT TTTTTTt ttt0tpv
^                 ^        ^      ^^- valid bit
|                 |        |      '-- perturb bit
|                 |        '--------- 5-bit type1
|                 '------------------ 8-bit type2
'------------------------------------ 16-bit id
llll lllllll lllllll lllllll lllllll
^- n-bit length
alts:
wwww wwwwwww wwwwwww wwwwwww www1dcv
^                                ^^^- valid bit
|                                |'-- color bit
|                                '--- direction bit
'------------------------------------ 28-bit weight
jjjj jjjjjjj jjjjjjj jjjjjjj jjjjjjj
^- n-bit jump
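
For concreteness, a minimal sketch of how readonly code might pick
apart those low bits, assuming the leading word has already been decoded
into a plain 32-bit value; the mask and function names are invented for
this sketch and are not littlefs's actual defines:

    #include <stdbool.h>
    #include <stdint.h>

    // low bits read straight off the diagrams above
    #define RBYD_ISALT   0x8  // 0 = tag, 1 = alt pointer
    #define RBYD_DIR     0x4  // alt: direction bit (a type1 bit in tags)
    #define RBYD_PERTURB 0x2  // tag: perturb bit, alt: color bit
    #define RBYD_VALID   0x1  // valid bit, shared by tags and alts

    static inline bool rbyd_isalt(uint32_t word) {
        return word & RBYD_ISALT;
    }

    static inline bool rbyd_isvalid(uint32_t word) {
        return word & RBYD_VALID;
    }

    // readonly operations can compare words with the perturb/color bit
    // masked off, since both live in the same position
    static inline uint32_t rbyd_cmpword(uint32_t word) {
        return word & ~(uint32_t)RBYD_PERTURB;
    }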