Changed always-follow alts that we use to terminate grow/shrink/remove
operations to use `altle 0xfff0` instead of `altgt 0`.
`altgt 0` gets the job done as long as you make sure tag 0 never ends up
in an rbyd query. But this kept showing up as a problem, and recent
debugging revealed some erroneous tag-0 lookups that created vestigial
alt pointers (not necessarily a problem, but space-wasting).
Since we moved to a strict 16-bit tag, making these `altle 0xfff0`
doesn't really have a downside, and means we can expect rbyd lookups
around 0 to behave how one would normally expect.
As a (very minor) plus, the value zero usually has special encodings in
instruction sets, so being able to use it for rbyd_lookups offers a
(very minor) code size saving.
---
Sidenote: The reasons altle/altgt are the way they are (and asymmetric):
1. Flipping these alts is a single bit-flip, which only works if they
are asymmetric (only one includes the equal case). See the sketch
after this list.
2. Our branches are biased to prefer the larger tag. This makes
traversal trivial. It might be possible to make this still work with
altlt/altge, but would require some increments/decrements, which
might cause problems with boundary conditions around the 16-bit tag
limit.
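As a rough, hypothetical illustration of point 1 (these constants are made
up for the example and are not the real lfs.c encoding), if LE vs GT is a
single bit in the alt tag, reversing a comparison is just an XOR:

    #include <stdint.h>

    // hypothetical alt-tag bits, for illustration only
    #define ALT_TAG  0x8000
    #define ALT_GT   0x0001  // 0 = less-than-or-equal, 1 = greater-than

    // flipping altle <-> altgt is a single bit-flip, which only works
    // because exactly one of the two comparisons includes equality
    static inline uint16_t alt_flip(uint16_t alt) {
        return alt ^ ALT_GT;
    }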
I only recently noticed there is enough information in each rbyd trunk
to infer the effective grow/shrinks. This has a number of benefits:
- Cleans up the tag encoding a bit, no longer expecting tag size to
sometimes contain a weight (though this could've been fixed other
ways).
  The 0x6 in the lower nibble is now reserved exclusively for in-device tags.
- grow/shrinks can be implicit to any tag. Will attempt to leverage this
in the future.
- The weight of an rbyd can no longer go out-of-sync with itself. While
this _shouldn't_ happen normally, if it does I imagine it'd be very
hard to debug.
Now, there is only one source of knowledge about the weight of the
rbyd: The most recent set of alt-pointers.
Note that remove/unreachable tags now behave _very_ differently when it
comes to weight calculation: remove tags now require the tree to make the
tag unreachable. This is a tradeoff for the above.
The main motivation for this was issues fitting a good tag encoding into
14 bits. The extra 2 bits (though really only 1 bit was needed) from
no longer being a leb encoding opens up the space from 3 suptypes to
15 suptypes, which is nothing to shake a stick at.
The main downsides:
1. We can't rely on leb encoding for effectively-infinite extensions.
2. We can't shorten small tags (crcs, grows, shrinks) to one byte.
For 1., extending the leb encoding beyond 14 bits is already
unpalatable, because it would increase RAM costs in the tag
encoder/decoder, which must assume a worst-case tag size, and would likely
add storage cost to every alt pointer; more on this in the next section.
The current encoding is quite generous, so I think it is unlikely we
will exceed the 16-bit encoding space. But even if we do, it's possible
to use a spare bit for an "extended" set of tags in the future.
As for 2., the lack of compression is a downside, but I've realized the
only tags that really matter storage-wise are the alt pointers. In any
rbyd there will be roughly O(m log m) alt pointers, but at most O(m) of
any other tags. What this means is that the encoding of any other tag is
in the noise of the encoding of our alt pointers.
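To put rough numbers on that (just an illustration, ignoring constant
factors): with m = 1000 entries, m*log2(m) is around 10,000 alt pointers
vs at most 1000 other tags, so non-alt tags make up well under 10% of all
tags and a byte saved or lost on them barely moves the total.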
Our alt pointers are already pretty densely packed. But because the
sparse key part of alt-pointers is stored as-is, the worst-case
encoding of in-tree tags likely ends up being the encoding of our
alt-pointers. So going up to 3-byte tags adds a surprisingly large
storage cost.
As a minor plus, le16s should be slightly cheaper to encode/decode. It
should also be slightly easier to debug tags on-disk.
tag encoding:

  TTTTtttt ttttTTTv
    T: 4+3-bit suptype
    t: 8-bit subtype
    v: valid bit

  iiii iiiiiii iiiiiii iiiiiii iiiiiii
    i: m-bit id/weight

  llll lllllll lllllll lllllll lllllll
    l: m-bit length/jump
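For a rough sense of the shape (an illustrative sketch, not the actual
lfs.c encoder; names and layout are made up, and it assumes id/weight and
length/jump stay leb128-encoded), encoding a tag becomes a fixed le16 word
followed by two leb128 fields:

    #include <stdint.h>
    #include <stddef.h>

    // minimal leb128 encoder, returns the number of bytes written
    static size_t toleb128(uint32_t word, uint8_t *buf) {
        size_t i = 0;
        do {
            buf[i++] = (word & 0x7f) | (word > 0x7f ? 0x80 : 0);
            word >>= 7;
        } while (word);
        return i;
    }

    // buf must have room for 2 + 5 + 5 bytes in the worst case
    static size_t tag_encode(uint16_t tag, uint32_t weight, uint32_t size,
            uint8_t *buf) {
        // fixed 2-byte little-endian tag, no leb128 worst-case to plan for
        buf[0] = (uint8_t)(tag >> 0);
        buf[1] = (uint8_t)(tag >> 8);
        size_t off = 2;
        // id/weight and length/jump remain leb128 encoded
        off += toleb128(weight, &buf[off]);
        off += toleb128(size, &buf[off]);
        return off;
    }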
Also renamed the "mk" tags, since they no longer have special behavior
outside of providing names for entries:
- LFSR_TAG_MK => LFSR_TAG_NAME
- LFSR_TAG_MKBRANCH => LFSR_TAG_BNAME
- LFSR_TAG_MKREG => LFSR_TAG_REG
- LFSR_TAG_MKDIR => LFSR_TAG_DIR
B-trees with names are now working, though this required a number of
changes to the B-tree layout:
1. B-trees no longer require name entries (LFSR_TAG_MK) on each branch.
This is a nice optimization to the design, since these name entries
just waste space in purely weight-based B-trees, which are probably
going to be most B-trees in the filesystem.
If a name entry is missing, the struct entry, which is required,
should have the effective weight of the entry.
The first entry in every rbyd block is expected to have no name
entry, since this is the default path for B-tree lookups.
2. The first entry in every rbyd block _may_ have a name entry, which
is ignored. I'm calling these "vestigial names" to make them sound
cooler than they actually are.
These vestigial names show up in a couple complicated B-tree
operations:
- During B-tree split, since pending attributes are calculated before
the split, we need to play out pending attributes into the rbyd
before deciding what name becomes the name of the entry in the parent.
This creates a vestigial name which we _could_ immediately remove,
but the remove adds additional size to the must-fit split operation.
- During B-tree pop/merge, if we remove the leading no-name entry,
the second, named entry becomes the leading entry. This creates a
vestigial name that _looks_ easy enough to remove when making the
pending attributes for pop/merge, but turns out to be surprisingly
tricky if the parent undergoes a split/merge at the same time.
It may be possible to remove all these vestigial names proactively,
but this adds additional rbyd lookups to figure out the exact tag to
remove, complicates things in a fragile way, and doesn't actually
reduce storage costs until the rbyd is compacted.
The main downside is that these B-trees may be a bit more confusing
to debug.
These benchmarks are now more useful for seeing how these B-trees perform.
In plot.py/plotmpl.py:
- Added --legend as another alias for -l, --legend-right.
- Allowed omitting of datasets from the legend by using empty strings
in --labels.
- Do not sum multiple data points on the same x coordinate. This was a
bad idea that risks invalid results going unnoticed.
As a plus, multiple data points on the same x coordinate can now be abused for
a cheap representation of measurement error.
This reworks test.py/bench.py a bit to map arguments to ids as a first
step instead of deferring as much as possible. This is a better design
and avoids the hackiness around -b/-B. As a plus, test_id globbing is
easy to add.
I wondered if walking in Python 2's footsteps was going to run into the
same issues, and sure enough, memory-backed iterators became unwieldy.
The motivation for this change is that large ranges in tests, such as
iterators over seeds or permutations, became prohibitively expensive to
compile. This meant more iteration moving into tests with more steps to
reproduce failures. This sort of defeats the purpose of the test
framework.
The solution here is to move test permutation generation out of test.py
and into the test runner itself. This allows defines to generate their
values programmatically.
This does conflict with the test framework's support for sets of explicit
permutations, but this is fixed by also moving these "permutation sets"
down into the test runner.
I guess it turns out the closer your representation matches your
implementation, the better everything works.
Additionally the define caching layer got a bit of tweaking. We can't
precalculate the defines because of mutual recursion, but we can
precalculate which define/permutation each define id maps to. This is
necessary as otherwise figuring out each define's define-specific
permutation would be prohibitively expensive.
B-tree remove/merge is the most annoying part of B-trees.
The implementation here follows the same ideas implemented in push/split:
1. Defer splits/merges until compaction.
2. Assume our split/merge will succeed and play it out into the rbyd.
3. On the first sign of failure, revert any unnecessary changes by
appending deletes.
4. Do all of this in a single commit to avoid issues with single-prog
blocks.
Mapping this onto B-tree merge, the condition that triggers merge is
when our rbyd is <1/4 the block_size after compaction, and the condition
that aborts a merge is when our rbyd is >1/2 the block_size, since that
would trigger a split on a later compact.
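As a sketch of those thresholds (hypothetical helper names; the real
checks live inline in lfsr_btree_commit and are more involved):

    #include <stdbool.h>
    #include <stdint.h>

    // merging is only worth attempting if the compacted rbyd is quite
    // underfilled
    static bool rbyd_wants_merge(uint32_t compacted_size,
            uint32_t block_size) {
        return compacted_size < block_size/4;
    }

    // a merged rbyd above half the block would just split again on a
    // later compact, so the merge is aborted
    static bool rbyd_aborts_merge(uint32_t merged_size,
            uint32_t block_size) {
        return merged_size > block_size/2;
    }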
Weaving this into lfsr_btree_commit is a bit subtle, but relatively
straightforward all things considered.
One downside is it's not physically possible to try merging with both
siblings, so we have to choose just one to attempt a merge. We handle
the corner case of merging the last sibling in a block explicitly, and
in theory the other sibling will eventually trigger a merge during its
own compaction.
Extra annoying are the corner cases with merges in the root rbyd that
make the root rbyd degenerate. We really should avoid a compaction in
this case, as otherwise we would erase a block that we immediately
inline at a significant cost. However determining if our root rbyd is
degenerate is tricky. We can determine a degenerate root with children
by checking if our rbyd's weight matches the B-tree's weight when we
merge. But determining a degenerate root that is a leaf requires
manually looking up both children in lfsr_btree_pop to see if they will
result in a degenerate root. Ugh.
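A minimal sketch of the first check (hypothetical names, not the actual
lfs.c code): if the rbyd we just merged carries the B-tree's entire
weight, the root above it must have become degenerate.

    #include <stdbool.h>
    #include <stdint.h>

    // if the merged rbyd's weight matches the whole B-tree's weight, the
    // root no longer distinguishes any children and is degenerate
    static bool merge_makes_root_degenerate(uint32_t merged_rbyd_weight,
            uint32_t btree_weight) {
        return merged_rbyd_weight == btree_weight;
    }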
On the bright side, this does all seem to be working now, which
completes the last of the core B-tree algorithms.
This was a rather simple exercise. lfsr_btree_commit does most of the
work already, so all this needed was setting up the pending attributes
correctly.
Also:
- Tweaked dbgrbyd.py's tree rendering to match dbgbtree.py's.
- Added a print to each B-tree test to help find the resulting B-tree
when debugging.
Changed so there is no 1-to-1 mk-tag/id assumption; any unique ids
create a simulated lifetime to render. This fixes the issue where
grows/shrinks' left-aligned ids confused dbgrbyd.py.
As a plus, now dbgrbyd.py can actually handle multi-id grow/shrinks, and
is more robust against out-of-sync grow/shrinks. These sorts of lifetime issues
are when you'd want to run dbgrbyd.py, so it's a bit important this is handled
gracefully.
An example:
$ ./scripts/dbgbtree.py -B4096 disk 0xaa -t -i
btree 0xaa.1000, rev 35, weight 278
block ids name tag data
(truncated)
00aa.1000: +-+ 0-16 branch id16 3 7e d4 10 ~..
007e.0854: | |-> 0 inlined id0 1 73 s
| |-> 1 inlined id1 1 74 t
| |-> 2 inlined id2 1 75 u
| |-> 3 inlined id3 1 76 v
| |-> 4 inlined id4 1 77 w
| |-> 5 inlined id5 1 78 x
| |-> 6 inlined id6 1 79 y
| |-> 7 inlined id7 1 7a z
| |-> 8 inlined id8 1 61 a
| |-> 9 inlined id9 1 62 b
...
This added the idea of block+limit addresses such as 0xaa.1000. Added
this as an option to dbgrbyd.py along with a couple other tweaks:
- Added block+limit support (0x<block>.<limit>).
- Fixed in-device representation indentation when trees are present.
- Changed fromtag to implicitly fix up ids/weights off-by-one-ness; this
is consistent with lfs.c.
This involves many, many hacks, but is enough to test the concept
and start looking at how it interacts with different block sizes.
Note only append (lfsr_btree_push on the end) is implemented, and it
makes some assumptions about how the ids can interact when splitting
rbyds.
This implements a common B-tree using rbyds as inner nodes.
Since our rbyds actually map to sorted arrays, this fits together quite
well.
The main caveat/concern is that we can't rely on strict knowledge of the
on-disk size of these things. This first shows up with B-tree insertion:
we can't split in preparation to insert as we descend down the tree.
Normally, this means our B-tree would require recursion in order to keep
track of each parent as we descend down our tree. However, we can
avoid this by not storing our parent, but by looking it up again on each
step of the splitting operation.
This brute-force-ish approach makes our algorithm tail-recursive, so
bounded RAM, but raises our runtime from O(logB(n)) to O(logB(n)^2).
That being said, O(logB(n)^2) is still sublinear, and, thanks to
B-tree's extremely high branching factor, may be insignificant.
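To put rough numbers on it (assuming, say, a branching factor around
B = 100): for n = 1,000,000 entries, logB(n) is about 3, so the squared
cost is only on the order of 9 node visits instead of 3.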
The way sparse ids interact with our flat id+attr tree is a bit wonky.
Normally, with weighted trees, one entry is associated with one weight.
But since our rbyd trees use id+attr pairs as keys, in theory each set of
id+attr pairs should share a single weight.
+-+-+-+-> id0,attr0  -.
| | | '-> id0,attr1   +- weight 5
| | '-+-> id0,attr2  -'
| |   |
| |   '-> id5,attr0  -.
| '-+-+-> id5,attr1   +- weight 5
|   | '-> id5,attr2  -'
|   |
|   '-+-> id10,attr0 -.
|     '-> id10,attr1  +- weight 5
'-------> id10,attr2 -'
To make this representable, we could give a single id+attr pair the
weight, and make the other attrs have a weight of zero. In our current
scheme, attr0 (actually LFSR_TAG_MK) is the only attr required for every
id, and it has the benefit of being the first attr found during
traversal. So it is the obvious choice for storing the id's effective weight.
But there's still some trickiness. Keep in mind our ids are derived from
the weights in the rbyd tree. So if we follow intuition and implement this naively:
+-+-+-+-> id0,attr0  weight 5
| | | '-> id5,attr1  weight 0
| | '-+-> id5,attr2  weight 0
| |   |
| |   '-> id5,attr0  weight 5
| '-+-+-> id10,attr1 weight 0
|   | '-> id10,attr2 weight 0
|   |
|   '-+-> id10,attr0 weight 5
|     '-> id15,attr1 weight 0
'-------> id15,attr2 weight 0
Suddenly the ids in the attr sets don't match!
It may be possible to work around this with special cases for attr0, but
this would complicate the code and make the presence of attr0 a strict
requirement.
Instead, if we associate each attr set with not the smallest id in the
weight but the largest id in the weight, so id' = id+(weight-1), then
our requirements work out while still keeping each attr set on the same
low-level id:
+-+-+-+-> id4,attr0  weight 5
| | | '-> id4,attr1  weight 0
| | '-+-> id4,attr2  weight 0
| |   |
| |   '-> id9,attr0  weight 5
| '-+-+-> id9,attr1  weight 0
|   | '-> id9,attr2  weight 0
|   |
|   '-+-> id14,attr0 weight 5
|     '-> id14,attr1 weight 0
'-------> id14,attr2 weight 0
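For example, the first entry above spans ids 0-4 (weight 5), so its
attrs are keyed on id' = 0+(5-1) = 4, the next entry's on
id' = 5+(5-1) = 9, and so on.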
To be blunt, this is unintuitive, and I'm worried it may be its own
source of complexity/bugs. But this representation does solve the problem
at hand, so I'm just going to see how it works out.
- Fixed off-by-one id for unknown tags.
- Allowed block_size and block to go unspecified, assumes the block
device is one big block in that case.
- Added --buffer and --ignore-errors to watch.py, making it a bit better
for watching slow and sometimes-erroring scripts, such as dbgrbyd.py when
watching a block device under test.
Well not really fixed, more just added an assert to make sure
lfsr_rbyd_lookup is not called with tag 0. Because our alt tags only
encode less-than-or-equal and greater-than, which can be flipped
trivially, it's not possible to encode removal of tag 0 during deletes.
Fortunately, this tag should already not exist for other pragmatic
reasons; it was just used as the initial value for traversals, where it
could cause this bug.
If we combine rbyd ids and B-tree weights, we need 32-bit ids since this
will eventually need to cover the full range of a file. This simply
doesn't fit into a single word anymore, unless littlefs uses 64-bit tags.
Generally not a great idea for a filesystem targeting even 8-bit
microcontrollers.
So here is a tag encoding that uses 3 leb128 words. This will likely
have more code cost and slightly more disk usage (we can no longer fit
tags into 2 bytes), though with most tags being alt pointers (O(m log m)
vs O(m)), this may not be that significant.
Note that we try to keep tags limited to 14 bits to avoid an extra leb128 byte,
which would likely affect all alt pointers. To pull this off we do away
with the subtype/suptype distinction, limiting in-tree tag types to
10 bits encoded on a per-suptype basis:
in-tree tags:
  ttttttt ttt00rv
    t: 10-bit type
    r: removed bit
    v: valid bit
  iiii iiiiiii iiiiiii iiiiiii iiiiiii
    i: n-bit id
  lllllll lllllll lllllll lllllll
    l: m-bit length

out-of-tree tags:
  ttttttt ttt010v
    t: 10-bit type
    v: valid bit
  0000000
  lllllll lllllll lllllll lllllll
    l: m-bit length

alt tags:
  kkkkkkk kkk1dcv
    k: 10-bit key
    d: direction bit
    c: color bit
    v: valid bit
  wwww wwwwwww wwwwwww wwwwwww wwwwwww
    w: n-bit weight
  jjjjjjj jjjjjjj jjjjjjj jjjjjjj
    j: m-bit jump
The real pain is that with separate integers for id and tag, it no
longer makes sense to combine these into one big weight field. This
requires a significant rewrite.
- Added both uattr (limited to 256) and id (limited to 65535) benchmarks
covering the main rbyd operations
- Fixed issue where --defines gets passed to the test/bench runners when
querying id-specific information. After changing the test/bench
runners to prioritize explicit defines, this causes problems for
recorded benchmark results and debug related things.
- In plot.py/plotmpl.py, made --by/-x/-y in subplots behave somewhat
reasonably, contributing to a global dataset and the figure's legend,
colors, etc, but only shown in the specified subplot. This is useful
mainly for showing different -y values on different subplots.
- In plot.py/plotmpl.py, added --labels to allow explicit configuration
of legend labels, much like --colors/--formats/--chars/etc. This
removes one of the main annoying needs for modifying benchmark results.
I'm still not sure this is the best decision, since it may add some
complexity to tag parsing, but making most crcs one byte may be valuable
since these exist in every single commit.
This gives tags three high-level encodings:
in-tree tags:
  iiiiiii iiiiitt ttTTTTT TTT00rv
    i: 16-bit id
    t: 4-bit suptype
    T: 8-bit subtype
    r: removed bit
    v: valid bit
  lllllll lllllll lllllll lllllll
    l: n-bit length

out-of-tree tags:
  ------- -----TT TTTTTTt ttt01pv
    T: 8-bit subtype
    t: 4-bit suptype
    p: perturb bit
    v: valid bit
  lllllll lllllll lllllll lllllll
    l: n-bit length

alt tags:
  wwwwwww wwwwwww wwwwwww www1dcv
    w: 28-bit weight
    d: direction bit
    c: color bit
    v: valid bit
  jjjjjjj jjjjjjj jjjjjjj jjjjjjj
    j: n-bit jump
Having the location of the subtype flipped for crc tags vs tree tags is
unintuitive, but it makes more crc tags fit in a single byte, while
preserving expected tag ordering for tree tags.
The only case where crc tags don't fit in a single byte is if non-crc
checksums (sha256?) are added, at which point I expect the subtype to
indicate which checksum algorithm is in use.
$ ./scripts/dbgrbyd.py disk 4096 0 -t
mdir 0x0, rev 1, size 121
off tag data (truncated)
0000005e: +-+-+--> uattr 0x01 4 aa aa aa aa ....
0000000f: | | '--> uattr 0x02 4 aa aa aa aa ....
0000001d: | '----> uattr 0x03 4 aa aa aa aa ....
0000002d: | .----> uattr 0x04 4 aa aa aa aa ....
0000003d: | | .--> uattr 0x05 4 aa aa aa aa ....
0000004f: '-+-+-+> uattr 0x06 4 aa aa aa aa ....
00000004: '> uattr 0x07 4 aa aa aa aa ....
Unfortunately this tree can end up a bit confusing when alt pointers
live in unrelated search paths...
Toying around with the idea that since rbyd trees have strict height
guarantees after compaction (2*log2(n)+1), we can proactively calculate
the maximum on-disk space required for a worst-case tree+leb128
encoding.
This would _greatly_ simplify things such as metadata compaction and
splitting, and allow unstorable file metadata (too many custom
attributes) to error early.
One issue is that this calculated worst case will likely be ~4-5x worse
than the actual encoding due to leb128 compression. Though this may be an
acceptable tradeoff for the simplification and more reliable behavior.
Previously the subtype was encoded above the suptype. This was an issue
if you wanted to, say, traverse all tags in a given suptype.
I'm not sure yet if this sort of functionality is needed; it may be
useful for cleaning up/replacing classes of tags, such as file struct
tags, but I'm not sure yet. At the very least it avoids unintuitive tag
ordering in the tree, which could potentially cause problems for
create/deletes.
New encoding:
tags:
  iiiiiii iiiiitt ttTTTTT TTT0trv
    i: 16-bit id
    t: 5-bit suptype (split)
    T: 8-bit subtype
    r: perturb/remove bit
    v: valid bit
  lllllll lllllll lllllll lllllll
    l: n-bit length

alts:
  wwwwwww wwwwwww wwwwwww www1dcv
    w: 28-bit weight
    c: color bit
    d: direction bit
    v: valid bit
  jjjjjjj jjjjjjj jjjjjjj jjjjjjj
    j: n-bit jump
Also a large amount of name changes and other cleanup.
Tree deletion is such a pain. It always seems like an easy addition to
the core algorithm but always comes with problems.
The initial plan for deletes was to iterate through all tags, tombstone,
and then adjust weights as needed. This accomplishes deletes with little
change to the rbyd algorithm, but adds a complex traversal inside the
commit logic. Doable in one commit, but complex. It also risks weird
unintuitive corner cases since the cost of deletion grows with the number
of tags being deleted (O(m log n)).
But this rbyd data structure is a tree, so in theory it's possible to
delete a whole range of tags in a single O(log n) operation.
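For a rough sense of the difference: tombstoning 100 tags in a
1000-tag rbyd costs on the order of 100*log2(1000) ~= 1000 alt
traversals, while a single range delete is on the order of
log2(1000) ~= 10.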
---
This is a proof-of-concept range deletion algorithm for rbyd trees.
Note, this does not preserve rbyd's balancing properties! But it is no
worse than tombstoning. This is acceptable for littlefs as any
unbalanced trees will be rebalanced during compaction.
The idea is to follow the same underlying dhara algorithm, where we
follow a search path and save any alt pointers not taken, but we follow
both search paths that form the outside of the range, and only keep
outside edges.
For example, a tree:
      .-------o-------.
      |               |
  .---o---.       .---o---.
  |       |       |       |
.-o-.   .-o-.   .-o-.   .-o-.
|   |   |   |   |   |   |   |
a   b   c   d   e   f   g   h
To delete the range d-e, we would search for d, and search for e:
      ********o********
      *               *
  .---*****       *****---.
  |       *       *       |
.-o-.   .-***   ***-.   .-o-.
|   |   |   *   *   |   |   |
a   b   c   d   e   f   g   h
And keep the outside edges:
  .---                 ---.
  |                       |
.-o-.   .-         -.   .-o-.
|   |   |           |   |   |
a   b   c           f   g   h
But how do we combine the outside edges? The simpler option is to do
both searches separately, one after the other. This would end up with a
tree like this:
  .---------o
  |         |
.-o-.   .---o
|   |   |   |
a   b   c   o---------.
            |         |
            o---.   .-o-.
            |   |   |   |
            _   f   g   h
But this horribly throws off the balance of our tree! It's worse than
tombstoning, and gets worse with more tags.
An alternative strategy, which is used here, is to alternate edges as we
descend down the tree. This unfortunately is more complex, and requires
~2x the RAM, but better preserves the balance of our tree. It isn't
perfect, because we lose color information, but we can leave that up to
compaction:
  .---------o
  |         |
.-o-.       o---------.
|   |       |         |
a   b   .---o       .-o-.
        |   |       |   |
        c   o---.   g   h
            |   |
            _   f
I also hope this can be merged into lfs_rbyd_append, deduplicating the
entire core rbyd append algorithm.
Considered adding --ignore-errors to watch.py, but it doesn't really
make sense with watch.py's implementation. watch.py would need to not update
in realtime, which conflicts with other use cases.
It's quite lucky a spare bit is free in the tag encoding; this means we
don't need a reserved length value as originally planned. We end up using
all of the bits that overlap the alt pointer encoding, which is nice and
unexpected.
It turns out statefulness works quite well with this algorithm (The
prototype was in Haskell, which created some artificial problems. I
think it may have just been too high-level a language for this
near-instruction-level algorithm).
This bias makes it so that tag lookups always find a tag strictly >= the
requested tag, unless we are at the end of the tree.
This makes tree traversal trivial, which is quite nice.
Need to remove ntag now; it's no longer needed.
- Moved alt encoding 0x1 => 0x4, which can lead to slightly better
lookup tables; the perturb bit takes the same place as the color bit,
which means both can be ignored in readonly operations.
- Dropped lfs_rbyd_fetchmatch; asking each lfs_rbyd_fetch to include NULL
isn't that bad.
New encoding:
tags:
  iiii iiiiiii iiiiiTT TTTTTTt ttt0tpv
    i: 16-bit id
    T: 8-bit type2
    t: 5-bit type1
    p: perturb bit
    v: valid bit
  llll lllllll lllllll lllllll lllllll
    l: n-bit length

alts:
  wwww wwwwwww wwwwwww wwwwwww www1dcv
    w: 28-bit weight
    c: color bit
    d: direction bit
    v: valid bit
  jjjj jjjjjjj jjjjjjj jjjjjjj jjjjjjj
    j: n-bit jump
Initially I thought the fcrc would be sufficient for all of the
end-of-commit context, since indicating that there is a new commit is as
simple as invalidating the fcrc. But it turns out there are cases that
make this impossible.
The surprising, and actually common, case is that of an fcrc that
will end up containing a full commit. This is common as soon as the
prog_size is big, as small commits are padded to the prog_size at
minimum.
   .------------------. \
   |     metadata     | |
   |                  | |
   |                  | +-.
   |------------------| | |
   |    forward CRC -----------.
   |------------------| / |   |
   |    commit CRC -------'   |
   |------------------|       |
   |     padding      |       |
   |                  |       |
   |------------------| \   \ |
   |     metadata     | |   | |
   |                  | +-. | |
   |                  | | | +-'
   |------------------| / | |
   |    commit CRC -------' |
   |------------------|     |
   |                  |     /
   '------------------'
When the commit + crc is all contained in the fcrc, something silly
happens with the math behind crcs. Everything in the commit gets
canceled out:
  crc(m)           = m(x) * x^(|P|-1) mod P(x)

  m ++ crc(m)      = m(x) * x^(|P|-1) + (m(x) * x^(|P|-1) mod P(x))

  crc(m ++ crc(m)) = (m(x) * x^(|P|-1) + (m(x) * x^(|P|-1) mod P(x))) * x^(|P|-1) mod P(x)
                   = (m(x) * x^(|P|-1) + m(x) * x^(|P|-1)) * x^(|P|-1) mod P(x)
                   = 0 * x^(|P|-1) mod P(x)
                   = 0

(The inner mod P(x) can be dropped because everything is reduced mod P(x)
at the end, and addition here is carry-less xor, so the two identical
terms cancel.)
This is the reason the crc of a message + naive crc is zero. Even with an
initializer/bit-fiddling, the crc of the whole commit ends up as some
constant.
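A quick way to see this outside of littlefs (a plain reflected crc32 with
zero init and no final xor, not lfs_crc itself; with the usual
init/xorout you get a fixed nonzero constant instead of zero):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    // plain reflected crc32 (poly 0xedb88320), zero init, zero xorout
    static uint32_t crc32_(uint32_t crc, const void *buffer, size_t size) {
        const uint8_t *data = buffer;
        for (size_t i = 0; i < size; i++) {
            crc ^= data[i];
            for (int j = 0; j < 8; j++) {
                crc = (crc >> 1) ^ ((crc & 1) ? 0xedb88320 : 0);
            }
        }
        return crc;
    }

    int main(void) {
        uint8_t m[64] = "any message at all";
        size_t len = strlen((char*)m);

        // append crc(m) little-endian
        uint32_t crc = crc32_(0, m, len);
        m[len+0] = (uint8_t)(crc >>  0);
        m[len+1] = (uint8_t)(crc >>  8);
        m[len+2] = (uint8_t)(crc >> 16);
        m[len+3] = (uint8_t)(crc >> 24);

        // prints 00000000 no matter what the message is
        printf("%08x\n", crc32_(0, m, len+4));
        return 0;
    }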
So no manipulation of the commit can change the fcrc...
But even if this did work, or we changed this scheme to use two
different checksums, it would still require calculating the fcrc of
the whole commit to know if we need to tweak the first bit to invalidate
the unlikely-but-problematic case where we happen to match the fcrc. This
would add a large amount of complexity to the commit code.
It's much simpler and cheaper to keep the 1-bit counter in the tag, even
if it adds another moving part to the system.
This fixes most of the remaining bugs (except one with multiple padding
commits + noop erases in test_badblocks), with some other code tweaks.
The biggest change was dropping reliance on end-of-block commits to know
when to stop parsing commits. We can just continue to parse tags and
rely on the crc to catch bad commits, avoiding a backwards-compatibility
hiccup. So no new commit tag.
Also renamed nprogcrc -> fcrc and commitcrc -> ccrc and made naming in
the code a bit more consistent.
Previously forward-looking CRCs were just two new CRC types, one for
commits with forward-looking CRCs, one without. These both contained the
CRC needed to complete the current commit (note that the commit CRC
must come last!).
[-- 32 --|-- 32 --|-- 32 --|-- 32 --]
with: [ crc3 tag | nprog size | nprog crc | commit crc ]
without: [ crc2 tag | commit crc ]
This meant there had to be several checks for the two possible structure
sizes, messing up the implementation.
[-- 32 --|-- 32 --|-- 32 --|-- 32 --|-- 32 --]
with: [nprogcrc tag| nprog size | nprog crc | commit tag | commit crc ]
without: [ commit tag | commit crc ]
But we already have a mechanism for storing optional metadata! The
different metadata tags! So why not use a separate tag for the
forward-looking CRC, separate from the commit CRC?
I wasn't sure this would actually help that much, since there are still
necessary conditions for whether or not a forward-looking CRC is there,
but in the end it simplified the code quite nicely, and resulted in a ~200 byte
code-cost saving.
This change is necessary to handle out-of-order writes found by pjsg's
fuzzing work.
The problem is that it is possible for (non-NOR) block devices to write
pages in any order, or to even write random data in the case of a
power-loss. This breaks littlefs's use of the first bit in a page to
indicate the erase-state.
pjsg notes this behavior is documented in the W25Q here:
https://community.cypress.com/docs/DOC-10507
---
The basic idea here is to CRC the next page, and use this "erase-state CRC" to
check if the next page is erased and ready to accept programs.
   .------------------. \  commit
   |     metadata     | |
   |                  | +---.
   |                  | |   |
   |------------------| |   |
   | erase-state CRC -----. |
   |------------------| | | |
   |    commit CRC ----|-|-'
   |------------------| / |
   |     padding      |   |  padding (doesn't need CRC)
   |                  |   |
   |------------------| \ |  next prog
   |     erased?      | +-'
   |        |         | |
   |        v         | /
   |                  |
   |                  |
   '------------------'
This is made a bit annoying since littlefs doesn't actually store the
page size (prog_size) in the superblock, since it doesn't need to know the
size for any other operation. We can work around this by storing both
the CRC and size of the next page when necessary.
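Roughly, the stored pair and the check might look like this (an
illustrative sketch with made-up names and a plain crc32, not the actual
lfs.c code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    // what the commit stores: the size of the next page and the crc of
    // its (hopefully erased) contents at commit time
    struct esc {
        uint32_t size;
        uint32_t crc;
    };

    // plain crc32 for the sketch; littlefs would use lfs_crc through its
    // block-device cache
    static uint32_t crc32_(uint32_t crc, const uint8_t *data, size_t size) {
        for (size_t i = 0; i < size; i++) {
            crc ^= data[i];
            for (int j = 0; j < 8; j++) {
                crc = (crc >> 1) ^ ((crc & 1) ? 0xedb88320 : 0);
            }
        }
        return crc;
    }

    // if the next page still matches the stored crc (computed the same
    // way at commit time), nothing has been programmed there since, so
    // it's safe to append a new commit
    static bool next_page_erased(const struct esc *esc,
            const uint8_t *next_page) {
        return crc32_(0, next_page, esc->size) == esc->crc;
    }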
Another interesting note is that we don't need any bit-tweaking
information, since we read the next page every time we would need to
know how to clobber the erase-state CRC. And since we only read
prog_size, this works really well with our caching, since the caches
must be a multiple of prog_size.
This also brings back the internal lfs_bd_crc function, in which we can
use some optimizations added to lfs_bd_cmp.
Needs some cleanup but the idea is passing most relevant tests.