Reworked revision count logic a bit, block_cycles -> block_recycles
The original goal here was to restore all of the revision count/
wear-leveling features that were intentionally ignored during
refactoring, but over time a few other ideas to better leverage our
revision count bits crept in, so this is sort of the amalgamation of
that...

Note! None of these changes affect reading. mdir fetch strictly needs
only to look at the revision count as a big 32-bit counter to determine
which block is the most recent.

The interesting thing about the original definition of the revision
count, a simple 32-bit counter, is that it actually only needs 2-bits to
work. Well, three states really: 1. most recent, 2. less recent, 3.
future most recent. This means the remaining bits are sort of up for
grabs to other things.
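
For a concrete picture, here's a minimal sketch of such a comparison,
in the spirit of littlefs's lfs_scmp helper (the exact fetch logic may
differ):

  #include <stdint.h>

  // a is "more recent" than b if the signed difference is positive,
  // which keeps working across 32-bit overflow as long as the two
  // counts are less than 2^31 apart
  static inline int32_t rev_cmp(uint32_t a, uint32_t b) {
      return (int32_t)(a - b);
  }

  // usage: rev_cmp(rev0, rev1) > 0 => block 0 is the most recent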

Previously, we've used the extra revision count bits as a heuristic for
wear-leveling. Here we reintroduce that, a bit more rigorously, while
also carving out space for a nonce to help with commit collisions.

Here's the new revision count breakdown (a rough field-extraction
sketch follows the list):

  vvvvrrrr rrrrrrnn nnnnnnnn nnnnnnnn
  '-.''----.----''---------.--------'
    '------|---------------|---------- 4-bit relocation revision
           '---------------|---------- recycle-bits recycle counter
                           '---------- pseudorandom nonce

- 4-bit relocation revision

  We technically only need 2-bits to tell which block is the most
  recent, but I've bumped it up to 4-bits just to be safe and to make
  it a bit more readable in hex form.

- recycle-bits recycle counter

  A user-configurable counter, this tracks how many times a metadata
  block has been erased. When it overflows, we return the block to the
  allocator so it can participate in block-level wear-leveling again.
  This implements our copy-on-bounded-write strategy.

- pseudorandom nonce

  The remaining bits we fill with a pseudorandom nonce derived from the
  filesystem's prng. Note this prng isn't the greatest (it's just the
  xor of all mdir cksums), but it gets the job done. It should also be
  reproducible, which can be a good thing.

  Suggested by ithinuel, the addition of a nonce should help with the
  commit collision issue caused by noop erases. It doesn't completely
  solve things, since we're only using crc32c cksums, not
  collision-resistant cryptographic hashes, but we still have the
  existing valid/perturb bit system to fall back on.
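
To make the layout concrete, here's a rough sketch of the field
extraction. The names and helpers are hypothetical, not the actual
littlefs code; recycle_bits (0 <= recycle_bits <= 28) stands in for
the counter width derived from block_recycles:

  #include <stdint.h>

  static inline uint32_t rev_reloc(uint32_t rev) {
      // top 4 bits: relocation revision
      return rev >> 28;
  }

  static inline uint32_t rev_recycles(uint32_t rev, uint8_t recycle_bits) {
      // next recycle_bits bits: recycle counter
      return (rev >> (28-recycle_bits)) & (((uint32_t)1 << recycle_bits)-1);
  }

  static inline uint32_t rev_nonce(uint32_t rev, uint8_t recycle_bits) {
      // remaining bits: pseudorandom nonce
      return rev & (((uint32_t)1 << (28-recycle_bits))-1);
  }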

When we allocate a new mdir, we want to zero the recycle counter. This
is where our relocation revision is useful for indicating which block is
the most recent:

  initial state: 10101010 10101010 10101010 10101010
                 '-.'
                  +1     zero           random
                   v .----'----..---------'--------.
  lfsr_rev_init: 10110000 00000011 01110010 11101111
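
In code, the init step might look something like this, building on the
hypothetical accessors above, with prng standing in for fresh bits from
the filesystem's prng:

  // bump the relocation revision so fetch sees this block as most
  // recent, zero the recycle counter, and start with a fresh nonce
  static inline uint32_t rev_init(uint32_t rev, uint8_t recycle_bits,
          uint32_t prng) {
      return (((rev_reloc(rev)+1) & 0xf) << 28)
              | (prng & (((uint32_t)1 << (28-recycle_bits))-1));
  }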

When we increment, we increment the recycle counter and xor in a new
nonce:

  initial state: 10110000 00000011 01110010 11101111
                 '--------.----''---------.--------'
                         +1              xor <-- random
                          v               v
  lfsr_rev_inc:  10110000 00000111 01010100 01000000

And when the recycle counter overflows, we relocate the mdir.

If we aren't wear-leveling, we just increment the relocation revision to
maximize the nonce.
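
A sketch of the increment step under the same assumptions, returning
whether the recycle counter overflowed and the mdir needs relocating;
the recycle_bits == 0 branch covers the no-wear-leveling case:

  #include <stdbool.h>

  static inline bool rev_inc(uint32_t *rev, uint8_t recycle_bits,
          uint32_t prng) {
      uint32_t nonce_mask = ((uint32_t)1 << (28-recycle_bits))-1;
      if (recycle_bits == 0) {
          // not wear-leveling: just bump the relocation revision and
          // mix a fresh nonce into all 28 low bits
          *rev = (((rev_reloc(*rev)+1) & 0xf) << 28)
                  | ((*rev ^ prng) & nonce_mask);
          return false;
      }

      // increment the recycle counter and xor a new nonce into the
      // nonce field, leaving the relocation revision untouched
      uint32_t recycles = rev_recycles(*rev, recycle_bits) + 1;
      *rev = (rev_reloc(*rev) << 28)
              | ((recycles & (((uint32_t)1 << recycle_bits)-1))
                  << (28-recycle_bits))
              | ((*rev ^ prng) & nonce_mask);
      // overflow => time to relocate the mdir
      return recycles >> recycle_bits;
  }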

---

Some other notes:

- Renamed block_cycles -> block_recycles.

  This is intended to help avoid confusing block_cycles with the actual
  physical number of erase cycles supported by the device.

  I've noticed this confusion a few times, and setting block_cycles to
  the device's physical erase-cycle limit is unfortunately roughly
  equivalent to disabling wear-leveling completely. This can be
  improved with better documentation, but also changing the name
  doesn't hurt.

- We now relocate both blocks in the mdir at the same time.

  Previously we only relocated one block in the mdir per recycle. This
  was necessary to keep our threaded linked-list in sync, but the
  threaded linked-list is now no more!

  Relocating both blocks is simpler, updates the mtree less often, is
  compatible with metadata redundancy, and avoids the aliasing issues
  that were a problem when relocating only one block.

  Note that block_recycles is internally multiplied by 2 so each block
  sees the correct number of erase cycles.

- block_recycles is now rounded down to a power-of-2.

  This makes the counter logic easier to work with and takes up less
  RAM in lfs_t (see the sketch after these notes). This is a rough
  heuristic anyways.

- Moved the lfs->seed updates into lfsr_mountinited + lfsr_mdir_commit.

  This avoids readonly operations affecting the seed and should help
  reproducibility.

- Changed rev count in dbg scripts to render as hex, similar to cksums.

  Now that we're using most of the bits in the revision count, the
  decimal version is, uh, not helpful...
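
As a rough sketch of that rounding, assuming only the bit width is
stored (littlefs already has related bit tricks like lfs_npw2, but a
portable loop keeps the example simple):

  #include <stdint.h>

  // round block_recycles down to a power-of-2, storing only the bit
  // width -- a uint8_t instead of a full 32-bit count
  static inline uint8_t rev_recycle_bits(uint32_t block_recycles) {
      uint8_t bits = 0;
      while (block_recycles >> 1) {
          block_recycles >>= 1;
          bits += 1;
      }
      return bits;
  }

  // e.g. block_recycles=1000 rounds down to 2^9=512, so each metadata
  // block sees roughly 512 erases before the mdir is relocated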

Code changes:

           code          stack
  before: 33342           2640
  after:  33434 (+0.3%)   2640 (+0.0%)