I think realistically no one is using this. It's already only partially
supported and untested.
Worst case, if someone does depend on this we can always revert.
lfs_init handles the checks/asserts of most configuration; moving these
checks near lfs_init keeps all of these checks close to each other.
Also updated the comments to avoid sometimes-ambiguous range notation.
And removed negative bounds checks. Negative bounds should be obviously
incorrect, and 0 is _technically_ not illegal for any define (though
admittedly unlikely to be correct).
This drops the lookahead buffer from operating on 32-bit words to
operating on 8-bit bytes, and removes any alignment requirement. This
may have some minor performance impact, but it is unlikely to be
significant when you consider IO overhead.
The original motivation for 32-bit alignment was an attempt at
future-proofing in case we wanted some more complex on-disk data
structure. This never happened, and even if it did, it could have been
added via additional config options.
This has been a significant pain point for users, since providing
word-aligned byte-sized buffers in C can be a bit annoying.
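For example, a configuration along these lines should now work (a sketch
assuming the existing lookahead_size/lookahead_buffer options; block device
callbacks and geometry are omitted):

    #include "lfs.h"

    // the lookahead buffer can now be a plain byte array, with no 32-bit
    // alignment requirement
    static uint8_t lookahead_buffer[16];

    static const struct lfs_config cfg = {
        // ... read/prog/erase/sync callbacks and geometry go here ...
        .lookahead_size = sizeof(lookahead_buffer),
        .lookahead_buffer = lookahead_buffer,
    };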
- Renamed lfs.free -> lfs.lookahead
- Renamed lfs.free.off -> lfs.lookahead.start
- Renamed lfs.free.i -> lfs.lookahead.next
- Renamed lfs.free.ack -> lfs.lookahead.ckpoint
- Renamed lfs_alloc_ack -> lfs_alloc_ckpoint
These were named a bit confusingly, and I think the new names make
their respective purposes a bit clearer.
At the very least it's now clear lfs.lookahead is related to the lookahead
buffer (and doesn't imply a closed free-bitmap).
Superblock expansion is an irreversible operation. In an effort to
prevent superblock expansion from claiming valuable scratch space
(important for small, <~8 block filesystems), littlefs prevents
superblock expansion when the disk is "mostly full".
In true computer-scientist fashion, this "mostly full" threshold was
set to ~50%.
As pointed out by gbolgradov and rojer, >~50% utilization is not
uncommon, and it can lead to a situation where superblock expansion does
not occur in a relatively healthy filesystem, causing focused wear at
the root.
To remedy this, the threshold is now increased to ~88% (7/8) full.
This may change in the future and should probably eventually be user
configurable.
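A rough sketch of the threshold change (not the actual littlefs internals;
the function name here is made up):

    #include <stdbool.h>
    #include "lfs.h"

    // superblock expansion is skipped while the disk is "mostly full"
    static bool can_expand_superblock(lfs_size_t in_use,
            lfs_size_t block_count) {
        // old: only expand while less than ~1/2 of the disk was in use
        // return in_use < block_count/2;

        // new: expand while less than ~7/8 of the disk is in use
        return in_use < block_count - block_count/8;
    }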
Found by gbolgradov and rojer
The idea is that in the future this function may be extended to support
other block janitorial work. In that case, calling it lfs_fs_gc provides a
more general name that can cover other operations.
This is currently just wishful thinking, however.
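A minimal usage sketch (the wrapper function is made up; config setup is
assumed elsewhere):

    #include "lfs.h"

    // run the janitorial work proactively, right after mount, instead of
    // paying for it lazily during later filesystem operations
    static int mount_and_gc(lfs_t *lfs, const struct lfs_config *cfg) {
        int err = lfs_mount(lfs, cfg);
        if (err) {
            return err;
        }

        return lfs_fs_gc(lfs);
    }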
The initial implementation for this was provided by kaetemi, originally
as a mount flag. However, it has been modified here to be self-contained
in an explicit runtime function that can be called after mount.
The reasons for an explicit function:
1. lfs_mount stays a strictly readonly operation, and avoids pulling in
all of the write machinery.
2. filesystem-wide operations such as lfs_fs_grow can be a bit risky and
irreversible. The action of growing the filesystem should be very
intentional.
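A sketch of the intended call pattern (the wrapper function is made up):

    #include "lfs.h"

    // growing is a separate, deliberate step after mount
    static int mount_and_grow(lfs_t *lfs, const struct lfs_config *cfg,
            lfs_size_t new_block_count) {
        int err = lfs_mount(lfs, cfg);
        if (err) {
            return err;
        }

        // irreversible: littlefs can't shrink back to the old block count
        return lfs_fs_grow(lfs, new_block_count);
    }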
---
One concern with this change is that this will be the first function
that changes metadata in the superblock. This might break tools that
expect the first valid superblock entry to contain the most recent
metadata, since only the last superblock in the superblock chain will
contain the updated metadata.
The code-cost wasn't that bad: 16556 B -> 16754 B (+1.2%)
But moving write support of older versions behind a compile-time flag
allows us to be a bit more liberal with what gets added to support older
versions, since the cost won't hit most users.
The intention is to help interop with older minor versions of littlefs.
Unfortunately, since lfs2.0 drivers cannot mount lfs2.1 images, there are
situations where it would be useful to write strictly lfs2.0-compatible
images. The solution here adds a "disk_version" configuration
option which determines the behavior of lfs2.1 dependent features.
Normally you would expect this to only change write behavior. But since the
main change in lfs2.1 increased validation of erased data, we also need to
skip this extra validation (fcrc) or see terrible slowdowns when writing.
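A configuration sketch (assuming the disk_version option described here; it
may sit behind a compile-time flag such as LFS_MULTIVERSION):

    #include "lfs.h"

    static const struct lfs_config cfg = {
        // ... block device callbacks and geometry go here ...

        // 16-bit major + 16-bit minor packed into 32 bits, 0 means "latest";
        // 0x00020000 asks for strictly lfs2.0-compatible images
        .disk_version = 0x00020000,
    };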
In terms of ease-of-use, a user familiar with other filesystems expects
block_usage in fsinfo. But in terms of practicality, block_usage can be
expensive to find in littlefs, so if it's not needed in the resulting
fsinfo, that operation is wasteful.
It's not clear to me what the best course of action is, but since
block_usage can always be added to fsinfo later, but not removed without
breaking backwards compatibility, I'm leaving this out for now.
Block usage can still be found by explicitly calling lfs_fs_size.
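For example, a sketch of finding utilization explicitly (the helper function
is made up):

    #include "lfs.h"

    // lfs_fs_size returns the number of allocated blocks, or a negative
    // error code
    static int utilization_percent(lfs_t *lfs, const struct lfs_config *cfg) {
        lfs_ssize_t in_use = lfs_fs_size(lfs);
        if (in_use < 0) {
            return (int)in_use;
        }

        return (int)((100 * in_use) / cfg->block_count);
    }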
Versions are now returned with major/minor packed into 32 bits,
so 0x00020001 is the current disk version, for example.
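For example, unpacking follows the same shift/mask pattern as the existing
LFS_VERSION/LFS_DISK_VERSION defines (the helper names here are made up):

    #include <stdint.h>

    static inline uint16_t disk_version_major(uint32_t v) {
        return 0xffff & (v >> 16);
    }

    static inline uint16_t disk_version_minor(uint32_t v) {
        return 0xffff & (v >> 0);
    }

    // disk_version_major(0x00020001) -> 2
    // disk_version_minor(0x00020001) -> 1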
1. This needed to change to use a disk_* prefix for consistency with the
defines that already exist for LFS_VERSION/LFS_DISK_VERSION.
2. Encoding the version this way has the nice side effect of making 0 an
invalid value. This is useful for adding a similar config option
that needs to have reasonable default behavior for backwards
compatibility.
In theory this uses more space, but in practice most other config/status
is 32-bits in littlefs. We would be wasting this space for alignment
anyways.
Currently this includes:
- minor_version - on-disk minor version
- block_usage - estimated number of in-use blocks
- name_max - configurable name limit
- file_max - configurable file limit
- attr_max - configurable attr limit
These are currently the only configuration options that need to be
written to disk. Other configuration is either needed to mount, such as
block_size, or does not change the on-disk representation, such as
read/prog_size.
This also includes the current block usage, which is common in other
filesystems, though it is more expensive to find in littlefs. I figure it's
not unreasonable to make lfs_fs_stat no worse than block allocation;
hopefully this isn't a mistake. It may be worth caching the current
usage after the most recent lookahead scan.
More configuration may be added to this struct in the future.
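A usage sketch based on the fields listed above (exact field names may
differ in the final API; the helper function is made up):

    #include <stdio.h>
    #include <inttypes.h>
    #include "lfs.h"

    static int print_limits(lfs_t *lfs) {
        struct lfs_fsinfo fsinfo;
        int err = lfs_fs_stat(lfs, &fsinfo);
        if (err) {
            return err;
        }

        printf("name_max: %"PRIu32"\n", fsinfo.name_max);
        printf("file_max: %"PRIu32"\n", fsinfo.file_max);
        printf("attr_max: %"PRIu32"\n", fsinfo.attr_max);
        return 0;
    }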
This fixes the overflowing left shift on 8 bit platforms.
littlefs2/lfs.c: In function ‘lfs_dir_commitcrc’:
littlefs2/lfs.c:1654:51: error: left shift count >= width of type [-Werror=shift-count-overflow]
commit->ptag = ntag ^ ((0x80 & ~eperturb) << 24);
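One way to address this, shown as a sketch rather than the exact patch, is
to promote the operand to a 32-bit type before shifting, so the shift by 24
is well-defined even where int is only 16 bits wide:

    commit->ptag = ntag ^ ((uint32_t)(0x80 & ~eperturb) << 24);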
lfs_fs_mkconsistent allows running the internal consistency operations
(desuperblock/deorphan/demove) on demand and without any other
filesystem changes.
This can be useful for front-loading and persisting consistency operations
when you don't want to pay for this cost on the first write to the
filesystem.
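A minimal usage sketch (the wrapper function is made up):

    #include "lfs.h"

    // pay the consistency cost at a convenient time, for example right
    // after mount, instead of on the first write
    static int mount_consistent(lfs_t *lfs, const struct lfs_config *cfg) {
        int err = lfs_mount(lfs, cfg);
        if (err) {
            return err;
        }

        return lfs_fs_mkconsistent(lfs);
    }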
Conveniently, this also offers a way to force the on-disk minor version
to bump, if that is wanted behavior.
Idea from kasper0
The underlying issue is that lfs_fs_deorphan did not update gstate
correctly. The way it determined if there are any orphans remaining in
the filesystem was by subtracting the number of found orphans from an
internal counter.
This internal counter is a leftover from a previous implementation that
allowed leaving the lfs_fs_deorphan loop early if we know the number of
expected orphans. This can happen during recursive mdir relocations, but
with only a single bit in the gstate, can't happen during mount. If we
detect orphans during mount, we set this internal counter to 1, assuming
we will find at least one orphan.
But this presents a problem: what if we find _no_ orphans? If this happens
we never decrement the internal counter of orphans, so we would never
clear the bit in the gstate. This leads to a running lfs_fs_deorphan
on more-or-less every mutable operation in the filesystem, resulting in
an extreme performance hit.
The solution here is to not subtract the number of found orphans, but assume
that when our lfs_fs_deorphan loop finishes, we will have no orphans, because
that's the whole point of lfs_fs_deorphan.
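A much-simplified sketch of the logic change; the struct and counter below
are hypothetical stand-ins, not the actual littlefs internals:

    // stand-in for the orphan bits carried in the gstate
    struct orphan_state {
        int orphan_count;
    };

    static void deorphan_sketch(struct orphan_state *state, int found) {
        // old: state->orphan_count -= found;  // stuck nonzero if found == 0
        (void)found;

        // new: once the scan completes there are no orphans left, period
        state->orphan_count = 0;
    }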
Note that the early termination of lfs_fs_deorphan was dropped because
it would not actually change the runtime complexity of lfs_fs_deorphan,
added code cost, and risked fragile corner cases such as this one.
---
Also added tests to assert we run lfs_fs_deorphan at most once.
Found by kasper0 and Ldd309
This just means a rewrite of the superblock entry with the new minor
version.
Though it's interesting to note that we don't need to rewrite the superblock
entry until the first write operation in the filesystem, an optimization
that is already in use for fixing orphans and in-flight moves.
To keep track of any outdated minor version found during lfs_mount, we
can carve out a bit from the reserved bits in our gstate. These are
currently used for a counter tracking the number of orphans in the
filesystem, but this is usually a very small number so this hopefully
won't be an issue.
In-device gstate tag:
    [--      32       --]
    [1|- 11 -| 10 |1| 9 ]
     ^----^-----^--^--^-- 1-bit has orphans
          '-----|--|--|-- 11-bit move type
                '--|--|-- 10-bit move id
                   '--|-- 1-bit needs superblock
                      '-- 9-bit orphan count
This was just an oversight. Seeking to the end of the directory should
not error, but instead restore the directory to the state where the next
read returns 0.
Note this matches the behavior of lfs_file_tell/lfs_file_seek.
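A usage sketch of the fixed behavior (the helper function is made up):

    #include "lfs.h"

    static int seek_to_end(lfs_t *lfs, lfs_dir_t *dir) {
        // walk to the end of the directory
        struct lfs_info info;
        while (lfs_dir_read(lfs, dir, &info) > 0) {
        }

        // seeking back to the end offset should now succeed...
        lfs_soff_t end = lfs_dir_tell(lfs, dir);
        if (end < 0) {
            return (int)end;
        }

        int err = lfs_dir_seek(lfs, dir, end);
        if (err) {
            return err;
        }

        // ...and the next read should return 0, not an error
        return lfs_dir_read(lfs, dir, &info);
    }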
Found by sosthene-nitrokey
There was already an assert for this, but because it included the
underlying equation for the requirement, it was too confusing for
users who had no prior knowledge of why the assert could trigger.
The math works out such that 128 bytes is a reasonable minimum
requirement, so I've added that number as an explicit assert.
Hopefully this makes this sort of situation easier to debug.
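The new check amounts to something along these lines (a sketch; the real
assert lives with the other configuration checks in lfs_init):

    // 128 bytes is the minimum block_size littlefs can work with
    // (see DESIGN.md for where this number comes from)
    LFS_ASSERT(cfg->block_size >= 128);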
Note that this requirement would need to be increased to 512 bytes if
block addresses are ever increased to 64 bits. DESIGN.md goes into more
detail on why this is.