Compare commits

...

26 Commits

Author SHA1 Message Date
Christopher Haster
8b8fd14187 Added inline_max, to optionally limit the size of inlined files
Inlined files live in metadata and decrease storage requirements, but
may be limited to improve metadata-related performance. This is
especially important given the current plague of metadata performance
issues.

Note that decreasing inline_max may make metadata more dense and
increase block usage, so it's important to benchmark if optimizing for
speed.

The underlying limits of inlined files haven't changed:
1. Inlined files need to fit in RAM, so <= cache_size
2. Inlined files need to fit in a single attr, so <= attr_max
3. Inlined files need to fit in 1/8 of a block to avoid metadata
   overflow issues, this is after limiting by metadata_max,
   so <= min(metadata_max, block_size)/8

By default, the largest possible inline_max is used. This preserves
backwards compatibility and is probably a good default for most use
cases.

This does have the awkward effect of requiring inline_max=-1 to
indicate disabled inlined files, but I don't think there's a good
way around this.
2024-01-19 13:00:27 -06:00
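
A minimal sketch of limiting inlined files through lfs_config (the
values here are illustrative, not recommendations):

  // hypothetical config, limiting inlined files to 64 bytes
  struct lfs_config cfg = {
      // ... block device functions, sizes, and buffers ...
      .cache_size = 256,
      .inline_max = 64,  // 0 => largest possible, -1 => disable inlining
  };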
Christopher Haster
09972a1710 Merge pull request #913 from littlefs-project/gc-compactions
Extend lfs_fs_gc to compact metadata, compact_thresh
2024-01-19 12:51:11 -06:00
Christopher Haster
ed7bd05435 Merge pull request #912 from littlefs-project/relaxed-lookahead
Relaxed lookahead alignment, other internal block alloc readability improvements
2024-01-19 12:27:14 -06:00
Christopher Haster
b5cd957f42 Extended lfs_fs_gc to compact metadata, compact_thresh
This extends lfs_fs_gc to now handle three things:

1. Calls mkconsistent if not already consistent
2. Compacts metadata > compact_thresh
3. Populates the block allocator

Which should be all of the janitorial work that can be done without
additional on-disk data structures.

Normally, metadata compaction occurs when an mdir is full, and results in
mdirs that are at most block_size/2.

Now, if you call lfs_fs_gc, littlefs will eagerly compact any mdirs that
exceed the compact_thresh configuration option. Because the resulting
mdirs are at most block_size/2, it only makes sense for compact_thresh to
be >= block_size/2 and <= block_size.

Additionally, there are some special values:

- compact_thresh=0  => defaults to ~88% block_size, may change
- compact_thresh=-1 => disables metadata compaction during lfs_fs_gc

Note that compact_thresh only affects lfs_fs_gc. Normal compactions
still only occur when full.
2024-01-19 12:25:45 -06:00
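
A sketch of how this might be wired up, assuming a 4096-byte block size
(the threshold value is illustrative):

  struct lfs_config cfg = {
      // ... block device functions, sizes, and buffers ...
      .block_size = 4096,
      .compact_thresh = 3*4096/4,  // eagerly compact mdirs >75% full
  };

  // later, in a less time-critical code path
  int err = lfs_fs_gc(&lfs);  // mkconsistent + compaction + alloc scan
  if (err) {
      // handle error
  }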
Christopher Haster
1195d606ae Merge pull request #909 from littlefs-project/easy-util-defines
Add some easier util overrides: LFS_MALLOC/FREE/CRC
2024-01-19 12:24:16 -06:00
Christopher Haster
1711bdef76 Merge pull request #886 from BrianPugh/macro-sanity-check
Add value-range checks for user-definable macros at compile-time
2024-01-19 12:23:36 -06:00
Christopher Haster
6691718b18 Restricted LFS_FILE_MAX to signed 32-bits, <2^31, <=2147483647
I think realistically no one is using this. It's already only partially
supported and untested.

Worst case, if someone does depend on this we can always revert.
2024-01-16 23:40:30 -06:00
Christopher Haster
1fefcbbcba Rearranged compile-time constant checks to live near lfs_init
lfs_init handles the checks/asserts for most configuration, so moving
these checks near lfs_init keeps them all in one place.

Also updated the comments to avoid sometimes-ambiguous range notation.

And removed negative bounds checks. Negative bounds should be obviously
incorrect, and 0 is _technically_ not illegal for any define (though
admittedly unlikely to be correct).
2024-01-16 23:39:51 -06:00
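
For example, building with an out-of-range define now fails at compile
time; a sketch of the check involved (the same #error pattern appears
in the lfs.c diff below):

  // cc -c -DLFS_NAME_MAX=2048 lfs.c
  #if LFS_NAME_MAX > 1022
  #error "Invalid LFS_NAME_MAX, must be <= 1022"
  #endif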
Christopher Haster
60567677b9 Relaxed alignment requirements for lfs_malloc
The only reason we needed this alignment was for the lookahead buffer.

Now that the lookahead buffer is relaxed to operate on bytes, we can
relax our malloc alignment requirement all the way down to the byte
level, since we mainly use lfs_malloc to allocate byte-level buffers.

This does introduce a risk that we might need word-level mallocs in the
future. If that happens we will need to decide if changing the malloc
alignment is a breaking change, or gate alignment requirements behind
user provided defines.

Found by HiFiPhile.
2024-01-16 00:27:07 -06:00
Christopher Haster
3513ff1afc Merge pull request #911 from littlefs-project/fix-release-structs
Fix struct sizes missing from generated release notes
2023-12-21 00:08:16 -06:00
Christopher Haster
8a22bd6e67 Merge pull request #910 from littlefs-project/fix-superblock-expansion-thresh
Increase threshold for superblock expansion from ~50% -> ~88% full
2023-12-21 00:07:55 -06:00
Christopher Haster
9b82db72d8 Merge pull request #898 from zchen24/patch-1
Update DESIGN.md minor typo
2023-12-21 00:06:29 -06:00
Zihan Chen
99b84ee3db Update DESIGN.md, fix minor typo 2023-12-20 23:42:26 -06:00
Christopher Haster
b1b10c0e75 Relaxed lookahead buffer alignment
This drops the lookahead buffer from operating on 32-bit words to
operating on 8-bit bytes, and removes any alignment requirement. This
may have some minor performance impact, but it is unlikely to be
significant when you consider IO overhead.

The original motivation for 32-bit alignment was an attempt at
future-proofing in case we wanted some more complex on-disk data
structure. This never happened, and even if it did, it could have been
added via additional config options.

This has been a significant pain point for users, since providing
word-aligned byte-sized buffers in C can be a bit annoying.
2023-12-20 00:39:11 -06:00
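
In practice this means a statically allocated lookahead buffer can now
be a plain byte array (a sketch; the size is illustrative):

  // one byte of lookahead tracks 8 blocks, no word alignment needed
  static uint8_t lookahead_buffer[16];

  struct lfs_config cfg = {
      // ... block device functions, sizes, and buffers ...
      .lookahead_size = 16,
      .lookahead_buffer = lookahead_buffer,
  };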
Christopher Haster
1f9c3c04b1 Reworked the block allocator so the logic is hopefully simpler
Some of this is just better documentation, some of this is reworking the
logic to be more intention driven... if that makes sense...
2023-12-20 00:24:56 -06:00
Christopher Haster
7b68441888 Renamed a number of internal block-allocator fields
- Renamed lfs.free      -> lfs.lookahead
- Renamed lfs.free.off  -> lfs.lookahead.start
- Renamed lfs.free.i    -> lfs.lookahead.next
- Renamed lfs.free.ack  -> lfs.lookahead.ckpoint
- Renamed lfs_alloc_ack -> lfs_alloc_ckpoint

These have been named a bit confusingly, and I think the new names make
their relevant purposes a bit clearer.

At the very least it's clear lfs.lookahead is related to the lookahead
buffer (and doesn't imply a closed free-bitmap).
2023-12-20 00:17:08 -06:00
Christopher Haster
e91a29d2b5 Fixed struct sizes missing from generated release notes
This script was missed during a struct -> structs naming change
2023-12-19 22:00:18 -06:00
Christopher Haster
b9b95ab4bc Increase threshold for superblock expansion from ~50% -> ~88% full
Superblock expansion is an irreversible operation. In an effort to
prevent superblock expansion from claiming valuable scratch space
(important for small, <~8 block filesystems), littlefs prevents
superblock expansion when the disk is "mostly full".

In true computer-scientist fashion, this "mostly full" threshold was
set to ~50%.

As pointed out by gbolgradov and rojer, >~50% utilization is not
uncommon, and it can lead to a situation where superblock expansion does
not occur in a relatively healthy filesystem, causing focused wear at
the root.

To remedy this, the threshold is now increased to ~88% (7/8) full.

This may change in the future and should probably eventually be made
user-configurable.

Found by gbolgradov and rojer
2023-12-19 16:51:17 -06:00
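
Concretely, expansion now proceeds as long as more than 1/8 of the disk
remains free, i.e. utilization is below ~88%; this mirrors the check in
the lfs.c diff below:

  // expand only if the used blocks leave more than 1/8 of the disk free
  if (lfs->block_count - size > lfs->block_count/8) {
      // ... expand superblock ...
  }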
Christopher Haster
9a620c730c Added LFS_CRC, easier override for lfs_crc
Now you can override littlefs's CRC implementation with some simple
defines:

  -DLFS_CRC=lfs_crc

The motivation for this is the same as for LFS_MALLOC/LFS_FREE. I think
these are the main "system-level" utils that users want to override.

Don't override this with something that's not CRC32! Your filesystem
will no longer be compatible with other tools! This is only intended
for hardware acceleration!
2023-12-19 14:12:10 -06:00
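
A sketch of hooking up a hardware CRC unit (my_crc and my_hw_crc32 are
hypothetical names; whatever is plugged in must still compute CRC-32
with polynomial 0x04c11db7):

  // compiled with -DLFS_CRC=my_crc
  uint32_t my_crc(uint32_t crc, const void *buffer, size_t size) {
      return my_hw_crc32(crc, buffer, size);  // hypothetical hardware CRC
  }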
Christopher Haster
a0c6c54345 Added LFS_MALLOC/FREE, easier overrides for lfs_malloc/free
Now you can override littlefs's malloc with some simple defines:

  -DLFS_MALLOC=my_malloc
  -DLFS_FREE=my_free

This is probably what most users expected when wanting to override
malloc/free in littlefs, but it hasn't been available; instead,
littlefs provides a file-level override of the builtin utils.

The thinking was that there are just too many builtins that could be
overridden, lfs_max/min/alignup/npw2/etc/etc/etc, so allowing users to
just override the util file provides the best flexibility without a ton
of ifdefs.

But it's become clear this is awkward for users that just want to
replace malloc.

Maybe the original goal was too optimistic, maybe there's a better way
to structure this file, or maybe the best API is just a bunch of
ifdefs. I have no idea! This will hopefully continue to evolve.
2023-12-19 13:57:17 -06:00
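
A sketch of routing littlefs's buffer allocations through a custom heap
(my_heap_alloc and my_heap_free are hypothetical):

  // compiled with -DLFS_MALLOC=my_malloc -DLFS_FREE=my_free
  void *my_malloc(size_t size) {
      return my_heap_alloc(size);  // hypothetical allocator
  }

  void my_free(void *p) {
      my_heap_free(p);
  }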
Zihan Chen
10bcff1af8 Update DESIGN.md minor typo 2023-11-26 11:10:24 -08:00
Christopher Haster
c733d9ec57 Merge pull request #884 from DvdGiessen/static-functions
lfs_fs_raw* functions should be static
2023-10-31 13:26:35 -05:00
Brian Pugh
c531a5e88f Replace erroneous LFS_FILE_MAX upper bound 4294967296 to 4294967295 2023-10-30 11:18:20 -07:00
Brian Pugh
8f9427dd53 Add value-range checks for user-definable macros 2023-10-29 13:50:38 -07:00
Christopher Haster
8f3f32d1f3 Added -Wmissing-prototypes
This warning is useful for catching the easy mistake of missing the
keyword static on functions intended to be internal-only.

Missing the static keyword risks symbol pollution and misses potential
compiler optimizations.

This is an interesting warning: while useful for libraries such as
littlefs, it's perfectly valid C to not predeclare all functions, and
this is common in final application binaries.

Relatedly, this warning is re-disabled for the test/bench runner. There
may be a better way to organize the CFLAGS, maybe into separate
LIB/RUNNER CFLAGS, but I'll leave this to future work if our CFLAGS grow
more complicated.

This was motivated by non-static internal-only functions leaking into a
release. Found and fixed by DvdGiessen.
2023-10-24 12:04:54 -05:00
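
A contrived example of what this warning catches (lfs_internal_helper
is a hypothetical name):

  // before: warning: no previous prototype for 'lfs_internal_helper'
  int lfs_internal_helper(int a) { return a + 1; }

  // after: static keeps the symbol internal to the translation unit
  static int lfs_internal_helper(int a) { return a + 1; }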
Daniël van de Giessen
92fc780f71 lfs_fs_raw* functions should be static 2023-10-23 13:35:34 +02:00
14 changed files with 342 additions and 147 deletions

View File

@@ -112,7 +112,7 @@ jobs:
table[$i,$j]=$c_camel
((j+=1))
for s in code stack struct
for s in code stack structs
do
f=sizes/thumb${c:+-$c}.$s.csv
[ -e $f ] && table[$i,$j]=$( \

View File

@@ -59,7 +59,7 @@ This leaves us with three major requirements for an embedded filesystem.
RAM to temporarily store filesystem metadata.
For ROM, this means we need to keep our design simple and reuse code paths
were possible. For RAM we have a stronger requirement, all RAM usage is
where possible. For RAM we have a stronger requirement, all RAM usage is
bounded. This means RAM usage does not grow as the filesystem changes in
size or number of files. This creates a unique challenge as even presumably
simple operations, such as traversing the filesystem, become surprisingly
@@ -626,7 +626,7 @@ log&#8322;_n_ pointers that skip to different preceding elements of the
skip-list.
The name comes from heavy use of the [CTZ instruction][wikipedia-ctz], which
lets us calculate the power-of-two factors efficiently. For a give block _n_,
lets us calculate the power-of-two factors efficiently. For a given block _n_,
that block contains ctz(_n_)+1 pointers.
```

View File

@@ -63,6 +63,7 @@ CFLAGS += -fcallgraph-info=su
CFLAGS += -g3
CFLAGS += -I.
CFLAGS += -std=c99 -Wall -Wextra -pedantic
CFLAGS += -Wmissing-prototypes
CFLAGS += -ftrack-macro-expansion=0
ifdef DEBUG
CFLAGS += -O0
@@ -354,6 +355,7 @@ summary-diff sizes-diff: $(OBJ) $(CI)
## Build the test-runner
.PHONY: test-runner build-test
test-runner build-test: CFLAGS+=-Wno-missing-prototypes
ifndef NO_COV
test-runner build-test: CFLAGS+=--coverage
endif
@@ -405,6 +407,7 @@ testmarks-diff: $(TEST_CSV)
## Build the bench-runner
.PHONY: bench-runner build-bench
bench-runner build-bench: CFLAGS+=-Wno-missing-prototypes
ifdef YES_COV
bench-runner build-bench: CFLAGS+=--coverage
endif

lfs.c (306 lines changed)
View File

@@ -593,45 +593,52 @@ static int lfs_rawunmount(lfs_t *lfs);
/// Block allocator ///
// allocations should call this when all allocated blocks are committed to
// the filesystem
//
// after a checkpoint, the block allocator may realloc any untracked blocks
static void lfs_alloc_ckpoint(lfs_t *lfs) {
lfs->lookahead.ckpoint = lfs->block_count;
}
// drop the lookahead buffer, this is done during mounting and failed
// traversals in order to avoid invalid lookahead state
static void lfs_alloc_drop(lfs_t *lfs) {
lfs->lookahead.size = 0;
lfs->lookahead.next = 0;
lfs_alloc_ckpoint(lfs);
}
#ifndef LFS_READONLY
static int lfs_alloc_lookahead(void *p, lfs_block_t block) {
lfs_t *lfs = (lfs_t*)p;
lfs_block_t off = ((block - lfs->free.off)
lfs_block_t off = ((block - lfs->lookahead.start)
+ lfs->block_count) % lfs->block_count;
if (off < lfs->free.size) {
lfs->free.buffer[off / 32] |= 1U << (off % 32);
if (off < lfs->lookahead.size) {
lfs->lookahead.buffer[off / 8] |= 1U << (off % 8);
}
return 0;
}
#endif
// indicate allocated blocks have been committed into the filesystem, this
// is to prevent blocks from being garbage collected in the middle of a
// commit operation
static void lfs_alloc_ack(lfs_t *lfs) {
lfs->free.ack = lfs->block_count;
}
// drop the lookahead buffer, this is done during mounting and failed
// traversals in order to avoid invalid lookahead state
static void lfs_alloc_drop(lfs_t *lfs) {
lfs->free.size = 0;
lfs->free.i = 0;
lfs_alloc_ack(lfs);
}
#ifndef LFS_READONLY
static int lfs_fs_rawgc(lfs_t *lfs) {
// Move free offset at the first unused block (lfs->free.i)
// lfs->free.i is equal lfs->free.size when all blocks are used
lfs->free.off = (lfs->free.off + lfs->free.i) % lfs->block_count;
lfs->free.size = lfs_min(8*lfs->cfg->lookahead_size, lfs->free.ack);
lfs->free.i = 0;
static int lfs_alloc_scan(lfs_t *lfs) {
// move lookahead buffer to the first unused block
//
// note we limit the lookahead buffer to at most the amount of blocks
// checkpointed, this prevents the math in lfs_alloc from underflowing
lfs->lookahead.start = (lfs->lookahead.start + lfs->lookahead.next)
% lfs->block_count;
lfs->lookahead.next = 0;
lfs->lookahead.size = lfs_min(
8*lfs->cfg->lookahead_size,
lfs->lookahead.ckpoint);
// find mask of free blocks from tree
memset(lfs->free.buffer, 0, lfs->cfg->lookahead_size);
memset(lfs->lookahead.buffer, 0, lfs->cfg->lookahead_size);
int err = lfs_fs_rawtraverse(lfs, lfs_alloc_lookahead, lfs, true);
if (err) {
lfs_alloc_drop(lfs);
@@ -645,36 +652,49 @@ static int lfs_fs_rawgc(lfs_t *lfs) {
#ifndef LFS_READONLY
static int lfs_alloc(lfs_t *lfs, lfs_block_t *block) {
while (true) {
while (lfs->free.i != lfs->free.size) {
lfs_block_t off = lfs->free.i;
lfs->free.i += 1;
lfs->free.ack -= 1;
if (!(lfs->free.buffer[off / 32] & (1U << (off % 32)))) {
// scan our lookahead buffer for free blocks
while (lfs->lookahead.next < lfs->lookahead.size) {
if (!(lfs->lookahead.buffer[lfs->lookahead.next / 8]
& (1U << (lfs->lookahead.next % 8)))) {
// found a free block
*block = (lfs->free.off + off) % lfs->block_count;
*block = (lfs->lookahead.start + lfs->lookahead.next)
% lfs->block_count;
// eagerly find next off so an alloc ack can
// discredit old lookahead blocks
while (lfs->free.i != lfs->free.size &&
(lfs->free.buffer[lfs->free.i / 32]
& (1U << (lfs->free.i % 32)))) {
lfs->free.i += 1;
lfs->free.ack -= 1;
// eagerly find next free block to maximize how many blocks
// lfs_alloc_ckpoint makes available for scanning
while (true) {
lfs->lookahead.next += 1;
lfs->lookahead.ckpoint -= 1;
if (lfs->lookahead.next >= lfs->lookahead.size
|| !(lfs->lookahead.buffer[lfs->lookahead.next / 8]
& (1U << (lfs->lookahead.next % 8)))) {
return 0;
}
}
return 0;
}
lfs->lookahead.next += 1;
lfs->lookahead.ckpoint -= 1;
}
// check if we have looked at all blocks since last ack
if (lfs->free.ack == 0) {
LFS_ERROR("No more free space %"PRIu32,
lfs->free.i + lfs->free.off);
// In order to keep our block allocator from spinning forever when our
// filesystem is full, we mark points where there are no in-flight
// allocations with a checkpoint before starting a set of allocations.
//
// If we've looked at all blocks since the last checkpoint, we report
// the filesystem as out of storage.
//
if (lfs->lookahead.ckpoint <= 0) {
LFS_ERROR("No more free space 0x%"PRIx32,
(lfs->lookahead.start + lfs->lookahead.next)
% lfs->cfg->block_count);
return LFS_ERR_NOSPC;
}
int err = lfs_fs_rawgc(lfs);
// No blocks in our lookahead buffer, we need to scan the filesystem for
// unused blocks in the next lookahead window.
int err = lfs_alloc_scan(lfs);
if(err) {
return err;
}
@@ -2151,9 +2171,11 @@ static int lfs_dir_splittingcompact(lfs_t *lfs, lfs_mdir_t *dir,
return size;
}
// do we have extra space? littlefs can't reclaim this space
// by itself, so expand cautiously
if ((lfs_size_t)size < lfs->block_count/2) {
// littlefs cannot reclaim expanded superblocks, so expand cautiously
//
// if our filesystem is more than ~88% full, don't expand, this is
// somewhat arbitrary
if (lfs->block_count - size > lfs->block_count/8) {
LFS_DEBUG("Expanding superblock at rev %"PRIu32, dir->rev);
int err = lfs_dir_split(lfs, dir, attrs, attrcount,
source, begin, end);
@@ -2586,7 +2608,7 @@ static int lfs_rawmkdir(lfs_t *lfs, const char *path) {
}
// build up new directory
lfs_alloc_ack(lfs);
lfs_alloc_ckpoint(lfs);
lfs_mdir_t dir;
err = lfs_dir_alloc(lfs, &dir);
if (err) {
@@ -3272,7 +3294,7 @@ relocate:
#ifndef LFS_READONLY
static int lfs_file_outline(lfs_t *lfs, lfs_file_t *file) {
file->off = file->pos;
lfs_alloc_ack(lfs);
lfs_alloc_ckpoint(lfs);
int err = lfs_file_relocate(lfs, file);
if (err) {
return err;
@@ -3502,11 +3524,7 @@ static lfs_ssize_t lfs_file_flushedwrite(lfs_t *lfs, lfs_file_t *file,
lfs_size_t nsize = size;
if ((file->flags & LFS_F_INLINE) &&
lfs_max(file->pos+nsize, file->ctz.size) >
lfs_min(0x3fe, lfs_min(
lfs->cfg->cache_size,
(lfs->cfg->metadata_max ?
lfs->cfg->metadata_max : lfs->cfg->block_size) / 8))) {
lfs_max(file->pos+nsize, file->ctz.size) > lfs->inline_max) {
// inline file doesn't fit anymore
int err = lfs_file_outline(lfs, file);
if (err) {
@@ -3535,7 +3553,7 @@ static lfs_ssize_t lfs_file_flushedwrite(lfs_t *lfs, lfs_file_t *file,
}
// extend file with new blocks
lfs_alloc_ack(lfs);
lfs_alloc_ckpoint(lfs);
int err = lfs_ctz_extend(lfs, &file->cache, &lfs->rcache,
file->block, file->pos,
&file->block, &file->off);
@@ -3578,7 +3596,7 @@ relocate:
data += diff;
nsize -= diff;
lfs_alloc_ack(lfs);
lfs_alloc_ckpoint(lfs);
}
return size;
@@ -3703,10 +3721,7 @@ static int lfs_file_rawtruncate(lfs_t *lfs, lfs_file_t *file, lfs_off_t size) {
lfs_off_t oldsize = lfs_file_rawsize(lfs, file);
if (size < oldsize) {
// revert to inline file?
if (size <= lfs_min(0x3fe, lfs_min(
lfs->cfg->cache_size,
(lfs->cfg->metadata_max ?
lfs->cfg->metadata_max : lfs->cfg->block_size) / 8))) {
if (size <= lfs->inline_max) {
// flush+seek to head
lfs_soff_t res = lfs_file_rawseek(lfs, file, 0, LFS_SEEK_SET);
if (res < 0) {
@@ -4106,6 +4121,21 @@ static int lfs_rawremoveattr(lfs_t *lfs, const char *path, uint8_t type) {
/// Filesystem operations ///
// compile time checks, see lfs.h for why these limits exist
#if LFS_NAME_MAX > 1022
#error "Invalid LFS_NAME_MAX, must be <= 1022"
#endif
#if LFS_FILE_MAX > 2147483647
#error "Invalid LFS_FILE_MAX, must be <= 2147483647"
#endif
#if LFS_ATTR_MAX > 1022
#error "Invalid LFS_ATTR_MAX, must be <= 1022"
#endif
// common filesystem initialization
static int lfs_init(lfs_t *lfs, const struct lfs_config *cfg) {
lfs->cfg = cfg;
lfs->block_count = cfg->block_count; // May be 0
@@ -4153,6 +4183,14 @@ static int lfs_init(lfs_t *lfs, const struct lfs_config *cfg) {
// wear-leveling.
LFS_ASSERT(lfs->cfg->block_cycles != 0);
// check that compact_thresh makes sense
//
// metadata can't be compacted below block_size/2, and metadata can't
// exceed a block_size
LFS_ASSERT(lfs->cfg->compact_thresh == 0
|| lfs->cfg->compact_thresh >= lfs->cfg->block_size/2);
LFS_ASSERT(lfs->cfg->compact_thresh == (lfs_size_t)-1
|| lfs->cfg->compact_thresh <= lfs->cfg->block_size);
// setup read cache
if (lfs->cfg->read_buffer) {
@@ -4180,15 +4218,14 @@ static int lfs_init(lfs_t *lfs, const struct lfs_config *cfg) {
lfs_cache_zero(lfs, &lfs->rcache);
lfs_cache_zero(lfs, &lfs->pcache);
// setup lookahead, must be multiple of 64-bits, 32-bit aligned
// setup lookahead buffer, note mount finishes initializing this after
// we establish a decent pseudo-random seed
LFS_ASSERT(lfs->cfg->lookahead_size > 0);
LFS_ASSERT(lfs->cfg->lookahead_size % 8 == 0 &&
(uintptr_t)lfs->cfg->lookahead_buffer % 4 == 0);
if (lfs->cfg->lookahead_buffer) {
lfs->free.buffer = lfs->cfg->lookahead_buffer;
lfs->lookahead.buffer = lfs->cfg->lookahead_buffer;
} else {
lfs->free.buffer = lfs_malloc(lfs->cfg->lookahead_size);
if (!lfs->free.buffer) {
lfs->lookahead.buffer = lfs_malloc(lfs->cfg->lookahead_size);
if (!lfs->lookahead.buffer) {
err = LFS_ERR_NOMEM;
goto cleanup;
}
@@ -4215,6 +4252,27 @@ static int lfs_init(lfs_t *lfs, const struct lfs_config *cfg) {
LFS_ASSERT(lfs->cfg->metadata_max <= lfs->cfg->block_size);
LFS_ASSERT(lfs->cfg->inline_max == (lfs_size_t)-1
|| lfs->cfg->inline_max <= lfs->cfg->cache_size);
LFS_ASSERT(lfs->cfg->inline_max == (lfs_size_t)-1
|| lfs->cfg->inline_max <= lfs->attr_max);
LFS_ASSERT(lfs->cfg->inline_max == (lfs_size_t)-1
|| lfs->cfg->inline_max <= ((lfs->cfg->metadata_max)
? lfs->cfg->metadata_max
: lfs->cfg->block_size)/8);
lfs->inline_max = lfs->cfg->inline_max;
if (lfs->inline_max == (lfs_size_t)-1) {
lfs->inline_max = 0;
} else if (lfs->inline_max == 0) {
lfs->inline_max = lfs_min(
lfs->cfg->cache_size,
lfs_min(
lfs->attr_max,
((lfs->cfg->metadata_max)
? lfs->cfg->metadata_max
: lfs->cfg->block_size)/8));
}
// setup default state
lfs->root[0] = LFS_BLOCK_NULL;
lfs->root[1] = LFS_BLOCK_NULL;
@@ -4245,7 +4303,7 @@ static int lfs_deinit(lfs_t *lfs) {
}
if (!lfs->cfg->lookahead_buffer) {
lfs_free(lfs->free.buffer);
lfs_free(lfs->lookahead.buffer);
}
return 0;
@@ -4265,12 +4323,12 @@ static int lfs_rawformat(lfs_t *lfs, const struct lfs_config *cfg) {
LFS_ASSERT(cfg->block_count != 0);
// create free lookahead
memset(lfs->free.buffer, 0, lfs->cfg->lookahead_size);
lfs->free.off = 0;
lfs->free.size = lfs_min(8*lfs->cfg->lookahead_size,
memset(lfs->lookahead.buffer, 0, lfs->cfg->lookahead_size);
lfs->lookahead.start = 0;
lfs->lookahead.size = lfs_min(8*lfs->cfg->lookahead_size,
lfs->block_count);
lfs->free.i = 0;
lfs_alloc_ack(lfs);
lfs->lookahead.next = 0;
lfs_alloc_ckpoint(lfs);
// create root dir
lfs_mdir_t root;
@@ -4438,6 +4496,9 @@ static int lfs_rawmount(lfs_t *lfs, const struct lfs_config *cfg) {
}
lfs->attr_max = superblock.attr_max;
// we also need to update inline_max in case attr_max changed
lfs->inline_max = lfs_min(lfs->inline_max, lfs->attr_max);
}
// this is where we get the block_count from disk if block_count=0
@@ -4478,7 +4539,7 @@ static int lfs_rawmount(lfs_t *lfs, const struct lfs_config *cfg) {
// setup free lookahead, to distribute allocations uniformly across
// boots, we start the allocator at a random location
lfs->free.off = lfs->seed % lfs->block_count;
lfs->lookahead.start = lfs->seed % lfs->block_count;
lfs_alloc_drop(lfs);
return 0;
@@ -4999,7 +5060,7 @@ static int lfs_fs_forceconsistency(lfs_t *lfs) {
#endif
#ifndef LFS_READONLY
int lfs_fs_rawmkconsistent(lfs_t *lfs) {
static int lfs_fs_rawmkconsistent(lfs_t *lfs) {
// lfs_fs_forceconsistency does most of the work here
int err = lfs_fs_forceconsistency(lfs);
if (err) {
@@ -5045,8 +5106,59 @@ static lfs_ssize_t lfs_fs_rawsize(lfs_t *lfs) {
return size;
}
// explicit garbage collection
#ifndef LFS_READONLY
int lfs_fs_rawgrow(lfs_t *lfs, lfs_size_t block_count) {
static int lfs_fs_rawgc(lfs_t *lfs) {
// force consistency, even if we're not necessarily going to write,
// because this function is supposed to take care of janitorial work
// isn't it?
int err = lfs_fs_forceconsistency(lfs);
if (err) {
return err;
}
// try to compact metadata pairs, note we can't really accomplish
// anything if compact_thresh doesn't at least leave a prog_size
// available
if (lfs->cfg->compact_thresh
< lfs->cfg->block_size - lfs->cfg->prog_size) {
// iterate over all mdirs
lfs_mdir_t mdir = {.tail = {0, 1}};
while (!lfs_pair_isnull(mdir.tail)) {
err = lfs_dir_fetch(lfs, &mdir, mdir.tail);
if (err) {
return err;
}
// not erased? exceeds our compaction threshold?
if (!mdir.erased || ((lfs->cfg->compact_thresh == 0)
? mdir.off > lfs->cfg->block_size - lfs->cfg->block_size/8
: mdir.off > lfs->cfg->compact_thresh)) {
// the easiest way to trigger a compaction is to mark
// the mdir as unerased and add an empty commit
mdir.erased = false;
err = lfs_dir_commit(lfs, &mdir, NULL, 0);
if (err) {
return err;
}
}
}
}
// try to populate the lookahead buffer, unless it's already full
if (lfs->lookahead.size < 8*lfs->cfg->lookahead_size) {
err = lfs_alloc_scan(lfs);
if (err) {
return err;
}
}
return 0;
}
#endif
#ifndef LFS_READONLY
static int lfs_fs_rawgrow(lfs_t *lfs, lfs_size_t block_count) {
// shrinking is not supported
LFS_ASSERT(block_count >= lfs->block_count);
@@ -5451,10 +5563,10 @@ static int lfs1_mount(lfs_t *lfs, struct lfs1 *lfs1,
lfs->lfs1->root[1] = LFS_BLOCK_NULL;
// setup free lookahead
lfs->free.off = 0;
lfs->free.size = 0;
lfs->free.i = 0;
lfs_alloc_ack(lfs);
lfs->lookahead.start = 0;
lfs->lookahead.size = 0;
lfs->lookahead.next = 0;
lfs_alloc_ckpoint(lfs);
// load superblock
lfs1_dir_t dir;
@@ -6250,22 +6362,6 @@ int lfs_fs_traverse(lfs_t *lfs, int (*cb)(void *, lfs_block_t), void *data) {
return err;
}
#ifndef LFS_READONLY
int lfs_fs_gc(lfs_t *lfs) {
int err = LFS_LOCK(lfs->cfg);
if (err) {
return err;
}
LFS_TRACE("lfs_fs_gc(%p)", (void*)lfs);
err = lfs_fs_rawgc(lfs);
LFS_TRACE("lfs_fs_gc -> %d", err);
LFS_UNLOCK(lfs->cfg);
return err;
}
#endif
#ifndef LFS_READONLY
int lfs_fs_mkconsistent(lfs_t *lfs) {
int err = LFS_LOCK(lfs->cfg);
@@ -6282,6 +6378,22 @@ int lfs_fs_mkconsistent(lfs_t *lfs) {
}
#endif
#ifndef LFS_READONLY
int lfs_fs_gc(lfs_t *lfs) {
int err = LFS_LOCK(lfs->cfg);
if (err) {
return err;
}
LFS_TRACE("lfs_fs_gc(%p)", (void*)lfs);
err = lfs_fs_rawgc(lfs);
LFS_TRACE("lfs_fs_gc -> %d", err);
LFS_UNLOCK(lfs->cfg);
return err;
}
#endif
#ifndef LFS_READONLY
int lfs_fs_grow(lfs_t *lfs, lfs_size_t block_count) {
int err = LFS_LOCK(lfs->cfg);

lfs.h (76 lines changed)
View File

@@ -52,10 +52,8 @@ typedef uint32_t lfs_block_t;
#endif
// Maximum size of a file in bytes, may be redefined to limit to support other
// drivers. Limited on disk to <= 4294967296. However, above 2147483647 the
// functions lfs_file_seek, lfs_file_size, and lfs_file_tell will return
// incorrect values due to using signed integers. Stored in superblock and
// must be respected by other littlefs drivers.
// drivers. Limited on disk to <= 2147483647. Stored in superblock and must be
// respected by other littlefs drivers.
#ifndef LFS_FILE_MAX
#define LFS_FILE_MAX 2147483647
#endif
@@ -226,9 +224,20 @@ struct lfs_config {
// Size of the lookahead buffer in bytes. A larger lookahead buffer
// increases the number of blocks found during an allocation pass. The
// lookahead buffer is stored as a compact bitmap, so each byte of RAM
// can track 8 blocks. Must be a multiple of 8.
// can track 8 blocks.
lfs_size_t lookahead_size;
// Threshold for metadata compaction during lfs_fs_gc in bytes. Metadata
// pairs that exceed this threshold will be compacted during lfs_fs_gc.
// Defaults to ~88% block_size when zero, though the default may change
// in the future.
//
// Note this only affects lfs_fs_gc. Normal compactions still only occur
// when full.
//
// Set to -1 to disable metadata compaction during lfs_fs_gc.
lfs_size_t compact_thresh;
// Optional statically allocated read buffer. Must be cache_size.
// By default lfs_malloc is used to allocate this buffer.
void *read_buffer;
@@ -237,9 +246,8 @@ struct lfs_config {
// By default lfs_malloc is used to allocate this buffer.
void *prog_buffer;
// Optional statically allocated lookahead buffer. Must be lookahead_size
// and aligned to a 32-bit boundary. By default lfs_malloc is used to
// allocate this buffer.
// Optional statically allocated lookahead buffer. Must be lookahead_size.
// By default lfs_malloc is used to allocate this buffer.
void *lookahead_buffer;
// Optional upper limit on length of file names in bytes. No downside for
@@ -264,6 +272,15 @@ struct lfs_config {
// Defaults to block_size when zero.
lfs_size_t metadata_max;
// Optional upper limit on inlined files in bytes. Inlined files live in
// metadata and decrease storage requirements, but may be limited to
// improve metadata-related performance. Must be <= cache_size, <=
// attr_max, and <= block_size/8. Defaults to the largest possible
// inline_max when zero.
//
// Set to -1 to disable inlined files.
lfs_size_t inline_max;
#ifdef LFS_MULTIVERSION
// On-disk version to use when writing in the form of 16-bit major version
// + 16-bit minor version. This limiting metadata to what is supported by
@@ -430,19 +447,20 @@ typedef struct lfs {
lfs_gstate_t gdisk;
lfs_gstate_t gdelta;
struct lfs_free {
lfs_block_t off;
struct lfs_lookahead {
lfs_block_t start;
lfs_block_t size;
lfs_block_t i;
lfs_block_t ack;
uint32_t *buffer;
} free;
lfs_block_t next;
lfs_block_t ckpoint;
uint8_t *buffer;
} lookahead;
const struct lfs_config *cfg;
lfs_size_t block_count;
lfs_size_t name_max;
lfs_size_t file_max;
lfs_size_t attr_max;
lfs_size_t inline_max;
#ifdef LFS_MIGRATE
struct lfs1 *lfs1;
@@ -712,18 +730,6 @@ lfs_ssize_t lfs_fs_size(lfs_t *lfs);
// Returns a negative error code on failure.
int lfs_fs_traverse(lfs_t *lfs, int (*cb)(void*, lfs_block_t), void *data);
// Attempt to proactively find free blocks
//
// Calling this function is not required, but may allowing the offloading of
// the expensive block allocation scan to a less time-critical code path.
//
// Note: littlefs currently does not persist any found free blocks to disk.
// This may change in the future.
//
// Returns a negative error code on failure. Finding no free blocks is
// not an error.
int lfs_fs_gc(lfs_t *lfs);
#ifndef LFS_READONLY
// Attempt to make the filesystem consistent and ready for writing
//
@@ -736,6 +742,24 @@ int lfs_fs_gc(lfs_t *lfs);
int lfs_fs_mkconsistent(lfs_t *lfs);
#endif
#ifndef LFS_READONLY
// Attempt any janitorial work
//
// This currently:
// 1. Calls mkconsistent if not already consistent
// 2. Compacts metadata > compact_thresh
// 3. Populates the block allocator
//
// Though additional janitorial work may be added in the future.
//
// Calling this function is not required, but may allow the offloading of
// expensive janitorial work to a less time-critical code path.
//
// Returns a negative error code on failure. Accomplishing nothing is not
// an error.
int lfs_fs_gc(lfs_t *lfs);
#endif
#ifndef LFS_READONLY
// Grows the filesystem to a new size, updating the superblock with the new
// block count.

View File

@@ -11,6 +11,8 @@
#ifndef LFS_CONFIG
// If user provides their own CRC impl we don't need this
#ifndef LFS_CRC
// Software CRC implementation with small lookup table
uint32_t lfs_crc(uint32_t crc, const void *buffer, size_t size) {
static const uint32_t rtable[16] = {
@@ -29,6 +31,7 @@ uint32_t lfs_crc(uint32_t crc, const void *buffer, size_t size) {
return crc;
}
#endif
#endif

View File

@@ -212,12 +212,22 @@ static inline uint32_t lfs_tobe32(uint32_t a) {
}
// Calculate CRC-32 with polynomial = 0x04c11db7
#ifdef LFS_CRC
uint32_t lfs_crc(uint32_t crc, const void *buffer, size_t size) {
return LFS_CRC(crc, buffer, size);
}
#else
uint32_t lfs_crc(uint32_t crc, const void *buffer, size_t size);
#endif
// Allocate memory, only used if buffers are not provided to littlefs
// Note, memory must be 64-bit aligned
//
// littlefs currently has no alignment requirements, as it only allocates
// byte-level buffers.
static inline void *lfs_malloc(size_t size) {
#ifndef LFS_NO_MALLOC
#if defined(LFS_MALLOC)
return LFS_MALLOC(size);
#elif !defined(LFS_NO_MALLOC)
return malloc(size);
#else
(void)size;
@@ -227,7 +237,9 @@ static inline void *lfs_malloc(size_t size) {
// Deallocate memory, only used if buffers are not provided to littlefs
static inline void lfs_free(void *p) {
#ifndef LFS_NO_MALLOC
#if defined(LFS_FREE)
LFS_FREE(p);
#elif !defined(LFS_NO_MALLOC)
free(p);
#else
(void)p;

View File

@@ -1321,6 +1321,8 @@ void perm_run(
.block_cycles = BLOCK_CYCLES,
.cache_size = CACHE_SIZE,
.lookahead_size = LOOKAHEAD_SIZE,
.compact_thresh = COMPACT_THRESH,
.inline_max = INLINE_MAX,
};
struct lfs_emubd_config bdcfg = {

View File

@@ -95,11 +95,13 @@ intmax_t bench_define(size_t define);
#define BLOCK_COUNT_i 5
#define CACHE_SIZE_i 6
#define LOOKAHEAD_SIZE_i 7
#define BLOCK_CYCLES_i 8
#define ERASE_VALUE_i 9
#define ERASE_CYCLES_i 10
#define BADBLOCK_BEHAVIOR_i 11
#define POWERLOSS_BEHAVIOR_i 12
#define COMPACT_THRESH_i 8
#define INLINE_MAX_i 9
#define BLOCK_CYCLES_i 10
#define ERASE_VALUE_i 11
#define ERASE_CYCLES_i 12
#define BADBLOCK_BEHAVIOR_i 13
#define POWERLOSS_BEHAVIOR_i 14
#define READ_SIZE bench_define(READ_SIZE_i)
#define PROG_SIZE bench_define(PROG_SIZE_i)
@@ -109,6 +111,8 @@ intmax_t bench_define(size_t define);
#define BLOCK_COUNT bench_define(BLOCK_COUNT_i)
#define CACHE_SIZE bench_define(CACHE_SIZE_i)
#define LOOKAHEAD_SIZE bench_define(LOOKAHEAD_SIZE_i)
#define COMPACT_THRESH bench_define(COMPACT_THRESH_i)
#define INLINE_MAX bench_define(INLINE_MAX_i)
#define BLOCK_CYCLES bench_define(BLOCK_CYCLES_i)
#define ERASE_VALUE bench_define(ERASE_VALUE_i)
#define ERASE_CYCLES bench_define(ERASE_CYCLES_i)
@@ -124,6 +128,8 @@ intmax_t bench_define(size_t define);
BENCH_DEF(BLOCK_COUNT, ERASE_COUNT/lfs_max(BLOCK_SIZE/ERASE_SIZE,1))\
BENCH_DEF(CACHE_SIZE, lfs_max(64,lfs_max(READ_SIZE,PROG_SIZE))) \
BENCH_DEF(LOOKAHEAD_SIZE, 16) \
BENCH_DEF(COMPACT_THRESH, 0) \
BENCH_DEF(INLINE_MAX, 0) \
BENCH_DEF(BLOCK_CYCLES, -1) \
BENCH_DEF(ERASE_VALUE, 0xff) \
BENCH_DEF(ERASE_CYCLES, 0) \
@@ -131,7 +137,7 @@ intmax_t bench_define(size_t define);
BENCH_DEF(POWERLOSS_BEHAVIOR, LFS_EMUBD_POWERLOSS_NOOP)
#define BENCH_GEOMETRY_DEFINE_COUNT 4
#define BENCH_IMPLICIT_DEFINE_COUNT 13
#define BENCH_IMPLICIT_DEFINE_COUNT 15
#endif

View File

@@ -1346,6 +1346,8 @@ static void run_powerloss_none(
.block_cycles = BLOCK_CYCLES,
.cache_size = CACHE_SIZE,
.lookahead_size = LOOKAHEAD_SIZE,
.compact_thresh = COMPACT_THRESH,
.inline_max = INLINE_MAX,
#ifdef LFS_MULTIVERSION
.disk_version = DISK_VERSION,
#endif
@@ -1422,6 +1424,8 @@ static void run_powerloss_linear(
.block_cycles = BLOCK_CYCLES,
.cache_size = CACHE_SIZE,
.lookahead_size = LOOKAHEAD_SIZE,
.compact_thresh = COMPACT_THRESH,
.inline_max = INLINE_MAX,
#ifdef LFS_MULTIVERSION
.disk_version = DISK_VERSION,
#endif
@@ -1515,6 +1519,8 @@ static void run_powerloss_log(
.block_cycles = BLOCK_CYCLES,
.cache_size = CACHE_SIZE,
.lookahead_size = LOOKAHEAD_SIZE,
.compact_thresh = COMPACT_THRESH,
.inline_max = INLINE_MAX,
#ifdef LFS_MULTIVERSION
.disk_version = DISK_VERSION,
#endif
@@ -1606,6 +1612,8 @@ static void run_powerloss_cycles(
.block_cycles = BLOCK_CYCLES,
.cache_size = CACHE_SIZE,
.lookahead_size = LOOKAHEAD_SIZE,
.compact_thresh = COMPACT_THRESH,
.inline_max = INLINE_MAX,
#ifdef LFS_MULTIVERSION
.disk_version = DISK_VERSION,
#endif
@@ -1795,6 +1803,8 @@ static void run_powerloss_exhaustive(
.block_cycles = BLOCK_CYCLES,
.cache_size = CACHE_SIZE,
.lookahead_size = LOOKAHEAD_SIZE,
.compact_thresh = COMPACT_THRESH,
.inline_max = INLINE_MAX,
#ifdef LFS_MULTIVERSION
.disk_version = DISK_VERSION,
#endif

View File

@@ -88,12 +88,14 @@ intmax_t test_define(size_t define);
#define BLOCK_COUNT_i 5
#define CACHE_SIZE_i 6
#define LOOKAHEAD_SIZE_i 7
#define BLOCK_CYCLES_i 8
#define ERASE_VALUE_i 9
#define ERASE_CYCLES_i 10
#define BADBLOCK_BEHAVIOR_i 11
#define POWERLOSS_BEHAVIOR_i 12
#define DISK_VERSION_i 13
#define COMPACT_THRESH_i 8
#define INLINE_MAX_i 9
#define BLOCK_CYCLES_i 10
#define ERASE_VALUE_i 11
#define ERASE_CYCLES_i 12
#define BADBLOCK_BEHAVIOR_i 13
#define POWERLOSS_BEHAVIOR_i 14
#define DISK_VERSION_i 15
#define READ_SIZE TEST_DEFINE(READ_SIZE_i)
#define PROG_SIZE TEST_DEFINE(PROG_SIZE_i)
@@ -103,6 +105,8 @@ intmax_t test_define(size_t define);
#define BLOCK_COUNT TEST_DEFINE(BLOCK_COUNT_i)
#define CACHE_SIZE TEST_DEFINE(CACHE_SIZE_i)
#define LOOKAHEAD_SIZE TEST_DEFINE(LOOKAHEAD_SIZE_i)
#define COMPACT_THRESH TEST_DEFINE(COMPACT_THRESH_i)
#define INLINE_MAX TEST_DEFINE(INLINE_MAX_i)
#define BLOCK_CYCLES TEST_DEFINE(BLOCK_CYCLES_i)
#define ERASE_VALUE TEST_DEFINE(ERASE_VALUE_i)
#define ERASE_CYCLES TEST_DEFINE(ERASE_CYCLES_i)
@@ -119,6 +123,8 @@ intmax_t test_define(size_t define);
TEST_DEF(BLOCK_COUNT, ERASE_COUNT/lfs_max(BLOCK_SIZE/ERASE_SIZE,1)) \
TEST_DEF(CACHE_SIZE, lfs_max(64,lfs_max(READ_SIZE,PROG_SIZE))) \
TEST_DEF(LOOKAHEAD_SIZE, 16) \
TEST_DEF(COMPACT_THRESH, 0) \
TEST_DEF(INLINE_MAX, 0) \
TEST_DEF(BLOCK_CYCLES, -1) \
TEST_DEF(ERASE_VALUE, 0xff) \
TEST_DEF(ERASE_CYCLES, 0) \
@@ -127,7 +133,7 @@ intmax_t test_define(size_t define);
TEST_DEF(DISK_VERSION, 0)
#define TEST_GEOMETRY_DEFINE_COUNT 4
#define TEST_IMPLICIT_DEFINE_COUNT 14
#define TEST_IMPLICIT_DEFINE_COUNT 16
#endif

View File

@@ -7,6 +7,7 @@ if = 'BLOCK_CYCLES == -1'
defines.FILES = 3
defines.SIZE = '(((BLOCK_SIZE-8)*(BLOCK_COUNT-6)) / FILES)'
defines.GC = [false, true]
defines.COMPACT_THRESH = ['-1', '0', 'BLOCK_SIZE/2']
code = '''
const char *names[] = {"bacon", "eggs", "pancakes"};
lfs_file_t files[FILES];
@@ -60,6 +61,7 @@ code = '''
defines.FILES = 3
defines.SIZE = '(((BLOCK_SIZE-8)*(BLOCK_COUNT-6)) / FILES)'
defines.GC = [false, true]
defines.COMPACT_THRESH = ['-1', '0', 'BLOCK_SIZE/2']
code = '''
const char *names[] = {"bacon", "eggs", "pancakes"};

View File

@@ -1,5 +1,6 @@
[cases.test_files_simple]
defines.INLINE_MAX = [0, -1, 8]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
@@ -25,6 +26,7 @@ code = '''
[cases.test_files_large]
defines.SIZE = [32, 8192, 262144, 0, 7, 8193]
defines.CHUNKSIZE = [31, 16, 33, 1, 1023]
defines.INLINE_MAX = [0, -1, 8]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
@@ -67,6 +69,7 @@ code = '''
defines.SIZE1 = [32, 8192, 131072, 0, 7, 8193]
defines.SIZE2 = [32, 8192, 131072, 0, 7, 8193]
defines.CHUNKSIZE = [31, 16, 1]
defines.INLINE_MAX = [0, -1, 8]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
@@ -152,6 +155,7 @@ code = '''
defines.SIZE1 = [32, 8192, 131072, 0, 7, 8193]
defines.SIZE2 = [32, 8192, 131072, 0, 7, 8193]
defines.CHUNKSIZE = [31, 16, 1]
defines.INLINE_MAX = [0, -1, 8]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
@@ -232,6 +236,7 @@ code = '''
defines.SIZE1 = [32, 8192, 131072, 0, 7, 8193]
defines.SIZE2 = [32, 8192, 131072, 0, 7, 8193]
defines.CHUNKSIZE = [31, 16, 1]
defines.INLINE_MAX = [0, -1, 8]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
@@ -303,6 +308,7 @@ code = '''
[cases.test_files_reentrant_write]
defines.SIZE = [32, 0, 7, 2049]
defines.CHUNKSIZE = [31, 16, 65]
defines.INLINE_MAX = [0, -1, 8]
reentrant = true
code = '''
lfs_t lfs;
@@ -354,11 +360,20 @@ code = '''
[cases.test_files_reentrant_write_sync]
defines = [
# append (O(n))
{MODE='LFS_O_APPEND', SIZE=[32, 0, 7, 2049], CHUNKSIZE=[31, 16, 65]},
{MODE='LFS_O_APPEND',
SIZE=[32, 0, 7, 2049],
CHUNKSIZE=[31, 16, 65],
INLINE_MAX=[0, -1, 8]},
# truncate (O(n^2))
{MODE='LFS_O_TRUNC', SIZE=[32, 0, 7, 200], CHUNKSIZE=[31, 16, 65]},
{MODE='LFS_O_TRUNC',
SIZE=[32, 0, 7, 200],
CHUNKSIZE=[31, 16, 65],
INLINE_MAX=[0, -1, 8]},
# rewrite (O(n^2))
{MODE=0, SIZE=[32, 0, 7, 200], CHUNKSIZE=[31, 16, 65]},
{MODE=0,
SIZE=[32, 0, 7, 200],
CHUNKSIZE=[31, 16, 65],
INLINE_MAX=[0, -1, 8]},
]
reentrant = true
code = '''

View File

@@ -98,7 +98,7 @@ code = '''
lfs_mount(&lfs, cfg) => 0;
// create an orphan
lfs_mdir_t orphan;
lfs_alloc_ack(&lfs);
lfs_alloc_ckpoint(&lfs);
lfs_dir_alloc(&lfs, &orphan) => 0;
lfs_dir_commit(&lfs, &orphan, NULL, 0) => 0;
@@ -170,7 +170,7 @@ code = '''
lfs_mount(&lfs, cfg) => 0;
// create an orphan
lfs_mdir_t orphan;
lfs_alloc_ack(&lfs);
lfs_alloc_ckpoint(&lfs);
lfs_dir_alloc(&lfs, &orphan) => 0;
lfs_dir_commit(&lfs, &orphan, NULL, 0) => 0;