Compare commits

...

40 Commits

Author SHA1 Message Date
Christopher Haster
b5cd957f42 Extended lfs_fs_gc to compact metadata, compact_thresh
This extends lfs_fs_gc to now handle three things:

1. Calls mkconsistent if not already consistent
2. Compacts metadata > compact_thresh
3. Populates the block allocator

Which should be all of the janitorial work that can be done without
additional on-disk data structures.

Normally, metadata compaction occurs when an mdir is full, and results in
mdirs that are at most block_size/2.

Now, if you call lfs_fs_gc, littlefs will eagerly compact any mdirs that
exceed the compact_thresh configuration option. Because the resulting
mdirs are at most block_size/2, it only makes sense for compact_thresh to
be >= block_size/2 and <= block_size.

Additionally, there are some special values:

- compact_thresh=0  => defaults to ~88% block_size, may change
- compact_thresh=-1 => disables metadata compaction during lfs_fs_gc

Note that compact_thresh only affects lfs_fs_gc. Normal compactions
still only occur when full.
2024-01-19 12:25:45 -06:00
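As a rough usage sketch (not part of this changeset; assumes lfs.h is included and that the callbacks, geometry, and a mounted lfs_t named lfs are set up elsewhere):

    const struct lfs_config cfg = {
        // ... read/prog/erase/sync callbacks and geometry ...
        .block_size = 4096,
        // eagerly compact mdirs larger than 3/4 of a block during lfs_fs_gc;
        // must be >= block_size/2 and <= block_size (0 and -1 are special)
        .compact_thresh = 3*4096/4,
    };

    // later, from a less time-critical code path:
    int err = lfs_fs_gc(&lfs);  // mkconsistent + compaction + allocator scan
    if (err) {
        // negative error code on failure; accomplishing nothing is not an error
    }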
Christopher Haster
60567677b9 Relaxed alignment requirements for lfs_malloc
The only reason we needed this alignment was for the lookahead buffer.

Now that the lookahead buffer is relaxed to operate on bytes, we can
relax our malloc alignment requirement all the way down to the byte
level, since we mainly use lfs_malloc to allocate byte-level buffers.

This does introduce a risk that we might need word-level mallocs in the
future. If that happens, we will need to decide whether changing the malloc
alignment is a breaking change, or gate alignment requirements behind
user-provided defines.

Found by HiFiPhile.
2024-01-16 00:27:07 -06:00
Christopher Haster
b1b10c0e75 Relaxed lookahead buffer alignment
This drops the lookahead buffer from operating on 32-bit words to
operating on 8-bit bytes, and removes any alignment requirement. This
may have some minor performance impact, but it is unlikely to be
significant when you consider IO overhead.

The original motivation for 32-bit alignment was an attempt at
future-proofing in case we wanted some more complex on-disk data
structure. This never happened, and even if it did, it could have been
added via additional config options.

This has been a significant pain point for users, since providing
word-aligned byte-sized buffers in C can be a bit annoying.
2023-12-20 00:39:11 -06:00
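A minimal sketch of what this relaxation allows (illustrative names and sizes, not from this changeset): a statically allocated lookahead buffer now only needs to be lookahead_size bytes, with no 32-bit alignment and no multiple-of-8 size requirement.

    static uint8_t lookahead_buf[16];  // plain byte buffer, no alignment tricks

    const struct lfs_config cfg = {
        // ... callbacks and geometry ...
        .lookahead_size = sizeof(lookahead_buf),  // each byte tracks 8 blocks
        .lookahead_buffer = lookahead_buf,
    };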
Christopher Haster
1f9c3c04b1 Reworked the block allocator so the logic is hopefully simpler
Some of this is just better documentation, some of this is reworking the
logic to be more intention driven... if that makes sense...
2023-12-20 00:24:56 -06:00
Christopher Haster
7b68441888 Renamed a number of internal block-allocator fields
- Renamed lfs.free      -> lfs.lookahead
- Renamed lfs.free.off  -> lfs.lookahead.start
- Renamed lfs.free.i    -> lfs.lookahead.next
- Renamed lfs.free.ack  -> lfs.lookahead.ckpoint
- Renamed lfs_alloc_ack -> lfs_alloc_ckpoint

These have been named a bit confusingly, and I think the new names make
their relevant purposes a bit clearer.

At the very least, it's clear that lfs.lookahead is related to the
lookahead buffer (and doesn't imply a closed free-bitmap).
2023-12-20 00:17:08 -06:00
Christopher Haster
c733d9ec57 Merge pull request #884 from DvdGiessen/static-functions
lfs_fs_raw* functions should be static
2023-10-31 13:26:35 -05:00
Christopher Haster
8f3f32d1f3 Added -Wmissing-prototypes
This warning is useful for catching the easy mistake of missing the
keyword static on functions intended to be internal-only.

Missing the static keyword risks symbol pollution and misses potential
compiler optimizations.

This is an interesting warning: while useful for libraries such as
littlefs, it's perfectly valid C to not predeclare all functions, and
this is common in final application binaries.

Relatedly, this warning is re-disabled for the test/bench runner. There
may be a better way to organize the CFLAGS, maybe into separate
LIB/RUNNER CFLAGS, but I'll leave this to future work if our CFLAGS grow
more complicated.

This was motivated by non-static internal-only functions leaking into a
release. Found and fixed by DvdGiessen.
2023-10-24 12:04:54 -05:00
Daniël van de Giessen
92fc780f71 lfs_fs_raw* functions should be static 2023-10-23 13:35:34 +02:00
Christopher Haster
f77214d1f0 Merge pull request #877 from littlefs-project/devel
Minor release: v2.8
2023-09-22 11:52:21 -05:00
Christopher Haster
f91c5bd687 Bumped minor version to v2.8 2023-09-21 13:02:09 -05:00
Christopher Haster
0eb52a2df1 Merge pull request #875 from littlefs-project/fs-gc
Add lfs_fs_gc to enable proactive finding of free blocks
2023-09-21 13:01:19 -05:00
Christopher Haster
6b33ee5e34 Renamed lfs_fs_findfreeblocks -> lfs_fs_gc, tweaked documentation
The idea is that in the future this function may be extended to support
other block janitorial work. In that case, calling this lfs_fs_gc gives it
a more general name that can cover other operations.

This is currently just wishful thinking, however.
2023-09-21 12:23:38 -05:00
Christopher Haster
63e4408f2a Extended alloc tests to test some properties of lfs_fs_findfreeblocks
- Test that the code actually runs.

- Test that lfs_fs_findfreeblocks does not break block allocations.

- Test that lfs_fs_findfreeblocks does not error when no space is
  available; it should only error when a block is actually needed.
2023-09-21 12:23:38 -05:00
Christopher Haster
dbe4598c12 Added API boilerplate for lfs_fs_findfreeblocks and consistent style
This adds the tracing and optional locking for the littlefs API.

Also updated to match the code style, and added LFS_READONLY guards
where necessary.
2023-09-21 12:23:36 -05:00
ondrap
d85a0fe2e2 Move the lookahead buffer offset to the first free block; if no such block exists, move it by the whole lookahead size. 2023-09-21 12:21:25 -05:00
ondrap
b637379210 Update lfs_find_free_blocks to match the latest changes. 2023-09-21 12:18:55 -05:00
Christopher Haster
1ba4ed03f0 Merge pull request #872 from littlefs-project/fs-grow
Add lfs_fs_grow to enable limited resizing of the filesystem
2023-09-21 12:11:35 -05:00
Christopher Haster
e4b7fa15c1 Merge pull request #866 from BrianPugh/optional-block-count
Infer block_count from superblock if not provided in config.
2023-09-21 12:07:00 -05:00
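A hedged sketch of the resulting usage (names and values are illustrative; the rest of the config must match the on-disk geometry): leaving block_count as 0 lets mount read the real count from the superblock, and the existing lfs_fs_stat reports the value in use.

    const struct lfs_config cfg = {
        // ... callbacks, read/prog sizes, cache/lookahead sizes ...
        .block_size = 4096,
        .block_count = 0,   // 0 => infer block count from the superblock
    };

    lfs_t lfs;
    int err = lfs_mount(&lfs, &cfg);
    if (!err) {
        struct lfs_fsinfo fsinfo;
        lfs_fs_stat(&lfs, &fsinfo);
        // fsinfo.block_count now holds the block count read from disk
    }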
Christopher Haster
23505fa9fa Added lfs_fs_grow for growing the filesystem to a different block_count
The initial implementation for this was provided by kaetemi, originally
as a mount flag. However, it has been modified here to be self-contained
in an explicit runtime function that can be called after mount.

The reasons for an explicit function:

1. lfs_mount stays a strictly readonly operation, and avoids pulling in
   all of the write machinery.

2. filesystem-wide operations such as lfs_fs_grow can be a bit risky
   and irreversible. The action of growing the filesystem should be very
   intentional.

---

One concern with this change is that this will be the first function
that changes metadata in the superblock. This might break tools that
expect the first valid superblock entry to contain the most recent
metadata, since only the last superblock in the superblock chain will
contain the updated metadata.
2023-09-12 01:32:09 -05:00
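A minimal usage sketch (the block count is illustrative, and the underlying device is assumed to actually provide the extra blocks):

    // after a normal lfs_mount(&lfs, &cfg)...
    int err = lfs_fs_grow(&lfs, 2048);  // new, larger block_count
    if (err) {
        // negative error code; growth is irreversible, so call intentionally
    }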
Christopher Haster
2c222af17d Tweaked lfs_fsinfo block_size/block_count fields
Mainly to match superblock ordering and emphasize these are logical
blocks.
2023-09-12 01:31:21 -05:00
Christopher Haster
127d84b681 Added a couple mixed/unknown block_count tests
These were cherry-picked from some previous work on a related feature.
2023-09-12 01:14:39 -05:00
Christopher Haster
027331b2f0 Adopted erase_size/erase_count config in test block-devices/runners
In separating the configuration of littlefs from the physical geometry
of the underlying device, we can no longer rely solely on lfs_config to
contain all of the information necessary for the simulated block devices
we use for testing.

This adds a new lfs_*bd_config struct for each of the block devices, and
new erase_size/erase_count fields. The erase_* name was chosen since
these reflect the (simulated) physical erase size and count of
erase-sized blocks, unlike the block_* variants which represent logical
block sizes used for littlefs's bookkeeping.

It may be worth adopting erase_size/erase_count in littlefs's config at
some point in the future, but at the moment it doesn't seem necessary.

Changing the lfs_*bd_config structs to be required is probably a good
idea anyway, as it moves us further towards separating the bds from
littlefs. Though we can't quite get rid of the lfs_config parameter
because of the block-device API in lfs_config. Eventually it would be
nice to get rid of it, but that would require API breakage.
2023-09-12 00:39:09 -05:00
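A sketch of what test block-device creation looks like after this change (values are illustrative; cfg is the usual lfs_config carrying the context and callbacks), mirroring the diffs below:

    struct lfs_emubd_config bdcfg = {
        .read_size   = 16,
        .prog_size   = 16,
        .erase_size  = 4096,   // simulated physical erase size
        .erase_count = 256,    // number of erase-sized blocks on the device
        .erase_value = -1,     // don't simulate erase values
    };

    int err = lfs_emubd_create(&cfg, &bdcfg);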
Christopher Haster
9c23329dd7 Revert of refactor lfs_scan_* out of lfs_format
This would result in two passes through the superblock chain during
mount, when we can access everything we need to in one.
2023-09-03 13:19:03 -05:00
Christopher Haster
130790fa91 Merge pull request #863 from littlefs-project/fix-conversion-warning
Fix integer conversion warning from Code Composer Studio
2023-09-03 12:46:38 -05:00
Christopher Haster
531d5e5073 Merge pull request #855 from mdahamshi/mmd_fix
Initialize struct lfs_diskoff disk = {0}
2023-09-03 12:46:28 -05:00
Christopher Haster
e40d8f5410 Merge pull request #849 from littlefs-project/fix-ci-release-no-version
Fix release script breaking if there is no previous version
2023-09-03 12:46:18 -05:00
Brian Pugh
23089d5758 remove previous block_count detection from lfs_format 2023-08-20 14:10:12 -07:00
Brian Pugh
d6098bd3ce Add block_count and block_size to fsinfo 2023-08-20 11:53:18 -07:00
Brian Pugh
d6c0c6a786 linting 2023-08-20 11:33:29 -07:00
Brian Pugh
5caa83fb77 forgot to unmount lfs in test; leaking memory 2023-08-17 22:10:53 -07:00
Brian Pugh
7521e0a6b2 fix newly introduced missing cleanup when an invalid superblock is found. 2023-08-17 20:51:33 -07:00
Brian Pugh
2ebfec78c3 test for failure when interpreting block count when formatting without a superblock 2023-08-17 15:20:46 -07:00
Brian Pugh
3d0bcf4066 Add test_superblocks_mount_unknown_block_count 2023-08-17 15:13:16 -07:00
Brian Pugh
6de3fc6ae8 fix corruption check 2023-08-17 15:07:19 -07:00
Brian Pugh
df238ebac6 Add a unit test; currently hanging on final permutation.
Some block-device bound-checks are disabled during superblock search.
2023-08-16 23:07:55 -07:00
Brian Pugh
be6812213d introduce lfs->block_count. If cfg->block_count is 0, autopopulate from superblock 2023-08-16 22:23:34 -07:00
Brian Pugh
6dae7038f9 remove redundant superblock check 2023-08-16 22:23:34 -07:00
Brian Pugh
73285278b9 refactor lfs_scan_for_state_updates and lfs_scan_for_superblock out of lfs_format 2023-08-16 22:23:34 -07:00
Mohammad Dahamshi
5a834b6fc1 Initialize struct lfs_diskoff disk = {0}
so we don't use it uninitialized in the first run
2023-08-03 11:21:58 -05:00
Christopher Haster
d775b46e3d Fixed integer conversion warning from Code Composer Studio
Proposed by FiddlingBits
2023-08-03 11:16:40 -05:00
19 changed files with 871 additions and 303 deletions

View File

@@ -63,6 +63,7 @@ CFLAGS += -fcallgraph-info=su
CFLAGS += -g3
CFLAGS += -I.
CFLAGS += -std=c99 -Wall -Wextra -pedantic
CFLAGS += -Wmissing-prototypes
CFLAGS += -ftrack-macro-expansion=0
ifdef DEBUG
CFLAGS += -O0
@@ -354,6 +355,7 @@ summary-diff sizes-diff: $(OBJ) $(CI)
## Build the test-runner
.PHONY: test-runner build-test
test-runner build-test: CFLAGS+=-Wno-missing-prototypes
ifndef NO_COV
test-runner build-test: CFLAGS+=--coverage
endif
@@ -405,6 +407,7 @@ testmarks-diff: $(TEST_CSV)
## Build the bench-runner
.PHONY: bench-runner build-bench
bench-runner build-bench: CFLAGS+=-Wno-missing-prototypes
ifdef YES_COV
bench-runner build-bench: CFLAGS+=--coverage
endif

View File

@@ -48,6 +48,7 @@ static void lfs_emubd_decblock(lfs_emubd_block_t *block) {
static lfs_emubd_block_t *lfs_emubd_mutblock(
const struct lfs_config *cfg,
lfs_emubd_block_t **block) {
lfs_emubd_t *bd = cfg->context;
lfs_emubd_block_t *block_ = *block;
if (block_ && block_->rc == 1) {
// rc == 1? can modify
@@ -56,13 +57,13 @@ static lfs_emubd_block_t *lfs_emubd_mutblock(
} else if (block_) {
// rc > 1? need to create a copy
lfs_emubd_block_t *nblock = malloc(
sizeof(lfs_emubd_block_t) + cfg->block_size);
sizeof(lfs_emubd_block_t) + bd->cfg->erase_size);
if (!nblock) {
return NULL;
}
memcpy(nblock, block_,
sizeof(lfs_emubd_block_t) + cfg->block_size);
sizeof(lfs_emubd_block_t) + bd->cfg->erase_size);
nblock->rc = 1;
lfs_emubd_decblock(block_);
@@ -72,7 +73,7 @@ static lfs_emubd_block_t *lfs_emubd_mutblock(
} else {
// no block? need to allocate
lfs_emubd_block_t *nblock = malloc(
sizeof(lfs_emubd_block_t) + cfg->block_size);
sizeof(lfs_emubd_block_t) + bd->cfg->erase_size);
if (!nblock) {
return NULL;
}
@@ -81,10 +82,9 @@ static lfs_emubd_block_t *lfs_emubd_mutblock(
nblock->wear = 0;
// zero for consistency
lfs_emubd_t *bd = cfg->context;
memset(nblock->data,
(bd->cfg->erase_value != -1) ? bd->cfg->erase_value : 0,
cfg->block_size);
bd->cfg->erase_size);
*block = nblock;
return nblock;
@@ -94,22 +94,22 @@ static lfs_emubd_block_t *lfs_emubd_mutblock(
// emubd create/destroy
int lfs_emubd_createcfg(const struct lfs_config *cfg, const char *path,
int lfs_emubd_create(const struct lfs_config *cfg,
const struct lfs_emubd_config *bdcfg) {
LFS_EMUBD_TRACE("lfs_emubd_createcfg(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p, "
".read_size=%"PRIu32", .prog_size=%"PRIu32", "
".block_size=%"PRIu32", .block_count=%"PRIu32"}, "
"\"%s\", "
"%p {.erase_value=%"PRId32", .erase_cycles=%"PRIu32", "
LFS_EMUBD_TRACE("lfs_emubd_create(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p}, "
"%p {.read_size=%"PRIu32", .prog_size=%"PRIu32", "
".erase_size=%"PRIu32", .erase_count=%"PRIu32", "
".erase_value=%"PRId32", .erase_cycles=%"PRIu32", "
".badblock_behavior=%"PRIu8", .power_cycles=%"PRIu32", "
".powerloss_behavior=%"PRIu8", .powerloss_cb=%p, "
".powerloss_data=%p, .track_branches=%d})",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
cfg->read_size, cfg->prog_size, cfg->block_size, cfg->block_count,
path, (void*)bdcfg, bdcfg->erase_value, bdcfg->erase_cycles,
(void*)bdcfg,
bdcfg->read_size, bdcfg->prog_size, bdcfg->erase_size,
bdcfg->erase_count, bdcfg->erase_value, bdcfg->erase_cycles,
bdcfg->badblock_behavior, bdcfg->power_cycles,
bdcfg->powerloss_behavior, (void*)(uintptr_t)bdcfg->powerloss_cb,
bdcfg->powerloss_data, bdcfg->track_branches);
@@ -117,12 +117,12 @@ int lfs_emubd_createcfg(const struct lfs_config *cfg, const char *path,
bd->cfg = bdcfg;
// allocate our block array, all blocks start as uninitialized
bd->blocks = malloc(cfg->block_count * sizeof(lfs_emubd_block_t*));
bd->blocks = malloc(bd->cfg->erase_count * sizeof(lfs_emubd_block_t*));
if (!bd->blocks) {
LFS_EMUBD_TRACE("lfs_emubd_createcfg -> %d", LFS_ERR_NOMEM);
LFS_EMUBD_TRACE("lfs_emubd_create -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
memset(bd->blocks, 0, cfg->block_count * sizeof(lfs_emubd_block_t*));
memset(bd->blocks, 0, bd->cfg->erase_count * sizeof(lfs_emubd_block_t*));
// setup testing things
bd->readed = 0;
@@ -134,7 +134,7 @@ int lfs_emubd_createcfg(const struct lfs_config *cfg, const char *path,
if (bd->cfg->disk_path) {
bd->disk = malloc(sizeof(lfs_emubd_disk_t));
if (!bd->disk) {
LFS_EMUBD_TRACE("lfs_emubd_createcfg -> %d", LFS_ERR_NOMEM);
LFS_EMUBD_TRACE("lfs_emubd_create -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
bd->disk->rc = 1;
@@ -156,21 +156,21 @@ int lfs_emubd_createcfg(const struct lfs_config *cfg, const char *path,
// if we're emulating erase values, we can keep a block around in
// memory of just the erase state to speed up emulated erases
if (bd->cfg->erase_value != -1) {
bd->disk->scratch = malloc(cfg->block_size);
bd->disk->scratch = malloc(bd->cfg->erase_size);
if (!bd->disk->scratch) {
LFS_EMUBD_TRACE("lfs_emubd_createcfg -> %d", LFS_ERR_NOMEM);
LFS_EMUBD_TRACE("lfs_emubd_create -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
memset(bd->disk->scratch,
bd->cfg->erase_value,
cfg->block_size);
bd->cfg->erase_size);
// go ahead and erase all of the disk, otherwise the file will not
// match our internal representation
for (size_t i = 0; i < cfg->block_count; i++) {
for (size_t i = 0; i < bd->cfg->erase_count; i++) {
ssize_t res = write(bd->disk->fd,
bd->disk->scratch,
cfg->block_size);
bd->cfg->erase_size);
if (res < 0) {
int err = -errno;
LFS_EMUBD_TRACE("lfs_emubd_create -> %d", err);
@@ -180,33 +180,16 @@ int lfs_emubd_createcfg(const struct lfs_config *cfg, const char *path,
}
}
LFS_EMUBD_TRACE("lfs_emubd_createcfg -> %d", 0);
LFS_EMUBD_TRACE("lfs_emubd_create -> %d", 0);
return 0;
}
int lfs_emubd_create(const struct lfs_config *cfg, const char *path) {
LFS_EMUBD_TRACE("lfs_emubd_create(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p, "
".read_size=%"PRIu32", .prog_size=%"PRIu32", "
".block_size=%"PRIu32", .block_count=%"PRIu32"}, "
"\"%s\")",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
cfg->read_size, cfg->prog_size, cfg->block_size, cfg->block_count,
path);
static const struct lfs_emubd_config defaults = {.erase_value=-1};
int err = lfs_emubd_createcfg(cfg, path, &defaults);
LFS_EMUBD_TRACE("lfs_emubd_create -> %d", err);
return err;
}
int lfs_emubd_destroy(const struct lfs_config *cfg) {
LFS_EMUBD_TRACE("lfs_emubd_destroy(%p)", (void*)cfg);
lfs_emubd_t *bd = cfg->context;
// decrement reference counts
for (lfs_block_t i = 0; i < cfg->block_count; i++) {
for (lfs_block_t i = 0; i < bd->cfg->erase_count; i++) {
lfs_emubd_decblock(bd->blocks[i]);
}
free(bd->blocks);
@@ -237,10 +220,10 @@ int lfs_emubd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_emubd_t *bd = cfg->context;
// check if read is valid
LFS_ASSERT(block < cfg->block_count);
LFS_ASSERT(off % cfg->read_size == 0);
LFS_ASSERT(size % cfg->read_size == 0);
LFS_ASSERT(off+size <= cfg->block_size);
LFS_ASSERT(block < bd->cfg->erase_count);
LFS_ASSERT(off % bd->cfg->read_size == 0);
LFS_ASSERT(size % bd->cfg->read_size == 0);
LFS_ASSERT(off+size <= bd->cfg->erase_size);
// get the block
const lfs_emubd_block_t *b = bd->blocks[block];
@@ -287,10 +270,10 @@ int lfs_emubd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_emubd_t *bd = cfg->context;
// check if write is valid
LFS_ASSERT(block < cfg->block_count);
LFS_ASSERT(off % cfg->prog_size == 0);
LFS_ASSERT(size % cfg->prog_size == 0);
LFS_ASSERT(off+size <= cfg->block_size);
LFS_ASSERT(block < bd->cfg->erase_count);
LFS_ASSERT(off % bd->cfg->prog_size == 0);
LFS_ASSERT(size % bd->cfg->prog_size == 0);
LFS_ASSERT(off+size <= bd->cfg->erase_size);
// get the block
lfs_emubd_block_t *b = lfs_emubd_mutblock(cfg, &bd->blocks[block]);
@@ -327,7 +310,7 @@ int lfs_emubd_prog(const struct lfs_config *cfg, lfs_block_t block,
// mirror to disk file?
if (bd->disk) {
off_t res1 = lseek(bd->disk->fd,
(off_t)block*cfg->block_size + (off_t)off,
(off_t)block*bd->cfg->erase_size + (off_t)off,
SEEK_SET);
if (res1 < 0) {
int err = -errno;
@@ -372,11 +355,11 @@ int lfs_emubd_prog(const struct lfs_config *cfg, lfs_block_t block,
int lfs_emubd_erase(const struct lfs_config *cfg, lfs_block_t block) {
LFS_EMUBD_TRACE("lfs_emubd_erase(%p, 0x%"PRIx32" (%"PRIu32"))",
(void*)cfg, block, cfg->block_size);
(void*)cfg, block, ((lfs_emubd_t*)cfg->context)->cfg->erase_size);
lfs_emubd_t *bd = cfg->context;
// check if erase is valid
LFS_ASSERT(block < cfg->block_count);
LFS_ASSERT(block < bd->cfg->erase_count);
// get the block
lfs_emubd_block_t *b = lfs_emubd_mutblock(cfg, &bd->blocks[block]);
@@ -405,12 +388,12 @@ int lfs_emubd_erase(const struct lfs_config *cfg, lfs_block_t block) {
// emulate an erase value?
if (bd->cfg->erase_value != -1) {
memset(b->data, bd->cfg->erase_value, cfg->block_size);
memset(b->data, bd->cfg->erase_value, bd->cfg->erase_size);
// mirror to disk file?
if (bd->disk) {
off_t res1 = lseek(bd->disk->fd,
(off_t)block*cfg->block_size,
(off_t)block*bd->cfg->erase_size,
SEEK_SET);
if (res1 < 0) {
int err = -errno;
@@ -420,7 +403,7 @@ int lfs_emubd_erase(const struct lfs_config *cfg, lfs_block_t block) {
ssize_t res2 = write(bd->disk->fd,
bd->disk->scratch,
cfg->block_size);
bd->cfg->erase_size);
if (res2 < 0) {
int err = -errno;
LFS_EMUBD_TRACE("lfs_emubd_erase -> %d", err);
@@ -430,7 +413,7 @@ int lfs_emubd_erase(const struct lfs_config *cfg, lfs_block_t block) {
}
// track erases
bd->erased += cfg->block_size;
bd->erased += bd->cfg->erase_size;
if (bd->cfg->erase_sleep) {
int err = nanosleep(&(struct timespec){
.tv_sec=bd->cfg->erase_sleep/1000000000,
@@ -573,7 +556,7 @@ lfs_emubd_swear_t lfs_emubd_wear(const struct lfs_config *cfg,
lfs_emubd_t *bd = cfg->context;
// check if block is valid
LFS_ASSERT(block < cfg->block_count);
LFS_ASSERT(block < bd->cfg->erase_count);
// get the wear
lfs_emubd_wear_t wear;
@@ -595,7 +578,7 @@ int lfs_emubd_setwear(const struct lfs_config *cfg,
lfs_emubd_t *bd = cfg->context;
// check if block is valid
LFS_ASSERT(block < cfg->block_count);
LFS_ASSERT(block < bd->cfg->erase_count);
// set the wear
lfs_emubd_block_t *b = lfs_emubd_mutblock(cfg, &bd->blocks[block]);
@@ -635,13 +618,13 @@ int lfs_emubd_copy(const struct lfs_config *cfg, lfs_emubd_t *copy) {
lfs_emubd_t *bd = cfg->context;
// lazily copy over our block array
copy->blocks = malloc(cfg->block_count * sizeof(lfs_emubd_block_t*));
copy->blocks = malloc(bd->cfg->erase_count * sizeof(lfs_emubd_block_t*));
if (!copy->blocks) {
LFS_EMUBD_TRACE("lfs_emubd_copy -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
for (size_t i = 0; i < cfg->block_count; i++) {
for (size_t i = 0; i < bd->cfg->erase_count; i++) {
copy->blocks[i] = lfs_emubd_incblock(bd->blocks[i]);
}

View File

@@ -67,6 +67,18 @@ typedef int64_t lfs_emubd_ssleep_t;
// emubd config, this is required for testing
struct lfs_emubd_config {
// Minimum size of a read operation in bytes.
lfs_size_t read_size;
// Minimum size of a program operation in bytes.
lfs_size_t prog_size;
// Size of an erase operation in bytes.
lfs_size_t erase_size;
// Number of erase blocks on the device.
lfs_size_t erase_count;
// 8-bit erase value to use for simulating erases. -1 does not simulate
// erases, which can speed up testing by avoiding the extra block-device
// operations to store the erase value.
@@ -149,11 +161,7 @@ typedef struct lfs_emubd {
/// Block device API ///
// Create an emulating block device using the geometry in lfs_config
//
// Note that filebd is used if a path is provided, if path is NULL
// emubd will use rambd which can be much faster.
int lfs_emubd_create(const struct lfs_config *cfg, const char *path);
int lfs_emubd_createcfg(const struct lfs_config *cfg, const char *path,
int lfs_emubd_create(const struct lfs_config *cfg,
const struct lfs_emubd_config *bdcfg);
// Clean up memory associated with block device

View File

@@ -15,18 +15,22 @@
#include <windows.h>
#endif
int lfs_filebd_create(const struct lfs_config *cfg, const char *path) {
int lfs_filebd_create(const struct lfs_config *cfg, const char *path,
const struct lfs_filebd_config *bdcfg) {
LFS_FILEBD_TRACE("lfs_filebd_create(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p, "
".read_size=%"PRIu32", .prog_size=%"PRIu32", "
".block_size=%"PRIu32", .block_count=%"PRIu32"}, "
"\"%s\")",
".read=%p, .prog=%p, .erase=%p, .sync=%p}, "
"\"%s\", "
"%p {.read_size=%"PRIu32", .prog_size=%"PRIu32", "
".erase_size=%"PRIu32", .erase_count=%"PRIu32"})",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
cfg->read_size, cfg->prog_size, cfg->block_size, cfg->block_count,
path, (void*)bdcfg, bdcfg->erase_value);
path,
(void*)bdcfg,
bdcfg->read_size, bdcfg->prog_size, bdcfg->erase_size,
bdcfg->erase_count);
lfs_filebd_t *bd = cfg->context;
bd->cfg = bdcfg;
// open file
#ifdef _WIN32
@@ -66,17 +70,17 @@ int lfs_filebd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_filebd_t *bd = cfg->context;
// check if read is valid
LFS_ASSERT(block < cfg->block_count);
LFS_ASSERT(off % cfg->read_size == 0);
LFS_ASSERT(size % cfg->read_size == 0);
LFS_ASSERT(off+size <= cfg->block_size);
LFS_ASSERT(block < bd->cfg->erase_count);
LFS_ASSERT(off % bd->cfg->read_size == 0);
LFS_ASSERT(size % bd->cfg->read_size == 0);
LFS_ASSERT(off+size <= bd->cfg->erase_size);
// zero for reproducibility (in case file is truncated)
memset(buffer, 0, size);
// read
off_t res1 = lseek(bd->fd,
(off_t)block*cfg->block_size + (off_t)off, SEEK_SET);
(off_t)block*bd->cfg->erase_size + (off_t)off, SEEK_SET);
if (res1 < 0) {
int err = -errno;
LFS_FILEBD_TRACE("lfs_filebd_read -> %d", err);
@@ -102,14 +106,14 @@ int lfs_filebd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_filebd_t *bd = cfg->context;
// check if write is valid
LFS_ASSERT(block < cfg->block_count);
LFS_ASSERT(off % cfg->prog_size == 0);
LFS_ASSERT(size % cfg->prog_size == 0);
LFS_ASSERT(off+size <= cfg->block_size);
LFS_ASSERT(block < bd->cfg->erase_count);
LFS_ASSERT(off % bd->cfg->prog_size == 0);
LFS_ASSERT(size % bd->cfg->prog_size == 0);
LFS_ASSERT(off+size <= bd->cfg->erase_size);
// program data
off_t res1 = lseek(bd->fd,
(off_t)block*cfg->block_size + (off_t)off, SEEK_SET);
(off_t)block*bd->cfg->erase_size + (off_t)off, SEEK_SET);
if (res1 < 0) {
int err = -errno;
LFS_FILEBD_TRACE("lfs_filebd_prog -> %d", err);
@@ -129,10 +133,11 @@ int lfs_filebd_prog(const struct lfs_config *cfg, lfs_block_t block,
int lfs_filebd_erase(const struct lfs_config *cfg, lfs_block_t block) {
LFS_FILEBD_TRACE("lfs_filebd_erase(%p, 0x%"PRIx32" (%"PRIu32"))",
(void*)cfg, block, cfg->block_size);
(void*)cfg, block, ((lfs_file_t*)cfg->context)->cfg->erase_size);
lfs_filebd_t *bd = cfg->context;
// check if erase is valid
LFS_ASSERT(block < cfg->block_count);
LFS_ASSERT(block < bd->cfg->erase_count);
// erase is a noop
(void)block;

View File

@@ -26,14 +26,31 @@ extern "C"
#endif
#endif
// filebd config
struct lfs_filebd_config {
// Minimum size of a read operation in bytes.
lfs_size_t read_size;
// Minimum size of a program operation in bytes.
lfs_size_t prog_size;
// Size of an erase operation in bytes.
lfs_size_t erase_size;
// Number of erase blocks on the device.
lfs_size_t erase_count;
};
// filebd state
typedef struct lfs_filebd {
int fd;
const struct lfs_filebd_config *cfg;
} lfs_filebd_t;
// Create a file block device using the geometry in lfs_config
int lfs_filebd_create(const struct lfs_config *cfg, const char *path);
// Create a file block device
int lfs_filebd_create(const struct lfs_config *cfg, const char *path,
const struct lfs_filebd_config *bdcfg);
// Clean up memory associated with block device
int lfs_filebd_destroy(const struct lfs_config *cfg);

View File

@@ -7,18 +7,19 @@
*/
#include "bd/lfs_rambd.h"
int lfs_rambd_createcfg(const struct lfs_config *cfg,
int lfs_rambd_create(const struct lfs_config *cfg,
const struct lfs_rambd_config *bdcfg) {
LFS_RAMBD_TRACE("lfs_rambd_createcfg(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p, "
".read_size=%"PRIu32", .prog_size=%"PRIu32", "
".block_size=%"PRIu32", .block_count=%"PRIu32"}, "
"%p {.buffer=%p})",
LFS_RAMBD_TRACE("lfs_rambd_create(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p}, "
"%p {.read_size=%"PRIu32", .prog_size=%"PRIu32", "
".erase_size=%"PRIu32", .erase_count=%"PRIu32", "
".buffer=%p})",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
cfg->read_size, cfg->prog_size, cfg->block_size, cfg->block_count,
(void*)bdcfg, bdcfg->buffer);
(void*)bdcfg,
bdcfg->read_size, bdcfg->prog_size, bdcfg->erase_size,
bdcfg->erase_count, bdcfg->buffer);
lfs_rambd_t *bd = cfg->context;
bd->cfg = bdcfg;
@@ -26,35 +27,20 @@ int lfs_rambd_createcfg(const struct lfs_config *cfg,
if (bd->cfg->buffer) {
bd->buffer = bd->cfg->buffer;
} else {
bd->buffer = lfs_malloc(cfg->block_size * cfg->block_count);
bd->buffer = lfs_malloc(bd->cfg->erase_size * bd->cfg->erase_count);
if (!bd->buffer) {
LFS_RAMBD_TRACE("lfs_rambd_createcfg -> %d", LFS_ERR_NOMEM);
LFS_RAMBD_TRACE("lfs_rambd_create -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
}
// zero for reproducibility
memset(bd->buffer, 0, cfg->block_size * cfg->block_count);
memset(bd->buffer, 0, bd->cfg->erase_size * bd->cfg->erase_count);
LFS_RAMBD_TRACE("lfs_rambd_createcfg -> %d", 0);
LFS_RAMBD_TRACE("lfs_rambd_create -> %d", 0);
return 0;
}
int lfs_rambd_create(const struct lfs_config *cfg) {
LFS_RAMBD_TRACE("lfs_rambd_create(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p, "
".read_size=%"PRIu32", .prog_size=%"PRIu32", "
".block_size=%"PRIu32", .block_count=%"PRIu32"})",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
cfg->read_size, cfg->prog_size, cfg->block_size, cfg->block_count);
static const struct lfs_rambd_config defaults = {0};
int err = lfs_rambd_createcfg(cfg, &defaults);
LFS_RAMBD_TRACE("lfs_rambd_create -> %d", err);
return err;
}
int lfs_rambd_destroy(const struct lfs_config *cfg) {
LFS_RAMBD_TRACE("lfs_rambd_destroy(%p)", (void*)cfg);
// clean up memory
@@ -74,13 +60,13 @@ int lfs_rambd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_rambd_t *bd = cfg->context;
// check if read is valid
LFS_ASSERT(block < cfg->block_count);
LFS_ASSERT(off % cfg->read_size == 0);
LFS_ASSERT(size % cfg->read_size == 0);
LFS_ASSERT(off+size <= cfg->block_size);
LFS_ASSERT(block < bd->cfg->erase_count);
LFS_ASSERT(off % bd->cfg->read_size == 0);
LFS_ASSERT(size % bd->cfg->read_size == 0);
LFS_ASSERT(off+size <= bd->cfg->erase_size);
// read data
memcpy(buffer, &bd->buffer[block*cfg->block_size + off], size);
memcpy(buffer, &bd->buffer[block*bd->cfg->erase_size + off], size);
LFS_RAMBD_TRACE("lfs_rambd_read -> %d", 0);
return 0;
@@ -94,13 +80,13 @@ int lfs_rambd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_rambd_t *bd = cfg->context;
// check if write is valid
LFS_ASSERT(block < cfg->block_count);
LFS_ASSERT(off % cfg->prog_size == 0);
LFS_ASSERT(size % cfg->prog_size == 0);
LFS_ASSERT(off+size <= cfg->block_size);
LFS_ASSERT(block < bd->cfg->erase_count);
LFS_ASSERT(off % bd->cfg->prog_size == 0);
LFS_ASSERT(size % bd->cfg->prog_size == 0);
LFS_ASSERT(off+size <= bd->cfg->erase_size);
// program data
memcpy(&bd->buffer[block*cfg->block_size + off], buffer, size);
memcpy(&bd->buffer[block*bd->cfg->erase_size + off], buffer, size);
LFS_RAMBD_TRACE("lfs_rambd_prog -> %d", 0);
return 0;
@@ -108,10 +94,11 @@ int lfs_rambd_prog(const struct lfs_config *cfg, lfs_block_t block,
int lfs_rambd_erase(const struct lfs_config *cfg, lfs_block_t block) {
LFS_RAMBD_TRACE("lfs_rambd_erase(%p, 0x%"PRIx32" (%"PRIu32"))",
(void*)cfg, block, cfg->block_size);
(void*)cfg, block, ((lfs_rambd_t*)cfg->context)->cfg->erase_size);
lfs_rambd_t *bd = cfg->context;
// check if erase is valid
LFS_ASSERT(block < cfg->block_count);
LFS_ASSERT(block < bd->cfg->erase_count);
// erase is a noop
(void)block;

View File

@@ -26,8 +26,20 @@ extern "C"
#endif
#endif
// rambd config (optional)
// rambd config
struct lfs_rambd_config {
// Minimum size of a read operation in bytes.
lfs_size_t read_size;
// Minimum size of a program operation in bytes.
lfs_size_t prog_size;
// Size of an erase operation in bytes.
lfs_size_t erase_size;
// Number of erase blocks on the device.
lfs_size_t erase_count;
// Optional statically allocated buffer for the block device.
void *buffer;
};
@@ -39,9 +51,8 @@ typedef struct lfs_rambd {
} lfs_rambd_t;
// Create a RAM block device using the geometry in lfs_config
int lfs_rambd_create(const struct lfs_config *cfg);
int lfs_rambd_createcfg(const struct lfs_config *cfg,
// Create a RAM block device
int lfs_rambd_create(const struct lfs_config *cfg,
const struct lfs_rambd_config *bdcfg);
// Clean up memory associated with block device

348
lfs.c
View File

@@ -46,8 +46,8 @@ static int lfs_bd_read(lfs_t *lfs,
lfs_block_t block, lfs_off_t off,
void *buffer, lfs_size_t size) {
uint8_t *data = buffer;
if (block >= lfs->cfg->block_count ||
off+size > lfs->cfg->block_size) {
if (off+size > lfs->cfg->block_size
|| (lfs->block_count && block >= lfs->block_count)) {
return LFS_ERR_CORRUPT;
}
@@ -104,7 +104,7 @@ static int lfs_bd_read(lfs_t *lfs,
}
// load to cache, first condition can no longer fail
LFS_ASSERT(block < lfs->cfg->block_count);
LFS_ASSERT(!lfs->block_count || block < lfs->block_count);
rcache->block = block;
rcache->off = lfs_aligndown(off, lfs->cfg->read_size);
rcache->size = lfs_min(
@@ -176,7 +176,7 @@ static int lfs_bd_crc(lfs_t *lfs,
static int lfs_bd_flush(lfs_t *lfs,
lfs_cache_t *pcache, lfs_cache_t *rcache, bool validate) {
if (pcache->block != LFS_BLOCK_NULL && pcache->block != LFS_BLOCK_INLINE) {
LFS_ASSERT(pcache->block < lfs->cfg->block_count);
LFS_ASSERT(pcache->block < lfs->block_count);
lfs_size_t diff = lfs_alignup(pcache->size, lfs->cfg->prog_size);
int err = lfs->cfg->prog(lfs->cfg, pcache->block,
pcache->off, pcache->buffer, diff);
@@ -229,7 +229,7 @@ static int lfs_bd_prog(lfs_t *lfs,
lfs_block_t block, lfs_off_t off,
const void *buffer, lfs_size_t size) {
const uint8_t *data = buffer;
LFS_ASSERT(block == LFS_BLOCK_INLINE || block < lfs->cfg->block_count);
LFS_ASSERT(block == LFS_BLOCK_INLINE || block < lfs->block_count);
LFS_ASSERT(off + size <= lfs->cfg->block_size);
while (size > 0) {
@@ -273,7 +273,7 @@ static int lfs_bd_prog(lfs_t *lfs,
#ifndef LFS_READONLY
static int lfs_bd_erase(lfs_t *lfs, lfs_block_t block) {
LFS_ASSERT(block < lfs->cfg->block_count);
LFS_ASSERT(block < lfs->block_count);
int err = lfs->cfg->erase(lfs->cfg, block);
LFS_ASSERT(err <= 0);
return err;
@@ -593,77 +593,109 @@ static int lfs_rawunmount(lfs_t *lfs);
/// Block allocator ///
// allocations should call this when all allocated blocks are committed to
// the filesystem
//
// after a checkpoint, the block allocator may realloc any untracked blocks
static void lfs_alloc_ckpoint(lfs_t *lfs) {
lfs->lookahead.ckpoint = lfs->block_count;
}
// drop the lookahead buffer, this is done during mounting and failed
// traversals in order to avoid invalid lookahead state
static void lfs_alloc_drop(lfs_t *lfs) {
lfs->lookahead.size = 0;
lfs->lookahead.next = 0;
lfs_alloc_ckpoint(lfs);
}
#ifndef LFS_READONLY
static int lfs_alloc_lookahead(void *p, lfs_block_t block) {
lfs_t *lfs = (lfs_t*)p;
lfs_block_t off = ((block - lfs->free.off)
+ lfs->cfg->block_count) % lfs->cfg->block_count;
lfs_block_t off = ((block - lfs->lookahead.start)
+ lfs->block_count) % lfs->block_count;
if (off < lfs->free.size) {
lfs->free.buffer[off / 32] |= 1U << (off % 32);
if (off < lfs->lookahead.size) {
lfs->lookahead.buffer[off / 8] |= 1U << (off % 8);
}
return 0;
}
#endif
// indicate allocated blocks have been committed into the filesystem, this
// is to prevent blocks from being garbage collected in the middle of a
// commit operation
static void lfs_alloc_ack(lfs_t *lfs) {
lfs->free.ack = lfs->cfg->block_count;
}
#ifndef LFS_READONLY
static int lfs_alloc_scan(lfs_t *lfs) {
// move lookahead buffer to the first unused block
//
// note we limit the lookahead buffer to at most the amount of blocks
// checkpointed, this prevents the math in lfs_alloc from underflowing
lfs->lookahead.start = (lfs->lookahead.start + lfs->lookahead.next)
% lfs->block_count;
lfs->lookahead.next = 0;
lfs->lookahead.size = lfs_min(
8*lfs->cfg->lookahead_size,
lfs->lookahead.ckpoint);
// drop the lookahead buffer, this is done during mounting and failed
// traversals in order to avoid invalid lookahead state
static void lfs_alloc_drop(lfs_t *lfs) {
lfs->free.size = 0;
lfs->free.i = 0;
lfs_alloc_ack(lfs);
// find mask of free blocks from tree
memset(lfs->lookahead.buffer, 0, lfs->cfg->lookahead_size);
int err = lfs_fs_rawtraverse(lfs, lfs_alloc_lookahead, lfs, true);
if (err) {
lfs_alloc_drop(lfs);
return err;
}
return 0;
}
#endif
#ifndef LFS_READONLY
static int lfs_alloc(lfs_t *lfs, lfs_block_t *block) {
while (true) {
while (lfs->free.i != lfs->free.size) {
lfs_block_t off = lfs->free.i;
lfs->free.i += 1;
lfs->free.ack -= 1;
if (!(lfs->free.buffer[off / 32] & (1U << (off % 32)))) {
// scan our lookahead buffer for free blocks
while (lfs->lookahead.next < lfs->lookahead.size) {
if (!(lfs->lookahead.buffer[lfs->lookahead.next / 8]
& (1U << (lfs->lookahead.next % 8)))) {
// found a free block
*block = (lfs->free.off + off) % lfs->cfg->block_count;
*block = (lfs->lookahead.start + lfs->lookahead.next)
% lfs->block_count;
// eagerly find next off so an alloc ack can
// discredit old lookahead blocks
while (lfs->free.i != lfs->free.size &&
(lfs->free.buffer[lfs->free.i / 32]
& (1U << (lfs->free.i % 32)))) {
lfs->free.i += 1;
lfs->free.ack -= 1;
// eagerly find next free block to maximize how many blocks
// lfs_alloc_ckpoint makes available for scanning
while (true) {
lfs->lookahead.next += 1;
lfs->lookahead.ckpoint -= 1;
if (lfs->lookahead.next >= lfs->lookahead.size
|| !(lfs->lookahead.buffer[lfs->lookahead.next / 8]
& (1U << (lfs->lookahead.next % 8)))) {
return 0;
}
}
return 0;
}
lfs->lookahead.next += 1;
lfs->lookahead.ckpoint -= 1;
}
// check if we have looked at all blocks since last ack
if (lfs->free.ack == 0) {
LFS_ERROR("No more free space %"PRIu32,
lfs->free.i + lfs->free.off);
// In order to keep our block allocator from spinning forever when our
// filesystem is full, we mark points where there are no in-flight
// allocations with a checkpoint before starting a set of allocations.
//
// If we've looked at all blocks since the last checkpoint, we report
// the filesystem as out of storage.
//
if (lfs->lookahead.ckpoint <= 0) {
LFS_ERROR("No more free space 0x%"PRIx32,
(lfs->lookahead.start + lfs->lookahead.next)
% lfs->cfg->block_count);
return LFS_ERR_NOSPC;
}
lfs->free.off = (lfs->free.off + lfs->free.size)
% lfs->cfg->block_count;
lfs->free.size = lfs_min(8*lfs->cfg->lookahead_size, lfs->free.ack);
lfs->free.i = 0;
// find mask of free blocks from tree
memset(lfs->free.buffer, 0, lfs->cfg->lookahead_size);
int err = lfs_fs_rawtraverse(lfs, lfs_alloc_lookahead, lfs, true);
if (err) {
lfs_alloc_drop(lfs);
// No blocks in our lookahead buffer, we need to scan the filesystem for
// unused blocks in the next lookahead window.
int err = lfs_alloc_scan(lfs);
if(err) {
return err;
}
}
@@ -877,7 +909,7 @@ static int lfs_dir_traverse(lfs_t *lfs,
// iterate over directory and attrs
lfs_tag_t tag;
const void *buffer;
struct lfs_diskoff disk;
struct lfs_diskoff disk = {0};
while (true) {
{
if (off+lfs_tag_dsize(ptag) < dir->off) {
@@ -1067,7 +1099,8 @@ static lfs_stag_t lfs_dir_fetchmatch(lfs_t *lfs,
// if either block address is invalid we return LFS_ERR_CORRUPT here,
// otherwise later writes to the pair could fail
if (pair[0] >= lfs->cfg->block_count || pair[1] >= lfs->cfg->block_count) {
if (lfs->block_count
&& (pair[0] >= lfs->block_count || pair[1] >= lfs->block_count)) {
return LFS_ERR_CORRUPT;
}
@@ -1628,7 +1661,7 @@ static int lfs_dir_commitcrc(lfs_t *lfs, struct lfs_commit *commit) {
}
// space for fcrc?
uint8_t eperturb = -1;
uint8_t eperturb = (uint8_t)-1;
if (noff >= end && noff <= lfs->cfg->block_size - lfs->cfg->prog_size) {
// first read the leading byte, this always contains a bit
// we can perturb to avoid writes that don't change the fcrc
@@ -2140,7 +2173,7 @@ static int lfs_dir_splittingcompact(lfs_t *lfs, lfs_mdir_t *dir,
// do we have extra space? littlefs can't reclaim this space
// by itself, so expand cautiously
if ((lfs_size_t)size < lfs->cfg->block_count/2) {
if ((lfs_size_t)size < lfs->block_count/2) {
LFS_DEBUG("Expanding superblock at rev %"PRIu32, dir->rev);
int err = lfs_dir_split(lfs, dir, attrs, attrcount,
source, begin, end);
@@ -2573,7 +2606,7 @@ static int lfs_rawmkdir(lfs_t *lfs, const char *path) {
}
// build up new directory
lfs_alloc_ack(lfs);
lfs_alloc_ckpoint(lfs);
lfs_mdir_t dir;
err = lfs_dir_alloc(lfs, &dir);
if (err) {
@@ -3259,7 +3292,7 @@ relocate:
#ifndef LFS_READONLY
static int lfs_file_outline(lfs_t *lfs, lfs_file_t *file) {
file->off = file->pos;
lfs_alloc_ack(lfs);
lfs_alloc_ckpoint(lfs);
int err = lfs_file_relocate(lfs, file);
if (err) {
return err;
@@ -3522,7 +3555,7 @@ static lfs_ssize_t lfs_file_flushedwrite(lfs_t *lfs, lfs_file_t *file,
}
// extend file with new blocks
lfs_alloc_ack(lfs);
lfs_alloc_ckpoint(lfs);
int err = lfs_ctz_extend(lfs, &file->cache, &lfs->rcache,
file->block, file->pos,
&file->block, &file->off);
@@ -3565,7 +3598,7 @@ relocate:
data += diff;
nsize -= diff;
lfs_alloc_ack(lfs);
lfs_alloc_ckpoint(lfs);
}
return size;
@@ -4095,6 +4128,7 @@ static int lfs_rawremoveattr(lfs_t *lfs, const char *path, uint8_t type) {
/// Filesystem operations ///
static int lfs_init(lfs_t *lfs, const struct lfs_config *cfg) {
lfs->cfg = cfg;
lfs->block_count = cfg->block_count; // May be 0
int err = 0;
#ifdef LFS_MULTIVERSION
@@ -4139,6 +4173,14 @@ static int lfs_init(lfs_t *lfs, const struct lfs_config *cfg) {
// wear-leveling.
LFS_ASSERT(lfs->cfg->block_cycles != 0);
// check that compact_thresh makes sense
//
// metadata can't be compacted below block_size/2, and metadata can't
// exceed a block_size
LFS_ASSERT(lfs->cfg->compact_thresh == 0
|| lfs->cfg->compact_thresh >= lfs->cfg->block_size/2);
LFS_ASSERT(lfs->cfg->compact_thresh == (lfs_size_t)-1
|| lfs->cfg->compact_thresh <= lfs->cfg->block_size);
// setup read cache
if (lfs->cfg->read_buffer) {
@@ -4166,15 +4208,14 @@ static int lfs_init(lfs_t *lfs, const struct lfs_config *cfg) {
lfs_cache_zero(lfs, &lfs->rcache);
lfs_cache_zero(lfs, &lfs->pcache);
// setup lookahead, must be multiple of 64-bits, 32-bit aligned
// setup lookahead buffer, note mount finishes initializing this after
// we establish a decent pseudo-random seed
LFS_ASSERT(lfs->cfg->lookahead_size > 0);
LFS_ASSERT(lfs->cfg->lookahead_size % 8 == 0 &&
(uintptr_t)lfs->cfg->lookahead_buffer % 4 == 0);
if (lfs->cfg->lookahead_buffer) {
lfs->free.buffer = lfs->cfg->lookahead_buffer;
lfs->lookahead.buffer = lfs->cfg->lookahead_buffer;
} else {
lfs->free.buffer = lfs_malloc(lfs->cfg->lookahead_size);
if (!lfs->free.buffer) {
lfs->lookahead.buffer = lfs_malloc(lfs->cfg->lookahead_size);
if (!lfs->lookahead.buffer) {
err = LFS_ERR_NOMEM;
goto cleanup;
}
@@ -4231,12 +4272,14 @@ static int lfs_deinit(lfs_t *lfs) {
}
if (!lfs->cfg->lookahead_buffer) {
lfs_free(lfs->free.buffer);
lfs_free(lfs->lookahead.buffer);
}
return 0;
}
#ifndef LFS_READONLY
static int lfs_rawformat(lfs_t *lfs, const struct lfs_config *cfg) {
int err = 0;
@@ -4246,13 +4289,15 @@ static int lfs_rawformat(lfs_t *lfs, const struct lfs_config *cfg) {
return err;
}
LFS_ASSERT(cfg->block_count != 0);
// create free lookahead
memset(lfs->free.buffer, 0, lfs->cfg->lookahead_size);
lfs->free.off = 0;
lfs->free.size = lfs_min(8*lfs->cfg->lookahead_size,
lfs->cfg->block_count);
lfs->free.i = 0;
lfs_alloc_ack(lfs);
memset(lfs->lookahead.buffer, 0, lfs->cfg->lookahead_size);
lfs->lookahead.start = 0;
lfs->lookahead.size = lfs_min(8*lfs->cfg->lookahead_size,
lfs->block_count);
lfs->lookahead.next = 0;
lfs_alloc_ckpoint(lfs);
// create root dir
lfs_mdir_t root;
@@ -4265,7 +4310,7 @@ static int lfs_rawformat(lfs_t *lfs, const struct lfs_config *cfg) {
lfs_superblock_t superblock = {
.version = lfs_fs_disk_version(lfs),
.block_size = lfs->cfg->block_size,
.block_count = lfs->cfg->block_count,
.block_count = lfs->block_count,
.name_max = lfs->name_max,
.file_max = lfs->file_max,
.attr_max = lfs->attr_max,
@@ -4422,13 +4467,17 @@ static int lfs_rawmount(lfs_t *lfs, const struct lfs_config *cfg) {
lfs->attr_max = superblock.attr_max;
}
if (superblock.block_count != lfs->cfg->block_count) {
// this is where we get the block_count from disk if block_count=0
if (lfs->cfg->block_count
&& superblock.block_count != lfs->cfg->block_count) {
LFS_ERROR("Invalid block count (%"PRIu32" != %"PRIu32")",
superblock.block_count, lfs->cfg->block_count);
err = LFS_ERR_INVAL;
goto cleanup;
}
lfs->block_count = superblock.block_count;
if (superblock.block_size != lfs->cfg->block_size) {
LFS_ERROR("Invalid block size (%"PRIu32" != %"PRIu32")",
superblock.block_size, lfs->cfg->block_size);
@@ -4444,12 +4493,6 @@ static int lfs_rawmount(lfs_t *lfs, const struct lfs_config *cfg) {
}
}
// found superblock?
if (lfs_pair_isnull(lfs->root)) {
err = LFS_ERR_INVAL;
goto cleanup;
}
// update littlefs with gstate
if (!lfs_gstate_iszero(&lfs->gstate)) {
LFS_DEBUG("Found pending gstate 0x%08"PRIx32"%08"PRIx32"%08"PRIx32,
@@ -4462,7 +4505,7 @@ static int lfs_rawmount(lfs_t *lfs, const struct lfs_config *cfg) {
// setup free lookahead, to distribute allocations uniformly across
// boots, we start the allocator at a random location
lfs->free.off = lfs->seed % lfs->cfg->block_count;
lfs->lookahead.start = lfs->seed % lfs->block_count;
lfs_alloc_drop(lfs);
return 0;
@@ -4506,6 +4549,10 @@ static int lfs_fs_rawstat(lfs_t *lfs, struct lfs_fsinfo *fsinfo) {
fsinfo->disk_version = superblock.version;
}
// filesystem geometry
fsinfo->block_size = lfs->cfg->block_size;
fsinfo->block_count = lfs->block_count;
// other on-disk configuration, we cache all of these for internal use
fsinfo->name_max = lfs->name_max;
fsinfo->file_max = lfs->file_max;
@@ -4771,7 +4818,7 @@ static int lfs_fs_desuperblock(lfs_t *lfs) {
lfs_superblock_t superblock = {
.version = lfs_fs_disk_version(lfs),
.block_size = lfs->cfg->block_size,
.block_count = lfs->cfg->block_count,
.block_count = lfs->block_count,
.name_max = lfs->name_max,
.file_max = lfs->file_max,
.attr_max = lfs->attr_max,
@@ -4979,7 +5026,7 @@ static int lfs_fs_forceconsistency(lfs_t *lfs) {
#endif
#ifndef LFS_READONLY
int lfs_fs_rawmkconsistent(lfs_t *lfs) {
static int lfs_fs_rawmkconsistent(lfs_t *lfs) {
// lfs_fs_forceconsistency does most of the work here
int err = lfs_fs_forceconsistency(lfs);
if (err) {
@@ -5025,6 +5072,95 @@ static lfs_ssize_t lfs_fs_rawsize(lfs_t *lfs) {
return size;
}
// explicit garbage collection
#ifndef LFS_READONLY
static int lfs_fs_rawgc(lfs_t *lfs) {
// force consistency, even if we're not necessarily going to write,
// because this function is supposed to take care of janitorial work
// isn't it?
int err = lfs_fs_forceconsistency(lfs);
if (err) {
return err;
}
// try to compact metadata pairs, note we can't really accomplish
// anything if compact_thresh doesn't at least leave a prog_size
// available
if (lfs->cfg->compact_thresh
< lfs->cfg->block_size - lfs->cfg->prog_size) {
// iterate over all mdirs
lfs_mdir_t mdir = {.tail = {0, 1}};
while (!lfs_pair_isnull(mdir.tail)) {
err = lfs_dir_fetch(lfs, &mdir, mdir.tail);
if (err) {
return err;
}
// not erased? exceeds our compaction threshold?
if (!mdir.erased || ((lfs->cfg->compact_thresh == 0)
? mdir.off > lfs->cfg->block_size - lfs->cfg->block_size/8
: mdir.off > lfs->cfg->compact_thresh)) {
// the easiest way to trigger a compaction is to mark
// the mdir as unerased and add an empty commit
mdir.erased = false;
err = lfs_dir_commit(lfs, &mdir, NULL, 0);
if (err) {
return err;
}
}
}
}
// try to populate the lookahead buffer, unless it's already full
if (lfs->lookahead.size < 8*lfs->cfg->lookahead_size) {
err = lfs_alloc_scan(lfs);
if (err) {
return err;
}
}
return 0;
}
#endif
#ifndef LFS_READONLY
static int lfs_fs_rawgrow(lfs_t *lfs, lfs_size_t block_count) {
// shrinking is not supported
LFS_ASSERT(block_count >= lfs->block_count);
if (block_count > lfs->block_count) {
lfs->block_count = block_count;
// fetch the root
lfs_mdir_t root;
int err = lfs_dir_fetch(lfs, &root, lfs->root);
if (err) {
return err;
}
// update the superblock
lfs_superblock_t superblock;
lfs_stag_t tag = lfs_dir_get(lfs, &root, LFS_MKTAG(0x7ff, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_INLINESTRUCT, 0, sizeof(superblock)),
&superblock);
if (tag < 0) {
return tag;
}
lfs_superblock_fromle32(&superblock);
superblock.block_count = lfs->block_count;
lfs_superblock_tole32(&superblock);
err = lfs_dir_commit(lfs, &root, LFS_MKATTRS(
{tag, &superblock}));
if (err) {
return err;
}
}
return 0;
}
#endif
#ifdef LFS_MIGRATE
////// Migration from littelfs v1 below this //////
@@ -5393,10 +5529,10 @@ static int lfs1_mount(lfs_t *lfs, struct lfs1 *lfs1,
lfs->lfs1->root[1] = LFS_BLOCK_NULL;
// setup free lookahead
lfs->free.off = 0;
lfs->free.size = 0;
lfs->free.i = 0;
lfs_alloc_ack(lfs);
lfs->lookahead.start = 0;
lfs->lookahead.size = 0;
lfs->lookahead.next = 0;
lfs_alloc_ckpoint(lfs);
// load superblock
lfs1_dir_t dir;
@@ -5449,6 +5585,10 @@ static int lfs1_unmount(lfs_t *lfs) {
/// v1 migration ///
static int lfs_rawmigrate(lfs_t *lfs, const struct lfs_config *cfg) {
struct lfs1 lfs1;
// Indeterminate filesystem size not allowed for migration.
LFS_ASSERT(cfg->block_count != 0);
int err = lfs1_mount(lfs, &lfs1, cfg);
if (err) {
return err;
@@ -6204,6 +6344,38 @@ int lfs_fs_mkconsistent(lfs_t *lfs) {
}
#endif
#ifndef LFS_READONLY
int lfs_fs_gc(lfs_t *lfs) {
int err = LFS_LOCK(lfs->cfg);
if (err) {
return err;
}
LFS_TRACE("lfs_fs_gc(%p)", (void*)lfs);
err = lfs_fs_rawgc(lfs);
LFS_TRACE("lfs_fs_gc -> %d", err);
LFS_UNLOCK(lfs->cfg);
return err;
}
#endif
#ifndef LFS_READONLY
int lfs_fs_grow(lfs_t *lfs, lfs_size_t block_count) {
int err = LFS_LOCK(lfs->cfg);
if (err) {
return err;
}
LFS_TRACE("lfs_fs_grow(%p, %"PRIu32")", (void*)lfs, block_count);
err = lfs_fs_rawgrow(lfs, block_count);
LFS_TRACE("lfs_fs_grow -> %d", err);
LFS_UNLOCK(lfs->cfg);
return err;
}
#endif
#ifdef LFS_MIGRATE
int lfs_migrate(lfs_t *lfs, const struct lfs_config *cfg) {
int err = LFS_LOCK(cfg);

67
lfs.h
View File

@@ -21,7 +21,7 @@ extern "C"
// Software library version
// Major (top-nibble), incremented on backwards incompatible changes
// Minor (bottom-nibble), incremented on feature additions
#define LFS_VERSION 0x00020007
#define LFS_VERSION 0x00020008
#define LFS_VERSION_MAJOR (0xffff & (LFS_VERSION >> 16))
#define LFS_VERSION_MINOR (0xffff & (LFS_VERSION >> 0))
@@ -226,9 +226,20 @@ struct lfs_config {
// Size of the lookahead buffer in bytes. A larger lookahead buffer
// increases the number of blocks found during an allocation pass. The
// lookahead buffer is stored as a compact bitmap, so each byte of RAM
// can track 8 blocks. Must be a multiple of 8.
// can track 8 blocks.
lfs_size_t lookahead_size;
// Threshold for metadata compaction during lfs_fs_gc in bytes. Metadata
// pairs that exceed this threshold will be compacted during lfs_fs_gc.
// Defaults to ~88% block_size when zero, though the default may change
// in the future.
//
// Note this only affects lfs_fs_gc. Normal compactions still only occur
// when full.
//
// Set to -1 to disable metadata compaction during lfs_fs_gc.
lfs_size_t compact_thresh;
// Optional statically allocated read buffer. Must be cache_size.
// By default lfs_malloc is used to allocate this buffer.
void *read_buffer;
@@ -237,9 +248,8 @@ struct lfs_config {
// By default lfs_malloc is used to allocate this buffer.
void *prog_buffer;
// Optional statically allocated lookahead buffer. Must be lookahead_size
// and aligned to a 32-bit boundary. By default lfs_malloc is used to
// allocate this buffer.
// Optional statically allocated lookahead buffer. Must be lookahead_size.
// By default lfs_malloc is used to allocate this buffer.
void *lookahead_buffer;
// Optional upper limit on length of file names in bytes. No downside for
@@ -293,6 +303,12 @@ struct lfs_fsinfo {
// On-disk version.
uint32_t disk_version;
// Size of a logical block in bytes.
lfs_size_t block_size;
// Number of logical blocks in filesystem.
lfs_size_t block_count;
// Upper limit on the length of file names in bytes.
lfs_size_t name_max;
@@ -424,15 +440,16 @@ typedef struct lfs {
lfs_gstate_t gdisk;
lfs_gstate_t gdelta;
struct lfs_free {
lfs_block_t off;
struct lfs_lookahead {
lfs_block_t start;
lfs_block_t size;
lfs_block_t i;
lfs_block_t ack;
uint32_t *buffer;
} free;
lfs_block_t next;
lfs_block_t ckpoint;
uint8_t *buffer;
} lookahead;
const struct lfs_config *cfg;
lfs_size_t block_count;
lfs_size_t name_max;
lfs_size_t file_max;
lfs_size_t attr_max;
@@ -717,6 +734,34 @@ int lfs_fs_traverse(lfs_t *lfs, int (*cb)(void*, lfs_block_t), void *data);
int lfs_fs_mkconsistent(lfs_t *lfs);
#endif
#ifndef LFS_READONLY
// Attempt any janitorial work
//
// This currently:
// 1. Calls mkconsistent if not already consistent
// 2. Compacts metadata > compact_thresh
// 3. Populates the block allocator
//
// Though additional janitorial work may be added in the future.
//
// Calling this function is not required, but may allow the offloading of
// expensive janitorial work to a less time-critical code path.
//
// Returns a negative error code on failure. Accomplishing nothing is not
// an error.
int lfs_fs_gc(lfs_t *lfs);
#endif
#ifndef LFS_READONLY
// Grows the filesystem to a new size, updating the superblock with the new
// block count.
//
// Note: This is irreversible.
//
// Returns a negative error code on failure.
int lfs_fs_grow(lfs_t *lfs, lfs_size_t block_count);
#endif
#ifndef LFS_READONLY
#ifdef LFS_MIGRATE
// Attempts to migrate a previous version of littlefs

View File

@@ -215,7 +215,9 @@ static inline uint32_t lfs_tobe32(uint32_t a) {
uint32_t lfs_crc(uint32_t crc, const void *buffer, size_t size);
// Allocate memory, only used if buffers are not provided to littlefs
// Note, memory must be 64-bit aligned
//
// littlefs current has no alignment requirements, as it only allocates
// byte-level buffers.
static inline void *lfs_malloc(size_t size) {
#ifndef LFS_NO_MALLOC
return malloc(size);

View File

@@ -1271,9 +1271,9 @@ static void list_geometries(void) {
builtin_geometries[g].name,
READ_SIZE,
PROG_SIZE,
BLOCK_SIZE,
BLOCK_COUNT,
BLOCK_SIZE*BLOCK_COUNT);
ERASE_SIZE,
ERASE_COUNT,
ERASE_SIZE*ERASE_COUNT);
}
}
@@ -1321,9 +1321,14 @@ void perm_run(
.block_cycles = BLOCK_CYCLES,
.cache_size = CACHE_SIZE,
.lookahead_size = LOOKAHEAD_SIZE,
.compact_thresh = COMPACT_THRESH,
};
struct lfs_emubd_config bdcfg = {
.read_size = READ_SIZE,
.prog_size = PROG_SIZE,
.erase_size = ERASE_SIZE,
.erase_count = ERASE_COUNT,
.erase_value = ERASE_VALUE,
.erase_cycles = ERASE_CYCLES,
.badblock_behavior = BADBLOCK_BEHAVIOR,
@@ -1333,7 +1338,7 @@ void perm_run(
.erase_sleep = bench_erase_sleep,
};
int err = lfs_emubd_createcfg(&cfg, bench_disk_path, &bdcfg);
int err = lfs_emubd_create(&cfg, &bdcfg);
if (err) {
fprintf(stderr, "error: could not create block device: %d\n", err);
exit(-1);
@@ -1761,19 +1766,19 @@ invalid_define:
= BENCH_LIT(sizes[0]);
geometry->defines[PROG_SIZE_i]
= BENCH_LIT(sizes[1]);
geometry->defines[BLOCK_SIZE_i]
geometry->defines[ERASE_SIZE_i]
= BENCH_LIT(sizes[2]);
} else if (count >= 2) {
geometry->defines[PROG_SIZE_i]
= BENCH_LIT(sizes[0]);
geometry->defines[BLOCK_SIZE_i]
geometry->defines[ERASE_SIZE_i]
= BENCH_LIT(sizes[1]);
} else {
geometry->defines[BLOCK_SIZE_i]
geometry->defines[ERASE_SIZE_i]
= BENCH_LIT(sizes[0]);
}
if (count >= 4) {
geometry->defines[BLOCK_COUNT_i]
geometry->defines[ERASE_COUNT_i]
= BENCH_LIT(sizes[3]);
}
optarg = s;
@@ -1805,19 +1810,19 @@ invalid_define:
= BENCH_LIT(sizes[0]);
geometry->defines[PROG_SIZE_i]
= BENCH_LIT(sizes[1]);
geometry->defines[BLOCK_SIZE_i]
geometry->defines[ERASE_SIZE_i]
= BENCH_LIT(sizes[2]);
} else if (count >= 2) {
geometry->defines[PROG_SIZE_i]
= BENCH_LIT(sizes[0]);
geometry->defines[BLOCK_SIZE_i]
geometry->defines[ERASE_SIZE_i]
= BENCH_LIT(sizes[1]);
} else {
geometry->defines[BLOCK_SIZE_i]
geometry->defines[ERASE_SIZE_i]
= BENCH_LIT(sizes[0]);
}
if (count >= 4) {
geometry->defines[BLOCK_COUNT_i]
geometry->defines[ERASE_COUNT_i]
= BENCH_LIT(sizes[3]);
}
optarg = s;

View File

@@ -89,22 +89,28 @@ intmax_t bench_define(size_t define);
#define READ_SIZE_i 0
#define PROG_SIZE_i 1
#define BLOCK_SIZE_i 2
#define BLOCK_COUNT_i 3
#define CACHE_SIZE_i 4
#define LOOKAHEAD_SIZE_i 5
#define BLOCK_CYCLES_i 6
#define ERASE_VALUE_i 7
#define ERASE_CYCLES_i 8
#define BADBLOCK_BEHAVIOR_i 9
#define POWERLOSS_BEHAVIOR_i 10
#define ERASE_SIZE_i 2
#define ERASE_COUNT_i 3
#define BLOCK_SIZE_i 4
#define BLOCK_COUNT_i 5
#define CACHE_SIZE_i 6
#define LOOKAHEAD_SIZE_i 7
#define COMPACT_THRESH_i 8
#define BLOCK_CYCLES_i 9
#define ERASE_VALUE_i 10
#define ERASE_CYCLES_i 11
#define BADBLOCK_BEHAVIOR_i 12
#define POWERLOSS_BEHAVIOR_i 13
#define READ_SIZE bench_define(READ_SIZE_i)
#define PROG_SIZE bench_define(PROG_SIZE_i)
#define ERASE_SIZE bench_define(ERASE_SIZE_i)
#define ERASE_COUNT bench_define(ERASE_COUNT_i)
#define BLOCK_SIZE bench_define(BLOCK_SIZE_i)
#define BLOCK_COUNT bench_define(BLOCK_COUNT_i)
#define CACHE_SIZE bench_define(CACHE_SIZE_i)
#define LOOKAHEAD_SIZE bench_define(LOOKAHEAD_SIZE_i)
#define COMPACT_THRESH bench_define(COMPACT_THRESH_i)
#define BLOCK_CYCLES bench_define(BLOCK_CYCLES_i)
#define ERASE_VALUE bench_define(ERASE_VALUE_i)
#define ERASE_CYCLES bench_define(ERASE_CYCLES_i)
@@ -113,11 +119,14 @@ intmax_t bench_define(size_t define);
#define BENCH_IMPLICIT_DEFINES \
BENCH_DEF(READ_SIZE, PROG_SIZE) \
BENCH_DEF(PROG_SIZE, BLOCK_SIZE) \
BENCH_DEF(BLOCK_SIZE, 0) \
BENCH_DEF(BLOCK_COUNT, (1024*1024)/BLOCK_SIZE) \
BENCH_DEF(PROG_SIZE, ERASE_SIZE) \
BENCH_DEF(ERASE_SIZE, 0) \
BENCH_DEF(ERASE_COUNT, (1024*1024)/ERASE_SIZE) \
BENCH_DEF(BLOCK_SIZE, ERASE_SIZE) \
BENCH_DEF(BLOCK_COUNT, ERASE_COUNT/lfs_max(BLOCK_SIZE/ERASE_SIZE,1))\
BENCH_DEF(CACHE_SIZE, lfs_max(64,lfs_max(READ_SIZE,PROG_SIZE))) \
BENCH_DEF(LOOKAHEAD_SIZE, 16) \
BENCH_DEF(COMPACT_THRESH, 0) \
BENCH_DEF(BLOCK_CYCLES, -1) \
BENCH_DEF(ERASE_VALUE, 0xff) \
BENCH_DEF(ERASE_CYCLES, 0) \
@@ -125,7 +134,7 @@ intmax_t bench_define(size_t define);
BENCH_DEF(POWERLOSS_BEHAVIOR, LFS_EMUBD_POWERLOSS_NOOP)
#define BENCH_GEOMETRY_DEFINE_COUNT 4
#define BENCH_IMPLICIT_DEFINE_COUNT 11
#define BENCH_IMPLICIT_DEFINE_COUNT 14
#endif
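As a worked example of how these implicit defaults resolve, assuming BENCH_DEF(NAME, DEFAULT) supplies DEFAULT whenever NAME is not given explicitly, consider a hypothetical geometry where only the erase dimensions are specified:

// hypothetical geometry: only ERASE_SIZE=512 and ERASE_COUNT=2048 given
// ERASE_SIZE  = 512
// ERASE_COUNT = 2048
// BLOCK_SIZE  = ERASE_SIZE = 512
// BLOCK_COUNT = ERASE_COUNT/lfs_max(BLOCK_SIZE/ERASE_SIZE,1) = 2048/1 = 2048
// PROG_SIZE   = ERASE_SIZE = 512
// READ_SIZE   = PROG_SIZE = 512
// CACHE_SIZE  = lfs_max(64, lfs_max(READ_SIZE, PROG_SIZE)) = 512
//
// if BLOCK_SIZE=2048 is also given explicitly, the block count scales down:
// BLOCK_COUNT = 2048/lfs_max(2048/512,1) = 2048/4 = 512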


@@ -1312,9 +1312,9 @@ static void list_geometries(void) {
builtin_geometries[g].name,
READ_SIZE,
PROG_SIZE,
BLOCK_SIZE,
BLOCK_COUNT,
BLOCK_SIZE*BLOCK_COUNT);
ERASE_SIZE,
ERASE_COUNT,
ERASE_SIZE*ERASE_COUNT);
}
}
@@ -1346,12 +1346,17 @@ static void run_powerloss_none(
.block_cycles = BLOCK_CYCLES,
.cache_size = CACHE_SIZE,
.lookahead_size = LOOKAHEAD_SIZE,
.compact_thresh = COMPACT_THRESH,
#ifdef LFS_MULTIVERSION
.disk_version = DISK_VERSION,
#endif
};
struct lfs_emubd_config bdcfg = {
.read_size = READ_SIZE,
.prog_size = PROG_SIZE,
.erase_size = ERASE_SIZE,
.erase_count = ERASE_COUNT,
.erase_value = ERASE_VALUE,
.erase_cycles = ERASE_CYCLES,
.badblock_behavior = BADBLOCK_BEHAVIOR,
@@ -1361,7 +1366,7 @@ static void run_powerloss_none(
.erase_sleep = test_erase_sleep,
};
int err = lfs_emubd_createcfg(&cfg, test_disk_path, &bdcfg);
int err = lfs_emubd_create(&cfg, &bdcfg);
if (err) {
fprintf(stderr, "error: could not create block device: %d\n", err);
exit(-1);
@@ -1418,12 +1423,17 @@ static void run_powerloss_linear(
.block_cycles = BLOCK_CYCLES,
.cache_size = CACHE_SIZE,
.lookahead_size = LOOKAHEAD_SIZE,
.compact_thresh = COMPACT_THRESH,
#ifdef LFS_MULTIVERSION
.disk_version = DISK_VERSION,
#endif
};
struct lfs_emubd_config bdcfg = {
.read_size = READ_SIZE,
.prog_size = PROG_SIZE,
.erase_size = ERASE_SIZE,
.erase_count = ERASE_COUNT,
.erase_value = ERASE_VALUE,
.erase_cycles = ERASE_CYCLES,
.badblock_behavior = BADBLOCK_BEHAVIOR,
@@ -1437,7 +1447,7 @@ static void run_powerloss_linear(
.powerloss_data = &powerloss_jmp,
};
int err = lfs_emubd_createcfg(&cfg, test_disk_path, &bdcfg);
int err = lfs_emubd_create(&cfg, &bdcfg);
if (err) {
fprintf(stderr, "error: could not create block device: %d\n", err);
exit(-1);
@@ -1507,12 +1517,17 @@ static void run_powerloss_log(
.block_cycles = BLOCK_CYCLES,
.cache_size = CACHE_SIZE,
.lookahead_size = LOOKAHEAD_SIZE,
.compact_thresh = COMPACT_THRESH,
#ifdef LFS_MULTIVERSION
.disk_version = DISK_VERSION,
#endif
};
struct lfs_emubd_config bdcfg = {
.read_size = READ_SIZE,
.prog_size = PROG_SIZE,
.erase_size = ERASE_SIZE,
.erase_count = ERASE_COUNT,
.erase_value = ERASE_VALUE,
.erase_cycles = ERASE_CYCLES,
.badblock_behavior = BADBLOCK_BEHAVIOR,
@@ -1526,7 +1541,7 @@ static void run_powerloss_log(
.powerloss_data = &powerloss_jmp,
};
int err = lfs_emubd_createcfg(&cfg, test_disk_path, &bdcfg);
int err = lfs_emubd_create(&cfg, &bdcfg);
if (err) {
fprintf(stderr, "error: could not create block device: %d\n", err);
exit(-1);
@@ -1594,12 +1609,17 @@ static void run_powerloss_cycles(
.block_cycles = BLOCK_CYCLES,
.cache_size = CACHE_SIZE,
.lookahead_size = LOOKAHEAD_SIZE,
.compact_thresh = COMPACT_THRESH,
#ifdef LFS_MULTIVERSION
.disk_version = DISK_VERSION,
#endif
};
struct lfs_emubd_config bdcfg = {
.read_size = READ_SIZE,
.prog_size = PROG_SIZE,
.erase_size = ERASE_SIZE,
.erase_count = ERASE_COUNT,
.erase_value = ERASE_VALUE,
.erase_cycles = ERASE_CYCLES,
.badblock_behavior = BADBLOCK_BEHAVIOR,
@@ -1613,7 +1633,7 @@ static void run_powerloss_cycles(
.powerloss_data = &powerloss_jmp,
};
int err = lfs_emubd_createcfg(&cfg, test_disk_path, &bdcfg);
int err = lfs_emubd_create(&cfg, &bdcfg);
if (err) {
fprintf(stderr, "error: could not create block device: %d\n", err);
exit(-1);
@@ -1779,12 +1799,17 @@ static void run_powerloss_exhaustive(
.block_cycles = BLOCK_CYCLES,
.cache_size = CACHE_SIZE,
.lookahead_size = LOOKAHEAD_SIZE,
.compact_thresh = COMPACT_THRESH,
#ifdef LFS_MULTIVERSION
.disk_version = DISK_VERSION,
#endif
};
struct lfs_emubd_config bdcfg = {
.read_size = READ_SIZE,
.prog_size = PROG_SIZE,
.erase_size = ERASE_SIZE,
.erase_count = ERASE_COUNT,
.erase_value = ERASE_VALUE,
.erase_cycles = ERASE_CYCLES,
.badblock_behavior = BADBLOCK_BEHAVIOR,
@@ -1797,7 +1822,7 @@ static void run_powerloss_exhaustive(
.powerloss_data = NULL,
};
int err = lfs_emubd_createcfg(&cfg, test_disk_path, &bdcfg);
int err = lfs_emubd_create(&cfg, &bdcfg);
if (err) {
fprintf(stderr, "error: could not create block device: %d\n", err);
exit(-1);
@@ -2314,19 +2339,19 @@ invalid_define:
= TEST_LIT(sizes[0]);
geometry->defines[PROG_SIZE_i]
= TEST_LIT(sizes[1]);
geometry->defines[BLOCK_SIZE_i]
geometry->defines[ERASE_SIZE_i]
= TEST_LIT(sizes[2]);
} else if (count >= 2) {
geometry->defines[PROG_SIZE_i]
= TEST_LIT(sizes[0]);
geometry->defines[BLOCK_SIZE_i]
geometry->defines[ERASE_SIZE_i]
= TEST_LIT(sizes[1]);
} else {
geometry->defines[BLOCK_SIZE_i]
geometry->defines[ERASE_SIZE_i]
= TEST_LIT(sizes[0]);
}
if (count >= 4) {
geometry->defines[BLOCK_COUNT_i]
geometry->defines[ERASE_COUNT_i]
= TEST_LIT(sizes[3]);
}
optarg = s;
@@ -2358,19 +2383,19 @@ invalid_define:
= TEST_LIT(sizes[0]);
geometry->defines[PROG_SIZE_i]
= TEST_LIT(sizes[1]);
geometry->defines[BLOCK_SIZE_i]
geometry->defines[ERASE_SIZE_i]
= TEST_LIT(sizes[2]);
} else if (count >= 2) {
geometry->defines[PROG_SIZE_i]
= TEST_LIT(sizes[0]);
geometry->defines[BLOCK_SIZE_i]
geometry->defines[ERASE_SIZE_i]
= TEST_LIT(sizes[1]);
} else {
geometry->defines[BLOCK_SIZE_i]
geometry->defines[ERASE_SIZE_i]
= TEST_LIT(sizes[0]);
}
if (count >= 4) {
geometry->defines[BLOCK_COUNT_i]
geometry->defines[ERASE_COUNT_i]
= TEST_LIT(sizes[3]);
}
optarg = s;


@@ -82,23 +82,29 @@ intmax_t test_define(size_t define);
#define READ_SIZE_i 0
#define PROG_SIZE_i 1
#define BLOCK_SIZE_i 2
#define BLOCK_COUNT_i 3
#define CACHE_SIZE_i 4
#define LOOKAHEAD_SIZE_i 5
#define BLOCK_CYCLES_i 6
#define ERASE_VALUE_i 7
#define ERASE_CYCLES_i 8
#define BADBLOCK_BEHAVIOR_i 9
#define POWERLOSS_BEHAVIOR_i 10
#define DISK_VERSION_i 11
#define ERASE_SIZE_i 2
#define ERASE_COUNT_i 3
#define BLOCK_SIZE_i 4
#define BLOCK_COUNT_i 5
#define CACHE_SIZE_i 6
#define LOOKAHEAD_SIZE_i 7
#define COMPACT_THRESH_i 8
#define BLOCK_CYCLES_i 9
#define ERASE_VALUE_i 10
#define ERASE_CYCLES_i 11
#define BADBLOCK_BEHAVIOR_i 12
#define POWERLOSS_BEHAVIOR_i 13
#define DISK_VERSION_i 14
#define READ_SIZE TEST_DEFINE(READ_SIZE_i)
#define PROG_SIZE TEST_DEFINE(PROG_SIZE_i)
#define ERASE_SIZE TEST_DEFINE(ERASE_SIZE_i)
#define ERASE_COUNT TEST_DEFINE(ERASE_COUNT_i)
#define BLOCK_SIZE TEST_DEFINE(BLOCK_SIZE_i)
#define BLOCK_COUNT TEST_DEFINE(BLOCK_COUNT_i)
#define CACHE_SIZE TEST_DEFINE(CACHE_SIZE_i)
#define LOOKAHEAD_SIZE TEST_DEFINE(LOOKAHEAD_SIZE_i)
#define COMPACT_THRESH TEST_DEFINE(COMPACT_THRESH_i)
#define BLOCK_CYCLES TEST_DEFINE(BLOCK_CYCLES_i)
#define ERASE_VALUE TEST_DEFINE(ERASE_VALUE_i)
#define ERASE_CYCLES TEST_DEFINE(ERASE_CYCLES_i)
@@ -108,11 +114,14 @@ intmax_t test_define(size_t define);
#define TEST_IMPLICIT_DEFINES \
TEST_DEF(READ_SIZE, PROG_SIZE) \
TEST_DEF(PROG_SIZE, BLOCK_SIZE) \
TEST_DEF(BLOCK_SIZE, 0) \
TEST_DEF(BLOCK_COUNT, (1024*1024)/BLOCK_SIZE) \
TEST_DEF(PROG_SIZE, ERASE_SIZE) \
TEST_DEF(ERASE_SIZE, 0) \
TEST_DEF(ERASE_COUNT, (1024*1024)/ERASE_SIZE) \
TEST_DEF(BLOCK_SIZE, ERASE_SIZE) \
TEST_DEF(BLOCK_COUNT, ERASE_COUNT/lfs_max(BLOCK_SIZE/ERASE_SIZE,1)) \
TEST_DEF(CACHE_SIZE, lfs_max(64,lfs_max(READ_SIZE,PROG_SIZE))) \
TEST_DEF(LOOKAHEAD_SIZE, 16) \
TEST_DEF(COMPACT_THRESH, 0) \
TEST_DEF(BLOCK_CYCLES, -1) \
TEST_DEF(ERASE_VALUE, 0xff) \
TEST_DEF(ERASE_CYCLES, 0) \
@@ -120,8 +129,8 @@ intmax_t test_define(size_t define);
TEST_DEF(POWERLOSS_BEHAVIOR, LFS_EMUBD_POWERLOSS_NOOP) \
TEST_DEF(DISK_VERSION, 0)
#define TEST_IMPLICIT_DEFINE_COUNT 12
#define TEST_GEOMETRY_DEFINE_COUNT 4
#define TEST_IMPLICIT_DEFINE_COUNT 15
#endif


@@ -6,6 +6,8 @@ if = 'BLOCK_CYCLES == -1'
[cases.test_alloc_parallel]
defines.FILES = 3
defines.SIZE = '(((BLOCK_SIZE-8)*(BLOCK_COUNT-6)) / FILES)'
defines.GC = [false, true]
defines.COMPACT_THRESH = ['-1', '0', 'BLOCK_SIZE/2']
code = '''
const char *names[] = {"bacon", "eggs", "pancakes"};
lfs_file_t files[FILES];
@@ -24,6 +26,9 @@ code = '''
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
}
for (int n = 0; n < FILES; n++) {
if (GC) {
lfs_fs_gc(&lfs) => 0;
}
size_t size = strlen(names[n]);
for (lfs_size_t i = 0; i < SIZE; i += size) {
lfs_file_write(&lfs, &files[n], names[n], size) => size;
@@ -55,6 +60,8 @@ code = '''
[cases.test_alloc_serial]
defines.FILES = 3
defines.SIZE = '(((BLOCK_SIZE-8)*(BLOCK_COUNT-6)) / FILES)'
defines.GC = [false, true]
defines.COMPACT_THRESH = ['-1', '0', 'BLOCK_SIZE/2']
code = '''
const char *names[] = {"bacon", "eggs", "pancakes"};
@@ -75,6 +82,9 @@ code = '''
uint8_t buffer[1024];
memcpy(buffer, names[n], size);
for (int i = 0; i < SIZE; i += size) {
if (GC) {
lfs_fs_gc(&lfs) => 0;
}
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
@@ -247,6 +257,9 @@ code = '''
}
res => LFS_ERR_NOSPC;
// note that lfs_fs_gc should not error here
lfs_fs_gc(&lfs) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
@@ -298,6 +311,9 @@ code = '''
}
res => LFS_ERR_NOSPC;
// note that lfs_fs_gc should not error here
lfs_fs_gc(&lfs) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
@@ -337,6 +353,8 @@ code = '''
count += 1;
}
err => LFS_ERR_NOSPC;
// note that lfs_fs_gc should not error here
lfs_fs_gc(&lfs) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
@@ -435,6 +453,8 @@ code = '''
break;
}
}
// note that lfs_fs_gc should not error here
lfs_fs_gc(&lfs) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
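These cases exercise lfs_fs_gc together with the new COMPACT_THRESH define. Below is a minimal sketch of how application code might use the same knobs, assuming cfg already describes the block device (the helper name is hypothetical):

#include "lfs.h"

int mount_and_maintain(lfs_t *lfs, struct lfs_config *cfg) {
    // threshold for eager metadata compaction during lfs_fs_gc; the
    // cases above exercise -1, 0, and BLOCK_SIZE/2 as thresholds
    cfg->compact_thresh = cfg->block_size/2;

    int err = lfs_mount(lfs, cfg);
    if (err) {
        return err;
    }

    // run the janitorial work during idle time; as the tests note,
    // lfs_fs_gc should not error even when the filesystem is full
    return lfs_fs_gc(lfs);
}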
@@ -460,8 +480,8 @@ code = '''
# chained dir exhaustion test
[cases.test_alloc_chained_dir_exhaustion]
if = 'BLOCK_SIZE == 512'
defines.BLOCK_COUNT = 1024
if = 'ERASE_SIZE == 512'
defines.ERASE_COUNT = 1024
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
@@ -538,8 +558,8 @@ code = '''
# split dir test
[cases.test_alloc_split_dir]
if = 'BLOCK_SIZE == 512'
defines.BLOCK_COUNT = 1024
if = 'ERASE_SIZE == 512'
defines.ERASE_COUNT = 1024
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
@@ -587,8 +607,8 @@ code = '''
# outdated lookahead test
[cases.test_alloc_outdated_lookahead]
if = 'BLOCK_SIZE == 512'
defines.BLOCK_COUNT = 1024
if = 'ERASE_SIZE == 512'
defines.ERASE_COUNT = 1024
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
@@ -655,8 +675,8 @@ code = '''
# outdated lookahead and split dir test
[cases.test_alloc_outdated_lookahead_split_dir]
if = 'BLOCK_SIZE == 512'
defines.BLOCK_COUNT = 1024
if = 'ERASE_SIZE == 512'
defines.ERASE_COUNT = 1024
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;


@@ -2,7 +2,7 @@
if = '(int32_t)BLOCK_CYCLES == -1'
[cases.test_badblocks_single]
defines.BLOCK_COUNT = 256 # small bd so test runs faster
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.ERASE_CYCLES = 0xffffffff
defines.ERASE_VALUE = [0x00, 0xff, -1]
defines.BADBLOCK_BEHAVIOR = [
@@ -82,7 +82,7 @@ code = '''
'''
[cases.test_badblocks_region_corruption] # (causes cascading failures)
defines.BLOCK_COUNT = 256 # small bd so test runs faster
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.ERASE_CYCLES = 0xffffffff
defines.ERASE_VALUE = [0x00, 0xff, -1]
defines.BADBLOCK_BEHAVIOR = [
@@ -161,7 +161,7 @@ code = '''
'''
[cases.test_badblocks_alternating_corruption] # (causes cascading failures)
defines.BLOCK_COUNT = 256 # small bd so test runs faster
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.ERASE_CYCLES = 0xffffffff
defines.ERASE_VALUE = [0x00, 0xff, -1]
defines.BADBLOCK_BEHAVIOR = [


@@ -1,7 +1,7 @@
# test running a filesystem to exhaustion
[cases.test_exhaustion_normal]
defines.ERASE_CYCLES = 10
defines.BLOCK_COUNT = 256 # small bd so test runs faster
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.BLOCK_CYCLES = 'ERASE_CYCLES / 2'
defines.BADBLOCK_BEHAVIOR = [
'LFS_EMUBD_BADBLOCK_PROGERROR',
@@ -94,7 +94,7 @@ exhausted:
# which also requires expanding superblocks
[cases.test_exhaustion_superblocks]
defines.ERASE_CYCLES = 10
defines.BLOCK_COUNT = 256 # small bd so test runs faster
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.BLOCK_CYCLES = 'ERASE_CYCLES / 2'
defines.BADBLOCK_BEHAVIOR = [
'LFS_EMUBD_BADBLOCK_PROGERROR',
@@ -188,7 +188,7 @@ exhausted:
# wear-level test running a filesystem to exhaustion
[cases.test_exhuastion_wear_leveling]
defines.ERASE_CYCLES = 20
defines.BLOCK_COUNT = 256 # small bd so test runs faster
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.BLOCK_CYCLES = 'ERASE_CYCLES / 2'
defines.FILES = 10
code = '''
@@ -288,7 +288,7 @@ exhausted:
# wear-level test + expanding superblock
[cases.test_exhaustion_wear_leveling_superblocks]
defines.ERASE_CYCLES = 20
defines.BLOCK_COUNT = 256 # small bd so test runs faster
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.BLOCK_CYCLES = 'ERASE_CYCLES / 2'
defines.FILES = 10
code = '''
@@ -385,7 +385,7 @@ exhausted:
# test that we wear blocks roughly evenly
[cases.test_exhaustion_wear_distribution]
defines.ERASE_CYCLES = 0xffffffff
defines.BLOCK_COUNT = 256 # small bd so test runs faster
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.BLOCK_CYCLES = [5, 4, 3, 2, 1]
defines.CYCLES = 100
defines.FILES = 10


@@ -98,7 +98,7 @@ code = '''
lfs_mount(&lfs, cfg) => 0;
// create an orphan
lfs_mdir_t orphan;
lfs_alloc_ack(&lfs);
lfs_alloc_ckpoint(&lfs);
lfs_dir_alloc(&lfs, &orphan) => 0;
lfs_dir_commit(&lfs, &orphan, NULL, 0) => 0;
@@ -170,7 +170,7 @@ code = '''
lfs_mount(&lfs, cfg) => 0;
// create an orphan
lfs_mdir_t orphan;
lfs_alloc_ack(&lfs);
lfs_alloc_ckpoint(&lfs);
lfs_dir_alloc(&lfs, &orphan) => 0;
lfs_dir_commit(&lfs, &orphan, NULL, 0) => 0;


@@ -14,6 +14,21 @@ code = '''
lfs_unmount(&lfs) => 0;
'''
# mount/unmount from interpreting a previous superblock block_count
[cases.test_superblocks_mount_unknown_block_count]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
memset(&lfs, 0, sizeof(lfs));
struct lfs_config tweaked_cfg = *cfg;
tweaked_cfg.block_count = 0;
lfs_mount(&lfs, &tweaked_cfg) => 0;
assert(lfs.block_count == cfg->block_count);
lfs_unmount(&lfs) => 0;
'''
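For reference, a minimal sketch of the same pattern from application code, assuming cfg describes the block device (the helper name is hypothetical):

#include <stdio.h>
#include <inttypes.h>
#include "lfs.h"

int mount_unknown_block_count(lfs_t *lfs, struct lfs_config *cfg) {
    // a block_count of 0 tells littlefs to take the count from the
    // superblock rather than the config
    cfg->block_count = 0;

    int err = lfs_mount(lfs, cfg);
    if (err) {
        return err;
    }

    // the on-disk geometry can be read back with lfs_fs_stat
    struct lfs_fsinfo fsinfo;
    err = lfs_fs_stat(lfs, &fsinfo);
    if (err) {
        return err;
    }
    printf("mounted %"PRIu32" blocks of %"PRIu32" bytes\n",
            fsinfo.block_count, fsinfo.block_size);
    return 0;
}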
# reentrant format
[cases.test_superblocks_reentrant_format]
reentrant = true
@@ -197,3 +212,255 @@ code = '''
assert(info.type == LFS_TYPE_REG);
lfs_unmount(&lfs) => 0;
'''
# mount with unknown block_count
[cases.test_superblocks_unknown_blocks]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// known block_size/block_count
cfg->block_size = BLOCK_SIZE;
cfg->block_count = BLOCK_COUNT;
lfs_mount(&lfs, cfg) => 0;
struct lfs_fsinfo fsinfo;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
// unknown block_count
cfg->block_size = BLOCK_SIZE;
cfg->block_count = 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
// do some work
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_file_t file;
lfs_file_open(&lfs, &file, "test",
LFS_O_CREAT | LFS_O_EXCL | LFS_O_WRONLY) => 0;
lfs_file_write(&lfs, &file, "hello!", 6) => 6;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_file_open(&lfs, &file, "test", LFS_O_RDONLY) => 0;
uint8_t buffer[256];
lfs_file_read(&lfs, &file, buffer, sizeof(buffer)) => 6;
lfs_file_close(&lfs, &file) => 0;
assert(memcmp(buffer, "hello!", 6) == 0);
lfs_unmount(&lfs) => 0;
'''
# mount with fewer blocks than the erase_count
[cases.test_superblocks_fewer_blocks]
defines.BLOCK_COUNT = ['ERASE_COUNT/2', 'ERASE_COUNT/4', '2']
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// known block_size/block_count
cfg->block_size = BLOCK_SIZE;
cfg->block_count = BLOCK_COUNT;
lfs_mount(&lfs, cfg) => 0;
struct lfs_fsinfo fsinfo;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
// incorrect block_count
cfg->block_size = BLOCK_SIZE;
cfg->block_count = ERASE_COUNT;
lfs_mount(&lfs, cfg) => LFS_ERR_INVAL;
// unknown block_count
cfg->block_size = BLOCK_SIZE;
cfg->block_count = 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
// do some work
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_file_t file;
lfs_file_open(&lfs, &file, "test",
LFS_O_CREAT | LFS_O_EXCL | LFS_O_WRONLY) => 0;
lfs_file_write(&lfs, &file, "hello!", 6) => 6;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_file_open(&lfs, &file, "test", LFS_O_RDONLY) => 0;
uint8_t buffer[256];
lfs_file_read(&lfs, &file, buffer, sizeof(buffer)) => 6;
lfs_file_close(&lfs, &file) => 0;
assert(memcmp(buffer, "hello!", 6) == 0);
lfs_unmount(&lfs) => 0;
'''
# mount with more blocks than the erase_count
[cases.test_superblocks_more_blocks]
defines.FORMAT_BLOCK_COUNT = '2*ERASE_COUNT'
in = 'lfs.c'
code = '''
lfs_t lfs;
lfs_init(&lfs, cfg) => 0;
lfs.block_count = BLOCK_COUNT;
lfs_mdir_t root = {
.pair = {0, 0}, // make sure this goes into block 0
.rev = 0,
.off = sizeof(uint32_t),
.etag = 0xffffffff,
.count = 0,
.tail = {LFS_BLOCK_NULL, LFS_BLOCK_NULL},
.erased = false,
.split = false,
};
lfs_superblock_t superblock = {
.version = LFS_DISK_VERSION,
.block_size = BLOCK_SIZE,
.block_count = FORMAT_BLOCK_COUNT,
.name_max = LFS_NAME_MAX,
.file_max = LFS_FILE_MAX,
.attr_max = LFS_ATTR_MAX,
};
lfs_superblock_tole32(&superblock);
lfs_dir_commit(&lfs, &root, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_CREATE, 0, 0), NULL},
{LFS_MKTAG(LFS_TYPE_SUPERBLOCK, 0, 8), "littlefs"},
{LFS_MKTAG(LFS_TYPE_INLINESTRUCT, 0, sizeof(superblock)),
&superblock})) => 0;
lfs_deinit(&lfs) => 0;
// known block_size/block_count
cfg->block_size = BLOCK_SIZE;
cfg->block_count = BLOCK_COUNT;
lfs_mount(&lfs, cfg) => LFS_ERR_INVAL;
'''
# mount and grow the filesystem
[cases.test_superblocks_grow]
defines.BLOCK_COUNT = ['ERASE_COUNT/2', 'ERASE_COUNT/4', '2']
defines.BLOCK_COUNT_2 = 'ERASE_COUNT'
defines.KNOWN_BLOCK_COUNT = [true, false]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
if (KNOWN_BLOCK_COUNT) {
cfg->block_count = BLOCK_COUNT;
} else {
cfg->block_count = 0;
}
// mount with block_count < erase_count
lfs_mount(&lfs, cfg) => 0;
struct lfs_fsinfo fsinfo;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
// same size is a noop
lfs_mount(&lfs, cfg) => 0;
lfs_fs_grow(&lfs, BLOCK_COUNT) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
// grow to new size
lfs_mount(&lfs, cfg) => 0;
lfs_fs_grow(&lfs, BLOCK_COUNT_2) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT_2);
lfs_unmount(&lfs) => 0;
if (KNOWN_BLOCK_COUNT) {
cfg->block_count = BLOCK_COUNT_2;
} else {
cfg->block_count = 0;
}
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT_2);
lfs_unmount(&lfs) => 0;
// mounting with the previous size should fail
cfg->block_count = BLOCK_COUNT;
lfs_mount(&lfs, cfg) => LFS_ERR_INVAL;
if (KNOWN_BLOCK_COUNT) {
cfg->block_count = BLOCK_COUNT_2;
} else {
cfg->block_count = 0;
}
// same size is a noop
lfs_mount(&lfs, cfg) => 0;
lfs_fs_grow(&lfs, BLOCK_COUNT_2) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT_2);
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT_2);
lfs_unmount(&lfs) => 0;
// do some work
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT_2);
lfs_file_t file;
lfs_file_open(&lfs, &file, "test",
LFS_O_CREAT | LFS_O_EXCL | LFS_O_WRONLY) => 0;
lfs_file_write(&lfs, &file, "hello!", 6) => 6;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT_2);
lfs_file_open(&lfs, &file, "test", LFS_O_RDONLY) => 0;
uint8_t buffer[256];
lfs_file_read(&lfs, &file, buffer, sizeof(buffer)) => 6;
lfs_file_close(&lfs, &file) => 0;
assert(memcmp(buffer, "hello!", 6) == 0);
lfs_unmount(&lfs) => 0;
'''
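And a minimal sketch of the grow path from application code, assuming cfg describes the block device and its block_count is either 0 or kept in sync with the new size (the helper name is hypothetical):

#include "lfs.h"

int grow_filesystem(lfs_t *lfs, struct lfs_config *cfg,
        lfs_size_t new_block_count) {
    int err = lfs_mount(lfs, cfg);
    if (err) {
        return err;
    }

    // growing to the current size is a noop, so this is safe to call
    // unconditionally after the underlying storage has been enlarged
    err = lfs_fs_grow(lfs, new_block_count);
    if (err) {
        lfs_unmount(lfs);
        return err;
    }

    // note: once grown, mounting with the old, smaller block_count fails
    // with LFS_ERR_INVAL; mount with the new count or with block_count=0
    return lfs_unmount(lfs);
}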