Compare commits

...

524 Commits

Author SHA1 Message Date
Christopher Haster
4dd30c1b8f Merge pull request #948 from littlefs-project/fix-sync-ordering
Fix sync issue where data writes could appear before metadata writes
2024-03-08 16:49:59 -06:00
Christopher Haster
5c0d332ecd Merge pull request #939 from Graveflo/master
Add nim-littlefs to readme
2024-03-08 16:49:11 -06:00
Christopher Haster
cf68333a55 Merge pull request #937 from littlefs-project/fix-pending-rm-get-underflow
Fix synthetic move underflows in lfs_dir_get
2024-03-08 16:48:50 -06:00
Christopher Haster
7873d811a0 Fixed memory leak in emubd's out-of-order write emulation
We need to decrement the saved block state on sync, when we reset
out-of-order emulation. Otherwise we leak blocks out the wazoo.
2024-02-27 21:39:34 -06:00
Christopher Haster
fc2aa3350c Fixed issue with exhaustive + out-of-order powerloss testing
Unlike the heuristic based testing, exhaustive powerloss testing
effectively forks the current test and runs both the interrupted and
uninterrupted test states to completion. But emubd wasn't expecting
bd->cfg->powerloss_cb to return.

The fix here is to keep track of both the old+new out-of-order block
states and unrevert them if bd->cfg->powerloss_cb returns.

This may leak the temporary copy, but powerloss testing is already
inherently leaky.
2024-02-27 21:14:59 -06:00
Christopher Haster
6352185949 Fixed sync issue where data writes could appear before metadata writes
Long story short we aren't calling sync correctly in littlefs. This
fixes that.

Some forms of storage, mainly anything with an FTL, eMMC, SD, etc, do
not guarantee a strict write order for writes to different blocks. In
theory this is what bd sync is for, to tell the bd when it is important
for the writes to be ordered.

Currently, littlefs calls bd sync after committing metadata. This is
useful as it ensures that user code can rely on lfs_file_sync for
ordering external side-effects.

But this is insufficient for handling storage with out-of-order writes.

Consider the simple case of a file with one data block:

1. lfs_file_write(blablabla) => writes data into a new data block

2. lfs_file_sync() => commits metadata to point to the new data block

But with out-of-order writes, the bd is free to reorder things such that
the metadata is updated _before_ the data is written. If we lose power,
that would be bad.

The solution to this is to call bd sync twice: Once before we commit
the metadata to tell the bd that these writes must be ordered, and once
after we commit the metadata to allow ordering with user code.

As a small optimization, we only call bd sync if the current file is not
inlined and has actually been modified (LFS_F_DIRTY). It's possible for
inlined files to be interleaved with writes to other files.

Found by MFaehling and alex31
2024-02-27 14:00:10 -06:00
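A minimal sketch of the resulting call order, assuming only the standard lfs_config sync callback; the data/metadata steps are stand-in comments, not the actual littlefs internals:

  #include "lfs.h"

  // A minimal sketch of the write ordering this fix establishes; the
  // data/metadata steps are stand-in comments, not littlefs internals.
  static int sync_ordering_sketch(const struct lfs_config *cfg) {
      // 1. data blocks have already been written via cfg->prog

      // 2. sync before committing metadata, so the bd cannot reorder
      //    the metadata commit ahead of the data writes
      int err = cfg->sync(cfg);
      if (err) {
          return err;
      }

      // 3. commit metadata pointing at the new data blocks (internal)

      // 4. sync again after the commit, so user code can rely on
      //    lfs_file_sync for ordering external side-effects
      return cfg->sync(cfg);
  }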
Christopher Haster
f2a6f45eef Added out-of-order write testing to emubd
Some forms of storage, mainly anything with an FTL, eMMC, SD, etc, do
not guarantee a strict write order for writes to different blocks. It
would be good to test that this doesn't break littlefs.

This adds LFS_EMUBD_POWERLOSS_OOO to lfs_emubd, which tells lfs_emubd to
try to break any order-dependent code on powerloss.

The behavior right now is a bit simple, but does result in test
breakage:

1. Save the state of the block on first write (erase really) after
   sync/init.

2. On powerloss, revert the first write to its original state.

This might be a bit confusing when debugging, since the block will
appear to time-travel, but doing anything fancier would make emubd quite
a bit more complicated.

You could also get a bit fancier with which/how many blocks to revert,
but this should at least be sufficient to make sure bd sync calls are in
the right place.
2024-02-27 13:59:37 -06:00
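A generic sketch of the save/revert behavior described above, using a hypothetical in-RAM block device rather than the actual emubd code:

  #include <stdint.h>
  #include <string.h>

  // A generic sketch of the save/revert behavior described above, using
  // a hypothetical in-RAM block device, not the actual emubd code.
  #define BLOCK_SIZE  4096
  #define BLOCK_COUNT 64

  struct ooo_bd {
      uint8_t blocks[BLOCK_COUNT][BLOCK_SIZE];
      // snapshot of the first block written after sync/init
      uint8_t saved[BLOCK_SIZE];
      int saved_block; // initialize to -1 => nothing saved yet
  };

  static void ooo_write(struct ooo_bd *bd, int block, const uint8_t *data) {
      if (bd->saved_block < 0) {
          // 1. save the state of the first block written after sync/init
          memcpy(bd->saved, bd->blocks[block], BLOCK_SIZE);
          bd->saved_block = block;
      }
      memcpy(bd->blocks[block], data, BLOCK_SIZE);
  }

  static void ooo_sync(struct ooo_bd *bd) {
      // writes are now ordered, drop the snapshot
      bd->saved_block = -1;
  }

  static void ooo_powerloss(struct ooo_bd *bd) {
      // 2. on powerloss, revert the first write to its original state,
      //    emulating an out-of-order write that never reached the disk
      if (bd->saved_block >= 0) {
          memcpy(bd->blocks[bd->saved_block], bd->saved, BLOCK_SIZE);
          bd->saved_block = -1;
      }
  }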
Ryan McConnell
2752d8c486 add nim-littlefs to readme 2024-02-07 02:53:16 -05:00
Christopher Haster
ddbfcaa722 Fixed synthetic move underflows in lfs_dir_get
By "luck" the previous code somehow managed to not be broken, though it
was possible to traverse the same file twice in lfs_fs_traverse/size
(which is not an error).

The problem was an underlying assumption in lfs_dir_get that it would
never be called when the requested id is pending removal because of a
powerloss. The assumption was either:

1. lfs_dir_find would need to be called first to find the id, and it
   would correctly toss out pending-rms with LFS_ERR_NOENT.

2. lfs_fs_mkconsistent would be implicitly called before any filesystem
   traversals, cleaning up any pending-rms. This is at least true for
   allocator scans.

But, as noted by andriyndev, both lfs_fs_traverse and lfs_fs_size can
call lfs_dir_get with a pending-rm id if called in a readonly context.

---

By "luck" this somehow manages to not break anything:

1. If the pending-rm id is >0, the id is decremented by 1 in lfs_dir_get,
   returning the previous file entry during traversal. Worst case, this
   reports any blocks owned by the previous file entry twice.

   Note this is not an error, lfs_fs_traverse/size may return the same
   block multiple times due to underlying copy-on-write structures.

2. More concerning, if the pending-rm id is 0, the id is decremented by
   1 in lfs_dir_get and underflows. This underflow propagates into the
   type field of the tag we are searching for, decrementing it from
   0x200 (LFS_TYPE_STRUCT) to 0x1ff (LFS_TYPE_INTERNAL(UNUSED)).

   Fortunately, since this happens to underflow to the INTERNAL tag
   type, the type intended to never exist on disk, we should never find
   a matching tag during our lfs_dir_get search. The result? lfs_dir_get
   returns LFS_ERR_NOENT, which is actually what we want.

Also note that LFS_ERR_NOENT does not terminate the mdir traversal
early. If it did we would have missed files instead of duplicating
files, which is a slightly worse situation.

---

The fix is to add an explicit check for pending-rms in lfs_dir_get, just
like in lfs_dir_find. This avoids relying on unintended underflow
propagation, and should make the internal API behavior more consistent.

This is especially important for potential future gc extensions.

Found by andriyndev
2024-02-04 15:12:31 -06:00
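A loose sketch of the added guard; the function and the pending_rm_id parameter below are hypothetical stand-ins, the real check lives inside lfs_dir_get and derives the pending-rm state from the gstate:

  #include <stdint.h>
  #include "lfs.h"

  // A loose sketch of the added guard; pending_rm_id and this signature
  // are hypothetical, the real check lives inside lfs_dir_get.
  static int dir_get_sketch(uint16_t id, int32_t pending_rm_id) {
      if (pending_rm_id >= 0 && id == (uint16_t)pending_rm_id) {
          // the requested id is pending removal due to a powerloss;
          // report it as missing instead of decrementing the id and
          // relying on the (un)lucky underflow described above
          return LFS_ERR_NOENT;
      }

      // otherwise ids above the pending-rm are shifted down by one to
      // account for the synthetic move, and the normal search proceeds
      return 0;
  }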
Christopher Haster
f53a0cc961 Merge pull request #929 from littlefs-project/devel
Minor release: v2.9
2024-01-23 12:33:13 -06:00
Christopher Haster
42910bc8e5 Bumped minor version to v2.9 2024-01-19 14:37:37 -06:00
Christopher Haster
a3e1d12ce1 Merge pull request #915 from littlefs-project/well-done
Rename internal functions _raw* -> _*_
2024-01-19 13:58:29 -06:00
Christopher Haster
a70870c628 Renamed internal functions _raw* -> _*_
So instead of lfs_file_rawopencfg, it's now lfs_file_opencfg_.

The "raw" prefix is annoying, doesn't really add meaning ("internal"
would have been better), and gets in the way of finding the relevant
function implementations.

I have been using trailing underscores ("_" suffixes) for unimportant
name collisions in other codebases, and it seems to work well at
reducing wasted brain cycles naming things. Adopting it here avoids the
need for "raw" prefixes.

It's quite a bit like the use of prime symbols to resolve name
collisions in math, e.g. x' = x + 1, which is even supported in Haskell
and is quite nice there.

And the main benefit: Now if you search for the public API name, you get
the internal function first, which is probably what you care about.

Here is the exact script:

  sed -i 's/_raw\([a-z0-9_]*\)\>/_\1_/g' $(git ls-tree -r HEAD --name-only | grep '.*\.c')
2024-01-19 13:20:56 -06:00
Christopher Haster
ceb17a0f4a Merge pull request #917 from tomscii/fix_return_value_of_lfs_rename
Fix return value of lfs_rename()
2024-01-19 13:19:21 -06:00
Christopher Haster
a8a0905777 Merge pull request #916 from littlefs-project/ci-ubuntu-latest
Change CI to just run on ubuntu-latest
2024-01-19 13:19:07 -06:00
Christopher Haster
13d78616fe Merge pull request #914 from littlefs-project/inline-max
Add inline_max, to optionally limit the size of inlined files
2024-01-19 13:18:54 -06:00
Christopher Haster
8b8fd14187 Added inline_max, to optionally limit the size of inlined files
Inlined files live in metadata and decrease storage requirements, but
may be limited to improve metadata-related performance. This is
especially important given the current plague of metadata performance
issues.

Though decreasing inline_max may make metadata more dense and increase
block usage, so it's important to benchmark if optimizing for speed.

The underlying limits of inlined files haven't changed:
1. Inlined files need to fit in RAM, so <= cache_size
2. Inlined files need to fit in a single attr, so <= attr_max
3. Inlined files need to fit in 1/8 of a block to avoid metadata
   overflow issues, this is after limiting by metadata_max,
   so <= min(metadata_max, block_size)/8

By default, the largest possible inline_max is used. This preserves
backwards compatibility and is probably a good default for most use
cases.

This does have the awkward effect of requiring inline_max=-1 to
indicate disabled inlined files, but I don't think there's a good
way around this.
2024-01-19 13:00:27 -06:00
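A hedged configuration sketch for the new option; the block device callbacks and geometry values below are illustrative placeholders, not recommendations:

  #include "lfs.h"

  // A hedged configuration sketch for the new inline_max option; the
  // block device callbacks and geometry values here are placeholders.
  extern int bd_read(const struct lfs_config *c, lfs_block_t block,
          lfs_off_t off, void *buffer, lfs_size_t size);
  extern int bd_prog(const struct lfs_config *c, lfs_block_t block,
          lfs_off_t off, const void *buffer, lfs_size_t size);
  extern int bd_erase(const struct lfs_config *c, lfs_block_t block);
  extern int bd_sync(const struct lfs_config *c);

  const struct lfs_config cfg = {
      .read = bd_read, .prog = bd_prog, .erase = bd_erase, .sync = bd_sync,
      .read_size = 16, .prog_size = 16,
      .block_size = 4096, .block_count = 128, .block_cycles = 500,
      .cache_size = 64, .lookahead_size = 16,

      // limit inlined files to 64 bytes to keep metadata small;
      // 0 keeps the default (largest possible), -1 disables inlining
      .inline_max = 64,
  };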
Christopher Haster
09972a1710 Merge pull request #913 from littlefs-project/gc-compactions
Extend lfs_fs_gc to compact metadata, compact_thresh
2024-01-19 12:51:11 -06:00
Christopher Haster
ed7bd05435 Merge pull request #912 from littlefs-project/relaxed-lookahead
Relaxed lookahead alignment, other internal block alloc readability improvements
2024-01-19 12:27:14 -06:00
Christopher Haster
b5cd957f42 Extended lfs_fs_gc to compact metadata, compact_thresh
This extends lfs_fs_gc to now handle three things:

1. Calls mkconsistent if not already consistent
2. Compacts metadata > compact_thresh
3. Populates the block allocator

Which should be all of the janitorial work that can be done without
additional on-disk data structures.

Normally, metadata compaction occurs when an mdir is full, and results in
mdirs that are at most block_size/2.

Now, if you call lfs_fs_gc, littlefs will eagerly compact any mdirs that
exceed the compact_thresh configuration option. Because the resulting
mdirs are at most block_size/2, it only makes sense for compact_thresh to
be >= block_size/2 and <= block_size.

Additionally, there are some special values:

- compact_thresh=0  => defaults to ~88% block_size, may change
- compact_thresh=-1 => disables metadata compaction during lfs_fs_gc

Note that compact_thresh only affects lfs_fs_gc. Normal compactions
still only occur when full.
2024-01-19 12:25:45 -06:00
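A hedged usage sketch, assuming compact_thresh has been set in lfs_config as described above:

  #include "lfs.h"

  // A hedged sketch of scheduling janitorial work with lfs_fs_gc; with
  // compact_thresh configured (block_size/2..block_size, or the 0/-1
  // special values above), a single call covers all three steps.
  int run_gc_when_idle(lfs_t *lfs) {
      // 1. makes the filesystem consistent if it isn't already
      // 2. compacts any mdirs larger than compact_thresh
      // 3. populates the block allocator's lookahead buffer
      return lfs_fs_gc(lfs);
  }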
Christopher Haster
1195d606ae Merge pull request #909 from littlefs-project/easy-util-defines
Add some easier util overrides: LFS_MALLOC/FREE/CRC
2024-01-19 12:24:16 -06:00
Christopher Haster
1711bdef76 Merge pull request #886 from BrianPugh/macro-sanity-check
Add value-range checks for user-definable macros at compile-time
2024-01-19 12:23:36 -06:00
Christopher Haster
f522ed907a Added tests over rename type errors 2024-01-17 00:10:30 -06:00
Tom Szilagyi
4f32738cd6 Fix return value of lfs_rename()
When lfs_rename() is called trying to rename (move) a file to an
existing directory, LFS_ERR_ISDIR is (correctly) returned. However, in
the opposite case, if one tries to rename (move) a directory to a path
currently occupied by a regular file, LFS_ERR_NOTDIR should be
returned (since the error is that the destination is NOT a directory),
but in reality, LFS_ERR_ISDIR is returned in this case as well.

This commit fixes the code so that in the latter case, LFS_ERR_NOTDIR
is returned.
2024-01-17 00:06:52 -06:00
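A small hedged example of the corrected behavior, assuming "data" is an existing directory and "log.txt" an existing regular file:

  #include <assert.h>
  #include "lfs.h"

  // A small hedged example of the corrected behavior; "data" is assumed
  // to be an existing directory and "log.txt" an existing regular file.
  void check_rename_errors(lfs_t *lfs) {
      // renaming a file over a directory fails with LFS_ERR_ISDIR
      assert(lfs_rename(lfs, "log.txt", "data") == LFS_ERR_ISDIR);

      // renaming a directory over a file now fails with LFS_ERR_NOTDIR
      // (previously this also returned LFS_ERR_ISDIR)
      assert(lfs_rename(lfs, "data", "log.txt") == LFS_ERR_NOTDIR);
  }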
Christopher Haster
6691718b18 Restricted LFS_FILE_MAX to signed 32-bits, <2^31, <=2147483647
I think realistically no one is using this. It's already only partially
supported and untested.

Worst case, if someone does depend on this we can always revert.
2024-01-16 23:40:30 -06:00
Christopher Haster
1fefcbbcba Rearranged compile-time constant checks to live near lfs_init
lfs_init handles the checks/asserts of most configuration, moving these
checks near lfs_init attempts to keep all of these checks nearby each
other.

Also updated the comments to avoid sometimes-ambiguous range notation.

And removed negative bounds checks. Negative bounds should be obviously
incorrect, and 0 is _technically_ not illegal for any define (though
admittedly unlikely to be correct).
2024-01-16 23:39:51 -06:00
Christopher Haster
60567677b9 Relaxed alignment requirements for lfs_malloc
The only reason we needed this alignment was for the lookahead buffer.

Now that the lookahead buffer is relaxed to operate on bytes, we can
relax our malloc alignment requirement all the way down to the byte
level, since we mainly use lfs_malloc to allocate byte-level buffers.

This does introduce a risk that we might need word-level mallocs in the
future. If that happens we will need to decide if changing the malloc
alignment is a breaking change, or gate alignment requirements behind
user provided defines.

Found by HiFiPhile.
2024-01-16 00:27:07 -06:00
Christopher Haster
897b571318 Changed CI to just run on ubuntu-latest
If we already have to bump this version as GitHub phases out older
Ubuntu runners (which is reasonable), I don't really see the value of
pinning a specific version. We might as well just respond to any
broken dependencies caused by GitHub's implicit updates as they
happen...

It's not like CI is truly continuous.
2023-12-21 00:33:44 -06:00
Christopher Haster
3513ff1afc Merge pull request #911 from littlefs-project/fix-release-structs
Fix struct sizes missing from generated release notes
2023-12-21 00:08:16 -06:00
Christopher Haster
8a22bd6e67 Merge pull request #910 from littlefs-project/fix-superblock-expansion-thresh
Increase threshold for superblock expansion from ~50% -> ~88% full
2023-12-21 00:07:55 -06:00
Christopher Haster
9b82db72d8 Merge pull request #898 from zchen24/patch-1
Update DESIGN.md minor typo
2023-12-21 00:06:29 -06:00
Zihan Chen
99b84ee3db Update DESIGN.md, fix minor typo 2023-12-20 23:42:26 -06:00
Christopher Haster
b1b10c0e75 Relaxed lookahead buffer alignment
This drops the lookahead buffer from operating on 32-bit words to
operating on 8-bit bytes, and removes any alignment requirement. This
may have some minor performance impact, but it is unlikely to be
significant when you consider IO overhead.

The original motivation for 32-bit alignment was an attempt at
future-proofing in case we wanted some more complex on-disk data
structure. This never happened, and even if it did, it could have been
added via additional config options.

This has been a significant pain point for users, since providing
word-aligned byte-sized buffers in C can be a bit annoying.
2023-12-20 00:39:11 -06:00
Christopher Haster
1f9c3c04b1 Reworked the block allocator so the logic is hopefully simpler
Some of this is just better documentation, some of this is reworking the
logic to be more intention driven... if that makes sense...
2023-12-20 00:24:56 -06:00
Christopher Haster
7b68441888 Renamed a number of internal block-allocator fields
- Renamed lfs.free      -> lfs.lookahead
- Renamed lfs.free.off  -> lfs.lookahead.start
- Renamed lfs.free.i    -> lfs.lookahead.next
- Renamed lfs.free.ack  -> lfs.lookahead.ckpoint
- Renamed lfs_alloc_ack -> lfs_alloc_ckpoint

These have been named a bit confusingly, and I think the new names make
their relevant purposes a bit clearer.

At the very least it's clear lfs.lookahead is related to the lookahead
buffer (and doesn't imply a closed free-bitmap).
2023-12-20 00:17:08 -06:00
Christopher Haster
e91a29d2b5 Fixed struct sizes missing from generated release notes
This script was missed during a struct -> structs naming change
2023-12-19 22:00:18 -06:00
Christopher Haster
b9b95ab4bc Increase threshold for superblock expansion from ~50% -> ~88% full
Superblock expansion is an irreversible operation. In an effort to
prevent superblock expansion from claiming valuable scratch space
(important for small, <~8 block filesystems), littlefs prevents
superblock expansion when the disk is "mostly full".

In true computer-scientist fashion, this "mostly full" threshold was
set to ~50%.

As pointed out by gbolgradov and rojer, >~50% utilization is not
uncommon, and it can lead to a situation where superblock expansion does
not occur in a relatively healthy filesystem, causing focused wear at
the root.

To remedy this, the threshold is now increased to ~88% (7/8) full.

This may change in the future and should probably be eventually user
configurable.

Found by gbolgradov and rojer
2023-12-19 16:51:17 -06:00
Christopher Haster
9a620c730c Added LFS_CRC, easier override for lfs_crc
Now you can override littlefs's CRC implementation with some simple
defines:

  -DLFS_CRC=lfs_crc

The motivation for this is the same for LFS_MALLOC/LFS_FREE. I think
these are the main "system-level" utils that users want to override.

Don't override this with something that's not CRC32! Your filesystem
will no longer be compatible with other tools! This is only intended for
providing hardware acceleration!
2023-12-19 14:12:10 -06:00
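A hedged sketch of wiring in a hardware CRC32 through the new define; my_hw_crc32 is a hypothetical vendor routine, and the build is assumed to pass -DLFS_CRC=my_lfs_crc:

  #include <stdint.h>
  #include <stddef.h>

  // A hedged sketch of routing littlefs's CRC through a hardware unit;
  // my_hw_crc32 is a hypothetical vendor routine. Build with
  // -DLFS_CRC=my_lfs_crc so littlefs picks this function up.
  extern uint32_t my_hw_crc32(uint32_t seed, const void *data, size_t len);

  uint32_t my_lfs_crc(uint32_t crc, const void *buffer, size_t size) {
      // must compute the same CRC32 littlefs uses, or the resulting
      // images stop being compatible with other tools
      return my_hw_crc32(crc, buffer, size);
  }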
Christopher Haster
a0c6c54345 Added LFS_MALLOC/FREE, easier overrides for lfs_malloc/free
Now you can override littlefs's malloc with some simple defines:

  -DLFS_MALLOC=my_malloc
  -DLFS_FREE=my_free

This is probably what most users expected when wanting to override
malloc/free in littlefs, but it hasn't been available, since instead
littlefs provides a file-level override of builtin utils.

The thinking was that there's just too many builtins that could be
overridden, lfs_max/min/alignup/npw2/etc/etc/etc, so allowing users to
just override the util file provides the best flexibility without a ton
of ifdefs.

But it's become clear this is awkward for users that just want to
replace malloc.

Maybe the original goal was too optimistic, maybe there's a better way
to structure this file, or maybe the best API is just a bunch of ifdefs,
I have no idea! This will hopefully continue to evolve.
2023-12-19 13:57:17 -06:00
Zihan Chen
10bcff1af8 Update DESIGN.md minor typo 2023-11-26 11:10:24 -08:00
Christopher Haster
c733d9ec57 Merge pull request #884 from DvdGiessen/static-functions
lfs_fs_raw* functions should be static
2023-10-31 13:26:35 -05:00
Brian Pugh
c531a5e88f Replace erroneous LFS_FILE_MAX upper bound 4294967296 with 4294967295 2023-10-30 11:18:20 -07:00
Brian Pugh
8f9427dd53 Add value-range checks for user-definable macros 2023-10-29 13:50:38 -07:00
Christopher Haster
8f3f32d1f3 Added -Wmissing-prototypes
This warning is useful for catching the easy mistake of missing the
keyword static on functions intended to be internal-only.

Missing the static keyword risks symbol pollution and misses potential
compiler optimizations.

This is an interesting warning, while useful for libraries such as
littlefs, it's perfectly valid C to not predeclare all functions, and
common in final application binaries.

Relatedly, this warning is re-disabled for the test/bench runner. There
may be a better way to organize the CFLAGS, maybe into separate
LIB/RUNNER CFLAGS, but I'll leave this to future work if our CFLAGS grow
more complicated.

This was motivated by non-static internal-only functions leaking into a
release. Found and fixed by DvdGiessen.
2023-10-24 12:04:54 -05:00
Daniël van de Giessen
92fc780f71 lfs_fs_raw* functions should be static 2023-10-23 13:35:34 +02:00
Christopher Haster
f77214d1f0 Merge pull request #877 from littlefs-project/devel
Minor release: v2.8
2023-09-22 11:52:21 -05:00
Christopher Haster
f91c5bd687 Bumped minor version to v2.8 2023-09-21 13:02:09 -05:00
Christopher Haster
0eb52a2df1 Merge pull request #875 from littlefs-project/fs-gc
Add lfs_fs_gc to enable proactive finding of free blocks
2023-09-21 13:01:19 -05:00
Christopher Haster
6b33ee5e34 Renamed lfs_fs_findfreeblocks -> lfs_fs_gc, tweaked documentation
The idea is in the future this function may be extended to support other
block janitorial work. In such a case calling this lfs_fs_gc provides a
more general name that can include other operations.

This is currently just wishful thinking, however.
2023-09-21 12:23:38 -05:00
Christopher Haster
63e4408f2a Extended alloc tests to test some properties of lfs_fs_findfreeblocks
- Test that the code actually runs.

- Test that lfs_fs_findfreeblocks does not break block allocations.

- Test that lfs_fs_findfreeblocks does not error when no space is
  available; it should only error when the block is actually needed.
2023-09-21 12:23:38 -05:00
Christopher Haster
dbe4598c12 Added API boilerplate for lfs_fs_findfreeblocks and consistent style
This adds the tracing and optional locking for the littlefs API.

Also updated to match the code style, and added LFS_READONLY guards
where necessary.
2023-09-21 12:23:36 -05:00
ondrap
d85a0fe2e2 Move the lookahead buffer offset to the first free block; if no such block exists, move it by the whole lookahead size. 2023-09-21 12:21:25 -05:00
ondrap
b637379210 Update lfs_find_free_blocks to match the latest changes. 2023-09-21 12:18:55 -05:00
Christopher Haster
1ba4ed03f0 Merge pull request #872 from littlefs-project/fs-grow
Add lfs_fs_grow to enable limited resizing of the filesystem
2023-09-21 12:11:35 -05:00
Christopher Haster
e4b7fa15c1 Merge pull request #866 from BrianPugh/optional-block-count
Infer block_count from superblock if not provided in config.
2023-09-21 12:07:00 -05:00
Christopher Haster
23505fa9fa Added lfs_fs_grow for growing the filesystem to a different block_count
The initial implementation for this was provided by kaetemi, originally
as a mount flag. However, it has been modified here to be self-contained
in an explicit runtime function that can be called after mount.

The reasons for an explicit function:

1. lfs_mount stays a strictly readonly operation, and avoids pulling in
   all of the write machinery.

2. filesystem-wide operations such as lfs_fs_grow can be a bit risky,
   and irreversible. The action of growing the filesystem should be very
   intentional.

---

One concern with this change is that this will be the first function
that changes metadata in the superblock. This might break tools that
expect the first valid superblock entry to contain the most recent
metadata, since only the last superblock in the superblock chain will
contain the updated metadata.
2023-09-12 01:32:09 -05:00
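A hedged usage sketch, assuming the lfs_fs_grow(lfs, block_count) signature described by this commit:

  #include "lfs.h"

  // A hedged usage sketch: after a normal mount, explicitly grow the
  // filesystem to a larger block_count (the storage must really have
  // that many blocks available).
  int mount_and_grow(lfs_t *lfs, const struct lfs_config *cfg,
          lfs_size_t new_block_count) {
      int err = lfs_mount(lfs, cfg);  // mount stays strictly readonly
      if (err) {
          return err;
      }
      return lfs_fs_grow(lfs, new_block_count);
  }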
Christopher Haster
2c222af17d Tweaked lfs_fsinfo block_size/block_count fields
Mainly to match superblock ordering and emphasize these are logical
blocks.
2023-09-12 01:31:21 -05:00
Christopher Haster
127d84b681 Added a couple mixed/unknown block_count tests
These were cherry-picked from some previous work on a related feature.
2023-09-12 01:14:39 -05:00
Christopher Haster
027331b2f0 Adopted erase_size/erase_count config in test block-devices/runners
In separating the configuration of littlefs from the physical geometry
of the underlying device, we can no longer rely solely on lfs_config to
contain all of the information necessary for the simulated block devices
we use for testing.

This adds a new lfs_*bd_config struct for each of the block devices, and
new erase_size/erase_count fields. The erase_* name was chosen since
these reflect the (simulated) physical erase size and count of
erase-sized blocks, unlike the block_* variants which represent logical
block sizes used for littlefs's bookkeeping.

It may be worth adopting erase_size/erase_count in littlefs's config at
some point in the future, but at the moment doesn't seem necessary.

Changing the lfs_bd_config structs to be required is probably a good
idea anyways, as it moves us more towards separating the bds from
littlefs. Though we can't quite get rid of the lfs_config parameter
because of the block-device API in lfs_config. Eventually it would be
nice to get rid of it, but that would require API breakage.
2023-09-12 00:39:09 -05:00
Christopher Haster
9c23329dd7 Reverted the refactor of lfs_scan_* out of lfs_format
This would result in two passes through the superblock chain during
mount, when we can access everything we need to in one.
2023-09-03 13:19:03 -05:00
Christopher Haster
130790fa91 Merge pull request #863 from littlefs-project/fix-conversion-warning
Fix integer conversion warning from Code Composer Studio
2023-09-03 12:46:38 -05:00
Christopher Haster
531d5e5073 Merge pull request #855 from mdahamshi/mmd_fix
initialize struct lfs_diskoff disk = {0}
2023-09-03 12:46:28 -05:00
Christopher Haster
e40d8f5410 Merge pull request #849 from littlefs-project/fix-ci-release-no-version
Fix release script breaking if there is no previous version
2023-09-03 12:46:18 -05:00
Brian Pugh
23089d5758 remove previous block_count detection from lfs_format 2023-08-20 14:10:12 -07:00
Brian Pugh
d6098bd3ce Add block_count and block_size to fsinfo 2023-08-20 11:53:18 -07:00
Brian Pugh
d6c0c6a786 linting 2023-08-20 11:33:29 -07:00
Brian Pugh
5caa83fb77 forgot to unmount lfs in test; leaking memory 2023-08-17 22:10:53 -07:00
Brian Pugh
7521e0a6b2 fix newly introduced missing cleanup when an invalid superblock is found. 2023-08-17 20:51:33 -07:00
Brian Pugh
2ebfec78c3 test for failure when interpreting block count when formatting without superblock 2023-08-17 15:20:46 -07:00
Brian Pugh
3d0bcf4066 Add test_superblocks_mount_unknown_block_count 2023-08-17 15:13:16 -07:00
Brian Pugh
6de3fc6ae8 fix corruption check 2023-08-17 15:07:19 -07:00
Brian Pugh
df238ebac6 Add a unit test; currently hanging on final permutation.
Some block-device bound-checks are disabled during superblock search.
2023-08-16 23:07:55 -07:00
Brian Pugh
be6812213d introduce lfs->block_count. If cfg->block_count is 0, autopopulate from superblock 2023-08-16 22:23:34 -07:00
Brian Pugh
6dae7038f9 remove redundant superblock check 2023-08-16 22:23:34 -07:00
Brian Pugh
73285278b9 refactor lfs_scan_for_state_updates and lfs_scan_for_superblock out of lfs_format 2023-08-16 22:23:34 -07:00
Mohammad Dahamshi
5a834b6fc1 initialize struct lfs_diskoff disk = {0}
so we don't use it uninitialized in the first run
2023-08-03 11:21:58 -05:00
Christopher Haster
d775b46e3d Fixed integer conversion warning from Code Composer Studio
Proposed by FiddlingBits
2023-08-03 11:16:40 -05:00
Christopher Haster
96fb8bec85 Fixed release script breaking if there is no previous version
This can't actually happen in the current state of the littlefs GitHub
repo, but could in theory cause problems if CI is enabled on a fork.

Found while enabling GitHub Actions on littlefs-fuse.
2023-07-03 12:27:17 -05:00
Christopher Haster
611c9b20db Merge pull request #848 from littlefs-project/devel
Minor release: v2.7
2023-06-30 12:33:10 -05:00
Christopher Haster
a942cdba66 Bumped minor version to v2.7 2023-06-30 00:28:10 -05:00
Christopher Haster
225fc31a17 Merge pull request #846 from littlefs-project/link-chan-fatfs
Add a link to ChaN's FatFS implementation
2023-06-30 00:26:43 -05:00
Christopher Haster
5db368c0a2 Merge pull request #839 from littlefs-project/configurable-disk-version
Add support for writing previous on-disk minor versions
2023-06-30 00:26:29 -05:00
Christopher Haster
f09c6a4eb7 Merge pull request #838 from littlefs-project/fs-stat
Add lfs_fs_stat for access to filesystem status/configuration
2023-06-30 00:25:59 -05:00
Christopher Haster
79cc75d18f Added LFS_MULTIVERSION and testing of lfs2.0 to CI
- Added test-multiversion test job
- Added test-lfs2_0 test job
- Added multiversion size measurement
2023-06-29 12:31:22 -05:00
Christopher Haster
eb9af7abe5 Added LFS_MULTIVERSION, made lfs2.0 support a compile-time option
The code-cost wasn't that bad: 16556 B -> 16754 B (+1.2%)

But moving write support of older versions behind a compile-time flag
allows us to be a bit more liberal with what gets added to support older
versions, since the cost won't hit most users.
2023-06-29 12:31:22 -05:00
Christopher Haster
b72c96d440 Added support for writing on-disk version lfs2.0
The intention is to help interop with older minor versions of littlefs.

Unfortunately, since lfs2.0 drivers cannot mount lfs2.1 images, there are
situations where it would be useful to write strictly lfs2.0
compatible images. The solution here adds a "disk_version" configuration
option which determines the behavior of lfs2.1 dependent features.

Normally you would expect this to only change write behavior. But since the
main change in lfs2.1 increased validation of erased data, we also need to
skip this extra validation (fcrc) or see terrible slowdowns when writing.
2023-06-29 12:31:22 -05:00
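A hedged configuration sketch, assuming a build with -DLFS_MULTIVERSION; the geometry and block device callbacks are omitted placeholders:

  #include "lfs.h"

  // A hedged sketch; requires building with -DLFS_MULTIVERSION, and the
  // block device callbacks/geometry are omitted placeholders here.
  const struct lfs_config lfs20_cfg = {
      // .read/.prog/.erase/.sync and geometry omitted for brevity

      // disk_version packs major/minor into 32 bits; 0 keeps the
      // current default (0x00020001 at the time of these commits),
      // while 0x00020000 writes strictly lfs2.0-compatible images
      .disk_version = 0x00020000,
  };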
Christopher Haster
265692e709 Removed fsinfo.block_usage for now
In terms of ease-of-use, a user familiar with other filesystems expects
block_usage in fsinfo. But in terms of practicality, block_usage can be
expensive to find in littlefs, so if it's not needed in the resulting
fsinfo, that operation is wasteful.

It's not clear to me what the best course of action is, but since
block_usage can always be added to fsinfo later, but not removed without
breaking backwards compatibility, I'm leaving this out for now.

Block usage can still be found by explicitly calling lfs_fs_size.
2023-06-29 12:23:33 -05:00
Christopher Haster
08a132e048 Added a link to ChaN's FatFS implementation
ChaN's FAT implementation definitely deserves a mention here, since it
was one of the first open-source microcontroller-oriented filesystem
implementations that I'm aware of, and has a lot of good ideas at the
implementation level.

Honestly I didn't realize this wasn't already linked to from here. If
you're using FAT on a microcontroller, it's most likely this library.
2023-06-26 15:37:32 -05:00
Christopher Haster
c5fb3f181b Changed fsinfo.minor_version -> fsinfo.disk_version
Versions are now returned with major/minor packed into 32-bits,
so 0x00020001 is the current disk version, for example.

1. This needed to change to use a disk_* prefix for consistency with the
   defines that already exist for LFS_VERSION/LFS_DISK_VERSION.

2. Encoding the version this way has the nice side effect of making 0 an
   invalid value. This is useful for adding a similar config option
   that needs to have reasonable default behavior for backwards
   compatibility.

In theory this uses more space, but in practice most other config/status
is 32-bits in littlefs. We would be wasting this space for alignment
anyways.
2023-06-06 22:03:00 -05:00
Christopher Haster
8610f7c36b Increased context on failures for Valgrind in CI
Valgrind output is very verbose but useful; with a default limit of 5
lines, the output usually doesn't contain much useful info.
2023-06-06 22:02:14 -05:00
Christopher Haster
a51be18765 Removed previous-version lfsp_fs_stat checks in test_compat
This function naturally doesn't exist in the previous version. We should
eventually add these calls when we can expect the previous version to
support this function, though it's a bit unclear when that should happen.

Or maybe not! Maybe this is testing more of the previous version than we
really care about.
2023-06-06 22:00:26 -05:00
Christopher Haster
a7ccc1df59 Promoted lfs_gstate_needssuperblock to be available in readonly builds
Needed for minor version reporting in lfs_fs_stat to work correctly.
2023-06-06 15:59:45 -05:00
Christopher Haster
fdee127f74 Removed use of LFS_VERSION in test_compat
LFS_VERSION -> LFS_DISK_VERSION

These tests shouldn't depend on LFS_VERSION. It's a bit subtle, but
LFS_VERSION versions the API, and LFS_DISK_VERSION versions the
on-disk format, which is what test_compat should be testing.
2023-06-06 14:55:22 -05:00
Christopher Haster
87bbf1d374 Added lfs_fs_stat for access to filesystem status/configuration
Currently this includes:

- minor_version - on-disk minor version
- block_usage - estimated number of in-use blocks
- name_max - configurable name limit
- file_max - configurable file limit
- attr_max - configurable attr limit

These are currently the only configuration options that need to be
written to disk. Other configuration is either needed to mount, such as
block_size, or does not change the on-disk representation, such as
read/prog_size.

This also includes the current block usage, which is common in other
filesystems, though more expensive to find in littlefs. I figure it's
not unreasonable to make lfs_fs_stat no worse than block allocation,
hopefully this isn't a mistake. It may be worth caching the current
usage after the most recent lookahead scan.

More configuration may be added to this struct in the future.
2023-06-06 13:02:16 -05:00
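A hedged usage sketch of lfs_fs_stat; note the exact lfs_fsinfo fields evolve over the commits above (block_usage is later dropped and minor_version becomes disk_version), so only the *_max fields are printed here:

  #include <stdio.h>
  #include <inttypes.h>
  #include "lfs.h"

  // A hedged usage sketch of lfs_fs_stat; the version/usage fields are
  // omitted since their names change over the commits in this range.
  int print_fs_status(lfs_t *lfs) {
      struct lfs_fsinfo fsinfo;
      int err = lfs_fs_stat(lfs, &fsinfo);
      if (err) {
          return err;
      }
      printf("name_max: %"PRIu32"\n", (uint32_t)fsinfo.name_max);
      printf("file_max: %"PRIu32"\n", (uint32_t)fsinfo.file_max);
      printf("attr_max: %"PRIu32"\n", (uint32_t)fsinfo.attr_max);
      return 0;
  }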
Christopher Haster
66f07563c3 Merge pull request #832 from littlefs-project/remove-sys-types
Remove unnecessary sys/types.h include
2023-05-23 14:46:12 -05:00
Christopher Haster
5eed341059 Merge pull request #819 from benpicco/fix-AVR
Fix build for AVR
2023-05-23 14:45:34 -05:00
Christopher Haster
97e2526a81 Merge pull request #818 from littlefs-project/convince-github-littlefs-is-c
Convince GitHub littlefs is a C project
2023-05-23 14:44:48 -05:00
Christopher Haster
8a4ee65fc3 Removed unnecessary sys/types.h include
Likely included at some point for ssize_t, this is no longer needed and
causes some problems for embedded compilers.

Currently littlefs doesn't even use size_t/ssize_t in its definition of
lfs_size_t/lfs_ssize_t, so I don't think this will ever be required.

Found by LDong-Arm, vvn-git
2023-05-17 11:11:27 -05:00
Benjamin Valentin
6fda813ce8 Fix build for AVR
This fixes the overflowing left shift on 8 bit platforms.

    littlefs2/lfs.c: In function ‘lfs_dir_commitcrc’:
    littlefs2/lfs.c:1654:51: error: left shift count >= width of type [-Werror=shift-count-overflow]
             commit->ptag = ntag ^ ((0x80 & ~eperturb) << 24);
2023-05-05 12:11:20 +02:00
Christopher Haster
f2bc6a8e88 Reclassify .toml files as .c files on GitHub
With the new test framework, GitHub really wants to mark littlefs as a
python project. telling it to reclassify our test .toml files as C code
(which they are 95% of anyways) remedies this.

An alternative would be to add syntax=c vim modelines to the test/bench
files, which would also render them with C syntax highlighting on
GitHub. Unfortunately the interspersed toml metadata mucks this up,
making the result not very useful.
2023-05-04 14:01:04 -05:00
Christopher Haster
ec3ec86bcc Merge pull request #814 from littlefs-project/devel
Minor release: v2.6
2023-05-04 12:55:52 -05:00
Christopher Haster
405f33214a Merge pull request #812 from littlefs-project/mkconsistent
Add lfs_fs_mkconsistent
2023-04-30 23:26:04 -05:00
Christopher Haster
3dca02911f Merge pull request #811 from littlefs-project/fix-deorphan-repeatedly
Fix issue where lfs_fs_deorphan may run more than needed
2023-04-30 23:25:01 -05:00
Christopher Haster
259535ee73 Added lfs_fs_mkconsistent
lfs_fs_mkconsistent allows running the internal consistency operations
(desuperblock/deorphan/demove) on demand and without any other
filesystem changes.

This can be useful for front-loading and persisting consistency operations
when you don't want to pay for this cost on the first write to the
filesystem.

Conveniently, this also offers a way to force the on-disk minor version
to bump, if that is wanted behavior.

Idea from kasper0
2023-04-26 21:45:26 -05:00
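A hedged usage sketch of front-loading the consistency work right after mount:

  #include "lfs.h"

  // A hedged sketch of front-loading consistency work right after
  // mount, so the first real write doesn't pay for the
  // desuperblock/deorphan/demove pass.
  int mount_and_settle(lfs_t *lfs, const struct lfs_config *cfg) {
      int err = lfs_mount(lfs, cfg);
      if (err) {
          return err;
      }
      return lfs_fs_mkconsistent(lfs);
  }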
Christopher Haster
94d9e097a6 Fixed issue where lfs_fs_deorphan may run more than needed
The underlying issue is that lfs_fs_deorphan did not update gstate
correctly. The way it determined if there are any orphans remaining in
the filesystem was by subtracting the number of found orphans from an
internal counter.

This internal counter is a leftover from a previous implementation that
allowed leaving the lfs_fs_deorphan loop early if we know the number of
expected orphans. This can happen during recursive mdir relocations, but
with only a single bit in the gstate, can't happen during mount. If we
detect orphans during mount, we set this internal counter to 1, assuming
we will find at least one orphan.

But this presents a problem: what if we find _no_ orphans? If this happens
we never decrement the internal counter of orphans, so we would never
clear the bit in the gstate. This leads to running lfs_fs_deorphan
on more-or-less every mutable operation in the filesystem, resulting in
an extreme performance hit.

The solution here is to not subtract the number of found orphans, but assume
that when our lfs_fs_deorphan loop finishes, we will have no orphans, because
that's the whole point of lfs_fs_deorphan.

Note that the early termination of lfs_fs_deorphan was dropped because
it would not actually change the runtime complexity of lfs_fs_deorphan,
adds code cost, and risks fragile corner cases such as this one.

---

Also added tests to assert we run lfs_fs_deorphan at most once.

Found by kasper0 and Ldd309
2023-04-26 21:41:26 -05:00
Christopher Haster
dd03c27476 Merge pull request #805 from littlefs-project/fix-dir-seek-end
Fix issue where seeking to end-of-directory returns LFS_ERR_INVAL
2023-04-26 14:32:14 -05:00
Christopher Haster
23a4a089b5 Merge pull request #800 from littlefs-project/fix-boundary-truncates
Fix block-boundary truncate issues
2023-04-26 14:31:23 -05:00
Christopher Haster
b6773e68bf Merge remote-tracking branch 'origin/devel' into fix-dir-seek-end 2023-04-26 13:47:58 -05:00
Christopher Haster
922a35b3a5 Merge remote-tracking branch 'origin/devel' into fix-boundary-truncates 2023-04-26 13:30:04 -05:00
Christopher Haster
92298c749d Merge pull request #802 from littlefs-project/assert-minimum-block-size
Add explicit assert for minimum block size of 128 bytes
2023-04-26 02:41:44 -05:00
Christopher Haster
50b394ca36 Merge pull request #801 from littlefs-project/assert-bool-cast
Add an assert for truthy-preserving bool conversions
2023-04-26 02:41:30 -05:00
Christopher Haster
a99574cd5b Merge pull request #807 from littlefs-project/doc-link-littlefs2-rust
Add littlefs2 crate to README
2023-04-26 02:40:51 -05:00
Christopher Haster
363a8b56cf Tweaked wording of littlefs2-rust link in README.md 2023-04-26 02:02:23 -05:00
Lachezar Lechev
e43d381135 chore: add littlefs2 crate to README 2023-04-26 01:59:57 -05:00
Christopher Haster
ee6a51bbbe Merge pull request #718 from yomimono/mention-chamelon
Add "chamelon" to the related projects section.
2023-04-26 01:57:31 -05:00
Christopher Haster
01ac033d47 Merge pull request #572 from tniessen/add-littlefs-disk-img-viewer
Add littlefs-disk-img-viewer to README
2023-04-26 01:56:31 -05:00
Christopher Haster
2a18e03cb8 Merge pull request #809 from littlefs-project/brent-cycle-detection
Adopt Brent's algorithm for cycle detection
2023-04-26 01:55:50 -05:00
Christopher Haster
6f074ebe31 Merge pull request #497 from littlefs-project/crc-rework-2
Forward-looking erase-state CRCs
2023-04-26 01:15:59 -05:00
Christopher Haster
0a7eca0bd5 Merge pull request #752 from littlefs-project/test-and-bench-runners
Add test/bench runners, benchmarks, additional scripts
2023-04-26 01:09:01 -05:00
Christopher Haster
3e25dfc16c Added FCRC tags and an explanation of how FCRCs work to SPEC.md
See SPEC.md for more info.

Also considered adding an explanation to DESIGN.md, but there's not a
great place for it. Maybe FCRCs are too low-level for the high-level
design document. Though may be worth reconsidering if DESIGN.md gets
revisited.
2023-04-21 14:49:49 -05:00
Christopher Haster
9e28c75482 Bumped minor version to v2.6 and on-disk minor version to lfs2.1 2023-04-21 00:57:00 -05:00
Christopher Haster
4c9360020e Added ability to bump on-disk minor version
This just means a rewrite of the superblock entry with the new minor
version.

Though it's interesting to note, we don't need to rewrite the superblock
entry until the first write operation in the filesystem, an optimization
that is already in use for the fixing of orphans and in-flight moves.

To keep track of any outdated minor version found during lfs_mount, we
can carve out a bit from the reserved bits in our gstate. These are
currently used for a counter tracking the number of orphans in the
filesystem, but this is usually a very small number so this hopefully
won't be an issue.

In-device gstate tag:

  [--       32      --]
  [1|- 11 -| 10 |1| 9 ]
   ^----^-----^--^--^-- 1-bit has orphans
        '-----|--|--|-- 11-bit move type
              '--|--|-- 10-bit move id
                 '--|-- 1-bit needs superblock
                    '-- 9-bit orphan count
2023-04-21 00:56:55 -05:00
Christopher Haster
ca0da3d490 Added compatibility testing on pull-request to GitHub test action
This uses the "github.event.pull_request.base.ref" variable as the
"lfsp" target for compatibility testing.
2023-04-21 00:29:28 -05:00
Christopher Haster
116332d3f7 Added tests for forwards and backwards disk compatibility
This is a bit tricky since we need two different version of littlefs in
order to test for most compatibility concerns.

Fortunately we already have scripts/changeprefix.py for version-specific
symbols, so it's not that hard to link in the previous version of
littlefs in CI as a separate set of symbols, "lfsp_" in this case.

So that we can at least run the compatibility tests locally, I've added
an ifdef against the expected define "LFSP" to define a set of aliases
mapping "lfsp_" symbols to "lfs_" symbols. This is manual at the moment,
and a bit hacky, but gets the job done.

---

Also changed BUILDDIR creation to derive subdirectories from a few
Makefile variables. This makes the subdirectories less manual and more
flexible for things like LFSP. Note this wasn't possible until BUILDDIR
was changed to default to "." when omitted.
2023-04-21 00:28:55 -05:00
Christopher Haster
f0cc1db793 Tweaked changeprefix.py to not rename dir component in paths
This wasn't implemented correctly anyways, as it would need to recursively
rename directories that may not exist. Things would also get a bit
complicated if only some files in a directory were renamed.

Doable, but not needed for our use case.

For now just ignore any directory components. Though this may be worth
changing if the source directory structure becomes more complicated in
the future (maybe with a -r/--recursive flag?).
2023-04-19 18:33:47 -05:00
Christopher Haster
bf045dd13c Tweaked link to littlefs-disk-img-viewer to go to github repo 2023-04-19 11:48:06 -05:00
Christopher Haster
b33a5b3f85 Fixed issue where seeking to end-of-directory returned LFS_ERR_INVAL
This was just an oversight. Seeking to the end of the directory should
not error, but instead restore the directory to the state where the next
read returns 0.

Note this matches the behavior of lfs_file_tell/lfs_file_seek.

Found by sosthene-nitrokey
2023-04-18 15:10:07 -05:00
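A hedged sketch of the behavior this fixes, using the public directory API:

  #include "lfs.h"

  // A hedged sketch of the fixed behavior: after reading to the end of
  // a directory, seeking back to that end offset should succeed, and
  // the next read should return 0 (end of dir), not LFS_ERR_INVAL.
  int check_dir_seek_end(lfs_t *lfs, const char *path) {
      lfs_dir_t dir;
      struct lfs_info info;
      int err = lfs_dir_open(lfs, &dir, path);
      if (err) {
          return err;
      }

      // read until the end of the directory
      while ((err = lfs_dir_read(lfs, &dir, &info)) > 0) {}
      if (err == 0) {
          // seek to the end-of-directory offset, this must not fail
          lfs_soff_t end = lfs_dir_tell(lfs, &dir);
          err = lfs_dir_seek(lfs, &dir, end);
          if (err == 0) {
              // reading again should just report end of directory (0)
              err = lfs_dir_read(lfs, &dir, &info);
          }
      }

      lfs_dir_close(lfs, &dir);
      return (err < 0) ? err : 0;
  }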
Christopher Haster
384a498762 Extend dir seek tests to include seeking to end of directory 2023-04-18 14:55:43 -05:00
Christopher Haster
b0a4a44e5b Added explicit assert for minimum block size of 128 bytes
There was already an assert for this, but because it included the
underlying equation for the requirement it was too confusing for
users that had no prior knowledge for why the assert could trigger.

The math works out such that 128 bytes is a reasonable minimum
requirement, so I've added that number as an explicit assert.
Hopefully this makes this sort of situation easier to debug.

Note that this requirement would need to be increased to 512 bytes if
block addresses are ever increased to 64-bits. DESIGN.md goes into more
detail why this is.
2023-04-17 19:58:09 -05:00
Christopher Haster
aae897ffd0 Added an assert for truthy-preserving bool conversions
This has caught enough people that an explicit assert is warranted.
How littlefs, a c99 project, should be integrated with c89 projects
is still an open question, but no one deserves to debug this sort of
undetected casting issue.

Found by johnernberg and XinStellaris
2023-04-17 19:19:42 -05:00
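A hedged illustration (not littlefs code) of the kind of truncation the assert is meant to catch when a c89 project supplies its own single-byte bool:

  #include <stdio.h>

  // A hedged illustration of the pitfall: a c89 project that defines
  // its own single-byte "bool" can silently truncate a nonzero flag to
  // 0, breaking truthiness.
  typedef unsigned char c89_bool;

  int main(void) {
      unsigned flags = 0x80000000u;  // some flag in a high bit
      c89_bool truthy = (c89_bool)(flags & 0x80000000u);  // truncates to 0!
      printf("flag set? %d\n", truthy);  // prints 0, despite the flag
      return 0;
  }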
Christopher Haster
e57402c8e9 Added ability to revert to inline file in lfs_file_truncate
Before, once converted to a CTZ skip-list, a file would remain a CTZ
skip-list even if truncated back to a size that could be inlined.

This was just a shortcut in implementation. And since the fix for boundary
truncates needed special handling for size==0, it made sense to extend
this special condition to allow reverting to inline files.

---

The only case I can think of, where reverting to an inline file would be
detrimental, is if it's a readonly file that you would otherwise not need
to pay the metadata overhead for. But as a tradeoff, inlining the file
would free up the block it was on, so it's unclear if this really is
a net loss.

If the truncate is followed by a write, reverting to an inline file will
always be beneficial. We assume writes will change the data, so in the
non-inlined case there's no way to avoid copying the underlying block.
Even if we assume padding issues are solved.
2023-04-17 18:18:06 -05:00
Christopher Haster
6dc18c38c1 Fixed block-boundary truncate issue
There has been a bug in the filesystem for a while where truncating to a
block boundary suffers from an off-by-one mistake that corrupts the
internal representation of the CTZ skip-list.

This mostly appears when the file_size == block_size, as file_size >
block_size includes CTZ skip-list metadata, so the underlying block
boundaries appear at slightly different offsets.

---

The reason for the off-by-one issue is a nuance in lfs_ctz_find that we sort
of abuse to get two different behaviors.

Consider the situation where this bug occurs:

   block 0     block 1
  .--------.  .--------.
  | abcdef |<-| {ptr0} |
  | ghijkl |  | yzabcd |
  | mnopqr |  |        |
  | stuvwx |  |        |
  '--------'  '--------'

With these 24-byte blocks, there's an ambiguity if we wanted to point to
offset 24. We could point before the block boundary, or we could point
after the block boundary

Before:

   block 0     block 1
  .--------.  .--------.
  | abcdef |<-| {ptr0} |
  | ghijkl |  | yzabcd |
  | mnopqr |  |        |
  | stuvwx |  |        |
  '-------^'  '--------'
          '-- off=24 is here

After:

   block 0     block 1
  .--------.  .--------.
  | abcdef |<-| {ptr0} |
  | ghijkl |  | yzabcd |
  | mnopqr |  | ^      |
  | stuvwx |  | |      |
  '--------'  '-|------'
                '-- off=24 is here

When we want these two offsets depends on the context. We want the
offset to be conservative if it represents a size, but eager if it is
being used to prepare a block for writing.

The workaround/hack is to prefer the eager offset, after the block boundary,
but use `size-1` as the argument if we need the conservative offset.

This finds the correct block, but is off-by-one in the calculated
block-offset. Fortunately we happen to not use the block-offset in the
places we need this workaround/hack.

---

To get back to the bug, the wrong mode of lfs_ctz_find was used in
lfs_file_truncate, leading to internal corruption of the CTZ skip-list.

The correct behavior is size-1, with care to avoid underflow.

Also I've tweaked the code to make it clear the calculated block-offset
goes unused in these situations.

Thanks to ghost, ajaybhargav, and others for reporting the issue,
colin-foster-advantage for a reproducible test case, and rvanschoren,
hgspbs for the initial solution.
2023-04-17 17:49:57 -05:00
Christopher Haster
d5dc4872cb Expanded truncate tests to test more corner cases
Removed the weird alignment requirement from the general truncate tests.
This explicitly hid off-by-one truncation errors.

These tests now reveal the same issue as the block-sized truncation test
while also testing for other potential off-by-one errors.
2023-04-17 12:10:19 -05:00
Sosthène Guédon
24795e6b74 Add missing iterations in tests 2023-03-13 11:39:06 +01:00
Colin Foster
7b151e1abb Add test scenario for truncating to a block size
When truncation is done on a file to the block size, there seems to be
an error where it points to an incorrect block. Perform a write /
truncate / readback operation to verify this issue.

Signed-off-by: Colin Foster <colin.foster@in-advantage.com>
2023-01-26 11:55:53 -08:00
Christopher Haster
ba1c76435a Fixed issue where deorphan could get stuck circling between two half-orphans
This of course should never happen normally, two half-orphans requires
two parents, which is disallowed in littlefs for this reason. But it can
happen if there is an outdated half-orphan later in the metadata
linked-list. The two half-orphans can cause the deorphan step to get
stuck, constantly "fixing" the first half-orphan before it has a chance
to remove the problematic, outdated half-orphan later in the list.

The solution here is to do a full check for half-orphans before
restarting the half-orphan loop. This strategy has the potential to
visit more metadata blocks unnecessarily, but avoids situations where
removing a later half-orphan will eventually cause an earlier
half-orphan to resolve itself.

Found with heuristic powerloss testing with test_relocations_reentrant_renames
after 192 nested powerlosses.
2022-12-17 12:42:05 -06:00
Christopher Haster
d1b254da2c Reverted removal of 1-bit counter threaded through tags
Initially I thought the fcrc would be sufficient for all of the
end-of-commit context, since indicating that there is a new commit is a
simple as invalidating the fcrc. But it turns out there are cases that
make this impossible.

The surprising, and actually common, case is that of an fcrc that
will end up containing a full commit. This is common as soon as the
prog_size is big, as small commits are padded to the prog_size at
minimum.

  .------------------. \
  |     metadata     | |
  |                  | |
  |                  | +-.
  |------------------| | |
  |   forward CRC -----------.
  |------------------| / |   |
  |   commit CRC    -----'   |
  |------------------|       |
  |     padding      |       |
  |                  |       |
  |------------------| \   \ |
  |     metadata     | |   | |
  |                  | +-. | |
  |                  | | | +-'
  |------------------| / | |
  |   commit CRC --------' |
  |------------------|     |
  |                  |     /
  '------------------'

When the commit + crc is all contained in the fcrc, something silly
happens with the math behind crcs. Everything in the commit gets
canceled out:

  crc(m) = m(x) x^|P|-1 mod P(x)

  m ++ crc(m) = m(x) x^|P|-1 + (m(x) x^|P|-1 mod P(x))

  crc(m ++ crc(m)) = (m(x) x^|P|-1 + (m(x) x^|P|-1 mod P(x))) x^|P|-1 mod P(x)

  crc(m ++ crc(m)) = (m(x) x^|P|-1 + m(x) x^|P|-1) x^|P|-1 mod P(x)

  crc(m ++ crc(m)) = 0 * x^|P|-1 mod P(x)

This is the reason the crc of a message + naive crc is zero. Even with an
initializer/bit-fiddling, the crc of the whole commit ends up as some
constant.

So no manipulation of the commit can change the fcrc...

But even if this did work, or we changed this scheme to use two
different checksums, it would still require calculating the fcrc of
the whole commit to know if we need to tweak the first bit to invalidate
the unlikely-but-problematic case where we happen to match the fcrc. This
would add a large amount of complexity to the commit code.

It's much simpler and cheaper to keep the 1-bit counter in the tag, even
if it adds another moving part to the system.
2022-12-17 12:42:05 -06:00
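The cancellation above can be sanity-checked outside littlefs; a small standalone example using the common reflected CRC-32 polynomial (0xedb88320), not littlefs's lfs_crc:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  // A standalone check of the cancellation described above, using a
  // plain reflected CRC-32 (polynomial 0xedb88320); not littlefs code.
  static uint32_t crc32(uint32_t crc, const void *buf, size_t size) {
      const uint8_t *p = buf;
      crc = ~crc;
      while (size--) {
          crc ^= *p++;
          for (int i = 0; i < 8; i++) {
              crc = (crc >> 1) ^ (0xedb88320 & -(crc & 1));
          }
      }
      return ~crc;
  }

  int main(void) {
      const char *msgs[2] = {"first commit", "a different commit"};
      for (int i = 0; i < 2; i++) {
          size_t len = strlen(msgs[i]);
          uint8_t buf[64];
          memcpy(buf, msgs[i], len);

          // append crc(m) little-endian, like an on-disk commit crc
          uint32_t crc = crc32(0, msgs[i], len);
          buf[len+0] = (uint8_t)(crc >>  0);
          buf[len+1] = (uint8_t)(crc >>  8);
          buf[len+2] = (uint8_t)(crc >> 16);
          buf[len+3] = (uint8_t)(crc >> 24);

          // crc(m ++ crc(m)) is the same constant for every message, so
          // the fcrc alone can't tell one commit from another
          printf("crc(m ++ crc(m)) = 0x%08x\n",
                  (unsigned)crc32(0, buf, len+4));
      }
      return 0;
  }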
Christopher Haster
2f26966710 Continued implementation of forward-crcs, adopted new test runners
This fixes most of the remaining bugs (except one with multiple padding
commits + noop erases in test_badblocks), with some other code tweaks.

The biggest change was dropping reliance on end-of-block commits to know
when to stop parsing commits. We can just continue to parse tags and
rely on the crc to catch bad commits, avoiding a backwards-compatibility
hiccup. So no new commit tag.

Also renamed nprogcrc -> fcrc and commitcrc -> ccrc and made naming in
the code a bit more consistent.
2022-12-17 12:42:05 -06:00
Christopher Haster
b4091c6871 Switched to separate-tag encoding of forward-looking CRCs
Previously forward-looking CRCs was just two new CRC types, one for
commits with forward-looking CRCs, one without. These both contained the
CRC needed to complete the current commit (note that the commit CRC
must come last!).

         [--   32   --|--   32   --|--   32   --|--   32   --]
with:    [  crc3 tag  | nprog size |  nprog crc | commit crc ]
without: [  crc2 tag  | commit crc ]

This meant there had to be several checks for the two possible structure
sizes, messing up the implementation.

         [--   32   --|--   32   --|--   32   --|--   32   --|--   32   --]
with:    [nprogcrc tag| nprog size |  nprog crc | commit tag | commit crc ]
without: [ commit tag | commit crc ]

But we already have a mechanism for storing optional metadata! The
different metadata tags! So why not use a separate tag for the
forward-looking CRC, separate from the commit CRC?

I wasn't sure this would actually help that much; there are still
necessary conditions for whether or not a forward-looking CRC is there,
but in the end it simplified the code quite nicely, and resulted in a ~200 byte
code-cost saving.
2022-12-17 12:42:05 -06:00
Christopher Haster
91ad673c45 Cleaned up a few additional commit corner cases
- General cleanup from integration, including cleaning up some older
  commit code
- Partial-prog tests do not make sense when prog_size == block_size
  (there can't be partial-progs!)
- Fixed signed-comparison issue in modified filebd
2022-12-17 12:42:05 -06:00
Christopher Haster
52dd83096b Initial implementation of forward-looking erase-state CRCs
This change is necessary to handle out-of-order writes found by pjsg's
fuzzing work.

The problem is that it is possible for (non-NOR) block devices to write
pages in any order, or to even write random data in the case of a
power-loss. This breaks littlefs's use of the first bit in a page to
indicate the erase-state.

pjsg notes this behavior is documented in the W25Q here:
https://community.cypress.com/docs/DOC-10507

---

The basic idea here is to CRC the next page, and use this "erase-state CRC" to
check if the next page is erased and ready to accept programs.

.------------------. \   commit
|     metadata     | |
|                  | +---.
|                  | |   |
|------------------| |   |
| erase-state CRC -----. |
|------------------| | | |
|   commit CRC    ---|-|-'
|------------------| / |
|     padding      |   | padding (doesn't need CRC)
|                  |   |
|------------------| \ | next prog
|     erased?      | +-'
|        |         | |
|        v         | /
|                  |
|                  |
'------------------'

This is made a bit annoying since littlefs doesn't actually store the
page (prog_size) in the superblock, since it doesn't need to know the
size for any other operation. We can work around this by storing both
the CRC and size of the next page when necessary.

Another interesting note is that we don't need any bit tweaking
information, since we read the next page every time we would need to
know how to clobber the erase-state CRC. And since we only read
prog_size, this works really well with our caching, since the caches
must be a multiple of prog_size.

This also brings back the internal lfs_bd_crc function, in which we can
use some optimizations added to lfs_bd_cmp.

Needs some cleanup but the idea is passing most relevant tests.
2022-12-17 12:42:05 -06:00
Christopher Haster
1278ec1d08 Adopted Brent's algorithm for cycle detection
The previous cycle detection algorithm (a naive check against the largest
possible tail list) is simple and gets the job done, but has the potential to
take a very long time on disks with many blocks. Brent's algorithm, on
the other hand, takes at most 2x the number of blocks in the tail list.

Originally naive cycle detection was chosen over Floyd's algorithm to
avoid the extra complexity of managing two desynced traversals for every
traversal of the tail list, but Brent's algorithm is very well suited for our
use case, requiring only that we keep track of an additional mdir pointer on the
stack as we traverse.
2022-12-17 12:41:39 -06:00
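A generic sketch of Brent's algorithm on a hypothetical singly-linked node type; the littlefs version instead follows mdir tail pointers and keeps the extra mdir on the stack:

  #include <stdbool.h>
  #include <stddef.h>

  // A generic sketch of Brent's cycle detection on a hypothetical
  // singly-linked node type; not the littlefs mdir traversal itself.
  struct node {
      struct node *next;
  };

  static bool has_cycle(struct node *head) {
      if (!head) {
          return false;
      }
      struct node *tortoise = head;  // the extra pointer kept on the stack
      struct node *hare = head->next;
      size_t power = 1;  // current power-of-two search window
      size_t steps = 1;  // steps the hare has taken in this window
      while (hare && hare != tortoise) {
          if (steps == power) {
              // teleport the tortoise to the hare and double the window
              tortoise = hare;
              power *= 2;
              steps = 0;
          }
          hare = hare->next;
          steps++;
      }
      return hare != NULL;  // hare met tortoise => cycle
  }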
Christopher Haster
c2147c45ee Added --gdb-pl to test.py for breaking on specific powerlosses
This allows debugging strategies such as binary searching for the point
of "failure", which may be more complex than simply failing an assert.
2022-12-17 12:39:42 -06:00
Christopher Haster
801cf278ef Tweaked/fixed a number of small runner things after a bit of use
- Added support for negative numbers in the leb16 encoding with an
  optional 'w' prefix.

- Changed prettyasserts.py rule to .a.c => .c, allowing other .a.c files
  in the future.

- Updated .gitignore with missing generated files (tags, .csv).

- Removed suite-namespacing of test symbols, these are no longer needed.

- Changed test define overrides to have higher priority than explicit
  defines encoded in test ids. So:

    ./runners/bench_runner bench_dir_open:0f1g12gg2b8c8dgg4e0 -DREAD_SIZE=16

  Behaves as expected.

  Otherwise it's not easy to experiment with known failing test cases.

- Fixed issue where the -b flag ignored explicit test/bench ids.
2022-12-17 12:35:44 -06:00
Christopher Haster
1f37eb5563 Adopted --subplot* in plot.py
As well as --legend* and --*ticklabels. Mostly for close feature parity, making
it easier to move plots between plot.py and plotmpl.py.
2022-12-16 16:47:42 -06:00
Christopher Haster
cfd4e6029a Added --subplot* to plotmpl.py
Driven primarily by a want to compare measurements of different runtime
complexities (it's difficult to fit O(n) and O(log n) on the same plot),
this adds the ability to nest subplots in the same .svg which try to align
as much as possible. This turned out to be surprisingly complicated.

As a part of this, adopted matplotlib's relatively recent
constrained_layout, which behaves much more consistently.

Also dropped --legend-left, no one should really be using that.
2022-12-16 16:47:30 -06:00
Christopher Haster
2d2dd8b2eb Added plotmpl.py --github flag to match the website's foreground/background
The difference between ggplot's gray and GitHub's gray was a bit jarring.

This also adds --foreground and --font-color for this sort of additional
color control without needing to add a new flag for every color scheme
out there.
2022-12-11 23:41:36 -06:00
Christopher Haster
b0382fa891 Added BENCH/TEST_PRNG, replacing other ad-hoc sources of randomness
When you add a function to every benchmark suite, you know it should
probably be provided by the benchmark runner itself. That being said,
randomness in tests/benchmarks is a bit tricky because it needs to be
strictly controlled and reproducible.

No global state is used, allowing tests/benches to maintain multiple
randomness streams, which can be useful for checking results during a run.

There's an argument for having global prng state in that the prng could
be preserved across power-loss, but I have yet to see a use for this,
and it would add a significant requirement to any future test/bench runner.
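
Purely as an illustration (not necessarily the generator the runners
actually use), a tiny PRNG with caller-owned, explicitly seeded state is
all that's needed for reproducible, multi-stream randomness:

    #include <stdint.h>

    // xorshift32 with caller-owned state; the seed must be nonzero, and
    // separate states give independent, reproducible streams
    static uint32_t test_prng(uint32_t *state) {
        uint32_t x = *state;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        return *state = x;
    }

A test can then seed, say, one stream for writes and one for
verification with the same value and replay them independently.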
2022-12-06 23:09:07 -06:00
Christopher Haster
d8e7ffb7fd Changed lfs_emubd_get* -> lfs_emubd_*
lfs_emubd_getreaded      -> lfs_emubd_readed
lfs_emubd_getproged      -> lfs_emubd_proged
lfs_emubd_geterased      -> lfs_emubd_erased
lfs_emubd_getwear        -> lfs_emubd_wear
lfs_emubd_getpowercycles -> lfs_emubd_powercycles
2022-12-06 23:09:07 -06:00
Christopher Haster
cda2f6f1da Changed test_runner to run with -Pnone,linear by default
The linear powerloss heuristic provides very good powerloss coverage
without a significant runtime hit, so there's really no reason to run
the tests without -Plinear.

Previous behavior can be accomplished with an explicit -Pnone.
2022-12-06 23:09:07 -06:00
Christopher Haster
9b687dd96a Added make benchmarks/testmarks rules
Mostly for benchmarking, this makes it easy to view and compare runner
results similarly to other csv results.
2022-12-06 23:09:07 -06:00
Christopher Haster
c4b3e9d826 A couple of script changes after CI integration
- Renamed struct_.py -> structs.py again.

- Removed lfs.csv, instead preferring script-specific csv files.

- Added *-diff make rules for quick comparison against a previous
  result, results are now implicitly written on each run.

  For example, `make code` creates lfs.code.csv and prints the summary, which
  can be followed by `make code-diff` to compare changes against the saved
  lfs.code.csv without overwriting.

- Added nargs=? support for -s and -S, now uses a per-result _sort
  attribute to decide sort if fields are unspecified.
2022-12-06 23:09:07 -06:00
Christopher Haster
9990342440 Fixed Clang testing in CI, removed override vars in Makefile
Two flags we introduced, -fcallgraph-info=su for stack analysis and
-ftrack-macro-expansions=0 for cleaner prettyasserts.py warnings, are
unfortunately not supported in Clang.

The override vars in the Makefile meant it wasn't actually possible to
remove these flags for Clang testing, so this commit changes those vars
to normal, non-overriding vars. This means `make CFLAGS=-Werror` and
`CFLAGS=-Werror make` behave _very_ differently, but this is just an
unfortunate quirk of make that needs to be worked around.
2022-12-06 23:09:07 -06:00
Christopher Haster
0c781dd822 Merge remote-tracking branch 'origin/master' into test-and-bench-runners 2022-12-06 23:08:53 -06:00
Christopher Haster
4a209344d4 Fixed bench workflow + changeprefix issue in prefix releases
changeprefix.py only works on prefixes, which is a bit of a problem for
flags in the workflow scripts, requiring extra handling to not hide the prefix
from changeprefix.py
2022-12-06 23:07:28 -06:00
Christopher Haster
a659c02bbd Added a bot-generated PR-comment with a simple status table
The littlefs CI is actually in a nice state that generates a lot of
information about PRs (code/stack/struct changes, line/branch coverage
changes, benchmark changes), but GitHub's UI has changed over time to
make CI statuses harder to find for some reason.

This bot comment should hopefully make this information easy to find
without creating too much noise in the discussion. If not, this can
always be changed later.
2022-12-06 23:07:28 -06:00
Christopher Haster
397aa27181 Removed unnecessarily heavy RAM usage from logs in bench/test.py
For long-running processes (testing with >1pls) these logs can grow into
multiple gigabytes; humorously, we never access more than the last n lines
as requested by --context. Piping the stdout with --stdout does not use
additional RAM.
2022-12-06 23:07:28 -06:00
Christopher Haster
65923cdfb4 Adopted script changes in GitHub Actions
- Moved to Ubuntu 22.04

  This notably means we no longer have to bend over backwards to
  install GCC 10!

- Changed shell in gha to include the verbose/undefined flags, making
  debugging gha a bit less painful

- Adopted the new test.py/test_runners framework, which means no more
  heavy recompilation for different configurations. This reduces the test job
  runtime from >1 hour to ~15 minutes, while increasing the number of
  geometries we are testing.

- Added exhaustive powerloss testing, because of time constraints this
  is at most 1pls for general tests, 2pls for a subset of useful tests.

- Limited coverage measurements to `make test`

  Originally I tried to maximize coverage numbers by including coverage
  from every possible source, including the more elaborate CI jobs which
  provide an extra level of fuzzing.

  But this missed the purpose of coverage measurements, which is to find
  areas where test cases can be improved. We don't want to improve coverage
  by just shoving more fuzz tests into CI, we want to improve coverage by
  adding specific, intentioned test cases, that, if they fail, highlight
  the reason for the failure.

  With this perspective, maximizing coverage measurement in CI is
counter-productive. This change makes it so the reported coverage is
  always less than actual CI coverage, but acts as a more useful metric.

  This also simplifies coverage collection, so that's an extra plus.

- Added benchmarks to CI

  Note this doesn't suffer from inconsistent CPU performance because our
  benchmarks are based on purely simulated read/prog/erase measurements.

- Updated the generated markdown table to include line+branch coverage
  info and benchmark results.
2022-12-06 23:07:21 -06:00
Christopher Haster
387cf6f6e0 Fixed a couple corner cases in scripts when fields are empty
- Fixed added/removed count in scripts when an entry has no field in
  the expected results

- Fixed a python-sort-type issue when by-field is missing in a result
2022-11-28 12:51:18 -06:00
Christopher Haster
0b11ce03b7 Fixed incorrect calculation of extra space needed in mdir blocks
Despite the comment being correct, the calculation is somehow off by a word,
meaning something must have been missed. Maybe the space for the move-delete
was missed since that was added later to avoid losing move-deletes during
relocations.

This was found with the new exhaustive power-loss searching added to the
test framework with -P2. The exact failure was
test_dirs_many_reentrant:2gg2cb:k4o6. This must be the first test that
ends up with all possible extra state in a single mdir block.
2022-11-28 12:51:18 -06:00
Christopher Haster
eba5553314 Fixed hidden orphans by separating deorphan search into two passes
This happens in rare situations where there is a failed mdir relocation,
interrupted by a power-loss, containing the destination of a directory
rename operation, where the directory being renamed preceded the
relocating mdir in the mdir tail-list. This requires a previous directory
rename to have created a cycle at some point.

If this happens, it's possible for the half-orphan to contain the only
reference to the renamed directory. Since half-orphans contain outdated
state when viewed through the mdir tail-list, the renamed directory
appears to be a full-orphan until we fix the relocating half-orphan.
This causes littlefs to incorrectly remove the renamed directory from
the mdir tail-list, causing catastrophic problems down the line.

The source of the problem is that the two different types of orphans
really operate on two different levels of abstraction: half-orphans fix
failed mdir commits, while full-orphans fix directory removes/renames.
Conflating the two leads to situations where we attempt to fix assumed
problems about the directory tree before we have fixed problems with the
mdir state.

The fix here is to separate out the deorphan search into two passes: one
to fix half-orphans and correct any mdir-commits, restoring the mdirs
and gstate to a known good state, and a second pass to fix failed
removes/renames.

---

This was found with the -Plinear heuristic powerloss testing, which now
runs on more geometries. The failing case was:

  test_relocations_reentrant_renames:112gg261dk1e3f3:123456789abcdefg1h1i1j1k1
  l1m1n1o1p1q1r1s1t1u1v1g2h2i2j2k2l2m2n2o2p2q2r2s2t2

Also fixed/tweaked some parts of the test framework as a part of finding
this bug:

- Fixed off-by-one in exhaustive powerloss state encoding.

- Added --gdb-powerloss-before and --gdb-powerloss-after to help debug
  state changes through a failing powerloss, maybe this should be
  expanded to any arbitrary powerloss number in the future.

- Added lfs_emubd_crc and lfs_emubd_bdcrc to get block/bd crcs for quick
  state comparisons while debugging.

- Fixed bd read/prog/erase counts not being copied during exhaustive
  powerloss testing.

- Fixed small typo in lfs_emubd trace.
2022-11-28 12:51:18 -06:00
Christopher Haster
f89d758444 Fixed test out-of-space issues with powerloss testing
These are just incorrect limits in the tests that can be triggered by
powerloss testing, which can end up with more metadata-pairs than
without powerloss testing due to orphans.
2022-11-28 12:51:18 -06:00
Christopher Haster
6c18b4dfb6 Added a simple help rule to the Makefile
To run:

$ make help
2022-11-17 10:36:20 -06:00
Christopher Haster
f73494151a Changed default build target lfs.a -> liblfs.a
This is the name expected if you are actually linking against littlefs.

The use as a default build rule is mostly for linting. Most uses of
littlefs likely compile directly with the sources (it is only several K
of code), or use their own build system, and the previous name would have made
linking a bit of a challenge.

Still, this might cause some breakage for someone...
2022-11-17 10:27:00 -06:00
Christopher Haster
bcc88f52f4 A couple Makefile-related tweaks
- Changed --(tool)-tool to --(tool)-path in scripts, this seems to be
  a more common name for this sort of flag.

- Changed BUILDDIR to not have implicit slash, makes Makefile internals
  a bit more readable.

- Fixed some outdated names hidden in less-often used ifdefs.
2022-11-17 10:26:26 -06:00
Christopher Haster
e35e078943 Renamed prefix.py -> changeprefix.py and updated to use argparse
Added a couple flags to make the script a bit more flexible, and removed
the littlefs-specific defaults in line with the other scripts, which aren't
really littlefs-specific. (These defaults can be moved to the
littlefs-specific Makefile easily enough.)

The original behavior can be reproduced like so:
./script/changeprefix.py lfs lfs2 --git
2022-11-16 10:46:26 -06:00
Christopher Haster
1a07c2ce0d A number of small script fixes/tweaks from usage
- Fixed prettyasserts.py parsing when '->' is in expr

- Made prettyasserts.py failures not crash (yay dynamic typing)

- Fixed the initial state of the emubd disk file to match the internal
  state in RAM

- Fixed true/false getting changed to True/False in test.py/bench.py
  defines

- Fixed accidental substring matching in plot.py's --by comparison

- Fixed an LFS_BLOCk_CYCLES in test_superblocks.toml that was
  missed

- Changed test.py/bench.py -v to only show commands being run

  Including the test output is still possible with test.py -v -O-, making
  the implicit inclusion redundant and noisy.

- Added license comments to bench_runner/test_runner
2022-11-15 13:42:07 -06:00
Christopher Haster
6fce9e5156 Changed plotmpl.py/plot.py to not treat missing values as discontinuities 2022-11-15 13:38:13 -06:00
Christopher Haster
559e174660 Added plotmpl.py for creating svg/png plots with matplotlib
Note that plotmpl.py tries to share many arguments with plot.py,
allowing plot.py to act as a sort of draft mode for previewing plots
before creating an svg.
2022-11-15 13:38:13 -06:00
Christopher Haster
b2a2cc9a19 Added teepipe.py and watch.py 2022-11-15 13:38:13 -06:00
Christopher Haster
3a33c3795b Added perfbd.py and block device performance sampling in bench-runner
Based loosely on Linux's perf tool, perfbd.py uses trace output with
backtraces to aggregate and show the block device usage of all functions
in a program, propagating block device operation cost up the backtrace
for each operation.

This, combined with --trace-period and --trace-freq for
sampling/filtering trace events, allows the bench-runner to very
efficiently record the general cost of block device operations with very
little overhead.

Adopted this as the default side-effect of make bench, replacing
cycle-based performance measurements which are less important for
littlefs.
2022-11-15 13:38:13 -06:00
Christopher Haster
29cbafeb67 Renamed coverage.py -> cov.py 2022-11-15 13:38:13 -06:00
Christopher Haster
df283aeb48 Added recursive results to perf.py
This adds -P/--propagate and -Z/--depth to perf.py for showing recursive
results, making it easy to narrow down where spikes in performance
come from.

This ended up being a bit different from stack.py's recursive results,
as we end up with different (diminishing) numbers as we descend.
2022-11-15 13:38:13 -06:00
Christopher Haster
490e1c4616 Added perf.py a wrapper around Linux's perf tool for perf sampling
This provides 2 things:

1. perf integration with the bench/test runners - This is a bit tricky
   with perf as it doesn't have its own way to combine perf measurements
   across multiple processes. perf.py works around this by writing
   everything to a zip file, using flock to synchronize. As a plus, free
   compression!

2. Parsing and presentation of perf results in a format consistent with
   the other CSV-based tools. This actually ran into a surprising number of
   issues:

   - We need to process raw events to get the information we want; this
     ends up being a lot of data (~16MiB at 100Hz uncompressed), so we
     parallelize the parsing of each decompressed perf file.

   - perf reports raw addresses post-ASLR. It does provide sym+off which
     is very useful, but to find the source of static functions we need to
     reverse the ASLR by finding the delta that produces the best
     symbol<->addr matches.

   - This isn't related to perf, but decoding dwarf line-numbers is
     really complicated. You basically need to write a tiny VM.

This also turns on perf measurement by default for the bench-runner, but at a
low frequency (100 Hz). This can be decreased or removed in the future
if it causes any slowdown.
2022-11-15 13:38:13 -06:00
Christopher Haster
ca66993812 Tweaked scripts to share more code, added coverage calls/hits
The main change is requiring field names for -b/-f/-s/-S; this
is a bit more powerful, and supports hidden extra fields, but
can require a bit more typing in some cases.
2022-11-15 13:38:13 -06:00
Christopher Haster
296c5afea7 Renamed bench_read/prog/erased -> bench_readed/proged/erased
Yes this isn't really correct English anymore, but these names avoid the
read/read ambiguity.
2022-11-15 13:38:13 -06:00
Christopher Haster
274222b518 Added some automatic sizing for field-names in scripts/runners 2022-11-15 13:38:13 -06:00
Christopher Haster
a2fb7089dd Added stddev/gmean/gstddev to summary.py 2022-11-15 13:38:13 -06:00
Christopher Haster
9507e6243c Several tweaks to script flags
- Changed multi-field flags to action=append instead of comma-separated.
- Dropped short-names for geometries/powerlosses
- Renamed -Pexponential -> -Plog
- Allowed omitting the 0 for -W0/-H0/-n0 and made -j0 consistent
- Better handling of --xlim/--ylim
2022-11-15 13:38:13 -06:00
Christopher Haster
42d889e141 Reworked/simplified tracebd.py a bit
Instead of trying to align to block boundaries, tracebd.py now just
aliases to whatever dimensions are provided.

Also reworked how scripts handle default sizing. Now using reasonable
defaults with 0 being a placeholder for automatic sizing. The addition
of -z/--cat makes it possible to pipe directly to stdout.

Also added support for dots/braille output which can capture more
detail, though care needs to be taken to not rely on accurate coloring.
2022-11-15 13:38:13 -06:00
Christopher Haster
fb58148df2 Consistent handling of by/field arguments for plot.py and summary.py
Now both scripts also fall back to guessing what fields to use based on
what fields can be converted to integers. This is more fallible, and
doesn't work for tests/benchmarks, but in those cases explicit fields
can be used (which is what would be needed without guessing anyways).
2022-11-15 13:38:13 -06:00
Christopher Haster
7591d9cf74 Added plot.py for in-terminal plotting 2022-11-15 13:38:05 -06:00
Christopher Haster
9a0e3be84e Added a quick trie to avoid running redundant test/bench permutations
Without this, redundant permutations can easily happen with runtime
overrides because the different define layers aren't aware of each
other. This causes problems for collecting benchmark results.
2022-11-15 13:33:40 -06:00
Christopher Haster
4fe0738ff4 Added bench.py and bench_runner.c for benchmarking
These are really just different flavors of test.py and test_runner.c
without support for power-loss testing, but with support for measuring
the cumulative number of bytes read, programmed, and erased.

Note that the existing define parameterization should work perfectly
fine for running benchmarks across various dimensions:

./scripts/bench.py \
    runners/bench_runner \
    bench_file_read \
    -gnor \
    -DSIZE='range(0,131072,1024)'

Also added a couple basic benchmarks as a starting point.
2022-11-15 13:33:34 -06:00
Christopher Haster
20ec0be875 Cleaned up a number of small tweaks in the scripts
- Added the littlefs license note to the scripts.

- Adopted parse_intermixed_args everywhere for more consistent arg
  handling.

- Removed argparse's implicit help text formatting as it does not
  work with parse_intermixed_args and breaks sometimes.

- Used string concatenation for argparse everywhere; using backslashed
  line continuations only works with argparse because it strips
  redundant whitespace.

- Consistent argparse formatting.

- Consistent openio mode handling.

- Consistent color argument handling.

- Adopted functools.lru_cache in tracebd.py.

- Moved unicode printing behind --subscripts in tracebd.py, making all
  scripts ASCII by default.

- Renamed pretty_asserts.py -> prettyasserts.py.

- Renamed struct.py -> struct_.py, the original name conflicts with
  Python's built in struct module in horrible ways.
2022-11-15 13:31:11 -06:00
Christopher Haster
6a53d76e90 Merge pull request #744 from littlefs-project/fix-fetchmatch-err-path
Fix lfs_dir_fetchmatch not propagating bd errors correctly in one case
2022-11-10 10:32:30 -06:00
Christopher Haster
70298ee988 Merge pull request #742 from carlescufi/fix-be-le-conversions
lfs_util: Fix endianness conversion when `LFS_NO_INTRINSICS` is set
2022-11-10 10:32:10 -06:00
Christopher Haster
dfa8abdd2c Merge pull request #740 from cbiffle/fix-invalid-block-size-reporting
Fix invalid block size reporting.
2022-11-10 10:31:58 -06:00
Christopher Haster
5659a38c2f Merge pull request #726 from Xenoamor/patch-1
Improve lfs_file_close usage description
2022-11-10 10:31:50 -06:00
Christopher Haster
d8c96abf92 Merge pull request #724 from littlefs-project/clang-lint
Fix self-assign warnings discovered by clang, remove some warning flags
2022-11-10 10:31:42 -06:00
Christopher Haster
007be6fd11 Merge pull request #715 from Mixaill/patch-1
lfs_filebd_sync: fix compilation on Windows
2022-11-10 10:31:31 -06:00
Christopher Haster
e683322af3 Merge pull request #709 from BRTSG-FOSS/hotfix/tests-buffer-overflow
Fix buffer overflow in tests when using a large block size
2022-11-10 10:31:21 -06:00
Christopher Haster
5eb4ea808c Merge pull request #675 from kevinior/lfs_file_rawopen_nomalloc
Fix unused function warning with LFS_NO_MALLOC
2022-11-10 10:31:04 -06:00
Christopher Haster
4a927402a8 Merge pull request #673 from monowii/patch-1
Fix readme Mbed link
2022-11-10 10:30:50 -06:00
monowii
740d9ac4cc Fix readme Mbed link 2022-11-09 11:12:20 -06:00
Christopher Haster
d08f949afd Fixed lfs_dir_fetchmatch not propagating bd errors correctly in one case
Found by cbiffle
2022-11-04 13:45:13 -05:00
Carles Cufi
9e965a8563 lfs_util: Fix endianness conversion when LFS_NO_INTRINSICS is set
The logic for endianness conversion was wrong when LFS_NO_INTRINSICS was
set, since on an endianness match a check of that macro would prevent the
unchanged value from being returned.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2022-10-27 11:33:40 +02:00
Cliff L. Biffle
eb9f4d5d7e Fix invalid block size reporting.
This boilerplate got copied from the stanza just above and incompletely
edited.
2022-10-09 17:45:14 -07:00
Christopher Haster
11d6d1251e Dropped namespacing of test cases
The main benefit is small test ids everywhere, though this comes with the
downside of needing longer names to properly prefix and avoid
collisions. But this fits into the rest of the scripts with globally
unique names a bit better. This is a C project after all.

The other small benefit is test generators may have an easier time since
per-case symbols can be expected to be unique.
2022-09-17 03:03:39 -05:00
Christopher Haster
1fcd82d5d8 Made test.py output parsable by summary.py
Also fixed an issue with truncation that resulted in a bunch of null
bytes being injected into the CSV output.
2022-09-17 03:02:43 -05:00
Christopher Haster
acdea1880e Made summary.py more powerful, dropped -m from size scripts
With more scripts generating CSV files this moves most CSV manipulation
into summary.py, which can now handle more or less any arbitrary CSV
file with arbitrary names and fields.

This also includes a bunch of additional, probably unnecessary, tweaks:

- summary.py/coverage.py use a custom fractional type for encoding
  fractions; this will also be used for test counts.

- Added a smaller diff output for size scripts with the --percent flag.

- Added line and hit info to coverage.py's CSV files.

- Added --tree flag to stack.py to show only the call tree without
  other noise.

- Renamed structs.py to struct.py.

- Changed a few flags around for consistency between size/summary scripts.

- Added `make sizes` alias.

- Added `make lfs.code.csv` rules
2022-09-16 03:32:10 -05:00
Xenoamor
a25681b2a6 Improve lfs_file_close usage description
Improve the lfs_file_close usage description to make it clearer that the configuration structure must remain valid for its lifetime

In reference to #722
2022-09-12 12:29:06 -05:00
Christopher Haster
23fba40f20 Added option for updating a CSV file with test results
This is mostly for the bench runner which will contain more interesting
results besides just pass/fail.
2022-09-12 12:17:46 -05:00
Christopher Haster
03c1a4ee2e Added permutations and ranges to test defines
This is really more work for the bench runner. With this change defines
can be manipulated at a rather high level at runtime, which should be
useful for generating benchmarks across various dimensions.

The define grammar in the test_runner is now a bit more powerful,
accepting:

1. A single value: -DN=42
2. A list of values, which get permuted: -DN=1,2,3
3. A range: -DN=range(10)
4. Some combo: -DN=1,2,range(3,0,-1)

This is more complex in the test .toml defines, which can also be C
expressions:

1. A single value: define=42
2. A single expression: define='42*42'
3. A list: define=[1,2,3]
4. A comma separated string: define='1,2,3'
5. A range: define='42*range(10)'
6. This mess: define=[1,2,'3,4,range(2)*range(2)+3']
2022-09-11 21:47:14 -05:00
Christopher Haster
bfbe44e70d Dropped permutation number for full leb16-encoded defines
This is probably how the test runner should have been implemented in the
first place, but it took a few tries to get here.

This makes it so the test identifier, which is a bit longer now, fully
encodes the state of the defines in the test. This removes the need for
the extra geometry field and allows reproduction of tests with custom
defines at runtime.

The test runner may have already seemed like a solved problem, but these
changes are really to enable repurposing the test runner as a bench
runner.
2022-09-10 15:19:34 -05:00
Christopher Haster
5a2ff178e0 Changed test identifier separator # -> :
Compare:
- test_dirs#reentrant_many_dir#1#ggg1ggg8#123456789abcdef
- test_dirs:reentrant_many_dir:1:ggg1ggg8:123456789abcdef
2022-09-09 23:15:16 -05:00
Christopher Haster
c7f7094a06 Several tweaks to test.py and test runner
These are just some minor quality of life improvements

- Added a "make build-test" alias
- Made test runner a positional arg for test.py since it is almost
  always required. This shortens the command line invocation most of the
  time.
- Added --context to test.py
- Renamed --output in test.py to --stdout, note this still merges
  stderr. Maybe at some point these should be split, but it's not really
  worth it for now.
- Reworked the test_id parsing code a bit.
- Changed the test runner --step to take a range such as -s0,12,2
- Changed tracebd.py --block and --off to take ranges
2022-09-08 19:54:07 -05:00
Christopher Haster
47914b925f Fixed self-assign warnings discovered by clang 2022-09-07 12:46:29 -05:00
Christopher Haster
30175de384 Remove -Wshadow -Wjump-misses-init -Wundef
Doing this now specifically because clang does not have
-Wjump-misses-init, but I've been looking for an excuse to remove these
for a while.

These warning flags create more annoyance than they add value. There is
probably a reason they aren't included in -Wall + -Wextra.

-Wshadow specifically is potentially harmful as it forces coming up with
new, sometimes less descriptive names for repeated variables.

Dependent projects should use different flags for their dependencies if
this introduces problems.
2022-09-07 12:38:04 -05:00
Christopher Haster
23747628d5 Added clang build step to CI
As found by dpgeorge, clang has slightly different warnings than GCC.
There's really no cost to running clang as an extra build step to test
for these.
2022-09-07 12:34:52 -05:00
Christopher Haster
a208d848e5 Reworked test defines a bit to use one common array layout
Previously I didn't think this would work without making test.py aware of
the number of implicit defines, which risks being incredibly fragile.
Fortunately it turns out we can defer the actual array size calculation
until the C preprocessor. This simplifies a few things.

Also added a bitmap-based caching layer for the defines. Since the test
defines have been upgraded to callbacks, recursive defines risk spending
a decent amount of time evaluating on every lookup. Some quick testing
shows 408015154 hits to 46160 misses, so that's a good sign.

Also changed the geometries to be their own leb16-encoded part of the
test identifier. This means any geometry can be captured and reproduced
with just the test identifier. Here are the current test geometries:

./runners/test_runner --list-geometries
geometry                    read    prog   erase   count        size  leb16
d,default                     16      16     512    2048     1048576  g1gg2
e,eeprom                       1       1     512    2048     1048576  1gg2
E,emmc                       512     512     512    2048     1048576  gg2
n,nor                          1       1    4096     256     1048576  1ggg1
N,nand                      4096    4096   32768      32     1048576  ggg1ggg8
2022-09-07 01:52:53 -05:00
Christopher Haster
91200e6678 Added tracebd.py, a script for rendering block device operations
Based on a handful of local hacky variations, this sort of trace
rendering is surprisingly useful for getting an understanding of how
different filesystem operations interact with the underlying
block-device.

At some point it would probably be good to reimplement this in a
compiled language. Parsing and tracking the trace output quickly
becomes a bottleneck with the amount of trace output the tests
generate.

Note also that since tracebd.py runs on trace output, it can also be
used to debug logged block-device operations post-run.
2022-09-07 01:52:53 -05:00
Christopher Haster
c9a6e3a95b Added tailpipe.py and improved redirecting test trace/log output over fifos
This mostly involved futzing around with some of the less intuitive
parts of Unix's named-pipes behavior.

This is a bit important since the tests can quickly generate several
gigabytes of trace output.
2022-09-07 01:52:49 -05:00
Christopher Haster
5279fc6022 Implemented exhaustive testing of n nested powerlosses
As expected this takes a significant amount of time (~10 minutes for all
1 powerlosses, >10 hours for all 2 powerlosses) but this may be reducible in
the future by optimizing tests for powerloss testing. Currently
test_files does a lot of work that doesn't really have testing value.
2022-08-25 11:35:52 -05:00
Christopher Haster
552336eba9 Added optional read/prog/erase delays to testbd
These have no real purpose other than slowing down the simulation
for inspection/fun.

Note this did reveal an issue in pretty_asserts.py which was clobbering
feature macros. Added explicit, and maybe a bit hacky, #undef _FEATURE_H
to avoid this.
2022-08-24 09:38:23 -05:00
Christopher Haster
3f4f85986e Readded support for mirror writes to a file in testbd
Before this was available implicitly by supporting both rambd and filebd
as backends, but now that testbd is a bit more complicated and no longer
maps directly to a block-device, this needs to be explicitly supported.
2022-08-23 19:21:38 -05:00
Christopher Haster
4689678208 Added --color to test.py, fixed some terminal-clobbering issues
With more features being added to test.py, the one-line status is
starting to get quite long and pass the ~80 column readability
heuristic. To make this worse this clobbers the terminal output
when the terminal is not wide enough.

Simple solution is to disable line-wrapping, potentially printing
some garbage if line-wrapping-disable is not supported, but also
printing a final status update to fix any garbage and avoid a race
condition where the script would show a non-final status.

Also added --color which disables any of this attempting-to-be-clever
stuff.
2022-08-23 19:21:38 -05:00
Christopher Haster
61455b6191 Added back heuristic-based power-loss testing
The main changes here from the previous test framework design are:

1. Powerloss testing remains in-process, speeding up testing.

2. The state of a test, including all powerlosses, is encoded in the
   test id + leb16 encoded powerloss string. This means exhaustive
   testing can be run in CI, but then easily reproduced locally with
   full debugger support.

   For example:

   ./scripts/test.py test_dirs#reentrant_many_dir#10#1248g1g2 --gdb

   Will run the test test_dir, case reentrant_many_dir, permutation #10,
   with powerlosses at 1, 2, 4, 8, 16, and 32 cycles. Dropping into gdb
   if an assert fails.

The changes to the block-device are a work-in-progress for a
lazily-allocated/copy-on-write block device that I'm hoping will keep
exhaustive testing relatively low-cost.
2022-08-23 19:12:22 -05:00
Christopher Haster
01b11da31b Added a simple test that the block device works
On one hand this seems like the wrong place for these tests, on the
other hand, it's good to know that the block device is behaving as
expected when debugging the filesystem.

Maybe this should be moved to an external program for users to test
their block devices in the future?
2022-08-17 12:29:11 -05:00
Christopher Haster
a368d3a07c Moved emulation of erase values up into lfs_testbd
Yes this is more expensive, since small programs need to rewrite the
whole block in order to conform to the block device API. However, it
reduces code duplication and keeps all of the test-related block device
emulation in lfs_testbd.

Some people have used lfs_filebd/lfs_rambd as a starting point for new block
devices and I think it should be clear that erase does not need to have side
effects. Though to be fair this also just means we should have more
examples of block devices...
2022-08-17 11:50:45 -05:00
Christopher Haster
b08463f8de Reworked scripts/pretty_asserts.py a bit
- Renamed explode_asserts.py -> pretty_asserts.py, this name is
  hopefully a bit more descriptive
- Small cleanup of the parser rules
- Added recognition of memcmp/strcmp => 0 statements to generate
  the relevant memory inspecting assert messages

I attempted to fix the incorrect column numbers for the generated
asserts, but unfortunately this didn't go anywhere and I don't think
it's actually possible.

There is no column control analogous to the #line directive. I thought
you might be able to intermix #line directives to put arguments at the
right column like so:

    assert(a == b);

    __PRETTY_ASSERT_INT_EQ(
    #line 1
           a,
    #line 1
                b);

But this doesn't work as preprocessor directives are not allowed in
macro arguments in standard C. Unfortunately this is probably not
possible to fix without better support in the language.
2022-08-16 11:41:46 -05:00
Christopher Haster
92eee8e6cd Removed some prefixes from Makefile variables where not necessary
Also renamed GCI -> CI; this holds .ci files, though there is a risk
of confusion with continuous integration.

Also added unused but generated .ci files to the clean rule.
2022-08-15 12:13:00 -05:00
yomimono
d9333ecbd4 Add "chamelon" to the related projects section.
"chamelon" implements a subset of littlefs (no global move state or
singly-linked list threaded through the directory tree) for use in the
MirageOS library operating system project. It is written entirely in
OCaml and is interoperable (with the above caveats) with the reference
implementation via FUSE.
2022-08-02 11:53:22 -05:00
Mikhail Paulyshka
a405c3293f lfs_filebd_sync: fix compilation on Windows 2022-07-27 17:06:51 +03:00
Jan Boon
9af63b3844 Fix buffer overflow in tests when using a large block size 2022-07-09 17:19:07 +08:00
Christopher Haster
46cc6d4450 Added support for annotated source in coverage.py
On one hand this isn't very different from the source annotation in
gcov; on the other hand I find it a bit more readable after a bit of
experimentation.
2022-06-06 01:35:16 -05:00
Christopher Haster
5b0a6d4747 Reworked scripts to move field details into classes
These scripts can't easily share the common logic, but separating
field details from the print/merge/csv logic should make the common
part of these scripts much easier to create/modify going forward.

This also tweaked the behavior of summary.py slightly.
2022-06-06 01:35:16 -05:00
Christopher Haster
4a7e94fb15 Reimplemented coverage.py, using only gcov and with line+branch coverage
This also adds coverage support to the new test framework, which, due to
its reduced scope, no longer needs aggregation and can be much
simpler. Really all we need to do is pass --coverage to GCC, which
builds its .gcda files during testing in a multi-process-safe manner.

The addition of branch coverage leverages information that was available
in both lcov and gcov.

This was made easier with the addition of the --json-format to gcov
in GCC 9.0, however the lax backwards compatibility for gcov's
intermediary options is a bit concerning. Hopefully --json-format
sticks around for a while.
2022-06-06 01:35:14 -05:00
Christopher Haster
2b11f2b426 Tweaked generation of .cgi files, error code for recursion in stack.py
GCC is a bit annoying here: it can't generate .cgi files without
generating the related .o files, though I suppose the alternative risks
duplicating a large amount of compilation work (littlefs is really
a small project).

Previously we rebuilt the .o files anytime we needed .cgi files
(callgraph info used for stack.py). This changes it so we always
build .cgi files as a side-effect of compilation. This is similar
to the .d file generation, though may be annoying if the system
cc doesn't support --callgraph-info.
2022-06-06 01:35:12 -05:00
Christopher Haster
1616115662 Fix test.py hang on ctrl-C, cleanup TODOs
A small mistake in test.py's control flow meant the failing test job
would successfully kill all other test jobs, but then humorously start
up a new process to continue testing.
2022-06-06 01:35:09 -05:00
Christopher Haster
4a42326797 Moved test suites into custom linker section
This simplifies the interaction between code generation and the
test-runner.

In theory it also reduces compilation dependencies, but internal tests
make this difficult.
2022-06-06 01:35:07 -05:00
Christopher Haster
0781f50edb Ported tests to new framework
This mostly required names for each test case, declarations of
previously-implicit variables since the new test framework is more
conservative with what it declares (the small extra effort to add
declarations is well worth the simplicity and improved readability),
and tweaks to work with not-really-constant defines.

Also renamed test_ -> test, replacing the old ./scripts/test.py;
unfortunately git seems to have had a hard time with this.
2022-06-06 01:35:03 -05:00
Christopher Haster
d679fbb389 In ./scripts/test.py, readded external commands, tweaked subprocesses
- Added --exec for wrapping the test-runner with external commands, such as
  Qemu or Valgrind.

- Added --valgrind, which just aliases --exec=valgrind with a few extra
  flags useful during testing.

- Dropped the "valgrind" type for tests. These aren't separate tests
  that run in the test-runner, and I don't see a need for disabling
  Valgrind for any tests. This can be added back later if needed.

- Readded support for dropping directly into gdb after a test failure,
  either at the assert failure, entry point of test case, or entry point
  of the test runner with --gdb, --gdb-case, or --gdb-main.

- Added --isolate for running each test permutation in its own process,
  this is required for associating Valgrind errors with the right test
  case.

- Fixed an issue where explicit test identifier conflicted with
  per-stage test identifiers generated as a part of --by-suite and
  --by-case.
2022-06-06 01:35:03 -05:00
Christopher Haster
5a572ced3c Reworked how test defines are implemented to support recursion
Previously test defines were implemented using layers of index-mapped
uintmax_t arrays. This worked well for lookup, but limited defines to
constants computed at compile-time. Since test defines themselves are
actually calculated at _run-time_ (yeah, they have deviated quite
a bit from the original, compile-time evaluated defines, which makes
the name make less sense), this meant defines couldn't depend on other
defines, which was limiting since a lot of test defines relied on
defines generated from the geometry being tested.

This new implementation uses callbacks for the per-case defines. This
means they can easily contain full C statements, which can depend on
other test defines. This does mean you can create infinitely-recursive
defines, but the test-runner will just break at run-time, so don't do that.
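
A minimal sketch of the shape this takes (hypothetical names, not the
real test-runner API): each define becomes a callback evaluated at
run-time, so it can freely reference other defines:

    #include <stddef.h>
    #include <stdint.h>

    // hypothetical: run-time lookup of another define by index
    intmax_t test_define(size_t define);

    enum { BLOCK_SIZE_i, BLOCK_COUNT_i };

    // a per-case define as a callback; a full C expression that can
    // depend on other (possibly overridden) defines
    static intmax_t block_count_cb(void) {
        return (1024*1024) / test_define(BLOCK_SIZE_i);
    }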

One concern is that there might be a performance hit for evaluating all
defines through callbacks, but if there is it is well below the noise
floor:

- constants: 43.55s
- callbacks: 42.05s
2022-06-06 01:35:03 -05:00
Christopher Haster
be0e6ad5eb More progress toward test-runner feature parity
- Added internal tests, which can run tests inside other source files,
  allowing access to "private" functions and data

  Note this required a special bit of handling: defining and later
  undefining test configurations to not pollute the namespace of the
  source file, since it can end up with test cases from different
  suites/configuration namespaces.

- Removed unnecessary/unused permutation argument to generated test
  functions.

- Some cleanup to progress output of test.py.
2022-06-06 01:35:01 -05:00
Christopher Haster
4962829017 Continued progress toward feature parity with new test-runner
- Expanded test defines to allow for lists of configurations

  These are useful for changing multi-dimensional test configurations
  without leading to extremely large and less useful configuration
  combinations.

- Made warnings more visible during test parsing

- Add lfs_testbd.h to implicit test includes

- Fixed issue with not closing files in ./scripts/explode_asserts.py

- Add `make test_runner` and `make test_list` build rules for
  convenience
2022-06-06 01:35:00 -05:00
Christopher Haster
5ee4b052ae Misc test-runner improvements
- Added --disk/--trace/--output options for information-heavy debugging

- Renamed --skip/--count/--every to --start/--stop/--step.

  This matches common terms for ranges, and frees --skip for being used
  to skip test cases in the future.

- Better handling of SIGTERM: now all tests are killed, reported as
  failures, and testing is halted regardless of -k.

  This is a compromise: you throw away the rest of the tests, which
  is normally what -k is for, but it prevents annoying-to-terminate
  processes when debugging, which is a very interactive process.
2022-06-06 01:35:00 -05:00
Christopher Haster
5812d2b5cf Reworked how multi-layered defines work in the test-runner
In the test-runner, defines are parameterized constants (limited
to integers) that are generated from the test suite tomls resulting
in many permutations of each test.

In order to make this efficient, these defines are implemented as
multi-layered lookup tables, using per-layer/per-scope indirect
mappings. This lets the test-runner and test suites define their
own defines with compile-time indexes independently. It also makes
building of the lookup tables very efficient, since they can be
incrementally populated as we expand the test permutations.

The four current define layers and when we need to build them:

layer                           defines         predefine_map   define_map
user-provided overrides         per-run         per-run         per-suite
per-permutation defines         per-perm        per-case        per-perm
per-geometry defines            per-perm        compile-time    -
default defines                 compile-time    compile-time    -
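
A simplified sketch of the lookup side (hypothetical types, not the
actual implementation): each layer holds an indirect map from define
index to a value slot, and the first (highest) layer that defines the
index wins:

    #include <stddef.h>
    #include <stdint.h>

    // hypothetical per-layer indirect mapping: map[define] is a slot into
    // values, or -1 if this layer doesn't define it
    struct define_layer {
        const intptr_t *map;
        const intmax_t *values;
    };

    // layers ordered overrides -> per-perm -> per-geometry -> defaults
    static intmax_t define_lookup(
            const struct define_layer *layers, size_t nlayers, size_t define) {
        for (size_t l = 0; l < nlayers; l++) {
            intptr_t slot = layers[l].map[define];
            if (slot >= 0) {
                return layers[l].values[slot];  // first matching layer wins
            }
        }
        return 0;  // fall through to a compile-time default (illustrative)
    }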
2022-06-06 01:35:00 -05:00
Christopher Haster
64436933e2 Putting together rewritten test.py script 2022-06-06 01:34:57 -05:00
Kevin ORourke
6c720dc2bb Fix unused function warning with LFS_NO_MALLOC 2022-04-25 12:12:41 +02:00
Christopher Haster
92a600a980 Added trace and persist flags to test_runner 2022-04-19 02:12:24 -05:00
Christopher Haster
9281ce26a7 More test_runner progress
- Added filtering based on suite, case, perm, type, geometry
- Added --skip, --count, and --every (will be used for parallelism)
- Implemented --list-defines
- Better helptext for flags with arguments
- Other minor tweaks
2022-04-18 15:15:57 -05:00
Christopher Haster
4b0aa6272e Some more minor improvements to the test_runner
- Indirect index map instead of bitmap+sparse array
- test_define_t and test_type_t
- Added back conditional filtering
- Added suite-level defines and filtering
2022-04-18 00:09:01 -05:00
Christopher Haster
d683f1c76c Reintroduced test-defines into the new test_runner
This moves defines entirely into the runtime of the test_runner,
simplifying things and reducing the amount of generated code that needs
to be built, at the cost of limiting test-defines to uintmax_t types.

This is implemented using a set of index-based scopes (created by
test.py) that allow different layers to override defines from other
layers, accessible through the global `test_define` function.

layers:
1. command-line overrides
2. per-case defines
3. per-geometry defines
2022-04-17 21:45:47 -05:00
Christopher Haster
56a990336b Created new test_runner.c and test_.py
This is to try a different design for testing. The goals are to make the
test infrastructure a bit simpler, with clear stages for building and
running, and faster, by avoiding rebuilding lfs.c n times.
2022-04-16 13:50:34 -05:00
Christopher Haster
40dba4a556 Merge pull request #669 from littlefs-project/devel
Minor release: v2.5
2022-04-13 22:49:41 -05:00
Christopher Haster
148e312ea3 Bumped minor version to v2.5 2022-04-13 22:47:43 -05:00
Christopher Haster
abbfe8e92e Reduced lfs_dir_traverse's explicit stack to 3 frames
This is possible thanks to invoxiaamo's optimization of compacting
renames to avoid the O(n^3) nested filters. Not only does this
significantly reduce the runtime cost of that operation, but it
reduces the maximum possible depth of recursion to 3 frames.

Deepest lfs_dir_traverse before:

traverse with commit
'-> traverse with filter
    '-> traverse with move
        '-> traverse with filter

Deepest lfs_dir_traverse after:

traverse with commit
'-> traverse with move
    '-> traverse with filter
2022-04-10 23:27:49 -05:00
Christopher Haster
c60c977c25 Merge pull request #658 from littlefs-project/no-recursion
Restructure littlefs to not use recursion, measure stack usage
2022-04-10 23:23:39 -05:00
Christopher Haster
3ce64d1ac0 Merge pull request #666 from invoxiaamo/rename-opti2
Optimization of the rename case.
2022-04-10 22:02:04 -05:00
Christopher Haster
0ced3623d4 Merge pull request #657 from littlefs-project/copyright-update
Update copyright notice
2022-04-10 21:59:27 -05:00
Christopher Haster
5451a6d503 Merge pull request #643 from microist/fix-filebd-windows
Fixes to use lfs_filebd on windows platforms
2022-04-10 21:56:08 -05:00
Martin Hoffmann
1e038c81fc Fixes to use lfs_filebd on windows platforms
There are two issues when using the file-based block device emulation
on Windows platforms:
1. There is no fsync implementation available. This needs to be mapped
   to a Windows-specific FlushFileBuffers system call.
2. The block device file needs to be opened as a binary file (O_BINARY).
   The corresponding flag is not required on Linux.
2022-04-10 21:55:00 -05:00
Christopher Haster
f28ac3ea7d Merge pull request #638 from lmapii/master
Removed invalid overwrite for return value.
2022-04-10 21:52:48 -05:00
Christopher Haster
a94fbda1cd Merge pull request #632 from robekras/patch-1
Fix lfs_file_rawseek performance issue
2022-04-10 21:52:27 -05:00
Christopher Haster
cc025653ed Merge pull request #630 from Johnxjj/dev-johnxjj
add the limit, the cursor cannot be set to a negative number
2022-04-10 14:44:47 -05:00
Christopher Haster
bfb9bd2483 Merge pull request #614 from nnayo/fix_no_malloc_2
don't use lfs_file_open() when LFS_NO_MALLOC is set
2022-04-10 14:44:33 -05:00
Christopher Haster
f40b854ab5 Merge pull request #584 from colin-foster-in-advantage/block_size_mount_fail
Fail mount when the block size changes
2022-04-10 14:44:24 -05:00
Arnaud Mouiche
c2fa1bb7df Optimization of the rename case.
Rename can be VERY time consuming. One of the reasons is the 4-level
recursion depth of lfs_dir_traverse() seen if a compaction happened during the
rename.

lfs_dir_compact()
  size computation
    [1] lfs_dir_traverse(cb=lfs_dir_commit_size)
         - do 'duplicates and tag update'
       [2] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1])
           - Reaching a LFS_FROM_MOVE tag (here)
         [3] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1]) <= on 'from' dir
             - do 'duplicates and tag update'
           [4] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[3])
  followed by the compaction itself:
    [1] lfs_dir_traverse(cb=lfs_dir_commit_commit)
         - do 'duplicates and tag update'
       [2] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1])
           - Reaching a LFS_FROM_MOVE tag (here)
         [3] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1]) <= on 'from' dir
             - do 'duplicates and tag update'
           [4] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[3])

Yet, analysis shows that levels [3] and [4] don't do anything
if the callback is lfs_dir_traverse_filter...

A practical example:

- format and mount a 4KB block FS
- create 100 files of 256 Bytes named "/dummy_%d"
- create a 1024 Byte file "/test"
- rename "/test" "/test_rename"
- create a 1024 Byte file "/test"
- rename "/test" "/test_rename"
This triggers a compaction where lfs_dir_traverse was called 148393 times,
generating 25e6+ lfs_bd_read calls (~100 MB+ of data)

With the optimization, lfs_dir_traverse is now called 3248 times
(589e3 lfs_bd_read calls, ~2.3MB of data)

=> x 43 improvement...
2022-04-10 13:12:45 -05:00
martin
3b62ec1c47 Updated error handling for NOSPC 2022-04-10 13:00:13 -05:00
xujunjun
b898977fd8 Set the limit, the cursor cannot be set to a negative number 2022-04-10 12:57:42 -05:00
Colin Foster
cf274e6ec6 Squash of CR changes
- nit: Moving brace to end of if statement line for consistency
- mount: add more debug info per CR
- Fix compiler error from extra parentheses
- Fix superblock typo
2022-04-10 12:53:33 -05:00
Christopher Haster
425dc810a5 Modified robekras's optimization to avoid flush for all seeks in cache
The basic idea is simple: if we seek to a position in the currently
loaded cache, don't flush the cache. Notably this ensures that seek is
always as fast or faster than just reading the data.

This is a bit tricky since we need to check that our new block and
offset match the cache, fortunately we can skip the block check by
reevaluating the block index for both the current and new positions.

Note this only works when reading; for writing we need to always flush
the cache, or else we will lose the pending write data.
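
A minimal sketch of the check (greatly simplified, assuming a flat file
layout rather than littlefs's actual block structure): recompute the
block index for the old and new positions and skip the flush when they
match, reads only:

    #include <stdint.h>
    #include <stdbool.h>

    // hypothetical, simplified file state
    struct file {
        uint32_t pos;        // current file offset
        uint32_t block_size; // filesystem block size
    };

    // can a (read-only) seek to new_pos reuse the currently loaded cache?
    // writes must still flush, or pending data would be lost
    static bool seek_in_cache(const struct file *f, uint32_t new_pos) {
        // the cache holds the block containing the current position, so
        // comparing block indices avoids an explicit cache-block check
        return (f->pos / f->block_size) == (new_pos / f->block_size);
    }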
2022-04-10 12:46:51 -05:00
robekras
a6f01b7d6e Update lfs.c
This should fix the performance issue if a new seek position belongs to currently cached data.
This avoids unnecessary rereads of file data.
2022-04-09 02:12:18 -05:00
Christopher Haster
9c7e232086 Fixed missing definition of lfs_cache_drop in readonly mode
Interestingly this was introduced by two different PRs which were not tested
together until pre-release testing:

- Fix lfs_file_seek doesn't update cache properties correctly
- Fix compiler warnings when LFS_READONLY defined
2022-03-21 20:29:04 -05:00
Christopher Haster
c676bcee4c Merge branch 'bf_lfs_file_seek_readonly' into HEAD 2022-03-20 23:16:15 -05:00
Christopher Haster
03f088b92c Tweaked lfs_file_flush to still flush caches when build under LFS_READONLY
A slight variation to the fix from ondrap
2022-03-20 23:14:34 -05:00
ondrap
e955b9f65d Fix lfs_file_seek not updating cache properties correctly in readonly mode. Invalidate the cache to fix it. 2022-03-20 23:10:11 -05:00
Christopher Haster
99f58139cb Merge pull request #650 from Kongduino/patch-1
Typo
2022-03-20 23:09:41 -05:00
Christopher Haster
5801169348 Merge pull request #635 from mikee47/fix/spelling-errors
Fix spelling errors
2022-03-20 23:09:23 -05:00
Christopher Haster
2d6f4ead13 Merge pull request #620 from XinStellaris/master
fix bug:lfs_alloc will alloc one block repeatedly in multiple split
2022-03-20 23:09:04 -05:00
Christopher Haster
3d1b89b41a Merge pull request #612 from tniessen/patch-1
Always zero rambd buffer before first use
2022-03-20 23:08:31 -05:00
Christopher Haster
45cefb825d Merge pull request #606 from eclig/improve-config-doc
Specify unit of the size members of the lfs_config struct
2022-03-20 23:07:51 -05:00
Christopher Haster
bbb9e3873e Merge pull request #593 from tannewt/patch-1
Indent sub-portions of tag fields
2022-03-20 23:07:32 -05:00
Christopher Haster
c6d3c48939 Merge pull request #569 from tniessen/fix-compilation-with-lfs_readonly
Fix compiler warnings when LFS_READONLY defined
2022-03-20 23:06:50 -05:00
Christopher Haster
2db5dc80c2 Update copyright notice 2022-03-20 23:03:52 -05:00
田昕
1363c9f9d4 fix bug:lfs_alloc will alloc one block repeatedly in multiple split
BUG CASE: Assume there are 6 blocks in littlefs, with blocks 0,1,2,3 already allocated. Block 0 has a tail pair of {2, 3}. Now we try to write more into 0.
When writing to block 0, we will split (FIRST SPLIT), thus allocating blocks 4 and 5. Up to now, everything is as expected.
Then we try to commit in block 4, during which a split (SECOND SPLIT) is triggered again (in our case, some files are large, some are small, and one split may not be enough). Still as expected.
The BUG happens when we try to alloc a new block pair for the second split:
As the lookahead buffer reaches its end, a new lookahead buffer is generated from flash content, and blocks 4 and 5 appear as unused in the new lookahead buffer because they are not programmed yet. HOWEVER, blocks 4 and 5 should be occupied by the first split! The result is that blocks 4 and 5 are allocated again (this is where things go wrong).

Commit ce2c01f introduced this bug. In that commit, an lfs_alloc_ack call was inserted in lfs_dir_split, which causes split to reset lfs->free.ack to the block count.
In summary, this problem exists after 2.1.3.

Solution: don't call lfs_alloc_ack in lfs_dir_split.
2022-03-20 20:53:48 -05:00
Kongduino
5bc682a0d4 Typo
s/propogated/propagated/
2022-03-20 20:49:45 -05:00
Christopher Haster
8109f28266 Removed recursion from lfs_dir_traverse
lfs_dir_traverse is a bit unpleasant in that it is inherently a
recursive function, though one with a strict bound of 4 calls (commit -> filter ->
move -> filter), and efforts to unroll the recursion come at a
significant code cost.

It turns out the best solution I've found so far is to simply create an
explicit stack with an explicit bound of 4 calls (or more accurately,
3 pushed frames).

---

This actually highlights one of the bigger flaws in littlefs right now,
which is that this function, lfs_dir_traverse, takes O(n^2) disk reads
to traverse.

Note that LFS_FROM_MOVE can only occur once per commit, which is why
this code is O(n^2) and not O(n^4).
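
The general pattern, as a standalone sketch (not the actual
lfs_dir_traverse code): bounded recursion is replaced with a small
fixed-size array of saved frames, pushed on "call" and popped on
"return":

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    // whatever state a recursive call would have kept on the C stack
    struct frame {
        uint32_t tag;
        const void *buffer;
    };

    #define MAX_FRAMES 3  // commit -> filter -> move -> filter

    static int traverse(uint32_t tag, const void *buffer) {
        struct frame stack[MAX_FRAMES];
        size_t sp = 0;

        while (true) {
            // ... process the current (tag, buffer) here ...
            bool recurse = false;  // set when a nested traversal is needed

            if (recurse && sp < MAX_FRAMES) {
                // "call": save the current frame, continue with the nested one
                stack[sp++] = (struct frame){tag, buffer};
                continue;
            }
            if (sp == 0) {
                return 0;  // outermost traversal finished
            }
            // "return": pop the saved frame and resume it
            sp -= 1;
            tag = stack[sp].tag;
            buffer = stack[sp].buffer;
        }
    }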
2022-03-20 04:27:54 -05:00
Christopher Haster
fedf646c79 Removed recursion in file read/writes
This mostly just required separate functions for "lfs_file_rawwrite" and
"lfs_file_flushedwrite", since lfs_file_flush recursively invokes
lfs_file_rawread and lfs_file_rawwrite.

This comes at a code cost, but gives us bounded and measurable RAM usage
on this code path.
2022-03-20 04:25:24 -05:00
Christopher Haster
84da4c0b1a Removed recursion from commit/relocate code path
lfs_dir_commit originally relied heavily on tail-recursion, though at
least one path (through relocations) was not tail-recursive, and could
cause unbounded stack usage in extreme cases of bad blocks. (Keep in
mind even extreme cases of bad blocks should be in scope for littlefs).

In order to remove recursion from this code path, several changes were
required:

- The lfs_dir_compact logic had to be somewhat inverted. Instead of
  first compacting and then resolving issues such as relocations and
  orphans, the overarching lfs_dir_commit now contains a state-machine
  which after committing or compacting handles the extra changes to the
  filesystem in a single, non-recursive loop

- Instead of fixing all relocations recursively, >1 relocation requires
  deferring to a full deorphan step. This step is unfortunately an
  additional n^2 process. It also required some changes to lfs_deorphan
  in order to ignore intentional orphans created as an intermediary in
  lfs_mkdir. Maybe in the future we should remove these.

- Tail recursion normally found in lfs_fs_deorphan had to be rewritten
  as a loop which restarts any time a new commit causes a relocation.
  This does show that the algorithm may not terminate, but only if every
  block is bad, which will eventually cause littlefs to run out of
  blocks to write to.
2022-03-20 04:24:44 -05:00
Christopher Haster
554e4b1444 Fixed Popen deadlock issue in test.py
As noted in Python's subprocess library:

> This will deadlock when using stdout=PIPE and/or stderr=PIPE and the
> child process generates enough output to a pipe such that it blocks
> waiting for the OS pipe buffer to accept more data.

Curiously, this only became a problem when updating to Ubuntu 20.04
in CI (python3.6 -> python3.8).
2022-03-20 03:44:39 -05:00
Christopher Haster
fe8f3d4f18 Changed ./scripts/structs.py to organize by header file
Avoids redundant counting of structs shared in multiple .c files, which
is very common. This is different from the other scripts,
code.py/data.py/stack.py, but this difference makes sense as struct
declarations have a very different lifetime.
2022-03-20 03:41:37 -05:00
Christopher Haster
316b019f41 In CI, determine loop devices dynamically to avoid conflicts with Ubuntu snaps
This was introduced when updating CI to Ubuntu 20.04: Ubuntu snaps consume
loop devices, which conflicts with our assumption that /dev/loop0
will always be unused. Changed to request a dynamic loop device from
losetup, though it would have been nice if Ubuntu snaps allocated
from the last device or something.
2022-03-20 03:39:23 -05:00
Christopher Haster
8475c8064d Limit ./scripts/structs.py to report structs in local .h files
This requires parsing an additional section of the dwarfinfo (--dwarf=rawlines)
to get the declaration file info.

---

Interpreting the results of ./scripts/structs.py is a bit more
complicated than for the other scripts: structs aren't used in a consistent
manner, so the cost of a large struct depends on the context in which it
is used.

But that being said, there really isn't much reason to report
internal-only structs. These structs really only exist for type-checking
in internal algorithms, and their cost will end up reflected in other RAM
measurements, either stack, heap, or other.
2022-03-20 03:39:23 -05:00
Christopher Haster
563af5f364 Cleaned up make clean 2022-03-20 03:39:23 -05:00
Christopher Haster
3b495bab79 Fixed spurious CI failure caused by multiple writers to .o files
GCC is a bit frustrating here: it really wants to generate every file in
a single command, which _would_ be more efficient if our build system could
leverage it. But -fcallgraph-info is a rather novel flag, so we can't
really rely on it for generally compiling and testing littlefs.

The multi-file output gets in the way when we want an explicitly
separate rule for callgraph-info generation. We can't generate the
callgraph-info without generating the object files.

This becomes a surprising issue when parallel building (make -j) is used!
Suddenly we might end up with both the .o and .ci rules writing to .o
files, which creates a really difficult to track down issue of corrupted
.o files.

The temporary solution is to use an order-only prerequisite. This still
ends up building the .o files twice, but it's an acceptable tradeoff for
not requiring the -fcallgraph-info for all builds.
2022-03-20 03:39:18 -05:00
Christopher Haster
e4adefd1d7 Fixed spurious encoding error
Using errors=replace in python utf-8 decoding makes these scripts more
resilient to underlying errors, rather than just throwing an unhelpfully
generic decode error.
2022-03-20 03:28:26 -05:00
Christopher Haster
9d54603ce2 Added new scripts to CI results
- Added to GitHub statuses (61 results)

- Reworked generated release table to include these (16 results, only thumb)

These also required a surprisingly large number of other changes:

- Bumped CI Ubuntu version 18.04 -> 20.04; 22.04 is already on the
  horizon but not usable in GitHub yet

- Manually upgraded to GCC v10, which is required for the -fcallgraph-info
  flag that scripts/stack.py uses.

- Increased paginated status queries to 100 per-page. If we have more
  statuses than this the status diffs may get much more complicated...

- Forced whitespace in generated release table to always be nbsp. GitHub
  tables get scrunched rather ugly without this, preferring margins to
  readable tables.

- Added limited support for "∞" results, since this is returned by
  ./scripts/stack.py for recursive functions.

As a side-note, this increases the number of statuses reported
per-commit from 6 to 61, so hopefully that doesn't cause any problems...
2022-03-20 03:28:26 -05:00
Christopher Haster
7ea2b515aa A few more tweaks to scripts
- Changed `make summary` to show a one line summary
- Added `make lfs.csv` rule, which is useful for finding more info with
  other scripts
- Fixed small issue in ./scripts/summary.py
- Added *.ci (callgraph) and *.csv (script output) to CI
2022-03-20 03:28:26 -05:00
Christopher Haster
55b3c538d5 Added ./scripts/summary.py
A full summary of static measurements (code size, stack usage, etc) can now
be found with:

    make summary

This is done through the combination of a new ./scripts/summary.py
script and the ability of existing scripts to merge into existing csv
files, allowing multiple results to be merged either in a pipeline, or
in parallel with a single ./scripts/summary.py call.

The ./scripts/summary.py script can also be used to quickly compare
different builds or configurations. This is a proper implementation
of a similar but hacky shell script that has already been very useful
for making optimization decisions:

    $ ./scripts/structs.py new.csv -d old.csv --summary
    name (2 added, 0 removed)               code             stack            structs
    TOTAL                                  28648 (-2.7%)      2448               1012

Also some other small tweaks to scripts:

- Removed state saving diff rules. This isn't the most useful way to
  handle comparing changes.

- Added short flags for --summary (-Y) and --files (-F), since these
  are quite often used.
2022-03-20 03:28:26 -05:00
Christopher Haster
eb8be9f351 Some improvements to size scripts
- Added -L/--depth argument to show dependencies for scripts/stack.py,
  this replaces calls.py
- Additional internal restructuring to avoid repeated code
- Removed incorrect diff percentage when there is no actual size
- Consistent percentage rendering in test.py
2022-03-20 03:28:21 -05:00
Christopher Haster
50ad2adc96 Added make *-diff rules, quick commands to compare sizes
This required a patch to the --diff flag for the scripts to ignore
a missing file. This enables the useful one liner for making comparisons
with potentially missing previous versions:

    ./scripts/code.py lfs.o -d lfs.o.code.csv -o lfs.o.code.csv

    function (0 added, 0 removed)            old     new    diff
    TOTAL                                  25476   25476      +0

One downside, these previous files are easy to delete as a part of make
clean, which limits their usefulness for comparing configuration
changes...
2022-03-11 14:40:54 -06:00
Christopher Haster
0a2ff3b6ff Added scripts/structs.py for getting sizes of structs
Note this does include internal structs, so this should probably
be limited to informative purposes.
2022-03-11 14:40:54 -06:00
Christopher Haster
d7582efec8 Changed script's CSV formats to allow for merging different measurements
- size  -> code_size
- size  -> data_size
- frame -> stack_frame
- limit -> stack_limit
- hits  -> coverage_hits
- count -> coverage_count
2022-03-11 14:40:54 -06:00
Christopher Haster
f4c7af76f8 Added scripts/stack.py for viewing stack usage
Note this detects loops (recursion), and renders this as infinity.
Currently littlefs does have a single recursive function and you can see
how this infects the full call graph. Eventually this should be removed.
2022-03-11 14:40:54 -06:00
Christopher Haster
20c58dcbaa Added coverage-sort to scripts/coverage.py
scripts/coverage.py was missed originally because it's not run as often
as the others. Since it requires run-time info, it's usually only used
in CI.
2022-03-11 14:39:38 -06:00
Christopher Haster
f5286abe7a Added scripts/calls.py for viewing the callgraph directly 2022-03-11 14:39:36 -06:00
Christopher Haster
2cdabe810d Split out scripts/code.py into scripts/code.py and scripts/data.py
This is to avoid unexpected script behavior even though data.py should
always return 0 bytes for littlefs. Maybe a check for this should be
added to CI?
2022-03-11 14:39:36 -06:00
Christopher Haster
b045436c23 Added size-sort options to scripts/code.py
Now with -s/--sort and -S/--reverse-sort for sorting the functions by
size.

You may wonder why add reverse-sort, since its utility doesn't seem
worth the cost to implement (these are just helper scripts after all).
The reason is that reverse-sort is quite useful on the command-line,
where scrollback may be truncated and you only care about the larger
entries.

Outside of the command-line, normal sort is preferred.

Fortunately the difference is just the sign in the sort key.

Note this conflicts with the short --summary flag, so that has been
removed.
2022-03-11 14:36:23 -06:00
Scott Shawcroft
1877c40aac Indent sub-portions of tag fields
This makes the bit breakdown clearer.
2022-02-18 21:13:41 -06:00
Emilio Lopes
e29e7aeefa Specify unit of the size members of the lfs_config struct
Fixes littlefs-project/littlefs#568
2022-02-18 21:09:19 -06:00
yog
e334983767 don't use lfs_file_open() when LFS_NO_MALLOC is set 2022-02-18 20:57:20 -06:00
mikee47
4977fa0c0e Fix spelling errors 2022-01-29 09:52:00 +00:00
Tobias Nießen
fdda3b4aa2 Always zero rambd buffer before first use
This fixes warnings produced by tools such as memcheck without
requiring the user to set an erase value.
2021-11-14 16:10:54 +01:00
Colin Foster
487df12dde Fail when block_size doesn't match config
With the previous commit, fail if the superblock block_size doesn't
match the config block_size.
2021-08-17 10:02:27 -07:00
Colin Foster
3efb8e44f3 Fail mount when the block size changes
When the on-disk block size doesn't match the config block size, it is
possible to get file corruption. For instance, if the number of blocks was
0x200 and we re-mount with 0x100, files could be corrupted.

Re-mounting with a larger number of blocks should be safer; that case
could be handled with a resize option or perhaps a mount flag to
ignore this parameter.
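A hedged, self-contained sketch of the kind of check this adds (toy structs, not the actual lfs_mount code):

    #include <stdint.h>

    struct toy_config     { uint32_t block_size; uint32_t block_count; };
    struct toy_superblock { uint32_t block_size; uint32_t block_count; };

    // refuse to mount when the on-disk geometry disagrees with the config
    static int check_geometry(const struct toy_superblock *sb,
            const struct toy_config *cfg) {
        if (sb->block_size != cfg->block_size) {
            return -1;      // e.g. an "invalid" error such as LFS_ERR_INVAL
        }
        if (sb->block_count > cfg->block_count) {
            return -1;      // fewer blocks than the filesystem expects is unsafe
        }
        return 0;
    }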
2021-07-21 08:56:21 -07:00
Tobias Nießen
3ae87f4e29 Add littlefs-disk-img-viewer to README 2021-06-21 18:52:00 +02:00
Tobias Nießen
fb2c311bb4 Fix compiler warnings when LFS_READONLY defined 2021-06-14 12:12:38 +02:00
Christopher Haster
ead50807f1 Merge pull request #565 from tniessen/fix-link-to-test-bd
Fix link to test block device
2021-06-12 12:35:34 -05:00
Christopher Haster
2f7596811d Merge pull request #529 from yamt/macos-make-test
scripts/test.py: Fix infinite busy loops on macOS
2021-06-12 12:35:25 -05:00
Tobias Nießen
1e423bae58 Fix link to test block device 2021-06-09 21:04:50 +02:00
YAMAMOTO Takashi
3bee4d9a19 scripts/test.py: Fix infinite busy loops on macOS
I confirmed that the same number of tests are run
with "make test" on:

    * Ubuntu with and without this change
    * macOS with this change

>   ====== results ======
>   tests passed 817/817 (100.00%)
>   tests failed 0/817 (0.00%)
2021-02-22 14:42:10 +09:00
Christopher Haster
1863dc7883 Merge pull request #519 from littlefs-project/devel
Minor release: v2.4
2021-01-19 18:50:34 -06:00
Christopher Haster
3d4e4f2085 Bumped minor version to v2.4 2021-01-18 20:23:54 -06:00
Christopher Haster
a2c744c8f8 Merge pull request #516 from littlefs-project/ci-revamp
Adopt GitHub Actions, bring in a number of script/Makefile improvements
2021-01-18 18:38:42 -06:00
Christopher Haster
c0cc0a417e Enabled overriding of LFS_ASSERT/TRACE/DEBUG/etc
This is useful for testing the new erroring assert behavior in CI.
Asserts do not error by default, so this macro needs to be overridden.

It is possible to test this behavior using the existing option of
overriding lfs_util.h with a custom file, by using a small sed
one-line script. But this is much simpler.

This does raise the question of whether more of the configuration options in
lfs_util.h should be opened up for function-like macro overrides.
2021-01-18 14:01:53 -06:00
Christopher Haster
bca64d76cf Merge branch 'devel' into ci-revamp
Needed to bring in new "error-asserts" configuration
2021-01-18 12:23:25 -06:00
Christopher Haster
cab1d6cca6 Merge pull request #514 from mon/feature/assert_early_return
lfs_fs_preporphans: return int to allow graceful LFS_ASSERT
2021-01-18 11:53:47 -06:00
Will
c9eed1f181 Add test to ensure asserts can return 2021-01-18 11:50:39 -06:00
Will
e7e4b352bd lfs_fs_preporphans ret int for graceful LFS_ASSERT 2021-01-18 11:50:33 -06:00
Christopher Haster
9449ef4be4 Merge pull request #511 from embeddedt/fix_lseek
Skip flushing file if lfs_file_rawseek() doesn't change position
2021-01-18 11:47:56 -06:00
Christopher Haster
cfe779fc08 Merge pull request #508 from littlefs-project/fix-sanity-check
Moved sanity check in lfs_format after compaction
2021-01-18 11:47:23 -06:00
Christopher Haster
0db6466984 Merge pull request #502 from mon/feature/meta_limits
Add metadata_max config to help performance on devices with large blocks
2021-01-18 11:45:34 -06:00
Christopher Haster
21488d9e06 Fixed incorrect documentation in test.py
The argparse help documented an outdated format and was off by 1.

Found by sender6
2021-01-18 11:41:51 -06:00
Christopher Haster
10a08833c6 Moved lfs_mdir_isopen behind LFS_NO_ASSERT
lfs_mdir_isopen goes unused if asserts are disabled, and this caused an
"unused function" warning on Clang (curiously not on GCC since the
function was static inline, commonly used for header-only functions).

Also removed "inline" from the lfs_mdir_* functions as these involve
linked-list operations and really shouldn't be inlined. And since they
are static, inlining should occur automatically if there is a benefit.

Found by dpgeorge
2021-01-18 11:41:18 -06:00
Christopher Haster
47d6b2fcf3 Removed unnecessary truncate condition thanks to new seek optimization 2021-01-11 00:14:34 -06:00
Christopher Haster
745d98cde0 Fixed lfs_file_truncate issue where internal state may not be flushed
This was caused by the new lfs_file_rawseek optimization that can skip
flushing when the calculated file->pos is unchanged, combined with an
implicit expectation in lfs_file_truncate that lfs_file_rawseek
unconditionally sets file->pos.

Because of this assumption, lfs_file_truncate could leave file->pos in
an outdated state while changing the internal file metadata. Humorously,
this was always guaranteed to trigger the skip in lfs_file_rawseek when
we try to restore the file->pos, leaving the file->cache used to do the
CTZ skip-list lookup in a potentially bad state.

The easiest fix is to just update file->pos correctly. Note we don't
want to explicitly flush, since we can leverage the same noop
optimization if we truncate to the file position, which I've added a
test for.
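A small usage sketch of the resulting behavior (assuming lfs and file are an already-mounted lfs_t and an open lfs_file_t; the noop claim is taken from this commit message, not a new guarantee):

    // truncating at the current position shouldn't need to flush or seek
    lfs_soff_t pos = lfs_file_tell(&lfs, &file);
    if (pos >= 0) {
        int err = lfs_file_truncate(&lfs, &file, (lfs_off_t)pos);
        // on success the file position is expected to still be `pos`
        (void)err;
    }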
2021-01-11 00:14:34 -06:00
Themba Dube
3216b07c3b Use lfs_file_rawsize to calculate LFS_SEEK_END position 2021-01-11 00:14:30 -06:00
Christopher Haster
6592719d28 Removed .travis.yml
Now that it's been replaced by GitHub workflows (in .github/workflows)
2021-01-10 13:20:14 -06:00
Christopher Haster
c9110617b3 Added post-release script, cleaned up workflows
This helps an outstanding maintainer annoyance: updating dependencies to
bring in new versions on each littlefs release.

But instead of adding a bunch of scripts to the tail end of the release
workflow, the post-release script just triggers a single
"repository_dispatch" event in the newly created littlefs.post-release
repo. From there any number of post-release workflows can be run.

This indirection should let the post-release scripts move much quicker
than littlefs itself, which helps offset how fragile these sorts of scripts
are.

---

Also finished cleaning up the workflows now that they are mostly
working.
2021-01-10 13:20:11 -06:00
Christopher Haster
104d65113d Reduced build sources to just the core littlefs
Currently this is just lfs.c and lfs_util.c. Previously this included
the block devices, but this meant all of the scripts needed to
explicitly deselect the block devices to avoid reporting build
size/coverage info on them.

Note that test.py still explicitly adds the block devices for compiling
tests, which is their main purpose. Humorously this means the block
devices will probably be compiled into most builds in this repo anyways.
2021-01-10 04:03:16 -06:00
Christopher Haster
6d3e4ac33e Brought over the release workflow
This is pretty much a cleaned up version of the release script that ran
on Travis.

The biggest change is that now the release script also collects the
build results into a table as part of the change notes, which is a nice
addition.
2021-01-10 04:03:13 -06:00
Christopher Haster
9d6546071b Fixed a recompilation issue in CI, tweaked coverage.py a bit more
This was lost in the Travis -> GitHub transition: in serializing some of
the jobs, I missed that we need to clean between tests with different
geometry configurations. Otherwise we end up running outdated binaries,
which explains some of the weird test behavior we were seeing.

Also tweaked a few script things:
- Better subprocess error reporting (dump stderr on failure)
- Fixed a BUILDDIR rule issue in test.py
- Changed test-not-run status to None instead of undefined
2021-01-10 03:21:28 -06:00
Christopher Haster
b84fb6bcc5 Added BUILDDIR, a bit of script reworking
Now littlefs's Makefile can work with a custom build directory
for compilation output. Just set the BUILDDIR variable and the Makefile
will take care of the rest.

make BUILDDIR=build size

This makes it very easy to compare builds with different compile-time
configurations or different cross-compilers.

This meant most of code.py's build isolation is no longer needed,
so I revisited the scripts and cleaned/tweaked a number of things.

Also brought code.py in line with coverage.py, fixing some of the
inconsistencies that were created while developing these scripts.

One change to note was removing the inline measuring logic, I realized
this feature is unnecessary thanks to GCC's -fkeep-static-functions and
-fno-inline flags.
2021-01-10 03:21:21 -06:00
Christopher Haster
887f3660ed Switched to lcov for coverage collection, greatly simplified coverage.py
Since we already have fairly complicated scripts, I figured it wouldn't
be too hard to use the gcov tools and directly parse their output. Boy
was I wrong.

The gcov intermediary format is a bit of a mess. In version 5.4, a
text-based intermediary format is written to a single .gcov file per
executable. This changed sometime before version 7.5, when it started
writing separate .gcov files per .o files. And in version 9 this
intermediary format has been entirely replaced with an incompatible json
format!

Ironically, this means the internal-only .gcda/.gcno binary format has
actually been more stable than the intermediary format.

Also there's no way to avoid temporary .gcov files generated in the
project root, which risks messing with how test.py runs parallel tests.
Fortunately this looks like it will be fixed in gcov version 9.

---

Ended up switching to lcov, which was the right way to go. lcov handles
all of the gcov parsing, provides an easily parsable output, and even
provides a set of higher-level commands to manage coverage collection
from different runs.

Since this is all provided by lcov, was able to simplify coverage.py
quite a bit. Now it just parses the .info files output by lcov.
2021-01-10 02:21:33 -06:00
Christopher Haster
eeeceb9e30 Added coverage.py, and optional coverage info to test.py
Now coverage information can be collected if you provide the --coverage
flag to test.py. Internally this uses GCC's gcov instrumentation along with a
new script, coverage.py, to parse *.gcov files.

The main use for this is finding coverage info during CI runs. There's a
risk that the instrumentation may make it more difficult to debug, so I
decided to not make coverage collection enabled by default.
2021-01-10 02:12:45 -06:00
Christopher Haster
b2235e956d Added GitHub workflows to run tests
Mostly taken from .travis.yml, biggest changes were around how to get
the status updates to work.

We can't use a token on PRs the same way we could in Travis, so instead
we use a second workflow that checks every pull request for "status"
artifacts, and create the actual statuses in the "workflow_run" event,
where we have full access to repo secrets.
2021-01-09 23:42:49 -06:00
Themba Dube
6bb4043154 Skip flushing file if lfs_file_rawseek() doesn't change position 2020-12-24 14:05:46 -05:00
Christopher Haster
2b804537b0 Moved sanity check in lfs_format after compaction
After a bit of tweaking in 9dde5c7 to write out all superblocks
during lfs_format, additional writes were added after the sanity
checking normally done at the end.

This turned out to be a problem when porting littlefs, as it makes it
easy for addressing issues to not get caught during lfs_format.

Found by marekr, tristanclare94, and mjs513
2020-12-22 11:47:48 -06:00
Christopher Haster
d804c2d3b7 Added scripts/code_size.py, for more in-depth code-size reporting
Inspired by Linux's Bloat-O-Meter, code_size.py wraps nm to provide
function-level code size, and supports detailed comparison between
different builds.

One difference is that code_size.py invokes littlefs's build system
similarly to test.py, creating a duplicate build in the "sizes"
directory. This makes it easy to monitor a cross-compiled build size
while simultaneously testing on the host machine.
2020-12-19 18:49:57 -06:00
Will
37f4de2976 Remove inline_files_max and lfs_t entry for metadata_max 2020-12-18 13:05:20 +10:00
Will
6b16dafb4d Add metadata_max and inline_file_max to config
We have seen poor read performance on NAND flashes with 128kB blocks.
The root cause is inline files having to traverse many sets of metadata
pairs inside the current block before being fully reconstructed. Simply
disabling inline files is not enough, as the metadata will still fill up
the block and eventually need to be compacted.

By allowing configuration of how much size metadata takes up, along with
limiting (or disabling) inline file size, we achieve read performance
improvements of roughly an order of magnitude.
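A hedged configuration sketch for the large-block NAND case described here (only the relevant fields are shown, the block device callbacks and buffers are elided, and the specific values are illustrative; note the separate inline limit was later removed, as the commit above shows):

    #include "lfs.h"

    // illustrative values for a NAND part with 128KiB blocks
    const struct lfs_config cfg = {
        // ... read/prog/erase/sync callbacks and cache buffers elided ...
        .read_size      = 2048,
        .prog_size      = 2048,
        .block_size     = 128 * 1024,
        .block_count    = 1024,
        .cache_size     = 2048,
        .lookahead_size = 128,
        .block_cycles   = 500,
        .metadata_max   = 4096,     // cap how much of a block metadata may use
    };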
2020-12-15 12:59:32 +10:00
Christopher Haster
1a59954ec6 Merge pull request #495 from littlefs-project/devel
Minor release: v2.3
2020-12-07 20:50:31 -06:00
Christopher Haster
6a7012774d Renamed internal lfs_*raw -> lfs_raw* functions
- Prefixing with raw is slightly more readable, follows
  common-prefix rule
- Matches existing raw prefixes in testbd
2020-12-06 00:26:24 -06:00
Christopher Haster
288a5cbc8d Bumped minor version to v2.3 2020-12-04 01:31:27 -06:00
Christopher Haster
5783eea0de Merge pull request #490 from littlefs-project/fix-alloc-eviction
Fix allocation-eviction issue when erase state is multiple of block_cycles+1
2020-12-04 00:49:09 -06:00
Christopher Haster
2bb523421e Moved lfs_mlist_isopen checks into the API wrappers
This indirectly solves an issue with lfs_file_rawclose asserting
when lfs_file_opencfg errors since appending to the mlist occurs
after open. It also may speed up some of the internal operations such as
the lfs_file_write used to resolve unflushed data.

The idea behind adopting mlist over flags is that realistically it's
unlikely for the user to open a significant number of files (enough for
big O to kick in). That being said, moving the mlist asserts into the
API wrappers does protect some of the internal operations from scaling
based on the number of open files.
2020-12-04 00:42:32 -06:00
Noah Gorny
7388b2938a Deprecate LFS_F_OPENED and use lfs_mlist_isused instead
Instead of an additional flag, we can just go through the mlist.
2020-12-04 00:26:19 -06:00
Christopher Haster
ce425a56c3 Merge pull request #470 from renesas/SWFLEX-1517-littlefs-thread-safe-option
Add thread safe wrappers
2020-12-03 23:47:32 -06:00
Christopher Haster
a99a93fb27 Added thread-safe build+size reporting to CI 2020-12-03 23:46:59 -06:00
Christopher Haster
45afded784 Moved LFS_TRACE calls to API wrapper functions
This removes quite a bit of extra code needed to intertwine the
LFS_TRACE calls into the original functions.

Also changed temporary return type to match API declaration where
necessary.
2020-12-03 23:46:59 -06:00
Christopher Haster
00a9ba7826 Tweaked thread-safe implementation
- Stayed on non-system include for lfs_util.h for now
- Named internal functions "lfs_functionraw"
- Merged lfs_fs_traverseraw
- Added LFS_LOCK/UNLOCK macros
- Changed LFS_THREADSAFE from 1/0 to defined/undefined to
  match LFS_READONLY
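A hedged sketch of how a user wires this up (the lock/unlock fields are the lfs_config hooks available when LFS_THREADSAFE is defined, to the best of my knowledge; the mutex helpers are hypothetical):

    #include "lfs.h"

    #ifdef LFS_THREADSAFE
    extern void my_mutex_take(void);    // hypothetical user mutex helpers
    extern void my_mutex_give(void);

    static int fs_lock(const struct lfs_config *c) {
        (void)c;
        my_mutex_take();
        return 0;
    }

    static int fs_unlock(const struct lfs_config *c) {
        (void)c;
        my_mutex_give();
        return 0;
    }
    #endif

    const struct lfs_config cfg = {
        // ... block device callbacks, geometry, and buffers elided ...
        .block_size = 4096,             // illustrative
    #ifdef LFS_THREADSAFE
        .lock   = fs_lock,
        .unlock = fs_unlock,
    #endif
    };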
2020-12-03 23:46:59 -06:00
Bill Gesner
fc6988c7c3 make raw functions static. formatting tweaks 2020-12-03 23:46:54 -06:00
Bill Gesner
d0f055d321 Squash of thread-safe PR cleanup
- expand functions
- add comment
- rename functions
- fix locking issue in format and mount
- use global include
- fix ac6 linker issue
- use the global config file
- address review comments
- minor cleanup
- minor cleanup
- review comments
2020-12-03 23:41:01 -06:00
Christopher Haster
b9fa33f9bc Merge pull request #480 from maximevince/master
Add LFS_READONLY define, to allow smaller builds providing read-only mode
2020-12-03 23:06:00 -06:00
Christopher Haster
2efebf8e9b Added read-only build+size reporting to CI 2020-12-03 23:04:48 -06:00
Maxime Vincent
754b4c3cda Squash of LFS_READONLY cleanup
- undef unavailable function declarations altogether
- even less code, assert on write attempts
- remove LFS_O_WRONLY and other flags when compiling with LFS_READONLY
- do not annotate #endif, as requested
- move ifdef before comments blocks, rework dangling opening bracket
- ifdef file flags that are not needed in read-only mode
- slight refactor
- ifdef LFS_F_ERRED out as well
2020-12-03 23:03:29 -06:00
Christopher Haster
584eb26efc Merge pull request #443 from NoahGorny/add-already-opened-assert
Assert that the file isn't open in lfs_file_opencfg
2020-12-03 22:43:10 -06:00
Noah Gorny
008ebc37df Add lfs_mlist_append/remove helper 2020-12-03 22:42:39 -06:00
Christopher Haster
66272067ab Merge pull request #395 from gmpy/improve-write-performance
lfs_bd_cmp() compares more bytes at one time
2020-12-03 22:34:47 -06:00
Christopher Haster
e273a82679 Merge pull request #487 from littlefs-project/fix-alloc-reset-modulus
Fix several wear-leveling issues found in lfs_alloc_reset
2020-12-03 22:33:47 -06:00
Christopher Haster
1dc6ae94b9 Merge pull request #486 from littlefs-project/fix-assert
Fix assert
2020-12-03 22:32:56 -06:00
Christopher Haster
817ef02d24 Merge pull request #412 from jrast/patch-3
Added littlefs-python to the related projects section
2020-12-03 22:32:04 -06:00
Christopher Haster
b8dcf10974 Changed lfs_dir_alloc to maximize block cycles for new metadata pairs
Previously we only bumped the revision count if an eviction would occur
immediately (and possibly corrupt littlefs). This works, but does risk
a suboptimal superblock size if an almost-exhausted superblock was
allocated during lfs_format.

As pointed out by tim-nordell-nimbelink, we can align the revision count
to maximize the number of block cycles without breaking the existing
requirements of increasing revision counts.

As an added benefit, littlefs's wear-leveling should behave more
consistently after this change.
2020-11-28 22:46:11 -06:00
Christopher Haster
0aba71d0d6 Fixed single unchecked bit during commit verification
This bug was exposed by the bad-block tests due to changes to block
allocation, but could have been hit before these changes.

In flash, when blocks fail, they don't fail in a predictable manner. To
account for this, the bad-block tests check a number of failure
behaviors. The interesting one here is "LFS_TESTBD_BADBLOCK_ERASENOOP",
in which bad blocks can not be erased or programmed, and are stuck with
the data written at the time the blocks go bad.

This is actually a pretty realistic failure behavior, since flash needs a
large voltage to force the electrons of the floating gates. Though
realistically, such a failure would likely corrupt the data a bit, not leave the
underlying data perfectly intact.

LFS_TESTBD_BADBLOCK_ERASENOOP is rather interesting to test for because it
means bad blocks can end up with perfectly valid CRCs after a failed write,
confusing littlefs.

---

In this case, we had the perfect series of operations such that a test
was repeatedly writing the same sequence of metadata commits to the same
block, which eventually goes bad, leaving the block stuck with metadata
that occurs later in the sequence.

What this means is that after the first commit, the metadata block
contained both the first and second commits, even though the loop in the
test hadn't reached that point yet.

expected       actual
.----------.  .----------.
| commit 1 |  | commit 1 |
| crc 1    |  | crc 1    |
|          |  | commit 2 <-- (from previous iteration)
|          |  | crc 2    |
'----------'  '----------'

To protect against this, littlefs normally compares the written CRC
against the expected CRC, but because this was the exact same data that
it was going to write, the CRCs end up the same.

Ah! But doesn't littlefs also encode the state of the next page to keep
track of if the next page has been erased or not? Wouldn't that change
between iterations?

It does! In a single bit in the CRC-tag. But thanks to some incorrect
logic attempting to avoid an extra condition in the loop for writing out
padding commits, the CRC that littlefs checked against was the CRC
immediately before we include the "is-next-page-erased" bit.

Changing the verification check to use the same CRC as what is used to
verify commits on fetch solves this problem.
2020-11-22 15:07:16 -06:00
Christopher Haster
0ea2871e24 Fixed typo in scripts/readtree.py
Not sure how this went unnoticed; I guess this is the first bug that
needed in-depth inspection after a last-minute argument cleanup
in the debug scripts.
2020-11-22 15:05:22 -06:00
Christopher Haster
d04c1392c0 Fixed allocation-eviction issue when erase state is multiple of block_cycles+1
This rather interesting corner-case arises in lfs_dir_alloc anytime the
uninitialized revision count happens to be a multiple of block_cycles+1.

For example, the source of the bug found by tim-nordell-nimbelink:

rev = 2742492087
block_cycles = 100

2742492087 % (100+1) = 0

The reason for this weird block_cycles+1 case is due to a fix for a
previous bug in fe957de. To avoid aliasing, which would cause metadata
pairs to wear unevenly, block_cycles was incremented to the next odd number.

Normally, littlefs tweaks the revision count of blocks during
lfs_dir_alloc in order to make sure evictions can't happen on the first
compact. Otherwise, higher-level logic such as lfs_format would break.

However, this wasn't updated with the aliasing fix in fe957de, so
lfs_dir_alloc was only rounding the revision count to the nearest even
number.

The current fix is to change the logic in lfs_dir_alloc to explicitly
check for the eviction condition and increment if eviction would occur.
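A hedged sketch of that check (names assumed, not the exact lfs_dir_alloc code):

    #include <stdint.h>

    // if the freshly seeded revision count already satisfies the eviction
    // condition, bump it so a brand-new metadata pair isn't evicted on its
    // first compact (e.g. 2742492087 % (100+1) == 0 above becomes 2742492088)
    static uint32_t fixup_fresh_rev(uint32_t rev, uint32_t block_cycles) {
        if (block_cycles > 0 && rev % (block_cycles + 1) == 0) {
            rev += 1;
        }
        return rev;
    }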

Found by tim-nordell-nimbelink
2020-11-22 00:40:58 -06:00
Christopher Haster
f215027fd4 Switched to CRC as seed collection function instead of xor
As noted by gtaska, we are sitting on a better hash-combining function
than xor: CRC. Previous issues with xor were solvable, but relying on
xor for this isn't really worth the risk when we already have a CRC
function readily available.

To quote a study found by gtaska:

https://michiel.buddingh.eu/distribution-of-hash-values

> CRC32 seems to score really well, but its graph is skewed by the results
> of Dataset 5 (binary numbers), which may or may not be too synthetic to
> be considered a fair benchmark. But even if you substract the results
> from that test, it does not fare significantly worse than other,
> cryptographic hash functions.
2020-11-20 00:38:41 -06:00
Christopher Haster
1ae4b36f2a Removed unnecessary randomization of offsets in lfs_alloc_reset
On first read, randomizing the allocator's offset may seem appropriate
for lfs_alloc_reset. However, it ends up using the filesystem-fed
pseudorandom seed in situations it wasn't designed for.

As noted by gtaska, the combination of using xors for feeding the seed
and multiple traverses of the same CRCs can cause the seed to flip to
zeros with concerning frequency.

Removed the randomization from lfs_alloc_reset, leaving it in only
lfs_mount.

Found by gtaska
2020-11-20 00:18:13 -06:00
Christopher Haster
480cdd9f81 Fixed incorrect modulus in lfs_alloc_reset
Modulus of the offset by block_size was clearly a typo, and should be
block_count. It's interesting to note that later moduli during alloc
calculations prevent this from breaking anything, but as gtaska notes it
could skew the wear-leveling distribution.
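A hedged sketch of the intended calculation (names assumed from the description; the allocator offset is a block index, so it must wrap by the block count):

    #include <stdint.h>

    // previously this (incorrectly) used block_size as the modulus
    static uint32_t alloc_reset_off(uint32_t seed, uint32_t block_count) {
        return seed % block_count;
    }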

Found by guiserle and gtaska
2020-11-20 00:02:19 -06:00
Noah Gorny
6303558aee Use LFS_O_RDWR instead of magic number in lfs_file_* asserts 2020-11-19 01:51:39 +02:00
Noah Gorny
4bd653dd00 Assert that file/dir struct is not reused in lfs_file_opencfg/lfs_dir_open 2020-11-19 01:51:39 +02:00
Maxime Vincent
8e6826c4e2 Add LFS_READONLY define, to allow smaller builds providing read-only mode 2020-10-28 16:09:13 +01:00
Bill Gesner
10ac6b9cf0 add thread safe wrappers 2020-09-17 23:41:20 +00:00
Shiven Gupta
87a2cb0e41 Fix assert 2020-08-18 17:36:14 -04:00
Jürg Rast
6d0ec5e851 Added littlefs-python to the related projects section
As introduced in #297, I created a Python wrapper for littlefs. The wrapper supports two APIs: a C-like API, which is the same as in C, and a more Pythonic API, which is easier to use if you are more of a Python person. The wrapper is built against littlefs 2.2.1 at the moment.
2020-04-13 21:33:30 +02:00
Christopher Haster
4c9146ea53 Merge pull request #405 from rojer/mfe
Fix -Wmissing-field-initializers
2020-04-09 05:42:46 -05:00
Deomid "rojer" Ryabkov
5a9f38df01 Remove -Wno-missing-field-initializers 2020-04-06 19:51:19 +01:00
Deomid "rojer" Ryabkov
1b033e9ab6 Fix -Wmissing-field-initializers 2020-04-03 02:18:14 +01:00
Christopher Haster
a049f1318e Merge pull request #372 from ARMmbed/test-revamp
Rework test framework, fix a number of related bugs
2020-03-31 18:25:13 -05:00
Christopher Haster
7257681f5d Merge branch 'master' into test-revamp 2020-03-31 18:24:54 -05:00
Christopher Haster
2da340af69 Merge pull request #373 from henrygab/patch-1
Indicate C99 standard as target for LittleFS code
2020-03-31 18:22:48 -05:00
Christopher Haster
02881e591b Merge pull request #360 from jpdoyle/master
Fix incorrect comment on `lfs_npw2`
2020-03-31 18:22:41 -05:00
Christopher Haster
38024d5a17 Merge pull request #356 from zqb-all/patch-1
Update SPEC.md
2020-03-31 18:22:34 -05:00
Christopher Haster
4a9bac4418 Merge pull request #322 from hemmick/master
Allow debug prints without __VA_ARGS__ in non-MSVC
2020-03-31 18:22:27 -05:00
Christopher Haster
6121495444 Merge pull request #266 from FreddieChopin/revert-bypass-cache
Revert "Don't bypass cache in `lfs_cache_prog()` and `lfs_cache_read()`"
2020-03-31 18:22:19 -05:00
John Hemmick
6372f515fe Allow debug prints without __VA_ARGS__
__VA_ARGS__ are frustrating in C. Even for their main purpose (printf),
they fall short in that they don't have a _portable_ way to have zero
arguments after the format string in a printf call.

Even if we detect compilers and use ##__VA_ARGS__ where available, GCC
emits a warning with -pedantic that is _impossible_ to explicitly
disable.

This commit contains the best solution we can think of: a bit of
indirection that appends a hidden "%s" format specifier, paired with an
empty "" argument, to the end of the format string. This solution does not
work everywhere as it has a runtime cost, but it is hopefully ok for
debug statements.
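A hedged sketch of the indirection (the macro names here are placeholders, not necessarily littlefs's exact definitions):

    #include <stdio.h>

    // the inner macro always receives at least one argument after the
    // format string, because the outer macro appends an empty "" that the
    // hidden trailing "%s" consumes
    #define DEBUG_(fmt, ...) \
        printf("%s:%d: " fmt "%s\n", __FILE__, __LINE__, __VA_ARGS__)
    #define DEBUG(...) DEBUG_(__VA_ARGS__, "")

    // both forms now compile without ##__VA_ARGS__ or -pedantic warnings:
    //   DEBUG("mounting");
    //   DEBUG("bad block %d", 3);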
2020-03-29 21:58:49 -05:00
Christopher Haster
6622f3deee Bumped minor version to v2.2 2020-03-29 21:43:58 -05:00
Christopher Haster
5137e4b0ba Last minute tweaks to debug scripts
- Standardized littlefs debug statements to use hex prefixes and
  brackets for printing pairs.

- Removed the entry behavior for readtree and made -t the default.
  This is because 1. the CTZ skip-list parsing was broken, which is not
  surprising, and 2. the entry parsing was more complicated than useful.
  This functionality may be better implemented as a proper filesystem
  read script, complete with directory tree dumping.

- Changed test.py's --gdb argument to take [init, main, assert],
  this matches the names of the stages in C's startup.

- Added printing of tail to all mdir dumps in readtree/readmdir.

- Added a print for if any mdirs are corrupted in readtree.

- Added debug script side-effects to .gitignore.
2020-03-29 21:19:33 -05:00
Christopher Haster
ff84902970 Moved out block device tracing into separate define
Block device tracing has a lot of potential uses, of course debugging,
but it can also be used for profiling and externally tracking littlefs's
usage of the block device. However, block device tracing emits a massive
amount of output. So keeping block device tracing on by default limits
the usefulness of the filesystem tracing.

So, instead, I've moved the block device tracing into a separate
LFS_TESTBD_YES_TRACE define which switches on the LFS_TESTBD_TRACE
macro. Note that this means in order to get block device tracing, you
need to define both LFS_YES_TRACE and LFS_TESTBD_YES_TRACE. This is
needed as the LFS_TRACE definition is gated by LFS_YES_TRACE in
lfs_util.h.
2020-03-29 18:45:51 -05:00
Christopher Haster
01e42abd10 Merge pull request #401 from thrasher8390/bugfix/thrasher8390/issue-394-lookahead-buffer-corruption
Lookahead corruption fix given an IO Error during traversal
2020-03-29 17:59:00 -05:00
Christopher Haster
f9dbec3d92 Added test case catching issues with errors during a lookahead scan
Original issue found by thrasher8390
2020-03-29 14:12:58 -05:00
Derek Thrasher
f17d3d7eba Minor cleanup
- Removed the declaration of lfs_alloc_ack
- Consistent brackets
2020-03-29 14:12:30 -05:00
Derek Thrasher
5e5b5d8572 (chore) Updates from the PR: we decided not to move forward with changing v1 code since it can be risky. Let's improve the future! Also renamed and moved around the lookahead free/reset function 2020-03-29 14:12:30 -05:00
Derek Thrasher
d498b9fb31 (bugfix) Added a function to clear out all of the global 'free' information so that we can reset it after a failed traversal 2020-03-29 14:12:30 -05:00
Christopher Haster
4677421aba Added "evil" tests and detection/recovery from bad pointers and infinite loops
These two features have been much requested by users, and have even had
several PRs proposed to fix these in several cases. Before this, these
error conditions usually were caught by internal asserts, however
asserts prevented users from implementing their own workarounds.

It's taken me a while to provide/accept a useful recovery mechanism
(returning LFS_ERR_CORRUPT instead of asserting) because my original thinking
was that these error conditions only occur due to bugs in the filesystem, and
these bugs should be fixed properly.

While I still think this is mostly true, the point has been made clear
that being able to recover from these conditions is definitely worth the
code cost. Hopefully this new behaviour helps the longevity of devices
even if the storage code fails.

Another, less important, reason I didn't want to accept fixes for these
situations was the lack of tests that prove the code's value. This has
been fixed with the new testing framework thanks to the addition of
"internal tests" which can call C static functions and really take
advantage of the internal information of the filesystem.
2020-03-20 09:26:07 -05:00
WeiXiong Liao
64f70f51b0 lfs_bd_cmp() compares more bytes at one time
It's very slow to compare one byte at a time. Here is the
performance I get from a 128M SPI NAND with an NFTL, writing sequentially.

| file size | buffer size  | write speed  |
| 10 MB     | 0   B        | 3206.01 KB/s |
| 10 MB     | 1   B        | 2434.04 KB/s |
| 10 MB     | 2   B        | 2685.78 KB/s |
| 10 MB     | 4   B        | 2857.94 KB/s |
| 10 MB     | 8   B        | 3060.68 KB/s |
| 10 MB     | 16  B        | 3155.30 KB/s |
| 10 MB     | 64  B        | 3193.68 KB/s |
| 10 MB     | 128 B        | 3230.62 KB/s |
| 10 MB     | 256 B        | 3153.03 KB/s |

| 70 MB     | 0   B        | 2258.87 KB/s |
| 70 MB     | 1   B        | 1827.83 KB/s |
| 70 MB     | 2   B        | 1962.29 KB/s |
| 70 MB     | 4   B        | 2074.01 KB/s |
| 70 MB     | 8   B        | 2147.03 KB/s |
| 70 MB     | 64  B        | 2179.92 KB/s |
| 70 MB     | 256 B        | 2179.96 KB/s |

The 0-byte size means no validation and the 1-byte size is what
littlefs did before. Based on the above table, and to save memory,
comparing 8 bytes at a time is the best choice.
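A standalone sketch of the idea (assumed names, not the actual lfs_bd_cmp code): compare against the on-disk data in small chunks so each read call amortizes its overhead over several bytes:

    #include <stdint.h>
    #include <string.h>

    typedef int (*read_fn)(void *ctx, uint32_t off, void *buf, uint32_t size);

    static int bd_cmp(read_fn bd_read, void *ctx,
            uint32_t off, const uint8_t *data, uint32_t size) {
        uint8_t dat[8];                         // compare 8 bytes at a time
        for (uint32_t i = 0; i < size; i += sizeof(dat)) {
            uint32_t diff = size - i < sizeof(dat) ? size - i : (uint32_t)sizeof(dat);
            int err = bd_read(ctx, off + i, dat, diff);
            if (err) {
                return err;                     // propagate read errors
            }
            int res = memcmp(dat, data + i, diff);
            if (res) {
                return res;                     // mismatch found
            }
        }
        return 0;                               // equal
    }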

Signed-off-by: WeiXiong Liao <liaoweixiong@allwinnertech.com>
2020-03-13 15:23:20 +08:00
Chris Desjardins
cb26157880 Change assert to runtime check.
I had a system that was constantly hitting this assert; after making
this change it recovered immediately.
2020-02-23 22:18:08 -06:00
Christopher Haster
a7dfae4526 Minor tweaks to debugging scripts, fixed explode_asserts.py off-by-1
- Changed readmdir.py to print the metadata pair and revision count,
  which is useful when debugging commit issues.
- Added truncated data view to readtree.py by default. This does mean
  readtree.py must read all files on the filesystem to show the
  truncated data; hopefully this does not end up being a problem.
- Made overall representation hopefully more readable, including moving
  superblock under the root dir, userattrs under files, fixing a gstate
  rendering issue.
- Added rendering of soft-tails as dotted-arrows, hopefully this isn't
  too noisy.
- Fixed explode_asserts.py off-by-1 in #line mapping caused by a strip
  call in the assert generation eating newlines. The script matches
  line numbers between the original+modified files by emitting assert
  statements that use the same number of lines. An off-by-1 here causes
  the entire file to map lines incorrectly, which can be very annoying.
2020-02-22 23:50:03 -06:00
Christopher Haster
50fe8ae258 Renamed test_format -> test_superblocks, tweaked superblock tests
With the superblock expansion stuff, the test_format tests have grown
to test more advanced superblock-related features. This is fine but
deserves a rename so it's more clear.

Also fixed a typo that meant tests never ran with block cycles.
2020-02-22 23:35:28 -06:00
Christopher Haster
0990296619 Limited byte-level tests to native testing due to time
Byte-level writes are expensive and not suggested (caches >= 4 bytes
make much more sense); however, there are many corner cases with
byte-level writes that can be easy to miss (power-loss leaving single
bytes written to disk).

Unfortunately, byte-level writes mixed with power-loss testing, the
Travis infrastructure, and Arm Thumb instruction set simulation
exceeds the 50-minute budget Travis allocates for jobs.

For now I'm disabling the byte-level tests under Qemu, with the hope that
performance improvements in littlefs will let us turn these tests back
on in the future.
2020-02-18 18:05:08 -06:00
Christopher Haster
d04b077506 Fixed minor things to get CI passing again
- Added caching to Travis install dirs, because otherwise
  pip3 install fails randomly
- Increased size of littlefs-fuse disk because test script has
  a larger footprint now
- Skip a couple of reentrant tests under byte-level writes because
  the tests just take too long and cause Travis to bail due to no
  output for 10m
- Fixed various Valgrind errors
  - Suppressed uninit checks for tests where LFS_BLOCK_ERASE_VALUE == -1.
    In this case rambd goes uninitialized, which is fine for rambd's
    purposes. Note I couldn't figure out how to limit this suppression
    to only the malloc in rambd, this doesn't seem possible with Valgrind.
  - Fixed memory leaks in exhaustion tests
  - Fixed off-by-1 string null-terminator issue in paths tests
- Fixed lfs_file_sync issue revealed by fixing memory leaks
  in exhaustion tests. Getting ENOSPC during a file write puts the file
  in a bad state where littlefs doesn't know how to write it out safely.
  In this case, lfs_file_sync and lfs_file_close return 0 without
  writing out state so that device-side resources can still be cleaned
  up. To recover from ENOSPC, the file needs to be reopened and the
  writes recreated. Not sure if there is a better way to handle this.
- Added some quality-of-life improvements to Valgrind testing
  - Fit Valgrind messages into truncated output when not in verbose mode
  - Turned on origin tracking
2020-02-18 18:05:03 -06:00
Christopher Haster
c7987a3162 Restructured .travis.yml to span more jobs
The core of littlefs's CI testing is the full test suite, `make test`, run
under a number of configurations:

- Processor architecture:
  - x86 (native)
  - Arm Thumb
  - MIPS
  - PowerPC
- Storage geometry:
  - rs=16   ps=16   cs=64   bs=512   (default)
  - rs=1    ps=1    cs=64   bs=4KiB  (NOR flash)
  - rs=512  ps=512  cs=512  bs=512   (eMMC)
  - rs=4KiB ps=4KiB cs=4KiB bs=32KiB (NAND flash)
- Other corner cases:
  - no intrinsics
  - no inline
  - byte-level read/writes
  - single block-cycles
  - odd block counts
  - odd block sizes

The number of different configurations we need to test quickly exceeds the
50 minute time limit Travis has on jobs. Fortunately, we can split these
tests out into multiple jobs. This seems to be the intended course of
action for large CI "builds" in Travis, as this gives Travis a finer
grain of control over limiting builds.

Unfortunately, this created a couple issues:

1. The Travis configuration isn't actually that flexible. It allows a
   single "matrix expansion" which can be generated from top-level lists
   of different configurations. But it doesn't let you generate a matrix
   from two separate environment variable lists (for arch + geometry).

   Without multiple matrix expansions, we're stuck writing out each test
   permutation by hand.

   On the bright-side, this was a good chance to really learn how YAML
   anchors work. I'm torn because on one hand anchors add what feels
   like unnecessary complexity to a config language, on the other hand,
   they did help quite a bit in working around Travis's limitations.

2. Now that we have 47 jobs instead of 7, reporting a separate status
   for each job stops making sense.

   What I've opted for here is to use a special NAME variable to
   deduplicate jobs, and used a few state-less rules to hopefully have
   the reported status make sense most of the time.

   - Overwrite "pending" statuses so that the last job to start owns the
     most recent "pending" status
   - Don't overwrite "failure" statuses unless the job number matches
     our own (in the case of CI restarts)
   - Don't write "success" statuses unless the job number matches our
     own, this should delay a green check-mark until the last-to-start
     job finishes
   - Always overwrite non-failures with "failure" statuses

   This does mean a temporary "success" may appear if the last job
   terminates before earlier jobs. But this is the simplest solution
   I can think of without storing some complex state somewhere.

   Note we can only report the size this way because it's cheap to
   calculate in every job.
2020-02-18 17:34:23 -06:00
Christopher Haster
dcae185a00 Fixed typo in LFS_MKTAG_IF_ELSE 2020-02-12 11:31:34 -06:00
Christopher Haster
f4b17b379c Added test.py support for tmpfs-backed disks
RAM-backed testing is faster than file-backed testing. This is why
test.py uses rambd by default.

So why add support for tmpfs-backed disks if we can already run tests in
RAM? For reentrant testing.

Under reentrant testing we simulate power-loss by forcefully exiting the
test program at specific times. To make this power-loss meaningful, we need to
persist the disk across these power-losses. However, it's interesting to
note this persistence doesn't need to be actually backed by the
filesystem.

It may be possible to re-architect the tests to simulate power-loss a
different way, by say, using coroutines or setjmp/longjmp to leave
behind ongoing filesystem operations without terminating the program
completely. But at this point, I think it's best to work with what we
have.

And simply putting the test disks into a tmpfs mount-point seems to
work just fine.

Note this does force serialization of the tests, which isn't required
otherwise. Currently they are only serialized due to limitations in
test.py. If a future change wants to parallelize the tests, it may need
to rework RAM-backed reentrant tests.
2020-02-12 10:48:54 -06:00
Christopher Haster
9f546f154f Updated .travis.yml and added additional geometry constraints
Moved .travis.yml over to use the new test framework. A part of this
involved testing all of the configurations ran on the old framework
and deciding which to carry over. The new framework duplicates some of
the cases tested by the configurations so some configurations could be
dropped.

The .travis.yml includes some extreme ones, such as no inline files,
relocations every cycle, no intrinsics, power-loss every byte, unaligned
block_count and lookahead, and odd read_sizes.

There were several configurations where some tests failed because of
limitations in the tests themselves, so many conditions were added
to make sure the configurations can run on as many tests as possible.
2020-02-11 16:01:57 -06:00
Christopher Haster
b69cf890e6 Fixed CRC check when prog_size causes multiple CRCs per commit
This is a bit of a strange case that can be caused by storage with
very large prog sizes, such as NAND flash. We only have 10 bits to store
the size of our padding, so when the prog_size gets larger than 1024
bytes, we have to use multiple padding tags to commit to the next
prog_size boundary.

This causes some complication for the new logic that checks CRCs in case
our block becomes "readonly" and contains existing commits that just happen
to match our new commit size.

Here we just check the CRC of the first commit. This isn't perfect but
does protect against pure "readonly" blocks.
2020-02-09 22:43:20 -06:00
Christopher Haster
02c84ac5f4 Cleaned up dependent fixes on branch
These should probably have been cleaned up in each commit to allow
cherry-picking, but due to time I haven't been able to.

- Went with creating an mdir copy in lfs_dir_commit. This handles a
  number of related cleanup issues in lfs_dir_compact and it does so
  more robustly. As a plus we can use the copy to update dependencies
  in the mlist.

- Eliminated code left by the ENOSPC file outlining

- Cleaned up TODOs and lingering comments

- Changed the reentrant many directory create/rename/remove test to use
  a smaller set of directories because of space issues when
  READ/PROG_SIZE=512
2020-02-09 12:37:39 -06:00
Christopher Haster
6530cb3a61 Fixed lfs_fs_size doubling metadata-pairs
This was caused by the previous fix for allocations during
lfs_fs_deorphan in this branch. To catch half-orphans during block
allocations we needed to duplicate all metadata-pairs reported to
lfs_fs_traverse. Unfortunately this causes lfs_fs_size to report 2x the
number of metadata-pairs, which would undoubtably confuse users.

The fix here is inelegantly simple: just do a different traversal for
allocations and size measurements. It reuses the same code but touches
slightly different sets of blocks.

Unfortunately, this causes the public lfs_fs_traverse and lfs_fs_size
functions to split in how they report blocks. This is technically
allowed, since lfs_fs_traverse may report blocks multiple times due to
CoW behavior, however it's undesirable and I'm sure there will be some
confusion.

But I don't have a better solution, so from this point lfs_fs_traverse
will be reporting 2x metadata-blocks and shouldn't be used for finding
the number of available blocks on the filesystem.
2020-02-09 12:00:23 -06:00
Christopher Haster
fe957de892 Fixed broken wear-leveling when block_cycles = 2n-1
This was an interesting issue found during a GitHub discussion with
rmollway and thrasher8390.

Blocks in the metadata-pair are relocated every "block_cycles", or, more
mathy, when rev % block_cycles == 0 as long as rev += 1 every block write.

But there's a problem: rev isn't += 1 every block write. There are two
blocks in a metadata-pair, so looking at it from each block's
perspective, rev += 2 every block write.

This leads to a sort of aliasing issue, where, if block_cycles is
divisible by 2, one block in the metadata-pair is always relocated, and
the other block is _never_ relocated. Causing a complete failure of
block-level wear-leveling.

Fortunately, because of a previous workaround to avoid block_cycles = 1
(since this will cause the relocation algorithm to never terminate), the
actual math is rev % (block_cycles+1) == 0. This means the bug only
shows its head in the much less likely case where block_cycles is a
multiple of 2 plus 1, or, in more mathy terms, block_cycles = 2n+1 for
some n.

To work around this we can bitwise-or our block_cycles with 1 to force it
to never be a multiple of 2.

(Maybe we should do this during initialization? But then block_cycles
would need to be mutable.)
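For reference, a hedged sketch of the relocation condition with the workaround folded in (this mirrors how the check reads in current littlefs to the best of my knowledge, but treat the exact expression as an assumption); or-ing the modulus with 1 keeps it odd, so with rev stepping by 2 both blocks of the pair still take turns being relocated:

    #include <stdbool.h>
    #include <stdint.h>

    static bool should_relocate(uint32_t rev, uint32_t block_cycles) {
        // block_cycles == 0 disables wear-leveling entirely
        return block_cycles > 0
                && rev % ((block_cycles + 1) | 1) == 0;
    }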

---

There are a few unrelated changes mixed into this commit that shouldn't be
there since I added this as part of a branch of bug fixes I'm putting
together rather hastily, so unfortunately this is not easily cherry-pickable.
2020-02-09 12:00:23 -06:00
Christopher Haster
6a550844f4 Modified readmdir/readtree to make reading non-truncated data easier
Added indentation so there is a clearer separation between the tag
description and tag data.

Also took the best parts of readmdir.py and added it to readtree.py.
Initially I was thinking it was best for these to have completely
independent data representations, since you could always call readtree
to get more info, but this becomes tedious when needing to look at
low-level tag info across multiple directories on the filesystem.
2020-02-09 12:00:23 -06:00
Christopher Haster
f9c2fd93f2 Removed file outlining on ENOSPC in lfs_file_sync
This was initially added as protection against the case where a file
grew to no longer fit in a metadata-pair. While in most cases this
should be caught by the math in lfs_file_write, it doesn't handle a
problem that can happen if the files metadata is large enough that even
small inline files can't fit. This can happen if you combine a small
block size with large file names and many custom attributes.

But trying to outline on ENOSPC creates a lot of problems.

If we are actually low on space, this is one of the worst things we can
do. Inline files take up less space than CTZ skip-lists, but inline
files are rendered useless if we outline inline files as soon as we run
low on space.

On top of this, the outlining logic tries multiple mdir commits if it
gets ENOSPC, which can hide errors if ENOSPC is returned for other
reasons.

In a perfect world, we would be using a different error code for
no-room-in-metadata-pair, and no-blocks-on-disk.

For now I've removed the outlining logic and we will need to figure out
how to handle this situation more robustly.
2020-02-09 12:00:23 -06:00
Christopher Haster
44d7112794 Fixed tests/*.toml.* in .gitignore
Running test.py creates a log of garbage here
2020-02-09 12:00:22 -06:00
Christopher Haster
77e3078b9f Added/fixed tests for noop writes (where bd error can't be trusted)
It's interesting how many ways block devices can show failed writes:
1. prog can error
2. erase can error
3. read can error after writing (ECC failure)
4. prog doesn't error but doesn't write the data correctly
5. erase doesn't error but doesn't erase correctly

Can read fail without an error? Yes, though this appears the same as
prog and erase failing.

These weren't all simulated by testbd since I unintentionally assumed
the block device could always error. Fixed by adding additional bad-block
behaviors to testbd.

Note: This also includes a small fix where we can miss bad writes if the
underlying block device contains a valid commit with the exact same
size in the exact same offset.
2020-02-09 12:00:22 -06:00
Christopher Haster
517d3414c5 Fixed more bugs, mostly related to ENOSPC on different geometries
Fixes:
- Fixed reproducibility issue when we can't read a directory revision
- Fixed incorrect erase assumption if lfs_dir_fetch exceeds block size
- Fixed cleanup issue caused by lfs_fs_relocate failing when trying to
  outline a file in lfs_file_sync
- Fixed cleanup issue if we run out of space while extending a CTZ skip-list
- Fixed missing half-orphans when allocating blocks during lfs_fs_deorphan

Also:
- Added cycle-detection to readtree.py
- Allowed pseudo-C expressions in test conditions (and it's
  beautifully hacky, see line 187 of test.py)
- Better handling of ctrl-C during test runs
- Added build-only mode to test.py
- Limited stdout of test failures to 5 lines unless in verbose mode

Explanation of fixes below

1. Fixed reproducibility issue when we can't read a directory revision

   An interesting subtlety of the block-device layer is that the
   block-device is allowed to return LFS_ERR_CORRUPT on reads to
   untouched blocks. This can easily happen if a user is using ECC or
   some sort of CMAC on their blocks. Normally we never run into this,
   except for the optimization around directory revisions where we use
   uninitialized data to start our revision count.

   We correctly handle this case by ignoring what's on disk if the read
   fails, but end up using uninitialized RAM instead. This is not an issue
   for normal use, though it can lead to a small information leak.
   However it creates a big problem for reproducibility, which is very
   helpful for debugging.

   I ended up running into a case where the RAM value for the revision
   count was different, causing two identical runs to wear-level at
   different times, leading to one version running out of space before a
   bug occurred because it expanded the superblock early.

2. Fixed incorrect erase assumption if lfs_dir_fetch exceeds block size

   This could be caused if the previous tag was a valid commit and we
   lost power causing a partially written tag as the start of a new
   commit.

   Fortunately we already have a separate condition for exceeding the
   block size, so we can force that case to always treat the mdir as
   unerased (see the sketch after this list).

3. Fixed cleanup issue caused by lfs_fs_relocate failing when trying to
   outline a file in lfs_file_sync

   Most operations involving metadata-pairs treat the mdir struct as
   entirely temporary and throw it out if any error occurs. Except for
   lfs_file_sync since the mdir is also a part of the file struct.

   This is relevant because of a cleanup issue in lfs_dir_compact that
   usually doesn't have side-effects. The issue is that lfs_fs_relocate
   can fail. It needs to allocate new blocks to relocate to, and as the
   disk reaches its end of life, it can fail with ENOSPC quite often.

   If lfs_fs_relocate fails, the containing lfs_dir_compact would return
   immediately without restoring the previous state of the mdir. If a new
   commit comes in on the same mdir, the old state left there could
   corrupt the filesystem.

   It's interesting to note this is forced to happen in lfs_file_sync,
   since it always tries to outline the file if it gets ENOSPC (ENOSPC
   can mean both no blocks to allocate and that the mdir is full). I'm
   not actually sure this bit of code is necessary anymore, we may be
   able to remove it.

4. Fixed cleanup issue if we run out of space while extending a CTZ
   skip-list

   The actual CTZ skip-list logic itself hasn't been touched in more
   than a year at this point, so I was surprised to find a bug here. But
   it turns out the CTZ skip-list could be put in an invalid state if we
   run out of space while trying to extend the skip-list.

   This only becomes a problem if we keep the file open, clean up some
   space elsewhere, and then continue to write to the open file without
   modifying it. Fortunately an easy fix.

5. Fixed missing half-orphans when allocating blocks during
   lfs_fs_deorphan

   This was a really interesting bug. Normally, we don't have to worry
   about allocations, since we force consistency before we are allowed
   to allocate blocks. But what about the deorphan operation itself?
   Don't we need to allocate blocks if we relocate while deorphaning?

   It turns out the deorphan operation can lead to allocating blocks
   while there's still orphans and half-orphans on the threaded
   linked-list. Orphans aren't an issue, but half-orphans may contain
   references to blocks in the outdated half, which doesn't get scanned
   during the normal allocation pass.

   Fortunately we already fetch directory entries to check CTZ lists, so
   we can also check half-orphans here. However this causes
   lfs_fs_traverse to duplicate all metadata-pairs, not sure what to do
   about this yet.
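
For fix 2, a hedged sketch of the shape of the check, using names from
lfs.c but simplified and not guaranteed to match the actual diff:

    // while scanning commits in lfs_dir_fetchmatch, if the next tag would
    // run past the end of the block, the "commit" is likely a partially
    // written tag left by powerloss, so treat the mdir as unerased rather
    // than something we can safely append to
    if (off + lfs_tag_dsize(tag) > lfs->cfg->block_size) {
        dir->erased = false;
        break;
    }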
2020-02-09 11:54:22 -06:00
zhuangqiubin
4fb188369d Update SPEC.md
1. Fix size in Layout of the CRC tag
2. Update (size) to (size * 8)
2020-02-02 17:42:42 +08:00
Henry Gabryjelski
c8e9a64a21 Indicate C99 standard as target for LittleFS code
Resolve #358
2020-01-27 21:51:12 -08:00
Christopher Haster
aab6aa0ed9 Cleaned up test script and directory naming
- Removed old tests and test scripts
- Reorganize the block devices to live under one directory
- Plugged new test framework into Makefile

renamed:
- scripts/test_.py -> scripts/test.py
- tests_ -> tests
- {file,ram,test}bd/* -> bd/*

It took a surprising amount of effort to make the Makefile behave since
it turns out the "test_%" rule could override "tests/test_%.toml.test"
which is generated as part of test.py.
2020-01-27 10:16:29 -06:00
Christopher Haster
52ef0c1c9e Fixed a crazy consistency issue in test.py
The root of the problem was the notorious Python quirk with mutable
default parameters. The default defines for the TestSuite class ended
up being mutated as the class determined the permutations to test,
corrupting other test's defines.

However, the only define that was mutated this way was the CACHE_SIZE
config in test_entries.

The crazy thing was how this small innocuous change would cause
"./scripts/test.py -nr test_relocations" and "./scripts/test.py -nr"
to drift out of sync only after a commit spanning the different cache
sizes would be written out with a different number of prog calls. This
offset the power-cycle counter enough to cause one case to make it to
an erase, and the other to not.

Normally, the difference between a successful/unsuccessful erase
wouldn't change the result of a test, but in this case it offset the
revision count used for wear-leveling, causing one run to expand the
superblock and the other to not.

This change to the filesystem would then propagate through the rest of
the test, making it difficult to reproduce test failures.

Fortunately the fix was to just make a copy of the default define
dictionary. This should also prevent accidental mutation of dicts
belonging to our caller.

Oh, also fixed a buffer overflow in test_files.
2020-01-26 23:53:53 -06:00
Christopher Haster
b9d0695e0a Rewrote explode_asserts.py to be more efficient
Normally I wouldn't consider optimizing this sort of script, but
explode_asserts.py proved to be terribly inefficient and dominated
the build time for running tests. It was slow enough to be distracting
when attempting to test patches while debugging. Just running
explode_asserts.py was ~10x slower than the rest of the compilation
process.

After implementing a proper tokenizer and switching to a handwritten
recursive descent parser, I was able to speed up explode_asserts.py
by ~5x and make test compilation much more tolerable.

I don't think this was a limitation of parsy, but rather switching
to a recursive descent parser made it much easier to find the hotspots
where parsing was wasting cycles (string slicing for one).

It's interesting to note that while the assert patterns can be parsed
with a LL(1) parser (by dumping seen tokens if a pattern fails),
I didn't bother as it's much easier to write the patterns with LL(k)
and parsing asserts is predicated by the "assert" string.

A few other tweaks:
- allowed combining different test modes in one run
- added a --no-internal option
- changed test_.py to start counting cases from 1
- added assert(memcmp(a, b) == 0) matching
- added better handling of string escapes in assert messages

time to run tests:
before: 1m31.122s
after:  0m41.447s
2020-01-26 23:53:53 -06:00
Christopher Haster
a5d614fbfb Added tests for power-cycled-relocations and fixed the bugs that fell out
The power-cycled-relocation test with random renames has been the most
aggressive test applied to littlefs so far, with:
- Random nested directory creation
- Random nested directory removal
- Random nested directory renames (this could make the
  threaded linked-list very interesting)
- Relocating blocks every write (maximum wear-leveling)
- Incrementally cycling power every write

Also added a couple other tests to test_orphans and test_relocations.

The good news is the added testing worked well, it found quite a number
of complex and subtle bugs that have been difficult to find.

1. It's actually possible for our parent to be relocated and go out of
   sync in lfs_mkdir. This can happen if our predecessor's predecessor
   is our parent as we are threading ourselves into the filesystem's
   threaded list. (note this doesn't happen if our predecessor _is_ our
   parent, as we then update our parent in a single commit).

   This is annoying because it only happens if our parent is a long (>1
   pair) directory, otherwise we wouldn't need to catch relocations.
   Fortunately we can reuse the internal open file/dir linked-list to
   catch relocations easily, as long as we're careful to unhook our
   parent whenever lfs_mkdir returns.

2. Even more surprising, it's possible for the child in lfs_remove
   to be relocated while we delete the entry from our parent. This
   can happen if we are our own parent's predecessor, since we need
   to be updated then if our parent relocates.

   Fortunately we can also hook into the open linked-list here.

   Note this same issue was present in lfs_rename.

   Fortunately, this means now all fetched dirs are hooked into the
   open linked-list if they are needed across a commit. This means
   we shouldn't need assumptions about tree movement for correctness.

3. lfs_rename("deja/vu", "deja/vu") with the same source and destination
   was broken and tried to delete the entry twice.

4. Managing gstate deltas when we lose power during relocations was
   broken. And unfortunately complicated.

   The issue happens when we lose power during a relocation while
   removing a directory.

   When we remove a directory, we need to move the contents of its
   gstate delta to another directory or we'll corrupt littlefs gstate.
   (gstate is an xor of all deltas on the filesystem). We used to just
   xor the gstate into our parent's gstate, however this isn't correct.

   The gstate isn't built out of the directory tree, but rather out of
   the threaded linked-list (which exists to make collecting this
   gstate efficient).

   Because we have to remove our dir in two operations, there's a point
   where both the updated parent and child can exist in the threaded
   linked-list and duplicate the child's gstate delta.

     .--------.
   ->| parent |-.
     | gstate | |
   .-|   a    |-'
   | '--------'
   |     X <- child is orphaned
   | .--------.
   '>| child  |->
     | gstate |
     |   a    |
     '--------'

   What we need to do is save our child's gstate and only give it to our
   predecessor, since this finalizes the removal of the child.

   However we still need to make valid updates to the gstate to mark
   that we've created an orphan when we start removing the child.

   This led to a small rework of how the gstate is handled. Now we have
   a separation of the gpending state that should be written out ASAP
   and the gdelta state that is collected from orphans awaiting
   deletion.

5. lfs_deorphan wasn't actually able to handle deorphaning/desyncing
   more than one orphan after a power-cycle. Having more than one orphan
   is very rare, but of course very possible. Fortunately this was just
   a mistake with using a break in the deorphan loop, perhaps left over
   from v1 where multiple orphans weren't possible?

   Note that we use a continue to force a refetch of the orphaned block.
   This is needed in the case of a half-orphan, since the fetched
   half-orphan may have an outdated tail pointer.
2020-01-26 23:45:54 -06:00
Christopher Haster
f4b6a6b328 Fixed issues with neighbor updates during moves
The root of the problem was some assumptions about what tags could be
sent to lfs_dir_commit.

- The first assumption is that there could be only one splice (create/delete)
  tag at a time, which is trivially broken by the core commit in lfs_rename.

- The second assumption is that there is at most one create and one delete in
  a single commit. This is less obvious but turns out to not be true in
  the case that we rename a file such that it overwrites another file in
  the same directory (1 delete for source file, 1 delete for destination).

- The third assumption was that there was an ordering to the
  delete/creates passed to lfs_dir_commit. It may be possible to force all
  deletes to follow creates by rearranging the tags in lfs_rename, but
  this risks overflowing tag ids.

The way the lfs_dir_commit first collected the "deletetag" and "createtag"
broke all three of these assumptions. And because we lose the ordering
information we can no longer apply the directory changes to open files
correctly. The file ids may be shifted in a way that doesn't reflect the
actual operations on disk.

These problems were made worse by lfs_dir_commit cleaning up moves
implicitly, which also creates deletes implicitly. While cleaning up moves
in lfs_dir_commit may save some code size, it makes the commit logic much more
difficult to implement correctly.

This bug turned into pulling out a dead tree stump, roots and all.

I ended up reworking how lfs_dir_commit updates open files so that it
has less assumptions, now it just traverses the commit tags multiple
times in order to update file ids after a successful commit in the
correct order.

This also got rid of the dir copy by carefully updating split dirs
after all files have an up-to-date copy of the original dir.

I also just removed the implicit move cleanup. It turns out the only
commits that can occur before we have cleaned up the move is in
lfs_fs_relocate, so it was simple enough to explicitly handle this case
when we update our parent and pred during a relocate.

Cases where we may need to fix moves:
- In lfs_rename when we move a file/dir
- In lfs_demove if we lose power
- In lfs_fs_relocate if we have to relocate our parent and we find it
  had a pending move (or else the move will be outdated)
- In lfs_fs_relocate if we have to relocate our predecessor and we find it
  had a pending move (or else the move will be outdated)

Note the two cases in lfs_fs_relocate may be recursive. But
lfs_fs_relocate can only trigger other lfs_fs_relocates so it's not
possible for pending moves to spill out into other filesystem commits.

And of course, I added several tests to cover these situations. Hopefully
the rename-with-open-files logic should be fairly locked down now.

found with initial fix by eastmoutain
2020-01-20 19:27:27 -06:00
Christopher Haster
9453ebd15d Added/improved disk-reading debug scripts
Also fixed a bug in dir splitting when there's a large number of open
files, which was the main reason I was trying to make it easier to debug
disk images.

One part of the recent test changes was to move away from the
file-per-block emubd and instead simulate storage with a single
contiguous file. The file-per-block format was marginally useful
at the beginning, but as the remaining bugs get more subtle, it
becomes more useful to inspect littlefs through scripts that
make the underlying metadata more human-readable.

The key benefit of switching to a contiguous file is these same
scripts can be reused for real disk images and can even read through
/dev/sdb or similar.

- ./scripts/readblock.py disk block_size block

  off       data
  00000000: 71 01 00 00 f0 0f ff f7 6c 69 74 74 6c 65 66 73  q.......littlefs
  00000010: 2f e0 00 10 00 00 02 00 00 02 00 00 00 04 00 00  /...............
  00000020: ff 00 00 00 ff ff ff 7f fe 03 00 00 20 00 04 19  ...............
  00000030: 61 00 00 0c 00 62 20 30 0c 09 a0 01 00 00 64 00  a....b 0......d.
  ...

  readblock.py prints a hex dump of a given block on disk. It's basically
  just "dd if=disk bs=block_size count=1 skip=block | xxd -g1 -" but with
  less typing.

- ./scripts/readmdir.py disk block_size block1 block2

  off       tag       type            id  len  data (truncated)
  0000003b: 0020000a  dir              0   10  63 6f 6c 64 63 6f 66 66 coldcoff
  00000049: 20000008  dirstruct        0    8  02 02 00 00 03 02 00 00 ........
  00000008: 00200409  dir              1    9  68 6f 74 63 6f 66 66 65 hotcoffe
  00000015: 20000408  dirstruct        1    8  fe 01 00 00 ff 01 00 00 ........

  readmdir.py prints info about the tags in a metadata pair on disk. It
  can print the currently active tags as well as the raw log of the
  metadata pair.

- ./scripts/readtree.py disk block_size

  superblock "littlefs"
    version v2.0
    block_size 512
    block_count 1024
    name_max 255
    file_max 2147483647
    attr_max 1022
  gstate 0x000000000000000000000000
  dir "/"
  mdir {0x0, 0x1} rev 3
  v id 0 superblock "littlefs" inline size 24
  mdir {0x77, 0x78} rev 1
    id 0 dir "coffee" dir {0x1fc, 0x1fd}
  dir "/coffee"
  mdir {0x1fd, 0x1fc} rev 2
    id 0 dir "coldcoffee" dir {0x202, 0x203}
    id 1 dir "hotcoffee" dir {0x1fe, 0x1ff}
  dir "/coffee/coldcoffee"
  mdir {0x202, 0x203} rev 1
  dir "/coffee/warmcoffee"
  mdir {0x200, 0x201} rev 1

  readtree.py parses the littlefs tree and prints info about the
  semantics of what's on disk. This includes the superblock,
  global-state, and directories/metadata-pairs. It doesn't print
  the filesystem tree though, that could be a different tool.
2020-01-20 19:27:27 -06:00
Christopher Haster
fb65057a3c Restructured block devices again for better test exploitation
Also finished migrating tests with test_relocations and test_exhaustion.

The issue I was running into when migrating these tests was a lack of
flexibility with what you could do with the block devices. It was
possible to hack in some hooks for things like bad blocks and power
loss, but it wasn't clean or easily extendable.

The solution here was to just put all of these test extensions into a
third block device, testbd, that uses the other two example block
devices internally.

testbd has several useful features for testing. Note this makes it a
pretty terrible block device _example_ since these hooks look more
complicated than a block device needs to be.

- testbd can simulate different erase values, supporting 1s, 0s, other byte
  patterns, or no erases at all (which can cause surprising bugs). This
  actually depends on the simulated erase values in rambd and filebd.

  I did try to move this out of rambd/filebd, but it's not possible to
  simulate erases in testbd without buffering entire blocks and creating
  an excessive amount of extra write operations.

- testbd also helps simulate power-loss by containing a "power cycles"
  counter that is decremented every write operation until it calls exit.

  This is notably faster than the previous gdb approach, which is
  valuable since the reentrant tests tend to take a while to resolve.

- testbd also tracks wear, which can be manually set and read. This is
  very useful for testing things like bad block handling, wear leveling,
  or even changing the effective size of the block device at runtime.
2020-01-20 19:27:24 -06:00
Christopher Haster
ecc2857c0e Migrated bad-block tests
Even with adding better reentrance testing, the bad-block tests are
still very useful at isolating the block eviction logic.

This also required rewriting a bit of the internal testing wirework
to allow custom block devices, which opens up quite a few more strategies
for testing.
2020-01-14 12:04:20 -06:00
Christopher Haster
5181ce66cd Migrated the first of the tests with internal knowledge
Both test_move and test_orphan needed internal knowledge which comes
with the addition of the "in" attribute. This was in the plan for the
test-revamp from the beginning as it really opens up the ability to
write more unit-style-tests using internal knowledge of how littlefs
works. More unit-style-tests should help _fix_ bugs by limiting the
scope of the test and where the bug could be hiding.

The "in" attribute effectively runs tests _inside_ the .c file
specified, giving the test access to all static members without
needing to change their visibility.
2020-01-14 09:14:01 -06:00
Christopher Haster
b06ce54279 Migrated the bulk of the feature-specific tests
This involved some minor tweaks for the various types of tests, added
predicates to the test framework (necessary for test_entries and
test_alloc), and cleaned up some of the testing semantics such as
reporting how many tests are filtered, showing permutation config on
the result screen, and properly inheriting suite config in cases.
2020-01-12 22:21:09 -06:00
Christopher Haster
1d2688a771 Migrated test_files, test_dirs, test_format suites to new framework
Also some tweaks to test_.py to capture Makefile warnings and print
test locations a bit better.
2020-01-11 15:58:17 -06:00
Christopher Haster
eeaf536eca Replaced emubd with rambd and filebd
The idea behind emubd (file per block) was neat, but it doesn't add much
value over a block device that just operates on a single linear file
(other than adding a significant amount of overhead). Initially it
helped with debugging, but when the metadata format became more complex
in v2, most debugging ends up going through the debug.py script anyways.

Aside from being simpler, moving to filebd means it is also possible to
mount disk images directly.

Also introduced rambd, which keeps the disk contents in RAM. This is
very useful for testing where it increases the speed _significantly_.
- test_dirs w/ filebd - 0m7.170s
- test_dirs w/ rambd  - 0m0.966s

These follow the emubd model of using the lfs_config for geometry. I'm
not convinced this is the best approach, but it gets the job done.

I've also added lfs_rambd_createcfg to add additional config similar to
lfs_file_opencfg. This is useful for specifying erase_value, which tells
the block device to simulate erases similar to flash devices. Note that
the default (-1) meets the minimum block device requirements and is the
most performant.
2020-01-02 18:36:53 -06:00
Joe Doyle
626006af0c Fix incorrect comment on lfs_npw2
`lfs_npw2` returns a value v such that `2^v >= a` and `2^(v-1) < a`, but
the previous comment incorrectly describes it as "less than or equal to
a".
2020-01-02 13:46:07 -08:00
Christopher Haster
53d2b02f2a Added reentrant and gdb testing mechanisms to test framework
Aside from reworking the internals of test_.py to work well with
inherited TestCase classes, this also provides the two main features
that were the main reason for revamping the test framework

1. ./scripts/test_.py --reentrant

   Runs reentrant tests (tests with reentrant=true in the .toml
   configuration) under gdb such that the program is killed on every
   call to lfs_emubd_prog or lfs_emubd_erase.

   Currently this just increments a number of prog/erases to skip, which
   means it doesn't necessarily check every possible branch of the test,
   but this should still provide a good coverage of power-loss tests.

2. ./scripts/test_.py --gdb

   Run the tests and if a failure is hit, drop into GDB. In theory this
   will be very useful for reproducing and debugging test failures.

   Note this can be combined with --reentrant to drop into GDB on the
   exact cycle of power-loss where the tests fail.
2019-12-31 11:51:52 -06:00
Christopher Haster
ed8341ec4c Reworked permutation generation in test framework and cleanup
- Reworked how permutations work
  - Now with global defines as well (apply to all code)
  - Also supports lists of different permutation sets
- Added better cleanup in tests and "make clean"
2019-12-30 13:01:08 -06:00
Christopher Haster
f42e007709 Created initial implementation of revamped test.py
This is the start of reworking littlefs's testing framework based on
lessons learned from the initial testing framework.

1. The testing framework needs to be _flexible_. It was hacky, which by
   itself isn't a downside, but it wasn't _flexible_. This limited what
   could be done with the tests and there ended up being many
   workarounds just to reproduce bugs.

   The idea behind this revamped framework is to separate the
   description of tests (tests/test_dirs.toml) and the running of tests
   (scripts/test.py).

   Now, with the logic moved entirely to python, it's possible to run
   the test under varying environments. In addition to the "just don't
   assert" run, I'm also looking to run the tests in valgrind for memory
   checking, and an environment with simulated power-loss.

   The test description can also contain abstract attributes that help
   control how tests can be ran, such as "leaky" to identify tests where
   memory leaks are expected. This keeps test limitations at a minimum
   without limiting how the tests can be ran.

2. Multi-stage-process tests didn't really add value and limited what
   the testing environment could do.

   Unmounting + mounting can be done in a single process to test the
   same logic. It would be really difficult to make this fail only
   when memory is zeroed, though that can still be caught by
   power-resilient tests.

   Requiring every test to be a single process adds several options
   for test execution, such as using a RAM-backed block device for
   speed, or even running the tests on a device.

3. Added fancy assert interception. This wasn't really a requirement,
   but something I've been wanting to experiment with for a while.

   During testing, scripts/explode_asserts.py is added to the build
   process. This is a custom C-preprocessor that parses out assert
   statements and replaces them with _very_ verbose asserts that
   wouldn't normally be possible with just C macros.

   It even goes as far as to report the arguments to strcmp, since the
   lack of visibility here was very annoying.

   tests_/test_dirs.toml:186:assert: assert failed with "..", expected eq "..."
       assert(strcmp(info.name, "...") == 0);

   One downside is that simply parsing C in python is slower than the
   entire rest of the compilation, but fortunately this can be
   alleviated by parallelizing the test builds through make.

Other neat bits:
- All generated files are a suffix of the test description, this helps
  cleanup and means it's (theoretically) possible to parallelize the
  tests.
- The generated test.c is shoved base64 into an ad-hoc Makefile, this
  means it doesn't force a rebuild of tests all the time.
- Test parameterizing is now easier.
- Hopefully this framework can be repurposed also for benchmarks in the
  future.
2019-12-28 23:43:02 -06:00
Christopher Haster
ce2c01f098 Fixed lfs_dir_fetchmatch not understanding overwritten tags
Sometimes a small, single-line code change hides a complicated story
behind it. This is one of those times.

If you look at this diff, you may note that this is a case of
lfs_dir_fetchmatch not correctly handling a tag that invalidates a
callback used to search for some condition, in this case a search for a
parent, which is invalidated by a later dir tag overwriting the
previous dir pair.

But how can this happen? Dir-pair-tags are only overwritten during
relocations (when a block goes bad or exceeds the block_cycles config
option for dynamic wear-leveling). Other dir operations create new
directory entries. And the only lfs_dir_fetchmatch condition that relies
on overwrites (as opposed to proper deletes) is when we need to find a
directory's parent, an operation that only occurs during a _different_
relocation. And a false _positive_ can only happen if we don't have a
parent, which is really unlikely when we search for directory parents!

This bug and minimal test case were found by Matthew Renzelmann. In an
unfortunate series of events, first a file creation causes a directory
split to occur. This creates a new, orphaned metadata-pair containing
our new file. However, the revision count on this metadata-pair
indicates the pair is due for relocation as a part of wear-leveling.
Normally, this is fine, even though this metadata-pair has no parent,
the lfs_dir_find should return ENOENT and continue without error.
However, here we get hit by our fetchmatch bug. A previous, unrelated
relocation overwrites a pair which just happens to contain the block
allocated for a new metadata-pair. When we search for a parent,
lfs_dir_fetchmatch incorrectly finds this old, outdated metadata pair
and incorrectly tells our orphan it's found its parent.

As you can imagine, the orphan's disappointment must be immense.

So an unfortunately timed dir split triggers a relocation which
incorrectly finds a previously written parent that has been outdated
by another relocation.

As a solution we can outdate our found tag if it is overwritten by
an exact match during lfs_dir_fetchmatch.

As a part of this I started adding a new set of tests: tests/test_relocations,
for aggressive relocation tests. This is already being appended to by
another PR. I suspect relocations are relatively under-tested and are
becoming more important due to recent improvements in wear-leveling.
2019-12-01 16:32:01 -06:00
Christopher Haster
0197b18100 Fixed issue with superblock breaking lfs_dir_seek
The superblock entry takes up id 0 in the root directory (not all
entries are files, though currently the superblock is the only
exception). Normally, reading a directory correctly skips the
superblock and only reports non-superblock files.

However, this doesn't work perfectly for lfs_dir_seek, which tries
to be clever to not touch the disk.

Fortunately, we can fix this by adding an offset for the superblock.
This will only work while the superblock is the only non-file entry,
otherwise we would need to touch the disk to properly seek in a
directory (though we already touch the disk a bit to get dir-tails
during seeks).

Found by jhartika
2019-12-01 16:25:08 -06:00
Christopher Haster
1f11e6b78a Merge pull request #338 from ARMmbed/fix-readme-desc
README: fix incorrect description
2019-12-01 16:24:53 -06:00
Christopher Haster
9a7a3f637a Merge pull request #337 from ARMmbed/fix-null-fetchmatch
fix nullptr access in lfs_dir_fetchmatch (#185)
2019-12-01 16:24:44 -06:00
Christopher Haster
8188019cbf Merge pull request #334 from mon/bugfix/inttypes
Fix some LFS_TRACE format specifiers
2019-12-01 16:22:33 -06:00
Christopher Haster
d6dc728c87 Fixed some issues in lfs_migrate
- Bad size used for writing out softtail tag
- Use of sizeof address instead of intended target
2019-12-01 16:22:15 -06:00
Christopher Haster
aeff2a28cf Stop wear-leveling during migration
Stop proactively relocating blocks during migrations; this can cause a number of
failure states, such as: clobbering the v1 superblock if we relocate root, and
invalidating directory pointers if we relocate the head of a directory. On top
of this, relocations increase the overall complexity of lfs_migration, which is
already a delicate operation.
2019-12-01 16:21:57 -06:00
Christopher Haster
aae22c8256 Fixed issue with directories falling out of date after block relocation
This is caused by dir->head not being updated when dir->m.pair may be.
This causes the two to fall out of sync and later dir rewinds to fail.
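
A hedged sketch of the shape of the fix (variable names hypothetical, the
real change lives in littlefs's relocation handling):

    // when a metadata pair relocates, any open dir positioned at that pair
    // needs its head kept in sync along with its working pair
    if (lfs_pair_cmp(d->head, oldpair) == 0) {
        d->head[0] = newpair[0];
        d->head[1] = newpair[1];
    }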

This bug stems all the way back from the first commits of littlefs, so
it's surprising it has avoided detection for this long. Perhaps because
lfs_dir_rewind is not used often.
2019-12-01 16:21:57 -06:00
Christopher Haster
60e67ae080 Fixed implicit change-of-sign warning in lfs_dir_fetch
Warning on MDK v5.27.1
Found by geniusgogo
2019-11-26 16:42:49 -06:00
grunwald-m
64dedee2d1 prepare upstream bugfix of lfs
-> call lfs_dir_fetchmatch with ftag=-1 in order to set the invalid bit
   and never let the function match a dir
2019-11-26 11:48:53 -06:00
Will
5925db48da Fix some LFS_TRACE format specifiers 2019-11-22 14:29:57 +10:00
liaoweixiong
ab56dc5a8b README: fix incorrect description
From my point of view, file updates are committed to the filesystem only
on sync or close. There is an extra word 'no' here.

Fixes: bdff4bc59e ("Updated DESIGN.md to reflect v2 changes")
Signed-off-by: liaoweixiong <liaoweixiong@allwinnertech.com>
2019-11-15 18:53:53 +08:00
Christopher Haster
6b65737715 Merge pull request #308 from roykuper13/readme-example-update-block-cycles
Update readme example code in accordance to the block_cycles change
2019-10-15 10:36:42 -05:00
Christopher Haster
4ebe6030c5 Merge pull request #294 from ARMmbed/fix-max-null-tests
Fixed off-by-one null terminator in tests
2019-10-15 10:36:04 -05:00
Christopher Haster
7ae8d778f1 Merge pull request #299 from sipke/sipke/fix-types-for-16bit-machines-v2
fix types for 16bit machines v2
2019-10-15 10:35:47 -05:00
Roy Kupershmid
4d068a154d Update README example code in accordance to the block_cycles change
An addition to 38a2a8d. When executing the given example in README,
you immediately get an assertion error because block_cycles is initialized to 0.
2019-10-13 20:27:18 +03:00
Sipke Vriend
ba088aa213 lfs_dir_*: Cast error return codes to int.
For correctness, cast the lfs_stag_t variables to int when returning a negative error code.
2019-10-01 15:24:17 +10:00
Sipke Vriend
955b296bcc lfs_file_rewind: Cast error return codes to int.
For correctness, cast the lfs_stag_t variables to int when returning a negative error code.
2019-10-01 14:22:25 +10:00
Sipke Vriend
241dbc6f86 lfs_stat: Cast error return codes to int.
For correctness, cast the lfs_stag_t variables to int when returning a negative error code.
2019-10-01 14:22:01 +10:00
Sipke Vriend
8cca58f1a6 lfs_file_truncate: ensure lfs_file_seek return code is lfs_soff_t and cast error returns
To ensure 16 bit devices do not invalidly truncate lfs_file_write return codes, change
the return variable to be lfs_ssize_t which is the lfs_file_write return code and
cast to int if it is a negative error code.
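
A hedged sketch of the pattern described, as a hypothetical helper rather
than the exact diff:

    // keep the intermediate result in lfs_ssize_t so a 16-bit int can't
    // truncate it, and only narrow to int when returning a negative error
    static int write_or_err(lfs_t *lfs, lfs_file_t *file,
            const void *buffer, lfs_size_t size) {
        lfs_ssize_t res = lfs_file_write(lfs, file, buffer, size);
        if (res < 0) {
            return (int)res;
        }
        return 0;
    }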
2019-10-01 14:20:43 +10:00
Sipke Vriend
97f86af4e9 lfs_remove: Cast tag/error return codes to int.
For correctness, cast the lfs_stag_t variables to int when returning a negative error code.
2019-10-01 13:56:51 +10:00
Sipke Vriend
d40302c5e3 lfs_rename: Cast error return codes to int.
For correctness, cast the lfs_stag_t variables to int when returning a negative error code.
2019-10-01 13:51:52 +10:00
Sipke Vriend
0b5a78e2cd Adjust lfs_dir_find return code to ensure 32 bit value.
lfs_dir_find returns either a negative return code or a tag.
For 32-bit machines with int as 32 bits this coincides, but for smaller
bit processors, we need to ensure a 32 bit value is returned so change
the return type to lfs_stag_t.
2019-10-01 11:52:02 +10:00
Christopher Haster
27b6cc829b Fixed off-by-one null terminator in tests
Found by mr-at-eo
2019-09-23 10:43:39 -05:00
Christopher Haster
fd204ac2fb Merge pull request #278 from roykuper13/validate-lfs-cfg-sizes
lfs: Validate lfs-cfg sizes before performing any arithmetic logics with them
2019-09-19 10:02:54 -05:00
Christopher Haster
bd99402d9a Merge pull request #281 from patrick--/fix-lfs-embud-file-resource-leak
Fix for issue #260
2019-09-19 10:02:42 -05:00
Christopher Haster
bce442a86b Merge pull request #282 from runderwo/master
Corrections for typos and grammar
2019-09-19 10:02:34 -05:00
Christopher Haster
f26e970a0e Merge pull request #286 from sipke/sipke/fix-warnings-shift-count
build: Fix warnings about shift count width difference for 16 bit com…
2019-09-19 10:02:25 -05:00
Sipke Vriend
965d29b887 build: Fix warnings about shift count width difference for 16 bit compiler
Build warnings exist on a gcc based 16 bit compiler. Cast relevant types
to fix.

littlefs/lfs.c: In function 'lfs_gstate_xororphans':
littlefs/lfs.c:355:5: warning: left shift count >= width of type
littlefs/lfs.c: In function 'lfs_dir_fetchmatch':
littlefs/lfs.c:849:17: warning: left shift count >= width of type
littlefs/lfs.c: In function 'lfs_dir_commitcrc':
littlefs/lfs.c:1278:9: warning: left shift count >= width of type
2019-09-09 13:53:50 +10:00
Ryan Underwood
f7fd7d966a Corrections for typos and grammar 2019-09-01 21:11:49 -07:00
Patrick Servello
d5aba27d60 Fix for issue #260
Certain functions within lfs_emubd.c were susceptible to file resource leaks due to certain code paths not issuing an fclose() before returning.
2019-08-31 20:47:26 -05:00
Roy Kupershmid
0c77123eee lfs: Validate lfs-cfg sizes before performing arithmetic logics with them 2019-08-31 16:57:56 +03:00
Freddie Chopin
5a12c443b8 Revert "Don't bypass cache in lfs_cache_prog() and lfs_cache_read()"
This reverts commit fdd239fe21.

Bypassing cache turned out to be a mistake which causes more problems
than it solves. The device driver should deal with alignment if this is
required; trying to do that in a file system is not a viable solution
anyway.
2019-08-09 23:02:33 +02:00
Christopher Haster
494dd6673d Merge pull request #263 from rojer/wundef
Fix build with -Wundef
2019-08-08 18:50:40 -05:00
Christopher Haster
fce2569005 Merge pull request #257 from pabigot/pr/20190803a
fix seek position corruption in truncate function
2019-08-08 18:50:28 -05:00
Christopher Haster
9d1f1211a9 Merge pull request #253 from pabigot/pr/20190730a
lfs: correct alignment restriction on lookahead buffer
2019-08-08 18:50:15 -05:00
Christopher Haster
151104c790 Changed CI to create release note for patches
This is a result of feedback that the current release notes made it too
difficult to see what changes happened on patch releases. From my
experience as well it became difficult to chase down which release a
commit landed on.

The risk is that this creates additional noise, both for the release
page and for user notifications. I am open to feedback if this causes a
problem.

Other tweaks on the CI side, these came from iteration with the same
scheme for coru and equeue:

- Changed version branch updates to be atomic (vN and vN-prefix). This
  makes it a bit easier to fix if one of the pushes fails due to a rogue
  branch with the same name.

- Added GEKY_BOT_DRAFT as a CI macro that can optionally switch between
  only creating drafts or immediately posting a release. The default is
  what I will be trying with littlefs which is to draft minor/major
  releases, but automatically create patch release.

  The real benefit of automatic releases is to use on tiny repos that
  don't really have an active maintainer. Though this is definitely no
  longer the case with littlefs, and I'm happy it has gained this much
  attention.
2019-08-08 18:50:00 -05:00
Deomid "rojer" Ryabkov
303ffb2da4 Fix build with -Wundef
Part of https://github.com/mongoose-os-libs/vfs-fs-lfs/issues/2
2019-08-08 16:54:34 +01:00
Peter A. Bigot
5bf71fa43e lfs: do not reposition seek pointer on truncate
When using lfs_file_truncate() to make a file shorter, the file block and
off were incorrectly positioned at the new end, resulting in invalid
data accessed when reading.  Lift the seek pointer restoration to apply
to both increasing and reducing truncates.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-08-03 17:17:49 -05:00
Peter A. Bigot
55fb1416c7 lfs: initialize file offs field
The uninitialized value creates confusion when diagnosing anomalies.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-08-03 09:59:27 -05:00
Peter A. Bigot
dc031ce1d9 lfs: use meaningful names for magic block identifiers
The difference between 0xffffffff and 0xfffffffe is too subtle.  Use
names that reflect what the value represents.
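
A sketch of the kind of names meant here; these mirror the defines that
ended up in lfs.c, but treat the exact names as illustrative:

    // sentinel block addresses, named instead of spelled out as raw constants
    #define LFS_BLOCK_NULL   ((lfs_block_t)-1)  // 0xffffffff, no block allocated
    #define LFS_BLOCK_INLINE ((lfs_block_t)-2)  // 0xfffffffe, file data stored inline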

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-08-03 09:59:07 -05:00
Peter A. Bigot
f85ff1d2f8 lfs: correct alignment restriction on lookahead buffer
The buffer need only be 32-bit aligned.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-07-30 20:02:42 -05:00
Christopher Haster
db054684a6 Bump version to v2.1 2019-07-29 01:42:28 -05:00
Christopher Haster
7872918ec8 Fixed issue where lfs_migrate would relocate root and corrupt superblock
Found during testing, the issue was with lfs_migrate in combination with
wear leveling.

Normally, we can expect lfs_migrate to be able to respect the user-configured
block_cycles. It already has allocation information on which blocks are
used by both v1 and v2, so it should be safe to relocate blocks as
needed.

However, this fell apart when root was relocated. If lfs_migrate found a
root that needed migration, it would happily relocate the root. This
would normally be fine, except relocating the root has the side-effect of
needing to update the superblock, which, during migration, is in a
delicate state of containing both v1's and v2's superblocks in the same
metadata pair. If the superblock ends up needing to compact, this would
clobber the v1 superblock and corrupt the filesystem during migration.

The best fix I could come up with is to specifically disallow relocating the
root directory during migration. Fortunately this is behind the
LFS_MIGRATE macro, so the code cost for this check is not normally paid.
2019-07-29 01:42:06 -05:00
Christopher Haster
e249854858 Removed dependency on uninitialized value in lfs_file_t struct 2019-07-29 00:43:54 -05:00
Christopher Haster
501b0240a9 Merge pull request #232 from ARMmbed/debug-improvements
Debug improvements
2019-07-28 21:53:55 -05:00
Christopher Haster
e1f3b90b56 Merge remote-tracking branch 'origin/master' into debug-improvements 2019-07-28 21:53:13 -05:00
Christopher Haster
74fe46de3d Merge pull request #233 from ARMmbed/discourage-no-wear-leveling
Change block_cycles disable from 0 to -1
2019-07-28 21:35:48 -05:00
Christopher Haster
582b596ed1 Merge pull request #242 from ARMmbed/fix-2048-erase-size
Fix issues with large prog sizes (prog_size > 1KiB)
2019-07-28 21:35:22 -05:00
Christopher Haster
0d4c0b105c Fixed issue where inline files were not cleaned up
Due to the logging nature of metadata pairs, switching from inline files
(type3 = 0x201) to CTZ skip-lists (type3 = 0x202) does not explicitly
erase inline files, but instead leaves them up to compaction to omit.
To save code size, this is handled by the same logic that deduplicates
tags.

Unfortunately, this wasn't working. Due to a relatively late change in v2
the struct's type field was changed to no longer be a part of determining a
tag's "uniqueness". A part of this should have been the modification of
directory traversal filtering to respect type-dependent uniqueness, but
I missed this.

The fix is to add in correct type-dependent filtering. Also there was
some clean up necessary around removing delete tags during compaction
and outlining files.

Note that while this appears to conflict with the possibility of
combining inline + ctz files, we still have the device-side-only
LFS_TYPE_FROM tag that can be repurposed for 256 additional inline
"chunks".

Found by Johnxjj
2019-07-28 21:34:17 -05:00
Christopher Haster
4850e01e14 Changed rdonly/wronly mistakes to assert
Previously these returned LFS_ERR_BADF. But attempting to modify a file
opened read-only, or reading a write-only file, is a user error and
should not occur in normal use.

Changing this to an assert allows the logic to be omitted if the user
disables asserts to reduce the code footprint (not suggested unless the
user really really knows what they're doing).
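
A hedged sketch of what such asserts can look like (the exact flag checks
in lfs.c may differ):

    // in lfs_file_read: the file must have been opened readable
    LFS_ASSERT((file->flags & LFS_O_RDONLY) == LFS_O_RDONLY);
    // in lfs_file_write: the file must have been opened writable
    LFS_ASSERT((file->flags & LFS_O_WRONLY) == LFS_O_WRONLY);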
2019-07-28 21:32:06 -05:00
Christopher Haster
4ec4425272 Fixed overlapping memcpy in emubd
Found by DanielLyubin
2019-07-28 21:26:24 -05:00
Christopher Haster
31e28fddb7 Merge pull request #237 from Ar2rL/reverse_finalize_close
Protect (LFS_ASSERT) file operations against using not opened or closed files.
2019-07-28 21:26:03 -05:00
Christopher Haster
3806d88285 Fixed seek-related typos in lfs.h
- lfs_file_rewind == lfs_file_seek(lfs, file, 0, LFS_SEEK_SET)
- lfs_file_seek returns the _new_ position of the file
2019-07-28 21:25:18 -05:00
Christopher Haster
de5972699a Fixed license header in lfs.c
Found by pabigot
2019-07-28 21:25:00 -05:00
Christopher Haster
0d8ffd6b86 Merge pull request #239 from pabigot/pr/20190723a
lfs: correct documentation on lookahead-related values
2019-07-28 21:24:39 -05:00
Christopher Haster
c0af471bc1 Merge pull request #227 from haneefmubarak/patch-1
removed <dirent.h> preventing compile on some archs
2019-07-28 21:24:22 -05:00
Christopher Haster
e8c023aab0 Changed FUSE branch to v2 (previously v2-alpha) 2019-07-28 20:43:12 -05:00
Christopher Haster
38a2a8d2a3 Minor improvement to documentation over block_cycles
Suggested by haneefmubarak
2019-07-28 20:42:13 -05:00
Christopher Haster
51fabc672b Switched to using hex for blocks and ids in debug output
This is a minor quality of life change to help debugging, specifically
when debugging test failures.

Before, the test framework used hex, while the log output used decimal.
This was slightly annoying to convert between.

Why not output lengths/offsets in hex? I don't have a big reason. I find
it easier to reason about lengths in decimal and ids (such as addresses
or block numbers) in hex. But this may just be me.
2019-07-26 20:09:24 -05:00
Christopher Haster
19838371fb Fixed issue where sed buffering (QUIET=1) caused Travis timeout 2019-07-26 19:51:20 -05:00
Christopher Haster
312326c4e4 Added a better solution for large prog sizes
A current limitation of the lfs tag is the 10-bit (1024) length field.
This field is used to indicate padding for commits and effectively
limits the size of commits to 1KiB. Because commits must be prog size
aligned, this is a problem on devices with prog size > 1024.

[----                   6KiB erase block                   ----]
[-- 2KiB prog size --|-- 2KiB prog size --|-- 2KiB prog size --]
[ 1KiB commit |  ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ]

This can be increased to 12-bit (4096), but for NAND devices this is
still too small to completely solve the issue.

The previous workaround was to just create unaligned commits. This can
occur naturally if littlefs is used on portable media as the prog size
does not have to be consistent on different drivers. If littlefs sees
an unaligned commit, it treats the dir as unerased and must compact the
dir if it creates any new commits.

Unfortunately this isn't great. It effectively means that every small
commit forced an erase on devices with prog size > 1024. This is pretty
terrible.

[----                   6KiB erase block                   ----]
[-- 2KiB prog size --|-- 2KiB prog size --|-- 2KiB prog size --]
[ 1KiB commit |------------------- wasted ---------------------]

A different solution, implemented here, is to use multiple crc tags
to pad the commit until the remaining space fits in the padding. This
effectively looks like multiple empty commits and has a small runtime
cost to parse these tags, but otherwise does no harm.

[----                   6KiB erase block                   ----]
[-- 2KiB prog size --|-- 2KiB prog size --|-- 2KiB prog size --]
[ 1KiB commit | noop | 1KiB commit | noop | 1KiB commit | noop ]

It was a bit tricky to implement, but now we can effectively support
unlimited prog sizes since there's no limit to the number of commits
in a block.

found by kazink and joicetm
2019-07-26 19:51:15 -05:00
Christopher Haster
ef1c926940 Increased testing to include geometries that can't be fully tested
This is primarily to get better test coverage over devices with very
large erase/prog/read sizes. The unfortunate state of the tests is
that most of them rely on a specific block device size, so that
ENOSPC and ECORRUPT errors occur in specific situations.

This should be improved in the future, but at least for now we can
open up some of the simpler tests to run on these different
configurations.

Also added testing over both 0x00 and 0xff erase values in emubd.

Also added a number of small file tests that expose issues prevalent
on NAND devices.
2019-07-26 19:50:17 -05:00
Christopher Haster
72e3bb4448 Refactored a handful of things in tests
- Now test errors have correct line reporting! #line directives
  are passed to the compiler that reference the relevant line in
  the test case shell script.

  --- Multi-block directory ---
  ./tests/test_dirs.sh:109: assert failed with 0, expected 1
      lfs_unmount(&lfs) => 1

- Cleaned up the number of implicit global variables provided to
  tests. A lot of these were infrequently used and made it difficult
  to remember what was provided. This isn't an MCU, so there's very
  little cost to stack allocations when needed.

- Minimized the results.py script (previously stats.py) output to
  match minimization of test output.
2019-07-26 11:11:34 -05:00
Christopher Haster
649640c605 Fixed workaround for erase sizes >1024 B
Introduced in 0b76635, the workaround for erase sizes >1024 is to
commit with an unaligned CRC tag. Upon reading an unaligned CRC,
littlefs should treat the metadata pair as "requires erase". While
necessary for portability, this also lets us work around the lack of
handling of erase sizes >1024.

Unfortunately, this workaround wasn't implemented correctly (by me)
in the case that the metadata-pair does not immediately compact. This
is solved here by adding the erase check to lfs_dir_commit.

Note this is still only a part of a workaround which should be replaced.
One potential solution is to pad the commit with multiple smaller CRC
tags until we reach the next prog_size boundary.

found by kazink
2019-07-24 14:45:21 -05:00
Peter A. Bigot
eb013e6dd6 lfs: correct documentation on lookahead-related values
The size of the lookahead buffer is required to be a multiple of 8 bytes
in anticipation of a future improvement.  The buffer itself need only be
aligned to support access through a uint32_t pointer.
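
A minimal configuration sketch under these constraints (field names from
lfs.h, values illustrative):

    #include "lfs.h"

    // lookahead_size must be a multiple of 8 bytes; the static buffer only
    // needs 32-bit alignment, which declaring it as uint32_t guarantees
    static uint32_t lookahead_buffer[16 / sizeof(uint32_t)];

    static const struct lfs_config cfg = {
        // ... block device callbacks and geometry omitted ...
        .lookahead_size = 16,
        .lookahead_buffer = lookahead_buffer,
    };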

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-07-23 11:05:04 -05:00
Ar2rL
7e1bad3eee Set LFS_F_OPENED flag at places required by lfs internal logic. 2019-07-21 14:36:40 +02:00
Ar2rL
72a3758958 Use LFS_F_OPENED flag to protect against use of not opened or closed file. 2019-07-21 11:34:53 +02:00
Ar2rL
df2e676562 Add necessary flag to mark file as being opened. 2019-07-21 11:34:14 +02:00
Christopher Haster
53a6e04712 Changed block_cycles disable from 0 to -1
As it is now, block_cycles = 0 disables wear leveling. This was a
mistake as 0 is the "default" value for several other config options.
It's even worse when migrating from v1 as it's easy to miss the addition
of block_cycles and end up with a filesystem that is not actually
wear-leveling.

Clearly, block_cycles = 0 should do anything but disable wear-leveling.

Here, I've changed block_cycles = 0 to assert, forcing users to set a
value for block_cycles (500 is suggested). block_cycles can be set to -1
to explicitly disable wear leveling if desired.
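
A minimal sketch of the relevant config field (values illustrative):

    static const struct lfs_config cfg = {
        // ... block device callbacks and geometry omitted ...
        .block_cycles = 500,  // suggested starting point for wear leveling
        // .block_cycles = -1 would explicitly disable wear leveling
        // .block_cycles = 0 now trips an assert instead of silently disabling it
    };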
2019-07-17 17:05:20 -05:00
Christopher Haster
1aaf1cb6c0 Minor improvements to testing framework
- Moved scripts into scripts folder
- Removed what have been relatively unhelpful assert printing
2019-07-16 20:53:39 -05:00
Christopher Haster
52a90b8dcc Added asserts on positive return values from block device functions
This has been a large source of porting errors, partially due to my
fault in not having enough porting documentation, which is also
planned.

In the short term, asserts should at least help catch these types of
errors instead of just letting the filesystem collapse after receiving
an odd error code.
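
A hedged sketch of the kind of check meant here (not the exact code in
lfs.c):

    // block device callbacks must return 0 or a negative error; a positive
    // return is a porting bug, so catch it immediately with an assert
    static int bd_read(lfs_t *lfs, lfs_block_t block, lfs_off_t off,
            void *buffer, lfs_size_t size) {
        int err = lfs->cfg->read(lfs->cfg, block, off, buffer, size);
        LFS_ASSERT(err <= 0);
        return err;
    }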
2019-07-16 15:55:29 -05:00
Christopher Haster
e279c8ff90 Tweaked debug output
- Changed "No more free space" to be an error as suggested by davidefer
- Tweaked output to be more parsable (no space between lfs and warn)
2019-07-16 15:40:26 -05:00
Christopher Haster
6a1ee91490 Added trace statements through LFS_YES_TRACE
To use, compile and run with LFS_YES_TRACE defined:
make CFLAGS+=-DLFS_YES_TRACE=1 test_format

The name LFS_YES_TRACE was chosen to match the LFS_NO_DEBUG and
LFS_NO_WARN defines for the similar levels of output. The YES is
necessary to avoid a conflict with the actual LFS_TRACE macro that
gets emitted. LFS_TRACE can also be defined directly to provide
a custom trace formatter.

Hopefully having trace statements at the littlefs C API helps
debugging and reproducing issues.
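
As an example of the second option, a hedged sketch of a custom formatter
(assuming LFS_TRACE can be overridden before lfs_util.h is pulled in; the
GNU-style ##__VA_ARGS__ is used for brevity):

    #include <stdio.h>

    // route littlefs trace output through printf with a custom prefix
    #define LFS_TRACE(fmt, ...) \
        printf("lfs trace: " fmt "\n", ##__VA_ARGS__)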
2019-07-16 15:14:32 -05:00
Haneef Mubarak
2e92f7a49b actually removed <dirent.h> 2019-07-12 11:46:18 -07:00
Haneef Mubarak
2588948d70 removed <dirent.h> preventing compile on some archs 2019-07-11 15:46:17 -07:00
Christopher Haster
abd90cb84c Fixed 32-bit/64-bit Ubuntu multilib issue in Travis 2019-07-01 19:34:06 -05:00
Christopher Haster
b73ac594f2 Fixed issues with reading and caching inline files
Kind of a two-fold issue. One, programming to the middle of inline
files was causing the cache to get updated to a half-programmed state.
While fine, as all programs do occur in order in a block, this is less
efficient when writing to inline files as it would cause the inline file
to need to be reread even if it fits in the cache.

Two, the rereading of the inline file was broken and passed the file's
tag all the way to where a user would expect an error. This was easy to
fix but adds to the reasons we should have test coverage information.

Found by ebinans
2019-07-01 15:11:53 -05:00
Christopher Haster
614f7b1e68 Fixed accidental truncate after seek on inline files
The cause was mistakenly setting file->ctz.size directly instead of
file->pos, which file->ctz.size gets overwritten with later in
lfs_file_flush.

Also added better seek test cases specifically for inline files. This
should also catch most of the inline corner cases related to
lfs_file_size/lfs_file_tell.

Found by ebinans
2019-07-01 15:11:53 -05:00
Christopher Haster
a9a61a3e78 Added redundant compaction to lfs_format/lfs_migrate
This ensures that both blocks in the superblock pair are written with
the superblock info. While this does use an additional erase cycle, it
prevents older versions of littlefs from accidentally being picked up
in the case that the disk is mounted on a system that doesn't support
the newer version.

This does bring back the risk of picking up old littlefs versions on
a disk that has been formatted with a filesystem that doesn't use
block 2 (such as FAT), but this risk already exists, and moving between
versions of littlefs is more likely with the recent v1 -> v2 update.

Suggested by rojer
2019-07-01 15:11:38 -05:00
Christopher Haster
36973d8fd5 Fixed missing cache flush in lfs_migrate
The data written to the prog cache would make littlefs internally
consistent, but because this was never written to disk, the filesystem
would become unmountable.

Unfortunately, this wasn't found during testing because caches automatically
flush if data is written up to a program boundary (maybe this was a mistake?).

Found by rojer
2019-07-01 15:11:38 -05:00
Christopher Haster
f06dc5737f Merge pull request #201 from nickray/python2-markings
Mark all Python 2 scripts as Python 2
2019-07-01 15:11:16 -05:00
Nicolas Stalder
3fb242f3ae Mark all Python 2 scripts as Python 2 2019-06-07 04:09:44 +02:00
Christopher Haster
ef77195a64 Fixed limit of inline files based on LFS_ATTR_MAX
The maximum limits of inline files and attributes are unrelated, but they
were not at one point in littlefs v2's development. The inline limit should
be checked against the bit-field limit in the littlefs tag.

Found by lsilvaalmeida
2019-05-23 16:43:23 -05:00
Christopher Haster
12e464e9c3 Fixed issue with writes following a truncate
The problem was not setting the file state correctly after the truncate.
To truncate < size, we end up using the cache to traverse the ctz
skip-list far away from where our file->pos is.
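
Roughly the usage pattern that exposed this (a sketch assuming a mounted
lfs_t lfs and an lfs_file_t file, error handling omitted):

    lfs_file_open(&lfs, &file, "log", LFS_O_RDWR);
    lfs_file_truncate(&lfs, &file, 16);      // truncate < size, cache now far from pos
    lfs_file_seek(&lfs, &file, 0, LFS_SEEK_END);
    lfs_file_write(&lfs, &file, "more", 4);  // append; needs the file state set correctly
    lfs_file_close(&lfs, &file);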

We can leave the last block in the cache in case we're going to append
to the file, but if we do this we need to set up file->block+file->off
to tell use where we are in the file, and set the LFS_F_READING flag to
indicate that our cache contains read data.

Note this is different than the LFS_F_DIRTY, which we need also. The
purpose of the flags are as follows:
- LFS_F_DIRTY - file ctz skip-list branch is out of sync with
  filesystem, need to update metadata
- LFS_F_READING - file cache is in use for reading, need to drop cache
- LFS_F_WRITING - file cache is in use for writing, need to write out
  cache to disk

The difference between flags is subtle but important because read/prog
caches are handled differently. Prog caches have asserts in place to
catch programs without erases (the infamous pcache->block == 0xffffffff
assert).

Though maybe the names deserve an update...

Found by ebinans
2019-05-23 16:43:10 -05:00
Christopher Haster
9899c7fe48 Fixed read cache amount based on hint and offset
Found by apmorton
2019-05-23 16:42:47 -05:00
Christopher Haster
bc7bed740b Merge pull request #181 from rojer/lfs1_crc
Make lfs1_crc static so it doesn't conflict with prefixed LFS1 code
2019-05-23 16:40:09 -05:00
Christopher Haster
cf9afdddff Merge pull request #179 from rojer/wundef
Fix compilation with -Wundef
2019-05-23 16:39:57 -05:00
Deomid "rojer" Ryabkov
2533a0f6d6 Make lfs1_crc static so it doesn't conflict with prefixed LFS1 code
When LFS1 code is present and LFS_MIGRATE is enabled
2019-05-16 17:51:22 +01:00
Deomid "rojer" Ryabkov
2a7f0ed11b Fix compilation with -Wundef 2019-05-14 18:18:29 +01:00
89 changed files with 38431 additions and 5191 deletions

4
.gitattributes vendored Normal file

@@ -0,0 +1,4 @@
# GitHub really wants to mark littlefs as a python project, telling it to
# reclassify our test .toml files as C code (which they are 95% of anyways)
# remedies this
*.toml linguist-language=c

31
.github/workflows/post-release.yml vendored Normal file

@@ -0,0 +1,31 @@
name: post-release
on:
release:
branches: [master]
types: [released]
defaults:
run:
shell: bash -euv -o pipefail {0}
jobs:
post-release:
runs-on: ubuntu-latest
steps:
# trigger post-release in dependency repo, this indirection allows the
# dependency repo to be updated often without affecting this repo. At
# the time of this comment, the dependency repo is responsible for
# creating PRs for other dependent repos post-release.
- name: trigger-post-release
continue-on-error: true
run: |
curl -sS -X POST -H "authorization: token ${{secrets.BOT_TOKEN}}" \
"$GITHUB_API_URL/repos/${{secrets.POST_RELEASE_REPO}}/dispatches" \
-d "$(jq -n '{
event_type: "post-release",
client_payload: {
repo: env.GITHUB_REPOSITORY,
version: "${{github.event.release.tag_name}}",
},
}' | tee /dev/stderr)"

260
.github/workflows/release.yml vendored Normal file

@@ -0,0 +1,260 @@
name: release
on:
workflow_run:
workflows: [test]
branches: [master]
types: [completed]
defaults:
run:
shell: bash -euv -o pipefail {0}
jobs:
release:
runs-on: ubuntu-latest
# need to manually check for a couple things
# - tests passed?
# - we are the most recent commit on master?
if: ${{github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_sha == github.sha}}
steps:
- uses: actions/checkout@v2
with:
ref: ${{github.event.workflow_run.head_sha}}
# need workflow access since we push branches
# containing workflows
token: ${{secrets.BOT_TOKEN}}
# need all tags
fetch-depth: 0
# try to get results from tests
- uses: dawidd6/action-download-artifact@v2
continue-on-error: true
with:
workflow: ${{github.event.workflow_run.name}}
run_id: ${{github.event.workflow_run.id}}
name: sizes
path: sizes
- uses: dawidd6/action-download-artifact@v2
continue-on-error: true
with:
workflow: ${{github.event.workflow_run.name}}
run_id: ${{github.event.workflow_run.id}}
name: cov
path: cov
- uses: dawidd6/action-download-artifact@v2
continue-on-error: true
with:
workflow: ${{github.event.workflow_run.name}}
run_id: ${{github.event.workflow_run.id}}
name: bench
path: bench
- name: find-version
run: |
# rip version from lfs.h
LFS_VERSION="$(grep -o '^#define LFS_VERSION .*$' lfs.h \
| awk '{print $3}')"
LFS_VERSION_MAJOR="$((0xffff & ($LFS_VERSION >> 16)))"
LFS_VERSION_MINOR="$((0xffff & ($LFS_VERSION >> 0)))"
# find a new patch version based on what we find in our tags
LFS_VERSION_PATCH="$( \
( git describe --tags --abbrev=0 \
--match="v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR.*" \
|| echo 'v0.0.-1' ) \
| awk -F '.' '{print $3+1}')"
# found new version
LFS_VERSION="v$LFS_VERSION_MAJOR`
`.$LFS_VERSION_MINOR`
`.$LFS_VERSION_PATCH"
echo "LFS_VERSION=$LFS_VERSION"
echo "LFS_VERSION=$LFS_VERSION" >> $GITHUB_ENV
echo "LFS_VERSION_MAJOR=$LFS_VERSION_MAJOR" >> $GITHUB_ENV
echo "LFS_VERSION_MINOR=$LFS_VERSION_MINOR" >> $GITHUB_ENV
echo "LFS_VERSION_PATCH=$LFS_VERSION_PATCH" >> $GITHUB_ENV
# try to find previous version?
- name: find-prev-version
continue-on-error: true
run: |
LFS_PREV_VERSION="$( \
git describe --tags --abbrev=0 --match 'v*' \
|| true)"
echo "LFS_PREV_VERSION=$LFS_PREV_VERSION"
echo "LFS_PREV_VERSION=$LFS_PREV_VERSION" >> $GITHUB_ENV
# try to find results from tests
- name: create-table
run: |
# previous results to compare against?
[ -n "$LFS_PREV_VERSION" ] && curl -sS \
"$GITHUB_API_URL/repos/$GITHUB_REPOSITORY/status/$LFS_PREV_VERSION`
`?per_page=100" \
| jq -re 'select(.sha != env.GITHUB_SHA) | .statuses[]' \
>> prev-status.json \
|| true
# build table for GitHub
declare -A table
# sizes table
i=0
j=0
for c in "" readonly threadsafe multiversion migrate error-asserts
do
# per-config results
c_or_default=${c:-default}
c_camel=${c_or_default^}
table[$i,$j]=$c_camel
((j+=1))
for s in code stack structs
do
f=sizes/thumb${c:+-$c}.$s.csv
[ -e $f ] && table[$i,$j]=$( \
export PREV="$(jq -re '
select(.context == "'"sizes (thumb${c:+, $c}) / $s"'").description
| capture("(?<prev>[0-9∞]+)").prev' \
prev-status.json || echo 0)"
./scripts/summary.py $f --max=stack_limit -Y \
| awk '
NR==2 {$1=0; printf "%s B",$NF}
NR==2 && ENVIRON["PREV"]+0 != 0 {
printf " (%+.1f%%)",100*($NF-ENVIRON["PREV"])/ENVIRON["PREV"]
}' \
| sed -e 's/ /\&nbsp;/g')
((j+=1))
done
((j=0, i+=1))
done
# coverage table
i=0
j=4
for s in lines branches
do
table[$i,$j]=${s^}
((j+=1))
f=cov/cov.csv
[ -e $f ] && table[$i,$j]=$( \
export PREV="$(jq -re '
select(.context == "'"cov / $s"'").description
| capture("(?<prev_a>[0-9]+)/(?<prev_b>[0-9]+)")
| 100*((.prev_a|tonumber) / (.prev_b|tonumber))' \
prev-status.json || echo 0)"
./scripts/cov.py -u $f -f$s -Y \
| awk -F '[ /%]+' -v s=$s '
NR==2 {$1=0; printf "%d/%d %s",$2,$3,s}
NR==2 && ENVIRON["PREV"]+0 != 0 {
printf " (%+.1f%%)",$4-ENVIRON["PREV"]
}' \
| sed -e 's/ /\&nbsp;/g')
((j=4, i+=1))
done
# benchmark table
i=3
j=4
for s in readed proged erased
do
table[$i,$j]=${s^}
((j+=1))
f=bench/bench.csv
[ -e $f ] && table[$i,$j]=$( \
export PREV="$(jq -re '
select(.context == "'"bench / $s"'").description
| capture("(?<prev>[0-9]+)").prev' \
prev-status.json || echo 0)"
./scripts/summary.py $f -f$s=bench_$s -Y \
| awk '
NR==2 {$1=0; printf "%s B",$NF}
NR==2 && ENVIRON["PREV"]+0 != 0 {
printf " (%+.1f%%)",100*($NF-ENVIRON["PREV"])/ENVIRON["PREV"]
}' \
| sed -e 's/ /\&nbsp;/g')
((j=4, i+=1))
done
# build the actual table
echo "| | Code | Stack | Structs | | Coverage |" >> table.txt
echo "|:--|-----:|------:|--------:|:--|---------:|" >> table.txt
for ((i=0; i<6; i++))
do
echo -n "|" >> table.txt
for ((j=0; j<6; j++))
do
echo -n " " >> table.txt
[[ i -eq 2 && j -eq 5 ]] && echo -n "**Benchmarks**" >> table.txt
echo -n "${table[$i,$j]:-}" >> table.txt
echo -n " |" >> table.txt
done
echo >> table.txt
done
cat table.txt
# find changes from history
- name: create-changes
run: |
[ -n "$LFS_PREV_VERSION" ] || exit 0
# use explicit link to github commit so that release notes can
# be copied elsewhere
git log "$LFS_PREV_VERSION.." \
--grep='^Merge' --invert-grep \
--format="format:[\`%h\`](`
`https://github.com/$GITHUB_REPOSITORY/commit/%h) %s" \
> changes.txt
echo "CHANGES:"
cat changes.txt
# create and update major branches (vN and vN-prefix)
- name: create-major-branches
run: |
# create major branch
git branch "v$LFS_VERSION_MAJOR" HEAD
# create major prefix branch
git config user.name ${{secrets.BOT_USER}}
git config user.email ${{secrets.BOT_EMAIL}}
git fetch "https://github.com/$GITHUB_REPOSITORY.git" \
"v$LFS_VERSION_MAJOR-prefix" || true
./scripts/changeprefix.py --git "lfs" "lfs$LFS_VERSION_MAJOR"
git branch "v$LFS_VERSION_MAJOR-prefix" $( \
git commit-tree $(git write-tree) \
$(git rev-parse --verify -q FETCH_HEAD | sed -e 's/^/-p /') \
-p HEAD \
-m "Generated v$LFS_VERSION_MAJOR prefixes")
git reset --hard
# push!
git push --atomic origin \
"v$LFS_VERSION_MAJOR" \
"v$LFS_VERSION_MAJOR-prefix"
# build release notes
- name: create-release
run: |
# create release and patch version tag (vN.N.N)
# only draft if not a patch release
touch release.txt
[ -e table.txt ] && cat table.txt >> release.txt
echo >> release.txt
[ -e changes.txt ] && cat changes.txt >> release.txt
cat release.txt
curl -sS -X POST -H "authorization: token ${{secrets.BOT_TOKEN}}" \
"$GITHUB_API_URL/repos/$GITHUB_REPOSITORY/releases" \
-d "$(jq -n --rawfile release release.txt '{
tag_name: env.LFS_VERSION,
name: env.LFS_VERSION | rtrimstr(".0"),
target_commitish: "${{github.event.workflow_run.head_sha}}",
draft: env.LFS_VERSION | endswith(".0"),
body: $release,
}' | tee /dev/stderr)"

100
.github/workflows/status.yml vendored Normal file

@@ -0,0 +1,100 @@
name: status
on:
workflow_run:
workflows: [test]
types: [completed]
defaults:
run:
shell: bash -euv -o pipefail {0}
jobs:
# forward custom statuses
status:
runs-on: ubuntu-latest
steps:
- uses: dawidd6/action-download-artifact@v2
continue-on-error: true
with:
workflow: ${{github.event.workflow_run.name}}
run_id: ${{github.event.workflow_run.id}}
name: status
path: status
- name: update-status
continue-on-error: true
run: |
ls status
for s in $(shopt -s nullglob ; echo status/*.json)
do
# parse requested status
export STATE="$(jq -er '.state' $s)"
export CONTEXT="$(jq -er '.context' $s)"
export DESCRIPTION="$(jq -er '.description' $s)"
# help lookup URL for job/steps because GitHub makes
# it VERY HARD to link to specific jobs
export TARGET_URL="$(
jq -er '.target_url // empty' $s || (
export TARGET_JOB="$(jq -er '.target_job' $s)"
export TARGET_STEP="$(jq -er '.target_step // ""' $s)"
curl -sS -H "authorization: token ${{secrets.BOT_TOKEN}}" \
"$GITHUB_API_URL/repos/$GITHUB_REPOSITORY/actions/runs/`
`${{github.event.workflow_run.id}}/jobs" \
| jq -er '.jobs[]
| select(.name == env.TARGET_JOB)
| .html_url
+ "?check_suite_focus=true"
+ ((.steps[]
| select(.name == env.TARGET_STEP)
| "#step:\(.number):0") // "")'))"
# update status
curl -sS -X POST -H "authorization: token ${{secrets.BOT_TOKEN}}" \
"$GITHUB_API_URL/repos/$GITHUB_REPOSITORY/statuses/`
`${{github.event.workflow_run.head_sha}}" \
-d "$(jq -n '{
state: env.STATE,
context: env.CONTEXT,
description: env.DESCRIPTION,
target_url: env.TARGET_URL,
}' | tee /dev/stderr)"
done
# forward custom pr-comments
comment:
runs-on: ubuntu-latest
# only run on success (we don't want garbage comments!)
if: ${{github.event.workflow_run.conclusion == 'success'}}
steps:
# generated comment?
- uses: dawidd6/action-download-artifact@v2
continue-on-error: true
with:
workflow: ${{github.event.workflow_run.name}}
run_id: ${{github.event.workflow_run.id}}
name: comment
path: comment
- name: update-comment
continue-on-error: true
run: |
ls comment
for s in $(shopt -s nullglob ; echo comment/*.json)
do
export NUMBER="$(jq -er '.number' $s)"
export BODY="$(jq -er '.body' $s)"
# check that the comment was from the most recent commit on the
# pull request
[ "$(curl -sS -H "authorization: token ${{secrets.BOT_TOKEN}}" \
"$GITHUB_API_URL/repos/$GITHUB_REPOSITORY/pulls/$NUMBER" \
| jq -er '.head.sha')" \
== ${{github.event.workflow_run.head_sha}} ] || continue
# update comment
curl -sS -X POST -H "authorization: token ${{secrets.BOT_TOKEN}}" \
"$GITHUB_API_URL/repos/$GITHUB_REPOSITORY/issues/`
`$NUMBER/comments" \
-d "$(jq -n '{
body: env.BODY,
}' | tee /dev/stderr)"
done

870
.github/workflows/test.yml vendored Normal file

@@ -0,0 +1,870 @@
name: test
on: [push, pull_request]
defaults:
run:
shell: bash -euv -o pipefail {0}
env:
CFLAGS: -Werror
MAKEFLAGS: -j
TESTFLAGS: -k
BENCHFLAGS:
jobs:
# run tests
test:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
arch: [x86_64, thumb, mips, powerpc]
steps:
- uses: actions/checkout@v2
- name: install
run: |
# need a few things
sudo apt-get update -qq
sudo apt-get install -qq gcc python3 python3-pip
pip3 install toml
gcc --version
python3 --version
# cross-compile with ARM Thumb (32-bit, little-endian)
- name: install-thumb
if: ${{matrix.arch == 'thumb'}}
run: |
sudo apt-get install -qq \
gcc-arm-linux-gnueabi \
libc6-dev-armel-cross \
qemu-user
echo "CC=arm-linux-gnueabi-gcc -mthumb --static" >> $GITHUB_ENV
echo "EXEC=qemu-arm" >> $GITHUB_ENV
arm-linux-gnueabi-gcc --version
qemu-arm -version
# cross-compile with MIPS (32-bit, big-endian)
- name: install-mips
if: ${{matrix.arch == 'mips'}}
run: |
sudo apt-get install -qq \
gcc-mips-linux-gnu \
libc6-dev-mips-cross \
qemu-user
echo "CC=mips-linux-gnu-gcc --static" >> $GITHUB_ENV
echo "EXEC=qemu-mips" >> $GITHUB_ENV
mips-linux-gnu-gcc --version
qemu-mips -version
# cross-compile with PowerPC (32-bit, big-endian)
- name: install-powerpc
if: ${{matrix.arch == 'powerpc'}}
run: |
sudo apt-get install -qq \
gcc-powerpc-linux-gnu \
libc6-dev-powerpc-cross \
qemu-user
echo "CC=powerpc-linux-gnu-gcc --static" >> $GITHUB_ENV
echo "EXEC=qemu-ppc" >> $GITHUB_ENV
powerpc-linux-gnu-gcc --version
qemu-ppc -version
# does littlefs compile?
- name: test-build
run: |
make clean
make build
# make sure example can at least compile
- name: test-example
run: |
make clean
sed -n '/``` c/,/```/{/```/d; p}' README.md > test.c
CFLAGS="$CFLAGS \
-Duser_provided_block_device_read=NULL \
-Duser_provided_block_device_prog=NULL \
-Duser_provided_block_device_erase=NULL \
-Duser_provided_block_device_sync=NULL \
-include stdio.h" \
make all
rm test.c
# run the tests!
- name: test
run: |
make clean
make test
# collect coverage info
#
# Note the goal is to maximize coverage in the small, easy-to-run
# tests, so we intentionally exclude more aggressive powerloss testing
# from coverage results
- name: cov
if: ${{matrix.arch == 'x86_64'}}
run: |
make lfs.cov.csv
./scripts/cov.py -u lfs.cov.csv
mkdir -p cov
cp lfs.cov.csv cov/cov.csv
# find compile-time measurements
- name: sizes
run: |
make clean
CFLAGS="$CFLAGS \
-DLFS_NO_ASSERT \
-DLFS_NO_DEBUG \
-DLFS_NO_WARN \
-DLFS_NO_ERROR" \
make lfs.code.csv lfs.data.csv lfs.stack.csv lfs.structs.csv
./scripts/structs.py -u lfs.structs.csv
./scripts/summary.py lfs.code.csv lfs.data.csv lfs.stack.csv \
-bfunction \
-fcode=code_size \
-fdata=data_size \
-fstack=stack_limit --max=stack_limit
mkdir -p sizes
cp lfs.code.csv sizes/${{matrix.arch}}.code.csv
cp lfs.data.csv sizes/${{matrix.arch}}.data.csv
cp lfs.stack.csv sizes/${{matrix.arch}}.stack.csv
cp lfs.structs.csv sizes/${{matrix.arch}}.structs.csv
- name: sizes-readonly
run: |
make clean
CFLAGS="$CFLAGS \
-DLFS_NO_ASSERT \
-DLFS_NO_DEBUG \
-DLFS_NO_WARN \
-DLFS_NO_ERROR \
-DLFS_READONLY" \
make lfs.code.csv lfs.data.csv lfs.stack.csv lfs.structs.csv
./scripts/structs.py -u lfs.structs.csv
./scripts/summary.py lfs.code.csv lfs.data.csv lfs.stack.csv \
-bfunction \
-fcode=code_size \
-fdata=data_size \
-fstack=stack_limit --max=stack_limit
mkdir -p sizes
cp lfs.code.csv sizes/${{matrix.arch}}-readonly.code.csv
cp lfs.data.csv sizes/${{matrix.arch}}-readonly.data.csv
cp lfs.stack.csv sizes/${{matrix.arch}}-readonly.stack.csv
cp lfs.structs.csv sizes/${{matrix.arch}}-readonly.structs.csv
- name: sizes-threadsafe
run: |
make clean
CFLAGS="$CFLAGS \
-DLFS_NO_ASSERT \
-DLFS_NO_DEBUG \
-DLFS_NO_WARN \
-DLFS_NO_ERROR \
-DLFS_THREADSAFE" \
make lfs.code.csv lfs.data.csv lfs.stack.csv lfs.structs.csv
./scripts/structs.py -u lfs.structs.csv
./scripts/summary.py lfs.code.csv lfs.data.csv lfs.stack.csv \
-bfunction \
-fcode=code_size \
-fdata=data_size \
-fstack=stack_limit --max=stack_limit
mkdir -p sizes
cp lfs.code.csv sizes/${{matrix.arch}}-threadsafe.code.csv
cp lfs.data.csv sizes/${{matrix.arch}}-threadsafe.data.csv
cp lfs.stack.csv sizes/${{matrix.arch}}-threadsafe.stack.csv
cp lfs.structs.csv sizes/${{matrix.arch}}-threadsafe.structs.csv
- name: sizes-multiversion
run: |
make clean
CFLAGS="$CFLAGS \
-DLFS_NO_ASSERT \
-DLFS_NO_DEBUG \
-DLFS_NO_WARN \
-DLFS_NO_ERROR \
-DLFS_MULTIVERSION" \
make lfs.code.csv lfs.data.csv lfs.stack.csv lfs.structs.csv
./scripts/structs.py -u lfs.structs.csv
./scripts/summary.py lfs.code.csv lfs.data.csv lfs.stack.csv \
-bfunction \
-fcode=code_size \
-fdata=data_size \
-fstack=stack_limit --max=stack_limit
mkdir -p sizes
cp lfs.code.csv sizes/${{matrix.arch}}-multiversion.code.csv
cp lfs.data.csv sizes/${{matrix.arch}}-multiversion.data.csv
cp lfs.stack.csv sizes/${{matrix.arch}}-multiversion.stack.csv
cp lfs.structs.csv sizes/${{matrix.arch}}-multiversion.structs.csv
- name: sizes-migrate
run: |
make clean
CFLAGS="$CFLAGS \
-DLFS_NO_ASSERT \
-DLFS_NO_DEBUG \
-DLFS_NO_WARN \
-DLFS_NO_ERROR \
-DLFS_MIGRATE" \
make lfs.code.csv lfs.data.csv lfs.stack.csv lfs.structs.csv
./scripts/structs.py -u lfs.structs.csv
./scripts/summary.py lfs.code.csv lfs.data.csv lfs.stack.csv \
-bfunction \
-fcode=code_size \
-fdata=data_size \
-fstack=stack_limit --max=stack_limit
mkdir -p sizes
cp lfs.code.csv sizes/${{matrix.arch}}-migrate.code.csv
cp lfs.data.csv sizes/${{matrix.arch}}-migrate.data.csv
cp lfs.stack.csv sizes/${{matrix.arch}}-migrate.stack.csv
cp lfs.structs.csv sizes/${{matrix.arch}}-migrate.structs.csv
- name: sizes-error-asserts
run: |
make clean
CFLAGS="$CFLAGS \
-DLFS_NO_DEBUG \
-DLFS_NO_WARN \
-DLFS_NO_ERROR \
-D'LFS_ASSERT(test)=do {if(!(test)) {return -1;}} while(0)'" \
make lfs.code.csv lfs.data.csv lfs.stack.csv lfs.structs.csv
./scripts/structs.py -u lfs.structs.csv
./scripts/summary.py lfs.code.csv lfs.data.csv lfs.stack.csv \
-bfunction \
-fcode=code_size \
-fdata=data_size \
-fstack=stack_limit --max=stack_limit
mkdir -p sizes
cp lfs.code.csv sizes/${{matrix.arch}}-error-asserts.code.csv
cp lfs.data.csv sizes/${{matrix.arch}}-error-asserts.data.csv
cp lfs.stack.csv sizes/${{matrix.arch}}-error-asserts.stack.csv
cp lfs.structs.csv sizes/${{matrix.arch}}-error-asserts.structs.csv
# create size statuses
- name: upload-sizes
uses: actions/upload-artifact@v2
with:
name: sizes
path: sizes
- name: status-sizes
run: |
mkdir -p status
for f in $(shopt -s nullglob ; echo sizes/*.csv)
do
# skip .data.csv as it should always be zero
[[ $f == *.data.csv ]] && continue
export STEP="sizes$(echo $f \
| sed -n 's/[^-.]*-\([^.]*\)\..*csv/-\1/p')"
export CONTEXT="sizes (${{matrix.arch}}$(echo $f \
| sed -n 's/[^-.]*-\([^.]*\)\..*csv/, \1/p')) / $(echo $f \
| sed -n 's/[^.]*\.\(.*\)\.csv/\1/p')"
export PREV="$(curl -sS \
"$GITHUB_API_URL/repos/$GITHUB_REPOSITORY/status/master`
`?per_page=100" \
| jq -re 'select(.sha != env.GITHUB_SHA) | .statuses[]
| select(.context == env.CONTEXT).description
| capture("(?<prev>[0-9∞]+)").prev' \
|| echo 0)"
export DESCRIPTION="$(./scripts/summary.py $f --max=stack_limit -Y \
| awk '
NR==2 {$1=0; printf "%s B",$NF}
NR==2 && ENVIRON["PREV"]+0 != 0 {
printf " (%+.1f%%)",100*($NF-ENVIRON["PREV"])/ENVIRON["PREV"]
}')"
jq -n '{
state: "success",
context: env.CONTEXT,
description: env.DESCRIPTION,
target_job: "${{github.job}} (${{matrix.arch}})",
target_step: env.STEP,
}' | tee status/$(basename $f .csv).json
done
- name: upload-status-sizes
uses: actions/upload-artifact@v2
with:
name: status
path: status
retention-days: 1
# create cov statuses
- name: upload-cov
if: ${{matrix.arch == 'x86_64'}}
uses: actions/upload-artifact@v2
with:
name: cov
path: cov
- name: status-cov
if: ${{matrix.arch == 'x86_64'}}
run: |
mkdir -p status
f=cov/cov.csv
for s in lines branches
do
export STEP="cov"
export CONTEXT="cov / $s"
export PREV="$(curl -sS \
"$GITHUB_API_URL/repos/$GITHUB_REPOSITORY/status/master`
`?per_page=100" \
| jq -re 'select(.sha != env.GITHUB_SHA) | .statuses[]
| select(.context == env.CONTEXT).description
| capture("(?<prev_a>[0-9]+)/(?<prev_b>[0-9]+)")
| 100*((.prev_a|tonumber) / (.prev_b|tonumber))' \
|| echo 0)"
export DESCRIPTION="$(./scripts/cov.py -u $f -f$s -Y \
| awk -F '[ /%]+' -v s=$s '
NR==2 {$1=0; printf "%d/%d %s",$2,$3,s}
NR==2 && ENVIRON["PREV"]+0 != 0 {
printf " (%+.1f%%)",$4-ENVIRON["PREV"]
}')"
jq -n '{
state: "success",
context: env.CONTEXT,
description: env.DESCRIPTION,
target_job: "${{github.job}} (${{matrix.arch}})",
target_step: env.STEP,
}' | tee status/$(basename $f .csv)-$s.json
done
- name: upload-status-sizes
if: ${{matrix.arch == 'x86_64'}}
uses: actions/upload-artifact@v2
with:
name: status
path: status
retention-days: 1
# run as many exhaustive tests as fits in GitHub's time limits
#
# this grows exponentially, so it doesn't turn out to be that many
test-pls:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
pls: [1, 2]
steps:
- uses: actions/checkout@v2
- name: install
run: |
# need a few things
sudo apt-get update -qq
sudo apt-get install -qq gcc python3 python3-pip
pip3 install toml
gcc --version
python3 --version
- name: test-pls
if: ${{matrix.pls <= 1}}
run: |
TESTFLAGS="$TESTFLAGS -P${{matrix.pls}}" make test
# >=2pls takes multiple days to run fully, so we can only
# run a subset of tests, these are the most important
- name: test-limited-pls
if: ${{matrix.pls > 1}}
run: |
TESTFLAGS="$TESTFLAGS -P${{matrix.pls}} test_dirs test_relocations" \
make test
# run with LFS_NO_INTRINSICS to make sure that works
test-no-intrinsics:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: install
run: |
# need a few things
sudo apt-get update -qq
sudo apt-get install -qq gcc python3 python3-pip
pip3 install toml
gcc --version
python3 --version
- name: test-no-intrinsics
run: |
CFLAGS="$CFLAGS -DLFS_NO_INTRINSICS" make test
# run LFS_MULTIVERSION tests
test-multiversion:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: install
run: |
# need a few things
sudo apt-get update -qq
sudo apt-get install -qq gcc python3 python3-pip
pip3 install toml
gcc --version
python3 --version
- name: test-multiversion
run: |
CFLAGS="$CFLAGS -DLFS_MULTIVERSION" make test
# run tests on the older version lfs2.0
test-lfs2_0:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: install
run: |
# need a few things
sudo apt-get update -qq
sudo apt-get install -qq gcc python3 python3-pip
pip3 install toml
gcc --version
python3 --version
- name: test-lfs2_0
run: |
CFLAGS="$CFLAGS -DLFS_MULTIVERSION" \
TESTFLAGS="$TESTFLAGS -DDISK_VERSION=0x00020000" \
make test
# run under Valgrind to check for memory errors
test-valgrind:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: install
run: |
# need a few things
sudo apt-get update -qq
sudo apt-get install -qq gcc python3 python3-pip valgrind
pip3 install toml
gcc --version
python3 --version
valgrind --version
# Valgrind takes a while with diminishing value, so only test
# on one geometry
- name: test-valgrind
run: |
TESTFLAGS="$TESTFLAGS --valgrind --context=1024 -Gdefault -Pnone" \
make test
# test that compilation is warning free under clang
# run with Clang, mostly to check for Clang-specific warnings
test-clang:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: install
run: |
# need a few things
sudo apt-get install -qq clang python3 python3-pip
pip3 install toml
clang --version
python3 --version
- name: test-clang
run: |
# override CFLAGS since Clang does not support -fcallgraph-info
# and -ftrack-macro-expansions
make \
CC=clang \
CFLAGS="$CFLAGS -MMD -g3 -I. -std=c99 -Wall -Wextra -pedantic" \
test
# run benchmarks
#
# note there's no real benefit to running these on multiple archs
bench:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: install
run: |
# need a few things
sudo apt-get update -qq
sudo apt-get install -qq gcc python3 python3-pip valgrind
pip3 install toml
gcc --version
python3 --version
valgrind --version
- name: bench
run: |
make bench
# find bench results
make lfs.bench.csv
./scripts/summary.py lfs.bench.csv \
-bsuite \
-freaded=bench_readed \
-fproged=bench_proged \
-ferased=bench_erased
mkdir -p bench
cp lfs.bench.csv bench/bench.csv
# find perfbd results
make lfs.perfbd.csv
./scripts/perfbd.py -u lfs.perfbd.csv
mkdir -p bench
cp lfs.perfbd.csv bench/perfbd.csv
# create bench statuses
- name: upload-bench
uses: actions/upload-artifact@v2
with:
name: bench
path: bench
- name: status-bench
run: |
mkdir -p status
f=bench/bench.csv
for s in readed proged erased
do
export STEP="bench"
export CONTEXT="bench / $s"
export PREV="$(curl -sS \
"$GITHUB_API_URL/repos/$GITHUB_REPOSITORY/status/master`
`?per_page=100" \
| jq -re 'select(.sha != env.GITHUB_SHA) | .statuses[]
| select(.context == env.CONTEXT).description
| capture("(?<prev>[0-9]+)").prev' \
|| echo 0)"
export DESCRIPTION="$(./scripts/summary.py $f -f$s=bench_$s -Y \
| awk '
NR==2 {$1=0; printf "%s B",$NF}
NR==2 && ENVIRON["PREV"]+0 != 0 {
printf " (%+.1f%%)",100*($NF-ENVIRON["PREV"])/ENVIRON["PREV"]
}')"
jq -n '{
state: "success",
context: env.CONTEXT,
description: env.DESCRIPTION,
target_job: "${{github.job}}",
target_step: env.STEP,
}' | tee status/$(basename $f .csv)-$s.json
done
- name: upload-status-bench
uses: actions/upload-artifact@v2
with:
name: status
path: status
retention-days: 1
# run compatibility tests using the current master as the previous version
test-compat:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
if: ${{github.event_name == 'pull_request'}}
# checkout the current pr target into lfsp
- uses: actions/checkout@v2
if: ${{github.event_name == 'pull_request'}}
with:
ref: ${{github.event.pull_request.base.ref}}
path: lfsp
- name: install
if: ${{github.event_name == 'pull_request'}}
run: |
# need a few things
sudo apt-get update -qq
sudo apt-get install -qq gcc python3 python3-pip
pip3 install toml
gcc --version
python3 --version
# adjust prefix of lfsp
- name: changeprefix
if: ${{github.event_name == 'pull_request'}}
run: |
./scripts/changeprefix.py lfs lfsp lfsp/*.h lfsp/*.c
- name: test-compat
if: ${{github.event_name == 'pull_request'}}
run: |
TESTS=tests/test_compat.toml \
SRC="$(find . lfsp -name '*.c' -maxdepth 1 \
-and -not -name '*.t.*' \
-and -not -name '*.b.*')" \
CFLAGS="-DLFSP=lfsp/lfsp.h" \
make test
# self-host with littlefs-fuse for a fuzz-like test
fuse:
runs-on: ubuntu-latest
if: ${{!endsWith(github.ref, '-prefix')}}
steps:
- uses: actions/checkout@v2
- name: install
run: |
# need a few things
sudo apt-get update -qq
sudo apt-get install -qq gcc python3 python3-pip libfuse-dev
sudo pip3 install toml
gcc --version
python3 --version
fusermount -V
- uses: actions/checkout@v2
with:
repository: littlefs-project/littlefs-fuse
ref: v2
path: littlefs-fuse
- name: setup
run: |
# copy our new version into littlefs-fuse
rm -rf littlefs-fuse/littlefs/*
cp -r $(git ls-tree --name-only HEAD) littlefs-fuse/littlefs
# setup disk for littlefs-fuse
mkdir mount
LOOP=$(sudo losetup -f)
sudo chmod a+rw $LOOP
dd if=/dev/zero bs=512 count=128K of=disk
losetup $LOOP disk
echo "LOOP=$LOOP" >> $GITHUB_ENV
- name: test
run: |
# self-host test
make -C littlefs-fuse
littlefs-fuse/lfs --format $LOOP
littlefs-fuse/lfs $LOOP mount
ls mount
mkdir mount/littlefs
cp -r $(git ls-tree --name-only HEAD) mount/littlefs
cd mount/littlefs
stat .
ls -flh
make -B test-runner
make -B test
# test migration using littlefs-fuse
migrate:
runs-on: ubuntu-latest
if: ${{!endsWith(github.ref, '-prefix')}}
steps:
- uses: actions/checkout@v2
- name: install
run: |
# need a few things
sudo apt-get update -qq
sudo apt-get install -qq gcc python3 python3-pip libfuse-dev
sudo pip3 install toml
gcc --version
python3 --version
fusermount -V
- uses: actions/checkout@v2
with:
repository: littlefs-project/littlefs-fuse
ref: v2
path: v2
- uses: actions/checkout@v2
with:
repository: littlefs-project/littlefs-fuse
ref: v1
path: v1
- name: setup
run: |
# copy our new version into littlefs-fuse
rm -rf v2/littlefs/*
cp -r $(git ls-tree --name-only HEAD) v2/littlefs
# setup disk for littlefs-fuse
mkdir mount
LOOP=$(sudo losetup -f)
sudo chmod a+rw $LOOP
dd if=/dev/zero bs=512 count=128K of=disk
losetup $LOOP disk
echo "LOOP=$LOOP" >> $GITHUB_ENV
- name: test
run: |
# compile v1 and v2
make -C v1
make -C v2
# run self-host test with v1
v1/lfs --format $LOOP
v1/lfs $LOOP mount
ls mount
mkdir mount/littlefs
cp -r $(git ls-tree --name-only HEAD) mount/littlefs
cd mount/littlefs
stat .
ls -flh
make -B test-runner
make -B test
# attempt to migrate
cd ../..
fusermount -u mount
v2/lfs --migrate $LOOP
v2/lfs $LOOP mount
# run self-host test with v2 right where we left off
ls mount
cd mount/littlefs
stat .
ls -flh
make -B test-runner
make -B test
# status related tasks that run after tests
status:
runs-on: ubuntu-latest
needs: [test, bench]
steps:
- uses: actions/checkout@v2
if: ${{github.event_name == 'pull_request'}}
- name: install
if: ${{github.event_name == 'pull_request'}}
run: |
# need a few things
sudo apt-get install -qq gcc python3 python3-pip
pip3 install toml
gcc --version
python3 --version
- uses: actions/download-artifact@v2
if: ${{github.event_name == 'pull_request'}}
continue-on-error: true
with:
name: sizes
path: sizes
- uses: actions/download-artifact@v2
if: ${{github.event_name == 'pull_request'}}
continue-on-error: true
with:
name: cov
path: cov
- uses: actions/download-artifact@v2
if: ${{github.event_name == 'pull_request'}}
continue-on-error: true
with:
name: bench
path: bench
# try to find results from tests
- name: create-table
if: ${{github.event_name == 'pull_request'}}
run: |
# compare against pull-request target
curl -sS \
"$GITHUB_API_URL/repos/$GITHUB_REPOSITORY/status/`
`${{github.event.pull_request.base.ref}}`
`?per_page=100" \
| jq -re 'select(.sha != env.GITHUB_SHA) | .statuses[]' \
>> prev-status.json \
|| true
# build table for GitHub
declare -A table
# sizes table
i=0
j=0
for c in "" readonly threadsafe multiversion migrate error-asserts
do
# per-config results
c_or_default=${c:-default}
c_camel=${c_or_default^}
table[$i,$j]=$c_camel
((j+=1))
for s in code stack structs
do
f=sizes/thumb${c:+-$c}.$s.csv
[ -e $f ] && table[$i,$j]=$( \
export PREV="$(jq -re '
select(.context == "'"sizes (thumb${c:+, $c}) / $s"'").description
| capture("(?<prev>[0-9∞]+)").prev' \
prev-status.json || echo 0)"
./scripts/summary.py $f --max=stack_limit -Y \
| awk '
NR==2 {$1=0; printf "%s B",$NF}
NR==2 && ENVIRON["PREV"]+0 != 0 {
printf " (%+.1f%%)",100*($NF-ENVIRON["PREV"])/ENVIRON["PREV"]
}' \
| sed -e 's/ /\&nbsp;/g')
((j+=1))
done
((j=0, i+=1))
done
# coverage table
i=0
j=4
for s in lines branches
do
table[$i,$j]=${s^}
((j+=1))
f=cov/cov.csv
[ -e $f ] && table[$i,$j]=$( \
export PREV="$(jq -re '
select(.context == "'"cov / $s"'").description
| capture("(?<prev_a>[0-9]+)/(?<prev_b>[0-9]+)")
| 100*((.prev_a|tonumber) / (.prev_b|tonumber))' \
prev-status.json || echo 0)"
./scripts/cov.py -u $f -f$s -Y \
| awk -F '[ /%]+' -v s=$s '
NR==2 {$1=0; printf "%d/%d %s",$2,$3,s}
NR==2 && ENVIRON["PREV"]+0 != 0 {
printf " (%+.1f%%)",$4-ENVIRON["PREV"]
}' \
| sed -e 's/ /\&nbsp;/g')
((j=4, i+=1))
done
# benchmark table
i=3
j=4
for s in readed proged erased
do
table[$i,$j]=${s^}
((j+=1))
f=bench/bench.csv
[ -e $f ] && table[$i,$j]=$( \
export PREV="$(jq -re '
select(.context == "'"bench / $s"'").description
| capture("(?<prev>[0-9]+)").prev' \
prev-status.json || echo 0)"
./scripts/summary.py $f -f$s=bench_$s -Y \
| awk '
NR==2 {$1=0; printf "%s B",$NF}
NR==2 && ENVIRON["PREV"]+0 != 0 {
printf " (%+.1f%%)",100*($NF-ENVIRON["PREV"])/ENVIRON["PREV"]
}' \
| sed -e 's/ /\&nbsp;/g')
((j=4, i+=1))
done
# build the actual table
echo "| | Code | Stack | Structs | | Coverage |" >> table.txt
echo "|:--|-----:|------:|--------:|:--|---------:|" >> table.txt
for ((i=0; i<6; i++))
do
echo -n "|" >> table.txt
for ((j=0; j<6; j++))
do
echo -n " " >> table.txt
[[ i -eq 2 && j -eq 5 ]] && echo -n "**Benchmarks**" >> table.txt
echo -n "${table[$i,$j]:-}" >> table.txt
echo -n " |" >> table.txt
done
echo >> table.txt
done
cat table.txt
# create a bot comment for successful runs on pull requests
- name: create-comment
if: ${{github.event_name == 'pull_request'}}
run: |
touch comment.txt
echo "<details>" >> comment.txt
echo "<summary>" >> comment.txt
echo "Tests passed ✓, `
`Code: $(awk 'NR==3 {print $4}' table.txt || true), `
`Stack: $(awk 'NR==3 {print $6}' table.txt || true), `
`Structs: $(awk 'NR==3 {print $8}' table.txt || true)" \
>> comment.txt
echo "</summary>" >> comment.txt
echo >> comment.txt
[ -e table.txt ] && cat table.txt >> comment.txt
echo >> comment.txt
echo "</details>" >> comment.txt
cat comment.txt
mkdir -p comment
jq -n --rawfile comment comment.txt '{
number: ${{github.event.number}},
body: $comment,
}' | tee comment/comment.json
- name: upload-comment
uses: actions/upload-artifact@v2
with:
name: comment
path: comment
retention-days: 1

31
.gitignore vendored

@@ -2,8 +2,33 @@
*.o
*.d
*.a
*.ci
*.csv
*.t.*
*.b.*
*.gcno
*.gcda
*.perf
lfs
liblfs.a
# Testing things
blocks/
lfs
test.c
runners/test_runner
runners/bench_runner
lfs.code.csv
lfs.data.csv
lfs.stack.csv
lfs.structs.csv
lfs.cov.csv
lfs.perf.csv
lfs.perfbd.csv
lfs.test.csv
lfs.bench.csv
# Misc
tags
.gdb_history
scripts/__pycache__
# Historical, probably should remove at some point
tests/*.toml.*

.travis.yml

@@ -1,302 +0,0 @@
# Environment variables
env:
global:
- CFLAGS=-Werror
# Common test script
script:
# make sure example can at least compile
- sed -n '/``` c/,/```/{/```/d; p;}' README.md > test.c &&
make all CFLAGS+="
-Duser_provided_block_device_read=NULL
-Duser_provided_block_device_prog=NULL
-Duser_provided_block_device_erase=NULL
-Duser_provided_block_device_sync=NULL
-include stdio.h"
# run tests
- make test QUIET=1
# run tests with a few different configurations
- make test QUIET=1 CFLAGS+="-DLFS_READ_SIZE=1 -DLFS_CACHE_SIZE=4"
- make test QUIET=1 CFLAGS+="-DLFS_READ_SIZE=512 -DLFS_CACHE_SIZE=512 -DLFS_BLOCK_CYCLES=16"
- make test QUIET=1 CFLAGS+="-DLFS_BLOCK_COUNT=1023 -DLFS_LOOKAHEAD_SIZE=256"
- make clean test QUIET=1 CFLAGS+="-DLFS_INLINE_MAX=0"
- make clean test QUIET=1 CFLAGS+="-DLFS_NO_INTRINSICS"
# compile and find the code size with the smallest configuration
- make clean size
OBJ="$(ls lfs*.o | tr '\n' ' ')"
CFLAGS+="-DLFS_NO_ASSERT -DLFS_NO_DEBUG -DLFS_NO_WARN -DLFS_NO_ERROR"
| tee sizes
# update status if we succeeded, compare with master if possible
- |
if [ "$TRAVIS_TEST_RESULT" -eq 0 ]
then
CURR=$(tail -n1 sizes | awk '{print $1}')
PREV=$(curl -u "$GEKY_BOT_STATUSES" https://api.github.com/repos/$TRAVIS_REPO_SLUG/status/master \
| jq -re "select(.sha != \"$TRAVIS_COMMIT\")
| .statuses[] | select(.context == \"$STAGE/$NAME\").description
| capture(\"code size is (?<size>[0-9]+)\").size" \
|| echo 0)
STATUS="Passed, code size is ${CURR}B"
if [ "$PREV" -ne 0 ]
then
STATUS="$STATUS ($(python -c "print '%+.2f' % (100*($CURR-$PREV)/$PREV.0)")%)"
fi
fi
# CI matrix
jobs:
include:
# native testing
- stage: test
env:
- STAGE=test
- NAME=littlefs-x86
# cross-compile with ARM (thumb mode)
- stage: test
env:
- STAGE=test
- NAME=littlefs-arm
- CC="arm-linux-gnueabi-gcc --static -mthumb"
- EXEC="qemu-arm"
install:
- sudo apt-get install gcc-arm-linux-gnueabi qemu-user
- arm-linux-gnueabi-gcc --version
- qemu-arm -version
# cross-compile with PowerPC
- stage: test
env:
- STAGE=test
- NAME=littlefs-powerpc
- CC="powerpc-linux-gnu-gcc --static"
- EXEC="qemu-ppc"
install:
- sudo apt-get install gcc-powerpc-linux-gnu qemu-user
- powerpc-linux-gnu-gcc --version
- qemu-ppc -version
# cross-compile with MIPS
- stage: test
env:
- STAGE=test
- NAME=littlefs-mips
- CC="mips-linux-gnu-gcc --static"
- EXEC="qemu-mips"
install:
- sudo add-apt-repository -y "deb http://archive.ubuntu.com/ubuntu/ xenial main universe"
- sudo apt-get -qq update
- sudo apt-get install gcc-mips-linux-gnu qemu-user
- mips-linux-gnu-gcc --version
- qemu-mips -version
# self-host with littlefs-fuse for fuzz test
- stage: test
env:
- STAGE=test
- NAME=littlefs-fuse
if: branch !~ -prefix$
install:
- sudo apt-get install libfuse-dev
- git clone --depth 1 https://github.com/geky/littlefs-fuse -b v2-alpha
- fusermount -V
- gcc --version
before_script:
# setup disk for littlefs-fuse
- rm -rf littlefs-fuse/littlefs/*
- cp -r $(git ls-tree --name-only HEAD) littlefs-fuse/littlefs
- mkdir mount
- sudo chmod a+rw /dev/loop0
- dd if=/dev/zero bs=512 count=4096 of=disk
- losetup /dev/loop0 disk
script:
# self-host test
- make -C littlefs-fuse
- littlefs-fuse/lfs --format /dev/loop0
- littlefs-fuse/lfs /dev/loop0 mount
- ls mount
- mkdir mount/littlefs
- cp -r $(git ls-tree --name-only HEAD) mount/littlefs
- cd mount/littlefs
- stat .
- ls -flh
- make -B test_dirs test_files QUIET=1
# self-host with littlefs-fuse for fuzz test
- stage: test
env:
- STAGE=test
- NAME=littlefs-migration
if: branch !~ -prefix$
install:
- sudo apt-get install libfuse-dev
- git clone --depth 1 https://github.com/geky/littlefs-fuse -b v2-alpha v2
- git clone --depth 1 https://github.com/geky/littlefs-fuse -b v1 v1
- fusermount -V
- gcc --version
before_script:
# setup disk for littlefs-fuse
- rm -rf v2/littlefs/*
- cp -r $(git ls-tree --name-only HEAD) v2/littlefs
- mkdir mount
- sudo chmod a+rw /dev/loop0
- dd if=/dev/zero bs=512 count=4096 of=disk
- losetup /dev/loop0 disk
script:
# compile v1 and v2
- make -C v1
- make -C v2
# run self-host test with v1
- v1/lfs --format /dev/loop0
- v1/lfs /dev/loop0 mount
- ls mount
- mkdir mount/littlefs
- cp -r $(git ls-tree --name-only HEAD) mount/littlefs
- cd mount/littlefs
- stat .
- ls -flh
- make -B test_dirs test_files QUIET=1
# attempt to migrate
- cd ../..
- fusermount -u mount
- v2/lfs --migrate /dev/loop0
- v2/lfs /dev/loop0 mount
# run self-host test with v2 right where we left off
- ls mount
- cd mount/littlefs
- stat .
- ls -flh
- make -B test_dirs test_files QUIET=1
# Automatically create releases
- stage: deploy
env:
- STAGE=deploy
- NAME=deploy
script:
- |
bash << 'SCRIPT'
set -ev
# Find version defined in lfs.h
LFS_VERSION=$(grep -ox '#define LFS_VERSION .*' lfs.h | cut -d ' ' -f3)
LFS_VERSION_MAJOR=$((0xffff & ($LFS_VERSION >> 16)))
LFS_VERSION_MINOR=$((0xffff & ($LFS_VERSION >> 0)))
# Grab latests patch from repo tags, default to 0, needs finagling
# to get past github's pagination api
PREV_URL=https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs/tags/v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR.
PREV_URL=$(curl -u "$GEKY_BOT_RELEASES" "$PREV_URL" -I \
| sed -n '/^Link/{s/.*<\(.*\)>; rel="last"/\1/;p;q0};$q1' \
|| echo $PREV_URL)
LFS_VERSION_PATCH=$(curl -u "$GEKY_BOT_RELEASES" "$PREV_URL" \
| jq 'map(.ref | match("\\bv.*\\..*\\.(.*)$";"g")
.captures[].string | tonumber) | max + 1' \
|| echo 0)
# We have our new version
LFS_VERSION="v$LFS_VERSION_MAJOR.$LFS_VERSION_MINOR.$LFS_VERSION_PATCH"
echo "VERSION $LFS_VERSION"
# Check that we're the most recent commit
CURRENT_COMMIT=$(curl -f -u "$GEKY_BOT_RELEASES" \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/commits/master \
| jq -re '.sha')
[ "$TRAVIS_COMMIT" == "$CURRENT_COMMIT" ] || exit 0
# Create major branch
git branch v$LFS_VERSION_MAJOR HEAD
# Create major prefix branch
git config user.name "geky bot"
git config user.email "bot@geky.net"
git fetch https://github.com/$TRAVIS_REPO_SLUG.git \
--depth=50 v$LFS_VERSION_MAJOR-prefix || true
./scripts/prefix.py lfs$LFS_VERSION_MAJOR
git branch v$LFS_VERSION_MAJOR-prefix $( \
git commit-tree $(git write-tree) \
$(git rev-parse --verify -q FETCH_HEAD | sed -e 's/^/-p /') \
-p HEAD \
-m "Generated v$LFS_VERSION_MAJOR prefixes")
git reset --hard
# Update major version branches (vN and vN-prefix)
git push https://$GEKY_BOT_RELEASES@github.com/$TRAVIS_REPO_SLUG.git \
v$LFS_VERSION_MAJOR \
v$LFS_VERSION_MAJOR-prefix
# Create patch version tag (vN.N.N)
curl -f -u "$GEKY_BOT_RELEASES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/git/refs \
-d "{
\"ref\": \"refs/tags/$LFS_VERSION\",
\"sha\": \"$TRAVIS_COMMIT\"
}"
# Create minor release?
[[ "$LFS_VERSION" == *.0 ]] || exit 0
# Build release notes
PREV=$(git tag --sort=-v:refname -l "v*.0" | head -1)
if [ ! -z "$PREV" ]
then
echo "PREV $PREV"
CHANGES=$'### Changes\n\n'$( \
git log --oneline $PREV.. --grep='^Merge' --invert-grep)
printf "CHANGES\n%s\n\n" "$CHANGES"
fi
# Create the release
curl -f -u "$GEKY_BOT_RELEASES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/releases \
-d "{
\"tag_name\": \"$LFS_VERSION\",
\"name\": \"${LFS_VERSION%.0}\",
\"draft\": true,
\"body\": $(jq -sR '.' <<< "$CHANGES")
}" #"
SCRIPT
# Manage statuses
before_install:
- |
curl -u "$GEKY_BOT_STATUSES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/statuses/${TRAVIS_PULL_REQUEST_SHA:-$TRAVIS_COMMIT} \
-d "{
\"context\": \"$STAGE/$NAME\",
\"state\": \"pending\",
\"description\": \"${STATUS:-In progress}\",
\"target_url\": \"https://travis-ci.org/$TRAVIS_REPO_SLUG/jobs/$TRAVIS_JOB_ID\"
}"
after_failure:
- |
curl -u "$GEKY_BOT_STATUSES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/statuses/${TRAVIS_PULL_REQUEST_SHA:-$TRAVIS_COMMIT} \
-d "{
\"context\": \"$STAGE/$NAME\",
\"state\": \"failure\",
\"description\": \"${STATUS:-Failed}\",
\"target_url\": \"https://travis-ci.org/$TRAVIS_REPO_SLUG/jobs/$TRAVIS_JOB_ID\"
}"
after_success:
- |
curl -u "$GEKY_BOT_STATUSES" -X POST \
https://api.github.com/repos/$TRAVIS_REPO_SLUG/statuses/${TRAVIS_PULL_REQUEST_SHA:-$TRAVIS_COMMIT} \
-d "{
\"context\": \"$STAGE/$NAME\",
\"state\": \"success\",
\"description\": \"${STATUS:-Passed}\",
\"target_url\": \"https://travis-ci.org/$TRAVIS_REPO_SLUG/jobs/$TRAVIS_JOB_ID\"
}"
# Job control
stages:
- name: test
- name: deploy
if: branch = master AND type = push

DESIGN.md

@@ -59,7 +59,7 @@ This leaves us with three major requirements for an embedded filesystem.
RAM to temporarily store filesystem metadata.
For ROM, this means we need to keep our design simple and reuse code paths
were possible. For RAM we have a stronger requirement, all RAM usage is
where possible. For RAM we have a stronger requirement, all RAM usage is
bounded. This means RAM usage does not grow as the filesystem changes in
size or number of files. This creates a unique challenge as even presumably
simple operations, such as traversing the filesystem, become surprisingly
@@ -254,7 +254,7 @@ have weaknesses that limit their usefulness. But if we merge the two they can
mutually solve each other's limitations.
This is the idea behind littlefs. At the sub-block level, littlefs is built
out of small, two blocks logs that provide atomic updates to metadata anywhere
out of small, two block logs that provide atomic updates to metadata anywhere
on the filesystem. At the super-block level, littlefs is a CObW tree of blocks
that can be evicted on demand.
@@ -626,7 +626,7 @@ log&#8322;_n_ pointers that skip to different preceding elements of the
skip-list.
The name comes from heavy use of the [CTZ instruction][wikipedia-ctz], which
lets us calculate the power-of-two factors efficiently. For a give block _n_,
lets us calculate the power-of-two factors efficiently. For a given block _n_,
that block contains ctz(_n_)+1 pointers.
```
@@ -676,7 +676,7 @@ block, this cost is fairly reasonable.
---
This is a new data structure, so we still have several questions. What is the
storage overage? Can the number of pointers exceed the size of a block? How do
storage overhead? Can the number of pointers exceed the size of a block? How do
we store a CTZ skip-list in our metadata pairs?
To find the storage overhead, we can look at the data structure as multiple
@@ -742,8 +742,8 @@ where:
2. popcount(![x]) = the number of bits that are 1 in ![x]
Initial tests of this surprising property seem to hold. As ![n] approaches
infinity, we end up with an average overhead of 2 pointers, which matches what
our assumption from earlier. During iteration, the popcount function seems to
infinity, we end up with an average overhead of 2 pointers, which matches our
assumption from earlier. During iteration, the popcount function seems to
handle deviations from this average. Of course, just to make sure I wrote a
quick script that verified this property for all 32-bit integers.
@@ -767,7 +767,7 @@ overflow, but we can avoid this by rearranging the equation a bit:
![off = N - (B-2w/8)n - (w/8)popcount(n)][ctz-formula7]
Our solution requires quite a bit of math, but computer are very good at math.
Our solution requires quite a bit of math, but computers are very good at math.
Now we can find both our block index and offset from a size in _O(1)_, letting
us store CTZ skip-lists with only a pointer and size.
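As a concrete illustration of the O(1) lookup described in the hunk above,
here is a small, self-contained sketch assuming 4-byte pointers (w = 32) and
an arbitrary block size. It is a simplified rendering of the design's
formula, not necessarily the exact littlefs implementation.

``` c
#include <stdint.h>
#include <stdio.h>

// portable popcount for the sketch
static uint32_t popc(uint32_t x) {
    uint32_t n = 0;
    for (; x; x >>= 1) {
        n += x & 1;
    }
    return n;
}

// Given a file offset, find which block of the CTZ skip-list contains it
// and the offset within that block. Assumes 4-byte pointers (w/8 = 4).
static uint32_t ctz_index(uint32_t block_size, uint32_t *off) {
    uint32_t size = *off;
    uint32_t b = block_size - 2*4;   // data bytes per block, on average
    uint32_t i = size / b;           // first-order estimate of the index
    if (i == 0) {
        return 0;
    }
    // correct the estimate for the extra pointers in earlier blocks
    i = (size - 4*(popc(i-1)+2)) / b;
    // off = N - (B - 2w/8)n - (w/8)popcount(n)
    *off = size - b*i - 4*popc(i);
    return i;
}

int main(void) {
    uint32_t off = 5000;             // example file offset
    uint32_t block = ctz_index(512, &off);
    printf("block index %u, offset %u\n", (unsigned)block, (unsigned)off);
    return 0;
}
```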
@@ -850,7 +850,7 @@ nearly every write to the filesystem.
Normally, block allocation involves some sort of free list or bitmap stored on
the filesystem that is updated with free blocks. However, with power
resilience, keeping these structure consistent becomes difficult. It doesn't
resilience, keeping these structures consistent becomes difficult. It doesn't
help that any mistake in updating these structures can result in lost blocks
that are impossible to recover.
@@ -894,9 +894,9 @@ high-risk error conditions.
---
Our block allocator needs to find free blocks efficiently. You could traverse
through every block on storage and check each one against our filesystem tree,
however the runtime would be abhorrent. We need to somehow collect multiple
blocks each traversal.
through every block on storage and check each one against our filesystem tree;
however, the runtime would be abhorrent. We need to somehow collect multiple
blocks per traversal.
Looking at existing designs, some larger filesystems that use a similar "drop
it on the floor" strategy store a bitmap of the entire storage in [RAM]. This
@@ -920,8 +920,8 @@ a brute force traversal. Instead of a bitmap the size of storage, we keep track
of a small, fixed-size bitmap called the lookahead buffer. During block
allocation, we take blocks from the lookahead buffer. If the lookahead buffer
is empty, we scan the filesystem for more free blocks, populating our lookahead
buffer. Each scan we use an increasing offset, circling the storage as blocks
are allocated.
buffer. In each scan we use an increasing offset, circling the storage as
blocks are allocated.
Here's what it might look like to allocate 4 blocks on a decently busy
filesystem with a 32 bit lookahead and a total of 128 blocks (512 KiB
@@ -950,7 +950,7 @@ alloc = 112 lookahead: ffff8000
```
This lookahead approach has a runtime complexity of _O(n&sup2;)_ to completely
scan storage, however, bitmaps are surprisingly compact, and in practice only
scan storage; however, bitmaps are surprisingly compact, and in practice only
one or two passes are usually needed to find free blocks. Additionally, the
performance of the allocator can be optimized by adjusting the block size or
size of the lookahead buffer, trading either write granularity or RAM for
@@ -1173,9 +1173,9 @@ We may find that the new block is also bad, but hopefully after repeating this
cycle we'll eventually find a new block where a write succeeds. If we don't,
that means that all blocks in our storage are bad, and we've reached the end of
our device's usable life. At this point, littlefs will return an "out of space"
error, which is technically true, there are no more good blocks, but as an
added benefit also matches the error condition expected by users of dynamically
sized data.
error. This is technically true, as there are no more good blocks, but as an
added benefit it also matches the error condition expected by users of
dynamically sized data.
---
@@ -1187,7 +1187,7 @@ original data even after it has been corrupted. One such mechanism for this is
ECC is an extension to the idea of a checksum. Where a checksum such as CRC can
detect that an error has occurred in the data, ECC can detect and actually
correct some amount of errors. However, there is a limit to how many errors ECC
can detect, call the [Hamming bound][wikipedia-hamming-bound]. As the number of
can detect: the [Hamming bound][wikipedia-hamming-bound]. As the number of
errors approaches the Hamming bound, we may still be able to detect errors, but
can no longer fix the data. If we've reached this point the block is
unrecoverable.
@@ -1202,7 +1202,7 @@ chip itself.
In littlefs, ECC is entirely optional. Read errors can instead be prevented
proactively by wear leveling. But it's important to note that ECC can be used
at the block device level to modestly extend the life of a device. littlefs
respects any errors reported by the block device, allow a block device to
respects any errors reported by the block device, allowing a block device to
provide additional aggressive error detection.
---
@@ -1231,7 +1231,7 @@ Generally, wear leveling algorithms fall into one of two categories:
we need to consider all blocks, including blocks that already contain data.
As a tradeoff for code size and complexity, littlefs (currently) only provides
dynamic wear leveling. This is a best efforts solution. Wear is not distributed
dynamic wear leveling. This is a best effort solution. Wear is not distributed
perfectly, but it is distributed among the free blocks and greatly extends the
life of a device.
@@ -1378,7 +1378,7 @@ We can make several improvements. First, instead of giving each file its own
metadata pair, we can store multiple files in a single metadata pair. One way
to do this is to directly associate a directory with a metadata pair (or a
linked list of metadata pairs). This makes it easy for multiple files to share
the directory's metadata pair for logging and reduce the collective storage
the directory's metadata pair for logging and reduces the collective storage
overhead.
The strict binding of metadata pairs and directories also gives users
@@ -1816,12 +1816,12 @@ while manipulating the directory tree (foreshadowing!).
## The move problem
We have one last challenge. The move problem. Phrasing the problem is simple:
We have one last challenge: the move problem. Phrasing the problem is simple:
How do you atomically move a file between two directories?
In littlefs we can atomically commit to directories, but we can't create
an atomic commit that span multiple directories. The filesystem must go
an atomic commit that spans multiple directories. The filesystem must go
through a minimum of two distinct states to complete a move.
To make matters worse, file moves are a common form of synchronization for
@@ -1831,13 +1831,13 @@ atomic moves right.
So what can we do?
- We definitely can't just let power-loss result in duplicated or lost files.
This could easily break user's code and would only reveal itself in extreme
This could easily break users' code and would only reveal itself in extreme
cases. We were only able to be lazy about the threaded linked-list because
it isn't user facing and we can handle the corner cases internally.
- Some filesystems propagate COW operations up the tree until finding a common
parent. Unfortunately this interacts poorly with our threaded tree and brings
back the issue of upward propagation of wear.
- Some filesystems propagate COW operations up the tree until a common parent
is found. Unfortunately this interacts poorly with our threaded tree and
brings back the issue of upward propagation of wear.
- In a previous version of littlefs we tried to solve this problem by going
back and forth between the source and destination, marking and unmarking the
@@ -1852,7 +1852,7 @@ introduction of a mechanism called "global state".
---
Global state is a small set of state that can be updated from _any_ metadata
pair. Combining global state with metadata pair's ability to update multiple
pair. Combining global state with metadata pairs' ability to update multiple
entries in one commit gives us a powerful tool for crafting complex atomic
operations.
@@ -1910,7 +1910,7 @@ the filesystem is mounted.
You may have noticed that global state is very expensive. We keep a copy in
RAM and a delta in an unbounded number of metadata pairs. Even if we reset
the global state to its initial value we can't easily clean up the deltas on
the global state to its initial value, we can't easily clean up the deltas on
disk. For this reason, it's very important that we keep the size of global
state bounded and extremely small. But, even with a strict budget, global
state is incredibly valuable.

LICENSE.md

@@ -1,3 +1,4 @@
Copyright (c) 2022, The littlefs authors.
Copyright (c) 2017, Arm Limited. All rights reserved.
Redistribution and use in source and binary forms, with or without modification,

603
Makefile

@@ -1,72 +1,585 @@
TARGET = lfs.a
# overrideable build dir, default is in-place
BUILDDIR ?= .
# overridable target/src/tools/flags/etc
ifneq ($(wildcard test.c main.c),)
override TARGET = lfs
endif
CC ?= gcc
AR ?= ar
SIZE ?= size
SRC += $(wildcard *.c emubd/*.c)
OBJ := $(SRC:.c=.o)
DEP := $(SRC:.c=.d)
ASM := $(SRC:.c=.s)
TEST := $(patsubst tests/%.sh,%,$(wildcard tests/test_*))
SHELL = /bin/bash -o pipefail
ifdef DEBUG
override CFLAGS += -O0 -g3
TARGET ?= $(BUILDDIR)/lfs
else
override CFLAGS += -Os
TARGET ?= $(BUILDDIR)/liblfs.a
endif
ifdef WORD
override CFLAGS += -m$(WORD)
CC ?= gcc
AR ?= ar
SIZE ?= size
CTAGS ?= ctags
NM ?= nm
OBJDUMP ?= objdump
VALGRIND ?= valgrind
GDB ?= gdb
PERF ?= perf
SRC ?= $(filter-out $(wildcard *.t.* *.b.*),$(wildcard *.c))
OBJ := $(SRC:%.c=$(BUILDDIR)/%.o)
DEP := $(SRC:%.c=$(BUILDDIR)/%.d)
ASM := $(SRC:%.c=$(BUILDDIR)/%.s)
CI := $(SRC:%.c=$(BUILDDIR)/%.ci)
GCDA := $(SRC:%.c=$(BUILDDIR)/%.t.gcda)
TESTS ?= $(wildcard tests/*.toml)
TEST_SRC ?= $(SRC) \
$(filter-out $(wildcard bd/*.t.* bd/*.b.*),$(wildcard bd/*.c)) \
runners/test_runner.c
TEST_RUNNER ?= $(BUILDDIR)/runners/test_runner
TEST_A := $(TESTS:%.toml=$(BUILDDIR)/%.t.a.c) \
$(TEST_SRC:%.c=$(BUILDDIR)/%.t.a.c)
TEST_C := $(TEST_A:%.t.a.c=%.t.c)
TEST_OBJ := $(TEST_C:%.t.c=%.t.o)
TEST_DEP := $(TEST_C:%.t.c=%.t.d)
TEST_CI := $(TEST_C:%.t.c=%.t.ci)
TEST_GCNO := $(TEST_C:%.t.c=%.t.gcno)
TEST_GCDA := $(TEST_C:%.t.c=%.t.gcda)
TEST_PERF := $(TEST_RUNNER:%=%.perf)
TEST_TRACE := $(TEST_RUNNER:%=%.trace)
TEST_CSV := $(TEST_RUNNER:%=%.csv)
BENCHES ?= $(wildcard benches/*.toml)
BENCH_SRC ?= $(SRC) \
$(filter-out $(wildcard bd/*.t.* bd/*.b.*),$(wildcard bd/*.c)) \
runners/bench_runner.c
BENCH_RUNNER ?= $(BUILDDIR)/runners/bench_runner
BENCH_A := $(BENCHES:%.toml=$(BUILDDIR)/%.b.a.c) \
$(BENCH_SRC:%.c=$(BUILDDIR)/%.b.a.c)
BENCH_C := $(BENCH_A:%.b.a.c=%.b.c)
BENCH_OBJ := $(BENCH_C:%.b.c=%.b.o)
BENCH_DEP := $(BENCH_C:%.b.c=%.b.d)
BENCH_CI := $(BENCH_C:%.b.c=%.b.ci)
BENCH_GCNO := $(BENCH_C:%.b.c=%.b.gcno)
BENCH_GCDA := $(BENCH_C:%.b.c=%.b.gcda)
BENCH_PERF := $(BENCH_RUNNER:%=%.perf)
BENCH_TRACE := $(BENCH_RUNNER:%=%.trace)
BENCH_CSV := $(BENCH_RUNNER:%=%.csv)
CFLAGS += -fcallgraph-info=su
CFLAGS += -g3
CFLAGS += -I.
CFLAGS += -std=c99 -Wall -Wextra -pedantic
CFLAGS += -Wmissing-prototypes
CFLAGS += -ftrack-macro-expansion=0
ifdef DEBUG
CFLAGS += -O0
else
CFLAGS += -Os
endif
ifdef TRACE
CFLAGS += -DLFS_YES_TRACE
endif
ifdef YES_COV
CFLAGS += --coverage
endif
ifdef YES_PERF
CFLAGS += -fno-omit-frame-pointer
endif
ifdef YES_PERFBD
CFLAGS += -fno-omit-frame-pointer
endif
ifdef VERBOSE
CODEFLAGS += -v
DATAFLAGS += -v
STACKFLAGS += -v
STRUCTSFLAGS += -v
COVFLAGS += -v
PERFFLAGS += -v
PERFBDFLAGS += -v
endif
# forward -j flag
PERFFLAGS += $(filter -j%,$(MAKEFLAGS))
PERFBDFLAGS += $(filter -j%,$(MAKEFLAGS))
ifneq ($(NM),nm)
CODEFLAGS += --nm-path="$(NM)"
DATAFLAGS += --nm-path="$(NM)"
endif
ifneq ($(OBJDUMP),objdump)
CODEFLAGS += --objdump-path="$(OBJDUMP)"
DATAFLAGS += --objdump-path="$(OBJDUMP)"
STRUCTSFLAGS += --objdump-path="$(OBJDUMP)"
PERFFLAGS += --objdump-path="$(OBJDUMP)"
PERFBDFLAGS += --objdump-path="$(OBJDUMP)"
endif
ifneq ($(PERF),perf)
PERFFLAGS += --perf-path="$(PERF)"
endif
TESTFLAGS += -b
BENCHFLAGS += -b
# forward -j flag
TESTFLAGS += $(filter -j%,$(MAKEFLAGS))
BENCHFLAGS += $(filter -j%,$(MAKEFLAGS))
ifdef YES_PERF
TESTFLAGS += -p $(TEST_PERF)
BENCHFLAGS += -p $(BENCH_PERF)
endif
ifdef YES_PERFBD
TESTFLAGS += -t $(TEST_TRACE) --trace-backtrace --trace-freq=100
endif
ifndef NO_PERFBD
BENCHFLAGS += -t $(BENCH_TRACE) --trace-backtrace --trace-freq=100
endif
ifdef YES_TESTMARKS
TESTFLAGS += -o $(TEST_CSV)
endif
ifndef NO_BENCHMARKS
BENCHFLAGS += -o $(BENCH_CSV)
endif
ifdef VERBOSE
TESTFLAGS += -v
TESTCFLAGS += -v
BENCHFLAGS += -v
BENCHCFLAGS += -v
endif
ifdef EXEC
TESTFLAGS += --exec="$(EXEC)"
BENCHFLAGS += --exec="$(EXEC)"
endif
ifneq ($(GDB),gdb)
TESTFLAGS += --gdb-path="$(GDB)"
BENCHFLAGS += --gdb-path="$(GDB)"
endif
ifneq ($(VALGRIND),valgrind)
TESTFLAGS += --valgrind-path="$(VALGRIND)"
BENCHFLAGS += --valgrind-path="$(VALGRIND)"
endif
ifneq ($(PERF),perf)
TESTFLAGS += --perf-path="$(PERF)"
BENCHFLAGS += --perf-path="$(PERF)"
endif
# this is a bit of a hack, but we want to make sure the BUILDDIR
# directory structure is correct before we run any commands
ifneq ($(BUILDDIR),.)
$(if $(findstring n,$(MAKEFLAGS)),, $(shell mkdir -p \
$(addprefix $(BUILDDIR)/,$(dir \
$(SRC) \
$(TESTS) \
$(TEST_SRC) \
$(BENCHES) \
$(BENCH_SRC)))))
endif
override CFLAGS += -I.
override CFLAGS += -std=c99 -Wall -pedantic
override CFLAGS += -Wextra -Wshadow -Wjump-misses-init
# Remove missing-field-initializers because of GCC bug
override CFLAGS += -Wno-missing-field-initializers
all: $(TARGET)
# commands
## Build littlefs
.PHONY: all build
all build: $(TARGET)
## Build assembly files
.PHONY: asm
asm: $(ASM)
## Find the total size
.PHONY: size
size: $(OBJ)
$(SIZE) -t $^
.SUFFIXES:
test: test_format test_dirs test_files test_seek test_truncate \
test_entries test_interspersed test_alloc test_paths test_attrs \
test_move test_orphan test_corrupt
@rm test.c
test_%: tests/test_%.sh
## Generate a ctags file
.PHONY: tags
tags:
$(CTAGS) --totals --c-types=+p $(shell find -H -name '*.h') $(SRC)
ifdef QUIET
@./$< | sed -n '/^[-=]/p'
else
./$<
## Show this help text
.PHONY: help
help:
@$(strip awk '/^## / { \
sub(/^## /,""); \
getline rule; \
while (rule ~ /^(#|\.PHONY|ifdef|ifndef)/) getline rule; \
gsub(/:.*/, "", rule); \
printf " "" %-25s %s\n", rule, $$0 \
}' $(MAKEFILE_LIST))
## Find the per-function code size
.PHONY: code
code: CODEFLAGS+=-S
code: $(OBJ) $(BUILDDIR)/lfs.code.csv
./scripts/code.py $(OBJ) $(CODEFLAGS)
## Compare per-function code size
.PHONY: code-diff
code-diff: $(OBJ)
./scripts/code.py $^ $(CODEFLAGS) -d $(BUILDDIR)/lfs.code.csv
## Find the per-function data size
.PHONY: data
data: DATAFLAGS+=-S
data: $(OBJ) $(BUILDDIR)/lfs.data.csv
./scripts/data.py $(OBJ) $(DATAFLAGS)
## Compare per-function data size
.PHONY: data-diff
data-diff: $(OBJ)
./scripts/data.py $^ $(DATAFLAGS) -d $(BUILDDIR)/lfs.data.csv
## Find the per-function stack usage
.PHONY: stack
stack: STACKFLAGS+=-S
stack: $(CI) $(BUILDDIR)/lfs.stack.csv
./scripts/stack.py $(CI) $(STACKFLAGS)
## Compare per-function stack usage
.PHONY: stack-diff
stack-diff: $(CI)
./scripts/stack.py $^ $(STACKFLAGS) -d $(BUILDDIR)/lfs.stack.csv
## Find function sizes
.PHONY: funcs
funcs: SUMMARYFLAGS+=-S
funcs: \
$(BUILDDIR)/lfs.code.csv \
$(BUILDDIR)/lfs.data.csv \
$(BUILDDIR)/lfs.stack.csv
$(strip ./scripts/summary.py $^ \
-bfunction \
-fcode=code_size \
-fdata=data_size \
-fstack=stack_limit --max=stack \
$(SUMMARYFLAGS))
## Compare function sizes
.PHONY: funcs-diff
funcs-diff: SHELL=/bin/bash
funcs-diff: $(OBJ) $(CI)
$(strip ./scripts/summary.py \
<(./scripts/code.py $(OBJ) -q $(CODEFLAGS) -o-) \
<(./scripts/data.py $(OBJ) -q $(DATAFLAGS) -o-) \
<(./scripts/stack.py $(CI) -q $(STACKFLAGS) -o-) \
-bfunction \
-fcode=code_size \
-fdata=data_size \
-fstack=stack_limit --max=stack \
$(SUMMARYFLAGS) -d <(./scripts/summary.py \
$(BUILDDIR)/lfs.code.csv \
$(BUILDDIR)/lfs.data.csv \
$(BUILDDIR)/lfs.stack.csv \
-q $(SUMMARYFLAGS) -o-))
## Find struct sizes
.PHONY: structs
structs: STRUCTSFLAGS+=-S
structs: $(OBJ) $(BUILDDIR)/lfs.structs.csv
./scripts/structs.py $(OBJ) $(STRUCTSFLAGS)
## Compare struct sizes
.PHONY: structs-diff
structs-diff: $(OBJ)
./scripts/structs.py $^ $(STRUCTSFLAGS) -d $(BUILDDIR)/lfs.structs.csv
## Find the line/branch coverage after a test run
.PHONY: cov
cov: COVFLAGS+=-s
cov: $(GCDA) $(BUILDDIR)/lfs.cov.csv
$(strip ./scripts/cov.py $(GCDA) \
$(patsubst %,-F%,$(SRC)) \
$(COVFLAGS))
## Compare line/branch coverage
.PHONY: cov-diff
cov-diff: $(GCDA)
$(strip ./scripts/cov.py $^ \
$(patsubst %,-F%,$(SRC)) \
$(COVFLAGS) -d $(BUILDDIR)/lfs.cov.csv)
## Find the perf results after bench run with YES_PERF
.PHONY: perf
perf: PERFFLAGS+=-S
perf: $(BENCH_PERF) $(BUILDDIR)/lfs.perf.csv
$(strip ./scripts/perf.py $(BENCH_PERF) \
$(patsubst %,-F%,$(SRC)) \
$(PERFFLAGS))
## Compare perf results
.PHONY: perf-diff
perf-diff: $(BENCH_PERF)
$(strip ./scripts/perf.py $^ \
$(patsubst %,-F%,$(SRC)) \
$(PERFFLAGS) -d $(BUILDDIR)/lfs.perf.csv)
## Find the perfbd results after a bench run
.PHONY: perfbd
perfbd: PERFBDFLAGS+=-S
perfbd: $(BENCH_TRACE) $(BUILDDIR)/lfs.perfbd.csv
$(strip ./scripts/perfbd.py $(BENCH_RUNNER) $(BENCH_TRACE) \
$(patsubst %,-F%,$(SRC)) \
$(PERFBDFLAGS))
## Compare perfbd results
.PHONY: perfbd-diff
perfbd-diff: $(BENCH_TRACE)
$(strip ./scripts/perfbd.py $(BENCH_RUNNER) $^ \
$(patsubst %,-F%,$(SRC)) \
$(PERFBDFLAGS) -d $(BUILDDIR)/lfs.perfbd.csv)
## Find a summary of compile-time sizes
.PHONY: summary sizes
summary sizes: \
$(BUILDDIR)/lfs.code.csv \
$(BUILDDIR)/lfs.data.csv \
$(BUILDDIR)/lfs.stack.csv \
$(BUILDDIR)/lfs.structs.csv
$(strip ./scripts/summary.py $^ \
-fcode=code_size \
-fdata=data_size \
-fstack=stack_limit --max=stack \
-fstructs=struct_size \
-Y $(SUMMARYFLAGS))
## Compare compile-time sizes
.PHONY: summary-diff sizes-diff
summary-diff sizes-diff: SHELL=/bin/bash
summary-diff sizes-diff: $(OBJ) $(CI)
$(strip ./scripts/summary.py \
<(./scripts/code.py $(OBJ) -q $(CODEFLAGS) -o-) \
<(./scripts/data.py $(OBJ) -q $(DATAFLAGS) -o-) \
<(./scripts/stack.py $(CI) -q $(STACKFLAGS) -o-) \
<(./scripts/structs.py $(OBJ) -q $(STRUCTSFLAGS) -o-) \
-fcode=code_size \
-fdata=data_size \
-fstack=stack_limit --max=stack \
-fstructs=struct_size \
-Y $(SUMMARYFLAGS) -d <(./scripts/summary.py \
$(BUILDDIR)/lfs.code.csv \
$(BUILDDIR)/lfs.data.csv \
$(BUILDDIR)/lfs.stack.csv \
$(BUILDDIR)/lfs.structs.csv \
-q $(SUMMARYFLAGS) -o-))
## Build the test-runner
.PHONY: test-runner build-test
test-runner build-test: CFLAGS+=-Wno-missing-prototypes
ifndef NO_COV
test-runner build-test: CFLAGS+=--coverage
endif
ifdef YES_PERF
test-runner build-test: CFLAGS+=-fno-omit-frame-pointer
endif
ifdef YES_PERFBD
test-runner build-test: CFLAGS+=-fno-omit-frame-pointer
endif
# note we remove some binary-dependent files during compilation,
# otherwise it's way too easy to end up with outdated results
test-runner build-test: $(TEST_RUNNER)
ifndef NO_COV
rm -f $(TEST_GCDA)
endif
ifdef YES_PERF
rm -f $(TEST_PERF)
endif
ifdef YES_PERFBD
rm -f $(TEST_TRACE)
endif
-include $(DEP)
## Run the tests, -j enables parallel tests
.PHONY: test
test: test-runner
./scripts/test.py $(TEST_RUNNER) $(TESTFLAGS)
lfs: $(OBJ)
## List the tests
.PHONY: test-list
test-list: test-runner
./scripts/test.py $(TEST_RUNNER) $(TESTFLAGS) -l
## Summarize the testmarks
.PHONY: testmarks
testmarks: SUMMARYFLAGS+=-spassed
testmarks: $(TEST_CSV) $(BUILDDIR)/lfs.test.csv
$(strip ./scripts/summary.py $(TEST_CSV) \
-bsuite \
-fpassed=test_passed \
$(SUMMARYFLAGS))
## Compare testmarks against a previous run
.PHONY: testmarks-diff
testmarks-diff: $(TEST_CSV)
$(strip ./scripts/summary.py $^ \
-bsuite \
-fpassed=test_passed \
$(SUMMARYFLAGS) -d $(BUILDDIR)/lfs.test.csv)
## Build the bench-runner
.PHONY: bench-runner build-bench
bench-runner build-bench: CFLAGS+=-Wno-missing-prototypes
ifdef YES_COV
bench-runner build-bench: CFLAGS+=--coverage
endif
ifdef YES_PERF
bench-runner build-bench: CFLAGS+=-fno-omit-frame-pointer
endif
ifndef NO_PERFBD
bench-runner build-bench: CFLAGS+=-fno-omit-frame-pointer
endif
# note we remove some binary-dependent files during compilation,
# otherwise it's way too easy to end up with outdated results
bench-runner build-bench: $(BENCH_RUNNER)
ifdef YES_COV
rm -f $(BENCH_GCDA)
endif
ifdef YES_PERF
rm -f $(BENCH_PERF)
endif
ifndef NO_PERFBD
rm -f $(BENCH_TRACE)
endif
## Run the benchmarks, -j enables parallel benchmarks
.PHONY: bench
bench: bench-runner
./scripts/bench.py $(BENCH_RUNNER) $(BENCHFLAGS)
## List the benchmarks
.PHONY: bench-list
bench-list: bench-runner
./scripts/bench.py $(BENCH_RUNNER) $(BENCHFLAGS) -l
## Summarize the benchmarks
.PHONY: benchmarks
benchmarks: SUMMARYFLAGS+=-Serased -Sproged -Sreaded
benchmarks: $(BENCH_CSV) $(BUILDDIR)/lfs.bench.csv
$(strip ./scripts/summary.py $(BENCH_CSV) \
-bsuite \
-freaded=bench_readed \
-fproged=bench_proged \
-ferased=bench_erased \
$(SUMMARYFLAGS))
## Compare benchmarks against a previous run
.PHONY: benchmarks-diff
benchmarks-diff: $(BENCH_CSV)
$(strip ./scripts/summary.py $^ \
-bsuite \
-freaded=bench_readed \
-fproged=bench_proged \
-ferased=bench_erased \
$(SUMMARYFLAGS) -d $(BUILDDIR)/lfs.bench.csv)
# rules
-include $(DEP)
-include $(TEST_DEP)
.SUFFIXES:
.SECONDARY:
$(BUILDDIR)/lfs: $(OBJ)
$(CC) $(CFLAGS) $^ $(LFLAGS) -o $@
%.a: $(OBJ)
$(BUILDDIR)/liblfs.a: $(OBJ)
$(AR) rcs $@ $^
%.o: %.c
$(CC) -c -MMD $(CFLAGS) $< -o $@
$(BUILDDIR)/lfs.code.csv: $(OBJ)
./scripts/code.py $^ -q $(CODEFLAGS) -o $@
%.s: %.c
$(BUILDDIR)/lfs.data.csv: $(OBJ)
./scripts/data.py $^ -q $(DATAFLAGS) -o $@
$(BUILDDIR)/lfs.stack.csv: $(CI)
./scripts/stack.py $^ -q $(STACKFLAGS) -o $@
$(BUILDDIR)/lfs.structs.csv: $(OBJ)
./scripts/structs.py $^ -q $(STRUCTSFLAGS) -o $@
$(BUILDDIR)/lfs.cov.csv: $(GCDA)
$(strip ./scripts/cov.py $^ \
$(patsubst %,-F%,$(SRC)) \
-q $(COVFLAGS) -o $@)
$(BUILDDIR)/lfs.perf.csv: $(BENCH_PERF)
$(strip ./scripts/perf.py $^ \
$(patsubst %,-F%,$(SRC)) \
-q $(PERFFLAGS) -o $@)
$(BUILDDIR)/lfs.perfbd.csv: $(BENCH_TRACE)
$(strip ./scripts/perfbd.py $(BENCH_RUNNER) $^ \
$(patsubst %,-F%,$(SRC)) \
-q $(PERFBDFLAGS) -o $@)
$(BUILDDIR)/lfs.test.csv: $(TEST_CSV)
cp $^ $@
$(BUILDDIR)/lfs.bench.csv: $(BENCH_CSV)
cp $^ $@
$(BUILDDIR)/runners/test_runner: $(TEST_OBJ)
$(CC) $(CFLAGS) $^ $(LFLAGS) -o $@
$(BUILDDIR)/runners/bench_runner: $(BENCH_OBJ)
$(CC) $(CFLAGS) $^ $(LFLAGS) -o $@
# our main build rule generates .o, .d, and .ci files, the latter
# used for stack analysis
$(BUILDDIR)/%.o $(BUILDDIR)/%.ci: %.c
$(CC) -c -MMD $(CFLAGS) $< -o $(BUILDDIR)/$*.o
$(BUILDDIR)/%.o $(BUILDDIR)/%.ci: $(BUILDDIR)/%.c
$(CC) -c -MMD $(CFLAGS) $< -o $(BUILDDIR)/$*.o
$(BUILDDIR)/%.s: %.c
$(CC) -S $(CFLAGS) $< -o $@
$(BUILDDIR)/%.c: %.a.c
./scripts/prettyasserts.py -p LFS_ASSERT $< -o $@
$(BUILDDIR)/%.c: $(BUILDDIR)/%.a.c
./scripts/prettyasserts.py -p LFS_ASSERT $< -o $@
$(BUILDDIR)/%.t.a.c: %.toml
./scripts/test.py -c $< $(TESTCFLAGS) -o $@
$(BUILDDIR)/%.t.a.c: %.c $(TESTS)
./scripts/test.py -c $(TESTS) -s $< $(TESTCFLAGS) -o $@
$(BUILDDIR)/%.b.a.c: %.toml
./scripts/bench.py -c $< $(BENCHCFLAGS) -o $@
$(BUILDDIR)/%.b.a.c: %.c $(BENCHES)
./scripts/bench.py -c $(BENCHES) -s $< $(BENCHCFLAGS) -o $@
## Clean everything
.PHONY: clean
clean:
rm -f $(TARGET)
rm -f $(BUILDDIR)/lfs
rm -f $(BUILDDIR)/liblfs.a
rm -f $(BUILDDIR)/lfs.code.csv
rm -f $(BUILDDIR)/lfs.data.csv
rm -f $(BUILDDIR)/lfs.stack.csv
rm -f $(BUILDDIR)/lfs.structs.csv
rm -f $(BUILDDIR)/lfs.cov.csv
rm -f $(BUILDDIR)/lfs.perf.csv
rm -f $(BUILDDIR)/lfs.perfbd.csv
rm -f $(BUILDDIR)/lfs.test.csv
rm -f $(BUILDDIR)/lfs.bench.csv
rm -f $(OBJ)
rm -f $(DEP)
rm -f $(ASM)
rm -f $(CI)
rm -f $(TEST_RUNNER)
rm -f $(TEST_A)
rm -f $(TEST_C)
rm -f $(TEST_OBJ)
rm -f $(TEST_DEP)
rm -f $(TEST_CI)
rm -f $(TEST_GCNO)
rm -f $(TEST_GCDA)
rm -f $(TEST_PERF)
rm -f $(TEST_TRACE)
rm -f $(TEST_CSV)
rm -f $(BENCH_RUNNER)
rm -f $(BENCH_A)
rm -f $(BENCH_C)
rm -f $(BENCH_OBJ)
rm -f $(BENCH_DEP)
rm -f $(BENCH_CI)
rm -f $(BENCH_GCNO)
rm -f $(BENCH_GCDA)
rm -f $(BENCH_PERF)
rm -f $(BENCH_TRACE)
rm -f $(BENCH_CSV)


@@ -53,6 +53,7 @@ const struct lfs_config cfg = {
.block_count = 128,
.cache_size = 16,
.lookahead_size = 16,
.block_cycles = 500,
};
// entry point
@@ -109,11 +110,14 @@ directory functions, with the deviation that the allocation of filesystem
structures must be provided by the user.
All POSIX operations, such as remove and rename, are atomic, even in event
of power-loss. Additionally, no file updates are not actually committed to
of power-loss. Additionally, file updates are not actually committed to
the filesystem until sync or close is called on the file.
## Other notes
Littlefs is written in C, and specifically should compile with any compiler
that conforms to the `C99` standard.
All littlefs calls have the potential to return a negative error code. The
errors can be either one of those found in the `enum lfs_error` in
[lfs.h](lfs.h), or an error returned by the user's block device operations.
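As a rough sketch of what checking these looks like in practice (here `lfs` and
`cfg` are assumed to be an `lfs_t` and a fully populated `struct lfs_config`),
every call's return value can simply be compared against zero:
``` c
// try to mount; a negative return value is an error code
int err = lfs_mount(&lfs, &cfg);
if (err < 0) {
    // e.g. LFS_ERR_CORRUPT on a fresh or corrupted device, or an error
    // passed straight through from the block device
    lfs_format(&lfs, &cfg);
    err = lfs_mount(&lfs, &cfg);
}
if (err < 0) {
    // still failing? give up and report the error code
    return err;
}
```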
@@ -188,7 +192,7 @@ More details on how littlefs works can be found in [DESIGN.md](DESIGN.md) and
## Testing
The littlefs comes with a test suite designed to run on a PC using the
[emulated block device](emubd/lfs_emubd.h) found in the emubd directory.
[emulated block device](bd/lfs_testbd.h) found in the `bd` directory.
The tests assume a Linux environment and can be started with make:
``` bash
@@ -217,6 +221,18 @@ License Identifiers that are here available: http://spdx.org/licenses/
- [littlefs-js] - A javascript wrapper for littlefs. I'm not sure why you would
want this, but it is handy for demos. You can see it in action
[here][littlefs-js-demo].
- [littlefs-python] - A Python wrapper for littlefs. The project allows you
to create images of the filesystem on your PC. Check if littlefs will fit
your needs, create images for a later download to the target memory or
inspect the content of a binary image of the target memory.
- [littlefs2-rust] - A Rust wrapper for littlefs. This project allows you
to use littlefs in a Rust-friendly API, reaping the benefits of Rust's memory
safety and other guarantees.
- [littlefs-disk-img-viewer] - A memory-efficient web application for viewing
littlefs disk images in your web browser.
- [mklfs] - A command line tool built by the [Lua RTOS] guys for making
littlefs images from a host PC. Supports Windows, Mac OS, and Linux.
@@ -234,8 +250,19 @@ License Identifiers that are here available: http://spdx.org/licenses/
MCUs. It offers static wear-leveling and power-resilience with only a fixed
_O(|address|)_ pointer structure stored on each block and in RAM.
- [ChaN's FatFs] - A lightweight reimplementation of the infamous FAT filesystem
for microcontroller-scale devices. Due to limitations of FAT it can't provide
power-loss resilience, but it does allow easy interop with PCs.
- [chamelon] - A pure-OCaml implementation of (most of) littlefs, designed for
use with the MirageOS library operating system project. It is interoperable
with the reference implementation, with some caveats.
- [nim-littlefs] - A Nim wrapper and API for littlefs. Includes a fuse
implementation based on [littlefs-fuse]
[BSD-3-Clause]: https://spdx.org/licenses/BSD-3-Clause.html
[littlefs-disk-img-viewer]: https://github.com/tniessen/littlefs-disk-img-viewer
[littlefs-fuse]: https://github.com/geky/littlefs-fuse
[FUSE]: https://github.com/libfuse/libfuse
[littlefs-js]: https://github.com/geky/littlefs-js
@@ -243,6 +270,11 @@ License Identifiers that are here available: http://spdx.org/licenses/
[mklfs]: https://github.com/whitecatboard/Lua-RTOS-ESP32/tree/master/components/mklfs/src
[Lua RTOS]: https://github.com/whitecatboard/Lua-RTOS-ESP32
[Mbed OS]: https://github.com/armmbed/mbed-os
[LittleFileSystem]: https://os.mbed.com/docs/mbed-os/v5.12/apis/littlefilesystem.html
[LittleFileSystem]: https://os.mbed.com/docs/mbed-os/latest/apis/littlefilesystem.html
[SPIFFS]: https://github.com/pellepl/spiffs
[Dhara]: https://github.com/dlbeer/dhara
[ChaN's FatFs]: http://elm-chan.org/fsw/ff/00index_e.html
[littlefs-python]: https://pypi.org/project/littlefs-python/
[littlefs2-rust]: https://crates.io/crates/littlefs2
[chamelon]: https://github.com/yomimono/chamelon
[nim-littlefs]: https://github.com/Graveflo/nim-littlefs

133
SPEC.md

@@ -1,10 +1,10 @@
## littlefs technical specification
This is the technical specification of the little filesystem. This document
covers the technical details of how the littlefs is stored on disk for
introspection and tooling. This document assumes you are familiar with the
design of the littlefs, for more info on how littlefs works check
out [DESIGN.md](DESIGN.md).
This is the technical specification of the little filesystem with on-disk
version lfs2.1. This document covers the technical details of how the littlefs
is stored on disk for introspection and tooling. This document assumes you are
familiar with the design of the littlefs, for more info on how littlefs works
check out [DESIGN.md](DESIGN.md).
```
| | | .---._____
@@ -133,12 +133,6 @@ tags XORed together, starting with `0xffffffff`.
'-------------------' '-------------------'
```
One last thing to note before we get into the details around tag encoding. Each
tag contains a valid bit used to indicate if the tag and containing commit is
valid. This valid bit is the first bit found in the tag and the commit and can
be used to tell if we've attempted to write to the remaining space in the
block.
Here's a more complete example of a metadata block containing 4 entries:
```
@@ -191,6 +185,53 @@ Here's a more complete example of metadata block containing 4 entries:
'---- most recent D
```
Two things to note before we get into the details around tag encoding:
1. Each tag contains a valid bit used to indicate if the tag and containing
commit is valid. After XORing, this bit should always be zero.
At the end of each commit, the valid bit of the previous tag is XORed
with the lowest bit in the type field of the CRC tag. This allows
the CRC tag to force the next commit to fail the valid bit test if it
has not yet been written to.
2. The valid bit alone is not enough info to know if the next commit has been
erased. We don't know the order bits will be programmed in a program block,
so it's possible that the next commit had an attempted program that left the
valid bit unchanged.
To ensure we only ever program erased bytes, each commit can contain an
optional forward-CRC (FCRC). An FCRC contains a checksum of some amount of
bytes in the next commit at the time it was erased.
```
.-------------------. \ \
| revision count | | |
|-------------------| | |
| metadata | | |
| | +---. +-- current commit
| | | | |
|-------------------| | | |
| FCRC ---|-. | |
|-------------------| / | | |
| CRC -----|-' /
|-------------------| |
| padding | | padding (doesn't need CRC)
| | |
|-------------------| \ | \
| erased? | +-' |
| | | | +-- next commit
| v | / |
| | /
| |
'-------------------'
```
If the FCRC is missing or the checksum does not match, we must assume a
commit was attempted but failed due to power-loss.
Note that end-of-block commits do not need an FCRC.
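To make this concrete, here is a rough sketch of the check a reader might
perform. The names here (`crc32`, `next_commit`, `has_fcrc`, `fcrc_size`,
`fcrc`) are illustrative stand-ins, not part of the on-disk format or the
reference implementation:
```
// decide whether the space following the current commit is still erased
bool next_commit_erased(const uint8_t *next_commit,
        bool has_fcrc, uint32_t fcrc_size, uint32_t fcrc) {
    if (!has_fcrc) {
        // no FCRC? assume a commit was attempted and lost to power-loss
        return false;
    }
    // checksum the first fcrc_size bytes after this commit's padding;
    // crc32 stands in for the same CRC-32 used by the CRC tag
    // (polynomial 0x04c11db7, initialized with 0xffffffff)
    return crc32(next_commit, fcrc_size) == fcrc;
}
```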
## Metadata tags
So in littlefs, 32-bit tags describe every type of metadata. And this means
@@ -233,19 +274,19 @@ Metadata tag fields:
into a 3-bit abstract type and an 8-bit chunk field. Note that the value
`0x000` is invalid and not assigned a type.
3. **Type1 (3-bits)** - Abstract type of the tag. Groups the tags into
8 categories that facilitate bitmasked lookups.
1. **Type1 (3-bits)** - Abstract type of the tag. Groups the tags into
8 categories that facilitate bitmasked lookups.
4. **Chunk (8-bits)** - Chunk field used for various purposes by the different
abstract types. type1+chunk+id form a unique identifier for each tag in the
metadata block.
2. **Chunk (8-bits)** - Chunk field used for various purposes by the different
abstract types. type1+chunk+id form a unique identifier for each tag in the
metadata block.
5. **Id (10-bits)** - File id associated with the tag. Each file in a metadata
3. **Id (10-bits)** - File id associated with the tag. Each file in a metadata
block gets a unique id which is used to associate tags with that file. The
special value `0x3ff` is used for any tags that are not associated with a
file, such as directory and global metadata.
6. **Length (10-bits)** - Length of the data in bytes. The special value
4. **Length (10-bits)** - Length of the data in bytes. The special value
`0x3ff` indicates that this tag has been deleted.
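For illustration only, the packing above can be expressed with plain shifts and
masks. These helpers are hypothetical (they are not the reference
implementation's macros) and assume the tag has already been un-XORed from its
on-disk form:
```
// layout: [1 valid | 3 type1 | 8 chunk | 10 id | 10 length]
static inline uint32_t tag_valid(uint32_t tag)  { return tag >> 31; }
static inline uint32_t tag_type1(uint32_t tag)  { return (tag >> 28) & 0x7;   }
static inline uint32_t tag_chunk(uint32_t tag)  { return (tag >> 20) & 0xff;  }
static inline uint32_t tag_id(uint32_t tag)     { return (tag >> 10) & 0x3ff; }
static inline uint32_t tag_length(uint32_t tag) { return (tag >>  0) & 0x3ff; }
```
So, for example, a deleted tag is any tag whose length field comes back as the
special value `0x3ff`.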
## Metadata types
@@ -289,8 +330,8 @@ Layout of the name tag:
```
tag data
[-- 32 --][--- variable length ---]
[1| 3| 8 | 10 | 10 ][--- (size) ---]
^ ^ ^ ^ ^- size ^- file name
[1| 3| 8 | 10 | 10 ][--- (size * 8) ---]
^ ^ ^ ^ ^- size ^- file name
| | | '------ id
| | '----------- file type
| '-------------- type1 (0x0)
@@ -470,8 +511,8 @@ Layout of the inline-struct tag:
```
tag data
[-- 32 --][--- variable length ---]
[1|- 11 -| 10 | 10 ][--- (size) ---]
^ ^ ^ ^- size ^- inline data
[1|- 11 -| 10 | 10 ][--- (size * 8) ---]
^ ^ ^ ^- size ^- inline data
| | '------ id
| '------------ type (0x201)
'----------------- valid bit
@@ -556,8 +597,8 @@ Layout of the user-attr tag:
```
tag data
[-- 32 --][--- variable length ---]
[1| 3| 8 | 10 | 10 ][--- (size) ---]
^ ^ ^ ^ ^- size ^- attr data
[1| 3| 8 | 10 | 10 ][--- (size * 8) ---]
^ ^ ^ ^ ^- size ^- attr data
| | | '------ id
| | '----------- attr type
| '-------------- type1 (0x3)
@@ -764,9 +805,9 @@ Layout of the CRC tag:
```
tag data
[-- 32 --][-- 32 --|--- variable length ---]
[1| 3| 8 | 10 | 10 ][-- 32 --|--- (size) ---]
^ ^ ^ ^ ^ ^- crc ^- padding
| | | | '- size (12)
[1| 3| 8 | 10 | 10 ][-- 32 --|--- (size * 8 - 32) ---]
^ ^ ^ ^ ^ ^- crc ^- padding
| | | | '- size
| | | '------ id (0x3ff)
| | '----------- valid state
| '-------------- type1 (0x5)
@@ -785,3 +826,41 @@ CRC fields:
are made about the contents.
---
#### `0x5ff` LFS_TYPE_FCRC
Added in lfs2.1, the optional FCRC tag contains a checksum of some amount of
bytes in the next commit at the time it was erased. This allows us to ensure
that we only ever program erased bytes, even if a previous commit failed due
to power-loss.
When programming a commit, the FCRC size must be at least as large as the
program block size. However, the program block size is not saved on disk, and can
change between mounts, so the FCRC size on disk may be different than the
current program block size.
If the FCRC is missing or the checksum does not match, we must assume a
commit was attempted but failed due to power-loss.
Layout of the FCRC tag:
```
tag data
[-- 32 --][-- 32 --|-- 32 --]
[1|- 11 -| 10 | 10 ][-- 32 --|-- 32 --]
^ ^ ^ ^ ^- fcrc size ^- fcrc
| | | '- size (8)
| | '------ id (0x3ff)
| '------------ type (0x5ff)
'----------------- valid bit
```
FCRC fields:
1. **FCRC size (32-bits)** - Number of bytes after this commit's CRC tag's
padding to include in the FCRC.
2. **FCRC (32-bits)** - CRC of the bytes after this commit's CRC tag's padding
when erased. Like the CRC tag, this uses a CRC-32 with a polynomial of
`0x04c11db7` initialized with `0xffffffff`.
---

739
bd/lfs_emubd.c Normal file

@@ -0,0 +1,739 @@
/*
* Emulating block device, wraps filebd and rambd while providing a bunch
* of hooks for testing littlefs in various conditions.
*
* Copyright (c) 2022, The littlefs authors.
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef _POSIX_C_SOURCE
#define _POSIX_C_SOURCE 199309L
#endif
#include "bd/lfs_emubd.h"
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <time.h>
#ifdef _WIN32
#include <windows.h>
#endif
// access to lazily-allocated/copy-on-write blocks
//
// Note we can only modify a block if we have exclusive access to it (rc == 1)
//
static lfs_emubd_block_t *lfs_emubd_incblock(lfs_emubd_block_t *block) {
if (block) {
block->rc += 1;
}
return block;
}
static void lfs_emubd_decblock(lfs_emubd_block_t *block) {
if (block) {
block->rc -= 1;
if (block->rc == 0) {
free(block);
}
}
}
static lfs_emubd_block_t *lfs_emubd_mutblock(
const struct lfs_config *cfg,
lfs_emubd_block_t **block) {
lfs_emubd_t *bd = cfg->context;
lfs_emubd_block_t *block_ = *block;
if (block_ && block_->rc == 1) {
// rc == 1? can modify
return block_;
} else if (block_) {
// rc > 1? need to create a copy
lfs_emubd_block_t *nblock = malloc(
sizeof(lfs_emubd_block_t) + bd->cfg->erase_size);
if (!nblock) {
return NULL;
}
memcpy(nblock, block_,
sizeof(lfs_emubd_block_t) + bd->cfg->erase_size);
nblock->rc = 1;
lfs_emubd_decblock(block_);
*block = nblock;
return nblock;
} else {
// no block? need to allocate
lfs_emubd_block_t *nblock = malloc(
sizeof(lfs_emubd_block_t) + bd->cfg->erase_size);
if (!nblock) {
return NULL;
}
nblock->rc = 1;
nblock->wear = 0;
// zero for consistency
memset(nblock->data,
(bd->cfg->erase_value != -1) ? bd->cfg->erase_value : 0,
bd->cfg->erase_size);
*block = nblock;
return nblock;
}
}
// emubd create/destroy
int lfs_emubd_create(const struct lfs_config *cfg,
const struct lfs_emubd_config *bdcfg) {
LFS_EMUBD_TRACE("lfs_emubd_create(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p}, "
"%p {.read_size=%"PRIu32", .prog_size=%"PRIu32", "
".erase_size=%"PRIu32", .erase_count=%"PRIu32", "
".erase_value=%"PRId32", .erase_cycles=%"PRIu32", "
".badblock_behavior=%"PRIu8", .power_cycles=%"PRIu32", "
".powerloss_behavior=%"PRIu8", .powerloss_cb=%p, "
".powerloss_data=%p, .track_branches=%d})",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
(void*)bdcfg,
bdcfg->read_size, bdcfg->prog_size, bdcfg->erase_size,
bdcfg->erase_count, bdcfg->erase_value, bdcfg->erase_cycles,
bdcfg->badblock_behavior, bdcfg->power_cycles,
bdcfg->powerloss_behavior, (void*)(uintptr_t)bdcfg->powerloss_cb,
bdcfg->powerloss_data, bdcfg->track_branches);
lfs_emubd_t *bd = cfg->context;
bd->cfg = bdcfg;
// allocate our block array, all blocks start as uninitialized
bd->blocks = malloc(bd->cfg->erase_count * sizeof(lfs_emubd_block_t*));
if (!bd->blocks) {
LFS_EMUBD_TRACE("lfs_emubd_create -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
memset(bd->blocks, 0, bd->cfg->erase_count * sizeof(lfs_emubd_block_t*));
// setup testing things
bd->readed = 0;
bd->proged = 0;
bd->erased = 0;
bd->power_cycles = bd->cfg->power_cycles;
bd->ooo_block = -1;
bd->ooo_data = NULL;
bd->disk = NULL;
if (bd->cfg->disk_path) {
bd->disk = malloc(sizeof(lfs_emubd_disk_t));
if (!bd->disk) {
LFS_EMUBD_TRACE("lfs_emubd_create -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
bd->disk->rc = 1;
bd->disk->scratch = NULL;
#ifdef _WIN32
bd->disk->fd = open(bd->cfg->disk_path,
O_RDWR | O_CREAT | O_BINARY, 0666);
#else
bd->disk->fd = open(bd->cfg->disk_path,
O_RDWR | O_CREAT, 0666);
#endif
if (bd->disk->fd < 0) {
int err = -errno;
LFS_EMUBD_TRACE("lfs_emubd_create -> %d", err);
return err;
}
// if we're emulating erase values, we can keep a block around in
// memory of just the erase state to speed up emulated erases
if (bd->cfg->erase_value != -1) {
bd->disk->scratch = malloc(bd->cfg->erase_size);
if (!bd->disk->scratch) {
LFS_EMUBD_TRACE("lfs_emubd_create -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
memset(bd->disk->scratch,
bd->cfg->erase_value,
bd->cfg->erase_size);
// go ahead and erase all of the disk, otherwise the file will not
// match our internal representation
for (size_t i = 0; i < bd->cfg->erase_count; i++) {
ssize_t res = write(bd->disk->fd,
bd->disk->scratch,
bd->cfg->erase_size);
if (res < 0) {
int err = -errno;
LFS_EMUBD_TRACE("lfs_emubd_create -> %d", err);
return err;
}
}
}
}
LFS_EMUBD_TRACE("lfs_emubd_create -> %d", 0);
return 0;
}
int lfs_emubd_destroy(const struct lfs_config *cfg) {
LFS_EMUBD_TRACE("lfs_emubd_destroy(%p)", (void*)cfg);
lfs_emubd_t *bd = cfg->context;
// decrement reference counts
for (lfs_block_t i = 0; i < bd->cfg->erase_count; i++) {
lfs_emubd_decblock(bd->blocks[i]);
}
free(bd->blocks);
// clean up other resources
lfs_emubd_decblock(bd->ooo_data);
if (bd->disk) {
bd->disk->rc -= 1;
if (bd->disk->rc == 0) {
close(bd->disk->fd);
free(bd->disk->scratch);
free(bd->disk);
}
}
LFS_EMUBD_TRACE("lfs_emubd_destroy -> %d", 0);
return 0;
}
// powerloss hook
static int lfs_emubd_powerloss(const struct lfs_config *cfg) {
lfs_emubd_t *bd = cfg->context;
// emulate out-of-order writes?
lfs_emubd_block_t *ooo_data = NULL;
if (bd->cfg->powerloss_behavior == LFS_EMUBD_POWERLOSS_OOO
&& bd->ooo_block != -1) {
// since writes between syncs are allowed to be out-of-order, it
// shouldn't hurt to restore the first write on powerloss, right?
ooo_data = bd->blocks[bd->ooo_block];
bd->blocks[bd->ooo_block] = lfs_emubd_incblock(bd->ooo_data);
// mirror to disk file?
if (bd->disk
&& (bd->blocks[bd->ooo_block]
|| bd->cfg->erase_value != -1)) {
off_t res1 = lseek(bd->disk->fd,
(off_t)bd->ooo_block*bd->cfg->erase_size,
SEEK_SET);
if (res1 < 0) {
return -errno;
}
ssize_t res2 = write(bd->disk->fd,
(bd->blocks[bd->ooo_block])
? bd->blocks[bd->ooo_block]->data
: bd->disk->scratch,
bd->cfg->erase_size);
if (res2 < 0) {
return -errno;
}
}
}
// simulate power loss
bd->cfg->powerloss_cb(bd->cfg->powerloss_data);
// if we continue, undo out-of-order write emulation
if (bd->cfg->powerloss_behavior == LFS_EMUBD_POWERLOSS_OOO
&& bd->ooo_block != -1) {
lfs_emubd_decblock(bd->blocks[bd->ooo_block]);
bd->blocks[bd->ooo_block] = ooo_data;
// mirror to disk file?
if (bd->disk
&& (bd->blocks[bd->ooo_block]
|| bd->cfg->erase_value != -1)) {
off_t res1 = lseek(bd->disk->fd,
(off_t)bd->ooo_block*bd->cfg->erase_size,
SEEK_SET);
if (res1 < 0) {
return -errno;
}
ssize_t res2 = write(bd->disk->fd,
(bd->blocks[bd->ooo_block])
? bd->blocks[bd->ooo_block]->data
: bd->disk->scratch,
bd->cfg->erase_size);
if (res2 < 0) {
return -errno;
}
}
}
return 0;
}
// block device API
int lfs_emubd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size) {
LFS_EMUBD_TRACE("lfs_emubd_read(%p, "
"0x%"PRIx32", %"PRIu32", %p, %"PRIu32")",
(void*)cfg, block, off, buffer, size);
lfs_emubd_t *bd = cfg->context;
// check if read is valid
LFS_ASSERT(block < bd->cfg->erase_count);
LFS_ASSERT(off % bd->cfg->read_size == 0);
LFS_ASSERT(size % bd->cfg->read_size == 0);
LFS_ASSERT(off+size <= bd->cfg->erase_size);
// get the block
const lfs_emubd_block_t *b = bd->blocks[block];
if (b) {
// block bad?
if (bd->cfg->erase_cycles && b->wear >= bd->cfg->erase_cycles &&
bd->cfg->badblock_behavior == LFS_EMUBD_BADBLOCK_READERROR) {
LFS_EMUBD_TRACE("lfs_emubd_read -> %d", LFS_ERR_CORRUPT);
return LFS_ERR_CORRUPT;
}
// read data
memcpy(buffer, &b->data[off], size);
} else {
// zero for consistency
memset(buffer,
(bd->cfg->erase_value != -1) ? bd->cfg->erase_value : 0,
size);
}
// track reads
bd->readed += size;
if (bd->cfg->read_sleep) {
int err = nanosleep(&(struct timespec){
.tv_sec=bd->cfg->read_sleep/1000000000,
.tv_nsec=bd->cfg->read_sleep%1000000000},
NULL);
if (err) {
err = -errno;
LFS_EMUBD_TRACE("lfs_emubd_read -> %d", err);
return err;
}
}
LFS_EMUBD_TRACE("lfs_emubd_read -> %d", 0);
return 0;
}
int lfs_emubd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size) {
LFS_EMUBD_TRACE("lfs_emubd_prog(%p, "
"0x%"PRIx32", %"PRIu32", %p, %"PRIu32")",
(void*)cfg, block, off, buffer, size);
lfs_emubd_t *bd = cfg->context;
// check if write is valid
LFS_ASSERT(block < bd->cfg->erase_count);
LFS_ASSERT(off % bd->cfg->prog_size == 0);
LFS_ASSERT(size % bd->cfg->prog_size == 0);
LFS_ASSERT(off+size <= bd->cfg->erase_size);
// get the block
lfs_emubd_block_t *b = lfs_emubd_mutblock(cfg, &bd->blocks[block]);
if (!b) {
LFS_EMUBD_TRACE("lfs_emubd_prog -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
// block bad?
if (bd->cfg->erase_cycles && b->wear >= bd->cfg->erase_cycles) {
if (bd->cfg->badblock_behavior ==
LFS_EMUBD_BADBLOCK_PROGERROR) {
LFS_EMUBD_TRACE("lfs_emubd_prog -> %d", LFS_ERR_CORRUPT);
return LFS_ERR_CORRUPT;
} else if (bd->cfg->badblock_behavior ==
LFS_EMUBD_BADBLOCK_PROGNOOP ||
bd->cfg->badblock_behavior ==
LFS_EMUBD_BADBLOCK_ERASENOOP) {
LFS_EMUBD_TRACE("lfs_emubd_prog -> %d", 0);
return 0;
}
}
// were we erased properly?
if (bd->cfg->erase_value != -1) {
for (lfs_off_t i = 0; i < size; i++) {
LFS_ASSERT(b->data[off+i] == bd->cfg->erase_value);
}
}
// prog data
memcpy(&b->data[off], buffer, size);
// mirror to disk file?
if (bd->disk) {
off_t res1 = lseek(bd->disk->fd,
(off_t)block*bd->cfg->erase_size + (off_t)off,
SEEK_SET);
if (res1 < 0) {
int err = -errno;
LFS_EMUBD_TRACE("lfs_emubd_prog -> %d", err);
return err;
}
ssize_t res2 = write(bd->disk->fd, buffer, size);
if (res2 < 0) {
int err = -errno;
LFS_EMUBD_TRACE("lfs_emubd_prog -> %d", err);
return err;
}
}
// track progs
bd->proged += size;
if (bd->cfg->prog_sleep) {
int err = nanosleep(&(struct timespec){
.tv_sec=bd->cfg->prog_sleep/1000000000,
.tv_nsec=bd->cfg->prog_sleep%1000000000},
NULL);
if (err) {
err = -errno;
LFS_EMUBD_TRACE("lfs_emubd_prog -> %d", err);
return err;
}
}
// lose power?
if (bd->power_cycles > 0) {
bd->power_cycles -= 1;
if (bd->power_cycles == 0) {
int err = lfs_emubd_powerloss(cfg);
if (err) {
LFS_EMUBD_TRACE("lfs_emubd_prog -> %d", err);
return err;
}
}
}
LFS_EMUBD_TRACE("lfs_emubd_prog -> %d", 0);
return 0;
}
int lfs_emubd_erase(const struct lfs_config *cfg, lfs_block_t block) {
LFS_EMUBD_TRACE("lfs_emubd_erase(%p, 0x%"PRIx32" (%"PRIu32"))",
(void*)cfg, block, ((lfs_emubd_t*)cfg->context)->cfg->erase_size);
lfs_emubd_t *bd = cfg->context;
// check if erase is valid
LFS_ASSERT(block < bd->cfg->erase_count);
// emulate out-of-order writes? save first write
if (bd->cfg->powerloss_behavior == LFS_EMUBD_POWERLOSS_OOO
&& bd->ooo_block == -1) {
bd->ooo_block = block;
bd->ooo_data = lfs_emubd_incblock(bd->blocks[block]);
}
// get the block
lfs_emubd_block_t *b = lfs_emubd_mutblock(cfg, &bd->blocks[block]);
if (!b) {
LFS_EMUBD_TRACE("lfs_emubd_erase -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
// block bad?
if (bd->cfg->erase_cycles) {
if (b->wear >= bd->cfg->erase_cycles) {
if (bd->cfg->badblock_behavior ==
LFS_EMUBD_BADBLOCK_ERASEERROR) {
LFS_EMUBD_TRACE("lfs_emubd_erase -> %d", LFS_ERR_CORRUPT);
return LFS_ERR_CORRUPT;
} else if (bd->cfg->badblock_behavior ==
LFS_EMUBD_BADBLOCK_ERASENOOP) {
LFS_EMUBD_TRACE("lfs_emubd_erase -> %d", 0);
return 0;
}
} else {
// mark wear
b->wear += 1;
}
}
// emulate an erase value?
if (bd->cfg->erase_value != -1) {
memset(b->data, bd->cfg->erase_value, bd->cfg->erase_size);
// mirror to disk file?
if (bd->disk) {
off_t res1 = lseek(bd->disk->fd,
(off_t)block*bd->cfg->erase_size,
SEEK_SET);
if (res1 < 0) {
int err = -errno;
LFS_EMUBD_TRACE("lfs_emubd_erase -> %d", err);
return err;
}
ssize_t res2 = write(bd->disk->fd,
bd->disk->scratch,
bd->cfg->erase_size);
if (res2 < 0) {
int err = -errno;
LFS_EMUBD_TRACE("lfs_emubd_erase -> %d", err);
return err;
}
}
}
// track erases
bd->erased += bd->cfg->erase_size;
if (bd->cfg->erase_sleep) {
int err = nanosleep(&(struct timespec){
.tv_sec=bd->cfg->erase_sleep/1000000000,
.tv_nsec=bd->cfg->erase_sleep%1000000000},
NULL);
if (err) {
err = -errno;
LFS_EMUBD_TRACE("lfs_emubd_erase -> %d", err);
return err;
}
}
// lose power?
if (bd->power_cycles > 0) {
bd->power_cycles -= 1;
if (bd->power_cycles == 0) {
int err = lfs_emubd_powerloss(cfg);
if (err) {
LFS_EMUBD_TRACE("lfs_emubd_erase -> %d", err);
return err;
}
}
}
LFS_EMUBD_TRACE("lfs_emubd_erase -> %d", 0);
return 0;
}
int lfs_emubd_sync(const struct lfs_config *cfg) {
LFS_EMUBD_TRACE("lfs_emubd_sync(%p)", (void*)cfg);
lfs_emubd_t *bd = cfg->context;
// emulate out-of-order writes? reset first write, writes
// cannot be out-of-order across sync
if (bd->cfg->powerloss_behavior == LFS_EMUBD_POWERLOSS_OOO) {
lfs_emubd_decblock(bd->ooo_data);
bd->ooo_block = -1;
bd->ooo_data = NULL;
}
LFS_EMUBD_TRACE("lfs_emubd_sync -> %d", 0);
return 0;
}
/// Additional extended API for driving test features ///
static int lfs_emubd_crc_(const struct lfs_config *cfg,
lfs_block_t block, uint32_t *crc) {
lfs_emubd_t *bd = cfg->context;
// check if crc is valid
LFS_ASSERT(block < cfg->block_count);
// crc the block
uint32_t crc_ = 0xffffffff;
const lfs_emubd_block_t *b = bd->blocks[block];
if (b) {
crc_ = lfs_crc(crc_, b->data, cfg->block_size);
} else {
uint8_t erase_value = (bd->cfg->erase_value != -1)
? bd->cfg->erase_value
: 0;
for (lfs_size_t i = 0; i < cfg->block_size; i++) {
crc_ = lfs_crc(crc_, &erase_value, 1);
}
}
*crc = 0xffffffff ^ crc_;
return 0;
}
int lfs_emubd_crc(const struct lfs_config *cfg,
lfs_block_t block, uint32_t *crc) {
LFS_EMUBD_TRACE("lfs_emubd_crc(%p, %"PRIu32", %p)",
(void*)cfg, block, crc);
int err = lfs_emubd_crc_(cfg, block, crc);
LFS_EMUBD_TRACE("lfs_emubd_crc -> %d", err);
return err;
}
int lfs_emubd_bdcrc(const struct lfs_config *cfg, uint32_t *crc) {
LFS_EMUBD_TRACE("lfs_emubd_bdcrc(%p, %p)", (void*)cfg, crc);
uint32_t crc_ = 0xffffffff;
for (lfs_block_t i = 0; i < cfg->block_count; i++) {
uint32_t i_crc;
int err = lfs_emubd_crc_(cfg, i, &i_crc);
if (err) {
LFS_EMUBD_TRACE("lfs_emubd_bdcrc -> %d", err);
return err;
}
crc_ = lfs_crc(crc_, &i_crc, sizeof(uint32_t));
}
*crc = 0xffffffff ^ crc_;
LFS_EMUBD_TRACE("lfs_emubd_bdcrc -> %d", 0);
return 0;
}
lfs_emubd_sio_t lfs_emubd_readed(const struct lfs_config *cfg) {
LFS_EMUBD_TRACE("lfs_emubd_readed(%p)", (void*)cfg);
lfs_emubd_t *bd = cfg->context;
LFS_EMUBD_TRACE("lfs_emubd_readed -> %"PRIu64, bd->readed);
return bd->readed;
}
lfs_emubd_sio_t lfs_emubd_proged(const struct lfs_config *cfg) {
LFS_EMUBD_TRACE("lfs_emubd_proged(%p)", (void*)cfg);
lfs_emubd_t *bd = cfg->context;
LFS_EMUBD_TRACE("lfs_emubd_proged -> %"PRIu64, bd->proged);
return bd->proged;
}
lfs_emubd_sio_t lfs_emubd_erased(const struct lfs_config *cfg) {
LFS_EMUBD_TRACE("lfs_emubd_erased(%p)", (void*)cfg);
lfs_emubd_t *bd = cfg->context;
LFS_EMUBD_TRACE("lfs_emubd_erased -> %"PRIu64, bd->erased);
return bd->erased;
}
int lfs_emubd_setreaded(const struct lfs_config *cfg, lfs_emubd_io_t readed) {
LFS_EMUBD_TRACE("lfs_emubd_setreaded(%p, %"PRIu64")", (void*)cfg, readed);
lfs_emubd_t *bd = cfg->context;
bd->readed = readed;
LFS_EMUBD_TRACE("lfs_emubd_setreaded -> %d", 0);
return 0;
}
int lfs_emubd_setproged(const struct lfs_config *cfg, lfs_emubd_io_t proged) {
LFS_EMUBD_TRACE("lfs_emubd_setproged(%p, %"PRIu64")", (void*)cfg, proged);
lfs_emubd_t *bd = cfg->context;
bd->proged = proged;
LFS_EMUBD_TRACE("lfs_emubd_setproged -> %d", 0);
return 0;
}
int lfs_emubd_seterased(const struct lfs_config *cfg, lfs_emubd_io_t erased) {
LFS_EMUBD_TRACE("lfs_emubd_seterased(%p, %"PRIu64")", (void*)cfg, erased);
lfs_emubd_t *bd = cfg->context;
bd->erased = erased;
LFS_EMUBD_TRACE("lfs_emubd_seterased -> %d", 0);
return 0;
}
lfs_emubd_swear_t lfs_emubd_wear(const struct lfs_config *cfg,
lfs_block_t block) {
LFS_EMUBD_TRACE("lfs_emubd_wear(%p, %"PRIu32")", (void*)cfg, block);
lfs_emubd_t *bd = cfg->context;
// check if block is valid
LFS_ASSERT(block < bd->cfg->erase_count);
// get the wear
lfs_emubd_wear_t wear;
const lfs_emubd_block_t *b = bd->blocks[block];
if (b) {
wear = b->wear;
} else {
wear = 0;
}
LFS_EMUBD_TRACE("lfs_emubd_wear -> %"PRIi32, wear);
return wear;
}
int lfs_emubd_setwear(const struct lfs_config *cfg,
lfs_block_t block, lfs_emubd_wear_t wear) {
LFS_EMUBD_TRACE("lfs_emubd_setwear(%p, %"PRIu32", %"PRIi32")",
(void*)cfg, block, wear);
lfs_emubd_t *bd = cfg->context;
// check if block is valid
LFS_ASSERT(block < bd->cfg->erase_count);
// set the wear
lfs_emubd_block_t *b = lfs_emubd_mutblock(cfg, &bd->blocks[block]);
if (!b) {
LFS_EMUBD_TRACE("lfs_emubd_setwear -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
b->wear = wear;
LFS_EMUBD_TRACE("lfs_emubd_setwear -> %d", 0);
return 0;
}
lfs_emubd_spowercycles_t lfs_emubd_powercycles(
const struct lfs_config *cfg) {
LFS_EMUBD_TRACE("lfs_emubd_powercycles(%p)", (void*)cfg);
lfs_emubd_t *bd = cfg->context;
LFS_EMUBD_TRACE("lfs_emubd_powercycles -> %"PRIi32, bd->power_cycles);
return bd->power_cycles;
}
int lfs_emubd_setpowercycles(const struct lfs_config *cfg,
lfs_emubd_powercycles_t power_cycles) {
LFS_EMUBD_TRACE("lfs_emubd_setpowercycles(%p, %"PRIi32")",
(void*)cfg, power_cycles);
lfs_emubd_t *bd = cfg->context;
bd->power_cycles = power_cycles;
LFS_EMUBD_TRACE("lfs_emubd_setpowercycles -> %d", 0);
return 0;
}
int lfs_emubd_copy(const struct lfs_config *cfg, lfs_emubd_t *copy) {
LFS_EMUBD_TRACE("lfs_emubd_copy(%p, %p)", (void*)cfg, (void*)copy);
lfs_emubd_t *bd = cfg->context;
// lazily copy over our block array
copy->blocks = malloc(bd->cfg->erase_count * sizeof(lfs_emubd_block_t*));
if (!copy->blocks) {
LFS_EMUBD_TRACE("lfs_emubd_copy -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
for (size_t i = 0; i < bd->cfg->erase_count; i++) {
copy->blocks[i] = lfs_emubd_incblock(bd->blocks[i]);
}
// other state
copy->readed = bd->readed;
copy->proged = bd->proged;
copy->erased = bd->erased;
copy->power_cycles = bd->power_cycles;
copy->ooo_block = bd->ooo_block;
copy->ooo_data = lfs_emubd_incblock(bd->ooo_data);
copy->disk = bd->disk;
if (copy->disk) {
copy->disk->rc += 1;
}
copy->cfg = bd->cfg;
LFS_EMUBD_TRACE("lfs_emubd_copy -> %d", 0);
return 0;
}

244
bd/lfs_emubd.h Normal file

@@ -0,0 +1,244 @@
/*
* Emulating block device, wraps filebd and rambd while providing a bunch
* of hooks for testing littlefs in various conditions.
*
* Copyright (c) 2022, The littlefs authors.
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef LFS_EMUBD_H
#define LFS_EMUBD_H
#include "lfs.h"
#include "lfs_util.h"
#include "bd/lfs_rambd.h"
#include "bd/lfs_filebd.h"
#ifdef __cplusplus
extern "C"
{
#endif
// Block device specific tracing
#ifndef LFS_EMUBD_TRACE
#ifdef LFS_EMUBD_YES_TRACE
#define LFS_EMUBD_TRACE(...) LFS_TRACE(__VA_ARGS__)
#else
#define LFS_EMUBD_TRACE(...)
#endif
#endif
// Mode determining how "bad-blocks" behave during testing. This simulates
// some real-world circumstances such as progs not sticking (prog-noop),
// a readonly disk (erase-noop), and ECC failures (read-error).
//
// Note that read-noop is not allowed. Reads _must_ return a consistent (but
// possibly arbitrary) value on every read.
typedef enum lfs_emubd_badblock_behavior {
LFS_EMUBD_BADBLOCK_PROGERROR = 0, // Error on prog
LFS_EMUBD_BADBLOCK_ERASEERROR = 1, // Error on erase
LFS_EMUBD_BADBLOCK_READERROR = 2, // Error on read
LFS_EMUBD_BADBLOCK_PROGNOOP = 3, // Prog does nothing silently
LFS_EMUBD_BADBLOCK_ERASENOOP = 4, // Erase does nothing silently
} lfs_emubd_badblock_behavior_t;
// Mode determining how power-loss behaves during testing. Noop leaves the
// data on-disk untouched, while out-of-order (OOO) reverts the first block
// written since the last sync, emulating storage that may reorder writes.
typedef enum lfs_emubd_powerloss_behavior {
LFS_EMUBD_POWERLOSS_NOOP = 0, // Progs are atomic
LFS_EMUBD_POWERLOSS_OOO = 1, // Blocks are written out-of-order
} lfs_emubd_powerloss_behavior_t;
// Type for measuring read/program/erase operations
typedef uint64_t lfs_emubd_io_t;
typedef int64_t lfs_emubd_sio_t;
// Type for measuring wear
typedef uint32_t lfs_emubd_wear_t;
typedef int32_t lfs_emubd_swear_t;
// Type for tracking power-cycles
typedef uint32_t lfs_emubd_powercycles_t;
typedef int32_t lfs_emubd_spowercycles_t;
// Type for delays in nanoseconds
typedef uint64_t lfs_emubd_sleep_t;
typedef int64_t lfs_emubd_ssleep_t;
// emubd config, this is required for testing
struct lfs_emubd_config {
// Minimum size of a read operation in bytes.
lfs_size_t read_size;
// Minimum size of a program operation in bytes.
lfs_size_t prog_size;
// Size of an erase operation in bytes.
lfs_size_t erase_size;
// Number of erase blocks on the device.
lfs_size_t erase_count;
// 8-bit erase value to use for simulating erases. -1 does not simulate
// erases, which can speed up testing by avoiding the extra block-device
// operations to store the erase value.
int32_t erase_value;
// Number of erase cycles before a block becomes "bad". The exact behavior
// of bad blocks is controlled by badblock_behavior.
uint32_t erase_cycles;
// The mode determining how bad-blocks fail
lfs_emubd_badblock_behavior_t badblock_behavior;
// Number of write operations (erase/prog) before triggering a power-loss.
// power_cycles=0 disables this. The exact behavior of power-loss is
// controlled by a combination of powerloss_behavior and powerloss_cb.
lfs_emubd_powercycles_t power_cycles;
// The mode determining how power-loss affects disk
lfs_emubd_powerloss_behavior_t powerloss_behavior;
// Function to call to emulate power-loss. The exact behavior of power-loss
// is up to the runner to provide.
void (*powerloss_cb)(void*);
// Data for power-loss callback
void *powerloss_data;
// True to track when power-loss could have occurred. Note this involves
// heavy memory usage!
bool track_branches;
// Path to file to use as a mirror of the disk. This provides a way to view
// the current state of the block device.
const char *disk_path;
// Artificial delay in nanoseconds, there is no purpose for this other
// than slowing down the simulation.
lfs_emubd_sleep_t read_sleep;
// Artificial delay in nanoseconds, there is no purpose for this other
// than slowing down the simulation.
lfs_emubd_sleep_t prog_sleep;
// Artificial delay in nanoseconds, there is no purpose for this other
// than slowing down the simulation.
lfs_emubd_sleep_t erase_sleep;
};
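// A rough usage sketch (the geometry values are arbitrary, and
// powerloss_longjmp is a hypothetical callback a test runner might provide):
//
//     static void powerloss_longjmp(void *data) {
//         // a real runner would unwind here and remount the filesystem
//         (void)data;
//     }
//
//     const struct lfs_emubd_config bdcfg = {
//         .read_size          = 16,
//         .prog_size          = 16,
//         .erase_size         = 4096,
//         .erase_count        = 256,
//         .erase_value        = 0xff,  // simulate NOR-flash style erases
//         .erase_cycles       = 0,     // no bad-block simulation
//         .power_cycles       = 1000,  // lose power after 1000 progs/erases
//         .powerloss_behavior = LFS_EMUBD_POWERLOSS_OOO,
//         .powerloss_cb       = powerloss_longjmp,
//         .powerloss_data     = NULL,
//     };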
// A reference counted block
typedef struct lfs_emubd_block {
uint32_t rc;
lfs_emubd_wear_t wear;
uint8_t data[];
} lfs_emubd_block_t;
// Disk mirror
typedef struct lfs_emubd_disk {
uint32_t rc;
int fd;
uint8_t *scratch;
} lfs_emubd_disk_t;
// emubd state
typedef struct lfs_emubd {
// array of copy-on-write blocks
lfs_emubd_block_t **blocks;
// some other test state
lfs_emubd_io_t readed;
lfs_emubd_io_t proged;
lfs_emubd_io_t erased;
lfs_emubd_powercycles_t power_cycles;
lfs_ssize_t ooo_block;
lfs_emubd_block_t *ooo_data;
lfs_emubd_disk_t *disk;
const struct lfs_emubd_config *cfg;
} lfs_emubd_t;
/// Block device API ///
// Create an emulating block device using the geometry in lfs_config
int lfs_emubd_create(const struct lfs_config *cfg,
const struct lfs_emubd_config *bdcfg);
// Clean up memory associated with block device
int lfs_emubd_destroy(const struct lfs_config *cfg);
// Read a block
int lfs_emubd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size);
// Program a block
//
// The block must have previously been erased.
int lfs_emubd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size);
// Erase a block
//
// A block must be erased before being programmed. The
// state of an erased block is undefined.
int lfs_emubd_erase(const struct lfs_config *cfg, lfs_block_t block);
// Sync the block device
int lfs_emubd_sync(const struct lfs_config *cfg);
/// Additional extended API for driving test features ///
// A CRC of a block for debugging purposes
int lfs_emubd_crc(const struct lfs_config *cfg,
lfs_block_t block, uint32_t *crc);
// A CRC of the entire block device for debugging purposes
int lfs_emubd_bdcrc(const struct lfs_config *cfg, uint32_t *crc);
// Get total amount of bytes read
lfs_emubd_sio_t lfs_emubd_readed(const struct lfs_config *cfg);
// Get total amount of bytes programmed
lfs_emubd_sio_t lfs_emubd_proged(const struct lfs_config *cfg);
// Get total amount of bytes erased
lfs_emubd_sio_t lfs_emubd_erased(const struct lfs_config *cfg);
// Manually set amount of bytes read
int lfs_emubd_setreaded(const struct lfs_config *cfg, lfs_emubd_io_t readed);
// Manually set amount of bytes programmed
int lfs_emubd_setproged(const struct lfs_config *cfg, lfs_emubd_io_t proged);
// Manually set amount of bytes erased
int lfs_emubd_seterased(const struct lfs_config *cfg, lfs_emubd_io_t erased);
// Get simulated wear on a given block
lfs_emubd_swear_t lfs_emubd_wear(const struct lfs_config *cfg,
lfs_block_t block);
// Manually set simulated wear on a given block
int lfs_emubd_setwear(const struct lfs_config *cfg,
lfs_block_t block, lfs_emubd_wear_t wear);
// Get the remaining power-cycles
lfs_emubd_spowercycles_t lfs_emubd_powercycles(
const struct lfs_config *cfg);
// Manually set the remaining power-cycles
int lfs_emubd_setpowercycles(const struct lfs_config *cfg,
lfs_emubd_powercycles_t power_cycles);
// Create a copy-on-write copy of the state of this block device
int lfs_emubd_copy(const struct lfs_config *cfg, lfs_emubd_t *copy);
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif

167
bd/lfs_filebd.c Normal file

@@ -0,0 +1,167 @@
/*
* Block device emulated in a file
*
* Copyright (c) 2022, The littlefs authors.
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#include "bd/lfs_filebd.h"
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#ifdef _WIN32
#include <windows.h>
#endif
int lfs_filebd_create(const struct lfs_config *cfg, const char *path,
const struct lfs_filebd_config *bdcfg) {
LFS_FILEBD_TRACE("lfs_filebd_create(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p}, "
"\"%s\", "
"%p {.read_size=%"PRIu32", .prog_size=%"PRIu32", "
".erase_size=%"PRIu32", .erase_count=%"PRIu32"})",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
path,
(void*)bdcfg,
bdcfg->read_size, bdcfg->prog_size, bdcfg->erase_size,
bdcfg->erase_count);
lfs_filebd_t *bd = cfg->context;
bd->cfg = bdcfg;
// open file
#ifdef _WIN32
bd->fd = open(path, O_RDWR | O_CREAT | O_BINARY, 0666);
#else
bd->fd = open(path, O_RDWR | O_CREAT, 0666);
#endif
if (bd->fd < 0) {
int err = -errno;
LFS_FILEBD_TRACE("lfs_filebd_create -> %d", err);
return err;
}
LFS_FILEBD_TRACE("lfs_filebd_create -> %d", 0);
return 0;
}
int lfs_filebd_destroy(const struct lfs_config *cfg) {
LFS_FILEBD_TRACE("lfs_filebd_destroy(%p)", (void*)cfg);
lfs_filebd_t *bd = cfg->context;
int err = close(bd->fd);
if (err < 0) {
err = -errno;
LFS_FILEBD_TRACE("lfs_filebd_destroy -> %d", err);
return err;
}
LFS_FILEBD_TRACE("lfs_filebd_destroy -> %d", 0);
return 0;
}
int lfs_filebd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size) {
LFS_FILEBD_TRACE("lfs_filebd_read(%p, "
"0x%"PRIx32", %"PRIu32", %p, %"PRIu32")",
(void*)cfg, block, off, buffer, size);
lfs_filebd_t *bd = cfg->context;
// check if read is valid
LFS_ASSERT(block < bd->cfg->erase_count);
LFS_ASSERT(off % bd->cfg->read_size == 0);
LFS_ASSERT(size % bd->cfg->read_size == 0);
LFS_ASSERT(off+size <= bd->cfg->erase_size);
// zero for reproducibility (in case file is truncated)
memset(buffer, 0, size);
// read
off_t res1 = lseek(bd->fd,
(off_t)block*bd->cfg->erase_size + (off_t)off, SEEK_SET);
if (res1 < 0) {
int err = -errno;
LFS_FILEBD_TRACE("lfs_filebd_read -> %d", err);
return err;
}
ssize_t res2 = read(bd->fd, buffer, size);
if (res2 < 0) {
int err = -errno;
LFS_FILEBD_TRACE("lfs_filebd_read -> %d", err);
return err;
}
LFS_FILEBD_TRACE("lfs_filebd_read -> %d", 0);
return 0;
}
int lfs_filebd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size) {
LFS_FILEBD_TRACE("lfs_filebd_prog(%p, "
"0x%"PRIx32", %"PRIu32", %p, %"PRIu32")",
(void*)cfg, block, off, buffer, size);
lfs_filebd_t *bd = cfg->context;
// check if write is valid
LFS_ASSERT(block < bd->cfg->erase_count);
LFS_ASSERT(off % bd->cfg->prog_size == 0);
LFS_ASSERT(size % bd->cfg->prog_size == 0);
LFS_ASSERT(off+size <= bd->cfg->erase_size);
// program data
off_t res1 = lseek(bd->fd,
(off_t)block*bd->cfg->erase_size + (off_t)off, SEEK_SET);
if (res1 < 0) {
int err = -errno;
LFS_FILEBD_TRACE("lfs_filebd_prog -> %d", err);
return err;
}
ssize_t res2 = write(bd->fd, buffer, size);
if (res2 < 0) {
int err = -errno;
LFS_FILEBD_TRACE("lfs_filebd_prog -> %d", err);
return err;
}
LFS_FILEBD_TRACE("lfs_filebd_prog -> %d", 0);
return 0;
}
int lfs_filebd_erase(const struct lfs_config *cfg, lfs_block_t block) {
LFS_FILEBD_TRACE("lfs_filebd_erase(%p, 0x%"PRIx32" (%"PRIu32"))",
(void*)cfg, block, ((lfs_filebd_t*)cfg->context)->cfg->erase_size);
lfs_filebd_t *bd = cfg->context;
// check if erase is valid
LFS_ASSERT(block < bd->cfg->erase_count);
// erase is a noop
(void)block;
LFS_FILEBD_TRACE("lfs_filebd_erase -> %d", 0);
return 0;
}
int lfs_filebd_sync(const struct lfs_config *cfg) {
LFS_FILEBD_TRACE("lfs_filebd_sync(%p)", (void*)cfg);
// file sync
lfs_filebd_t *bd = cfg->context;
#ifdef _WIN32
int err = FlushFileBuffers((HANDLE) _get_osfhandle(bd->fd)) ? 0 : -1;
#else
int err = fsync(bd->fd);
#endif
if (err) {
err = -errno;
LFS_FILEBD_TRACE("lfs_filebd_sync -> %d", err);
return err;
}
LFS_FILEBD_TRACE("lfs_filebd_sync -> %d", 0);
return 0;
}

82
bd/lfs_filebd.h Normal file

@@ -0,0 +1,82 @@
/*
* Block device emulated in a file
*
* Copyright (c) 2022, The littlefs authors.
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef LFS_FILEBD_H
#define LFS_FILEBD_H
#include "lfs.h"
#include "lfs_util.h"
#ifdef __cplusplus
extern "C"
{
#endif
// Block device specific tracing
#ifndef LFS_FILEBD_TRACE
#ifdef LFS_FILEBD_YES_TRACE
#define LFS_FILEBD_TRACE(...) LFS_TRACE(__VA_ARGS__)
#else
#define LFS_FILEBD_TRACE(...)
#endif
#endif
// filebd config
struct lfs_filebd_config {
// Minimum size of a read operation in bytes.
lfs_size_t read_size;
// Minimum size of a program operation in bytes.
lfs_size_t prog_size;
// Size of an erase operation in bytes.
lfs_size_t erase_size;
// Number of erase blocks on the device.
lfs_size_t erase_count;
};
// filebd state
typedef struct lfs_filebd {
int fd;
const struct lfs_filebd_config *cfg;
} lfs_filebd_t;
// Create a file block device
int lfs_filebd_create(const struct lfs_config *cfg, const char *path,
const struct lfs_filebd_config *bdcfg);
// Clean up memory associated with block device
int lfs_filebd_destroy(const struct lfs_config *cfg);
// Read a block
int lfs_filebd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size);
// Program a block
//
// The block must have previously been erased.
int lfs_filebd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size);
// Erase a block
//
// A block must be erased before being programmed. The
// state of an erased block is undefined.
int lfs_filebd_erase(const struct lfs_config *cfg, lfs_block_t block);
// Sync the block device
int lfs_filebd_sync(const struct lfs_config *cfg);
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif

118
bd/lfs_rambd.c Normal file

@@ -0,0 +1,118 @@
/*
* Block device emulated in RAM
*
* Copyright (c) 2022, The littlefs authors.
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#include "bd/lfs_rambd.h"
int lfs_rambd_create(const struct lfs_config *cfg,
const struct lfs_rambd_config *bdcfg) {
LFS_RAMBD_TRACE("lfs_rambd_create(%p {.context=%p, "
".read=%p, .prog=%p, .erase=%p, .sync=%p}, "
"%p {.read_size=%"PRIu32", .prog_size=%"PRIu32", "
".erase_size=%"PRIu32", .erase_count=%"PRIu32", "
".buffer=%p})",
(void*)cfg, cfg->context,
(void*)(uintptr_t)cfg->read, (void*)(uintptr_t)cfg->prog,
(void*)(uintptr_t)cfg->erase, (void*)(uintptr_t)cfg->sync,
(void*)bdcfg,
bdcfg->read_size, bdcfg->prog_size, bdcfg->erase_size,
bdcfg->erase_count, bdcfg->buffer);
lfs_rambd_t *bd = cfg->context;
bd->cfg = bdcfg;
// allocate buffer?
if (bd->cfg->buffer) {
bd->buffer = bd->cfg->buffer;
} else {
bd->buffer = lfs_malloc(bd->cfg->erase_size * bd->cfg->erase_count);
if (!bd->buffer) {
LFS_RAMBD_TRACE("lfs_rambd_create -> %d", LFS_ERR_NOMEM);
return LFS_ERR_NOMEM;
}
}
// zero for reproducibility
memset(bd->buffer, 0, bd->cfg->erase_size * bd->cfg->erase_count);
LFS_RAMBD_TRACE("lfs_rambd_create -> %d", 0);
return 0;
}
int lfs_rambd_destroy(const struct lfs_config *cfg) {
LFS_RAMBD_TRACE("lfs_rambd_destroy(%p)", (void*)cfg);
// clean up memory
lfs_rambd_t *bd = cfg->context;
if (!bd->cfg->buffer) {
lfs_free(bd->buffer);
}
LFS_RAMBD_TRACE("lfs_rambd_destroy -> %d", 0);
return 0;
}
int lfs_rambd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size) {
LFS_RAMBD_TRACE("lfs_rambd_read(%p, "
"0x%"PRIx32", %"PRIu32", %p, %"PRIu32")",
(void*)cfg, block, off, buffer, size);
lfs_rambd_t *bd = cfg->context;
// check if read is valid
LFS_ASSERT(block < bd->cfg->erase_count);
LFS_ASSERT(off % bd->cfg->read_size == 0);
LFS_ASSERT(size % bd->cfg->read_size == 0);
LFS_ASSERT(off+size <= bd->cfg->erase_size);
// read data
memcpy(buffer, &bd->buffer[block*bd->cfg->erase_size + off], size);
LFS_RAMBD_TRACE("lfs_rambd_read -> %d", 0);
return 0;
}
int lfs_rambd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size) {
LFS_RAMBD_TRACE("lfs_rambd_prog(%p, "
"0x%"PRIx32", %"PRIu32", %p, %"PRIu32")",
(void*)cfg, block, off, buffer, size);
lfs_rambd_t *bd = cfg->context;
// check if write is valid
LFS_ASSERT(block < bd->cfg->erase_count);
LFS_ASSERT(off % bd->cfg->prog_size == 0);
LFS_ASSERT(size % bd->cfg->prog_size == 0);
LFS_ASSERT(off+size <= bd->cfg->erase_size);
// program data
memcpy(&bd->buffer[block*bd->cfg->erase_size + off], buffer, size);
LFS_RAMBD_TRACE("lfs_rambd_prog -> %d", 0);
return 0;
}
int lfs_rambd_erase(const struct lfs_config *cfg, lfs_block_t block) {
LFS_RAMBD_TRACE("lfs_rambd_erase(%p, 0x%"PRIx32" (%"PRIu32"))",
(void*)cfg, block, ((lfs_rambd_t*)cfg->context)->cfg->erase_size);
lfs_rambd_t *bd = cfg->context;
// check if erase is valid
LFS_ASSERT(block < bd->cfg->erase_count);
// erase is a noop
(void)block;
LFS_RAMBD_TRACE("lfs_rambd_erase -> %d", 0);
return 0;
}
int lfs_rambd_sync(const struct lfs_config *cfg) {
LFS_RAMBD_TRACE("lfs_rambd_sync(%p)", (void*)cfg);
// sync is a noop
(void)cfg;
LFS_RAMBD_TRACE("lfs_rambd_sync -> %d", 0);
return 0;
}

85
bd/lfs_rambd.h Normal file

@@ -0,0 +1,85 @@
/*
* Block device emulated in RAM
*
* Copyright (c) 2022, The littlefs authors.
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef LFS_RAMBD_H
#define LFS_RAMBD_H
#include "lfs.h"
#include "lfs_util.h"
#ifdef __cplusplus
extern "C"
{
#endif
// Block device specific tracing
#ifndef LFS_RAMBD_TRACE
#ifdef LFS_RAMBD_YES_TRACE
#define LFS_RAMBD_TRACE(...) LFS_TRACE(__VA_ARGS__)
#else
#define LFS_RAMBD_TRACE(...)
#endif
#endif
// rambd config
struct lfs_rambd_config {
// Minimum size of a read operation in bytes.
lfs_size_t read_size;
// Minimum size of a program operation in bytes.
lfs_size_t prog_size;
// Size of an erase operation in bytes.
lfs_size_t erase_size;
// Number of erase blocks on the device.
lfs_size_t erase_count;
// Optional statically allocated buffer for the block device.
void *buffer;
};
// rambd state
typedef struct lfs_rambd {
uint8_t *buffer;
const struct lfs_rambd_config *cfg;
} lfs_rambd_t;
// Create a RAM block device
int lfs_rambd_create(const struct lfs_config *cfg,
const struct lfs_rambd_config *bdcfg);
// Clean up memory associated with block device
int lfs_rambd_destroy(const struct lfs_config *cfg);
// Read a block
int lfs_rambd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size);
// Program a block
//
// The block must have previously been erased.
int lfs_rambd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size);
// Erase a block
//
// A block must be erased before being programmed. The
// state of an erased block is undefined.
int lfs_rambd_erase(const struct lfs_config *cfg, lfs_block_t block);
// Sync the block device
int lfs_rambd_sync(const struct lfs_config *cfg);
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif
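The declarations above are everything needed to back littlefs with RAM. A minimal, hedged sketch of wiring lfs_rambd into struct lfs_config and mounting; the geometry values (16-byte read/prog, 4096-byte blocks, 256 blocks) and the helper name rambd_example_mount are illustrative assumptions, not recommendations:

#include "lfs.h"
#include "bd/lfs_rambd.h"

static lfs_rambd_t rambd;

static const struct lfs_rambd_config rambd_cfg = {
    .read_size   = 16,
    .prog_size   = 16,
    .erase_size  = 4096,
    .erase_count = 256,
};

static const struct lfs_config cfg = {
    // block device driver, context must point at the rambd state
    .context = &rambd,
    .read    = lfs_rambd_read,
    .prog    = lfs_rambd_prog,
    .erase   = lfs_rambd_erase,
    .sync    = lfs_rambd_sync,

    // geometry, must match the rambd config above
    .read_size      = 16,
    .prog_size      = 16,
    .block_size     = 4096,
    .block_count    = 256,
    .cache_size     = 64,
    .lookahead_size = 16,
    .block_cycles   = 500,
};

int rambd_example_mount(lfs_t *lfs) {
    int err = lfs_rambd_create(&cfg, &rambd_cfg);
    if (err) {
        return err;
    }

    // RAM starts out unformatted, so format before the first mount
    err = lfs_format(lfs, &cfg);
    if (err) {
        return err;
    }
    return lfs_mount(lfs, &cfg);
}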

270
benches/bench_dir.toml Normal file

@@ -0,0 +1,270 @@
[cases.bench_dir_open]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
defines.N = 1024
defines.FILE_SIZE = 8
defines.CHUNK_SIZE = 8
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// first create the files
char name[256];
uint8_t buffer[CHUNK_SIZE];
for (lfs_size_t i = 0; i < N; i++) {
sprintf(name, "file%08x", i);
lfs_file_t file;
lfs_file_open(&lfs, &file, name,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
uint32_t file_prng = i;
for (lfs_size_t j = 0; j < FILE_SIZE; j += CHUNK_SIZE) {
for (lfs_size_t k = 0; k < CHUNK_SIZE; k++) {
buffer[k] = BENCH_PRNG(&file_prng);
}
lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
}
lfs_file_close(&lfs, &file) => 0;
}
// then read the files
BENCH_START();
uint32_t prng = 42;
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
sprintf(name, "file%08x", i_);
lfs_file_t file;
lfs_file_open(&lfs, &file, name, LFS_O_RDONLY) => 0;
uint32_t file_prng = i_;
for (lfs_size_t j = 0; j < FILE_SIZE; j += CHUNK_SIZE) {
lfs_file_read(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
for (lfs_size_t k = 0; k < CHUNK_SIZE; k++) {
assert(buffer[k] == BENCH_PRNG(&file_prng));
}
}
lfs_file_close(&lfs, &file) => 0;
}
BENCH_STOP();
lfs_unmount(&lfs) => 0;
'''
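A note on the "=>" shorthand used throughout these bench (and test) cases: it is not C, but notation the bench/test scripts expand into an equality assertion before compiling. Roughly, under that assumption:

// what `lfs_format(&lfs, cfg) => 0;` becomes, approximately
// (assert is available in the generated runner via <assert.h>)
assert(lfs_format(&lfs, cfg) == 0);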
[cases.bench_dir_creat]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
defines.N = 1024
defines.FILE_SIZE = 8
defines.CHUNK_SIZE = 8
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
BENCH_START();
uint32_t prng = 42;
char name[256];
uint8_t buffer[CHUNK_SIZE];
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
sprintf(name, "file%08x", i_);
lfs_file_t file;
lfs_file_open(&lfs, &file, name,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
uint32_t file_prng = i_;
for (lfs_size_t j = 0; j < FILE_SIZE; j += CHUNK_SIZE) {
for (lfs_size_t k = 0; k < CHUNK_SIZE; k++) {
buffer[k] = BENCH_PRNG(&file_prng);
}
lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
}
lfs_file_close(&lfs, &file) => 0;
}
BENCH_STOP();
lfs_unmount(&lfs) => 0;
'''
[cases.bench_dir_remove]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
defines.N = 1024
defines.FILE_SIZE = 8
defines.CHUNK_SIZE = 8
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// first create the files
char name[256];
uint8_t buffer[CHUNK_SIZE];
for (lfs_size_t i = 0; i < N; i++) {
sprintf(name, "file%08x", i);
lfs_file_t file;
lfs_file_open(&lfs, &file, name,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
uint32_t file_prng = i;
for (lfs_size_t j = 0; j < FILE_SIZE; j += CHUNK_SIZE) {
for (lfs_size_t k = 0; k < CHUNK_SIZE; k++) {
buffer[k] = BENCH_PRNG(&file_prng);
}
lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
}
lfs_file_close(&lfs, &file) => 0;
}
// then remove the files
BENCH_START();
uint32_t prng = 42;
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
sprintf(name, "file%08x", i_);
int err = lfs_remove(&lfs, name);
assert(!err || err == LFS_ERR_NOENT);
}
BENCH_STOP();
lfs_unmount(&lfs) => 0;
'''
[cases.bench_dir_read]
defines.N = 1024
defines.FILE_SIZE = 8
defines.CHUNK_SIZE = 8
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// first create the files
char name[256];
uint8_t buffer[CHUNK_SIZE];
for (lfs_size_t i = 0; i < N; i++) {
sprintf(name, "file%08x", i);
lfs_file_t file;
lfs_file_open(&lfs, &file, name,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
uint32_t file_prng = i;
for (lfs_size_t j = 0; j < FILE_SIZE; j += CHUNK_SIZE) {
for (lfs_size_t k = 0; k < CHUNK_SIZE; k++) {
buffer[k] = BENCH_PRNG(&file_prng);
}
lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
}
lfs_file_close(&lfs, &file) => 0;
}
// then read the directory
BENCH_START();
lfs_dir_t dir;
lfs_dir_open(&lfs, &dir, "/") => 0;
struct lfs_info info;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, ".") == 0);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_DIR);
assert(strcmp(info.name, "..") == 0);
for (int i = 0; i < N; i++) {
sprintf(name, "file%08x", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(info.type == LFS_TYPE_REG);
assert(strcmp(info.name, name) == 0);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
BENCH_STOP();
lfs_unmount(&lfs) => 0;
'''
[cases.bench_dir_mkdir]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
defines.N = 8
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
BENCH_START();
uint32_t prng = 42;
char name[256];
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
printf("hm %d\n", i);
sprintf(name, "dir%08x", i_);
int err = lfs_mkdir(&lfs, name);
assert(!err || err == LFS_ERR_EXIST);
}
BENCH_STOP();
lfs_unmount(&lfs) => 0;
'''
[cases.bench_dir_rmdir]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
defines.N = 8
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// first create the dirs
char name[256];
for (lfs_size_t i = 0; i < N; i++) {
sprintf(name, "dir%08x", i);
lfs_mkdir(&lfs, name) => 0;
}
// then remove the dirs
BENCH_START();
uint32_t prng = 42;
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
sprintf(name, "dir%08x", i_);
int err = lfs_remove(&lfs, name);
assert(!err || err == LFS_ERR_NOENT);
}
BENCH_STOP();
lfs_unmount(&lfs) => 0;
'''

95
benches/bench_file.toml Normal file

@@ -0,0 +1,95 @@
[cases.bench_file_read]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
defines.SIZE = '128*1024'
defines.CHUNK_SIZE = 64
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_size_t chunks = (SIZE+CHUNK_SIZE-1)/CHUNK_SIZE;
// first write the file
lfs_file_t file;
uint8_t buffer[CHUNK_SIZE];
lfs_file_open(&lfs, &file, "file",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
for (lfs_size_t i = 0; i < chunks; i++) {
uint32_t chunk_prng = i;
for (lfs_size_t j = 0; j < CHUNK_SIZE; j++) {
buffer[j] = BENCH_PRNG(&chunk_prng);
}
lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
}
lfs_file_close(&lfs, &file) => 0;
// then read the file
BENCH_START();
lfs_file_open(&lfs, &file, "file", LFS_O_RDONLY) => 0;
uint32_t prng = 42;
for (lfs_size_t i = 0; i < chunks; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (chunks-1-i)
: BENCH_PRNG(&prng) % chunks;
lfs_file_seek(&lfs, &file, i_*CHUNK_SIZE, LFS_SEEK_SET)
=> i_*CHUNK_SIZE;
lfs_file_read(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
uint32_t chunk_prng = i_;
for (lfs_size_t j = 0; j < CHUNK_SIZE; j++) {
assert(buffer[j] == BENCH_PRNG(&chunk_prng));
}
}
lfs_file_close(&lfs, &file) => 0;
BENCH_STOP();
lfs_unmount(&lfs) => 0;
'''
[cases.bench_file_write]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
defines.SIZE = '128*1024'
defines.CHUNK_SIZE = 64
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_size_t chunks = (SIZE+CHUNK_SIZE-1)/CHUNK_SIZE;
BENCH_START();
lfs_file_t file;
lfs_file_open(&lfs, &file, "file",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
uint8_t buffer[CHUNK_SIZE];
uint32_t prng = 42;
for (lfs_size_t i = 0; i < chunks; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (chunks-1-i)
: BENCH_PRNG(&prng) % chunks;
uint32_t chunk_prng = i_;
for (lfs_size_t j = 0; j < CHUNK_SIZE; j++) {
buffer[j] = BENCH_PRNG(&chunk_prng);
}
lfs_file_seek(&lfs, &file, i_*CHUNK_SIZE, LFS_SEEK_SET)
=> i_*CHUNK_SIZE;
lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
}
lfs_file_close(&lfs, &file) => 0;
BENCH_STOP();
lfs_unmount(&lfs) => 0;
'''


@@ -0,0 +1,56 @@
[cases.bench_superblocks_found]
# support benchmarking with files
defines.N = [0, 1024]
defines.FILE_SIZE = 8
defines.CHUNK_SIZE = 8
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// optionally create some files first (N may be 0)
lfs_mount(&lfs, cfg) => 0;
char name[256];
uint8_t buffer[CHUNK_SIZE];
for (lfs_size_t i = 0; i < N; i++) {
sprintf(name, "file%08x", i);
lfs_file_t file;
lfs_file_open(&lfs, &file, name,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
for (lfs_size_t j = 0; j < FILE_SIZE; j += CHUNK_SIZE) {
for (lfs_size_t k = 0; k < CHUNK_SIZE; k++) {
buffer[k] = i+j+k;
}
lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
BENCH_START();
lfs_mount(&lfs, cfg) => 0;
BENCH_STOP();
lfs_unmount(&lfs) => 0;
'''
[cases.bench_superblocks_missing]
code = '''
lfs_t lfs;
BENCH_START();
int err = lfs_mount(&lfs, cfg);
assert(err != 0);
BENCH_STOP();
'''
[cases.bench_superblocks_format]
code = '''
lfs_t lfs;
BENCH_START();
lfs_format(&lfs, cfg) => 0;
BENCH_STOP();
'''


@@ -1,324 +0,0 @@
/*
* Block device emulated on standard files
*
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#include "emubd/lfs_emubd.h"
#include <errno.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <limits.h>
#include <dirent.h>
#include <sys/stat.h>
#include <unistd.h>
#include <assert.h>
#include <stdbool.h>
#include <inttypes.h>
// Emulated block device utils
static inline void lfs_emubd_tole32(lfs_emubd_t *emu) {
emu->cfg.read_size = lfs_tole32(emu->cfg.read_size);
emu->cfg.prog_size = lfs_tole32(emu->cfg.prog_size);
emu->cfg.block_size = lfs_tole32(emu->cfg.block_size);
emu->cfg.block_count = lfs_tole32(emu->cfg.block_count);
emu->stats.read_count = lfs_tole32(emu->stats.read_count);
emu->stats.prog_count = lfs_tole32(emu->stats.prog_count);
emu->stats.erase_count = lfs_tole32(emu->stats.erase_count);
for (unsigned i = 0; i < sizeof(emu->history.blocks) /
sizeof(emu->history.blocks[0]); i++) {
emu->history.blocks[i] = lfs_tole32(emu->history.blocks[i]);
}
}
static inline void lfs_emubd_fromle32(lfs_emubd_t *emu) {
emu->cfg.read_size = lfs_fromle32(emu->cfg.read_size);
emu->cfg.prog_size = lfs_fromle32(emu->cfg.prog_size);
emu->cfg.block_size = lfs_fromle32(emu->cfg.block_size);
emu->cfg.block_count = lfs_fromle32(emu->cfg.block_count);
emu->stats.read_count = lfs_fromle32(emu->stats.read_count);
emu->stats.prog_count = lfs_fromle32(emu->stats.prog_count);
emu->stats.erase_count = lfs_fromle32(emu->stats.erase_count);
for (unsigned i = 0; i < sizeof(emu->history.blocks) /
sizeof(emu->history.blocks[0]); i++) {
emu->history.blocks[i] = lfs_fromle32(emu->history.blocks[i]);
}
}
// Block device emulated on existing filesystem
int lfs_emubd_create(const struct lfs_config *cfg, const char *path) {
lfs_emubd_t *emu = cfg->context;
emu->cfg.read_size = cfg->read_size;
emu->cfg.prog_size = cfg->prog_size;
emu->cfg.block_size = cfg->block_size;
emu->cfg.block_count = cfg->block_count;
// Allocate buffer for creating children files
size_t pathlen = strlen(path);
emu->path = malloc(pathlen + 1 + LFS_NAME_MAX + 1);
if (!emu->path) {
return -ENOMEM;
}
strcpy(emu->path, path);
emu->path[pathlen] = '/';
emu->child = &emu->path[pathlen+1];
memset(emu->child, '\0', LFS_NAME_MAX+1);
// Create directory if it doesn't exist
int err = mkdir(path, 0777);
if (err && errno != EEXIST) {
return -errno;
}
// Load stats to continue incrementing
snprintf(emu->child, LFS_NAME_MAX, ".stats");
FILE *f = fopen(emu->path, "r");
if (!f) {
memset(&emu->stats, 0, sizeof(emu->stats));
} else {
size_t res = fread(&emu->stats, sizeof(emu->stats), 1, f);
lfs_emubd_fromle32(emu);
if (res < 1) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
}
// Load history
snprintf(emu->child, LFS_NAME_MAX, ".history");
f = fopen(emu->path, "r");
if (!f) {
memset(&emu->history, 0, sizeof(emu->history));
} else {
size_t res = fread(&emu->history, sizeof(emu->history), 1, f);
lfs_emubd_fromle32(emu);
if (res < 1) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
}
return 0;
}
void lfs_emubd_destroy(const struct lfs_config *cfg) {
lfs_emubd_sync(cfg);
lfs_emubd_t *emu = cfg->context;
free(emu->path);
}
int lfs_emubd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size) {
lfs_emubd_t *emu = cfg->context;
uint8_t *data = buffer;
// Check if read is valid
assert(off % cfg->read_size == 0);
assert(size % cfg->read_size == 0);
assert(block < cfg->block_count);
// Zero out buffer for debugging
memset(data, 0, size);
// Read data
snprintf(emu->child, LFS_NAME_MAX, "%" PRIx32, block);
FILE *f = fopen(emu->path, "rb");
if (!f && errno != ENOENT) {
return -errno;
}
if (f) {
int err = fseek(f, off, SEEK_SET);
if (err) {
return -errno;
}
size_t res = fread(data, 1, size, f);
if (res < size && !feof(f)) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
}
emu->stats.read_count += 1;
return 0;
}
int lfs_emubd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size) {
lfs_emubd_t *emu = cfg->context;
const uint8_t *data = buffer;
// Check if write is valid
assert(off % cfg->prog_size == 0);
assert(size % cfg->prog_size == 0);
assert(block < cfg->block_count);
// Program data
snprintf(emu->child, LFS_NAME_MAX, "%" PRIx32, block);
FILE *f = fopen(emu->path, "r+b");
if (!f) {
return (errno == EACCES) ? 0 : -errno;
}
// Check that file was erased
assert(f);
int err = fseek(f, off, SEEK_SET);
if (err) {
return -errno;
}
size_t res = fwrite(data, 1, size, f);
if (res < size) {
return -errno;
}
err = fseek(f, off, SEEK_SET);
if (err) {
return -errno;
}
uint8_t dat;
res = fread(&dat, 1, 1, f);
if (res < 1) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
// update history and stats
if (block != emu->history.blocks[0]) {
memcpy(&emu->history.blocks[1], &emu->history.blocks[0],
sizeof(emu->history) - sizeof(emu->history.blocks[0]));
emu->history.blocks[0] = block;
}
emu->stats.prog_count += 1;
return 0;
}
int lfs_emubd_erase(const struct lfs_config *cfg, lfs_block_t block) {
lfs_emubd_t *emu = cfg->context;
// Check if erase is valid
assert(block < cfg->block_count);
// Erase the block
snprintf(emu->child, LFS_NAME_MAX, "%" PRIx32, block);
struct stat st;
int err = stat(emu->path, &st);
if (err && errno != ENOENT) {
return -errno;
}
if (!err && S_ISREG(st.st_mode) && (S_IWUSR & st.st_mode)) {
err = unlink(emu->path);
if (err) {
return -errno;
}
}
if (err || (S_ISREG(st.st_mode) && (S_IWUSR & st.st_mode))) {
FILE *f = fopen(emu->path, "w");
if (!f) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
}
emu->stats.erase_count += 1;
return 0;
}
int lfs_emubd_sync(const struct lfs_config *cfg) {
lfs_emubd_t *emu = cfg->context;
// Just write out info/stats for later lookup
snprintf(emu->child, LFS_NAME_MAX, ".config");
FILE *f = fopen(emu->path, "w");
if (!f) {
return -errno;
}
lfs_emubd_tole32(emu);
size_t res = fwrite(&emu->cfg, sizeof(emu->cfg), 1, f);
lfs_emubd_fromle32(emu);
if (res < 1) {
return -errno;
}
int err = fclose(f);
if (err) {
return -errno;
}
snprintf(emu->child, LFS_NAME_MAX, ".stats");
f = fopen(emu->path, "w");
if (!f) {
return -errno;
}
lfs_emubd_tole32(emu);
res = fwrite(&emu->stats, sizeof(emu->stats), 1, f);
lfs_emubd_fromle32(emu);
if (res < 1) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
snprintf(emu->child, LFS_NAME_MAX, ".history");
f = fopen(emu->path, "w");
if (!f) {
return -errno;
}
lfs_emubd_tole32(emu);
res = fwrite(&emu->history, sizeof(emu->history), 1, f);
lfs_emubd_fromle32(emu);
if (res < 1) {
return -errno;
}
err = fclose(f);
if (err) {
return -errno;
}
return 0;
}


@@ -1,91 +0,0 @@
/*
* Block device emulated on standard files
*
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef LFS_EMUBD_H
#define LFS_EMUBD_H
#include "lfs.h"
#include "lfs_util.h"
#ifdef __cplusplus
extern "C"
{
#endif
// Config options
#ifndef LFS_EMUBD_READ_SIZE
#define LFS_EMUBD_READ_SIZE 1
#endif
#ifndef LFS_EMUBD_PROG_SIZE
#define LFS_EMUBD_PROG_SIZE 1
#endif
#ifndef LFS_EMUBD_ERASE_SIZE
#define LFS_EMUBD_ERASE_SIZE 512
#endif
#ifndef LFS_EMUBD_TOTAL_SIZE
#define LFS_EMUBD_TOTAL_SIZE 524288
#endif
// The emu bd state
typedef struct lfs_emubd {
char *path;
char *child;
struct {
uint64_t read_count;
uint64_t prog_count;
uint64_t erase_count;
} stats;
struct {
lfs_block_t blocks[4];
} history;
struct {
uint32_t read_size;
uint32_t prog_size;
uint32_t block_size;
uint32_t block_count;
} cfg;
} lfs_emubd_t;
// Create a block device using path for the directory to store blocks
int lfs_emubd_create(const struct lfs_config *cfg, const char *path);
// Clean up memory associated with emu block device
void lfs_emubd_destroy(const struct lfs_config *cfg);
// Read a block
int lfs_emubd_read(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size);
// Program a block
//
// The block must have previously been erased.
int lfs_emubd_prog(const struct lfs_config *cfg, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size);
// Erase a block
//
// A block must be erased before being programmed. The
// state of an erased block is undefined.
int lfs_emubd_erase(const struct lfs_config *cfg, lfs_block_t block);
// Sync the block device
int lfs_emubd_sync(const struct lfs_config *cfg);
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif

3989
lfs.c

File diff suppressed because it is too large

241
lfs.h

@@ -1,14 +1,14 @@
/*
* The little filesystem
*
* Copyright (c) 2022, The littlefs authors.
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef LFS_H
#define LFS_H
#include <stdint.h>
#include <stdbool.h>
#include "lfs_util.h"
#ifdef __cplusplus
extern "C"
@@ -21,14 +21,14 @@ extern "C"
// Software library version
// Major (top-nibble), incremented on backwards incompatible changes
// Minor (bottom-nibble), incremented on feature additions
#define LFS_VERSION 0x00020000
#define LFS_VERSION 0x00020009
#define LFS_VERSION_MAJOR (0xffff & (LFS_VERSION >> 16))
#define LFS_VERSION_MINOR (0xffff & (LFS_VERSION >> 0))
// Version of On-disk data structures
// Major (top-nibble), incremented on backwards incompatible changes
// Minor (bottom-nibble), incremented on feature additions
#define LFS_DISK_VERSION 0x00020000
#define LFS_DISK_VERSION 0x00020001
#define LFS_DISK_VERSION_MAJOR (0xffff & (LFS_DISK_VERSION >> 16))
#define LFS_DISK_VERSION_MINOR (0xffff & (LFS_DISK_VERSION >> 0))
@@ -52,10 +52,8 @@ typedef uint32_t lfs_block_t;
#endif
// Maximum size of a file in bytes, may be redefined to limit to support other
// drivers. Limited on disk to <= 4294967296. However, above 2147483647 the
// functions lfs_file_seek, lfs_file_size, and lfs_file_tell will return
// incorrect values due to using signed integers. Stored in superblock and
// must be respected by other littlefs drivers.
// drivers. Limited on disk to <= 2147483647. Stored in superblock and must be
// respected by other littlefs drivers.
#ifndef LFS_FILE_MAX
#define LFS_FILE_MAX 2147483647
#endif
@@ -112,6 +110,8 @@ enum lfs_type {
LFS_TYPE_SOFTTAIL = 0x600,
LFS_TYPE_HARDTAIL = 0x601,
LFS_TYPE_MOVESTATE = 0x7ff,
LFS_TYPE_CCRC = 0x500,
LFS_TYPE_FCRC = 0x5ff,
// internal chip sources
LFS_FROM_NOOP = 0x000,
@@ -123,18 +123,24 @@ enum lfs_type {
enum lfs_open_flags {
// open flags
LFS_O_RDONLY = 1, // Open a file as read only
#ifndef LFS_READONLY
LFS_O_WRONLY = 2, // Open a file as write only
LFS_O_RDWR = 3, // Open a file as read and write
LFS_O_CREAT = 0x0100, // Create a file if it does not exist
LFS_O_EXCL = 0x0200, // Fail if a file already exists
LFS_O_TRUNC = 0x0400, // Truncate the existing file to zero size
LFS_O_APPEND = 0x0800, // Move to end of file on every write
#endif
// internally used flags
#ifndef LFS_READONLY
LFS_F_DIRTY = 0x010000, // File does not match storage
LFS_F_WRITING = 0x020000, // File has been written since last flush
#endif
LFS_F_READING = 0x040000, // File has been read since last flush
LFS_F_ERRED = 0x080000, // An error occured during write
#ifndef LFS_READONLY
LFS_F_ERRED = 0x080000, // An error occurred during write
#endif
LFS_F_INLINE = 0x100000, // Currently inlined in directory entry
};
@@ -152,61 +158,86 @@ struct lfs_config {
// information to the block device operations
void *context;
// Read a region in a block. Negative error codes are propogated
// Read a region in a block. Negative error codes are propagated
// to the user.
int (*read)(const struct lfs_config *c, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size);
// Program a region in a block. The block must have previously
// been erased. Negative error codes are propogated to the user.
// been erased. Negative error codes are propagated to the user.
// May return LFS_ERR_CORRUPT if the block should be considered bad.
int (*prog)(const struct lfs_config *c, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size);
// Erase a block. A block must be erased before being programmed.
// The state of an erased block is undefined. Negative error codes
// are propogated to the user.
// are propagated to the user.
// May return LFS_ERR_CORRUPT if the block should be considered bad.
int (*erase)(const struct lfs_config *c, lfs_block_t block);
// Sync the state of the underlying block device. Negative error codes
// are propogated to the user.
// are propagated to the user.
int (*sync)(const struct lfs_config *c);
// Minimum size of a block read. All read operations will be a
#ifdef LFS_THREADSAFE
// Lock the underlying block device. Negative error codes
// are propagated to the user.
int (*lock)(const struct lfs_config *c);
// Unlock the underlying block device. Negative error codes
// are propagated to the user.
int (*unlock)(const struct lfs_config *c);
#endif
// Minimum size of a block read in bytes. All read operations will be a
// multiple of this value.
lfs_size_t read_size;
// Minimum size of a block program. All program operations will be a
// multiple of this value.
// Minimum size of a block program in bytes. All program operations will be
// a multiple of this value.
lfs_size_t prog_size;
// Size of an erasable block. This does not impact ram consumption and
// may be larger than the physical erase size. However, non-inlined files
// take up at minimum one block. Must be a multiple of the read
// and program sizes.
// Size of an erasable block in bytes. This does not impact ram consumption
// and may be larger than the physical erase size. However, non-inlined
// files take up at minimum one block. Must be a multiple of the read and
// program sizes.
lfs_size_t block_size;
// Number of erasable blocks on the device.
lfs_size_t block_count;
// Number of erase cycles before we should move data to another block.
// May be zero, in which case no block-level wear-leveling is performed.
uint32_t block_cycles;
// Number of erase cycles before littlefs evicts metadata logs and moves
// the metadata to another block. Suggested values are in the
// range 100-1000, with large values having better performance at the cost
// of less consistent wear distribution.
//
// Set to -1 to disable block-level wear-leveling.
int32_t block_cycles;
// Size of block caches. Each cache buffers a portion of a block in RAM.
// The littlefs needs a read cache, a program cache, and one additional
// Size of block caches in bytes. Each cache buffers a portion of a block in
// RAM. The littlefs needs a read cache, a program cache, and one additional
// cache per file. Larger caches can improve performance by storing more
// data and reducing the number of disk accesses. Must be a multiple of
// the read and program sizes, and a factor of the block size.
// data and reducing the number of disk accesses. Must be a multiple of the
// read and program sizes, and a factor of the block size.
lfs_size_t cache_size;
// Size of the lookahead buffer in bytes. A larger lookahead buffer
// increases the number of blocks found during an allocation pass. The
// lookahead buffer is stored as a compact bitmap, so each byte of RAM
// can track 8 blocks. Must be a multiple of 4.
// can track 8 blocks.
lfs_size_t lookahead_size;
// Threshold for metadata compaction during lfs_fs_gc in bytes. Metadata
// pairs that exceed this threshold will be compacted during lfs_fs_gc.
// Defaults to ~88% block_size when zero, though the default may change
// in the future.
//
// Note this only affects lfs_fs_gc. Normal compactions still only occur
// when full.
//
// Set to -1 to disable metadata compaction during lfs_fs_gc.
lfs_size_t compact_thresh;
// Optional statically allocated read buffer. Must be cache_size.
// By default lfs_malloc is used to allocate this buffer.
void *read_buffer;
@@ -215,9 +246,8 @@ struct lfs_config {
// By default lfs_malloc is used to allocate this buffer.
void *prog_buffer;
// Optional statically allocated lookahead buffer. Must be lookahead_size
// and aligned to a 64-bit boundary. By default lfs_malloc is used to
// allocate this buffer.
// Optional statically allocated lookahead buffer. Must be lookahead_size.
// By default lfs_malloc is used to allocate this buffer.
void *lookahead_buffer;
// Optional upper limit on length of file names in bytes. No downside for
@@ -235,6 +265,29 @@ struct lfs_config {
// larger attributes size but must be <= LFS_ATTR_MAX. Defaults to
// LFS_ATTR_MAX when zero.
lfs_size_t attr_max;
// Optional upper limit on total space given to metadata pairs in bytes. On
// devices with large blocks (e.g. 128kB) setting this to a low size (2-8kB)
// can help bound the metadata compaction time. Must be <= block_size.
// Defaults to block_size when zero.
lfs_size_t metadata_max;
// Optional upper limit on inlined files in bytes. Inlined files live in
// metadata and decrease storage requirements, but may be limited to
// improve metadata-related performance. Must be <= cache_size, <=
// attr_max, and <= block_size/8. Defaults to the largest possible
// inline_max when zero.
//
// Set to -1 to disable inlined files.
lfs_size_t inline_max;
#ifdef LFS_MULTIVERSION
// On-disk version to use when writing in the form of 16-bit major version
// + 16-bit minor version. This limits metadata to what is supported by
// older minor versions. Note that some features will be lost. Defaults to
// the most recent minor version when zero.
uint32_t disk_version;
#endif
};
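A hedged sketch of the newer tuning fields above (compact_thresh, inline_max, metadata_max). The helper tune_config and the specific values are illustrative assumptions, not recommendations; the rest of the config (block device callbacks, geometry, caches) is assumed to be filled in as in the rambd example earlier.

#include "lfs.h"

static void tune_config(struct lfs_config *cfg) {
    // let lfs_fs_gc compact metadata pairs once they pass half a block
    // (0 keeps the ~88% block_size default, -1 disables gc compaction)
    cfg->compact_thresh = cfg->block_size / 2;

    // cap inlined files at 64 bytes; must be <= cache_size, <= attr_max,
    // and <= block_size/8 (0 = largest possible, -1 = disable inlining)
    cfg->inline_max = 64;

    // bound metadata pairs to 2KiB to limit compaction time on devices
    // with large blocks (0 = defaults to block_size)
    cfg->metadata_max = 2048;
}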
// File info structure
@@ -252,6 +305,27 @@ struct lfs_info {
char name[LFS_NAME_MAX+1];
};
// Filesystem info structure
struct lfs_fsinfo {
// On-disk version.
uint32_t disk_version;
// Size of a logical block in bytes.
lfs_size_t block_size;
// Number of logical blocks in filesystem.
lfs_size_t block_count;
// Upper limit on the length of file names in bytes.
lfs_size_t name_max;
// Upper limit on the size of files in bytes.
lfs_size_t file_max;
// Upper limit on the size of custom attributes in bytes.
lfs_size_t attr_max;
};
// Custom attribute structure, used to describe custom attributes
// committed atomically during file writes.
struct lfs_attr {
@@ -350,6 +424,11 @@ typedef struct lfs_superblock {
lfs_size_t attr_max;
} lfs_superblock_t;
typedef struct lfs_gstate {
uint32_t tag;
lfs_block_t pair[2];
} lfs_gstate_t;
// The littlefs filesystem type
typedef struct lfs {
lfs_cache_t rcache;
@@ -364,23 +443,24 @@ typedef struct lfs {
} *mlist;
uint32_t seed;
struct lfs_gstate {
uint32_t tag;
lfs_block_t pair[2];
} gstate, gpending, gdelta;
lfs_gstate_t gstate;
lfs_gstate_t gdisk;
lfs_gstate_t gdelta;
struct lfs_free {
lfs_block_t off;
struct lfs_lookahead {
lfs_block_t start;
lfs_block_t size;
lfs_block_t i;
lfs_block_t ack;
uint32_t *buffer;
} free;
lfs_block_t next;
lfs_block_t ckpoint;
uint8_t *buffer;
} lookahead;
const struct lfs_config *cfg;
lfs_size_t block_count;
lfs_size_t name_max;
lfs_size_t file_max;
lfs_size_t attr_max;
lfs_size_t inline_max;
#ifdef LFS_MIGRATE
struct lfs1 *lfs1;
@@ -390,6 +470,7 @@ typedef struct lfs {
/// Filesystem functions ///
#ifndef LFS_READONLY
// Format a block device with the littlefs
//
// Requires a littlefs object and config struct. This clobbers the littlefs
@@ -398,6 +479,7 @@ typedef struct lfs {
//
// Returns a negative error code on failure.
int lfs_format(lfs_t *lfs, const struct lfs_config *config);
#endif
// Mounts a littlefs
//
@@ -417,12 +499,15 @@ int lfs_unmount(lfs_t *lfs);
/// General operations ///
#ifndef LFS_READONLY
// Removes a file or directory
//
// If removing a directory, the directory must be empty.
// Returns a negative error code on failure.
int lfs_remove(lfs_t *lfs, const char *path);
#endif
#ifndef LFS_READONLY
// Rename or move a file or directory
//
// If the destination exists, it must match the source in type.
@@ -430,6 +515,7 @@ int lfs_remove(lfs_t *lfs, const char *path);
//
// Returns a negative error code on failure.
int lfs_rename(lfs_t *lfs, const char *oldpath, const char *newpath);
#endif
// Find info about a file or directory
//
@@ -448,10 +534,11 @@ int lfs_stat(lfs_t *lfs, const char *path, struct lfs_info *info);
// Returns the size of the attribute, or a negative error code on failure.
// Note, the returned size is the size of the attribute on disk, irrespective
// of the size of the buffer. This can be used to dynamically allocate a buffer
// or check for existance.
// or check for existence.
lfs_ssize_t lfs_getattr(lfs_t *lfs, const char *path,
uint8_t type, void *buffer, lfs_size_t size);
#ifndef LFS_READONLY
// Set custom attributes
//
// Custom attributes are uniquely identified by an 8-bit type and limited
@@ -461,17 +548,21 @@ lfs_ssize_t lfs_getattr(lfs_t *lfs, const char *path,
// Returns a negative error code on failure.
int lfs_setattr(lfs_t *lfs, const char *path,
uint8_t type, const void *buffer, lfs_size_t size);
#endif
#ifndef LFS_READONLY
// Removes a custom attribute
//
// If an attribute is not found, nothing happens.
//
// Returns a negative error code on failure.
int lfs_removeattr(lfs_t *lfs, const char *path, uint8_t type);
#endif
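A hedged sketch of the custom-attribute calls above: persist a small boot counter on the root directory. The attribute type 0x74 and the helper name bump_boot_count are arbitrary choices for illustration.

#include "lfs.h"

int bump_boot_count(lfs_t *lfs) {
    uint32_t boot_count = 0;

    // a missing attribute is fine on first boot, any other negative
    // result is a real error
    lfs_ssize_t res = lfs_getattr(lfs, "/", 0x74,
            &boot_count, sizeof(boot_count));
    if (res < 0 && res != LFS_ERR_NOATTR) {
        return (int)res;
    }

    boot_count += 1;
    return lfs_setattr(lfs, "/", 0x74, &boot_count, sizeof(boot_count));
}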
/// File operations ///
#ifndef LFS_NO_MALLOC
// Open a file
//
// The mode that the file is opened in is determined by the flags, which
@@ -481,14 +572,18 @@ int lfs_removeattr(lfs_t *lfs, const char *path, uint8_t type);
int lfs_file_open(lfs_t *lfs, lfs_file_t *file,
const char *path, int flags);
// If LFS_NO_MALLOC is defined, lfs_file_open() will fail with LFS_ERR_NOMEM;
// use lfs_file_opencfg() with config.buffer set instead.
#endif
// Open a file with extra configuration
//
// The mode that the file is opened in is determined by the flags, which
// are values from the enum lfs_open_flags that are bitwise-ored together.
//
// The config struct provides additional config options per file as described
// above. The config struct must be allocated while the file is open, and the
// config struct must be zeroed for defaults and backwards compatibility.
// above. The config struct must remain allocated while the file is open, and
// the config struct must be zeroed for defaults and backwards compatibility.
//
// Returns a negative error code on failure.
int lfs_file_opencfg(lfs_t *lfs, lfs_file_t *file,
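A hedged sketch of lfs_file_opencfg with a statically allocated per-file cache, the usual pattern when LFS_NO_MALLOC is defined. The 64-byte buffer assumes cache_size is 64 as in the earlier example config, and open_log is a hypothetical helper; the remaining lfs_file_config fields are left zeroed for defaults.

#include "lfs.h"

static uint8_t file_cache[64];  // must be exactly cfg.cache_size bytes
static const struct lfs_file_config file_cfg = {
    .buffer = file_cache,
    // attrs/attr_count left zeroed for defaults
};

int open_log(lfs_t *lfs, lfs_file_t *file) {
    // file_cfg must stay allocated as long as the file is open,
    // which a static easily satisfies
    return lfs_file_opencfg(lfs, file, "log.txt",
            LFS_O_RDWR | LFS_O_CREAT | LFS_O_APPEND, &file_cfg);
}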
@@ -516,6 +611,7 @@ int lfs_file_sync(lfs_t *lfs, lfs_file_t *file);
lfs_ssize_t lfs_file_read(lfs_t *lfs, lfs_file_t *file,
void *buffer, lfs_size_t size);
#ifndef LFS_READONLY
// Write data to file
//
// Takes a buffer and size indicating the data to write. The file will not
@@ -524,18 +620,21 @@ lfs_ssize_t lfs_file_read(lfs_t *lfs, lfs_file_t *file,
// Returns the number of bytes written, or a negative error code on failure.
lfs_ssize_t lfs_file_write(lfs_t *lfs, lfs_file_t *file,
const void *buffer, lfs_size_t size);
#endif
// Change the position of the file
//
// The change in position is determined by the offset and whence flag.
// Returns the old position of the file, or a negative error code on failure.
// Returns the new position of the file, or a negative error code on failure.
lfs_soff_t lfs_file_seek(lfs_t *lfs, lfs_file_t *file,
lfs_soff_t off, int whence);
#ifndef LFS_READONLY
// Truncates the size of the file to the specified size
//
// Returns a negative error code on failure.
int lfs_file_truncate(lfs_t *lfs, lfs_file_t *file, lfs_off_t size);
#endif
// Return the position of the file
//
@@ -545,7 +644,7 @@ lfs_soff_t lfs_file_tell(lfs_t *lfs, lfs_file_t *file);
// Change the position of the file to the beginning of the file
//
// Equivalent to lfs_file_seek(lfs, file, 0, LFS_SEEK_CUR)
// Equivalent to lfs_file_seek(lfs, file, 0, LFS_SEEK_SET)
// Returns a negative error code on failure.
int lfs_file_rewind(lfs_t *lfs, lfs_file_t *file);
@@ -558,10 +657,12 @@ lfs_soff_t lfs_file_size(lfs_t *lfs, lfs_file_t *file);
/// Directory operations ///
#ifndef LFS_READONLY
// Create a directory
//
// Returns a negative error code on failure.
int lfs_mkdir(lfs_t *lfs, const char *path);
#endif
// Open a directory
//
@@ -606,6 +707,12 @@ int lfs_dir_rewind(lfs_t *lfs, lfs_dir_t *dir);
/// Filesystem-level filesystem operations
// Find on-disk info about the filesystem
//
// Fills out the fsinfo structure based on the filesystem found on-disk.
// Returns a negative error code on failure.
int lfs_fs_stat(lfs_t *lfs, struct lfs_fsinfo *fsinfo);
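A hedged sketch of lfs_fs_stat, printing a few fields of the lfs_fsinfo structure defined above; print_fsinfo is a hypothetical helper.

#include <stdio.h>
#include "lfs.h"

int print_fsinfo(lfs_t *lfs) {
    struct lfs_fsinfo fsinfo;
    int err = lfs_fs_stat(lfs, &fsinfo);
    if (err) {
        return err;
    }

    printf("disk version: %u.%u\n",
            (unsigned)(0xffff & (fsinfo.disk_version >> 16)),
            (unsigned)(0xffff & (fsinfo.disk_version >> 0)));
    printf("geometry: %u blocks x %u bytes\n",
            (unsigned)fsinfo.block_count, (unsigned)fsinfo.block_size);
    printf("limits: name %u, file %u, attr %u\n",
            (unsigned)fsinfo.name_max, (unsigned)fsinfo.file_max,
            (unsigned)fsinfo.attr_max);
    return 0;
}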
// Finds the current size of the filesystem
//
// Note: Result is best effort. If files share COW structures, the returned
@@ -623,6 +730,47 @@ lfs_ssize_t lfs_fs_size(lfs_t *lfs);
// Returns a negative error code on failure.
int lfs_fs_traverse(lfs_t *lfs, int (*cb)(void*, lfs_block_t), void *data);
#ifndef LFS_READONLY
// Attempt to make the filesystem consistent and ready for writing
//
// Calling this function is not required, consistency will be implicitly
// enforced on the first operation that writes to the filesystem, but this
// function allows the work to be performed earlier and without other
// filesystem changes.
//
// Returns a negative error code on failure.
int lfs_fs_mkconsistent(lfs_t *lfs);
#endif
#ifndef LFS_READONLY
// Attempt any janitorial work
//
// This currently:
// 1. Calls mkconsistent if not already consistent
// 2. Compacts metadata > compact_thresh
// 3. Populates the block allocator
//
// Though additional janitorial work may be added in the future.
//
// Calling this function is not required, but may allow the offloading of
// expensive janitorial work to a less time-critical code path.
//
// Returns a negative error code on failure. Accomplishing nothing is not
// an error.
int lfs_fs_gc(lfs_t *lfs);
#endif
#ifndef LFS_READONLY
// Grows the filesystem to a new size, updating the superblock with the new
// block count.
//
// Note: This is irreversible.
//
// Returns a negative error code on failure.
int lfs_fs_grow(lfs_t *lfs, lfs_size_t block_count);
#endif
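A hedged sketch of the optional maintenance hooks above: lfs_fs_gc from an idle task, and lfs_fs_grow after the underlying storage has been enlarged. The helper names and the new_block_count parameter are illustrative assumptions.

#include "lfs.h"

// cheap to call periodically; this also makes the filesystem consistent
// and populates the block allocator, and doing nothing useful is not
// an error
int idle_housekeeping(lfs_t *lfs) {
    return lfs_fs_gc(lfs);
}

// irreversible: the superblock is rewritten with the new block count
int expand_storage(lfs_t *lfs, lfs_size_t new_block_count) {
    return lfs_fs_grow(lfs, new_block_count);
}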
#ifndef LFS_READONLY
#ifdef LFS_MIGRATE
// Attempts to migrate a previous version of littlefs
//
@@ -637,6 +785,7 @@ int lfs_fs_traverse(lfs_t *lfs, int (*cb)(void*, lfs_block_t), void *data);
// Returns a negative error code on failure.
int lfs_migrate(lfs_t *lfs, const struct lfs_config *cfg);
#endif
#endif
#ifdef __cplusplus


@@ -1,6 +1,7 @@
/*
* lfs util functions
*
* Copyright (c) 2022, The littlefs authors.
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -10,6 +11,8 @@
#ifndef LFS_CONFIG
// If user provides their own CRC impl we don't need this
#ifndef LFS_CRC
// Software CRC implementation with small lookup table
uint32_t lfs_crc(uint32_t crc, const void *buffer, size_t size) {
static const uint32_t rtable[16] = {
@@ -28,6 +31,7 @@ uint32_t lfs_crc(uint32_t crc, const void *buffer, size_t size) {
return crc;
}
#endif
#endif


@@ -1,6 +1,7 @@
/*
* lfs utility functions
*
* Copyright (c) 2022, The littlefs authors.
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
@@ -31,7 +32,10 @@
#ifndef LFS_NO_ASSERT
#include <assert.h>
#endif
#if !defined(LFS_NO_DEBUG) || !defined(LFS_NO_WARN) || !defined(LFS_NO_ERROR)
#if !defined(LFS_NO_DEBUG) || \
!defined(LFS_NO_WARN) || \
!defined(LFS_NO_ERROR) || \
defined(LFS_YES_TRACE)
#include <stdio.h>
#endif
@@ -46,33 +50,54 @@ extern "C"
// code footprint
// Logging functions
#ifndef LFS_TRACE
#ifdef LFS_YES_TRACE
#define LFS_TRACE_(fmt, ...) \
printf("%s:%d:trace: " fmt "%s\n", __FILE__, __LINE__, __VA_ARGS__)
#define LFS_TRACE(...) LFS_TRACE_(__VA_ARGS__, "")
#else
#define LFS_TRACE(...)
#endif
#endif
#ifndef LFS_DEBUG
#ifndef LFS_NO_DEBUG
#define LFS_DEBUG(fmt, ...) \
printf("lfs debug:%d: " fmt "\n", __LINE__, __VA_ARGS__)
#define LFS_DEBUG_(fmt, ...) \
printf("%s:%d:debug: " fmt "%s\n", __FILE__, __LINE__, __VA_ARGS__)
#define LFS_DEBUG(...) LFS_DEBUG_(__VA_ARGS__, "")
#else
#define LFS_DEBUG(fmt, ...)
#define LFS_DEBUG(...)
#endif
#endif
#ifndef LFS_WARN
#ifndef LFS_NO_WARN
#define LFS_WARN(fmt, ...) \
printf("lfs warn:%d: " fmt "\n", __LINE__, __VA_ARGS__)
#define LFS_WARN_(fmt, ...) \
printf("%s:%d:warn: " fmt "%s\n", __FILE__, __LINE__, __VA_ARGS__)
#define LFS_WARN(...) LFS_WARN_(__VA_ARGS__, "")
#else
#define LFS_WARN(fmt, ...)
#define LFS_WARN(...)
#endif
#endif
#ifndef LFS_ERROR
#ifndef LFS_NO_ERROR
#define LFS_ERROR(fmt, ...) \
printf("lfs error:%d: " fmt "\n", __LINE__, __VA_ARGS__)
#define LFS_ERROR_(fmt, ...) \
printf("%s:%d:error: " fmt "%s\n", __FILE__, __LINE__, __VA_ARGS__)
#define LFS_ERROR(...) LFS_ERROR_(__VA_ARGS__, "")
#else
#define LFS_ERROR(fmt, ...)
#define LFS_ERROR(...)
#endif
#endif
// Runtime assertions
#ifndef LFS_ASSERT
#ifndef LFS_NO_ASSERT
#define LFS_ASSERT(test) assert(test)
#else
#define LFS_ASSERT(test)
#endif
#endif
// Builtin functions, these may be replaced by more efficient
@@ -97,7 +122,7 @@ static inline uint32_t lfs_alignup(uint32_t a, uint32_t alignment) {
return lfs_aligndown(a + alignment-1, alignment);
}
// Find the next smallest power of 2 less than or equal to a
// Find the smallest power of 2 greater than or equal to a
static inline uint32_t lfs_npw2(uint32_t a) {
#if !defined(LFS_NO_INTRINSICS) && (defined(__GNUC__) || defined(__CC_ARM))
return 32 - __builtin_clz(a-1);
@@ -142,15 +167,14 @@ static inline int lfs_scmp(uint32_t a, uint32_t b) {
// Convert between 32-bit little-endian and native order
static inline uint32_t lfs_fromle32(uint32_t a) {
#if !defined(LFS_NO_INTRINSICS) && ( \
(defined( BYTE_ORDER ) && BYTE_ORDER == ORDER_LITTLE_ENDIAN ) || \
(defined(__BYTE_ORDER ) && __BYTE_ORDER == __ORDER_LITTLE_ENDIAN ) || \
(defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__))
#if (defined( BYTE_ORDER ) && defined( ORDER_LITTLE_ENDIAN ) && BYTE_ORDER == ORDER_LITTLE_ENDIAN ) || \
(defined(__BYTE_ORDER ) && defined(__ORDER_LITTLE_ENDIAN ) && __BYTE_ORDER == __ORDER_LITTLE_ENDIAN ) || \
(defined(__BYTE_ORDER__) && defined(__ORDER_LITTLE_ENDIAN__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
return a;
#elif !defined(LFS_NO_INTRINSICS) && ( \
(defined( BYTE_ORDER ) && BYTE_ORDER == ORDER_BIG_ENDIAN ) || \
(defined(__BYTE_ORDER ) && __BYTE_ORDER == __ORDER_BIG_ENDIAN ) || \
(defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__))
(defined( BYTE_ORDER ) && defined( ORDER_BIG_ENDIAN ) && BYTE_ORDER == ORDER_BIG_ENDIAN ) || \
(defined(__BYTE_ORDER ) && defined(__ORDER_BIG_ENDIAN ) && __BYTE_ORDER == __ORDER_BIG_ENDIAN ) || \
(defined(__BYTE_ORDER__) && defined(__ORDER_BIG_ENDIAN__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__))
return __builtin_bswap32(a);
#else
return (((uint8_t*)&a)[0] << 0) |
@@ -167,14 +191,13 @@ static inline uint32_t lfs_tole32(uint32_t a) {
// Convert between 32-bit big-endian and native order
static inline uint32_t lfs_frombe32(uint32_t a) {
#if !defined(LFS_NO_INTRINSICS) && ( \
(defined( BYTE_ORDER ) && BYTE_ORDER == ORDER_LITTLE_ENDIAN ) || \
(defined(__BYTE_ORDER ) && __BYTE_ORDER == __ORDER_LITTLE_ENDIAN ) || \
(defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__))
(defined( BYTE_ORDER ) && defined( ORDER_LITTLE_ENDIAN ) && BYTE_ORDER == ORDER_LITTLE_ENDIAN ) || \
(defined(__BYTE_ORDER ) && defined(__ORDER_LITTLE_ENDIAN ) && __BYTE_ORDER == __ORDER_LITTLE_ENDIAN ) || \
(defined(__BYTE_ORDER__) && defined(__ORDER_LITTLE_ENDIAN__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__))
return __builtin_bswap32(a);
#elif !defined(LFS_NO_INTRINSICS) && ( \
(defined( BYTE_ORDER ) && BYTE_ORDER == ORDER_BIG_ENDIAN ) || \
(defined(__BYTE_ORDER ) && __BYTE_ORDER == __ORDER_BIG_ENDIAN ) || \
(defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__))
#elif (defined( BYTE_ORDER ) && defined( ORDER_BIG_ENDIAN ) && BYTE_ORDER == ORDER_BIG_ENDIAN ) || \
(defined(__BYTE_ORDER ) && defined(__ORDER_BIG_ENDIAN ) && __BYTE_ORDER == __ORDER_BIG_ENDIAN ) || \
(defined(__BYTE_ORDER__) && defined(__ORDER_BIG_ENDIAN__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
return a;
#else
return (((uint8_t*)&a)[0] << 24) |
@@ -189,12 +212,22 @@ static inline uint32_t lfs_tobe32(uint32_t a) {
}
// Calculate CRC-32 with polynomial = 0x04c11db7
#ifdef LFS_CRC
uint32_t lfs_crc(uint32_t crc, const void *buffer, size_t size) {
return LFS_CRC(crc, buffer, size);
}
#else
uint32_t lfs_crc(uint32_t crc, const void *buffer, size_t size);
#endif
// Allocate memory, only used if buffers are not provided to littlefs
// Note, memory must be 64-bit aligned
//
// littlefs currently has no alignment requirements, as it only allocates
// byte-level buffers.
static inline void *lfs_malloc(size_t size) {
#ifndef LFS_NO_MALLOC
#if defined(LFS_MALLOC)
return LFS_MALLOC(size);
#elif !defined(LFS_NO_MALLOC)
return malloc(size);
#else
(void)size;
@@ -204,7 +237,9 @@ static inline void *lfs_malloc(size_t size) {
// Deallocate memory, only used if buffers are not provided to littlefs
static inline void lfs_free(void *p) {
#ifndef LFS_NO_MALLOC
#if defined(LFS_FREE)
LFS_FREE(p);
#elif !defined(LFS_NO_MALLOC)
free(p);
#else
(void)p;
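The LFS_CRC, LFS_MALLOC, and LFS_FREE hooks above let a port swap in its own primitives at compile time. A hedged sketch, where hw_crc32, pool_alloc, and pool_free are hypothetical platform functions whose prototypes would be made visible to lfs_util.h via LFS_CONFIG or a wrapper header:

// passed on the compiler command line, for example:
//   -DLFS_CRC=hw_crc32 -DLFS_MALLOC=pool_alloc -DLFS_FREE=pool_free

#include <stddef.h>
#include <stdint.h>

// prototypes must match how lfs_util.h invokes the macros
uint32_t hw_crc32(uint32_t crc, const void *buffer, size_t size);
void *pool_alloc(size_t size);
void pool_free(void *p);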

2057
runners/bench_runner.c Normal file

File diff suppressed because it is too large

143
runners/bench_runner.h Normal file

@@ -0,0 +1,143 @@
/*
* Runner for littlefs benchmarks
*
* Copyright (c) 2022, The littlefs authors.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef BENCH_RUNNER_H
#define BENCH_RUNNER_H
// override LFS_TRACE
void bench_trace(const char *fmt, ...);
#define LFS_TRACE_(fmt, ...) \
bench_trace("%s:%d:trace: " fmt "%s\n", \
__FILE__, \
__LINE__, \
__VA_ARGS__)
#define LFS_TRACE(...) LFS_TRACE_(__VA_ARGS__, "")
#define LFS_EMUBD_TRACE(...) LFS_TRACE_(__VA_ARGS__, "")
// provide BENCH_START/BENCH_STOP macros
void bench_start(void);
void bench_stop(void);
#define BENCH_START() bench_start()
#define BENCH_STOP() bench_stop()
// note these are indirectly included in any generated files
#include "bd/lfs_emubd.h"
#include <stdio.h>
// give source a chance to define feature macros
#undef _FEATURES_H
#undef _STDIO_H
// generated bench configurations
struct lfs_config;
enum bench_flags {
BENCH_REENTRANT = 0x1,
};
typedef uint8_t bench_flags_t;
typedef struct bench_define {
intmax_t (*cb)(void *data);
void *data;
} bench_define_t;
struct bench_case {
const char *name;
const char *path;
bench_flags_t flags;
size_t permutations;
const bench_define_t *defines;
bool (*filter)(void);
void (*run)(struct lfs_config *cfg);
};
struct bench_suite {
const char *name;
const char *path;
bench_flags_t flags;
const char *const *define_names;
size_t define_count;
const struct bench_case *cases;
size_t case_count;
};
// deterministic prng for pseudo-randomness in benches
uint32_t bench_prng(uint32_t *state);
#define BENCH_PRNG(state) bench_prng(state)
// access generated bench defines
intmax_t bench_define(size_t define);
#define BENCH_DEFINE(i) bench_define(i)
// a few preconfigured defines that control how benches run
#define READ_SIZE_i 0
#define PROG_SIZE_i 1
#define ERASE_SIZE_i 2
#define ERASE_COUNT_i 3
#define BLOCK_SIZE_i 4
#define BLOCK_COUNT_i 5
#define CACHE_SIZE_i 6
#define LOOKAHEAD_SIZE_i 7
#define COMPACT_THRESH_i 8
#define INLINE_MAX_i 9
#define BLOCK_CYCLES_i 10
#define ERASE_VALUE_i 11
#define ERASE_CYCLES_i 12
#define BADBLOCK_BEHAVIOR_i 13
#define POWERLOSS_BEHAVIOR_i 14
#define READ_SIZE bench_define(READ_SIZE_i)
#define PROG_SIZE bench_define(PROG_SIZE_i)
#define ERASE_SIZE bench_define(ERASE_SIZE_i)
#define ERASE_COUNT bench_define(ERASE_COUNT_i)
#define BLOCK_SIZE bench_define(BLOCK_SIZE_i)
#define BLOCK_COUNT bench_define(BLOCK_COUNT_i)
#define CACHE_SIZE bench_define(CACHE_SIZE_i)
#define LOOKAHEAD_SIZE bench_define(LOOKAHEAD_SIZE_i)
#define COMPACT_THRESH bench_define(COMPACT_THRESH_i)
#define INLINE_MAX bench_define(INLINE_MAX_i)
#define BLOCK_CYCLES bench_define(BLOCK_CYCLES_i)
#define ERASE_VALUE bench_define(ERASE_VALUE_i)
#define ERASE_CYCLES bench_define(ERASE_CYCLES_i)
#define BADBLOCK_BEHAVIOR bench_define(BADBLOCK_BEHAVIOR_i)
#define POWERLOSS_BEHAVIOR bench_define(POWERLOSS_BEHAVIOR_i)
#define BENCH_IMPLICIT_DEFINES \
BENCH_DEF(READ_SIZE, PROG_SIZE) \
BENCH_DEF(PROG_SIZE, ERASE_SIZE) \
BENCH_DEF(ERASE_SIZE, 0) \
BENCH_DEF(ERASE_COUNT, (1024*1024)/BLOCK_SIZE) \
BENCH_DEF(BLOCK_SIZE, ERASE_SIZE) \
BENCH_DEF(BLOCK_COUNT, ERASE_COUNT/lfs_max(BLOCK_SIZE/ERASE_SIZE,1))\
BENCH_DEF(CACHE_SIZE, lfs_max(64,lfs_max(READ_SIZE,PROG_SIZE))) \
BENCH_DEF(LOOKAHEAD_SIZE, 16) \
BENCH_DEF(COMPACT_THRESH, 0) \
BENCH_DEF(INLINE_MAX, 0) \
BENCH_DEF(BLOCK_CYCLES, -1) \
BENCH_DEF(ERASE_VALUE, 0xff) \
BENCH_DEF(ERASE_CYCLES, 0) \
BENCH_DEF(BADBLOCK_BEHAVIOR, LFS_EMUBD_BADBLOCK_PROGERROR) \
BENCH_DEF(POWERLOSS_BEHAVIOR, LFS_EMUBD_POWERLOSS_NOOP)
#define BENCH_GEOMETRY_DEFINE_COUNT 4
#define BENCH_IMPLICIT_DEFINE_COUNT 15
#endif

2808
runners/test_runner.c Normal file

File diff suppressed because it is too large

139
runners/test_runner.h Normal file

@@ -0,0 +1,139 @@
/*
* Runner for littlefs tests
*
* Copyright (c) 2022, The littlefs authors.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef TEST_RUNNER_H
#define TEST_RUNNER_H
// override LFS_TRACE
void test_trace(const char *fmt, ...);
#define LFS_TRACE_(fmt, ...) \
test_trace("%s:%d:trace: " fmt "%s\n", \
__FILE__, \
__LINE__, \
__VA_ARGS__)
#define LFS_TRACE(...) LFS_TRACE_(__VA_ARGS__, "")
#define LFS_EMUBD_TRACE(...) LFS_TRACE_(__VA_ARGS__, "")
// note these are indirectly included in any generated files
#include "bd/lfs_emubd.h"
#include <stdio.h>
// give source a chance to define feature macros
#undef _FEATURES_H
#undef _STDIO_H
// generated test configurations
struct lfs_config;
enum test_flags {
TEST_REENTRANT = 0x1,
};
typedef uint8_t test_flags_t;
typedef struct test_define {
intmax_t (*cb)(void *data);
void *data;
} test_define_t;
struct test_case {
const char *name;
const char *path;
test_flags_t flags;
size_t permutations;
const test_define_t *defines;
bool (*filter)(void);
void (*run)(struct lfs_config *cfg);
};
struct test_suite {
const char *name;
const char *path;
test_flags_t flags;
const char *const *define_names;
size_t define_count;
const struct test_case *cases;
size_t case_count;
};
// deterministic prng for pseudo-randomness in tests
uint32_t test_prng(uint32_t *state);
#define TEST_PRNG(state) test_prng(state)
// access generated test defines
intmax_t test_define(size_t define);
#define TEST_DEFINE(i) test_define(i)
// a few preconfigured defines that control how tests run
#define READ_SIZE_i 0
#define PROG_SIZE_i 1
#define ERASE_SIZE_i 2
#define ERASE_COUNT_i 3
#define BLOCK_SIZE_i 4
#define BLOCK_COUNT_i 5
#define CACHE_SIZE_i 6
#define LOOKAHEAD_SIZE_i 7
#define COMPACT_THRESH_i 8
#define INLINE_MAX_i 9
#define BLOCK_CYCLES_i 10
#define ERASE_VALUE_i 11
#define ERASE_CYCLES_i 12
#define BADBLOCK_BEHAVIOR_i 13
#define POWERLOSS_BEHAVIOR_i 14
#define DISK_VERSION_i 15
#define READ_SIZE TEST_DEFINE(READ_SIZE_i)
#define PROG_SIZE TEST_DEFINE(PROG_SIZE_i)
#define ERASE_SIZE TEST_DEFINE(ERASE_SIZE_i)
#define ERASE_COUNT TEST_DEFINE(ERASE_COUNT_i)
#define BLOCK_SIZE TEST_DEFINE(BLOCK_SIZE_i)
#define BLOCK_COUNT TEST_DEFINE(BLOCK_COUNT_i)
#define CACHE_SIZE TEST_DEFINE(CACHE_SIZE_i)
#define LOOKAHEAD_SIZE TEST_DEFINE(LOOKAHEAD_SIZE_i)
#define COMPACT_THRESH TEST_DEFINE(COMPACT_THRESH_i)
#define INLINE_MAX TEST_DEFINE(INLINE_MAX_i)
#define BLOCK_CYCLES TEST_DEFINE(BLOCK_CYCLES_i)
#define ERASE_VALUE TEST_DEFINE(ERASE_VALUE_i)
#define ERASE_CYCLES TEST_DEFINE(ERASE_CYCLES_i)
#define BADBLOCK_BEHAVIOR TEST_DEFINE(BADBLOCK_BEHAVIOR_i)
#define POWERLOSS_BEHAVIOR TEST_DEFINE(POWERLOSS_BEHAVIOR_i)
#define DISK_VERSION TEST_DEFINE(DISK_VERSION_i)
#define TEST_IMPLICIT_DEFINES \
TEST_DEF(READ_SIZE, PROG_SIZE) \
TEST_DEF(PROG_SIZE, ERASE_SIZE) \
TEST_DEF(ERASE_SIZE, 0) \
TEST_DEF(ERASE_COUNT, (1024*1024)/ERASE_SIZE) \
TEST_DEF(BLOCK_SIZE, ERASE_SIZE) \
TEST_DEF(BLOCK_COUNT, ERASE_COUNT/lfs_max(BLOCK_SIZE/ERASE_SIZE,1)) \
TEST_DEF(CACHE_SIZE, lfs_max(64,lfs_max(READ_SIZE,PROG_SIZE))) \
TEST_DEF(LOOKAHEAD_SIZE, 16) \
TEST_DEF(COMPACT_THRESH, 0) \
TEST_DEF(INLINE_MAX, 0) \
TEST_DEF(BLOCK_CYCLES, -1) \
TEST_DEF(ERASE_VALUE, 0xff) \
TEST_DEF(ERASE_CYCLES, 0) \
TEST_DEF(BADBLOCK_BEHAVIOR, LFS_EMUBD_BADBLOCK_PROGERROR) \
TEST_DEF(POWERLOSS_BEHAVIOR, LFS_EMUBD_POWERLOSS_NOOP) \
TEST_DEF(DISK_VERSION, 0)
#define TEST_GEOMETRY_DEFINE_COUNT 4
#define TEST_IMPLICIT_DEFINE_COUNT 16
#endif

1430
scripts/bench.py Executable file

File diff suppressed because it is too large

181
scripts/changeprefix.py Executable file

@@ -0,0 +1,181 @@
#!/usr/bin/env python3
#
# Change prefixes in files/filenames. Useful for creating different versions
# of a codebase that don't conflict at compile time.
#
# Example:
# $ ./scripts/changeprefix.py lfs lfs3
#
# Copyright (c) 2022, The littlefs authors.
# Copyright (c) 2019, Arm Limited. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
import glob
import itertools
import os
import os.path
import re
import shlex
import shutil
import subprocess
import tempfile
GIT_PATH = ['git']
def openio(path, mode='r', buffering=-1):
# allow '-' for stdin/stdout
if path == '-':
if mode == 'r':
return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
else:
return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
else:
return open(path, mode, buffering)
def changeprefix(from_prefix, to_prefix, line):
line, count1 = re.subn(
'\\b'+from_prefix,
to_prefix,
line)
line, count2 = re.subn(
'\\b'+from_prefix.upper(),
to_prefix.upper(),
line)
line, count3 = re.subn(
'\\B-D'+from_prefix.upper(),
'-D'+to_prefix.upper(),
line)
return line, count1+count2+count3
def changefile(from_prefix, to_prefix, from_path, to_path, *,
no_replacements=False):
# rename any prefixes in file
count = 0
# create a temporary file to avoid overwriting ourself
if from_path == to_path and to_path != '-':
to_path_temp = tempfile.NamedTemporaryFile('w', delete=False)
to_path = to_path_temp.name
else:
to_path_temp = None
with openio(from_path) as from_f:
with openio(to_path, 'w') as to_f:
for line in from_f:
if not no_replacements:
line, n = changeprefix(from_prefix, to_prefix, line)
count += n
to_f.write(line)
if from_path != '-' and to_path != '-':
shutil.copystat(from_path, to_path)
if to_path_temp:
os.rename(to_path, from_path)
elif from_path != '-':
os.remove(from_path)
# Summary
print('%s: %d replacements' % (
'%s -> %s' % (from_path, to_path) if not to_path_temp else from_path,
count))
def main(from_prefix, to_prefix, paths=[], *,
verbose=False,
output=None,
no_replacements=False,
no_renames=False,
git=False,
no_stage=False,
git_path=GIT_PATH):
if not paths:
if git:
cmd = git_path + ['ls-tree', '-r', '--name-only', 'HEAD']
if verbose:
print(' '.join(shlex.quote(c) for c in cmd))
paths = subprocess.check_output(cmd, encoding='utf8').split()
else:
print('no paths?', file=sys.stderr)
sys.exit(1)
for from_path in paths:
# rename filename?
if output:
to_path = output
elif no_renames:
to_path = from_path
else:
to_path = os.path.join(
os.path.dirname(from_path),
changeprefix(from_prefix, to_prefix,
os.path.basename(from_path))[0])
# rename contents
changefile(from_prefix, to_prefix, from_path, to_path,
no_replacements=no_replacements)
# stage?
if git and not no_stage:
if from_path != to_path:
cmd = git_path + ['rm', '-q', from_path]
if verbose:
print(' '.join(shlex.quote(c) for c in cmd))
subprocess.check_call(cmd)
cmd = git_path + ['add', to_path]
if verbose:
print(' '.join(shlex.quote(c) for c in cmd))
subprocess.check_call(cmd)
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Change prefixes in files/filenames. Useful for creating "
"different versions of a codebase that don't conflict at compile "
"time.",
allow_abbrev=False)
parser.add_argument(
'from_prefix',
help="Prefix to replace.")
parser.add_argument(
'to_prefix',
help="Prefix to replace with.")
parser.add_argument(
'paths',
nargs='*',
help="Files to operate on.")
parser.add_argument(
'-v', '--verbose',
action='store_true',
help="Output commands that run behind the scenes.")
parser.add_argument(
'-o', '--output',
help="Output file.")
parser.add_argument(
'-N', '--no-replacements',
action='store_true',
help="Don't change prefixes in files")
parser.add_argument(
'-R', '--no-renames',
action='store_true',
help="Don't rename files")
parser.add_argument(
'--git',
action='store_true',
help="Use git to find/update files.")
parser.add_argument(
'--no-stage',
action='store_true',
help="Don't stage changes with git.")
parser.add_argument(
'--git-path',
type=lambda x: x.split(),
default=GIT_PATH,
help="Path to git executable, may include flags. "
"Defaults to %r." % GIT_PATH)
sys.exit(main(**{k: v
for k, v in vars(parser.parse_intermixed_args()).items()
if v is not None}))

707
scripts/code.py Executable file

@@ -0,0 +1,707 @@
#!/usr/bin/env python3
#
# Script to find code size at the function level. Basically just a big wrapper
# around nm with some extra conveniences for comparing builds. Heavily inspired
# by Linux's Bloat-O-Meter.
#
# Example:
# ./scripts/code.py lfs.o lfs_util.o -Ssize
#
# Copyright (c) 2022, The littlefs authors.
# Copyright (c) 2020, Arm Limited. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
import collections as co
import csv
import difflib
import itertools as it
import math as m
import os
import re
import shlex
import subprocess as sp
NM_PATH = ['nm']
NM_TYPES = 'tTrRdD'
OBJDUMP_PATH = ['objdump']
# integer fields
class Int(co.namedtuple('Int', 'x')):
__slots__ = ()
def __new__(cls, x=0):
if isinstance(x, Int):
return x
if isinstance(x, str):
try:
x = int(x, 0)
except ValueError:
# also accept +-∞ and +-inf
if re.match('^\s*\+?\s*(?:∞|inf)\s*$', x):
x = m.inf
elif re.match('^\s*-\s*(?:∞|inf)\s*$', x):
x = -m.inf
else:
raise
assert isinstance(x, int) or m.isinf(x), x
return super().__new__(cls, x)
def __str__(self):
if self.x == m.inf:
return ''
elif self.x == -m.inf:
return '-∞'
else:
return str(self.x)
def __int__(self):
assert not m.isinf(self.x)
return self.x
def __float__(self):
return float(self.x)
none = '%7s' % '-'
def table(self):
return '%7s' % (self,)
diff_none = '%7s' % '-'
diff_table = table
def diff_diff(self, other):
new = self.x if self else 0
old = other.x if other else 0
diff = new - old
if diff == +m.inf:
return '%7s' % '+∞'
elif diff == -m.inf:
return '%7s' % '-∞'
else:
return '%+7d' % diff
def ratio(self, other):
new = self.x if self else 0
old = other.x if other else 0
if m.isinf(new) and m.isinf(old):
return 0.0
elif m.isinf(new):
return +m.inf
elif m.isinf(old):
return -m.inf
elif not old and not new:
return 0.0
elif not old:
return 1.0
else:
return (new-old) / old
def __add__(self, other):
return self.__class__(self.x + other.x)
def __sub__(self, other):
return self.__class__(self.x - other.x)
def __mul__(self, other):
return self.__class__(self.x * other.x)
# code size results
class CodeResult(co.namedtuple('CodeResult', [
'file', 'function',
'size'])):
_by = ['file', 'function']
_fields = ['size']
_sort = ['size']
_types = {'size': Int}
__slots__ = ()
def __new__(cls, file='', function='', size=0):
return super().__new__(cls, file, function,
Int(size))
def __add__(self, other):
return CodeResult(self.file, self.function,
self.size + other.size)
def openio(path, mode='r', buffering=-1):
# allow '-' for stdin/stdout
if path == '-':
if mode == 'r':
return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
else:
return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
else:
return open(path, mode, buffering)
def collect(obj_paths, *,
nm_path=NM_PATH,
nm_types=NM_TYPES,
objdump_path=OBJDUMP_PATH,
sources=None,
everything=False,
**args):
size_pattern = re.compile(
'^(?P<size>[0-9a-fA-F]+)' +
' (?P<type>[%s])' % re.escape(nm_types) +
' (?P<func>.+?)$')
line_pattern = re.compile(
'^\s+(?P<no>[0-9]+)'
'(?:\s+(?P<dir>[0-9]+))?'
'\s+.*'
'\s+(?P<path>[^\s]+)$')
info_pattern = re.compile(
'^(?:.*(?P<tag>DW_TAG_[a-z_]+).*'
'|.*DW_AT_name.*:\s*(?P<name>[^:\s]+)\s*'
'|.*DW_AT_decl_file.*:\s*(?P<file>[0-9]+)\s*)$')
results = []
for path in obj_paths:
# guess the source, if we have debug-info we'll replace this later
file = re.sub('(\.o)?$', '.c', path, 1)
# find symbol sizes
results_ = []
# note nm-path may contain extra args
cmd = nm_path + ['--size-sort', path]
if args.get('verbose'):
print(' '.join(shlex.quote(c) for c in cmd))
proc = sp.Popen(cmd,
stdout=sp.PIPE,
stderr=sp.PIPE if not args.get('verbose') else None,
universal_newlines=True,
errors='replace',
close_fds=False)
for line in proc.stdout:
m = size_pattern.match(line)
if m:
func = m.group('func')
# discard internal functions
if not everything and func.startswith('__'):
continue
results_.append(CodeResult(
file, func,
int(m.group('size'), 16)))
proc.wait()
if proc.returncode != 0:
if not args.get('verbose'):
for line in proc.stderr:
sys.stdout.write(line)
sys.exit(-1)
# try to figure out the source file if we have debug-info
dirs = {}
files = {}
# note objdump-path may contain extra args
cmd = objdump_path + ['--dwarf=rawline', path]
if args.get('verbose'):
print(' '.join(shlex.quote(c) for c in cmd))
proc = sp.Popen(cmd,
stdout=sp.PIPE,
stderr=sp.PIPE if not args.get('verbose') else None,
universal_newlines=True,
errors='replace',
close_fds=False)
for line in proc.stdout:
# note that files contain references to dirs, which we
# dereference as soon as we see them as each file table follows a
# dir table
m = line_pattern.match(line)
if m:
if not m.group('dir'):
# found a directory entry
dirs[int(m.group('no'))] = m.group('path')
else:
# found a file entry
dir = int(m.group('dir'))
if dir in dirs:
files[int(m.group('no'))] = os.path.join(
dirs[dir],
m.group('path'))
else:
files[int(m.group('no'))] = m.group('path')
proc.wait()
if proc.returncode != 0:
if not args.get('verbose'):
for line in proc.stderr:
sys.stdout.write(line)
# do nothing on error, we don't need objdump to work, source files
# may just be inaccurate
pass
defs = {}
is_func = False
f_name = None
f_file = None
# note objdump-path may contain extra args
cmd = objdump_path + ['--dwarf=info', path]
if args.get('verbose'):
print(' '.join(shlex.quote(c) for c in cmd))
proc = sp.Popen(cmd,
stdout=sp.PIPE,
stderr=sp.PIPE if not args.get('verbose') else None,
universal_newlines=True,
errors='replace',
close_fds=False)
for line in proc.stdout:
# state machine here to find definitions
m = info_pattern.match(line)
if m:
if m.group('tag'):
if is_func:
defs[f_name] = files.get(f_file, '?')
is_func = (m.group('tag') == 'DW_TAG_subprogram')
elif m.group('name'):
f_name = m.group('name')
elif m.group('file'):
f_file = int(m.group('file'))
if is_func:
defs[f_name] = files.get(f_file, '?')
proc.wait()
if proc.returncode != 0:
if not args.get('verbose'):
for line in proc.stderr:
sys.stdout.write(line)
# do nothing on error, we don't need objdump to work, source files
# may just be inaccurate
pass
for r in results_:
# find best matching debug symbol, this may be slightly different
# due to optimizations
if defs:
# exact match? avoid difflib if we can for speed
if r.function in defs:
file = defs[r.function]
else:
_, file = max(
defs.items(),
key=lambda d: difflib.SequenceMatcher(None,
d[0],
r.function, False).ratio())
else:
file = r.file
# ignore filtered sources
if sources is not None:
if not any(
os.path.abspath(file) == os.path.abspath(s)
for s in sources):
continue
else:
# default to only cwd
if not everything and not os.path.commonpath([
os.getcwd(),
os.path.abspath(file)]) == os.getcwd():
continue
# simplify path
if os.path.commonpath([
os.getcwd(),
os.path.abspath(file)]) == os.getcwd():
file = os.path.relpath(file)
else:
file = os.path.abspath(file)
results.append(r._replace(file=file))
return results
def fold(Result, results, *,
by=None,
defines=None,
**_):
if by is None:
by = Result._by
for k in it.chain(by or [], (k for k, _ in defines or [])):
if k not in Result._by and k not in Result._fields:
print("error: could not find field %r?" % k)
sys.exit(-1)
# filter by matching defines
if defines is not None:
results_ = []
for r in results:
if all(getattr(r, k) in vs for k, vs in defines):
results_.append(r)
results = results_
# organize results into conflicts
folding = co.OrderedDict()
for r in results:
name = tuple(getattr(r, k) for k in by)
if name not in folding:
folding[name] = []
folding[name].append(r)
# merge conflicts
folded = []
for name, rs in folding.items():
folded.append(sum(rs[1:], start=rs[0]))
return folded
def table(Result, results, diff_results=None, *,
by=None,
fields=None,
sort=None,
summary=False,
all=False,
percent=False,
**_):
all_, all = all, __builtins__.all
if by is None:
by = Result._by
if fields is None:
fields = Result._fields
types = Result._types
# fold again
results = fold(Result, results, by=by)
if diff_results is not None:
diff_results = fold(Result, diff_results, by=by)
# organize by name
table = {
','.join(str(getattr(r, k) or '') for k in by): r
for r in results}
diff_table = {
','.join(str(getattr(r, k) or '') for k in by): r
for r in diff_results or []}
names = list(table.keys() | diff_table.keys())
# sort again, now with diff info, note that python's sort is stable
names.sort()
if diff_results is not None:
names.sort(key=lambda n: tuple(
types[k].ratio(
getattr(table.get(n), k, None),
getattr(diff_table.get(n), k, None))
for k in fields),
reverse=True)
if sort:
for k, reverse in reversed(sort):
names.sort(
key=lambda n: tuple(
(getattr(table[n], k),)
if getattr(table.get(n), k, None) is not None else ()
for k in ([k] if k else [
k for k in Result._sort if k in fields])),
reverse=reverse ^ (not k or k in Result._fields))
# build up our lines
lines = []
# header
header = []
header.append('%s%s' % (
','.join(by),
' (%d added, %d removed)' % (
sum(1 for n in table if n not in diff_table),
sum(1 for n in diff_table if n not in table))
if diff_results is not None and not percent else '')
if not summary else '')
if diff_results is None:
for k in fields:
header.append(k)
elif percent:
for k in fields:
header.append(k)
else:
for k in fields:
header.append('o'+k)
for k in fields:
header.append('n'+k)
for k in fields:
header.append('d'+k)
header.append('')
lines.append(header)
def table_entry(name, r, diff_r=None, ratios=[]):
entry = []
entry.append(name)
if diff_results is None:
for k in fields:
entry.append(getattr(r, k).table()
if getattr(r, k, None) is not None
else types[k].none)
elif percent:
for k in fields:
entry.append(getattr(r, k).diff_table()
if getattr(r, k, None) is not None
else types[k].diff_none)
else:
for k in fields:
entry.append(getattr(diff_r, k).diff_table()
if getattr(diff_r, k, None) is not None
else types[k].diff_none)
for k in fields:
entry.append(getattr(r, k).diff_table()
if getattr(r, k, None) is not None
else types[k].diff_none)
for k in fields:
entry.append(types[k].diff_diff(
getattr(r, k, None),
getattr(diff_r, k, None)))
if diff_results is None:
entry.append('')
elif percent:
entry.append(' (%s)' % ', '.join(
'+∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%+.1f%%' % (100*t)
for t in ratios))
else:
entry.append(' (%s)' % ', '.join(
'+∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%+.1f%%' % (100*t)
for t in ratios
if t)
if any(ratios) else '')
return entry
# entries
if not summary:
for name in names:
r = table.get(name)
if diff_results is None:
diff_r = None
ratios = None
else:
diff_r = diff_table.get(name)
ratios = [
types[k].ratio(
getattr(r, k, None),
getattr(diff_r, k, None))
for k in fields]
if not all_ and not any(ratios):
continue
lines.append(table_entry(name, r, diff_r, ratios))
# total
r = next(iter(fold(Result, results, by=[])), None)
if diff_results is None:
diff_r = None
ratios = None
else:
diff_r = next(iter(fold(Result, diff_results, by=[])), None)
ratios = [
types[k].ratio(
getattr(r, k, None),
getattr(diff_r, k, None))
for k in fields]
lines.append(table_entry('TOTAL', r, diff_r, ratios))
# find the best widths, note that column 0 contains the names and column -1
# the ratios, so those are handled a bit differently
widths = [
((max(it.chain([w], (len(l[i]) for l in lines)))+1+4-1)//4)*4-1
for w, i in zip(
it.chain([23], it.repeat(7)),
range(len(lines[0])-1))]
# print our table
for line in lines:
print('%-*s %s%s' % (
widths[0], line[0],
' '.join('%*s' % (w, x)
for w, x in zip(widths[1:], line[1:-1])),
line[-1]))
def main(obj_paths, *,
by=None,
fields=None,
defines=None,
sort=None,
**args):
# find sizes
if not args.get('use', None):
results = collect(obj_paths, **args)
else:
results = []
with openio(args['use']) as f:
reader = csv.DictReader(f, restval='')
for r in reader:
if not any('code_'+k in r and r['code_'+k].strip()
for k in CodeResult._fields):
continue
try:
results.append(CodeResult(
**{k: r[k] for k in CodeResult._by
if k in r and r[k].strip()},
**{k: r['code_'+k] for k in CodeResult._fields
if 'code_'+k in r and r['code_'+k].strip()}))
except TypeError:
pass
# fold
results = fold(CodeResult, results, by=by, defines=defines)
# sort, note that python's sort is stable
results.sort()
if sort:
for k, reverse in reversed(sort):
results.sort(
key=lambda r: tuple(
(getattr(r, k),) if getattr(r, k) is not None else ()
for k in ([k] if k else CodeResult._sort)),
reverse=reverse ^ (not k or k in CodeResult._fields))
# write results to CSV
if args.get('output'):
with openio(args['output'], 'w') as f:
writer = csv.DictWriter(f,
(by if by is not None else CodeResult._by)
+ ['code_'+k for k in (
fields if fields is not None else CodeResult._fields)])
writer.writeheader()
for r in results:
writer.writerow(
{k: getattr(r, k) for k in (
by if by is not None else CodeResult._by)}
| {'code_'+k: getattr(r, k) for k in (
fields if fields is not None else CodeResult._fields)})
# find previous results?
if args.get('diff'):
diff_results = []
try:
with openio(args['diff']) as f:
reader = csv.DictReader(f, restval='')
for r in reader:
if not any('code_'+k in r and r['code_'+k].strip()
for k in CodeResult._fields):
continue
try:
diff_results.append(CodeResult(
**{k: r[k] for k in CodeResult._by
if k in r and r[k].strip()},
**{k: r['code_'+k] for k in CodeResult._fields
if 'code_'+k in r and r['code_'+k].strip()}))
except TypeError:
pass
except FileNotFoundError:
pass
# fold
diff_results = fold(CodeResult, diff_results, by=by, defines=defines)
# print table
if not args.get('quiet'):
table(CodeResult, results,
diff_results if args.get('diff') else None,
by=by if by is not None else ['function'],
fields=fields,
sort=sort,
**args)
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Find code size at the function level.",
allow_abbrev=False)
parser.add_argument(
'obj_paths',
nargs='*',
help="Input *.o files.")
parser.add_argument(
'-v', '--verbose',
action='store_true',
help="Output commands that run behind the scenes.")
parser.add_argument(
'-q', '--quiet',
action='store_true',
help="Don't show anything, useful with -o.")
parser.add_argument(
'-o', '--output',
help="Specify CSV file to store results.")
parser.add_argument(
'-u', '--use',
help="Don't parse anything, use this CSV file.")
parser.add_argument(
'-d', '--diff',
help="Specify CSV file to diff against.")
parser.add_argument(
'-a', '--all',
action='store_true',
help="Show all, not just the ones that changed.")
parser.add_argument(
'-p', '--percent',
action='store_true',
help="Only show percentage change, not a full diff.")
parser.add_argument(
'-b', '--by',
action='append',
choices=CodeResult._by,
help="Group by this field.")
parser.add_argument(
'-f', '--field',
dest='fields',
action='append',
choices=CodeResult._fields,
help="Show this field.")
parser.add_argument(
'-D', '--define',
dest='defines',
action='append',
type=lambda x: (lambda k,v: (k, set(v.split(','))))(*x.split('=', 1)),
help="Only include results where this field is this value.")
class AppendSort(argparse.Action):
def __call__(self, parser, namespace, value, option):
if namespace.sort is None:
namespace.sort = []
namespace.sort.append((value, True if option == '-S' else False))
parser.add_argument(
'-s', '--sort',
nargs='?',
action=AppendSort,
help="Sort by this field.")
parser.add_argument(
'-S', '--reverse-sort',
nargs='?',
action=AppendSort,
help="Sort by this field, but backwards.")
parser.add_argument(
'-Y', '--summary',
action='store_true',
help="Only show the total.")
parser.add_argument(
'-F', '--source',
dest='sources',
action='append',
help="Only consider definitions in this file. Defaults to anything "
"in the current directory.")
parser.add_argument(
'--everything',
action='store_true',
help="Include builtin and libc specific symbols.")
parser.add_argument(
'--nm-types',
default=NM_TYPES,
help="Type of symbols to report, this uses the same single-character "
"type-names emitted by nm. Defaults to %r." % NM_TYPES)
parser.add_argument(
'--nm-path',
type=lambda x: x.split(),
default=NM_PATH,
help="Path to the nm executable, may include flags. "
"Defaults to %r." % NM_PATH)
parser.add_argument(
'--objdump-path',
type=lambda x: x.split(),
default=OBJDUMP_PATH,
help="Path to the objdump executable, may include flags. "
"Defaults to %r." % OBJDUMP_PATH)
sys.exit(main(**{k: v
for k, v in vars(parser.parse_intermixed_args()).items()
if v is not None}))
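
Aside: collect() in code.py boils down to parsing nm --size-sort output into (file, function, size) records, folding duplicates, and optionally diffing against an earlier CSV. A minimal, illustrative sketch of just the nm step follows; it assumes a hypothetical lfs.o object file and skips the DWARF-based source lookup and the CSV/diff machinery:

#!/usr/bin/env python3
# Illustrative sketch of code.py's nm-based size collection, not the real
# script. Assumes a hypothetical object file "lfs.o" built with symbols.
import collections as co
import re
import subprocess as sp

SIZE_PATTERN = re.compile(
    r'^(?P<size>[0-9a-fA-F]+) (?P<type>[tTrRdD]) (?P<func>.+?)$')

def collect_sizes(obj_path):
    # sum nm's symbol sizes per function, skipping internal __ symbols
    sizes = co.defaultdict(int)
    out = sp.check_output(['nm', '--size-sort', obj_path],
        universal_newlines=True)
    for line in out.splitlines():
        m = SIZE_PATTERN.match(line)
        if m and not m.group('func').startswith('__'):
            sizes[m.group('func')] += int(m.group('size'), 16)
    return sizes

if __name__ == "__main__":
    for func, size in sorted(collect_sizes('lfs.o').items(),
            key=lambda kv: kv[1], reverse=True):
        print('%-36s %7d' % (func, size))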

828
scripts/cov.py Executable file

@@ -0,0 +1,828 @@
#!/usr/bin/env python3
#
# Script to find coverage info after running tests.
#
# Example:
# ./scripts/cov.py \
# lfs.t.a.gcda lfs_util.t.a.gcda \
# -Flfs.c -Flfs_util.c -slines
#
# Copyright (c) 2022, The littlefs authors.
# Copyright (c) 2020, Arm Limited. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
import collections as co
import csv
import itertools as it
import json
import math as m
import os
import re
import shlex
import subprocess as sp
# TODO use explode_asserts to avoid counting assert branches?
# TODO use dwarf=info to find functions for inline functions?
GCOV_PATH = ['gcov']
# integer fields
class Int(co.namedtuple('Int', 'x')):
__slots__ = ()
def __new__(cls, x=0):
if isinstance(x, Int):
return x
if isinstance(x, str):
try:
x = int(x, 0)
except ValueError:
# also accept +-∞ and +-inf
if re.match('^\s*\+?\s*(?:∞|inf)\s*$', x):
x = m.inf
elif re.match('^\s*-\s*(?:∞|inf)\s*$', x):
x = -m.inf
else:
raise
assert isinstance(x, int) or m.isinf(x), x
return super().__new__(cls, x)
def __str__(self):
if self.x == m.inf:
return '∞'
elif self.x == -m.inf:
return '-∞'
else:
return str(self.x)
def __int__(self):
assert not m.isinf(self.x)
return self.x
def __float__(self):
return float(self.x)
none = '%7s' % '-'
def table(self):
return '%7s' % (self,)
diff_none = '%7s' % '-'
diff_table = table
def diff_diff(self, other):
new = self.x if self else 0
old = other.x if other else 0
diff = new - old
if diff == +m.inf:
return '%7s' % '+∞'
elif diff == -m.inf:
return '%7s' % '-∞'
else:
return '%+7d' % diff
def ratio(self, other):
new = self.x if self else 0
old = other.x if other else 0
if m.isinf(new) and m.isinf(old):
return 0.0
elif m.isinf(new):
return +m.inf
elif m.isinf(old):
return -m.inf
elif not old and not new:
return 0.0
elif not old:
return 1.0
else:
return (new-old) / old
def __add__(self, other):
return self.__class__(self.x + other.x)
def __sub__(self, other):
return self.__class__(self.x - other.x)
def __mul__(self, other):
return self.__class__(self.x * other.x)
# fractional fields, a/b
class Frac(co.namedtuple('Frac', 'a,b')):
__slots__ = ()
def __new__(cls, a=0, b=None):
if isinstance(a, Frac) and b is None:
return a
if isinstance(a, str) and b is None:
a, b = a.split('/', 1)
if b is None:
b = a
return super().__new__(cls, Int(a), Int(b))
def __str__(self):
return '%s/%s' % (self.a, self.b)
def __float__(self):
return float(self.a)
none = '%11s %7s' % ('-', '-')
def table(self):
t = self.a.x/self.b.x if self.b.x else 1.0
return '%11s %7s' % (
self,
'∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%.1f%%' % (100*t))
diff_none = '%11s' % '-'
def diff_table(self):
return '%11s' % (self,)
def diff_diff(self, other):
new_a, new_b = self if self else (Int(0), Int(0))
old_a, old_b = other if other else (Int(0), Int(0))
return '%11s' % ('%s/%s' % (
new_a.diff_diff(old_a).strip(),
new_b.diff_diff(old_b).strip()))
def ratio(self, other):
new_a, new_b = self if self else (Int(0), Int(0))
old_a, old_b = other if other else (Int(0), Int(0))
new = new_a.x/new_b.x if new_b.x else 1.0
old = old_a.x/old_b.x if old_b.x else 1.0
return new - old
def __add__(self, other):
return self.__class__(self.a + other.a, self.b + other.b)
def __sub__(self, other):
return self.__class__(self.a - other.a, self.b - other.b)
def __mul__(self, other):
return self.__class__(self.a * other.a, self.b + other.b)
def __lt__(self, other):
self_t = self.a.x/self.b.x if self.b.x else 1.0
other_t = other.a.x/other.b.x if other.b.x else 1.0
return (self_t, self.a.x) < (other_t, other.a.x)
def __gt__(self, other):
return self.__class__.__lt__(other, self)
def __le__(self, other):
return not self.__gt__(other)
def __ge__(self, other):
return not self.__lt__(other)
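# Note (illustrative): Frac orders by coverage ratio first and only then by
# absolute count, so 1/2 compares greater than 10/100 even though 10 > 1.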
# coverage results
class CovResult(co.namedtuple('CovResult', [
'file', 'function', 'line',
'calls', 'hits', 'funcs', 'lines', 'branches'])):
_by = ['file', 'function', 'line']
_fields = ['calls', 'hits', 'funcs', 'lines', 'branches']
_sort = ['funcs', 'lines', 'branches', 'hits', 'calls']
_types = {
'calls': Int, 'hits': Int,
'funcs': Frac, 'lines': Frac, 'branches': Frac}
__slots__ = ()
def __new__(cls, file='', function='', line=0,
calls=0, hits=0, funcs=0, lines=0, branches=0):
return super().__new__(cls, file, function, int(Int(line)),
Int(calls), Int(hits), Frac(funcs), Frac(lines), Frac(branches))
def __add__(self, other):
return CovResult(self.file, self.function, self.line,
max(self.calls, other.calls),
max(self.hits, other.hits),
self.funcs + other.funcs,
self.lines + other.lines,
self.branches + other.branches)
def openio(path, mode='r', buffering=-1):
# allow '-' for stdin/stdout
if path == '-':
if mode == 'r':
return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
else:
return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
else:
return open(path, mode, buffering)
def collect(gcda_paths, *,
gcov_path=GCOV_PATH,
sources=None,
everything=False,
**args):
results = []
for path in gcda_paths:
# get coverage info through gcov's json output
# note, gcov-path may contain extra args
cmd = gcov_path + ['-b', '-t', '--json-format', path]
if args.get('verbose'):
print(' '.join(shlex.quote(c) for c in cmd))
proc = sp.Popen(cmd,
stdout=sp.PIPE,
stderr=sp.PIPE if not args.get('verbose') else None,
universal_newlines=True,
errors='replace',
close_fds=False)
data = json.load(proc.stdout)
proc.wait()
if proc.returncode != 0:
if not args.get('verbose'):
for line in proc.stderr:
sys.stdout.write(line)
sys.exit(-1)
# collect line/branch coverage
for file in data['files']:
# ignore filtered sources
if sources is not None:
if not any(
os.path.abspath(file['file']) == os.path.abspath(s)
for s in sources):
continue
else:
# default to only cwd
if not everything and not os.path.commonpath([
os.getcwd(),
os.path.abspath(file['file'])]) == os.getcwd():
continue
# simplify path
if os.path.commonpath([
os.getcwd(),
os.path.abspath(file['file'])]) == os.getcwd():
file_name = os.path.relpath(file['file'])
else:
file_name = os.path.abspath(file['file'])
for func in file['functions']:
func_name = func.get('name', '(inlined)')
# discard internal functions (this includes injected test cases)
if not everything:
if func_name.startswith('__'):
continue
# go ahead and add functions, later folding will merge this if
# there are other hits on this line
results.append(CovResult(
file_name, func_name, func['start_line'],
func['execution_count'], 0,
Frac(1 if func['execution_count'] > 0 else 0, 1),
0,
0))
for line in file['lines']:
func_name = line.get('function_name', '(inlined)')
# discard internal functions (this includes injected test cases)
if not everything:
if func_name.startswith('__'):
continue
# go ahead and add lines, later folding will merge this if
# there are other hits on this line
results.append(CovResult(
file_name, func_name, line['line_number'],
0, line['count'],
0,
Frac(1 if line['count'] > 0 else 0, 1),
Frac(
sum(1 if branch['count'] > 0 else 0
for branch in line['branches']),
len(line['branches']))))
return results
def fold(Result, results, *,
by=None,
defines=None,
**_):
if by is None:
by = Result._by
for k in it.chain(by or [], (k for k, _ in defines or [])):
if k not in Result._by and k not in Result._fields:
print("error: could not find field %r?" % k)
sys.exit(-1)
# filter by matching defines
if defines is not None:
results_ = []
for r in results:
if all(getattr(r, k) in vs for k, vs in defines):
results_.append(r)
results = results_
# organize results into conflicts
folding = co.OrderedDict()
for r in results:
name = tuple(getattr(r, k) for k in by)
if name not in folding:
folding[name] = []
folding[name].append(r)
# merge conflicts
folded = []
for name, rs in folding.items():
folded.append(sum(rs[1:], start=rs[0]))
return folded
def table(Result, results, diff_results=None, *,
by=None,
fields=None,
sort=None,
summary=False,
all=False,
percent=False,
**_):
all_, all = all, __builtins__.all
if by is None:
by = Result._by
if fields is None:
fields = Result._fields
types = Result._types
# fold again
results = fold(Result, results, by=by)
if diff_results is not None:
diff_results = fold(Result, diff_results, by=by)
# organize by name
table = {
','.join(str(getattr(r, k) or '') for k in by): r
for r in results}
diff_table = {
','.join(str(getattr(r, k) or '') for k in by): r
for r in diff_results or []}
names = list(table.keys() | diff_table.keys())
# sort again, now with diff info, note that python's sort is stable
names.sort()
if diff_results is not None:
names.sort(key=lambda n: tuple(
types[k].ratio(
getattr(table.get(n), k, None),
getattr(diff_table.get(n), k, None))
for k in fields),
reverse=True)
if sort:
for k, reverse in reversed(sort):
names.sort(
key=lambda n: tuple(
(getattr(table[n], k),)
if getattr(table.get(n), k, None) is not None else ()
for k in ([k] if k else [
k for k in Result._sort if k in fields])),
reverse=reverse ^ (not k or k in Result._fields))
# build up our lines
lines = []
# header
header = []
header.append('%s%s' % (
','.join(by),
' (%d added, %d removed)' % (
sum(1 for n in table if n not in diff_table),
sum(1 for n in diff_table if n not in table))
if diff_results is not None and not percent else '')
if not summary else '')
if diff_results is None:
for k in fields:
header.append(k)
elif percent:
for k in fields:
header.append(k)
else:
for k in fields:
header.append('o'+k)
for k in fields:
header.append('n'+k)
for k in fields:
header.append('d'+k)
header.append('')
lines.append(header)
def table_entry(name, r, diff_r=None, ratios=[]):
entry = []
entry.append(name)
if diff_results is None:
for k in fields:
entry.append(getattr(r, k).table()
if getattr(r, k, None) is not None
else types[k].none)
elif percent:
for k in fields:
entry.append(getattr(r, k).diff_table()
if getattr(r, k, None) is not None
else types[k].diff_none)
else:
for k in fields:
entry.append(getattr(diff_r, k).diff_table()
if getattr(diff_r, k, None) is not None
else types[k].diff_none)
for k in fields:
entry.append(getattr(r, k).diff_table()
if getattr(r, k, None) is not None
else types[k].diff_none)
for k in fields:
entry.append(types[k].diff_diff(
getattr(r, k, None),
getattr(diff_r, k, None)))
if diff_results is None:
entry.append('')
elif percent:
entry.append(' (%s)' % ', '.join(
'+∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%+.1f%%' % (100*t)
for t in ratios))
else:
entry.append(' (%s)' % ', '.join(
'+∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%+.1f%%' % (100*t)
for t in ratios
if t)
if any(ratios) else '')
return entry
# entries
if not summary:
for name in names:
r = table.get(name)
if diff_results is None:
diff_r = None
ratios = None
else:
diff_r = diff_table.get(name)
ratios = [
types[k].ratio(
getattr(r, k, None),
getattr(diff_r, k, None))
for k in fields]
if not all_ and not any(ratios):
continue
lines.append(table_entry(name, r, diff_r, ratios))
# total
r = next(iter(fold(Result, results, by=[])), None)
if diff_results is None:
diff_r = None
ratios = None
else:
diff_r = next(iter(fold(Result, diff_results, by=[])), None)
ratios = [
types[k].ratio(
getattr(r, k, None),
getattr(diff_r, k, None))
for k in fields]
lines.append(table_entry('TOTAL', r, diff_r, ratios))
# find the best widths, note that column 0 contains the names and column -1
# the ratios, so those are handled a bit differently
widths = [
((max(it.chain([w], (len(l[i]) for l in lines)))+1+4-1)//4)*4-1
for w, i in zip(
it.chain([23], it.repeat(7)),
range(len(lines[0])-1))]
# print our table
for line in lines:
print('%-*s %s%s' % (
widths[0], line[0],
' '.join('%*s' % (w, x)
for w, x in zip(widths[1:], line[1:-1])),
line[-1]))
def annotate(Result, results, *,
annotate=False,
lines=False,
branches=False,
**args):
# if neither branches/lines specified, color both
if annotate and not lines and not branches:
lines, branches = True, True
for path in co.OrderedDict.fromkeys(r.file for r in results).keys():
# flatten to line info
results = fold(Result, results, by=['file', 'line'])
table = {r.line: r for r in results if r.file == path}
# calculate spans to show
if not annotate:
spans = []
last = None
func = None
for line, r in sorted(table.items()):
if ((lines and int(r.hits) == 0)
or (branches and r.branches.a < r.branches.b)):
if last is not None and line - last.stop <= args['context']:
last = range(
last.start,
line+1+args['context'])
else:
if last is not None:
spans.append((last, func))
last = range(
line-args['context'],
line+1+args['context'])
func = r.function
if last is not None:
spans.append((last, func))
with open(path) as f:
skipped = False
for i, line in enumerate(f):
# skip lines not in spans?
if not annotate and not any(i+1 in s for s, _ in spans):
skipped = True
continue
if skipped:
skipped = False
print('%s@@ %s:%d: %s @@%s' % (
'\x1b[36m' if args['color'] else '',
path,
i+1,
next(iter(f for _, f in spans)),
'\x1b[m' if args['color'] else ''))
# build line
if line.endswith('\n'):
line = line[:-1]
if i+1 in table:
r = table[i+1]
line = '%-*s // %s hits%s' % (
args['width'],
line,
r.hits,
', %s branches' % (r.branches,)
if int(r.branches.b) else '')
if args['color']:
if lines and int(r.hits) == 0:
line = '\x1b[1;31m%s\x1b[m' % line
elif branches and r.branches.a < r.branches.b:
line = '\x1b[35m%s\x1b[m' % line
print(line)
def main(gcda_paths, *,
by=None,
fields=None,
defines=None,
sort=None,
hits=False,
**args):
# figure out what color should be
if args.get('color') == 'auto':
args['color'] = sys.stdout.isatty()
elif args.get('color') == 'always':
args['color'] = True
else:
args['color'] = False
# find sizes
if not args.get('use', None):
results = collect(gcda_paths, **args)
else:
results = []
with openio(args['use']) as f:
reader = csv.DictReader(f, restval='')
for r in reader:
if not any('cov_'+k in r and r['cov_'+k].strip()
for k in CovResult._fields):
continue
try:
results.append(CovResult(
**{k: r[k] for k in CovResult._by
if k in r and r[k].strip()},
**{k: r['cov_'+k]
for k in CovResult._fields
if 'cov_'+k in r
and r['cov_'+k].strip()}))
except TypeError:
pass
# fold
results = fold(CovResult, results, by=by, defines=defines)
# sort, note that python's sort is stable
results.sort()
if sort:
for k, reverse in reversed(sort):
results.sort(
key=lambda r: tuple(
(getattr(r, k),) if getattr(r, k) is not None else ()
for k in ([k] if k else CovResult._sort)),
reverse=reverse ^ (not k or k in CovResult._fields))
# write results to CSV
if args.get('output'):
with openio(args['output'], 'w') as f:
writer = csv.DictWriter(f,
(by if by is not None else CovResult._by)
+ ['cov_'+k for k in (
fields if fields is not None else CovResult._fields)])
writer.writeheader()
for r in results:
writer.writerow(
{k: getattr(r, k) for k in (
by if by is not None else CovResult._by)}
| {'cov_'+k: getattr(r, k) for k in (
fields if fields is not None else CovResult._fields)})
# find previous results?
if args.get('diff'):
diff_results = []
try:
with openio(args['diff']) as f:
reader = csv.DictReader(f, restval='')
for r in reader:
if not any('cov_'+k in r and r['cov_'+k].strip()
for k in CovResult._fields):
continue
try:
diff_results.append(CovResult(
**{k: r[k] for k in CovResult._by
if k in r and r[k].strip()},
**{k: r['cov_'+k]
for k in CovResult._fields
if 'cov_'+k in r
and r['cov_'+k].strip()}))
except TypeError:
pass
except FileNotFoundError:
pass
# fold
diff_results = fold(CovResult, diff_results,
by=by, defines=defines)
# print table
if not args.get('quiet'):
if (args.get('annotate')
or args.get('lines')
or args.get('branches')):
# annotate sources
annotate(CovResult, results, **args)
else:
# print table
table(CovResult, results,
diff_results if args.get('diff') else None,
by=by if by is not None else ['function'],
fields=fields if fields is not None
else ['lines', 'branches'] if not hits
else ['calls', 'hits'],
sort=sort,
**args)
# catch lack of coverage
if args.get('error_on_lines') and any(
r.lines.a < r.lines.b for r in results):
sys.exit(2)
elif args.get('error_on_branches') and any(
r.branches.a < r.branches.b for r in results):
sys.exit(3)
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Find coverage info after running tests.",
allow_abbrev=False)
parser.add_argument(
'gcda_paths',
nargs='*',
help="Input *.gcda files.")
parser.add_argument(
'-v', '--verbose',
action='store_true',
help="Output commands that run behind the scenes.")
parser.add_argument(
'-q', '--quiet',
action='store_true',
help="Don't show anything, useful with -o.")
parser.add_argument(
'-o', '--output',
help="Specify CSV file to store results.")
parser.add_argument(
'-u', '--use',
help="Don't parse anything, use this CSV file.")
parser.add_argument(
'-d', '--diff',
help="Specify CSV file to diff against.")
parser.add_argument(
'-a', '--all',
action='store_true',
help="Show all, not just the ones that changed.")
parser.add_argument(
'-p', '--percent',
action='store_true',
help="Only show percentage change, not a full diff.")
parser.add_argument(
'-b', '--by',
action='append',
choices=CovResult._by,
help="Group by this field.")
parser.add_argument(
'-f', '--field',
dest='fields',
action='append',
choices=CovResult._fields,
help="Show this field.")
parser.add_argument(
'-D', '--define',
dest='defines',
action='append',
type=lambda x: (lambda k,v: (k, set(v.split(','))))(*x.split('=', 1)),
help="Only include results where this field is this value.")
class AppendSort(argparse.Action):
def __call__(self, parser, namespace, value, option):
if namespace.sort is None:
namespace.sort = []
namespace.sort.append((value, True if option == '-S' else False))
parser.add_argument(
'-s', '--sort',
nargs='?',
action=AppendSort,
help="Sort by this field.")
parser.add_argument(
'-S', '--reverse-sort',
nargs='?',
action=AppendSort,
help="Sort by this field, but backwards.")
parser.add_argument(
'-Y', '--summary',
action='store_true',
help="Only show the total.")
parser.add_argument(
'-F', '--source',
dest='sources',
action='append',
help="Only consider definitions in this file. Defaults to anything "
"in the current directory.")
parser.add_argument(
'--everything',
action='store_true',
help="Include builtin and libc specific symbols.")
parser.add_argument(
'--hits',
action='store_true',
help="Show total hits instead of coverage.")
parser.add_argument(
'-A', '--annotate',
action='store_true',
help="Show source files annotated with coverage info.")
parser.add_argument(
'-L', '--lines',
action='store_true',
help="Show uncovered lines.")
parser.add_argument(
'-B', '--branches',
action='store_true',
help="Show uncovered branches.")
parser.add_argument(
'-c', '--context',
type=lambda x: int(x, 0),
default=3,
help="Show n additional lines of context. Defaults to 3.")
parser.add_argument(
'-W', '--width',
type=lambda x: int(x, 0),
default=80,
help="Assume source is styled with this many columns. Defaults to 80.")
parser.add_argument(
'--color',
choices=['never', 'always', 'auto'],
default='auto',
help="When to use terminal colors. Defaults to 'auto'.")
parser.add_argument(
'-e', '--error-on-lines',
action='store_true',
help="Error if any lines are not covered.")
parser.add_argument(
'-E', '--error-on-branches',
action='store_true',
help="Error if any branches are not covered.")
parser.add_argument(
'--gcov-path',
default=GCOV_PATH,
type=lambda x: x.split(),
help="Path to the gcov executable, may include paths. "
"Defaults to %r." % GCOV_PATH)
sys.exit(main(**{k: v
for k, v in vars(parser.parse_intermixed_args()).items()
if v is not None}))
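
Aside: cov.py leans on gcov's --json-format output, where each file entry carries 'functions' (with execution_count) and 'lines' (with count and per-line branches). An illustrative sketch of that aggregation follows; it assumes a hypothetical lfs.t.a.gcda with gcov on PATH, and leaves out the Frac bookkeeping, CSV output/diffing, and source annotation:

#!/usr/bin/env python3
# Illustrative sketch of cov.py's gcov-json aggregation, not the real script.
# Assumes a hypothetical "lfs.t.a.gcda" and that gcov is on PATH.
import json
import subprocess as sp

def collect_cov(gcda_path):
    # gcov -t prints the report to stdout, --json-format makes it JSON
    out = sp.check_output(
        ['gcov', '-b', '-t', '--json-format', gcda_path],
        universal_newlines=True)
    cov = {}
    for file in json.loads(out)['files']:
        line_hits = line_count = branch_hits = branch_count = 0
        for line in file['lines']:
            line_count += 1
            line_hits += 1 if line['count'] > 0 else 0
            branch_count += len(line['branches'])
            branch_hits += sum(
                1 for b in line['branches'] if b['count'] > 0)
        cov[file['file']] = (line_hits, line_count, branch_hits, branch_count)
    return cov

if __name__ == "__main__":
    for file, (lh, lc, bh, bc) in sorted(collect_cov('lfs.t.a.gcda').items()):
        print('%-24s lines %d/%d, branches %d/%d' % (file, lh, lc, bh, bc))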

704
scripts/data.py Executable file

@@ -0,0 +1,704 @@
#!/usr/bin/env python3
#
# Script to find data size at the function level. Basically just a big wrapper
# around nm with some extra conveniences for comparing builds. Heavily inspired
# by Linux's Bloat-O-Meter.
#
# Example:
# ./scripts/data.py lfs.o lfs_util.o -Ssize
#
# Copyright (c) 2022, The littlefs authors.
# Copyright (c) 2020, Arm Limited. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
import collections as co
import csv
import difflib
import itertools as it
import math as m
import os
import re
import shlex
import subprocess as sp
NM_PATH = ['nm']
NM_TYPES = 'dDbB'
OBJDUMP_PATH = ['objdump']
# integer fields
class Int(co.namedtuple('Int', 'x')):
__slots__ = ()
def __new__(cls, x=0):
if isinstance(x, Int):
return x
if isinstance(x, str):
try:
x = int(x, 0)
except ValueError:
# also accept +-∞ and +-inf
if re.match('^\s*\+?\s*(?:∞|inf)\s*$', x):
x = m.inf
elif re.match('^\s*-\s*(?:∞|inf)\s*$', x):
x = -m.inf
else:
raise
assert isinstance(x, int) or m.isinf(x), x
return super().__new__(cls, x)
def __str__(self):
if self.x == m.inf:
return '∞'
elif self.x == -m.inf:
return '-∞'
else:
return str(self.x)
def __int__(self):
assert not m.isinf(self.x)
return self.x
def __float__(self):
return float(self.x)
none = '%7s' % '-'
def table(self):
return '%7s' % (self,)
diff_none = '%7s' % '-'
diff_table = table
def diff_diff(self, other):
new = self.x if self else 0
old = other.x if other else 0
diff = new - old
if diff == +m.inf:
return '%7s' % '+∞'
elif diff == -m.inf:
return '%7s' % '-∞'
else:
return '%+7d' % diff
def ratio(self, other):
new = self.x if self else 0
old = other.x if other else 0
if m.isinf(new) and m.isinf(old):
return 0.0
elif m.isinf(new):
return +m.inf
elif m.isinf(old):
return -m.inf
elif not old and not new:
return 0.0
elif not old:
return 1.0
else:
return (new-old) / old
def __add__(self, other):
return self.__class__(self.x + other.x)
def __sub__(self, other):
return self.__class__(self.x - other.x)
def __mul__(self, other):
return self.__class__(self.x * other.x)
# data size results
class DataResult(co.namedtuple('DataResult', [
'file', 'function',
'size'])):
_by = ['file', 'function']
_fields = ['size']
_sort = ['size']
_types = {'size': Int}
__slots__ = ()
def __new__(cls, file='', function='', size=0):
return super().__new__(cls, file, function,
Int(size))
def __add__(self, other):
return DataResult(self.file, self.function,
self.size + other.size)
def openio(path, mode='r', buffering=-1):
# allow '-' for stdin/stdout
if path == '-':
if mode == 'r':
return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
else:
return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
else:
return open(path, mode, buffering)
def collect(obj_paths, *,
nm_path=NM_PATH,
nm_types=NM_TYPES,
objdump_path=OBJDUMP_PATH,
sources=None,
everything=False,
**args):
size_pattern = re.compile(
'^(?P<size>[0-9a-fA-F]+)' +
' (?P<type>[%s])' % re.escape(nm_types) +
' (?P<func>.+?)$')
line_pattern = re.compile(
'^\s+(?P<no>[0-9]+)'
'(?:\s+(?P<dir>[0-9]+))?'
'\s+.*'
'\s+(?P<path>[^\s]+)$')
info_pattern = re.compile(
'^(?:.*(?P<tag>DW_TAG_[a-z_]+).*'
'|.*DW_AT_name.*:\s*(?P<name>[^:\s]+)\s*'
'|.*DW_AT_decl_file.*:\s*(?P<file>[0-9]+)\s*)$')
results = []
for path in obj_paths:
# guess the source, if we have debug-info we'll replace this later
file = re.sub('(\.o)?$', '.c', path, 1)
# find symbol sizes
results_ = []
# note nm-path may contain extra args
cmd = nm_path + ['--size-sort', path]
if args.get('verbose'):
print(' '.join(shlex.quote(c) for c in cmd))
proc = sp.Popen(cmd,
stdout=sp.PIPE,
stderr=sp.PIPE if not args.get('verbose') else None,
universal_newlines=True,
errors='replace',
close_fds=False)
for line in proc.stdout:
m = size_pattern.match(line)
if m:
func = m.group('func')
# discard internal functions
if not everything and func.startswith('__'):
continue
results_.append(DataResult(
file, func,
int(m.group('size'), 16)))
proc.wait()
if proc.returncode != 0:
if not args.get('verbose'):
for line in proc.stderr:
sys.stdout.write(line)
sys.exit(-1)
# try to figure out the source file if we have debug-info
dirs = {}
files = {}
# note objdump-path may contain extra args
cmd = objdump_path + ['--dwarf=rawline', path]
if args.get('verbose'):
print(' '.join(shlex.quote(c) for c in cmd))
proc = sp.Popen(cmd,
stdout=sp.PIPE,
stderr=sp.PIPE if not args.get('verbose') else None,
universal_newlines=True,
errors='replace',
close_fds=False)
for line in proc.stdout:
# note that files contain references to dirs, which we
# dereference as soon as we see them as each file table follows a
# dir table
m = line_pattern.match(line)
if m:
if not m.group('dir'):
# found a directory entry
dirs[int(m.group('no'))] = m.group('path')
else:
# found a file entry
dir = int(m.group('dir'))
if dir in dirs:
files[int(m.group('no'))] = os.path.join(
dirs[dir],
m.group('path'))
else:
files[int(m.group('no'))] = m.group('path')
proc.wait()
if proc.returncode != 0:
if not args.get('verbose'):
for line in proc.stderr:
sys.stdout.write(line)
# do nothing on error, we don't need objdump to work, source files
# may just be inaccurate
pass
defs = {}
is_func = False
f_name = None
f_file = None
# note objdump-path may contain extra args
cmd = objdump_path + ['--dwarf=info', path]
if args.get('verbose'):
print(' '.join(shlex.quote(c) for c in cmd))
proc = sp.Popen(cmd,
stdout=sp.PIPE,
stderr=sp.PIPE if not args.get('verbose') else None,
universal_newlines=True,
errors='replace',
close_fds=False)
for line in proc.stdout:
# state machine here to find definitions
m = info_pattern.match(line)
if m:
if m.group('tag'):
if is_func:
defs[f_name] = files.get(f_file, '?')
is_func = (m.group('tag') == 'DW_TAG_subprogram')
elif m.group('name'):
f_name = m.group('name')
elif m.group('file'):
f_file = int(m.group('file'))
if is_func:
defs[f_name] = files.get(f_file, '?')
proc.wait()
if proc.returncode != 0:
if not args.get('verbose'):
for line in proc.stderr:
sys.stdout.write(line)
# do nothing on error, we don't need objdump to work, source files
# may just be inaccurate
pass
for r in results_:
# find best matching debug symbol, this may be slightly different
# due to optimizations
if defs:
# exact match? avoid difflib if we can for speed
if r.function in defs:
file = defs[r.function]
else:
_, file = max(
defs.items(),
key=lambda d: difflib.SequenceMatcher(None,
d[0],
r.function, False).ratio())
else:
file = r.file
# ignore filtered sources
if sources is not None:
if not any(
os.path.abspath(file) == os.path.abspath(s)
for s in sources):
continue
else:
# default to only cwd
if not everything and not os.path.commonpath([
os.getcwd(),
os.path.abspath(file)]) == os.getcwd():
continue
# simplify path
if os.path.commonpath([
os.getcwd(),
os.path.abspath(file)]) == os.getcwd():
file = os.path.relpath(file)
else:
file = os.path.abspath(file)
results.append(r._replace(file=file))
return results
def fold(Result, results, *,
by=None,
defines=None,
**_):
if by is None:
by = Result._by
for k in it.chain(by or [], (k for k, _ in defines or [])):
if k not in Result._by and k not in Result._fields:
print("error: could not find field %r?" % k)
sys.exit(-1)
# filter by matching defines
if defines is not None:
results_ = []
for r in results:
if all(getattr(r, k) in vs for k, vs in defines):
results_.append(r)
results = results_
# organize results into conflicts
folding = co.OrderedDict()
for r in results:
name = tuple(getattr(r, k) for k in by)
if name not in folding:
folding[name] = []
folding[name].append(r)
# merge conflicts
folded = []
for name, rs in folding.items():
folded.append(sum(rs[1:], start=rs[0]))
return folded
def table(Result, results, diff_results=None, *,
by=None,
fields=None,
sort=None,
summary=False,
all=False,
percent=False,
**_):
all_, all = all, __builtins__.all
if by is None:
by = Result._by
if fields is None:
fields = Result._fields
types = Result._types
# fold again
results = fold(Result, results, by=by)
if diff_results is not None:
diff_results = fold(Result, diff_results, by=by)
# organize by name
table = {
','.join(str(getattr(r, k) or '') for k in by): r
for r in results}
diff_table = {
','.join(str(getattr(r, k) or '') for k in by): r
for r in diff_results or []}
names = list(table.keys() | diff_table.keys())
# sort again, now with diff info, note that python's sort is stable
names.sort()
if diff_results is not None:
names.sort(key=lambda n: tuple(
types[k].ratio(
getattr(table.get(n), k, None),
getattr(diff_table.get(n), k, None))
for k in fields),
reverse=True)
if sort:
for k, reverse in reversed(sort):
names.sort(
key=lambda n: tuple(
(getattr(table[n], k),)
if getattr(table.get(n), k, None) is not None else ()
for k in ([k] if k else [
k for k in Result._sort if k in fields])),
reverse=reverse ^ (not k or k in Result._fields))
# build up our lines
lines = []
# header
header = []
header.append('%s%s' % (
','.join(by),
' (%d added, %d removed)' % (
sum(1 for n in table if n not in diff_table),
sum(1 for n in diff_table if n not in table))
if diff_results is not None and not percent else '')
if not summary else '')
if diff_results is None:
for k in fields:
header.append(k)
elif percent:
for k in fields:
header.append(k)
else:
for k in fields:
header.append('o'+k)
for k in fields:
header.append('n'+k)
for k in fields:
header.append('d'+k)
header.append('')
lines.append(header)
def table_entry(name, r, diff_r=None, ratios=[]):
entry = []
entry.append(name)
if diff_results is None:
for k in fields:
entry.append(getattr(r, k).table()
if getattr(r, k, None) is not None
else types[k].none)
elif percent:
for k in fields:
entry.append(getattr(r, k).diff_table()
if getattr(r, k, None) is not None
else types[k].diff_none)
else:
for k in fields:
entry.append(getattr(diff_r, k).diff_table()
if getattr(diff_r, k, None) is not None
else types[k].diff_none)
for k in fields:
entry.append(getattr(r, k).diff_table()
if getattr(r, k, None) is not None
else types[k].diff_none)
for k in fields:
entry.append(types[k].diff_diff(
getattr(r, k, None),
getattr(diff_r, k, None)))
if diff_results is None:
entry.append('')
elif percent:
entry.append(' (%s)' % ', '.join(
'+∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%+.1f%%' % (100*t)
for t in ratios))
else:
entry.append(' (%s)' % ', '.join(
'+∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%+.1f%%' % (100*t)
for t in ratios
if t)
if any(ratios) else '')
return entry
# entries
if not summary:
for name in names:
r = table.get(name)
if diff_results is None:
diff_r = None
ratios = None
else:
diff_r = diff_table.get(name)
ratios = [
types[k].ratio(
getattr(r, k, None),
getattr(diff_r, k, None))
for k in fields]
if not all_ and not any(ratios):
continue
lines.append(table_entry(name, r, diff_r, ratios))
# total
r = next(iter(fold(Result, results, by=[])), None)
if diff_results is None:
diff_r = None
ratios = None
else:
diff_r = next(iter(fold(Result, diff_results, by=[])), None)
ratios = [
types[k].ratio(
getattr(r, k, None),
getattr(diff_r, k, None))
for k in fields]
lines.append(table_entry('TOTAL', r, diff_r, ratios))
# find the best widths, note that column 0 contains the names and column -1
# the ratios, so those are handled a bit differently
widths = [
((max(it.chain([w], (len(l[i]) for l in lines)))+1+4-1)//4)*4-1
for w, i in zip(
it.chain([23], it.repeat(7)),
range(len(lines[0])-1))]
# print our table
for line in lines:
print('%-*s %s%s' % (
widths[0], line[0],
' '.join('%*s' % (w, x)
for w, x in zip(widths[1:], line[1:-1])),
line[-1]))
def main(obj_paths, *,
by=None,
fields=None,
defines=None,
sort=None,
**args):
# find sizes
if not args.get('use', None):
results = collect(obj_paths, **args)
else:
results = []
with openio(args['use']) as f:
reader = csv.DictReader(f, restval='')
for r in reader:
if not any('data_'+k in r and r['data_'+k].strip()
for k in DataResult._fields):
continue
try:
results.append(DataResult(
**{k: r[k] for k in DataResult._by
if k in r and r[k].strip()},
**{k: r['data_'+k] for k in DataResult._fields
if 'data_'+k in r and r['data_'+k].strip()}))
except TypeError:
pass
# fold
results = fold(DataResult, results, by=by, defines=defines)
# sort, note that python's sort is stable
results.sort()
if sort:
for k, reverse in reversed(sort):
results.sort(
key=lambda r: tuple(
(getattr(r, k),) if getattr(r, k) is not None else ()
for k in ([k] if k else DataResult._sort)),
reverse=reverse ^ (not k or k in DataResult._fields))
# write results to CSV
if args.get('output'):
with openio(args['output'], 'w') as f:
writer = csv.DictWriter(f,
(by if by is not None else DataResult._by)
+ ['data_'+k for k in (
fields if fields is not None else DataResult._fields)])
writer.writeheader()
for r in results:
writer.writerow(
{k: getattr(r, k) for k in (
by if by is not None else DataResult._by)}
| {'data_'+k: getattr(r, k) for k in (
fields if fields is not None else DataResult._fields)})
# find previous results?
if args.get('diff'):
diff_results = []
try:
with openio(args['diff']) as f:
reader = csv.DictReader(f, restval='')
for r in reader:
if not any('data_'+k in r and r['data_'+k].strip()
for k in DataResult._fields):
continue
try:
diff_results.append(DataResult(
**{k: r[k] for k in DataResult._by
if k in r and r[k].strip()},
**{k: r['data_'+k] for k in DataResult._fields
if 'data_'+k in r and r['data_'+k].strip()}))
except TypeError:
pass
except FileNotFoundError:
pass
# fold
diff_results = fold(DataResult, diff_results, by=by, defines=defines)
# print table
if not args.get('quiet'):
table(DataResult, results,
diff_results if args.get('diff') else None,
by=by if by is not None else ['function'],
fields=fields,
sort=sort,
**args)
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Find data size at the function level.",
allow_abbrev=False)
parser.add_argument(
'obj_paths',
nargs='*',
help="Input *.o files.")
parser.add_argument(
'-v', '--verbose',
action='store_true',
help="Output commands that run behind the scenes.")
parser.add_argument(
'-q', '--quiet',
action='store_true',
help="Don't show anything, useful with -o.")
parser.add_argument(
'-o', '--output',
help="Specify CSV file to store results.")
parser.add_argument(
'-u', '--use',
help="Don't parse anything, use this CSV file.")
parser.add_argument(
'-d', '--diff',
help="Specify CSV file to diff against.")
parser.add_argument(
'-a', '--all',
action='store_true',
help="Show all, not just the ones that changed.")
parser.add_argument(
'-p', '--percent',
action='store_true',
help="Only show percentage change, not a full diff.")
parser.add_argument(
'-b', '--by',
action='append',
choices=DataResult._by,
help="Group by this field.")
parser.add_argument(
'-f', '--field',
dest='fields',
action='append',
choices=DataResult._fields,
help="Show this field.")
parser.add_argument(
'-D', '--define',
dest='defines',
action='append',
type=lambda x: (lambda k,v: (k, set(v.split(','))))(*x.split('=', 1)),
help="Only include results where this field is this value.")
class AppendSort(argparse.Action):
def __call__(self, parser, namespace, value, option):
if namespace.sort is None:
namespace.sort = []
namespace.sort.append((value, True if option == '-S' else False))
parser.add_argument(
'-s', '--sort',
nargs='?',
action=AppendSort,
help="Sort by this field.")
parser.add_argument(
'-S', '--reverse-sort',
nargs='?',
action=AppendSort,
help="Sort by this field, but backwards.")
parser.add_argument(
'-Y', '--summary',
action='store_true',
help="Only show the total.")
parser.add_argument(
'-F', '--source',
dest='sources',
action='append',
help="Only consider definitions in this file. Defaults to anything "
"in the current directory.")
parser.add_argument(
'--everything',
action='store_true',
help="Include builtin and libc specific symbols.")
parser.add_argument(
'--nm-types',
default=NM_TYPES,
help="Type of symbols to report, this uses the same single-character "
"type-names emitted by nm. Defaults to %r." % NM_TYPES)
parser.add_argument(
'--nm-path',
type=lambda x: x.split(),
default=NM_PATH,
help="Path to the nm executable, may include flags. "
"Defaults to %r." % NM_PATH)
parser.add_argument(
'--objdump-path',
type=lambda x: x.split(),
default=OBJDUMP_PATH,
help="Path to the objdump executable, may include flags. "
"Defaults to %r." % OBJDUMP_PATH)
sys.exit(main(**{k: v
for k, v in vars(parser.parse_intermixed_args()).items()
if v is not None}))
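
Aside: like code.py, data.py can save its results with -o and compare against them later with -d; the CSV just prefixes measured fields with the script name (data_size here, code_size for code.py). An illustrative sketch of diffing two such CSVs by hand follows, assuming hypothetical old.csv and new.csv files with file, function, and data_size columns:

#!/usr/bin/env python3
# Illustrative sketch of diffing two CSVs written by data.py -o, not part of
# the real scripts. Assumes hypothetical "old.csv"/"new.csv" inputs.
import csv

def read_sizes(path):
    sizes = {}
    with open(path) as f:
        for r in csv.DictReader(f):
            if r.get('data_size', '').strip():
                # sizes are written as plain integers, int(x, 0) also
                # tolerates hex
                sizes[(r['file'], r['function'])] = int(r['data_size'], 0)
    return sizes

if __name__ == "__main__":
    old = read_sizes('old.csv')
    new = read_sizes('new.csv')
    for name in sorted(old.keys() | new.keys()):
        diff = new.get(name, 0) - old.get(name, 0)
        if diff:
            print('%-36s %+7d' % (','.join(name), diff))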

1344
scripts/perf.py Executable file

File diff suppressed because it is too large

1276
scripts/perfbd.py Executable file

File diff suppressed because it is too large

1592
scripts/plot.py Executable file

File diff suppressed because it is too large

1262
scripts/plotmpl.py Executable file

File diff suppressed because it is too large

61
scripts/prefix.py

@@ -1,61 +0,0 @@
#!/usr/bin/env python
# This script replaces prefixes of files, and symbols in that file.
# Useful for creating different versions of the codebase that don't
# conflict at compile time.
#
# example:
# $ ./scripts/prefix.py lfs2
import os
import os.path
import re
import glob
import itertools
import tempfile
import shutil
import subprocess
DEFAULT_PREFIX = "lfs"
def subn(from_prefix, to_prefix, name):
name, count1 = re.subn('\\b'+from_prefix, to_prefix, name)
name, count2 = re.subn('\\b'+from_prefix.upper(), to_prefix.upper(), name)
name, count3 = re.subn('\\B-D'+from_prefix.upper(),
'-D'+to_prefix.upper(), name)
return name, count1+count2+count3
def main(from_prefix, to_prefix=None, files=None):
if not to_prefix:
from_prefix, to_prefix = DEFAULT_PREFIX, from_prefix
if not files:
files = subprocess.check_output([
'git', 'ls-tree', '-r', '--name-only', 'HEAD']).split()
for oldname in files:
# Rename any matching file names
newname, namecount = subn(from_prefix, to_prefix, oldname)
if namecount:
subprocess.check_call(['git', 'mv', oldname, newname])
# Rename any prefixes in file
count = 0
with open(newname+'~', 'w') as tempf:
with open(newname) as newf:
for line in newf:
line, n = subn(from_prefix, to_prefix, line)
count += n
tempf.write(line)
shutil.copystat(newname, newname+'~')
os.rename(newname+'~', newname)
subprocess.check_call(['git', 'add', newname])
# Summary
print '%s: %d replacements' % (
'%s -> %s' % (oldname, newname) if namecount else oldname,
count)
if __name__ == "__main__":
import sys
sys.exit(main(*sys.argv[1:]))
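
Aside: the removed prefix.py above was still Python 2 (note the print statement). Its core trick is a word-boundary rename of the prefix in file names, file contents, and -DLFS-style defines; an illustrative Python 3 sketch of its subn helper:

# Illustrative Python 3 sketch of the old prefix.py's subn helper, not the
# replacement script used in this diff.
import re

def subn(from_prefix, to_prefix, name):
    # replace lfs -> lfs2, LFS -> LFS2, and -DLFS... -> -DLFS2... defines
    name, count1 = re.subn(r'\b' + from_prefix, to_prefix, name)
    name, count2 = re.subn(r'\b' + from_prefix.upper(), to_prefix.upper(), name)
    name, count3 = re.subn(r'\B-D' + from_prefix.upper(),
        '-D' + to_prefix.upper(), name)
    return name, count1 + count2 + count3

if __name__ == "__main__":
    print(subn('lfs', 'lfs2', 'int lfs_file_open(lfs_t *lfs, ...);'))
    # -> ('int lfs2_file_open(lfs2_t *lfs2, ...);', 3)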

452
scripts/prettyasserts.py Executable file

@@ -0,0 +1,452 @@
#!/usr/bin/env python3
#
# Preprocessor that makes asserts easier to debug.
#
# Example:
# ./scripts/prettyasserts.py -p LFS_ASSERT lfs.c -o lfs.a.c
#
# Copyright (c) 2022, The littlefs authors.
# Copyright (c) 2020, Arm Limited. All rights reserved.
# SPDX-License-Identifier: BSD-3-Clause
#
import os
import re
import sys
# NOTE the use of macros here helps keep a consistent stack depth which
# tools may rely on.
#
# If compilation errors are noisy consider using -ftrack-macro-expansion=0.
#
LIMIT = 16
CMP = {
'==': 'eq',
'!=': 'ne',
'<=': 'le',
'>=': 'ge',
'<': 'lt',
'>': 'gt',
}
LEXEMES = {
'ws': [r'(?:\s|\n|#.*?\n|//.*?\n|/\*.*?\*/)+'],
'assert': ['assert'],
'arrow': ['=>'],
'string': [r'"(?:\\.|[^"])*"', r"'(?:\\.|[^'])\'"],
'paren': ['\(', '\)'],
'cmp': CMP.keys(),
'logic': ['\&\&', '\|\|'],
'sep': [':', ';', '\{', '\}', ','],
'op': ['->'], # specifically ops that conflict with cmp
}
def openio(path, mode='r', buffering=-1):
# allow '-' for stdin/stdout
if path == '-':
if mode == 'r':
return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
else:
return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
else:
return open(path, mode, buffering)
def write_header(f, limit=LIMIT):
f.writeln("// Generated by %s:" % sys.argv[0])
f.writeln("//")
f.writeln("// %s" % ' '.join(sys.argv))
f.writeln("//")
f.writeln()
f.writeln("#include <stdbool.h>")
f.writeln("#include <stdint.h>")
f.writeln("#include <inttypes.h>")
f.writeln("#include <stdio.h>")
f.writeln("#include <string.h>")
f.writeln("#include <signal.h>")
# give source a chance to define feature macros
f.writeln("#undef _FEATURES_H")
f.writeln()
# write print macros
f.writeln("__attribute__((unused))")
f.writeln("static void __pretty_assert_print_bool(")
f.writeln(" const void *v, size_t size) {")
f.writeln(" (void)size;")
f.writeln(" printf(\"%s\", *(const bool*)v ? \"true\" : \"false\");")
f.writeln("}")
f.writeln()
f.writeln("__attribute__((unused))")
f.writeln("static void __pretty_assert_print_int(")
f.writeln(" const void *v, size_t size) {")
f.writeln(" (void)size;")
f.writeln(" printf(\"%\"PRIiMAX, *(const intmax_t*)v);")
f.writeln("}")
f.writeln()
f.writeln("__attribute__((unused))")
f.writeln("static void __pretty_assert_print_mem(")
f.writeln(" const void *v, size_t size) {")
f.writeln(" const uint8_t *v_ = v;")
f.writeln(" printf(\"\\\"\");")
f.writeln(" for (size_t i = 0; i < size && i < %d; i++) {" % limit)
f.writeln(" if (v_[i] >= ' ' && v_[i] <= '~') {")
f.writeln(" printf(\"%c\", v_[i]);")
f.writeln(" } else {")
f.writeln(" printf(\"\\\\x%02x\", v_[i]);")
f.writeln(" }")
f.writeln(" }")
f.writeln(" if (size > %d) {" % limit)
f.writeln(" printf(\"...\");")
f.writeln(" }")
f.writeln(" printf(\"\\\"\");")
f.writeln("}")
f.writeln()
f.writeln("__attribute__((unused))")
f.writeln("static void __pretty_assert_print_str(")
f.writeln(" const void *v, size_t size) {")
f.writeln(" __pretty_assert_print_mem(v, size);")
f.writeln("}")
f.writeln()
f.writeln("__attribute__((unused, noinline))")
f.writeln("static void __pretty_assert_fail(")
f.writeln(" const char *file, int line,")
f.writeln(" void (*type_print_cb)(const void*, size_t),")
f.writeln(" const char *cmp,")
f.writeln(" const void *lh, size_t lsize,")
f.writeln(" const void *rh, size_t rsize) {")
f.writeln(" printf(\"%s:%d:assert: assert failed with \", file, line);")
f.writeln(" type_print_cb(lh, lsize);")
f.writeln(" printf(\", expected %s \", cmp);")
f.writeln(" type_print_cb(rh, rsize);")
f.writeln(" printf(\"\\n\");")
f.writeln(" fflush(NULL);")
f.writeln(" raise(SIGABRT);")
f.writeln("}")
f.writeln()
# write assert macros
for op, cmp in sorted(CMP.items()):
f.writeln("#define __PRETTY_ASSERT_BOOL_%s(lh, rh) do { \\"
% cmp.upper())
f.writeln(" bool _lh = !!(lh); \\")
f.writeln(" bool _rh = !!(rh); \\")
f.writeln(" if (!(_lh %s _rh)) { \\" % op)
f.writeln(" __pretty_assert_fail( \\")
f.writeln(" __FILE__, __LINE__, \\")
f.writeln(" __pretty_assert_print_bool, \"%s\", \\"
% cmp)
f.writeln(" &_lh, 0, \\")
f.writeln(" &_rh, 0); \\")
f.writeln(" } \\")
f.writeln("} while (0)")
for op, cmp in sorted(CMP.items()):
f.writeln("#define __PRETTY_ASSERT_INT_%s(lh, rh) do { \\"
% cmp.upper())
f.writeln(" __typeof__(lh) _lh = lh; \\")
f.writeln(" __typeof__(lh) _rh = rh; \\")
f.writeln(" if (!(_lh %s _rh)) { \\" % op)
f.writeln(" __pretty_assert_fail( \\")
f.writeln(" __FILE__, __LINE__, \\")
f.writeln(" __pretty_assert_print_int, \"%s\", \\"
% cmp)
f.writeln(" &(intmax_t){_lh}, 0, \\")
f.writeln(" &(intmax_t){_rh}, 0); \\")
f.writeln(" } \\")
f.writeln("} while (0)")
for op, cmp in sorted(CMP.items()):
f.writeln("#define __PRETTY_ASSERT_MEM_%s(lh, rh, size) do { \\"
% cmp.upper())
f.writeln(" const void *_lh = lh; \\")
f.writeln(" const void *_rh = rh; \\")
f.writeln(" if (!(memcmp(_lh, _rh, size) %s 0)) { \\" % op)
f.writeln(" __pretty_assert_fail( \\")
f.writeln(" __FILE__, __LINE__, \\")
f.writeln(" __pretty_assert_print_mem, \"%s\", \\"
% cmp)
f.writeln(" _lh, size, \\")
f.writeln(" _rh, size); \\")
f.writeln(" } \\")
f.writeln("} while (0)")
for op, cmp in sorted(CMP.items()):
f.writeln("#define __PRETTY_ASSERT_STR_%s(lh, rh) do { \\"
% cmp.upper())
f.writeln(" const char *_lh = lh; \\")
f.writeln(" const char *_rh = rh; \\")
f.writeln(" if (!(strcmp(_lh, _rh) %s 0)) { \\" % op)
f.writeln(" __pretty_assert_fail( \\")
f.writeln(" __FILE__, __LINE__, \\")
f.writeln(" __pretty_assert_print_str, \"%s\", \\"
% cmp)
f.writeln(" _lh, strlen(_lh), \\")
f.writeln(" _rh, strlen(_rh)); \\")
f.writeln(" } \\")
f.writeln("} while (0)")
f.writeln()
f.writeln()
def mkassert(type, cmp, lh, rh, size=None):
if size is not None:
return ("__PRETTY_ASSERT_%s_%s(%s, %s, %s)"
% (type.upper(), cmp.upper(), lh, rh, size))
else:
return ("__PRETTY_ASSERT_%s_%s(%s, %s)"
% (type.upper(), cmp.upper(), lh, rh))
# simple recursive descent parser
class ParseFailure(Exception):
def __init__(self, expected, found):
self.expected = expected
self.found = found
def __str__(self):
return "expected %r, found %s..." % (
self.expected, repr(self.found)[:70])
class Parser:
def __init__(self, in_f, lexemes=LEXEMES):
p = '|'.join('(?P<%s>%s)' % (n, '|'.join(l))
for n, l in lexemes.items())
p = re.compile(p, re.DOTALL)
data = in_f.read()
tokens = []
line = 1
col = 0
while True:
m = p.search(data)
if m:
if m.start() > 0:
tokens.append((None, data[:m.start()], line, col))
tokens.append((m.lastgroup, m.group(), line, col))
data = data[m.end():]
else:
tokens.append((None, data, line, col))
break
self.tokens = tokens
self.off = 0
def lookahead(self, *pattern):
if self.off < len(self.tokens):
token = self.tokens[self.off]
if token[0] in pattern or token[1] in pattern:
self.m = token[1]
return self.m
self.m = None
return self.m
def accept(self, *patterns):
m = self.lookahead(*patterns)
if m is not None:
self.off += 1
return m
def expect(self, *patterns):
m = self.accept(*patterns)
if not m:
raise ParseFailure(patterns, self.tokens[self.off:])
return m
def push(self):
return self.off
def pop(self, state):
self.off = state
def p_assert(p):
state = p.push()
# assert(memcmp(a,b,size) cmp 0)?
try:
p.expect('assert') ; p.accept('ws')
p.expect('(') ; p.accept('ws')
p.expect('memcmp') ; p.accept('ws')
p.expect('(') ; p.accept('ws')
lh = p_expr(p) ; p.accept('ws')
p.expect(',') ; p.accept('ws')
rh = p_expr(p) ; p.accept('ws')
p.expect(',') ; p.accept('ws')
size = p_expr(p) ; p.accept('ws')
p.expect(')') ; p.accept('ws')
cmp = p.expect('cmp') ; p.accept('ws')
p.expect('0') ; p.accept('ws')
p.expect(')')
return mkassert('mem', CMP[cmp], lh, rh, size)
except ParseFailure:
p.pop(state)
# assert(strcmp(a,b) cmp 0)?
try:
p.expect('assert') ; p.accept('ws')
p.expect('(') ; p.accept('ws')
p.expect('strcmp') ; p.accept('ws')
p.expect('(') ; p.accept('ws')
lh = p_expr(p) ; p.accept('ws')
p.expect(',') ; p.accept('ws')
rh = p_expr(p) ; p.accept('ws')
p.expect(')') ; p.accept('ws')
cmp = p.expect('cmp') ; p.accept('ws')
p.expect('0') ; p.accept('ws')
p.expect(')')
return mkassert('str', CMP[cmp], lh, rh)
except ParseFailure:
p.pop(state)
# assert(a cmp b)?
try:
p.expect('assert') ; p.accept('ws')
p.expect('(') ; p.accept('ws')
lh = p_expr(p) ; p.accept('ws')
cmp = p.expect('cmp') ; p.accept('ws')
rh = p_expr(p) ; p.accept('ws')
p.expect(')')
return mkassert('int', CMP[cmp], lh, rh)
except ParseFailure:
p.pop(state)
# assert(a)?
p.expect('assert') ; p.accept('ws')
p.expect('(') ; p.accept('ws')
lh = p_exprs(p) ; p.accept('ws')
p.expect(')')
return mkassert('bool', 'eq', lh, 'true')
def p_expr(p):
res = []
while True:
if p.accept('('):
res.append(p.m)
while True:
res.append(p_exprs(p))
if p.accept('sep'):
res.append(p.m)
else:
break
res.append(p.expect(')'))
elif p.lookahead('assert'):
state = p.push()
try:
res.append(p_assert(p))
except ParseFailure:
p.pop(state)
res.append(p.expect('assert'))
elif p.accept('string', 'op', 'ws', None):
res.append(p.m)
else:
return ''.join(res)
def p_exprs(p):
res = []
while True:
res.append(p_expr(p))
if p.accept('cmp', 'logic', ','):
res.append(p.m)
else:
return ''.join(res)
def p_stmt(p):
ws = p.accept('ws') or ''
# memcmp(lh,rh,size) => 0?
if p.lookahead('memcmp'):
state = p.push()
try:
p.expect('memcmp') ; p.accept('ws')
p.expect('(') ; p.accept('ws')
lh = p_expr(p) ; p.accept('ws')
p.expect(',') ; p.accept('ws')
rh = p_expr(p) ; p.accept('ws')
p.expect(',') ; p.accept('ws')
size = p_expr(p) ; p.accept('ws')
p.expect(')') ; p.accept('ws')
p.expect('=>') ; p.accept('ws')
p.expect('0') ; p.accept('ws')
return ws + mkassert('mem', 'eq', lh, rh, size)
except ParseFailure:
p.pop(state)
# strcmp(lh,rh) => 0?
if p.lookahead('strcmp'):
state = p.push()
try:
p.expect('strcmp') ; p.accept('ws') ; p.expect('(') ; p.accept('ws')
lh = p_expr(p) ; p.accept('ws')
p.expect(',') ; p.accept('ws')
rh = p_expr(p) ; p.accept('ws')
p.expect(')') ; p.accept('ws')
p.expect('=>') ; p.accept('ws')
p.expect('0') ; p.accept('ws')
return ws + mkassert('str', 'eq', lh, rh)
except ParseFailure:
p.pop(state)
# lh => rh?
lh = p_exprs(p)
if p.accept('=>'):
rh = p_exprs(p)
return ws + mkassert('int', 'eq', lh, rh)
else:
return ws + lh
def main(input=None, output=None, pattern=[], limit=LIMIT):
with openio(input or '-', 'r') as in_f:
# create parser
lexemes = LEXEMES.copy()
lexemes['assert'] += pattern
p = Parser(in_f, lexemes)
with openio(output or '-', 'w') as f:
def writeln(s=''):
f.write(s)
f.write('\n')
f.writeln = writeln
# write extra verbose asserts
write_header(f, limit=limit)
if input is not None:
f.writeln("#line %d \"%s\"" % (1, input))
# parse and write out stmt at a time
try:
while True:
f.write(p_stmt(p))
if p.accept('sep'):
f.write(p.m)
else:
break
except ParseFailure as e:
print('warning: %s' % e)
pass
for i in range(p.off, len(p.tokens)):
f.write(p.tokens[i][1])
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Preprocessor that makes asserts easier to debug.",
allow_abbrev=False)
parser.add_argument(
'input',
help="Input C file.")
parser.add_argument(
'-o', '--output',
required=True,
help="Output C file.")
parser.add_argument(
'-p', '--pattern',
action='append',
help="Regex patterns to search for starting an assert statement. This"
" implicitly includes \"assert\" and \"=>\".")
parser.add_argument(
'-l', '--limit',
type=lambda x: int(x, 0),
default=LIMIT,
help="Maximum number of characters to display in strcmp and memcmp. "
"Defaults to %r." % LIMIT)
sys.exit(main(**{k: v
for k, v in vars(parser.parse_intermixed_args()).items()
if v is not None}))
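# A minimal usage sketch of the rewriter above (assuming prettyasserts.py is
# importable as a module, e.g. when run from the scripts/ directory, and that
# CMP maps '==' to 'eq'): feed one statement through the parser and print the
# rewritten form.
import io
import prettyasserts

p = prettyasserts.Parser(io.StringIO("assert(strcmp(a, b) == 0)"))
# should print something like: __PRETTY_ASSERT_STR_EQ(a, b)
print(prettyasserts.p_stmt(p))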

26
scripts/readblock.py Executable file

@@ -0,0 +1,26 @@
#!/usr/bin/env python3
import subprocess as sp
def main(args):
with open(args.disk, 'rb') as f:
f.seek(args.block * args.block_size)
block = (f.read(args.block_size)
.ljust(args.block_size, b'\xff'))
# what did you expect?
print("%-8s %-s" % ('off', 'data'))
return sp.run(['xxd', '-g1', '-'], input=block).returncode
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Hex dump a specific block in a disk.")
parser.add_argument('disk',
help="File representing the block device.")
parser.add_argument('block_size', type=lambda x: int(x, 0),
help="Size of a block in bytes.")
parser.add_argument('block', type=lambda x: int(x, 0),
help="Address of block to dump.")
sys.exit(main(parser.parse_args()))
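# A minimal usage sketch (assuming readblock.py is importable, xxd is on PATH,
# and a disk image named "disk" with 4096-byte blocks exists; both names are
# made up). This is roughly equivalent to: ./scripts/readblock.py disk 4096 0
import argparse
import readblock

readblock.main(argparse.Namespace(disk='disk', block_size=4096, block=0))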

399
scripts/readmdir.py Executable file

@@ -0,0 +1,399 @@
#!/usr/bin/env python3
import struct
import binascii
import sys
import itertools as it
TAG_TYPES = {
'splice': (0x700, 0x400),
'create': (0x7ff, 0x401),
'delete': (0x7ff, 0x4ff),
'name': (0x700, 0x000),
'reg': (0x7ff, 0x001),
'dir': (0x7ff, 0x002),
'superblock': (0x7ff, 0x0ff),
'struct': (0x700, 0x200),
'dirstruct': (0x7ff, 0x200),
'ctzstruct': (0x7ff, 0x202),
'inlinestruct': (0x7ff, 0x201),
'userattr': (0x700, 0x300),
'tail': (0x700, 0x600),
'softtail': (0x7ff, 0x600),
'hardtail': (0x7ff, 0x601),
'gstate': (0x700, 0x700),
'movestate': (0x7ff, 0x7ff),
'crc': (0x700, 0x500),
'ccrc': (0x780, 0x500),
'fcrc': (0x7ff, 0x5ff),
}
class Tag:
def __init__(self, *args):
if len(args) == 1:
self.tag = args[0]
elif len(args) == 3:
if isinstance(args[0], str):
type = TAG_TYPES[args[0]][1]
else:
type = args[0]
if isinstance(args[1], str):
id = int(args[1], 0) if args[1] not in 'x.' else 0x3ff
else:
id = args[1]
if isinstance(args[2], str):
size = int(args[2], 0) if args[2] not in 'x.' else 0x3ff
else:
size = args[2]
self.tag = (type << 20) | (id << 10) | size
else:
assert False
@property
def isvalid(self):
return not bool(self.tag & 0x80000000)
@property
def isattr(self):
return not bool(self.tag & 0x40000000)
@property
def iscompactable(self):
return bool(self.tag & 0x20000000)
@property
def isunique(self):
return not bool(self.tag & 0x10000000)
@property
def type(self):
return (self.tag & 0x7ff00000) >> 20
@property
def type1(self):
return (self.tag & 0x70000000) >> 20
@property
def type3(self):
return (self.tag & 0x7ff00000) >> 20
@property
def id(self):
return (self.tag & 0x000ffc00) >> 10
@property
def size(self):
return (self.tag & 0x000003ff) >> 0
@property
def dsize(self):
return 4 + (self.size if self.size != 0x3ff else 0)
@property
def chunk(self):
return self.type & 0xff
@property
def schunk(self):
return struct.unpack('b', struct.pack('B', self.chunk))[0]
def is_(self, type):
try:
if ' ' in type:
type1, type3 = type.split()
return (self.is_(type1) and
(self.type & ~TAG_TYPES[type1][0]) == int(type3, 0))
return self.type == int(type, 0)
except (ValueError, KeyError):
return (self.type & TAG_TYPES[type][0]) == TAG_TYPES[type][1]
def mkmask(self):
return Tag(
0x700 if self.isunique else 0x7ff,
0x3ff if self.isattr else 0,
0)
def chid(self, nid):
ntag = Tag(self.type, nid, self.size)
if hasattr(self, 'off'): ntag.off = self.off
if hasattr(self, 'data'): ntag.data = self.data
if hasattr(self, 'crc'): ntag.crc = self.crc
if hasattr(self, 'erased'): ntag.erased = self.erased
return ntag
def typerepr(self):
if (self.is_('ccrc')
and getattr(self, 'crc', 0xffffffff) != 0xffffffff):
crc_status = ' (bad)'
elif self.is_('fcrc') and getattr(self, 'erased', False):
crc_status = ' (era)'
else:
crc_status = ''
reverse_types = {v: k for k, v in TAG_TYPES.items()}
for prefix in range(12):
mask = 0x7ff & ~((1 << prefix)-1)
if (mask, self.type & mask) in reverse_types:
type = reverse_types[mask, self.type & mask]
if prefix > 0:
return '%s %#x%s' % (
type, self.type & ((1 << prefix)-1), crc_status)
else:
return '%s%s' % (type, crc_status)
else:
return '%02x%s' % (self.type, crc_status)
def idrepr(self):
return repr(self.id) if self.id != 0x3ff else '.'
def sizerepr(self):
return repr(self.size) if self.size != 0x3ff else 'x'
def __repr__(self):
return 'Tag(%r, %d, %d)' % (self.typerepr(), self.id, self.size)
def __lt__(self, other):
return (self.id, self.type) < (other.id, other.type)
def __bool__(self):
return self.isvalid
def __int__(self):
return self.tag
def __index__(self):
return self.tag
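# A quick illustration of the tag encoding above (illustrative values only):
# a tag packs an 11-bit type, a 10-bit id, and a 10-bit size into one
# 32-bit word.
t = Tag('ctzstruct', 2, 8)
assert int(t) == ((0x202 << 20) | (2 << 10) | 8)
print(repr(t), hex(int(t)))   # Tag('ctzstruct', 2, 8) 0x20200808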
class MetadataPair:
def __init__(self, blocks):
if len(blocks) > 1:
self.pair = [MetadataPair([block]) for block in blocks]
self.pair = sorted(self.pair, reverse=True)
self.data = self.pair[0].data
self.rev = self.pair[0].rev
self.tags = self.pair[0].tags
self.ids = self.pair[0].ids
self.log = self.pair[0].log
self.all_ = self.pair[0].all_
return
self.pair = [self]
self.data = blocks[0]
block = self.data
self.rev, = struct.unpack('<I', block[0:4])
crc = binascii.crc32(block[0:4])
fcrctag = None
fcrcdata = None
# parse tags
corrupt = False
tag = Tag(0xffffffff)
off = 4
self.log = []
self.all_ = []
while len(block) - off >= 4:
ntag, = struct.unpack('>I', block[off:off+4])
tag = Tag((int(tag) ^ ntag) & 0x7fffffff)
tag.off = off + 4
tag.data = block[off+4:off+tag.dsize]
if tag.is_('ccrc'):
crc = binascii.crc32(block[off:off+2*4], crc)
else:
crc = binascii.crc32(block[off:off+tag.dsize], crc)
tag.crc = crc
off += tag.dsize
self.all_.append(tag)
if tag.is_('fcrc') and len(tag.data) == 8:
fcrctag = tag
fcrcdata = struct.unpack('<II', tag.data)
elif tag.is_('ccrc'):
# is valid commit?
if crc != 0xffffffff:
corrupt = True
if not corrupt:
self.log = self.all_.copy()
# end of commit?
if fcrcdata:
fcrcsize, fcrc = fcrcdata
fcrc_ = 0xffffffff ^ binascii.crc32(
block[off:off+fcrcsize])
if fcrc_ == fcrc:
fcrctag.erased = True
corrupt = True
# reset tag parsing
crc = 0
tag = Tag(int(tag) ^ ((tag.type & 1) << 31))
fcrctag = None
fcrcdata = None
# find active ids
self.ids = list(it.takewhile(
lambda id: Tag('name', id, 0) in self,
it.count()))
# find most recent tags
self.tags = []
for tag in self.log:
if tag.is_('crc') or tag.is_('splice'):
continue
elif tag.id == 0x3ff:
if tag in self and self[tag] is tag:
self.tags.append(tag)
else:
# id could have changed, I know this is messy and slow
# but it works
for id in self.ids:
ntag = tag.chid(id)
if ntag in self and self[ntag] is tag:
self.tags.append(ntag)
self.tags = sorted(self.tags)
def __bool__(self):
return bool(self.log)
def __lt__(self, other):
# corrupt blocks don't count
if not self or not other:
return bool(other)
# use sequence arithmetic to avoid overflow
return not ((other.rev - self.rev) & 0x80000000)
def __contains__(self, args):
try:
self[args]
return True
except KeyError:
return False
def __getitem__(self, args):
if isinstance(args, tuple):
gmask, gtag = args
else:
gmask, gtag = args.mkmask(), args
gdiff = 0
for tag in reversed(self.log):
if (gmask.id != 0 and tag.is_('splice') and
tag.id <= gtag.id - gdiff):
if tag.is_('create') and tag.id == gtag.id - gdiff:
# creation point
break
gdiff += tag.schunk
if ((int(gmask) & int(tag)) ==
(int(gmask) & int(gtag.chid(gtag.id - gdiff)))):
if tag.size == 0x3ff:
# deleted
break
return tag
raise KeyError(gmask, gtag)
def _dump_tags(self, tags, f=sys.stdout, truncate=True):
f.write("%-8s %-8s %-13s %4s %4s" % (
'off', 'tag', 'type', 'id', 'len'))
if truncate:
f.write(' data (truncated)')
f.write('\n')
for tag in tags:
f.write("%08x: %08x %-14s %3s %4s" % (
tag.off, tag,
tag.typerepr(), tag.idrepr(), tag.sizerepr()))
if truncate:
f.write(" %-23s %-8s\n" % (
' '.join('%02x' % c for c in tag.data[:8]),
''.join(c if c >= ' ' and c <= '~' else '.'
for c in map(chr, tag.data[:8]))))
else:
f.write("\n")
for i in range(0, len(tag.data), 16):
f.write(" %08x: %-47s %-16s\n" % (
tag.off+i,
' '.join('%02x' % c for c in tag.data[i:i+16]),
''.join(c if c >= ' ' and c <= '~' else '.'
for c in map(chr, tag.data[i:i+16]))))
def dump_tags(self, f=sys.stdout, truncate=True):
self._dump_tags(self.tags, f=f, truncate=truncate)
def dump_log(self, f=sys.stdout, truncate=True):
self._dump_tags(self.log, f=f, truncate=truncate)
def dump_all(self, f=sys.stdout, truncate=True):
self._dump_tags(self.all_, f=f, truncate=truncate)
def main(args):
blocks = []
with open(args.disk, 'rb') as f:
for block in [args.block1, args.block2]:
if block is None:
continue
f.seek(block * args.block_size)
blocks.append(f.read(args.block_size)
.ljust(args.block_size, b'\xff'))
# find most recent pair
mdir = MetadataPair(blocks)
try:
mdir.tail = mdir[Tag('tail', 0, 0)]
if mdir.tail.size != 8 or mdir.tail.data == 8*b'\xff':
mdir.tail = None
except KeyError:
mdir.tail = None
print("mdir {%s} rev %d%s%s%s" % (
', '.join('%#x' % b
for b in [args.block1, args.block2]
if b is not None),
mdir.rev,
' (was %s)' % ', '.join('%d' % m.rev for m in mdir.pair[1:])
if len(mdir.pair) > 1 else '',
' (corrupted!)' if not mdir else '',
' -> {%#x, %#x}' % struct.unpack('<II', mdir.tail.data)
if mdir.tail else ''))
if args.all:
mdir.dump_all(truncate=not args.no_truncate)
elif args.log:
mdir.dump_log(truncate=not args.no_truncate)
else:
mdir.dump_tags(truncate=not args.no_truncate)
return 0 if mdir else 1
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Dump useful info about metadata pairs in littlefs.")
parser.add_argument('disk',
help="File representing the block device.")
parser.add_argument('block_size', type=lambda x: int(x, 0),
help="Size of a block in bytes.")
parser.add_argument('block1', type=lambda x: int(x, 0),
help="First block address for finding the metadata pair.")
parser.add_argument('block2', nargs='?', type=lambda x: int(x, 0),
help="Second block address for finding the metadata pair.")
parser.add_argument('-l', '--log', action='store_true',
help="Show tags in log.")
parser.add_argument('-a', '--all', action='store_true',
help="Show all tags in log, included tags in corrupted commits.")
parser.add_argument('-T', '--no-truncate', action='store_true',
help="Don't truncate large amounts of data.")
sys.exit(main(parser.parse_args()))
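# A minimal programmatic sketch (assuming readmdir.py is importable, as
# readtree.py below does, and that a littlefs image "disk" with 4096-byte
# blocks exists; both names are made up): fetch the root metadata pair and
# poke at it.
from readmdir import Tag, MetadataPair

with open('disk', 'rb') as f:
    blocks = []
    for b in [0, 1]:
        f.seek(b * 4096)
        blocks.append(f.read(4096).ljust(4096, b'\xff'))

mdir = MetadataPair(blocks)
print('rev %d, %d ids' % (mdir.rev, len(mdir.ids)))
if Tag('tail', 0, 0) in mdir:
    print('tail -> %s' % mdir[Tag('tail', 0, 0)].data.hex())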

183
scripts/readtree.py Executable file

@@ -0,0 +1,183 @@
#!/usr/bin/env python3
import struct
import sys
import json
import io
import itertools as it
from readmdir import Tag, MetadataPair
def main(args):
superblock = None
gstate = b'\0\0\0\0\0\0\0\0\0\0\0\0'
dirs = []
mdirs = []
corrupted = []
cycle = False
with open(args.disk, 'rb') as f:
tail = (args.block1, args.block2)
hard = False
while True:
for m in it.chain((m for d in dirs for m in d), mdirs):
if set(m.blocks) == set(tail):
# cycle detected
cycle = m.blocks
if cycle:
break
# load mdir
data = []
blocks = {}
for block in tail:
f.seek(block * args.block_size)
data.append(f.read(args.block_size)
.ljust(args.block_size, b'\xff'))
blocks[id(data[-1])] = block
mdir = MetadataPair(data)
mdir.blocks = tuple(blocks[id(p.data)] for p in mdir.pair)
# fetch some key metadata as we scan
try:
mdir.tail = mdir[Tag('tail', 0, 0)]
if mdir.tail.size != 8 or mdir.tail.data == 8*b'\xff':
mdir.tail = None
except KeyError:
mdir.tail = None
# have superblock?
try:
nsuperblock = mdir[
Tag(0x7ff, 0x3ff, 0), Tag('superblock', 0, 0)]
superblock = nsuperblock, mdir[Tag('inlinestruct', 0, 0)]
except KeyError:
pass
# have gstate?
try:
ngstate = mdir[Tag('movestate', 0, 0)]
gstate = bytes((a or 0) ^ (b or 0)
for a,b in it.zip_longest(gstate, ngstate.data))
except KeyError:
pass
# corrupted?
if not mdir:
corrupted.append(mdir)
# add to directories
mdirs.append(mdir)
if mdir.tail is None or not mdir.tail.is_('hardtail'):
dirs.append(mdirs)
mdirs = []
if mdir.tail is None:
break
tail = struct.unpack('<II', mdir.tail.data)
hard = mdir.tail.is_('hardtail')
# find paths
dirtable = {}
for dir in dirs:
dirtable[frozenset(dir[0].blocks)] = dir
pending = [("/", dirs[0])]
while pending:
path, dir = pending.pop(0)
for mdir in dir:
for tag in mdir.tags:
if tag.is_('dir'):
try:
npath = tag.data.decode('utf8')
dirstruct = mdir[Tag('dirstruct', tag.id, 0)]
nblocks = struct.unpack('<II', dirstruct.data)
nmdir = dirtable[frozenset(nblocks)]
pending.append(((path + '/' + npath), nmdir))
except KeyError:
pass
dir[0].path = path.replace('//', '/')
# print littlefs + version info
version = ('?', '?')
if superblock:
version = tuple(reversed(
struct.unpack('<HH', superblock[1].data[0:4].ljust(4, b'\xff'))))
print("%-47s%s" % ("littlefs v%s.%s" % version,
"data (truncated, if it fits)"
if not any([args.no_truncate, args.log, args.all]) else ""))
# print gstate
print("gstate 0x%s" % ''.join('%02x' % c for c in gstate))
tag = Tag(struct.unpack('<I', gstate[0:4].ljust(4, b'\xff'))[0])
blocks = struct.unpack('<II', gstate[4:4+8].ljust(8, b'\xff'))
if tag.size or not tag.isvalid:
print(" orphans >=%d" % max(tag.size, 1))
if tag.type:
print(" move dir {%#x, %#x} id %d" % (
blocks[0], blocks[1], tag.id))
# print mdir info
for i, dir in enumerate(dirs):
print("dir %s" % (json.dumps(dir[0].path)
if hasattr(dir[0], 'path') else '(orphan)'))
for j, mdir in enumerate(dir):
print("mdir {%#x, %#x} rev %d (was %d)%s%s" % (
mdir.blocks[0], mdir.blocks[1], mdir.rev, mdir.pair[1].rev,
' (corrupted!)' if not mdir else '',
' -> {%#x, %#x}' % struct.unpack('<II', mdir.tail.data)
if mdir.tail else ''))
f = io.StringIO()
if args.log:
mdir.dump_log(f, truncate=not args.no_truncate)
elif args.all:
mdir.dump_all(f, truncate=not args.no_truncate)
else:
mdir.dump_tags(f, truncate=not args.no_truncate)
lines = list(filter(None, f.getvalue().split('\n')))
for k, line in enumerate(lines):
print("%s %s" % (
' ' if j == len(dir)-1 else
'v' if k == len(lines)-1 else
'|',
line))
errcode = 0
for mdir in corrupted:
errcode = errcode or 1
print("*** corrupted mdir {%#x, %#x}! ***" % (
mdir.blocks[0], mdir.blocks[1]))
if cycle:
errcode = errcode or 2
print("*** cycle detected {%#x, %#x}! ***" % (
cycle[0], cycle[1]))
return errcode
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Dump semantic info about the metadata tree in littlefs")
parser.add_argument('disk',
help="File representing the block device.")
parser.add_argument('block_size', type=lambda x: int(x, 0),
help="Size of a block in bytes.")
parser.add_argument('block1', nargs='?', default=0,
type=lambda x: int(x, 0),
help="Optional first block address for finding the superblock.")
parser.add_argument('block2', nargs='?', default=1,
type=lambda x: int(x, 0),
help="Optional second block address for finding the superblock.")
parser.add_argument('-l', '--log', action='store_true',
help="Show tags in log.")
parser.add_argument('-a', '--all', action='store_true',
help="Show all tags in log, included tags in corrupted commits.")
parser.add_argument('-T', '--no-truncate', action='store_true',
help="Show the full contents of files/attrs/tags.")
sys.exit(main(parser.parse_args()))
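# A small standalone sketch of the gstate decoding in main above (the tag and
# block pair here are made up): the 12-byte gstate holds a 32-bit tag followed
# by a block pair, mirroring the orphan/move prints above.
import struct
from readmdir import Tag

gstate = struct.pack('<I', int(Tag('movestate', 3, 1))) + struct.pack('<II', 4, 5)
tag = Tag(struct.unpack('<I', gstate[0:4])[0])
blocks = struct.unpack('<II', gstate[4:12])
if tag.size or not tag.isvalid:
    print('orphans >=%d' % max(tag.size, 1))                     # orphans >=1
if tag.type:
    print('move dir {%#x, %#x} id %d' % (blocks[0], blocks[1], tag.id))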

735
scripts/stack.py Executable file

@@ -0,0 +1,735 @@
#!/usr/bin/env python3
#
# Script to find stack usage at the function level. Will detect recursion and
# report as infinite stack usage.
#
# Example:
# ./scripts/stack.py lfs.ci lfs_util.ci -Slimit
#
# Copyright (c) 2022, The littlefs authors.
# SPDX-License-Identifier: BSD-3-Clause
#
import collections as co
import csv
import itertools as it
import math as m
import os
import re
# integer fields
class Int(co.namedtuple('Int', 'x')):
__slots__ = ()
def __new__(cls, x=0):
if isinstance(x, Int):
return x
if isinstance(x, str):
try:
x = int(x, 0)
except ValueError:
# also accept +-∞ and +-inf
if re.match('^\s*\+?\s*(?:∞|inf)\s*$', x):
x = m.inf
elif re.match('^\s*-\s*(?:∞|inf)\s*$', x):
x = -m.inf
else:
raise
assert isinstance(x, int) or m.isinf(x), x
return super().__new__(cls, x)
def __str__(self):
if self.x == m.inf:
return '∞'
elif self.x == -m.inf:
return '-∞'
else:
return str(self.x)
def __int__(self):
assert not m.isinf(self.x)
return self.x
def __float__(self):
return float(self.x)
none = '%7s' % '-'
def table(self):
return '%7s' % (self,)
diff_none = '%7s' % '-'
diff_table = table
def diff_diff(self, other):
new = self.x if self else 0
old = other.x if other else 0
diff = new - old
if diff == +m.inf:
return '%7s' % '+∞'
elif diff == -m.inf:
return '%7s' % '-∞'
else:
return '%+7d' % diff
def ratio(self, other):
new = self.x if self else 0
old = other.x if other else 0
if m.isinf(new) and m.isinf(old):
return 0.0
elif m.isinf(new):
return +m.inf
elif m.isinf(old):
return -m.inf
elif not old and not new:
return 0.0
elif not old:
return 1.0
else:
return (new-old) / old
def __add__(self, other):
return self.__class__(self.x + other.x)
def __sub__(self, other):
return self.__class__(self.x - other.x)
def __mul__(self, other):
return self.__class__(self.x * other.x)
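# A quick illustration of the Int type above (illustrative only): recursion
# shows up as ∞, and diffs against a finite value render as +∞/-∞.
assert str(Int('inf')) == '∞' and str(Int('-inf')) == '-∞'
print(Int('∞').diff_diff(Int(16)).strip())   # +∞
print(Int(16).diff_diff(Int('∞')).strip())   # -∞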
# size results
class StackResult(co.namedtuple('StackResult', [
'file', 'function', 'frame', 'limit', 'children'])):
_by = ['file', 'function']
_fields = ['frame', 'limit']
_sort = ['limit', 'frame']
_types = {'frame': Int, 'limit': Int}
__slots__ = ()
def __new__(cls, file='', function='',
frame=0, limit=0, children=set()):
return super().__new__(cls, file, function,
Int(frame), Int(limit),
children)
def __add__(self, other):
return StackResult(self.file, self.function,
self.frame + other.frame,
max(self.limit, other.limit),
self.children | other.children)
def openio(path, mode='r', buffering=-1):
# allow '-' for stdin/stdout
if path == '-':
if mode == 'r':
return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
else:
return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
else:
return open(path, mode, buffering)
def collect(ci_paths, *,
sources=None,
everything=False,
**args):
# parse the vcg format
k_pattern = re.compile('([a-z]+)\s*:', re.DOTALL)
v_pattern = re.compile('(?:"(.*?)"|([a-z]+))', re.DOTALL)
def parse_vcg(rest):
def parse_vcg(rest):
node = []
while True:
rest = rest.lstrip()
m_ = k_pattern.match(rest)
if not m_:
return (node, rest)
k, rest = m_.group(1), rest[m_.end(0):]
rest = rest.lstrip()
if rest.startswith('{'):
v, rest = parse_vcg(rest[1:])
assert rest[0] == '}', "unexpected %r" % rest[0:1]
rest = rest[1:]
node.append((k, v))
else:
m_ = v_pattern.match(rest)
assert m_, "unexpected %r" % rest[0:1]
v, rest = m_.group(1) or m_.group(2), rest[m_.end(0):]
node.append((k, v))
node, rest = parse_vcg(rest)
assert rest == '', "unexpected %r" % rest[0:1]
return node
# collect into functions
callgraph = co.defaultdict(lambda: (None, None, 0, set()))
f_pattern = re.compile(
r'([^\\]*)\\n([^:]*)[^\\]*\\n([0-9]+) bytes \((.*)\)')
for path in ci_paths:
with open(path) as f:
vcg = parse_vcg(f.read())
for k, graph in vcg:
if k != 'graph':
continue
for k, info in graph:
if k == 'node':
info = dict(info)
m_ = f_pattern.match(info['label'])
if m_:
function, file, size, type = m_.groups()
if (not args.get('quiet')
and 'static' not in type
and 'bounded' not in type):
print("warning: "
"found non-static stack for %s (%s, %s)" % (
function, type, size))
_, _, _, targets = callgraph[info['title']]
callgraph[info['title']] = (
file, function, int(size), targets)
elif k == 'edge':
info = dict(info)
_, _, _, targets = callgraph[info['sourcename']]
targets.add(info['targetname'])
else:
continue
callgraph_ = co.defaultdict(lambda: (None, None, 0, set()))
for source, (s_file, s_function, frame, targets) in callgraph.items():
# discard internal functions
if not everything and s_function.startswith('__'):
continue
# ignore filtered sources
if sources is not None:
if not any(
os.path.abspath(s_file) == os.path.abspath(s)
for s in sources):
continue
else:
# default to only cwd
if not everything and not os.path.commonpath([
os.getcwd(),
os.path.abspath(s_file)]) == os.getcwd():
continue
# simplify path
if os.path.commonpath([
os.getcwd(),
os.path.abspath(s_file)]) == os.getcwd():
s_file = os.path.relpath(s_file)
else:
s_file = os.path.abspath(s_file)
callgraph_[source] = (s_file, s_function, frame, targets)
callgraph = callgraph_
if not everything:
callgraph_ = co.defaultdict(lambda: (None, None, 0, set()))
for source, (s_file, s_function, frame, targets) in callgraph.items():
# discard filtered sources
if sources is not None and not any(
os.path.abspath(s_file) == os.path.abspath(s)
for s in sources):
continue
# discard internal functions
if s_function.startswith('__'):
continue
callgraph_[source] = (s_file, s_function, frame, targets)
callgraph = callgraph_
# find maximum stack size recursively, this requires also detecting cycles
# (in case of recursion)
def find_limit(source, seen=None):
seen = seen or set()
if source not in callgraph:
return 0
_, _, frame, targets = callgraph[source]
limit = 0
for target in targets:
if target in seen:
# found a cycle
return m.inf
limit_ = find_limit(target, seen | {target})
limit = max(limit, limit_)
return frame + limit
def find_children(targets):
children = set()
for target in targets:
if target in callgraph:
t_file, t_function, _, _ = callgraph[target]
children.add((t_file, t_function))
return children
# build results
results = []
for source, (s_file, s_function, frame, targets) in callgraph.items():
limit = find_limit(source)
children = find_children(targets)
results.append(StackResult(s_file, s_function, frame, limit, children))
return results
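# A toy standalone illustration of find_limit above (the functions and frame
# sizes are made up): each function contributes its own frame plus its deepest
# callee, and any cycle reports as infinite stack usage.
import math as m

toy_frames = {'a': 16, 'b': 32, 'c': 8}
toy_calls = {'a': {'b'}, 'b': {'c'}, 'c': {'b'}}   # b and c recurse

def toy_limit(f, seen=frozenset()):
    if f in seen:
        # found a cycle
        return m.inf
    return toy_frames[f] + max(
        (toy_limit(t, seen | {f}) for t in toy_calls.get(f, ())),
        default=0)

print(toy_limit('a'))   # inf, because of the b <-> c cycle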
def fold(Result, results, *,
by=None,
defines=None,
**_):
if by is None:
by = Result._by
for k in it.chain(by or [], (k for k, _ in defines or [])):
if k not in Result._by and k not in Result._fields:
print("error: could not find field %r?" % k)
sys.exit(-1)
# filter by matching defines
if defines is not None:
results_ = []
for r in results:
if all(getattr(r, k) in vs for k, vs in defines):
results_.append(r)
results = results_
# organize results into conflicts
folding = co.OrderedDict()
for r in results:
name = tuple(getattr(r, k) for k in by)
if name not in folding:
folding[name] = []
folding[name].append(r)
# merge conflicts
folded = []
for name, rs in folding.items():
folded.append(sum(rs[1:], start=rs[0]))
return folded
def table(Result, results, diff_results=None, *,
by=None,
fields=None,
sort=None,
summary=False,
all=False,
percent=False,
tree=False,
depth=1,
**_):
all_, all = all, __builtins__.all
if by is None:
by = Result._by
if fields is None:
fields = Result._fields
types = Result._types
# fold again
results = fold(Result, results, by=by)
if diff_results is not None:
diff_results = fold(Result, diff_results, by=by)
# organize by name
table = {
','.join(str(getattr(r, k) or '') for k in by): r
for r in results}
diff_table = {
','.join(str(getattr(r, k) or '') for k in by): r
for r in diff_results or []}
names = list(table.keys() | diff_table.keys())
# sort again, now with diff info, note that python's sort is stable
names.sort()
if diff_results is not None:
names.sort(key=lambda n: tuple(
types[k].ratio(
getattr(table.get(n), k, None),
getattr(diff_table.get(n), k, None))
for k in fields),
reverse=True)
if sort:
for k, reverse in reversed(sort):
names.sort(
key=lambda n: tuple(
(getattr(table[n], k),)
if getattr(table.get(n), k, None) is not None else ()
for k in ([k] if k else [
k for k in Result._sort if k in fields])),
reverse=reverse ^ (not k or k in Result._fields))
# build up our lines
lines = []
# header
header = []
header.append('%s%s' % (
','.join(by),
' (%d added, %d removed)' % (
sum(1 for n in table if n not in diff_table),
sum(1 for n in diff_table if n not in table))
if diff_results is not None and not percent else '')
if not summary else '')
if diff_results is None:
for k in fields:
header.append(k)
elif percent:
for k in fields:
header.append(k)
else:
for k in fields:
header.append('o'+k)
for k in fields:
header.append('n'+k)
for k in fields:
header.append('d'+k)
header.append('')
lines.append(header)
def table_entry(name, r, diff_r=None, ratios=[]):
entry = []
entry.append(name)
if diff_results is None:
for k in fields:
entry.append(getattr(r, k).table()
if getattr(r, k, None) is not None
else types[k].none)
elif percent:
for k in fields:
entry.append(getattr(r, k).diff_table()
if getattr(r, k, None) is not None
else types[k].diff_none)
else:
for k in fields:
entry.append(getattr(diff_r, k).diff_table()
if getattr(diff_r, k, None) is not None
else types[k].diff_none)
for k in fields:
entry.append(getattr(r, k).diff_table()
if getattr(r, k, None) is not None
else types[k].diff_none)
for k in fields:
entry.append(types[k].diff_diff(
getattr(r, k, None),
getattr(diff_r, k, None)))
if diff_results is None:
entry.append('')
elif percent:
entry.append(' (%s)' % ', '.join(
'+∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%+.1f%%' % (100*t)
for t in ratios))
else:
entry.append(' (%s)' % ', '.join(
'+∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%+.1f%%' % (100*t)
for t in ratios
if t)
if any(ratios) else '')
return entry
# entries
if not summary:
for name in names:
r = table.get(name)
if diff_results is None:
diff_r = None
ratios = None
else:
diff_r = diff_table.get(name)
ratios = [
types[k].ratio(
getattr(r, k, None),
getattr(diff_r, k, None))
for k in fields]
if not all_ and not any(ratios):
continue
lines.append(table_entry(name, r, diff_r, ratios))
# total
r = next(iter(fold(Result, results, by=[])), None)
if diff_results is None:
diff_r = None
ratios = None
else:
diff_r = next(iter(fold(Result, diff_results, by=[])), None)
ratios = [
types[k].ratio(
getattr(r, k, None),
getattr(diff_r, k, None))
for k in fields]
lines.append(table_entry('TOTAL', r, diff_r, ratios))
# find the best widths, note that column 0 contains the names and column -1
# the ratios, so those are handled a bit differently
widths = [
((max(it.chain([w], (len(l[i]) for l in lines)))+1+4-1)//4)*4-1
for w, i in zip(
it.chain([23], it.repeat(7)),
range(len(lines[0])-1))]
# adjust the name width based on the expected call depth, though
# note this doesn't really work with unbounded recursion
if not summary and not m.isinf(depth):
widths[0] += 4*(depth-1)
# print the tree recursively
if not tree:
print('%-*s %s%s' % (
widths[0], lines[0][0],
' '.join('%*s' % (w, x)
for w, x in zip(widths[1:], lines[0][1:-1])),
lines[0][-1]))
if not summary:
line_table = {n: l for n, l in zip(names, lines[1:-1])}
def recurse(names_, depth_, prefixes=('', '', '', '')):
for i, name in enumerate(names_):
if name not in line_table:
continue
line = line_table[name]
is_last = (i == len(names_)-1)
print('%s%-*s ' % (
prefixes[0+is_last],
widths[0] - (
len(prefixes[0+is_last])
if not m.isinf(depth) else 0),
line[0]),
end='')
if not tree:
print(' %s%s' % (
' '.join('%*s' % (w, x)
for w, x in zip(widths[1:], line[1:-1])),
line[-1]),
end='')
print()
# recurse?
if name in table and depth_ > 1:
children = {
','.join(str(getattr(Result(*c), k) or '') for k in by)
for c in table[name].children}
recurse(
# note we're maintaining sort order
[n for n in names if n in children],
depth_-1,
(prefixes[2+is_last] + "|-> ",
prefixes[2+is_last] + "'-> ",
prefixes[2+is_last] + "| ",
prefixes[2+is_last] + " "))
recurse(names, depth)
if not tree:
print('%-*s %s%s' % (
widths[0], lines[-1][0],
' '.join('%*s' % (w, x)
for w, x in zip(widths[1:], lines[-1][1:-1])),
lines[-1][-1]))
def main(ci_paths,
by=None,
fields=None,
defines=None,
sort=None,
**args):
# it doesn't really make sense to not have a depth with tree,
# so assume depth=inf if tree by default
if args.get('depth') is None:
args['depth'] = m.inf if args['tree'] else 1
elif args.get('depth') == 0:
args['depth'] = m.inf
# find sizes
if not args.get('use', None):
results = collect(ci_paths, **args)
else:
results = []
with openio(args['use']) as f:
reader = csv.DictReader(f, restval='')
for r in reader:
if not any('stack_'+k in r and r['stack_'+k].strip()
for k in StackResult._fields):
continue
try:
results.append(StackResult(
**{k: r[k] for k in StackResult._by
if k in r and r[k].strip()},
**{k: r['stack_'+k] for k in StackResult._fields
if 'stack_'+k in r and r['stack_'+k].strip()}))
except TypeError:
pass
# fold
results = fold(StackResult, results, by=by, defines=defines)
# sort, note that python's sort is stable
results.sort()
if sort:
for k, reverse in reversed(sort):
results.sort(
key=lambda r: tuple(
(getattr(r, k),) if getattr(r, k) is not None else ()
for k in ([k] if k else StackResult._sort)),
reverse=reverse ^ (not k or k in StackResult._fields))
# write results to CSV
if args.get('output'):
with openio(args['output'], 'w') as f:
writer = csv.DictWriter(f,
(by if by is not None else StackResult._by)
+ ['stack_'+k for k in (
fields if fields is not None else StackResult._fields)])
writer.writeheader()
for r in results:
writer.writerow(
{k: getattr(r, k) for k in (
by if by is not None else StackResult._by)}
| {'stack_'+k: getattr(r, k) for k in (
fields if fields is not None else StackResult._fields)})
# find previous results?
if args.get('diff'):
diff_results = []
try:
with openio(args['diff']) as f:
reader = csv.DictReader(f, restval='')
for r in reader:
if not any('stack_'+k in r and r['stack_'+k].strip()
for k in StackResult._fields):
continue
try:
diff_results.append(StackResult(
**{k: r[k] for k in StackResult._by
if k in r and r[k].strip()},
**{k: r['stack_'+k] for k in StackResult._fields
if 'stack_'+k in r and r['stack_'+k].strip()}))
except TypeError:
raise
except FileNotFoundError:
pass
# fold
diff_results = fold(StackResult, diff_results, by=by, defines=defines)
# print table
if not args.get('quiet'):
table(StackResult, results,
diff_results if args.get('diff') else None,
by=by if by is not None else ['function'],
fields=fields,
sort=sort,
**args)
# error on recursion
if args.get('error_on_recursion') and any(
m.isinf(float(r.limit)) for r in results):
sys.exit(2)
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Find stack usage at the function level.",
allow_abbrev=False)
parser.add_argument(
'ci_paths',
nargs='*',
help="Input *.ci files.")
parser.add_argument(
'-v', '--verbose',
action='store_true',
help="Output commands that run behind the scenes.")
parser.add_argument(
'-q', '--quiet',
action='store_true',
help="Don't show anything, useful with -o.")
parser.add_argument(
'-o', '--output',
help="Specify CSV file to store results.")
parser.add_argument(
'-u', '--use',
help="Don't parse anything, use this CSV file.")
parser.add_argument(
'-d', '--diff',
help="Specify CSV file to diff against.")
parser.add_argument(
'-a', '--all',
action='store_true',
help="Show all, not just the ones that changed.")
parser.add_argument(
'-p', '--percent',
action='store_true',
help="Only show percentage change, not a full diff.")
parser.add_argument(
'-b', '--by',
action='append',
choices=StackResult._by,
help="Group by this field.")
parser.add_argument(
'-f', '--field',
dest='fields',
action='append',
choices=StackResult._fields,
help="Show this field.")
parser.add_argument(
'-D', '--define',
dest='defines',
action='append',
type=lambda x: (lambda k,v: (k, set(v.split(','))))(*x.split('=', 1)),
help="Only include results where this field is this value.")
class AppendSort(argparse.Action):
def __call__(self, parser, namespace, value, option):
if namespace.sort is None:
namespace.sort = []
namespace.sort.append((value, True if option == '-S' else False))
parser.add_argument(
'-s', '--sort',
nargs='?',
action=AppendSort,
help="Sort by this field.")
parser.add_argument(
'-S', '--reverse-sort',
nargs='?',
action=AppendSort,
help="Sort by this field, but backwards.")
parser.add_argument(
'-Y', '--summary',
action='store_true',
help="Only show the total.")
parser.add_argument(
'-F', '--source',
dest='sources',
action='append',
help="Only consider definitions in this file. Defaults to anything "
"in the current directory.")
parser.add_argument(
'--everything',
action='store_true',
help="Include builtin and libc specific symbols.")
parser.add_argument(
'--tree',
action='store_true',
help="Only show the function call tree.")
parser.add_argument(
'-Z', '--depth',
nargs='?',
type=lambda x: int(x, 0),
const=0,
help="Depth of function calls to show. 0 shows all calls but may not "
"terminate!")
parser.add_argument(
'-e', '--error-on-recursion',
action='store_true',
help="Error if any functions are recursive.")
sys.exit(main(**{k: v
for k, v in vars(parser.parse_intermixed_args()).items()
if v is not None}))
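# A minimal sketch of the CSV round trip behind -o/-u/-d above (the file name
# and numbers are made up): fields are stored with a "stack_" prefix, so a
# results file can later be reloaded with -u or diffed against with -d.
import csv

with open('lfs.stack.csv', 'w', newline='') as f:
    w = csv.DictWriter(f, ['file', 'function', 'stack_frame', 'stack_limit'])
    w.writeheader()
    w.writerow({'file': 'lfs.c', 'function': 'lfs_mount',
        'stack_frame': 64, 'stack_limit': 448})
# e.g.: ./scripts/stack.py lfs.ci -d lfs.stack.csv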

652
scripts/structs.py Executable file

@@ -0,0 +1,652 @@
#!/usr/bin/env python3
#
# Script to find struct sizes.
#
# Example:
# ./scripts/structs.py lfs.o lfs_util.o -Ssize
#
# Copyright (c) 2022, The littlefs authors.
# SPDX-License-Identifier: BSD-3-Clause
#
import collections as co
import csv
import difflib
import itertools as it
import math as m
import os
import re
import shlex
import subprocess as sp
OBJDUMP_PATH = ['objdump']
# integer fields
class Int(co.namedtuple('Int', 'x')):
__slots__ = ()
def __new__(cls, x=0):
if isinstance(x, Int):
return x
if isinstance(x, str):
try:
x = int(x, 0)
except ValueError:
# also accept +-∞ and +-inf
if re.match('^\s*\+?\s*(?:∞|inf)\s*$', x):
x = m.inf
elif re.match('^\s*-\s*(?:∞|inf)\s*$', x):
x = -m.inf
else:
raise
assert isinstance(x, int) or m.isinf(x), x
return super().__new__(cls, x)
def __str__(self):
if self.x == m.inf:
return '∞'
elif self.x == -m.inf:
return '-∞'
else:
return str(self.x)
def __int__(self):
assert not m.isinf(self.x)
return self.x
def __float__(self):
return float(self.x)
none = '%7s' % '-'
def table(self):
return '%7s' % (self,)
diff_none = '%7s' % '-'
diff_table = table
def diff_diff(self, other):
new = self.x if self else 0
old = other.x if other else 0
diff = new - old
if diff == +m.inf:
return '%7s' % '+∞'
elif diff == -m.inf:
return '%7s' % '-∞'
else:
return '%+7d' % diff
def ratio(self, other):
new = self.x if self else 0
old = other.x if other else 0
if m.isinf(new) and m.isinf(old):
return 0.0
elif m.isinf(new):
return +m.inf
elif m.isinf(old):
return -m.inf
elif not old and not new:
return 0.0
elif not old:
return 1.0
else:
return (new-old) / old
def __add__(self, other):
return self.__class__(self.x + other.x)
def __sub__(self, other):
return self.__class__(self.x - other.x)
def __mul__(self, other):
return self.__class__(self.x * other.x)
# struct size results
class StructResult(co.namedtuple('StructResult', ['file', 'struct', 'size'])):
_by = ['file', 'struct']
_fields = ['size']
_sort = ['size']
_types = {'size': Int}
__slots__ = ()
def __new__(cls, file='', struct='', size=0):
return super().__new__(cls, file, struct,
Int(size))
def __add__(self, other):
return StructResult(self.file, self.struct,
self.size + other.size)
def openio(path, mode='r', buffering=-1):
# allow '-' for stdin/stdout
if path == '-':
if mode == 'r':
return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
else:
return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
else:
return open(path, mode, buffering)
def collect(obj_paths, *,
objdump_path=OBJDUMP_PATH,
sources=None,
everything=False,
internal=False,
**args):
line_pattern = re.compile(
'^\s+(?P<no>[0-9]+)'
'(?:\s+(?P<dir>[0-9]+))?'
'\s+.*'
'\s+(?P<path>[^\s]+)$')
info_pattern = re.compile(
'^(?:.*(?P<tag>DW_TAG_[a-z_]+).*'
'|.*DW_AT_name.*:\s*(?P<name>[^:\s]+)\s*'
'|.*DW_AT_decl_file.*:\s*(?P<file>[0-9]+)\s*'
'|.*DW_AT_byte_size.*:\s*(?P<size>[0-9]+)\s*)$')
results = []
for path in obj_paths:
# find files, we want to filter by structs in .h files
dirs = {}
files = {}
# note objdump-path may contain extra args
cmd = objdump_path + ['--dwarf=rawline', path]
if args.get('verbose'):
print(' '.join(shlex.quote(c) for c in cmd))
proc = sp.Popen(cmd,
stdout=sp.PIPE,
stderr=sp.PIPE if not args.get('verbose') else None,
universal_newlines=True,
errors='replace',
close_fds=False)
for line in proc.stdout:
# note that files contain references to dirs, which we
# dereference as soon as we see them as each file table follows a
# dir table
m = line_pattern.match(line)
if m:
if not m.group('dir'):
# found a directory entry
dirs[int(m.group('no'))] = m.group('path')
else:
# found a file entry
dir = int(m.group('dir'))
if dir in dirs:
files[int(m.group('no'))] = os.path.join(
dirs[dir],
m.group('path'))
else:
files[int(m.group('no'))] = m.group('path')
proc.wait()
if proc.returncode != 0:
if not args.get('verbose'):
for line in proc.stderr:
sys.stdout.write(line)
sys.exit(-1)
# collect structs as we parse dwarf info
results_ = []
is_struct = False
s_name = None
s_file = None
s_size = None
# note objdump-path may contain extra args
cmd = objdump_path + ['--dwarf=info', path]
if args.get('verbose'):
print(' '.join(shlex.quote(c) for c in cmd))
proc = sp.Popen(cmd,
stdout=sp.PIPE,
stderr=sp.PIPE if not args.get('verbose') else None,
universal_newlines=True,
errors='replace',
close_fds=False)
for line in proc.stdout:
# state machine here to find structs
m = info_pattern.match(line)
if m:
if m.group('tag'):
if is_struct:
file = files.get(s_file, '?')
results_.append(StructResult(file, s_name, s_size))
is_struct = (m.group('tag') == 'DW_TAG_structure_type')
elif m.group('name'):
s_name = m.group('name')
elif m.group('file'):
s_file = int(m.group('file'))
elif m.group('size'):
s_size = int(m.group('size'))
if is_struct:
file = files.get(s_file, '?')
results_.append(StructResult(file, s_name, s_size))
proc.wait()
if proc.returncode != 0:
if not args.get('verbose'):
for line in proc.stderr:
sys.stdout.write(line)
sys.exit(-1)
for r in results_:
# ignore filtered sources
if sources is not None:
if not any(
os.path.abspath(r.file) == os.path.abspath(s)
for s in sources):
continue
else:
# default to only cwd
if not everything and not os.path.commonpath([
os.getcwd(),
os.path.abspath(r.file)]) == os.getcwd():
continue
# limit to .h files unless --internal
if not internal and not r.file.endswith('.h'):
continue
# simplify path
if os.path.commonpath([
os.getcwd(),
os.path.abspath(r.file)]) == os.getcwd():
file = os.path.relpath(r.file)
else:
file = os.path.abspath(r.file)
results.append(r._replace(file=file))
return results
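# A small sketch of the DWARF matching above (the sample lines are made-up,
# trimmed objdump --dwarf=info output): the state machine keys off
# DW_TAG_structure_type entries and collects name/decl_file/byte_size
# attributes as they stream past.
import re

example_pattern = re.compile(
    r'^(?:.*(?P<tag>DW_TAG_[a-z_]+).*'
    r'|.*DW_AT_name.*:\s*(?P<name>[^:\s]+)\s*'
    r'|.*DW_AT_decl_file.*:\s*(?P<file>[0-9]+)\s*'
    r'|.*DW_AT_byte_size.*:\s*(?P<size>[0-9]+)\s*)$')

for line in [
        ' <1><2a>: Abbrev Number: 3 (DW_TAG_structure_type)',
        '    <2b>   DW_AT_name : lfs_config',
        '    <2f>   DW_AT_byte_size : 84']:
    m_ = example_pattern.match(line)
    # expected: {'tag': 'DW_TAG_structure_type'}, {'name': 'lfs_config'},
    # {'size': '84'}
    print({k: v for k, v in m_.groupdict().items() if v} if m_ else None)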
def fold(Result, results, *,
by=None,
defines=None,
**_):
if by is None:
by = Result._by
for k in it.chain(by or [], (k for k, _ in defines or [])):
if k not in Result._by and k not in Result._fields:
print("error: could not find field %r?" % k)
sys.exit(-1)
# filter by matching defines
if defines is not None:
results_ = []
for r in results:
if all(getattr(r, k) in vs for k, vs in defines):
results_.append(r)
results = results_
# organize results into conflicts
folding = co.OrderedDict()
for r in results:
name = tuple(getattr(r, k) for k in by)
if name not in folding:
folding[name] = []
folding[name].append(r)
# merge conflicts
folded = []
for name, rs in folding.items():
folded.append(sum(rs[1:], start=rs[0]))
return folded
def table(Result, results, diff_results=None, *,
by=None,
fields=None,
sort=None,
summary=False,
all=False,
percent=False,
**_):
all_, all = all, __builtins__.all
if by is None:
by = Result._by
if fields is None:
fields = Result._fields
types = Result._types
# fold again
results = fold(Result, results, by=by)
if diff_results is not None:
diff_results = fold(Result, diff_results, by=by)
# organize by name
table = {
','.join(str(getattr(r, k) or '') for k in by): r
for r in results}
diff_table = {
','.join(str(getattr(r, k) or '') for k in by): r
for r in diff_results or []}
names = list(table.keys() | diff_table.keys())
# sort again, now with diff info, note that python's sort is stable
names.sort()
if diff_results is not None:
names.sort(key=lambda n: tuple(
types[k].ratio(
getattr(table.get(n), k, None),
getattr(diff_table.get(n), k, None))
for k in fields),
reverse=True)
if sort:
for k, reverse in reversed(sort):
names.sort(
key=lambda n: tuple(
(getattr(table[n], k),)
if getattr(table.get(n), k, None) is not None else ()
for k in ([k] if k else [
k for k in Result._sort if k in fields])),
reverse=reverse ^ (not k or k in Result._fields))
# build up our lines
lines = []
# header
header = []
header.append('%s%s' % (
','.join(by),
' (%d added, %d removed)' % (
sum(1 for n in table if n not in diff_table),
sum(1 for n in diff_table if n not in table))
if diff_results is not None and not percent else '')
if not summary else '')
if diff_results is None:
for k in fields:
header.append(k)
elif percent:
for k in fields:
header.append(k)
else:
for k in fields:
header.append('o'+k)
for k in fields:
header.append('n'+k)
for k in fields:
header.append('d'+k)
header.append('')
lines.append(header)
def table_entry(name, r, diff_r=None, ratios=[]):
entry = []
entry.append(name)
if diff_results is None:
for k in fields:
entry.append(getattr(r, k).table()
if getattr(r, k, None) is not None
else types[k].none)
elif percent:
for k in fields:
entry.append(getattr(r, k).diff_table()
if getattr(r, k, None) is not None
else types[k].diff_none)
else:
for k in fields:
entry.append(getattr(diff_r, k).diff_table()
if getattr(diff_r, k, None) is not None
else types[k].diff_none)
for k in fields:
entry.append(getattr(r, k).diff_table()
if getattr(r, k, None) is not None
else types[k].diff_none)
for k in fields:
entry.append(types[k].diff_diff(
getattr(r, k, None),
getattr(diff_r, k, None)))
if diff_results is None:
entry.append('')
elif percent:
entry.append(' (%s)' % ', '.join(
'+∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%+.1f%%' % (100*t)
for t in ratios))
else:
entry.append(' (%s)' % ', '.join(
'+∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%+.1f%%' % (100*t)
for t in ratios
if t)
if any(ratios) else '')
return entry
# entries
if not summary:
for name in names:
r = table.get(name)
if diff_results is None:
diff_r = None
ratios = None
else:
diff_r = diff_table.get(name)
ratios = [
types[k].ratio(
getattr(r, k, None),
getattr(diff_r, k, None))
for k in fields]
if not all_ and not any(ratios):
continue
lines.append(table_entry(name, r, diff_r, ratios))
# total
r = next(iter(fold(Result, results, by=[])), None)
if diff_results is None:
diff_r = None
ratios = None
else:
diff_r = next(iter(fold(Result, diff_results, by=[])), None)
ratios = [
types[k].ratio(
getattr(r, k, None),
getattr(diff_r, k, None))
for k in fields]
lines.append(table_entry('TOTAL', r, diff_r, ratios))
# find the best widths, note that column 0 contains the names and column -1
# the ratios, so those are handled a bit differently
widths = [
((max(it.chain([w], (len(l[i]) for l in lines)))+1+4-1)//4)*4-1
for w, i in zip(
it.chain([23], it.repeat(7)),
range(len(lines[0])-1))]
# print our table
for line in lines:
print('%-*s %s%s' % (
widths[0], line[0],
' '.join('%*s' % (w, x)
for w, x in zip(widths[1:], line[1:-1])),
line[-1]))
def main(obj_paths, *,
by=None,
fields=None,
defines=None,
sort=None,
**args):
# find sizes
if not args.get('use', None):
results = collect(obj_paths, **args)
else:
results = []
with openio(args['use']) as f:
reader = csv.DictReader(f, restval='')
for r in reader:
if not any('struct_'+k in r and r['struct_'+k].strip()
for k in StructResult._fields):
continue
try:
results.append(StructResult(
**{k: r[k] for k in StructResult._by
if k in r and r[k].strip()},
**{k: r['struct_'+k]
for k in StructResult._fields
if 'struct_'+k in r
and r['struct_'+k].strip()}))
except TypeError:
pass
# fold
results = fold(StructResult, results, by=by, defines=defines)
# sort, note that python's sort is stable
results.sort()
if sort:
for k, reverse in reversed(sort):
results.sort(
key=lambda r: tuple(
(getattr(r, k),) if getattr(r, k) is not None else ()
for k in ([k] if k else StructResult._sort)),
reverse=reverse ^ (not k or k in StructResult._fields))
# write results to CSV
if args.get('output'):
with openio(args['output'], 'w') as f:
writer = csv.DictWriter(f,
(by if by is not None else StructResult._by)
+ ['struct_'+k for k in (
fields if fields is not None else StructResult._fields)])
writer.writeheader()
for r in results:
writer.writerow(
{k: getattr(r, k) for k in (
by if by is not None else StructResult._by)}
| {'struct_'+k: getattr(r, k) for k in (
fields if fields is not None else StructResult._fields)})
# find previous results?
if args.get('diff'):
diff_results = []
try:
with openio(args['diff']) as f:
reader = csv.DictReader(f, restval='')
for r in reader:
if not any('struct_'+k in r and r['struct_'+k].strip()
for k in StructResult._fields):
continue
try:
diff_results.append(StructResult(
**{k: r[k] for k in StructResult._by
if k in r and r[k].strip()},
**{k: r['struct_'+k]
for k in StructResult._fields
if 'struct_'+k in r
and r['struct_'+k].strip()}))
except TypeError:
pass
except FileNotFoundError:
pass
# fold
diff_results = fold(StructResult, diff_results, by=by, defines=defines)
# print table
if not args.get('quiet'):
table(StructResult, results,
diff_results if args.get('diff') else None,
by=by if by is not None else ['struct'],
fields=fields,
sort=sort,
**args)
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Find struct sizes.",
allow_abbrev=False)
parser.add_argument(
'obj_paths',
nargs='*',
help="Input *.o files.")
parser.add_argument(
'-v', '--verbose',
action='store_true',
help="Output commands that run behind the scenes.")
parser.add_argument(
'-q', '--quiet',
action='store_true',
help="Don't show anything, useful with -o.")
parser.add_argument(
'-o', '--output',
help="Specify CSV file to store results.")
parser.add_argument(
'-u', '--use',
help="Don't parse anything, use this CSV file.")
parser.add_argument(
'-d', '--diff',
help="Specify CSV file to diff against.")
parser.add_argument(
'-a', '--all',
action='store_true',
help="Show all, not just the ones that changed.")
parser.add_argument(
'-p', '--percent',
action='store_true',
help="Only show percentage change, not a full diff.")
parser.add_argument(
'-b', '--by',
action='append',
choices=StructResult._by,
help="Group by this field.")
parser.add_argument(
'-f', '--field',
dest='fields',
action='append',
choices=StructResult._fields,
help="Show this field.")
parser.add_argument(
'-D', '--define',
dest='defines',
action='append',
type=lambda x: (lambda k,v: (k, set(v.split(','))))(*x.split('=', 1)),
help="Only include results where this field is this value.")
class AppendSort(argparse.Action):
def __call__(self, parser, namespace, value, option):
if namespace.sort is None:
namespace.sort = []
namespace.sort.append((value, True if option == '-S' else False))
parser.add_argument(
'-s', '--sort',
nargs='?',
action=AppendSort,
help="Sort by this field.")
parser.add_argument(
'-S', '--reverse-sort',
nargs='?',
action=AppendSort,
help="Sort by this field, but backwards.")
parser.add_argument(
'-Y', '--summary',
action='store_true',
help="Only show the total.")
parser.add_argument(
'-F', '--source',
dest='sources',
action='append',
help="Only consider definitions in this file. Defaults to anything "
"in the current directory.")
parser.add_argument(
'--everything',
action='store_true',
help="Include builtin and libc specific symbols.")
parser.add_argument(
'--internal',
action='store_true',
help="Also show structs in .c files.")
parser.add_argument(
'--objdump-path',
type=lambda x: x.split(),
default=OBJDUMP_PATH,
help="Path to the objdump executable, may include flags. "
"Defaults to %r." % OBJDUMP_PATH)
sys.exit(main(**{k: v
for k, v in vars(parser.parse_intermixed_args()).items()
if v is not None}))

829
scripts/summary.py Executable file

@@ -0,0 +1,829 @@
#!/usr/bin/env python3
#
# Script to summarize the outputs of other scripts. Operates on CSV files.
#
# Example:
# ./scripts/code.py lfs.o lfs_util.o -q -o lfs.code.csv
# ./scripts/data.py lfs.o lfs_util.o -q -o lfs.data.csv
# ./scripts/summary.py lfs.code.csv lfs.data.csv -q -o lfs.csv
# ./scripts/summary.py -Y lfs.csv -f code=code_size,data=data_size
#
# Copyright (c) 2022, The littlefs authors.
# SPDX-License-Identifier: BSD-3-Clause
#
import collections as co
import csv
import functools as ft
import itertools as it
import math as m
import os
import re
# supported merge operations
#
# this is a terrible way to express these
#
OPS = {
'sum': lambda xs: sum(xs[1:], start=xs[0]),
'prod': lambda xs: m.prod(xs[1:], start=xs[0]),
'min': min,
'max': max,
'mean': lambda xs: Float(sum(float(x) for x in xs) / len(xs)),
'stddev': lambda xs: (
lambda mean: Float(
m.sqrt(sum((float(x) - mean)**2 for x in xs) / len(xs)))
)(sum(float(x) for x in xs) / len(xs)),
'gmean': lambda xs: Float(m.prod(float(x) for x in xs)**(1/len(xs))),
'gstddev': lambda xs: (
lambda gmean: Float(
m.exp(m.sqrt(sum(m.log(float(x)/gmean)**2 for x in xs) / len(xs)))
if gmean else m.inf)
)(m.prod(float(x) for x in xs)**(1/len(xs))),
}
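# A quick worked example of the merge operations above, with plain floats
# standing in for the Float wrapper (the values are made up): mean/stddev are
# the usual arithmetic statistics, gmean/gstddev their geometric counterparts.
import math as m

example_xs = [1.0, 2.0, 4.0]
example_mean = sum(example_xs) / len(example_xs)            # ~2.33
example_gmean = m.prod(example_xs)**(1/len(example_xs))     # 2.0
example_gstddev = m.exp(m.sqrt(
    sum(m.log(x/example_gmean)**2 for x in example_xs) / len(example_xs)))  # ~1.76
print(example_mean, example_gmean, example_gstddev)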
# integer fields
class Int(co.namedtuple('Int', 'x')):
__slots__ = ()
def __new__(cls, x=0):
if isinstance(x, Int):
return x
if isinstance(x, str):
try:
x = int(x, 0)
except ValueError:
# also accept +-∞ and +-inf
if re.match('^\s*\+?\s*(?:∞|inf)\s*$', x):
x = m.inf
elif re.match('^\s*-\s*(?:∞|inf)\s*$', x):
x = -m.inf
else:
raise
assert isinstance(x, int) or m.isinf(x), x
return super().__new__(cls, x)
def __str__(self):
if self.x == m.inf:
return '∞'
elif self.x == -m.inf:
return '-∞'
else:
return str(self.x)
def __int__(self):
assert not m.isinf(self.x)
return self.x
def __float__(self):
return float(self.x)
none = '%7s' % '-'
def table(self):
return '%7s' % (self,)
diff_none = '%7s' % '-'
diff_table = table
def diff_diff(self, other):
new = self.x if self else 0
old = other.x if other else 0
diff = new - old
if diff == +m.inf:
return '%7s' % '+∞'
elif diff == -m.inf:
return '%7s' % '-∞'
else:
return '%+7d' % diff
def ratio(self, other):
new = self.x if self else 0
old = other.x if other else 0
if m.isinf(new) and m.isinf(old):
return 0.0
elif m.isinf(new):
return +m.inf
elif m.isinf(old):
return -m.inf
elif not old and not new:
return 0.0
elif not old:
return 1.0
else:
return (new-old) / old
def __add__(self, other):
return self.__class__(self.x + other.x)
def __sub__(self, other):
return self.__class__(self.x - other.x)
def __mul__(self, other):
return self.__class__(self.x * other.x)
# float fields
class Float(co.namedtuple('Float', 'x')):
__slots__ = ()
def __new__(cls, x=0.0):
if isinstance(x, Float):
return x
if isinstance(x, str):
try:
x = float(x)
except ValueError:
# also accept +-∞ and +-inf
if re.match('^\s*\+?\s*(?:∞|inf)\s*$', x):
x = m.inf
elif re.match('^\s*-\s*(?:∞|inf)\s*$', x):
x = -m.inf
else:
raise
assert isinstance(x, float), x
return super().__new__(cls, x)
def __str__(self):
if self.x == m.inf:
return '∞'
elif self.x == -m.inf:
return '-∞'
else:
return '%.1f' % self.x
def __float__(self):
return float(self.x)
none = Int.none
table = Int.table
diff_none = Int.diff_none
diff_table = Int.diff_table
diff_diff = Int.diff_diff
ratio = Int.ratio
__add__ = Int.__add__
__sub__ = Int.__sub__
__mul__ = Int.__mul__
# fractional fields, a/b
class Frac(co.namedtuple('Frac', 'a,b')):
__slots__ = ()
def __new__(cls, a=0, b=None):
if isinstance(a, Frac) and b is None:
return a
if isinstance(a, str) and b is None:
a, b = a.split('/', 1)
if b is None:
b = a
return super().__new__(cls, Int(a), Int(b))
def __str__(self):
return '%s/%s' % (self.a, self.b)
def __float__(self):
return float(self.a)
none = '%11s %7s' % ('-', '-')
def table(self):
t = self.a.x/self.b.x if self.b.x else 1.0
return '%11s %7s' % (
self,
'∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%.1f%%' % (100*t))
diff_none = '%11s' % '-'
def diff_table(self):
return '%11s' % (self,)
def diff_diff(self, other):
new_a, new_b = self if self else (Int(0), Int(0))
old_a, old_b = other if other else (Int(0), Int(0))
return '%11s' % ('%s/%s' % (
new_a.diff_diff(old_a).strip(),
new_b.diff_diff(old_b).strip()))
def ratio(self, other):
new_a, new_b = self if self else (Int(0), Int(0))
old_a, old_b = other if other else (Int(0), Int(0))
new = new_a.x/new_b.x if new_b.x else 1.0
old = old_a.x/old_b.x if old_b.x else 1.0
return new - old
def __add__(self, other):
return self.__class__(self.a + other.a, self.b + other.b)
def __sub__(self, other):
return self.__class__(self.a - other.a, self.b - other.b)
def __mul__(self, other):
return self.__class__(self.a * other.a, self.b + other.b)
def __lt__(self, other):
self_t = self.a.x/self.b.x if self.b.x else 1.0
other_t = other.a.x/other.b.x if other.b.x else 1.0
return (self_t, self.a.x) < (other_t, other.a.x)
def __gt__(self, other):
return self.__class__.__lt__(other, self)
def __le__(self, other):
return not self.__gt__(other)
def __ge__(self, other):
return not self.__lt__(other)
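# a rough sketch of Frac folding and rendering (made-up values); note that
# folding with the default sum op adds numerators and denominators separately:
#
#   str(Frac('1/2') + Frac('3/4'))  -> '4/6'
#   Frac('3/4').table()             -> '3/4  75.0%' (modulo padding)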
# available types
TYPES = co.OrderedDict([
('int', Int),
('float', Float),
('frac', Frac)
])
def infer(results, *,
by=None,
fields=None,
types={},
ops={},
renames=[],
**_):
# if fields not specified, try to guess from data
if fields is None:
fields = co.OrderedDict()
for r in results:
for k, v in r.items():
if (by is None or k not in by) and v.strip():
types_ = []
for t in fields.get(k, TYPES.values()):
try:
t(v)
types_.append(t)
except ValueError:
pass
fields[k] = types_
fields = list(k for k, v in fields.items() if v)
# deduplicate fields
fields = list(co.OrderedDict.fromkeys(fields).keys())
# if by not specified, guess it's anything not in fields and not a
# source of a rename
if by is None:
by = co.OrderedDict()
for r in results:
# also ignore None keys, these are introduced by csv.DictReader
# when header + row mismatch
by.update((k, True) for k in r.keys()
if k is not None
and k not in fields
and not any(k == old_k for _, old_k in renames))
by = list(by.keys())
# deduplicate by
by = list(co.OrderedDict.fromkeys(by).keys())
# find best type for all fields
types_ = {}
for k in fields:
if k in types:
types_[k] = types[k]
else:
for t in TYPES.values():
for r in results:
if k in r and r[k].strip():
try:
t(r[k])
except ValueError:
break
else:
types_[k] = t
break
else:
print("error: no type matches field %r?" % k)
sys.exit(-1)
types = types_
# does folding change the type?
types_ = {}
for k, t in types.items():
types_[k] = ops.get(k, OPS['sum'])([t()]).__class__
# create result class
def __new__(cls, **r):
return cls.__mro__[1].__new__(cls,
**{k: r.get(k, '') for k in by},
**{k: r[k] if k in r and isinstance(r[k], list)
else [types[k](r[k])] if k in r
else []
for k in fields})
def __add__(self, other):
return self.__class__(
**{k: getattr(self, k) for k in by},
**{k: object.__getattribute__(self, k)
+ object.__getattribute__(other, k)
for k in fields})
def __getattribute__(self, k):
if k in fields:
if object.__getattribute__(self, k):
return ops.get(k, OPS['sum'])(object.__getattribute__(self, k))
else:
return None
return object.__getattribute__(self, k)
return type('Result', (co.namedtuple('Result', by + fields),), {
'__slots__': (),
'__new__': __new__,
'__add__': __add__,
'__getattribute__': __getattribute__,
'_by': by,
'_fields': fields,
'_sort': fields,
'_types': types_,
})
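# a rough sketch of what the generated Result class does for a hypothetical
# CSV with a "name" column and a "code_size" column:
#
#   Result = infer(results)                    # by=['name'], fields=['code_size']
#   r = Result(name='lfs.c', code_size='1234')
#   r.code_size                                # -> Int(1234), folded lazily via OPS
#   (r + r).code_size                          # -> Int(2468)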
def fold(Result, results, *,
by=None,
defines=None,
**_):
if by is None:
by = Result._by
for k in it.chain(by or [], (k for k, _ in defines or [])):
if k not in Result._by and k not in Result._fields:
print("error: could not find field %r?" % k)
sys.exit(-1)
# filter by matching defines
if defines is not None:
results_ = []
for r in results:
if all(getattr(r, k) in vs for k, vs in defines):
results_.append(r)
results = results_
# organize results into conflicts
folding = co.OrderedDict()
for r in results:
name = tuple(getattr(r, k) for k in by)
if name not in folding:
folding[name] = []
folding[name].append(r)
# merge conflicts
folded = []
for name, rs in folding.items():
folded.append(sum(rs[1:], start=rs[0]))
return folded
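# for example (hypothetical fields), fold(Result, results, by=['name'],
# defines=[('name', {'lfs.c'})]) keeps only rows whose name is lfs.c and
# merges any duplicates into a single result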
def table(Result, results, diff_results=None, *,
by=None,
fields=None,
sort=None,
summary=False,
all=False,
percent=False,
**_):
all_, all = all, __builtins__.all
if by is None:
by = Result._by
if fields is None:
fields = Result._fields
types = Result._types
# fold again
results = fold(Result, results, by=by)
if diff_results is not None:
diff_results = fold(Result, diff_results, by=by)
# organize by name
table = {
','.join(str(getattr(r, k) or '') for k in by): r
for r in results}
diff_table = {
','.join(str(getattr(r, k) or '') for k in by): r
for r in diff_results or []}
names = list(table.keys() | diff_table.keys())
# sort again, now with diff info, note that python's sort is stable
names.sort()
if diff_results is not None:
names.sort(key=lambda n: tuple(
types[k].ratio(
getattr(table.get(n), k, None),
getattr(diff_table.get(n), k, None))
for k in fields),
reverse=True)
if sort:
for k, reverse in reversed(sort):
names.sort(
key=lambda n: tuple(
(getattr(table[n], k),)
if getattr(table.get(n), k, None) is not None else ()
for k in ([k] if k else [
k for k in Result._sort if k in fields])),
reverse=reverse ^ (not k or k in Result._fields))
# build up our lines
lines = []
# header
header = []
header.append('%s%s' % (
','.join(by),
' (%d added, %d removed)' % (
sum(1 for n in table if n not in diff_table),
sum(1 for n in diff_table if n not in table))
if diff_results is not None and not percent else '')
if not summary else '')
if diff_results is None:
for k in fields:
header.append(k)
elif percent:
for k in fields:
header.append(k)
else:
for k in fields:
header.append('o'+k)
for k in fields:
header.append('n'+k)
for k in fields:
header.append('d'+k)
header.append('')
lines.append(header)
def table_entry(name, r, diff_r=None, ratios=[]):
entry = []
entry.append(name)
if diff_results is None:
for k in fields:
entry.append(getattr(r, k).table()
if getattr(r, k, None) is not None
else types[k].none)
elif percent:
for k in fields:
entry.append(getattr(r, k).diff_table()
if getattr(r, k, None) is not None
else types[k].diff_none)
else:
for k in fields:
entry.append(getattr(diff_r, k).diff_table()
if getattr(diff_r, k, None) is not None
else types[k].diff_none)
for k in fields:
entry.append(getattr(r, k).diff_table()
if getattr(r, k, None) is not None
else types[k].diff_none)
for k in fields:
entry.append(types[k].diff_diff(
getattr(r, k, None),
getattr(diff_r, k, None)))
if diff_results is None:
entry.append('')
elif percent:
entry.append(' (%s)' % ', '.join(
'+∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%+.1f%%' % (100*t)
for t in ratios))
else:
entry.append(' (%s)' % ', '.join(
'+∞%' if t == +m.inf
else '-∞%' if t == -m.inf
else '%+.1f%%' % (100*t)
for t in ratios
if t)
if any(ratios) else '')
return entry
# entries
if not summary:
for name in names:
r = table.get(name)
if diff_results is None:
diff_r = None
ratios = None
else:
diff_r = diff_table.get(name)
ratios = [
types[k].ratio(
getattr(r, k, None),
getattr(diff_r, k, None))
for k in fields]
if not all_ and not any(ratios):
continue
lines.append(table_entry(name, r, diff_r, ratios))
# total
r = next(iter(fold(Result, results, by=[])), None)
if diff_results is None:
diff_r = None
ratios = None
else:
diff_r = next(iter(fold(Result, diff_results, by=[])), None)
ratios = [
types[k].ratio(
getattr(r, k, None),
getattr(diff_r, k, None))
for k in fields]
lines.append(table_entry('TOTAL', r, diff_r, ratios))
# find the best widths, note that column 0 contains the names and column -1
# the ratios, so those are handled a bit differently
widths = [
((max(it.chain([w], (len(l[i]) for l in lines)))+1+4-1)//4)*4-1
for w, i in zip(
it.chain([23], it.repeat(7)),
range(len(lines[0])-1))]
# print our table
for line in lines:
print('%-*s %s%s' % (
widths[0], line[0],
' '.join('%*s' % (w, x)
for w, x in zip(widths[1:], line[1:-1])),
line[-1]))
def openio(path, mode='r', buffering=-1):
# allow '-' for stdin/stdout
if path == '-':
if mode == 'r':
return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
else:
return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
else:
return open(path, mode, buffering)
def main(csv_paths, *,
by=None,
fields=None,
defines=None,
sort=None,
**args):
# separate out renames
renames = list(it.chain.from_iterable(
((k, v) for v in vs)
for k, vs in it.chain(by or [], fields or [])))
if by is not None:
by = [k for k, _ in by]
if fields is not None:
fields = [k for k, _ in fields]
# figure out types
types = {}
for t in TYPES.keys():
for k in args.get(t, []):
if k in types:
print("error: conflicting type for field %r?" % k)
sys.exit(-1)
types[k] = TYPES[t]
# rename types?
if renames:
types_ = {}
for new_k, old_k in renames:
if old_k in types:
types_[new_k] = types[old_k]
types.update(types_)
# figure out merge operations
ops = {}
for o in OPS.keys():
for k in args.get(o, []):
if k in ops:
print("error: conflicting op for field %r?" % k)
sys.exit(-1)
ops[k] = OPS[o]
# rename ops?
if renames:
ops_ = {}
for new_k, old_k in renames:
if old_k in ops:
ops_[new_k] = ops[old_k]
ops.update(ops_)
# find CSV files
results = []
for path in csv_paths:
try:
with openio(path) as f:
reader = csv.DictReader(f, restval='')
for r in reader:
# rename fields?
if renames:
# make a copy so renames can overlap
r_ = {}
for new_k, old_k in renames:
if old_k in r:
r_[new_k] = r[old_k]
r.update(r_)
results.append(r)
except FileNotFoundError:
pass
# homogenize
Result = infer(results,
by=by,
fields=fields,
types=types,
ops=ops,
renames=renames)
results_ = []
for r in results:
if not any(k in r and r[k].strip()
for k in Result._fields):
continue
try:
results_.append(Result(**{
k: r[k] for k in Result._by + Result._fields
if k in r and r[k].strip()}))
except TypeError:
pass
results = results_
# fold
results = fold(Result, results, by=by, defines=defines)
# sort, note that python's sort is stable
results.sort()
if sort:
for k, reverse in reversed(sort):
results.sort(
key=lambda r: tuple(
(getattr(r, k),) if getattr(r, k) is not None else ()
for k in ([k] if k else Result._sort)),
reverse=reverse ^ (not k or k in Result._fields))
# write results to CSV
if args.get('output'):
with openio(args['output'], 'w') as f:
writer = csv.DictWriter(f, Result._by + Result._fields)
writer.writeheader()
for r in results:
# note we need to go through getattr to resolve lazy fields
writer.writerow({
k: getattr(r, k) for k in Result._by + Result._fields})
# find previous results?
if args.get('diff'):
diff_results = []
try:
with openio(args['diff']) as f:
reader = csv.DictReader(f, restval='')
for r in reader:
# rename fields?
if renames:
# make a copy so renames can overlap
r_ = {}
for new_k, old_k in renames:
if old_k in r:
r_[new_k] = r[old_k]
r.update(r_)
if not any(k in r and r[k].strip()
for k in Result._fields):
continue
try:
diff_results.append(Result(**{
k: r[k] for k in Result._by + Result._fields
if k in r and r[k].strip()}))
except TypeError:
pass
except FileNotFoundError:
pass
# fold
diff_results = fold(Result, diff_results, by=by, defines=defines)
# print table
if not args.get('quiet'):
table(Result, results,
diff_results if args.get('diff') else None,
by=by,
fields=fields,
sort=sort,
**args)
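# a typical diff workflow might look like this (hypothetical filenames):
# snapshot the current results, then compare a later run against the snapshot
#
#   ./scripts/summary.py lfs.code.csv -q -o baseline.csv
#   ./scripts/summary.py lfs.code.csv -d baseline.csv -a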
if __name__ == "__main__":
import argparse
import sys
parser = argparse.ArgumentParser(
description="Summarize measurements in CSV files.",
allow_abbrev=False)
parser.add_argument(
'csv_paths',
nargs='*',
help="Input *.csv files.")
parser.add_argument(
'-q', '--quiet',
action='store_true',
help="Don't show anything, useful with -o.")
parser.add_argument(
'-o', '--output',
help="Specify CSV file to store results.")
parser.add_argument(
'-d', '--diff',
help="Specify CSV file to diff against.")
parser.add_argument(
'-a', '--all',
action='store_true',
help="Show all, not just the ones that changed.")
parser.add_argument(
'-p', '--percent',
action='store_true',
help="Only show percentage change, not a full diff.")
parser.add_argument(
'-b', '--by',
action='append',
type=lambda x: (
lambda k,v=None: (k, v.split(',') if v is not None else ())
)(*x.split('=', 1)),
help="Group by this field. Can rename fields with new_name=old_name.")
parser.add_argument(
'-f', '--field',
dest='fields',
action='append',
type=lambda x: (
lambda k,v=None: (k, v.split(',') if v is not None else ())
)(*x.split('=', 1)),
help="Show this field. Can rename fields with new_name=old_name.")
parser.add_argument(
'-D', '--define',
dest='defines',
action='append',
type=lambda x: (lambda k,v: (k, set(v.split(','))))(*x.split('=', 1)),
help="Only include results where this field is this value. May include "
"comma-separated options.")
class AppendSort(argparse.Action):
def __call__(self, parser, namespace, value, option):
if namespace.sort is None:
namespace.sort = []
namespace.sort.append((value, True if option == '-S' else False))
parser.add_argument(
'-s', '--sort',
nargs='?',
action=AppendSort,
help="Sort by this field.")
parser.add_argument(
'-S', '--reverse-sort',
nargs='?',
action=AppendSort,
help="Sort by this field, but backwards.")
parser.add_argument(
'-Y', '--summary',
action='store_true',
help="Only show the total.")
parser.add_argument(
'--int',
action='append',
help="Treat these fields as ints.")
parser.add_argument(
'--float',
action='append',
help="Treat these fields as floats.")
parser.add_argument(
'--frac',
action='append',
help="Treat these fields as fractions.")
parser.add_argument(
'--sum',
action='append',
help="Add these fields (the default).")
parser.add_argument(
'--prod',
action='append',
help="Multiply these fields.")
parser.add_argument(
'--min',
action='append',
help="Take the minimum of these fields.")
parser.add_argument(
'--max',
action='append',
help="Take the maximum of these fields.")
parser.add_argument(
'--mean',
action='append',
help="Average these fields.")
parser.add_argument(
'--stddev',
action='append',
help="Find the standard deviation of these fields.")
parser.add_argument(
'--gmean',
action='append',
help="Find the geometric mean of these fields.")
parser.add_argument(
'--gstddev',
action='append',
help="Find the geometric standard deviation of these fields.")
sys.exit(main(**{k: v
for k, v in vars(parser.parse_intermixed_args()).items()
if v is not None}))

scripts/tailpipe.py Executable file

@@ -0,0 +1,177 @@
#!/usr/bin/env python3
#
# Efficiently displays the last n lines of a file/pipe.
#
# Example:
# ./scripts/tailpipe.py trace -n5
#
# Copyright (c) 2022, The littlefs authors.
# SPDX-License-Identifier: BSD-3-Clause
#
import collections as co
import io
import os
import select
import shutil
import sys
import threading as th
import time
def openio(path, mode='r', buffering=-1):
# allow '-' for stdin/stdout
if path == '-':
if mode == 'r':
return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
else:
return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
else:
return open(path, mode, buffering)
class LinesIO:
def __init__(self, maxlen=None):
self.maxlen = maxlen
self.lines = co.deque(maxlen=maxlen)
self.tail = io.StringIO()
# trigger automatic sizing
if maxlen == 0:
self.resize(0)
def write(self, s):
# note using split here ensures the trailing string has no newline
lines = s.split('\n')
if len(lines) > 1 and self.tail.getvalue():
self.tail.write(lines[0])
lines[0] = self.tail.getvalue()
self.tail = io.StringIO()
self.lines.extend(lines[:-1])
if lines[-1]:
self.tail.write(lines[-1])
def resize(self, maxlen):
self.maxlen = maxlen
if maxlen == 0:
maxlen = shutil.get_terminal_size((80, 5))[1]
if maxlen != self.lines.maxlen:
self.lines = co.deque(self.lines, maxlen=maxlen)
canvas_lines = 1
def draw(self):
# did terminal size change?
if self.maxlen == 0:
self.resize(0)
# first things first, give ourselves a canvas
while LinesIO.canvas_lines < len(self.lines):
sys.stdout.write('\n')
LinesIO.canvas_lines += 1
# clear the bottom of the canvas if we shrink
shrink = LinesIO.canvas_lines - len(self.lines)
if shrink > 0:
for i in range(shrink):
sys.stdout.write('\r')
if shrink-1-i > 0:
sys.stdout.write('\x1b[%dA' % (shrink-1-i))
sys.stdout.write('\x1b[K')
if shrink-1-i > 0:
sys.stdout.write('\x1b[%dB' % (shrink-1-i))
sys.stdout.write('\x1b[%dA' % shrink)
LinesIO.canvas_lines = len(self.lines)
for i, line in enumerate(self.lines):
# move cursor, clear line, disable/reenable line wrapping
sys.stdout.write('\r')
if len(self.lines)-1-i > 0:
sys.stdout.write('\x1b[%dA' % (len(self.lines)-1-i))
sys.stdout.write('\x1b[K')
sys.stdout.write('\x1b[?7l')
sys.stdout.write(line)
sys.stdout.write('\x1b[?7h')
if len(self.lines)-1-i > 0:
sys.stdout.write('\x1b[%dB' % (len(self.lines)-1-i))
sys.stdout.flush()
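# a rough usage sketch (hypothetical input): keep the last 3 lines and
# repaint them in place
#
#   ring = LinesIO(3)
#   ring.write('one\ntwo\nthree\nfour\n')
#   list(ring.lines)   # -> ['two', 'three', 'four']
#   ring.draw()        # redraws those lines using ANSI cursor movement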
def main(path='-', *, lines=5, cat=False, sleep=None, keep_open=False):
if cat:
ring = sys.stdout
else:
ring = LinesIO(lines)
# draw in a background thread so we don't get stuck behind a blocking read call
event = th.Event()
lock = th.Lock()
if not cat:
done = False
def background():
while not done:
event.wait()
event.clear()
with lock:
ring.draw()
time.sleep(sleep or 0.01)
th.Thread(target=background, daemon=True).start()
try:
while True:
with openio(path) as f:
for line in f:
with lock:
ring.write(line)
event.set()
if not keep_open:
break
# don't just flood open calls
time.sleep(sleep or 0.1)
except FileNotFoundError as e:
print("error: file not found %r" % path)
sys.exit(-1)
except KeyboardInterrupt:
pass
if not cat:
done = True
lock.acquire() # avoids https://bugs.python.org/issue42717
sys.stdout.write('\n')
if __name__ == "__main__":
import sys
import argparse
parser = argparse.ArgumentParser(
description="Efficiently displays the last n lines of a file/pipe.",
allow_abbrev=False)
parser.add_argument(
'path',
nargs='?',
help="Path to read from.")
parser.add_argument(
'-n', '--lines',
nargs='?',
type=lambda x: int(x, 0),
const=0,
help="Show this many lines of history. 0 uses the terminal height. "
"Defaults to 5.")
parser.add_argument(
'-z', '--cat',
action='store_true',
help="Pipe directly to stdout.")
parser.add_argument(
'-s', '--sleep',
type=float,
help="Seconds to sleep between reads. Defaults to 0.01.")
parser.add_argument(
'-k', '--keep-open',
action='store_true',
help="Reopen the pipe on EOF, useful when multiple "
"processes are writing.")
sys.exit(main(**{k: v
for k, v in vars(parser.parse_intermixed_args()).items()
if v is not None}))

scripts/teepipe.py Executable file

@@ -0,0 +1,73 @@
#!/usr/bin/env python3
#
# tee, but for pipes
#
# Example:
# ./scripts/teepipe.py in_pipe out_pipe1 out_pipe2
#
# Copyright (c) 2022, The littlefs authors.
# SPDX-License-Identifier: BSD-3-Clause
#
import os
import io
import time
import sys
def openio(path, mode='r', buffering=-1):
# allow '-' for stdin/stdout
if path == '-':
if mode == 'r':
return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
else:
return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
else:
return open(path, mode, buffering)
def main(in_path, out_paths, *, keep_open=False):
out_pipes = [openio(p, 'wb', 0) for p in out_paths]
try:
with openio(in_path, 'rb', 0) as f:
while True:
buf = f.read(io.DEFAULT_BUFFER_SIZE)
if not buf:
if not keep_open:
break
# don't just flood reads
time.sleep(0.1)
continue
for p in out_pipes:
try:
p.write(buf)
except BrokenPipeError:
pass
except FileNotFoundError as e:
print("error: file not found %r" % in_path)
sys.exit(-1)
except KeyboardInterrupt:
pass
if __name__ == "__main__":
import sys
import argparse
parser = argparse.ArgumentParser(
description="tee, but for pipes.",
allow_abbrev=False)
parser.add_argument(
'in_path',
help="Path to read from.")
parser.add_argument(
'out_paths',
nargs='+',
help="Path to write to.")
parser.add_argument(
'-k', '--keep-open',
action='store_true',
help="Reopen the pipe on EOF, useful when multiple "
"processes are writing.")
sys.exit(main(**{k: v
for k, v in vars(parser.parse_intermixed_args()).items()
if v is not None}))

scripts/test.py Executable file

File diff suppressed because it is too large (1484 lines)

scripts/tracebd.py Executable file

File diff suppressed because it is too large (1002 lines)

scripts/watch.py Executable file

@@ -0,0 +1,265 @@
#!/usr/bin/env python3
#
# Traditional watch command, but with higher resolution updates and a bit
# different options/output format
#
# Example:
# ./scripts/watch.py -s0.1 date
#
# Copyright (c) 2022, The littlefs authors.
# SPDX-License-Identifier: BSD-3-Clause
#
import collections as co
import errno
import fcntl
import io
import os
import pty
import re
import shutil
import struct
import subprocess as sp
import sys
import termios
import time
try:
import inotify_simple
except ModuleNotFoundError:
inotify_simple = None
def openio(path, mode='r', buffering=-1):
# allow '-' for stdin/stdout
if path == '-':
if mode == 'r':
return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
else:
return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
else:
return open(path, mode, buffering)
def inotifywait(paths):
# wait for interesting events
inotify = inotify_simple.INotify()
flags = (inotify_simple.flags.ATTRIB
| inotify_simple.flags.CREATE
| inotify_simple.flags.DELETE
| inotify_simple.flags.DELETE_SELF
| inotify_simple.flags.MODIFY
| inotify_simple.flags.MOVED_FROM
| inotify_simple.flags.MOVED_TO
| inotify_simple.flags.MOVE_SELF)
# recurse into directories
for path in paths:
if os.path.isdir(path):
for dir, _, files in os.walk(path):
inotify.add_watch(dir, flags)
for f in files:
inotify.add_watch(os.path.join(dir, f), flags)
else:
inotify.add_watch(path, flags)
# wait for event
inotify.read()
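# a minimal usage sketch (hypothetical paths):
#
#   inotifywait(['lfs.c', 'lfs.h'])   # blocks until either file changes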
class LinesIO:
def __init__(self, maxlen=None):
self.maxlen = maxlen
self.lines = co.deque(maxlen=maxlen)
self.tail = io.StringIO()
# trigger automatic sizing
if maxlen == 0:
self.resize(0)
def write(self, s):
# note using split here ensures the trailing string has no newline
lines = s.split('\n')
if len(lines) > 1 and self.tail.getvalue():
self.tail.write(lines[0])
lines[0] = self.tail.getvalue()
self.tail = io.StringIO()
self.lines.extend(lines[:-1])
if lines[-1]:
self.tail.write(lines[-1])
def resize(self, maxlen):
self.maxlen = maxlen
if maxlen == 0:
maxlen = shutil.get_terminal_size((80, 5))[1]
if maxlen != self.lines.maxlen:
self.lines = co.deque(self.lines, maxlen=maxlen)
canvas_lines = 1
def draw(self):
# did terminal size change?
if self.maxlen == 0:
self.resize(0)
# first things first, give ourselves a canvas
while LinesIO.canvas_lines < len(self.lines):
sys.stdout.write('\n')
LinesIO.canvas_lines += 1
# clear the bottom of the canvas if we shrink
shrink = LinesIO.canvas_lines - len(self.lines)
if shrink > 0:
for i in range(shrink):
sys.stdout.write('\r')
if shrink-1-i > 0:
sys.stdout.write('\x1b[%dA' % (shrink-1-i))
sys.stdout.write('\x1b[K')
if shrink-1-i > 0:
sys.stdout.write('\x1b[%dB' % (shrink-1-i))
sys.stdout.write('\x1b[%dA' % shrink)
LinesIO.canvas_lines = len(self.lines)
for i, line in enumerate(self.lines):
# move cursor, clear line, disable/reenable line wrapping
sys.stdout.write('\r')
if len(self.lines)-1-i > 0:
sys.stdout.write('\x1b[%dA' % (len(self.lines)-1-i))
sys.stdout.write('\x1b[K')
sys.stdout.write('\x1b[?7l')
sys.stdout.write(line)
sys.stdout.write('\x1b[?7h')
if len(self.lines)-1-i > 0:
sys.stdout.write('\x1b[%dB' % (len(self.lines)-1-i))
sys.stdout.flush()
def main(command, *,
lines=0,
cat=False,
sleep=None,
keep_open=False,
keep_open_paths=None,
exit_on_error=False):
returncode = 0
try:
while True:
# reset ring each run
if cat:
ring = sys.stdout
else:
ring = LinesIO(lines)
try:
# run the command under a pseudoterminal
mpty, spty = pty.openpty()
# forward terminal size
w, h = shutil.get_terminal_size((80, 5))
if lines:
h = lines
fcntl.ioctl(spty, termios.TIOCSWINSZ,
struct.pack('HHHH', h, w, 0, 0))
proc = sp.Popen(command,
stdout=spty,
stderr=spty,
close_fds=False)
os.close(spty)
mpty = os.fdopen(mpty, 'r', 1)
while True:
try:
line = mpty.readline()
except OSError as e:
if e.errno != errno.EIO:
raise
break
if not line:
break
ring.write(line)
if not cat:
ring.draw()
mpty.close()
proc.wait()
if exit_on_error and proc.returncode != 0:
returncode = proc.returncode
break
except OSError as e:
if e.errno != errno.ETXTBSY:
raise
pass
# try to inotifywait
if keep_open and inotify_simple is not None:
if keep_open_paths:
paths = set(keep_open_paths)
else:
# guess inotify paths from command
paths = set()
for p in command:
for p in {
p,
re.sub('^-.', '', p),
re.sub('^--[^=]+=', '', p)}:
if p and os.path.exists(p):
paths.add(p)
ptime = time.time()
inotifywait(paths)
# sleep for a minimum amount of time, this helps avoid issues with
# rapidly updating files
time.sleep(max(0, (sleep or 0.1) - (time.time()-ptime)))
else:
time.sleep(sleep or 0.1)
except KeyboardInterrupt:
pass
if not cat:
sys.stdout.write('\n')
sys.exit(returncode)
if __name__ == "__main__":
import sys
import argparse
parser = argparse.ArgumentParser(
description="Traditional watch command, but with higher resolution "
"updates and a bit different options/output format.",
allow_abbrev=False)
parser.add_argument(
'command',
nargs=argparse.REMAINDER,
help="Command to run.")
parser.add_argument(
'-n', '--lines',
nargs='?',
type=lambda x: int(x, 0),
const=0,
help="Show this many lines of history. 0 uses the terminal height. "
"Defaults to 0.")
parser.add_argument(
'-z', '--cat',
action='store_true',
help="Pipe directly to stdout.")
parser.add_argument(
'-s', '--sleep',
type=float,
help="Seconds to sleep between runs. Defaults to 0.1.")
parser.add_argument(
'-k', '--keep-open',
action='store_true',
help="Try to use inotify to wait for changes.")
parser.add_argument(
'-K', '--keep-open-path',
dest='keep_open_paths',
action='append',
help="Use this path for inotify. Defaults to guessing.")
parser.add_argument(
'-e', '--exit-on-error',
action='store_true',
help="Exit on error.")
sys.exit(main(**{k: v
for k, v in vars(parser.parse_args()).items()
if v is not None}))


@@ -1,44 +0,0 @@
#!/usr/bin/env python
import struct
import sys
import os
import argparse
def corrupt(block):
with open(block, 'r+b') as file:
# skip rev
file.read(4)
# go to last commit
tag = 0xffffffff
while True:
try:
ntag, = struct.unpack('>I', file.read(4))
except struct.error:
break
tag ^= ntag
size = (tag & 0x3ff) if (tag & 0x3ff) != 0x3ff else 0
file.seek(size, os.SEEK_CUR)
# lob off last 3 bytes
file.seek(-(size + 3), os.SEEK_CUR)
file.truncate()
def main(args):
if args.n or not args.blocks:
with open('blocks/.history', 'rb') as file:
for i in range(int(args.n or 1)):
last, = struct.unpack('<I', file.read(4))
args.blocks.append('blocks/%x' % last)
for block in args.blocks:
print 'corrupting %s' % block
corrupt(block)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-n')
parser.add_argument('blocks', nargs='*')
main(parser.parse_args())


@@ -1,112 +0,0 @@
#!/usr/bin/env python2
import struct
import binascii
TYPES = {
(0x700, 0x400): 'splice',
(0x7ff, 0x401): 'create',
(0x7ff, 0x4ff): 'delete',
(0x700, 0x000): 'name',
(0x7ff, 0x001): 'name reg',
(0x7ff, 0x002): 'name dir',
(0x7ff, 0x0ff): 'name superblock',
(0x700, 0x200): 'struct',
(0x7ff, 0x200): 'struct dir',
(0x7ff, 0x202): 'struct ctz',
(0x7ff, 0x201): 'struct inline',
(0x700, 0x300): 'userattr',
(0x700, 0x600): 'tail',
(0x7ff, 0x600): 'tail soft',
(0x7ff, 0x601): 'tail hard',
(0x700, 0x700): 'gstate',
(0x7ff, 0x7ff): 'gstate move',
(0x700, 0x500): 'crc',
}
def typeof(type):
for prefix in range(12):
mask = 0x7ff & ~((1 << prefix)-1)
if (mask, type & mask) in TYPES:
return TYPES[mask, type & mask] + (
' %0*x' % (prefix/4, type & ((1 << prefix)-1))
if prefix else '')
else:
return '%02x' % type
def main(*blocks):
# find most recent block
file = None
rev = None
crc = None
versions = []
for block in blocks:
try:
nfile = open(block, 'rb')
ndata = nfile.read(4)
ncrc = binascii.crc32(ndata)
nrev, = struct.unpack('<I', ndata)
assert rev != nrev
if not file or ((rev - nrev) & 0x80000000):
file = nfile
rev = nrev
crc = ncrc
versions.append((nrev, '%s (rev %d)' % (block, nrev)))
except (IOError, struct.error):
pass
if not file:
print 'Bad metadata pair {%s}' % ', '.join(blocks)
return 1
print "--- %s ---" % ', '.join(v for _,v in sorted(versions, reverse=True))
# go through each tag, print useful information
print "%-4s %-8s %-14s %3s %4s %s" % (
'off', 'tag', 'type', 'id', 'len', 'dump')
tag = 0xffffffff
off = 4
while True:
try:
data = file.read(4)
crc = binascii.crc32(data, crc)
ntag, = struct.unpack('>I', data)
except struct.error:
break
tag ^= ntag
off += 4
type = (tag & 0x7ff00000) >> 20
id = (tag & 0x000ffc00) >> 10
size = (tag & 0x000003ff) >> 0
iscrc = (type & 0x700) == 0x500
data = file.read(size if size != 0x3ff else 0)
if iscrc:
crc = binascii.crc32(data[:4], crc)
else:
crc = binascii.crc32(data, crc)
print '%04x: %08x %-15s %3s %4s %-23s %-8s' % (
off, tag,
typeof(type) + (' bad!' if iscrc and ~crc else ''),
id if id != 0x3ff else '.',
size if size != 0x3ff else 'x',
' '.join('%02x' % ord(c) for c in data[:8]),
''.join(c if c >= ' ' and c <= '~' else '.' for c in data[:8]))
off += size if size != 0x3ff else 0
if iscrc:
crc = 0
tag ^= (type & 1) << 31
return 0
if __name__ == "__main__":
import sys
sys.exit(main(*sys.argv[1:]))


@@ -1,30 +0,0 @@
#!/usr/bin/env python
import struct
import sys
import time
import os
import re
def main():
with open('blocks/.config') as file:
s = struct.unpack('<LLLL', file.read())
print 'read_size: %d' % s[0]
print 'prog_size: %d' % s[1]
print 'block_size: %d' % s[2]
print 'block_size: %d' % s[3]
print 'real_size: %d' % sum(
os.path.getsize(os.path.join('blocks', f))
for f in os.listdir('blocks') if re.match('\d+', f))
with open('blocks/.stats') as file:
s = struct.unpack('<QQQ', file.read())
print 'read_count: %d' % s[0]
print 'prog_count: %d' % s[1]
print 'erase_count: %d' % s[2]
print 'runtime: %.3f' % (time.time() - os.stat('blocks').st_ctime)
if __name__ == "__main__":
main(*sys.argv[1:])


@@ -1,116 +0,0 @@
/// AUTOGENERATED TEST ///
#include "lfs.h"
#include "emubd/lfs_emubd.h"
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
// test stuff
static void test_log(const char *s, uintmax_t v) {{
printf("%s: %jd\n", s, v);
}}
static void test_assert(const char *file, unsigned line,
const char *s, uintmax_t v, uintmax_t e) {{
static const char *last[6] = {{0, 0}};
if (v != e || !(last[0] == s || last[1] == s ||
last[2] == s || last[3] == s ||
last[4] == s || last[5] == s)) {{
test_log(s, v);
last[0] = last[1];
last[1] = last[2];
last[2] = last[3];
last[3] = last[4];
last[4] = last[5];
last[5] = s;
}}
if (v != e) {{
fprintf(stderr, "\033[31m%s:%u: assert %s failed with %jd, "
"expected %jd\033[0m\n", file, line, s, v, e);
exit(-2);
}}
}}
#define test_assert(s, v, e) test_assert(__FILE__, __LINE__, s, v, e)
// utility functions for traversals
static int __attribute__((used)) test_count(void *p, lfs_block_t b) {{
(void)b;
unsigned *u = (unsigned*)p;
*u += 1;
return 0;
}}
// lfs declarations
lfs_t lfs;
lfs_emubd_t bd;
lfs_file_t file[4];
lfs_dir_t dir[4];
struct lfs_info info;
uint8_t buffer[1024];
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_size_t size;
lfs_size_t wsize;
lfs_size_t rsize;
uintmax_t test;
#ifndef LFS_READ_SIZE
#define LFS_READ_SIZE 16
#endif
#ifndef LFS_PROG_SIZE
#define LFS_PROG_SIZE LFS_READ_SIZE
#endif
#ifndef LFS_BLOCK_SIZE
#define LFS_BLOCK_SIZE 512
#endif
#ifndef LFS_BLOCK_COUNT
#define LFS_BLOCK_COUNT 1024
#endif
#ifndef LFS_BLOCK_CYCLES
#define LFS_BLOCK_CYCLES 1024
#endif
#ifndef LFS_CACHE_SIZE
#define LFS_CACHE_SIZE 64
#endif
#ifndef LFS_LOOKAHEAD_SIZE
#define LFS_LOOKAHEAD_SIZE 16
#endif
const struct lfs_config cfg = {{
.context = &bd,
.read = &lfs_emubd_read,
.prog = &lfs_emubd_prog,
.erase = &lfs_emubd_erase,
.sync = &lfs_emubd_sync,
.read_size = LFS_READ_SIZE,
.prog_size = LFS_PROG_SIZE,
.block_size = LFS_BLOCK_SIZE,
.block_count = LFS_BLOCK_COUNT,
.block_cycles = LFS_BLOCK_CYCLES,
.cache_size = LFS_CACHE_SIZE,
.lookahead_size = LFS_LOOKAHEAD_SIZE,
}};
// Entry point
int main(void) {{
lfs_emubd_create(&cfg, "blocks");
{tests}
lfs_emubd_destroy(&cfg);
}}


@@ -1,61 +0,0 @@
#!/usr/bin/env python
import re
import sys
import subprocess
import os
def generate(test):
with open("tests/template.fmt") as file:
template = file.read()
lines = []
for line in re.split('(?<=(?:.;| [{}]))\n', test.read()):
match = re.match('(?: *\n)*( *)(.*)=>(.*);', line, re.DOTALL | re.MULTILINE)
if match:
tab, test, expect = match.groups()
lines.append(tab+'test = {test};'.format(test=test.strip()))
lines.append(tab+'test_assert("{name}", test, {expect});'.format(
name = re.match('\w*', test.strip()).group(),
expect = expect.strip()))
else:
lines.append(line)
# Create test file
with open('test.c', 'w') as file:
file.write(template.format(tests='\n'.join(lines)))
# Remove build artifacts to force rebuild
try:
os.remove('test.o')
os.remove('lfs')
except OSError:
pass
def compile():
subprocess.check_call([
os.environ.get('MAKE', 'make'),
'--no-print-directory', '-s'])
def execute():
if 'EXEC' in os.environ:
subprocess.check_call([os.environ['EXEC'], "./lfs"])
else:
subprocess.check_call(["./lfs"])
def main(test=None):
if test and not test.startswith('-'):
with open(test) as file:
generate(file)
else:
generate(sys.stdin)
compile()
if test == '-s':
sys.exit(1)
execute()
if __name__ == "__main__":
main(*sys.argv[1:])


@@ -1,485 +0,0 @@
#!/bin/bash
set -eu
echo "=== Allocator tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
SIZE=15000
lfs_mkdir() {
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "$1") => 0;
lfs_unmount(&lfs) => 0;
TEST
}
lfs_remove() {
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "$1/eggs") => 0;
lfs_remove(&lfs, "$1/bacon") => 0;
lfs_remove(&lfs, "$1/pancakes") => 0;
lfs_remove(&lfs, "$1") => 0;
lfs_unmount(&lfs) => 0;
TEST
}
lfs_alloc_singleproc() {
tests/test.py << TEST
const char *names[] = {"bacon", "eggs", "pancakes"};
lfs_mount(&lfs, &cfg) => 0;
for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
sprintf((char*)buffer, "$1/%s", names[n]);
lfs_file_open(&lfs, &file[n], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
}
for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
size = strlen(names[n]);
for (int i = 0; i < $SIZE; i++) {
lfs_file_write(&lfs, &file[n], names[n], size) => size;
}
}
for (unsigned n = 0; n < sizeof(names)/sizeof(names[0]); n++) {
lfs_file_close(&lfs, &file[n]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
}
lfs_alloc_multiproc() {
for name in bacon eggs pancakes
do
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "$1/$name",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size = strlen("$name");
memcpy(buffer, "$name", size);
for (int i = 0; i < $SIZE; i++) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
done
}
lfs_verify() {
for name in bacon eggs pancakes
do
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "$1/$name", LFS_O_RDONLY) => 0;
size = strlen("$name");
for (int i = 0; i < $SIZE; i++) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "$name", size) => 0;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
done
}
echo "--- Single-process allocation test ---"
lfs_mkdir singleproc
lfs_alloc_singleproc singleproc
lfs_verify singleproc
echo "--- Multi-process allocation test ---"
lfs_mkdir multiproc
lfs_alloc_multiproc multiproc
lfs_verify multiproc
lfs_verify singleproc
echo "--- Single-process reuse test ---"
lfs_remove singleproc
lfs_mkdir singleprocreuse
lfs_alloc_singleproc singleprocreuse
lfs_verify singleprocreuse
lfs_verify multiproc
echo "--- Multi-process reuse test ---"
lfs_remove multiproc
lfs_mkdir multiprocreuse
lfs_alloc_singleproc multiprocreuse
lfs_verify multiprocreuse
lfs_verify singleprocreuse
echo "--- Cleanup ---"
lfs_remove multiprocreuse
lfs_remove singleprocreuse
echo "--- Exhaustion test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("exhaustion");
memcpy(buffer, "exhaustion", size);
lfs_file_write(&lfs, &file[0], buffer, size) => size;
lfs_file_sync(&lfs, &file[0]) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_ssize_t res;
while (true) {
res = lfs_file_write(&lfs, &file[0], buffer, size);
if (res < 0) {
break;
}
res => size;
}
res => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_RDONLY);
size = strlen("exhaustion");
lfs_file_size(&lfs, &file[0]) => size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "exhaustion", size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Exhaustion wraparound test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_file_open(&lfs, &file[0], "padding", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("buffering");
memcpy(buffer, "buffering", size);
for (int i = 0; i < $SIZE; i++) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_remove(&lfs, "padding") => 0;
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("exhaustion");
memcpy(buffer, "exhaustion", size);
lfs_file_write(&lfs, &file[0], buffer, size) => size;
lfs_file_sync(&lfs, &file[0]) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_ssize_t res;
while (true) {
res = lfs_file_write(&lfs, &file[0], buffer, size);
if (res < 0) {
break;
}
res => size;
}
res => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_RDONLY);
size = strlen("exhaustion");
lfs_file_size(&lfs, &file[0]) => size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "exhaustion", size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Dir exhaustion test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
// find out max file size
lfs_mkdir(&lfs, "exhaustiondir") => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
int count = 0;
int err;
while (true) {
err = lfs_file_write(&lfs, &file[0], buffer, size);
if (err < 0) {
break;
}
count += 1;
}
err => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_remove(&lfs, "exhaustiondir") => 0;
// see if dir fits with max file size
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
for (int i = 0; i < count; i++) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_mkdir(&lfs, "exhaustiondir") => 0;
lfs_remove(&lfs, "exhaustiondir") => 0;
lfs_remove(&lfs, "exhaustion") => 0;
// see if dir fits with > max file size
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
for (int i = 0; i < count+1; i++) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_mkdir(&lfs, "exhaustiondir") => LFS_ERR_NOSPC;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Chained dir exhaustion test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
// find out max file size
lfs_mkdir(&lfs, "exhaustiondir") => 0;
for (int i = 0; i < 10; i++) {
sprintf((char*)buffer, "dirwithanexhaustivelylongnameforpadding%d", i);
lfs_mkdir(&lfs, (char*)buffer) => 0;
}
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
int count = 0;
int err;
while (true) {
err = lfs_file_write(&lfs, &file[0], buffer, size);
if (err < 0) {
break;
}
count += 1;
}
err => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_remove(&lfs, "exhaustiondir") => 0;
for (int i = 0; i < 10; i++) {
sprintf((char*)buffer, "dirwithanexhaustivelylongnameforpadding%d", i);
lfs_remove(&lfs, (char*)buffer) => 0;
}
// see that chained dir fails
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
for (int i = 0; i < count+1; i++) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_sync(&lfs, &file[0]) => 0;
for (int i = 0; i < 10; i++) {
sprintf((char*)buffer, "dirwithanexhaustivelylongnameforpadding%d", i);
lfs_mkdir(&lfs, (char*)buffer) => 0;
}
lfs_mkdir(&lfs, "exhaustiondir") => LFS_ERR_NOSPC;
// shorten file to try a second chained dir
while (true) {
err = lfs_mkdir(&lfs, "exhaustiondir");
if (err != LFS_ERR_NOSPC) {
break;
}
lfs_ssize_t filesize = lfs_file_size(&lfs, &file[0]);
filesize > 0 => true;
lfs_file_truncate(&lfs, &file[0], filesize - size) => 0;
lfs_file_sync(&lfs, &file[0]) => 0;
}
err => 0;
lfs_mkdir(&lfs, "exhaustiondir2") => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Split dir test ---"
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
// create one block hole for half a directory
lfs_file_open(&lfs, &file[0], "bump", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (lfs_size_t i = 0; i < cfg.block_size; i += 2) {
memcpy(&buffer[i], "hi", 2);
}
lfs_file_write(&lfs, &file[0], buffer, cfg.block_size) => cfg.block_size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_open(&lfs, &file[0], "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < (cfg.block_count-4)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
// remount to force reset of lookahead
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
// open hole
lfs_remove(&lfs, "bump") => 0;
lfs_mkdir(&lfs, "splitdir") => 0;
lfs_file_open(&lfs, &file[0], "splitdir/bump",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (lfs_size_t i = 0; i < cfg.block_size; i += 2) {
memcpy(&buffer[i], "hi", 2);
}
lfs_file_write(&lfs, &file[0], buffer, 2*cfg.block_size) => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Outdated lookahead test ---"
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// fill completely with two files
lfs_file_open(&lfs, &file[0], "exhaustion1",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_open(&lfs, &file[0], "exhaustion2",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2+1)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
// remount to force reset of lookahead
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
// rewrite one file
lfs_file_open(&lfs, &file[0], "exhaustion1",
LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_sync(&lfs, &file[0]) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
// rewrite second file, this requires lookahead does not
// use old population
lfs_file_open(&lfs, &file[0], "exhaustion2",
LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_sync(&lfs, &file[0]) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2+1)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
TEST
echo "--- Outdated lookahead and split dir test ---"
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
// fill completely with two files
lfs_file_open(&lfs, &file[0], "exhaustion1",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_open(&lfs, &file[0], "exhaustion2",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2+1)/2)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
// remount to force reset of lookahead
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
// rewrite one file with a hole of one block
lfs_file_open(&lfs, &file[0], "exhaustion1",
LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_sync(&lfs, &file[0]) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg.block_count-2)/2 - 1)*(cfg.block_size-8);
i += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
// try to allocate a directory, should fail!
lfs_mkdir(&lfs, "split") => LFS_ERR_NOSPC;
// file should not fail
lfs_file_open(&lfs, &file[0], "notasplit",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_write(&lfs, &file[0], "hi", 2) => 2;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

tests/test_alloc.toml Normal file

@@ -0,0 +1,737 @@
# allocator tests
# note for these to work there are a number of constraints on the device geometry
if = 'BLOCK_CYCLES == -1'
# parallel allocation test
[cases.test_alloc_parallel]
defines.FILES = 3
defines.SIZE = '(((BLOCK_SIZE-8)*(BLOCK_COUNT-6)) / FILES)'
defines.GC = [false, true]
defines.COMPACT_THRESH = ['-1', '0', 'BLOCK_SIZE/2']
code = '''
const char *names[] = {"bacon", "eggs", "pancakes"};
lfs_file_t files[FILES];
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "breakfast") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int n = 0; n < FILES; n++) {
char path[1024];
sprintf(path, "breakfast/%s", names[n]);
lfs_file_open(&lfs, &files[n], path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
}
for (int n = 0; n < FILES; n++) {
if (GC) {
lfs_fs_gc(&lfs) => 0;
}
size_t size = strlen(names[n]);
for (lfs_size_t i = 0; i < SIZE; i += size) {
lfs_file_write(&lfs, &files[n], names[n], size) => size;
}
}
for (int n = 0; n < FILES; n++) {
lfs_file_close(&lfs, &files[n]) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int n = 0; n < FILES; n++) {
char path[1024];
sprintf(path, "breakfast/%s", names[n]);
lfs_file_t file;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
size_t size = strlen(names[n]);
for (lfs_size_t i = 0; i < SIZE; i += size) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, names[n], size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
# serial allocation test
[cases.test_alloc_serial]
defines.FILES = 3
defines.SIZE = '(((BLOCK_SIZE-8)*(BLOCK_COUNT-6)) / FILES)'
defines.GC = [false, true]
defines.COMPACT_THRESH = ['-1', '0', 'BLOCK_SIZE/2']
code = '''
const char *names[] = {"bacon", "eggs", "pancakes"};
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "breakfast") => 0;
lfs_unmount(&lfs) => 0;
for (int n = 0; n < FILES; n++) {
lfs_mount(&lfs, cfg) => 0;
char path[1024];
sprintf(path, "breakfast/%s", names[n]);
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size_t size = strlen(names[n]);
uint8_t buffer[1024];
memcpy(buffer, names[n], size);
for (int i = 0; i < SIZE; i += size) {
if (GC) {
lfs_fs_gc(&lfs) => 0;
}
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
}
lfs_mount(&lfs, cfg) => 0;
for (int n = 0; n < FILES; n++) {
char path[1024];
sprintf(path, "breakfast/%s", names[n]);
lfs_file_t file;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
size_t size = strlen(names[n]);
for (int i = 0; i < SIZE; i += size) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, names[n], size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
# parallel allocation reuse test
[cases.test_alloc_parallel_reuse]
defines.FILES = 3
defines.SIZE = '(((BLOCK_SIZE-8)*(BLOCK_COUNT-6)) / FILES)'
defines.CYCLES = [1, 10]
code = '''
const char *names[] = {"bacon", "eggs", "pancakes"};
lfs_file_t files[FILES];
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
for (int c = 0; c < CYCLES; c++) {
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "breakfast") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int n = 0; n < FILES; n++) {
char path[1024];
sprintf(path, "breakfast/%s", names[n]);
lfs_file_open(&lfs, &files[n], path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
}
for (int n = 0; n < FILES; n++) {
size_t size = strlen(names[n]);
for (int i = 0; i < SIZE; i += size) {
lfs_file_write(&lfs, &files[n], names[n], size) => size;
}
}
for (int n = 0; n < FILES; n++) {
lfs_file_close(&lfs, &files[n]) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int n = 0; n < FILES; n++) {
char path[1024];
sprintf(path, "breakfast/%s", names[n]);
lfs_file_t file;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
size_t size = strlen(names[n]);
for (int i = 0; i < SIZE; i += size) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, names[n], size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int n = 0; n < FILES; n++) {
char path[1024];
sprintf(path, "breakfast/%s", names[n]);
lfs_remove(&lfs, path) => 0;
}
lfs_remove(&lfs, "breakfast") => 0;
lfs_unmount(&lfs) => 0;
}
'''
# serial allocation reuse test
[cases.test_alloc_serial_reuse]
defines.FILES = 3
defines.SIZE = '(((BLOCK_SIZE-8)*(BLOCK_COUNT-6)) / FILES)'
defines.CYCLES = [1, 10]
code = '''
const char *names[] = {"bacon", "eggs", "pancakes"};
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
for (int c = 0; c < CYCLES; c++) {
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "breakfast") => 0;
lfs_unmount(&lfs) => 0;
for (int n = 0; n < FILES; n++) {
lfs_mount(&lfs, cfg) => 0;
char path[1024];
sprintf(path, "breakfast/%s", names[n]);
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size_t size = strlen(names[n]);
uint8_t buffer[1024];
memcpy(buffer, names[n], size);
for (int i = 0; i < SIZE; i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
}
lfs_mount(&lfs, cfg) => 0;
for (int n = 0; n < FILES; n++) {
char path[1024];
sprintf(path, "breakfast/%s", names[n]);
lfs_file_t file;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
size_t size = strlen(names[n]);
for (int i = 0; i < SIZE; i += size) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, names[n], size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int n = 0; n < FILES; n++) {
char path[1024];
sprintf(path, "breakfast/%s", names[n]);
lfs_remove(&lfs, path) => 0;
}
lfs_remove(&lfs, "breakfast") => 0;
lfs_unmount(&lfs) => 0;
}
'''
# exhaustion test
[cases.test_alloc_exhaustion]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
size_t size = strlen("exhaustion");
uint8_t buffer[1024];
memcpy(buffer, "exhaustion", size);
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_sync(&lfs, &file) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_ssize_t res;
while (true) {
res = lfs_file_write(&lfs, &file, buffer, size);
if (res < 0) {
break;
}
res => size;
}
res => LFS_ERR_NOSPC;
// note that lfs_fs_gc should not error here
lfs_fs_gc(&lfs) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_RDONLY);
size = strlen("exhaustion");
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "exhaustion", size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# exhaustion wraparound test
[cases.test_alloc_exhaustion_wraparound]
defines.SIZE = '(((BLOCK_SIZE-8)*(BLOCK_COUNT-4)) / 3)'
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "padding", LFS_O_WRONLY | LFS_O_CREAT);
size_t size = strlen("buffering");
uint8_t buffer[1024];
memcpy(buffer, "buffering", size);
for (int i = 0; i < SIZE; i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "padding") => 0;
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
size = strlen("exhaustion");
memcpy(buffer, "exhaustion", size);
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_sync(&lfs, &file) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
lfs_ssize_t res;
while (true) {
res = lfs_file_write(&lfs, &file, buffer, size);
if (res < 0) {
break;
}
res => size;
}
res => LFS_ERR_NOSPC;
// note that lfs_fs_gc should not error here
lfs_fs_gc(&lfs) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_RDONLY);
size = strlen("exhaustion");
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "exhaustion", size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_unmount(&lfs) => 0;
'''
# dir exhaustion test
[cases.test_alloc_dir_exhaustion]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// find out max file size
lfs_mkdir(&lfs, "exhaustiondir") => 0;
size_t size = strlen("blahblahblahblah");
uint8_t buffer[1024];
memcpy(buffer, "blahblahblahblah", size);
lfs_file_t file;
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
int count = 0;
int err;
while (true) {
err = lfs_file_write(&lfs, &file, buffer, size);
if (err < 0) {
break;
}
count += 1;
}
err => LFS_ERR_NOSPC;
// note that lfs_fs_gc should not error here
lfs_fs_gc(&lfs) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_remove(&lfs, "exhaustiondir") => 0;
// see if dir fits with max file size
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
for (int i = 0; i < count; i++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_mkdir(&lfs, "exhaustiondir") => 0;
lfs_remove(&lfs, "exhaustiondir") => 0;
lfs_remove(&lfs, "exhaustion") => 0;
// see if dir fits with > max file size
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
for (int i = 0; i < count+1; i++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_mkdir(&lfs, "exhaustiondir") => LFS_ERR_NOSPC;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_unmount(&lfs) => 0;
'''
# what if we have a bad block during an allocation scan?
[cases.test_alloc_bad_blocks]
in = "lfs.c"
defines.ERASE_CYCLES = 0xffffffff
defines.BADBLOCK_BEHAVIOR = 'LFS_EMUBD_BADBLOCK_READERROR'
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// first fill to exhaustion to find available space
lfs_file_t file;
lfs_file_open(&lfs, &file, "pacman", LFS_O_WRONLY | LFS_O_CREAT) => 0;
uint8_t buffer[1024];
strcpy((char*)buffer, "waka");
size_t size = strlen("waka");
lfs_size_t filesize = 0;
while (true) {
lfs_ssize_t res = lfs_file_write(&lfs, &file, buffer, size);
assert(res == (lfs_ssize_t)size || res == LFS_ERR_NOSPC);
if (res == LFS_ERR_NOSPC) {
break;
}
filesize += size;
}
lfs_file_close(&lfs, &file) => 0;
// now fill all but a couple of blocks of the filesystem with data
filesize -= 3*BLOCK_SIZE;
lfs_file_open(&lfs, &file, "pacman", LFS_O_WRONLY | LFS_O_CREAT) => 0;
strcpy((char*)buffer, "waka");
size = strlen("waka");
for (lfs_size_t i = 0; i < filesize/size; i++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
// also save head of file so we can error during lookahead scan
lfs_block_t fileblock = file.ctz.head;
lfs_unmount(&lfs) => 0;
// remount to force an alloc scan
lfs_mount(&lfs, cfg) => 0;
// but mark the head of our file as a "bad block", this will force our
// scan to bail early
lfs_emubd_setwear(cfg, fileblock, 0xffffffff) => 0;
lfs_file_open(&lfs, &file, "ghost", LFS_O_WRONLY | LFS_O_CREAT) => 0;
strcpy((char*)buffer, "chomp");
size = strlen("chomp");
while (true) {
lfs_ssize_t res = lfs_file_write(&lfs, &file, buffer, size);
assert(res == (lfs_ssize_t)size || res == LFS_ERR_CORRUPT);
if (res == LFS_ERR_CORRUPT) {
break;
}
}
lfs_file_close(&lfs, &file) => 0;
// now reverse the "bad block" and try to write the file again until we
// run out of space
lfs_emubd_setwear(cfg, fileblock, 0) => 0;
lfs_file_open(&lfs, &file, "ghost", LFS_O_WRONLY | LFS_O_CREAT) => 0;
strcpy((char*)buffer, "chomp");
size = strlen("chomp");
while (true) {
lfs_ssize_t res = lfs_file_write(&lfs, &file, buffer, size);
assert(res == (lfs_ssize_t)size || res == LFS_ERR_NOSPC);
if (res == LFS_ERR_NOSPC) {
break;
}
}
// note that lfs_fs_gc should not error here
lfs_fs_gc(&lfs) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// check that the disk isn't hurt
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "pacman", LFS_O_RDONLY) => 0;
strcpy((char*)buffer, "waka");
size = strlen("waka");
for (lfs_size_t i = 0; i < filesize/size; i++) {
uint8_t rbuffer[4];
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(memcmp(rbuffer, buffer, size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# Note: I don't like the tests below. They're fragile and depend _heavily_
# on the geometry of the block device, but they are valuable. Eventually they
# should be removed and replaced with generalized tests.
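#
# For a rough sense of the geometry these cases assume (a back-of-the-envelope
# sketch, not exact littlefs accounting): with ERASE_SIZE = 512 and
# ERASE_COUNT = 1024, and treating blocks as erase-sized, the fill loops below
# budget roughly:
#
#   total storage        = 1024 * 512  = 524288 bytes
#   data per block       ~ 512 - 8     = 504 bytes
#   blocks left for data ~ 1024 - 4    = 1020 blocks
#   max file data        ~ 1020 * 504  = 514080 bytes
#
# This is just the tests' own (block_count-4)*(block_size-8) expression
# evaluated; changing the geometry shifts all of these numbers, which is why
# these cases are gated on ERASE_SIZE == 512.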
# chained dir exhaustion test
[cases.test_alloc_chained_dir_exhaustion]
if = 'ERASE_SIZE == 512'
defines.ERASE_COUNT = 1024
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// find out max file size
lfs_mkdir(&lfs, "exhaustiondir") => 0;
for (int i = 0; i < 10; i++) {
char path[1024];
sprintf(path, "dirwithanexhaustivelylongnameforpadding%d", i);
lfs_mkdir(&lfs, path) => 0;
}
size_t size = strlen("blahblahblahblah");
uint8_t buffer[1024];
memcpy(buffer, "blahblahblahblah", size);
lfs_file_t file;
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
int count = 0;
int err;
while (true) {
err = lfs_file_write(&lfs, &file, buffer, size);
if (err < 0) {
break;
}
count += 1;
}
err => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "exhaustion") => 0;
lfs_remove(&lfs, "exhaustiondir") => 0;
for (int i = 0; i < 10; i++) {
char path[1024];
sprintf(path, "dirwithanexhaustivelylongnameforpadding%d", i);
lfs_remove(&lfs, path) => 0;
}
// see that chained dir fails
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
for (int i = 0; i < count+1; i++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_sync(&lfs, &file) => 0;
for (int i = 0; i < 10; i++) {
char path[1024];
sprintf(path, "dirwithanexhaustivelylongnameforpadding%d", i);
lfs_mkdir(&lfs, path) => 0;
}
lfs_mkdir(&lfs, "exhaustiondir") => LFS_ERR_NOSPC;
// shorten file to try a second chained dir
while (true) {
err = lfs_mkdir(&lfs, "exhaustiondir");
if (err != LFS_ERR_NOSPC) {
break;
}
lfs_ssize_t filesize = lfs_file_size(&lfs, &file);
filesize > 0 => true;
lfs_file_truncate(&lfs, &file, filesize - size) => 0;
lfs_file_sync(&lfs, &file) => 0;
}
err => 0;
lfs_mkdir(&lfs, "exhaustiondir2") => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# split dir test
[cases.test_alloc_split_dir]
if = 'ERASE_SIZE == 512'
defines.ERASE_COUNT = 1024
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// create one block hole for half a directory
lfs_file_t file;
lfs_file_open(&lfs, &file, "bump", LFS_O_WRONLY | LFS_O_CREAT) => 0;
uint8_t buffer[1024];
for (lfs_size_t i = 0; i < cfg->block_size; i += 2) {
memcpy(&buffer[i], "hi", 2);
}
lfs_file_write(&lfs, &file, buffer, cfg->block_size) => cfg->block_size;
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "exhaustion", LFS_O_WRONLY | LFS_O_CREAT);
size_t size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < (cfg->block_count-4)*(cfg->block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
// remount to force reset of lookahead
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
// open hole
lfs_remove(&lfs, "bump") => 0;
lfs_mkdir(&lfs, "splitdir") => 0;
lfs_file_open(&lfs, &file, "splitdir/bump",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (lfs_size_t i = 0; i < cfg->block_size; i += 2) {
memcpy(&buffer[i], "hi", 2);
}
lfs_file_write(&lfs, &file, buffer, 2*cfg->block_size) => LFS_ERR_NOSPC;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# outdated lookahead test
[cases.test_alloc_outdated_lookahead]
if = 'ERASE_SIZE == 512'
defines.ERASE_COUNT = 1024
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// fill completely with two files
lfs_file_t file;
lfs_file_open(&lfs, &file, "exhaustion1",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size_t size = strlen("blahblahblahblah");
uint8_t buffer[1024];
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg->block_count-2)/2)*(cfg->block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "exhaustion2",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg->block_count-2+1)/2)*(cfg->block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
// remount to force reset of lookahead
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
// rewrite one file
lfs_file_open(&lfs, &file, "exhaustion1",
LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_sync(&lfs, &file) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg->block_count-2)/2)*(cfg->block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
// rewrite second file, this requires lookahead does not
// use old population
lfs_file_open(&lfs, &file, "exhaustion2",
LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_sync(&lfs, &file) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg->block_count-2+1)/2)*(cfg->block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# outdated lookahead and split dir test
[cases.test_alloc_outdated_lookahead_split_dir]
if = 'ERASE_SIZE == 512'
defines.ERASE_COUNT = 1024
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// fill completely with two files
lfs_file_t file;
lfs_file_open(&lfs, &file, "exhaustion1",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size_t size = strlen("blahblahblahblah");
uint8_t buffer[1024];
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg->block_count-2)/2)*(cfg->block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "exhaustion2",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg->block_count-2+1)/2)*(cfg->block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
// remount to force reset of lookahead
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
// rewrite one file with a hole of one block
lfs_file_open(&lfs, &file, "exhaustion1",
LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_sync(&lfs, &file) => 0;
size = strlen("blahblahblahblah");
memcpy(buffer, "blahblahblahblah", size);
for (lfs_size_t i = 0;
i < ((cfg->block_count-2)/2 - 1)*(cfg->block_size-8);
i += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
// try to allocate a directory, should fail!
lfs_mkdir(&lfs, "split") => LFS_ERR_NOSPC;
// file should not fail
lfs_file_open(&lfs, &file, "notasplit",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_write(&lfs, &file, "hi", 2) => 2;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''

190
tests/test_attrs.sh → tests/test_attrs.toml Executable file → Normal file

@@ -1,24 +1,18 @@
#!/bin/bash
set -eu
echo "=== Attr tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
[cases.test_attrs_get_set]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "hello") => 0;
lfs_file_open(&lfs, &file[0], "hello/hello",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_write(&lfs, &file[0], "hello", strlen("hello"))
=> strlen("hello");
lfs_file_close(&lfs, &file[0]);
lfs_file_t file;
lfs_file_open(&lfs, &file, "hello/hello", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_write(&lfs, &file, "hello", strlen("hello")) => strlen("hello");
lfs_file_close(&lfs, &file);
lfs_unmount(&lfs) => 0;
TEST
echo "--- Set/get attribute ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
uint8_t buffer[1024];
memset(buffer, 0, sizeof(buffer));
lfs_setattr(&lfs, "hello", 'A', "aaaa", 4) => 0;
lfs_setattr(&lfs, "hello", 'B', "bbbbbb", 6) => 0;
lfs_setattr(&lfs, "hello", 'C', "ccccc", 5) => 0;
@@ -68,9 +62,9 @@ tests/test.py << TEST
lfs_getattr(&lfs, "hello", 'C', buffer+10, 5) => 5;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
memset(buffer, 0, sizeof(buffer));
lfs_getattr(&lfs, "hello", 'A', buffer, 4) => 4;
lfs_getattr(&lfs, "hello", 'B', buffer+4, 9) => 9;
lfs_getattr(&lfs, "hello", 'C', buffer+13, 5) => 5;
@@ -78,16 +72,28 @@ tests/test.py << TEST
memcmp(buffer+4, "fffffffff", 9) => 0;
memcmp(buffer+13, "ccccc", 5) => 0;
lfs_file_open(&lfs, &file[0], "hello/hello", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], buffer, sizeof(buffer)) => strlen("hello");
lfs_file_open(&lfs, &file, "hello/hello", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, buffer, sizeof(buffer)) => strlen("hello");
memcmp(buffer, "hello", strlen("hello")) => 0;
lfs_file_close(&lfs, &file[0]);
lfs_file_close(&lfs, &file);
lfs_unmount(&lfs) => 0;
TEST
'''
echo "--- Set/get root attribute ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
[cases.test_attrs_get_set_root]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "hello") => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "hello/hello", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_write(&lfs, &file, "hello", strlen("hello")) => strlen("hello");
lfs_file_close(&lfs, &file);
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
uint8_t buffer[1024];
memset(buffer, 0, sizeof(buffer));
lfs_setattr(&lfs, "/", 'A', "aaaa", 4) => 0;
lfs_setattr(&lfs, "/", 'B', "bbbbbb", 6) => 0;
lfs_setattr(&lfs, "/", 'C', "ccccc", 5) => 0;
@@ -136,9 +142,9 @@ tests/test.py << TEST
lfs_getattr(&lfs, "/", 'B', buffer+4, 6) => 9;
lfs_getattr(&lfs, "/", 'C', buffer+10, 5) => 5;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
memset(buffer, 0, sizeof(buffer));
lfs_getattr(&lfs, "/", 'A', buffer, 4) => 4;
lfs_getattr(&lfs, "/", 'B', buffer+4, 9) => 9;
lfs_getattr(&lfs, "/", 'C', buffer+13, 5) => 5;
@@ -146,16 +152,28 @@ tests/test.py << TEST
memcmp(buffer+4, "fffffffff", 9) => 0;
memcmp(buffer+13, "ccccc", 5) => 0;
lfs_file_open(&lfs, &file[0], "hello/hello", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], buffer, sizeof(buffer)) => strlen("hello");
lfs_file_open(&lfs, &file, "hello/hello", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, buffer, sizeof(buffer)) => strlen("hello");
memcmp(buffer, "hello", strlen("hello")) => 0;
lfs_file_close(&lfs, &file[0]);
lfs_file_close(&lfs, &file);
lfs_unmount(&lfs) => 0;
TEST
'''
echo "--- Set/get file attribute ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
[cases.test_attrs_get_set_file]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "hello") => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "hello/hello", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_write(&lfs, &file, "hello", strlen("hello")) => strlen("hello");
lfs_file_close(&lfs, &file);
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
uint8_t buffer[1024];
memset(buffer, 0, sizeof(buffer));
struct lfs_attr attrs1[] = {
{'A', buffer, 4},
{'B', buffer+4, 6},
@@ -163,55 +181,55 @@ tests/test.py << TEST
};
struct lfs_file_config cfg1 = {.attrs=attrs1, .attr_count=3};
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
memcpy(buffer, "aaaa", 4);
memcpy(buffer+4, "bbbbbb", 6);
memcpy(buffer+10, "ccccc", 5);
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_close(&lfs, &file) => 0;
memset(buffer, 0, 15);
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
lfs_file_close(&lfs, &file) => 0;
memcmp(buffer, "aaaa", 4) => 0;
memcmp(buffer+4, "bbbbbb", 6) => 0;
memcmp(buffer+10, "ccccc", 5) => 0;
attrs1[1].size = 0;
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
lfs_file_close(&lfs, &file) => 0;
memset(buffer, 0, 15);
attrs1[1].size = 6;
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
lfs_file_close(&lfs, &file) => 0;
memcmp(buffer, "aaaa", 4) => 0;
memcmp(buffer+4, "\0\0\0\0\0\0", 6) => 0;
memcmp(buffer+10, "ccccc", 5) => 0;
attrs1[1].size = 6;
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
memcpy(buffer+4, "dddddd", 6);
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_close(&lfs, &file) => 0;
memset(buffer, 0, 15);
attrs1[1].size = 6;
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
lfs_file_close(&lfs, &file) => 0;
memcmp(buffer, "aaaa", 4) => 0;
memcmp(buffer+4, "dddddd", 6) => 0;
memcmp(buffer+10, "ccccc", 5) => 0;
attrs1[1].size = 3;
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
memcpy(buffer+4, "eee", 3);
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_close(&lfs, &file) => 0;
memset(buffer, 0, 15);
attrs1[1].size = 6;
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
lfs_file_close(&lfs, &file) => 0;
memcmp(buffer, "aaaa", 4) => 0;
memcmp(buffer+4, "eee\0\0\0", 6) => 0;
memcmp(buffer+10, "ccccc", 5) => 0;
attrs1[0].size = LFS_ATTR_MAX+1;
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_WRONLY, &cfg1)
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_WRONLY, &cfg1)
=> LFS_ERR_NOSPC;
struct lfs_attr attrs2[] = {
@@ -220,40 +238,55 @@ tests/test.py << TEST
{'C', buffer+13, 5},
};
struct lfs_file_config cfg2 = {.attrs=attrs2, .attr_count=3};
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDWR, &cfg2) => 0;
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDWR, &cfg2) => 0;
memcpy(buffer+4, "fffffffff", 9);
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_close(&lfs, &file) => 0;
attrs1[0].size = 4;
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDONLY, &cfg1) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
struct lfs_attr attrs2[] = {
lfs_mount(&lfs, cfg) => 0;
memset(buffer, 0, sizeof(buffer));
struct lfs_attr attrs3[] = {
{'A', buffer, 4},
{'B', buffer+4, 9},
{'C', buffer+13, 5},
};
struct lfs_file_config cfg2 = {.attrs=attrs2, .attr_count=3};
struct lfs_file_config cfg3 = {.attrs=attrs3, .attr_count=3};
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_RDONLY, &cfg2) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_RDONLY, &cfg3) => 0;
lfs_file_close(&lfs, &file) => 0;
memcmp(buffer, "aaaa", 4) => 0;
memcmp(buffer+4, "fffffffff", 9) => 0;
memcmp(buffer+13, "ccccc", 5) => 0;
lfs_file_open(&lfs, &file[0], "hello/hello", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], buffer, sizeof(buffer)) => strlen("hello");
lfs_file_open(&lfs, &file, "hello/hello", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, buffer, sizeof(buffer)) => strlen("hello");
memcmp(buffer, "hello", strlen("hello")) => 0;
lfs_file_close(&lfs, &file[0]);
lfs_file_close(&lfs, &file);
lfs_unmount(&lfs) => 0;
TEST
'''
echo "--- Deferred file attributes ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
[cases.test_attrs_deferred_file]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "hello") => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "hello/hello", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_write(&lfs, &file, "hello", strlen("hello")) => strlen("hello");
lfs_file_close(&lfs, &file);
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_setattr(&lfs, "hello/hello", 'B', "fffffffff", 9) => 0;
lfs_setattr(&lfs, "hello/hello", 'C', "ccccc", 5) => 0;
uint8_t buffer[1024];
memset(buffer, 0, sizeof(buffer));
struct lfs_attr attrs1[] = {
{'B', "gggg", 4},
{'C', "", 0},
@@ -261,7 +294,7 @@ tests/test.py << TEST
};
struct lfs_file_config cfg1 = {.attrs=attrs1, .attr_count=3};
lfs_file_opencfg(&lfs, &file[0], "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
lfs_file_opencfg(&lfs, &file, "hello/hello", LFS_O_WRONLY, &cfg1) => 0;
lfs_getattr(&lfs, "hello/hello", 'B', buffer, 9) => 9;
lfs_getattr(&lfs, "hello/hello", 'C', buffer+9, 9) => 5;
@@ -270,7 +303,7 @@ tests/test.py << TEST
memcmp(buffer+9, "ccccc\0\0\0\0", 9) => 0;
memcmp(buffer+18, "\0\0\0\0\0\0\0\0\0", 9) => 0;
lfs_file_sync(&lfs, &file[0]) => 0;
lfs_file_sync(&lfs, &file) => 0;
lfs_getattr(&lfs, "hello/hello", 'B', buffer, 9) => 4;
lfs_getattr(&lfs, "hello/hello", 'C', buffer+9, 9) => 0;
lfs_getattr(&lfs, "hello/hello", 'D', buffer+18, 9) => 4;
@@ -278,9 +311,6 @@ tests/test.py << TEST
memcmp(buffer+9, "\0\0\0\0\0\0\0\0\0", 9) => 0;
memcmp(buffer+18, "hhhh\0\0\0\0\0", 9) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py
'''

260
tests/test_badblocks.toml Normal file

@@ -0,0 +1,260 @@
# bad blocks with block cycles should be tested in test_relocations
if = '(int32_t)BLOCK_CYCLES == -1'
[cases.test_badblocks_single]
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.ERASE_CYCLES = 0xffffffff
defines.ERASE_VALUE = [0x00, 0xff, -1]
defines.BADBLOCK_BEHAVIOR = [
'LFS_EMUBD_BADBLOCK_PROGERROR',
'LFS_EMUBD_BADBLOCK_ERASEERROR',
'LFS_EMUBD_BADBLOCK_READERROR',
'LFS_EMUBD_BADBLOCK_PROGNOOP',
'LFS_EMUBD_BADBLOCK_ERASENOOP',
]
defines.NAMEMULT = 64
defines.FILEMULT = 1
code = '''
for (lfs_block_t badblock = 2; badblock < BLOCK_COUNT; badblock++) {
lfs_emubd_setwear(cfg, badblock-1, 0) => 0;
lfs_emubd_setwear(cfg, badblock, 0xffffffff) => 0;
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int i = 1; i < 10; i++) {
uint8_t buffer[1024];
for (int j = 0; j < NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[NAMEMULT] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => 0;
buffer[NAMEMULT] = '/';
for (int j = 0; j < NAMEMULT; j++) {
buffer[j+NAMEMULT+1] = '0'+i;
}
buffer[2*NAMEMULT+1] = '\0';
lfs_file_t file;
lfs_file_open(&lfs, &file, (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_size_t size = NAMEMULT;
for (int j = 0; j < i*FILEMULT; j++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int i = 1; i < 10; i++) {
uint8_t buffer[1024];
for (int j = 0; j < NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[NAMEMULT] = '\0';
struct lfs_info info;
lfs_stat(&lfs, (char*)buffer, &info) => 0;
info.type => LFS_TYPE_DIR;
buffer[NAMEMULT] = '/';
for (int j = 0; j < NAMEMULT; j++) {
buffer[j+NAMEMULT+1] = '0'+i;
}
buffer[2*NAMEMULT+1] = '\0';
lfs_file_t file;
lfs_file_open(&lfs, &file, (char*)buffer, LFS_O_RDONLY) => 0;
int size = NAMEMULT;
for (int j = 0; j < i*FILEMULT; j++) {
uint8_t rbuffer[1024];
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(buffer, rbuffer, size) => 0;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
}
'''
[cases.test_badblocks_region_corruption] # (causes cascading failures)
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.ERASE_CYCLES = 0xffffffff
defines.ERASE_VALUE = [0x00, 0xff, -1]
defines.BADBLOCK_BEHAVIOR = [
'LFS_EMUBD_BADBLOCK_PROGERROR',
'LFS_EMUBD_BADBLOCK_ERASEERROR',
'LFS_EMUBD_BADBLOCK_READERROR',
'LFS_EMUBD_BADBLOCK_PROGNOOP',
'LFS_EMUBD_BADBLOCK_ERASENOOP',
]
defines.NAMEMULT = 64
defines.FILEMULT = 1
code = '''
for (lfs_block_t i = 0; i < (BLOCK_COUNT-2)/2; i++) {
lfs_emubd_setwear(cfg, i+2, 0xffffffff) => 0;
}
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int i = 1; i < 10; i++) {
uint8_t buffer[1024];
for (int j = 0; j < NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[NAMEMULT] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => 0;
buffer[NAMEMULT] = '/';
for (int j = 0; j < NAMEMULT; j++) {
buffer[j+NAMEMULT+1] = '0'+i;
}
buffer[2*NAMEMULT+1] = '\0';
lfs_file_t file;
lfs_file_open(&lfs, &file, (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_size_t size = NAMEMULT;
for (int j = 0; j < i*FILEMULT; j++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int i = 1; i < 10; i++) {
uint8_t buffer[1024];
for (int j = 0; j < NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[NAMEMULT] = '\0';
struct lfs_info info;
lfs_stat(&lfs, (char*)buffer, &info) => 0;
info.type => LFS_TYPE_DIR;
buffer[NAMEMULT] = '/';
for (int j = 0; j < NAMEMULT; j++) {
buffer[j+NAMEMULT+1] = '0'+i;
}
buffer[2*NAMEMULT+1] = '\0';
lfs_file_t file;
lfs_file_open(&lfs, &file, (char*)buffer, LFS_O_RDONLY) => 0;
lfs_size_t size = NAMEMULT;
for (int j = 0; j < i*FILEMULT; j++) {
uint8_t rbuffer[1024];
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(buffer, rbuffer, size) => 0;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
[cases.test_badblocks_alternating_corruption] # (causes cascading failures)
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.ERASE_CYCLES = 0xffffffff
defines.ERASE_VALUE = [0x00, 0xff, -1]
defines.BADBLOCK_BEHAVIOR = [
'LFS_EMUBD_BADBLOCK_PROGERROR',
'LFS_EMUBD_BADBLOCK_ERASEERROR',
'LFS_EMUBD_BADBLOCK_READERROR',
'LFS_EMUBD_BADBLOCK_PROGNOOP',
'LFS_EMUBD_BADBLOCK_ERASENOOP',
]
defines.NAMEMULT = 64
defines.FILEMULT = 1
code = '''
for (lfs_block_t i = 0; i < (BLOCK_COUNT-2)/2; i++) {
lfs_emubd_setwear(cfg, (2*i) + 2, 0xffffffff) => 0;
}
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int i = 1; i < 10; i++) {
uint8_t buffer[1024];
for (int j = 0; j < NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[NAMEMULT] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => 0;
buffer[NAMEMULT] = '/';
for (int j = 0; j < NAMEMULT; j++) {
buffer[j+NAMEMULT+1] = '0'+i;
}
buffer[2*NAMEMULT+1] = '\0';
lfs_file_t file;
lfs_file_open(&lfs, &file, (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_size_t size = NAMEMULT;
for (int j = 0; j < i*FILEMULT; j++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int i = 1; i < 10; i++) {
uint8_t buffer[1024];
for (int j = 0; j < NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[NAMEMULT] = '\0';
struct lfs_info info;
lfs_stat(&lfs, (char*)buffer, &info) => 0;
info.type => LFS_TYPE_DIR;
buffer[NAMEMULT] = '/';
for (int j = 0; j < NAMEMULT; j++) {
buffer[j+NAMEMULT+1] = '0'+i;
}
buffer[2*NAMEMULT+1] = '\0';
lfs_file_t file;
lfs_file_open(&lfs, &file, (char*)buffer, LFS_O_RDONLY) => 0;
lfs_size_t size = NAMEMULT;
for (int j = 0; j < i*FILEMULT; j++) {
uint8_t rbuffer[1024];
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(buffer, rbuffer, size) => 0;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
# other corner cases
[cases.test_badblocks_superblocks] # (corrupt superblocks 0 and 1)
defines.ERASE_CYCLES = 0xffffffff
defines.ERASE_VALUE = [0x00, 0xff, -1]
defines.BADBLOCK_BEHAVIOR = [
'LFS_EMUBD_BADBLOCK_PROGERROR',
'LFS_EMUBD_BADBLOCK_ERASEERROR',
'LFS_EMUBD_BADBLOCK_READERROR',
'LFS_EMUBD_BADBLOCK_PROGNOOP',
'LFS_EMUBD_BADBLOCK_ERASENOOP',
]
code = '''
lfs_emubd_setwear(cfg, 0, 0xffffffff) => 0;
lfs_emubd_setwear(cfg, 1, 0xffffffff) => 0;
lfs_t lfs;
lfs_format(&lfs, cfg) => LFS_ERR_NOSPC;
lfs_mount(&lfs, cfg) => LFS_ERR_CORRUPT;
'''

248
tests/test_bd.toml Normal file

@@ -0,0 +1,248 @@
# These tests don't really test littlefs at all, they are here only to make
# sure the underlying block device is working.
#
# Note we use 251, a prime, in places to avoid aliasing with powers of 2.
#
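# As a minimal illustration of the aliasing concern (a hypothetical standalone
# check, not part of this suite), assume a 256-byte read/prog chunk:
#
#   for (unsigned j = 0; j < 256; j++) {
#       // with a power-of-two modulus, byte j of chunk 0 and chunk 1 are
#       // identical, so a read that lands one chunk off still compares equal
#       assert((0*256 + j) % 256 == (1*256 + j) % 256);
#       // with the prime 251, neighboring chunks always differ, so the
#       // comparisons in these tests catch misaddressed reads and progs
#       assert((0*256 + j) % 251 != (1*256 + j) % 251);
#   }
#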
[cases.test_bd_one_block]
defines.READ = ['READ_SIZE', 'BLOCK_SIZE']
defines.PROG = ['PROG_SIZE', 'BLOCK_SIZE']
code = '''
uint8_t buffer[lfs_max(READ, PROG)];
// write data
cfg->erase(cfg, 0) => 0;
for (lfs_off_t i = 0; i < cfg->block_size; i += PROG) {
for (lfs_off_t j = 0; j < PROG; j++) {
buffer[j] = (i+j) % 251;
}
cfg->prog(cfg, 0, i, buffer, PROG) => 0;
}
// read data
for (lfs_off_t i = 0; i < cfg->block_size; i += READ) {
cfg->read(cfg, 0, i, buffer, READ) => 0;
for (lfs_off_t j = 0; j < READ; j++) {
LFS_ASSERT(buffer[j] == (i+j) % 251);
}
}
'''
[cases.test_bd_two_block]
defines.READ = ['READ_SIZE', 'BLOCK_SIZE']
defines.PROG = ['PROG_SIZE', 'BLOCK_SIZE']
code = '''
uint8_t buffer[lfs_max(READ, PROG)];
lfs_block_t block;
// write block 0
block = 0;
cfg->erase(cfg, block) => 0;
for (lfs_off_t i = 0; i < cfg->block_size; i += PROG) {
for (lfs_off_t j = 0; j < PROG; j++) {
buffer[j] = (block+i+j) % 251;
}
cfg->prog(cfg, block, i, buffer, PROG) => 0;
}
// read block 0
block = 0;
for (lfs_off_t i = 0; i < cfg->block_size; i += READ) {
cfg->read(cfg, block, i, buffer, READ) => 0;
for (lfs_off_t j = 0; j < READ; j++) {
LFS_ASSERT(buffer[j] == (block+i+j) % 251);
}
}
// write block 1
block = 1;
cfg->erase(cfg, block) => 0;
for (lfs_off_t i = 0; i < cfg->block_size; i += PROG) {
for (lfs_off_t j = 0; j < PROG; j++) {
buffer[j] = (block+i+j) % 251;
}
cfg->prog(cfg, block, i, buffer, PROG) => 0;
}
// read block 1
block = 1;
for (lfs_off_t i = 0; i < cfg->block_size; i += READ) {
cfg->read(cfg, block, i, buffer, READ) => 0;
for (lfs_off_t j = 0; j < READ; j++) {
LFS_ASSERT(buffer[j] == (block+i+j) % 251);
}
}
// read block 0 again
block = 0;
for (lfs_off_t i = 0; i < cfg->block_size; i += READ) {
cfg->read(cfg, block, i, buffer, READ) => 0;
for (lfs_off_t j = 0; j < READ; j++) {
LFS_ASSERT(buffer[j] == (block+i+j) % 251);
}
}
'''
[cases.test_bd_last_block]
defines.READ = ['READ_SIZE', 'BLOCK_SIZE']
defines.PROG = ['PROG_SIZE', 'BLOCK_SIZE']
code = '''
uint8_t buffer[lfs_max(READ, PROG)];
lfs_block_t block;
// write block 0
block = 0;
cfg->erase(cfg, block) => 0;
for (lfs_off_t i = 0; i < cfg->block_size; i += PROG) {
for (lfs_off_t j = 0; j < PROG; j++) {
buffer[j] = (block+i+j) % 251;
}
cfg->prog(cfg, block, i, buffer, PROG) => 0;
}
// read block 0
block = 0;
for (lfs_off_t i = 0; i < cfg->block_size; i += READ) {
cfg->read(cfg, block, i, buffer, READ) => 0;
for (lfs_off_t j = 0; j < READ; j++) {
LFS_ASSERT(buffer[j] == (block+i+j) % 251);
}
}
// write block n-1
block = cfg->block_count-1;
cfg->erase(cfg, block) => 0;
for (lfs_off_t i = 0; i < cfg->block_size; i += PROG) {
for (lfs_off_t j = 0; j < PROG; j++) {
buffer[j] = (block+i+j) % 251;
}
cfg->prog(cfg, block, i, buffer, PROG) => 0;
}
// read block n-1
block = cfg->block_count-1;
for (lfs_off_t i = 0; i < cfg->block_size; i += READ) {
cfg->read(cfg, block, i, buffer, READ) => 0;
for (lfs_off_t j = 0; j < READ; j++) {
LFS_ASSERT(buffer[j] == (block+i+j) % 251);
}
}
// read block 0 again
block = 0;
for (lfs_off_t i = 0; i < cfg->block_size; i += READ) {
cfg->read(cfg, block, i, buffer, READ) => 0;
for (lfs_off_t j = 0; j < READ; j++) {
LFS_ASSERT(buffer[j] == (block+i+j) % 251);
}
}
'''
[cases.test_bd_powers_of_two]
defines.READ = ['READ_SIZE', 'BLOCK_SIZE']
defines.PROG = ['PROG_SIZE', 'BLOCK_SIZE']
code = '''
uint8_t buffer[lfs_max(READ, PROG)];
// write/read every power of 2
lfs_block_t block = 1;
while (block < cfg->block_count) {
// write
cfg->erase(cfg, block) => 0;
for (lfs_off_t i = 0; i < cfg->block_size; i += PROG) {
for (lfs_off_t j = 0; j < PROG; j++) {
buffer[j] = (block+i+j) % 251;
}
cfg->prog(cfg, block, i, buffer, PROG) => 0;
}
// read
for (lfs_off_t i = 0; i < cfg->block_size; i += READ) {
cfg->read(cfg, block, i, buffer, READ) => 0;
for (lfs_off_t j = 0; j < READ; j++) {
LFS_ASSERT(buffer[j] == (block+i+j) % 251);
}
}
block *= 2;
}
// read every power of 2 again
block = 1;
while (block < cfg->block_count) {
// read
for (lfs_off_t i = 0; i < cfg->block_size; i += READ) {
cfg->read(cfg, block, i, buffer, READ) => 0;
for (lfs_off_t j = 0; j < READ; j++) {
LFS_ASSERT(buffer[j] == (block+i+j) % 251);
}
}
block *= 2;
}
'''
[cases.test_bd_fibonacci]
defines.READ = ['READ_SIZE', 'BLOCK_SIZE']
defines.PROG = ['PROG_SIZE', 'BLOCK_SIZE']
code = '''
uint8_t buffer[lfs_max(READ, PROG)];
// write/read every fibonacci number on our device
lfs_block_t block = 1;
lfs_block_t block_ = 1;
while (block < cfg->block_count) {
// write
cfg->erase(cfg, block) => 0;
for (lfs_off_t i = 0; i < cfg->block_size; i += PROG) {
for (lfs_off_t j = 0; j < PROG; j++) {
buffer[j] = (block+i+j) % 251;
}
cfg->prog(cfg, block, i, buffer, PROG) => 0;
}
// read
for (lfs_off_t i = 0; i < cfg->block_size; i += READ) {
cfg->read(cfg, block, i, buffer, READ) => 0;
for (lfs_off_t j = 0; j < READ; j++) {
LFS_ASSERT(buffer[j] == (block+i+j) % 251);
}
}
lfs_block_t nblock = block + block_;
block_ = block;
block = nblock;
}
// read every fibonacci number again
block = 1;
block_ = 1;
while (block < cfg->block_count) {
// read
for (lfs_off_t i = 0; i < cfg->block_size; i += READ) {
cfg->read(cfg, block, i, buffer, READ) => 0;
for (lfs_off_t j = 0; j < READ; j++) {
LFS_ASSERT(buffer[j] == (block+i+j) % 251);
}
}
lfs_block_t nblock = block + block_;
block_ = block;
block = nblock;
}
'''

1453
tests/test_compat.toml Normal file

File diff suppressed because it is too large


@@ -1,118 +0,0 @@
#!/bin/bash
set -eu
echo "=== Corrupt tests ==="
NAMEMULT=64
FILEMULT=1
lfs_mktree() {
tests/test.py ${1:-} << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 1; i < 10; i++) {
for (int j = 0; j < $NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[$NAMEMULT] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => 0;
buffer[$NAMEMULT] = '/';
for (int j = 0; j < $NAMEMULT; j++) {
buffer[j+$NAMEMULT+1] = '0'+i;
}
buffer[2*$NAMEMULT+1] = '\0';
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = $NAMEMULT;
for (int j = 0; j < i*$FILEMULT; j++) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
}
lfs_chktree() {
tests/test.py ${1:-} << TEST
lfs_mount(&lfs, &cfg) => 0;
for (int i = 1; i < 10; i++) {
for (int j = 0; j < $NAMEMULT; j++) {
buffer[j] = '0'+i;
}
buffer[$NAMEMULT] = '\0';
lfs_stat(&lfs, (char*)buffer, &info) => 0;
info.type => LFS_TYPE_DIR;
buffer[$NAMEMULT] = '/';
for (int j = 0; j < $NAMEMULT; j++) {
buffer[j+$NAMEMULT+1] = '0'+i;
}
buffer[2*$NAMEMULT+1] = '\0';
lfs_file_open(&lfs, &file[0], (char*)buffer, LFS_O_RDONLY) => 0;
size = $NAMEMULT;
for (int j = 0; j < i*$FILEMULT; j++) {
lfs_file_read(&lfs, &file[0], rbuffer, size) => size;
memcmp(buffer, rbuffer, size) => 0;
}
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
}
echo "--- Sanity check ---"
rm -rf blocks
lfs_mktree
lfs_chktree
BLOCKS="$(ls blocks | grep -vw '[01]')"
echo "--- Block corruption ---"
for b in $BLOCKS
do
rm -rf blocks
mkdir blocks
ln -s /dev/zero blocks/$b
lfs_mktree
lfs_chktree
done
echo "--- Block persistence ---"
for b in $BLOCKS
do
rm -rf blocks
mkdir blocks
lfs_mktree
chmod a-w blocks/$b || true
lfs_mktree
lfs_chktree
done
echo "--- Big region corruption ---"
rm -rf blocks
mkdir blocks
for i in {2..512}
do
ln -s /dev/zero blocks/$(printf '%x' $i)
done
lfs_mktree
lfs_chktree
echo "--- Alternating corruption ---"
rm -rf blocks
mkdir blocks
for i in {2..1024..2}
do
ln -s /dev/zero blocks/$(printf '%x' $i)
done
lfs_mktree
lfs_chktree
echo "--- Results ---"
tests/stats.py


@@ -1,484 +0,0 @@
#!/bin/bash
set -eu
LARGESIZE=128
echo "=== Directory tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
echo "--- Root directory ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Directory creation ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "potato") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- File creation ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "burito", LFS_O_CREAT | LFS_O_WRONLY) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Directory iteration ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "burito") => 0;
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "potato") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Directory failures ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "potato") => LFS_ERR_EXIST;
lfs_dir_open(&lfs, &dir[0], "tomato") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "burito") => LFS_ERR_NOTDIR;
lfs_file_open(&lfs, &file[0], "tomato", LFS_O_RDONLY) => LFS_ERR_NOENT;
lfs_file_open(&lfs, &file[0], "potato", LFS_O_RDONLY) => LFS_ERR_ISDIR;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Nested directories ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "potato/baked") => 0;
lfs_mkdir(&lfs, "potato/sweet") => 0;
lfs_mkdir(&lfs, "potato/fried") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "potato") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "baked") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "fried") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "sweet") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block directory ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "cactus") => 0;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "cactus/test%03d", i);
lfs_mkdir(&lfs, (char*)buffer) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "cactus") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "test%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
info.type => LFS_TYPE_DIR;
}
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Directory remove ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "potato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "potato/sweet") => 0;
lfs_remove(&lfs, "potato/baked") => 0;
lfs_remove(&lfs, "potato/fried") => 0;
lfs_dir_open(&lfs, &dir[0], "potato") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_remove(&lfs, "potato") => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "burito") => 0;
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "cactus") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "burito") => 0;
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "cactus") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Directory rename ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "coldpotato") => 0;
lfs_mkdir(&lfs, "coldpotato/baked") => 0;
lfs_mkdir(&lfs, "coldpotato/sweet") => 0;
lfs_mkdir(&lfs, "coldpotato/fried") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "coldpotato", "hotpotato") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "hotpotato") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "baked") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "fried") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "sweet") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "warmpotato") => 0;
lfs_mkdir(&lfs, "warmpotato/mushy") => 0;
lfs_rename(&lfs, "hotpotato", "warmpotato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "warmpotato/mushy") => 0;
lfs_rename(&lfs, "hotpotato", "warmpotato") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "warmpotato") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "baked") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "fried") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "sweet") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "coldpotato") => 0;
lfs_rename(&lfs, "warmpotato/baked", "coldpotato/baked") => 0;
lfs_rename(&lfs, "warmpotato/sweet", "coldpotato/sweet") => 0;
lfs_rename(&lfs, "warmpotato/fried", "coldpotato/fried") => 0;
lfs_remove(&lfs, "coldpotato") => LFS_ERR_NOTEMPTY;
lfs_remove(&lfs, "warmpotato") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "coldpotato") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "baked") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "fried") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "sweet") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Recursive remove ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "coldpotato") => LFS_ERR_NOTEMPTY;
lfs_dir_open(&lfs, &dir[0], "coldpotato") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
while (true) {
int err = lfs_dir_read(&lfs, &dir[0], &info);
err >= 0 => 1;
if (err == 0) {
break;
}
strcpy((char*)buffer, "coldpotato/");
strcat((char*)buffer, info.name);
lfs_remove(&lfs, (char*)buffer) => 0;
}
lfs_remove(&lfs, "coldpotato") => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "burito") => 0;
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "cactus") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block rename ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "cactus/test%03d", i);
sprintf((char*)wbuffer, "cactus/tedd%03d", i);
lfs_rename(&lfs, (char*)buffer, (char*)wbuffer) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "cactus") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "tedd%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
info.type => LFS_TYPE_DIR;
}
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block remove ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "cactus") => LFS_ERR_NOTEMPTY;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "cactus/tedd%03d", i);
lfs_remove(&lfs, (char*)buffer) => 0;
}
lfs_remove(&lfs, "cactus") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "burito") => 0;
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block directory with files ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "prickly-pear") => 0;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "prickly-pear/test%03d", i);
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = 6;
memcpy(wbuffer, "Hello", size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "prickly-pear") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "test%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
info.type => LFS_TYPE_REG;
info.size => 6;
}
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block rename with files ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "prickly-pear/test%03d", i);
sprintf((char*)wbuffer, "prickly-pear/tedd%03d", i);
lfs_rename(&lfs, (char*)buffer, (char*)wbuffer) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "prickly-pear") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "tedd%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
info.type => LFS_TYPE_REG;
info.size => 6;
}
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Multi-block remove with files ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "prickly-pear") => LFS_ERR_NOTEMPTY;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "prickly-pear/tedd%03d", i);
lfs_remove(&lfs, (char*)buffer) => 0;
}
lfs_remove(&lfs, "prickly-pear") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "burito") => 0;
info.type => LFS_TYPE_REG;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

1044
tests/test_dirs.toml Normal file

File diff suppressed because it is too large


@@ -1,221 +0,0 @@
#!/bin/bash
set -eu
# Note: These tests are intended for a 512 byte inline size; at different
# inline sizes they should still pass, but won't be testing anything
echo "=== Entry tests ==="
rm -rf blocks
function read_file {
cat << TEST
size = $2;
lfs_file_open(&lfs, &file[0], "$1", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
TEST
}
function write_file {
cat << TEST
size = $2;
lfs_file_open(&lfs, &file[0], "$1",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
TEST
}
echo "--- Entry grow test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
$(write_file "hi0" 20)
$(write_file "hi1" 20)
$(write_file "hi2" 20)
$(write_file "hi3" 20)
$(read_file "hi1" 20)
$(write_file "hi1" 200)
$(read_file "hi0" 20)
$(read_file "hi1" 200)
$(read_file "hi2" 20)
$(read_file "hi3" 20)
lfs_unmount(&lfs) => 0;
TEST
echo "--- Entry shrink test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
$(write_file "hi0" 20)
$(write_file "hi1" 200)
$(write_file "hi2" 20)
$(write_file "hi3" 20)
$(read_file "hi1" 200)
$(write_file "hi1" 20)
$(read_file "hi0" 20)
$(read_file "hi1" 20)
$(read_file "hi2" 20)
$(read_file "hi3" 20)
lfs_unmount(&lfs) => 0;
TEST
echo "--- Entry spill test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
$(write_file "hi0" 200)
$(write_file "hi1" 200)
$(write_file "hi2" 200)
$(write_file "hi3" 200)
$(read_file "hi0" 200)
$(read_file "hi1" 200)
$(read_file "hi2" 200)
$(read_file "hi3" 200)
lfs_unmount(&lfs) => 0;
TEST
echo "--- Entry push spill test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
$(write_file "hi0" 200)
$(write_file "hi1" 20)
$(write_file "hi2" 200)
$(write_file "hi3" 200)
$(read_file "hi1" 20)
$(write_file "hi1" 200)
$(read_file "hi0" 200)
$(read_file "hi1" 200)
$(read_file "hi2" 200)
$(read_file "hi3" 200)
lfs_unmount(&lfs) => 0;
TEST
echo "--- Entry push spill two test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
$(write_file "hi0" 200)
$(write_file "hi1" 20)
$(write_file "hi2" 200)
$(write_file "hi3" 200)
$(write_file "hi4" 200)
$(read_file "hi1" 20)
$(write_file "hi1" 200)
$(read_file "hi0" 200)
$(read_file "hi1" 200)
$(read_file "hi2" 200)
$(read_file "hi3" 200)
$(read_file "hi4" 200)
lfs_unmount(&lfs) => 0;
TEST
echo "--- Entry drop test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
$(write_file "hi0" 200)
$(write_file "hi1" 200)
$(write_file "hi2" 200)
$(write_file "hi3" 200)
lfs_remove(&lfs, "hi1") => 0;
lfs_stat(&lfs, "hi1", &info) => LFS_ERR_NOENT;
$(read_file "hi0" 200)
$(read_file "hi2" 200)
$(read_file "hi3" 200)
lfs_remove(&lfs, "hi2") => 0;
lfs_stat(&lfs, "hi2", &info) => LFS_ERR_NOENT;
$(read_file "hi0" 200)
$(read_file "hi3" 200)
lfs_remove(&lfs, "hi3") => 0;
lfs_stat(&lfs, "hi3", &info) => LFS_ERR_NOENT;
$(read_file "hi0" 200)
lfs_remove(&lfs, "hi0") => 0;
lfs_stat(&lfs, "hi0", &info) => LFS_ERR_NOENT;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Create too big ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
memset(buffer, 'm', 200);
buffer[200] = '\0';
size = 400;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
size = 400;
lfs_file_open(&lfs, &file[0], (char*)buffer, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Resize too big ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
memset(buffer, 'm', 200);
buffer[200] = '\0';
size = 40;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
size = 40;
lfs_file_open(&lfs, &file[0], (char*)buffer, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
size = 400;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
size = 400;
lfs_file_open(&lfs, &file[0], (char*)buffer, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

642
tests/test_entries.toml Normal file

@@ -0,0 +1,642 @@
# These tests are for some specific corner cases with neighboring inline files.
# Note that these tests are intended for 512 byte inline sizes. They should
# still pass with other inline sizes but wouldn't be testing anything.
defines.CACHE_SIZE = 512
if = 'CACHE_SIZE % PROG_SIZE == 0 && CACHE_SIZE == 512'
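# As a quick check of the gate above (an illustration, not exhaustive): with
# CACHE_SIZE fixed at 512, 'CACHE_SIZE % PROG_SIZE == 0' admits any PROG_SIZE
# that divides 512 evenly (1, 2, 4, ..., 256, 512), and skips geometries where
# it doesn't, e.g. 512 % 96 == 32 != 0.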
[cases.test_entries_grow]
code = '''
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// write hi0 20
char path[1024];
lfs_size_t size;
sprintf(path, "hi0"); size = 20;
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi2 20
sprintf(path, "hi2"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi3 20
sprintf(path, "hi3"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// write hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi0 20
sprintf(path, "hi0"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi2 20
sprintf(path, "hi2"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 20
sprintf(path, "hi3"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_entries_shrink]
code = '''
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// write hi0 20
char path[1024];
lfs_size_t size;
sprintf(path, "hi0"); size = 20;
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi2 20
sprintf(path, "hi2"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi3 20
sprintf(path, "hi3"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// write hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi0 20
sprintf(path, "hi0"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi2 20
sprintf(path, "hi2"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 20
sprintf(path, "hi3"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_entries_spill]
code = '''
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// write hi0 200
char path[1024];
lfs_size_t size;
sprintf(path, "hi0"); size = 200;
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_entries_push_spill]
code = '''
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// write hi0 200
char path[1024];
lfs_size_t size;
sprintf(path, "hi0"); size = 200;
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// write hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_entries_push_spill_two]
code = '''
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// write hi0 200
char path[1024];
lfs_size_t size;
sprintf(path, "hi0"); size = 200;
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi4 200
sprintf(path, "hi4"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi1 20
sprintf(path, "hi1"); size = 20;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// write hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// read hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi4 200
sprintf(path, "hi4"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_entries_drop]
code = '''
uint8_t wbuffer[1024];
uint8_t rbuffer[1024];
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// write hi0 200
char path[1024];
lfs_size_t size;
sprintf(path, "hi0"); size = 200;
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi1 200
sprintf(path, "hi1"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
// write hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "hi1") => 0;
struct lfs_info info;
lfs_stat(&lfs, "hi1", &info) => LFS_ERR_NOENT;
// read hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi2 200
sprintf(path, "hi2"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "hi2") => 0;
lfs_stat(&lfs, "hi2", &info) => LFS_ERR_NOENT;
// read hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
// read hi3 200
sprintf(path, "hi3"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "hi3") => 0;
lfs_stat(&lfs, "hi3", &info) => LFS_ERR_NOENT;
// read hi0 200
sprintf(path, "hi0"); size = 200;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => size;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, "hi0") => 0;
lfs_stat(&lfs, "hi0", &info) => LFS_ERR_NOENT;
lfs_unmount(&lfs) => 0;
'''
[cases.test_entries_create_too_big]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
char path[1024];
memset(path, 'm', 200);
path[200] = '\0';
lfs_size_t size = 400;
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
uint8_t wbuffer[1024];
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_close(&lfs, &file) => 0;
size = 400;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
uint8_t rbuffer[1024];
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_entries_resize_too_big]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
char path[1024];
memset(path, 'm', 200);
path[200] = '\0';
lfs_size_t size = 40;
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
uint8_t wbuffer[1024];
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_close(&lfs, &file) => 0;
size = 40;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
uint8_t rbuffer[1024];
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
size = 400;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
memset(wbuffer, 'c', size);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_close(&lfs, &file) => 0;
size = 400;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
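
Throughout these cases, the => operator is shorthand used by the littlefs test runner, not standard C: each "expr => expected" line is rewritten into an equality assertion before the test is compiled. A minimal mental model, assuming the rewrite is a plain assert (the real expansion reports richer diagnostics):

    // hypothetical expansion of the test shorthand, for illustration only
    #include <assert.h>
    #define TEST_ASSERT_EQ(expr, expected) assert((expr) == (expected))

    // so a line such as
    //     lfs_file_write(&lfs, &file, wbuffer, size) => size;
    // behaves roughly like
    //     TEST_ASSERT_EQ(lfs_file_write(&lfs, &file, wbuffer, size), size);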

306
tests/test_evil.toml Normal file

@@ -0,0 +1,306 @@
# Tests for recovering from conditions which shouldn't come up
# during normal operation of littlefs
# invalid pointer tests (outside of block_count)
[cases.test_evil_invalid_tail_pointer]
defines.TAIL_TYPE = ['LFS_TYPE_HARDTAIL', 'LFS_TYPE_SOFTTAIL']
defines.INVALSET = [0x3, 0x1, 0x2]
in = "lfs.c"
code = '''
// create littlefs
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// change tail-pointer to invalid pointers
lfs_init(&lfs, cfg) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(TAIL_TYPE, 0x3ff, 8),
(lfs_block_t[2]){
(INVALSET & 0x1) ? 0xcccccccc : 0,
(INVALSET & 0x2) ? 0xcccccccc : 0}})) => 0;
lfs_deinit(&lfs) => 0;
// test that mount fails gracefully
lfs_mount(&lfs, cfg) => LFS_ERR_CORRUPT;
'''
[cases.test_evil_invalid_dir_pointer]
defines.INVALSET = [0x3, 0x1, 0x2]
in = "lfs.c"
code = '''
// create littlefs
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// make a dir
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "dir_here") => 0;
lfs_unmount(&lfs) => 0;
// change the dir pointer to be invalid
lfs_init(&lfs, cfg) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
// make sure id 1 == our directory
uint8_t buffer[1024];
lfs_dir_get(&lfs, &mdir,
LFS_MKTAG(0x700, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_NAME, 1, strlen("dir_here")), buffer)
=> LFS_MKTAG(LFS_TYPE_DIR, 1, strlen("dir_here"));
assert(memcmp((char*)buffer, "dir_here", strlen("dir_here")) == 0);
// change dir pointer
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_DIRSTRUCT, 1, 8),
(lfs_block_t[2]){
(INVALSET & 0x1) ? 0xcccccccc : 0,
(INVALSET & 0x2) ? 0xcccccccc : 0}})) => 0;
lfs_deinit(&lfs) => 0;
// test that accessing our bad dir fails, note there's a number
// of ways to access the dir, some can fail, but some don't
lfs_mount(&lfs, cfg) => 0;
struct lfs_info info;
lfs_stat(&lfs, "dir_here", &info) => 0;
assert(strcmp(info.name, "dir_here") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_t dir;
lfs_dir_open(&lfs, &dir, "dir_here") => LFS_ERR_CORRUPT;
lfs_stat(&lfs, "dir_here/file_here", &info) => LFS_ERR_CORRUPT;
lfs_dir_open(&lfs, &dir, "dir_here/dir_here") => LFS_ERR_CORRUPT;
lfs_file_t file;
lfs_file_open(&lfs, &file, "dir_here/file_here",
LFS_O_RDONLY) => LFS_ERR_CORRUPT;
lfs_file_open(&lfs, &file, "dir_here/file_here",
LFS_O_WRONLY | LFS_O_CREAT) => LFS_ERR_CORRUPT;
lfs_unmount(&lfs) => 0;
'''
[cases.test_evil_invalid_file_pointer]
in = "lfs.c"
defines.SIZE = [10, 1000, 100000] # faked file size
code = '''
// create littlefs
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// make a file
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "file_here",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// change the file pointer to be invalid
lfs_init(&lfs, cfg) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
// make sure id 1 == our file
uint8_t buffer[1024];
lfs_dir_get(&lfs, &mdir,
LFS_MKTAG(0x700, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_NAME, 1, strlen("file_here")), buffer)
=> LFS_MKTAG(LFS_TYPE_REG, 1, strlen("file_here"));
assert(memcmp((char*)buffer, "file_here", strlen("file_here")) == 0);
// change file pointer
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_CTZSTRUCT, 1, sizeof(struct lfs_ctz)),
&(struct lfs_ctz){0xcccccccc, lfs_tole32(SIZE)}})) => 0;
lfs_deinit(&lfs) => 0;
// test that accessing our bad file fails, note there's a number
// of ways to access the file, some can fail, but some don't
lfs_mount(&lfs, cfg) => 0;
struct lfs_info info;
lfs_stat(&lfs, "file_here", &info) => 0;
assert(strcmp(info.name, "file_here") == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == SIZE);
lfs_file_open(&lfs, &file, "file_here", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, buffer, SIZE) => LFS_ERR_CORRUPT;
lfs_file_close(&lfs, &file) => 0;
// any allocs that traverse the CTZ skip-list must unfortunately fail
if (SIZE > 2*BLOCK_SIZE) {
lfs_mkdir(&lfs, "dir_here") => LFS_ERR_CORRUPT;
}
lfs_unmount(&lfs) => 0;
'''
[cases.test_evil_invalid_ctz_pointer] # invalid pointer in CTZ skip-list test
defines.SIZE = ['2*BLOCK_SIZE', '3*BLOCK_SIZE', '4*BLOCK_SIZE']
in = "lfs.c"
code = '''
// create littlefs
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// make a file
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "file_here",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (int i = 0; i < SIZE; i++) {
char c = 'c';
lfs_file_write(&lfs, &file, &c, 1) => 1;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// change pointer in CTZ skip-list to be invalid
lfs_init(&lfs, cfg) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
// make sure id 1 == our file and get our CTZ structure
uint8_t buffer[4*BLOCK_SIZE];
lfs_dir_get(&lfs, &mdir,
LFS_MKTAG(0x700, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_NAME, 1, strlen("file_here")), buffer)
=> LFS_MKTAG(LFS_TYPE_REG, 1, strlen("file_here"));
assert(memcmp((char*)buffer, "file_here", strlen("file_here")) == 0);
struct lfs_ctz ctz;
lfs_dir_get(&lfs, &mdir,
LFS_MKTAG(0x700, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_STRUCT, 1, sizeof(struct lfs_ctz)), &ctz)
=> LFS_MKTAG(LFS_TYPE_CTZSTRUCT, 1, sizeof(struct lfs_ctz));
lfs_ctz_fromle32(&ctz);
// rewrite block to contain bad pointer
uint8_t bbuffer[BLOCK_SIZE];
cfg->read(cfg, ctz.head, 0, bbuffer, BLOCK_SIZE) => 0;
uint32_t bad = lfs_tole32(0xcccccccc);
memcpy(&bbuffer[0], &bad, sizeof(bad));
memcpy(&bbuffer[4], &bad, sizeof(bad));
cfg->erase(cfg, ctz.head) => 0;
cfg->prog(cfg, ctz.head, 0, bbuffer, BLOCK_SIZE) => 0;
lfs_deinit(&lfs) => 0;
// test that accessing our bad file fails, note there's a number
// of ways to access the file, some can fail, but some don't
lfs_mount(&lfs, cfg) => 0;
struct lfs_info info;
lfs_stat(&lfs, "file_here", &info) => 0;
assert(strcmp(info.name, "file_here") == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == SIZE);
lfs_file_open(&lfs, &file, "file_here", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, buffer, SIZE) => LFS_ERR_CORRUPT;
lfs_file_close(&lfs, &file) => 0;
// any allocs that traverse the CTZ skip-list must unfortunately fail
if (SIZE > 2*BLOCK_SIZE) {
lfs_mkdir(&lfs, "dir_here") => LFS_ERR_CORRUPT;
}
lfs_unmount(&lfs) => 0;
'''
[cases.test_evil_invalid_gstate_pointer]
defines.INVALSET = [0x3, 0x1, 0x2]
in = "lfs.c"
code = '''
// create littlefs
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// create an invalid gstate
lfs_init(&lfs, cfg) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_fs_prepmove(&lfs, 1, (lfs_block_t [2]){
(INVALSET & 0x1) ? 0xcccccccc : 0,
(INVALSET & 0x2) ? 0xcccccccc : 0});
lfs_dir_commit(&lfs, &mdir, NULL, 0) => 0;
lfs_deinit(&lfs) => 0;
// test that mount fails gracefully
// mount may not fail, but our first alloc should fail when
// we try to fix the gstate
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "should_fail") => LFS_ERR_CORRUPT;
lfs_unmount(&lfs) => 0;
'''
# cycle detection/recovery tests
[cases.test_evil_mdir_loop] # metadata-pair threaded-list loop test
in = "lfs.c"
code = '''
// create littlefs
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// change tail-pointer to point to ourself
lfs_init(&lfs, cfg) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_HARDTAIL, 0x3ff, 8),
(lfs_block_t[2]){0, 1}})) => 0;
lfs_deinit(&lfs) => 0;
// test that mount fails gracefully
lfs_mount(&lfs, cfg) => LFS_ERR_CORRUPT;
'''
[cases.test_evil_mdir_loop2] # metadata-pair threaded-list 2-length loop test
in = "lfs.c"
code = '''
// create littlefs with child dir
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "child") => 0;
lfs_unmount(&lfs) => 0;
// find child
lfs_init(&lfs, cfg) => 0;
lfs_mdir_t mdir;
lfs_block_t pair[2];
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_dir_get(&lfs, &mdir,
LFS_MKTAG(0x7ff, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_DIRSTRUCT, 1, sizeof(pair)), pair)
=> LFS_MKTAG(LFS_TYPE_DIRSTRUCT, 1, sizeof(pair));
lfs_pair_fromle32(pair);
// change tail-pointer to point to root
lfs_dir_fetch(&lfs, &mdir, pair) => 0;
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_HARDTAIL, 0x3ff, 8),
(lfs_block_t[2]){0, 1}})) => 0;
lfs_deinit(&lfs) => 0;
// test that mount fails gracefully
lfs_mount(&lfs, cfg) => LFS_ERR_CORRUPT;
'''
[cases.test_evil_mdir_loop_child] # metadata-pair threaded-list 1-length child loop test
in = "lfs.c"
code = '''
// create littlefs with child dir
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "child") => 0;
lfs_unmount(&lfs) => 0;
// find child
lfs_init(&lfs, cfg) => 0;
lfs_mdir_t mdir;
lfs_block_t pair[2];
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_dir_get(&lfs, &mdir,
LFS_MKTAG(0x7ff, 0x3ff, 0),
LFS_MKTAG(LFS_TYPE_DIRSTRUCT, 1, sizeof(pair)), pair)
=> LFS_MKTAG(LFS_TYPE_DIRSTRUCT, 1, sizeof(pair));
lfs_pair_fromle32(pair);
// change tail-pointer to point to ourself
lfs_dir_fetch(&lfs, &mdir, pair) => 0;
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_HARDTAIL, 0x3ff, 8), pair})) => 0;
lfs_deinit(&lfs) => 0;
// test that mount fails gracefully
lfs_mount(&lfs, cfg) => LFS_ERR_CORRUPT;
'''
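
The invalid-pointer cases above all expect littlefs to fail gracefully (LFS_ERR_CORRUPT) rather than dereference a block outside block_count. A minimal sketch of the kind of bounds check involved; the helper name and placement are illustrative, not littlefs's actual internals:

    // illustrative only: littlefs's real validation lives in lfs.c and
    // differs in detail
    static int pair_in_range(const lfs_t *lfs, const lfs_block_t pair[2]) {
        return pair[0] < lfs->cfg->block_count
            && pair[1] < lfs->cfg->block_count;
    }

    // a fetch/mount path can then reject corrupt tail or dir pointers:
    //     if (!pair_in_range(lfs, tail)) {
    //         return LFS_ERR_CORRUPT;
    //     }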

505
tests/test_exhaustion.toml Normal file

@@ -0,0 +1,505 @@
# test running a filesystem to exhaustion
[cases.test_exhaustion_normal]
defines.ERASE_CYCLES = 10
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.BLOCK_CYCLES = 'ERASE_CYCLES / 2'
defines.BADBLOCK_BEHAVIOR = [
'LFS_EMUBD_BADBLOCK_PROGERROR',
'LFS_EMUBD_BADBLOCK_ERASEERROR',
'LFS_EMUBD_BADBLOCK_READERROR',
'LFS_EMUBD_BADBLOCK_PROGNOOP',
'LFS_EMUBD_BADBLOCK_ERASENOOP',
]
defines.FILES = 10
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "roadrunner") => 0;
lfs_unmount(&lfs) => 0;
uint32_t cycle = 0;
while (true) {
lfs_mount(&lfs, cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// choose name, roughly random seed, and random 2^n size
char path[1024];
sprintf(path, "roadrunner/test%d", i);
uint32_t prng = cycle * i;
lfs_size_t size = 1 << ((TEST_PRNG(&prng) % 10)+2);
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (TEST_PRNG(&prng) % 26);
lfs_ssize_t res = lfs_file_write(&lfs, &file, &c, 1);
assert(res == 1 || res == LFS_ERR_NOSPC);
if (res == LFS_ERR_NOSPC) {
int err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
int err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
if (err == LFS_ERR_NOSPC) {
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
char path[1024];
sprintf(path, "roadrunner/test%d", i);
uint32_t prng = cycle * i;
lfs_size_t size = 1 << ((TEST_PRNG(&prng) % 10)+2);
lfs_file_t file;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (TEST_PRNG(&prng) % 26);
char r;
lfs_file_read(&lfs, &file, &r, 1) => 1;
assert(r == c);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
cycle += 1;
}
exhausted:
// should still be readable
lfs_mount(&lfs, cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
char path[1024];
sprintf(path, "roadrunner/test%d", i);
struct lfs_info info;
lfs_stat(&lfs, path, &info) => 0;
}
lfs_unmount(&lfs) => 0;
LFS_WARN("completed %d cycles", cycle);
'''
# test running a filesystem to exhaustion
# which also requires expanding superblocks
[cases.test_exhaustion_superblocks]
defines.ERASE_CYCLES = 10
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.BLOCK_CYCLES = 'ERASE_CYCLES / 2'
defines.BADBLOCK_BEHAVIOR = [
'LFS_EMUBD_BADBLOCK_PROGERROR',
'LFS_EMUBD_BADBLOCK_ERASEERROR',
'LFS_EMUBD_BADBLOCK_READERROR',
'LFS_EMUBD_BADBLOCK_PROGNOOP',
'LFS_EMUBD_BADBLOCK_ERASENOOP',
]
defines.FILES = 10
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
uint32_t cycle = 0;
while (true) {
lfs_mount(&lfs, cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// choose name, roughly random seed, and random 2^n size
char path[1024];
sprintf(path, "test%d", i);
uint32_t prng = cycle * i;
lfs_size_t size = 1 << ((TEST_PRNG(&prng) % 10)+2);
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (TEST_PRNG(&prng) % 26);
lfs_ssize_t res = lfs_file_write(&lfs, &file, &c, 1);
assert(res == 1 || res == LFS_ERR_NOSPC);
if (res == LFS_ERR_NOSPC) {
int err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
int err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
if (err == LFS_ERR_NOSPC) {
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
char path[1024];
sprintf(path, "test%d", i);
uint32_t prng = cycle * i;
lfs_size_t size = 1 << ((TEST_PRNG(&prng) % 10)+2);
lfs_file_t file;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (TEST_PRNG(&prng) % 26);
char r;
lfs_file_read(&lfs, &file, &r, 1) => 1;
assert(r == c);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
cycle += 1;
}
exhausted:
// should still be readable
lfs_mount(&lfs, cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
char path[1024];
struct lfs_info info;
sprintf(path, "test%d", i);
lfs_stat(&lfs, path, &info) => 0;
}
lfs_unmount(&lfs) => 0;
LFS_WARN("completed %d cycles", cycle);
'''
# These are a sort of high-level litmus test for wear-leveling. One definition
# of wear-leveling is that increasing a block device's space translates directly
# into increasing the block device's lifetime. This is something we can actually
# check for.
# wear-level test running a filesystem to exhaustion
[cases.test_exhaustion_wear_leveling]
defines.ERASE_CYCLES = 20
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.BLOCK_CYCLES = 'ERASE_CYCLES / 2'
defines.FILES = 10
code = '''
uint32_t run_cycles[2];
const uint32_t run_block_count[2] = {BLOCK_COUNT/2, BLOCK_COUNT};
for (int run = 0; run < 2; run++) {
for (lfs_block_t b = 0; b < BLOCK_COUNT; b++) {
lfs_emubd_setwear(cfg, b,
(b < run_block_count[run]) ? 0 : ERASE_CYCLES) => 0;
}
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "roadrunner") => 0;
lfs_unmount(&lfs) => 0;
uint32_t cycle = 0;
while (true) {
lfs_mount(&lfs, cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// choose name, roughly random seed, and random 2^n size
char path[1024];
sprintf(path, "roadrunner/test%d", i);
uint32_t prng = cycle * i;
lfs_size_t size = 1 << ((TEST_PRNG(&prng) % 10)+2);
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (TEST_PRNG(&prng) % 26);
lfs_ssize_t res = lfs_file_write(&lfs, &file, &c, 1);
assert(res == 1 || res == LFS_ERR_NOSPC);
if (res == LFS_ERR_NOSPC) {
int err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
int err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
if (err == LFS_ERR_NOSPC) {
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
char path[1024];
sprintf(path, "roadrunner/test%d", i);
uint32_t prng = cycle * i;
lfs_size_t size = 1 << ((TEST_PRNG(&prng) % 10)+2);
lfs_file_t file;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (TEST_PRNG(&prng) % 26);
char r;
lfs_file_read(&lfs, &file, &r, 1) => 1;
assert(r == c);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
cycle += 1;
}
exhausted:
// should still be readable
lfs_mount(&lfs, cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
char path[1024];
struct lfs_info info;
sprintf(path, "roadrunner/test%d", i);
lfs_stat(&lfs, path, &info) => 0;
}
lfs_unmount(&lfs) => 0;
run_cycles[run] = cycle;
LFS_WARN("completed %d blocks %d cycles",
run_block_count[run], run_cycles[run]);
}
// check we increased the lifetime by 2x with ~10% error
LFS_ASSERT(run_cycles[1]*110/100 > 2*run_cycles[0]);
'''
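
The final assertion is the wear-leveling definition from the comment above made concrete: doubling the usable block count must roughly double the number of completed cycles, with ~10% slack for noise. Purely illustrative numbers, not measured results:

    #include <assert.h>
    #include <stdint.h>

    int main(void) {
        // hypothetical outcome: half-sized run lasted 40 cycles,
        // full-sized run lasted 75 cycles
        uint32_t run_cycles[2] = {40, 75};
        // 75*110/100 = 82 > 2*40 = 80, so the check passes;
        // 73 cycles or fewer on the second run would fail it
        assert(run_cycles[1]*110/100 > 2*run_cycles[0]);
        return 0;
    }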
# wear-level test + expanding superblock
[cases.test_exhaustion_wear_leveling_superblocks]
defines.ERASE_CYCLES = 20
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.BLOCK_CYCLES = 'ERASE_CYCLES / 2'
defines.FILES = 10
code = '''
uint32_t run_cycles[2];
const uint32_t run_block_count[2] = {BLOCK_COUNT/2, BLOCK_COUNT};
for (int run = 0; run < 2; run++) {
for (lfs_block_t b = 0; b < BLOCK_COUNT; b++) {
lfs_emubd_setwear(cfg, b,
(b < run_block_count[run]) ? 0 : ERASE_CYCLES) => 0;
}
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
uint32_t cycle = 0;
while (true) {
lfs_mount(&lfs, cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// choose name, roughly random seed, and random 2^n size
char path[1024];
sprintf(path, "test%d", i);
uint32_t prng = cycle * i;
lfs_size_t size = 1 << ((TEST_PRNG(&prng) % 10)+2);
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (TEST_PRNG(&prng) % 26);
lfs_ssize_t res = lfs_file_write(&lfs, &file, &c, 1);
assert(res == 1 || res == LFS_ERR_NOSPC);
if (res == LFS_ERR_NOSPC) {
int err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
int err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
if (err == LFS_ERR_NOSPC) {
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
char path[1024];
sprintf(path, "test%d", i);
uint32_t prng = cycle * i;
lfs_size_t size = 1 << ((TEST_PRNG(&prng) % 10)+2);
lfs_file_t file;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (TEST_PRNG(&prng) % 26);
char r;
lfs_file_read(&lfs, &file, &r, 1) => 1;
assert(r == c);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
cycle += 1;
}
exhausted:
// should still be readable
lfs_mount(&lfs, cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
char path[1024];
struct lfs_info info;
sprintf(path, "test%d", i);
lfs_stat(&lfs, path, &info) => 0;
}
lfs_unmount(&lfs) => 0;
run_cycles[run] = cycle;
LFS_WARN("completed %d blocks %d cycles",
run_block_count[run], run_cycles[run]);
}
// check we increased the lifetime by 2x with ~10% error
LFS_ASSERT(run_cycles[1]*110/100 > 2*run_cycles[0]);
'''
# test that we wear blocks roughly evenly
[cases.test_exhaustion_wear_distribution]
defines.ERASE_CYCLES = 0xffffffff
defines.ERASE_COUNT = 256 # small bd so test runs faster
defines.BLOCK_CYCLES = [5, 4, 3, 2, 1]
defines.CYCLES = 100
defines.FILES = 10
if = 'BLOCK_CYCLES < CYCLES/10'
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "roadrunner") => 0;
lfs_unmount(&lfs) => 0;
uint32_t cycle = 0;
while (cycle < CYCLES) {
lfs_mount(&lfs, cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// choose name, roughly random seed, and random 2^n size
char path[1024];
sprintf(path, "roadrunner/test%d", i);
uint32_t prng = cycle * i;
lfs_size_t size = 1 << 4; //((TEST_PRNG(&prng) % 10)+2);
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (TEST_PRNG(&prng) % 26);
lfs_ssize_t res = lfs_file_write(&lfs, &file, &c, 1);
assert(res == 1 || res == LFS_ERR_NOSPC);
if (res == LFS_ERR_NOSPC) {
int err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
int err = lfs_file_close(&lfs, &file);
assert(err == 0 || err == LFS_ERR_NOSPC);
if (err == LFS_ERR_NOSPC) {
lfs_unmount(&lfs) => 0;
goto exhausted;
}
}
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
char path[1024];
sprintf(path, "roadrunner/test%d", i);
uint32_t prng = cycle * i;
lfs_size_t size = 1 << 4; //((TEST_PRNG(&prng) % 10)+2);
lfs_file_t file;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
for (lfs_size_t j = 0; j < size; j++) {
char c = 'a' + (TEST_PRNG(&prng) % 26);
char r;
lfs_file_read(&lfs, &file, &r, 1) => 1;
assert(r == c);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
cycle += 1;
}
exhausted:
// should still be readable
lfs_mount(&lfs, cfg) => 0;
for (uint32_t i = 0; i < FILES; i++) {
// check for errors
char path[1024];
struct lfs_info info;
sprintf(path, "roadrunner/test%d", i);
lfs_stat(&lfs, path, &info) => 0;
}
lfs_unmount(&lfs) => 0;
LFS_WARN("completed %d cycles", cycle);
// check the wear on our block device
lfs_emubd_wear_t minwear = -1;
lfs_emubd_wear_t totalwear = 0;
lfs_emubd_wear_t maxwear = 0;
// skip 0 and 1 as superblock movement is intentionally avoided
for (lfs_block_t b = 2; b < BLOCK_COUNT; b++) {
lfs_emubd_wear_t wear = lfs_emubd_wear(cfg, b);
printf("%08x: wear %d\n", b, wear);
assert(wear >= 0);
if (wear < minwear) {
minwear = wear;
}
if (wear > maxwear) {
maxwear = wear;
}
totalwear += wear;
}
lfs_emubd_wear_t avgwear = totalwear / BLOCK_COUNT;
LFS_WARN("max wear: %d cycles", maxwear);
LFS_WARN("avg wear: %d cycles", totalwear / (int)BLOCK_COUNT);
LFS_WARN("min wear: %d cycles", minwear);
// find standard deviation^2
lfs_emubd_wear_t dev2 = 0;
for (lfs_block_t b = 2; b < BLOCK_COUNT; b++) {
lfs_emubd_wear_t wear = lfs_emubd_wear(cfg, b);
assert(wear >= 0);
lfs_emubd_swear_t diff = wear - avgwear;
dev2 += diff*diff;
}
dev2 /= totalwear;
LFS_WARN("std dev^2: %d", dev2);
assert(dev2 < 8);
'''
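
For reference, the spread check at the end of the distribution case is not a true variance. Writing $w_b$ for the wear of block $b$ (blocks 0 and 1 are skipped) and $N$ for BLOCK_COUNT, the code computes

    \bar{w} = \frac{1}{N} \sum_{b=2}^{N-1} w_b,
    \qquad
    \mathrm{dev}^2 = \frac{\sum_{b=2}^{N-1} (w_b - \bar{w})^2}{\sum_{b=2}^{N-1} w_b} < 8

so the squared deviations are normalized by the total wear rather than by the block count; the assert only bounds how unevenly the wear is spread, which is enough for a litmus test.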


@@ -1,158 +0,0 @@
#!/bin/bash
set -eu
SMALLSIZE=32
MEDIUMSIZE=8192
LARGESIZE=262144
echo "=== File tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
echo "--- Simple file test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello", LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = strlen("Hello World!\n");
memcpy(wbuffer, "Hello World!\n", size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_file_open(&lfs, &file[0], "hello", LFS_O_RDONLY) => 0;
size = strlen("Hello World!\n");
lfs_file_read(&lfs, &file[0], rbuffer, size) => size;
memcmp(rbuffer, wbuffer, size) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
w_test() {
tests/test.py ${4:-} << TEST
size = $1;
lfs_size_t chunk = 31;
srand(0);
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "$2",
${3:-LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC}) => 0;
for (lfs_size_t i = 0; i < size; i += chunk) {
chunk = (chunk < size - i) ? chunk : size - i;
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = rand() & 0xff;
}
lfs_file_write(&lfs, &file[0], buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
}
r_test() {
tests/test.py << TEST
size = $1;
lfs_size_t chunk = 29;
srand(0);
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "$2", &info) => 0;
info.type => LFS_TYPE_REG;
info.size => size;
lfs_file_open(&lfs, &file[0], "$2", ${3:-LFS_O_RDONLY}) => 0;
for (lfs_size_t i = 0; i < size; i += chunk) {
chunk = (chunk < size - i) ? chunk : size - i;
lfs_file_read(&lfs, &file[0], buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk && i+b < size; b++) {
buffer[b] => rand() & 0xff;
}
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
}
echo "--- Small file test ---"
w_test $SMALLSIZE smallavacado
r_test $SMALLSIZE smallavacado
echo "--- Medium file test ---"
w_test $MEDIUMSIZE mediumavacado
r_test $MEDIUMSIZE mediumavacado
echo "--- Large file test ---"
w_test $LARGESIZE largeavacado
r_test $LARGESIZE largeavacado
echo "--- Zero file test ---"
w_test 0 noavacado
r_test 0 noavacado
echo "--- Truncate small test ---"
w_test $SMALLSIZE mediumavacado
r_test $SMALLSIZE mediumavacado
w_test $MEDIUMSIZE mediumavacado
r_test $MEDIUMSIZE mediumavacado
echo "--- Truncate zero test ---"
w_test $SMALLSIZE noavacado
r_test $SMALLSIZE noavacado
w_test 0 noavacado
r_test 0 noavacado
echo "--- Non-overlap check ---"
r_test $SMALLSIZE smallavacado
r_test $MEDIUMSIZE mediumavacado
r_test $LARGESIZE largeavacado
r_test 0 noavacado
echo "--- Dir check ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
info.type => LFS_TYPE_REG;
info.size => strlen("Hello World!\n");
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "largeavacado") => 0;
info.type => LFS_TYPE_REG;
info.size => $LARGESIZE;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "mediumavacado") => 0;
info.type => LFS_TYPE_REG;
info.size => $MEDIUMSIZE;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "noavacado") => 0;
info.type => LFS_TYPE_REG;
info.size => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "smallavacado") => 0;
info.type => LFS_TYPE_REG;
info.size => $SMALLSIZE;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Many file test ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
tests/test.py << TEST
// Create 300 files of 6 bytes
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "directory") => 0;
for (unsigned i = 0; i < 300; i++) {
snprintf((char*)buffer, sizeof(buffer), "file_%03d", i);
lfs_file_open(&lfs, &file[0], (char*)buffer, LFS_O_WRONLY | LFS_O_CREAT) => 0;
size = 6;
memcpy(wbuffer, "Hello", size);
lfs_file_write(&lfs, &file[0], wbuffer, size) => size;
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

539
tests/test_files.toml Normal file

@@ -0,0 +1,539 @@
[cases.test_files_simple]
defines.INLINE_MAX = [0, -1, 8]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "hello",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_size_t size = strlen("Hello World!")+1;
uint8_t buffer[1024];
strcpy((char*)buffer, "Hello World!");
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "hello", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(strcmp((char*)buffer, "Hello World!") == 0);
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_files_large]
defines.SIZE = [32, 8192, 262144, 0, 7, 8193]
defines.CHUNKSIZE = [31, 16, 33, 1, 1023]
defines.INLINE_MAX = [0, -1, 8]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// write
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "avacado",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
uint32_t prng = 1;
uint8_t buffer[1024];
for (lfs_size_t i = 0; i < SIZE; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = TEST_PRNG(&prng) & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE;
prng = 1;
for (lfs_size_t i = 0; i < SIZE; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (TEST_PRNG(&prng) & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_files_rewrite]
defines.SIZE1 = [32, 8192, 131072, 0, 7, 8193]
defines.SIZE2 = [32, 8192, 131072, 0, 7, 8193]
defines.CHUNKSIZE = [31, 16, 1]
defines.INLINE_MAX = [0, -1, 8]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// write
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
uint8_t buffer[1024];
lfs_file_open(&lfs, &file, "avacado",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
uint32_t prng = 1;
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = TEST_PRNG(&prng) & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE1;
prng = 1;
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (TEST_PRNG(&prng) & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// rewrite
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_WRONLY) => 0;
prng = 2;
for (lfs_size_t i = 0; i < SIZE2; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE2-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = TEST_PRNG(&prng) & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => lfs_max(SIZE1, SIZE2);
prng = 2;
for (lfs_size_t i = 0; i < SIZE2; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE2-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (TEST_PRNG(&prng) & 0xff));
}
}
if (SIZE1 > SIZE2) {
prng = 1;
for (lfs_size_t b = 0; b < SIZE2; b++) {
TEST_PRNG(&prng);
}
for (lfs_size_t i = SIZE2; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (TEST_PRNG(&prng) & 0xff));
}
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_files_append]
defines.SIZE1 = [32, 8192, 131072, 0, 7, 8193]
defines.SIZE2 = [32, 8192, 131072, 0, 7, 8193]
defines.CHUNKSIZE = [31, 16, 1]
defines.INLINE_MAX = [0, -1, 8]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// write
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
uint8_t buffer[1024];
lfs_file_open(&lfs, &file, "avacado",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
uint32_t prng = 1;
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = TEST_PRNG(&prng) & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE1;
prng = 1;
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (TEST_PRNG(&prng) & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// append
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_WRONLY | LFS_O_APPEND) => 0;
prng = 2;
for (lfs_size_t i = 0; i < SIZE2; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE2-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = TEST_PRNG(&prng) & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE1 + SIZE2;
prng = 1;
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (TEST_PRNG(&prng) & 0xff));
}
}
prng = 2;
for (lfs_size_t i = 0; i < SIZE2; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE2-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (TEST_PRNG(&prng) & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_files_truncate]
defines.SIZE1 = [32, 8192, 131072, 0, 7, 8193]
defines.SIZE2 = [32, 8192, 131072, 0, 7, 8193]
defines.CHUNKSIZE = [31, 16, 1]
defines.INLINE_MAX = [0, -1, 8]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// write
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
uint8_t buffer[1024];
lfs_file_open(&lfs, &file, "avacado",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
uint32_t prng = 1;
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = TEST_PRNG(&prng) & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE1;
prng = 1;
for (lfs_size_t i = 0; i < SIZE1; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE1-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (TEST_PRNG(&prng) & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// truncate
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_WRONLY | LFS_O_TRUNC) => 0;
prng = 2;
for (lfs_size_t i = 0; i < SIZE2; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE2-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = TEST_PRNG(&prng) & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// read
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE2;
prng = 2;
for (lfs_size_t i = 0; i < SIZE2; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE2-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (TEST_PRNG(&prng) & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_files_reentrant_write]
defines.SIZE = [32, 0, 7, 2049]
defines.CHUNKSIZE = [31, 16, 65]
defines.INLINE_MAX = [0, -1, 8]
reentrant = true
defines.POWERLOSS_BEHAVIOR = [
'LFS_EMUBD_POWERLOSS_NOOP',
'LFS_EMUBD_POWERLOSS_OOO',
]
code = '''
lfs_t lfs;
int err = lfs_mount(&lfs, cfg);
if (err) {
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
}
lfs_file_t file;
uint8_t buffer[1024];
err = lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY);
assert(err == LFS_ERR_NOENT || err == 0);
if (err == 0) {
// can only be 0 (new file) or full size
lfs_size_t size = lfs_file_size(&lfs, &file);
assert(size == 0 || size == SIZE);
lfs_file_close(&lfs, &file) => 0;
}
// write
lfs_file_open(&lfs, &file, "avacado", LFS_O_WRONLY | LFS_O_CREAT) => 0;
uint32_t prng = 1;
for (lfs_size_t i = 0; i < SIZE; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = TEST_PRNG(&prng) & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
}
lfs_file_close(&lfs, &file) => 0;
// read
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE;
prng = 1;
for (lfs_size_t i = 0; i < SIZE; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (TEST_PRNG(&prng) & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_files_reentrant_write_sync]
defines = [
# append (O(n))
{MODE='LFS_O_APPEND',
SIZE=[32, 0, 7, 2049],
CHUNKSIZE=[31, 16, 65],
INLINE_MAX=[0, -1, 8]},
# truncate (O(n^2))
{MODE='LFS_O_TRUNC',
SIZE=[32, 0, 7, 200],
CHUNKSIZE=[31, 16, 65],
INLINE_MAX=[0, -1, 8]},
# rewrite (O(n^2))
{MODE=0,
SIZE=[32, 0, 7, 200],
CHUNKSIZE=[31, 16, 65],
INLINE_MAX=[0, -1, 8]},
]
reentrant = true
code = '''
lfs_t lfs;
int err = lfs_mount(&lfs, cfg);
if (err) {
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
}
lfs_file_t file;
uint8_t buffer[1024];
err = lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY);
assert(err == LFS_ERR_NOENT || err == 0);
if (err == 0) {
// with syncs we could be any size, but it at least must be valid data
lfs_size_t size = lfs_file_size(&lfs, &file);
assert(size <= SIZE);
uint32_t prng = 1;
for (lfs_size_t i = 0; i < size; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, size-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (TEST_PRNG(&prng) & 0xff));
}
}
lfs_file_close(&lfs, &file) => 0;
}
// write
lfs_file_open(&lfs, &file, "avacado",
LFS_O_WRONLY | LFS_O_CREAT | MODE) => 0;
lfs_size_t size = lfs_file_size(&lfs, &file);
assert(size <= SIZE);
uint32_t prng = 1;
lfs_size_t skip = (MODE == LFS_O_APPEND) ? size : 0;
for (lfs_size_t b = 0; b < skip; b++) {
TEST_PRNG(&prng);
}
for (lfs_size_t i = skip; i < SIZE; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE-i);
for (lfs_size_t b = 0; b < chunk; b++) {
buffer[b] = TEST_PRNG(&prng) & 0xff;
}
lfs_file_write(&lfs, &file, buffer, chunk) => chunk;
lfs_file_sync(&lfs, &file) => 0;
}
lfs_file_close(&lfs, &file) => 0;
// read
lfs_file_open(&lfs, &file, "avacado", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => SIZE;
prng = 1;
for (lfs_size_t i = 0; i < SIZE; i += CHUNKSIZE) {
lfs_size_t chunk = lfs_min(CHUNKSIZE, SIZE-i);
lfs_file_read(&lfs, &file, buffer, chunk) => chunk;
for (lfs_size_t b = 0; b < chunk; b++) {
assert(buffer[b] == (TEST_PRNG(&prng) & 0xff));
}
}
lfs_file_read(&lfs, &file, buffer, CHUNKSIZE) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
[cases.test_files_many]
defines.N = 300
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// create N files of 7 bytes
lfs_mount(&lfs, cfg) => 0;
for (int i = 0; i < N; i++) {
lfs_file_t file;
char path[1024];
sprintf(path, "file_%03d", i);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
char wbuffer[1024];
lfs_size_t size = 7;
sprintf(wbuffer, "Hi %03d", i);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_close(&lfs, &file) => 0;
char rbuffer[1024];
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(strcmp(rbuffer, wbuffer) == 0);
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
[cases.test_files_many_power_cycle]
defines.N = 300
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// create N files of 7 bytes
lfs_mount(&lfs, cfg) => 0;
for (int i = 0; i < N; i++) {
lfs_file_t file;
char path[1024];
sprintf(path, "file_%03d", i);
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
char wbuffer[1024];
lfs_size_t size = 7;
sprintf(wbuffer, "Hi %03d", i);
lfs_file_write(&lfs, &file, wbuffer, size) => size;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
char rbuffer[1024];
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(strcmp(rbuffer, wbuffer) == 0);
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
[cases.test_files_many_power_loss]
defines.N = 300
reentrant = true
defines.POWERLOSS_BEHAVIOR = [
'LFS_EMUBD_POWERLOSS_NOOP',
'LFS_EMUBD_POWERLOSS_OOO',
]
code = '''
lfs_t lfs;
int err = lfs_mount(&lfs, cfg);
if (err) {
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
}
// create N files of 7 bytes
for (int i = 0; i < N; i++) {
lfs_file_t file;
char path[1024];
sprintf(path, "file_%03d", i);
err = lfs_file_open(&lfs, &file, path, LFS_O_WRONLY | LFS_O_CREAT);
char wbuffer[1024];
lfs_size_t size = 7;
sprintf(wbuffer, "Hi %03d", i);
if ((lfs_size_t)lfs_file_size(&lfs, &file) != size) {
lfs_file_write(&lfs, &file, wbuffer, size) => size;
}
lfs_file_close(&lfs, &file) => 0;
char rbuffer[1024];
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(strcmp(rbuffer, wbuffer) == 0);
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
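
All of the reentrant cases above share one skeleton: after a simulated powerloss the whole body runs again, so it must mount if it can, format only when mount fails, and accept whatever partially-completed state the previous run left behind before redoing its work idempotently. Stripped down (verification elided, same shorthand as the cases above):

    lfs_t lfs;
    int err = lfs_mount(&lfs, cfg);
    if (err) {
        // first run, or powerloss before the first successful commit
        lfs_format(&lfs, cfg) => 0;
        lfs_mount(&lfs, cfg) => 0;
    }
    // ... recreate files tolerantly (LFS_O_CREAT without LFS_O_EXCL),
    // verify anything that should already be durable, then continue ...
    lfs_unmount(&lfs) => 0;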


@@ -1,50 +0,0 @@
#!/bin/bash
set -eu
echo "=== Formatting tests ==="
rm -rf blocks
echo "--- Basic formatting ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
echo "--- Basic mounting ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Invalid superblocks ---"
ln -f -s /dev/zero blocks/0
ln -f -s /dev/zero blocks/1
tests/test.py << TEST
lfs_format(&lfs, &cfg) => LFS_ERR_NOSPC;
TEST
rm blocks/0 blocks/1
echo "--- Invalid mount ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => LFS_ERR_CORRUPT;
TEST
echo "--- Expanding superblock ---"
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
for (int i = 0; i < 100; i++) {
lfs_mkdir(&lfs, "dummy") => 0;
lfs_remove(&lfs, "dummy") => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "dummy") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py


@@ -1,186 +0,0 @@
#!/bin/bash
set -eu
echo "=== Interspersed tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
echo "--- Interspersed file test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "a", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_open(&lfs, &file[1], "b", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_open(&lfs, &file[2], "c", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_open(&lfs, &file[3], "d", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (int i = 0; i < 10; i++) {
lfs_file_write(&lfs, &file[0], (const void*)"a", 1) => 1;
lfs_file_write(&lfs, &file[1], (const void*)"b", 1) => 1;
lfs_file_write(&lfs, &file[2], (const void*)"c", 1) => 1;
lfs_file_write(&lfs, &file[3], (const void*)"d", 1) => 1;
}
lfs_file_close(&lfs, &file[0]);
lfs_file_close(&lfs, &file[1]);
lfs_file_close(&lfs, &file[2]);
lfs_file_close(&lfs, &file[3]);
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "a") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "b") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "c") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "d") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_file_open(&lfs, &file[0], "a", LFS_O_RDONLY) => 0;
lfs_file_open(&lfs, &file[1], "b", LFS_O_RDONLY) => 0;
lfs_file_open(&lfs, &file[2], "c", LFS_O_RDONLY) => 0;
lfs_file_open(&lfs, &file[3], "d", LFS_O_RDONLY) => 0;
for (int i = 0; i < 10; i++) {
lfs_file_read(&lfs, &file[0], buffer, 1) => 1;
buffer[0] => 'a';
lfs_file_read(&lfs, &file[1], buffer, 1) => 1;
buffer[0] => 'b';
lfs_file_read(&lfs, &file[2], buffer, 1) => 1;
buffer[0] => 'c';
lfs_file_read(&lfs, &file[3], buffer, 1) => 1;
buffer[0] => 'd';
}
lfs_file_close(&lfs, &file[0]);
lfs_file_close(&lfs, &file[1]);
lfs_file_close(&lfs, &file[2]);
lfs_file_close(&lfs, &file[3]);
lfs_unmount(&lfs) => 0;
TEST
echo "--- Interspersed remove file test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "e", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (int i = 0; i < 5; i++) {
lfs_file_write(&lfs, &file[0], (const void*)"e", 1) => 1;
}
lfs_remove(&lfs, "a") => 0;
lfs_remove(&lfs, "b") => 0;
lfs_remove(&lfs, "c") => 0;
lfs_remove(&lfs, "d") => 0;
for (int i = 0; i < 5; i++) {
lfs_file_write(&lfs, &file[0], (const void*)"e", 1) => 1;
}
lfs_file_close(&lfs, &file[0]);
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "e") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_file_open(&lfs, &file[0], "e", LFS_O_RDONLY) => 0;
for (int i = 0; i < 10; i++) {
lfs_file_read(&lfs, &file[0], buffer, 1) => 1;
buffer[0] => 'e';
}
lfs_file_close(&lfs, &file[0]);
lfs_unmount(&lfs) => 0;
TEST
echo "--- Remove inconveniently test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "e", LFS_O_WRONLY | LFS_O_TRUNC) => 0;
lfs_file_open(&lfs, &file[1], "f", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_open(&lfs, &file[2], "g", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (int i = 0; i < 5; i++) {
lfs_file_write(&lfs, &file[0], (const void*)"e", 1) => 1;
lfs_file_write(&lfs, &file[1], (const void*)"f", 1) => 1;
lfs_file_write(&lfs, &file[2], (const void*)"g", 1) => 1;
}
lfs_remove(&lfs, "f") => 0;
for (int i = 0; i < 5; i++) {
lfs_file_write(&lfs, &file[0], (const void*)"e", 1) => 1;
lfs_file_write(&lfs, &file[1], (const void*)"f", 1) => 1;
lfs_file_write(&lfs, &file[2], (const void*)"g", 1) => 1;
}
lfs_file_close(&lfs, &file[0]);
lfs_file_close(&lfs, &file[1]);
lfs_file_close(&lfs, &file[2]);
lfs_dir_open(&lfs, &dir[0], "/") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
info.type => LFS_TYPE_DIR;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "e") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "g") => 0;
info.type => LFS_TYPE_REG;
info.size => 10;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_file_open(&lfs, &file[0], "e", LFS_O_RDONLY) => 0;
lfs_file_open(&lfs, &file[1], "g", LFS_O_RDONLY) => 0;
for (int i = 0; i < 10; i++) {
lfs_file_read(&lfs, &file[0], buffer, 1) => 1;
buffer[0] => 'e';
lfs_file_read(&lfs, &file[1], buffer, 1) => 1;
buffer[0] => 'g';
}
lfs_file_close(&lfs, &file[0]);
lfs_file_close(&lfs, &file[1]);
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py


@@ -0,0 +1,274 @@
[cases.test_interspersed_files]
defines.SIZE = [10, 100]
defines.FILES = [4, 10, 26]
code = '''
lfs_t lfs;
lfs_file_t files[FILES];
const char alphas[] = "abcdefghijklmnopqrstuvwxyz";
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int j = 0; j < FILES; j++) {
char path[1024];
sprintf(path, "%c", alphas[j]);
lfs_file_open(&lfs, &files[j], path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
}
for (int i = 0; i < SIZE; i++) {
for (int j = 0; j < FILES; j++) {
lfs_file_write(&lfs, &files[j], &alphas[j], 1) => 1;
}
}
for (int j = 0; j < FILES; j++) {
lfs_file_close(&lfs, &files[j]);
}
lfs_dir_t dir;
lfs_dir_open(&lfs, &dir, "/") => 0;
struct lfs_info info;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
assert(info.type == LFS_TYPE_DIR);
for (int j = 0; j < FILES; j++) {
char path[1024];
sprintf(path, "%c", alphas[j]);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, path) == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == SIZE);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
for (int j = 0; j < FILES; j++) {
char path[1024];
sprintf(path, "%c", alphas[j]);
lfs_file_open(&lfs, &files[j], path, LFS_O_RDONLY) => 0;
}
for (int i = 0; i < 10; i++) {
for (int j = 0; j < FILES; j++) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &files[j], buffer, 1) => 1;
assert(buffer[0] == alphas[j]);
}
}
for (int j = 0; j < FILES; j++) {
lfs_file_close(&lfs, &files[j]);
}
lfs_unmount(&lfs) => 0;
'''
[cases.test_interspersed_remove_files]
defines.SIZE = [10, 100]
defines.FILES = [4, 10, 26]
code = '''
lfs_t lfs;
const char alphas[] = "abcdefghijklmnopqrstuvwxyz";
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
for (int j = 0; j < FILES; j++) {
char path[1024];
sprintf(path, "%c", alphas[j]);
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
for (int i = 0; i < SIZE; i++) {
lfs_file_write(&lfs, &file, &alphas[j], 1) => 1;
}
lfs_file_close(&lfs, &file);
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "zzz", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (int j = 0; j < FILES; j++) {
lfs_file_write(&lfs, &file, (const void*)"~", 1) => 1;
lfs_file_sync(&lfs, &file) => 0;
char path[1024];
sprintf(path, "%c", alphas[j]);
lfs_remove(&lfs, path) => 0;
}
lfs_file_close(&lfs, &file);
lfs_dir_t dir;
lfs_dir_open(&lfs, &dir, "/") => 0;
struct lfs_info info;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "zzz") == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == FILES);
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_file_open(&lfs, &file, "zzz", LFS_O_RDONLY) => 0;
for (int i = 0; i < FILES; i++) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &file, buffer, 1) => 1;
assert(buffer[0] == '~');
}
lfs_file_close(&lfs, &file);
lfs_unmount(&lfs) => 0;
'''
[cases.test_interspersed_remove_inconveniently]
defines.SIZE = [10, 100]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t files[3];
lfs_file_open(&lfs, &files[0], "e", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_open(&lfs, &files[1], "f", LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_open(&lfs, &files[2], "g", LFS_O_WRONLY | LFS_O_CREAT) => 0;
for (int i = 0; i < SIZE/2; i++) {
lfs_file_write(&lfs, &files[0], (const void*)"e", 1) => 1;
lfs_file_write(&lfs, &files[1], (const void*)"f", 1) => 1;
lfs_file_write(&lfs, &files[2], (const void*)"g", 1) => 1;
}
lfs_remove(&lfs, "f") => 0;
for (int i = 0; i < SIZE/2; i++) {
lfs_file_write(&lfs, &files[0], (const void*)"e", 1) => 1;
lfs_file_write(&lfs, &files[1], (const void*)"f", 1) => 1;
lfs_file_write(&lfs, &files[2], (const void*)"g", 1) => 1;
}
lfs_file_close(&lfs, &files[0]);
lfs_file_close(&lfs, &files[1]);
lfs_file_close(&lfs, &files[2]);
lfs_dir_t dir;
lfs_dir_open(&lfs, &dir, "/") => 0;
struct lfs_info info;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "e") == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == SIZE);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "g") == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == SIZE);
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
lfs_file_open(&lfs, &files[0], "e", LFS_O_RDONLY) => 0;
lfs_file_open(&lfs, &files[1], "g", LFS_O_RDONLY) => 0;
for (int i = 0; i < SIZE; i++) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &files[0], buffer, 1) => 1;
assert(buffer[0] == 'e');
lfs_file_read(&lfs, &files[1], buffer, 1) => 1;
assert(buffer[0] == 'g');
}
lfs_file_close(&lfs, &files[0]);
lfs_file_close(&lfs, &files[1]);
lfs_unmount(&lfs) => 0;
'''
[cases.test_interspersed_reentrant_files]
defines.SIZE = [10, 100]
defines.FILES = [4, 10, 26]
reentrant = true
defines.POWERLOSS_BEHAVIOR = [
'LFS_EMUBD_POWERLOSS_NOOP',
'LFS_EMUBD_POWERLOSS_OOO',
]
code = '''
lfs_t lfs;
lfs_file_t files[FILES];
const char alphas[] = "abcdefghijklmnopqrstuvwxyz";
int err = lfs_mount(&lfs, cfg);
if (err) {
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
}
for (int j = 0; j < FILES; j++) {
char path[1024];
sprintf(path, "%c", alphas[j]);
lfs_file_open(&lfs, &files[j], path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
}
for (int i = 0; i < SIZE; i++) {
for (int j = 0; j < FILES; j++) {
lfs_ssize_t size = lfs_file_size(&lfs, &files[j]);
assert(size >= 0);
if ((int)size <= i) {
lfs_file_write(&lfs, &files[j], &alphas[j], 1) => 1;
lfs_file_sync(&lfs, &files[j]) => 0;
}
}
}
for (int j = 0; j < FILES; j++) {
lfs_file_close(&lfs, &files[j]);
}
lfs_dir_t dir;
lfs_dir_open(&lfs, &dir, "/") => 0;
struct lfs_info info;
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, ".") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, "..") == 0);
assert(info.type == LFS_TYPE_DIR);
for (int j = 0; j < FILES; j++) {
char path[1024];
sprintf(path, "%c", alphas[j]);
lfs_dir_read(&lfs, &dir, &info) => 1;
assert(strcmp(info.name, path) == 0);
assert(info.type == LFS_TYPE_REG);
assert(info.size == SIZE);
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
for (int j = 0; j < FILES; j++) {
char path[1024];
sprintf(path, "%c", alphas[j]);
lfs_file_open(&lfs, &files[j], path, LFS_O_RDONLY) => 0;
}
for (int i = 0; i < 10; i++) {
for (int j = 0; j < FILES; j++) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &files[j], buffer, 1) => 1;
assert(buffer[0] == alphas[j]);
}
}
for (int j = 0; j < FILES; j++) {
lfs_file_close(&lfs, &files[j]);
}
lfs_unmount(&lfs) => 0;
'''


@@ -1,332 +0,0 @@
#!/bin/bash
set -eu
echo "=== Move tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "a") => 0;
lfs_mkdir(&lfs, "b") => 0;
lfs_mkdir(&lfs, "c") => 0;
lfs_mkdir(&lfs, "d") => 0;
lfs_mkdir(&lfs, "a/hi") => 0;
lfs_mkdir(&lfs, "a/hi/hola") => 0;
lfs_mkdir(&lfs, "a/hi/bonjour") => 0;
lfs_mkdir(&lfs, "a/hi/ohayo") => 0;
lfs_file_open(&lfs, &file[0], "a/hello", LFS_O_CREAT | LFS_O_WRONLY) => 0;
lfs_file_write(&lfs, &file[0], "hola\n", 5) => 5;
lfs_file_write(&lfs, &file[0], "bonjour\n", 8) => 8;
lfs_file_write(&lfs, &file[0], "ohayo\n", 6) => 6;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move file ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "a/hello", "b/hello") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "a") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "b") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move file corrupt source ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "b/hello", "c/hello") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/corrupt.py -n 1
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "b") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "c") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move file corrupt source and dest ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "c/hello", "d/hello") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/corrupt.py -n 2
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "c") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "d") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move file after corrupt ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "c/hello", "d/hello") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "c") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "d") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move dir ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "a/hi", "b/hi") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "a") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "b") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move dir corrupt source ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "b/hi", "c/hi") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/corrupt.py -n 1
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "b") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "c") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move dir corrupt source and dest ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "c/hi", "d/hi") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/corrupt.py -n 2
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "c") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "d") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move dir after corrupt ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_rename(&lfs, "c/hi", "d/hi") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "c") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "d") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move check ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "a/hi") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "b/hi") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "c/hi") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "d/hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "bonjour") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hola") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "ohayo") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "a/hello") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "b/hello") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "c/hello") => LFS_ERR_NOENT;
lfs_file_open(&lfs, &file[0], "d/hello", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], buffer, 5) => 5;
memcmp(buffer, "hola\n", 5) => 0;
lfs_file_read(&lfs, &file[0], buffer, 8) => 8;
memcmp(buffer, "bonjour\n", 8) => 0;
lfs_file_read(&lfs, &file[0], buffer, 6) => 6;
memcmp(buffer, "ohayo\n", 6) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Move state stealing ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_remove(&lfs, "b") => 0;
lfs_remove(&lfs, "c") => 0;
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "a/hi") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "b") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "c") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "d/hi") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "bonjour") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "hola") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "ohayo") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_dir_open(&lfs, &dir[0], "a/hello") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "b") => LFS_ERR_NOENT;
lfs_dir_open(&lfs, &dir[0], "c") => LFS_ERR_NOENT;
lfs_file_open(&lfs, &file[0], "d/hello", LFS_O_RDONLY) => 0;
lfs_file_read(&lfs, &file[0], buffer, 5) => 5;
memcmp(buffer, "hola\n", 5) => 0;
lfs_file_read(&lfs, &file[0], buffer, 8) => 8;
memcmp(buffer, "bonjour\n", 8) => 0;
lfs_file_read(&lfs, &file[0], buffer, 6) => 6;
memcmp(buffer, "ohayo\n", 6) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

1905
tests/test_move.toml Normal file

File diff suppressed because it is too large


@@ -1,45 +0,0 @@
#!/bin/bash
set -eu
echo "=== Orphan tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
echo "--- Orphan test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "parent") => 0;
lfs_mkdir(&lfs, "parent/orphan") => 0;
lfs_mkdir(&lfs, "parent/child") => 0;
lfs_remove(&lfs, "parent/orphan") => 0;
TEST
# corrupt most recent commit, this should be the update to the previous
# linked-list entry and should orphan the child
tests/corrupt.py
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_ssize_t before = lfs_fs_size(&lfs);
before => 8;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_ssize_t orphaned = lfs_fs_size(&lfs);
orphaned => 8;
lfs_mkdir(&lfs, "parent/otherchild") => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_ssize_t deorphaned = lfs_fs_size(&lfs);
deorphaned => 8;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

273
tests/test_orphans.toml Normal file

@@ -0,0 +1,273 @@
[cases.test_orphans_normal]
in = "lfs.c"
if = 'PROG_SIZE <= 0x3fe' # only works with one crc per commit
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "parent") => 0;
lfs_mkdir(&lfs, "parent/orphan") => 0;
lfs_mkdir(&lfs, "parent/child") => 0;
lfs_remove(&lfs, "parent/orphan") => 0;
lfs_unmount(&lfs) => 0;
// corrupt the child's most recent commit, this should be the update
// to the linked-list entry, which should orphan the orphan. Note this
// makes a lot of assumptions about the remove operation.
lfs_mount(&lfs, cfg) => 0;
lfs_dir_t dir;
lfs_dir_open(&lfs, &dir, "parent/child") => 0;
lfs_block_t block = dir.m.pair[0];
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
uint8_t buffer[BLOCK_SIZE];
cfg->read(cfg, block, 0, buffer, BLOCK_SIZE) => 0;
int off = BLOCK_SIZE-1;
while (off >= 0 && buffer[off] == ERASE_VALUE) {
off -= 1;
}
memset(&buffer[off-3], BLOCK_SIZE, 3);
cfg->erase(cfg, block) => 0;
cfg->prog(cfg, block, 0, buffer, BLOCK_SIZE) => 0;
cfg->sync(cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
struct lfs_info info;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "parent/child", &info) => 0;
lfs_fs_size(&lfs) => 8;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "parent/child", &info) => 0;
lfs_fs_size(&lfs) => 8;
// this mkdir should both create a dir and deorphan, so size
// should be unchanged
lfs_mkdir(&lfs, "parent/otherchild") => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "parent/child", &info) => 0;
lfs_stat(&lfs, "parent/otherchild", &info) => 0;
lfs_fs_size(&lfs) => 8;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_stat(&lfs, "parent/orphan", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "parent/child", &info) => 0;
lfs_stat(&lfs, "parent/otherchild", &info) => 0;
lfs_fs_size(&lfs) => 8;
lfs_unmount(&lfs) => 0;
'''
# test that we only run deorphan once per power-cycle
[cases.test_orphans_no_orphans]
in = 'lfs.c'
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// mark the filesystem as having orphans
lfs_fs_preporphans(&lfs, +1) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_dir_commit(&lfs, &mdir, NULL, 0) => 0;
// we should have orphans at this state
assert(lfs_gstate_hasorphans(&lfs.gstate));
lfs_unmount(&lfs) => 0;
// mount
lfs_mount(&lfs, cfg) => 0;
// we should detect orphans
assert(lfs_gstate_hasorphans(&lfs.gstate));
// force consistency
lfs_fs_forceconsistency(&lfs) => 0;
// we should no longer have orphans
assert(!lfs_gstate_hasorphans(&lfs.gstate));
lfs_unmount(&lfs) => 0;
'''
[cases.test_orphans_one_orphan]
in = 'lfs.c'
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// create an orphan
lfs_mdir_t orphan;
lfs_alloc_ckpoint(&lfs);
lfs_dir_alloc(&lfs, &orphan) => 0;
lfs_dir_commit(&lfs, &orphan, NULL, 0) => 0;
// append our orphan and mark the filesystem as having orphans
lfs_fs_preporphans(&lfs, +1) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_pair_tole32(orphan.pair);
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_SOFTTAIL, 0x3ff, 8), orphan.pair})) => 0;
// we should have orphans at this state
assert(lfs_gstate_hasorphans(&lfs.gstate));
lfs_unmount(&lfs) => 0;
// mount
lfs_mount(&lfs, cfg) => 0;
// we should detect orphans
assert(lfs_gstate_hasorphans(&lfs.gstate));
// force consistency
lfs_fs_forceconsistency(&lfs) => 0;
// we should no longer have orphans
assert(!lfs_gstate_hasorphans(&lfs.gstate));
lfs_unmount(&lfs) => 0;
'''
# test that we can persist gstate with lfs_fs_mkconsistent
[cases.test_orphans_mkconsistent_no_orphans]
in = 'lfs.c'
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// mark the filesystem as having orphans
lfs_fs_preporphans(&lfs, +1) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_dir_commit(&lfs, &mdir, NULL, 0) => 0;
// we should have orphans at this state
assert(lfs_gstate_hasorphans(&lfs.gstate));
lfs_unmount(&lfs) => 0;
// mount
lfs_mount(&lfs, cfg) => 0;
// we should detect orphans
assert(lfs_gstate_hasorphans(&lfs.gstate));
// force consistency
lfs_fs_mkconsistent(&lfs) => 0;
// we should no longer have orphans
assert(!lfs_gstate_hasorphans(&lfs.gstate));
// remount
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
// we should still have no orphans
assert(!lfs_gstate_hasorphans(&lfs.gstate));
lfs_unmount(&lfs) => 0;
'''
[cases.test_orphans_mkconsistent_one_orphan]
in = 'lfs.c'
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
// create an orphan
lfs_mdir_t orphan;
lfs_alloc_ckpoint(&lfs);
lfs_dir_alloc(&lfs, &orphan) => 0;
lfs_dir_commit(&lfs, &orphan, NULL, 0) => 0;
// append our orphan and mark the filesystem as having orphans
lfs_fs_preporphans(&lfs, +1) => 0;
lfs_mdir_t mdir;
lfs_dir_fetch(&lfs, &mdir, (lfs_block_t[2]){0, 1}) => 0;
lfs_pair_tole32(orphan.pair);
lfs_dir_commit(&lfs, &mdir, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_SOFTTAIL, 0x3ff, 8), orphan.pair})) => 0;
// we should have orphans at this state
assert(lfs_gstate_hasorphans(&lfs.gstate));
lfs_unmount(&lfs) => 0;
// mount
lfs_mount(&lfs, cfg) => 0;
// we should detect orphans
assert(lfs_gstate_hasorphans(&lfs.gstate));
// force consistency
lfs_fs_mkconsistent(&lfs) => 0;
// we should no longer have orphans
assert(!lfs_gstate_hasorphans(&lfs.gstate));
// remount
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
// we should still have no orphans
assert(!lfs_gstate_hasorphans(&lfs.gstate));
lfs_unmount(&lfs) => 0;
'''
# reentrant testing for orphans, basically just spam mkdir/remove
[cases.test_orphans_reentrant]
reentrant = true
# TODO fix this case, caused by non-DAG trees
if = '!(DEPTH == 3 && CACHE_SIZE != 64)'
defines = [
{FILES=6, DEPTH=1, CYCLES=20},
{FILES=26, DEPTH=1, CYCLES=20},
{FILES=3, DEPTH=3, CYCLES=20},
]
code = '''
lfs_t lfs;
int err = lfs_mount(&lfs, cfg);
if (err) {
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
}
uint32_t prng = 1;
const char alpha[] = "abcdefghijklmnopqrstuvwxyz";
for (unsigned i = 0; i < CYCLES; i++) {
// create random path
char full_path[256];
for (unsigned d = 0; d < DEPTH; d++) {
sprintf(&full_path[2*d], "/%c", alpha[TEST_PRNG(&prng) % FILES]);
}
// if it does not exist, we create it, else we destroy
struct lfs_info info;
int res = lfs_stat(&lfs, full_path, &info);
if (res == LFS_ERR_NOENT) {
// create each directory in turn, ignore if dir already exists
for (unsigned d = 0; d < DEPTH; d++) {
char path[1024];
strcpy(path, full_path);
path[2*d+2] = '\0';
err = lfs_mkdir(&lfs, path);
assert(!err || err == LFS_ERR_EXIST);
}
for (unsigned d = 0; d < DEPTH; d++) {
char path[1024];
strcpy(path, full_path);
path[2*d+2] = '\0';
lfs_stat(&lfs, path, &info) => 0;
assert(strcmp(info.name, &path[2*d+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
}
} else {
// is valid dir?
assert(strcmp(info.name, &full_path[2*(DEPTH-1)+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
// try to delete path in reverse order, ignore if dir is not empty
for (int d = DEPTH-1; d >= 0; d--) {
char path[1024];
strcpy(path, full_path);
path[2*d+2] = '\0';
err = lfs_remove(&lfs, path);
assert(!err || err == LFS_ERR_NOTEMPTY);
}
lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
}
}
lfs_unmount(&lfs) => 0;
'''
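
The mkconsistent cases above exercise lfs_fs_mkconsistent, which lets an application resolve pending orphans (and persist the resulting gstate) right after mount instead of waiting for the first write to enforce consistency implicitly. A minimal usage sketch, assuming an already-mounted lfs_t lfs:

    // optionally make the filesystem consistent up front, so the first
    // write doesn't have to pay the deorphan cost (sketch)
    int err = lfs_fs_mkconsistent(&lfs);
    if (err) {
        // err is a negative lfs error code, e.g. LFS_ERR_IO or LFS_ERR_CORRUPT
    }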


@@ -1,201 +0,0 @@
#!/bin/bash
set -eu
echo "=== Path tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "coffee") => 0;
lfs_mkdir(&lfs, "soda") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
lfs_mkdir(&lfs, "coffee/hotcoffee") => 0;
lfs_mkdir(&lfs, "coffee/warmcoffee") => 0;
lfs_mkdir(&lfs, "coffee/coldcoffee") => 0;
lfs_mkdir(&lfs, "soda/hotsoda") => 0;
lfs_mkdir(&lfs, "soda/warmsoda") => 0;
lfs_mkdir(&lfs, "soda/coldsoda") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Root path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "/tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_mkdir(&lfs, "/milk1") => 0;
lfs_stat(&lfs, "/milk1", &info) => 0;
strcmp(info.name, "milk1") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Redundant slash path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "/tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "//tea//hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "///tea///hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_mkdir(&lfs, "///milk2") => 0;
lfs_stat(&lfs, "///milk2", &info) => 0;
strcmp(info.name, "milk2") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Dot path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "./tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "/./tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "/././tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "/./tea/./hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_mkdir(&lfs, "/./milk3") => 0;
lfs_stat(&lfs, "/./milk3", &info) => 0;
strcmp(info.name, "milk3") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Dot dot path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "coffee/../tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "tea/coldtea/../hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "coffee/coldcoffee/../../tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "coffee/../soda/../tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_mkdir(&lfs, "coffee/../milk4") => 0;
lfs_stat(&lfs, "coffee/../milk4", &info) => 0;
strcmp(info.name, "milk4") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Trailing dot path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "tea/hottea/", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "tea/hottea/.", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "tea/hottea/./.", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_stat(&lfs, "tea/hottea/..", &info) => 0;
strcmp(info.name, "tea") => 0;
lfs_stat(&lfs, "tea/hottea/../.", &info) => 0;
strcmp(info.name, "tea") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Root dot dot path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "coffee/../../../../../../tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_mkdir(&lfs, "coffee/../../../../../../milk5") => 0;
lfs_stat(&lfs, "coffee/../../../../../../milk5", &info) => 0;
strcmp(info.name, "milk5") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Root tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_stat(&lfs, "/", &info) => 0;
info.type => LFS_TYPE_DIR;
strcmp(info.name, "/") => 0;
lfs_mkdir(&lfs, "/") => LFS_ERR_EXIST;
lfs_file_open(&lfs, &file[0], "/", LFS_O_WRONLY | LFS_O_CREAT)
=> LFS_ERR_ISDIR;
// more corner cases
lfs_remove(&lfs, "") => LFS_ERR_INVAL;
lfs_remove(&lfs, ".") => LFS_ERR_INVAL;
lfs_remove(&lfs, "..") => LFS_ERR_INVAL;
lfs_remove(&lfs, "/") => LFS_ERR_INVAL;
lfs_remove(&lfs, "//") => LFS_ERR_INVAL;
lfs_remove(&lfs, "./") => LFS_ERR_INVAL;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Sketchy path tests ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "dirt/ground") => LFS_ERR_NOENT;
lfs_mkdir(&lfs, "dirt/ground/earth") => LFS_ERR_NOENT;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Superblock conflict test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "littlefs") => 0;
lfs_remove(&lfs, "littlefs") => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Max path test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
memset(buffer, 'w', LFS_NAME_MAX+1);
buffer[LFS_NAME_MAX+2] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => LFS_ERR_NAMETOOLONG;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => LFS_ERR_NAMETOOLONG;
memcpy(buffer, "coffee/", strlen("coffee/"));
memset(buffer+strlen("coffee/"), 'w', LFS_NAME_MAX+1);
buffer[strlen("coffee/")+LFS_NAME_MAX+2] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => LFS_ERR_NAMETOOLONG;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => LFS_ERR_NAMETOOLONG;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Really big path test ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
memset(buffer, 'w', LFS_NAME_MAX);
buffer[LFS_NAME_MAX+1] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => 0;
lfs_remove(&lfs, (char*)buffer) => 0;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_remove(&lfs, (char*)buffer) => 0;
memcpy(buffer, "coffee/", strlen("coffee/"));
memset(buffer+strlen("coffee/"), 'w', LFS_NAME_MAX);
buffer[strlen("coffee/")+LFS_NAME_MAX+1] = '\0';
lfs_mkdir(&lfs, (char*)buffer) => 0;
lfs_remove(&lfs, (char*)buffer) => 0;
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_remove(&lfs, (char*)buffer) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

336
tests/test_paths.toml Normal file

@@ -0,0 +1,336 @@
# simple path test
[cases.test_paths_normal]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
struct lfs_info info;
lfs_stat(&lfs, "tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "/tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_mkdir(&lfs, "/milk") => 0;
lfs_stat(&lfs, "/milk", &info) => 0;
assert(strcmp(info.name, "milk") == 0);
lfs_stat(&lfs, "milk", &info) => 0;
assert(strcmp(info.name, "milk") == 0);
lfs_unmount(&lfs) => 0;
'''
# redundant slashes
[cases.test_paths_redundant_slashes]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
struct lfs_info info;
lfs_stat(&lfs, "/tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "//tea//hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "///tea///hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_mkdir(&lfs, "////milk") => 0;
lfs_stat(&lfs, "////milk", &info) => 0;
assert(strcmp(info.name, "milk") == 0);
lfs_stat(&lfs, "milk", &info) => 0;
assert(strcmp(info.name, "milk") == 0);
lfs_unmount(&lfs) => 0;
'''
# dot path test
[cases.test_paths_dot]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
struct lfs_info info;
lfs_stat(&lfs, "./tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "/./tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "/././tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "/./tea/./hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_mkdir(&lfs, "/./milk") => 0;
lfs_stat(&lfs, "/./milk", &info) => 0;
assert(strcmp(info.name, "milk") == 0);
lfs_stat(&lfs, "milk", &info) => 0;
assert(strcmp(info.name, "milk") == 0);
lfs_unmount(&lfs) => 0;
'''
# dot dot path test
[cases.test_paths_dot_dot]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
lfs_mkdir(&lfs, "coffee") => 0;
lfs_mkdir(&lfs, "coffee/hotcoffee") => 0;
lfs_mkdir(&lfs, "coffee/warmcoffee") => 0;
lfs_mkdir(&lfs, "coffee/coldcoffee") => 0;
struct lfs_info info;
lfs_stat(&lfs, "coffee/../tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "tea/coldtea/../hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "coffee/coldcoffee/../../tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "coffee/../coffee/../tea/hottea", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_mkdir(&lfs, "coffee/../milk") => 0;
lfs_stat(&lfs, "coffee/../milk", &info) => 0;
strcmp(info.name, "milk") => 0;
lfs_stat(&lfs, "milk", &info) => 0;
strcmp(info.name, "milk") => 0;
lfs_unmount(&lfs) => 0;
'''
# trailing dot path test
[cases.test_paths_trailing_dot]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
struct lfs_info info;
lfs_stat(&lfs, "tea/hottea/", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "tea/hottea/.", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "tea/hottea/./.", &info) => 0;
assert(strcmp(info.name, "hottea") == 0);
lfs_stat(&lfs, "tea/hottea/..", &info) => 0;
assert(strcmp(info.name, "tea") == 0);
lfs_stat(&lfs, "tea/hottea/../.", &info) => 0;
assert(strcmp(info.name, "tea") == 0);
lfs_unmount(&lfs) => 0;
'''
# leading dot path test
[cases.test_paths_leading_dot]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, ".milk") => 0;
struct lfs_info info;
lfs_stat(&lfs, ".milk", &info) => 0;
strcmp(info.name, ".milk") => 0;
lfs_stat(&lfs, "tea/.././.milk", &info) => 0;
strcmp(info.name, ".milk") => 0;
lfs_unmount(&lfs) => 0;
'''
# root dot dot path test
[cases.test_paths_root_dot_dot]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "tea") => 0;
lfs_mkdir(&lfs, "tea/hottea") => 0;
lfs_mkdir(&lfs, "tea/warmtea") => 0;
lfs_mkdir(&lfs, "tea/coldtea") => 0;
lfs_mkdir(&lfs, "coffee") => 0;
lfs_mkdir(&lfs, "coffee/hotcoffee") => 0;
lfs_mkdir(&lfs, "coffee/warmcoffee") => 0;
lfs_mkdir(&lfs, "coffee/coldcoffee") => 0;
struct lfs_info info;
lfs_stat(&lfs, "coffee/../../../../../../tea/hottea", &info) => 0;
strcmp(info.name, "hottea") => 0;
lfs_mkdir(&lfs, "coffee/../../../../../../milk") => 0;
lfs_stat(&lfs, "coffee/../../../../../../milk", &info) => 0;
strcmp(info.name, "milk") => 0;
lfs_stat(&lfs, "milk", &info) => 0;
strcmp(info.name, "milk") => 0;
lfs_unmount(&lfs) => 0;
'''
# invalid path tests
[cases.test_paths_invalid]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg);
lfs_mount(&lfs, cfg) => 0;
struct lfs_info info;
lfs_stat(&lfs, "dirt", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "dirt/ground", &info) => LFS_ERR_NOENT;
lfs_stat(&lfs, "dirt/ground/earth", &info) => LFS_ERR_NOENT;
lfs_remove(&lfs, "dirt") => LFS_ERR_NOENT;
lfs_remove(&lfs, "dirt/ground") => LFS_ERR_NOENT;
lfs_remove(&lfs, "dirt/ground/earth") => LFS_ERR_NOENT;
lfs_mkdir(&lfs, "dirt/ground") => LFS_ERR_NOENT;
lfs_file_t file;
lfs_file_open(&lfs, &file, "dirt/ground", LFS_O_WRONLY | LFS_O_CREAT)
=> LFS_ERR_NOENT;
lfs_mkdir(&lfs, "dirt/ground/earth") => LFS_ERR_NOENT;
lfs_file_open(&lfs, &file, "dirt/ground/earth", LFS_O_WRONLY | LFS_O_CREAT)
=> LFS_ERR_NOENT;
lfs_unmount(&lfs) => 0;
'''
# root operations
[cases.test_paths_root]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
struct lfs_info info;
lfs_stat(&lfs, "/", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_mkdir(&lfs, "/") => LFS_ERR_EXIST;
lfs_file_t file;
lfs_file_open(&lfs, &file, "/", LFS_O_WRONLY | LFS_O_CREAT)
=> LFS_ERR_ISDIR;
lfs_remove(&lfs, "/") => LFS_ERR_INVAL;
lfs_unmount(&lfs) => 0;
'''
# root representations
[cases.test_paths_root_reprs]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
struct lfs_info info;
lfs_stat(&lfs, "/", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_stat(&lfs, "", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_stat(&lfs, ".", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_stat(&lfs, "..", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_stat(&lfs, "//", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_stat(&lfs, "./", &info) => 0;
assert(strcmp(info.name, "/") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_unmount(&lfs) => 0;
'''
# superblock conflict test
[cases.test_paths_superblock_conflict]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
struct lfs_info info;
lfs_stat(&lfs, "littlefs", &info) => LFS_ERR_NOENT;
lfs_remove(&lfs, "littlefs") => LFS_ERR_NOENT;
lfs_mkdir(&lfs, "littlefs") => 0;
lfs_stat(&lfs, "littlefs", &info) => 0;
assert(strcmp(info.name, "littlefs") == 0);
assert(info.type == LFS_TYPE_DIR);
lfs_remove(&lfs, "littlefs") => 0;
lfs_stat(&lfs, "littlefs", &info) => LFS_ERR_NOENT;
lfs_unmount(&lfs) => 0;
'''
# max path test
[cases.test_paths_max]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "coffee") => 0;
lfs_mkdir(&lfs, "coffee/hotcoffee") => 0;
lfs_mkdir(&lfs, "coffee/warmcoffee") => 0;
lfs_mkdir(&lfs, "coffee/coldcoffee") => 0;
char path[1024];
memset(path, 'w', LFS_NAME_MAX+1);
path[LFS_NAME_MAX+1] = '\0';
lfs_mkdir(&lfs, path) => LFS_ERR_NAMETOOLONG;
lfs_file_t file;
lfs_file_open(&lfs, &file, path, LFS_O_WRONLY | LFS_O_CREAT)
=> LFS_ERR_NAMETOOLONG;
memcpy(path, "coffee/", strlen("coffee/"));
memset(path+strlen("coffee/"), 'w', LFS_NAME_MAX+1);
path[strlen("coffee/")+LFS_NAME_MAX+1] = '\0';
lfs_mkdir(&lfs, path) => LFS_ERR_NAMETOOLONG;
lfs_file_open(&lfs, &file, path, LFS_O_WRONLY | LFS_O_CREAT)
=> LFS_ERR_NAMETOOLONG;
lfs_unmount(&lfs) => 0;
'''
# really big path test
[cases.test_paths_really_big]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "coffee") => 0;
lfs_mkdir(&lfs, "coffee/hotcoffee") => 0;
lfs_mkdir(&lfs, "coffee/warmcoffee") => 0;
lfs_mkdir(&lfs, "coffee/coldcoffee") => 0;
char path[1024];
memset(path, 'w', LFS_NAME_MAX);
path[LFS_NAME_MAX] = '\0';
lfs_mkdir(&lfs, path) => 0;
lfs_remove(&lfs, path) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, path) => 0;
memcpy(path, "coffee/", strlen("coffee/"));
memset(path+strlen("coffee/"), 'w', LFS_NAME_MAX);
path[strlen("coffee/")+LFS_NAME_MAX] = '\0';
lfs_mkdir(&lfs, path) => 0;
lfs_remove(&lfs, path) => 0;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_remove(&lfs, path) => 0;
lfs_unmount(&lfs) => 0;
'''
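
The max-path cases above hinge on LFS_NAME_MAX: a name component of exactly LFS_NAME_MAX bytes is accepted, while one byte more fails with LFS_ERR_NAMETOOLONG. A small sketch of the kind of up-front check an application might do before calling lfs_mkdir or lfs_file_open (check_name_len is a hypothetical helper, not part of littlefs):

    // reject an over-long name component before littlefs has to (sketch)
    static int check_name_len(const char *name) {
        if (strlen(name) > LFS_NAME_MAX) {
            return LFS_ERR_NAMETOOLONG;
        }
        return 0;
    }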

185
tests/test_powerloss.toml Normal file

@@ -0,0 +1,185 @@
# There are already a number of tests that test general operations under
# power-loss (see the reentrant attribute). These tests are for explicitly
# testing specific corner cases.
# only a revision count
[cases.test_powerloss_only_rev]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "notebook") => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "notebook/paper",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
char buffer[256];
strcpy(buffer, "hello");
lfs_size_t size = strlen("hello");
for (int i = 0; i < 5; i++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_sync(&lfs, &file) => 0;
}
lfs_file_close(&lfs, &file) => 0;
char rbuffer[256];
lfs_file_open(&lfs, &file, "notebook/paper", LFS_O_RDONLY) => 0;
for (int i = 0; i < 5; i++) {
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(memcmp(rbuffer, buffer, size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// get pair/rev count
lfs_mount(&lfs, cfg) => 0;
lfs_dir_t dir;
lfs_dir_open(&lfs, &dir, "notebook") => 0;
lfs_block_t pair[2] = {dir.m.pair[0], dir.m.pair[1]};
uint32_t rev = dir.m.rev;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
// write just the revision count
uint8_t bbuffer[BLOCK_SIZE];
cfg->read(cfg, pair[1], 0, bbuffer, BLOCK_SIZE) => 0;
memcpy(bbuffer, &(uint32_t){lfs_tole32(rev+1)}, sizeof(uint32_t));
cfg->erase(cfg, pair[1]) => 0;
cfg->prog(cfg, pair[1], 0, bbuffer, BLOCK_SIZE) => 0;
lfs_mount(&lfs, cfg) => 0;
// can read?
lfs_file_open(&lfs, &file, "notebook/paper", LFS_O_RDONLY) => 0;
for (int i = 0; i < 5; i++) {
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(memcmp(rbuffer, buffer, size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
// can write?
lfs_file_open(&lfs, &file, "notebook/paper",
LFS_O_WRONLY | LFS_O_APPEND) => 0;
strcpy(buffer, "goodbye");
size = strlen("goodbye");
for (int i = 0; i < 5; i++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_sync(&lfs, &file) => 0;
}
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "notebook/paper", LFS_O_RDONLY) => 0;
strcpy(buffer, "hello");
size = strlen("hello");
for (int i = 0; i < 5; i++) {
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(memcmp(rbuffer, buffer, size) == 0);
}
strcpy(buffer, "goodbye");
size = strlen("goodbye");
for (int i = 0; i < 5; i++) {
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(memcmp(rbuffer, buffer, size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# partial prog, may not be byte in order!
[cases.test_powerloss_partial_prog]
if = '''
PROG_SIZE < BLOCK_SIZE
&& (DISK_VERSION == 0 || DISK_VERSION >= 0x00020001)
'''
defines.BYTE_OFF = ["0", "PROG_SIZE-1", "PROG_SIZE/2"]
defines.BYTE_VALUE = [0x33, 0xcc]
in = "lfs.c"
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_mkdir(&lfs, "notebook") => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "notebook/paper",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
char buffer[256];
strcpy(buffer, "hello");
lfs_size_t size = strlen("hello");
for (int i = 0; i < 5; i++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_sync(&lfs, &file) => 0;
}
lfs_file_close(&lfs, &file) => 0;
char rbuffer[256];
lfs_file_open(&lfs, &file, "notebook/paper", LFS_O_RDONLY) => 0;
for (int i = 0; i < 5; i++) {
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(memcmp(rbuffer, buffer, size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// imitate a partial prog, value should not matter, if littlefs
// doesn't notice the partial prog testbd will assert
// get offset to next prog
lfs_mount(&lfs, cfg) => 0;
lfs_dir_t dir;
lfs_dir_open(&lfs, &dir, "notebook") => 0;
lfs_block_t block = dir.m.pair[0];
lfs_off_t off = dir.m.off;
lfs_dir_close(&lfs, &dir) => 0;
lfs_unmount(&lfs) => 0;
// tweak byte
uint8_t bbuffer[BLOCK_SIZE];
cfg->read(cfg, block, 0, bbuffer, BLOCK_SIZE) => 0;
bbuffer[off + BYTE_OFF] = BYTE_VALUE;
cfg->erase(cfg, block) => 0;
cfg->prog(cfg, block, 0, bbuffer, BLOCK_SIZE) => 0;
lfs_mount(&lfs, cfg) => 0;
// can read?
lfs_file_open(&lfs, &file, "notebook/paper", LFS_O_RDONLY) => 0;
for (int i = 0; i < 5; i++) {
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(memcmp(rbuffer, buffer, size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
// can write?
lfs_file_open(&lfs, &file, "notebook/paper",
LFS_O_WRONLY | LFS_O_APPEND) => 0;
strcpy(buffer, "goodbye");
size = strlen("goodbye");
for (int i = 0; i < 5; i++) {
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_sync(&lfs, &file) => 0;
}
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "notebook/paper", LFS_O_RDONLY) => 0;
strcpy(buffer, "hello");
size = strlen("hello");
for (int i = 0; i < 5; i++) {
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(memcmp(rbuffer, buffer, size) == 0);
}
strcpy(buffer, "goodbye");
size = strlen("goodbye");
for (int i = 0; i < 5; i++) {
lfs_file_read(&lfs, &file, rbuffer, size) => size;
assert(memcmp(rbuffer, buffer, size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''

343
tests/test_relocations.toml Normal file

@@ -0,0 +1,343 @@
# specific corner cases worth explicitly testing for
[cases.test_relocations_dangling_split_dir]
defines.ITERATIONS = 20
defines.COUNT = 10
defines.BLOCK_CYCLES = [8, 1]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// fill up filesystem so only ~16 blocks are left
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "padding", LFS_O_CREAT | LFS_O_WRONLY) => 0;
uint8_t buffer[512];
memset(buffer, 0, 512);
while (BLOCK_COUNT - lfs_fs_size(&lfs) > 16) {
lfs_file_write(&lfs, &file, buffer, 512) => 512;
}
lfs_file_close(&lfs, &file) => 0;
// make a child dir to use in bounded space
lfs_mkdir(&lfs, "child") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (unsigned j = 0; j < ITERATIONS; j++) {
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_file_open(&lfs, &file, path, LFS_O_CREAT | LFS_O_WRONLY) => 0;
lfs_file_close(&lfs, &file) => 0;
}
lfs_dir_t dir;
struct lfs_info info;
lfs_dir_open(&lfs, &dir, "child") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
lfs_dir_read(&lfs, &dir, &info) => 1;
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "test%03d_loooooooooooooooooong_name", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
strcmp(info.name, path) => 0;
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
if (j == (unsigned)ITERATIONS-1) {
break;
}
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_remove(&lfs, path) => 0;
}
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_dir_t dir;
struct lfs_info info;
lfs_dir_open(&lfs, &dir, "child") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
lfs_dir_read(&lfs, &dir, &info) => 1;
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "test%03d_loooooooooooooooooong_name", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
strcmp(info.name, path) => 0;
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_remove(&lfs, path) => 0;
}
lfs_unmount(&lfs) => 0;
'''
[cases.test_relocations_outdated_head]
defines.ITERATIONS = 20
defines.COUNT = 10
defines.BLOCK_CYCLES = [8, 1]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// fill up filesystem so only ~16 blocks are left
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "padding", LFS_O_CREAT | LFS_O_WRONLY) => 0;
uint8_t buffer[512];
memset(buffer, 0, 512);
while (BLOCK_COUNT - lfs_fs_size(&lfs) > 16) {
lfs_file_write(&lfs, &file, buffer, 512) => 512;
}
lfs_file_close(&lfs, &file) => 0;
// make a child dir to use in bounded space
lfs_mkdir(&lfs, "child") => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (unsigned j = 0; j < ITERATIONS; j++) {
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_file_open(&lfs, &file, path, LFS_O_CREAT | LFS_O_WRONLY) => 0;
lfs_file_close(&lfs, &file) => 0;
}
lfs_dir_t dir;
struct lfs_info info;
lfs_dir_open(&lfs, &dir, "child") => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
lfs_dir_read(&lfs, &dir, &info) => 1;
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "test%03d_loooooooooooooooooong_name", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
strcmp(info.name, path) => 0;
info.size => 0;
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_file_open(&lfs, &file, path, LFS_O_WRONLY) => 0;
lfs_file_write(&lfs, &file, "hi", 2) => 2;
lfs_file_close(&lfs, &file) => 0;
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_rewind(&lfs, &dir) => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
lfs_dir_read(&lfs, &dir, &info) => 1;
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "test%03d_loooooooooooooooooong_name", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
strcmp(info.name, path) => 0;
info.size => 2;
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_file_open(&lfs, &file, path, LFS_O_WRONLY) => 0;
lfs_file_write(&lfs, &file, "hi", 2) => 2;
lfs_file_close(&lfs, &file) => 0;
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_rewind(&lfs, &dir) => 0;
lfs_dir_read(&lfs, &dir, &info) => 1;
lfs_dir_read(&lfs, &dir, &info) => 1;
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "test%03d_loooooooooooooooooong_name", i);
lfs_dir_read(&lfs, &dir, &info) => 1;
strcmp(info.name, path) => 0;
info.size => 2;
}
lfs_dir_read(&lfs, &dir, &info) => 0;
lfs_dir_close(&lfs, &dir) => 0;
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
lfs_remove(&lfs, path) => 0;
}
}
lfs_unmount(&lfs) => 0;
'''
# reentrant testing for relocations; this is the same as the
# orphan testing, except here we also set block_cycles so that
# almost every tree operation needs a relocation
[cases.test_relocations_reentrant]
reentrant = true
# TODO fix this case, caused by non-DAG trees
# NOTE the second condition is required
if = '!(DEPTH == 3 && CACHE_SIZE != 64) && 2*FILES < BLOCK_COUNT'
defines = [
{FILES=6, DEPTH=1, CYCLES=20, BLOCK_CYCLES=1},
{FILES=26, DEPTH=1, CYCLES=20, BLOCK_CYCLES=1},
{FILES=3, DEPTH=3, CYCLES=20, BLOCK_CYCLES=1},
]
code = '''
lfs_t lfs;
int err = lfs_mount(&lfs, cfg);
if (err) {
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
}
uint32_t prng = 1;
const char alpha[] = "abcdefghijklmnopqrstuvwxyz";
for (unsigned i = 0; i < CYCLES; i++) {
// create random path
char full_path[256];
for (unsigned d = 0; d < DEPTH; d++) {
sprintf(&full_path[2*d], "/%c", alpha[TEST_PRNG(&prng) % FILES]);
}
// if it does not exist, we create it, else we destroy it
struct lfs_info info;
int res = lfs_stat(&lfs, full_path, &info);
if (res == LFS_ERR_NOENT) {
// create each directory in turn, ignore if dir already exists
for (unsigned d = 0; d < DEPTH; d++) {
char path[1024];
strcpy(path, full_path);
path[2*d+2] = '\0';
err = lfs_mkdir(&lfs, path);
assert(!err || err == LFS_ERR_EXIST);
}
for (unsigned d = 0; d < DEPTH; d++) {
char path[1024];
strcpy(path, full_path);
path[2*d+2] = '\0';
lfs_stat(&lfs, path, &info) => 0;
assert(strcmp(info.name, &path[2*d+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
}
} else {
// is valid dir?
assert(strcmp(info.name, &full_path[2*(DEPTH-1)+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
// try to delete path in reverse order, ignore if dir is not empty
for (unsigned d = DEPTH-1; d+1 > 0; d--) {
char path[1024];
strcpy(path, full_path);
path[2*d+2] = '\0';
err = lfs_remove(&lfs, path);
assert(!err || err == LFS_ERR_NOTEMPTY);
}
lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
}
}
lfs_unmount(&lfs) => 0;
'''
# reentrant testing for relocations, but now with random renames!
[cases.test_relocations_reentrant_renames]
reentrant = true
# TODO fix this case, caused by non-DAG trees
# NOTE the second condition is required
if = '!(DEPTH == 3 && CACHE_SIZE != 64) && 2*FILES < BLOCK_COUNT'
defines = [
{FILES=6, DEPTH=1, CYCLES=20, BLOCK_CYCLES=1},
{FILES=26, DEPTH=1, CYCLES=20, BLOCK_CYCLES=1},
{FILES=3, DEPTH=3, CYCLES=20, BLOCK_CYCLES=1},
]
code = '''
lfs_t lfs;
int err = lfs_mount(&lfs, cfg);
if (err) {
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
}
uint32_t prng = 1;
const char alpha[] = "abcdefghijklmnopqrstuvwxyz";
for (unsigned i = 0; i < CYCLES; i++) {
// create random path
char full_path[256];
for (unsigned d = 0; d < DEPTH; d++) {
sprintf(&full_path[2*d], "/%c", alpha[TEST_PRNG(&prng) % FILES]);
}
// if it does not exist, we create it, else we destroy it
struct lfs_info info;
int res = lfs_stat(&lfs, full_path, &info);
assert(!res || res == LFS_ERR_NOENT);
if (res == LFS_ERR_NOENT) {
// create each directory in turn, ignore if dir already exists
for (unsigned d = 0; d < DEPTH; d++) {
char path[1024];
strcpy(path, full_path);
path[2*d+2] = '\0';
err = lfs_mkdir(&lfs, path);
assert(!err || err == LFS_ERR_EXIST);
}
for (unsigned d = 0; d < DEPTH; d++) {
char path[1024];
strcpy(path, full_path);
path[2*d+2] = '\0';
lfs_stat(&lfs, path, &info) => 0;
assert(strcmp(info.name, &path[2*d+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
}
} else {
assert(strcmp(info.name, &full_path[2*(DEPTH-1)+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
// create new random path
char new_path[256];
for (unsigned d = 0; d < DEPTH; d++) {
sprintf(&new_path[2*d], "/%c", alpha[TEST_PRNG(&prng) % FILES]);
}
// if new path does not exist, rename, otherwise destroy
res = lfs_stat(&lfs, new_path, &info);
assert(!res || res == LFS_ERR_NOENT);
if (res == LFS_ERR_NOENT) {
// stop once some dir is renamed
for (unsigned d = 0; d < DEPTH; d++) {
char path[1024];
strcpy(&path[2*d], &full_path[2*d]);
path[2*d+2] = '\0';
strcpy(&path[128+2*d], &new_path[2*d]);
path[128+2*d+2] = '\0';
err = lfs_rename(&lfs, path, path+128);
assert(!err || err == LFS_ERR_NOTEMPTY);
if (!err) {
strcpy(path, path+128);
}
}
for (unsigned d = 0; d < DEPTH; d++) {
char path[1024];
strcpy(path, new_path);
path[2*d+2] = '\0';
lfs_stat(&lfs, path, &info) => 0;
assert(strcmp(info.name, &path[2*d+1]) == 0);
assert(info.type == LFS_TYPE_DIR);
}
lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
} else {
// try to delete path in reverse order,
// ignore if dir is not empty
for (unsigned d = DEPTH-1; d+1 > 0; d--) {
char path[1024];
strcpy(path, full_path);
path[2*d+2] = '\0';
err = lfs_remove(&lfs, path);
assert(!err || err == LFS_ERR_NOTEMPTY);
}
lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
}
}
}
lfs_unmount(&lfs) => 0;
'''


@@ -1,361 +0,0 @@
#!/bin/bash
set -eu
SMALLSIZE=4
MEDIUMSIZE=128
LARGESIZE=132
echo "=== Seek tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
lfs_mount(&lfs, &cfg) => 0;
lfs_mkdir(&lfs, "hello") => 0;
for (int i = 0; i < $LARGESIZE; i++) {
sprintf((char*)buffer, "hello/kitty%03d", i);
lfs_file_open(&lfs, &file[0], (char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size = strlen("kittycatcat");
memcpy(buffer, "kittycatcat", size);
for (int j = 0; j < $LARGESIZE; j++) {
lfs_file_write(&lfs, &file[0], buffer, size);
}
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
echo "--- Simple dir seek ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_soff_t pos;
int i;
for (i = 0; i < $SMALLSIZE; i++) {
sprintf((char*)buffer, "kitty%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
pos = lfs_dir_tell(&lfs, &dir[0]);
}
pos >= 0 => 1;
lfs_dir_seek(&lfs, &dir[0], pos) => 0;
sprintf((char*)buffer, "kitty%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
lfs_dir_rewind(&lfs, &dir[0]) => 0;
sprintf((char*)buffer, "kitty%03d", 0);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
lfs_dir_seek(&lfs, &dir[0], pos) => 0;
sprintf((char*)buffer, "kitty%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Large dir seek ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_dir_open(&lfs, &dir[0], "hello") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_soff_t pos;
int i;
for (i = 0; i < $MEDIUMSIZE; i++) {
sprintf((char*)buffer, "kitty%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
pos = lfs_dir_tell(&lfs, &dir[0]);
}
pos >= 0 => 1;
lfs_dir_seek(&lfs, &dir[0], pos) => 0;
sprintf((char*)buffer, "kitty%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
lfs_dir_rewind(&lfs, &dir[0]) => 0;
sprintf((char*)buffer, "kitty%03d", 0);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, ".") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, "..") => 0;
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
lfs_dir_seek(&lfs, &dir[0], pos) => 0;
sprintf((char*)buffer, "kitty%03d", i);
lfs_dir_read(&lfs, &dir[0], &info) => 1;
strcmp(info.name, (char*)buffer) => 0;
lfs_dir_close(&lfs, &dir[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Simple file seek ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/kitty042", LFS_O_RDONLY) => 0;
lfs_soff_t pos;
size = strlen("kittycatcat");
for (int i = 0; i < $SMALLSIZE; i++) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
pos = lfs_file_tell(&lfs, &file[0]);
}
pos >= 0 => 1;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_rewind(&lfs, &file[0]) => 0;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_CUR) => size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], size, LFS_SEEK_CUR) => 3*size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], -size, LFS_SEEK_CUR) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], -size, LFS_SEEK_END) >= 0 => 1;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
size = lfs_file_size(&lfs, &file[0]);
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_CUR) => size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Large file seek ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/kitty042", LFS_O_RDONLY) => 0;
lfs_soff_t pos;
size = strlen("kittycatcat");
for (int i = 0; i < $MEDIUMSIZE; i++) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
pos = lfs_file_tell(&lfs, &file[0]);
}
pos >= 0 => 1;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_rewind(&lfs, &file[0]) => 0;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_CUR) => size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], size, LFS_SEEK_CUR) => 3*size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], -size, LFS_SEEK_CUR) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], -size, LFS_SEEK_END) >= 0 => 1;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
size = lfs_file_size(&lfs, &file[0]);
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_CUR) => size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Simple file seek and write ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/kitty042", LFS_O_RDWR) => 0;
lfs_soff_t pos;
size = strlen("kittycatcat");
for (int i = 0; i < $SMALLSIZE; i++) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
pos = lfs_file_tell(&lfs, &file[0]);
}
pos >= 0 => 1;
memcpy(buffer, "doggodogdog", size);
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_write(&lfs, &file[0], buffer, size) => size;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "doggodogdog", size) => 0;
lfs_file_rewind(&lfs, &file[0]) => 0;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "doggodogdog", size) => 0;
lfs_file_seek(&lfs, &file[0], -size, LFS_SEEK_END) >= 0 => 1;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
size = lfs_file_size(&lfs, &file[0]);
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_CUR) => size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Large file seek and write ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/kitty042", LFS_O_RDWR) => 0;
lfs_soff_t pos;
size = strlen("kittycatcat");
for (int i = 0; i < $MEDIUMSIZE; i++) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
if (i != $SMALLSIZE) {
memcmp(buffer, "kittycatcat", size) => 0;
}
pos = lfs_file_tell(&lfs, &file[0]);
}
pos >= 0 => 1;
memcpy(buffer, "doggodogdog", size);
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_write(&lfs, &file[0], buffer, size) => size;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "doggodogdog", size) => 0;
lfs_file_rewind(&lfs, &file[0]) => 0;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file[0], pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "doggodogdog", size) => 0;
lfs_file_seek(&lfs, &file[0], -size, LFS_SEEK_END) >= 0 => 1;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
size = lfs_file_size(&lfs, &file[0]);
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_CUR) => size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Boundary seek and write ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/kitty042", LFS_O_RDWR) => 0;
size = strlen("hedgehoghog");
const lfs_soff_t offsets[] = {512, 1020, 513, 1021, 511, 1019};
for (unsigned i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
lfs_soff_t off = offsets[i];
memcpy(buffer, "hedgehoghog", size);
lfs_file_seek(&lfs, &file[0], off, LFS_SEEK_SET) => off;
lfs_file_write(&lfs, &file[0], buffer, size) => size;
lfs_file_seek(&lfs, &file[0], off, LFS_SEEK_SET) => off;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hedgehoghog", size) => 0;
lfs_file_seek(&lfs, &file[0], 0, LFS_SEEK_SET) => 0;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_sync(&lfs, &file[0]) => 0;
}
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Out-of-bounds seek ---"
tests/test.py << TEST
lfs_mount(&lfs, &cfg) => 0;
lfs_file_open(&lfs, &file[0], "hello/kitty042", LFS_O_RDWR) => 0;
size = strlen("kittycatcat");
lfs_file_size(&lfs, &file[0]) => $LARGESIZE*size;
lfs_file_seek(&lfs, &file[0], ($LARGESIZE+$SMALLSIZE)*size,
LFS_SEEK_SET) => ($LARGESIZE+$SMALLSIZE)*size;
lfs_file_read(&lfs, &file[0], buffer, size) => 0;
memcpy(buffer, "porcupineee", size);
lfs_file_write(&lfs, &file[0], buffer, size) => size;
lfs_file_seek(&lfs, &file[0], ($LARGESIZE+$SMALLSIZE)*size,
LFS_SEEK_SET) => ($LARGESIZE+$SMALLSIZE)*size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "porcupineee", size) => 0;
lfs_file_seek(&lfs, &file[0], $LARGESIZE*size,
LFS_SEEK_SET) => $LARGESIZE*size;
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "\0\0\0\0\0\0\0\0\0\0\0", size) => 0;
lfs_file_seek(&lfs, &file[0], -(($LARGESIZE+$SMALLSIZE)*size),
LFS_SEEK_CUR) => LFS_ERR_INVAL;
lfs_file_tell(&lfs, &file[0]) => ($LARGESIZE+1)*size;
lfs_file_seek(&lfs, &file[0], -(($LARGESIZE+2*$SMALLSIZE)*size),
LFS_SEEK_END) => LFS_ERR_INVAL;
lfs_file_tell(&lfs, &file[0]) => ($LARGESIZE+1)*size;
lfs_file_close(&lfs, &file[0]) => 0;
lfs_unmount(&lfs) => 0;
TEST
echo "--- Results ---"
tests/stats.py

tests/test_seek.toml Normal file

@@ -0,0 +1,407 @@
# simple file seek
[cases.test_seek_read]
defines = [
{COUNT=132, SKIP=4},
{COUNT=132, SKIP=128},
{COUNT=200, SKIP=10},
{COUNT=200, SKIP=100},
{COUNT=4, SKIP=1},
{COUNT=4, SKIP=2},
]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "kitty",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size_t size = strlen("kittycatcat");
uint8_t buffer[1024];
memcpy(buffer, "kittycatcat", size);
for (int j = 0; j < COUNT; j++) {
lfs_file_write(&lfs, &file, buffer, size);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "kitty", LFS_O_RDONLY) => 0;
lfs_soff_t pos = -1;
size = strlen("kittycatcat");
for (int i = 0; i < SKIP; i++) {
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
pos = lfs_file_tell(&lfs, &file);
}
assert(pos >= 0);
lfs_file_seek(&lfs, &file, pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_rewind(&lfs, &file) => 0;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_CUR) => size;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, size, LFS_SEEK_CUR) => 3*size;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, -size, LFS_SEEK_CUR) => pos;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, -size, LFS_SEEK_END) >= 0 => 1;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
size = lfs_file_size(&lfs, &file);
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_CUR) => size;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# simple file seek and write
[cases.test_seek_write]
defines = [
{COUNT=132, SKIP=4},
{COUNT=132, SKIP=128},
{COUNT=200, SKIP=10},
{COUNT=200, SKIP=100},
{COUNT=4, SKIP=1},
{COUNT=4, SKIP=2},
]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "kitty",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size_t size = strlen("kittycatcat");
uint8_t buffer[1024];
memcpy(buffer, "kittycatcat", size);
for (int j = 0; j < COUNT; j++) {
lfs_file_write(&lfs, &file, buffer, size);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "kitty", LFS_O_RDWR) => 0;
lfs_soff_t pos = -1;
size = strlen("kittycatcat");
for (int i = 0; i < SKIP; i++) {
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
pos = lfs_file_tell(&lfs, &file);
}
assert(pos >= 0);
memcpy(buffer, "doggodogdog", size);
lfs_file_seek(&lfs, &file, pos, LFS_SEEK_SET) => pos;
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_seek(&lfs, &file, pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "doggodogdog", size) => 0;
lfs_file_rewind(&lfs, &file) => 0;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, pos, LFS_SEEK_SET) => pos;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "doggodogdog", size) => 0;
lfs_file_seek(&lfs, &file, -size, LFS_SEEK_END) >= 0 => 1;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
size = lfs_file_size(&lfs, &file);
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_CUR) => size;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# boundary seek and writes
[cases.test_seek_boundary_write]
defines.COUNT = 132
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "kitty",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size_t size = strlen("kittycatcat");
uint8_t buffer[1024];
memcpy(buffer, "kittycatcat", size);
for (int j = 0; j < COUNT; j++) {
lfs_file_write(&lfs, &file, buffer, size);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "kitty", LFS_O_RDWR) => 0;
size = strlen("hedgehoghog");
const lfs_soff_t offsets[] = {512, 1020, 513, 1021, 511, 1019, 1441};
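// offsets chosen to land just before, on, and just after likely block
// and cache boundaries (assuming the usual 512-byte test geometry)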
for (unsigned i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
lfs_soff_t off = offsets[i];
memcpy(buffer, "hedgehoghog", size);
lfs_file_seek(&lfs, &file, off, LFS_SEEK_SET) => off;
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_seek(&lfs, &file, off, LFS_SEEK_SET) => off;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hedgehoghog", size) => 0;
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, off, LFS_SEEK_SET) => off;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hedgehoghog", size) => 0;
lfs_file_sync(&lfs, &file) => 0;
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "kittycatcat", size) => 0;
lfs_file_seek(&lfs, &file, off, LFS_SEEK_SET) => off;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hedgehoghog", size) => 0;
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# out of bounds seek
[cases.test_seek_out_of_bounds]
defines = [
{COUNT=132, SKIP=4},
{COUNT=132, SKIP=128},
{COUNT=200, SKIP=10},
{COUNT=200, SKIP=100},
{COUNT=4, SKIP=2},
{COUNT=4, SKIP=3},
]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "kitty",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND) => 0;
size_t size = strlen("kittycatcat");
uint8_t buffer[1024];
memcpy(buffer, "kittycatcat", size);
for (int j = 0; j < COUNT; j++) {
lfs_file_write(&lfs, &file, buffer, size);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "kitty", LFS_O_RDWR) => 0;
size = strlen("kittycatcat");
lfs_file_size(&lfs, &file) => COUNT*size;
lfs_file_seek(&lfs, &file, (COUNT+SKIP)*size,
LFS_SEEK_SET) => (COUNT+SKIP)*size;
lfs_file_read(&lfs, &file, buffer, size) => 0;
memcpy(buffer, "porcupineee", size);
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_seek(&lfs, &file, (COUNT+SKIP)*size,
LFS_SEEK_SET) => (COUNT+SKIP)*size;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "porcupineee", size) => 0;
lfs_file_seek(&lfs, &file, COUNT*size,
LFS_SEEK_SET) => COUNT*size;
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "\0\0\0\0\0\0\0\0\0\0\0", size) => 0;
lfs_file_seek(&lfs, &file, -((COUNT+SKIP)*size),
LFS_SEEK_CUR) => LFS_ERR_INVAL;
lfs_file_tell(&lfs, &file) => (COUNT+1)*size;
lfs_file_seek(&lfs, &file, -((COUNT+2*SKIP)*size),
LFS_SEEK_END) => LFS_ERR_INVAL;
lfs_file_tell(&lfs, &file) => (COUNT+1)*size;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# inline write and seek
[cases.test_seek_inline_write]
defines.SIZE = [2, 4, 128, 132]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "tinykitty",
LFS_O_RDWR | LFS_O_CREAT) => 0;
int j = 0;
int k = 0;
uint8_t buffer[1024];
memcpy(buffer, "abcdefghijklmnopqrstuvwxyz", 26);
for (unsigned i = 0; i < SIZE; i++) {
lfs_file_write(&lfs, &file, &buffer[j++ % 26], 1) => 1;
lfs_file_tell(&lfs, &file) => i+1;
lfs_file_size(&lfs, &file) => i+1;
}
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
lfs_file_tell(&lfs, &file) => 0;
lfs_file_size(&lfs, &file) => SIZE;
for (unsigned i = 0; i < SIZE; i++) {
uint8_t c;
lfs_file_read(&lfs, &file, &c, 1) => 1;
c => buffer[k++ % 26];
}
lfs_file_sync(&lfs, &file) => 0;
lfs_file_tell(&lfs, &file) => SIZE;
lfs_file_size(&lfs, &file) => SIZE;
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
for (unsigned i = 0; i < SIZE; i++) {
lfs_file_write(&lfs, &file, &buffer[j++ % 26], 1) => 1;
lfs_file_tell(&lfs, &file) => i+1;
lfs_file_size(&lfs, &file) => SIZE;
lfs_file_sync(&lfs, &file) => 0;
lfs_file_tell(&lfs, &file) => i+1;
lfs_file_size(&lfs, &file) => SIZE;
if (i < SIZE-2) {
uint8_t c[3];
lfs_file_seek(&lfs, &file, -1, LFS_SEEK_CUR) => i;
lfs_file_read(&lfs, &file, &c, 3) => 3;
lfs_file_tell(&lfs, &file) => i+3;
lfs_file_size(&lfs, &file) => SIZE;
lfs_file_seek(&lfs, &file, i+1, LFS_SEEK_SET) => i+1;
lfs_file_tell(&lfs, &file) => i+1;
lfs_file_size(&lfs, &file) => SIZE;
}
}
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
lfs_file_tell(&lfs, &file) => 0;
lfs_file_size(&lfs, &file) => SIZE;
for (unsigned i = 0; i < SIZE; i++) {
uint8_t c;
lfs_file_read(&lfs, &file, &c, 1) => 1;
c => buffer[k++ % 26];
}
lfs_file_sync(&lfs, &file) => 0;
lfs_file_tell(&lfs, &file) => SIZE;
lfs_file_size(&lfs, &file) => SIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# file seek and write with power-loss
[cases.test_seek_reentrant_write]
# must be power-of-2 for quadratic probing to be exhaustive
defines.COUNT = [4, 64, 128]
reentrant = true
defines.POWERLOSS_BEHAVIOR = [
'LFS_EMUBD_POWERLOSS_NOOP',
'LFS_EMUBD_POWERLOSS_OOO',
]
code = '''
lfs_t lfs;
int err = lfs_mount(&lfs, cfg);
if (err) {
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
}
lfs_file_t file;
uint8_t buffer[1024];
err = lfs_file_open(&lfs, &file, "kitty", LFS_O_RDONLY);
assert(!err || err == LFS_ERR_NOENT);
if (!err) {
if (lfs_file_size(&lfs, &file) != 0) {
lfs_file_size(&lfs, &file) => 11*COUNT;
for (int j = 0; j < COUNT; j++) {
memset(buffer, 0, 11+1);
lfs_file_read(&lfs, &file, buffer, 11) => 11;
assert(memcmp(buffer, "kittycatcat", 11) == 0 ||
memcmp(buffer, "doggodogdog", 11) == 0);
}
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_file_open(&lfs, &file, "kitty", LFS_O_WRONLY | LFS_O_CREAT) => 0;
if (lfs_file_size(&lfs, &file) == 0) {
for (int j = 0; j < COUNT; j++) {
strcpy((char*)buffer, "kittycatcat");
size_t size = strlen((char*)buffer);
lfs_file_write(&lfs, &file, buffer, size) => size;
}
}
lfs_file_close(&lfs, &file) => 0;
strcpy((char*)buffer, "doggodogdog");
size_t size = strlen((char*)buffer);
lfs_file_open(&lfs, &file, "kitty", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => COUNT*size;
// seek and write using quadratic probing to touch all
// 11-byte words in the file
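// (off = (5*off + 1) % COUNT has full period when COUNT is a power of
// two, since the multiplier 5 is 1 mod 4 and the increment 1 is odd,
// so every offset is visited exactly once)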
lfs_off_t off = 0;
for (int j = 0; j < COUNT; j++) {
off = (5*off + 1) % COUNT;
lfs_file_seek(&lfs, &file, off*size, LFS_SEEK_SET) => off*size;
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, "kittycatcat", size) == 0 ||
memcmp(buffer, "doggodogdog", size) == 0);
if (memcmp(buffer, "doggodogdog", size) != 0) {
lfs_file_seek(&lfs, &file, off*size, LFS_SEEK_SET) => off*size;
strcpy((char*)buffer, "doggodogdog");
lfs_file_write(&lfs, &file, buffer, size) => size;
lfs_file_seek(&lfs, &file, off*size, LFS_SEEK_SET) => off*size;
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, "doggodogdog", size) == 0);
lfs_file_sync(&lfs, &file) => 0;
lfs_file_seek(&lfs, &file, off*size, LFS_SEEK_SET) => off*size;
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, "doggodogdog", size) == 0);
}
}
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "kitty", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => COUNT*size;
for (int j = 0; j < COUNT; j++) {
lfs_file_read(&lfs, &file, buffer, size) => size;
assert(memcmp(buffer, "doggodogdog", size) == 0);
}
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''

tests/test_superblocks.toml Normal file

@@ -0,0 +1,474 @@
# simple formatting test
[cases.test_superblocks_format]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
'''
# mount/unmount
[cases.test_superblocks_mount]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_unmount(&lfs) => 0;
'''
# mount/unmount, interpreting the block_count from a previous superblock
[cases.test_superblocks_mount_unknown_block_count]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
memset(&lfs, 0, sizeof(lfs));
struct lfs_config tweaked_cfg = *cfg;
tweaked_cfg.block_count = 0;
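// a block_count of 0 tells lfs_mount to take the block count from the
// on-disk superblock rather than the config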
lfs_mount(&lfs, &tweaked_cfg) => 0;
assert(lfs.block_count == cfg->block_count);
lfs_unmount(&lfs) => 0;
'''
# reentrant format
[cases.test_superblocks_reentrant_format]
reentrant = true
defines.POWERLOSS_BEHAVIOR = [
'LFS_EMUBD_POWERLOSS_NOOP',
'LFS_EMUBD_POWERLOSS_OOO',
]
code = '''
lfs_t lfs;
int err = lfs_mount(&lfs, cfg);
if (err) {
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
}
lfs_unmount(&lfs) => 0;
'''
# invalid mount
[cases.test_superblocks_invalid_mount]
code = '''
lfs_t lfs;
lfs_mount(&lfs, cfg) => LFS_ERR_CORRUPT;
'''
# test we can read superblock info through lfs_fs_stat
[cases.test_superblocks_stat]
if = 'DISK_VERSION == 0'
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// test we can mount and read fsinfo
lfs_mount(&lfs, cfg) => 0;
struct lfs_fsinfo fsinfo;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.disk_version == LFS_DISK_VERSION);
assert(fsinfo.name_max == LFS_NAME_MAX);
assert(fsinfo.file_max == LFS_FILE_MAX);
assert(fsinfo.attr_max == LFS_ATTR_MAX);
lfs_unmount(&lfs) => 0;
'''
[cases.test_superblocks_stat_tweaked]
if = 'DISK_VERSION == 0'
defines.TWEAKED_NAME_MAX = 63
defines.TWEAKED_FILE_MAX = '(1 << 16)-1'
defines.TWEAKED_ATTR_MAX = 512
code = '''
// create filesystem with tweaked params
struct lfs_config tweaked_cfg = *cfg;
tweaked_cfg.name_max = TWEAKED_NAME_MAX;
tweaked_cfg.file_max = TWEAKED_FILE_MAX;
tweaked_cfg.attr_max = TWEAKED_ATTR_MAX;
lfs_t lfs;
lfs_format(&lfs, &tweaked_cfg) => 0;
// test we can mount and read these params with the original config
lfs_mount(&lfs, cfg) => 0;
struct lfs_fsinfo fsinfo;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.disk_version == LFS_DISK_VERSION);
assert(fsinfo.name_max == TWEAKED_NAME_MAX);
assert(fsinfo.file_max == TWEAKED_FILE_MAX);
assert(fsinfo.attr_max == TWEAKED_ATTR_MAX);
lfs_unmount(&lfs) => 0;
'''
# expanding superblock
[cases.test_superblocks_expand]
defines.BLOCK_CYCLES = [32, 33, 1]
defines.N = [10, 100, 1000]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
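// repeatedly create and remove a file; with a low block_cycles the root
// metadata pair wears out and forces the superblock to expand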
for (int i = 0; i < N; i++) {
lfs_file_t file;
lfs_file_open(&lfs, &file, "dummy",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
struct lfs_info info;
lfs_stat(&lfs, "dummy", &info) => 0;
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_remove(&lfs, "dummy") => 0;
}
lfs_unmount(&lfs) => 0;
// one last check after power-cycle
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "dummy",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
struct lfs_info info;
lfs_stat(&lfs, "dummy", &info) => 0;
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_unmount(&lfs) => 0;
'''
# expanding superblock with power cycle
[cases.test_superblocks_expand_power_cycle]
defines.BLOCK_CYCLES = [32, 33, 1]
defines.N = [10, 100, 1000]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
for (int i = 0; i < N; i++) {
lfs_mount(&lfs, cfg) => 0;
// remove lingering dummy?
struct lfs_info info;
int err = lfs_stat(&lfs, "dummy", &info);
assert(err == 0 || (err == LFS_ERR_NOENT && i == 0));
if (!err) {
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_remove(&lfs, "dummy") => 0;
}
lfs_file_t file;
lfs_file_open(&lfs, &file, "dummy",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_stat(&lfs, "dummy", &info) => 0;
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_unmount(&lfs) => 0;
}
// one last check after power-cycle
lfs_mount(&lfs, cfg) => 0;
struct lfs_info info;
lfs_stat(&lfs, "dummy", &info) => 0;
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_unmount(&lfs) => 0;
'''
# reentrant expanding superblock
[cases.test_superblocks_reentrant_expand]
defines.BLOCK_CYCLES = [2, 1]
defines.N = 24
reentrant = true
defines.POWERLOSS_BEHAVIOR = [
'LFS_EMUBD_POWERLOSS_NOOP',
'LFS_EMUBD_POWERLOSS_OOO',
]
code = '''
lfs_t lfs;
int err = lfs_mount(&lfs, cfg);
if (err) {
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
}
for (int i = 0; i < N; i++) {
// remove lingering dummy?
struct lfs_info info;
err = lfs_stat(&lfs, "dummy", &info);
assert(err == 0 || (err == LFS_ERR_NOENT && i == 0));
if (!err) {
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_remove(&lfs, "dummy") => 0;
}
lfs_file_t file;
lfs_file_open(&lfs, &file, "dummy",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_stat(&lfs, "dummy", &info) => 0;
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
}
lfs_unmount(&lfs) => 0;
// one last check after power-cycle
lfs_mount(&lfs, cfg) => 0;
struct lfs_info info;
lfs_stat(&lfs, "dummy", &info) => 0;
assert(strcmp(info.name, "dummy") == 0);
assert(info.type == LFS_TYPE_REG);
lfs_unmount(&lfs) => 0;
'''
# mount with unknown block_count
[cases.test_superblocks_unknown_blocks]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// known block_size/block_count
cfg->block_size = BLOCK_SIZE;
cfg->block_count = BLOCK_COUNT;
lfs_mount(&lfs, cfg) => 0;
struct lfs_fsinfo fsinfo;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
// unknown block_count
cfg->block_size = BLOCK_SIZE;
cfg->block_count = 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
// do some work
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_file_t file;
lfs_file_open(&lfs, &file, "test",
LFS_O_CREAT | LFS_O_EXCL | LFS_O_WRONLY) => 0;
lfs_file_write(&lfs, &file, "hello!", 6) => 6;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_file_open(&lfs, &file, "test", LFS_O_RDONLY) => 0;
uint8_t buffer[256];
lfs_file_read(&lfs, &file, buffer, sizeof(buffer)) => 6;
lfs_file_close(&lfs, &file) => 0;
assert(memcmp(buffer, "hello!", 6) == 0);
lfs_unmount(&lfs) => 0;
'''
# mount with fewer blocks than the erase_count
[cases.test_superblocks_fewer_blocks]
defines.BLOCK_COUNT = ['ERASE_COUNT/2', 'ERASE_COUNT/4', '2']
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
// known block_size/block_count
cfg->block_size = BLOCK_SIZE;
cfg->block_count = BLOCK_COUNT;
lfs_mount(&lfs, cfg) => 0;
struct lfs_fsinfo fsinfo;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
// incorrect block_count
cfg->block_size = BLOCK_SIZE;
cfg->block_count = ERASE_COUNT;
lfs_mount(&lfs, cfg) => LFS_ERR_INVAL;
// unknown block_count
cfg->block_size = BLOCK_SIZE;
cfg->block_count = 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
// do some work
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_file_t file;
lfs_file_open(&lfs, &file, "test",
LFS_O_CREAT | LFS_O_EXCL | LFS_O_WRONLY) => 0;
lfs_file_write(&lfs, &file, "hello!", 6) => 6;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_file_open(&lfs, &file, "test", LFS_O_RDONLY) => 0;
uint8_t buffer[256];
lfs_file_read(&lfs, &file, buffer, sizeof(buffer)) => 6;
lfs_file_close(&lfs, &file) => 0;
assert(memcmp(buffer, "hello!", 6) == 0);
lfs_unmount(&lfs) => 0;
'''
# mount with more blocks than the erase_count
[cases.test_superblocks_more_blocks]
defines.FORMAT_BLOCK_COUNT = '2*ERASE_COUNT'
in = 'lfs.c'
code = '''
lfs_t lfs;
lfs_init(&lfs, cfg) => 0;
lfs.block_count = BLOCK_COUNT;
lfs_mdir_t root = {
.pair = {0, 0}, // make sure this goes into block 0
.rev = 0,
.off = sizeof(uint32_t),
.etag = 0xffffffff,
.count = 0,
.tail = {LFS_BLOCK_NULL, LFS_BLOCK_NULL},
.erased = false,
.split = false,
};
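// hand-craft and commit a superblock that claims FORMAT_BLOCK_COUNT
// blocks, twice what the emulated block device actually has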
lfs_superblock_t superblock = {
.version = LFS_DISK_VERSION,
.block_size = BLOCK_SIZE,
.block_count = FORMAT_BLOCK_COUNT,
.name_max = LFS_NAME_MAX,
.file_max = LFS_FILE_MAX,
.attr_max = LFS_ATTR_MAX,
};
lfs_superblock_tole32(&superblock);
lfs_dir_commit(&lfs, &root, LFS_MKATTRS(
{LFS_MKTAG(LFS_TYPE_CREATE, 0, 0), NULL},
{LFS_MKTAG(LFS_TYPE_SUPERBLOCK, 0, 8), "littlefs"},
{LFS_MKTAG(LFS_TYPE_INLINESTRUCT, 0, sizeof(superblock)),
&superblock})) => 0;
lfs_deinit(&lfs) => 0;
// known block_size/block_count
cfg->block_size = BLOCK_SIZE;
cfg->block_count = BLOCK_COUNT;
lfs_mount(&lfs, cfg) => LFS_ERR_INVAL;
'''
# mount and grow the filesystem
[cases.test_superblocks_grow]
defines.BLOCK_COUNT = ['ERASE_COUNT/2', 'ERASE_COUNT/4', '2']
defines.BLOCK_COUNT_2 = 'ERASE_COUNT'
defines.KNOWN_BLOCK_COUNT = [true, false]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
if (KNOWN_BLOCK_COUNT) {
cfg->block_count = BLOCK_COUNT;
} else {
cfg->block_count = 0;
}
// mount with block_count < erase_count
lfs_mount(&lfs, cfg) => 0;
struct lfs_fsinfo fsinfo;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
// same size is a noop
lfs_mount(&lfs, cfg) => 0;
lfs_fs_grow(&lfs, BLOCK_COUNT) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT);
lfs_unmount(&lfs) => 0;
// grow to new size
lfs_mount(&lfs, cfg) => 0;
lfs_fs_grow(&lfs, BLOCK_COUNT_2) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT_2);
lfs_unmount(&lfs) => 0;
if (KNOWN_BLOCK_COUNT) {
cfg->block_count = BLOCK_COUNT_2;
} else {
cfg->block_count = 0;
}
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT_2);
lfs_unmount(&lfs) => 0;
// mounting with the previous size should fail
cfg->block_count = BLOCK_COUNT;
lfs_mount(&lfs, cfg) => LFS_ERR_INVAL;
if (KNOWN_BLOCK_COUNT) {
cfg->block_count = BLOCK_COUNT_2;
} else {
cfg->block_count = 0;
}
// same size is a noop
lfs_mount(&lfs, cfg) => 0;
lfs_fs_grow(&lfs, BLOCK_COUNT_2) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT_2);
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT_2);
lfs_unmount(&lfs) => 0;
// do some work
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT_2);
lfs_file_t file;
lfs_file_open(&lfs, &file, "test",
LFS_O_CREAT | LFS_O_EXCL | LFS_O_WRONLY) => 0;
lfs_file_write(&lfs, &file, "hello!", 6) => 6;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_fs_stat(&lfs, &fsinfo) => 0;
assert(fsinfo.block_size == BLOCK_SIZE);
assert(fsinfo.block_count == BLOCK_COUNT_2);
lfs_file_open(&lfs, &file, "test", LFS_O_RDONLY) => 0;
uint8_t buffer[256];
lfs_file_read(&lfs, &file, buffer, sizeof(buffer)) => 6;
lfs_file_close(&lfs, &file) => 0;
assert(memcmp(buffer, "hello!", 6) == 0);
lfs_unmount(&lfs) => 0;
'''


@@ -1,158 +0,0 @@
#!/bin/bash
set -eu
SMALLSIZE=32
MEDIUMSIZE=2048
LARGESIZE=8192
echo "=== Truncate tests ==="
rm -rf blocks
tests/test.py << TEST
lfs_format(&lfs, &cfg) => 0;
TEST
truncate_test() {
STARTSIZES="$1"
STARTSEEKS="$2"
HOTSIZES="$3"
COLDSIZES="$4"
tests/test.py << TEST
static const lfs_off_t startsizes[] = {$STARTSIZES};
static const lfs_off_t startseeks[] = {$STARTSEEKS};
static const lfs_off_t hotsizes[] = {$HOTSIZES};
lfs_mount(&lfs, &cfg) => 0;
for (unsigned i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
sprintf((char*)buffer, "hairyhead%d", i);
lfs_file_open(&lfs, &file[0], (const char*)buffer,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
strcpy((char*)buffer, "hair");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < startsizes[i]; j += size) {
lfs_file_write(&lfs, &file[0], buffer, size) => size;
}
lfs_file_size(&lfs, &file[0]) => startsizes[i];
if (startseeks[i] != startsizes[i]) {
lfs_file_seek(&lfs, &file[0],
startseeks[i], LFS_SEEK_SET) => startseeks[i];
}
lfs_file_truncate(&lfs, &file[0], hotsizes[i]) => 0;
lfs_file_size(&lfs, &file[0]) => hotsizes[i];
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
static const lfs_off_t startsizes[] = {$STARTSIZES};
static const lfs_off_t hotsizes[] = {$HOTSIZES};
static const lfs_off_t coldsizes[] = {$COLDSIZES};
lfs_mount(&lfs, &cfg) => 0;
for (unsigned i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
sprintf((char*)buffer, "hairyhead%d", i);
lfs_file_open(&lfs, &file[0], (const char*)buffer, LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file[0]) => hotsizes[i];
size = strlen("hair");
lfs_off_t j = 0;
for (; j < startsizes[i] && j < hotsizes[i]; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
for (; j < hotsizes[i]; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "\0\0\0\0", size) => 0;
}
lfs_file_truncate(&lfs, &file[0], coldsizes[i]) => 0;
lfs_file_size(&lfs, &file[0]) => coldsizes[i];
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
tests/test.py << TEST
static const lfs_off_t startsizes[] = {$STARTSIZES};
static const lfs_off_t hotsizes[] = {$HOTSIZES};
static const lfs_off_t coldsizes[] = {$COLDSIZES};
lfs_mount(&lfs, &cfg) => 0;
for (unsigned i = 0; i < sizeof(startsizes)/sizeof(startsizes[0]); i++) {
sprintf((char*)buffer, "hairyhead%d", i);
lfs_file_open(&lfs, &file[0], (const char*)buffer, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file[0]) => coldsizes[i];
size = strlen("hair");
lfs_off_t j = 0;
for (; j < startsizes[i] && j < hotsizes[i] && j < coldsizes[i];
j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
for (; j < coldsizes[i]; j += size) {
lfs_file_read(&lfs, &file[0], buffer, size) => size;
memcmp(buffer, "\0\0\0\0", size) => 0;
}
lfs_file_close(&lfs, &file[0]) => 0;
}
lfs_unmount(&lfs) => 0;
TEST
}
echo "--- Cold shrinking truncate ---"
truncate_test \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE"
echo "--- Cold expanding truncate ---"
truncate_test \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE"
echo "--- Warm shrinking truncate ---"
truncate_test \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
" 0, 0, 0, 0, 0"
echo "--- Warm expanding truncate ---"
truncate_test \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE"
echo "--- Mid-file shrinking truncate ---"
truncate_test \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
" $LARGESIZE, $LARGESIZE, $LARGESIZE, $LARGESIZE, $LARGESIZE" \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
" 0, 0, 0, 0, 0"
echo "--- Mid-file expanding truncate ---"
truncate_test \
" 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE, 2*$LARGESIZE" \
" 0, 0, $SMALLSIZE, $MEDIUMSIZE, $LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE" \
"2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE, 2*$LARGESIZE"
echo "--- Results ---"
tests/stats.py

tests/test_truncate.toml Normal file

@@ -0,0 +1,503 @@
# simple truncate
[cases.test_truncate_simple]
defines.MEDIUMSIZE = [31, 32, 33, 511, 512, 513, 2047, 2048, 2049]
defines.LARGESIZE = [32, 33, 512, 513, 2048, 2049, 8192, 8193]
if = 'MEDIUMSIZE < LARGESIZE'
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "baldynoop",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
uint8_t buffer[1024];
strcpy((char*)buffer, "hair");
size_t size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < LARGESIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, lfs_min(size, LARGESIZE-j))
=> lfs_min(size, LARGESIZE-j);
}
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "baldynoop", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_truncate(&lfs, &file, MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "baldynoop", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
size = strlen("hair");
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file, buffer, lfs_min(size, MEDIUMSIZE-j))
=> lfs_min(size, MEDIUMSIZE-j);
memcmp(buffer, "hair", lfs_min(size, MEDIUMSIZE-j)) => 0;
}
lfs_file_read(&lfs, &file, buffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# truncate and read
[cases.test_truncate_read]
defines.MEDIUMSIZE = [31, 32, 33, 511, 512, 513, 2047, 2048, 2049]
defines.LARGESIZE = [32, 33, 512, 513, 2048, 2049, 8192, 8193]
if = 'MEDIUMSIZE < LARGESIZE'
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "baldyread",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
uint8_t buffer[1024];
strcpy((char*)buffer, "hair");
size_t size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < LARGESIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, lfs_min(size, LARGESIZE-j))
=> lfs_min(size, LARGESIZE-j);
}
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "baldyread", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_truncate(&lfs, &file, MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
size = strlen("hair");
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file, buffer, lfs_min(size, MEDIUMSIZE-j))
=> lfs_min(size, MEDIUMSIZE-j);
memcmp(buffer, "hair", lfs_min(size, MEDIUMSIZE-j)) => 0;
}
lfs_file_read(&lfs, &file, buffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "baldyread", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
size = strlen("hair");
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file, buffer, lfs_min(size, MEDIUMSIZE-j))
=> lfs_min(size, MEDIUMSIZE-j);
memcmp(buffer, "hair", lfs_min(size, MEDIUMSIZE-j)) => 0;
}
lfs_file_read(&lfs, &file, buffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# write, truncate, and read
[cases.test_truncate_write_read]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "sequence",
LFS_O_RDWR | LFS_O_CREAT | LFS_O_TRUNC) => 0;
uint8_t buffer[1024];
size_t size = lfs_min(lfs.cfg->cache_size, sizeof(buffer)/2);
lfs_size_t qsize = size / 4;
uint8_t *wb = buffer;
uint8_t *rb = buffer + size;
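/* wb is the write half of the buffer, rb the read half */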
for (lfs_off_t j = 0; j < size; ++j) {
wb[j] = j;
}
/* Spread sequence over size */
lfs_file_write(&lfs, &file, wb, size) => size;
lfs_file_size(&lfs, &file) => size;
lfs_file_tell(&lfs, &file) => size;
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
lfs_file_tell(&lfs, &file) => 0;
/* Chop off the last quarter */
lfs_size_t trunc = size - qsize;
lfs_file_truncate(&lfs, &file, trunc) => 0;
lfs_file_tell(&lfs, &file) => 0;
lfs_file_size(&lfs, &file) => trunc;
/* Read should produce first 3/4 */
lfs_file_read(&lfs, &file, rb, size) => trunc;
memcmp(rb, wb, trunc) => 0;
/* Move to 1/4 */
lfs_file_size(&lfs, &file) => trunc;
lfs_file_seek(&lfs, &file, qsize, LFS_SEEK_SET) => qsize;
lfs_file_tell(&lfs, &file) => qsize;
/* Chop to 1/2 */
trunc -= qsize;
lfs_file_truncate(&lfs, &file, trunc) => 0;
lfs_file_tell(&lfs, &file) => qsize;
lfs_file_size(&lfs, &file) => trunc;
/* Read should produce second quarter */
lfs_file_read(&lfs, &file, rb, size) => trunc - qsize;
memcmp(rb, wb + qsize, trunc - qsize) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# truncate and write
[cases.test_truncate_write]
defines.MEDIUMSIZE = [31, 32, 33, 511, 512, 513, 2047, 2048, 2049]
defines.LARGESIZE = [32, 33, 512, 513, 2048, 2049, 8192, 8193]
if = 'MEDIUMSIZE < LARGESIZE'
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "baldywrite",
LFS_O_WRONLY | LFS_O_CREAT) => 0;
uint8_t buffer[1024];
strcpy((char*)buffer, "hair");
size_t size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < LARGESIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, lfs_min(size, LARGESIZE-j))
=> lfs_min(size, LARGESIZE-j);
}
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "baldywrite", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => LARGESIZE;
/* truncate */
lfs_file_truncate(&lfs, &file, MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
/* and write */
strcpy((char*)buffer, "bald");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, lfs_min(size, MEDIUMSIZE-j))
=> lfs_min(size, MEDIUMSIZE-j);
}
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "baldywrite", LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
size = strlen("bald");
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file, buffer, lfs_min(size, MEDIUMSIZE-j))
=> lfs_min(size, MEDIUMSIZE-j);
memcmp(buffer, "bald", lfs_min(size, MEDIUMSIZE-j)) => 0;
}
lfs_file_read(&lfs, &file, buffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# truncate write under powerloss
[cases.test_truncate_reentrant_write]
defines.SMALLSIZE = [4, 512]
defines.MEDIUMSIZE = [0, 3, 4, 5, 31, 32, 33, 511, 512, 513, 1023, 1024, 1025]
defines.LARGESIZE = 2048
reentrant = true
defines.POWERLOSS_BEHAVIOR = [
'LFS_EMUBD_POWERLOSS_NOOP',
'LFS_EMUBD_POWERLOSS_OOO',
]
code = '''
lfs_t lfs;
int err = lfs_mount(&lfs, cfg);
if (err) {
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
}
lfs_file_t file;
err = lfs_file_open(&lfs, &file, "baldy", LFS_O_RDONLY);
assert(!err || err == LFS_ERR_NOENT);
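// after a powerloss the file may not exist yet, or may have been caught
// at any of the write/truncate stages below; just check it is consistent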
if (!err) {
size_t size = lfs_file_size(&lfs, &file);
assert(size == 0 ||
size == (size_t)LARGESIZE ||
size == (size_t)MEDIUMSIZE ||
size == (size_t)SMALLSIZE);
for (lfs_off_t j = 0; j < size; j += 4) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &file, buffer, lfs_min(4, size-j))
=> lfs_min(4, size-j);
assert(memcmp(buffer, "hair", lfs_min(4, size-j)) == 0 ||
memcmp(buffer, "bald", lfs_min(4, size-j)) == 0 ||
memcmp(buffer, "comb", lfs_min(4, size-j)) == 0);
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_file_open(&lfs, &file, "baldy",
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
lfs_file_size(&lfs, &file) => 0;
uint8_t buffer[1024];
strcpy((char*)buffer, "hair");
size_t size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < LARGESIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, lfs_min(size, LARGESIZE-j))
=> lfs_min(size, LARGESIZE-j);
}
lfs_file_size(&lfs, &file) => LARGESIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "baldy", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => LARGESIZE;
/* truncate */
lfs_file_truncate(&lfs, &file, MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
/* and write */
strcpy((char*)buffer, "bald");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, lfs_min(size, MEDIUMSIZE-j))
=> lfs_min(size, MEDIUMSIZE-j);
}
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_file_open(&lfs, &file, "baldy", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
lfs_file_truncate(&lfs, &file, SMALLSIZE) => 0;
lfs_file_size(&lfs, &file) => SMALLSIZE;
strcpy((char*)buffer, "comb");
size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < SMALLSIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, lfs_min(size, SMALLSIZE-j))
=> lfs_min(size, SMALLSIZE-j);
}
lfs_file_size(&lfs, &file) => SMALLSIZE;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''
# more aggressive general truncation tests
[cases.test_truncate_aggressive]
defines.CONFIG = 'range(6)'
defines.SMALLSIZE = 32
defines.MEDIUMSIZE = 2048
defines.LARGESIZE = 8192
code = '''
lfs_t lfs;
#define COUNT 5
const struct {
lfs_off_t startsizes[COUNT];
lfs_off_t startseeks[COUNT];
lfs_off_t hotsizes[COUNT];
lfs_off_t coldsizes[COUNT];
} configs[] = {
// cold shrinking
{{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE}},
// cold expanding
{{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE}},
// warm shrinking truncate
{{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{ 0, 0, 0, 0, 0}},
// warm expanding truncate
{{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE}},
// mid-file shrinking truncate
{{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{ LARGESIZE, LARGESIZE, LARGESIZE, LARGESIZE, LARGESIZE},
{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{ 0, 0, 0, 0, 0}},
// mid-file expanding truncate
{{ 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE, 2*LARGESIZE},
{ 0, 0, SMALLSIZE, MEDIUMSIZE, LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE},
{2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE, 2*LARGESIZE}},
};
const lfs_off_t *startsizes = configs[CONFIG].startsizes;
const lfs_off_t *startseeks = configs[CONFIG].startseeks;
const lfs_off_t *hotsizes = configs[CONFIG].hotsizes;
const lfs_off_t *coldsizes = configs[CONFIG].coldsizes;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "hairyhead%d", i);
lfs_file_t file;
lfs_file_open(&lfs, &file, path,
LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
uint8_t buffer[1024];
strcpy((char*)buffer, "hair");
size_t size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < startsizes[i]; j += size) {
lfs_file_write(&lfs, &file, buffer, size) => size;
}
lfs_file_size(&lfs, &file) => startsizes[i];
if (startseeks[i] != startsizes[i]) {
lfs_file_seek(&lfs, &file,
startseeks[i], LFS_SEEK_SET) => startseeks[i];
}
lfs_file_truncate(&lfs, &file, hotsizes[i]) => 0;
lfs_file_size(&lfs, &file) => hotsizes[i];
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "hairyhead%d", i);
lfs_file_t file;
lfs_file_open(&lfs, &file, path, LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => hotsizes[i];
size_t size = strlen("hair");
lfs_off_t j = 0;
for (; j < startsizes[i] && j < hotsizes[i]; j += size) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
for (; j < hotsizes[i]; j += size) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "\0\0\0\0", size) => 0;
}
lfs_file_truncate(&lfs, &file, coldsizes[i]) => 0;
lfs_file_size(&lfs, &file) => coldsizes[i];
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
lfs_mount(&lfs, cfg) => 0;
for (unsigned i = 0; i < COUNT; i++) {
char path[1024];
sprintf(path, "hairyhead%d", i);
lfs_file_t file;
lfs_file_open(&lfs, &file, path, LFS_O_RDONLY) => 0;
lfs_file_size(&lfs, &file) => coldsizes[i];
size_t size = strlen("hair");
lfs_off_t j = 0;
for (; j < startsizes[i] && j < hotsizes[i] && j < coldsizes[i];
j += size) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "hair", size) => 0;
}
for (; j < coldsizes[i]; j += size) {
uint8_t buffer[1024];
lfs_file_read(&lfs, &file, buffer, size) => size;
memcmp(buffer, "\0\0\0\0", size) => 0;
}
lfs_file_close(&lfs, &file) => 0;
}
lfs_unmount(&lfs) => 0;
'''
# noop truncate
[cases.test_truncate_nop]
defines.MEDIUMSIZE = [32, 33, 512, 513, 2048, 2049, 8192, 8193]
code = '''
lfs_t lfs;
lfs_format(&lfs, cfg) => 0;
lfs_mount(&lfs, cfg) => 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "baldynoop",
LFS_O_RDWR | LFS_O_CREAT) => 0;
uint8_t buffer[1024];
strcpy((char*)buffer, "hair");
size_t size = strlen((char*)buffer);
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_write(&lfs, &file, buffer, lfs_min(size, MEDIUMSIZE-j))
=> lfs_min(size, MEDIUMSIZE-j);
// this truncate should do nothing
lfs_file_truncate(&lfs, &file, j+lfs_min(size, MEDIUMSIZE-j)) => 0;
}
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET) => 0;
// should do nothing again
lfs_file_truncate(&lfs, &file, MEDIUMSIZE) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file, buffer, lfs_min(size, MEDIUMSIZE-j))
=> lfs_min(size, MEDIUMSIZE-j);
memcmp(buffer, "hair", lfs_min(size, MEDIUMSIZE-j)) => 0;
}
lfs_file_read(&lfs, &file, buffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
// still there after reboot?
lfs_mount(&lfs, cfg) => 0;
lfs_file_open(&lfs, &file, "baldynoop", LFS_O_RDWR) => 0;
lfs_file_size(&lfs, &file) => MEDIUMSIZE;
for (lfs_off_t j = 0; j < MEDIUMSIZE; j += size) {
lfs_file_read(&lfs, &file, buffer, lfs_min(size, MEDIUMSIZE-j))
=> lfs_min(size, MEDIUMSIZE-j);
memcmp(buffer, "hair", lfs_min(size, MEDIUMSIZE-j)) => 0;
}
lfs_file_read(&lfs, &file, buffer, size) => 0;
lfs_file_close(&lfs, &file) => 0;
lfs_unmount(&lfs) => 0;
'''