Reworked bench.py/bench_runner and how bench measurements are recorded

This is based on how bench.py/bench_runners have actually been used in
practice. The main changes make the output of bench.py more readily
consumable by plot.py/plotmpl.py without needing a bunch of hacky
intermediary scripts.

Now instead of a single per-bench BENCH_START/BENCH_STOP, benches can
have multiple named BENCH_START/BENCH_STOP invocations to measure
multiple things in one run:

  BENCH_START("fetch", i, STEP);
  lfsr_rbyd_fetch(&lfs, &rbyd_, rbyd.block, CFG->block_size) => 0;
  BENCH_STOP("fetch");

Benches can now also report explicit results for non-IO measurements:

  BENCH_RESULT("usage", i, STEP, rbyd.eoff);

The extra iter/size parameters to BENCH_START/BENCH_RESULT also allow
some extra information to be calculated post-bench. This information
gets tagged with an extra bench_agg field to help organize results in
plot.py/plotmpl.py:

  - bench_meas=<meas>+amor, bench_agg=raw - amortized results
  - bench_meas=<meas>+div,  bench_agg=raw - per-byte results
  - bench_meas=<meas>+avg,  bench_agg=avg - average over BENCH_SEED
  - bench_meas=<meas>+min,  bench_agg=min - minimum over BENCH_SEED
  - bench_meas=<meas>+max,  bench_agg=max - maximum over BENCH_SEED
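As a rough sketch of what these derived measurements mean (this is a
hypothetical interpretation, not the actual bench.py/plot.py code; field
names and math may differ):

```python
# Hypothetical sketch of the post-bench derived measurements. The real
# logic lives in bench.py/plot.py; this only illustrates the intent of
# the +amor/+div and +avg/+min/+max results.

def derive(results):
    # results: list of (seed, iter, size, value) tuples for one measurement
    derived = []
    for seed, i, size, value in results:
        # <meas>+amor: result amortized over the number of iterations so far
        derived.append((seed, "amor", value / (i + 1)))
        # <meas>+div: result divided by the size processed, i.e. per-byte
        derived.append((seed, "div", value / size))
    return derived

def aggregate(values):
    # <meas>+avg/+min/+max: aggregated across BENCH_SEED runs
    return {
        "avg": sum(values) / len(values),
        "min": min(values),
        "max": max(values),
    }
```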

---

Also removed all bench.tomls for now. This may seem counterproductive in
a commit to improve benchmarking, but I'm not sure there's actual value
to keeping bench cases committed in tree.

These were always quick to fall out of date (at the time of this commit
most of the low-level bench.tomls, rbyd, btree, etc, no longer
compiled), and most benchmarks were one-off collections of scripts/data
with results too large/cumbersome to commit and keep updated in tree.

I think the better way to approach benchmarking is a separate repo
(multiple repos?) with all related scripts/state/code and results
committed into a hopefully reproducible snapshot. Keeping the
bench.tomls in that repo makes more sense in this model.

There may be some value to having benchmarks in CI in the future, but
for that to make sense they would need to actually fail on performance
regressions, and how to do that isn't so clear. Anyways, this can always
be addressed in the future rather than now.
Christopher Haster
2023-11-03 10:27:17 -05:00
parent 4069cf5701
commit e8bdd4d381
11 changed files with 272 additions and 1399 deletions


@@ -1,113 +0,0 @@
# Bench our mid-level B-trees
after = 'bench_rbyd'
# maximize lookahead buffer, we don't actually gc so we only get one pass
# of the disk for these tests
defines.LOOKAHEAD_SIZE = 'BLOCK_COUNT / 8'
[cases.bench_btree_lookup]
defines.N = [8, 16, 32, 64, 128, 256, 1024]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
defines.SEED = 42
in = 'lfs.c'
code = '''
lfs_t lfs;
lfs_init(&lfs, CFG) => 0;
// create free lookahead
memset(lfs.lookahead.buffer, 0, CFG->lookahead_size);
lfs.lookahead.start = 0;
lfs.lookahead.size = lfs_min(8*CFG->lookahead_size,
CFG->block_count);
lfs.lookahead.next = 0;
lfs_alloc_ack(&lfs);
uint32_t prng = SEED;
// create a tree with N elements
lfsr_btree_t btree = LFSR_BTREE_NULL;
const char *alphas = "abcdefghijklmnopqrstuvwxyz";
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (lfsr_btree_weight(&btree)+1);
lfsr_btree_push(&lfs, &btree, i_, LFSR_TAG_INLINED, 1,
LFSR_DATA_BUF(&alphas[i % 26], 1)) => 0;
}
// assume an unfetched btree
btree.root.off = 0;
// bench lookup
BENCH_START();
lfs_size_t i = BENCH_PRNG(&prng) % N;
uint8_t buffer[4];
lfsr_tag_t tag_;
lfs_size_t weight_;
lfsr_btree_get(&lfs, &btree, i,
&tag_, &weight_, buffer, 4) => 1;
assert(tag_ == LFSR_TAG_INLINED);
assert(weight_ == 1);
BENCH_STOP();
'''
[cases.bench_btree_commit]
defines.N = [8, 16, 32, 64, 128, 256, 1024]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
defines.SEED = 42
defines.AMORTIZED = false
in = 'lfs.c'
code = '''
lfs_t lfs;
lfs_init(&lfs, CFG) => 0;
// create free lookahead
memset(lfs.lookahead.buffer, 0, CFG->lookahead_size);
lfs.lookahead.start = 0;
lfs.lookahead.size = lfs_min(8*CFG->lookahead_size,
CFG->block_count);
lfs.lookahead.next = 0;
lfs_alloc_ack(&lfs);
uint32_t prng = SEED;
// create a tree with N elements
if (AMORTIZED) {
BENCH_START();
}
lfsr_btree_t btree = LFSR_BTREE_NULL;
const char *alphas = "abcdefghijklmnopqrstuvwxyz";
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (lfsr_btree_weight(&btree)+1);
lfsr_btree_push(&lfs, &btree, i_, LFSR_TAG_INLINED, 1,
LFSR_DATA_BUF(&alphas[i % 26], 1)) => 0;
}
// bench appending a new id
if (!AMORTIZED) {
BENCH_START();
}
lfs_size_t i = BENCH_PRNG(&prng) % N;
lfsr_btree_push(&lfs, &btree, i, LFSR_TAG_INLINED, 1,
LFSR_DATA_BUF(&alphas[i % 26], 1)) => 0;
BENCH_STOP();
uint8_t buffer[4];
lfsr_tag_t tag_;
lfs_size_t weight_;
lfsr_btree_get(&lfs, &btree, i,
&tag_, &weight_, buffer, 4) => 1;
assert(tag_ == LFSR_TAG_INLINED);
assert(weight_ == 1);
'''


@@ -1,270 +0,0 @@
#[cases.bench_dir_open]
## 0 = in-order
## 1 = reversed-order
## 2 = random-order
#defines.ORDER = [0, 1, 2]
#defines.N = 1024
#defines.FILE_SIZE = 8
#defines.CHUNK_SIZE = 8
#code = '''
# lfs_t lfs;
# lfs_format(&lfs, cfg) => 0;
# lfs_mount(&lfs, cfg) => 0;
#
# // first create the files
# char name[256];
# uint8_t buffer[CHUNK_SIZE];
# for (lfs_size_t i = 0; i < N; i++) {
# sprintf(name, "file%08x", i);
# lfs_file_t file;
# lfs_file_open(&lfs, &file, name,
# LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
#
# uint32_t file_prng = i;
# for (lfs_size_t j = 0; j < FILE_SIZE; j += CHUNK_SIZE) {
# for (lfs_size_t k = 0; k < CHUNK_SIZE; k++) {
# buffer[k] = BENCH_PRNG(&file_prng);
# }
# lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
# }
#
# lfs_file_close(&lfs, &file) => 0;
# }
#
# // then read the files
# BENCH_START();
# uint32_t prng = 42;
# for (lfs_size_t i = 0; i < N; i++) {
# lfs_off_t i_
# = (ORDER == 0) ? i
# : (ORDER == 1) ? (N-1-i)
# : BENCH_PRNG(&prng) % N;
# sprintf(name, "file%08x", i_);
# lfs_file_t file;
# lfs_file_open(&lfs, &file, name, LFS_O_RDONLY) => 0;
#
# uint32_t file_prng = i_;
# for (lfs_size_t j = 0; j < FILE_SIZE; j += CHUNK_SIZE) {
# lfs_file_read(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
# for (lfs_size_t k = 0; k < CHUNK_SIZE; k++) {
# assert(buffer[k] == BENCH_PRNG(&file_prng));
# }
# }
#
# lfs_file_close(&lfs, &file) => 0;
# }
# BENCH_STOP();
#
# lfs_unmount(&lfs) => 0;
#'''
#
#[cases.bench_dir_creat]
## 0 = in-order
## 1 = reversed-order
## 2 = random-order
#defines.ORDER = [0, 1, 2]
#defines.N = 1024
#defines.FILE_SIZE = 8
#defines.CHUNK_SIZE = 8
#code = '''
# lfs_t lfs;
# lfs_format(&lfs, cfg) => 0;
# lfs_mount(&lfs, cfg) => 0;
#
# BENCH_START();
# uint32_t prng = 42;
# char name[256];
# uint8_t buffer[CHUNK_SIZE];
# for (lfs_size_t i = 0; i < N; i++) {
# lfs_off_t i_
# = (ORDER == 0) ? i
# : (ORDER == 1) ? (N-1-i)
# : BENCH_PRNG(&prng) % N;
# sprintf(name, "file%08x", i_);
# lfs_file_t file;
# lfs_file_open(&lfs, &file, name,
# LFS_O_WRONLY | LFS_O_CREAT | LFS_O_TRUNC) => 0;
#
# uint32_t file_prng = i_;
# for (lfs_size_t j = 0; j < FILE_SIZE; j += CHUNK_SIZE) {
# for (lfs_size_t k = 0; k < CHUNK_SIZE; k++) {
# buffer[k] = BENCH_PRNG(&file_prng);
# }
# lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
# }
#
# lfs_file_close(&lfs, &file) => 0;
# }
# BENCH_STOP();
#
# lfs_unmount(&lfs) => 0;
#'''
#
#[cases.bench_dir_remove]
## 0 = in-order
## 1 = reversed-order
## 2 = random-order
#defines.ORDER = [0, 1, 2]
#defines.N = 1024
#defines.FILE_SIZE = 8
#defines.CHUNK_SIZE = 8
#code = '''
# lfs_t lfs;
# lfs_format(&lfs, cfg) => 0;
# lfs_mount(&lfs, cfg) => 0;
#
# // first create the files
# char name[256];
# uint8_t buffer[CHUNK_SIZE];
# for (lfs_size_t i = 0; i < N; i++) {
# sprintf(name, "file%08x", i);
# lfs_file_t file;
# lfs_file_open(&lfs, &file, name,
# LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
#
# uint32_t file_prng = i;
# for (lfs_size_t j = 0; j < FILE_SIZE; j += CHUNK_SIZE) {
# for (lfs_size_t k = 0; k < CHUNK_SIZE; k++) {
# buffer[k] = BENCH_PRNG(&file_prng);
# }
# lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
# }
#
# lfs_file_close(&lfs, &file) => 0;
# }
#
# // then remove the files
# BENCH_START();
# uint32_t prng = 42;
# for (lfs_size_t i = 0; i < N; i++) {
# lfs_off_t i_
# = (ORDER == 0) ? i
# : (ORDER == 1) ? (N-1-i)
# : BENCH_PRNG(&prng) % N;
# sprintf(name, "file%08x", i_);
# int err = lfs_remove(&lfs, name);
# assert(!err || err == LFS_ERR_NOENT);
# }
# BENCH_STOP();
#
# lfs_unmount(&lfs) => 0;
#'''
#
#[cases.bench_dir_read]
#defines.N = 1024
#defines.FILE_SIZE = 8
#defines.CHUNK_SIZE = 8
#code = '''
# lfs_t lfs;
# lfs_format(&lfs, cfg) => 0;
# lfs_mount(&lfs, cfg) => 0;
#
# // first create the files
# char name[256];
# uint8_t buffer[CHUNK_SIZE];
# for (lfs_size_t i = 0; i < N; i++) {
# sprintf(name, "file%08x", i);
# lfs_file_t file;
# lfs_file_open(&lfs, &file, name,
# LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
#
# uint32_t file_prng = i;
# for (lfs_size_t j = 0; j < FILE_SIZE; j += CHUNK_SIZE) {
# for (lfs_size_t k = 0; k < CHUNK_SIZE; k++) {
# buffer[k] = BENCH_PRNG(&file_prng);
# }
# lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
# }
#
# lfs_file_close(&lfs, &file) => 0;
# }
#
# // then read the directory
# BENCH_START();
# lfs_dir_t dir;
# lfs_dir_open(&lfs, &dir, "/") => 0;
# struct lfs_info info;
# lfs_dir_read(&lfs, &dir, &info) => 1;
# assert(info.type == LFS_TYPE_DIR);
# assert(strcmp(info.name, ".") == 0);
# lfs_dir_read(&lfs, &dir, &info) => 1;
# assert(info.type == LFS_TYPE_DIR);
# assert(strcmp(info.name, "..") == 0);
# for (int i = 0; i < N; i++) {
# sprintf(name, "file%08x", i);
# lfs_dir_read(&lfs, &dir, &info) => 1;
# assert(info.type == LFS_TYPE_REG);
# assert(strcmp(info.name, name) == 0);
# }
# lfs_dir_read(&lfs, &dir, &info) => 0;
# lfs_dir_close(&lfs, &dir) => 0;
# BENCH_STOP();
#
# lfs_unmount(&lfs) => 0;
#'''
#
#[cases.bench_dir_mkdir]
## 0 = in-order
## 1 = reversed-order
## 2 = random-order
#defines.ORDER = [0, 1, 2]
#defines.N = 8
#code = '''
# lfs_t lfs;
# lfs_format(&lfs, cfg) => 0;
# lfs_mount(&lfs, cfg) => 0;
#
# BENCH_START();
# uint32_t prng = 42;
# char name[256];
# for (lfs_size_t i = 0; i < N; i++) {
# lfs_off_t i_
# = (ORDER == 0) ? i
# : (ORDER == 1) ? (N-1-i)
# : BENCH_PRNG(&prng) % N;
# printf("hm %d\n", i);
# sprintf(name, "dir%08x", i_);
# int err = lfs_mkdir(&lfs, name);
# assert(!err || err == LFS_ERR_EXIST);
# }
# BENCH_STOP();
#
# lfs_unmount(&lfs) => 0;
#'''
#
#[cases.bench_dir_rmdir]
## 0 = in-order
## 1 = reversed-order
## 2 = random-order
#defines.ORDER = [0, 1, 2]
#defines.N = 8
#code = '''
# lfs_t lfs;
# lfs_format(&lfs, cfg) => 0;
# lfs_mount(&lfs, cfg) => 0;
#
# // first create the dirs
# char name[256];
# for (lfs_size_t i = 0; i < N; i++) {
# sprintf(name, "dir%08x", i);
# lfs_mkdir(&lfs, name) => 0;
# }
#
# // then remove the dirs
# BENCH_START();
# uint32_t prng = 42;
# for (lfs_size_t i = 0; i < N; i++) {
# lfs_off_t i_
# = (ORDER == 0) ? i
# : (ORDER == 1) ? (N-1-i)
# : BENCH_PRNG(&prng) % N;
# sprintf(name, "dir%08x", i_);
# int err = lfs_remove(&lfs, name);
# assert(!err || err == LFS_ERR_NOENT);
# }
# BENCH_STOP();
#
# lfs_unmount(&lfs) => 0;
#'''
#
#


@@ -1,95 +0,0 @@
#[cases.bench_file_read]
## 0 = in-order
## 1 = reversed-order
## 2 = random-order
#defines.ORDER = [0, 1, 2]
#defines.SIZE = '128*1024'
#defines.CHUNK_SIZE = 64
#code = '''
# lfs_t lfs;
# lfs_format(&lfs, cfg) => 0;
# lfs_mount(&lfs, cfg) => 0;
# lfs_size_t chunks = (SIZE+CHUNK_SIZE-1)/CHUNK_SIZE;
#
# // first write the file
# lfs_file_t file;
# uint8_t buffer[CHUNK_SIZE];
# lfs_file_open(&lfs, &file, "file",
# LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
# for (lfs_size_t i = 0; i < chunks; i++) {
# uint32_t chunk_prng = i;
# for (lfs_size_t j = 0; j < CHUNK_SIZE; j++) {
# buffer[j] = BENCH_PRNG(&chunk_prng);
# }
#
# lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
# }
# lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
# lfs_file_close(&lfs, &file) => 0;
#
# // then read the file
# BENCH_START();
# lfs_file_open(&lfs, &file, "file", LFS_O_RDONLY) => 0;
#
# uint32_t prng = 42;
# for (lfs_size_t i = 0; i < chunks; i++) {
# lfs_off_t i_
# = (ORDER == 0) ? i
# : (ORDER == 1) ? (chunks-1-i)
# : BENCH_PRNG(&prng) % chunks;
# lfs_file_seek(&lfs, &file, i_*CHUNK_SIZE, LFS_SEEK_SET)
# => i_*CHUNK_SIZE;
# lfs_file_read(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
#
# uint32_t chunk_prng = i_;
# for (lfs_size_t j = 0; j < CHUNK_SIZE; j++) {
# assert(buffer[j] == BENCH_PRNG(&chunk_prng));
# }
# }
#
# lfs_file_close(&lfs, &file) => 0;
# BENCH_STOP();
#
# lfs_unmount(&lfs) => 0;
#'''
#
#[cases.bench_file_write]
## 0 = in-order
## 1 = reversed-order
## 2 = random-order
#defines.ORDER = [0, 1, 2]
#defines.SIZE = '128*1024'
#defines.CHUNK_SIZE = 64
#code = '''
# lfs_t lfs;
# lfs_format(&lfs, cfg) => 0;
# lfs_mount(&lfs, cfg) => 0;
# lfs_size_t chunks = (SIZE+CHUNK_SIZE-1)/CHUNK_SIZE;
#
# BENCH_START();
# lfs_file_t file;
# lfs_file_open(&lfs, &file, "file",
# LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
#
# uint8_t buffer[CHUNK_SIZE];
# uint32_t prng = 42;
# for (lfs_size_t i = 0; i < chunks; i++) {
# lfs_off_t i_
# = (ORDER == 0) ? i
# : (ORDER == 1) ? (chunks-1-i)
# : BENCH_PRNG(&prng) % chunks;
# uint32_t chunk_prng = i_;
# for (lfs_size_t j = 0; j < CHUNK_SIZE; j++) {
# buffer[j] = BENCH_PRNG(&chunk_prng);
# }
#
# lfs_file_seek(&lfs, &file, i_*CHUNK_SIZE, LFS_SEEK_SET)
# => i_*CHUNK_SIZE;
# lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
# }
#
# lfs_file_close(&lfs, &file) => 0;
# BENCH_STOP();
#
# lfs_unmount(&lfs) => 0;
#'''


@@ -1,206 +0,0 @@
# Bench our high-level metadata tree in the core of littlefs
after = ['bench_rbyd', 'bench_btree']
# maximize lookahead buffer, we don't actually gc so we only get one pass
# of the disk for these tests
defines.LOOKAHEAD_SIZE = 'BLOCK_COUNT / 8'
[cases.bench_mtree_lookup]
defines.N = [8, 16, 32, 64, 128, 256, 1024]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
defines.SEED = 42
in = 'lfs.c'
code = '''
uint32_t prng = SEED;
const char *alphas = "abcdefghijklmnopqrstuvwxyz";
lfs_t lfs;
lfsr_format(&lfs, CFG) => 0;
lfsr_mount(&lfs, CFG) => 0;
lfs_alloc_ack(&lfs);
// create an mtree with N entries
for (lfs_size_t i = 0; i < N; i++) {
// choose an mid
lfs_ssize_t mid
= lfsr_mtree_weight(&lfs) == 0 ? -1
: (ORDER == 0) ? (lfs_ssize_t)(lfsr_mtree_weight(&lfs)-1)
: (ORDER == 1) ? 0
: (lfs_ssize_t)(BENCH_PRNG(&prng) % lfsr_mtree_weight(&lfs));
// fetch mdir
lfsr_mdir_t mdir;
lfsr_mtree_lookup(&lfs, mid, &mdir) => 0;
// choose rid
lfs_ssize_t rid
= (ORDER == 0) ? lfsr_mdir_weight(&mdir)
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (lfsr_mdir_weight(&mdir)+1);
// create an entry
lfsr_mdir_commit(&lfs, &mdir, &rid, LFSR_ATTRS(
LFSR_ATTR(rid, INLINED, +1, &alphas[i % 26], 1))) => 0;
}
// bench lookup
BENCH_START();
// choose an mid
lfs_ssize_t mid
= lfsr_mtree_weight(&lfs) == 0 ? -1
: (lfs_ssize_t)(BENCH_PRNG(&prng) % lfsr_mtree_weight(&lfs));
// fetch mdir
lfsr_mdir_t mdir;
lfsr_mtree_lookup(&lfs, mid, &mdir) => 0;
// choose rid
lfs_ssize_t rid = BENCH_PRNG(&prng) % lfsr_mdir_weight(&mdir);
// lookup
uint8_t buffer[4];
lfsr_mdir_get(&lfs, &mdir, rid, LFSR_TAG_INLINED, buffer, 4) => 1;
BENCH_STOP();
lfsr_unmount(&lfs) => 0;
'''
[cases.bench_mtree_commit]
defines.N = [8, 16, 32, 64, 128, 256, 1024]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
defines.SEED = 42
defines.AMORTIZED = false
in = 'lfs.c'
code = '''
uint32_t prng = SEED;
const char *alphas = "abcdefghijklmnopqrstuvwxyz";
lfs_t lfs;
lfsr_format(&lfs, CFG) => 0;
lfsr_mount(&lfs, CFG) => 0;
lfs_alloc_ack(&lfs);
// create an mtree with N entries
if (AMORTIZED) {
BENCH_START();
}
for (lfs_size_t i = 0; i < N; i++) {
// choose an mid
lfs_ssize_t mid
= lfsr_mtree_weight(&lfs) == 0 ? -1
: (ORDER == 0) ? (lfs_ssize_t)(lfsr_mtree_weight(&lfs)-1)
: (ORDER == 1) ? 0
: (lfs_ssize_t)(BENCH_PRNG(&prng) % lfsr_mtree_weight(&lfs));
// fetch mdir
lfsr_mdir_t mdir;
lfsr_mtree_lookup(&lfs, mid, &mdir) => 0;
// choose rid
lfs_ssize_t rid
= (ORDER == 0) ? lfsr_mdir_weight(&mdir)
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (lfsr_mdir_weight(&mdir)+1);
// create an entry
lfsr_mdir_commit(&lfs, &mdir, &rid, LFSR_ATTRS(
LFSR_ATTR(rid, INLINED, +1, &alphas[i % 26], 1))) => 0;
}
// bench commit
if (!AMORTIZED) {
BENCH_START();
}
// choose an mid
lfs_ssize_t mid
= lfsr_mtree_weight(&lfs) == 0 ? -1
: (ORDER == 0) ? (lfs_ssize_t)(lfsr_mtree_weight(&lfs)-1)
: (ORDER == 1) ? 0
: (lfs_ssize_t)(BENCH_PRNG(&prng) % lfsr_mtree_weight(&lfs));
// fetch mdir
lfsr_mdir_t mdir;
lfsr_mtree_lookup(&lfs, mid, &mdir) => 0;
// choose rid
lfs_ssize_t rid
= (ORDER == 0) ? lfsr_mdir_weight(&mdir)
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (lfsr_mdir_weight(&mdir)+1);
// create an entry
lfsr_mdir_commit(&lfs, &mdir, &rid, LFSR_ATTRS(
LFSR_ATTR(rid, INLINED, +1, "C", 1))) => 0;
BENCH_STOP();
lfsr_unmount(&lfs) => 0;
'''
[cases.bench_mtree_traversal]
defines.N = [8, 16, 32, 64, 128, 256, 1024]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
defines.SEED = 42
defines.VALIDATE = [false, true]
in = 'lfs.c'
code = '''
uint32_t prng = SEED;
const char *alphas = "abcdefghijklmnopqrstuvwxyz";
lfs_t lfs;
lfsr_format(&lfs, CFG) => 0;
lfsr_mount(&lfs, CFG) => 0;
lfs_alloc_ack(&lfs);
// create an mtree with N entries
for (lfs_size_t i = 0; i < N; i++) {
// choose an mid
lfs_ssize_t mid
= lfsr_mtree_weight(&lfs) == 0 ? -1
: (ORDER == 0) ? (lfs_ssize_t)(lfsr_mtree_weight(&lfs)-1)
: (ORDER == 1) ? 0
: (lfs_ssize_t)(BENCH_PRNG(&prng) % lfsr_mtree_weight(&lfs));
// fetch mdir
lfsr_mdir_t mdir;
lfsr_mtree_lookup(&lfs, mid, &mdir) => 0;
// choose rid
lfs_ssize_t rid
= (ORDER == 0) ? lfsr_mdir_weight(&mdir)
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (lfsr_mdir_weight(&mdir)+1);
// create an entry
lfsr_mdir_commit(&lfs, &mdir, &rid, LFSR_ATTRS(
LFSR_ATTR(rid, INLINED, +1, &alphas[i % 26], 1))) => 0;
}
// traverse the mtree
BENCH_START();
lfsr_mtree_traversal_t traversal = LFSR_MTREE_TRAVERSAL_INIT(
VALIDATE ? LFSR_MTREE_TRAVERSAL_VALIDATE : 0);
for (lfs_block_t i = 0;; i++) {
// a bit hacky, but this catches infinite loops
assert(i < 2*(1+N));
lfs_size_t mid_;
lfsr_tag_t tag_;
lfsr_data_t data_;
int err = lfsr_mtree_traversal_next(&lfs, &traversal,
&mid_, &tag_, &data_);
assert(!err || err == LFS_ERR_NOENT);
if (err == LFS_ERR_NOENT) {
break;
}
assert(tag_ == LFSR_TAG_BTREE || tag_ == LFSR_TAG_MDIR);
}
BENCH_STOP();
lfsr_unmount(&lfs) => 0;
'''


@@ -1,597 +0,0 @@
# Bench our low-level rbyd data-structure
# set block_size to the full size of disk so we can test arbitrarily
# large rbyd trees, we don't really care about block sizes at this
# abstraction level
defines.BLOCK_SIZE = 'DISK_SIZE'
[cases.bench_rbyd_attr_commit]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
# 0 = 1 commit
# 1 = N commits
defines.COMMIT = [0, 1]
defines.N = [8, 16, 32, 64, 128, 256]
in = 'lfs.c'
if = 'COMMIT == 0 || PROG_SIZE*N <= BLOCK_SIZE'
code = '''
lfs_t lfs;
lfs_init(&lfs, CFG) => 0;
lfsr_rbyd_t rbyd = {
.block = 0,
.off = 0,
.crc = 0,
.trunk = 0,
.weight = 0,
};
lfs_bd_erase(&lfs, rbyd.block) => 0;
// build the attribute list for the current permutations
//
// NOTE we only have 256 user attributes, so this benchmark is
// a bit limited
uint32_t prng = 42;
BENCH_START();
if (COMMIT == 0) {
struct lfsr_attr attrs[N];
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
attrs[i] = LFSR_ATTR(-1, UATTR(i_ & 0xff), 0,
"\xaa\xaa\xaa\xaa", 4);
}
lfsr_rbyd_commit(&lfs, &rbyd, attrs, N) => 0;
} else {
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(-1, UATTR(i_ & 0xff), 0,
"\xaa\xaa\xaa\xaa", 4))) => 0;
}
}
BENCH_STOP();
lfsr_rbyd_fetch(&lfs, &rbyd, rbyd.block, CFG->block_size) => 0;
'''
[cases.bench_rbyd_attr_fetch]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
# 0 = 1 commit
# 1 = N commits
defines.COMMIT = [0, 1]
defines.N = [8, 16, 32, 64, 128, 256]
in = 'lfs.c'
if = 'COMMIT == 0 || PROG_SIZE*N <= BLOCK_SIZE'
code = '''
lfs_t lfs;
lfs_init(&lfs, CFG) => 0;
lfsr_rbyd_t rbyd = {
.block = 0,
.off = 0,
.crc = 0,
.trunk = 0,
.weight = 0,
};
lfs_bd_erase(&lfs, rbyd.block) => 0;
// build the attribute list for the current permutations
//
// NOTE we only have 256 user attributes, so this benchmark is
// a bit limited
uint32_t prng = 42;
if (COMMIT == 0) {
struct lfsr_attr attrs[N];
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
attrs[i] = LFSR_ATTR(-1, UATTR(i_ & 0xff), 0,
"\xaa\xaa\xaa\xaa", 4);
}
lfsr_rbyd_commit(&lfs, &rbyd, attrs, N) => 0;
} else {
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(-1, UATTR(i_ & 0xff), 0,
"\xaa\xaa\xaa\xaa", 4))) => 0;
}
}
BENCH_START();
lfsr_rbyd_fetch(&lfs, &rbyd, rbyd.block, CFG->block_size) => 0;
BENCH_STOP();
'''
[cases.bench_rbyd_attr_lookup]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
# 0 = 1 commit
# 1 = N commits
defines.COMMIT = [0, 1]
defines.N = [8, 16, 32, 64, 128, 256]
in = 'lfs.c'
if = 'COMMIT == 0 || PROG_SIZE*N <= BLOCK_SIZE'
code = '''
lfs_t lfs;
lfs_init(&lfs, CFG) => 0;
lfsr_rbyd_t rbyd = {
.block = 0,
.off = 0,
.crc = 0,
.trunk = 0,
.weight = 0,
};
lfs_bd_erase(&lfs, rbyd.block) => 0;
// build the attribute list for the current permutations
//
// NOTE we only have 256 user attributes, so this benchmark is
// a bit limited
uint32_t prng = 42;
if (COMMIT == 0) {
struct lfsr_attr attrs[N];
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
attrs[i] = LFSR_ATTR(-1, UATTR(i_ & 0xff), 0,
"\xaa\xaa\xaa\xaa", 4);
}
lfsr_rbyd_commit(&lfs, &rbyd, attrs, N) => 0;
} else {
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(-1, UATTR(i_ & 0xff), 0,
"\xaa\xaa\xaa\xaa", 4))) => 0;
}
}
lfsr_rbyd_fetch(&lfs, &rbyd, rbyd.block, CFG->block_size) => 0;
BENCH_START();
lfs_off_t i_ = BENCH_PRNG(&prng) % N;
lfsr_data_t data_;
int err = lfsr_rbyd_lookup(&lfs, &rbyd, -1, LFSR_TAG_UATTR(i_ & 0xff),
NULL, &data_);
// note that random order may have some collisions
assert(!err || err == LFS_ERR_NOENT);
BENCH_STOP();
'''
[cases.bench_rbyd_attr_append]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
# 0 = 1 commit
# 1 = N commits
defines.COMMIT = [0, 1]
defines.N = [8, 16, 32, 64, 128, 256]
in = 'lfs.c'
if = 'COMMIT == 0 || PROG_SIZE*(N+1) <= BLOCK_SIZE'
code = '''
lfs_t lfs;
lfs_init(&lfs, CFG) => 0;
lfsr_rbyd_t rbyd = {
.block = 0,
.off = 0,
.crc = 0,
.trunk = 0,
.weight = 0,
};
lfs_bd_erase(&lfs, rbyd.block) => 0;
// build the attribute list for the current permutations
//
// NOTE we only have 256 user attributes, so this benchmark is
// a bit limited
uint32_t prng = 42;
if (COMMIT == 0) {
struct lfsr_attr attrs[N];
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
attrs[i] = LFSR_ATTR(-1, UATTR(i_ & 0xff), 0,
"\xaa\xaa\xaa\xaa", 4);
}
lfsr_rbyd_commit(&lfs, &rbyd, attrs, N) => 0;
} else {
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(-1, UATTR(i_ & 0xff), 0,
"\xaa\xaa\xaa\xaa", 4))) => 0;
}
}
lfsr_rbyd_fetch(&lfs, &rbyd, rbyd.block, CFG->block_size) => 0;
BENCH_START();
lfs_off_t i_ = BENCH_PRNG(&prng) % N;
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(-1, UATTR(i_ & 0xff), 0,
"\xbb\xbb\xbb\xbb", 4))) => 0;
BENCH_STOP();
uint8_t buffer[4];
lfsr_rbyd_get(&lfs, &rbyd, -1, LFSR_TAG_UATTR(i_ & 0xff), buffer, 4) => 4;
assert(memcmp(buffer, "\xbb\xbb\xbb\xbb", 4) == 0);
'''
[cases.bench_rbyd_attr_remove]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
# 0 = 1 commit
# 1 = N commits
defines.COMMIT = [0, 1]
defines.N = [8, 16, 32, 64, 128, 256]
in = 'lfs.c'
if = 'COMMIT == 0 || PROG_SIZE*(N+1) <= BLOCK_SIZE'
code = '''
lfs_t lfs;
lfs_init(&lfs, CFG) => 0;
lfsr_rbyd_t rbyd = {
.block = 0,
.off = 0,
.crc = 0,
.trunk = 0,
.weight = 0,
};
lfs_bd_erase(&lfs, rbyd.block) => 0;
// build the attribute list for the current permutations
//
// NOTE we only have 256 user attributes, so this benchmark is
// a bit limited
uint32_t prng = 42;
if (COMMIT == 0) {
struct lfsr_attr attrs[N];
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
attrs[i] = LFSR_ATTR(-1, UATTR(i_ & 0xff), 0,
"\xaa\xaa\xaa\xaa", 4);
}
lfsr_rbyd_commit(&lfs, &rbyd, attrs, N) => 0;
} else {
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? (N-1-i)
: BENCH_PRNG(&prng) % N;
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(-1, UATTR(i_ & 0xff), 0,
"\xaa\xaa\xaa\xaa", 4))) => 0;
}
}
lfsr_rbyd_fetch(&lfs, &rbyd, rbyd.block, CFG->block_size) => 0;
BENCH_START();
lfs_off_t i_ = BENCH_PRNG(&prng) % N;
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(-1, RMUATTR(i_ & 0xff), 0, NULL, 0))) => 0;
BENCH_STOP();
uint8_t buffer[4];
lfsr_rbyd_get(&lfs, &rbyd, -1, LFSR_TAG_UATTR(i_ & 0xff), buffer, 4)
=> LFS_ERR_NOENT;
'''
[cases.bench_rbyd_id_commit]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
# 0 = 1 commit
# 1 = N commits
defines.COMMIT = [0, 1]
defines.N = [8, 16, 32, 64, 128, 256, 1024, 2048, 4096]
in = 'lfs.c'
if = 'COMMIT == 0 || PROG_SIZE*N <= BLOCK_SIZE'
code = '''
lfs_t lfs;
lfs_init(&lfs, CFG) => 0;
lfsr_rbyd_t rbyd = {
.block = 0,
.off = 0,
.crc = 0,
.trunk = 0,
.weight = 0,
};
lfs_bd_erase(&lfs, rbyd.block) => 0;
// create commits, note we need to take care to generate
// indexes within a valid range as the rbyd grows
uint32_t prng = 42;
BENCH_START();
if (COMMIT == 0) {
struct lfsr_attr attrs[N];
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (rbyd.weight+1);
attrs[i] = LFSR_ATTR(i_, REG, +1, "\xaa\xaa\xaa\xaa", 4);
}
lfsr_rbyd_commit(&lfs, &rbyd, attrs, N) => 0;
} else {
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (rbyd.weight+1);
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(i_, REG, +1, "\xaa\xaa\xaa\xaa", 4))) => 0;
}
}
BENCH_STOP();
lfsr_rbyd_fetch(&lfs, &rbyd, rbyd.block, CFG->block_size) => 0;
'''
[cases.bench_rbyd_id_fetch]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
# 0 = 1 commit
# 1 = N commits
defines.COMMIT = [0, 1]
defines.N = [8, 16, 32, 64, 128, 256, 1024, 2048, 4096]
in = 'lfs.c'
if = 'COMMIT == 0 || PROG_SIZE*N <= BLOCK_SIZE'
code = '''
lfs_t lfs;
lfs_init(&lfs, CFG) => 0;
lfsr_rbyd_t rbyd = {
.block = 0,
.off = 0,
.crc = 0,
.trunk = 0,
.weight = 0,
};
lfs_bd_erase(&lfs, rbyd.block) => 0;
// create commits, note we need to take care to generate
// indexes within a valid range as the rbyd grows
uint32_t prng = 42;
if (COMMIT == 0) {
struct lfsr_attr attrs[N];
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (rbyd.weight+1);
attrs[i] = LFSR_ATTR(i_, REG, +1, "\xaa\xaa\xaa\xaa", 4);
}
lfsr_rbyd_commit(&lfs, &rbyd, attrs, N) => 0;
} else {
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (rbyd.weight+1);
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(i_, REG, +1, "\xaa\xaa\xaa\xaa", 4))) => 0;
}
}
BENCH_START();
lfsr_rbyd_fetch(&lfs, &rbyd, rbyd.block, CFG->block_size) => 0;
BENCH_STOP();
'''
[cases.bench_rbyd_id_lookup]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
# 0 = 1 commit
# 1 = N commits
defines.COMMIT = [0, 1]
defines.N = [8, 16, 32, 64, 128, 256, 1024, 2048, 4096]
in = 'lfs.c'
if = 'COMMIT == 0 || PROG_SIZE*N <= BLOCK_SIZE'
code = '''
lfs_t lfs;
lfs_init(&lfs, CFG) => 0;
lfsr_rbyd_t rbyd = {
.block = 0,
.off = 0,
.crc = 0,
.trunk = 0,
.weight = 0,
};
lfs_bd_erase(&lfs, rbyd.block) => 0;
// create commits, note we need to take care to generate
// indexes within a valid range as the rbyd grows
uint32_t prng = 42;
if (COMMIT == 0) {
struct lfsr_attr attrs[N];
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (rbyd.weight+1);
attrs[i] = LFSR_ATTR(i_, REG, +1, "\xaa\xaa\xaa\xaa", 4);
}
lfsr_rbyd_commit(&lfs, &rbyd, attrs, N) => 0;
} else {
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (rbyd.weight+1);
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(i_, REG, +1, "\xaa\xaa\xaa\xaa", 4))) => 0;
}
}
lfsr_rbyd_fetch(&lfs, &rbyd, rbyd.block, CFG->block_size) => 0;
BENCH_START();
lfs_off_t i_ = BENCH_PRNG(&prng) % N;
lfsr_data_t data_;
lfsr_rbyd_lookup(&lfs, &rbyd, i_, LFSR_TAG_REG,
NULL, &data_) => 0;
BENCH_STOP();
'''
[cases.bench_rbyd_id_create]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
# 0 = 1 commit
# 1 = N commits
defines.COMMIT = [0, 1]
defines.N = [8, 16, 32, 64, 128, 256, 1024, 2048, 4096]
in = 'lfs.c'
if = 'COMMIT == 0 || PROG_SIZE*(N+1) <= BLOCK_SIZE'
code = '''
lfs_t lfs;
lfs_init(&lfs, CFG) => 0;
lfsr_rbyd_t rbyd = {
.block = 0,
.off = 0,
.crc = 0,
.trunk = 0,
.weight = 0,
};
lfs_bd_erase(&lfs, rbyd.block) => 0;
// create commits, note we need to take care to generate
// indexes within a valid range as the rbyd grows
uint32_t prng = 42;
if (COMMIT == 0) {
struct lfsr_attr attrs[N];
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (rbyd.weight+1);
attrs[i] = LFSR_ATTR(i_, REG, +1, "\xaa\xaa\xaa\xaa", 4);
}
lfsr_rbyd_commit(&lfs, &rbyd, attrs, N) => 0;
} else {
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (rbyd.weight+1);
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(i_, REG, +1, "\xaa\xaa\xaa\xaa", 4))) => 0;
}
}
lfsr_rbyd_fetch(&lfs, &rbyd, rbyd.block, CFG->block_size) => 0;
BENCH_START();
lfs_off_t i_ = BENCH_PRNG(&prng) % (N+1);
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(i_, REG, +1, "\xbb\xbb\xbb\xbb", 4))) => 0;
BENCH_STOP();
uint8_t buffer[4];
lfsr_rbyd_get(&lfs, &rbyd, i_, LFSR_TAG_REG, buffer, 4) => 4;
assert(memcmp(buffer, "\xbb\xbb\xbb\xbb", 4) == 0);
'''
[cases.bench_rbyd_id_delete]
# 0 = in-order
# 1 = reversed-order
# 2 = random-order
defines.ORDER = [0, 1, 2]
# 0 = 1 commit
# 1 = N commits
defines.COMMIT = [0, 1]
defines.N = [8, 16, 32, 64, 128, 256, 1024, 2048, 4096]
in = 'lfs.c'
if = 'COMMIT == 0 || PROG_SIZE*(N+1) <= BLOCK_SIZE'
code = '''
lfs_t lfs;
lfs_init(&lfs, CFG) => 0;
lfsr_rbyd_t rbyd = {
.block = 0,
.off = 0,
.crc = 0,
.trunk = 0,
.weight = 0,
};
lfs_bd_erase(&lfs, rbyd.block) => 0;
// create commits, note we need to take care to generate
// indexes within a valid range as the rbyd grows
uint32_t prng = 42;
if (COMMIT == 0) {
struct lfsr_attr attrs[N];
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (rbyd.weight+1);
attrs[i] = LFSR_ATTR(i_, REG, +1, "\xaa\xaa\xaa\xaa", 4);
}
lfsr_rbyd_commit(&lfs, &rbyd, attrs, N) => 0;
} else {
for (lfs_size_t i = 0; i < N; i++) {
lfs_off_t i_
= (ORDER == 0) ? i
: (ORDER == 1) ? 0
: BENCH_PRNG(&prng) % (rbyd.weight+1);
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(i_, REG, +1, "\xaa\xaa\xaa\xaa", 4))) => 0;
}
}
lfsr_rbyd_fetch(&lfs, &rbyd, rbyd.block, CFG->block_size) => 0;
BENCH_START();
lfs_off_t i_ = BENCH_PRNG(&prng) % N;
lfsr_rbyd_commit(&lfs, &rbyd, LFSR_ATTRS(
LFSR_ATTR(i_, UNR, -1, NULL, 0))) => 0;
BENCH_STOP();
'''


@@ -1,56 +0,0 @@
#[cases.bench_superblocks_found]
## support benchmarking with files
#defines.N = [0, 1024]
#defines.FILE_SIZE = 8
#defines.CHUNK_SIZE = 8
#code = '''
# lfs_t lfs;
# lfs_format(&lfs, cfg) => 0;
#
# // create files?
# lfs_mount(&lfs, cfg) => 0;
# char name[256];
# uint8_t buffer[CHUNK_SIZE];
# for (lfs_size_t i = 0; i < N; i++) {
# sprintf(name, "file%08x", i);
# lfs_file_t file;
# lfs_file_open(&lfs, &file, name,
# LFS_O_WRONLY | LFS_O_CREAT | LFS_O_EXCL) => 0;
#
# for (lfs_size_t j = 0; j < FILE_SIZE; j += CHUNK_SIZE) {
# for (lfs_size_t k = 0; k < CHUNK_SIZE; k++) {
# buffer[k] = i+j+k;
# }
# lfs_file_write(&lfs, &file, buffer, CHUNK_SIZE) => CHUNK_SIZE;
# }
#
# lfs_file_close(&lfs, &file) => 0;
# }
# lfs_unmount(&lfs) => 0;
#
# BENCH_START();
# lfs_mount(&lfs, cfg) => 0;
# BENCH_STOP();
#
# lfs_unmount(&lfs) => 0;
#'''
#
#[cases.bench_superblocks_missing]
#code = '''
# lfs_t lfs;
#
# BENCH_START();
# int err = lfs_mount(&lfs, cfg);
# assert(err != 0);
# BENCH_STOP();
#'''
#
#[cases.bench_superblocks_format]
#code = '''
# lfs_t lfs;
#
# BENCH_START();
# lfs_format(&lfs, cfg) => 0;
# BENCH_STOP();
#'''
#

lfs.c

@@ -677,10 +677,12 @@ enum lfsr_tag_type {
#define LFSR_TAG_UATTR(attr) \
(LFSR_TAG_UATTR \
| ((0x80 & (lfsr_tag_t)(attr)) << 1) \
| (0x7f & (lfsr_tag_t)(attr)))
#define LFSR_TAG_SATTR(attr) \
(LFSR_TAG_SATTR \
| ((0x80 & (lfsr_tag_t)(attr)) << 1) \
| (0x7f & (lfsr_tag_t)(attr)))
// tag type operations
@@ -8048,6 +8050,43 @@ static int lfs_alloc(lfs_t *lfs, lfs_block_t *block) {
}
/// Other filesystem traversal things ///
lfs_ssize_t lfsr_fs_size(lfs_t *lfs) {
lfs_size_t count = 0;
lfsr_traversal_t traversal = LFSR_TRAVERSAL(LFSR_TRAVERSAL_ALL);
while (true) {
lfsr_tinfo_t tinfo;
int err = lfsr_traversal_read(lfs, &traversal, &tinfo);
if (err) {
if (err == LFS_ERR_NOENT) {
break;
}
return err;
}
// TODO add block pointers here?
// count the number of blocks we see, yes this may result in duplicates
if (tinfo.tag == LFSR_TAG_MDIR) {
count += 2;
} else if (tinfo.tag == LFSR_TAG_BRANCH) {
count += 1;
} else if (tinfo.tag == LFSR_TAG_BLOCK) {
count += 1;
} else {
LFS_UNREACHABLE();
}
}
return count;
}
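The weighting in lfsr_fs_size above (an mdir counts as a block pair, branches and block pointers as single blocks, duplicates allowed) can be sketched in a few lines of Python; `fs_size` and `tinfos` here are illustrative names, not part of littlefs:

```python
def fs_size(tinfos):
    # sketch of lfsr_fs_size's counting: mdirs are block pairs,
    # branches and block pointers are single blocks; like the real
    # traversal this may count the same block more than once
    weights = {'mdir': 2, 'branch': 1, 'block': 1}
    return sum(weights[t] for t in tinfos)
```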
/// Prepare the filesystem for mutation ///
static int lfsr_fs_fixgrm(lfs_t *lfs) {

lfs.h

@@ -948,6 +948,7 @@ int lfsr_dir_rewind(lfs_t *lfs, lfsr_dir_t *dir);
//
// Returns the number of allocated blocks, or a negative error code on failure.
lfs_ssize_t lfs_fs_size(lfs_t *lfs);
lfs_ssize_t lfsr_fs_size(lfs_t *lfs);
// Traverse through all blocks in use by the filesystem
//


@@ -637,24 +637,27 @@ void bench_permutation(size_t i, uint32_t *buffer, size_t size) {
// bench recording state
typedef struct bench_record {
const char *meas;
uintmax_t iter;
uintmax_t size;
lfs_emubd_io_t last_readed;
lfs_emubd_io_t last_proged;
lfs_emubd_io_t last_erased;
} bench_record_t;
static struct lfs_config *bench_cfg = NULL;
static lfs_emubd_io_t bench_last_readed = 0;
static lfs_emubd_io_t bench_last_proged = 0;
static lfs_emubd_io_t bench_last_erased = 0;
lfs_emubd_io_t bench_readed = 0;
lfs_emubd_io_t bench_proged = 0;
lfs_emubd_io_t bench_erased = 0;
static bench_record_t *bench_records;
size_t bench_record_count;
size_t bench_record_capacity;
void bench_reset(void) {
bench_readed = 0;
bench_proged = 0;
bench_erased = 0;
bench_last_readed = 0;
bench_last_proged = 0;
bench_last_erased = 0;
void bench_reset(struct lfs_config *cfg) {
bench_cfg = cfg;
bench_record_count = 0;
}
void bench_start(void) {
void bench_start(const char *meas, uintmax_t iter, uintmax_t size) {
// measure current read/prog/erase
assert(bench_cfg);
lfs_emubd_sio_t readed = lfs_emubd_readed(bench_cfg);
assert(readed >= 0);
@@ -663,12 +666,22 @@ void bench_start(void) {
lfs_emubd_sio_t erased = lfs_emubd_erased(bench_cfg);
assert(erased >= 0);
bench_last_readed = readed;
bench_last_proged = proged;
bench_last_erased = erased;
// allocate a new record
bench_record_t *record = mappend(
(void**)&bench_records,
sizeof(bench_record_t),
&bench_record_count,
&bench_record_capacity);
record->meas = meas;
record->iter = iter;
record->size = size;
record->last_readed = readed;
record->last_proged = proged;
record->last_erased = erased;
}
void bench_stop(void) {
void bench_stop(const char *meas) {
// measure current read/prog/erase
assert(bench_cfg);
lfs_emubd_sio_t readed = lfs_emubd_readed(bench_cfg);
assert(readed >= 0);
@@ -677,9 +690,52 @@ void bench_stop(void) {
lfs_emubd_sio_t erased = lfs_emubd_erased(bench_cfg);
assert(erased >= 0);
bench_readed += readed - bench_last_readed;
bench_proged += proged - bench_last_proged;
bench_erased += erased - bench_last_erased;
// find our record
for (size_t i = 0; i < bench_record_count; i++) {
if (strcmp(bench_records[i].meas, meas) == 0) {
// print results
printf("benched %s %ju %ju %"PRIu64" %"PRIu64" %"PRIu64"\n",
bench_records[i].meas,
bench_records[i].iter,
bench_records[i].size,
readed - bench_records[i].last_readed,
proged - bench_records[i].last_proged,
erased - bench_records[i].last_erased);
// remove our record
memmove(&bench_records[i],
&bench_records[i+1],
(bench_record_count-(i+1))*sizeof(bench_record_t));
bench_record_count -= 1;
return;
}
}
// not found?
fprintf(stderr, "error: bench stopped before it was started (%s)\n",
meas);
assert(false);
exit(-1);
}
void bench_result(const char *meas, uintmax_t iter, uintmax_t size,
uintmax_t result) {
// we just print these directly
printf("benched %s %ju %ju %ju\n",
meas,
iter,
size,
result);
}
void bench_fresult(const char *meas, uintmax_t iter, uintmax_t size,
double result) {
// we just print these directly
printf("benched %s %ju %ju %.6f\n",
meas,
iter,
size,
result);
}
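The record bookkeeping above can be emulated in a few lines of Python; this is a hedged sketch with the emubd io counters stubbed as a plain dict (`counters` and `records` are illustrative names, not part of the runner), showing how each named measurement snapshots the counters at start and prints the delta at stop:

```python
# sketch of the runner's named-record bookkeeping, counters stubbed
records = {}
counters = {'readed': 0, 'proged': 0, 'erased': 0}

def bench_start(meas, iter_, size):
    # snapshot the current counters under the measurement name
    records[meas] = (iter_, size, dict(counters))

def bench_stop(meas):
    # emit the delta since the matching bench_start
    iter_, size, last = records.pop(meas)
    return 'benched %s %d %d %d %d %d' % (
        meas, iter_, size,
        counters['readed'] - last['readed'],
        counters['proged'] - last['proged'],
        counters['erased'] - last['erased'])

bench_start('fetch', 0, 4)
counters['readed'] += 512  # pretend some io happened
print(bench_stop('fetch'))
```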
@@ -1404,8 +1460,7 @@ void perm_run(
}
// run the bench
bench_cfg = &cfg;
bench_reset();
bench_reset(&cfg);
printf("running ");
perm_printid(suite, case_);
printf("\n");
@@ -1414,10 +1469,6 @@ void perm_run(
printf("finished ");
perm_printid(suite, case_);
printf(" %"PRIu64" %"PRIu64" %"PRIu64,
bench_readed,
bench_proged,
bench_erased);
printf("\n");
// cleanup


@@ -19,12 +19,26 @@ void bench_trace(const char *fmt, ...);
#define LFS_TRACE(...) LFS_TRACE_(__VA_ARGS__, "")
#define LFS_EMUBD_TRACE(...) LFS_TRACE_(__VA_ARGS__, "")
// provide BENCH_START/BENCH_STOP macros
void bench_start(void);
void bench_stop(void);
// BENCH_START/BENCH_STOP macros measure readed/proged/erased bytes
// through emubd
void bench_start(const char *meas, uintmax_t iter, uintmax_t size);
void bench_stop(const char *meas);
#define BENCH_START() bench_start()
#define BENCH_STOP() bench_stop()
#define BENCH_START(meas, iter, size) \
bench_start(meas, iter, size)
#define BENCH_STOP(meas) \
bench_stop(meas)
// BENCH_RESULT/BENCH_FRESULT allow for explicit non-io measurements
void bench_result(const char *meas, uintmax_t iter, uintmax_t size,
uintmax_t result);
void bench_fresult(const char *meas, uintmax_t iter, uintmax_t size,
double result);
#define BENCH_RESULT(meas, iter, size, result) \
bench_result(meas, iter, size, result)
#define BENCH_FRESULT(meas, iter, size, result) \
bench_fresult(meas, iter, size, result)
// note these are indirectly included in any generated files


@@ -944,6 +944,54 @@ class BenchOutput:
for row in self.rows:
self.writer.writerow(row)
def avg(self):
# compute min/max/avg
ops = ['bench_readed', 'bench_proged', 'bench_erased']
results = co.defaultdict(lambda: {
'sums': {op: 0 for op in ops},
'mins': {op: +m.inf for op in ops},
'maxs': {op: -m.inf for op in ops},
'count': 0})
for row in self.rows:
# we only care about results with a BENCH_SEED entry
if 'BENCH_SEED' not in row:
continue
# figure out a key for each row, this is everything but the bench
# results/seed reencoded as a big tuple-tuple for hashability
key = (row['bench_meas'], tuple(sorted(
(k, v) for k, v in row.items()
if k != 'BENCH_SEED'
and k != 'bench_meas'
and k != 'bench_agg'
and k not in ops)))
# find sum/min/max/etc
result = results[key]
for op in ops:
result['sums'][op] += row[op]
result['mins'][op] = min(result['mins'][op], row[op])
result['maxs'][op] = max(result['maxs'][op], row[op])
result['count'] += 1
# append results to output
for (meas, key), result in results.items():
self.writerow({
'bench_meas': meas+'+avg',
'bench_agg': 'avg',
**{k: v for k, v in key},
**{op: result['sums'][op] / result['count'] for op in ops}})
self.writerow({
'bench_meas': meas+'+min',
'bench_agg': 'bnd',
**{k: v for k, v in key},
**{op: result['mins'][op] for op in ops}})
self.writerow({
'bench_meas': meas+'+max',
'bench_agg': 'bnd',
**{k: v for k, v in key},
**{op: result['maxs'][op] for op in ops}})
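The grouping in avg() above can be shown stand-alone; `aggregate` is a hypothetical helper, not a bench.py function, that reduces rows sharing everything but BENCH_SEED down to avg/min/max per op:

```python
import collections as co

def aggregate(rows, ops=('bench_readed', 'bench_proged', 'bench_erased')):
    # group rows by everything except the seed and the measured ops,
    # then reduce each group to avg/min/max (a sketch of BenchOutput.avg)
    groups = co.defaultdict(list)
    for row in rows:
        key = tuple(sorted((k, v) for k, v in row.items()
                if k not in ('BENCH_SEED',) + tuple(ops)))
        groups[key].append(row)
    out = []
    for key, group in groups.items():
        out.append({**dict(key),
            **{op+'+avg': sum(r[op] for r in group) / len(group)
                for op in ops},
            **{op+'+min': min(r[op] for r in group) for op in ops},
            **{op+'+max': max(r[op] for r in group) for op in ops}})
    return out
```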
# A bench failure
class BenchFailure(Exception):
def __init__(self, id, returncode, stdout, assert_=None):
@@ -952,6 +1000,35 @@ class BenchFailure(Exception):
self.stdout = stdout
self.assert_ = assert_
# compute extra result stuff, this includes amortized and per-byte results
def bench_results(results):
ops = ['readed', 'proged', 'erased']
# first compute amortized results
amors = {}
for meas in set(meas for meas, _ in results.keys()):
# keep a running sum
sums = {op: 0 for op in ops}
size = 0
for i, (iter, result) in enumerate(sorted(
(iter, result) for (meas_, iter), result in results.items()
if meas_ == meas)):
for op in ops:
sums[op] += result.get(op, 0)
size += result.get('size', 1)
# find amortized results
amors[meas+'+amor', iter] = {
'size': result.get('size', 1),
**{op: sums[op] / (i+1) for op in ops}}
# also find per-byte results
amors[meas+'+div', iter] = {
'size': result.get('size', 1),
**{op: result.get(op, 0) / size for op in ops}}
return results | amors
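The amortized/per-byte math in bench_results can be shown on concrete numbers; this sketch handles a single measurement keyed by iteration (the real function does this per bench_meas), with `amortize` as an illustrative name:

```python
def amortize(results, ops=('readed', 'proged', 'erased')):
    # sketch of the +amor/+div derivation: +amor is a running average
    # over iterations, +div divides by the cumulative size so far
    amors = {}
    sums = {op: 0 for op in ops}
    size = 0
    for i, (iter_, result) in enumerate(sorted(results.items())):
        for op in ops:
            sums[op] += result.get(op, 0)
        size += result.get('size', 1)
        amors['+amor', iter_] = {op: sums[op] / (i + 1) for op in ops}
        amors['+div', iter_] = {op: result.get(op, 0) / size for op in ops}
    return amors
```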
def run_stage(name, runner, bench_ids, stdout_, trace_, output_, **args):
# get expected suite/case/perm counts
(case_suites,
@@ -970,13 +1047,17 @@ def run_stage(name, runner, bench_ids, stdout_, trace_, output_, **args):
killed = False
pattern = re.compile('^(?:'
'(?P<op>running|finished|skipped|powerloss)'
'(?P<op>running|finished|skipped)'
' (?P<id>(?P<case>[^:]+)[^\s]*)'
'(?: (?P<readed>\d+))?'
'(?: (?P<proged>\d+))?'
'(?: (?P<erased>\d+))?'
'|' '(?P<path>[^:]+):(?P<lineno>\d+):(?P<op_>assert):'
' *(?P<message>.*)'
'|' '(?P<op__>benched)'
' (?P<meas>[^\s]+)'
' (?P<iter>\d+)'
' (?P<size>\d+)'
'(?: (?P<readed>[\d\.]+))?'
'(?: (?P<proged>[\d\.]+))?'
'(?: (?P<erased>[\d\.]+))?'
')$')
locals = th.local()
children = set()
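The "benched" arm of the pattern above can be exercised stand-alone; this is a minimal sketch (`parse_benched` is illustrative, not a bench.py function) showing how io counts stay ints while BENCH_FRESULT-style values parse as floats:

```python
import re

# minimal sketch of the "benched" half of run_stage's line pattern
BENCHED = re.compile(
    r'^benched (?P<meas>\S+) (?P<iter>\d+) (?P<size>\d+)'
    r'(?: (?P<readed>[\d.]+))?'
    r'(?: (?P<proged>[\d.]+))?'
    r'(?: (?P<erased>[\d.]+))?$')

def parse_benched(line):
    m = BENCHED.match(line)
    if not m:
        return None
    # missing ops default to 0, '.' marks a float result
    num = lambda s: 0 if not s else float(s) if '.' in s else int(s)
    return {'meas': m.group('meas'),
            'iter': int(m.group('iter')),
            'size': int(m.group('size')),
            **{op: num(m.group(op)) for op in ('readed', 'proged', 'erased')}}
```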
@@ -1004,6 +1085,8 @@ def run_stage(name, runner, bench_ids, stdout_, trace_, output_, **args):
last_id = None
last_stdout = co.deque(maxlen=args.get('context', 5) + 1)
last_assert = None
if output_:
last_results = {}
try:
while True:
# parse a line for state changes
@@ -1025,35 +1108,39 @@ def run_stage(name, runner, bench_ids, stdout_, trace_, output_, **args):
m = pattern.match(line)
if m:
op = m.group('op') or m.group('op_')
op = m.group('op') or m.group('op_') or m.group('op__')
if op == 'running':
locals.seen_perms += 1
last_id = m.group('id')
last_stdout.clear()
last_assert = None
if output_:
last_results = {}
elif op == 'finished':
case = m.group('case')
suite = case_suites[case]
readed_ = int(m.group('readed'))
proged_ = int(m.group('proged'))
erased_ = int(m.group('erased'))
passed_suite_perms[suite] += 1
passed_case_perms[case] += 1
passed_perms += 1
readed += readed_
proged += proged_
erased += erased_
if output_:
# get defines and write to csv
defines = find_defines(
runner, m.group('id'), **args)
output_.writerow({
'suite': suite,
'case': case,
'bench_readed': readed_,
'bench_proged': proged_,
'bench_erased': erased_,
**defines})
# compute extra measurements here
last_results = bench_results(last_results)
for (meas, iter), result in (
last_results.items()):
output_.writerow({
'suite': suite,
'case': case,
**defines,
'bench_meas': meas,
'bench_agg': 'raw',
'bench_iter': iter,
'bench_size': result['size'],
'bench_readed': result['readed'],
'bench_proged': result['proged'],
'bench_erased': result['erased']})
elif op == 'skipped':
locals.seen_perms += 1
elif op == 'assert':
@@ -1064,6 +1151,32 @@ def run_stage(name, runner, bench_ids, stdout_, trace_, output_, **args):
# go ahead and kill the process, aborting takes a while
if args.get('keep_going'):
proc.kill()
elif op == 'benched':
meas = m.group('meas')
iter = int(m.group('iter'))
size = int(m.group('size'))
result = {'size': size}
for op in ['readed', 'proged', 'erased']:
if m.group(op) is None:
result[op] = 0
elif '.' in m.group(op):
result[op] = float(m.group(op))
else:
result[op] = int(m.group(op))
# keep track of per-perm results
if output_:
# if we've already seen this measurement, sum
result_ = last_results.get((meas, iter))
if result_ is not None:
result['readed'] += result_['readed']
result['proged'] += result_['proged']
result['erased'] += result_['erased']
result['size'] += result_['size']
last_results[meas, iter] = result
# keep track of total for summary
readed += result['readed']
proged += result['proged']
erased += result['erased']
except KeyboardInterrupt:
raise BenchFailure(last_id, 1, list(last_stdout))
finally:
@@ -1102,17 +1215,6 @@ def run_stage(name, runner, bench_ids, stdout_, trace_, output_, **args):
start += locals.seen_perms*step
except BenchFailure as failure:
# keep track of failures
if output_:
case, _ = failure.id.split(':', 1)
suite = case_suites[case]
# get defines and write to csv
defines = find_defines(runner, failure.id, **args)
output_.writerow({
'suite': suite,
'case': case,
**defines})
# race condition for multiple failures?
if failures and not args.get('keep_going'):
break
@@ -1236,7 +1338,8 @@ def run(runner, bench_ids=[], **args):
if args.get('output'):
output = BenchOutput(args['output'],
['suite', 'case'],
['bench_readed', 'bench_proged', 'bench_erased'])
['bench_meas', 'bench_iter', 'bench_size',
'bench_readed', 'bench_proged', 'bench_erased'])
# measure runtime
start = time.time()
@@ -1287,6 +1390,8 @@ def run(runner, bench_ids=[], **args):
except BrokenPipeError:
pass
if output:
# compute averages?
output.avg()
output.close()
# show summary