Moved post-bench amor/avg analysis out into amor.py and avg.py
1. Being able to inspect results before benchmarks complete is useful for
   tracking their status. It also allows some analysis even if a benchmark
   fails.

2. Moving these scripts out of bench.py lets them be a bit more flexible,
   at the cost of CSV parsing/structuring overhead.

3. Writing benchmark measurements immediately avoids the RAM buildup of
   storing intermediate measurements for every bench permutation (sketched
   below). This may increase the IO bottleneck, but we end up writing the
   same number of lines either way, so it's not clear it matters.

I realize avg.py has quite a bit of overlap with summary.py, but I don't
want to entangle them further. summary.py is already trying to do too much
as is...
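As a rough illustration of point 3 (hypothetical names, not bench.py's
actual code), the streaming idea is simply to write each measurement out as
soon as it is produced instead of buffering everything until every bench
permutation finishes:

import csv

# Hypothetical sketch of writing measurements immediately: each result is
# flushed to the CSV per permutation, so partial results can be inspected
# while benchmarks are still running and nothing accumulates in RAM.
def run_bench(perms, measure, path='bench.csv'):
    with open(path, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=['perm', 'result'])
        writer.writeheader()
        for perm in perms:
            writer.writerow({'perm': perm, 'result': measure(perm)})
            f.flush()

# e.g. run_bench(range(3), lambda p: p*p)

The tradeoff is one small write (and flush) per permutation instead of one
large write at the end, which is the possible IO bottleneck mentioned above.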
@@ -30,10 +30,10 @@ OPS = {
     'prod': lambda xs: m.prod(xs[1:], start=xs[0]),
     'min': min,
     'max': max,
-    'mean': lambda xs: Float(sum(float(x) for x in xs) / len(xs)),
+    'avg': lambda xs: Float(sum(float(x) for x in xs) / len(xs)),
     'stddev': lambda xs: (
-        lambda mean: Float(
-            m.sqrt(sum((float(x) - mean)**2 for x in xs) / len(xs)))
+        lambda avg: Float(
+            m.sqrt(sum((float(x) - avg)**2 for x in xs) / len(xs)))
         )(sum(float(x) for x in xs) / len(xs)),
     'gmean': lambda xs: Float(m.prod(float(x) for x in xs)**(1/len(xs))),
     'gstddev': lambda xs: (
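The rename is purely cosmetic; 'avg' and 'stddev' still compute the
arithmetic mean and population standard deviation. For reference, a
standalone sketch of the same arithmetic, with plain floats standing in for
the script's Float wrapper (an assumption here):

import math as m

# Same formulas as the diff above, using plain floats instead of Float.
OPS = {
    'avg': lambda xs: sum(float(x) for x in xs) / len(xs),
    'stddev': lambda xs: (
        lambda avg: m.sqrt(sum((float(x) - avg)**2 for x in xs) / len(xs))
        )(sum(float(x) for x in xs) / len(xs)),
}

print(OPS['avg']([1, 2, 3, 4]))     # 2.5
print(OPS['stddev']([1, 2, 3, 4]))  # ~1.118 (population stddev)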
@@ -817,7 +817,7 @@ if __name__ == "__main__":
         action='append',
         help="Take the maximum of these fields.")
     parser.add_argument(
-        '--mean',
+        '--avg', '--mean',
         action='append',
         help="Average these fields.")
     parser.add_argument(
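The argparse change makes '--avg' the primary flag while keeping '--mean'
working as an alias. A minimal standalone sketch of how such an aliased,
append-style option behaves (not the script's full parser; the field names
are made up):

import argparse

# '--avg' and '--mean' are option strings for the same argument; with
# action='append', each occurrence appends to args.avg (argparse derives
# the dest from the first long option string).
parser = argparse.ArgumentParser()
parser.add_argument(
    '--avg', '--mean',
    action='append',
    help="Average these fields.")

args = parser.parse_args(['--avg', 'bench_readed', '--mean', 'bench_proged'])
print(args.avg)  # ['bench_readed', 'bench_proged']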