Jumping from a simple Python implementation to the fully
hardware-accelerated crc32c library effectively eliminates any
crc32c-related bottlenecks:
crc32c.py disk (1MiB) w/ crc32c lib: 0m0.027s
crc32c.py disk (1MiB) w/o crc32c lib: 0m0.844s
This uses the same try-import trick we use for inotify_simple, so we get
the speed improvement without losing portability.
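For reference, the pattern looks something like this (a sketch; the
crc32c package's crc32c(data, crc) entry point is an assumption, and
this isn't necessarily the exact code in the scripts):

try:
    # prefer the hardware-accelerated crc32c package when it's installed
    from crc32c import crc32c
except ImportError:
    # otherwise fall back to a slow but portable pure-Python version
    def crc32c(data, crc=0):
        crc ^= 0xffffffff
        for b in data:
            crc ^= b
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82f63b78 if crc & 1 else 0)
        return crc ^ 0xffffffff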
---
In dbgbmap.py:
dbgbmap.py w/ crc32c lib: 0m0.273s
dbgbmap.py w/o crc32c lib: 0m0.697s
dbgbmap.py w/ crc32c lib --no-ckdata: 0m0.269s
dbgbmap.py w/o crc32c lib --no-ckdata: 0m0.490s
dbgbmap.old.py: 0m0.231s
The bulk of the runtime is still in Rbyd.fetch, but this is now
dominated by leb128 decoding, which makes sense. We do ~twice as many
fetches in the new dbgbmap.py in order to calculate the gcksum (which
we then ignore...).
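For context, unsigned leb128 decoding is the kind of per-byte loop
Rbyd.fetch ends up spending its time in (a minimal sketch, names
illustrative rather than the script's actual helper):

def fromleb128(data, off=0):
    # each byte contributes 7 bits, the top bit marks continuation
    word = 0
    for i, b in enumerate(data[off:]):
        word |= (b & 0x7f) << (7*i)
        if not b & 0x80:
            return word, off+i+1
    raise ValueError("truncated leb128")

print(fromleb128(bytes([0xe5, 0x8e, 0x26])))  # (624485, 3)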
This drops the option to read tags from a disk file. I don't think I've
ever used this, and it requires quite a bit of circuitry to implement.
Also dropped -s/--string, because most tags can't be represented as
strings?
And tweaked the -x/--hex flags to correctly parse spaces in their
arguments, so now these are equivalent, as sketched below:
- ./scripts/dbgtag.py -x 00 03 00 08
- ./scripts/dbgtag.py -x "00 03 00 08"
This only failed if "-" was used as an argument (for stdin/stdout), so
the issue was pretty hard to spot.
openio is a heavily copy-pasted function, so it makes sense to just add
the import os to openio directly. Otherwise this mistake will likely
happen again in the future.
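A sketch of the result, assuming the usual shape of openio in these
scripts (dup stdin/stdout when "-" is given):

import sys

def openio(path, mode='r', buffering=-1):
    # keep the os import local: openio gets copy-pasted between scripts,
    # so this way it never depends on a module-level import os
    import os
    # allow '-' for stdin/stdout, the only case that actually needs os
    if path == '-':
        if 'r' in mode:
            return os.fdopen(os.dup(sys.stdin.fileno()), mode, buffering)
        else:
            return os.fdopen(os.dup(sys.stdout.fileno()), mode, buffering)
    else:
        return open(path, mode, buffering)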
Moved local import hack behind if __name__ == "__main__"
These scripts aren't really intended to be used as python libraries.
Still, it's useful to import them for debugging and to get access to
their juicy internals.
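The resulting layout looks roughly like this (the argument names are
illustrative):

def main(paths, **args):
    # the actual script logic, still importable for debugging
    ...

if __name__ == "__main__":
    # script-only setup lives under the guard, so importing this module
    # for debugging doesn't touch sys.path or parse any arguments
    #
    # prevent local imports
    __import__('sys').path.pop(0)

    import argparse
    import sys
    parser = argparse.ArgumentParser()
    parser.add_argument('paths', nargs='*')
    sys.exit(main(**vars(parser.parse_args())))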
csv.py seems like a more fitting name now that this script has evolved
into more of a general-purpose, high-level CSV tool.
Unfortunately this does conflict with the standard csv module in Python,
breaking every script that imports csv (which is most of them).
Fortunately, Python is flexible enough to let us drop the script's own
directory (sys.path[0]) from the import path before any imports, with a
bit of an ugly hack:
# prevent local imports
__import__('sys').path.pop(0)
These scripts are intended to be standalone anyways, so this is probably
a good pattern to adopt.
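A small illustration of the effect (paths hypothetical):

import sys

# when one of these scripts runs, sys.path[0] is its own scripts/
# directory, so a bare "import csv" would find scripts/csv.py
print(sys.path[0])

# after popping it, the import falls through to the standard library
sys.path.pop(0)
import csv
print(csv.__file__)  # now the stdlib csv module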
This matches the style used in C, which is good for consistency:
a_really_long_function_name(
        double_indent_after_first_newline(
            single_indent_nested_newlines))
We were already doing this for multiline control-flow statements, simply
because I'm not sure how else you could indent this without making
things really confusing:
if a_really_long_function_name(
        double_indent_after_first_newline(
            single_indent_nested_newlines)):
    do_the_thing()
This was the only real difference style-wise between the Python code and
the C code, so now both should follow roughly the same style (80 cols,
double-indent multiline exprs, prefix multiline binary ops, etc.).
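For completeness, "prefix multiline binary ops" just means the operator
leads the continuation line, with the same double indent (names
illustrative, as above):

result = (a_really_long_variable_name
        + another_really_long_variable_name
        - yet_another_really_long_variable_name)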
The naive implementation is simpler, less code, and more likely to be
correct, and each of these is more valuable than speed in our debug
scripts.
We're in Python anyways (no offense Python!).
Plus I think it's good to show that the underlying logic of CRCs isn't
really that complex, at least until we throw optimizations into the mix.
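To that point, the naive bit-at-a-time version is only a dozen lines (a
sketch assuming the standard CRC-32C init/xor-out conventions, not
necessarily crc32c.py's exact code):

CRC32C_POLY = 0x82f63b78  # the Castagnoli polynomial, reflected

def crc32c(data, crc=0):
    # xor in each byte, then do eight conditional shift-and-xors
    crc ^= 0xffffffff
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ (CRC32C_POLY if crc & 1 else 0)
    return crc ^ 0xffffffff

# the standard CRC-32C check value
assert crc32c(b'123456789') == 0xe3069283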
So now the following forms are supported:
$ ./scripts/crc32c.py -x 41 42 43 44
fb9f8872
$ ./scripts/crc32c.py -s abcd
fb9f8872
$ echo '00: 41 42 43 44' | xxd -r | ./scripts/crc32c.py
fb9f8872
Hopefully this will make crc32c.py more useful. It hasn't seen very much
use, though that may just be because of the difficulty marshalling data
into a format crc32c.py can operate on.
That, and dbgblock.py's -x/--cksum flag already covers one of the main
use cases.