forked from Imagelibrary/littlefs
scripts: Fixed O(n^2) slicing in Rbyd.fetch
Do you see the O(n^2) behavior in this loop?
    j = 0
    while j < len(data):
        word, d = fromleb(data[j:])
        j += d
The slice, data[j:], creates an O(n) copy on every iteration of the loop.
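A quick way to see the copy (a standalone demo, not from the scripts): slicing a bytes object always allocates a new bytes object containing the remaining bytes, so the slice alone costs time proportional to what's left.

```python
data = bytes(range(8))

# Slicing bytes produces a brand-new bytes object...
s = data[2:]
assert s == bytes(range(2, 8))

# ...not a view into the original, so every data[j:] in the loop
# copies all len(data) - j remaining bytes.
assert s is not data
```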
A bit tricky. Or at least I found it tricky to notice. Maybe because
array indexing being cheap is baked into my brain...
Long story short, this repeated slicing resulted in O(n^2) behavior in
Rbyd.fetch and probably some other functions. Even though we don't care
_too_ much about performance in these scripts, having Rbyd.fetch run in
O(n^2) isn't great.
Tweaking all from* functions to take an optional index solves this, at
least on paper.
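Sketched below is what the index-taking pattern might look like. fromleb's actual body isn't shown in this diff, so the decoder here is a plain unsigned LEB128 read, assumed for illustration; the point is the (data, j) signature and the index-advancing loop.

```python
def fromleb(data, j=0):
    # Decode an unsigned LEB128 integer starting at offset j.
    # Returns (word, d), where d is the number of bytes consumed.
    word = 0
    d = 0
    while True:
        b = data[j+d]
        word |= (b & 0x7f) << (7*d)
        d += 1
        if not (b & 0x80):
            return word, d

# The loop then advances an index instead of reslicing, so each
# iteration is O(d) instead of O(n):
data = bytes([0x01, 0xe5, 0x8e, 0x26])  # encodes 1, then 624485
words = []
j = 0
while j < len(data):
    word, d = fromleb(data, j)
    words.append(word)
    j += d
```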
---
In practice I didn't actually find any measurable performance gain. I
guess array slicing in Python is optimized enough that the constant
factor takes over?
(Maybe it's being helped by us limiting Rbyd.fetch to block_size in most
scripts? I haven't tested NAND block sizes yet...)
Still, it's good to at least know this isn't a bottleneck.
@@ -21,14 +21,14 @@ def openio(path, mode='r', buffering=-1):
     else:
         return open(path, mode, buffering)
 
-def fromle32(data):
-    return struct.unpack('<I', data[0:4].ljust(4, b'\0'))[0]
+def fromle32(data, j=0):
+    return struct.unpack('<I', data[j:j+4].ljust(4, b'\0'))[0]
 
 def dbg_le32s(data):
     lines = []
     j = 0
     while j < len(data):
-        word = fromle32(data[j:])
+        word = fromle32(data, j)
         lines.append((
             ' '.join('%02x' % b for b in data[j:j+4]),
             word))