Hi, I made a simple zip bomb with 100k entries, each 1 byte in size, just to see how this library would handle it.
I then tried parsing it in a web worker:
```js
const reader = new zip.ZipReader(
  new zip.HttpReader(url, { useRangeHeader: true })
);
const entries = await reader.getEntries();
```
The worker's RAM usage exploded to almost 3 GB!
I suppose it's due to `reader.getEntries()`.
Is there any way to inspect the zip file first without allocating all the RAM needed to create all the entries?
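For what it's worth, what I was hoping for is something like the `getEntriesGenerator()` method on `ZipReader` (if I'm reading the API right), so entries could be consumed one at a time instead of materialized as one huge array. A runnable sketch of that iteration pattern, with a stub async generator standing in for the actual reader:

```javascript
// Stand-in for zip.js's reader.getEntriesGenerator(), which (as I
// understand it) yields entry objects lazily while parsing the
// central directory, instead of building a 100k-element array.
async function* fakeEntries(count) {
  for (let i = 0; i < count; i++) {
    yield { filename: `file-${i}.txt`, uncompressedSize: 1 };
  }
}

async function countSmallEntries(entries, maxSize) {
  let kept = 0;
  // Only one entry object is live per iteration, so memory use
  // stays bounded regardless of how many entries the archive has.
  for await (const entry of entries) {
    if (entry.uncompressedSize <= maxSize) kept++;
  }
  return kept;
}

countSmallEntries(fakeEntries(100000), 1).then((n) => console.log(n));
```

With the real reader, the same `for await` loop would let me inspect (and bail out on) suspicious archives without allocating everything up front.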