mksquashfs: fix rare race condition in "locked fragment" queueing

When mksquashfs is writing a file containing multiple blocks to
disk, we cannot have the fragment compressors writing their
fragment blocks to disk at the same time.  This is because
files are expected to be contiguous on disk, and having a
fragment written in the middle is not particularly useful!

But we don't want to turn off fragment compression during this
time, because that would significantly reduce parallelism.
The solution is to "lock" fragment output during this time:
the fragment compressors continue to compress, but the fragments
generated are queued internally to the fragment compressors, and
are only written once the file has finished being written.
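
To illustrate, here is a minimal sketch of the scheme.  This is
not the actual mksquashfs code; the names (frag_entry,
fragments_locked, locked_queue, write_fragment_to_disk,
queue_or_write, unlock_fragments) are invented for illustration:

    #include <pthread.h>
    #include <stddef.h>

    struct frag_entry {
        struct frag_entry *next;
        int fragment;           /* fragment index */
        void *data;             /* compressed fragment data */
        int size;               /* compressed size */
    };

    static pthread_mutex_t frag_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int fragments_locked = 0;    /* set while a multi-block
                                           file is being written */
    static struct frag_entry *locked_queue = NULL;

    static void write_fragment_to_disk(struct frag_entry *entry)
    {
        /* stub - the real code writes the compressed fragment out
           and records where it landed */
        (void) entry;
    }

    /* called by a fragment compressor thread when it has finished
       compressing a fragment */
    void queue_or_write(struct frag_entry *entry)
    {
        pthread_mutex_lock(&frag_mutex);
        if (fragments_locked) {
            /* file data must stay contiguous on disk - hold the
               fragment back on the locked queue */
            entry->next = locked_queue;
            locked_queue = entry;
            pthread_mutex_unlock(&frag_mutex);
            return;
        }
        pthread_mutex_unlock(&frag_mutex);
        write_fragment_to_disk(entry);
    }

    /* called by the main thread once the multi-block file has been
       written - flush everything that was held back */
    void unlock_fragments(void)
    {
        pthread_mutex_lock(&frag_mutex);
        fragments_locked = 0;
        while (locked_queue != NULL) {
            struct frag_entry *entry = locked_queue;
            locked_queue = entry->next;
            write_fragment_to_disk(entry);
        }
        pthread_mutex_unlock(&frag_mutex);
    }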

This works fine, but there has been a subtle race condition
between this fragment locking and fragment lookup in duplicate
checking.  Fragment lookup in duplicate checking first looks to
see if the fragment is still in the uncompressed fragment cache;
normally, if the fragment has just been compressed and queued to
the internal "locked fragment" queue, it will still be there.
However, if we've had a run on uncompressed buffers (*), it may
have been reused, in which case we proceed to look for the
compressed fragment in the writer cache.  When we do this we look
up the fragment in the fragment_table, but this table was only
updated once the fragment was written to disk.  That left an
unlikely, small window where the uncompressed fragment had been
reused while the fragment was still on the locked queue, and so
the fragment_table had not yet been updated, which would have
resulted in incorrect values being read from the fragment_table.
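
In the same sketch style, the racy lookup path on the
duplicate-check side looked roughly like this.  Again, the layout
of the fragment_table entries and the helper names
(lookup_uncompressed_cache, read_fragment,
get_fragment_for_dup_check) are assumptions for illustration, not
the real code:

    #include <stddef.h>

    struct frag_info {
        long long start_block;  /* where the fragment landed on disk */
        unsigned int size;      /* compressed size */
    };

    extern struct frag_info fragment_table[];

    extern char *lookup_uncompressed_cache(int frag);
    extern char *read_fragment(long long start_block,
                               unsigned int size);

    char *get_fragment_for_dup_check(int frag)
    {
        /* 1. Fast path: the fragment may still be in the
           uncompressed fragment cache. */
        char *data = lookup_uncompressed_cache(frag);
        if (data != NULL)
            return data;

        /* 2. Slow path: look for the compressed fragment via the
           writer cache, using fragment_table[frag].  Before the
           fix, this entry was only filled in when the fragment was
           written to disk, so a fragment still sitting on the
           locked queue had a stale entry here - the race. */
        return read_fragment(fragment_table[frag].start_block,
                             fragment_table[frag].size);
    }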

Fix this by updating the necessary value in the fragment_table
when the fragment is queued to the internal locked fragment
queue (and obviously before the uncompressed fragment is released).
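
Continuing the sketches above, the fix amounts to filling in the
fragment's fragment_table entry at queueing time rather than at
disk-write time.  Which field is "the necessary value" isn't
spelled out here, so this sketch assumes it is the compressed
size:

    extern struct frag_info fragment_table[];

    /* fragment compressor side, with the fix applied */
    void queue_or_write(struct frag_entry *entry)
    {
        pthread_mutex_lock(&frag_mutex);
        if (fragments_locked) {
            /* NEW: update the lookup data now, while the
               uncompressed fragment is still live, so a concurrent
               duplicate check never reads a stale entry */
            fragment_table[entry->fragment].size = entry->size;

            entry->next = locked_queue;
            locked_queue = entry;
            pthread_mutex_unlock(&frag_mutex);
            return;
        }
        pthread_mutex_unlock(&frag_mutex);
        write_fragment_to_disk(entry);
    }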

(*) In fact, because the main thread does not create uncompressed
fragments in the ordinary way while writing a multi-block file,
and because, by definition, for a fragment to be on the "locked
fragment" queue its uncompressed buffer must have been released
while the multi-block file was being written, it is almost
impossible for the released uncompressed fragment to be reused.
The only way the released uncompressed fragment block could be
reused is in the duplicate check itself, if it needs to read and
decompress a fragment block.  This race condition is almost
impossible to hit, but it is easy to fix.

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>