Commit 781b3e5efc (tftp: Do not use priority queue) caused a regression
when fetching files over TFTP whose size is bigger than 65535 * block size.

  grub> linux /images/pxeboot/vmlinuz
  grub> echo $?
  0
  grub> initrd /images/pxeboot/initrd.img
  error: timeout reading '/images/pxeboot/initrd.img'.
  grub> echo $?
  28

It is caused by the block number counter being a 16-bit field, which leads
to a maximum file size of ((1 << 16) - 1) * block size. Because GRUB sets
the block size to 1024 octets (by using the TFTP Blocksize Option from RFC
2348 [0]), the maximum file size that can be transferred is 67107840 bytes.
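
For reference, the 67107840 figure is just the 16-bit block counter limit
times the negotiated block size. A tiny standalone C snippet (not GRUB code)
that reproduces the arithmetic:

  #include <stdio.h>

  int
  main (void)
  {
    /* A 16-bit block counter can number at most (1 << 16) - 1 = 65535 blocks.  */
    unsigned long max_blocks = (1UL << 16) - 1;
    /* Block size GRUB negotiates via the RFC 2348 Blocksize Option.  */
    unsigned long block_size = 1024;

    /* Prints "maximum transfer size: 67107840 bytes".  */
    printf ("maximum transfer size: %lu bytes\n", max_blocks * block_size);
    return 0;
  }
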
The TFTP PROTOCOL (REVISION 2) RFC 1350 [1] does not mention what a client
should do when a file's size exceeds that maximum, but most TFTP servers
support rolling the block number counter over. That is, acking a data
packet with a block number of 0 is taken as acking the 65536th block.
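
The roll-over itself is nothing more than 16-bit modular arithmetic on the
block number; roughly (a sketch, not the actual GRUB code):

  #include <stdint.h>

  /* The on-the-wire block number wraps modulo 2^16, so block 65535 is
     followed by block 0, which therefore stands for the 65536th block.  */
  static uint16_t
  next_block (uint16_t block)
  {
    return (uint16_t) (block + 1);  /* 65535 + 1 rolls over to 0.  */
  }
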
It worked before because the block counter roll-over happened anyway, due to
an integer overflow. That overflow was fixed by the mentioned commit, which
led to the regression when fetching files larger than the maximum size.

To allow TFTP transfers of files larger than that limit again, re-introduce
the block counter roll-over so the data packets keep being acked and the
timeouts are avoided.
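
A minimal sketch of the idea, with illustrative names rather than the real
GRUB structures and functions: keep the expected block number in a 16-bit
counter so both the comparison and the ack wrap naturally past 65535.

  #include <stdint.h>

  struct tftp_state
  {
    uint16_t block;  /* Last block acked, wraps modulo 2^16.  */
  };

  /* Hypothetical receive path: ack a data packet only if it carries the
     next expected block number, letting the counter roll over.  */
  static int
  tftp_handle_data (struct tftp_state *st, uint16_t pkt_block)
  {
    if (pkt_block != (uint16_t) (st->block + 1))
      return 0;              /* Duplicate or out-of-order block, ignore it.  */

    st->block = pkt_block;   /* ... 65534, 65535, 0, 1, ... keeps the transfer going.  */
    /* send_ack (st->block) would go here.  */
    return 1;
  }
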
[0]: https://tools.ietf.org/html/rfc2348
[1]: https://tools.ietf.org/html/rfc1350
Fixes: 781b3e5efc (tftp: Do not use priority queue)
Suggested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Javier Martinez Canillas <javierm@redhat.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>