The documentation of base64_decode_update says
   @var{dst} should point to an area of size at least
   BASE64_DECODE_LENGTH(@var{src_length}). The amount of data
   generated is returned in *@var{dst_length}.
(and similarly for base16_decode_update).
This is rather inconvenient when decoding a base64 blob which is expected to be of a fixed size. E.g., the base64 encoding of 32 octets is 44 characters (including padding), but BASE64_DECODE_LENGTH(44) is 33, so to decode using a single call to base64_decode_update according to the docs, one would need to allocate an extra byte. And more than one extra byte if the input may contain whitespace in addition to the 44 base64 characters.
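To make the inconvenience concrete, here's a minimal sketch with the current interface (assuming the usual base64_decode_init/update/final prototypes; exact types such as size_t vs. unsigned vary between versions, and decode_key and the 32-octet size are just for illustration):

  #include <string.h>
  #include <nettle/base64.h>

  /* Decode a 44-character base64 blob expected to yield exactly 32
     octets.  Per the documented rule, dst must have room for
     BASE64_DECODE_LENGTH(44) = 33 octets, so we need a temporary
     buffer one byte larger than the data we actually want.  */
  static int
  decode_key(uint8_t key[32], const char src[44])
  {
    struct base64_decode_ctx ctx;
    uint8_t buf[BASE64_DECODE_LENGTH(44)];	/* 33 octets */
    size_t done;

    base64_decode_init(&ctx);
    if (!base64_decode_update(&ctx, &done, buf, 44, src)
        || !base64_decode_final(&ctx)
        || done != 32)
      return 0;

    memcpy(key, buf, 32);
    return 1;
  }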
Suggestion: Make dst_length an input parameter as well (similar to, e.g., the rsa_decrypt functions). If decoding would produce more bytes than there is space for, decoding fails. This way, if you expect a fixed size or a fixed maximum size, you need only allocate that much and pass it to base64_decode_update. And if you allocate according to BASE64_DECODE_LENGTH, then decoding will never fail due to missing output space.
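With that convention, the fixed-size caller above could decode directly into its final buffer. A sketch of the hypothetical semantics (not existing code; the prototype is unchanged except that *dst_length is also read on input):

  /* Hypothetical semantics: *dst_length is the space available on
     input and the number of octets produced on output; decoding
     fails if the output would not fit.  */
  static int
  decode_key_new(uint8_t key[32], const char src[44])
  {
    struct base64_decode_ctx ctx;
    size_t length = 32;		/* space available, on input */

    base64_decode_init(&ctx);
    return base64_decode_update(&ctx, &length, key, 44, src)
      && base64_decode_final(&ctx)
      && length == 32;		/* octets produced, on output */
  }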
If we do this for nettle-4, then old code needs to be adjusted, not just recompiled. Unadjusted code will suffer random failures, since the uninitialized *dst_length it passes would now be read as the available buffer size. But if it follows the docs on allocation of the dst area, there won't be any dangerous memory overwrites.
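For illustration, this is what a typical unadjusted caller would look like under the new semantics (old_style_decode is a made-up name):

  /* Old-style caller, compiled unchanged against the new interface.
     dst_length is uninitialized, and the new code reads that garbage
     value as the available space, so decoding fails (or succeeds) at
     random.  But the actual output never exceeds
     BASE64_DECODE_LENGTH(src_length), so as long as dst is allocated
     per the documented rule, no write goes out of bounds.  */
  static int
  old_style_decode(uint8_t *dst, const char *src, size_t src_length)
  {
    struct base64_decode_ctx ctx;
    size_t dst_length;		/* never initialized by old code */

    base64_decode_init(&ctx);
    return base64_decode_update(&ctx, &dst_length, dst, src_length, src)
      && base64_decode_final(&ctx);
  }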
Alternatively, one could introduce a new name for the new behavior.
Opinions?
Regards, /Niels