hi nettle folks--
now that Crypt::Nettle seems effective and functional, i'm starting to look at using it in other systems i'm working on. Suddenly, i realize i'm missing access to 3DES and BLOWFISH, which i find i actually want :/
I'm missing them because there is no struct nettle_cipher for these algorithms (or for DES, for that matter, though i care less about DES).
I seem to have a few options:
0) Crypt::Nettle could write unique interfaces to those ciphers and expose them to the user of the perl module as (for example) Crypt::Nettle::Cipher::3DES and Crypt::Nettle::Cipher::Blowfish. This breaks symmetry with the rest of the interface, though.
1) Crypt::Nettle could create its own struct nettle_cipher objects for these ciphers, wrapping the weak key checking in some code that belongs to the perl module.
2) I could propose that nettle create struct nettle_cipher objects for these ciphers directly.
I prefer (1) or (2) because they'll keep a simple interface for Crypt::Nettle. I'm not sure how to do (2) without breaking ABI in nettle somehow (or losing the weak-key error checking). But going with option (1) seems likely to cause code duplication in any other higher-level bindings that use the struct nettle_cipher objects to present a normalized interface.
Any thoughts on how i should proceed? I can certainly do (1) independently of libnettle itself, but if there's a way to handle (2) more cleanly than i've been able to imagine thus far, i'd be happy to hear about it.
Or, am i barking up the wrong tree entirely? I'm imagining (for example) a user who has in their possession a symmetrically-encrypted message that they happen to know the key for. The cipher used was one of the "weak-key" ciphers, and it's even possible that the key is in fact a weak key. The user should still be able to decrypt the message using Crypt::Nettle (or any other binding).
--dkg
On 03/29/2011 08:18 AM, Daniel Kahn Gillmor wrote:
> now that Crypt::Nettle seems effective and functional, i'm starting to look at using it in other systems i'm working on. Suddenly, i realize i'm missing access to 3DES and BLOWFISH, which i find i actually want :/
>
> I'm missing them because there is no struct nettle_cipher for these algorithms (or for DES, for that matter, though i care less about DES). I seem to have a few options:
>
> 0) Crypt::Nettle could write unique interfaces to those ciphers and expose them to the user of the perl module as (for example) Crypt::Nettle::Cipher::3DES and Crypt::Nettle::Cipher::Blowfish. This breaks symmetry with the rest of the interface, though.
>
> 1) Crypt::Nettle could create its own struct nettle_cipher objects for these ciphers, wrapping the weak key checking in some code that belongs to the perl module.
>
> 2) I could propose that nettle create struct nettle_cipher objects for these ciphers directly.
>
> I prefer (1) or (2) because they'll keep a simple interface for Crypt::Nettle. I'm not sure how to do (2) without breaking ABI in nettle somehow (or losing the weak-key error checking).
I'd also prefer (2), because it reduces work for nettle consumers. For gnutls I didn't use nettle_cipher at all and created my own wrappers, because nettle_cipher works with only a few ciphers. At least for TLS, weak key checking is not that important, since the probability of selecting one is too low to be of any practical concern.
regards, Nikos
Nikos Mavrogiannopoulos <nmav@gnutls.org> writes:
> At least for TLS, weak key checking is not that important, since the probability of selecting one is too low to be of any practical concern.
In lsh, I disconnect when a weak key is detected.
The problem with relying on "low probability" is that unless you generate the random key all by yourself, you need that probability to be low also in the presence of any possible attacks on the key agreement protocol. The analysis needed to rule out such attacks may cause some headache, which you can avoid by simply refusing to use weak keys if they ever occur.
Regards, /Niels
On 03/29/2011 11:08 AM, Niels Möller wrote:
> Nikos Mavrogiannopoulos <nmav@gnutls.org> writes:
>
>> At least for TLS, weak key checking is not that important, since the probability of selecting one is too low to be of any practical concern.
>
> In lsh, I disconnect when a weak key is detected.
>
> The problem with relying on "low probability" is that unless you generate the random key all by yourself, you need that probability to be low also in the presence of any possible attacks on the key agreement protocol. The analysis needed to rule out such attacks may cause some headache, which you can avoid by simply refusing to use weak keys if they ever occur.
In TLS the generated keys depend not only on the key exchange but also on several bytes of randomness contributed by both peers. Even a key exchange with a malicious party would produce random keys with a little more than 224 bits of randomness.
Moreover, if you handle weak keys, you have to account for that in the protocol: do you restart the key exchange once a weak key is detected, or do you just terminate the handshake? TLS has no provision for a re-handshake once a weak key is detected.
regards, Nikos
Daniel Kahn Gillmor <dkg@fifthhorseman.net> writes:
> I seem to have a few options:
>
> 0) Crypt::Nettle could write unique interfaces to those ciphers and expose them to the user of the perl module as (for example) Crypt::Nettle::Cipher::3DES and Crypt::Nettle::Cipher::Blowfish. This breaks symmetry with the rest of the interface, though.
>
> 1) Crypt::Nettle could create its own struct nettle_cipher objects for these ciphers, wrapping the weak key checking in some code that belongs to the perl module.
>
> 2) I could propose that nettle create struct nettle_cipher objects for these ciphers directly.
I recommend (1). In the Pike bindings (where the implementation is maybe a bit too complicated for its own good), I use a struct pike_cipher very similar to nettle_cipher:
  /* Calls Pike_error on errors */
  typedef void (*pike_nettle_set_key_func)(void *ctx, ptrdiff_t length,
                                           const char *key,
                                           /* Force means to use key even
                                              if it is weak */
                                           int force);

  struct pike_cipher
  {
    const char *name;

    unsigned context_size;

    unsigned block_size;

    /* Suggested key size; other sizes are sometimes possible. */
    unsigned key_size;

    pike_nettle_set_key_func set_encrypt_key;
    pike_nettle_set_key_func set_decrypt_key;

    nettle_crypt_func encrypt;
    nettle_crypt_func decrypt;
  };
Here, the pike_nettle_set_key_func differs from nettle_set_key_func in two ways, related to error handling:
1. It checks if the key size is appropriate for the algorithm, and raises an exception if not (in contrast, a bad key size passed to the nettle set_key function would abort the process with an assertion failure).
2. The behaviour for weak keys. If the force argument is zero (for Pike calls, it's an optional argument and omitting it also means zero), a weak key results in an exception. If the force argument is non-zero, a weak key is not considered an error.
In these bindings, unlike yours, each cipher like AES is a single class, with multiple supported key sizes. So all ciphers need their own set_key wrapper for proper error checking.
In your case, where you have one separate class per possible key size, I think you could do something similar and still use the new enumeration interface for the "normal" algorithms.
If you're fine with either having weak keys always raise an exception or always be accepted, you could write set_key wrappers for the affected ciphers which do precisely that and which adhere to the nettle_set_key_func interface (note that des_set_key and des3_set_key don't have a key size argument so they need wrappers also for that reason). If you want it to be configurable, things get a bit more complicated and you may need your own struct perl_cipher to extend struct nettle_cipher (you could still enumerate the available nettle_cipher and convert each to a corresponding perl_cipher). Or you could just define separate classes with and without weak key checking.
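For example, the des3 wrapper could look something like this (just a sketch; binding_error() is a hypothetical placeholder for whatever error mechanism your binding uses, and it assumes des3_set_key returns zero for a weak key):

  #include <nettle/des.h>

  void binding_error(const char *msg);  /* hypothetical */

  /* Adheres to the nettle_set_key_func signature, supplies the missing
     length check, and always treats weak keys as errors. */
  static void
  wrap_des3_set_key(void *ctx, unsigned length, const uint8_t *key)
  {
    if (length != DES3_KEY_SIZE)
      binding_error("des3: bad key size");
    else if (!des3_set_key(ctx, key))
      binding_error("des3: weak key");
  }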
There will be a little code duplication. But there ought to be code *somewhere* that implements the language-specific pieces of the interface, such as exception-based error handling, and new features like the optional force argument above.
> Or, am i barking up the wrong tree entirely? I'm imagining (for example) a user who has in their possession a symmetrically-encrypted message that they happen to know the key for. The cipher used was one of the "weak-key" ciphers, and it's even possible that the key is in fact a weak key. The user should still be able to decrypt the message using Crypt::Nettle (or any other binding).
I agree that there are certainly cases where you don't want to treat weak keys as errors, even though I think it makes sense for the default behaviour to treat them as errors.
Regards, /Niels
On 03/29/2011 05:02 AM, Niels Möller wrote:
> If you're fine with either having weak keys always raise an exception or always be accepted, you could write set_key wrappers for the affected ciphers which do precisely that and which adhere to the nettle_set_key_func interface (note that des_set_key and des3_set_key don't have a key size argument so they need wrappers also for that reason). If you want it to be configurable, things get a bit more complicated and you may need your own struct perl_cipher to extend struct nettle_cipher (you could still enumerate the available nettle_cipher and convert each to a corresponding perl_cipher). Or you could just define separate classes with and without weak key checking.
this is quite a bit of code duplication across bindings. I'd rather just expose the fact of a weak key to the caller directly (whether through exceptions, return codes, or some other mechanism).
-----
Here's a proposal for (2) which i'll name "2a"; I believe it does involve an ABI+API bump to libnettle, but should allow for a reduction in the amount of code for all bindings (which in turn might make the creation of future bindings more likely, thereby getting the nettle goodness out to more people). I know i'd be more likely to maintain additional bindings if they are smaller/simpler.
Redefine nettle_set_key_func to return an int instead of void:
  typedef int nettle_set_key_func(void *ctx, unsigned length, const uint8_t *key);
For the ciphers which have no weak keys, create wrapper functions around their set_key functions which always return 1, and use those wrapper functions to populate the standard nettle_cipher objects.
Add a wrapper function around des_set_key and des3_set_key that includes a key length argument; add corresponding nettle_cipher objects for des and des3.
Add new nettle_cipher objects for the remaining weak-key ciphers (only blowfish?) without the wrapping functions.
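for concreteness, here's roughly what the des wrapper might look like under 2a (just a sketch with a made-up name; it assumes des_set_key returns zero for a weak key, and bad key sizes would still abort via assert, as elsewhere in nettle today):

  #include <assert.h>
  #include <nettle/des.h>

  /* sketch only -- adheres to the int-returning nettle_set_key_func
     proposed above; 0 signals a weak key */
  static int
  wrap_des_set_key(void *ctx, unsigned length, const uint8_t *key)
  {
    assert(length == DES_KEY_SIZE);
    return des_set_key(ctx, key);
  }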
-----
And here is "2b", a more involved proposal for (2) -- it's a bigger ABI+API change, but the exposed API becomes more normalized:
Redefine nettle_set_key_func as in "2a", and also change all the *_set_key() functions in nettle to return an int directly. Ciphers with no weak keys will naturally always return 1.
Change des_set_key() and des3_set_key() to take length arguments like every other *_set_key() function.
Add new nettle_cipher objects for all missing ciphers.
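in other words, under 2b the DES entry points would end up looking like the rest of the family, roughly like this (a sketch of the proposal, not the current nettle API):

  /* sketch of the 2b prototypes, not the current API */
  int des_set_key(struct des_ctx *ctx, unsigned length, const uint8_t *key);
  int des3_set_key(struct des3_ctx *ctx, unsigned length, const uint8_t *key);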
-----
I understand the natural reluctance to make an ABI bump, and i think it's good to do so carefully (and i regret that i didn't make this proposal before the recent ABI bump to get it all done together). But i think the tradeoff in terms of simplicity of new bindings is an overall positive one.
In either proposal, bindings still retain the ability to report weak keys using language-specific mechanisms/error handling.
I'd be happy to write a patch for either 2a or 2b, if there was a chance that they would be accepted upstream. Either one would make me happy (and more willing to step up to writing python bindings, which i'd like to have on my plate for the future).
Regards,
--dkg
Daniel Kahn Gillmor <dkg@fifthhorseman.net> writes:
> this is quite a bit of code duplication across bindings.
Could you be a bit more concrete? Which variant of wrappers and weak-key interface are you thinking about?
Before getting into specifics, I'd like to point out that the structs declared in nettle-meta.h are not intended as a fully general algorithm framework (to do that, one could, e.g., implement an interface on the same level of abstraction as libgcrypt's, on top of nettle).
Besides lacking des and blowfish, it doesn't provide a nettle_cipher struct for every possible key size of every algorithm, and there's also no mechanism to query possible key sizes. To me, the lack of a struct nettle_cipher for des and blowfish-128 is comparable to the lack of a struct nettle_cipher for cast, arcfour, and blowfish with an 80-bit key. And to support them all in a reasonable way would require something quite different from the current struct nettle_cipher.
nettle-meta.h is intended to provide the simplest algorithm abstraction possible, to cover the simple cases. And also as inspiration for extended frameworks, either more general or more application specific.
This works better for hash algorithms (which have more regular properties than the ciphers); in the Pike bindings I use nettle_hash as is, but I don't use nettle_cipher.
> Here's a proposal for (2) which i'll name "2a"; I believe it does involve an ABI+API bump to libnettle, but should allow for a reduction in the amount of code for all bindings (which in turn might make the creation of future bindings more likely, thereby getting the nettle goodness out to more people). I know i'd be more likely to maintain additional bindings if they are smaller/simpler.
>
> Redefine nettle_set_key_func to return an int instead of void:
>
>   typedef int nettle_set_key_func(void *ctx, unsigned length, const uint8_t *key);
>
> For the ciphers which have no weak keys, create wrapper functions around their set_key functions which always return 1, and use those wrapper functions to populate the standard nettle_cipher objects.
>
> Add a wrapper function around des_set_key and des3_set_key that includes a key length argument; add corresponding nettle_cipher objects for des and des3.
I could consider this, but I'm not convinced that it really solves an important problem. To me, a language-specific wrapper like, e.g.,
  void
  pike_blowfish_set_key(void *ctx, ptrdiff_t length, const char *key,
                        int force)
  {
    if (length < BLOWFISH_MIN_KEY_SIZE || length > BLOWFISH_MAX_KEY_SIZE)
      Pike_error("BLOWFISH_Info: Bad keysize for BLOWFISH.\n");

    if (!blowfish_set_key(ctx, length, (const uint8_t *)key) && !force)
      Pike_error("BLOWFISH_Info: Key is weak.\n");
  }
seems more useful than a language-agnostic wrapper
  int
  aes_set_encrypt_key_wrapper (struct aes_ctx *ctx, unsigned length,
                               const uint8_t *key)
  {
    aes_set_encrypt_key (ctx, length, key);
    return 1;
  }
which lacks adequate error handling for bad key sizes. And if we extend nettle_cipher to include a description of valid key sizes, and/or introduce additional error codes for the set_key functions to signal different types of bad keys, then we get quite far from the current minimalistic flavor of nettle.
And here is "2b", a more involved proposal for (2) -- it's a bigger ABI+API change, but the exposed API becomes more normalized:
I'm not going to do this. The low level cipher interface is not intended to normalize away important differences between ciphers.
In particular,
* I don't like dummy return values (which are either unnecessarily checked, making code ugly, or get people into the habit of ignoring return values). "Can't fail" (except by abort()) is a very simplifying property of a function, and then it shouldn't return an error code.
* I also don't like dummy function arguments, used like
  int
  des_set_key (struct des_ctx *ctx, unsigned length, const uint8_t *key)
  {
    assert (length == DES_KEY_SIZE);
    ...
  }
(except in an optional wrapper function, where the above would be the right thing).
Remember that the C interface is intended to be nice also for applications that use just one or two algorithms. These applications should not suffer from cruft intended to unify the interface with some other algorithm which the applications couldn't care less about. An important case would be applications that use only the newer algorithms like aes or camellia. They shouldn't have to bother about a return value introduced just because it's needed for des.
BTW: There's another easy alternative which we could call (3): Keep nettle_set_key_func as is. Introduce wrappers for des and blowfish which just ignore weak keys, and provide nettle_cipher structs using these wrappers. Then you get des and blowfish sans weak key detection, using the same interface as the other ciphers. I think this would fit reasonably with the nettle design principles. Question is: Would anybody find it useful? For a general language binding, I would expect that one would want to have the possibility to detect weak keys.
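A rough sketch of what (3) would amount to, with made-up names, assuming the existing des3 and blowfish entry points:

  #include <assert.h>
  #include <nettle/blowfish.h>
  #include <nettle/des.h>

  /* keep nettle_set_key_func as it is, and simply discard the
     weak-key indication */
  static void
  des3_set_key_ignore_weak(void *ctx, unsigned length, const uint8_t *key)
  {
    assert(length == DES3_KEY_SIZE);
    (void) des3_set_key(ctx, key);              /* weak keys silently accepted */
  }

  static void
  blowfish_set_key_ignore_weak(void *ctx, unsigned length, const uint8_t *key)
  {
    (void) blowfish_set_key(ctx, length, key);  /* ditto */
  }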
Regards, /Niels
On 03/29/2011 03:38 PM, Niels Möller wrote:
> Daniel Kahn Gillmor <dkg@fifthhorseman.net> writes:
>
>> this is quite a bit of code duplication across bindings.
>
> Could you be a bit more concrete? Which variant of wrappers and weak-key interface are you thinking about?
sorry -- i meant that it seemed like it would be unnecessary code duplication for me to create a perl_nettle_cipher struct that matched your struct pike_cipher.
> Before getting into specifics, I'd like to point out that the structs declared in nettle-meta.h are not intended as a fully general algorithm framework (to do that, one could, e.g., implement an interface on the same level of abstraction as libgcrypt's, on top of nettle).
hm, ok, i see your point. it sounds like nettle_cipher might be a reasonable choice for symmetric encryption (where the tools can select reasonable key sizes, etc), but *not* a reasonable choice for symmetric decryption (where we have to cope with arbitrary algorithms and keys foisted on us by the incoming ciphertext).
> This works better for hash algorithms (which have more regular properties than the ciphers); in the Pike bindings I use nettle_hash as is, but I don't use nettle_cipher.
Perhaps what i'm wondering is: can we define a cipher abstraction that exposes the relevant details in C, provides a framework that is suitable for symmetric decryption, and doesn't violate the minimalistic flavor of nettle?
I think having something like this in the canonical sources (instead of implemented outside of nettle) would make it easier to write and maintain language bindings.
Or, as you said about your Pike bindings: "the implementation is maybe a bit too complicated for its own good" -- wouldn't it be better to have the extra implementation complexity in only one place instead of expecting every binding that uses it to duplicate it?
> I could consider this, but I'm not convinced that it really solves an important problem. To me, a language-specific wrapper like, e.g.,
>
>   void
>   pike_blowfish_set_key(void *ctx, ptrdiff_t length, const char *key,
>                         int force)
>   {
>     if (length < BLOWFISH_MIN_KEY_SIZE || length > BLOWFISH_MAX_KEY_SIZE)
>       Pike_error("BLOWFISH_Info: Bad keysize for BLOWFISH.\n");
>
>     if (!blowfish_set_key(ctx, length, (const uint8_t *)key) && !force)
>       Pike_error("BLOWFISH_Info: Key is weak.\n");
>   }
>
> seems more useful than a language-agnostic wrapper
>
>   int
>   aes_set_encrypt_key_wrapper (struct aes_ctx *ctx, unsigned length,
>                                const uint8_t *key)
>   {
>     aes_set_encrypt_key (ctx, length, key);
>     return 1;
>   }
i think you might be comparing apples and oranges here. I agree that language-specific error handling is useful, and i'd expect any bindings to report errors in some sort of native form. But that doesn't mean we shouldn't have a cipher-agnostic interface in C that is capable of reporting all the standard classes of errors; it would be nice for binding authors to have a standard, cipher-agnostic way (in C) to get error reports. Otherwise, we'll end up with a bunch of duplicate cipher-specific code in each binding.
Maybe it's useful to think through what possible errors could come up from a *_set_key function, and come up with a C interface that would cover them all in some sort of distinguishable fashion?
So far, i think the errors i've heard are:
* bad key size
* weak key
Anything else? Do we want a way to report the range of acceptable key sizes for a given cipher?
> which lacks adequate error handling for bad key sizes. And if we extend nettle_cipher to include a description of valid key sizes, and/or introduce additional error codes for the set_key functions to signal different types of bad keys, then we get quite far from the current minimalistic flavor of nettle.
>
>> And here is "2b", a more involved proposal for (2) -- it's a bigger ABI+API change, but the exposed API becomes more normalized:
>
> I'm not going to do this. The low level cipher interface is not intended to normalize away important differences between ciphers.
That seems reasonable to me. I'm happy to discard proposal 2b.
> BTW: There's another easy alternative which we could call (3): Keep nettle_set_key_func as is. Introduce wrappers for des and blowfish which just ignore weak keys, and provide nettle_cipher structs using these wrappers. Then you get des and blowfish sans weak key detection, using the same interface as the other ciphers. I think this would fit reasonably with the nettle design principles. Question is: Would anybody find it useful? For a general language binding, I would expect that one would want to have the possibility to detect weak keys.
Yes, i agree that general language bindings should allow the user to detect weak keys.
--dkg
Daniel Kahn Gillmor <dkg@fifthhorseman.net> writes:
> can we define a cipher abstraction that exposes the relevant details in C, provides a framework that is suitable for symmetric decryption, and doesn't violate the minimalistic flavor of nettle?
Feel free to try...
Some points to keep in mind:
1. I think the current nettle_cipher is useful for some applications, even if it's not suitable for language bindings which want to provide access to everything available.
2. Error checking at set_key
> So far, i think the errors i've heard are:
>
> * bad key size
> * weak key
>
> Anything else?
One could possibly add parity error for DES, but I doubt anyone really cares about that. One *might* want to make some distinction between "hard" errors, like bad key sizes, and "soft" errors or warnings like weak keys, where the resulting cipher context still can be used, if desired.
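If one wanted to make that distinction visible in a plain C return value, it could be as simple as something like this (just a sketch, not a concrete proposal):

  /* sketch only, not an actual or proposed nettle interface */
  enum set_key_result
  {
    SET_KEY_BAD_SIZE = 0, /* hard error: the context must not be used */
    SET_KEY_OK       = 1, /* key accepted */
    SET_KEY_WEAK     = 2  /* soft warning: context usable, but key is weak */
  };

A caller that doesn't care about weak keys could then just test for a non-zero result.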
3. The set of supported key sizes.
This is the issue with the most potential for making things hairy. (This information is not provided by nettle's cipher-specific interfaces either; there's a define for a single "recommended" key size, and when applicable there are defines for the minimum and maximum key size, but the user has to look in the documentation to find out which key sizes in this range are actually supported.)
> Do we want a way to report the range of acceptable key sizes for a given cipher?
I think a general algorithm abstraction ought to provide that. And I think the model for this more general abstraction should be that ciphers are parametrized by key size. That is different from the current nettle_cipher (and the design of your perl bindings), which is unaware that the two ciphers aes-128 and aes-256 are in fact related.
There are two types of queries one could support:
a) The ability to ask if a particular key size is ok.
b) The ability to get the complete set of available key sizes.
From a minimality perspective, it seems undesirable to have both, since a) can be done on top of b). But I think it makes a lot of sense to have the set_key function of this interface check key sizes, thereby almost supporting a) (one would have to provide a particular key, not just a key size). For b), the simplest way that occurs to me would be to export a list of ranges. Something like
  struct keysize_range
  {
    unsigned short start;
    unsigned short length;
  };
Then put a pointer to an array of those structs, sorted by start, into the new nettle_cipher struct, terminated by an entry with length == 0. So for aes, one would have
{{ 16, 1 }, {24, 1}, {32, 1}, {0, 0}}
and for blowfish,
{{BLOWFISH_MIN_KEY_SIZE, BLOWFISH_MAX_KEY_SIZE + 1 - BLOWFISH_MIN_KEY_SIZE}, {0,0}}
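With such a list, a) more or less falls out of b); e.g. (a sketch, with a made-up helper name):

  static int
  key_size_ok(const struct keysize_range *ranges, unsigned size)
  {
    /* the array is terminated by an entry with length == 0 */
    for (; ranges->length; ranges++)
      if (size >= ranges->start
          && size < (unsigned) ranges->start + ranges->length)
        return 1;
    return 0;
  }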
Would this be useful? For a particular use case, could you use something like this in your perl bindings, and would that imply a design change, switching to treat AES as a single cipher with several possible key sizes rather than as three different ciphers?
And that said, I think the model of current nettle_cipher, which treats aes128 and aes256 as different ciphers, is also useful in some cases. I don't know how to reconcile the two views. I'd prefer to not have multiple nettle_cipher-like abstractions in nettle itself.
Regards, /Niels