Signing and encrypting JSON Web Tokens with Azure Key Vault keys and certificates
- 05/11/2024
Most commonly in our projects at Zure, Key Vault is used to store secrets for the application. But what if your application needs to sign or encrypt JSON Web Tokens (JWTs)? In this article we will look at how to do that with Key Vault’s namesake, keys. We will also look at an alternative approach using certificates.
Why would you use keys stored in Azure Key Vault? To see what it brings to the table, let’s look at how the process would work without it.
We want to sign JSON Web Tokens, so we would create a signing key. Often this would be an X.509 certificate containing a key pair that uses either the RSA or EC algorithm. You will need to consider the environment in which you create the certificate, to avoid the private key being stolen.
Next, the certificate needs to be stored somewhere in such a way that it is still accessible to the application. This could be the certificate store on the application servers. The application loads the certificate and uses it to sign tokens.
How do you handle rotation for the certificate? You would need to repeat what was done to create the signing certificate and then install it on all the relevant servers.
Instead of using certificates stored on application servers, we can use keys in Key Vault and do all the cryptographic operations within Key Vault, without ever exporting the private key. Keys are non-exportable by default, so the private key material never leaves the vault. For extra security, the key can be stored in an HSM (Hardware Security Module) in Azure.
We also don’t have to create the key locally; Key Vault can create the key inside the service. For example with AZ CLI:
az keyvault key create --vault-name testkeyvault --name SigningKey --kty RSA --size 2048 --protection software --not-before "2024-02-13T13:00:00Z" --expires "2024-02-20T13:00:00Z"
Now we have an RSA key that we can use without ever having seen the private key, and no ability to export it either.
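If you prefer to do this from code, the Keys SDK can do the same. The following is a minimal sketch, assuming a KeyClient (from Azure.Security.KeyVault.Keys) authenticated with DefaultAzureCredential (from Azure.Identity) against a placeholder vault URI:
// Sketch only: create the same RSA key from code instead of the Azure CLI.
// The vault URI and credential are placeholder assumptions.
var keyClient = new KeyClient(new Uri("https://testkeyvault.vault.azure.net/"), new DefaultAzureCredential());
var key = await keyClient.CreateRsaKeyAsync(new CreateRsaKeyOptions("SigningKey")
{
    KeySize = 2048,
    NotBefore = DateTimeOffset.Parse("2024-02-13T13:00:00Z"),
    ExpiresOn = DateTimeOffset.Parse("2024-02-20T13:00:00Z")
});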
To handle key rotation, you can define a rotation policy for the key:
az keyvault key rotation-policy update --vault-name testkeyvault --name SigningKey --value rotation-policy.json
Contents of rotation-policy.json:
{
  "lifetimeActions": [
    {
      "trigger": {
        "timeAfterCreate": null,
        "timeBeforeExpiry": "P7D"
      },
      "action": {
        "type": "Rotate"
      }
    },
    {
      "trigger": {
        "timeAfterCreate": null,
        "timeBeforeExpiry": "P10D"
      },
      "action": {
        "type": "Notify"
      }
    }
  ],
  "attributes": {
    "expiryTime": "P28D"
  }
}
Now your key will be automatically rotated (a new version created) 7 days before expiry, with the option of getting notified about the upcoming expiry. One downside of this approach is that 28 days is the shortest expiry time you can set, and 7 days before expiry is the earliest rotation trigger allowed. So if you wanted keys that rotate once per week, that is not possible with the built-in key rotation.
You can also make your own key rotator with e.g. Azure Functions, and at that point you can do the rotation in any way you want. Automatic rotation also does not set the activation date/time (nbf/not before), which is needed if you want the key to become active at a later time.
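As a sketch of what such a rotator could look like, a timer-triggered Azure Function (isolated worker model) could create a new key version itself and set the activation time. The schedule, key name, activation delay and overlap below are illustrative assumptions, not a recommendation:
public class WeeklyKeyRotator
{
    private readonly KeyClient _keyClient;

    public WeeklyKeyRotator(KeyClient keyClient) => _keyClient = keyClient;

    // Runs every Monday at 03:00 UTC. Creating a key with an existing name
    // creates a new version of that key in Key Vault.
    [Function("WeeklyKeyRotator")]
    public async Task Run([TimerTrigger("0 0 3 * * Mon")] TimerInfo timer)
    {
        await _keyClient.CreateRsaKeyAsync(new CreateRsaKeyOptions("SigningKey")
        {
            KeySize = 2048,
            NotBefore = DateTimeOffset.UtcNow.AddDays(1),  // new version becomes active a day later
            ExpiresOn = DateTimeOffset.UtcNow.AddDays(14)  // overlaps with the next rotation
        });
    }
}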
For this example, we have created two key pairs in Key Vault: a signing key pair and an encryption key pair. We will use these keys to do the following:
- Sign tokens and verify token signatures with the signing key pair
- Encrypt and decrypt tokens (wrap and unwrap the content encryption key) with the encryption key pair
To handle these operations in code, we will use the following libraries:
- Azure.Security.KeyVault.Keys for talking to Key Vault
- Microsoft.IdentityModel.JsonWebTokens for creating and validating the tokens
There is also a library that adds the classes necessary to connect these two libraries: Microsoft.IdentityModel.KeyVaultExtensions. We will not use it in this article, as it makes some odd choices in its code, such as instantiating new Key Vault client objects internally, which makes it hard to test.
The code for the whole thing is a bit long, so we won’t be including it all here. If you are interested though, the sample code is available on GitHub.
For signing tokens and verifying signatures, we need a class that inherits from SignatureProvider. This is a slightly simplified version:
public class KeyVaultKeySignatureProvider : SignatureProvider
{
    private readonly CryptographyClient _cryptographyClient;

    public override byte[] Sign(byte[] input)
    {
        var result = _cryptographyClient.SignData(GetKeyVaultAlgorithm(base.Algorithm), input);
        return result.Signature;
    }

    public override bool Verify(byte[] input, byte[] signature)
    {
        var result = _cryptographyClient.VerifyData(GetKeyVaultAlgorithm(base.Algorithm), input, signature);
        return result.IsValid;
    }

    private static SignatureAlgorithm GetKeyVaultAlgorithm(string algorithm)
    {
        return algorithm switch
        {
            SecurityAlgorithms.RsaSha256 => SignatureAlgorithm.RS256,
            SecurityAlgorithms.RsaSha384 => SignatureAlgorithm.RS384,
            SecurityAlgorithms.RsaSha512 => SignatureAlgorithm.RS512,
            _ => throw new NotImplementedException(),
        };
    }
}
The way a digital signature on a JSON Web Token usually works with asymmetric cryptography is that a hash is calculated over the header and payload, and that hash is then encrypted with the private key to create the signature. A receiver can calculate the same hash and decrypt the signature with the public key; if the results match, the token has not been modified. Symmetric cryptography would use the same key on both sides.
Why not calculate the signature by encrypting the entire token with the private key? That would technically work, but asymmetric cryptography is computationally expensive compared to hashing, so instead we hash the content quickly and only encrypt the hash. There is also the matter of the header containing the algorithm that was used, and often the identifier of the key that was used to sign the token. You end up with a bit of a chicken-and-egg problem if you encrypt the header.
The bytes of the encoded header and payload are what we receive as "input" to both Sign and Verify; the SignData and VerifyData methods take care of hashing them before the signing operation. The Verify method additionally receives the signature from the received token, which Key Vault compares against. The Key Vault CryptographyClient makes these methods quite easy to implement, as it is pretty much a one-to-one mapping. One downside is that these interfaces were designed without async support, leading to higher resource usage than necessary. The CryptographyClient object has been obtained from KeyClient and already points to a specific key in Key Vault, so we don't need to specify one in the method calls here.
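For reference, obtaining that client could look like this (just a sketch; the key name and version are the ones used later in the sample):
// Sketch: a CryptographyClient bound to one specific version of a key in Key Vault.
var cryptographyClient = keyClient.GetCryptographyClient("TestSigningKey", "c1d4752f020b4a77a9c899901db7c7cd");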
Encryption and decryption can be implemented similarly by inheriting from KeyWrapProvider (again simplified version):
public class KeyVaultKeyWrapProvider : KeyWrapProvider
{
    private readonly CryptographyClient _cryptographyClient;

    public override string Algorithm { get; }

    public override byte[] WrapKey(byte[] keyBytes)
    {
        var result = _cryptographyClient.WrapKey(GetAlgorithm(Algorithm), keyBytes);
        return result.EncryptedKey;
    }

    public override byte[] UnwrapKey(byte[] keyBytes)
    {
        var result = _cryptographyClient.UnwrapKey(GetAlgorithm(Algorithm), keyBytes);
        return result.Key;
    }

    private static KeyWrapAlgorithm GetAlgorithm(string algorithm)
    {
        return algorithm switch
        {
            SecurityAlgorithms.RsaOAEP => KeyWrapAlgorithm.RsaOaep,
            _ => throw new NotImplementedException(),
        };
    }
}
Just like with signing, we don't actually encrypt the content with our asymmetric key. Instead, a symmetric key unique to the token is generated, the content is encrypted with that, and the symmetric key is then encrypted (or "wrapped") with the intended receiver's public key. The wrapped key is sent along with the encrypted content so the receiver can perform the same operations in reverse. The Key Vault SDK again maps really well to these methods.
In addition to these two classes, we need an implementation of ICryptoProvider and a class to represent a key. The crypto provider is responsible for instantiating the above signature/key wrap providers. You can check the full implementation of those on GitHub.
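To give an idea of the shape of that code, below is a heavily simplified sketch of the crypto provider. The constructors of the two providers and the argument handling are assumptions made for illustration; the real implementation is in the GitHub sample.
// Heavily simplified sketch; provider constructors and argument handling are assumptions.
public class KeyVaultCryptoProvider : ICryptoProvider
{
    private readonly KeyClient _keyClient;

    public KeyVaultCryptoProvider(KeyClient keyClient) => _keyClient = keyClient;

    public bool IsSupportedAlgorithm(string algorithm, params object[] args)
    {
        // Only handle our Key Vault-backed keys and the algorithms we can map to Key Vault;
        // everything else (e.g. the AES content encryption) falls back to the default providers.
        return args.Length > 0
            && args[0] is KeyVaultRsaSecurityKey
            && (algorithm == SecurityAlgorithms.RsaSha256 || algorithm == SecurityAlgorithms.RsaOAEP);
    }

    public object Create(string algorithm, params object[] args)
    {
        var key = (KeyVaultRsaSecurityKey)args[0];
        // CryptographyClient bound to the exact key version behind the security key
        var cryptographyClient = _keyClient.GetCryptographyClient(key.Key.Name, key.Key.Properties.Version);

        return algorithm == SecurityAlgorithms.RsaOAEP
            ? new KeyVaultKeyWrapProvider(cryptographyClient, key, algorithm)
            : (object)new KeyVaultKeySignatureProvider(cryptographyClient, key, algorithm);
    }

    public void Release(object cryptoInstance) => (cryptoInstance as IDisposable)?.Dispose();
}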
With all of that in place, we can create a signed and encrypted JWT:
var keyClient = new KeyClient(vaultUri, credential);
var cryptoProviderFactory = new CryptoProviderFactory();
cryptoProviderFactory.CustomCryptoProvider = new KeyVaultCryptoProvider(keyClient);

var signingKey = await keyClient.GetKeyAsync("TestSigningKey", "c1d4752f020b4a77a9c899901db7c7cd");
var signingRsaKey = new KeyVaultRsaSecurityKey(signingKey)
{
    CryptoProviderFactory = cryptoProviderFactory
};
var signingCredentials = new SigningCredentials(signingRsaKey, SecurityAlgorithms.RsaSha256);

var encryptionKey = await keyClient.GetKeyAsync("TestEncryptionKey", "b073d79dcaa74d7d9c7588a475b4fd91");
var encryptionRsaKey = new KeyVaultRsaSecurityKey(encryptionKey)
{
    CryptoProviderFactory = cryptoProviderFactory
};
var encryptingCredentials = new EncryptingCredentials(encryptionRsaKey, SecurityAlgorithms.RsaOAEP, SecurityAlgorithms.Aes128CbcHmacSha256);

var handler = new JsonWebTokenHandler();
var encryptedToken = handler.CreateToken(
    JsonConvert.SerializeObject(new
    {
        sub = "test-user-id",
        aud = "TestApp",
        iss = "https://zure.com",
        iat = (long)(DateTime.UtcNow - DateTime.UnixEpoch).TotalSeconds,
        nbf = (long)(DateTime.UtcNow - DateTime.UnixEpoch).TotalSeconds,
        exp = (long)(DateTime.UtcNow.AddDays(1) - DateTime.UnixEpoch).TotalSeconds,
    }),
    signingCredentials,
    encryptingCredentials);
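The result is a JWE in compact serialization: five base64url-encoded parts separated by dots. Purely as an illustration (not part of the sample), splitting the token shows what was produced:
// Illustration only: the five parts of the encrypted token.
var parts = encryptedToken.Split('.');
// parts[0]: protected header (alg = RSA-OAEP, enc = A128CBC-HS256)
// parts[1]: content encryption key, wrapped with the Key Vault RSA encryption key
// parts[2]: initialization vector
// parts[3]: ciphertext, containing the nested signed JWT
// parts[4]: authentication tag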
Then to reverse the operations and validate the token:
var validationResult = await handler.ValidateTokenAsync(encryptedToken, new TokenValidationParameters
{
    IssuerSigningKeys = new List<SecurityKey>
    {
        signingRsaKey
    },
    TokenDecryptionKeys = new List<SecurityKey>
    {
        encryptionRsaKey
    },
    TryAllIssuerSigningKeys = false,
    ValidAudience = "TestApp",
    ValidIssuer = "https://zure.com",
    ClockSkew = TimeSpan.Zero,
    ValidAlgorithms = new List<string>
    {
        SecurityAlgorithms.RsaSha256,
        SecurityAlgorithms.Aes128CbcHmacSha256,
    },
    ValidateAudience = true,
    ValidateIssuer = true,
    ValidateIssuerSigningKey = true,
    ValidateLifetime = true,
});

bool isValid = validationResult.IsValid;
if (!isValid)
{
    // Check validationResult.Exception
}

IDictionary<string, object> claims = validationResult.Claims;
Running all cryptographic operations against Key Vault is not ideal if you expect them to be needed many times per second. There will be a throughput limit imposed by the network round-trips and Key Vault's service limits. Each operation also takes longer than it would if the keys were in memory (or at least local).
Encryption and signature verification do not technically have to be run against Key Vault, since they both use only the public keys, and we already have those public keys in memory in the sample. So we can modify those methods to something like this:
// Signature verification
using var rsa = signingRsaKey.Key.Key.ToRSA();
// TODO: Map HashAlgorithmName from given algorithm
var isValid = rsa.VerifyData(input, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
return isValid;
// Encryption (key wrapping)
using var rsa = encryptionRsaKey.Key.Key.ToRSA();
var result = rsa.Encrypt(keyBytes, RSAEncryptionPadding.OaepSHA1);
return result;
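The TODO in the verification snippet could be filled in with a small mapping, for example (a sketch; GetHashAlgorithmName is a hypothetical helper, not part of the sample):
// Sketch of the mapping hinted at by the TODO above.
private static HashAlgorithmName GetHashAlgorithmName(string algorithm)
{
    return algorithm switch
    {
        SecurityAlgorithms.RsaSha256 => HashAlgorithmName.SHA256,
        SecurityAlgorithms.RsaSha384 => HashAlgorithmName.SHA384,
        SecurityAlgorithms.RsaSha512 => HashAlgorithmName.SHA512,
        _ => throw new NotImplementedException(),
    };
}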
If running the decryption/signing operations against Key Vault is too much, then the private keys would need to be on the servers. One option to implement this is certificates, the topic of the next section.
We could also use certificates to sign tokens just like we would have done in the scenario outlined in the problem statement. Key Vault can store certificates as well, and can be used instead of the server-specific certificate store.
The advantage of certificates is that they can be exported from Key Vault (if marked exportable). This means the servers can do the cryptographic operations in memory, which increases throughput since no I/O needs to happen. The downside is that the private keys are held in memory on the server, creating the possibility of leaking them. And since the certificate can be exported, it could also be stolen. This is your regular reminder to enable audit logs on important Key Vaults. Regularly reviewing the access policies/RBAC assignments is also recommended.
Another advantage is that the .NET SDKs readily support certificates, and no custom classes are required. We can use the following libraries:
- Azure.Security.KeyVault.Certificates for downloading the certificates from Key Vault
- Microsoft.IdentityModel.JsonWebTokens for creating and validating the tokens
To get started, we had Key Vault generate two self-signed certificates for us. With certificates, creating the encrypted and signed JWT looks like this:
var certificateClient = new CertificateClient(vaultUri, credential);

var signingCertificate = await certificateClient.DownloadCertificateAsync("TestSigningCertificate", "abcad05bc22b4da8b3c4469719aa5c06");
var signingKey = new X509SecurityKey(signingCertificate.Value);
var signingCredentials = new SigningCredentials(signingKey, SecurityAlgorithms.RsaSha256);

var encryptionCertificate = await certificateClient.DownloadCertificateAsync("TestEncryptionCertificate", "204838aebf3c4bc093342ea4e8d1d986");
var encryptionKey = new X509SecurityKey(encryptionCertificate.Value);
var encryptingCredentials = new EncryptingCredentials(encryptionKey, SecurityAlgorithms.RsaOAEP, SecurityAlgorithms.Aes128CbcHmacSha256);

var handler = new JsonWebTokenHandler();
var encryptedToken = handler.CreateToken(
    JsonConvert.SerializeObject(new
    {
        sub = "test-user-id",
        aud = "TestApp",
        iss = "https://zure.com",
        iat = (long)(DateTime.UtcNow - DateTime.UnixEpoch).TotalSeconds,
        nbf = (long)(DateTime.UtcNow - DateTime.UnixEpoch).TotalSeconds,
        exp = (long)(DateTime.UtcNow.AddDays(1) - DateTime.UnixEpoch).TotalSeconds,
    }),
    signingCredentials,
    encryptingCredentials);
The main change is using CertificateClient to get the public and private key as a certificate object, and then using X509SecurityKey instead of our custom Key Vault RSA key class. Otherwise the code is unchanged. Validation is completely identical, so we will not include it here. You can find the full sample code on GitHub.
In this article we looked at two approaches for creating signed and encrypted JSON Web Tokens. Using keys instead of certificates has the advantage of the private keys being non-exportable, and allowing us to run those operations within Key Vault. All of that is great for security. This does come with the downside of reduced throughput and increased latency. Certificates on the other hand allow us to run all of the cryptographic operations locally, greatly speeding up the process. But then you get the risk of key theft. The choice of what to use will come down to your requirements.