rfc9958.txt

Internet Engineering Task Force (IETF)                       A. Banerjee
Request for Comments: 9958                                    T. Reddy.K
Category: Informational                                 D. Schoinianakis
ISSN: 2070-1721                                                    Nokia
                                                            T. Hollebeek
                                                                DigiCert
                                                            M. Ounsworth
                                                                 Entrust
                                                                May 2026

                Post-Quantum Cryptography for Engineers

Abstract
The advent of a cryptographically relevant quantum computer (CRQC)
would render state-of-the-art, traditional public key algorithms
deployed today obsolete, as the mathematical assumptions underpinning
their security would no longer hold. To address this, protocols and
infrastructure must transition to post-quantum algorithms, which are

skipping to change at line 95
9.2.2. Binding
9.3. HPKE
10. PQC Signatures
10.1. Security Properties of PQC Signatures
10.1.1. EUF-CMA and SUF-CMA
10.2. Details of FN-DSA, ML-DSA, and SLH-DSA
10.3. Details of XMSS and LMS
10.3.1. LMS Key and Signature Sizes
10.4. Hash-then-Sign
11. NIST Recommendations for Security and Performance Trade-offs
12. Comparing PQC KEMs/Signatures and Traditional KEMs/Signatures
13. Post-Quantum and Traditional (PQ/T) Hybrid Schemes
13.1. PQ/T Hybrid Confidentiality
13.2. PQ/T Hybrid Authentication
13.3. Hybrid Cryptographic Algorithm Combinations: Considerations
      and Approaches
13.3.1. Hybrid Cryptographic Combinations
13.3.2. Composite Keys in Hybrid Schemes
13.3.3. Key Reuse in Hybrid Schemes
13.3.4. Future Directions and Ongoing Research
14. Impact on Constrained Devices and Networks

skipping to change at line 223
CRQCs pose a threat to both symmetric and asymmetric cryptographic
schemes. However, the threat to asymmetric cryptography is
significantly greater due to Shor's algorithm [Shors], which can
break widely used public key schemes like RSA and ECC. Symmetric
cryptography and hash functions face a lower risk from Grover's
algorithm [Grovers]; the impact is less severe and can typically be
mitigated by doubling key and digest lengths where the risk applies.
It is crucial for the reader to understand that when "PQC" is
mentioned in this document, it means asymmetric cryptography (or
public key cryptography) and not any symmetric algorithms based on
stream ciphers, block ciphers, hash functions, Message
Authentication Codes (MACs), etc., which are less vulnerable to
quantum computers. This document does not cover topics such as when
traditional algorithms might become vulnerable (for that, see
documents such as [QC-DNS] and others).
This document does not cover unrelated technologies like quantum key
distribution (QKD) or quantum key generation, which use quantum
hardware to exploit quantum effects to protect communications and
generate keys, respectively. PQC is based on conventional math (not
on quantum mechanics) and software, and it can be run on any
general-purpose computer.

This document does not go into the deep mathematics or technical
specification of the PQC algorithms but rather provides an overview

skipping to change at line 382
Finally, in their evaluation criteria for PQC, NIST is assessing the
security levels of proposed post-quantum algorithms by comparing
them against the equivalent traditional and quantum security of
AES-128, AES-192, and AES-256. This indicates that NIST is confident
in the stable security properties of AES, even in the presence of
both traditional and quantum attacks. As a result, 128-bit
algorithms can be considered quantum-safe for the foreseeable
future. However, for compliance purposes, some organizations, such
as the French National Agency for the Security of Information
Systems (ANSSI) [ANSSI] and the National Security Agency (NSA)
(CNSA 2.0) [CNSA2-0], recommend the use of AES-256.

3.2. Asymmetric Cryptography
"Shor's algorithm" efficiently solves the integer factorization
problem (and the related discrete logarithm problem), which underpin
the foundations of the vast majority of public key cryptography that
the world uses today. This implies that, if a CRQC is developed,
today's public key algorithms (e.g., RSA, Diffie-Hellman, and ECC,
as well as less commonly used variants such as ElGamal [RFC6090] and
Schnorr signatures [RFC8235]) and protocols would need to be
replaced

skipping to change at line 481
encryption of the data using symmetric key algorithms, such as AES,
to ensure confidentiality. The threat to symmetric cryptography is
discussed in Section 3.1.

5. NIST PQC Algorithms
At the time of writing, NIST has standardized three PQC algorithms,
with more expected to be standardized in the future (see
[NISTFINAL]). These algorithms are not necessarily drop-in
replacements for traditional asymmetric cryptographic algorithms.
For instance, RSA [RSA] and ECC [RFC6090] can be used as both a KEM
and a signature scheme, whereas there is currently no post-quantum
algorithm that can perform both functions. When upgrading protocols,
it is important to replace the existing use of traditional
algorithms with either a PQC KEM or a PQC signature method,
depending on how the traditional algorithm was previously being
used. Additionally, KEMs, as described in Section 9, present a
different API than either key agreement or key transport primitives.
As a result, they may require protocol-level or application-level
changes in order to be incorporated.

5.1. NIST Candidates Selected for Standardization

5.1.1. PQC Key Encapsulation Mechanisms (KEMs)
ML-KEM: Module-Lattice-Based Key-Encapsulation Mechanism. See FIPS
   203 [ML-KEM].

HQC: Hamming Quasi-Cyclic. See [HQC]. The coding algorithm is based
   on the hardness of the syndrome decoding problem for quasi-cyclic
   concatenated Reed-Muller and Reed-Solomon (RMRS) codes in the
   Hamming metric. Reed-Muller (RM) codes are a class of block
   error-correcting codes commonly used in wireless and deep-space
   communications, while Reed-Solomon (RS) codes are widely used to
   detect and correct multiple-bit errors. HQC has been selected as
   part of the NIST post-quantum cryptography project but has not
   yet been standardized.
5.1.2. PQC Signatures

ML-DSA: Module-Lattice-Based Digital Signature Algorithm. See FIPS
   204 [ML-DSA].

SLH-DSA: Stateless Hash-Based Digital Signature Algorithm. See FIPS
   205 [SLH-DSA].

FN-DSA: Fast-Fourier Transform over NTRU-Lattice-Based Digital
   Signature Algorithm. See [FN-DSA]; note that, at the time of
   publication, FIPS 206 has not been published.
For more information about these, see Sections 8.1, 8.2, and 10.2.
6. ISO Candidates Selected for Standardization

At the time of writing, ISO has selected three PQC KEM algorithms as
candidates for standardization; these are mentioned in the following
subsection.

6.1. PQC Key Encapsulation Mechanisms (KEMs)
FrodoKEM: KEM based on the hardness of learning with errors in
   algebraically unstructured lattices. See [FrodoKEM].

ClassicMcEliece: KEM based on the hardness of syndrome decoding of
   Goppa codes. Goppa codes are a class of error-correcting codes
   that can correct a certain number of errors in a transmitted
   message. The decoding problem involves recovering the original
   message from the received noisy codeword. See [ClassicMcEliece].

NTRU: KEM based on the "N-th degree Truncated polynomial Ring
   Units" (NTRU) lattices. Variants include Streamlined NTRU Prime
   (sntrup761), which is leveraged for use in SSH [RFC9941]. See
   [NTRU].
7. Timeline for Transition

The timeline and driving motivation for transition differ slightly
between data confidentiality (e.g., encryption) and data
authentication (e.g., signature) use cases.

For data confidentiality, one is concerned with the so-called
"harvest now, decrypt later" (HNDL) attack, where a malicious actor
with adequate resources can launch an attack to store sensitive

skipping to change at line 759
8.3. Code-Based Public Key Cryptography

This area of cryptography started in the 1970s and 1980s and was
based on the seminal work of McEliece and Niederreiter, which
focuses on the study of cryptosystems based on error-correcting
codes. Some popular error-correcting codes include Goppa codes (used
in McEliece cryptosystems), encoding and decoding syndrome codes
used in HQC, or quasi-cyclic moderate density parity check (QC-MDPC)
codes.

Examples include all the unbroken NIST Round 4 finalists: Classic
McEliece, HQC (selected by NIST for standardization), and Bit
Flipping Key Encapsulation (BIKE) [BIKE].
9. KEMs

A Key Encapsulation Mechanism (KEM) is a cryptographic technique
used for securely exchanging symmetric key material between two
parties over an insecure channel. It is commonly used in hybrid
encryption schemes where a combination of asymmetric (public key)
and symmetric encryption is employed. The encapsulation operation of
a KEM results in a fixed-length symmetric key that can be used with
a symmetric algorithm, typically a block cipher, in one of two
different ways:

* To derive a data encryption key (DEK) to encrypt the data

* To derive a key encryption key (KEK) used to wrap a DEK

These techniques are often referred to as the Hybrid Public Key
Encryption (HPKE) [RFC9180] mechanism.
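The keygen/encaps/decaps interface described above can be made concrete with a deliberately insecure toy built from textbook Diffie-Hellman. This is only a sketch of the API shape, not any standardized construction: the tiny group parameters, the function names, and the use of SHA-256 to produce the fixed-length secret are all illustrative assumptions.

```python
import hashlib
import secrets

# Toy Diffie-Hellman group: a small Mersenne prime, far too small for
# any real deployment; it exists only to make the KEM API concrete.
P = 2**61 - 1
G = 5

def kem_keygen():
    """Return a (private, public) key pair."""
    sk = secrets.randbelow(P - 2) + 1
    pk = pow(G, sk, P)
    return sk, pk

def kem_encaps(pk):
    """Return (ss, ct): a fresh fixed-length shared secret and the
    ciphertext (encapsulation) to send to the key-pair owner."""
    r = secrets.randbelow(P - 2) + 1        # sender-chosen randomness
    ct = pow(G, r, P)
    ss = hashlib.sha256(pow(pk, r, P).to_bytes(8, "big")).digest()
    return ss, ct

def kem_decaps(sk, ct):
    """Recover the same shared secret from the ciphertext."""
    return hashlib.sha256(pow(ct, sk, P).to_bytes(8, "big")).digest()

sk, pk = kem_keygen()
ss_sender, ct = kem_encaps(pk)              # run by the sender
ss_receiver = kem_decaps(sk, ct)            # run by the receiver
assert ss_sender == ss_receiver             # both sides hold 32 bytes
```

The 32-byte output could then serve as a DEK, or as a KEK that wraps a separately generated DEK, per the two uses listed above.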
The term "encapsulation" is chosen intentionally to indicate that
KEM algorithms behave differently at the API level from the key
agreement

skipping to change at line 876
a key exchange, called Non-Interactive Key Exchange (NIKE), that
refers to whether the sender can compute the shared secret ss and
encrypt content without requiring active interaction (an exchange of
network messages) with the recipient. Figure 3 shows a DH key
exchange, which is an AKE since both parties are using long-term
keys that can have established trust (for example, via
certificates), but it is not a NIKE since the client needs to wait
for the network interaction to receive the receiver's public key pk2
before it can compute the shared secret ss and begin content
encryption. However, a DH key exchange can be an AKE and a NIKE at
the same time if the receiver's public key is known to the sender in
advance (see Figure 4), and many Internet protocols rely on this
property of DH-based key exchanges.
                  +---------+              +---------+
                  | Client  |              | Server  |
                  +---------+              +---------+
+-----------------------+ |                     |
| Long-term client key: | |                     |
| sk1, pk1              |-|                     |
| Long-term server key: | |                     |
| pk2                   | |                     |
| ss = KeyEx(pk2, sk1)  | |                     |

skipping to change at line 903

                          |      encrypted      |
                          |      content        |
                          |-------------------->|
                          |                     | +------------------------+
                          |                     |-| Long-term server key:  |
                          |                     | | sk2, pk2               |
                          |                     | | ss = KeyEx(pk1, sk2)   |
                          |                     | | decryptContent(ss)     |
                          |                     | +------------------------+

            Figure 4: Simultaneous DH-Based AKE and NIKE
The complication with KEMs is that a KEM Encaps() is
non-deterministic; it involves randomness chosen by the sender of
that message. Therefore, in order to perform an AKE, the client must
wait for the server to generate the needed randomness and perform
Encaps() against the client key, which necessarily requires a
network round trip. As a result, a KEM-based protocol can be either
an AKE or a NIKE, but it cannot be both at the same time.
Consequently, certain Internet protocols will necessitate a redesign
to accommodate this distinction, either by introducing extra network
round trips or by

skipping to change at line 932
                             |                 |
                             | pk1             |
                             |---------------->|
                             |                 | +--------------------------+
                             |                 |-| ss1, ct1 = kemEncaps(pk1)|
                             |                 | | pk2, sk2 = kemKeyGen()   |
                             |                 | +--------------------------+
                             |                 |
                             |         ct1,pk2 |
                             |<----------------|
+--------------------------+ |                 |
| ss1 = kemDecaps(ct1, sk1)| |                 |
| ss2, ct2 = kemEncaps(pk2)|-|                 |
| ss = Combiner(ss1, ss2)  | |                 |
+--------------------------+ |                 |
                             |                 |
                             | ct2             |
                             |---------------->|
                             |                 | +--------------------------+
                             |                 |-| ss2 = kemDecaps(ct2, sk2)|
                             |                 | | ss = Combiner(ss1, ss2)  |
                             |                 | +--------------------------+

                         Figure 5: KEM-Based AKE
In the figure above, Combiner(ss1, ss2), often referred to as a KEM
combiner, is a cryptographic construction that takes in two shared
secrets and returns a single combined shared secret. The simplest
combiner is concatenation ss1 || ss2, but combiners can vary in
complexity depending on the cryptographic properties required. For
example, if the combination should preserve IND-CCA2 (see
Section 9.2.1) of either input, even if the other is chosen
maliciously, then a more complex construct is required. Another
consideration for combiner design is the so-called "binding

skipping to change at line 987
chosen-ciphertext attacks. An appropriate definition of IND-CCA2
security for KEMs can be found in [CS01] and [BHK09]. ML-KEM
[ML-KEM] and Classic McEliece provide IND-CCA2 security.

Understanding IND-CCA2 security is essential for individuals
involved in designing or implementing cryptographic systems and
protocols in order to evaluate the strength of the algorithm, assess
its suitability for specific use cases, and ensure that data
confidentiality and security requirements are met. Understanding
IND-CCA2 security is generally not necessary for developers
migrating to using an IETF-vetted KEM within a given protocol or
flow. IND-CCA2 is a widely accepted security notion for public key
encryption mechanisms, making it suitable for a broad range of
applications. When an IETF specification defines a new KEM, its
security considerations should fully describe the relevant
cryptographic properties, including IND-CCA2.
9.2.2. Binding

KEMs also have an orthogonal set of properties to consider when
designing protocols around them: binding [KEEPINGUP]. This can be
"ciphertext binding", "public key binding", "context binding", or
any other property that is important to not be substituted between
KEM invocations. In general, a KEM is considered to bind a certain
value if substitution of that value by an attacker will necessarily
result in a different shared secret being derived. As an example, if
an

skipping to change at line 1024
The solution to binding is generally achieved at the protocol design
level: It is recommended to avoid using the KEM output shared secret
directly without integrating it into an appropriate protocol. While
KEM algorithms provide key secrecy, they do not inherently ensure
source authenticity, protect against replay attacks, or guarantee
freshness. These security properties should be addressed by
incorporating the KEM into a protocol that has been analyzed for
such protections. Even though modern KEMs such as ML-KEM produce
full-entropy shared secrets, it is still advisable for binding
reasons to pass the shared secret through a key derivation function
(KDF) and also include all values that you wish to bind; finally,
you will have a shared secret that is safe to use at the protocol
level.
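As a sketch of that advice, the HKDF-style derivation below (built only on HMAC-SHA-256; the function name, the choice of bound values, and the context label are illustrative assumptions, not a specified construction) mixes the raw KEM shared secret with the ciphertext, the recipient's public key, and a protocol context string, so that substituting any of them necessarily yields a different protocol key:

```python
import hashlib
import hmac

def bind_shared_secret(ss: bytes, ct: bytes, pk: bytes,
                       context: bytes) -> bytes:
    """HKDF-style extract-then-expand over the raw KEM output."""
    # Extract: concentrate the entropy of the shared secret.
    prk = hmac.new(b"\x00" * 32, ss, hashlib.sha256).digest()
    # Expand: fold in every value that must be bound; an attacker who
    # substitutes ct, pk, or the context changes the derived key.
    info = ct + pk + context
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

key = bind_shared_secret(b"s" * 32, b"ciphertext", b"recipient-pk",
                         b"example-protocol v1")
tampered = bind_shared_secret(b"s" * 32, b"CIPHERTEXT", b"recipient-pk",
                              b"example-protocol v1")
assert key != tampered   # a substituted ciphertext changes the key
```

The derived key, rather than the raw KEM shared secret, is what the protocol would then feed into its symmetric layers.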
9.3. HPKE

Modern cryptography has long used the notion of "hybrid encryption",
where an asymmetric algorithm is used to establish a key and then a
symmetric algorithm is used for bulk content encryption. The
previous sections explained important security properties of KEMs,
such as IND-CCA2 security and binding, and emphasized that these
properties must be supported by proper protocol design. One widely
deployed scheme that achieves this is Hybrid Public Key Encryption

skipping to change at line 1132
also offers very efficient signing and verification procedures. The
main potential downsides of FN-DSA relate to the non-triviality of
its algorithms and its need for floating-point arithmetic support
for Gaussian-distributed random number sampling, where the other
lattice schemes use the less efficient but easier-to-support
uniformly distributed random number sampling.

Implementers of FN-DSA need to be aware that FN-DSA signing is
highly susceptible to side-channel attacks unless constant-time
64-bit floating-point operations are used. This requirement is
extremely platform-dependent, as noted in NIST's report [NIST].
The performance characteristics of ML-DSA and FN-DSA may differ
based on the specific implementation and hardware platform.
Generally, ML-DSA is known for its relatively fast signature
generation, while FN-DSA can provide more efficient signature
verification. The choice may depend on whether the application
requires more frequent signature generation or signature
verification (see [LIBOQS]). For further clarity on the sizes and
security levels, please refer to the tables in Sections 11 and 12.

skipping to change at line 1246
messages that need to be transmitted between application and
cryptographic module and making the signature size predictable and
manageable. As a corollary, hashing remains mandatory even for short
messages and imposes a further computational requirement on the
verifier. This makes the performance of hash-then-sign schemes more
consistent, but not necessarily more efficient.
Using a hash function to produce a fixed-size digest of a message
ensures that the signature is compatible with a wide range of systems
and protocols, regardless of the specific message size or format.
Crucially for hardware security modules, hash-then-sign also
significantly reduces the amount of data that needs to be transmitted
and processed by a Hardware Security Module (HSM). Consider
scenarios such as a networked HSM located in a different data center
from the calling application or a smart card connected over a USB
interface. In these cases, streaming a message that is megabytes or
gigabytes long can result in notable network latency, on-device
signing delays, or even depletion of available on-device memory.
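
The data-transfer saving is easy to see in code. The following sketch
(Python, assuming SHA-256 as the pre-hash; the actual signing call to
the HSM is intentionally omitted) hashes a large message incrementally
on the application side, so that only a 32-byte digest ever has to
cross the link to the signing device:

```python
import hashlib

# Hypothetical large payload that would be too costly to stream to an HSM.
message = b"x" * (8 * 1024 * 1024)  # 8 MiB

# Hash incrementally in the application; only the fixed-size digest
# needs to be sent over the (slow) link to the signing device.
h = hashlib.sha256()
CHUNK = 64 * 1024
for i in range(0, len(message), CHUNK):
    h.update(message[i:i + CHUNK])
digest = h.digest()

assert len(digest) == 32  # fixed-size input for the signing device
```

The HSM then signs `digest` rather than `message`, reducing the data
crossing the interface from megabytes to 32 bytes.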
Note that the vast majority of Internet protocols that sign large
messages already perform some form of content hashing at the protocol
level, so this tends to be more of a concern with proprietary
cryptographic protocols and protocols from non-IETF standards bodies.
Protocols like TLS 1.3 and DNSSEC use the hash-then-sign paradigm.
In TLS 1.3 [RFC8446] CertificateVerify messages, the content that is
covered under the signature includes the transcript hash output
(Section 4.4.1 of [RFC8446]), while DNSSEC [RFC4034] uses it to
provide origin authentication and integrity assurance services for
DNS data. Similarly, the Cryptographic Message Syntax (CMS)
[RFC5652] includes a mandatory message digest step before invoking
the signature algorithm.
In the case of ML-DSA, it internally incorporates the necessary hash
operations as part of its signing algorithm. ML-DSA directly takes
the original message, applies a hash function internally, and then
uses the resulting hash value for the signature generation process.
In the case of SLH-DSA, it internally performs randomized message
compression using a keyed hash function that can process arbitrary-
length messages. In the case of FN-DSA, the SHAKE-256 hash function
is used as part of the signature process to derive a digest of the
message being signed.
Therefore, ML-DSA, FN-DSA, and SLH-DSA offer enhanced security over
the traditional hash-then-sign paradigm because, by incorporating
dynamic key material into the message digest, a pre-computed hash
collision on the message to be signed no longer yields a signature
forgery. Applications requiring the performance and bandwidth
benefits of hash-then-sign may still pre-hash at the protocol level
prior to invoking ML-DSA, FN-DSA, or SLH-DSA, but protocol designers
should be aware that doing so reintroduces the weakness that hash
collisions directly yield signature forgeries. Signing the full
un-digested message is recommended where applications can tolerate it.
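
The collision-to-forgery step can be illustrated with a deliberately
weakened toy model. The sketch below is illustrative only: it
truncates SHA-256 to a single byte so that a collision is trivial to
find by brute force, and it uses HMAC purely as a stand-in for a
signature primitive (HMAC is not a signature scheme):

```python
import hashlib
import hmac
import itertools

def toy_hash(m: bytes) -> bytes:
    """Deliberately weak 8-bit 'hash' so a collision is easy to find."""
    return hashlib.sha256(m).digest()[:1]

def toy_sign(key: bytes, data: bytes) -> bytes:
    """Stand-in for a signature primitive (HMAC is NOT a signature)."""
    return hmac.new(key, data, hashlib.sha256).digest()

key = b"demo-key"
m1 = b"message-0"
# Brute-force a second message colliding with m1 under the toy hash.
m2 = next(b"message-%d" % i for i in itertools.count(1)
          if toy_hash(b"message-%d" % i) == toy_hash(m1))

# Hash-then-sign: the collision turns m1's signature into a forgery on m2.
sig_on_m1 = toy_sign(key, toy_hash(m1))
assert sig_on_m1 == toy_sign(key, toy_hash(m2))

# Signing the full message does not share this weakness.
assert toy_sign(key, m1) != toy_sign(key, m2)
```

With a real hash function the brute-force step is infeasible today,
but the structural point stands: under pre-hashing, any hash collision
immediately yields a signature forgery.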

11. NIST Recommendations for Security and Performance Trade-offs

This information is a reprint of information provided in the NIST PQC
project [NIST] as of the time this document is published. Table 2
denotes the five security levels provided by NIST for PQC algorithms.
skipping to change at line 1392
+----------+-------------+------------+------------+----------------+
|    5     | FN-DSA-1024 |    1793    |    2305    |      1280      |
+----------+-------------+------------+------------+----------------+
|    5     | ML-KEM-1024 |    1568    |    3168    |      1588      |
+----------+-------------+------------+------------+----------------+
|    5     | ML-DSA-87   |    2592    |    4896    |      4627      |
+----------+-------------+------------+------------+----------------+

Table 4

12. Comparing PQC KEMs/Signatures and Traditional KEMs/Signatures

This section provides two tables for comparison of different KEMs and
signatures, respectively, in the traditional and post-quantum
scenarios. These tables focus on the secret key sizes, public key
sizes, and ciphertext/signature sizes for the PQC algorithms and
their traditional counterparts of similar security levels.

The first table compares traditional and PQC KEMs in terms of
security, public and private key sizes, and ciphertext sizes.
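
As a rough sense of the scale involved, the following sketch compares
the level-5 PQC sizes quoted in Table 4 above with X25519 and Ed25519
baselines (RFC 7748 and RFC 8032 values); treat it as illustrative
context for the tables, not as an authoritative size reference:

```python
# Sizes in bytes. PQC values are the level-5 rows of Table 4 above;
# X25519 / Ed25519 values are from RFC 7748 / RFC 8032.
sizes = {
    "X25519":      {"pk": 32,   "ct_or_sig": 32},   # ct = peer's ephemeral key
    "ML-KEM-1024": {"pk": 1568, "ct_or_sig": 1588},
    "Ed25519":     {"pk": 32,   "ct_or_sig": 64},
    "ML-DSA-87":   {"pk": 2592, "ct_or_sig": 4627},
}

# Bytes on the wire for one exchange (public key + ciphertext/signature).
overhead = {name: s["pk"] + s["ct_or_sig"] for name, s in sizes.items()}
print(f"ML-KEM-1024 sends {overhead['ML-KEM-1024']} bytes vs "
      f"{overhead['X25519']} for X25519 "
      f"(~{overhead['ML-KEM-1024'] // overhead['X25519']}x)")
```

The roughly 50x growth in key-exchange bytes is the motivation for the
constrained-device considerations discussed in Section 14.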
skipping to change at line 1611
should be followed for combining cryptographic algorithms and that
"known good" pairs should be explicitly listed ("explicit composite")
instead of just allowing arbitrary combinations of any two
cryptographic algorithms ("generic composite").
The same considerations apply when using multiple certificates to
transport a pair of related keys for the same subject. Exactly how
two certificates should be managed in order to avoid some of the
pitfalls mentioned above is still an active area of investigation.
Using two certificates keeps the certificate tooling simple and
straightforward, but in the end, this simply moves the problems
(i.e., that both certificates must be used as a pair, that two
signatures must be carried separately, and that both must validate)
to the certificate management layer, where addressing these concerns
in a robust way can be difficult.
At least one scheme has been proposed that allows the pair of
certificates to exist as a single certificate when being issued and
managed but dynamically split into individual certificates when
needed (see [ENC-PAIR-CERTS]).

13.3.3. Key Reuse in Hybrid Schemes

An important security note, particularly when using hybrid signature
keys, but also to a lesser extent hybrid KEM keys, is key reuse. In
traditional cryptography, problems can occur with so-called
"cross-protocol attacks" when the same key can be used for multiple
protocols; for example, signing TLS handshakes and signing S/MIME
emails. While it is not best practice to reuse keys within the same
protocol, e.g., using the same key for multiple S/MIME certificates
for the same user, it is not generally catastrophic for security.
However, key reuse becomes a large security problem within hybrid
schemes.
Consider an {RSA, ML-DSA} hybrid key where the RSA key also appears
within a single-algorithm certificate. In this case, an attacker
could perform a "stripping attack" where they take some piece of data
signed with the {RSA, ML-DSA} key, remove the ML-DSA signature, and
present the data as if it was intended for the RSA-only certificate.
This leads to a set of security definitions called "non-separability
properties", which refers to how well the signature scheme resists
various complexities of downgrade/stripping attacks
[HYBRID-SIG-SPECT]. Therefore, it is recommended that implementers
either reuse the entire hybrid key as a whole or perform fresh key
generation of all component keys per usage, and must not take an
existing key and reuse it as a component of a hybrid key.
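
The stripping attack is mechanically trivial when a component key is
reused. In the toy sketch below, HMAC stands in for both component
signature algorithms; this models the attack on a naive signature
pair, and is not an implementation of any composite scheme:

```python
import hashlib
import hmac

def sign(key: bytes, msg: bytes) -> bytes:
    # HMAC stands in for RSA / ML-DSA signing in this toy model.
    return hmac.new(key, msg, hashlib.sha256).digest()

rsa_key, mldsa_key = b"rsa-component-key", b"ml-dsa-component-key"
msg = b"some signed artifact"

# Naive hybrid signature: a pair of independent component signatures.
hybrid_sig = (sign(rsa_key, msg), sign(mldsa_key, msg))

# Stripping attack: because the RSA key is reused in an RSA-only
# certificate, the attacker simply drops the ML-DSA half ...
stripped = hybrid_sig[0]

# ... and an RSA-only verifier accepts it as an ordinary RSA signature.
assert stripped == sign(rsa_key, msg)
```

Nothing in the stripped artifact records that a second signature ever
existed, which is exactly what the non-separability properties cited
above are designed to prevent.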

13.3.4. Future Directions and Ongoing Research

Many aspects of hybrid cryptography are still under investigation.
The LAMPS Working Group at IETF is actively exploring the security
properties of these combinations, and future standards will reflect
the evolving consensus on these issues.

14. Impact on Constrained Devices and Networks

PQC algorithms generally have larger keys, ciphertext, and signature
sizes than traditional public key algorithms. This has particular
impact on constrained devices that operate with limited data rates.
In the IoT space, these constraints have historically driven
significant optimization efforts in the IETF (e.g., in the LAKE and
CoRE Working Groups) to adapt security protocols to
resource-constrained environments.
As the transition to PQC progresses, these environments will face
similar challenges. Larger message sizes can increase handshake
latency, raise energy consumption, and require fragmentation logic.
Work is ongoing in the IETF to study how PQC can be deployed in
constrained devices (see [CONSTRAIN-DEV-PCQ]).
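
As a back-of-the-envelope illustration of the fragmentation point (the
1280-byte IPv6 minimum MTU and the 48-byte header allowance below are
assumptions chosen for this sketch, not values from any IETF profile),
carrying a single ML-DSA-87 public key plus signature already forces
fragmentation:

```python
import math

# ML-DSA-87 sizes in bytes, as quoted earlier in this document.
sig_bytes, pk_bytes = 4627, 2592

# Assumed link budget: IPv6 minimum MTU minus a 48-byte header allowance.
payload_per_frame = 1280 - 48

frames = math.ceil((sig_bytes + pk_bytes) / payload_per_frame)
print(f"{sig_bytes + pk_bytes} bytes of key+signature -> {frames} fragments")
```

Each extra fragment adds a round of radio time and a reassembly
buffer, which is where the latency and energy costs come from.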

15. Security Considerations

skipping to change at line 1739
organization-wide written cryptographic policies or automated
cryptographic policy systems.

Numerous commercial solutions are available for detecting hard-coded
cryptographic algorithms in source code and compiled binaries, as
well as providing cryptographic policy management control planes for
enterprise and production environments.

15.3. Jurisdictional Fragmentation

Another potential application of hybrid schemes bears mentioning,
even though it is not directly related to PQC: using hybrids to
navigate inter-jurisdictional cryptographic connections. Traditional
cryptography is already fragmented by jurisdiction. Consider that
while most jurisdictions support ECDH, those in the United States
will prefer the NIST curves while those in Germany will prefer the
Brainpool curves. China, Russia, and other jurisdictions have their
own national cryptography standards. This situation of fragmented
global cryptography standards is unlikely to improve with PQC. If
"and" mode hybrid schemes become standardized for the reasons
mentioned above, then one could imagine leveraging them to create
ciphersuites in which a single cryptographic operation simultaneously
satisfies the cryptographic requirements of both endpoints.

15.4. Hybrid Key Exchange and Signatures: Bridging the Gap Between
      PQ/T Cryptography

Post-quantum algorithms selected for standardization are relatively
new and have not been subject to the same depth of study as
traditional algorithms. PQC implementations will also be new and
therefore more likely to contain implementation bugs than the
battle-tested crypto implementations that are relied on today. In
addition, certain deployments may need to retain traditional
algorithms due to
skipping to change at line 1948
BBS Signature Scheme", Work in Progress, Internet-Draft,
draft-irtf-cfrg-bbs-signatures-10, 8 January 2026,
<https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-
bbs-signatures-10>.
[BHK09] Bellare, M., Hofheinz, D., and E. Kiltz, "Subtleties in
the Definition of IND-CCA: When and How Should Challenge-
Decryption be Disallowed?", Cryptology ePrint Archive,
Paper 2009/418, 2009, <https://eprint.iacr.org/2009/418>.
[BIKE] "BIKE", <https://bikesuite.org/>.
[BPQS] Chalkias, K., Brown, J., Hearn, M., Lillehagen, T., Nitto,
I., and T. Schroeter, "Blockchained Post-Quantum
Signatures", Cryptology ePrint Archive, Paper 2018/658,
n.d., <https://eprint.iacr.org/2018/658>.
[BSI-PQC] BSI, "Quantum-safe cryptography - fundamentals, current
developments and recommendations", 18 May 2022,
<https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/
Publications/Brochure/quantum-safe-
skipping to change at line 2308
DigiCert
Pittsburgh, PA
United States of America
Email: tim.hollebeek@digicert.com

Mike Ounsworth
Entrust Limited
2500 Solandt Road, Suite 100
Ottawa, Ontario K2K 3G5
Canada
Email: mike@ounsworth.ca

End of changes. 36 change blocks. 82 lines changed or deleted. 88 lines changed or added.

This html diff was produced by rfcdiff 1.48.