---
title: "Post-Quantum Cryptography for Engineers"
abbrev: "PQC for Engineers"
category: info
ipr: trust200902
docname: draft-ietf-pquip-pqc-engineers-14
submissiontype: IETF
number: 9958
date: 2026-05
consensus: true
v: 3
lang: en
pi: [toc, symrefs, sortrefs]
area: SEC
workgroup: pquip
keyword:
 - PQC
stand_alone: yes
skipping to change at line 59
    email: "tim.hollebeek@digicert.com"
 -
    ins: M. Ounsworth
    name: Mike Ounsworth
    org: Entrust Limited
    abbrev: Entrust
    street: 2500 Solandt Road, Suite 100
    city: Ottawa, Ontario
    country: Canada
    code: K2K 3G5
    email: mike@ounsworth.ca
normative:
  ML-KEM:
    title: "Module-Lattice-Based Key-Encapsulation Mechanism Standard"
    target: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.203.pdf
    seriesinfo:
      NIST FIPS: 203
      DOI: 10.6028/nist.fips.203
    author:
      -
skipping to change at line 547
    author:
      -
        org: ANSSI
    date: 2023-12-21
  HQC:
    title: "HQC"
    target: http://pqc-hqc.org/
    date: false
  BIKE:
    title: "BIKE"
    target: https://bikesuite.org/
    date: false
  PQUIP-WG:
    title: "Post-Quantum Use In Protocols (pquip)"
    author:
      -
        org: IETF
    target: https://datatracker.ietf.org/group/pquip/documents/
    date: false
  OQS:
    title: "Open Quantum Safe Project"
skipping to change at line 700

a) Correct author names in reference entry for draft-ietf-pquip-pqc-hsm-constrained.

Current:

   [I-D.ietf-pquip-pqc-hsm-constrained]
              Reddy.K, T., Wing, D., S, B., and K. Kwiatkowski,
              "Adapting Constrained Devices for Post-Quantum
              Cryptography", Work in Progress, Internet-Draft, draft-
              ietf-pquip-pqc-hsm-constrained-02, 18 October 2025,
              <https://datatracker.ietf.org/doc/html/draft-ietf-pquip-
              pqc-hsm-constrained-02>.

b) Update the document to include <sup> elements (per A48 mail).

c) Update draft-hale-mls-combiner-01 to draft-ietf-mls-combiner-02 since -01 was replaced (note title change).
-->
<!-- XML for reference update (draft-ietf-pquip-pqc-hsm-constrained): <!-- XML for reference update (draft-ietf-pquip-pqc-hsm-constrained):
<reference anchor="I-D.ietf-pquip-pqc-hsm-constrained" target="https://datatracker.ie tf.org/doc/html/draft-ietf-pquip-pqc-hsm-constrained-02"> <reference anchor="I-D.ietf-pquip-pqc-hsm-constrained" target="https://datatracker.ie tf.org/doc/html/draft-ietf-pquip-pqc-hsm-constrained-02">
<front> <front>
<title>Adapting Constrained Devices for Post-Quantum Cryptography</title> <title>Adapting Constrained Devices for Post-Quantum Cryptography</title>
<author initials="T." surname="Reddy" fullname="Tirumaleswar Reddy.K"> <author initials="T." surname="Reddy" fullname="Tirumaleswar Reddy.K">
<organization>Nokia</organization> <organization>Nokia</organization>
</author> </author>
skipping to change at line 752

PQC is sometimes referred to as "quantum-proof", "quantum-safe", or "quantum-resistant". It is the development of cryptographic algorithms designed to secure communication and data in a world where quantum computers are powerful enough to break traditional cryptographic systems, such as RSA (Rivest-Shamir-Adleman) and ECC (Elliptic Curve Cryptography). PQC algorithms are intended to be resistant to attacks by quantum computers, which use quantum-mechanical phenomena to solve mathematical problems that are infeasible for classical computers.

As the threat of CRQCs draws nearer, engineers responsible for designing, maintaining, and securing cryptographic systems must prepare for the significant changes that the existence of CRQCs will bring. Engineers need to understand how to implement post-quantum algorithms in applications, how to evaluate the trade-offs between security and performance, and how to ensure backward compatibility with current systems where needed. This is not merely a one-for-one replacement of algorithms; in many cases, the shift to PQC will involve redesigning protocols and infrastructure to accommodate the significant differences in resource utilization and key sizes between traditional and PQC algorithms. Due to the wide-ranging nature of these impacts, discussions of protocol changes are integrated throughout this document rather than being confined to a single section.

This document aims to provide general guidance to engineers working on cryptographic libraries, network security, and infrastructure development, where long-term security planning is crucial. The document covers topics such as selecting appropriate PQC algorithms and understanding the differences between PQC Key Encapsulation Mechanisms (KEMs) and traditional Diffie-Hellman (DH) and RSA-style key exchanges, and it provides insights into expected differences in keys, ciphertext, signature sizes, and processing times between PQC and traditional algorithms. Additionally, it discusses the potential threat to symmetric cryptography and hash functions from CRQCs.

It is important to remember that asymmetric algorithms (also known as public key algorithms) are largely used for secure communications between organizations or endpoints that may not have previously interacted, so a significant amount of coordination between organizations, and within and between ecosystems, needs to be taken into account. Such transitions are some of the most complicated in the tech industry and will require staged migrations in which upgraded agents need to coexist and communicate with non-upgraded agents at a scale never before undertaken.

The National Security Agency (NSA) of the United States released an article on future PQC algorithm requirements for US national security systems {{CNSA2-0}} based on the need to protect against deployments of CRQCs in the future. The German Federal Office for Information Security (BSI) has also released a PQC migration and recommendations document {{BSI-PQC}} that largely aligns with United States National Institute of Standards and Technology (NIST) and NSA guidance but differs in aspects such as specific PQC algorithm profiles.

CRQCs pose a threat to both symmetric and asymmetric cryptographic schemes. However, the threat to asymmetric cryptography is significantly greater due to Shor's algorithm {{Shors}}, which can break widely used public key schemes like RSA and ECC. Symmetric cryptography and hash functions face a lower risk from Grover's algorithm {{Grovers}}, although the impact is less severe and can typically be mitigated by doubling key and digest lengths where the risk applies. It is crucial for the reader to understand that when "PQC" is mentioned in the document, it means asymmetric cryptography (or public key cryptography) and not any symmetric algorithms based on stream ciphers, block ciphers, hash functions, Message Authentication Codes (MACs), etc., which are less vulnerable to quantum computers. This document does not cover topics such as when traditional algorithms might become vulnerable (for that, see documents such as {{QC-DNS}} and others).

This document does not cover unrelated technologies like quantum key distribution (QKD) or quantum key generation, which use quantum hardware to exploit quantum effects to protect communications and generate keys, respectively. PQC is based on conventional math (not on quantum mechanics) and software, and it can be run on any general-purpose computer.

This document does not go into the deep mathematics or technical specification of the PQC algorithms but rather provides an overview to engineers on the current threat landscape and the relevant algorithms designed to help prevent those threats. Also, the cryptographic and algorithmic guidance given in this document should be taken as non-authoritative if it conflicts with emerging and evolving guidance from the IRTF's Crypto Forum Research Group (CFRG).

# Terminology

Quantum computer:
: A computer that performs computations using quantum-mechanical phenomena such as superposition and entanglement.
skipping to change at line 801

For unstructured data such as symmetric encrypted data or cryptographic hashes, although CRQCs can search for specific solutions across all possible input combinations (e.g., Grover's algorithm), no quantum algorithm is known to break the underlying security properties of these classes of algorithms. Symmetric-key cryptography, which includes keyed primitives such as block ciphers (e.g., AES) and message authentication mechanisms (e.g., HMAC-SHA256), relies on secret keys shared between the sender and receiver and remains secure even in a post-quantum world. Symmetric cryptography also includes hash functions (e.g., SHA-256) that are used for secure message digesting without any shared key material. Hashed Message Authentication Code (HMAC) is a specific construction that utilizes a cryptographic hash function and a secret key shared between the sender and receiver to produce a message authentication code.
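
For readers less familiar with the distinction, the short Python sketch below (standard library only; the key and message values are arbitrary examples, not taken from this document) contrasts an unkeyed hash with the keyed HMAC construction described above:

~~~~ python
import hashlib
import hmac
import secrets

message = b"example payload"

# Unkeyed hash: anyone can recompute this digest over the same input.
digest = hashlib.sha256(message).hexdigest()

# Keyed MAC: only holders of the shared secret key can compute or verify the tag.
key = secrets.token_bytes(32)
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

print("SHA-256 digest:", digest)
print("HMAC-SHA256 tag:", tag)
~~~~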

Grover's algorithm is a quantum search algorithm that provides a theoretical quadratic speedup for searching an unstructured database, compared to traditional search algorithms.

This has led to the common misconception that symmetric key lengths need to be doubled for quantum security. When you consider the mapping of hash values to their corresponding hash inputs (also known as pre-image) or of ciphertext blocks to the corresponding plaintext blocks as an unstructured database, then Grover's algorithm theoretically requires doubling the key sizes of the symmetric algorithms that are currently deployed at the time of publication to counter the quadratic speedup and maintain the current security level. This is because Grover's algorithm reduces the amount of operations to break 128-bit symmetric cryptography to 2^{64} quantum operations, which might sound computationally feasible. However, quantum operations are fundamentally different from classical ones, as 2^{64} classical operations can be efficiently parallelized but 2^{64} quantum operations must be performed serially, making them infeasible on practical quantum computers.

Grover's algorithm is highly non-parallelizable and even if one deploys 2^c computational units in parallel to brute-force a key using Grover's algorithm, it will complete in time proportional to 2^{(128-c)/2}, or, put simply, using 256 quantum computers will only reduce runtime by a factor of 16, 1024 quantum computers will only reduce runtime by a factor of 32, and so forth (see {{NIST}} and {{Cloudflare}}). Due to this inherent limitation, the general expert consensus is that AES-128 remains secure in practice and key sizes do not necessarily need to be doubled.

It would be natural to ask whether future research will develop a superior algorithm that could outperform Grover's algorithm in the general case. However, Christof Zalka has shown that Grover's algorithm achieves the best possible complexity for this type of search, meaning no significantly faster quantum approach is expected {{Grover-Search}}.
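
As a back-of-the-envelope check on the numbers above (illustrative only), the following Python snippet evaluates the 2^{(128-c)/2} expression for a few parallel deployments and reproduces the factor-of-16 and factor-of-32 figures quoted in the preceding paragraph:

~~~~ python
# Grover's algorithm against a 128-bit key: serial cost is about 2**64 quantum
# operations; with 2**c parallel instances the time only drops to
# 2**((128 - c) / 2), i.e., a speedup of just 2**(c / 2).

KEY_BITS = 128

for machines in (1, 256, 1024):          # powers of two, so machines == 2**c
    c = machines.bit_length() - 1
    serial_time = 2 ** (KEY_BITS / 2)
    parallel_time = 2 ** ((KEY_BITS - c) / 2)
    print(f"{machines:>5} machines: time ~2^{(KEY_BITS - c) / 2:.1f}, "
          f"speedup factor {serial_time / parallel_time:.0f}")
~~~~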
<!-- [rfced] "CNSA 2.0" is a suite of algorithms from the NSA, not an Finally, in their evaluation criteria for PQC, NIST is assessing the security levels
organization. The organization is the National Security Agency (NSA). May we of proposed post-quantum algorithms by comparing them against the equivalent traditio
update the sentence as follows to clarify? nal and quantum security of AES-128, AES-192, and AES-256. This indicates that NIST i
s confident in the stable security properties of AES, even in the presence of both tr
Current: aditional and quantum attacks. As a result, 128-bit algorithms can be considered quan
However, for compliance purposes, some organizations, such as the French tum-safe for the foreseeable future. However, for compliance purposes, some organizat
National Agency for the Security of Information Systems (ANSSI) {{ANSSI}} and ions, such as the French National Agency for the Security of Information Systems (ANS
CNSA 2.0 (Commercial National Security Algorithm Suite 2.0) {{CNSA2-0}}, SI) {{ANSSI}} and the National Security Agency (NSA) (CNSA 2.0) {{CNSA2-0}}, recommen
recommend the use of AES-256. d the use of AES-256.
Perhaps:
However, for compliance purposes, some organizations, such as the French
National Agency for the Security of Information Systems (ANSSI) {{ANSSI}} and the
National Security Agency (NSA) {{CNSA2-0}}, recommend the use of AES-256.
Finally, in their evaluation criteria for PQC, NIST is assessing the security levels
of proposed post-quantum algorithms by comparing them against the equivalent traditio
nal and quantum security of AES-128, AES-192, and AES-256. This indicates that NIST i
s confident in the stable security properties of AES, even in the presence of both tr
aditional and quantum attacks. As a result, 128-bit algorithms can be considered quan
tum-safe for the foreseeable future. However, for compliance purposes, some organizat
ions, such as the French National Agency for the Security of Information Systems (ANS
SI) {{ANSSI}} and Commercial National Security Algorithm Suite 2.0 (CNSA 2.0) {{CNSA2
-0}}, recommend the use of AES-256.

## Asymmetric Cryptography

"Shor's algorithm" efficiently solves the integer factorization problem (and the related discrete logarithm problem), which underpin the foundations of the vast majority of public key cryptography that the world uses today. This implies that, if a CRQC is developed, today's public key algorithms (e.g., RSA, Diffie-Hellman, and ECC, as well as less commonly used variants such as ElGamal {{RFC6090}} and Schnorr signatures {{RFC8235}}) and protocols would need to be replaced by algorithms and protocols that can offer cryptanalytic resistance against CRQCs. Note that Shor's algorithm cannot run solely on a classical computer; it requires a CRQC.

For example, studies show that, if a CRQC existed, it could break RSA-2048 in hours or even seconds depending on assumptions about error correction {{RSAShor}} {{RSA8HRS}} {{RSA10SC}}. While such machines are purely theoretical at the time of writing, this illustrates the eventual vulnerability of RSA to CRQCs.

For structured data such as public keys and signatures, CRQCs can fully solve the underlying hard problems used in traditional cryptography (see Shor's algorithm). Because an increase in the size of the key pair would not provide a secure solution (short of RSA keys that are many gigabytes in size {{PQRSA}}), a complete replacement of the algorithm is needed. Therefore, post-quantum public key cryptography must rely on problems that are different from the ones used in traditional public key cryptography (i.e., the integer factorization problem, the finite-field discrete logarithm problem, and the elliptic-curve discrete logarithm problem).

## Quantum Side-Channel Attacks

Cryptographic side-channel attacks exploit physical implementations (such as timing, power consumption, or electromagnetic leakage) to recover secret keys.

The field of cryptographic side-channel attacks potentially stands to gain a boost in attacker power once cryptanalytic techniques can be enhanced with quantum computation techniques {{QuantSide}}. While a full discussion of quantum side-channel techniques is beyond the scope of this document, implementers of cryptographic hardware should be aware that current best practices for side-channel resistance may not be sufficient against quantum adversaries.
<!-- [rfced] We slightly rephrased the following to avoid repetition of "hence"
(i.e., made new sentence and replaced the first "hence" with "Because of
this"). Please review and let us know any concerns.
Original:
Similar to key agreement, signatures also depend on a public-private
key pair based on the same mathematics as for key agreement and key transport,
and hence a break in existing public key cryptography will also affect
traditional digital signatures, hence the importance of developing
post-quantum digital signatures.
Updated:
Similar to key agreement, signatures also depend on a public-private
key pair based on the same mathematics as for key agreement and key transport.
Because of this, a break in existing public key cryptography will also affect
traditional digital signatures, hence the importance of developing
post-quantum digital signatures.

# Traditional Cryptographic Primitives That Could Be Replaced by PQC

Any asymmetric cryptographic algorithm based on integer factorization, finite field discrete logarithms, or elliptic-curve discrete logarithms will be vulnerable to attacks using Shor's algorithm on a CRQC. This document focuses on the principal functions of asymmetric cryptography:

Key agreement and key transport:
: Key agreement schemes, typically referred to as Diffie-Hellman (DH) or Elliptic Curve Diffie-Hellman (ECDH), as well as key transport, typically using RSA encryption, are used to establish a shared cryptographic key for secure communication. They are one of the mechanisms that can be replaced by PQC, as they are based on existing public key cryptography and are therefore vulnerable to Shor's algorithm. A CRQC can employ Shor's algorithm to efficiently find the prime factors of a large public key (in the case of RSA), which, in turn, can be exploited to derive the private key. In the case of DH, a CRQC has the potential to calculate the discrete logarithm of the (short- or long-term) DH public key. This, in turn, would reveal the secret required to derive the symmetric encryption key.

Digital signatures:
: Digital signature schemes are used to authenticate the identity of a sender, detect unauthorized modifications to data, and underpin trust in a system. Similar to key agreement, signatures also depend on a public-private key pair based on the same mathematics as for key agreement and key transport. Because of this, a break in existing public key cryptography will also affect traditional digital signatures, hence the importance of developing post-quantum digital signatures.

Boneh-Boyen-Shacham (BBS) signatures:
: BBS signatures are a privacy-preserving signature scheme that offers zero-knowledge proof-like properties by allowing selective disclosure of specific signed attributes without revealing the entire set of signed data. The security of BBS signatures relies on the hardness of the discrete logarithm problem, making them vulnerable to Shor's algorithm. A CRQC can break the data authenticity security property of BBS but not the data confidentiality ({{Section 6.9 of I-D.irtf-cfrg-bbs-signatures}}).

Content encryption:
: Content encryption typically refers to the encryption of the data using symmetric key algorithms, such as AES, to ensure confidentiality. The threat to symmetric cryptography is discussed in {{symmetric}}.

# NIST PQC Algorithms

At the time of writing, NIST has standardized three PQC algorithms, with more expected to be standardized in the future (see {{NISTFINAL}}). These algorithms are not necessarily drop-in replacements for traditional asymmetric cryptographic algorithms. For instance, RSA {{RSA}} and ECC {{RFC6090}} can be used as both a KEM and a signature scheme, whereas there is currently no post-quantum algorithm that can perform both functions. When upgrading protocols, it is important to replace the existing use of traditional algorithms with either a PQC KEM or a PQC signature method, depending on how the traditional algorithm was previously being used. Additionally, KEMs, as described in {{KEMs}}, present a different API than either key agreement or key transport primitives. As a result, they may require protocol-level or application-level changes in order to be incorporated.

## NIST Candidates Selected for Standardization
<!-- [rfced] In Sections 5.1.1, 5.1.2, and 6.1, may we update the lists to <!-- [rfced] In Sections 5.1.1, 5.1.2, and 6.1, may we update the lists to
better indicate the term being defined? We suggest placing the term rather better indicate the term being defined? We suggest placing the term rather
than the citation before the colon. See the suggested text in a), b), and c) than the citation before the colon. See the suggested text in a), b), and c)
below. below.
We also have some additional questions regarding Section 5.1.2: We also have some additional questions regarding Section 5.1.2:
skipping to change at line 943
      message. The decoding problem involves recovering the original
      message from the received noisy codeword. See [ClassicMcEliece].

   NTRU: KEM based on the "N-th degree Truncated polynomial Ring
      Units" (NTRU) lattices. Variants include Streamlined NTRU Prime
      (sntrup761), which is leveraged for use in SSH [RFC9941]. See [NTRU].
-->

### PQC Key Encapsulation Mechanisms (KEMs)

ML-KEM:
: Module-Lattice-Based Key-Encapsulation Mechanism. See FIPS 203 {{ML-KEM}}.

HQC:
: Hamming Quasi-Cyclic. See {{HQC}}. The coding algorithm is based on the hardness of the syndrome decoding problem for quasi-cyclic concatenated Reed-Muller and Reed-Solomon (RMRS) codes in the Hamming metric. Reed-Muller (RM) codes are a class of block error-correcting codes commonly used in wireless and deep-space communications, while Reed-Solomon (RS) codes are widely used to detect and correct multiple-bit errors. HQC has been selected as part of the NIST post-quantum cryptography project but has not yet been standardized.

### PQC Signatures

ML-DSA:
: Module-Lattice-Based Digital Signature Algorithm. See FIPS 204 {{ML-DSA}}.

SLH-DSA:
: Stateless Hash-Based Digital Signature Algorithm. See FIPS 205 {{SLH-DSA}}.

FN-DSA:
: Fast-Fourier Transform over NTRU-Lattice-Based Digital Signature Algorithm. See {{FN-DSA}}; note that, at the time of publication, FIPS 206 has not been published.

For more information about these, see Sections {{lattice-based}}{: format="counter"}, {{hash-based}}{: format="counter"}, and {{sig-scheme}}{: format="counter"}.

# ISO Candidates Selected for Standardization

At the time of writing, ISO has selected three PQC KEM algorithms as candidates for standardization; these are mentioned in the following subsection.

## PQC Key Encapsulation Mechanisms (KEMs)

FrodoKEM:
: KEM based on the hardness of learning with errors in algebraically unstructured lattices. See {{FrodoKEM}}.

ClassicMcEliece:
: KEM based on the hardness of syndrome decoding of Goppa codes. Goppa codes are a class of error-correcting codes that can correct a certain number of errors in a transmitted message. The decoding problem involves recovering the original message from the received noisy codeword. See {{ClassicMcEliece}}.

NTRU:
: KEM based on the "N-th degree Truncated polynomial Ring Units" (NTRU) lattices. Variants include Streamlined NTRU Prime (sntrup761), which is leveraged for use in SSH {{?RFC9941}}. See {{NTRU}}.

# Timeline for Transition {#timeline}

The timeline and driving motivation for transition differ slightly between data confidentiality (e.g., encryption) and data authentication (e.g., signature) use cases.

For data confidentiality, one is concerned with the so-called "harvest now, decrypt later" (HNDL) attack where a malicious actor with adequate resources can launch an attack to store sensitive encrypted data today that they hope to decrypt once a CRQC is available. This implies that, every day, sensitive encrypted data is susceptible to the attack by not implementing quantum-safe strategies, as it corresponds to data possibly being deciphered in the future.

For authentication, it is often the case that signatures have a very short lifetime between signing and verifying (such as during a TLS handshake), but some authentication use cases do require long lifetimes, such as signing firmware or software that will be active for decades, signing legal documents, or signing certificates that will be embedded into hardware devices such as smart cards. Even for short-lived signature use cases, the infrastructure often relies on long-lived root keys, which can be difficult to update or replace on in-field devices.

~~~~ aasvg
skipping to change at line 1036

Hash-based Public Key Cryptography (PKC) has been around since the 1970s, when it was developed by Lamport and Merkle. It is used to create digital signature algorithms, and its security is based on the security of the underlying cryptographic hash function. Many variants of hash-based signatures (HBSs) have been developed since the 1970s, including the recent XMSS {{RFC8391}}, HSS/LMS {{RFC8554}}, or BPQS {{BPQS}} schemes. Unlike many other digital signature techniques, most hash-based signature schemes are stateful, which means that signing necessitates the update and careful tracking of the state of the secret key. Producing multiple signatures using the same secret key state results in loss of security and may ultimately enable signature forgery attacks against that key.
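
For intuition, the following toy Python sketch shows a Lamport-style one-time signature with artificially small parameters (16 message bits instead of a full digest); the helper names are illustrative, and this is not XMSS, HSS/LMS, BPQS, or any standardized scheme. Each signature reveals one hash preimage per message bit, which is why reusing the same key state for a second message leaks additional preimages and erodes security:

~~~~ python
import hashlib
import secrets

def H(data):
    return hashlib.sha256(data).digest()

BITS = 16  # toy parameter; a real Lamport key covers all digest bits

def keygen():
    # Secret key: two random preimages per message bit; public key: their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(BITS)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg):
    digest = int.from_bytes(H(msg), "big")
    return [(digest >> i) & 1 for i in range(BITS)]

def sign(sk, msg):
    # One-time use only: each signature reveals one preimage per bit position.
    return [sk[i][bit] for i, bit in enumerate(msg_bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(msg_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"firmware image v1")
print(verify(pk, b"firmware image v1", sig))   # True
print(verify(pk, b"tampered image", sig))      # False (with overwhelming probability)
~~~~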

Stateful hash-based signatures with long service lifetimes require additional operational complexity compared to other signature types. For example, consider a 20-year root key; there is an expectation that 20 years is longer than the expected lifetime of the hardware that key is stored on, so the key will need to be migrated to new hardware at some point. Disaster-recovery scenarios where the primary node fails without warning can be similarly tricky. This requires careful operational and compliance consideration to ensure that no private key state can be reused across the migration or disaster recovery event. One approach for avoiding these issues is to only use stateful HBSs for short-term use cases that do not require horizontal scaling, for example, signing a batch of firmware images and then retiring the signing key.

The SLH-DSA algorithm, which was standardized by NIST, leverages the HORST (Hash to Obtain Random Subset with Trees) technique and remains the only standardized hash-based signature scheme that is stateless, thus avoiding the complexities associated with state management. SLH-DSA is an advancement on SPHINCS that reduces the signature sizes in SPHINCS and makes it more compact.

## Code-Based Public Key Cryptography {#code-based}

This area of cryptography started in the 1970s and 1980s and was based on the seminal work of McEliece and Niederreiter, which focuses on the study of cryptosystems based on error-correcting codes. Some popular error-correcting codes include Goppa codes (used in McEliece cryptosystems), encoding and decoding syndrome codes used in HQC, or quasi-cyclic moderate density parity check (QC-MDPC) codes.

Examples include all the unbroken NIST Round 4 finalists: Classic McEliece, HQC (selected by NIST for standardization), and Bit Flipping Key Encapsulation (BIKE) {{BIKE}}.
<!-- [rfced] Please review the following sentence. The expansion of "KEM
encapsulation" would be "key encapsulation mechanism encapsulation" if it were
left as is. Is this correct? Or may we update as follows to avoid repetition?
Current:
The KEM encapsulation results in a fixed-length symmetric key that
can be used with a symmetric algorithm, typically a block cipher, in one of
two different ways:
Perhaps:
The KEM results in a fixed-length symmetric key that can be used with
a symmetric algorithm, typically a block cipher, in one of two different ways:

# KEMs {#KEMs}

A Key Encapsulation Mechanism (KEM) is a cryptographic technique used for securely exchanging symmetric key material between two parties over an insecure channel. It is commonly used in hybrid encryption schemes where a combination of asymmetric (public key) and symmetric encryption is employed. The encapsulation operation of a KEM results in a fixed-length symmetric key that can be used with a symmetric algorithm, typically a block cipher, in one of two different ways:

* To derive a data encryption key (DEK) to encrypt the data

* To derive a key encryption key (KEK) used to wrap a DEK

These techniques are often referred to as the Hybrid Public Key Encryption (HPKE) {{!RFC9180}} mechanism.
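
As a rough sketch of the first option, a KEM shared secret can be fed through a key derivation function to produce a DEK. The example below is illustrative only: it implements a plain HKDF-style extract-and-expand (RFC 5869) with Python's standard library, is not the HPKE key schedule defined in RFC 9180, and the label and lengths are arbitrary examples.

~~~~ python
import hashlib
import hmac
import secrets

def hkdf_sha256(ikm, info, length=32, salt=b""):
    # HKDF (RFC 5869): extract-then-expand with HMAC-SHA256.
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-in for the shared secret produced by kemEncaps()/kemDecaps().
ss = secrets.token_bytes(32)

dek = hkdf_sha256(ss, info=b"example DEK derivation", length=32)
print(dek.hex())
~~~~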

The term "encapsulation" is chosen intentionally to indicate that KEM algorithms behave differently at the API level from the key agreement or key encipherment and key transport mechanisms that are in use today. Key agreement schemes imply that both parties contribute a public-private key pair to the exchange, while key encipherment and key transport schemes imply that the symmetric key material is chosen by one party and "encrypted" or "wrapped" for the other party. KEMs, on the other hand, behave according to the following API primitives {{PQCAPI}}:

* def kemKeyGen() -> (pk, sk)

* def kemEncaps(pk) -> (ss, ct)
skipping to change at line 1080
|<----------| |<----------|
+------------------------+ | | +------------------------+ | |
| ss = kemDecaps(ct, sk) |-| | | ss = kemDecaps(ct, sk) |-| |
+------------------------+ | | +------------------------+ | |
| | | |
~~~~ ~~~~
{: #tab-kem-ke title="KEM-Based Key Exchange"}
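
To make the API concrete, here is a minimal, non-normative Python sketch of the three primitives and the single round trip shown in the figure above. The "KEM" below is a toy built from classical discrete logarithms over a deliberately small prime, so it is neither secure nor post-quantum; it only demonstrates the kemKeyGen/kemEncaps/kemDecaps call shape.

~~~~ python
import hashlib
import secrets

# Toy discrete-log parameters: far too small to be secure and, being
# Diffie-Hellman-based, not post-quantum. Illustration of the API shape only.
P = 2**127 - 1   # a Mersenne prime, used purely as an example modulus
G = 3

def kemKeyGen():
    sk = secrets.randbelow(P - 2) + 1
    pk = pow(G, sk, P)
    return pk, sk

def kemEncaps(pk):
    r = secrets.randbelow(P - 2) + 1            # randomness chosen by the encapsulator
    ct = pow(G, r, P)                           # ciphertext sent to the key-pair holder
    ss = hashlib.sha256(pow(pk, r, P).to_bytes(16, "big")).digest()
    return ss, ct

def kemDecaps(ct, sk):
    return hashlib.sha256(pow(ct, sk, P).to_bytes(16, "big")).digest()

# One round trip, mirroring the figure above.
pk, sk = kemKeyGen()            # client
ss_srv, ct = kemEncaps(pk)      # server
ss_cli = kemDecaps(ct, sk)      # client
assert ss_cli == ss_srv
~~~~

In a real protocol, the same three calls would be provided by a standardized post-quantum KEM such as ML-KEM rather than this toy construction.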

## Authenticated Key Exchange
<!-- [rfced] May we update the title of Figure 4 as follows?
Original:
Figure 4: Diffie-Hellman based AKE and NIKE simultaneously
Perhaps:
Figure 4: Simultaneous DH-Based AKE and NIKE

Authenticated Key Exchange (AKE) with KEMs where both parties contribute a KEM public key to the overall session key is interactive as described in {{Section 9.4 of ?RFC9528}}. However, a single-sided KEM, such as when one peer has a KEM key in a certificate and the other peer wants to encrypt for it (as in S/MIME or OpenPGP email), can be achieved using non-interactive HPKE {{RFC9180}}. The following figure illustrates the DH Key exchange:

~~~~ aasvg
                        +---------+ +---------+
                        | Client  | | Server  |
                        +---------+ +---------+
   +-----------------------+ |           |
   | Long-term client key: | |           |
   | sk1, pk1              |-|           |
   +-----------------------+ |           |
skipping to change at line 1114
 +-------------------------+ |           |
                             | encrypted |
                             |  content  |
                             |---------->|
                             |           | +------------------------+
                             |           | | decryptContent(ss)     |
                             |           | +------------------------+
~~~~
{: #tab-dh-ake title="DH-Based AKE"}
<!-- [rfced] Figure 4 is not referred to in the text. May we update this sentence as In the sample flow above, it is important to note that the shared secret `ss` is deri
shown below? ved using key material from both the client and the server, which classifies it as an
AKE. There is another property of a key exchange, called Non-Interactive Key Exchang
Original: e (NIKE), that refers to whether the sender can compute the shared secret `ss` and e
However, a DH key exchange can be an AKE and a NIKE at ncrypt content without requiring active interaction (an exchange of network messages)
the same time if the receiver's public key is known to the sender in with the recipient. {{tab-dh-ake}} shows a DH key exchange, which is an AKE since bo
advance, and many Internet protocols rely on this property of DH- th parties are using long-term keys that can have established trust (for example, via
based key exchanges. certificates), but it is not a NIKE since the client needs to wait for the network i
nteraction to receive the receiver's public key `pk2` before it can compute the share
Perhaps: d secret `ss` and begin content encryption. However, a DH key exchange can be an AKE
However, a DH key exchange can be an AKE and a NIKE at and a NIKE at the same time if the receiver's public key is known to the sender in ad
the same time if the receiver's public key is known to the sender in vance (see {{tab-dh-ake-nike}}), and many Internet protocols rely on this property of
advance (see Figure 4), and many Internet protocols rely on this property of DH- DH-based key exchanges.
based key exchanges.
In the sample flow above, it is important to note that the shared secret `ss` is deri
ved using key material from both the client and the server, which classifies it as an
AKE. There is another property of a key exchange, called Non-Interactive Key Exchang
e (NIKE), that refers to whether the sender can compute the shared secret `ss` and e
ncrypt content without requiring active interaction (an exchange of network messages)
with the recipient. {{tab-dh-ake}} shows a DH key exchange, which is an AKE since bo
th parties are using long-term keys that can have established trust (for example, via
certificates), but it is not a NIKE since the client needs to wait for the network i
nteraction to receive the receiver's public key `pk2` before it can compute the share
d secret `ss` and begin content encryption. However, a DH key exchange can be an AKE
and a NIKE at the same time if the receiver's public key is known to the sender in ad
vance, and many Internet protocols rely on this property of DH-based key exchanges.

~~~~ aasvg
                        +---------+ +---------+
                        | Client  | | Server  |
                        +---------+ +---------+
   +-----------------------+ |           |
   | Long-term client key: | |           |
   | sk1, pk1              |-|           |
   | Long-term server key: | |           |
   | pk2                   | |           |
skipping to change at line 1140
                             | encrypted |
                             |  content  |
                             |---------->|
                             |           | +------------------------+
                             |           |-| Long-term server key:  |
                             |           | | sk2, pk2               |
                             |           | | ss = KeyEx(pk1, sk2)   |
                             |           | | decryptContent(ss)     |
                             |           | +------------------------+
~~~~
{: #tab-dh-ake-nike title="DH-Based AKE and NIKE Simultaneously"} {: #tab-dh-ake-nike title="Simultaneous DH-Based AKE and NIKE"}
The complication with KEMs is that a KEM `Encaps()` is non-deterministic; it involves randomness chosen by the sender of that message. Therefore, in order to perform an AKE, the client must wait for the server to generate the needed randomness and perform `Encaps()` against the client key, which necessarily requires a network round-trip. As a result, a KEM-based protocol can either be an AKE or a NIKE, but it cannot be both at the same time. Consequently, certain Internet protocols will necessitate a redesign to accommodate this distinction, either by introducing extra network round trips or by making trade-offs in security properties.

~~~~ aasvg
                        +---------+ +---------+
                        | Client  | | Server  |
                        +---------+ +---------+
  +------------------------+ |           |
  | pk1, sk1 = kemKeyGen() |-|           |
  +------------------------+ |           |
                             |           |
                             |pk1        |
                             |---------->|
                             |           | +--------------------------+
                             |           |-| ss1, ct1 = kemEncaps(pk1)|
                             |           | | pk2, sk2 = kemKeyGen()   |
                             |           | +--------------------------+
                             |           |
                             |    ct1,pk2|
                             |<----------|
+--------------------------+ |           |
| ss1 = kemDecaps(ct1, sk1)| |           |
| ss2, ct2 = kemEncaps(pk2)|-|           |
| ss = Combiner(ss1, ss2)  | |           |
+--------------------------+ |           |
                             |           |
                             |ct2        |
                             |---------->|
                             |           | +--------------------------+
                             |           |-| ss2 = kemDecaps(ct2, sk2)|
                             |           | | ss = Combiner(ss1, ss2)  |
                             |           | +--------------------------+
~~~~
{: #tab-kem-ake title="KEM-Based AKE"}

In the figure above, `Combiner(ss1, ss2)`, often referred to as a KEM combiner, is a cryptographic construction that takes in two shared secrets and returns a single combined shared secret. The simplest combiner is concatenation `ss1 || ss2`, but combiners can vary in complexity depending on the cryptographic properties required. For example, if the combination should preserve IND-CCA2 (see {{INDCCA2}}) of either input, even if the other is chosen maliciously, then a more complex construct is required. Another consideration for combiner design is the so-called "binding properties" introduced in {{KEEPINGUP}}, which may require the ciphertexts and recipient public keys to be included in the combiner. KEM combiner security analysis becomes more complicated in hybrid settings where the two KEMs represent different algorithms, for example, where one is ML-KEM and the other is ECDH. For a more thorough discussion of KEM combiners, see {{KEEPINGUP}}, {{I-D.ounsworth-cfrg-kem-combiners}}, and {{I-D.irtf-cfrg-hybrid-kems}}.

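As a rough illustration of the combiner concept, the following Python sketch concatenates the two shared secrets from {{tab-kem-ake}} and feeds them, together with the ciphertexts and public keys, through HKDF. The construction, labels, and use of the `cryptography` library are illustrative assumptions; this is not the combiner defined by {{I-D.ounsworth-cfrg-kem-combiners}} or any other specification.

~~~~ python
# Illustrative KEM combiner sketch (not a specified construction): derive
# a single shared secret from two component KEM exchanges, binding the
# ciphertexts and public keys into the derivation.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def combine_shared_secrets(ss1: bytes, ss2: bytes,
                           ct1: bytes, ct2: bytes,
                           pk1: bytes, pk2: bytes) -> bytes:
    # The simplest combiner would be plain concatenation ss1 || ss2;
    # passing it through a KDF together with transcript values
    # additionally binds those values to the derived secret.
    ikm = ss1 + ss2
    info = b"example-kem-combiner" + ct1 + ct2 + pk1 + pk2
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=info).derive(ikm)
~~~~
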
## Security Properties of KEMs

The security properties described in this section (IND-CCA2 and binding) are not an exhaustive list of all possible KEM security considerations. They were selected because they are fundamental to evaluating KEM suitability in protocol design and are commonly discussed in current PQC work.

### IND-CCA2 {#INDCCA2}

IND-CCA2 (INDistinguishability under adaptive Chosen-Ciphertext Attack) is an advanced security notion for encryption schemes. It ensures the confidentiality of the plaintext and resistance against chosen-ciphertext attacks. An appropriate definition of IND-CCA2 security for KEMs can be found in {{CS01}} and {{BHK09}}. ML-KEM {{ML-KEM}} and Classic McEliece provide IND-CCA2 security.

Understanding IND-CCA2 security is essential for individuals involved in designing or implementing cryptographic systems and protocols in order to evaluate the strength of the algorithm, assess its suitability for specific use cases, and ensure that data confidentiality and security requirements are met. Understanding IND-CCA2 security is generally not necessary for developers migrating to using an IETF-vetted KEM within a given protocol or flow. IND-CCA2 is a widely accepted security notion for public key encryption mechanisms, making it suitable for a broad range of applications. When an IETF specification defines a new KEM, its security considerations should fully describe the relevant cryptographic properties, including IND-CCA2.

### Binding

KEMs also have an orthogonal set of properties to consider when designing protocols around them: binding {{KEEPINGUP}}. This can be "ciphertext binding", "public key binding", "context binding", or any other property that is important to not be substituted between KEM invocations. In general, a KEM is considered to bind a certain value if substitution of that value by an attacker will necessarily result in a different shared secret being derived. As an example, if an attacker can construct two different ciphertexts that will decapsulate to the same shared secret, can construct a ciphertext that will decapsulate to the same shared secret under two different public keys, or can substitute whole KEM exchanges from one session into another, then the construction is not ciphertext binding, public key binding, or context binding, respectively. Similarly, protocol designers may wish to bind protocol state information such as a transaction ID or nonce so that attempts to replay ciphertexts from one session inside a different session will be blocked at the cryptographic level because the server derives a different shared secret and is thus unable to decrypt the content.

The solution to binding is generally achieved at the protocol design level: It is recommended to avoid using the KEM output shared secret directly without integrating it into an appropriate protocol. While KEM algorithms provide key secrecy, they do not inherently ensure source authenticity, protect against replay attacks, or guarantee freshness. These security properties should be addressed by incorporating the KEM into a protocol that has been analyzed for such protections. Even though modern KEMs such as ML-KEM produce full-entropy shared secrets, it is still advisable for binding reasons to pass the shared secret through a key derivation function (KDF) and also include all values that you wish to bind; finally, you will have a shared secret that is safe to use at the protocol level.

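The same KDF pattern as in the combiner sketch above can be used to bind protocol context. The sketch below assumes a hypothetical protocol label and a per-session transaction ID; it illustrates the advice in this section rather than a construction from any specification.

~~~~ python
# Illustrative sketch: never use the raw KEM shared secret directly;
# derive the protocol key through a KDF that also covers the values to
# be bound (public key, ciphertext, and a per-session transaction ID).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def session_key(ss: bytes, recipient_pk: bytes, ciphertext: bytes,
                transaction_id: bytes) -> bytes:
    # A ciphertext replayed into a different session carries a different
    # transaction ID, so the peer derives a different key and cannot
    # decrypt the replayed content.
    info = b"example-protocol v1" + recipient_pk + ciphertext + transaction_id
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=info).derive(ss)
~~~~
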
## HPKE {#hpke}

Modern cryptography has long used the notion of "hybrid encryption" where an asymmetric algorithm is used to establish a key and then a symmetric algorithm is used for bulk content encryption. The previous sections explained important security properties of KEMs, such as IND-CCA2 security and binding, and emphasized that these properties must be supported by proper protocol design. One widely deployed scheme that achieves this is Hybrid Public Key Encryption (HPKE) {{RFC9180}}.

HPKE {{RFC9180}} works with a combination of KEMs, KDFs, and Authenticated Encryption with Associated Data (AEAD) schemes. HPKE includes three authenticated variants, including one that authenticates possession of a pre-shared key and two optional ones that authenticate possession of a KEM private key. HPKE can be extended to support hybrid post-quantum KEM {{I-D.ietf-hpke-pq}}. ML-KEM does not support the static-ephemeral key exchange that underlies DH-KEM-based HPKE and its optional authenticated modes, as discussed in {{Section 1.5 of I-D.connolly-cfrg-xwing-kem}}.

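The following sketch shows the general KEM-then-KDF-then-AEAD pattern that HPKE formalizes, using an ephemeral X25519 exchange in the role of the KEM. It is not an RFC 9180-conformant key schedule; the label, nonce handling, and choice of primitives are assumptions made for illustration only.

~~~~ python
# Sketch of the hybrid-encryption pattern behind HPKE: a KEM establishes
# a shared secret, a KDF derives an AEAD key, and the AEAD protects the
# content.  This is NOT the RFC 9180 key schedule.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def seal(recipient_public_key, plaintext: bytes, aad: bytes = b""):
    # DH-based "encapsulation": the ephemeral public share acts as the
    # KEM ciphertext ("enc") sent alongside the AEAD output.
    eph = X25519PrivateKey.generate()
    enc = eph.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    ss = eph.exchange(recipient_public_key)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"example-hpke-like" + enc).derive(ss)
    nonce = os.urandom(12)
    return enc, nonce, ChaCha20Poly1305(key).encrypt(nonce, plaintext, aad)
~~~~

The receiver performs the mirror-image "open" step: recompute the shared secret from `enc` and its static private key, re-derive the key, and decrypt. A post-quantum or hybrid instantiation would replace the X25519 exchange with an ML-KEM or combined encapsulation.
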
# PQC Signatures

Any digital signature scheme that provides a construction defining security under a post-quantum setting falls under this category of PQC signatures.

## Details of FN-DSA, ML-DSA, and SLH-DSA {#sig-scheme}

ML-DSA {{ML-DSA}} is a digital signature algorithm based on the hardness of lattice problems over module lattices (i.e., the Module Learning with Errors (MLWE) problem). The design of the algorithm is based on the "Fiat-Shamir with Aborts" {{Lyu09}} framework introduced by Lyubashevsky that leverages rejection sampling to render lattice-based Fiat-Shamir (FS) schemes compact and secure. ML-DSA uses uniformly distributed random number sampling over small integers to compute coefficients in error vectors, which makes the scheme easier to implement compared to FN-DSA {{FN-DSA}}, which uses Gaussian-distributed numbers, necessitating the use of floating-point arithmetic during signature generation.

ML-DSA offers both deterministic and randomized signing and is instantiated with three parameter sets providing different security levels. Security properties of ML-DSA are discussed in {{Section 9 of !RFC9881}}.

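For developers, the operational shape of ML-DSA is the familiar KeyGen/Sign/Verify triple. The sketch below uses the liboqs-python bindings from the Open Quantum Safe project {{OQS}} as one possible implementation; the algorithm identifier string and API details vary between liboqs releases, so treat this as an illustration rather than a normative interface.

~~~~ python
# Illustrative ML-DSA sign/verify flow using liboqs-python (an assumption
# about the environment; identifiers differ across liboqs versions).
import oqs

message = b"example message"

with oqs.Signature("ML-DSA-65") as signer:
    public_key = signer.generate_keypair()  # secret key stays inside the object
    signature = signer.sign(message)

with oqs.Signature("ML-DSA-65") as verifier:
    assert verifier.verify(message, signature, public_key)
~~~~
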
FN-DSA {{FN-DSA}} is based on the GPV hash-and-sign lattice-based signature framework introduced by Gentry, Peikert, and Vaikuntanathan, which requires a certain class of lattices and a trapdoor sampler technique.

The main design principle of FN-DSA is compactness, i.e., it was designed in a way that achieves minimal total memory bandwidth requirement (the sum of the signature size plus the public key size). This is possible due to the compactness of NTRU lattices. FN-DSA also offers very efficient signing and verification procedures. The main potential downsides of FN-DSA are the non-triviality of its algorithms and the need for floating-point arithmetic support in order to support Gaussian-distributed random number sampling, where the other lattice schemes use the less efficient but easier-to-support uniformly distributed random number sampling.

Implementers of FN-DSA need to be aware that FN-DSA signing is highly susceptible to side-channel attacks unless constant-time 64-bit floating-point operations are used. This requirement is extremely platform-dependent, as noted in NIST's report {{NIST}}.

The performance characteristics of ML-DSA and FN-DSA may differ based on the specific implementation and hardware platform. Generally, ML-DSA is known for its relatively fast signature generation, while FN-DSA can provide more efficient signature verification. The choice may depend on whether the application requires more frequent signature generation or signature verification (see {{LIBOQS}}). For further clarity on the sizes and security levels, please refer to the tables in Sections {{RecSecurity}}{: format="counter"} and {{Comparisons}}{: format="counter"}.

SLH-DSA {{SLH-DSA}} utilizes the concept of stateless hash-based signatures, where each signature is unique and unrelated to any previous signature (as discussed in {{hash-based}}). This property eliminates the need for maintaining state information during the signing process. SLH-DSA was designed to sign up to 2^64 messages under a given key pair, and it offers three security levels. The parameters for each of the security levels were chosen to provide 128 bits of security, 192 bits of security, and 256 bits of security. SLH-DSA offers smaller public key sizes, larger signature sizes, slower signature generation, and slower verification when compared to ML-DSA and FN-DSA. SLH-DSA does not introduce a new hardness assumption beyond those inherent to the underlying hash functions. It builds upon established foundations in cryptography, making it a reliable and robust digital signature scheme for a post-quantum world.

All of these algorithms (ML-DSA, FN-DSA, and SLH-DSA) include two signature modes: pure mode, where the entire content is signed directly, and pre-hash mode, where a digest of the content is signed.

## Details of XMSS and LMS

The eXtended Merkle Signature Scheme (XMSS) {{RFC8391}} and Hierarchical Signature Scheme (HSS) / Leighton-Micali Signature (LMS) {{RFC8554}} are stateful hash-based signature schemes, where the secret key state changes over time. In both schemes, reusing a secret key state compromises cryptographic security guarantees.

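Because reusing a one-time key index is catastrophic for XMSS and LMS, implementations typically reserve and durably persist the next index before a signature is released. The sketch below illustrates that ordering with a hypothetical `sign_with_index()` helper standing in for a real XMSS/LMS implementation; the file-based state store is an assumption for illustration.

~~~~ python
# Illustrative state handling for a stateful hash-based scheme (XMSS/LMS).
# The essential property: the index is advanced and persisted *before*
# the signature leaves the signer, so a crash cannot cause index reuse.
import json
import os

STATE_FILE = "hbs_state.json"  # assumed to be initialized at key generation
                               # time with {"next_index": 0}

def sign_with_index(private_key, index: int, message: bytes) -> bytes:
    # Hypothetical placeholder for a real XMSS/LMS signing call.
    raise NotImplementedError("plug in an XMSS/LMS implementation")

def reserve_next_index() -> int:
    with open(STATE_FILE, "r+") as f:
        state = json.load(f)
        index = state["next_index"]
        state["next_index"] = index + 1
        f.seek(0)
        json.dump(state, f)
        f.truncate()
        f.flush()
        os.fsync(f.fileno())  # durable before the index is ever used
    return index

def sign_message(private_key, message: bytes) -> bytes:
    return sign_with_index(private_key, reserve_next_index(), message)
~~~~
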
|----|----|---|------|------|------|------|------|
| 56 | 52 | 1 | 8684 | 8844 | 9004 | 9164 | 9324 |
| 56 | 52 | 2 | 4460 | 4620 | 4780 | 4940 | 5100 |
| 56 | 52 | 4 | 2348 | 2508 | 2668 | 2828 | 2988 |
| 56 | 52 | 8 | 1292 | 1452 | 1612 | 1772 | 1932 |

## Hash-then-Sign

Within the hash-then-sign paradigm, the message is hashed before signing it. By pre-hashing, the onus of resistance to existential forgeries becomes heavily reliant on the collision-resistance of the hash function in use. The hash-then-sign paradigm has the ability to improve application performance by reducing the size of signed messages that need to be transmitted between application and cryptographic module and making the signature size predictable and manageable. As a corollary, hashing remains mandatory even for short messages and places a further computational requirement on the verifier. This makes the performance of hash-then-sign schemes more consistent, but not necessarily more efficient.

Using a hash function to produce a fixed-size digest of a message ensures that the signature is compatible with a wide range of systems and protocols, regardless of the specific message size or format. Crucially for hardware security modules, hash-then-sign also significantly reduces the amount of data that needs to be transmitted and processed by a Hardware Security Module (HSM). Consider scenarios such as a networked HSM located in a different data center from the calling application or a smart card connected over a USB interface. In these cases, streaming a message that is megabytes or gigabytes long can result in notable network latency, on-device signing delays, or even depletion of available on-device memory.

Note that the vast majority of Internet protocols that sign large messages already perform some form of content hashing at the protocol level, so this tends to be more of a concern with proprietary cryptographic protocols and protocols from non-IETF standards bodies. Protocols like TLS 1.3 and DNSSEC use the hash-then-sign paradigm. In TLS 1.3 {{RFC8446}} CertificateVerify messages, the content that is covered under the signature includes the transcript hash output ({{Section 4.4.1 of RFC8446}}) while DNSSEC {{RFC4034}} uses it to provide origin authentication and integrity assurance services for DNS data. Similarly, the Cryptographic Message Syntax (CMS) {{?RFC5652}} includes a mandatory message digest step before invoking the signature algorithm.

In the case of ML-DSA, it internally incorporates the necessary hash operations as part of its signing algorithm. ML-DSA directly takes the original message, applies a hash function internally, and then uses the resulting hash value for the signature generation process. In the case of SLH-DSA, it internally performs randomized message compression using a keyed hash function that can process arbitrary length messages. In the case of FN-DSA, the SHAKE-256 hash function is used as part of the signature process to derive a digest of the message being signed.

Therefore, ML-DSA, FN-DSA, and SLH-DSA offer enhanced security over the traditional hash-then-sign paradigm because, by incorporating dynamic key material into the message digest, a pre-computed hash collision on the message to be signed no longer yields a signature forgery. Applications requiring the performance and bandwidth benefits of hash-then-sign may still pre-hash at the protocol level prior to invoking ML-DSA, FN-DSA, or SLH-DSA, but protocol designers should be aware that doing so reintroduces the weakness that hash collisions directly yield signature forgeries. Signing the full un-digested message is recommended where applications can tolerate it.

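The trade-off can be made concrete with the same liboqs-python interface used earlier (again an assumption about the environment): an application can hand the signer the full message, or only a protocol-level digest of it. The digest variant minimizes the data crossing an HSM boundary, but the signature then protects only the digest, so a hash collision yields a forgery.

~~~~ python
# Illustrative comparison of signing a full message versus signing a
# protocol-level digest (assumes liboqs-python; identifiers vary).
import hashlib
import oqs

large_message = b"\x00" * (10 * 1024 * 1024)  # e.g., a 10 MiB artifact

with oqs.Signature("ML-DSA-65") as signer:
    public_key = signer.generate_keypair()

    # Option 1: the whole message crosses the application/module boundary.
    sig_full = signer.sign(large_message)

    # Option 2: only a 32-byte digest crosses the boundary; the verifier
    # must hash the message the same way, and the collision resistance of
    # SHA-256 now bounds forgery resistance.
    digest = hashlib.sha256(large_message).digest()
    sig_digest = signer.sign(digest)
~~~~
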
# NIST Recommendations for Security and Performance Trade-offs {#RecSecurity}

This information is a reprint of information provided in the NIST PQC project {{NIST}} as of the time this document is published. {{security-levels-table}} denotes the five security levels provided by NIST for PQC algorithms. Neither NIST nor the IETF makes any specific recommendations about which security level to use. In general, protocols will include algorithm choices at multiple levels so that users can choose the level appropriate to their policies and data classification, similar to how organizations today choose which size of RSA key to use. The security levels are defined as requiring computational resources comparable to or greater than an attack on AES (128, 192, and 256) and SHA2/SHA3 algorithms, i.e., exhaustive key recovery for AES and optimal collision search for SHA2/SHA3.

| PQ Security Level | AES/SHA(2/3) hardness               | PQC Algorithm                                     |
| ----------------- | ----------------------------------- | ------------------------------------------------- |
| 1                 | AES-128 (exhaustive key recovery)   | ML-KEM-512, FN-DSA-512, SLH-DSA-SHA2/SHAKE-128f/s |
| 2                 | SHA-256/SHA3-256 (collision search) | ML-DSA-44                                         |
| 3                 | AES-192 (exhaustive key recovery)   | ML-KEM-768, ML-DSA-65, SLH-DSA-SHA2/SHAKE-192f/s  |
| PQ Security Level | Algorithm   | Public key size (in bytes) | Private key size (in bytes) | Ciphertext/Signature size (in bytes) |
| ----------------- | ----------- | -------------------------- | --------------------------- | ------------------------------------ |
| 1                 | ML-KEM-512  | 800                        | 1632                        | 768                                  |
| 1                 | FN-DSA-512  | 897                        | 1281                        | 666                                  |
| 2                 | ML-DSA-44   | 1312                       | 2560                        | 2420                                 |
| 3                 | ML-KEM-768  | 1184                       | 2400                        | 1088                                 |
| 3                 | ML-DSA-65   | 1952                       | 4032                        | 3309                                 |
| 5                 | FN-DSA-1024 | 1793                       | 2305                        | 1280                                 |
| 5                 | ML-KEM-1024 | 1568                       | 3168                        | 1588                                 |
| 5                 | ML-DSA-87   | 2592                       | 4896                        | 4627                                 |

# Comparing PQC KEMs/Signatures and Traditional KEMs/Signatures {#Comparisons}

This section provides two tables for comparison of different KEMs and signatures, respectively, in the traditional and post-quantum scenarios. These tables focus on the secret key sizes, public key sizes, and ciphertext/signature sizes for the PQC algorithms and their traditional counterparts of similar security levels.

The first table compares traditional and PQC KEMs in terms of security, public and private key sizes, and ciphertext sizes.

| PQ Security Level | Algorithm            | Public key size (in bytes) | Private key size (in bytes) | Ciphertext size (in bytes) |
| ----------------- | -------------------- | -------------------------- | --------------------------- | -------------------------- |
| Traditional       | P256_HKDF_SHA-256    | 65                         | 32                          | 65                         |
| Traditional       | P521_HKDF_SHA-512    | 133                        | 66                          | 133                        |
| Traditional       | X25519_HKDF_SHA-256  | 32                         | 32                          | 32                         |
It is also possible to use more than two algorithms together in a hybrid scheme, with various methods for combining them. For post-quantum transition purposes, the combination of a post-quantum algorithm with a traditional algorithm is the most straightforward and recommended. The use of multiple post-quantum algorithms with different mathematical bases has also been considered. Combining algorithms in a way that requires both to be used together ensures stronger security, while combinations that do not require both will sacrifice security but offer other benefits like backwards compatibility and crypto agility. Including a traditional key alongside a post-quantum key often has minimal bandwidth impact.

### Composite Keys in Hybrid Schemes {#COMPOSITE}

When combining keys in an "and" mode, it may make more sense to consider them to be a single composite key instead of two keys. This generally requires fewer changes to various components of PKI ecosystems, many of which are not prepared to deal with two keys or dual signatures. To those protocol- or application-layer parsers, a "composite" algorithm composed of two "component" algorithms is simply a new algorithm, and support for adding new algorithms generally already exists. Treating multiple "component" keys as a single "composite" key also has security advantages, such as preventing cross-protocol reuse of the individual component keys and guarantees about revoking or retiring all component keys together at the same time, especially if the composite is treated as a single object all the way down into the cryptographic module.

All that needs to be done is to standardize the formats of how the two keys from the two algorithms are combined into a single data structure and how the two resulting signatures or KEMs are combined into a single signature or KEM. The answer can be as simple as concatenation if the lengths are fixed or easily determined. At the time this document is published, security research is ongoing as to the security properties of concatenation-based composite signatures and KEMs versus more sophisticated signature and KEM combiners and protocol contexts in which those simpler combiners are sufficient.

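As a purely illustrative sketch of why concatenation suffices when lengths are carried or fixed, the following encodes and decodes two component signatures with length prefixes. Real composite formats, such as those being defined in the LAMPS Working Group, use concrete ASN.1 structures rather than this ad hoc framing.

~~~~ python
# Illustrative length-prefixed concatenation of two component signatures
# into one "composite" value (not any standardized encoding).
import struct

def encode_composite(sig1: bytes, sig2: bytes) -> bytes:
    return (struct.pack(">I", len(sig1)) + sig1 +
            struct.pack(">I", len(sig2)) + sig2)

def decode_composite(blob: bytes) -> tuple[bytes, bytes]:
    (n1,) = struct.unpack_from(">I", blob, 0)
    sig1 = blob[4:4 + n1]
    (n2,) = struct.unpack_from(">I", blob, 4 + n1)
    sig2 = blob[8 + n1:8 + n1 + n2]
    return sig1, sig2
~~~~

In an "and" mode composite, a verifier must require both decoded component signatures to validate before accepting the composite signature.
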
One last consideration is the specific pairs of algorithms that can be combined. A recent trend in protocols is to only allow a small number of "known good" configurations that make sense, often referred to in cryptography as a "ciphersuite", instead of allowing arbitrary combinations of individual configuration choices that may interact in dangerous ways. The current consensus is that the same approach should be followed for combining cryptographic algorithms and that "known good" pairs should be explicitly listed ("explicit composite") instead of just allowing arbitrary combinations of any two cryptographic algorithms ("generic composite").

The same considerations apply when using multiple certificates to transport a pair of related keys for the same subject. Exactly how two certificates should be managed in order to avoid some of the pitfalls mentioned above is still an active area of investigation. Using two certificates keeps the certificate tooling simple and straightforward, but in the end, this simply moves problems (i.e., problems with the requirement that both certificates be used as a pair, that two signatures must be carried separately, and that both validate) to the certificate management layer, where addressing these concerns in a robust way can be difficult.

At least one scheme has been proposed that allows the pair of certificates to exist as a single certificate when being issued and managed but dynamically split into individual certificates when needed (see {{I-D.bonnell-lamps-chameleon-certs}}).

### Key Reuse in Hybrid Schemes {#REUSE}

An important security note, particularly when using hybrid signature keys, but also to a lesser extent hybrid KEM keys, is key reuse. In traditional cryptography, problems can occur with so-called "cross-protocol attacks" when the same key can be used for multiple protocols; for example, signing TLS handshakes and signing S/MIME emails. While it is not best practice to reuse keys within the same protocol, e.g., using the same key for multiple S/MIME certificates for the same user, it is not generally catastrophic for security. However, key reuse becomes a large security problem within hybrid schemes.

Consider an \{RSA, ML-DSA\} hybrid key where the RSA key also appears within a single-algorithm certificate. In this case, an attacker could perform a "stripping attack" where they take some piece of data signed with the \{RSA, ML-DSA\} key, remove the ML-DSA signature, and present the data as if it was intended for the RSA-only certificate. This leads to a set of security definitions called "non-separability properties", which refers to how well the signature scheme resists various complexities of downgrade/stripping attacks {{I-D.ietf-pquip-hybrid-signature-spectrums}}. Therefore, it is recommended that implementers either reuse the entire hybrid key as a whole or perform fresh key generation of all component keys per usage, and must not take an existing key and reuse it as a component of a hybrid key.

### Future Directions and Ongoing Research

Many aspects of hybrid cryptography are still under investigation. The LAMPS Working Group at IETF is actively exploring the security properties of these combinations, and future standards will reflect the evolving consensus on these issues.

# Impact on Constrained Devices and Networks

PQC algorithms generally have larger keys, ciphertext, and signature sizes than traditional public key algorithms. This has particular impact on constrained devices that operate with limited data rates. In the IoT space, these constraints have historically driven significant optimization efforts in the IETF (e.g., in the LAKE and CoRE Working Groups) to adapt security protocols to resource-constrained environments.

As the transition to PQC progresses, these environments will face similar challenges. Larger message sizes can increase handshake latency, raise energy consumption, and require fragmentation logic. Work is ongoing in the IETF to study how PQC can be deployed in constrained devices (see {{I-D.ietf-pquip-pqc-hsm-constrained}}).

# Security Considerations

## Cryptanalysis

Traditional cryptanalysis exploits weaknesses in algorithm design, mathematical vulnerabilities, or implementation flaws that are exploitable with classical (i.e., non-quantum) hardware, whereas quantum cryptanalysis harnesses the power of CRQCs to solve specific mathematical problems more efficiently. Quantum side-channel attacks are another form of quantum cryptanalysis. In such attacks, a device under threat is directly connected to a quantum computer, which then injects entangled or superimposed data streams to exploit hardware that lacks protection against quantum side channels. Both pose threats to the security of cryptographic algorithms, including those used in PQC. It is crucial to develop and adopt new cryptographic algorithms resilient against these threats to ensure long-term security in the face of advancing cryptanalysis techniques.

Recent side-channel attacks on implementations using deep learning-based power analysis have also shown that one needs to be cautious while implementing the required PQC algorithms in hardware. Two of the most recent works include one attack on ML-KEM {{KyberSide}} and one attack on Saber {{SaberSide}}. An evolving threat landscape points to the fact that lattice-based cryptography is indeed more vulnerable to side-channel attacks as in {{SideCh}} and {{LatticeSide}}. Consequently, some mitigation techniques for side-channel attacks have been proposed; see {{Mitigate1}}, {{Mitigate2}}, and {{Mitigate3}}.

Cryptographic agility is recommended for both traditional and quantum cryptanalysis a s it enables organizations to adapt to emerging threats, adopt stronger algorithms, c omply with standards, and plan for long-term security in the face of evolving cryptan alytic techniques and the advent of CRQCs. Cryptographic agility is recommended for both traditional and quantum cryptanalysis a s it enables organizations to adapt to emerging threats, adopt stronger algorithms, c omply with standards, and plan for long-term security in the face of evolving cryptan alytic techniques and the advent of CRQCs.
Several PQC schemes are available that need to be tested; cryptography experts around the world are pushing for the best possible solutions, and the first standards that will ease the introduction of PQC are being prepared. This is of paramount importance and is a call for imminent action for organizations, bodies, and enterprises to star t evaluating their cryptographic agility, assess the complexity of implementing PQC i nto their products, processes, and systems, and develop a migration plan that achieve s their security goals to the best possible extent. Several PQC schemes are available that need to be tested; cryptography experts around the world are pushing for the best possible solutions, and the first standards that will ease the introduction of PQC are being prepared. This is of paramount importance and is a call for imminent action for organizations, bodies, and enterprises to star t evaluating their cryptographic agility, assess the complexity of implementing PQC i nto their products, processes, and systems, and develop a migration plan that achieve s their security goals to the best possible extent.
An important and often overlooked step in achieving cryptographic agility is maintaining a cryptographic inventory. Modern software stacks incorporate cryptography in numerous places, making it challenging to identify all instances. Therefore, cryptographic agility and inventory management take two major forms. First, application developers responsible for software maintenance should actively search for instances of hard-coded cryptographic algorithms within applications. When possible, they should design the choice of algorithm to be dynamic, based on application configuration. Second, administrators, policy officers, and compliance teams should take note of any instances where an application exposes cryptographic configurations. These instances should be managed through either organization-wide written cryptographic policies or automated cryptographic policy systems.
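
As a minimal sketch of what making the algorithm choice dynamic can look like in application code, the following Python fragment reads the algorithm name from configuration instead of hard-coding it at every call site. The configuration dictionary, the chosen digest algorithm, and the helper name are illustrative assumptions only; a hash algorithm is used purely to keep the sketch dependency-free, and the same pattern applies to signature or KEM selection behind a crypto library or policy system.

~~~python
import hashlib

# Illustrative policy source: in practice, this would come from a configuration
# file, environment variable, or an organization-wide policy control plane.
CONFIG = {"digest_algorithm": "sha3_256"}

def digest(data: bytes, config: dict = CONFIG) -> bytes:
    """Compute a digest with whatever algorithm the current policy names,
    rather than a constant hard-coded at every call site."""
    algorithm = config["digest_algorithm"]
    if algorithm not in hashlib.algorithms_available:
        raise ValueError(f"{algorithm!r} is not available; update the policy or runtime")
    return hashlib.new(algorithm, data).digest()

# Switching algorithms later is a configuration change, not a code change.
print(digest(b"example input").hex())
~~~

Funneling all call sites through one configuration-driven helper also makes the cryptographic inventory easier to maintain, since there is a single place to audit and to update when policy changes.
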
Numerous commercial solutions are available for detecting hard-coded cryptographic algorithms in source code and compiled binaries, as well as providing cryptographic policy management control planes for enterprise and production environments.
## Jurisdictional Fragmentation

Another potential application of hybrid schemes bears mentioning, even though it is not directly related to PQC: using hybrids to navigate inter-jurisdictional cryptographic connections. Traditional cryptography is already fragmented by jurisdiction. Consider that while most jurisdictions support ECDH, those in the United States will prefer the NIST curves while those in Germany will prefer the Brainpool curves. China, Russia, and other jurisdictions have their own national cryptography standards. This situation of fragmented global cryptography standards is unlikely to improve with PQC. If "and" mode hybrid schemes become standardized for the reasons mentioned above, then one could imagine leveraging them to create ciphersuites in which a single cryptographic operation simultaneously satisfies the cryptographic requirements of both endpoints.
## Hybrid Key Exchange and Signatures: Bridging the Gap Between PQ/T Cryptography

Post-quantum algorithms selected for standardization are relatively new and have not been subject to the same depth of study as traditional algorithms. PQC implementations will also be new and therefore more likely to contain implementation bugs than the battle-tested crypto implementations that are relied on today. In addition, certain deployments may need to retain traditional algorithms due to regulatory constraints, e.g., FIPS {{SP-800-56C}} or Payment Card Industry (PCI) compliance {{PCI}}. Hybrid key exchange is recommended to enhance security against the HNDL attack. Additionally, hybrid signatures provide for time to react in the case of the announcement of a devastating attack against any one algorithm, while not fully abandoning traditional cryptosystems.
Hybrid key exchange performs both a classical and a post-quantum key exchange in parallel. It provides security redundancy against potential weaknesses in PQC algorithms, allows for a gradual transition of trust in PQC algorithms, and, in backward-compatible designs, enables gradual adoption without breaking compatibility with existing systems. For instance, in TLS 1.3, a hybrid key exchange can combine a widely supported classical algorithm, such as X25519, with a post-quantum algorithm like ML-KEM. This allows legacy clients to continue using the classical algorithm while enabling upgraded clients to proceed with hybrid key exchange. In contrast, overhead-spreading hybrid designs focus on reducing the PQ overhead. For example, approaches like those described in {{I-D.hale-mls-combiner}} amortize PQ costs by selectively applying PQ updates in key exchange processes, allowing systems to balance security and efficiency. This strategy ensures a post-quantum secure channel while keeping the overhead manageable, making it particularly suitable for constrained environments.
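
To make the "combine both secrets" step concrete, the sketch below derives a single key from an ECDH shared secret and an ML-KEM shared secret by concatenating them into one key derivation function. This is only an illustration of a generic concatenation-style combiner, not the exact construction used by TLS 1.3 or any other protocol; the labels, lengths, and placeholder secrets are assumptions made for the example, and deployments should follow the combiner defined by the protocol they implement.

~~~python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869) instantiated with SHA-256.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # HKDF-Expand (RFC 5869) instantiated with SHA-256.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def combine_hybrid_secrets(ecdh_secret: bytes, mlkem_secret: bytes) -> bytes:
    # Concatenation-style combiner: the output remains secret as long as at
    # least one of the two inputs remains secret.  The concatenation order and
    # the info label must be fixed by the protocol, not chosen ad hoc.
    ikm = ecdh_secret + mlkem_secret
    prk = hkdf_extract(salt=b"\x00" * 32, ikm=ikm)
    return hkdf_expand(prk, info=b"example hybrid key exchange", length=32)

# Placeholder 32-byte secrets; real values would be the X25519 output and the
# ML-KEM shared secret from encapsulation/decapsulation.
session_key = combine_hybrid_secrets(b"\x01" * 32, b"\x02" * 32)
print(session_key.hex())
~~~

Feeding both secrets through a single KDF is what gives the hybrid its "strongest link" property: an attacker who can break only one of the two key exchanges still cannot recover the derived session key.
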
While some hybrid key exchange options introduce additional computational and bandwidth overhead, the impact of traditional key exchange algorithms (e.g., key size) is typically small, helping to keep the overall increase in resource usage manageable for most systems. In highly constrained environments, however, such hybrid key exchange protocols may be impractical due to their higher resource requirements compared to pure post-quantum or traditional key exchange approaches. That said, some hybrid key exchange designs distribute the PQC overhead, making them more suitable for constrained environments. The choice of hybrid key exchange design depends on the specific system requirements and use case, so the appropriate approach may vary.
## Caution: Ciphertext Commitment in KEM vs. DH

A good book on modern cryptography is "Serious Cryptography, 2nd Edition" by Jean-Philippe Aumasson {{Serious-Crypt}}.
The Open Quantum Safe (OQS) Project {{OQS}} is an open-source project that aims to support the transition to quantum-resistant cryptography.
The IETF's PQUIP Working Group {{PQUIP-WG}} maintains a list of PQC-related protocol work within the IETF.
--- back
<!-- [rfced] References

a) FYI - We note that draft-hale-mls-combiner-01 has been replaced with
draft-ietf-mls-combiner-02. Should this reference entry be updated
accordingly? Note that the title has changed.

Original:
   [I-D.hale-mls-combiner]
              Joël, Hale, B., Mularczyk, M., and X. Tian, "Flexible
              Hybrid PQ MLS Combiner", Work in Progress, Internet-Draft,
              draft-hale-mls-combiner-01, 26 September 2024,
              <https://datatracker.ietf.org/doc/html/draft-hale-mls-combiner-01>.

Perhaps:
   [PQ-MLS]   Tian, X., Hale, B., Mularczyk, M., and J. Alwen, "Amortized
              PQ MLS Combiner", Work in Progress, Internet-Draft,
              draft-ietf-mls-combiner-02, 20 October 2025,
              <https://datatracker.ietf.org/doc/html/draft-ietf-mls-combiner-02>.

b) The URLs in both of the following reference entries point to the same
URL. Should the URL for [BIKE] be updated to something else? We do not see
BIKE mentioned at this URL. Note that we found the following page for BIKE
(Bit Flipping Key Encapsulation): https://bikesuite.org/.

Current:
   [BIKE]     "BIKE", <http://pqc-hqc.org/>.
   ...
   [HQC]      "HQC", <http://pqc-hqc.org/>.

Perhaps (update URL for [BIKE]):
   [BIKE]     "BIKE", <https://bikesuite.org/>.
   ...
   [HQC]      "HQC", <http://pqc-hqc.org/>.

c) We updated many of the reference entries in the references section to
include titles, URLs, and additional publication information that may be
helpful for future readers. Please review and let us know if you have any
concerns or corrections.
-->
# Acknowledgements
{:numbered="false"} {:numbered="false"}
<!-- [rfced] Acknowledgements:

a) Would you like to cite the draft here? If so, please provide the
draft string so we can create a reference entry.

Original:
   This document leverages text from an earlier draft by Paul Hoffman.

b) Would you like to include a surname for "Florence D" and "Ben S" rather
than just an initial? If so, please provide the surnames.

Original:
   This document leverages text from an earlier draft by Paul Hoffman.
   Thanks to Dan Wing, Florence D, Thom Wiggers, Sophia Grundner-
   Culemann, Panos Kampanakis, Ben S, Sofia Celi, Melchior Aelmans,
   Falko Strenzke, Deirdre Connolly, Hani Ezzadeen, Britta Hale, Scott
   Rose, Hilarie Orman, Thomas Fossati, Roman Danyliw, Mike Bishop,
   Mališa Vučinić, Éric Vyncke, Deb Cooley, Dirk Von Hugo and Daniel Van
   Geest for the discussion, review, and comments.
-->
This document leverages text from an earlier Internet-Draft by {{{Paul Hoffman}}}. Thanks to {{{Dan Wing}}}, {{{Florence D}}}, {{{Thom Wiggers}}}, {{{Sophia Grundner-Culemann}}}, {{{Panos Kampanakis}}}, {{{Ben S}}}, {{{Sofia Celi}}}, {{{Melchior Aelmans}}}, {{{Falko Strenzke}}}, {{{Deirdre Connolly}}}, {{{Hani Ezzadeen}}}, {{{Britta Hale}}}, {{{Scott Rose}}}, {{{Hilarie Orman}}}, {{{Thomas Fossati}}}, {{{Roman Danyliw}}}, {{{Mike Bishop}}}, {{{Mališa Vučinić}}}, {{{Éric Vyncke}}}, {{{Deb Cooley}}}, {{{Dirk Von Hugo}}}, and {{{Daniel Van Geest}}} for the discussion, review, and comments.
In particular, the authors would like to acknowledge the contributions to this document by {{{Kris Kwiatkowski}}}.
<!-- [rfced] Would you like to make use of <sup> for superscript in this
document? In the HTML and PDF, it appears as superscript. In the text output,
<sup> generates a^b, which was used in the original document. (Note that if
you would like to use <sup>, we will make the update once the file is
converted to RFCXML.)
-->
<!-- [rfced] Please review the "Inclusive Language" portion of the online
Style Guide <https://www.rfc-editor.org/styleguide/part2/#inclusive_language>
and let us know if any changes are needed. Updates of this nature typically
result in more precise language, which is helpful for readers. For example,
please consider whether "tradition" should be updated for clarity. While the
NIST website
<https://web.archive.org/web/20250214092458/https://www.nist.gov/nist-research-library/nist-technical-series-publications-author-instructions#table1>
indicates that this term is potentially biased, it is also ambiguous.
"Tradition" is a subjective term, as it is not the same for everyone. -->
<!-- [rfced] Abbreviations

a) We note that KEM is expanded in the following ways in this document:

   key encapsulation mechanism (KEM)
   key encapsulation method (KEM)
   key establishment method (KEM)

Should the latter two (one instance each) be updated to "key encapsulation
mechanism (KEM)" (most common in document) or simply "KEM" (as the
abbreviation was already expanded)? Or should these be handled in some other
way so that the expansion of KEM is consistent in the document?

b) How should "MAC" be expanded? As "Media Access Control (MAC)", "Message
Authentication Code (MAC)", or something else?

Original:
   It is crucial for the reader to understand that when
   the word "PQC" is mentioned in the document, it means asymmetric
   cryptography (or public key cryptography), and not any symmetric
   algorithms based on stream ciphers, block ciphers, hash functions,
   MACs, etc., which are less vulnerable to quantum computers.

c) We have updated the expansion for "AEAD" below as follows. Please review
and let us know any objections.

Original:
   HPKE [RFC9180] works with a combination of KEMs, KDFs, and
   authenticated encryption with additional data (AEAD) schemes.

Current:
   HPKE [RFC9180] works with a combination of KEMs, KDFs, and
   Authenticated Encryption with Associated Data (AEAD) schemes.

d) How should "BIKE" be expanded? As "Bit Flipping Key Encapsulation"?

Original:
   Examples include all the unbroken NIST Round 4 finalists: Classic
   McEliece, HQC (selected by NIST for standardization), and [BIKE].

e) We have added expansions for the following abbreviations upon first
use per Section 3.6 of RFC 7322 ("RFC Style Guide"). Please review each
expansion in the document carefully to ensure correctness.

   Security Association (SA)
   Trusted Execution Environments (TEEs)
   Hash to Obtain Random Subset with Trees (HORST)
   Hashed Message Authentication Code (HMAC)
   Internet of Things (IoT)
   Payment Card Industry (PCI)
-->
<!-- [rfced] We see both of the following forms used in the document. Should
these be uniform? If so, please let us know which form is preferred.

   hash-then-sign
   Hash-then-Sign
-->