Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service:
Architecture

B. Briscoe, Independent, UK
  ietf@bobbriscoe.net, https://bobbriscoe.net/
K. De Schepper, Nokia Bell Labs, Antwerp, Belgium
  koen.de_schepper@nokia.com, https://www.bell-labs.com/about/researcher-profiles/koende_schepper/
M. Bagnulo, Universidad Carlos III de Madrid, Av. Universidad 30, Leganes, Madrid 28911, Spain
  +34 91 6249500, marcelo@it.uc3m.es, https://www.it.uc3m.es
G. White, CableLabs, US
  G.White@CableLabs.com

Transport Area Working Group                              Internet-Draft

This document describes the L4S architecture, which enables Internet
applications to achieve Low queuing Latency, Low Loss, and Scalable
throughput (L4S). L4S is based on the insight that the root cause of
queuing delay is in the capacity-seeking congestion controllers of
senders, not in the queue itself. With the L4S architecture all Internet
applications could (but do not have to) transition away from congestion
control algorithms that cause substantial queuing delay, to a new class
of congestion controls that can seek capacity with very little queuing.
These are aided by a modified form of explicit congestion notification
(ECN) from the network. With this new architecture, applications can
have both low latency and high throughput.

The architecture primarily concerns incremental deployment. It
defines mechanisms that allow the new class of L4S congestion controls
to coexist with 'Classic' congestion controls in a shared network. The
aim is for L4S latency and throughput to be usually much better (and
rarely worse), while typically not impacting Classic performance.

It is increasingly common for all of the traffic at any one time in
a bottleneck link (e.g. a household's Internet access) to come from
applications that prefer low delay: interactive Web, Web services,
voice, conversational video, interactive video, interactive remote
presence, instant messaging, online gaming, remote desktop, cloud-based
applications and video-assisted remote control of machinery and
industrial processes. In the last decade or so, much has been done to
reduce propagation delay by placing caches or servers closer to users.
However, queuing remains a major, albeit intermittent, component of
latency. For instance, spikes of hundreds of milliseconds are not
uncommon, even with state-of-the-art active queue management
(AQM). Queuing in access network bottlenecks is typically configured
to cause overall network delay to roughly double during a long-running
flow, relative to the expected base (unloaded) path delay. Low loss is
also important because, for interactive applications, losses translate
into even longer retransmission delays.

It has been demonstrated that, once access network bit rates reach
levels now common in the developed world, increasing link capacity
offers diminishing returns if latency (delay) is not addressed. Therefore, the
goal is an Internet service with very Low queueing Latency, very Low
Loss and Scalable throughput (L4S). Very low queuing latency means less
than 1 millisecond (ms) on average and less than about 2 ms at
the 99th percentile. End-to-end delay above 50 ms or even above 20 ms
starts to feel unnatural for more demanding interactive applications. So
removing unnecessary delay variability increases the reach of these
applications (the distance over which they are comfortable to use). This
document describes the L4S architecture for achieving these goals.

Differentiated services (Diffserv) offers Expedited Forwarding
(EF) for some packets at the expense of
others, but this makes no difference when all (or most) of the traffic
at a bottleneck at any one time requires low latency. In contrast, L4S
still works well when all traffic is L4S - a service that gives without
taking needs none of the configuration or management baggage (traffic
policing, traffic contracts) associated with favouring some traffic
flows over others.

Queuing delay degrades performance intermittently. It occurs when a large enough capacity-seeking
(e.g. TCP) flow is running alongside the user's traffic in the
bottleneck link, which is typically in the access network, or when the
low latency application is itself a large capacity-seeking or
adaptive-rate (e.g. interactive video) flow. At these times, the performance
improvement from L4S must be sufficient that network operators will be
motivated to deploy it.

Active Queue Management (AQM) is part of the solution to queuing
under load. AQM improves performance for all traffic, but there is a
limit to how much queuing delay can be reduced by changing the
network alone, without addressing the root of the problem.

The root of the problem is the presence of standard congestion
control (Reno) or compatible variants
(e.g. CUBIC) that are used in TCP and
in other transports such as QUIC. We shall use
the term 'Classic' for these Reno-friendly congestion controls. Classic
congestion controls induce relatively large saw-tooth-shaped excursions
of queue occupancy. So if a network operator naively attempts to reduce
queuing delay by configuring an AQM to operate at a shallower queue, a
Classic congestion control will significantly underutilize the link at
the bottom of every saw-tooth. These sawteeth have also been growing in
duration as flow rate scales.

It has been demonstrated that if the sending host replaces a Classic
congestion control with a 'Scalable' alternative, and a suitable AQM is
deployed in the network, the performance under load of all the above
interactive applications can be significantly improved. For instance,
queuing delay under heavy load with the example DCTCP/DualQ solution
cited below on a DSL or Ethernet link is roughly 1 to 2 milliseconds at
the 99th percentile without losing link utilization (similar results
have been obtained for other link types). This compares with
5-20 ms on average with a Classic
congestion control and current state-of-the-art AQMs such as
FQ-CoDel, PIE or DOCSIS PIE, and about
20-30 ms at the 99th percentile.

L4S is designed for incremental deployment. It is possible to deploy
the L4S service at a bottleneck link alongside the existing best efforts
service so that unmodified
applications can start using it as soon as the sender's stack is
updated. Access networks are typically designed with one link as the
bottleneck for each site (which might be a home, small enterprise or
mobile device), so deployment at either or both ends of this link should
give nearly all the benefit in the respective direction. With some
transport protocols, namely TCP and SCTP, the sender has to check that
the receiver has been suitably updated to give more accurate feedback,
whereas with more recent transport protocols such as QUIC and DCCP, all
receivers have always been suitable.

This document presents the L4S architecture. It consists of three
components: network support to isolate L4S traffic from Classic traffic;
protocol features that allow network elements to identify L4S traffic;
and host support for L4S congestion controls. The protocol is defined
separately as an experimental
change to Explicit Congestion Notification (ECN). This document
describes and justifies the component parts and how they interact to
provide the scalable, low latency, low loss Internet service. It also
details the approach to incremental deployment, as briefly summarized
above.

This document describes the L4S architecture in three passes. First,
a brief overview gives the very high level idea and states the main
components with minimal rationale. This is only intended to give some
context for the terminology definitions that follow, and to explain
the structure of the rest of the document. The second pass goes into
more detail on each component with some rationale, but still mostly
stating what the architecture is, rather than why. Finally, the later
sections justify why each element of the solution was chosen and why
these choices were different from other solutions.

Having described the architecture, the document then clarifies its
applicability; that is, the applications and use-cases that motivated
the design, the challenges applying the architecture to various link
technologies, and various incremental deployment models, including
the two main deployment topologies, different sequences for
incremental deployment and various interactions with pre-existing
approaches. The document ends with the usual tailpieces, including
extensive discussion of traffic policing and other security
considerations.

Below we outline the three main components of the L4S architecture:
1) the scalable congestion control on the sending host; 2) the AQM at
the network bottleneck; and 3) the protocol between them.

But first, the main point to grasp is that low latency is not
provided by the network - low latency results from the careful behaviour
of the scalable congestion controllers used by L4S senders. The network
does have a role - primarily to isolate the low latency of the carefully
behaving L4S traffic from the higher queuing delay needed by traffic
with pre-existing Classic behaviour. The network also alters the way it
signals queue growth to the transport: it uses the Explicit Congestion
Notification (ECN) protocol, but it signals the very start of queue
growth immediately, without the smoothing delay typical of Classic
AQMs. Because ECN support is essential for L4S, senders use the ECN
field as the protocol that allows the network to identify which packets
are L4S and which are Classic.

Scalable congestion controls already exist.
They solve the scaling problem with Classic congestion controls,
such as Reno or Cubic. Because flow rate has scaled since TCP
congestion control was first designed in 1988, assuming the flow
lasts long enough, it now takes hundreds of round trips (and
growing) to recover after a congestion signal (whether a loss or an
ECN mark). Therefore, control of queuing and utilization
becomes very slack, and the slightest disturbances (e.g. from
new flows starting) prevent a high rate from being attained.

With a scalable congestion control, the average time
from one congestion signal to the next (the recovery time) remains
invariant as the flow rate scales, all other factors being equal.
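As a rough illustration of this invariance, consider the following
non-normative sketch. The formulas are idealized assumptions, not
taken from this document: a fixed 30 ms round trip, windows counted
in whole packets, and Reno's additive increase of one packet per RTT.

```python
# Non-normative sketch: recovery time (the time from one congestion
# signal to the next) for a Reno-style control vs a scalable one.
# Assumptions (not from this document): fixed RTT, window counted in
# packets, Reno adds ~1 packet per RTT after halving its window.

RTT = 0.03  # an assumed 30 ms round trip

def reno_recovery_time(rate_pps, rtt=RTT):
    """Reno halves its window on a signal, then adds ~1 packet per RTT,
    so regaining the peak takes about W/2 round trips, W = rate * RTT."""
    window = rate_pps * rtt          # packets in flight at the peak
    return (window / 2) * rtt        # seconds between signals

def scalable_recovery_time(rate_pps, rtt=RTT):
    """A scalable control (e.g. DCTCP) sees ~2 marks per RTT at any
    rate, so the time between signals is ~RTT/2, whatever the rate."""
    return rtt / 2

for rate in (1_250, 10_000, 80_000):          # packets per second
    print(f"{rate:>6} pkt/s:"
          f" Reno {reno_recovery_time(rate):7.3f} s,"
          f" scalable {scalable_recovery_time(rate):6.3f} s")
```

Scaling the rate by 8x multiplies the Reno-style recovery time by 8x,
while the scalable control's signal spacing stays at half an RTT.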
This maintains the same degree of control over queueing and
utilization whatever the flow rate, as well as ensuring that high
throughput is more robust to disturbances. The scalable control used
most widely (in controlled environments) is Data Center TCP
(DCTCP ), which has been implemented
and deployed in Windows Server Editions (since 2012), in Linux and
in FreeBSD. Although DCTCP as-is functions well over wide-area round
trip times, most implementations lack certain safety features that
would be necessary for use outside controlled environments like data
centres. So scalable
congestion control needs to be implemented in TCP and other
transport protocols (QUIC, SCTP, RTP/RTCP, RMCAT, etc.). Indeed,
between the present document being drafted and published, the
following scalable congestion controls were implemented: TCP
Prague, QUIC Prague, an L4S
variant of the RMCAT SCReAM controller
and the L4S ECN part of BBRv2 intended
for TCP and QUIC transports.

L4S traffic needs to be isolated from the
queuing latency of Classic traffic. One queue per application flow
(FQ) is one way to achieve this, e.g. FQ-CoDel. However, using just
two queues is sufficient and does not require inspection of transport
layer headers in the network, which is not always possible. With just
two queues, it might seem
impossible to know how much capacity to schedule for each queue
without inspecting how many flows at any one time are using each.
And it would be undesirable to arbitrarily divide access network
capacity into two partitions. The Dual Queue Coupled AQM was
developed as a minimal complexity solution to this problem. It acts
like a 'semi-permeable' membrane that partitions latency but not
bandwidth. As such, the two queues are for transition from Classic
to L4S behaviour, not bandwidth prioritization. A high level
explanation of how the per-flow-queue (FQ) and DualQ variants of L4S
work is given below, and the DualQ spec gives a
full explanation of the DualQ Coupled AQM framework. A specific
marking algorithm is not mandated for L4S AQMs. Appendices of the
DualQ spec give non-normative
examples that have been implemented and evaluated, and give
recommended default parameter settings. It is expected that L4S
experiments will improve knowledge of parameter settings and whether
the set of marking algorithms needs to be limited.

A sending host needs to distinguish L4S
and Classic packets with an identifier so that the network can
classify them into their separate treatments. The L4S identifier
spec concludes that
all alternatives involve compromises, but the ECT(1) and CE
codepoints of the ECN field represent a workable solution. As
already explained, the network also uses ECN to immediately signal
the very start of queue growth to the transport.

[Note to the RFC Editor (to be removed before publication as an RFC):
The following definitions are copied from the L4S ECN spec for the
reader's convenience, except that, here, Classic CC and Scalable CC
are condensed because they are expanded on later in this document.
Also, the definition of Traffic Policing is not needed in the L4S ECN
spec.]

Classic Congestion Control: A congestion control
behaviour that can co-exist with standard Reno without causing significantly negative impact on
its flow rate. The scaling problem
with Classic congestion control is explained, with examples, later in
this document.

Scalable Congestion Control: A congestion control
where the average time from one congestion signal to the next (the
recovery time) remains invariant as the flow rate scales, all other
factors being equal. For instance, DCTCP averages 2 congestion
signals per round-trip whatever the flow rate, as do other recently
developed scalable congestion controls, e.g. Relentless
TCP, TCP Prague, BBRv2 and the L4S
variant of SCReAM for real-time media. See Section 4.3
of the L4S ECN spec for more
explanation.

Classic service: The Classic service is intended for
all the congestion control behaviours that co-exist with
Reno (e.g. Reno itself,
Cubic, Compound and TFRC). The term 'Classic queue' means a queue
providing the Classic service.

L4S service: The
'L4S' service is intended for traffic from scalable congestion
control algorithms, such as the Prague congestion control, which was
derived from DCTCP. The L4S service
is for more general traffic than just Prague — it allows the
set of congestion controls with similar scaling properties to Prague
to evolve, such as the examples listed above (Relentless, SCReAM).
The term 'L4S queue' means a queue providing the L4S service.

The terms Classic or L4S can also qualify other
nouns, such as 'queue', 'codepoint', 'identifier', 'classification',
'packet', 'flow'. For example: an L4S packet means a packet with an
L4S identifier sent from an L4S congestion control.

Both Classic and L4S services can cope with a
proportion of unresponsive or less-responsive traffic as well, but
in the L4S case its rate has to be smooth enough or low enough to
not build a queue (e.g. DNS, VoIP, game sync datagrams,
etc.).

Reno-friendly: The subset of Classic traffic that is
friendly to the standard Reno congestion control defined for TCP in
RFC 5681. The TFRC spec indirectly implies that 'friendly' is defined as
"generally within a factor of two of the sending rate of a TCP flow
under the same conditions". Reno-friendly is used here in place of
'TCP-friendly', given the latter has become imprecise, because the
TCP protocol is now used with so many different congestion control
behaviours, and Reno is used in non-TCP transports such as
QUIC.

Classic ECN: The original Explicit Congestion
Notification (ECN) protocol, which
requires ECN signals to be treated as equivalent to drops, both when
generated in the network and when responded to by the sender.

L4S uses the ECN field as an identifier with the names for the four
codepoints of the 2-bit IP-ECN field unchanged from those defined in
the ECN spec: Not-ECT, ECT(0), ECT(1)
and CE, where ECT stands for ECN-Capable Transport and CE stands for
Congestion Experienced. A packet marked with the CE codepoint is
termed 'ECN-marked' or sometimes just 'marked' where the context
makes ECN obvious.

Site: A home, mobile device, small enterprise or
campus, where the network bottleneck is typically the access link to
the site. Not all network arrangements fit this model but it is a
useful, widely applicable generalization.

Traffic policing: Limiting traffic by dropping packets
or shifting them to a lower service class (as opposed to introducing
delay, which is termed traffic shaping). Policing can involve
limiting average rate and/or burst size. Policing focused on
limiting queuing but not average flow rate is termed congestion
policing, latency policing, burst policing or queue protection in
this document. Otherwise, the term rate policing is used.

The L4S architecture is composed of the elements in the following
three subsections.

The L4S architecture involves: a) unassignment of the previous use
of the identifier; b) reassignment of the same identifier; and c)
optional further identifiers.

An essential aspect of a scalable congestion control is the use
of explicit congestion signals. 'Classic' ECN requires an ECN signal to be treated as
equivalent to drop, both when it is generated in the network and
when it is responded to by hosts. L4S needs networks and hosts to
support a more fine-grained meaning for each ECN signal that is
less severe than a drop, so that the L4S signals:

  - can be much more frequent;

  - can be signalled immediately, without the significant delay
    required to smooth out fluctuations in the queue.

To enable L4S, the standards track Classic ECN
spec has had to be updated to allow
L4S packets to depart from the 'equivalent to drop' constraint.
RFC 8311 is a standards track update to relax
specific requirements in RFC 3168 (and certain other standards
track RFCs), which clears the way for the experimental changes
proposed for L4S. Also, the ECT(1) codepoint was previously
assigned as the experimental ECN nonce, which RFC 8311
recategorizes as historic to
make the codepoint available again. The L4S ECN spec specifies that
ECT(1) is used as the identifier to classify L4S packets into a
separate treatment from Classic packets. This satisfies the
requirement for identifying an alternative ECN treatment in RFC 4774.

The CE codepoint is
used to indicate Congestion Experienced by both L4S and Classic
treatments. This raises the concern that a Classic AQM earlier on
the path might have marked some ECT(0) packets as CE. Then these
packets will be erroneously classified into the L4S queue.
Appendix B of the L4S ECN spec explains why five unlikely
eventualities all have to coincide for this to have any
detrimental effect, which even then would only involve a
vanishingly small likelihood of a spurious retransmission.

A network operator might wish to include certain unresponsive,
non-L4S traffic in the L4S queue if it is deemed to be smoothly
enough paced and low enough rate not to build a queue. For
instance, VoIP, low rate datagrams to sync online games,
relatively low rate application-limited traffic, DNS, LDAP, etc.
This traffic would need to be tagged with specific identifiers,
e.g. a low latency Diffserv Codepoint such as Expedited
Forwarding (EF ), Non-Queue-Building
(NQB ), or
operator-specific identifiers.The L4S architecture aims to provide low latency without the need for per-flow operations in network
components. Nonetheless, the architecture does not preclude per-flow
solutions. The following bullets describe the known arrangements: a)
the DualQ Coupled AQM with an L4S AQM in one queue coupled from a
Classic AQM in the other; b) Per-Flow Queues with an instance of a
Classic and an L4S AQM in each queue; c) Dual queues with per-flow
AQMs, but no per-flow queues:

The Dual Queue Coupled AQM achieves the 'semi-permeable'
membrane property mentioned earlier as follows:

  - Latency isolation: Two separate queues are used to isolate
    L4S queuing delay from the larger queue that Classic traffic
    needs to maintain full utilization.

  - Bandwidth pooling: The two queues act as if they are a
    single pool of bandwidth in which flows of either type get
    roughly equal throughput without the scheduler needing to
    identify any flows. This is achieved by having an AQM in each
    queue, but the Classic AQM provides a congestion signal to
    both queues in a manner that ensures a consistent response
    from the two classes of congestion control. Specifically, the
    Classic AQM generates a drop/mark probability based on
    congestion in its own queue, which it uses both to drop/mark
    packets in its own queue and to affect the marking probability
    in the L4S queue. The strength of the coupling of the
    congestion signalling between the two queues is enough to make
    the L4S flows slow down to leave the right amount of capacity
    for the Classic flows (as they would if they were the same
    type of traffic sharing the same queue).

Then the scheduler can serve the L4S queue with priority
(denoted by the '1' on the higher priority input), because the L4S
traffic isn't offering up enough traffic to use all the priority
that it is given. Therefore:

  - for latency isolation on short time-scales (sub-round-trip),
    the prioritization of the L4S queue protects its low latency
    by allowing bursts to dissipate quickly;

  - but for bandwidth pooling on longer time-scales (round-trip
    and longer), the Classic queue creates an equal and opposite
    pressure against the L4S traffic to ensure that neither has
    priority when it comes to bandwidth - the tension between
    prioritizing L4S and coupling the marking from the Classic AQM
    results in approximate per-flow fairness.

To protect against unresponsive traffic taking advantage
of the prioritization of the L4S queue and starving the Classic
queue, it is advisable for the priority to be conditional, not
strict (see Appendix A of the DualQ spec). When there is no Classic
traffic, the L4S queue's own AQM comes into play. It starts congestion
marking with a very shallow queue, so L4S traffic maintains very
low queuing delay.

If either queue becomes
persistently overloaded, drop of ECN-capable packets is
introduced, as recommended in Section 7 of the ECN spec and Section 4.2.1 of the AQM
recommendations. Then both queues
introduce the same level of drop.

The Dual Queue Coupled AQM has been specified as
generically as possible without specifying the
particular AQMs to use in the two queues so that designers are
free to implement diverse ideas. Informational appendices in that
draft give pseudocode examples of two different specific AQM
approaches: one called DualPI2 (pronounced Dual PI
Squared) that uses the PI2
variant of PIE, and a zero-config variant of RED called Curvy RED.
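The coupling at the heart of DualPI2 can be sketched in a few lines.
This is a non-normative simplification of the pseudocode in the DualQ
spec: a single base probability p' from a PI controller is squared to
drop/mark Classic packets, while L4S packets are marked at the coupled
probability k * p' (or the L4S queue's own native probability if that
is higher). The default coupling factor is k = 2; the names below,
such as `p_native`, are illustrative stand-ins, not from the spec.

```python
# Non-normative sketch of the DualPI2 coupling law. A PI controller
# (not shown) computes a base probability p_base from the Classic
# queue; the function names here are illustrative, not normative.

K = 2.0  # default coupling factor recommended in the DualQ spec

def classic_drop_prob(p_base):
    """Classic queue: the base probability is squared (the 'PI Squared'
    idea), matching the 1/sqrt(p) rate law of Reno-friendly flows."""
    return min(p_base ** 2, 1.0)

def l4s_mark_prob(p_base, p_native=0.0):
    """L4S queue: marked at the coupled probability K * p_base, or at
    the queue's own shallow-threshold probability if that is higher."""
    return min(max(K * p_base, p_native), 1.0)

# With a base probability of 10%, Classic flows see ~1% drop while
# L4S flows see ~20% marking: the pressure that lets the two classes
# share bandwidth roughly equally despite the L4S queue's priority.
print(classic_drop_prob(0.1), l4s_mark_prob(0.1))
```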
A DualQ Coupled AQM based on PIE has also been specified and
implemented for Low Latency DOCSIS.

Per-Flow Queues and AQMs: A scheduler with per-flow queues such
as FQ-CoDel or FQ-PIE can be used for L4S. For instance within
each queue of an FQ-CoDel system, as well as a CoDel AQM, there is
typically also the option of ECN marking at an immediate
(unsmoothed) shallow threshold to support use in data centres (see
Sec. 5.2.7 of the FQ-CoDel spec). In
Linux, this has been modified so that the shallow threshold can be
solely applied to ECT(1) packets. Then, if there is a flow of non-ECN or
ECT(0) packets in the per-flow-queue, the Classic AQM
(e.g. CoDel) is applied; while if there is a flow of ECT(1)
packets in the queue, the shallower (typically sub-millisecond)
threshold is applied. In addition, ECT(0) and not-ECT packets
could potentially be classified into a separate flow-queue from
ECT(1) and CE packets to avoid them mixing if they share a common
flow-identifier (e.g. in a VPN).

Dual-queues, but per-flow AQMs: It should also be possible to
use dual queues for isolation, but with per-flow marking to
control flow-rates (instead of the coupled per-queue marking of
the Dual Queue Coupled AQM). One of the two queues would be for
isolating L4S packets, which would be classified by the ECN
codepoint. Flow rates could be controlled by flow-specific
marking. The policy goal of the marking could be to differentiate
flow rates (e.g. schemes that require
additional signalling of a per-flow 'value'), or to equalize
flow-rates (perhaps in a similar way to Approx Fair
CoDel, but with two queues,
not one).

Note that whenever the term
'DualQ' is used loosely without saying whether marking is
per-queue or per-flow, it means a dual queue AQM with per-queue
marking.

The L4S architecture includes two main mechanisms in the end host
that we enumerate next:

Scalable Congestion Control at the sender: The L4S ECN spec defines
a scalable congestion control as one where the average time from one
congestion signal
to the next (the recovery time) remains invariant as the flow rate
scales, all other factors being equal. Data Center TCP is the most
widely used example. It has been documented as an informational
record of the protocol currently in use in controlled
environments. A draft list of safety
and performance improvements for a scalable congestion control to
be usable on the public Internet has been drawn up (the so-called
'Prague L4S requirements' in Appendix A of ). The subset that involve
risk of harm to others have been captured as normative
requirements in Section 4 of . TCP Prague has been
implemented in Linux as a reference implementation to address
these requirements.

Transport protocols other than TCP use various
congestion controls that are designed to be friendly with Reno.
Before they can use the L4S service, they will need to be updated
to implement a scalable congestion response, which they will have
to indicate by using the ECT(1) codepoint. Scalable variants are
under consideration for more recent transport protocols,
e.g. QUIC, and the L4S ECN part of BBRv2 is a scalable
congestion control intended for the TCP and QUIC transports,
amongst others. Also, an L4S variant of the RMCAT SCReAM
controller has been
implemented for media transported
over RTP.

Section 4.3 of the L4S ECN
spec defines
scalable congestion control in more detail, and specifies the
requirements that an L4S scalable congestion control has to comply
with.

The ECN feedback in some transport protocols is already
sufficiently fine-grained for L4S (specifically DCCP and QUIC). But
others either require an update or are in the process of being
updated:

For the case of TCP, the feedback protocol for ECN embeds
the assumption from Classic ECN
that an ECN mark is equivalent to a drop, making it unusable
for a scalable TCP. Therefore, the implementation of TCP
receivers will have to be upgraded. Work to standardize and
implement more accurate ECN feedback for TCP (AccECN) is in
progress.

ECN feedback was only roughly sketched in an appendix of
the now obsoleted second specification of SCTP, while a fuller
specification was proposed
in a long-expired draft. A new design would need
to be implemented and deployed before SCTP could support
L4S.

For RTP, sufficient ECN feedback was defined some time ago, but a
more recent spec defines the
latest standards track improvements.

Explicit
congestion signalling is a key part of the L4S approach. In
contrast, use of drop as a congestion signal creates a tension
because drop is both an impairment (less would be better) and a
useful signal (more would be better):

Explicit congestion signals can be used many times per
round trip, to keep tight control, without any impairment.
Under heavy load, even more explicit signals can be applied,
so that the queue can be kept short whatever the load. In
contrast, Classic AQMs have to introduce very high packet drop
at high load to keep the queue short. By using ECN, an L4S
congestion control's sawtooth reduction can be smaller and
therefore return to the operating point more often, without
worrying that more sawteeth will cause more signals. The
consequent smaller amplitude sawteeth fit between an empty
queue and a very shallow marking threshold (~1 ms in the
public Internet), so queue delay variation can be very low,
without risk of under-utilization.

Explicit congestion signals can be emitted immediately to
track fluctuations of the queue. L4S shifts smoothing from the
network to the host. The network doesn't know the round trip
times of any of the flows. So if the network is responsible
for smoothing (as in the Classic approach), it has to assume a
worst case RTT, otherwise long RTT flows would become
unstable. This delays Classic congestion signals by 100-200
ms. In contrast, each host knows its own round trip time. So,
in the L4S approach, the host can smooth each flow over its
own RTT, introducing no more smoothing delay than strictly
necessary (usually only a few milliseconds). A host can also
choose not to introduce any smoothing delay if appropriate,
e.g. during flow start-up.

Neither of the above is feasible if explicit congestion
signalling has to be considered 'equivalent to drop' (as was
required with Classic ECN), because
drop is an impairment as well as a signal. So drop cannot be
excessively frequent, and drop cannot be immediate, otherwise too
many drops would turn out to have been due to only a transient
fluctuation in the queue that would not have warranted dropping a
packet in hindsight. Therefore, in an L4S AQM, the L4S queue uses
a new L4S variant of ECN that is not equivalent to drop (see
Section 5.2 of the L4S ECN spec), while the Classic queue
uses either Classic ECN or drop,
which are equivalent to each other.

Before
Classic ECN was standardized, there were various proposals to give
an ECN mark a different meaning from drop. However, there was no
particular reason to agree on any one of the alternative meanings,
so 'equivalent to drop' was the only compromise that could be
reached. RFC 3168 contains a statement that:

  "An environment where all end nodes were ECN-Capable could
  allow new criteria to be developed for setting the CE
  codepoint, and new congestion control mechanisms for end-node
  reaction to CE packets. However, this is a research issue, and
  as such is not addressed in this document."

L4S congestion controls
keep queue delay low whereas Classic congestion controls need a
queue of the order of the RTT to avoid under-utilization. One
queue cannot have two lengths, therefore L4S traffic needs to be
isolated in a separate queue (e.g. DualQ) or queues
(e.g. FQ).

Coupling the
congestion notification between two queues as in the DualQ Coupled
AQM is not necessarily essential, but it is a simple way to allow
senders to determine their rate, packet by packet, rather than be
overridden by a network scheduler. An alternative is for a network
scheduler to control the rate of each application flow (see
the discussion later in this document).

Once there are at
least two treatments in the network, hosts need an identifier at
the IP layer to distinguish which treatment they intend to
use.

A scalable
congestion control in the host keeps the signalling frequency from
the network high whatever the flow rate, so that queue delay
variations can be small when conditions are stable, and rate can
track variations in available capacity as rapidly as possible
otherwise.

Latency is not the only concern of L4S.
The 'Low Loss' part of the name denotes that L4S generally
achieves zero congestion loss due to its use of ECN. Otherwise,
loss would itself cause delay, particularly for short flows, due
to retransmission delay.

The "Scalable throughput" part
of the name denotes that the per-flow throughput of scalable
congestion controls should scale indefinitely, avoiding the
imminent scaling problems with Reno-friendly congestion control
algorithms. It was known when TCP
congestion avoidance was first developed in 1988 that it would not
scale to high bandwidth-delay products (see footnote 6 of the
original 1988 paper). Today, regular broadband flow rates over WAN
distances are already beyond the scaling range of Classic Reno
congestion control. So 'less unscalable' Cubic and Compound variants of TCP have been
successfully deployed. However, these are now approaching their
scaling limits. For instance, consider
a scenario with a maximum RTT of 30 ms at the peak
of each sawtooth. As Reno packet rate scales 8x from 1,250 to
10,000 packet/s (from 15 to 120 Mb/s with 1500 B
packets), the time to recover from a congestion event rises
proportionately by 8x as well, from 422 ms to 3.38 s. It
is clearly problematic for a congestion control to take multiple
seconds to recover from each congestion event. Cubic was developed to be less unscalable, but it is
approaching its scaling limit; with the same max RTT of
30 ms, at 120 Mb/s Cubic is still fully in its
Reno-friendly mode, so it takes about 4.3 s to recover.
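The true-CUBIC recovery times quoted in this section can be reproduced from the standard CUBIC window curve; a sketch, assuming the published CUBIC constants C = 0.4 segments/s^3 and beta = 0.7, and 1500 B packets:

```python
# Sketch of the arithmetic behind these recovery times: in true CUBIC
# mode, the time to regain the pre-reduction window W_max is
# K = (W_max * (1 - beta) / C)^(1/3) seconds.
C, BETA = 0.4, 0.7
MSS_BITS = 1500 * 8  # 1500 B packets

def cubic_recovery_time(rate_bps: float, rtt_s: float) -> float:
    w_max = rate_bps / MSS_BITS * rtt_s          # window in segments
    return (w_max * (1 - BETA) / C) ** (1 / 3)   # K, in seconds

print(round(cubic_recovery_time(960e6, 0.030), 1))   # 12.2 s
print(round(cubic_recovery_time(7.68e9, 0.030), 1))  # 24.3 s
# Each further 8x rate scaling doubles K, since the cube root of 8 is 2.
```

The same formula reproduces the 12.2 s and 24.3 s figures given for 960 Mb/s and 7.68 Gb/s below.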
However, once the flow rate scales by 8x again to 960 Mb/s it
enters true Cubic mode, with a recovery time of 12.2 s. From
then on, each further scaling by 8x doubles Cubic's recovery time
(because the cube root of 8 is 2), e.g. at 7.68 Gb/s the
recovery time is 24.3 s. In contrast, a scalable congestion
control like DCTCP or TCP Prague induces 2 congestion signals per
round trip on average, which remains invariant for any flow rate,
keeping dynamic control very tight.

For a
feel of where the global average lone-flow download sits on this
scale at the time of writing (2021), according to globally averaged fixed access capacity was 103
Mb/s in 2020 and the average base RTT to a CDN was 25-34 ms in 2019.
Averaging of per-country data was weighted by Internet user
population (data collected globally is necessarily of variable
quality, but the paper does double-check that the outcome compares
well against a second source). So a lone CUBIC flow would at best
take about 200 round trips (5 s) to recover from each of its
sawtooth reductions, if the flow even lasted that long. This is
described as 'at best' because it assumes everyone uses an AQM,
whereas in reality most users still have a (probably bloated)
tail-drop buffer. In the tail-drop case, likely average recovery
time would be at least 4x 5 s, if not more, because RTT under load
would be at least double that of an AQM, and recovery time depends
on the square of RTT.

Although work on
scaling congestion controls tends to start with TCP as the
transport, the above is not intended to exclude other transports
(e.g. SCTP, QUIC) or less elastic algorithms
(e.g. RMCAT), which all tend to adopt the same or similar
developments.

All the following approaches address some part of the same problem
space as L4S. In each case, it is shown that L4S complements them or
improves on them, rather than being a mutually exclusive
alternative:Diffserv addresses the problem of
bandwidth apportionment for important traffic as well as queuing
latency for delay-sensitive traffic. Of these, L4S solely
addresses the problem of queuing latency. Diffserv will still be
necessary where important traffic requires priority (e.g. for
commercial reasons, or for protection of critical infrastructure
traffic) - see .
Nonetheless, the L4S approach can provide low latency for all
traffic within each Diffserv class (including the case where there
is only the one default Diffserv class).

Also, Diffserv can only provide a latency benefit
if a small subset of the traffic on a bottleneck link requests low
latency. As already explained, it has no effect when all the
applications in use at one time at a single site (home, small
business or mobile device) require low latency. In contrast,
because L4S works for all traffic, it needs none of the management
baggage (traffic policing, traffic contracts) associated with
favouring some packets over others. This lack of management
baggage ought to give L4S a better chance of end-to-end
deployment.

In particular, because networks
tend not to trust end systems to identify which packets should be
favoured over others, where networks assign packets to Diffserv
classes they tend to use packet inspection of application flow
identifiers or deeper inspection of application signatures. Thus,
nowadays, Diffserv doesn't always sit well with encryption of the
layers above IP . So users have to choose
between privacy and QoS.

As with Diffserv,
the L4S identifier is in the IP header. But, in contrast to
Diffserv, the L4S identifier does not convey a want or a need for
a certain level of quality. Rather, it promises a certain
behaviour (scalable congestion response), which networks can
objectively verify if they need to. This is because low delay
depends on collective host behaviour, whereas bandwidth priority
depends on network behaviour.

AQMs such as PIE and FQ-CoDel
give a significant reduction in queuing delay relative to no AQM
at all. L4S is intended to complement these AQMs, and should not
distract from the need to deploy them as widely as possible.
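A rough way to see why AQMs alone cannot cut delay much further is the textbook buffer-sizing rule of thumb (an approximation, not something this document specifies): with multiplicative-decrease factor beta, a single queue must buffer about (1 - beta)/beta of a bandwidth-delay product to stay fully utilized across each sawtooth.

```python
# Rough textbook model: required buffer for full utilization, and the
# queuing delay that buffer adds at the sawtooth peak.
def buffer_and_peak_delay(beta: float, capacity_bps: float, base_rtt_s: float):
    bdp_bits = capacity_bps * base_rtt_s
    buf_bits = (1 - beta) / beta * bdp_bits   # buffer to absorb the sawtooth
    return buf_bits, buf_bits / capacity_bps  # (bits, seconds of added delay)

# Reno (beta = 0.5) on a 100 Mb/s, 20 ms path needs a full BDP of
# buffer (~20 ms of extra delay at the peak); a small-sawtooth control
# (beta ~ 0.9) needs only ~2.2 ms.
```

In this simplified model, shrinking the sawtooth amplitude at the sender (beta closer to 1) is what shrinks the buffer the network needs, which is the point made in the surrounding text.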
Nonetheless, AQMs alone cannot reduce queuing delay too far
without significantly reducing link utilization, because the root
cause of the problem is on the host - where Classic congestion
controls use large saw-toothing rate variations. The L4S approach
resolves this tension between delay and utilization by enabling
hosts to minimize the amplitude of their sawteeth. A single-queue
Classic AQM is not sufficient to allow hosts to use small sawteeth
for two reasons: i) smaller sawteeth would not get lower delay in
an AQM designed for larger amplitude Classic sawteeth, because a
queue can only have one length at a time; and ii) much smaller
sawteeth implies much more frequent sawteeth, so L4S flows would
drive a Classic AQM into a high level of ECN-marking, which would
appear as heavy congestion to Classic flows, which in turn would
greatly reduce their rate as a result (see ).

Similarly, per-flow
approaches such as FQ-CoDel or Approx Fair CoDel are not incompatible with the L4S approach.
However, per-flow queuing alone is not enough - it only isolates
the queuing of one flow from others; not from itself. Per-flow
implementations need to have support for scalable congestion
control added, which has already been done for FQ-CoDel in Linux
(see Sec.5.2.7 of and ). Without this simple modification,
per-flow AQMs like FQ-CoDel would still not be able to support
applications that need both very low delay and high bandwidth,
e.g. video-based control of remote procedures, or interactive
cloud-based video (see Note below).

Although per-flow techniques are not incompatible
with L4S, it is important to have the DualQ alternative. This is
because handling end-to-end (layer 4) flows in the network (layer
3 or 2) precludes some important end-to-end functions. For
instance:

- Per-flow forms of L4S like FQ-CoDel are incompatible with
full end-to-end encryption of transport layer identifiers for
privacy and confidentiality (e.g. IPSec or encrypted VPN
tunnels, as opposed to DTLS over UDP), because they require
packet inspection to access the end-to-end transport flow
identifiers. In contrast, the DualQ
form of L4S requires no deeper inspection than the IP layer.
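As an illustration, a minimal DualQ classifier can be sketched using nothing beyond the 2-bit ECN field in the IP header (codepoints per the ECN and L4S ECN specs; ECT(1) identifies L4S, and CE is classified into the L4S queue by default):

```python
# Minimal sketch (not normative): classify on the ECN field alone.
# Codepoints: Not-ECT = 0b00, ECT(1) = 0b01, ECT(0) = 0b10, CE = 0b11.
def classify(tos_byte: int) -> str:
    ecn = tos_byte & 0b11   # low two bits of the TOS/Traffic Class byte
    return "L4S" if ecn in (0b01, 0b11) else "Classic"
```

No transport-layer ports or flow identifiers are touched, which is why this classification survives full end-to-end encryption above IP.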
So, as long as operators take the DualQ approach, their users
can have both very low queuing delay and full end-to-end
encryption .

- With per-flow forms of L4S, the network takes over control
of the relative rates of each application flow. Some see it as
an advantage that the network will prevent some flows running
faster than others. Others consider it an inherent part of the
Internet's appeal that applications can control their rate
while taking account of the needs of others via congestion
signals. They maintain that this has allowed applications with
interesting rate behaviours to evolve, for instance, variable
bit-rate video that varies around an equal share rather than
being forced to remain equal at every instant, or e2e
scavenger behaviours that use
less than an equal share of capacity .

The L4S
architecture does not require the IETF to commit to one
approach over the other, because it supports both, so that the
'market' can decide. Nonetheless, in the spirit of 'Do one
thing and do it well' , the
DualQ option provides low delay without prejudging the issue
of flow-rate control. Then, flow rate policing can be added
separately if desired. A policer would allow application
control up to a point, but the network would still be able to set the point at which it intervened to prevent one
flow completely starving another.

Note: It might seem that
self-inflicted queuing delay within a per-flow queue should
not be counted, because if the delay wasn't in the network it
would just shift to the sender. However, modern adaptive
applications, e.g. HTTP/2
or some interactive media applications (see ), can keep low latency objects at the
front of their local send queue by shuffling priorities of
other objects dependent on the progress of other transfers
(for example see ). They cannot shuffle
objects once they have released them into the network.

Here again, L4S is
not an alternative to ABE but a complement that introduces much
lower queuing delay. ABE alters the
host behaviour in response to ECN marking to utilize a link better
and give ECN flows faster throughput. It uses ECT(0) and assumes
the network still treats ECN and drop the same. Therefore, ABE
exploits any lower queuing delay that AQMs can provide. But, as
explained above, AQMs still cannot reduce queuing delay too far
without losing link utilization (to allow for other, non-ABE,
flows).

Bottleneck Bandwidth and Round-trip propagation
time (BBR ) controls
queuing delay end-to-end without needing any special logic in the
network, such as an AQM. So it works pretty much on any path. BBR
keeps queuing delay reasonably low, but perhaps not quite as low
as with state-of-the-art AQMs such as PIE or FQ-CoDel, and
certainly nowhere near as low as with L4S. Queuing delay is also
not consistently low, due to BBR's regular bandwidth probing
spikes and its aggressive flow start-up phase.

L4S complements BBR. Indeed, BBRv2 can use L4S ECN
where available and a scalable L4S congestion control behaviour in
response to any ECN signalling from the path . The L4S ECN signal complements the delay based
congestion control aspects of BBR with an explicit indication that
hosts can use, both to converge on a fair rate and to keep below a
shallow queue target set by the network. Without L4S ECN, both
these aspects need to be assumed or estimated.

A transport layer that solves the current latency issues will
provide new service, product and application opportunities.

With the L4S approach, the following existing applications also
experience significantly better quality of experience under load:
- Gaming, including cloud based gaming;
- VoIP;
- Video conferencing;
- Web browsing;
- (Adaptive) video streaming;
- Instant messaging.

The significantly lower queuing latency also enables some
interactive application functions to be offloaded to the cloud that
would hardly even be usable today:

- Cloud based interactive video;
- Cloud based virtual and augmented reality.

The above two applications have been successfully demonstrated with
L4S, both running together over a 40 Mb/s broadband access link
loaded up with the numerous other latency sensitive applications in
the previous list as well as numerous downloads - all sharing the same
bottleneck queue simultaneously . For
the former, a panoramic video of a football stadium could be swiped
and pinched so that, on the fly, a proxy in the cloud could generate a
sub-window of the match video under the finger-gesture control of each
user. For the latter, a virtual reality headset displayed a viewport
taken from a 360-degree camera in a racing car. The user's head
movements controlled the viewport extracted by a cloud-based proxy. In
both cases, with 7 ms end-to-end base delay, the additional
queuing delay of roughly 1 ms was so low that it seemed the video
was generated locally.

Using a swiping finger gesture or head movement to pan a video is an extremely latency-demanding action — far more demanding than VoIP, because human vision can detect extremely low delays of the
order of single milliseconds when delay is translated into a visual
lag between a video and a reference point (the finger or the
orientation of the head sensed by the balance system in the inner ear
— the vestibular system). With an alternative AQM, the video
noticeably lagged behind the finger gestures and head movements.

Without the low queuing delay of L4S, cloud-based applications like
these would not be credible without significantly more access
bandwidth (to deliver all possible video that might be viewed) and
more local processing, which would increase the weight and power
consumption of head-mounted displays. When all interactive processing
can be done in the cloud, only the data to be rendered for the end
user needs to be sent.

Other low latency high bandwidth applications such as:

- Interactive remote presence;
- Video-assisted remote control of machinery or industrial processes.

are not credible at all without very low queuing delay. No
amount of extra access bandwidth or local processing can make up for
lost time.

The following use-cases for L4S are being considered by various interested parties:

- Where the bottleneck is one of various types of access network: e.g. DSL, Passive Optical Networks (PON), DOCSIS cable, mobile, satellite (see for some technology-specific details)

- Private networks of heterogeneous data centres, where there is no single administrator that can arrange for all the simultaneous changes to senders, receivers and network needed to deploy DCTCP:

  - a set of private data centres interconnected over a wide area with separate administrations, but within the same company

  - a set of data centres operated by separate companies interconnected by a community of interest network (e.g. for the finance sector)

  - multi-tenant (cloud) data centres where tenants choose their operating system stack (Infrastructure as a Service - IaaS)

- Different types of transport (or application) congestion control: elastic (TCP/SCTP); real-time (RTP, RMCAT); query-response (DNS/LDAP).

- Where low delay quality of service is required, but without inspecting or intervening above the IP layer : mobile and other networks have tended to inspect higher layers in order to guess application QoS requirements. However, with growing demand for support of privacy and encryption, L4S offers an alternative. There is no need to select which traffic to favour for queuing, when L4S can give favourable queuing to all traffic.

- If queuing delay is minimized, applications with a fixed delay budget can communicate over longer distances, or via a longer chain of service functions or onion routers.

- If delay jitter is minimized, it is possible to reduce the dejitter buffers on the receive end of video streaming, which should improve the interactive experience.

Certain link technologies aggregate data from multiple packets into
bursts, and buffer incoming packets while building each burst. Wi-Fi,
PON and cable all involve such packet aggregation, whereas fixed
Ethernet and DSL do not. No sender, whether L4S or not, can do
anything to reduce the buffering needed for packet aggregation. So an
AQM should not count this buffering as part of the queue that it
controls, given no amount of congestion signals will reduce it.

Certain link technologies also add buffering for other reasons, specifically:

- Radio links (cellular, Wi-Fi, satellite) that are distant from the source are particularly challenging. The radio link capacity can vary rapidly by orders of magnitude, so it is considered desirable to hold a standing queue that can utilize sudden increases of capacity;

- Cellular networks are further complicated by a perceived need to buffer in order to make hand-overs imperceptible.

L4S cannot remove the need for all these different forms of buffering. However, by removing 'the longest pole in the tent'
(buffering for the large sawteeth of Classic congestion controls), L4S
exposes all these 'shorter poles' to greater scrutiny.

Until now, the buffering needed
to be over-specified - with the excuse that none were 'the longest
pole in the tent'. But having removed the 'longest pole', it becomes
worthwhile to minimize them, for instance reducing packet aggregation
burst sizes and MAC scheduling intervals.

Also, certain link types, particularly radio-based links, are far
more prone to transmission losses. explains how an L4S response to
loss has to be as drastic as a Classic response. Nonetheless, research
referred to in the same section has demonstrated potential for
considerably more effective loss repair at the link layer, due to the
relaxed ordering constraints of L4S packets.

L4S AQMs, whether DualQ or FQ, are, in themselves, an incremental deployment
mechanism for L4S - so that L4S traffic can coexist with existing
Classic (Reno-friendly) traffic.
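The coexistence mechanism can be sketched from the DualQ Coupled AQM's coupling law (a simplified sketch in the style of the DualQ spec; k is the coupling factor, typically 2): the Classic AQM's base probability p' is squared for Classic traffic but applied linearly to L4S traffic, which counters the square-root law of Reno-friendly flows.

```python
# Simplified sketch: Classic packets see p_C = p'^2, while the L4S
# queue is ECN-marked with at least p_CL = k * p'. A Reno-friendly
# flow's rate ~ 1/sqrt(p_C) = 1/p', and a scalable flow's rate
# ~ 1/p_CL = 1/(k * p'), so the two converge on similar rates.
K = 2  # coupling factor

def drop_mark_probs(p_base: float):
    p_classic = p_base ** 2                  # squared for Classic drop/mark
    p_l4s_coupled = min(1.0, K * p_base)     # linear, coupled into L4S queue
    return p_classic, p_l4s_coupled
```

This is why the coupled DualQ lets the two classes of congestion control share capacity without one starving the other.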
explains why only deploying an L4S AQM in one node at each end of the
access link will realize nearly all the benefit of L4S.

L4S involves both end systems and the network, so suggests some typical sequences to
deploy each part, and why there will be an immediate and significant
benefit after deploying just one part. and describe the converse
incremental deployment case where there is no L4S AQM at the network
bottleneck, so any L4S flow traversing this bottleneck has to take
care in case it is competing with Classic traffic.

L4S AQMs will not have to be deployed throughout the Internet
before L4S can benefit anyone. Operators of public Internet access
networks typically design their networks so that the bottleneck will
nearly always occur at one known (logical) link. This confines the
cost of queue management technology to one place.

The case of mesh networks is different and will be discussed
later in this section. But the known bottleneck case is generally
true for Internet access to all sorts of different 'sites', where
the word 'site' includes home networks, small- to medium-sized
campus or enterprise networks and even cellular devices (). Also, this known-bottleneck
case tends to be applicable whatever the access link technology;
whether xDSL, cable, PON, cellular, line of sight wireless or
satellite.

Therefore, the full benefit of the L4S service should be
available in the downstream direction when an L4S AQM is deployed at
the ingress to this bottleneck link. And similarly, the full
upstream service will be available once an L4S AQM is deployed at
the ingress into the upstream link. (Of course, multi-homed sites
would only see the full benefit once all their access links were
covered.)

Deployment in mesh topologies depends on how overbooked the core
is. If the core is non-blocking, or at least generously provisioned
so that the edges are nearly always the bottlenecks, it would only
be necessary to deploy an L4S AQM at the edge bottlenecks. For
example, some data-centre networks are designed with the bottleneck
in the hypervisor or host NICs, while others bottleneck at the
top-of-rack switch (both the output ports facing hosts and those
facing the core).

An L4S AQM would often next be needed where the Wi-Fi links in a
home sometimes become the bottleneck. And an L4S AQM would
eventually also need to be deployed at any other persistent
bottlenecks such as network interconnections, e.g. some public
Internet exchange points and the ingress and egress to WAN links
interconnecting data-centres.

For any one L4S flow to provide benefit, it requires three (or
sometimes two) parts to have been deployed: i) the congestion
control at the sender; ii) the AQM at the bottleneck; and iii) older
transports (namely TCP) need upgraded receiver feedback too. This
was the same deployment problem that ECN faced, so we have learned from that experience.

Firstly, L4S deployment exploits the fact that DCTCP already
exists on many Internet hosts (Windows, FreeBSD and Linux); both
servers and clients. Therefore, an L4S AQM can be deployed at a
network bottleneck to immediately give a working deployment of all
the L4S parts for testing, as long as the ECT(0) codepoint is
switched to ECT(1). DCTCP needs some safety concerns to be fixed for
general use over the public Internet (see Section 4.3 of the L4S ECN
spec ), but DCTCP is
not on by default, so these issues can be managed within controlled
deployments or controlled trials.

Secondly, the performance improvement with L4S is so significant
that it enables new interactive services and products that were not
previously possible. It is much easier for companies to initiate new
work on deployment if there is budget for a new product trial. If,
in contrast, there were only an incremental performance improvement
(as with Classic ECN), spending on deployment tends to be much
harder to justify.

Thirdly, the L4S identifier is defined so that initially network
operators can enable L4S exclusively for certain customers or
certain applications. But this is carefully defined so that it does
not compromise future evolution towards L4S as an Internet-wide
service. This is because the L4S identifier is defined not only as
the end-to-end ECN field, but it can also optionally be combined
with any other packet header or some status of a customer or their
access link (see section 5.4 of ). Operators could do this
anyway, even if it were not blessed by the IETF. However, it is best
for the IETF to specify that, if they use their own local
identifier, it must be in combination with the IETF's identifier.
Then, if an operator has opted for an exclusive local-use approach,
later they only have to remove this extra rule to make the service
work Internet-wide - it will already traverse middleboxes, peerings,
etc. illustrates some example
sequences in which the parts of L4S might be deployed. It consists
of the following stages, preceded by a presumption that DCTCP is
already installed at both ends:

DCTCP is not applicable for use over the public Internet, so
it is emphasized here that any DCTCP flow has to be completely
contained within a controlled trial environment. Within this trial environment, once an L4S AQM
has been deployed, the trial DCTCP flow will experience
immediate benefit, without any other deployment being needed. In
this example downstream deployment is first, but in other
scenarios the upstream might be deployed first. If no AQM at all
was previously deployed for the downstream access, an L4S AQM
greatly improves the Classic service (as well as adding the L4S
service). If an AQM was already deployed, the Classic service
will be unchanged (and L4S will add an improvement on top).

In this stage, the name 'TCP Prague' is used
to represent a variant of DCTCP that is designed to be used in a
production Internet environment (that is, it has to comply with
all the requirements in Section 4 of the L4S ECN spec , which then means it can be
used over the public Internet). If the application is primarily
unidirectional, 'TCP Prague' at one end will provide all the
benefit needed.

For TCP transports,
Accurate ECN feedback (AccECN) is needed at the other
end, but it is a generic ECN feedback facility that is already
planned to be deployed for other purposes, e.g. DCTCP, BBR.
The two ends can be deployed in either order, because, in TCP,
an L4S congestion control only enables itself if it has
negotiated the use of AccECN feedback with the other end during
the connection handshake. Thus, deployment of TCP Prague on a
server enables L4S trials to move to a production service in one
direction, wherever AccECN is deployed at the other end. This
stage might be further motivated by the performance improvements
of TCP Prague relative to DCTCP (see Appendix A.2 of the L4S ECN
spec ).

Unlike TCP, from the outset, QUIC ECN
feedback has supported L4S.
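QUIC's ACK frames carry three ECN counts (ECT(0), ECT(1) and CE, per the QUIC transport spec), so a sender can derive fresh CE marks by differencing; a minimal sketch, with field names abbreviated:

```python
# Minimal sketch: a sender detects newly CE-marked packets by
# differencing the latest ACK's ECN counts against the previous ones,
# and can feed that into a scalable (L4S) congestion response.
from dataclasses import dataclass

@dataclass
class EcnCounts:
    ect0: int = 0   # packets received with ECT(0)
    ect1: int = 0   # packets received with ECT(1)
    ce: int = 0     # packets received with CE (congestion experienced)

def new_ce_marks(prev: EcnCounts, latest: EcnCounts) -> int:
    """Number of fresh CE marks reported by the latest ACK."""
    return max(0, latest.ce - prev.ce)
```

Because this feedback is built in, no AccECN-style negotiation step is needed before a QUIC sender can respond to L4S marking.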
Therefore, if the transport is QUIC, one-ended deployment of a
Prague congestion control at this stage is simple and
sufficient.

For QUIC, if a proxy sits in
the path between multiple origin servers and the access
bottlenecks to multiple clients, then upgrading the proxy with a
Scalable congestion control would provide the benefits of L4S
over all the clients' downstream bottlenecks in one go ---
whether or not all the origin servers were upgraded. Conversely,
where a proxy has not been upgraded, the clients served by it
will not benefit from L4S at all in the downstream, even when
any origin server behind the proxy has been upgraded to support
L4S.

For TCP, a proxy upgraded to support
'TCP Prague' would provide the benefits of L4S downstream to all
clients that support AccECN (whether or not they support L4S as
well). And in the upstream, the proxy would also support AccECN
as a receiver, so that any client deploying its own L4S support
would benefit in the upstream direction, irrespective of whether
any origin server beyond the proxy supported AccECN.

This is a two-move stage to enable L4S upstream. An L4S AQM
or TCP Prague can be deployed in either order as already
explained. To motivate the first of two independent moves, the
deferred benefit of enabling new services after the second move
has to be worth it to cover the first mover's investment risk.
As explained already, the potential for new interactive services
provides this motivation. An L4S AQM also improves the upstream
Classic service - significantly if no other AQM has already been
deployed.

Note that other deployment sequences might occur. For
instance: the upstream might be deployed first; a non-TCP protocol
might be used end-to-end, e.g. QUIC, RTP; a body such as the
3GPP might require L4S to be implemented in 5G user equipment, or
other random acts of kindness.

If L4S is enabled between two hosts, the L4S sender is required
to coexist safely with Reno in response to any drop (see Section 4.3
of the L4S ECN spec ).

Unfortunately, as well as protecting Classic traffic, this rule
degrades the L4S service whenever there is any loss, even if the
cause is not persistent congestion at a bottleneck, e.g.:

- congestion loss at other transient bottlenecks, e.g. due to bursts in shallower queues;
- transmission errors, e.g. due to electrical interference;
- rate policing.

Three complementary approaches are in progress to address this
issue, but they are all currently research:

- In Prague congestion control, ignore certain losses deemed unlikely to be due to congestion (using some ideas from BBR regarding isolated losses). This could mask any of the above types of loss while still coexisting with drop-based congestion controls.

- A combination of RACK, L4S and link retransmission without resequencing could repair transmission errors without the head-of-line blocking delay usually associated with link-layer retransmission;

- Hybrid ECN/drop rate policers (see ).

L4S deployment scenarios that minimize these issues
(e.g. over wireline networks) can proceed in parallel to this
research, in the expectation that research success could continually
widen L4S applicability.

Classic ECN support is starting to materialize on the Internet as
an increased level of CE marking. It is hard to detect whether this
is all due to the addition of support for ECN in implementations of
FQ-CoDel and/or FQ-COBALT, which is not generally problematic,
because flow-queue (FQ) scheduling inherently prevents a flow from
exceeding the 'fair' rate irrespective of its aggressiveness.
However, some of this Classic ECN marking might be due to
single-queue ECN deployment. This case is discussed in Section 4.3
of the L4S ECN spec .

An L4S AQM uses the ECN field to signal congestion. So, in common
with Classic ECN, if the AQM is within a tunnel or at a lower layer,
correct functioning of ECN signalling requires correct propagation
of the ECN field up the layers.

This specification contains no IANA considerations.

In the current Internet, ISPs usually enforce separation between
the capacity of shared links assigned to different 'sites'
(e.g. households, businesses or mobile users - see terminology
in ) using some form of
scheduler . And they use various
techniques like redirection to traffic scrubbing facilities to deal
with flooding attacks. However, there has never been a universal
need to police the rate of individual application flows - the
Internet has generally always relied on self-restraint of congestion
controls at senders for sharing intra-'site' capacity.

L4S has been designed not to upset this status quo. If a DualQ is
used to provide L4S service, section 4.2 of explains how it is
designed to give no more rate advantage to unresponsive flows than a
single-queue AQM would, whether or not there is traffic
overload.

Also, in case per-flow rate policing is ever required, it can be
added because it is orthogonal to the distinction between L4S and
Classic. As explained in , the DualQ
variant of L4S provides low delay without prejudging the issue of
flow-rate control. So, if flow-rate control is needed,
per-flow-queuing (FQ) with L4S support can be used instead, or flow
rate policing can be added as a modular addition to a DualQ.
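For example, such a modular addition could be an ordinary token-bucket policer per flow in front of the DualQ classifier; a hypothetical sketch (the rate and burst values would be operator policy, and nothing here is specified by this document):

```python
# Hypothetical per-flow token-bucket policer placed in front of a DualQ.
class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate, self.burst = rate_bps, burst_bits
        self.tokens, self.last = burst_bits, 0.0  # start with a full bucket

    def conforms(self, now_s: float, pkt_bits: int) -> bool:
        # Refill at the policed rate, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now_s - self.last) * self.rate)
        self.last = now_s
        if pkt_bits <= self.tokens:
            self.tokens -= pkt_bits
            return True
        return False  # non-conforming: e.g. redirect to the Classic queue
```

Because the policer sits before the queues and acts per flow, it can be added or removed without altering the DualQ's low-delay behaviour.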
However, per-flow rate control is not usually deployed as a security
mechanism, because an active attacker can just shard its traffic
over more flow IDs if the rate of each is restricted. explains how Diffserv only makes a
difference if some packets get less favourable treatment than
others, which typically requires traffic rate policing for a low
latency class. In contrast, it should not be necessary to
rate-police access to the L4S service to protect the Classic
service, because L4S is designed to reduce delay without harming the
delay or rate of any Classic traffic.

During early deployment (and perhaps always), some networks will
not offer the L4S service. In general, these networks should not
need to police L4S traffic. They are required (by both the ECN
spec and the L4S ECN spec ) not to change the L4S
identifier, which would interfere with end-to-end congestion
control. If they already treat ECN traffic as Not-ECT, they can
merely treat L4S traffic as Not-ECT too. At a bottleneck, such
networks will introduce some queuing and dropping. When a scalable
congestion control detects a drop it will have to respond safely
with respect to Classic congestion controls (as required in Section
4.3 of ). This will
degrade the L4S service to be no better (but never worse) than
Classic best efforts, whenever a non-ECN bottleneck is encountered
on a path (see ).

In cases that are expected to be rare, networks that solely
support Classic ECN in a single queue
bottleneck might opt to police L4S traffic so as to protect
competing Classic ECN traffic (for instance, see Section 6.1.3 of
the L4S operational guidance ). However, Section 4.3 of the L4S
ECN spec recommends
that the sender adapts its congestion response to properly coexist
with Classic ECN flows, i.e. reverting to the self-restraint
approach.

Certain network operators might choose to restrict access to the
L4S service, perhaps only to selected premium customers as a
value-added service. Their packet classifier (item 2 in ) could identify such customers
against some other field (e.g. source address range) as well as
classifying on the ECN field. If only the ECN L4S identifier
matched, but not the source address (say), the classifier could
direct these packets (from non-premium customers) into the Classic
queue. Explaining clearly how operators can use additional local
classifiers (see section 5.4 of the L4S ECN spec ) is intended to remove any
motivation to clear the L4S identifier. Then at least the L4S ECN
identifier will be more likely to survive end-to-end even though the
service may not be supported at every hop. Such local arrangements
would only require simple registered/not-registered packet
classification, rather than the managed, application-specific
traffic policing against customer-specific traffic contracts that
Diffserv uses.

Like the Classic service, the L4S service relies on self-restraint
- limiting rate in response to congestion. In addition, the L4S
service requires self-restraint in terms of limiting latency
(burstiness). It is hoped that self-interest and guidance on dynamic
behaviour (especially flow start-up, which might need to be
standardized) will be sufficient to prevent transports from sending
excessive bursts of L4S traffic, given the application's own latency
will suffer most from such behaviour.

Because the L4S service can reduce delay without discernibly
increasing the delay of any Classic traffic, it should not be
necessary to police L4S traffic to protect the delay of Classic.
However, whether burst policing becomes necessary to protect other L4S
traffic remains to be seen. Without it, there will be potential for
attacks on the low latency of the L4S service.

If needed, various arrangements could be used to address this
concern:

-  A per-flow (5-tuple) queue protection function has been developed
   for the low latency queue in DOCSIS, which has adopted the DualQ
   L4S architecture. It protects the low latency service from any
   queue-building flows that accidentally or maliciously classify
   themselves into the low latency queue. It is designed to score
   flows based solely on their contribution to queuing (not flow rate
   in itself). Then, if the shared low latency queue is at risk of
   exceeding a threshold, the function redirects enough packets of
   the highest scoring flow(s) into the Classic queue to preserve low
   latency.

-  Rather than policing locally at each bottleneck, it may only be
   necessary to address problems reactively, e.g. punitively target
   any deployments of new bursty malware, in a similar way to how
   traffic from flooding attack sources is rerouted via scrubbing
   facilities.

-  Per-flow scheduling should inherently isolate non-bursty flows
   from bursty ones (see for discussion of the merits of per-flow
   scheduling relative to per-flow policing).

-  Per-flow queue protection could be arranged for a queue structure
   distributed across a subnet intercommunicating using lower layer
   control messages (see Section 2.1.4 of ). For instance, in a radio
   access network, user equipment already sends regular buffer status
   reports to a radio network controller, which could use this
   information to remotely police individual flows.

-  The Congestion Exposure (ConEx) architecture uses egress audit to
   motivate senders to truthfully signal path congestion in-band,
   where it can be used by ingress policers. An edge-to-edge variant
   of this architecture is also possible.

-  An architecture similar to Diffserv may be preferred, where
   traffic is proactively conditioned on entry to a domain, rather
   than reactively policed only if it leads to queuing once combined
   with other traffic at a bottleneck.

-  The policing function could be divided between per-flow mechanisms
   at the network ingress that characterize the burstiness of each
   flow into a signal carried with the traffic, and per-class
   mechanisms at bottlenecks that act on these signals if queuing
   actually occurs once the traffic converges. This would be somewhat
   similar to , which is in turn similar to the idea behind core
   stateless fair queuing.

No single one of these possible queue protection capabilities is
considered an essential part of the L4S architecture, which works
without any of them under non-attack conditions (much as the Internet
normally works without per-flow rate policing). Indeed, even where
latency policers are deployed, under normal circumstances they would
not intervene, and if operators found they were not necessary they
could disable them. Part of the L4S experiment will be to see whether
such a function is necessary, and which arrangements are most
appropriate to the size of the problem.

As mentioned in , L4S should remove
the need for low latency Diffserv classes. However, those Diffserv
classes that give certain applications or users priority over
capacity would still be applicable in certain scenarios
(e.g. corporate networks). Then, within such Diffserv classes,
L4S would often be applicable to give traffic low latency and low loss
as well. Within such a Diffserv class, the bandwidth available to a
user or application is often limited by a rate policer. Similarly, in
the default Diffserv class, rate policers are sometimes used to
partition shared capacity.

A classic rate policer drops any packets exceeding a set rate,
usually also giving a burst allowance (variants exist where the
policer re-marks non-compliant traffic to a discard-eligible Diffserv
codepoint, so that it can be dropped elsewhere during contention).
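The classic policer just described is essentially a token bucket: tokens accumulate at the policed rate up to a ceiling equal to the burst allowance, and a packet is only compliant if enough tokens remain. The sketch below is illustrative only; class and parameter names are chosen for this example, not taken from any particular policer implementation.

```python
import time

class TokenBucketPolicer:
    """Minimal sketch of a classic rate policer (token bucket).

    Tokens accumulate at the policed rate, capped at the burst
    allowance; a packet is compliant if enough tokens remain,
    otherwise it is dropped (a variant could instead re-mark it
    with a discard-eligible Diffserv codepoint).
    """

    def __init__(self, rate_bps, burst_bytes, now=None):
        self.rate = rate_bps / 8.0        # token fill rate, bytes per second
        self.burst = burst_bytes          # bucket depth = burst allowance
        self.tokens = float(burst_bytes)  # start with a full bucket
        self.last = time.monotonic() if now is None else now

    def admit(self, pkt_len, now=None):
        """Return True if the packet is compliant, False to drop it."""
        now = time.monotonic() if now is None else now
        elapsed = max(0.0, now - self.last)
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return True
        return False
```

For instance, with rate_bps=8000 (1000 bytes/s) and burst_bytes=3000, two back-to-back 1500-byte packets pass immediately, a third at the same instant is dropped, and further packets are admitted only as tokens refill at the policed rate.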
Whenever L4S traffic encounters one of these rate policers, it will
experience drops and the source will have to fall back to a Classic
congestion control, thus losing the benefits of L4S (). So, in networks that already use
rate policers and plan to deploy L4S, it will be preferable to
redesign these rate policers to be more friendly to the L4S
service.

L4S-friendly rate policing is currently a research area (note that
this is not the same as latency policing). It might be achieved by
introducing ECN marking at a threshold set just under the policed
rate, or just under the burst allowance at which drop is introduced.
For instance, the two-rate three-colour
marker or a PCN threshold and
excess-rate marker could mark ECN at the
lower rate and drop at the higher. Or an existing rate policer could
have congestion-rate policing added, e.g. using the 'local'
(non-ConEx) variant of the ConEx aggregate congestion
policer . It might
also be possible to design scalable congestion controls to respond
less catastrophically to loss that has not been preceded by a period
of increasing delay.

The design of L4S-friendly rate policers will require a separate
dedicated document. For further discussion of the interaction between
L4S and Diffserv, see .

Various ways have been developed to protect the integrity of the
congestion feedback loop (whether signalled by loss, Classic ECN or
L4S ECN) against misbehaviour by the receiver, sender or network (or
all three). Brief details of each, including applicability, pros and
cons, are given in Appendix C.1 of the L4S ECN spec .

As discussed in , the L4S
architecture does not preclude approaches that inspect end-to-end
transport layer identifiers. For instance, L4S support has been added
to FQ-CoDel, which classifies by application flow ID in the network.
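Such flow classification amounts to hashing each packet's transport 5-tuple to a per-flow queue index. The sketch below is a stand-in only: fq_codel itself uses a salted Jenkins hash over its flow queues (1024 by default), whereas this example substitutes SHA-256 purely to stay self-contained.

```python
import hashlib

def flow_queue(src_ip, dst_ip, proto, sport, dport, num_queues=1024):
    """Map a transport 5-tuple to a per-flow queue index.

    Illustrative FQ-CoDel-style classification: packets of the same
    flow always hash to the same queue, isolating flows from one
    another. SHA-256 stands in here for the qdisc's salted hash.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    digest = hashlib.sha256(key).digest()
    # Fold the first 4 bytes of the digest into the queue range.
    return int.from_bytes(digest[:4], "big") % num_queues
```

The point relevant to L4S is that this requires reading transport-layer ports, which is exactly what the DualQ approach avoids.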
However, the main innovation of L4S is the DualQ AQM framework that
does not need to inspect any deeper than the outermost IP header,
because the L4S identifier is in the IP-ECN field.

Thus, the L4S architecture enables very low queuing delay without
requiring inspection of information above
the IP layer. This means that users who want to encrypt application
flow identifiers, e.g. in IPSec or other encrypted VPN tunnels,
don't have to sacrifice low delay .

Because L4S can provide low delay for a broad set of applications
that choose to use it, there is no need for individual applications or
classes within that broad set to be distinguishable in any way while
traversing networks. This removes much of the ability to correlate
between the delay requirements of traffic and other identifying
features . There may be some types of
traffic that prefer not to use L4S, but the coarse binary
categorization of traffic reveals very little that could be exploited
to compromise privacy.

Thanks to Richard Scheffenegger, Wes Eddy, Karen Nielsen, David
Black, Jake Holland, Vidhi Goel, Ermin Sakic, Praveen Balasubramanian,
Gorry Fairhurst, Mirja Kuehlewind, Philip Eardley, Neal Cardwell, Pete
Heist and Martin Duke for their useful review comments. Thanks also to
the area reviewers: Marco Tiloca, Lars Eggert, Roman Danyliw and
Éric Vyncke.

Bob Briscoe and Koen De Schepper were part-funded by the European
Community under its Seventh Framework Programme through the Reducing
Internet Transport Latency (RITE) project (ICT-317700). The contribution
of Koen De Schepper was also part-funded by the 5Growth and DAEMON EU
H2020 projects. Bob Briscoe was also part-funded by the Research Council
of Norway through the TimeIn project, partly by CableLabs and partly by
the Comcast Innovation Fund. The views expressed here are solely those
of the authors.