HTTP Header Common Structure
The Varnish Cache Project
phk@varnish-cache.org
Applications and Real-Time
HTTP Working Group
Internet-Draft

An abstract data model for HTTP headers, "Common Structure", and an HTTP/1 serialization of it, generalized from current HTTP headers.

Discussion of this draft takes place on the HTTP working group mailing list (ietf-http-wg@w3.org), which is archived at https://lists.w3.org/Archives/Public/ietf-http-wg/. Working Group information can be found at http://httpwg.github.io/; source code and an issues list for this draft can be found at https://github.com/httpwg/http-extensions/labels/header-structure.

The HTTP protocol does not impose any structure or data model on the information in HTTP headers; the HTTP/1 serialization is the data model: an ASCII string without control characters.

HTTP header definitions specify how the string must be formatted, and while families of similar headers exist, it still requires an uncomfortably large number of bespoke parser and validation routines to process HTTP traffic correctly.

To improve performance, HTTP/2 and HPACK use naive
text-compression, which incidentally decoupled the on-the-wire serialization from the data model.

During the development of HPACK it became evident that significantly bigger gains were available if semantic compression could be used, most notably with timestamps. However, the lack of a common data structure for HTTP headers would make semantic compression one long list of special cases.

Parallel to this, various proposals were floated for how to fulfill data-transportation needs and, to a lesser degree, to impose some kind of order on HTTP headers, at least going forward.

All of these proposals, JSON, CBOR etc., run into the same basic
problem: their serialization is incompatible with RFC 7230's ABNF definition of 'field-value'.

For binary formats, such as CBOR, a wholesale base64/85 reserialization would be needed, with negative results for both debuggability and bandwidth.

For textual formats, such as JSON, the format must first be neutered to not violate field-value's ABNF, and then workarounds added to reintroduce the features just lost, for instance Unicode strings. The post-surgery format is no longer JSON, and experience indicates that almost-but-not-quite compatibility is worse than no compatibility.

This proposal starts from the other end, and builds and generalizes
a data structure definition from existing HTTP headers, which means that HTTP/1 serialization and 'field-value' compatibility are built in.

If all future HTTP headers are defined to fit into this Common Structure, we will at least have halted the proliferation of bespoke parsers and started to pave the road for semantic-compression serializations of HTTP traffic.

In this document, the key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" are to be interpreted as described in BCP 14, RFC 2119.

The data model of Common Structure is an ordered sequence of named
dictionaries. Please see for how this model was derived.

The definition of the data model is deliberately abstract, uncoupled from any protocol serialization or programming-environment representation; it is meant as the foundation on which all such manifestations of the model can be built.

Common Structure in ABNF (slightly bastardized relative to RFC5234):

Recursion is included as a way to support deep and more general data structures, but its use is highly discouraged, and where it is used, the depth of recursion SHALL always be explicitly limited in the specifications of the HTTP headers which allow it.

Integers SHALL be in the range +/- 2^63-1 (= +/- 9223372036854775807).

The limit of 15 significant digits is chosen so that numbers can
be correctly represented by IEEE754 64-bit binary floating point.

This is intended to be an efficient, "safe" and uncomplicated string type, for uses where the string content is culturally neutral or where it will not be user visible.

Unicode-strings are unrestricted because there is no sane and/or culturally neutral way to subset or otherwise make Unicode "safe", and Unicode is still evolving new and interesting code points. Users of unicode-string SHALL be prepared for the full gamut of glyph-gymnastics in order to avoid U+1F4A9 U+08 U+1F574.

Blobs are intended primarily for cryptographic data, but can be used for any otherwise unsatisfied needs.

A timestamp counts seconds since the UNIX time_t epoch, including the "invisible leap-seconds" misfeature.

In ABNF:

Only white-listed legacy headers (see ) can use this format.

The dim prospects of ever getting a majority of HTTP1 paths 8-bit
clean makes UTF-8 unviable as the H1 serialization. Given that very little of the information in HTTP headers is presented to users in the first place, improving H1 and HPACK efficiency by inventing a more efficient RFC5137-compliant escape sequence seems unwarranted.

XXX: Allow OWS in parsers, but not in generators?

In programming environments which do not define a native representation or serialization of Common Structure, the HTTP/1 serialization should be used.

All future standardized and all private HTTP headers using Common Structure should self-identify as such; in the HTTP/1 serialization, by making the first character ">" and the last "<". (These two characters are deliberately "the wrong way around", so as not to clash with existing usages.)

Legacy HTTP headers which fit into Common Structure are marked as
such in the IANA Message Header Registry (see ), and a snapshot of the registry can be used to trigger parsing of these headers according to Common Structure.

All new HTTP headers SHOULD use the Common Structure if at all possible.

Should we allow splitting Common Structure data over multiple headers?

Pro:

Avoids size restrictions; easier on-the-fly editing.

Contra:

Cannot act on any such header until all headers have been received. We must define where headers can be split (between identifier and dictionary? in the middle of dictionaries?). Most on-the-fly editing is hackish at best.

The HTTP/1 serialization's self-identification mechanism makes it possible to extend the definition of existing headers into Common Structure. For instance one could imagine:

Which would be faster to parse and validate than the current definition of the Date header, and more precise too. Some kind of signal/negotiation mechanism would be required to make
this work in practice.

A machine-readable specification of the legal contents of HTTP headers would go a long way to improve efficiency and security in HTTP implementations.

The IANA Message Header Registry will be extended with an additional field named "Common Structure", which can have the values "True", "False" or "Unknown".

The RFC723x headers listed in will get the value "True" in the new field.

The RFC723x headers listed in will get the value "False" in the new field.

All other existing entries in the registry will be set to "Unknown" until and if the owner of the entry requests otherwise.

Unique dictionary keys are required to reduce the risk of
smuggling attacks.

References:

[RFC2119]  Key words for use in RFCs to Indicate Requirement Levels
[RFC5137]  ASCII Escaping of Unicode Characters
[RFC5234]  Augmented BNF for Syntax Specifications: ABNF
[RFC7230]  Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing
[RFC7231]  Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content
[RFC7232]  Hypertext Transfer Protocol (HTTP/1.1): Conditional Requests
[RFC7233]  Hypertext Transfer Protocol (HTTP/1.1): Range Requests
[RFC7234]  Hypertext Transfer Protocol (HTTP/1.1): Caching
[RFC7235]  Hypertext Transfer Protocol (HTTP/1.1): Authentication
[RFC7239]  Forwarded HTTP Extension
[RFC7694]  Hypertext Transfer Protocol (HTTP) Client-Initiated Content-Encoding

Several proposals have been floated in recent years to use
some preexisting structured data serialization or other for HTTP headers, to impose some sanity.

None of these proposals have gained traction, and no obvious candidate data serializations have been left unexamined.

This effort tries to tackle the question from the other side, by asking if there is a common structure in existing HTTP headers we can generalize for this purpose.

The RFC723x family of HTTP/1 standards controls 49 entries in the IANA Message Header Registry, and they share two common motifs.

The majority of RFC723x HTTP headers are lists. A few of them are ordered ('Content-Encoding'), some are unordered ('Connection') and some are ordered by 'q=%f' weight parameters ('Accept').

In most cases, the list elements are some kind of identifier, usually derived from ABNF 'token' as defined by . A subgroup of headers, mostly related to MIME, uses what one could call a 'qualified token':

The second motif is parameterized list elements. The best known is the "q=0.5" weight parameter, but other parameters exist as well.

Generalizing from these motifs, our candidate "Common Structure" data model becomes an ordered list of named dictionaries.

In pidgin ABNF, ignoring white-space for the sake of clarity, the HTTP/1.1 serialization of Common Structure is something like:

Nineteen out of the RFC723x's 48 headers, almost 40%, can already be parsed using this definition, and none of the rest have requirements which could not be met by this data model. See and for the full survey details.

Surveying the datatypes of HTTP headers, standardized as well as
private, the following picture emerges:

Integer and floating point are both used. Range and precision are mostly unspecified in controlling documents. Scientific notation (9.192631770e9) does not seem to be used anywhere. The ranges used seem to be from minus several thousand to plus a couple of billions, the high end almost exclusively being POSIX time_t timestamps.

RFC723x text format, but POSIX time_t represented as integer or floating point is not uncommon. ISO8601 has also been spotted.

The vast majority are pure ASCII strings, with either no escapes, %xx URL-like escapes or C-style back-slash escapes, possibly with the addition of \uxxxx Unicode escapes. Where non-ASCII character sets are used, they are almost always implicit, rather than explicit. UTF8 and ISO-8859-1 seem to be most common.

Often used for cryptographic data. Usually in base64 encoding, sometimes '"'-quoted, more often not. base85 encoding is also seen, usually quoted.

Seems to almost always fit in the RFC723x 'token' definition.

The number one wishlist item seems to be UNICODE strings,
with a big side order of not having to write a new parser routine every time somebody comes up with a new header.

Having a common parser would indeed be a good thing, and having an underlying data model which makes it possible to define a compressed serialization, rather than relying on serialization to text followed by text compression (i.e.: HPACK), seems like a good idea too.

However, when using a data model and a parser general enough to transport useful data, it will have to be followed by a validation step, which checks that the data also makes sense. Today validation, such as it is, is often done by the bespoke parsers.

This then is probably where the next big potential for improvement lies: ideally a machine-readable "data dictionary", which makes it possible to copy that text out of RFCs and run it through a code generator which spits out validation code operating on the output of the common parser.

But history has been particularly unkind to that idea. Most attempts studied as part of this effort have sunk under complexity caused by reaching for generality, but where scope has been wisely limited, it seems to be possible. So file that idea under "future work".

Accept , Section 5.3.2
Accept-Charset , Section 5.3.3
Accept-Encoding , Section 5.3.4, , Section 3
Accept-Language , Section 5.3.5
Age , Section 5.1
Allow , Section 7.4.1
Connection , Section 6.1
Content-Encoding , Section 3.1.2.2
Content-Language , Section 3.1.3.2
Content-Length , Section 3.3.2
Content-Type , Section 3.1.1.5
Expect , Section 5.1.1
Max-Forwards , Section 5.1.2
MIME-Version , Appendix A.1
TE , Section 4.3
Trailer , Section 4.4
Transfer-Encoding , Section 3.3.1
Upgrade , Section 6.7
Vary , Section 7.1.4

1 of the RFC723x headers is only reserved, and therefore
has no structure at all:

Close , Section 8.1

5 of the RFC723x headers are HTTP dates:

Date , Section 7.1.1.2
Expires , Section 5.3
If-Modified-Since , Section 3.3
If-Unmodified-Since , Section 3.4
Last-Modified , Section 2.2

24 of the RFC723x headers use bespoke formats which only a single, or in rare cases two, headers share:

Accept-Ranges , Section 2.3
    bytes-unit / other-range-unit
Authorization , Section 4.2
Proxy-Authorization , Section 4.4
    credentials
Cache-Control , Section 5.2
    1#cache-directive
Content-Location , Section 3.1.4.2
    absolute-URI / partial-URI
Content-Range , Section 4.2
    byte-content-range / other-content-range
ETag , Section 2.3
    entity-tag
Forwarded
    1#forwarded-element
From , Section 5.5.1
    mailbox
If-Match , Section 3.1
If-None-Match , Section 3.2
    "*" / 1#entity-tag
If-Range , Section 3.2
    entity-tag / HTTP-date
Host , Section 5.4
    uri-host [ ":" port ]
Location , Section 7.1.2
    URI-reference
Pragma , Section 5.4
    1#pragma-directive
Range , Section 3.1
    byte-ranges-specifier / other-ranges-specifier
Referer , Section 5.5.2
    absolute-URI / partial-URI
Retry-After , Section 7.1.3
    HTTP-date / delay-seconds
Server , Section 7.4.2
User-Agent , Section 5.5.3
    product *( RWS ( product / comment ) )
Via , Section 5.7.1
    1#( received-protocol RWS received-by [ RWS comment ] )
Warning , Section 5.5
    1#warning-value
Proxy-Authenticate , Section 4.3
WWW-Authenticate , Section 4.1
    1#challenge

Added signed 64bit integer type.

Drop UTF8, and settle on BCP137 ::EmbeddedUnicodeChar for h1-unicode-string.

Change h1_blob delimiter to ":" since "'" is valid t_char
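
To make the abstract model concrete, the following is a minimal parser sketch (not part of the draft). The ">" / "<" self-identification and the unique-dictionary-key requirement come from the text above; the "," element separator, the ";" parameter separator and the "k=v" parameter syntax are assumptions modeled on the "q=0.5" motif, standing in for the draft's ABNF figures, which are not reproduced here.

```python
import re

# RFC 7230 'token' characters; element names are assumed to be tokens.
TOKEN = re.compile(r"^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$")

def parse_common_structure(field_value):
    """Parse a self-identified Common Structure field-value into an
    ordered list of (name, params) pairs.  Separator choices are an
    assumption; the draft's ABNF is authoritative."""
    s = field_value.strip()
    if not (s.startswith(">") and s.endswith("<")):
        raise ValueError("not self-identified as Common Structure")
    result = []
    for element in s[1:-1].split(","):
        name, *params = [p.strip() for p in element.split(";")]
        if not TOKEN.match(name):
            raise ValueError("element name is not a token: %r" % name)
        d = {}
        for p in params:
            key, _, value = p.partition("=")
            if key in d:
                # The draft requires unique dictionary keys to reduce
                # the risk of smuggling attacks.
                raise ValueError("duplicate dictionary key: %r" % key)
            d[key] = value
        result.append((name, d))
    return result
```

Under these assumptions, parse_common_structure(">gzip;q=1.0, br;q=0.8<") yields the ordered list [("gzip", {"q": "1.0"}), ("br", {"q": "0.8"})], and a field-value without the ">" / "<" markers is rejected.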