The Update Framework Specification

1. Introduction

1.1. Scope

This document describes a framework for securing software update systems.

1.2. Motivation

Software is commonly updated through software update systems. These systems can be package managers that are responsible for all of the software that is installed on a system, application updaters that are only responsible for individual installed applications, or software library managers that install software that adds functionality such as plugins or programming language libraries.

Software update systems all have the common behavior of downloading files that identify whether updates exist and, when updates do exist, downloading the files that are required for the update. For the implementations concerned with security, various integrity and authenticity checks are performed on downloaded files.

Software update systems are vulnerable to a variety of known attacks. This is generally true even for implementations that have tried to be secure.

1.3. History and credit

Work on TUF began in late 2009. The core ideas are based on previous work done by Justin Cappos and Justin Samuel that identified security flaws in all popular Linux package managers. More information and current versions of this document can be found at https://www.updateframework.com/

The Global Environment for Network Innovations (GENI) and the National Science Foundation (NSF) have provided support for the development of TUF. (http://www.geni.net/) (http://www.nsf.gov/)

TUF's Python implementation is based heavily on Thandy, the application updater for Tor (http://www.torproject.org/). Its design and this spec are also largely based on Thandy's, with many parts being directly borrowed from Thandy. The Thandy spec can be found here: https://gitweb.torproject.org/thandy.git?a=blob_plain;f=specs/thandy-spec.txt;hb=HEAD

Whereas Thandy is an application updater for an individual software project, TUF aims to provide a way to secure any software update system. We're very grateful to the Tor Project and the Thandy developers; it is doubtful our design and implementation would have been anywhere near as good without being able to use their great work as a starting point. Thandy is the hard work of Nick Mathewson, Sebastian Hahn, Roger Dingledine, Martin Peck, and others.

1.4. Non-goals

We aren't creating a universal update system, but rather a simple and flexible way for applications to achieve high levels of security in their software update systems. Creating a universal software update system would not be a reasonable goal due to the diversity of application-specific functionality in software update systems and the limited usefulness that such a system would have for securing legacy software update systems.

We won't be defining package formats or even performing the actual update of application files. We will provide the simplest mechanism possible that remains easy to use and provides a secure way for applications to obtain and verify files being distributed by trusted parties.

We are not providing a means to bootstrap security so that arbitrary installation of new software is secure. In practice this means that people still need to use other means to verify the integrity and authenticity of files they download manually.

The framework will not have the responsibility of deciding on the correct course of action in all error situations, such as those that can occur when certain attacks are being performed.
Instead, the framework will provide the software update system with the relevant information about any errors that require security decisions which are situation-specific. How those errors are handled is up to the software update system.

1.5. Goals

We need to provide a framework (a set of libraries, file formats, and utilities) that can be used to secure new and existing software update systems.

The framework should enable applications to be secure from all known attacks on the software update process. It is not concerned with exposing information about what software is being updated (and thus what software the client may be running) or the contents of updates.

The framework should provide means to minimize the impact of key compromise. To do so, it must support roles with multiple keys and threshold/quorum trust (with the exception of minimally trusted roles designed to use a single key). The compromise of roles using highly vulnerable keys should have minimal impact. Therefore, online keys (keys which are used in an automated fashion) must not be used for any role that clients ultimately trust for files they may install.

The framework must be flexible enough to meet the needs of a wide variety of software update systems.

The framework must be easy to integrate with software update systems.

1.5.1 Goals for implementation

The client side of the framework must be straightforward to implement in any programming language and for any platform with the requisite networking and crypto support.

The framework should be easily customizable for use with any crypto libraries.

The process by which developers push updates to the repository must be simple.

The repository must serve only static files and be easy to mirror.

The framework must be secure to use in environments that lack support for SSL (TLS). This does not exclude the optional use of SSL when available, but the framework will be designed without it.

1.5.2. Goals for specific attacks to protect against

Note: When saying the framework protects against an attack, this means that the attack will not be successful. It does not mean that a client will always be able to successfully update during an attack. Fundamentally, an attacker positioned to intercept and modify a client's communication will always be able to perform a denial of service. The part we have control over is not allowing an inability to update to go unnoticed.

Rollback attacks. Attackers should not be able to trick clients into installing software that is older than that which the client previously knew to be available.

Indefinite freeze attacks. Attackers should not be able to respond to client requests with the same, outdated metadata without the client being aware of the problem.

Endless data attacks. Attackers should not be able to respond to client requests with huge amounts of data (extremely large files) that interfere with the client's system.

Slow retrieval attacks. Attackers should not be able to prevent clients from being aware of interference with receiving updates by responding to client requests so slowly that automated updates never complete.

Extraneous dependencies attacks. Attackers should not be able to cause clients to download or install software dependencies that are not the intended dependencies.

Mix-and-match attacks. Attackers should not be able to trick clients into using a combination of metadata that never existed together on the repository at the same time.

Malicious repository mirrors should not be able to prevent updates from good mirrors.
1.5.3. Goals for PKIs

Software update systems using the framework's client code interface should never have to directly manage keys.

All keys must be easily and safely revocable. Trusting new keys for a role must be easy.

For roles where trust delegation is meaningful, a role should be able to delegate full or limited trust to another role.

The root of trust will not rely on external PKI. That is, no authority will be derived from keys outside of the framework.

2. System overview

The framework ultimately provides a secure method of obtaining trusted files. To avoid ambiguity, we will refer to the files the framework is used to distribute as "target files". Target files are opaque to the framework. Whether target files are packages containing multiple files, single text files, or executable binaries is irrelevant to the framework.

The metadata describing target files is the information necessary to securely identify the file and indicate which roles are trusted to provide the file. As providing additional information about target files may be important to some software update systems using the framework, additional arbitrary information can be provided with any target file. This information will be included in signed metadata that describes the target files.

The following are the high-level steps of using the framework from the viewpoint of a software update system using the framework. This is an error-free case.

Polling:
Periodically, the software update system using the framework instructs the framework to check each repository for updates. If the framework reports to the application code that there are updates, the application code determines whether it wants to download the updated target files. Only target files that are trusted (referenced by properly signed and timely metadata) are made available by the framework.

Fetching:
For each file that the application wants, it asks the framework to download the file. The framework downloads the file and performs security checks to ensure that the downloaded file is exactly what is expected according to the signed metadata. The application code is not given access to the file until the security checks have been completed. The application asks the framework to copy the downloaded file to a location specified by the application. At this point, the application has securely obtained the target file and can do with it whatever it wishes.
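The sketch below illustrates this polling/fetching sequence from the application's point of view. It is only a sketch: the "framework" object and its method names (check_for_updates, download_target, copy_target) are hypothetical placeholders for whatever client-side interface an implementation of the framework exposes; they are not defined by this specification.

    # Illustrative sketch of the polling and fetching steps described above.
    # The "framework" object and its methods are hypothetical placeholders,
    # not an API defined by this specification.

    def apply_updates(framework, is_wanted, destination_dir):
        # Polling: ask the framework whether any trusted, updated target
        # files are available on the configured repositories.
        updated_targets = framework.check_for_updates()

        for target in updated_targets:
            # The application decides whether it actually wants this target.
            if not is_wanted(target):
                continue

            # Fetching: the framework downloads the file and performs its
            # security checks before the application is given access to it.
            framework.download_target(target)

            # The framework copies the verified file to a location chosen
            # by the application.
            framework.copy_target(target, destination_dir)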
2.1. Roles and PKI

In the discussion of roles that follows, it is important to remember that the framework has been designed to allow a large amount of flexibility for many different use cases. For example, it is possible to use the framework with a single key that is the only key used in the entire system. This is considered to be insecure, but the flexibility is provided in order to meet the needs of diverse use cases.

There are four fundamental top-level roles in the framework:
- Root role
- Targets role
- Snapshot role
- Timestamp role

There is also one optional top-level role:
- Mirrors role

All roles can use one or more keys and require a threshold of signatures of the role's keys in order to trust a given metadata file.

2.1.1 Root role

The root role delegates trust to specific keys trusted for all other top-level roles used in the system. The client-side of the framework must ship with trusted root keys for each configured repository. The root role's private keys must be kept very secure and thus should be kept offline.

2.1.2 Targets role

The targets role's signature indicates which target files are trusted by clients. The targets role signs metadata that describes these files, not the actual target files themselves.

In addition, the targets role can delegate full or partial trust to other roles. Delegating trust means that the targets role indicates another role (that is, another set of keys and the threshold required for trust) is trusted to sign target file metadata. Partial trust delegation is when the delegated role is only trusted for some of the target files that the delegating role is trusted for.

Delegated developer roles can further delegate trust to other delegated roles. This provides for multiple levels of trust delegation where each role can delegate full or partial trust for the target files they are trusted for. The delegating role in these cases is still trusted. That is, a role does not become untrusted when it has delegated trust.

Delegated trust can be revoked at any time by the delegating role signing new metadata that indicates the delegated role is no longer trusted.

2.1.3 Snapshot role

The snapshot role signs a metadata file that provides information about the latest version of all of the other metadata on the repository (excluding the timestamp file, discussed below). This information allows clients to know which metadata files have been updated and also prevents mix-and-match attacks.

2.1.4 Timestamp role

To prevent an adversary from replaying an out-of-date signed metadata file whose signature has not yet expired, an automated process periodically signs a timestamped statement containing the hash of the snapshot file. Even though this timestamp key must be kept online, the risk posed to clients by compromise of this key is minimal.

2.1.5 Mirrors role

Every repository has one or more mirrors from which files can be downloaded by clients. A software update system using the framework may choose to hard-code the mirror information in their software or they may choose to use mirror metadata files that can optionally be signed by a mirrors role.

The importance of using signed mirror lists depends on the application and the users of that application. There is minimal risk to the application's security from being tricked into contacting the wrong mirrors. This is because the framework has very little trust in repositories.

2.2. Threat Model And Analysis

We assume an adversary who can respond to client requests, whether by acting as a man-in-the-middle or through compromising repository mirrors. At worst, such an adversary can deny updates to users if no good mirrors are accessible. An inability to obtain updates is noticed by the framework.

If an adversary compromises enough keys to sign metadata, the best that can be done is to limit the number of users who are affected. The level to which this threat is mitigated depends on how the application is using the framework. This includes whether different keys have been used for different signing roles.

A detailed threat analysis is outside the scope of this document. This is partly because the specific threat posed to clients in many situations is largely determined by how the framework is being used.

3. The repository

An application uses the framework to interact with one or more repositories. A repository is a conceptual source of target files of interest to the application. Each repository has one or more mirrors which are the actual providers of files to be downloaded.
For example, each mirror may specify a different host where files can be downloaded from over HTTP.

The mirrors can be full or partial mirrors as long as the application-side of the framework can ultimately obtain all of the files it needs. A mirror is a partial mirror if it is missing files that a full mirror should have. If a mirror is intended to only act as a partial mirror, the metadata and target paths available from that mirror can be specified.

Roles, trusted keys, and target files are completely separate between repositories. A multi-repository setup is a multi-root system. When an application uses the framework with multiple repositories, the framework does not perform any "mixing" of the trusted content from each repository. It is up to the application to determine the significance of the same or different target files provided from separate repositories.

3.1 Repository layout

The filesystem layout in the repository is used for two purposes:
- To give mirrors an easy way to mirror only some of the repository.
- To specify which parts of the repository a given role has authority to sign/provide.

3.1.1 Target files

The filenames and the directory structure of target files available from a repository are not specified by the framework. The names of these files and directories are completely at the discretion of the application using the framework.

3.1.2 Metadata files

The filenames and directory structure of repository metadata are strictly defined. The following are the metadata files of top-level roles relative to the base URL of metadata available from a given repository mirror.

/root.json
    Signed by the root keys; specifies trusted keys for the other top-level roles.

/snapshot.json
    Signed by the snapshot role's keys. Lists hashes and sizes of all metadata files other than timestamp.json.

/targets.json
    Signed by the targets role's keys. Lists hashes and sizes of target files.

/timestamp.json
    Signed by the timestamp role's keys. Lists hashes and size of the snapshot file. This is the first and potentially only file that needs to be downloaded when clients poll for the existence of updates.

/mirrors.json (optional)
    Signed by the mirrors role's keys. Lists information about available mirrors and the content available from each mirror.

An implementation of the framework may optionally choose to make any metadata files available in compressed (e.g. gzip'd) format. In doing so, the filename of the compressed file should be the same as the original with the addition of the file name extension for the compression type (e.g. snapshot.json.gz). The original (uncompressed) file should always be made available, as well.

3.1.2.1 Metadata files for targets delegation

When the targets role delegates trust to other roles, each delegated role provides one signed metadata file. This file is located at:

/targets/DELEGATED_ROLE.json

where DELEGATED_ROLE is the name of the delegated role that has been specified in targets.json. If this role further delegates trust to a role named ANOTHER_ROLE, that role's signed metadata file is made available at:

/targets/DELEGATED_ROLE/ANOTHER_ROLE.json
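The path of a delegated role's metadata file is thus determined by the chain of delegations that leads to it. A minimal sketch of this naming rule follows; the function name is ours, not part of this specification.

    # Sketch of the naming rule above: the metadata path for a delegated
    # role is derived from the chain of role names leading to it.
    # The function name is illustrative only.

    def delegated_metadata_path(delegation_chain):
        """delegation_chain is a list of role names, outermost first,
        e.g. ["DELEGATED_ROLE", "ANOTHER_ROLE"]."""
        return "/targets/" + "/".join(delegation_chain) + ".json"

    # delegated_metadata_path(["DELEGATED_ROLE"])
    #     -> "/targets/DELEGATED_ROLE.json"
    # delegated_metadata_path(["DELEGATED_ROLE", "ANOTHER_ROLE"])
    #     -> "/targets/DELEGATED_ROLE/ANOTHER_ROLE.json"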
4. Document formats

All of the formats described below include the ability to add more attribute-value fields for backwards-compatible format changes. If a backwards-incompatible format change is needed, a new filename can be used.

4.1. Metaformat

All documents use a subset of the JSON object format, with floating-point numbers omitted. When calculating the digest of an object, we use the "canonical JSON" subdialect as described at http://wiki.laptop.org/go/Canonical_JSON

4.2. File formats: general principles

All signed metadata files have the format:

    { "signed" : ROLE,
      "signatures" : [
         { "keyid" : KEYID,
           "method" : METHOD,
           "sig" : SIGNATURE }
         , ... ]
    }

where:

    ROLE is a dictionary whose "_type" field describes the role type.
    KEYID is the identifier of the key signing the ROLE dictionary.
    METHOD is the key signing method used to generate the signature.
    SIGNATURE is a signature of the canonical JSON form of ROLE.

The current Python implementation of TUF defines two signing methods, although TUF is not restricted to any particular key signing method, key type, or cryptographic library:

    "RSASSA-PSS" : RSA Probabilistic signature scheme with appendix.
    "ed25519" : Elliptic curve digital signature algorithm based on Twisted Edwards curves.

RSASSA-PSS: http://tools.ietf.org/html/rfc3447#page-29
ed25519: http://ed25519.cr.yp.to/

All keys have the format:

    { "keytype" : KEYTYPE,
      "keyval" : KEYVAL }

where KEYTYPE is a string describing the type of the key and how it's used to sign documents. The type determines the interpretation of KEYVAL. We define two keytypes at present: 'rsa' and 'ed25519'.

The 'rsa' format is:

    { "keytype" : "rsa",
      "keyval" : { "public" : PUBLIC,
                   "private" : PRIVATE }
    }

where PUBLIC and PRIVATE are in PEM format and are strings. All RSA keys must be at least 2048 bits.

The 'ed25519' format is:

    { "keytype" : "ed25519",
      "keyval" : { "public" : PUBLIC,
                   "private" : PRIVATE }
    }

where PUBLIC and PRIVATE are both 32-byte strings.

Metadata does not include the private portion of the key object:

    { "keytype" : "rsa",
      "keyval" : { "public" : PUBLIC }
    }

The KEYID of a key is the hexdigest of the SHA-256 hash of the canonical JSON form of the key, where the "private" object key is excluded.

Metadata date-time data follows the ISO 8601 standard. The expected format of the combined date and time string is "YYYY-MM-DDTHH:MM:SSZ". Time is always in UTC, and the "Z" time zone designator is attached to indicate a zero UTC offset. An example date-time string is "1985-10-21T01:21:00Z".

4.3. File formats: root.json

The root.json file is signed by the root role's keys. It indicates which keys are authorized for all top-level roles, including the root role itself. Revocation and replacement of top-level role keys, including for the root role, is done by changing the keys listed for the roles in this file.

The format of root.json is as follows:

    { "_type" : "Root",
      "version" : VERSION,
      "expires" : EXPIRES,
      "keys" : { KEYID : KEY , ... },
      "roles" : {
         ROLE : {
           "keyids" : [ KEYID, ... ] ,
           "threshold" : THRESHOLD }
         , ... }
    }

VERSION is an integer that is greater than 0. Clients MUST NOT replace a metadata file with a version number less than the one currently trusted.

EXPIRES determines when metadata should be considered expired and no longer trusted by clients. Clients MUST NOT trust an expired file.

A ROLE is one of "root", "snapshot", "targets", "timestamp", or "mirrors". A role for each of "root", "snapshot", "timestamp", and "targets" MUST be specified in the key list. The role of "mirrors" is optional; if it is not specified, mirror lists do not need to be signed even if mirror lists are being used.

The KEYID must be correct for the specified KEY. Clients MUST calculate each KEYID to verify that it is correct for the associated key. Clients MUST ensure that for any KEYID represented in this key list and in other files, only one unique key has that KEYID.
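As an illustration of the KEYID rule from Section 4.2, the sketch below computes a KEYID for a key object using only Python's standard library. The canonical-JSON encoder here is a minimal sketch of the subdialect referenced in Section 4.1 (it handles only the value types that appear in TUF metadata); the function names are ours, and a real implementation should follow the canonical JSON description itself.

    import hashlib

    # Minimal sketch of a canonical JSON encoder (see Section 4.1). It covers
    # only the value types used in TUF metadata, sorts object keys, emits no
    # whitespace, and escapes only '"' and '\' inside strings. This is an
    # illustration, not a complete or validated implementation.
    def encode_canonical(obj):
        if obj is True:
            return "true"
        if obj is False:
            return "false"
        if obj is None:
            return "null"
        if isinstance(obj, int):
            return str(obj)
        if isinstance(obj, str):
            return '"' + obj.replace("\\", "\\\\").replace('"', '\\"') + '"'
        if isinstance(obj, list):
            return "[" + ",".join(encode_canonical(v) for v in obj) + "]"
        if isinstance(obj, dict):
            return "{" + ",".join(
                encode_canonical(k) + ":" + encode_canonical(v)
                for k, v in sorted(obj.items())) + "}"
        raise TypeError("unsupported type: %r" % type(obj))

    # The KEYID is the hexdigest of the SHA-256 hash of the canonical JSON
    # form of the key, with the "private" portion excluded (Section 4.2).
    def calculate_keyid(key):
        public_only = {
            "keytype": key["keytype"],
            "keyval": {"public": key["keyval"]["public"]},
        }
        return hashlib.sha256(
            encode_canonical(public_only).encode("utf-8")).hexdigest()

A client performing the check above would recompute calculate_keyid(KEY) for each listed key and compare the result against the KEYID under which the key appears.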
The THRESHOLD for a role is an integer of the number of keys of that role whose signatures are required in order to consider a file as being properly signed by that role.

4.4. File formats: snapshot.json

The snapshot.json file is signed by the snapshot role. It lists hashes and sizes of all metadata on the repository, excluding timestamp.json and mirrors.json.

The format of snapshot.json is as follows:

    { "_type" : "Snapshot",
      "version" : VERSION,
      "expires" : EXPIRES,
      "meta" : METAFILES
    }

METAFILES is an object whose format is the following:

    { METAPATH : {
        "length" : LENGTH,
        "hashes" : HASHES,
        ("custom" : { ... }) }
      , ... }

METAPATH is the metadata file's path on the repository relative to the metadata base URL.

4.5. File formats: targets.json and delegated target roles

The format of targets.json is as follows:

    { "_type" : "Targets",
      "version" : VERSION,
      "expires" : EXPIRES,
      "targets" : TARGETS,
      ("delegations" : DELEGATIONS)
    }

TARGETS is an object whose format is the following:

    { TARGETPATH : {
        "length" : LENGTH,
        "hashes" : HASHES,
        ("custom" : { ... }) }
      , ... }

Each key of the TARGETS object is a TARGETPATH. A TARGETPATH is a path to a file that is relative to a mirror's base URL of targets. It is allowed to have a TARGETS object with no TARGETPATH elements. This can be used to indicate that no target files are available.

The HASHES and LENGTH are the hashes and length of the target file. If defined, the elements and values of "custom" will be made available to the client application. The information in "custom" is opaque to the framework and can include version numbers, dependencies, requirements, and any other data that the application wants to include to describe the file at TARGETPATH. The application may use this information to guide download decisions.

DELEGATIONS is an object whose format is the following:

    { "keys" : { KEYID : KEY, ... },
      "roles" : [
        { "name": ROLE,
          "keyids" : [ KEYID, ... ] ,
          "threshold" : THRESHOLD,
          ("path_hash_prefixes" : [ HEX_DIGEST, ... ] |
           "paths" : [ PATHPATTERN, ... ])
        }, ... ]
    }

In order to describe the target paths it is trusted for, a role MUST specify only one of the "path_hash_prefixes" or "paths" attributes, each of which we discuss next.

The "path_hash_prefixes" list is used to succinctly describe a set of target paths. Specifically, each HEX_DIGEST in "path_hash_prefixes" describes a set of target paths; therefore, "path_hash_prefixes" is the union over each prefix of its set of target paths. The target paths must meet this condition: each target path, when hashed with the SHA-256 hash function to produce a 64-character hexadecimal digest (HEX_DIGEST), must share the same prefix as one of the prefixes in "path_hash_prefixes". This is useful to split a large number of targets into separate bins identified by consistent hashing.
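For example, a repository could split its targets into sixteen bins by delegating the single-character prefixes "0" through "f" to sixteen roles. A minimal sketch of the check implied by this condition is shown below; the function name, the choice of sixteen bins, and the hashing of the path's UTF-8 encoding are illustrative assumptions, not requirements of this specification.

    import hashlib

    # Sketch of the "path_hash_prefixes" condition described above: a target
    # path belongs to a delegated role if the SHA-256 hex digest of the path
    # starts with one of the role's listed prefixes. The function name and
    # the UTF-8 encoding of the path are illustrative assumptions.
    def path_matches_hash_prefixes(target_path, path_hash_prefixes):
        digest = hashlib.sha256(target_path.encode("utf-8")).hexdigest()
        return any(digest.startswith(prefix) for prefix in path_hash_prefixes)

    # With sixteen bins delegated by single-character prefixes, the bin for a
    # target path is simply the first character of its digest:
    # hashlib.sha256(b"some/target/path.tar.gz").hexdigest()[0]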
TODO: Should the TUF spec restrict the repository to one particular algorithm? Should we allow the repository to specify in the role dictionary the algorithm used for these generated hashed paths?

The "paths" list describes paths that the role is trusted to provide. Clients MUST check that a target is in one of the trusted paths of all roles in a delegation chain, not just in a trusted path of the role that describes the target file.

The format of a PATHPATTERN may be either a path to a single file, or a path to a directory to indicate all files and/or subdirectories under that directory. A path to a directory is used to indicate all possible targets sharing that directory as a prefix; e.g. if the directory is "targets/A", then targets which match that directory include "targets/A/B.json" and "targets/A/B/C.json".

We are currently investigating a few "priority tag" schemes to resolve conflicts between delegated roles that share responsibility for overlapping target paths. One of the simplest of such schemes is for the client to consider metadata in order of appearance of delegations; we treat the order of delegations such that the first delegation is trusted more than the second one, the second delegation is trusted more than the third one, and so on. The metadata of the first delegation will override that of the second delegation, the metadata of the second delegation will override that of the third delegation, and so on. In order to accommodate this scheme, the "roles" key in the DELEGATIONS object above points to an array, instead of a hash table, of delegated roles. Another priority tag scheme would have the clients prefer the delegated role with the latest metadata for a conflicting target path. Similar ideas were explored in the Stork package manager (University of Arizona Tech Report 08-04) [https://isis.poly.edu/~jcappos/papers/cappos_stork_dissertation_08.pdf].

The metadata files for delegated target roles have the same format as the top-level targets.json metadata file.

4.6. File formats: timestamp.json

The timestamp file is signed by a timestamp key. It indicates the latest versions of other files and is frequently re-signed to limit the amount of time a client can be kept unaware of interference with obtaining updates. Timestamp files will potentially be downloaded very frequently, so unnecessary information in them will be avoided.

The format of the timestamp file is as follows:

    { "_type" : "Timestamp",
      "version" : VERSION,
      "expires" : EXPIRES,
      "meta" : METAFILES
    }

METAFILES is the same as is described for the snapshot.json file. In the case of the timestamp.json file, this will commonly only include a description of the snapshot.json file.

4.7. File formats: mirrors.json

The mirrors.json file is signed by the mirrors role. It indicates which mirrors are active and believed to be mirroring specific parts of the repository.

The format of mirrors.json is as follows:

    { "_type" : "Mirrorlist",
      "version" : VERSION,
      "expires" : EXPIRES,
      "mirrors" : [
        { "urlbase" : URLBASE,
          "metapath" : METAPATH,
          "targetspath" : TARGETSPATH,
          "metacontent" : [ PATHPATTERN ... ] ,
          "targetscontent" : [ PATHPATTERN ... ] ,
          ("custom" : { ... })
        }, ... ]
    }

URLBASE is the URL of the mirror which METAPATH and TARGETSPATH are relative to. All metadata files will be retrieved from METAPATH and all target files will be retrieved from TARGETSPATH.

The lists of PATHPATTERN for "metacontent" and "targetscontent" describe the metadata files and target files available from the mirror.

The order of the list of mirrors is important. For any file to be downloaded, whether it is a metadata file or a target file, the framework on the client will give priority to the mirrors that are listed first. That is, the first mirror in the list whose "metacontent" or "targetscontent" includes a path indicating that the desired file can be found there will be the first mirror used to download that file. Successive mirrors with matching paths will only be tried if downloading from earlier mirrors fails. This behavior can be modified by the client code that uses the framework to, for example, randomly select from the listed mirrors.
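The sketch below shows one way a client-side implementation might order candidate mirrors for a given file under the default behavior described above. The function name and the exact-match-or-directory-prefix interpretation of PATHPATTERN are our assumptions for illustration, not requirements of this specification.

    # Sketch of the default mirror-ordering behavior described above. Each
    # mirror is a dict following the mirrors.json format. The function name
    # and the prefix/exact-match interpretation of PATHPATTERN are
    # illustrative assumptions, not requirements of this specification.
    def candidate_mirrors(path, mirrors, is_metadata):
        content_key = "metacontent" if is_metadata else "targetscontent"
        candidates = []
        # Preserve the listed order: the first matching mirror is tried first.
        for mirror in mirrors:
            for pattern in mirror.get(content_key, []):
                # Treat PATHPATTERN as an exact file path or a directory prefix.
                if path == pattern or path.startswith(pattern.rstrip("/") + "/"):
                    candidates.append(mirror)
                    break
        return candidates

    # A client would then try to download the file from candidates[0],
    # falling back to later candidates only if earlier downloads fail.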
5. Detailed Workflows

5.1. The client application

Note: If at any point in the following process there is a problem (e.g., only expired metadata can be retrieved), the Root file is downloaded and the process starts over. Optionally, the software update system using the framework can decide how to proceed rather than automatically downloading a new Root file.

The client code instructs the framework to check for updates. The framework downloads the timestamp.json file from a mirror and checks that the file is properly signed by the timestamp role, is not expired, and is not older than the last timestamp.json file retrieved. If the timestamp file lists the same snapshot.json file as was previously seen, the client code is informed that no updates are available and the update checking process stops.

If the snapshot.json file has changed, the framework downloads the file and verifies that it is properly signed by the snapshot role, is not expired, has a newer timestamp than the last snapshot.json file seen, and matches the description (hashes and size) in the timestamp.json file. The framework then checks which metadata files listed in snapshot.json differ from those described in the last snapshot.json file the framework had seen. If the root.json file has changed, the framework updates this (following the same security measures as with the other files) and starts the process over. If any other metadata files have changed, the framework downloads and checks those.

By comparing the trusted targets from the old trusted metadata with the new metadata, the framework is able to determine which target files have changed. The framework ensures that any targets described in delegated targets files are allowed to be provided by the delegated role.

When the client code asks the framework to download a target file, the framework downloads the file (potentially trying multiple mirrors), checks the downloaded file to ensure that it matches the information described in the targets files, and then makes the file available to the client code.
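Two of the checks above, expiration and "not older than the previously trusted file", can be expressed directly in terms of the "expires" and "version" fields defined in Section 4. A minimal sketch follows; the function names are ours, and signature verification and hash/length checks are deliberately omitted here.

    from datetime import datetime, timezone

    # Sketch of two of the checks above, expressed in terms of the metadata
    # fields defined in Section 4. Function names are illustrative; signature
    # verification and hash/length checks are omitted.
    def is_expired(signed, now=None):
        # "expires" uses the ISO 8601 form "YYYY-MM-DDTHH:MM:SSZ" (UTC).
        expires = datetime.strptime(signed["expires"], "%Y-%m-%dT%H:%M:%SZ")
        expires = expires.replace(tzinfo=timezone.utc)
        now = now or datetime.now(timezone.utc)
        return now >= expires

    def is_not_older_than_trusted(signed, trusted_signed):
        # Clients MUST NOT replace a metadata file with one whose version
        # number is lower than the version currently trusted (Section 4.3).
        return trusted_signed is None or signed["version"] >= trusted_signed["version"]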
6. Usage

See http://www.theupdateframework.com/ for discussion of recommended usage in various situations.

6.1. Key management and migration

All keys except the timestamp file signing key and the mirror list signing key should be stored securely offline (e.g. encrypted and on a separate machine, in special-purpose hardware, etc.).

To replace a compromised root key or any other top-level role key, the root role signs a new root.json file that lists the updated trusted keys for the role. When replacing root keys, an application will sign the new root.json file with both the new and old root keys until all clients are known to have obtained the new root.json file (a safe assumption is that this will be a very long time or never). There is no risk posed by continuing to sign the root.json file with revoked keys, as once clients have updated they no longer trust the revoked key. This is only to ensure outdated clients remain able to update.

To replace a delegated developer key, the role that delegated to that key just replaces that key with another in the signed metadata where the delegation is done.

7. Consistent Snapshots

So far, we have considered a TUF repository that is relatively static (in terms of how often metadata and target files are updated). The problem is that if the repository (which may be a community repository such as PyPI, RubyGems, CPAN, or SourceForge) is volatile, in the sense that the repository is continually producing new TUF metadata as well as its targets, then clients that read metadata while that same metadata is being written would effectively experience a denial of service. Therefore, the repository needs to be careful about how it writes metadata and targets. The high-level idea of the solution is that each snapshot will be contained in a so-called consistent snapshot. If a client is reading from one consistent snapshot, then the repository is free to write another consistent snapshot without interrupting that client. For more reasons on why we need consistent snapshots, please see https://github.com/theupdateframework/pep-on-pypi-with-tuf#why-do-we-need-consistent-snapshots

7.1. Writing consistent snapshots

We now explain how a repository should write metadata and targets to produce self-contained consistent snapshots. Simply put, TUF should write every metadata and target file as follows: if the file had the original name of filename.ext, then it should be written to disk as digest.filename.ext, where digest is the hex digest of a cryptographic hash of the file. This means that if the referrer metadata lists N cryptographic hashes of the referred file, then there must be N identical copies of the referred file, where each file will be distinguished only by the value of the digest in its filename. The modified filename need not include the name of the cryptographic hash function used to produce the digest because, on a read, the choice of function follows from the selection of a digest (which includes the name of the cryptographic function) from all digests listed for the referred file.

Additionally, the timestamp metadata (timestamp.json) should also be written to disk whenever it is updated. It is optional for an implementation to write identical copies at digest.timestamp.json for record-keeping purposes, because a cryptographic hash of the timestamp metadata is usually not known in advance. The same step applies to the root metadata (root.json), although an implementation must write both root.json and digest.root.json because it is possible to download root metadata both with and without known hashes. These steps are required because these are the only metadata files that may be requested without known hashes.

Most importantly, metadata file formats must not be changed to refer to the names of metadata or target files with their hashes included. In other words, if a metadata file A refers to another metadata or target file B as filename.ext, then the filename must remain as filename.ext and not digest.filename.ext. This rule is in place so that metadata signed by roles with offline keys will not be forced to sign for the metadata file whenever it is updated. In the next subsection, we will see how clients will reproduce the name of the intended file.

Finally, the root metadata should include the Boolean "consistent_snapshot" attribute at the root level of its keys of attributes. If consistent snapshots are not written by the repository, then the attribute may either be left unspecified or be set to the False value. Otherwise, it must be set to the True value.

For more details on how this would apply on a community repository, please see https://github.com/theupdateframework/pep-on-pypi-with-tuf#producing-consistent-snapshots
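Before moving on, here is a minimal sketch of the digest.filename.ext naming rule from the repository's side. The function name is ours, and the set of hash algorithms is assumed to be whatever set of hashes the referrer metadata lists for the file.

    import hashlib
    import os
    import shutil

    # Sketch of the naming rule above: for every hash algorithm whose digest
    # of the file is listed by the referrer metadata, write a copy of the
    # file named digest.filename.ext next to the original. The function name
    # and the default algorithm set are illustrative only.
    def write_consistent_copies(path, hash_algorithms=("sha256",)):
        with open(path, "rb") as f:
            data = f.read()
        directory, filename = os.path.split(path)
        for algorithm in hash_algorithms:
            digest = hashlib.new(algorithm, data).hexdigest()
            # e.g. "snapshot.json" -> "<digest>.snapshot.json"
            shutil.copyfile(path, os.path.join(directory, digest + "." + filename))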
7.2. Reading consistent snapshots

We now explain how a client should read a self-contained consistent snapshot.

If the root metadata (root.json) is either missing the Boolean "consistent_snapshot" attribute or the attribute is set to False, then the client should do nothing different from the workflow in Section 5.1. Otherwise, the client must proceed as follows:

1. It must first retrieve the timestamp metadata (timestamp.json) from the repository.

2. If a threshold number of signatures of the timestamp or snapshot metadata are not valid, then the client must download the root metadata (root.json) from the repository and return to step 1.

3. Otherwise, the client must download every subsequent metadata or target file as follows: if the metadata or target file has the name filename.ext, then the client must actually retrieve the file with the name digest.filename.ext, where digest is the hex digest of a cryptographic hash of the referred file as listed by its referrer file. Even though the modified filename does not include the name of the cryptographic hash function used to produce the chosen digest value, the choice of function follows from the selection of the digest (which includes the name of the cryptographic function) from all digests listed for the referred file.

4. Finally, the client must be careful to rename every metadata or target file retrieved with the name digest.filename.ext to the name filename.ext.

F. Future directions and open questions

F.1. Support for bogus clocks.

The framework may need to offer an application-enableable "no, my clock is _supposed_ to be wrong" mode, since others have noticed that many users seem to have incorrect clocks.