
Review and clarify spec selection criteria #1481

tidoust opened this issue Sep 2, 2024 · 1 comment

tidoust commented Sep 2, 2024

Via #1477 (comment). The criteria have always been fuzzy. That makes it hard to evaluate when a new spec should join the list (and what standing to give it). Now that the list contains >650 specs, we should be able to reflect on experience and refine the criteria accordingly.

To start with, it would probably be useful to document the consequences of adding a spec to the list for Webref, MDN, Specref, and WPT.


tidoust commented Sep 4, 2024

A dump of notes to document the impact of adding a spec to browser-specs. I see @Elchi3 proposed a breakout session at TPAC 2024 that takes a similar approach; see Curating the web platform's data and documentation.

When a spec gets added to browser-specs with a "good" standing and released in the web-specs npm package, the following happens:

  1. Reffy starts crawling the new spec; raw extracts of the definitions, CSS, elements, and IDL present in the spec get added to Webref as a result. Based on the raw extracts:
    1. Respec and Bikeshed update their cross-reference databases.
    2. Strudy alerts us about anomalies in the spec (for the anomalies that we check automatically).
    3. Webref produces curated versions of the extracts. When there are problems in the spec's CSS, elements, or IDL definitions, curation may need to be postponed until patches get produced. Note there is no real notion of curation for definitions of terms (a few patches are applied through post-processing in Reffy, but that should be viewed as a last-resort option). Based on the curated extracts:
      1. New versions of the @webref/css, @webref/elements, @webref/events, and @webref/idl packages get released from curated extracts after manual review, usually done on a weekly basis (a sketch of how consumers parse these extracts follows the list). Based on these packages:
        1. Web platform tests pulls IDL updates from @webref/idl.
        2. The MDN BCD collector runs tests on CSS, elements and IDL support across browsers, using @webref/css, @webref/elements, and @webref/idl, and signals support updates to BCD.
        3. The TypeScript DOM lib generator updates types from @webref/css, @webref/elements, and @webref/idl, filtering out features that are not "supported by two or more major browser engines" (using BCD to check the condition).
        4. The browser API bindings for Dart use @webref/css, @webref/elements, and @webref/idl to update browser bindings.
        5. The Pulsar text editor uses @webref/css and @webref/elements to provide autocomplete functions.
        6. The PostCSS preset env plugin uses @webref/css to list logical property groups.
        7. The nodysseus editor uses @webref/css, @webref/elements, and @webref/idl to update known types.
  2. The spec gets added to Specref if not already there.
  3. The web-features project allows features that reference the spec's URL.
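
To give an idea of what consumers do with the curated packages, here is a minimal sketch that parses a Web IDL fragment with the webidl2 npm package, the same kind of parsing consumers apply to the .idl files shipped in @webref/idl. The fragment is illustrative, not taken from an actual extract.

```js
// Minimal sketch: parse a Web IDL fragment the way consumers of
// @webref/idl parse the .idl files it ships. The fragment below is
// illustrative, not taken from an actual extract.
const { parse } = require("webidl2");

const idl = `
  [Exposed=Window]
  interface HypotheticalWidget {
    attribute DOMString label;
    undefined render();
  };
`;

// parse() returns an array of definition AST nodes.
for (const definition of parse(idl)) {
  console.log(definition.type, definition.name);
  // -> "interface HypotheticalWidget"
}
```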

Something similar happens when a spec gets added to browser-specs with a "pending" standing and released in the web-specs npm package, except that:

  • The raw extracts are not curated (point 1.3 above, and everything under it). Said differently, the spec becomes visible to spec authoring tools and related projects (Bikeshed, Respec, Specref, Strudy) but remains invisible to other projects (a sketch that filters the published list by standing follows).
  • The web-features project ignores the spec (point 3. above).
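
To make the distinction concrete for consumers, here is a minimal sketch that splits the published list by standing. It assumes the main export of the web-specs package is the array of spec objects and that each object carries a standing property, per the browser-specs documentation:

```js
// Minimal sketch: split the published spec list by standing.
// Assumes the main export of web-specs is the array of spec objects,
// each with a "standing" property ("good" or "pending").
const specs = require("web-specs");

const good = specs.filter(spec => spec.standing === "good");
const pending = specs.filter(spec => spec.standing === "pending");

console.log(`${good.length} specs in "good" standing`);
console.log(`${pending.length} specs in "pending" standing`);
```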

Adding a spec with a "good" standing means that we commit to data curation and patching. We don't want to do that lightly if the spec is too much in flux (unless it is in scope of a chartered standardization group).

Adding a spec with a "good" standing may also be seen as recognition that the spec is on the standards track, even when the data makes it clear that the spec is incubated in a Community Group with no guarantee whatsoever.

Adding a spec with a "pending" standing means that we commit to reporting anomalies.

Adding a spec with a "pending" standing should not create particular issues. One exception to the rule: the spec may export terms that are already defined in another spec. Such duplicates are not a problem for Respec but may confuse Bikeshed. We could add a check in Strudy to detect these duplicates once the spec has been added to the list (sketched below). Catching these duplicates when the spec gets added to browser-specs could theoretically be done too, see #1289.
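
A hypothetical sketch of such a duplicate check. The input shape loosely follows Reffy's dfns extracts (each dfn has a linkingText array and an access flag, with "public" marking exported terms); the function name and the simplifications are mine, not Strudy's actual code:

```js
// Hypothetical sketch of a duplicate-exported-terms check along the
// lines of what Strudy could do. Input shape loosely follows Reffy's
// dfns extracts; real dfns also have "type" and "for" properties that
// an actual check would need to take into account.
function findDuplicateExports(extracts) {
  const exportedBy = new Map(); // term -> Set of spec shortnames
  for (const { spec, dfns } of extracts) {
    for (const dfn of dfns) {
      if (dfn.access !== "public") continue; // only exported terms
      for (const term of dfn.linkingText) {
        if (!exportedBy.has(term)) exportedBy.set(term, new Set());
        exportedBy.get(term).add(spec);
      }
    }
  }
  // Keep only terms exported by more than one spec.
  return [...exportedBy].filter(([, specs]) => specs.size > 1);
}

// Illustrative data, not actual extracts.
console.log(findDuplicateExports([
  { spec: "spec-a", dfns: [{ linkingText: ["widget"], access: "public" }] },
  { spec: "spec-b", dfns: [{ linkingText: ["widget"], access: "public" }] },
])); // -> [ [ "widget", Set(2) { "spec-a", "spec-b" } ] ]
```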

Adding a spec with a "pending" standing means Strudy will analyze the spec, but that analysis typically does not include detection of most Web IDL anomalies, which rather happens during the curation process for specs in "good" standing. It could be interesting to report these anomalies early on, to smooth the switch to a "good" standing.

The web-features project should not need to see new specs early for now: features typically only emerge after keys have been added to BCD.

Web platform tests and the MDN BCD collector probably prefer to see new specs relatively early, so as to detect support as soon as possible, where relatively early is something like "when the spec is about to ship in a browser".

Other projects would probably prefer to see new specs only when they start being implemented somewhere and have a clearer standardization status.

All in all, looking at spec selection criteria, we could perhaps be flexible on criteria for adding a spec with a "pending" standing:

  • It seems fine to me to add a spec as soon as it starts some incubation process.
  • The commitments on our side remain acceptable, and it is useful to get the spec in order early on. We may also restrict analysis in Strudy to specs that are in "good" standing if we're overwhelmed.

In practice, the criteria listed in the README already seem pretty good as-is for adding a spec with a "pending" standing.

I think we need additional criteria for adding a spec with a "good" standing, to clarify the conditions that we consider when evaluating support (a sketch encoding these criteria follows the list):

  • Spec developed by an actual standardization group (e.g., a Working Group in W3C)? Added in "good" standing without further ado.
  • Spec developed in a pre-standardization group (e.g., WICG in W3C)? We look at development status and positions expressed. If the spec already ships in two or more browser engines, it gets added without further ado. Otherwise:
    • Known positions from main browser vendors must be one of pending, neutral, or positive. In particular, standards position requests must have been filed where possible.
    • Implementation within a browser engine must be ongoing, or have shipped already. Alternatively, the spec must be actively developed by people from two or more distinct browser engines (e.g., two editors from different browser engines).
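
To make the evaluation easier to apply consistently, the criteria could even be encoded along these lines. A hypothetical sketch; all property names are illustrative and this is not an actual browser-specs mechanism:

```js
// Hypothetical sketch encoding the criteria proposed above.
// All property names are illustrative.
function proposedStanding(spec) {
  // Developed by an actual standardization group (e.g., a W3C Working
  // Group): "good" standing without further ado.
  if (spec.groupType === "standardization") return "good";

  // Pre-standardization group (e.g., WICG): already ships in two or
  // more browser engines? Added without further ado.
  if (spec.shippedEngines >= 2) return "good";

  // Otherwise, check browser positions and implementation activity.
  const positionsOk = spec.positions.every(
    position => ["pending", "neutral", "positive"].includes(position)
  );
  const implementationOk =
    spec.implementationOngoing ||  // or shipped already
    spec.shippedEngines >= 1 ||
    spec.editorEngines >= 2;       // actively developed by people from
                                   // two or more distinct engines

  return positionsOk && implementationOk ? "good" : "pending";
}
```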

The above does not restrict addition to "shipped in one of the main browser engines" for Web Platform Tests and the MDN BCD Collector. That restriction should perhaps be made, though, especially if the spec can be added with a "pending" standing. Also, the evaluation remains subjective without it: what does "ongoing" mean for implementation? Intent to prototype, intent to ship, known milestones? Granted, the term "shipped" is also overloaded: shipped by default? behind a flag? in an origin trial?

MDN defines a similar set of requirements for documenting a new technology. I think the notion of standards track in that document includes what I call pre-standardization groups. Details include looking at "signs of interest" from non-supporting browsers.
