
High-level web platform vision and improving the TAG’s Impact/Effort ratio #36

@LeaVerou

In the recent TAG+AB joint meeting there was a very good discussion on some issues I’ve been informally raising for a while, which are perfectly summarized by this (member-only) comment.

The 2012 TAG reform focused on shifting the TAG’s work away from "architecture astronauts" and toward the practical, and that was certainly the right direction. However, at this point I think the pendulum has swung too far to the other side: the TAG is not spending its time in a way that maximizes Impact/Effort, but is lost in the low-level minutiae of incremental improvements to the web platform.

Most of our time is spent reviewing low-level details of new APIs and incremental improvements, often for features already shipped across browsers, where our feedback is unlikely to produce any change. Even many of the early reviews we get are too far along for their authors to be open to radical change.

Maximizing I/E

The primary advantage of the TAG is the combination of deep technical expertise, an architectural mindset, and a broad bird’s-eye view of the web platform as a whole. Our time is better spent on tasks that draw on all three.

Web Platform gap analysis

The TAG should document developer needs that the platform either fails to meet entirely, or meets only with usability cliffs: points where a small increase in use case complexity results in a disproportionate increase in API complexity. This includes instances where basic, common use cases require an inordinate amount of author effort.

As much as I believe in user research, it’s important to know the limitations of each method. A lot of these issues do not surface as developer complaints in surveys. Developers will complain about very specific problems they face, but will rarely be able to see the bigger picture or connect the dots between related problems. Also, they (just like all users) tend to complain more about things that cannot be done at all, rather than things that are possible, but hard.

High-level architectural guidance

Point groups interested in solving a given problem in the right direction early on, potentially connecting them with other groups solving similar problems where we feel the architecturally better solution is to join forces.

This would involve fostering a culture of much earlier design reviews, and prioritizing early reviews. Spending 5 minutes discussing an idea early on can have a lot more impact than spending a whole telcon reviewing it later on.

I’m not saying that later reviews are not valuable. But our time is extremely limited, so maximizing impact per unit of time matters. The earlier we look at an idea, the less time it requires, and the more impactful our feedback can be.

Concrete suggestions

A scoring system to prioritize design reviews

With factors like:

  • Prioritize earlier reviews: The earlier the review, the more points. Has a mature spec? Subtract points. Shipped anywhere? Subtract points. Shipped across multiple browsers? Subtract a ton of points.
  • Prioritize impactful features: The larger the scope of the feature, the more points. Completely new API: more points. Incremental improvement on existing tech: fewer points. Brand new technology: +∞ points!
  • Prioritize recent review requests: While FIFO is fairer, the longer a review request has been open, the less likely it is that our feedback will make a difference.

The data for each of these factors could even be part of the design review template and supplied by the requester, so all we need to do is score it.
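To make this concrete, here is a minimal sketch of what such a scoring function might look like. The field names, categories, and point values are all invented for illustration; the actual factors and weights would be up to the TAG to agree on.

```ts
// Purely illustrative sketch of a review-prioritization score.
// All field names and point values below are made up for the example.

interface ReviewRequest {
  stage: "idea" | "explainer" | "mature-spec"; // how far along the design is
  shippedInBrowsers: number;                   // 0 = not shipped anywhere
  scope: "incremental" | "new-api" | "new-technology";
  daysOpen: number;                            // age of the review request
}

function priorityScore(r: ReviewRequest): number {
  let score = 0;

  // Earlier reviews score higher; shipping anywhere costs points,
  // shipping across multiple browsers costs a lot more.
  if (r.stage === "idea") score += 30;
  else if (r.stage === "explainer") score += 15;
  else score -= 10; // mature spec
  if (r.shippedInBrowsers === 1) score -= 15;
  if (r.shippedInBrowsers > 1) score -= 40;

  // Larger scope scores higher.
  if (r.scope === "new-technology") score += 40;
  else if (r.scope === "new-api") score += 20;
  else score += 5; // incremental improvement

  // Recent requests score higher than long-open ones.
  score -= Math.min(r.daysOpen, 180) / 10;

  return score;
}
```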

Explainer reform

Our process currently assumes TAG participants spend time pre-telcon reviewing explainers. In practice, explainers are reviewed synchronously during the telcons. The longer the explainer, the more likely it is that important parts are skimmed. Rather than pretending that is not the case, we should embrace it and communicate it to requesters. That is also more respectful of requesters’ time, since many are under the impression that the longer the explainer, the better.

Introduce explicit guidance that explainers need to be written so they can be read and processed in N minutes. We can discuss what a reasonable N is, but I suspect it would be somewhere between 2 and 5. They can include pointers to more information, but the main explainer page should not be too long and should stand alone as an overview.

Wading through incomplete explainers and trying to guess the information that is missing is not a good use of our time. We should skip reviews whose explainers do not include essential bits like:

  • User needs / high-level use cases (at least two, to avoid overfitting to a single case)
  • Usage examples, with actual code using the API for common use cases
  • Alternatives considered

We can have a template response for these rather than allocating call time to them. Perhaps the chairs could even do this pre-filtering, so that no call time is spent on incomplete explainers at all.
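As a purely illustrative aside, such pre-filtering could even be partly automated. The sketch below assumes the checklist above; the section names and naive string matching are invented for the example, not an agreed requirement.

```ts
// Hypothetical pre-filter a chair (or a bot) could run over an explainer's text
// before triaging. The required section names below are assumptions based on
// the checklist above, not an agreed-upon list.
const REQUIRED_SECTIONS = [
  "user needs",
  "use cases",
  "examples",
  "alternatives considered",
];

function missingSections(explainerText: string): string[] {
  const text = explainerText.toLowerCase();
  return REQUIRED_SECTIONS.filter((section) => !text.includes(section));
}

// If anything is missing, send the template response instead of scheduling call time.
```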

A repo for gap analysis

We need a place to record gaps we notice in the web platform and author pain points, which are currently lost in ad hoc discussions. Sure, that alone is not enough; we also need a way to raise awareness and create WG and implementer interest, but it’s the first step. This way we can also ensure that the recorded user needs are actually backed by TAG consensus, since there have been instances in the past where that was not the case.

A new repo where we open these as issues is probably the most lightweight way to start this. @plinss mentioned there have been some previously identified gaps. Where do these live? Can we centralize them or point to them from that repo?
