
High-level web platform vision and improving the TAG’s Impact/Effort ratio #36

Open
LeaVerou opened this issue Jul 17, 2024 · 2 comments

@LeaVerou (Member) commented Jul 17, 2024

In the recent TAG+AB joint meeting there was a very good discussion on some issues I’ve been informally raising for a while, which are perfectly summarized by this (member-only) comment.

The 2012 TAG reform focused on shifting the TAG’s work away from “architecture astronauts” and toward the practical, and it was certainly the right direction. However, at this point I think the pendulum has swung too far to the other side: the TAG is not spending its time in a way that maximizes Impact/Effort, but is lost in the low-level minutiae of incremental improvements to the web platform.

Most of our time is spent reviewing low-level details of new APIs and incremental improvements, often for features already shipped across browsers, where our feedback is unlikely to produce any change. Even many of the early reviews we get are too far along for their authors to be open to radical change.

Maximizing I/E

The primary advantage of the TAG is the combination of deep technical expertise, an architectural mindset, and a broad bird’s-eye view of the web platform as a whole. Our time is better spent on tasks that draw on all three.

Web Platform gap analysis

The TAG should document developer needs the platform either fails to meet entirely, or meets only with usability cliffs: points where a small increase in use-case complexity produces a disproportionate increase in API complexity. This includes instances where basic, common use cases require an inordinate amount of author effort.

As much as I believe in user research, it’s important to know the limitations of each method. A lot of these issues do not surface as developer complaints in surveys. Developers will complain about very specific problems they face, but will rarely be able to see the bigger picture or connect the dots between related problems. Also, they (just like all users) tend to complain more about things that cannot be done at all than about things that are possible but hard.

High-level architectural guidance

Point groups interested in solving a given problem in the right direction early on, potentially connecting them with other groups solving similar problems where we feel the architecturally better solution is to join forces.

This would involve fostering a culture of much earlier design reviews, and prioritizing early reviews. Spending 5 minutes discussing an idea early on can have a lot more impact than spending a whole telcon reviewing it later on.

I’m not saying that later reviews are not valuable. But our time is extremely limited, so maximizing impact per unit of time matters. The earlier we look at an idea, the less time it requires, and the more impactful our feedback can be.

Concrete suggestions

A scoring system to prioritize design reviews

With factors like:

  • Prioritize earlier reviews: The earlier the review, the more points. Has a mature spec? Subtract points. Shipped anywhere? Subtract more. Shipped across multiple browsers? Subtract a ton of points.
  • Prioritize impactful features: The larger the scope of the feature, the more points. Completely new API: more points. Incremental improvement on existing tech: fewer points. Brand new technology: +∞ points!
  • Prioritize recent review requests: While FIFO is fairer, the longer a review request has been open, the less likely it is that our feedback will still make a difference.

The data for each of these factors could even be part of the design review template and supplied by the requester, so all we need to do is score it.
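
To make this concrete, here is a minimal sketch of what such a scoring function could look like. The factors mirror the list above, but every field name, weight, and decay rate is a hypothetical placeholder for illustration, not proposed TAG policy:

```python
# Illustrative only: the factors mirror the proposal above, but every
# weight and threshold here is a hypothetical placeholder, not TAG policy.
from dataclasses import dataclass

@dataclass
class ReviewRequest:
    has_mature_spec: bool
    shipped_anywhere: bool
    shipped_in_multiple_browsers: bool
    is_new_api: bool           # completely new API vs. incremental improvement
    is_new_technology: bool    # brand new technology
    days_open: int             # how long the request has been waiting

def priority_score(req: ReviewRequest) -> float:
    points = 10.0  # baseline for any request
    # Prioritize earlier reviews: maturity and shipping subtract points.
    if req.has_mature_spec:
        points -= 3
    if req.shipped_anywhere:
        points -= 5
    if req.shipped_in_multiple_browsers:
        points -= 10  # "subtract a ton of points"
    # Prioritize impactful features: larger scope adds points.
    if req.is_new_api:
        points += 5
    if req.is_new_technology:
        points += 20  # stands in for the proposal's "+∞"
    # Prioritize recent requests: priority decays the longer one sits open.
    points -= req.days_open / 30  # roughly one point lost per month open
    return points
```

The exact weights matter far less than writing them down somewhere requesters can see them, so prioritization stays predictable rather than ad hoc.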

Explainer reform

Our process currently assumes TAG participants spend time pre-telcon reviewing explainers. In practice, explainers are reviewed synchronously during the telcons. The longer the explainer, the more likely it is that important parts get skimmed. Rather than pretending that is not the case, we should embrace it and communicate it to requesters. That is also more respectful of requesters’ time, since many are under the impression that the longer the explainer, the better.

Introduce explicit guidance that explainers need to be written so they can be read and processed in N minutes. We can discuss what a reasonable N is, but I suspect it would be somewhere between 2 and 5. They can include pointers to more information, but the main explainer page should not be too long and should stand alone as an overview.

Wading through incomplete explainers and trying to guess the information that is missing is not a good use of our time. We should skip reviews whose explainers do not include essential bits like:

  • User needs / high level use cases (at least two to ensure no overfit)
  • Usage examples, with actual code using the API for common use cases.
  • Alternatives considered

We can have a template response for these rather than allocating call time to them. Perhaps the chairs could even do this pre-filtering so no call time is spent on it at all.
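
As a rough illustration of how that pre-filtering could even be automated, here is a sketch that flags explainers missing the essential sections. The heading patterns are assumptions made for the sake of the example; real explainers name these sections in many different ways:

```python
# Illustrative pre-filter: the heading patterns are assumptions,
# not a fixed TAG convention.
import re
import sys

REQUIRED_SECTIONS = {
    "user needs / use cases": r"user needs|use cases",
    "usage examples": r"usage examples?|examples?",
    "alternatives considered": r"alternatives considered",
}

def missing_sections(markdown: str) -> list[str]:
    # Collect all Markdown headings and match required sections against them.
    headings = " ".join(
        line.lower()
        for line in markdown.splitlines()
        if line.lstrip().startswith("#")
    )
    return [name for name, pattern in REQUIRED_SECTIONS.items()
            if not re.search(pattern, headings)]

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as f:
        gaps = missing_sections(f.read())
    if gaps:
        print("Explainer is missing essential sections:", ", ".join(gaps))
        sys.exit(1)  # a template response could be sent instead of a call slot
```

A check like this could post the template response automatically, so a human only ever looks at explainers that pass the checklist.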

A repo for gap analysis

We need a place to record gaps we notice in the web platform and author pain points, which are currently lost in ad hoc discussions. Sure, it’s not enough; we also need a way to raise awareness and create WG and implementer interest, but it’s the first step. And this way we can also ensure that the user needs are actually backed by TAG consensus, since there have been past instances where they were not.

A new repo where we open these as issues is probably the most lightweight way to start this. @plinss mentioned there have been some previously identified gaps. Where do these live? Can we centralize them or point to them from that repo?

@adrianhopebailie commented

Re-posting my comment which is linked above so that it is public:

@torgo the Web Platform Design Principles is a great document to use when designing an API, but my comment is about the higher-level architecture and the features of the Web as a complete platform.

In my opinion we need to recognise that the Web platform competes with other platforms for both users and developers.

What are we doing as W3C to ensure that a developer who has a great idea for a new application or service chooses to build it on the Web vs some other platform?

What are we doing as W3C to ensure that a user looking for an application or service they need starts by looking for it on the Web vs an app store?

As I say in my original comment, I believe we are missing some high level product management/strategy and proactive, co-ordinated feature development.

I don't think this has been a mandate of the TAG historically so my comment is not a criticism but rather a request to include this in the TAG mandate (or in somebody's mandate).

The whole standards process favours very small, tightly scoped APIs, and seems optimised for the success of the process, not necessarily of the platform in the long term.

This process works when feature development is reactive and the motivation for adding the feature is “it addresses a problem and doesn’t break anything”, but I’m not aware of anything in the process that evaluates market trends, looks at existing features of competing platforms, and formulates a product strategy to inform new feature development.

@LeaVerou (Member, Author) commented Jul 24, 2024

We discussed this in a plenary session during the last day of our Seattle F2F.

Summary:

  • Consensus to start a gap analysis repo. (edit: started)
  • Consensus that it’s a good idea to prioritize reviews based on some rubric, though the devil is in the details. Factors discussed:
    • Early vs Late: no consensus
    • Buy-in: consensus
    • Urgency: consensus, as long as we're the ones assessing it
    • Scope: no consensus
    • Recency: consensus
  • Consensus to update explainer guidance explicitly encouraging conciseness.
    We don’t necessarily have to mention that they will be reviewed synchronously.
  • Consensus that the new design review template is a mess; move towards a form

Explainer reform:

  • Our explainer explainer is a wall of text that does not make clear what the top priority is
  • Some good material here: https://github.com/mozilla/explainers#minimum-viable-explainer
  • Consensus that we should make it clear that the most important sections are:
    1. User needs / high level use cases (at least 2 diverse ones to avoid overfit)
    2. Usage examples iff code is involved
    3. Alternatives considered
