High-level web platform vision and improving the TAG’s Impact/Effort ratio #36
Re-posting my comment, which is linked above, so that it is public:
In the recent TAG+AB joint meeting there was a very good discussion on some issues I’ve been informally raising for a while, which are perfectly summarized by this (member-only) comment.
The 2012 TAG reform refocused the TAG’s work away from “architecture astronauts” and toward the practical, and that was certainly the right direction. However, at this point I think the pendulum has swung too far to the other side: the TAG is not spending its time in a way that maximizes Impact/Effort, but is lost in the low-level minutiae of incremental improvements to the web platform.
Most of our time is spent reviewing the low-level details of new APIs and incremental improvements, often for features already shipped across browsers, where our feedback is unlikely to produce any change. Even many of the early reviews we get are too far along for their authors to be open to radical change.
Maximizing I/E
The primary advantage of the TAG is the combination of deep technical expertise, an architectural mindset, and a broad bird’s-eye view of the web platform as a whole. Our time is better spent on tasks that draw on all three.
Web Platform gap analysis
The TAG should document developer needs that the platform either fails to meet entirely, or meets with usability cliffs: points where a small increase in use case complexity results in a disproportionate increase in API complexity. This includes instances where basic, common use cases require an inordinate amount of author effort.
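To make “usability cliff” concrete, here is one illustrative sketch (my example, not a TAG finding), in TypeScript for a browser environment: fetching JSON is a one-liner, but adding download progress to that same fetch drops authors down to manual stream plumbing.

```ts
// Baseline use case: fetching JSON is a one-liner.
const data = await fetch("/api/items").then((r) => r.json());

// Slightly more complex use case: the same fetch, with download progress.
// The required code jumps from one line to manual stream plumbing.
async function fetchJsonWithProgress(
  url: string,
  onProgress: (loaded: number, total: number) => void,
): Promise<unknown> {
  const response = await fetch(url);
  const total = Number(response.headers.get("Content-Length") ?? 0);
  const reader = response.body!.getReader();
  const chunks: Uint8Array[] = [];
  let loaded = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
    loaded += value.length;
    onProgress(loaded, total);
  }
  // We consumed the stream, so we must reassemble and parse the body ourselves.
  const body = new Uint8Array(loaded);
  let offset = 0;
  for (const chunk of chunks) {
    body.set(chunk, offset);
    offset += chunk.length;
  }
  return JSON.parse(new TextDecoder().decode(body));
}
```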
As much as I believe in user research, it’s important to know the limitations of each method, and a lot of these issues do not surface as developer complaints in surveys. Developers will complain about very specific problems they face, but will rarely be able to see the bigger picture or connect the dots between related problems. They also (just like all users) tend to complain more about things that cannot be done at all than about things that are possible but hard.
High-level architectural guidance
Point groups interested in solving a given problem in the right direction early on, potentially connecting them with other groups solving similar problems when we feel the architecturally better solution is to join forces.
This would involve fostering a culture of much earlier design reviews, and prioritizing early reviews. Spending 5 minutes discussing an idea early on can have a lot more impact than spending a whole telcon reviewing it later on.
I’m not saying that later reviews are not valuable. But our time is extremely limited, so maximizing impact per unit of time matters. The earlier we look at an idea, the less time it requires, and the more impactful our feedback can be.
Concrete suggestions
A scoring system to prioritize design reviews
Reviews would be scored on a set of factors we agree on. The data for each factor could even be part of the design review template and supplied by the requester, so all we would need to do is score it.
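As a sketch of how such scoring could work (the factors, weights, and scales below are placeholders for illustration, not a proposed rubric):

```ts
// Hypothetical inputs, supplied by the requester via the review template.
interface ReviewRequest {
  stage: number; // 0 = idea, 1 = incubation, 2 = spec, 3 = shipping
  breadth: number; // 0-3: how much of the platform the proposal touches
  reversibility: number; // 0-3: how costly a later course correction would be
  multiImplementerInterest: boolean;
}

// Earlier-stage, broader, harder-to-reverse proposals score higher,
// i.e. they get review time first.
function priorityScore(r: ReviewRequest): number {
  const earliness = 3 - r.stage;
  return (
    3 * earliness +
    2 * r.breadth +
    2 * r.reversibility +
    (r.multiImplementerInterest ? 1 : 0)
  );
}
```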
Explainer reform
Our process currently assumes TAG participants spend time pre-telcon reviewing explainers. In practice, explainers are reviewed synchronously during the telcons. The longer the explainer, the more likely it is that important parts are skimmed. Rather than pretending that is not the case, we should embrace it and communicate it to requesters. That is also more respectful of requesters’ time, since many are under the impression that the longer the explainer, the better.
Introduce explicit guidance that explainers need to be written so they can be read and processed in N minutes. We can discuss what a reasonable N is, but I suspect it would be somewhere between 2 and 5. Explainers can include pointers to more information, but the main explainer page should not be too long and should stand alone as an overview.
Wading through incomplete explainers and trying to guess the information that is missing is not a good use of our time. We should skip reviews whose explainers omit the essential bits. We can send a template response for these rather than allocating call time for them; perhaps the chairs could even do this pre-filtering so we don’t spend any call time on it.
A repo for gap analysis
We need a place to record the gaps we notice in the web platform and author pain points, which are currently lost in ad hoc discussions. Sure, a record is not enough on its own; we also need a way to raise awareness and create WG and implementer interest, but it’s the first step. This way we can also ensure that the documented user needs are actually backed by TAG consensus, since there have been cases in the past where that was not so.
A new repo where we open these as issues is probably the most lightweight way to start this. @plinss mentioned there have been some previously identified gaps. Where do these live? Can we centralize them or point to them from that repo?