This repository has been archived by the owner on Feb 6, 2025. It is now read-only.

Feedback #9

Open
addyosmani opened this issue Feb 12, 2017 · 2 comments

Comments


addyosmani commented Feb 12, 2017

First, it's awesome that you've been exploring different architecture patterns here @sokra. Thanks for reaching out for some thoughts.

Patterns 🖌

Here are the different patterns for PWAs that I'm aware of:

  • Server-side render of App Shell + use JS to fetch and populate content
  • Full server-side render of Page Shell
  • Full client-side render of Page Shell
  • Hydration. Server-side render of App Shell + content for the page - like Page Shell. Use JS to fetch content for any further routes by 'hydrating' the app into an SPA
  • Server-side render of App Shell with Streams for body content
  • Server-side render App Shell + use AMP as leaf nodes

In general, the Chrome team encourages optimizing for getting a page interactive really quickly (vs. showing UI a user can't interact with), but YMMV depending on the metrics you care about.

App Shell

Pro: instantly load your UI on repeat visits, only fetching minimal payloads from the network as you navigate to different routes. Avoids refetching UI from page to page.

Con: For many, this requires a site-wide rearchitecture or shipping a new app. Can push out first contentful paint / first meaningful paint for first load as you're waiting on the network to fetch JS bundle which will then fetch your JSON data.


In general, this approach works best for optimizing time-to-first-paint (sometimes first meaningful paint) and structures your app a lot more like a native app. On repeat visits the whole UI gets loaded locally (from the Service Worker) without touching the network, and it becomes straightforward to keep caching any JSON data or static resources the page needs to be useful. Pages only download the content they need instead of re-fetching pieces of UI, like toolbars and footers.
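As a hedged sketch of that repeat-visit path, the cache-first decision can be written as a plain function, with `cache` and `network` standing in for `caches.match()` and `fetch()` (the names are illustrative, not from any particular codebase):

```javascript
// Minimal cache-first sketch of the App Shell model. The Service Worker
// APIs are stubbed as plain async parameters so the decision logic
// stands on its own.

// Serve shell resources from the local cache; only data (e.g. JSON)
// should ever hit the network on a repeat visit.
async function respond(url, cache, network) {
  const cached = await cache.get(url); // caches.match(request) in a real SW
  if (cached !== undefined) return cached;
  return network(url);                 // fetch(request) in a real SW
}
```

In a real Service Worker this would sit inside a `fetch` event handler via `event.respondWith(...)`, with the shell URLs added to the cache at `install` time.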

The downside of this approach is that, at its most basic, you're giving the user a skeleton user interface without any real content on first load and then populating it using JavaScript. This can be less optimal on spotty networks, where a delay fetching your Webpack bundles means the user is left looking at the skeleton screen for a while.

The advice we've been giving folks is to use code-splitting and route-based chunking (see PRPL) to keep your Webpack bundles for a route very small; hopefully that makes it easier to fetch both your JSON payloads and JS without too much of a wait on the network.
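For instance, with webpack 2+ every dynamic `import()` in route code becomes its own chunk automatically, so a minimal (illustrative, not from the thread) config only needs to say how those chunks are named:

```javascript
// webpack.config.js - an illustrative fragment, assuming webpack 2+.
// Each dynamic import('./routes/...') in app code is split into its
// own chunk, so a route's JS stays small and is fetched independently.
module.exports = {
  entry: './src/main.js',
  output: {
    path: __dirname + '/dist',
    filename: 'main.js',
    chunkFilename: 'route-[name].js', // one small chunk per route
  },
};
```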

Page Shell

Pro: doesn't require a site-wide rearchitecture. Full-page caching is possible and caching of static resources should also be fine. Could give you a faster time to first meaningful paint.

Con: although page shells can be loaded on repeat visits from the SW Cache, each route needs to fetch the UI skeleton itself (toolbars, footers etc.), meaning that it isn't quite as optimal for repeat visits as the App Shell model might be.

If you're working on a CMS or classic content site, it might be really hard to rearchitect for the App Shell model. You might find adding a simple SW for caching individual pages easier initially, although we try to encourage App Shell's perf benefits where possible.

With Page Shell, you have a bundle per route/page and are SW-caching those; you can still give users the benefit of not having to refetch scripts on repeat visits, but the HTML won't be cached quite as optimally. It's also harder to manage atomic updates when you're only updating smaller pieces of the UI.

Hybrid Shell

A hybrid model between App Shell and Page Shell offers an interesting combination of the benefits:

  • You server-side render your initial Application Shell prepopulated with data (Page Shell). This might have to be done for every route your user can land on. It increases the size of your HTML response, but means 1) you get meaningful text/content on the screen quicker, and 2) you're not blocked on JavaScript/Webpack bundle loading before the user can read the page
  • When the server-side rendered page has completed rendering, you 'hydrate' it using your JavaScript bundle (as you would have with the App Shell). This effectively turns a static site into an SPA - attach event handlers, routing etc. When the user navigates around the app now, it treats the Page Shell like an App Shell, reuses locally cached resources and just fetches JSON for your data instead of requiring a fully server-rendered Page Shell for each route.
  • This can be harder to implement. It generally requires thinking about isomorphic data fetching and rendering, and carefully looking at how much the SSR'd Page Shell + hydration pushes out your metrics; I've seen fewer folks implement it. It is another option, though :)
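The server half of that flow can be sketched in a few lines. `renderPage` and the placeholder comment are hypothetical names, assuming a shell template the server fills in per route:

```javascript
// Server side: render the App Shell pre-populated with this route's
// content, so the first response already paints meaningful text.
function renderPage(shellTemplate, contentHtml) {
  return shellTemplate.replace('<!--content-->', contentHtml);
}

const html = renderPage(
  '<body><header>nav</header><main><!--content--></main></body>',
  '<article>First story</article>'
);
// The client bundle later "hydrates" this markup: it attaches event
// handlers and a router, and subsequent navigations fetch JSON only
// instead of another fully server-rendered page.
```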


Comparisons

Thanks to Jake for putting the below demos together a while back. We’re going to quickly walk through a comparison of some of the above models.

Server render - 3G


http://www.webpagetest.org/video/compare.php?tests=160112_VA_KFA-r%3A8-c%3A0&thumbSize=200&ival=100&end=visual
First render: 0.8s
First content render: 1.7s

Repeat visits will load fully cached pages from the SW cache. However, each network fetch will re-request common “shell” UI blocks like headers and footers as they’re being served in the same page. The “app-shell” pattern doesn’t suffer from this problem.

App Shell render - 3G


http://www.webpagetest.org/video/compare.php?tests=160112_VA_KFA-r%3A4-c%3A1&thumbSize=200&ival=100&end=visual
First render: 0.4s
First content render: 3.7s

Repeat visits now of course don’t have to re-fetch the application shell or UI pieces that have already been fetched from the network, unlike the pure server-rendered version.

However, this demonstrates a flaw in the app shell approach. The shell loads from the cache, getting a quicker first render, then the JS fetches the content, then it writes it to the page. We have no access to the streaming parser from the page, so the content has to be fully downloaded before it can be displayed. The larger the content, the more you lose vs a streamed server render.

There are some hacks going on already to reduce the issue. The service worker will start fetching the content as soon as it serves the shell, so it starts the fetch earlier than the page's JS would. But there's another hack that helps…

App Shell + partial content write - 3G


http://www.webpagetest.org/video/compare.php?tests=160112_ZW_KVN-r%3A4-c%3A1&thumbSize=200&ival=100&end=visual
First render: 0.2s
First content render: 2.5s

This hack streams the main content inside the page's JS. There's no access to the streaming parser, but this kind of fakes it: content is streamed until 9k is available (post-unzip), then the partial content is written to innerHTML. Once the rest of the content is fetched, it writes to innerHTML again. This results in some elements being created twice, but the performance improvement is > 1s on 3G. Still not as fast as a server render, though.
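That partial-write trick reduces to "buffer, flush once at a threshold, then write the full document". A minimal sketch, with `render` standing in for `element.innerHTML = html` and the names invented for illustration:

```javascript
// Buffer incoming chunks; do one early partial write at `threshold`
// bytes, then a final full write. Mirrors the 9k innerHTML hack above.
function createEarlyFlushWriter(threshold, render) {
  let buffered = '';
  let flushedEarly = false;
  return {
    write(chunk) {
      buffered += chunk;
      if (!flushedEarly && buffered.length >= threshold) {
        flushedEarly = true;
        render(buffered); // elements created by this partial write...
      }
    },
    end() {
      render(buffered); // ...are created again by the full write
    },
  };
}
```

Feeding it chunks from a `fetch` body reader reproduces the demo's behavior: elements present before the threshold get created twice, once per write.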

Jake hacked the same page together using streams, where the top & tail of the page are streamed from the cache, but the middle is streamed from the server…

Stream from service worker - 3G


http://www.webpagetest.org/video/compare.php?tests=160112_7B_M9B-r%3A6-c%3A1&thumbSize=200&ival=100&end=visual
First render: 0.3s
First content render: 1.5s (1.7 for full above-the-fold content)

So now we've got the quick first render, but without any cost to the content render (perhaps even faster), and it'll become even faster as more of the primitives land in more browsers (transform streams & piping). This approach also means that parsing/execution rules are as you'd expect when it comes to `<script>` etc.

In a streams-based model:

  • Start getting page content from cache or network
  • Start getting page head from cache
  • Start getting page footer from cache
  • Stream head to response
  • Stream content to response
  • Stream footer to response

In this case, the SW effectively becomes your server, requiring very few changes on the client.
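The step list above can be sketched with web `ReadableStream` (available in browsers and in Node 18+); the in-memory sources here stand in for `caches.match()` and `fetch()` in a real service worker:

```javascript
// Wrap a string as a one-chunk stream (stand-in for cache/network).
function fromString(s) {
  return new ReadableStream({
    start(controller) {
      controller.enqueue(s);
      controller.close();
    },
  });
}

// Emit each source stream's chunks in order, as one combined stream.
function concatStreams(streams) {
  return new ReadableStream({
    async start(controller) {
      for (const stream of streams) {
        const reader = stream.getReader();
        while (true) {
          const { value, done } = await reader.read();
          if (done) break;
          controller.enqueue(value);
        }
      }
      controller.close();
    },
  });
}

const page = concatStreams([
  fromString('<header>shell head</header>'), // from the SW cache
  fromString('<main>server content</main>'), // from the network
  fromString('<footer>shell foot</footer>'), // from the SW cache
]);
// In a SW fetch handler: event.respondWith(new Response(page,
//   { headers: { 'Content-Type': 'text/html' } }));
```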

Further thoughts

With any of these models, there's going to be nuance. A lot of PWAs are fine with the App Shell approach; however, if you're a content-heavy site like a news publisher, I could see the Hybrid model or Page Shell model being appealing. Some folks on our team are also hopeful that newer APIs like Streams will offer even better support for progressively rendering content in these types of models.

I personally suggest folks think about what metrics they are trying to optimize for and choose their architecture patterns accordingly :)



sokra commented Feb 13, 2017

Thanks for your feedback @addyosmani.

ezekielchentnik commented Feb 13, 2017

'micro-apps' are another approach for high availability. Perhaps not exactly what you're going for in this project, but maybe this comment will provide some insight from a real-world project.

'micro-apps' offer autonomy & ownership across teams. Stateless, like micro-services. Another term may be MPAs, 'multi-page apps': https://gist.github.com/ezekielchentnik/4dd04df7094d59e80e7a

Perhaps much like 'page shell', but each micro-app acts independently and is only aware of itself. Some obvious downsides (similar to @addyosmani's cons on Page Shell). The pros are autonomy and disposability.

Think of each page (app) as its own SPA, SSR, or whatever the app needs to be. The experience is made cohesive by stitching together a shared header/footer via a microservice. A micro-app can contain other micro-apps (stitched via nginx server-side includes). Each micro-app is containerized.

Attributes

  • Single responsibility
  • Self contained
  • Independent source repository
  • Mock all integrations during dev
  • Clear ownership
  • Disposable
  • Not meant to be shared

Rules on sharing between micro-apps

  • Harvest front end components when needed
  • Harvest back end libraries when needed
  • Apps have options to opt-in components
  • Beware of shared ownership
  • Keep to a minimum
  • Shared at build/compile time, (almost) never at runtime

