consider adding attributes to track Cache API performance in a service worker FetchEvent handler #162


Open
wanderview opened this issue Aug 16, 2018 · 6 comments

@wanderview
Member

It would be nice to give sites some more insight into the time spent in a service worker FetchEvent handler. Currently browsers mostly (see #119) provide these two PerformanceResourceTiming markers:

  • requestStart marks when the FetchEvent is dispatched
  • responseStart marks when the evt.respondWith() promise resolves
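
Given just those two markers, the only number a page can derive today is the total handler time. A minimal sketch (the helper name and the literal entry values are made up for illustration; real entries come from performance.getEntriesByType('resource') in a window context):

```javascript
// Time spent inside the service worker FetchEvent handler is roughly
// the gap between requestStart (event dispatched) and responseStart
// (respondWith() promise settled).
function fetchHandlerDuration(entry) {
  return entry.responseStart - entry.requestStart;
}

// Illustrative values only, not a real PerformanceResourceTiming entry.
const entry = { requestStart: 100.0, responseStart: 135.5 };
console.log(fetchHandlerDuration(entry)); // → 35.5
```

Note this lumps together all work in the handler; there is no way to tell how much of the 35.5 ms was Cache API time.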

For sites using a service worker to offline their resources the fetch handler will often be a mix of various javascript and cache API operations like:

evt.respondWith(async function() {
  let c = await caches.open("foo");
  return c.match(evt.request);
}());

It's not easy to associate the caches.open() time with the FetchEvent, but we could add PerformanceResourceTiming attributes like:

  • cacheMatchStart marks the beginning of the cache.match() or cache.matchAll() that produced the Response passed to evt.respondWith()
  • cacheMatchEnd marks when the cache.match() promise resolved with the Response passed to evt.respondWith()

This would require setting some internal timing information on the Response. This information would be copied automatically across a Response.clone(). Any other consumption and re-packing of the Response would lose the information, though.
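
With those two attributes, a page could isolate Cache API latency from the rest of the handler. A sketch of how a consumer might use them (the cacheMatch* fields do not exist today; the helper and the entry values are hypothetical):

```javascript
// Proposed attributes: cacheMatchStart / cacheMatchEnd bracket the
// cache.match() or cache.matchAll() call that produced the Response.
function cacheMatchDuration(entry) {
  // Attribute would presumably be absent (or zero) when the Response
  // did not come from the Cache API.
  if (!entry.cacheMatchStart) return 0;
  return entry.cacheMatchEnd - entry.cacheMatchStart;
}

// Made-up numbers for illustration.
const entry = { requestStart: 100, cacheMatchStart: 112, cacheMatchEnd: 130, responseStart: 131 };
console.log(cacheMatchDuration(entry)); // → 18
```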

@wanderview
Member Author

@asutherland @mrbkap @bakulf @youennf @toddreifsteck @aliams

What do you all think? I can put together a spec text PR and WPT if folks think this is a reasonable approach.

@asutherland

asutherland commented Aug 20, 2018

Sounds reasonable and useful.

I think an interesting question is whether it's worth including a third number like cacheMatchFirstProcessed in there that would help indicate when there is a delay in processing the match() calls in the back-end due to a foot-gun Cache.keys() operation dominating the Cache backend with a ton of IO/allocations/etc. The answer to that question is probably "no", but it does raise the issue of how we can make it easier to diagnose that pathological situation.

@wanderview
Member Author

> I think an interesting question is whether it's worth including a third number like cacheMatchFirstProcessed in there that would help indicate when there is a delay in processing the match() calls in the back-end due to a foot-gun Cache.keys() operation dominating the Cache backend with a ton of IO/allocations/etc. The answer to that question is probably "no", but it does raise the issue of how we can make it easier to diagnose that pathological situation.

Interesting idea, but I think I'd prefer to start with something minimal and expand to more granular information like this if necessary in the future.

@wanderview
Member Author

wanderview commented Sep 5, 2018

Here's another possible approach that is simple, but might be a bit more flexible than the original comment above:

  • workerCacheMatchStart contains the time cache.match(), cache.matchAll(), or caches.match() was called, iff the FetchEvent was fulfilled with a Response from one of these APIs.
  • workerFetchStart contains the time fetch() was called iff the FetchEvent was fulfilled with a Response produced by fetch().
  • workerResponseCreated contains the time the Response used to fulfill the FetchEvent was created. This could be when new Response() is called or when cache.match() or fetch() resolves its promise.
  • workerResponseCloned contains the time the Response used to fulfill the FetchEvent was cloned.

Some examples of what this would look like in practice:

//
// service worker script
//
self.addEventListener('fetch', evt => {
  if (evt.request.url.endsWith('cache')) {
    evt.respondWith(caches.match(evt.request));
    return;
  }

  if (evt.request.url.endsWith('fetch')) {
      evt.respondWith(fetch(evt.request));
      return;
  }

  if (evt.request.url.endsWith('constructor')) {
    evt.respondWith(new Response('hello world'));
    return;
  }
});

//
// main window script
//
async function test() {
  // Results in the following PerformanceResourceTiming attributes:
  //  * requestStart: time FetchEvent is fired
  //  * workerCacheMatchStart: time caches.match() begins
  //  * workerResponseCreated: time caches.match() resolves with the Response
  //  * responseStart: time respondWith promise is fulfilled with a Response
  const cacheURL = baseURL + "?q=cache";
  await fetch(cacheURL);
  let cacheEntry = performance.getEntriesByName(cacheURL)[0];
  console.log(cacheEntry);

  // Results in the following PerformanceResourceTiming attributes:
  //  * requestStart: time FetchEvent is fired
  //  * workerFetchStart: time fetch() begins
  //  * workerResponseCreated: time fetch() resolves with the Response
  //  * responseStart: time respondWith promise is fulfilled with a Response
  const fetchURL = baseURL + "?q=fetch";
  await fetch(fetchURL);
  let fetchEntry = performance.getEntriesByName(fetchURL)[0];
  console.log(fetchEntry);

  // Results in the following PerformanceResourceTiming attributes:
  //  * requestStart: time FetchEvent is fired
  //  * workerResponseCreated: time `new Response()` is called
  //  * responseStart: time respondWith promise is fulfilled with a Response
  const constructorURL = baseURL + "?q=constructor";
  await fetch(constructorURL);
  let constructorEntry = performance.getEntriesByName(constructorURL)[0];
  console.log(constructorEntry);

  // Results in the following PerformanceResourceTiming attributes:
  //  * requestStart: time FetchEvent is fired (or time network fallback request starts???)
  //  * responseStart: time fallback network response is available
  const fallbackURL = baseURL + "?q=fallback";
  await fetch(fallbackURL);
  let fallbackEntry = performance.getEntriesByName(fallbackURL)[0];
  console.log(fallbackEntry);
}

Of course, all of these examples are somewhat silly, since the Response creation process encompasses the entire fetch event handler. In practice, though, real websites do substantially more work in their fetch event handlers for analytics, etc. It would be useful to measure the Response creation time separately from the overall handler time.
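
As a sketch of that separation, the proposed worker* attributes would let a page break a cache-served request into phases (attribute names are from the proposal above, not a shipped API; the helper and values are made up for illustration):

```javascript
// Hypothetical phase breakdown for a FetchEvent fulfilled from the
// Cache API, separating handler bookkeeping (routing, analytics) from
// the cache lookup itself.
function breakdown(entry) {
  return {
    // FetchEvent dispatch until cache.match() was called.
    preMatch: entry.workerCacheMatchStart - entry.requestStart,
    // cache.match() call until it resolved with the Response.
    match: entry.workerResponseCreated - entry.workerCacheMatchStart,
    // Response creation until the respondWith() promise settled.
    postCreate: entry.responseStart - entry.workerResponseCreated,
  };
}

// Made-up numbers: 8 ms of pre-match work, 32 ms cache lookup, 2 ms tail.
const entry = { requestStart: 100, workerCacheMatchStart: 108, workerResponseCreated: 140, responseStart: 142 };
console.log(breakdown(entry)); // → { preMatch: 8, match: 32, postCreate: 2 }
```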

Edit: Updated example to make it clearer...

@yoavweiss
Contributor

Let's discuss this at TPAC. I've added it as an item on the proposals list.

@wanderview
Member Author

Thank you. I'll write an explainer for my larger "workerTiming" proposal to include in that discussion as well. It seemed like some folks were more excited/interested in that than this modest proposal.

@yoavweiss yoavweiss added this to the Level 3 milestone Oct 8, 2018