The average response time is: performanceAlias.timing.responseEnd - performanceAlias.timing.requestStart
This covers the time the server takes to generate the response plus the time the client takes to download it. We took this approach because it seemed to us the best “simple metric” to follow… let me know if you think there is a better one?
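For reference, here’s a minimal sketch of that computation, assuming performanceAlias is simply an alias for window.performance (as the name suggests):

    var performanceAlias = window.performance;
    if (performanceAlias && performanceAlias.timing) {
        var t = performanceAlias.timing;
        // Server generation time plus client download time, in milliseconds.
        var responseTime = t.responseEnd - t.requestStart;
    }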
I’ve checked with my Google contacts and GA SiteSpeed uses navigationStart -> loadEventStart.
From an “end-user” perspective this gives the closest match to the subjective experience of “from when I clicked on a link to when the page was ready” (albeit the real “perceived load time” is somewhere in the middle, subject to render-start times, how the page is structured, etc.).
So if you’re trying to correlate “page load time” with things like bounce rates, conversion rates, etc., then that’s the better “front-end performance / real-user monitoring” metric.
That said, the “generation time” metric you currently use is probably closer to the “data start time”, a.k.a. time-to-first-byte, which gives you more insight into the health of your back-end (e.g. does it slow down under load?). If GenerationTime were performanceAlias.timing.responseStart - performanceAlias.timing.navigationStart, then it would be even closer to the TTFB metric, since it would also include DNS performance and network latency.
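To make the contrast concrete, here’s a rough sketch of both measurements side by side (reading the values once the load event has fired is one common way to ensure the timestamps are populated):

    window.addEventListener('load', function () {
        setTimeout(function () {
            var t = window.performance.timing;
            // GA SiteSpeed-style full page load time:
            var pageLoadTime = t.loadEventStart - t.navigationStart;
            // Closer to time-to-first-byte (includes DNS lookup and network latency):
            var ttfb = t.responseStart - t.navigationStart;
        }, 0);
    });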
Yes, good point. We preferred to keep it simple with one metric, and to make sure the metric couldn’t be skewed by dodgy JS or DOM manipulations that would have made it less useful. In the future I could imagine a setting to also track these other timings. Cheers
Using the performance.timing API in the browser to send the full wait/load time works better for me. It’s great to have the average page generation time, but it doesn’t surface the outliers or show which pages/browsers are causing them.
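For anyone wanting to try this, here’s one possible sketch. It assumes a Piwik-style tracker with the standard _paq event-tracking call; the “Performance” / “pageLoadTime” names are just made up for illustration:

    window.addEventListener('load', function () {
        // loadEventEnd is only populated after the load handlers finish,
        // hence the setTimeout.
        setTimeout(function () {
            var t = window.performance.timing;
            var fullLoadTime = t.loadEventEnd - t.navigationStart; // milliseconds
            _paq.push(['trackEvent', 'Performance', 'pageLoadTime',
                       document.location.pathname, fullLoadTime]);
        }, 0);
    });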
I think that could be obtained via the querying API: fetch all the generation times, then use something like the IQR (or a custom threshold) to identify the outliers. From there you trace each outlier value back to its visit and get the page and browser info.
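Something like this rough sketch, say, where generationTimes is assumed to be the array of values pulled from the API:

    function findOutliers(generationTimes) {
        // Approximate quartiles from the sorted values (fine for a sketch).
        var sorted = generationTimes.slice().sort(function (a, b) { return a - b; });
        var q1 = sorted[Math.floor(sorted.length * 0.25)];
        var q3 = sorted[Math.floor(sorted.length * 0.75)];
        var iqr = q3 - q1;
        // Classic Tukey fences: values beyond 1.5 * IQR count as outliers.
        var lower = q1 - 1.5 * iqr;
        var upper = q3 + 1.5 * iqr;
        return generationTimes.filter(function (v) { return v < lower || v > upper; });
    }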
I record five custom dimensions for every page load (the first four are specific times during the load; the fifth is the whole page load time) to understand the technical performance of our content. From this I use the API to extract a sample of 1,000 page views per hour, with the five dimension values for each page view. I then visualise that as a scatter plot with a box-plot overlay and frequency bins every 0.5 seconds from 0.5 to 20 seconds. This is all now at the push of a button in Excel, mapping the API URL to a web query I created in VB. It works really well, and I can pass segments if I want to focus on a particular content type or path.
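For anyone who doesn’t want to go via Excel, the binning step is essentially this (loadTimesSeconds is assumed to be the sampled page load times, in seconds):

    function binLoadTimes(loadTimesSeconds) {
        var bins = {};
        loadTimesSeconds.forEach(function (time) {
            if (time < 0.5 || time > 20) { return; } // same 0.5-20 s window as above
            var bin = (Math.floor(time / 0.5) * 0.5).toFixed(1); // e.g. "1.5"
            bins[bin] = (bins[bin] || 0) + 1;
        });
        return bins; // e.g. { "0.5": 12, "1.0": 48, ... }
    }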