Would calling the Performance API frequently cause a performance issue?

I want to measure the memory usage of my web SPA using performance.memory, in order to detect problems such as memory leaks during the webapp's lifetime.

For this reason I would need to call this API at some fixed interval: every 3 seconds, every 30 seconds, every minute, and so on. This raises a question: to detect any issue quickly and effectively I would want to make the interval as short as possible, but then I worry about performance. The measuring itself could affect the performance of the webapp if the measurement is an expensive task (hopefully that is not the case).

With this background above, I have the following questions:

  1. Is performance.memory a method that affects the browser's main thread's performance, so that I should care about how frequently I call it?

  2. Is there a right way or procedure to determine whether a (JavaScript) task is affecting the performance of a device? If question 1 is uncertain, then I would have to find another way to work out the proper interval for the memory measurement.


Answer

(V8 developer here.)

Calling performance.memory is pretty fast. You can easily verify that in a quick test yourself: just call it a thousand times in a loop and measure how long that takes.
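As a minimal sketch of such a microbenchmark (subject to the caveats in the edit below; the function name timeMemoryCalls is just illustrative), it could look like this:

function timeMemoryCalls() {
  if (!performance.memory) {
    console.error("unsupported browser");
    return;
  }
  let sum = 0;  // accumulate results so the calls can't be optimized away
  let before = performance.now();
  for (let i = 0; i < 1000; i++) {
    sum += performance.memory.usedJSHeapSize;
  }
  let after = performance.now();
  console.log(`1000 calls took ${after - before} ms (checksum: ${sum})`);
}
timeMemoryCalls();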

[EDIT: Thanks to @Kaiido for highlighting that this kind of microbenchmark can in general be very misleading; for example the first operation could be much more expensive; or the benchmark scenario could be so different from the real application’s scenario that the results don’t carry over. Do keep in mind that writing useful microbenchmarks always requires some understanding/inspection of what’s happening under the hood!

In this particular case, knowing a bit about how performance.memory works internally, the results of such a simple test are broadly accurate; however, as I explain below, they also don’t matter.
End of edit]

However, that observation is not enough to solve your problem. The reason why performance.memory is fast is also the reason why calling it frequently is pointless: it just returns a cached value, it doesn’t actually do any work to measure memory consumption. (If it did, then calling it would be super slow.) Here is a quick test to demonstrate both of these points:

function f() {
  if (!performance.memory) {
    console.error("unsupported browser");
    return;
  }
  let objects = [];
  for (let i = 0; i < 100; i++) {
    // We'd expect heap usage to increase by ~1MB per iteration.
    objects.push(new Array(256000));
    let before = performance.now();
    let memory = performance.memory.usedJSHeapSize;
    let after = performance.now();
    console.log(`Took ${after - before} ms, result: ${memory}`);
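    // Note: the reported value typically repeats across many iterations
    // and only changes occasionally, showing that it's a cached value.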
  }
}
f();

(You can also see that browsers clamp timer granularity for security reasons: it’s not a coincidence that the reported time is either 0ms or 0.1ms, never anything in between.)

(Second) however, that’s not as much of a problem as it may seem at first, because the premise “to detect any issue quickly and effectively I would have to make the interval as short as I could” is misguided: in garbage-collected languages, it is totally normal that memory usage goes up and down, possibly by hundreds of megabytes. That’s because finding objects that can be freed is an expensive exercise, so garbage collectors are carefully tuned for a good compromise: they should free up memory as quickly as possible without wasting CPU cycles on useless busywork. As part of that balance they adapt to the given workload, so there are no general numbers to quote here.

Checking the memory consumption of your app in the wild is a fine idea; you're not the first to do it, and performance.memory is the best tool for it (for now). Just keep in mind that what you're looking for is a long-term upwards trend, not short-term fluctuations. So measuring every 10 minutes or so is totally sufficient, and you'll still need lots of data points to see statistically useful results, because any single measurement could have happened right before or right after a garbage collection cycle.
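A minimal sketch of such long-interval sampling could look like the following (the "/memory-stats" endpoint and the sendBeacon transport are placeholder assumptions; use whatever telemetry pipeline you already have):

const SAMPLE_INTERVAL = 10 * 60 * 1000;  // every 10 minutes
setInterval(() => {
  if (!performance.memory) return;  // unsupported browser
  // One data point; many of these, across users and across time,
  // are needed to see a statistically useful long-term trend.
  const sample = {
    timestamp: Date.now(),
    usedJSHeapSize: performance.memory.usedJSHeapSize,
  };
  // "/memory-stats" is a hypothetical endpoint, not part of the answer.
  navigator.sendBeacon("/memory-stats", JSON.stringify(sample));
}, SAMPLE_INTERVAL);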

For example, if you determine that all of your users have higher memory consumption after 10 seconds than after 5 seconds, then that's just working as intended, and there's nothing to be done. Whereas if you notice that after 10 minutes readings are in the 100-300 MB range, after 20 minutes in the 200-400 MB range, and after an hour in the 500-1000 MB range, then it's time to go looking for that leak.

User contributions licensed under: CC BY-SA