
Global memoizing fetch() to prevent multiple of the same request

I have an SPA and for technical reasons I have different elements potentially firing the same fetch() call pretty much at the same time.[1]

Rather than going insane trying to orchestrate loading among multiple unrelated elements, I am thinking about creating a globalFetch() call where:

  • the init argument is serialised (along with the resource parameter) and used as hash
  • when a request is made, it’s queued and its hash is stored
  • when another request comes, and the hash matches (which means it’s in-flight), another request will NOT be made, and it will piggyback on the previous one
async function globalFetch(resource, init) {
  const sigObject = { ...init, resource }
  const sig = JSON.stringify(sigObject)

  // If it's already happening, return that one
  if (globalFetch.inFlight[sig]) {
    // NOTE: I know I don't yet have sig.timeStamp, this is just to show
    // the logic
    if (Date.now - sig.timeStamp < 1000 * 5) {
      return globalFetch.inFlight[sig]
    } else {
      delete globalFetch.inFlight[sig]
    }
  }

  const ret = globalFetch.inFlight[sig] = fetch(resource, init)
  return ret
}
globalFetch.inFlight = {}

It’s obviously missing a way to have the requests’ timestamps. Plus, it’s missing a way to delete old requests in batch. Other than that… is this a good way to go about it?

Or, is there something already out there, and I am reinventing the wheel…?

[1] If you are curious, I have several location-aware elements which will reload data independently based on the URL. It’s all nice and decoupled, except that it’s a little… too decoupled. Nested elements (with partially matching URLs) needing the same data potentially end up making the same request at the same time.


Answer

Your concept will generally work just fine.

Some things missing from your implementation:

  1. Failed responses should either not be cached in the first place or removed from the cache when you see the failure. And failure is not just rejected promises, but also any request that doesn’t return an appropriate success status (probably a 2xx status).

  2. JSON.stringify(sigObject) is not a canonical representation of the exact same data because properties might not be stringified in the same order depending upon how the sigObject was built. If you grabbed the properties, sorted them, inserted them in sorted order into a temporary object, and then stringified that, it would be more canonical.

  3. I’d recommend using a Map object instead of a regular object for globalFetch.inFlight because it’s more efficient when you’re adding/removing items regularly and will never have any name collision with property names or methods (though your hash would probably not conflict anyway, but it’s still a better practice to use a Map object for this kind of thing).

  4. Items should be aged from the cache (as you apparently know already). You can just use a setInterval() that runs every so often (it doesn’t have to run very often – perhaps every 30 minutes) that just iterates through all the items in the cache and removes any that are older than some amount of time. Since you’re already checking the time when you find an entry, you don’t have to clean the cache very often – the interval is just there to prevent non-stop build-up of stale entries that are never re-requested, and therefore never get replaced with newer data or evicted on lookup.

  5. If you have any case insensitive properties or values in the request parameters or the URL, the current design would see different case as different requests. Not sure if that matters in your situation or if it’s worth doing anything about.

  6. When you write the real code, you need Date.now(), not Date.now.
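To make point #2 concrete, here’s a small illustration of why plain JSON.stringify isn’t canonical, plus a minimal sorted-key helper (makeKey is a hypothetical name, not part of any standard API; the makeHash function in the sample implementation takes a similar approach):

```javascript
// Same data, different property insertion order
const a = { method: "POST", body: "x=1" };
const b = { body: "x=1", method: "POST" };

// Plain JSON.stringify reflects insertion order, so the strings differ
console.log(JSON.stringify(a) === JSON.stringify(b)); // false

// A minimal canonical helper (sorts top-level keys only)
function makeKey(obj) {
  const sorted = {};
  for (const k of Object.keys(obj).sort()) {
    sorted[k] = obj[k];
  }
  return JSON.stringify(sorted);
}

console.log(makeKey(a) === makeKey(b)); // true
```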

Here’s a sample implementation that implements all of the above (except for case sensitivity because that’s data-specific):

function makeHash(url, obj) {
    // put properties in sorted order to make the hash canonical
    // the canonical sort is top level only, 
    //    does not sort properties in nested objects
    let items = Object.entries(obj).sort((a, b) => b[0].localeCompare(a[0]));
    // add URL on the front
    items.unshift(url);
    return JSON.stringify(items);
}

async function globalFetch(resource, init = {}) {
    const key = makeHash(resource, init);

    const now = Date.now();
    const expirationDuration = 5 * 1000;
    const newExpiration = now + expirationDuration;

    const cachedItem = globalFetch.cache.get(key);
    // if we found an item and it expires in the future (not expired yet)
    if (cachedItem && cachedItem.expires >= now) {
        // update expiration time
        cachedItem.expires = newExpiration;
        return cachedItem.promise;
    }

    // couldn't use a value from the cache
    // make the request
    let p = fetch(resource, init);
    p.then(response => {
        if (!response.ok) {
            // if response not OK, remove it from the cache
            globalFetch.cache.delete(key);
        }
    }, err => {
        // if promise rejected, remove it from the cache
        globalFetch.cache.delete(key);
    });
    // save this promise (will replace any expired value already in the cache)
    globalFetch.cache.set(key, { promise: p, expires: newExpiration });
    return p;
}
// initialize cache
globalFetch.cache = new Map();

// clean up interval timer to remove expired entries
// does not need to run that often because .expires is already checked above
// this just cleans out old expired entries to avoid memory increasing
// indefinitely
globalFetch.interval = setInterval(() => {
    const now = Date.now()
    for (const [key, value] of globalFetch.cache) {
        if (value.expires < now) {
            globalFetch.cache.delete(key);
        }
    }
}, 10 * 60 * 1000); // run every 10 minutes
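To see the deduplication behavior this cache provides, here’s a stripped-down, self-contained sketch with the network stubbed out so it runs anywhere (fakeFetch and dedupedFetch are hypothetical stand-ins, not part of the implementation above):

```javascript
// Stubbed "network" call so this runs without a server
// (fakeFetch stands in for fetch)
let underlyingCalls = 0;
function fakeFetch(url) {
  underlyingCalls++;
  return new Promise(resolve => {
    setTimeout(() => resolve(`data for ${url}`), 10);
  });
}

// Minimal in-flight deduplication, same core idea as globalFetch
const inFlight = new Map();
function dedupedFetch(url) {
  if (inFlight.has(url)) return inFlight.get(url);
  const p = fakeFetch(url).finally(() => inFlight.delete(url));
  inFlight.set(url, p);
  return p;
}

// Two "elements" asking for the same URL at the same time share
// one underlying request
dedupedFetch("/api/data");
dedupedFetch("/api/data");
console.log(underlyingCalls); // 1
```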

Implementation Notes:

  1. Depending upon your situation, you may want to customize the cleanup interval time. This is set to run a cleanup pass every 10 minutes just to keep it from growing unbounded. If you were making millions of requests, you’d probably run that interval more often or cap the number of items in the cache. If you aren’t making that many requests, this can be less frequent. It is just to clean up old expired entries sometime so they don’t accumulate forever if never re-requested. The check for the expiration time in the main function already keeps it from using expired entries – that’s why this doesn’t have to run very often.

  2. This looks at response.ok from the fetch() result and promise rejection to determine a failed request. There could be some situations where you want to customize what is and isn’t a failed request with some different criteria than that. For example, it might be useful to cache a 404 to prevent repeating it within the expiration time if you don’t think the 404 is likely to be transitory. This really depends upon your specific use of the responses and behavior of the specific host you are targeting. The reason to not cache failed results is for cases where the failure is transitory (either a temporary hiccup or a timing issue and you want a new, clean request to go if the previous one failed).

  3. There is a design question for whether you should or should not update the .expires property in the cache when you get a cache hit. If you do update it (like this code does), then an item could stay in the cache a long time if it keeps getting requested over and over before it expires. But, if you really want it to only be cached for a maximum amount of time and then force a new request, you can just remove the update of the expiration time and let the original result expire. I can see arguments for either design depending upon the specifics of your situation. If this is largely invariant data, then you can just let it stay in the cache as long as it keeps getting requested. If it is data that can change regularly, then you may want it to be cached no more than the expiration time, even if it’s being requested regularly.
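As a sketch of note #2, the failure check could be pulled into a small predicate. Here, shouldEvict is a hypothetical helper, and treating 404s as cacheable is just an assumed policy for illustration:

```javascript
// Hypothetical replacement for the `!response.ok` check in globalFetch.
// Assumed policy for illustration: cache 404s because we don't expect
// them to be transitory; evict on any other non-2xx status.
function shouldEvict(response) {
  if (response.ok) return false;             // 2xx: keep the cached promise
  if (response.status === 404) return false; // 404: keep it cached too
  return true;                               // other failures: evict
}

// In globalFetch, the .then handler would then become:
//   if (shouldEvict(response)) {
//       globalFetch.cache.delete(key);
//   }
```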

User contributions licensed under: CC BY-SA