Fetch cached data only once

February 16th 2024 .NET

The MemoryCache class is a common choice for in-memory caching in ASP.NET Core and .NET applications in general. Although it works well in many scenarios, it's worth knowing its potential downsides. By design, concurrent cache misses for the same key can each invoke the factory method, so the same value may be fetched multiple times.

To make cache usage convenient in your code, you could use a method with a signature like this:

Task<T?> GetOrAddAsync<T>(string key, Func<Task<T>> factory);

If the cache already contains a value with the given key, it simply returns it. If not, it fetches (or creates) the value by invoking factory.
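For illustration, a call site could look like this (productRepository and id are hypothetical placeholders, not part of the method itself):

// Returns the cached product, or loads and caches it on a miss.
var product = await cache.GetOrAddAsync(
    $"product:{id}",
    () => productRepository.GetByIdAsync(id));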

When using MemoryCache, the following naive implementation might work well for you (with expiration adapted to your needs):

public async Task<T?> GetOrAddAsync<T>(string key, Func<Task<T>> factory)
{
    return await memoryCache.GetOrCreateAsync(
        key,
        async entry =>
        {
            // Runs on a cache miss; the result is cached for 5 minutes.
            // Concurrent misses for the same key each run this delegate.
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return await factory();
        }
    );
}

However, with this implementation, the value for the same key will be fetched multiple times if it is requested a second time before the first request completes. The following diagram depicts this behavior:

The same value is fetched multiple times

This has two downsides:

  • Fetching the value twice unnecessarily increases the workload of the system providing the value.
  • The second request takes longer than it should: the first request puts the value in the cache before the second request's own fetch completes, so the second request could have returned the cached value sooner.
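To make the duplicate fetch easy to observe, here's a small sketch against the naive implementation above (cache stands for an instance exposing that GetOrAddAsync method):

var fetchCount = 0;

Func<Task<string>> factory = async () =>
{
    // Count every invocation and simulate a slow upstream call.
    Interlocked.Increment(ref fetchCount);
    await Task.Delay(100);
    return "value";
};

// Two concurrent requests for the same key both miss the cache.
await Task.WhenAll(
    cache.GetOrAddAsync("key", factory),
    cache.GetOrAddAsync("key", factory));

Console.WriteLine(fetchCount); // prints 2 with the naive implementation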

The following diagram shows the desired behavior:

The same value is fetched only once

The second request should simply wait until the first request fetches the value and then read it from the cache. This can be achieved by introducing a SemaphoreSlim instance to only allow one fetch at a time:

private readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);

public async Task<T?> GetOrAddAsync<T>(string key, Func<Task<T>> factory)
{
    if (memoryCache.TryGetValue<T>(key, out var value))
    {
        return value;
    }

    // Acquire before the try block: if WaitAsync threw inside it, the
    // finally block would release a semaphore that was never acquired.
    await semaphore.WaitAsync();

    try
    {
        // Double-check: another request may have cached the value
        // while we were waiting for the semaphore.
        if (memoryCache.TryGetValue(key, out value))
        {
            return value;
        }

        value = await factory();
        memoryCache.Set(key, value, TimeSpan.FromMinutes(5));
        return value;
    }
    finally
    {
        semaphore.Release();
    }
}

This code only partially solves the problem, though. If a second request needs to fetch the value for one key while a request for a different key is already in progress, it will still wait for that first request to complete. But at that point, the value for its own key still won't be in the cache, so it will have to fetch it anyway; the wait was for nothing. The following diagram depicts this behavior:

The second fetch waits for the first one to complete

Ideally, the second request would only wait if it needed the value for the same key. Otherwise, it should start fetching the data immediately, in parallel to the first request:

The two values are fetched in parallel

Of course, the wrapper around MemoryCache can be further improved to support this scenario. Instead of a single SemaphoreSlim instance for locking, we would need one per key. The lookup of these semaphores must be thread-safe, and they also need to be removed once they are no longer in use to avoid a memory leak.
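Here is a minimal sketch of that approach (it assumes the same injected memoryCache field as before and uses ConcurrentDictionary from System.Collections.Concurrent; the semaphore cleanup is deliberately naive):

private readonly ConcurrentDictionary<string, SemaphoreSlim> keyLocks = new();

public async Task<T?> GetOrAddAsync<T>(string key, Func<Task<T>> factory)
{
    if (memoryCache.TryGetValue<T>(key, out var value))
    {
        return value;
    }

    // One semaphore per key, so fetches for different keys run in parallel.
    var keyLock = keyLocks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));

    await keyLock.WaitAsync();
    try
    {
        if (memoryCache.TryGetValue(key, out value))
        {
            return value;
        }

        value = await factory();
        memoryCache.Set(key, value, TimeSpan.FromMinutes(5));
        return value;
    }
    finally
    {
        keyLock.Release();

        // Naive cleanup to bound the dictionary's growth. There is a benign
        // race here: a concurrent caller can re-create the semaphore and
        // trigger a redundant fetch, which is wasteful but not incorrect.
        keyLocks.TryRemove(key, out _);
    }
}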

Instead of implementing such code ourselves, we can use a library that already does this for us, such as LazyCache. With it, we can achieve the desired behavior with a trivial wrapper implementation:

public async Task<T?> GetOrAddAsync<T>(string key, Func<Task<T>> factory)
{
    return await lazyCache.GetOrAddAsync(key, factory);
}
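Here, lazyCache is an IAppCache, LazyCache's cache abstraction. One way to obtain an instance (in ASP.NET Core, the LazyCache.AspNetCore package also offers a services.AddLazyCache() registration for dependency injection):

// CachingService is LazyCache's default IAppCache implementation.
IAppCache lazyCache = new CachingService();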

You can try out all three caching implementations yourself. I pushed the full source code to my GitHub repository. I also included tests which clearly show how many times the fetch method is called in each case and how long each scenario takes to complete.

MemoryCache is a nice, simple in-memory cache implementation, and it can work really well for you. Just make sure you test it for your scenarios and that its performance meets your needs. In this post, I've shown how it allows the same value to be fetched multiple times when it's not yet in the cache. Depending on the duration and processing cost of that fetch, this may or may not be acceptable for you.
