We are currently experiencing a lot of cache writes. The Vercel documentation states that the cache has a fixed size and that old entries are overwritten once it is full.
That would explain the high number of writes: the cache fills up again and again, and the costs keep increasing.
Can you confirm this explanation?
Accordingly, it would be good to know the actual size of the cache so we can better control this behavior.
Or, even better, let users configure the cache size (as a paid option) and tune it to the needs of each project.
The Vercel Data Cache is a black box right now. We have very little insight, no statistics to speak of … just the growing bill as an indicator that something is going wrong.
Thanks for reaching out, and welcome to the community!
I’ll cross-post a message here from a related thread that may be helpful. In the meantime, I’ve also flagged this internally and will let you know once I get a response.
@tiacop the Data Cache uses a least recently used (LRU) algorithm to determine what to evict.
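To make the eviction behavior concrete, here is a minimal LRU sketch. This is purely illustrative (Vercel has not published the Data Cache implementation); it only shows why a cache smaller than the working set produces a steady stream of writes, since every eviction turns the next request for that entry into a miss:

```typescript
// Hypothetical LRU cache sketch. A Map preserves insertion order,
// so the first key is always the least recently used entry.
class LruCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value === undefined) return undefined;
    // Re-insert to mark the entry as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry; the next request
      // for it will be a miss, i.e. yet another cache write.
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }

  has(key: K): boolean {
    return this.map.has(key);
  }
}

const cache = new LruCache<string, string>(2);
cache.set("/page/a", "html-a");
cache.set("/page/b", "html-b");
cache.get("/page/a"); // "/page/a" is now most recently used
cache.set("/page/c", "html-c"); // evicts "/page/b"
console.log(cache.has("/page/a"), cache.has("/page/b"), cache.has("/page/c"));
// → true false true
```

With 10k+ pages and a few fetches per page, a fixed-size LRU like this would churn constantly once the entry count exceeds capacity.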
VDC reads and writes are visible on the Usage tab of the Vercel dashboard.
If you are seeing a high number of writes relative to reads, it can be due to a short time-based revalidation period, or to a lot of long-tail requests that are only ever read a single time. For these high-cardinality requests, I recommend disabling the Data Cache.
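In a Next.js App Router project, opting an individual fetch out of the Data Cache is done with the `cache: "no-store"` option. A minimal sketch (the endpoint URL is a placeholder):

```typescript
// Opt this fetch out of the Data Cache entirely, so long-tail
// requests that would only ever be read once are not written.
const res = await fetch("https://example.com/api/data", {
  cache: "no-store",
});
```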
Also, note that Vercel is not currently charging for VDC reads/writes.
We use on-demand revalidation with cache tags. The revalidation period is 1 year. But we have 10k+ pages with 2 or 3 GraphQL fetches on each page, so I guess the VDC is simply too small to cache everything.
We will remove generateStaticParams from most pages (right now we prebuild around 1,000 pages, and we noticed spikes in cache writes during builds) and add "no-store" to fetches for older, low-traffic content that is likely long-tail.
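For reference, the tag-based on-demand revalidation described above combines a tagged fetch with a `revalidateTag` call from a route handler. A sketch assuming a hypothetical GraphQL endpoint, query, and tag name:

```typescript
// Tagged fetch for a popular page: cached indefinitely (~1 year)
// until the tag is purged on demand.
const query = "{ page { title } }"; // placeholder query
const res = await fetch("https://example.com/graphql", {
  method: "POST",
  body: JSON.stringify({ query }),
  next: { revalidate: 31536000, tags: ["page-123"] },
});
```

And the corresponding webhook that the CMS could call after a content change:

```typescript
// app/api/revalidate/route.ts -- hypothetical webhook endpoint;
// purges every cached fetch carrying the given tag.
import { revalidateTag } from "next/cache";
import { NextRequest, NextResponse } from "next/server";

export async function POST(request: NextRequest) {
  const { tag } = await request.json();
  revalidateTag(tag);
  return NextResponse.json({ revalidated: true, tag });
}
```

Keeping tags only on frequently read pages, while long-tail pages use `cache: "no-store"`, should limit the write volume to content that actually benefits from caching.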