Postgres database consistently maxing out compute time out of nowhere

My Postgres database associated with my project is using way too much compute time. There was no spike in users and no changes to the website since the initial deployment on 9/02/2024, but starting 9/11/2024 my compute time began spiking to a consistent ~6 hours a day: each wall-clock hour now consumes 0.25 hours of compute time (0.25 × 24 ≈ 6 compute-hours/day), which I believe is the maximum for the Hobby plan's Postgres database, up from ~1 hour a day previously. I checked my logs and there are no calls to the API that accesses the database, so I have no idea what's responsible for the spike.

I completely deleted the production deployment that was associated with the database and fully disconnected my project from it. I also reset the credentials so nothing can connect to it. But my compute time keeps going up. I'm about to max out, and I'm not sure what to do. HELP

Hi, @zachkfan! Welcome to the Vercel Community.

Thanks for your patience :smile:

A few ideas of what could be causing this:

  • Background processes: Make sure there are no background processes or maintenance tasks running on your database that could be consuming compute time, such as vacuuming, indexing, or backups (see the query sketch after this list).
  • Idle time configuration: Verify the idle time settings for your database. Reducing the idle time can sometimes minimize unnecessary compute usage. That said, since you already reset the credentials and disconnected the project, this may not be the primary issue.
  • Browsing the database: Have you been browsing the database through the Vercel dashboard? This incurs usage because the instance needs to be running in order to render its contents.
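
If it helps to check the first two points, here is a minimal sketch that lists every backend the server currently sees, including autovacuum workers and lingering idle connections. It uses the node-postgres (`pg`) client; the `DATABASE_URL` environment variable and the file name are assumptions, so adjust them for your setup.

```ts
// check-activity.ts — a diagnostic sketch, not a definitive implementation.
// Assumes DATABASE_URL points at the Postgres database in question.
import { Client } from "pg";

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // pg_stat_activity has one row per backend: client connections,
  // autovacuum workers, background writers, etc. Anything listed here
  // while you expect the database to be idle is a candidate culprit.
  const { rows } = await client.query(`
    SELECT pid,
           usename,
           application_name,
           backend_type,
           state,
           now() - backend_start AS connected_for,
           left(query, 80)       AS last_query
    FROM pg_stat_activity
    ORDER BY backend_start
  `);
  console.table(rows);

  await client.end();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

If the output shows autovacuum workers running constantly, or connections in the `idle` state that you don't recognize, that points at the first two items above. Keep in mind that connecting to run this check will itself wake the instance briefly, so it's best run once rather than on a schedule.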

Are you still seeing a similar pattern now?
