Question on database connection concurrency handling for Fluid Compute

I was wondering how to handle Prisma connections with Supabase, which uses Supavisor for its connection pooling, when running on Fluid Compute.

My current apps have specific high-traffic times, and I handle connections by appending ?pgbouncer=true&connection_limit=1&pool_timeout=30 to the connection string, which essentially makes every function invocation use only one connection. This works well because I let Supavisor manage the pooling with a pool size of around 30.
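
For context, my connection string looks roughly like this (the host/port are the Supavisor transaction-mode pooler defaults as I understand them; project ref, region and password are placeholders):

# placeholder values, not my real credentials
DATABASE_URL="postgres://postgres.<project-ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres?pgbouncer=true&connection_limit=1&pool_timeout=30"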

With Fluid Compute, my understanding is that function instances stay warm for a while and are reused across invocations.

My code currently only stores the Prisma client in a global variable when running the Next.js dev server; in production it creates a new Prisma instance for each function invocation.

With Fluid Compute, does that change how I need to manage connections? Should I store Prisma in a global, just like I do on my development server, so that multiple requests hitting the same warm function reuse one client and don't consume too many connections from Supavisor's connection pool? (I've sketched below, after my current code, what I mean by this.)

Moreover, if I do put Prisma in a global variable, wouldn't long-running queries make other requests queue up while waiting for the connection to be released?

Here is the relevant part of my current prisma.ts file:

import { PrismaClient } from '@prisma/client';
import { fieldEncryptionExtension } from 'prisma-field-encryption';

declare global {
  // eslint-disable-next-line no-var
  var prisma: PrismaClient | undefined;
}

// Build a PrismaClient pointed at the Supavisor connection string and
// wrapped with the field-encryption extension.
const createPrismaClientWithEncryption = () => {
  return new PrismaClient({
    datasources: {
      db: {
        url: process.env.DATABASE_URL,
      },
    },
  }).$extends(fieldEncryptionExtension()) as PrismaClient;
};

// Reuse the cached client if one exists, otherwise create a new one.
export const prisma: PrismaClient = global.prisma || createPrismaClientWithEncryption();

// isProd() is a local helper; I only cache the client in global for
// non-production environments (i.e. outside serverless).
if (!isProd()) {
  global.prisma = prisma;
}