I’m interested in learning about the preferred tech stacks that developers here use for building and deploying projects on Vercel. Personally, I often work with Next.js, TailwindCSS, and Prisma for full-stack applications. However, I’m eager to hear what tools and frameworks you rely on, whether for front-end, back-end, or even any specific services you consider essential when working on Vercel.
What does your ideal stack look like, and what drives your choices?
I look forward to hearing your insights and best practices. Thank you in advance for sharing!
I truly enjoy the Vercel experience overall, especially the ease of deployments during rapid app iterations. At its best, it’s terrific. Full-stack Next.js apps with the magic of server components are amazing for swift development cycles.
However, I increasingly run into AI-related use cases where a Python backend is preferable, if not outright necessary. Personally, I find FastAPI slightly more ergonomic than Flask; I’m not sure if others feel the same way.
With this rise in Python/AI use cases, I am really hitting a wall with Vercel, because the serverless architecture often becomes problematic for these projects, sometimes at the last minute. Many AI applications involve file handling and processing that either requires a containerized environment or runs up against serverless constraints like the 250 MB function size limit. This is worse than just annoying; it makes me hesitant to start new projects on Vercel. It leaves me contemplating whether to split a project between Vercel and another IaaS provider for the Python environment, or to migrate everything elsewhere (but then to where? AWS?).
A good example of one of these serverless “walls” popped up on this forum just this week: Link to forum post. While this doesn’t always happen, encountering such limitations in the middle of a project is a significant productivity problem and makes me nervous about committing the next project to this environment.
I guess I’m curious how others are handling these challenges. Despite loving the rapid deployment experience Vercel offers, the boundaries that come up when they do, and the lack of a smooth off-ramp (that I’m aware of) to server-full partners, have me frustrated. But maybe I’m missing some good, easy answers for the moment I realize I actually need my app to store user uploads and run NumPy over the files (or something like that?).
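To make the use case concrete, here’s a minimal sketch of the kind of upload-plus-NumPy handler I mean. Everything here is illustrative: the function names are made up, and the memory budget constant is a stand-in for whatever ceiling your platform imposes, not an actual Vercel figure.

```python
# Illustrative sketch only: the kind of processing a FastAPI route
# would run after receiving a user upload. The budget constant below
# is a hypothetical illustration, not a real platform limit.
import numpy as np

ILLUSTRATIVE_BUDGET = 250 * 1024 * 1024  # bytes, hypothetical ceiling

def summarize_upload(data: bytes) -> dict:
    """Treat the raw upload bytes as 8-bit samples and summarize them."""
    arr = np.frombuffer(data, dtype=np.uint8)  # zero-copy view of the bytes
    return {
        "size_bytes": arr.nbytes,
        "mean": float(arr.mean()),
        "fits_in_budget": arr.nbytes < ILLUSTRATIVE_BUDGET,
    }
```

Trivial on a server you control, but the whole file has to sit in the function’s memory, which is exactly where a serverless environment starts pushing back.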
There are definitely people deploying FastAPI projects with Vercel! Artha.ai and the Next.js FastAPI Starter are two recent examples shared in other posts.
But you bring up a good point. Not every app is necessarily going to make sense in a serverless environment. I have seen people modify apps designed for traditional servers (e.g., Express, Python) to work with Streaming Functions. I’ve also seen people deploy frontends with Vercel while using a separate service or backend hosted elsewhere.
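One common shape for that split (a sketch only; the origin URL and routes here are placeholders, not taken from any starter) is a Python backend hosted outside Vercel that enables CORS for the Vercel-hosted frontend’s origin, which then calls it directly:

```python
# Hypothetical sketch: a FastAPI backend hosted outside Vercel,
# configured so a Vercel-hosted frontend can call it cross-origin.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    # Placeholder origin for your deployed frontend.
    allow_origins=["https://my-app.vercel.app"],
    allow_methods=["GET", "POST"],
    allow_headers=["*"],
)

@app.get("/api/health")
def health() -> dict:
    # A trivial route the frontend can ping to confirm the split works.
    return {"ok": True}
```

The frontend keeps Vercel’s deployment experience, and the backend can run in a container with whatever memory and dependencies it needs.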
I am also curious to know how others handle these AI app challenges and what suggestions or feedback everyone is willing to share here.
Thanks for the fast reply @amyegan, and I do indeed like the nextjs-fastapi example’s setup (although it might be a little more fun if it were extended to actually have the app talk to the API).
However, the breeziness of that deployment almost makes it more painful when you reach the point of realizing there needs to be either a code split or some kind of Turborepo magic to support CI to multiple environments. I think that no matter how cool functions get, that possibility will always exist with serverless. I’d feel much more confident if I had an easy approach to breaking out the backend to a reliable partner, ideally with examples to back that approach up.
Maybe some enterprising user out there already has built the templates, at least?