Support sharing Vercel Feature Flag overrides with all users in production

I recently wanted to use feature flags in production for maintenance or marketing purposes.
It’s convenient to be able to hide a new feature without deploying.

According to the documentation, overrides only take effect when users explicitly enable them by following a recommendation or a shared URL with query parameters. That doesn’t look suitable for our purpose.

We could do this by combining flags with an external store like Vercel Edge Config, but having all of our users read from it is a little expensive for us: we ran an experiment with it in advance, and a few million requests hit our service per day. The alternatives would likely introduce a latency bottleneck.

I’d like a feature where Vercel forces an x-vercel-flag header to be appended to users’ requests, without code modification, when an administrator requires it.

You outlined your options pretty well here.

Option #1: Flags Explorer
The Flags Explorer in the Vercel Toolbar lets you share overrides with colleagues, but those colleagues need to explicitly accept the overrides.

Option #2: Edge Config
Edge Config was built to store feature flag configuration, so this is a prime use case for it. It sounds like you want to store information about which user or group of users should be able to see certain features in Edge Config.
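To make that concrete, here is a minimal sketch of what such decision logic could look like. The value shape (the `enabled` and `allowlist` keys) and the flag names are illustrative assumptions, not a Vercel API:

```javascript
// Hypothetical shape of flag entries as stored in Edge Config:
// { betaCheckout: { enabled: true, allowlist: ['user-1'] }, maintenance: { enabled: false } }

// Decide a flag's state for a given user from an Edge Config-shaped object.
function decideFlag(config, flagKey, userId) {
  const entry = config[flagKey];
  if (!entry || entry.enabled !== true) return false; // missing or switched off
  if (Array.isArray(entry.allowlist)) {
    return entry.allowlist.includes(userId); // per-user / per-group targeting
  }
  return true; // enabled for everyone
}

const config = {
  betaCheckout: { enabled: true, allowlist: ['user-1', 'user-2'] },
  maintenance: { enabled: false },
};
console.log(decideFlag(config, 'betaCheckout', 'user-1')); // true
console.log(decideFlag(config, 'betaCheckout', 'user-9')); // false
console.log(decideFlag(config, 'maintenance', 'user-1'));  // false
```

Because the flag set is small and changes rarely, a single Edge Config read per request can drive all such decisions.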

If I understand correctly, the feature you are asking for is to allow sharing links with overrides which apply automatically?

I’m the tech lead of the team that built the Flags Explorer and Edge Config. Are you okay with me sending a meeting invite to your email address? I’d love to chat for 30 minutes to find out more about your use case.


Thank you for the quick response and suggestion, but I’m not good at speaking English.
So it’s alright to take some time, and I’ll share my email address, but I’d like to communicate in text if you don’t mind.

To summarize,

  1. I want to use feature flags not only for our colleagues but also for our customers, and to force their flags on or off under our control. I often ship beta features, and sometimes bugs in them affect our service health. So I want flags to act like a circuit breaker that stops a feature as quickly as possible without a re-deploy.
  2. I want feature flags to switch our service’s state between available and under maintenance. Most of our service APIs are hosted on Vercel using Next.js, so we can read flags in Edge Middleware. After reading the flags, I want to redirect all requests to our maintenance page and return status code 503 from our API handlers.
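The maintenance-mode routing in point 2 could be sketched as a pure decision function that the middleware then acts on. The paths and the return shape here are assumptions for illustration, not the actual implementation:

```javascript
// Decide how edge middleware should route a request while a maintenance
// flag is set. Paths ('/api/', '/maintenance') are hypothetical.
function routeDuringMaintenance(pathname, maintenanceOn) {
  if (!maintenanceOn) {
    return { action: 'next' }; // normal operation: pass the request through
  }
  if (pathname.startsWith('/api/')) {
    return { action: 'respond', status: 503 }; // API handlers answer 503
  }
  if (pathname === '/maintenance') {
    return { action: 'next' }; // avoid a rewrite loop on the page itself
  }
  return { action: 'rewrite', destination: '/maintenance' };
}

console.log(routeDuringMaintenance('/api/orders', true));  // { action: 'respond', status: 503 }
console.log(routeDuringMaintenance('/shop', true));        // { action: 'rewrite', destination: '/maintenance' }
console.log(routeDuringMaintenance('/shop', false));       // { action: 'next' }
```

Keeping the decision pure makes it easy to unit-test without spinning up the edge runtime.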

It would be great if this could work without any modification of the URL or query parameters, if possible.
These goals seem difficult to achieve with only session cookies and the current feature flag overrides system. So I first used Edge Config for this purpose, and it worked, but our service receives many requests, which causes many Edge Config reads, even though our Edge Config holds only two flags and they seldom change. Without optimization, the reads cost a few tens of dollars per day; even with optimizations (e.g. keeping the flags in a session cookie and caching the value object for the duration of a request), it’s still a little expensive. I realized that Edge Config is not ideal for our purpose, especially when the values seldom change. An alternative low-priced external store is possible, but not optimal for latency.

So I’d be glad to achieve this with feature flag overrides or some other feature of the Edge Runtime on the Vercel platform.


Thanks for explaining in more detail. Edge Config sounds like the right choice in this case. It provides the lowest latency. It is intended exactly for cases like this where the value does not change often, but where the writes need to propagate quickly. Edge Config scales infinitely so it will be able to handle the high volume of reads easily. At $3 per million reads this should get you quite far.

Overrides cannot be used for this, and are not intended for this use case.

PS: Which languages do you speak? If German is one of them we can have a session in German. Otherwise I might be able to find someone who speaks your native language :slight_smile:


Ideally, I think so too, and I’m glad to hear that my understanding is right. But I ran into a pitfall: it’s worth noting that the Edge Config read count needs careful optimization. We initially used Edge Config with five feature flags, and we resolved them in parallel to optimize latency. Given that,


const flag1 = flag({
  ...
  decide: async () => {
    // Prefer the value cached in the session cookie; fall back to Edge Config.
    const cached = cookies().get('x-maintenance')?.value
    return cached !== undefined ? cached === 'true' : await edgeConfig.get('maintenance')
  }
})

<FlagProvider value={await Promise.all([flag1(), flag2(), ...])}>...</FlagProvider>

This reads Edge Config five times during one request. Our decide functions read the session cookie, cache the flags’ results in it, and skip reads for machine traffic based on the user agent. Even after optimizing (e.g. reading one object that holds all flags once per request via a singleton instance, and caching in the session cookie), we still see a few million requests per day, and in total the cost approaches several hundred, close to a thousand, dollars per month. That is hard to ignore and higher than we expected.

Of course, our service is large enough that the cost would be justified if we tuned the I/O well enough, and Edge Config would be good enough for us if the cost were acceptable.
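The per-request coalescing described above can be sketched by sharing one read promise across every flag’s decide function. `createFlagStore` and `readAll` are hypothetical names, not a Vercel API:

```javascript
// Coalesce all flag lookups in a request into a single backing read by
// sharing one promise per store instance (one instance per request).
function createFlagStore(readAll) {
  let pending = null; // the one shared in-flight (or settled) read
  return {
    get(key) {
      if (pending === null) pending = readAll(); // first caller triggers the read
      return pending.then((all) => all[key]);   // later callers reuse it
    },
  };
}

// Demonstration with a fake backing store that counts reads.
let reads = 0;
const store = createFlagStore(async () => {
  reads += 1;
  return { maintenance: false, betaCheckout: true };
});
Promise.all([store.get('maintenance'), store.get('betaCheckout')]).then((values) => {
  console.log(values, reads); // both flags resolved from a single read
});
```

For rough scale (assumed figures): at $3 per million Edge Config reads, 3 million requests a day with 5 reads each is about $45/day, while one coalesced read per request is about $9/day; the real numbers depend on actual traffic.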

My native language is Japanese. I understand it may be hard to find someone who speaks it, so I don’t mind if you can’t. Thank you for the kind thought.