We have a requirement to add dynamic unique IDs to the URL. For example, if a user sets up an http-in node with the URL /bulk-export, they would normally hit BASE_URL + "/node-red-api/bulk-export", but we want to add an extra segment, e.g. BASE_URL + "/integration-id/node-red-api/bulk-export".
How to achieve this?
You can use Express endpoint patterns like /my-endpoint/:param1
or even /my-endpoint/*
Pretty sure this is detailed in the built-in help.
Example: Handle url parameters in an HTTP endpoint : Node-RED
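To illustrate how the parameter reaches the flow: with the http-in node's URL set to something like /my-endpoint/:param1, a Function node wired after it can read the value from msg.req.params. A minimal sketch (names and values are just examples):

// Function node after an http-in node configured with URL /my-endpoint/:param1.
// Express exposes the matched path parameters on msg.req.params.
const param1 = msg.req.params.param1;   // e.g. "bulk-export"
msg.payload = { param1 };
return msg;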
Further to Steve's suggestion, you can also make a path segment optional using ?
e.g. /:param1/:param2?
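With an optional parameter, a downstream Function node just needs to allow for the segment being absent. A minimal sketch, assuming the http-in URL is /:param1/:param2?:

// Function node wired after an http-in node with URL /:param1/:param2?
// param2 is optional, so it is undefined when the second segment is omitted.
const { param1, param2 } = msg.req.params;
msg.payload = {
    param1,
    param2: param2 ?? 'not supplied'
};
return msg;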
Hi, I’m from the same team. Thanks for the suggestion, using params makes sense. However, we want a dynamic base URL that is based on our context.
For example, one can enter /main in the URL field and, at deployment time, we want the http-in node to append the integrationId from the context to the base URL.
In our current approach, we’re patching the http-in node at startup to take the context from functionGlobalContext and use it in httpNode.get. But this feels like an unreliable approach.
Also, our Node-RED is running embedded in an Express app, and all routes pass through a middleware which provides the said context.
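Roughly, that middleware does something along these lines (a simplified sketch, not our actual code; the property name integrationContext is just illustrative):

// Simplified sketch of the middleware described above (not the actual implementation).
// It derives the tenant context from the URL and attaches it to the request.
const express = require('express');
const app = express();

app.use((req, res, next) => {
    // Expect URLs of the form /{integrationId}/{environment}/...
    const [, integrationId, environment] = req.path.split('/');
    if (integrationId && environment) {
        req.integrationContext = { integrationId, environment };
    }
    next();
});

// ...Node-RED is then initialised and mounted on this same app.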
You can use a context variable, compare it in a switch node, and return a 404 status from the http-response node:
                           +- base == flow.my_base → do-stuff → http-response(200)
http-in /:base/* → switch -|
                           +- otherwise → http-response(404)
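If you prefer a single Function node in place of the switch, the same check could look roughly like this (a sketch; my_base is whatever flow context variable you store the expected base in):

// Function node between http-in (/:base/*) and a single http-response node.
// The flow context variable "my_base" holds the base segment expected by this flow.
const expectedBase = flow.get('my_base');

if (msg.req.params.base === expectedBase) {
    msg.statusCode = 200;
    msg.payload = { ok: true };
} else {
    msg.statusCode = 404;
    msg.payload = { error: 'not found' };
}
return msg;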
Thanks for the reply. Here are some more details:
We’ve built a multi-tenant Node-RED setup where:
Each integration can have multiple environments (e.g., sandbox, production)
Each integration+environment combo has its own Node-RED flow
Flows and configs are stored in S3 under:
s3://bucket/flows/{integrationId}/{environment}/{flowName}.json
s3://bucket/flows/{integrationId}/{environment}/_config/settings.json
s3://bucket/flows/{integrationId}/{environment}/_config/credentials.json
s3://bucket/flows/{integrationId}/{environment}/_config/sessions.json
We use a custom storage module that overrides getFlows, saveFlows, getCredentials, etc. (a simplified skeleton is sketched below)
Flow ID is structured as: {integrationId}{environment}{flowName}
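The storage module itself follows the standard Node-RED storage API (init, getFlows, saveFlows, getCredentials, ...). For illustration only, a heavily simplified skeleton of that idea might look like this; the bucket name, the settings.integrationContext property, and collapsing everything into a single flows.json per tenant are simplifications, not our actual code:

// Heavily simplified skeleton of an S3-backed Node-RED storage module.
const { S3Client, GetObjectCommand, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({});
let basePrefix; // e.g. "flows/{integrationId}/{environment}"

async function readJson(key) {
    const res = await s3.send(new GetObjectCommand({ Bucket: process.env.FLOW_BUCKET, Key: key }));
    return JSON.parse(await res.Body.transformToString());
}

async function writeJson(key, value) {
    await s3.send(new PutObjectCommand({
        Bucket: process.env.FLOW_BUCKET,
        Key: key,
        Body: JSON.stringify(value)
    }));
}

module.exports = {
    init(settings) {
        // settings.integrationContext is an illustrative placeholder for the tenant info.
        const { integrationId, environment } = settings.integrationContext || {};
        basePrefix = `flows/${integrationId}/${environment}`;
        return Promise.resolve();
    },
    getFlows: () => readJson(`${basePrefix}/flows.json`),
    saveFlows: (flows) => writeJson(`${basePrefix}/flows.json`, flows),
    getCredentials: () => readJson(`${basePrefix}/_config/credentials.json`),
    saveCredentials: (creds) => writeJson(`${basePrefix}/_config/credentials.json`, creds),
    getSettings: () => readJson(`${basePrefix}/_config/settings.json`),
    saveSettings: (s) => writeJson(`${basePrefix}/_config/settings.json`, s),
    getSessions: () => readJson(`${basePrefix}/_config/sessions.json`),
    saveSessions: (s) => writeJson(`${basePrefix}/_config/sessions.json`, s),
    getLibraryEntry: () => Promise.resolve([]),
    saveLibraryEntry: () => Promise.resolve()
};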
How It’s Working Now
1. Patched HTTP-in Node
We patched 21-httpin.js to auto-add integration context to routes:
function getUrlWithContext(url, RED) {
    // Resolve the tenant context exposed via functionGlobalContext at startup.
    const context = RED.settings.functionGlobalContext.getIntegrationContext();
    if (context?.integrationId && context?.environment) {
        // Prefix the user-defined path with the tenant segments.
        return '/' + context.integrationId + '/' + context.environment + url;
    }
    // No context available – fall back to the path as entered in the editor.
    return url;
}
So a user-defined /webhook gets translated to:
/{integrationId}/{environment}/webhook
2. Flow Loading
We load flows dynamically:
// Build the tenant context and load this tenant's flows into the runtime.
const context = { integrationId, environment, flowName };
await RED.nodes.setFlows(flows, context);
3. Context Access
Flow context is set at runtime
Function nodes use flow.get('integrationId'), etc.
Global context exposes getIntegrationContext() method
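In a Function node that access looks roughly like this (illustrative sketch; the context keys match the names above):

// Function node: read the tenant context placed in flow context at runtime.
const integrationId = flow.get('integrationId');
const environment = flow.get('environment');

// The helper registered via functionGlobalContext is reachable through global.get().
const getIntegrationContext = global.get('getIntegrationContext');
const ctx = getIntegrationContext ? getIntegrationContext() : null;

msg.payload = { integrationId, environment, ctx };
return msg;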
Issues We’re Facing
1. Routes Disappear
When a new flow is loaded, routes from the previous flow vanish. Only the most recently loaded flow’s routes are active.
2. Single Active Flow Limitation
Node-RED core assumes a single active flow configuration; setFlows() replaces all routes and state.
3. Shared Runtime State
Even though we isolate settings and credentials in S3, the Node-RED runtime (context, routes, memory) is still global.
Our Current Stack Involves
Patching core file 21-httpin.js
Custom storage module for flows and credentials from S3
Middleware for request URL rewriting
Dynamically loading flows based on access
Manually registering HTTP routes per flow
Open Points We’re Thinking About
Whether to continue with a single Node-RED instance for all tenants
Or switch to per-tenant Node-RED instances
How to maintain multiple active flows and their HTTP routes concurrently
Ensuring route, credential, and context isolation
Whether to preload all flows at startup and how to manage scale
Also Considered
Multiple Node-RED instances (resource-intensive)
Projects feature (doesn’t solve HTTP routing issues)
Custom nodes for flow context (loses native node behavior)
Happy to share more if helpful — posting this to see if others have taken a similar route or found a cleaner pattern.
The usual way to handle this when using a microservice architecture like Node-RED is to have multiple service instances behind a proxy.
Anything less risks compromising individual customers' data.
Thanks, this does make sense. In that case, if we deploy multiple Node-RED instances per environment, our issue around http-in route customization still remains. Do you mean we should add the integration ID using the patch approach, and use the credentials/middleware injection to route internally to the specific integration?
You can use the proxy to help route things; you don't have to rely on Node-RED alone.
By having multiple instances, the Node-RED end of the URL is greatly simplified. The proxy hides the gory details.
Too many people treat Node-RED as the complete answer when it doesn't need to be and often shouldn't be.
Using a proxy would also give you far better and stronger options for authentication.
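For illustration, path-based routing in front of per-tenant instances could be as simple as this sketch, assuming an Express front end with http-proxy-middleware; the tenant table, ports, and URL layout are made up, not from this thread:

const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

// Map of tenant -> internal Node-RED instance (made-up values).
const tenantTable = {
    'acme/sandbox': 'http://127.0.0.1:2001',
    'acme/production': 'http://127.0.0.1:2002'
};

// One proxy per tenant; pathRewrite strips the tenant prefix so each Node-RED
// instance only ever sees the plain path it was configured with (e.g. /webhook).
const proxies = Object.fromEntries(
    Object.entries(tenantTable).map(([tenant, target]) => [
        tenant,
        createProxyMiddleware({
            target,
            changeOrigin: true,
            pathRewrite: (path) => path.replace('/' + tenant, '')
        })
    ])
);

const app = express();

app.use((req, res, next) => {
    // Expect /{integrationId}/{environment}/... and hand off to the matching instance.
    const [, integrationId, environment] = req.path.split('/');
    const proxy = proxies[integrationId + '/' + environment];
    if (!proxy) return res.sendStatus(404);
    proxy(req, res, next);
});

app.listen(8080);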