Does your custom node have a config node that it uses? If so, that would be an ideal place to hold runtime stats (e.g. a total and/or rolling average of requests in the last hour). That way, it does not matter how many of your custom nodes are deployed across your flows -- they all use the same config node.
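As a rough sketch of the idea (the node name, properties, and window size here are just assumptions, not your actual code), the config node could keep a rolling window of request timestamps that every instance of your custom node shares:

```javascript
// rate-stats-config.js -- hypothetical config node holding shared request stats
module.exports = function (RED) {
    function RateStatsConfig(config) {
        RED.nodes.createNode(this, config);
        const node = this;
        node.windowMs = 60 * 60 * 1000;   // track requests made in the last hour
        node.timestamps = [];             // one entry per request

        // called by each custom node instance whenever it makes a request
        node.recordRequest = function () {
            node.timestamps.push(Date.now());
        };

        // how many requests were made inside the rolling window
        node.requestsInWindow = function () {
            const cutoff = Date.now() - node.windowMs;
            node.timestamps = node.timestamps.filter(t => t >= cutoff);
            return node.timestamps.length;
        };
    }
    RED.nodes.registerType("rate-stats-config", RateStatsConfig);
};
```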
Of course, your custom node would have to check the config node's stats to determine how long to wait before doing its own work (like making an API call). Essentially you are building a feedback loop in your custom code, and including some logic similar to what the delay node already does... so I agree with the others that it's better to use the available nodes to build a custom "flow" to do all of this.
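For illustration, the input handler of your custom node might consult that config node like this (again just a sketch -- `ratestats` is the hypothetical config property pointing at the config node above, and the limit is an arbitrary number):

```javascript
// inside your custom node's constructor (sketch only)
const stats = RED.nodes.getNode(config.ratestats);  // reference to the shared config node
const MAX_PER_HOUR = 500;                           // assumed API rate limit

node.on("input", function (msg, send, done) {
    const used = stats.requestsInWindow();
    // crude feedback loop: once the limit is reached, back off for a minute
    const delayMs = used < MAX_PER_HOUR ? 0 : 60 * 1000;

    setTimeout(() => {
        stats.recordRequest();
        // ... make the API call here, then pass the result along ...
        send(msg);
        done();
    }, delayMs);
});
```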
If you are concerned about limiting the number of API requests that are "in-flight" at the same time, take a look at the node-red-contrib-semaphore nodes -- start your flow with a semaphore-take node to get a ticket from the pool (with a configurable size), and when the API processing is done, use the semaphore-leave node to return the ticket for the next msg to use.
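If it helps to picture what those nodes are doing, here is the ticket-pool idea in plain JavaScript -- this is only the concept, not the package's actual code:

```javascript
// conceptual sketch of the ticket pool behind the semaphore nodes
class TicketPool {
    constructor(size) {
        this.free = size;      // tickets still available
        this.waiting = [];     // callbacks queued for a ticket
    }
    take(proceed) {            // semaphore-take: run the flow when a ticket is free
        if (this.free > 0) { this.free--; proceed(); }
        else this.waiting.push(proceed);
    }
    leave() {                  // semaphore-leave: hand the ticket to the next msg
        const next = this.waiting.shift();
        if (next) next(); else this.free++;
    }
}
```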
I've used this technique to process millions of database records/files... see this thread and the "ETL processing" discussions for more ideas.
__
Steve