Help with heap out of memory

Hi All,

We have numerous flows in the system. Periodically we see some flow executions failing with the error message below.

My challenge: how do I work out from the trace below which node is causing the problem, so that I can tackle it?

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0xb00e10 node::Abort() [node-red]
 2: 0xa1823b node::FatalError(char const*, char const*) [node-red]
 3: 0xcee09e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node-red]
 4: 0xcee417 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node-red]
 5: 0xea65d5  [node-red]
 6: 0xeb5cad v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node-red]
 7: 0xeb89ae v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node-red]
 8: 0xe79b12 v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [node-red]
 9: 0xe7443c v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawArray(int, v8::internal::AllocationType) [node-red]
10: 0xe74515 v8::internal::FactoryBase<v8::internal::Factory>::NewFixedArrayWithFiller(v8::internal::Handle<v8::internal::Map>, int, v8::internal::Handle<v8::internal::Oddball>, v8::internal::AllocationType) [node-red]
11: 0x1020d52  [node-red]
12: 0x1020ed4  [node-red]
13: 0x1026ff8  [node-red]
14: 0x10c494d v8::internal::JSArray::SetLength(v8::internal::Handle<v8::internal::JSArray>, unsigned int) [node-red]
15: 0x10285d5 v8::internal::ArrayConstructInitializeElements(v8::internal::Handle<v8::internal::JSArray>, v8::internal::Arguments<(v8::internal::ArgumentsType)1>*) [node-red]
16: 0x11e3428 v8::internal::Runtime_NewArray(int, unsigned long*, v8::internal::Isolate*) [node-red]
17: 0x15e7cf9  [node-red]

Short answer is that you cannot.

There is no guarantee that the node that happened to try to allocate some additional memory and failed is the one that is actually consuming all your free memory.

I have never found a particularly satisfactory way of tracking down these sorts of issues. But I have spent a lot of time trying.

Ultimately you need to consider any parts of your flows handling large messages, or lots of small messages.

There are lots of articles online on understanding node.js memory usage from a heap dump; that may give you some idea of what is eating the memory.
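
As an aside, a hedged sketch of one way to get a heap dump out of a running Node-RED instance: Node.js can write a .heapsnapshot file that Chrome DevTools can open. The snippet below assumes the built-in v8 module has been exposed to Function nodes via functionGlobalContext in settings.js; the wiring (an Inject node feeding a Function node) is purely illustrative.

  // Illustrative only, not part of the flows discussed in this thread.
  // Assumes settings.js contains:  functionGlobalContext: { v8: require('v8') }
  // Wire an Inject node into this Function node and click inject when memory
  // looks high; the resulting file opens in Chrome DevTools (Memory tab).
  const v8 = global.get('v8');
  const file = v8.writeHeapSnapshot();   // returns the generated file name
  node.warn('Heap snapshot written to ' + file);
  return msg;

Taking two snapshots a few minutes apart and comparing them usually shows which kinds of objects are accumulating, even if it does not immediately point at a specific node.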

Hi,

Are there any known memory issues around the filter node or the trigger node?
After trial and error, I have managed to narrow my investigation down to a short flow of mine which uses both of them.

The flow does the following (high level):

  1. Gets input with msg.payload set to X and msg.topic set to Y.
    (This input keeps arriving continuously while there is data on disk: the flow checks whether there is data on disk, creates an array, loops over the array and sends each element as msg.payload.)
  2. Each input goes to a filter node (set to block unless the value of msg.payload changes), which in turn feeds a trigger node. Both the filter and trigger nodes are set to process per topic (my rough sketch of what this per-topic filtering has to remember follows the list).
  3. When there is no more input left, the trigger node sends a signal after waiting for a certain time.
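
For what it's worth, my understanding of the per-topic "block unless value changes" behaviour in step 2 is roughly the following Function node sketch (illustrative only, not the actual filter node implementation):

  // Rough sketch of per-topic change filtering, for illustration only.
  const last = context.get('lastByTopic') || {};
  if (last[msg.topic] === msg.payload) {   // strict equality; fine for simple payloads
      return null;                         // value unchanged: block the message
  }
  last[msg.topic] = msg.payload;           // remember the latest value per topic
  context.set('lastByTopic', last);
  return msg;                              // value changed: pass it on

If that is roughly right, the retained state is only the last payload per topic, which should stay small for a dozen topics and ~3000-byte payloads; a more likely suspect would be messages queuing up faster than the downstream nodes consume them, or the arrays built while looping over the data on disk in step 1.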

What I typically see is that when I keep the above flow out of the picture, I don't get the heap error.

I am talking about a volume of 6000 or so messages on one topic, with multiple topics running in parallel.

Any suggestions on whether these two nodes can run into heap issues?

(I am also looking at changing the above design completely.)

Can you share a working demo of the problem?

As it stands, there is not enough to go on. For example, you haven't said how many messages over what time frame, or the size of the data in the messages, etc.

Being a production system, that is tricky. But what we did see is that since we bypassed the small flow I mentioned above, the heap issue has come down drastically.

  1. Previously we had a flow hitting this small segment (steps 1-3 above) with 30-40k messages. There was no need for it to, so we removed the segment from that flow. The moment this was done, the heap error went from once daily to once every 2-3 days.
  2. Today we have around 12 flows running concurrently each day, each with its own topic.
    Each day is a fresh run and the topic changes per day.
    The cumulative number of messages across all topics for a day is around 10,000 - 12,000.
    The topic with the most messages has around 6000 messages; the size per message is around 3000 bytes.
    Each message hits this segment of flow (filter + trigger - see steps 1-3 in my earlier note). The flows run over a few hours.
    The flow with 6000 messages runs for around 2 hours.
    Generally, when the heap error is seen, it is this flow that is still processing messages.

Could you run them sequentially rather than concurrently?

The challenge with that approach is that what finishes in a few hours today may take two or three times as long. The flows feed data to other non-NR processes, which run in parallel and complete their own tasks.

Have you tested how long it takes for just one?

If I run them sequentially, the process will take 8-10 hours against 2 hours when run together.

What device are you running Node-RED on, how much RAM has it got, and exactly what command line are you using to run it?
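
As an aside, a quick way to confirm what heap limit the process is actually running with is to read it from inside a flow. A minimal Function node sketch, assuming the built-in v8 module is exposed via functionGlobalContext in settings.js (the same assumption as the heap snapshot snippet earlier in the thread):

  // Illustrative Function node: report the V8 heap limit and current usage.
  // Assumes settings.js contains:  functionGlobalContext: { v8: require('v8') }
  const v8 = global.get('v8');
  const stats = v8.getHeapStatistics();
  const mb = (n) => Math.round(n / 1048576) + ' MB';
  node.warn('heap_size_limit=' + mb(stats.heap_size_limit) +
            ', used_heap_size=' + mb(stats.used_heap_size));
  return msg;

If the limit turns out to be the Node.js default and the machine has RAM to spare, starting Node-RED with a larger old space (for example node --max-old-space-size=2048 $(which node-red), or the equivalent option of whatever wrapper or service launches it) buys some headroom, though it only postpones the crash if something is genuinely leaking.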

In the critical flow, make sure you have not got any extra nodes, including debug nodes. Try to eliminate any places where you have two or more wires coming from a single output, as each time that happens the message will be cloned, doubling the amount of memory required.

You say that 6000 messages take 2 hours, which is about 1 second per message. What is it that takes a second to perform? Is it complex processing in nodes, or are you calling an external process, such as a database or an HTTP API, for example?

[edit] In fact you say that if you do it sequentially it takes 4 to 5 times as long, so each one takes 4 or 5 seconds.
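
If it would help to answer that, one hedged way to measure it is to timestamp each message on the way into the segment and report the elapsed time a little further on, for example with two small Function nodes (the property name _segmentStart is just an illustrative choice):

  // Function node placed just before the filter node: stamp the arrival time.
  msg._segmentStart = Date.now();
  return msg;

  // Function node placed just after the filter node (the filter passes the
  // original message through, so the stamp survives): report the elapsed time.
  if (msg._segmentStart) {
      node.warn(msg.topic + ' took ' + (Date.now() - msg._segmentStart) + ' ms');
  }
  return msg;

That should show whether the time is going into the filter/trigger segment itself or into whatever feeds it.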

Hi @SandeepA ,

Have you tried allocating virtual memory for Node.js?
