Msg size limit in Dashboard 2.0

I am loading big amounts of data from a DB and displaying it in Dashboard-2.0 nodes.

I have done some stress-testing and noticed that when the msg size is too big, the dashboard page disconnects/reloads, and sometimes the whole Node-RED process crashes with a heap out-of-memory error. The same big messages pass through flows without dashboard nodes successfully.

Is there any guideline or configuration for max msg size for dashboard nodes?
I am familiar with the settings.js parameter apiMaxLength, but it seems to relate to API messages (loading flows, etc.) and not dashboard messages.

Our SocketIO connection defaults to a 1MB max payload, but that can be overridden.
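
For context, the limit in question is Socket.IO's maxHttpBufferSize option (1e6 bytes, i.e. 1MB, by default). Here's a minimal standalone sketch of how that option is set on a plain Socket.IO server; this is not Dashboard's actual code, and the exact way to expose it via settings.js may differ:

```javascript
// Minimal sketch of the Socket.IO payload limit being referred to.
// Standalone Socket.IO server, not Dashboard 2.0 itself; values are illustrative.
const { createServer } = require("http");
const { Server } = require("socket.io");

const httpServer = createServer();
const io = new Server(httpServer, {
    // Default is 1e6 (1 MB); messages larger than this cause the client to be disconnected.
    maxHttpBufferSize: 1e8  // raised to ~100 MB here, purely for illustration
});

io.on("connection", (socket) => {
    socket.on("msg-input", (payload) => {
        // payloads up to maxHttpBufferSize are accepted here
    });
});

httpServer.listen(3000);
```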

We've heard from a couple of other people recently about memory usage. We do have plans to improve our data/state stores to help with this too, which I hope to tackle in the next couple of weeks.

Any more details you're able to share about memory usage would be most welcome, as we are in a phase of gathering data/insights at the moment.

I also want us to handle these cases better, so that even if it falls over, it does so gracefully.

One should understand the basic principle that increasing the pipe diameter does not make the receiver more performant. The limit is in place for a reason, and if the limit is hit, one should first think about how to avoid such big payloads before changing the limit.


As a matter of interest, which node are you sending such large payloads to?

I'm not trying to push any limits, just trying to investigate & find the sweet spot.

The socket max payload setting seems to determine the transmitted chunk size but not the overall msg size the dashboard node can accept without blowing up.

I started stress-testing my own tabulator custom node (a raw tabulator instance can easily hold millions of rows), and then (to reduce interfering factors) went down to a bare setup using a simple Markdown node.

In the flow below, I fabricate a two-dimensional table of strings, with configurable row & column counts. I calculate the total table memory footprint as the accumulation of cell sizes (cell size = 16 + cell[i][j].length*2 bytes). I also calculate the serialized size of the whole table as JSON.stringify(table).length*2, as I assume that's how the message is actually transmitted.
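
A simplified sketch of the function node that does this (names and sizes are illustrative; the actual flow is attached below):

```javascript
// Node-RED function node: fabricate a ROWS x COLS table of strings and
// estimate its memory footprint using the formulas described above.
const ROWS = 45000;   // configurable row count
const COLS = 500;     // configurable column count

const table = [];
let cellBytes = 0;
for (let i = 0; i < ROWS; i++) {
    const row = [];
    for (let j = 0; j < COLS; j++) {
        const cell = `R${i}C${j}`;
        row.push(cell);
        cellBytes += 16 + cell.length * 2;   // per-cell estimate: 16 bytes overhead + 2 bytes per char
    }
    table.push(row);
}

// Serialized size estimate, assuming the msg travels as a JSON string (2 bytes per char)
const serializedBytes = JSON.stringify(table).length * 2;

msg.payload = table;
msg.stats = { rows: ROWS, cols: COLS, cellBytes, serializedBytes };
return msg;
```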

The message is sent to a Markdown node, which only displays statistics, and then forwarded to a debug node. Typically, issues begin (on my laptop) when the msg size crosses 700MB (45,000 rows x 500 columns).

flows (47).json (4.4 KB)

45,000 rows x 500 columns

I think this is not a Node-RED issue, but rather that you are hitting browser limits.

Rendering a table is relatively expensive due to the number of HTML elements that need to be created and applied to the DOM.

This is the reason alternatives exist to handle large datasets/tables, like Spread Grid or AG-grid. Those don't use the DOM, but canvas, where they become highly performant (i.e. 60fps).

Thanks for your reply.
I don't think it's related to HTML rendering, since I'm not rendering much. In this example I'm sending the message to a plain text node, and only rendering 5 statistical figures (I use a table structure just to make it easy for me to play with the dimensions, but it could just as well be a flat blob).

I do agree that trying to render huge amounts of data can kill the browser. The tabulator-based node follows exactly the concept you describe: it only renders a sliding visible "viewport" on top of a background memory model, and I have worked with it seamlessly with millions of rows.
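
For illustration, a bare-bones Tabulator setup along those lines (standalone browser code assuming Tabulator 5.x is loaded, not my custom node itself):

```javascript
// Keep a large dataset in memory, but only ever render the visible viewport.
const rows = [];
for (let i = 0; i < 1000000; i++) {
    rows.push({ id: i, name: `Row ${i}`, value: Math.random() });
}

const table = new Tabulator("#my-table", {
    data: rows,
    autoColumns: true,
    height: "400px",              // a fixed height is what enables the scrolling viewport
    renderVertical: "virtual"     // only visible rows exist in the DOM (the default in v5)
});
```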

Bear in mind that with Dashboard, this gets stored in memory against the node... for general Node-RED the message is a fleeting moment and is cleared once handled.

We do this for quick PoCs and low-memory use cases, but you're starting to reach the end of where this is sensible now. You can turn this off with "Accept Client Data" for the specific node types, whereby that data is not saved if a msg._client is specified.
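
As a rough sketch (assuming the widget's node type is listed under "Accept Client Data", and that msg._client carries a socketId captured from an earlier client event; the exact field names may differ):

```javascript
// Function node just before the dashboard widget:
// scope the message to a single client so the widget does not retain it in its store.
// "lastSocketId" is an illustrative flow-context key, captured earlier from an incoming msg._client.
msg._client = {
    socketId: flow.get("lastSocketId")
};
return msg;  // msg.payload (the large dataset) passes through untouched
```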
