CPU spikes while writing data to context

Hello,

So I have a problem. I am accumulating data for a graph. The data is received over Modbus communication. The problems started when I tried to accumulate the data: random CPU spikes (95-98%). Does anyone have a suggestion on how to solve this?

Hardware - BeagleBone Green Gateway

Also, I uncommented this in /var/lib/node-red/.node-red/settings.js (edited with nano):

contextStorage: {
    default: {
        module:"localfilesystem"
    },
},

Code for data accumulation

[
    {
        "id": "e2eb84e67135a1d1",
        "type": "tab",
        "label": "Flow 1",
        "disabled": false,
        "info": "",
        "env": []
    },
    {
        "id": "36994b8b41eac99e",
        "type": "function",
        "z": "e2eb84e67135a1d1",
        "name": "Data",
        "func": "const data = global.get(\"ALLDATA\");\n\nmsg.payload = {\t\n\"Timestamp\": new Date().getTime(),\n\"R5004\": Number.parseFloat(data.R5004.toFixed(2)),\n\"R5007\": Number.parseFloat(data.R5007.toFixed(2)),\n\"R5012\": Number.parseFloat(data.R5012.toFixed(2)),\n\"R5014\": Number.parseFloat(data.R5014.toFixed(2)),\n\"R5020\": Number.parseFloat(data.R5020.toFixed(2)),\n\"R5021\": Number.parseFloat(data.R5021.toFixed(2)),\n\"R5022\": Number.parseFloat(data.R5022.toFixed(2)),\n\"R5023\": Number.parseFloat(data.R5023.toFixed(2)),\n\"R5024\": Number.parseFloat(data.R5024.toFixed(2)),\n\"R5025\": Number.parseFloat(data.R5025.toFixed(2)),\n\"R5026\": Number.parseFloat(data.R5026.toFixed(2)),\n\"R5028\": Number.parseFloat(data.R5028.toFixed(2)),\n\"R5029\": Number.parseFloat(data.R5029.toFixed(2)),\n\"R5030\": Number.parseFloat(data.R5030.toFixed(2)),\n\"R5031\": Number.parseFloat(data.R5031.toFixed(2)),\n\"R5032\": Number.parseFloat(data.R5032.toFixed(2)),\n\"R5033\": Number.parseFloat(data.R5033.toFixed(2)),\n\"R5034\": Number.parseFloat(data.R5034.toFixed(2)),\n\"R5035\": Number.parseFloat(data.R5035.toFixed(2)),\n\"R5037\": Number.parseFloat(data.R5037.toFixed(2)),\n\"R5038\": Number.parseFloat(data.R5038.toFixed(2)),\n\"R5039\": Number.parseFloat(data.R5039.toFixed(2)),\n\"R5044\": Number.parseFloat(data.R5044.toFixed(2)),\n\"R5045\": Number.parseFloat(data.R5045.toFixed(2)),\n};\n\nreturn msg;",
        "outputs": 1,
        "noerr": 0,
        "initialize": "",
        "finalize": "",
        "libs": [],
        "x": 590,
        "y": 540,
        "wires": [
            [
                "ca9d2154a7fcb14d"
            ]
        ]
    },
    {
        "id": "f17d37076ba69816",
        "type": "inject",
        "z": "e2eb84e67135a1d1",
        "name": "",
        "props": [
            {
                "p": "payload"
            },
            {
                "p": "topic",
                "vt": "str"
            }
        ],
        "repeat": "1",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "",
        "payload": "",
        "payloadType": "date",
        "x": 430,
        "y": 540,
        "wires": [
            [
                "36994b8b41eac99e"
            ]
        ]
    },
    {
        "id": "ca9d2154a7fcb14d",
        "type": "function",
        "z": "e2eb84e67135a1d1",
        "name": "",
        "func": "const measures = context.get(\"DATA\") || [];\n\n\n\nif(measures.length >= 86400) {\n\n  measures.pop();\n\n}\n\n\n\nmeasures.unshift(msg.payload);\n\ncontext.set(\"DATA\", measures);\n\nmsg.payload = measures;\n\nreturn msg;",
        "outputs": 1,
        "noerr": 0,
        "initialize": "",
        "finalize": "",
        "libs": [],
        "x": 760,
        "y": 540,
        "wires": [
            []
        ]
    }
]


Does this use an SD card for storage? Also, have you checked your file system for errors? Final question: how large is the output data getting?

It isn't uncommon to see spikes on output, especially with SD card interfaces, which generally have quite limited capabilities. The cards themselves can also be limited, and cheap cards in particular can end up with large numbers of errors. Spikes are worse, of course, if the data is getting large.

From my mobile phone, it looks like you are storing up to 86,400 elements of roughly 25 float values each. At a rough guess that's 16MB of data to convert to JSON each time the context storage flushes to file (that's what happens under the hood).

Additionally, the JSON stringifier it uses is not the built-in one, because it has to check for circular references etc. In simple terms, it's slower and more CPU-intensive than you might imagine.
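To get a feel for the scale involved, here is a rough, standalone sketch (plain Node.js, not from the thread) that builds an array shaped like the accumulated context data and times a single stringify. It uses the built-in JSON.stringify, so it is only a lower bound — Node-RED's circular-reference-safe stringifier is slower still:

```javascript
// Build one reading shaped like the flow's payload: a timestamp
// plus ~25 register values (field names here are illustrative).
const sample = { Timestamp: Date.now() };
for (let i = 0; i < 25; i++) sample["R" + (5000 + i)] = 12.34;

// 86,400 readings = one per second for 24 hours, as in the flow.
const measures = new Array(86400).fill(sample);

// Time one full serialisation, as happens on every context flush.
const t0 = process.hrtime.bigint();
const json = JSON.stringify(measures);
const t1 = process.hrtime.bigint();

console.log(`size: ${(json.length / 1e6).toFixed(1)} MB`);
console.log(`stringify took ${(Number(t1 - t0) / 1e6).toFixed(0)} ms`);
```

On a desktop this takes a noticeable fraction of a second; on a BeagleBone-class CPU it is much worse, and it repeats on every flush.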

I would not hold this much data in file-backed context. I'd write it to a database.


Hi, I am using the internal eMMC. The file size is about 16MB. I am attaching what I got from tracing the function.


So maybe I need to use SQLite?
I can't use an external database for this.

I'd be more interested in memory and swap utilisation.

Perhaps. And maybe think about whether you can restructure the data into a relational table. If you can, you will be better off using SQLite, because otherwise you still have to stringify/parse the JSON.
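As a hypothetical sketch of that restructuring (table and column names are made up, not from the thread): each reading becomes one row, with one column per register, and only the new row is serialised per insert rather than the whole 24-hour array. How the statement and parameters are handed to your SQLite node depends on which node you use, so check its docs:

```javascript
// Flatten one reading object into a parameterized INSERT for a
// relational table ("measures" is an assumed table name).
function toInsert(reading) {
  const cols = Object.keys(reading);                  // e.g. Timestamp, R5004, ...
  const placeholders = cols.map(() => "?").join(", ");
  return {
    sql: `INSERT INTO measures (${cols.join(", ")}) VALUES (${placeholders})`,
    params: cols.map((c) => reading[c]),              // values, in column order
  };
}

// Example with a small reading:
const stmt = toInsert({ Timestamp: 1700000000000, R5004: 1.23, R5007: 4.56 });
// stmt.sql → "INSERT INTO measures (Timestamp, R5004, R5007) VALUES (?, ?, ?)"
```

Retrieving the last 24 hours for the chart is then a single indexed range query on the Timestamp column.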

If you were already running something like memcached, that might also work.

Also, depending on exactly what you are trying to achieve, even running InfluxDB and moving the charts to Grafana might be better.

Besides @TotallyInformation's and @Steve-Mcl's suggestions, you can use memory storage to improve performance. With your function node, you are writing 16MB of data to the eMMC every 30 seconds (the default flush interval). This can affect system performance and reduce eMMC life significantly.

For large temporary data, use memory storage. You can modify settings.js as follows:

contextStorage: {
    default: "file",
    memoryOnly: { module: 'memory' },
    file: { module: 'localfilesystem' }
},

Then in your function node, use the following to get the temporary data from memory:

var measures = context.get("DATA",'memoryOnly');
if(measures==undefined)
{
    measures=[];
    context.set('DATA',measures,'memoryOnly');
}

There is no need to write the context back with "context.set" on every message: in Node-RED, an array held in memory context is stored by reference, so when you update measures, "DATA" is updated automatically.

This way, you avoid writing to/reading from the eMMC, boost performance, and prolong the eMMC's life.
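The by-reference behaviour described above can be demonstrated with a tiny standalone sketch (using a plain Map to stand in for the in-memory context store — file-backed context serialises the data, so don't rely on this there):

```javascript
// A Map standing in for Node-RED's in-memory context store.
const store = new Map();
store.set("DATA", []);

// get() returns the same array object, not a copy...
const measures = store.get("DATA");
measures.unshift({ R5004: 1.23 });

// ...so the mutation is visible without a second set().
console.log(store.get("DATA").length); // 1
```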

I also always forget to say that you can alter the writeback period for the file storage. That might be an option, though obviously there is a danger that you might lose some data.
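Per the Node-RED documentation, the localfilesystem store accepts a flushInterval config option (in seconds, default 30) controlling how often changed context is written back to disk. Something like this in settings.js (the 300 here is just an example value):

```javascript
// settings.js — flush file-backed context at most every 5 minutes
// instead of the default 30 seconds. Fewer eMMC writes, but up to
// 5 minutes of context is lost on a crash or power cut.
contextStorage: {
    default: {
        module: "localfilesystem",
        config: {
            flushInterval: 300
        }
    },
},
```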

So the context function will look like this? Because I only need 24 hours of data:

var measures = context.get("DATA", 'memoryOnly');
if (measures == undefined) {
    measures = [];
    context.set('DATA', measures, 'memoryOnly');
}

if (measures.length >= 86400) {
    measures.pop();
}

measures.unshift(msg.payload);

//context.set("DATA", measures);

msg.payload = measures;

return msg;

You can write that as:

const measures = context.get("DATA",'memoryOnly') || []

You only need to do the set once at the end.

So something like this?

const measures = context.get("DATA", 'memoryOnly') || [];

if (measures.length >= 86400) {
    measures.pop();
}

measures.unshift(msg.payload);

context.set('DATA', measures, 'memoryOnly');

msg.payload = measures;

return msg;
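One further performance note (my addition, not from the thread): once the array is full, unshift() moves all 86,400 elements on every single insert. A fixed-capacity ring buffer makes each insert O(1); a minimal sketch, assuming the chart still wants a newest-first array:

```javascript
// Fixed-capacity ring buffer: O(1) insert instead of unshift()
// shifting the whole 86,400-element array every second.
class RingBuffer {
  constructor(capacity) {
    this.buf = new Array(capacity);
    this.capacity = capacity;
    this.head = 0;   // slot where the next item will be written
    this.size = 0;
  }
  push(item) {
    this.buf[this.head] = item;
    this.head = (this.head + 1) % this.capacity;
    if (this.size < this.capacity) this.size++;   // oldest is overwritten once full
  }
  // Newest-first snapshot, matching the unshift() ordering above.
  toArray() {
    const out = new Array(this.size);
    for (let i = 0; i < this.size; i++) {
      out[i] = this.buf[(this.head - 1 - i + this.capacity) % this.capacity];
    }
    return out;
  }
}

// Possible use inside the function node (memory context holds the instance):
// const rb = context.get("DATA", "memoryOnly") || new RingBuffer(86400);
// rb.push(msg.payload);
// context.set("DATA", rb, "memoryOnly");
// msg.payload = rb.toArray();
```

The toArray() snapshot still costs O(n), so it is best done only when the chart actually needs refreshing rather than on every sample.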
