Helpful criticism requested

I'm new to node-red and sorta new to javascript and would appreciate any helpful criticism concerning best practices.
This function stores sensor data in context variables every 5 seconds, then on one-minute timer ticks sends the data to a file. It only sends the last reading stored for each sensor, not all data for the past minute.

var ambient = context.get('ambient')||0;
var brewbox = context.get('brewbox')||0;
var brewbucket = context.get('brewbucket')||0;
var setpoint = context.get('setpoint')||0;
var heat = context.get('heat')||0;
var cool = context.get('cool')||0;
var enable = context.get('enable')||false;
var time_tick = context.get('time_tick')||0;
var payload = msg.payload;
var topic = msg.topic; 

switch (topic){
    case 'tick':
        context.set('time_tick', payload);
        if (context.get('enable') === true){
            msg.topic = 'status_tick';
            return msg; // the full 'tick' body that builds the data object is shown later in the thread
        }
        break;
    case 'enable':
        node.status({fill:"red",shape:"dot",text:topic + ':' + payload});
        context.set('enable', payload);
        if (context.get('enable')){
            msg.topic = 'status_tick';
            return msg;
        }
        break;
    case 'ambient':
    case 'brewbox':
    case 'brewbucket':
    case 'setpoint':
    case 'heat':
    case 'cool':
        context.set(topic, payload); // store the latest reading under its topic name
        break;
    case 'end':
        context.set('enable', false);
        msg.topic = 'status_tick';
        return msg;
}

Why bother with any of the var xxxxx = context.get('xxxxx')||0; lines, since you don't use the variables and get them again later anyway? It looks to me like you could eliminate that whole first part other than the last two lines for payload and topic.

Well, the sensor data and output to file are on two separate timers, so the data from the sensors has to be stored as it comes in; otherwise there would be no data for the JSON output object. If I were sending each sensor input individually I could do as you suggest. Am I wrong?

The 'enable' case sends the header information to a csv file for the first line.
The 'tick' case sends a data object line built from the stored data.
The 'enable' false puts an end marker on each column of data and ends the output.
Between 'tick's the sensor data is stored and ready for the 'tick' case.


Hi - nice. You can also achieve similar with the built in delay and join nodes.
The delay node has a rate limit mode where it will store messages per topic and release them all after an interval (only keeping the last value per topic) - and the join node can then join them all into a single message keyed by topic. So as long as your incoming data is identified by topic then it should be similar.
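To make the behaviour concrete, here is a rough sketch in plain JavaScript of what that delay + join combination does. This is not the actual node implementation; the function and variable names are illustrative only.

```javascript
// Sketch of delay (rate limit, keep last per topic) + join (key = topic).
const latest = {};                       // last payload seen for each topic

function onMessage(msg) {                // called for every incoming message
    latest[msg.topic] = msg.payload;     // newer readings overwrite older ones
}

function onInterval() {                  // fired once per interval (e.g. 1 minute)
    return { payload: { ...latest } };   // one combined message, keyed by topic
}

onMessage({ topic: 'ambient', payload: 21.5 });
onMessage({ topic: 'ambient', payload: 21.7 }); // only the last value survives
onMessage({ topic: 'brewbox', payload: 18.2 });
// onInterval() → { payload: { ambient: 21.7, brewbox: 18.2 } }
```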


Thanks for the input!
It was fun coding my little bit but your solution seems easily implemented.
I usually reach for a function node before looking for an already-debugged node, but eventually I hope to have all this in hand.
Will the sensor data be joined in the same order every time? Right now I'm just outputting a csv file but have mysql installed and hope to use it in the future. In a csv file order is important.


So if I wanted to convert the output of the Join node to SQL:
How would I get the correct values out of the key/value object?

I'm confused. Let's take the first line:

var ambient = context.get('ambient')||0;

This sets the variable ambient to the stored value or 0, but where do you use the variable ambient in the rest of the code?

Maybe use a template node. Like

INSERT INTO mytable [(A,B,C)] VALUES({{payload.too}},{{}}, {{payload.other}});
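If you'd rather stay in a function node, the same statement can be built with a template literal. A sketch only: the table and column names are illustrative (matching the example above), and it assumes the downstream SQL node reads the query from msg.topic, as node-red-node-mysql does.

```javascript
// Sketch: build the INSERT from the joined key/value object in a function node.
// 'mytable' and columns A,B,C are illustrative placeholders.
function buildInsert(msg) {
    const p = msg.payload;               // e.g. { A: 1, B: 0, C: 2 } from the join node
    msg.topic = `INSERT INTO mytable (A,B,C) VALUES (${p.A},${p.B},${p.C});`;
    return msg;
}

// buildInsert({ payload: { A: 1, B: 0, C: 2 } }).topic
//   → "INSERT INTO mytable (A,B,C) VALUES (1,0,2);"
```

(For real data you'd want a parameterised query rather than string interpolation, but that depends on which SQL node you use.)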

Or for CSV use a CSV node

Case 'tick' is on a 1-minute timer which triggers the function to gather the variables, put them into a JSON object, and send it as the message.

case 'tick':
    context.set('time_tick', payload);
    //node.status({fill:"red",shape:"dot",text:msg.topic});
    if (context.get('enable') === true){
        msg.payload = {
            'TIME': context.get('time_tick')||0,
            'AMBIENT': context.get('ambient')||0,
            'BREWBOX': context.get('brewbox')||0,
            'BREWBUCKET': context.get('brewbucket')||0,
            'SETPOINT': context.get('setpoint')||0,
            'HEAT': context.get('heat')||0,
            'COOL': context.get('cool')||0
        };
        msg.topic = 'status_tick';
        return msg;
    }

Is that what the template node is for! I'll have to get up to speed on that node.
So, payload.too will extract the correct data from { A: 1, B: 0, C: 2 } as in payload.A?
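For reference, that is exactly how it works: once the join node has produced the key/value object, plain property access pulls out each field. A quick sketch with the example values:

```javascript
const payload = { A: 1, B: 0, C: 2 };  // shape produced by the join node

payload.A;       // → 1   (dot form)
payload['B'];    // → 0   (bracket form, handy when the key name is in a variable)
```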

But it’s gathering the context variable not the local variable you set in the first part of the code. Where do you test or use the local variables? You set them from the context variable and later set the context variable to msg.payload, but I don’t see the local variable (other than ‘topic’ and ‘payload’) being used.

Or was I out in the sun too long today?


switch (topic){
    case 'tick':
        context.set('time_tick', payload);

Right, as I said in my first post. You do use payload and topic but what about all the other local variables you set?

DOH! I get it now. haha
You are right, they are not needed.
Thanks for the heads up!


Can I just check what you are doing with the SQL insert statements? If you are using InfluxDB as mentioned in another thread you don't need to do any of that. The influxdb node does it all for you.

A more complete example

[{"id":"fa56247d.9dd868","type":"inject","z":"49810e1d.25d9d","name":"","topic":"A","payload":"1","payloadType":"num","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":190,"y":160,"wires":[["69333723.8735c8"]]},{"id":"69333723.8735c8","type":"delay","z":"49810e1d.25d9d","name":"","pauseType":"timed","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"minute","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":true,"x":350,"y":320,"wires":[["9828b60e.c77158"]]},{"id":"c02a5cfe.5ca99","type":"inject","z":"49810e1d.25d9d","name":"","topic":"B","payload":"0","payloadType":"num","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":190,"y":200,"wires":[["69333723.8735c8"]]},{"id":"9828b60e.c77158","type":"join","z":"49810e1d.25d9d","name":"","mode":"custom","build":"object","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"1","count":"","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":530,"y":320,"wires":[["d84c6be7.5739a8","a42f596b.45e818","1daf05b5.1ae54a"]]},{"id":"62e24859.3c6438","type":"inject","z":"49810e1d.25d9d","name":"","topic":"C","payload":"2","payloadType":"num","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":190,"y":240,"wires":[["69333723.8735c8"]]},{"id":"d84c6be7.5739a8","type":"debug","z":"49810e1d.25d9d","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":710,"y":320,"wires":[]},{"id":"a42f596b.45e818","type":"csv","z":"49810e1d.25d9d","name":"","sep":",","hdrin":"","hdrout":"","multi":"one","ret":"\\n","temp":"A,B,C","skip":"0","x":680,"y":380,"wires":[["d7b67037.af492"]]},{"id":"1daf05b5.1ae54a","type":"template","z":"49810e1d.25d9d","name":"","field":"payload","fieldType":"msg","format":"handlebars","syntax":"mustache","template":"INSERT INTO mytable [(A,B,C)] \nVALUES({{payload.A}},{{payload.B}},{{payload.C}});","output":"str","x":700,"y":260,"wires":[["a1d57787.302a08"]]},{"id":"d7b67037.af492","type":"debug","z":"49810e1d.25d9d","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":870,"y":380,"wires":[]},{"id":"a1d57787.302a08","type":"debug","z":"49810e1d.25d9d","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":870,"y":260,"wires":[]}]

Exactly, I feed the InfluxDB with:

[screenshot of the payload omitted]

And the node takes care of it.

I totally mistook the concepts behind influxdb. I assumed it was a relational SQL db like I'm used to working with (mysql, sqlite, progress). After a little more reading I see that I need a lot more reading. :slight_smile:
I have been able to insert data into influxdb using the function above, somewhat modified thanks to helpful criticism ( and Colin's sample flows ), but I think dceejay's suggestion is the way to go using delay and join ( more helpful criticism happily received ).
I'm not finding a way to delete data, and with a limited amount of space I think I need to (RPi3 with 32G SD). In SQL I'd use "DELETE FROM bla bla", but I can't find a way in InfluxQL except to just drop a database. I suspect this is done automatically.
Secondarily I have been able to connect Grafana to influxdb over my network from my desktop pc but I can't figure out how to retrieve data. I've only just begun with Grafana so it will come, eventually.

All suggestions and criticism are deeply appreciated.
Many Thanks to all,

Thanks for the input. I think I'll have to rely on delay and join nodes for now. I need to make some black cherry wine for my daughter and it has to age for 6 months to a year. I have to get started soon; she's due to return next May :wink:

Thanks ghayne. Your suggestion is what I ended up with after looking over Colin's sample flows.