Helpful criticism requested

Are you sure they were my flows?

You spoke of your influxdb node so I assumed this one was yours.

Wrong?

Are you sure I did? If so it was a slip of the finger.

I'd argue the complete opposite :slight_smile:

I only write a function node if the standard nodes (or an existing contrib node) can't handle the job

Each to their own of course

But I see the main strength of Node-RED as a way of avoiding coding in javascript in the 1st place :slight_smile:

The correct answer is of course both ways are right. It's whatever gets the job done for you :rocket:

The modular structure of node-red encourages learning.
I started just using the 'stock' nodes, but very quickly found that I could replace some nodes more efficiently with a few lines of JavaScript in a function node, and suddenly found that I was coding.

That's got to be a good thing? Why avoid it?

Paul

I've never really learnt javascript properly :slight_smile:

I always try and use blocks programmed by someone else before rolling my sleeves up :slight_smile:

(And only then if I can't use a 1 line JSONata expression)

You are obviously an advanced node-red user.

I've seen most beginners start by coding functions, probably because that's the easiest way when you don't know the potential of the built-in or contrib nodes.
It also depends on your background and programming experience.

In the beginning I used many more function nodes; most of them have since been replaced by built-in or contrib nodes, and nowadays I sometimes prefer to develop contrib nodes with the goal of simplifying flows.

And there are use cases where it's better to use e.g. MQTT to exchange data with external applications.

Anyway, this is only my opinion.

It all depends on what you mean by efficiency. If you are interested in processor resources, then a function node takes up a lot more than a regular node, so if that is your concern it is better to use a handful of regular nodes rather than one function node. However, if several nodes are required where a few lines of js will do the job, then that route may well be preferable from the point of view of tidiness and making the flow more understandable. In practice very few systems run out of processor resources anyway.
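For a concrete example, a small conversion like the one below (the input shape msg.payload.raw and the scale factor are made up purely for illustration) would otherwise need a change node and a range node or two:

// hypothetical example: scale a raw sensor reading and round to one decimal place
var raw = Number(msg.payload.raw) || 0;    // assumed input shape
msg.payload = Math.round(raw * 0.0625 * 10) / 10;
return msg;

Whether three stock nodes or four lines of js is the tidier option is exactly the trade-off being described above.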

Not at all - just really started with it a few months ago

I'm just a very poor javascript programmer

I rarely use more than 3% of my Raspberry Pi3 cpu power. If I install updates or do a full deploy of my flow it may reach 30%. I remember the days of strict data types, when I could save 3 bytes of space by using type byte rather than word, and I thought I'd never fill up my 10 Mbyte hard drive.

Back on topic. I found that if I want to 'tag' a measurement I need my JavaScript. I don't see how to build an object using dceejay's delay and join nodes. Using a 'tag' requires a complex object that I don't see delay and join accomplishing.

This is what is required to add a tag:

if (context.get('enable') === true) {
    // the influxdb out node treats a two-element array as [ fields, tags ]
    msg.payload = [{
           AMBIENT: context.get('ambient')    || 0,
           BREWBOX: context.get('brewbox')    || 0,
        BREWBUCKET: context.get('brewbucket') || 0,
          SETPOINT: context.get('setpoint')   || 0,
              HEAT: context.get('heat')       || 0,
              COOL: context.get('cool')       || 0
    }, {
        'batch': 'Trial'     // tag added here
    }];
    return msg;
}
return null;    // pass nothing while logging is disabled

I also discovered that using 'tag' as a tag field identifier really fouls things up.

If the tag is always "Trial", you can add a change node after the join node:

[screenshot of the change node configuration]
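For anyone reading later: that change node presumably just sets the second element of the payload array to a fixed tags object. A rough function-node equivalent, assuming the join node has already put the fields object into msg.payload, would be:

// append a fixed tags object after the joined fields object ([fields, tags] for the influxdb out node)
msg.payload = [msg.payload, { batch: 'Trial' }];
return msg;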

OK, but the idea of the tag is so that a specific batch can be identified more easily than by a date range. This is assuming that Grafana can be taught to select by tag.

ok, since you want some criticism, howabout fixint the title 'requested' not 'requrested' :stuck_out_tongue:

Fixint(g) criticism happily received. :smile:

Does it mean you need a different batch name for every record?

Not every record, but a measurement for a batch of, say, a Corona Extra clone. I'm not sure it will even work. I'm still experimenting during reset times.

I'm still working on the logic. I can filter data in Grafana by adding a field to the WHERE clause but with 6 series in a graph I really don't want to have to change every query to see a past batch.
Work in progress.

Ok, then you have to set the batch name for a measurement depending on your criteria before the delay node.
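A minimal sketch of that, assuming the fields object is already in msg.payload at this point and that the current batch name has been stored in flow context under 'batch' (both of those are assumptions, not part of the flow above):

// attach the current batch name as an InfluxDB tag
var batch = flow.get('batch') || 'unknown';    // set elsewhere, e.g. by an inject or dashboard node
msg.payload = [msg.payload, { batch: batch }];
return msg;

Grafana could then select a past batch with something like WHERE "batch" = 'Trial' in the InfluxDB query.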

Aha, the best thing would be if you could post a working flow at the end.