Optimisation tactics - JSONata and State Machines

#1

Having been hooked on NR for about a year, steadily evolving a massive spaghetti of flows with a somewhat diverse bouquet of common solutions, I have got to a point where it feels like it carries some unnecessary weight, and I want to set a strategy for optimisation - while also learning something new.

Would it make sense in general, and improve performance, to turn as many as possible of my scattered function nodes, coupled with switch, change and Template nodes, into maybe just one change node per place utilising JSONata (which I am just getting familiar with)? I somewhat feel the point of the visual GUI of NR is to stretch flows out with wires and not hide too much logic in massive single nodes, but mine is certainly not looking good, and the only reason it is workable is that I'm in it so often and mostly know where things are hiding.
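
To make it concrete, here is the kind of consolidation I have in mind - just a sketch with a made-up temperature payload, where today the work is spread over a switch, a change and a function node:

```
// Today, spread over a switch node (route on unit), a change node (rename the
// property) and a function node (round the value); the same could sit in one
// function node (payload shape is hypothetical):
if (msg.payload.unit === "F") {
    msg.payload.celsius = Math.round((msg.payload.value - 32) * 5 / 9 * 10) / 10;
} else {
    msg.payload.celsius = Math.round(msg.payload.value * 10) / 10;
}
return msg;

// ...or collapse it into a single change node: "Set msg.payload.celsius to"
// the JSONata expression
//   $round(payload.unit = "F" ? (payload.value - 32) * 5 / 9 : payload.value, 1)
```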

Having also gone from retaining MQTT messages to increasingly updating global context variables all over the place in convoluted ways, would I also benefit from some kind of central state machine? Maybe not performance-wise, but is it a "next step" way to get overview and control, and be ready for further flows and development?
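
By a central state machine I mean something along these lines - a rough sketch only, with invented state names, where one function node owns the state and everything else just sends it events:

```
// Hypothetical central state machine in one function node. Incoming messages
// carry an event name in msg.topic; the current state lives in flow context
// instead of being scattered across many global variables.
const transitions = {
    idle:     { motion: "occupied" },
    occupied: { timeout: "idle", alarm: "alert" },
    alert:    { reset: "idle" }
};

const state = flow.get("houseState") || "idle";
const next = (transitions[state] || {})[msg.topic];

if (next) {
    flow.set("houseState", next);
    msg.payload = { from: state, to: next, event: msg.topic };
    return msg;      // announce the transition downstream
}
return null;         // ignore events that are not valid in the current state
```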

I have never bothered with subflows though. Maybe this is the better starting point.

Any other good general strategies or thematic tactics for optimisation?

#2

Subflows are a great way to do this, especially with 0.20 where subflows can have per-instance environment variables.
The Change node with JSONata can be very powerful, however it'll be slower than the equivalent JavaScript, so if speed is an issue a Function node is still the best.
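
As a sketch of what the per-instance environment variables enable (the variable name is made up, and reading it with env.get() in a Function node needs a newer Node-RED than 0.20 - on 0.20 you would reference it as ${ROOM_NAME} in node property fields instead):

```
// Inside a subflow: pick up the per-instance environment variable so the same
// subflow can be dropped in for the kitchen, the bedroom, and so on.
// (ROOM_NAME is hypothetical; env.get() requires a newer Node-RED than 0.20.)
const room = env.get("ROOM_NAME") || "unknown";

msg.topic = "home/" + room + "/temperature";
return msg;
```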

#3

I use virtual wires quite a lot, which cleans up the spaghetti quite a bit and eliminates a lot of the global vars I used throughout the flows.

#4

Experience has taught me not to say "optimization" but rather "optimization of." If you are asking about speed or resource efficiency, these are hard to measure with NR, but if you are pushing the limits of your hardware you probably need a different approach. Actually, you seem to be asking about readability or maintainability. These can be in the eye of the beholder, but it might be useful to share opinions and experience. I will try to drop a few comments here in the next day or two.

#5

I thought the opposite was true, but I may be wrong.

#6

Thanks. Will look into subflows for readability and reuse. And try to wrap my head around the importance of instance env variables...

#7

I have an (irrational?) dislike of links/virtual wires, as I feel they cheat the inherent readability of a visual environment like NR (when used on the same tab). Besides, I use global vars mostly to have the latest readings available after a restart/breakdown (but some also to share across flows, like you said). I used to put everything out to MQTT retained, but it was harder to keep an overview of all the available values than with Context, and maybe an unnecessary overhead when not used outside NR.
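
For completeness, the restart part relies on a file-backed context store, roughly like this sketch (the store name "file" and the key are my own examples, not defaults):

```
// Assumes settings.js defines a persistent store named "file", e.g.
//   contextStorage: { default: { module: "memory" },
//                     file:    { module: "localfilesystem" } }
// In a function node, write the latest reading to the persistent store...
global.set("lastTemperature", msg.payload, "file");

// ...and read it back after a restart, with a fallback if nothing is stored yet.
msg.payload = global.get("lastTemperature", "file") || "no reading yet";
return msg;
```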

#8

Succinct differentiation. Yes, it may be mostly maintainability I am worried about, but resource efficiency at the same time. As a theory geek I am very much looking forward to further views and comments.

#9

I would be very interested in some more views (and hard facts?) concerning JSONata versus JavaScript in function nodes. Because if the readability isn't necessarily better, and performance is worse too, then the only remaining advantage is that it's faster to code (if you are very familiar with it).

#10

When we last looked at it, JSONata had worse throughput performance but a better memory footprint. The Function node could handle messages faster, but with a larger memory overhead.

So it very much depends which tradeoff is more important to you.

I know the folk behind jsonata were aware of the performance issues and were looking at them. I'm hopeful they will sort them out and we can have the best of both worlds.

#11

When manipulating large JSON structures I would say JSONata is usually easier to read. One of the main advantages, I think, is that in JSONata there are usually fewer different ways of solving a problem than in JavaScript.
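
For example, with a made-up payload that is an array of { room, value } readings, averaging per room is a single grouping expression in JSONata, while the function-node JavaScript needs an explicit loop:

```
// JavaScript (function node) version of "average value per room"
// for a hypothetical msg.payload = [{ room: "kitchen", value: 21.5 }, ...]:
const sums = {};
for (const r of msg.payload) {
    sums[r.room] = sums[r.room] || { total: 0, count: 0 };
    sums[r.room].total += r.value;
    sums[r.room].count += 1;
}
const averages = {};
for (const room in sums) {
    averages[room] = sums[room].total / sums[room].count;
}
msg.payload = averages;
return msg;

// Equivalent JSONata in a change node, one expression:
//   payload{ room: $average(value) }
```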

#12

That surprises me. I would have thought that retained messages, with explicit wiring to show where they are used, would be easier to understand than convoluted use of global variables, which reduces wiring but makes operation opaque.

#13

Very interesting topic.

  1. Regarding performance and optimization (although the main subject of this post is maintainability and readability): my viewpoint is that even simple hardware (like a Raspberry Pi 3B) is more than fast enough for most use cases not to bother about performance issues. So if JSONata, state machines, subflows or whatever you come up with would give you more maintainable or readable flows, please go for it.
  2. Regarding your maintainability and readability concern: I am a big fan of JSONata in change nodes, make extensive use of flow context variables, and also end up with flows where I wonder whether others would be able to understand the logic. I think the minimum you need to do is split the desired behaviour into smaller, meaningful chunks that can each be encoded in a dedicated Node-RED flow (or subflow), and ensure that those chunks have a clear interface.
#14

Always depends on the throughput you need and the latencies you can accept. I found myself changing some JSONata parts to function nodes to optimize speed, but the beauty of Node-RED is that you can always use both. When speed is critical (processing Artnet, for example) node-red-contrib-unsafe-function is quite useful too.

#15

I have great respect for this attitude and the effort it takes (@knolleary) to sort out performance trade-offs. But having started out (in the '70s) counting bytes and clock cycles, I love that the cost of hardware has come down so fast relative to the cost of coding. We've discussed lots of alternatives to the Raspberry Pi that can be called on when the little fellow starts to struggle, and there is now almost a continuum of price/performance ratios to choose from. There are also options to off-load time-critical, computation-intensive, or power-constrained tasks to coprocessors while keeping the demands on the main logic processor under control. I'm sure there are cases (embedded systems deployed in large numbers and never modified, for example) that benefit greatly from this kind of analysis, but I'm happy to throw a bit more money at hardware to solve my hobby-level problems.