Optimisation tactics - JSONata and State Machines

Having been hooked on NR for about a year, steadily evolving a massive spaghetti of flows with a somewhat diverse bouquet of common solutions, I have reached the point where it all feels like unnecessary weight, so I want to set a strategy for optimisation, while also learning something new.

Would turning as many as possible of my scattered function nodes, coupled with Switch, Change and Template nodes, into perhaps a single Change node using JSONata (which I am just getting familiar with) both make sense in general and improve performance? I somewhat feel that the point of NR's visual GUI is to stretch flows out with wires and not hide too much logic in massive single nodes, but mine is certainly not looking good, and the only reason it is workable is that I'm in it so often and mostly know where things are hiding.
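To make it concrete, here is the sort of thing I have in mind (purely a hypothetical sketch, all property names invented): one Change node setting msg.payload to a single JSONata expression instead of a Switch + function + Template chain.

```jsonata
/* Filter readings, then reshape each one into an alert object */
payload.readings[value > 30].{
    "sensor": id,
    "alert": value > 50 ? "high" : "elevated",
    "text": "Sensor " & id & " reads " & $string(value)
}
```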

Having also gone from retaining MQTT messages to increasingly updating global context variables all over the place in convoluted ways, would I also benefit from some kind of central state machine? Maybe not performance-wise, but is it a "next step" way to get overview and control, and to be ready for further flows and development?
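What I picture (again just a sketch, all names invented) is a single function node that owns the state, with everything else sending events to it:

```javascript
// Hypothetical central state holder: all state changes pass through
// this one function node instead of scattered global.set() calls.
const state = flow.get("houseState") || { mode: "idle", lights: {} };

switch (msg.topic) {
    case "mode":                  // e.g. msg.payload = "away"
        state.mode = msg.payload;
        break;
    case "light":                 // e.g. msg.payload = { id: "kitchen", on: true }
        state.lights[msg.payload.id] = msg.payload.on;
        break;
    default:
        node.warn("unknown event: " + msg.topic);
        return null;              // drop unrecognised events
}

flow.set("houseState", state);    // persist the updated state
msg.payload = state;              // broadcast the new state downstream
return msg;
```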

I have never bothered with subflows though. Maybe this is the better starting point.

Any other good general strategies or thematic tactics for optimisation?


Subflows are a great way to do this, especially with 0.20 where subflows can have per-instance environment variables.
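For example (a sketch; in 0.20 itself you reference an instance property via ${VAR} substitution in a node's configuration fields, while env.get() inside function nodes arrived in later releases):

```javascript
// Inside a function node within a subflow: read a per-instance
// environment variable (SENSOR_TOPIC is a made-up name).
const topic = env.get("SENSOR_TOPIC");
msg.topic = topic || "sensors/default";
return msg;
```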
A Change node with JSONata can be very powerful; however, it will be slower than the equivalent JavaScript, so if speed is an issue a function node is still the best.
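To illustrate with a made-up structure, the JSONata expression payload.readings[value > 30].value in a Change node versus its function-node equivalent:

```javascript
// Function-node equivalent of the JSONata expression
//   payload.readings[value > 30].value
// Typically faster per message, at the cost of more code.
msg.payload = (msg.payload.readings || [])
    .filter(r => r.value > 30)
    .map(r => r.value);
return msg;
```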

I use virtual wires (Link nodes) quite a lot, which cleans up the spaghetti quite a bit and eliminates a lot of the global vars I used throughout the flows.

Experience has taught me not to say "optimization" but rather "optimization of." If you are asking about speed or resource efficiency, these are hard to measure with NR, but if you are pushing the limits of your hardware you probably need a different approach. Actually, you seem to be asking about readability or maintainability. These can be in the eye of the beholder, but it might be useful to share opinions and experience. I will try to drop a few comments here in the next day or two.


I thought the opposite was true, but I may be wrong.

Thanks. Will look into subflows for readability and reuse. And try to wrap my head around the importance of instance env variables...

I have an (irrational?) dislike of links/virtual wires, as I feel they cheat the inherent readability of a visual environment like NR (when used on the same tab). Besides, I use global vars mostly to have the latest readings available after a restart/breakdown (but some also to share across flows, like you said). I used to push everything out to MQTT as retained messages, but it was harder to keep an overview of all the available values than with Context, and maybe an unnecessary overhead when they weren't used outside NR.
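(For the restart case I rely on file-backed context; a minimal sketch of the relevant settings.js fragment, assuming the built-in localfilesystem context store:)

```javascript
// In settings.js: store context on disk so global/flow
// variables survive a Node-RED restart.
contextStorage: {
    default: {
        module: "localfilesystem"
    }
}
```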

Succinct differentiation. Yes, it may be mostly maintainability I am worried about, but also resource efficiency at the same time. As a theory geek, I am very much looking forward to further views and comments.

I would be very interested in some more views (and hard facts?) concerning JSONata versus JavaScript in function nodes. Since the readability isn't necessarily better, if performance is worse too, then all that's left is that it's faster to write (if you are very familiar with it).

When we last looked at it, JSONata had worse throughput but a better memory footprint. The function node could handle messages faster, but with a larger memory overhead.

So it very much depends which tradeoff is more important to you.

I know the folk behind JSONata were aware of the performance issues and were looking at them. I'm hopeful they will sort them out and we can have the best of both worlds.
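(For anyone who wants to measure the tradeoff on their own data, a minimal standalone sketch, assuming the jsonata npm package, whose evaluate() returns a Promise in 2.x:)

```javascript
// Rough throughput comparison of the same transformation done
// via JSONata and via plain JavaScript.
// Run with: node bench.js   (after: npm install jsonata)
const jsonata = require("jsonata");

const data = {
    readings: Array.from({ length: 1000 }, (_, i) => ({ id: i, value: i % 100 }))
};
const expr = jsonata("readings[value > 30].value");

async function main() {
    let t = process.hrtime.bigint();
    for (let i = 0; i < 1000; i++) await expr.evaluate(data);
    console.log("JSONata:", Number(process.hrtime.bigint() - t) / 1e6, "ms");

    t = process.hrtime.bigint();
    for (let i = 0; i < 1000; i++) data.readings.filter(r => r.value > 30).map(r => r.value);
    console.log("JS     :", Number(process.hrtime.bigint() - t) / 1e6, "ms");
}
main();
```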


When manipulating large JSON structures, I would say JSONata is usually easier to read. One of the main advantages, I think, is that in JSONata there are usually fewer ways of solving a problem than in JavaScript.
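For instance (a sketch with an invented structure), grouping and summing reads almost like a description of the result:

```jsonata
/* Hypothetical input: {"orders": [{"customer": "a", "total": 5}, ...]}
   Group order totals by customer and sum them. */
orders{customer: $sum(total)}
```

The JavaScript equivalent needs an explicit loop, or a reduce with an accumulator object.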


That surprises me. I would have thought that retained messages, with explicit wiring to show where they were used, were easier to understand than convoluted use of global variables, which reduces wiring but makes operation opaque.

Very interesting topic.

  1. Regarding performance and optimization (although the main subject of this post is maintainability and readability): my viewpoint is that even simple hardware (like a Raspberry Pi 3B) is more than fast enough for most use cases not to have to bother about performance issues. So if JSONata, state machines, subflows or whatever else you come up with would give you more maintainable or readable flows, please go for it.
  2. Regarding your maintainability and readability concern: I am a big fan of JSONata in Change nodes, make extensive use of flow context variables, and also end up with flows where I wonder whether others would be able to understand their logic. I think the minimum you need to do is to split the desired behaviour into smaller, meaningful chunks, each of which can be encoded in a dedicated Node-RED flow (or subflow), and to make sure those chunks have a clear interface.

It always depends on the throughput and latencies you need. I found myself changing some JSONata parts to function nodes to optimise speed, but the beauty of Node-RED is that you can always use both. When speed is critical (processing Art-Net, for example), node-red-contrib-unsafe-function is quite useful too.

I have great respect for this attitude and the effort it takes (@knolleary) to sort out performance trade-offs. But having started out (in the '70s) counting bytes and clock cycles, I love that the cost of hardware has come down so fast relative to the cost of coding. We've discussed lots of alternatives to the Raspberry Pi that can be called on when the little fellow starts to struggle, and there is now almost a continuum of price/performance ratios to choose from. There are also options to off-load time-critical, computation-intensive, or power-constrained tasks to coprocessors while keeping the demands on the main logic processor under control. I'm sure there are cases (embedded systems deployed in large numbers and never modified, for example) that benefit greatly from this kind of analysis, but I'm happy to throw a bit more money at hardware to solve my hobby-level problems.

@drmibell Hi, I would like to point out that any Turing machine can be optimised once you reverse-engineer its grammar structure. JSONata does a linear traversal; as with CUDA, you cannot expect much, since it cannot jump. I don't know your execution context, but you have to present your data as a matrix/vector to get production-grade quality out of JSONata. First, reject all that crappy XML that lazy Java coders are blasting around, and fetch your data into a matrix. Since you want to make the most of a single-threaded process, you are better off traversing the vector once. In the worst case, where you have a graph of data that cannot be flattened into a single decomposition, you will need to set up as many Turing machines as there are recurrences in the graph. For example, A->[B,C,D], B->[E,D], D->[F,A] will require two separate pieces of code: a first one dealing with [A,B,C] and a second one dealing with [D, fixed A, F].

You flatten your graph by reversing its evaluation, as in RPN:

Pass 1: D, C, E, B, A
Pass 2: A, F, D

This gives you a matrix version of your structure, and then you can apply JSONata transformations, just as you would with CUDA, by isolating the data clusters.

Sorry for my English.

True, relative to a FINITE set of problems defined in advance. In the general case, optimization is not a valid concept, since there will always be an infinite set of problems that cannot be solved in a finite time.

Hi, isn't that because the function node runs inside its own VM? There is a module called node-red-contrib-unsafe-function. Do you still see this memory consumption with it?

For all, and to summarize: JSONata is a poor SQL engine.

For Oracle experts:
REGEXP_LIKE
REGEXP_SUBSTR
REGEXP_INSTR
REGEXP_REPLACE

and the virtual-column generator REGEXP_EXTRACT

are poorly replaced by $map and ()?

Objects are limited to a static JSON representation, but the object.leaf.leaf navigation is roughly the same.

And N-tree data graphs are poorly handled: you cannot emulate CONNECT BY PRIOR, and requests have to flatten the structure.

Overall, query and transformation in JSONata follow the same rules as SQL's normalised access algorithm:

Reduce scope -> collect once -> expand once -> assign to rowset -> filter results
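(For reference, JSONata does ship regex functions; a small sketch of rough counterparts, my own mapping rather than an official one:)

```jsonata
/* Rough JSONata counterparts of the Oracle REGEXP_* functions above
   (an illustrative mapping only). A block returns its last value. */
(
  $contains("abc123", /\d+/);        /* ~ REGEXP_LIKE    => true   */
  $match("abc123", /\d+/).match;     /* ~ REGEXP_SUBSTR  => "123"  */
  $replace("abc123", /\d+/, "#")     /* ~ REGEXP_REPLACE => "abc#" */
)
```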

Ouch! That seems to be very dismissive. JSONata is a specialist processing language, just as regex or XSLT are.

JSONata makes certain types of transformation of JSON (hierarchical) data very easy. However, like any specialist processing language, there comes a point where the logic of the language becomes incomprehensible unless you are using JSONata constantly.
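For example (a sketch, invented structure), flattening nested invoice lines into readable summaries takes only a few lines:

```jsonata
/* Input sketch: {"invoice": {"lines": [{"product": "tea", "qty": 2, "price": 3}, ...]}} */
invoice.lines.{
  "desc": product & " x " & $string(qty),
  "amount": qty * price
}
```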

SQL is about the manipulation of relational tables. In addition, SQL is vastly more mature than JSONata, which is still maturing fairly rapidly.

Personally, I use JSONata where the resulting code will still be comprehensible in six months' time. If it is taking me more than a few hours to work out the JSONata code, it is time to give up and go back to plain old JS. Of course, I like to stretch myself, so I'll sometimes spend a bit longer on JSONata, and if/when I work out something interesting it goes into the blog article I keep on the subject, so that I can look it up again in the future (being a believer, like Einstein, in not cluttering my brain with things that can be looked up!).

If my job revolved around the manipulation of JSON data, I would undoubtedly be using it far more, just as there was a time when I used XSLT extensively for XML manipulation, or various multi-dimensional languages (e.g. APL and Acumen) for analysing complex matrix data.
