Just wanted to say — NR is way better than that silly n8n app, end of story.
Sad to see so many people wetting their pens.
As much as I agree, it's a bit like saying the colour black is way better than the colour white.
Each has its own raison d'être, neither is the ultimate solution for every problem, and everyone has their own personal preferences.
On the other hand, Erlang-Red bakes cakes in space - far better than both of them combined - hahaha
Now you see what I mean?
Does it not depend on what you want to do with it?
I wonder if the AI nodes for n8n currently hyped on YouTube are really THAT much superior to the Node-RED nodes. Does anyone have experience with building automated flows that integrate some form of LLM (Claude / ChatGPT / etc.)?
I recently used AI to generate test data from JSON schemas (see GitHub - gorenje/vda5050_erlang_red: Breadboards for VDA5050 Protocol, the 6th episode).
For that, Node-RED, Ollama and Gemma2 were plenty, and it worked really well. So now I can say that I've combined AI and Node-RED.
But all I used from Node-RED was the http request node to connect to Ollama; is that really using AI in Node-RED? Can I do that in n8n? For me that is the important part: using my locally installed AIs, not some SaaS solution. For that, Node-RED is probably better.
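In case anyone wants to try the same pattern, here is a rough sketch of a function node sitting in front of the http request node. The endpoint assumes Ollama's default local port and the gemma2 model being pulled; the prompt and field names are only illustrative, not taken from my actual flow.

```javascript
// Function node wired into an http request node whose method is set to
// "- set by msg.method -" and whose URL field is left blank.
// Assumes a local Ollama install on its default port 11434 with gemma2 pulled.
msg.method = "POST";
msg.url = "http://localhost:11434/api/generate";
msg.headers = { "Content-Type": "application/json" };
msg.payload = {
    model: "gemma2",
    // Incoming msg.payload is assumed to carry the JSON schema (illustrative).
    prompt: "Generate three example records that are valid against this JSON schema:\n" +
            JSON.stringify(msg.payload),
    stream: false   // ask for one complete response instead of a stream
};
return msg;
```

With the http request node set to return a parsed JSON object, the model's text then arrives in msg.payload.response and can be routed like any other message.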
I agree. n8n seems to have gone the same route as other cloud vendors, making money from licensing, including doing deals with other cloud vendors. This can, of course, be convenient. But, as always, you are then totally dependent on what those vendors permit you to do and you have to pay whatever they ask for. To say nothing of what data about you they might be gathering and reselling.
Node-RED is a lot more flexible (and free), but at the cost of having to do a bit more of the logic yourself.
The more example flows we publish publicly, the better general-purpose LLMs like ChatGPT and Copilot will get at generating flows.
But for specialist model uses, like doing a specific type of analysis using enhanced, guided prompts, it is better to use some kind of API-driven AI tool, where you get to choose the base LLM best suited to the task and where you can provide additional prompts and specialist inputs (such as standards or government legislation, for example).
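As a rough illustration of what those guided prompts can look like in code, the sketch below passes a system prompt alongside the user's question. Ollama's local chat endpoint and the gemma2 model are used purely as stand-ins for whichever API-driven backend you pick, and the prompt text is invented for the example.

```javascript
// Plain Node.js (18+) sketch: a guided prompt against a chat-style LLM API.
// The endpoint and model are stand-ins; swap in whatever service fits the task.
async function guidedAnalysis(question, guidanceText) {
    const res = await fetch("http://localhost:11434/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            model: "gemma2",
            messages: [
                // The system message carries the specialist input, e.g. an
                // extract from a standard or a piece of legislation.
                { role: "system", content: "Answer only with reference to the following text:\n" + guidanceText },
                { role: "user", content: question }
            ],
            stream: false
        })
    });
    const data = await res.json();
    return data.message.content;   // the assistant's reply
}

// Example use (guidance text shortened for the sketch):
guidedAnalysis("Does a 1.0 m guard rail meet the requirement?",
               "Guard rails must be at least 1.1 m high ...")
    .then(answer => console.log(answer));
```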
As a heavy user of the Claude Code CLI for MERN/MEVN projects, I have noticed how well it handles the heavy lifting of repetitive tasks and how often it comes up with great solutions (after reviewing and fine-tuning the prompts, of course).
I'm curious whether anyone has leveraged this amazing service to let it learn from existing NR flows and create new ones, or to produce quality NR nodes (extensions) with the assistance of Claude Code.
My opinion is that AI can be both a blessing and a curse, but if you know where the curse begins you can set limits for the AI so that it is mostly a blessing for doing repetitive tasks automatically.
On the other hand, all the flows I've created over nearly 8 years were hand-crafted and fine-tuned to the specific needs and requirements of each task, so I wonder if AI makes sense here, since it does well when generalizing but not so well when asked to work on specialized tasks.
I am not sure how well it would do at creating an entire custom node from scratch. It might be interesting to try, but I would rather focus it on helping me deliver more interesting features, since the "boilerplate" bits I can grab from a template very easily. I pay for GitHub Copilot and for the Claude models (definitely the best for coding node/javascript/html/css at present), but I only get a certain limit per month.
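For a sense of how small that boilerplate is, a bare-bones Node-RED node skeleton looks roughly like the sketch below. The node name and behaviour are placeholders, not something from this thread, and the editor-side HTML file is omitted.

```javascript
// my-node.js: runtime half of a minimal custom node.
module.exports = function (RED) {
    function MyNode(config) {
        RED.nodes.createNode(this, config);
        const node = this;
        node.on("input", function (msg, send, done) {
            // Placeholder behaviour: tag each message passing through.
            msg.payload = { tagged: true, original: msg.payload };
            send(msg);
            done();
        });
    }
    RED.nodes.registerType("my-node", MyNode);
};
```

The matching HTML file (palette registration and the edit dialog) is the other half a template would give you.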
Because of that limit, I'll drop back to GPT-4.1 for simpler things. Claude Opus 4.5 has recently landed in VSCode/GH Copilot; I was using Sonnet 4.5 previously. Opus seems a little faster, but I've not done much with it as yet. I just asked it to add a visual warning to the MQTT Explorer proof of concept for when the number of visual elements gets too heavy to be reliable in most browsers; just testing at the moment.
I don't really use models to generate flows since I'll generally be using the Editor as a thought processor anyway, relying on its visual nature to help the design process. You lose that if you try to rely on LLMs. Some people may be more text-oriented than visually oriented in their design though, so it might help them more?