Create an app that processes files from an S3 bucket, does something with the data, and moves them to another S3 bucket. The app has to track the stages of this process. If you could make the stages configurable, it could be used by many businesses that have this type of process and don't have money to spend on ETL tools. A stage could be a node that is hooked to events from the DB and can be used to configure custom actions. The file processing has to be async and send an email to the user who uploaded the file once it has finished.
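A minimal sketch of what the stage tracking could look like; the stage names and the `FileJob` shape below are just illustrative assumptions, not an existing API:

```typescript
// Illustrative stage tracking for the pipeline described above.
// The stage names and FileJob shape are assumptions, not an existing API.
type Stage = "uploaded" | "parsing" | "transforming" | "moving" | "done" | "failed";

interface FileJob {
  key: string;        // object key in the source S3 bucket
  uploadedBy: string; // email of the user who uploaded the file
  stage: Stage;
  updatedAt: Date;
}

// Persist each stage transition; a "stage node" subscribed to DB events
// can then react to the change and run its configured custom action.
async function advance(
  job: FileJob,
  next: Stage,
  save: (j: FileJob) => Promise<void>,
): Promise<FileJob> {
  const updated: FileJob = { ...job, stage: next, updatedAt: new Date() };
  await save(updated); // e.g. an UPDATE that fires a trigger/notification
  return updated;
}
```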
Businesses often have .csv, .json, .txt or Excel spreadsheets that have to be parsed. So, as an additional feature to the above example, it would be fun if the parser were chosen dynamically based on the file type.
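Something like this, as a rough sketch; the extension-to-parser lookup is an assumption and the parsers are placeholders for whatever the flow would really use:

```typescript
// Pick a parser based on the file extension; the parsers themselves are
// placeholders (real CSV/Excel handling would use a proper library).
type Parser = (raw: Buffer) => unknown;

const parsers: Record<string, Parser> = {
  ".json": (raw) => JSON.parse(raw.toString("utf8")),
  ".csv":  (raw) => raw.toString("utf8").split(/\r?\n/).map((line) => line.split(",")),
  ".txt":  (raw) => raw.toString("utf8"),
  // ".xlsx" would delegate to a spreadsheet library
};

function pickParser(filename: string): Parser {
  const ext = filename.slice(filename.lastIndexOf(".")).toLowerCase();
  const parser = parsers[ext];
  if (!parser) throw new Error(`No parser registered for ${ext}`);
  return parser;
}
```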
And don’t forget the Easter egg. When the user presses
Which “local”: Node-RED server local, or client browser local?
For me, the most common DATA files are JSON and CSV, occasionally Excel. But you can’t ignore SQLite either (though I think you should probably punt that to a future academy or it will get very complex).
However, I also very often am handling HTML, JS and CSS files.
Nothing. I’m not interested in Node-RED AI integration.
I tend to try and limit the number of third-party services I integrate with, as it's overhead that the user then has to go and sign up for, or integrate with, but I will see what we can do.
I tend to try and limit the number of third-party services I integrate with
If you want to cover AI, you are depending on third-party services. This is actually where I think Node-RED is missing the boat. If you look at the popularity of n8n, it is purely because of AI integration, something that should not be 'superhard' to integrate natively: two APIs, OpenAI and Ollama. The same goes for storage/databases. I see that FlowFuse is now offering this feature because it is an important aspect; although the flows may be stateless, they are often used in conjunction with existing data.
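For what it's worth, a native wrapper around the local option would not need to sit on much. A minimal sketch of a call to a locally running Ollama instance (default port 11434; the model name is just an assumption):

```typescript
// Minimal call to a locally running Ollama instance.
// Assumes Ollama's default endpoint and that a "llama3" model has been pulled.
async function askOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = (await res.json()) as { response: string };
  return data.response;
}
```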
If you are really going to cover AI, then covering offline and local AI model use would be sensible, since this would be more of a USP for Node-RED and is often required in the kind of enterprise environments where Node-RED could be really useful, such as health analytics.
This involves both training local, specialist models and using them. Of course, you probably wouldn't want to go into massive detail, because these are themselves specialist areas, but rather show how to integrate Node-RED into local model training and use.
I have discovered that AI search answers are often more relevant to my enquiry than non-AI ones, so like many, I'm not totally new to the subject.
AI in Node-RED, though, I have no concept of what this has to offer.
So I guess an example getting astonishingly good results from minimal input would be welcome.
Not a chat bot please though, surely everyone despises those!
There is a big difference between public LLMs and local/private/specialist ones. Specially trained LLMs can produce far more accurate and "safe" outputs (depending on the training, of course), but they are designed for a narrower range of tasks.
In the context of Node-RED being a tool to do analytical work - whether that is against text content or image/video - those specialist LLMs are much more interesting than simply getting ChatGPT to give you a flow to do something.
Rather humiliatingly, I just asked ChatGPT to simplify what I would describe as two monumentally complex SQL queries. It claims to have simplified both and even returns the same data set (from a limited number of records).
I've not yet analysed its response to see if I agree on "simpler". I'm hoping to find an error.