Function Node and complete Flow for the LLaMA AI Text Generator Model

As promised yesterday: the function node for the LLaMA AI text model is now fully available. Please note: inference runs on the CPU (no special hardware required) and still completes within a few seconds.

Having LLaMA inference as a self-contained Node-RED node lets you build your own user interface or even autonomous agents.

The distribution comes with flows for both the basic function node and a complete HTTP endpoint that responds to incoming requests. A trivial web page is also included, which can serve as a user interface for the AI model.
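To give an idea of how such a web page might talk to the HTTP endpoint: the sketch below builds a JSON request for a prompt and shows how a browser could send it. Note that the endpoint path (`/llama`) and the `{ prompt }` payload shape are assumptions for illustration only — the actual route and message format are defined by the flow in the repository.

```javascript
// Hypothetical client helper. The "/llama" path and the { prompt }
// JSON body are assumptions - check the flow in the repository for
// the actual endpoint definition.
function buildLlamaRequest (prompt, endpoint = '/llama') {
  return {
    endpoint,
    options: {
      method:  'POST',
      headers: { 'Content-Type': 'application/json' },
      body:    JSON.stringify({ prompt })
    }
  }
}

// a browser-based UI could then send a prompt like this:
// const { endpoint, options } = buildLlamaRequest('Tell me a joke')
// fetch(endpoint, options)
//   .then((response) => response.text())
//   .then((completion) => console.log(completion))
```

Keeping request construction in a small pure function like this makes the UI code easy to test without a running Node-RED instance.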

See the related GitHub repository for installation and usage instructions.