Hi There,
Is anyone working on an MCP layer for Node-RED?
Details:
TL;DR:
MCP (Model Context Protocol) seems to be a real-time interface that lets LLMs access data and services on the fly.
It seems to be a form of API that is simpler for an LLM to understand than something like a raw JSON/REST interface. It also seems to be becoming standard practice for LLMs to be trained with MCP in mind, so integration becomes simpler for all concerned.
So it would make sense for Node-RED to have an MCP for LLMs to control and discover instances of Node-RED.
Great post - I would be interested in a discussion of what an NR MCP server would look like.
Someone on the n8n forum released an MCP connector two days ago which could potentially be converted.
I can start working on this once I finish my framework to develop nodes more easily.
I thought about these actions:
- all possible deploy types
- start/stop flows
- add/remove/update nodes/wires to enable the agent to build flows with natural language
What else do you think would be needed?
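The actions listed above could be exposed as MCP tool definitions. A rough sketch in JavaScript - the tool names and input schemas here are hypothetical illustrations, not an existing Node-RED API:

```javascript
// Hypothetical MCP tool definitions for a Node-RED server.
// Names and schemas are illustrative only.
const tools = [
  {
    name: "deploy",
    description: "Deploy the current flow configuration",
    inputSchema: {
      type: "object",
      properties: {
        type: { type: "string", enum: ["full", "nodes", "flows"] }
      },
      required: ["type"]
    }
  },
  {
    name: "set_flow_state",
    description: "Start or stop a flow by id",
    inputSchema: {
      type: "object",
      properties: {
        flowId: { type: "string" },
        state: { type: "string", enum: ["start", "stop"] }
      },
      required: ["flowId", "state"]
    }
  },
  {
    name: "update_flow",
    description: "Add, remove or update nodes and wires in a flow",
    inputSchema: {
      type: "object",
      properties: {
        flowId: { type: "string" },
        nodes: { type: "array", items: { type: "object" } }
      },
      required: ["flowId", "nodes"]
    }
  }
];

console.log(tools.map(t => t.name).join(","));
// prints: deploy,set_flow_state,update_flow
```

The agent would discover these via MCP's `tools/list` and invoke them via `tools/call`.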
I guess the debug messages would need to be sent over somehow.
Great to have a conversation with an HAL[1] and have it say something like "Sorry Dave, but I can't execute the NodeRED flow because it's buggy."
Btw some are complaining about the lack of security built in to MCP, so I can imagine that much could still change in the protocol. (I base this on a couple of articles over at Hacker News.)
The holy grail would be if HAL could explain the path a data object would travel through a flow and point out potential problems, i.e., simulate a flow with various data sets.
[1] = Holistically Artificial LLM
Triggering flows and getting responses, as well as logs, would certainly be good. I believe it would be necessary to have some sort of webhook to notify the agent after the flow has executed, or to have the agent poll the status of a flow every N milliseconds, as there could be flows that take too long to complete.
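The polling variant could look like this on the agent side - a sketch assuming a hypothetical status endpoint on the Node-RED server that returns `{ state: "running" | "done" | "error" }`:

```javascript
// Poll a (hypothetical) flow-status endpoint every N ms until the
// execution finishes or a timeout is reached. Uses the global fetch
// available in Node 18+.
async function waitForFlow(executionId, { intervalMs = 500, timeoutMs = 30000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const res = await fetch(`http://localhost:1880/mcp/status/${executionId}`);
    const { state } = await res.json();
    if (state === "done" || state === "error") return state;
    // not finished yet - wait before polling again
    await new Promise(r => setTimeout(r, intervalMs));
  }
  throw new Error(`flow ${executionId} did not finish within ${timeoutMs} ms`);
}
```

A webhook avoids the polling traffic, but the poll/timeout approach is simpler to start with and gives the agent a natural place to give up on a stuck flow.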
I think we would have to create a node-red plugin that listens to certain events to keep track of the status of the flow. This plugin would be configured to send events to a store that our MCP server can later query to analyze the flow execution. What do you think?
There would have to exist some uuid to map all events that belong to the same flow execution request.
Definitely need some kind of plugin for monitoring the internals of NR.
As for sending the data to some external server, I would just use NR - it has a perfectly good http in node for accepting requests from the AI agent. Or even for storage - I've heard of NR being used to store statistics on flow executions.
That's already too much implementation detail for me - I would just create an example MCP server in NR (something like a hello-world app) to which the AI agent can connect. I would definitely build that visually in NR using the http in nodes and co.
And then extend that setup out to encompass more functionality as required.
Having the MCP server in NR will allow others to use that for other services requiring an MCP server.
If the MCP shouldn't be running on the same server that is being monitored, then it's just a matter of setting up a second NR and having the MCP run there - the plugin can then post off its data to that server (if required). But starting out with the MCP on the target server saves the step of sending off data ... fewer implementation details!
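A hello-world of that kind could start with the JSON-RPC 2.0 handling MCP is built on. A minimal sketch with one made-up tool, no sessions or auth - in a real flow this logic would sit in a function node between an http in and an http response node:

```javascript
// Minimal JSON-RPC 2.0 handler for two MCP-style methods.
// The "hello" tool is invented for illustration. In a Node-RED
// function node, rpc would be msg.payload from the http in node.
function handleRpc(rpc) {
  switch (rpc.method) {
    case "tools/list":
      return { jsonrpc: "2.0", id: rpc.id, result: { tools: [
        { name: "hello", description: "Say hello", inputSchema: { type: "object" } }
      ] } };
    case "tools/call":
      return { jsonrpc: "2.0", id: rpc.id, result: { content: [
        { type: "text", text: `Hello from Node-RED, ${rpc.params.arguments.name}` }
      ] } };
    default:
      // standard JSON-RPC "method not found" error
      return { jsonrpc: "2.0", id: rpc.id, error: { code: -32601, message: "Method not found" } };
  }
}

console.log(handleRpc({ jsonrpc: "2.0", id: 1, method: "tools/list" }).result.tools[0].name);
// prints: hello
```

Everything else (which transport, which tools, auth) can then be layered on top of that skeleton, visually, with more nodes.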
I am a bit confused about the nomenclature of the term 'MCP' - reading the documentation, isn't this just a 'tool' that can be defined for the LLM to use? Or should I read it as a 'standardization' for interacting with tools?
I see that there are many 'MCP servers' available which all appear to be just 'tools'?
Reading a bit more, node-red appears to be a good candidate to 'be' the mcp 'server' for many tasks.
It's the gateway for AI to use a tool - it's the glue between the tool (whatever bit of software you like) and HAL (i.e. any AI you care to mention).
Search hacker news for many details including criticism. MCP emerged from one of the big AI companies and has become a de-facto standard for allowing AIs to interact with the "real" world.
It's currently the hot topic amongst AI folk because it's the beginnings of AI being "let loose" on the real world, i.e., singularity is just around the corner.
Laughably, MCP is nothing more than a better REST endpoint ... I imagine someone will come up with a Swagger-like definition soon (or already has).
Exactly, my thinking too.
OOOOooooo hang on, so NodeRED will become the MCP which is an MCP which is an MCP which is connecting to some service - it's NodeRED/MCP all the way down!
I came across a blog post on how to 'enable' tool calling for every model by 'fooling' it via the system prompt.
In a function node I used (the ollama module needs to be made available to the function node, e.g. via the Setup tab):
const stream = false
const server = flow.get("ollama_server")
const { req, res } = msg
const o = new ollama.Ollama({ host: `${server.host}:${server.port}` })
const system_message = {
    role: 'system',
    content: `In this environment you have access to a set of tools you can use to answer the user's question.
available tools:
- weather
- web search
- statistics
If you decide that you need to call one or more tools to answer, you should respond in the following json format:
{function_call:{function:function_name, parameters}}
`
}
const message = { role: 'user', content: msg.payload.question }
const response = await o.chat({ model: 'qwen2.5', messages: [system_message, message], stream })
// streaming variant (with stream = true):
// for await (const part of response) {
//     if (part.done) node.warn({ part, response })
//     node.send({ payload: part, req, res })
// }
// return the assistant message on msg so an http response node can reply
msg.payload = response.message
return msg
Responses seem to work:
// what are today's headlines
role: "assistant"
content: "{function_call:{function:web_search, parameters:"today's headlines"}}"
// how cold is it in the netherlands ?
role: "assistant"
content: "{function_call:{function:weather, parameters:"Netherlands"}}"
The responses could be captured and trigger some flow, although handling the response may be a bigger challenge.
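A sketch of capturing those responses: note that the sample replies above are not strict JSON (unquoted keys), so the handler needs a tolerant fallback. The function name here is mine, for illustration:

```javascript
// Extract the requested tool call from the model's reply.
// Try strict JSON first, then fall back to a regex for the
// loosely-formatted replies shown above.
function parseFunctionCall(content) {
  try {
    const obj = JSON.parse(content);
    if (obj.function_call) {
      return { name: obj.function_call.function, parameters: obj.function_call.parameters };
    }
  } catch (e) { /* not strict JSON, try the regex below */ }
  const m = content.match(/function\s*:\s*(\w+)\s*,\s*parameters\s*:\s*"([^"]*)"/);
  return m ? { name: m[1], parameters: m[2] } : null;
}

const reply = `{function_call:{function:web_search, parameters:"today's headlines"}}`;
console.log(JSON.stringify(parseFunctionCall(reply)));
// prints: {"name":"web_search","parameters":"today's headlines"}
```

The returned `{ name, parameters }` could then drive a switch node that routes to the matching tool flow; a `null` result means the model answered directly.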
oh no, don't do this to me - it works with Ollama? Now I'm going to set up an MCP server at home for my Ollama installation so that it can help me code Erlang, so that I can recreate this MCP in Erlang-RED to get it to tell me how to code up a Node-RED backend in Rust ....
Strange times we live in ... or perhaps tail recursion is the problem!
Ollama has native tool support, but most models don't.