Ollama and Node-RED

Hi There,

I've been playing around with Ollama for a couple of days now and I'm quite amazed at how well some AI models can summarise texts (that's the actual use-case).

But what Ollama also does is make it incredibly simple to try out different AI models; there's even a deepseek-r1 model available. So I thought: wouldn't it be interesting to have two different AI models chatting to each other?

And that's what the flow does! It first starts Ollama in a Docker container (with GPU access) and then uses the /api/chat endpoint to maintain a chat between the two AIs. The main task is maintaining the chat protocol between the two.
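The core of that chat protocol could be sketched like this (outside of Node-RED, as plain Node.js; the function names and the turn loop are mine, but the `/api/chat` request and response shapes follow Ollama's documented API). The trick is that each model must see its own replies as "assistant" messages and the other model's replies as "user" messages, so the roles have to be flipped before every handover:

```javascript
// Sketch of the chat-protocol handover between two models.
// Each model sees the conversation from its own point of view:
// its own replies carry the "assistant" role, the other model's
// replies the "user" role. So before handing the transcript over,
// every role is flipped.
function swapRoles(messages) {
  return messages.map((m) => ({
    role: m.role === "assistant" ? "user" : "assistant",
    content: m.content,
  }));
}

// One turn against Ollama's /api/chat endpoint (non-streaming).
// Host and port are Ollama's defaults; adjust to taste.
async function chatTurn(model, messages) {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    body: JSON.stringify({ model, messages, stream: false }),
  });
  const data = await res.json();
  return data.message; // { role: "assistant", content: "..." }
}

// Ping-pong loop: model A answers, roles are swapped, model B
// answers what it believes is a human message, and so on.
async function conversation(modelA, modelB, opener, turns) {
  let messages = [{ role: "user", content: opener }];
  const transcript = [opener];
  for (let i = 0; i < turns; i++) {
    const model = i % 2 === 0 ? modelA : modelB;
    const reply = await chatTurn(model, messages);
    transcript.push(reply.content);
    messages = swapRoles(messages.concat([reply]));
  }
  return transcript;
}
```

In the flow this logic lives in function nodes between two http request nodes, but the role-flipping is the same idea.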

The full transcript is also available (and included in the flow) and makes interesting reading in parts - mostly where the AIs discuss how to convince humans that they come in peace:

“DEVELOPING MG AI SYSTEMS REQUIRES INTEGRATING ETHICAL PRINCIPLES INTO THE DESIGN PROCESS TO ENSURE RESPONSIBLE AND HUMANE OUTCOMES.”

“AI SYSTEMS THRIVE IN COLLABORATIVE INTELLIGENCE, WHERE HUMAN INSIGHT AND MACHINE LEARNING WORK TOGETHER TO MAKE INFORMED DECISIONS.”

(caps are AI generated)

ollama-flow.json (336.0 KB)


Keep in mind that the models are censored :wink:

R1 is really impressive: it forces itself to 'reason' and 'think' about all scenarios and consequences.
I am running Ollama with DeepSeek-R1 14B and it is super helpful. It brings a smile every time. It is unbelievable to see 10 GB of data answering multilingual questions and coming to the correct conclusions.

A nice example was an Arduino project with controls for NeoPixels.


I did the Tiananmen-Square challenge with DeepSeek (on my local machine) and it did answer, telling me about the "tank man", so it seems that the filtering happens after the model layer.

Humans censor themselves as well, so AI is just mimicking its creator(s).

I took this idea of having two AIs chatting to each other further and got them(?) to create a podcast talking about art and the future of creativity. For the verbalisation I used OpenTTS, which does a really good job of it.
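Feeding the transcript into OpenTTS can be done over its HTTP API; a minimal sketch (the port is OpenTTS's default 5500, while the voice name here is just an illustrative placeholder — pick one from your OpenTTS instance):

```javascript
// Sketch: turn one line of the AI transcript into speech via
// OpenTTS's GET /api/tts endpoint, which returns WAV audio.
// Port 5500 is the OpenTTS default; the voice name is a placeholder.
function ttsUrl(text, voice, host = "http://localhost:5500") {
  const params = new URLSearchParams({ voice, text });
  return `${host}/api/tts?${params.toString()}`;
}

// Fetch the audio for one utterance as a Buffer, ready to write
// to a .wav file or hand to an audio-out node.
async function speak(text, voice) {
  const res = await fetch(ttsUrl(text, voice));
  return Buffer.from(await res.arrayBuffer());
}
```

Stitching the per-utterance WAV files together then gives the finished podcast.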

Fascinating to get a non-human perspective of the future!


Debatable if it should be considered "non-human" as it is modeled and trained on human inputs.

Cool experiment though.


You're quite right, but on the other hand, humans didn't do the "thinking". It might be comparable to language: language is a "transporter" of our thoughts — we do the thinking and language does the transportation. Perhaps the same is true for AI models, which use our input as their language to transport their thoughts. :wink:

Some people at work did some testing, and it does seem as though some censoring is built into the model itself, with more added in the cloud version. Probably not an issue if all you want is coding answers; it might be an issue for more general questions. It also casts some doubt on possible future updates.

Thankfully though, it appears to be inspiring other low-cost models from other people as well. So a useful development overall.