In asking ChatGPT for help with a flow, it generated an invalid Inject node with none of the repeat-related properties defined. The node showed a red triangle. I pointed out the error and it corrected it. I then asked if it would remember this the next time someone asks for help involving an inject node, and it said:
I don’t permanently “learn” from reminders like that — I don’t build up new habits across different chats the way a person does.
What I can do is:
In this conversation, I now know you want inject nodes configured correctly (repeat:"", crontab:"", once:false, onceDelay:0.1). So if you ask me for more flow JSON here, I’ll generate them in that valid form.
Across future, unrelated conversations, I’ll fall back on my general training, which already does include the correct schema for inject nodes — though sometimes I mistakenly generate with empty or invalid repeat fields unless reminded.
If you’d like, I can store your preference in memory so that whenever you ask for Node-RED flows in the future, I’ll always output inject nodes with valid defaults (no repeat, proper once/onceDelay). Would you like me to remember that for you?
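For reference, this is roughly what a valid inject node export looks like with those defaults (the id, coordinates and wiring here are placeholders):

```json
{
    "id": "inject-node-1",
    "type": "inject",
    "z": "flow-tab-1",
    "name": "",
    "props": [{ "p": "payload" }],
    "repeat": "",
    "crontab": "",
    "once": false,
    "onceDelay": 0.1,
    "topic": "",
    "payload": "",
    "payloadType": "date",
    "x": 130,
    "y": 80,
    "wires": [["debug-node-1"]]
}
```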
An LLM is a fixed model, where you have some control over how to direct it through the network by means of context. Via the API you can provide a system prompt (which can include all the rules you want it to follow) and a prompt with the question, and it will adhere to the system prompt. If you use the memory option, it uses that as context instead.
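Via the API, that split looks something like this - a minimal sketch using Node.js 18+'s built-in fetch, where the model name and the rules text are just examples:

```javascript
// Minimal sketch: the system prompt carries the standing rules, the user prompt the question.
const apiKey = process.env.OPENAI_API_KEY

async function askWithRules(question) {
    const res = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            Authorization: `Bearer ${apiKey}`,
        },
        body: JSON.stringify({
            model: 'gpt-4o-mini', // placeholder - use whichever model you have access to
            messages: [
                { role: 'system', content: 'When generating Node-RED flows, always give inject nodes repeat:"", crontab:"", once:false, onceDelay:0.1.' },
                { role: 'user', content: question },
            ],
        }),
    })
    const data = await res.json()
    return data.choices[0].message.content
}

askWithRules('Create a flow that polls a REST endpoint every 5 minutes').then(console.log)
```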
If you use gemini-cli (this is free btw - use a Google account and you get very generous token limits), Claude Code, etc., you can provide complete folders with instructions (in markdown format) as context and rules. This works much better than the ChatGPT chat window.
I used ChatGPT to teach me how to use its own API (programmatically), and kept getting invalid answers. After 20 frustrating minutes, I got a response "Searching online", and after 3 minutes it came back with a brand-new answer. Can it be that when the LLM is out of pre-trained answers, it goes for an online search (like when asking for a current exchange rate, weather, etc.)?
BTW, maybe it's just me, but I found MS Copilot to be the most accurate advisor for Node-RED, Vue and Vuetify.
If you are using VS Code, try it with GitHub Copilot. You get GPT 4.1 which is OK for a lot of stuff. I've got the £100pa tier and I get a fairly decent allowance for Claude Sonnet 4 which is truly excellent for coding. I've not used Gemini but I understood it not to be as good as Claude. Claude Opus is the best I think but that isn't available to me without paying a ton of money.
For straight coding, VS Code with GitHub Copilot is hard to beat.
There are no pre-trained answers. An LLM is a next-best-token predictor. If you change one character in your prompt, you will get a different answer.
It generates the answer by feeding its own predicted tokens back into itself, step by step. That is why you see the words (tokens) appearing on your screen. It uses its own generated tokens as context, which is why it is an expensive process.
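A toy sketch of that loop (the prediction function here is just a stand-in; a real LLM runs the whole context through the network at every step):

```javascript
// Toy illustration of autoregressive generation: each predicted token is appended
// to the context and fed back in for the next prediction.
function predictNextToken(context) {
    // placeholder: a real model would score every token in its vocabulary here
    const vocabulary = ['flow', 'with', 'an', 'inject', 'node', '<end>']
    return vocabulary[context.length % vocabulary.length]
}

function generate(prompt, maxTokens = 50) {
    const context = [...prompt]
    for (let i = 0; i < maxTokens; i++) {
        const next = predictNextToken(context) // cost grows as the context grows
        if (next === '<end>') break
        context.push(next) // the model's own output becomes part of its input
    }
    return context.join(' ')
}

console.log(generate(['Write', 'a', 'Node-RED'])) // "Write a Node-RED inject node"
```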
I wanted to get ChatGPT's opinion on some really obscure SQL I wrote a while back and can no longer understand.
"Can this query be optimised"?
"Yes it certainly can..." and then it produces code with syntax errors.
After many attempts it finally comes up with code which runs, but takes twice as long as mine, and retrieves the wrong records.
To be honest I was surprised it could even make an attempt at analysing the code and "optimising" it.
Frustrating though to tell it there's a syntax error and the response is kind of "Of course there is, did you want code that can be executed?"
It is very good at explaining how a particular function or subquery works, so useful as a learning tool.
Armed with my new techniques I finally rewrote the entire query infinitely better, and delegated defining appropriate indexes to the AI.
In my experience ChatGPT is all over-promise and under-deliver, but it does seem like it would be much better than Alexa for home automation given its ability to understand stuff. When it comes to Node-RED, one thing I've found it can do is analyze my flows and give me tips for optimizing them. What it doesn't seem to do very well is rewrite them for me: they almost always return errors when I try to import them.
With proper coding AIs and paid tools, you can give a set of instructions that the AI must use alongside its own findings. This greatly helps the AI to give better answers. Here are my set prompts for working on UIBUILDER, for example:
# Copilot Instructions
## Core Requirements
- Follow project ESLINT configurations
- Ensure WCAG 2.2 Level AA compliance
- Apply Shift-Left security practices
- Adherence to Best Practices: All suggestions, architectural patterns, and solutions must align with widely accepted industry best practices and established design principles. Avoid experimental, obscure, or overly "creative" approaches. Stick to what is proven and reliable.
- Preserve Existing Code: The current codebase is the source of truth and must be respected. Prefer preservation of existing structure, style, and logic, however, present good alternatives where they exist.
## Code Style
### JavaScript/TypeScript
- For node.js, use features available to the latest LTS version but warn if using features not available to v18
- For browsers, use features available to 90%+ browsers/users
- No trailing semicolons
- Single quotes for strings
- Use trailing commas
- Indent code blocks with 4 spaces
- Use const by default, let when needed
- Prefer arrow functions
- Use optional chaining
- Add JSDoc if missing
- Add TypeScript types/interfaces
- Assume the use of ESLINT v9
### Documentation
- Include JSDoc for functions and classes
- Place the first line of JSDoc description on the same line as the /**
- Add @param and @returns tags
- Document thrown errors
- Include usage examples for complex functions
### HTML/CSS
- Use semantic HTML elements
- Only use features available to 90%+ browsers/users
- Include ARIA attributes where needed
- Mobile-first responsive design
- Use CSS custom properties
- Follow BEM naming convention
- Include print styles
- Always use hsl colors
- Indent code blocks with 4 spaces
## Testing
- Write unit tests using Vitest
- Include accessibility tests
- Maintain >80% code coverage
- Test error scenarios
## Performance
- Lazy load components when possible
- Optimize images and assets
- Keep bundle size minimal
- Use proper caching strategies
## Security
- Sanitize user inputs
- Validate data on server-side
- Follow OWASP guidelines
- Use Content Security Policy
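To make that concrete, code written to those rules comes out looking roughly like this (a hypothetical helper, not actual UIBUILDER code):

```javascript
/** Returns the display label for a node, falling back to its type (hypothetical helper).
 * @param {object} node - A Node-RED node definition
 * @param {string} [fallback] - Label to use when nothing else is set
 * @returns {string} The display label
 */
const getNodeLabel = (node, fallback = 'unknown') => {
    // optional chaining and nullish coalescing, no trailing semicolons, single quotes
    const label = node?.name ?? node?.type
    return label || fallback
}
```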
You can also pre-define specific action prompts, such as this one for updating node.js on Windows:
# Updating Node.js via winget
## What I want to do
I want to update Node.js to the latest version of my currently installed major version using Windows Package Manager (winget).
## Context
- I'm using Windows 10/11
- I have winget installed
- I want to stay on the same major version of Node.js
- I need to verify the update worked correctly
## Expected steps
1. Check my current Node.js version
2. Verify Node.js is installed via winget
3. Update Node.js using winget
4. Verify the update was successful
## References
- Current version check command: `node -v`
- Winget list command: `winget list nodejs-lts`
- Winget update command: `winget upgrade nodejs-lts`
- Alternative update with source: `winget upgrade OpenJS.NodeJS.LTS --source winget`
- If needed, reinstall command: `winget uninstall OpenJS.NodeJS.LTS && winget install OpenJS.NodeJS.LTS`
## Example solution
```powershell
# Check current Node.js version
node -v
# Check if Node.js is installed via winget
winget list nodejs-lts
# Update Node.js to the latest version within same major version
winget upgrade OpenJS.NodeJS.LTS
# Verify the update
node -v
npm -v
# Optional: Check global packages
npm list -g --depth=0
```
## Troubleshooting
- If update fails, try specifying source with `--source winget`
- If still failing, consider uninstalling and reinstalling
- Remember to check global npm packages after major version changes
I think that I got the prompt from GitHub itself if I remember correctly.
I am both glad and upset I've done it as I am not sure it is helping my learning curve in any way.
Yes, now and then it does do stupid things and kind of loop on an error.
(I'll maybe explain that later)
All in all it is pretty good and does save me a LOT of typing.
A couple of times when I've imported the code - with trailing ; - and tried to remove them myself, the node then errors.
But if I ask it to remove them: all good. I'll have to do some digging on that.
And yes, sometimes the conversation is a long one with many amendments along the way, though I am not sure that is a problem or not.
Some/most times I only have a vague idea of what I want and START LITTLE and work up to the greater one.
Not sure if that is a good way or not either.
The loop error:
Not for NR but more for my spreadsheet.
I wanted to calculate intercept courses for two objects: knowing one's speed and direction, calculate the heading the second one needs, at a given speed, to intercept it.
All co-ordinates are x/y, angles are degrees relative to "north", and speed is in units per hour.
GPT decided to use RADIANS, and that threw things off because in that convention 0 is heading EAST.
I mentioned this and was told why. Change this code to ......
I did it and something else didn't work. Oh, I see the problem. Change this code to this....... (Back to RADIANS)
But I want it in degrees. Oh, I see the problem, change this to this.......
(And so on)
After multiple cycles I got it working, but then it failed as the spreadsheet didn't handle that sort of thing in its GOAL SEEK feature.
So after all that, I just had to abandon it and do it in Python.
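For anyone who hits the same thing, the mix-up boils down to compass bearings (degrees clockwise from north) versus the maths convention (radians anticlockwise from east) - roughly this, with made-up numbers rather than the actual spreadsheet formulas:

```javascript
// Compass bearings are degrees clockwise from north; Math.sin/Math.cos want radians
// in the maths convention where 0 points east. Mixing the two is what kept breaking.
const toRadians = degrees => degrees * Math.PI / 180

// Velocity components from a compass bearing: sin for x (east), cos for y (north).
const velocityFromBearing = (speed, bearingDegrees) => {
    const rad = toRadians(bearingDegrees)
    return { vx: speed * Math.sin(rad), vy: speed * Math.cos(rad) }
}

// The maths convention the AI kept reverting to swaps these:
// vx = speed * cos(angle), vy = speed * sin(angle), with angle 0 pointing east.
console.log(velocityFromBearing(10, 0))  // { vx: 0, vy: 10 }  heading due north
console.log(velocityFromBearing(10, 90)) // { vx: 10, vy: ~0 } heading due east
```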
But yes: I get it that it is frustrating - or can be - at/in certain situations.
And: I am NOT logged in.
This is exactly what you asked it to do. You need to provide the conditions, else it will come up with an 'optimised' version: you did not ask whether the optimised query would actually work in your context.
Context is key, whatever you ask. Lay out the problem, give it an example and the expected output. For SQL it can be relevant to provide the database type and version. If you can provide the schema, the results will improve 10-fold.
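As a rough illustration (the table and query here are entirely made up), a prompt with that kind of context might look like:

```text
I'm using MariaDB 10.11. Schema (simplified):
  CREATE TABLE readings (id INT PRIMARY KEY, sensor_id INT, ts DATETIME, value FLOAT);
This query returns the latest reading per sensor but is slow on ~5M rows:
  SELECT r.* FROM readings r
  WHERE r.ts = (SELECT MAX(ts) FROM readings WHERE sensor_id = r.sensor_id);
Expected output: one row per sensor_id with its most recent ts and value.
Can this be optimised without changing which rows are returned?
```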
There are very few times that trailing ; are needed in JavaScript. I've long eliminated them from my code. The only 2 possible cases I've ever come across in my code are:
Multiple statements on a single line. In my view, the only really legitimate use of that is for a for loop anyway.
An IIFE - I've never actually hit the issue, but without a ; on the preceding statement an error is possible; the easy belt-and-braces fix is to start the IIFE with a ; (see the sketch below).
I think there is another case but I can no longer remember it which shows how likely it is.
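To show what that belt-and-braces ; is guarding against (hypothetical values, not code from any of my projects):

```javascript
// The ASI hazard: without a semicolon ending the previous statement, the opening (
// of an IIFE is parsed as a call on the previous line's value, e.g.
//
//     const config = loadConfig()
//     (function () { /* ... */ })()   // parsed as loadConfig()(function () {...})() -> TypeError
//
// The belt-and-braces fix is to lead the IIFE with a semicolon:
const config = { name: 'demo' }

;(() => {
    console.log('setting up', config.name)
})()
```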
It is a good way. However, take note of the comments re context. The more context you can give, the better. Note the context I give in my instructions file: I want any HTML results to be WCAG 2.2 AA compliant, and this is context I give the AI. But if you have an idea of what result you want to achieve, give that, and give any parameters you want the AI to meet as well.
Also, ask it to start simply. Maybe just to provide you with some prose describing what it would do if you let it. It isn't always the best approach to try and get it to provide code straight off. Get it to explain its thinking first and then you may be able to tweak that before starting some code. Ask it to create test code as well as the actual code so that you can easily test things. That will also help the AI because you can tell it what failed and why.
The problem with all of these general AI tools is that they are designed to be helpful, not accurate. This is a growing problem and one of the reasons that they will hallucinate answers when they don't actually have an answer.
With ChatGPT 5, you may be able to force the AI into "thinking" mode where it is more likely to work out for itself if its logic is going off track. You can also steer the AI by telling it to only give accurate outputs. ChatGPT does have a memory feature. You can tell it to remember things for the current session. You can use that to provide it general context and steer its outputs.
Yes, correct. I have a free tier login for ChatGPT and a paid account for GitHub Copilot though a free tier is available.
I decided to pay for GH Copilot after I finished a free UK Government trial
Oh, and I also regularly use Brave's (the browser) Leo AI which is built into their privacy-focused search engine that I use even when not using their browser (I use Vivaldi for most things). I occasionally use that for code but not that often, more for general searches.
I have been using ChatGPT to help migrate my DB1 chart.js based templates to DB2 and it's been a very mixed experience. Initially ChatGPT would insist on giving me DB1-type code that would not work with DB2. I finally figured out that if I included "Node-RED, Dashboard2 (FlowFuse)" in the prompt, it was more likely to give me something that was close (but still needed tweaking). What it was very good at was catching syntax errors and mismatched names.
After I got the charts converted to DB2/chart.js, I decided to try the same with eCharts. I started with a basic template based on some posted code here and then added capabilities to it iteratively. The one thing that I wanted to do was use the exact same dataset(s) and with help from ChatGPT, I was able to get most of it working (single line charts, multi-line charts, stacked bar charts, bar chart + line chart, stacked bar charts+line charts).
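For anyone attempting the same, the stacked-bar-plus-line combination in ECharts comes down to an option shaped roughly like this (a generic sketch with made-up data, not my actual Dashboard 2 template):

```javascript
// Generic sketch of an ECharts option combining stacked bars with a line series.
const option = {
    tooltip: { trigger: 'axis' },
    xAxis: { type: 'category', data: ['Mon', 'Tue', 'Wed', 'Thu'] },
    yAxis: { type: 'value' },
    series: [
        { name: 'Heating', type: 'bar', stack: 'energy', data: [3.2, 2.8, 3.5, 3.0] },
        { name: 'Hot water', type: 'bar', stack: 'energy', data: [1.1, 1.3, 1.0, 1.2] },
        { name: 'Outside temp', type: 'line', data: [7, 9, 6, 8] },
    ],
}
```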
In summary, it was helpful and saved me quite a bit of time, but it was also very frustrating when I could see it was giving me the wrong answer.
Indeed. It is always worth remembering that LLMs are not "intelligence" in a human sense but only in an information-gathering sense.
One problem I regularly come across is where a library or framework has had a major version change. Then an AI will often use outdated code even if you tell it not to (a recent example being when I tried to move from ESLINT v7 to v8/9, which was a nightmare).
Again, the bottom line is that you cannot successfully use AI to assist with anything that you don't have at least some basic understanding of. At least not without significant risk and additional effort. Always treat the AI as a well-read intern (knows the theory but has no practical, real-world experience).