Is it possible to serve the editor's static files from a CDN?

Is it possible to include the static editor assets via a CDN?

Not without modifying the editor; it expects to load them from the backend.

What problem are you trying to solve? (Given that most modern browsers don't share cached assets across different pages these days)

Actually, I think you could, but you would need a proxy in front, configured to serve the assets from an alternative URL instead of the default. You would want to test carefully though, and might well hit CORS issues if you aren't careful.

If you need your own assets loaded from elsewhere, that is likely to be possible but would require changing the code in the nodes. Again, you might hit CORS issues.

If all you need is to be able to increase performance of a single Node-RED instance, you will probably want to look at a caching proxy anyway since this is likely to give a boost, especially if the Editor is accessed by many people.

Without knowing why you think you want to do this, it is really hard for us to give sensible answers though.

:question: A single URL should indeed be cached by the browser for any page unless the originating server (or a subsequent proxy) tells it not to or it has some code to prevent it (e.g. a dynamic URL parameter).
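To illustrate that point, here is a minimal generic Express sketch (invented paths, not Node-RED's own routes): whether the browser re-uses a cached copy of a URL comes down to the Cache-Control header the server sends back.

```js
// Generic Express sketch with invented paths (not Node-RED's own routes):
// the same server can allow or forbid browser caching per URL purely via the
// Cache-Control header it sends.
const express = require('express');
const path = require('path');
const app = express();

app.get('/static/lib.js', (req, res) => {
  // The browser may reuse this for 7 days without asking again
  res.set('Cache-Control', 'public, max-age=604800');
  res.sendFile(path.join(__dirname, 'static', 'lib.js'));
});

app.get('/api/state', (req, res) => {
  // Dynamic data: never cached
  res.set('Cache-Control', 'no-store');
  res.json({ now: Date.now() });
});

app.listen(3000);
```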

I have n users (around 20) accessing m Node-RED instances (around 1000), and many of these instances run on low-bandwidth devices. So I would like to have a central CDN for the static assets to reduce the load on them.

OK, so a slight terminology issue here. A CDN (Content Delivery Network) is typically something that large vendors provide to help reduce Internet bandwidth and to deliver content from locations physically closer to you. You need mega-bucks and a world-wide network to create and manage one. :slight_smile:

What you are more likely looking for is a caching proxy. A service that you run that sits "in front" of your micro-services (in this case Node-RED).

By ensuring that all user requests go through the proxy, you build up a cache of common requests and their resulting data which the proxy can send straight back to the requesting browser.
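As a very rough sketch of that mechanism (plain Node.js, with an assumed backend address and a crude allow-list of asset types; in practice you would use nginx, Varnish, Squid or similar rather than rolling your own), cacheable GET responses are stored once and served from memory thereafter:

```js
// Minimal caching reverse proxy sketch in plain Node.js (no dependencies).
// Assumptions: one Node-RED backend on localhost:1880 and a crude allow-list
// of static asset types. A real deployment would use nginx/Varnish/Squid.
const http = require('http');

const BACKEND = { host: 'localhost', port: 1880 }; // assumed backend address
const cache = new Map(); // url -> { statusCode, headers, body }

// Only cache the editor's static assets, never the dynamic admin API
const isCacheable = (url) => /\.(js|css|png|svg|woff2?)($|\?)/.test(url);

http.createServer((clientReq, clientRes) => {
  const key = clientReq.url;

  // Serve a hit straight from memory
  if (clientReq.method === 'GET' && cache.has(key)) {
    const hit = cache.get(key);
    clientRes.writeHead(hit.statusCode, hit.headers);
    return clientRes.end(hit.body);
  }

  // Otherwise forward the request to the backend Node-RED instance
  const proxyReq = http.request(
    { ...BACKEND, path: clientReq.url, method: clientReq.method, headers: clientReq.headers },
    (proxyRes) => {
      const chunks = [];
      proxyRes.on('data', (c) => chunks.push(c));
      proxyRes.on('end', () => {
        const body = Buffer.concat(chunks);
        if (clientReq.method === 'GET' && proxyRes.statusCode === 200 && isCacheable(key)) {
          cache.set(key, { statusCode: 200, headers: proxyRes.headers, body });
        }
        clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
        clientRes.end(body);
      });
    }
  );
  proxyReq.on('error', () => { clientRes.writeHead(502); clientRes.end('Bad gateway'); });
  clientReq.pipe(proxyReq);
}).listen(8080);
```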

However, a slight caveat which you probably already know. Node-RED is quite a dynamic tool and therefore there are many things that cannot be cached.

That said, with a sufficiently large cache to support your requests, the most common static resources will be cached. But with 1000 Node-RED endpoints, that is a LOT of data to cache, so you may find it hard to tune the proxy correctly. Such things can be a bit of a black art and may require trial and error. You will also need to keep an eye on longer-term performance, since changes in the architecture and usage may require retuning things.

@TotallyInformation Modern browsers separate cached content based on the initiator these days, so they don't share content across sites, even if they are, say, loading the same JS library from a CDN.

And yeah, a proxy in between could do it. It is something we are looking at doing for devices in FlowFuse, as we already have the proxy; the tricky bit is being able to identify which version of Node-RED is running on each device and then picking the correct version of the static assets to serve.

Interesting but not quite what I'm seeing in the browser or the info I'm getting online (which isn't necessarily trustworthy of course! :slight_smile: ).

Of course, there are several things that might also have an impact. HTTP/2 tries to multiplex requests, and Service Workers may also be shared between tabs.

Haha, I doubt that will be your only issue but yes, that's a problem. I've no doubt there are architectures that will work though.

@TotallyInformation Technically, it would still be OK to call it a CDN, I suppose, since all 20 users reside in my local network and I could therefore place a server very, very close to them to speed up the delivery of those static assets.

@hardillb I see the issue with the different Node-RED versions running on those devices. I wonder whether the best approach really would be the one webpack uses: putting a content hash into the filenames of the build artifacts for each release (sketched below).
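Purely to illustrate the idea (this is not something Node-RED does today, and the file name below is hypothetical), a content hash derived from the file makes the asset URL unique per release, which in turn makes it safe to cache the file indefinitely:

```js
// Sketch of the webpack-style idea: embed a content hash in the asset
// filename so every release gets a unique, immutable URL.
// "editor.js" below is a hypothetical input file, not Node-RED's build output.
const crypto = require('crypto');
const fs = require('fs');
const path = require('path');

function hashedName(filePath) {
  const hash = crypto.createHash('sha256')
    .update(fs.readFileSync(filePath))
    .digest('hex')
    .slice(0, 8);
  const { name, ext } = path.parse(filePath);
  return `${name}.${hash}${ext}`; // e.g. editor.3f9c2a1b.js
}

// A build step would copy the file to its hashed name and rewrite the
// reference in the editor's HTML; the server (or proxy) could then send
// "Cache-Control: public, max-age=31536000, immutable" for those files.
console.log(hashedName('editor.js'));
```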

Overall, when I look at the network tab of my browser while accessing one of the Node-RED instances, there are, give or take, 10 MB going over the wire to load the static assets (e.g. editor.js at 3.89 MB). The actual dynamic data is very little in comparison. I tried using a proxy, but it's very hard to dynamically sort out which requests should be cached and which ones shouldn't. Hence why I am here :slight_smile:

[Screenshot: browser network tab showing the static assets downloaded when loading the editor]

Leaving semantics aside, you should at least try a proxy in any case, because even without much configuration it may well help quite a bit. For example, in your image, the majority of the items are cacheable.

You might also be able to adjust the caching rules in ExpressJS (that is Node-RED's web server), but that may be more complex unless you have someone on hand who knows how. You are looking to adjust the time-to-live values so that the browser does not ask for the resource again too often, along the lines of the sketch below.
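For example, plain Express exposes a maxAge option on its static middleware. Note this is generic Express and not a documented Node-RED setting, so using it for the editor would mean changing where Node-RED mounts its assets:

```js
// Generic Express sketch (not a documented Node-RED setting): the static
// middleware's maxAge option sets the Cache-Control time-to-live, and ETags
// allow cheap 304 revalidation once that expires.
const express = require('express');
const app = express();

app.use('/assets', express.static('public', {
  maxAge: '7d',        // sends Cache-Control: public, max-age=604800
  etag: true,
  lastModified: true
}));

app.listen(3000);
```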

With a proxy, you will want to look at the "hit" levels that the proxy reports and may need to increase the size of the cache (which has an impact on memory use).

@TotallyInformation Gaining security and privacy by partitioning the cache | Blog | Chrome for Developers

Thanks Ben, that is really useful. So this changed from October 2020 (Chrome 86), good to know.

based on "scheme://eTLD+1", subdomains and port numbers are ignored

So actually, it would be feasible to get good browser cache hits anyway depending on the actual architecture used.

But more importantly, this puts the focus back on having a caching proxy, which could take care of everything: it would not only make sure the browser cache is used effectively (by proxying resources from different eTLD+1 sources under a single eTLD+1) but could also cache the URLs of the different microservice back-ends (instances of Node-RED in this case) effectively. Something like the sketch below, for example.
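A bare-bones illustration of that single-origin idea (the instance map is invented, and no caching layer is shown; the caching sketch from earlier in the thread would sit in front of this): one proxy host fans requests out to many Node-RED instances by path prefix, so the browser only ever sees one origin and its partitioned cache is shared across all of them.

```js
// One proxy host fanning out to many Node-RED instances by path prefix, so
// the browser only ever sees a single origin. The instance map is invented;
// a caching layer (as sketched earlier) would sit in front of this.
const http = require('http');

const instances = {
  device1: { host: '10.0.0.11', port: 1880 },
  device2: { host: '10.0.0.12', port: 1880 }
};

http.createServer((req, res) => {
  // Expect URLs like /device1/red/red.min.js
  const [, id, ...rest] = req.url.split('/');
  const target = instances[id];
  if (!target) {
    res.writeHead(404);
    return res.end('Unknown instance');
  }
  const proxyReq = http.request(
    { ...target, path: '/' + rest.join('/'), method: req.method, headers: req.headers },
    (proxyRes) => {
      res.writeHead(proxyRes.statusCode, proxyRes.headers);
      proxyRes.pipe(res);
    }
  );
  proxyReq.on('error', () => { res.writeHead(502); res.end('Bad gateway'); });
  req.pipe(proxyReq);
}).listen(8080);
```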