Use Node-RED as a reverse proxy

Hi folks,

During development of my camera viewer, I'm running into some security issues (concerning CORS). Summarized: when I show a camera MJPEG stream from host X on my Node-RED dashboard (served from host Y), I cannot get access to the underlying image data. So you can display the images (using an img or video element), but you cannot get the image (as bytes) out of those elements. It is the browser's same-origin protection mechanism. You can only get around it when host X sends an Access-Control-Allow-Origin HTTP header, but I cannot rely on that ... :face_with_symbols_over_mouth:

So I want my host Y to serve both my Node-RED dashboard and my camera MJPEG stream. I assume my browser won't start complaining about cross-domain issues then ... And since I'm highly allergic to combining multiple tools, I would like my Node-RED flow to offer both the dashboard and the camera MJPEG stream (without having to use other tools!). So I assume Node-RED has to behave as a reverse proxy.

Let's assume I want to decode this public online mjpeg stream:

[screenshot of the public webcam stream]

I could solve this with my multipart-stream decoder node (which decodes the camera MJPEG stream into separate images), combined with my multipart-stream encoder node (which encodes the camera images back into an MJPEG stream):

[screenshot of the flow]

[{"id":"254b4aec.9c6f06","type":"http in","z":"8bb35f74.82618","name":"","url":"/mjpeg_test","method":"get","upload":false,"swaggerDoc":"","x":400,"y":320,"wires":[["b8e4e8b7.f45358"]]},{"id":"4c392f29.e336a","type":"multipart-decoder","z":"8bb35f74.82618","name":"","ret":"bin","url":"https://webcam1.lpl.org/axis-cgi/mjpg/video.cgi","tls":"","delay":0,"maximum":1000000,"blockSize":1,"x":390,"y":260,"wires":[["b8e4e8b7.f45358"]]},{"id":"b8e4e8b7.f45358","type":"multipart-encoder","z":"8bb35f74.82618","name":"","statusCode":"","ignoreMessages":false,"outputOneNew":false,"outputIfSingle":true,"outputIfAll":false,"globalHeaders":{"Content-Type":"multipart/x-mixed-replace;boundary=--myboundary","Connection":"keep-alive","Expires":"Fri, 01 Jan 1990 00:00:00 GMT","Cache-Control":"no-cache, no-store, max-age=0, must-revalidate","Pragma":"no-cache"},"partHeaders":{"Content-Type":"image/jpeg"},"destination":"all","highWaterMark":"500000","x":620,"y":300,"wires":[[]]},{"id":"3f7224a8.1503ec","type":"inject","z":"8bb35f74.82618","name":"Start stream","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":"","x":190,"y":260,"wires":[["4c392f29.e336a"]]}]
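For anyone unfamiliar with the wire format: the encoder essentially wraps every JPEG in a multipart "part" for the multipart/x-mixed-replace response. A minimal sketch of that framing (buildPart and the boundary name are illustrative, not the node's actual code):

```javascript
// Wrap one JPEG image into a single part of a multipart/x-mixed-replace stream.
// The boundary must match the one announced in the global Content-Type header.
function buildPart(boundary, jpegBuffer) {
    const head =
        '--' + boundary + '\r\n' +
        'Content-Type: image/jpeg\r\n' +
        'Content-Length: ' + jpegBuffer.length + '\r\n' +
        '\r\n';
    return Buffer.concat([Buffer.from(head), jpegBuffer, Buffer.from('\r\n')]);
}

const fakeJpeg = Buffer.from([0xFF, 0xD8, 0xFF, 0xD9]); // start + end markers only
const part = buildPart('myboundary', fakeJpeg);
console.log(part.toString('latin1').split('\r\n')[0]); // -> '--myboundary'
```

The decoder does the reverse: it has to locate those boundary lines inside the byte stream to cut the images out again.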

And then indeed my camera stream is available via my Node-RED hostname:

[screenshot: the stream displayed via the Node-RED endpoint]

So far so good. But this involves a lot of processing (mostly for finding boundary sequences in the large image chunks) on my poor Raspberry, which is not what I want. Indeed my Node-RED flow now needs to split the stream into individual images and concatenate them together again, after which my browser has to separate the individual images once more ...

I would like to pass the (infinite stream of) data chunks from the camera MJPEG stream straight to my browser (via the Node-RED flow), without any processing. What would be the best way to accomplish that? Perhaps with a combination of http-in/http-request/http-out nodes, or anything else?

Thanks !
Bart

Quick question: is the camera you're trying to get the stream from a similar AXIS model, or should I think of a different PTZ-based or even different-brand camera? Most of them have the MJPEG stream defined in CGI files, and this one just passes out JPEG images through it with a set content length. I remember messing with those MJPEG streams for AXIS some 6 years ago, but I've no idea if I can find those Python files back; they didn't seem worth saving elsewhere, so they're probably still on my old laptop.

What I do remember is that AXIS is friendly enough to write a manual on API access to those cameras through CGI: https://www.axis.com/files/manuals/HTTP_API_VAPIX_2.pdf
/axis-cgi/mjpg/video.cgi gets you a multipart JPEG stream of consecutive images, as if a video is being displayed. Section 5.2 of the manual might get you the API calls for this thing. Instead of requesting the stream, you could request individual images every X second(s) instead.

Hi Lena (@afelix),

Yes, I know. Then indeed the parsing is much faster. But I have experienced over the last few years (during development of my node-red-contrib-multipart-stream-decoder node) that you cannot count on it. My decoder node makes use of the content-length header if available, and searches through all the bytes otherwise. Lots of cameras don't supply the length, and I want a general solution that everybody can use ...
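To illustrate why that fallback is expensive: without a content-length, every incoming chunk has to be scanned for the boundary byte sequence, conceptually something like this (a simplified sketch, not the decoder node's real implementation, which also has to handle boundaries split across two chunks):

```javascript
// Naive boundary scan: find where one image ends inside a stream buffer.
function findBoundary(chunk, boundary) {
    return chunk.indexOf(Buffer.from('--' + boundary)); // -1 if not present
}

const boundary = 'myboundary';
const chunk = Buffer.concat([
    Buffer.from([0xFF, 0xD8, 0x01, 0x02, 0xFF, 0xD9]), // fake JPEG bytes
    Buffer.from('\r\n--myboundary\r\n')
]);
console.log(findBoundary(chunk, boundary)); // -> 8
```

Doing that scan on every chunk of a full-resolution stream is what eats the Raspberry's CPU; with a content-length header the decoder can just count bytes instead.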

But in this case I don't want ANY processing to be done by Node-RED on the data chunks. Node-RED receives data chunks from a source, and just needs to pass those chunks on to the receiver. And I'm pretty sure there will be lots of other use cases for people (like me) who want to use their Node-RED to quickly set up a reverse proxy.

When looking at the code of the http-request node, I don't see any chunk-related code anymore (since it has been rewritten by Hitachi to use the 'request' library underneath). On the other hand, the node-red-contrib-http-multipart node could perhaps do the job, but then the reverse-proxy solution would only be usable for multipart responses (not for normal finite responses). Moreover, that node parses the multipart headers, which is again a bit useless since the (dashboard) browser will do that afterwards anyway.

Does anybody know if I can just pass the data chunks through Node-RED, or do I have to develop a new custom node (e.g. based on this article)? If I have to develop something on my own, does anybody have some info to get me started, and stuff that I need to think about?

Let me see if I can find something that different brands have in common for MJPEG streams. This is something I have to see in practice again before I can come up with more ideas.

<pedant on> I think in this case the M stands for Motion ... such that it is a stream of complete images </pedant off>


You're right, the Axis docs confused me for a bit. I just checked it. Sorry for the confusion this might have caused others. Once again, Lena, double-check before you speak please :slight_smile:

In case somebody reads this discussion afterwards: at first sight I don't think this could easily be implemented using the existing nodes.

I added a few lines to the httprequest node code, to send an output message for every data chunk that arrives (instead of waiting for the entire response to arrive):

request(opts, function(err, res, body) {
    if (err) {
        // Send error message
    }
    else {
        // Send (complete) response message
    }
}).on('data', function(data) {
    // Extra data-handler: send an output message for every chunk that arrives,
    // instead of waiting for the entire response
    console.log('Chunk received with length ' + data.length);
    var newMsg = RED.util.cloneMessage(msg); // clone, so every chunk gets its own message
    newMsg.payload = data; // decompressed data chunk
    node.send(newMsg);
});

And indeed the httprequest node now sends an output message for every data chunk:

Chunk received with length 1664
Chunk received with length 6528
Chunk received with length 1472
Chunk received with length 6720

However, the http-out node gives an error for every message (except the first one):
Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client

The http-out node expects a complete response; it sends it once and then the request is considered completed. So a dead end again ...

Well, don't send to msg.res until it really is complete. As it says, just send it once.

Hey Dave (@dceejay),
Do you mean I can send data chunks (via input messages) to the http-out node, which are immediately written to the client socket? And that I can keep the socket open (infinitely, since the stream has no end)?

Ah no (I looked harder at your pic on a bigger screen) - the http-out node doesn't do that. I guess it must be possible to write a version that does - no idea if it could be a variant of the same node.

You could do HTTP streaming, but that doesn't really follow the Node-RED standard message-based approach, does it?

I have wondered a few times whether Node-RED needed a "parallel" set of features that supported true streaming. I don't have the knowledge though as to whether that would be a reasonable approach.

Hey guys,

I wrote an experimental node as a proof-of-concept: node-red-contrib-reverse-proxy. It currently contains only the basic functionality that allows me to redirect my MJPEG test stream via Node-RED. And since this is the first time ever that I've worked with proxies, please be kind/patient in your feedback :wink:

I don't even know if this is a good way of working to integrate it into Node-RED.
But at least this way I can learn from it ...

You can install it directly from my GitHub page via the command in the above link.


I've only taken a quick look at the code out of interest, as I don't need this capability myself right now. It seems a straightforward solution. Not sure what the performance will be like.

One small thing: you have console.log on error, where you could use RED.log.error to integrate properly with Node-RED logging.

After reading through all the fan mail about this reverse-proxy node, it became immediately obvious that the entire community was excited to get new functionality. Not ... :pleading_face:

So I have added a bunch of new features, and I have added a lot of information and examples on the readme page.

Julian (@TotallyInformation), thanks for your response! I have added a performance test to the readme page. It is not a very professional benchmark, but for my personal use case it is sufficient: it shows me that my poor Raspberry Pi 3 won't die a slow and painful death when I start proxying my MJPEG streams via Node-RED :partying_face: :champagne: :cake:

Unless my test is incorrect ...


Great work Bart!
I just wonder a few things. I understand what you explained about hiding credentials and the problem accessing IP cameras due to security exceptions ... I think

Besides just viewing video in attended mode, do you think this node you have developed could also make it possible to do some kind of AI analysis inside NR (by sending the stream further to another "analyzer" node or some external component)? As well as centralized recording ...

@BartButenaers - Bart, your readme ends with

Rermarks:

  • The Y-axis contains a percentage of the overal CPU usage (i.e. a sum of the 4 cores).
  • Of course there is NO processing of the data chunks involved, since the Mjpeg stream is decoded (into separate images) in the dashboard (i.e. in the browser and not on my Raspberry).

When I open the

It should be Remarks (one 'r') and don't leave me hanging!! When you open what? :joy:

Hey Walter (@krambriw), I try to add as much information as possible. Novice users won't need it, but as soon as they become more advanced they'll know where to find it. But if it isn't clear for skilled people like you, I definitely have to find a way to simplify my explanation somehow. Will keep you updated ...

If I'm not mistaken, I have read somewhere that Nick regretted having introduced (in the past) the exchange of the http request/response objects via messages. But I'm not sure about it! I thought it was because you can run into problems, since those objects cannot simply be converted to JSON. For example, in the future there will be pluggable custom wiring mechanisms, so I assume you cannot simply pass the request/response objects in messages - via some 'remote' wire - to another Node-RED instance.
Therefore I don't think this is the way to go when you want streaming ...

Paul (@zenofmud), unbelievable that you have read to the end of my readme :joy:. I find this very suspicious ... Either you are some kind of masochist, or your Chinese nanny taught you to read from bottom to top?
I have removed the open end, which means there will be no sequel next year ...


But...but...now I'll never know if the butler actually did it :sob:

(while I read it, I never said I understood it :blush:)


I have read through the code on my mobile; a few notes:

  1. Can we receive the config from msg? The usual trick is to set parameters like
     const url = msg.params.url || config.url;
  2. It seems the error event from http-proxy is not handled; I'm not sure what would happen if there is an error (e.g. target URL not found).
  3. Is it possible to return a msg instead of directly sending the http response? Some people may want to post-process the proxy result.
  4. Will the response headers be "polluted" by the proxy's response headers? One of your use cases is to hide internal resource links, and you definitely don't want to expose proxy headers in the actual response.
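A minimal sketch of the msg-overrides-config idiom mentioned above (resolveUrl, msg.params and config.url are illustrative names, not the node's actual API):

```javascript
// Let a property on the incoming msg override the statically configured value.
function resolveUrl(msg, config) {
    return (msg.params && msg.params.url) || config.url;
}

const config = { url: 'http://fallback.local/stream' };
console.log(resolveUrl({ params: { url: 'http://cam.local/stream' } }, config));
// -> 'http://cam.local/stream'
console.log(resolveUrl({}, config)); // -> 'http://fallback.local/stream'
```

That way the same proxy node can serve several targets, chosen per message.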

Hi Bart, what you have written is fine, I do understand. Perhaps what could be better highlighted are the absolute benefits & advantages & why. I had some difficulty really getting that at first glance.

What I mean is maybe best explained with a typical use case example related to my own needs & thoughts. Assume I would like to build a GUI for video viewing & management. If I do this "without thinking", I quickly go ahead and configure a Dashboard with direct links to my IP cameras. This you can easily do out-of-the-NR-box.

If I then lack middleware in between handling the logon, like Motion does, I would reveal the credentials of my cameras (you just have to use the inspect function in the browser). This might be OK if you restrict viewing to your local home network. But as soon as you would like to expose it to the outside, or have concerns that not everyone on the same network should have access, then you will face problems. I can imagine that if you set up a small/midsize NR system with video managing as a commercial product, this would certainly be something to worry about.

I do not know if this was a good example, since I might have a setup that already takes care of stuff like this. My setup is as below, and maybe Motion actually works as a proxy server when connecting IP cameras? And I access my home network from outside via VPN. The only thing not protected is that anyone connected to our home network can access the Dashboard.

IP & USB Cameras <-> Motion <-> NR <-> Dashboard w Video Views <-> VPN Server <-> Router <-> Internet