I am trying to access the camera feed of my 3D printer using Node-RED, but it is not directly accessible and I have to jump through some hoops to make it work.
I found a way to do it, but it was written in Python. I would like to translate it into JavaScript for use within flows directly (if this does not have a huge performance impact on my instance).
I found the code below and tried to decipher it:
It looks like it constructs a packet containing the username and access code and then fires it off to the socket, which is an SSL socket. Should I read this as a TCP request?
Is there a smart person here who could possibly help me with constructing this packet/buffer?
import ssl
import struct

username = 'bblp'
access_code = ''
hostname = ''
port = 6000
MAX_CONNECT_ATTEMPTS = 12
auth_data = bytearray()
connect_attempts = 0
auth_data += struct.pack("<I", 0x40)    # '@'\0\0\0
auth_data += struct.pack("<I", 0x3000)  # \0'0'\0\0
auth_data += struct.pack("<I", 0)       # \0\0\0\0
auth_data += struct.pack("<I", 0)       # \0\0\0\0
for i in range(0, len(username)):
    auth_data += struct.pack("<c", username[i].encode('ascii'))
for i in range(0, 32 - len(username)):
    auth_data += struct.pack("<x")
for i in range(0, len(access_code)):
    auth_data += struct.pack("<c", access_code[i].encode('ascii'))
for i in range(0, 32 - len(access_code)):
    auth_data += struct.pack("<x")
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
jpeg_start = bytearray([0xff, 0xd8, 0xff, 0xe0])
jpeg_end = bytearray([0xff, 0xd9])
read_chunk_size = 4096 # 4096 is the max we'll get even if we increase this.
# Payload format for each image is:
# 16 byte header:
# Bytes 0:3 = little endian payload size for the jpeg image (does not include this header).
# Bytes 4:7 = 0x00000000
# Bytes 8:11 = 0x00000001
# Bytes 12:15 = 0x00000000
# These first 16 bytes are always delivered by themselves.
#
# Bytes 16:19 = jpeg_start magic bytes
# Bytes 20:payload_size-2 = jpeg image bytes
# Bytes payload_size-2:payload_size = jpeg_end magic bytes
#
# Further attempts to receive data will get SSLWantReadError until a new image is ready (1-2 seconds later)
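For reference, the same 80-byte auth packet (16-byte header plus username and access code, each zero-padded to 32 bytes) can be built in JavaScript with Node's Buffer API. A minimal sketch; the field layout is read off the struct.pack calls above, and buildAuthPacket is just a name I made up:

```javascript
// Build the 80-byte auth packet: 16-byte header, then the username and
// access code, each zero-padded to 32 bytes (mirrors the struct.pack code).
function buildAuthPacket(username, accessCode) {
  const buf = Buffer.alloc(80);           // zero-filled, so the padding is free
  buf.writeUInt32LE(0x40, 0);             // '@'\0\0\0
  buf.writeUInt32LE(0x3000, 4);           // \0'0'\0\0
  // bytes 8..15 stay 0x00
  buf.write(username, 16, 32, 'ascii');   // bytes 16..47
  buf.write(accessCode, 48, 32, 'ascii'); // bytes 48..79
  return buf;
}
```

You would then write this buffer to the TLS socket, e.g. via Node's tls.connect with rejectUnauthorized: false, matching the disabled certificate checks in the Python code.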
I got something: I changed Buffer.alloc(79) to Buffer.alloc(80) and it started to respond.
With the TCP request node set to 'never - keep connection open', I receive buffers back with 4096 and 3008 bytes (flip-flopping).
How does this work:
Grab each reply and append it to a buffer until you see [0xff, 0xd9] (jpeg end)
Then, scan for [0xff, 0xd8, 0xff, 0xe0] (jpeg start) and grab from there up to the jpeg end.
Do I need to decode the buffer or loop through each buffer?
By accident (?) I did see those bytes passing by,
but how do/should I capture and assemble all those packets together to create an image?
Create an empty buffer in context (easiest way would be an inject -> change node).
Send all responses to a function. Grab the buffer from context, use Buffer.concat to append data.
After the stream stops, scan forward for the start marker & remember the position. Continue scanning forward until you find the end marker & remember the position. Use the two positions to slice the data out into msg.payload.
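The scan-and-slice part of that advice could look roughly like this in a Node-RED function node. A sketch, not tested against a real printer; the marker values come from this thread, extractJpeg is a name I invented, and 'imgbuf' is an arbitrary context key:

```javascript
// Markers from the payload description above.
const JPEG_START = Buffer.from([0xff, 0xd8, 0xff, 0xe0]);
const JPEG_END = Buffer.from([0xff, 0xd9]);

// Scan an accumulated buffer for one complete jpeg.
// Returns { jpeg, rest } when a full image is present, otherwise null.
function extractJpeg(buf) {
  const start = buf.indexOf(JPEG_START);
  if (start === -1) return null;
  const end = buf.indexOf(JPEG_END, start + JPEG_START.length);
  if (end === -1) return null;
  return {
    jpeg: buf.slice(start, end + JPEG_END.length),
    rest: buf.slice(end + JPEG_END.length),
  };
}

// Inside the function node you would use it roughly like this:
//   let buf = context.get('imgbuf') || Buffer.alloc(0);
//   buf = Buffer.concat([buf, msg.payload]);
//   const found = extractJpeg(buf);
//   context.set('imgbuf', found ? found.rest : buf);
//   if (found) { msg.payload = found.jpeg; return msg; }
//   return null;
```

One caveat: the 0xff 0xd9 sequence can in principle also occur inside the jpeg data, so the length-from-header approach described further down the thread is the more robust one.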
Unfortunately I think this goes beyond my capabilities.
The stream never stops, it is a video stream. The 'buffer' concept does not click in my brain; is it the same as/similar to an array? How can I check for [0xff, 0xd8, 0xff, 0xe0] as a whole, or does it need to be checked per value?
After that, it keeps appending until len(img) == payload_size (if it goes over, it bails out).
Once the length matches, it checks that the first 4 bytes of the assembled img equal jpeg_start:
img[:4] != jpeg_start:
and that the last 2 bytes equal jpeg_end:
img[-2:] != jpeg_end
If all is good, it writes the image to stdout (os.write(1, img)).
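Translated to JavaScript, that length-checked version could look roughly like this. A sketch under my own naming (parseHeader and assembleImage are invented); the 16-byte header layout is taken from the comments in the Python code:

```javascript
const JPEG_START = Buffer.from([0xff, 0xd8, 0xff, 0xe0]);
const JPEG_END = Buffer.from([0xff, 0xd9]);

// The 16-byte header arrives by itself; bytes 0..3 are the little-endian
// payload size of the jpeg that follows (the header itself is not counted).
function parseHeader(header) {
  return header.readUInt32LE(0);
}

// Concatenate the chunks received so far. Returns null while more data is
// needed, the finished image once the size matches, and throws on the same
// conditions the Python code treats as errors.
function assembleImage(chunks, payloadSize) {
  const img = Buffer.concat(chunks);
  if (img.length < payloadSize) return null;                    // keep reading
  if (img.length > payloadSize) throw new Error('overran payload size');
  if (!img.slice(0, 4).equals(JPEG_START)) throw new Error('bad jpeg start');
  if (!img.slice(-2).equals(JPEG_END)) throw new Error('bad jpeg end');
  return img;
}
```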
A Buffer is kind of an array of bytes. It has nifty helper functions like .readUInt32BE() (for reading 4 bytes as an unsigned int in big-endian format).
It's late here, I cannot help further right now. However, if you can capture a stream of responses (enough to cover the very first byte until after at least 2 or more images) I will take a stab tomorrow (DM me the raw data in a code block in the msg or send a link to the saved streams).
PS: to save the streams, just write (append) them to a file or files directly from the TCP node.
Receiving an endless supply of jpeg chunks and reassembling them is what pipe2jpeg was designed for. If you can make your connection and pipe the buffers into this node, it should output jpegs for you.
wth, this works flawlessly! I was expecting to first have to get the proper image headers sorted, but no: first deploy and it's running. Thanks a lot! And because there is no need for ffmpeg, it is quite efficient on the CPU, very nice.
I understand, sorry, I was not trying to push. I actually would like to learn a 'bit' about buffers, as they are a big thing within Node.js.
img[:4] != jpeg_start:
Things like this: I am aware of slicing in Python, but what I fail to see is how this is done in JavaScript, because the Python part matches directly against the byte array (?), meaning an exact match in one go if I read it correctly. Is this where .readUInt32BE() comes in? And when to use big endian or little endian is a realm I don't exist in :') It will require some more digging for me to understand these low-level concepts.
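For what it's worth, the JavaScript equivalent of Python's img[:4] != jpeg_start is a Buffer slice compared with .equals(), so you do match the whole sequence in one go. Endianness only comes into play when you interpret bytes as a number. A small demo (the example values are made up):

```javascript
const jpegStart = Buffer.from([0xff, 0xd8, 0xff, 0xe0]);
const img = Buffer.from([0xff, 0xd8, 0xff, 0xe0, 0x01, 0x02]);

// Python's img[:4] != jpeg_start -> compare a slice against another Buffer.
const startsOk = img.slice(0, 4).equals(jpegStart); // true

// Python's img[-2:] -> negative indices work with slice() here too.
const tail = img.slice(-2); // the last two bytes, 0x01 0x02

// Endianness only matters when reading bytes AS a number:
const b = Buffer.from([0x40, 0x00, 0x00, 0x00]);
const le = b.readUInt32LE(0); // least significant byte first -> 0x40 (64)
const be = b.readUInt32BE(0); // most significant byte first  -> 0x40000000
```

This printer protocol uses little endian throughout (that's what the "<" in struct.pack("<I", ...) means), so .readUInt32LE() is the one you'd reach for here.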
Well, the inner library, pipe2jpeg, was made after I realized the hard way that ffmpeg does not spit out single jpegs nicely, due to Node.js breaking the inbound buffer into 64k chunks or lower based on the OS. If a jpeg was a little too big, it would get broken up and have to be reassembled, thus the pipe2jpeg lib was needed.
In node-red-pipe2jpeg, you can turn on the http option; 2 http routes will then be served where you can grab the single image and refresh for updates, or get the mjpeg stream. So it can be mjpeg if you need to stream to the browser by simply pointing the url at it, but you could quickly run into the browser's limit on persistent http connections if you are viewing too many at the same time.
A simple montage of img elements that refresh the src every 2 seconds makes a camera overview:
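A minimal sketch of such a montage. The /pipe2jpeg/... URLs are placeholders for wherever your single-image routes are served, and the cache-busting query parameter is just one common way to force the browser to refetch:

```javascript
// Hypothetical single-image endpoints; substitute your own routes.
const cameras = ['/pipe2jpeg/cam1.jpg', '/pipe2jpeg/cam2.jpg'];

// Append a cache-busting query string so the browser refetches the image.
function refreshUrl(base) {
  return `${base}?t=${Date.now()}`;
}

// In the browser: build one <img> per camera and refresh each every 2 s.
if (typeof document !== 'undefined') {
  for (const cam of cameras) {
    const img = document.createElement('img');
    img.src = refreshUrl(cam);
    document.body.appendChild(img);
    setInterval(() => { img.src = refreshUrl(cam); }, 2000);
  }
}
```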
Buffers are tricky. I have worked with them for years and only recently figured out how to use them efficiently, by creating my own buffer pool to reuse them rather than relying on Node.js' garbage collection.