I want to run a standalone utility that would normally be found under /usr/bin, but given the node-red docker instance can't access that, I moved it into the /data directory that is mounted and functional. I'd assumed that when the exec node called it at /data/utility, it would be able to execute.
So, using 'bash /data/utility' gets me a 'cannot execute binary file' response and error 126.
I'm not great with linux, so assume this is a permissions thing on that utility file, but what should I be doing to make it executable to the exec node?
I don't think so.
You are trying to interpret the file as a shell script ("bash /data/utility") and I suspect that it's an executable, not a script file, possibly even something compiled for a different CPU architecture.
What does 'file /data/utility' show?
What about 'file /usr/bin/true'?
It is not usual or sensible to put your own files in /usr/bin, and putting programs in /data is no better. ~/bin would be more appropriate (log back in afterwards so it is added to $PATH).
I know nothing about Docker but it seems improbable that it cannot access operating system utilities in /usr/bin. If not, how can it do anything at all?
I understand that a container only has access to a sandboxed partial environment, the minimum necessary to run the specific application, so it has no access to the underlying OS.
I think you're right that perhaps the way I'm calling it is a problem, or it is permissions.
From within the container, running file /data/stress gives the following:
~$ file /data/stress
/data/stress: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=800abdc1177388d244b2d95344b87604e53c0c89, stripped
file /usr/bin/true provides:
~$ file /usr/bin/true
/usr/bin/true: cannot open `/usr/bin/true' (No such file or directory)
Ah well, on my OS true is in /usr/bin but no matter.
It is indeed an executable.
- Ensure that the file permissions mark it as executable:
chmod +x /data/stress
- Change the exec node command to call the binary directly (/data/stress), not via bash.
The default Docker image is built on Alpine, not Debian (it's smaller and faster to start), so the host binary is incompatible (most probably the C library: Alpine uses musl rather than glibc). You could try the Debian-based Docker image instead, if that is what your host is using.
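A quick way to check this from inside the container is a sketch along these lines (/data/stress is the binary from this thread; adjust the path to yours):

```shell
#!/bin/sh
# Diagnostics to run inside the container. BIN is the binary in question;
# /data/stress is the path used earlier in this thread.
BIN=${BIN:-/data/stress}

# Which base image is this? (Alpine ships musl, Debian ships glibc.)
cat /etc/os-release

# Can this image satisfy the binary's shared-library dependencies?
if [ -e "$BIN" ]; then
    ldd "$BIN" || echo "$BIN needs libraries this image does not provide"
else
    echo "no such file: $BIN"
fi
```

If the `file` output mentions /lib64/ld-linux-x86-64.so.2 (as yours does) but the container only has musl's loader, the binary won't run even though the architecture matches.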
Ah yes, that's exactly why I wanted to compare the two file commands.
I still think the "cannot execute binary file" error comes from trying to interpret the executable as a script file, though?
You can still "escape" your Docker container with the exec node by ssh'ing onto the host system and appending your command at the end.
I do so with a certificate and a special user with non-root rights in order to do some backup jobs on host level.
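For illustration only, the exec node's command might look something like this; the key path, user name, host address, and container name below are all placeholders, not the poster's actual setup:

```shell
#!/bin/sh
# Hypothetical exec node command: run a command on the Docker host via ssh,
# using a key and a restricted non-root user, as described above.
CMD='ssh -i /data/id_backup backupuser@172.17.0.1 docker restart my-other-container'
echo "$CMD"
```

172.17.0.1 is the default Docker bridge gateway, which usually reaches the host from inside a container; your address may differ.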
I described in more detail how I do it here
Aaaand... if you'd like to run a script that temporarily stops your Docker container after you've started it with the above-mentioned approach, check here to get everything up and running again.
Thanks, this is a good idea, but it's probably too much of a workaround for now; it may become plan B. It did get me thinking about using files as a semaphore to pass messages or requests to the host, though.
So, I solved the problem by doing this:
- I wrote a bash script (watcher.sh) that watches a folder within the node-red docker volume (i.e. the host-side path of node-red's /data/requests).
- Node-red writes files (requests) to that folder (/data/requests), either using the filename as the request indicator or placing a json request inside the file.
- watcher.sh is notified of the new file, reads the filename (or the json within the file) as the request, and performs it.
- On completion, the watcher writes a confirmation file back to the node-red docker volume (/data/responses), either with a meaningful filename or with a response json inside the file.
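A minimal sketch of such a watcher (the directory paths and the request naming scheme here are illustrative, not the poster's actual script):

```shell
#!/bin/sh
# Sketch of a host-side watcher. REQ_DIR/RES_DIR are the host paths of the
# container's /data/requests and /data/responses (adjust to your volume).
REQ_DIR=${REQ_DIR:-/srv/nodered/data/requests}
RES_DIR=${RES_DIR:-/srv/nodered/data/responses}

handle_request() {
    req=$1
    name=$(basename "$req")
    case "$name" in
        restart-*)  docker restart "${name#restart-}" ;;  # filename as the request
        *)          echo "unhandled request: $name" ;;
    esac
    # Write a confirmation file that the flow can watch for
    echo done > "$RES_DIR/$name.done"
    rm -f "$req"
}

# Poll for new files; inotifywait (from inotify-tools) would avoid the busy loop.
if [ "${1:-}" = run ]; then
    while :; do
        for f in "$REQ_DIR"/*; do
            [ -e "$f" ] && handle_request "$f"
        done
        sleep 2
    done
fi
```

Polling is the portable fallback; on most hosts, replacing the loop with inotifywait gives immediate notification of new request files.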
I set the watcher.sh script up to run as a systemd service, made sure the script has execute rights, and enabled the service to auto-start on boot. It works like a charm!
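For reference, the systemd unit for such a watcher might look roughly like this (the unit name and script path are illustrative, not from the thread):

```ini
# /etc/systemd/system/nodered-watcher.service (illustrative)
[Unit]
Description=Watch the node-red request folder and act on new files
After=docker.service

[Service]
ExecStart=/usr/local/bin/watcher.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling it with systemctl enable --now nodered-watcher starts it immediately and on every boot.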
It is a great way to allow a container to request a host action (such as starting or stopping another docker container), or passing basic messages between co-located docker containers.
Thanks for the idea, it is very much appreciated.
Thanks, this is a great explanation.
So, theoretically, if I have a local copy of a standalone utility that is built for Alpine, it should execute within the container context?