Hi, I'm trying to install Node-RED on a Kubernetes cluster, and when the pod starts up it fails with the following error message:
Error: EACCES: permission denied, copyfile '/usr/src/node-red/node_modules/node-red/settings.js' -> '/data/settings.js'
at Object.copyFileSync (fs.js:2061:3)
at copyFile (/usr/src/node-red/node_modules/fs-extra/lib/copy-sync/copy-sync.js:70:6)
at onFile (/usr/src/node-red/node_modules/fs-extra/lib/copy-sync/copy-sync.js:56:25)
at getStats (/usr/src/node-red/node_modules/fs-extra/lib/copy-sync/copy-sync.js:48:44)
at handleFilterAndCopy (/usr/src/node-red/node_modules/fs-extra/lib/copy-sync/copy-sync.js:33:10)
at Object.copySync (/usr/src/node-red/node_modules/fs-extra/lib/copy-sync/copy-sync.js:26:10)
at Object.<anonymous> (/usr/src/node-red/node_modules/node-red/red.js:125:20)
at Module._compile (internal/modules/cjs/loader.js:1085:14)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
at Module.load (internal/modules/cjs/loader.js:950:32) {
errno: -13,
syscall: 'copyfile',
code: 'EACCES',
path: '/usr/src/node-red/node_modules/node-red/settings.js',
dest: '/data/settings.js'
}
I added the env settings PGID and PUID based on other posts on GitHub etc. from people hitting the same problem (but on Docker, not Kubernetes). The file system is AWS EFS, so I also tried creating the access point with user/group 1000 as the owner, and I added runAsUser and runAsGroup to the container template. Nothing works. Any ideas what I'm doing wrong here? I have successfully deployed several other services with persistent volume claims and had no permissions issues, so I'm not sure why the node-red container is different. Any help appreciated.
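For reference, this is roughly what I added to the pod spec (a sketch, not my full Deployment; UID/GID 1000 is the node-red user in the official image):

```yaml
# Sketch of the security settings I added; not the full Deployment spec
spec:
  containers:
    - name: node-red
      image: nodered/node-red:latest
      securityContext:
        runAsUser: 1000        # run the container process as UID 1000
        runAsGroup: 1000       # and GID 1000, matching the node-red user
      env:
        - name: PUID
          value: "1000"
        - name: PGID
          value: "1000"
      volumeMounts:
        - name: data
          mountPath: /data     # the persistent user dir node-red writes to
```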
Thanks @hardillb. I have attached my EFS access point settings. I have also tried a different install method with a Helm chart, where I set all of the following to 1000:
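Roughly, paraphrasing my values.yaml rather than quoting it (key names are approximate and may differ per chart):

```yaml
# Approximate values.yaml overrides -- key names are from memory,
# check the chart's own values.yaml for the exact ones
securityContext:
  runAsUser: 1000
  runAsGroup: 1000
podSecurityContext:
  fsGroup: 1000
env:
  - name: PUID
    value: "1000"
  - name: PGID
    value: "1000"
```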
The PV uses csi.volumeHandle in the format fileSystemId::accessPointId, as shown in the excerpt from my script below. This config works for other deployments with PVs, so is there any reason it wouldn't work for node-red?
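To illustrate the shape of it (IDs and names here are placeholders, not my real ones):

```yaml
# Static-provisioning PV for the EFS CSI driver; IDs are placeholders
apiVersion: v1
kind: PersistentVolume
metadata:
  name: node-red-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    # fileSystemId::accessPointId -- placeholder values
    volumeHandle: fs-0123456789abcdef0::fsap-0123456789abcdef0
```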
Thanks @jmk-randers, I tried the Helm chart you suggested. It fails when it runs an InitContainer called "permission-fix", so the node-red container never even starts. Looking at the template, I think it's failing because it tries to set permissions on a dynamically provisioned volumeMount, which isn't possible because I have static existing volumes in EFS. I can see here that they created the InitContainer to work around the same issue I reported. At this stage I'm thinking node-red is not compatible with static volumes on AWS EFS, due to the node-red user configuration. That would mean it won't run on EKS Fargate, because Fargate only supports static volumes on EFS (afaik).
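From memory, the "permission-fix" InitContainer does something along these lines (paraphrasing the chart template, not quoting it):

```yaml
# Rough shape of the chart's permission-fix InitContainer (paraphrased):
# chown the data volume to the node-red UID/GID before the main
# container starts -- which fails on a mount that forbids chown
initContainers:
  - name: permission-fix
    image: busybox
    command: ["sh", "-c", "chown -R 1000:1000 /data"]
    volumeMounts:
      - name: data
        mountPath: /data
```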
This is not the case; I have had this working (not for the user dir, but for a data directory mounted under it) with Node-RED on AWS EKS running on Fargate.
Out of interest, can you talk a bit more about what your end goal is here? Is it a single Node-RED instance, or are you planning to run a lot more? If so, there may be better solutions.
I have a few services already running on Fargate, including PostgreSQL, and wanted to use Node-RED to prototype an MQTT-to-database adapter. Did you see anything obviously wrong with my EFS and PV setup? Did you have to configure any user/group settings on EFS to get your solution to work?
Well, I'm not exactly sure which part made it work, but I added a few extra lines from your PV/PVC configs (e.g. the claimRef section in the PV and volumeName in the PVC) and created a new access point with the root directory set to "/node-red" (instead of the default "/"). Works like a dream, thanks!! If I get some time, I'll try to find out which of those changes got it working and post the results here.
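For anyone finding this later, the extra PV/PVC lines were roughly these (names, IDs, and sizes are placeholders; the access point root directory change was made on the EFS side, not in the manifests):

```yaml
# Sketch of the pre-bound PV/PVC pair that worked; IDs are placeholders
apiVersion: v1
kind: PersistentVolume
metadata:
  name: node-red-pv
spec:
  capacity:
    storage: 5Gi
  accessModes: [ReadWriteMany]
  storageClassName: efs-sc
  claimRef:                      # pin this PV to a specific claim
    namespace: default
    name: node-red-pvc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0::fsap-0123456789abcdef0  # placeholders
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: node-red-pvc
  namespace: default
spec:
  accessModes: [ReadWriteMany]
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
  volumeName: node-red-pv        # pin the claim to that PV
```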