Running node-red on kubernetes: file permissions error on pod initialization

Hi, I'm trying to install Node-RED on a Kubernetes cluster, and when the pod starts up it fails with the following error message:

Error: EACCES: permission denied, copyfile '/usr/src/node-red/node_modules/node-red/settings.js' -> '/data/settings.js'
    at Object.copyFileSync (fs.js:2061:3)
    at copyFile (/usr/src/node-red/node_modules/fs-extra/lib/copy-sync/copy-sync.js:70:6)
    at onFile (/usr/src/node-red/node_modules/fs-extra/lib/copy-sync/copy-sync.js:56:25)
    at getStats (/usr/src/node-red/node_modules/fs-extra/lib/copy-sync/copy-sync.js:48:44)
    at handleFilterAndCopy (/usr/src/node-red/node_modules/fs-extra/lib/copy-sync/copy-sync.js:33:10)
    at Object.copySync (/usr/src/node-red/node_modules/fs-extra/lib/copy-sync/copy-sync.js:26:10)
    at Object.<anonymous> (/usr/src/node-red/node_modules/node-red/red.js:125:20)
    at Module._compile (internal/modules/cjs/loader.js:1085:14)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
    at Module.load (internal/modules/cjs/loader.js:950:32) {
  errno: -13,
  syscall: 'copyfile',
  code: 'EACCES',
  path: '/usr/src/node-red/node_modules/node-red/settings.js',
  dest: '/data/settings.js'
}

Here's my kubernetes deployment configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: node-red
  name: node-red
  namespace: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
        - image: nodered/node-red:2.2.0
          imagePullPolicy: IfNotPresent
          name: node-red
          ports:
            - containerPort: 1880
              protocol: TCP
          resources: {}
          env:
            - name: NODE_RED_ENABLE_PROJECTS
              value: "true"
            - name: PGID
              value: "1000"
            - name: PUID
              value: "1000"
          volumeMounts:
            - mountPath: /data
              name: pvc-nodered
      volumes:
        - name: pvc-nodered
          persistentVolumeClaim:
            claimName: pvc-nodered

I added the PGID and PUID env settings based on other posts on GitHub etc. describing the same problem (but on Docker, not Kubernetes). The file system is AWS EFS, so I also tried creating the access point with user/group 1000 as the owner, and I added runAsUser and runAsGroup to the container template (roughly as in the snippet below). Nothing works. Any ideas what I'm doing wrong here? I have successfully deployed several other services with persistent volume claims and had no permission issues, so I'm not sure why the node-red container is different. Any help appreciated.
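
For reference, the runAsUser/runAsGroup I tried looked roughly like this in the container spec (same values as the PUID/PGID env vars; it made no difference):

      containers:
        - name: node-red
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000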

Assuming you've set the access point up correctly, this should work.

PUID and PGID won't do anything to the default Node-RED container

Thanks @hardillb. I have attached my EFS access point settings. I have also tried a different install method with a Helm chart, where I set all of the following values to 1000 (values snippet below):

podSecurityContext.runAsUser
podSecurityContext.runAsGroup
podSecurityContext.fsGroup
securityContext.runAsUser
securityContext.runAsGroup
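
Roughly, in values.yaml terms (the exact key nesting depends on the chart, so this is just how I read it):

podSecurityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000
securityContext:
  runAsUser: 1000
  runAsGroup: 1000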

Not sure what to try next.

Yes, but have you made sure the PersistentVolume is using the mountpoint id and not just the root EFS volume?

Also, setting all those options to 1000 isn't going to change anything; the node-red user already has the ID of 1000.

The PV is using csi.volumeHandle in the format fileSystemId::accessPointId, as shown in the excerpt from my script below. This config works for other deployments with PVs; any reason why it wouldn't work for node-red?

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: efs.csi.aws.com
spec:
  attachRequired: false
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-efs-nodered
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: $PW_NODERED_EFS_FS_ID::$PW_NODERED_EFS_AP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nodered
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 1Gi
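
After the variables are substituted, the volumeHandle ends up looking something like this (the IDs below are just placeholders, not my real resources):

  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0::fsap-0123456789abcdef0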

We are using this Helm chart: node-red 0.13.2 · schwarzit/node-red, which seems to work fine with PVCs on our Longhorn-based distributed storage.

Maybe the pvc template in this chart is helpful?

Thanks @jmk-randers, I tried the Helm chart you suggested. It fails when it runs an InitContainer called "permission-fix"; the node-red container never even starts. Looking at the template, I think it's failing because it tries to set permissions on a dynamic volumeMount, which isn't possible because I have static existing volumes in EFS (my reading of the init container is sketched below). I can see here that they created the InitContainer to work around the same issue I reported. At this stage I'm thinking node-red is not compatible with static volumes on AWS EFS due to the node-red user configuration. This means it won't run on EKS Fargate, because Fargate only supports static volumes on EFS (afaik).
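
As far as I can tell, the "permission-fix" InitContainer is essentially a recursive chown of the data mount, something along these lines (my rough reading of the chart, not its exact template):

      initContainers:
        - name: permission-fix
          image: busybox   # image and command are my guess at the pattern
          command: ["sh", "-c", "chown -R 1000:1000 /data"]
          volumeMounts:
            - name: data   # volume name will differ in the chart
              mountPath: /data

and as far as I understand it, that kind of chown is exactly what a statically provisioned EFS access point refuses.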

This is not the case; I have had this working (not for the user dir, but for a data directory mounted under it) on AWS EKS with Node-RED running on Fargate.

Out of interest, can you talk a bit more about what your end goal is here? Is it a single Node-RED instance, or are you planning to run a lot more? If so, there may be better solutions.

I have a few services already running on Fargate, including PostgreSQL, and wanted to use Node-RED to prototype an MQTT-to-database adapter. Did you see anything obviously wrong with my EFS and PV setup? Did you have to configure any user/group settings on EFS to get your solution to work?

No, nothing special. I was creating the EFS AccessPoints dynamically with the AWS Node.js library, passing in the following:

{
	FileSystemId: this._fsID,
	PosixUser: {
		Uid: 1000,
		Gid: 1000
	},
	RootDirectory: {
		Path: "/"+ name,
		CreationInfo: {
			OwnerUid: 1000,
			OwnerGid: 1000,
			Permissions: "755"
		}
	},
	Tags: [
		{Key: "Name", Value: name+"-ap"}
	]
}

Where name is a unique name for each Node-RED instance.

Then I used the following to create the PV and PVC (note this is the JSON representation, but it's the same structure as the YAML would be):

let pv = {
	apiVersion: "v1",
	kind: "PersistentVolume",
	metadata: {
			name: instance.appname+"-pv"
	},
	spec: {
		capacity: {
			storage: "5Gi"
		},
		volumeMode: "Filesystem",
		accessModes: ["ReadWriteOnce"],
		persistentVolumeReclaimPolicy: "Retain",
		storageClassName: "efs-sc",
		csi: {
			driver: "efs.csi.aws.com",
			volumeHandle: this.storage._fsID + "::" + data.AccessPointId
		},
		claimRef: {
			namespace: instance.namespace,
			name: instance.appname+"-pvc"
		}
	}
}

let pvc = {
	apiVersion: "v1",
	kind: "PersistentVolumeClaim",
	metadata: {
		name: instance.appname+"-pvc"
	},
	spec: {
		accessModes: ["ReadWriteOnce"],
		storageClassName: "efs-sc",
		resources: {
			requests: {
				storage: "5Gi"
			}
		},
		volumeName: instance.appname+"-pv"
	}
}

this.storage._fsID is the EFS filesystem ID and data.AccessPointId is the AccessPoint ID.
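
Written out as YAML, with the templated names shown as placeholders, that's roughly:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <appname>-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <fileSystemId>::<accessPointId>
  claimRef:
    namespace: <namespace>
    name: <appname>-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <appname>-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
  volumeName: <appname>-pv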

Well, I'm not exactly sure which part made it work, but I added a few extra lines from your PV/PVC configs (e.g. the claimRef section in the PV and volumeName in the PVC, roughly as below) and I created a new access point with the root directory set to "/node-red" (instead of the default "/"). Works like a dream, thanks!! If I get some time I'll try to find out which of those changes got it working and post the results here.
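
In terms of my earlier manifests, the additions boil down to these two fields (trimmed to just the relevant lines):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-efs-nodered
spec:
  # ...spec as before...
  claimRef:
    namespace: node-red
    name: pvc-nodered
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nodered
spec:
  # ...spec as before...
  volumeName: pv-efs-nodered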

It will have been the /node-red; you can't have an access point at the root of the volume with a uid/gid different from 0, iirc.
