Long deploy time (again...)

I know this topic isn't unique, but I wasn't able to find the answer when searching through a few similar posts here, so maybe someone can point me in the right direction?

Randomly this evening I was editing nodes and suchlike, and when hitting "Deploy" (modified nodes only) it took rather longer than usual.

I have some Grafana logging of CPU usage on my Linux OS and there has been no change at all.

As suggested by @knolleary in another post, I have enabled 'trace' level logging in my settings file, then used the log output to see which node is taking the time.
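
For anyone wanting to do the same, the relevant part of settings.js looks roughly like this (property names as in the default Node-RED settings file):

```javascript
// settings.js in the Node-RED user directory (e.g. ~/.node-red)
logging: {
    console: {
        level: "trace",   // fatal, error, warn, info, debug or trace
        metrics: false,
        audit: false
    }
}
```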

What I see there is just a whole load of repeated stuff relating to alexa-remote (the below repeated hundreds of times):

20 Nov 21:58:25 - [debug] [alexa-remote-account:9fc5798445494cc9] Alexa-Remote: Sending Request with {"host":"alexa.amazon.co.uk","path":"/api/activities?startTime=&size=13&offset=1","method":"GET","timeout":10000,"headers":{}}
20 Nov 21:58:25 - [debug] [alexa-remote-account:9fc5798445494cc9] Alexa-Remote: Response: No/Invalid JSON
20 Nov 21:58:25 - [debug] [alexa-remote-account:9fc5798445494cc9] Alexa-Remote: No authentication check needed (time elapsed 470508)

I have temporarily disabled the "alexa-remote-account" node but that made no difference. NB these messages appear even when I am not deploying.

Right at the end of the delay I then get the standard stopping of nodes, so I don't think the delay is caused by stopping the nodes:

20 Nov 22:06:37 - [debug] Flushing localfilesystem context scope global
20 Nov 22:06:37 - [debug] Flushing localfilesystem context scope 2e858c805ecd8ca3:a0c2f06d.bcd7a
20 Nov 22:06:37 - [debug] Flushing localfilesystem context scope 3d86e78e.17ff28
20 Nov 22:06:38 - [trace] utils.writeFile - copied /home/mat/.node-red/flows_automate.json TO /home/mat/.node-red/.flows_automate.json.backup
20 Nov 22:06:38 - [trace] utils.writeFile - written content to /home/mat/.node-red/flows_automate.json.$$$
20 Nov 22:06:38 - [trace] utils.writeFile - renamed /home/mat/.node-red/flows_automate.json.$$$ to /home/mat/.node-red/flows_automate.json
20 Nov 22:06:38 - [debug] saved flow revision: 8aad655a69666cba9dcc4feef1757651
20 Nov 22:06:38 - [info] Stopping modified nodes

Then immediately after that (milliseconds later) the deployment has finished.

Hi @hazymat,

I am not really the 'guessing' kind of guy...
Instead I always try to analyse such problems in a more scientific way :wink:

In case you are interested, I have quickly written a tutorial for you. It explains step by step how I would analyse such a problem.

Summarised: I connect the Chrome debugger to Node.js, and then I create a CPU profile that tells me exactly how much (CPU) time each function has been consuming.
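
Roughly, the steps look like this (the node-red path is only an example for a global install; adjust it to your own setup):

```bash
# Start Node-RED with the V8 inspector enabled
node --inspect $(which node-red)

# Then in Chrome open chrome://inspect, click "inspect" on the Node-RED target,
# open the Profiler tab, press Start just before hitting Deploy,
# and press Stop as soon as the deploy has finished.
```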

If you want to try it, you can always save your CPU profile result at the end:

[screenshot: saving the CPU profile in the Chrome debugger]

Then you can share it here on Discourse, and I can 'try' to help you to analyze that flamechart.

Good luck hunting!!
Bart

Thanks!

Does this magic chart mean anything to you? I wouldn't know where to start analysing it :slight_smile:

Thanks

Morning @hazymat,
Thanks for giving it a go! We can only get experienced in this kind of stuff by trying to analyse use cases like this.

I have to admit that this is not what I expected to see. Normally when there is a performance bottleneck somewhere, you should very clearly see some wide bars.

Did you start profiling just before pressing the Deploy button, and stop it immediately when the deploy was done? And how long did the deploy take?

Can you please save the profile and share it here, so I (or others who want to try) can have a look when I have time?

To all others: don't hesitate to help him in other ways, because we might not find it my way! I am not a CPU profiling PhD yet :joy:

Do you mean it did not stop the messages? If so then likely there is a config node still running.

Yes that's exactly what I did. Deploy takes around 12 seconds.

Yes it has not stopped the messages!

I am very confused. I have been through all the config nodes (on the config sidebar) to ensure there isn't a stray one, but couldn't see any more...

In that case I am sure there must be a non-disabled node, or a config node still present. Note that you may have to restart node-red to get rid of config nodes.

It looks like I previously installed an Alexa contrib node that was then forked on GitHub, and then I installed the forked version.

It's the two shown below in the middle:
node-red-contrib-alexa-remote2
node-red-contrib-alexa-remote2-applestrudel

I'm loath to remove the applestrudel one right now; last time I did this it sent me into a 6-hour spin of trying and failing to get Alexa working (and it works right now!)

Is there some way I can "split" these two things out, or temporarily remove one, or something like that?

Maybe this is causing a conflict or something like that?

Back up the whole node-red folder; then if you get in a mess you can just restore it and restart node-red.

For a normal backup you should not back up the node_modules folder, as it is large and can be rebuilt by running npm install in the .node-red folder. But for a simple short-term copy in case of disaster, just copy the whole folder.
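
For example, something along these lines (assuming the default ~/.node-red location):

```bash
# One-off safety copy of the whole user directory, node_modules included
cp -a ~/.node-red ~/.node-red.backup

# A routine backup can skip node_modules, which npm install can rebuild
rsync -a --exclude node_modules ~/.node-red/ ~/node-red-backup/
```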

You will probably need to uninstall one of those.
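
If you do go that route, removing a contrib node is normally done from the user directory and then restarting Node-RED (package name taken from your list above; pick whichever one you decide to drop):

```bash
cd ~/.node-red
npm remove node-red-contrib-alexa-remote2
# restart Node-RED afterwards so the palette is reloaded
```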

Looks like this problem is specific to my laptop. I had assumed not, because I tested in multiple browsers, private windows, etc., in an attempt to rule out a client-side issue!

After much head-scratching I tested from another machine on my network (a virtual machine, then my wife's laptop...) and the deploy completed within 2 seconds.

I am now going through the painful process of uninstalling graphics drivers, Windows updates, etc.

If anyone else has had a similar problem I'd love to hear from you!
