Node-RED application gradually increases memory usage until it crashes

Hello, we have Node-RED running on our device. We prebuilt Node.js and Node-RED from source on a Linux build machine with target-specific configs (the target is the device), moved the built package to the device, and started Node-RED from an init.d script using start-stop-daemon.
Versions used in the package: Node.js v14.12, Node-RED v1.2.3.
.node-red/package.json is as follows:

{
    "name": "node-red-project",
    "description": "A Node-RED Project",
    "version": "0.0.1",
    "private": true,
    "dependencies": {
        "node-red-contrib-azure-iot-hub": "^0.4.0",
        "node-red-contrib-google-cloud": "0.0.19",
        "node-red-contrib-influxdb": "^0.5.3",
        "node-red-contrib-opcua": "^0.2.91",
        "node-red-contrib-serial-modbus": "0.0.11"
    }
}

The problem is that memory utilization gradually goes up; we can see this with the top command, where %MEM keeps climbing. Initially the growth was very fast and it would crash the process with a JavaScript out-of-memory error. On the community forum someone mentioned getting a performance improvement by deleting Debug nodes from their flow, and deleting the Debug nodes drastically reduced the rate of memory growth for us too.

But memory still increases: per our recent observation the percentage climbs from 6% to around 27% over 3 days; with the Debug nodes in place it would get there within an hour.

Please suggest what the issue with the Debug node in Node-RED might be, and how we can detect the Node.js memory leak that is driving memory utilization up and eventually crashing the application.
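For reference, a way to get the same numbers from inside the process, rather than from top, is a small Function node fed by a slow Inject timer and wired to a single Debug node. A minimal sketch, assuming the Function node sandbox exposes the global process object (all values are in bytes):

// Function node body: sample this process's memory from inside Node-RED.
// rss is the figure that corresponds to what top shows as resident memory.
msg.payload = process.memoryUsage(); // { rss, heapTotal, heapUsed, external, ... }
return msg;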

What versions of node-red and nodejs are you running?

@Colin Thanks for the quick response. I have updated the post with the Node.js and Node-RED version details.

Can you post the Node-RED log output from just before it crashes, please?

The latest version is 1.2.7.

@zenofmud Node-RED v1.2.3 was used because we had taken the Node-RED code from source onto our build machine; we don't create a package every time, we just build it locally. Still, to rule out version issues, I changed Node.js to v14.15.0 and Node-RED to v1.2.7 and am getting the same issue.
@Colin below is the log output I got from the new build, i.e. Node-RED v1.2.7; the output was the same for the older version.

[23989:0x5602b038d250] 231916414 ms: Scavenge 1006.7 (1025.4) -> 1005.8 (1026.4) MB, 22.4 / 0.0 ms (average mu = 0.872, current mu = 0.848) allocation failure
[23989:0x5602b038d250] 231916526 ms: Scavenge 1006.7 (1025.4) -> 1005.8 (1026.4) MB, 22.4 / 0.1 ms (average mu = 0.872, current mu = 0.848) allocation failure
[23989:0x5602b038d250] 231916613 ms: Scavenge 1006.8 (1026.4) -> 1005.9 (1026.4) MB, 17.8 / 0.0 ms (average mu = 0.872, current mu = 0.848) allocation failure
[23989:0x5602b038d250] 231916777 ms: Scavenge 1006.8 (1026.4) -> 1005.8 (1026.4) MB, 24.8 / 0.0 ms (average mu = 0.872, current mu = 0.848) allocation failure
[23989:0x5602b038d250] 231916943 ms: Scavenge 1006.8 (1026.4) -> 1005.8 (1026.4) MB, 21.8 / 0.1 ms (average mu = 0.872, current mu = 0.848) allocation failure
15 Jan 09:20:49 - [info] [azureiothub:Azure IoT Hub] Message sent.
[23989:0x5602b038d250] 231917083 ms: Scavenge 1006.8 (1028.4) -> 1005.9 (1028.4) MB, 24.9 / 0.0 ms (average mu = 0.872, current mu = 0.848) allocation failure
[23989:0x5602b038d250] 231922048 ms: Mark-sweep 1007.8 (1028.4) -> 1005.6 (1028.6) MB, 4646.4 / 0.7 ms (average mu = 0.781, current mu = 0.233) task scavenge might not succeed
15 Jan 09:20:54 - [info] [azureiothub:Azure IoT Hub] JSON
15 Jan 09:20:54 - [info] [azureiothub:Azure IoT Hub] Sending Message to Azure IoT Hub :
Payload: 1610682649477
15 Jan 09:20:54 - [info] [azureiothub:Azure IoT Hub] JSON
15 Jan 09:20:54 - [info] [azureiothub:Azure IoT Hub] Sending Message to Azure IoT Hub :
Payload: 1610682654270
15 Jan 09:20:55 - [info] [azureiothub:Azure IoT Hub] Message sent.
15 Jan 09:20:55 - [info] [azureiothub:Azure IoT Hub] Message sent.
[23989:0x5602b038d250] 231928290 ms: Mark-sweep 1007.7 (1028.6) -> 1005.6 (1028.1) MB, 5197.8 / 0.2 ms (average mu = 0.642, current mu = 0.167) allocation failure scavenge might not succeed
15 Jan 09:21:00 - [info] [azureiothub:Azure IoT Hub] JSON
15 Jan 09:21:00 - [info] [azureiothub:Azure IoT Hub] Sending Message to Azure IoT Hub :
Payload: 1610682660592
[23989:0x5602b038d250] 231933971 ms: Mark-sweep 1007.7 (1028.1) -> 1005.5 (1028.4) MB, 5297.6 / 0.3 ms (average mu = 0.474, current mu = 0.068) allocation failure scavenge might not succeed
15 Jan 09:21:06 - [error] [modbusSerialConfig:Modbus63] Error: {"name":"TransactionTimedOutError","message":"Timed out","errno":"ETIMEDOUT"}
15 Jan 09:21:06 - [info] [azureiothub:Azure IoT Hub] Message sent.
15 Jan 09:21:07 - [info] [azureiothub:Azure IoT Hub] JSON
15 Jan 09:21:07 - [info] [azureiothub:Azure IoT Hub] Sending Message to Azure IoT Hub :
Payload: 1610682666310
[23989:0x5602b038d250] 231939961 ms: Mark-sweep 1007.6 (1028.4) -> 1005.5 (1028.6) MB, 5048.3 / 0.2 ms (average mu = 0.353, current mu = 0.157) allocation failure scavenge might not succeed
15 Jan 09:21:13 - [info] [azureiothub:Azure IoT Hub] Message sent.
[23989:0x5602b038d250] 231946153 ms: Mark-sweep 1007.5 (1028.6) -> 1005.4 (1028.4) MB, 5169.1 / 0.2 ms (average mu = 0.270, current mu = 0.165) allocation failure scavenge might not succeed
15 Jan 09:21:18 - [error] [modbusSerialConfig:Modbus63] Error: {"name":"TransactionTimedOutError","message":"Timed out","errno":"ETIMEDOUT"}
15 Jan 09:21:18 - [info] [azureiothub:Azure IoT Hub] JSON
15 Jan 09:21:18 - [info] [azureiothub:Azure IoT Hub] Sending Message to Azure IoT Hub :
Payload: 1610682672218
15 Jan 09:21:18 - [info] [azureiothub:Azure IoT Hub] JSON
15 Jan 09:21:18 - [info] [azureiothub:Azure IoT Hub] Sending Message to Azure IoT Hub :
Payload: 1610682678361
[23989:0x5602b038d250] 231951681 ms: Mark-sweep 1007.6 (1028.4) -> 1005.6 (1028.4) MB, 5119.1 / 0.3 ms (average mu = 0.184, current mu = 0.074) allocation failure scavenge might not succeed
[23989:0x5602b038d250] 231957089 ms: Mark-sweep 1007.6 (1028.4) -> 1005.5 (1028.4) MB, 5205.5 / 0.2 ms (average mu = 0.116, current mu = 0.037) task scavenge might not succeed
15 Jan 09:21:29 - [info] [azureiothub:Azure IoT Hub] Message sent.
15 Jan 09:21:29 - [info] [azureiothub:Azure IoT Hub] Message sent.
15 Jan 09:21:29 - [error] [modbusSerialConfig:Modbus63] Error: {"name":"TransactionTimedOutError","message":"Timed out","errno":"ETIMEDOUT"}

<--- Last few GCs --->

[23989:0x5602b038d250] 231951681 ms: Mark-sweep 1007.6 (1028.4) -> 1005.6 (1028.4) MB, 5119.1 / 0.3 ms (average mu = 0.184, current mu = 0.074) allocation failure scavenge might not succeed
[23989:0x5602b038d250] 231957089 ms: Mark-sweep 1007.6 (1028.4) -> 1005.5 (1028.4) MB, 5205.5 / 0.2 ms (average mu = 0.116, current mu = 0.037) task scavenge might not succeed

<--- JS stacktrace --->

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x5602acf6bfc0 node::Abort() [node-red]
2: 0x5602acebc3b6 node::FatalError(char const*, char const*) [node-red]
3: 0x5602ad0c9002 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node-red]
4: 0x5602ad0c925a v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node-red]
5: 0x5602ad256595 [node-red]
6: 0x5602ad2566d4 [node-red]
7: 0x5602ad267eb7 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node-red]
8: 0x5602ad2686f5 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node-red]
9: 0x5602ad26ba1c v8::internal::Heap::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node-red]
10: 0x5602ad26bf65 v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node-red]
11: 0x5602ad236c0a v8::internal::factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node-red]
12: 0x5602ad5355ef v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node-red]
13: 0x5602ad88e5b9 [node-red]

I have looked into many options; one I added to the init script is --max-old-space-size=1024. I changed this value to 2048 and 2560 on two different devices for testing, and I can see memory increasing at the same rate; the only difference is that the higher the value, the longer it takes to crash. The log above may give you an idea about the issue.
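As a side note for anyone trying the same flag: as far as I understand, --max-old-space-size only raises the V8 heap ceiling, it does not fix a leak, which matches what we see (higher value, later crash). To confirm the flag is actually being picked up by the daemonized process, a small standalone check can be run with the same node binary and flags. A sketch (the filename is our choice; the reported limit is roughly the flag value plus young-generation overhead):

// check-heap-limit.js -- run as: node --max-old-space-size=1024 check-heap-limit.js
const v8 = require('v8'); // built-in module, no install needed

const limitMiB = v8.getHeapStatistics().heap_size_limit / (1024 * 1024);
console.log('heap_size_limit:', limitMiB.toFixed(0), 'MiB');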

It seems something is leaking memory. Such problems can be very difficult to find, as I expect you know. You might try disabling sections of the flow in order to work out what is causing it; possibly it is one of the contrib nodes. I have had node-red running fair-sized flows on Pis for months without issue, so there is no big fundamental issue with node-red itself. It may well be one of the contrib nodes, or if you are handling large chunks of data perhaps something in your code is not releasing it.

@Colin Thanks for the reply, and sorry for the delayed response from my end; we have been trying to figure this out with multiple approaches, so I have a few updates.

In one approach I took memory readings with only one contrib package in the build at a time. This showed that the build using the node-red-contrib-opcua package grew gradually, while builds with the other packages stayed stable over long durations. But I am not sure what the exact issue behind it is; please suggest ways, if you know of any, to track down the leaking block. I tried the Chrome inspector, as many articles online suggest, to compare heap snapshots, but this too doesn't point to a specific JS filename/function name.
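For completeness, the snapshot route was along these lines: since Node.js v11.13 the built-in v8 module can write heap snapshots from inside the process, and two .heapsnapshot files can be diffed in Chrome DevTools (Memory tab, "Comparison" view) to see which constructors are accumulating. A rough sketch of what could go at the top of settings.js, which is plain Node.js; the signal choice is ours, not anything Node-RED-specific:

// settings.js (top of file): write a heap snapshot whenever the process
// receives SIGUSR2, e.g. `kill -USR2 <pid>` -- once early, once after growth.
// (SIGUSR1 is avoided because Node.js reserves it for the debugger.)
const v8 = require('v8');
process.on('SIGUSR2', () => {
    const file = v8.writeHeapSnapshot(); // writes a .heapsnapshot file in the cwd
    console.log('heap snapshot written:', file);
});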

Another update: one of my colleagues, who has good C++ experience, followed the Node.js build-from-source documentation (link here) and tried the Node.js ASan build technique to test with the leak sanitizer. In this case we kept only Node-RED running, without any contrib packages; this also reported a memory error, as follows.

Direct leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x7f3f9f098947 in operator new(unsigned long) (/lib/x86_64-linux-gnu/libasan.so.5+0x10f947)
#1 0x555a16e26da0 in node::native_module::NativeModuleLoader::LoadBuiltinModuleSource(v8::Isolate*, char const*) (memory-leak/node-js-asan-debug-out/bin/node+0x14bbda0)
#2 0x555a16e28d1a in node::native_module::NativeModuleLoader::LookupAndCompile(v8::Local<v8::Context>, char const*, std::vector<v8::Local<v8::String>, std::allocator<v8::Local<v8::String> > >, node::native_module::NativeModuleLoader::Result) (memory-leak/node-js-asan-debug-out/bin/node+0x14bdd1a)
#3 0x555a16e2b2e3 in node::native_module::NativeModuleLoader::CompileAsModule(v8::Local<v8::Context>, char const*, node::native_module::NativeModuleLoader::Result*) (memory-leak/node-js-asan-debug-out/bin/node+0x14c02e3)
#4 0x555a16e38106 in node::native_module::NativeModuleEnv::CompileFunction(v8::FunctionCallbackInfo<v8::Value> const&) (memory-leak/node-js-asan-debug-out/bin/node+0x14cd106)
#5 0x555a178c13a5 in v8::internal::FunctionCallbackArguments::Call(v8::internal::CallHandlerInfo) (memory-leak/node-js-asan-debug-out/bin/node+0x1f563a5)
#6 0x555a178c7350 in v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) (memory-leak/node-js-asan-debug-out/bin/node+0x1f5c350)
#7 0x555a178dcfe7 in v8::internal::Builtin_Impl_HandleApiCall(v8::internal::BuiltinArguments, v8::internal::Isolate*) (memory-leak/node-js-asan-debug-out/bin/node+0x1f71fe7)
#8 0x555a178e0a95 in v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) (memory-leak/node-js-asan-debug-out/bin/node+0x1f75a95)
#9 0x555a1af49b3e in Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit (memory-leak/node-js-asan-debug-out/bin/node+0x55deb3e)
#10 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#11 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#12 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#13 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#14 0x555a1ad1657e in Builtins_ArgumentsAdaptorTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53ab57e)
#15 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#16 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#17 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#18 0x555a1ad1657e in Builtins_ArgumentsAdaptorTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53ab57e)
#19 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#20 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#21 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#22 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#23 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#24 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#25 0x555a1ad2e074 in Builtins_InterpreterEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53c3074)
#26 0x555a1ad24959 in Builtins_JSEntryTrampoline (memory-leak/node-js-asan-debug-out/bin/node+0x53b9959)
#27 0x555a1ad24737 in Builtins_JSEntry (memory-leak/node-js-asan-debug-out/bin/node+0x53b9737)
#28 0x555a18066359 in v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) (memory-leak/node-js-asan-debug-out/bin/node+0x26fb359)
#29 0x555a1806aac0 in v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) (memory-leak/node-js-asan-debug-out/bin/node+0x26ffac0)

Direct leak of 24 byte(s) in 1 object(s) allocated from:
#0 0x7f3f9f098947 in operator new(unsigned long) (/lib/x86_64-linux-gnu/libasan.so.5+0x10f947)
#1 0x555a16e26da0 in node::native_module::NativeModuleLoader::LoadBuiltinModuleSource(v8::Isolate*, char const*) (memory-leak/node-js-asan-debug-out/bin/node+0x14bbda0)
#2 0x555a16e28d1a in node::native_module::NativeModuleLoader::LookupAndCompile(v8::Local<v8::Context>, char const*, std::vector<v8::Local<v8::String>, std::allocator<v8::Local<v8::String> > >, node::native_module::NativeModuleLoader::Result) (memory-leak/node-js-asan-debug-out/bin/node+0x14bdd1a)
#3 0x555a16e385e2 in node::native_module::NativeModuleEnv::LookupAndCompile(v8::Local<v8::Context>, char const*, std::vector<v8::Local<v8::String>, std::allocator<v8::Local<v8::String> > >, node::Environment) (memory-leak/node-js-asan-debug-out/bin/node+0x14cd5e2)
#4 0x555a16b1c4d1 in node::ExecuteBootstrapper(node::Environment*, char const*, std::vector<v8::Local<v8::String>, std::allocator<v8::Local<v8::String> > >, std::vector<v8::Local<v8::Value>, std::allocator<v8::Local<v8::Value> > >) (memory-leak/node-js-asan-debug-out/bin/node+0x11b14d1)
#5 0x555a16b1fdc9 in node::Environment::BootstrapInternalLoaders() (memory-leak/node-js-asan-debug-out/bin/node+0x11b4dc9)
#6 0x555a16b22693 in node::Environment::RunBootstrapping() (memory-leak/node-js-asan-debug-out/bin/node+0x11b7693)
#7 0x555a168e6403 in node::CreateEnvironment(node::IsolateData*, v8::Local<v8::Context>, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, node::EnvironmentFlags::Flags, node::ThreadId, std::unique_ptr<node::InspectorParentHandle, std::default_delete<node::InspectorParentHandle> >) (memory-leak/node-js-asan-debug-out/bin/node+0xf7b403)
#8 0x555a16ddf2dd in node::NodeMainInstance::CreateMainEnvironment(int*) (memory-leak/node-js-asan-debug-out/bin/node+0x14742dd)
#9 0x555a16ddfa61 in node::NodeMainInstance::Run() (memory-leak/node-js-asan-debug-out/bin/node+0x1474a61)
#10 0x555a16b2d0ed in node::Start(int, char**) (memory-leak/node-js-asan-debug-out/bin/node+0x11c20ed)
#11 0x7f3f9ea480b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)

SUMMARY: AddressSanitizer: 1656 byte(s) leaked in 69 allocation(s).

This output may give you some idea; the command we used was:

ASAN_OPTIONS=new_delete_type_mismatch=0:detect_leaks=1 ./node ../lib/node_modules/node-red/node_modules/node-red/red.js --userDir=/home/ubuntu/nodeRedTemp2/.node-red/

I am not entirely sure about this second approach as I am not very familiar with it; perhaps you can take a look at it in case there is any possibility of a leak in Node-RED itself.

Lastly, have you faced any issue with the exec node in Node-RED? We recently found that when we run a flow that generates a 4K string, swap memory drains out very quickly. I have attached a sample flow below; sooner or later this flow produces a spawn ENOMEM error. Could it be the root cause of the memory leak issue? (A possible Function-node alternative is sketched after the flow.)

[{"id":"d2cce0a1.c9155","type":"tab","label":"4k Block","disabled":false,"info":""},{"id":"4399d6bc.8bfd98","type":"inject","z":"d2cce0a1.c9155","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"1","crontab":"","once":true,"onceDelay":".5","topic":"","payload":"","payloadType":"str","x":110,"y":180,"wires":[["4eb351c9.e5284"]]},{"id":"52c73fd4.13fd3","type":"debug","z":"d2cce0a1.c9155","name":"Output to cloud","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":560,"y":180,"wires":[]},{"id":"4eb351c9.e5284","type":"exec","z":"d2cce0a1.c9155","command":"base64 /dev/urandom | head -c 4096","addpay":false,"append":"","useSpawn":"false","timer":"","oldrc":false,"name":"4K Random Data","x":330,"y":180,"wires":[["52c73fd4.13fd3"],[],[]]}]

Have you upgraded to the latest version of node-red as was suggested earlier?

I am running your exec flow on Ubuntu, with the inject set to 0.02 seconds, so running 50 times/sec. I think maybe the memory is going up; I will leave it running for some time.

Yes, as per the previous conversation, I updated Node-RED to v1.2.7 last month. That is what the build is currently running; as previously mentioned, we build the package locally on a build machine, we don't install everything from the internet.

Well, I have run it (the exec node into a debug node) for three hours at 50 times a second and there is no sign of leaking memory.

When you ran it and used up all the memory, which process was consuming the memory?

I don't know anything about the leak analyser tool you have used so can't help there I am afraid.

Hi @nikhilkinkar,
I'm FAR from a memory expert.
But could it perhaps be of any help to:

  • Use my node-red-contrib-heap-dump node to create two heap dumps (with some time in between), then compare them using an external tool (e.g. Chrome DevTools) to analyze the difference.

  • Start your Node-RED in debug mode and connect the Chrome debugger to Node.js (as I described here). Then you can go to the "Memory" tab and watch memory live, or take heap dumps and compare them.


There are a lot of tutorials (like this one) that describe how to do this. Perhaps you have already tried this kind of analysis?

You mention the node-red-contrib-opcua node. I don't know how it works internally, but perhaps it uses memory (e.g. allocated by C++ code) that is managed outside of the V8 engine (which runs Node.js and manages all its JavaScript memory). In that case you may not find anything using heap dumps. But it is just an assumption ...
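A crude way to test that assumption, without extra tooling: watch the process's own counters over time. If rss keeps climbing while heapUsed stays flat, the growth is outside the V8 heap and heap dumps won't show it. Just a sketch of a Function node fed by a slow Inject (again assuming the sandbox exposes process):

// Function node body: compare the V8 heap against total process memory.
// rss growing while heapUsed stays flat suggests a native (non-V8) leak.
const mu = process.memoryUsage();
const MiB = 1024 * 1024;
node.status({ text:
    'heapUsed ' + (mu.heapUsed / MiB).toFixed(1) + ' MiB / rss ' +
    (mu.rss / MiB).toFixed(1) + ' MiB' });
return null; // status display only; no message sent downstream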

Good luck with it!
Bart

@Colin have you seen any continuous increase in memory usage, or did it in your case return to normal after a Node.js GC run?

@BartButenaers I have checked out your package; initially it gave me an error while installing. I will try it again with a new build.

@Colin a few more updates I would like to add. On our build system, I installed Node-RED using the following command.

npm --target_arch=amd64 --target_platform=linux install --prefix nodejs/lib/node_modules/node-red --unsafe-perm node-red

Is this fine, or could it cause issues on the device? The full steps are:

  1. Build Node.js from source.
  2. Install Node-RED with the command above.
  3. Install contrib packages inside .node-red/ with a command like the one below:

npm --target_arch=amd64 --target_platform=linux install node-red-contrib-opcua

  4. Create a tar package from the build output and move it to the device.
  5. Extract the files, set up the init script in /etc/init.d/, and start Node-RED with the following command:

USER_DIR='/opt/.node-red'
DAEMON=/opt/apps/nodejs/lib/node_modules/node-red/node_modules/node-red/red.js
OPTIONS="--userDir=$USER_DIR"

start-stop-daemon \
    --chuid $USER:$GROUP \
    --start \
    --quiet \
    --pidfile $PIDFILE \
    --exec $DAEMON \
    -- $OPTIONS >> $LOG 2>&1 &

So my question is: are these steps fine, or do we need to make any corrections to them?
Additionally, are there any standard practices for creating flows, in case our flows are doing something wrong?

Sorry, those questions are above my knowledge level.

I need to check again to give an answer on your earlier question.
