BIGSSH - passing server IP to SSH connector?

Hi - I'm looking to use Node-RED to pull the same data from multiple servers via SSH and combine it into a single SQL table, which can be queried by our reporting tools. The servers all authenticate via AD, so the User/Password credentials are always the same.

The issue is that I can't find a way, using BIGSSH / ssh-v3 etc., to pass the server IP in as a variable - each server needs static credentials configured in Node-RED, meaning I need an instance of the SSH node per server.

I've got multiple servers to run through and the list will keep changing - ideally I want the credentials configured in Node-RED, then to pass the list of servers in, ready to be looped through and processed.

Anyone aware of a way to do this?

Thanks

Consider pushing the data from your multiple servers over MQTT to a broker on your Node-RED device.

You could use Node-RED on each server to do the pushing, or Python, etc.
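A minimal sketch of the publishing side (mosquitto_pub from the mosquitto-clients package; the broker address and topic scheme are placeholders for your own):

#!/bin/bash
# publish a small stats reading from this server to the central broker
# "nodered-host" and the topic layout are hypothetical - adjust to your setup
stats='{"hostname":"'$(hostname)'","uptime":"'$(cut -d' ' -f1 /proc/uptime)'"}'
mosquitto_pub -h nodered-host -t "servers/$(hostname)/stats" -m "$stats"

An mqtt-in node subscribed to servers/+/stats then receives one message per server.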

@jbudd is correct. Push from the servers to node-red.

Here is how I deal with the problem.

What are you trying to report ?

I created a really lightweight Bash script to gather server stats and send them via a curl POST to a Node-RED instance. I use a cron job to run the script every minute, but the interval can be changed to suit your needs.
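For reference, the crontab entry looks something like this (assuming the script is saved as /usr/local/bin/systemStatsJSON.sh - adjust the path; a 55-second measurement window keeps each run inside its minute):

* * * * * /usr/local/bin/systemStatsJSON.sh 55 curl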

In node-red:
[Screenshot of the flow: http in → function → http response / debug]

[
    {
        "id": "726704620e7e3ab4",
        "type": "http in",
        "z": "b0c00a7ba74c5b20",
        "name": "",
        "url": "/stats_in",
        "method": "post",
        "upload": false,
        "swaggerDoc": "",
        "x": 590,
        "y": 140,
        "wires": [
            [
                "1342fc3d01ae4522"
            ]
        ]
    },
    {
        "id": "ce3668c1aabed4f5",
        "type": "http response",
        "z": "b0c00a7ba74c5b20",
        "name": "",
        "statusCode": "",
        "headers": {},
        "x": 950,
        "y": 120,
        "wires": []
    },
    {
        "id": "1342fc3d01ae4522",
        "type": "function",
        "z": "b0c00a7ba74c5b20",
        "name": "",
        "func": "msg.statusCode = \"200\";\nmsg.payload = \"got the data!\"\nmsg.headers = {};\nmsg.headers['X-Robots-Tag'] = \"noindex\";\nmsg.headers['content-type'] = 'application/json';\n\n\nreturn msg;",
        "outputs": 1,
        "timeout": 0,
        "noerr": 0,
        "initialize": "",
        "finalize": "",
        "libs": [],
        "x": 780,
        "y": 140,
        "wires": [
            [
                "ce3668c1aabed4f5",
                "f8edc780db55be6a"
            ]
        ]
    },
    {
        "id": "f8edc780db55be6a",
        "type": "debug",
        "z": "b0c00a7ba74c5b20",
        "name": "",
        "active": true,
        "tosidebar": true,
        "console": false,
        "tostatus": false,
        "complete": "true",
        "targetType": "full",
        "statusVal": "",
        "statusType": "auto",
        "x": 950,
        "y": 160,
        "wires": []
    }
]
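Once deployed, you can sanity-check the endpoint with a hand-rolled POST (swap 127.0.0.1 for your Node-RED host):

curl -d '{"test":1}' -H "Content-Type: application/json" -X POST http://127.0.0.1:1880/stats_in

You should get "got the data!" back and see the message in the debug sidebar.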

The Bash script. NOTE: in the curl command near the end, change 127.0.0.1 to your Node-RED server's address.
Called using:

systemStatsJSON.sh <timeInSeconds> [nr|curl]
$ systemStatsJSON.sh 60 curl
#!/bin/bash
# Copyright (c) 2023 One DB Ventures, LLC (AKA, No Flipping Switches)
#
# MIT License - https://opensource.org/license/mit/
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

# Parameter one sets the time in seconds between reads (defaults to 1)
sleepDurationSeconds=$1
if [ -z "$1" ] || [ "$1" -lt 1 ]; then
  sleepDurationSeconds=1
fi

##########################################################
########## Stuff Measured between sleep duration ##########
##########################################################

# previous
readarray -t previousStats < <( awk '/^cpu /{flag=1}/^intr/{flag=0}flag' /proc/stat )
previousrx_bytes=$(head -n1 /sys/class/net/eth0/statistics/rx_bytes)
previoustx_bytes=$(head -n1 /sys/class/net/eth0/statistics/tx_bytes)

sleep $sleepDurationSeconds

# current
readarray -t currentStats < <( awk '/^cpu /{flag=1}/^intr/{flag=0}flag' /proc/stat )
currentrx_bytes=$(head -n1 /sys/class/net/eth0/statistics/rx_bytes)
currenttx_bytes=$(head -n1 /sys/class/net/eth0/statistics/tx_bytes)



#########################
########## CPU ##########
#########################

#Start JSON line
json='{"cpu":{"usagePercent":{'

# loop through the arrays
for i in "${!previousStats[@]}"; do
  # Split each array's one-line string into an element per field
  previousStat_element_array=(${previousStats[i]})
  currentStat_element_array=(${currentStats[i]})

  # Get the time columns (fields 1-7 after the cpu label)
  previousStat_columns="${previousStat_element_array[@]:1:7}"
  currentStat_columns="${currentStat_element_array[@]:1:7}"

  # Replace the column separator (space) with + and sum
  previous_cpu_sum=$((${previousStat_columns// /+}))
  current_cpu_sum=$((${currentStat_columns// /+}))

  # Get the delta between the two reads
  cpu_delta=$((current_cpu_sum - previous_cpu_sum))

  # Get the idle time delta (field 4 is idle)
  cpu_idle=$((currentStat_element_array[4] - previousStat_element_array[4]))

  # Calc time spent working
  cpu_used=$((cpu_delta - cpu_idle))

  # Calc cpu percentage used (bc prints a leading 0 for values below 1)
  cpu_usage=$(echo 'scale = 2; x=100 * '$cpu_used'/'$cpu_delta'; if(x>0 && x<1){"0"};  x' | bc)

  # Get the cpu label (cpu, cpu0, cpu1, ...) to use as the json key
  cpu_used_for_calc="${currentStat_element_array[0]}"

  # save outputs to json string
  if [[ "$cpu_used_for_calc" == "cpu" ]]; then
    json+='"total":'$cpu_usage','
  else
    json+='"'$cpu_used_for_calc'":'$cpu_usage','
  fi

done

#remove last ','
json=${json::-1}

# end usagePercent in json line
json+='},'

########## CPU freq ##########

# start freqGHz line in json
json+='"freqGHz":{'

# Put CPUs freq into an array
readarray -t cpuFreqsArrayRAW < <( awk '/cpu MHz/ { print $4 }' /proc/cpuinfo )
cpuFreqsArray=( "${cpuFreqsArrayRAW[@]%.*}" )

#check if cpuFreqsArray is empty
if (( ${#cpuFreqsArray[@]} )); then
  # loop through the array
  for i in "${!cpuFreqsArray[@]}"; do
    # convert cpuFreq from MHz to GHz and save outputs to json string
    json+='"cpu'$i'":'$(echo 'scale = 2; x='${cpuFreqsArray[i]}'/'1000 '; if(x>0 && x<1){"0"};  x' | bc)','
  done
  #remove last ','
  json=${json::-1}
fi

# end cpuFreq in json line
json+='},'

########## CPU Loadavg ##########

# start cpuLoadavg line in json
json+='"loadavg":{'

# Get first 3 entries to array
cpuLoadavgArray=($(head -n1 /proc/loadavg))
cpuLoadavgArrayformatted=(${cpuLoadavgArray[@]:0:3})

# loop through the array
for i in ${!cpuLoadavgArrayformatted[@]}; do
  # map index 0,1,2 to the 1, 5 and 15 minute labels via 3i^2+i+1
  loadAverageMinutes=$(echo 'scale = 0; 3*'$i'^2+1*'$i'+1' | bc)

  # add this load average entry to the json string
  json+='"'$loadAverageMinutes'min":'${cpuLoadavgArrayformatted[i]}','
done

#remove last ','
json=${json::-1}

# end cpuLoadavg in json line
json+='},'

########## CPUs present (number of CPUs) ##########

cpusPresentRAW=$(head -n1 /sys/devices/system/cpu/present)
cpusPresent=$((${cpusPresentRAW##*-} + 1))

# cpusPresent line in json
json+='"coresPresent":'$cpusPresent

# end of cpu section
json+='},'

##############################
########## Network ###########
##############################

# start network line in json
json+='"network":{'

# start hostname line in json
json+='"hostname":"'$(hostname)'",'

# start eth0 line in json
json+='"eth0":{'

# start interface line in json
json+='"interface":'$(ip -j addr show dev eth0)','

# start rx_bytes line in json
json+='"rx_bytes":'$(($currentrx_bytes-$previousrx_bytes))','

# start tx_bytes line in json
json+='"tx_bytes":'$(($currenttx_bytes-$previoustx_bytes))

# end of eth0 section
json+='}'

# end of network section
json+='},'

##############################
########## RAM/SWAP ##########
##############################

# start Ram line in json
json+='"memory":{'

# MemTotalGigabytes line in json
MemTotalGigabytes=$(echo 'scale = 2; '$(awk '/MemTotal/ { print $2 }' /proc/meminfo)'/'1048576 | bc)
json+='"ramTotalGigabytes":'$MemTotalGigabytes','

# MemAvailableGigabytes line in json
MemAvailableGigabytes=$(echo 'scale = 2; '$(awk '/MemAvailable/ { print $2 }' /proc/meminfo)'/'1048576 | bc)
json+='"ramAvailableGigabytes":'$MemAvailableGigabytes','

# MemUsedGigabytes line in json
json+='"ramUsedGigabytes":'$(echo 'scale = 2; x='$MemTotalGigabytes'-'$MemAvailableGigabytes '; if(x>0 && x<1){"0"};  x' | bc)','

# SwapTotalGigabytes line in json
SwapTotalGigabytes=$(echo 'scale = 2; '$(awk '/SwapTotal/ { print $2 }' /proc/meminfo)'/'1048576 | bc)
json+='"swapTotalGigabytes":'$SwapTotalGigabytes','

# SwapFreeGigabytes line in json
SwapFreeGigabytes=$(echo 'scale = 2; '$(awk '/SwapFree/ { print $2 }' /proc/meminfo)'/'1048576 | bc)
json+='"swapFreeGigabytes":'$SwapFreeGigabytes','

# SwapUsedGigabytes line in json
json+='"swapUsedGigabytes":'$(echo 'scale = 2; x='$SwapTotalGigabytes'-'$SwapFreeGigabytes '; if(x>0 && x<1){"0"};  x' | bc)

#1 GB = 1048576 kB

# end of RAM/SWAP section
json+='},'

##############################
########## Storage ###########
##############################

# start Storage line in json
json+='"storage":{'

# size in Megabytes
json+='"sizeMegabytes":'$(df -BM / | awk 'NR==2{print substr($2, 1, length($2)-1)}')','

# used in Megabytes
json+='"usedMegabytes":'$(df -BM / | awk 'NR==2{print substr($3, 1, length($3)-1)}')','

# available in Megabytes
json+='"availableMegabytes":'$(df -BM / | awk 'NR==2{print substr($4, 1, length($4)-1)}')','

# percentage used
json+='"percentageDiskUsed":'$(df / | awk 'NR==2{print substr($5, 1, length($5)-1)}')

# end of Storage section
json+='},'

#################################
########## Uptime ###############
#################################
uptimeRAW=$(head -n1 /proc/uptime)
json+='"uptime_in_days":'$(echo 'scale = 4; x='$(echo "${uptimeRAW%%.*}")'/'86400 '; if(x>0 && x<1){"0"};  x' | bc)','



#################################
########## Time Frame ###########
#################################

# measurementInterval line in json
json+='"measurementInterval_in_seconds":'$sleepDurationSeconds

### end/complete JSON String ###
json+='}'

# handle output mode from parameter 2: nr (raw, no newline), curl (POST to Node-RED), default (echo)
if [[ "$2" == "nr" ]]; then
  echo -ne "$json"
elif [[ "$2" == "curl" ]]; then
  # send JSON to the manager - change 127.0.0.1 to your Node-RED server's address
  curl --max-time 30 -d "$json" -H "Content-Type: application/json" -X POST http://127.0.0.1:1880/stats_in
else
  echo "$json"
fi

exit 0

Hope this helps your brain make the switch

Some good suggestions have been made. A lot depends on the value of the data and the systems you are dealing with, along with their possible vulnerabilities; we know nothing of any of that, so we can only give very generic advice.

Assuming this is some commercial venture, my suggestion would be not necessarily to push from each server to the Node-RED server, but rather to a central location, presumably a Windows file server. Give the Node-RED server access to that file share as well, which is easy to do at a filing-system level. Then you don't need to do anything special in Node-RED itself: it can use the network filing system - it should only be given read-only access - and process the data from there.

No need for SSH at all unless the servers are only connected via a wide area network. If so, that changes the dynamic somewhat and probably needs a different approach. If so, let us know and we can steer you accordingly.

Thanks all -

  • I'd rather not push from the servers; they are simple container hosts and I don't want to run custom cron jobs / systemd services on them as this would need additional monitoring etc.
  • The data that I am retrieving from each is from a DB running within a Docker network; we don't expose the DB to the host network (all connections are within Docker), so I can't access it via TCP. We don't expose the Docker API either - the only option at the moment is to SSH to the server and grab the data using docker exec (sketched after this list).
  • The servers are internal hosts on our internal LAN; no internet access to worry about.
  • Longer term plan is to expose the data via an API in our application, but we need to get something in place now.
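To make it concrete, the per-server fetch is essentially the below (container name, account and query are placeholders, and key auth is assumed just for the sketch) - this is what I'd like to drive from a loop over a changing host list:

#!/bin/bash
# fetch-extract.sh <host> - what runs against each server today (hypothetical names)
host="$1"
ssh -o BatchMode=yes -o ConnectTimeout=10 "svc_reporting@$host" \
  "docker exec app_db psql -U app -At -c 'SELECT 1'" \
  > "/tmp/extract_${host}.txt"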

Appreciate any thoughts you can offer

Internal networks should not be assumed to be secure.

Personally, I would probably create a dedicated network drive to take the DB extracts from each instance, with write-only access from the Docker instances and read-only access from the Node-RED server. The advantage being that Node-RED never even needs to know what other servers exist; it only needs to walk through the available files. And each Docker server PUSHES data out and so remains completely secure. If needed, each of the servers could be given access to only its own folder, which would further isolate things. With separate folders, each Docker server could take responsibility for the extract lifecycle by being given read/write access, while data separation would still be preserved.
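As a rough sketch of that layout (the mount point, container and database names are hypothetical; each host writes only into its own folder):

#!/bin/bash
# on each docker host: drop the extract into this host's own folder on the share
outdir="/mnt/extracts/$(hostname)"
docker exec app_db pg_dump -U app appdata > "${outdir}/extract_$(date +%Y%m%d%H%M%S).sql"

Node-RED then just walks whatever files appear under /mnt/extracts, with no knowledge of the servers behind them.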

As I say though, a lot depends on value and much more.
