Hello

I have connected an energy meter that sends me the actual watts every second. To calculate the energy cost I need KW/H.

Does anyone have an idea how to do this?

Greetings

# Create KW/H from Watt

So you are getting a watts figure once per second.

To get to KW/h:

(W / 1000) * (s / (60*60))

But really, you probably need to accumulate an hour's worth of data - summing 60*60 one-second readings and dividing by 3600 would give you watt-hours, and dividing by 1000 would give you KW/h.

Otherwise you are only getting a spot figure rather than something more accurate.
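As a sanity check of that conversion, here it is as plain JavaScript (not tied to any node; `kWh` is just an illustrative helper name):

```javascript
// Energy in kWh for a constant power of `watts` held for `seconds`:
// kWh = (watts / 1000) * (seconds / 3600)
function kWh(watts, seconds) {
    return (watts / 1000) * (seconds / 3600);
}

console.log(kWh(2000, 3600)); // 2       -> a 2 kW load for one hour
console.log(kWh(500, 60));    // ~0.0083 -> a 500 W load for one minute
```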

You might even accumulate data over 24 hrs, 1wk and 1 month which would give you figures that would be more in keeping with the values you will see from your energy provider.

It is worth saying that you can get InfluxDB to do this for you. It's a faff to work out initially but once done, the calculations will happen automatically, all you would need to do is to feed in the data.

I have the data and put the watts into an InfluxDB every second.

The way to calculate it is in my head, but how do I calculate this in Node-RED?

I can't find any possible way.

Perhaps if you can write down the way that is in your head someone can help you?

As I outlined, accumulate over an hour, day, week or month as required. The simplest is to accumulate the data for an hour. Adding up the 3600 one-second values you will receive in an hour and dividing by 3600 will give you `x` watt-hours; dividing that value by 1000 will give you `y` KW/hr.

You accumulate these with a function node that adds sequential values together until you have 3600 of them, then divide the total by 3600 and by 1000, reset your accumulator variable back to zero, and output the result, which you send to your dashboard.
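A sketch of that function node, written as a plain closure so it runs anywhere (in a real function node the two counters would live in `context` storage and the returned value would go out as `msg.payload`):

```javascript
// Accumulate 3600 one-second watt readings, then emit the hour's energy in kWh.
function makeHourlyAccumulator() {
    var sum = 0;    // watt-seconds collected so far
    var count = 0;  // readings collected so far
    return function onReading(watts) {
        sum += watts;
        count += 1;
        if (count >= 3600) {
            var kWh = sum / 3600 / 1000; // W·s -> Wh -> kWh
            sum = 0;
            count = 0;
            return kWh;   // one output per hour
        }
        return null;      // still accumulating
    };
}

// One hour of a constant 2 kW load:
var acc = makeHourlyAccumulator();
var out = null;
for (var i = 0; i < 3600; i++) out = acc(2000);
console.log(out); // 2
```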

Better still is to send all the data to InfluxDB and let it do the calculations. Then you can have a simple flow that requests the last value once per hour.

If you do want to do this in a function node, take a look at `array.reduce`, where all values are put in an array first. If the timing distance between each of those values is the same, no further magic is needed. Instead of resetting the index, you can accumulate more than 3600 sets of data by looking at `currentIndex % 3600`: if that value is 0 (with `currentIndex` non-zero), you have exactly 1 hour since the last time it happened.
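That `reduce` idea might look like this (plain JavaScript; `readings` is a placeholder for however the values were collected):

```javascript
// Sum one-second watt readings in hour-sized chunks with Array.prototype.reduce.
function hourlyKWh(readings) {
    var hours = [];
    readings.reduce(function (sum, watts, currentIndex) {
        sum += watts;
        // Every 3600 samples we have exactly one hour of data.
        if ((currentIndex + 1) % 3600 === 0) {
            hours.push(sum / 3600 / 1000); // W·s -> Wh -> kWh
            return 0;                      // restart the running sum
        }
        return sum;
    }, 0);
    return hours;
}

// Two hours of a constant 1 kW load:
console.log(hourlyKWh(new Array(7200).fill(1000))); // [ 1, 1 ]
```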

Which option is preferable depends on your specific use case and the architecture it runs on.

On the other hand, I know there are nodes in the core library that will let you accumulate messages until a set number of messages has arrived, or a specific amount of time has passed. I haven’t played with them (yet), but I think it might be the batch node. @TotallyInformation probably knows this better than me (after all I’m only working with the tool for a bit over a month now).

You might take a look at a setup like that too.

If all the data is in Influx I believe that with the appropriate query you should be able to get influx to do it. I am not sure exactly how though. Perhaps we have an influx expert here, or perhaps asking on an influx forum might be the best way.

By the way I think it is probably kWh you want and not kW/H which would be a meaningless figure.

I think this is what you are looking for

https://docs.influxdata.com/influxdb/v1.3/query_language/functions/#integral

with appropriate scaling. Give it a try and see what it gives.

I tried that approach Colin, but found that it made serious demands upon the processing power of my Pi 3b, so I went down a different route.

I'll post what I am using - give me a few minutes...

I tackled this a little differently...

Using the method described at the beginning of the thread, it is dependent upon your data arriving exactly every second, and any deviation will result in a calculation error.

I don't know how you are getting data into Node-RED, but if it's MQTT (for example) there will be delays, so some messages may arrive at 1.112345 secs, whilst another may arrive 0.89 secs later, and so on. If you are basing the calculations on the expectation that it will be exactly a second, you can see how the errors will occur, especially when the input value is also changing.

I get around that by basing the calculation on the measured number of milliseconds between the messages, as per this flow. NOTE - uses node-red-contrib-simpletime

```
[{"id":"1bf0b5e4.35c0ea","type":"function","z":"c53060.842a0fa","name":"kWh","func":" var grid = msg.payload;\n var watthr = flow.get('watthr') || 0;\n \n //Find time since last update\n\t\tvar updateTime = msg.myepoch;\n\t\tvar lastTime = flow.get('lastTime2')||updateTime;\n\t\tvar t = updateTime-lastTime;\n\t\tflow.set(\"lastTime2\",updateTime);\n\n // Energy calculations\n\t\twatthr = watthr+(t*grid)/3600000;\n\t\tflow.set(\"watthr\",watthr);\n\t\t\n\tmsg.payload = {\n\t\tWatthr: watthr,\n\t\tkWatthr: watthr/1000\n\t };\n\t\treturn msg;","outputs":1,"noerr":0,"x":480,"y":1320,"wires":[["1a07b1cb.a053be"]]},{"id":"274e35d2.67affa","type":"inject","z":"c53060.842a0fa","name":"","topic":"","payload":"1000","payloadType":"num","repeat":"1","crontab":"","once":false,"onceDelay":0.1,"x":150,"y":1320,"wires":[["d814d63c.fc96e8"]]},{"id":"d814d63c.fc96e8","type":"simpletime","z":"c53060.842a0fa","name":"","x":320,"y":1320,"wires":[["1bf0b5e4.35c0ea"]]},{"id":"1a07b1cb.a053be","type":"debug","z":"c53060.842a0fa","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","x":660,"y":1320,"wires":[]}]
```

The `watthr` flow context can be easily reset to zero at midnight, so it always outputs the current day's value.

I'm using influx, so I save the Wh feed to the database (every 5 seconds in my case); it's then very easy to process it in Grafana. To do this, change the function to:

```
var grid = msg.payload;
//Find time since last update
var updateTime = msg.myepoch;
var lastTime = flow.get('lastTime2')||updateTime;
var t = updateTime-lastTime;
flow.set("lastTime2",updateTime);
// Energy calculations
msg.payload =(t*grid)/3600000;
return msg;
```

...and in Grafana...

Thanks for the discussion. I'm very interested in exactly that topic too.

Currently I get my measurements (Solar Power and Power Usage) via a modtcp node, once per minute.

Or so I thought. The measurement device is often busy (with whatever) and not responding.

Therefore as a first hack I poll the device every minute and store the response (if there is any) into a global variable.

With another flow I read that value once per minute and send it to influx/grafana. That way there is never a missing value in the sequence, although some values might not be perfectly accurate.

I will study Paul-Reed's solution to see how I can do the last part better.

Again, many thanks, this is very helpful.

Sounds interesting - why would this cause so much load? Do you have more details?

Second this. So the calculation would be every time the watts are sampled:

```
total in kWh += (watts / 1000) * (sample time in seconds / 3600);
```
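A running total built on that formula, using the measured gap between samples rather than assuming exactly one second (a plain-JavaScript sketch; in a function node the timestamp and total would sit in flow context):

```javascript
// Time-weighted energy accumulation: each sample contributes
// (watts / 1000) * (seconds since the previous sample / 3600).
function makeEnergyMeter() {
    var totalKWh = 0;
    var lastMs = null;  // timestamp of the previous sample, in ms
    return function onSample(watts, nowMs) {
        if (lastMs !== null) {
            var dtSeconds = (nowMs - lastMs) / 1000;
            totalKWh += (watts / 1000) * (dtSeconds / 3600);
        }
        lastMs = nowMs;
        return totalKWh;
    };
}

// A 1 kW load sampled at irregular intervals covering one hour:
var meter = makeEnergyMeter();
meter(1000, 0);
meter(1000, 1800000);              // +30 min
console.log(meter(1000, 3600000)); // +30 min more -> 1 (kWh)
```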

That node is certainly looking decent for this purpose. You will however have to feed it constantly with the new data before it can really calculate the value and give an output.

The node however appears to be unmaintained. There are 3 open issues that haven’t even been acknowledged, 1 open pull request, and not a single one of either closed. All the code was committed on the same day too, and the version number changes are based on tiny changes such as the group in the palette, or keywords in the `package.json` file. Without interaction from the author it’s hard to get an idea of code quality. A quick look at it tells me that the value is calculated by taking the current time when the message comes in. **Feeding it historical data will go wrong.**

The influxdb/grafana setup sounds the best to me, especially since you’re already storing the values in influxdb.

The integral() function in influx will require a fair amount of processing as it has to multiply each value by the time interval and then add them all up.

I agree that a much more efficient way would be to not only save the instantaneous power to influx but also the estimated kWh since the last sample. The influx query still needs to add up the kWh for each sample in the time range but does not need to do the interval calculation and multiplication.
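A sketch of what each stored sample could look like under that scheme (the field names are made up; in Node-RED the object would be handed to an influx out node):

```javascript
// For each sample, record the instantaneous power plus the energy that
// sample represents, so later queries only need to sum the kWh field.
function sampleFields(watts, dtSeconds) {
    return {
        power_w: watts,
        energy_kwh: (watts / 1000) * (dtSeconds / 3600)
    };
}

// A 1200 W reading covering a 5-second interval:
var fields = sampleFields(1200, 5);
console.log(fields.power_w);     // 1200
console.log(fields.energy_kwh);  // ~0.00167
```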

To expand upon @Colin's comments;

In my case, I have three values to calculate - power consumed, power generated (solar) & power diverted (to heat water).

Each energy value arrives every 5 seconds, so in 24 hrs that's 3 x 17,280 = 51,840 data points over which to perform the integral calculation **EVERY 5 SECONDS** (towards the end of each day).

Also, let's not forget that the above is just to calculate the energy; every 5 seconds the instantaneous power values are also stored and subject to their own calculations.

I think that the query I used was something like:

`SELECT cumulative_sum(integral("grid")) /3600 AS "Usage"`

Comparing that calculation to the query that I'm currently using - `SELECT sum("accusage") AS "Usage"` - it's much less intensive, and the processor load is significantly reduced (by an order of magnitude!).

Very interesting problem here, and some creative solutions...

Although I've not used it, I think that Bart's `node-red-contrib-interval-length` node will provide the timings between the readings, for more accurate power calculations. Maybe I can work up a prototype later this evening.

My solution pre-dates that node, but it looks promising, and easier to implement than mine.

Better yet, he would *probably* add a "power" mode calculation, if we asked nicely ;*)

Mathematically, I think he would just need to add a time-based integral function, which could be applied to any given field on the incoming msg. I've used another contrib node to do statistical aggregation calculations like this before, but cannot find it now.