A few MQTT topics to SQLite

I am very new to this and not a very quick learner, but I learn from most examples and tips.
After a few days of struggling with this, though, I need some pointers about getting different MQTT topics into an SQLite db.

I have a SQLite database with a table that can store:
temperature, humidity, pressure, currentdate, currenttime and device.
I can test it with an inject node with:
INSERT INTO dhtreadings(temperature, humidity, currentdate, currenttime, device) VALUES (19.6, 53.8, date('now'), time('now'), 'manual');
With DB Browser for SQLite I can see that it works.

Now I would like to put these 3 different MQTT topics in this database, but I am only able to put one value in the db, through a function node with this:
var newMsg = { payload: msg.payload };
newMsg.topic="insert into dhtreadings(temperature) values ("+newMsg.payload+")";
return newMsg;

So I'd like to ask: how can I put the output of different MQTT nodes into one table?

Have a look at the join node

What I've done (in the past) is to write the data from the MQTT nodes to global variables and then, every so often, write the variables to the database. If you choose a time/frequency that is longer than the update interval from MQTT then you will always have the latest values.
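
A minimal sketch of that pattern, written as plain functions so it runs outside Node-RED (in a real flow the cache would live in global or flow context, and the insert would be triggered by an inject node; the topic, table and column names are assumptions):

```javascript
// Sketch of the "cache latest values, write periodically" pattern.
const cache = {}; // topic -> latest payload

// Body of the function node wired to the MQTT-in node(s):
function onMqttMessage(msg) {
    cache[msg.topic] = msg.payload; // just remember the latest value
    return null;                    // nothing to pass on
}

// Body of the function node triggered by the periodic inject node:
function buildInsert() {
    return {
        topic: "INSERT INTO dhtreadings(temperature, humidity, currentdate, currenttime, device) " +
               "VALUES (" + cache["sensors/BME01/temperature"] + ", " +
               cache["sensors/BME01/humidity"] + ", " +
               "date('now'), time('now'), 'BME01')"
    };
}

// Simulated incoming MQTT messages:
onMqttMessage({ topic: "sensors/BME01/temperature", payload: 19.6 });
onMqttMessage({ topic: "sensors/BME01/humidity", payload: 53.8 });
console.log(buildInsert().topic);
```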

Hope this helps.

Thanks. I did, but got confused about how. Do I need to create an array?

I'd create a key/value object as you can then easily refer to the correct value in your following function
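
To illustrate, a join node in key/value mode hands the next function node one object keyed by topic, so the values can be picked out by name (topic, table and column names here are assumptions, not taken from the flow in this thread):

```javascript
// Body of a function node following a join node in key/value mode.
function buildInsert(msg) {
    const p = msg.payload; // e.g. { "sensors/BME01/temperature": 19.6, ... }
    msg.topic = "INSERT INTO dhtreadings(temperature, humidity, pressure, currentdate, currenttime, device) " +
                "VALUES (" + p["sensors/BME01/temperature"] + ", " +
                p["sensors/BME01/humidity"] + ", " +
                p["sensors/BME01/pressure"] + ", " +
                "date('now'), time('now'), 'BME01')";
    return msg;
}

// Simulated join node output:
const out = buildInsert({
    payload: {
        "sensors/BME01/temperature": 19.6,
        "sensors/BME01/humidity": 53.8,
        "sensors/BME01/pressure": 1013.2
    }
});
console.log(out.topic);
```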

Thanks again. It looks like that is going to help me get this done.


I added two different MQTT topics (all from one sensor, named BME01 as device name) just to see, and it looks great so far.

Now it is just, how can I put this in one table?

Just be warned that this db is going to get big very quickly (depending on how often the values are updated, of course). If you are using something constrained like a Pi, you are likely to run into serious problems after a while.

Though more complex, something like InfluxDB is much better at handling timeseries data like this. Not only will it handle much larger datasets (a quick check on my live system shows that my 2 dbs currently take up 428MB), it also has built-in tools for managing timeseries data. For example, I take my home sensor values (temp., humidity, light, etc.) about every 50-60 sec but I only keep that raw data for 24 hrs. An automatic summary query (called a "Continuous Query" in InfluxDB terms) aggregates the detail data to hourly data that is kept for 5 years. That's been running for a couple of years now.
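
As a rough InfluxQL sketch of that setup (database, retention-policy and measurement names are made up for illustration, not taken from my live system):

```sql
-- Keep raw data for a day, hourly aggregates for ~5 years:
CREATE RETENTION POLICY "one_day" ON "sensors" DURATION 24h REPLICATION 1 DEFAULT
CREATE RETENTION POLICY "five_years" ON "sensors" DURATION 260w REPLICATION 1

-- Automatically roll raw readings up into hourly means:
CREATE CONTINUOUS QUERY "cq_hourly" ON "sensors"
BEGIN
  SELECT mean("temperature") AS "temperature", mean("humidity") AS "humidity"
  INTO "sensors"."five_years"."readings_1h"
  FROM "sensors"."one_day"."readings"
  GROUP BY time(1h), *
END
```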

Thanks for the warning.
In that case, what you say about InfluxDB is very interesting.

I started this test with SQLite because of a tutorial I was following.
This is because I want to know more about the possibilities of openHAB and the InfluxDB (v1.0) Persistence binding (persistence-influxdb 1.13.0.M3) I seem to be using.
But with my lack of knowledge about openHABian and, for example, the file location of the InfluxDB database, I am a little bit lost with InfluxDB.
Yes, I have an influxdb.cfg in an openHAB-conf\services folder, but I am unable to find /etc/influxdb/influxdb.conf on my Raspberry Pi with the openHABian image on it.
This SQLite database may have been a bad start, but I learned something as well.

Maybe it's not too difficult anymore, so I may look into InfluxDB again.
Thanks again

InfluxDB is a little confusing and I often forget what I've learned since it very rarely needs attention once set up. I kept copious notes, especially about continuous queries and retention periods so that I would be able to reproduce things later. The query language can be a bit of a challenge initially as well if you are used to SQL. It is very similar to SQL but that can occasionally confuse since InfluxDB is not a relational database.

Still, I went through a number of database options as I was developing my home automation system with Node-RED and, even with the learning curve, InfluxDB has proven to be the most robust and reliable even on the limited resources of my Pi2. Reading and writing doesn't need a dedicated node (though one exists) since the interface is simply a REST one, so the http request node is all you need (I have it set up in a sub-flow).
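
For example, a function node feeding the http request node might look like this (InfluxDB 1.x accepts "line protocol" text on POST /write?db=<database>; the host, database and measurement names here are assumptions):

```javascript
// Build the msg an http request node needs to write one point to InfluxDB 1.x.
function buildWrite(values) {
    return {
        method: "POST",
        url: "http://localhost:8086/write?db=sensors",
        // Line protocol: measurement,tag=... field=...,field=...
        payload: "readings,device=BME01" +
                 " temperature=" + values.temperature +
                 ",humidity=" + values.humidity
    };
}

const msg = buildWrite({ temperature: 19.6, humidity: 53.8 });
console.log(msg.payload); // the line protocol text that would be POSTed
```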

It automatically shards the data files so you don't have to worry particularly about file sizes - MongoDB is notoriously picky about file sizes and regularly corrupted databases well before the theoretical file size limit.

It does use a fair bit of resource but still manages to work well alongside everything else that is running on my Pi2. I will transition to a Pi3 at some point; I've started setting it up. That will give me some more resources.

I'll likely split things between the 2, but it really isn't needed as long as you are sensible about retention periods, and I love that it has the built-in capability to trim and aggregate data so you don't need a separate process for it.


By default, your InfluxDB data is in /var/lib/influxdb/data/ though I don't know OpenHAB so I don't know if it puts it somewhere else. You might need to check on the OpenHAB forum if so.

Thanks again for the advice to use a join node. I used it with the array option because the key/value object option was a bit too much info for me to handle.

Also added a template node.

And this is going through a function node.

A part of the result

I only need to figure out how I can add the date and time, as I did before with an inject node:

INSERT INTO dhtreadings(temperature, pressure, altitude, humidity, lux, currentdate, currenttime, device) VALUES (22.4, 1020, 10, 48, 12000, date('now'), time('now'), 'manual');

because that is giving me errors if I try to add only the date for now.

Any idea about how I must do this?

Understanding javascript objects is very important for using Node-RED.
I found the following tutorial very helpful
https://www.w3schools.com/Js/js_objects.asp

If you look at your debug node output, check that your nodes are outputting what you expect and compare them to the syntax of SQL ( there are loads of SQL tutorials online)
Your date('now') is being seen as a piece of text and isn't therefore being processed (HINT: it's in green in the display). You need to follow the same approach you used for msg.payload.
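
The trap in miniature: if date('now') lands inside quotes in the generated SQL, SQLite stores the literal text instead of evaluating it. A sketch of a function node that keeps the function calls as part of the SQL itself (table and column names follow the earlier examples in this thread):

```javascript
// Build the INSERT so date('now')/time('now') stay unquoted SQL, while
// the sensor reading is concatenated in as a value.
function buildInsert(msg) {
    msg.topic = "INSERT INTO dhtreadings(temperature, currentdate, currenttime, device) " +
                "VALUES (" + msg.payload + ", date('now'), time('now'), 'BME01')";
    return msg;
}

console.log(buildInsert({ payload: 22.4 }).topic);
```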

If you always want the current timestamp when inserting the values, redefine your table with a default as in this example from StackOverflow:

CREATE TABLE MyTable(
    ID INTEGER PRIMARY KEY,
    Name TEXT,
    Other STUFF,
    Timestamp DATETIME DEFAULT CURRENT_TIMESTAMP
);

Then simply don't try to insert values for the timestamp fields.
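
For instance, assuming the dhtreadings table from earlier in the thread is redefined with such a default, the insert becomes:

```sql
-- The timestamp column is simply left out and filled in automatically:
INSERT INTO dhtreadings(temperature, humidity, pressure, device)
VALUES (19.6, 53.8, 1013.2, 'BME01');
```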

The last time I did something with a db was almost 20 years ago with Delphi, and time has let things disappear from my own database upstairs! Because I am exploring home automation, I came across OpenHAB. A bit naive, I just bought a Raspberry Pi and installed an SD card with OpenHABian. I made some sensors with the ESP8266-01 module and let them connect to the MQTT broker on the Raspberry Pi. In OpenHAB I have installed InfluxDB and Grafana.

I only feel that I am not fully in control of what OpenHAB is doing with the data from my various ESP8266-01 sensors, because of things like:

Directory /var/lib/influxdb/data: permission denied

That's just one reason I thought about using SQLite, so I would have more control over the data, and maybe just forget about that OpenHABian and instead use Raspbian with OpenHAB to at least also have a desktop on the Pi.

But it seems that I have InfluxDB working well for now. I also have the InfluxDB nodes, so maybe I need to explore this more.
Thanks again for your kind words.

It is likely that they are running InfluxDB under a different user/group. You will be able to view the data folder using sudo, assuming that the user you are logged in with has sudo capability (standard on Raspbian but I don't know OpenHABian). I expect that the OpenHAB docs have the details somewhere.

Once you know what group is being used, you can simply add yourself (or the user running Node-RED) to the group and you should then have access.

There are some other nice tricks once you have things accessible. For example, newer versions of Grafana have some amazing add-ins that are worth exploring. There is an InfluxDB module that lets you more easily get at the technical details of your InfluxDB installation and databases.

You can also install Telegraf (which is from the same people as InfluxDB) which will let you aggregate information from all sorts of sources including your Pi's diagnostics, web page (and DNS server) performance, router SNMP data and more. By working all 3 together, you can easily build amazing dashboards.

By the way, if you haven't already done so, converting your pressure data to sea-level pressure will let you compare data with weather forecasts. You may well know that already.
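
For reference, a sketch of the usual sea-level reduction formula (the formula and constants are the standard approximation, not from this thread; inputs are station pressure in hPa, altitude in metres, and temperature in °C, all of which a BME280 provides):

```javascript
// Reduce station pressure to equivalent sea-level pressure.
function seaLevelPressure(stationHPa, altitudeM, tempC) {
    const kelvin = tempC + 0.0065 * altitudeM + 273.15;
    return stationHPa / Math.pow(1 - (0.0065 * altitudeM) / kelvin, 5.257);
}

// With the example values used earlier in the thread:
console.log(seaLevelPressure(1020, 10, 22.4).toFixed(1));
```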

Thanks again, TotallyInformation.
I redefined the table as you advised with that StackOverflow example, named the table after the sensor at the same time, and it just works.

With this I think I can go on, and learn and adjust along the way. Like that 2-hour difference (it comes out as UTC, not as local time!).
But most important is the 2-second interval.
Like you said before, this is going to be a very big table in a short time, and I think I need to do something like you said:

about every 50-60sec but I only keep that raw data for 24hrs. An automatic summary query (called a "Continuous Query" in InfluxDB terms) aggregates the detail data to hourly data that is kept for 5 years

Anyway, I cannot thank you enough for your information. You really helped me out here.

No problem.

Regarding UTC vs Local. Always store and process date/time values as UTC. That is very important. You should only convert to local time for display. Otherwise, you are bound to hit edge cases and issues. It makes life a lot simpler and more robust.
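
A tiny sketch of that split (the example timestamp is made up; SQLite's datetime('now') writes "YYYY-MM-DD HH:MM:SS" in UTC, so the "Z" suffix makes the parsing explicit):

```javascript
// Treat the stored value as UTC; convert only when displaying.
const stored = "2019-03-31 00:30:00";                 // as stored in the db (UTC)
const utc = new Date(stored.replace(" ", "T") + "Z"); // parse explicitly as UTC

console.log(utc.toISOString());    // canonical UTC form, for storage/processing
console.log(utc.toLocaleString()); // local time, for display only
```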

Indeed. I read about that.

it is considered to be a good idea to use UTC timestamps as you know exactly what the time was at that moment regardless of how the machine is configured, where it is located in the world, what the users local time is, the servers local time, etc.

And of-course,

Best practice says that you work in UTC and always convert at the UI.

triggered me because of the commotion of switching into and out of daylight saving time here in Europe. Thinking about it and reading it here again makes it another good tip.
Thanks.

Hi All

I have been trying to replicate this but I keep running into a brick wall.

I have used the Join node to combine values for x, y, z from an orientation sensor into an array. The debug node at this stage says Array [3] and shows the payload values.

All is well until I come to insert it into Sqlite3. The debug node attached to the sqlite node just says [empty] and the only thing added is the auto-increment to the id field.

Please could someone help me get the changing array values into the appropriate fields?

Many thanks in advance, guys.

Show us the debug of what is going into the sqlite node, and the settings for that node.
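
In the meantime, here is a minimal sketch of the usual pattern for the array case (a join node in array mode delivers msg.payload as [x, y, z], in arrival order, so a fixed topic order matters; the orientation table and its column names are assumptions):

```javascript
// Body of a function node between the join node and the sqlite node.
function buildInsert(msg) {
    const [x, y, z] = msg.payload; // array from the join node
    msg.topic = "INSERT INTO orientation(x, y, z) VALUES (" +
                x + ", " + y + ", " + z + ")";
    return msg;
}

// Simulated join node output:
console.log(buildInsert({ payload: [1.5, -0.3, 9.8] }).topic);
```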