I have a table with no primary key. There will be several duplicated records.
The table will hold alarms from a facility. 4 columns: tag, description, start_time and end_time.
The only rule is that there can't exist duplicate tags while end_time is null. When a fault is detected, an alarm is inserted. The only other action on that record in the table is an update when the fault is normalized.
The facility can get multiple faults at the same time. Thus, a multiple insert is needed, and at the same time the insert needs to check that the same fault does not already exist in the table. For this and other uses I have noticed, it would be useful to be able to connect to the SQLite db from a function node.
The only rule is that there can't exist duplicate tags while end_time is null.
The facility can get multiple faults at the same time.
The only other action on that record in the table is an update when the fault is normalized.
and at the same time the insert needs to check that the same fault does not already exist in the table
It seems to me that the combination of tag, start_time and end_time must be unique and therefore should have an index?
If there is no way to uniquely identify a record, how can you safely update it?
At least you need an auto-generated unique id.
Databases provide ways to enforce data integrity. It seems barmy to reinvent the wheel.
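For what it's worth, SQLite can enforce the "no duplicate tag while active" rule itself with a partial unique index. A minimal sketch, using the column names from this thread and Python's built-in sqlite3 module purely for demonstration (the index statement is plain SQLite and works from any client):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE faults (
        tag         TEXT,
        description TEXT,
        start_time  TEXT,
        end_time    TEXT
    )
""")
# Partial unique index: uniqueness of "tag" is only enforced for
# rows that are still active (end_time IS NULL).
con.execute("""
    CREATE UNIQUE INDEX one_active_fault_per_tag
    ON faults (tag) WHERE end_time IS NULL
""")

con.execute("INSERT INTO faults (tag, description, start_time) "
            "VALUES ('Core temperature', 'Core temperature rising', '11:22:59')")

# A second active row for the same tag is rejected by the index...
try:
    con.execute("INSERT INTO faults (tag, description, start_time) "
                "VALUES ('Core temperature', 'duplicate', '11:23:01')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

# ...but once the fault is resolved, the same tag may become active again.
con.execute("UPDATE faults SET end_time = '11:30:00' "
            "WHERE tag = 'Core temperature' AND end_time IS NULL")
con.execute("INSERT INTO faults (tag, description, start_time) "
            "VALUES ('Core temperature', 'Core temperature rising', '11:45:12')")

print(duplicate_rejected)  # True
```

With that index in place, a duplicated active fault simply cannot be inserted, no matter what the flow does.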
It seems barmy to reinvent the wheel.
I would suggest MQTT instead, which is better suited for alarm/state tracking.
Is it possible to use MQTT in a function node?
I use global, flow and node context in my program. I use function nodes heavily.
The tag is not unique. Start_time and end_time are not unique. Even if I had a unique key, what would the system use it for?
When a fault occurs an alarm is inserted. The alarm itself can be defined beforehand and the system can give the type of fault a key. But the same fault can occur many times, so it is not unique. I could give the table a primary key or use the rowid that the table has. But the system doesn't know about it.
A fault occurs, gets inserted and is an active fault. When the fault resolves it gets updated with an end time. That record is now historical and won't be touched ever again.
There will be a unique row only in active state.
There will be a unique row only in active state. Does "active state" mean end_time is NULL?
the system can give the type of fault a key. Is this what the "tag" field is?
The facility can get multiple faults at the same time.
I imagined a few simultaneous alarm records:
Start_time | Tag              | Description                          | End_time
11:22:59   | Coolant leak     | Fluid detected in containment vessel | NULL
11:22:59   | Coolant pressure | Primary coolant circuit low pressure | NULL
11:22:59   | Core temperature | Core temperature rising              | NULL
If there can only be one "Core temperature" record where end_time is NULL then only one "Core temperature" record can be resolved at any given end_time.
Thus the combination of tag and end_time is unique and you can safely
UPDATE faults SET end_time = datetime('now') WHERE tag = 'Core temperature' AND end_time IS NULL
But that's my way and of course you may prefer to do it differently.
Yes, this is exactly how I do it now, jbudd. But faults can be detected and resolved through a loop in a function. An array of objects is created. Based on whether end_time is undefined or not, they are sent to the insert or update pipe. But there can be multiple incidents of each. I am able to create SQL for that too. But some rare conditions can occur, and then a duplicated fault in active state happens. I have problems creating SQL to guard against that. One way I thought of is to use SQLite directly from a function node.
It would be interesting to see the function that detects and resolves faults.
I wonder if these duplicates can occur if your array and database get out of sync.
I guess it may be possible to call sqlite (or mqtt) from within the function node, but it just feels like Node-red is the wrong environment for such an approach.
I have a JSON fault object in memory where faults are defined. If the fault in question has 0 on its value key then the fault is not active. This fault object is reset back to the original at startup. I want the system to run checks during startup and then detect and set the faults, rather than storing the state so it survives a shutdown.
If a fault occurs and is inserted, and the system shuts down and comes back up again, the value for every fault is 0. But the fault is still in the table. When a fault occurs, it is only inserted in the table if its value was 0 first. That's when I get a duplicated record.
Not sure what you mean by "the system" but if you can identify when it restarts, would it be ok to delete all database records where end_time is null before you re-detect and recreate them?
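The cleanup suggested above is a single statement. A small sketch with Python's sqlite3 (table layout taken from the thread; the sample rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # in a real flow this would be the alarms db file
con.execute("CREATE TABLE faults (tag TEXT, description TEXT, "
            "start_time TEXT, end_time TEXT)")
# One stale "active" row left over from before the restart, one historical row.
con.execute("INSERT INTO faults VALUES "
            "('Coolant leak', 'Fluid detected', '11:22:59', NULL)")
con.execute("INSERT INTO faults VALUES "
            "('Coolant leak', 'Fluid detected', '10:00:00', '10:30:00')")

# On restart: drop any rows still marked active; historical rows are untouched.
con.execute("DELETE FROM faults WHERE end_time IS NULL")
print(con.execute("SELECT COUNT(*) FROM faults").fetchone()[0])  # 1
```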
I want start_time to be the first time the system (Node-Red) detects the fault, in active state of course.
I'm leaning towards the idea that maybe, before I insert an alarm, I must do a select on active alarms and run an integrity check before inserting the new alarm.
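A separate SELECT followed by an INSERT can race with itself when several branches of a flow run close together. One alternative (a sketch, assuming the table layout from this thread, shown via Python's sqlite3) is to fold the existence check into the INSERT itself, so check and insert happen in one statement:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE faults (tag TEXT, description TEXT, "
            "start_time TEXT, end_time TEXT)")

def insert_if_not_active(con, tag, description, start_time):
    """Insert an alarm only if no active row (end_time IS NULL) exists for the tag."""
    cur = con.execute(
        """
        INSERT INTO faults (tag, description, start_time)
        SELECT ?, ?, ?
        WHERE NOT EXISTS (
            SELECT 1 FROM faults WHERE tag = ? AND end_time IS NULL
        )
        """,
        (tag, description, start_time, tag),
    )
    return cur.rowcount  # 1 if inserted, 0 if an active duplicate existed

print(insert_if_not_active(con, "Coolant leak", "Fluid detected", "11:22:59"))  # 1
print(insert_if_not_active(con, "Coolant leak", "Fluid detected", "11:23:05"))  # 0
```

The same `INSERT ... SELECT ... WHERE NOT EXISTS` text can be sent through whatever sqlite node or client the flow already uses; the Python wrapper here is only for illustration.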
But the same fault can occur many times, so it is not unique.
But shouldn't it be unique?
I have a json fault object in memory where faults are defined.
SQLite is a relational database: why not store those fault types in a table and use foreign keys to relate them?
In addition I would suggest setting a unique constraint on the fault type, because even though the "fault can occur many times", what do you do with this information? Store them all and, when it restores, update all those records with the latest timestamp? A recurrence counter field would make more sense in that case (plus something like a last_notify field holding the timestamp of the last received event).
Unique constraints will prohibit duplicates.
Then again, I would just store all the events sequentially and query what you need to know at the time you need it.
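The counter idea above can be sketched with SQLite's upsert syntax (requires SQLite 3.24+). The `occurrences` and `last_notify` columns are the suggestion from this post, not part of the original table; Python's sqlite3 is used here only to make the SQL runnable:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE faults (
        tag         TEXT,
        description TEXT,
        start_time  TEXT,
        end_time    TEXT,
        occurrences INTEGER DEFAULT 1,
        last_notify TEXT
    )
""")
# Partial unique index: at most one active row per tag.
con.execute("CREATE UNIQUE INDEX active_tag ON faults (tag) WHERE end_time IS NULL")

def report_fault(con, tag, description, ts):
    # New fault: insert an active row. Repeated fault while already active:
    # bump the counter and remember when it was last seen.
    con.execute(
        """
        INSERT INTO faults (tag, description, start_time, last_notify)
        VALUES (?, ?, ?, ?)
        ON CONFLICT (tag) WHERE end_time IS NULL
        DO UPDATE SET occurrences = occurrences + 1,
                      last_notify = excluded.last_notify
        """,
        (tag, description, ts, ts),
    )

report_fault(con, "Coolant leak", "Fluid detected", "11:22:59")
report_fault(con, "Coolant leak", "Fluid detected", "11:23:10")
row = con.execute("SELECT occurrences, last_notify FROM faults "
                  "WHERE tag = 'Coolant leak' AND end_time IS NULL").fetchone()
print(row)  # (2, '11:23:10')
```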
But shouldn't it be unique?
What would I use the "uniqueness" for? The system (Node-red) doesn't know the unique key at the time of a fault. The system monitors values, like temperatures, and if the temperature is over or under the limit it gets inserted as an incident. Active faults have end_time as null. When the fault normalizes, end_time is updated with a value, making the record historical. The uniqueness is the tag and end_time combined, but only in active state.
insert into table (tag, descr, start_time) values ('some tag', 'some description', 'someStartTime')
update table set end_time = 'someEndTime' where tag = 'some tag' and end_time is null
The table is used for displaying the incident to the user.
When storing information like this in the "context" way, it is directly accessible in a function node.
And several other nodes have access, like change, switch etc.
When the monitoring system asks the "PLC" through Modbus, I can just return the value from global.fault_object.someDefinedFaultObject.value with node-red-contrib-modbus-api.
It sounds like a nightmare having to deal with databases, MQTT etc.
After some re-coding/configuring I have a solution that I am happy with.
But have you had any experience with SQLite eating resources?
After I started using SQLite my device started to struggle.
Should I use Postgres instead?
SQLite is easier for getting up and running/PoC/smaller quantities of data etc. Postgres is better for long-term/larger quantities of data.
That's why I thought to use SQLite.
But if others have any experience with SQLite making devices struggle, I would try something else.
If not, then it is probably something with my code.
It depends on the frequency of operations.
What size of data and what frequency are you reading and writing to SQLite?
If you edit your flow to temporarily prevent data being written or read, does the CPU throttle down?
As you can see in the picture, the moment I enable the inject node with a 3 s interval, the CPU struggles and Node-RED responds very slowly. The moment I disable the inject node the CPU is happy.
The inject node on a 3 s interval starts a flow that goes through a JSON object checking if a value is 0 or 1. It fills two arrays: one for faults and one for non-faults. Then two SQL statements are created from the arrays: one insert-if-not-exists and one update-if-exists.
Then two selects: one for active faults and one for historical faults. That data is fed to an HTML table "template".
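Sketching that partition step in Python for illustration (the real flow is a Node-RED function node, and the `fault_object` shape here is an assumption): splitting into two arrays and running one parameterized batched statement per group avoids hand-building SQL strings.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE faults (tag TEXT, description TEXT, "
            "start_time TEXT, end_time TEXT)")

fault_object = {  # assumed shape: value 1 = faulty, 0 = normal
    "Coolant leak":     {"value": 1, "description": "Fluid detected"},
    "Core temperature": {"value": 0, "description": "Core temperature rising"},
}
now = "11:22:59"

# Partition into new faults (to insert) and cleared faults (to update).
faulty = [(tag, f["description"], now, tag)
          for tag, f in fault_object.items() if f["value"] == 1]
normal = [(now, tag)
          for tag, f in fault_object.items() if f["value"] == 0]

# Insert only if the tag has no active row yet.
con.executemany(
    "INSERT INTO faults (tag, description, start_time) "
    "SELECT ?, ?, ? WHERE NOT EXISTS "
    "(SELECT 1 FROM faults WHERE tag = ? AND end_time IS NULL)",
    faulty)
# Resolve any active row for cleared tags.
con.executemany(
    "UPDATE faults SET end_time = ? WHERE tag = ? AND end_time IS NULL",
    normal)

print(con.execute("SELECT tag, end_time FROM faults").fetchall())
```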
All of that is one long flow. It used to be two flows with some parallel jobs.
But trying to please the CPU, I now have one long flow and no parallel jobs, just to make sure I don't talk to the database at the same time...
Not a high frequency, and very little data.
And the JSON object is not big.
Do you mean that if you make sure you are not running multiple SQLite actions in parallel then it is OK? SQLite is not good at parallel actions; that is a known issue. The database is a single file with everything in it and is not designed for parallel access. It does work, but it is not good.
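For what it's worth, SQLite's behavior under concurrent access can be softened a little: WAL journal mode lets readers proceed while one writer writes, and a busy timeout makes a blocked connection wait instead of failing immediately. A sketch with Python's sqlite3 (the PRAGMAs themselves are plain SQLite and can be issued from any client; the temp file stands in for the real alarms database):

```python
import os
import sqlite3
import tempfile

# WAL mode needs a real database file, so use a temp file for the demo.
path = os.path.join(tempfile.mkdtemp(), "alarms.db")
con = sqlite3.connect(path, timeout=5.0)  # Python-side wait on a locked db

# WAL persists in the database file once set.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
# Make this connection wait up to 5 s on a lock instead of erroring.
con.execute("PRAGMA busy_timeout = 5000")
print(mode)  # 'wal'
```

This does not make SQLite a true multi-writer database, but it often removes the "database is locked" errors that occasional overlapping jobs produce.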
I had parallel jobs interacting with SQLite. But as you say, that might cause problems.
So I re-designed the solution to have no parallel jobs interacting with SQLite.