Yeah, but I don't think you understand what I'm wanting to do.
(I think the terminology is TL;DR for this bit.)
Ok, I have a message.
It is for TTS processing. (And I don't have an API key for Google TTS - or any TTS.)
There are .....4 (?) extra attributes to the message.
1 - MUTE
2 - ALERT
3 - SPLIT (I'll get back to this soon)
4 - VOLUME
(and I hope no more have slipped my memory.)
5 - PAUSE (forgot this one. Maybe because I don't use it much.)
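To make that concrete, here's roughly the message shape I mean - a sketch only, with property names of my own choosing, not any node's actual API:

```javascript
// A sketch of a message carrying the extra attributes listed above.
// Property names are illustrative assumptions, not real node properties.
const msg = {
    payload: "The washing machine has finished.", // the text for TTS
    mute: true,     // mute other audio (e.g. the TV) while it plays
    alert: "ding",  // tone to play before the message
    split: false,   // whether the message should be split into parts
    volume: 80,     // playback volume, 0-100
    pause: 0        // optional pause (ms) before playback
};

console.log(Object.keys(msg)); // payload plus the five attributes
```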
What's going on and why?
Well, the messages can vary from basic ones which are just plain text, to muting, pausing, being split..... and so on.
These are given to the message at the start of their journey.
So there are a lot of options - yeah: option overload. 
(See screen shot)
And that is only one machine.
Other machines have a whole lot of options for TTS and alarms.
(and - alas - there are some things there that don't need to be. I just like to keep an eye on what's going on under the hood.)
So these were working all well and good until the messages started to become LONG.
The TTS node would just spit the dummy and not work. (Annoying.)
So what I did was a simple SPLIT of the message, sending each subsequent message after the one before it was complete.
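The split itself is simple enough - something like this, sketched in plain JavaScript (chunking on sentence boundaries, with a `parts` property so order can be tracked downstream; this is an illustration, not the actual split node's code):

```javascript
// A sketch of splitting a long message into sentence-sized chunks,
// roughly what a split node does. Plain JS, not actual Node-RED code.
function splitMessage(text) {
    // Break on sentence-ending punctuation; each part becomes its own message.
    const parts = text.match(/[^.!?]+[.!?]*/g) || [text];
    return parts.map((p, i) => ({
        payload: p.trim(),
        parts: { index: i, count: parts.length } // so downstream nodes know the order
    }));
}

const msgs = splitMessage("First sentence. Second sentence. Third sentence.");
console.log(msgs.length); // 3
```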
This is all good in theory, but things aren't perfect and problems started to happen.
Ideally with no attributes the message would go through, be played and nothing discernible happens.
But when you start adding a tone to be played BEFORE the message, it would be like:
ding
..... wait....... message heard
.
Not too bad, but noticeable.
But when you had the MUTE it became a problem
(be it with or without a DING.)
mute
(TV) ding
.... wait.... message heard
.... mute off
And again, all good.
But the delays started to become noticeable.
Then - sorry, but to show the evolution of where I'm at now and HOW I got here - BIG messages started to happen.
So I split them with a simpler SPLIT node.
ding
.... wait.....message1
.....message 2
... and so on.
Yeah, not too bad.
Here's the kicker!
Mute:
mute
..... message 1
....mute off
... message 2
.... and so on.
Suffice to say words were said.
So I had to work out a better way.
So I had to completely restructure the whole thing.
Queue the message, send it to the TTS node, get the attributes back - alas, they are lost going through the TTS node (kinda - I'm open to alternatives) - THEN process any MUTE, DING, VOLUME, etc. ....
THEN send it to the player.
Of course all that also has to be queued and clocked so the split messages sound fluid.
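The queue-and-clock part could be sketched as a simple gate: hold messages in an array and only release the next one when the player signals it has finished (a rough sketch, not my actual flow):

```javascript
// A rough sketch of a queue-and-clock gate: messages wait in a queue,
// and the next one is released only when the previous playback completes.
class PlaybackGate {
    constructor(send) {
        this.queue = [];
        this.busy = false;
        this.send = send; // callback that forwards a message to the player
    }
    push(msg) {
        this.queue.push(msg);
        this.next();
    }
    done() { // call this when the player reports playback complete
        this.busy = false;
        this.next();
    }
    next() {
        if (!this.busy && this.queue.length > 0) {
            this.busy = true;
            this.send(this.queue.shift());
        }
    }
}

const played = [];
const gate = new PlaybackGate(m => played.push(m));
gate.push("part 1");
gate.push("part 2");        // queued; part 1 is still "playing"
console.log(played.length); // 1
gate.done();                // part 1 finished -> part 2 released
console.log(played);        // ["part 1", "part 2"]
```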
The SPLIT node saves all the extra stuff (attributes, as I've been calling them) to flow.context, and when the message comes out the other side of the TTS node, it goes through the second node and gets all its attributes back.
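That stash-and-restore pattern looks roughly like this (a sketch with a `Map` standing in for flow.context; the function names are mine, and keying on `_msgid` is an assumption):

```javascript
// A sketch of stashing attributes before the TTS node and restoring
// them after. A Map stands in for flow.context; names are illustrative.
const flowContext = new Map();

// Node before TTS: strip the attributes off and stash them by _msgid.
function stashAttributes(msg) {
    const attrs = { mute: msg.mute, alert: msg.alert,
                    volume: msg.volume, pause: msg.pause };
    flowContext.set(msg._msgid, attrs);
    return { _msgid: msg._msgid, payload: msg.payload }; // only this survives TTS
}

// Node after TTS: look the attributes back up and reattach them.
function restoreAttributes(msg) {
    const attrs = flowContext.get(msg._msgid) || {};
    flowContext.delete(msg._msgid); // avoid stale entries piling up
    return Object.assign({}, msg, attrs);
}

const original = { _msgid: "abc123", payload: "Dinner is ready.",
                   mute: true, volume: 70 };
const afterTts = stashAttributes(original);  // pretend TTS happened in between
const restored = restoreAttributes(afterTts);
console.log(restored.mute, restored.volume); // true 70
```

One caveat worth checking: this lookup only works if the key survives the TTS node unchanged and is unique. If the node re-issues or duplicates `_msgid` (two messages with the same `_msgid`, as you saw), stashed entries collide, and a message's attributes - or the message itself - can seem to vanish and then reappear in front of a later one.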
OR SO I THOUGHT!
Now and then messages would..... vanish.
Poof!
Gone!
And I don't know where/how.
Though when I start tracing them, weird things happen.
All the queues are empty, yet when I send another message through, the lost one appears in front of the newer message.
WHERE WAS IT HIDING? (Rhetorical)
Anyway, while digging, I saw two messages come through with the same _msgid
and it got me wondering.....
The flow I have is very complicated and I am sure it is overdoing a lot of things.
But I've only added this extra stuff because messages get stuck or lost, and when I add it, they seem to work.
There, that's the John Dory.
Does it make sense?