As @cymplecy says, there are too many variables to advise you confidently. The main factors are the hardware you are running on, the complexity of the flow, and indeed the design of the flow. For a fairly simple flow I've seen a Pi handle hundreds of messages a second and a laptop several thousand a second, but if the flow is complex, involving lookups or extensive logging, it may be a lot less. On the other hand, if it's a purely HTTP flow you may be able to design it to scale horizontally, throwing more processors at it and running several instances in parallel - but that will take careful application design.
At the end of the day you will have to build it and benchmark it yourself in a representative scenario.
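To illustrate the horizontal-scaling idea: one common pattern is to run several Node-RED instances (each on its own port) and put a reverse proxy in front to spread HTTP requests across them. This is just a sketch, not a recipe - the instance count, the ports (1881/1882 here are made up), and the endpoint path are all placeholders, and it only works if your flow is stateless enough that any instance can serve any request:

```nginx
# Hypothetical nginx fragment: round-robin two Node-RED instances.
# Ports and paths are examples only - adjust to your own deployment.
upstream nodered_pool {
    server 127.0.0.1:1881;   # Node-RED instance 1
    server 127.0.0.1:1882;   # Node-RED instance 2
}

server {
    listen 80;

    location /api/ {
        proxy_pass http://nodered_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Note this only helps if each request is self-contained - if your flow keeps state in context variables, parallel instances won't share it, which is part of why the application design matters.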