Hello,
I'm trying to get the HTTP Request node to send out multiple payloads while a single HTTP request is still in progress.
The payloads arrive from that one request as individual JSON objects, roughly 100ms apart. Each object is small enough to arrive in a single chunk, so each one is parsable on its own.
Example:
$ curl http://192.1.2.59:11434/api/generate -d '{"model": "tinyllama","prompt": "Why is the sky not red?","options":{"num_ctx":64,"top_k":1,"top_p":0.1,"temperature": 0.1,"mirostat_tau":1.0}}'
{"model":"tinyllama","created_at":"2024-02-04T11:16:21.232303888Z","response":"The","done":false}
{"model":"tinyllama","created_at":"2024-02-04T11:16:21.285452561Z","response":" sky","done":false}
{"model":"tinyllama","created_at":"2024-02-04T11:16:21.338583188Z","response":" is","done":false}
{"model":"tinyllama","created_at":"2024-02-04T11:16:21.392181616Z","response":" not","done":false}
{"model":"tinyllama","created_at":"2024-02-04T11:16:21.445267597Z","response":" red","done":false}
{"model":"tinyllama","created_at":"2024-02-04T11:16:21.498385753Z","response":",","done":false}
... etc
{"model":"tinyllama","created_at":"2024-02-04T11:16:26.559776889Z","response":"","done":true,"context":[529, ...etc],"total_duration":7110858147,"load_duration":5946278,"prompt_eval_count":38,"prompt_eval_duration":1825929000,"eval_count":93,"eval_duration":5274177000}
Unfortunately, the HTTP Request node only outputs once the connection is completed and closed. I was wondering if there is something in msg.whatever (some property I can set on the incoming message) that could get it to output each payload as it is received?
I could probably do this with a Function node using fetch, ReadableStream.getReader(), and node.send() (rough sketch below), but I was hoping it was possible with the HTTP Request node.
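In case it helps, this is roughly what I had in mind for the Function node. It's only a minimal sketch, assuming Node-RED is running on Node.js 18+ (so fetch, ReadableStream and TextDecoder are available as globals), with the URL, model and options copied from my curl example above and the prompt hard-coded for the test:

```javascript
// Stream the Ollama response and emit one msg per JSON object as it arrives.
const response = await fetch("http://192.1.2.59:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({
        model: "tinyllama",
        prompt: "Why is the sky not red?",   // in practice this would probably come from msg.payload
        options: { num_ctx: 64, top_k: 1, top_p: 0.1, temperature: 0.1, mirostat_tau: 1.0 }
    })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });

    // Each payload is a complete JSON object on its own line (NDJSON),
    // so split on newlines and keep any trailing partial line in the buffer.
    const lines = buffer.split("\n");
    buffer = lines.pop();

    for (const line of lines) {
        if (line.trim() === "") continue;
        node.send({ payload: JSON.parse(line) });   // one message per received object
    }
}

// Flush anything left over (e.g. a final object with no trailing newline).
if (buffer.trim() !== "") {
    node.send({ payload: JSON.parse(buffer) });
}

node.done();
return null;   // everything was already sent via node.send()
```

Downstream nodes would then get one message per object (so one per token, plus the final "done": true summary object), which is the behaviour I'm after, but it would be nicer if the HTTP Request node could do this by itself.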