As promised, I have just completed (and verified) a flow that implements an OpenAI-compatible API - see the related GitHub repository.
Possible use cases include:
- log all requests to the OpenAI API - either for auditing purposes or simply to learn how scripts like Auto-GPT or BabyAGI use this API to reach their goals;
- keep your (secret!) OpenAI API key away from potentially dangerous scripts: add it to these flows only, and configure untrusted scripts to use these endpoints as a proxy;
- replace some (or all) requests to the OpenAI API with your own implementations - e.g., based on other flows that use LLaMA, Stanford Alpaca, GPT4All (filtered or unfiltered), or GPT4All-J models.
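Because the proxy speaks the same protocol as the original API, pointing a script at it usually means changing nothing but the base URL - the request body and headers stay exactly as OpenAI defines them. Here is a minimal sketch using only the Python standard library; the proxy address (`http://localhost:1880/v1`) and the placeholder key are assumptions, not values from the repository:

```python
import json
import urllib.request

# Hypothetical proxy endpoint - replace host/port with wherever your flow listens.
# The official endpoint would be "https://api.openai.com/v1" instead.
PROXY_BASE = "http://localhost:1880/v1"

def build_chat_request(base_url: str, api_key: str, messages: list) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request against the given base URL."""
    payload = json.dumps({
        "model": "gpt-3.5-turbo",
        "messages": messages,
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Untrusted scripts only ever see this dummy key; the real key
            # lives inside the proxy flow.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request(PROXY_BASE, "dummy-key", [{"role": "user", "content": "Hello"}])
print(req.full_url)  # the only thing that changed is the base URL
```

Sending the request (e.g., with `urllib.request.urlopen(req)`) then goes through the proxy, which can log it, answer it locally, or forward it with the real key attached.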
In any case, this proxy gives you much more control over OpenAI API requests with respect to
- data privacy,
- safety, and
- costs.
So far, I have managed to instruct Auto-GPT to use my server instead of the official one - BabyAGI will hopefully follow soon.
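Redirecting such a script often comes down to a single environment variable, although the exact variable name depends on the tool and version - the names and the port below are assumptions, so check your script's own configuration documentation:

```shell
# Hypothetical: redirect a script's OpenAI traffic to the local proxy flow.
# Some tools read OPENAI_API_BASE, others a similarly named variable of their own.
export OPENAI_API_BASE="http://localhost:1880/v1"

# The untrusted script only gets a dummy key; the real one stays in the proxy.
export OPENAI_API_KEY="dummy-key"

# Then launch the script from this shell, e.g.: python -m autogpt
echo "$OPENAI_API_BASE"
```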