
OpenAI function calling while streaming response

Thread starter: TurmoiledPython (Guest)
I have a chat app built with the ChatGPT API. The app streams responses from the API back to users and currently uses Django Channels and LangChain. I’m hoping to do two things, and I’m not sure how to approach them:


  1. I’d like to remove LangChain from the app and rely on the low-level API directly. What approach can I take to stream responses from the API back to my front end? (A sketch of one approach follows this list.)


  2. I’d like to pass tools into the OpenAI call for function calling. However, the app renders text to the client via a stream, and part of the chat UI relies on the function call being returned from the API. Is there an existing pattern for taking the arguments from a function-call response and rendering them to the user without noticeable latency? (See the tool-call sketch further below.)
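
On the first question, here is a minimal sketch of streaming without LangChain, assuming the openai v1.x Python SDK and a Channels AsyncWebsocketConsumer; the consumer name, model choice, and JSON message shape are all illustrative, not a prescribed design:

```python
# Minimal sketch: relay Chat Completions deltas over a Channels
# WebSocket consumer, no LangChain involved. Assumes openai v1.x.
import json

from channels.generic.websocket import AsyncWebsocketConsumer
from openai import AsyncOpenAI

client = AsyncOpenAI()  # picks up OPENAI_API_KEY from the environment


class ChatConsumer(AsyncWebsocketConsumer):  # illustrative name
    async def receive(self, text_data=None, bytes_data=None):
        user_message = json.loads(text_data)["message"]
        stream = await client.chat.completions.create(
            model="gpt-4o",  # any chat model
            messages=[{"role": "user", "content": user_message}],
            stream=True,
        )
        # Forward each text delta to the browser as soon as it arrives.
        async for chunk in stream:
            delta = chunk.choices[0].delta
            if delta.content:
                await self.send(text_data=json.dumps({"token": delta.content}))
        await self.send(text_data=json.dumps({"done": True}))
```

The front end then just appends each token payload to the current message bubble; no framework layer is needed for plain streaming.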

Basically, the user sends a message and I pass it into the Chat Completions API along with a function. If the LLM decides a function should be called, the code pulls out those arguments and renders them in part of my UI. I understand there is a lot of scope in this question, but any insight into any part of it will help. Thanks!
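
On the second question: when tools are combined with stream=True, the function-call arguments also arrive incrementally, as JSON fragments on each chunk's delta.tool_calls. A common pattern is to accumulate the fragments per tool-call index and act once they parse. A minimal sketch, again assuming the openai v1.x SDK; the update_sidebar tool and its schema are hypothetical stand-ins for whatever the chat UI actually renders:

```python
# Minimal sketch: accumulate streamed tool-call argument fragments
# per tool-call index, then parse and hand them to the UI.
import json

from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "update_sidebar",  # hypothetical tool for the chat UI
        "description": "Render structured data in the chat sidebar",
        "parameters": {
            "type": "object",
            "properties": {"title": {"type": "string"}},
            "required": ["title"],
        },
    },
}]

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this conversation"}],
    tools=tools,
    stream=True,
)

calls = {}  # tool-call index -> accumulated name and argument fragments
for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)  # ordinary text tokens
    for tc in delta.tool_calls or []:
        call = calls.setdefault(tc.index, {"name": "", "arguments": ""})
        if tc.function.name:
            call["name"] = tc.function.name
        if tc.function.arguments:
            call["arguments"] += tc.function.arguments  # partial JSON

# Once the stream ends the fragments form valid JSON; hand them to the UI.
for call in calls.values():
    print(f"\n{call['name']} -> {json.loads(call['arguments'])}")
```

Because the arguments themselves stream, perceived latency can be reduced further by parsing the partial JSON as it accumulates and updating the UI progressively instead of waiting for the stream to end.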
 
