How can I integrate LLM API calls into a custom real-time chat script without changing pages? | XM Community

I’m using a custom chat script that lets users chat in real time, and now I want to integrate LLMs into it. From what I understand, I might need to set up a proxy server, but I’m not sure if that’s strictly necessary.

My main questions are:

  1. Is it possible to send API requests directly to servers running the AI models using AJAX/HTTP calls, or will I need to go through a proxy?

  2. If direct AJAX calls are possible, what parameters or request setup details usually need to be adjusted so that the LLM API accepts them?

  3. If a proxy is the recommended approach, what’s the simplest way to implement one for this use case (so I don’t have to rebuild my chat framework or redirect users to a new page)?
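For context on question 1, here is roughly the direct-call shape I have in mind, sketched against an OpenAI-style `/v1/chat/completions` endpoint (the endpoint URL, model name, and key handling are placeholder assumptions on my part, not something I have working):

```javascript
// Sketch of a direct browser-side request to an OpenAI-style chat endpoint.
// Note: this puts the API key in client-side code, visible to every visitor,
// which I suspect is one reason a proxy gets recommended.
function buildChatRequest(messages, apiKey) {
  return {
    url: "https://api.openai.com/v1/chat/completions", // assumed endpoint
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`, // key exposed in the browser!
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // placeholder model name
        messages,             // e.g. [{ role: "user", content: "Hello" }]
      }),
    },
  };
}

// From the chat page, without leaving it:
// const { url, options } = buildChatRequest(history, MY_KEY);
// const reply = await fetch(url, options).then((r) => r.json());
```

Is this the right request shape, and would CORS on the provider's side even allow it from a browser?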

The key requirement is that the chat stays on the same page: I can't use the existing web service frameworks for LLMs, since they force a page change.

Any guidance or examples would be much appreciated!
