I want to switch from a ChatGPT integration to a local Ollama LLM, but I do not get a response.

I have already created a chatbot that can talk to ChatGPT, but now I want it to talk to a local Ollama LLM, and I do not know how to do this.

For the ChatGPT integration I use the following settings:

Location: https://api.openai.com/v1/chat/completions -> with HTTP method POST
Content-Type: 'application/json'
Authorization: 'Bearer sk-proj-.......' -> the API key

This is my custom request template:

This is the response:

This is my JSON structure:

This is the import mapping for the response, where I map the content (string) of message to the attribute Response (string). When I then trigger the microflow by asking a question, I get a response.

Now I want to run a local Ollama LLM. When I change the following things, I no longer get a response from Ollama.

I use the following settings:

Location: http://localhost:11434/api/generate -> with HTTP method POST
Content-Type: 'application/json'
(I removed this: Authorization: 'Bearer sk-proj-.......' -> the API key)

In the custom request template I changed the model to "model": "mistral", and the same holds for the JSON structure.

Does anyone know how I can solve this? Or what am I doing wrong?
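For reference, outside Mendix the two raw HTTP calls look roughly like this. This is only a minimal sketch using Python's requests package; the prompt text, the model names and the "stream" flag are my assumptions, not literally what my microflow sends:

```python
# Minimal sketch comparing the two request/response shapes.
# Assumes the `requests` package, a valid OpenAI API key, and Ollama
# running locally with the "mistral" model already pulled.
import requests

# OpenAI chat completions: the answer lives in choices[0].message.content
openai_resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-proj-.......",  # your API key
    },
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(openai_resp.json()["choices"][0]["message"]["content"])

# Ollama /api/generate: a different request body and a different response shape.
# By default Ollama streams newline-delimited JSON chunks; with "stream": false
# it returns one JSON object whose answer is in the "response" field, not in
# choices/message/content.
ollama_resp = requests.post(
    "http://localhost:11434/api/generate",
    headers={"Content-Type": "application/json"},
    json={
        "model": "mistral",
        "prompt": "Hello",
        "stream": False,
    },
)
print(ollama_resp.json()["response"])
```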
asked
1 answer

Hi Kas,

What error message are you seeing?

 

Have you read the blog post that we published last month? It explains "How to Run Open-Source LLMs Locally with the OpenAI Connector and Ollama" and might already help you.
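As a rough sketch of the general idea (I am assuming Ollama's OpenAI-compatible endpoint at /v1/chat/completions here; see the post for the Mendix-specific steps), you can keep your existing OpenAI-style request template and import mapping and only change the location and model:

```python
# Rough sketch: calling Ollama through its OpenAI-compatible endpoint,
# so the response has the same shape the existing import mapping expects
# (choices[0].message.content).
# Assumes Ollama runs locally with the "mistral" model already pulled.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    headers={"Content-Type": "application/json"},  # a local Ollama needs no real API key
    json={
        "model": "mistral",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```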

 

Best regards

Liam

answered