How to start with RAG

Hello,

We currently have our own generic Azure OpenAI chatbot integrated into our Mendix app, along with an MCP server. Now we want the chatbot to answer process-related questions, such as those found in "How to do" documents, in conjunction with the MCP server. I am unsure whether to use a Retrieval-Augmented Generation (RAG) approach or function calling for this use case. Could someone please provide guidance or suggest a simple way to implement this?
asked

As far as I know, if your goal is to answer "how to do" or process-related questions from internal documents, RAG is the simplest and most common approach. Function calling (via MCP tools) is usually more useful for performing actions or fetching live system data, not for reading guidance text.


From a RAG perspective, the first step is to make your "How to do" content searchable: split the documents into chunks, compute an embedding for each chunk, and store those embeddings in a vector database. (This ingestion step happens once, up front; the "Retrieve" in RAG refers to looking chunks up at question time.) In Mendix, the PgVector Knowledge Base module is a good fit for this purpose.
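To make the ingestion step concrete, here is a minimal sketch in plain Python (outside Mendix). The chunker is a simple character-window splitter with overlap; the embedding and database calls in the comments are placeholders for whichever client and table you actually use, not real APIs from the source.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows so that content
    spanning a chunk boundary is not lost from either chunk."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Each chunk would then be embedded and stored (hypothetical client/table names):
# for chunk in chunk_text(doc_text):
#     vector = embeddings_client.embed(chunk)  # e.g. an Azure OpenAI embedding model
#     db.execute("INSERT INTO kb (content, embedding) VALUES (%s, %s)", (chunk, vector))
```

In Mendix itself the PgVector Knowledge Base module handles this storage for you; the sketch is only meant to show what "chunk and embed" means mechanically.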


When a user asks a question, the application retrieves the most relevant chunks from PgVector (Retrieve) and adds them to the prompt as additional context (Augment). This enriched prompt is then sent to Azure OpenAI, which generates the final answer based on that provided documentation (Generate). This way, the chatbot consistently answers using your internal documents rather than relying only on the model’s general knowledge.
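The Augment step above is essentially prompt assembly. A minimal sketch, assuming the relevant chunks have already been retrieved from PgVector (the function name and prompt wording here are illustrative, not from any Mendix module):

```python
def build_augmented_prompt(question: str, chunks: list[str]) -> str:
    """Combine retrieved documentation chunks and the user's question into
    one prompt that instructs the model to answer only from that context."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer the question using only the documentation excerpts below. "
        "If the answer is not in the excerpts, say you don't know.\n\n"
        f"Documentation:\n{context}\n\n"
        f"Question: {question}"
    )

# The resulting string would then be sent to Azure OpenAI as the message
# content for the Generate step, e.g. via your existing chatbot integration.
```

The "only from the excerpts" instruction is what keeps answers grounded in your internal documents instead of the model's general knowledge.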


answered