As far as I know, if your goal is to answer "how to do" or process-related questions from internal documents, RAG is the simplest and most common approach. Function calling (via MCP tools) is usually more useful for performing actions or fetching live system data, not for reading guidance text.
From a RAG perspective, the first step is to make your "How to do" content searchable: split the documents into chunks, generate an embedding for each chunk, and store those embeddings in a vector database. (This indexing happens up front; the Retrieve step itself runs later, at query time.) In Mendix, the PgVector Knowledge Base module is a good fit for this purpose.
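To make the ingestion step concrete, here is a minimal Python sketch of chunking and embedding. Note the assumptions: `embed()` is a placeholder, and in a real Mendix setup the embeddings would come from an embedding model (e.g. via Azure OpenAI) and be stored through the PgVector Knowledge Base module rather than in a Python list.

```python
# Sketch of the ingestion step: split text into overlapping chunks and
# pair each chunk with an embedding vector.

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into chunks of roughly chunk_size characters,
    overlapping so that instructions spanning a boundary are not lost."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

def embed(chunk: str) -> list[float]:
    # Placeholder: a real implementation would call an embedding API
    # and return its vector.
    return [float(ord(c)) for c in chunk[:8]]

document = "Step 1: open the request form. Step 2: fill in the cost center. " * 10
knowledge_base = [(chunk, embed(chunk)) for chunk in chunk_text(document)]
```

The overlap is a common trick so that a sentence cut in half at a chunk boundary still appears whole in at least one chunk.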
When a user asks a question, the application retrieves the most relevant chunks from PgVector (Retrieve) and adds them to the prompt as additional context (Augment). The enriched prompt is then sent to Azure OpenAI, which generates the answer from that documentation (Generate). This way, the chatbot grounds its answers in your internal documents instead of relying only on the model's general knowledge.
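The query-time flow above can be sketched in a few lines of Python. This is only an illustration: the toy vectors and the `retrieve`/`augment` helpers are mine, while in a real setup the similarity search would run inside PgVector and the augmented prompt would be sent to Azure OpenAI for the Generate step.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question_vec, knowledge_base, top_k=2):
    """Retrieve: return the top_k chunks most similar to the question."""
    ranked = sorted(knowledge_base,
                    key=lambda item: cosine(question_vec, item[1]),
                    reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]

def augment(question: str, context_chunks: list[str]) -> str:
    """Augment: build the enriched prompt sent to the LLM."""
    context = "\n\n".join(context_chunks)
    return ("Answer the question using only the documentation below.\n\n"
            f"Documentation:\n{context}\n\nQuestion: {question}")
```

For example, with a two-entry knowledge base, `retrieve([0.9, 0.1], [("expense chunk", [1.0, 0.0]), ("travel chunk", [0.0, 1.0])], top_k=1)` picks the expense chunk, and `augment` wraps it together with the user's question into the final prompt.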