How Large Language Models can support customer service in Dynamics 365 - using the example of Ollama

Written by Alan Rachid on 07.04.2025

Large Language Models (LLMs) are on everyone's lips and I thought it was time to experiment a little more with them and find out how they can be used outside of my day-to-day work.

I deliberately didn't jump on the bandwagon that tells us how colorful and great the world will be once autonomous AI agents take over our work in some unforeseeable future. Instead, I came up with an application scenario in which an LLM supports a user in a business application. It was also important to me to find out to what extent data protection concerns can be taken into account when using LLMs. Specifically, I wanted to find out whether self-hosted LLMs could replace the models of the major technology providers such as ChatGPT, Microsoft Copilot, etc.

To summarize: in this blog post, I describe how a Large Language Model can be hosted "locally" with the help of Ollama, as a Docker container running in an Azure Container Instance, and called from an Azure Function in order to tag a service request from Microsoft Dynamics 365 Customer Service with keywords, so that the request can be pre-sorted downstream, e.g. into a corresponding queue. I will briefly introduce Large Language Models, describe what Ollama is, and show step by step how I implemented my application scenario.

What are Large Language Models (LLMs)?

Large language models are a subcategory of natural language processing (NLP) designed to enable computers to understand, interpret and generate human language. The aim of large language models is to generate human-like text responses. To enable them to do this, they have been trained on a large amount of text. This allows the models to recognize patterns, connections and meanings in language.

The term generative AI is also often used in connection with LLMs. LLMs are not the same as generative AI, but are a component of it. Generative AI includes all technologies that can generate content (text, images, music) independently. Microsoft Copilot, ChatGPT and others are generative, pre-trained models that run as cloud-based services and can send data to external servers for processing. To prevent critical data from being processed externally, services can be used that run LLMs locally on a computer without sending data to third parties. Ollama is such a service. With the help of Ollama, a variety of LLMs such as Llama or Phi can be used without the complexity of installing and configuring the models yourself. Ollama itself can simply be run locally, for example as a Docker container.

The application scenario: Automated keywording of a service request in Dynamics 365 Customer Service using large language models

When using Microsoft Dynamics 365 Customer Service, queues help to process service requests quickly by allowing requests to be pre-sorted by topic, among other things. Depending on the maturity level of a service process, pre-sorting can either be carried out manually by an employee or automatically by routing rule sets that assign a request according to simple search criteria (e.g. "subject contains the word 'X'"). We now want to use an LLM to improve automatic routing by having the LLM search the request text for keywords, which are then used for downstream routing.

  • Microsoft Dynamics 365 Customer Service: The case table is part of the first-party app for managing service processes. We will use this table in our scenario. To enable tagging of the service request, we create the new table “Category”. This table only has one relevant field “Name” and an N:M relationship to the service request. We want to make it possible for a service request to be assigned to several categories and for a category to be assigned to several service requests.
  • Azure Functions: Communication with the Large Language Model is handled by an Azure Function. The Azure Function is called from the CRM via an HTTP trigger after the service request has been created. As soon as the request exists, the Azure Function is called with the query parameter caseId, which contains the Guid of the service request. Within the Azure Function, we create a prompt and send it to our self-hosted LLM.
  • Azure Container Registry: This is where we store our Docker image to create a container.
  • Azure Container Instances: This is a service from Microsoft Azure that makes it possible to run container applications directly in the cloud. We will host the Docker image of Ollama here so that the Azure Function can send its prompts to this container instance.

The general process is as follows: In Dynamics 365 Customer Service, we maintain categories to which service requests can be assigned. When a service request is created, we automatically call an Azure Function. This Azure Function retrieves the description of the service request and all categories and creates a prompt from this. This prompt is sent to an LLM, which we run in an Azure Container Instance with the help of Ollama. Finally, the Azure Function sends identified categories back to the CRM so that the request can be routed.

Hands on: Extend Dynamics 365 Customer Service and create new table

Below I show the steps I took to implement the scenario described above. First, we extend Dynamics 365 Customer Service.

We create our new entity "Category". To do this, we open make.powerapps.com and click on "Solutions" in the navigation bar. After we have created a new solution "LLMPlayground", we add a new table to it. We name the table "Category" and make sure that the name field is displayed on the main form. After creating the table, we create a new N:M relationship between the category and the service request (case). As a final step, we customize the case form and add a subgrid for the categories. This means that categories identified by our Azure Function can now be linked directly to the service request.

Set up Azure Container Instances and deploy Ollama image with the LLM Phi3.5

The Microsoft Dynamics 365 Customer Service app is now prepared so that requests can be automatically tagged with categories. Next, we create our Docker container in Microsoft Azure. To do this, we first need an Azure Container Registry. This is a service from the Azure cloud that allows you to manage container images and other artifacts privately and centrally. To create an Azure Container Registry, we use the Azure CLI. The following commands must be executed in sequence:
Create Azure Container Registry with Azure CLI commands
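A minimal sketch of these commands; the resource group name, registry name and region are placeholders you would replace with your own values:

  # sign in and create a resource group
  az login
  az group create --name rg-llmplayground --location westeurope

  # create the container registry and log in to it
  az acr create --resource-group rg-llmplayground --name llmplaygroundacr --sku Basic --admin-enabled true
  az acr login --name llmplaygroundacr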
You should now be able to see a new element of the type "Container Registry" in the specified resource group. Next, we prepare our Docker image locally. You must therefore have Docker installed on your computer; I recommend Docker Desktop. If you have already installed Docker Desktop, you can now load the official Docker image from Ollama and create a container:
Create Docker Image with Ollama
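Roughly, the steps look like this; the container name "ollama" is just an example. Note that no volume is mounted, so the downloaded model ends up in the container's file system and is captured by the later "docker commit":

  # pull the official Ollama image and start a container listening on port 11434
  docker pull ollama/ollama
  docker run -d -p 11434:11434 --name ollama ollama/ollama

  # download the Phi3.5 model inside the running container
  docker exec -it ollama ollama pull phi3.5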
In our example, we use the LLM Phi3.5, which is provided by Ollama. Phi3.5 was developed by Microsoft and, according to Microsoft, is suitable for applications with limited memory and computing power. You can find out more about Phi3.5 here. Now that we have set up our Docker container locally, we still need to deploy it to Azure Container Instances. Next, we need to tag our Docker image and upload it to our container registry.
Push Docker Image to Azure Container Registry
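With the names used in this article ("newtestllama" as image name, an "ollama" repository in the registry) the commands look roughly like this; the login server llmplaygroundacr.azurecr.io is an assumed placeholder for your registry:

  # snapshot the container's file system (including the downloaded model) into a new image
  docker commit ollama newtestllama

  # re-tag the image for the container registry and push it
  docker tag newtestllama llmplaygroundacr.azurecr.io/ollama:latest
  docker push llmplaygroundacr.azurecr.io/ollama:latest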
We use the "docker commit" command to create a new image with the name "newtestllama", which contains a snapshot of all changes that have been made to the container's file system. This excludes data that is stored on a volume. We use the "docker tag" command to give the existing newtestllama image a new reference so that we can push the image to a registry. The "docker push" command then finally uploads the image to the "ollama" repository of our container registry.

Our Docker image is now in the container registry and we can create an Azure Container Instance based on it. To do this, we open portal.azure.com, navigate to our resource group and create a new container instance. When creating the container instance, we select "Azure Container Registry" as the image source and use the image we have just uploaded. For the size, we leave the CPU at its default setting, but we have to increase the memory to 8 GiB, as otherwise there is not enough space for the LLM. In the network settings, we enable port 11434.
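If you prefer the Azure CLI over the portal, the same container instance can roughly be created like this; the resource names are placeholders, and the registry admin credentials can be read via "az acr credential show":

  az container create \
    --resource-group rg-llmplayground \
    --name ollama-aci \
    --image llmplaygroundacr.azurecr.io/ollama:latest \
    --cpu 1 \
    --memory 8 \
    --ports 11434 \
    --ip-address Public \
    --registry-login-server llmplaygroundacr.azurecr.io \
    --registry-username <acr-username> \
    --registry-password <acr-password>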

After the container instance has been deployed successfully, you can open it. We will need its public IP address in a later step, as this is how we address the LLM.

Creating our Azure Functions project and calling the LLM with Microsoft.Extensions.AI

Now that we have hosted our Docker container with the image of Ollama in the Azure Cloud, we are going to write our Azure Function, which receives a request from Dynamics 365 Customer Service, retrieves the description of the request and all categories and sends them to the LLM.

First, we create an Azure Functions project using Visual Studio Code and the Azure Functions Core Tools. Since I have already gone into detail in previous blog posts about how to create a Functions project with the Core Tools and how to connect to Dataverse, I will not go into detail in this post. Below are the links for further reading:

After we have set up the Functions project and established our connection to Dataverse, we next install the NuGet package Microsoft.Extensions.AI. The Microsoft.Extensions.AI library offers .NET developers standardized abstractions that can be implemented by different LLM providers. This allows developers to swap the underlying Large Language Model without having to rewrite code.

To get started, we need to install the relevant NuGet packages:
Install nuget packages
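At the time of writing these are preview packages, so the exact package versions may differ; a sketch of the install commands:

  dotnet add package Microsoft.Extensions.AI --prerelease
  dotnet add package Microsoft.Extensions.AI.Ollama --prerelease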
Then we have to add our OllamaChatClient to the IoC container in the Program.cs of the Azure Function so that we can use it via constructor injection in our actual Azure Function:
Constructor injection of OllamaChatClient
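A minimal sketch of the registration in Program.cs for the isolated worker model. The configuration key "OllamaUrl" and the model name "phi3.5" are assumptions based on this article, and depending on the Microsoft.Extensions.AI preview version there are also AddChatClient helper overloads; a plain singleton registration is shown here:

  // Program.cs - registers the Ollama-backed IChatClient for constructor injection
  using System;
  using Microsoft.Extensions.AI;
  using Microsoft.Extensions.DependencyInjection;
  using Microsoft.Extensions.Hosting;

  var host = new HostBuilder()
      .ConfigureFunctionsWorkerDefaults()
      .ConfigureServices(services =>
      {
          // "OllamaUrl" points to the public IP of the container instance, port 11434
          var ollamaUrl = Environment.GetEnvironmentVariable("OllamaUrl");

          // OllamaChatClient comes from the Microsoft.Extensions.AI.Ollama preview package
          services.AddSingleton<IChatClient>(new OllamaChatClient(new Uri(ollamaUrl!), "phi3.5"));

          // the IOrganizationService / Dataverse registration from the earlier blog posts goes here as well
      })
      .Build();

  host.Run();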

Our app settings entry "OllamaUrl" contains the URL that we received when we created the container instance. This URL consists of the public IP address of the container instance and the port 11434. The local.settings.json looks like this:
adjust local.settings.json
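A sketch of what this could look like; apart from "OllamaUrl", the entries (in particular the Dataverse connection string) are placeholders that depend on how you set up the Dataverse connection:

  {
    "IsEncrypted": false,
    "Values": {
      "AzureWebJobsStorage": "UseDevelopmentStorage=true",
      "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
      "OllamaUrl": "http://<public-ip-of-container-instance>:11434",
      "DataverseConnectionString": "AuthType=ClientSecret;Url=https://<org>.crm.dynamics.com;ClientId=<client-id>;ClientSecret=<client-secret>"
    }
  }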

As soon as the Azure Function has been pushed to the cloud, the entries we created in local.settings.json still need to be added to the environment variables of the Azure Function App.
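This can be done in the portal or, as a sketch, via the Azure CLI (the names are placeholders):

  az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group rg-llmplayground \
    --settings "OllamaUrl=http://<public-ip>:11434"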

We name our actual function CaseTagging.cs and, as a first step, pass in the relevant dependencies via the constructor. These are the IOrganizationService to communicate with Dynamics 365 and the IChatClient to have a conversation with our LLM.
The Azure function is called via an HTTP trigger and receives the ID of the service request from the CRM as a query parameter. We save this ID in the variable “caseId”. We then retrieve the service request and the categories using the IOrganizationService and build our prompt.
Azure Function to call LLM
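A sketch of what CaseTagging.cs could look like in the isolated worker model. The Dataverse schema names for the category table ("ar_category", "ar_name") and the N:N relationship ("ar_incident_ar_category") are hypothetical placeholders, and depending on the Microsoft.Extensions.AI preview version the structured-output call is GetResponseAsync<T> or the older CompleteAsync<T>. The GeneratePrompt method belongs to the same class and is shown further below:

  using System;
  using System.Linq;
  using System.Net;
  using System.Threading.Tasks;
  using Microsoft.Azure.Functions.Worker;
  using Microsoft.Azure.Functions.Worker.Http;
  using Microsoft.Extensions.AI;
  using Microsoft.Xrm.Sdk;
  using Microsoft.Xrm.Sdk.Query;

  public class CaseTagging
  {
      private readonly IOrganizationService _organizationService;
      private readonly IChatClient _chatClient;

      public CaseTagging(IOrganizationService organizationService, IChatClient chatClient)
      {
          _organizationService = organizationService;
          _chatClient = chatClient;
      }

      [Function("CaseTagging")]
      public async Task<HttpResponseData> Run(
          [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
      {
          // The CRM calls the function with the Guid of the service request as query parameter "caseId"
          var query = System.Web.HttpUtility.ParseQueryString(req.Url.Query);
          var caseId = Guid.Parse(query["caseId"]!);

          // Retrieve the description of the service request (standard table "incident")
          var incident = _organizationService.Retrieve("incident", caseId, new ColumnSet("description"));
          var description = incident.GetAttributeValue<string>("description");

          // Retrieve all categories (hypothetical schema names "ar_category" / "ar_name")
          var categoryQuery = new QueryExpression("ar_category") { ColumnSet = new ColumnSet("ar_name") };
          var categories = _organizationService.RetrieveMultiple(categoryQuery).Entities;
          var categoryNames = categories.Select(c => c.GetAttributeValue<string>("ar_name")).ToList();

          // Build the prompt and ask the LLM for a strongly typed CategoryList
          // (GeneratePrompt is shown below in the "Building prompt" snippet)
          var prompt = GeneratePrompt(description, categoryNames);
          var llmResponse = await _chatClient.GetResponseAsync<CategoryList>(prompt);

          // Link every category the LLM identified to the service request via the N:N relationship
          var matched = new EntityReferenceCollection(
              categories
                  .Where(c => llmResponse.Result.Categories.Contains(c.GetAttributeValue<string>("ar_name")))
                  .Select(c => c.ToEntityReference())
                  .ToList());

          if (matched.Count > 0)
          {
              _organizationService.Associate(
                  "incident",
                  caseId,
                  new Relationship("ar_incident_ar_category"), // hypothetical N:N relationship name
                  matched);
          }

          var response = req.CreateResponse(HttpStatusCode.OK);
          await response.WriteStringAsync(string.Join(", ", llmResponse.Result.Categories));
          return response;
      }
  }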
Probably the most interesting part of the code snippet is the line with the call to the LLM. Here we tell the chat client that it should return a strongly-typed response of type CategoryList. In this way, we achieve a certain standardization of the response so that we can always process the response in the same way. For the sake of simplicity, the CategoryList only has one property “Categories” of the type List of Strings.
DTO for CategoryLists
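A minimal version of this DTO:

  using System.Collections.Generic;

  // Simple DTO the LLM response is deserialized into
  public class CategoryList
  {
      public List<string> Categories { get; set; } = new();
  }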

However, to ensure that the LLM always returns a response that can be converted to our CategoryList type, our prompt must be designed correctly. We do this in the GeneratePrompt method. Our prompt consists partly of standardized text and partly of dynamic content that we load from the request. In the prompt, we also let the LLM know in which format we expect the response.
Building prompt
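A sketch of GeneratePrompt (part of the CaseTagging class); the exact wording of the instructions is of course up to you, the important parts are naming the allowed categories and the expected JSON format:

  // Builds the prompt from static instructions and the dynamic case data
  private static string GeneratePrompt(string caseDescription, List<string> categoryNames)
  {
      return $$"""
          You are an assistant that tags customer service requests with categories.
          Only use categories from this list: {{string.Join(", ", categoryNames)}}.
          Answer exclusively with JSON in the form { "Categories": ["Category A", "Category B"] }.

          Service request description:
          {{caseDescription}}
          """;
  }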

After the Azure Function has been successfully run, the service request in Dynamics 365 Customer Service should now be provided with relevant categories. These categories can then be used for routing or other downstream process steps.

Recap: How Large Language Models can support customer service in Dynamics 365

In this blog article, we looked at how we can deploy Large Language Models locally using Ollama and call them from an Azure Function to support processes in Dynamics 365 Customer Service. We came up with an example in which an LLM automatically assigns categories to a service request, which can then be used for routing.

Deploying the LLM in an Azure Container Instance is relatively simple and fast. Ollama offers us an alternative to the LLMs of the major technology providers. This gives us significantly more control over the further processing of company-relevant data.

The Microsoft.Extensions.AI library offers us an easy way to integrate LLMs into .NET applications. With the help of this library, the underlying LLM could be swapped out without changing much code.

Without the following two blog articles, it would not have been so easy for me to get into this topic. The complete source code can be viewed on my GitHub profile:
Ollama-Agent for Dynamics 365 Customer Service

As always, I look forward to receiving feedback from you. If you have any questions on this topic, just write to me!

If you have any questions about our best practices or need support with implementing Microsoft Power Platform or Dynamics 365, please feel free to reach out to us. We look forward to connecting with you.
