
Pronoun resolution in Semantic Kernel

Sachin Nandanwar 65 Reputation points
2026-03-19T11:28:05.5233333+00:00

Does anyone have any experience with pronoun resolution in Semantic Kernel, or with GPT LLMs in general? I am kind of lost here. I am using the GPT-4.1 model.

To set the context for what I mean by "pronoun resolution":

In the following chat conversation

User: Give me a list of hotels that provide animal safari

System: Desert Mirage Inn

User: Is it budget friendly?

System: Yes

This works because there is only one clear entity in context history.

But now consider this follow-up conversation:

User: Give me a list of hotels that are pet friendly

System: City Central Hotel

User: Is it budget friendly?

System: ???

At this point, the LLM doesn't have a deterministic way to decide whether “it” refers to Desert Mirage Inn or City Central Hotel.

I tried Semantic Kernel's GetChatMessageContentAsync, but relying on GetChatMessageContentAsync and the chat history alone is not working.

Azure OpenAI Service

An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.


Answer accepted by question author
  1. Anshika Varshney 8,925 Reputation points Microsoft External Staff Moderator
    2026-03-19T19:48:50.21+00:00

    Hi Sachin Nandanwar,

    This is a normal limitation, and you are not doing anything wrong.

    Semantic Kernel is essentially middleware: it sends your chat history and instructions to the model and returns the model's output. It does not add a special pronoun resolver on top. So when the conversation contains two valid targets, the model may guess differently across runs, because there is no single correct answer without extra context. [Introducti...soft Learn | Learn.Microsoft.com]

    The practical fix is to make the reference explicit in the conversation state.

    One simple pattern is to store a current selection and always answer follow-up questions using that selection. In Semantic Kernel you can keep the conversation context in a ChatHistory object, and you can also add richer messages, including tool messages, to inject extra context that the user did not type. This is useful for giving the model the selected hotel name, so the follow-up question is no longer ambiguous. [learn.microsoft.com], [github.com]
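    As a rough illustration, here is a minimal Python sketch of that "current selection" pattern. The class and method names (ConversationState, select_hotel, build_messages) and the hotel names are hypothetical, and the actual model call (e.g. Semantic Kernel's GetChatMessageContentAsync) is deliberately left out; the point is only how the explicit selection is injected before each turn.

    ```python
    class ConversationState:
        """Tracks the chat turns plus the entity currently under discussion."""

        def __init__(self):
            self.history = []          # (role, text) pairs, like a ChatHistory
            self.current_hotel = None  # the explicit "current selection"

        def add_turn(self, role, text):
            self.history.append((role, text))

        def select_hotel(self, name):
            # Whenever the assistant returns (or the user picks) a hotel,
            # record it as the active entity for follow-up questions.
            self.current_hotel = name

        def build_messages(self, user_question):
            # Copy the history, then inject a synthetic context message so
            # a pronoun like "it" has exactly one referent for the model.
            messages = list(self.history)
            if self.current_hotel:
                messages.append(
                    ("system",
                     f"The hotel currently under discussion is {self.current_hotel}.")
                )
            messages.append(("user", user_question))
            return messages


    state = ConversationState()
    state.add_turn("user", "Give me a list of hotels that are pet friendly")
    state.add_turn("assistant", "City Central Hotel")
    state.select_hotel("City Central Hotel")

    # These messages would then be sent to the model in whatever SDK you use.
    messages = state.build_messages("Is it budget friendly?")
    ```

    The follow-up question now travels with an unambiguous statement of which hotel "it" means, so the model no longer has to guess between earlier candidates.
    
    
    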

    Another simple pattern is to guide the model with a rule in your system message. Tell it that when a user asks a question with an unclear "it" or "they", the assistant should ask a quick clarifying question instead of guessing. This is exactly the type of behavior you control with prompt design and prompt engineering. [Prompt eng...soft Learn | Learn.Microsoft.com]
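    A sketch of that second pattern, under the same caveats: the rule wording below is illustrative, not an official Semantic Kernel recipe, and build_chat is a hypothetical helper that simply prepends the rule as the system message before the history is sent to the model.

    ```python
    # Illustrative system-message rule: force clarification instead of guessing.
    CLARIFY_RULE = (
        "You are a hotel assistant. If the user asks a follow-up question "
        "containing 'it', 'they', or another pronoun that could refer to more "
        "than one hotel mentioned earlier, do not guess. Instead, ask which "
        "hotel the user means before answering."
    )

    def build_chat(history, user_question):
        """Prepend the clarification rule as the system message."""
        return [("system", CLARIFY_RULE)] + history + [("user", user_question)]

    chat = build_chat(
        [("user", "Give me a list of hotels that provide animal safari"),
         ("assistant", "Desert Mirage Inn"),
         ("user", "Give me a list of hotels that are pet friendly"),
         ("assistant", "City Central Hotel")],
        "Is it budget friendly?",
    )
    ```

    With two candidate hotels in the history, a model following this rule should respond with something like "Which hotel do you mean, Desert Mirage Inn or City Central Hotel?" rather than picking one arbitrarily.
    
    
    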

    So the key idea is this: if you want deterministic behavior, do not rely on the model to guess what "it" refers to when there are multiple candidates. Either track the selected entity yourself and pass it in the chat history, or have the assistant ask which hotel the user means and then save that choice as the current context for subsequent turns.

    I hope this helps. Do let me know if you have any further queries.

    Thank you!

