Hi Sachin Nandanwar,
This is a normal limitation, and you are not doing anything wrong.
Semantic Kernel is mainly middleware: it sends your chat history and instructions to the model and returns the model's output. It does not add a special pronoun resolver on top, so when the conversation contains two valid targets, the model may guess differently across runs; without extra context there is no single correct answer. [Introducti...soft Learn | Learn.Microsoft.com]
The practical fix is to make the reference explicit in the conversation state.
One simple pattern is to store a current selection and always answer follow-up questions using that selection. In Semantic Kernel you keep the conversation context in a ChatHistory object, and you can also add richer messages, including tool messages, to inject context the user did not type. This lets you hand the model the selected hotel name so the follow-up question is no longer ambiguous. [learn.microsoft.com], [github.com]
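A minimal sketch of that pattern in plain Python, with no Semantic Kernel dependency. The `ConversationState` class and the hotel names are hypothetical; in Semantic Kernel you would keep the same information in a ChatHistory object and inject the selection with one of its add-message methods.

```python
# Hypothetical sketch: track the user's current selection and inject it into
# the history as a message the user never typed, so every later turn sees it.

class ConversationState:
    """Tracks the entity the user last selected so follow-ups are unambiguous."""

    def __init__(self):
        self.history = []          # list of (role, text) tuples
        self.current_hotel = None  # the explicit selection, if any

    def add(self, role, text):
        self.history.append((role, text))

    def select_hotel(self, name):
        self.current_hotel = name
        # Record the selection as an injected system message.
        self.add("system", f"The currently selected hotel is {name}.")

    def build_prompt(self, user_text):
        self.add("user", user_text)
        return "\n".join(f"{role}: {text}" for role, text in self.history)


state = ConversationState()
state.add("assistant", "I found Hotel Aurora and Hotel Borealis.")
state.select_hotel("Hotel Aurora")
prompt = state.build_prompt("Does it have a pool?")
```

Because the injected system line travels with the history, the model answering "Does it have a pool?" now has an explicit referent instead of two candidates.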
Another simple pattern is to guide the model with a rule in your system message: tell it that when a user asks a question with an unclear "it" or "they", the assistant should ask a quick clarifying question instead of guessing. This is exactly the kind of behavior you control through prompt design and prompt engineering. [Prompt eng...soft Learn | Learn.Microsoft.com]
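One possible wording for such a rule, purely as an illustration (this is example prompt text, not an official Semantic Kernel prompt); you would pass it as the system message when you create the chat history.

```python
# Example system-message rule for the clarification behavior described above.
# The wording is an assumption; tune it for your own assistant.

CLARIFY_RULE = (
    "You are a hotel booking assistant. "
    "If the user's question uses a pronoun such as 'it', 'they', or 'that one' "
    "and more than one hotel in the conversation could be the referent, "
    "do not guess. Ask a short clarifying question, such as "
    "'Do you mean Hotel A or Hotel B?', before answering."
)
```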
So the key idea is this: if you want deterministic behavior, do not rely on the model to guess what "it" refers to when there are multiple candidates. Either track the selected entity yourself and pass it in the chat history, or have the assistant ask which hotel the user means and then save that choice as the current context for the following turns.
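The decision logic behind both options can be sketched as a small deterministic function you run before calling the model; the function name and return shape here are illustrative.

```python
# Hypothetical pre-model check: resolve the reference yourself when possible,
# otherwise produce a clarification question instead of letting the model guess.

def resolve_reference(candidates, selected=None):
    """Return ("resolved", name) or ("clarify", question)."""
    if selected is not None:
        # The user already made an explicit choice earlier in the conversation.
        return ("resolved", selected)
    if len(candidates) == 1:
        # Only one possible referent, so no ambiguity exists.
        return ("resolved", candidates[0])
    # Multiple candidates and no saved selection: ask instead of guessing.
    return ("clarify", f"Which one do you mean: {', '.join(candidates)}?")
```

If this returns `"clarify"`, you send the question back to the user and store their answer as the saved selection for subsequent turns.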
I hope this helps. Do let me know if you have any further queries.
Thank you!