Cache responses to large language model API requests

APPLIES TO: All API Management tiers

The llm-semantic-cache-store policy caches responses to chat completion API requests to a configured external cache. Response caching reduces bandwidth and processing requirements imposed on the backend Azure OpenAI API and lowers latency perceived by API consumers.

Note

Set the policy's elements and child elements in the order provided in the policy statement. Learn more about how to set or edit API Management policies.

Supported models

Use the policy with LLM APIs added to Azure API Management that are available through the Azure AI Model Inference API or with OpenAI-compatible models served through third-party inference providers.

Policy statement

<llm-semantic-cache-store duration="seconds"/>

Attributes

Attribute | Description                                                                                | Required | Default
duration  | Time-to-live of the cached entries, specified in seconds. Policy expressions are allowed. | Yes      | N/A
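
Because the duration attribute accepts a policy expression, the time-to-live can be computed at runtime. The following hypothetical sketch, placed in the outbound section, caches successful responses for 120 seconds and all other responses for only 10 seconds; both values are illustrative, not recommendations:

<llm-semantic-cache-store duration="@(context.Response.StatusCode == 200 ? 120 : 10)" />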

Usage

Usage notes

  • This policy can only be used once in a policy section.
  • If the cache lookup fails, the API call that uses the cache-related operation doesn't raise an error, and the cache operation completes successfully.
  • We recommend configuring a rate-limit policy (or rate-limit-by-key policy) immediately after any cache lookup. This helps prevent the backend service from being overloaded if the cache isn't available. A rate-limit-by-key sketch follows this list.
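
The following is a minimal, hypothetical sketch of such a rate-limit-by-key policy, keyed on the caller's subscription ID. The limit of 10 calls per 60 seconds is illustrative only:

<!-- Placed in the inbound section, immediately after llm-semantic-cache-lookup -->
<rate-limit-by-key calls="10" renewal-period="60" counter-key="@(context.Subscription.Id)" />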

Examples

Example with corresponding llm-semantic-cache-lookup policy

The following example shows how to use the llm-semantic-cache-lookup policy along with the llm-semantic-cache-store policy to retrieve semantically similar cached responses with a similarity score threshold of 0.05. Cached values are partitioned by the subscription ID of the caller.

Note

The rate-limit policy added after the cache lookup helps limit the number of calls to prevent overload on the backend service in case the cache isn't available.

<policies>
    <inbound>
        <base />
        <llm-semantic-cache-lookup
            score-threshold="0.05"
            embeddings-backend-id="llm-backend"
            embeddings-backend-auth="system-assigned">
            <vary-by>@(context.Subscription.Id)</vary-by>
        </llm-semantic-cache-lookup>
        <rate-limit calls="10" renewal-period="60" />
    </inbound>
    <outbound>
        <llm-semantic-cache-store duration="60" />
        <base />
    </outbound>
</policies>

For more information about working with policies, see: