.NET Aspire Azure AI Foundry integration (Preview)

Includes: Hosting integration and Client integration

Azure AI Foundry is an AI platform that provides access to cutting-edge foundation models, tools for AI development, and scalable infrastructure for building intelligent applications. The .NET Aspire Azure AI Foundry integration enables you to connect to Azure AI Foundry or run models locally using Foundry Local from your .NET applications.

Hosting integration

The .NET Aspire Azure AI Foundry hosting integration models Azure AI Foundry resources as AzureAIFoundryResource. To access these types and APIs for expressing them within your AppHost project, install the 📦 Aspire.Hosting.Azure.AIFoundry NuGet package:

dotnet add package Aspire.Hosting.Azure.AIFoundry

For more information, see dotnet add package or Manage package dependencies in .NET applications.

Add an Azure AI Foundry resource

To add an AzureAIFoundryResource to your app host project, call the AddAzureAIFoundry method:

var builder = DistributedApplication.CreateBuilder(args);

var foundry = builder.AddAzureAIFoundry("foundry");

builder.AddProject<Projects.ExampleProject>()
       .WithReference(foundry);

// After adding all resources, run the app...

The preceding code adds an Azure AI Foundry resource named foundry to the app host project. The WithReference method passes the connection information to the ExampleProject project.

Important

When you call AddAzureAIFoundry, it implicitly calls AddAzureProvisioning(IDistributedApplicationBuilder)—which adds support for generating Azure resources dynamically during app startup. The app must configure the appropriate subscription and location. For more information, see Local provisioning: Configuration.

Add an Azure AI Foundry deployment resource

To add an Azure AI Foundry deployment resource, call the AddDeployment method:

var builder = DistributedApplication.CreateBuilder(args);

var foundry = builder.AddAzureAIFoundry("foundry");

var chat = foundry.AddDeployment("chat", "Phi-4", "1", "Microsoft");

builder.AddProject<Projects.ExampleProject>()
       .WithReference(chat)
       .WaitFor(chat);

// After adding all resources, run the app...

The preceding code:

  • Adds an Azure AI Foundry resource named foundry.
  • Adds an Azure AI Foundry deployment resource named chat with a model name of Phi-4. The model name must correspond to an available model in the Azure AI Foundry service.

Note

The format parameter of the AddDeployment(...) method can be found in the Azure AI Foundry portal, on the model's details page, right after the Quick facts text.

Configure deployment properties

You can customize deployment properties using the WithProperties method:

var chat = foundry.AddDeployment("chat", "Phi-4", "1", "Microsoft")
                  .WithProperties(deployment =>
                  {
                      deployment.SkuName = "Standard";
                      deployment.SkuCapacity = 10;
                  });

The preceding code sets the SKU name to Standard and capacity to 10 for the deployment.

Provisioning-generated Bicep

If you're new to Bicep, it's a domain-specific language for defining Azure resources. With .NET Aspire, you don't need to write Bicep by hand; instead, the provisioning APIs generate it for you. When you publish your app, the generated Bicep provisions an Azure AI Foundry resource with standard defaults.

@description('The location for the resource(s) to be deployed.')
param location string = resourceGroup().location

resource ai_foundry 'Microsoft.CognitiveServices/accounts@2024-10-01' = {
  name: take('aifoundry-${uniqueString(resourceGroup().id)}', 64)
  location: location
  identity: {
    type: 'SystemAssigned'
  }
  kind: 'AIServices'
  properties: {
    customSubDomainName: toLower(take(concat('ai-foundry', uniqueString(resourceGroup().id)), 24))
    publicNetworkAccess: 'Enabled'
    disableLocalAuth: true
  }
  sku: {
    name: 'S0'
  }
  tags: {
    'aspire-resource-name': 'ai-foundry'
  }
}

resource chat 'Microsoft.CognitiveServices/accounts/deployments@2024-10-01' = {
  name: 'Phi-4'
  properties: {
    model: {
      format: 'Microsoft'
      name: 'Phi-4'
      version: '1'
    }
  }
  sku: {
    name: 'GlobalStandard'
    capacity: 1
  }
  parent: ai_foundry
}

output aiFoundryApiEndpoint string = ai_foundry.properties.endpoints['AI Foundry API']

output endpoint string = ai_foundry.properties.endpoint

output name string = ai_foundry.name

The preceding Bicep is a module that provisions an Azure Cognitive Services resource configured for AI Services. Additionally, role assignments are created for the Azure resource in a separate module:

@description('The location for the resource(s) to be deployed.')
param location string = resourceGroup().location

param ai_foundry_outputs_name string

param principalType string

param principalId string

resource ai_foundry 'Microsoft.CognitiveServices/accounts@2024-10-01' existing = {
  name: ai_foundry_outputs_name
}

resource ai_foundry_CognitiveServicesUser 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(ai_foundry.id, principalId, subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'a97b65f3-24c7-4388-baec-2e87135dc908'))
  properties: {
    principalId: principalId
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'a97b65f3-24c7-4388-baec-2e87135dc908')
    principalType: principalType
  }
  scope: ai_foundry
}

resource ai_foundry_CognitiveServicesOpenAIUser 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(ai_foundry.id, principalId, subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '5e0bd9bd-7b93-4f28-af87-19fc36ad61bd'))
  properties: {
    principalId: principalId
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '5e0bd9bd-7b93-4f28-af87-19fc36ad61bd')
    principalType: principalType
  }
  scope: ai_foundry
}

The generated Bicep is a starting point and is influenced by changes to the provisioning infrastructure in C#. Customizations to the Bicep file directly will be overwritten, so make changes through the C# provisioning APIs to ensure they're reflected in the generated files.

Customize provisioning infrastructure

All .NET Aspire Azure resources are subclasses of the AzureProvisioningResource type. This enables customization of the generated Bicep by providing a fluent API to configure the Azure resources—using the ConfigureInfrastructure<T>(IResourceBuilder<T>, Action<AzureResourceInfrastructure>) API:

builder.AddAzureAIFoundry("foundry")
    .ConfigureInfrastructure(infra =>
    {
        var resources = infra.GetProvisionableResources();
        var account = resources.OfType<CognitiveServicesAccount>().Single();

        account.Sku = new CognitiveServicesSku
        {
            Tier = CognitiveServicesSkuTier.Enterprise,
            Name = "E0"
        };
        account.Tags.Add("ExampleKey", "Example value");
    });

The preceding code:

  • Retrieves the provisionable resources with the GetProvisionableResources() method.
  • Gets the single CognitiveServicesAccount resource.
  • Assigns a new CognitiveServicesSku with a Tier of Enterprise and a Name of E0.
  • Adds a tag with a key of ExampleKey and a value of Example value.

Connect to an existing Azure AI Foundry service

You might have an existing Azure AI Foundry service that you want to connect to. You can chain a call to annotate that your AzureAIFoundryResource is an existing resource:

var builder = DistributedApplication.CreateBuilder(args);

var existingFoundryName = builder.AddParameter("existingFoundryName");
var existingFoundryResourceGroup = builder.AddParameter("existingFoundryResourceGroup");

var foundry = builder.AddAzureAIFoundry("foundry")
                     .AsExisting(existingFoundryName, existingFoundryResourceGroup);

builder.AddProject<Projects.ExampleProject>()
       .WithReference(foundry);

// After adding all resources, run the app...

Important

When you call RunAsExisting, PublishAsExisting, or AsExisting methods to work with resources that are already present in your Azure subscription, you must add certain configuration values to your App Host to ensure that .NET Aspire can locate them. The necessary configuration values include SubscriptionId, AllowResourceGroupCreation, ResourceGroup, and Location. If you don't set them, "Missing configuration" errors appear in the .NET Aspire dashboard. For more information about how to set them, see Configuration.

For more information on treating Azure AI Foundry resources as existing resources, see Use existing Azure resources.

Note

Alternatively, instead of representing an Azure AI Foundry resource, you can add a connection string to the app host. This approach is weakly typed, and doesn't work with role assignments or infrastructure customizations. For more information, see Add existing Azure resources with connection strings.
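As a minimal sketch of that alternative, the app host can surface an existing endpoint as a connection string resource. The connection string value itself comes from app host configuration (for example, the ConnectionStrings section); the name foundry here is illustrative:

```csharp
var builder = DistributedApplication.CreateBuilder(args);

// Reads "ConnectionStrings:foundry" from the app host configuration.
var foundry = builder.AddConnectionString("foundry");

builder.AddProject<Projects.ExampleProject>()
       .WithReference(foundry);
```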

Use Foundry Local for development

.NET Aspire supports using Foundry Local for local development. Add the following to your AppHost project:

var builder = DistributedApplication.CreateBuilder(args);

var foundry = builder.AddAzureAIFoundry("foundry")
                     .RunAsFoundryLocal();

var chat = foundry.AddDeployment("chat", "phi-3.5-mini", "1", "Microsoft");

builder.AddProject<Projects.ExampleProject>()
       .WithReference(chat)
       .WaitFor(chat);

// After adding all resources, run the app...

When the AppHost starts, the local Foundry service starts too. This requires Foundry Local to be installed on the local machine.

The RunAsFoundryLocal() method configures the resource to run as an emulator. It downloads and loads the specified models locally. The method provides health checks for the local service and automatically manages the Foundry Local lifecycle.

Assign roles to resources

You can assign specific roles to resources that need to access the Azure AI Foundry service. Use the WithRoleAssignments method:

var foundry = builder.AddAzureAIFoundry("foundry");

builder.AddProject<Projects.Api>("api")
       .WithRoleAssignments(foundry, CognitiveServicesBuiltInRole.CognitiveServicesUser)
       .WithReference(foundry);

The preceding code assigns the CognitiveServicesUser role to the api project, granting it the necessary permissions to access the Azure AI Foundry resource.

Client integration

To get started with the .NET Aspire Azure AI Foundry client integration, install the 📦 Aspire.Azure.AI.Inference NuGet package in the client-consuming project, that is, the project for the application that uses the Azure AI Foundry client.

dotnet add package Aspire.Azure.AI.Inference

Add an Azure AI Foundry client

In the Program.cs file of your client-consuming project, use the AddAzureChatCompletionsClient(IHostApplicationBuilder, String) method to register a ChatCompletionsClient for dependency injection (DI). The method requires a connection name parameter.

builder.AddAzureChatCompletionsClient(connectionName: "chat");

Tip

The connectionName parameter must match the name used when adding the Azure AI Foundry deployment resource in the app host project. For more information, see Add an Azure AI Foundry deployment resource.

After adding the ChatCompletionsClient, you can retrieve the client instance using dependency injection:

public class ExampleService(ChatCompletionsClient client)
{
    // Use client...
}
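For example, a service could use the injected client to request a chat completion. The following sketch assumes the Azure.AI.Inference message and option types; the prompt and service shape are illustrative:

```csharp
using Azure;
using Azure.AI.Inference;

public class ExampleService(ChatCompletionsClient client)
{
    public async Task<string> GetAnswerAsync(string question)
    {
        var options = new ChatCompletionsOptions
        {
            Messages =
            {
                new ChatRequestSystemMessage("You are a helpful assistant."),
                new ChatRequestUserMessage(question)
            }
        };

        // Sends the request to the configured deployment and returns the reply text.
        Response<ChatCompletions> response = await client.CompleteAsync(options);

        return response.Value.Content;
    }
}
```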


Add Azure AI Foundry client with registered IChatClient

To use the IChatClient interface with the Azure AI Foundry client, chain the AddChatClient API onto the AddAzureChatCompletionsClient method:

builder.AddAzureChatCompletionsClient(connectionName: "chat")
       .AddChatClient();

For more information on the IChatClient and its corresponding library, see Artificial intelligence in .NET (Preview).
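With the IChatClient registered, consuming code can depend on the abstraction instead of the concrete client. A minimal sketch (the prompt is illustrative, and the exact response method name depends on your Microsoft.Extensions.AI version):

```csharp
using Microsoft.Extensions.AI;

public class ChatService(IChatClient chatClient)
{
    public async Task<string> AskAsync(string prompt)
    {
        // GetResponseAsync sends the prompt and returns the model's reply.
        ChatResponse response = await chatClient.GetResponseAsync(prompt);
        return response.Text;
    }
}
```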

Alternative: Use OpenAI client for compatible models

For models that are compatible with the OpenAI API, you can also use the 📦 Aspire.OpenAI client integration:

builder.AddOpenAIClient("chat")
       .AddChatClient();

This approach works well with models that support the OpenAI API format.

Configuration

The .NET Aspire Azure AI Foundry library provides multiple options to configure the Azure AI Foundry connection based on the requirements and conventions of your project. You must supply either an Endpoint and DeploymentId, or a ConnectionString.

Use a connection string

When using a connection string from the ConnectionStrings configuration section, you can provide the name of the connection string when calling builder.AddAzureChatCompletionsClient:

builder.AddAzureChatCompletionsClient("chat");

The connection string is retrieved from the ConnectionStrings configuration section, and there are two supported formats:

Azure AI Foundry Endpoint

The recommended approach is to use an Endpoint, which works with the ChatCompletionsClientSettings.Credential property to establish a connection. If no credential is configured, the DefaultAzureCredential is used.

{
  "ConnectionStrings": {
    "chat": "Endpoint=https://{endpoint}/;DeploymentId={deploymentName}"
  }
}

Connection string

Alternatively, a custom connection string can be used:

{
  "ConnectionStrings": {
    "chat": "Endpoint=https://{endpoint}/;Key={account_key};DeploymentId={deploymentName}"
  }
}

Use configuration providers

The .NET Aspire Azure AI Inference library supports Microsoft.Extensions.Configuration. It loads the ChatCompletionsClientSettings from configuration by using the Aspire:Azure:AI:Inference key. Example appsettings.json that configures some of the options:

{
  "Aspire": {
    "Azure": {
      "AI": {
        "Inference": {
          "DisableTracing": false,
          "ClientOptions": {
            "UserAgentApplicationId": "myapp"
          }
        }
      }
    }
  }
}

For the complete Azure AI Inference client integration JSON schema, see Aspire.Azure.AI.Inference/ConfigurationSchema.json.

Use inline delegates

You can pass the Action<ChatCompletionsClientSettings> configureSettings delegate to set up some or all the options inline, for example to disable tracing from code:

builder.AddAzureChatCompletionsClient(
    "chat",
    static settings => settings.DisableTracing = true);

Observability and telemetry

.NET Aspire integrations automatically set up Logging, Tracing, and Metrics configurations, which are sometimes known as the pillars of observability. For more information about integration observability and telemetry, see .NET Aspire integrations overview. Depending on the backing service, some integrations may only support some of these features. For example, some integrations support logging and tracing, but not metrics. Telemetry features can also be disabled using the techniques presented in the Configuration section.

Logging

The .NET Aspire Azure AI Foundry integration uses the following log categories:

  • Azure
  • Azure.Core
  • Azure.Identity

Tracing

The .NET Aspire Azure AI Foundry integration emits tracing activities using OpenTelemetry for operations performed with the ChatCompletionsClient.
