Includes: Hosting integration and Client integration
GitHub Models provides access to various AI models including OpenAI's GPT models, DeepSeek, Microsoft's Phi models, and other leading AI models, all accessible through GitHub's infrastructure. The .NET Aspire GitHub Models integration enables you to connect to GitHub Models from your .NET applications for prototyping and production scenarios.
Hosting integration
The .NET Aspire GitHub Models hosting integration models GitHub Models resources as GitHubModelResource. To access these types and APIs for expressing them within your AppHost project, install the 📦 Aspire.Hosting.GitHub.Models NuGet package:
dotnet add package Aspire.Hosting.GitHub.Models
For more information, see dotnet add package or Manage package dependencies in .NET applications.
Add a GitHub Model resource
To add a GitHubModelResource to your app host project, call the AddGitHubModel method:
var builder = DistributedApplication.CreateBuilder(args);
var chat = builder.AddGitHubModel("chat", "openai/gpt-4o-mini");
builder.AddProject<Projects.ExampleProject>()
       .WithReference(chat);
// After adding all resources, run the app...
The preceding code adds a GitHub Model resource named chat that uses the openai/gpt-4o-mini model. The WithReference method passes the connection information to the ExampleProject project.
Specify an organization
For organization-specific requests, you can specify an organization parameter:
var builder = DistributedApplication.CreateBuilder(args);
var organization = builder.AddParameter("github-org");
var chat = builder.AddGitHubModel("chat", "openai/gpt-4o-mini", organization);
builder.AddProject<Projects.ExampleProject>()
       .WithReference(chat);
// After adding all resources, run the app...
When an organization is specified, the token must be attributed to that organization in GitHub.
Configure API key authentication
The GitHub Models integration supports multiple ways to configure authentication:
Default API key parameter
By default, the integration creates a parameter named {resource_name}-gh-apikey that automatically falls back to the GITHUB_TOKEN environment variable:
var chat = builder.AddGitHubModel("chat", "openai/gpt-4o-mini");
Then in user secrets:
{
  "Parameters": {
    "chat-gh-apikey": "YOUR_GITHUB_TOKEN_HERE"
  }
}
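Alternatively, you can set this value from the command line with the Secret Manager tool, run from the consuming project's directory (the token value shown is a placeholder):

```shell
dotnet user-secrets set "Parameters:chat-gh-apikey" "YOUR_GITHUB_TOKEN_HERE"
```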
Custom API key parameter
You can also specify a custom parameter for the API key:
var apiKey = builder.AddParameter("my-api-key", secret: true);
var chat = builder.AddGitHubModel("chat", "openai/gpt-4o-mini")
    .WithApiKey(apiKey);
Then in user secrets:
{
  "Parameters": {
    "my-api-key": "YOUR_GITHUB_TOKEN_HERE"
  }
}
Health checks
You can add health checks to verify that the GitHub Models endpoint is reachable and that the API key is valid:
var chat = builder.AddGitHubModel("chat", "openai/gpt-4o-mini")
    .WithHealthCheck();
Important
Because health check requests count against the GitHub Models API rate limits, use this health check sparingly, such as when debugging connectivity issues. The health check runs only once per application instance to minimize API usage.
Available models
GitHub Models supports various AI models. Some popular options include:
openai/gpt-4o-mini
openai/gpt-4o
deepseek/DeepSeek-V3-0324
microsoft/Phi-4-mini-instruct
Check the GitHub Models documentation for the most up-to-date list of available models.
Client integration
To get started with the .NET Aspire GitHub Models client integration, you can use either the Azure AI Inference client or the OpenAI client, depending on your needs and model compatibility.
Using Azure AI Inference client
Install the 📦 Aspire.Azure.AI.Inference NuGet package in the client-consuming project:
dotnet add package Aspire.Azure.AI.Inference
Add a ChatCompletionsClient
In the Program.cs file of your client-consuming project, use the AddAzureChatCompletionsClient method to register a ChatCompletionsClient for dependency injection:
builder.AddAzureChatCompletionsClient("chat");
You can then retrieve the ChatCompletionsClient instance using dependency injection:
public class ExampleService(ChatCompletionsClient client)
{
    public async Task<string> GetResponseAsync(string prompt)
    {
        var response = await client.CompleteAsync(
            new ChatCompletionsOptions
            {
                Messages = { new ChatRequestUserMessage(prompt) },
                Model = "openai/gpt-4o-mini"
            });

        return response.Value.Content;
    }
}
Add ChatCompletionsClient with registered IChatClient
If you're using the Microsoft.Extensions.AI abstractions, you can register an IChatClient:
builder.AddAzureChatCompletionsClient("chat")
    .AddChatClient();
Then use it in your services:
public class StoryService(IChatClient chatClient)
{
    public async Task<string> GenerateStoryAsync(string prompt)
    {
        var response = await chatClient.GetResponseAsync(prompt);
        return response.Text;
    }
}
Using OpenAI client
For models compatible with the OpenAI API (such as openai/gpt-4o-mini), you can use the OpenAI client. Install the 📦 Aspire.OpenAI NuGet package:
dotnet add package Aspire.OpenAI
Add an OpenAI client
builder.AddOpenAIClient("chat");
You can then use the OpenAI client:
public class ChatService(OpenAIClient client)
{
    public async Task<string> GetChatResponseAsync(string prompt)
    {
        var chatClient = client.GetChatClient("openai/gpt-4o-mini");
        var response = await chatClient.CompleteChatAsync(
            new UserChatMessage(prompt));

        return response.Value.Content[0].Text;
    }
}
Add OpenAI client with registered IChatClient
builder.AddOpenAIClient("chat")
    .AddChatClient();
Configuration
The GitHub Models integration supports configuration through user secrets, environment variables, or app settings. The integration automatically uses the GITHUB_TOKEN environment variable if available, or you can specify a custom API key parameter.
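For example, when running locally outside of Codespaces, you can export the token before starting the AppHost (the value below is a placeholder):

```shell
# Placeholder value; substitute a fine-grained PAT with the models: read permission.
export GITHUB_TOKEN="github_pat_EXAMPLE"
```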
Authentication
The GitHub Models integration requires a GitHub personal access token with the models: read permission. The token can be provided in several ways:
Environment variables in Codespaces and GitHub Actions
When running an app in GitHub Codespaces or GitHub Actions, the GITHUB_TOKEN environment variable is automatically available and can be used without additional configuration. This token has the necessary permissions to access GitHub Models for the repository context.
// No additional configuration needed in Codespaces/GitHub Actions
var chat = builder.AddGitHubModel("chat", "openai/gpt-4o-mini");
Personal access tokens for local development
For local development, you need to create a fine-grained personal access token with the models: read scope and configure it in user secrets:
{
  "Parameters": {
    "chat-gh-apikey": "github_pat_YOUR_TOKEN_HERE"
  }
}
Connection string format
The connection string follows this format:
Endpoint=https://models.github.ai/inference;Key={api_key};Model={model_name};DeploymentId={model_name}
For organization-specific requests:
Endpoint=https://models.github.ai/orgs/{organization}/inference;Key={api_key};Model={model_name};DeploymentId={model_name}
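The client integrations parse these values for you, but if you need to inspect a connection string yourself, .NET's DbConnectionStringBuilder handles the semicolon-delimited key/value format. This is an illustrative sketch with placeholder values, not part of the integration's API:

```csharp
using System.Data.Common;

var connectionString =
    "Endpoint=https://models.github.ai/inference;Key=placeholder;" +
    "Model=openai/gpt-4o-mini;DeploymentId=openai/gpt-4o-mini";

// Each "key=value" segment becomes an entry; key lookups are case-insensitive.
var parsed = new DbConnectionStringBuilder { ConnectionString = connectionString };

Console.WriteLine(parsed["Endpoint"]); // https://models.github.ai/inference
Console.WriteLine(parsed["Model"]);    // openai/gpt-4o-mini
```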
Rate limits and costs
Important
Each model has rate limits that vary by model and usage tier. Some models include costs if you exceed free tier limits. Check the GitHub Models documentation for current rate limits and pricing information.
Tip
Use health checks sparingly to avoid consuming your rate limit allowance. The integration caches health check results to minimize API calls.
Sample application
The dotnet/aspire repo contains an example application demonstrating the GitHub Models integration. You can find the sample in the Aspire GitHub repository.
Observability and telemetry
.NET Aspire integrations automatically set up Logging, Tracing, and Metrics configurations, which are sometimes known as the pillars of observability. For more information about integration observability and telemetry, see .NET Aspire integrations overview. Depending on the backing service, some integrations may only support some of these features. For example, some integrations support logging and tracing, but not metrics. Telemetry features can also be disabled using the techniques presented in the Configuration section.
Logging
The GitHub Models integration uses standard HTTP client logging categories:
System.Net.Http.HttpClient
Microsoft.Extensions.Http
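To surface these logs during development, you can raise the level for those categories in appsettings.Development.json (the levels shown are illustrative):

```json
{
  "Logging": {
    "LogLevel": {
      "System.Net.Http.HttpClient": "Information",
      "Microsoft.Extensions.Http": "Information"
    }
  }
}
```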
Tracing
HTTP requests to the GitHub Models API are automatically traced when using the Azure AI Inference or OpenAI clients.
See also
.NET Aspire