# ExecutionProviderCatalog Class

## Definition
> [!IMPORTANT]
> Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Provides methods to discover, acquire, and register AI execution providers (EPs) for use with the ONNX Runtime.
ExecutionProviderCatalog handles the complexity of package management and hardware selection, and serves as the entry point for your app to access hardware-optimized machine learning acceleration through the Windows ML runtime.
```cpp
public ref class ExecutionProviderCatalog sealed
```

```cpp
/// [Windows.Foundation.Metadata.ContractVersion(Microsoft.Windows.AI.MachineLearning.MachineLearningContract, 65536)]
/// [Windows.Foundation.Metadata.MarshalingBehavior(Windows.Foundation.Metadata.MarshalingType.Agile)]
/// [Windows.Foundation.Metadata.Threading(Windows.Foundation.Metadata.ThreadingModel.Both)]
class ExecutionProviderCatalog final
```

```csharp
[Windows.Foundation.Metadata.ContractVersion(typeof(Microsoft.Windows.AI.MachineLearning.MachineLearningContract), 65536)]
[Windows.Foundation.Metadata.MarshalingBehavior(Windows.Foundation.Metadata.MarshalingType.Agile)]
[Windows.Foundation.Metadata.Threading(Windows.Foundation.Metadata.ThreadingModel.Both)]
public sealed class ExecutionProviderCatalog
```

```vb
Public NotInheritable Class ExecutionProviderCatalog
```
- Inheritance: Object → ExecutionProviderCatalog
- Attributes: ContractVersionAttribute, MarshalingBehaviorAttribute, ThreadingAttribute
## Examples
```csharp
// Get the default catalog
var catalog = Microsoft.Windows.AI.MachineLearning.ExecutionProviderCatalog.GetDefault();

// Ensure and register all compatible execution providers
await catalog.EnsureAndRegisterCertifiedAsync();

// Use ONNX Runtime directly for inference (using Microsoft.ML.OnnxRuntime namespace)
```
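The trailing comment leaves the inference step to the ONNX Runtime API. The following is a minimal sketch of what might come next, assuming the Microsoft.ML.OnnxRuntime package is referenced and a model file exists at a hypothetical local path; how a session selects a registered provider follows the standard ONNX Runtime SessionOptions APIs and isn't shown here.

```csharp
using System;
using Microsoft.ML.OnnxRuntime;

// Hypothetical model path; substitute your own ONNX model.
string modelPath = @"C:\models\model.onnx";

// With the certified execution providers registered, create a session as usual.
using var session = new InferenceSession(modelPath);

// Inspect the model's expected inputs before building input tensors.
foreach (var input in session.InputMetadata)
{
    Console.WriteLine($"Input: {input.Key}");
}
```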
```cpp
// Get the default catalog
winrt::Microsoft::Windows::AI::MachineLearning::ExecutionProviderCatalog catalog =
    winrt::Microsoft::Windows::AI::MachineLearning::ExecutionProviderCatalog::GetDefault();

// Ensure and register all compatible execution providers
catalog.EnsureAndRegisterCertifiedAsync().get();

// Use ONNX Runtime C API directly for inference
```
```python
import onnxruntime as ort
import winui3.microsoft.windows.ai.machinelearning as winml

# Get the default catalog
catalog = winml.ExecutionProviderCatalog.get_default()

# DO NOT call WinML's register methods in Python; they will not work for the onnxruntime Python environment.
# Instead, register execution providers following this pattern:
providers = catalog.find_all_providers()
for provider in providers:
    provider.ensure_ready_async().get()
    ort.register_execution_provider_library(provider.name, provider.library_path)
```
## Methods
| Name | Description |
|---|---|
| EnsureAndRegisterCertifiedAsync() | Ensures that all compatible certified execution providers are downloaded and ready on the machine, then registers them with the ONNX Runtime. |
| FindAllProviders() | Retrieves a collection of all execution providers compatible with the current hardware. |
| GetDefault() | Retrieves the default ExecutionProviderCatalog instance that provides access to all execution providers on the system. |
| RegisterCertifiedAsync() | Registers all compatible execution providers with the ONNX Runtime without ensuring they are ready. This registers only providers that are already present on the machine, avoiding the potentially long download times that EnsureAndRegisterCertifiedAsync might require. |