az iot ops dataflow endpoint create
Note
This reference is part of the azure-iot-ops extension for the Azure CLI (version 2.62.0 or higher). The extension will automatically install the first time you run an az iot ops dataflow endpoint create command. Learn more about extensions.
Create or replace a dataflow endpoint resource.
Commands
Name | Description | Type | Status |
---|---|---|---|
az iot ops dataflow endpoint create adls | Create or replace a dataflow endpoint resource for Azure Data Lake Storage Gen2. | Extension | GA |
az iot ops dataflow endpoint create adx | Create or replace a dataflow endpoint resource for Azure Data Explorer. | Extension | GA |
az iot ops dataflow endpoint create custom-kafka | Create or replace a dataflow endpoint resource for a custom Kafka broker. | Extension | GA |
az iot ops dataflow endpoint create custom-mqtt | Create or replace a dataflow endpoint resource for a custom MQTT broker. | Extension | GA |
az iot ops dataflow endpoint create eventgrid | Create or replace a dataflow endpoint resource for Azure Event Grid. | Extension | GA |
az iot ops dataflow endpoint create eventhub | Create or replace a dataflow endpoint resource for a Kafka-enabled Azure Event Hubs namespace. | Extension | GA |
az iot ops dataflow endpoint create fabric-onelake | Create or replace a dataflow endpoint resource for Microsoft Fabric OneLake. | Extension | GA |
az iot ops dataflow endpoint create fabric-realtime | Create or replace a Microsoft Fabric Real-Time Intelligence dataflow endpoint. | Extension | GA |
az iot ops dataflow endpoint create local-mqtt | Create or replace an Azure IoT Operations local MQTT dataflow endpoint. | Extension | GA |
az iot ops dataflow endpoint create local-storage | Create or replace a local storage dataflow endpoint. | Extension | GA |
az iot ops dataflow endpoint create adls
Create or replace a dataflow endpoint resource for Azure Data Lake Storage Gen2.
For more information on the Azure Data Lake Storage Gen2 dataflow endpoint, see https://aka.ms/adlsv2.
Note: When using the user-assigned managed identity authentication method, the scope defaults to 'https://storage.azure.com/.default' if not specified by --scope.
az iot ops dataflow endpoint create adls --instance
--name
--resource-group
--sa --storage-account
[--aud --audience]
[--auth-type {AccessToken, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
[--cid --client-id]
[--latency]
[--mc --message-count]
[--scope]
[--secret-name]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Create or replace a dataflow endpoint resource with minimum input.
az iot ops dataflow endpoint create adls --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --storage-account mystorageaccount
Create or replace a dataflow endpoint resource using user assigned managed identity authentication method.
az iot ops dataflow endpoint create adls --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --storage-account mystorageaccount --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836 --scope "https://storage.azure.com/.default"
Show config for creating a dataflow endpoint resource.
az iot ops dataflow endpoint create adls --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --storage-account mystorageaccount --latency 70 --message-count 100 --secret-name mysecret --show-config
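When using the access token authentication method, --secret-name references a Kubernetes secret holding the SAS token for the storage account. A minimal sketch of creating such a secret with kubectl (the accessToken key name and the azure-iot-operations namespace are assumptions; verify the expected format via the link above):
kubectl create secret generic mysecret --from-literal=accessToken='<sas-token>' -n azure-iot-operations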
Required Parameters
--instance: IoT Operations instance name.
--name: Dataflow endpoint name.
--resource-group: Name of resource group. You can configure the default group using az configure --defaults group=<name>.
--sa --storage-account: The name of the Azure Data Lake Storage Gen2 account.
Optional Parameters
--aud --audience: Audience of the service to authenticate against.
--auth-type: The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
--cid --client-id: The client ID of the user-assigned identity.
--latency: The batching latency in seconds. Min value: 0, max value: 65535.
--mc --message-count: Maximum number of messages in a batch. Min value: 0, max value: 4294967295.
--scope: Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
--secret-name: The name of the Kubernetes secret that contains the SAS token.
--show-config: Show the generated resource config instead of invoking the API with it.
--tenant-id --tid: The tenant ID of the user-assigned identity.
Global Parameters
--debug: Increase logging verbosity to show all debug logs.
--help -h: Show this help message and exit.
--only-show-errors: Only show errors, suppressing warnings.
--output -o: Output format.
--query: JMESPath query string. See http://jmespath.org/ for more information and examples.
--subscription: Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
--verbose: Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint create adx
Create or replace a dataflow endpoint resource for Azure Data Explorer.
For more information on Azure Data Explorer dataflow endpoint, see https://aka.ms/aio-adx.
az iot ops dataflow endpoint create adx --database --db
--host
--instance
--name
--resource-group
[--aud --audience]
[--auth-type {SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
[--cid --client-id]
[--latency]
[--mc --message-count]
[--scope]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Create or replace a dataflow endpoint resource with minimum input.
az iot ops dataflow endpoint create adx --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --database mydatabase --host "https://cluster.region.kusto.windows.net"
Create or replace a dataflow endpoint resource using user assigned managed identity authentication method.
az iot ops dataflow endpoint create adx --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --database mydatabase --host "https://cluster.region.kusto.windows.net" --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836
Show config for creating a dataflow endpoint resource.
az iot ops dataflow endpoint create adx --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --database mydatabase --host "https://cluster.region.kusto.windows.net" --latency 70 --message-count 100 --audience myaudience --show-config
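The --host value is the cluster URI shown on the Azure Data Explorer cluster overview page. If the kusto CLI extension is installed, a sketch of looking it up (the extension command and query path are assumptions):
az kusto cluster show --name mycluster --resource-group myresourcegroup --query uri -o tsv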
Required Parameters
--database --db: The name of the Azure Data Explorer database.
--host: The Azure Data Explorer cluster URI, in the form https://cluster.region.kusto.windows.net.
--instance: IoT Operations instance name.
--name: Dataflow endpoint name.
--resource-group: Name of resource group. You can configure the default group using az configure --defaults group=<name>.
Optional Parameters
--aud --audience: Audience of the service to authenticate against.
--auth-type: The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
--cid --client-id: The client ID of the user-assigned identity.
--latency: The batching latency in seconds. Min value: 0, max value: 65535.
--mc --message-count: Maximum number of messages in a batch. Min value: 0, max value: 4294967295.
--scope: Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
--show-config: Show the generated resource config instead of invoking the API with it.
--tenant-id --tid: The tenant ID of the user-assigned identity.
Global Parameters
--debug: Increase logging verbosity to show all debug logs.
--help -h: Show this help message and exit.
--only-show-errors: Only show errors, suppressing warnings.
--output -o: Output format.
--query: JMESPath query string. See http://jmespath.org/ for more information and examples.
--subscription: Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
--verbose: Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint create custom-kafka
Create or replace a dataflow endpoint resource for a custom Kafka broker.
For more information on the custom Kafka dataflow endpoint, see https://aka.ms/aio-custom-kafka.
az iot ops dataflow endpoint create custom-kafka --hostname
--instance
--name
--port
--resource-group
[--acks {All, One, Zero}]
[--aud --audience]
[--auth-type {Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
[--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
[--cid --client-id]
[--cm --config-map-ref]
[--compression {Gzip, Lz4, None, Snappy}]
[--db --disable-batching {false, true}]
[--dbpc --disable-broker-props-copy {false, true}]
[--disable-tls {false, true}]
[--gid --group-id]
[--latency]
[--max-bytes --mb]
[--mc --message-count]
[--no-auth {false, true}]
[--partition-strategy --ps {Default, Property, Static, Topic}]
[--sasl-type {Plain, ScramSha256, ScramSha512}]
[--scope]
[--secret-name]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Create or replace a dataflow endpoint resource with minimum input.
az iot ops dataflow endpoint create custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mykafkabroker --port 9092
Create or replace a dataflow endpoint resource using SASL authentication method.
az iot ops dataflow endpoint create custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mykafkabroker --port 9092 --sasl-type ScramSha256 --secret-name mysecret
Create or replace a dataflow endpoint resource with no auth.
az iot ops dataflow endpoint create custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mykafkabroker --port 9092 --no-auth
Show config for creating a dataflow endpoint resource.
az iot ops dataflow endpoint create custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mykafkabroker --port 9092 --disable-batching --latency 70 --max-bytes 200000 --message-count 100 --audience myaudience --config-map-ref myconfigmap --disable-tls --show-config
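The SASL example references Kubernetes resources that must already exist. A minimal sketch of creating the SASL token secret and the trusted CA ConfigMap with kubectl (key names and the azure-iot-operations namespace are assumptions; verify against the link above):
kubectl create secret generic mysecret --from-literal=token='<sasl-token>' -n azure-iot-operations
kubectl create configmap myconfigmap --from-file=ca.crt=<path-to-ca-pem> -n azure-iot-operations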
Required Parameters
--hostname: The hostname of the Kafka broker host setting.
--instance: IoT Operations instance name.
--name: Dataflow endpoint name.
--port: The port number of the Kafka broker host setting.
--resource-group: Name of resource group. You can configure the default group using az configure --defaults group=<name>.
Optional Parameters
--acks: Level of acknowledgment from the Kafka broker to ensure that the message sent by the producer is successfully written to the topic and replicated across the Kafka cluster.
--aud --audience: Audience of the service to authenticate against.
--auth-type: The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
--cea --cloud-event-attribute: CloudEvent setting type for mapping events to the cloud. Different settings require different message formats.
--cid --client-id: The client ID of the user-assigned identity.
--cm --config-map-ref: Reference to a ConfigMap containing the trusted CA certificate for the Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format and must be in the same namespace as the Kafka/MQTT data flow resource.
--compression: Compression type for the messages sent to Kafka topics.
--db --disable-batching: Disable batching.
--dbpc --disable-broker-props-copy: Disable copying of MQTT broker properties to Kafka user headers.
--disable-tls: Use an insecure connection to the Kafka/MQTT broker.
--gid --group-id: ID of the consumer group that the data flow uses to read messages from the Kafka topic.
--latency: The batching latency in milliseconds. Min value: 0, max value: 65535.
--max-bytes --mb: Maximum number of bytes in a batch.
--mc --message-count: Maximum number of messages in a batch. Min value: 0, max value: 4294967295.
--no-auth: No authentication for the endpoint.
--partition-strategy --ps: The partition handling strategy that controls how messages are assigned to Kafka partitions when sending them to Kafka topics.
--sasl-type: The type of SASL authentication.
--scope: Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
--secret-name: The name of the Kubernetes secret that contains the SASL token.
--show-config: Show the generated resource config instead of invoking the API with it.
--tenant-id --tid: The tenant ID of the user-assigned identity.
Global Parameters
--debug: Increase logging verbosity to show all debug logs.
--help -h: Show this help message and exit.
--only-show-errors: Only show errors, suppressing warnings.
--output -o: Output format.
--query: JMESPath query string. See http://jmespath.org/ for more information and examples.
--subscription: Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
--verbose: Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint create custom-mqtt
Create or replace a dataflow endpoint resource for a custom MQTT broker.
For more information on custom MQTT dataflow endpoint, see https://aka.ms/aio-custom-mqtt.
az iot ops dataflow endpoint create custom-mqtt --hostname
--instance
--name
--port
--resource-group
[--auth-type {ServiceAccountToken, SystemAssignedManagedIdentity, UserAssignedManagedIdentity, X509Certificate}]
[--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
[--cid --client-id]
[--client-id-prefix]
[--cm --config-map-ref]
[--disable-tls {false, true}]
[--ka --keep-alive]
[--max-inflight-msg --mim]
[--no-auth {false, true}]
[--protocol {Mqtt, WebSockets}]
[--qos]
[--retain {Keep, Never}]
[--sami-aud --sami-audience]
[--sat-aud --sat-audience]
[--scope]
[--secret-name]
[--session-expiry]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Create or replace a dataflow endpoint resource with minimum input.
az iot ops dataflow endpoint create custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mymqttbroker --port 9092
Create or replace a dataflow endpoint resource using Kubernetes Service Account Token authentication method.
az iot ops dataflow endpoint create custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mymqttbroker --port 9092 --sat-audience myaudience --secret-name mysecret
Create or replace a dataflow endpoint resource with no auth.
az iot ops dataflow endpoint create custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mymqttbroker --port 9092 --no-auth
Show config for creating a dataflow endpoint resource.
az iot ops dataflow endpoint create custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mymqttbroker --port 9092 --client-id-prefix myclientprefix --keep-alive 100 --max-inflight-msg 60 --protocol WebSockets --qos 1 --retain Never --session-expiry 100 --cloud-event-attribute CreateOrRemap --secret-name mysecret --disable-tls --show-config
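For the X509 authentication method, --secret-name references a Kubernetes secret holding the client certificate chain and private key in PEM format. A minimal sketch (key names and namespace are assumptions; verify against the link above):
kubectl create secret generic mysecret --from-file=client_cert.pem=<path-to-cert> --from-file=client_key.pem=<path-to-key> -n azure-iot-operations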
Required Parameters
--hostname: The hostname of the custom MQTT broker host setting.
--instance: IoT Operations instance name.
--name: Dataflow endpoint name.
--port: The port number of the custom MQTT broker host setting.
--resource-group: Name of resource group. You can configure the default group using az configure --defaults group=<name>.
Optional Parameters
--auth-type: The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
--cea --cloud-event-attribute: CloudEvent setting type for mapping events to the cloud. Different settings require different message formats.
--cid --client-id: The client ID of the user-assigned identity.
--client-id-prefix: The client ID prefix for the MQTT client. Note: Changing the client ID prefix after IoT Operations deployment might result in data loss.
--cm --config-map-ref: Reference to a ConfigMap containing the trusted CA certificate for the Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format and must be in the same namespace as the Kafka/MQTT data flow resource.
--disable-tls: Use an insecure connection to the Kafka/MQTT broker.
--ka --keep-alive: The maximum time in seconds that the data flow client can be idle before sending a PINGREQ message to the broker. Min value: 0.
--max-inflight-msg --mim: The maximum number of inflight messages that the data flow MQTT client can have. Min value: 0.
--no-auth: No authentication for the endpoint.
--protocol: Protocol to use for client connections.
--qos: Quality of Service (QoS) level for the MQTT messages. Only 0 or 1 are supported.
--retain: Retain setting that specifies whether the data flow should keep the retain flag on MQTT messages. Use this setting to ensure the remote broker retains the same messages as the local broker.
--sami-aud --sami-audience: The audience of the system-assigned managed identity.
--sat-aud --sat-audience: The audience of the Kubernetes service account token (SAT).
--scope: Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
--secret-name: The name of the Kubernetes secret that contains the X509 client certificate, the private key corresponding to the client certificate, and the intermediate certificates for the client certificate chain. Note: The certificate and private key must be in PEM format and not password protected.
--session-expiry: The session expiry interval in seconds for the data flow MQTT client. Min value: 0.
--show-config: Show the generated resource config instead of invoking the API with it.
--tenant-id --tid: The tenant ID of the user-assigned identity.
Global Parameters
--debug: Increase logging verbosity to show all debug logs.
--help -h: Show this help message and exit.
--only-show-errors: Only show errors, suppressing warnings.
--output -o: Output format.
--query: JMESPath query string. See http://jmespath.org/ for more information and examples.
--subscription: Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
--verbose: Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint create eventgrid
Create or replace a dataflow endpoint resource for Azure Event Grid.
For more information on Azure Event Grid dataflow endpoint, see https://aka.ms/aio-eventgrid.
az iot ops dataflow endpoint create eventgrid --hostname
--instance
--name
--resource-group
[--aud --audience]
[--auth-type {SystemAssignedManagedIdentity, UserAssignedManagedIdentity, X509Certificate}]
[--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
[--cid --client-id]
[--client-id-prefix]
[--cm --config-map-ref]
[--ka --keep-alive]
[--max-inflight-msg --mim]
[--port]
[--protocol {Mqtt, WebSockets}]
[--qos]
[--retain {Keep, Never}]
[--scope]
[--secret-name]
[--session-expiry]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Create or replace a dataflow endpoint resource with minimum input.
az iot ops dataflow endpoint create eventgrid --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname "namespace.region-1.ts.eventgrid.azure.net" --port 9092
Create or replace a dataflow endpoint resource using X509 authentication method.
az iot ops dataflow endpoint create eventgrid --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname "namespace.region-1.ts.eventgrid.azure.net" --port 9092 --secret-name mysecret
Show config for creating a dataflow endpoint resource.
az iot ops dataflow endpoint create eventgrid --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname "namespace.region-1.ts.eventgrid.azure.net" --port 9092 --client-id-prefix myclientprefix --keep-alive 100 --max-inflight-msg 60 --protocol WebSockets --qos 1 --retain Never --session-expiry 100 --cloud-event-attribute CreateOrRemap --secret-name mysecret --config-map-ref myconfigmap --show-config
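The --hostname value is the 'Http hostname' property of the Event Grid namespace. A sketch of retrieving it with the CLI (the query path is an assumption based on the Event Grid namespace resource shape):
az eventgrid namespace show --name mynamespace --resource-group myresourcegroup --query topicSpacesConfiguration.hostname -o tsv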
Required Parameters
--hostname: The hostname of the Event Grid namespace, found in its 'Http hostname' property. In the form NAMESPACE.REGION-1.ts.eventgrid.azure.net.
--instance: IoT Operations instance name.
--name: Dataflow endpoint name.
--resource-group: Name of resource group. You can configure the default group using az configure --defaults group=<name>.
Optional Parameters
--aud --audience: Audience of the service to authenticate against.
--auth-type: The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
--cea --cloud-event-attribute: CloudEvent setting type for mapping events to the cloud. Different settings require different message formats.
--cid --client-id: The client ID of the user-assigned identity.
--client-id-prefix: The client ID prefix for the MQTT client. Note: Changing the client ID prefix after IoT Operations deployment might result in data loss.
--cm --config-map-ref: Reference to a ConfigMap containing the trusted CA certificate for the Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format and must be in the same namespace as the Kafka/MQTT data flow resource.
--ka --keep-alive: The maximum time in seconds that the data flow client can be idle before sending a PINGREQ message to the broker. Min value: 0.
--max-inflight-msg --mim: The maximum number of inflight messages that the data flow MQTT client can have. Min value: 0.
--port: The port number of the Event Grid namespace.
--protocol: Protocol to use for client connections.
--qos: Quality of Service (QoS) level for the MQTT messages. Only 0 or 1 are supported.
--retain: Retain setting that specifies whether the data flow should keep the retain flag on MQTT messages. Use this setting to ensure the remote broker retains the same messages as the local broker.
--scope: Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
--secret-name: The name of the Kubernetes secret that contains the X509 client certificate, the private key corresponding to the client certificate, and the intermediate certificates for the client certificate chain. Note: The certificate and private key must be in PEM format and not password protected.
--session-expiry: The session expiry interval in seconds for the data flow MQTT client. Min value: 0.
--show-config: Show the generated resource config instead of invoking the API with it.
--tenant-id --tid: The tenant ID of the user-assigned identity.
Global Parameters
--debug: Increase logging verbosity to show all debug logs.
--help -h: Show this help message and exit.
--only-show-errors: Only show errors, suppressing warnings.
--output -o: Output format.
--query: JMESPath query string. See http://jmespath.org/ for more information and examples.
--subscription: Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
--verbose: Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint create eventhub
Create or replace a dataflow endpoint resource for a Kafka-enabled Azure Event Hubs namespace.
For more information on Azure Event Hubs dataflow endpoint, see https://aka.ms/aio-eventhub.
az iot ops dataflow endpoint create eventhub --ehns --eventhub-namespace
--instance
--name
--resource-group
[--acks {All, One, Zero}]
[--aud --audience]
[--auth-type {Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
[--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
[--cid --client-id]
[--cm --config-map-ref]
[--compression {Gzip, Lz4, None, Snappy}]
[--db --disable-batching {false, true}]
[--dbpc --disable-broker-props-copy {false, true}]
[--gid --group-id]
[--latency]
[--max-bytes --mb]
[--mc --message-count]
[--partition-strategy --ps {Default, Property, Static, Topic}]
[--sasl-type {Plain, ScramSha256, ScramSha512}]
[--scope]
[--secret-name]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Create or replace a dataflow endpoint resource with minimum input.
az iot ops dataflow endpoint create eventhub --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --eventhub-namespace myeventhubnamespace
Create or replace a dataflow endpoint resource using user assigned managed identity authentication method.
az iot ops dataflow endpoint create eventhub --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --eventhub-namespace myeventhubnamespace --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836 --scope "https://eventhubs.azure.net/.default"
Show config for creating a dataflow endpoint resource.
az iot ops dataflow endpoint create eventhub --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --eventhub-namespace myeventhubnamespace --acks One --compression Gzip --disable-broker-props-copy --group-id mygroupid --partition-strategy Static --max-bytes 200000 --message-count 100 --latency 70 --cloud-event-attribute CreateOrRemap --sasl-type ScramSha256 --secret-name mysecret --config-map-ref myconfigmap --show-config
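For the SASL authentication method, the --secret-name description below notes the secret must hold both a username and a password. With a Kafka-enabled Event Hubs namespace, the conventional username is the literal string $ConnectionString and the password is the namespace connection string; a minimal sketch (key names are assumptions derived from the parameter description):
kubectl create secret generic mysecret --from-literal=username='$ConnectionString' --from-literal=password='<connection-string>' -n azure-iot-operations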
Required Parameters
--ehns --eventhub-namespace: The name of the Event Hubs namespace.
--instance: IoT Operations instance name.
--name: Dataflow endpoint name.
--resource-group: Name of resource group. You can configure the default group using az configure --defaults group=<name>.
Optional Parameters
--acks: Level of acknowledgment from the Kafka broker to ensure that the message sent by the producer is successfully written to the topic and replicated across the Kafka cluster.
--aud --audience: Audience of the service to authenticate against.
--auth-type: The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
--cea --cloud-event-attribute: CloudEvent setting type for mapping events to the cloud. Different settings require different message formats.
--cid --client-id: The client ID of the user-assigned identity.
--cm --config-map-ref: Reference to a ConfigMap containing the trusted CA certificate for the Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format and must be in the same namespace as the Kafka/MQTT data flow resource.
--compression: Compression type for the messages sent to Kafka topics.
--db --disable-batching: Disable batching.
--dbpc --disable-broker-props-copy: Disable copying of MQTT broker properties to Kafka user headers.
--gid --group-id: ID of the consumer group that the data flow uses to read messages from the Kafka topic.
--latency: The batching latency in milliseconds. Min value: 0, max value: 65535.
--max-bytes --mb: Maximum number of bytes in a batch.
--mc --message-count: Maximum number of messages in a batch. Min value: 0, max value: 4294967295.
--partition-strategy --ps: The partition handling strategy that controls how messages are assigned to Kafka partitions when sending them to Kafka topics.
--sasl-type: The type of SASL authentication.
--scope: Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
--secret-name: The name of the Kubernetes secret that contains the Event Hubs connection string. Note: The secret must be in the same namespace as the Kafka data flow endpoint and must contain both the username and password as key-value pairs. For more information about the secret format, see the link in the command description.
--show-config: Show the generated resource config instead of invoking the API with it.
--tenant-id --tid: The tenant ID of the user-assigned identity.
Global Parameters
--debug: Increase logging verbosity to show all debug logs.
--help -h: Show this help message and exit.
--only-show-errors: Only show errors, suppressing warnings.
--output -o: Output format.
--query: JMESPath query string. See http://jmespath.org/ for more information and examples.
--subscription: Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
--verbose: Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint create fabric-onelake
Create or replace a dataflow endpoint resource for Microsoft Fabric OneLake.
For more information on Microsoft Fabric OneLake dataflow endpoint, see https://aka.ms/fabric-onelake.
az iot ops dataflow endpoint create fabric-onelake --instance
--lakehouse
--name
--path-type --pt {Files, Tables}
--resource-group
--workspace
[--aud --audience]
[--auth-type {SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
[--cid --client-id]
[--latency]
[--mc --message-count]
[--scope]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Create or replace a dataflow endpoint resource with minimum input.
az iot ops dataflow endpoint create fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --lakehouse mylakehouse --workspace myworkspace --path-type Files
Create or replace a dataflow endpoint resource using user assigned managed identity authentication method.
az iot ops dataflow endpoint create fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --lakehouse mylakehouse --workspace myworkspace --path-type Files --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836
Show config for creating a dataflow endpoint resource.
az iot ops dataflow endpoint create fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --lakehouse mylakehouse --workspace myworkspace --path-type Files --latency 70 --message-count 100 --audience myaudience --show-config
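Because --show-config prints the generated resource config instead of invoking the API, the output can be redirected to a file for review before creating the endpoint; a sketch:
az iot ops dataflow endpoint create fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --lakehouse mylakehouse --workspace myworkspace --path-type Files --show-config > endpoint-config.json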
Required Parameters
--instance: IoT Operations instance name.
--lakehouse: The Microsoft Fabric lakehouse name under the provided workspace.
--name: Dataflow endpoint name.
--path-type --pt: The type of path used in OneLake.
--resource-group: Name of resource group. You can configure the default group using az configure --defaults group=<name>.
--workspace: The Microsoft Fabric workspace name. Note: The default 'my workspace' isn't supported.
Optional Parameters
--aud --audience: Audience of the service to authenticate against.
--auth-type: The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
--cid --client-id: The client ID of the user-assigned identity.
--latency: The batching latency in seconds. Min value: 0, max value: 65535.
--mc --message-count: Maximum number of messages in a batch. Min value: 0, max value: 4294967295.
--scope: Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
--show-config: Show the generated resource config instead of invoking the API with it.
--tenant-id --tid: The tenant ID of the user-assigned identity.
Global Parameters
--debug: Increase logging verbosity to show all debug logs.
--help -h: Show this help message and exit.
--only-show-errors: Only show errors, suppressing warnings.
--output -o: Output format.
--query: JMESPath query string. See http://jmespath.org/ for more information and examples.
--subscription: Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
--verbose: Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint create fabric-realtime
Create or replace a Microsoft Fabric Real-Time Intelligence dataflow endpoint.
For more information on Microsoft Fabric Real-Time Intelligence dataflow endpoint, see https://aka.ms/aio-fabric-real-time.
az iot ops dataflow endpoint create fabric-realtime --host
--instance
--name
--resource-group
[--acks {All, One, Zero}]
[--aud --audience]
[--auth-type {Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
[--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
[--cid --client-id]
[--cm --config-map-ref]
[--compression {Gzip, Lz4, None, Snappy}]
[--db --disable-batching {false, true}]
[--dbpc --disable-broker-props-copy {false, true}]
[--disable-tls {false, true}]
[--gid --group-id]
[--latency]
[--max-bytes --mb]
[--mc --message-count]
[--partition-strategy --ps {Default, Property, Static, Topic}]
[--sasl-type {Plain, ScramSha256, ScramSha512}]
[--scope]
[--secret-name]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Create or replace a dataflow endpoint resource with minimum input.
az iot ops dataflow endpoint create fabric-realtime --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --host "fabricrealtime.servicebus.windows.net:9093"
Create or replace a dataflow endpoint resource using SASL authentication method.
az iot ops dataflow endpoint create fabric-realtime --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --host "fabricrealtime.servicebus.windows.net:9093" --sasl-type ScramSha256 --secret-name mysecret
Show config for creating a dataflow endpoint resource.
az iot ops dataflow endpoint create fabric-realtime --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --host "fabricrealtime.servicebus.windows.net:9093" --acks One --compression Gzip --group-id mygroupid --partition-strategy Static --max-bytes 200000 --cloud-event-attribute CreateOrRemap --disable-tls --show-config
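For the SASL example, --secret-name points to a Kubernetes secret holding the event stream's 'Connection string-primary key' value. A minimal sketch following the same username/password convention as the eventhub sketch above (key names and namespace are assumptions; verify against the link above):
kubectl create secret generic mysecret --from-literal=username='$ConnectionString' --from-literal=password='<connection-string-primary-key>' -n azure-iot-operations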
Required Parameters
--host: The host of the Fabric Real-Time endpoint is the 'Bootstrap server' value, found in the event stream destination under the 'SAS Key Authentication' section. In the form *.servicebus.windows.net:9093.
--instance: IoT Operations instance name.
--name: Dataflow endpoint name.
--resource-group: Name of resource group. You can configure the default group using az configure --defaults group=<name>.
Optional Parameters
--acks: Level of acknowledgment from the Kafka broker to ensure that the message sent by the producer is successfully written to the topic and replicated across the Kafka cluster.
--aud --audience: Audience of the service to authenticate against.
--auth-type: The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
--cea --cloud-event-attribute: CloudEvent setting type for mapping events to the cloud. Different settings require different message formats.
--cid --client-id: The client ID of the user-assigned identity.
--cm --config-map-ref: Reference to a ConfigMap containing the trusted CA certificate for the Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format and must be in the same namespace as the Kafka/MQTT data flow resource.
--compression: Compression type for the messages sent to Kafka topics.
--db --disable-batching: Disable batching.
--dbpc --disable-broker-props-copy: Disable copying of MQTT broker properties to Kafka user headers.
--disable-tls: Use an insecure connection to the Kafka/MQTT broker.
--gid --group-id: ID of the consumer group that the data flow uses to read messages from the Kafka topic.
--latency: The batching latency in milliseconds. Min value: 0, max value: 65535.
--max-bytes --mb: Maximum number of bytes in a batch.
--mc --message-count: Maximum number of messages in a batch. Min value: 0, max value: 4294967295.
--partition-strategy --ps: The partition handling strategy that controls how messages are assigned to Kafka partitions when sending them to Kafka topics.
--sasl-type: The type of SASL authentication.
--scope: Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
--secret-name: The name of the Kubernetes secret that contains the 'Connection string-primary key' value, found in the event stream destination under the 'SAS Key Authentication' section. Note: The secret must be in the same namespace as the Kafka data flow endpoint. For more information about the secret format, see the link in the command description.
--show-config: Show the generated resource config instead of invoking the API with it.
--tenant-id --tid: The tenant ID of the user-assigned identity.
Global Parameters
--debug: Increase logging verbosity to show all debug logs.
--help -h: Show this help message and exit.
--only-show-errors: Only show errors, suppressing warnings.
--output -o: Output format.
--query: JMESPath query string. See http://jmespath.org/ for more information and examples.
--subscription: Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
--verbose: Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint create local-mqtt
Create or replace an Azure IoT Operations local MQTT dataflow endpoint.
For more information on Azure IoT Operations Local MQTT dataflow endpoint, see https://aka.ms/local-mqtt-broker.
az iot ops dataflow endpoint create local-mqtt --hostname
--instance
--name
--port
--resource-group
[--aud --audience]
[--auth-type {ServiceAccountToken, X509Certificate}]
[--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
[--client-id-prefix]
[--cm --config-map-ref]
[--disable-tls {false, true}]
[--ka --keep-alive]
[--max-inflight-msg --mim]
[--no-auth {false, true}]
[--protocol {Mqtt, WebSockets}]
[--qos]
[--retain {Keep, Never}]
[--secret-name]
[--session-expiry]
[--show-config {false, true}]
Examples
Create or replace a dataflow endpoint resource with minimum input.
az iot ops dataflow endpoint create local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname aio-broker --port 1883
Create or replace a dataflow endpoint resource using X509 authentication method.
az iot ops dataflow endpoint create local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname aio-broker --port 1883 --secret-name mysecret
Create or replace a dataflow endpoint resource with no auth.
az iot ops dataflow endpoint create local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname aio-broker --port 1883 --no-auth
Show config for creating a dataflow endpoint resource.
az iot ops dataflow endpoint create local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname aio-broker --port 1883 --client-id-prefix myclientprefix --keep-alive 100 --max-inflight-msg 70 --protocol WebSockets --qos 0 --retain Never --show-config
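The examples assume the default AIO broker service name and port. If the listener details differ, they can be inspected with the broker commands in this extension; a sketch (the default broker name is an assumption):
az iot ops broker listener list --broker default --instance mycluster-ops-instance --resource-group myresourcegroup -o table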
Required Parameters
--hostname: The hostname of the local MQTT broker.
--instance: IoT Operations instance name.
--name: Dataflow endpoint name.
--port: The port number of the local MQTT broker.
--resource-group: Name of resource group. You can configure the default group using az configure --defaults group=<name>.
Optional Parameters
--aud --audience: The audience of the Kubernetes service account token (SAT).
--auth-type: The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
--cea --cloud-event-attribute: CloudEvent setting type for mapping events to the cloud. Different settings require different message formats.
--client-id-prefix: The client ID prefix for the MQTT client. Note: Changing the client ID prefix after IoT Operations deployment might result in data loss.
--cm --config-map-ref: Reference to a ConfigMap containing the trusted CA certificate for the Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format and must be in the same namespace as the Kafka/MQTT data flow resource.
--disable-tls: Use an insecure connection to the Kafka/MQTT broker.
--ka --keep-alive: The maximum time in seconds that the data flow client can be idle before sending a PINGREQ message to the broker. Min value: 0.
--max-inflight-msg --mim: The maximum number of inflight messages that the data flow MQTT client can have. Min value: 0.
--no-auth: No authentication for the endpoint.
--protocol: Protocol to use for client connections.
--qos: Quality of Service (QoS) level for the MQTT messages. Only 0 or 1 are supported.
--retain: Retain setting that specifies whether the data flow should keep the retain flag on MQTT messages. Use this setting to ensure the remote broker retains the same messages as the local broker.
--secret-name: The name of the Kubernetes secret that contains the X509 client certificate, the private key corresponding to the client certificate, and the intermediate certificates for the client certificate chain. Note: The certificate and private key must be in PEM format and not password protected.
--session-expiry: The session expiry interval in seconds for the data flow MQTT client. Min value: 0.
--show-config: Show the generated resource config instead of invoking the API with it.
Global Parameters
--debug: Increase logging verbosity to show all debug logs.
--help -h: Show this help message and exit.
--only-show-errors: Only show errors, suppressing warnings.
--output -o: Output format.
--query: JMESPath query string. See http://jmespath.org/ for more information and examples.
--subscription: Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
--verbose: Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint create local-storage
Create or replace a local storage dataflow endpoint.
For more information on local storage dataflow endpoint, see https://aka.ms/local-storage-endpoint.
az iot ops dataflow endpoint create local-storage --instance
--name
--pvc-ref
--resource-group
[--show-config {false, true}]
Examples
Create or replace a dataflow endpoint resource with minimum input.
az iot ops dataflow endpoint create local-storage --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --pvc-ref mypvc
Show config for creating a dataflow endpoint resource.
az iot ops dataflow endpoint create local-storage --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --pvc-ref mypvc --show-config
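The referenced PVC must already exist in the same namespace as the data flow endpoint. A minimal sketch of creating one with kubectl (the storage size, access mode, and azure-iot-operations namespace are assumptions):
kubectl apply -n azure-iot-operations -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF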
Required Parameters
--instance: IoT Operations instance name.
--name: Dataflow endpoint name.
--pvc-ref: The name of the PersistentVolumeClaim (PVC) to use for local storage. Note: The PVC must be in the same namespace as the data flow endpoint.
--resource-group: Name of resource group. You can configure the default group using az configure --defaults group=<name>.
Optional Parameters
--show-config: Show the generated resource config instead of invoking the API with it.
Global Parameters
--debug: Increase logging verbosity to show all debug logs.
--help -h: Show this help message and exit.
--only-show-errors: Only show errors, suppressing warnings.
--output -o: Output format.
--query: JMESPath query string. See http://jmespath.org/ for more information and examples.
--subscription: Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
--verbose: Increase logging verbosity. Use --debug for full debug logs.