az iot ops dataflow endpoint create

Note

This reference is part of the azure-iot-ops extension for the Azure CLI (version 2.62.0 or higher). The extension will automatically install the first time you run an az iot ops dataflow endpoint create command. Learn more about extensions.

Create or replace a dataflow endpoint resource.

Commands

az iot ops dataflow endpoint create adls
    Create or replace a dataflow endpoint resource for Azure Data Lake Storage Gen2. (Extension, GA)

az iot ops dataflow endpoint create adx
    Create or replace a dataflow endpoint resource for Azure Data Explorer. (Extension, GA)

az iot ops dataflow endpoint create custom-kafka
    Create or replace a dataflow endpoint resource for a custom Kafka broker. (Extension, GA)

az iot ops dataflow endpoint create custom-mqtt
    Create or replace a dataflow endpoint resource for a custom MQTT broker. (Extension, GA)

az iot ops dataflow endpoint create eventgrid
    Create or replace a dataflow endpoint resource for Azure Event Grid. (Extension, GA)

az iot ops dataflow endpoint create eventhub
    Create or replace a dataflow endpoint resource for a Kafka-enabled Azure Event Hubs namespace. (Extension, GA)

az iot ops dataflow endpoint create fabric-onelake
    Create or replace a dataflow endpoint resource for Microsoft Fabric OneLake. (Extension, GA)

az iot ops dataflow endpoint create fabric-realtime
    Create or replace a Microsoft Fabric Real-Time Intelligence dataflow endpoint. (Extension, GA)

az iot ops dataflow endpoint create local-mqtt
    Create or replace an Azure IoT Operations Local MQTT dataflow endpoint. (Extension, GA)

az iot ops dataflow endpoint create local-storage
    Create or replace a local storage dataflow endpoint. (Extension, GA)

az iot ops dataflow endpoint create adls

Create or replace a dataflow endpoint resource for Azure Data Lake Storage Gen2.

For more information on the Azure Data Lake Storage Gen2 dataflow endpoint, see https://aka.ms/adlsv2. Note: When using the user assigned managed identity authentication method, the scope defaults to 'https://storage.azure.com/.default' unless specified via --scope.

az iot ops dataflow endpoint create adls --instance
                                         --name
                                         --resource-group
                                         --sa --storage-account
                                         [--aud --audience]
                                         [--auth-type {AccessToken, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
                                         [--cid --client-id]
                                         [--latency]
                                         [--mc --message-count]
                                         [--scope]
                                         [--secret-name]
                                         [--show-config {false, true}]
                                         [--tenant-id --tid]

Examples

Create or replace a dataflow endpoint resource with minimum input.

az iot ops dataflow endpoint create adls --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --storage-account mystorageaccount

Create or replace a dataflow endpoint resource using user assigned managed identity authentication method.

az iot ops dataflow endpoint create adls --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --storage-account mystorageaccount --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836 --scope "https://storage.azure.com/.default"

Show config for creating a dataflow endpoint resource.

az iot ops dataflow endpoint create adls --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --storage-account mystorageaccount --latency 70 --message-count 100 --secret-name mysecret --show-config
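
Create or replace a dataflow endpoint resource using the access token (SAS) authentication method. This is a sketch assembled only from flags documented on this page; 'mysecret' is a placeholder for the Kubernetes secret that holds the SAS token.

az iot ops dataflow endpoint create adls --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --storage-account mystorageaccount --auth-type AccessToken --secret-name mysecret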

Required Parameters

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--sa --storage-account

The name of the Azure Data Lake Storage Gen2 account.

Optional Parameters

--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.

Accepted values: AccessToken, SystemAssignedManagedIdentity, UserAssignedManagedIdentity
--cid --client-id

The client ID of the user assigned identity.

--latency -l

The batching latency in seconds. Min value: 0, max value: 65535.

Default value: 60
--mc --message-count

Maximum number of messages in a batch. Min value: 0, max value: 4294967295.

Default value: 100000
--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--secret-name -s

The name of the Kubernetes secret that contains the SAS token.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint create adx

Create or replace a dataflow endpoint resource for Azure Data Explorer.

For more information on the Azure Data Explorer dataflow endpoint, see https://aka.ms/aio-adx.

az iot ops dataflow endpoint create adx --database --db
                                        --host
                                        --instance
                                        --name
                                        --resource-group
                                        [--aud --audience]
                                        [--auth-type {SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
                                        [--cid --client-id]
                                        [--latency]
                                        [--mc --message-count]
                                        [--scope]
                                        [--show-config {false, true}]
                                        [--tenant-id --tid]

Examples

Create or replace a dataflow endpoint resource with minimum input.

az iot ops dataflow endpoint create adx --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --database mydatabase --host "https://cluster.region.kusto.windows.net"

Create or replace a dataflow endpoint resource using user assigned managed identity authentication method.

az iot ops dataflow endpoint create adx --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --database mydatabase --host "https://cluster.region.kusto.windows.net" --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836

Show config for creating a dataflow endpoint resource.

az iot ops dataflow endpoint create adx --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --database mydatabase --host "https://cluster.region.kusto.windows.net" --latency 70 --message-count 100 --audience myaudience --show-config
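
Create or replace a dataflow endpoint resource explicitly selecting the system assigned managed identity authentication method. This is a sketch assembled only from flags documented on this page; 'myaudience' is a placeholder value.

az iot ops dataflow endpoint create adx --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --database mydatabase --host "https://cluster.region.kusto.windows.net" --auth-type SystemAssignedManagedIdentity --audience myaudience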

Required Parameters

--database --db

The name of the Azure Data Explorer database.

--host

The host of the Azure Data Explorer endpoint is the cluster URI, in the form https://cluster.region.kusto.windows.net.

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.

Accepted values: SystemAssignedManagedIdentity, UserAssignedManagedIdentity
--cid --client-id

The client ID of the user assigned identity.

--latency -l

The batching latency in seconds. Min value: 0, max value: 65535.

Default value: 60
--mc --message-count

Maximum number of messages in a batch. Min value: 0, max value: 4294967295.

Default value: 100000
--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint create custom-kafka

Create or replace a dataflow endpoint resource for a custom Kafka broker.

For more information on the custom Kafka dataflow endpoint, see https://aka.ms/aio-custom-kafka.

az iot ops dataflow endpoint create custom-kafka --hostname
                                                 --instance
                                                 --name
                                                 --port
                                                 --resource-group
                                                 [--acks {All, One, Zero}]
                                                 [--aud --audience]
                                                 [--auth-type {Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
                                                 [--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
                                                 [--cid --client-id]
                                                 [--cm --config-map-ref]
                                                 [--compression {Gzip, Lz4, None, Snappy}]
                                                 [--db --disable-batching {false, true}]
                                                 [--dbpc --disable-broker-props-copy {false, true}]
                                                 [--disable-tls {false, true}]
                                                 [--gid --group-id]
                                                 [--latency]
                                                 [--max-bytes --mb]
                                                 [--mc --message-count]
                                                 [--no-auth {false, true}]
                                                 [--partition-strategy --ps {Default, Property, Static, Topic}]
                                                 [--sasl-type {Plain, ScramSha256, ScramSha512}]
                                                 [--scope]
                                                 [--secret-name]
                                                 [--show-config {false, true}]
                                                 [--tenant-id --tid]

Examples

Create or replace a dataflow endpoint resource with minimum input.

az iot ops dataflow endpoint create custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mykafkabroker --port 9092

Create or replace a dataflow endpoint resource using SASL authentication method.

az iot ops dataflow endpoint create custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mykafkabroker --port 9092 --sasl-type ScramSha256 --secret-name mysecret

Create or replace a dataflow endpoint resource with no auth.

az iot ops dataflow endpoint create custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mykafkabroker --port 9092 --no-auth

Show config for creating a dataflow endpoint resource.

az iot ops dataflow endpoint create custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mykafkabroker --port 9092 --disable-batching --latency 70 --max-bytes 200000 --message-count 100 --audience myaudience --config-map-ref myconfigmap --disable-tls --show-config
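
Create or replace a dataflow endpoint resource using the user assigned managed identity authentication method. This is a sketch assembled only from flags documented on this page; the client ID, tenant ID, and scope values are placeholders.

az iot ops dataflow endpoint create custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mykafkabroker --port 9092 --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836 --scope "https://mykafkabroker.example.com/.default"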

Required Parameters

--hostname

The hostname of the Kafka broker host setting.

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--port

The port number of the Kafka broker host setting.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--acks

Level of acknowledgment from the Kafka broker to ensure that a message sent by the producer is successfully written to the topic and replicated across the Kafka cluster.

Accepted values: All, One, Zero
Default value: All
--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.

Accepted values: Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity
--cea --cloud-event-attribute

CloudEvent setting used to map events to the cloud. Different settings require different message formats.

Accepted values: CreateOrRemap, Propagate
Default value: Propagate
--cid --client-id

The client ID of the user assigned identity.

--cm --config-map-ref

Config map reference for Trusted CA certificate for Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka/MQTT data flow resource.

--compression

Compression type for the messages sent to Kafka topics.

Accepted values: Gzip, Lz4, None, Snappy
Default value: None
--db --disable-batching

Disable batching.

Accepted values: false, true
Default value: False
--dbpc --disable-broker-props-copy

Disable MQTT broker properties copy to Kafka user headers.

Accepted values: false, true
Default value: False
--disable-tls

Use an insecure (non-TLS) connection to the Kafka/MQTT broker.

Accepted values: false, true
Default value: False
--gid --group-id

ID of the consumer group that the data flow uses to read messages from the Kafka topic.

--latency -l

The batching latency in milliseconds. Min value: 0, max value: 65535.

Default value: 5
--max-bytes --mb

Maximum number of bytes in a batch.

Default value: 1000000
--mc --message-count

Maximum number of messages in a batch. Min value: 0, max value: 4294967295.

Default value: 100000
--no-auth

No authentication for the endpoint.

Accepted values: false, true
Default value: False
--partition-strategy --ps

The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics.

Accepted values: Default, Property, Static, Topic
Default value: Default
--sasl-type

The type of SASL authentication.

Accepted values: Plain, ScramSha256, ScramSha512
--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--secret-name -s

The name of the Kubernetes secret that contains the SASL token.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint create custom-mqtt

Create or replace a dataflow endpoint resource for a custom MQTT broker.

For more information on the custom MQTT dataflow endpoint, see https://aka.ms/aio-custom-mqtt.

az iot ops dataflow endpoint create custom-mqtt --hostname
                                                --instance
                                                --name
                                                --port
                                                --resource-group
                                                [--auth-type {ServiceAccountToken, SystemAssignedManagedIdentity, UserAssignedManagedIdentity, X509Certificate}]
                                                [--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
                                                [--cid --client-id]
                                                [--client-id-prefix]
                                                [--cm --config-map-ref]
                                                [--disable-tls {false, true}]
                                                [--ka --keep-alive]
                                                [--max-inflight-msg --mim]
                                                [--no-auth {false, true}]
                                                [--protocol {Mqtt, WebSockets}]
                                                [--qos]
                                                [--retain {Keep, Never}]
                                                [--sami-aud --sami-audience]
                                                [--sat-aud --sat-audience]
                                                [--scope]
                                                [--secret-name]
                                                [--session-expiry]
                                                [--show-config {false, true}]
                                                [--tenant-id --tid]

Examples

Create or replace a dataflow endpoint resource with minimum input.

az iot ops dataflow endpoint create custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mymqttbroker --port 9092

Create or replace a dataflow endpoint resource using Kubernetes Service Account Token authentication method.

az iot ops dataflow endpoint create custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mymqttbroker --port 9092 --sat-audience myaudience --secret-name mysecret

Create or replace a dataflow endpoint resource with no auth.

az iot ops dataflow endpoint create custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mymqttbroker --port 9092 --no-auth

Show config for creating a dataflow endpoint resource.

az iot ops dataflow endpoint create custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mymqttbroker --port 9092 --client-id-prefix myclientprefix --keep-alive 100 --max-inflight-msg 60 --protocol WebSockets --qos 1 --retain Never --session-expiry 100 --cloud-event-attribute CreateOrRemap --secret-name mysecret --disable-tls --show-config
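
Create or replace a dataflow endpoint resource using the X509 certificate authentication method. This is a sketch assembled only from flags documented on this page; 'mysecret' is a placeholder for the Kubernetes secret that holds the PEM-format client certificate chain and private key.

az iot ops dataflow endpoint create custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname mymqttbroker --port 8883 --auth-type X509Certificate --secret-name mysecret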

Required Parameters

--hostname

The hostname of the custom MQTT broker host setting.

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--port

The port number of the custom MQTT broker host setting.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.

Accepted values: ServiceAccountToken, SystemAssignedManagedIdentity, UserAssignedManagedIdentity, X509Certificate
--cea --cloud-event-attribute

CloudEvent setting used to map events to the cloud. Different settings require different message formats.

Accepted values: CreateOrRemap, Propagate
Default value: Propagate
--cid --client-id

The client ID of the user assigned identity.

--client-id-prefix

The client ID prefix for the MQTT client. Note: Changing the client ID prefix after IoT Operations deployment might result in data loss.

--cm --config-map-ref

Config map reference for Trusted CA certificate for Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka/MQTT data flow resource.

--disable-tls

Use an insecure (non-TLS) connection to the Kafka/MQTT broker.

Accepted values: false, true
Default value: False
--ka --keep-alive

The maximum time in seconds that the data flow client can be idle before sending a PINGREQ message to the broker. Min value: 0.

Default value: 60
--max-inflight-msg --mim

The maximum number of inflight messages that the data flow MQTT client can have. Min value: 0.

Default value: 100
--no-auth

No authentication for the endpoint.

Accepted values: false, true
Default value: False
--protocol

Protocol to use for client connections.

Accepted values: Mqtt, WebSockets
Default value: Mqtt
--qos

Quality of Service (QoS) level for the MQTT messages. Only 0 or 1 are supported.

Default value: 1
--retain

Retain setting that specifies whether the data flow should keep the retain flag on MQTT messages. This setting controls whether the remote broker retains the same messages as the local broker.

Accepted values: Keep, Never
Default value: Keep
--sami-aud --sami-audience

The audience of the system assigned managed identity.

--sat-aud --sat-audience

The audience of the Kubernetes service account token (SAT).

--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--secret-name -s

The name of the Kubernetes secret that contains the X509 client certificate, the private key corresponding to the client certificate, and the intermediate certificates for the client certificate chain. Note: The certificate and private key must be in PEM format and not password protected.

--session-expiry

The session expiry interval in seconds for the data flow MQTT client. Min value: 0.

Default value: 3600
--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint create eventgrid

Create or replace a dataflow endpoint resource for Azure Event Grid.

For more information on the Azure Event Grid dataflow endpoint, see https://aka.ms/aio-eventgrid.

az iot ops dataflow endpoint create eventgrid --hostname
                                              --instance
                                              --name
                                              --resource-group
                                              [--aud --audience]
                                              [--auth-type {SystemAssignedManagedIdentity, UserAssignedManagedIdentity, X509Certificate}]
                                              [--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
                                              [--cid --client-id]
                                              [--client-id-prefix]
                                              [--cm --config-map-ref]
                                              [--ka --keep-alive]
                                              [--max-inflight-msg --mim]
                                              [--port]
                                              [--protocol {Mqtt, WebSockets}]
                                              [--qos]
                                              [--retain {Keep, Never}]
                                              [--scope]
                                              [--secret-name]
                                              [--session-expiry]
                                              [--show-config {false, true}]
                                              [--tenant-id --tid]

Examples

Create or replace a dataflow endpoint resource with minimum input.

az iot ops dataflow endpoint create eventgrid --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname "namespace.region-1.ts.eventgrid.azure.net" --port 9092

Create or replace a dataflow endpoint resource using X509 authentication method.

az iot ops dataflow endpoint create eventgrid --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname "namespace.region-1.ts.eventgrid.azure.net" --port 9092 --secret-name mysecret

Show config for creating a dataflow endpoint resource.

az iot ops dataflow endpoint create eventgrid --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname "namespace.region-1.ts.eventgrid.azure.net" --port 9092 --client-id-prefix myclientprefix --keep-alive 100 --max-inflight-msg 60 --protocol WebSockets --qos 1 --retain Never --session-expiry 100 --cloud-event-attribute CreateOrRemap --secret-name mysecret --config-map-ref myconfigmap --show-config
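
Create or replace a dataflow endpoint resource using the user assigned managed identity authentication method, relying on the default port 8883. This is a sketch assembled only from flags documented on this page; the identity values are placeholders.

az iot ops dataflow endpoint create eventgrid --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname "namespace.region-1.ts.eventgrid.azure.net" --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836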

Required Parameters

--hostname

The hostname of the Event Grid namespace, found in the 'Http hostname' property, in the form NAMESPACE.REGION-1.ts.eventgrid.azure.net.

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.

Accepted values: SystemAssignedManagedIdentity, UserAssignedManagedIdentity, X509Certificate
--cea --cloud-event-attribute

CloudEvent setting used to map events to the cloud. Different settings require different message formats.

Accepted values: CreateOrRemap, Propagate
Default value: Propagate
--cid --client-id

The client ID of the user assigned identity.

--client-id-prefix

The client ID prefix for the MQTT client. Note: Changing the client ID prefix after IoT Operations deployment might result in data loss.

--cm --config-map-ref

Config map reference for Trusted CA certificate for Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka/MQTT data flow resource.

--ka --keep-alive

The maximum time in seconds that the data flow client can be idle before sending a PINGREQ message to the broker. Min value: 0.

Default value: 60
--max-inflight-msg --mim

The maximum number of inflight messages that the data flow MQTT client can have. Min value: 0.

Default value: 100
--port

The port number of the event grid namespace.

Default value: 8883
--protocol

Protocol to use for client connections.

Accepted values: Mqtt, WebSockets
Default value: Mqtt
--qos

Quality of Service (QoS) level for the MQTT messages. Only 0 or 1 are supported.

Default value: 1
--retain

Retain setting that specifies whether the data flow should keep the retain flag on MQTT messages. This setting controls whether the remote broker retains the same messages as the local broker.

Accepted values: Keep, Never
Default value: Keep
--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--secret-name -s

The name of the Kubernetes secret that contains the X509 client certificate, the private key corresponding to the client certificate, and the intermediate certificates for the client certificate chain. Note: The certificate and private key must be in PEM format and not password protected.

--session-expiry

The session expiry interval in seconds for the data flow MQTT client. Min value: 0.

Default value: 3600
--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint create eventhub

Create or replace a dataflow endpoint resource for a Kafka-enabled Azure Event Hubs namespace.

For more information on the Azure Event Hubs dataflow endpoint, see https://aka.ms/aio-eventhub.

az iot ops dataflow endpoint create eventhub --ehns --eventhub-namespace
                                             --instance
                                             --name
                                             --resource-group
                                             [--acks {All, One, Zero}]
                                             [--aud --audience]
                                             [--auth-type {Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
                                             [--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
                                             [--cid --client-id]
                                             [--cm --config-map-ref]
                                             [--compression {Gzip, Lz4, None, Snappy}]
                                             [--db --disable-batching {false, true}]
                                             [--dbpc --disable-broker-props-copy {false, true}]
                                             [--gid --group-id]
                                             [--latency]
                                             [--max-bytes --mb]
                                             [--mc --message-count]
                                             [--partition-strategy --ps {Default, Property, Static, Topic}]
                                             [--sasl-type {Plain, ScramSha256, ScramSha512}]
                                             [--scope]
                                             [--secret-name]
                                             [--show-config {false, true}]
                                             [--tenant-id --tid]

Examples

Create or replace a dataflow endpoint resource with minimum input.

az iot ops dataflow endpoint create eventhub --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --eventhub-namespace myeventhubnamespace

Create or replace a dataflow endpoint resource using user assigned managed identity authentication method.

az iot ops dataflow endpoint create eventhub --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --eventhub-namespace myeventhubnamespace --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836 --scope "https://eventhubs.azure.net/.default"

Show config for creating a dataflow endpoint resource.

az iot ops dataflow endpoint create eventhub --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --eventhub-namespace myeventhubnamespace --acks One --compression Gzip --disable-broker-props-copy --group-id mygroupid --partition-strategy Static --max-bytes 200000 --message-count 100 --latency 70 --cloud-event-attribute CreateOrRemap --sasl-type ScramSha256 --secret-name mysecret --config-map-ref myconfigmap --show-config
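
Create or replace a dataflow endpoint resource using plain SASL authentication. This is a sketch assembled only from flags documented on this page; 'mysecret' is a placeholder for the Kubernetes secret that holds the username and password key-value pairs.

az iot ops dataflow endpoint create eventhub --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --eventhub-namespace myeventhubnamespace --sasl-type Plain --secret-name mysecret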

Required Parameters

--ehns --eventhub-namespace

The name of the Event Hubs namespace.

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--acks

Level of acknowledgment from the Kafka broker to ensure that a message sent by the producer is successfully written to the topic and replicated across the Kafka cluster.

Accepted values: All, One, Zero
Default value: All
--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.

Accepted values: Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity
--cea --cloud-event-attribute

CloudEvent setting used to map events to the cloud. Different settings require different message formats.

Accepted values: CreateOrRemap, Propagate
Default value: Propagate
--cid --client-id

The client ID of the user assigned identity.

--cm --config-map-ref

Config map reference for Trusted CA certificate for Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka/MQTT data flow resource.

--compression

Compression type for the messages sent to Kafka topics.

Accepted values: Gzip, Lz4, None, Snappy
Default value: None
--db --disable-batching

Disable batching.

Accepted values: false, true
Default value: False
--dbpc --disable-broker-props-copy

Disable MQTT broker properties copy to Kafka user headers.

Accepted values: false, true
Default value: False
--gid --group-id

ID of the consumer group that the data flow uses to read messages from the Kafka topic.

--latency -l

The batching latency in milliseconds. Min value: 0, max value: 65535.

Default value: 5
--max-bytes --mb

Maximum number of bytes in a batch.

Default value: 1000000
--mc --message-count

Maximum number of messages in a batch. Min value: 0, max value: 4294967295.

Default value: 100000
--partition-strategy --ps

The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics.

Accepted values: Default, Property, Static, Topic
Default value: Default
--sasl-type

The type of SASL authentication.

Accepted values: Plain, ScramSha256, ScramSha512
--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--secret-name -s

The name of the Kubernetes secret that contains the Event Hubs connection string. Note: The secret must be in the same namespace as the Kafka dataflow endpoint and must contain both the username and password as key-value pairs. For more information about the secret format, refer to the link in the command description.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint create fabric-onelake

Create or replace a dataflow endpoint resource for Microsoft Fabric OneLake.

For more information on the Microsoft Fabric OneLake dataflow endpoint, see https://aka.ms/fabric-onelake.

az iot ops dataflow endpoint create fabric-onelake --instance
                                                   --lakehouse
                                                   --name
                                                   --path-type --pt {Files, Tables}
                                                   --resource-group
                                                   --workspace
                                                   [--aud --audience]
                                                   [--auth-type {SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
                                                   [--cid --client-id]
                                                   [--latency]
                                                   [--mc --message-count]
                                                   [--scope]
                                                   [--show-config {false, true}]
                                                   [--tenant-id --tid]

Examples

Create or replace a dataflow endpoint resource with minimum input.

az iot ops dataflow endpoint create fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --lakehouse mylakehouse --workspace myworkspace --path-type Files

Create or replace a dataflow endpoint resource using user assigned managed identity authentication method.

az iot ops dataflow endpoint create fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --lakehouse mylakehouse --workspace myworkspace --path-type Files --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836

Show config for creating a dataflow endpoint resource.

az iot ops dataflow endpoint create fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --lakehouse mylakehouse --workspace myworkspace --path-type Files --latency 70 --message-count 100 --audience myaudience --show-config
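
Create or replace a dataflow endpoint resource targeting the Tables path type, explicitly selecting the system assigned managed identity authentication method. This is a sketch assembled only from flags documented on this page.

az iot ops dataflow endpoint create fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --lakehouse mylakehouse --workspace myworkspace --path-type Tables --auth-type SystemAssignedManagedIdentity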

Required Parameters

--instance -i

IoT Operations instance name.

--lakehouse

The Microsoft Fabric lakehouse name under the provided workspace.

--name -n

Dataflow endpoint name.

--path-type --pt

The type of path used in OneLake.

Accepted values: Files, Tables
--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

--workspace

The Microsoft Fabric workspace name. Note: The default 'my workspace' isn't supported.

Optional Parameters

--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.

Accepted values: SystemAssignedManagedIdentity, UserAssignedManagedIdentity
--cid --client-id

The client ID of the user assigned identity.

--latency -l

The batching latency in seconds. Min value: 0, max value: 65535.

Default value: 60
--mc --message-count

Maximum number of messages in a batch. Min value: 0, max value: 4294967295.

Default value: 100000
--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint create fabric-realtime

Create or replace a Microsoft Fabric Real-Time Intelligence dataflow endpoint.

For more information on the Microsoft Fabric Real-Time Intelligence dataflow endpoint, see https://aka.ms/aio-fabric-real-time.

az iot ops dataflow endpoint create fabric-realtime --host
                                                    --instance
                                                    --name
                                                    --resource-group
                                                    [--acks {All, One, Zero}]
                                                    [--aud --audience]
                                                    [--auth-type {Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
                                                    [--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
                                                    [--cid --client-id]
                                                    [--cm --config-map-ref]
                                                    [--compression {Gzip, Lz4, None, Snappy}]
                                                    [--db --disable-batching {false, true}]
                                                    [--dbpc --disable-broker-props-copy {false, true}]
                                                    [--disable-tls {false, true}]
                                                    [--gid --group-id]
                                                    [--latency]
                                                    [--max-bytes --mb]
                                                    [--mc --message-count]
                                                    [--partition-strategy --ps {Default, Property, Static, Topic}]
                                                    [--sasl-type {Plain, ScramSha256, ScramSha512}]
                                                    [--scope]
                                                    [--secret-name]
                                                    [--show-config {false, true}]
                                                    [--tenant-id --tid]

Examples

Create or replace a dataflow endpoint resource with minimum input.

az iot ops dataflow endpoint create fabric-realtime --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --host "fabricrealtime.servicebus.windows.net:9093"

Create or replace a dataflow endpoint resource using SASL authentication method.

az iot ops dataflow endpoint create fabric-realtime --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --host "fabricrealtime.servicebus.windows.net:9093" --sasl-type ScramSha256 --secret-name mysecret

Show config for creating a dataflow endpoint resource.

az iot ops dataflow endpoint create fabric-realtime --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --host "fabricrealtime.servicebus.windows.net:9093" --acks One --compression Gzip --group-id mygroupid --partition-strategy Static --max-bytes 200000 --cloud-event-attribute CreateOrRemap --disable-tls --show-config
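
Create or replace a dataflow endpoint resource using the user assigned managed identity authentication method. This is a sketch assembled only from flags documented on this page; the identity values are placeholders.

az iot ops dataflow endpoint create fabric-realtime --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --host "fabricrealtime.servicebus.windows.net:9093" --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836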

Required Parameters

--host

The host of the Fabric Real-Time endpoint is the 'Bootstrap server' value, found in the eventstream destination's 'SAS Key Authentication' section, in the form *.servicebus.windows.net:9093.

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--acks

Level of acknowledgment from the Kafka broker to ensure that a message sent by the producer is successfully written to the topic and replicated across the Kafka cluster.

Accepted values: All, One, Zero
Default value: All
--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.

Accepted values: Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity
--cea --cloud-event-attribute

CloudEvent setting used to map events to the cloud. Different settings require different message formats.

Accepted values: CreateOrRemap, Propagate
Default value: Propagate
--cid --client-id

The client ID of the user assigned identity.

--cm --config-map-ref

Config map reference for Trusted CA certificate for Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka/MQTT data flow resource.

--compression

Compression type for the messages sent to Kafka topics.

Accepted values: Gzip, Lz4, None, Snappy
Default value: None
--db --disable-batching

Disable batching.

Accepted values: false, true
Default value: False
--dbpc --disable-broker-props-copy

Disable MQTT broker properties copy to Kafka user headers.

Accepted values: false, true
Default value: False
--disable-tls

Use an insecure (non-TLS) connection to the Kafka/MQTT broker.

Accepted values: false, true
Default value: False
--gid --group-id

ID of the consumer group that the data flow uses to read messages from the Kafka topic.

--latency -l

The batching latency in milliseconds. Min value: 0, max value: 65535.

Default value: 5
--max-bytes --mb

Maximum number of bytes in a batch.

Default value: 1000000
--mc --message-count

Maximum number of messages in a batch. Min value: 0, max value: 4294967295.

Default value: 100000
--partition-strategy --ps

The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics.

Accepted values: Default, Property, Static, Topic
Default value: Default
--sasl-type

The type of SASL authentication.

Accepted values: Plain, ScramSha256, ScramSha512
--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--secret-name -s

The name of the Kubernetes secret that contains the 'Connection string-primary key' value, found in the eventstream destination's 'SAS Key Authentication' section. Note: The secret must be in the same namespace as the Kafka dataflow endpoint. For more information about the secret format, refer to the link in the command description.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint create local-mqtt

Create or replace an Azure IoT Operations Local MQTT dataflow endpoint.

For more information on the Azure IoT Operations Local MQTT dataflow endpoint, see https://aka.ms/local-mqtt-broker.

az iot ops dataflow endpoint create local-mqtt --hostname
                                               --instance
                                               --name
                                               --port
                                               --resource-group
                                               [--aud --audience]
                                               [--auth-type {ServiceAccountToken, X509Certificate}]
                                               [--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
                                               [--client-id-prefix]
                                               [--cm --config-map-ref]
                                               [--disable-tls {false, true}]
                                               [--ka --keep-alive]
                                               [--max-inflight-msg --mim]
                                               [--no-auth {false, true}]
                                               [--protocol {Mqtt, WebSockets}]
                                               [--qos]
                                               [--retain {Keep, Never}]
                                               [--secret-name]
                                               [--session-expiry]
                                               [--show-config {false, true}]

Examples

Create or replace a dataflow endpoint resource with minimum input.

az iot ops dataflow endpoint create local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname aio-broker --port 1883

Create or replace a dataflow endpoint resource using X509 authentication method.

az iot ops dataflow endpoint create local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname aio-broker --port 1883 --secret-name mysecret

Create or replace a dataflow endpoint resource with no auth.

az iot ops dataflow endpoint create local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname aio-broker --port 1883 --no-auth

Show config for creating a dataflow endpoint resource.

az iot ops dataflow endpoint create local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname aio-broker --port 1883 --client-id-prefix myclientprefix --keep-alive 100 --max-inflight-msg 70 --protocol WebSockets --qos 0 --retain Never --show-config
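
Create or replace a dataflow endpoint resource using Kubernetes service account token (SAT) authentication with an explicit audience. This is a sketch assembled only from flags documented on this page; 'myaudience' is a placeholder value.

az iot ops dataflow endpoint create local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname aio-broker --port 1883 --auth-type ServiceAccountToken --audience myaudience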

Required Parameters

--hostname

The hostname of the local MQTT broker.

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--port

The port number of the local MQTT broker.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--aud --audience

The audience of the Kubernetes service account token (SAT).

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.

Accepted values: ServiceAccountToken, X509Certificate
--cea --cloud-event-attribute

CloudEvent setting used to map events to the cloud. Different settings require different message formats.

Accepted values: CreateOrRemap, Propagate
Default value: Propagate
--client-id-prefix

The client ID prefix for the MQTT client. Note: Changing the client ID prefix after IoT Operations deployment might result in data loss.

--cm --config-map-ref

Config map reference for Trusted CA certificate for Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka/MQTT data flow resource.

Default value: azure-iot-operations-aio-ca-trust-bundle
--disable-tls

Use an insecure (non-TLS) connection to the Kafka/MQTT broker.

Accepted values: false, true
Default value: False
--ka --keep-alive

The maximum time in seconds that the data flow client can be idle before sending a PINGREQ message to the broker. Min value: 0.

Default value: 60
--max-inflight-msg --mim

The maximum number of inflight messages that the data flow MQTT client can have. Min value: 0.

Default value: 100
--no-auth

No authentication for the endpoint.

Accepted values: false, true
Default value: False
--protocol

Protocol to use for client connections.

Accepted values: Mqtt, WebSockets
Default value: Mqtt
--qos

Quality of Service (QoS) level for the MQTT messages. Only 0 or 1 are supported.

Default value: 1
--retain

Retain setting that specifies whether the data flow should keep the retain flag on MQTT messages. This setting controls whether the remote broker retains the same messages as the local broker.

Accepted values: Keep, Never
Default value: Keep
--secret-name -s

The name of the Kubernetes secret that contains the X509 client certificate, the private key corresponding to the client certificate, and the intermediate certificates for the client certificate chain. Note: The certificate and private key must be in PEM format and not password protected.

--session-expiry

The session expiry interval in seconds for the data flow MQTT client. Min value: 0.

Default value: 3600
--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint create local-storage

Create or replace a local storage dataflow endpoint.

For more information on the local storage dataflow endpoint, see https://aka.ms/local-storage-endpoint.

az iot ops dataflow endpoint create local-storage --instance
                                                  --name
                                                  --pvc-ref
                                                  --resource-group
                                                  [--show-config {false, true}]

Examples

Create or replace a dataflow endpoint resource with minimum input.

az iot ops dataflow endpoint create local-storage --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --pvc-ref mypvc

Show config for creating a dataflow endpoint resource.

az iot ops dataflow endpoint create local-storage --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --pvc-ref mypvc --show-config
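
Create or replace a dataflow endpoint resource with the resource group taken from the CLI defaults. This is a sketch assuming the default group has been configured with az configure, as noted under --resource-group below.

az configure --defaults group=myresourcegroup
az iot ops dataflow endpoint create local-storage --name myendpoint --instance mycluster-ops-instance --pvc-ref mypvc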

Required Parameters

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--pvc-ref

The name of the PersistentVolumeClaim (PVC) to use for local storage. Note: The PVC must be in the same namespace as the data flow endpoint.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.