az iot ops dataflow endpoint update

Note

This reference is part of the azure-iot-ops extension for the Azure CLI (version 2.62.0 or higher). The extension will automatically install the first time you run an az iot ops dataflow endpoint update command. Learn more about extensions.
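
If you prefer to install or update the extension ahead of time, you can add it explicitly:

az extension add --upgrade --name azure-iot-ops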

Update the properties of an existing dataflow endpoint resource.

Commands

Name | Description | Type | Status
az iot ops dataflow endpoint update adls | Update the properties of an existing dataflow endpoint resource for Azure Data Lake Storage Gen2. | Extension | GA
az iot ops dataflow endpoint update adx | Update the properties of an existing dataflow endpoint resource for Azure Data Explorer. | Extension | GA
az iot ops dataflow endpoint update custom-kafka | Update the properties of an existing dataflow endpoint resource for a custom Kafka broker. | Extension | GA
az iot ops dataflow endpoint update custom-mqtt | Update the properties of an existing dataflow endpoint resource for a custom MQTT broker. | Extension | GA
az iot ops dataflow endpoint update eventgrid | Update the properties of an existing dataflow endpoint resource for Azure Event Grid. | Extension | GA
az iot ops dataflow endpoint update eventhub | Update the properties of an existing dataflow endpoint resource for a Kafka-enabled Azure Event Hubs namespace. | Extension | GA
az iot ops dataflow endpoint update fabric-onelake | Update the properties of an existing dataflow endpoint resource for Microsoft Fabric OneLake. | Extension | GA
az iot ops dataflow endpoint update fabric-realtime | Update the properties of an existing Microsoft Fabric Real-Time Intelligence data flow endpoint. | Extension | GA
az iot ops dataflow endpoint update local-mqtt | Update the properties of an existing Azure IoT Operations Local MQTT data flow endpoint. | Extension | GA
az iot ops dataflow endpoint update local-storage | Update the properties of an existing local storage data flow endpoint. | Extension | GA

az iot ops dataflow endpoint update adls

Update the properties of an existing dataflow endpoint resource for Azure Data Lake Storage Gen2.

For more information on the Azure Data Lake Storage Gen2 dataflow endpoint, see https://aka.ms/adlsv2.

az iot ops dataflow endpoint update adls --instance
                                         --name
                                         --resource-group
                                         [--aud --audience]
                                         [--auth-type {AccessToken, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
                                         [--cid --client-id]
                                         [--latency]
                                         [--mc --message-count]
                                         [--sa --storage-account]
                                         [--scope]
                                         [--secret-name]
                                         [--show-config {false, true}]
                                         [--tenant-id --tid]

Examples

Update the storage account name of the dataflow endpoint resource called 'myendpoint'.

az iot ops dataflow endpoint update adls --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --storage-account newstorageaccount

Update the dataflow endpoint resource called 'myendpoint' to use the user-assigned managed identity authentication method.

az iot ops dataflow endpoint update adls --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836 --scope "https://storage.azure.com/.default"
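
Preview the generated resource config without applying the update, for example when changing batching settings (the values shown are illustrative).

az iot ops dataflow endpoint update adls --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --latency 60 --message-count 500 --show-config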

Required Parameters

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by the other authentication parameters.

Accepted values: AccessToken, SystemAssignedManagedIdentity, UserAssignedManagedIdentity
--cid --client-id

The client ID of the user assigned identity.

--latency -l

The batching latency in seconds. Min value: 0, max value: 65535.

--mc --message-count

Maximum number of messages in a batch. Min value: 0, max value: 4294967295.

--sa --storage-account

The name of the Azure Data Lake Storage Gen2 account.

--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--secret-name -s

The name of the Kubernetes secret that contains the SAS token.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint update adx

Update the properties of an existing dataflow endpoint resource for Azure Data Explorer.

For more information on the Azure Data Explorer dataflow endpoint, see https://aka.ms/aio-adx.

az iot ops dataflow endpoint update adx --instance
                                        --name
                                        --resource-group
                                        [--aud --audience]
                                        [--auth-type {SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
                                        [--cid --client-id]
                                        [--database --db]
                                        [--host]
                                        [--latency]
                                        [--mc --message-count]
                                        [--scope]
                                        [--show-config {false, true}]
                                        [--tenant-id --tid]

Examples

Update the batching configurations of the dataflow endpoint resource called 'myendpoint'.

az iot ops dataflow endpoint update adx --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --latency 70 --message-count 100

Update the dataflow endpoint resource called 'myendpoint' to use the system-assigned managed identity authentication method.

az iot ops dataflow endpoint update adx --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --auth-type SystemAssignedManagedIdentity
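
Update the cluster URI and database of the dataflow endpoint resource called 'myendpoint' (the cluster URI below is a placeholder).

az iot ops dataflow endpoint update adx --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --host "https://mycluster.westeurope.kusto.windows.net" --database mydatabase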

Required Parameters

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by the other authentication parameters.

Accepted values: SystemAssignedManagedIdentity, UserAssignedManagedIdentity
--cid --client-id

The client ID of the user assigned identity.

--database --db

The name of the Azure Data Explorer database.

--host

The host of the Azure Data Explorer endpoint is the cluster URI, in the form https://cluster.region.kusto.windows.net.

--latency -l

The batching latency in seconds. Min value: 0, max value: 65535.

--mc --message-count

Maximum number of messages in a batch. Min value: 0, max value: 4294967295.

--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint update custom-kafka

Update the properties of an existing dataflow endpoint resource for a custom Kafka broker.

For more information on the custom Kafka dataflow endpoint, see https://aka.ms/aio-custom-kafka.

az iot ops dataflow endpoint update custom-kafka --instance
                                                 --name
                                                 --resource-group
                                                 [--acks {All, One, Zero}]
                                                 [--aud --audience]
                                                 [--auth-type {Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
                                                 [--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
                                                 [--cid --client-id]
                                                 [--cm --config-map-ref]
                                                 [--compression {Gzip, Lz4, None, Snappy}]
                                                 [--db --disable-batching {false, true}]
                                                 [--dbpc --disable-broker-props-copy {false, true}]
                                                 [--disable-tls {false, true}]
                                                 [--gid --group-id]
                                                 [--hostname]
                                                 [--latency]
                                                 [--max-bytes --mb]
                                                 [--mc --message-count]
                                                 [--no-auth {false, true}]
                                                 [--partition-strategy --ps {Default, Property, Static, Topic}]
                                                 [--port]
                                                 [--sasl-type {Plain, ScramSha256, ScramSha512}]
                                                 [--scope]
                                                 [--secret-name]
                                                 [--show-config {false, true}]
                                                 [--tenant-id --tid]

Examples

Update the hostname of the dataflow endpoint resource called 'myendpoint'.

az iot ops dataflow endpoint update custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname newkafkabroker

Update the dataflow endpoint resource called 'myendpoint' to use the SASL authentication method.

az iot ops dataflow endpoint update custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --sasl-type ScramSha256 --secret-name mysecret
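
Tune the batching behavior of the dataflow endpoint resource called 'myendpoint' (the values below are illustrative, within the documented ranges).

az iot ops dataflow endpoint update custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --latency 500 --max-bytes 1000000 --message-count 100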

Required Parameters

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--acks

Level of acknowledgment from the Kafka broker to ensure that the message sent by the producer is successfully written to the topic and replicated across the Kafka cluster.

Accepted values: All, One, Zero
Default value: All
--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by the other authentication parameters.

Accepted values: Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity
--cea --cloud-event-attribute

CloudEvent setting type for mapping events to the cloud. Different settings require different message formats.

Accepted values: CreateOrRemap, Propagate
Default value: Propagate
--cid --client-id

The client ID of the user assigned identity.

--cm --config-map-ref

ConfigMap reference for the trusted CA certificate for the Kafka/MQTT endpoint. Note: The ConfigMap should contain the CA certificate in PEM format and must be in the same namespace as the Kafka/MQTT data flow resource.

--compression

Compression type for the messages sent to Kafka topics.

Accepted values: Gzip, Lz4, None, Snappy
Default value: None
--db --disable-batching

Disable batching.

Accepted values: false, true
--dbpc --disable-broker-props-copy

Disable MQTT broker properties copy to Kafka user headers.

Accepted values: false, true
--disable-tls

The data flow uses an insecure connection to the Kafka/MQTT broker.

Accepted values: false, true
--gid --group-id

The ID of the consumer group that the data flow uses to read messages from the Kafka topic.

--hostname

The hostname of the Kafka broker.

--latency -l

The batching latency in milliseconds. Min value: 0, max value: 65535.

--max-bytes --mb

Maximum number of bytes in a batch.

--mc --message-count

Maximum number of messages in a batch. Min value: 0, max value: 4294967295.

--no-auth

No authentication for the endpoint.

Accepted values: false, true
--partition-strategy --ps

The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics.

Accepted values: Default, Property, Static, Topic
Default value: Default
--port

The port number of the Kafka broker.

--sasl-type

The type of SASL authentication.

Accepted values: Plain, ScramSha256, ScramSha512
--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--secret-name -s

The name of the Kubernetes secret that contains the SASL token.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint update custom-mqtt

Update the properties of an existing dataflow endpoint resource for a custom MQTT broker.

For more information on the custom MQTT dataflow endpoint, see https://aka.ms/aio-custom-mqtt.

az iot ops dataflow endpoint update custom-mqtt --instance
                                                --name
                                                --resource-group
                                                [--auth-type {ServiceAccountToken, SystemAssignedManagedIdentity, UserAssignedManagedIdentity, X509Certificate}]
                                                [--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
                                                [--cid --client-id]
                                                [--client-id-prefix]
                                                [--cm --config-map-ref]
                                                [--disable-tls {false, true}]
                                                [--hostname]
                                                [--ka --keep-alive]
                                                [--max-inflight-msg --mim]
                                                [--no-auth {false, true}]
                                                [--port]
                                                [--protocol {Mqtt, WebSockets}]
                                                [--qos]
                                                [--retain {Keep, Never}]
                                                [--sami-aud --sami-audience]
                                                [--sat-aud --sat-audience]
                                                [--scope]
                                                [--secret-name]
                                                [--session-expiry]
                                                [--show-config {false, true}]
                                                [--tenant-id --tid]

Examples

Update the cloud event setting type of the dataflow endpoint resource called 'myendpoint'.

az iot ops dataflow endpoint update custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --cloud-event-attribute CreateOrRemap

Update the dataflow endpoint resource called 'myendpoint' to use the X509 certificate authentication method.

az iot ops dataflow endpoint update custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --auth-type X509Certificate --secret-name mysecret
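
Tune the MQTT client settings of the dataflow endpoint resource called 'myendpoint' (the values below are illustrative).

az iot ops dataflow endpoint update custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --keep-alive 60 --max-inflight-msg 100 --qos 1 --session-expiry 3600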

Required Parameters

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by the other authentication parameters.

Accepted values: ServiceAccountToken, SystemAssignedManagedIdentity, UserAssignedManagedIdentity, X509Certificate
--cea --cloud-event-attribute

CloudEvent setting type for mapping events to the cloud. Different settings require different message formats.

Accepted values: CreateOrRemap, Propagate
Default value: Propagate
--cid --client-id

The client ID of the user assigned identity.

--client-id-prefix

The client ID prefix for the MQTT client. Note: Changing the client ID prefix after IoT Operations deployment might result in data loss.

--cm --config-map-ref

ConfigMap reference for the trusted CA certificate for the Kafka/MQTT endpoint. Note: The ConfigMap should contain the CA certificate in PEM format and must be in the same namespace as the Kafka/MQTT data flow resource.

--disable-tls

The data flow uses an insecure connection to the Kafka/MQTT broker.

Accepted values: false, true
--hostname

The hostname of the custom MQTT broker.

--ka --keep-alive

The maximum time in seconds that the data flow client can be idle before sending a PINGREQ message to the broker. Min value: 0.

--max-inflight-msg --mim

The maximum number of inflight messages that the data flow MQTT client can have. Min value: 0.

--no-auth

No authentication for the endpoint.

Accepted values: false, true
--port

The port number of the custom MQTT broker.

--protocol

Protocol to use for client connections.

Accepted values: Mqtt, WebSockets
Default value: Mqtt
--qos

Quality of Service (QoS) level for the MQTT messages. Only 0 or 1 are supported.

--retain

Retain setting that specifies whether the data flow should keep the retain flag on MQTT messages. This setting controls whether the remote broker retains the same messages as the local broker.

Accepted values: Keep, Never
Default value: Keep
--sami-aud --sami-audience

The audience of the system assigned managed identity.

--sat-aud --sat-audience

The audience of the Kubernetes service account token (SAT).

--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--secret-name -s

The name of the Kubernetes secret that contains the X509 client certificate, the private key corresponding to the client certificate, and the intermediate certificates for the client certificate chain. Note: The certificate and private key must be in PEM format and not password protected.

--session-expiry

The session expiry interval in seconds for the data flow MQTT client. Min value: 0.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint update eventgrid

Update the properties of an existing dataflow endpoint resource for Azure Event Grid.

For more information on the Azure Event Grid dataflow endpoint, see https://aka.ms/aio-eventgrid.

az iot ops dataflow endpoint update eventgrid --instance
                                              --name
                                              --resource-group
                                              [--aud --audience]
                                              [--auth-type {SystemAssignedManagedIdentity, UserAssignedManagedIdentity, X509Certificate}]
                                              [--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
                                              [--cid --client-id]
                                              [--client-id-prefix]
                                              [--cm --config-map-ref]
                                              [--hostname]
                                              [--ka --keep-alive]
                                              [--max-inflight-msg --mim]
                                              [--port]
                                              [--protocol {Mqtt, WebSockets}]
                                              [--qos]
                                              [--retain {Keep, Never}]
                                              [--scope]
                                              [--secret-name]
                                              [--session-expiry]
                                              [--show-config {false, true}]
                                              [--tenant-id --tid]

Examples

Update the session expiry interval of the dataflow endpoint resource called 'myendpoint'.

az iot ops dataflow endpoint update eventgrid --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --session-expiry 100

Update the dataflow endpoint resource called 'myendpoint' to use the X509 certificate authentication method.

az iot ops dataflow endpoint update eventgrid --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --secret-name mysecret
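
Update the hostname and port of the dataflow endpoint resource called 'myendpoint' (the namespace hostname below is a placeholder; 8883 is the usual MQTT-over-TLS port).

az iot ops dataflow endpoint update eventgrid --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname "mynamespace.westus2-1.ts.eventgrid.azure.net" --port 8883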

Required Parameters

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by the other authentication parameters.

Accepted values: SystemAssignedManagedIdentity, UserAssignedManagedIdentity, X509Certificate
--cea --cloud-event-attribute

CloudEvent setting type for mapping events to the cloud. Different settings require different message formats.

Accepted values: CreateOrRemap, Propagate
Default value: Propagate
--cid --client-id

The client ID of the user assigned identity.

--client-id-prefix

The client ID prefix for the MQTT client. Note: Changing the client ID prefix after IoT Operations deployment might result in data loss.

--cm --config-map-ref

ConfigMap reference for the trusted CA certificate for the Kafka/MQTT endpoint. Note: The ConfigMap should contain the CA certificate in PEM format and must be in the same namespace as the Kafka/MQTT data flow resource.

--hostname

The hostname of the Event Grid namespace, found in the 'Http hostname' property, in the form NAMESPACE.REGION-1.ts.eventgrid.azure.net.

--ka --keep-alive

The maximum time in seconds that the data flow client can be idle before sending a PINGREQ message to the broker. Min value: 0.

--max-inflight-msg --mim

The maximum number of inflight messages that the data flow MQTT client can have. Min value: 0.

--port

The port number of the Event Grid namespace.

--protocol

Protocol to use for client connections.

Accepted values: Mqtt, WebSockets
Default value: Mqtt
--qos

Quality of Service (QoS) level for the MQTT messages. Only 0 or 1 are supported.

--retain

Retain setting that specifies whether the data flow should keep the retain flag on MQTT messages. This setting controls whether the remote broker retains the same messages as the local broker.

Accepted values: Keep, Never
Default value: Keep
--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--secret-name -s

The name of the Kubernetes secret that contains the X509 client certificate, the private key corresponding to the client certificate, and the intermediate certificates for the client certificate chain. Note: The certificate and private key must be in PEM format and not password protected.

--session-expiry

The session expiry interval in seconds for the data flow MQTT client. Min value: 0.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint update eventhub

Update the properties of an existing dataflow endpoint resource for a Kafka-enabled Azure Event Hubs namespace.

For more information on the Azure Event Hubs dataflow endpoint, see https://aka.ms/aio-eventhub.

az iot ops dataflow endpoint update eventhub --instance
                                             --name
                                             --resource-group
                                             [--acks {All, One, Zero}]
                                             [--aud --audience]
                                             [--auth-type {Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
                                             [--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
                                             [--cid --client-id]
                                             [--cm --config-map-ref]
                                             [--compression {Gzip, Lz4, None, Snappy}]
                                             [--db --disable-batching {false, true}]
                                             [--dbpc --disable-broker-props-copy {false, true}]
                                             [--ehns --eventhub-namespace]
                                             [--gid --group-id]
                                             [--latency]
                                             [--max-bytes --mb]
                                             [--mc --message-count]
                                             [--partition-strategy --ps {Default, Property, Static, Topic}]
                                             [--sasl-type {Plain, ScramSha256, ScramSha512}]
                                             [--scope]
                                             [--secret-name]
                                             [--show-config {false, true}]
                                             [--tenant-id --tid]

Examples

Update the compression type of the dataflow endpoint resource called 'myendpoint'.

az iot ops dataflow endpoint update eventhub --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --compression Gzip

Update the dataflow endpoint resource called 'myendpoint' to use the SASL authentication method.

az iot ops dataflow endpoint update eventhub --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --sasl-type ScramSha256 --secret-name mysecret
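
Update the Event Hubs namespace and consumer group of the dataflow endpoint resource called 'myendpoint' (the names below are illustrative).

az iot ops dataflow endpoint update eventhub --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --eventhub-namespace myeventhubns --group-id mygroup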

Required Parameters

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--acks

Level of acknowledgment from the Kafka broker to ensure that the message sent by the producer is successfully written to the topic and replicated across the Kafka cluster.

Accepted values: All, One, Zero
Default value: All
--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by the other authentication parameters.

Accepted values: Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity
--cea --cloud-event-attribute

CloudEvent setting type for mapping events to the cloud. Different settings require different message formats.

Accepted values: CreateOrRemap, Propagate
Default value: Propagate
--cid --client-id

The client ID of the user assigned identity.

--cm --config-map-ref

ConfigMap reference for the trusted CA certificate for the Kafka/MQTT endpoint. Note: The ConfigMap should contain the CA certificate in PEM format and must be in the same namespace as the Kafka/MQTT data flow resource.

--compression

Compression type for the messages sent to Kafka topics.

Accepted values: Gzip, Lz4, None, Snappy
Default value: None
--db --disable-batching

Disable batching.

Accepted values: false, true
--dbpc --disable-broker-props-copy

Disable MQTT broker properties copy to Kafka user headers.

Accepted values: false, true
--ehns --eventhub-namespace

The name of the Event Hubs namespace.

--gid --group-id

The ID of the consumer group that the data flow uses to read messages from the Kafka topic.

--latency -l

The batching latency in milliseconds. Min value: 0, max value: 65535.

--max-bytes --mb

Maximum number of bytes in a batch.

--mc --message-count

Maximum number of messages in a batch. Min value: 0, max value: 4294967295.

--partition-strategy --ps

The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics.

Accepted values: Default, Property, Static, Topic
Default value: Default
--sasl-type

The type of SASL authentication.

Accepted values: Plain, ScramSha256, ScramSha512
--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--secret-name -s

The name of the Kubernetes secret that contains the Event Hubs connection string. Note: The secret must be in the same namespace as the Kafka data flow endpoint and must contain both the username and password as key-value pairs. For more information about the secret format, see the link in the command description.
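
For illustration, a secret in the expected shape could be created with kubectl, assuming SASL Plain against a Kafka-enabled namespace, where the username is the literal $ConnectionString and the password is the connection string. The secret name, namespace, and connection string below are placeholders and depend on your deployment:

kubectl create secret generic mysecret --namespace azure-iot-operations --from-literal=username='$ConnectionString' --from-literal=password='Endpoint=sb://myeventhubns.servicebus.windows.net/;SharedAccessKeyName=mykeyname;SharedAccessKey=<key>'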

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint update fabric-onelake

Update the properties of an existing dataflow endpoint resource for Microsoft Fabric OneLake.

For more information on the Microsoft Fabric OneLake dataflow endpoint, see https://aka.ms/fabric-onelake.

az iot ops dataflow endpoint update fabric-onelake --instance
                                                   --name
                                                   --resource-group
                                                   [--aud --audience]
                                                   [--auth-type {SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
                                                   [--cid --client-id]
                                                   [--lakehouse]
                                                   [--latency]
                                                   [--mc --message-count]
                                                   [--path-type --pt {Files, Tables}]
                                                   [--scope]
                                                   [--show-config {false, true}]
                                                   [--tenant-id --tid]
                                                   [--workspace]

Examples

Update the lakehouse name of the dataflow endpoint resource called 'myendpoint'.

az iot ops dataflow endpoint update fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --lakehouse newlakehouse

Update the dataflow endpoint resource called 'myendpoint' to use the system-assigned managed identity authentication method.

az iot ops dataflow endpoint update fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --audience newaudience
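
Update the OneLake path type and workspace of the dataflow endpoint resource called 'myendpoint' (the names below are illustrative).

az iot ops dataflow endpoint update fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --path-type Tables --workspace myworkspace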

Required Parameters

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by the other authentication parameters.

Accepted values: SystemAssignedManagedIdentity, UserAssignedManagedIdentity
--cid --client-id

The client ID of the user assigned identity.

--lakehouse

The Microsoft Fabric lakehouse name under the provided workspace.

--latency -l

The batching latency in seconds. Min value: 0, max value: 65535.

--mc --message-count

Maximum number of messages in a batch. Min value: 0, max value: 4294967295.

--path-type --pt

The type of path used in OneLake.

Accepted values: Files, Tables
--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

--workspace

The Microsoft Fabric workspace name. Note: The default 'my workspace' isn't supported.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint update fabric-realtime

Update the properties of an existing Microsoft Fabric Real-Time Intelligence data flow endpoint.

For more information on the Microsoft Fabric Real-Time Intelligence dataflow endpoint, see https://aka.ms/aio-fabric-real-time.

az iot ops dataflow endpoint update fabric-realtime --instance
                                                    --name
                                                    --resource-group
                                                    [--acks {All, One, Zero}]
                                                    [--aud --audience]
                                                    [--auth-type {Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
                                                    [--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
                                                    [--cid --client-id]
                                                    [--cm --config-map-ref]
                                                    [--compression {Gzip, Lz4, None, Snappy}]
                                                    [--db --disable-batching {false, true}]
                                                    [--dbpc --disable-broker-props-copy {false, true}]
                                                    [--disable-tls {false, true}]
                                                    [--gid --group-id]
                                                    [--host]
                                                    [--latency]
                                                    [--max-bytes --mb]
                                                    [--mc --message-count]
                                                    [--partition-strategy --ps {Default, Property, Static, Topic}]
                                                    [--sasl-type {Plain, ScramSha256, ScramSha512}]
                                                    [--scope]
                                                    [--secret-name]
                                                    [--show-config {false, true}]
                                                    [--tenant-id --tid]

Examples

Update the partition strategy of the dataflow endpoint resource called 'myendpoint'.

az iot ops dataflow endpoint update fabric-realtime --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --partition-strategy Static

Update the dataflow endpoint resource called 'myendpoint' to use the SASL authentication method.

az iot ops dataflow endpoint update fabric-realtime --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --sasl-type ScramSha256 --secret-name mysecret
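
Update the bootstrap server host of the dataflow endpoint resource called 'myendpoint' (the host below is a placeholder in the documented form).

az iot ops dataflow endpoint update fabric-realtime --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --host "myeventstream.servicebus.windows.net:9093"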

Required Parameters

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--acks

Level of acknowledgment from the Kafka broker to ensure that the message sent by the producer is successfully written to the topic and replicated across the Kafka cluster.

Accepted values: All, One, Zero
Default value: All
--aud --audience

Audience of the service to authenticate against.

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by the other authentication parameters.

Accepted values: Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity
--cea --cloud-event-attribute

CloudEvent setting type for mapping events to the cloud. Different settings require different message formats.

Accepted values: CreateOrRemap, Propagate
Default value: Propagate
--cid --client-id

The client ID of the user assigned identity.

--cm --config-map-ref

ConfigMap reference for the trusted CA certificate for the Kafka/MQTT endpoint. Note: The ConfigMap should contain the CA certificate in PEM format and must be in the same namespace as the Kafka/MQTT data flow resource.

--compression

Compression type for the messages sent to Kafka topics.

Accepted values: Gzip, Lz4, None, Snappy
Default value: None
--db --disable-batching

Disable batching.

Accepted values: false, true
--dbpc --disable-broker-props-copy

Disable MQTT broker properties copy to Kafka user headers.

Accepted values: false, true
--disable-tls

The data flow uses an insecure connection to the Kafka/MQTT broker.

Accepted values: false, true
--gid --group-id

The ID of the consumer group that the data flow uses to read messages from the Kafka topic.

--host

The host of the Fabric Real-Time endpoint is the 'Bootstrap server' value, found in the eventstream destination's 'SAS Key Authentication' section, in the form *.servicebus.windows.net:9093.

--latency -l

The batching latency in milliseconds. Min value: 0, max value: 65535.

--max-bytes --mb

Maximum number of bytes in a batch.

--mc --message-count

Maximum number of messages in a batch. Min value: 0, max value: 4294967295.

--partition-strategy --ps

The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics.

Accepted values: Default, Property, Static, Topic
Default value: Default
--sasl-type

The type of SASL authentication.

Accepted values: Plain, ScramSha256, ScramSha512
--scope

Resource identifier (application ID URI) of the resource, affixed with the .default suffix.

--secret-name -s

The name of the Kubernetes secret that contains the 'Connection string-primary key' value, found in the eventstream destination's 'SAS Key Authentication' section. Note: The secret must be in the same namespace as the Kafka data flow endpoint. For more information about the secret format, see the link in the command description.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
--tenant-id --tid

The tenant ID of the user assigned identity.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint update local-mqtt

Update the properties of an existing Azure IoT Operations Local MQTT data flow endpoint.

For more information on the Azure IoT Operations Local MQTT dataflow endpoint, see https://aka.ms/local-mqtt-broker.

az iot ops dataflow endpoint update local-mqtt --instance
                                               --name
                                               --resource-group
                                               [--aud --audience]
                                               [--auth-type {ServiceAccountToken, X509Certificate}]
                                               [--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
                                               [--client-id-prefix]
                                               [--cm --config-map-ref]
                                               [--disable-tls {false, true}]
                                               [--hostname]
                                               [--ka --keep-alive]
                                               [--max-inflight-msg --mim]
                                               [--no-auth {false, true}]
                                               [--port]
                                               [--protocol {Mqtt, WebSockets}]
                                               [--qos]
                                               [--retain {Keep, Never}]
                                               [--secret-name]
                                               [--session-expiry]
                                               [--show-config {false, true}]

Examples

Update the config map reference for the trusted CA certificate of the dataflow endpoint resource called 'myendpoint'.

az iot ops dataflow endpoint update local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --config-map-ref mynewconfigmap

Update the dataflow endpoint resource called 'myendpoint' to use the Kubernetes service account token authentication method.

az iot ops dataflow endpoint update local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --auth-type ServiceAccountToken --audience myaudience
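
Update the hostname and port of the dataflow endpoint resource called 'myendpoint'; the values below assume the default in-cluster broker listener and are illustrative.

az iot ops dataflow endpoint update local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname aio-broker --port 18883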

Required Parameters

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--aud --audience

The audience of the Kubernetes service account token (SAT).

--auth-type

The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by the other authentication parameters.

Accepted values: ServiceAccountToken, X509Certificate
--cea --cloud-event-attribute

CloudEvent setting type for mapping events to the cloud. Different settings require different message formats.

Accepted values: CreateOrRemap, Propagate
Default value: Propagate
--client-id-prefix

The client ID prefix for the MQTT client. Note: Changing the client ID prefix after IoT Operations deployment might result in data loss.

--cm --config-map-ref

ConfigMap reference for the trusted CA certificate for the Kafka/MQTT endpoint. Note: The ConfigMap should contain the CA certificate in PEM format and must be in the same namespace as the Kafka/MQTT data flow resource.

--disable-tls

The data flow uses an insecure connection to the Kafka/MQTT broker.

Accepted values: false, true
Default value: False
--hostname

The hostname of the local MQTT broker.

--ka --keep-alive

The maximum time in seconds that the data flow client can be idle before sending a PINGREQ message to the broker. Min value: 0.

--max-inflight-msg --mim

The maximum number of inflight messages that the data flow MQTT client can have. Min value: 0.

--no-auth

No authentication for the endpoint.

Accepted values: false, true
--port

The port number of the local MQTT broker.

--protocol

Protocol to use for client connections.

Accepted values: Mqtt, WebSockets
Default value: Mqtt
--qos

Quality of Service (QoS) level for the MQTT messages. Only 0 or 1 are supported.

--retain

Retain setting that specifies whether the data flow should keep the retain flag on MQTT messages. This setting controls whether the remote broker retains the same messages as the local broker.

Accepted values: Keep, Never
Default value: Keep
--secret-name -s

The name of the Kubernetes secret that contains the X509 client certificate, the private key corresponding to the client certificate, and the intermediate certificates for the client certificate chain. Note: The certificate and private key must be in PEM format and not password protected.

--session-expiry

The session expiry interval in seconds for the data flow MQTT client. Min value: 0.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az iot ops dataflow endpoint update local-storage

Update the properties of an existing local storage data flow endpoint.

For more information on the local storage dataflow endpoint, see https://aka.ms/local-storage-endpoint.

az iot ops dataflow endpoint update local-storage --instance
                                                  --name
                                                  --resource-group
                                                  [--pvc-ref]
                                                  [--show-config {false, true}]

Examples

Update the PVC reference of the dataflow endpoint resource called 'myendpoint'.

az iot ops dataflow endpoint update local-storage --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --pvc-ref newpvc
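
Preview the generated resource config for a PVC change without applying it.

az iot ops dataflow endpoint update local-storage --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --pvc-ref newpvc --show-config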

Required Parameters

--instance -i

IoT Operations instance name.

--name -n

Dataflow endpoint name.

--resource-group -g

Name of resource group. You can configure the default group using az configure --defaults group=<name>.

Optional Parameters

--pvc-ref

The name of the PersistentVolumeClaim (PVC) to use for local storage. Note: The PVC must be in the same namespace as the data flow endpoint.

--show-config

Show the generated resource config instead of invoking the API with it.

Accepted values: false, true
Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

Accepted values: json, jsonc, none, table, tsv, yaml, yamlc
Default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.