az iot ops dataflow endpoint update
Note
This reference is part of the azure-iot-ops extension for the Azure CLI (version 2.62.0 or higher). The extension will automatically install the first time you run an az iot ops dataflow endpoint update command. Learn more about extensions.
Update the properties of an existing dataflow endpoint resource.
Commands
Name | Description | Type | Status
---|---|---|---
az iot ops dataflow endpoint update adls | Update the properties of an existing dataflow endpoint resource for Azure Data Lake Storage Gen2. | Extension | GA
az iot ops dataflow endpoint update adx | Update the properties of an existing dataflow endpoint resource for Azure Data Explorer. | Extension | GA
az iot ops dataflow endpoint update custom-kafka | Update the properties of an existing dataflow endpoint resource for a custom Kafka broker. | Extension | GA
az iot ops dataflow endpoint update custom-mqtt | Update the properties of an existing dataflow endpoint resource for a custom MQTT broker. | Extension | GA
az iot ops dataflow endpoint update eventgrid | Update the properties of an existing dataflow endpoint resource for Azure Event Grid. | Extension | GA
az iot ops dataflow endpoint update eventhub | Update the properties of an existing dataflow endpoint resource for a Kafka-enabled Azure Event Hubs namespace. | Extension | GA
az iot ops dataflow endpoint update fabric-onelake | Update the properties of an existing dataflow endpoint resource for Microsoft Fabric OneLake. | Extension | GA
az iot ops dataflow endpoint update fabric-realtime | Update the properties of an existing Microsoft Fabric Real-Time Intelligence data flow endpoint. | Extension | GA
az iot ops dataflow endpoint update local-mqtt | Update the properties of an existing Azure IoT Operations Local MQTT data flow endpoint. | Extension | GA
az iot ops dataflow endpoint update local-storage | Update the properties of an existing local storage data flow endpoint. | Extension | GA
az iot ops dataflow endpoint update adls
Update the properties of an existing dataflow endpoint resource for Azure Data Lake Storage Gen2.
For more information on the Azure Data Lake Storage Gen2 dataflow endpoint, see https://aka.ms/adlsv2.
az iot ops dataflow endpoint update adls --instance
--name
--resource-group
[--aud --audience]
[--auth-type {AccessToken, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
[--cid --client-id]
[--latency]
[--mc --message-count]
[--sa --storage-account]
[--scope]
[--secret-name]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Update the storage account name of the dataflow endpoint resource called 'myendpoint'.
az iot ops dataflow endpoint update adls --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --storage-account newstorageaccount
Update the dataflow endpoint resource called 'myendpoint' to use the user-assigned managed identity authentication method.
az iot ops dataflow endpoint update adls --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --client-id 425cb1e9-1247-4cbc-8cdb-1aac9b429696 --tenant-id bca45660-49a2-4bad-862a-0b9459b4b836 --scope "https://storage.azure.com/.default"
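Because updates are applied directly to the live resource, it can be useful to preview the configuration an update would generate before committing it. The --show-config flag renders the generated resource config instead of invoking the API. A dry-run sketch of the storage account change above, reusing the placeholder resource names from these examples:

```shell
# Preview the generated endpoint configuration without invoking the API.
az iot ops dataflow endpoint update adls \
  --name myendpoint \
  --instance mycluster-ops-instance \
  --resource-group myresourcegroup \
  --storage-account newstorageaccount \
  --show-config
```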
Required Parameters
IoT Operations instance name.
Dataflow endpoint name.
Name of resource group. You can configure the default group using az configure --defaults group=&lt;name&gt;.
Optional Parameters
Audience of the service to authenticate against.
The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
The client ID of the user assigned identity.
The batching latency in seconds. Min value: 0, max value: 65535.
Maximum number of messages in a batch. Min value: 0, max value: 4294967295.
The name of Azure Data Lake Storage Gen2 account.
Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
The name of the Kubernetes secret that contains the SAS token.
Show the generated resource config instead of invoking the API with it.
The tenant ID of the user assigned identity.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint update adx
Update the properties of an existing dataflow endpoint resource for Azure Data Explorer.
For more information on the Azure Data Explorer dataflow endpoint, see https://aka.ms/aio-adx.
az iot ops dataflow endpoint update adx --instance
--name
--resource-group
[--aud --audience]
[--auth-type {SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
[--cid --client-id]
[--database --db]
[--host]
[--latency]
[--mc --message-count]
[--scope]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Update the batching configurations of the dataflow endpoint resource called 'myendpoint'.
az iot ops dataflow endpoint update adx --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --latency 70 --message-count 100
Update the dataflow endpoint resource called 'myendpoint' to use the system-assigned managed identity authentication method.
az iot ops dataflow endpoint update adx --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --auth-type SystemAssignedManagedIdentity
Required Parameters
IoT Operations instance name.
Dataflow endpoint name.
Name of resource group. You can configure the default group using az configure --defaults group=&lt;name&gt;.
Optional Parameters
Audience of the service to authenticate against.
The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
The client ID of the user assigned identity.
The name of the Azure Data Explorer database.
The host of the Azure Data Explorer endpoint is the cluster URI, in the form https://cluster.region.kusto.windows.net.
The batching latency in seconds. Min value: 0, max value: 65535.
Maximum number of messages in a batch. Min value: 0, max value: 4294967295.
Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
Show the generated resource config instead of invoking the API with it.
The tenant ID of the user assigned identity.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint update custom-kafka
Update the properties of an existing dataflow endpoint resource for a custom Kafka broker.
For more information on the custom Kafka dataflow endpoint, see https://aka.ms/aio-custom-kafka.
az iot ops dataflow endpoint update custom-kafka --instance
--name
--resource-group
[--acks {All, One, Zero}]
[--aud --audience]
[--auth-type {Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
[--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
[--cid --client-id]
[--cm --config-map-ref]
[--compression {Gzip, Lz4, None, Snappy}]
[--db --disable-batching {false, true}]
[--dbpc --disable-broker-props-copy {false, true}]
[--disable-tls {false, true}]
[--gid --group-id]
[--hostname]
[--latency]
[--max-bytes --mb]
[--mc --message-count]
[--no-auth {false, true}]
[--partition-strategy --ps {Default, Property, Static, Topic}]
[--port]
[--sasl-type {Plain, ScramSha256, ScramSha512}]
[--scope]
[--secret-name]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Update the hostname of the dataflow endpoint resource called 'myendpoint'.
az iot ops dataflow endpoint update custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --hostname newkafkabroker
Update the dataflow endpoint resource called 'myendpoint' to use the SASL authentication method.
az iot ops dataflow endpoint update custom-kafka --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --sasl-type ScramSha256 --secret-name mysecret
Required Parameters
IoT Operations instance name.
Dataflow endpoint name.
Name of resource group. You can configure the default group using az configure --defaults group=&lt;name&gt;.
Optional Parameters
Level of acknowledgment from the Kafka broker to ensure that messages sent by the producer are successfully written to the topic and replicated across the Kafka cluster.
Audience of the service to authenticate against.
The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
CloudEvent setting type used to map events to the cloud. Different settings require different message formats.
The client ID of the user assigned identity.
Config map reference for Trusted CA certificate for Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka/MQTT data flow resource.
Compression type for the messages sent to Kafka topics.
Disable batching.
Disable MQTT broker properties copy to Kafka user headers.
The data flow uses an insecure connection to the Kafka/MQTT broker.
ID of consumer group that the data flow uses to read messages from the Kafka topic.
The hostname of the Kafka broker host setting.
The batching latency in milliseconds. Min value: 0, max value: 65535.
Maximum number of bytes in a batch.
Maximum number of messages in a batch. Min value: 0, max value: 4294967295.
No authentication for the endpoint.
The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics.
The port number of the Kafka broker host setting.
The type of SASL authentication.
Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
The name of the Kubernetes secret that contains the SASL token.
Show the generated resource config instead of invoking the API with it.
The tenant ID of the user assigned identity.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint update custom-mqtt
Update the properties of an existing dataflow endpoint resource for a custom MQTT broker.
For more information on the custom MQTT dataflow endpoint, see https://aka.ms/aio-custom-mqtt.
az iot ops dataflow endpoint update custom-mqtt --instance
--name
--resource-group
[--auth-type {ServiceAccountToken, SystemAssignedManagedIdentity, UserAssignedManagedIdentity, X509Certificate}]
[--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
[--cid --client-id]
[--client-id-prefix]
[--cm --config-map-ref]
[--disable-tls {false, true}]
[--hostname]
[--ka --keep-alive]
[--max-inflight-msg --mim]
[--no-auth {false, true}]
[--port]
[--protocol {Mqtt, WebSockets}]
[--qos]
[--retain {Keep, Never}]
[--sami-aud --sami-audience]
[--sat-aud --sat-audience]
[--scope]
[--secret-name]
[--session-expiry]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Update the cloud event setting type of the dataflow endpoint resource called 'myendpoint'.
az iot ops dataflow endpoint update custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --cloud-event-attribute CreateOrRemap
Update the dataflow endpoint resource called 'myendpoint' to use the X509 authentication method.
az iot ops dataflow endpoint update custom-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --auth-type X509Certificate --secret-name mysecret
Required Parameters
IoT Operations instance name.
Dataflow endpoint name.
Name of resource group. You can configure the default group using az configure --defaults group=&lt;name&gt;.
Optional Parameters
The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
CloudEvent setting type used to map events to the cloud. Different settings require different message formats.
The client ID of the user assigned identity.
The client ID prefix for the MQTT client. Note: Changing the client ID prefix after IoT Operations deployment might result in data loss.
Config map reference for Trusted CA certificate for Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka/MQTT data flow resource.
The data flow uses an insecure connection to the Kafka/MQTT broker.
The hostname of the custom MQTT broker host setting.
The maximum time in seconds that the data flow client can be idle before sending a PINGREQ message to the broker. Min value: 0.
The maximum number of inflight messages that the data flow MQTT client can have. Min value: 0.
No authentication for the endpoint.
The port number of the custom MQTT broker host setting.
Protocol to use for client connections.
Quality of Service (QoS) level for the MQTT messages. Only 0 or 1 are supported.
Retain setting that specifies whether the data flow should keep the retain flag on MQTT messages. This setting determines whether the remote broker retains the same messages as the local broker.
The audience of the system assigned managed identity.
The audience of the Kubernetes service account token (SAT).
Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
The name of the Kubernetes secret that contains the X509 client certificate, the private key corresponding to the client certificate, and the intermediate certificates for the client certificate chain. Note: The certificate and private key must be in PEM format and not password protected.
The session expiry interval in seconds for the data flow MQTT client. Min value: 0.
Show the generated resource config instead of invoking the API with it.
The tenant ID of the user assigned identity.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint update eventgrid
Update the properties of an existing dataflow endpoint resource for Azure Event Grid.
For more information on the Azure Event Grid dataflow endpoint, see https://aka.ms/aio-eventgrid.
az iot ops dataflow endpoint update eventgrid --instance
--name
--resource-group
[--aud --audience]
[--auth-type {SystemAssignedManagedIdentity, UserAssignedManagedIdentity, X509Certificate}]
[--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
[--cid --client-id]
[--client-id-prefix]
[--cm --config-map-ref]
[--hostname]
[--ka --keep-alive]
[--max-inflight-msg --mim]
[--port]
[--protocol {Mqtt, WebSockets}]
[--qos]
[--retain {Keep, Never}]
[--scope]
[--secret-name]
[--session-expiry]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Update the session expiry interval of the dataflow endpoint resource called 'myendpoint'.
az iot ops dataflow endpoint update eventgrid --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --session-expiry 100
Update the dataflow endpoint resource called 'myendpoint' to use the X509 authentication method.
az iot ops dataflow endpoint update eventgrid --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --secret-name mysecret
Required Parameters
IoT Operations instance name.
Dataflow endpoint name.
Name of resource group. You can configure the default group using az configure --defaults group=&lt;name&gt;.
Optional Parameters
Audience of the service to authenticate against.
The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
CloudEvent setting type used to map events to the cloud. Different settings require different message formats.
The client ID of the user assigned identity.
The client ID prefix for the MQTT client. Note: Changing the client ID prefix after IoT Operations deployment might result in data loss.
Config map reference for Trusted CA certificate for Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka/MQTT data flow resource.
The hostname of the Event Grid namespace, found in the 'Http hostname' property, in the form NAMESPACE.REGION-1.ts.eventgrid.azure.net.
The maximum time in seconds that the data flow client can be idle before sending a PINGREQ message to the broker. Min value: 0.
The maximum number of inflight messages that the data flow MQTT client can have. Min value: 0.
The port number of the Event Grid namespace.
Protocol to use for client connections.
Quality of Service (QoS) level for the MQTT messages. Only 0 or 1 are supported.
Retain setting that specifies whether the data flow should keep the retain flag on MQTT messages. This setting determines whether the remote broker retains the same messages as the local broker.
Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
The name of the Kubernetes secret that contains the X509 client certificate, the private key corresponding to the client certificate, and the intermediate certificates for the client certificate chain. Note: The certificate and private key must be in PEM format and not password protected.
The session expiry interval in seconds for the data flow MQTT client. Min value: 0.
Show the generated resource config instead of invoking the API with it.
The tenant ID of the user assigned identity.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint update eventhub
Update the properties of an existing dataflow endpoint resource for a Kafka-enabled Azure Event Hubs namespace.
For more information on the Azure Event Hubs dataflow endpoint, see https://aka.ms/aio-eventhub.
az iot ops dataflow endpoint update eventhub --instance
--name
--resource-group
[--acks {All, One, Zero}]
[--aud --audience]
[--auth-type {Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
[--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
[--cid --client-id]
[--cm --config-map-ref]
[--compression {Gzip, Lz4, None, Snappy}]
[--db --disable-batching {false, true}]
[--dbpc --disable-broker-props-copy {false, true}]
[--ehns --eventhub-namespace]
[--gid --group-id]
[--latency]
[--max-bytes --mb]
[--mc --message-count]
[--partition-strategy --ps {Default, Property, Static, Topic}]
[--sasl-type {Plain, ScramSha256, ScramSha512}]
[--scope]
[--secret-name]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Update the compression type of the dataflow endpoint resource called 'myendpoint'.
az iot ops dataflow endpoint update eventhub --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --compression Gzip
Update the dataflow endpoint resource called 'myendpoint' to use the SASL authentication method.
az iot ops dataflow endpoint update eventhub --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --sasl-type ScramSha256 --secret-name mysecret
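The global --query and --output parameters (listed under Global Parameters) can be combined with any of these commands to trim the response. A sketch, where the JMESPath path 'properties' is an assumption about the standard ARM response shape rather than part of this reference:

```shell
# Update the compression type and show only the resulting endpoint properties.
# The 'properties' JMESPath expression assumes the usual ARM resource envelope.
az iot ops dataflow endpoint update eventhub \
  --name myendpoint \
  --instance mycluster-ops-instance \
  --resource-group myresourcegroup \
  --compression Gzip \
  --query properties \
  --output json
```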
Required Parameters
IoT Operations instance name.
Dataflow endpoint name.
Name of resource group. You can configure the default group using az configure --defaults group=&lt;name&gt;.
Optional Parameters
Level of acknowledgment from the Kafka broker to ensure that messages sent by the producer are successfully written to the topic and replicated across the Kafka cluster.
Audience of the service to authenticate against.
The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
CloudEvent setting type used to map events to the cloud. Different settings require different message formats.
The client ID of the user assigned identity.
Config map reference for Trusted CA certificate for Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka/MQTT data flow resource.
Compression type for the messages sent to Kafka topics.
Disable batching.
Disable MQTT broker properties copy to Kafka user headers.
The name of the Event Hubs namespace.
ID of consumer group that the data flow uses to read messages from the Kafka topic.
The batching latency in milliseconds. Min value: 0, max value: 65535.
Maximum number of bytes in a batch.
Maximum number of messages in a batch. Min value: 0, max value: 4294967295.
The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics.
The type of SASL authentication.
Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
The name of the Kubernetes secret that contains the Event Hubs connection string. Note: The secret must be in the same namespace as the Kafka data flow endpoint and must contain both the username and password as key-value pairs. For more information about the secret format, see the link in the command description.
Show the generated resource config instead of invoking the API with it.
The tenant ID of the user assigned identity.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint update fabric-onelake
Update the properties of an existing dataflow endpoint resource for Microsoft Fabric OneLake.
For more information on the Microsoft Fabric OneLake dataflow endpoint, see https://aka.ms/fabric-onelake.
az iot ops dataflow endpoint update fabric-onelake --instance
--name
--resource-group
[--aud --audience]
[--auth-type {SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
[--cid --client-id]
[--lakehouse]
[--latency]
[--mc --message-count]
[--path-type --pt {Files, Tables}]
[--scope]
[--show-config {false, true}]
[--tenant-id --tid]
[--workspace]
Examples
Update the lakehouse name of the dataflow endpoint resource called 'myendpoint'.
az iot ops dataflow endpoint update fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --lakehouse newlakehouse
Update the dataflow endpoint resource called 'myendpoint' to use the system-assigned managed identity authentication method.
az iot ops dataflow endpoint update fabric-onelake --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --audience newaudience
Required Parameters
IoT Operations instance name.
Dataflow endpoint name.
Name of resource group. You can configure the default group using az configure --defaults group=&lt;name&gt;.
Optional Parameters
Audience of the service to authenticate against.
The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
The client ID of the user assigned identity.
The Microsoft Fabric lakehouse name under provided workspace.
The batching latency in seconds. Min value: 0, max value: 65535.
Maximum number of messages in a batch. Min value: 0, max value: 4294967295.
The type of path used in OneLake.
Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
Show the generated resource config instead of invoking the API with it.
The tenant ID of the user assigned identity.
The Microsoft Fabric workspace name. Note: The default 'my workspace' isn't supported.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint update fabric-realtime
Update the properties of an existing Microsoft Fabric Real-Time Intelligence data flow endpoint.
For more information on the Microsoft Fabric Real-Time Intelligence dataflow endpoint, see https://aka.ms/aio-fabric-real-time.
az iot ops dataflow endpoint update fabric-realtime --instance
--name
--resource-group
[--acks {All, One, Zero}]
[--aud --audience]
[--auth-type {Sasl, SystemAssignedManagedIdentity, UserAssignedManagedIdentity}]
[--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
[--cid --client-id]
[--cm --config-map-ref]
[--compression {Gzip, Lz4, None, Snappy}]
[--db --disable-batching {false, true}]
[--dbpc --disable-broker-props-copy {false, true}]
[--disable-tls {false, true}]
[--gid --group-id]
[--host]
[--latency]
[--max-bytes --mb]
[--mc --message-count]
[--partition-strategy --ps {Default, Property, Static, Topic}]
[--sasl-type {Plain, ScramSha256, ScramSha512}]
[--scope]
[--secret-name]
[--show-config {false, true}]
[--tenant-id --tid]
Examples
Update the partition strategy of the dataflow endpoint resource called 'myendpoint'.
az iot ops dataflow endpoint update fabric-realtime --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --partition-strategy Static
Update the dataflow endpoint resource called 'myendpoint' to use the SASL authentication method.
az iot ops dataflow endpoint update fabric-realtime --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --sasl-type ScramSha256 --secret-name mysecret
Required Parameters
IoT Operations instance name.
Dataflow endpoint name.
Name of resource group. You can configure the default group using az configure --defaults group=&lt;name&gt;.
Optional Parameters
Level of acknowledgment from the Kafka broker to ensure that messages sent by the producer are successfully written to the topic and replicated across the Kafka cluster.
Audience of the service to authenticate against.
The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
CloudEvent setting type used to map events to the cloud. Different settings require different message formats.
The client ID of the user assigned identity.
Config map reference for Trusted CA certificate for Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka/MQTT data flow resource.
Compression type for the messages sent to Kafka topics.
Disable batching.
Disable MQTT broker properties copy to Kafka user headers.
The data flow uses an insecure connection to the Kafka/MQTT broker.
ID of consumer group that the data flow uses to read messages from the Kafka topic.
The host of the Fabric Real-Time endpoint is the 'Bootstrap server' value, found in the 'SAS Key Authentication' section of the event stream destination, in the form *.servicebus.windows.net:9093.
The batching latency in milliseconds. Min value: 0, max value: 65535.
Maximum number of bytes in a batch.
Maximum number of messages in a batch. Min value: 0, max value: 4294967295.
The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics.
The type of SASL authentication.
Resource identifier (application ID URI) of the resource, affixed with the .default suffix.
The name of the Kubernetes secret that contains the 'Connection string-primary key' value, found in the 'SAS Key Authentication' section of the event stream destination. Note: The secret must be in the same namespace as the Kafka data flow endpoint. For more information about the secret format, see the link in the command description.
Show the generated resource config instead of invoking the API with it.
The tenant ID of the user assigned identity.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint update local-mqtt
Update the properties of an existing Azure IoT Operations Local MQTT data flow endpoint.
For more information on the Azure IoT Operations Local MQTT dataflow endpoint, see https://aka.ms/local-mqtt-broker.
az iot ops dataflow endpoint update local-mqtt --instance
--name
--resource-group
[--aud --audience]
[--auth-type {ServiceAccountToken, X509Certificate}]
[--cea --cloud-event-attribute {CreateOrRemap, Propagate}]
[--client-id-prefix]
[--cm --config-map-ref]
[--disable-tls {false, true}]
[--hostname]
[--ka --keep-alive]
[--max-inflight-msg --mim]
[--no-auth {false, true}]
[--port]
[--protocol {Mqtt, WebSockets}]
[--qos]
[--retain {Keep, Never}]
[--secret-name]
[--session-expiry]
[--show-config {false, true}]
Examples
Update the config map reference for trusted CA certificate of the dataflow endpoint resource called 'myendpoint'.
az iot ops dataflow endpoint update local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --config-map-ref mynewconfigmap
Update the dataflow endpoint resource called 'myendpoint' to use the Kubernetes service account token authentication method.
az iot ops dataflow endpoint update local-mqtt --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --auth-type ServiceAccountToken --audience myaudience
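As noted under Required Parameters, the default resource group can be configured once so it does not have to be repeated on every call. A sketch reusing the placeholder names from the examples above:

```shell
# Configure a default resource group for subsequent az commands.
az configure --defaults group=myresourcegroup

# --resource-group can then be omitted from endpoint updates.
az iot ops dataflow endpoint update local-mqtt \
  --name myendpoint \
  --instance mycluster-ops-instance \
  --config-map-ref mynewconfigmap
```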
Required Parameters
IoT Operations instance name.
Dataflow endpoint name.
Name of resource group. You can configure the default group using az configure --defaults group=&lt;name&gt;.
Optional Parameters
The audience of the Kubernetes service account token (SAT).
The authentication type for the dataflow endpoint. Note: When not specified, the authentication type is determined by other authentication parameters.
CloudEvent setting type used to map events to the cloud. Different settings require different message formats.
The client ID prefix for the MQTT client. Note: Changing the client ID prefix after IoT Operations deployment might result in data loss.
Config map reference for Trusted CA certificate for Kafka/MQTT endpoint. Note: This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka/MQTT data flow resource.
The data flow uses an insecure connection to the Kafka/MQTT broker.
The hostname of the local MQTT broker.
The maximum time in seconds that the data flow client can be idle before sending a PINGREQ message to the broker. Min value: 0.
The maximum number of inflight messages that the data flow MQTT client can have. Min value: 0.
No authentication for the endpoint.
The port number of the local MQTT broker.
Protocol to use for client connections.
Quality of Service (QoS) level for the MQTT messages. Only 0 or 1 are supported.
Retain setting that specifies whether the data flow should keep the retain flag on MQTT messages. This setting determines whether the remote broker retains the same messages as the local broker.
The name of the Kubernetes secret that contains the X509 client certificate, the private key corresponding to the client certificate, and the intermediate certificates for the client certificate chain. Note: The certificate and private key must be in PEM format and not password protected.
The session expiry interval in seconds for the data flow MQTT client. Min value: 0.
Show the generated resource config instead of invoking the API with it.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Increase logging verbosity. Use --debug for full debug logs.
az iot ops dataflow endpoint update local-storage
Update the properties of an existing local storage data flow endpoint.
For more information on the local storage dataflow endpoint, see https://aka.ms/local-storage-endpoint.
az iot ops dataflow endpoint update local-storage --instance
--name
--resource-group
[--pvc-ref]
[--show-config {false, true}]
Examples
Update the PVC reference of the dataflow endpoint resource called 'myendpoint'.
az iot ops dataflow endpoint update local-storage --name myendpoint --instance mycluster-ops-instance --resource-group myresourcegroup --pvc-ref newpvc
Required Parameters
IoT Operations instance name.
Dataflow endpoint name.
Name of resource group. You can configure the default group using az configure --defaults group=&lt;name&gt;.
Optional Parameters
The name of the PersistentVolumeClaim (PVC) to use for local storage. Note: The PVC must be in the same namespace as the data flow endpoint.
Show the generated resource config instead of invoking the API with it.
Global Parameters
Increase logging verbosity to show all debug logs.
Show this help message and exit.
Only show errors, suppressing warnings.
Output format.
JMESPath query string. See http://jmespath.org/ for more information and examples.
Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.
Increase logging verbosity. Use --debug for full debug logs.