What’s coming?

Learn about features and behavioral changes in upcoming Azure Databricks releases.

Introduction of lower-cost serverless performance mode

You will soon be able to configure the performance mode of your serverless workflows, selecting between standard mode and performance-optimized mode. Previously, all serverless workflows ran in performance-optimized mode.

For serverless workflows where a slightly higher launch latency is acceptable, you will be able to optimize for cost by selecting standard performance mode.

Standard mode is comparable to classic jobs compute. This mode will be supported for scheduled jobs, event-based triggers, and manually triggered jobs. For multi-task jobs, standard mode is only used for the serverless tasks in a job.
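The exact configuration surface has not been published yet. As a rough sketch, assuming the Jobs API exposes a job-level performance_target setting when this feature ships (the field name and values below are illustrative, not confirmed; HOST and TOKEN stand in for your workspace URL and a personal access token):

```python
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # your workspace URL
TOKEN = "<personal-access-token>"

# Hypothetical sketch: assumes a job-level "performance_target" field on
# the Jobs API once standard mode ships; name and values are illustrative.
resp = requests.post(
    f"{HOST}/api/2.2/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "name": "nightly-etl",
        "performance_target": "STANDARD",  # or "PERFORMANCE_OPTIMIZED"
        "tasks": [
            {
                "task_key": "ingest",
                "notebook_task": {"notebook_path": "/Workspace/etl/ingest"},
                # No cluster settings, so the task runs on serverless compute.
            }
        ],
    },
)
resp.raise_for_status()
print(resp.json()["job_id"])
```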

Azure Databricks is updating the outbound public IP range for our control plane services starting May 20th, 2025

On May 20th, 2025, Azure Databricks will deploy new components for our control plane service. These changes will improve the security and zone support availability of our control plane services. Specifically, we will update the egress (outbound) Azure Databricks control plane public IPs and associated Azure service tags. These are the IP addresses listed for each region under the Service Control Plane NAT.

This affects customers who use resource firewalls to control inbound access to their resources. If your resource firewall directly references the Azure Databricks service tag, no action is required. If your resource firewall currently explicitly allows the Azure Databricks control plane public IPs, you will need to update those rules to also allow the new IP addresses before the change takes effect.
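One way to avoid tracking IP changes like this one is to reference the AzureDatabricks service tag instead of explicit addresses, since Azure keeps the tag's address list up to date. A minimal sketch using the azure-mgmt-network Python SDK, with the subscription, resource group, and NSG names as hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

# Placeholder names: substitute your own subscription, resource group, and NSG.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = SecurityRule(
    name="AllowDatabricksControlPlane",
    priority=200,
    direction="Inbound",
    access="Allow",
    protocol="Tcp",
    # Reference the service tag rather than explicit control plane IPs,
    # so Azure updates the address list for you.
    source_address_prefix="AzureDatabricks",
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="443",
)

client.security_rules.begin_create_or_update(
    "<resource-group>", "<nsg-name>", rule.name, rule
).result()
```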

Behavioral change for the Auto Loader incremental directory listing option

Note

The Auto Loader cloudFiles.useIncrementalListing option is deprecated. Although this note discusses a change to the option's default value and how to continue using it after this change, Databricks recommends replacing usage of this option with file notification mode.

In an upcoming Databricks Runtime release, the value of the deprecated Auto Loader cloudFiles.useIncrementalListing option will, by default, be set to false. Setting this value to false causes Auto Loader to perform a full directory listing each time it’s run. Currently, the default value of the cloudFiles.useIncrementalListing option is auto, instructing Auto Loader to make a best-effort attempt at detecting if an incremental listing can be used with a directory.

To continue using the incremental listing feature, set the cloudFiles.useIncrementalListing option to auto. When you set this value to auto, Auto Loader makes a best-effort attempt to use incremental listing and performs a full directory listing once every seven incremental listings, which matches the behavior of this option before this change.
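For example, to pin the current behavior explicitly in an Auto Loader stream rather than relying on the default (the source path and schema location below are placeholders):

```python
# Placeholder paths; substitute your own source and schema locations.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    # Pin the pre-change behavior explicitly instead of relying on the default.
    .option("cloudFiles.useIncrementalListing", "auto")
    .option("cloudFiles.schemaLocation", "/Volumes/main/default/checkpoints/schema")
    .load("abfss://container@account.dfs.core.windows.net/landing/")
)
```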

To learn more about Auto Loader directory listing options, see Auto Loader options.

Serverless compute to get Scala SDK support for service credentials

An update to serverless compute will add support for Unity Catalog-governed authentication to external cloud services using service credentials with the Scala SDK. This Scala support, already available in Databricks Runtime 16.2 and above, adds to the existing support for authenticating with service credentials using the Python SDK. See Manage access to external cloud services using service credentials.
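As a point of reference, the existing Python pattern looks roughly like the sketch below; the service credential name and storage account are hypothetical placeholders:

```python
from azure.storage.blob import BlobServiceClient

# "my-service-credential" is a hypothetical Unity Catalog service credential name.
credential = dbutils.credentials.getServiceCredentialsProvider("my-service-credential")

client = BlobServiceClient(
    "https://mystorageaccount.blob.core.windows.net",  # hypothetical account
    credential=credential,
)
for container in client.list_containers():
    print(container.name)
```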

Behavior change when dataset definitions are removed from a DLT pipeline

An upcoming release of DLT will change the behavior when a materialized view or streaming table is removed from a pipeline. With this change, the removed materialized view or streaming table will not be deleted automatically when the next pipeline update runs. Instead, you will be able to use the DROP MATERIALIZED VIEW command to delete a materialized view or the DROP TABLE command to delete a streaming table. After dropping an object, running a pipeline update will not recover the object automatically. A new object is created if a materialized view or streaming table with the same definition is re-added to the pipeline. You can, however, recover an object using the UNDROP command.
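In practice, after removing the dataset definitions from the pipeline source code, the manual cleanup would look roughly like this; the three-level names are placeholders, and recovery is subject to the usual UNDROP retention window:

```python
# Placeholder names; run after removing the corresponding definitions
# from the pipeline source code.
spark.sql("DROP MATERIALIZED VIEW main.etl.daily_summary")
spark.sql("DROP TABLE main.etl.events_stream")

# A dropped streaming table can be recovered within the retention window.
spark.sql("UNDROP TABLE main.etl.events_stream")
```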

End of support timeline for legacy dashboards

  • April 7, 2025: Official support for the legacy version of dashboards will end. Only critical security issues and service outages will be addressed.
  • November 3, 2025: Databricks will begin archiving legacy dashboards that have not been accessed in the past six months. Archived dashboards will no longer be accessible, and the archival process will occur on a rolling basis. Access to actively used dashboards will remain unchanged.

Databricks will work with customers to develop migration plans for active legacy dashboards after November 3, 2025.

To help transition to AI/BI dashboards, upgrade tools are available in both the user interface and the API. For instructions on how to use the built-in migration tool in the UI, see Clone a legacy dashboard to an AI/BI dashboard. For tutorials about creating and managing dashboards using the REST API, see Use Azure Databricks APIs to manage dashboards.
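For scripted migrations, a sketch of the API route is below; it assumes the Lakeview migrate endpoint, and the dashboard ID, HOST, and TOKEN are placeholders:

```python
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # your workspace URL
TOKEN = "<personal-access-token>"

# Clone a legacy dashboard to an AI/BI dashboard; the endpoint and payload
# follow the Lakeview API, and the source dashboard ID is a placeholder.
resp = requests.post(
    f"{HOST}/api/2.0/lakeview/dashboards/migrate",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"source_dashboard_id": "<legacy-dashboard-id>"},
)
resp.raise_for_status()
print(resp.json()["dashboard_id"])
```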

The sourceIpAddress field in audit logs will no longer include a port number

Due to a bug, certain authorization and authentication audit logs include a port number in addition to the IP address in the sourceIPAddress field (for example, "sourceIPAddress":"10.2.91.100:0"). The port number, which is logged as 0, does not provide any real value and is inconsistent with the rest of the Databricks audit logs. To improve the consistency of audit logs, Databricks plans to change the format of the IP address for these audit log events. This change will roll out gradually starting in early August 2024.
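During the rollout, downstream consumers may see both formats, so parsers can normalize defensively. A minimal sketch (the helper name is ours, and it only handles the IPv4-with-port shape described above):

```python
import re

# Matches an IPv4 address followed by a ":<port>" suffix, e.g. "10.2.91.100:0".
_IPV4_WITH_PORT = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3}):\d+$")

def normalize_source_ip(value: str) -> str:
    """Strip the spurious ':<port>' suffix from pre-change audit log entries."""
    match = _IPV4_WITH_PORT.match(value)
    return match.group(1) if match else value

assert normalize_source_ip("10.2.91.100:0") == "10.2.91.100"
assert normalize_source_ip("10.2.91.100") == "10.2.91.100"
```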

If the audit log contains a sourceIpAddress of 0.0.0.0, Databricks might stop logging it.