It sounds like you want to stand up a DR copy of your on-prem DB2 LUW (RHEL 7.9, DB2 11.5.9.0) in Azure, but only ship the new archive logs—not push the whole database over every time. The native way to do exactly that is to configure DB2’s HADR between your on-prem primary and an Azure-based standby VM. Once the standby is seeded with an initial full backup, HADR takes care of shipping & applying only the new log files.
Here’s a high-level approach you can share with your customer:
Provision your Azure environment • Create a RHEL 7.9 VM in the same region or a paired region. • Make sure networking (VPN or ExpressRoute) and firewall rules allow DB2 traffic: both the instance port (default 50000/TCP, or your custom port) and the dedicated HADR service ports you will assign via HADR_LOCAL_SVC/HADR_REMOTE_SVC.
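As a minimal sketch of the host-level firewall piece on RHEL 7.9 with firewalld (the port numbers here are assumptions: 50000 is the common default instance port, and 51012/51013 are illustrative HADR service ports):

```shell
# Open the DB2 instance port plus the HADR service ports on the Azure VM.
# 51012/51013 are placeholders -- use whatever ports you later assign to
# HADR_LOCAL_SVC / HADR_REMOTE_SVC.
sudo firewall-cmd --permanent --add-port=50000/tcp
sudo firewall-cmd --permanent --add-port=51012/tcp
sudo firewall-cmd --permanent --add-port=51013/tcp
sudo firewall-cmd --reload
```

Equivalent rules are also needed in the Azure NSG and on the on-prem firewall, in both directions.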
Install DB2 11.5.9.0 on the Azure VM • Use the same fix pack level as your on-prem instance. • Create a DB2 instance and ensure the instance names/user IDs match your on-prem setup (or map them appropriately).
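A quick sanity check that both hosts are at the same code level before going further (run on both machines and compare):

```shell
# The reported level must match on primary and standby (v11.5.9.0 here)
# before HADR is configured.
db2level
# Confirm which instance you are working in:
db2 get instance
```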
Seed the standby with a full backup On your on-prem primary (substitute your database name for <DBNAME>):
db2 backup db <DBNAME> online to /path/to/backup include logs
Copy that backup image to the Azure VM (via scp, CIFS/SMB, Azure Files, etc.). On the Azure VM:
db2 restore db <DBNAME> from /path/to/backup
Important: do not roll forward and do not complete the restore. The standby must remain in rollforward-pending state, otherwise it cannot be started as an HADR standby.
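The copy step over scp might look like this (the host name and paths are hypothetical, and the image file name is illustrative of DB2's <DBNAME>.0.<instance>.DBPART000.<timestamp>.001 naming scheme):

```shell
# Copy the backup image from the on-prem primary to the Azure VM.
# Host, paths, and file name are all placeholders -- adjust to your environment.
scp /backup/SAMPLE.0.db2inst1.DBPART000.20250101120000.001 \
    db2inst1@azure-db.example.com:/backup/
```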
Configure HADR parameters on both sides (replace the <angle-bracket> placeholders with your database name, host names, ports, and instance names). On the primary:
db2 update db cfg for <DBNAME> using HADR_LOCAL_HOST <onprem-host>
db2 update db cfg for <DBNAME> using HADR_LOCAL_SVC <port>
db2 update db cfg for <DBNAME> using HADR_REMOTE_HOST <azure-host>
db2 update db cfg for <DBNAME> using HADR_REMOTE_SVC <port>
db2 update db cfg for <DBNAME> using HADR_REMOTE_INST <standby-instance>
db2 update db cfg for <DBNAME> using HADR_SYNCMODE ASYNC
On the standby: the same parameters with the local/remote host, port, and instance values mirrored. Note that every HADR sync mode ships only log data; ASYNC (or SUPERASYNC) is recommended over a WAN because it keeps the network round trip out of the primary's commit path.
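Filled in with illustrative values (database SAMPLE, instance db2inst1, and the host names and ports are all placeholders), the two sides mirror each other:

```shell
# Primary (on-prem) -- every name and port below is illustrative.
db2 update db cfg for SAMPLE using HADR_LOCAL_HOST  onprem-db.example.com
db2 update db cfg for SAMPLE using HADR_LOCAL_SVC   51012
db2 update db cfg for SAMPLE using HADR_REMOTE_HOST azure-db.example.com
db2 update db cfg for SAMPLE using HADR_REMOTE_SVC  51013
db2 update db cfg for SAMPLE using HADR_REMOTE_INST db2inst1
db2 update db cfg for SAMPLE using HADR_SYNCMODE    ASYNC

# Standby (Azure VM) -- same parameters with local/remote values swapped.
db2 update db cfg for SAMPLE using HADR_LOCAL_HOST  azure-db.example.com
db2 update db cfg for SAMPLE using HADR_LOCAL_SVC   51013
db2 update db cfg for SAMPLE using HADR_REMOTE_HOST onprem-db.example.com
db2 update db cfg for SAMPLE using HADR_REMOTE_SVC  51012
db2 update db cfg for SAMPLE using HADR_REMOTE_INST db2inst1
db2 update db cfg for SAMPLE using HADR_SYNCMODE    ASYNC
```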
Start HADR • On the standby first: db2 start hadr on db <DBNAME> as standby • Then on the primary: db2 start hadr on db <DBNAME> as primary (Start the standby first: the primary start fails if it cannot reach a running standby within HADR_TIMEOUT.)
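Once both sides are started, db2pd is the usual way to confirm the pair is connected and replaying logs (SAMPLE is a placeholder database name; verify the exact field names against your fix pack's output):

```shell
# Run on either host after both sides are started.
db2pd -db SAMPLE -hadr
# Watch for: HADR_ROLE (PRIMARY/STANDBY), HADR_STATE (ideally PEER),
# HADR_CONNECT_STATUS (CONNECTED), and the log gap trending toward zero.
```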
From that point on, DB2 streams only new log data (the contents of your “logs folder”) to the Azure standby and replays it there—in other words, you’re shipping only log records, not recopying the entire database each time.
Optional enhancements: • If you need automatic failover, consider integrating IBM’s Pacemaker resource agent on RHEL or a file-based fence script. • For additional resilience, put your standby VM in an Availability Zone or behind a Load Balancer probe.
Reference links for your engineer to include:
- High availability of IBM Db2 LUW on Azure VMs on RHEL (official Azure/SAP doc) https://learn.microsoft.com/azure/sap/workloads/high-availability-guide-rhel-ibm-db2-luw
- IBM DB2 HADR documentation (IBM Knowledge Center) https://www.ibm.com/docs/en/db2/11.5?topic=ha-hadr
- DB2 HADR parameter reference https://www.ibm.com/docs/en/db2/11.5?topic=parameters-hadr
- Setting up Pacemaker for DB2 on RHEL in Azure https://learn.microsoft.com/azure/sap/workloads/high-availability-guide-rhel-pacemaker
Hope this helps get your DR scenario up and running—let me know if you need more detail on any step!