
DB2 server (RHEL 7.9, DB2 11.5.9.0) from on-prem to Azure DR setup

Penumarthi Dora Krishna 0 Reputation points
2026-03-02T10:02:51.7733333+00:00

Hi Team,

Please provide the best solution for the following requirement: DB2 server (RHEL 7.9, DB2 11.5.9.0) from on-prem to Azure DR setup.

I have an on-prem DB2/HADR setup with a log files folder. I want to replicate only the latest log files to Azure, without the DB data.


2 answers

Sort by: Most helpful
  1. Manoj Kumar Boyini 12,735 Reputation points Microsoft External Staff Moderator
    2026-03-03T18:57:57.03+00:00

    Hi Penumarthi Dora Krishna

    It sounds like you want to stand up a DR copy of your on-prem DB2 LUW (RHEL 7.9, DB2 11.5.9.0) in Azure, but only ship the new archive logs—not push the whole database over every time. The native way to do exactly that is to configure DB2’s HADR between your on-prem primary and an Azure-based standby VM. Once the standby is seeded with an initial full backup, HADR takes care of shipping & applying only the new log files.

    Here’s a high-level approach you can share with your customer:

    Provision your Azure environment
    • Create a RHEL 7.9 VM in the same region or a paired region.
    • Make sure networking (VPN or ExpressRoute) and firewall rules allow DB2 traffic (default 50000/TCP or your custom ports).
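
    If the standby VM sits behind a network security group, the required ports can be opened with the Azure CLI. This is a sketch only; the resource group name, NSG name, source address range, and HADR port below are placeholder assumptions:

    ```shell
    # Allow DB2 client traffic (default 50000/TCP) and an example HADR
    # service port (51012) from the on-prem address range (10.0.0.0/16).
    az network nsg rule create \
      --resource-group myResourceGroup \
      --nsg-name db2-standby-nsg \
      --name allow-db2-hadr \
      --priority 200 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --source-address-prefixes 10.0.0.0/16 \
      --destination-port-ranges 50000 51012
    ```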

    Install DB2 11.5.9.0 on the Azure VM
    • Use the same fix pack level as your on-prem instance.
    • Create a DB2 instance and ensure the instance names/user IDs match your on-prem setup (or map them appropriately).
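
    To double-check that the Azure VM matches the on-prem fix pack before seeding, compare the installed level on both hosts (run as the instance owner):

    ```shell
    # Report the exact Db2 version/fix pack of the current instance;
    # it should read 11.5.9.0 on both the on-prem and Azure hosts.
    db2level

    # List all installed Db2 copies and their installation paths.
    db2ls
    ```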

    Seed the standby with a full backup
    • On your on-prem primary: db2 backup db <DBNAME> to /path/to/backup include logs
    • Copy that backup image to the Azure VM (via scp, CIFS/SMB, Azure Files, etc.).
    • On the Azure VM: db2 restore db <DBNAME> from /path/to/backup
    • Leave the restored database in rollforward-pending state (do not roll forward and complete); HADR will retrieve and apply the logs once it is started.

    Configure HADR parameters on both sides (substitute your database name, hostnames, instance names, and service ports)
    On the primary:
    • db2 update db cfg for <DBNAME> using HADR_LOCAL_HOST <onprem-host> HADR_LOCAL_SVC <hadr-port>
    • db2 update db cfg for <DBNAME> using HADR_REMOTE_HOST <azure-host> HADR_REMOTE_SVC <hadr-port> HADR_REMOTE_INST <azure-instance>
    • db2 update db cfg for <DBNAME> using HADR_SYNCMODE ASYNC (ASYNC or SUPERASYNC is usually the right choice over a WAN link; note that the sync mode controls commit acknowledgement — in every mode HADR ships only log data, never data files)
    On the standby: set the same parameters with the local and remote values swapped.

    Start HADR
    • On the standby first: db2 start hadr on db <DBNAME> as standby
    • Then on the primary: db2 start hadr on db <DBNAME> as primary
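
    Once HADR is started on both sides, its health can be verified with db2pd; <DBNAME> is a placeholder for your database name:

    ```shell
    # Run on either node as the instance owner. Look for HADR_ROLE,
    # HADR_STATE (PEER once the standby has caught up), HADR_SYNCMODE,
    # and converging PRIMARY/STANDBY log positions.
    db2pd -db <DBNAME> -hadr
    ```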

    From that point on, DB2 ships only new transaction log data (your “logs folder”) to the Azure standby and applies it; in other words, you are updating only the log files, not recopying the entire database each time.

    Optional enhancements:
    • If you need automatic failover, consider integrating IBM’s Pacemaker resource agent on RHEL or a file-based fence script.
    • For additional resilience, put your standby VM in an Availability Zone or behind a Load Balancer probe.

    Reference links for your engineer to include:

    1. High availability of IBM Db2 LUW on Azure VMs on RHEL (official Azure/SAP doc) https://learn.microsoft.com/azure/sap/workloads/high-availability-guide-rhel-ibm-db2-luw
    2. IBM DB2 HADR documentation (IBM Knowledge Center) https://www.ibm.com/docs/en/db2/11.5?topic=ha-hadr
    3. DB2 HADR parameter reference https://www.ibm.com/docs/en/db2/11.5?topic=parameters-hadr
    4. Setting up Pacemaker for DB2 on RHEL in Azure https://learn.microsoft.com/azure/sap/workloads/high-availability-guide-rhel-pacemaker

    Hope this helps get your DR scenario up and running—let me know if you need more detail on any step!


  2. Q&A Assist
    2026-03-02T10:03:43.3633333+00:00
    AI-generated content. This answer was created with AI from Q&A Assist.

    For a Db2 11.5 HADR environment where only log files need to be replicated from on-premises to Azure for DR (without copying full database data), the recommended pattern is to base the design on Db2 HADR log shipping plus a shared or centrally accessible archive location.

    Key points and recommended approach:

    1. Use Db2 HADR as the core DR mechanism
      • Db2 HADR already replicates changes from the primary to the standby using transaction logs.
      • The primary database is the only one that performs log archiving; when roles change, the new primary becomes responsible for log archiving.
      • Ensure the Azure DR node is configured as an HADR standby so it can continuously receive and apply logs.
    2. Configure log archiving with a shared, highly available target
      • Configure both primary (on-prem) and standby (Azure DR) databases to have automatic log retrieval capability from all log archive locations.
      • Both primary and standby must be able to retrieve log archive files from any archive location that either instance might use.
      • Recommended: use a common, highly available NFS share (or GlusterFS / Azure NetApp Files, depending on OS and design) as the archive target for logs. This allows:
        • Primary to archive logs to the shared location.
        • Standby in Azure to automatically retrieve only the required log files without copying full data files.
    3. Avoid duplicating archives and manage multiple archive locations
      • If multiple log archive locations are configured, logs might be archived twice.
      • When HADR roles change (planned or failover), the new primary will start archiving logs. If the old primary used a different archive path, archived logs might need to be manually copied from the old primary’s archive location to the active log location of the new primary.
      • To minimize manual work, standardize on a shared archive location accessible from both on-prem and Azure.
    4. Storage options for the shared log archive
      • For RHEL-based deployments, recommended options include:
        • A highly available NFS share.
        • GlusterFS cluster on Azure VMs.
        • Azure NetApp Files NFS volumes.
      • These options provide a central place where only log files are written and from which the Azure DR standby can read, without moving the database data files.
    5. High-level implementation steps
      1. Ensure existing on-prem Db2 HADR is healthy and synchronized.
      2. Deploy the Azure DR VM(s) and configure Db2 as an HADR standby instance.
      3. Provision a highly available NFS/GlusterFS/Azure NetApp Files share that is reachable from both on-prem and Azure (via VPN/ExpressRoute or similar connectivity).
      4. Configure Db2 log archiving on the primary to write archive logs to this shared location.
      5. Configure the standby in Azure to automatically retrieve archived logs from the same shared location.
      6. Test:
        • Normal log shipping and apply on the Azure standby.
        • Role change/failover and confirm that the new primary continues archiving to a location accessible by the other node.
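
    As a sketch of steps 4 and 5 above, assuming the shared location is an NFS export already mounted at /mnt/db2archive on both hosts and the database is called SAMPLE (both placeholders):

    ```shell
    # On both primary and standby: archive completed transaction logs to,
    # and retrieve them from, the shared NFS path. Only log files are
    # written here; database data files never leave their local site.
    db2 update db cfg for SAMPLE using LOGARCHMETH1 DISK:/mnt/db2archive/
    ```

    Because both nodes point at the same archive path, whichever node is primary after a role change keeps archiving to a location the other node can read, avoiding the manual log-copy step described in point 3.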

    This design satisfies the requirement to “update latest log files only without DB data” by using HADR plus a shared archive location for transaction logs, while keeping database data files local to each site.

