documentation.suse.com / Supported High Availability Solutions by SLES for SAP Applications

Supported High Availability Solutions by SLES for SAP Applications

Publication Date: May 23, 2024

SUSE Linux Enterprise Server for SAP Applications is the leading Linux platform for SAP HANA, SAP S/4HANA, and SAP NetWeaver, and an Endorsed App by SAP. Two of the key components of SLES for SAP Applications are the SUSE Linux Enterprise High Availability (HAE) and resource agents. The HAE provides Pacemaker, an open source cluster framework. The resource agents manage automated failover of SAP HANA system replication, S/4HANA ASCS/ERS ENSA2, and NetWeaver ASCS/ERS ENSA1.

This document provides an overview of High Availability solutions that SUSE supports for SAP HANA, S/4HANA, and NetWeaver on SUSE Linux Enterprise Server 15. New solutions will be added when they become available.

Authors: Fabian Herschel, Lars Pinne, and Sherry Yu

Copyright © 2023–2024 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see https://www.suse.com/company/legal/. All third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

1 Support statement

1.1 Supportability definition

Category | Definition
Mandatory | De facto, must be implemented this way.
Supported | Supported, with a published configuration guide.
Supported but undocumented | Supported, but a configuration guide is not published. SUSE will accept bug reports and fix code, if needed.
Supported with consulting PoC | Support is possible if the consulting PoC proves to be working.
Non-supported | Not supported or not applicable.
Planned | On the roadmap to be tested and supported.
Legacy | Supported only for legacy systems on older releases.

1.2 Infrastructure support

The infrastructure can be on-premises physical or virtualized systems, or a public cloud. It must be supported by both SAP and SUSE Linux Enterprise High Availability, so that important functions like STONITH and virtual IP addresses work as expected.

Public cloud deployment usually needs additional documentation focusing on the cloud-specific implementation details. Check the documentation provided by the respective public cloud vendor.

The support details for SUSE High Availability solutions for SAP and the support process for each public cloud vendor are defined in the following SAP Notes:

Company | SAP Note
SUSE | SAP Note 1763512: Support details for SUSE Linux Enterprise for SAP Applications - HA Solution for SAP NetWeaver and SAP S/4HANA
Microsoft Azure | SAP Note 2513384: SUSE Linux Enterprise Server for SAP Applications on Azure
AWS | SAP Note 1656099: SAP Applications on AWS: Supported DB/OS and AWS EC2 products
Google Cloud | SAP Note 2456432: SAP Applications on Google Cloud: Supported Products and GCP VM types

For more details, see the SUSE knowledgebase article SLES for SAP - How To Engage SAP and SUSE to address Product Issues.

Note

If necessary, all documented SUSE High Availability scenarios can run the SAP workload while the High Availability cluster is temporarily disabled.

1.3 Supported SUSE software versions

The general SUSE software lifecycle applies for the described scenarios. See the SUSE lifecycle page for details: https://www.suse.com/lifecycle/#suse-linux-enterprise-server-for-sap-applications-15.

Usually all mentioned High Availability scenarios are supported on all currently supported service packs of SLES for SAP Applications 15. Exceptions are documented in detail in the setup guides and are listed below:

SAP HANA system replication scale-up – cost optimized scenario

Supported starting with 15 SP2.

Note

This document applies to SLES for SAP Applications 15. Version 12 SP5 is still supported but not covered by this document, as SUSE strongly recommends using version 15 for new installations. If you need details about version 12, check the individual guides available at https://documentation.suse.com/sbp/sap-12/.

2 HA solutions for SAP HANA system replication

2.1 Overview of SAP HANA system replication

SAP HANA system replication (HSR) provides the ability to copy and continuously synchronize an SAP HANA database to a secondary location in the same or another data center. SAP HANA system replication is implemented between two different SAP HANA systems with the same number of active nodes. After system replication is set up between the two SAP HANA systems, it replicates all the data from the primary SAP HANA system to the secondary SAP HANA system (initial copy). After this, any logged changes in the primary system are also sent to the secondary system.

System replication from the primary system in data center 1 to the secondary system in data center 2.

If the primary SAP HANA system fails, the system administrator must perform a manual takeover, using either SAP HANA Cockpit or the command line. Manual failover requires continuous monitoring and can lead to longer recovery times. To automate the failover process, the SUSE Linux Enterprise High Availability cluster can be used. Using the cluster for the takeover process helps customers meet service-level agreements for SAP HANA downtime by enabling faster recovery without manual intervention.
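For illustration, the manual takeover mentioned above is performed on the secondary site with hdbnsutil. The following is a sketch only; `ha1adm` is a placeholder for the `<sid>adm` user of an example SID:

```
# As the <sid>adm user on the secondary site (placeholder user name):
su - ha1adm

# Check the current replication state, then take over the primary role:
hdbnsutil -sr_state
hdbnsutil -sr_takeover
```

In the automated scenarios described below, the cluster resource agents issue the takeover instead, so no operator interaction is needed.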

Multiple SAP HANA High Availability scenarios are supported based on SAP HANA system replication. For variations, contact SUSE to discuss defining a PoC for a scenario that is not mentioned in the documentation.

You can use SAP HANA Fast Restart on RAM-tmpfs and SAP HANA on persistent memory if they are transparent to the High Availability cluster.

SAP HANA Native Storage Extension (NSE) is supported in High Availability solutions for automated SAP HANA system replication in both Scale-up and Scale-out. This feature does not change the SAP HANA topology or interfaces to the High Availability cluster. However, unlike SAP HANA NSE, the HANA Extension Nodes do change the topology and are therefore not currently supported by SUSE Linux Enterprise High Availability. Refer to the SAP documentation for details of SAP HANA NSE and its functional restrictions.

2.2 Notation formula

Notation | Definition
A, B, C | HANA scale-up instance or HANA scale-out site
=> | Sync, syncmem replication
-> | Async replication
' | Primary IP address
" | Secondary IP address
() | SUSE cluster

2.3 HA solutions for automated SAP HANA system replication in HANA scale-up

The support details below are a high-level overview only. Refer to the official documentation for the full conditions.

Table 1: Supported configurations for automated SAP HANA system replication in HANA scale-up
Supported configurations | Status | Support details
HANA performance optimized, plain setup | Supported (Documentation) | (A' => B) The secondary site is not read-enabled, so it does not accept client inquiries.
HANA performance optimized, secondary site read-enabled | Supported (Documentation) | (A' => B") The secondary site is read-enabled and can accept clients' read-only inquiries.
HANA cost optimized | Supported (Documentation) | (A' => B, Q") topology is supported. Q is a QA instance running on the secondary site.
HANA multi-tier system replication (replication chain), third site NOT managed by Pacemaker | Supported (Documentation) | (A' => B) -> C topology is supported with conditions: A to B system replication in Pacemaker is supported. B to C system replication is not managed by Pacemaker.
HANA multi-target system replication, third site NOT managed by Pacemaker | Supported (Documentation) | (B <= A') -> C topology is supported with conditions: A to B system replication in Pacemaker is supported. A to C system replication is not managed by Pacemaker.
Multi-tenancy or MDC | Supported | This scenario is supported since SAP HANA 1.0 SPS09. The setup and configuration from a cluster point of view is the same for multi-tenancy and single container, so the existing documentation can be used.
HANA multi-SID performance optimized in one cluster (MCOS) | Supported but undocumented |
HANA performance optimized and S/4HANA ENSA2 in one cluster | Supported but undocumented |
HANA performance optimized cluster and stand-alone application server | Supported but undocumented |

Below is a summary of the most common configurations:

Performance optimized, including read-enabled on the secondary site (A => B)
SAP HANA performance optimized configuration.

In the performance optimized scenario, an SAP HANA RDBMS site A is synchronizing with an SAP HANA RDBMS site B on a second node. As the SAP HANA RDBMS on the second node is configured to preload the tables, the takeover time is typically very short.

Read access can be allowed on the secondary site. To support this read-enabled scenario, a second virtual IP address is added to the cluster and bound to the secondary role of the system replication.
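As an illustration, the second virtual IP address is typically an IPaddr2 resource colocated with the secondary role of the SAPHana multi-state resource. The crm snippet below is a sketch only; the SID HA1, instance number 10, resource names, and IP address are placeholder assumptions:

```
# Second virtual IP, following the read-enabled secondary (placeholder names)
primitive rsc_ip_ro_HA1 ocf:heartbeat:IPaddr2 \
  params ip=192.168.1.101
colocation col_ip_ro_with_secondary 2000: rsc_ip_ro_HA1:Started msl_SAPHana_HA1_HDB10:Slave
```

The positive colocation score ties the read-only IP to whichever node currently holds the secondary role, so clients keep a stable address for read-only access.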

Cost optimized (A => B, Q)
SAP HANA cost optimized configuration.

In the cost optimized scenario, the second node is also used for a stand-alone non-replicated SAP HANA RDBMS system (such as QAS or TST). Whenever a takeover is needed, the non-replicated system must be stopped first. As the productive secondary system on this node must be limited in using system resources, the table preload must be switched off. A possible takeover needs more time than in the performance optimized use case. We recommend running a PoC to determine the SLA before using it in production.

The secondary productive system needs to be running in a reduced memory consumption configuration, so read-enabled must not be used in this scenario. The HADR provider script needs to remove the memory restrictions when a takeover occurs, so multi-SID (MCOS) must not be used in this scenario either.
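To illustrate, the memory restriction, the disabled table preload, and the susCostOpt hook live in the productive secondary's global.ini. The values below are placeholder assumptions for an example system, not recommendations:

```
[memorymanager]
global_allocation_limit = 32768

[system_replication]
preload_column_tables = false

[ha_dr_provider_suscostopt]
provider = susCostOpt
path = /usr/share/SAPHanaSR
execution_order = 2
```

On takeover, the hook removes the memory restriction so the promoted system can use the full resources of the node.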

Multi-target (B <= A' ) -> C
SAP HANA multi-target configuration.

Multi-target system replication is supported in SAP HANA 2.0 SPS04 or newer. Only the first replication pair (A and B) is managed by the cluster. The main difference between multi-target and multi-tier (chain) replication is that multi-target allows auto-registration for HANA after takeover.

Multi-tier (A' => B ) -> C
SAP HANA multi-tier configuration.

In SAP HANA 2.0 SPS03 or older, where multi-target system replication is not available, the third site replicates from the secondary in a chain topology. Only the first replication pair (A and B) is managed by the cluster. Because of the mandatory chain topology, the resource agent feature AUTOMATED_REGISTER=true is not possible with pure multi-tier replication.

(A' => B) -> C topology is supported with the following conditions:

  • A to B system replication in Pacemaker is supported.

  • B to C system replication is not managed by Pacemaker.

  • After takeover from A to B, manual intervention is needed for rejoining A.
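Because rejoining the former primary is a manual step in the chain topology, the SAPHana resource in a pure multi-tier setup typically sets AUTOMATED_REGISTER=false. A sketch with placeholder SID and instance number:

```
primitive rsc_SAPHana_HA1_HDB10 ocf:suse:SAPHana \
  params SID=HA1 InstanceNumber=10 \
    PREFER_SITE_TAKEOVER=true AUTOMATED_REGISTER=false
```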

2.4 HA solutions for automated SAP HANA system replication in HANA scale-out

Pacemaker manages the automated failover of SAP HANA system replication between two sites of HANA scale-out clusters. An auxiliary third site is needed for the decision-maker node.

System replication between clusters managed by Pacemaker.
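The cluster must also be prevented from placing SAP HANA resources on the third-site decision-maker node. One hedged way to express this in crm syntax, with placeholder resource and node names:

```
# Keep the HANA multi-state resource off the decision-maker node (placeholder names)
location loc_no_hana_on_mm msl_SAPHanaCon_HA1_HDB10 -inf: majoritymaker
```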

The support details below are a high-level overview only. Refer to the official documentation for the full conditions.

Table 2: Supported configurations for automated SAP HANA system replication in HANA scale-out
Supported configurations | Status | Support details
HANA performance optimized, up to 30 nodes including standby | Supported (Documentation) | Up to 30 HANA nodes including standby nodes.
HANA performance optimized, up to 12 nodes, no standby | Supported (Documentation) | Up to 12 HANA nodes, NO standby nodes.
HANA performance optimized, up to 4 nodes, secondary site read-enabled | Supported (Documentation) | Up to 4 HANA nodes, NO standby nodes.
HANA multi-target system replication, third site NOT managed by the cluster | Supported (Documentation) | (B <= A') -> C topology is supported with conditions: Site A to site B system replication in Pacemaker is supported. Site A to site C system replication is not managed by Pacemaker.
HANA multi-tier system replication, third site NOT managed by the cluster | Supported (Documentation) | (A' => B) -> C topology is supported with conditions: A to B system replication in Pacemaker is supported. B to C system replication is not managed by Pacemaker.
Performance optimized, multi-tenancy (MDC) | Supported | Multi-tenancy is available for all supported scenarios and use cases. It is supported since SAP HANA 1.0 SPS12 and is the default installation type for SAP HANA 2.0. From a cluster point of view, setup and configuration are the same for multi-tenancy and single containers, because all tenants are managed together by the Pacemaker cluster.

2.5 HANA database HADR provider hook scripts

SUSE provides several hook scripts to enhance the integration between SAP HANA and the SUSE High Availability cluster. The SUSE best practice configuration guides explain how to use and configure these hooks.
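For illustration, a HADR provider hook is registered in the SAP HANA global.ini on both sites. This sketch assumes the SAPHanaSR.py hook shipped under /usr/share/SAPHanaSR; the path and trace level are placeholder assumptions:

```
[ha_dr_provider_saphanasr]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR
execution_order = 1

[trace]
ha_dr_saphanasr = info
```

The other hooks listed below are registered the same way, each in its own ha_dr_provider section.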

Script | Usage
SAPHanaSR.py, SAPHanaSrMultiTarget.py | Mandatory for data integrity in case of cluster sr_takeover
susTkOver.py | Protects from data inconsistency caused by manual sr_takeover
susChkSrv.py | Avoids downtime caused by local indexserver restart
susCostOpt.py | Handles the secondary site in case of cluster sr_takeover
Table 3: Supported scenarios for HADR provider hook scripts
HADR provider hook script | Scale-up: cost optimized | Scale-up: performance optimized | Scale-out: performance optimized
SAPHanaSR.py (scale-up) | Mandatory | Mandatory | -
SAPHanaSR.py (scale-out) | - | - | Legacy (mandatory for SAP HANA 1.0)
SAPHanaSrMultiTarget.py | - | - | Mandatory
susTkOver.py | Supported but undocumented | Not mandatory but recommended | Not mandatory but recommended
susChkSrv.py | - | Not mandatory but recommended | Not mandatory but recommended
susCostOpt.py | Mandatory | - | -

3 HA solutions for S/4HANA based on ABAP Platform 1809 or newer

Standalone Enqueue Server 2 (ENSA2) is the successor to the Standalone Enqueue Server (ENSA1). Starting with ABAP Platform 1809, Standalone Enqueue Server 2 is the default installation. The new Standalone Enqueue Server 2 and Enqueue Replicator 2 provide an improved high availability architecture with robust, fast replication and failover.

ENSA2 two-node cluster configuration.
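In a cluster like the one shown above, ASCS and ERS are each modeled as a SAPInstance resource. The following crm sketch uses a placeholder SID HA1, instance numbers, and profile paths; consult the referenced setup guides for a complete configuration:

```
primitive rsc_sap_HA1_ASCS00 SAPInstance \
  params InstanceName=HA1_ASCS00_sapascs \
    START_PROFILE=/sapmnt/HA1/profile/HA1_ASCS00_sapascs \
  meta resource-stickiness=5000

primitive rsc_sap_HA1_ERS10 SAPInstance \
  params InstanceName=HA1_ERS10_sapers \
    START_PROFILE=/sapmnt/HA1/profile/HA1_ERS10_sapers IS_ERS=true

# Prefer to keep ASCS and ERS on different nodes
colocation col_HA1_ascs_ers -5000: rsc_sap_HA1_ERS10 rsc_sap_HA1_ASCS00
```

The negative colocation score keeps ASCS and ERS apart during normal operation, while still allowing ASCS to start on the ERS node after a failover.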

The support details below are a high-level overview only. Refer to the official documentation for the full conditions.

Table 4: Supported configurations for S/4HANA based on ABAP Platform 1809 or newer
Supported configuration | Status | Support details
2-node cluster | Supported (Documentation) | In a 2-node cluster, ASCS fails over to the same node where ERS is running.
3-node cluster | Supported (Documentation) | In a 3-node cluster, ASCS fails over to the online node where ERS is not running.
Simple mount file system structure | Supported (Documentation) | Shared file system mounts are NOT managed by the cluster. This configuration is recommended over the Filesystem resource-based solution.
Filesystem resource-based | Supported (Documentation) | Shared file system mounts are managed by the cluster via the Filesystem resource agent. SUSE still supports this configuration, but it is not recommended for new installations. Use the simple mount file system structure instead.
Multi-SID | Supported (Documentation) | Multiple SAP ASCS/SCS clustered instances are supported in the same cluster.
Additional dialog or other instances in cluster | Supported but undocumented | Although it is possible to run Application Servers in the same cluster as ASCS/ERS, this is not recommended, to keep cluster management simple.
SAP Web Dispatcher in cluster | Supported | Documented for the Filesystem resource-based setup; not yet documented for the simple mount structure.

This solution combines the following resources into one cluster resource group:

  • An SAP instance including the sapwebdisp service

  • A file system where the SAP instance is running

  • An IP address used by the clients of the service
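The resource group described above could be sketched in crm syntax as follows; all resource names, the device path, and the IP address are placeholder assumptions:

```
primitive rsc_fs_HA1_W00 ocf:heartbeat:Filesystem \
  params device=/dev/vg1/lv_webdisp directory=/usr/sap/HA1/W00 fstype=xfs
primitive rsc_sap_HA1_W00 SAPInstance \
  params InstanceName=HA1_W00_sapwd \
    START_PROFILE=/sapmnt/HA1/profile/HA1_W00_sapwd
primitive rsc_ip_HA1_W00 ocf:heartbeat:IPaddr2 params ip=192.168.1.110
group grp_HA1_W00 rsc_fs_HA1_W00 rsc_sap_HA1_W00 rsc_ip_HA1_W00
```

Grouping the resources makes the cluster start them in order (file system, instance, IP) and move them together on failover.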

4 HA solutions for SAP NetWeaver based on ABAP Platform 1709 or older

Under the Standalone Enqueue Server (ENSA1), the ASCS has to fail over to the cluster node where the active ERS is running, because it needs to access the shared memory that stores the enqueue replication table.

ENSA1 two-node cluster configuration.

The support details below are a high-level overview only. Refer to the official documentation for the full conditions.

Table 5: Supported configurations for SAP NetWeaver based on ABAP Platform 1709 or older
Supported configuration | Status | Support details
2-node cluster | Supported (Documentation) | In a 2-node cluster, ASCS fails over to the same node where ERS is running.
3-node cluster | Supported but undocumented | A 3-node cluster is supported, but the extra node is not used for ASCS failover.
Simple mount file system structure | Supported (Documentation) | Shared file system mounts are NOT managed by the cluster. This configuration is recommended over the Filesystem resource-based solution.
Filesystem resource-based | Supported (Documentation) | Shared file system mounts are managed by the cluster via the Filesystem resource agent. SUSE still supports this configuration, but it is not recommended for new installations. Use the simple mount file system structure instead.
Multi-SID | Supported (Documentation) | Multiple SAP ASCS/SCS clustered instances are supported in the same cluster.
Additional dialog or other instances in cluster | Supported but undocumented | Although it is possible to run Application Servers in the same cluster as ASCS/ERS, this is not recommended, to keep cluster management simple.
SAP Web Dispatcher in cluster | Supported | Documented for the Filesystem resource-based setup; not yet documented for the simple mount structure.

This solution combines the following resources into one cluster resource group:

  • An SAP instance including the sapwebdisp service

  • A file system where the SAP instance is running

  • An IP address used by the clients of the service

5 Documentation and configuration guides

Refer to the official websites for up-to-date documentation and configuration guides: