
Deploying SUSE SAP HA Automation in Microsoft Azure

Why automation

Maintaining a competitive advantage often depends on how quickly you can deliver new services. SAP applications are designed to help companies analyze data to anticipate new requirements and rapidly deliver new products and services. This gives you the ability to keep existing customers happy while attracting new business.

In line with SUSE’s vision to simplify, modernize, and accelerate with technology, the first release of SUSE Linux Enterprise Server for SAP Applications already included automated installation features for the SAP software stack. Over the last 10 years, our SAP LinuxLab and development engineers have introduced several additional features to automate routine system administration.

The goals of this work are to:

  • Simplify the deployment of an SAP Landscape in Azure for dev, test, and production.
  • Modernize customer environments by taking advantage of the power of the public cloud.
  • Accelerate customer migrations to the cloud.

Starting from the idea of simplifying and modernizing SAP HANA and SAP Netweaver deployments, SUSE worked on rewriting the deployment wizards we had built.

Building the infrastructure to run SAP applications can get quite complex and demands significant effort when done manually. In addition, reproducing the process can be tedious and error-prone. Making the infrastructure highly available is an additional challenge, as it adds more complexity and tasks.

SUSE’s major motivation was to improve, simplify, and unify the installation of an SAP Landscape on SUSE Linux Enterprise Server for SAP Applications, to clearly standardize deployments, and to allow customers to use one level of tooling in various ways: from a command line interface, through a GUI-driven process or SUSE Manager, to other automation frameworks. So it was clear for us to move to a more modern approach, such as infrastructure-as-code, in order to reduce effort and errors.

SUSE Linux Enterprise Server and many other SUSE products have shipped with a universal configuration management solution for the last few years, so we used it as the base for the new automation. This configuration and infrastructure management system is Salt from SaltStack, and it provides highly scalable, powerful, and fast infrastructure automation and management, built on a dynamic communication bus. Salt can be used for data-driven orchestration, remote execution on any infrastructure, configuration management for any app stack, and much more.

Combining this management system with an infrastructure deployment solution like Terraform makes it possible to do a hands-free setup of an SAP Landscape, ready for you to log in and start customizing your SAP system.

Such a number of systems brings an additional challenge: getting an overview of what is going on after the installation is done. So we added the option to gain insight into your SAP Landscape with comprehensive dashboards, real-time and historical views, and active alerts and reporting, based on the flexible and powerful open-source projects Prometheus and Grafana. The deployment automation can be configured to also set up a monitoring environment for the clusters, HANA, and Netweaver.

What we cover with automation

SAP HANA and Netweaver applications can be deployed in many different scenarios and combinations. So we created modular, reusable building blocks that cover everything from a single-node installation to a full cluster deployment.

As of today, we take care of:

  • HANA single node
  • HANA HA scale-up System Replication, including performance-optimized (active/passive and active/read-only) and cost-optimized scenarios
  • Netweaver
  • Netweaver HA with Enqueue Replication (ENSA1)
  • S/4 HANA

The solutions will be extended continuously depending on the demands of customers and partners, and development is already underway for Netweaver HA with ENSA2 (Enqueue Standalone Architecture).

If you want to know more about the ENSA2 details, please have a look at this SUSE blog post.

The overall landscape that gets deployed looks like:

SAP Architecture overview

Details of what’s inside

SaltStack’s configuration management system lets you define the applications, files, and other settings that should be in place on a specific system. The system is continuously evaluated against the defined configuration, and changes are made as needed.

  • Salt works with so-called “States” that express the state a host should be in, using small, easy-to-read, easy-to-understand configuration files.
  • The automation is written as “formulas”, which are collections of pre-written Salt States and Salt “Pillar” files.
  • The Pillar files contain the variables and data used to build the system.

The good thing is that SUSE Linux Enterprise Server for SAP Applications 15 SP2 now ships all these formulas as part of the product, so you can set them up as you need.
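If you want to experiment with one of these formulas on its own, outside the full Terraform deployment, a minimal sketch in masterless mode could look like this (the state name “hana” matches the formula described below; the pillar data is whatever you define for your system):

    # apply the HANA formula state locally on the node, without a Salt master
    sudo salt-call --local state.apply hana
    # show the pillar data (variables) the formula will consume
    sudo salt-call --local pillar.items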

HANA

The HANA formula takes care of the following:

  • Extracts the required SAP files from the SAP media (.tar, .sar, .exe)
  • Installs SAP HANA
  • Applies “saptune” for HANA to configure and tune the OS for HANA usage
  • Configures system replication
  • Preconfigures the High Availability cluster requirements
  • Configures the SAP HANA Prometheus exporter
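As a quick check after the formula has run, you can verify the tuning that saptune applied, for example (a sketch, assuming saptune is installed on the HANA node):

    # list the saptune solution currently applied on the node
    sudo saptune solution list
    # verify that the OS is tuned according to the HANA solution
    sudo saptune solution verify HANA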

Netweaver

The Netweaver formula for bootstrapping and managing the SAP Netweaver platform takes care of:

  • Extracting the required SAP files from the SAP media (.tar, .sar, .exe)
  • Setting up
    • ASCS instance
    • ERS instance
    • PAS instance
    • AAS instance
    • Database instance (currently only HANA)

Beyond that, the formula sets up all of the prerequisites, such as:

  • Hostnames
  • Virtual addresses
  • NFS mounts
  • Shared disks
  • SWAP partition space

The formula follows the best practices defined in the official SUSE documentation.

High availability

The HA bootstrap formula takes care of creating and managing a high availability cluster.

  • Creates and configures the High Availability cluster (Pacemaker, Corosync, SBD, and resource agents)
  • Makes adjustments for the Azure infrastructure
  • Handles Netweaver, HANA, and DRBD

Depending on the cloud requirements, an iSCSI server may be needed to provide a shared disk for fencing; for this we use the iscsi-formula from SaltStack.
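Once the cluster has been bootstrapped, you can check its state from any cluster node, for example (a sketch; the exact resources shown depend on your deployment):

    # show the status of the cluster, its nodes, and its resources
    sudo crm status
    # continuously monitor the cluster, including inactive resources
    sudo crm_mon -r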

Other dependent services

HA NFS Service

If there is no HA NFS service available, we can create one with the help of three Linux services and the following formulas:

  • DRBD formula
  • HA formula
  • NFS formula from SaltStack

iSCSI Service

The iSCSI-formula from SaltStack is able to deploy iSNS, iSCSI initiator, and iSCSI target packages, manage configuration files, and then start the associated iSCSI services.

NFS formula

A SaltStack formula to install and configure the NFS server and client.

Monitoring

Starting from the idea of improving user experience, SUSE worked on how to monitor the several High Availability clusters that manage SAP HANA and SAP Netweaver in a modern way. For monitoring, we use the Prometheus toolkit and the Grafana project to visualize the data.

To be able to monitor the clusters for both HANA and Netweaver, we have written Prometheus exporters for them.
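Like any Prometheus exporter, each of them exposes its metrics over HTTP, so you can check that an exporter is up by querying its /metrics endpoint manually (a sketch; replace host and port with the values used in your deployment):

    # fetch the first few metrics lines from an exporter
    curl -s http://<node>:<exporter_port>/metrics | head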

SAP HANA Database Exporter

The exporter can collect metrics from more than one database or tenant. It provides:

  • Memory metrics
  • CPU metrics
  • Disk usage metrics
  • I/O metrics
  • Network metrics
  • Top queries consuming time and memory

High Availability Cluster Exporter

Enables monitoring of Pacemaker, Corosync, SBD, DRBD, and other components of High Availability clusters. This provides the ability to easily monitor cluster status and health.

  • Pacemaker cluster summary, nodes, and resource status
  • Corosync ring errors and quorum votes. Currently, only Corosync version 2 is supported
  • Health status of SBD devices
  • DRBD resources and connections status. Currently, only DRBD version 9 is supported

SAP Host Exporter

Enables the monitoring of SAP Netweaver, SAP HANA, and other applications. The gathered metrics are the data that can be obtained by running the sapcontrol command (see the example after the list below).

  • SAP start service process list
  • SAP enqueue server metrics
  • SAP application server dispatcher metrics
  • SAP internal alerts
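For reference, this is roughly the data the exporter gathers, queried manually with sapcontrol (a sketch, assuming instance number 00 and running as the <sid>adm user; adjust to your system):

    # list the processes of the SAP instance with instance number 00
    sapcontrol -nr 00 -function GetProcessList
    # show enqueue server statistics for the same instance
    sapcontrol -nr 00 -function EnqGetStatistic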

SUSE Dashboard snapshot

Note that the dashboards aren’t currently shipped within the product, but provided by SUSE as open source.

How to get it running

The simplest way is to use the Terraform project on GitHub. As development is always a moving target, SUSE provides releases for a stable setup; as of writing this, v6 is current. You should have knowledge of Terraform, Linux, and SAP to use it.
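If you prefer git over downloading the release archive described in the steps below, you can also clone the repository and check out the release tag directly (a sketch; the tag name is just an example):

    # clone the deployment project and switch to a released version
    git clone https://github.com/SUSE/ha-sap-terraform-deployments.git
    cd ha-sap-terraform-deployments
    git checkout 6.0.0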

First, make sure that all prerequisites are met:

  1. Have an Azure account
  2. Have installed the Azure commandline tool az
  3. Have installed Terraform (v12) (it comes with SLES within the Public Cloud Module)
  4. Have the SAP HANA install media
  5. Have created an Azure File Share (see the example after this list)
  6. Copy the SAP HANA install media to the Azure fileshare
  7. Extract the install media
  8. Open a browser and go to https://github.com/SUSE/ha-sap-terraform-deployments
  9. Click on tags
  10. Click on version (e.g., 6.0.0)
    • See what’s new and what has changed. If you use older versions, be sure to read it carefully.
    • The Usage section provides you with a link to an Open Build Service (OBS) repository where the RPM packages of the building blocks discussed above are stored, matching the project version.
    • You need to use this value within the Terraform variables file, so copy the line as described.
  11. Now go to Assets and download the Source code as .zip or .tar.gz
  12. Extract it into a folder on your computer
  13. Go to this folder and into the subfolder azure
  14. Copy the file tfvars.example to terraform.tfvars. You will see many key-value variable pairs, some enabled and some disabled with a # in front. To have a simple start, only touch what we describe below.
  15. Change the region where you want to deploy the solution: set az_region = "westeurope" to the Azure region you want to use
  16. To make it easier to start, please change all four image types to pay-as-you-go (PAYG). To do so, replace all offer settings with "sles-sap-15-sp2" and all sku settings with "gen2". Do this for hana, iscsi, monitoring, and drbd, e.g. change
    hana_public_offer = "SLES-SAP-BYOS"
    hana_public_sku = "12-sp4"

    with
    hana_public_offer = "sles-sap-15-sp2"
    hana_public_sku = "gen2"

    This will make use of the on-demand images, which automatically have all needed SUSE repositories attached.
  17. Next, set the admin_user variable to the name you want to use.
  18. The next step is to provide ssh keys to access the machines that will be deployed. We recommend creating a new key pair for this, as you need to provide both keys and they will be copied to the machines. So change the two location variables to point to your key files in your ssh directory.
  19. As we need the SAP install media for the automatic deployment of HANA, you need to create an Azure storage account to which you will copy the HANA media. It is best if you have already extracted the SAP media, to save time during the deployment. Then we need to provide the name, key, and path of this storage account to the system. Change:
    storage_account_name
    storage_account_key
    hana_inst_master
    The hana_inst_master variable should point to the directory where you have the extracted HANA install files. There are more possibilities, but for the simplest usage have everything already extracted on your share.
    Disable the other SAP HANA variables by adding a '#' in front of them:
    #hana_archive_file = "IMDB_SERVER.SAR"
    #hana_sapcar_exe = "SAPCAR"
    #hana_extract_dir = "/sapmedia/HDBSERVER"
  20. We need additional ssh keys for the cluster communications, so please save your changes and run the following commands from within the azure directory:
    mkdir -p ../salt/hana_node/files/sshkeys
    ssh-keygen -t rsa -N '' -f ../salt/hana_node/files/sshkeys/cluster.id_rsa
  21. Please open the tfvars file again as we need a few final changes. To create a HANA System Replication HA automation, uncomment:
    hana_ha_enabled = true
    As the system now creates a cluster, we need to enable a few other services. Uncomment:
    hana_cluster_sbd_enabled = true
  22. Now we need to point to the place where the right packages for v6 can be found. Copy the repository line you noted in step 10, e.g.:
    ha_sap_deployment_repo = "https://download.opensuse.org/repositories/network:ha-clustering:sap-deployments:v6"
  23. If you want the additional monitoring to be deployed, simply uncomment:
    monitoring_enabled = true
  24. As the last step, we enable a simplification parameter, which tries to find out a few settings automatically. Scroll down to the end and uncomment:
    pre_deployment = true
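For the Azure file share and storage account mentioned in steps 5 to 7 and 19, a minimal sketch with the Azure CLI could look like this (resource group, account, and share names are just examples; pick your own):

    # create a storage account and a file share for the SAP install media
    az storage account create --name mysapmedia --resource-group my-rg --location westeurope --sku Standard_LRS
    az storage share create --name sapmedia --account-name mysapmedia
    # upload the already extracted HANA install media
    az storage file upload-batch --destination sapmedia --source ./extracted_hana_media --account-name mysapmedia
    # list the account keys; one of them goes into storage_account_key in terraform.tfvars
    az storage account keys list --account-name mysapmedia --resource-group my-rg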

Now all settings for Terraform are done and we are nearly ready to run the deployment, so save your changes.

  • Go one directory up and change into the pillar_example directory, and from there into the automatic directory, where you can see three further directories. They provide the configuration variables for the relevant services.
  • As we use only HANA, please switch to the hana directory and open the file hana_sls. The file looks quite complex, but we only need to change a few settings. Normally you would provide a much simpler file with your dedicated settings, but as we want to do everything automatically, we use this file.
  • We need to change PRIMARY_SITE_NAME to a name you want to set, and likewise SECONDARY_SITE_NAME. You can change other settings, like passwords, but for a simple test you can leave them as they are. Save your changes and go back to the main directory.
  • Now we are ready to run Terraform:
    az login
    terraform init
    terraform workspace new myname
    terraform plan
    terraform apply

If you have set everything up correctly, you will have an installed and running SAP HANA System Replication cluster in Azure in around 40 minutes.

As a jumphost with a public IP address is created, you can simply log in to all machines from your workstation with:

ssh -J <adminuser>@<jumphost> <adminuser>@<targethost>
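Once logged in to one of the HANA nodes, you can verify the System Replication setup managed by the cluster, for example with (a sketch; the command comes with the SAPHanaSR package used in the cluster):

    # show the SAPHanaSR attributes of both sites, including the sync state
    sudo SAPHanaSR-showAttr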

Azure infrastructure for SAP

If everything is enabled, the resulting Azure architecture will look like the diagram above. I hope you enjoyed this tutorial. If you have any questions or feedback, please let me know in the comments below.