Every SAP HANA node requires storage devices and capacity for:
- Operating system boot image
- SAP HANA installation (/hana/shared)
- SAP HANA persistence (data and log)
- Backup
For more information, see SAP HANA Storage Requirements.
Note: The formulas for capacity sizing in SAP HANA Storage Requirements are subject to change by SAP. Always check the latest version of that document before you determine capacity requirements.
Operating system boot image
When the SAP HANA nodes boot from a volume on Unity XT (boot from SAN), the required capacity for the operating system must be included in the overall capacity calculation for the SAP HANA installation. Every SAP HANA node requires approximately 100 GB capacity for the operating system. This capacity includes space for the /usr/sap/ directory.
When booting from a SAN, follow the best practices that are described in the Dell EMC Host Connectivity Guide for Linux.
SAP HANA installation (/hana/shared/)
Before you can install the SAP HANA binaries and the configuration files, traces, and logs, every SAP HANA node must have access to a file system that is mounted under the local /hana/shared/ mount point. In an SAP HANA scale-out cluster, a single shared file system is required and must be mounted on every node. Most SAP HANA installations use an NFS file system for this purpose. Unity XT all-flash and hybrid arrays can provide this file system with the NAS option.
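As an illustration only, a node might mount such an NFS file system with an fstab entry like the following. The server name, export path, and mount options are placeholder assumptions, not Dell EMC or SAP recommendations; always use the mount options that SAP documents for your environment.

```
# Hypothetical /etc/fstab entry for the shared SAP HANA file system.
# "nas-server" and "/hana_shared" are placeholder names for the Unity XT
# NAS server and its NFS export; verify mount options against SAP guidance.
nas-server:/hana_shared  /hana/shared  nfs  rw,hard,timeo=600,vers=4.1  0 0
```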
You can calculate the size of the /hana/shared/ file system by using the latest formula in SAP HANA Storage Requirements. Version 2.10 (February 2017) of the requirements document uses the following formulas for calculation:
- Single node (scale-up):
Size_installation(single-node) = MIN(1 x RAM; 1 TB)
- Multinode (scale-out):
Size_installation(scale-out) = 1 x RAM_of_worker per 4 worker nodes
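To make the arithmetic concrete, the following Python sketch applies both formulas. The 4-worker grouping comes from the v2.10 formulas above; the function names and the rounding up of partial groups are illustrative assumptions.

```python
import math

def shared_size_scale_up(ram_tb: float) -> float:
    """Size of /hana/shared for a single node: MIN(1 x RAM; 1 TB)."""
    return min(ram_tb, 1.0)

def shared_size_scale_out(ram_of_worker_tb: float, worker_nodes: int) -> float:
    """Size of /hana/shared for a scale-out cluster:
    1 x RAM_of_worker per group of 4 worker nodes (partial groups rounded up)."""
    return math.ceil(worker_nodes / 4) * ram_of_worker_tb

# Examples: a 2 TB scale-up node, and a cluster of six 2 TB worker nodes.
print(shared_size_scale_up(2.0))      # 1.0 TB (capped at 1 TB)
print(shared_size_scale_out(2.0, 6))  # 4.0 TB (two groups of 4 workers)
```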
SAP HANA persistence (data and log)
The SAP HANA in-memory database requires disk storage to:
- Maintain the persistence of the in-memory data on disk to prevent data loss due to a power outage. The disk storage must also support host auto-failover, where a standby SAP HANA host takes over the in-memory data of a failed worker host in a scale-out installation.
- Log information about data changes (redo log).
Every SAP HANA node (scale-up) or worker node (scale-out) requires two disk volumes/file systems to save the in-memory database on disk (data) and to keep a redo log (log). The size of these volumes/file systems depends on the anticipated total memory requirement of the database and the RAM size of the node. To assist with preparing the disk sizing, SAP provides several tools and documents, as described in SAP HANA Storage Requirements. Version 2.10 (February 2017) of the requirements document provides the following formulas to calculate the size of the data volume:
- Option 1: If an application-specific sizing program can be used:
Size_data = 1.2 x anticipated net disk space for data
where “anticipated net disk space” is the anticipated total memory requirement of the database; the factor of 1.2 adds 20 percent free space. If the database is distributed across multiple nodes in a scale-out cluster, the “anticipated net disk space” must be divided by the number of SAP HANA worker nodes in the cluster. For example, if the anticipated net disk space is 2 TB and the scale-out cluster consists of four worker nodes, then every node must be assigned a data volume of approximately 615 GB (2 TB / 4 = 512 GB; 512 GB x 1.2 = 614.4 GB), as shown in the sketch after this list.
If the anticipated net disk space is unknown at the time of the storage sizing, Dell EMC recommends using the RAM size of the node plus 20 percent free space for a capacity calculation of the data file system.
- Option 2: If no application-specific sizing program is available, the recommended size of the data volume of an SAP HANA system is equal to the total memory required for the system:
Size_data = 1 x RAM
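A short Python sketch of the Option 1 calculation, reproducing the worked example above; the function name and default argument are illustrative.

```python
def data_volume_per_node_gb(net_disk_space_gb: float, worker_nodes: int = 1) -> float:
    """Option 1: Size_data = 1.2 x anticipated net disk space,
    divided across the worker nodes of a scale-out cluster."""
    return 1.2 * (net_disk_space_gb / worker_nodes)

# Worked example from the text: 2 TB (2048 GB) across 4 worker nodes.
print(data_volume_per_node_gb(2048, 4))  # 614.4 GB per node (~615 GB)
```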
The size of the log volume depends on the RAM size of the node. SAP HANA Storage Requirements provides the following formulas to calculate the minimum size of the log volume:
Figure 9. Calculating the log volume
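The figure itself is not reproduced here. As a sketch of the commonly cited v2.10 formulas (verify against the latest SAP HANA Storage Requirements document before sizing), the minimum log volume is half the RAM for nodes with up to 512 GB of RAM, and a fixed 512 GB for larger nodes:

```python
def log_volume_min_gb(ram_gb: float) -> float:
    """Minimum log volume size, assuming the v2.10 formulas:
      RAM <= 512 GB: Size_redolog(min) = 0.5 x RAM
      RAM >  512 GB: Size_redolog(min) = 512 GB
    Verify against the current SAP HANA Storage Requirements document."""
    return 0.5 * ram_gb if ram_gb <= 512 else 512.0

print(log_volume_min_gb(256))   # 128.0 GB
print(log_volume_min_gb(2048))  # 512.0 GB
```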
Backup
SAP HANA supports backup to a file system or the use of SAP-certified third-party tools. Dell EMC supports data protection strategies for SAP HANA backup using Dell EMC Data Domain systems and Dell EMC NetWorker software. An SAP HANA backup to an NFS file system on a Unity XT all-flash or hybrid array is possible. However, Dell EMC does not recommend backing up the SAP HANA database to the storage array where the primary persistence resides. If you plan to back up SAP HANA to an NFS file system on a different Unity XT array, see SAP HANA Storage Requirements for information about sizing the backup file system. The capacity depends not only on the data size and the frequency of change operations in the database but also on the backup generations that are kept on disk.
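As a rough, illustrative estimate only (the retention policy and log-backup rate below are assumptions, not sizing guidance from SAP or Dell EMC), the backup file system capacity can be approximated from the data size, the number of retained backup generations, and the accumulated log backups:

```python
def backup_capacity_tb(data_size_tb: float, generations: int,
                       daily_log_backup_tb: float, retention_days: int) -> float:
    """Illustrative estimate: full backup generations kept on disk
    plus log backups accumulated over the retention period."""
    return generations * data_size_tb + retention_days * daily_log_backup_tb

# Example: 2 TB database, 3 full generations, 0.1 TB of log backups per day,
# retained for 14 days -> 7.4 TB.
print(backup_capacity_tb(2.0, 3, 0.1, 14))
```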