A Deep Dive into Huge Pages
Recently, I was working on optimising an Oracle database server that was struggling with memory-management overhead. The server was beefy, sporting 160GB of RAM, but despite the hardware specs we were seeing latency issues and high CPU usage related to memory access.
The culprit? The standard Linux memory page size.
By default, Linux manages memory in tiny 4KB chunks. When you have an Oracle System Global Area (SGA) consuming nearly 100GB of RAM, the operating system has to maintain a map (the Page Table) for millions of these tiny pages. This creates massive overhead.
I decided to implement Huge Pages, switching from standard 4KB pages to 2MB pages. This seemingly small configuration change reduces the number of pages the OS manages by a factor of 512, drastically cutting down the “bookkeeping” work the CPU has to do.
Here is the technical walkthrough of how I configured this, using the specific calculations and steps I employed on my 160GB server.
Step 1: The Analysis
Before changing anything, I needed to see how the system was currently handling huge pages. I checked the /proc/meminfo file to see the current status.
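On any modern Linux kernel, the relevant counters all live in /proc/meminfo, so a quick grep is enough:

```shell
# Show the current huge-page state. The fields to look at are:
#   HugePages_Total - pages reserved by the kernel
#   HugePages_Free  - reserved pages not yet in use
#   HugePages_Rsvd  - pages promised to a process but not yet touched
#   Hugepagesize    - the page size (typically 2048 kB on x86-64)
grep Huge /proc/meminfo
```

On a default install, HugePages_Total is 0 — nothing is reserved until you configure it.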
Step 2: The Calculation (The Critical Part)
This is where many people get stuck. You cannot just assign all your RAM to Huge Pages, or the operating system will crash (it needs standard memory for itself).
My server had 160GB of RAM. My strategy was to dedicate roughly half of this explicitly to the Oracle SGA via Huge Pages, leaving the rest for the OS and other processes.
Here is the math I followed:
- Total RAM: 160 GB
- Target for Huge Pages: 80 GB (50% of Total RAM)
- Page Size: 2 MB
To find the number of pages required (vm.nr_hugepages), I converted the target memory into the number of 2MB chunks: 80 GB is roughly 80,000 MB, and 80,000 MB ÷ 2 MB = 40,000 pages.
So, my magic number for the kernel configuration was 40,000.
Quick Reference for Other Sizes:
If you are working with different SGA sizes, here is a quick cheat sheet I used to verify my math:
- 1 GB SGA: 512 Huge Pages
- 1.5 GB SGA: 768 Huge Pages
- 2 GB SGA: 1024 Huge Pages
- 5 GB SGA: 2560 Huge Pages
- 10 GB SGA: 5120 Huge Pages
- 15 GB SGA: 7680 Huge Pages
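The cheat-sheet values above all follow from the same division — SGA size in MB divided by the 2 MB page size. A tiny sketch of that arithmetic (sga_mb is a placeholder; plug in your own SGA size):

```shell
# Convert an SGA size in MB into a 2 MB huge-page count
sga_mb=10240   # 10 GB SGA, expressed in MB
page_mb=2      # huge-page size on x86-64
pages=$(( sga_mb / page_mb ))
echo "$pages"  # matches the 10 GB row in the cheat sheet: 5120
```

Running the same division for 1024, 1536, and 2048 MB reproduces the 512, 768, and 1024 rows.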
Step 3: Kernel Configuration
Once I had my number (40,000), I had to tell the Linux kernel to reserve this memory immediately upon boot. This is done in the /etc/sysctl.conf file.
I opened the file as root and added the following line:
vm.nr_hugepages=40000
To apply this change without a reboot, I ran the sysctl command:
sysctl -p
However, I highly recommend a reboot in a production scenario to ensure the memory is actually available and contiguous. If the memory is already fragmented, the dynamic allocation might fail.
After applying, I verified the allocation again:
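The same grep as in Step 1 does the job; I also like cross-checking it against the kernel's live sysctl value (the two should agree):

```shell
# The live kernel setting (what sysctl -p actually applied)
cat /proc/sys/vm/nr_hugepages
# The meminfo counter it should match
grep HugePages_Total /proc/meminfo
```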
I confirmed that HugePages_Total showed 40000.
Step 4: Configuring Oracle
The OS was ready, but Oracle didn’t know it was supposed to use these specific pages yet.
I modified the database initialization parameters (in the spfile or init.ora). The critical parameter here is USE_LARGE_PAGES.
I set it to ONLY. This is a strict setting that forces the database instance to fail if it cannot secure Huge Pages, rather than silently falling back to standard pages. This ensures performance consistency.
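For reference, with an spfile-based configuration the parameter can be set from SQL*Plus like this (a sketch — it takes effect at the next instance restart):

```sql
-- Force the instance to use Huge Pages for the SGA, or fail to start
ALTER SYSTEM SET use_large_pages=ONLY SCOPE=SPFILE;
```

With a plain-text init.ora instead, the equivalent is a single line: use_large_pages=ONLY.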
After restarting the database instance, the Oracle SGA was fully pinned in Huge Pages.
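One sanity check worth doing after startup: HugePages_Free should have dropped by roughly SGA-size ÷ 2MB once the instance attaches, which proves the pages are genuinely in use rather than merely reserved:

```shell
# Free and Rsvd counters reveal whether the SGA actually claimed the pages
grep -E 'HugePages_(Total|Free|Rsvd)' /proc/meminfo
```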
The Result
The impact was immediate.
- Reduced Page Table Overhead: The CPU was no longer wasting cycles looking up memory addresses in a massive table.
- No Swapping: Because Huge Pages are locked in memory, the SGA could never be swapped out to disk, ensuring stable latency.
- Lower TLB Misses: The Translation Lookaside Buffer (TLB) hit rate improved significantly, since each entry now covers 2MB of memory instead of 4KB.
If you are managing an Oracle database with a large SGA (anything over 8GB), I cannot recommend this configuration enough. It is one of the highest ROI changes you can make for system stability.