This appendix contains information about administering Oracle Database on Linux.
It contains the following topics:
Note:
Starting with Oracle Database 11g Release 2 (11.2), the Linux x86-64 and IBM: Linux on System z media do not contain Linux x86 binaries.
Note:
On Linux, Automatic Storage Management uses asynchronous Input-Output by default. Asynchronous Input-Output is not supported for database files stored on Network File Systems.
Oracle Database supports kernel asynchronous Input-Output. This feature is disabled by default.
By default, the DISK_ASYNCH_IO initialization parameter in the parameter file is set to TRUE to enable asynchronous Input-Output on raw devices. To enable asynchronous Input-Output on file system files, set the FILESYSTEMIO_OPTIONS initialization parameter in the parameter file to ASYNCH. If you want to enable both asynchronous Input-Output and direct Input-Output, set the FILESYSTEMIO_OPTIONS initialization parameter to SETALL.
Direct Input-Output is supported on Linux.
To enable direct Input-Output support, use one of the following options:
Set the FILESYSTEMIO_OPTIONS initialization parameter to DIRECTIO.
Set the FILESYSTEMIO_OPTIONS initialization parameter in the parameter file to SETALL, which enables both asynchronous Input-Output and direct Input-Output.
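For example, because FILESYSTEMIO_OPTIONS is a static parameter, a minimal SQL*Plus session to set it to SETALL in the server parameter file might look as follows (this sketch assumes the instance uses an spfile; restart the instance for the change to take effect):
$ sqlplus / as sysdba
SQL> ALTER SYSTEM SET filesystemio_options=SETALL SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP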
If Simultaneous Multithreading is enabled, then the v$osstat view reports two additional rows corresponding to the online logical CPUs (NUM_LCPUS) and virtual CPUs (NUM_VCPUS).
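For example, you can query these rows in SQL*Plus:
$ sqlplus / as sysdba
SQL> SELECT stat_name, value FROM v$osstat WHERE stat_name IN ('NUM_LCPUS', 'NUM_VCPUS');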
To use the MEMORY_TARGET or MEMORY_MAX_TARGET feature, the /dev/shm mount point must be sized as follows:
The /dev/shm mount point should be equal in size or larger than the value of SGA_MAX_SIZE, if set, or should be at least as large as MEMORY_TARGET or MEMORY_MAX_TARGET, whichever is larger. For example, with only MEMORY_MAX_TARGET=4GB set, to create a 4 GB in-memory file system on the /dev/shm mount point:
Run the following command as the root user:
# mount -t tmpfs shmfs -o size=4g /dev/shm
To ensure that the in-memory file system is mounted when the system restarts, add an entry in the /etc/fstab file similar to the following:
tmpfs /dev/shm tmpfs size=4g 0 0
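To verify the size of the mounted in-memory file system, you can run the following command:
# df -h /dev/shm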
The number of file descriptors for each Oracle instance is increased by 512*PROCESSES. Therefore, the maximum number of file descriptors should be at least this value, plus some more for the operating system requirements. For example, if the cat /proc/sys/fs/file-max command returns 32768 and PROCESSES is set to 100, you can set the value to 6815744 or higher as the root user, so that 51200 file descriptors are available for Oracle. Use one of the following options to set the value for the file-max descriptor.
Run the following command:
echo 6815744 > /proc/sys/fs/file-max
OR
Modify the following entry in the /etc/sysctl.conf file and restart the system as the root user.
fs.file-max = 6815744
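Alternatively, to apply the /etc/sysctl.conf setting without restarting, and to verify the new limit, you can run the following commands as the root user:
# sysctl -p
# cat /proc/sys/fs/file-max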
Per-process number of file descriptors must be at least 512. For example, as the root user, run the following command.
On bash and sh:
# ulimit -n
On csh:
# limit descriptors
If the preceding command returns 200, then run the following commands to set the per-process file descriptor limit to a higher value, for example, 1000:
On bash and sh:
# sudo sh
# ulimit -n 1000
On csh:
# sudo sh
# limit descriptors 1000
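To make a higher per-process file descriptor limit persistent for the oracle user, one common approach (the values shown are only an example) is to add nofile entries to the /etc/security/limits.conf file, the same file used later in this appendix for the memlock setting:
oracle soft nofile 1024
oracle hard nofile 65536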
MEMORY_TARGET and MEMORY_MAX_TARGET cannot be used when LOCK_SGA is enabled. MEMORY_TARGET and MEMORY_MAX_TARGET also cannot be used with HugePages on Linux.
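To check whether these parameters are currently set for an instance, you can query them in SQL*Plus, for example:
$ sqlplus / as sysdba
SQL> SHOW PARAMETER memory_target
SQL> SHOW PARAMETER memory_max_target
SQL> SHOW PARAMETER lock_sga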
Cgroups, or control groups, improve database performance by associating a dedicated set of CPUs to a database instance. Each database instance can only use the resources in its cgroup.
When consolidating on a large server, you may want to restrict the database to a specific subset of the CPU and memory. This feature makes it easy to enable CPU and memory restrictions for an Oracle Database instance.
Use the setup_processor_group.sh script to create cgroups. Download this script from note 1585184.1 on the My Oracle Support website:
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1585184.1
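After the script has created a processor group, the database instance is bound to it through the PROCESSOR_GROUP_NAME initialization parameter; the exact script options and group names are described in the My Oracle Support note. A minimal sketch, assuming a processor group named oracle_pg has already been created by the script:
$ sqlplus / as sysdba
SQL> ALTER SYSTEM SET processor_group_name='oracle_pg' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP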
HugePages is a feature integrated into the Linux kernel 2.6. Enabling HugePages makes it possible for the operating system to support memory pages greater than the default (usually 4 KB). Using very large page sizes can improve system performance by reducing the amount of system resources required to access page table entries. HugePages is useful for both 32-bit and 64-bit configurations. HugePage sizes vary from 2 MB to 256 MB, depending on the kernel version and the hardware architecture. For Oracle Databases, using HugePages reduces the operating system maintenance of page states, and increases Translation Lookaside Buffer (TLB) hit ratio.
Note:
Transparent HugePages is currently not an alternative to manually configured HugePages.
This section includes the following topics:
Review this information if your operating system has HugePages enabled.
On Linux platform installations, Oracle recommends that you use HugePages to obtain the best performance for Oracle Databases. When you upgrade Oracle Grid Infrastructure and Oracle Databases on servers that have HugePages enabled, Oracle recommends that you review your HugePages memory allocation requirements.
GIMR and HugePages Memory
Oracle Grid Infrastructure installations include the Grid Infrastructure Management Repository (GIMR). When HugePages is configured on cluster member nodes, the GIMR system global area (SGA) is installed into HugePages memory. The GIMR SGA occupies up to 1 GB of HugePages memory. Oracle Grid Infrastructure starts up before Oracle Databases installed on the cluster.
If your cluster member node operating system memory allocations to HugePages are insufficient for the size of the SGAs of all of the Oracle Database instances on the cluster, then you may find that one or more of your Oracle Database SGAs are mapped to regular pages instead of HugePages, which reduces expected performance. To avoid this issue, when you plan your upgrade, ensure that the memory you reserve for HugePages is large enough to accommodate your memory requirements.
Allocate enough memory to HugePages to accommodate the SGAs of all databases planned to run on the cluster, and the SGA of the Grid Infrastructure Management Repository.
To enable Oracle Database to use large pages (sometimes called HugePages) on Linux, set the value of the vm.nr_hugepages kernel parameter to specify the number of large pages that you want to reserve. You must specify adequate large pages to hold the entire SGA for the database instance. To determine the required parameter value, divide the SGA size for the instance by the size of a large page, then round up the result to the nearest integer.
To determine the default large page size, run the following command:
# grep Hugepagesize /proc/meminfo
For example, if /proc/meminfo lists the large page size as 2 MB, and the total SGA size for the instance is 1.6 GB, then set the value for the vm.nr_hugepages kernel parameter to 820 (1.6 GB / 2 MB = 819.2).
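The same calculation can be reproduced with a small shell sketch; the SGA size below is an assumed value, so substitute your own (both sizes are in KB, and the integer arithmetic rounds up):
$ SGA_KB=1677722     # assumed total SGA of about 1.6 GB, expressed in KB
$ HPG_KB=2048        # Hugepagesize reported by /proc/meminfo, in KB
$ echo $(( (SGA_KB + HPG_KB - 1) / HPG_KB ))
820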
Without HugePages, the operating system keeps each 4 KB of memory as a page. When it allocates pages to the database System Global Area (SGA), the operating system kernel must continually update its page table with the page lifecycle (dirty, free, mapped to a process, and so on) for each 4 KB page allocated to the SGA.
With HugePages, the operating system page table (virtual memory to physical memory mapping) is smaller, because each page table entry is pointing to pages from 2 MB to 256 MB.
Also, the kernel has fewer pages whose lifecycle must be monitored. For example, if you use HugePages with 64-bit hardware, and you want to map 256 MB of memory, you may need one page table entry (PTE). If you do not use HugePages, and you want to map 256 MB of memory, then you must have 256 MB * 1024 KB/4 KB = 65536 PTEs.
HugePages provides the following advantages:
Increased performance through increased TLB hits
Pages are locked in memory and never swapped out, which provides RAM for shared memory structures such as SGA
Contiguous pages are preallocated and cannot be used for anything else but for System V shared memory (for example, SGA)
Less bookkeeping work for the kernel for that part of virtual memory because of larger page sizes
Complete the following steps to configure HugePages on the computer:
Run the following command to determine if the kernel supports HugePages:
$ grep Huge /proc/meminfo
Some Linux systems do not support HugePages by default. For such systems, build the Linux kernel using the CONFIG_HUGETLBFS and CONFIG_HUGETLB_PAGE configuration options. CONFIG_HUGETLBFS is located under File Systems, and CONFIG_HUGETLB_PAGE is selected when you select CONFIG_HUGETLBFS.
Edit the memlock setting in the /etc/security/limits.conf file. The memlock setting is specified in KB, and the maximum locked memory limit should be set to at least 90 percent of the current RAM when HugePages memory is enabled, and to at least 3145728 KB (3 GB) when HugePages memory is disabled. For example, if you have 64 GB RAM installed, then add the following entries to increase the maximum locked-in-memory address space:
* soft memlock 60397977
* hard memlock 60397977
You can also set the memlock value higher than your SGA requirements.
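For example, to compute 90 percent of the currently installed RAM in KB (the unit expected by the memlock setting), you can run:
$ grep MemTotal /proc/meminfo | awk '{printf "%d\n", $2*0.9}'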
Log in as the oracle user again and run the ulimit -l command to verify the new memlock setting:
$ ulimit -l
60397977
Run the following command to display the value of the Hugepagesize variable:
$ grep Hugepagesize /proc/meminfo
Complete the following procedure to create a script that computes recommended values for hugepages configuration for the current shared memory segments:
Create a text file named hugepages_settings.sh.
Add the following content in the file:
#!/bin/bash
#
# hugepages_settings.sh
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
#
# Note: This script does calculation for all shared memory
# segments available when the script is run, no matter it
# is an Oracle RDBMS shared memory segment or not.
# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk {'print $2'}`
# Start from 1 pages to be on the safe side and guarantee 1 free HugePage
NUM_PG=1
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | awk {'print $5'} | grep "[0-9][0-9]*"`
do
   MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
   if [ $MIN_PG -gt 0 ]; then
      NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
   fi
done
# Finish with results
case $KERN in
   '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
          echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
   '2.6'|'3.8') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
   *) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac
# End
Run the following command to change the permission of the file:
$ chmod +x hugepages_settings.sh
Run the hugepages_settings.sh script to compute the values for hugepages configuration:
$ ./hugepages_settings.sh
Note:
Before running this script, ensure that all the applications that use HugePages are running.
Set the following kernel parameter, where value is the HugePages value that you determined in step 7:
# sysctl -w vm.nr_hugepages=value
To ensure that HugePages is allocated after system restarts, add the following entry to the /etc/sysctl.conf file, where value is the HugePages value that you determined in step 7:
vm.nr_hugepages=value
Run the following command to check the available hugepages:
$ grep Huge /proc/meminfo
Restart the instance.
Run the following command to check the available hugepages (1 or 2 pages free):
$ grep Huge /proc/meminfo
Note:
If you cannot set your HugePages allocation using nr_hugepages, then your available memory may be fragmented. Restart your server for the HugePages allocation to take effect.
HugePages has the following limitations:
You must unset both the MEMORY_TARGET and MEMORY_MAX_TARGET initialization parameters. For example, to unset the parameters for the database instance, use the ALTER SYSTEM RESET command (see the example following this list).
Automatic Memory Management (AMM) and HugePages are not compatible. When you use AMM, the entire SGA memory is allocated by creating files under /dev/shm. When Oracle Database allocates SGA with AMM, HugePages are not reserved. To use HugePages on Oracle Database 12c, you must disable AMM.
If you are using VLM in a 32-bit environment, then you cannot use HugePages for the Database Buffer cache. You can use HugePages for other parts of the SGA, such as shared_pool, large_pool, and so on. Memory allocation for VLM (buffer cache) is done using shared memory file systems (ramfs/tmpfs/shmfs). Memory file systems do not reserve or use HugePages.
HugePages are not subject to allocation or release after system startup, unless a system administrator changes the HugePages configuration, either by modifying the number of pages available, or by modifying the pool size. If the space required is not reserved in memory during system startup, then HugePages allocation fails.
Ensure that HugePages is configured properly, because the system may run out of memory if excess HugePages are allocated but not used by the application.
If there are insufficient HugePages when an instance starts and the initialization parameter use_large_pages is set to only, then the database fails to start and an alert log message provides the necessary information about HugePages.
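As an example of unsetting the AMM parameters noted in the first limitation, a minimal SQL*Plus session might look as follows (run it against your own instance; the instance must be restarted for the change to take effect):
$ sqlplus / as sysdba
SQL> ALTER SYSTEM RESET memory_target SCOPE=SPFILE;
SQL> ALTER SYSTEM RESET memory_max_target SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP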
Oracle recommends that you disable Transparent HugePages before you start installation.
Transparent HugePages memory differs from standard HugePages memory because the kernel khugepaged thread allocates memory dynamically during runtime. Standard HugePages memory is pre-allocated at startup, and does not change during runtime.
Note:
Transparent HugePages memory is enabled by default with Red Hat Enterprise Linux 6, Red Hat Enterprise Linux 7, SUSE 11, Oracle Linux 6, and Oracle Linux 7 with earlier releases of the Oracle Linux Unbreakable Enterprise Kernel 2 (UEK2) kernels. Transparent HugePages memory is disabled by default in UEK2 and later UEK kernels; however, it may still be enabled by default on your Linux system.
Transparent HugePages can cause memory allocation delays during runtime. To avoid performance issues, Oracle recommends that you disable Transparent HugePages on all Oracle Database servers. Oracle recommends that you instead use standard HugePages for enhanced performance.
To check if Transparent HugePages is enabled, run one of the following commands as the root user:
Red Hat Enterprise Linux kernels:
# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
Other kernels:
# cat /sys/kernel/mm/transparent_hugepage/enabled
The following is sample output that shows Transparent HugePages is in use, because the [always] flag is enabled:
[always] never
Note:
If Transparent HugePages is removed from the kernel, then neither the /sys/kernel/mm/transparent_hugepage nor the /sys/kernel/mm/redhat_transparent_hugepage file exists.
To disable Transparent HugePages:
Add the following entry to the kernel boot line in the /etc/grub.conf
file:
transparent_hugepage=never
title Oracle Linux Server (2.6.32-300.25.1.el6uek.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-300.25.1.el6uek.x86_64 ro root=LABEL=/ transparent_hugepage=never
        initrd /initramfs-2.6.32-300.25.1.el6uek.x86_64.img
Restart the system to make the changes permanent.
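If you cannot restart the system immediately, Transparent HugePages can usually also be disabled at runtime (until the next restart) by writing to the sysfs files shown earlier in this section; this is a common approach, but verify the paths for your kernel:
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# echo never > /sys/kernel/mm/transparent_hugepage/defrag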