
Linux #1 : Performance tuning of OpenLDAP #1

1. Tune buffer cache size of Berkeley DB

The buffer cache is a typical tuning point for open-source databases such as MySQL (with the InnoDB storage engine) and PostgreSQL, and for commercial databases as well. Whenever a database works with data, it should avoid disk I/O by keeping that data in memory, so that data persisted in the files and indexes can be fetched quickly. That is why this setting deserves attention.

Berkeley DB is a library incorporated into the program rather than a database running as an independent process, but it likewise keeps frequently accessed data in a buffer cache. The buffer cache size is configured in the DB_CONFIG file that lives alongside the Berkeley DB files. When OpenLDAP starts, directory information such as entries and indexes is cached.

There are two ways to tune for the buffer cache size.

a) The buffer cache size necessary to load the database via slapadd in optimal time
b) The buffer cache size necessary to have a high performing running slapd once the data is loaded

For (a), the optimal cache size is the size of the entire database. If you already have the database loaded, this is simply

# cd /usr/local/var/openldap-data
# du -ch *.bdb
68K     cn.bdb
64K     dn2id.bdb
12K     entryCSN.bdb
20K     entryUUID.bdb
12K     gidNumber.bdb
308K    id2entry.bdb
8.0K    loginShell.bdb
28K     memberUid.bdb
28K     objectClass.bdb
8.0K    ou.bdb
8.0K    sn.bdb
40K     uid.bdb
8.0K    uidNumber.bdb
8.0K    uniqueMember.bdb
620K    total

in the directory containing the OpenLDAP data (default path: /usr/local/var/openldap-data).

For (b), the optimal buffer cache is just the size of the id2entry.bdb file, plus about 10% for growth.
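Both rules of thumb can be checked with a quick back-of-the-envelope script (a sketch, not part of OpenLDAP; the file sizes are the du figures from above):

```python
# Sketch: recompute both cache-sizing rules from the du output above.
# Rule (a): the slapadd-time cache should hold the whole database (the 620K total).
# Rule (b): the steady-state slapd cache is id2entry.bdb plus ~10% for growth.
id2entry_kb = 308   # size of id2entry.bdb from the du listing
total_kb = 620      # "total" line from the du listing

print(f"rule (a) cache: {total_kb} KB")                   # 620 KB
print(f"rule (b) cache: {round(id2entry_kb * 1.10)} KB")  # 339 KB
```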

For example,

# cd /usr/local/openldap/var/openldap-data
# vi DB_CONFIG
...
[Example]
set_cachesize 0 268435456 1
It provides a 0.25 GB buffer logically, composed of one cache region
physically
[Format]
set_cachesize gbytes bytes ncache
   gbytes : gigabytes part of the cache size
   bytes : bytes part of the cache size
   ncache : number of physically separate cache regions
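To double-check a set_cachesize line, the gbytes/bytes pair can be converted to a single byte count (a hypothetical helper for illustration, not a Berkeley DB API):

```python
def cachesize_bytes(gbytes: int, nbytes: int) -> int:
    """Total cache implied by a 'set_cachesize gbytes bytes ncache' line.

    Berkeley DB splits this total across ncache physically separate
    regions, so ncache does not change the overall size.
    """
    return gbytes * 2**30 + nbytes

# The example above: set_cachesize 0 268435456 1
total = cachesize_bytes(0, 268435456)
print(total, "bytes =", total / 2**30, "GB")  # 268435456 bytes = 0.25 GB
```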

Re-map the buffer cache with the new configuration:

# cd /usr/local/openldap/var
# /etc/init.d/slapd stop
# rm -rf __db.*
# /etc/init.d/slapd start
#  lsof __db.003
COMMAND  PID USER  FD   TYPE DEVICE      SIZE     NODE NAME
slapd   1405 ldap mem    REG  253,3 335552512 24445661 __db.003
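Note that the mapped region is larger than the configured 256 MB; the difference is presumably Berkeley DB's internal bookkeeping overhead. Using the numbers from the lsof output above (an interpretation of the figures, not an API call):

```python
# Estimate the cache region overhead from the figures above.
configured = 268435456   # set_cachesize 0 268435456 1
mapped = 335552512       # SIZE of __db.003 reported by lsof
overhead = mapped - configured
print(f"overhead: {overhead} bytes ({overhead / configured:.1%})")  # ~25.0%
```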

2. Tune log buffer size of Berkeley DB

The log buffer is the in-memory area where Berkeley DB accumulates log records for update requests; its contents are written to the transaction log file when an update is committed or when the log buffer space becomes full.

For example,

# cd /usr/local/openldap/var/openldap-data
# vi DB_CONFIG
...
[Example]
set_lg_bsize 2097152
By default, or if the value is set to 0, a size of 32K is used
[Format]
set_lg_bsize lg_bsize
   lg_bsize : Set the size of the in-memory log buffer, in bytes.
...
[Example]
set_lg_regionmax 262144
By default, or if the value is set to 0, the base region size is 60KB
The log region is used to store filenames, and so may need to be increased in size
if a large number of files will be opened and registered with the specified
Berkeley DB environment's log manager.
[Format]
set_lg_regionmax size
  size : Set the size of the underlying logging subsystem region, in bytes
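As a rough illustration of what set_lg_bsize buys (a sketch; the 500-byte average log record size is an assumption, borrowed from the Reference Guide excerpt further down):

```python
# How many ~500-byte log records fit in the buffer before Berkeley DB
# must flush it to the on-disk transaction log (ignoring commits, which
# also force a write)?
lg_bsize = 2097152   # set_lg_bsize 2097152, from the example above
record_bytes = 500   # assumed average log record size
print(lg_bsize // record_bytes, "records per buffer flush")  # 4194
```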

Re-map the log buffer with the new configuration:

# cd /usr/local/openldap/var
# /etc/init.d/slapd stop
# rm -rf __db.*
# /etc/init.d/slapd start
#  lsof __db.004
COMMAND  PID USER  FD   TYPE DEVICE    SIZE     NODE NAME
slapd   1405 ldap mem    REG  253,3 2359296 24445662 __db.004

Additional information: log file limits of Berkeley DB, from the Berkeley DB Reference Guide

Log filenames and sizes impose a limit on how long databases may be used in a Berkeley DB database environment. It is quite unlikely that an application will reach this limit; however, if the limit is reached, the Berkeley DB environment’s databases must be dumped and reloaded.

The log filename consists of log. followed by 10 digits, with a maximum of 2,000,000,000 log files. Consider an application performing 6000 transactions per second for 24 hours a day, logged into 10MB log files, in which each transaction is logging approximately 500 bytes of data. The following calculation:

(10 * 2^20 * 2000000000) / (6000 * 500 * 365 * 60 * 60 * 24) = ~221

indicates that the system will run out of log filenames in roughly 221 years.
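The Reference Guide's estimate is easy to reproduce (a direct transcription of the calculation above):

```python
# Reproduce the log-filename exhaustion estimate from the Reference Guide.
log_file_bytes = 10 * 2**20        # 10 MB log files
max_log_files = 2_000_000_000      # 10-digit filename space
txns_per_sec = 6000
bytes_per_txn = 500
secs_per_year = 365 * 24 * 60 * 60

years = (log_file_bytes * max_log_files) / (txns_per_sec * bytes_per_txn * secs_per_year)
print(int(years), "years")  # ~221 years
```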

There is no way to reset the log filename space in Berkeley DB. If your application is reaching the end of its log filename space, you must do the following:

  1. Archive your databases as if to prepare for catastrophic failure
    (see db_archive for more information).
  2. Dump and reload all your databases (see db_dump and db_load for more information).
  3. Remove all of the log files from the database environment. Note: This is the only situation in which all the log files are removed from an environment; in all other cases, at least a single log file is retained.
  4. Restart your application.