The Database Cache - Firebird

Database cache is a chunk of memory reserved for each database running on the server. Its purpose is to cache all of the database pages (also called buffers) that have been most recently used. It is configured as a default for new databases and for all databases that are not individually configured. This default setting, which constitutes a number of blocks of memory, or page buffers, each the size of one database page, is set in the server’s configuration file:

  • For v.1.5 onward, the parameter is DefaultDbCachePages in firebird.conf for all platforms.
  • For v.1.0.x, the parameter is database_cache_pages in isc_config (POSIX) or ibconfig (Win32).

It should be emphasized that configuring the cache is not a “must-do.” The default configuration for Superserver fits most normal needs and server-level reconfiguration might never be necessary. On Classic server, the default is worthy of more attention, since it may be too high for a system with more than a few concurrent users.

A newly created database has a database-level cache size of zero pages. If the cache setting is left at zero, connections to that database will use the server-level configuration setting. Thus, databases with large page sizes will use more cache memory than those with smaller page sizes.

Cache size can be configured individually and permanently, per database. It can be changed again, if required. Other databases that retain (or are changed to) zero-cache will use the server default.

The number of cache buffers required is approximate. It needs to be large enough to cater for the page requirements of databases but not so large as to consume memory that is needed for other operations. Up to a point, the more activity that can be handled in the cache, the better the overall performance. The axiom “Database servers love RAM” is true for Firebird. But Firebird uses RAM for other activities that are at least as important as caching. Transaction inventory and index bitmaps are maintained in RAM and, from v.1.5, sorting and merging are done in memory, if it is available.

It is important to realize that every system has a critical point where a too-large cache configuration will consume more memory resources than the system can “spare.” Beyond this point, enlarging the cache will cause performance to degrade.

Limits and Defaults

The minimum cache size is 50 pages. There is no maximum, as long as the allocation in total does not exceed the RAM available. Default cache allocation is

  • Superserver: For each running database, 2048 pages. All users share this common cache pool.

    As an indication of how resources can be consumed, a single database running at the default settings for PAGE_SIZE (4K) and DefaultDbCachePages (2K) requires 8MB of memory. Two databases running with the same settings require 16MB, and so on. Default cache usage is calculated by

    PAGE_SIZE * DefaultDbCachePages * number of databases

  • Classic Server: Per client attachment, 75 cache pages. Each attachment is allocated its own cache. The amount of memory required is the total of the cache requirements of all client attachments to each database. Cache usage is calculated by

    PAGE_SIZE * DefaultDbCachePages * number of attachments
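As a worked sketch of the two formulas (all figures are invented, not recommendations), shell arithmetic gives the memory footprint under each model:

```shell
# Hypothetical figures: 4 KB page size throughout.

# Superserver: one shared cache per database.
# Two databases at the 2048-page default:
echo $(( 4096 * 2048 * 2 ))   # 16777216 bytes (16 MB)

# Classic: one cache per attachment.
# 20 attachments at the 75-page default:
echo $(( 4096 * 75 * 20 ))    # 6144000 bytes (about 6 MB)
```

Note how quickly the Classic figure grows with the attachment count rather than the database count.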

Calculating the Cache Size

When Firebird reads a page from the database from disk, it stores that page in the cache. Ordinarily, the default cache size is adequate. If your application includes joins of five or more tables, Firebird Superserver automatically increases the size of the cache. If your application is well localized (i.e., it uses the same small part of the database repeatedly), you might want to consider increasing the cache size so that you never have to release one page from cache to make room for another.

Because the DbCache is configured in pages, obviously a database with a larger page size consumes more memory than one with a smaller page size. When there are multiple databases running on the same server, it may be desirable to override the serverwide cache size at the database level or, in some cases, at the application level.

An application that performs intensive indexed retrievals requires more buffers than one that performs mainly inserts.

Where many clients are accessing different tables or different parts of a single table, the demand for cache memory is higher than where most clients are working with the same, or overlapping, sets of data.

It can happen that too many cache buffers are allocated for available RAM to accommodate. With many databases running simultaneously, a request could demand more RAM than was available on the system. The cache would be swapped back and forth between RAM and disk, defeating the benefits of caching. Other applications (including the server) could be starved of memory if the cache were too large.

It is important, therefore, to ensure that adequate RAM is installed on the system to accommodate the database server’s memory requirements. If database performance is important for your users, then avoid creating competition for resources by running other applications on the server.

Estimating the Size Requirement

Estimating the size of the cache is not a simple or precise science, especially if you have multiple databases that have to run concurrently. The likely server cache usage is driven by the database with the largest page size. Classic server allocates a cache for each attachment, whereas Superserver pools cache for all attachments to a particular database. As a starting point, it will be useful to work with the numbers and needs for the database with the biggest page size. Actual usage conditions will determine whether any adjustments are needed.

It is not necessary to have a cache that will accommodate an entire database. Arrive at a reduction factor for each database by estimating the proportion that is likely to be accessed during normal usage. The estimation suggested here is just that: there is no "rule." Assume that when we talk about "DbCachePages" here, we mean the size of the cache for an individual database, not necessarily the default server setting for new and unconfigured databases.

The reduction factor, r, should be a value between 0 and 1. The size of the database, in pages, (size) can be established as follows:

  • For a single-file database, take the maximum file size allowed by the filesystem, minus 1 byte, and divide it by the page size.
  • On operating systems that support huge files, use the database file size instead of the maximum file size.

  • For a multi-file database, take the STARTING AT value of the first secondary file and add the LENGTH values of all of the secondary files.
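The multi-file arithmetic above can be sketched as follows (all values are in pages and purely illustrative):

```shell
# Size of a multi-file database, in pages:
# the STARTING AT value of the first secondary file, plus the
# LENGTH value of each secondary file.
starting_at=10000
lengths="5000 5000 8000"

size=$starting_at
for len in $lengths; do
  size=$(( size + len ))
done
echo $size   # 28000 pages
```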

Let DbCachePages equal the number of cache pages (buffers) required by this database.

For each database, calculate

DbCachePages = r*size

Calculate and record this figure for each individual database.
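As an illustration (the database size and reduction factor are invented), the calculation for one database might look like this:

```shell
# DbCachePages = r * size, with r = 0.4 and size = 10000 pages.
# Shell arithmetic is integer-only, so express r as a percentage.
size=10000
r_pct=40
echo $(( size * r_pct / 100 ))   # 4000 cache pages for this database
```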


Calculating RAM Requirements

To calculate the amount of RAM required for database caching on your server, take the PAGE_SIZE of each database and multiply it by its cache size in pages (DefaultDbCachePages, or the database-level DbCachePages override). These results for all databases, when added together, will approximate the minimum RAM required for database caching.
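A sketch of that summation, with invented page sizes and cache settings for two databases:

```shell
# Each entry is PAGE_SIZE:DbCachePages for one database (illustrative values).
total=0
for spec in 4096:5000 8192:2048; do
  page_size=${spec%%:*}   # part before the colon
  pages=${spec##*:}       # part after the colon
  total=$(( total + page_size * pages ))
done
echo $total   # 37257216 bytes, roughly 36 MB of RAM for caching
```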

Setting Cache Size at the Database Level

There are several ways to configure the cache size for a specified database. Changes do not take effect until the database is next opened: for Superserver, the next time a first connection is made to the database; for Classic server, the next client attachment.

Use gfix

The recommended way to set a database-level override to DefaultDbCachePages is to use the gfix command-line utility with the following switch:

gfix -buffers n database_name

where n is the required number of database pages. This approach permits fine-tuning to reduce the risk of under-using memory or working with a cache that is too small. The override will remain in place until the next time it is changed.

Use the isql Command-Line Query Tool

To increase the number of cache pages for the duration of one session of the command-line utility isql, you have two options.

The first option is to include the number of pages (n) as a switch when starting isql:

isql -c n database_name

where n is the number of cache pages to be used for the session and temporarily overrides any value set by the DefaultDbCachePages (database_cache_pages) configuration or gfix. It must be greater than 9.

Alternatively, once isql is running, you can include CACHE n as an argument to the CONNECT statement:

SQL> connect database_name CACHE n;

The value n can be any positive integer number of database pages. If a database cache already exists because of another attachment to the database, the cache size is increased only if n is greater than the current cache size.

Use the Database Parameter Buffer

In an application, the cache size can be set in a database parameter buffer (DPB) using either the isc_dpb_num_buffers or the isc_dpb_set_page_buffers parameter, according to your server’s requirements.

  • isc_dpb_num_buffers sets the number of buffers (pages of cache memory) to be used for the current connection. It makes the most sense in a Classic server architecture, where each connection gets a static allocation of cache memory. In Superserver, it will set the number of buffers to be used for the specific database, if that database is not already open, but it will not persist after the server closes the database.
  • isc_dpb_set_page_buffers is useful in both Classic server and Superserver. It has the same effect as using gfix to perform a persistent override of DefaultDbCachePages.

Changing the Server Default

Setting the value for the server-level DefaultDbCachePages to be the largest of the DbCachePages values you have recorded may be overkill. When you change the server-level default setting in the configuration file, it becomes the default for every new and zero-configured database on the server.

To change it, open the configuration file in a text editor and find the parameter:

  • For v.1.5, uncomment DefaultDbCachePages and change the number.
  • For v.1.0.x, the parameter is database_cache_pages. Uncomment the line if necessary and make the entry database_cache_pages=nnnn, where nnnn is the new cache size.

For Superserver, the change will take effect the next time a first connection is made to the affected databases. For Classic server, it will affect all connections made after the reconfiguration.

A Pragmatic Approach

Do not overrate the importance of the database cache. Any cache imposes its own overhead on the overall memory resources and the filesystem cache plays its own role in optimizing the system’s read-write performance. There is always a point at which the real gain in overall server performance does not justify the cost of tying up resources for the worst-case demand scenario.

The best advice is this: Do not rush into cache size optimization as a “must-do” for Firebird. Work with the default settings during development and, for deployment, just verify that the amount of RAM available can accommodate the defaults.

Once into production conditions, use a monitoring tool to observe and record how reads and writes are satisfied from the cache for typical and extreme situations. If the statistics are not satisfactory, then begin considering optimization.

The first broad-brush optimization you can try is to increase the default cache to a size that will occupy approximately two-thirds of available free RAM. If there is not enough RAM installed, install more.
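To put a number on "two-thirds of available free RAM" (the figures here are assumed, not prescriptive):

```shell
# 1 GB free RAM, 8 KB page size: how many cache pages fit in two-thirds of it?
free_ram=$(( 1024 * 1024 * 1024 ))
page_size=8192
echo $(( free_ram * 2 / 3 / page_size ))   # 87381 pages
```

The result would be the starting value to try for the cache setting before resuming monitoring.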

At that point, start monitoring again. If the procedure just described fails to improve things, repeat the exercise.

Verifying Cache Size

To verify the size of the database cache currently in use, execute the following commands in isql:

SQL> SET STATS ON;
SQL> COMMIT;

After SET STATS ON, the empty COMMIT command prompts isql to display information about memory and buffer usage. Read the Buffers = line to determine the current size of the cache, in pages.

All rights reserved © 2020 Wisdom IT Services India Pvt. Ltd
