3. TOAR Database Setup and Operation

This section is intended for administrators of a TOAR data infrastructure installation.

It describes two use cases:

  • setup of a database copy for local use (“clone”)

  • setup of a database mirror site

The technical setup and the procedures for operating the TOAR database system are described in the following using the system setup at the Jülich Supercomputing Centre (JSC) as an example. The description is intended to be as generic as possible.

The TOAR database is a PostgreSQL database with the PostGIS and toar_controlled_vocabulary extensions installed. A suitable server configuration for hosting a copy of the TOAR database should include at least 2 cores, 4 GB of memory and 3 TB of storage space. The system will benefit from additional cores and memory under high load. Sufficient I/O bandwidth should be provided, as even the indices of some tables can grow quite large and cannot always be cached in memory. For example, the indices of the data_hourly table amount to around 100 GB at the time of writing of this documentation. The current system configuration at JSC has 2 cores, 4 GB of memory and 4 TB of disk space. As the system runs in a virtual environment, these parameters can easily be scaled up if needed.

The PostgreSQL data directory is located under /postgres/data. Other directories under /postgres are:

  • /postgres/archive - (compressed) WAL files written by the DB server process

  • /postgres/backup - (compressed) base backups

3.1. Setup of a Local Database Copy (Clone)

If you don’t want to mirror the TOAR database, but only want to install a local copy for your own use, you can clone a published snapshot of the database. This section therefore focuses only on setting up a database clone and does not describe how to perform backup and recovery; these procedures can be adapted to the local conditions of your operating system. Suggestions can be found in Section 3.3.

3.1.1. Installation

The necessary steps for the installation depend on your actual set-up, e.g. whether PostgreSQL/PostGIS is already available on your system, or whether you have already defined roles in your database system.

An installation from scratch contains the following steps:

  1. the configuration files for a fresh PostgreSQL/PostGIS installation (on an Ubuntu system) have to be adapted and the installation has to be carried out. Afterwards, a new database named “toardb_v2” has to be created with the following roles:

    • toaradministrator: Superuser who owns the database and all its contents

    • toarcurator: A user with read and write access to TOAR metadata and data

    • toaruser: A user with read only access to TOAR metadata and data

  2. the PostGIS and toar_controlled_vocabulary extensions have to be installed within this new database.

  3. the necessary database tables (see https://esde.pages.jsc.fz-juelich.de/toar-data/toardb_fastapi/docs/toardb_fastapi.html#models and check out the database schema at https://gitlab.jsc.fz-juelich.de/esde/toar-data/toardb_fastapi/-/blob/master/toardb_v2_schema.sql) are set up. The repository also contains a small set of test data to fill the newly created database toardb_v2 for testing purposes. A sketch of steps 1 to 3 is given below the list.

  4. the REST API from https://gitlab.jsc.fz-juelich.de/esde/toar-data/toardb_fastapi/ has to be installed; the new database can then be tested with https://gitlab.jsc.fz-juelich.de/esde/toar-data/toardb_fastapi/-/blob/master/production_tests.sh.
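
The following is a rough sketch of steps 1 to 3 on a plain Ubuntu system. The role names, the database name, the extension names and the schema file stem from the repository linked above; the exact psql invocations, the password placeholders and the privilege grants for toarcurator and toaruser are assumptions and must be adapted to your local policies (the toar_controlled_vocabulary extension has to be built and installed on the server beforehand, following the repository’s instructions).

# create the three roles and the database (run as the postgres superuser)
$ sudo -u postgres psql -c "CREATE ROLE toaradministrator WITH LOGIN SUPERUSER PASSWORD '***';"
$ sudo -u postgres psql -c "CREATE ROLE toarcurator WITH LOGIN PASSWORD '***';"
$ sudo -u postgres psql -c "CREATE ROLE toaruser WITH LOGIN PASSWORD '***';"
$ sudo -u postgres createdb -O toaradministrator toardb_v2
# install the extensions inside the new database
$ sudo -u postgres psql -d toardb_v2 -c "CREATE EXTENSION postgis;"
$ sudo -u postgres psql -d toardb_v2 -c "CREATE EXTENSION toar_controlled_vocabulary;"
# create the tables from the schema file of the toardb_fastapi repository
$ psql -h localhost -U toaradministrator -d toardb_v2 -f toardb_v2_schema.sql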

3.1.2. Loading Data

In order to populate a fresh database, it is necessary to load a dump (see the installation guide at https://gitlab.jsc.fz-juelich.de/esde/toar-data/toardb_fastapi#installation-guide). Database dumps are created on a regular basis and made publicly available on B2SHARE (https://b2share.fz-juelich.de/communities/TOAR).
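
The exact restore command depends on the format of the published dump; the file names below are only examples. For a gzip-compressed plain-SQL dump, loading could look like this:

# plain-SQL dump, gzip-compressed
$ zcat toardb_v2_dump.sql.gz | psql -U toaradministrator -d toardb_v2
# or, for a custom-format dump created with pg_dump -Fc:
$ pg_restore -U toaradministrator -d toardb_v2 toardb_v2_dump.pgdump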

3.2. Setup of a Database Mirror Site

3.2.1. Installing a Mirror Site

At JSC, the TOAR database is located on a virtual machine (VM) in the HDF Cloud [1], currently reachable as zam10131.zam.kfa-juelich.de. To install a database on your mirror site, follow the instructions given in Section 3.1.1.

3.2.2. Loading Data to Mirror Site

In order to populate a fresh database with the current contents of the TOAR-II database, it is necessary to load a current dump (see the installation guide at https://gitlab.jsc.fz-juelich.de/esde/toar-data/toardb_fastapi#installation-guide). Database dumps are created on a regular basis and made publicly available, but these published snapshots never reflect the current state of the live database system. To get a current dump of toardb_v2, contact the TOAR Data Centre (support@toar-data.org).

3.2.3. Data Updates

Updates of data in the database happen automatically several times per day via the real-time scripts. Manual additions of individually submitted data are done with various Python scripts. For details, see The TOAR Data Processing Workflow.

3.3. Operating a Mirror Site

When operating a mirror site, special precautions should be taken to avoid data loss or a mismatch between the original site and the mirror site. Therefore, the following two sections describe the backup and recovery measures taken at the original site, the TOAR Data Centre.

3.3.1. Backup

Due to the size of the DB (>2 TB), regular backups of the entire DB at short intervals (e.g. one day) are unrealistic. Therefore, we use a continuous archiving scheme, which combines a base backup of the entire DB server contents (which can be taken without any downtime) with so-called write-ahead log (WAL) files.

This approach is described in the PostgreSQL manual [2]. WAL files are archived once they reach a size of 16 MB (the PostgreSQL default), but at the latest after 6 hours.
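
In postgresql.conf, settings consistent with this scheme (gzip-compressed WAL segments archived to /postgres/archive and a forced WAL switch after at most 6 hours) would look roughly as follows; the values actually used at JSC may differ:

wal_level = replica
archive_mode = on
archive_command = 'gzip < %p > /postgres/archive/%f'
archive_timeout = 21600      # seconds, i.e. force a WAL switch after at most 6 hours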

Both the monthly base backup and all WAL files written in the meantime are backed up to JSC’s central backup facility. A mirror of this data is also sent to RWTH Aachen University for a geo-redundant backup. We currently keep the backup data of the past two months as active copies on the system. The oldest of these copies is deleted on the first day of every month, once the new monthly base backup has been created. We expect the backup server to discard old backups more than 30 days after their inactivation.

3.3.2. Recovery

The DB can be recovered using the most recent base backup in /postgres/backup and a recovery.conf file in the data directory (/postgres/data). The single relevant entry for recovery is the following (cf. the PostgreSQL documentation [3]):

restore_command = 'zcat /postgres/archive/%f > %p'

As you can see, the WAL archive files are expected under /postgres/archive, which is the location where they are written. In the current setup, WAL archives are compressed using gzip; therefore they need to be uncompressed with zcat during recovery.
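
Assuming the base backup is stored as a gzip-compressed tar archive (the file name and the service name below are only examples), a recovery on the same host could be sketched as follows; from PostgreSQL 12 onwards, the restore_command goes into postgresql.conf together with an empty recovery.signal file instead of a recovery.conf:

$ sudo systemctl stop postgresql
# move the damaged cluster aside and restore the most recent base backup
$ sudo -u postgres mv /postgres/data /postgres/data.broken
$ sudo -u postgres mkdir -m 700 /postgres/data
$ sudo -u postgres tar -xzf /postgres/backup/base.tar.gz -C /postgres/data
# tell the server how to fetch archived WAL files during recovery
$ sudo -u postgres tee /postgres/data/recovery.conf <<'EOF'
restore_command = 'zcat /postgres/archive/%f > %p'
EOF
$ sudo systemctl start postgresql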

Should the entire VM be lost, the data must be restored from the backup server using the IBM Spectrum Protect (TSM) software [4]. The backup node for the VM is registered as TOAR-DB.

In case of a loss of the entire data centre, including the site-local backup, a copy of the backup is available at the mirror site at RWTH Aachen University, which employs the same backup software as the local setup, and the backup data can be retrieved from there. When building an entirely new instance outside JSC, support from the administrators at RWTH will be required.

3.4. System Configuration

We employ configuration management with Puppet [5], which is responsible for the entire configuration of the system and the database server. The Puppet classes assigned to the host comprise a number of profiles that configure the following aspects:

Table 3.1 System configuration

Module                              Function
fsd::nrpeprobes::pgsql              Nagios NRPE probe to automatically monitor PostgreSQL server availability
fsd::profile::big_postgresql        PostgreSQL server setup for a “big” database with WAL-based continuous backup and a monthly base backup
fsd::profile::defaults              general defaults employed within the Data Services and Infrastructure group at JSC
fsd::profile::firewall::defaults    disallow all inbound traffic on ports other than those explicitly specified
fsd::profile::firewall::tsm         allow inbound traffic from the backup server
fsd::profile::toar_db               create PostgreSQL DB administrators

While this scheme can be employed to quickly bring up another TOAR DB server, manual configuration may be desirable when upgrading between different PostgreSQL versions, as such upgrades are not well supported by the external Puppet classes we use for this purpose. To install multiple PostgreSQL versions at the same time, the packages provided by the PostgreSQL developers are well suited [6].
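
As a sketch, adding the PGDG repository and installing an additional major version on Ubuntu could look like the following; the download page linked in [6] always carries the current instructions, key handling and version numbers, so treat the commands and the version number below as examples only.

$ sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
$ wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install postgresql-14   # installs version 14 alongside an existing cluster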

3.5. File System Backup

For the file system backup, the local TSM backup client is used. As almost everything on the machine is configured automatically, we merely back up the payload data. This comprises /home and /postgres, with the exception of /postgres/data. Consequently, the include/exclude list on the machine is defined as:

exclude /.../*
include /postgres/.../*
exclude.dir /postgres/data
include /home/.../*

3.6. Additional Procedures

After importing large amounts of data, the number and volume of write-ahead log (WAL) files grows considerably until the next base backup is created, which is cumbersome to handle. It is therefore advisable to take an additional base backup after such imports. This can be as easy as running the base_backup script manually, ideally in a terminal multiplexer (tmux) session or using nohup. The script and its parameters can be taken from the cron job of the postgres user:

$ sudo crontab -u postgres -l
  @monthly python3 /postgres/base_backup.py -b /postgres/backup -a /postgres/archive
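
For example, to trigger such an extra base backup manually in a tmux session (run as the postgres user, matching the cron job above):

$ tmux new -s base_backup
$ sudo -u postgres python3 /postgres/base_backup.py -b /postgres/backup -a /postgres/archive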

Generally, we keep the last two monthly base backups and the corresponding WAL files, such that a point-in-time recovery is possible for the past two months. Besides the local backup of database dumps and the WALs, a complete database dump is transferred to RWTH Aachen University once a month. In addition, the daily changes to the latest database dump are also transferred to RWTH Aachen University. Therefore, the database can always be restored from the backups in Aachen in the unlikely event of a complete data loss at JSC.

Footnotes

[1] Jülich Supercomputing Centre. (2019). HDF Cloud – Helmholtz Data Federation Cloud Resources at the Jülich Supercomputing Centre. Journal of large-scale research facilities, 5, A137. http://dx.doi.org/10.17815/jlsrf-5-173

[2] https://www.postgresql.org/docs/10/continuous-archiving.html

[3] https://www.postgresql.org/docs/10/index.html

[4] https://www.ibm.com/products/data-protection-and-recovery

[5] https://puppet.com/docs/puppet/6/puppet_index.html

[6] https://www.postgresql.org/download/linux/ubuntu/