From 6bbcfe7d24a2dcfa71384cefc727bf247eac7e34 Mon Sep 17 00:00:00 2001
From: Tatsuo Ishii
Date: Wed, 12 Jul 2017 17:01:17 +0900
Subject: [PATCH] Add Aurora setting example.
---
 doc/src/sgml/examples.sgml | 1989 +++++++++++++++++++-----------------
 1 file changed, 1029 insertions(+), 960 deletions(-)

diff --git a/doc/src/sgml/examples.sgml b/doc/src/sgml/examples.sgml
index b8d100c03..550e19582 100644
--- a/doc/src/sgml/examples.sgml
+++ b/doc/src/sgml/examples.sgml
@@ -13,993 +13,1062 @@ Configuration Examples

Basic Configuration Example

Let's Begin!

First, we must learn how to install and configure Pgpool-II and database nodes before using replication.

Installing <productname>Pgpool-II</productname>

Installing Pgpool-II is very easy. In the directory where you extracted the source tarball, execute the following commands.

$ ./configure
$ make
$ make install

The configure script collects your system information and uses it for the compilation procedure. You can pass command line arguments to the configure script to change the default behavior, such as the installation directory. Pgpool-II will be installed to the /usr/local directory by default.

The make command compiles the source code, and make install installs the executables. You must have write permission on the installation directory. In this tutorial, we will install Pgpool-II in the default /usr/local directory.

Pgpool-II requires the libpq library of PostgreSQL 7.4 or later (version 3 protocol).

If the configure script displays the following error message, the libpq library may not be installed, or it is not of version 3.

configure: error: libpq is not installed or libpq is old

If the library is version 3, but the above message is still displayed, your libpq library is probably not recognized by the configure script. The configure script searches for the libpq library under /usr/local/pgsql. If you have installed PostgreSQL in a directory other than /usr/local/pgsql, use the --with-pgsql, or --with-pgsql-includedir and --with-pgsql-libdir command line options when you execute configure.

Configuration Files

Pgpool-II configuration parameters are saved in the pgpool.conf file. The file is in "parameter = value" per line format. When you install Pgpool-II, pgpool.conf.sample is automatically created. We recommend copying it, renaming it to pgpool.conf, and editing it as you like.

$ cp /usr/local/etc/pgpool.conf.sample /usr/local/etc/pgpool.conf

By default, Pgpool-II only accepts connections from the localhost using port 9999. If you wish to receive connections from other hosts, set listen_addresses to '*'.

listen_addresses = 'localhost'
port = 9999

We will use the default parameters in this tutorial.

Configuring <acronym>PCP</acronym> Commands

Pgpool-II has an administrative interface for retrieving information on database nodes, shutting down Pgpool-II, and so on over the network. To use the PCP commands, user authentication is required. This authentication is different from PostgreSQL's user authentication. A user name and password need to be defined in the pcp.conf file. In that file, a user name and password are listed as a pair on each line, separated by a colon (:). Passwords are stored as md5 hashes.

postgres:e8a48653851e28c69d0506508fb27fc5

When you install Pgpool-II, pcp.conf.sample is automatically created.
We recommend copying it, renaming it to pcp.conf, and editing it.

$ cp /usr/local/etc/pcp.conf.sample /usr/local/etc/pcp.conf

To encrypt your password into md5 hash format, use the pg_md5 command, which is installed as one of Pgpool-II's executables. pg_md5 takes text as a command line argument and displays its md5-hashed text. For example, give "postgres" as the command line argument, and pg_md5 displays the md5-hashed text on its standard output.

$ /usr/bin/pg_md5 postgres
e8a48653851e28c69d0506508fb27fc5

PCP commands are executed over the network, so the port number must be configured with the pcp_port parameter in the pgpool.conf file. We will use the default 9898 for pcp_port in this tutorial.

pcp_port = 9898

Preparing Database Nodes

Now, we need to set up the backend PostgreSQL servers for Pgpool-II. These servers can be placed on the same host as Pgpool-II, or on separate machines. If you decide to place the servers on the same host, different port numbers must be assigned to each server. If the servers are placed on separate machines, they must be configured properly so that they can accept network connections from Pgpool-II.

backend_hostname0 = 'localhost'
backend_port0 = 5432
backend_weight0 = 1
backend_hostname1 = 'localhost'
backend_port1 = 5433
backend_weight1 = 1
backend_hostname2 = 'localhost'
backend_port2 = 5434
backend_weight2 = 1

For backend_hostname, backend_port and backend_weight, set the node's hostname, port number, and ratio for load balancing. At the end of each parameter name, the node ID must be specified by appending an integer starting from 0 (i.e. 0, 1, 2, ...).

The backend_weight parameters for all nodes are set to 1, meaning that SELECT queries are equally distributed among the three servers.

Starting/Stopping <productname>Pgpool-II</productname>

To fire up Pgpool-II, execute the following command on a terminal.

$ pgpool

The above command, however, prints no log messages because Pgpool-II detaches from the terminal. If you want to see Pgpool-II log messages, pass the -n option to the pgpool command, so that Pgpool-II runs as a non-daemon process and the terminal is not detached.

$ pgpool -n &

The log messages are printed on the terminal, so it is recommended to use the following options.

$ pgpool -n -d > /tmp/pgpool.log 2>&1 &

The -d option enables debug messages. The above command keeps appending log messages to /tmp/pgpool.log. If you need to rotate log files, pipe the logs to an external command that has a log rotation function. For example, you can use rotatelogs from Apache2:

$ pgpool -n 2>&1 | /usr/local/apache2/bin/rotatelogs \
    -l -f /var/log/pgpool/pgpool.log.%A 86400 &

This will generate a log file named "pgpool.log.Thursday" and rotate it at midnight (00:00). rotatelogs appends logs to a file if it already exists. To delete old log files before rotation, you could use cron:

55 23 * * * /usr/bin/find /var/log/pgpool -type f -mtime +5 -exec /bin/rm -f '{}' \;

Please note that rotatelogs may exist as /usr/sbin/rotatelogs2 in some distributions. The -f option generates a log file as soon as rotatelogs starts; it is available in Apache2 2.2.9 or later. cronolog can also be used.

$ pgpool -n 2>&1 | /usr/sbin/cronolog \
    --hardlink=/var/log/pgsql/pgpool.log \
    '/var/log/pgsql/%Y-%m-%d-pgpool.log' &

To stop Pgpool-II, execute the following command.
$ pgpool stop

If any client is still connected, Pgpool-II waits for it to disconnect and then terminates itself. Run the following command instead if you want to shut down Pgpool-II forcibly.

$ pgpool -m fast stop

Your First Replication

Replication enables the same data to be copied to multiple database nodes. In this section, we'll use the three database nodes that we have already set up above, and take you step by step through creating a database replication system. Sample data to be replicated will be generated by the pgbench benchmark program.

Configuring Replication

To enable the database replication function, set replication_mode to on in the pgpool.conf file.

replication_mode = true

When replication_mode is on, Pgpool-II will send a copy of each received query to all the database nodes. In addition, when load_balance_mode is set to true, Pgpool-II will distribute SELECT queries among the database nodes.

load_balance_mode = true

In this section, we will enable both replication_mode and load_balance_mode.

Checking Replication

To reflect the changes in pgpool.conf, Pgpool-II must be restarted. Please refer to "Starting/Stopping Pgpool-II" above. After configuring pgpool.conf and restarting Pgpool-II, let's try the actual replication and see if everything is working. First, we need to create a database to be replicated. We will name it "bench_replication". This database needs to be created on all the nodes. Use the createdb command through Pgpool-II, and the database will be created on all the nodes.

$ createdb -p 9999 bench_replication

Then, we'll execute pgbench with the -i option. The -i option initializes the database with pre-defined tables and data.

$ pgbench -i -p 9999 bench_replication

The following table summarizes the tables and data created by pgbench -i. If the listed tables and data are created on all the nodes, replication is working correctly.

data summary

 Table Name | Number of Rows
 -----------+---------------
 branches   |              1
 tellers    |             10
 accounts   |         100000
 history    |              0
+ + + Let's use a simple shell script to check the above on all the nodes. + The following script will display the number of rows in branches, + tellers, accounts, and history tables on all the nodes (5432, 5433, 5434). + + $ for port in 5432 5433 5434; do + > echo $port + > for table_name in branches tellers accounts history; do + > echo $table_name + > psql -c "SELECT count(*) FROM $table_name" -p $port bench_replication + > done + > done + + + +
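You can also ask Pgpool-II itself for the state of each backend instead of querying every node directly. The following check is a sketch that assumes the default Pgpool-II port 9999 and a user that is allowed to connect through Pgpool-II; the exact columns printed by SHOW pool_nodes differ between Pgpool-II versions, but every backend should be listed with an "up" status and the load-balance weight configured earlier.

$ psql -p 9999 -c "SHOW pool_nodes" bench_replication

If a node is reported as "down" even though its PostgreSQL server is running, re-check the backend_hostname and backend_port settings and restart Pgpool-II.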
+
+ +
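As a last check of the basic setup, the PCP interface configured earlier can be used to confirm that Pgpool-II recognizes all three backends. This is a sketch that assumes the pcp.conf user "postgres" registered above and the libpq-style option syntax of Pgpool-II 3.5 or later; older releases take positional arguments instead.

$ pcp_node_count -h localhost -p 9898 -U postgres
Password:
3

The command asks for the password whose md5 hash is stored in pcp.conf and prints the number of database nodes defined in pgpool.conf, which should be 3 in this tutorial.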
+ + + Watchdog Configuration Example - - Let's Begin! - First, we must learn how to install and configure Pgpool-II and database nodes before using replication. + This tutrial explains the simple way to try "Watchdog". + What you need is 2 Linux boxes on which + Pgpool-II is installed and a PostgreSQL + on the same machine or in the other one. It is enough + that 1 node for backend exists. + You can use watchdog with + Pgpool-II in any mode: replication mode, + master/slave mode and raw mode. - - - Installing <productname>Pgpool-II</productname> - - Installing Pgpool-II is very easy. - In the directory which you have extracted the source tar ball, - execute the following commands. - -$ ./configure -$ make -$ make install - - configure script collects your system information - and use it for the compilation procedure. You can pass command - line arguments to configure script to change the default behavior, - such as the installation directory. Pgpool-II - will be installed to /usr/local directory by default. - - - make command compiles the source code, and - make install will install the executables. - You must have write permission on the installation directory. - In this tutorial, we will install Pgpool-II - in the default /usr/local directory. - - - - Pgpool-II requires libpq - library in PostgreSQL 7.4 or later (version 3 protocol). - - - - If the configure script displays the following error message, the - libpq library may not be installed, or it is not of version 3 - -configure: error: libpq is not installed or libpq is old - - If the library is version 3, but the above message is still displayed, your - libpq library is probably not recognized by the - configure script. - The configure script searches for libpq - library under /usr/local/pgsql. If you have installed the - PostgreSQL in a directory other than /usr/local/pgsql, use - --with-pgsql, or --with-pgsql-includedir - and --with-pgsql-libdir command line options when you - execute configure. - - - - - Configuration Files - - Pgpool-II configuration parameters are saved in the - pgpool.conf file. The file is in "parameter = value" - per line format. When you install Pgpool-II, - pgpool.conf.sample is automatically created. - We recommend copying and renaming it to pgpool.conf, and edit - it as you like. - -$ cp /usr/local/etc/pgpool.conf.sample /usr/local/etc/pgpool.conf - - Pgpool-II only accepts connections from the localhost - using port 9999. If you wish to receive conenctions from other hosts, - set to '*'. - -listen_addresses = 'localhost' -port = 9999 - - We will use the default parameters in thie tutorial. - - - - - Configuring <acronym>PCP</acronym> Commands - - Pgpool-II has an interface for administrative - purpose to retrieve information on database nodes, shutdown - Pgpool-II, etc. via network. To use - PCP commands, user authentication is required. - This authentication is different from PostgreSQL's user authentication. - A user name and password need to be defined in the pcp.conf - file. In the file, a user name and password are listed as a pair on each line, - and they are separated by a colon (:). Passwords are encrypted in - md5 hash format. - - -postgres:e8a48653851e28c69d0506508fb27fc5 - - - When you install Pgpool-II, pcp.conf.sample - is automatically created. We recommend copying and renaming it - to pcp.conf, and edit it. 
- -$ cp /usr/local/etc/pcp.conf.sample /usr/local/etc/pcp.conf - - To encrypt your password into md5 hash format, use the pg_md5 - command, which is installed as one of Pgpool-II's - executables. pg_md5 takes text as a command line argument, - and displays its md5-hashed text. - For example, give "postgres" as the command line argument, - and pg_md5 displays md5-hashed text on its standard output. - -$ /usr/bin/pg_md5 postgres -e8a48653851e28c69d0506508fb27fc5 - - PCP commands are executed via network, so the port number must be configured - with parameter in pgpool.conf file. - We will use the default 9898 for in this tutorial. - -pcp_port = 9898 - - - - - - - Preparing Database Nodes - - Now, we need to set up backend PostgreSQL servers for Pgpool-II - . These servers can be placed within the same host as - Pgpool-II, or on separate machines. If you decide - to place the servers on the same host, different port numbers must be assigned - for each server. If the servers are placed on separate machines, - they must be configured properly so that they can accept network - connections from Pgpool-II. - - -backend_hostname0 = 'localhost' -backend_port0 = 5432 -backend_weight0 = 1 -backend_hostname1 = 'localhost' -backend_port1 = 5433 -backend_weight1 = 1 -backend_hostname2 = 'localhost' -backend_port2 = 5434 -backend_weight2 = 1 - - - For , , - , set the node's hostname, port number, - and ratio for load balancing. At the end of each parameter string, - node ID must be specified by adding positive integers starting with 0 (i.e. 0, 1, 2..). - - - - parameters for all nodes are - set to 1, meaning that SELECT queries are equally distributed among - three servers. - - - - - - Starting/Stopping <productname>Pgpool-II</productname> - - To fire up Pgpool-II, execute the following - command on a terminal. - - -$ pgpool - - - The above command, however, prints no log messages because - Pgpool-II detaches the terminal. If you want to show - Pgpool-II log messages, you pass -n - option to pgpool command so Pgpool-II - is executed as non-daemon process, and the terminal will not be detached. - -$ pgpool -n & - - - The log messages are printed on the terminal, so it is recommended to use the following options. - -$ pgpool -n -d > /tmp/pgpool.log 2>&1 & - - - The -d option enables debug messages to be generated. - The above command keeps appending log messages to /tmp/pgpool.log - . If you need to rotate log files, pass the logs to a external - command which has log rotation function. - For example, you can use - rotatelogs from Apache2: - -$ pgpool -n 2>&1 | /usr/local/apache2/bin/rotatelogs \ - -l -f /var/log/pgpool/pgpool.log.%A 86400 & - - - This will generate a log file named "pgpool.log.Thursday" - then rotate it 00:00 at midnight. Rotatelogs adds logs to a file if it already - exists. To delete old log files before rotation, you could use cron: - -55 23 * * * /usr/bin/find /var/log/pgpool -type f -mtime +5 -exec /bin/rm -f '{}' \; - - - Please note that rotatelogs may exist as /usr/sbin/rotatelogs2 - in some distributions. -f option generates a log file as soon as - rotatelogs starts and is available in apache2 2.2.9 or greater. - Also cronolog can be used. - -$ pgpool -n 2>&1 | /usr/sbin/cronolog \ - --hardlink=/var/log/pgsql/pgpool.log \ - '/var/log/pgsql/%Y-%m-%d-pgpool.log' & - - - To stop Pgpool-II execute the following command. - -$ pgpool stop - - - If any client is still connected, Pgpool-II - waits for it to disconnect, and then terminates itself. 
Run the following - command instead if you want to shutdown Pgpool-II - forcibly. - -$ pgpool -m fast stop - - - - - - - - Your First Replication - Replication (see ) enables - the same data to be copied to multiple database nodes. - In this section, we'll use three database nodes, which we have already set - up in , and takes you step by step to - create a database replication system. - Sample data to be replicated will be generated by the - - pgbench benchmark program. + This example uses use "osspc16" as an Active node and + "osspc20" as a Standby node. "Someserver" means one of them. - - Configuring Replication - - To enable the database replication function, set - to on in pgpool.conf file. - -replication_mode = true - - When is on, Pgpool-II - will send a copy of a received query to all the database nodes. - In addition, when is set to true, - Pgpool-II will distribute SELECT queries - among the database nodes. - -load_balance_mode = true - - In this section, we will enable both - and . - - - - - Checking Replication - - To reflect the changes in pgpool.conf, - Pgpool-II must be restarted. - Please refer to "Starting/Stopping Pgpool-II" - . - After configuring pgpool.conf and restarting the - Pgpool-II, let's try the actual replication - and see if everything is working. - First, we need to create a database to be replicated. We will name it - "bench_replication". This database needs to be created - on all the nodes. Use the - - createdb commands through - Pgpool-II, and the database will be created - on all the nodes. - -$ createdb -p 9999 bench_replication - - Then, we'll execute - pgbench with -i option. - -i option initializes the database with pre-defined tables and data. - -$ pgbench -i -p 9999 bench_replication - - The following table is the summary of tables and data, which will be created by - - pgbench -i. If, on all the nodes, the listed tables and - data are created, replication is working correctly. - - - - data summary - - - - Table Name - Number of Rows - - - - - - branches - 1 - - - - tellers - 10 - - - - accounts - 100000 - - - - history - 0 - - - - -
- - - Let's use a simple shell script to check the above on all the nodes. - The following script will display the number of rows in branches, - tellers, accounts, and history tables on all the nodes (5432, 5433, 5434). - -$ for port in 5432 5433 5434; do -> echo $port -> for table_name in branches tellers accounts history; do -> echo $table_name -> psql -c "SELECT count(*) FROM $table_name" -p $port bench_replication -> done -> done - - - -
-
- -
- - - Watchdog Configuration Example - - - This tutrial explains the simple way to try "Watchdog". - What you need is 2 Linux boxes on which - Pgpool-II is installed and a PostgreSQL - on the same machine or in the other one. It is enough - that 1 node for backend exists. - You can use watchdog with - Pgpool-II in any mode: replication mode, - master/slave mode and raw mode. - - - This example uses use "osspc16" as an Active node and - "osspc20" as a Standby node. "Someserver" means one of them. - + + Common configurations + + Set the following parameters in both of active and standby nodes. + + + + Enabling watchdog + + First of all, set to on. + + use_watchdog = on + # Activates watchdog + + + + + + Configure Up stream servers + + Specify the up stream servers (e.g. application servers). + Leaving it blank is also fine. + + trusted_servers = '' + # trusted server list which are used + # to confirm network connection + # (hostA,hostB,hostC,...) + + + + + + Watchdog Communication + + Specify the TCP port number for watchdog communication. + + wd_port = 9000 + # port number for watchdog service + + + + + + Virtual IP + + Specify the IP address to be used as a virtual IP address + in the . + + delegate_IP = '133.137.177.143' + # delegate IP address + + + + + Make sure the IP address configured as a Virtual IP should be + free and is not used by any other machine. + + + + + + + Individual Server Configurations + + Next, set the following parameters for each + Pgpool-II. + Specify , + and + with the values of + other Pgpool-II server values. + + + + Active (osspc16) Server configurations + + + other_pgpool_hostname0 = 'osspc20' + # Host name or IP address to connect to for other pgpool 0 + other_pgpool_port0 = 9999 + # Port number for othet pgpool 0 + other_wd_port0 = 9000 + # Port number for othet watchdog 0 + + + + + + Standby (osspc20) Server configurations + + + other_pgpool_hostname0 = 'osspc16' + # Host name or IP address to connect to for other pgpool 0 + other_pgpool_port0 = 9999 + # Port number for othet pgpool 0 + other_wd_port0 = 9000 + # Port number for othet watchdog 0 + + + + + + + Starting <productname>Pgpool-II</productname> + + Start Pgpool-II on each servers from + root user with "-n" switch + and redirect log messages into pgpool.log file. + + + + Starting pgpool in Active server (osspc16) + + First start the Pgpool-II on Active server. + + [user@osspc16]$ su - + [root@osspc16]# {installed_dir}/bin/pgpool -n -f {installed_dir}/etc/pgpool.conf > pgpool.log 2>&1 + + Log messages will show that Pgpool-II + has the virtual IP address and starts watchdog process. + + LOG: I am announcing my self as master/coordinator watchdog node + LOG: I am the cluster leader node + DETAIL: our declare coordinator message is accepted by all nodes + LOG: I am the cluster leader node. Starting escalation process + LOG: escalation process started with PID:59449 + LOG: watchdog process is initialized + LOG: watchdog: escalation started + LOG: I am the master watchdog node + DETAIL: using the local backend node status + + + + + + Starting pgpool in Standby server (osspc20) + + Now start the Pgpool-II on Standby server. + + [user@osspc20]$ su - + [root@osspc20]# {installed_dir}/bin/pgpool -n -f {installed_dir}/etc/pgpool.conf > pgpool.log 2>&1 + + Log messages will show that Pgpool-II + has joind the watchdog cluster as standby watchdog. 
+ + LOG: watchdog cluster configured with 1 remote nodes + LOG: watchdog remote node:0 on Linux_osspc16_9000:9000 + LOG: interface monitoring is disabled in watchdog + LOG: IPC socket path: "/tmp/.s.PGPOOLWD_CMD.9000" + LOG: watchdog node state changed from [DEAD] to [LOADING] + LOG: new outbond connection to Linux_osspc16_9000:9000 + LOG: watchdog node state changed from [LOADING] to [INITIALIZING] + LOG: watchdog node state changed from [INITIALIZING] to [STANDBY] + + LOG: successfully joined the watchdog cluster as standby node + DETAIL: our join coordinator request is accepted by cluster leader node "Linux_osspc16_9000" + LOG: watchdog process is initialized + + + + + + + + Try it out + + Confirm to ping to the virtual IP address. + + [user@someserver]$ ping 133.137.177.142 + PING 133.137.177.143 (133.137.177.143) 56(84) bytes of data. + 64 bytes from 133.137.177.143: icmp_seq=1 ttl=64 time=0.328 ms + 64 bytes from 133.137.177.143: icmp_seq=2 ttl=64 time=0.264 ms + 64 bytes from 133.137.177.143: icmp_seq=3 ttl=64 time=0.412 ms + + Confirm if the Active server which started at first has the virtual IP address. + + [root@osspc16]# ifconfig + eth0 ... + + eth0:0 inet addr:133.137.177.143 ... + + lo ... + + Confirm if the Standby server which started not at first doesn't have the virtual IP address. + + [root@osspc20]# ifconfig + eth0 ... + + lo ... + + + Try to connect PostgreSQL by "psql -h delegate_IP -p port". + + [user@someserver]$ psql -h 133.137.177.142 -p 9999 -l + + + + + + Switching virtual IP + + Confirm how the Standby server works when the Active server can't provide its service. + Stop Pgpool-II on the Active server. + + [root@osspc16]# {installed_dir}/bin/pgpool stop + + + Then, the Standby server starts to use the virtual IP address. Log shows: + + + + LOG: remote node "Linux_osspc16_9000" is shutting down + LOG: watchdog cluster has lost the coordinator node + + LOG: watchdog node state changed from [STANDBY] to [JOINING] + LOG: watchdog node state changed from [JOINING] to [INITIALIZING] + LOG: I am the only alive node in the watchdog cluster + HINT: skiping stand for coordinator state + LOG: watchdog node state changed from [INITIALIZING] to [MASTER] + LOG: I am announcing my self as master/coordinator watchdog node + LOG: I am the cluster leader node + DETAIL: our declare coordinator message is accepted by all nodes + + LOG: I am the cluster leader node. Starting escalation process + LOG: watchdog: escalation started + + LOG: watchdog escalation process with pid: 59551 exit with SUCCESS. + + + Confirm to ping to the virtual IP address. + + [user@someserver]$ ping 133.137.177.142 + PING 133.137.177.143 (133.137.177.143) 56(84) bytes of data. + 64 bytes from 133.137.177.143: icmp_seq=1 ttl=64 time=0.328 ms + 64 bytes from 133.137.177.143: icmp_seq=2 ttl=64 time=0.264 ms + 64 bytes from 133.137.177.143: icmp_seq=3 ttl=64 time=0.412 ms + + + Confirm that the Active server doesn't use the virtual IP address any more. + + [root@osspc16]# ifconfig + eth0 ... + + lo ... + + + Confirm that the Standby server uses the virtual IP address. + + [root@osspc20]# ifconfig + eth0 ... + + eth0:0 inet addr:133.137.177.143 ... + + lo ... + + + Try to connect PostgreSQL by "psql -h delegate_IP -p port". + + [user@someserver]$ psql -h 133.137.177.142 -p 9999 -l + + + + + + + More + + + Lifecheck + + There are the parameters about watchdog's monitoring. + Specify the interval to check , + the count to retry , + the qyery to check and + finaly the type of lifecheck . 
+ + wd_lifecheck_method = 'query' + # Method of watchdog lifecheck ('heartbeat' or 'query' or 'external') + # (change requires restart) + wd_interval = 10 + # lifecheck interval (sec) > 0 + wd_life_point = 3 + # lifecheck retry times + wd_lifecheck_query = 'SELECT 1' + # lifecheck query to pgpool from watchdog + + + + + + + Switching virtual IP address + + There are the parameters for switching the virtual IP address. + Specify switching commands , + , the path to them + , the command executed after + switching to send ARP request + and the path to it . + + ifconfig_path = '/sbin' + # ifconfig command path + if_up_cmd = 'ifconfig eth0:0 inet $_IP_$ netmask 255.255.255.0' + # startup delegate IP command + if_down_cmd = 'ifconfig eth0:0 down' + # shutdown delegate IP command + + arping_path = '/usr/sbin' # arping command path + + arping_cmd = 'arping -U $_IP_$ -w 1' + + You can also use the custom scripts to bring up and bring down the + virtual IP using and + configurations. + + + + + + + + AWS Configuration Example - - Common configurations - Set the following parameters in both of active and standby nodes. + This tutrial explains the simple way to try "Watchdog" + on AWS and using + the + Elastic IP Address as the Virtual IP for the high availability solution. + + + You can use watchdog with + Pgpool-II in any mode: replication mode, + master/slave mode and raw mode. + + - - Enabling watchdog - - First of all, set to on. - -use_watchdog = on - # Activates watchdog - - - - - - Configure Up stream servers - - Specify the up stream servers (e.g. application servers). - Leaving it blank is also fine. - -trusted_servers = '' - # trusted server list which are used - # to confirm network connection - # (hostA,hostB,hostC,...) - - - - - - Watchdog Communication - - Specify the TCP port number for watchdog communication. - -wd_port = 9000 - # port number for watchdog service - - - - - - Virtual IP - - Specify the IP address to be used as a virtual IP address - in the . - -delegate_IP = '133.137.177.143' - # delegate IP address - - - - - Make sure the IP address configured as a Virtual IP should be - free and is not used by any other machine. - - - - - - - Individual Server Configurations - - Next, set the following parameters for each - Pgpool-II. - Specify , - and - with the values of - other Pgpool-II server values. - + + AWS Setup + + For this example, we will use two node + Pgpool-II watchdog cluster. So we will set up two + Linux Amazon EC2 instances and one Elastic IP address. + So for this example, do the following steps: + + + + + + Launch two Linux Amazon EC2 instances. For this example, we name these + instances as "instance-1" and "instance-2" + + + + + + Configure the security group for the instances and allow inbound traffic + on ports used by pgpool-II and watchdog. + + + + + + Install the Pgpool-II on both instances. + + + + + + Allocate an Elastic IP address. + For this example, we will use "35.163.178.3" as an Elastic IP address" + + + + + + + + + <productname>Pgpool-II</productname> configurations + + Mostly the Pgpool-II configurations for this + example will be same as in the , except the + which we will not set in this example instead + we will use and + to switch the + Elastic IP address to the maste/Active Pgpool-II node. 
+ + + + <productname>Pgpool-II</productname> configurations on Instance-1 + - - Active (osspc16) Server configurations - - -other_pgpool_hostname0 = 'osspc20' - # Host name or IP address to connect to for other pgpool 0 -other_pgpool_port0 = 9999 - # Port number for othet pgpool 0 -other_wd_port0 = 9000 - # Port number for othet watchdog 0 - - - - - - Standby (osspc20) Server configurations - - -other_pgpool_hostname0 = 'osspc16' - # Host name or IP address to connect to for other pgpool 0 -other_pgpool_port0 = 9999 - # Port number for othet pgpool 0 -other_wd_port0 = 9000 - # Port number for othet watchdog 0 - - - - - - - Starting <productname>Pgpool-II</productname> - - Start Pgpool-II on each servers from - root user with "-n" switch - and redirect log messages into pgpool.log file. - - - - Starting pgpool in Active server (osspc16) - - First start the Pgpool-II on Active server. - -[user@osspc16]$ su - -[root@osspc16]# {installed_dir}/bin/pgpool -n -f {installed_dir}/etc/pgpool.conf > pgpool.log 2>&1 - - Log messages will show that Pgpool-II - has the virtual IP address and starts watchdog process. - -LOG: I am announcing my self as master/coordinator watchdog node -LOG: I am the cluster leader node -DETAIL: our declare coordinator message is accepted by all nodes -LOG: I am the cluster leader node. Starting escalation process -LOG: escalation process started with PID:59449 -LOG: watchdog process is initialized -LOG: watchdog: escalation started -LOG: I am the master watchdog node -DETAIL: using the local backend node status - - - - - - Starting pgpool in Standby server (osspc20) - - Now start the Pgpool-II on Standby server. - -[user@osspc20]$ su - -[root@osspc20]# {installed_dir}/bin/pgpool -n -f {installed_dir}/etc/pgpool.conf > pgpool.log 2>&1 - - Log messages will show that Pgpool-II - has joind the watchdog cluster as standby watchdog. - -LOG: watchdog cluster configured with 1 remote nodes -LOG: watchdog remote node:0 on Linux_osspc16_9000:9000 -LOG: interface monitoring is disabled in watchdog -LOG: IPC socket path: "/tmp/.s.PGPOOLWD_CMD.9000" -LOG: watchdog node state changed from [DEAD] to [LOADING] -LOG: new outbond connection to Linux_osspc16_9000:9000 -LOG: watchdog node state changed from [LOADING] to [INITIALIZING] -LOG: watchdog node state changed from [INITIALIZING] to [STANDBY] - -LOG: successfully joined the watchdog cluster as standby node -DETAIL: our join coordinator request is accepted by cluster leader node "Linux_osspc16_9000" -LOG: watchdog process is initialized - - - - - - - - Try it out - - Confirm to ping to the virtual IP address. - -[user@someserver]$ ping 133.137.177.142 -PING 133.137.177.143 (133.137.177.143) 56(84) bytes of data. -64 bytes from 133.137.177.143: icmp_seq=1 ttl=64 time=0.328 ms -64 bytes from 133.137.177.143: icmp_seq=2 ttl=64 time=0.264 ms -64 bytes from 133.137.177.143: icmp_seq=3 ttl=64 time=0.412 ms - - Confirm if the Active server which started at first has the virtual IP address. - -[root@osspc16]# ifconfig -eth0 ... - -eth0:0 inet addr:133.137.177.143 ... - -lo ... - - Confirm if the Standby server which started not at first doesn't have the virtual IP address. - -[root@osspc20]# ifconfig -eth0 ... - -lo ... - - - Try to connect PostgreSQL by "psql -h delegate_IP -p port". - -[user@someserver]$ psql -h 133.137.177.142 -p 9999 -l - - - - - - Switching virtual IP - - Confirm how the Standby server works when the Active server can't provide its service. - Stop Pgpool-II on the Active server. 
- -[root@osspc16]# {installed_dir}/bin/pgpool stop - - - Then, the Standby server starts to use the virtual IP address. Log shows: - - - -LOG: remote node "Linux_osspc16_9000" is shutting down -LOG: watchdog cluster has lost the coordinator node - -LOG: watchdog node state changed from [STANDBY] to [JOINING] -LOG: watchdog node state changed from [JOINING] to [INITIALIZING] -LOG: I am the only alive node in the watchdog cluster -HINT: skiping stand for coordinator state -LOG: watchdog node state changed from [INITIALIZING] to [MASTER] -LOG: I am announcing my self as master/coordinator watchdog node -LOG: I am the cluster leader node -DETAIL: our declare coordinator message is accepted by all nodes - -LOG: I am the cluster leader node. Starting escalation process -LOG: watchdog: escalation started - -LOG: watchdog escalation process with pid: 59551 exit with SUCCESS. - - - Confirm to ping to the virtual IP address. - -[user@someserver]$ ping 133.137.177.142 -PING 133.137.177.143 (133.137.177.143) 56(84) bytes of data. -64 bytes from 133.137.177.143: icmp_seq=1 ttl=64 time=0.328 ms -64 bytes from 133.137.177.143: icmp_seq=2 ttl=64 time=0.264 ms -64 bytes from 133.137.177.143: icmp_seq=3 ttl=64 time=0.412 ms - - - Confirm that the Active server doesn't use the virtual IP address any more. - -[root@osspc16]# ifconfig -eth0 ... - -lo ... - - - Confirm that the Standby server uses the virtual IP address. - -[root@osspc20]# ifconfig -eth0 ... - -eth0:0 inet addr:133.137.177.143 ... - -lo ... - - - Try to connect PostgreSQL by "psql -h delegate_IP -p port". - -[user@someserver]$ psql -h 133.137.177.142 -p 9999 -l - + + use_watchdog = on + delegate_IP = '' + wd_hostname = 'instance-1-private-ip' + other_pgpool_hostname0 = 'instance-2-private-ip' + other_pgpool_port0 = 9999 + other_wd_port0 = 9000 + wd_escalation_command = '$path_to_script/aws-escalation.sh' + wd_de_escalation_command = '$path_to_script/aws-de-escalation.sh' + - - - - - More - - - Lifecheck - - There are the parameters about watchdog's monitoring. - Specify the interval to check , - the count to retry , - the qyery to check and - finaly the type of lifecheck . - -wd_lifecheck_method = 'query' - # Method of watchdog lifecheck ('heartbeat' or 'query' or 'external') - # (change requires restart) -wd_interval = 10 - # lifecheck interval (sec) > 0 -wd_life_point = 3 - # lifecheck retry times -wd_lifecheck_query = 'SELECT 1' - # lifecheck query to pgpool from watchdog - - - - - - - Switching virtual IP address - - There are the parameters for switching the virtual IP address. - Specify switching commands , - , the path to them - , the command executed after - switching to send ARP request - and the path to it . - -ifconfig_path = '/sbin' - # ifconfig command path -if_up_cmd = 'ifconfig eth0:0 inet $_IP_$ netmask 255.255.255.0' - # startup delegate IP command -if_down_cmd = 'ifconfig eth0:0 down' - # shutdown delegate IP command - -arping_path = '/usr/sbin' # arping command path - -arping_cmd = 'arping -U $_IP_$ -w 1' - - You can also use the custom scripts to bring up and bring down the - virtual IP using and - configurations. - - - - - - - - AWS Configuration Example + + - - This tutrial explains the simple way to try "Watchdog" - on AWS and using - the - Elastic IP Address as the Virtual IP for the high availability solution. - - - You can use watchdog with - Pgpool-II in any mode: replication mode, - master/slave mode and raw mode. 
- - - + + <productname>Pgpool-II</productname> configurations on Instance-2 + - - AWS Setup - - For this example, we will use two node - Pgpool-II watchdog cluster. So we will set up two - Linux Amazon EC2 instances and one Elastic IP address. - So for this example, do the following steps: - - - - - - Launch two Linux Amazon EC2 instances. For this example, we name these - instances as "instance-1" and "instance-2" - - - - - - Configure the security group for the instances and allow inbound traffic - on ports used by pgpool-II and watchdog. - - - - - - Install the Pgpool-II on both instances. - - - - - - Allocate an Elastic IP address. - For this example, we will use "35.163.178.3" as an Elastic IP address" - - - - - - - - - <productname>Pgpool-II</productname> configurations - - Mostly the Pgpool-II configurations for this - example will be same as in the , except the - which we will not set in this example instead - we will use and - to switch the - Elastic IP address to the maste/Active Pgpool-II node. - - - - <productname>Pgpool-II</productname> configurations on Instance-1 - - - -use_watchdog = on -delegate_IP = '' -wd_hostname = 'instance-1-private-ip' -other_pgpool_hostname0 = 'instance-2-private-ip' -other_pgpool_port0 = 9999 -other_wd_port0 = 9000 -wd_escalation_command = '$path_to_script/aws-escalation.sh' -wd_de_escalation_command = '$path_to_script/aws-de-escalation.sh' - - - - - - - <productname>Pgpool-II</productname> configurations on Instance-2 - - - -use_watchdog = on -delegate_IP = '' -wd_hostname = 'instance-2-private-ip' -other_pgpool_hostname0 = 'instance-1-private-ip' -other_pgpool_port0 = 9999 -other_wd_port0 = 9000 -wd_escalation_command = '$path_to_script/aws-escalation.sh' -wd_de_escalation_command = '$path_to_script/aws-de-escalation.sh' - - - - - - - - escalation and de-escalation Scripts - - Create the aws-escalation.sh and aws-de-escalation.sh scripts on both - instances and point the and - to the respective scripts. - - - - - You may need to configure the AWS CLI first on all AWS instances - to enable the execution of commands used by wd-escalation.sh and wd-de-escalation.sh. - See configure AWS CLI - - + + use_watchdog = on + delegate_IP = '' + wd_hostname = 'instance-2-private-ip' + other_pgpool_hostname0 = 'instance-1-private-ip' + other_pgpool_port0 = 9999 + other_wd_port0 = 9000 + wd_escalation_command = '$path_to_script/aws-escalation.sh' + wd_de_escalation_command = '$path_to_script/aws-de-escalation.sh' + - - escalation script + + + + + + escalation and de-escalation Scripts + + Create the aws-escalation.sh and aws-de-escalation.sh scripts on both + instances and point the and + to the respective scripts. + + + + + You may need to configure the AWS CLI first on all AWS instances + to enable the execution of commands used by wd-escalation.sh and wd-de-escalation.sh. + See configure AWS CLI + + + + + escalation script + + + This script will be executed by the watchdog + to assign the Elastic IP on the instance when the watchdog becomes the active/master node. + Change the INSTANCE_ID and ELASTIC_IP values as per your AWS setup values. + + + aws-escalation.sh: + + #! /bin/sh - - This script will be executed by the watchdog - to assign the Elastic IP on the instance when the watchdog becomes the active/master node. - Change the INSTANCE_ID and ELASTIC_IP values as per your AWS setup values. - - - aws-escalation.sh: - -#! 
/bin/sh + ELASTIC_IP=35.163.178.3 + # replace it with the Elastic IP address you + # allocated from the aws console + INSTANCE_ID=i-0a9b64e449b17ed4b + # replace it with the instance id of the Instance + # this script is installed on -ELASTIC_IP=35.163.178.3 - # replace it with the Elastic IP address you - # allocated from the aws console -INSTANCE_ID=i-0a9b64e449b17ed4b - # replace it with the instance id of the Instance - # this script is installed on + echo "Assigning Elastic IP $ELASTIC_IP to the instance $INSTANCE_ID" + # bring up the Elastic IP + aws ec2 associate-address --instance-id $INSTANCE_ID --public-ip $ELASTIC_IP -echo "Assigning Elastic IP $ELASTIC_IP to the instance $INSTANCE_ID" -# bring up the Elastic IP -aws ec2 associate-address --instance-id $INSTANCE_ID --public-ip $ELASTIC_IP + exit 0 + -exit 0 - + - + + + de-escalation script - - - de-escalation script + + This script will be executed by watchdog + to remove the Elastic IP from the instance when the watchdog resign from the active/master node. + + + aws-de-escalation.sh: + + #! /bin/sh - - This script will be executed by watchdog - to remove the Elastic IP from the instance when the watchdog resign from the active/master node. - - - aws-de-escalation.sh: - -#! /bin/sh - -ELASTIC_IP=35.163.178.3 - # replace it with the Elastic IP address you - # allocated from the aws console - -echo "disassociating the Elastic IP $ELASTIC_IP from the instance" -# bring down the Elastic IP -aws ec2 disassociate-address --public-ip $ELASTIC_IP -exit 0 - - - - - - AWS Command References - - - - <ulink url="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html">Configure AWS CLI</ulink> - - - AWS Documentation: Configuring the AWS Command Line Interface - - - - - - <ulink url="http://docs.aws.amazon.com/cli/latest/reference/ec2/associate-address.html">associate-address</ulink> - - - AWS Documentation: associate-address reference - - - - - - <ulink url="http://docs.aws.amazon.com/cli/latest/reference/ec2/disassociate-address.html">disassociate-address</ulink> - - - AWS Documentation: disassociate-address reference - - - - - - - - Try it out - - Start Pgpool-II on each server with "-n" switch - and redirect log messages to the pgpool.log file. - The log message of master/active Pgpool-II node - will show the message of Elastic IP assignment. - -LOG: I am the cluster leader node. Starting escalation process -LOG: escalation process started with PID:23543 -LOG: watchdog: escalation started - -Assigning Elastic IP 35.163.178.3 to the instance i-0a9b64e449b17ed4b -{ - "AssociationId": "eipassoc-39853c42" -} - -LOG: watchdog escalation successful -LOG: watchdog escalation process with pid: 23543 exit with SUCCESS. - - + ELASTIC_IP=35.163.178.3 + # replace it with the Elastic IP address you + # allocated from the aws console - - Confirm to ping to the Elastic IP address. - -[user@someserver]$ ping 35.163.178.3 -PING 35.163.178.3 (35.163.178.3) 56(84) bytes of data. 
-64 bytes from 35.163.178.3: icmp_seq=1 ttl=64 time=0.328 ms -64 bytes from 35.163.178.3: icmp_seq=2 ttl=64 time=0.264 ms -64 bytes from 35.163.178.3: icmp_seq=3 ttl=64 time=0.412 ms - - + echo "disassociating the Elastic IP $ELASTIC_IP from the instance" + # bring down the Elastic IP + aws ec2 disassociate-address --public-ip $ELASTIC_IP + exit 0 + + + + + + AWS Command References + + + + <ulink url="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html">Configure AWS CLI</ulink> + + + AWS Documentation: Configuring the AWS Command Line Interface + + + + + + <ulink url="http://docs.aws.amazon.com/cli/latest/reference/ec2/associate-address.html">associate-address</ulink> + + + AWS Documentation: associate-address reference + + + + + + <ulink url="http://docs.aws.amazon.com/cli/latest/reference/ec2/disassociate-address.html">disassociate-address</ulink> + + + AWS Documentation: disassociate-address reference + + + + + + + + Try it out + + Start Pgpool-II on each server with "-n" switch + and redirect log messages to the pgpool.log file. + The log message of master/active Pgpool-II node + will show the message of Elastic IP assignment. + + LOG: I am the cluster leader node. Starting escalation process + LOG: escalation process started with PID:23543 + LOG: watchdog: escalation started + + Assigning Elastic IP 35.163.178.3 to the instance i-0a9b64e449b17ed4b + { + "AssociationId": "eipassoc-39853c42" + } + + LOG: watchdog escalation successful + LOG: watchdog escalation process with pid: 23543 exit with SUCCESS. + + + + + Confirm to ping to the Elastic IP address. + + [user@someserver]$ ping 35.163.178.3 + PING 35.163.178.3 (35.163.178.3) 56(84) bytes of data. + 64 bytes from 35.163.178.3: icmp_seq=1 ttl=64 time=0.328 ms + 64 bytes from 35.163.178.3: icmp_seq=2 ttl=64 time=0.264 ms + 64 bytes from 35.163.178.3: icmp_seq=3 ttl=64 time=0.412 ms + + + + + Try to connect PostgreSQL by "psql -h ELASTIC_IP -p port". + + [user@someserver]$ psql -h 35.163.178.3 -p 9999 -l + + + + + + Switching Elastic IP + + To confirm if the Standby server acquires the Elastic IP when the + Active server becomes unavailable, Stop the Pgpool-II + on the Active server. Then, the Standby server should start using the Elastic IP address, + And the Pgpool-II log will show the below messages. + + + + LOG: remote node "172.31.2.94:9999 [Linux ip-172-31-2-94]" is shutting down + LOG: watchdog cluster has lost the coordinator node + + LOG: watchdog node state changed from [STANDBY] to [JOINING] + LOG: watchdog node state changed from [JOINING] to [INITIALIZING] + LOG: I am the only alive node in the watchdog cluster + HINT: skiping stand for coordinator state + LOG: watchdog node state changed from [INITIALIZING] to [MASTER] + LOG: I am announcing my self as master/coordinator watchdog node + LOG: I am the cluster leader node + DETAIL: our declare coordinator message is accepted by all nodes + LOG: I am the cluster leader node. Starting escalation process + LOG: escalation process started with PID:23543 + LOG: watchdog: escalation started + + Assigning Elastic IP 35.163.178.3 to the instance i-0dd3e60734a6ebe14 + { + "AssociationId": "eipassoc-39853c42" + } + + LOG: watchdog escalation successful + LOG: watchdog escalation process with pid: 61581 exit with SUCCESS. + + Confirm to ping to the Elastic IP address again. + + [user@someserver]$ ping 35.163.178.3 + PING 35.163.178.3 (35.163.178.3) 56(84) bytes of data. 
+ 64 bytes from 35.163.178.3: icmp_seq=1 ttl=64 time=0.328 ms + 64 bytes from 35.163.178.3: icmp_seq=2 ttl=64 time=0.264 ms + 64 bytes from 35.163.178.3: icmp_seq=3 ttl=64 time=0.412 ms + + + + Try to connect PostgreSQL by "psql -h ELASTIC_IP -p port". + + [user@someserver]$ psql -h 35.163.178.3 -p 9999 -l + + + + + + + Aurora Configuration Example - Try to connect PostgreSQL by "psql -h ELASTIC_IP -p port". - -[user@someserver]$ psql -h 35.163.178.3 -p 9999 -l - + Amazon Aurora for PostgreSQL + Compatibility (Aurora) is a managed service for + PostgreSQL. From user's point of + view, Aurora can be regarded as a + streaming replication cluster with some exceptions. First, + fail over and online recovery are managed + by Aurora. So you don't need to + set , , + and recovery related parameters. In this section we explain + how to set up Pgpool-II for Aurora. - - - Switching Elastic IP - - To confirm if the Standby server acquires the Elastic IP when the - Active server becomes unavailable, Stop the Pgpool-II - on the Active server. Then, the Standby server should start using the Elastic IP address, - And the Pgpool-II log will show the below messages. - - - -LOG: remote node "172.31.2.94:9999 [Linux ip-172-31-2-94]" is shutting down -LOG: watchdog cluster has lost the coordinator node - -LOG: watchdog node state changed from [STANDBY] to [JOINING] -LOG: watchdog node state changed from [JOINING] to [INITIALIZING] -LOG: I am the only alive node in the watchdog cluster -HINT: skiping stand for coordinator state -LOG: watchdog node state changed from [INITIALIZING] to [MASTER] -LOG: I am announcing my self as master/coordinator watchdog node -LOG: I am the cluster leader node -DETAIL: our declare coordinator message is accepted by all nodes -LOG: I am the cluster leader node. Starting escalation process -LOG: escalation process started with PID:23543 -LOG: watchdog: escalation started - -Assigning Elastic IP 35.163.178.3 to the instance i-0dd3e60734a6ebe14 -{ - "AssociationId": "eipassoc-39853c42" -} - -LOG: watchdog escalation successful -LOG: watchdog escalation process with pid: 61581 exit with SUCCESS. - - Confirm to ping to the Elastic IP address again. - -[user@someserver]$ ping 35.163.178.3 -PING 35.163.178.3 (35.163.178.3) 56(84) bytes of data. -64 bytes from 35.163.178.3: icmp_seq=1 ttl=64 time=0.328 ms -64 bytes from 35.163.178.3: icmp_seq=2 ttl=64 time=0.264 ms -64 bytes from 35.163.178.3: icmp_seq=3 ttl=64 time=0.412 ms - - - - Try to connect PostgreSQL by "psql -h ELASTIC_IP -p port". - -[user@someserver]$ psql -h 35.163.178.3 -p 9999 -l - - - - + + Setting pgpool.conf for Aurora + + + + + Create pgpool.conf + from pgpool.conf.sample-stream. + + + + + Set to 0 to + disable streaming replication delay checking. + + + + + Enable to on so + that md5 authentication is enabled + (Aurora always use md5 + authentication). + + + + + Create pool_password. See + for more details. + + + + + Set 0 for the + Aurora writer node. Set + other for the + Aurora reader node. Set + appropreate as + usual. You don't need to + set + + + + + Set ALWAYS_MASTER flag to + the for the master + node. + + + + + +
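Putting the steps above together, a minimal pgpool.conf sketch for Aurora could look like the following. The endpoint names are placeholders for your own cluster's writer (cluster) endpoint and reader endpoint, the per-node flag is set through the backend_flag parameter, and everything not shown here keeps the values from pgpool.conf.sample-stream.

sr_check_period = 0
    # disable streaming replication delay checking
enable_pool_hba = on
    # Aurora always uses md5 authentication, so pool_hba.conf and pool_passwd are needed
backend_hostname0 = 'example.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com'
    # placeholder: Aurora writer (cluster) endpoint
backend_port0 = 5432
backend_weight0 = 1
backend_flag0 = 'ALWAYS_MASTER'
    # the writer endpoint is always treated as the master node
backend_hostname1 = 'example.cluster-ro-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com'
    # placeholder: Aurora reader endpoint
backend_port1 = 5432
backend_weight1 = 1

Since failover and online recovery are managed by Aurora itself, the failover and recovery related parameters can be left at their defaults.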
-- 2.39.5