</sect1>
<sect1 id="runtime-config-running-mode">
- <!--
- <title>Running mode</title>
- -->
- <title>動作モード</title>
-
- <sect2 id="runtime-config-master-slave-mode">
- <!--
- <title>Master slave mode</title>
- -->
- <title>マスタースレーブモード</title>
-
- <para>
- <!--
- This mode is used to couple <productname>Pgpool-II</productname>
- with another master/slave replication software (like <acronym>Slony-I</acronym>
- and Streaming replication), that is responsible for doing the actual data replication.
- -->
- このモードは<productname>Pgpool-II</productname>と(<acronym>Slony-I</acronym>やストリーミングレプリケーションのような)他のマスター/スレーブ型のレプリケーションソフトウェアと組み合わせるのに使用されます。
- 実際にデータレプリケーションを行うのはこれらのソフトウェアに任されます。
- </para>
-
- <note>
- <para>
- <!--
- The number of slave nodes are not limited to 1 and
- <productname>Pgpool-II</productname> can have up to 127 slave nodes.
- master/slave mode can also work just master node without any slave nodes.
- -->
- スレーブノードの数は1つに限定されず、<productname>Pgpool-II</productname>は127個までのスレーブノードを持つことができます。
- マスタースレーブモードは、スレーブノードが1つも存在しない場合マスターノードのみを動作させることができます。
- </para>
- </note>
-
- <para>
- <!--
- Load balancing (see <xref linkend="runtime-config-load-balancing"> ) can
- also be used with master/slave mode to distribute the read load on the
- standby backend nodes.
- -->
- 参照負荷をスタンバイバックエンドノードに振り分ける負荷分散(<xref linkend="runtime-config-load-balancing">を参照)もマスタースレーブモードと共に使用可能です。
- </para>
- <para>
- <!--
- Following options are required to be specified for master/slave mode.
- -->
- マスタースレーブモードでは以下のオプションを設定する必要があります。
- </para>
+ <title>クラスタリングモード</title>
<variablelist>
-
- <varlistentry id="guc-master-slave-mode" xreflabel="master_slave_mode">
- <term><varname>master_slave_mode</varname> (<type>boolean</type>)
+ <varlistentry id="guc-backend-clustering-mode" xreflabel="backend_clustering_mode">
+ <term><varname>backend_clustering_mode</varname> (<type>enum</type>)
<indexterm>
- <!--
- <primary><varname>master_slave_mode</varname> configuration parameter</primary>
- -->
- <primary><varname>master_slave_mode</varname> 設定パラメータ</primary>
+ <primary><varname>backend_clustering_mode</varname>設定パラメータ</primary>
</indexterm>
</term>
<listitem>
<para>
- <!--
- Setting to on enables the master/slave mode.
- Default is off.
- -->
- マスタースレーブモードを有効にします。
- デフォルトはoffです。
- </para>
- <note>
- <para>
- <!--
- <xref linkend="guc-master-slave-mode"> and <xref linkend="guc-replication-mode">
- are mutually exclusive and only one can be enabled at a time.
- -->
- <xref linkend="guc-master-slave-mode">と<xref linkend="guc-replication-mode">は相互に排他的で、一度に一方しか有効にすることができません。
- </para>
- </note>
- <para>
- <!--
- This parameter can only be set at server start.
- -->
- このパラメータはサーバ起動時にのみ設定可能です。
+ クラスタリングモードは<productname>PostgreSQL</productname>の同期を取る方法を指定します。
+ クラスタリングモードの設定には<varname>backend_clustering_mode</varname>を使用します。
+ この節ではクラスタリングモードの設定方法を説明します。
+ 詳細は<xref linkend="planning-postgresql">をご覧ください。
</para>
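+       <para>
+	設定例を示します。
+	有効な値は'streaming_replication'、'native_replication'、'logical_replication'、'slony'、'raw'のいずれかです。
+<programlisting>
+backend_clustering_mode = 'streaming_replication'
+    </programlisting>
+       </para>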
</listitem>
</varlistentry>
+ </variablelist>
- <varlistentry id="guc-master-slave-sub-mode" xreflabel="master_slave_sub_mode">
- <term><varname>master_slave_sub_mode</varname> (<type>enum</type>)
- <indexterm>
- <!--
- <primary><varname>master_slave_sub_mode</varname> configuration parameter</primary>
- -->
- <primary><varname>master_slave_sub_mode</varname> 設定パラメータ</primary>
- </indexterm>
- </term>
- <listitem>
- <para>
- <!--
- Specifies the external replication system used for data replication between
- <productname>PostgreSQL</> nodes.
- Below table contains the list of valid values for the parameter.
- -->
- <productname>PostgreSQL</>ノード間のデータレプリケーションに用いる外部のレプリケーションシステムを指定します。
- 以下の表にこのパラメータで有効な値のリストを示します。
-
- </para>
-
- <table id="master-slave-sub-mode-table">
- <!--
- <title>master slave sub mode options</title>
- -->
- <title>master_slave_sub_modeオプション</title>
- <tgroup cols="2">
- <thead>
- <row>
- <!--
- <entry>Value</entry>
- <entry>Description</entry>
- -->
- <entry>値</entry>
- <entry>説明</entry>
- </row>
- </thead>
-
- <tbody>
- <row>
- <entry><literal>'stream'</literal></entry>
- <!--
- <entry>Suitable for <productname>PostgreSQL</>'s built-in replication system (Streaming Replication)</entry>
- -->
- <entry><productname>PostgreSQL</>の組み込みレプリケーションシステム(ストリーミングレプリケーション)に適合</entry>
- </row>
-
- <row>
- <entry><literal>'slony'</literal></entry>
- <!--
- <entry>Suitable for <acronym>Slony-I</acronym></entry>
- -->
- <entry> <acronym>Slony-I</acronym>に適合</entry>
- </row>
+ <sect2 id="runtime-config-streaming-replication-mode">
+ <title>ストリーミングレプリケーションモード</title>
- <row>
- <entry><literal>'logical'</literal></entry>
- <!--
- <entry>Suitable for <productname>PostgreSQL</>'s built-in replication system (Logical Replication)</entry>
- -->
- <entry><productname>PostgreSQL</>の組み込みレプリケーションシステム(ロジカルレプリケーション)に適合</entry>
- </row>
+ <para>
+	このモードはもっともよく使われており、推奨されるクラスタリングモードです。
+	このモードでは<productname>PostgreSQL</productname>自身がサーバ間のレプリケーションを行います。
+ このモードを有効にするには<varname>backend_clustering_mode</varname>に'streaming_replication'を設定してください。
+ <programlisting>
+backend_clustering_mode = 'streaming_replication'
+ </programlisting>
+ このモードでは127台までのストリーミングレプリケーションスタンバイサーバを使用できます。
+ また、スタンバイサーバをまったく使用しないことも可能です。
+ </para>
+ <para>
+ ストリーミングレプリケーションモードで使用する追加のパラメータについては<xref linkend="runtime-streaming-replication-check">をご覧ください。
+ </para>
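+   <para>
+    以下は設定の一例です(値はあくまで例であり、環境に合わせて調整してください)。
+<programlisting>
+sr_check_period = 10
+sr_check_user = 'pgpool'
+delay_threshold = 10000000
+    </programlisting>
+   </para>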
+ </sect2>
+
+ <sect2 id="runtime-config-logical-replication-mode">
+ <title>ロジカルレプリケーションモード</title>
- </tbody>
- </tgroup>
- </table>
+ <para>
+ このモードは最近追加されました。
+	このモードでは<productname>PostgreSQL</productname>自身がサーバ間のレプリケーションを行います。
+ このモードを有効にするには<varname>backend_clustering_mode</varname>に'logical_replication'を設定してください。
+ <programlisting>
+backend_clustering_mode = 'logical_replication'
+ </programlisting>
+	このモードでは127台までのロジカルレプリケーションスタンバイサーバを使用できます。
+ また、スタンバイサーバをまったく使用しないことも可能です。
+ </para>
+ </sect2>
- <para>
- <!--
- Default is <literal>'stream'</literal>.
- -->
- デフォルトは<literal>'stream'</literal>です。
- </para>
- <para>
- <!--
- This parameter can only be set at server start.
- -->
- このパラメータはサーバ起動時にのみ設定可能です。
- </para>
- </listitem>
- </varlistentry>
+ <sect2 id="runtime-config-slony-mode">
+ <title>Slonyモード</title>
- </variablelist>
+ <para>
+ このモードでは<productname>Pgpool-II</productname>を<acronym>Slony-I</acronym>と組み合わせて使用します。
+ Slony-Iが実際にデータのレプリケーションを行います。
+ このモードを有効にするには<varname>backend_clustering_mode</varname>に'slony'を設定してください。
+ <programlisting>
+backend_clustering_mode = 'slony'
+ </programlisting>
+ このモードでは127台までのスレーブサーバを使用できます。
+ また、スレーブサーバをまったく使用しないことも可能です。
+ </para>
</sect2>
- <sect2 id="runtime-config-replication-mode">
- <!--
- <title>Replication mode</title>
- -->
- <title>レプリケーションモード</title>
+ <sect2 id="guc-replication-mode" xreflabel="native_replication_mode">
+    <title>ネイティブレプリケーションモード</title>
<para>
<!--
<variablelist>
- <varlistentry id="guc-replication-mode" xreflabel="replication_mode">
- <term><varname>replication_mode</varname> (<type>boolean</type>)
- <indexterm>
- <!--
- <primary><varname>replication_mode</varname> configuration parameter</primary>
- -->
- <primary><varname>replication_mode</varname> 設定パラメータ</primary>
- </indexterm>
- </term>
- <listitem>
- <para>
- <!--
- Setting to on enables the replication mode.
- Default is off.
- -->
- レプリケーションモードを有効にします。
- デフォルトはoffです。
- </para>
- <note>
- <para>
- <!--
- <xref linkend="guc-replication-mode"> and <xref linkend="guc-master-slave-mode">
- are mutually exclusive and only one can be enabled at a time.
- -->
- <xref linkend="guc-replication-mode">と<xref linkend="guc-master-slave-mode">は相互に排他的で、一度に一方しか有効にすることができません。
- </para>
- </note>
- <para>
- <!--
- This parameter can only be set at server start.
- -->
- このパラメータはサーバ起動時にのみ設定可能です。
- </para>
- </listitem>
- </varlistentry>
-
<varlistentry id="guc-replication-stop-on-mismatch" xreflabel="replication_stop_on_mismatch">
<term><varname>replication_stop_on_mismatch</varname> (<type>boolean</type>)
<indexterm>
<ulink url="https://www.postgresql.org/docs/current/static/pgbench.html">
<command>pgbench</command></ulink> benchmark program.
-->
-	  レプリケーション（<xref linkend="runtime-config-replication-mode">を参照)では複数のデータベースノードに同じデータを複製して格納します。
+	  ネイティブレプリケーションモード（<xref linkend="guc-replication-mode">を参照)では複数のデータベースノードに同じデータを複製して格納します。
ここでは、<xref linkend="example-configs-begin">で準備した 3 台のデータベースノードを使用し、一歩一歩データベースクラスタシステムを作っていきましょう。
複製させるサンプルのデータは<ulink url="https://www.postgresql.org/docs/current/static/pgbench.html"><command>pgbench</command></ulink>ベンチマークプログラムで生成することにします。
</para>
</para>
<sect2 id="planning-postgresql">
- <title>PostgreSQLの動作モード</title>
+ <title>PostgreSQLのクラスタリングモード</title>
<para>
<productname>PostgreSQL</productname>の導入台数は1以上が可能ですが、1台ではその<productname>PostgreSQL</productname>がダウンした時にデータベースシステム全体が使えなくなるため、通常2台以上の<productname>PostgreSQL</productname>を導入します。
2台以上の<productname>PostgreSQL</productname>を用いる場合、何らかの方法でそれらのデータベース内容を同じになるようにしなければなりません。
- データベースの同期方法の違いをここでは「動作モード」と呼びます。
- もっとも広く使われている動作モードは、「ストリーミングレプリケーションモード」です。
+ データベースの同期方法の違いをここでは「クラスタリングモード」と呼びます。
+ もっとも広く使われているクラスタリングモードは、「ストリーミングレプリケーションモード」です。
特に何か特別な考慮が必要でなければ、ストリーミングレプリケーションモードを選択することをお勧めします。
動作モードの詳細については<xref linkend="running-mode">をご覧ください。
</para>
<title>負荷分散</title>
<para>
- <!--
- <productname>Pgpool-II</productname> load balancing of SELECT queries
- works with Master Slave mode (<xref linkend="runtime-config-master-slave-mode">)
- and Replication mode (<xref linkend="runtime-config-replication-mode">). When enabled
- <productname>Pgpool-II</productname> sends the writing queries to the
- <acronym>primay node</acronym> in Master Slave mode, all of the
- backend nodes in Replication mode, and other queries get load
- balanced among all backend nodes. To which node the load
- balancing mechanism sends read queries is decided at the session
- start time and will not be changed until the session ends. However
- there are some exceptions. See below for more details.
- -->
- <productname>Pgpool-II</productname>のSELECTクエリの負荷分散はマスタースレーブモード(<xref linkend="runtime-config-master-slave-mode">)とレプリケーションモード(<xref linkend="runtime-config-replication-mode">)で動作します。
+ <productname>Pgpool-II</productname>のSELECTクエリの負荷分散はrawモードを除くすべてのクラスタリングモードで動作します。
有効時、<productname>Pgpool-II</productname>は更新を伴うクエリを、マスタースレーブモードでは<acronym>プライマリノード</acronym>に、レプリケーションモードでは全てのバックエンドノードに対し送信します。
そして、その他のクエリは全てのバックエンドの間で負荷分散されます。
負荷分散メカニズムが参照クエリをどのノードに送信するかはセッション開始時に決められ、セッションの終了まで変更されません。
<!--
Change master_slave_sub_mode default to 'stream'. (Tatsuo Ishii)
-->
- <xref linkend="guc-master-slave-sub-mode">パラメータのデフォルト値を「stream」に変更しました。(Tatsuo Ishii)
+ master_slave_sub_modeパラメータのデフォルト値を「stream」に変更しました。(Tatsuo Ishii)
</para>
<para>
<!--
</para>
<para>
- <!--
- For each <productname>Pgpool-II</productname> operation mode,
- there are sample configurations.
- -->
- 各<productname>Pgpool-II</productname>の動作モードについて設定のサンプルがあります。
+ 各<productname>Pgpool-II</productname>のクラスタリングモードについて設定のサンプルがあります。
</para>
<entry>Operation mode</entry>
<entry>Configuration file name</entry>
-->
- <entry>動作モード</entry>
+ <entry>クラスタリングモード</entry>
<entry>設定ファイル名</entry>
</row>
</thead>
<entry><literal>pgpool.conf.sample-replication</literal></entry>
</row>
<row>
- <!--
- <entry>Master slave mode</entry>
- -->
- <entry>マスタースレーブモード</entry>
- <entry><literal>pgpool.conf.sample-master-slave</literal></entry>
+ <entry>ロジカルレプリケーションモード</entry>
+ <entry><literal>pgpool.conf.sample-logical</literal></entry>
</row>
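+	  <row>
+	   <entry>Slonyモード</entry>
+	   <entry><literal>pgpool.conf.sample-slony</literal></entry>
+	  </row>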
<row>
- <!--
- <entry>Raw mode</entry>
- -->
<entry>Rawモード</entry>
- <entry><literal>pgpool.conf.sample</literal> </entry>
- </row>
- <row>
- <!--
- <entry>Logical replication mode</entry>
- -->
- <entry>ロジカルレプリケーションモード</entry>
- <entry><literal>pgpool.conf.sample-logical</literal> </entry>
+ <entry><literal>pgpool.conf.sample-raw</literal> </entry>
</row>
</tbody>
</tgroup>
<!--
<title>Running mode of Pgpool-II</title>
-->
- <title> Pgpool-IIの動作モード</title>
+ <title>Pgpool-IIのクラスタリングモード</title>
<indexterm zone="running-mode">
<!--
<primary>streaming replication mode</primary>
</indexterm>
<para>
- <!--
- There are four different running modes in <productname>Pgpool-II</>: streaming
- replication mode, master slave mode, native replication mode and
- raw mode. In any mode, <productname>Pgpool-II</> provides connection pooling,
- automatic fail over and online recovery.
- -->
- <productname>Pgpool-II</>にはストリーミングレプリケーションモード、ロジカルレプリケーションモード、マスタースレーブモード(slonyモード)、ネイティブレプリケーションモード、rawモードの5つの動作モードがあります。
+ <productname>Pgpool-II</>にはストリーミングレプリケーションモード、ロジカルレプリケーションモード、Slonyモード、ネイティブレプリケーションモード、rawモードの5つのクラスタリングモードがあります。
いずれのモードにおいても、<productname>Pgpool-II</>はコネクションプーリング、自動フェイルオーバ、オンラインリカバリの機能を提供します。
</para>
<xref linkend="guc-master-slave-sub-mode"> to <literal>'stream'</literal>.
-->
<productname>Pgpool-II</productname>は<productname>PostgreSQL</> 9.0から利用可能になった<productname>PostgreSQL</>組み込みのストリーミングレプリケーション機能と一緒に動作することができます。
- ストリーミングレプリケーション向けに<productname>Pgpool-II</productname>を設定するには、<xref linkend="guc-master-slave-mode">を有効にして<xref linkend="guc-master-slave-sub-mode">を<literal>'stream'</literal>に設定します。
+	  ストリーミングレプリケーション向けに<productname>Pgpool-II</productname>を設定するには、<xref linkend="guc-backend-clustering-mode">に<literal>'streaming_replication'</literal>を設定します。
</para>
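+  <para>
+   設定例:
+<programlisting>
+backend_clustering_mode = 'streaming_replication'
+   </programlisting>
+  </para>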
<para>
<!--
</sect1>
<sect1 id="runtime-config-running-mode">
- <title>Running mode</title>
-
- <sect2 id="runtime-config-master-slave-mode">
- <title>Master slave mode</title>
-
- <para>
- This mode is used to couple <productname>Pgpool-II</productname>
- with another master/slave replication software
- (like <acronym>Slony-I</acronym> and Streaming replication),
- that is responsible for doing the actual data replication.
- </para>
-
- <note>
- <para>
- The number of slave nodes are not limited to 1 and
- <productname>Pgpool-II</productname> can have up to 127 slave nodes.
- master/slave mode can also work just master node without any slave nodes.
- </para>
- </note>
-
- <para>
- Load balancing (see <xref linkend="runtime-config-load-balancing"> ) can
- also be used with master/slave mode to distribute the read load on the
- standby backend nodes.
- </para>
- <para>
- Following options are required to be specified for master/slave mode.
- </para>
-
+ <title>Clustering mode</title>
+ <para>
<variablelist>
-
- <varlistentry id="guc-master-slave-mode" xreflabel="master_slave_mode">
- <term><varname>master_slave_mode</varname> (<type>boolean</type>)
+ <varlistentry id="guc-backend-clustering-mode" xreflabel="backend_clustering_mode">
+ <term><varname>backend_clustering_mode</varname> (<type>enum</type>)
<indexterm>
- <primary><varname>master_slave_mode</varname> configuration parameter</primary>
+ <primary><varname>backend_clustering_mode</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
- Setting to on enables the master/slave mode.
- Default is off.
- </para>
- <note>
- <para>
- <xref linkend="guc-master-slave-mode"> and <xref linkend="guc-replication-mode">
- are mutually exclusive and only one can be enabled at a time.
- </para>
- </note>
- <para>
- This parameter can only be set at server start.
+         Clustering mode is the method used to sync
+         <productname>PostgreSQL</productname> servers. To set the clustering
+         mode, use <varname>backend_clustering_mode</varname>. This
+         section describes how to set the clustering mode. See <xref
+         linkend="planning-postgresql"> for more details.
</para>
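+        <para>
+         For example, to select the streaming replication mode (valid
+         values are 'streaming_replication', 'native_replication',
+         'logical_replication', 'slony' and 'raw'):
+<programlisting>
+backend_clustering_mode = 'streaming_replication'
+    </programlisting>
+        </para>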
</listitem>
</varlistentry>
+ </variablelist>
+ </para>
- <varlistentry id="guc-master-slave-sub-mode" xreflabel="master_slave_sub_mode">
- <term><varname>master_slave_sub_mode</varname> (<type>enum</type>)
- <indexterm>
- <primary><varname>master_slave_sub_mode</varname> configuration parameter</primary>
- </indexterm>
- </term>
- <listitem>
- <para>
- Specifies the external replication system used for data
- replication between
- <productname>PostgreSQL</productname> nodes.
- Below table contains the list of valid values for the parameter.
- </para>
-
- <table id="master-slave-sub-mode-table">
- <title>master slave sub mode options</title>
- <tgroup cols="2">
- <thead>
- <row>
- <entry>Value</entry>
- <entry>Description</entry>
- </row>
- </thead>
-
- <tbody>
- <row>
- <entry><literal>'stream'</literal></entry>
- <entry>Suitable
- for <productname>PostgreSQL</productname>'s built-in
- replication system (Streaming Replication)</entry>
- </row>
-
- <row>
- <entry><literal>'slony'</literal></entry>
- <entry>Suitable for <acronym>Slony-I</acronym></entry>
- </row>
+ <sect2 id="runtime-config-streaming-replication-mode">
+ <title>Streaming replication mode</title>
- <row>
- <entry><literal>'logical'</literal></entry>
- <entry>Suitable
- for <productname>PostgreSQL</productname>'s built-in
- replication system (Logical Replication)</entry>
- </row>
+ <para>
+    This mode is the most popular and recommended clustering mode. In this
+    mode <productname>PostgreSQL</productname> is responsible for
+    replicating each server. To enable this mode, set
+    <varname>backend_clustering_mode</varname> to
+    'streaming_replication'.
+ <programlisting>
+backend_clustering_mode = 'streaming_replication'
+ </programlisting>
+    In this mode you can have up to 127 streaming replication standby servers.
+    It is also possible to have no standby servers at all.
+ </para>
+ <para>
+ See <xref linkend="runtime-streaming-replication-check"> for
+ additional parameters for streaming replication mode.
+ </para>
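+   <para>
+    For example (the values shown below are only an illustration and
+    should be adjusted for your environment):
+<programlisting>
+sr_check_period = 10
+sr_check_user = 'pgpool'
+delay_threshold = 10000000
+    </programlisting>
+   </para>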
+ </sect2>
+
+ <sect2 id="runtime-config-logical-replication-mode">
+ <title>Logical replication mode</title>
- </tbody>
- </tgroup>
- </table>
+ <para>
+    This mode was added recently. In this mode
+    <productname>PostgreSQL</productname> is responsible for replicating
+    each server. To enable this mode, set
+    <varname>backend_clustering_mode</varname> to 'logical_replication'.
+ <programlisting>
+backend_clustering_mode = 'logical_replication'
+ </programlisting>
+    In this mode you can have up to 127 logical replication standby servers.
+    It is also possible to have no standby servers at all.
+ </para>
+ </sect2>
- <para>
- Default is <literal>'stream'</literal>.
- </para>
- <para>
- This parameter can only be set at server start.
- </para>
- </listitem>
- </varlistentry>
+ <sect2 id="runtime-config-slony-mode">
+ <title>Slony mode</title>
- </variablelist>
+ <para>
+    This mode is used to couple <productname>Pgpool-II</productname>
+    with <acronym>Slony-I</acronym>. Slony-I is responsible for doing
+    the actual data replication. To enable this mode, set
+    <varname>backend_clustering_mode</varname> to 'slony'.
+ <programlisting>
+backend_clustering_mode = 'slony'
+ </programlisting>
+    In this mode you can have up to 127 slave servers. It is also
+    possible to have no slave servers at all.
+ </para>
</sect2>
- <sect2 id="runtime-config-replication-mode">
- <title>Replication mode</title>
+ <sect2 id="guc-replication-mode" xreflabel="native_replication_mode">
+ <title>Native replication mode</title>
<para>
This mode makes the <productname>Pgpool-II</productname> to
replicate data between <productname>PostgreSQL</productname>
- backends.
+      backends. To enable this mode, set
+      <varname>backend_clustering_mode</varname> to 'native_replication'.
+ <programlisting>
+backend_clustering_mode = 'native_replication'
+ </programlisting>
+      In this mode you can have up to 127 slave replication servers.
+      It is also possible to have no slave servers at all.
</para>
- <para>
- Load balancing
- (see <xref linkend="runtime-config-load-balancing"> ) can also
- be used with replication mode to distribute the load to the
- attached backend nodes.
- </para>
<para>
Following options affect the behavior of
<productname>Pgpool-II</productname> in the replication mode.
<variablelist>
- <varlistentry id="guc-replication-mode" xreflabel="replication_mode">
- <term><varname>replication_mode</varname> (<type>boolean</type>)
- <indexterm>
- <primary><varname>replication_mode</varname> configuration parameter</primary>
- </indexterm>
- </term>
- <listitem>
- <para>
- Setting to on enables the replication mode.
- Default is off.
- </para>
- <note>
- <para>
- <xref linkend="guc-replication-mode">
- and <xref linkend="guc-master-slave-mode"> are
- mutually exclusive and only one can be enabled at a
- time.
- </para>
- </note>
- <para>
- This parameter can only be set at server start.
- </para>
- </listitem>
- </varlistentry>
-
<varlistentry id="guc-replication-stop-on-mismatch" xreflabel="replication_stop_on_mismatch">
<term><varname>replication_stop_on_mismatch</varname> (<type>boolean</type>)
<indexterm>
<sect2 id="example-configs-replication">
<title>Your First Replication</title>
<para>
- Replication (see <xref linkend="runtime-config-replication-mode">) enables
+ Replication (see <xref linkend="guc-replication-mode">) enables
the same data to be copied to multiple database nodes.
In this section, we'll use three database nodes, which we have already set
up in <xref linkend="example-configs-begin">, and takes you step by step to
</para>
<sect2 id="planning-postgresql">
- <title>Running mode of PostgreSQL</title>
+ <title>Clustering mode of PostgreSQL</title>
<para>
It is possible to have more than or equal to one installation of
<productname>PostgreSQL</productname>, it is common to have more
<productname>PostgreSQL</productname> is not available. When we
use two or more <productname>PostgreSQL</productname> servers, it
is necessary to sync the databases in some way. We call the
- methods of syncing databases as "running mode". The most popular
- mode ever used is "streaming replication mode". Unless there's
- necessity to have special consideration, it is recommended to use
- the streaming replication mode. See <xref linkend="running-mode">
- for more details of running mode.
+      methods of syncing databases "clustering modes". The most
+      popular mode is the "streaming replication mode". Unless
+      special consideration is necessary, it is recommended to use
+      the streaming replication mode. See <xref linkend="running-mode">
+      for more details on the clustering modes.
</para>
<para>
The next thing we need to consider is how many
<para>
<productname>Pgpool-II</productname> load balancing of SELECT queries
- works with Master Slave mode (<xref linkend="runtime-config-master-slave-mode">)
- and Replication mode (<xref linkend="runtime-config-replication-mode">). When enabled
+ works with any clustering mode except raw mode. When enabled
<productname>Pgpool-II</productname> sends the writing queries to the
<acronym>primary node</acronym> in Master Slave mode, all of the
backend nodes in Replication mode, and other queries get load
</para>
<para>
- For each <productname>Pgpool-II</productname> operation mode,
+ For each <productname>Pgpool-II</productname> clustering mode,
there are sample configurations.
</para>
<tgroup cols="2">
<thead>
<row>
- <entry>Operation mode</entry>
+ <entry>Clustering mode</entry>
<entry>Configuration file name</entry>
</row>
</thead>
<tbody>
<row>
<entry>Streaming replication mode</entry>
+ <entry><literal>pgpool.conf.sample-stream</literal></entry>
</row>
<row>
<entry>Replication mode</entry>
<entry><literal>pgpool.conf.sample-replication</literal></entry>
</row>
<row>
- <entry>Master slave mode</entry>
+ <entry>Logical replication mode</entry>
+ <entry><literal>pgpool.conf.sample-logical</literal> </entry>
</row>
<row>
- <entry>Raw mode</entry>
- <entry><literal>pgpool.conf.sample</literal> </entry>
+ <entry>Slony mode</entry>
+ <entry><literal>pgpool.conf.sample-slony</literal></entry>
</row>
<row>
- <entry>Logical replication mode</entry>
- <entry><literal>pgpool.conf.sample-logical</literal> </entry>
+ <entry>Raw mode</entry>
+ <entry><literal>pgpool.conf.sample-raw</literal> </entry>
</row>
</tbody>
</tgroup>
<productname>Pgpool-II</productname> can work with <productname>PostgreSQL</> native
Streaming Replication, that is available since <productname>PostgreSQL</> 9.0.
To configure <productname>Pgpool-II</productname> with streaming
- replication, enable <xref linkend="guc-master-slave-mode"> and set
- <xref linkend="guc-master-slave-sub-mode"> to <literal>'stream'</literal>.
+    replication, set
+    <xref linkend="guc-backend-clustering-mode"> to <literal>'streaming_replication'</literal>.
</para>
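+  <para>
+   For example:
+<programlisting>
+backend_clustering_mode = 'streaming_replication'
+   </programlisting>
+  </para>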
<para>
<productname>Pgpool-II</productname> assumes that Streaming Replication
sample/pcp.conf.sample \
sample/pool_hba.conf.sample \
sample/pgpool.conf.sample-replication \
- sample/pgpool.conf.sample-master-slave \
+ sample/pgpool.conf.sample-slony \
sample/pgpool.conf.sample-stream \
sample/pgpool.conf.sample-logical \
+ sample/pgpool.conf.sample-raw \
sample/scripts/failover.sh.sample \
sample/scripts/follow_master.sh.sample \
sample/scripts/pgpool_remote_start.sample \
sample/scripts/recovery_1st_stage.sample \
sample/scripts/recovery_2nd_stage.sample \
sample/pgpool.conf.sample sample/pool_hba.conf.sample \
- sample/pgpool.conf.sample-replication sample/pgpool.conf.sample-master-slave \
+ sample/pgpool.conf.sample-replication sample/pgpool.conf.sample-slony \
+ sample/pgpool.conf.sample-raw \
sample/pgpool.conf.sample-stream sample/pgpool.conf.sample-logical sample/pcp.conf.sample \
sql/Makefile \
sql/insert_lock.sql \
{NULL, 0, false}
};
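+/* Valid string values for the backend_clustering_mode configuration parameter */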
+static const struct config_enum_entry backend_clustering_mode_options[] = {
+ {"streaming_replication", CM_STREAMING_REPLICATION, false},
+ {"native_replication", CM_NATIVE_REPLICATION, false},
+ {"logical_replication", CM_LOGICAL_REPLICATION, false},
+ {"slony", CM_SLONY, false},
+ {"raw", CM_RAW, false},
+ {NULL, 0, false}
+};
static const struct config_enum_entry master_slave_sub_mode_options[] = {
{"slony", SLONY_MODE, false},
static struct config_enum ConfigureNamesEnum[] =
{
+ {
+ {"backend_clustering_mode", CFGCXT_INIT, MASTER_SLAVE_CONFIG,
+ "backend clustering mode.",
+ CONFIG_VAR_TYPE_ENUM, false, 0
+ },
+ (int *) &g_pool_config.backend_clustering_mode,
+ CM_STREAMING_REPLICATION,
+ backend_clustering_mode_options,
+ NULL, NULL, NULL, NULL
+ },
+
+
{
{"syslog_facility", CFGCXT_RELOAD, LOGING_CONFIG,
"syslog local faclity.",
POOL_NODE_STATUS_INVALID /* invalid node (split branin, stand alone) */
} POOL_NODE_STATUS;
+#ifdef NOT_USED
#define REPLICATION (pool_config->replication_mode)
#define MASTER_SLAVE (pool_config->master_slave_mode)
#define STREAM (MASTER_SLAVE && pool_config->master_slave_sub_mode == STREAM_MODE)
#define DUAL_MODE (REPLICATION || MASTER_SLAVE)
#define RAW_MODE (!REPLICATION && !MASTER_SLAVE)
#define SL_MODE (STREAM || LOGICAL) /* streaming or logical replication mode */
+#endif
+
+/* Clustering mode macros */
+#define REPLICATION (pool_config->backend_clustering_mode == CM_NATIVE_REPLICATION)
+#define MASTER_SLAVE (pool_config->backend_clustering_mode == CM_STREAMING_REPLICATION || \
+ pool_config->backend_clustering_mode == CM_LOGICAL_REPLICATION || \
+ pool_config->backend_clustering_mode == CM_SLONY)
+#define STREAM (pool_config->backend_clustering_mode == CM_STREAMING_REPLICATION)
+#define LOGICAL (pool_config->backend_clustering_mode == CM_LOGICAL_REPLICATION)
+#define SLONY (pool_config->backend_clustering_mode == CM_SLONY)
+#define DUAL_MODE (REPLICATION || MASTER_SLAVE)
+#define RAW_MODE (pool_config->backend_clustering_mode == CM_RAW)
+#define SL_MODE (STREAM || LOGICAL) /* streaming or logical replication mode */
+
#define MAJOR(p) (pool_get_major_version())
#define TSTATE(p, i) (CONNECTION(p, i)->tstate)
#define INTERNAL_TRANSACTION_STARTED(p, i) (CONNECTION(p, i)->is_internal_transaction_started)
LOGICAL_MODE
} MasterSlaveSubModes;
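+
+/*
+ * Backend clustering modes, selected by the backend_clustering_mode
+ * configuration parameter.
+ */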
+typedef enum ClusteringModes
+{
+ CM_STREAMING_REPLICATION = 1,
+ CM_NATIVE_REPLICATION,
+ CM_LOGICAL_REPLICATION,
+ CM_SLONY,
+ CM_RAW
+} ClusteringModes;
+
typedef enum LogStandbyDelayModes
{
LSD_ALWAYS = 1,
*/
typedef struct
{
+ ClusteringModes backend_clustering_mode; /* Backend clustering mode */
char *listen_addresses; /* hostnames/IP addresses to listen on */
int port; /* port # to bind */
char *pcp_listen_addresses; /* PCP listen address to listen on */
+++ /dev/null
-# ----------------------------
-# pgPool-II configuration file
-# ----------------------------
-#
-# This file consists of lines of the form:
-#
-# name = value
-#
-# Whitespace may be used. Comments are introduced with "#" anywhere on a line.
-# The complete list of parameter names and allowed values can be found in the
-# pgPool-II documentation.
-#
-# This file is read on server startup and when the server receives a SIGHUP
-# signal. If you edit the file on a running system, you have to SIGHUP the
-# server for the changes to take effect, or use "pgpool reload". Some
-# parameters, which are marked below, require a server shutdown and restart to
-# take effect.
-#
-
-
-#------------------------------------------------------------------------------
-# CONNECTIONS
-#------------------------------------------------------------------------------
-
-# - pgpool Connection Settings -
-
-listen_addresses = 'localhost'
- # Host name or IP address to listen on:
- # '*' for all, '' for no TCP/IP connections
- # (change requires restart)
-port = 9999
- # Port number
- # (change requires restart)
-socket_dir = '/tmp'
- # Unix domain socket path
- # The Debian package defaults to
- # /var/run/postgresql
- # (change requires restart)
-listen_backlog_multiplier = 2
- # Set the backlog parameter of listen(2) to
- # num_init_children * listen_backlog_multiplier.
- # (change requires restart)
-serialize_accept = off
- # whether to serialize accept() call to avoid thundering herd problem
- # (change requires restart)
-reserved_connections = 0
- # Number of reserved connections.
- # Pgpool-II does not accept connections if over
- # num_init_chidlren - reserved_connections.
-
-# - pgpool Communication Manager Connection Settings -
-
-pcp_listen_addresses = '*'
- # Host name or IP address for pcp process to listen on:
- # '*' for all, '' for no TCP/IP connections
- # (change requires restart)
-pcp_port = 9898
- # Port number for pcp
- # (change requires restart)
-pcp_socket_dir = '/tmp'
- # Unix domain socket path for pcp
- # The Debian package defaults to
- # /var/run/postgresql
- # (change requires restart)
-
-# - Backend Connection Settings -
-
-backend_hostname0 = 'localhost'
- # Host name or IP address to connect to for backend 0
-backend_port0 = 5432
- # Port number for backend 0
-backend_weight0 = 1
- # Weight for backend 0 (only in load balancing mode)
-backend_data_directory0 = '/var/lib/pgsql/data'
- # Data directory for backend 0
-backend_flag0 = 'ALLOW_TO_FAILOVER'
- # Controls various backend behavior
- # ALLOW_TO_FAILOVER, DISALLOW_TO_FAILOVER
- # or ALWAYS_MASTER
-backend_application_name0 = 'server0'
- # walsender's application_name, used for "show pool_nodes" command
-#backend_hostname1 = 'host2'
-#backend_port1 = 5433
-#backend_weight1 = 1
-#backend_data_directory1 = '/data1'
-#backend_flag1 = 'ALLOW_TO_FAILOVER'
-#backend_application_name1 = 'server1'
-
-# - Authentication -
-
-enable_pool_hba = off
- # Use pool_hba.conf for client authentication
-pool_passwd = 'pool_passwd'
- # File name of pool_passwd for md5 authentication.
- # "" disables pool_passwd.
- # (change requires restart)
-authentication_timeout = 60
- # Delay in seconds to complete client authentication
- # 0 means no timeout.
-
-allow_clear_text_frontend_auth = off
- # Allow Pgpool-II to use clear text password authentication
- # with clients, when pool_passwd does not
- # contain the user password
-
-
-# - SSL Connections -
-
-ssl = off
- # Enable SSL support
- # (change requires restart)
-#ssl_key = './server.key'
- # Path to the SSL private key file
- # (change requires restart)
-#ssl_cert = './server.cert'
- # Path to the SSL public certificate file
- # (change requires restart)
-#ssl_ca_cert = ''
- # Path to a single PEM format file
- # containing CA root certificate(s)
- # (change requires restart)
-#ssl_ca_cert_dir = ''
- # Directory containing CA root certificate(s)
- # (change requires restart)
-
-ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'
- # Allowed SSL ciphers
- # (change requires restart)
-ssl_prefer_server_ciphers = off
- # Use server's SSL cipher preferences,
- # rather than the client's
- # (change requires restart)
-ssl_ecdh_curve = 'prime256v1'
- # Name of the curve to use in ECDH key exchange
-ssl_dh_params_file = ''
- # Name of the file containing Diffie-Hellman parameters used
- # for so-called ephemeral DH family of SSL cipher.
-
-#------------------------------------------------------------------------------
-# POOLS
-#------------------------------------------------------------------------------
-
-# - Concurrent session and pool size -
-
-num_init_children = 32
- # Number of concurrent sessions allowed
- # (change requires restart)
-max_pool = 4
- # Number of connection pool caches per connection
- # (change requires restart)
-
-# - Life time -
-
-child_life_time = 300
- # Pool exits after being idle for this many seconds
-child_max_connections = 0
- # Pool exits after receiving that many connections
- # 0 means no exit
-connection_life_time = 0
- # Connection to backend closes after being idle for this many seconds
- # 0 means no close
-client_idle_limit = 0
- # Client is disconnected after being idle for that many seconds
- # (even inside an explicit transactions!)
- # 0 means no disconnection
-
-
-#------------------------------------------------------------------------------
-# LOGS
-#------------------------------------------------------------------------------
-
-# - Where to log -
-
-log_destination = 'stderr'
- # Where to log
- # Valid values are combinations of stderr,
- # and syslog. Default to stderr.
-
-# - What to log -
-
-log_line_prefix = '%t: pid %p: ' # printf-style string to output at beginning of each log line.
-
-log_connections = off
- # Log connections
-log_hostname = off
- # Hostname will be shown in ps status
- # and in logs if connections are logged
-log_statement = off
- # Log all statements
-log_per_node_statement = off
- # Log all statements
- # with node and backend informations
-log_client_messages = off
- # Log any client messages
-log_standby_delay = 'none'
- # Log standby delay
- # Valid values are combinations of always,
- # if_over_threshold, none
-
-# - Syslog specific -
-
-syslog_facility = 'LOCAL0'
- # Syslog local facility. Default to LOCAL0
-syslog_ident = 'pgpool'
- # Syslog program identification string
- # Default to 'pgpool'
-
-# - Debug -
-
-#log_error_verbosity = default # terse, default, or verbose messages
-
-#client_min_messages = notice # values in order of decreasing detail:
- # debug5
- # debug4
- # debug3
- # debug2
- # debug1
- # log
- # notice
- # warning
- # error
-
-#log_min_messages = warning # values in order of decreasing detail:
- # debug5
- # debug4
- # debug3
- # debug2
- # debug1
- # info
- # notice
- # warning
- # error
- # log
- # fatal
- # panic
-
-#------------------------------------------------------------------------------
-# FILE LOCATIONS
-#------------------------------------------------------------------------------
-
-pid_file_name = '/var/run/pgpool/pgpool.pid'
- # PID file name
- # Can be specified as relative to the"
- # location of pgpool.conf file or
- # as an absolute path
- # (change requires restart)
-logdir = '/var/log/pgpool'
- # Directory of pgPool status file
- # (change requires restart)
-
-
-#------------------------------------------------------------------------------
-# CONNECTION POOLING
-#------------------------------------------------------------------------------
-
-connection_cache = on
- # Activate connection pools
- # (change requires restart)
-
- # Semicolon separated list of queries
- # to be issued at the end of a session
- # The default is for 8.3 and later
-reset_query_list = 'ABORT; DISCARD ALL'
- # The following one is for 8.2 and before
-#reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'
-
-
-#------------------------------------------------------------------------------
-# REPLICATION MODE
-#------------------------------------------------------------------------------
-
-replication_mode = off
- # Activate replication mode
- # (change requires restart)
-replicate_select = off
- # Replicate SELECT statements
- # when in replication mode
- # replicate_select is higher priority than
- # load_balance_mode.
-
-insert_lock = on
- # Automatically locks a dummy row or a table
- # with INSERT statements to keep SERIAL data
- # consistency
- # Without SERIAL, no lock will be issued
-lobj_lock_table = ''
- # When rewriting lo_creat command in
- # replication mode, specify table name to
- # lock
-
-# - Degenerate handling -
-
-replication_stop_on_mismatch = off
- # On disagreement with the packet kind
- # sent from backend, degenerate the node
- # which is most likely "minority"
- # If off, just force to exit this session
-
-failover_if_affected_tuples_mismatch = off
- # On disagreement with the number of affected
- # tuples in UPDATE/DELETE queries, then
- # degenerate the node which is most likely
- # "minority".
- # If off, just abort the transaction to
- # keep the consistency
-
-
-#------------------------------------------------------------------------------
-# LOAD BALANCING MODE
-#------------------------------------------------------------------------------
-
-load_balance_mode = off
- # Activate load balancing mode
- # (change requires restart)
-ignore_leading_white_space = on
- # Ignore leading white spaces of each query
-white_function_list = ''
- # Comma separated list of function names
- # that don't write to database
- # Regexp are accepted
-black_function_list = 'currval,lastval,nextval,setval'
- # Comma separated list of function names
- # that write to database
- # Regexp are accepted
-
-black_query_pattern_list = ''
- # Semicolon separated list of query patterns
- # that should be sent to primary node
- # Regexp are accepted
- # valid for streaming replicaton mode only.
-
-database_redirect_preference_list = ''
- # comma separated list of pairs of database and node id.
- # example: postgres:primary,mydb[0-4]:1,mydb[5-9]:2'
- # valid for streaming replicaton mode only.
-app_name_redirect_preference_list = ''
- # comma separated list of pairs of app name and node id.
- # example: 'psql:primary,myapp[0-4]:1,myapp[5-9]:standby'
- # valid for streaming replicaton mode only.
-allow_sql_comments = off
- # if on, ignore SQL comments when judging if load balance or
- # query cache is possible.
- # If off, SQL comments effectively prevent the judgment
- # (pre 3.4 behavior).
-
-disable_load_balance_on_write = 'transaction'
- # Load balance behavior when write query is issued
- # in an explicit transaction.
- # Note that any query not in an explicit transaction
- # is not affected by the parameter.
- # 'transaction' (the default): if a write query is issued,
- # subsequent read queries will not be load balanced
- # until the transaction ends.
- # 'trans_transaction': if a write query is issued,
- # subsequent read queries in an explicit transaction
- # will not be load balanced until the session ends.
- # 'always': if a write query is issued, read queries will
- # not be load balanced until the session ends.
-
-statement_level_load_balance = off
- # Enables statement level load balancing
-
-#------------------------------------------------------------------------------
-# MASTER/SLAVE MODE
-#------------------------------------------------------------------------------
-
-master_slave_mode = off
- # Activate master/slave mode
- # (change requires restart)
-master_slave_sub_mode = 'stream'
- # Master/slave sub mode
- # Valid values are combinations stream, slony
- # or logical. Default is stream.
- # (change requires restart)
-
-# - Streaming -
-
-sr_check_period = 0
- # Streaming replication check period
- # Disabled (0) by default
-sr_check_user = 'nobody'
- # Streaming replication check user
- # This is necessary even if you disable
- # streaming replication delay check with
- # sr_check_period = 0
-
-sr_check_password = ''
- # Password for streaming replication check user.
- # Leaving it empty will make Pgpool-II to first look for the
- # Password in pool_passwd file before using the empty password
-
-sr_check_database = 'postgres'
- # Database name for streaming replication check
-delay_threshold = 0
- # Threshold before not dispatching query to standby node
- # Unit is in bytes
- # Disabled (0) by default
-
-# - Special commands -
-
-follow_master_command = ''
- # Executes this command after master failover
- # Special values:
- # %d = failed node id
- # %h = failed node host name
- # %p = failed node port number
- # %D = failed node database cluster path
- # %m = new master node id
- # %H = new master node hostname
- # %M = old master node id
- # %P = old primary node id
- # %r = new master port number
- # %R = new master database cluster path
- # %N = old primary node hostname
- # %S = old primary node port number
- # %% = '%' character
-
-#------------------------------------------------------------------------------
-# HEALTH CHECK GLOBAL PARAMETERS
-#------------------------------------------------------------------------------
-
-health_check_period = 0
- # Health check period
- # Disabled (0) by default
-health_check_timeout = 20
- # Health check timeout
- # 0 means no timeout
-health_check_user = 'nobody'
- # Health check user
-health_check_password = ''
- # Password for health check user
- # Leaving it empty will make Pgpool-II to first look for the
- # Password in pool_passwd file before using the empty password
-
-health_check_database = ''
- # Database name for health check. If '', tries 'postgres' frist, then 'template1'
-
-health_check_max_retries = 0
- # Maximum number of times to retry a failed health check before giving up.
-health_check_retry_delay = 1
- # Amount of time to wait (in seconds) between retries.
-connect_timeout = 10000
- # Timeout value in milliseconds before giving up to connect to backend.
- # Default is 10000 ms (10 second). Flaky network user may want to increase
- # the value. 0 means no timeout.
- # Note that this value is not only used for health check,
- # but also for ordinary conection to backend.
-
-#------------------------------------------------------------------------------
-# HEALTH CHECK PER NODE PARAMETERS (OPTIONAL)
-#------------------------------------------------------------------------------
-#health_check_period0 = 0
-#health_check_timeout0 = 20
-#health_check_user0 = 'nobody'
-#health_check_password0 = ''
-#health_check_database0 = ''
-#health_check_max_retries0 = 0
-#health_check_retry_delay0 = 1
-#connect_timeout0 = 10000
-
-#------------------------------------------------------------------------------
-# FAILOVER AND FAILBACK
-#------------------------------------------------------------------------------
-
-failover_command = ''
- # Executes this command at failover
- # Special values:
- # %d = failed node id
- # %h = failed node host name
- # %p = failed node port number
- # %D = failed node database cluster path
- # %m = new master node id
- # %H = new master node hostname
- # %M = old master node id
- # %P = old primary node id
- # %r = new master port number
- # %R = new master database cluster path
- # %N = old primary node hostname
- # %S = old primary node port number
- # %% = '%' character
-failback_command = ''
- # Executes this command at failback.
- # Special values:
- # %d = failed node id
- # %h = failed node host name
- # %p = failed node port number
- # %D = failed node database cluster path
- # %m = new master node id
- # %H = new master node hostname
- # %M = old master node id
- # %P = old primary node id
- # %r = new master port number
- # %R = new master database cluster path
- # %N = old primary node hostname
- # %S = old primary node port number
- # %% = '%' character
-
-failover_on_backend_error = on
- # Initiates failover when reading/writing to the
- # backend communication socket fails
- # If set to off, pgpool will report an
- # error and disconnect the session.
-
-detach_false_primary = off
- # Detach false primary if on. Only
- # valid in streaming replicaton
- # mode and with PostgreSQL 9.6 or
- # after.
-
-search_primary_node_timeout = 300
- # Timeout in seconds to search for the
- # primary node when a failover occurs.
- # 0 means no timeout, keep searching
- # for a primary node forever.
-
-auto_failback = off
- # Dettached backend node reattach automatically
- # if replication_state is 'streaming'.
-auto_failback_interval = 60
- # Min interval of executing auto_failback in
- # seconds.
-
-#------------------------------------------------------------------------------
-# ONLINE RECOVERY
-#------------------------------------------------------------------------------
-
-recovery_user = 'nobody'
- # Online recovery user
-recovery_password = ''
- # Online recovery password
- # Leaving it empty will make Pgpool-II to first look for the
- # Password in pool_passwd file before using the empty password
-
-recovery_1st_stage_command = ''
- # Executes a command in first stage
-recovery_2nd_stage_command = ''
- # Executes a command in second stage
-recovery_timeout = 90
- # Timeout in seconds to wait for the
- # recovering node's postmaster to start up
- # 0 means no wait
-client_idle_limit_in_recovery = 0
- # Client is disconnected after being idle
- # for that many seconds in the second stage
- # of online recovery
- # 0 means no disconnection
- # -1 means immediate disconnection
-
-
-#------------------------------------------------------------------------------
-# WATCHDOG
-#------------------------------------------------------------------------------
-
-# - Enabling -
-
-use_watchdog = off
- # Activates watchdog
- # (change requires restart)
-
-# -Connection to up stream servers -
-
-trusted_servers = ''
- # trusted server list which are used
- # to confirm network connection
- # (hostA,hostB,hostC,...)
- # (change requires restart)
-ping_path = '/bin'
- # ping command path
- # (change requires restart)
-
-# - Watchdog communication Settings -
-
-wd_hostname = ''
- # Host name or IP address of this watchdog
- # (change requires restart)
-wd_port = 9000
- # port number for watchdog service
- # (change requires restart)
-wd_priority = 1
- # priority of this watchdog in leader election
- # (change requires restart)
-
-wd_authkey = ''
- # Authentication key for watchdog communication
- # (change requires restart)
-
-wd_ipc_socket_dir = '/tmp'
- # Unix domain socket path for watchdog IPC socket
- # The Debian package defaults to
- # /var/run/postgresql
- # (change requires restart)
-
-
-# - Virtual IP control Setting -
-
-delegate_IP = ''
- # delegate IP address
- # If this is empty, virtual IP never bring up.
- # (change requires restart)
-if_cmd_path = '/sbin'
- # path to the directory where if_up/down_cmd exists
- # If if_up/down_cmd starts with "/", if_cmd_path will be ignored.
- # (change requires restart)
-if_up_cmd = '/usr/bin/sudo /sbin/ip addr add $_IP_$/24 dev eth0 label eth0:0'
- # startup delegate IP command
- # (change requires restart)
-if_down_cmd = '/usr/bin/sudo /sbin/ip addr del $_IP_$/24 dev eth0'
- # shutdown delegate IP command
- # (change requires restart)
-arping_path = '/usr/sbin'
- # arping command path
- # If arping_cmd starts with "/", if_cmd_path will be ignored.
- # (change requires restart)
-arping_cmd = '/usr/bin/sudo /usr/sbin/arping -U $_IP_$ -w 1 -I eth0'
- # arping command
- # (change requires restart)
-
-# - Behaivor on escalation Setting -
-
-clear_memqcache_on_escalation = on
- # Clear all the query cache on shared memory
- # when standby pgpool escalate to active pgpool
- # (= virtual IP holder).
- # This should be off if client connects to pgpool
- # not using virtual IP.
- # (change requires restart)
-wd_escalation_command = ''
- # Executes this command at escalation on new active pgpool.
- # (change requires restart)
-wd_de_escalation_command = ''
- # Executes this command when master pgpool resigns from being master.
- # (change requires restart)
-
-# - Watchdog consensus settings for failover -
-
-failover_when_quorum_exists = on
- # Only perform backend node failover
- # when the watchdog cluster holds the quorum
- # (change requires restart)
-
-failover_require_consensus = on
- # Perform failover when majority of Pgpool-II nodes
- # aggrees on the backend node status change
- # (change requires restart)
-
-allow_multiple_failover_requests_from_node = off
- # A Pgpool-II node can cast multiple votes
- # for building the consensus on failover
- # (change requires restart)
-
-enable_consensus_with_half_votes = off
- # apply majority rule for consensus and quorum computation
- # at 50% of votes in a cluster with even number of nodes.
- # when enabled the existence of quorum and consensus
- # on failover is resolved after receiving half of the
- # total votes in the cluster, otherwise both these
- # decisions require at least one more vote than
- # half of the total votes.
- # (change requires restart)
-
-# - Lifecheck Setting -
-
-# -- common --
-
-wd_monitoring_interfaces_list = '' # Comma separated list of interfaces names to monitor.
- # if any interface from the list is active the watchdog will
- # consider the network is fine
- # 'any' to enable monitoring on all interfaces except loopback
- # '' to disable monitoring
- # (change requires restart)
-
-
-wd_lifecheck_method = 'heartbeat'
- # Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')
- # (change requires restart)
-wd_interval = 10
- # lifecheck interval (sec) > 0
- # (change requires restart)
-
-# -- heartbeat mode --
-
-wd_heartbeat_port = 9694
- # Port number for receiving heartbeat signal
- # (change requires restart)
-wd_heartbeat_keepalive = 2
- # Interval time of sending heartbeat signal (sec)
- # (change requires restart)
-wd_heartbeat_deadtime = 30
- # Deadtime interval for heartbeat signal (sec)
- # (change requires restart)
-heartbeat_destination0 = 'host0_ip1'
- # Host name or IP address of destination 0
- # for sending heartbeat signal.
- # (change requires restart)
-heartbeat_destination_port0 = 9694
- # Port number of destination 0 for sending
- # heartbeat signal. Usually this is the
- # same as wd_heartbeat_port.
- # (change requires restart)
-heartbeat_device0 = ''
- # Name of NIC device (such like 'eth0')
- # used for sending/receiving heartbeat
- # signal to/from destination 0.
- # This works only when this is not empty
- # and pgpool has root privilege.
- # (change requires restart)
-
-#heartbeat_destination1 = 'host0_ip2'
-#heartbeat_destination_port1 = 9694
-#heartbeat_device1 = ''
-
-# -- query mode --
-
-wd_life_point = 3
- # lifecheck retry times
- # (change requires restart)
-wd_lifecheck_query = 'SELECT 1'
- # lifecheck query to pgpool from watchdog
- # (change requires restart)
-wd_lifecheck_dbname = 'template1'
- # Database name connected for lifecheck
- # (change requires restart)
-wd_lifecheck_user = 'nobody'
- # watchdog user monitoring pgpools in lifecheck
- # (change requires restart)
-wd_lifecheck_password = ''
- # Password for watchdog user in lifecheck
- # Leaving it empty will make Pgpool-II to first look for the
- # Password in pool_passwd file before using the empty password
- # (change requires restart)
-
-# - Other pgpool Connection Settings -
-
-#other_pgpool_hostname0 = 'host0'
- # Host name or IP address to connect to for other pgpool 0
- # (change requires restart)
-#other_pgpool_port0 = 5432
- # Port number for other pgpool 0
- # (change requires restart)
-#other_wd_port0 = 9000
- # Port number for other watchdog 0
- # (change requires restart)
-#other_pgpool_hostname1 = 'host1'
-#other_pgpool_port1 = 5432
-#other_wd_port1 = 9000
-
-
-#------------------------------------------------------------------------------
-# OTHERS
-#------------------------------------------------------------------------------
-relcache_expire = 0
- # Life time of relation cache in seconds.
- # 0 means no cache expiration(the default).
- # The relation cache is used for cache the
- # query result against PostgreSQL system
- # catalog to obtain various information
- # including table structures or if it's a
- # temporary table or not. The cache is
- # maintained in a pgpool child local memory
- # and being kept as long as it survives.
- # If someone modify the table by using
- # ALTER TABLE or some such, the relcache is
- # not consistent anymore.
- # For this purpose, cache_expiration
- # controls the life time of the cache.
-
-relcache_size = 256
- # Number of relation cache
- # entry. If you see frequently:
- # "pool_search_relcache: cache replacement happend"
- # in the pgpool log, you might want to increate this number.
-
-check_temp_table = catalog
- # Temporary table check method. catalog, trace or none.
- # Default is catalog.
-
-check_unlogged_table = on
- # If on, enable unlogged table check in SELECT statements.
- # This initiates queries against system catalog of primary/master
- # thus increases load of master.
- # If you are absolutely sure that your system never uses unlogged tables
- # and you want to save access to primary/master, you could turn this off.
- # Default is on.
-enable_shared_relcache = on
- # If on, relation cache stored in memory cache,
- # the cache is shared among child process.
- # Default is on.
- # (change requires restart)
-
-relcache_query_target = master # Target node to send relcache queries. Default is master (primary) node.
- # If load_balance_node is specified, queries will be sent to load balance node.
-#------------------------------------------------------------------------------
-# IN MEMORY QUERY MEMORY CACHE
-#------------------------------------------------------------------------------
-memory_cache_enabled = off
- # If on, use the memory cache functionality, off by default
- # (change requires restart)
-memqcache_method = 'shmem'
- # Cache storage method. either 'shmem'(shared memory) or
- # 'memcached'. 'shmem' by default
- # (change requires restart)
-memqcache_memcached_host = 'localhost'
- # Memcached host name or IP address. Mandatory if
- # memqcache_method = 'memcached'.
- # Defaults to localhost.
- # (change requires restart)
-memqcache_memcached_port = 11211
- # Memcached port number. Mondatory if memqcache_method = 'memcached'.
- # Defaults to 11211.
- # (change requires restart)
-memqcache_total_size = 67108864
- # Total memory size in bytes for storing memory cache.
- # Mandatory if memqcache_method = 'shmem'.
- # Defaults to 64MB.
- # (change requires restart)
-memqcache_max_num_cache = 1000000
- # Total number of cache entries. Mandatory
- # if memqcache_method = 'shmem'.
- # Each cache entry consumes 48 bytes on shared memory.
- # Defaults to 1,000,000(45.8MB).
- # (change requires restart)
-memqcache_expire = 0
- # Memory cache entry life time specified in seconds.
- # 0 means infinite life time. 0 by default.
- # (change requires restart)
-memqcache_auto_cache_invalidation = on
- # If on, invalidation of query cache is triggered by corresponding
- # DDL/DML/DCL(and memqcache_expire). If off, it is only triggered
- # by memqcache_expire. on by default.
- # (change requires restart)
-memqcache_maxcache = 409600
- # Maximum SELECT result size in bytes.
- # Must be smaller than memqcache_cache_block_size. Defaults to 400KB.
- # (change requires restart)
-memqcache_cache_block_size = 1048576
- # Cache block size in bytes. Mandatory if memqcache_method = 'shmem'.
- # Defaults to 1MB.
- # (change requires restart)
-memqcache_oiddir = '/var/log/pgpool/oiddir'
- # Temporary work directory to record table oids
- # (change requires restart)
-white_memqcache_table_list = ''
- # Comma separated list of table names to memcache
- # that don't write to database
- # Regexp are accepted
-black_memqcache_table_list = ''
- # Comma separated list of table names not to memcache
- # that don't write to database
- # Regexp are accepted
--- /dev/null
+pgpool.conf.sample-stream
\ No newline at end of file
# take effect.
#
+#------------------------------------------------------------------------------
+# BACKEND CLUSTERING MODE
+# Choose one of: 'streaming_replication', 'native_replication',
+# 'logical_replication', 'slony' or 'raw'
+#------------------------------------------------------------------------------
+backend_clustering_mode = 'logical_replication'
#------------------------------------------------------------------------------
# CONNECTIONS
# MASTER/SLAVE MODE
#------------------------------------------------------------------------------
-master_slave_mode = on
- # Activate master/slave mode
- # (change requires restart)
-master_slave_sub_mode = 'logical'
- # Master/slave sub mode
- # Valid values are combinations stream, slony
- # or logical. Default is stream.
- # (change requires restart)
-
# - Streaming -
sr_check_period = 0
--- /dev/null
+# ----------------------------
+# pgPool-II configuration file
+# ----------------------------
+#
+# This file consists of lines of the form:
+#
+# name = value
+#
+# Whitespace may be used. Comments are introduced with "#" anywhere on a line.
+# The complete list of parameter names and allowed values can be found in the
+# pgPool-II documentation.
+#
+# This file is read on server startup and when the server receives a SIGHUP
+# signal. If you edit the file on a running system, you have to SIGHUP the
+# server for the changes to take effect, or use "pgpool reload". Some
+# parameters, which are marked below, require a server shutdown and restart to
+# take effect.
+#
+
+#------------------------------------------------------------------------------
+# BACKEND CLUSTERING MODE
+# Choose one of: 'streaming_replication', 'native_replication',
+# 'logical_replication', 'slony' or 'raw'
+# (change requires restart)
+#------------------------------------------------------------------------------
+backend_clustering_mode = 'raw'
+
+#------------------------------------------------------------------------------
+# CONNECTIONS
+#------------------------------------------------------------------------------
+
+# - pgpool Connection Settings -
+
+listen_addresses = 'localhost'
+ # Host name or IP address to listen on:
+ # '*' for all, '' for no TCP/IP connections
+ # (change requires restart)
+port = 9999
+ # Port number
+ # (change requires restart)
+socket_dir = '/tmp'
+ # Unix domain socket path
+ # The Debian package defaults to
+ # /var/run/postgresql
+ # (change requires restart)
+reserved_connections = 0
+ # Number of reserved connections.
+ # Pgpool-II does not accept connections if over
+ # num_init_children - reserved_connections.
+
+
+# - pgpool Communication Manager Connection Settings -
+
+pcp_listen_addresses = '*'
+ # Host name or IP address for pcp process to listen on:
+ # '*' for all, '' for no TCP/IP connections
+ # (change requires restart)
+pcp_port = 9898
+ # Port number for pcp
+ # (change requires restart)
+pcp_socket_dir = '/tmp'
+ # Unix domain socket path for pcp
+ # The Debian package defaults to
+ # /var/run/postgresql
+ # (change requires restart)
+listen_backlog_multiplier = 2
+ # Set the backlog parameter of listen(2) to
+ # num_init_children * listen_backlog_multiplier.
+ # (change requires restart)
+serialize_accept = off
+ # whether to serialize accept() call to avoid thundering herd problem
+ # (change requires restart)
+
+# - Backend Connection Settings -
+
+backend_hostname0 = 'host1'
+ # Host name or IP address to connect to for backend 0
+backend_port0 = 5432
+ # Port number for backend 0
+backend_weight0 = 1
+ # Weight for backend 0 (only in load balancing mode)
+backend_data_directory0 = '/data'
+ # Data directory for backend 0
+backend_flag0 = 'ALLOW_TO_FAILOVER'
+ # Controls various backend behavior
+ # ALLOW_TO_FAILOVER, DISALLOW_TO_FAILOVER
+ # or ALWAYS_MASTER
+backend_application_name0 = 'server0'
+ # walsender's application_name, used for "show pool_nodes" command
+#backend_hostname1 = 'host2'
+#backend_port1 = 5433
+#backend_weight1 = 1
+#backend_data_directory1 = '/data1'
+#backend_flag1 = 'ALLOW_TO_FAILOVER'
+#backend_application_name1 = 'server1'
+
+# - Authentication -
+
+enable_pool_hba = off
+ # Use pool_hba.conf for client authentication
+pool_passwd = 'pool_passwd'
+ # File name of pool_passwd for md5 authentication.
+ # "" disables pool_passwd.
+ # (change requires restart)
+authentication_timeout = 60
+ # Delay in seconds to complete client authentication
+ # 0 means no timeout.
+
+allow_clear_text_frontend_auth = off
+ # Allow Pgpool-II to use clear text password authentication
+ # with clients, when pool_passwd does not
+ # contain the user password
+
+# - SSL Connections -
+
+ssl = off
+ # Enable SSL support
+ # (change requires restart)
+#ssl_key = './server.key'
+ # Path to the SSL private key file
+ # (change requires restart)
+#ssl_cert = './server.cert'
+ # Path to the SSL public certificate file
+ # (change requires restart)
+#ssl_ca_cert = ''
+ # Path to a single PEM format file
+ # containing CA root certificate(s)
+ # (change requires restart)
+#ssl_ca_cert_dir = ''
+ # Directory containing CA root certificate(s)
+ # (change requires restart)
+
+ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'
+ # Allowed SSL ciphers
+ # (change requires restart)
+ssl_prefer_server_ciphers = off
+ # Use server's SSL cipher preferences,
+ # rather than the client's
+ # (change requires restart)
+ssl_ecdh_curve = 'prime256v1'
+ # Name of the curve to use in ECDH key exchange
+ssl_dh_params_file = ''
+ # Name of the file containing Diffie-Hellman parameters used
+ # for the so-called ephemeral DH family of SSL ciphers.
+
+#------------------------------------------------------------------------------
+# POOLS
+#------------------------------------------------------------------------------
+
+# - Concurrent session and pool size -
+
+num_init_children = 32
+ # Number of concurrent sessions allowed
+ # (change requires restart)
+max_pool = 4
+ # Number of connection pool caches per connection
+ # (change requires restart)
+
+# - Life time -
+
+child_life_time = 300
+ # Pool exits after being idle for this many seconds
+child_max_connections = 0
+ # Pool exits after receiving that many connections
+ # 0 means no exit
+connection_life_time = 0
+ # Connection to backend closes after being idle for this many seconds
+ # 0 means no close
+client_idle_limit = 0
+ # Client is disconnected after being idle for that many seconds
+ # (even inside an explicit transaction!)
+ # 0 means no disconnection
+
+
+#------------------------------------------------------------------------------
+# LOGS
+#------------------------------------------------------------------------------
+
+# - Where to log -
+
+log_destination = 'stderr'
+ # Where to log
+ # Valid values are combinations of stderr
+ # and syslog. Defaults to stderr.
+
+# - What to log -
+
+log_line_prefix = '%t: pid %p: ' # printf-style string to output at beginning of each log line.
+
+log_connections = off
+ # Log connections
+log_hostname = off
+ # Hostname will be shown in ps status
+ # and in logs if connections are logged
+log_statement = off
+ # Log all statements
+log_per_node_statement = off
+ # Log all statements
+ # with node and backend information
+log_client_messages = off
+ # Log any client messages
+log_standby_delay = 'if_over_threshold'
+ # Log standby delay
+ # Valid values are always,
+ # if_over_threshold or none
+
+# - Syslog specific -
+
+syslog_facility = 'LOCAL0'
+ # Syslog local facility. Defaults to LOCAL0
+syslog_ident = 'pgpool'
+ # Syslog program identification string
+ # Defaults to 'pgpool'
+
+# - Debug -
+
+#log_error_verbosity = default # terse, default, or verbose messages
+
+#client_min_messages = notice # values in order of decreasing detail:
+ # debug5
+ # debug4
+ # debug3
+ # debug2
+ # debug1
+ # log
+ # notice
+ # warning
+ # error
+
+#log_min_messages = warning # values in order of decreasing detail:
+ # debug5
+ # debug4
+ # debug3
+ # debug2
+ # debug1
+ # info
+ # notice
+ # warning
+ # error
+ # log
+ # fatal
+ # panic
+
+#------------------------------------------------------------------------------
+# FILE LOCATIONS
+#------------------------------------------------------------------------------
+
+pid_file_name = '/var/run/pgpool/pgpool.pid'
+ # PID file name
+ # Can be specified as relative to the
+ # location of pgpool.conf file or
+ # as an absolute path
+ # (change requires restart)
+logdir = '/tmp'
+ # Directory of pgPool status file
+ # (change requires restart)
+
+
+#------------------------------------------------------------------------------
+# CONNECTION POOLING
+#------------------------------------------------------------------------------
+
+connection_cache = on
+ # Activate connection pools
+ # (change requires restart)
+
+ # Semicolon separated list of queries
+ # to be issued at the end of a session
+ # The default is for 8.3 and later
+reset_query_list = 'ABORT; DISCARD ALL'
+ # The following one is for 8.2 and before
+#reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'
+
+
+#------------------------------------------------------------------------------
+# REPLICATION MODE
+#------------------------------------------------------------------------------
+
+replicate_select = off
+ # Replicate SELECT statements
+ # when in replication mode
+ # replicate_select is higher priority than
+ # load_balance_mode.
+
+insert_lock = off
+ # Automatically locks a dummy row or a table
+ # with INSERT statements to keep SERIAL data
+ # consistency
+ # Without SERIAL, no lock will be issued
+lobj_lock_table = ''
+ # When rewriting lo_creat command in
+ # replication mode, specify table name to
+ # lock
+
+# - Degenerate handling -
+
+replication_stop_on_mismatch = off
+ # On disagreement with the packet kind
+ # sent from backend, degenerate the node
+ # which is most likely "minority"
+ # If off, the session is simply forced to exit
+
+failover_if_affected_tuples_mismatch = off
+ # On disagreement with the number of affected
+ # tuples in UPDATE/DELETE queries, then
+ # degenerate the node which is most likely
+ # "minority".
+ # If off, just abort the transaction to
+ # keep the consistency
+
+
+#------------------------------------------------------------------------------
+# LOAD BALANCING MODE
+#------------------------------------------------------------------------------
+
+load_balance_mode = on
+ # Activate load balancing mode
+ # (change requires restart)
+ignore_leading_white_space = on
+ # Ignore leading white spaces of each query
+white_function_list = ''
+ # Comma separated list of function names
+ # that don't write to database
+ # Regexp are accepted
+black_function_list = 'currval,lastval,nextval,setval'
+ # Comma separated list of function names
+ # that write to database
+ # Regexp are accepted
+
+black_query_pattern_list = ''
+ # Semicolon separated list of query patterns
+ # that should be sent to primary node
+ # Regexp are accepted
+ # valid for streaming replication mode only.
+
+database_redirect_preference_list = ''
+ # comma separated list of pairs of database and node id.
+ # example: 'postgres:primary,mydb[0-4]:1,mydb[5-9]:2'
+ # valid for streaming replication mode only.
+
+app_name_redirect_preference_list = ''
+ # comma separated list of pairs of app name and node id.
+ # example: 'psql:primary,myapp[0-4]:1,myapp[5-9]:standby'
+ # valid for streaming replication mode only.
+allow_sql_comments = off
+ # if on, ignore SQL comments when judging if load balance or
+ # query cache is possible.
+ # If off, SQL comments effectively prevent the judgment
+ # (pre 3.4 behavior).
+
+disable_load_balance_on_write = 'transaction'
+ # Load balance behavior when write query is issued
+ # in an explicit transaction.
+ # Note that any query not in an explicit transaction
+ # is not affected by the parameter.
+ # 'transaction' (the default): if a write query is issued,
+ # subsequent read queries will not be load balanced
+ # until the transaction ends.
+ # 'trans_transaction': if a write query is issued,
+ # subsequent read queries in an explicit transaction
+ # will not be load balanced until the session ends.
+ # 'always': if a write query is issued, read queries will
+ # not be load balanced until the session ends.
+
+statement_level_load_balance = off
+ # Enables statement level load balancing
+
+# - Streaming -
+
+sr_check_period = 10
+ # Streaming replication check period
+ # Disabled (0) by default
+sr_check_user = 'nobody'
+ # Streaming replication check user
+ # This is necessary even if you disable streaming
+ # replication delay check by sr_check_period = 0
+sr_check_password = ''
+ # Password for streaming replication check user
+ # If left empty, Pgpool-II first looks for the
+ # password in the pool_passwd file before using the empty password
+
+sr_check_database = 'postgres'
+ # Database name for streaming replication check
+delay_threshold = 10000000
+ # Threshold before not dispatching query to standby node
+ # Unit is in bytes
+ # Disabled (0) by default
+
+# - Special commands -
+
+follow_master_command = ''
+ # Executes this command after master failover
+ # Special values:
+ # %d = failed node id
+ # %h = failed node host name
+ # %p = failed node port number
+ # %D = failed node database cluster path
+ # %m = new master node id
+ # %H = new master node hostname
+ # %M = old master node id
+ # %P = old primary node id
+ # %r = new master port number
+ # %R = new master database cluster path
+ # %N = old primary node hostname
+ # %S = old primary node port number
+ # %% = '%' character
+
+#------------------------------------------------------------------------------
+# HEALTH CHECK GLOBAL PARAMETERS
+#------------------------------------------------------------------------------
+
+health_check_period = 0
+ # Health check period
+ # Disabled (0) by default
+health_check_timeout = 20
+ # Health check timeout
+ # 0 means no timeout
+health_check_user = 'nobody'
+ # Health check user
+health_check_password = ''
+ # Password for health check user
+ # If left empty, Pgpool-II first looks for the
+ # password in the pool_passwd file before using the empty password
+
+health_check_database = ''
+ # Database name for health check. If '', tries 'postgres' first, then 'template1'.
+health_check_max_retries = 0
+ # Maximum number of times to retry a failed health check before giving up.
+health_check_retry_delay = 1
+ # Amount of time to wait (in seconds) between retries.
+connect_timeout = 10000
+ # Timeout value in milliseconds before giving up connecting to a backend.
+ # Default is 10000 ms (10 seconds). Users on a flaky network may want to increase
+ # the value. 0 means no timeout.
+ # Note that this value is not only used for health check,
+ # but also for ordinary connection to backend.
+
+#------------------------------------------------------------------------------
+# HEALTH CHECK PER NODE PARAMETERS (OPTIONAL)
+#------------------------------------------------------------------------------
+#health_check_period0 = 0
+#health_check_timeout0 = 20
+#health_check_user0 = 'nobody'
+#health_check_password0 = ''
+#health_check_database0 = ''
+#health_check_max_retries0 = 0
+#health_check_retry_delay0 = 1
+#connect_timeout0 = 10000
+
+#------------------------------------------------------------------------------
+# FAILOVER AND FAILBACK
+#------------------------------------------------------------------------------
+
+failover_command = ''
+ # Executes this command at failover
+ # Special values:
+ # %d = failed node id
+ # %h = failed node host name
+ # %p = failed node port number
+ # %D = failed node database cluster path
+ # %m = new master node id
+ # %H = new master node hostname
+ # %M = old master node id
+ # %P = old primary node id
+ # %r = new master port number
+ # %R = new master database cluster path
+ # %N = old primary node hostname
+ # %S = old primary node port number
+ # %% = '%' character
+failback_command = ''
+ # Executes this command at failback.
+ # Special values:
+ # %d = failed node id
+ # %h = failed node host name
+ # %p = failed node port number
+ # %D = failed node database cluster path
+ # %m = new master node id
+ # %H = new master node hostname
+ # %M = old master node id
+ # %P = old primary node id
+ # %r = new master port number
+ # %R = new master database cluster path
+ # %N = old primary node hostname
+ # %S = old primary node port number
+ # %% = '%' character
+
+failover_on_backend_error = on
+ # Initiates failover when reading/writing to the
+ # backend communication socket fails
+ # If set to off, pgpool will report an
+ # error and disconnect the session.
+
+detach_false_primary = off
+ # Detach false primary if on. Only
+ # valid in streaming replication
+ # mode and with PostgreSQL 9.6 or
+ # later.
+
+search_primary_node_timeout = 300
+ # Timeout in seconds to search for the
+ # primary node when a failover occurs.
+ # 0 means no timeout, keep searching
+ # for a primary node forever.
+
+#------------------------------------------------------------------------------
+# ONLINE RECOVERY
+#------------------------------------------------------------------------------
+
+recovery_user = 'nobody'
+ # Online recovery user
+recovery_password = ''
+ # Online recovery password
+ # If left empty, Pgpool-II first looks for the
+ # password in the pool_passwd file before using the empty password
+
+recovery_1st_stage_command = ''
+ # Executes a command in first stage
+recovery_2nd_stage_command = ''
+ # Executes a command in second stage
+recovery_timeout = 90
+ # Timeout in seconds to wait for the
+ # recovering node's postmaster to start up
+ # 0 means no wait
+client_idle_limit_in_recovery = 0
+ # Client is disconnected after being idle
+ # for that many seconds in the second stage
+ # of online recovery
+ # 0 means no disconnection
+ # -1 means immediate disconnection
+
+auto_failback = off
+ # Detached backend nodes are reattached automatically
+ # if replication_state is 'streaming'.
+auto_failback_interval = 60
+ # Minimum interval in seconds between
+ # auto_failback executions.
+
+#------------------------------------------------------------------------------
+# WATCHDOG
+#------------------------------------------------------------------------------
+
+# - Enabling -
+
+use_watchdog = off
+ # Activates watchdog
+ # (change requires restart)
+
+# - Connection to upstream servers -
+
+trusted_servers = ''
+ # List of trusted servers used
+ # to confirm network connectivity
+ # (hostA,hostB,hostC,...)
+ # (change requires restart)
+ping_path = '/bin'
+ # ping command path
+ # (change requires restart)
+
+# - Watchdog communication Settings -
+
+wd_hostname = ''
+ # Host name or IP address of this watchdog
+ # (change requires restart)
+wd_port = 9000
+ # port number for watchdog service
+ # (change requires restart)
+wd_priority = 1
+ # priority of this watchdog in leader election
+ # (change requires restart)
+
+wd_authkey = ''
+ # Authentication key for watchdog communication
+ # (change requires restart)
+
+wd_ipc_socket_dir = '/tmp'
+ # Unix domain socket path for watchdog IPC socket
+ # The Debian package defaults to
+ # /var/run/postgresql
+ # (change requires restart)
+
+
+# - Virtual IP control Setting -
+
+delegate_IP = ''
+ # delegate IP address
+ # If this is empty, the virtual IP is never brought up.
+ # (change requires restart)
+if_cmd_path = '/sbin'
+ # path to the directory where if_up/down_cmd exists
+ # If if_up/down_cmd starts with "/", if_cmd_path will be ignored.
+ # (change requires restart)
+if_up_cmd = '/usr/bin/sudo /sbin/ip addr add $_IP_$/24 dev eth0 label eth0:0'
+ # startup delegate IP command
+ # (change requires restart)
+if_down_cmd = '/usr/bin/sudo /sbin/ip addr del $_IP_$/24 dev eth0'
+ # shutdown delegate IP command
+ # (change requires restart)
+arping_path = '/usr/sbin'
+ # arping command path
+ # If arping_cmd starts with "/", arping_path will be ignored.
+ # (change requires restart)
+arping_cmd = '/usr/bin/sudo /usr/sbin/arping -U $_IP_$ -w 1 -I eth0'
+ # arping command
+ # (change requires restart)
+
+# - Behavior on escalation Setting -
+
+clear_memqcache_on_escalation = on
+ # Clear all the query cache on shared memory
+ # when a standby pgpool escalates to active pgpool
+ # (= virtual IP holder).
+ # This should be off if clients connect to pgpool
+ # without using the virtual IP.
+ # (change requires restart)
+wd_escalation_command = ''
+ # Executes this command at escalation on new active pgpool.
+ # (change requires restart)
+wd_de_escalation_command = ''
+ # Executes this command when master pgpool resigns from being master.
+ # (change requires restart)
+
+# - Watchdog consensus settings for failover -
+
+failover_when_quorum_exists = on
+ # Only perform backend node failover
+ # when the watchdog cluster holds the quorum
+ # (change requires restart)
+
+failover_require_consensus = on
+ # Perform failover when a majority of Pgpool-II nodes
+ # agrees on the backend node status change
+ # (change requires restart)
+
+allow_multiple_failover_requests_from_node = off
+ # A Pgpool-II node can cast multiple votes
+ # for building the consensus on failover
+ # (change requires restart)
+
+
+enable_consensus_with_half_votes = off
+ # Apply majority rule for consensus and quorum computation
+ # at 50% of votes in a cluster with an even number of nodes.
+ # When enabled, the existence of quorum and consensus
+ # on failover is resolved after receiving half of the
+ # total votes in the cluster; otherwise both these
+ # decisions require at least one more vote than
+ # half of the total votes.
+ # (change requires restart)
+
+# - Lifecheck Setting -
+
+# -- common --
+
+wd_monitoring_interfaces_list = '' # Comma separated list of interface names to monitor.
+ # If any interface from the list is active, the watchdog
+ # considers the network connection to be fine
+ # 'any' to enable monitoring on all interfaces except loopback
+ # '' to disable monitoring
+ # (change requires restart)
+
+wd_lifecheck_method = 'heartbeat'
+ # Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')
+ # (change requires restart)
+wd_interval = 10
+ # lifecheck interval (sec) > 0
+ # (change requires restart)
+
+# -- heartbeat mode --
+
+wd_heartbeat_port = 9694
+ # Port number for receiving heartbeat signal
+ # (change requires restart)
+wd_heartbeat_keepalive = 2
+ # Interval time of sending heartbeat signal (sec)
+ # (change requires restart)
+wd_heartbeat_deadtime = 30
+ # Deadtime interval for heartbeat signal (sec)
+ # (change requires restart)
+heartbeat_destination0 = 'host0_ip1'
+ # Host name or IP address of destination 0
+ # for sending heartbeat signal.
+ # (change requires restart)
+heartbeat_destination_port0 = 9694
+ # Port number of destination 0 for sending
+ # heartbeat signal. Usually this is the
+ # same as wd_heartbeat_port.
+ # (change requires restart)
+heartbeat_device0 = ''
+ # Name of the NIC device (such as 'eth0')
+ # used for sending/receiving heartbeat
+ # signals to/from destination 0.
+ # This works only when this is not empty
+ # and pgpool has root privilege.
+ # (change requires restart)
+
+#heartbeat_destination1 = 'host0_ip2'
+#heartbeat_destination_port1 = 9694
+#heartbeat_device1 = ''
+
+# -- query mode --
+
+wd_life_point = 3
+ # lifecheck retry times
+ # (change requires restart)
+wd_lifecheck_query = 'SELECT 1'
+ # lifecheck query to pgpool from watchdog
+ # (change requires restart)
+wd_lifecheck_dbname = 'template1'
+ # Database name connected for lifecheck
+ # (change requires restart)
+wd_lifecheck_user = 'nobody'
+ # watchdog user monitoring pgpools in lifecheck
+ # (change requires restart)
+wd_lifecheck_password = ''
+ # Password for watchdog user in lifecheck
+ # If left empty, Pgpool-II first looks for the
+ # password in the pool_passwd file before using the empty password
+ # (change requires restart)
+
+# - Other pgpool Connection Settings -
+
+#other_pgpool_hostname0 = 'host0'
+ # Host name or IP address to connect to for other pgpool 0
+ # (change requires restart)
+#other_pgpool_port0 = 5432
+ # Port number for other pgpool 0
+ # (change requires restart)
+#other_wd_port0 = 9000
+ # Port number for other watchdog 0
+ # (change requires restart)
+#other_pgpool_hostname1 = 'host1'
+#other_pgpool_port1 = 5432
+#other_wd_port1 = 9000
+
+
+#------------------------------------------------------------------------------
+# OTHERS
+#------------------------------------------------------------------------------
+relcache_expire = 0
+ # Life time of relation cache in seconds.
+ # 0 means no cache expiration (the default).
+ # The relation cache is used to cache the
+ # results of queries against the PostgreSQL
+ # system catalogs, which are used to obtain
+ # various information including table
+ # structures and whether a table is temporary.
+ # The cache is maintained in a pgpool child
+ # process's local memory and is kept as long
+ # as the process survives.
+ # If a table is modified with ALTER TABLE or
+ # the like, the relcache is no longer consistent.
+ # For this purpose, relcache_expire controls
+ # the life time of the cache.
+relcache_size = 256
+ # Number of relation cache
+ # entries. If you frequently see
+ # "pool_search_relcache: cache replacement happened"
+ # in the pgpool log, you might want to increase this number.
+
+check_temp_table = catalog
+ # Temporary table check method. catalog, trace or none.
+ # Default is catalog.
+
+check_unlogged_table = on
+ # If on, enable unlogged table check in SELECT statements.
+ # This initiates queries against the system catalog of the primary/master,
+ # thus increasing the load on the master.
+ # If you are absolutely sure that your system never uses unlogged tables
+ # and you want to save access to the primary/master, you can turn this off.
+ # Default is on.
+enable_shared_relcache = on
+ # If on, the relation cache is stored in the memory cache
+ # and shared among child processes.
+ # Default is on.
+ # (change requires restart)
+
+relcache_query_target = master # Target node to send relcache queries. Default is master (primary) node.
+ # If load_balance_node is specified, queries will be sent to load balance node.
+#------------------------------------------------------------------------------
+# IN MEMORY QUERY MEMORY CACHE
+#------------------------------------------------------------------------------
+memory_cache_enabled = off
+ # If on, use the memory cache functionality, off by default
+ # (change requires restart)
+memqcache_method = 'shmem'
+ # Cache storage method. either 'shmem'(shared memory) or
+ # 'memcached'. 'shmem' by default
+ # (change requires restart)
+memqcache_memcached_host = 'localhost'
+ # Memcached host name or IP address. Mandatory if
+ # memqcache_method = 'memcached'.
+ # Defaults to localhost.
+ # (change requires restart)
+memqcache_memcached_port = 11211
+ # Memcached port number. Mandatory if memqcache_method = 'memcached'.
+ # Defaults to 11211.
+ # (change requires restart)
+memqcache_total_size = 67108864
+ # Total memory size in bytes for storing memory cache.
+ # Mandatory if memqcache_method = 'shmem'.
+ # Defaults to 64MB.
+ # (change requires restart)
+memqcache_max_num_cache = 1000000
+ # Total number of cache entries. Mandatory
+ # if memqcache_method = 'shmem'.
+ # Each cache entry consumes 48 bytes on shared memory.
+ # Defaults to 1,000,000(45.8MB).
+ # (change requires restart)
+memqcache_expire = 0
+ # Memory cache entry life time specified in seconds.
+ # 0 means infinite life time. 0 by default.
+ # (change requires restart)
+memqcache_auto_cache_invalidation = on
+ # If on, invalidation of query cache is triggered by corresponding
+ # DDL/DML/DCL(and memqcache_expire). If off, it is only triggered
+ # by memqcache_expire. on by default.
+ # (change requires restart)
+memqcache_maxcache = 409600
+ # Maximum SELECT result size in bytes.
+ # Must be smaller than memqcache_cache_block_size. Defaults to 400KB.
+ # (change requires restart)
+memqcache_cache_block_size = 1048576
+ # Cache block size in bytes. Mandatory if memqcache_method = 'shmem'.
+ # Defaults to 1MB.
+ # (change requires restart)
+memqcache_oiddir = '/var/log/pgpool/oiddir'
+ # Temporary work directory to record table oids
+ # (change requires restart)
+white_memqcache_table_list = ''
+ # Comma separated list of table names to memcache
+ # that don't write to database
+ # Regexp are accepted
+black_memqcache_table_list = ''
+ # Comma separated list of table names not to memcache
+ # that don't write to database
+ # Regexp are accepted
# take effect.
#
+#------------------------------------------------------------------------------
+# BACKEND CLUSTERING MODE
+# Choose one of: 'streaming_replication', 'native_replication',
+# 'logical_replication', 'slony' or 'raw'
+# (change requires restart)
+#------------------------------------------------------------------------------
+backend_clustering_mode = 'native_replication'
#------------------------------------------------------------------------------
# CONNECTIONS
# REPLICATION MODE
#------------------------------------------------------------------------------
-replication_mode = on
- # Activate replication mode
- # (change requires restart)
replicate_select = off
# Replicate SELECT statements
# when in replication mode
# take effect.
#
+#------------------------------------------------------------------------------
+# BACKEND CLUSTERING MODE
+# Choose one of: 'streaming_replication', 'native_replication',
+# 'logical_replication', 'slony' or 'raw'
+# (change requires restart)
+#------------------------------------------------------------------------------
+backend_clustering_mode = 'slony'
#------------------------------------------------------------------------------
# CONNECTIONS
statement_level_load_balance = off
# Enables statement level load balancing
-#------------------------------------------------------------------------------
-# MASTER/SLAVE MODE
-#------------------------------------------------------------------------------
-
-master_slave_mode = on
- # Activate master/slave mode
- # (change requires restart)
-master_slave_sub_mode = 'slony'
- # Master/slave sub mode
- # Valid values are combinations stream, slony
- # or logical. Default is stream.
- # (change requires restart)
-
# - Streaming -
sr_check_period = 0
# take effect.
#
+#------------------------------------------------------------------------------
+# BACKEND CLUSTERING MODE
+# Choose one of: 'streaming_replication', 'native_replication',
+# 'logical_replication', 'slony' or 'raw'
+# (change requires restart)
+#------------------------------------------------------------------------------
+backend_clustering_mode = 'streaming_replication'
#------------------------------------------------------------------------------
# CONNECTIONS
statement_level_load_balance = off
# Enables statement level load balancing
-#------------------------------------------------------------------------------
-# MASTER/SLAVE MODE
-#------------------------------------------------------------------------------
-
-master_slave_mode = on
- # Activate master/slave mode
- # (change requires restart)
-master_slave_sub_mode = 'stream'
- # Master/slave sub mode
- # Valid values are combinations stream, slony
- # or logical. Default is stream.
- # (change requires restart)
-
# - Streaming -
sr_check_period = 10
SAMPLE_CONF=$PGPOOLDIR/pgpool.conf.sample-stream
;;
n ) MODENAME="raw mode"
- SAMPLE_CONF=$PGPOOLDIR/pgpool.conf.sample
+ SAMPLE_CONF=$PGPOOLDIR/pgpool.conf.sample-raw
;;
l ) MODENAME="logical replication mode"
SAMPLE_CONF=$PGPOOLDIR/pgpool.conf.sample-logical
;;
y ) MODENAME="slony mode"
- SAMPLE_CONF=$PGPOOLDIR/pgpool.conf.sample-master-slave
+ SAMPLE_CONF=$PGPOOLDIR/pgpool.conf.sample-slony
;;
esac
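The C hunk below replaces the old boolean flags with the new backend_clustering_mode enum field on pool_config. As a rough illustration only, here is a minimal, self-contained C sketch of a clustering-mode enum and the string-to-enum lookup a configuration parser might perform. Only the CM_NATIVE_REPLICATION constant is confirmed by this patch; the other constant names, the lookup_clustering_mode() helper and the main() driver are assumptions that simply mirror the backend_clustering_mode values used in the sample files above, not the actual Pgpool-II source.

/*
 * Illustrative sketch only -- not the actual Pgpool-II implementation.
 * Assumed enum member names mirror the sample backend_clustering_mode
 * values; CM_NATIVE_REPLICATION is the only constant confirmed by the
 * hunk that follows.
 */
#include <stdio.h>
#include <string.h>

typedef enum
{
	CM_STREAMING_REPLICATION,
	CM_NATIVE_REPLICATION,
	CM_LOGICAL_REPLICATION,
	CM_SLONY,
	CM_RAW
} ClusteringMode;

/* Map a backend_clustering_mode string from pgpool.conf to the enum. */
static int
lookup_clustering_mode(const char *value, ClusteringMode *mode)
{
	static const struct
	{
		const char     *name;
		ClusteringMode  mode;
	} table[] = {
		{"streaming_replication", CM_STREAMING_REPLICATION},
		{"native_replication", CM_NATIVE_REPLICATION},
		{"logical_replication", CM_LOGICAL_REPLICATION},
		{"slony", CM_SLONY},
		{"raw", CM_RAW},
	};

	for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
	{
		if (strcmp(value, table[i].name) == 0)
		{
			*mode = table[i].mode;
			return 0;
		}
	}
	return -1;					/* unknown mode string */
}

int
main(void)
{
	ClusteringMode mode;

	/* e.g. backend_clustering_mode = 'native_replication' */
	if (lookup_clustering_mode("native_replication", &mode) == 0)
		printf("backend_clustering_mode resolves to enum value %d\n", mode);
	return 0;
}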
MemoryContextInit();
- pool_config->replication_mode = 1;
+ pool_config->backend_clustering_mode = CM_NATIVE_REPLICATION;
if (argc != 2)
{