<productname>Pgpool-II</productname>, <emphasis>they are
blocked (not rejected with an error, like <productname>PostgreSQL</>)
until a connection to any <productname>Pgpool-II</productname>
- process is closed</emphasis>. Up to
+ process is closed</emphasis> unless <xref linkend="guc-reserved-connections">
+   is set to 1 or more. Up to
<xref linkend="guc-listen-backlog-multiplier">*
num_init_children can be queued.
-->
The number of preforked <productname>Pgpool-II</productname> server processes.
The default value is 32.
num_init_children is also the upper limit on the number of concurrent client connections to <productname>Pgpool-II</productname>.
-If more clients than num_init_children attempt to connect to <productname>Pgpool-II</productname>, <emphasis>they are blocked until a connection to any <productname>Pgpool-II</productname> process is closed (they are not rejected with an error, as in <productname>PostgreSQL</>)</emphasis>.
+If more clients than num_init_children attempt to connect to <productname>Pgpool-II</productname>, <emphasis>they are blocked until a connection to any <productname>Pgpool-II</productname> process is closed (they are not rejected with an error, as in <productname>PostgreSQL</>)</emphasis>, unless <xref linkend="guc-reserved-connections"> is set to 1 or more.
Up to <xref linkend="guc-listen-backlog-multiplier"> * num_init_children connection attempts can be queued.
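For example, with <xref linkend="guc-listen-backlog-multiplier"> = 2 and num_init_children = 32, up to 64 connection attempts can wait in the queue.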
</para>
This parameter can only be set at server start.
</para>
</listitem>
- </varlistentry>
- </variablelist>
+ </varlistentry>
+
+ <varlistentry id="guc-reserved-connections" xreflabel="reserved_connections">
+ <term><varname>reserved_connections</varname> (<type>integer</type>)
+ <indexterm>
+<!--
+	<primary><varname>reserved_connections</varname> configuration parameter</primary>
+-->
+	<primary><varname>reserved_connections</varname> configuration parameter</primary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+<!--
+        When this parameter is set to 1 or greater, incoming
+        connections from clients are rejected with the error
+        message "Sorry, too many clients already", rather than
+        blocked, if the number of current connections from
+        clients is more than (<xref linkend="guc-num-init-children"> -
+        <varname>reserved_connections</varname>). For example,
+        if <varname>reserved_connections</varname> = 1
+        and <xref linkend="guc-num-init-children"> = 32, then the
+        32nd connection from a client will be refused. This
+        behavior is similar to that
+        of <productname>PostgreSQL</productname> and is useful
+        for systems on which the number of connections from
+        clients is large and each session may take a long time.
+        On such systems the listen queue can grow very long and
+        may make the system unstable. In that situation, setting
+        this parameter to a non-zero value is a good idea to
+        prevent the listen queue from becoming very long.
+-->
+When this parameter is set to 1 or greater, connections from clients in excess of (<xref linkend="guc-num-init-children"> - <varname>reserved_connections</varname>) are not blocked but rejected with the error "Sorry, too many clients already".
+For example, if <varname>reserved_connections</varname> = 1 and <xref linkend="guc-num-init-children"> = 32, the 32nd connection from a client is refused.
+This behavior is similar to that of <productname>PostgreSQL</productname> and is useful for systems where the number of client connections is large and each session may last a long time.
+On such systems the listen queue can grow very long and may make the system unstable.
+In that situation it is a good idea to set this parameter to a non-zero value to keep the listen queue from growing too long.
+ </para>
+ <para>
+<!--
+ If this parameter is set to 0, no connection from clients
+ will be refused. The default value is 0.
+ This parameter can only be set at server start.
+-->
+If this parameter is set to 0, no connections from clients are refused.
+The default value is 0.
+This parameter can only be set at server start.
+ </para>
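+       <para>
+        For example, the behavior described above corresponds to the
+        following settings in <filename>pgpool.conf</filename>:
+<programlisting>
+num_init_children = 32
+reserved_connections = 1
+</programlisting>
+        With these settings, up to 31 clients can be connected at the
+        same time; the 32nd connection attempt is refused with
+        "Sorry, too many clients already".
+       </para>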
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
</sect2>
<sect2 id="runtime-config-authentication-settings">
blocked (not rejected with an error,
like <productname>PostgreSQL</productname>) until a
connection to any <productname>Pgpool-II</productname>
- process is closed</emphasis>. Up to
+ process is closed</emphasis>
+ unless <xref linkend="guc-reserved-connections"> is set
+ to 1 or more. Up to
<xref linkend="guc-listen-backlog-multiplier">*
num_init_children can be queued.
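For example, with <xref linkend="guc-listen-backlog-multiplier"> = 2 and
num_init_children = 32, up to 64 connection attempts can wait in the
queue.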
</para>
</para>
</listitem>
</varlistentry>
+
+ <varlistentry id="guc-reserved-connections" xreflabel="reserved_connections">
+ <term><varname>reserved_connections</varname> (<type>integer</type>)
+ <indexterm>
+       <primary><varname>reserved_connections</varname> configuration parameter</primary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+        When this parameter is set to 1 or greater, incoming
+        connections from clients are rejected with the error
+        message "Sorry, too many clients already", rather than
+        blocked, if the number of current connections from
+        clients is more than (<xref linkend="guc-num-init-children"> -
+        <varname>reserved_connections</varname>). For example,
+        if <varname>reserved_connections</varname> = 1
+        and <xref linkend="guc-num-init-children"> = 32, then the
+        32nd connection from a client will be refused. This
+        behavior is similar to that
+        of <productname>PostgreSQL</productname> and is useful
+        for systems on which the number of connections from
+        clients is large and each session may take a long time.
+        On such systems the listen queue can grow very long and
+        may make the system unstable. In that situation, setting
+        this parameter to a non-zero value is a good idea to
+        prevent the listen queue from becoming very long.
+ </para>
+ <para>
+ If this parameter is set to 0, no connection from clients
+ will be refused. The default value is 0.
+ This parameter can only be set at server start.
+ </para>
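+       <para>
+        For example, the behavior described above corresponds to the
+        following settings in <filename>pgpool.conf</filename>:
+<programlisting>
+num_init_children = 32
+reserved_connections = 1
+</programlisting>
+        With these settings, up to 31 clients can be connected at the
+        same time; the 32nd connection attempt is refused with
+        "Sorry, too many clients already".
+       </para>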
+ </listitem>
+ </varlistentry>
</variablelist>
</sect2>
NULL, NULL, NULL
},
+ {
+ {"reserved_connections", CFGCXT_INIT, CONNECTION_POOL_CONFIG,
+ "Number of reserved connections.",
+ CONFIG_VAR_TYPE_INT, false, 0
+ },
+ &g_pool_config.reserved_connections,
+ 0,
+ 0, INT_MAX,
+ NULL, NULL, NULL
+ },
+
{
{"listen_backlog_multiplier", CFGCXT_INIT, CONNECTION_CONFIG,
"length of connection queue from frontend to pgpool-II",
int num_init_children; /* # of children initially pre-forked */
int listen_backlog_multiplier; /* determines the size of the
* connection queue */
+ int reserved_connections; /* # of reserved connections */
bool serialize_accept; /* if non 0, serialize call to accept() to
* avoid thundering herd problem */
int child_life_time; /* if idle for this seconds, child exits */
static RETSIGTYPE authentication_timeout(int sig);
static void send_params(POOL_CONNECTION * frontend, POOL_CONNECTION_POOL * backend);
static void send_frontend_exits(void);
-static void connection_count_up(void);
+static int connection_count_up(void);
static void connection_count_down(void);
static bool connect_using_existing_connection(POOL_CONNECTION * frontend,
POOL_CONNECTION_POOL * backend,
StartupPacket *sp;
int front_end_fd;
SockAddr saddr;
+ int con_count;
/* reset per iteration memory context */
MemoryContextSwitchTo(ProcessLoopContext);
if (front_end_fd == RETRY)
continue;
-		connection_count_up();
+		/*
+		 * Check if the maximum number of connections from clients has
+		 * been exceeded.  If reserved_connections is 0, clients are
+		 * never refused, so the check is skipped.
+		 */
+		con_count = connection_count_up();
+		if (pool_config->reserved_connections > 0 &&
+			con_count > (pool_config->num_init_children - pool_config->reserved_connections))
+		{
+			POOL_CONNECTION *cp;
+
+			cp = pool_open(front_end_fd, false);
+			if (cp == NULL)
+			{
+				connection_count_down();
+				continue;
+			}
+			connection_count_down();
+
+			/* tell the client why the connection is being refused */
+			pool_send_fatal_message(cp, 3, "53300",
+									"Sorry, too many clients already",
+									"",
+									"",
+									__FILE__, __LINE__);
+			pool_close(cp);
+
+			/* ereport(ERROR) does not return; it jumps to the error handler */
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_CONNECTIONS),
+					 errmsg("Sorry, too many clients already")));
+		}
+
accepted = 1;
check_config_reload();
}
/*
- * Count up connection counter (from frontend to pgpool)
- * in shared memory
+ * Increment the connection counter (from frontend to pgpool) in shared
+ * memory and return the resulting value, captured while the lock is held.
+ * Note that the returned value may already be outdated by the time the
+ * caller inspects it, since the lock has been released in the meantime.
*/
-static void
+static int
connection_count_up(void)
{
	pool_sigset_t oldmask;
+	int			counter;

	POOL_SETMASK2(&BlockSig, &oldmask);
	pool_semaphore_lock(CONN_COUNTER_SEM);
	Req_info->conn_counter++;
+	/* capture the value while the lock is held; re-reading it after
+	 * unlocking would be racy */
+	counter = Req_info->conn_counter;
+	elog(DEBUG5, "connection_count_up: number of connected children: %d", counter);
	pool_semaphore_unlock(CONN_COUNTER_SEM);
	POOL_SETMASK(&oldmask);
+	return counter;
}
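/*
 * A self-contained sketch (illustrative only, not part of the patch) of
 * the pattern used above: increment a counter shared between forked
 * processes under a semaphore, and capture the value while the lock is
 * still held.  Re-reading the shared value after unlocking could return
 * a count already changed by another process.  Build with: cc -pthread
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <sys/wait.h>

typedef struct
{
	sem_t		lock;			/* process-shared semaphore */
	int			conn_counter;	/* shared connection counter */
} shared_state;

/* Increment the shared counter; return the value seen at increment time. */
static int
count_up(shared_state *st)
{
	int			counter;

	sem_wait(&st->lock);
	st->conn_counter++;
	counter = st->conn_counter;	/* captured under the lock */
	sem_post(&st->lock);
	return counter;				/* a private copy, not a racy re-read */
}

int
main(void)
{
	shared_state *st;
	int			i;

	/* anonymous shared mapping, inherited by all forked children */
	st = mmap(NULL, sizeof(*st), PROT_READ | PROT_WRITE,
			  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (st == MAP_FAILED)
		exit(1);
	sem_init(&st->lock, 1, 1);	/* pshared = 1: usable across processes */
	st->conn_counter = 0;

	for (i = 0; i < 4; i++)
	{
		if (fork() == 0)
		{
			printf("pid %d saw counter %d\n", (int) getpid(), count_up(st));
			_exit(0);
		}
	}
	while (wait(NULL) > 0)
		;						/* reap all children */
	printf("final counter: %d\n", st->conn_counter);
	return 0;
}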
/*
*/
if (Req_info->conn_counter > 0)
Req_info->conn_counter--;
+ elog(DEBUG5, "connection_count_down: number of connected children: %d", Req_info->conn_counter);
pool_semaphore_unlock(CONN_COUNTER_SEM);
POOL_SETMASK(&oldmask);
}
serialize_accept = off
# whether to serialize accept() call to avoid thundering herd problem
# (change requires restart)
+reserved_connections = 0
+                                   # Number of reserved connections.
+                                   # Pgpool-II does not accept new connections if the
+                                   # number of connections exceeds
+                                   # num_init_children - reserved_connections.
+                                   # (change requires restart)
# - pgpool Communication Manager Connection Settings -
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
+reserved_connections = 0
+                                   # Number of reserved connections.
+                                   # Pgpool-II does not accept new connections if the
+                                   # number of connections exceeds
+                                   # num_init_children - reserved_connections.
+                                   # (change requires restart)
# - pgpool Communication Manager Connection Settings -
serialize_accept = off
# whether to serialize accept() call to avoid thundering herd problem
# (change requires restart)
+reserved_connections = 0
+                                   # Number of reserved connections.
+                                   # Pgpool-II does not accept new connections if the
+                                   # number of connections exceeds
+                                   # num_init_children - reserved_connections.
+                                   # (change requires restart)
# - pgpool Communication Manager Connection Settings -
serialize_accept = off
# whether to serialize accept() call to avoid thundering herd problem
# (change requires restart)
+reserved_connections = 0
+                                   # Number of reserved connections.
+                                   # Pgpool-II does not accept new connections if the
+                                   # number of connections exceeds
+                                   # num_init_children - reserved_connections.
+                                   # (change requires restart)
# - pgpool Communication Manager Connection Settings -
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
+reserved_connections = 0
+                                   # Number of reserved connections.
+                                   # Pgpool-II does not accept new connections if the
+                                   # number of connections exceeds
+                                   # num_init_children - reserved_connections.
+                                   # (change requires restart)
# - pgpool Communication Manager Connection Settings -
StrNCpy(status[i].desc, "whether to serialize accept() call", POOLCONFIG_MAXDESCLEN);
i++;
+ StrNCpy(status[i].name, "reserved_connections", POOLCONFIG_MAXNAMELEN);
+ snprintf(status[i].value, POOLCONFIG_MAXVALLEN, "%d", pool_config->reserved_connections);
+ StrNCpy(status[i].desc, "number of reserved connections", POOLCONFIG_MAXDESCLEN);
+ i++;
+
StrNCpy(status[i].name, "max_pool", POOLCONFIG_MAXNAMELEN);
snprintf(status[i].value, POOLCONFIG_MAXVALLEN, "%d", pool_config->max_pool);
StrNCpy(status[i].desc, "max # of connection pool per child", POOLCONFIG_MAXDESCLEN);