MySQL Connection Pool Configuration: 5 Steps to High Concurrency

This article explains, in five steps, how to set up a MySQL connection pool that can handle high concurrency. It covers parameter tuning, resource management, and performance monitoring, helping developers and system administrators improve database response times and avoid connection bottlenecks. It is intended for teams that need to get more performance out of their databases.

Why is a pool needed?

Picture this scenario: a website suddenly gets a flood of users, database connections are created and torn down at a furious pace, and eventually the server grinds to a halt. A connection pool acts like a "butler": it prepares a fixed number of connections in advance, hands them out as needed, and takes them back when they are no longer in use. This avoids the cost of repeatedly opening and closing connections and keeps the database from being overwhelmed by an explosion in connection count.

Preparation for deployment.

Get to know your own database.

First, check the current maximum number of connections with SHOW VARIABLES LIKE 'max_connections'. Don't just accept the default value; weigh it against the server's memory, since each connection can occupy roughly 8 MB, and work out how many the machine can actually afford.
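
If you'd rather script this check than run it by hand, here is a minimal plain-JDBC sketch (the URL and credentials are placeholders, and the 8 MB figure is only the rough estimate mentioned above):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MaxConnectionsCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials, replace with your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mysql", "root", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW VARIABLES LIKE 'max_connections'")) {
            if (rs.next()) {
                int maxConnections = Integer.parseInt(rs.getString("Value"));
                // Rough budget: ~8 MB per connection, per the estimate above.
                System.out.println("max_connections = " + maxConnections
                        + ", rough memory budget ~ " + (maxConnections * 8) + " MB");
            }
        }
    }
}
```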

Choose the right connection pool tool.

HikariCP and Druid each have their own strengths: HikariCP is fast, while Druid shines at monitoring. If you're on Spring Boot, the default HikariCP is the hassle-free choice; if you need deep monitoring, switch to Druid.
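
As a rough illustration, here is a minimal plain-Java sketch of bringing up a HikariCP pool (connection details and pool sizes are placeholders; with Spring Boot you would normally set the same values through application properties instead):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource buildPool() {
        HikariConfig config = new HikariConfig();
        // Placeholder connection details, adjust for your environment.
        config.setJdbcUrl("jdbc:mysql://localhost:3306/app");
        config.setUsername("app");
        config.setPassword("secret");
        config.setMaximumPoolSize(30); // upper bound, tuned in the steps below
        config.setMinimumIdle(10);     // connections kept warm between bursts
        return new HikariDataSource(config);
    }
}
```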

Five steps to the key parameters.

Don't open too many connections at startup.

Set initialSize to roughly 1.5 times your everyday demand. For instance, if 20 connections are normally enough, start with 30. Don't spin up 100 right away, or you'll burn memory and slow down startup.
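
A hedged sketch of this step using Druid, whose property names match the ones used in this article (the figures assume the 20-connection everyday demand from the example):

```java
import com.alibaba.druid.pool.DruidDataSource;

public class Step1InitialSize {
    // Warm the pool to ~1.5x everyday demand (20 -> 30), as recommended above.
    static void applyInitialSize(DruidDataSource ds) {
        ds.setInitialSize(30); // connections created when the pool starts
        ds.setMinIdle(20);     // keep the everyday baseline warm afterwards
    }
}
```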

Leave headroom in the maximum connection count.

Set maxActive about 20% below the database's connection limit. For example, if MySQL allows 200 connections, cap the pool at 160. That buffer leaves room for other programs, such as maintenance scripts, that may suddenly need to connect.
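
A small sketch of the same rule, again with Druid's maxActive (the 200-connection limit is just the example figure from above):

```java
import com.alibaba.druid.pool.DruidDataSource;

public class Step2MaxActive {
    // Cap the pool at roughly 80% of MySQL's max_connections (200 -> 160).
    static void applyMaxActive(DruidDataSource ds, int mysqlMaxConnections) {
        int maxActive = (int) (mysqlMaxConnections * 0.8);
        ds.setMaxActive(maxActive);
    }
}
```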

Set a sensible idle timeout.

Recycle a connection after it has been idle for 30 seconds. Set the timeout too short and you'll be rebuilding connections constantly; set it too long and you risk "zombie" connections that occupy slots without doing any work.
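
In Druid terms, the 30-second idle rule might look like this (a sketch; the property name differs between pool implementations):

```java
import com.alibaba.druid.pool.DruidDataSource;

public class Step3IdleTimeout {
    // Recycle connections that have sat idle for more than 30 seconds.
    static void applyIdleTimeout(DruidDataSource ds) {
        ds.setMinEvictableIdleTimeMillis(30_000); // idle > 30s -> eligible for eviction
    }
}
```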

Don't skip the keep-alive mechanism.

Enable testWhileIdle and have a heartbeat check run about once a minute. That way, broken connections are detected and closed in time, and requests are never sent to a "dead" connection.
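
A hedged Druid sketch of the keep-alive step ("SELECT 1" as the heartbeat query is an assumption, not something the article prescribes):

```java
import com.alibaba.druid.pool.DruidDataSource;

public class Step4KeepAlive {
    // Validate idle connections so dead ones are dropped before they are handed out.
    static void applyKeepAlive(DruidDataSource ds) {
        ds.setTestWhileIdle(true);                   // check connections that have been idle
        ds.setValidationQuery("SELECT 1");           // assumed lightweight heartbeat query
        ds.setTimeBetweenEvictionRunsMillis(60_000); // background check roughly every minute
        ds.setKeepAlive(true);                       // actively keep idle connections alive
    }
}
```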

Monitoring has to keep up.

Use Druid's built-in monitoring page or Prometheus + Grafana to keep an eye on the number of active connections and the number of requests waiting for one. If the wait queue is frequently long, either raise maxActive or check right away whether slow SQL queries are clogging the pool.
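
If you are not using Druid's web console, the pool can also be sampled in code; this hypothetical helper simply prints the counters you would otherwise feed into Prometheus or Grafana:

```java
import com.alibaba.druid.pool.DruidDataSource;

public class PoolMonitor {
    // Sample the pool: a persistently high "waiting" count means maxActive is too low
    // or slow SQL is hogging connections, as described above.
    static void logPoolStats(DruidDataSource ds) {
        System.out.printf("active=%d, idle=%d, waiting=%d%n",
                ds.getActiveCount(),       // connections currently handed out
                ds.getPoolingCount(),      // idle connections sitting in the pool
                ds.getWaitThreadCount());  // threads blocked waiting for a connection
    }
}
```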

A guide to avoiding pitfalls.

I once saw someone set maxActive to the same value as the database's max_connections, and as a result maintenance scripts could no longer connect at all. Another pitfall: the "remove abandoned" feature wasn't configured, a program bug left connections unclosed, the pool was exhausted, and the whole service went down. Learn from these mistakes rather than repeating them.
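
To avoid the second pitfall, Druid's remove-abandoned feature can be switched on; a sketch (the 180-second threshold is an assumed value, tune it to your longest legitimate transaction):

```java
import com.alibaba.druid.pool.DruidDataSource;

public class AbandonedGuard {
    // Reclaim connections that buggy code forgot to close, and log the offender.
    static void applyRemoveAbandoned(DruidDataSource ds) {
        ds.setRemoveAbandoned(true);       // take back leaked connections
        ds.setRemoveAbandonedTimeout(180); // assumed: treat checkouts longer than 180s as leaks
        ds.setLogAbandoned(true);          // log the stack trace of the code that leaked it
    }
}
```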

Let the results speak for themselves.

Use JMeter to simulate 100 concurrent users and compare QPS (queries per second) and average response time before and after the new configuration. QPS should rise by more than 30%, and the 95th percentile of requests should be answered within milliseconds. If the improvement isn't obvious, go back and check whether slow queries were left unoptimized or the pool parameters were set incorrectly.