MySQL and Persistent Connections

  • I notice that WordPress is using the mysql_connect() function to establish a brand-new database connection on every page load. The mysql_pconnect() function uses the exact same API, but keeps connections alive between requests so that later page loads can reuse them instead of opening new ones. Generally, using mysql_pconnect() improves performance, because your PHP application doesn’t have to wait for a new connection to be made, and it reduces the number of connections that need to be open to the database.
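
    For anyone who wants to see what I mean, here is a minimal sketch of the swap (my own illustration, not a quote from wp-db.php; DB_HOST and friends are the usual wp-config.php constants):

    <?php
    // Current behavior: open a brand-new connection on every request.
    $dbh = mysql_connect(DB_HOST, DB_USER, DB_PASSWORD)
        or die('Could not connect: ' . mysql_error());

    // Proposed behavior: reuse a persistent connection if the Apache
    // child already holds one for this host/user/password combination.
    $dbh = mysql_pconnect(DB_HOST, DB_USER, DB_PASSWORD)
        or die('Could not connect: ' . mysql_error());

    mysql_select_db(DB_NAME, $dbh);
    ?>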

    I’ve been doing some very informal benchmarking on my server. Once I exceed ~70 simultaneous requests, WordPress is unable to establish more connections to the database and I receive a lot of errors (roughly an 80% failure rate at 200 simultaneous requests). When I switch mysql_connect() to mysql_pconnect(), the failure rate drops noticeably, down to about 0.5%, with no noticeable adverse effect on resource usage.

    Does anyone know if there was a deliberate design decision *not* to use persistent connections? If not, does anyone have a reason why I *shouldn’t* submit a bug or a patch to switch to the mysql_pconnect() function?

  • Before submitting a patch, tell us a little more about your informal test configuration. What are the settings for Apache and MySQL? How much memory is in the test server? Which theme were you using, and how many queries did that theme make per page?

    The persistent connection documentation at php.net has some interesting information, too.

    Thread Starter ryangwu82 (@ryangwu82)

    If it’s not obvious, I’m a web developer in the daytime too, though not a PHP developer. Every web application I’ve worked on has used database connection pooling with persistent connections. I’ve never seen an application which *didn’t* use it, which is why I was asking.

    I did take a look at the PHP documentation on persistent connections earlier this evening. As far as I can tell, there are a couple of drawbacks, but they shouldn’t apply to WordPress.

    Anyway, my tests were run using ApacheBench (ab) from a separate machine. All I was doing was requesting a page that displays an individual post. I was using a custom theme, and the page made 13 database queries per request. I started with 5 concurrent requests and went up from there in increments of 5; once I got above 50, I used increments of 10.

    I know this isn’t the most accurate load test ever written, but I was just playing around for fun when I noticed this issue. It seems to me that if I’m hitting a database connection problem on this simple page, a more comprehensive test wouldn’t fare any *better*. I figured I’d throw the idea out to the community to get some advice.

    Here are the rest of the specs on the test server: Dell PowerEdge with a single Pentium 4 processor (HyperThreading enabled), 1GB RAM, CentOS 4.1, Linux 2.6.9 kernel with SMP.

    Apache 2.0.52 is configured like this:
    StartServers 8
    MinSpareServers 5
    MaxSpareServers 20
    MaxClients 150
    MaxRequestsPerChild 100

    PHP 4.3.9 is configured to use the eAccelerator bytecode cache, version 0.9.3, with an 8MB memory limit per script. The MySQL module is configured with no limit on database connections (mysql.max_links) or persistent connections (mysql.max_persistent).

    MySQL 4.1.10a has, wow, a gazillion settings I could send. It’s set up with max_connections = 100 and max_user_connections = 0 (i.e. no per-user limit). Anything else in particular you might be interested in?

    During the testing, I restarted Apache and MySQL between each test. To “prime” the server, I would run about 200 hits through ApacheBench before running a particular test case. There were only ever 5 concurrent connections during that phase, so I don’t know whether that was really an accurate way to do it.

  • I’ve always been told that persistent connections are a bad thing and that they use far more resources, or something like that.

    However, I’m not good enough with servers and server configurations to make a judgement of my own.

  • Quoting a write-up on the issues with persistent connections: “First, each connection takes up resources. This includes MySQL internal resources (memory, table cache elements, connections) as well as OS-level resources – threads, open files and sockets. Some operating systems are quite OK with this, while others may show performance degradation. The MySQL resources are often the more important ones: with the same amount of memory that persistent connections would consume, you can usually configure MySQL to use larger buffers, which often gives extra performance. Will this outweigh the benefit from persistent connections? It completely depends on the application.

    Persistent connections can also get you into trouble if you’re not using them carefully enough. Some of the issues described here are handled automatically by good persistent-connection support, while others are not. If you’ve started a transaction but did not complete it, the result is unpredictable: it depends on whether the next user of the connection happens to commit or roll back first. Exactly the same issue happens with locked tables – they can be left locked stale, leading to various problems. Moreover, other thread-specific state is left behind: connection-specific variables (sort_buffer, record_buffer, …), server-level user variables (@my_variable), and state-dependent functions such as found_rows() and last_insert_id().”
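
    To make that hazard concrete, here is a rough sketch (my own illustration, with made-up credentials) of the defensive cleanup a reused persistent connection would need at the start of each request:

    <?php
    // Illustration only: reset state that a previous request may have
    // left behind on a reused persistent connection.
    $dbh = mysql_pconnect('localhost', 'wp_user', 'secret')
        or die('Could not connect: ' . mysql_error());

    // A request that died mid-transaction leaves the transaction open;
    // roll it back before doing anything else.
    mysql_query('ROLLBACK', $dbh);

    // A request that crashed while holding table locks leaves the
    // tables locked; release them.
    mysql_query('UNLOCK TABLES', $dbh);

    // Session state (@my_variable, sort_buffer_size, found_rows(),
    // last_insert_id(), ...) also survives, so nothing on this
    // connection should be assumed to have its default value.
    ?>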

    For the average Joe blogger, persistent connections would not be a good option, but adding support in, say, wp-config.php to enable them might be an excellent compromise for larger sites.
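
    As a sketch of that compromise (the WP_PERSISTENT_DB constant and the branching logic are hypothetical; this is not an existing WordPress option):

    <?php
    // In wp-config.php (hypothetical setting):
    define('WP_PERSISTENT_DB', true);

    // In the database layer, pick the connect call accordingly:
    if (defined('WP_PERSISTENT_DB') && WP_PERSISTENT_DB) {
        $dbh = mysql_pconnect(DB_HOST, DB_USER, DB_PASSWORD);
    } else {
        $dbh = mysql_connect(DB_HOST, DB_USER, DB_PASSWORD);
    }
    if (!$dbh) {
        die('Could not connect: ' . mysql_error());
    }
    mysql_select_db(DB_NAME, $dbh);
    ?>

    That way the default stays safe for shared hosts, and larger sites that have tuned max_connections can opt in.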

  • The topic ‘MySQL and Persistent Connections’ is closed to new replies.