• cfmack

    (@cfmack)


    I am not sure what the heaviest-trafficked WordPress site is (though I would be interested to find out). I run a site that receives 40,000 unique visitors a day. I have been running it on a dedicated server with RHEL4, MySQL 4.1, Apache 2, two 2.4 GHz Intel CPUs, and 4 GB of RAM. We regularly reach 500 visitors online (using wp_onlinecounter), with 697 being our max.

    Our main site spikes periodically throughout the day, regularly reaching a load average of 54. No matter how high I set the Apache MaxClients, it always goes 2 past the limit. Right now I have it set to 300, and I reach 302 and start hitting swap. (I set it to 500 once. BIG MISTAKE – CPU load hit 154+ and crashed my server.) Normally, we don’t hit swap.
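    For reference, the relevant prefork section of my httpd.conf currently looks roughly like this – only MaxClients/ServerLimit are the exact values mentioned above, the other numbers are just the ballpark I am running:

    <IfModule prefork.c>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    ServerLimit         300
    MaxClients          300
    MaxRequestsPerChild 4000
    </IfModule>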

    According to MyTop, I see a huge number of connections stuck in the “Sleep” state, which suggests to me that WordPress is not closing its connections (perhaps the reason for the spikes).
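    For what it’s worth, one way to have MySQL reap those idle connections itself would be to lower the idle timeouts in the [mysqld] section. A minimal sketch – the 30-second value is only illustrative, not something I have tested:

    wait_timeout=30
    interactive_timeout=30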

    I have turned off most of the plugins. I recently installed WP-Cache but have yet to see the results. I also updated to WordPress 2.0.5 and installed the “Referer Karma” plugin to watch for bots.

    Unfortunately, we need to use .htaccess rewrite rules and permalinks because my boss loves the SEO and the ease of use.
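    For context, the rewrite rules are essentially just the standard WordPress permalink block, roughly:

    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
    </IfModule>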

    With Apache, I have KeepAlive on with 100 max requests and a 15-second timeout.
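    In httpd.conf terms, that is:

    KeepAlive On
    MaxKeepAliveRequests 100
    KeepAliveTimeout 15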

    With MySQL, well, it’s better if I just include the config.

    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock

    log-slow-queries=/var/log/mysql_slow_query.log
    open-files-limit=1000

    query_cache_type=1
    query_cache_limit=2M
    query_cache_size=64M

    key_buffer=1024M

    set-variable = max_connections=3072
    set-variable = thread_cache=40
    set-variable = back_log=500
    set-variable = table_cache=256M
    set-variable = read_rnd_buffer_size=3M

    set-variable = max_allowed_packet=1M
    set-variable = max_connect_errors=999999

    skip-locking
    skip-name-resolve

    [mysql.server]
    user=mysql
    basedir=/var/lib

    [safe_mysqld]
    err-log=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid

    [isamchk]
    key_buffer=64M
    sort_buffer=64M
    read_buffer=16M
    write_buffer=16M

    [myisamchk]
    key_buffer=64M
    sort_buffer=64M
    read_buffer=16M
    write_buffer=16M

    [client]
    socket=/var/lib/mysql/mysql.sock

    I will not switch to pconnect because I am using only one local server, and pconnect will only make matters worse.

    I was thinking of explicitly hacking the core files to add a mysql_close() call to the db object and calling it at the very end of footer.php. That will only save a few milliseconds (if any).
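    A less invasive sketch would be to hook WordPress’s shutdown action from the theme’s functions.php (or a tiny plugin) instead of editing core. Untested, and the function name is just a placeholder:

    <?php
    // Close the MySQL link once WordPress has finished its work,
    // without editing core files. Assumes the old mysql extension and
    // that $wpdb->dbh holds the connection resource (as in WP 2.0.x).
    function cfmack_close_db_connection() {
        global $wpdb;
        if ( isset( $wpdb->dbh ) && is_resource( $wpdb->dbh ) ) {
            @mysql_close( $wpdb->dbh );
        }
    }
    add_action( 'shutdown', 'cfmack_close_db_connection' );
    ?>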

    I can’t find anything else to improve performance and prevent the periodic spikes. I am beating my head against the wall trying to get more out of this server and optimize my software (including WordPress).

    Any ideas? Is it normal for WordPress to spike the CPU like this?

    What is the most heavily trafficked WordPress site anyway? Perhaps they can give me some hints.

    Any help would be appreciated. Thanks, cmack AT madmanmedia DOT com

Viewing 5 replies - 1 through 5 (of 5 total)
  • Thread Starter cfmack

    (@cfmack)

    I also changed a few of the database tables to InnoDB, like wp_onlinecounter. Obviously, that had no effect once I turned off the plugin.
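    For anyone curious, the conversion itself is a one-liner per table (TYPE= is the MySQL 4.x spelling; newer versions use ENGINE=):

    ALTER TABLE wp_onlinecounter TYPE=InnoDB;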

    cfmack: Spikes like this suggest an attack of some sort, especially given that the load always climbs past whatever limit you set. Normal users would not be doing this – I’d focus on identifying heavy users from the logs and blocking them appropriately.
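    As a quick way of spotting those heavy users, something along these lines against the Apache access log usually does the job (use whatever your log path actually is); the worst offenders can then be blocked with a “Deny from” line in httpd.conf or .htaccess:

    # show the 20 most frequent client IPs in the access log
    awk '{print $1}' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head -20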

    On the WordPress front, I’ve developed a WordPress Throttling Plugin/API which may help with some of this. The question is really how much of the output you are willing to sacrifice for the sake of saving bandwidth/connections.

    At present, for example, it’s possible to use the throttle to temporarily replace images with links, or to redirect visitors to the Coral Cache CDN to take load off the server. It might even be possible to do this with a lot of site content (i.e. keep the main URL local, fetch images from the cache).
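    As a rough illustration of the Coral Cache idea (not the plugin’s actual code), a mod_rewrite rule can bounce image requests to the .nyud.net mirror of the same URL, while skipping Coral’s own fetches so it can still retrieve the originals. The hostname is a placeholder, and older Coral setups needed an explicit :8080/:8090 port:

    RewriteEngine On
    # don't redirect Coral's own fetches, or the originals become unreachable
    RewriteCond %{HTTP_USER_AGENT} !^CoralWebPrx
    RewriteCond %{QUERY_STRING} !(^|&)coral-no-serve$
    RewriteRule ^(.*\.(gif|jpe?g|png))$ http://www.example.com.nyud.net/$1 [R,L]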

    Another poster was recently asking about restricting multiple repetitive connections from a single IP, etc. – again, the ability to redirect badly behaving clients off your server and through the cache may be beneficial?

    We’ve started experiencing a similar problem. Was this one ever resolved? If so, how?

    Oops!!! And use lighttpd instead of Apache.

  • The topic ‘Heavy load with 40000 Unique Visitors’ is closed to new replies.