So within the last week I have been having some serious trouble with my site, and I am starting to think it may be DB related, seeing that I have eliminated just about all other issues.
Perhaps you can take a look and see what you can suggest, as it has been ages since I did anything with a live DB.
So, some background. Last Monday the server just stopped responding and we had to have Rackspace restart it because it was completely locked up. The report came back that it had run out of resources. This is hard to believe, as it has 4 GB of memory, so I thought I would eliminate each issue individually and see if I can figure out exactly what the problem is.
So here we are, and my guess is that because of the high swap rate, the DB is consuming so much resource that it is spiking the memory and locking the server up. Here is what the analyzer reports:
Uptime is less than 1 day; performance tuning may not be accurate. To get more accurate averages, it is recommended to let the server run for longer than a day before running this analyzer.
There are lots of rows being sorted.
While there is nothing wrong with a high amount of row sorting, you might want to make sure that the queries which require a lot of sorting use indexed columns in the ORDER BY clause, as this will result in much faster sorting.
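For what it's worth, here is the kind of fix that advisory is talking about. The table and column names below are just placeholders, not anything from your actual schema:

```sql
-- Hypothetical example: a query that sorts a lot of rows.
-- Without a suitable index, MySQL has to do a filesort.
SELECT id, title
FROM posts
WHERE status = 'publish'
ORDER BY created_at DESC
LIMIT 20;

-- A composite index on (filter column, sort column) lets MySQL
-- read the rows already in sorted order and skip the filesort:
ALTER TABLE posts ADD INDEX idx_status_created (status, created_at);
```

You can confirm the difference by running EXPLAIN on the query before and after: "Using filesort" should disappear from the Extra column once the index covers the ORDER BY.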
There are too many joins without indexes. This means that joins are doing full table scans. Adding indexes for the columns being used in the join conditions will greatly speed up table joins
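To see whether a given join is the culprit, EXPLAIN is your friend. Again, these table names are only illustrative:

```sql
-- Hypothetical join between two placeholder tables.
-- In EXPLAIN output, type=ALL on a joined table means a full scan per join.
EXPLAIN SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id;

-- If orders.customer_id has no index, every row of orders is scanned
-- for the join. Indexing the join column fixes that:
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);
```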
The rate of reading the first index entry is high.
This usually indicates frequent full index scans. Full index scans are faster than table scans but require lots of CPU cycles on big tables. If those tables have, or had, high volumes of UPDATEs and DELETEs, running 'OPTIMIZE TABLE' might reduce the number of full index scans and/or speed them up. Other than that, full index scans can only be reduced by rewriting queries.
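Since your tables are presumably MyISAM (the key buffer advisory further down suggests so), the OPTIMIZE TABLE step is straightforward. Table and database names here are placeholders; note that OPTIMIZE locks the table while it runs, so do it during a quiet period:

```sql
-- Rebuild a heavily-updated table to defragment data and re-sort indexes:
OPTIMIZE TABLE posts;

-- Or, from the shell, optimize every table in a database:
-- mysqlcheck -o your_database -u root -p
```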
The rate of reading data from a fixed position is high. This indicates that many queries need to sort results and/or do a full table scan, including join queries that do not use indexes. Add indexes where applicable.
The rate of reading the next table row is high. This indicates that many queries are doing full table scans. Add indexes where applicable.
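Before adding indexes blindly, it helps to find which queries are actually doing the scans. The status counters below are real MySQL names; the slow-log settings are a sketch using the MySQL 5.x option names, and the threshold is just a guess:

```sql
-- Handler_read_rnd_next counts rows read sequentially during full table
-- scans; if it climbs rapidly under load, some query is scanning.
SHOW GLOBAL STATUS LIKE 'Handler_read%';

-- To catch the offending queries, enable the slow query log in my.cnf
-- (MySQL 5.0/5.1 option names shown as comments):
--   log-slow-queries = /var/log/mysql/slow.log
--   long_query_time  = 2
--   log-queries-not-using-indexes
```

Queries that appear in that log with no index use are the ones worth running through EXPLAIN.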
Many temporary tables are being written to disk instead of being kept in memory. Increasing max_heap_table_size and tmp_table_size might help. However, some temporary tables are always written to disk regardless of the value of these variables; to eliminate those you will have to rewrite your queries to avoid the conditions that force an on-disk temporary table (presence of a BLOB or TEXT column, or of a column bigger than 512 bytes, within the temporary table), as mentioned in the MySQL documentation and in an article by the Pythian Group.
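A minimal my.cnf sketch for that advisory — the 64M values are guesses, not recommendations, so size them to your workload and the 4 GB of RAM:

```ini
# my.cnf sketch. The in-memory temp table limit is the SMALLER of these
# two values, so they should be raised together.
[mysqld]
tmp_table_size      = 64M
max_heap_table_size = 64M
```

You can check whether it helped by comparing Created_tmp_disk_tables against Created_tmp_tables in SHOW GLOBAL STATUS before and after: the disk fraction should drop.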
MyISAM key buffer (index cache) % used is low.
You may need to decrease the size of key_buffer_size, re-examine your tables to see if indexes have been removed, or examine queries and expectations about what indexes are being used.
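Before shrinking key_buffer_size, it is worth measuring how much of it is actually used. These status and variable names are standard MySQL ones:

```sql
-- How much of the MyISAM key buffer is in use:
SHOW GLOBAL STATUS LIKE 'Key_blocks%';
SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';

-- Rough usage estimate (key_cache_block_size is 1024 bytes by default):
-- used % = 100 * (1 - (Key_blocks_unused * key_cache_block_size)
--                      / key_buffer_size)
```

If usage stays very low even under normal traffic, the buffer is oversized and that memory could go elsewhere.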
The rate of opening tables is high. Opening tables requires disk I/O which is costly. Increasing table_open_cache might avoid this.
The rate of opening files is high. Consider increasing open_files_limit, and check the error log when restarting after changing open_files_limit.
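For the last two advisories together, a hedged my.cnf sketch — the numbers are illustrative only, and on this vintage of MySQL the cache variable may be spelled differently:

```ini
# my.cnf sketch. Values are examples, not recommendations.
[mysqld]
table_open_cache = 2048    # named table_cache before MySQL 5.1.3
open_files_limit = 8192    # also check the OS limit with `ulimit -n`
```

Watch Opened_tables in SHOW GLOBAL STATUS after the change; if it keeps climbing fast, the cache is still too small.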
Too many clients are aborted. Clients are usually aborted when they did not close their connection to MySQL properly. This can be due to network issues or code not closing a database handler properly. Check your network and code.
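To see how bad the aborted-client problem is and which timeouts are in play, a quick check (all standard MySQL names):

```sql
-- Connections dropped without a proper close, and failed connect attempts:
SHOW GLOBAL STATUS LIKE 'Aborted_clients';
SHOW GLOBAL STATUS LIKE 'Aborted_connects';

-- The timeouts that govern idle connections (wait_timeout etc.):
SHOW GLOBAL VARIABLES LIKE '%timeout%';
```

If Aborted_clients grows while the site is healthy, the likely cause is application code that never closes its DB handle, rather than the network.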