Support » Plugin: NewStatPress » Statpress table large amount of data

  • Hi!
    I have been using NewStatPress since 2011, and now I get a notification problem when I make a backup because of the big amount of data in the xxxx_statpress table (it has a very high number of records: 3,195,864).

    Some time ago I purged the old data but lost the total count of visits, so since that day I haven’t purged anymore.

    Have you made any fixes to solve that problem, or will I still lose the first visit and the total amount of hits if I erase old data?

    I read in another post that the plugin counts all the stored log entries, so can I deduce that it still considers the first hits and all stored logs when counting visits?


    PS. If that is the case, wouldn’t it be simpler to have another table in which to store the first visit and the running total of visits, so that old data could be erased and the database kept lighter?

  • Plugin Author ice00



    Almost 99% of service providers limit the time a script can execute on the server side before killing it for using too much CPU (this limit is about 15 seconds on average).

    This has blocked every approach that would reduce DB usage by aggregating the data into other tables.

    I made many experiments in the past using my own DB (over 1.5 GB of data), but because the server kills the PHP process and interrupts the table migration, the procedure fails in an inconsistent state (e.g. missing data, or data duplicated between a table and its aggregate). Such scripts need even minutes of running time to convert such a big database 🙁

    The only remaining approach to a solution could be a cron job that runs for a few seconds every minute on the server and converts a piece of the DB at a time. It would maybe take weeks to convert the DB, but the problem is that not all service providers give free access to cron jobs, and some providers only sell that feature!!
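    One possible shape for such a cron job, as a sketch only (the archive table wp_statpress_archive and the batch size are hypothetical; this is not part of the plugin):

    ```sql
    -- Each cron run copies and then deletes one small batch of rows by
    -- primary-key range, so each statement finishes well within the
    -- provider's execution limit; the next run advances the range.
    INSERT INTO wp_statpress_archive
      SELECT * FROM wp_statpress WHERE id BETWEEN 1 AND 5000;
    DELETE FROM wp_statpress WHERE id BETWEEN 1 AND 5000;
    ```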

    A similar approach is to convert a piece of the DB at every page visit, but first, page generation would be delayed, and second, it would take months to convert the whole DB 🙁

    So at the moment you have two partial solutions. The first is to enable the automatic purge of spider visits. Spider visits are probably not vital data to keep, so deleting spider visits older than one month is a good way to halve the DB usage.

    The second is to purge the old data and use the offset value setting to show the sum of the deleted data in the totals (please note that in this version the offset is not working, as the sanitization required by WordPress has broken that feature, but we are working to fix it in the next version).

    So, if you had a total of, let’s say, 2,500,000 visits, and after pruning a year you are left with 1,000,000 visits, then you put 1,500,000 as the visits offset in the NewStatPress options.
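    The offset can be read straight from the row count that remains after the purge. A sketch, assuming the default wp_ table prefix (your installation may use a different one):

    ```sql
    -- visits_offset = old total before the purge minus the rows that remain,
    -- e.g. 2500000 - 1000000 = 1500000
    SELECT 2500000 - COUNT(*) AS visits_offset FROM wp_statpress;
    ```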

    Please note that both a big database and a database made lighter by pruning share a big hidden problem that can pop up.

    At DB level, the ID is defined as:

    id mediumint(9) NOT NULL AUTO_INCREMENT,

    which means it can store up to 8,388,607 records; after that, every insert fails with a hidden error and no data is added to the DB.
    Maybe when StatPress was created that seemed a big value, but I have already surpassed it on one of my sites!

    A solution for this seems easy:

    id mediumint(9) unsigned NOT NULL AUTO_INCREMENT,

    This doubles the capacity (16,777,215) at the same DB usage, by using an unsigned number instead of a signed one.

    The problem is still the service provider limits: such an instruction, at the DB level, needs minutes to run on a big database, so the provider blocks it, and the result is that you will not be able to see your page (as the table update check runs at every page visit!).
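    For reference, the instruction in question would look like this sketch (the table name wp_statpress assumes the default prefix); MySQL rebuilds the whole table to apply the type change, which is why it needs minutes on a big database:

    ```sql
    ALTER TABLE wp_statpress
      MODIFY id MEDIUMINT(9) UNSIGNED NOT NULL AUTO_INCREMENT;
    ```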

    To conclude, the only practical solution would be a cron job that keeps 3 months of data in the DB table and aggregates the rest into other tables, but this is not yet coded.

  • The topic ‘Statpress table large amount of data’ is closed to new replies.