Performance problem after upgrade 1.12 -> 2.0.3?

Hi!
We’ve been running Piwik for a month now and have tried to optimize it as much as possible by following the guides you’ve provided; we managed to get good speed, ~1200 recs/s.
Thanks to the nice speed we’ve been able to import a year’s worth of logs for a few sites, each site generating ~500 MB of logs per day.
Anyway, last Wednesday we noticed that we were a bit behind version-wise, so we decided to upgrade, and now we’ve noticed a BIG drop in performance, see below:
Purging Piwik archives for dates: 2014-01-25 2014-01-26
375474 requests imported successfully
Requests imported per second: 1306.18 requests per second
Purging Piwik archives for dates: 2014-01-27 2014-01-26
502795 requests imported successfully
Requests imported per second: 1166.89 requests per second
Purging Piwik archives for dates: 2014-01-27 2014-01-28
470100 requests imported successfully
Requests imported per second: 1247.19 requests per second
Purging Piwik archives for dates: 2014-01-28 2014-01-29
466384 requests imported successfully
Requests imported per second: 187.04 requests per second
Purging Piwik archives for dates: 2014-01-30 2014-01-29
502208 requests imported successfully
Requests imported per second: 185.03 requests per second
Purging Piwik archives for dates: 2014-01-30 2014-01-31
456583 requests imported successfully
Requests imported per second: 180.38 requests per second
Purging Piwik archives for dates: 2014-01-31 2014-02-01
292815 requests imported successfully
Requests imported per second: 208.74 requests per second

We’re almost exclusively importing IIS logs via import_logs.py (this is where we see the performance issue).
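For reference, the import command looks roughly like this (site ID, token, recorder count and file paths are of course specific to our setup):

python /var/www/html/misc/log-analytics/import_logs.py --url=http://our-piwik-host/ --idsite=1 --token-auth=<token> --recorders=8 --recorder-max-payload-size=200 --log-format-name=iis /logs/<site>/u_ex140128.log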

The setup:
Quad core Intel® Xeon® CPU E5-2609 v2 @ 2.50GHz
16 GB RAM
High-speed SAN disk for MySQL
MySQL 5.1.69 with the following settings:
symbolic-links=0
innodb_file_per_table
innodb_log_buffer_size=64M
innodb_log_file_size=512M
read_buffer_size=2M
sort_buffer_size=4M
innodb_flush_log_at_trx_commit=0
innodb_lock_wait_timeout=50
innodb_buffer_pool_size = 8589934592
default-storage-engine=innodb
key_buffer_size=67108864
query_cache_type=1
query_cache_size=67108864
query_cache_limit=2097152
thread_cache_size=128
table_open_cache=256

Server Version: Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3
APC Version 3.1.9

And the only thing that changed from January 28th to the 29th was that we upgraded from 1.12 to 2.0.3.
I’ve been digging around but can’t seem to figure out what’s wrong. Something seems to be taking unnecessary time during the import, because the load is much lower during imports now: we’ve gone from CPU at ~70-80% to 20-30%…

Do you have any idea what might have changed that could cause this?

Best regards,
Johan

Thanks for the report, I’m very interested in this kind of experience. Unfortunately I cannot think of anything that would have become slow between these two versions. While we’ve changed lots of things, we haven’t added complexity.

Maybe try disabling the Provider plugin and the DevicesDetection plugin and see: does it help?
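You can do that from the UI under Settings > Plugins, or, roughly, by removing the corresponding entries from the Plugins[] list in your config/config.ini.php if that section is present in your local config (the exact list depends on your install):

[Plugins]
; ...keep the rest of the Plugins[] list unchanged...
; Plugins[] = "Provider"           <- commented out to disable Provider
; Plugins[] = "DevicesDetection"   <- commented out to disable DevicesDetection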

Wow, fast response! Thanks!
Anyway, I tried disabling Provider (DevicesDetection was already off) and it got slightly better, 300-400 recs/s.
But I’ve noticed one thing: when I strace the children of import_logs.py they don’t do much work and just spit out a lot of lines like these:
futex(0x26b0c40, FUTEX_WAIT_PRIVATE, 0, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x26b0c40, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x26b0c40, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x26b0c40, FUTEX_WAIT_PRIVATE, 0, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x26b0c40, FUTEX_WAIT_PRIVATE, 0, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x26b0c40, FUTEX_WAIT_PRIVATE, 0, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x26b0c40, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x26b0c40, FUTEX_WAIT_PRIVATE, 0, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x7f4978006100, FUTEX_WAIT_PRIVATE, 0, NULL) = 0

My first guess is some lock contention, or am I way off here?
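(For reference, the trace above came from something along these lines, attached to the running importer; the PID selection is just a convenience:)

strace -f -tt -p $(pgrep -f import_logs.py | head -n 1)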

I think it could be an issue with Python. What version are you running now? Maybe upgrade it?

Are you stracing the Python script?

Instead, maybe look at htop or similar, as I expect the bottleneck is most likely IO and the MySQL database. Or maybe it’s the webserver/PHP?
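For example, something along these lines while the import is running (assuming sysstat and the mysql client are available on the box):

iostat -x 1                          # per-disk utilisation and await
vmstat 1                             # overall CPU vs IO wait
mysql -e "SHOW FULL PROCESSLIST;"    # what the tracker queries are doing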

Yes, I was stracing the Python scripts etc. (grasping at straws here :D), and now I’ve tried using htop as well but can’t see anything that seems off.

I’ve tried the following versions of Python:
2.6.6 - the default (this is CentOS 6.4)
2.7.6 - no improvement
3.3.3 - had to run 2to3 on import_logs.py and fiddle a bit, but didn’t get it to work properly.

Now I’ve tried another thing that puzzles me a bit: I disabled APC just to see if it makes any difference, and the rate only went down from 150 to 145 recs/s.
I’ve also noticed that the Python scripts are barely consuming any CPU, so I’m leaning more and more towards a problem in the PHP part of the code.
Also, the database seems fast enough, but do you think running OPTIMIZE TABLE or some other maintenance task after the upgrade would be a good idea? (I’ve read that OPTIMIZE isn’t great to run on InnoDB tables and will take a long time, but it might be worth a shot anyway.)
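(If I do try it, I’ll probably start with ANALYZE rather than OPTIMIZE since it should be much cheaper on InnoDB; something like this, assuming our default piwik_ prefix and database name:)

mysql piwik -e "ANALYZE TABLE piwik_log_visit, piwik_log_link_visit_action, piwik_log_action;"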

Thanks again for your help!

Now I’ve run some tcpdumps as well… I can see that the Python script sends a POST and it then takes ~18 seconds before it gets an answer back from Apache, so more and more is pointing towards PHP…
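(Captured roughly like this on the Piwik box itself; the interface and port are just what our local Apache listens on:)

tcpdump -i any -s 0 -A 'tcp port 80'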
I also ran mytop during the import and I don’t see any slow queries, so my guess is that requests are hanging around in Apache/PHP for a while; the question is why.
I guess the log below also shows that it’s the actual recording that is slow:

0 lines parsed, 0 lines recorded, 0 records/sec (avg), 0 records/sec (current)
Parsing log /tmp/xxxx_u_ex140129.log…
5627 lines parsed, 0 lines recorded, 0 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 0 lines recorded, 0 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 0 lines recorded, 0 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 0 lines recorded, 0 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 55 lines recorded, 10 records/sec (avg), 55 records/sec (current)
5627 lines parsed, 55 lines recorded, 9 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 55 lines recorded, 7 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 95 lines recorded, 11 records/sec (avg), 40 records/sec (current)
5627 lines parsed, 95 lines recorded, 10 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 95 lines recorded, 9 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 95 lines recorded, 8 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 95 lines recorded, 7 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 95 lines recorded, 7 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 95 lines recorded, 6 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 95 lines recorded, 6 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 95 lines recorded, 5 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 95 lines recorded, 5 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 354 lines recorded, 19 records/sec (avg), 259 records/sec (current)
5627 lines parsed, 354 lines recorded, 18 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 354 lines recorded, 17 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 655 lines recorded, 31 records/sec (avg), 301 records/sec (current)
5627 lines parsed, 655 lines recorded, 29 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 655 lines recorded, 28 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 655 lines recorded, 27 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 655 lines recorded, 26 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 655 lines recorded, 25 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 655 lines recorded, 24 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 655 lines recorded, 23 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 655 lines recorded, 22 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 655 lines recorded, 21 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 655 lines recorded, 21 records/sec (avg), 0 records/sec (current)
5627 lines parsed, 655 lines recorded, 20 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 1710 lines recorded, 51 records/sec (avg), 1055 records/sec (current)
5667 lines parsed, 1710 lines recorded, 50 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 1710 lines recorded, 48 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 1710 lines recorded, 47 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 1710 lines recorded, 46 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 1710 lines recorded, 44 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 1710 lines recorded, 43 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 1710 lines recorded, 42 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 1710 lines recorded, 41 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 1710 lines recorded, 40 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 2138 lines recorded, 49 records/sec (avg), 428 records/sec (current)
5667 lines parsed, 2408 lines recorded, 54 records/sec (avg), 270 records/sec (current)
5667 lines parsed, 2430 lines recorded, 53 records/sec (avg), 22 records/sec (current)
5667 lines parsed, 2430 lines recorded, 52 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 3622 lines recorded, 76 records/sec (avg), 1192 records/sec (current)
5667 lines parsed, 3622 lines recorded, 75 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 3622 lines recorded, 73 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 3622 lines recorded, 72 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 3622 lines recorded, 70 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 3622 lines recorded, 69 records/sec (avg), 0 records/sec (current)
5667 lines parsed, 4838 lines recorded, 91 records/sec (avg), 1216 records/sec (current)
Purging Piwik archives for dates: 2014-01-29

Not sure it’s relevant, but the files under /var/www/html/tmp/cache/tracker are constantly updated, which seems a bit excessive.
My impression was that there’s a TTL for that tracker cache which defaults to 5 minutes?
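(I’m watching them roughly like this during an import:)

watch -n 1 'ls -la --time-style=full-iso /var/www/html/tmp/cache/tracker/'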

I think that was it!
I redirected /var/www/html/tmp/cache/tracker to /dev/shm and then managed to go from ~150 recs/s to ~600 recs/s!
Still slower than ~1500 recs/s, but a big improvement nonetheless! So I guess you really should look into the tracker cache code?
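(For reference, the redirect itself is nothing fancy; one straightforward way is a move plus a symlink, roughly:)

mv /var/www/html/tmp/cache/tracker /dev/shm/piwik-tracker-cache
ln -s /dev/shm/piwik-tracker-cache /var/www/html/tmp/cache/tracker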

I’m really surprised that the tracker cache would cause such a big performance hit; it seems very unlikely.
You can change the TTL of the cache in the config file; by default it’s 5 minutes: matomo/global.ini.php at master · matomo-org/matomo · GitHub

Increase it to 50 minutes or so. Does it work better for you?
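For reference, in your config/config.ini.php that would be roughly this (setting name as in global.ini.php; 3000 seconds = 50 minutes):

[Tracker]
tracker_cache_file_ttl = 3000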

Did you find anything else causing slowness?

Well, yes, I’m surprised too, but here are the log imports for the last few days for one of the sites:
317545 requests imported to 1 sites
Requests imported per second: 200.91 requests per second
421311 requests imported to 1 sites
Requests imported per second: 188.04 requests per second
427060 requests imported to 1 sites
Requests imported per second: 149.21 requests per second
423076 requests imported to 1 sites
Requests imported per second: 187.9 requests per second
410092 requests imported to 1 sites
Requests imported per second: 598.48 requests per second

and another:
983 requests imported to 1 sites
Requests imported per second: 64.52 requests per second
2844 requests imported to 1 sites
Requests imported per second: 157.29 requests per second
3666 requests imported to 1 sites
Requests imported per second: 148.58 requests per second
2634 requests imported to 1 sites
Requests imported per second: 147.65 requests per second
4822 requests imported to 1 sites
Requests imported per second: 833.92 requests per second

The thing is that /var/www/html/tmp/cache/tracker/x.php is regenerated on every request, which I guess is a bit heavy since it writes to disk every time. So there seems to be some kind of bug in the cache code? Or have I missed a setting somewhere?

I have tried to reproduce this but couldn’t. In my case, when I import logs, the tracker files are not regenerated.

I was running: $ ls tmp/cache/tracker/ -la --time-style=full-iso
during the import, but I’m not seeing the “modified” date change during log import.

Can you explain exactly what you do and what makes you think it’s re-generated?

Hi again, sorry for the late answer.
I’ve attached a short strace of an httpd process that shows the tracker cache files being regenerated.
This happens during log file import, and I’m starting to suspect it might be an issue with our PHP version?
PHP 5.3.3 (cli) (built: Jul 12 2013 20:35:47)
Copyright © 1997-2010 The PHP Group
Zend Engine v2.3.0, Copyright © 1998-2010 Zend Technologies

(it’s a standard CentOS 6.4 setup)
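(For reference, the attached trace was taken with something along these lines, attached to whichever httpd child was handling the tracking requests; the PID is a placeholder:)

strace -tt -e trace=file -p <httpd child pid>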

Could you try upgrading your PHP to a more recent version? If this does not fix it, please comment again so we can investigate further.

By the way, there may be other things to speed up the import. If you find some tips, let us know!