Our goal, for clarity: keep the roll-up, and optimize so that archiving it doesn't need more than 1 GB of memory, if at all possible.
I'm wondering if a `memory_limit` above 1 GB is to be expected for this kind of configuration/traffic, or if there's something we could be doing better. We are looking into moving to more powerful hosting that would let us push the `memory_limit` above 1 GB, but want to make sure it's actually necessary first.
The problem - Our archiving script runs as a cron job once every hour (UI-triggered archiving disabled) and errors out pretty much every time by hitting the 1 GB PHP `memory_limit`. (According to the logs it is definitely using the full 1 GB; we're not loading the wrong config file or anything like that.) Our hosting doesn't allow us to increase the limit above 1 GB.
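For reference, our cron entry looks roughly like the following (the paths, user, schedule, and URL here are placeholders, not our exact crontab); the `-d` flag pins the `memory_limit` explicitly for the CLI run so there's no ambiguity about which php.ini applies:

```shell
# Illustrative crontab entry - paths/user/URL are placeholders.
# php -d memory_limit=1G makes the limit explicit for the CLI process,
# independent of whichever php.ini the CLI SAPI would otherwise load.
5 * * * * www-data /usr/bin/php -d memory_limit=1G \
  /var/www/matomo/console core:archive \
  --url=https://example.org/matomo/ >> /var/log/matomo-archive.log 2>&1
```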
Matomo config/traffic info - Our Matomo instance isn't particularly "high-traffic": we get ~70k-80k pageviews per month. BUT, archiving always fails on the same site, a "roll-up" of 17 sites that contribute nearly all of our traffic. (We track 65 sites total, many of them dev sites; those 17 contribute ~95% of the ~70k-80k pageviews.) And it always fails for the same period, "year." No other sites/periods have memory issues when archiving.
Other factors that may be contributing:
- We have tons of custom reports; this roll-up has 19 (10 available to all sites, 9 specifically for the roll-up). Many of them use custom variables.
- We've been running Matomo for 1.5 years now and our DB holds 45.8 GB of data. This seems odd: according to this Matomo article, with fewer than 2 million total pageviews we should be well under that, under 1 GB??
- We have one "year" table that's taking up 40 GB of those 45.8 GB by itself! It's the 2020 table. May be a similar issue to this post?
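For anyone wanting to reproduce the size breakdown: we measured it with a query along these lines against `information_schema` (assuming the database is named `matomo`; substitute the real schema name):

```sql
-- Illustrative: list the ten largest tables in the Matomo schema by
-- combined data + index size. Replace 'matomo' with the actual DB name.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.TABLES
WHERE table_schema = 'matomo'
ORDER BY (data_length + index_length) DESC
LIMIT 10;
```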
- Many of our URLs are searches and we don't currently ignore any search params. (Our "Pageviews - URLs" report for the roll-up is huge.) Would this make a difference?
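If excluding those params would help, is a global override along these lines the right approach? (The option name should be checked against the `global.ini.php` shipped with our Matomo version, and the parameter names `q`, `query`, `page` are just examples of what we might exclude.)

```ini
; Illustrative config/config.ini.php override - verify the option name
; against global.ini.php for your Matomo version; the parameter list
; (q, query, page) is only an example.
[Tracker]
url_query_parameter_to_exclude_from_url = "gclid,fbclid,q,query,page"
```

My understanding is this would only affect newly tracked requests; URLs already stored would stay as they are.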
Any ideas? Things we can investigate or look into? Am I onto something with the bullets above? Thanks in advance!