How big is big? Looking for Tips & Tricks for huge on-premise Installations

Hi Matomo Community,

I am reaching out to you today with the question “How big is big?” and to ask what tips & tricks you have accumulated in managing large installations. To provide some context, here are the technical details of our current installation.

We run Matomo in custom-built Docker containers (nginx/fpm) within Kubernetes. The pods are completely stateless and do not share a filesystem. The Matomo configuration is entirely managed as “Config as Code” and is part of the Docker image. This means any change to the config.ini.php is made in Git, not “on a server.” Configurations stored in the database, such as sites, users, etc., are naturally exempt and can be altered in the Matomo UI.
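Just to illustrate the idea (a simplified sketch, not our actual build; the base image and paths assume a layout like the official matomo fpm image, where the webroot is /var/www/html):

# Dockerfile fragment: bake the Git-managed config into the image ("Config as Code")
FROM matomo:5-fpm
# config.ini.php is rendered from Git, never edited on a server;
# adjust the target path if your custom image uses a different webroot
COPY config/config.ini.php /var/www/html/config/config.ini.php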

We operate this setup across multiple Kubernetes clusters in an active/active scenario. A MariaDB Galera Cluster serves as the common database. Tracking itself is completely decoupled from the Matomo installation through Kafka and custom-developed producers/consumers, so we never lose trackings during Matomo downtimes. As a result, load spikes are not an issue; even several thousand concurrent tracking requests, producing 30k QPS on the database, pose no challenge.
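Our producers/consumers are custom code, so just to sketch the principle: a consumer reads queued requests from Kafka and replays them against Matomo’s Tracking HTTP API using bulk tracking, roughly like this (host, idsite, URLs and token are placeholders):

curl -sS -X POST "https://matomo.example.internal/matomo.php" \
  -H "Content-Type: application/json" \
  -d '{
        "requests": [
          "?idsite=1&rec=1&url=https%3A%2F%2Fexample.org%2Fa&action_name=A&cdt=1705312800",
          "?idsite=1&rec=1&url=https%3A%2F%2Fexample.org%2Fb&action_name=B&cdt=1705312805"
        ],
        "token_auth": "<token with write access, required for the backdated cdt timestamps>"
      }'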

Matomo is in an “air-gapped environment” and therefore has no internet access itself. We regularly update Matomo to the latest release, currently using version 5.0.1.

Currently, we track approximately:

Period     Visits        Pageviews      Actions
Daily      1,380,571     7,376,787      14,699,837
Monthly    52,455,028    215,690,476    406,303,997

Our database is about 800GB in size, with the tracking tables currently taking up the most space.

Table           Data size   Index size   Row count
Tracker tables  407.2 G     183.3 G      1,634,638,201
Report tables   225.6 G     10.4 G       161,894,294

The largest tracking tables are, of course:

Table                          Data size   Index size   Row count
matomo_log_link_visit_action   154.2 G     142.8 G      1,038,079,095
matomo_log_action              200.5 G     11.9 G       382,180,934

The tracking tables contain 90 days of raw data; anything older is purged every two days.
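For reference, that retention policy corresponds roughly to the following config.ini.php section (key names should be double-checked against the global.ini.php shipped with your version; the chunk size is just an illustrative value):

[Deletelogs]
; purge raw tracking data automatically
delete_logs_enable = 1
; keep 90 days of raw data
delete_logs_older_than = 90
; run the purge at most every 2 days
delete_logs_schedule_lowest_interval = 2
; delete in chunks so long transactions do not stall the Galera cluster
delete_logs_max_rows_per_query = 100000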

The database currently has 40 CPUs available and an InnoDB buffer pool of 256 GB, which allows the “working set” of the tracking tables to fit entirely into memory if needed.
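On the MariaDB side this boils down to a handful of server settings; the values below are illustrative rather than our exact tuning:

[mysqld]
# large enough that the hot part of the tracking tables stays in memory
innodb_buffer_pool_size        = 256G
# bigger redo log to absorb tracking write bursts
innodb_log_file_size           = 4G
# relaxed flushing for tracker write throughput (acceptable for us, judge for yourself)
innodb_flush_log_at_trx_commit = 2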

Here’s the output of one of the CLI commands newly introduced in 5.x (thanks for that, really useful):

./console diagnostics:archiving-instance-statistics
Site Count 140
Segment Count 1440
Last full Month Hits 406933519
Last 12 Month Hits 934238905

As can be seen, we operate a large number of sites with an even larger number of segments. It’s worth noting that almost all segments belong to the top 5 sites with the most tracking traffic.

We currently do not calculate years, only days, weeks, and months.
In addition, there are a double-digit number of custom reports, plus the archiving of all plugin reports, as we own the Premium Bundle and use every plugin.

To get all reports done within 24 hours, multiple archivers run in parallel. Some of them archive all sites and periods with hardly any restricting parameters, while others archive only very specific sites and, for example, only the last month. In the last week of a month, it is nearly impossible to calculate the monthly reports for all sites and all segments within 24 hours.
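As a sketch of how such a split can look (site IDs and concurrency values are placeholders, not our real ones):

# catch-all archiver: all sites and periods, heaviest sites excluded
./console core:archive --skip-idsites=101,102,103 --concurrent-requests-per-website=3

# dedicated archiver for the heaviest sites, current month only
./console core:archive --force-idsites=101,102,103 --force-periods=month --concurrent-requests-per-website=6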

Calculating all reports is currently our biggest problem. Since our sites track many unique URLs and many segments use “Page URL starts with…” conditions, the calculation of a single segment sometimes takes several minutes, and in some cases up to 1-2 hours. However, we are currently working on reducing the number of unique URLs (e.g. by excluding more URL parameters, see https://matomo.org/faq/how-to/faq_81/#faq_81 ) and on eliminating some segments.
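To watch the effect of that clean-up, we keep an eye on the unique-URL cardinality with a simple count on the actions table (the type constant is from memory and should be verified against Piwik\Tracker\Action, so treat this as a sketch):

-- number of distinct page-URL actions; type = 1 should correspond to page URLs
SELECT COUNT(*) AS page_url_actions
FROM matomo_log_action
WHERE type = 1;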

We would also be happy to share the output of ./console diagnostics:archiving-config upon request.

It goes without saying that we are aware of all points from the following articles and took them into consideration:

Of course, we are also familiar with:

If new versions introduce new parameters, we run performance tests with them, e.g. enable_segment_first_table_join_prefix.
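Such toggles end up in our versioned config.ini.php and are rolled out via a new image, e.g. (section placement from memory, please verify against global.ini.php for your version):

[General]
; evaluated after upgrading; we measure the effect before keeping it enabled
enable_segment_first_table_join_prefix = 1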

We are always interested in tips & tricks or best practices from the Matomo team and community and would appreciate any feedback.

  • How many archivers do you run?
  • How long do your archivers run?
  • What have been the biggest challenges in operating large Matomo installations?
  • Have you performed any special tuning of your database?
  • Have you created additional indexes beyond the recommendations?
  • Do you use special OPcache parameters to make the UI even faster?
  • etc.

We would also be interested to hear whether anyone has decided to split a large installation into several smaller ones to make it easier to handle?

Lastly, I would like to express great appreciation to all Matomo staff and contributors. Posts from @Lukas, @karthik, @SteveG or @thomas_matomo, to mention just a few, have been really helpful for us in the past.

Whether support, developers, or plugin providers, you all do a fantastic job. Keep it up!

We look forward to any feedback.

Greetings from Germany,
Olli