Archive_blob: too many processes?

Hi Everyone,
I am having an issue with our Matomo installation. Every evening at around 21:25, and for around 2.5 hours, the Matomo server gets very busy and starts to create over 1,500 request workers and over 600 SQL connections trying to insert into archive_blob. Query | Waiting for table metadata lock | INSERT INTO matomo_queuedtracking_list_trackingQueueV1 (list_value) …

During this time the Matomo server cannot register visits or be accessed.
Matomo errors: /matomo/libs/Zend/Db/Adapter/Pdo/Abstract.php(144): SQLSTATE[HY000] [1040] Too many connections
Over 100,000 errors are produced during that time.
And running this SQL query:

SELECT idsite, date1, date2, period, name, COUNT(*) as count
FROM matomo_archive_blob_2023_09
GROUP BY idsite, date1, date2, period, name
HAVING count > 1;

I found out that there are many counts that are > 1.
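For what it's worth, Matomo ships a console command for cleaning up old/duplicate archive rows instead of deleting them by hand. A sketch, assuming the paths from this thread and that I recall the command's arguments correctly (verify with `console help core:purge-old-archive-data` first):

```shell
# Purge outdated/invalidated archives for a specific month table:
/usr/bin/php /usr/share/matomo/console core:purge-old-archive-data 2023-09-01

# Or purge across all archive tables (can take a while on large installs):
/usr/bin/php /usr/share/matomo/console core:purge-old-archive-data all
```

Duplicates with count > 1 are often leftover invalidated archives, which this command is meant to remove.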

We have a cron job every hour, console core:archive --url="xxx", that runs without errors.

ps -e | grep apache2 | wc -l >>> 1500
ss -nputw | grep x.x.x.x | wc -l >>> 600

Matomo version: 4.15.1
MySQL version: 8.0.25
PHP version: 8.1.23
Operating system: Debian 11.7
Kernel: 5.10.0-25-amd64

pm = dynamic
pm.max_children = 500
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.max_requests = 500
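One thing that stands out in the pool config above: with pm.max_children = 500 on the front end, each FPM child can hold its own MySQL connection, so a traffic spike alone can approach the DB's max_connections = 600 before the archiver even starts. A sketch of a more conservative pool, purely as an illustration (the exact numbers are assumptions and would need load testing on this 8-CPU host):

```ini
; Sketch: keep FPM children well below MySQL max_connections,
; leaving headroom for the archiver and the queue workers.
pm = dynamic
pm.max_children = 100        ; was 500
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500
```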

upload_max_filesize = 32M
post_max_size = 128M

I hope we can solve this issue, which has been going on for so many months…
Thank you for any help given.

Thank you for your response.
As I wrote previously, my max_connections is now 600. The default value for MySQL is 151; I first went to 300, and now to 600.
There is no schedule/task that I can change; it is a Matomo plugin, vendor, or core issue.
I believe it is more likely badly written PHP that creates so many new connections to write to MySQL…

Hi @makhad
I think this is due to the archiving task.
What is the daily traffic? (number of daily visits, and actions), how many sites, segments, personal reports?
How many servers do you use?
Is the archiving CRON running on only 1 server? What is the archiving CRON frequency? (I suggest running the CRON after midnight rather than before midnight; then you'll archive the data one time less and be able to get accurate data for the previous day.)
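To make the "after midnight" suggestion concrete, a crontab sketch (paths and URL are placeholders, not taken from your setup):

```
# Run the archiver shortly after midnight so "yesterday" is complete,
# then once an hour during the day.
5 0-23 * * * www-data /usr/bin/php /path/to/matomo/console core:archive --url=https://example.org
```

The point is simply that the first run of the day happens just after 00:00, so the previous day's data is final when it is archived.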
You can also have a look there:

Hi @heurteph-ei
Thank you for your time and reply.
I do agree it is the archiving task. I was thinking that it has something to do with the Matomo plugin QueuedTracking (v4.0.7).
We run the cron job at 5 minutes past the hour, 24 hours/day, according to the recommendations.
We have two servers:

  1. front end (matomo server) has 8 CPU and 64 GB RAM. SSD.
  2. The Backend, the DB server: has 12 CPU and 64 GB RAM. SSD.

We get around 60,000 visits/day, around 184,000 pageviews, and around 248,000 actions (round numbers). That means under 2 million visits per 31 days. (The setup is supposed to be good for 10 million tracked pages/month.)

max_connections = 600
max_allowed_packet = 1000M
join_buffer_size = 2M
innodb_flush_method = O_DIRECT
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 2
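To see whether max_connections = 600 is actually being exhausted, and by whom, a few standard MySQL 8 queries (a sketch; run them during the 21:25 window):

```sql
-- Peak connection usage since server start, and current usage:
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';

-- Who is connected right now, grouped by user and host:
SELECT user, host, COUNT(*) AS conns
FROM information_schema.processlist
GROUP BY user, host
ORDER BY conns DESC;
```

If Max_used_connections sits at the limit only during the evening window, that narrows the problem to whatever starts at 21:25.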

We see a lot of those:
Waiting for table metadata lock INSERT INTO matomo_queuedtracking_list_trackingQueueV1_1 (list_value)
Waiting for table metadata lock INSERT INTO matomo_queuedtracking_list_trackingQueueV1_2 (list_value)
Waiting for table metadata lock INSERT INTO matomo_queuedtracking_list_trackingQueueV1_3 (list_value)
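For "Waiting for table metadata lock" waits like these, MySQL 8 can show which session actually holds the lock, via performance_schema (enabled by default in 8.0). A diagnostic sketch:

```sql
-- Find granted vs. pending metadata locks on the queue tables:
SELECT object_schema, object_name, lock_type, lock_status, owner_thread_id
FROM performance_schema.metadata_locks
WHERE object_name LIKE 'matomo_queuedtracking_list_trackingQueueV1%';
```

A long-running session holding a GRANTED lock while the INSERTs sit PENDING is the blocker; its owner_thread_id can be matched back to a connection in the processlist.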
During the day, until 21:30, the Apache processes number around 31; after that, around 3,000.
The php8.1-fpm processes are around 11 during the day; at 21:30 they reach 500.
The MySQL connections are between 1 and 2 during the day; at 21:30 they become 500.

Those are huge numbers.
We are at a loss; we follow all recommendations except for two that we are not yet sure about, regarding what they do and their consequences:

; Do not allow users to trigger the Matomo archiving process.
; Ensures that no unexpected data processing triggers from UI or API.
enable_browser_archiving_triggering = 0
; Uncomment to make sure that API requests with
; a &segment= parameter will not trigger archiving
; browser_archiving_disabled_enforce = 1

We follow the GDPR deleting data after 180 days.

Any help is appreciated.
Best Regards

Hi @makhad

As a CRON job is configured (you confirm it is run once an hour, not once every 5 minutes?), you can disable the archiving task run by HTTP request (e.g. when a user visits the Matomo UI, or when a tracking request is received)

How did you configure this?

Hi @heurteph-ei
Thank you for getting back to me.
Yes, I am positive it is 5 minutes past the hour. Following the logs, I see it triggered, and I don't have a */5 in my cron job: 5 * * * * su www-data -s /bin/bash -c "/usr/bin/php /usr/share/matomo/console core:archive --url=xxxxx.xx"

For the QueuedTracking:
Backend: Mysql
Queue is enabled
Number of queue workers = 8
Number of requests that are processed in one batch = 25

Process during tracking request is enabled.
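If I recall the QueuedTracking plugin's console commands correctly, the queue can be inspected and drained from the CLI, which helps separate queue processing from tracking requests (verify the exact names with `console list` on your version):

```shell
# Live view of queue sizes per worker:
/usr/bin/php /usr/share/matomo/console queuedtracking:monitor

# Process queued tracking requests from the command line
# instead of inside incoming tracking requests:
/usr/bin/php /usr/share/matomo/console queuedtracking:process
```

Running queuedtracking:process from cron and disabling "process during tracking request" is a common way to keep tracking requests fast.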

Now I have noticed that the DB on the backend was doing a backup during that time: an SQL dump to file.
So I have moved it to a different time and am waiting for the results. I can imagine that the SQL dump locks some tables during its work. I have the feeling that is the problem.
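If the backup is a plain mysqldump, the table locks are expected: by default it takes LOCK TABLES, which blocks the queue INSERTs. For InnoDB tables (Matomo's default) a consistent snapshot avoids that. A sketch, with paths and database name as placeholders:

```shell
# --single-transaction takes a consistent InnoDB snapshot instead of
# locking tables; --skip-lock-tables disables the default LOCK TABLES.
mysqldump --single-transaction --skip-lock-tables \
  --routines --events matomo > /backup/matomo.sql
```

With this, the dump can run alongside tracking without making the INSERTs wait on metadata locks.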

Thank you for your engagement.
Best Regards


Hi @makhad

Indeed, that can be the source! :+1:
If the problem continues, maybe the number of queue workers could be reduced…?

Don’t forget to share the result of rescheduling the DB backup time. (Or maybe you can prevent the CRON execution at backup time?)

Hi @heurteph-ei Philippe
I can confirm it is the backup that is doing the table locks.
I moved the backup window to another time, and the issue simply followed the backup window.

We are going to try another way of doing backups, so maybe the problem goes away.
The Matomo archive job is not easy to change; otherwise I would have to make 23 cron entries …
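On the "23 cron entries" point: the hour field of a crontab line accepts lists and ranges, so a single entry can skip the backup window. A sketch, assuming (hypothetically) the backup runs during hour 21:

```
# Run at :05 of every hour except 21:xx (the assumed backup window):
5 0-20,22-23 * * * su www-data -s /bin/bash -c "/usr/bin/php /usr/share/matomo/console core:archive --url=xxxxx.xx"
```

One line instead of 23, and the archiver simply never fires while the dump is running.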

Thank you for your input.
Best Regards,
