I am having an issue with our Matomo installation. Every evening, starting around 21:25 and lasting roughly 2.5 hours, the Matomo server gets very busy and starts creating over 1,500 RequestWorkers and over 600 SQL connections trying to insert into the archive_blob tables. The process list is full of entries like:
Query | Waiting for table metadata lock | INSERT INTO matomo_queuedtracking_list_trackingQueueV1 (list_value) …
During this time the Matomo server cannot register tracking requests or be accessed.
Matomo errors: /matomo/libs/Zend/Db/Adapter/Pdo/Abstract.php(144): SQLSTATE[HY000] Too many connections
Over 100,000 errors are produced during that time.
And when I run this SQL query:
SELECT idsite, date1, date2, period, name, COUNT(*) AS count FROM matomo_archive_blob_2023_09 GROUP BY idsite, date1, date2, period, name HAVING count > 1;
I find many combinations with a count > 1, i.e. duplicate archive rows.
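For reference, here is the same check wrapped in the mysql CLI, together with Matomo's console command for purging old archive data, which as far as I understand can also clean up duplicates (database name and credentials are placeholders; the console path is the one from our install):

```bash
# Count duplicate archive rows per (site, date range, period, name):
mysql -u matomo -p matomo_db -e "
  SELECT idsite, date1, date2, period, name, COUNT(*) AS count
  FROM matomo_archive_blob_2023_09
  GROUP BY idsite, date1, date2, period, name
  HAVING count > 1;"

# Matomo's own cleanup command for outdated/invalidated archive data:
sudo -u www-data /usr/bin/php /usr/share/matomo/console core:purge-old-archive-data all
```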
We have a cronjob running console core:archive --url=xxx every hour, and it runs without errors.
Thank you for your response.
As I wrote previously, my max_connections is now 600. The MySQL default is 151; I first raised it to 300, and now to 600.
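For completeness, this is how I check and raise the limit (standard MySQL commands; 600 is simply the value we are at now):

```bash
# Current limit and the high-water mark of connections actually used:
mysql -e "SHOW VARIABLES LIKE 'max_connections';"
mysql -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections';"

# Raise at runtime (does not survive a server restart):
mysql -e "SET GLOBAL max_connections = 600;"

# To persist it, set  max_connections = 600  under [mysqld] in my.cnf
# and restart MySQL.
```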
There is no schedule/task that I can change; it is a Matomo plugin, vendor, or core issue.
I rather believe it is badly written PHP that opens that many new connections to write to MySQL…
I think this is due to the archiving task.
What is the daily traffic (number of daily visits and actions)? How many sites, segments, and personal reports?
How many servers do you use?
Is the archiving CRON running on only 1 server? What is the archiving CRON frequency? (I suggest running the CRON after midnight rather than before midnight; then you'll archive the data one time less and get accurate data for the previous day.)
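For example, a crontab entry along these lines (the path and URL are placeholders, adapt them to your install):

```bash
# Run the archiver shortly after midnight, so yesterday is archived
# exactly once, with complete data:
5 0 * * * su www-data -s /bin/bash -c "/usr/bin/php /path/to/matomo/console core:archive --url=https://matomo.example.org/"
```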
You can also have a look there:
Thank you for your time and reply.
I do agree it is the archiving task. I was thinking that it has something to do with the QueuedTracking plugin (v4.0.7).
We run the cronjob at 5 minutes past the hour, 24 hours/day, following the recommendations.
We have two servers:
The front end (Matomo server) has 8 CPUs and 64 GB RAM, on SSDs.
The back end (DB server) has 12 CPUs and 64 GB RAM, on SSDs.
We get around 60,000 visits/day, around 184,000 pageviews, and around 248,000 actions (round numbers). That means under 2 million visits per 31 days (60,000 × 31 ≈ 1.86 million), so the setup is supposed to be good for 10 million pages tracked per month.
We see a lot of these in the MySQL process list:
Waiting for table metadata lock INSERT INTO matomo_queuedtracking_list_trackingQueueV1_1 (list_value)
Waiting for table metadata lock INSERT INTO matomo_queuedtracking_list_trackingQueueV1_2 (list_value)
Waiting for table metadata lock INSERT INTO matomo_queuedtracking_list_trackingQueueV1_3 (list_value)
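To find out which session is actually holding the metadata lock, a query against performance_schema can help (MySQL 5.7+; I am not sure the mdl instrument is enabled by default on every version, so it may need switching on first):

```bash
# Enable the metadata-lock instrument if it is off:
mysql -e "UPDATE performance_schema.setup_instruments
          SET ENABLED = 'YES', TIMED = 'YES'
          WHERE NAME = 'wait/lock/metadata/sql/mdl';"

# List holders and waiters on the queue tables:
mysql -e "SELECT object_schema, object_name, lock_type, lock_status, owner_thread_id
          FROM performance_schema.metadata_locks
          WHERE object_name LIKE 'matomo_queuedtracking%';"
```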
During the day, until 21:30, there are around 31 Apache processes; after that there are 3,000.
The php8.1-fpm processes are around 11 during the day; at 21:30 there are 500.
The MySQL connections are between 1 and 2 during the day; at 21:30 they become 500.
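(The counts come from simple checks like the ones below; the process names match our Debian-style setup and may differ elsewhere.)

```bash
ps -C apache2 --no-headers | wc -l                        # Apache workers
ps -C php-fpm8.1 --no-headers | wc -l                     # php8.1-fpm workers
mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"   # current MySQL connections
```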
Those are huge numbers.
We are at a loss; we follow all the recommendations except for two, where we are not yet sure what they do and what the consequences would be:
; Do not allow users to trigger the Matomo archiving process.
; Ensures that no unexpected data processing triggers from UI or API.
enable_browser_archiving_triggering = 0

; Uncomment to make sure that API requests with
; a &segment= parameter will not trigger archiving
; browser_archiving_disabled_enforce = 1
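(Both settings belong in the [General] section of config/config.ini.php on the Matomo server; a quick way to see what is currently set, using our install path:)

```bash
grep -n "archiving" /usr/share/matomo/config/config.ini.php
```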
As a CRON job is configured (you confirm it is run once an hour, not once every 5 minutes?), you can disable the archiving task run by HTTP request (e.g. when a user visits the Matomo UI, or when a tracking request is received).
Thank you for getting back to me.
Yes, I am positive it is 5 minutes past the hour. I follow the logs and I see it triggered, and I don't have a */5 in my cronjob:
5 * * * * su www-data -s /bin/bash -c "/usr/bin/php /usr/share/matomo/console core:archive --url=xxxxx.xx"
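One safeguard I am considering (my own idea, not from the docs): wrapping the command in flock, so a slow run can never overlap the next hourly one; overlapping archivers would be one explanation for the duplicate archive rows:

```bash
# Same entry, guarded by flock; the lock file path is arbitrary:
5 * * * * flock -n /var/lock/matomo-archive.lock su www-data -s /bin/bash -c "/usr/bin/php /usr/share/matomo/console core:archive --url=xxxxx.xx"
```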
For the QueuedTracking plugin, the settings are:
Queue is enabled
Number of queue workers = 8
Number of requests that are processed in one batch = 25
Process during tracking request is enabled.
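As far as I understand the plugin, the queue can also be processed from the command line, which would allow turning off "process during tracking request" (QueuedTracking console commands, run as the web user):

```bash
# Show the current state of the queues:
sudo -u www-data /usr/bin/php /usr/share/matomo/console queuedtracking:monitor

# Process queued tracking requests (could be run from cron):
sudo -u www-data /usr/bin/php /usr/share/matomo/console queuedtracking:process
```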
Now I have noticed that the back-end DB server was doing a backup during that exact time window: an SQL dump to a file.
So I have moved it to a different time and am waiting for the results. I can imagine that the SQL dump locks some tables while it runs. I have the feeling that is the problem.
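If the dump turns out to be the culprit, a less intrusive backup might look like this (assuming InnoDB tables and a mysqldump-based backup; the database name and target path are placeholders):

```bash
# --single-transaction takes a consistent snapshot without locking
# InnoDB tables; --quick streams rows instead of buffering them in memory.
mysqldump --single-transaction --quick --skip-lock-tables matomo_db \
  > /backup/matomo_$(date +%F).sql
```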