Too many active concurrent transactions when archiving

Ah… And I just noticed: since we have more than one website, the archive seems to fail on only one of them… the other shows no error…?

Also, I disabled Goals a couple of days ago to check if that helps… Today I re-enabled it… And the error now includes this: Too many active concurrent transactions - in plugin Goals

Very strange…
Maybe @innocraft has a clue on such behaviour?

So… I changed the frequency to once a day, and re-enabled all plugins that I previously disabled… the result looks like this:

https://zerobin.net/?793d713b2e6d7858#SiUD2+DXm+sNkoFFLhoCuIAB/WvipUR2wkDMExGzyl8=

So, still no fix, even though I also enabled Queued Tracking and set the archive job to 00:00, when we might only have 3-4 active users not doing much.

Just to be sure: Do you confirm there is only one single server hosting your Matomo instance (Apache/PHP)?

Yes:

I don't really have any ideas…
Maybe have a look at the MySQL side:

Hi @cronventis

There could be several reasons for this to be happening. Could you perhaps check the following for us:

  1. Are you executing the core:archive cron on all hosts or only one host?
    The core:archive task should only be executed on one of the servers connected to the Matomo database. Running it on several servers can cause issues

  2. Did you make any recent changes to the config.ini.php file? For example increasing datatable row limits?
    This is explained in this FAQ: https://matomo.org/faq/how-to/faq_54/

Increasing these limits too high can cause very slow archiving and can lead to quite large increases in the archive table sizes.

  3. Do you limit the number of concurrently running archivers? If not, and if the archive task starts to get stuck on a specific report, you could end up in a situation where several archivers are all running at the same time. The worst I’ve seen here was an archiving issue caused by a third-party plugin that resulted in over 70 concurrently running core:archive tasks (because the crontab command used did not limit the number of concurrently running archivers).

To check this you can, for example, run top -c and check how many climulti or core:archive commands are currently running.

In order to limit the number of concurrently running archivers your server also needs to support the “Managing processes via CLI” as explained in this FAQ: https://matomo.org/faq/troubleshooting/how-to-make-the-diagnostic-managing-processes-via-cli-to-display-ok/

To limit the number of concurrent archivers, add the following option to the core:archive command in your crontab (see the sketch after this list):
--concurrent-archivers=2

  4. Finally, installing and setting up the Queued Tracking plugin will reduce the number of active connections to the database. This can be used with MySQL as @heurteph-ei mentioned.
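For points 2 and 3, roughly the following can be used on the Matomo host (a sketch: it assumes the install lives in /var/www/html as in the stack trace later in this thread, that the commands are run from the Matomo root directory, and that PHP is available as php; the crontab schedule and the --url value are placeholders to adjust to your setup):

# How many archiver processes are running right now?
ps aux | grep -E 'climulti|core:archive' | grep -v grep | wc -l

# Any datatable row-limit overrides in the config (see the FAQ from point 2)?
grep -n "datatable_archiving_maximum_rows" config/config.ini.php

# Example crontab entry running the archiver once a day at midnight,
# limited to two concurrent archiver processes:
0 0 * * * php /var/www/html/console core:archive --concurrent-archivers=2 --url=https://matomo.example.com/ > /dev/null 2>&1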

Hey Richard…

  1. Tried both, only one and all of them. For now, if I disable Goals and VisitFrequency, the archive is running for all sites, no issues at all.
  2. No, it just stopped working “without any reason”… At least not from the config side.
  3. No, I don’t limit them, but there is only one container, scheduled at midnight once a day, that does the archive, so there should not be any issue with that.
  4. Done that, but as soon as I enable VisitFrequency, the archive does not work. But these are numbers we would like to have. Same with Goals, but we don’t use goals… so at least that can stay deactivated.

@cronventis There might be a recent code change that is causing something like higher memory usage during Goals archiving.
What version of Matomo do you currently have installed?

Regarding the VisitFrequency plugin:
Do you perhaps have any of the config settings mentioned in this FAQ enabled: https://matomo.org/faq/how-to/faq_113/

If you have, for example, Unique visitor metrics enabled for Year or Date Range periods, it could cause very slow archiving, as it requires processing the raw tracking data for the entire period. E.g. for Year it needs to check every single visit in the log_visit table for the entire year, which can be a LOT of data.
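If it helps, a quick way to check for those overrides is to grep the config files from the Matomo root directory (a sketch; the option names such as enable_processing_unique_visitors_year come from that FAQ, with defaults in config/global.ini.php and overrides in config/config.ini.php):

grep -n "enable_processing_unique_visitors" config/global.ini.php config/config.ini.php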

Could you also let us know how many actions you’re tracking on your Matomo server?

Sorry for the late reply.

Today I updated Matomo to 4.12.2.
I have not enabled Goals, since we just don’t use them right now.

In my config, there is nothing like enable_processing_unique_visitors anywhere.

About our Actions:
Dashboard all Websites (Total: 543,241 Visits, 8,206,301 Pageloads, 8,280,601 Actions)
This is for September (01-30) 2022

Currently, without VisitFrequency (Core) Plugin enabled, the archiving works fine.
But as soon as I enable it, I still get

Got invalid response from API request: ?module=API&method=CoreAdminHome.archiveReports&idSite=1&period=day&date=2022-10-19&format=json&trigger=archivephp. Response was '{"result":"error","message":"Mysqli statement execute error : Too many active concurrent transactions - in plugin VisitFrequency.
#0 \/var\/www\/html\/core\/ArchiveProcessor\/Loader.php(284): Piwik\\ArchiveProcessor\\PluginsArchiver->callAggregateAllPlugins(25403, '0', false)
#1 \/var\/www\/html\/core\/ArchiveProcessor\/Loader.php(176): Piwik\\ArchiveProcessor\\Loader->prepareAllPluginsArchive(25403, '0')
#2 \/var\/www\/html\/core\/ArchiveProcessor\/Loader.php(159): Piwik\\ArchiveProcessor\\Loader->insertArchiveData(13130, 0)
#3 \/var\/www\/html\/core\/ArchiveProcessor\/Loader.php(99): Piwik\\ArchiveProcessor\\Loader->prepareArchiveImpl(false)
#4 \/var\/www\/html\/core\/Context.php(75): Piwik\\ArchiveProcessor\\Loader->Piwik\\ArchiveProcessor\\{closure}()
#5 \/var\/www\/html\/core\/ArchiveProcessor\/Loader.php(103): Piwik\\Context::changeIdSite(1, Object(Closure))
#6 \/var\/www\/html\/plugins\/CoreAdminHome\/API.php(284): Piwik\\ArchiveProcessor\\Loader->prepareArchive(false)
#7 [internal function]: Piwik\\Plugins\\CoreAdminHome\\API->archiveReports('1', Object(Piwik\\Period\\Day), '2022-10-19', false, false, false)
#8 \/var\/www\/html\/core\/API\/Proxy.php(244): call_user_func_array(Array, Array)
#9 \/var\/www\/html\/core\/Context.php(28): Piwik\\API\\Proxy->Piwik\\API\\{closure}()
#10 \/var\/www\/html\/core\/API\/Proxy.php(335): Piwik\\Context::executeWithQueryParameters(Array, Object(Closure))
#11 \/var\/www\/html\/core\/API\/Request.php(267): Piwik\\API\\Proxy->call('\\\\Piwik\\\\Plugins\\\\...', 'archiveReports', Array)
#12 \/var\/www\/html\/plugins\/API\/Controller.php(45): Piwik\\API\\Request->process()
#13 [internal function]: Piwik\\Plugins\\API\\Controller->index()
#14 \/var\/www\/html\/core\/FrontController.php(631): call_user_func_array(Array, Array)
#15 \/var\/www\/html\/core\/FrontController.php(169): Piwik\\FrontController->doDispatch('API', false, Array)
#16 \/var\/www\/html\/core\/dispatch.php(32): Piwik\\FrontController->dispatch()
#17 \/var\/www\/html\/index.php(25): require_once('\/var\/www\/html\/c...')
#18 \/var\/www\/html\/core\/CliMulti\/RequestCommand.php(79): require_once('\/var\/www\/html\/i...')
#19 \/var\/www\/html\/vendor\/symfony\/console\/Symfony\/Component\/Console\/Command\/Command.php(257): Piwik\\CliMulti\\RequestCommand->execute(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#20 \/var\/www\/html\/vendor\/symfony\/console\/Symfony\/Component\/Console\/Application.php(874): Symfony\\Component\\Console\\Command\\Command->run(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#21 \/var\/www\/html\/vendor\/symfony\/console\/Symfony\/Component\/Console\/Application.php(195): Symfony\\Component\\Console\\Application->doRunCommand(Object(Piwik\\CliMulti\\RequestCommand), Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#22 [internal function]: Symfony\\Component\\Console\\Application->doRun(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#23 \/var\/www\/html\/core\/Console.php(135): call_user_func(Array, Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#24 \/var\/www\/html\/core\/Access.php(670): Piwik\\Console->Piwik\\{closure}()
#25 \/var\/www\/html\/core\/Console.php(136): Piwik\\Access::doAsSuperUser(Object(Closure))
#26 \/var\/www\/html\/core\/Console.php(87): Piwik\\Console->doRunImpl(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#27 \/var\/www\/html\/vendor\/symfony\/console\/Symfony\/Component\/Console\/Application.php(126): Piwik\\Console->doRun(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#28 \/var\/www\/html\/console(32): Symfony\\Component\\Console\\Application->run()
#29 {main},
caused by: Mysqli statement execute error : Too many active concurrent transactions
#0 \/var\/www\/html\/libs\/Zend\/Db\/Statement.php(300): Zend_Db_Statement_Mysqli->_execute(Array)
#1 \/var\/www\/html\/libs\/Zend\/Db\/Adapter\/Abstract.php(479): Zend_Db_Statement->execute(Array)
#2 \/var\/www\/html\/core\/DataAccess\/LogAggregator.php(325): Zend_Db_Adapter_Abstract->query('INSERT IGNORE I...', Array)
#3 \/var\/www\/html\/core\/DataAccess\/LogAggregator.php(351): Piwik\\DataAccess\\LogAggregator->createTemporaryTable('logtmpsegmentb1...', 'SELECT \/* trigg...', Array)
#4 \/var\/www\/html\/core\/DataAccess\/LogAggregator.php(561): Piwik\\DataAccess\\LogAggregator->generateQuery('count(distinct ...', Array, 'log_visit.visit...', '', '')
#5 \/var\/www\/html\/core\/ArchiveProcessor\/PluginsArchiver.php(285): Piwik\\DataAccess\\LogAggregator->queryVisitsByDimension()
#6 \/var\/www\/html\/core\/ArchiveProcessor\/PluginsArchiver.php(102): Piwik\\ArchiveProcessor\\PluginsArchiver->aggregateDayVisitsMetrics()
#7 \/var\/www\/html\/core\/ArchiveProcessor\/Loader.php(278): Piwik\\ArchiveProcessor\\PluginsArchiver->callAggregateCoreMetrics()
#8 \/var\/www\/html\/core\/ArchiveProcessor\/Loader.php(176): Piwik\\ArchiveProcessor\\Loader->prepareAllPluginsArchive(false, false)
#9 \/var\/www\/html\/core\/ArchiveProcessor\/Loader.php(159): Piwik\\ArchiveProcessor\\Loader->insertArchiveData(false, false)
#10 \/var\/www\/html\/core\/ArchiveProcessor\/Loader.php(99): Piwik\\ArchiveProcessor\\Loader->prepareArchiveImpl('VisitsSummary')
#11 \/var\/www\/html\/core\/Context.php ... )
#45 \/var\/www\/html\/console(32): Symfony\\Component\\Console\\Application->run()
#46 {main}"}'

@cronventis Is it perhaps possible that you could increase the maximum number of allowed connections and transactions on your Database server? It’s possible that it’s just configured with too low of a value for the number of actions you’re tracking in a month.
Do you know what the current limit is set to?

I can show you this:

MariaDB [(none)]> show variables like "max_connections";
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 1000  |
+-----------------+-------+
1 row in set (0.001 sec)

But I’m not sure how to show you concurrent transactions allowed…

A value of 1000 here should be more than enough in most cases.

We would never expect this number of connections to be reached by the core:archive task alone, since by default we limit the number of concurrent requests. What might potentially be happening is that during the core:archive task the database server is slower to respond to tracking requests, which leads to an accumulation of active connections that are not being closed quickly enough.
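As far as I know there is no single variable that shows the allowed number of concurrent transactions, but you can at least see how many transactions and connections are open while the archiver is running (a sketch using the standard information_schema and status views):

-- Currently open InnoDB transactions
SELECT COUNT(*) FROM information_schema.INNODB_TRX;

-- Currently open / actively running connections
SHOW STATUS LIKE 'Threads_connected';
SHOW STATUS LIKE 'Threads_running';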

In this case I would make two recommendations:

  1. Install and configure the Queued Tracking plugin
    This plugin allows the Matomo application to simply dump the tracking request into a temporary table (I.e. it doesn’t need to do INSERTs into multiple different tables for every tracking request). Then at a certain threshold it does a batch insert into the database which is much more efficient and uses less connections/transactions.
    https://plugins.matomo.org/QueuedTracking

  2. Configure your database server with a Reader/Writer setup: https://matomo.org/faq/how-to-install/faq_35746/
    This will separate Tracking requests and Archiving requests between two replicated databases thereby removing the potential for archiving to slow down tracking and vice-versa
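For reference, the reader setup essentially amounts to adding a second database section to config.ini.php, roughly like the sketch below (as far as I recall from that FAQ; the host and credentials are placeholders, so please double-check the exact section and key names against the FAQ):

[database_reader]
host = "replica-db.example.internal"
username = "matomo"
password = "..."
dbname = "matomo"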

Could you give this a try and let us know if the situation improves?


Yes, I increased that number some weeks ago. First to something sensible (like 200) and then all the way… :slight_smile: Just to be sure.
I have also installed the Queued Tracking Plugin already.

I don’t think I will be able to have a second DB in the next couple of weeks, it’s just not on the priority list… But maybe in the future…

@rstark I don’t remember, could there maybe be some issue with Mysqli…? I can’t recall whether that is the adapter that should be the first choice for the DB accessor…

Yes: I’m using mysqli.

@cronventis do you use segments?
Since you would like to use the VisitFrequency plugin, you may try disabling the processing of segment reports for the VisitFrequency plugin and see if it works better:

This worked.

But does that fix the problem? Or just skip it for me? :wink:

Unfortunately, it just skipped it… But it allows the archiving for VisitFrequency outside of segments… It is better than nothing…
If you have several tracked sites, you may also disable the archiving of segments for one site or another (see the last tip in the link I previously provided… many attempts to try). Then it would archive all segments of all sites except the one(s) that is the source of the problem…

Changing the DB connector to PDO might help in some cases, so it might be worth trying. There have been some issues when using Mysqli in the past that were resolved by switching to a different connector.

When disabling the segments for the plugin as @heurteph-ei mentioned it skips the processing of segments for that specific plugin (The data for Visit Frequency would not be shown in segment reports).

But this might indicate that there are either a very high number of segments on your instance or that you have some segments that due to their configuration cause very large amounts of processing on the database.

Could you maybe check the number of segments using:
SELECT COUNT(*) FROM matomo_segment;

Other than that you may need to review any segments that have a high number of conditions to see if they can be optimized to reduce the load on the database.
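If it helps, one rough way to spot the heaviest segments is to look at the longest definitions, which usually contain the most conditions (a sketch; it assumes the default matomo_ table prefix and the standard name / definition / deleted columns of the segment table):

SELECT name, LENGTH(definition) AS definition_length
FROM matomo_segment
WHERE deleted = 0
ORDER BY definition_length DESC
LIMIT 10;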
