Archiving fills all available disk space

Hello!
I need some advice, please, as I have tried all the hints I found online so far. I have been running several Matomo instances on different hosting packages and on dedicated servers for a long time, performing updates regularly.
In the last week of March I updated one of these instances from 3.12 (not entirely sure anymore) to 3.13 via the online updater from within the Matomo dashboard.
It’s a rather small website with about 300 visitors daily, and I haven’t set up a cron job for archiving in this installation. When I enter the dashboard I can access everything up to the date of the update, but nothing afterwards. In addition, as soon as I enter the dashboard (which means archiving starts), something goes completely wrong and the whole server (it’s a dedicated server) fills up until nothing but a restart helps.
All webpages and services on this server are down within 15-25 minutes because no space is left on the device. After a restart everything is OK again and the free space has recovered. I tried to turn off dashboard-triggered archiving and added a cron job instead, but when I start archiving manually via ./console core:archive the same thing happens. I cannot identify which table is being filled, or why. I also set up a fresh installation and connected it to the existing database, but the behaviour remains the same. None of my other installations shows this behaviour, and I don’t know what else to try.
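For context, the cron job I added follows the usual Matomo pattern and looks roughly like this (the PHP binary, Matomo path, user, and URL are placeholders for my setup):

    # /etc/cron.d/matomo-archive: run the archiver every hour as the web user
    5 * * * * www-data /usr/bin/php /var/www/matomo/console core:archive --url=https://example.org/ > /var/log/matomo-archive.log 2>&1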

I already tried the hints mentioned in the forum thread “Size of piwik_archive_blob files is astronomical” and in “After updating the 2019_01 archive blob tables are gaining size · Issue #10439 · matomo-org/matomo · GitHub”, but nothing changed.
I ported the installation to a local Ubuntu machine using a virtual host so I could dig in deeper without regularly killing a live server, but the behaviour is the same. As soon as I kill mysqld on this machine, all the consumed space is freed again, so it has to be something in the database access. The odd thing is that the log says “nothing to do”, yet something MySQL-related is obviously still running in the background, and the hard drive keeps filling up while I’m just sitting there doing nothing. As long as I don’t enter this specific Matomo instance’s backend (or rather, as long as I don’t start console core:archive), nothing happens at all and the free space stays free.
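The fact that killing mysqld frees the space makes me suspect temporary files that MySQL has already deleted but still holds open; this is roughly how I looked for them on the Ubuntu machine (a sketch; the tmpdir location and paths vary per system):

    # Ask MySQL where it writes its temporary files (often /tmp)
    mysql -u root -p -e "SHOW VARIABLES LIKE 'tmpdir';"

    # MySQL usually unlinks temp files while keeping them open, so they are
    # invisible to ls but show up as "deleted" in lsof
    lsof -p "$(pidof mysqld)" | grep -i deleted

    # Watch the tmpdir filesystem fill up while core:archive is running
    watch -n 5 "df -h /tmp"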

Does anybody know a way to make archiving work again? Any help is appreciated.
Regards

Hi,

The bug that was causing huge archives was fixed yesterday. You can find out more here:

You can also try applying

and reporting back if it works for you (but as always, please make a backup beforehand).

Thanks for the info. I tried ./console core:purge-old-archive-data all, which didn’t help. Now I have switched to the beta channel and installed 3.13.5-b1.
Indeed, this version was able to complete the archiving, but the server on which this installation is hosted is still running into issues that I cannot really identify. Since the archiving started, disk usage has begun growing again (OK, I have some time left, as there are still about 2 TB free for now) and the system is getting slow.
Experience shows that the server will reach 100% disk usage, and then only a reboot helps.
This is a production server on which I host several customers’ pages, each with its own Matomo. If every Matomo becomes a resource monster, I’ll run into severe problems, even on a 3 TB HDD with 32 GB of RAM.

Here’s a snapshot of the server health check for HDD usage, provided by Plesk: [screenshot: Plesk HDD usage]

And here you can see the CPU load caused by MySQL; I had to reboot several times this morning. [screenshot: MySQL CPU load]

The database size is OK: 433 MB at the moment. This specific installation was set up in 2012 and worked well until April 5th.
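For completeness, this is roughly how I checked the per-table sizes (a sketch; it assumes the database is simply named matomo):

    # List the 20 biggest tables in the Matomo database, largest first
    mysql -u root -p -e "
      SELECT table_name,
             ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
      FROM information_schema.tables
      WHERE table_schema = 'matomo'
      ORDER BY (data_length + index_length) DESC
      LIMIT 20;"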

Any advice? Thanks a lot!

Hi,

I think 3.13.5-b1 was released before the fix linked above was merged.

For testing, you would need to apply the patch manually (for example by feeding https://github.com/matomo-org/matomo/pull/15800.patch to the patch program).
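From the Matomo root directory, that could look roughly like this (a sketch; /var/www/matomo is a placeholder path, and -p1 assumes the patch paths match the release layout):

    cd /var/www/matomo
    wget https://github.com/matomo-org/matomo/pull/15800.patch

    # Dry-run first to check that the patch applies cleanly, then apply it
    patch -p1 --dry-run < 15800.patch
    patch -p1 < 15800.patch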

Update: A beta2 including this change will be released very soon.