QueuedTracking plugin not fast enough for us?

We’d like to track about 5 million actions per day. According to ./console queuedtracking:monitor we have more than 200,000 items in the queue, and the backlog isn’t shrinking (at least not fast enough). The queue itself generally works; we just seem to have more requests than it can process.

We’re using MySQL. Is my understanding correct that switching to Redis would not help, since the slow part seems to be inserting the data into MySQL (i.e. into the final tables, not the queue table)?

In the settings under “Number of queue workers” it says “Be aware you need to make sure to start the workers manually.” Starting manually means calling ./console queuedtracking:process, doesn’t it?

It always just prints:

Starting to process request sets, this can take a while
This worker finished queue processing with 0req/s (0 requests in 0.01 seconds)

So we’re still at 1 worker. Increasing the number in the settings (web UI) doesn’t help; ./console queuedtracking:monitor still reports 1 worker active.

Any idea what we’re doing wrong? Our server has 8 cores and its load average is about 3-4.

According to top, the load is spread like this (%CPU column):
60% mysql
60% php-fpm7.2
20% nginx

Output of ./console queuedtracking:test:

Settings that will be used:
Backend: mysql
NumQueueWorkers: 1
NumRequestsToProcess: 25
ProcessDuringTrackingRequest: 1
QueueEnabled: 1

Redis backend only settings (does not apply when using MySQL backend):
Port: 6379
Timeout: 0
Database: 0
UseSentinelBackend: 0
SentinelMasterName: mymaster

Version / stats:
PHP version: 7.2.24-0ubuntu0.18.04.1
Uname: Linux analytics 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64
Backend version: 5.7.27-0ubuntu0.18.04.1
Memory: array (
  'used_memory_human' => 'disabled',
  'used_memory_peak_human' => 'disabled',

Performing some tests:
Connection works in general
Initial expire seems to be set correctly
setIfNotExists works fine
expireIfKeyHasValue seems to work fine
Extending expire seems to be set correctly
expireIfKeyHasValue correctly expires only when the value is correct
Expire is still set which is correct
deleteIfKeyHasValue seems to work fine
List feature seems to work fine


Yes, you will need a lot more workers to process this much data. Each invocation of ./console queuedtracking:process should start a new worker. Can you try specifying the --queue-id parameter for each process?
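The idea above can be sketched as a small shell loop that starts one worker per queue (queue ids are 0-based). The worker count and the Matomo path here are assumptions for illustration; the count must match the “Number of queue workers” setting in the plugin. The loop echoes the commands as a dry run so you can review them before actually launching anything:

```shell
#!/bin/sh
# Sketch: launch one QueuedTracking worker per queue (queue ids are 0-based).
# NUM_WORKERS must match the "Number of queue workers" plugin setting;
# MATOMO_DIR is an assumed path - adjust to your install.
NUM_WORKERS=4
MATOMO_DIR=/var/www/matomo

start_workers() {
  i=0
  while [ "$i" -lt "$NUM_WORKERS" ]; do
    # Dry run: prints the command line; drop the leading "echo"
    # to actually start the workers in the background.
    echo nohup php "$MATOMO_DIR/console" queuedtracking:process --queue-id="$i" "&"
    i=$((i + 1))
  done
}

start_workers
```

With the echo removed, each worker runs in the background and exits on its own once its queue is drained.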

Thanks, it seems ./console queuedtracking:process --queue-id=2 did the trick - either that, or it was the order in which I started the workers in the shell and changed the plugin settings in the web UI. ./console queuedtracking:monitor now confirms that 2 workers are in use, and that’s enough for the current load to make the queue size slowly decrease.
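In case it helps others: rather than starting workers by hand after every reboot, they can be launched from cron, one entry per queue id. This is only a sketch - the path and schedule are assumptions, and it relies on the plugin’s per-queue locking (a second process on an already-locked queue should simply exit, so overlapping runs are expected to be harmless):

```
# m h dom mon dow  command  (one entry per queue id)
* * * * *  php /var/www/matomo/console queuedtracking:process --queue-id=0 >/dev/null 2>&1
* * * * *  php /var/www/matomo/console queuedtracking:process --queue-id=1 >/dev/null 2>&1
```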