Setting up Matomo in separate servers or containers

Hey everyone! New(ish) to the PHP world, so bear with me, please.

So I am in the process of setting up Matomo and was trying to pursue the following configuration using three separate LXC containers (or servers, for that matter) with no shared volumes, all of them running Debian 12 Bookworm:

NGINX (LB) <=> Matomo <=> MySQL

How this is supposed to work is as follows:

  • NGINX receives a request from the Internet.
  • NGINX calls PHP-FPM remotely, via FastCGI (TCP), over the internal/local network.
  • PHP-FPM queries MySQL when needed, over the local network.
  • NGINX caches static content, so only .php requests are passed from NGINX to PHP-FPM in subsequent requests.
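For context, on the Matomo container the PHP-FPM pool would have to listen on TCP instead of a UNIX socket for this to work at all. A minimal sketch (the pool name, addresses, PHP version and process-manager numbers below are assumptions, not values from my actual setup):

```ini
; Hypothetical /etc/php/8.2/fpm/pool.d/matomo.conf on the Matomo LXC:
; make the pool listen on the internal network instead of a UNIX socket.
[matomo]
user = www-data
group = www-data
listen = 10.0.0.2:9000              ; internal address of this container
listen.allowed_clients = 10.0.0.1   ; only the NGINX container may connect
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```

`listen.allowed_clients` is the only access control FPM gives you here, which already hints at the TLS problem below.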

What I am trying to do here is to replicate what I already have with Python, Ruby and Java across my clusters (all communications encrypted with TLS):

  • NGINX <=> Puma (Ruby) <=> PostgreSQL
  • NGINX <=> Gunicorn (Python) <=> PostgreSQL
  • NGINX <=> Jetty (Java) <=> PostgreSQL
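In those clusters the pattern is just nginx terminating TLS and proxying to the app server over TLS as well, roughly like this (hostnames, ports and certificate paths are placeholders):

```nginx
# Hypothetical example of the existing pattern: nginx proxying
# to an app server (Puma / Gunicorn / Jetty) over TLS.
upstream app_backend {
    server app.internal:9292;   # hypothetical host/port
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/tls/example.com.crt;
    ssl_certificate_key /etc/nginx/tls/example.com.key;

    location / {
        proxy_pass https://app_backend;   # TLS on the internal leg too
        proxy_ssl_verify on;              # verify the backend certificate
        proxy_ssl_trusted_certificate /etc/nginx/tls/internal-ca.crt;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

This is exactly what has no equivalent on the FastCGI side, since `fastcgi_pass` has no TLS counterpart to `proxy_ssl_*`.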

Static files aside, this is almost working (I get 404 errors in the FPM log, and I assume this must be a configuration error). But after even more hours of reading documentation, I’ve realised that this is just not going to work because:

  • NGINX would have to keep a copy of the static files as PHP-FPM cannot serve static files.
  • NGINX cannot use TLS when communicating with PHP-FPM, as the latter does not support it.
  • Plus I still have to figure out what’s wrong in my setup, heh :smiley:
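On the 404s: when FPM itself is the one logging them (the classic "Primary script unknown" symptom), a common cause in a no-shared-volume setup is `SCRIPT_FILENAME` resolving to a path that doesn't exist on the FPM container. A sketch of what I mean (the address and document root are hypothetical):

```nginx
# On the NGINX LXC: the path passed to FPM must be the path as seen
# by PHP-FPM on *its* container, not a path on the NGINX container.
location ~ \.php$ {
    # try_files $uri =404;  # <- also 404s here without a shared volume,
                            #    since NGINX cannot stat the .php files
    fastcgi_pass 10.0.0.2:9000;       # hypothetical FPM address
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/matomo$fastcgi_script_name;
}
```

I can't say for sure that's your exact bug, but it's the first thing I'd check in the FPM and NGINX error logs.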

So, at this point, I am giving up and this is what I am going to do:

  • NGINX (public) as reverse proxy with public IP address.
  • NGINX (private) with PHP-FPM on the LXC where Matomo is deployed, using UNIX socket.
  • PHP-FPM querying MySQL when needed.

NGINX (public) will use proxy_pass to send requests to NGINX (private), which will serve static content itself and use fastcgi_pass to send .php requests to PHP-FPM over a UNIX socket.
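If it helps to make it concrete, the fallback layout could look roughly like this (server names, internal addresses, certificate paths and the PHP version in the socket path are all assumptions):

```nginx
# --- NGINX (public): reverse proxy with the public IP ---
server {
    listen 443 ssl;
    server_name matomo.example.com;              # hypothetical
    ssl_certificate     /etc/nginx/tls/pub.crt;
    ssl_certificate_key /etc/nginx/tls/pub.key;

    location / {
        proxy_pass https://10.0.0.2;             # private NGINX, over TLS
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# --- NGINX (private): same LXC as Matomo and PHP-FPM ---
server {
    listen 443 ssl;
    root /var/www/matomo;                        # hypothetical path
    index index.php;
    ssl_certificate     /etc/nginx/tls/priv.crt;
    ssl_certificate_key /etc/nginx/tls/priv.key;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;   # safe now: the files exist on this container
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

With everything co-located, `$document_root` works again and the NGINX-to-FPM hop stays on a local socket, so the only unencrypted leg disappears.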

Is this the way? Am I missing anything? Any feedback will be much appreciated. I’ve been searching for hours and hours, reading documentation, blog posts, Stack Overflow and the like, but so far it’s proven to be a tough nut to crack.