We use a self-hosted version of Matomo, which I inherited from a different dev, and I'm not too experienced with it. We track our website, which gets 100k+ visits/month, on a server (8 vCPUs, 16 GB RAM, 100 GB disk) and a database (16 GB RAM, 290 GB disk), which should be able to handle this, right? I want to fetch certain data from the API but keep running into problems.
I want to get the number of page views for a specific page, and the impressions of a specific piece of content, using the API.
The first problem was the API randomly throwing errors (502 Bad Gateway) when trying to get page views. I used the Actions.getPageUrl method with a pageUrl=MYPAGEURL param. After much headache with this randomly failing, I switched to VisitsSummary.get with segment=pageUrl==MYPAGEURL. I also programmatically added segments (using SegmentEditor.add) matching this condition (definition=pageUrl==MYPAGEURL) and set up the cron job to pre-archive these reports (I followed these instructions: https://matomo.org/faq/on-premise/how-to-set-up-auto-archiving-of-your-reports/). I wanted to fetch the data by segmentId so it would be more stable (data already archived and ready to go?), but there doesn't seem to be any API method to fetch visitor data by segmentId. Anyway, Actions.getPageUrl seems a bit more stable now; I'm not sure if that's because of the segments.
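For reference, here's roughly how I'm building the two request variants (site ID, page URL, and token are placeholders, and I know token_auth should really be sent in a POST body rather than the query string):

```python
from urllib.parse import urlencode

# Placeholder values for illustration only.
MATOMO_BASE = "https://matomo.example.com/index.php"
ID_SITE = 1
TOKEN = "YOUR_TOKEN_AUTH"

def build_request(method: str, extra_params: dict) -> str:
    """Build a Matomo Reporting API request URL."""
    params = {
        "module": "API",
        "method": method,
        "idSite": ID_SITE,
        "period": "day",
        "date": "yesterday",
        "format": "JSON",
        "token_auth": TOKEN,  # placeholder; prefer POST in production
        **extra_params,
    }
    return f"{MATOMO_BASE}?{urlencode(params)}"

# Per-page report filtered by URL (the call that was intermittently 502-ing):
url_a = build_request("Actions.getPageUrl",
                      {"pageUrl": "https://www.example.com/my-page"})

# The workaround: overall visit metrics restricted by a pageUrl segment:
url_b = build_request("VisitsSummary.get",
                      {"segment": "pageUrl==https://www.example.com/my-page"})
```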
The second issue is retrieving content impressions using Contents.getContentPieces. When filtering for the content piece I need, I use the label=MYLABEL123 param in my API request, but that only works when the content piece in question is within the top 100 content pieces. If it isn't (but I'm sure it's being tracked and there's data for it), I simply get an empty array as a response. I tried adding the filter_limit=-1 param (although I only expect one result to match my query), but that doesn't help.
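For completeness, this is roughly the request I'm making (again with placeholder base URL, site ID, and token):

```python
from urllib.parse import urlencode

# Placeholder values for illustration only.
MATOMO_BASE = "https://matomo.example.com/index.php"

params = {
    "module": "API",
    "method": "Contents.getContentPieces",
    "idSite": 1,
    "period": "day",
    "date": "yesterday",
    "format": "JSON",
    "label": "MYLABEL123",       # the one content piece I'm after
    "filter_limit": -1,          # tried lifting the row limit; response is
                                 # still empty when the piece is outside
                                 # the top 100
    "token_auth": "YOUR_TOKEN",  # placeholder
}
url = f"{MATOMO_BASE}?{urlencode(params)}"
```

My guess is that filter_limit only affects how many rows the API returns, not what was stored at archiving time, but I'd appreciate confirmation.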
What am I doing wrong that makes this seemingly simple task such a headache? Thanks in advance!