How to bypass database connection limitations - brainstorming

Hello Matomo Forum,

This is my first post, so thank you for your interest and patience :slight_smile:

I am currently in the process of figuring out the best tool for my purposes, which I will describe below:

At my company we deal with sensitive data, which is why our web application (the place where Matomo would possibly be included) runs in a closed infrastructure, or rather architecture. That means there is NO connection to the internet. Alas, we cannot influence the given architecture, and the data tier is a Microsoft SQL Server. Yes, I know there is currently no support for that, and there won't be unless someone is willing to pay for that development.
Furthermore, there are two more problems: first, on-premise analytics solutions supporting SQL Server are very sparsely available (in fact, I haven't found suitable software after googling for two days now); second, we had already tested with Matomo and realized the missing SQL Server support a little late. And we like Matomo :slight_smile:

That's why I tried figuring out a way of somehow bypassing that problem. Here's my idea:
There will be a local MySQL db which serves as a kind of proxy, or transfer storage, if you wish. Data is stored there e.g. on a daily basis, or whatever would be best for the setup. Real-time analysis is really not necessary in our scenario.
Second, we set up SQL Server Management Studio (SSMS) for job automation to migrate the already stored data to the SQL Server via the SQL Server Migration Assistant (SSMA), which is actually not a big deal. Of course our "transfer" db would be limited in size and would need to be wiped after each successful migration. Up to that point, the only goal achieved would be the data being stored in the wanted place.
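To make the idea a bit more concrete, here is a minimal sketch of the "export, then wipe" part of such a nightly job. All names here (`matomo_transfer`, the table names) are placeholders I made up, not guaranteed Matomo schema; the sketch only builds the SQL statements a scheduler would hand off, it does not talk to any database:

```python
# Sketch of the nightly "transfer then wipe" step for the MySQL transfer db.
# Database/table names are placeholder assumptions, not real Matomo schema.

def nightly_transfer_statements(tables, transfer_db="matomo_transfer"):
    """Return the SQL statements a scheduled job would run:
    first export each table (the SELECT would feed SSMA or a dump tool),
    then TRUNCATE it so the transfer db stays small."""
    statements = []
    for table in tables:
        statements.append(f"SELECT * FROM {transfer_db}.{table};")
    # Only wipe after the migration step reported success.
    for table in tables:
        statements.append(f"TRUNCATE TABLE {transfer_db}.{table};")
    return statements

for stmt in nightly_transfer_statements(
    ["matomo_log_visit", "matomo_log_link_visit_action"]
):
    print(stmt)
```

In a real setup the TRUNCATE half would of course be gated on the SSMA job reporting success, otherwise you'd lose a day of data.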

But what good would that be, heh. I don't know how Matomo processes data internally, or what format it uses.
I read that you could also use server log file information: you'd have Matomo import the log files, parse through them and get the needed data out of them. Maybe even going so far as to only save logs in the first place?
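As I understand it, Matomo's log importer consumes standard web server access logs. Just to illustrate what information such a log line carries, here is a rough sketch that parses one line of the common Apache/Nginx "combined" format (the sample line and the field names are my own, for illustration only):

```python
import re

# Rough sketch: pull the basic fields out of one "combined" format log line,
# the kind of file a log importer would parse.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

def parse_line(line):
    """Return a dict of fields for a matching line, or None."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

sample = '203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326'
print(parse_line(sample))
```

If the logs already contain everything Matomo needs (IP, timestamp, URL, status), then "only save logs in the first place" might really be enough for our non-real-time scenario.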
Or, activate the query log function to capture the actual queries, and have some sort of tool or method translate each query to SQL Server dialect, execute it on the server, and have it write the result back to the transfer db. Then re-run the command so it executes properly with the received data, and afterwards clean the transfer db again, of course.
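To show the kind of rewriting such a translator would have to do, here is a toy sketch covering just two trivial MySQL vs. T-SQL differences (backtick quoting vs. bracket quoting, and `LIMIT n` vs. `TOP n`). A real translator would be far more work than this, which is part of why I'm unsure about the idea:

```python
import re

def mysql_to_tsql(query):
    """Toy illustration of MySQL -> T-SQL translation.
    Handles only two trivial differences: backtick quoting becomes
    bracket quoting, and a trailing LIMIT n becomes TOP n."""
    # `identifier` -> [identifier]
    query = re.sub(r'`([^`]+)`', r'[\1]', query)
    # SELECT ... LIMIT n  ->  SELECT TOP n ...
    m = re.search(r'\s+LIMIT\s+(\d+)\s*;?\s*$', query, re.IGNORECASE)
    if m:
        n = m.group(1)
        query = query[:m.start()] + ';'
        query = re.sub(r'^\s*SELECT\b', f'SELECT TOP {n}', query,
                       count=1, flags=re.IGNORECASE)
    return query

print(mysql_to_tsql("SELECT `idvisit` FROM `matomo_log_visit` LIMIT 10;"))
```

And that only covers SELECTs; functions, date handling and the INSERT side of writing results back would each need their own rules.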

I know how all of this may sound. A lot of work for nothing to gain really. Possibly there are some errors in my thinking as well.

I'd appreciate you helping me out with this, or joining the discussion with new ideas.

Have a great day