Thanks Matt for your fast reply, useful as usual. From your answer I understand that data integrity is covered by unit tests.
In the past we did a major-release upgrade from v1.9.x to v2.1.0. The console execution apparently hung or died after about half an hour of database upgrading, and during manual functional testing we discovered we had lost the Frequency metrics data prior to 2012-02-xx (can't remember the exact date).
In the meantime Piwik had reached 2.3.0, so after that setback we started over and upgraded from 1.x to 2.3.0, this time flawlessly, with no noticeable data integrity problems.
Because of this, upgrades now feel a bit tricky, especially since we ran the console using su www-data -c ./console and got no output at all on the screen during the process. (On my dev VM I always run the console as root.)
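For what it's worth, piping the console run through tee would have shown progress on screen while also keeping a log for later inspection. A minimal sketch, assuming a Piwik 2.x install (the su line is how it would look on the real server; a stand-in command is used below so the pattern itself is runnable anywhere):

```shell
# On the real server this would be something like (user/path are assumptions
# from my own setup; the updater command is core:update in Piwik 2.x, adjust
# to your version):
#   su www-data -c "./console core:update" 2>&1 | tee upgrade.log
# Stand-in command so the tee pattern itself can be tried anywhere:
sh -c "echo step 1 ok; echo step 2 ok" 2>&1 | tee upgrade.log
# Afterwards the full output (stdout and stderr) is preserved in upgrade.log.
```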
The problem is minor because we have a backup and can repeat the task, but you see, each attempt costs at least half an hour of resource usage on two servers (application + RDBMS). Moreover, I found the manual checking of the data a bit messy: the data set is huge, and the checks could be automated so humans can spend their time on more rewarding things.
My idea is to try running the console as root for the core:upgrade, while also providing some visual feedback on data integrity between the two installations.
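As a rough sketch of the kind of automated check I mean: dump per-table row counts from the old and the new database and diff them. The mysql lines below are hypothetical (hosts, credentials and schema name depend on the environment); stand-in data files make the comparison step itself runnable:

```shell
# Hypothetical: dump per-table row counts from each database, e.g.
#   mysql -h old-db -N -e "SELECT table_name, table_rows FROM information_schema.tables WHERE table_schema = 'piwik'" > counts_old.txt
#   mysql -h new-db -N -e "SELECT table_name, table_rows FROM information_schema.tables WHERE table_schema = 'piwik'" > counts_new.txt
# (information_schema.table_rows is only an estimate for InnoDB; for a strict
# check run SELECT COUNT(*) per table instead.)
# Stand-in data so the comparison step is runnable here:
printf 'piwik_log_visit\t123456\npiwik_archive_numeric_2012_02\t7890\n' > counts_old.txt
printf 'piwik_log_visit\t123456\npiwik_archive_numeric_2012_02\t0\n' > counts_new.txt
# Every '>' line printed is a table whose row count changed across the upgrade:
diff counts_old.txt counts_new.txt | grep '^>' || true
```

A quick glance at the diff output is the "visual feedback" I have in mind; an empty diff means the row counts survived the upgrade.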
Reading the Update guide, the procedure looks easy and straightforward, but in reality environments can differ a lot, and that makes all the difference. I mean: someone may have two or more DBMSes in a master/slave setup, someone else may have two or more load-balanced application servers, or the whole stack may be in the cloud, so the upgrade procedure can vary accordingly.