More problems with server failing to process

Which SQL query should I run manually to delete any visitor logs older than 120 days?

I might as well try that from either the command line or phpMyAdmin, since whatever is calling it from the Piwik settings isn't completing. This way, at least it'll complete, or if it fails there's a chance I'd get an error to track down.

I’d like to try to get the database down as small as possible, since Arvixe support told me I couldn’t import a database that’s 3Gb, and I still need to move this site there now that the massive log import is completed.

Delete all data from the tables piwik_log_* where DATE < '2012-01-01', for example

That query doesn't work… it keeps telling me there's an error near "date".

I think the problem is that there are no "date" fields in any of the log tables? Or do I have to convert the date I want to match into a field that exists in all the tables?
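
Something along these lines should be closer to valid SQL. It's only a sketch, assuming the standard piwik_log_* schema, where the visit timestamp lives in piwik_log_visit.visit_last_action_time (there's no column literally named "date") and the per-visit log tables are keyed by idvisit rather than by date, so double-check the names against your install:


-- Sketch, untested: find the newest idvisit older than 120 days
SELECT MAX(idvisit) FROM piwik_log_visit
WHERE visit_last_action_time < NOW() - INTERVAL 120 DAY;

-- ...then delete everything up to that idvisit in batches, child tables first.
-- Re-run each DELETE until it affects 0 rows, replacing 2376138 (an example
-- value) with the idvisit returned above.
DELETE FROM piwik_log_link_visit_action WHERE idvisit <= 2376138 LIMIT 100000;
DELETE FROM piwik_log_conversion WHERE idvisit <= 2376138 LIMIT 100000;
DELETE FROM piwik_log_visit WHERE idvisit <= 2376138 LIMIT 100000;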

What showed up in the slow query log this time was:


# Query_time: 186  Lock_time: 0  Rows_sent: 1  Rows_examined: 12641093
SELECT COUNT(*) FROM piwik_log_link_visit_action WHERE idvisit <= '2376138';

# Query_time: 746  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
DELETE FROM piwik_log_link_visit_action WHERE idvisit <= '2376138' LIMIT 100000;

# Query_time: 6  Lock_time: 0  Rows_sent: 1  Rows_examined: 12541093
SELECT COUNT(*) FROM piwik_log_link_visit_action WHERE idvisit <= '2376138';

I wonder if this is why the log purges aren't working… they aren't finding anything to match the queries? That doesn't make sense, though; I'm no SQL coder, not by a long shot, so I'm flying on a wing and a prayer here.
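
One thing that might be worth checking is whether MySQL can use an index for that idvisit predicate. The 12.6 million rows examined per COUNT may simply mean that many rows match, but if EXPLAIN reports a full table scan (no usable index on idvisit), both the COUNT and the batched DELETEs would be far slower than they need to be. A sketch of the checks:


-- Does the purge predicate get an index? (look at the "type" and "key" columns)
EXPLAIN SELECT COUNT(*) FROM piwik_log_link_visit_action WHERE idvisit <= 2376138;

-- List the indexes on the table; the stock schema should include one covering idvisit
SHOW INDEX FROM piwik_log_link_visit_action;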

As an update, the purge of old visitor logs finally worked a few weeks ago, churning through that large backlog of data (it took a little over 9 hours to complete, IIRC). The Piwik installation was then moved to its new dedicated server, and since then all visitor log deletions seem to be taking more normal amounts of time.

But I’m still having a problem with the archive processing.

Currently, the archiving is set to 7200 seconds, with PHP execution time and PHP input parsing time both set to 1800 seconds, and the PHP memory limit set to 4Gb.

I even changed MySQL to allow persistent connections forever, and I'm STILL getting this error:


SUMMARY OF ERRORS
Error: Got invalid response from API request: /index.php?module=API&method=VisitsSummary.getVisits&idSite=1&period=year&date=last7&format=php&trigger=archivephp. Response was '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>500 Internal Server Error</title> </head><body> <h1>Internal Server Error</h1> <p>The 

1 total errors during this script execution, please investigate and try and fix these errors
ERROR: 1 total errors during this script execution, please investigate and try and fix these errors. First error was: Got invalid response from API request: /index.php?module=API&method=VisitsSummary.getVisits&idSite=1&period=year&date=last7&format=php&trigger=archivephp. 
Response was '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>500 Internal Server Error</title> </head><body> <h1>Internal Server Error</h1> <p>The

And this is all that shows up in error_log:


[warn] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
[error] Premature end of script headers: index.php

I've tried using a CGI wrapper instead of FastCGI, and all that does is make the script fail sooner, always while crunching the numbers for the entire year for Site #1.

The only things I haven't tried are the "use big tables" MySQL setting and increasing the archiving to 9600, but given how much data Site #1 (my big-traffic site) collects, I don't know whether those will make things better or worse.
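
If "use big tables" means MySQL's big_tables switch (an assumption on my part), it only tells MySQL to write its internal temporary tables to disk from the start instead of trying to keep them in memory first, so it looks cheap to test per session:


-- Sketch: check the current setting, then enable it for the current session only
SHOW VARIABLES LIKE 'big_tables';
SET SESSION big_tables = 1;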

I just ran an update on MySQL and PHP, so I'll check the logs in the morning and see if those made any difference.

Any other ideas?

I found a couple of potential things to try:

PRM was the culprit, killing PHP processes. Because of the nature of FCGI, the processes tend to grow in memory use, so they would get killed, with these results.
Bottom line: process managers like these should be configured properly or avoided.
FCGI works well with some configuration, though as cPanel staff state, it should be avoided by inexperienced users, especially if traffic is uneven between hosted sites, since it requires special vhost parameters to avoid "process slot" problems.

There's also a homeloader issue; not sure if it's applicable, but it might at least help eliminate possible sources.

Also, what version of FCGI are you using?

This server is not technically a shared server; it's running Virtualmin, and it's supposed to have the equivalent of a 4-CPU, 12Gb-RAM VM all to itself, so there are no other sites to share resources with on that instance.

The domain instance is set up for unlimited PHP script run time, 1500s PHP input parsing time, and 4Gb max memory, and the MySQL timeout is set to 1800s.

And no matter how much memory I've configured PHP to be able to use (increased from 1Gb to 4Gb over the past week or so), the process crunching the year's data for Site 1 dies somewhere between 1100 and 1200 seconds in… it's not even the same amount of elapsed time each time it dies, and it's nowhere near the 1800s I have set for the entire server! It irks me :slight_smile:

The server says: mod_fcgid-2.3.7-1

And I have a few decades' experience as a Unix admin; I'm just limited in what I can change on this particular system :slight_smile:

Just for the heck of it, I've changed the cron to archive every 4 hours, to see what happens. I'm not sure what else I can try to get my year numbers to process normally.

I found this link

http://www.virtualmin.com/node/20895

It could be the PHP version / fcgid combo and the app.

I could dig up a repo that has PHP 5.4 and try that.

There aren't any known issues with Piwik and PHP 5.4, right? :slight_smile:

It should work well with PHP 5.4.

Piwik seems to run a little faster with PHP 5.4.9, but the error persists:

From syslog:


PHP Fatal error:  1 total errors during this script execution, please investigate and try and fix these errors. First error was: Got invalid response from API request: index.php?module=API&method=VisitsSummary.getVisits&idSite=1&period=year&date=last10&format=php&trigger=archivephp. Response was '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">#012<html><head>#012<title>500 Internal Server Error</title>#012</head><body>#012<h1>Internal Server Error</h1>#012<p>The in /public_html/misc/cron/archive.php on line 561

The error log has the same FastCGI error as before.

Here’s the full entry from the cron run of archive.php:


ERROR: Got invalid response from API request: index.php?module=API&method=VisitsSummary.getVisits&idSite=1&period=year&date=last10&format=php&trigger=archivephp. 

Response was '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>500 Internal Server Error</title> </head><body> <h1>Internal Server Error</h1> <p>The server encountered an internal error or misconfiguration and was unable to complete your request.</p> <p>Please contact the server administrator,  root@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error.</p> <p>More information about this error may be available in the server error log.</p> </body></html> '
Archived website id = 1, period = year, 0 visits, Time elapsed: 1048.770s

Check the Apache server error log for more info.

Would this happen to be something on your setup?

Increase the suhosin.memory_limit to 512M.

@lesjokolat, not on my system.

@matt, the error_log has the same errors as before, the same as for the past 5 weeks (see the Oct 24 post); nothing's changed there. The entries in the error_log are always:


[warn] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
[error] Premature end of script headers: index.php

I still have it set to run via cron every 4hrs, so I'm waiting for its second attempt to process the archives after upgrading to 1.9.3-b7 (the first attempt for Site 1 declined, saying that it had already processed data 3hrs 58min earlier).

Hmmm, OK, here is another distro that shows the same error; could this be applicable to your setup?

Or here is a suexec issue.

Nope. See the very first post in this thread for details… the only things different now are that the Piwik install is running 1.9.3-b7, and I've increased the RAM to 5120M and the PHP execution time to unlimited.

You mentioned you tried running it as something other than FastCGI; did you try just regular CGI mode?

If possible, can you copy the php.ini contents (minus any secure info)?

Yes, both regular CGI mode and Apache mod_php fail with the same error, only much sooner than the 1100 or so seconds. I manually adjusted the fcgid timeout to match the 1800s PHP timeout, but that hasn't worked yet, and I have no idea why; after the next cron run I'm planning to reboot the whole server and see if that changes anything.

I've even tried adjusting the MySQL timeouts to ridiculous levels, to no avail. And it's only when processing the year totals for the one site, and that's the same site that I spent a few months importing that 25Gb worth of access_log data into. It should be showing me somewhere between 1 and 2 million visits, but I get zero for that site when I switch to the year view.
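
For the record, the MySQL-side limits that can cut off a long-running archiving query can at least be confirmed in one place; a sketch of the checks (the values shown are examples, not recommendations):


-- Which server-side timeouts are actually in effect right now?
SHOW VARIABLES LIKE '%timeout%';
SHOW VARIABLES LIKE 'max_allowed_packet';

-- For example, matching the idle-connection limits to the 1800s PHP limit:
SET GLOBAL wait_timeout = 1800;
SET GLOBAL interactive_timeout = 1800;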

Now if I manually run archive.php with --force-all-periods set to a year, the data will show up, but the next time it processes via cron, it’ll fail, and I’ll get zero again for that one site for the year.
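
One way to see whether the year archive is actually being written and then thrown away is to look at the archive table directly. This is only a sketch, assuming the stock archive schema, where period code 4 means "year" and the 2012 year archive is stored in the January table:


-- Is there a completed year archive row for Site 1, and when was it written?
SELECT idarchive, date1, date2, name, value, ts_archived
FROM piwik_archive_numeric_2012_01
WHERE idsite = 1 AND period = 4 AND name IN ('done', 'nb_visits');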

It's frustrating. This is the only outstanding problem I have left with Piwik, and the fact that other people can track sites with millions of visitors while my install can't handle my one site that gets that many is boggling.

This server was built to handle this Piwik install alone because HostGator shut it down, and it still can't handle it? Very odd.

How about this? I'm guessing GD (GoDaddy) isn't involved, but here's some interesting archiving info...

Don't...
Rely on GD support or their help, forums or any other of their 'documentation'
Use their default PHP or PHP5 cron job strings ie: /web/cgi-bin/php or /web/cgi-bin/php5
Wrap the URL in quotes (take note below)
Bother wasting time trying to hack Piwik

Do...
Ensure you set archive.php to 755 permissions
Insert following into cron job 'browse' line:
/usr/bin/curl http(s)://www.mysite.com/path/to/piwik/misc/cron/archive.php?token_auth=[insert-token]
Replace [insert-token] with your token_auth key

Special thanks to - http://eckstein.id.au/1274/internet/tutorial-godaddy-wordpress-wpomatic-cron/

Nope, not applicable.

That would be a solution if I were hosted at GoDaddy and my entire site were failing; that's not the case here.

Any chance of seeing the php.ini? Even a pared-down version? PM it if you like.