
Too Many Connections


ProblemHelpPlease

Recommended Posts

I sometimes get a "too many connections" error in the server logs. I know this means I have exceeded the number of permitted connections to the MySQL server, but comparing the error against the access logs shows it is unlikely to be just a high number of visitors. Visitor numbers are around 700 per day with 6,500 page views.

I noticed from looking at the processlist for the MySQL server that I am getting a lot of Sleep commands running. mysql.connect_timeout is set to 60, and I have read elsewhere that this can cause "too many connections" problems on Apache servers running PHP, and that reducing it helps. I don't have direct access to the php.ini file, so I can't change mysql.connect_timeout or the number of allowed connections (which is set to 25).

I am wondering if the problem is related to another piece of code I added. I needed some content to refresh when pressing the back button, so I added session_start(); to the top of the page to force a reload. This worked as intended, but I am now wondering if it causes each page a user opens to generate a new connection rather than reusing the Sleep connections that are already running. I am not sure if that is how the Sleep commands work, or if a new connection would be used anyway.

I am also seeing pages that time out before returning headers, but I don't think this is related to the MySQL problem. Both problems only happen about 1 in every 1,000 page views or so.

Does anyone have any knowledge of this kind of problem?
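For anyone in a similar situation, a quick way to see where the sleeping connections come from is to query the server directly. A sketch, assuming you at least have query access under your own credentials (these are standard MySQL statements):

```sql
-- List current connections and their state ("Sleep" rows are idle connections)
SHOW PROCESSLIST;

-- The MySQL-side timeout that closes idle connections.
-- Note this is distinct from PHP's mysql.connect_timeout,
-- which only limits how long a connect *attempt* may take.
SHOW VARIABLES LIKE 'wait_timeout';
```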


The sleep() function (if that's what you have in mind) halts PHP execution for a certain period of time, making it do NOTHING (not even output anything) in the meantime. If the page is refreshed or the HTTP connection is aborted in the meantime, the script will still keep running at least until the sleep is over, unless the server OS explicitly terminates it somehow. If you've set up a MySQL connection beforehand, that connection will remain alive for the duration of the sleep (unless it times out before the sleep is over).

sleep() is rarely (if ever) good for real scenarios... its best use is as a debugging tool - when you want to see the impact of your code halting at a certain critical place, or want to troubleshoot concurrency issues.
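To illustrate the blocking behaviour described above, a minimal self-contained demonstration (no database involved; it just measures the pause with microtime()):

```php
<?php
// sleep() blocks the whole script; no output or DB work happens meanwhile.
$start = microtime(true);
sleep(1);                           // halt execution for one second
$elapsed = microtime(true) - $start;
// Any DB connection opened before this point would have stayed open
// (and idle) for the entire $elapsed duration.
echo "script was blocked for about " . round($elapsed, 2) . " seconds\n";
```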


I was referring to the output from a SHOW PROCESSLIST:

Kill 7541381 xxxxxxxxxxxxxx localhost xxxxxxxxxxxxxxx Sleep 18

regarding an open connection that has not yet terminated.

Also, as a separate but related question: does anyone know whether, when using phpMyAdmin's Processes tab, the full query should be displayed for an item in the process list, or does it cut off after a certain number of characters and only show part of the query being run?


Ah, that... sorry.

Well... does PHP run as CGI or as an Apache module? You made it sound as if it's running as an Apache module... if so, you should probably use mysql_pconnect() to create a "persistent connection", i.e. a connection that will be reused across HTTP requests if it connects to the same server with the same credentials.

Using this won't exactly help in finding out where the cause is, but it's sure to be a solution if PHP is running as an Apache module.
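For reference, a minimal sketch of the persistent-connection variant. This uses the legacy mysql_* API the thread is based on (removed in PHP 7), needs a live server to run, and $host/$usr/$pwd stand for your existing credentials:

```php
<?php
// mysql_pconnect() looks for an already-open connection to the same
// host with the same user, and reuses it instead of opening a new one.
$con = mysql_pconnect($host, $usr, $pwd);
if (!$con) {
    die('Could not connect: ' . mysql_error());
}
// Note: mysql_close() has no effect on persistent connections;
// they stay open for the Apache child process to reuse.
```

One caveat worth knowing: persistent connections are kept per Apache child process, so the total can still grow to roughly (number of child processes × distinct credential sets).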


  • 1 month later...

OK, I have new information on this problem, so I have reopened this post in the hope someone knows something about it.

The "too many connections" issue is due to the entire server limit of 451 being reached. It normally operates at about 10 connections per second, but over a 2-3 minute period it climbed to reach 451, stayed there for about 1 minute, then gradually fell again. During this time the processlist for my site on the server (it is shared) remained at about 9 connections, all of them in the Sleep state (with times around 250).

I am guessing that all sites on the server that are opening connections are not closing them for some reason, resulting in the connections climbing rapidly and then hitting the limit. At the same time the number of queries drops to 0 and the traffic to the MySQL server drops to 0, probably because nobody can connect to it.

Does anyone have any ideas as to what might cause this (it only happens very rarely), or how to go about finding out what the cause might be?


Sounds like you've been visited by "Anonymous" lol. Check the version of the server; it may be that there is a bug that causes your problem.


The MySQL server is version 5.1.55. I am not aware of any known problems with this version that would match the issue, and I am also unable to find anyone else having the same problem. Many have a "too many connections" problem, but none are server-wide or escalate in such a rapid manner.


I am trying to test another site on the server and show that the error appears on 2 sites at the same time, but I need to generate constant traffic to the new site for the error to appear (so someone looking at the site would generate the error log).

I have tried the code below on the first website.

$con = mysql_connect("$host", "$usr", "$pwd");
if (!$con) {
    die('<META HTTP-EQUIV="Refresh" CONTENT="0;URL=http://www.othersite.xxxx/">');
}
mysql_select_db("$db", $con);

I was hoping this would redirect traffic to site 2 if site 1 had the connections error; however, the error just happened again and no traffic was moved to site 2, although site 1 had traffic. I tested it by making $pwd wrong so it would fail at login, and it redirected me successfully, so I thought it would work.

Does anyone know why this happened, and has anyone got a better way to do this?


Yes, I use mysql_close(). The issue at the minute is successfully catching the error and redirecting the page to the other website. I am now trying the code below.

$con = mysql_connect("$host", "$usr", "$pwd");
if (mysql_errno() == 1203) {
    // 1203 == ER_TOO_MANY_USER_CONNECTIONS (mysqld_error.h)
    header("Location: http://othersite/");
    exit;
}
mysql_select_db("$db", $con);


FWIW, on a shared host, as far as CPU and connection utilization is concerned, it's always important to keep tabs on everyone. That's why good hosts (unlike the one you're on) have per-user limits as well as a global one. You might be a "good citizen", but if there's a prick there who's running everlasting cron jobs that open several connections, he alone is going to block the whole server if there's no per-user limit.

Also, your code seems like it would work, but you might want to make it slightly more efficient by referencing the variables directly, i.e.

$con = mysql_connect($host, $usr, $pwd);
if (mysql_errno() === 1203) {
    // 1203 == ER_TOO_MANY_USER_CONNECTIONS (mysqld_error.h)
    header("Location: http://othersite/");
    exit;
}
mysql_select_db($db, $con);


  • 2 weeks later...

That's the odd thing: the shared server does have per-user limits. The connections are set at 25 each. It's all connections that are not closing, so when you add up all users' connections not closing, it quickly hits the server limit even though each user has no more than 25 open.

I'm not sure why I was using "" in the code; it's old code, so not something I ever look at in much detail.


I have managed a temporary fix. Every time a connection is made to the website, I check whether any MySQL processes are currently stuck in Sleep with a time of more than 20 seconds and kill them if they exist; the script then continues with its normal operation. It seems to be working so far. I have also managed to get the max connections allowed for the entire server doubled, which should reduce the amount of time the server spends at the full connection limit during one of its moments.


OK, I now have a new problem. About 1 in 7,000 pages viewed on the site (which is about the daily average) results in one of the queries failing with a "MySQL server has gone away" error. It has only started doing this since I added the KILL script to the connections.

Can anyone see if the script below would cause this problem, or if it is likely to be something else (like a slow query)?

$con = mysql_connect($host, $usr, $pwd);
if (!$con) {
    die('Could not connect: ' . mysql_error());
}
mysql_select_db($db, $con);

## KILL ALL SLEEPING CONNECTIONS OVER 20 SECONDS LONG ##
$result_kill = mysql_query("SHOW PROCESSLIST");
while ($row_kill = mysql_fetch_array($result_kill)) {
    $process_id_kill = $row_kill["Id"];
    if ($row_kill["Time"] > 20 && $row_kill["Command"] == "Sleep") {
        $sqlto_kill = "KILL $process_id_kill";
        mysql_query($sqlto_kill);
    }
}

I don't see how, if a connection is in Sleep, killing it would close a connection that was still processing a query.


It could be that between the time you get the process list and the time you loop through it, one of the connections has become active. That's called a race condition: the data you're using might have changed between the time you fetch it and the time you use it. That is more likely the longer it takes MySQL to kill each process. If it takes 1 second to kill each process, and you're looping through 20 processes, then 20 seconds have passed by the time you get to the last one, and it may be active by then.
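One way to narrow (though not eliminate) that window is to re-check each process immediately before killing it and to tolerate KILL failures, rather than trusting the initial snapshot. A hedged sketch against the thread's legacy mysql_* code; it needs a live server and the process IDs are whatever SHOW PROCESSLIST returns:

```php
<?php
// Snapshot may already be stale by the time each KILL runs.
$result = mysql_query("SHOW PROCESSLIST");
while ($row = mysql_fetch_assoc($result)) {
    if ($row["Command"] == "Sleep" && $row["Time"] > 20) {
        if (!mysql_query("KILL " . (int)$row["Id"])) {
            // ER_NO_SUCH_THREAD (1094): the process ended on its own
            // between the snapshot and the KILL; safe to ignore.
        }
    }
}
```

The fundamental race (a Sleep connection waking up just before the KILL lands) cannot be fully removed with this API, since MySQL has no "kill only if still sleeping" statement.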


Just a thought... have you tried explicitly terminating your DB connections as soon as you're done with them? There may be a small interval between the time of your last query and the time PHP ends and the connection is therefore closed.

Also, if you're willing to sacrifice memory in return for fewer active connections and less CPU, you could use the MYSQLI_STORE_RESULT retrieval mode and close the connection right after the last query (since the result would be stored already).
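A sketch of that pattern with mysqli (credentials and table name are placeholders; MYSQLI_STORE_RESULT buffers the whole result set on the PHP side, which is what lets the connection be closed before the rows are consumed):

```php
<?php
$db = new mysqli($host, $usr, $pwd, $dbname);

// MYSQLI_STORE_RESULT (the default mode) copies all rows into PHP memory.
$result = $db->query("SELECT id, name FROM some_table", MYSQLI_STORE_RESULT);

$db->close();   // connection released immediately; no lingering Sleep process

while ($row = $result->fetch_assoc()) {
    // rows are still available here, after the connection is gone
}
$result->free();
```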


Thanks for your ideas. I have already tried your suggestions, boen, and they didn't seem to improve the results. Regarding the KILL process taking a second per process: I can only have 10 processes running at any given point at the minute, so it couldn't reach 20 seconds unless it took 2 seconds per process. I have increased the time threshold to 200 to see what effect it has anyway.


I understood that; from testing it takes a very short amount of time, but maybe for some other reason at some point it would take longer. I increased the time to 200, but it still resulted in the same issue.

I have traced the "server has gone away" message to occurring only on a particular query. The query in question is

SELECT * FROM $some_table WHERE something1 = 'blah1' AND something2 = '$something2'

The table it is searching is made up of 6 fields, the longest being a varchar(50); it has 16,225 rows, which will continue to grow over time. Table size is 3.3 MiB. I am using an index on each of the fields individually, with the cardinality of something1 being 3 and of something2 being 5,408.

Is this the best way to index the table? Would an index on 2 fields together be better? And can you see why this query might have a "server gone away" issue once in 7,000 times of accessing it? This query is one of the last to be run before the page loads.


I'm not sure about the indexing, but if you aren't using all 6 fields from the result, it's probably a good idea to replace "*" with the exact columns, just to limit the amount of data that needs to be retrieved.
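For example (column names are hypothetical, since the real schema isn't shown in the thread):

```sql
-- Fetch only the columns the page actually uses, instead of SELECT *
SELECT something2, col3, col4
FROM some_table
WHERE something1 = 'blah1' AND something2 = 'example';
```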


For the field with a cardinality of 3, you might consider changing that to an enum field where you list the possible values. That requires much less space to store (typically 1 byte for enum, where text is 1 byte per character), and should make lookups faster. You may also consider an index spanning both columns you're searching for.
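A sketch of both suggestions in SQL; the ENUM values are placeholders to be replaced with the three real values that something1 actually takes:

```sql
-- Store the low-cardinality column as ENUM (typically 1 byte per row)
ALTER TABLE some_table
  MODIFY something1 ENUM('value_a', 'value_b', 'value_c') NOT NULL;

-- Composite index spanning both columns used in the WHERE clause.
-- MySQL applies the leftmost-prefix rule, so this index also serves
-- queries that filter on something1 alone, but not something2 alone.
ALTER TABLE some_table ADD INDEX idx_s1_s2 (something1, something2);
```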


Archived

This topic is now archived and is closed to further replies.
