
Board really slow


Slick Mongoose


Okay, I'm still seeing CPU usage stay below 20% even at the heaviest load, and neither Apache nor MySQL is consuming an overly large amount of memory. I even wrote a test program to run against another database in the same MySQL instance, and it just flew through the data.

So at this point, the slowdown /has/ to be in the Westeros table data specifically. As a result, I've now tried iterating across the entire database and 1) running REPAIR TABLE on each table, just in case, and 2) turning off all the compression/space-optimization settings that had been put on the Invision tables (I'm not sure when or by what, but it looks like one of the various Invision maintenance tools set them). I also shut the webserver and MySQL down for about 15 minutes to run integrity checks on the actual raw files and tweak the MySQL cache settings. I can at least attest that there are no index mismatch errors or other MyISAM format errors anywhere in the entire (QUITE LARGE) westeros database.
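
For anyone curious, the pass looked roughly like this; the table name and file path below are examples rather than a paste of exactly what I ran:

    -- per table, from the MySQL console
    CHECK TABLE ibf_posts;
    REPAIR TABLE ibf_posts;
    ALTER TABLE ibf_posts PACK_KEYS=0;   -- turn off the space-optimization option

    # with Apache and mysqld stopped, sanity-check the raw MyISAM files directly
    myisamchk --check /var/lib/mysql/westeros/*.MYI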

Unfortunately, I am starting to think that the Invision database may just be too large for the Invision scripts to handle happily; at this point, the ASoIaF database is, even in optimized form, 1.29 /gigabytes/ of forum data. On older (1.x) versions of Invision, I know that the queries used to build certain bits of state could reaaaaaally start to chew time when you were going through more than about 1.2GB of posts. The forums may, unfortunately, just be reaching that point, though I did not think that Invision still had that limitation.
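
If anyone wants to sanity-check that 1.29GB figure, it comes straight out of MySQL. This assumes MySQL 5.0 or newer, since it reads information_schema:

    SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
      FROM information_schema.TABLES
     WHERE table_schema = 'westeros';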

Moreover, if that were the case, I would think the MySQL server would be chewing more CPU/RAM, at least based on my personal experience. To add to my confusion, the Westeros wiki (which literally runs out of the same database, on the same server) is still blazingly fast, which suggests the problem is not in the westeros database.

The other possibility which occurs to me is the Amazon ad over there on the sidebar of the forum, which seems to hang for several seconds while retrieving the ad every time the skin is parsed. That doesn't seem to account for the entire slowdown, but may well be accounting for at least part of it. For the first day or so (when I did the initial performance tests on the new server), the Amazon ad /was not working/, which might account for the dramatic speed differences we're seeing. The new server is in a different datacenter than the old one, and while the new one has a fatter pipe (more data can be pushed through, faster), there might be a slower overall connection between stack (the new server) and Amazon than there was on segfault (the old server).

The more I think about that, the more I believe that possibility, actually. Because not only is the wiki quite speedy, but database operations /on the same page/, like a Quick Edit/Repost of an existing post, still happen extremely quickly even on the forum. The slowdown only seems to be on a full load of a page, where a skin parse will be triggered (and thus the Amazon ad). Hrm.

That possibility, however, Ran or Linda will have to poke at. I am not going to muck about with their skin templates. :)


Yeah, even once I get pages mostly loaded, I get "waiting for pagead2.googlesyndication.com" in the status bar at the bottom of the window, the bit that says "Done" if shit has gone properly. Anyhow, good to know it's not just my intertubes.


Thanks for looking into this, Sparks. :)

FWIW, I have an ad-blocker on both my work and home systems, and this board is still painfully slow. I don't know if it's still trying to look up the ads (if that functionality is somehow embedded in the skin) even though they are blocked, and ends up hanging there...or something else.

Ran/Sparks -- is it possible to offload a huge chunk of the database into an archive? Like, any thread that hasn't seen activity for more than 3 months automatically gets archived, but is still accessible via search? That way we don't have to lose old threads to major culls, but we can keep the day-to-day active threads up front and quick to access? I could be completely talking out of my ass, here. I have only the barest knowledge of database management.


Poking at this a bit more: I think at least some of the Google ads are loaded on the client side. But the Amazon ad appears to get loaded /into the skin/ on each draw, so it's literally that *this* server goes out and gets the ad from Amazon each time, *then* renders the skin (including the ad's HTML), and only then finishes sending you the page. So if there's a delay on the Amazon ad service, that would /definitely/ affect the rendering time, even if you have ad blockers on.
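
In simplified form, the pattern is something like the sketch below. This is not the actual template code (the ad URL and the timeout value are made up), but it shows why the whole page stalls:

    <?php
    // Simplified illustration, not the real skin code. The render blocks right
    // here until the ad host responds (or the timeout finally kicks in).
    $context = stream_context_create(array('http' => array('timeout' => 3)));
    $adHtml  = @file_get_contents('http://ads.example.com/banner', false, $context);
    echo ($adHtml === false) ? '' : $adHtml;
    // ...only now does the rest of the page HTML go out to the browser.
    ?>

At the very least, a short timeout like that would cap how long any one page load can hang on the ad.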


Things are complicated a bit by our plan to upgrade to IPB 3.0 soonish, since we'll have to redo the skins all over again... So we'll see if there's anything more we can do when we get to that.

X-ray,

I know that some forums have archiving functions. I have no idea whether IPB does, or whether it does anything useful if so. Will look around. That'd be a nice solution to things, if it helped.


All right, I think we've fixed this. Let us know.

Hats off to Sparks, who kept telling us what the problem was, but I kept approaching the problem ass-backwards and failing to wrap my head around a solution. And then it dawned on me while walking the dog, and took all of three minutes to implement.


The slowdown came from Latest News and Updates. We used a simple bit of PHP to include a URL from www.westeros.org to generate that. The problem was that, for some obscure reason, doing that was way, way slower than just directly going to the page (all the code was doing was checking http://www.westeros.org/ASoIaF/News/ -- try it, you'll see how quick it is). Sparks had explained all this to us, but then we (read: I) got stuck on the fact that I couldn't see why it should be so slow, and on how to make that webpage into a static page, since our CMS didn't work that way, etc.
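
The include was essentially no fancier than the sketch below (simplified, not a paste of our actual code):

    <?php
    // Old approach, roughly: every single page view on the forum made this
    // server fetch the news page from www.westeros.org before it could finish.
    readfile('http://www.westeros.org/ASoIaF/News/');
    ?>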

And then, well, when finally trying out a method of using PHP to write to a file, which was running into silly complexities (I've got an embarrassing post now up on a forum asking for help with a ridiculous patchwork of PHP and Perl) ... it dawned on me that I could just, you know, run a script server-side every 5 minutes (the very same script we were executing directly before, each time someone loaded a page on the site) and have it dump the output into a file on this server, then display that. Voila.

It is, in all seriousness, _one_ line of code pasted into a cron file. Fixes the whole thing.
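
For the curious, the shape of it is something like this; the paths are placeholders, and whether the real line fetches the URL with wget or runs the script through PHP directly is a detail:

    # regenerate the news snippet every 5 minutes
    */5 * * * * wget -q -O /path/to/forums/latest_news.html http://www.westeros.org/ASoIaF/News/

The board side then just reads that local file on each page view, e.g. readfile('/path/to/forums/latest_news.html') in PHP, instead of making a remote request to westeros.org every time.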

I was a dunce.


