Geek News Central Server getting crushed

I guess it is the price of having a popular podcast. Sadly, we are finding out that the server we have invested a lot of time and money in cannot handle the load we are putting on it.

I am not convinced it is the box itself, since we can still get to it via shell, so I am going to ask the provider for some support. I understand slow connections, but Apache should not fold under the load. Then again, when several thousand people try to download our show at the same time, no server is going to survive that slam.
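For what it's worth, a common culprit when Apache "folds" under a burst of simultaneous downloads is its default process limits rather than the hardware. A rough sketch of the relevant prefork directives (the values here are illustrative assumptions, not this server's actual configuration):

```apache
# Illustrative values only -- not the site's actual configuration.
# With the prefork MPM, Apache dedicates one process per connection;
# once MaxClients processes are busy, new download requests queue or stall.
<IfModule mpm_prefork_module>
    StartServers          5
    MaxClients          256     # raise cautiously; each process costs RAM
    MaxRequestsPerChild 4000
</IfModule>

# Long keep-alive timeouts tie up processes between requests;
# shortening the timeout frees slots faster during a download rush.
KeepAlive          On
KeepAliveTimeout   5
```

With large media files, each slow downloader holds a process for the whole transfer, so even a modest MaxClients can be exhausted quickly.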

I think I have a solution, so I will work on that over the next week or so. Meanwhile, we have put the show back in the capable hands of the folks.

About Todd Cochrane

Todd Cochrane is the Founder of Geek News Central and host of the Geek News Central Podcast. He is a Podcast Hall of Fame inductee and was one of the very first podcasters in 2004. He wrote the first book on podcasting and did many of the early podcast advertising deals in the podcasting space. He hosts two other podcasts in addition to Geek News Central: The New Media Show and Podcast Legends.

One thought on “Geek News Central Server getting crushed”

  1. In our discussion yesterday, I was actually impressed by how little you are spending on hosting (even though it is a lot for a low-revenue start-up). It’s not just bandwidth where you are suffering but also server maintenance, judging from your posts about GoDaddy’s management interface.

    As you scale up, you might want to think about going for more of a datacenter approach. In particular, I think you will need a real sysadmin and a hosting service like Rackspace (currently judged by Netcraft to be the most reliable).

    Companies I know that are running their business on the web typically spend between $900 and $1500 per *month* on hosting.

    The previous poster mentioned Laporte’s solution, Coral, a volunteer content distribution network that lowers the bandwidth hit on any one server. That works fine for smaller-scale bandwidth distribution but could turn out to be as unreliable as some of your current solutions once you scale.

    Good luck,

Comments are closed.