Apache vs nginx?

Tags: nginx, optimisation, apache, tomcat, http, web, server, web server

Added: 2013-04-26T00:00

People have been talking about nginx for a long time now.
I've always stuck with Apache, because it's tried and tested, it's what I know, and it's always been good enough for my purposes.

But there comes a time when you should re-evaluate your thinking, just to make sure you're not blindly doing what you've always done even though things have changed in the meantime.

Apache used to use a process-per-connection model (prefork). As you can imagine, that didn't scale too well, so it then added the option of a thread-per-connection model (worker). This works much better (thread creation being a darn sight cheaper than forking off a new process, especially when one web page may have many tens of images, CSS files, JavaScript files, etc. that also need to be loaded). Add in HTTP pipelining, and it works pretty much OK for most users.
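As a rough illustration, switching Apache from prefork to worker is mostly a matter of enabling the worker MPM and tuning a handful of directives. The values below are purely illustrative, not recommendations:

    # httpd.conf - worker MPM (thread-per-connection); numbers are only examples
    <IfModule mpm_worker_module>
        StartServers          2
        MinSpareThreads      25
        MaxSpareThreads      75
        ThreadsPerChild      25
        MaxClients          150    # upper bound on simultaneously-served connections
    </IfModule>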

However, spawning a thread for each incoming connection isn't free. There's a memory cost for each one, and also, each thread spends a large amount of time doing nothing, waiting for network activity to happen.

Would nginx work better for me?



What if you could condense the 100 threads that each spend 1% of their time doing stuff into one thread that spends 100% of its time doing stuff? Hey presto, you can still support 100 connections, but this time with only one thread, and much less memory.
Enter nginx.

nginx uses an asynchronous, event-driven model. Various other servers also have a mode that does something similar - Tomcat for one, and I think Apache now does too (the event MPM).
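For comparison, this is roughly what that shape looks like in nginx's config: a small, fixed number of worker processes, each multiplexing many connections in one event loop. Again, the numbers are illustrative only:

    # nginx.conf - each worker handles many connections from a single event loop
    worker_processes  2;
    events {
        worker_connections  1024;   # connections handled per worker
    }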

The first rule of optimisation is: don't.
The second rule is: don't - yet.
Having ignored those two very good rules, the third rule is: measure before, make a single change, then measure again afterwards.
The reason for this is that your change might well make things slower, or make no difference at all.

I decided to benchmark my site with http://blazemeter.com/. When you sign up, you get 10 free tests of 50 concurrent users each.

Testing


The first test ran fine at the 50-user level, with requests to a variety of URLs taking about 360-390 ms, until suddenly all the requests started taking a very long time and didn't recover until the test finished.

Looking into this, I saw lots of com.mongodb.DBPortPool$SemaphoresOut: Out of semaphores to get db connection exceptions.
What appeared to have happened was that the default values in MongoOptions were 10 connections per host and a "threadsAllowedToBlockForConnectionMultiplier" of 5, giving 50 threads that could hold or wait for a Mongo connection before that error started being thrown. So the test was using all of the Mongo connections, but the moment a few other requests came in, it tipped the balance, these exceptions started bubbling up, and hey presto, everything went wrong until the test ended.
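The obvious knob to turn is the connection pool size. With the 2.x-era Java driver, that means setting the fields on MongoOptions before constructing the Mongo instance - a minimal sketch, assuming that older MongoOptions/Mongo API, with numbers that are just examples:

    import java.net.UnknownHostException;

    import com.mongodb.Mongo;
    import com.mongodb.MongoOptions;
    import com.mongodb.ServerAddress;

    public class MongoPoolConfig {
        // Sketch only: raise the pool limits so ~50 concurrent users don't hit
        // the default ceiling of 10 connections * 5 multiplier = 50 blocked threads.
        public static Mongo connect() throws UnknownHostException {
            MongoOptions options = new MongoOptions();
            options.connectionsPerHost = 50;                            // default is 10
            options.threadsAllowedToBlockForConnectionMultiplier = 10;  // default is 5
            return new Mongo(new ServerAddress("localhost", 27017), options);
        }
    }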

After that, I wanted to run the test again, so I restarted Mongo, Apache, and Tomcat - but when I shut Tomcat down, I saw lots of warnings in my logs saying:
26-Apr-2013 22:23:38 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [] appears to have started a thread named [Thread-4437] but has failed to stop it. This is very likely to create a memory leak.

The thread numbers went from 83 all the way up to 4596, which is a lot of threads to have hanging around. This is something I definitely need to sort out before retrying my Apache measurements.
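The usual cause of that Tomcat warning is the webapp starting threads (often via an ExecutorService) and never stopping them when the context is undeployed. A hedged sketch of the standard fix - a ServletContextListener that shuts the pool down; the class and attribute names here are hypothetical, not taken from my actual code:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    // Hypothetical listener: makes sure the threads the webapp starts are stopped
    // when Tomcat undeploys it, so WebappClassLoader has nothing to complain about.
    public class WorkerPoolListener implements ServletContextListener {

        private ExecutorService workers;

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            workers = Executors.newFixedThreadPool(10);
            sce.getServletContext().setAttribute("workerPool", workers);
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            workers.shutdown();                    // stop accepting new work
            try {
                if (!workers.awaitTermination(10, TimeUnit.SECONDS)) {
                    workers.shutdownNow();         // interrupt anything still running
                }
            } catch (InterruptedException e) {
                workers.shutdownNow();
                Thread.currentThread().interrupt();
            }
        }
    }

Registered via a listener element in web.xml (or the Servlet 3.0 @WebListener annotation), that runs on shutdown, and the leaked-thread count should drop back to something sane.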

Comments

Comment

I have a problem with my Gmail: when I try to open it, it says "Peer's certificate has an invalid signature. (Error code: sec_error_bad_signature)". How can I manage this problem?
