Why is Nginx faster than Apache?

The main difference is in the architecture: how each server was designed to handle HTTP requests.
Apache:

Apache creates new processes and threads to handle additional connections.
The server can be configured to control the maximum number of allowable processes, but if that limit is raised beyond what the hardware can handle, the extra processes exhaust memory and can cause the machine to swap memory to disk, severely degrading performance. In addition, once the process limit is reached, Apache refuses additional connections.
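
With the prefork MPM that cap is the MaxClients directive (renamed MaxRequestWorkers in Apache 2.4); a minimal illustration, not a tuning recommendation:

# httpd.conf (prefork MPM) -- illustrative cap on worker processes
ServerLimit  256
MaxClients   256   # MaxRequestWorkers in Apache 2.4; beyond this, new connections wait or are refused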

Nginx:
Nginx handles HTTP requests asynchronously and is event-driven.
It does not create a new process per request. Instead, a fixed number of worker processes (e.g. 4 in total) is defined at start time, and each of those worker processes is single-threaded. Each worker can handle thousands of concurrent connections, and it does this asynchronously with one thread rather than using multi-threaded programming.
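
A minimal nginx.conf sketch of that model (the values are illustrative, not tuned settings):

# nginx.conf -- event-driven worker model (illustrative values)
worker_processes  4;           # fixed pool of single-threaded workers, often one per CPU core

events {
    worker_connections  1024;  # connections each worker can multiplex concurrently
}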

Linux: find CPU information in detail

A script to get detailed CPU information from /proc/cpuinfo and /proc/meminfo:

PHYSICAL_CPU=`grep 'physical id' /proc/cpuinfo | sort | uniq | wc -l`
VIRTUAL_CPU=`grep ^processor /proc/cpuinfo | wc -l`
CORES=`grep 'cpu cores' /proc/cpuinfo | head -1 | cut -f2 -d ":"`
MEMORY=`cat /proc/meminfo | head -1 | awk '{printf "%.0f",$2/(1024*1024)}'`
CPU_SPEED=`grep "^cpu MHz" /proc/cpuinfo | head -1 | awk -F":" '{printf "%0.2f",($2/1000)}'`
CPU_CACHE_SIZE=`grep "^cache size" /proc/cpuinfo | head -1 | awk -F":" '{print $2}'`
KERNEL=`uname -m`

ARCHS=`grep flags /proc/cpuinfo | uniq | egrep -o -w "tm|lm" | wc -l`

if [ ${ARCHS} -eq 2 ]
then
    SUPPORTED_ARCH="x86_64,x86"
else
    SUPPORTED_ARCH="x86"
fi

echo "Hostname: $HOSTNAME"
echo -n "Physical Processors: "
echo ${PHYSICAL_CPU}

echo -n "Virtual Processors: "
echo ${VIRTUAL_CPU}

echo -n "CPU Cores: "
echo ${CORES}

echo -n "CPU Speed: "
echo "${CPU_SPEED} GHz"

echo -n "Cache Size: "
echo "${CPU_CACHE_SIZE}"

echo -e "Memory: ${MEMORY}G"

echo "Kernel Arch: ${KERNEL}"

echo "CPU Arch: ${SUPPORTED_ARCH}"

echo "Notes:"
if [ ${CORES} -eq 1 -a ${VIRTUAL_CPU} -gt ${PHYSICAL_CPU} ]
then
    echo -e "\tCPU is Hyperthreading"
fi

if [ ${ARCHS} -eq 2 -a `echo ${SUPPORTED_ARCH} | grep -c ${KERNEL}` -eq 0 ]
then
    echo -e "\tHardware is 64-bit while installed kernel is 32-bit"
fi
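
To run it, save the script to a file (the name cpuinfo.sh below is just a placeholder) and execute it with bash:

bash cpuinfo.sh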

Difference between the prefork and worker Apache MPM modules

MPM stands for Multi-Processing Module, which extends Apache's capability by implementing a hybrid multi-process, multi-threaded architecture in the Apache web server.

The default MPM can be checked with httpd -l or apachectl -l.

The default MPM on Unix is the Prefork module.
The Worker MPM was introduced in Apache 2.
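
On a server built with the prefork MPM the output typically looks something like this (the exact module list varies by build):

# httpd -l
Compiled in modules:
  core.c
  prefork.c
  http_core.c
  mod_so.c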

Before explaining the difference between Prefork and Worker, it is necessary to understand how each of them works. So let's see how they work.

  • Prefork MPM

Working operation: A single control process is responsible for launching child processes, which listen for connections and serve them when they arrive. Apache always tries to maintain several spare or idle server processes, which stand ready to serve incoming requests. In this way, clients do not need to wait for new child processes to be forked before their requests can be served.
The number of spare processes can be adjusted through the Apache configuration. A normal server handling up to 256 simultaneous connections can use the default prefork settings.

Prefork is the default MPM shipped with Apache.

# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild directive sets the limit on the number of requests that an individual child server process will handle. After MaxRequestsPerChild requests, the child process will die. If MaxRequestsPerChild is 0, then the process will never expire

As the name suggests, prefork forks the necessary child processes when Apache starts. It is suitable for websites that need to avoid threading for compatibility with non-thread-safe libraries, and it is also known as the best MPM for isolating each request.

  • Worker MPM

Working operation: The Worker MPM is a hybrid multi-process, multi-threaded server. A single control process launches several child processes, and each child process creates a fixed number of server threads along with a listener thread that passes incoming connections to an idle server thread. Because it serves many requests with threads rather than separate processes, it can handle large numbers of simultaneous connections with less memory than prefork, but every module and library it uses must be thread-safe. A sample configuration is shown below.
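
A minimal worker MPM snippet for httpd.conf; the values below mirror common stock defaults and are illustrative only, not tuning advice:

# httpd.conf (worker MPM) -- illustrative values only
<IfModule worker.c>
StartServers         2     # child processes created at startup
MaxClients         150     # maximum number of simultaneous connections
MinSpareThreads     25     # minimum number of idle threads kept ready
MaxSpareThreads     75     # maximum number of idle threads
ThreadsPerChild     25     # constant number of threads in each child process
MaxRequestsPerChild  0     # 0 = child processes never expire
</IfModule>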

Puppet error: stack level too deep / Report processor failed: undefined method `[]' for nil:NilClass

In my case I was getting:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: stack level too deep
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

I had made a mistake in my manifest where a node definition inherited from itself.

e.g.

node apache_template inherits apache_template {
}

This was causing a continuous loop, which produced the 'stack level too deep' error. The fix was to remove that self-reference, or to make the node inherit from the correct parent node. 🙂
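
For example, the corrected definition might inherit from a separate base node (the node names here are just placeholders, not from a real manifest):

node basenode {
}

node apache_template inherits basenode {
}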

mrepo/rhel-2014.09-x86_64/RPMS.all/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"

If you see this error on an Amazon Linux AMI that you have built in AWS while installing a package from your own custom repository, then you are on the right page for the solution. 🙂

The root cause is the AMI image (and Amazon Linux) itself. It does not use Red Hat version numbering like 5, 6, or 7; it uses dates for releases, e.g. 2014.09. It also does not use the official CentOS and Red Hat repositories, but Amazon's internal repositories, which have a different structure and logic.
The failure above is caused by the "latest" symlink in the URL, which points to the latest Amazon release; such a symlink cannot be made to point exclusively at a Percona CentOS 6 repository.

I see two options there to resolve it as of now:

1. Do not let the AMI define $releasever as "latest"; set it manually in percona-release.repo by replacing $releasever with the exact CentOS version (see the example after this list). An example command on CentOS 6 based AMIs: sed -i 's/$releasever/6/g' /etc/yum.repos.d/percona-release.repo
2. Do not use Amazon AMIs in such a case, because they are not exactly the same OS; they are a kind of OS fork made by Amazon, adjusted exclusively for Amazon services, software and infrastructure. Use CentOS AMIs instead.
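
To illustrate option 1, here is roughly what a baseurl line using $releasever looks like before and after the replacement (the URL below is a placeholder, not the exact Percona one):

# before: on Amazon Linux, $releasever expands to "latest" and the path returns 404
baseurl = http://repo.example.com/centos/$releasever/os/$basearch/

# after running the sed command above on a CentOS 6 based AMI
baseurl = http://repo.example.com/centos/6/os/$basearch/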

This worked for me, so try it and see if it solves the issue.

What is the difference between NIC Teaming and Bonding

NIC Teaming and NIC bonding are two different things.

NIC teaming uses one of two methods: failover, or load balancing with failover. With a team you do not get a single 2 Gb connection (from two 1 Gb NICs); you get two pipes that act as one, but they merely load-balance the traffic across the NICs, and each NIC acts as a failover for the other. If you transfer a 100 GB file, you are not going to get 2 Gb of throughput; you still only get 1 Gb, but you will not kill network performance because the second NIC is still available to service other traffic.

True bonding takes two NICs and bonds them together into a single fat pipe. This requires the switch to support it as well (for example via 802.3ad link aggregation). I have not seen much bonding in the server world; it is more often done at the network level. A rough Linux example is sketched below.
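
For reference, a minimal sketch of Linux bonding on a RHEL/CentOS style system, assuming interfaces named eth0 and eth1 and a switch configured for 802.3ad link aggregation (names, addresses and values are illustrative):

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=802.3ad miimon=100"   # LACP aggregation; requires switch support
IPADDR=192.168.1.10
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- slave interface (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes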

VMware acts the same way: it is purely load balancing and failover. Since VMware teaming is done at the OS level, you can mix and match NICs from different vendors in a team; I have done this without issue. Just make sure they are on the HCL.