WordPress blog error: wp-includes/theme.php on line 521

Warning: array_keys() [function.array-keys]: The first argument should be an array in /wp-includes/theme.php on line 521

Warning: uksort() [function.uksort]: The argument should be an array in /wp-admin/includes/class-wp-themes-list-table.php on line 48

Warning: array_slice() expects parameter 1 to be array, boolean given in /wp-admin/includes/class-wp-themes-list-table.php on line 55

Cause: The themes folder name or its permissions have been changed, most likely by mistake (for example, the folder was renamed).
Fix: Correct the folder name and permissions if you renamed the folder accidentally. If they already look fine, download the same WordPress version and replace the wp-content folder, unless you have customized plugins or themes. If you have customizations, you will need to re-apply them.
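
A minimal cleanup sketch, assuming WordPress lives in /var/www/html, the web server runs as the apache user, and the folder had been renamed to a hypothetical wrong name:

cd /var/www/html/wp-content
mv theme themes                             # restore the expected folder name ("theme" is the hypothetical wrong name)
chown -R apache:apache themes               # give ownership back to the web server user
find themes -type d -exec chmod 755 {} \;   # directories readable and traversable
find themes -type f -exec chmod 644 {} \;   # files readable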

Useful tools for techies, especially developers and sysadmins

There are many situations in programming and testing where these tools can help get work done faster and more effectively.

1) Firebug Download
Very interesting tool. Cannot live without it if you really have to do JavaScript and CSS testing. Not only that, it also helps with request tracking and cookie management.

2) FireCookie download
Another interesting Firefox add-on for cookie management. You can change cookies on the fly and add new cookies whenever required. Very useful if your site uses cookies intensively.

3) YSlow Download
Add-on for Firefox, very useful if you have to assess the performance of your site. The recommendations and site score from YSlow are especially useful for improving the overall performance of a site.

4) Web Developer Download
Add-on for Firefox. You can do a ton of things with the Web Developer tool, from debugging JavaScript to changing and testing CSS and HTML on the fly. A must-have tool for HTML developers.

5) HTTP Watch Download
Very useful tool for both IE and Firefox for inspecting HTTP traffic on a site. Very useful for debugging performance issues. You can watch AJAX requests and responses and debug them. You can also use the Net tab in Firebug for the same purpose, but sometimes the Net tab doesn't work; HTTP Watch is more reliable.

6) Fiddler Download
Debugging traffic and web issues in IE is really difficult. Fiddler is one of those tools that makes it easy to watch traffic on a site.

7) Samurai Thread dump analyzer Download
Very useful tool to analyze thread dumps. If your site is having performance issues (100% CPU usage), you can use this tool to analyze all the waiting threads. You can take a thread dump of a Java process with kill -3, as shown below.
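
For example (a sketch; the PID is whatever your Java process is running under):

ps -ef | grep java                 # find the Java process ID
kill -3 <PID>                      # thread dump is written to the JVM's stdout/console log
jstack <PID> > /tmp/threads.txt    # or, if a JDK is installed, write the dump straight to a file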

8) JadEclipse Download JAD Executable Download
Useful tool to decompile class files in Eclipse. After installing JadEclipse, go to Window -> Preferences -> JadEclipse and set "Path to decompiler" to C:\JAD\jad.exe and "Directory for temporary files" to D:\TEMP for JadEclipse to work.

9) JMeter Download
Very useful tool for load testing. Since this tool is free, you can easily run load tests on your site whenever you want. It is also very easy to set up and configure.

10) HTML Parser Download
Another useful free Java API for parsing HTML. The documentation of this API is not great, but with some inspection you will find it very interesting and easy to use.

11) Regular Expression check Link
If you use regular expressions a lot, this web site will help you create and test them. I use this link quite often to test my regexes.

12) Key Notes Download
Well, this is not a tool as such, but it is very useful for keeping your notes.

13) Java Code analyzer tool Download Download for eclipse
A very useful tool to analyze Java code and its performance. There are plug-ins available for many IDEs. The tool also tells you if you have issues in your code (potential null pointer exceptions and so on). Very useful for developing quality code.

14) Message Post tool (Wget) Download
Wget is a very handy message POST tool and can be used to POST XML across applications.
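
For example, a sketch with a hypothetical endpoint and payload file:

wget --post-file=request.xml --header="Content-Type: text/xml" -O response.xml http://example.com/app/endpoint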

15) VisualVM (Java Profiling tool) Download
Very nice and neat free Java profiling tool. For enterprise applications I would even recommend YourKit Download. But for quick, free investigation of memory problems you can use this tool effectively. You need Java 6.0 for it to run.

16) AnyEdit plugin for eclipse Download
If JSP pages contain a lot of white space or tabs, they may take more time to load and require more network bandwidth. AnyEdit is a nice tool to remove unnecessary spaces from the page.

17) Heap Dump Analyzer (MAT) Download
Sometimes your application suffers from memory issues, for example an out of memory error, and you have no idea what is going on. There are many different causes of out of memory errors, but the most common is a memory leak. Eclipse Memory Analyzer (MAT) is a powerful tool to analyze heap dumps and narrow down the problem. Please note that you should have the -XX:+HeapDumpOnOutOfMemoryError parameter set to collect a heap dump. Java 1.6 also comes with a tool called jmap (memory map) to force a heap dump. More information can be found here.
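
A sketch of both approaches (application name and paths are illustrative):

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -jar myapp.jar   # dump automatically on OutOfMemoryError
jmap -dump:format=b,file=/tmp/heap.hprof <PID>                                    # or force a dump from a running JVM (Java 6)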

Delete mails from an Exchange server

First you will need to install fetchmail.
Then create a hidden file named “.fetchmailrc” containing the mail user's details:

poll YOUR_MAIL_SERVER_HERE.com protocol IMAP
user YOUR_USER_ID_HERE with password YOUR_PASSWORD_HERE
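
Also make sure the file is readable only by you; fetchmail will complain about an rc file with loose permissions:

chmod 600 ~/.fetchmailrc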

Then fire the following to fetch and flush the mail from your Exchange server:

# /usr/local/bin/fetchmail -a -K -v -F --limitflush --limit 5

Linux high IO load: what to check when troubleshooting?

When you look at the CPU activity of your computer, one of the parameters is iowait. This value shows how much time your CPU wastes waiting for I/O operations to complete. These include disk read/write operations, network, IPC, etc. Is this behavior a problem and, if so, what causes it and how do you fix it? On one of the popular Unix-related forums, one “genius” wrote:

The iowait “problem” is funny. It’s like when people complain that Linux is “using all my memory”. Yeah, no shit. You should be upset if you are copying files and your computer is /not/ in 100% iowait.

In reality, 100% iowait indicates that there is a problem and in most cases – a big problem that may even lead to data loss. Essentially, there is a bottleneck somewhere in the system. Maybe one of your disks is getting ready to die; or, perhaps, the NIC firmware is having problems with the latest kernel upgrade you installed. The troubleshooting process starts with the potentially more serious possibility: bad disk.

Take a quick look at /var/log/messages, the output of dmesg, /var/log/boot.log and any other system log files. You are looking for disk I/O errors, failed read/write operations, bad sectors, or anything else that indicates a hardware problem with a disk. If you don't find anything, look for IRQ and disk controller errors. Also look for memory errors and kernel panics. The three most likely culprits of high iowait are: a bad disk, faulty memory and network problems.
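
A quick way to scan for these messages (a sketch; exact log paths and message wording vary by distribution):

grep -iE 'i/o error|bad sector|end_request|ata[0-9]' /var/log/messages
dmesg | grep -iE 'error|fail|panic'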

If you still see nothing relevant, it is time to test your system. If possible, kick all the users off the box and shut down the web server, database and any other user applications. Log in via the command line and stop XDM.

Open three shell windows: run “top” in one, “iostat -x 1” in another and “find /etc -type f -print” in the third. Make sure you can see all three windows at the same time. This is a simple test that should generate some I/O activity on the system disk. Repeat this process for other disks. If you see iowait hovering near 100%, chances are you have a problem, but we don't know what it is yet. However, we now do know that the network is probably not the cause.

deathstar:/ # iostat -x 1
Linux 2.6.5-7.201-default (deathstar) 12/20/08

avg-cpu: %user %nice %sys %iowait %idle
2.83 0.42 1.45 9.11 86.20

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
hda 40.63 66.34 27.45 6.04 936.50 581.23 468.25 290.61 45.32 2.42 72.16 2.22 7.42
hdc 0.01 0.00 0.01 0.00 0.03 0.00 0.02 0.00 4.02 0.00 1.17 1.17 0.00
sda 0.09 2.32 4.15 1.33 71.56 29.23 35.78 14.62 18.37 0.65 118.49 6.39 3.51
sdb 3.47 0.00 1.90 0.00 15.32 0.01 7.66 0.01 8.08 0.74 391.31 5.68 1.08
fd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00 0.00 45.00 45.00 0.00

deathstar:/ # top
top – 21:28:28 up 1:22, 2 users, load average: 0.09, 0.14, 0.16
Tasks: 77 total, 1 running, 76 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.8% us, 1.3% sy, 0.4% ni, 86.2% id, 9.1% wa, 0.1% hi, 0.0% si
Mem: 508644k total, 503612k used, 5032k free, 34052k buffers
Swap: 1020088k total, 458980k used, 561108k free, 16012k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 16 0 640 56 28 S 0.0 0.0 0:05.14 init
2 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
3 root 5 -10 0 0 0 S 0.0 0.0 0:00.09 events/0
4 root 5 -10 0 0 0 S 0.0 0.0 0:00.00 khelper

Next step: let's stress the CPU but not the disks. The command below bzip2-compresses an endless stream of zeros and throws the output away, so it generates no disk activity but loads the CPU. Continue running “top” and “iostat -x 1” in the other two windows.

cat /dev/zero | bzip2 -c > /dev/null

If you see high CPU load but low iowait, we can eliminate CPU issues, IRQ conflicts, and faulty memory. Just to be on the safe side, let’s test memory anyway:

deathstar:/ # free
total used free shared buffers cached
Mem: 508644 503504 5140 0 37036 48968
-/+ buffers/cache: 417500 91144
Swap: 1020088 516196 503892

This server has 508644Kb of RAM. Use the corresponding value for the following test:

deathstar:/ # dd if=/dev/hda2 bs=508644 of=/backups/memtest count=1050
1050+0 records in
1050+0 records out

deathstar:/ # md5sum /backups/memtest ; md5sum /backups/memtest ; md5sum /backups/memtest
04762ff36b2231aac75754ab9c1a564a /backups/memtest
04762ff36b2231aac75754ab9c1a564a /backups/memtest
04762ff36b2231aac75754ab9c1a564a /backups/memtest

The three MD5 values above should be identical. If they are not – your system has a faulty RAM chip.

When you have eliminated hardware problems as possible causes of high iowait, the next step is to review firmware and drivers. You are particularly interested in disk controller firmware: unstable performance with no error messages is a sign of a firmware problem. Try hard to remember whether you made any system changes recently, especially something that required a reboot, such as a kernel upgrade. If so, roll back the upgrade or look for updated firmware. You can also grab a copy of Sysinfo (free 30-day trial) to help you identify the makes and models of your disks, controllers, etc.
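
If you prefer plain command-line tools, something like this will also identify the hardware (assuming pciutils, smartmontools and dmidecode are installed):

lspci | grep -iE 'ide|sata|scsi|raid'    # list disk controllers
smartctl -i /dev/sda                     # disk vendor, model and firmware revision
dmidecode -t memory | less               # installed memory modules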

While your disks and controllers may be tip-top, you may have a problem with a filesystem. Even if you see high iowait when accessing any filesystem, you should still check out the partition where /var is mounted, as well as swap: if there is a problem there, it will manifest itself regardless of what your system is doing. But here you will run into a little problem: fsck will not scan a mounted partition and you cannot unmount /var. Let's say these are your partitions:

deathstar:/ # more /etc/fstab
/dev/hda2 / reiserfs acl,user_xattr 1 1
/dev/hda1 swap swap pri=42 0 0

You need to fsck /dev/hda2 because this is where your /var is mounted. Download a KNOPPIX or Ubuntu LiveCD, boot from the CD (without installing) and run “fsck /dev/hda2” from there. If everything looks clean, shut down your system, take the CD out and boot normally. The next step is to check out swap. If you just run fsck on the swap partition, it will fail:

deathstar:/ # fsck /dev/hda1
fsck 1.34 (25-Jul-2003)
fsck: fsck.swap: not found
fsck: Error 2 while executing fsck.swap for /dev/hda1

You need to disable swap on /dev/hda1 before you can scan it. Before you can do this, you need to add another swap area: you cannot run without any swap space. So, to add swap on the fly, create a swap file (1Gb in this example):

deathstar:/ # dd if=/dev/zero of=/swapfile bs=1024 count=1048576
1048576+0 records in
1048576+0 records out

deathstar:/ # chmod 600 /swapfile

deathstar:/ # ls -lash /swapfile
1.1G -rw------- 1 root root 1.0G Dec 20 22:48 /swapfile

Now you can set up and activate the new swap file:

deathstar:/ # mkswap /swapfile
Setting up swapspace version 1, size = 1073737 kB
deathstar:/ # free
total used free shared buffers cached
Mem: 508644 500996 7648 0 38912 147332
-/+ buffers/cache: 314752 193892
Swap: 1020088 521784 498304
deathstar:/ # swapon /swapfile
deathstar:/ # free
total used free shared buffers cached
Mem: 508644 502232 6412 0 39400 147392
-/+ buffers/cache: 315440 193204
Swap: 2068656 521784 1546872

Now we need to deactivate the original swap partition. This operation may take a couple minutes to complete:

deathstar:/ # swapoff /dev/hda1
deathstar:/ # free
total used free shared buffers cached
Mem: 508644 501624 7020 0 31712 10416
-/+ buffers/cache: 459496 49148
Swap: 1048568 167032 881536

The next step is to create a standard filesystem on the old swap partition, so that fsck has something to scan:

deathstar:/ # mke2fs -c /dev/hda1
mke2fs 1.34 (25-Jul-2003)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
127744 inodes, 255024 blocks
12751 blocks (5.00%) reserved for the super user
First data block=0
8 block groups
32768 blocks per group, 32768 fragments per group
15968 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Checking for bad blocks (read-only test): done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

The previous operation already scanned the partition for bad blocks, so if you saw no errors you can now re-activate your original swap space and remove the temporary swap file you created:

deathstar:/ # mkswap /dev/hda1
Setting up swapspace version 1, size = 1044574 kB
deathstar:/ # swapon /dev/hda1
deathstar:/ # swapoff /swapfile
deathstar:/ # rm /swapfile
deathstar:/ # free
total used free shared buffers cached
Mem: 508644 503172 5472 0 33668 9256
-/+ buffers/cache: 460248 48396
Swap: 1020088 156300 863788

Another command commonly used for analyzing system bottlenecks is vmstat. The following example runs vmstat five times at 2-second intervals:

deathstar:~ # vmstat -S M 2 5
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 15 174 70 58 0 0 189 50 5 6 1 3 94 1
0 0 15 174 70 58 0 0 0 0 1005 35 4 0 96 0
0 1 15 174 70 58 0 0 0 258 1515 45 0 6 88 7
0 0 15 173 71 58 0 0 0 194 1083 24 0 1 83 16
0 0 15 173 71 58 0 0 0 0 1003 19 0 0 100 0

Explanation of vmstat columns:

(a) procs: the process-related fields are:

* r: The number of processes waiting for run time.
* b: The number of processes in uninterruptible sleep.

(b) memory: the memory-related fields are:

* swpd: the amount of virtual memory used.
* free: the amount of idle memory.
* buff: the amount of memory used as buffers.
* cache: the amount of memory used as cache.

(c) swap: the swap-related fields are:

* si: Amount of memory swapped in from disk (/s).
* so: Amount of memory swapped to disk (/s).

(d) io: the I/O-related fields are:

* bi: Blocks received from a block device (blocks/s).
* bo: Blocks sent to a block device (blocks/s).

(e) system: the system-related fields are:

* in: The number of interrupts per second, including the clock.
* cs: The number of context switches per second.

(f) cpu: the CPU-related fields are:

These are percentages of total CPU time.

* us: Time spent running non-kernel code. (user time, including nice time)
* sy: Time spent running kernel code. (system time)
* id: Time spent idle. Prior to Linux 2.5.41, this includes IO-wait time.
* wa: Time spent waiting for IO. Prior to Linux 2.5.41, shown as zero.

If you failed to identify the cause of the iowait problem, you should consider the possibility that there is no problem: perhaps your system is handling extra load and running short on resources. Take a look at the running processes and see what’s eating up memory. Perhaps you upgraded an application and now it is using more RAM, which leads to high swapping, which leads to high disk activity, which leads to high iowait.
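
For example, a quick look at the biggest memory consumers (a sketch using standard procps tools):

ps aux --sort=-%mem | head -n 15    # processes sorted by resident memory usage
free -m                             # overall memory and swap usage in megabytes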

The solutions are simple:

1. Install more RAM
2. Move swap to another disk or – even better – move it to another disk on a separate controller.
3. Move user applications to another disk/controller and specify default log locations outside of the system disk.

– Jayesh

../../../libraries/libldap/error.c:273: ldap_parse_result: Assertion `r != ((void *)0)’ failed

If you are getting the error mentioned below while doing some operation in your Linux server's bash shell:
../../../libraries/libldap/error.c:273: ldap_parse_result: Assertion `r != ((void *)0)’ failed

Then it is due to the nss_ldap software running on your server. One cause I found and fixed: the nscd service was down on my server, and restarting it fixed the issue.

The errors I saw in the logs were:

/var/log/messages:

Oct 28 03:01:27 HOSTNAME nscd: nss_ldap: reconnected to LDAP server ldap://domain.com/ after 1 attempt
Nov 10 02:49:58 HOSTNAME nscd: nss_ldap: reconnecting to LDAP server (sleeping 4 seconds)…
Nov 10 02:50:14 HOSTNAME nscd: nss_ldap: reconnected to LDAP server ldap://domain.com/ after 2 attempts
Jan 18 07:45:09 HOSTNAME kernel: nscd[5114]: segfault at 00002b1c735dee78 rip 00002b1b6d4fe885 rsp 000000004185c6d0 error 4

Fix :
[root@HOSTNAME webdocs]# /etc/init.d/nscd status
nscd dead but subsys locked
You have new mail in /var/spool/mail/root
[root@HOSTNAME webdocs]# /etc/init.d/nscd restart
Stopping nscd: [FAILED]
Starting nscd: [ OK ]
[root@HOSTNAME webdocs]# /etc/init.d/nscd status
nscd (pid 30292) is running…
You have new mail in /var/spool/mail/root
[root@HOSTNAME webdocs]#

fetchmail: client/server synchronization error Query status=7 (ERROR)

IMAP commands to test DELETE of message
One of the more common commands that seems to fail is the DELETE command from the device. This can sometimes be caused by the user not having a Trash folder, or the Trash folder not being at the top level of the message store. It sometimes helps to telnet into the user’s account and perform the same IMAP commands that the NotifyLink server is performing, in order to see where we may be failing. To do so:
1. Telnet into the mail server over port 143 and log in to the user's mailbox. To log in, the IMAP command is: a login username password, where you replace username and password with the user's actual email username and password.
2. Select the folder where the original message to be deleted is located. For example, if it is in the INBOX folder, then type: a select INBOX
3. We now need to know the UID of the message you want to delete. You can either pull this from the MessageID field of the NotificationCheckpoint table based on the NotifyLink MID that we are trying to delete, or, if you don't know the UID, you can use the FETCH command like: a FETCH 1 (UID), where 1 is the index of the message in the folder. So in this example you are trying to get the UID of the first message in the folder, where the first message is the OLDEST message in the folder.
4. Using the UID from step 3, now copy the message to the trash folder (suppose its UID is 123): a uid copy 123 trash
5. Now mark the message as deleted: a uid store 123 +flags.silent (\Seen \Deleted)
6. Finally, expunge the inbox by calling: a expunge
Determining where in the process that we fail may help in determining the root cause of why the DELETE command is failing.

# telnet server 143
And follow the instructions above.
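
Putting the steps together, a sample session looks roughly like this (host, credentials and the UID 123 are placeholders):

telnet mail.example.com 143
a login jdoe secretpassword
a select INBOX
a FETCH 1 (UID)
a uid copy 123 trash
a uid store 123 +flags.silent (\Seen \Deleted)
a expunge
a logout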

– Jayesh

strings: ‘/lib/libc.so.6’: No such file centos

If you are getting the above error while installing the SiteMinder agent, it is because glibc is not installed on CentOS, since the OS was installed with the “minimal install” option.

# ./nete-wa-6qmr5-cr035-rhas30-x86-64.bin -i console
Preparing to install…
Extracting the JRE from the installer archive…
Unpacking the JRE…
Extracting the installation resources from the installer archive…
Configuring the installer for this system’s environment…
strings: ‘/lib/libc.so.6’: No such file

Launching installer…

./nete-wa-6qmr5-cr035-rhas30-x86-64.bin: /tmp/install.dir.18984/Linux/resource/jre/bin/java: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
./nete-wa-6qmr5-cr035-rhas30-x86-64.bin: line 2479: /tmp/install.dir.18984/Linux/resource/jre/bin/java: Success

Fix: install yum if necessary and then run “yum install glibc”. On a 64-bit system you may also need the 32-bit package (yum install glibc.i686), since the installer's bundled JRE is a 32-bit binary.

– Cheers

Making Postfix (or another MTA) the default MTA (mail server)

The alternatives program is a way to change the default mail server so that it will be Postfix. This is only necessary on CentOS, not SUSE or Ubuntu, as both of those use Postfix as the default.
Alternatives is a program that lets you inspect and change the mail program (MTA) options. To view the current links for the mta program, use the following command:
alternatives --display mta
mta – status is manual.
link currently points to /usr/sbin/sendmail.postfix
/usr/sbin/sendmail.sendmail – priority 90
slave mta-pam: /etc/pam.d/smtp.sendmail
slave mta-mailq: /usr/bin/mailq.sendmail
slave mta-newaliases: /usr/bin/newaliases.sendmail
slave mta-rmail: /usr/bin/rmail.sendmail
slave mta-sendmail: /usr/lib/sendmail.sendmail
slave mta-mailqman: /usr/share/man/man1/mailq.sendmail.1.gz
slave mta-newaliasesman: /usr/share/man/man1/newaliases.sendmail.1.gz
slave mta-aliasesman: /usr/share/man/man5/aliases.sendmail.5.gz
slave mta-sendmailman: /usr/share/man/man8/sendmail.sendmail.8.gz
/usr/sbin/sendmail.postfix – priority 30
slave mta-pam: /etc/pam.d/smtp.postfix
slave mta-mailq: /usr/bin/mailq.postfix
slave mta-newaliases: /usr/bin/newaliases.postfix
slave mta-rmail: /usr/bin/rmail.postfix
slave mta-sendmail: /usr/lib/sendmail.postfix
slave mta-mailqman: /usr/share/man/man1/mailq.postfix.1.gz
slave mta-newaliasesman: /usr/share/man/man1/newaliases.postfix.1.gz
slave mta-aliasesman: /usr/share/man/man5/aliases.postfix.5.gz
slave mta-sendmailman: /usr/share/man/man1/sendmail.postfix.1.gz
Current `best’ version is /usr/sbin/sendmail.sendmail.
If you want to change from the Sendmail MTA to the Postfix MTA, use this command:
alternatives --set mta /usr/sbin/sendmail.postfix
You should not see any output.
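
To verify that the change took effect, you can check where the mta link now points (a quick sketch):

alternatives --display mta | head -n 2
ls -l /etc/alternatives/mta
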
To select an alternative from those MTAs available, use this command:
alternatives --config mta
You will see this output, which will allow you to choose an MTA using a number.
alternatives --config mta
There are 2 programs which provide ‘mta’.
Selection Command
-----------------------------------------------
* 1 /usr/sbin/sendmail.sendmail
+ 2 /usr/sbin/sendmail.postfix
Enter to keep the current selection[+], or type selection number:

Setup global load balancing for your site using open source Nginx

Nginx, pronounced “engine-x”, is a high-performance HTTP server and reverse proxy with proxying capabilities for IMAP/POP3/SMTP. Nginx is the creation of Russian developer Igor Sysoev and has been running in production for over two years. The latest stable release at the time of writing is Nginx 0.5.30, and it is the focus of this article. While Nginx is capable of proxying non-HTTP protocols, we are going to focus on HTTP and HTTPS.

High Performance, Yet Lightweight

Nginx uses a master process plus N worker processes. The number of workers is controlled by the configuration, yet the memory footprint and resources used by Nginx are several orders of magnitude less than Apache. Nginx uses epoll() on Linux. In our lab, Nginx was handling hundreds of requests per second while using about 16MB of RAM and a consistent load average of about 1.00. This is considerably better than Apache 2.2, and Pound doesn't scale well under this type of usage (high memory usage, lots of threads). In general, Nginx offers a very cost-effective solution.

Lighttpd

Lighttpd is a great lightweight option, but it has a couple of drawbacks. Nginx has very good reverse proxy capabilities with integrated basic load balancing, which makes it a very good option as a front end to dynamic web applications, such as those running under Rails and using Mongrel. Lighttpd, on the other hand, has an old and unmaintained proxy module. It does gain a new proxy module with Lighttpd 1.5.x, but that points to the other problem with Lighttpd: where it is going. Lighttpd 1.4 is lightweight, relies on very few external libraries and is fast. Lighttpd 1.5.x, on the other hand, requires many more external libraries, including glib, and anything that pulls in glib is far from “lightweight”.

Basic Configuration

The basic configuration of Nginx specifies the unprivileged user to run as, the number of worker processes, the error log, the pid file and the events block. After this basic configuration block, you have per-protocol blocks (http, for example).

user nobody;
worker_processes 4;
error_log logs/error.log;
pid logs/nginx.pid;
events {
    worker_connections 1024;
}

Basic HTTP server

Nginx is relatively easy to configure as a basic web server. It supports IP-based and name-based virtual hosts, and it uses a PCRE-based URI processing system. Configuring static hosting is very easy; you just specify a new server block:

server {
    listen 10.10.10.100:80;
    server_name www.foocorp.com foocorp.com;
    access_log logs/foocorp.com.log main;
    location / {
        index index.html index.htm;
        root /var/www/static/foocorp.com/htdocs;
    }
}

Here we are listening on port 80 on 10.10.10.100, with name-based virtual hosting for www.foocorp.com and foocorp.com. The server_name option also supports wildcards, so you can specify *.foocorp.com and have it handled by this configuration. The usual access log is configured, and root points at the htdocs directory. If you have a large number of name-based virtual hosts, you'll need to increase the size of the hash bucket with server_names_hash_bucket_size 128;

Gzip compression

Nginx, like many other web servers, can compress content using gzip.

gzip on;
gzip_min_length 1100;
gzip_buffers 4 8k;
gzip_types text/plain text/html text/css text/js;

Here Nginx allows you to enable gzip and specify a minimum length to compress, the buffers, and the MIME types that Nginx will compress. Gzip compression is supported by all modern browsers.

HTTP Load Balancing

Nginx can be used as a simple HTTP load balancer. In this configuration you place Nginx in front of your existing web servers, which can themselves be running Nginx. In HTTP load balancer mode, you simply add an upstream block to the configuration:

upstream a.serverpool.foocorp.com {
    server 10.80.10.10:80;
    server 10.80.10.20:80;
    server 10.80.10.30:80;
}
upstream b.serverpool.foocorp.com {
    server 10.80.20.10:80;
    server 10.80.20.20:80;
    server 10.80.20.30:80;
}

Then in the server block, you add the line:

proxy_pass http://a.serverpool.foocorp.com;

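Putting the upstream and proxy_pass pieces together, a minimal front-end server block might look like this sketch (the addresses and names are the examples from above):

http {
    upstream a.serverpool.foocorp.com {
        server 10.80.10.10:80;
        server 10.80.10.20:80;
        server 10.80.10.30:80;
    }

    server {
        listen 10.10.10.100:80;
        server_name www.foocorp.com foocorp.com;
        location / {
            proxy_pass http://a.serverpool.foocorp.com;
        }
    }
}
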
Health Check Limitations

Nginx has only simple load balancing capabilities. It doesn't have health checking and it uses a simple load balancing algorithm. However, Nginx is a relatively new project, so one would expect to see various load balancing algorithms and health checking support added over time. While it might not be wise to replace your commercial load balancer with Nginx anytime soon, Nginx is almost there in terms of being a very competitive solution. Monit and other monitoring applications offer good options to compensate for the lack of health checking capabilities in Nginx.

Global Server Load Balancing

Nginx has a very interesting capability: with a little configuration it can provide Global Server Load Balancing (GSLB). GSLB is a feature you'll find on high-end load balancing switches such as those from F5, Radware, Nortel, Cisco, etc. Typically GSLB is an additional license you have to purchase for a few thousand dollars, on top of a switch that typically starts around US$10,000.

GSLB works by having multiple sites distributed around the world, so you might have a site in Europe, a site in Asia and a site in North America. Normally, you would direct traffic by region by using different top-level domains (TLDs): www.foocorp.com might go to North America, www.foocorp.co.uk to Europe and www.foocorp.com.cn to the server in Asia. This isn't a very effective solution because it relies on the user visiting the proper domain. A user in Asia might see a print advertisement for the North American market, and hitting the .com address means they aren't visiting the closest and fastest server.

GSLB works by looking at the source IP address of the request and then determining which site is closest to that source address. The simplest method is to break the Internet address space down by region and then route traffic to the local site in that region. When we say region, we mean North America, South America, EMEA (Europe, Middle East and Africa) and APAC (Asia-Pacific).

Configuring Nginx for GSLB

The geo {} block is used to configure GSLB in Nginx. The geo block causes Nginx to look at the source IP and set a variable based on the configuration. The nice thing with Nginx is that you can set a default.

geo $gslb {
    default na;
    include conf/gslb.conf;
}

Here in our configuration, we're setting the default to na (North America) and then including gslb.conf. The configuration file gslb.conf is a basic file consisting of subnet-to-variable mappings. Here is an excerpt from gslb.conf:

32.0.0.0/8 emea;
41.0.0.0/8 emea;
43.0.0.0/8 apac;

When Nginx receives a request from a source IP in 32.0.0.0/8 (for those of you unfamiliar with slash notation, this is the entire Class A, 32.0.0.0 through 32.255.255.255), it sets the variable $gslb to emea. We then use that later in the configuration to redirect.

Inside the location block of our server configuration in Nginx, we add a number of if statements before the proxy_pass statement (if used). These instruct the server to do an HTTP 302 redirect (a temporary redirect).

if ($gslb = emea) {
    rewrite ^(.*) http://europe.foocorp.com$1 redirect;
}
if ($gslb = apac) {
    rewrite ^(.*) http://asia.foocorp.com$1 redirect;
}

These are configured under the www.foocorp.com named virtual server. If someone from North America hits www.foocorp.com, the request hits the default and simply loads from the same server. If the user is from Europe, the request should match one of the subnets listed in gslb.conf, which sets the $gslb variable to emea. The North American site hosting the .com domain then redirects the client to the server(s) at the site in Europe.

On the European server, the configuration is slightly different. Instead of the emea check, you check for na and redirect to the US site. This handles the situation where someone in North America hits the .eu or .co.uk site.

if ($gslb = na) {
    rewrite ^(.*) http://www.foocorp.com$1 redirect;
}

Traffic Control: In-region not always faster

The problem with commercial solutions is that they are too generalized, and in our example configuration so far we make some pretty wild assumptions. The problem with the Internet is that a user in Asia might not, for example, have a faster connection to servers in Asia. A good example of this is India and Pakistan. A server hosted in Hong Kong or Singapore is in Asia and would be considered “in region” for customers in India and Pakistan. The reality, though, is that traffic from those countries to Hong Kong is actually routed through Europe, so packets from India to Hong Kong go from India through Europe, across the United States and hit Hong Kong from the Pacific. Meanwhile, customers in Australia in the same region are only a few hops away from Hong Kong.

In such a situation, with commercial solutions you are just out of luck, but with Nginx you can fine-tune how traffic is directed. Here we know 120.0.0.0/6 is mainly APAC, but 122.162.0.0/16 and 122.163.0.0/16 have faster connections to Europe, so we simply add these subnets to the configuration. Nginx will use the closest match to the source IP: 122.162.0.0/16 is finer grained than 120.0.0.0/6, so Nginx will use it.

Manual Tuning

The initial tuning can be done using the whois command; for example, whois 120.0.0.0 will give you an idea which registry a range belongs to. ARIN, RIPE, APNIC, AFRINIC and LACNIC are Regional Internet Registries (RIRs). An RIR is an organization overseeing the allocation and registration of Internet number resources within a particular region of the world; IP addresses, both IPv4 and IPv6, are managed by these RIRs. However, as in our previous example, you're going to need to fine-tune the gslb configuration with traceroute and ping information. Probably the best approach is to start with a general configuration and then fine-tune it based on feedback from customers.

Cost Savings vs. Features

Looking at a well-known Layer 4-7 switching solution, you would need a minimum of $15k per site to purchase the necessary equipment and licensing. Commercial solutions do have some additional fault-tolerance measures, such as the ability to measure the load and availability of servers at remote sites. However, with Nginx offering a very close solution that is available for free with source code, it is only a matter of time before such features are part of Nginx or available through other projects.

gslb.conf

The following is an initial example of gslb.conf; it should be sufficient for most users.

25.0.0.0/8 uk;
32.0.0.0/8 emea;
41.0.0.0/8 emea;
43.0.0.0/8 apac;
51.0.0.0/8 uk;
53.0.0.0/8 emea;
57.0.0.0/8 emea;
58.0.0.0/8 apac;
59.0.0.0/8 apac;
60.0.0.0/8 apac;
61.0.0.0/8 apac;
62.0.0.0/8 emea;
77.0.0.0/8 emea;
78.0.0.0/7 emea;
80.0.0.0/5 emea;
88.0.0.0/6 emea;
90.192.0.0/11 uk;
91.104.0.0/13 uk;
91.125.0.0/16 uk;
92.0.0.0/8 emea;
93.0.0.0/8 emea;
116.0.0.0/6 apac;
120.0.0.0/6 apac;
122.162.0.0/16 uk;
122.163.0.0/16 uk;
124.0.0.0/7 apac;
126.0.0.0/8 apac;
129.0.0.0/8 emea;
130.0.0.0/8 emea;
131.0.0.0/8 emea;
133.0.0.0/8 apac;
134.0.0.0/8 emea;
139.0.0.0/8 emea;
141.0.0.0/8 emea;
145.0.0.0/8 emea;
150.0.0.0/8 apac;
151.0.0.0/8 emea;
157.0.0.0/8 apac;
162.0.0.0/8 emea;
163.0.0.0/8 emea;
164.0.0.0/8 emea;
171.0.0.0/8 emea;
188.0.0.0/8 emea;
193.0.0.0/8 emea;
194.0.0.0/8 emea;
195.0.0.0/8 emea;
196.0.0.0/8 emea;
202.0.0.0/7 apac;
210.0.0.0/7 apac;
212.0.0.0/7 emea;
217.0.0.0/8 emea;
218.0.0.0/6 apac;
219.0.0.0/8 apac;
220.0.0.0/7 apac;
222.0.0.0/8 apac;