Postfix regexp tables are memory hogs

Posted on August 31st, 2007 in General,Linux System Administration by Brandon

I spent a good part of the day today troubleshooting memory problems on some Postfix mail servers. Each smtpd process was using over 11 MB of RAM, which seems really high. Each concurrent SMTP session has its own smtpd process, and with over 150 concurrent connections, that was using well over 1.5 GB of RAM.

[root@mail ~]# ps aux|grep -i smtpd |head -n1
postfix   3978  0.0  0.5 16096 11208 ?       S    12:29   0:00 smtpd -n smtp -t inet -u

After some trial and error of temporarily disabling things in the main.cf file, I narrowed the memory usage down to a regexp table used in a transport map:

transport_maps = regexp:/etc/postfix/transport.regexp

The transport.regexp file had about 1400 lines in it to match various possible address variations for a stupid mailing list application. Each mailing list has 21 different possible commands (addresses). By combining those 21 different commands into a single regex, I was able to cut those 1400 lines down to about 70. Now the smtpd processes use just under 5 MB each:

[root@mail ~]# ps aux|grep -i smtpd |head -n1
postfix   7634  0.0  0.2  9916 4996 ?        S    13:31   0:00 smtpd -n smtp -t inet -u

So, by my math, saving about 6,000 kb of memory by removing 1,300 lines from the regexp file means that each regexp used about 4.5 kb of memory. Overall, with 150+ simultaneous smtpd processes, that saved several hundred megabytes of memory on each mail server.
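For illustration, the consolidation looked something like this (the list name, domain, and transport name are made up; the real file had 21 commands per list):

```
# before: one transport entry per list command (21 of these per list)
/^mylist-subscribe@lists\.example\.com$/      mylist-transport:
/^mylist-unsubscribe@lists\.example\.com$/    mylist-transport:
/^mylist-help@lists\.example\.com$/           mylist-transport:

# after: one entry covering every command for the list
/^mylist-(subscribe|unsubscribe|help)@lists\.example\.com$/  mylist-transport:
```

Postfix regexp tables support extended-regex alternation, so the grouped form matches exactly the same addresses while costing only one compiled pattern per list.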

Manually testing postgrey through a telnet session

Posted on August 31st, 2007 in General by Brandon

I’m working on implementing some new, custom features in Postgrey, and needed to test it manually via telnet instead of sending an email every time that I wanted to try it out. Evidently Postfix has a custom protocol for communicating via its check_policy_service restriction (and probably others). By running tcpdump, I was able to capture this exchange, which makes it simple to test postgrey, and presumably other similar Postfix-compatible policy servers.

[root@mail1 tmp]# telnet postgrey 10023
Trying 10.20.30.40 ...
Connected to postgrey.mydomain.tld (10.20.30.40).
Escape character is '^]'.
request=smtpd_access_policy
protocol_state=RCPT
protocol_name=ESMTP
client_address=201.1.2.3
client_name=imaspammer.brasiltelecom.net.br
helo_name=imaspammerl.brasiltelecom.net.br
sender=bogus@user.com
recipient=poorfoool@somedomain.com
queue_id=
instance=66cf.46d5964c.0
size=0
sasl_method=
sasl_username=
sasl_sender=
ccert_subject=
ccert_issuer=
ccert_fingerprint=

action=DEFER_IF_PERMIT Temporary Failure - Recipient address rejected - \
   Try back in 180 seconds: See http://www.webpipe.net/failedmail.php?domain=somedomain.com

^]
telnet> quit
Connection closed.

Just telnet to the machine on the port it’s listening on (you have to be running postgrey with the inet option, not unix sockets). Then copy and paste everything from the ‘request=’ line through the first blank line, hit enter, and postgrey should reply with an appropriate response.
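If you’d rather not copy/paste interactively, the same request can be scripted. The attribute values and hostname below are placeholders, not real hosts:

```shell
# Build a minimal policy request: attribute=value pairs, one per line,
# terminated by a blank line. All values here are placeholder examples.
request='request=smtpd_access_policy
protocol_state=RCPT
protocol_name=ESMTP
client_address=192.0.2.1
client_name=host.example.com
sender=someone@example.com
recipient=list@example.org'

printf '%s\n\n' "$request"

# To actually query postgrey, pipe the same request in with netcat, e.g.:
#   printf '%s\n\n' "$request" | nc postgrey.mydomain.tld 10023
```

The trailing blank line is what tells the policy server the request is complete, so the `\n\n` at the end matters.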

Process Memory Usage using smaps

Posted on August 31st, 2007 in General by Brandon

I’m digging into a mail server and trying to figure out the actual memory usage of some processes. ‘ps’ gives only a little information, which may or may not be useful:

[root@mail tmp]# ps aux|grep someprocess
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root     14368  0.0  0.0   3884   668 pts/0    R+   11:26   0:00 someprocess

From that you can see the VSZ and RSS. I’m not totally sure exactly what those include, but I’ve learned enough to know that they may include the size of shared libraries and other memory that isn’t unique to this process. Upon trying to find out a little more, I came across this post, which explains it well and includes a simple Perl script that breaks the numbers down even further:

http://bmaurer.blogspot.com/2006/03/memory-usage-with-smaps.html

When running that smem.pl script with a PID, it produces something like this:

[root@mail1 tmp]# ./smem.pl 14291
VMSIZE:       9840 kb
RSS:          3692 kb total
              1892 kb shared
               816 kb private clean
               984 kb private dirty

From that, you can see how much of the memory usage is ‘Private’ and therefore unique to this process. This is the best way that I have found to view actual memory usage that is unique to a given process.
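Under the hood, that script totals the per-mapping counters in /proc/PID/smaps, which you can also do directly with awk. A minimal sketch, run here against the current shell’s own PID:

```shell
# Sum the shared/private counters from /proc/PID/smaps (values are in kB).
pid=$$
awk '/^Shared_(Clean|Dirty):/ { shared += $2 }
     /^Private_Clean:/        { pclean += $2 }
     /^Private_Dirty:/        { pdirty += $2 }
     END { printf "%d kb shared\n%d kb private clean\n%d kb private dirty\n",
                  shared, pclean, pdirty }' "/proc/$pid/smaps"
```

The private dirty figure is the one that matters most here: it’s memory that belongs to this process alone and can’t be shared back.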

Crude file recovery on an ext3 partition

Posted on August 30th, 2007 in General by Brandon

I was working on a project for the past couple of days and was just about to enable it permanently. Before that, though, I ran a ‘yum update’. I wasn’t paying attention to what was updated, and the program that I was working on got updated in the process. My modified version of the script was wiped out.

Not willing to throw away a couple days’ worth of modifications, I was desperate to recover my changes. Fortunately the script was still running, so I knew that it hadn’t really been removed from the disk yet. Since the file was still open, the file system had just marked it as deleted without actually freeing the data. An ‘lsof’ showed that it was still there but deleted. It gave me an inode number, but I couldn’t find any way to use that.
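The behavior the recovery relied on is easy to demonstrate: an unlinked file stays readable through any descriptor that is still open. A minimal sketch:

```shell
# Demonstrate that an open file survives deletion: write a file, keep a
# descriptor open on it, unlink it, then read it back through the fd.
tmp=$(mktemp)
echo "important data" > "$tmp"
exec 3< "$tmp"      # keep a read descriptor open on the file
rm "$tmp"           # unlink: the directory entry is gone...
cat <&3             # ...but the data is still readable via the open fd
exec 3<&-           # closing the last fd is what actually frees the blocks
```

This is the same mechanism that kept the running script’s contents on disk; the data is only freed once the last open descriptor is closed.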

Instead, I came up with a pretty crude way to find my script:

cat /dev/sda1 | strings | grep -A 10000 -B 10000 "some_string_unique_to_my_script" > /tmp/somefile

This cats out the raw content of the device, extracts printable strings from it, greps for your unique string, and saves 10,000 lines before and after each match into /tmp/somefile. I was then able to look through /tmp/somefile and find my script in there. It is not in a format that you can just copy/paste out, but all the significant pieces were there, and I was able to recover everything that I needed without rewriting it.

Simple console-based bandwidth monitoring utilities

Posted on August 23rd, 2007 in General by Brandon

A co-worker today introduced me to ‘iftop’, which is an incredibly handy utility for monitoring current bandwidth utilization on an interface.  It’s sort of like a simple, command-line version of ntop. It is available in most Debian repositories, and in the RPMForge repository if you use RHEL/CentOS.  More information is on its homepage at http://www.ex-parrot.com/~pdw/iftop/

Also, another handy bandwidth monitoring program to run on your server is vnstat.  It runs via cron every 5 minutes and can provide some simple historic bandwidth usage graphs.  I didn’t see any packages for vnstat, but it’s the simplest build I’ve ever seen: just download it and run ‘make && make install’.  More information at http://humdi.net/vnstat/

WordPress bug: spawn_cron() doesn’t properly consider the port of the cron.php file

Posted on August 16th, 2007 in General,Linux System Administration by Brandon

I ran into a problem today where a user would submit a new post in WordPress, and it would cause the web server to lock up. Restarting the web server would start Apache properly, and it would serve static content fine until the user requested another page from WordPress, at which point it would lock up again.

The configuration is a little odd, so it probably doesn’t happen to many users. In order for it to occur, you have to have the “WordPress Address” setting as a URL starting with ‘https’, and then write your post using a non-https URL. I tracked this down to a problem with the cron function built into WordPress, specifically this bit of code in the spawn_cron() function in wp-includes/cron.php:

$cron_url = get_option( 'siteurl' ) . '/wp-cron.php';
$parts = parse_url( $cron_url );

if ($parts['scheme'] == 'https') {
        // support for SSL was added in 4.3.0
        if (version_compare(phpversion(), '4.3.0', '>=') && function_exists('openssl_open')) {
                $argyle = @fsockopen('ssl://' . $parts['host'], $_SERVER['SERVER_PORT'], $errno, $errstr, 0.01);
        } else {
                return false;
        }
} else {
        $argyle = @ fsockopen( $parts['host'], $_SERVER['SERVER_PORT'], $errno, $errstr, 0.01 );
}
if ( $argyle )
        fputs( $argyle,
                  "GET {$parts['path']}?check=" . wp_hash('187425') . " HTTP/1.0\r\n"
                . "Host: {$_SERVER['HTTP_HOST']}\r\n\r\n"
        );

The line that says:

$argyle = @fsockopen('ssl://' . $parts['host'], $_SERVER['SERVER_PORT'], $errno, $errstr, 0.01);

Assumes that you are hitting the current page on the same server/port as the one returned by get_option( 'siteurl' ). Since the user was hitting the non-https version of the site, this code in the spawn_cron() function would connect to port 80 and try to establish an SSL connection. WordPress would receive that request as "\x80|\x01\x03\x01", serve it the home page, which would, in turn, run the cron function again. That sub-request would do the same thing over, and that would continue until Apache ran out of connections. At that point it would try to request the page again, and would wait endlessly for a connection slot that would never open up.

So, to solve, I added one line, and modified another like this:

[root@server wp-includes]# diff cron.php cron.php.original
90,91c90
< $port = isset($parts['port']) ? $parts['port'] : 443;
<                       $argyle = @fsockopen('ssl://' . $parts['host'], $port, $errno, $errstr, 0.01);
---
>                       $argyle = @fsockopen('ssl://' . $parts['host'], $_SERVER['SERVER_PORT'], $errno, $errstr, 0.01);
96,97c95
< $port = isset($parts['port']) ? $parts['port'] : 80;
<               $argyle = @ fsockopen( $parts['host'], $port, $errno, $errstr, 0.01 );
---
>               $argyle = @ fsockopen( $parts['host'], $_SERVER['SERVER_PORT'], $errno, $errstr, 0.01 );

That makes it consider the port of the URL returned by get_option( 'siteurl' ), instead of using the port you are currently connected on. It defaults to port 443 if the URL begins with https, and port 80 if not.

I posted the fix to the WordPress forums at http://wordpress.org/support/topic/130492. Hopefully it gets included in future releases of WordPress.

Testing servers through encrypted connections

Posted on August 15th, 2007 in General,Linux System Administration by Brandon

When testing web or mail servers, I often find myself telnetting to the server and issuing raw commands directly. Doing this is incredibly useful for tracking down the source of many problems. Until now, I have never known how to do the same thing over encrypted channels like HTTPS or POP3S. However, I just discovered that OpenSSL ships with a simple tool that works great. Run the command:

openssl s_client -connect hostname:port

That will perform the whole SSL handshake, display the output for you, and then give you a regular prompt, just like telnet would. SMTP over TLS is a little more complicated because you generally connect to the remote server first and then issue the STARTTLS command to negotiate encryption. In that case, you can use the command:

openssl s_client -starttls smtp -crlf -connect host:port

That tells the openssl client to connect and send ‘STARTTLS’ before attempting to negotiate the encryption. After that, you’ll get a 220 response, at which point you can proceed with your normal SMTP session.

Modern versions of OpenSSL also allow STARTTLS with POP3:

openssl s_client -starttls pop3 -connect host:port

Implementing greylisting on Qmail

Posted on August 14th, 2007 in General,Linux System Administration by Brandon

With my previous success with greylisting, I have decided that it definitely works well and is worth the little bit of effort it takes to get it installed.  Configuring postfix was very simple, but I (unfortunately) run several mail servers that use Qmail.  After a few minutes of googling, I decided on qgreylist, which was the simplest implementation by far.

Several of the alternatives required patching and recompiling qmail, which I definitely didn’t want to do.  qgreylist is just a simple Perl script that runs “in between” tcpserver and the qmail-smtpd process.  You download it, change the path to its working directory, and tweak a couple of other variables.  Then copy it into a permanent location and configure qmail’s smtpd process to pass messages through it.  It took a little longer than postgrey, but not too bad.
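The wiring can be sketched as a daemontools-style run script. The paths and tcpserver flags below are assumptions for illustration, not qgreylist’s documented defaults, so check your own layout:

```
#!/bin/sh
# Hypothetical /var/qmail/supervise/qmail-smtpd/run: tcpserver accepts the
# connection, qgreylist decides whether to greylist, and then the session
# is handed to qmail-smtpd. All paths here are examples.
exec /usr/local/bin/tcpserver -v -R 0 smtp \
    /usr/local/bin/qgreylist \
    /var/qmail/bin/qmail-smtpd 2>&1
```

The point is simply that qgreylist sits in the middle of the exec chain, so no qmail patching is needed.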

Find the best book buyback prices with BookScouter.com

Posted on August 6th, 2007 in General,Programming by Brandon

A few weeks ago I posted about a quick service I put together that compared textbook buyback prices from a few of the top websites.  I’ve been working on expanding it over the past few weeks, and am now unveiling a site dedicated to it.

BookScouter.com is the most comprehensive comparison site for quickly searching textbook buyback prices.  It currently scrapes prices from 21 other sites, which is all of them that I could find.  The website is written in PHP using a custom framework that I’ve developed and use exclusively now.  I found an excellent website called opensourcetemplates.org that has website templates available for free; their ‘Nautilius’ theme is the one I chose for this site.

The backend of the site is written in Perl.  It uses a pretty straightforward LWP request to fetch each page, and some regular expressions to pull the price from the pages it obtains.  Each site was custom coded, but I got it down to a pretty reusable script where I just customize a few things, like the input variable name for the ISBN and the regex that captures the price.  A few of the sites were more complicated than the others and required fetching a couple of pages to obtain a session ID.
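The extraction step can be sketched in shell. The HTML snippet and the price regex here are made-up stand-ins (the real backend uses Perl’s LWP with a per-site regex):

```shell
# Sketch of the price-extraction step: the HTML string below stands in for
# a fetched buyback-quote page, and the regex pulls the first dollar amount.
html='<div class="quote">We pay <b>$12.50</b> for this ISBN</div>'
price=$(printf '%s' "$html" | grep -oE '\$[0-9]+\.[0-9]+' | head -n1)
echo "$price"   # → $12.50
```

Swapping in a different page and regex per site is what makes the script reusable across all 21 vendors.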

I’m pretty happy with the end result.  Please try looking up a few books and see if you have anything of value sitting around.  No registration or personal information is ever required, and it is completely free to use.

Converting Qmail / vpopmail forwards to database format

Posted on August 3rd, 2007 in General by Brandon

I’m not a fan of Qmail, and am in the process of migrating users off of it. As one of the steps in a long process, I’m first migrating users from one qmail server to another. The destination server uses a database-backed vpopmail installation to store some of the user information, while the source server is still using the traditional file-based structure. Each email alias had a file named .qmail-USERNAME, which contains one forward per line. So a forward for brandon would be named .qmail-brandon and contain something like this:

&user@somedestination.com
&user2@anotherdestination.com

There exists a utility named ‘vconvert‘ which converts the actual user accounts into the new database format. But after a little searching, I was unable to find a similar utility to convert aliases, so I wrote up a quick one in Perl.  I tried pasting it here, but WordPress mangles the syntax.  Instead you can view it separately or download it
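The parsing half of such a conversion can be sketched in shell (the database insert into vpopmail’s alias table is elided, and the sample file matches the .qmail-brandon example above):

```shell
# Parse .qmail-USERNAME files into (alias, forward) pairs: the alias name
# comes from the file name, and each forward line drops its leading '&'.
# (A real converter would then insert each pair into vpopmail's database.)
dir=$(mktemp -d)
printf '&user@somedestination.com\n&user2@anotherdestination.com\n' \
    > "$dir/.qmail-brandon"

for f in "$dir"/.qmail-*; do
    name=$(basename "$f")
    name=${name#.qmail-}
    while read -r fwd; do
        echo "$name -> ${fwd#&}"
    done < "$f"
done
rm -rf "$dir"
```

Running it against the sample file prints one line per forward, e.g. `brandon -> user@somedestination.com`.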
