Web Programming, Linux System Administration, and Entrepreneurship in Athens, Georgia

Author: Brandon

Configuring Postfix SASL to authenticate against Courier Authlib

I ran across a system today that was using the VHCS control panel. It looks like the system wasn't correctly configured to allow SMTP authentication. It uses Postfix as the MTA and Courier-IMAP for the IMAP/POP3 server. It was populating the Courier authentication database with email addresses and passwords for logging into the incoming mail server, but Postfix wasn't configured to use the same database for authenticating users on the outgoing mail server.

This is what I had to do to get it working:

Edit your system's smtpd.conf file (/var/lib/sasl2/smtpd.conf for Red Hat and derivatives; /etc/postfix/sasl/smtpd.conf for Debian and Ubuntu derivatives).

I think a default install looks like this:

pwcheck_method: saslauthd
mech_list: PLAIN LOGIN

So change it to this:

pwcheck_method: authdaemond
mech_list: PLAIN LOGIN
authdaemond_path: /var/run/courier/authdaemon/socket

Of course, make sure that the authdaemond_path is correct for your system, and change as needed.

Then restart Postfix and see if that works. You can use my SMTP Authentication String tool to generate your encoded password and try it through telnet. Tail your mail log to watch for any errors.
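
For reference, the AUTH PLAIN string is just the base64 encoding of the username and password separated by NUL bytes, so you can also build one by hand. Something like this should work (the address and password below are placeholders for an account that exists in the Courier auth database):

printf '\0user@example.com\0secretpassword' | base64

Then paste the result into a manual SMTP session:

telnet localhost 25
EHLO test.example.com
AUTH PLAIN <paste the base64 string from above here>

A 235 response means authentication succeeded; a 535 response means it was rejected.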

On the system I was working on, Postfix was configured to chroot the smtpd processes (in /etc/postfix/master.cf), and I got errors in the mail log that looked like this:

Jan 24 19:52:46 host postfix/smtpd[14528]: warning: SASL authentication failure: cannot connect to Courier authdaemond: No such file or directory
Jan 24 19:52:46 host postfix/smtpd[14528]: warning: SASL authentication failure: Password verification failed
Jan 24 19:52:46 host postfix/smtpd[14528]: warning: host.local[127.0.0.1]: SASL plain authentication failed: generic failure

So, in that case, I simply hard-linked the Courier authdaemon socket file inside the chroot (/var/spool/postfix):

cd /var/spool/postfix
ln /var/run/courier/authdaemon/socket courier-authdaemon-socket

Then change the authdaemond_path to just 'courier-authdaemon-socket', restart Postfix, and it should work.

Getting a MySQL last insert_id from an ADOdb connection

The PHP ADOdb libraries are a database abstraction layer that tries to hide database-specific commands from the programmer, so that code can be written to be portable between backend database engines. Since not all databases provide an insert id, ADOdb provides a wrapper for it in the form of its Insert_ID() function.

It implements it in a really ugly way, though. Whenever you use its pseudo insert_id functions, it creates a _seq table with a single column and a single row. For example, if you are inserting something into a table named 'users', it will create a table named 'users_seq' with a single 'id' column. It keeps one row in that table with an insert id that it calculates and increments on its own.
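
To make that concrete, here is roughly what ADOdb leaves behind for a 'users' table. This is a sketch based on the behavior described above; the exact column definition may vary by driver:

CREATE TABLE users_seq (
  id int(11) NOT NULL
);
INSERT INTO users_seq (id) VALUES (1); -- ADOdb increments this value itself on each insert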

First off, that is really ugly. I hate having a whole bunch of extra tables in my database, and it makes it even worse that they each hold only a single value. I wish they had implemented it differently and made a single '_sequences' table with two columns (table and id). At least that would keep the number of tables to a minimum and centralize where all of the insert ids live.

The other bad part is that if you access the database with anything other than the ADOdb application, it is difficult to honor this required structure. In most cases things break, I get duplicate key errors, and it is just generally a pain.

So I've decided not to use it ever again. It's not likely that I'll ever change the database anyway, so I might as well take advantage of the handy insert_id functionality already provided by MySQL. Just do your queries as you would normally, including the 'INSERTID', and then you can retrieve the insert_id as in this example:

CREATE TABLE `users` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `name` varchar(80) NOT NULL,
  `email` varchar(80) NOT NULL,
  PRIMARY KEY (`id`)
);

// I assume you know how to create an ADOdb object
$db->query("
    INSERT INTO users (id, name, email) VALUES ('INSERTID', 'Joe User', 'joe@example.com')
");
$user_id = $db->_connectionID->insert_id;
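
If reaching into ADOdb's internal connection object feels too fragile, a plain query over the same connection should give the same value. This is just a sketch; LAST_INSERT_ID() is tracked per connection in MySQL:

// Alternative: ask MySQL directly over the same ADOdb connection
$user_id = $db->GetOne("SELECT LAST_INSERT_ID()");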

Performing post-output script processing in PHP

After several hours of researching and experimenting, I think I finally came up with a way for a PHP script to display a page, close the connection to the browser, and then continue processing. The idea is that I can add some potentially lengthy processing to the script by executing it after the browser has closed the connection, while to a visitor the page appears to load quickly.

I experimented with PHP’s register_shutdown_function, but that doesn’t really do what I need (unless running < PHP 4.0.3). Evidently PHP doesn’t have any way to close STDOUT, like other languages do.

The trick is in sending a Connection: close and Content-Length header. Once a client has received the specified number of bytes, it will close the connection, even though the script may continue. Unfortunately, that means that you need to know the length of the page before displaying it. That can be handled with output buffering, but does make the solution less than ideal.

Here is an example that works for me using PHP 5.1.6.

<?php

$start_time = microtime(true);
function bclog($message)
{
    global $start_time;
    $fh = fopen('/tmp/logfile', 'a');
    $elapsed = microtime(true) - $start_time;
    fwrite($fh, "$elapsed - $message\n");
    fclose($fh);
}

header('Content-type: text/plain');
header('Connection: close');
ob_start();

for ($i = 0; $i < 1024; $i++ ) {
    echo "#";
}
bclog("I'm done outputting my normal content");

// Figure the size of our content
$size = ob_get_length();
// And send the content-length header
header("Content-Length: $size");

// Now flush all of our output buffers
ob_end_flush();
ob_flush();
flush();

sleep(5);
bclog("Now I'm done with all of my post-processing - FYI, content length was $size");
?>

If you hit that page in a browser, you will notice that the browser displays the content and finishes right away. However, if you tail the logfile, you will see something like this:

0.0002360343933 - I'm done outputting my normal content
5.0019490718842 - Now I'm done with all of my post-processing - FYI, content length was 1024

It is not an ideal solution, but I think that is about as good as it is going to get.

Using Jailkit for chrooting shell accounts

I've toyed around with chrooting a shell account to a directory before, but never actually done it. Today a customer wanted it done, so I had a chance to figure it all out. I've considered using chrooted SSH before, but that requires a patch to SSH. Today I came across Jailkit, which leaves SSH alone and instead implements the chroot as the user's shell. It seemed pretty straightforward, and it provides some utilities for creating the jail.

cd /usr/local/src
wget https://olivier.sessink.nl/jailkit/jailkit-2.4.tar.gz
tar -xvzf jailkit-2.4.tar.gz
cd jailkit-2.4
./configure && make && make install

The tools were then available. Their examples put the jail environment in a single shared directory, but I figured I might want to create per-user jails, so I created mine in /home/jail-someuser like this:

jk_init -v -j /home/jail-someuser basicshell editors extendedshell netutils ssh sftp scp

That creates the directory and copies all of the specified programs into place inside the jail. It also copies all of the necessary libraries, which is much easier than tracking them down yourself with ldd.

Now, just create the actual user account and some directories for inside the jail:

mkdir -p /home/jail-someuser/home/someuser
useradd -d /home/jail-someuser/./home/someuser -s /usr/sbin/jk_chrootsh someuser
chown someuser:someuser /home/jail-someuser/./home/someuser
mkdir /home/jail-someuser/tmp
chmod a+rwx /home/jail-someuser/tmp
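
One thing the commands above gloss over: useradd does not set a password, so the account still needs one before you can SSH in (assuming password authentication rather than keys):

passwd someuser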

I was then able to log in by SSHing to the box as someuser. Upon logging in, I noticed that the default Debian bash login script had some problems because the 'id' command wasn't available. vi wasn't available either, so I copied both of those programs into the jail (fortunately their required libraries already seemed to be there).
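
Jailkit also ships a helper for pulling extra binaries and their libraries into an existing jail, so something like this should work instead of copying files by hand (the paths are examples; check the jk_cp man page for the exact options on your version):

jk_cp -v -j /home/jail-someuser /usr/bin/id /usr/bin/vim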

Overall it was pretty painless to install and get working. I’m quite impressed.

The new wave of HTTP referrer spam

I’ve noticed an increase in HTTP Referrer spam on my own web site and in some websites that I manage. See Wikipedia’s articles on the HTTP Referrer and Referrer spam for a definition of what exactly referrer spam is.

Wikipedia, and some other pages on the Internet that I found describing referrer spam, say that the spammer's intent is to end up on published web stats pages in order to create links to their site. I don't think that is the case, or at least it no longer is.

I would argue that the real intent of these spammers is to get the website owner who is looking at the stats to click on their links. Most people who have a blog or small website check their statistics often and are really interested when they find a new site that appears to be linking to theirs. It is very likely that they will intentionally look at any new incoming links.

As evidence of this, I just noticed that I got 4 hits on one of my sites with the following referrer:

https://www.amazon.com/s/ref=sr_pg_4&tag=somespamer_20

I'm familiar with Amazon's link structure and immediately noticed that it was an affiliate URL. If you hit that URL, Amazon will attribute your click as coming from the spammer. Amazon will set a cookie that contains the spammer's affiliate ID, and any purchase that you make at Amazon in the next 30 days will be credited to the spammer. They will then get a 4% commission on your purchases.

Obviously, not everybody buys something from Amazon once a month, but I’d bet that enough people do to make it worth the risk. Fortunately, it looks like Amazon has already caught on to this one, and that particular link just goes to an error page now.

That is a pretty deceitful and probably successful tactic for the spammer. Creating referrer spam is incredibly easy. I don’t think there is any great way to detect it either. I’ve seen some WordPress plugins and such that attempt to deal with it, but I don’t think there is much going on in this area yet.

My first thought would be to request the referring page and look for links to your site. That has some potential problems working reliably on a large scale, though. It might also enable a sort of distributed denial-of-service-by-proxy attack.
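
As a rough illustration of that idea, here is a naive PHP sketch (not production code; it blindly trusts and fetches the Referer URL, so you would want caching and rate limiting before using it for real):

<?php
// Fetch the referring page and see whether it actually links back to us.
// $referrer would come from $_SERVER['HTTP_REFERER']; $our_host is this site's hostname.
function referrer_links_to_us($referrer, $our_host)
{
    $html = @file_get_contents($referrer);
    if ($html === false) {
        return false; // could not fetch the page; treat the referrer as suspect
    }
    return stripos($html, $our_host) !== false;
}
?>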

Another possible way to fight referrer spam would involve a blacklist. It could contain both the IP addresses of known spammers and the links that they are spamming. I found one called referrercop that looks like it is owned by Google now, so that may show some promise, although it doesn't look like it has been updated recently.

Regular Expression matching with newlines

I ran across a regular expression modifier today that I have not used before. When matching text that spans multiple lines, you can use the 's' modifier at the end of the regular expression so that the dot matches newlines as well, effectively treating the string as a single line.

For example, I was trying to match some HTML that spanned multiple lines, like this:

<td class='something'>  This is the text I want to match
</td>

This expression didn’t match:

preg_match_all("#<td class='someting'>(.+?)</td>#", $source_string, $matches);

But after simply adding the 's' flag after the closing #, it worked as desired:

preg_match_all("#<td class='someting'>(.+?)</td>#s", $source_string, $matches);

PHP Performance – isset() versus empty() versus PHP Notices

I’m cleaning up a lot of PHP code and always program with PHP error_reporting set to E_ALL and display_errors turned on so that I make sure to catch any PHP messages that come up. Since starting on this site, I have fixed literally hundreds (maybe thousands) of PHP Notices about using uninitialized variables and non-existent array indexes.

I have been fixing problems like this where $somevar is sometimes undefined:

if ($somevar)

by changing it to:

if (isset($somevar) && $somevar)

This successfully gets rid of the NOTICEs, but adds some overhead because PHP has to perform two checks. After fixing a lot of this in this manner, I’ve noticed that the pages seem to be generated a little slower.

So, to provide some conclusive results to myself, I wrote up a quick benchmarking script – available at php_empty_benchmark.php. It goes through 1,000,000 tests using each of these methods:

  1. if ($a) – This generates a notice if $a is not set
  2. if (isset($a)) – A simple clean way to check if the variable is set (note that it is not equivalent to the one above)
  3. if (isset($a) && $a) – The one that I have been using, which is equivalent to if($a) but doesn’t generate a notice.
  4. if (!empty($a)) – This is functionally equivalent to if($a), but doesn’t generate a notice.

It measures the time to perform 1 million tests using a defined percentage of values that are set.  It then computes the difference as a percentage of the time taken for the original test (the one that generates the notices).   A ‘diff’ of 100 means that the execution time is the same, greater than 100 means that it is faster, and less than 100 means that it is slower. A typical test produced these results:

    With NOTICE: 0.19779300689697
    With isset:  0.19768500328064 / Diff: 100.05463419811
    With both:   0.21704912185669 / Diff: 91.128222590815
    with !empty: 0.19779801368713 / Diff: 99.997468735875
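
For reference, the core of a benchmark like this is just a set of timed loops. Here is a simplified sketch (not the actual php_empty_benchmark.php script, and it leaves $a unset the whole time rather than using a mix of set and unset values):

<?php
$iterations = 1000000;

// Time the isset() && value check; $a is never set, so the body never runs
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    if (isset($a) && $a) {
    }
}
echo "isset && value: " . (microtime(true) - $start) . " seconds\n";

// Time the !empty() check on the same unset variable
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    if (!empty($a)) {
    }
}
echo "!empty:         " . (microtime(true) - $start) . " seconds\n";
?>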

In summary, using the if (isset($a) && $a) syntax is about 8-10% slower than just generating the PHP notice. Using !empty() should be a drop-in replacement that doesn’t generate the notice and has virtually no performance impact. Using isset() alone also has no performance impact, but it is not exactly the same as ‘if($a)’, since isset() will return true if the variable is set to a false value. I included it here because it often makes the code a little more readable than the !empty($a) syntax. For example:

$myvalue = !empty($_REQUEST['myvalue']) ? $_REQUEST['myvalue'] : '';

Versus

$myvalue = isset($_REQUEST['myvalue']) ? $_REQUEST['myvalue'] : '';

KnitMeter.com Beta

My wife has gotten seriously into knitting and was recently wondering how much she had knit in the past year. I was surprised that there doesn’t seem to be a website for tracking that kind of information, so I decided to make one for her (and for anybody else who might want it).

The concept is pretty simple: just enter how much you knit each day and it will add it up for you and can summarize it by project. It also generates a little widget that knitters can put on their blogs to compare with others.

The site still needs a little work here and there, but is pretty functional at this point. Users are welcome to sign up and try it out, all for free of course. I’m looking for user input to see what still needs some work.

Installing trac with webadmin on CentOS5

I’m not overly familiar with Python applications, so it takes a little while for me to figure it out each time. I need to document it somewhere so I don’t have to reinvent the wheel every time, and I might as well do it here so that others can find it.

Install the rpmforge repository

wget  https://dag.wieers.com/rpm/packages/rpmforge-release/rpmforge-release-0.3.6-1.el5.rf.i386.rpm

rpm -i rpmforge-release-0.3.6-1.el5.rf.i386.rpm

Install trac from the rpmforge repo

yum install trac

Install ez_setup

wget https://peak.telecommunity.com/dist/ez_setup.py

python ez_setup.py

And install webadmin with easy_install

easy_install https://svn.edgewall.com/repos/trac/sandbox/webadmin/
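
From there, the remaining pieces are roughly creating a Trac environment and enabling the plugin. Something along these lines should do it (the path and project name are examples, and the exact trac.ini syntax depends on the Trac and webadmin versions installed):

# Create a new Trac environment (follow the interactive prompts)
trac-admin /var/trac/myproject initenv

# Then enable the webadmin plugin in /var/trac/myproject/conf/trac.ini:
#   [components]
#   webadmin.* = enabled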

Poor experience and uptime with rapidvps.com

I heard good things about RapidVPS from several members of my local LUG.  I’d also heard good things about Slicehost, but they seem to be perpetually unavailable.  So when I was setting up a new development and testing server, I figured that I’d give RapidVPS a try.  I kind of like seeing how different companies do things, and they have a pretty decent package for $30/month.

It turns out that was a poor choice.   I was unimpressed from the first day.   My new RapidVPS server was a pretty vanilla install of CentOS5.  Not much had been customized for their environment.   The name servers in /etc/resolv.conf didn’t even work and there were a bunch of other little annoyances that just didn’t make sense.  I blew it off at the time since I was able to get them resolved pretty quickly.

Their support staff was fairly responsive, but tended to skirt the direct questions that I asked.  For example, I asked specifically why the name servers were incorrect on a fresh install, and they just replied that they were fixed now.

I primarily use this machine for PHP development and testing.  I spend 6-8 hours a day logged in via SSH editing files directly, so I notice pretty quickly when things go wrong.  Once or twice a week the IO load would get really high and things would take forever; a simple directory listing was taking over 30 seconds.  When I sent in a support request about that, their reply was something along the lines that most customers use them for running LAMP websites, that they generally work fine for that purpose, and that the high IO wouldn’t be a problem.

On several other occasions, their network has just become incredibly slow.   Replies from support indicated that one of their customers was getting attacked.    Right now, my server appears to be completely down, and they just replied that the machine is ‘recovering/doing a raid rebuild’ and will be up shortly.

So, I’ve had this machine for almost two months and had all of these problems.    I’d like to just ditch them and sign up for another server at RimuHosting.   But I’ve spent quite a bit of time getting everything configured just right and don’t have time at the moment to move everything somewhere else.

I guess I’ll have to deal with it for another month or so, until development slows down a little bit.  Then I’ll have to spend a few days migrating everything to a new service.  In the meantime, I definitely won’t be recommending RapidVPS to anybody.
