Web Programming, Linux System Administration, and Entrepreneurship in Athens, Georgia

Category: Linux System Administration (Page 6 of 11)

Compression for MySQL Replication

I have a MySQL database that does a fair number of updates and inserts. The server is replicated to an off-site server located across the country. With MySQL replication, any Insert, Update, or Delete statements are written to the binary log, then sent from the master server in San Jose to the slave in New York.

I noticed today that the slave server was falling behind the master and having trouble keeping up. There was a sizable amount of traffic between the two servers, and after investigating for a little while, I determined that the available bandwidth between them wasn’t sufficient to keep up with the replication.

We have applications running on the server in New York that were significantly behind or slow as a result. After a bit of research, I found the slave_compressed_protocol setting in MySQL, which allows the master and slave to compress the replication data sent between them. After enabling it, the slave was able to catch up within a matter of minutes and has stayed caught up just fine. The bandwidth usage has dropped from a consistent 600 kb/s to around 20 kb/s.
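If you want to try it, the setting goes on the slave. A minimal sketch, assuming a stock /etc/my.cnf and a MySQL version recent enough to support the option:

# in the [mysqld] section of the slave's /etc/my.cnf
slave_compressed_protocol = 1

# or enable it on the fly and restart the slave threads so it takes effect
mysql -e "SET GLOBAL slave_compressed_protocol = 1; STOP SLAVE; START SLAVE;"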

Upon looking into MySQL replication, I also experimented with SSH compression since the replication goes through an SSH Tunnel. I had similar success with SSH compression as well.
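If you want to try that as well, SSH compression is just the -C option (or ‘Compression yes’ in ~/.ssh/config) on the command that builds the tunnel; a rough example with made-up hostnames and ports:

ssh -C -N -L 3307:127.0.0.1:3306 tunnel@master.example.com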

Poor Performance After Enabling Replication Due to sync_binlog

I was pretty pleased with myself for setting up some fairly complicated MySQL circular replication the other night. I did it well after peak hours so as not to disturb any visitors if it caused any problems. Everything appeared to be working great until I started watching things the next morning.

I started to notice that the main MySQL server seemed to be running really slowly. One process that usually completes in a couple of hours ended up taking well over 16 hours. I spent the whole day troubleshooting it, which got me familiar with all sorts of useful tools: ‘mytop’ is a handy version of ‘top’ for MySQL queries, and iostat is great for watching disk I/O performance.

In the end, after a whole day of troubleshooting, it came down to the ‘sync_binlog’ setting, which I had enabled because I read some howto that mentioned it was useful on a replication master. My understanding now is that the setting causes MySQL to sync the binary log to disk after each write (every UPDATE, INSERT, or DELETE), so the data is physically written to the drive instead of sitting in the operating system’s cache. My application does a ton of inserts, so it was killing performance.
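If you run into the same thing, the setting is easy to back out. A quick sketch; note that sync_binlog = 0 leaves the flushing up to the operating system, so you trade some binary log durability for the performance:

# in the [mysqld] section of /etc/my.cnf
sync_binlog = 0

# or change it at runtime
mysql -e "SET GLOBAL sync_binlog = 0;"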

A Case for Choosing Good Server Names

This morning, I had a client call me bright and early, frantic about some mail problems they were having.  All of their mail servers had stopped accepting incoming SMTP connections for some reason, and they couldn’t figure out why.

After a little bit of investigation, I found that they were using Postfix with MySQL-based virtual domains. The MySQL authentication was failing, which meant that Postfix was unable to look up any valid recipient names. That, in turn, was causing tons of retried connections, until they hit the maximum number of connections and Postfix refused to accept any more.

The problem is that these mail servers were initially set up with some dumb names for some reason. A new administrator noticed the silly names in their reverse DNS entries and changed them to some more sensible names. The MySQL permissions were based on the hostnames, so when the names in reverse DNS changed, it broke the permissions and the clients were unable to connect.

Solving the problem was simple enough – I just corrected the MySQL permissions, and then had to deal with some huge mail queues for a little while as all of the messages waiting to come in were finally allowed all at once.

The moral of the story is to use sensible names to start out with. These names were chosen to be sort of funny, I guess, but it didn’t end up being so amusing in the midst of all of the problems it caused. As a side note, I usually base MySQL permissions on IP address as well, which further reduces this kind of problem.
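For example, a grant tied to an IP address instead of a hostname might look something like this (the database name, user, and address are made up for illustration):

GRANT SELECT ON mail.* TO 'postfix'@'10.0.0.25' IDENTIFIED BY 'somepassword';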

Random Password Generator

There are times when I’ve been focusing on programming all day, and it is easier to write a program to do something trivial than it is to just do it the simple way. Today was such a day. Instead of typing some random characters to make up a new user’s password, I wrote a script to do it for me:

#!/usr/bin/perl
#################################################################
## Quick Random Password Generator
## Author: Brandon Checketts
## https://www.brandonchecketts.com/
#################################################################

my $length = $ARGV[0] || 10;

my $charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890";

my $pw = "";
for (my $i=0; $i < $length; $i++) {
    my $pos = int(rand(length($charset)));
    $pw .= substr($charset, $pos, 1);
}
print "\nRandom Password: $pw\n\n";

Of course, you can modify the default length and/or character set to make something more suitable for your use.
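If you want to run it like the examples below, save it somewhere in your path and make it executable (the ~/bin/pwgen name is just where I happen to keep it):

mkdir -p ~/bin
cp pwgen.pl ~/bin/pwgen     # assuming you saved the script above as pwgen.pl
chmod 755 ~/bin/pwgen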

Sample Usage:

[root@dev ~]# ~/bin/pwgen

Random Password: mYTZrSpE8B

[root@dev ~]# ~/bin/pwgen 20

Random Password: EoSQpypmeK3SZCVPodaM

Postfix Vacation Message

The idea of a vacation message is kind of odd to me, but I had one client request it today, so I took a look at configuring it. On RHEL/CentOS distros, the ‘vacation’ binary is distributed with sendmail and is not available with Postfix, so you have to build it yourself. Fortunately, it is about the easiest thing I have ever compiled.

[root@host ~]# yum install gdbm-devel
[root@host ~]# cd /usr/local/src/
[root@host src]# wget https://internap.dl.sourceforge.net/sourceforge/vacation/vacation-1.2.7.0-rc2.tar.gz
[root@host src]# tar -xvzf vacation*
[root@host src]# cd vacation-1.2.7.0-rc2
[root@host vacation-1.2.7.0-rc2]# make
[root@host vacation-1.2.7.0-rc2]# make install

That’s it. Not even a configure script. That should install the vacation binary in /usr/bin/vacation.

Now just create a vacation message by putting a ‘.vacation.msg’ file in the user’s home directory with the auto-reply content:

Subject: On vacation message.

I'm on vacation and will not be reading my mail for a while.
Your mail will be dealt with when I return. 

And finally, create a .forward file that tells your mail program to deliver to the vacation program:

\myuser, "|/usr/bin/vacation  myuser"

That should be it. I tested and verified that it works. Note that the incoming message has to have a To: header with the recipient’s address, or vacation won’t reply.
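A quick way to test is to send a message from another account with an explicit To: header; something like this should do it from any box with a command-line mailer (the address is a placeholder):

echo "Testing the vacation autoresponder" | mail -s "vacation test" myuser@domain.tld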

Next, I might try some experiments to see if I can get it to work with virtual users.

Checking MySQL Replication

MySQL replication is pretty easy to set up, but needs a few extra things to make it more reliable. I wrote this quick PHP script to alert me when replication has failed and is more than 5 minutes behind the master.

<?php

$user = 'username';
$pass = 'password';
$host = 'localhost';
// Grant this user permission to check the status with this mysql statement
// GRANT REPLICATION CLIENT on *.* TO 'user'@'host' IDENTIFIED BY 'password';

$threshold = 300;

$db = mysql_connect($host, $user, $pass);

$result = mysql_query('SHOW SLAVE STATUS');
if (!$result) {
    // Make sure that your user has the 'REPLICATION CLIENT' privilege
    echo "Error 'SHOW SLAVE STATUS' command failed\n";
    echo mysql_error()."\n";
    exit(1);
}

$status = mysql_fetch_array($result);

if (!isset($status['Seconds_Behind_Master'])) {
    // Note: Seconds_Behind_Master is NULL when the slave threads are not
    // running, and isset() returns false for NULL values
    echo "Error: Seconds_Behind_Master is missing or NULL (is the slave running?)\n";
    print_r($status);
    exit(2);
}

if ($status['Seconds_Behind_Master'] > $threshold) {
    $minutes = floor($status['Seconds_Behind_Master'] / 60);
    echo "Error: Slave is $minutes minutes behind the master server\n";
    exit(3);
}

exit(0);
?>

This script is intended to be run periodically from cron. It doesn’t generate any output unless something is wrong. The behavior of cron is that when a script generates output, it will email the output to the user, so make sure that you have mail on your system configured to send you the cron output correctly. The script also exits with a non-zero status on each error, so you might include this in a more complicated script that attempts to do something else based on the status.

I use something like this in a non-privileged user’s crontab:

*/15 * * * * /usr/bin/php /path/to/check_replication.php

Getting Dkimproxy Installed and Configured

Dkimproxy is a great program for getting Postfix to both sign and validate DomainKeys and DKIM messages. Prior to dkimproxy, one would have used dk-filter and dkim-filter, which did DomainKeys and DKIM signing separately; dkimproxy combines that functionality into one program. Installing it can be a bit complicated because it isn’t available in most distro repositories and requires several Perl modules. Configuring it can be difficult as well, because it involves making changes to DNS and Postfix in addition to its own configuration.

I wrote up the steps below as I went through a recent installation for a customer.

If you have the RPMforge repository enabled, you can install the required Perl modules with this command (thanks to JohnB for mentioning that):

yum install perl-Net-Server perl-Error perl-Mail-DKIM

Otherwise, you can install them manually with CPAN. First, install the openssl-devel package (you’ll need it for CPAN to build Mail::DKIM):

yum install openssl-devel

Now install the required Perl modules

# perl -MCPAN -e shell
> install Net::Server
> install Error
> install Mail::DKIM

Download and install the actual dkimproxy code:

cd /usr/local/src
wget https://internap.dl.sourceforge.net/sourceforge/dkimproxy/dkimproxy-1.0.1.tar.gz
tar -xvzf dkimproxy-1.0.1.tar.gz
cd dkimproxy-1.0.1
./configure --prefix=/usr/local/dkimproxy
make
make install

You should now have the program installed in /usr/local/dkimproxy. A sample init file is included, so we can copy it into place also:

cp /usr/local/src/dkimproxy-1.0.1/sample-dkim-init-script.sh /etc/init.d/dkimproxy

Create a ‘dkim’ user and group, but lock the password:

useradd -d /usr/local/dkimproxy dkim
passwd -l dkim

That should be enough to get dkimproxy running, but it isn’t configured yet.

Create a key file for your domain

cd /usr/local/dkimproxy/etc/
openssl genrsa -out domain.tld.key 1024
openssl rsa -in domain.tld.key -pubout -out domain.tld.pub

Now create a DNS TXT record for mail._domainkey.domain.tld with the contents of domain.tld.pub. Your public key will span at least two lines, so combine all of the lines of the key together when putting it in your DNS record. The whole DNS record will look something like this:

k=rsa; t=s; p=MFwwDQYJ......0JMCAwEAAQ==

(Note that the key is pretty long and I’ve shortened it here)
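To combine the lines of the .pub file into that single p= value, you can strip the PEM header and footer and join what is left; a quick one-liner (adjust the filename to match your key):

grep -v '^-----' domain.tld.pub | tr -d '\n'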
You could now confirm the key is correct in your DNS:

[root@host etc]# host -ttxt mail._domainkey.domain.tld
mail._domainkey.domain.tld descriptive text "k=rsa\; t=s\; p=MFwwDQYJ......0JMCAwEAAQ=="

Now tell dkimproxy about the key file and its configuration parameters. Create /usr/local/dkimproxy/etc/dkimproxy_out.conf with this content:

# specify what address/port DKIMproxy should listen on
listen    127.0.0.1:10027

# specify what address/port DKIMproxy forwards mail to
relay     127.0.0.1:10028

# specify what domains DKIMproxy can sign for (comma-separated, no spaces)
domain    domain.tld

# specify what signatures to add
signature dkim(c=relaxed)
signature domainkeys(c=nofws)

# specify location of the private key
keyfile   /usr/local/dkimproxy/etc/domain.tld.key

# specify the selector (i.e. the name of the key record put in DNS)
selector  mail

And copy the sample inbound config to the real inbound config

cd /usr/local/dkimproxy/etc
cp dkimproxy_in.conf.example dkimproxy_in.conf

Now you should be able to start up dkimproxy, and configure it to start at boot:

/etc/init.d/dkimproxy start
chkconfig dkimproxy on
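At this point you can sanity-check that the outbound proxy is listening on the port from dkimproxy_out.conf above (10027 in this example):

netstat -ln | grep 127.0.0.1:10027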

And the last step is to modify the Postfix configuration to tell it to forward messages sent to port 587 through dkimproxy for signing. I added these three sections to /etc/postfix/master.cf:

### dkimproxy filter - see https://dkimproxy.sourceforge.net/postfix-outbound-howto.html
#
# modify the default submission service to specify a content filter
# and restrict it to local clients and SASL authenticated clients only
#
submission  inet  n     -       n       -       -       smtpd
    -o smtpd_etrn_restrictions=reject
    -o smtpd_sasl_auth_enable=yes
    -o content_filter=dksign:[127.0.0.1]:10027
    -o receive_override_options=no_address_mappings
    -o smtpd_recipient_restrictions=permit_mynetworks,permit_sasl_authenticated,reject

# specify the location of the DKIM signing proxy
# Note: the smtp_discard_ehlo_keywords option requires a recent version of
# Postfix. Leave it off if your version does not support it.
dksign    unix  -       -       n       -       10      smtp
    -o smtp_send_xforward_command=yes
    -o smtp_discard_ehlo_keywords=8bitmime,starttls

# service for accepting messages FROM the DKIM signing proxy
127.0.0.1:10028 inet  n  -      n       -       10      smtpd
    -o content_filter=
    -o receive_override_options=no_unknown_recipient_checks,no_header_body_checks
    -o smtpd_helo_restrictions=
    -o smtpd_client_restrictions=
    -o smtpd_sender_restrictions=
    -o smtpd_recipient_restrictions=permit_mynetworks,reject
    -o mynetworks=127.0.0.0/8
    -o smtpd_authorized_xforward_hosts=127.0.0.0/8

If you want it to sign messages sent from the command line sendmail program, modify the pickup service to use the content_filter like this:

pickup    fifo  n       -       n       60      1       pickup
    -o content_filter=dksign:[127.0.0.1]:10027

Finally, reload Postfix with ‘postfix reload’, and you *should* have a working installation. You can now use my DomainKeys/DKIM validator to test and ensure that it is working.

Getting Un-locked out of Webmin

Webmin has come a long way in the last year or two. I still (and always will) prefer the command line, but many customers are certainly much more comfortable using a web-interface to configure their server. I have to reset a password every once in a while and have to look up the steps every time. It seems I can never find them quickly, so I’ll put them here so I can find them easily next time.

Blocked IP addresses are stored in a text file named /var/webmin/blocked. Addresses should be cleared out automatically after some period of time, but you can hasten the process by clearing the file manually:

cp /dev/null /var/webmin/blocked

Webmin passwords are stored in /etc/webmin/miniserv.users. A script for changing a user’s password is provided with Webmin. Just run it like this:

/usr/libexec/webmin/changepass.pl /etc/webmin root

Creating a Permanent SSH Tunnel Between Linux Servers

I recently had a need to create a permanent SSH tunnel between Linux servers. My need was to allow regular non-encrypted MySQL connections over an encrypted tunnel, but there could be many other uses as well. Google turns up plenty of resources on the fundamental SSH commands for port forwarding, but I never found a good resource for setting up a connection and ensuring that it remains active, which is what I hope to provide here.

The SSH commands for port forwarding can be found in the ssh man page. The steps described here will create an unprivileged user named ‘tunnel’ on each server. That user will then be used to create the tunnel and run a script via cron to ensure that it remains up.

First, select one of the servers that will initiate the SSH connection. SSH allows you to map both local and remote ports, so it doesn’t really matter which end of the connection you choose to initiate the connection. I’ll refer to the box that initiates the connection as Host A, and the box that we connect to as Host B.

Create a ‘tunnel’ user on Host A:

[root@hosta ~]# useradd -d /home/tunnel tunnel
[root@hosta ~]# passwd tunnel       ## Set a strong password
[root@hosta ~]# su - tunnel           ## Become the user 'tunnel'

Now create a public/private key pair:

[tunnel@hosta ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/tunnel/.ssh/id_rsa):    ## hit enter to accept the default
Enter passphrase (empty for no passphrase):                           ## don't use a  passphrase
Enter same passphrase again:
Your identification has been saved in /home/tunnel/.ssh/id_rsa.
Your public key has been saved in /home/tunnel/.ssh/id_rsa.pub.
The key fingerprint is:
6f:30:b8:e1:36:49:74:b9:32:68:6e:bf:3e:62:d3:c2 tunnel@hosta

Now cat out the id_rsa.pub file, which contains the public key that we will need to put on Host B:

[tunnel@hosta ~]$ cat ~/.ssh/id_rsa.pub
ssh-rsa blahAAAAB3NzaC1yc2EAAAABIwAAAQEA......6BEKKCxTIxgBqjLP tunnel@hosta

Now create a ‘tunnel’ user on Host B and save the public key for tunnel@hosta in its authorized_keys file:

[root@hostb ~]# useradd -d /home/tunnel tunnel
[root@hostb ~]# passwd tunnel       ## Set a strong password
[root@hostb ~]# su - tunnel
[tunnel@hostb ~]$ mkdir .ssh
[tunnel@hostb ~]$ vi .ssh/authorized_keys   ## Now paste in the public key for tunnel@hosta

At this point you should be able to ssh from tunnel@hosta to tunnel@hostb without using a password. Depending on your configuration, you might need to allow the user ‘tunnel’ in /etc/ssh/sshd_config. You might also set some SSH options like the destination port in ~/.ssh/config.
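For example, a ~/.ssh/config entry for the ‘tunnel’ user on Host A might look something like this (the hostname and port are placeholders; the ServerAlive options help the client notice a dead connection instead of holding it open):

Host hostb
    HostName hostb.example.com
    Port 22
    User tunnel
    ServerAliveInterval 60
    ServerAliveCountMax 3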

Now, create this script as hosta:/home/tunnel/check_ssh_tunnel.sh

#!/bin/bash

createTunnel() {
    /usr/bin/ssh -f -N -L13306:hostb:3306 -L19922:hostb:22 tunnel@hostb
    rc=$?
    if [[ $rc -eq 0 ]]; then
        echo Tunnel to hostb created successfully
    else
        echo An error occurred creating a tunnel to hostb. RC was $rc
    fi
}
## Run the 'ls' command remotely (output suppressed).  If it returns non-zero, then create a new connection
/usr/bin/ssh -p 19922 tunnel@localhost ls >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
    echo Creating new tunnel connection
    createTunnel
fi

Save that file and make it executable:

chmod 700 ~/check_ssh_tunnel.sh

This script will attempt to SSH to localhost port 19922 and run the ‘ls’ command. If that fails, it will attempt to create the SSH tunnel. The command to create the SSH tunnel will tunnel local port 13306 to port 3306 on hostb. You should modify that as necessary for your configuration. It will also create a tunnel for local port 19922 to port 22 on hostb which the script uses for testing the connection.

Now just add that script to the ‘tunnel’ user’s crontab to check every few minutes, and it will automatically create the tunnel and reconnect it if something fails. When it does create a new connection, it will send an email to the ‘tunnel’ user, so you can create a .forward file to forward those messages to you.
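The crontab entry for the ‘tunnel’ user might look like this, checking every five minutes (add it with ‘crontab -e’ as that user):

*/5 * * * * /home/tunnel/check_ssh_tunnel.sh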

Identifying Weak SSL or SSH Keys on CentOS

With the Debian OpenSSL problems, everybody is wanting to know if their server is vulnerable to any attacks. Fortunately, CentOS machines shouldn’t be directly affected and have fewer issues than Debian or Ubuntu derivatives. Unfortunately, your system may still be vulnerable if you have any users who may have generated their keys on an affected machine, so it is definitely necessary to check, even if you are not running an affected distribution.

These are the steps I have been going through to look for weak keys on a CentOS server.

Download the weak key detector provided by Debian (there may be better tools available by now). It is available on the announcement page. (I’m intentionally not linking to it.)

[root@host ~]# cd /tmp
[root@host tmp]# wget https://security.debian.org/project/extra/dowkd/dowkd.pl.gz
--20:44:31--  https://security.debian.org/project/extra/dowkd/dowkd.pl.gz
Resolving security.debian.org... 128.31.0.36, 130.89.175.54, 212.211.132.32, ...
Connecting to security.debian.org|128.31.0.36|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14231783 (14M) [application/x-gzip]
Saving to: `dowkd.pl.gz'

100%[================================>] 14,231,783  6.42M/s   in 2.1s

20:44:33 (6.42 MB/s) - `dowkd.pl.gz' saved [14231783/14231783]

[root@host tmp]# gunzip dowkd.pl.gz

Then check a couple of known files – start out with your SSH host keys in /etc/ssh/:

[root@host tmp]# perl dowkd.pl file /etc/ssh/*key*
/etc/ssh/ssh_host_dsa_key:1: warning: unparsable line
/etc/ssh/ssh_host_key:1: warning: unparsable line
summary: keys found: 4, weak keys: 0

Then check any certificates in /etc/pki/tls:

[root@host tmp]# for file in `find /etc/pki/tls/ -name "*key"`; do echo -n "$file - "; perl /tmp/dowkd.pl file $file; done
/etc/pki/tls/certs/mydomain.ca.key - summary: keys found: 1, weak keys: 0
/etc/pki/tls/certs/secure.mydomain.ca.key - summary: keys found: 1, weak keys: 0
/etc/pki/tls/private/localhost.key - summary: keys found: 1, weak keys: 0

And for any SSL certificates that Apache might be using in /etc/httpd/conf/ssl.key/:

[root@host tmp]# perl dowkd.pl  file /etc/httpd/conf/ssl.key/*
summary: keys found: 4, weak keys: 0

And finally, check any users who might have authorized a weak key via their authorized_keys files:

[root@host tmp]# for file in `find / -name authorized_keys`; do echo -n "$file "; perl dowkd.pl file $file; done
/home/someuser/.ssh/authorized_keys summary: keys found: 6, weak keys: 0
summary: keys found: 7, weak keys: 0

Note that any output saying ‘warning: no blacklist found’ means that the tool didn’t have a blacklist for that key type, so those keys might need to be checked with another tool unless you are sure that they are okay.

You should also check any other locations where keys might be stored. The locations can vary widely, depending on the configuration of your server. Those specified above should cover most of the default locations on a CentOS 4 or CentOS 5 server, but every server is different. If you don’t find a weak key now, it’s quite likely that an attacker will later.

