LastPass Challenges with Multiple Organizations

As a parallel entrepreneur, I’m a participating member of multiple companies. That brings some unique challenges, as many software tools don’t gracefully handle a user belonging to multiple organizations. I’ve learned to deal with that in most situations: typically I have to log out and back in as the desired user, or keep multiple browsers or browser profiles open – one for each organization.

One area that has been particularly challenging is group password management. There are not a lot of software options, although some new players are starting to appear. LastPass is the most mature option, and it is the product that I have used for a long time. I investigated some alternatives, including 1Password and Dashlane. Both looked a little more modern and polished, but neither seemed to have mature support for multiple organizations.

LastPass does claim to have robust support for organizations, but there is minimal, if any, mention on their website or elsewhere of belonging to multiple organizations. It has taken a lot of experimenting, but I’ve finally come up with a solution that works well.

You might think, as the diagram above indicates, that each organization to which you belong should invite your personal account to become a member of the organization. You would be wrong. Although this seems like the intuitive relationship, it does not work since LastPass only allows a personal account to attach to exactly one LastPass Enterprise account. Not more.

The correct way to belong to multiple Enterprise Accounts in LastPass is to choose one of the organizations to be your “Main” account to which you log in on a daily basis. You connect your Personal account to this enterprise account so that your personal sites appear alongside your work passwords.

Then, to add additional organizations, you don’t purchase a user license in those other organizations. Instead you create one or more shared folders, and share the folders with the email address for your “Main” organization account. There is a limitation that you can’t be an admin of the shared folders in these other organizations since you are not part of the Enterprise, but sharing and day-to-day password usage works generally as expected.

This method seems less intuitive, but it works well now that I’ve figured it out. As I’ve learned more about how LastPass works internally, I understand why this unorthodox configuration is required.

A few other quirks I’ve found, which just take some getting used-to:

  • Shared folders from my personal account DO NOT SHOW UP when logged into my enterprise account. You have to share to your main organization email address instead.
  • Folder structure from my Personal account is confusing in the user interface when browsing passwords in my enterprise account. The folder-within-folder structure doesn’t render well, and it is hard to tell which “level” I’m at.

I hope that the folks at LastPass are able to simplify this or make it more obvious how it is to be configured.

Do you have a better solution for password sharing with multiple organizations? Please let me and others know in the comments.

Setting Up Virtualmin on an OpenVZ Guest

I’m experimenting with a hosting control panel and am interested in Virtualmin. I generally avoid web-based control panels because they tend to make direct configuration via the command line and manual editing of config files very difficult. However, one of Virtualmin’s goals is not to interfere with such manual configuration. I’ve had plenty of clients who use Webmin, and it seems to do a good job, so Virtualmin seems like a good choice.

These are the steps that I went through to get a new OpenVZ guest set up with the GPL version of Virtualmin.

Download a CentOS 5 OS template and create the guest

# wget http://download.openvz.org/template/precreated/centos-5-x86_64.tar.gz
# vzctl create <VEID> --ostemplate centos-5-x86_64

I replaced the limits in /etc/vz/<VEID>.conf with the values below. They are based on a different running machine with some fairly generous limits; most importantly, they allow for 1GB of RAM.

# UBC parameters (in form of barrier:limit)
KMEMSIZE="43118100:44370492"
LOCKEDPAGES="256:256"
PRIVVMPAGES="262144:262144"
SHMPAGES="21504:21504"
NUMPROC="2000:2000"
PHYSPAGES="0:9223372036854775807"
VMGUARPAGES="65536:65536"
OOMGUARPAGES="26112:9223372036854775807"
NUMTCPSOCK="360:360"
NUMFLOCK="380:420"
NUMPTY="16:16"
NUMSIGINFO="256:256"
TCPSNDBUF="10321920:16220160"
TCPRCVBUF="1720320:2703360"
OTHERSOCKBUF="4504320:16777216"
DGRAMRCVBUF="262144:262144"
NUMOTHERSOCK="5000:5000"
DCACHESIZE="3409920:3624960"
NUMFILE="18624:18624"
AVNUMPROC="180:180"
NUMIPTENT="128:128"

Then set up some host-specific parameters and start it up.

# vzctl set <VEID> --ipadd 10.0.0.1 --hostname yourhostname.com --nameserver 4.2.2.1 --diskspace 4G --save
# vzctl start <VEID>
# vzctl enter <VEID>

You are now logged in to the guest, where you can download and install Virtualmin:

# yum update
# cd /root
# wget http://software.virtualmin.com/gpl/scripts/install.sh
# sh install.sh
 Continue? (y/n) y

That should install without significant errors. Finally, set a password for root:

passwd root

Then log in at https://<your-ip>:10000/ and go through the post-installation configuration.
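
If the login page doesn’t come up, it can be worth verifying that the Webmin/Virtualmin web server is actually listening before digging further. A quick sanity check (assuming the default port of 10000):

# netstat -tlnp | grep :10000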

ProFTPd allows multiple DefaultRoot lines for flexible chrooting

The ProFTPd documentation gives good examples of how to use the DefaultRoot directive to chroot users to a specific directory.

A customer today wanted to have different chroot directories for different groups of users. The documentation didn’t mention if it was okay to include multiple DefaultRoot lines. After some experimenting, I can verify that it is allowed and works well.

I used something like this in /etc/proftpd/proftpd.conf

DefaultRoot                     ~ jailed
DefaultRoot                     ~/../.. othergroup

Users in the group ‘jailed’ are chrooted to their own home directory immediately upon logging in. Users in ‘othergroup’ are chrooted two levels up from their home directory. If you want to get really specific, each user generally has a group of their own, so you can effectively do this at the user level as well.
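
For example, a hypothetical user ‘alice’, whose private group is also named ‘alice’, could be jailed to her own home directory with an entry like this:

DefaultRoot                     ~ alice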

MySQLDump To a Remote Server

I was running out of disk space on a server today. The server had a large database table that was no longer used, so I wanted to archive it and then drop the table. But the server didn’t have enough disk space to dump it out to disk before copying it off to a remote server for archiving.

My first thought was to run mysqldump on the destination machine and access the database over the network. That, however, doesn’t compress or encrypt the data, and I would have had to create a MySQL user with permission to access the database remotely.

The solution I came up with worked out well: mysqldump directly to the remote host with this command:

mysqldump <DATABASE_NAME> [mysqldump options] | gzip -c | ssh user@remotehost "cat > /path/to/some-file.sql.gz"

That pipes the mysqldump output through gzip, then through an SSH connection. SSH on the remote side runs the ‘cat’ command to read stdin and redirects it to the file where I want it saved.
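
To restore from that archive later, the same idea works in reverse. Assuming the database already exists on the machine you are restoring to, something like this should work:

ssh user@remotehost "cat /path/to/some-file.sql.gz" | gunzip | mysql <DATABASE_NAME>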

Testing for Vulnerable Caching Name Servers

Most of the technical community has probably heard of the recently discovered DNS weakness. The basic premise is that if a recursive nameserver doesn’t use sufficiently random source ports when making recursive queries, it can be vulnerable to an attacker who is trying to poison the cache, or fill it with incorrect data.

I’ve now heard reports about it from various news sources that make it sound much more drastic than it actually is. Granted, it is a serious flaw, but fortunately most companies with any desire for security use SSL, which provides an additional layer of identity verification. Also, for almost any company with an IT staff, patching the DNS server with the required fixes should be a fairly trivial task. The most important servers to fix are those run by ISPs and datacenters, both of which should have their servers fixed by now.

Tools for testing your DNS servers are fairly easy to come by. dns-oarc.net has a web-based test, although I don’t know how it discovers your DNS servers. For Windows users, you can run ‘nslookup’ like this:

C:\Documents and Settings\Brandon>nslookup
Default Server:  cns.manassaspr.va.dc02.comcast.net
Address:  68.87.73.242

> set type=TXT
> porttest.dns-oarc.net
Server:  cns.manassaspr.va.dc02.comcast.net
Address:  68.87.73.242

Non-authoritative answer:
porttest.dns-oarc.net   canonical name = porttest.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.
j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net
porttest.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net
text =

        "68.87.73.245 is GREAT: 26 queries in 2.3 seconds from 25 ports with std
 dev 16592"
>

To test from a linux machine, you can use dns-oarc’s test with dig like this:

root@server:~# dig porttest.dns-oarc.net in txt +short
porttest.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
"72.249.0.34 is GREAT: 26 queries in 1.2 seconds from 26 ports with std dev 20533"

You are looking for a response that contains GOOD or GREAT. If your results contain something else, you should notify your ISP or data center to have them fix their servers.
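
If you want to test a specific resolver rather than whatever is configured in /etc/resolv.conf, dig can point the same query at it directly. The address below is just a placeholder for the resolver you want to check:

root@server:~# dig @<resolver-ip> porttest.dns-oarc.net in txt +short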

Random Password Generator

There are times when I’ve been focusing on programming all day and it is easier to write a program to do something trivial than it is to just do it the simple way. Today was such a day. Instead of typing some random characters to make up a new user’s password, I wrote a script to do it for me:

#!/usr/bin/perl
#################################################################
## Quick Random Password Generator
## Author: Brandon Checketts
## http://www.brandonchecketts.com/
#################################################################

my $length = $ARGV[0] || 10;

my $charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890";

my $pw = "";
for (my $i=0; $i < $length; $i++) {
    my $pos = rand(length($charset));
    $pw .= substr($charset, $pos, 1);
}
print "\\nRandom Password: $pw\\n\\n";

Of course you can modify the default length and/or characters to make something more suitable for your use.

Sample Usage:

[root@dev ~]# ~/bin/pwgen

Random Password: mYTZrSpE8B

[root@dev ~]# ~/bin/pwgen 20

Random Password: EoSQpypmeK3SZCVPodaM

Getting Un-locked out of Webmin

Webmin has come a long way in the last year or two. I still (and always will) prefer the command line, but many customers are certainly much more comfortable using a web-interface to configure their server. I have to reset a password every once in a while and have to look up the steps every time. It seems I can never find them quickly, so I’ll put them here so I can find them easily next time.

Blocked IP addresses are stored in a text file named /var/webmin/blocked. Blocked addresses should be cleared out automatically after some period of time, but you can hasten the process by clearing the file manually:

cp /dev/null /var/webmin/blocked

Webmin passwords are stored in /etc/webmin/miniserv.users. A script for changing a user’s password is provided with Webmin. Just run it like this:

/usr/libexec/webmin/changepass.pl /etc/webmin root <newpassword>
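
If either change doesn’t seem to take effect right away, restarting Webmin should force it to re-read both files. This assumes the init script location used by the standard RPM install:

/etc/init.d/webmin restart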

Creating a Permanent SSH Tunnel Between Linux Servers

I recently had a need to create a permanent SSH tunnel between Linux servers. My need was to allow regular non-encrypted MySQL connections over an encrypted tunnel, but there could be many other uses as well. Google can identify plenty of resources regarding the fundamental SSH commands for port forwarding but I didn’t ever find a good resource for setting up a connection and ensuring that it remains active, which is what I hope to provide here.

The SSH commands for port forwarding can be found in the ssh man page. The steps described here will create an unprivileged user named ‘tunnel’ on each server. That user will then be used to create the tunnel and run a script via cron to ensure that it remains up.

First, select one of the servers that will initiate the SSH connection. SSH allows you to map both local and remote ports, so it doesn’t really matter which end you choose to initiate the connection. I’ll refer to the box that initiates the connection as Host A, and the box that we connect to as Host B.

Create a ‘tunnel’ user on Host A:

[root@hosta ~]# useradd -d /home/tunnel tunnel
[root@hosta ~]# passwd tunnel       ## Set a strong password
[root@hosta ~]# su - tunnel           ## Become the user 'tunnel'

Now create a public/private key pair:

[tunnel@hosta ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/tunnel/.ssh/id_rsa):    ## hit enter to accept the default
Enter passphrase (empty for no passphrase):                           ## don't use a  passphrase
Enter same passphrase again:
Your identification has been saved in /home/tunnel/.ssh/id_rsa.
Your public key has been saved in /home/tunnel/.ssh/id_rsa.pub.
The key fingerprint is:
6f:30:b8:e1:36:49:74:b9:32:68:6e:bf:3e:62:d3:c2 tunnel@hosta

Now cat out the id_rsa.pub file which contains the public key that we will need to put on host b:

[tunnel@hosta ~]$ cat ~/.ssh/id_rsa.pub
ssh-rsa blahAAAAB3NzaC1yc2EAAAABIwAAAQEA......6BEKKCxTIxgBqjLP tunnel@hosta

Now create a ‘tunnel’ user on Host B and save the public key for tunnel@hosta in the authorized_keys file

[root@hostb ~]# useradd -d /home/tunnel tunnel
[root@hostb ~]# passwd tunnel       ## Set a strong password
[root@hostb ~]# su - tunnel
[tunnel@hostb ~]$ mkdir .ssh
[tunnel@hostb ~]$ chmod 700 .ssh            ## sshd is picky about permissions
[tunnel@hostb ~]$ vi .ssh/authorized_keys   ## Now paste in the public key for tunnel@hosta
[tunnel@hostb ~]$ chmod 600 .ssh/authorized_keys

At this point you should be able to ssh from tunnel@hosta to tunnel@hostb without using a password. Depending on your configuration, you might need to allow the user ‘tunnel’ in /etc/ssh/sshd_config. You might also set some SSH options like the destination port in ~/.ssh/config.
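
For example, a minimal ~/.ssh/config for the ‘tunnel’ user on Host A might look something like this (the alias, hostname, and port here are only placeholders; adjust for your own setup):

Host hostb
    HostName hostb.example.com
    Port 22
    User tunnel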

Now, create this script as hosta:/home/tunnel/check_ssh_tunnel.sh

#!/bin/bash

createTunnel() {
    /usr/bin/ssh -f -N -L13306:hostb:3306 -L19922:hostb:22 tunnel@hostb
    rc=$?
    if [[ $rc -eq 0 ]]; then
        echo "Tunnel to hostb created successfully"
    else
        echo "An error occurred creating a tunnel to hostb. RC was $rc"
    fi
}

## Run the 'ls' command remotely.  If it returns non-zero, then create a new connection
/usr/bin/ssh -p 19922 tunnel@localhost ls
if [[ $? -ne 0 ]]; then
    echo "Creating new tunnel connection"
    createTunnel
fi

Save that file and make it executable:

chmod 700 ~/check_ssh_tunnel.sh

This script will attempt to SSH to localhost port 19922 and run the ‘ls’ command. If that fails, it will attempt to create the SSH tunnel. The command to create the SSH tunnel will tunnel local port 13306 to port 3306 on hostb. You should modify that as necessary for your configuration. It will also create a tunnel for local port 19922 to port 22 on hostb which the script uses for testing the connection.

Now just add that script to the user ‘tunnel’s crontab to check every few minutes, and it will automatically create a tunnel and reconnect it if something fails. When it does create a new connection it will send an email to the ‘tunnel’ user, so you can create a .forward file to forward those messages to you.
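
For example, an entry in the ‘tunnel’ user’s crontab that runs the check every five minutes might look like this (the interval is just a suggestion):

*/5 * * * * /home/tunnel/check_ssh_tunnel.sh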

Identifying Weak SSL or SSH Keys on CentOS

With the Debian OpenSSL problems, everybody wants to know if their server is vulnerable to any attacks. Fortunately, CentOS machines shouldn’t be directly affected and have fewer issues than Debian or Ubuntu derivatives. Unfortunately, your system may still be vulnerable if any of your users generated their keys on an affected machine, so it is definitely necessary to check, even if you are not running an affected distribution.

These are the steps I have been going through to look for weak keys on a CentOS server.

Download the weak key detector provided by Debian (there may be better tools to use by now). It is available on the announcement page. (I’m intentionally not linking to it.)

[root@host ~]# cd /tmp
[root@host tmp]# wget http://security.debian.org/project/extra/dowkd/dowkd.pl.gz
--20:44:31--  http://security.debian.org/project/extra/dowkd/dowkd.pl.gz
Resolving security.debian.org... 128.31.0.36, 130.89.175.54, 212.211.132.32, ...
Connecting to security.debian.org|128.31.0.36|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14231783 (14M) [application/x-gzip]
Saving to: `dowkd.pl.gz'

100%[================================>] 14,231,783  6.42M/s   in 2.1s

20:44:33 (6.42 MB/s) - `dowkd.pl.gz' saved [14231783/14231783]

[root@host tmp]# gunzip dowkd.pl.gz

Then check a couple of known files. Start with your SSH host keys in /etc/ssh/:

[root@host tmp]# perl dowkd.pl file /etc/ssh/*key*
/etc/ssh/ssh_host_dsa_key:1: warning: unparsable line
/etc/ssh/ssh_host_key:1: warning: unparsable line
summary: keys found: 4, weak keys: 0

Then check any certificates in /etc/pki/tls:

[root@host tmp]# for file in `find /etc/pki/tls/ -name "*key"`; do echo -n "$file - "; perl /tmp/dowkd.pl file $file; done
/etc/pki/tls/certs/mydomain.ca.key - summary: keys found: 1, weak keys: 0
/etc/pki/tls/certs/secure.mydomain.ca.key - summary: keys found: 1, weak keys: 0
/etc/pki/tls/private/localhost.key - summary: keys found: 1, weak keys: 0

And for any SSL certificates that Apache might be using in /etc/httpd/conf/ssl.key/:

[root@host tmp]# perl dowkd.pl  file /etc/httpd/conf/ssl.key/*
summary: keys found: 4, weak keys: 0

And finally, any users who might have authorized a weak key via their authorized_keys file:

[root@host tmp]# for file in `find / -name authorized_keys`; do echo -n "$file "; perl dowkd.pl file $file; done
/home/someuser/.ssh/authorized_keys summary: keys found: 6, weak keys: 0
summary: keys found: 7, weak keys: 0

Note that any keys which report ‘warning: no blacklist found’ are of a type the tool has no blacklist for, so they might need to be checked with another tool unless you are sure that they are okay.

You should also check any other locations for keys. The locations could vary widely on different machines, depending on the configuration of your server. The locations specified above should cover most of the defaults on a CentOS 4 or CentOS 5 server, but every server is different. If you don’t find a weak key now, it’s quite likely that an attacker will later.
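
Beyond the locations above, one more place worth looking is user-generated key pairs in home directories. A rough sketch that only looks under /home and assumes the standard OpenSSH file names:

[root@host tmp]# for file in `find /home -name "id_rsa.pub" -o -name "id_dsa.pub"`; do echo -n "$file "; perl /tmp/dowkd.pl file $file; done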

bcSpamblock Updated to Version 1.3

Thanks to jontiw for pointing out a potential problem in my bcSpamblock code. He noted that the PHP crypt() function returns the salt along with the encrypted value. My code was passing the salt to the visitor, so an attacker could potentially learn the salt value that a website was using and create valid responses.

I modified the code to strip out the salt before passing it to the user. I also modified the data used to create the salt so that the previously vulnerable version doesn’t use the same value for the site. The WordPress plugin has been updated as well.

I was happy to see other people looking through my code and pointing this type of issue out.