Using LastPass to Save Passwords and Log In to Multiple AWS Accounts With Two-Factor Authentication

I have multiple businesses, so I log into AWS multiple times per day.

That is a little tricky to do using LastPass since AWS has some hidden form fields that must be filled in
when using two-factor authentication through Google Authenticator.

In order to make it work correctly, I’ve had to modify the extra details in LastPass to add some extra hidden fields. If you set these up in your LastPass credentials for AWS, you should be able to log in with just a couple of clicks, as usual, instead of having to type in some of those fields every time or having them overwritten.

Also, make sure to check the “Disable Autofill” checkbox on all of your AWS LastPass entries. Otherwise, one of them will overwrite the hidden form fields on the two-factor authentication page.

Ubuntu 20.04 Cloud-Init Example to Create a User That Can Use sudo

Use the steps below and the example config to create a cloud-init file that creates a user, sets their password, and enables SSH access. The Cloud Config documentation has some examples, but they don’t actually produce a user that can SSH into the server and run commands via sudo.

First, create a password hash with the mkpasswd command:

$ mkpasswd -m sha-512

Make note of the output string. You need to enter it exactly in the passwd line of your cloud-init config.
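If mkpasswd isn’t installed, openssl can generate an equivalent SHA-512 crypt hash — a sketch, assuming OpenSSL 1.1.1 or newer (which added the -6 option); the password shown is just an example:

```shell
# Generate a SHA-512 crypt hash suitable for the cloud-init 'passwd' field.
# Note: passing the password on the command line is convenient for testing,
# but it does end up in your shell history.
openssl passwd -6 'correct-horse-battery-staple'
```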

This is the minimal configuration to create a user using cloud-init:

#cloud-config
users:
  - name: brandon
    groups: [ sudo ]
    shell: /bin/bash
    lock_passwd: false
    passwd: "$6$nq4v1BtHB8bg$Oc2TouXN1KZu7F406ELRUATiwXwyhC4YhkeSRD2z/I.a8tTnOokDeXt3K4mY8tHgW6n0l/S8EU0O7wIzo.7iw1"
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1zzzBBBGGGg3BZFFzTexMPpOdq34a6OlzycjkPhsh4Qg2tSWZyXZ my-key-name

A few things that are noteworthy:

  • The string in the passwd field must be enclosed in quotes.
  • lock_passwd: false is required to use sudo. Otherwise, the system user account created will have a disabled password and will be unable to use sudo; you’ll just be asked for a password continually, even if you enter it correctly.
  • I prefer granting sudo access by adding the user to the sudo group. There are other ways to make that work as well, but I feel this is the cleanest.
  • Adding any users will prevent the default ubuntu user from being created.
LastPass Challenges with Multiple Organizations

As a parallel entrepreneur, I’m a participating member of multiple companies. That brings with it some unique challenges, as many software tools don’t gracefully handle a user belonging to multiple organizations. I’ve learned to deal with that in many situations. Typically I have to log out and back in as the desired user, or keep multiple browsers or browser profiles open – one for each organization.

One area that has been particularly challenging has been group password management. There are not a lot of software options, although some new players are emerging. LastPass is the most mature option, and is the product that I have used for a long time. I investigated some alternatives, including 1Password and Dashlane. Both of those looked a little more modern and polished, but neither seemed to have mature support for multiple organizations.

LastPass does claim to have robust support for organizations, but there is minimal, if any, mention on their website or elsewhere of belonging to multiple organizations. It has taken me a lot of experimenting, but I’ve finally come up with a solution that works well.

You might think, as the diagram above indicates, that each organization to which you belong should invite your personal account to become a member of the organization. You would be wrong. Although this seems like the intuitive relationship, it does not work, since LastPass only allows a personal account to attach to exactly one LastPass Enterprise account. Not more.

The correct way to belong to multiple Enterprise accounts in LastPass is to choose one of the organizations to be your “main” account, the one you log in to on a daily basis. You connect your personal account to this enterprise account so that your personal sites appear alongside your work passwords.

Then, to add additional organizations, you don’t purchase a user license in those other organizations. Instead, you create one or more shared folders and share them with the email address for your “main” organization account. There is a limitation that you can’t be an admin of the shared folders in these other organizations, since you are not part of that Enterprise, but sharing and day-to-day password usage work generally as expected.

This method seems less intuitive, but works well now that I’ve figured it out. As I’ve learned more about how LastPass works internally, I understand why this unorthodox configuration is required.

A few other quirks I’ve found, which just take some getting used to:

• Shared folders from my personal account DO NOT SHOW UP when logged into my enterprise account. You have to share to your main organization email address instead.
• Folder structure from my personal account is confusing in the user interface when browsing passwords in my enterprise account. The folder-within-folder structure doesn’t render well, and it is unclear which “level” I’m at.

I hope that the folks at LastPass are able to simplify this, or at least make it more obvious how it is meant to be configured.

Do you have a better solution for password sharing with multiple organizations? Please let me and others know in the comments.

Setting Up Virtualmin on an OpenVZ Guest

I’m experimenting with a hosting control panel and am interested in Virtualmin. I generally avoid web-based control panels, because they tend to make direct configuration via the command line and manual editing of config files very difficult. However, one of Virtualmin’s goals is to not interfere with such manual configuration. I’ve had plenty of clients who use Webmin, and they seem to do a good job, so Virtualmin seems like a good choice.

These are the steps that I went through to get a new OpenVZ guest set up with the GPL version of Virtualmin.

Download a CentOS 5 OS template and create the guest:

# wget
# vzctl create <VEID> --ostemplate centos-5-x86_64

I replaced all of these limits in /etc/vz/<VEID>.conf. This is based on a different running machine with some fairly generous limits. Most importantly, it includes 1GB of RAM.

# UBC parameters (in form of barrier:limit)

Then set up some host-specific parameters and start it up:

# vzctl set <VEID> --ipadd --hostname --nameserver --diskspace 4G --save
# vzctl start <VEID>
# vzctl enter <VEID>

You are now logged in to the guest, where you can download and install Virtualmin:

# yum update
# cd /root
# wget
# sh
 Continue? (y/n) y

That should install without significant errors. Finally, set a password for root, and then log in to Virtualmin to go through the post-installation configuration:

passwd root

Log in at https://<your-ip>:10000/ and go through the post-installation configuration.

ProFTPd allows multiple DefaultRoot lines for flexible chrooting

The ProFTPd documentation gives good examples of how to use the DefaultRoot directive to chroot users to a specific directory.

A customer today wanted to have different chroot directories for different groups of users. The documentation didn’t mention whether it is okay to include multiple DefaultRoot lines. After some experimenting, I can verify that it is allowed and works well.

I used something like this in /etc/proftpd/proftpd.conf:

DefaultRoot                     ~ jailed
DefaultRoot                     ~/../.. othergroup

Users in the group ‘jailed’ are chrooted to their own home directory immediately upon logging in. Users in ‘othergroup’ are chrooted two levels up from their home directory. If you want to get really specific, each user generally has a group of their own, so you can effectively do this at the user level as well.
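For example, if a user’s primary group has the same name as the user (the usual user-private-group setup), a line like this would chroot just that one user — the group name here is hypothetical:

```
DefaultRoot                     ~ brandon
```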

MySQLDump To a Remote Server

I was running out of disk space on a server today. The server had a large database table that was no longer used, so I wanted to archive it and then drop the table. But the server didn’t have enough disk space to dump it out to disk before copying it off to a remote server for archiving.

My first thought was to run mysqldump on the destination machine and have it access the database over the network. That, however, doesn’t compress or encrypt the data, and I would have had to create a MySQL user with permission to access the database remotely.

The solution I came up with worked out well: mysqldump directly to the remote host with this command:

mysqldump <DATABASE_NAME> [mysqldump options] | gzip -c | ssh user@remotehost "cat > /path/to/some-file.sql.gz"

That pipes the mysqldump output through gzip, then through an SSH connection. SSH on the remote side runs the ‘cat’ command to read stdin and redirects it to the actual file where I want it saved.

Testing for Vulnerable Caching Name Servers

Most of the technical community has probably heard of the recently found DNS weakness. The basic premise is that if a recursive nameserver doesn’t use sufficiently random source ports when making recursive queries, it can be vulnerable to an attacker who is trying to poison the cache, or fill it with incorrect data.

I’ve now heard reports about it from various news sources who make it sound much more drastic than it actually is. Granted, it is a serious flaw, but fortunately most companies with any desire for security use SSL, which provides an additional layer of identity verification. Also, for most any company with an IT staff, patching the DNS server with the required fixes should be a fairly trivial task. The most important servers to fix are those run by ISPs and datacenters, both of which should have their servers fixed by now.

Tools for testing your DNS servers are fairly easy to come by. has a web-based test, although I don’t know how it discovers your DNS servers. For Windows users, you can run ‘nslookup’ like this:

C:\Documents and Settings\Brandon>nslookup
Default Server:
> set type=TXT
Non-authoritative answer:   canonical name = porttest.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.
text =
        " is GREAT: 26 queries in 2.3 seconds from 25 ports with std
 dev 16592"

To test from a Linux machine, you can use dns-oarc’s test with dig like this:

root@server:~# dig in txt +short
" is GREAT: 26 queries in 1.2 seconds from 26 ports with std dev 20533"

You are looking for a response that contains GOOD or GREAT. If your results contain something else, you should notify your ISP or data center to have them fix their servers.

Random Password Generator

There are times when I’ve been focusing on programming all day and it is easier to write a program to do something trivial than it is to just do it the simple way. Today was such a day. Instead of typing some random characters to make up a new user’s password, I wrote a script to do it for me:

#!/usr/bin/perl
## Quick Random Password Generator
## Author: Brandon Checketts
my $length = $ARGV[0] || 10;
my $charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890";
my $pw = "";
for (my $i=0; $i < $length; $i++) {
    my $pos = int(rand(length($charset)));
    $pw .= substr($charset, $pos, 1);
}
print "\nRandom Password: $pw\n\n";

Of course you can modify the default length and/or characters to make something more suitable for your use.

Sample Usage:

[root@dev ~]# ~/bin/pwgen
Random Password: mYTZrSpE8B
[root@dev ~]# ~/bin/pwgen 20
Random Password: EoSQpypmeK3SZCVPodaM
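If you just need a one-off password and don’t want a script at all, a similar result can be pulled straight from /dev/urandom — a sketch assuming standard tr and head; the length of 10 matches the script’s default:

```shell
# Emit 10 random alphanumeric characters from the kernel's entropy pool.
# tr -dc deletes every byte NOT in the given set; head -c truncates to 10.
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 10; echo
```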

Getting Un-locked out of Webmin

Webmin has come a long way in the last year or two. I still (and always will) prefer the command line, but many customers are certainly much more comfortable using a web interface to configure their server. I have to reset a password every once in a while and have to look up the steps every time. It seems I can never find them quickly, so I’ll put them here so I can find them easily next time.

Blocked IP addresses are stored in a text file named /var/webmin/blocked. Addresses should be cleared out after some period of time, but you can hasten the process by clearing the file manually:

cp /dev/null /var/webmin/blocked

Webmin passwords are stored in /etc/webmin/miniserv.users. A script for changing a user’s password is provided with Webmin. Just run it like this:

/usr/libexec/webmin/ /etc/webmin root

Creating a Permanent SSH Tunnel Between Linux Servers

I recently had a need to create a permanent SSH tunnel between Linux servers. My need was to allow regular non-encrypted MySQL connections over an encrypted tunnel, but there could be many other uses as well. Google can identify plenty of resources regarding the fundamental SSH commands for port forwarding, but I never found a good resource for setting up a connection and ensuring that it remains active, which is what I hope to provide here.

The SSH commands for port forwarding can be found in the ssh man page. The steps described here will create an unprivileged user named ‘tunnel’ on each server. That user will then be used to create the tunnel and run a script via cron to ensure that it remains up.

First, select one of the servers that will initiate the SSH connection. SSH allows you to map both local and remote ports, so it doesn’t really matter which end of the connection you choose to initiate the connection. I’ll refer to the box that initiates the connection as Host A, and the box that we connect to as Host B.

Create a ‘tunnel’ user on Host A:

[root@hosta ~]# useradd -d /home/tunnel tunnel
[root@hosta ~]# passwd tunnel       ## Set a strong password
[root@hosta ~]# su - tunnel           ## Become the user 'tunnel'

Now create a public/private key pair:

[tunnel@hosta ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/tunnel/.ssh/id_rsa):    ## hit enter to accept the default
Enter passphrase (empty for no passphrase):                           ## don't use a passphrase
Enter same passphrase again:
Your identification has been saved in /home/tunnel/.ssh/id_rsa.
Your public key has been saved in /home/tunnel/.ssh/
The key fingerprint is:
6f:30:b8:e1:36:49:74:b9:32:68:6e:bf:3e:62:d3:c2 tunnel@hosta

Now cat out the file which contains the public key that we will need to put on Host B:

[tunnel@hosta ~]$ cat ~/.ssh/
ssh-rsa blahAAAAB3NzaC1yc2EAAAABIwAAAQEA......6BEKKCxTIxgBqjLP tunnel@hosta

Now create a ‘tunnel’ user on Host B and save the public key for tunnel@hosta in the authorized_keys file:

[root@hostb ~]# useradd -d /home/tunnel tunnel
[root@hostb ~]# passwd tunnel       ## Set a strong password
[root@hostb ~]# su - tunnel
[tunnel@hostb ~]$ mkdir .ssh
[tunnel@hostb ~]$ vi .ssh/authorized_keys   ## Now paste in the public key for tunnel@hosta

At this point you should be able to ssh from tunnel@hosta to tunnel@hostb without using a password. Depending on your configuration, you might need to allow the user ‘tunnel’ in /etc/ssh/sshd_config. You might also set some SSH options, like the destination port, in ~/.ssh/config.
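For example, if sshd on Host B listens on a non-standard port, an entry like this in hosta:/home/tunnel/.ssh/config saves typing it every time (the port number here is hypothetical):

```
Host hostb
    Port 2222
    User tunnel
```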

Now, create this script as hosta:/home/tunnel/

#!/bin/bash

createTunnel() {
    /usr/bin/ssh -f -N -L13306:hostb:3306 -L19922:hostb:22 tunnel@hostb
    rc=$?
    if [[ $rc -eq 0 ]]; then
        echo Tunnel to hostb created successfully
    else
        echo An error occurred creating a tunnel to hostb -- RC was $rc
    fi
}

## Run the 'ls' command remotely.  If it returns non-zero, then create a new connection
/usr/bin/ssh -p 19922 tunnel@localhost ls
if [[ $? -ne 0 ]]; then
    echo Creating new tunnel connection
    createTunnel
fi

Save that file and make it executable:

chmod 700 ~/

This script will attempt to SSH to localhost port 19922 and run the ‘ls’ command. If that fails, it will attempt to create the SSH tunnel. The command to create the SSH tunnel will tunnel local port 13306 to port 3306 on hostb. You should modify that as necessary for your configuration. It will also create a tunnel for local port 19922 to port 22 on hostb, which the script uses for testing the connection.

Now just add that script to the ‘tunnel’ user’s crontab to run every few minutes, and it will automatically create a tunnel and reconnect it if something fails. When it does create a new connection it will send an email to the ‘tunnel’ user, so you can create a .forward file to forward those messages to you.
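The crontab entry for the ‘tunnel’ user might look something like this — the script path is a placeholder, since it should point at wherever you saved the script above (don’t redirect output to /dev/null, or you lose the email notifications):

```
# Edit with 'crontab -e' as the 'tunnel' user: check the tunnel every 5 minutes
*/5 * * * * /home/tunnel/check_tunnel.sh
```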