How we saved over $700/month by switching from Carta to Google Drive

Carta is the gold standard for startups to manage their cap table, but it comes at a price.

One of my companies hasn’t really raised any money, but we have 50+ stakeholders due to a merger and employee options. We execute maybe 2-3 documents per year related to capital, so Carta’s $8,400 annual price cost us about $4,000 per transaction. Obviously, that is absurd.

We ended up downloading all of the reports and PDFs of all existing options, and we added instructions for what to do when new options are granted, exercised, etc. We keep the cap table and related documents in a Google Drive that we already pay for, and ended up saving $8,400+ per year!

I understand that there are a few other things, such as 409A valuations and the peace of mind that comes with having professional software like Carta manage your cap table, but for us the savings are an easy trade-off.

How Do Clients Securely Connect to SSL & HTTPS Servers?

This question came from Steven Chu on my previous post about MySQL SSL Connections without Client Certificates: how is the client able to securely connect to a server using SSL if it doesn’t already know or trust the server’s certificate?

It is important to understand that there are a few different, interrelated topics here. All of these involve SSL and certificates, but in differing ways, so they are often conflated. Secure communication over SSH shares the same concepts, but has different mechanisms.

  1. Encryption of the traffic between client and server.
  2. Verification that the server is who the client believes it to be.
  3. Authentication of the client to the server.

For SSL and HTTPS communication, the first two concepts are accomplished together because there is no point in communicating securely with a remote party if you can’t verify that the remote party is who they claim to be and that there isn’t a “Man in the Middle” able to intercept the secure traffic.

You actually communicate securely with unknown servers all of the time. When you loaded this web page, your browser didn’t know anything about www.brandonchecketts.com beforehand. Same thing when you load your bank’s website. You never configured your browser specifically to trust their website. So how is it able to verify that it is actually your bank, and not an attacker who is impersonating your bank?
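
Modern TLS libraries perform that verification for you. Here is a minimal sketch in Python (using this site as the example host) where the connection only succeeds if the server presents a certificate that chains to a trusted root and matches the host name:

import socket
import ssl

hostname = "www.brandonchecketts.com"
context = ssl.create_default_context()  # loads the OS's trusted root CAs

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket() raises SSLCertVerificationError if the certificate
    # doesn't chain to a trusted root or doesn't match the host name
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())  # e.g. TLSv1.3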

Certificate Authorities

Anybody can create an SSL certificate with any name on it. In my SSL Certificate Notes post, you can find instructions for creating an SSL certificate. Note that you simply type in the name for the certificate, so you could attempt to create a certificate for any host you care to try. However, an essential part of the process is the signing of the certificate. You can self-sign a certificate with any name on it, but if you want your certificate to be recognized and trusted by anybody else, you need to have a recognized Certificate Authority (CA) sign it. If you were to create a certificate for www.google.com, no Certificate Authority would sign it, since you can’t prove that you control that domain, and without a trusted CA’s signature nobody else in the world is going to trust it.

When a Certificate Authority signs a certificate, it is their job to verify, in some way, that the certificate owner is who they claim to be. On the public internet, that is largely done through DNS or email validation. For example, on this site, I use a certificate issued by Amazon Web Services. In order to obtain that certificate, I had to verify that I own the domain. Since the domain is also hosted at AWS, it is quite easy for me to create the DNS records for verification, and AWS can validate them within seconds. I couldn’t, for example, obtain a certificate for ‘www.google.com’. I’d be unable to validate it with any Certificate Authority since I can’t create the required DNS entries or receive emails at the required addresses for google.com.

Extended Validation certificates, offered by some Certificate Authorities and recognized by some web browsers with a different color banner, often require additional verification steps beyond DNS or email.

Intermediate Certificates and Multiple Layers of Certificate Authorities

When a certificate is signed by a recognized Certificate Authority, your client can trust it, because it trusts the Certificate Authority. On the public Internet, most of the time there are multiple layers of Certificate Authorities.

On OSX, you can find the list of root certificates it trusts in the “Keychain Access” system app, in the “System Roots” section. On an Ubuntu or Debian Linux system, the trusted certificates are files that exist in, or are symlinked into, `/etc/ssl/certs`. These systems ship with dozens to hundreds of certificates that they trust. Look closely and you’ll see that most of them expire 10+ years in the future. These “root” certificates are highly protected and usually don’t sign end-entity certificates directly. The Certificate Authority will often delegate to intermediate authorities, with their own keys, that can further sign certificates.
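
You can also ask your TLS library where it finds those trusted roots. A quick sketch in Python (the exact paths vary by OS and build):

import ssl

# Shows the default CA file and directory this Python build trusts;
# on Ubuntu or Debian it typically points into /etc/ssl/certs
print(ssl.get_default_verify_paths())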

In Chrome, you can click the lock icon next to the URL and find details about the certificate, including the intermediate certificates. As of this writing, the SSL certificate issued to brandonchecketts.com is signed by “Amazon”, which is trusted by the root certificate named “Amazon Root CA 1”. I can find that root certificate in the list of certificates trusted by my OSX system.

Client Authentication

I mentioned the third step above: the client authenticating to the server. In many cases, like this website, there is no need for the client to authenticate to the server, since the content is public and intended to be viewed anonymously. If authentication is required, for instance to create a new post, then I simply log in with a username and password entered over the HTTPS connection, the same as you do every day.

Many well-meaning articles about generating SSL certificates for services other than HTTPS mention creating an SSL client certificate. The client certificate is then provided to the server so that the server can validate that the client is who they claim to be. A client certificate is simply an alternate (often thought of as “more secure”) method of authentication compared to a username and password, and is sometimes used in addition to one. In practice, I’ve seen that usernames and passwords transmitted over an encrypted connection are very common, well understood, and just as secure as using an SSL client certificate.
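
For completeness, here is a minimal sketch of what presenting a client certificate looks like in Python. The host name and certificate files are hypothetical, and the server must be configured to request and verify client certificates:

import socket
import ssl

hostname = "service.example.com"  # hypothetical host requiring client certs
context = ssl.create_default_context()
context.load_cert_chain(certfile="client.crt", keyfile="client.key")

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # The server can now verify this client's certificate in addition
        # to, or instead of, a username and password
        print(tls.version())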

Silly Security: TreasuryDirect.gov is the worst website ever

I saw some content today about savings bonds having a great interest rate, so I tried to sign up. I didn’t know I was about to waste an hour simply creating an account. This has to be the worst website I’ve ever seen.

Somewhere in the middle of the process, after entering a fantastic password generated by my password manager, I was presented with this virtual keyboard to log back into the site. You are forced to enter your password by clicking the keys on the virtual keyboard. Entering 40 random characters by clicking on the image is SUPER TEDIOUS.

Not to mention, it took me about 10 attempts to enter the password correctly. I didn’t notice it until I was extremely frustrated, but clicking a key on the virtual keyboard will sometimes register the character twice.

After getting into the site, any attempt to navigate using the browser’s forward/back buttons will immediately log you out, as will an accidental double-click on any of the navigation.

It’s a good thing they have a monopoly on savings bonds, because nobody would try to use this and stay sane!

Silly Security: Don’t Show Me The Secret, Then Confirm I Have It!

I just received a replacement credit card from HealthEquity because my previous card is expiring. Their validation screens made me laugh.

The first screen shows the card you are replacing, and includes the last four digits of the card.

Then the following screen asks for the last four digits of the card number “In order to verify possession”.

You probably shouldn’t tell me the last four digits before asking me to confirm that I have the card.

Make Sure You Are Calculating Net Promoter Score Correctly

The Net Promoter Score can be a pretty valuable metric for determining customer happiness, and, more importantly, how likely your customers are to tell other people about your product.

The basic idea is that you ask customers how likely they are to recommend your product to someone, on a 0-10 scale. Those who respond with a 9 or 10 are considered “Promoters”: when asked about your product, they’ll respond positively and encourage others to use it as well. Customers who answer with a 7 or 8 (sometimes called “Passives”) are satisfied, but not likely to talk positively about your product. Customers who answer with a 6 or below are considered “Detractors”: when asked about your product, they’ll respond negatively, detracting from your reputation. Your score is the percentage of promoters minus the percentage of detractors, so if you have more promoters than detractors, your NPS score will be positive; more detractors than promoters will result in a negative NPS score.
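
Here is a minimal sketch of the calculation with made-up survey responses, along with the simple average that is sometimes mistaken for it:

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

responses = [10, 9, 8, 7, 6, 10, 3, 9, 5, 10]   # made-up data
print(nps(responses))                   # 20.0 (5 promoters, 3 detractors)
print(sum(responses) / len(responses))  # 7.7 -- an average, NOT an NPS score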

There is an excellent tool for calculating your Net Promoter Score at Delighted.com that helps to visualize this.

I was recently meeting with a leadership team and they mentioned that their Net Promoter Score was 6.6. That’s not a great score, but it’s not terrible. I don’t usually hear it expressed as a decimal, but I didn’t think much of it. Meeting with the team again after several months, they kept mentioning an NPS score with a decimal, and it had increased to 6.7. That’s when I began to ask questions about how they were calculating it. It turns out it was a simple average of a rating from 1 to 10. That is NOT an NPS score! If anybody ever tells you their Net Promoter Score is between 1 and 10, dig in and make sure they are calculating it correctly! Scores should range from -100 (all detractors) to +100 (all promoters).

When calculated correctly, this product’s NPS score was actually negative. That helps to explain why revenue growth has been a challenge and marketing dollars are not moving the needle as they’d like.

Contrast that with another organization I meet with regularly. They calculate their NPS Score correctly and it’s a 60! No wonder this company has incredible growth and is doing well.

While your NPS score is negative, your first priority should be fixing the product and customer experience. Otherwise, every customer who signs up is likely to discourage others from using your product.

How to Think About Annual Contracts, Up-front Payments

I’ve helped several teams lately go through an analysis of when to consider annual prepayments for services. These are some of the decision criteria and metrics that I use to consider if an annual contract or pre-payment should be considered.

As a baseline, calculate the full amount that you would pay monthly. For most software products, this is the regularly advertised price. Make sure you are looking at the actual monthly plan price, though. A lot of services have started advertising as “$x per month billed annually”; make sure to select the monthly payment price when you see that. Some services, like commercial insurance, charge a small per-payment fee for “installment plans” that should be included.

Next, calculate the full price if paid up-front. Of course, you need to include any discounts that are offered. Sometimes an offer covers a period other than one year, such as “buy now and get 13 months for the price of 12”, which makes it a little more complex. In that case, you could consider the annual price to be 12/13 of the amount you pay. Or, if the extra month is not really material, you may choose to ignore the extra month.
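
As a sketch of that arithmetic with made-up prices:

monthly_price = 100.00   # made-up advertised monthly price
annual_price = 1020.00   # made-up quoted up-front price

# Effective discount for prepaying a full year
discount = 1 - annual_price / (12 * monthly_price)
print(f"{discount:.0%}")  # 15%

# "13 months for the price of 12": annualize by taking 12/13 of the payment
annual_equivalent = (12 * monthly_price) * 12 / 13
print(f"{annual_equivalent:.2f}")  # 1107.69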

After you’ve got those two numbers (the annual and monthly prices), you should consider the other terms and internal needs.

Consider if your usage of the service is expected to change much over the next 12 months.

Also, consider how much flexibility you lose with an annual pre-payment. Some services, like Slack, give you a credit if usage decreases. Others have no flexibility, and you pay the full amount even if usage decreases or you cancel.

In general, I expect around a 15% discount for a full up-front payment and very flexible terms for changes in usage or cancellation. If terms are more strict, I’d aim for more like a 30% (or more) discount for the commitment and up-front payment.

Finally, consider your own cash flow and capital positions. If you have plenty of cash in the bank, you can lean toward the savings of an annual prepayment. If you don’t have a lot of cash, you’ll favor the monthly terms.

What are your thoughts and experience? What else should be considered when evaluating annual payments?

Using LastPass to Save Passwords and Log In to Multiple AWS Accounts With Two-Factor Authentication

I have multiple businesses, so I log into AWS multiple times per day.

That is a little tricky to do using LastPass since AWS has some hidden form fields that must be filled in when using two-factor authentication through Google Authenticator.

In order to make it work correctly, I’ve had to modify the extra details in LastPass to add some extra hidden fields. If you set these up in your LastPass credentials for AWS, you should be able to log in with just a couple clicks, like usual, instead of having to type in some of those fields every time or having them overwritten.

Also, make sure to check the “Disable Autofill” checkbox on all of your AWS LastPass entries. Otherwise, one of them will overwrite the hidden form fields on the two-factor authentication page.

Ubuntu 20.04 Cloud-Init Example to Create a User That Can Use sudo

Use the steps below and the example config to create a cloud-init file that creates a user, sets their password, and enables SSH access. The Cloud Config documentation has some examples, but they don’t actually work if you want to SSH into the server and run commands via sudo.

First, create a password hash with the mkpasswd command:

$ mkpasswd -m sha-512
Password:  
$6$nq4v1BtHB8bg$Oc2TouXN1KZu7F406ELRUATiwXwyhC4YhkeSRD2z/I.a8tTnOokDeXt3K4mY8tHgW6n0l/S8EU0O7wIzo.7iw1

Make note of the output string. You need to enter it exactly in the passwd line of your cloud-init config.
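
If you don’t have mkpasswd handy, you can generate an equivalent SHA-512 hash with Python’s standard-library crypt module (a sketch; note that crypt is Unix-only and deprecated as of Python 3.11):

import crypt
import getpass

# Produces the same style of hash as `mkpasswd -m sha-512`
password = getpass.getpass()
print(crypt.crypt(password, crypt.mksalt(crypt.METHOD_SHA512)))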

This is the minimal configuration to create a user using cloud-init:

users:
  - name: brandon
    groups: [ sudo ]
    shell: /bin/bash
    lock_passwd: false
    passwd: "$6$nq4v1BtHB8bg$Oc2TouXN1KZu7F406ELRUATiwXwyhC4YhkeSRD2z/I.a8tTnOokDeXt3K4mY8tHgW6n0l/S8EU0O7wIzo.7iw1"
    ssh-authorized-keys:
    - ssh-ed25519 AAAAC3NzaC1lZDI1zzzBBBGGGg3BZFFzTexMPpOdq34a6OlzycjkPhsh4Qg2tSWZyXZ my-key-name

A few things that are noteworthy:

  • The string in the passwd field is enclosed in quotes
  • lock_passwd: false is required to use sudo. Otherwise, the user account will be created with a disabled password and will be unable to use sudo; you’ll just be asked for a password continually, even if you enter it correctly.
  • I prefer the method of adding the user to the sudo group to grant access to sudo. There are other ways to make that work as well, but I feel like this is the cleanest.
  • Adding any users will prevent the default ubuntu user from being created.
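
To try a config like this out, one option (a hedged sketch, assuming boto3 and placeholder AMI and region values) is to pass the file as user data when launching an EC2 instance:

import boto3

# Hypothetical launch of an Ubuntu 20.04 instance with the cloud-init
# config above as user data. The AMI ID and region are placeholders.
# Note: when supplied as user data, the file must begin with a
# "#cloud-config" first line so cloud-init interprets it.
ec2 = boto3.client("ec2", region_name="us-east-1")
with open("cloud-init.yaml") as f:
    user_data = f.read()

ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",  # placeholder Ubuntu 20.04 AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,  # cloud-init consumes this on first boot
)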

Google Docs and Sheets should Almost Always be Restricted to Defined Users

Somebody sends you a link to a Google Sheet and it just works. It’s magical.

But that magic comes at a cost. I see far, far too many organizations that regularly share Google Documents and Sheets using the share with “Anyone with the link” option that Google easily provides.

That is almost ALWAYS a bad idea. The convenience of having it shared with anybody is, at the same time, a potential security problem today and in the future.

But that long link with the 44 random-looking characters would be impossible for somebody to guess, right?

Yes. It would be statistically improbable for somebody to guess a random string of 44 characters that resolves to an actual document. An attacker could write a program that tries millions and millions of links until it finds documents that actually exist, but that’s not the most likely weakness.

Consider what happens when you email a link to your spreadsheet to somebody else. You have zero control over who accesses it after that. What if the recipient forwards your email with the link to somebody else? Emails to businesses are often forwarded into Customer Relationship Management (CRM) or similar systems, where that link is now accessible to many other people in the organization. What if an attacker has access to a recipient’s email? Or to a CRM system? What if an employee leaves the company and still has the link in their browser history?

In all of those scenarios, and hundreds more that you can’t imagine, if your document is shared with “Anyone with the link”, literally anybody who sees that link can open it, and you have absolutely no knowledge that they did.

Always share only with specific email addresses.

Sharing with Google Groups

Sharing with specific people can become a headache to maintain as people change roles. Consider using the Google Groups feature in your organization. You can set up a Google Group like ‘client-yourclientname@myorganization.com’ or ‘team-myteamname@myorganization.com’ and ask to have documents shared with that group instead of with individual people. You can then add and remove people from the groups to provide access to only those who are allowed.

See more information about sharing with Groups at https://support.google.com/a/users/answer/9308872?hl=en

Solving ECS Stuck in Pending and Frozen / Stalled ECS Hosts Problems

We’ve had a strange, hard-to-track-down problem for months now. It has felt like a bug with Amazon ECS, but everything seemed to be working correctly.

The main way that we’ve observed this problem is that ECS would say that it was launching tasks, but they would stay in a “PENDING” state forever. Conversely, when tasks needed to be killed, the desired state would change to Stopped, but the ECS Console would indicate that they were still running. We quickly discovered that some of our ECS host servers would become completely unresponsive, sometimes with 100% CPU usage, sometimes with near-zero CPU usage. Terminating the instance and having the Auto-Scaling group recreate it would generally solve the problem, but it’s never good to have things frozen without understanding why.

Often the host servers were completely unresponsive, and we were usually unable to SSH in to investigate. When we were able to access them, we looked through the logs and found them full of failures about being unable to talk to external resources. After diving pretty deep, we figured out that the route table was missing a default gateway. It’s hard to talk to anything when you can only use the local network.

This is an example of a missing default gateway:

[ec2-user@ip-172-31-45-74 ~]$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.31.32.0     0.0.0.0         255.255.240.0   U     0      0        0 eth0

On a functioning instance, it should look like this. Notice the destination of 0.0.0.0 with the IP address of the default gateway:

[ec2-user@ip-172-31-39-228 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.31.32.1     0.0.0.0         UG    0      0        0 eth0
169.254.169.254 0.0.0.0         255.255.255.255 UH    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.31.32.0     0.0.0.0         255.255.240.0   U     0      0        0 eth0

It was puzzling how the machine would work for a while and then its default gateway would disappear.

I’m still not certain exactly how that is happening. However, the system log indicates that there are periods of extremely high load where the machine gets frozen for minutes (maybe hours) at a time.

Some of these log entries are indicative of major delays:

Jan 20 13:26:44 ip-172-31-123-45.ec2.internal crond[21992]: (root) INFO (Job execution of per-minute job scheduled for 13:25 delayed into subsequent minute 13:26. Skipping job run.)

Jan 17 21:20:31 ip-172-31-45-166.ec2.internal chronyd[2696]: Forward time jump detected!

Notice how these logs are out of order too:

Jan 20 13:39:22 ip-172-31-123-45.ec2.internal kernel: R13: 00007faf9dc777a8 R14: 00000000000031f9 R15: 00007faf9dc7d510
Jan 20 13:28:30 ip-172-31-123-45.ec2.internal dockerd[4660]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.MakeErrorHandler.func1 (httputils.go:107)
Jan 20 13:36:03 ip-172-31-123-45.ec2.internal dhclient[3275]: XMT: Solicit on eth0, interval 129760ms.
Jan 20 13:28:30 ip-172-31-123-45.ec2.internal dockerd[4660]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.MakeErrorHandler.func1 (httputils.go:107)

Finally, this may be the thing that ultimately disables the networking. It looks like `oom-killer` killed the `dhclient-script`, which may have left the network in a very bad state:

Jan 20 15:28:36 ip-172-31-45-74.ec2.internal kernel: dhclient-script invoked oom-killer: gfp_mask=0x14201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD), nodemask=(null),  order=0, oom_score_adj=0
Jan 20 15:28:36 ip-172-31-45-74.ec2.internal kernel: dhclient-script cpuset=/ mems_allowed=0

You can simply run

sudo dhclient eth0

to have it grab the default gateway from DHCP again. But it’s best to put memory limits in place so the host doesn’t run out of resources to begin with.
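
As a stopgap until the underlying memory pressure is fixed, a simple watchdog can re-run dhclient whenever the default route disappears. This is my own sketch (not part of any ECS tooling), assuming the interface is eth0; run it from cron as root:

#!/usr/bin/env python3
import subprocess

# List the routing table and check for a default route
routes = subprocess.run(["ip", "route"], capture_output=True, text=True).stdout
if not any(line.startswith("default") for line in routes.splitlines()):
    # No default route: ask DHCP for one again, same as `sudo dhclient eth0`
    subprocess.run(["dhclient", "eth0"], check=False)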