How Do Clients Securely Connect to SSL & HTTPS Servers?

This question was raised by Steven Chu on my previous post about MySQL SSL Connections without Client Certificates. How is the client able to securely connect to a server using SSL if it doesn’t already know or trust the Server Certificate?

It is important to understand that there are a few different, interrelated topics here. All of these involve SSL and certificates, but in differing ways, so they are often conflated. Secure communication over SSH shares the same concepts, but has different mechanisms.

  1. Encryption of the traffic between client and server.
  2. Verification that the server is who the client believes it to be.
  3. Authentication of the client to the server.

For SSL and HTTPS communication, the first two concepts are accomplished together because there is no point in communicating securely with a remote party if you can’t verify that the remote party is who they claim to be and that there isn’t a “Man in the Middle” able to intercept the secure traffic.

You actually communicate securely with unknown servers all of the time. When you loaded this web page, your browser didn’t know anything about www.brandonchecketts.com beforehand. Same thing when you load your bank’s website. You never configured your browser specifically to trust their website. So how is it able to verify that it is actually your bank, and not an attacker who is impersonating your bank?

Certificate Authorities

Anybody can create an SSL Certificate with any name on it. In my SSL Certificate Notes post, you can find instructions for creating an SSL certificate. Note that you simply type in the name for the certificate, so you could attempt to create a certificate for any host you care to try. However, an essential part of the process is the signing of the certificate. You can self-sign a certificate with any name on it, but if you want your certificate to be recognized and trusted by anybody else, you need to have a recognized Certificate Authority (CA) sign it. If you were to try to create a certificate for www.google.com, no Certificate Authority is going to sign it, since you can’t prove that you are authorized to issue certificates for that domain, and without a CA’s signature nobody else in the world is going to trust it.
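
For illustration, here is roughly what that looks like with PHP’s OpenSSL extension. This is just a minimal sketch, and the hostname is purely an example; nothing stops you from putting any name you like in it:

    // Anyone can create and self-sign a certificate for any name they like.
    // "www.example.com" here is purely illustrative.
    $dn  = array('commonName' => 'www.example.com');
    $key = openssl_pkey_new(array('private_key_bits' => 2048));
    $csr = openssl_csr_new($dn, $key);

    // Passing null as the signing certificate makes this self-signed: we are
    // vouching for ourselves, which is exactly why nobody else will trust it.
    $cert = openssl_csr_sign($csr, null, $key, 365);

    openssl_x509_export($cert, $pem);
    file_put_contents('selfsigned.crt', $pem);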

When a Certificate Authority signs a certificate, it is their job to verify, in some way, that the certificate owner is who they claim to be. On the public internet, that is largely done through DNS or email validation. For example, on this site, I use a certificate issued by Amazon Web Services. In order to obtain that certificate, I had to verify that I own the domain. Since the domain is also hosted at AWS, it was quite easy for me to create the DNS records for verification, and AWS validated it within seconds. I couldn’t, for example, obtain a certificate for ‘www.google.com’: no certificate authority would validate it, since I can’t create the required DNS entries or receive emails at the required email addresses for google.com.

Extended Validation certificates, offered by some Certificate Authorities and recognized by some web browsers with a different-colored banner, involve additional verification steps beyond just DNS or email.

Intermediate Certificates and Multiple Layers of Certificate Authorities

When a certificate is signed by a recognized Certificate Authority, your client can trust it, because it trusts the Certificate Authority. On the public Internet, most of the time there are multiple layers of Certificate Authorities.

On OSX, you can find the list of root certificates it trusts in the “Keychain Access” system app, in the “System Roots” section. On an Ubuntu or Debian Linux system, the trusted certificates are files that exist or are symlinked in `/etc/ssl/certs`. These systems have dozens to hundreds of certificates that they trust. Look closely and you’ll see that most of them expire 10+ years into the future. These “Root” certificates are highly protected and usually don’t sign end-user certificates directly. Instead, the Certificate Authority delegates to intermediate authorities, whose keys can in turn sign certificates.

In Chrome, you can click the lock icon next to the URL, and find details about the certificate, including the intermediate certificates. As of the writing of this post, you can see that the SSL Certificate issued to brandonchecketts.com is issued by “Amazon”, which is trusted by the root certificate named “Amazon Root CA 1”. I can find that root certificate in the list of certificates trusted by my OSX system.
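
You can do the same inspection programmatically. Here is a rough sketch using PHP’s stream wrappers, assuming a Debian/Ubuntu-style CA bundle path (adjust for your OS); it verifies the server against the system’s trusted roots and then prints each certificate in the chain:

    // Verify the server against the trusted root store and capture its chain
    $ctx = stream_context_create(array('ssl' => array(
        'verify_peer'             => true,
        'verify_peer_name'        => true,
        'cafile'                  => '/etc/ssl/certs/ca-certificates.crt',
        'capture_peer_cert_chain' => true,
    )));

    $fp = stream_socket_client('ssl://www.brandonchecketts.com:443', $errno,
        $errstr, 10, STREAM_CLIENT_CONNECT, $ctx);

    // If the connection succeeded, the chain validated back to a trusted root
    $params = stream_context_get_params($fp);
    foreach ($params['options']['ssl']['peer_certificate_chain'] as $cert) {
        $info = openssl_x509_parse($cert);
        echo $info['subject']['CN'], ' (issued by ', $info['issuer']['CN'], ")\n";
    }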

Client Authentication

I mentioned the third item above: the client authenticating to the server. In many cases, like this website, there is no need for the client to authenticate to the server, since the content is public and intended to be viewed anonymously. If authentication is required, for instance to create a new post, then I simply log in with a username and password entered over the HTTPS connection. Same as you do every day.

Many well-meaning articles about generating SSL Certificates for services other than HTTPS mention creating an SSL Client Certificate. The Client Certificate is then provided to the server so that the server can validate that the client is who they claim to be. The Client Certificate is simply an alternate method of authenticating, often thought of as “more secure” than a username and password, and sometimes used in addition to one. In practice, I’ve found that usernames and passwords transmitted over an encrypted connection are very common, well understood, and just as secure as using an SSL Client Certificate.
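
For completeness, here is a sketch of what presenting a client certificate looks like from PHP. The file paths and hostname are placeholders, not real values:

    // Connect to a TLS service while presenting a client certificate.
    // Paths and hostname below are placeholders.
    $ctx = stream_context_create(array('ssl' => array(
        'verify_peer' => true,
        'local_cert'  => '/path/to/client-cert.pem', // certificate shown to the server
        'local_pk'    => '/path/to/client-key.pem',  // matching private key
    )));
    $fp = stream_socket_client('ssl://service.example.com:443', $errno,
        $errstr, 10, STREAM_CLIENT_CONNECT, $ctx);

The server then validates the presented certificate against the CA it trusts for client certificates, much like the client validated the server’s certificate above.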

APISigning Now Works with Amazon Simple Email Service (SES)

APISigning.com has been signing Amazon Product Advertising requests for a couple of years now. Amazon recently announced their Simple Email Service (SES), which makes it easy to send emails via an API. The SES API requires that requests be authenticated using some cryptographic functions that are not easily available on all platforms or in all programming languages. In those cases, developers can use the APISigning SES Service to calculate the correct signature and perform the request on their behalf.
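
The signing step itself boils down to an HMAC-SHA256 over parts of the request. This sketch shows the general idea in PHP; the keys are placeholders, and the exact header layout is an assumption for illustration, so consult Amazon’s SES documentation for the authoritative format:

    // Sign the request's Date header with the AWS secret key (placeholders below)
    $accessKey = 'YOUR_ACCESS_KEY_ID';
    $secretKey = 'YOUR_SECRET_KEY';

    $date      = gmdate('D, d M Y H:i:s \G\M\T');
    $signature = base64_encode(hash_hmac('sha256', $date, $secretKey, true));

    // Illustrative authorization header built from the signature
    $authHeader = "AWS3-HTTPS AWSAccessKeyId=$accessKey, "
                . "Algorithm=HmacSHA256, Signature=$signature";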

APISigning has free accounts that effectively allow 10k signing requests each month. Users who require additional requests can subscribe to a paid account with higher limits.

KnitMeter.com Has Been Upgraded

KnitMeter.com was originally started over four years ago, in December of 2007, as a small project that my wife thought would be useful. Since then, the site hasn’t changed much, but it has managed to grow to thousands of users who have knit nearly 20 thousand miles of yarn. I’ve received numerous requests and have finally gotten a chance to implement what many of you have been requesting for a while now. New features on the site include:

  • Users can now add entries for knitting, crocheting, and spinning
  • Completely new and modernized design and logo
  • You can customize your widgets directly on KnitMeter.com rather than editing the code for the widget on your website
  • The website and the KnitMeter Facebook Application are now completely integrated. Entries added in one will be displayed and counted in the other
  • The Facebook application can (again) publish your entries to your news feed, but only when you tell it to
  • You can choose to make your profile public, which will display some of the most recent entries on the KnitMeter home page with a link to your website
  • Added several new timeframes, including specific calendar years (e.g., I knit 4.3 miles in 2010)
  • Numerous technical changes that should make the site faster to use and make it easier to make future changes

These new features have been rolled out over the past couple of weeks. I appreciate the patience of those who have dealt with a few bugs over that time, and I believe that everything should be pretty bug-free now. I encourage you to check out the new site and to start adding up the mileage for your own projects. The next major milestone will be when we have gone through enough yarn to go around the earth (about 24,901 miles). At the present rate, we should hit that figure in about 3-5 months.

Happy Knitting, Crocheting, and Spinning,
Brandon Checketts
KnitMeter.com

Website Performance: Tables Versus CSS

Most website designers have been using CSS for page layout for several years now, but I occasionally see some websites that continue to use HTML tables for layout. As I’ve been focusing on website performance lately, I’ve found several references saying that modern browsers render sites that use tables for layout more slowly than sites that use CSS. I decided to investigate, and confirmed that there are many situations where sites using large tables will appear to load much more slowly than those using CSS. I put together two pages to demonstrate:

This page uses <div> elements for layout
and
This page uses a large table for layout

On both pages I’ve added a 5-second sleep near the end of the page to show what might happen if the server were slow, if there were network problems, or if any number of other things went wrong.
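
The test pages boil down to something like this minimal sketch, assuming no output buffering, so that the first chunk actually reaches the browser before the delay:

    // Send the top of the document, then stall before sending the rest
    echo '<div id="header">Header</div>';
    echo '<div id="content">Main (yellow) content section</div>';
    flush();    // push what we have so far to the browser
    sleep(5);   // simulate a slow server or network
    echo '<div id="sidebar">Right-hand section</div>';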

Notice that the page created using a table changes a lot after the delay. In Firefox 3, the main (yellow) content section extends all of the way to the right until the browser receives the rest of the document, at which point it has to shrink that section to make room for the column on the right. Internet Explorer behaves even worse: it displays a blank white page until after the delay, at which point it draws the whole table.

By contrast, the page created with CSS positioning shows all of the content above the delay and has it in the correct position. When the rest of the document is sent it just fills in the appropriate content, but doesn’t have to re-arrange anything on the page.

Script to Import Static Pages into GetSimple CMS

I’ve recently been impressed with a very simple Content Management System called GetSimple. It provides just the very basics that allow a user to edit their own website content. For brochure sites with owners who don’t want the complexity of a larger CMS, I think it is pretty ideal.

When I develop a site, though, I typically have a header and footer, and all of the content pages exist as PHP files that simply include that header and footer. Converting a static site like that into the CMS takes a bunch of copying and pasting. I always try to avoid tedious jobs like that, so I developed a script that imports those static pages into a GetSimple installation.

To run this script, I moved all of the static files into a ‘static’ directory, and then ran this from the command line to import all of the content into GetSimple:

# for file in `find static -type f`
> do
> ./getsimple_import_file.php $file
> done

The script is available as getsimple_import_file.php

It takes a little configuration before running it. It works by simulating the data that you would submit when creating the page through the web interface, so we have to fake the necessary session cookie. Uncomment the section in the middle that displays your cookie, and run the script once. You’ll need to copy your cookie name and value into the script before doing any actual imports.
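
Under the hood, the approach amounts to replaying the admin form submission with your session cookie attached. This is only a hypothetical sketch: the cookie name, field names, and URL below are placeholders, and the real values come from your own GetSimple installation and the script itself:

    // Replay the "create page" form POST with the admin session cookie.
    // All names and values below are placeholders.
    $title   = 'Example Page';
    $content = '<p>Page content pulled from the static file...</p>';
    $cookie  = 'GS_COOKIE_NAME=COOKIE_VALUE_FROM_SCRIPT';

    $ctx = stream_context_create(array('http' => array(
        'method'  => 'POST',
        'header'  => "Cookie: $cookie\r\n" .
                     "Content-Type: application/x-www-form-urlencoded",
        'content' => http_build_query(array(
            'post-title'   => $title,
            'post-content' => $content,
        )),
    )));
    file_get_contents('http://example.com/admin/edit.php', false, $ctx);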

Once you’ve done that, you will probably want to change the regular expression that attempts to grab the page title from your file. You may also want to adjust how it determines the URL to use.
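
For example, if your static pages keep the title in an <h1> tag, something along these lines would work (adjust the pattern to your own markup):

    // Pull the page title out of the first <h1> in the static file
    $html = file_get_contents('static/about.php'); // e.g. one of the static files
    if (preg_match('/<h1[^>]*>(.*?)<\/h1>/is', $html, $matches)) {
        $title = trim($matches[1]);
    }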

Feel free to post comments here if you found this useful or have made any changes you’d like to share with other users.

Enabling HTTP Page Caching with PHP

I’ve been doing a lot of work on BookScouter.com lately to reduce page load time and generally increase the performance of the website for both users and bots. One of the tips that the load time analyzer points out is to enable an expiration time for static content. That is easy enough for images and such by using an Apache directive such as:

    ExpiresActive On
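    # "A2592000" means: expire 2592000 seconds (30 days) after the file is accessed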
    ExpiresByType image/gif A2592000
    ExpiresByType image/jpg A2592000
    ExpiresByType image/png A2592000

But pages generated with PHP have the Pragma: no-cache header set by default, so that users’ browsers do not cache the content at all. In most cases, even hitting the back button will generate another request to the server, which must be completely processed by the script. You may be able to cache some of the most intensive operations inside your script, but this solution eliminates that request completely. Simply add this code to the top of any page that contains semi-static content. It effectively sets the page expiration time to one hour in the future, so if a visitor hits the same URL within that hour, the page is served locally from their browser cache instead of making a trip to the server. It also sends an HTTP 304 (Not Modified) response code if the user requests to reload the page within the specified time. That may or may not be desired, depending on your site.

$expire_time = 60*60; // one hour, in seconds
header('Expires: '.gmdate('D, d M Y H:i:s \G\M\T', time() + $expire_time));
header("Cache-Control: max-age={$expire_time}");
header('Last-Modified: '.gmdate('D, d M Y H:i:s \G\M\T', time()));
header('Pragma: public');

// If the browser has a copy from within the expiration window, tell it to
// use that copy rather than re-sending the whole page
if (!empty($_SERVER['HTTP_IF_MODIFIED_SINCE']) && (time() - strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) <= $expire_time)) {
    header('HTTP/1.1 304 Not Modified');
    exit;
}

KnitMeter is now a Facebook App

KnitMeter.com is a site that I wrote quickly for my wife to keep track of how much she has knit. It generates a little ‘widget’ image that can be placed on blogs, forums, etc., and says how many miles of yarn you have knit in a given period. The site has been live for about a year and a half now and has a couple thousand registered users.

I have been receiving an increasing number of requests to add a way to put a KnitMeter on Facebook. I’ve experimented with a couple of other ideas on Facebook and found that it was pretty straightforward to write an app, and KnitMeter seemed like a decent candidate for a social app, so I started working on it about a week ago. I’m happy to say that I just made the application live late last night. It is available at http://apps.facebook.com/knitmeter/.

Features include:

  • Ability to add projects and to add knitted lengths to a project (or to no project)
  • Settings for inputting lengths in feet, yards, or meters
  • Display how much you’ve knit in feet, yards, meters, kilometers, or miles
  • When entering a new length, you can choose to have it publish a ‘story’ on your profile page
  • You can add a tab on your profile page that shows each of your projects as well as a total
  • You can add a KnitMeter ‘box’ to the side of your profile page, or on your ‘boxes’ tab.

I recreated the database from scratch and defined it a little better, so I have a little bit of work to do in migrating the existing site and database over to the new structure. Once that is done, users will be able to import their data from the existing KnitMeter.com by providing their email/password.