Web Programming, Linux System Administration, and Entrepreneurship in Athens, Georgia

Author: Brandon Checketts

Setting Up Virtualmin on an OpenVZ Guest

I’m experimenting with hosting control panels and am interested in Virtualmin. I generally avoid web-based control panels because they tend to make direct configuration via the command line and manual editing of config files very difficult. However, one of Virtualmin’s goals is not to interfere with such manual configuration. I’ve had plenty of clients who use Webmin, and it seems to do a good job, so Virtualmin seems like a good choice.

These are the steps that I went through to get a new OpenVZ guest set up with the GPL version of Virtualmin.

Download a CentOS 5 OS template and create the guest

# wget https://download.openvz.org/template/precreated/centos-5-x86_64.tar.gz
# vzctl create <VEID> --ostemplate centos-5-x86_64

I replaced all of the resource limits in /etc/vz/<VEID>.conf with the values below, which are based on a different running machine with some fairly generous limits. Most importantly, they allow for 1GB of RAM.

# UBC parameters (in form of barrier:limit)
KMEMSIZE="43118100:44370492"
LOCKEDPAGES="256:256"
PRIVVMPAGES="262144:262144"
SHMPAGES="21504:21504"
NUMPROC="2000:2000"
PHYSPAGES="0:9223372036854775807"
VMGUARPAGES="65536:65536"
OOMGUARPAGES="26112:9223372036854775807"
NUMTCPSOCK="360:360"
NUMFLOCK="380:420"
NUMPTY="16:16"
NUMSIGINFO="256:256"
TCPSNDBUF="10321920:16220160"
TCPRCVBUF="1720320:2703360"
OTHERSOCKBUF="4504320:16777216"
DGRAMRCVBUF="262144:262144"
NUMOTHERSOCK="5000:5000"
DCACHESIZE="3409920:3624960"
NUMFILE="18624:18624"
AVNUMPROC="180:180"
NUMIPTENT="128:128"
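
As a sanity check on the memory numbers: the page-based UBC limits are counted in 4 KiB pages, so the PRIVVMPAGES value above works out to the 1GB of RAM mentioned earlier. A quick sketch of the conversion (assuming the standard 4 KiB page size):

<?php
// Convert a page-based OpenVZ UBC limit to bytes (assumes 4 KiB pages)
$page_size   = 4096;      // bytes per page on x86/x86_64
$privvmpages = 262144;    // barrier/limit value from the config above
echo ($privvmpages * $page_size) / (1024 * 1024 * 1024) . " GB\n";   // prints "1 GB"
?>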

Then set up some host-specific parameters and start it up.

# vzctl set <VEID> --ipadd 10.0.0.1 --hostname yourhostname.com --nameserver 4.2.2.1 --diskspace 4G --save
# vzctl start <VEID>
# vzctl enter <VEID>

You are now logged in to the guest, where you can download and install Virtualmin:

# yum update
# cd /root
# wget https://software.virtualmin.com/gpl/scripts/install.sh
# sh install.sh
 Continue? (y/n) y

That should install without significant errors. Finally, set a password for root:

passwd root

Then log in at https://<your-ip>:10000/ and go through the post-installation configuration.

No-Referer Plugin for Firefox 3.7

Firefox not allowing you to install plugins built for previous versions is kind of annoying, especially when the developer no longer maintains them. The No-Referer plugin is very simple and handy, but hasn’t been updated to allow it to be installed in Firefox 3.5 or above.

In many cases, all that needs to be done is to increment a version number in the install.rdf file. I’ve posted a new version that works with Firefox 3.7 here.
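
For reference, the piece that usually needs editing is the em:maxVersion element inside the extension’s em:targetApplication block. This is a rough sketch of what that block typically looks like (the Firefox application ID is the standard one; the exact version strings will vary by extension):

<em:targetApplication>
  <Description>
    <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>  <!-- Firefox -->
    <em:minVersion>1.5</em:minVersion>
    <!-- Bump this value so newer Firefox releases will accept the extension -->
    <em:maxVersion>3.7a1pre</em:maxVersion>
  </Description>
</em:targetApplication>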

Converting Tables to InnoDB Takes Forever

I have been familiar with some of the benefits of the InnoDB storage engine for a long time. Of particular interest to me has been row-level locking, which should significantly reduce some locking problems that I have on a big table during peak times.

I have made various attempts over the past six months or so to convert this table to InnoDB, but each attempt has taken a tremendously long time, and I have always ended up canceling the query before it completed because it didn’t seem to be making any progress.

I finally had a reason and an opportunity to dig into this more and have spent the last day or two learning about and experimenting with InnoDB. Of particular use was this set of slides from a 2007 presentation on InnoDB performance, which had some very good information about how InnoDB works.

Slide 9 of the presentation includes this extremely helpful bit:

  • PRIMARY KEYs in random order are costly and lead to table fragmentation (primary key inserts should normally be in ascending order)
  • Load data in primary_key order if you can

In the data that I have been attempting to convert from MyISAM, the primary keys appear in whatever order the rows happened to be created. Converting that to InnoDB essentially runs a long series of insert statements using the unordered data, and each insert requires the storage engine to move a bunch of data around in the InnoDB table to keep the primary keys in order.

In my case, I was converting a MyISAM table containing about 1.9 million rows and occupying 600 MB of disk space. That took over 8 hours using the unordered data. After ordering the data and retrying, it then took about 10 minutes.
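
In practice, the speedup came from having MySQL insert the rows in primary key order while building the InnoDB copy. A sketch of that approach, assuming a table named sometable with an auto-increment primary key column id (the table and column names are placeholders for your own schema):

-- Create an empty InnoDB copy of the table
CREATE TABLE sometable_innodb LIKE sometable;
ALTER TABLE sometable_innodb ENGINE=InnoDB;

-- Copy the rows in primary key order so InnoDB appends to the clustered
-- index instead of shuffling pages around to keep it sorted
INSERT INTO sometable_innodb SELECT * FROM sometable ORDER BY id;

-- Swap the tables once the copy has finished
RENAME TABLE sometable TO sometable_myisam, sometable_innodb TO sometable;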

I also learned the usefulness of the SHOW TABLE STATUS command. It is semi-useful for watching the progress of the import. It is a bit strange to me that the reported number of rows changes by about 10% every second, but it is better than nothing:

mysql> SHOW TABLE STATUS WHERE name = 'sometable'\G

*************************** 1. row ***************************
           Name: books_innodb
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 2158501
 Avg_row_length: 286
    Data_length: 617611264
Max_data_length: 0
   Index_length: 70008832
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2009-08-25 10:03:43
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment: InnoDB free: 1799168 kB

Also worth noting: according to the Data_length value, the data inserted in random order used about 30% more space.

AmazonAPISigning.com

Amazon Associates Web Service will start requiring API requests to contain a cryptographic signature on August 15th. Any website that uses their Product APIs will need to be modified to correctly sign its requests using its Secret Key.

I’ve just created AmazonAPISigning.com, a website that offers services to help with the transition to signed requests. Specifically, it offers a programming service to modify a website’s code to implement the required changes. It also offers a free API signing service for websites that aren’t able to implement the signatures in their own code for whatever reason. The signing service is specifically intended for widgets and tools that are implemented entirely in JavaScript and thus can’t keep their Secret Key hidden from the JavaScript source code.

The signing service may also serve as a really quick solution for webmasters who need to start signing requests: their website code simply needs to change the hostname used in the Amazon API requests, and the service will calculate the signatures on their behalf.
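
To illustrate how small that change is, the request URL stays the same except for the hostname (the signing-service hostname below is purely hypothetical; the real one is provided when you sign up):

// Original request straight to Amazon
$url = 'https://webservices.amazon.com/onca/xml?some-parameters';

// Same request routed through a (hypothetical) signing-service hostname,
// which adds the Timestamp and Signature parameters and forwards it to Amazon
$url = 'https://sign.example-signing-service.com/onca/xml?some-parameters';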

ProFTPd allows multiple DefaultRoot lines for flexible chrooting

The ProFTPd documentation gives good examples of how to use the DefaultRoot directive to chroot users to a specific directory.

A customer today wanted to have different chroot directories for different groups of users. The documentation didn’t mention if it was okay to include multiple DefaultRoot lines. After some experimenting, I can verify that it is allowed and works well.

I used something like this in /etc/proftpd/proftpd.conf:

DefaultRoot                     ~ jailed
DefaultRoot                     ~/../.. othergroup

Users in the group ‘jailed’ are chrooted to their own home directory immediately upon logging in. Users in ‘othergroup’ are chrooted two levels up from their home directory. If you want to get really specific, each user generally has a group of their own, so you can effectively do this at the user level as well, as in the sketch below.
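
For example, to pin down individual accounts, you can reference each user’s private group in the same way (the group names below are just placeholders for real accounts):

# Chroot these individual users (via their private groups) to their own home directories
DefaultRoot                     ~ alice
DefaultRoot                     ~ bob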

PHP Code to Sign any Amazon API Requests

Starting next month, any requests to the Amazon Product Advertising API need to be cryptographically signed. Amazon gave about three months’ notice, and the deadline is quickly approaching. I use the Amazon web services on several sites and came up with a fairly generic way to convert an existing URL to a signed URL. I’ve tested it on several sites with a variety of functions, and it is working well for me so far:

function signAmazonUrl($url, $secret_key)
{
    $original_url = $url;

    // Decode anything already encoded
    $url = urldecode($url);

    // Parse the URL into $urlparts
    $urlparts       = parse_url($url);

    // Build $params with each name/value pair
    $params = array();
    foreach (explode('&', $urlparts['query']) as $part) {
        if (strpos($part, '=')) {
            list($name, $value) = explode('=', $part, 2);
        } else {
            $name = $part;
            $value = '';
        }
        $params[$name] = $value;
    }

    // Include a timestamp if none was provided
    if (empty($params['Timestamp'])) {
        $params['Timestamp'] = gmdate('Y-m-d\TH:i:s\Z');
    }

    // Sort the array by key
    ksort($params);

    // Build the canonical query string
    $canonical       = '';
    foreach ($params as $key => $val) {
        $canonical  .= "$key=".rawurlencode(utf8_encode($val))."&";
    }
    // Remove the trailing ampersand
    $canonical       = preg_replace("/&$/", '', $canonical);

    // Some common replacements and ones that Amazon specifically mentions
    $canonical       = str_replace(array(' ', '+', ',', ';'), array('%20', '%20', urlencode(','), urlencode(':')), $canonical);

    // Build the sign
    $string_to_sign             = "GET\n{$urlparts['host']}\n{$urlparts['path']}\n$canonical";
    // Calculate our actual signature and base64 encode it
    $signature            = base64_encode(hash_hmac('sha256', $string_to_sign, $secret_key, true));

    // Finally re-build the URL with the proper string and include the Signature
    $url = "{$urlparts['scheme']}://{$urlparts['host']}{$urlparts['path']}?$canonical&Signature=".rawurlencode($signature);
    return $url;
}

To use it, just wrap your Amazon URL with the signAmazonUrl() function and pass it your original string and secret key as arguments. As an example:

$xml = file_get_contents('https://webservices.amazon.com/onca/xml?some-parameters');

becomes

$xml = file_get_contents(signAmazonUrl('https://webservices.amazon.com/onca/xml?some-parameters', $secret_key));

Like most of the variations of this floating around, it requires the hash extension in order to use the hash_hmac() function. That function is generally available in PHP 5.1+; older versions will need to install the extension via PECL. I tried a couple of versions that compute the hash in pure PHP code, but none of them worked, and installing it via PECL was pretty simple.
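
For what it’s worth, on those older installs the PECL route is usually just a one-liner, assuming the PEAR/PECL tooling and the PHP development headers are already present:

# pecl install hash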

(Note that I’ve slightly revised this code a couple of times to fix small issues that have been noticed)

MRTG Script to Graph the Current Outdoor Temperature

I’m graphing a few things around my home and wanted to graph the current outdoor temperature. I wrote this quick script to grab the current temperature for my zip code from Weather Underground (wunderground.com) and graph it with MRTG:

#!/usr/bin/php
<?php

$zipcode = $argv[1];   // zip code passed as the first command-line argument
$page = file_get_contents("https://www.wunderground.com/cgi-bin/findweather/getForecast?query=$zipcode&wuSelect=WEATHER");

// Scrape the current temperature (degrees F) out of the page
$current_temp = 0;
if (preg_match('#pwsvariable="tempf" english="&deg;F" metric="&deg;C" value="([0-9\.]+)">#', $page, $matches)) {
    $current_temp = $matches[1];
}

// Scrape the heat index, if the page is showing one
$heat_index = 0;
if (preg_match('#Heat Index:</td>.*?<span class="nobr"><span class="b">([0-9\.]+)</span>#s', $page, $matches)) {
    $heat_index = $matches[1];
}

// MRTG expects four lines: the 'in' value, the 'out' value, uptime, and target name
echo "$current_temp\n$heat_index\non\non\n";

?>

And add it to your MRTG config with something like this (Note that you need to specify your zip code in the ‘Target’ line):

Target[temp]: `/etc/mrtg/temperature.php 30605`
Options[temp]: nopercent,growright,nobanner,nolegend,noinfo,integer,gauge
Title[temp]: Outdoor Temperature
PageTop[temp]: <h3>Outdoor Temperature</h3>
YLegend[temp]: Degrees Fahrenheit
ShortLegend[temp]: &nbsp;&deg;F
LegendI[temp]: Temperature &deg;F&nbsp;
LegendO[temp]: Heat Index &deg;F&nbsp;


Array versus String in CURLOPT_POSTFIELDS

The PHP Curl Documentation for CURLOPT_POSTFIELDS makes this note:

This can either be passed as a urlencoded string like ‘para1=val1&para2=val2&…’ or as an array with the field name as key and field data as value. If value is an array, the Content-Type header will be set to multipart/form-data.

I’ve always discounted the importance of that, and in most cases it doesn’t really matter: the destination server and application likely handle both multipart/form-data and application/x-www-form-urlencoded equally well. However, the data is passed in very different ways by these two mechanisms.

application/x-www-form-urlencoded

application/x-www-form-urlencoded is what I generally think of when doing POST requests. It is the default when you submit most forms on the web. It works by appending a blank line and then your urlencoded data to the end of the POST request. It also sets the Content-Length header to the length of your data. A request submitted with application/x-www-form-urlencoded looks like this (somewhat simplified):

POST /some-form.php HTTP/1.1
Host: www.brandonchecketts.com
Content-Length: 23
Content-Type: application/x-www-form-urlencoded

name=value&name2=value2

multipart/form-data

multipart/form-data is much more complicated, but more flexible. Its flexibility is required when uploading files. It works in a manner similar to MIME types. The HTTP request looks like this (simplified):

POST / HTTP/1.1
Host: www.brandonchecketts.com
Content-Length: 244
Expect: 100-continue
Content-Type: multipart/form-data; boundary=----------------------------26bea3301273

And then subsequent packets are sent containing the actual data. In my simple case with two name/value pairs, it looks like this:

HTTP/1.1 100 Continue
------------------------------26bea3301273
Content-Disposition: form-data; name="name"

value
------------------------------26bea3301273
Content-Disposition: form-data; name="name2"

value2
------------------------------26bea3301273--

CURL usage

So, when sending POST requests with PHP/cURL, it is important to decide which format you want: to get the simpler application/x-www-form-urlencoded version, you need to urlencode the data into a string yourself and pass that instead of an array.

This will generate the multipart/form-data version

$data = array('name' => 'value', 'name2' => 'value2');
curl_setopt($curl_object, CURLOPT_POSTFIELDS, $data);

And this simple change will ensure that it uses the application/x-www-form-urlencoded version:

$data = array('name' => 'value', 'name2' => 'value2');
$encoded = '';
foreach($data as $name => $value){
    $encoded .= urlencode($name).'='.urlencode($value).'&';
}
// chop off the last ampersand
$encoded = substr($encoded, 0, strlen($encoded)-1);
curl_setopt($curl_object, CURLOPT_POSTFIELDS, $encoded);
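
As a side note, PHP’s built-in http_build_query() produces the same urlencoded string, so the loop above can be replaced with a one-liner for a flat array of scalar values like this one:

$data = array('name' => 'value', 'name2' => 'value2');
// http_build_query() urlencodes each name and value and joins the pairs with '&'
curl_setopt($curl_object, CURLOPT_POSTFIELDS, http_build_query($data));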

KnitMeter is now a Facebook App

KnitMeter.com is a site that I wrote quickly for my wife to keep track of how much she has knit. It generates a little ‘widget’ image that can be placed on blogs, forums, etc., and says how many miles of yarn you have knit over some period of time. The site has been live for about a year and a half now and has a couple thousand registered users.

I have been receiving an increasing number of requests to add a way to put a KnitMeter on Facebook. I’ve experimented with a couple of other ideas on Facebook and found that it was pretty straightforward to write an app, and KnitMeter seemed like a decent candidate for a social app, so I started working on it about a week ago. I’m happy to say that I made the application live late last night. It is available at https://apps.facebook.com/knitmeter/.

Features include:

  • Ability to add projects and to add knitted lengths to a project (or to no project at all)
  • Settings for inputting lengths in feet, yards, or meters
  • Display how much you’ve knit in feet, yards, meters, kilometers, or miles
  • When entering a new length, you can choose to have it publish a ‘story’ on your profile page
  • You can add a tab on your profile page that shows each of your projects as well as a total
  • You can add a KnitMeter ‘box’ to the side of your profile page, or on your ‘boxes’ tab.

I recreated the database from scratch and defined it a little better, so I have a little bit of work to do in migrating the existing site and database over to the new structure. Once that is done users will be able to import their data from the existing KnitMeter.com by providing their email/password.

Synchronize Remote Memcached Clusters with memcache_sync

The problem: Servers in two separate geographic locations each have their own memcached cluster. However, there doesn’t currently exist (that I know of) a good way to copy data from one cluster to the other cluster.

One possible solution is to configure the application to perform all write operations in both places. However, each operation requires a round-trip response, so if the servers are separated by 50 ms or more, doing several write operations causes a noticeable delay.

The solution that I’ve come up with is a Perl program that I’m calling memcache_sync. It acts a bit like a proxy that asynchronously performs write operations on a remote cluster. Each geographic location runs an instance of memcache_sync that emulates a memcached server. You configure your application to write to the local memcache cluster and also to the memcache_sync instance. memcache_sync queues the request and immediately returns a SUCCESS response so that your application can continue doing its thing. A separate thread then replays the queued operations against the remote cluster.
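
In PHP, for example, the application side of that setup might look roughly like this sketch (the hosts and the memcache_sync port are placeholders; use whatever your memcache_sync instance actually listens on):

<?php
// Local memcached cluster: reads and writes go here as usual
$local = new Memcache();
$local->addServer('10.0.0.10', 11211);

// Local memcache_sync instance: it speaks the memcached protocol, queues
// each write, and replays it against the remote cluster in the background
$sync = new Memcache();
$sync->addServer('127.0.0.1', 11311);   // hypothetical memcache_sync port

// Perform each write against both so the remote cluster stays in sync;
// memcache_sync acknowledges immediately, so this adds almost no latency
function cache_set($local, $sync, $key, $value, $expire = 0)
{
    $local->set($key, $value, 0, $expire);
    $sync->set($key, $value, 0, $expire);
}
?>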

The result is two memcache clusters that are synchronized in near-real time, without any noticeable delay in the application.

I’ve implemented ‘set’ and ‘delete’ operations thus far, since that is all that my application uses. I’ve just started using this in a production environment and am watching to see how it holds up. So far, it is behaving well.

The script is available here. I’m interested to see how much need there is for such a program. I’d be happy to have input from others and in developing this into a more robust solution that works outside of my somewhat limited environment.


© 2025 Brandon Checketts
