Getting Ubuntu 14.04 php5enmod to understand module priority

Posted on October 5th, 2014 in PHP by Brandon

Usage of Debian’s php5enmod command doesn’t seem to be documented anywhere except in the usage message it prints when called without any arguments:

user@host:~# php5enmod
WARNING:
usage: php5enmod [ -s ALL|sapi_name ] module_name [ module_name_2 ]

Unfortunately, that provides no information on how to customize the priority of a module when enabling it. Some others seem to think that you should be able to provide a priority level on the command line, but that doesn’t work.

It took some digging into the bash scripts to figure out how to make it work. The trick is to add a comment to the module’s .ini file, and the comment must follow a very specific format:

zend_extension = /usr/lib/php5/20121212/ioncube_loader_lin_5.5.so
; priority=1

The ‘priority’ line must match that format exactly and must not contain any other spaces or characters. The line must start with a semicolon, followed by a space, followed by priority=, and finally the desired priority level. The only space on the line is the one between the semicolon and the word ‘priority’.
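With the comment in place (assuming the module’s file is /etc/php5/mods-available/ioncube.ini), enabling the module creates symlinks in the conf.d directories whose names are prefixed with the requested priority. A quick sketch of what that should look like:

user@host:~# php5enmod ioncube
user@host:~# ls /etc/php5/cli/conf.d/ | grep ioncube
01-ioncube.ini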

Fix For Amazon API Error: Your request is missing required parameters. Required parameters include AssociateTag.

Posted on November 3rd, 2011 in Amazon APIs,PHP by Brandon

Users of Amazon’s Product Advertising API will begin seeing the following error as of about November 1st:

Your request is missing required parameters. Required parameters include AssociateTag.

This is due to changed requirements for the Product Advertising API. The AssociateTag parameter is now a required and validated field; you can see that it is the first thing mentioned in the list of changes.

These changes are made to keep the Product Advertising API in line with its purpose of driving affiliate sales through the Amazon.com website. The AssociateTag is a tag generated from your Amazon Associates account.

So far, it looks like the tag is not fully validated against your account. I’ve had luck using fictitious tags such as aztag-20.
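The fix is simply to include an AssociateTag parameter in each request. A minimal sketch (the other parameters here are placeholders, not a complete request):

// Placeholder ItemLookup request; substitute your real access key and tag
$url = 'http://webservices.amazon.com/onca/xml'
     . '?Service=AWSECommerceService'
     . '&Operation=ItemLookup'
     . '&ItemId=0679722769'
     . '&AWSAccessKeyId=YOUR_ACCESS_KEY'  // placeholder
     . '&AssociateTag=aztag-20';          // the newly-required parameter
$xml = file_get_contents($url);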

KnitMeter.com Has Been Upgraded

Posted on March 24th, 2011 in PHP,Programming,Websites by Brandon


KnitMeter.com was originally started a little over three years ago, in December of 2007, as a small project that my wife thought would be useful. Since then, the site hasn’t changed much, but it has managed to grow to thousands of users who have knit nearly 20 thousand miles of yarn. I’ve received numerous requests and have finally gotten a chance to implement what many of you have been requesting for a while now. New features on the site include:

  • Users can now add entries for knitting, crocheting, and spinning
  • Completely new and modernized design and logo
  • You can customize your widgets directly on KnitMeter.com rather than editing the code for the widget on your website
  • The website and the KnitMeter Facebook Application are now completely integrated. Entries added in one will be displayed and counted in the other
  • The Facebook application can (again) publish your entries to your news feed, but only when you tell it to
  • You can choose to make your profile public, which will display some of the most recent entries on the KnitMeter home page with a link to your website
  • Added several new timeframes, including specific calendar years (e.g. I knit 4.3 miles in 2010)
  • Numerous technical changes that should make the site faster to use and make it easier to make future changes

These new features have been rolled out over the past couple of weeks. I appreciate the patience of those who have dealt with a few bugs over that time, and I believe that everything should be pretty bug-free now. I encourage you to check out the new site and to start adding up the mileage for your own projects. The next major milestone will be when we have gone through enough yarn to go around the earth (about 24,901 miles). At the present rate, we should hit that figure in about 3-5 months.

Happy Knitting, Crocheting, and Spinning,
Brandon Checketts
KnitMeter.com

Website Performance: Tables Versus CSS

Posted on October 16th, 2010 in General,PHP,Programming,Website Performance,Websites by Brandon

Most website designers have been using CSS for page layout for several years now, but I occasionally see websites that still use HTML tables for layout. As I’ve been focusing on website performance lately, I’ve found some references indicating that modern browsers render sites that use tables for layout more slowly than sites that use CSS. I decided to investigate, and confirmed that there are many situations where sites using large tables will appear to load much more slowly than those using CSS. I put together two pages to confirm this:

This page uses <div> elements for layout
and
This page uses a large table for layout

On both pages I’ve added a 5-second sleep near the end of the page to show what might happen if the server were slow, if there were network problems, or if any number of other things went wrong.

Notice that the page created with a table changes a lot after the delay. In Firefox 3, the main (yellow) content section extends all the way to the right until the browser receives the rest of the document, at which point it has to shrink that section to make room for the column on the right. Internet Explorer behaves even worse: it leaves a blank white page until after the delay, at which point it draws the whole table.

By contrast, the page created with CSS positioning shows all of the content above the delay, already in the correct position. When the rest of the document arrives, the browser just fills in the remaining content and doesn’t have to rearrange anything on the page.
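For reference, the test pages are built along these lines (a simplified sketch of the approach, not the exact markup of the demo pages):

<div id="header">Header</div>
<div id="nav">Left navigation</div>
<div id="content">Main (yellow) content section</div>
<?php
// Simulate a slow backend: everything above this point has already
// been sent, positioned, and painted by the browser.
flush();
sleep(5);
?>
<div id="right">Right section</div>
<div id="footer">Footer</div>

The table-based version wraps those same sections in one large <table>, which is why the layout can’t settle until the rest of the document arrives.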

Script to Import Static Pages into GetSimple CMS

Posted on May 12th, 2010 in General,PHP,Websites by Brandon

I’ve recently been impressed with a very simple Content Management System called GetSimple. It provides just the basics that allow a user to edit their own website content. For brochure sites with owners who don’t want the complexity of a larger CMS, I think it is pretty ideal.

When I develop a site, though, I typically have a header and footer, and all of the content pages exist as PHP files that simply include that header and footer. Converting a static site like that into the CMS takes a bunch of copy/pasting. I always try to avoid such tedious jobs, so I developed a script that will import those static pages into a GetSimple installation.

In my case, I had moved all of the static files into a ‘static’ directory. I then ran this from the command line to import all of the content into GetSimple:

# for file in `find static -type f`
> do
> ./getsimple_import_file.php $file
> done

The script is available as getsimple_import_file.php

It takes a little configuration before running it. It works by simulating the data that you would submit when creating the page through the web interface, so we have to fake the necessary session cookie. Uncomment the bit in the middle that will display your cookie and run the script once. You’ll need to copy your cookie name and value into the script before doing any actual imports.

Once you’ve done that, you will probably want to change the regular expression that attempts to grab the page title from your file. You may also want to manipulate how it figures the URL to use.
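In outline, the script does something like this (the URL, cookie, and form field names below are hypothetical placeholders; check your own GetSimple installation for the real ones):

$title   = 'About Us';                    // normally parsed out of the static file
$content = '<p>Page content here</p>';    // the body between header and footer

$post = array(
    'post-title'   => $title,
    'post-content' => $content,
);

$ch = curl_init('http://example.com/admin/edit.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($post));
curl_setopt($ch, CURLOPT_COOKIE, 'GS_ADMIN_USERNAME=0abc123'); // the faked session cookie
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);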

Feel free to post comments here if you found this useful, or if you’ve made any changes you’d like to share with other users.

Enabling HTTP Page Caching with PHP

Posted on April 29th, 2010 in PHP,Programming,Websites by Brandon

I’ve been doing a lot of work on BookScouter.com lately to reduce page load time and generally increase the performance of the website for both users and bots. One of the tips that the load time analyzer points out is to enable an expiration time for static content. That is easy enough for images and such by using an Apache directive such as:

    ExpiresActive On
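    # A2592000 = expire 2,592,000 seconds (30 days) after the file was last accessed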
    ExpiresByType image/gif A2592000
    ExpiresByType image/jpg A2592000
    ExpiresByType image/png A2592000

But pages generated with PHP have the Pragma: no-cache header set by default, so users’ browsers do not cache the content at all. In most cases, even hitting the back button will generate another request to the server, which must be completely processed by the script. You may be able to cache some of the most intensive operations inside your script, but this solution eliminates that request completely.

Simply add this code to the top of any page that contains semi-static content. It effectively sets the page expiration time to one hour in the future, so if a visitor hits the same URL within that hour, the page is served locally from their browser cache instead of making a trip to the server. It also sends an HTTP 304 (Not Modified) response code if the user requests to reload the page within the specified time. That may or may not be desirable for your site.

$expire_time = 60*60; // One Hour
header('Expires: '.gmdate('D, d M Y H:i:s \G\M\T', time() + $expire_time));
header("Cache-Control: max-age={$expire_time}");
header('Last-Modified: '.gmdate('D, d M Y H:i:s \G\M\T', time()));
header('Pragma: public');

if ((!empty($_SERVER['HTTP_IF_MODIFIED_SINCE'])) && (time() - strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) <= $expire_time)) {
    header('HTTP/1.1 304 Not Modified');
    exit;
}
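You can verify the behavior with curl by replaying the Last-Modified value from a first response (example.com stands in for your own site):

# First request: note the Last-Modified header in the response
curl -sI http://example.com/page.php

# Replay that date within the hour and the script should answer with a 304
curl -sI -H 'If-Modified-Since: Thu, 29 Apr 2010 12:00:00 GMT' http://example.com/page.php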

PHP Wrapper Class for a Read-only database

Posted on January 4th, 2010 in General,Linux System Administration,MySQL,PHP,Programming by Brandon

This is a pretty special case of a database wrapper class: I wanted to discard any updates to the database, but have SELECT queries run against an alternative read-only database. In this instance, I have a planned outage of a primary database server, but would like the public-facing websites and web services to remain as accessible as possible.

I wrote this quick database wrapper class that will pass all SELECT queries on to a local replica of the database and silently discard any updates. On this site, almost all of the functionality still works, but it obviously isn’t saving any new information while the primary database is unavailable.

Here is my class. This is intended as a wrapper to an ADOdb class, but it is generic enough that I think it would work for many other database abstraction functions as well.

class db_unavailable {
    var $readonly_db;

    function __construct($readonly_db)
    {
        $this->readonly_db = $readonly_db;
    }

    function query($sql)
    {
        $args = func_get_args();
        if (preg_match("#(INSERT INTO|REPLACE INTO|UPDATE|DELETE)#i", $args[0])) {
            // echo "Unable to do insert/replace/update/delete query: $sql\n";
            return true;
        } else {
            return call_user_func_array(array($this->readonly_db, 'query'), $args);
        }
    }

    function __call($function, $args)
    {
        return call_user_func_array(array($this->readonly_db, $function), $args);
    }
}

I simply create my $readonly_db object pointing at the read-only replica, then create my main $db object as a new db_unavailable($readonly_db). Any SELECT queries against $db behave as they normally do, and data-modifying queries are silently discarded.
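Usage looks something like this (connection details are placeholders; NewADOConnection() is ADOdb’s standard connection factory):

include 'adodb/adodb.inc.php';

// Connect to the read-only replica
$readonly_db = NewADOConnection('mysqli');
$readonly_db->Connect('replica-host', 'user', 'pass', 'mydb');

// Wrap it: SELECTs pass through, writes are silently discarded
$db = new db_unavailable($readonly_db);

$rows = $db->query('SELECT * FROM books WHERE id=42');   // runs on the replica
$db->query('UPDATE books SET price=1 WHERE id=42');      // discarded, returns true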

LUG Presentation on SQL Basics

Posted on December 17th, 2009 in General,LUG,MySQL,PHP,Programming by Brandon

I gave a presentation tonight at my local Linux Users Group meeting on SQL Basics. I had a fun time preparing the presentation and made up a bunch of examples having to do with Santa’s database.

It started out with a simple table of kids who were either naughty or nice, and we added some reports to that. We then imported kids’ wish lists from CSV files, and from there we were able to generate some manufacturing reports for the workshop.

When we joined the wish list table with the kids table, we were able to generate a sleigh-loading report which included only gifts for kids who had been good. Then we went further and introduced several joins with some more involved mathematics to select gifts for kids within a certain radius of a given zip code.
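The sleigh-loading report, for example, was built on a join along these lines (table and column names here are reconstructions for illustration, not the actual slides):

SELECT kids.name, wish_lists.gift
FROM kids
JOIN wish_lists ON wish_lists.kid_id = kids.id
WHERE kids.nice = 1;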

The presentation is available for download here, and Brian recorded part of it, which is available to view on uStream.tv or here. (We’re still experimenting with getting the video recording set up correctly.)

PHP Code to Sign any Amazon API Requests

Posted on June 30th, 2009 in General,Linux System Administration,PHP,Programming by Brandon

Starting next month, all requests to the Amazon Product Advertising API need to be cryptographically signed. Amazon gave about three months’ notice, and the deadline is quickly approaching. I use the Amazon web services on several sites and came up with a fairly generic way to convert an existing URL to a signed URL. I’ve tested it with several sites and a variety of functions, and it is working well for me so far:

function signAmazonUrl($url, $secret_key)
{
    $original_url = $url;

    // Decode anything already encoded
    $url = urldecode($url);

    // Parse the URL into $urlparts
    $urlparts       = parse_url($url);

    // Build $params with each name/value pair
    $params = array();
    foreach (explode('&', $urlparts['query']) as $part) {
        if (strpos($part, '=') !== false) {
            list($name, $value) = explode('=', $part, 2);
        } else {
            $name = $part;
            $value = '';
        }
        $params[$name] = $value;
    }

    // Include a timestamp if none was provided
    if (empty($params['Timestamp'])) {
        $params['Timestamp'] = gmdate('Y-m-d\TH:i:s\Z');
    }

    // Sort the array by key
    ksort($params);

    // Build the canonical query string
    $canonical       = '';
    foreach ($params as $key => $val) {
        $canonical  .= "$key=".rawurlencode(utf8_encode($val))."&";
    }
    // Remove the trailing ampersand
    $canonical       = preg_replace("/&$/", '', $canonical);

    // Some common replacements and ones that Amazon specifically mentions
    $canonical       = str_replace(array(' ', '+', ',', ';'), array('%20', '%20', urlencode(','), urlencode(':')), $canonical);

    // Build the sign
    $string_to_sign             = "GET\n{$urlparts['host']}\n{$urlparts['path']}\n$canonical";
    // Calculate our actual signature and base64 encode it
    $signature            = base64_encode(hash_hmac('sha256', $string_to_sign, $secret_key, true));

    // Finally re-build the URL with the proper string and include the Signature
    $url = "{$urlparts['scheme']}://{$urlparts['host']}{$urlparts['path']}?$canonical&Signature=".rawurlencode($signature);
    return $url;
}

To use it, just wrap your Amazon URL with the signAmazonUrl() function and pass it your original string and secret key as arguments. As an example:

$xml = file_get_contents('http://webservices.amazon.com/onca/xml?some-parameters');

becomes

$xml = file_get_contents(signAmazonUrl('http://webservices.amazon.com/onca/xml?some-parameters', $secret_key));

Like most of the variations of this, it requires the hash extension for the hash_hmac() function. That function is generally available in PHP 5.1+; older versions will need to install it via PECL. I tried a couple of implementations that compute the hash in pure PHP code, but none worked, and installing the extension via PECL was pretty simple.

(Note that I’ve slightly revised this code a couple of times to fix small issues that have been noticed)

Array versus String in CURLOPT_POSTFIELDS

Posted on May 29th, 2009 in General,PHP,Programming by Brandon

The PHP Curl Documentation for CURLOPT_POSTFIELDS makes this note:

This can either be passed as a urlencoded string like ‘para1=val1&para2=val2&…’ or as an array with the field name as key and field data as value. If value is an array, the Content-Type header will be set to multipart/form-data.

I’ve always discounted the importance of that, and in most cases it doesn’t matter much. The destination server and application likely know how to deal with both multipart/form-data and application/x-www-form-urlencoded equally well. However, the data is passed in very different ways by these two mechanisms.

application/x-www-form-urlencoded

application/x-www-form-urlencoded is what I generally think of when doing POST requests. It is the default when you submit most forms on the web. It works by appending a blank line and then your urlencoded data to the end of the POST request. It also sets the Content-Length header to the length of your data. A request submitted with application/x-www-form-urlencoded looks like this (somewhat simplified):

POST /some-form.php HTTP/1.1
Host: www.brandonchecketts.com
Content-Length: 23
Content-Type: application/x-www-form-urlencoded

name=value&name2=value2

multipart/form-data

multipart/form-data is much more complicated, but more flexible. Its flexibility is required when uploading files. It works in a manner similar to MIME types. The HTTP request looks like this (simplified):

POST / HTTP/1.1
Host: www.brandonchecketts.com
Content-Length: 244
Expect: 100-continue
Content-Type: multipart/form-data; boundary=----------------------------26bea3301273

And then subsequent packets are sent containing the actual data. In my simple case with two name/value pairs, it looks like this:

HTTP/1.1 100 Continue
------------------------------26bea3301273
Content-Disposition: form-data; name="name"

value
------------------------------26bea3301273
Content-Disposition: form-data; name="name2"

value2
------------------------------26bea3301273--

CURL usage

So, when sending POST requests in PHP/cURL, it matters whether you pass CURLOPT_POSTFIELDS an array or a string: an array produces multipart/form-data, while a urlencoded string produces application/x-www-form-urlencoded.

Passing the array directly will generate the multipart/form-data version:

$data = array('name' => 'value', 'name2' => 'value2');
curl_setopt($curl_object, CURLOPT_POSTFIELDS, $data);

And this simple change will ensure that it uses the application/x-www-form-urlencoded version:

$data = array('name' => 'value', 'name2' => 'value2');
$encoded = '';
foreach($data as $name => $value){
    $encoded .= urlencode($name).'='.urlencode($value).'&';
}
// chop off the trailing ampersand
$encoded = substr($encoded, 0, -1);
curl_setopt($curl_object, CURLOPT_POSTFIELDS, $encoded);
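Alternatively, PHP’s built-in http_build_query() performs the same urlencoding in a single call:

$data = array('name' => 'value', 'name2' => 'value2');
curl_setopt($curl_object, CURLOPT_POSTFIELDS, http_build_query($data));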