Cacti stops updating graphs after upgrade to version 0.8.6j

Posted on January 31st, 2007 in General,Linux System Administration,Programming by Brandon

It turns out the latest update to Cacti, the popular SNMP and RRDTool graphing program, has a bug that prevents graphs based on SNMP data from updating after the upgrade. The problem has to do with the PHP “snmpgetnext” function, which is not implemented in PHP 4.

There is a discussion on Cacti’s forum at http://forums.cacti.net/about19199.html where a developer posts an updated ping.php that resolves the problem.

Internet Explorer Oddities

Posted on January 31st, 2007 in General,Programming by Brandon

I spent about an hour debugging an odd behavior in Internet Explorer. The problem site stored some session data in PHP’s $_SESSION variable for display later. A form used parameters from the GET request to populate some data in the user’s $_SESSION. Upon trying to retrieve the data on a subsequent page, though, it was missing or incorrect, but only in Internet Explorer.

PHP session failures are typically a server-side problem, so it didn’t make sense that the browser was causing one. I spent a while verifying that the sessions were, in fact, working properly in all of the different browsers, but that still didn’t explain the problem.
The odd behavior appears when the page has an image tag with a blank “src” attribute. This causes most browsers to try to fetch the current page, but Internet Explorer tries to fetch the parent directory.
For example, if your page is http://www.somesite.com/somedirectory/somepage.php, most browsers will request that same URL for images with a blank src attribute. Internet Explorer, though, will request http://www.somesite.com/somedirectory/

Neither case is really what one would expect. I would think that without a destination, the browser wouldn’t try to fetch anything. Fetching the page that called it (obviously not a graphic) or the parent directory (why would it do that?) doesn’t really make sense.

In this case, fetching the parent directory hit my problem script, since it was the DirectoryIndex (index.php). Calling the script without any parameters erased the saved variable I was looking for, so it was missing on the subsequent page.
I guess the moral of the story is not to leave images with a blank src attribute, because browsers will do weird things with it.

I’ve written up a sample page at http://www.brandonchecketts.com/examples/ie_blank_img_src_tag/index.php to demonstrate what I’m talking about.
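One way to avoid the problem entirely is to never emit a blank src attribute in the first place. Here is a minimal sketch of a template helper; img_tag() is my own hypothetical function, not from any particular framework:

```php
<?php
// Hypothetical helper: only emit an <img> tag when we actually have a URL.
// A blank src triggers an extra, browser-dependent request back to the server.
function img_tag($src, $alt = '')
{
    if ($src === '') {
        return '';  // emit nothing rather than <img src="">
    }
    return '<img src="' . htmlspecialchars($src) . '" alt="' . htmlspecialchars($alt) . '" />';
}

echo img_tag('');                          // prints nothing
echo img_tag('/images/logo.gif', 'logo');  // prints a normal image tag
```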

The best online todo list

Posted on January 29th, 2007 in General by Brandon

I’ve been looking for a while for a web-based todo list that I like.  I’ve tried a bunch, but finally decided on gubb.net because it is the most useful to me.  It makes good use of Ajax functionality, and just functions the way I like.  Thanks to WebWorkerDaily.com for pointing it out.

My Data featured on swivel.com home page

Posted on January 25th, 2007 in General,Virtual Economies by Brandon

Swivel.com is a new service that allows you to compare separate data sets in creative ways. I recently posted some historic virtual currency prices there from World of Warcraft, Star Wars Galaxies, and Final Fantasy XI from some data that I already collect on GamePriceWatcher.com. The guys there were interested in it and posted some graphs based on my data on their home page. So that was cool for me.

World of Warcraft Gold Overall Prices vs European Prices

Link building and PageRank stuff

Posted on January 25th, 2007 in General by Brandon

My friend Kevin, over at www.utahsysadmin.com, has a much better grasp of PageRank than I do. He recently noticed that his sites finally got a PageRank assigned, which prompted me to take another look at some of my own sites. GamePriceWatcher.com has finally increased from a PR of 2 to a 3. Google’s index now shows a few more of the links pointing at me too, which is nice. I’ve been spending time recently trying to get some links to my sites, which is evidently paying off.

Poor PHP Programming

Posted on January 22nd, 2007 in General by Brandon

Lately, I’ve been working on numerous projects where I’m debugging or updating other people’s code. I’m constantly amazed at the poor programming that goes into a lot of these sites. They are filled with SQL injection vulnerabilities, confusing file structures, and even remote code execution problems.

Properly escape database queries – By including a user-provided variable directly in a query, you are opening yourself up to SQL injection problems. For example, this code:

mysql_query("SELECT * FROM sometable WHERE somecolumn = '" . $_POST['somevalue'] . "'");

is just plain bad! You are allowing the user to insert arbitrary data into the query without sanitizing it first.    Always sanitize your variables before using them in a query, or better yet, use a database abstraction layer like PEAR::DB that does the escaping for you.
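To see what escaping actually buys you, here is a short sketch. I’m using addslashes() below only because it needs no database connection; in real code prefer mysql_real_escape_string() or PEAR::DB placeholders:

```php
<?php
// Demonstrate why unescaped input is dangerous. addslashes() is used here
// purely for illustration; prefer mysql_real_escape_string() or placeholders.
$_POST['somevalue'] = "x' OR '1'='1";   // a classic injection attempt

$bad  = "SELECT * FROM sometable WHERE somecolumn = '" . $_POST['somevalue'] . "'";
$good = "SELECT * FROM sometable WHERE somecolumn = '" . addslashes($_POST['somevalue']) . "'";

echo $bad  . "\n";  // the attacker's quote breaks out of the string literal
echo $good . "\n";  // the quote is escaped, so it stays inside the literal
```

With PEAR::DB, the placeholder syntax `$db->query('... WHERE somecolumn = ?', array($_POST['somevalue']))` does this escaping for you.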

Don’t store user passwords in clear text! I hate it when sites do this. Combined with SQL injection attacks, it could allow hackers to view all of the usernames and passwords in your database. At the very least, you should store the password as an MD5 hash, preferably with some salt, so that even if an attacker manages to read the values in your table, they are much more difficult to use. And since most users tend to re-use passwords, clear-text storage also lets hackers use stolen credentials to access other accounts not even associated with your site.
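A minimal sketch of salted hashing follows; make_salt() and the storage layout are my own invention, not from any particular site:

```php
<?php
// Sketch: store a salt and md5(salt . password), never the password itself.
// make_salt() is a hypothetical helper; any random per-user string will do.
function make_salt($length = 8)
{
    $chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
    $salt  = '';
    for ($i = 0; $i < $length; $i++) {
        $salt .= $chars[mt_rand(0, strlen($chars) - 1)];
    }
    return $salt;
}

function hash_password($password, $salt)
{
    return md5($salt . $password);
}

// On signup: store both $salt and $stored in the users table.
$salt   = make_salt();
$stored = hash_password('secret', $salt);

// On login: recompute with the stored salt and compare.
$ok  = (hash_password('secret', $salt) === $stored);  // true
$bad = (hash_password('guess',  $salt) === $stored);  // false
```

Even this simple scheme means an attacker who dumps the table has to brute-force each password separately instead of reading it directly.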

Poor file structures can be extra confusing. One of the sites I’m working with now has no fewer than three copies of most of the code, spread between a half dozen directories with no clear association between them. Files in one directory include library files in a completely unrelated directory. In this case, a development branch was using a combination of production and development usernames to access a remote resource, causing extreme confusion and destroying the integrity of the data.

I’ve also recently become a convert to using Subversion to track code changes over time. I used to keep multiple copies of a file (include.php.OLD, include.php.1, OLDinclude.php, you know the drill), but Subversion makes it far easier to keep backup copies and refer back to them if something breaks.

The future of Television

Posted on January 17th, 2007 in General by Brandon

This recent story on Wired caught my attention

http://www.wired.com/news/wiredmag/0,72506-1.html?tw=wn_story_page_next1

It’s about a new company called Joost that has plans to reinvent the television market as we know it today.  From what I’ve read, it sounds like they will succeed.

Essentially, the designers of Kazaa and Skype are applying a lot of the concepts they learned in those ventures to the television market. Encrypted ten-second video clips will be streamed from peers and reassembled into a full program. Their design also adds a lot of modern social networking concepts, like inviting others to view your show and applying tags to clips.

It will be interesting to follow how this technology develops.

First experience with Subversion

Posted on January 12th, 2007 in General by Brandon

I’ve long realized the importance of version control, but since I tend to work on most projects myself, I’ve never really been forced to use it. Recently, though, I’ve been working on several different websites simultaneously, and I’ve found myself making changes to code on one site and then having to make the same change on each of the others.

Subversion is the perfect answer to this situation. I’ve recently set up a Subversion repository for my common code, and I can now work on the code on one site and ‘commit’ it. Then I just update my local copy on another site, and all of my work is merged.

Of course, I have to take care to understand what the updates will do and make sure they don’t break functionality on each site, but the usefulness of being able to share code like this is amazing.

What companies can do to avoid phishing scams

Posted on January 5th, 2007 in General by Brandon

A recent blog post about the Google Blacklist brought up a thought I had a while ago about reducing the effectiveness of phishing. In his post, Micheal says that “The pages are generally exact replicas of the original web page and generally pull graphics (*.jpg, *.gif, etc.) from the legitimate web site.” This has been my experience as well: the phishing page actually includes graphics from the legitimate site.

I can see a couple reasons for this.

  1. The phishers have some concern about bandwidth or disk space usage on their hosts.
  2. When the page is loading, some browsers will say “waiting for www.paypal.com”, which helps to make the site appear more legitimate.
  3. Phishers are lazy and don’t want the added work of changing the source and uploading more files to their web host.

In any case, the fact that these phishing sites pull graphics from the legitimate site provides an easy way for the target site to identify them. On 90% (or more) of browsers, when the browser requests a graphic, it sends a Referer header that tells the web server which page included the graphic.

For example, if you are hitting my site now, your browser requested this graphic:

http://www.brandonchecketts.com/wp-content/themes/cordobo-green-park-09-beta-09/images/h3_bg.gif

When your browser requested it, it also told my web server which page the request originated from. This is the default behavior for all major browsers. The request in my Apache log looks like this:

11.22.33.44 - - [05/Jan/2007:09:59:58 -0500] "GET /wp-content/themes/cordobo-green-park-09-beta-09/images/h3_bg.gif HTTP/1.1" 200 1581 "http://www.brandonchecketts.com/wp-content/themes/cordobo-green-park-09-beta-09/style.css" "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1"

Basically, when your web browser requested the graphic from my server, it reported that the file that instructed it to do so was the “style.css” stylesheet.

The phishing targets (PayPal, eBay, Bank of America, etc.) could easily look through their logs to identify the phishing sites that are including their graphics.

Or, better yet, instead of just serving the static images, they could program their web server to look at the Referer field on each request. If it comes from a legitimate source, display the normal graphic. If it comes from an unknown source, display an alternate graphic that says “THIS IS NOT THE REAL PAYPAL SITE!”
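Such a check could be a small script sitting in front of each image. The sketch below is my own; choose_image() and the file names are assumptions, not any site’s actual code:

```php
<?php
// Hypothetical referrer check a target site could put in front of its images.
// choose_image() and the file names are invented for illustration.
function choose_image($referer, $allowed_hosts)
{
    // Some browsers and proxies strip the Referer header, so treat a
    // missing value as legitimate rather than scaring real users.
    if ($referer === '') {
        return 'logo.gif';
    }
    $host = parse_url($referer, PHP_URL_HOST);
    return in_array($host, $allowed_hosts)
        ? 'logo.gif'                   // request came from one of our own pages
        : 'not_the_real_site.gif';     // an unknown page is hotlinking the image
}

$allowed = array('www.paypal.com', 'paypal.com');
$image   = choose_image(
    isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '',
    $allowed
);
// In the real script: header('Content-Type: image/gif'); readfile($image);
```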

Who knows why they haven’t done this yet. I could whip up a script to do it in about an hour!

2007 Predictions

Posted on January 3rd, 2007 in General,Virtual Economies by Brandon

Lots of the blogs I read are making predictions for 2007, so I figured I’d chime in with my own (mostly agreeing with others).

- Second Life will get a bunch of negative press (finally)

- The biggest news in Virtual Worlds will be when Areae debuts its upcoming Virtual World product.  Presumably, here are some of the characteristics it will have:

  • A broad environment with a loose storyline
  • The world will piece together chunks of content provided by the users, much the same way that a news reader pulls in RSS feeds from a variety of sources.
  • Users will be able to provide much of the content.  I’m not sure how they will accomplish this, but it will be something like creating web sites, as opposed to creating 3D content (like in Second Life).
  • Along with the previous point, I suspect that users will be able to host the content themselves somehow.

- I’ll finally find a way to make a full-time living with online games
