Web Programming, Linux System Administration, and Entrepreneurship in Athens, Georgia


Tracking down how hackers gain access through web apps

Hackers commonly use vulnerabilities in web applications to gain access to a server. Sometimes, though, it can be difficult to track down exactly how they gained access, especially if the server hosts a bunch of websites with lots of potentially vulnerable scripts.

I’ve tracked down more of these than I can count, and have sort of developed a pattern for investigating. Here are some useful things to try:

1- Look in /tmp and /var/tmp for possibly malicious files. These directories are usually world-writable and commonly used to temporarily store files. Sometimes the files are disguised with leading dots, or they may be named to look similar to other files in the directory, like “. ” (dot space), or like session files named sess_something.

If you find any suspicious files, you can use their timestamps to look through the Apache logs for the exact hit they came from.
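For example, something like this will surface hidden and recently modified entries along with their timestamps (the one-day window here is just an example):

ls -latr /tmp /var/tmp
find /tmp /var/tmp -mmin -1440 -ls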

2- If a rogue process is still running, look at the /proc entry for that process to learn more about it. The files in /proc/<PID> will tell you things like the executable file that created the process, its working directory, its environment, and plenty more. Usually, the rogue processes are running as the Apache user (httpd, nobody, apache).

If all of the rogue processes were run by the Apache user, then the hacker likely didn’t gain root access. If you have rogue processes that were run by root, the cleanup is much harder. Usually the only truly safe method is to start over with a clean installation.
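To inspect a rogue process, something like this works (12345 is a placeholder PID):

ls -l /proc/12345/exe /proc/12345/cwd
tr '\0' ' '  < /proc/12345/cmdline; echo
tr '\0' '\n' < /proc/12345/environ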

3- netstat -l will help you identify processes that are listening for incoming connections. Often these are Perl scripts. Sometimes they are named to look legitimate, like ‘httpd’, so pay close attention. netstat -n will help you see the current connections your server has to others.
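On Linux, adding -p ties each socket to its owning PID and program name, which makes impostors easier to spot:

netstat -lnp                      # listening sockets with owning PID/program
netstat -anp | grep ESTABLISHED   # connections currently open to other hosts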

4- Look in your error logs for files being downloaded with wget. A common tactic is for hackers to run a wget command to download another file with more malicious instructions. Fortunately, wget writes to STDERR, so its output usually shows up in the error logs. Something like this is evidence of a successful wget:

--20:30:40--  http://somehackedsite.com/badfile.txt
            => `badfile.txt'
Resolving somehackedsite.com... 12.34.56.78

Connecting to somehackedsite.com[12.34.56.78]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12,345 [text/plain]

     0K .......... ......                                     100%  263.54 KB/s

20:30:50 (263.54 KB/s) - `badfile.txt' saved [12,345/12,345]

You can use this information to try and recreate what the hacker did. Look for the file they downloaded (badfile.txt in this case) and see what it does. You can also use these timestamps to look through the access logs to find the vulnerable script.

Since wget is such a commonly used tool for this, I like to create a .wgetrc file containing bogus proxy information, so that even if a hacker manages to invoke a download, it won’t work. Create a .wgetrc file in Apache’s home directory with this content:

http_proxy = http://bogus.dontresolveme.com:19999/
ftp_proxy = http://bogus.dontresolveme.com:19999/
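You can verify the block by running wget as the web server user (the account name varies by distribution; ‘apache’ here is an assumption):

su -s /bin/bash apache -c 'wget http://example.com/'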

5- If you were able to identify any timestamps, you can grep through the Apache logs to find requests from that time. If you have a well-structured server with logs in a consistent place, you can use a command like this to search all of the log files at once:

grep "01/Jun/2007:10:20:" /home/*/logs/access_log

I usually leave out the seconds field because requests sometimes take several seconds to execute. If you found a server name or file name used in the wget, you can try searching for those too:

grep "somehackedsite.com" /home/*/logs/access_log

6- Turn off PHP’s register_globals by default and only enable it where truly needed. If you write PHP apps, learn how to program securely, and never rely on register_globals being on.
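It can be turned off server-wide in php.ini, or per-site when PHP runs as an Apache module:

; php.ini
register_globals = Off

# .htaccess or vhost config, when PHP runs as an Apache module
php_flag register_globals off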

What a difference a blank line can make

I had a customer today who had problems with a PHP script that output a Microsoft Word document. The script was pretty simple and just did some authentication before sending the file to the client. But when the document was opened in Word, it tried to convert it into a different format and displayed only gibberish.

The customer had posted his problem on some forums and was told that upgrading from PHP 5.1.4 to PHP 5.2 should fix the problem. Well, it didn’t. In fact, the PHP 5.2 version had a weird bug where a PDO object would overwrite data at the wrong memory location; in this case, a call to fetchAll() was overwriting the username stored in the $_SESSION variable, which in turn was breaking all of the site’s authentication. After digging in enough to find that out, it seemed best to revert to PHP 5.1. Once that was done, we were back to the original problem with the Word document.

The headers he was sending all looked okay. Here’s the relevant code to download a document:

$file = "/path/to/some_file.doc";
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: private",false); // required for certain browsers
header("Content-Type: application/msword");
header("Content-Disposition: attachment; filename=\"".basename($file)."\";" );
header("Content-Transfer-Encoding: binary");
header("Content-Length: ".filesize($file));
readfile($file);

I tried tweaking them a little to match a known-working site, but to no avail. I finally downloaded a copy of the file directly from the web server, bypassing the PHP script, and another copy through the PHP script, and saved both for comparison. Looking at them side-by-side in vi, I noticed an extra line at the top of the bad one. I removed the extra line, and the fixed copy opened fine in Word. After that, it was just a matter of finding the included file with the extra line in it. Sure enough, one of the configuration files had an extra line after the closing ?> tag. Removed that and everything worked correctly.
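The general lesson: anything outside the <?php ... ?> tags, even a stray blank line at the end of an included config file, gets sent to the client and will corrupt binary downloads. PHP swallows a single newline immediately after a closing ?>, but nothing beyond that, which is why many projects simply omit the closing tag in include files. A sketch of the safe style (the file name and settings here are made up):

<?php
// config.php -- note that there is no closing ?> tag, so stray
// whitespace at the end of the file can't leak into the output
$db_host = 'localhost';
$db_name = 'example';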

Avazio.com it is

After spending far too many hours looking up possible domain names, I’ve finally settled on avazio.com. This will be a place for me to sell programs I’ve written and to advertise system administration and programming services. There is no special meaning to the name; it’s just something that sounded cool and was available. I’ve spent a little time putting up a website there with a bit of information about the products and services I’m hoping to sell.

I’m actually quite happy with the look of the site. It’s nothing too complicated, but I created all of the graphics for it myself using an old version of Paint Shop Pro. Considering that I know nothing about graphics, I think it looks pretty good. I picked the colors from colorschemer.com (although I forget which scheme).

Redirecting WordPress ‘numeric’ permalinks

When I set up my blog, I configured it to use the ‘numeric’ permalinks that look something like https://www.brandonchecketts.com/archives/61

Of course, that is pretty straightforward, but it isn’t very readable, and we can probably get a few extra SEO points by putting the post’s title in the URL. However, just changing the permalink format is a bad idea, since I have a bunch (okay, a few) of incoming links that I don’t want to break.

So, I wrote a quick WordPress Plugin that redirects these old numeric links to my new format. Simply create a file named ‘numeric_permalink_redirect.php’ in your wp-content/plugins directory with this content:

<?php
/*
Plugin Name: BC Rewriter
Plugin URI: https://www.brandonchecketts.com/
Description: Redirect old requests to new permalink format
Author: Brandon Checketts
Version: 1.5
Author URI: https://www.brandonchecketts.com/
*/

// redirect old "numeric" type archives to our current permalink structure
function check_numeric_permalink()
{
    // REQUEST_URI looks like /archives/61 for the old permalink format
    if (preg_match("#^/archives/([0-9]+)#", $_SERVER['REQUEST_URI'], $matches)) {
        $post_id = $matches[1];
        $url = get_permalink($post_id);
        header("HTTP/1.1 301 Moved Permanently");
        header("Location: $url");
        exit;
    }
}

add_action('init', 'check_numeric_permalink');

?>

That will now do a 301 permanent redirect to the new URL, so you shouldn’t lose anybody, and the search engines should update their incoming links.

Multi-threaded perl

I’ve been experimenting with multi-threading in Perl for a new project, and am impressed with how straightforward it is. Before digging into it, I never really considered doing anything with it because it always seemed kind of ‘mysterious’ to me. Now I’m seeing how useful it is to have multiple threads that are able to share variables.

In the application I’m rewriting, I used to have one script that listened for network data and saved it out to a file, and another script that read through the output files and inserted the data into a database. Now, with a multi-threaded program, I just have one thread that listens and another thread (or several) that parses the data and manipulates it however I want. That saves a lot of disk activity and makes the program much more efficient and straightforward.
I’m also able to use the Thread::Queue module to create a queue that the listener thread adds to, with ‘worker’ threads that pull from it and format/summarize/process the data however I need, as in the sketch below.
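Here’s a minimal sketch of that pattern; the payloads and worker count are made up for illustration:

#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $queue = Thread::Queue->new();

# Worker threads block on the queue and process items as they arrive
my @workers = map {
    threads->create(sub {
        while (defined(my $item = $queue->dequeue())) {
            # parse/summarize/insert into the database here
            print 'worker ' . threads->tid() . " got: $item\n";
        }
    });
} 1 .. 3;

# The listener thread (simulated here) just enqueues incoming data
$queue->enqueue("packet $_") for 1 .. 10;

# One undef per worker tells each one to exit, then we wait for them
$queue->enqueue(undef) for @workers;
$_->join() for @workers;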

I’m looking forward to seeing how this all works out.  I’m impressed so far.

The coolest, most efficient way to copy a directory between servers

I was recently quizzed about the quickest, most efficient way to copy an entire directory between servers. I typically do this by tarring it up on one server, copying it to the other, then extracting it. However, this has a couple of obvious problems. One is that it requires large chunks of disk space to hold the archive file on both the source and the destination; if you are low on disk space, this can be a real pain. The other is that it wastes time, since it reads through all of the files three times (archive, copy, extract).

The original thought I had was to use “scp -r” which will recursively copy a directory over to the destination. This, however, doesn’t copy directories that start with a dot, and it doesn’t preserve file ownership information.

The best way is to use a combination of tar and ssh. The idea is to tar the files up to STDOUT, then create an SSH session to the remote host and extract from STDIN there. Once you’ve got the idea, the command is pretty simple:

tar -cvzf - /path/to/local/dir | ssh root@remotebox "cd /path/to/extract; tar -xvzf -"

That’s it. One simple command can save you tons of time by creating, copying, and extracting all at once.

Cacti stops updating graphs after upgrade to version 0.8.6j

It turns out the latest update to Cacti, the popular SNMP and RRDTool graphing program, has a bug that keeps graphs based on SNMP data from updating after the upgrade. The problem has to do with the PHP “snmpgetnext” function, which is unimplemented in PHP 4.

There is a discussion on the Cacti forums at https://forums.cacti.net/about19199.html where a developer posts a new ping.php that resolves the problem.

Internet Explorer Oddities

I spent about an hour debugging a dumb behavior in Internet Explorer. The problem site stored some session data in PHP’s $_SESSION variable for display later. The form would use parameters from the GET request to populate some data in the user’s $_SESSION. Upon trying to retrieve the data on a subsequent page, though, it was missing or incorrect, but only in Internet Explorer.

Failing PHP sessions are typically a server-side problem, so it didn’t make sense that the browser was causing it. I spent a while verifying that the sessions were, in fact, working properly in all the different browsers, but that still didn’t explain the problem.

The odd behavior came, though, when the page had an image tag with a blank “src” parameter. This causes most browsers to try to fetch the current page, but Internet Explorer tries to fetch the parent directory.

For example, if your page is https://www.somesite.com/somedirectory/somepage.php, most browsers will request that same URL for images with a blank src parameter. Internet Explorer, though, will try to fetch https://www.somesite.com/somedirectory/
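A blank src usually comes from a template variable that happened to be empty, something like this hypothetical fragment:

<img src="<?php echo $thumbnail_url; ?>" alt="thumbnail" />
<!-- renders as <img src="" alt="thumbnail" /> when $thumbnail_url is empty -->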

Neither case is really what one would expect. I would think that without a destination, the browser wouldn’t try to fetch anything. Fetching the page that contains the tag (obviously not a graphic) or the parent directory (why would it do that?) doesn’t make much sense.

In this case, fetching the parent directory hit my problem script, since it was the DirectoryIndex (index.php). Calling the script without any parameters erased the saved variable I was looking for, so the subsequent page was missing it.

I guess the moral of the story is not to leave images with a blank src parameter, because browsers will do weird things with them.

ldssd.org is now live

I’ve spent the past few days working on a new website at ldssd.org. The site has most of the LDS Scriptures available online and can deliver them through an RSS feed that sends one chapter each day. The site still has a couple of small issues that should be fixed soon, but I wanted to make sure it ‘officially’ launched today, in time for people (me) to keep their New Year’s resolutions to read the scriptures each day.

Database encryption made easy

I’ve always wondered how one would securely store sensitive information in a MySQL database. A recent project gave me the opportunity to work on it, and I’ve been impressed with how easy it is to implement. MySQL provides an easy interface for encrypting data before storing it in the database: simply use the AES_ENCRYPT and AES_DECRYPT functions when writing to or reading from a table.

Simply make your column a BLOB field, then use something like this (using PEAR::DB syntax) to write to the table:

$db->query("
UPDATE sometable
SET    some_col = AES_ENCRYPT( ?, ? )
WHERE  something_else = ?
", array( $sensitive_value, $encryption_key, $index));

and something like this to read it back out:

$value = $db->getOne("
SELECT AES_DECRYPT( some_col, ?)
FROM   sometable
WHERE something_else = ?
", array( $encryption_key, $index));