Web Programming, Linux System Administration, and Entrepreneurship in Athens, Georgia


Tracking down how hackers gain access through web apps

Hackers commonly use vulnerabilities in web applications to gain access to a server. Sometimes, though, it can be difficult to track down exactly how they got in, especially if the server hosts a bunch of websites and there are lots of potentially vulnerable scripts.

I’ve tracked down more of these than I can count, and have sort of developed a pattern for investigating. Here are some useful things to try:

1- Look in /tmp and /var/tmp for possibly malicious files. These directories are usually world-writable and are commonly used to temporarily store files. Sometimes the files are disguised with leading dots, or they may be named something that looks similar to other files in the directory, like “. ” (dot-space), or like a session file named sess_something.

If you are able to find any such files, you can use their timestamps to look through the Apache logs and find the exact hit that they came from.
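Something along these lines (a minimal sketch; adjust the paths and time window to your environment) will surface recently modified files in those directories along with the timestamps you need:

# List everything, including dot-files, sorted by modification time
ls -lat /tmp /var/tmp

# Show files modified in the last two days with full timestamps,
# ready to be matched against Apache access logs
find /tmp /var/tmp -mtime -2 -printf '%T+ %u %p\n' 2>/dev/null | sort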

2- If a rogue process is still running, look at the /proc entry for that process to learn more about it. The files in /proc/<PID> will tell you things like the executable file that created the process, its working directory, its environment, and plenty more details. Usually, the rogue processes are running as the Apache user (httpd, nobody, apache).

If all of the rogue processes were being run by the Apache user, then the hacker likely didn’t gain root access. If you have rogue processes that were being run by root, it is much harder to clean up after. Usually the only truly safe method is to start over with a clean installation.
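For example, assuming the suspicious PID is 12345 (a placeholder), these commands pull the most useful details out of /proc:

# The executable behind the process and its current working directory
ls -l /proc/12345/exe /proc/12345/cwd

# The full command line (arguments are NUL-separated in /proc)
tr '\0' ' ' < /proc/12345/cmdline; echo

# Which user the process is running as
ps -o user= -p 12345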

3- netstat -l will help you identify processes that are listening for incoming connections. Often these are a Perl script. Sometimes they are named things that look legitimate, like ‘httpd’, so pay close attention. netstat -n will help you see the current connections that your server has to others.
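Adding the -p flag (which generally requires root) shows which program owns each socket, which makes rogue listeners much easier to spot:

# Listening sockets, numeric addresses, with the owning PID/program
netstat -lnp

# Current connections to other hosts
netstat -np | grep ESTABLISHED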

4- Look in your error logs for files being downloaded with wget. A common tactic is for hackers to run a wget command to download another file with more malicious instructions. Fortunately, wget writes to STDERR, so its output usually shows up in the error logs. Something like this is evidence of a successful wget:

--20:30:40--  http://somehackedsite.com/badfile.txt
            => `badfile.txt'
Resolving somehackedsite.com... 12.34.56.78

Connecting to somehackedsite.com[12.34.56.78]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12,345 [text/plain]

     0K .......... ......                                     100%  263.54 KB/s

20:30:50 (263.54 KB/s) - `badfile.txt' saved [12,345/12,345]

You can use this information to try to recreate what the hacker did. Look for the file they downloaded (badfile.txt in this case) and see what it does. You can also use these timestamps to look through the access logs to find the vulnerable script.

Since wget is a commonly used tool for this, I like to create a .wgetrc file that contains bogus proxy information, so that even if a hacker is able to attempt a download, it won’t work. Create a .wgetrc file in Apache’s home directory with this content:

http_proxy = http://bogus.dontresolveme.com:19999/
ftp_proxy = http://bogus.dontresolveme.com:19999/

5- If you were able to identify any timestamps, you can grep through the Apache logs to find requests from that time. If you have a well-structured server where the logs are kept in a consistent place, then you can use a command like this to search all of the log files at once:

grep "01\\/Jun\\/2007:10:20:" /home/*/logs/access_log

I usually leave out the seconds field because requests sometimes take several seconds to execute. If you have a server name or file name that you found was used by wget, you can try searching for those too:

grep "somehackesite.com" /home/*/logs/access_log

6- Turn off PHP’s register_globals by default and only enable it if truly needed. If you write PHP apps, learn how to program securely, and never rely on register_globals being on.
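For reference, the global default lives in php.ini, and with mod_php it can also be disabled per-directory in a .htaccess file (assuming the relevant AllowOverride permissions are in place):

; in php.ini - the server-wide default
register_globals = Off

# or per-directory in a .htaccess file (mod_php)
php_flag register_globals off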

Disabling PHP processing for an individual file

I sometimes want to post samples of PHP scripts on my website. Since the web server is configured to parse files that end in .php, simply linking to a PHP file means it gets executed instead of having its contents displayed. In the past, I’ve always made a copy of the file with a .txt extension to have it displayed as text/plain. That way is kind of clumsy, though. If users want to save the file, they download it as a .txt and have to rename it to .php.
Fortunately, Apache has a way to do just about anything. To configure it not to parse a specific PHP file, you can use this in your Apache configuration:

<Files "some.file.php">
   RemoveHandler .php
   ForceType text/plain
</Files>

If you have AllowOverride FileInfo enabled, this can also be placed in a .htaccess file. It should work for other file types like .cgi or .pl files as well. You can substitute a FilesMatch directive to have it match multiple file names based on a regular expression, as in the sketch below.
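For example, assuming the sample scripts all share a common prefix (the pattern here is just an illustration), something like this would serve all of them as plain text:

<FilesMatch "^sample-.*\.php$">
   RemoveHandler .php
   ForceType text/plain
</FilesMatch>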

What a difference a blank line can make

I had a customer today who had problems with a PHP script that output a Microsoft Word document. The script was pretty simple and just did some authentication before sending the file to the client. But, when the document was opened in Word, it tried to convert it into a different format and would only display gibberish.

The customer had posted his problem on some forums and was told that upgrading from PHP 5.1.4 to PHP 5.2 should fix the problem. Well, it didn’t. In fact, the PHP 5.2 version had some weird bug where a PDO object would overwrite stuff in the wrong memory location. In this case, a call to fetchAll() was overwriting the username stored in the $_SESSION variable, which in turn was messing up all of the site’s authentication. After digging into it to find that out, it seemed best to revert back to PHP 5.1. Once that was completed, we were back to the original problem with the Word document.

The headers he was sending all looked okay. Here’s the relevant code to download a document:

$file = "/path/to/some_file.doc";
header("Pragma: public");
header("Expires: 0");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: private",false); // required for certain browsers
header("Content-Type: application/msword");
header("Content-Disposition: attachment; filename=\"".basename($file)."\";" );
header("Content-Transfer-Encoding: binary");
header("Content-Length: ".filesize($file));
readfile($file);

I tried tweaking them a little to match a known-working site, but to no avail. I finally downloaded a copy of the file directly from the web server, bypassing the PHP script, and also downloaded a copy through the PHP script, and saved them both for comparison. After looking at them side-by-side in vi, I noticed an extra line at the top of the bad one. I removed the extra line, and the fixed copy opened fine in Word. After that, it was just a matter of finding the included file with an extra line in it. Sure enough, one of the configuration files had an extra line after the closing ?> tag. Removed that and everything worked correctly.
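To illustrate the culprit (config.php here is just a hypothetical include):

<?php
// config.php - a hypothetical include file
$some_setting = 'value';
// Anything after the closing ?> tag below, even a single blank line, is sent
// straight to the browser, which corrupts binary output like this Word
// document. Omitting the closing tag entirely is the easiest way to avoid it.
?>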

Stupid Advertisers

Advertisers can be dumb. I don’t read a whole lot of magazines, but a couple of advertisements in eWeek have made me laugh.

First is the latest VeriSign ad that says ‘Introducing the biggest advancement to Internet security in the last ten years’. It turns out that this amazing new advancement is that they now sell an ‘Extended Validation SSL’ certificate. The amazing feature that they’ve managed to incorporate with this is that modern browsers will now display a green address bar when they detect a site using one of these new certificates.

So that’s it? The big advancement in security is that the address bar turns green? You’ve gotta be kidding me. Sounds to me like just a way for VeriSign to make more money.

And my favorite stupid-funny advertisement is a Microsoft ad that used to be inside the front cover of eWeek. You’ll notice the text that claims 5-nines uptime with a star next to it. Then read the fine print in the footnote for that star, which says “Results Not Typical”. It still makes me laugh. Too bad they don’t run this ad anymore.

[Images: the Microsoft ad, the text, and the fine print]

Avazio.com it is

After spending far too many hours looking up possible domain names, I’ve finally settled on avazio.com. This will be a place for me to sell programs that I’ve written and to advertise system administration and programming services. There is no special meaning or anything to the name. It’s just something that sounded cool and was available. I’ve spent a little time putting up a website there with a little bit of information about the products and services that I’m hoping to sell.

I’m actually quite happy with the look of the site. It’s nothing too complicated, but I have created all of the graphics for it myself using an old version of Paint Shop Pro. Considering that I know nothing about graphics, I think that it looks pretty good. I picked the colors from colorschemer.com (although I forget which one).

DHCP ‘always-broadcast’ confusion

I run a DHCP server using Linux’s dhcpd program to serve addresses to a bunch of clients.  These clients are connected over several wireless links, and the radios are sometimes quirky.  Specifically, some clients never get the DHCPOFFER unless the ‘always-broadcast’ parameter is on.  This usually works out fine.

Today, however, we had a problem where we just saw a bunch of incoming DHCPDISCOVER messages, to which the server would reply with a DHCPOFFER. The devices would just continually send discover messages, and none would ever DHCPREQUEST an address.

As best I can tell, the clients were confused when they received multiple broadcast responses to their DHCPDISCOVER message. The client would then send another discover message, which just caused a never-ending loop of discovers and offers.

To resolve the problem, I turned off always-broadcast for a few minutes.  This made the clients wait for a random period of time before discovering again.  Some clients accepted the IP fine even though it wasn’t broadcast.  For the ones that didn’t, I then re-enabled always-broadcast, and they picked up an address the next time that they tried.

For a long term solution, we’re working on subnetting the two /24 networks that are currently together into smaller /26 or /27 blocks.  That should reduce the possibility of having this happen again.
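For reference, here is a minimal sketch of how always-broadcast can be scoped in dhcpd.conf; the subnet, range, and router values are made up for illustration:

# Enable always-broadcast only for the subnet with the quirky wireless clients
subnet 10.0.0.0 netmask 255.255.255.192 {
    range 10.0.0.10 10.0.0.60;
    option routers 10.0.0.1;
    always-broadcast on;
}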

The coolest, most efficient way to copy a directory between servers

I was recently quizzed about the quickest, most efficient way to copy an entire directory between servers. I typically do this by tarring it up on one server, copying it to the other, then extracting it. However, this has a couple of obvious problems. One is that it requires large chunks of disk space to hold the archive file on both the source and destination. If you are low on disk space, this can be a real pain. The other bad thing is that it’s a waste of time, since it reads through all of the files three times (read, copy, extract).

The original thought I had was to use “scp -r” which will recursively copy a directory over to the destination. This, however, doesn’t copy directories that start with a dot, and it doesn’t preserve file ownership information.

The best way is to use a combination of tar and ssh. The idea is to tar the files up to STDOUT, then create an SSH session to the remote host and extract from STDIN. Once you’ve got the idea, the command is pretty simple:

tar -cvzf - /path/to/local/dir | ssh root@remotebox "cd /path/to/extract; tar -xvzf -"

That’s it. One simple command and it can save you tons of time creating, copying, and extracting by doing it all at once.
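The same idea works in the other direction, too. This sketch pulls a directory from the remote box down to the local one (the paths are placeholders):

# Create the archive on the remote host and extract it locally under /path/to/extract
ssh root@remotebox "tar -czf - /path/to/remote/dir" | tar -xzvf - -C /path/to/extract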

Technology in books and movies

I recently finished reading “Digital Fortress” by Dan Brown. The story was actually pretty good, and I would recommend it to others, but I have got to complain about the technology described in the book.

Essentially, the book is set around the NSA’s supercomputer called TRANSLTR. This machine is described as a multi-billion dollar, multi-million core computer that is used to brute-force encryption keys on encrypted documents. Supposedly this machine can crack most encrypted documents in minutes, and it has been stumped for as long as a couple hours on the most complex jobs.

Now, when the bright minds at the NSA try to decrypt the latest ‘unbreakable’ code with their fancy machine, it just works on it for hours and hours. The only interface that all of the technicians have, though, is this ‘run-time monitor’ that says how long it’s been working on the latest code. The main character, who supposedly did most of the programming on this machine, doesn’t have any better debugging tools available than that single clock? Come on…

Equally annoying is the fact that TRANSLTR also has some built-in access into the NSA’s super-secret database of highly classified information. Therefore, when TRANSLTR becomes exploited, it’s conveniently able to modify the firewall and access controls on the NSA’s secret database. Well, the NSA deserves it if they allow an outside system control like that.

There are a whole bunch of other little things (like the only manual power-off button being six stories below ground) that are annoying about this book. But the worst is near the very end, where it is supposed to be suspenseful. The final code is the ‘prime difference between Hiroshima and Nagasaki’. It takes the main characters (who are supposedly math geniuses) 20 pages to figure out that this is a numeric answer, despite the words ‘prime’ and ‘difference’. And another few pages to figure out that the difference between 235 and 238 is three. Amazing.

Cacti stops updating graphs after upgrade to version 0.8.6j

It turns out the latest update to Cacti, the popular SNMP and RRDTool graphing program, has a bug that keeps graphs based on SNMP data from being updated after the upgrade. The problem has to do with the use of the PHP “snmpgetnext” function, which is unimplemented in PHP 4.

There is a discussion on Cacti’s forum at https://forums.cacti.net/about19199.html  where a developer posts a new ping.php that will resolve the problem.
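If you want to confirm whether your PHP build is affected before applying the patch, a quick check from the command line (assuming the php CLI is installed) is:

# Prints bool(true) if the snmpgetnext function is available
php -r 'var_dump(function_exists("snmpgetnext"));'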

Impressed with PowerDNS

I’ve spent the last couple weeks working with PowerDNS. We’re migrating our old BIND servers over to new PowerDNS servers that use a MySQL backend. Installation was fairly easy, because things were well documented. The application has worked perfectly, and when I emailed their mailing list to ask about a configuration setting that wasn’t documented, I got a useful reply within minutes.

Since PowerDNS is just the DNS server, it doesn’t provide any user interfaces for modifying the DNS information. I took a look at several of the applications that claim to be “front ends” for PowerDNS, but didn’t find any that suited our needs. (I tried out WebDNS, Tupa, and a couple of others listed on SourceForge.) The existing tools were too complex, too simple, or too buggy. But the database schema that PowerDNS uses is pretty straightforward, so I wrote a PHP class that provides most of the necessary functions, and started our long-awaited customer interface that uses the class to allow our customers to maintain their own DNS records.
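As a rough sketch of what working with that schema looks like (this is not the class mentioned above; the credentials and record values are placeholders, and the table and column names follow the standard PowerDNS MySQL schema of domains and records tables):

<?php
// Connect to the PowerDNS backend database (placeholder credentials)
$db = new PDO('mysql:host=localhost;dbname=powerdns', 'pdns_user', 'secret');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Find the zone's id in the domains table
$stmt = $db->prepare('SELECT id FROM domains WHERE name = ?');
$stmt->execute(array('example.com'));
$domainId = $stmt->fetchColumn();

// Add an A record to the records table
$insert = $db->prepare(
    'INSERT INTO records (domain_id, name, type, content, ttl)
     VALUES (?, ?, ?, ?, ?)'
);
$insert->execute(array($domainId, 'www.example.com', 'A', '192.0.2.10', 3600));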

Overall, this has been a great project with great results.


© 2025 Brandon Checketts
