Web Programming, Linux System Administration, and Entrepreneurship in Athens, Georgia

Author: Brandon

Verizon 5750 on Linux

About a week ago, I subscribed to Verizon’s EVDO service so that I could get online from practically anywhere. For the first few days, I had it running on a Windows machine, and that was pretty simple to install. I finally got around to putting Ubuntu on that machine and getting it working under Linux, which was also pretty painless.

I specifically chose the 5750 because I found plenty of online documentation for getting it working, particularly this post, which has some pretty simple instructions. Within about 30 minutes, I had configured my fresh Ubuntu install to work fine with the card. It’s not perfect yet; at this point I still need to run a couple of manual commands to get it to connect each time. Also, I’m not sure how to make it cleanly disconnect so that I can reconnect to a wifi network when one is available. I haven’t played with it much past getting it working, though, so I’m sure I’ll figure out the rest soon enough.

Performance seems pretty impressive. Speed tests have usually been around 500-700 kbps down and 120-200 kbps up, though I’ve seen it as high as 1.6 Mbps down/700 kbps up. When sitting stationary, latency has been around 100 ms to google.com, compared to about 40 ms on my Comcast connection. That is decent enough to work in an interactive shell and to use vim remotely without too much complaint.

I just finished testing latency during a 40-minute car ride, pinging google.com the entire way home as a quick test. Although the numbers were not as impressive overall, I was still impressed that it stayed connected with less than 1% packet loss. Latency got as high as 6600 ms, though, and averaged 272 ms, which would make interactive work like a remote shell more difficult.

brandonc@ubuntu:~$ ping -f www.google.com -i.2
PING www.l.google.com (72.14.205.99) 56(84) bytes of data.
...................................
--- www.l.google.com ping statistics ---
5120 packets transmitted, 5077 received, +10 duplicates, 0% packet loss, time 1809665ms
rtt min/avg/max/mdev = 69.177/205.808/6650.000/324.142 ms, pipe 15, ipg/ewma 353.519/104.946 ms

Overall, I’m impressed so far. Verizon has their 30-day trial, so it has been nice to have the chance to test everything out with the ability to cancel if necessary. At this point, though, I’m satisfied with the ease of getting it working under Ubuntu and with the performance, so it looks like I’ll be keeping it.

Testing RADIUS from the command line

I like to test things manually to bypass any potential issues caused by multiple layers of applications. Here is how I test RADIUS authentication from the command line using the radclient command:

[root@radius ~]# /usr/local/bin/radclient -x localhost:1812 auth <shared-secret>
< User-Name="<valid-username>"
< User-Password="<valid-password>"
<
> Sending Access-Request of id 228 to 63.172.126.12:1812
>         User-Name = "username"
>         User-Password = "password"
> rad_recv: Access-Accept packet from host 11.22.33.44:1812, id=228, length=180
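Since typing the attributes interactively gets tedious, the same test can be scripted by feeding the attributes to radclient on stdin. This is just a sketch; the file path, server, shared secret, and credentials below are placeholders.

```shell
# Save the attributes once, then replay the request non-interactively.
# The username, password, and shared secret here are placeholders.
cat > /tmp/auth-test.txt <<'EOF'
User-Name = testuser
User-Password = testpass
EOF
cat /tmp/auth-test.txt
# Then run it through radclient:
#   radclient -x localhost:1812 auth <shared-secret> < /tmp/auth-test.txt
```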

Maildir information

With my (seemingly endless) work on mail servers, I ran across a couple of good pieces of information regarding the format and structure of Maildirs.

A description of the folders, how to write messages to a maildir, the basic structure, etc:

https://www.courier-mta.org/maildir.html

Essentially, each folder has ‘new’, ‘cur’, and ‘tmp’ directories. ‘tmp’ is used while the message is being written; the file is then immediately moved to the ‘new’ directory, with an ‘S=xxxx’ part added to the filename giving the file size in bytes. Once a file is read by a mail client, it is moved to the ‘cur’ directory and a ‘:2,<FLAGS>’ suffix is added, where the flags can mark the message as read, replied to, deleted, etc.
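The delivery steps can be sketched in a few lines of shell. The filename below is simplified; real deliveries use a time.pid.hostname scheme for uniqueness.

```shell
# Sketch of safe Maildir delivery: write in tmp/, then move to new/.
MAILDIR=$(mktemp -d)/Maildir
mkdir -p "$MAILDIR/tmp" "$MAILDIR/new" "$MAILDIR/cur"

MSG="1204321000.12345.mail1"          # simplified unique filename
printf 'Subject: test\n\nhello\n' > "$MAILDIR/tmp/$MSG"
SIZE=$(wc -c < "$MAILDIR/tmp/$MSG")   # size in bytes for the S= part

# The move is atomic on the same filesystem, so readers never see a
# partially written message in new/.
mv "$MAILDIR/tmp/$MSG" "$MAILDIR/new/$MSG,S=$SIZE"
ls "$MAILDIR/new"
```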

The format and ways to use the maildirsize file:

https://inter7.com/courierimap/README.maildirquota.html 

In the maildirsize file, the first line contains the quota in the format xxxS,yyyC, where xxx is the total size in bytes and yyy is the number of messages. So a quota of 1048576S,1000C would be 1 MB or 1000 messages, whichever is reached first. Each line after that contains two numbers: the first is a size in bytes, and the second is a number of messages.

Each time a new message is saved, a new line is added to maildirsize with its size and message count. The current usage is calculated by totaling the two columns, and occasionally the maildirsize file is recalculated from scratch.
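As a quick sketch of that accounting, totaling a maildirsize file is just summing the two columns after the quota line. The file contents here are made up: a quota line, two deliveries, and one deletion.

```shell
# Made-up maildirsize: quota line, two deliveries, one deletion.
F=$(mktemp)
cat > "$F" <<'EOF'
1048576S,1000C
1024 1
2048 1
-1024 -1
EOF
# Skip the quota line, then total the byte and message-count columns.
tail -n +2 "$F" | awk '{bytes += $1; msgs += $2} END {print bytes, msgs}'
```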

Clonezilla is a useful alternative to Ghost

I’m about ready to wipe out my laptop’s hard drive and reinstall, but I wanted to back it up first, just in case there is something I need on it in the future. I was searching for open-source alternatives to Symantec Ghost and came across Clonezilla, which looked like it would do the trick. It is available as a Linux-based live CD, so you can just boot off it and go. Within a few minutes of playing with it, I was able to back up my entire laptop hard drive to a Samba share on a Windows PC.

Tracking TCP connections with netstat

I’ve been troubleshooting some possible problems on a mail server recently and have been digging into TCP connections. The ‘netstat’ command has a ‘-o’ option that displays some useful timers:

[mail]# netstat -on |grep 189.142.18.18
tcp        0      8 205.244.47.142:25           189.142.18.18:1256          ESTABLISHED on (17.00/4/0)
tcp        0    452 205.244.47.142:25           189.142.18.18:2676          ESTABLISHED on (36.09/6/0)

This displays countdown timers for each TCP state. For example, if a connection is in FIN_WAIT and you re-run the command with “watch”, you can see the time count down to 0 and the connection then go away. The man pages and documentation I could find didn’t explain the timers very well, so this is what I have learned by watching them (read: this is not official).

When a connection is in the ESTABLISHED state, the timer can be either on or off. From what I can tell, it flips to on when there is some kind of trouble with the connection: when a retransmission occurs, the timer turns on and the countdown starts. The timer field has three numbers. The first is a countdown in seconds, the second is incremented for each retransmission, and the third one is always 0, so I’m not sure what it does.
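For watching just the timers, the relevant fields can be pulled out with awk. Here the two sample lines from the output above are fed in directly so the parsing is reproducible:

```shell
# Extract the remote address and timer fields from "netstat -on" output.
# Field 5 is the remote address; field 8 is the (countdown/retrans/?) timer.
printf '%s\n' \
  'tcp 0 8   205.244.47.142:25 189.142.18.18:1256 ESTABLISHED on (17.00/4/0)' \
  'tcp 0 452 205.244.47.142:25 189.142.18.18:2676 ESTABLISHED on (36.09/6/0)' |
awk '{gsub(/[()]/, "", $8); split($8, t, "/")
      printf "%s: %ss left, %s retransmits\n", $5, t[1], t[2]}'
```

Against a live system this would be `netstat -on | grep ESTABLISHED | awk …`, possibly run under `watch -n1` to see the countdown.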

Now that I have a basic understanding of the output, I still have to figure out why these connections just hang. My guess at this point is that it is poorly written spamming software and maxed-out bandwidth on a whole bunch of compromised machines throughout the world that are hitting my mail servers.

Postfix regexp tables are memory hogs

I spent a good part of the day today troubleshooting memory problems on some postfix mail servers. Each smtpd process was using over 11 MB of RAM, which seems really high. Each concurrent SMTP session has its own smtpd process, and with over 150 concurrent connections, that was using well over 1.5 GB of RAM.

[root@mail ~]# ps aux|grep -i smtpd |head -n1
postfix   3978  0.0  0.5 16096 11208 ?       S    12:29   0:00 smtpd -n smtp -t inet -u

After some trial and error of temporarily disabling stuff in the main.cf file, I narrowed the memory usage to a regexp table in a transport map:

transport_maps = regexp:/etc/postfix/transport.regexp

The transport.regexp file had about 1400 lines in it to match various possible address variations for a stupid mailing list application. Each mailing list has 21 different possible commands (addresses). By combining those 21 commands into a single regex, I was able to cut those 1400 lines down to about 70. Now the smtpd processes use just under 5 MB each:

[root@mail ~]# ps aux|grep -i smtpd |head -n1
postfix   7634  0.0  0.2  9916 4996 ?        S    13:31   0:00 smtpd -n smtp -t inet -u

So, by my math, a savings of about 6,000 kb of memory from removing 1,300 lines from the regexp file means that each regexp used about 4.5 kb of memory. Overall, with 150+ simultaneous smtpd processes, that saved several hundred megs of memory on each mail server.
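The consolidation looked roughly like this (the list name, domain, and command list here are hypothetical; the real file had 21 commands per list):

```
# Before: one transport.regexp line per command address, 21 per list:
# /^mylist-subscribe@lists\.example\.com$/    mailinglist:
# /^mylist-unsubscribe@lists\.example\.com$/  mailinglist:
# ... 19 more ...

# After: one alternation per list:
/^mylist-(subscribe|unsubscribe|help|owner|request|confirm|bounce)@lists\.example\.com$/  mailinglist:
```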

Manually testing postgrey through a telnet session

I’m working on implementing some new, custom features in Postgrey and needed to test it manually via telnet instead of sending an email every time I wanted to try it out. Evidently Postfix has a custom protocol for communicating via its check_policy_service setting (and probably others). By doing a tcpdump, I was able to capture the exchange below, which makes it simple to test postgrey, and presumably other similar postfix-compatible programs.

[root@mail1 tmp]# telnet postgrey 10023
Trying 10.20.30.40 ...
Connected to postgrey.mydomain.tld (10.20.30.40).
Escape character is '^]'.
request=smtpd_access_policy
protocol_state=RCPT
protocol_name=ESMTP
client_address=201.1.2.3
client_name=imaspammer.brasiltelecom.net.br
helo_name=imaspammerl.brasiltelecom.net.br
[email protected]
[email protected]
queue_id=
instance=66cf.46d5964c.0
size=0
sasl_method=
sasl_username=
sasl_sender=
ccert_subject=
ccert_issuer=
ccert_fingerprint=

action=DEFER_IF_PERMIT Temporary Failure - Recipient address rejected - \
   Try back in 180 seconds: See https://www.webpipe.net/failedmail.php?domain=somedomain.com

^]
telnet> quit
Connection closed.

Just telnet to the machine on the port it’s listening on (you have to be running postgrey with the inet option, not unix sockets). Then copy/paste everything from the ‘request=’ line through the first blank line, hit enter, and postgrey should reply with an appropriate response.
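To avoid retyping the attributes in an interactive telnet session each time, the request can be stored in a variable and replayed with netcat. The host, port, and addresses below are placeholders; a blank line terminates the request.

```shell
# Build the policy request once (addresses here are placeholders).
REQUEST='request=smtpd_access_policy
protocol_state=RCPT
client_address=201.1.2.3
sender=somebody@example.com
recipient=somebody-else@example.com'

# Print it with the trailing blank line that ends the request.
printf '%s\n\n' "$REQUEST"
# Replay it against the policy daemon:
#   printf '%s\n\n' "$REQUEST" | nc postgrey 10023
```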

Process Memory Usage using smaps

I’m digging into a mail server and trying to figure out the actual memory usage of some processes. ‘ps’ only gives a little information, which may or may not be very useful:

[root@mail tmp]# ps aux|grep someprocess
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root     14368  0.0  0.0   3884   668 pts/0    R+   11:26   0:00 someprocess

From that you can see the VSZ and RSS. I’m not totally sure exactly what those include, but I’ve learned enough to know that they may count shared libraries and other things that aren’t unique to this process. While trying to find out more, I came across this post, which explains it well and has a simple perl script that breaks it down even further:

https://bmaurer.blogspot.com/2006/03/memory-usage-with-smaps.html

When running that smem.pl script with a PID, it produces something like this:

[root@mail1 tmp]# ./smem.pl 14291
VMSIZE:       9840 kb
RSS:          3692 kb total
              1892 kb shared
               816 kb private clean
               984 kb private dirty

From that, you can see how much of the memory usage is ‘Private’ and therefore unique to this process. This is the best way that I have found to view actual memory usage that is unique to a given process.
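A rough version of that breakdown can also be had with awk alone over /proc/PID/smaps (the kernel reports these counters in kB). Here it is run against the current shell, $$, as a stand-in for a real PID:

```shell
# Sum the shared and private counters from /proc/PID/smaps.
awk '/^Shared_(Clean|Dirty):/  {shared += $2}
     /^Private_Clean:/         {pclean += $2}
     /^Private_Dirty:/         {pdirty += $2}
     END {printf "%d kb shared\n%d kb private clean\n%d kb private dirty\n",
                 shared, pclean, pdirty}' "/proc/$$/smaps"
```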

Crude file recovery on an ext3 partition

I was working on a project for the past couple of days and was just about to enable it permanently. Before that, though, I ran a ‘yum update’. I wasn’t paying attention to what was updated, and the program that I was working on got updated in the process. My modified version of the script was wiped out.

Not willing to throw away a couple days’ worth of modifications, I was desperate to recover my changes. Fortunately, the script was still running, so I knew that it hadn’t really been deleted from the disk yet. Since the file was still open, the file system had just marked it as deleted without actually removing it. ‘lsof’ showed that it was still there but deleted, and gave me an inode number, but I couldn’t find any way to use that.

Instead, I came up with a pretty crude way to find my script:

cat /dev/sda1 | strings | grep -A 10000 -B 10000 "some_string_unique_to_my_script" > /tmp/somefile

This reads out the raw content of the device file, extracts the printable strings in it, greps for your unique string, and saves 10,000 lines before and after it into /tmp/somefile. I was then able to look through /tmp/somefile and find my script. It is not in a format that you can just copy/paste out, but all the significant pieces were there, and I was able to recover everything that I needed without rewriting it all.

Simple console-based bandwidth monitoring utilities

A co-worker today introduced me to ‘iftop’, which is an incredibly handy utility for monitoring current bandwidth utilization through an interface. It’s sort of like a simple, command-line version of ntop. It is available in most Debian repositories, and in the RPMForge repository if you use RHEL/CentOS. More information is on its homepage at https://www.ex-parrot.com/~pdw/iftop/

Also, another handy bandwidth monitoring program to run on your server is vnstat. It runs via cron every 5 minutes and can provide some simple historic bandwidth usage graphs. I didn’t see any packages for vnstat, but it’s the simplest build I’ve ever seen: just download it and run ‘make && make install’. More information at https://humdi.net/vnstat/


© 2025 Brandon Checketts
