Web Programming, Linux System Administration, and Entrepreneurship in Athens, Georgia

Category: MySQL

LUG Presentation on SQL Basics

I gave a presentation tonight at my local Linux Users Group meeting on SQL Basics. I had a fun time preparing the presentation and made up a bunch of examples having to do with Santa’s database.

It started out with a simple table for kids who were either naughty or nice, and we added some reports to that. Then we imported kids’ wish lists from CSV files. From there we were able to generate some manufacturing reports for the workshop.

When we joined the wish list table with the kids table, we were able to generate a sleigh-loading report that included only gifts for kids who had been good. Then we got more ambitious and introduced several joins, with some more involved math, to select gifts for kids within a certain radius of a given zip code.
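For reference, the sleigh-loading join looked something like this (the table and column names here are recreated from memory rather than copied from the slides):

SELECT kids.name, wish_list.gift
FROM kids
JOIN wish_list ON wish_list.kid_id = kids.id
WHERE kids.naughty_or_nice = 'nice';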

The presentation is available for download here. Brian recorded part of the presentation, which is available to view on uStream.tv.

Setting Up Virtualmin on an OpenVZ Guest

I’m experimenting with hosting control panels and am interested in Virtualmin. I generally avoid web-based control panels because most of them make direct configuration via the command line and manual editing of config files very difficult. However, one of Virtualmin’s goals is to not interfere with such manual configuration. I’ve had plenty of clients who use Webmin, and it has served them well, so Virtualmin seems like a good choice.

These are the steps that I went through to get a new OpenVZ guest set up with the GPL version of Virtualmin.

Download a CentOS 5 OS template and create the guest

# wget https://download.openvz.org/template/precreated/centos-5-x86_64.tar.gz
# vzctl create <VEID> --ostemplate centos-5-x86_64

I replaced all of the limits in /etc/vz/conf/<VEID>.conf with the values below. These are based on a different running machine with some fairly generous limits. Most importantly, they allow 1GB of RAM (a PRIVVMPAGES limit of 262144 4KiB pages).

# UBC parameters (in form of barrier:limit)
KMEMSIZE="43118100:44370492"
LOCKEDPAGES="256:256"
PRIVVMPAGES="262144:262144"
SHMPAGES="21504:21504"
NUMPROC="2000:2000"
PHYSPAGES="0:9223372036854775807"
VMGUARPAGES="65536:65536"
OOMGUARPAGES="26112:9223372036854775807"
NUMTCPSOCK="360:360"
NUMFLOCK="380:420"
NUMPTY="16:16"
NUMSIGINFO="256:256"
TCPSNDBUF="10321920:16220160"
TCPRCVBUF="1720320:2703360"
OTHERSOCKBUF="4504320:16777216"
DGRAMRCVBUF="262144:262144"
NUMOTHERSOCK="5000:5000"
DCACHESIZE="3409920:3624960"
NUMFILE="18624:18624"
AVNUMPROC="180:180"
NUMIPTENT="128:128"

Then set up some host-specific parameters and start it up.

# vzctl set <VEID> --ipadd 10.0.0.1 --hostname yourhostname.com --nameserver 4.2.2.1 --diskspace 4G --save
# vzctl start <VEID>
# vzctl enter <VEID>

You are now logged in to the guest, where you can download and install Virtualmin:

# yum update
# cd /root
# wget https://software.virtualmin.com/gpl/scripts/install.sh
# sh install.sh
 Continue? (y/n) y

That should install without significant errors. Finally, set a password for root:

# passwd root

Then log in at https://<your-ip>:10000/ and go through the post-installation configuration.
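If the guest misbehaves after installation, it is worth checking whether any of the UBC limits above are being hit. A nonzero failcnt column in the beancounters identifies the offending limit (run this inside the guest, or via ‘vzctl exec <VEID>’ from the host):

# cat /proc/user_beancounters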

KnitMeter is now a Facebook App

KnitMeter.com is a site that I wrote quickly for my wife to keep track of how much she has knit. It generates a little ‘widget’ image that can be placed on blogs, forums, etc., and says how many miles of yarn you have knit in a given period. The site has been live for about a year and a half now and has a couple thousand registered users.

I have been receiving an increasing number of requests to add a way to put a KnitMeter on Facebook. I’ve experimented with a couple of other ideas on Facebook and found that it was pretty straightforward to write an app, and KnitMeter seems like a decent candidate for a social app, so I started working on it about a week ago. I’m happy to say that I made the application live late last night. It is available at https://apps.facebook.com/knitmeter/.

Features include:

  • Ability to add projects and to log knitted lengths against a project (or against no project at all)
  • Settings for entering lengths in feet, yards, or meters
  • Display of how much you’ve knit in feet, yards, meters, kilometers, or miles
  • When entering a new length, you can choose to have it publish a ‘story’ on your profile page
  • You can add a tab on your profile page that shows each of your projects as well as a total
  • You can add a KnitMeter ‘box’ to the side of your profile page or on your ‘boxes’ tab

I recreated the database from scratch and defined it a little better, so I have a bit of work to do in migrating the existing site and database over to the new structure. Once that is done, users will be able to import their data from the existing KnitMeter.com by providing their email and password.

Synchronize Remote Memcached Clusters with memcache_sync

The problem: servers in two separate geographic locations each have their own memcached cluster. However, as far as I know, there is currently no good way to copy data from one cluster to the other.

One possible solution is to configure the application to perform all write operations in both places. However, each operation requires a round-trip response, so if the servers are separated by 50ms or more, doing several write operations causes a noticeable delay.

The solution that I’ve come up with is a perl program that I’m calling memcache_sync. It acts a bit like a proxy that asynchronously performs write operations on a remote cluster. Each geographic location runs an instance of memcache_sync that emulates a memcached server. You configure your application to write to the local memcache cluster, and also to the memcache_sync instance. memcache_sync queues the request and immediately returns a SUCCESS message so that your application can continue doing its thing. A separate thread then writes those queued operations to the remote cluster.
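This is not the actual memcache_sync code, just a minimal Perl sketch of the queue-and-replay idea (the remote server address is a placeholder, and the memcached-protocol front end is reduced to a single enqueue):

#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;
use Cache::Memcached;

my $queue = Thread::Queue->new();

# Writer thread: drain the queue and replay each operation
# against the remote cluster at its own pace.
my $writer = threads->create(sub {
    my $remote = Cache::Memcached->new({
        servers => ['remote.example.com:11211'],   # placeholder remote cluster
    });
    while (defined(my $op = $queue->dequeue())) {
        if ($op->{cmd} eq 'set') {
            $remote->set($op->{key}, $op->{value}, $op->{exptime});
        } elsif ($op->{cmd} eq 'delete') {
            $remote->delete($op->{key});
        }
    }
});

# The front end would parse memcached-protocol commands from local
# clients, enqueue them, and answer STORED/DELETED immediately, e.g.:
$queue->enqueue({ cmd => 'set', key => 'foo', value => 'bar', exptime => 0 });

$queue->enqueue(undef);   # sentinel: tell the writer to exit (toy example only)
$writer->join();

The immediate SUCCESS response is what hides the WAN latency: the writer thread absorbs the round-trip time instead of the application.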

The result is two memcached clusters that are synchronized in near-real time, without any noticeable delay in the application.

I’ve implemented the ‘set’ and ‘delete’ operations thus far, since that is all my application uses. I’ve just started using this in a production environment and am watching to see how it holds up. So far, it is behaving well.

The script is available here. I’m interested to see how much need there is for such a program, and I’d be happy to have input from others and to develop this into a more robust solution that works outside of my somewhat limited environment.

MySQL [ERROR] Invalid (old?) table or database name

After upgrading to MySQL 5, one of my databases appeared empty even though the MyISAM tables existed in /var/lib/mysql/database-name. I found this error in the MySQL log file, /var/log/mysqld.log:

090425 23:10:52 [ERROR] Invalid (old?) table or database name 'database-name'

It turns out that MySQL 5 doesn’t like database names that contain a hyphen. In the data directory, I see that it created a second directory for the database, with the hyphen replaced by ‘@002d’.

drwx------  2 mysql mysql     4096 Apr 25 23:10 database-name
drwx------  2 mysql mysql     4096 Apr 24 00:30 database@002dname

To resolve that, I decided to replace the hyphen with an underscore. I stopped MySQL, renamed /var/lib/mysql/database-name to /var/lib/mysql/database_name, and then ran this SQL to update the permissions:

UPDATE mysql.db SET Db = 'database_name' WHERE Db = 'database\-name';
FLUSH PRIVILEGES;

I could then see the tables using mysql from the command line. From there, I just updated the application’s settings to use the new name, and everything worked out fine.
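For what it’s worth, MySQL 5.1’s mysqlcheck also has a --fix-db-names option that converts pre-5.1 names to the new encoding, which would have kept the hyphen. I didn’t test that route here:

# mysqlcheck --all-databases --fix-db-names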

Announcing WebPasswd

Do you have users who need access to web-based applications on multiple servers? Managing those users can be a pain when dealing with normal htpasswd-based permissions. Adding or removing users means editing each htpasswd file and remembering where all of them are.

mod_auth_mysql is a good way to centralize that user database and avoid all of the separate htpasswd files. The Apache module is available in any modern Linux distribution, so installing and configuring it takes less than 5 minutes. I started using it almost 2 years ago, and over that time I have built a simple web application for managing the users and granting them permission to each application.

I’ve released the program as WebPasswd for anybody else who wants to use it. Adding users and granting them access to applications can now be done with just a few clicks. Granting and revoking access takes just seconds and is applied immediately. Configuring a new application takes a couple of clicks, and then you just copy/paste the Apache configuration into the appropriate place on your web server. Try it out with this demo.
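The configuration you paste is a standard mod_auth_mysql stanza, something like the sketch below. Directive names vary between mod_auth_mysql versions; this assumes the 3.x module shipped with CentOS, and the database, table, and credential values are placeholders:

<Location /someapp>
    AuthName "Some Application"
    AuthType Basic
    AuthMySQLEnable On
    AuthMySQLHost localhost
    AuthMySQLDB webpasswd
    AuthMySQLUser webpasswd
    AuthMySQLPassword secret
    AuthMySQLUserTable users
    AuthMySQLNameField username
    AuthMySQLPasswordField passwd
    AuthMySQLPwEncryption crypt
    Require valid-user
</Location>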

I think this will be useful to people. I have not seen another application that does something similar. Let me know if it works for you.

CSV from the MySQL Command Line

I frequently need to generate a quick report from a database query. I had been doing this by creating a simple PHP script that queries the database and displays the results as a CSV. However, I just found a way to do it directly from the MySQL command line, which skips the step of creating a PHP script.

Simply run the command like this:

mysql>SELECT * FROM sometable INTO OUTFILE '/tmp/output.csv'
   FIELDS TERMINATED BY ',';

That saves the results to the specified file in a simple CSV format. Note that the file is written by the MySQL server process on the database host (not by your client), and your MySQL user needs the FILE privilege. There are also options for enclosing fields in quotes, setting the line terminator, and more.
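For a more spreadsheet-friendly file, the fields can be quoted and the line terminator set explicitly. A variation along these lines (adjust the escaping to taste):

mysql> SELECT * FROM sometable INTO OUTFILE '/tmp/output.csv'
   FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
   LINES TERMINATED BY '\r\n';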

MySQLDump To a Remote Server

I was running out of disk space on a server today. The server had a large database table that was no longer used, so I wanted to archive it and then drop the table. But the server didn’t have enough disk space to dump it out to disk before copying it off to a remote server for archiving.

My first thought was to run mysqldump on the destination machine and access the database over the network. That, however, doesn’t compress or encrypt the data, and I would have had to create a MySQL user with permission to access the database remotely.

The solution I came up with worked out well: mysqldump directly to the remote host with this command:

mysqldump <DATABASE_NAME> [mysqldump options] | gzip -c | ssh user@remotehost "cat > /path/to/some-file.sql.gz"

That pipes the mysqldump output through gzip and then over an SSH connection. SSH on the remote side runs ‘cat’ to read stdin and redirects it to the file where I want it saved.
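Restoring the archive later is just the reverse pipeline; something like this, assuming the target database already exists:

ssh user@remotehost "cat /path/to/some-file.sql.gz" | gunzip -c | mysql <DATABASE_NAME>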

Fixing a Corrupt MySQL Relay Log

As an extension of my post yesterday about skipping corrupt queries in the relay log, I found out that my problem was due to network problems between the servers, which trigger a MySQL bug.

The connection and replication errors in my MySQL log look like this:

080930 12:26:52 [ERROR] Error reading packet from server: Lost connection to MySQL server 
  during query ( server_errno=2013)
080930 12:26:52 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, 
  log 'mysql-bin.000249' position 747239037
080930 12:26:53 [Note] Slave: connected to master 'replicate@mysqltunnel:13306',replication 
  resumed in log 'mysql-bin.000249' at position 747239037
080930 13:18:49 [ERROR] Error reading packet from server: Lost connection to MySQL server during
   query ( server_errno=2013)
080930 13:18:49 [Note] Slave I/O thread: Failed reading log event, reconnecting to retry, log 
  'mysql-bin.000249' position 783782486
080930 13:18:49 [ERROR] Slave: Error 'You have an error in your SQL syntax; check the manual 
  that corresponds to your MySQL server version for the right syntax to use near '!' at line 6' 
  on query. Default database: 'database'. Query: 'INSERT INTO `sometable`
            SET   somecol         = 3,
                    comeothercol  = 8,
                    othervalue      = NULL!', Error_code: 1064
080930 13:18:49 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and 
  restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.000249' 
  position 783781942
080930 13:18:50 [Note] Slave: connected to master 'replicate@mysqltunnel:13306',
  replication resumed in log 'mysql-bin.000249' at position 783782486

When there are network problems between the servers, the master doesn’t properly detect the failure and notify the slave. This results in parts of queries being missing, duplicated, or replaced by random bits in the relay log on the slave. When the slave tries to execute a corrupt query, it will likely generate an error that begins with:

Error You have an error in your SQL syntax; check the manual that corresponds to 
  your MySQL server version for the right syntax to use near . . 

This bug has been fixed in MySQL releases since February 2008, but the fix still hasn’t made its way into the CentOS 5 repositories. In the meantime, the bug report contains a work-around which forces the slave to re-request the binary log from the master. Run ‘SHOW SLAVE STATUS’ and make note of the Relay_Master_Log_File and Exec_Master_Log_Pos columns. Then run ‘STOP SLAVE’ to suspend replication, and run this SQL:

CHANGE MASTER TO master_log_file='<Value from Relay_Master_Log_File>',
  master_log_pos=<Value from Exec_Master_Log_Pos>;

After that, simply run ‘START SLAVE’ to have replication pick up from there again. That has the slave re-request the rest of the master’s binary log, which it should (hopefully) receive without corruption, and replication will continue as normal.
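Using the values from the log above (the positions will differ every time), the whole recovery sequence looks like this:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO master_log_file='mysql-bin.000249', master_log_pos=783781942;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G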

The network connection between my servers has evidently been problematic lately; I’ve had to apply this fix several times in the past couple of days. If that keeps up, I may add it to my replication-checking script until I’m able to upgrade to a version of MySQL that contains the fix.

Compression for MySQL Replication

I have a MySQL database that does a fair number of updates and inserts. The server is replicated to an off-site server located across the country. With MySQL replication, any Insert, Update, or Delete statements are written to the binary log, then sent from the master server in San Jose to the slave in New York.

I noticed today that the slave server was falling behind the master and having trouble keeping up. There was a sizable amount of traffic between the two servers, and after investigating for a little while, I determined that the bandwidth available between them wasn’t sufficient to keep up with the replication.

We have applications running on the server in New York that were significantly behind or slow. After a bit of research, I found the slave_compressed_protocol setting in MySQL which allows the master and slave to compress the replication data between the two servers. After enabling that, the slave was able to catch up within a matter of minutes and has stayed caught up just fine. The bandwidth usage has dropped from a consistent 600 kb/s to around 20 kb/s.
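Enabling it is a one-line change on the slave. The variable is dynamic, but the replication threads need to be restarted to pick it up, and it should also go into my.cnf so it survives a server restart:

mysql> STOP SLAVE;
mysql> SET GLOBAL slave_compressed_protocol = 1;
mysql> START SLAVE;

# in /etc/my.cnf
[mysqld]
slave_compressed_protocol = 1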

While looking into this, I also experimented with SSH compression, since the replication goes through an SSH tunnel, and had similar success with it.
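For reference, compression on an SSH tunnel is just the -C flag. A sketch of such a tunnel (host name and ports are placeholders), forwarding a local port to the master’s MySQL port:

# ssh -C -f -N -L 13306:127.0.0.1:3306 user@master.example.com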
