Configuring a Bitcoin Antminer S9 to Run Only Part of the Day During Non-Peak Hours

I’ve started tinkering with Bitcoin mining. Power usage is the single largest cost, so you need to ensure that you are using the absolute least expensive power in order to generate as much profit as possible. I live in Georgia and subscribe to Georgia Power’s ‘Smart Usage’ program, which has lower costs most of the time, but much higher costs during peak periods: Monday through Friday, June through September, between 2pm and 7pm. My full calculation puts the non-peak price at about $0.048 per kWh and the peak price at roughly three times that, about $0.151 per kWh. Mining bitcoin on an Antminer S9 is mildly profitable at the non-peak price, but definitely loses money at the peak price.

Since it runs on a 220v plug, I can’t use something off-the-shelf like a smart plug to turn it on and off. I’m a Linux geek anyway and would rather do it with software. The Antminer has a very bare-bones Linux OS, but it fortunately has crond installed, even though it is not running by default. These steps enable crond and create a cron job that kills the bmminer process during the peak hours, then reboots the machine as the peak period ends, which starts everything back up.

Note that the machine is still on with fans running. It just doesn’t run the mining process which consumes all of the power.

You can see my power usage in the chart below, showing that power usage dropped significantly during the time from 2pm-7pm.

Here is how to make it work:

  1. SSH into the Antminer. Default user is root, password of admin
    ssh root@192.168.1.200
  2. Have it start cron at boot by adding this line to the bottom of /etc/inittab:
    echo "cron:2345:once:/usr/sbin/crond" >> /etc/inittab
  3. mkdir /var/spool/cron/crontabs
  4. Run crontab -e to edit the root crontab in vi
  5. Paste in this content, modifying it for your desired times. Note that the times are in Coordinated Universal Time (UTC):
    ## Here we will stop `single-board-test` and `bmminer` from running during "Peak" periods for Georgia Power
    ## when it is unprofitable to mine due to increase in power cost
    ## 'Peak' is defined as 2pm-7pm, Monday-Friday, in June-September
    
    ## Since monitorcg is started from inittab and can't effectively be killed, we kill single-board-test and bmminer every minute
    ## during the peak hours
    
    ## kill `single-board-test`, which monitors and restarts `bmminer`
    * 18-22 * 6-9 1-5  /bin/kill `/bin/ps -ef | /bin/grep single-board | /bin/grep -v grep | /usr/bin/head -n1 | /usr/bin/cut -c1-5`
    
    ## Also, obviously kill `bmminer`
    * 18-22 * 6-9 1-5 /bin/kill `/bin/ps -ef | /bin/grep bmminer       | /bin/grep -v grep | /usr/bin/head -n1 | /usr/bin/cut -c1-5`
    
    
    ## Reboot at 22:59 UTC (6:59pm EDT), which will restart the whole machine, and bmminer with it (it takes a few minutes to start back up)
    59 22 * 6-9 1-5 /sbin/reboot
    
  6. Exit vi, saving the file, by typing <ESC> :wq <ENTER>
  7. Finally, just type reboot at the command line to have the machine restart.
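
Once it’s back up, you can sanity-check that crond is running and that the schedule is in place with something like the following (the bracketed grep just keeps it from matching itself):

  ps | grep "[c]rond"
  crontab -l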

MySQL Statistics for Updates/Inserts per-table

For a long time, I’ve been unable to answer some basic questions that I consider fundamental to optimizing server performance. MySQL gives you some server-wide metrics about activity, but none of it is broken down per table, so an application developer can’t easily see where to reduce the number of writes, or generally where to focus their attention in order to improve the server performance.

I finally got ambitious enough to tackle this problem and asked a question on StackOverflow at http://stackoverflow.com/questions/39459185/mysql-how-to-count-the-number-of-inserts-updates-to-a-table

A commenter named barat pointed me to this post which had the insightful idea of parsing the binary log for analysis.
Since my servers are generally hosted on AWS, I don’t have direct access to the binary log files, so I had to figure out how to retrieve them remotely. The MySQL documentation for the mysqlbinlog command briefly mentions how to read the binary log from a remote server. It took some experimentation to get the right command and output options with all of the data I wanted. Specifically, the `--base64-output=DECODE-ROWS --verbose` options, which translate some of the row-based logging into MySQL statements that can be parsed.

The first step is to create a user that has access to the binary logs. I used the main ‘admin’ user that RDS creates because it was convenient. If creating a new user, you probably need to grant it the REPLICATION SLAVE privilege (and REPLICATION CLIENT to run SHOW BINARY LOGS), as shown below.
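
If you do go the new-user route, the grant looks roughly like this; the user name and password are placeholders:

mysql -h myhost.somerandomchars.us-east-1.rds.amazonaws.com -u admin -p -e "
  CREATE USER 'binlog_reader'@'%' IDENTIFIED BY 'some-password';
  GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'binlog_reader'@'%';"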

You can see which binary logs are available on the server with the SHOW BINARY LOGS; command:

mysql> show binary logs;
+----------------------------+-----------+
| Log_name                   | File_size |
+----------------------------+-----------+
| mysql-bin-changelog.232522 | 16943219  |
| mysql-bin-changelog.232523 | 32300889  |
| mysql-bin-changelog.232524 | 15470603  |
+----------------------------+-----------+

Then you can actually retrieve the log and print to STDOUT using this command:

14:01 $ mysqlbinlog --read-from-remote-server \
  --host myhost.somerandomchars.us-east-1.rds.amazonaws.com \
  --user admin \
  --password="mypassword" \
  mysql-bin-changelog.232522

Note that if you get the error below, you need to make sure that your MySQL client tools are the same version as the server. I originally attempted to use MySQL 5.5 tools with a MySQL 5.6 server.

ERROR: Got error reading packet from server: Slave can not handle replication events with
the checksum that master is configured to log; the first event 'mysql-bin-changelog.232519'
at 4, the last event read from '/rdsdbdata/log/binlog/mysql-bin-changelog.232519' at 120,
the last byte read from '/rdsdbdata/log/binlog/mysql-bin-changelog.232519' at 120.
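
A quick way to check for that mismatch is to compare the client tool’s version against the server’s:

mysqlbinlog --version
mysql -h myhost.somerandomchars.us-east-1.rds.amazonaws.com -u admin -p -e "SELECT VERSION();"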

After that, it was just a matter of parsing the file for the relevant commands. I’ve put all of that logic now into a quick PHP script that I can reuse anywhere. Now, I can go through a bunch of binary logs on a server and see which tables are updated the most frequently with output like this:

Parsed 1,096,063 lines spanning 300 seconds between 2016-09-13 03:05:00 and 2016-09-13 03:10:00
master                         metrics                        update          = 43570
master                         metrics                        insert into     = 9
DEFAULT                        accounts                       update          = 501
DEFAULT                        users                          update          = 5
DEFAULT                        logins                         insert into     = 1
mysql                          rds_heartbeat2                 insert into     = 1

I’ve committed this project to Github at https://github.com/sellerlabs/mysql-writes-per-table for others to use.
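
If you just want a rough per-table count (one line per changed row) without the PHP script, a shell pipeline along these lines gives similar information. This is a sketch; it assumes row-based logging, so that the decoded ‘### UPDATE/INSERT INTO/DELETE FROM’ pseudo-SQL lines are present in the output:

mysqlbinlog --read-from-remote-server \
  --host myhost.somerandomchars.us-east-1.rds.amazonaws.com \
  --user admin \
  --password="mypassword" \
  --base64-output=DECODE-ROWS --verbose \
  mysql-bin-changelog.232522 \
  | grep -E '^### (UPDATE|INSERT INTO|DELETE FROM)' \
  | sed 's/^### //' \
  | sort | uniq -c | sort -rn | head -20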

Proposed Pattern for Deploying EC2 instances with Secure Credentials

After struggling with this problem in my mind for a while, I finally had the opportunity to experiment with Cloud Init and come up with a working solution for securely (I think) deploying code and credentials to a stock Ubuntu Instance on EC2.

My primary goals are:

  • Must use a stock AMI with no customization
  • Human readable user-data that contains appName, environment, and role.
  • user-data must be easily modified by a developer for their own app or environment
    (No forcing them to base64 encode, gzip, or use special tools)
  • Must be portable between providers.
    The example works with EC2, but the initial ‘include’ file can be customized for each provider or OS.

The diagram below shows how this is to be accomplished:

Proposed Cloud Init on Ubuntu / EC2 with secure credentials
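
To give a rough idea of the shape of the user-data, here is a minimal sketch. The real pattern uses a cloud-init ‘include’ file rather than a plain script, and the URL, names, and bootstrap script below are all hypothetical; the bootstrap script is what would actually pull down the role-specific code and credentials:

#!/bin/bash
# Hypothetical user-data: small, human readable, and easy for a developer to edit
APP_NAME="myapp"
ENVIRONMENT="staging"
ROLE="web"

# Fetch the provider/OS-specific bootstrap script over HTTPS and run it.
# That script (not shown) is responsible for fetching code and credentials for this role.
wget -q https://deploy.example.com/bootstrap.sh -O /tmp/bootstrap.sh
chmod 0700 /tmp/bootstrap.sh
/tmp/bootstrap.sh "$APP_NAME" "$ENVIRONMENT" "$ROLE"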


I’ve successfully deployed several instances using this method and it seems to work well. Getting the cloud init include file, and the script

Troubleshooting /etc/cron.d/ on Ubuntu

On Debian-based systems, files in /etc/cron.d (see the example after this list):
– must be owned by root
– must not be group or world writable
– may be symlinked, but the destination must follow the ownership and permission rules above
– must have filenames containing only letters, numbers, underscores, and hyphens (no periods)
– must contain a username in the 6th column
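
For example, a file named /etc/cron.d/my-task (the name and command are hypothetical) that satisfies all of the above:

# /etc/cron.d/my-task -- owned by root, mode 0644, no dots in the filename
# m   h  dom mon dow  user  command
*/5   *  *   *   *    root  /usr/local/bin/my-task >> /var/log/my-task.log 2>&1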

From the man page:

Files in this directory have to be owned by root, do not need to be executable (they are configuration files, just like /etc/crontab) and must conform to the same naming convention as used by run-parts(8): they must consist solely of upper- and lower-case letters, digits, underscores, and hyphens. This means that they cannot contain any dots.

The man page also provides this explanation for the strange filename rule:

For example, any file containing dots will be ignored. This is done to prevent cron from running any of the files that are left by the Debian package management
system when handling files in /etc/cron.d/ as configuration files (i.e. files ending in .dpkg-dist, .dpkg-orig, and .dpkg-new).

Fix for MongoDB not reliably starting/stopping on Ubuntu

I spent the last several hours troubleshooting an annoying error with the mongodb init script on Ubuntu. The script would start the daemon easily enough, but would report a failure. When subsequently trying to stop the daemon, it would say it was successful, but the daemon would in fact still be running.

Fortunately, I found that somebody else had already reported the bug, but the comments were pointing developers to the wrong place: inside the mongod code instead of the init script.

After digging into it for longer than I’d like to have spent, I found the cause was the --make-pidfile option being used in the init script.

My understanding of this process is that the start-stop-daemon command was creating the pidfile (as root) before spawning the actual mongod process (as the mongodb user). mongod, in some cases (at least when not configsvr=true), must fork again before saving its own pidfile. Since the file created by start-stop-daemon is owned by root, the less-privileged mongodb user cannot overwrite it (perhaps this should be logged, or logged at a less verbose level?), leaving the pidfile containing a pid that is no longer correct.

On my machine, the pidfile created with the --make-pidfile option consistently contained a PID exactly three less than the one shown in the output of ‘ps’.
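
The fix, then, is to drop that flag from the start-stop-daemon line in /etc/init.d/mongodb and let mongod manage its own pidfile. The exact line varies by package version and the variable names below are illustrative, but it amounts to something like this:

# before:
start-stop-daemon --background --start --quiet --make-pidfile --pidfile $PIDFILE \
    --chuid $DAEMONUSER --exec $DAEMON -- $DAEMON_OPTS

# after (mongod writes $PIDFILE itself via pidfilepath in mongodb.conf):
start-stop-daemon --background --start --quiet --pidfile $PIDFILE \
    --chuid $DAEMONUSER --exec $DAEMON -- $DAEMON_OPTS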

After removing that option from the init script, I can now reliably start/stop the mongod process using the expected commands.

Hopefully that bug will be closed soon and released so that I don’t have to customize the init script on every mongo server I have.

GnuPG Encryption with PHP (on Ubuntu with Pecl)

Instructions for getting this working on Ubuntu 12.04 and more modern systems than in my previous post

Install the required system and pecl packages:

  # apt-get install gnupg  libgpgme11 libgpgme11-dev
  # pecl install gnupg
  # echo extension=gnupg.so > /etc/php5/conf.d/gnupg.ini
  # apache2ctl restart

Generate a Private key

 # gpg --homedir /path/to/your/directory --gen-key

On a virtual machine, if that stalls for a while, you may have to generate some “randomness” somehow. Try one of these commands in a separate session, according to this bug report:

 # find / -type f | xargs grep blahblahblha
 # tcpdump -i any > /dev/null

At this point, you should have a working GPG key in the home directory you specified. You can list your secret keys with the command:

 
  # gpg --homedir /path/to/your/directory -K

You’ll then want to export the key with the command:

 # gpg --homedir /path/to/your/directory --export-secret-key --armour

You’ll want to copy that secret key to another machine. DON’T LOSE IT or you won’t be able to decrypt anything. Once you’ve got it safely stored somewhere, you want to delete it from your web server:

 #  gpg --homedir /path/to/your/directory --delete-secret-key your@address.com

You can then make sure that the public key is still there. It is what you’ll need to encrypt messages:

 # gpg --homedir /path/to/your/directory -k

Finally, you’ll need the fingerprint for the key to refer to it within your PHP code.

 # gpg --homedir /path/to/your/directory --fingerprint 
pubring.gpg
-------------
pub   2048R/5BB54E26 2013-04-14 [expires: 2023-04-12]
      Key fingerprint = AAAA BBBB CCCC DDDD EEEE  FFFF 0000 1111 2222 3333
uid                  Your Name <your@address.com>
sub   2048R/2EF4937A 2013-04-14 [expires: 2023-04-12]

You can then use the gnupg pecl functions to encrypt a message:

<?php
$CONFIG['gnupg_home'] = '/var/www/.gnupg';
$CONFIG['gnupg_fingerprint'] = 'FA451EE9877270EF1CFA99CE048A613921CCC3D6';

$data = 'this is some confidential information';

putenv("GNUPGHOME={$CONFIG['gnupg_home']}"); // set the GnuPG home directory before creating the object
$gpg = new gnupg();
$gpg->seterrormode(GNUPG_ERROR_SILENT);
$gpg->addencryptkey($CONFIG['gnupg_fingerprint']);
$encrypted = $gpg->encrypt($data);
echo "Encrypted text: \n$encrypted\n";

// Now you can store $encrypted somewhere.. perhaps in a MySQL text or blob field.

// Then use something like this to decrypt the data.
$passphrase = 'Your_secret_passphrase';
$gpg->adddecryptkey($CONFIG['gnupg_fingerprint'], $passphrase);
$decrypted = $gpg->decrypt($encrypted);

echo "Decrypted text: $decrypted";
?>

Monit CPU Usage problem

I just recently fixed an issue where I wanted my Monit monitoring process to restart a daemon that was segfaulting and causing 100% CPU usage according to top and most other system tools. I had seen configuration examples where Monit could detect that and restart the process, so I figured that adding a configuration like the one below would fix it easily enough:

check process foo with pidfile /var/run/foo.pid
  start program = "/etc/init.d/foo start" timeout 10 seconds
  stop program  = "/etc/init.d/foo stop"
  if cpu usage > 90% for 8 cycles then restart

After letting that run for a bunch of cycles, the process remained running and Monit didn’t do anything, or even acknowledge it in its log files. (FYI, a “cycle” is defined in the monitrc config file on the “set daemon” line and defaults to 120 seconds.)
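
For reference, that line in monitrc looks like this:

set daemon 120   # poll services every 120 seconds; each poll is one "cycle"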

After some research, I finally came upon this post on the Monit mailing list where somebody explains that the CPU usage Monit bases its numbers on is a percentage of the total CPU available across all processors. My machine had 4 processors, so what top showed as 100% CPU usage, Monit saw as only 25%.

I quickly changed my Monit config to check for CPU usage > 22%, as seen in the following. That now works perfectly, even acknowledging in the log each of the 8 times that the CPU was over the limit before restarting it:

check process foo with pidfile /var/run/foo.pid
  start program = "/etc/init.d/foo start" timeout 10 seconds
  stop program  = "/etc/init.d/foo stop"
  if cpu usage > 22% for 8 cycles then restart

…. Now I need to solve the real problem and see why the latest Mongo PHP pecl module is segfaulting….

Quick PHP Script to find values that add to a specified total

I’ve had need for this on several occasions, and never come up with a decent solution until now. The problem happens for me most frequently when dealing with a group of transactions, and I need to find a subset of them that add up to a specific amount.

My data might look like this:

+----+--------+
| id | total  |
+----+--------+
|  1 |    1.6 |
|  2 |    0.8 |
|  3 |  16.25 |
.........
| 50 |      5 |
| 51 |    2.5 |
| 52 |     29 |
| 53 |    3.5 |
+----+--------+

And I need to find some combination of those that add up to a certain amount. I put together this quick recursive script that comes up with a solution:

$available = array(
  1 => 1.6,
  2 => 0.8,
  3 => 16.25,
  .....
  50 => 5,
  51 => 2.5,
  52 => 29,
  53 => 3.5,
);

// Recursively search for $count_needed values in $remaining that sum to $amount_needed.
// Prints the matching ids/amounts (in reverse order of selection) and returns true on success.
function findCombination($count_needed, $amount_needed, $remaining)
{
    if ($count_needed < 1) return false;

    foreach ($remaining as $id => $this_amount) {
        if ($count_needed == 1 && $this_amount == $amount_needed) {
            echo "Found: {$id} for {$amount_needed}\n";
            return true;
        } else {
            // Take this value and look for the rest of the total among the remaining values
            unset($remaining[$id]);
            $correct = findCombination($count_needed - 1, $amount_needed - $this_amount, $remaining);
            if ($correct) {
                echo "Found: {$id} for {$this_amount}\n";
                return true;
            }
        }
    }
    return false;
}

$count_needed = 9;
$amount_needed = 418;
echo "Looking for {$count_needed} transactions totaling {$amount_needed}\n";
findCombination($count_needed, $amount_needed, $available);

This will output something like:

Looking for 9 transactions totaling 418
Found: 38 for 59
Found: 37 for 45.5
Found: 36 for 34.75
Found: 33 for 44.75
Found: 31 for 57.75
Found: 30 for 48
Found: 26 for 23.5
Found: 22 for 2.5
Found: 20 for 102.25

Increasing the number of simultaneous SASL authentication servers with Postfix

I had a customer complaining lately that messages sent via Gmail to one of my mail servers were occasionally getting SMTP authentication failures and bouncing back from Gmail. Fortunately, he noticed it happening mainly when he sent a message to multiple recipients, and he was able to send me some of the bounces so I could track it down pretty specifically in the Postfix logs.

The Error message via Gmail was:

Technical details of permanent failure:
Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 535 535 5.7.0 Error: authentication failed: authentication failure (SMTP AUTH failed with the remote server) (state 7).

This was a little odd, because the SMTP AUTH failure is what I would typically expect with a mistyped username and password. However, I could see that plenty of messages were being sent from the same client. By looking at the specific timestamp of the bounced message, I tracked down the relevant log segment shown below. It indicates 5 concurrent SMTPD sessions where the SASL authentication was successful on 4 of them and failed on the 5th.

Jul  5 12:43:39 mail postfix/smtpd[13602]: connect from mail-bk0-f50.google.com[209.85.214.50]
Jul  5 12:43:39 mail postfix/smtpd[13602]: setting up TLS connection from mail-bk0-f50.google.com[209.85.214.50]
Jul  5 12:43:39 mail postfix/smtpd[14113]: connect from mail-bk0-f50.google.com[209.85.214.50]
Jul  5 12:43:39 mail postfix/smtpd[14113]: setting up TLS connection from mail-bk0-f50.google.com[209.85.214.50]
Jul  5 12:43:39 mail postfix/smtpd[14115]: connect from mail-bk0-f50.google.com[209.85.214.50]
Jul  5 12:43:39 mail postfix/smtpd[14115]: setting up TLS connection from mail-bk0-f50.google.com[209.85.214.50]
Jul  5 12:43:39 mail postfix/smtpd[14116]: connect from mail-bk0-f49.google.com[209.85.214.49]
Jul  5 12:43:39 mail postfix/smtpd[14117]: connect from mail-bk0-f49.google.com[209.85.214.49]
Jul  5 12:43:39 mail postfix/smtpd[14116]: setting up TLS connection from mail-bk0-f49.google.com[209.85.214.49]
Jul  5 12:43:39 mail postfix/smtpd[14117]: setting up TLS connection from mail-bk0-f49.google.com[209.85.214.49]
Jul  5 12:43:39 mail postfix/smtpd[13602]: TLS connection established from mail-bk0-f50.google.com[209.85.214.50]: TLSv1 with cipher RC4-SHA (128/128 bits)
Jul  5 12:43:39 mail postfix/smtpd[14113]: TLS connection established from mail-bk0-f50.google.com[209.85.214.50]: TLSv1 with cipher RC4-SHA (128/128 bits)
Jul  5 12:43:39 mail postfix/smtpd[14115]: TLS connection established from mail-bk0-f50.google.com[209.85.214.50]: TLSv1 with cipher RC4-SHA (128/128 bits)
Jul  5 12:43:39 mail postfix/smtpd[14116]: TLS connection established from mail-bk0-f49.google.com[209.85.214.49]: TLSv1 with cipher RC4-SHA (128/128 bits)
Jul  5 12:43:39 mail postfix/smtpd[14117]: TLS connection established from mail-bk0-f49.google.com[209.85.214.49]: TLSv1 with cipher RC4-SHA (128/128 bits)
Jul  5 12:43:40 mail postfix/smtpd[13602]: 2846B11AC5E2: client=mail-bk0-f50.google.com[209.85.214.50], sasl_method=PLAIN, sasl_username=someuser@somedomain.com
Jul  5 12:43:40 mail postfix/smtpd[14113]: 3290811AC5E3: client=mail-bk0-f50.google.com[209.85.214.50], sasl_method=PLAIN, sasl_username=someuser@somedomain.com
Jul  5 12:43:40 mail postfix/smtpd[14115]: 3C4AD11AC5E4: client=mail-bk0-f50.google.com[209.85.214.50], sasl_method=PLAIN, sasl_username=someuser@somedomain.com
Jul  5 12:43:40 mail postfix/cleanup[13420]: 2846B11AC5E2: message-id=
Jul  5 12:43:40 mail postfix/cleanup[14092]: 3290811AC5E3: message-id=
Jul  5 12:43:40 mail postfix/smtpd[14116]: warning: SASL authentication failure: Password verification failed
Jul  5 12:43:40 mail postfix/smtpd[14116]: warning: mail-bk0-f49.google.com[209.85.214.49]: SASL PLAIN authentication failed: authentication failure
Jul  5 12:43:40 mail postfix/cleanup[14121]: 3C4AD11AC5E4: message-id=
Jul  5 12:43:40 mail postfix/qmgr[32242]: 2846B11AC5E2: from=, size=10564, nrcpt=1 (queue active)
Jul  5 12:43:40 mail postfix/qmgr[32242]: 3290811AC5E3: from=, size=10566, nrcpt=1 (queue active)
Jul  5 12:43:40 mail postfix/smtpd[14116]: disconnect from mail-bk0-f49.google.com[209.85.214.49]
Jul  5 12:43:40 mail postfix/qmgr[32242]: 3C4AD11AC5E4: from=, size=10568, nrcpt=1 (queue active)
Jul  5 12:43:40 mail postfix/smtpd[13602]: disconnect from mail-bk0-f50.google.com[209.85.214.50]
Jul  5 12:43:40 mail postfix/smtpd[14113]: disconnect from mail-bk0-f50.google.com[209.85.214.50]
Jul  5 12:43:40 mail postfix/smtpd[14115]: disconnect from mail-bk0-f50.google.com[209.85.214.50]
Jul  5 12:43:40 mail postfix/smtpd[14117]: D4F2411AC5E5: client=mail-bk0-f49.google.com[209.85.214.49], sasl_method=PLAIN, sasl_username=someuser@somedomain.com
Jul  5 12:43:41 mail postfix/cleanup[13420]: D4F2411AC5E5: message-id=
Jul  5 12:43:41 mail postfix/qmgr[32242]: D4F2411AC5E5: from=, size=10565, nrcpt=1 (queue active)
Jul  5 12:43:41 mail postfix/smtpd[14117]: disconnect from mail-bk0-f49.google.com[209.85.214.49]

In looking into the SASL component a bit, I noticed that there were 5 simultaneous SASL servers running. The first one looks like a parent with 4 child processes.

[root@mail postfix]# ps -ef |grep sasl
root      9253     1  0 Mar15 ?        00:00:04 /usr/sbin/saslauthd -m /var/run/saslauthd -a rimap -r -O 127.0.0.1
root      9262  9253  0 Mar15 ?        00:00:04 /usr/sbin/saslauthd -m /var/run/saslauthd -a rimap -r -O 127.0.0.1
root      9263  9253  0 Mar15 ?        00:00:04 /usr/sbin/saslauthd -m /var/run/saslauthd -a rimap -r -O 127.0.0.1
root      9264  9253  0 Mar15 ?        00:00:04 /usr/sbin/saslauthd -m /var/run/saslauthd -a rimap -r -O 127.0.0.1
root      9265  9253  0 Mar15 ?        00:00:04 /usr/sbin/saslauthd -m /var/run/saslauthd -a rimap -r -O 127.0.0.1

So it seemed likely that the 4 child processes were in use and that Postfix couldn’t open a connection to a 5th simultaneous SASL authentication server, so it responded with a generic SMTP AUTH failure.

To fix it, I simply added a couple of extra arguments to the saslauthd command that is run: a ‘-c’ parameter to enable caching, and ‘-n 10’ to increase the number of servers to 10. On my CentOS server, I accomplished that by modifying /etc/sysconfig/saslauthd to look like this:

# Directory in which to place saslauthd's listening socket, pid file, and so
# on.  This directory must already exist.
SOCKETDIR=/var/run/saslauthd

# Mechanism to use when checking passwords.  Run "saslauthd -v" to get a list
# of which mechanism your installation was compiled with the ability to use.
MECH=rimap

# Additional flags to pass to saslauthd on the command line.  See saslauthd(8)
# for the list of accepted flags.
FLAGS="-r -O 127.0.0.1 -c -n 10"

After restarting saslauthd, and a quick test, it looks good so far.
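
For a quick sanity check after the change (the service command may differ on other distros), restart saslauthd and count the processes; it was 5 before and should now show 10:

service saslauthd restart
ps -ef | grep "[s]aslauthd" | wc -l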