Configuring a Bitcoin Antminer S9 to Run Only Part of the Day During Non-Peak Hours

I've started tinkering with Bitcoin mining. Power usage is the single largest cost, so you need to ensure that you are using the least expensive power available in order to generate as much profit as possible. I live in Georgia and subscribe to Georgia Power's 'Smart Usage' program, which has lower costs most of the time, but its peak periods (Monday-Friday, June through September, between 2pm and 7pm) have a much higher power cost. My full calculation puts the non-peak price at about $0.048 per kWh and the peak price at about three times higher, around $0.151 per kWh. Mining bitcoin on an Antminer S9 is mildly profitable at the non-peak price, but definitely loses money at the peak price.

Since it runs on a 220V plug, I can't use something off-the-shelf like a smart plug to turn it on and off. I'm a Linux geek anyway and would rather do it with software. The Antminer runs a very bare-bones Linux OS, but fortunately it has crond installed, even though it is not running by default. The steps below enable crond and create a cron job that kills the bmminer process during the peak hours, then reboots the machine as the peak period ends, which starts everything back up.

Note that the machine is still on with fans running. It just doesn't run the mining process, which is what consumes nearly all of the power.

You can see my power usage in the chart below, which shows that power usage dropped significantly between 2pm and 7pm.

Here is how to make it work:

  1. SSH into the Antminer. The default user is root with a password of admin:
    ssh root@192.168.1.200
  2. Have it start cron at boot by adding this line to the bottom of /etc/inittab:
    echo "cron:2345:once:/usr/sbin/crond" >> /etc/inittab
  3. Create the crontab directory:
    mkdir /var/spool/cron/crontabs
  4. Run crontab -e to edit the root crontab in vi
  5. Paste in the content below, modifying it for your desired times. Note that cron uses Coordinated Universal Time (UTC); 2pm-7pm EDT corresponds to 18:00-23:00 UTC, which is why the entries below use hours 18-22 and a reboot at 22:59:
    ## Here we will stop `single-board-test` and `bmminer` from running during "Peak" periods for Georgia Power
    ## when it is unprofitable to mine due to increase in power cost
    ## 'Peak' is defined as 2pm-7pm, Monday-Friday, in June-September
    
    ## Since monitorcg is started from inittab and can't effectively be killed, we kill single-board-test and bmminer every minute
    ## during the peak hours
    
    ## kill `single-board-test`, which monitors and restarts `bmminer`
    * 18-22 * 6-9 1-5  /bin/kill `/bin/ps -ef | /bin/grep single-board | /bin/grep -v grep | /usr/bin/head -n1 | /usr/bin/cut -c1-5`
    
    ## Also, obviously kill `bmminer`
    * 18-22 * 6-9 1-5 /bin/kill `/bin/ps -ef | /bin/grep bmminer       | /bin/grep -v grep | /usr/bin/head -n1 | /usr/bin/cut -c1-5`
    
    
    ## Reboot at 6:59pm EDT, which will restart the whole machine, bmminer with it (and takes a few minutes to start back up)
    59 22 * 6-9 1-5 /sbin/reboot
    
  6. Exit vi, saving your changes, by typing <ESC> :wq <ENTER>
  7. Finally, just type reboot at the command line to restart the machine.
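
After the reboot, a quick sanity check (just a sketch, assuming the stock Antminer shell and the same BusyBox tools the crontab uses) can confirm that crond is running, the crontab is loaded, and the miner actually stops during a peak hour:

## Confirm crond was started from inittab
ps -ef | grep '[c]rond'

## Confirm the root crontab is in place
crontab -l

## During a peak hour, bmminer should no longer appear here
ps -ef | grep '[b]mminer'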

PHP Sessions with Redis Cluster (using AWS Elasticache)

I've recently been moving some of our projects from a single Redis server (or a server with a replica) to the more modern Redis Cluster configuration. However, when trying to set up PHP sessions to use the cluster, I found there wasn't a lot of documentation or examples. This serves as a walk-through for setting up PHP sessions to use a Redis Cluster, specifically with Elasticache on AWS.

First, create your Elasticache Redis instance as shown below. Note that "Cluster Mode Enabled" is what causes Redis to operate in cluster mode.

AWS Elasticache Redis Creation

Once the servers are launched, make note of the Configuration Endpoint, which should look something like: my-redis-server.dltwen.clustercfg.usw1.cache.amazonaws.com:6379

Finally, use these settings in your php.ini file. The exact location of this file will depend on your OS, but on modern Ubuntu instances you can place it in /etc/php/7.0/apache2/conf.d/30-redis-sessions.ini

Note the special syntax for the save_path, where it has seed[]=. You only need to put the main cluster configuration endpoint here, not all of the individual instances as some other examples online do.


session.save_handler = rediscluster
session.save_path = "seed[]=my-redis-server.dltwen.clustercfg.usw1.cache.amazonaws.com:6379"
session.gc_maxlifetime = 1296000

That’s it. Restart your webserver and sessions should now get saved to your Redis cluster.
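
As a quick sanity check (just a sketch, assuming you have redis-cli available and are using the phpredis extension, which typically prefixes session keys with PHPREDIS_SESSION:), you can load a page that calls session_start() and then look for session keys on the cluster:

## Verify connectivity to the configuration endpoint in cluster mode (-c)
redis-cli -c -h my-redis-server.dltwen.clustercfg.usw1.cache.amazonaws.com -p 6379 ping

## After loading a page that starts a session, look for session keys
## (in a cluster, KEYS only shows keys on the node you land on, but it is
## enough to confirm that sessions are being written)
redis-cli -c -h my-redis-server.dltwen.clustercfg.usw1.cache.amazonaws.com -p 6379 keys 'PHPREDIS_SESSION:*'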

In the event that something goes wrong, you might see something like this in your web server log files:


PHP Warning: Unknown: Failed to write session data (redis). Please verify that the current setting of session.save_path is correct (tcp://my-redis-server.dltwen.clustercfg.use1.cache.amazonaws.com:6379) in Unknown on line 0

MySQL Statistics for Updates/Inserts per-table

For a long time, I’ve never been able to answer some basic questions that I thought fundamental to optimizing server performance. MySQL gives you some server-wide metrics about activity, but none of it is broken down per-table so that an application developer could look into where to reduce the number of writes, or generally where to focus their attention in order to improve the server performance.

I finally got ambitious enough to tackle this problem and asked a question on StackOverflow at http://stackoverflow.com/questions/39459185/mysql-how-to-count-the-number-of-inserts-updates-to-a-table

A commenter named barat pointed me to this post which had the insightful idea of parsing the binary log for analysis.
Since my servers are generally hosted on AWS, I don't have direct access to the binary log, so I had to figure out how to retrieve it remotely. The MySQL documentation for the mysqlbinlog command briefly mentions how to read the binary log from a remote server. It took some experimentation to get the right command and output options with all of the data I wanted. Specifically, the `--base64-output=DECODE-ROWS --verbose` options, which translate some of the row-based logging into MySQL statements that can be parsed.

The first step is to create a user that has access to the binary logs. I used the main 'admin' user that RDS creates because it was convenient. If creating a new user, you probably need to grant it the REPLICATION SLAVE privilege.

You can see which binary logs are available on the server with the SHOW BINARY LOGS; command:

mysql> show binary logs;
+----------------------------+-----------+
| Log_name                   | File_size |
+----------------------------+-----------+
| mysql-bin-changelog.232522 | 16943219  |
| mysql-bin-changelog.232523 | 32300889  |
| mysql-bin-changelog.232524 | 15470603  |
+----------------------------+-----------+

Then you can actually retrieve the log and print to STDOUT using this command:

14:01 $ mysqlbinlog --read-from-remote-server \
  --host myhost.somerandomchars.us-east-1.rds.amazonaws.com \
  --user admin \
  --password="mypassword"
  mysql-bin-changelog.232522

Note that if you get the error below, you need to make sure that your MySQL client and server tools are using the same version. I originally attempted to use MySQL 5.5 tools with a MySQL 5.6 server.

ERROR: Got error reading packet from server: Slave can not handle replication events with
the checksum that master is configured to log; the first event 'mysql-bin-changelog.232519'
at 4, the last event read from '/rdsdbdata/log/binlog/mysql-bin-changelog.232519' at 120,
the last byte read from '/rdsdbdata/log/binlog/mysql-bin-changelog.232519' at 120.

After that, it was just a matter of parsing the file for the relevant commands. I've now put all of that logic into a quick PHP script that I can reuse anywhere. With it, I can go through a bunch of binary logs on a server and see which tables are updated the most frequently, with output like this:

Parsed 1,096,063 lines spanning 300 seconds between 2016-09-13 03:05:00 and 2016-09-13 03:10:00
master                         metrics                        update          = 43570
master                         metrics                        insert into     = 9
DEFAULT                        accounts                       update          = 501
DEFAULT                        users                          update          = 5
DEFAULT                        logins                         insert into     = 1
mysql                          rds_heartbeat2                 insert into     = 1

I’ve committed this project to Github at https://github.com/sellerlabs/mysql-writes-per-table for others to use.
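
The core of the parsing can also be sketched as a one-off shell pipeline against the decoded binlog output. This is a simplified approximation of what the PHP script does (it assumes the --base64-output=DECODE-ROWS --verbose output format and ignores some edge cases), not the script itself:

mysqlbinlog --read-from-remote-server \
  --host myhost.somerandomchars.us-east-1.rds.amazonaws.com \
  --user admin --password="mypassword" \
  --base64-output=DECODE-ROWS --verbose \
  mysql-bin-changelog.232522 \
| grep -iE '^(### )?(INSERT INTO|UPDATE|DELETE FROM) ' \
| awk '{
    if ($1 == "###") sub(/^### /, "");      # strip the row-event comment prefix
    verb = tolower($1);
    table = (verb == "update") ? $2 : $3;   # UPDATE <table> vs INSERT INTO / DELETE FROM <table>
    print table, verb;
  }' \
| sort | uniq -c | sort -rn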

Docker Syslog Container for Sending Logs to CloudWatch

AWS's CloudWatch Logs service was first made available about a year ago and, by my estimation, has gone largely unnoticed. The initial iteration was pretty rough, but some recent changes have made it more useful, including the ability to search logs and to generate events for monitoring in CloudWatch from log content.

Unfortunately, the CloudWatch Logs agent just watches log files on disk and doesn't act as a syslog server. An AWS blog post explained how to get the CloudWatch Logs agent running inside a container and monitoring the log output from rsyslogd, but the instructions used Amazon's ECS service, which still doesn't quite offer the flexibility that CoreOS or Deis offer, IMHO. ECS does some magic behind the scenes in passing credentials around that you have to do yourself when using CoreOS.

I've put together a GitHub repository with the tools to make this work pretty easily, as well as a Docker image with some reasonable defaults.
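
Purely for illustration, running such a container might look roughly like the following. The image name, environment variables, and paths here are placeholders, not the actual defaults from the repository:

## Placeholder image and variable names -- see the repository for the real ones
docker run -d \
  --name syslog-cloudwatch \
  -p 514:514/udp -p 514:514/tcp \
  -e AWS_REGION=us-east-1 \
  -e LOG_GROUP=my-app-syslog \
  -v /root/.aws/credentials:/root/.aws/credentials:ro \
  example/rsyslog-cloudwatch-logs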

When trying to pull all of this together, I discovered a problem due to a bug in the overlayfs used in current Deis releases, which causes the AWS Logs agent not to notice changes in the syslog files. A workaround is available that reformats the host OS back to btrfs to solve that particular problem.

Note: when running on Deis 561+, revert to btrfs.

Deis: Add a Key from an ssh-agent

Evidently it is not possible to add an SSH key to Deis directly from an SSH agent. Instead, you can grep the public key out of your ~/.ssh/authorized_keys file and then have deis use that key. Or, if you only have one line in ~/.ssh/authorized_keys, you can just tell deis to use that file directly with the command

deis keys:add ~/.ssh/authorized_keys
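
If the file contains multiple keys, something like this sketch pulls out just the one you want first (the email address to match on is hypothetical):

## Extract a single public key (matching on its comment/email) into its own file
grep 'me@example.com' ~/.ssh/authorized_keys > /tmp/my_key.pub

## Then hand that one key to deis
deis keys:add /tmp/my_key.pub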

Unattended install of Cloudwatch Logs Agent

So far, I'm pretty impressed with CloudWatch Logs. The interface isn't as fancy, and the search capability isn't as deep as other tools like Papertrail or Loggly, but the cost is significantly less, and I like the fact that you can store different log groups for different lengths of time.

I'm trying to get the CloudWatch Logs agent to install as part of an automated script and couldn't find any easy instructions for doing that, so here is how I got it working with a shell script on an Ubuntu 14.04 host.

echo Creating cloudwatch config file in /root/awslogs.conf
cat <<EOF >/root/awslogs.conf
[general]
state_file = /var/awslogs/state/agent-state
## Your config file would have a lot more with the logs that you want to monitor and send to Cloudwatch
EOF

echo Creating aws credentials in /root/.aws/credentials
mkdir /root/.aws/
cat <<EOF > /root/.aws/credentials
[default]
aws_access_key_id = YOUR_AWS_ACCESS_KEY_HERE
aws_secret_access_key = YOUR_AWS_SECRET_KEY_HERE
EOF

echo Downloading cloudwatch logs setup agent
cd /root
wget https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py
echo running non-interactive cloudwatch-logs setup script
python ./awslogs-agent-setup.py --region us-west-2 --non-interactive --configfile=/root/awslogs.conf
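
Assuming the setup script finishes cleanly, a quick check that the agent is running and shipping logs might look like this (the awslogs service name and the /var/log/awslogs.log path are what the installer typically uses; adjust if yours differ):

## Check that the agent service is running
sudo service awslogs status

## Watch the agent's own log for errors or successful delivery to CloudWatch
sudo tail -f /var/log/awslogs.log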

DKIM / SPF / SpamAssassin test moved to dkimvalidator.com

For over 7 years, I've hosted an email validation tool on this site. I developed the tool back when I was doing a lot with email, and when DKIM and SPF were still pretty tricky to get working. Over the years it has become the single most popular page on the site, and Google likes it pretty well for certain DKIM and SPF keywords. (And strangely, "brandon checketts dkim" pops up in Google's search suggestions when you try to Google my name.)

In any case, I’ve moved that functionality over to its own site now at http://dkimvalidator.com/ so that it has its own place to call home. It also got a (albeit weak) visual makeover, and all of the underlying libraries have been updated to the latest versions so that they are once again accurate and up-to-date.

Getting Ubuntu 14.04 php5enmod to understand module priority

Usage of Debian's php5enmod tool doesn't seem to be documented anywhere except on the command line, when calling it without any arguments:

user@host:~# php5enmod
WARNING:
usage: php5enmod [ -s ALL|sapi_name ] module_name [ module_name_2 ]

Unfortunately, that provides no information on how to customize the priority of a module when enabling it. Some others seem to think that you should be able to provide a priority level on the command line, but that doesn’t work.

It took some digging into the bash scripts to figure out how to make it work. The trick is to add a comment in the .ini file for the module. The comment must follow a very specific format:


zend_extension = /usr/lib/php5/20121212/ioncube_loader_lin_5.5.so
; priority=1

The 'priority' line must be in exactly that format and must not contain any other spaces or characters. The line must start with a semicolon, followed by a space, followed by priority=, and finally the desired priority level. The only space on the line must be between the semicolon and the word 'priority'.
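
Putting it together, enabling the module and confirming the priority took effect might look something like the following sketch. It assumes the .ini above lives at /etc/php5/mods-available/ioncube.ini and that you are checking the apache2 SAPI; paths vary by setup:

## Enable the module (php5enmod reads it from mods-available)
sudo php5enmod ioncube

## The symlink created in conf.d should now be prefixed with the priority
## from the ini comment, e.g. 01-ioncube.ini rather than the typical default of 20-ioncube.ini
ls -l /etc/php5/apache2/conf.d/ | grep ioncube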

Proposed Pattern for Deploying EC2 instances with Secure Credentials

After struggling with this problem in my mind for a while, I finally had the opportunity to experiment with Cloud Init and come up with a working solution for securely (I think) deploying code and credentials to a stock Ubuntu Instance on EC2.

My primary goals are:

  • Must use a stock AMI with no customization
  • Human readable user-data that contains appName, environment, and role.
  • user-data must be easily modified by a developer for their own app or environment
    (No forcing them to base64 encode, gzip, or use special tools)
  • Must be portable between providers.
    The example works with EC2, but the initial 'include' file can be customized for each provider or OS.

The diagram below shows how this is accomplished:

Proposed Cloud Init on Ubuntu / EC2 with secure credentials


I’ve successfully deployed several instances using this method and it seems to work well. Getting the cloud init include file, and the script
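
As a rough sketch of the idea, an instance-side bootstrap script could read the human-readable user-data and act on it like this. The appName/environment/role keys come from the goals above; the simple key=value format and the credential location are assumptions for illustration:

#!/bin/bash
## Fetch the raw user-data from the EC2 metadata service
USER_DATA=$(curl -s http://169.254.169.254/latest/user-data)

## Assume simple key=value lines, e.g. appName=myapp / environment=production / role=web
APP_NAME=$(echo "$USER_DATA"    | grep '^appName='     | cut -d= -f2)
ENVIRONMENT=$(echo "$USER_DATA" | grep '^environment=' | cut -d= -f2)
ROLE=$(echo "$USER_DATA"        | grep '^role='        | cut -d= -f2)

echo "Bootstrapping $APP_NAME ($ENVIRONMENT) as role $ROLE"

## From here, the script would fetch the application code and the credentials
## for this environment/role from a secured location (for example, a restricted
## S3 bucket that the instance's IAM role is allowed to read).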

Troubleshooting /etc/cron.d/ on Ubuntu

On Debian-based systems, files in /etc/cron.d:
– must be owned by root
– must not be group or world writable
– may be symlinked, but the destination must follow the ownership and file permissions above
– filenames must contain ONLY letters, numbers, underscores, and hyphens (no periods)
– must contain a username in the 6th column

From the man page:

Files in this directory have to be owned by root, do not need to be executable (they are configuration files, just like /etc/crontab) and must conform to the same naming convention as used by run-parts(8): they must consist solely of upper- and lower-case letters, digits, underscores, and hyphens. This means that they cannot contain any dots.

The man page also provides this explanation for the strange rule:

For example, any file containing dots will be ignored. This is done to prevent cron from running any of the files that are left by the Debian package management
system when handling files in /etc/cron.d/ as configuration files (i.e. files ending in .dpkg-dist, .dpkg-orig, and .dpkg-new).
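
A quick way to check an existing /etc/cron.d entry against these rules is a sketch like this (myjob is a placeholder filename):

## Ownership and permissions: must be owned by root and not group/world writable
ls -l /etc/cron.d/myjob
sudo chown root:root /etc/cron.d/myjob
sudo chmod 644 /etc/cron.d/myjob

## The filename may contain only letters, numbers, underscores, and hyphens (no dots),
## and every job line needs a username in the sixth column, for example:
## */5 * * * * root /usr/local/bin/myjob.sh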