Web Programming, Linux System Administration, and Entrepreneurship in Athens, Georgia


Disney Should Have a Developer Program

The Walt Disney Company should start a third-party software developer program. There are plenty of hobbyists and small developers who would like official access to some of the data the company already has for its theme parks, Disney Plus, and other properties.

If done with an approval process, and perhaps even incentives, it could be an effective way for Disney to improve guest experiences by letting others innovate on guests’ behalf. Instead of everything bottlenecking on Disney’s own development teams, it would allow others to create tools that compete with each other to deliver the best experiences.

There’s already some of this happening with unofficial data sources. By bringing it under an official Developer Program, Disney could steer development toward things they’d like to have available but don’t have time to build themselves.

APIs that I’d like to see and could put to immediate use:

  • Theme Park Basics: Get a list of Resorts, Parks, Attractions, operating hours, and ticket prices
  • Ride Wait Times: Current wait times as posted in-app. This would enable a new generation of planning apps that don’t rely on unofficial sources (see the hypothetical sketch after this list)
  • Dining Availability: See when restaurants have availability, and ideally be able to make a reservation on behalf of a user
  • Disney Plus Media: Be able to view what’s currently available for viewing on Disney+, and maybe even a user’s watch history, wishlist, etc.
  • Disney Photopass Photos: Be able to retrieve a user’s photos
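
To make that concrete, here is what a wait-times call might look like. This is purely a hypothetical sketch: the api.disney.example host, the endpoint path, and the response fields are all made up for illustration.

# Hypothetical: no such endpoint exists today
curl -s -H "Authorization: Bearer $DISNEY_API_TOKEN" \
    "https://api.disney.example/v1/parks/magic-kingdom/wait-times"

# An imagined response:
# [
#   {"attraction": "Space Mountain", "status": "OPERATING", "waitMinutes": 45},
#   {"attraction": "Haunted Mansion", "status": "OPERATING", "waitMinutes": 30}
# ]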

What other Disney APIs would you like to see offered?

Disney is Doing Cross-Site Authentication All Wrong

Disney runs quite a few properties including disneyplus.com, hulu.com, espn.com, abc.com, and a bunch of obviously Disney sites like shopdisney.com, disneyworld.disney.go.com, and disneycruise.disney.go.com. They have a centralized authentication system so all of these sites can use the same email address and password to log in.

It has a couple of major problems, though:

  1. It isn’t obvious that the login is shared. The sites share a logo on the login screen, but it’s not obvious to users that they share the same credentials. I wouldn’t expect espn.com to use the same login as hulu.com, and I know that Disney owns both of them! Also, password managers aren’t aware that the logins are tied together, so when you log in to one site and your password doesn’t work (because you don’t realize they are shared), you end up resetting it. And that breaks your password for other sites that you didn’t realize were connected
  2. Users can’t verify that a site is legitimate. It would be trivial for an attacker to create a fake Disney site and mimic the Disney login system to capture passwords. I actually noticed this because my wife was logging into a site for Disney gift cards and I seriously thought it was a scam

Disney should implement a shared login that uses a common login site (like login.disney.com) so that users can know a site is legitimately Disney’s. This fixes both issues above. Users can learn to trust login.disney.com, password managers will use the same credentials everywhere, and it becomes much harder for attackers to mimic a site when users know that login.disney.com is the only legitimate place to log in.
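
This is the same pattern Google uses with accounts.google.com. As a minimal sketch, assuming a hypothetical login.disney.com (the URLs and parameters below are illustrative, not Disney’s actual implementation), a standard OAuth-style redirect flow would look like:

# 1. espn.com sends the browser to the shared login host:
#      https://login.disney.com/authorize?client_id=espn&redirect_uri=https://www.espn.com/callback&response_type=code
# 2. The user enters credentials at login.disney.com, the only place a Disney password is ever typed
# 3. login.disney.com redirects back with a one-time authorization code:
#      https://www.espn.com/callback?code=ONE_TIME_CODE
# 4. espn.com exchanges the code for a session server-to-server, and never sees the password

Password managers would then associate every saved Disney credential with login.disney.com, which is exactly what fixes both problems.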

Stop Validating Domain Ownership with @ TXT Records

Lots of services need to validate ownership of a domain, especially for sending email or issuing SSL certificates.

Creating a TXT record at the domain root (@) is a common practice, and I think it should be avoided. Many services request additions to this same record, which creates several concerns:

  1. It leaks information about which third-party services you use (or have used). This is a minor security issue, but an unnecessary one
  2. The process for adding multiple lines to a single record is inconsistent between providers, meaning that instructions have to be provider-specific. Instructions for GoDaddy are different from those for Cloudflare
  3. Most DNS providers don’t support comments on records, and the names of the records are often not self-explanatory. You end up with many lines and don’t know which belongs to which service. To make matters worse, records are rarely removed when you stop using a service, so it becomes an ever-growing list

A better practice is to use either TXT or CNAME records on specific hostnames (e.g., google-verification-randomstring.mydomain.com) that contain a verification string or hostname. This avoids all of the problems above: the name can’t be guessed, and each record stands on its own. Either the hostname or the value should indicate which service the record is for. Having a random value like 25376de5f10046a853b1395e756cbf66 doesn’t help me know which service it belongs to (I’m looking at you, AWS Certificate Manager!)
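
For illustration (this hostname is made up), a service-specific record can be looked up, identified, and removed entirely on its own:

$ dig -ttxt +short google-verification-randomstring.mydomain.com
"google-site-verification=qX5fQ3..."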

This is the kind of bloat you end up with when everybody uses root TXT records, and when records are added by people who don’t know what they are doing.

01:21 $ dig -ttxt mydomain.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> -ttxt mydomain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26429
;; flags: qr rd ra; QUERY: 1, ANSWER: 12, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;mydomain.com.		IN	TXT

;; ANSWER SECTION:
mydomain.com.	277	IN	TXT	"google-site-verification=qX5fQ3XXXXXXXXXXXXXXXXXXWvxBGAlVigFEW3nYzfU"
mydomain.com.	277	IN	TXT	"google-site-verification=oAHQRYYYYYYYYYYYYYYYYYYYYYD447rpeYhE81wPD44"
mydomain.com.	277	IN	TXT	"slack-domain-verification=sa2uZZZZZZZZZZZZZZZZZZZZZZZZZZZZZtTRsDOOS"
mydomain.com.	277	IN	TXT	"google-site-verification=3LEWAAAAAAAAAAAAAAAAAAAA8GGyhpkv-Ge3qhaOIn8"
mydomain.com.	277	IN	TXT	"facebook-domain-verification=zvyCCCCCCCCCCCCCCCCCCCCCC5lbhn"
mydomain.com.	277	IN	TXT	"v=spf1 include:spf.mandrillapp.com include:_spf.elasticemail.com include:aspmx.pardot.com ~all"
mydomain.com.	277	IN	TXT	"pardot885593=bd2638dff2ffffffffffffffffffffffffffffffffe46fd6c4dbffefa91"
mydomain.com.	277	IN	TXT	"include:servers.mcsv.net ?all"
mydomain.com.	277	IN	TXT	"include:_spf.google.com include:mailgun.org"
mydomain.com.	277	IN	TXT	"mandrill_verify.tIcfQQQQQQQQQQQQQQQQqaQ"
mydomain.com.	277	IN	TXT	"google-site-verification=rjDDDDDDDDDDDDDDDDDDDDDDDDDtzfGMuZKmt74DfQ0"
mydomain.com.	277	IN	TXT	"brevo-code:a0aaaaaaaaaaaaaaaaaaaaaabebed7419"

;; Query time: 4 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Sun Dec 21 01:22:25 UTC 2025
;; MSG SIZE  rcvd: 984

I know they aren’t using several of those services anymore, but cleaning them up requires time-consuming validation just to make sure

Migrating Between Google Workspace Accounts

It has been a few years since I’ve had to migrate data between Google Workspace accounts, but I recently had to do it again. Google has made some improvements! Namely, they now have a Migration Service where you provide a list of user accounts, and it moves all of their email from the old account to the new one. I used to have to do that via a third-party service or manually with an IMAP client

The migration service looks like it might handle Google Calendar and Contacts as well

It is still a little bit of a hassle to transfer ownership of Google Docs between accounts. Google doesn’t let you change ownership directly from one person in an organization to a person in a different organization. But you can work around that by using Google Shared Drives.

  1. Set up a shared drive and share it with both the old and new Google Workspace accounts. Make sure to grant them the full “Manager” permission (“Content manager” won’t allow transferring ownership)
  2. From the old account, move all of the content into the shared drive. I usually put it in a folder within the shared drive if there is already other content there
  3. From the new account, access the same shared drive and move the content from it back into your own Drive. This transfers ownership to the individual user

Note that you can’t move items that were shared with you; they will cause an error when it checks what can be moved. Also, after you move a document, its URL changes, so any links between documents will likely be broken and have to be re-linked
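
If there are a lot of files involved, the same moves can be scripted against the Drive v3 API. This is a minimal sketch, assuming you already have an OAuth token with a Drive scope in $TOKEN and real values for the placeholder IDs:

# Move one file into the shared drive (step 2); swapping the parent IDs
# moves it back out into the new account's My Drive (step 3)
curl -s -X PATCH \
    -H "Authorization: Bearer $TOKEN" \
    "https://www.googleapis.com/drive/v3/files/FILE_ID?addParents=SHARED_DRIVE_ID&removeParents=OLD_PARENT_ID&supportsAllDrives=true"

Note the supportsAllDrives=true parameter; without it, the API refuses to operate on shared-drive items.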

Google Calendar Spam is on the rise! ZapSpam is here to help!

I’ve been getting more and more spam on my calendar lately. It’s actually gotten pretty out of control with Bitcoin and PayPal scams, and anti-virus software I didn’t buy wanting me to call to cancel.

Google Trends shows the sharp uptick in search volume for “Calendar Spam” over the past month or two

Google Trends shows Calendar Spam is on the rise

I figured this was a problem I could solve, so I put together an app at ZapSpam.io so that you never have to see this junk again.

It monitors your calendar events in real-time and deletes the spam events within seconds of them being added to your calendar. Problem Solved!

And I had fun doing the 1960s-style superhero theming on the site


AWS CodeDeploy Troubleshooting

CodeDeploy with Auto Scaling Groups is a bit of a complex mess to get working correctly, especially with an app that has been working and needs to be updated for more modern functionality.

Update the startup scripts with the latest versions from https://github.com/aws-samples/aws-codedeploy-samples/tree/master/load-balancing/elb-v2

I found that even the latest scripts there still didn’t work. My instances were starting up, then dying shortly afterward. CodeDeploy was failing with this error:


LifecycleEvent - ApplicationStart
Script - /deploy/scripts/4_application_start.sh
Script - /deploy/scripts/register_with_elb.sh
[stderr]Running AWS CLI with region:
[stderr][FATAL] Unable to get this instance's ID; cannot continue.

Upon troubleshooting, I found that common_functions.sh has a get_instance_id() function that runs this curl command to get the instance ID:


curl -s http://169.254.169.254/latest/meta-data/instance-id

Running that command by itself, while an instance was still up, returned nothing, which is why the script was failing.

It turns out that newer instances use IMDSv2 by default, and it is required (no longer optional). With that configuration, this curl command fails. To fix this, I replaced the get_instance_id() function with this version:

# Usage: get_instance_id
#
#   Writes to STDOUT the EC2 instance ID for the local instance. Returns non-zero if the local
#   instance metadata URL is inaccessible.

get_instance_id() {
    TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" -s -f)
    if [ $? -ne 0 ] || [ -z "$TOKEN" ]; then
        echo "[FATAL] Failed to obtain IMDSv2 token; cannot continue." >&2
        return 1
    fi

    INSTANCE_ID=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id -s -f)
    if [ $? -ne 0 ] || [ -z "$INSTANCE_ID" ]; then
        echo "[FATAL] Unable to get this instance's ID; cannot continue." >&2
        return 1
    fi

    echo "$INSTANCE_ID"
    return 0
}

This version uses the IMDSv2 API to get a token, then uses that token to get the instance-id.

With that code replaced, the application successfully registered with the Target Group, and the Auto Scaling group works correctly.

Alternatively (and for troubleshooting), I was able to make IMDSv2 optional using the AWS Console, and via CloudFormation with this part of the Launch Template:

Resources:
  MyLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateName: my-launch-template
      LaunchTemplateData:
        ImageId: ami-1234567890abcdef0
        InstanceType: t4g.micro
        MetadataOptions:
          HttpTokens: optional
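
The same change can be applied to a running instance with the AWS CLI (the instance ID is a placeholder):

aws ec2 modify-instance-metadata-options \
    --instance-id i-1234567890abcdef0 \
    --http-tokens optional

That said, AWS recommends requiring IMDSv2, so updating the scripts as shown above is the better long-term fix.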

ShipStation’s Auctane Fulfillment Network for 3PLs

At Data Automation, we recently built some functionality for ShipStation’s “Auctane Fulfillment Network”, as it is called in the API. Their documentation appears to refer to it as Send Orders To Fulfillment.

This is a fairly clever innovation: if the Seller has ShipStation, and their Third Party Logistics (3PL) provider also uses ShipStation, they can essentially “Send To Fulfillment”, which makes a copy of the order in the 3PL’s ShipStation account for them to fulfill. Once the 3PL fulfills the order, it copies the shipment information, including Carrier, Service, Tracking Number, and Estimated Delivery Date, back to the seller’s ShipStation account.

It looks like it is still a little convoluted to set up. The 3PL and Seller need to coordinate some things back and forth via email to begin. But once set up, the seller can simply click the “Send to Fulfillment” button inside their ShipStation account to assign an order to their 3PL. You can also set up automation rules to make that happen automatically depending on the sales channel, SKU, and other criteria

From a technical perspective, the order is duplicated into the 3PL’s system, but not quite exactly as if it had been pulled from the channel directly.

It’s always nice to work with a pleasant customer when troubleshooting something new. With their help, we got this sorted out, and it is now running smoothly for our Amazon Custom integration with ShipStation at Data Automation

Thinking Outside the Box – Helping with a Tree Service Business

A good friend of mine owns Sherwood Forest Tree Service, which mostly does tree removal and pruning in the Northeast Georgia area. That’s outside what I normally work with, but I’ve enjoyed learning about and helping with his business. I’m seeing a lot of opportunities to apply technology to different aspects of it.

I’ve pulled a list of all of his past customers and am looking through property data to try to identify common attributes that define his ideal customer. Then we may be able to target more customers like those he’s already worked with.

Also, I’m wondering whether we could generate an estimate for tree removal from just a photo, with AI providing information like the species of the tree and its estimated height and trunk diameter.

I’m excited to see if it turns into some other projects.

SSH Key Best Practices for 2025 – Using ed25519, key rotation, and other best practices

Apparently Google thinks I’m an expert on SSH keys, so here is an update to my post from two years ago, with some slight refinements.

You can tell quite a bit about other IT professionals from their public SSH key! I often work with others and ask for their key when granting access to a machine I control. It’s a negative sign when they ask how to create one. If they provide one in the PuTTYgen format, I know they’ve been asked for their key exactly once. A 2048-bit or smaller RSA key means they haven’t generated one in a long time. If they send me an ed25519 key with a comment other than their machine name, I feel confident that they know what they are doing.

For reference, a 4096-bit RSA key will be in this format:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDowuIZFbN2EWbVwK9TG+O0S85yqr7EYc8Odv76H8+K6I7prrdS23c3rIYsKl2mU0PjOKGyyRET0g/BpnU8WZtDGH0lKuRaNUT5tpKvZ1iKgshdYlS5dy25RxpiVC3LrspjmKDY/NkkflKQba2WAF3a5M4AaHxmnOMydk+edBboZhklIUPqUginLglw7CRg/ck99M9kFWPn5PiITIrpSy2y2+dt9xh6eNKI6Ax8GQ4GPHTziGrxFrPWRkyLKtYlYZr6G259E0EsDPtccO5nXR431zLSR7se0svamjhskwWhfhCEAjqEjNUyIXpT76pBX/c7zsVTBc7aY4B1onrtFIfURdJ9jduYwn/qEJem9pETli+Vwu8xOiHv0ekXWiKO9FcON6U7aYPeiTUEkSDjNTQPUEHVxpa7ilwLZa+2hLiTIFYHkgALcrWv/clNszmgifdfJ06c7pOGeEN69S08RKZR+EkiLuV+dH4chU5LWbrAj/1eiRWzHc2HGv92hvS9s/c= someuser@brandonsLaptop

And for comparison, an ed25519 key looks like this:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBLEURucCueNvq4hPRklEMHdt5tj/bSbirlC0BkXrPDI someuser@ip-172-31-74-201

You’ll notice that in both of these, the first characters contain the key type. The middle section of seemingly random characters contains the base64-encoded public key. And at the end is a comment intended to identify the user to whom the key belongs.

The ed25519 key is much shorter than an RSA key, so if you’ve never seen one before, you might think it is less secure. But this key type is newer and uses a totally different algorithm. Although the 256-bit ed25519 key has fewer characters, it is, for all practical purposes, as secure as the 4096-bit RSA key above. The math behind ed25519 is harder to attack per bit, so it requires far fewer bits for a similar level of security.

The ed25519 algorithm is based on a specific elliptic curve (Curve25519) instead of the large prime numbers that RSA relies on. It has been in wide use for about 10 years, is supported by all modern software, and as such is the current standard for most professional users. Creating a key is simple with the ssh-keygen command. But before jumping to the actual command, I wanted to explain a few other tips that I use and think others should adopt as well.

Keys should be created by individuals, not issued to groups

You should never share your private key with anybody. Ever. If a key is ever shared, you have to assume that the other party can impersonate you on any system in which it is used.

I’ve been a part of some teams that create a new server, create a new key to access that server, and share the new key with everybody who needs access to the machine. I think this practice stems from AWS and other providers that create an SSH key for you along with a new machine, and users just continue the practice. I wish they’d change that.

That’s the backwards way of thinking about it. Individuals should own their own keys, and those keys should be private. You can add multiple public keys to resources where multiple people need access. (Again, I wish AWS and others would make this easier instead of allowing only a single key.) You then revoke access by removing a public key, instead of having to re-issue a new key whenever the group changes. (Or worse, not changing the key at all!)

Rotating your SSH keys

You should rotate your SSH keys regularly. The thought process here is that if you have used the same key for a long time, and your laptop with your private key gets lost or your key is otherwise compromised, every machine that you’ve been granted access to over that time is potentially at risk, because administrators are notoriously bad about revoking access. By changing out your key regularly, you limit the potential damage from a compromised key. Generating a new SSH key also ensures that you are using modern algorithms and key sizes.

I like to create a new SSH key about every two years. To remind myself to do this, I embed the year I created the key within its name. My last key was created in March 2023, and is named brandon+2023@roundsphere. I’m creating a new key now, at the beginning of 2025, which I’ll name with the current year. Each time I use it, I’m reminded when I created it, and if it’s around two years old and I have some free time, I’ll create a new key. Of course, I keep all of my older keys in case I need access to something I haven’t touched in a while. My ssh-agent usually has my two most recent keys loaded. If I do need an older one, it is enough of a hassle to find and load it that the first thing I’ll do after getting into a system that needed the old key is replace it with the new one.

Don’t use the default ssh-keygen comment

I also suggest that you make the SSH key comment something meaningful. If you don’t provide a comment, most ssh-keygen implementations default to your_username@your_machine_name, which might be silly or meaningless. In a professional setting, it should clearly identify you. For example, BrandonChecketts as a comment is better than me00101@billys2017_macbook_air. It should be meaningful both to you and to whomever you share it with.

I mentioned embedding the creation month above; I like to include it in the comment because when I share the public key, it subtly demonstrates that I am security-conscious, that I rotate my keys, and that I know what I’m doing. The comment at the end of the key can be changed without affecting its functionality, so I might change the comment depending on whom I’m sharing it with. When I receive a public key from somebody else that contains a generic comment, I often change the comment to include their name or email address so I can later remember to whom it belongs.
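
Changing a comment doesn’t require regenerating the key. You can simply edit the text at the end of the .pub file, and ssh-keygen can rewrite the comment stored inside the private key file as well (the filename here follows my naming convention above):

ssh-keygen -c -C "brandon+2025@roundsphere" -f ~/.ssh/brandon+2025@roundsphere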

Always use a passphrase

Your SSH key is just a tiny file on disk. If your machine is ever lost, stolen, or compromised in any way by an attacker, the file is pretty easy for them to copy. If it is not encrypted with a passphrase, it is directly usable. And if someone has access to your SSH private key, they probably have access to your bash or terminal history too, and will know where to use it.

As such, it is important to protect your SSH private key with a decent passphrase. To avoid typing your passphrase over and over, use ssh-agent, which will remember it for your session.
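
If you haven’t used ssh-agent before, the typical workflow looks like this (the key filename follows my naming convention above):

eval "$(ssh-agent -s)"                    # start an agent if one isn't already running
ssh-add ~/.ssh/brandon+2025@roundsphere   # prompts for the passphrase once per session
ssh-add -l                                # list the keys currently loaded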

Understand and use SSH-Agent Forwarding when applicable

SSH agent forwarding allows you to SSH into one machine and then transparently “forward” your SSH keys to that machine for use when authenticating to a machine beyond it. I most often use this when authenticating to GitHub from a remote machine. Using agent forwarding means that I don’t have to copy my SSH private key onto the remote machine in order to authenticate to GitHub from there.

You shouldn’t, however, just blindly use SSH agent forwarding everywhere. If you access a compromised machine where an attacker may have access to your account or to the root account, you should NOT use agent forwarding, since they could use your forwarded agent to authenticate as you (they can’t read the private key itself, but they can use it while you are connected). I’ve never seen this exploited, but since it is possible, you should only use SSH agent forwarding to systems you trust.
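
Enabling forwarding per-host is safer than turning it on globally. For example:

ssh -A user@trusted-host.example.com    # one-off forwarding with the -A flag

# Or in ~/.ssh/config, so it applies only to hosts you trust:
Host trusted-host.example.com
    ForwardAgent yes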

The ssh-keygen Command

With all of the above context, this is the command you should use to create your ed25519 key:

ssh-keygen -t ed25519 -f ~/.ssh/your-key-filename -C "your-key-comment"

That will ask you for a passphrase and then show you a randomart image representing your new public key. The randomart is just a visual representation of your key so that you can see at a glance that it is different from others.

 $ ssh-keygen -t ed25519 -f ~/.ssh/brandon+2025@roundsphere -C "brandon+2025@roundsphere"
Generating public/private ed25519 key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ~/.ssh/brandon+2025@roundsphere
Your public key has been saved in ~/.ssh/brandon+2025@roundsphere.pub
The key fingerprint is:
SHA256:HiCF8gbV6DpBTC2rq2IMudwBc5+QuB9NqeGtc3pmqEY brandon+2025@roundsphere
The key's randomart image is:
+--[ED25519 256]--+
| o.o.+.          |
|  * +..          |
| o O...          |
|+ A *. .         |
|.B % .  S        |
|=E* =  . .       |
|=+o=    .        |
|+==.=            |
|B..B             |
+----[SHA256]-----+

Obsessive/Compulsive Tip

This may be taking it too far, but I like to have a few memorable characters at the end of the key so that I can confirm the key was copied correctly. One of my keys ends in 7srus, so I think of it as my “7’s ‘R’ Us” key. You can regenerate over and over until you find a key that you like with this one-liner:

rm -f newkey newkey.pub; ssh-keygen -t ed25519 -f ./newkey -C "brandon+2025@roundsphere" -N ''; cat newkey.pub;

That creates a key without a passphrase, so you can iterate quickly until you find a public key that you “like”. Then protect it with a passphrase with this command:

ssh-keygen -p -f newkey

And obviously, you then rename newkey and newkey.pub to a more meaningful name.

Replacing your public key when you use it

As you access machines, make sure to add your new key to, and remove old keys from, your ~/.ssh/authorized_keys file. At some point, remove your previous key from your ssh-agent; on any machine that still has only the old key, you’ll be forced to dig it out to get in, which is your cue to replace it with the new one.
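
Here’s a sketch of that cleanup on a remote machine, assuming my key-naming convention from above (substitute your actual new public key for the truncated placeholder):

echo 'ssh-ed25519 AAAA... brandon+2025@roundsphere' >> ~/.ssh/authorized_keys
sed -i '/brandon+2023@roundsphere/d' ~/.ssh/authorized_keys   # remove the old key by its comment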

Is that complete? What other tips should others know about when creating an SSH Key in 2025 and beyond?

Several AWS Step Function Events Should be Classified as Data Events

At DataAutomation, we use the AWS Step Functions service pretty extensively. It provides a nice, modular framework for building custom workflows for customers. We make millions of requests per day to the service. We also use AWS GuardDuty for threat detection.

GuardDuty monitors the CloudTrail log for odd things happening in your AWS account. It also monitors for suspicious network traffic and potential weaknesses on your EC2 instances, among other things. I actually like GuardDuty quite a bit.

I have one complaint about this combination, though. With our high-volume usage of AWS Step Functions, all of those routine state machine events, like creating tasks, executing them, and deleting them, go through CloudTrail, and thus through GuardDuty for monitoring. GuardDuty can get kind of expensive for this, since we’re generating hundreds of thousands or millions of events per day.

S3 and DynamoDB are similar in this respect: when using those services, you can rack up millions of events very quickly. CloudTrail has a solution for them, classifying events as either “Management Events” or “Data Events”. For S3, Management Events include things like creating a new bucket or changing its policies, while Data Events include adding, reading, or deleting objects in the bucket. On the DynamoDB side, Management Events include creating or modifying tables and their access settings, while Data Events include reading and writing items in the tables. Crucially, data events are not logged (or charged for) unless you explicitly opt in.
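
For a sketch of how that opt-in works today (the trail name is a placeholder), a CloudTrail trail that records management events but no data events looks like this:

aws cloudtrail put-event-selectors \
    --trail-name my-trail \
    --event-selectors '[{"ReadWriteType": "All", "IncludeManagementEvents": true, "DataResources": []}]'

Because data events are opt-in per trail, high-volume S3 and DynamoDB activity stays out of the management-event stream that GuardDuty’s foundational analysis (and its billing) is built on.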

Step Functions does include one Data Event: InvokeHTTPEndpoint. However, I’d like the Step Functions team to consider making the events related to “using” the service into Data Events as well. That list should include all of the execution events (StartExecution, StartSyncExecution, RedriveExecution, ListExecutions, DescribeExecution, GetExecutionHistory, DescribeStateMachineForExecution, StopExecution) and the task token events (SendTaskSuccess, SendTaskHeartbeat, and SendTaskFailure), as well as GetActivityTask.

I have created an AWS support ticket to try to explain this in as much detail as possible to the Step Functions team. I think the issue gets lost inside AWS because the effects are not readily apparent to the Step Functions team, since the cost ends up associated with GuardDuty. If you have a similar problem, I encourage you to create a similar ticket with a detailed explanation and ask that it be directed to the Step Functions team, who I believe are the most qualified to make this change.


© 2026 Brandon Checketts
