Surviving the Great PSPS of 2025

Colorado is having a dry winter, and over the past few days we've had very high winds along the Front Range, where I live. These conditions create a high risk of wildfire, as the Marshall Fire demonstrated in 2021. Xcel instituted a Public Safety Power Shutoff (PSPS) twice in three days to reduce this risk.

The first event started about 10 a.m. on Wednesday and lasted about 14 hours. The second started at 6 a.m. on Friday and lasted into Saturday (over 24 hours).

There are many ways to deal with an outage like this, from less to more advanced:

- Buy ice and candles. Low tech, but it works

- Portable batteries to power essentials

- Portable generator to power essentials

- Whole-home generator

- Bi-directional EV

- Solar + battery

While the ideal solution would be whole-house solar with battery backup for an uninterrupted supply of power, it is also expensive, and it is only beginning to make economic sense if you can pair it with a variable-rate power plan to shift your daily usage (see Base Power in Texas).

A gas generator is a well-tested and durable solution, either portable or a larger whole-house version. The downsides are that generators require regular maintenance even when rarely used, they are noisy, and they burn gas.

While I do have (and love) my EV, it is not capable of bi-directional use. That is, I can't draw power from it except by driving or through the 12V accessory (cigarette lighter) socket, which is generally limited to ~100W.

What I do have is a portable battery and solar setup I primarily use to augment the onboard solar and batteries of my camper. But the setup also works well in this situation.

My Setup (prices current as of 12/2025, including sales):

- EcoFlow Delta 2 Max ($850)

- EcoFlow 400W Solar Array ($599)

- Renogy 400W Solar Array ($499)

- EcoFlow 800W Alternator Charger ($289)

The Delta 2 Max is a 2kWh portable battery with an inverter. This means it can provide 2kW of power for an hour (or 200W for 10 hours). It can also charge from the solar arrays (up to two at once) or the Alternator Charger, among other sources.

During the outage, I used extension cords to plug in our refrigerator, chest freezer, and networking gear (Internet, WIFI, etc), plus some phone/laptop chargers and lights. 

The networking components draw a constant ~100W. The Refrigerator and Freezer each draw about 100-200W while the compressor is running, and the rest are generally less than 100W. 

This means that the average draw was somewhere around 200-300W throughout the day. 

While the sun was out (which is pretty brief as we approach the winter solstice), I used the solar arrays to keep the battery charged. But since the days were short and there was quite a bit of cloud cover, the 800W of solar panels averaged only a couple hundred watts, generally about break-even. It is important to remember that even in peak conditions, panels generally don't deliver their advertised power; about 350W is the best I've seen from either array in direct Colorado sunshine.

So for the rest of the day I used the 800W Alternator Charger installed in our pickup truck (normally used when towing our camper). The Alternator Charger is an adapter that is hard-wired to the truck battery's terminals and outputs up to 800W to the EcoFlow battery. That means it can charge the battery from empty (assuming no simultaneous draw) in about 2.5 hours.
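The arithmetic behind these estimates is simple enough to sketch in a few lines of Go. The ~2kWh capacity figure and the wattages are the ones from this post; the helper name is mine:

```go
package main

import "fmt"

// hoursAt returns how long a battery of capacityWh watt-hours lasts (or
// takes to charge) at a constant powerW watts.
func hoursAt(capacityWh, powerW float64) float64 {
	return capacityWh / powerW
}

func main() {
	const capacityWh = 2048 // EcoFlow Delta 2 Max, ~2kWh

	// Runtime at the ~250W average household draw described above.
	fmt.Printf("runtime at 250W: %.1f hours\n", hoursAt(capacityWh, 250))

	// Time to recharge from empty at the Alternator Charger's 800W,
	// assuming no simultaneous draw and ignoring charging losses.
	fmt.Printf("charge at 800W: %.1f hours\n", hoursAt(capacityWh, 800))
}
```

Real-world numbers are worse, of course: charging isn't lossless, and the fridge keeps cycling while you charge.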


This setup kept all the essentials running through the outage. The battery did drain overnight, but for the few hours it was empty the refrigerator and freezer stayed cold enough.

I only had to run the truck to charge the battery in the evening after the sun set, and then again right before bed to top it off for overnight.

Overall this was pretty successful. We didn't have the constant noise from a gas generator, although we did have the truck noise in the evenings. It kept the food cold and the family entertained during the outage. With the truck standing in as a gas (diesel) generator to top up the battery, this solution can provide power indefinitely.

Since the primary use of this setup is camping, it is a nice bonus as a home backup solution. From a cost perspective, it is pretty expensive compared to a portable gas generator. A high-end portable Honda that can output much more power costs about $1,000 (less than half of this solution), but it would have to run the entire time you want power.

The whole house solutions are much nicer, but for the limited use cases (this is the first outage of this size I can remember in 15 years here) they are very expensive for the value.

For shorter outages, the EcoFlow Delta 2 Max by itself, or with a single set of panels, compares well to a gas generator as a silent, more environmentally friendly approach. But for longer outages like this one, it requires a gas backup (in this case, the truck via the Alternator Charger).

One limitation we did discover is that the battery backup for our internet provider is only about 12 hours, but luckily for us they added a gas generator during the second outage to keep everything running.

Go Web Server using Lambda Function URLs

It has been possible to use Lambda Functions to host Go (Golang) web servers for years; I wrote about a project doing this back in 2016. More recently, the github.com/apex/gateway project provided a drop-in replacement for net/http's ListenAndServe function that supported AWS REST APIs, and later AWS HTTP APIs.

Amazon recently released the ability to access a Lambda Function directly via a URL, without needing to configure a REST API or HTTP API. This removes an extra layer of configuration, complexity, and potentially cost for deployments.

I created a fork of the github.com/apex/gateway at github.com/ericdaugherty/gateway that supports this new approach. A trivial example of usage in the README is:

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/ericdaugherty/gateway"
)

func main() {
	http.HandleFunc("/", hello)
	log.Fatal(gateway.ListenAndServe(":3000", nil))
}

func hello(w http.ResponseWriter, r *http.Request) {
	// example retrieving values from the api gateway proxy request context.
	requestContext, ok := gateway.RequestContext(r.Context())
	if !ok || requestContext.Authorizer["sub"] == nil {
		fmt.Fprint(w, "Hello World from Go")
		return
	}

	userID := requestContext.Authorizer["sub"].(string)
	fmt.Fprintf(w, "Hello %s from Go", userID)
}

This is effective and addresses the issue, but can make local development a challenge. I often replace the main implementation above with:

func main() {
	// Check whether we are running within AWS Lambda; the AWS_REGION
	// environment variable is set automatically in that environment.
	aws := len(os.Getenv("AWS_REGION")) > 0

	http.HandleFunc("/", hello)

	// Run using the gateway library on Lambda, or plain net/http locally.
	if aws {
		log.Fatal(gateway.ListenAndServe(":8080", nil))
	} else {
		log.Fatal(http.ListenAndServe(":8080", nil))
	}
}

This uses the net/http implementation of ListenAndServe locally during development, and the gateway library when deployed via Lambda. No configuration is needed, as the AWS_REGION environment variable is automatically set by AWS when running as a Lambda Function.

One limitation of Lambda Function URLs is that you can only access the function via the AWS domain (ex: https://&lt;function id&gt;.lambda-url.&lt;aws region&gt;.on.aws/). One easy solution is AWS CloudFront. For a simple use case, create an SSL certificate in AWS for your custom domain, then create a CloudFront distribution using that certificate with the Function URL as the default origin. If you use this approach, you should also set the HTTP Cache-Control header on your cacheable responses to improve performance and reduce Lambda invocations.

You can use a more complicated CloudFront setup if your project has a lot of static assets: deploy them to S3 and have the CloudFront distribution pull from S3 or Lambda depending on the request path. For smaller or low-traffic deployments, everything can be served from the Lambda function.

From here you can build your application using any traditional tooling that works with Go's net/http package.


Package Delivery Detector

The following is an outline of how and why I built a tool to detect when packages are delivered to my front porch. If you have enough of the right infrastructure in place, this will serve as a guide to get it set up yourself. All the source code is available, so if your setup is different, you can take what I've done and modify it to work for you.

This solution is built using Go, AWS, Google Vision, and Docker.

Motivation

In November I attended Amazon's AWS re:Invent conference to catch up on the current state of the cloud. I came away from the conference inspired to leverage an area where I didn't have a ton of experience: Machine Learning. I attended several SageMaker sessions and came up with a project idea:

A tool that would use Machine Learning to detect when a new package was delivered to my home.

I already had a lot of key infrastructure in place, including a Ubiquiti UniFi camera pointed at my front porch. I also had a Synology DiskStation with Docker that I used to run both my UniFi Controller and UniFi NVR.

This project took a long time. I kicked off work in early December and I just recently got to the point where I'm happy with the results.

Image Capture

The key to this project is to build a Machine Learning model that can determine if there is a package at my front door. To build this model, I needed a lot of input data, so I started with a small app that would take a snapshot from my camera every few minutes and save it.

Luckily, UniFi cameras make it easy to grab a JPEG of the current frame. This is what my front door looks like right now:


I was able to grab this image with a simple HTTP GET request to the camera's IP. Example: http://192.168.1.5/snap.jpeg

In order to enable this, I did have to turn on Anonymous Snapshot directly on the camera: navigate to http://192.168.1.5/camera/config and enable the option.

Since I would be running my image capture tool within my local network, I did not need to expose the camera to the Internet.

While this camera placement works well for my every-day use, it contains a lot of extra data that isn't relevant to detecting a package, so I needed to crop each image down to just the area where I thought packages might be placed. Here is the result of my crop:


I now had everything in place to build out an app to start capturing images. I wrote the app in Go (golang) as it is perfect for this type of systems programming. The app simply polls the camera every 10 minutes, grabs the image, crops it (if desired), and stores it in an S3 bucket or on the local file system.

The source code is available on GitHub.

I wrapped the app in a Docker container and deployed it on my Synology. The Docker image is available on Docker Hub. Here is a sample Docker run command:

docker run --restart always \
 -d \
 -e TZ='America/Denver' \
 -v /volume1/imagefetcher:/img \
 --name imagefetcher \
 ericdaugherty/imagefetcher \
 -imageURL http://192.168.1.5/snap.jpeg \
 -dir /img \
 -sleepHour 22 \
 -wakeHour 7 \
 -rect 0,400,900,1080

This creates and runs a new Docker container that captures the images, crops them, and stores them locally.

Since packages are generally not delivered in the middle of the night, and the night images are much harder to see, I decided to only run the image fetcher (and eventually the package detector) from 7a to 10p.  So I added parameters to the ImageFetcher tool to stop capturing images overnight.

The -e TZ='America/Denver' parameter sets the timezone for the Docker container. This is important so that the timestamps are correct and the logic to sleep and wake work correctly.

The -v /volume1/imagefetcher:/img parameter maps the directory on the Synology to /img in the container, and then later the -dir /img specifies that the snapshots should be written to /img in the container, which will result in them being stored in /volume1/imagefetcher on the Synology.

If you would prefer to store the images on S3, you can add these parameters:
    -e AWS_ACCESS_KEY_ID='<Your Access Key ID>' \
    -e AWS_SECRET_ACCESS_KEY='<Your Access Key Secret>' \

to the docker run command and this parameter:
    -s3Bucket <bucket name> \

to the imagefetcher command. You can then drop the -v /volume1/imagefetcher:/img and -dir /img, or keep them both and store the images twice!

Now you wait and capture data... I started with a month's worth of images before I trained the first model, but I continue to capture images and plan on training a new model with the larger set.

Machine Learning Model

You should now have an S3 bucket or directory full of JPEG images. Now comes the fun part: you need to manually sort the images into two labels. I chose the highly descriptive 'package' and 'nopackage'.

I use a MacBook, so I used Finder to quickly preview the images, running through them until I saw a change of state, then moving the batch into either the 'package' or 'nopackage' directory. Repeat until you've processed all of the images.

This is pretty labor intensive, but it went faster than I expected.

You should end up with two folders, named 'package' and 'nopackage'.

I then spent quite a while trying to figure out how to train the model using SageMaker. I found this pretty frustrating as I'm not really fluent in Python, and it turns out the Machine Learning space is pretty large and not super-obvious to pick up. Luckily, I came across a post from Jud Valeski where he was building a similar tool to determine when the wind blew the covers off his patio furniture. He used Google Vision for his solution, so I took a look.

As it turns out, using Google Vision to build a simple model is drop-dead simple. I signed up for a Google Cloud account, created a project, and then created a new Google Vision model. To create the model, I simply had to zip up the two directories and upload them. In about 10 minutes, I had a functional model!

Google also provides an HTTP endpoint you can use to evaluate images against your model. You simply POST a JSON body including your image, base64 encoded, and it returns the matching label along with its confidence level.
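As a sketch, the request body can be built like this in Go. The JSON field names below are my recollection of the AutoML Vision v1beta1 predict shape, so treat them as an assumption and verify them against Google's current documentation:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// predictRequest mirrors the body shape for the AutoML Vision predict
// endpoint: {"payload":{"image":{"imageBytes":"<base64>"}}}.
// NOTE: field names are from memory of the v1beta1 API; verify against
// Google's current documentation before relying on them.
type predictRequest struct {
	Payload struct {
		Image struct {
			ImageBytes string `json:"imageBytes"`
		} `json:"image"`
	} `json:"payload"`
}

// buildBody base64 encodes the JPEG bytes and wraps them in the JSON
// envelope the endpoint expects.
func buildBody(jpegBytes []byte) ([]byte, error) {
	var req predictRequest
	req.Payload.Image.ImageBytes = base64.StdEncoding.EncodeToString(jpegBytes)
	return json.Marshal(req)
}

func main() {
	body, err := buildBody([]byte("fake jpeg bytes"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```

The response is a list of labels with scores; the packagedetector acts when the 'package' label comes back above a confidence threshold.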

Package Detector

With the trained model in place and a public endpoint I can hit, all that was left was to build the final tool.

The final source code is available on GitHub. The Docker image is also available on Docker Hub.

I lifted much of the logic from the imagefetcher to grab and crop the JPEG image, then wrote new code to base64 encode the image and send it to Google Vision. If a package is detected in the response, an email is sent to notify me.

The current version supports email as the notification tool.  I leveraged Amazon's Simple Email Service (SES), but you can use any SMTP server you have appropriate access to.

This tool supports two triggers. The simple approach is to specify an interval, ex: -interval 5, and it will check every 5 minutes. However, I realized that the UniFi NVR is already doing motion detection, so if I could trigger off of that, the tool would only evaluate an image when there was a reason to do so.

I came across a cool project by mzak on the UniFi Community Forums (here is the GitHub repo). Mzak realized that the NVR writes a line to a log file (motion.log) every time motion is detected or ends. I leveraged his work to build a Go library that monitors that log file.

To use this, you must map the location of the motion.log file into the Docker container. I do so with -v /volume1/docker/unifi-video/logs:/nvr and then point the packagedetector at this location with the -motionLog /nvr/motion.log parameter.
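Stripped to its essence, the log-watching trigger is just tailing motion.log and matching lines. The sketch below uses a canned sample with a hypothetical line format — the real format depends on your NVR version, and mzak's project and my packagedetector repo handle it properly:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// motionStart reports whether a motion.log line marks the beginning of
// a motion event for the given camera. The matching here is
// illustrative; adjust it to the lines your NVR actually writes.
func motionStart(line, cameraID string) bool {
	return strings.Contains(line, cameraID) && strings.Contains(line, "start")
}

func main() {
	// In the real tool this reader is a tail of /nvr/motion.log; a
	// canned sample with a hypothetical format stands in here.
	sample := `2020-03-01 14:02:11 AABBCCDD1122 motion start
2020-03-01 14:02:45 AABBCCDD1122 motion stop`

	scanner := bufio.NewScanner(strings.NewReader(sample))
	for scanner.Scan() {
		if motionStart(scanner.Text(), "AABBCCDD1122") {
			fmt.Println("motion detected, evaluating snapshot")
		}
	}
}
```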

You can run the Docker container using the following command:
sudo docker run \
    --restart always \
    -d \
    -e TZ='America/Denver' \
    -v /volume1/packagedetector:/pd \
    -v /volume1/unifi-video/logs:/nvr \
    --name packagedetector \
    ericdaugherty/packagedetector:latest \
    -imageURL http://192.168.1.5/snap.jpeg \
    -rect 0,400,900,1080 \
    -motionLog /nvr/motion.log \
    -cameraID AABBCCDD1122 \
    -gAuthJSON /pd/google-json-auth-file.json \
    -gVisionURL https://automl.googleapis.com/v1beta1/projects/... \
    -sleepHour 22 \
    -wakeHour 7 \
    -emailFrom test@example.com \
    -emailTo test@example.com \
    -emailServer email-smtp.us-east-1.amazonaws.com \
    -emailUser  \
    -emailPass "" \
    -emailOnStart true

I now receive an email every time a new package is delivered!

Looking ahead, I'm interested in building in support for SMS or even push notifications, although I would also need to build an iOS app for that. I also plan on continuing to refine the model with additional images until I'm confident it will be correct nearly all the time.

Simple HipChat AddOn in Go

We use HipChat at work to stay connected. It works well and has some fun plugins, including Karma. I'll bet a few of you can guess what happens when you create a karma system in a company full of smart technical folks.

THEY GAME THE SYSTEM.

One of the more interesting hacks they discovered is to use the 'find and replace' functionality built into HipChat to secretly give (or more likely take) karma to/from another person. The proper usage of the Find/Replace functionality is something like:
> I can typo good.
> s/typo/type

The first line would be a typo that you made, and the second line allows you to 'fix' the typo. The key part here is that the second line modifies the first line and is not visible to any other users.

Who could let this fun discovery go unused? Not our team! Our chats were quickly filled with Karma Ninjas giving and taking karma from behind a thin veil of secrecy.

Luckily, the Karma bot is open source. So I forked the source, made a fix, and submitted a pull request, right?

WRONG. Karma is written in JavaScript and writing JavaScript doesn't make me happy. So I decided to create a Karma Ninja bot instead to 'out' the folks attempting to be Karma Ninjas. Since I've been exploring Go on AWS Lambda recently, I figured this would be a great excuse to write more Go!

Starting Points

There are a few options out there already that really helped accelerate the effort.

First, the eawsy folks created a great shim for Lambda that allows you to deploy Go easily and it runs FAST. Their solution is much faster from a cold start than the other NodeJS shims out there.

Second, Nicola Paolucci has a great post on the Atlassian Blog about building a HipChat addon in Go.

Finally, there is a HipChat Addon project by David Jonas on GitHub.

The eawsy Lambda shim is being actively developed and the team was very responsive to a few of the issues/questions I had, including one that is apparently a bug in Go 1.8.

The Atlassian blog post does a great job of walking through how to build an add-on. This was the template I ended up using for my addon. The only note I have here is that Nicola uses the roomId to store the authentication token, but that only works for single room installations. I used the OAuthId field instead which seems to work for both single room and global installations.

David's project did not compile out of the box because of changes in the upstream JWT token project. I thought about trying to fix it, but ultimately I wanted to understand how it all worked, so I wrote mine without the template. I may go back and fix that project as an alternate implementation.

State

When I was envisioning the plug-in, I assumed I could build it statelessly: simply register a webhook with a regex and a callback URL that would return a message. But the webhook does not appear to let you return a message; you need to make a separate call into the HipChat API to post one. Because of this, you need to maintain some state between calls.

EDIT: My first guess was correct; you can return a response to the webhook call to post a message. I ended up over-complicating it, but the point was to learn more about AWS, so it was still time well spent. Here is info on how to pass a response to the webhook.

I explored using both DynamoDB and S3 to maintain the needed state, which is really just an OAuth Token. They both worked fine but ultimately I chose to use S3 as it was the simpler and cheaper option for this use case.

Amazon does publish an AWS SDK in Go which made interacting with DynamoDB and S3 pretty easy. I do think the SDK is a bit 'low level' and could probably be made a lot easier to use with some convenience functions, but I'm just happy they support Go so I shouldn't complain.

The eawsy team does the necessary work to make the Lambda environment available from Go, meaning you can use the Go SDK without any explicit authentication credentials or setup; by default, you inherit the IAM Role of the Lambda function. In this case, I simply needed to add permission to access DynamoDB and S3.

Code

The code itself is pretty straightforward. Like most Go projects, I'm impressed by how much you can get done in just a few lines of code across a handful of files. I published the project on GitHub: ericdaugherty/hipchat-karmacop.

AWS

This code does require some AWS setup. You need to define a Lambda function and build and upload the project using the Makefile (see the README for instructions).

You also need to set up an Amazon API Gateway entry that references your AWS Lambda function. This will give you a public URL where your AddOn will reside.

An S3 bucket is required to store the OAuth token used to post responses back to the HipChat API.

Finally, you need to add permission to read and write to S3 to the Lambda IAM Role configured for the Lambda Function.

That's It

This was a fun little project and shows a good use case for using Lambda to implement add-ons to cloud applications like HipChat.

How and Why I Became a 'Manager'

After my previous post about my approach to technical interviewing, I received some requests to write more about my career path. In this post, I will attempt to answer a question I get asked somewhat regularly: why did I stop writing code and become a manager?

How did I choose Software?

Before we can dive into why I became a manager, we need to explore a bit about who I was. Growing up, I loved Legos; I would spend hours building custom Lego creations. As I got older, my family got a Commodore 64, and I spent hours playing cartridge games, watching Zaxxon fail to load from cassette tape, and typing in programs printed in computer magazines.

While I was in high school, I ran a dial-up BBS on a computer made from parts cobbled together from old computers and occasional purchases from Computer Shopper. I took all three classes offered in our high school's computer lab: Typing, BASIC Programming, and Computer Drafting. Typing was the most important of the three. When I had exhausted every class at the high school, I took a Turbo Pascal class at the local community college. If I wasn't already convinced (I was), after that I knew I wanted to write software for a living.

I went to the University of Illinois to study Computer Science. Studying Computer Science at UIUC was the fulfillment of a dream for me, but I quickly made a small change: during my freshman year I switched to Computer Engineering. This was precipitated by my experience in the Introduction to Algorithms class, which involved way too much math and theoretical thinking; all I wanted to learn was how computers worked. The Computer Engineering program offered more low-level classes and fewer theoretical programming classes. My favorite was our microprocessor design class, where we started with basic logic gates and built up to a pipelined processor, including writing a program and running it on the processor. My second favorite was x86 assembly programming. I was happy with my choice to switch to Computer Engineering, and it put me in the position I had dreamed of as a kid: getting paid to write software.

Did I like Software?

After graduating, I started working for a consulting company. They did a great job of training new college graduates with a well-defined, self-directed training course in C++. That was, practically speaking, the first and last time I got paid to write C++. The world was quickly transitioning to the web and Java. I was among the first members of our staff to learn Java for new projects, and, being a primarily Microsoft shop, a little J++ as well.

It was amazing. I was learning something new every day, and getting to build applications that real people were using every day. It was also when I first learned to dislike process and management.

This is probably best exemplified by a quick story. On the first project I had an opportunity to lead, I was asked to document the object model. According to our process, this meant opening Microsoft Word and defining each class and public method, including method signatures and JavaDoc-formatted documentation. I thought it was a complete waste of time. I was frustrated that management didn't understand software development, and I just wanted all the 'useless' people to get out of the way and let me build software.

I had another experience that furthered my disdain for anyone other than developers. We were implementing Enterprise Application Integration (EAI) software from Active Software for one of our clients. This involved moving data between several different enterprise systems. The pattern we used was to define a single message format for each data type that contained all the data that could be consumed by every end system; Active Software called this a 'Canonical Message'. However, the client apparently didn't understand what the word 'canonical' meant, and I spent the better part of an afternoon and the entire evening in a conference room with the entire project team, attempting to come up with a new way to describe a canonical message without using the word canonical. It drove me CRAZY. I wanted everyone to get out of my way and let me go build something.

I swore I would never do anything but write code.

What Changed?

Fast forward a few years, and I had built up a solid skill set, primarily in Enterprise Java, but also a solid understanding of C#/.Net. I learned how to build websites in Oracle Application Server 1.0, proprietary frameworks, Enterprise Java, Spring, and other variations.

I loved learning new frameworks. Each one brought solutions to problems I had struggled with, and as I learned, a new set of problems. 

And ultimately, this was the primary factor in my evolution: after a while, a new framework brought a quick review and a shrug instead of excitement and a deep examination.

My focus over this period was building enterprise applications running inside large companies. These are important systems and 'fun' to work on, but at the end of the day they largely read data from a database, display it in HTML, capture some changes, and write data back to the database. It got old, especially as I became somewhat disillusioned with the evolution of the frameworks.

During this time, I also got to spend a lot more time working directly with clients. While this is often a frustrating experience, it allowed me to learn about a lot of different companies, industries, and business models. I came to respect the challenges that they each faced and the solutions that they had developed.

These factors caused me to pick my head up and look around at the world I lived in, and wonder what else I could be doing.

What Did I Know?

After spending close to a decade working primarily for consulting companies, I had broad but shallow knowledge of a lot of industries. I had worked in fast food, manufacturing, insurance, HR, logistics, banking, financial exchanges, and retail. I knew a little about a lot, but I didn't have deep knowledge of any one industry. I realized that what I really understood was the consulting industry.

This worked out well, since I worked for a consulting company at the time. As I looked around, I started seeing other problems I could solve. Luckily, I was part of a fast-growing company and had the trust of the ownership.

It started small. We had a small project that needed a part-time PM. I stepped in as the technical architect, but I also worked with the client to make sure we were delivering what they needed, keeping them up to date on project status and challenges.

I also learned how the owner interviewed, and I started helping out, as the company was growing quickly and he was doing more interviews than he could handle alone.

I built trust and over time I took on more responsibility.

How Did I Learn?

I often get asked how I learned how to manage a company. The answer is pretty simple: just like I learned how to code. I read a lot and learned through trial-and-error. I also had great mentors.

This is an example conversation I often had with the owner of one of the companies:
Me: We should be doing X.
Him: Why?
Me: Because <a lot of reasons I thought made sense>.
Him: No.
Me: Why? 
Him: Because <a lot of reasons I had not thought about>.

I would take what he said and reformulate my pitch based on this new information. It would often take several iterations before I either got a yes or I became convinced my idea wouldn't work for our company.

The great thing about the owner was that nearly every yes was followed by "go do this, I'm holding you accountable". This was a great way to grow my responsibility and take ownership of my ideas and see them through.

This conversation happened a lot, and over time I really understood what he cared about, and what his overall philosophy was. I agreed with much of his philosophy, but not all of it. We had many spirited discussions over the years, and I came away with a well developed philosophy of my own.

During this time, I expanded my reading habits beyond technology. I started reading business books, articles, and anything and everything that taught me how businesses worked. I learned about public companies by reading their 10-K filings. I took a deeper look at our clients and tried to learn why they made decisions that I had previously thought were... strange.

And That Was That?

My transition to 'manager' wasn't over. My family decided to relocate to Colorado. Starting over in a new city with almost no professional network, I chose to go back to senior technical positions where I was actively writing code. I still have a passion for building software and, luckily enough, skills that were still valuable. But I eventually grew restless and started looking for a way to leverage my broader skills.

Then a very important event in my story happened: I took the family to Disney World. No, I didn't have an epiphany on my tenth ride on "It's A Small World", we only rode it 3 times. Instead, I met the CEO of Xcellent Creations on the flight home and found an opportunity to leverage the expertise I'd developed over the years to help grow a small mobile shop into an industry leader that we eventually sold to WPP.

Ultimately, I enjoy solving problems and building things. Whether those are Legos, software, or organizations doesn't really matter. What matters is that the problems are challenging and that I'm working with great people.

My hope is that the organization that we've built today does for our employees what my past companies did for me. I want to help our team expand their skills, take on new challenges, grow as individuals, and view the world differently. And along the way, ship great apps.

Technical Interviewing and Hiring


There has been some discussion in the technical community recently about the use of algorithms and coding tests during the interview process.

I have my own thoughts I would like to share, but this isn't really a topic for 140 characters.

Why should you care what I think?

I've spent a lot of time interviewing and hiring developers across several organizations. For a while I was doing interviews every day. In fact, I remember telling my recruiter that he couldn't schedule more than 2 interviews in a day because by the third interview I found I was no longer effective. I estimate that I've interviewed over 300 candidates, and been the primary decision maker on over half of those.  I have been a key member of the interviewing and/or hiring team in two custom software services companies that have grown from 30 or fewer people to over 115 during my tenure.  I've done this a lot, and I was there to see my successes and my failures.

I spent over a decade primarily as a software developer. I've written production code in LabVIEW, C++, J++, Java, VB, Perl, C#/.Net, JavaScript, Flex, Scala, Objective-C, and probably more. I've probably written more Java than the rest combined, but I was always learning something new, and during the time that I was doing these interviews, I knew what I was talking about. I would like to think I am still very technically competent, although many of my current employees enjoy proving me wrong regularly.

I learned this approach from a great mentor early on, and adopted and extended it over the years. We all stand on the shoulders of giants.

Is this the one true way?

Before I describe my approach, I want to focus on a key point lost among the 140 character conversations.  Your interview process should identify candidates that will work in YOUR organization. Your company probably isn't just like David Heinemeier Hansson's, or anyone else's you read on Twitter.  It probably isn't just like the companies I've worked for either, so if you blindly copy what was successful for me, you will probably still fail.

I've spent most of my career working for services companies building great software, from enterprise back-end systems in Java and .Net to amazing mobile applications in Objective-C, Swift, Java, and Kotlin. Because these companies are services companies, I expect my employees to be able to interact directly with our clients. The people I hire must be capable of more than just writing great code: they must solve our clients' problems by understanding what they really need, communicating effectively with technical and non-technical people alike, and delivering great technical solutions.

I also expect my employees to be able to learn a new technology quickly. The landscape is always changing, and when you are building software for other companies, you do not have the luxury to define what technologies are and are not acceptable.

Therefore, I focus on hiring great problem solvers. If you are not a good problem solver, you will not work on my teams, no matter how amazing you are at language/framework/platform XYZ.

So what is the interview like?

Most of these interviews last about an hour. The four sections are...

Cool Stuff - ~15 Minutes

I start all the interviews by asking the candidate what projects they think are cool or projects they are proud of. These can be work projects, open-source projects, school projects, or just tinkering around and learning projects. This engages the candidate and allows them to talk about topics they are comfortable with.

This tells me several things. First, it tells me whether they communicate effectively. I actually prefer when they have a project in an area where I am not knowledgeable. Selfishly, it lets me learn about something new, but the primary reason is that it shows me whether they can explain a technical concept that they understand well. If it is in an area that I know well, it lets me probe to see how well they really understand the topic.

It also helps me level-set where the candidate is. Ignoring their resume, the projects and accomplishments they are proud of tell a very clear story about their skill level and world view.

Coding Problems - ~20 Minutes

What!?! Yes, I have two basic coding/problem solving problems that I make each candidate work through, on paper, in front of me.

Problem 1:
This is essentially a recursive coding problem. You can solve it in any OO programming language with a basic knowledge of the core syntax. It does not require any API/Framework specific knowledge. The right answer is about 4 lines of code.
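To give a sense of scale (this is not the actual interview question, just a hypothetical problem of similar shape and size), a recursive answer of roughly "4 lines" might look like computing the depth of a binary tree:

```go
package main

import "fmt"

// Node is a binary tree node. This is an illustrative stand-in,
// not the actual interview problem.
type Node struct {
	Left, Right *Node
}

// depth returns the height of the tree rooted at n.
// The body is about four lines -- the scale of answer the problem expects.
func depth(n *Node) int {
	if n == nil {
		return 0
	}
	return 1 + max(depth(n.Left), depth(n.Right))
}

func max(a, b int) int {
	if a > b {
		return a
	}
	return b
}

func main() {
	root := &Node{Left: &Node{Left: &Node{}}, Right: &Node{}}
	fmt.Println(depth(root))
}
```

The point is that a candidate who understands recursion can write something this small on paper without needing any framework knowledge.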

I have given this problem to everyone from college students looking for their first job to senior architects. They should all be able to solve it. I do adjust my expectations based on the level indicated by their resume and 'Cool Stuff' answers.

Achieving a passing mark on this doesn't necessarily require a perfect answer right away. I'm happy to ask questions and give small hints to candidates if they get stuck. This is actually one of the most valuable parts of the process because it allows me to see how well they listen, and how they think through a problem. But if you can't write (simple) code down on paper with a reasonable level of accuracy, and you can't write a basic recursive method with a few hints, you are probably not going to cut it.

Problem 2:
This is where I date myself somewhat.  Problem #2 is ideally writing a SQL statement, but is really a boolean logic problem. It does not require anything other than very basic SQL syntax.  No Joins, or anything even slightly fancy. The right answer would fit in a single tweet.

Again, I have given this problem to everyone from college students looking for their first job to senior architects. They should all be able to solve it. I have run into college students who didn't know SQL, and in those cases I was able to adapt it to be a boolean logic problem that they could solve in general set notation.

Again, success here is showing me how you think through and solve problems. You don't need to write the correct answer down flawlessly the first time, but you have to show me that you can think through a problem, listen to feedback/hints if necessary and incorporate them into your thinking.
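The actual problem isn't published here, but a boolean-logic condition of roughly "single tweet" complexity, expressed as a Go predicate instead of a WHERE clause, might look like this (the `Order` type and fields are hypothetical illustrations):

```go
package main

import "fmt"

// Order is a hypothetical row type used only to illustrate the
// scale of boolean logic involved; it is not from the real problem.
type Order struct {
	Status string
	Total  float64
}

// matches is the predicate equivalent of a short WHERE clause:
//   WHERE (status = 'open' OR status = 'pending') AND total > 100
func matches(o Order) bool {
	return (o.Status == "open" || o.Status == "pending") && o.Total > 100
}

func main() {
	orders := []Order{
		{Status: "open", Total: 250},
		{Status: "closed", Total: 500},
		{Status: "pending", Total: 50},
	}
	for _, o := range orders {
		if matches(o) {
			fmt.Println(o.Status, o.Total)
		}
	}
}
```

Getting the parentheses and the AND/OR precedence right is the whole exercise; no joins or framework knowledge required.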

Alternate Problem:
Depending on time and level, another question I've asked A LOT, mostly to Java and .Net candidates, is how garbage collection works in the JVM (or on .Net).

Occasionally I'll get a candidate that actually knows the answer, and while impressive, this is actually somewhat disappointing.  The point of this question is to have them show me their problem solving skills applied to a real technology that they depend on every day, but probably don't think about.

This question shows me how well a candidate can listen, how well they think on their feet, and how well they actually understand computer science concepts.

Again, the 'right answer' isn't necessarily the point. Having a great conversation with me where in the end they really grasp the concept and feel like they are a better developer makes both me and the candidate feel good about the interview.
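The conversation usually converges on tracing reachability from a set of roots. As a toy sketch (a deliberate simplification: real JVM and .Net collectors are generational and far more sophisticated), the mark phase of mark-and-sweep can be illustrated like this:

```go
package main

import "fmt"

// obj is a toy heap object holding references to other objects.
type obj struct {
	name   string
	refs   []*obj
	marked bool
}

// mark recursively flags every object reachable from o.
// Anything left unmarked after marking from the roots is garbage
// and would be reclaimed in the sweep phase.
func mark(o *obj) {
	if o == nil || o.marked {
		return
	}
	o.marked = true
	for _, r := range o.refs {
		mark(r)
	}
}

func main() {
	a := &obj{name: "a"}
	b := &obj{name: "b"}
	c := &obj{name: "c"} // never referenced: unreachable garbage
	a.refs = []*obj{b}
	_ = c

	mark(a) // a is the only root in this toy heap

	for _, o := range []*obj{a, b, c} {
		fmt.Printf("%s reachable=%v\n", o.name, o.marked)
	}
}
```

A candidate who can reason their way to "start from what the program can still reach, and everything else is garbage" has demonstrated exactly the problem solving the question is after.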

Side Note: I was working with a startup (not my full time job) and helping them make their first full time developer hire. I used these problems and rejected a candidate. The CTO, who had not actively developed in several years, challenged me about the questions, so I gave him the interview. He passed both in less than 10 minutes, and realized that he didn't want anyone on his team that couldn't solve these problems. The next person we interviewed passed, was hired, and did a great job for them for several years.

Resume Due Diligence - ~15 Minutes

Once the problems are solved, I will use the next section to tackle any areas of interest that I think are appropriate. I will often read through the resume and ask questions about specific projects/accomplishments to determine how accurate the descriptions are and what role the candidate actually had on those projects.

If they profess deep knowledge in a certain language/framework/platform, I may go deep in this area to see how much of an expert they really are. I love learning something new from candidates, and they usually enjoy teaching the interviewer something new as well!

A big take-away from this section is: how honest are they about their resume? Were they exaggerating, or underselling themselves?

I can also use this time to follow up on areas the candidate showed a particular interest in to see how deep their knowledge is.

Candidate Questions - ~10 Minutes

The final section is an opportunity for the candidate to ask me questions, about the role, culture, expectations, etc. While this is a key part in making sure the candidate is sold on working for me, their questions are also a window into their thought process and outlook, and can be informative in making a hire/no-hire decision.

Ultimately, though, this section is about closing the candidate. Regardless of whether you will hire the candidate, you want them to be sold on your company, and you want them to feel good about the experience. Even if you pass on them, you want them to leave the interview with a positive opinion of you and your company. It is a small world.

Why do I do it this way?

I've covered a lot of the why along the way, but it is important to reiterate and expand on my motivations and goals.

First, I want to reiterate that I target hiring great technical consultants. We build software solutions for other companies, and the skills we look for reflect that. This is probably not a great approach for other types of companies.

I firmly believe that in order to be a great software consultant, you have to be a great problem solver. To me, this includes listening to and understanding the problem (solving the right problem); having deep enough technical expertise to identify a good solution (there is rarely one RIGHT solution); the ability to communicate what the solution is, what the trade-offs are (there are always trade-offs), and how it addresses the actual problem; and the ability to deliver the solution.

In the companies that I worked for, we certainly didn't expect an Associate Developer to interact directly with the client and execute all of these steps, but this is still the ultimate process and everyone should be able to participate at the level of their experience.

I also don't believe in team interviewing for smaller services companies. A consultant working for one of these companies will need to be successful across different clients and different teams. I believe in the accountability of a single decision maker, and if the process is consistent, then the team members know the people getting hired went through the same process they did. Ultimately, if the team trusts the hiring manager, they will trust the candidate that they hire. However, this doesn't scale and as a company grows, the process needs to evolve.

Again, this probably doesn't work for other types of companies, and team interviews make sense in a lot of cases.

Finally, I do it this way because it works. Yes, I made a few bad hires over the years. In each case, I went back to look at the interview notes to see what I missed and how I could do better next time. In some cases I was being too optimistic because I was desperate to fill a need. In some cases the candidate simply snowed me. And in some cases, there was an issue that I couldn't reasonably expect to uncover in an interview.

Final Notes

The lifeblood of the interview/hiring process is the great work that your recruiters do. I firmly believe in having in-house recruiters, and I've been blessed to work with some great ones. You know who you are, thank you.

I have no idea how many false negatives I've had. I would argue it is unknowable. But the growth of both companies would suggest that I didn't turn away too many qualified candidates.

These opinions are my own, and do not reflect those of my employer. I am no longer in a role where I make any hiring decisions on technical candidates, so don't expect that reading this post will give you an inside edge. ;)

This post is too long. Unfortunately, I'm not a good enough writer to write a shorter post.  I'm sorry.

GoLang Alexa SDK

I continue to be interested in Alexa, Amazon AWS Lambda, and Go (golang), and I've found a new way to deploy Alexa apps in Go on Lambda.

While Go is still not an officially supported language on Amazon Lambda, there are several ways to make it work:

In my previous project where I built a screen scraper in Go and deployed it to Lambda, I utilized Lambda_proc.  This does work pretty well and is a solid solution.

I found GopherJS to be an interesting project, but a pretty indirect way to get Go onto Lambda.  Lambda can run Go Linux binaries, so converting your Go to JavaScript seems like the wrong approach.

The eawsy team released their new tool earlier this year and it seems to be the cleanest approach.  The overhead between the Python shim and your Go code is clean and fast.  They utilize a Docker container to build the necessary bridges from Python to Go, resulting in very fast execution.  You can write log messages to the Lambda console using log.Printf. And it is FAST. The HelloWorld skill runs in sub-millisecond times!

To build Alexa skills in Go on Lambda, we still need an SDK.  Amazon publishes SDKs for Java and JavaScript but not for Go.  So I ported the Java SDK to Go as ericdaugherty/alexa-skills-kit-golang.  While this does not (yet) have the full depth of the Amazon SDK, it does enable you to build simple Alexa skills using Go and deploy them on Lambda.
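At its core, an Alexa skill handler decodes Amazon's JSON request envelope and dispatches on the request type. The struct definitions below are a simplified illustration of that envelope, not the actual types from alexa-skills-kit-golang (see the repo for the real API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// These structs model a pared-down slice of the Alexa request envelope.
// They are illustrative only, not the alexa-skills-kit-golang types.
type Intent struct {
	Name string `json:"name"`
}

type Request struct {
	Type   string `json:"type"`
	Intent Intent `json:"intent"`
}

type Envelope struct {
	Version string  `json:"version"`
	Request Request `json:"request"`
}

// handle decodes a raw Alexa request and returns a speech string
// based on the request type, mirroring the dispatch an SDK performs.
func handle(raw []byte) (string, error) {
	var env Envelope
	if err := json.Unmarshal(raw, &env); err != nil {
		return "", err
	}
	switch env.Request.Type {
	case "LaunchRequest":
		return "Welcome!", nil
	case "IntentRequest":
		return "Handling intent: " + env.Request.Intent.Name, nil
	default:
		return "Goodbye.", nil
	}
}

func main() {
	raw := []byte(`{"version":"1.0","request":{"type":"IntentRequest","intent":{"name":"HelloIntent"}}}`)
	resp, err := handle(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp)
}
```

An SDK's job is to hide this decoding and dispatch behind registered handlers, which is what the port provides.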

Take a look at the alexa-skills-kit-golang project for usage and samples, and give the eawsy Lambda Go Shim a try for your next Go project on Amazon AWS Lambda.

 - Updated 3/4/2017 - updated run time based on newer stable eawsy shim speeds.