Portable Power Pricing (or how I built something with AI)

In my recent blog post Surviving the Great PSPS of 2025, I outlined my interest in portable solar generators. I had also been looking for a side project to test out the latest AI capabilities, and I found the answer in PortablePowerPricing.com.

I was recently looking to buy an additional battery for my EcoFlow Delta 2 Max, but since all of these products are nearly always sold at a significant discount to their MSRP, I wasn't really sure how good a deal it was. There are existing price trackers, but I realized there was another interesting angle here: I wasn't just looking for the best price, but for the best price per watt-hour (Wh). I saw an opportunity to build a price comparison site specifically for this product category.
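The metric itself is trivial, but it is the core of the site. A minimal sketch, where the $849 / 2048 Wh figures are just example numbers, not actual listings:

```go
package main

import "fmt"

// pricePerWh returns the effective cost in dollars per watt-hour, the
// metric used to compare batteries of different capacities.
func pricePerWh(priceDollars, capacityWh float64) float64 {
	return priceDollars / capacityWh
}

func main() {
	// Example: a 2048 Wh battery on sale for $849.
	fmt.Printf("$%.3f/Wh\n", pricePerWh(849, 2048)) // prints $0.415/Wh
}
```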

I primarily develop in VS Code using devcontainers, so that's what I did for this project. I ended up with 2 different projects, one to crawl the sites to get prices, and one for the public website. Each was a separate VS Code project (devcontainer) and git repo. I used Claude Code (Opus 4.5 and 4.6) via the VS Code plugin. I used the Claude Code Max (5x, $100/month) subscription, and only ran out of quota once during the project.

Throughout this project, I didn't write any code, or build any config files. I made some very minor edits by hand on occasion, but Claude Code wrote almost every line of every file in both projects.

I won't go through every step in the process, but I did want to highlight a few experiences and learnings...

Price Discovery

Clearly a key aspect of this is capturing the prices for each product and documenting their changes over time. I started by asking Claude about potential tools for this, and it ended up recommending PriceGhost. This is a pretty new project that was also built almost entirely with AI. I liked the pitch that it could use AI to help figure out the right price instead of leveraging the 'old school' approaches of parsing ever-changing HTML pages.

It worked fine for a little while, but as I added more sites and more pages I ran into issues. One was that this is a new project and the code isn't really battle tested. I had to fork the project and make some changes (I submitted a PR) to get it to work for pages where the title was too long, but for others it just wouldn't work at all. I also found that the AI aspect of it wasn't really effective. I was running a local model via Ollama, and it would not reliably find the right price on the page.

I went back to Claude and discussed the challenges, and we ended up settling on ChangeDetection.io. This is a more established and traditional tool, but Claude told me that most (all?) of the sites I would be scraping used Shopify, and we could often just pull the JSON file for each product and easily get the price without parsing HTML.
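For illustration, here is a minimal sketch of the Shopify trick, assuming the store exposes the usual /products/&lt;handle&gt;.json endpoint. The sample JSON below is made up; a real product document has many more fields:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// shopifyProduct models the small slice of a Shopify product JSON
// document we care about: the price lives on each variant as a string.
type shopifyProduct struct {
	Product struct {
		Title    string `json:"title"`
		Variants []struct {
			Title string `json:"title"`
			Price string `json:"price"`
		} `json:"variants"`
	} `json:"product"`
}

// firstVariantPrice extracts the price of the first variant from a
// Shopify product JSON document.
func firstVariantPrice(body []byte) (string, error) {
	var p shopifyProduct
	if err := json.Unmarshal(body, &p); err != nil {
		return "", err
	}
	if len(p.Product.Variants) == 0 {
		return "", fmt.Errorf("no variants found")
	}
	return p.Product.Variants[0].Price, nil
}

func main() {
	// A made-up, heavily trimmed example document.
	sample := []byte(`{"product":{"title":"Delta 2 Max","variants":[{"title":"Default","price":"849.00"}]}}`)
	price, _ := firstVariantPrice(sample)
	fmt.Println(price) // prints 849.00
}
```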

But the real magic here was having Claude write a script that would automatically set up the watches for me. It queried my production database, found the URL for each product, used the ChangeDetection API to create a watch, and then updated the production database with the watch's ID so I could sync the prices.

This is the kind of script I could always have written myself, but Claude Code did it in ~30 seconds and could make any follow-up changes just as quickly.

But the real long-term question is how difficult the price scraping will be to maintain. As a non-revenue-generating side project, I will only have so much patience, and it isn't quite so automated that I can just ignore it. That was the promise of PriceGhost, but it isn't really there yet.

Product Discovery

The most surprising discovery in this project was using Claude to build up the list of brands, products, and product relationships (which expansion batteries worked with which base stations). This is where the whole project would have fallen apart without AI. Realistically, I would not have spent the time to collect and format this information. It would have taken quite a few hours, and it is boring, mostly mindless work that is not justified for a side project.

However, it was not perfect. I did discover a few minor mistakes it made, but it was able to self-correct them with additional research instead of me having to go fix them manually.

But this is where I really burned the tokens. Having Claude research websites and generate the product listings was very token-intensive, and initially I would only add one brand per coding session. It often took >25% of my quota to research a single brand. In hindsight, I wonder if I should have used Sonnet for these tasks.

The one time I did run out of tokens was when I had it research all of the brands and fix any mistakes it made, which it did do, but ran through the full quota. 

Coding

Writing the code for the project was really the least surprising and interesting part. It is a pretty vanilla Next.js app that uses Server Side Rendering (SSR) to query the data and build out the static content. I had no real issues getting it to build the pages I wanted, and each feature I added pretty much worked without issue.

While I expected this to be the largest section of learnings and revelations, it was really just straightforward. I used Plan Mode to outline and iterate on each feature I wanted to add, and once the plan was correct, Claude Code just built it out without issue.

I could have used projects like Spec Kit to manage the requirements and features, and I'd like to explore something like that for a future project, but for the scope of this project it was unnecessary.

Deployment

As a side project, I didn't want to spend a lot of money hosting this, but I did want to put it out in the world, so I asked Claude about the best options for 'free' plans and settled on Supabase and Vercel. Claude walked me through the whole process of setting up the accounts and deploying the projects. Going from running locally to fully deployed in the cloud took less than 30 minutes.

I did pony up some cash for a real domain, in case it actually becomes something.

Conclusion

While I embarked on this project to get more hands-on experience using Claude Code to write custom software, the biggest surprise was how useful it was at automating data collection and data entry tasks. Those are the kinds of things that cause a project like this to be abandoned because the 'fun part' is over and there is just 'work' left. Instead, the project remained interesting and enjoyable all the way through.

That said, I was still pretty impressed with its ability to write code. For greenfield side projects like this, I really don't see myself hand-writing code anymore.

I also discovered another similar project called WhichWatts, which seems more established and more feature-complete, and which has gone through the effort to commercialize via affiliate links. I do think my addition of comparing bundles (base + additional battery combinations) makes it easier to compare a target capacity by $/Wh, though.

That does bring up an interesting final point. I think the cost to build a site like this is going to drop dramatically, but the effort to maintain and commercialize it will still exist. I expect we'll see a lot of cool software get built but abandoned because it is too niche to justify the ongoing effort. That's always been true to a degree; I think it will just accelerate.

Note: This post was written by a human, with editing feedback from Claude.

JVC Projector HDMI CEC Control

I recently upgraded my projector to a 4K JVC model along with an upgraded AV Receiver (Denon). I previously used an old Harmony remote to control the whole system, but with the new components and the realization that I really only use the Apple TV as an input device, I wanted to simplify the remote setup.

The Apple TV and the Denon worked well together using HDMI CEC. This allows the Apple TV to turn the Denon AV Receiver on and off as needed, as well as to send volume up/down and mute signals over the HDMI cable.

However, the JVC Projectors do not support HDMI CEC, so I still needed the projector remote to turn it on and off. I could have re-programmed the Harmony remote, but they don't make them anymore, and the Apple TV remote is smaller and has all the functionality I need.

The JVC Projectors do support remote control over TCP/IP. I did some research and wrote a quick Go library to control the JVC Projector remotely. There are good libraries in other languages and some mobile apps (see the README in that repo) but I didn't find any in Go.
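For the curious, the core of the protocol looks roughly like this. This is a sketch, not the library itself: the port (20554), the PJ_OK/PJREQ/PJACK handshake, and the PW1 power command are from JVC's published external-control documentation as I remember it, so verify them against your model:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// buildCommand frames a JVC operation command: the 0x21 0x89 0x01
// prefix, the ASCII command body, and a trailing newline.
func buildCommand(body string) []byte {
	return append(append([]byte{0x21, 0x89, 0x01}, []byte(body)...), 0x0A)
}

// powerOn performs the PJ_OK/PJREQ/PJACK handshake and sends the
// power-on command ("PW1").
func powerOn(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return err
	}
	defer conn.Close()

	buf := make([]byte, 16)
	if _, err := conn.Read(buf); err != nil { // expect "PJ_OK"
		return err
	}
	if _, err := conn.Write([]byte("PJREQ")); err != nil {
		return err
	}
	if _, err := conn.Read(buf); err != nil { // expect "PJACK"
		return err
	}
	_, err = conn.Write(buildCommand("PW1"))
	return err
}

func main() {
	// Show the framed power-on command bytes.
	fmt.Printf("% X\n", buildCommand("PW1")) // prints 21 89 01 50 57 31 0A
}
```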

The next step was to listen to the HDMI CEC bus to trigger the correct On/Off commands at the right time. That leads to the inspiration for this project. I came across John Lian's post on HDMI-CEC. While he was addressing a different issue, it made me realize that I could build a 'bridge' between the HDMI-CEC bus and the JVC Projector.

I used the broad approach John outlined by setting up a Raspberry Pi (Model 2 B+ in my case) and connecting it to my Denon AV Receiver. The cec-client tool worked to see all the traffic on the bus, and it was fairly straightforward to identify the two different messages sent by the Apple TV I needed to listen to.

From there I wrote up a simple Go app that would run the cec-client tool and parse its output. When it saw one of the two messages, it used the Go library I wrote to turn the projector on or off.
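A minimal sketch of that app, with the important caveat that the matched message strings below are placeholders for illustration, not the actual messages my Apple TV sends. Run cec-client by hand first and substitute what you see on your own bus:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// classify maps a line of cec-client output to an action. The matched
// substrings here are placeholder examples; capture your own bus
// traffic and substitute the exact messages your source device sends.
func classify(line string) string {
	switch {
	case strings.Contains(line, "04:04"): // e.g. an "image view on" frame
		return "on"
	case strings.Contains(line, "0f:36"): // e.g. a broadcast "standby" frame
		return "off"
	}
	return ""
}

// bridge runs cec-client and reacts to the bus traffic it prints.
func bridge(onPower func(on bool)) error {
	cmd := exec.Command("cec-client")
	out, err := cmd.StdoutPipe()
	if err != nil {
		return err
	}
	if err := cmd.Start(); err != nil {
		return err
	}
	scanner := bufio.NewScanner(out)
	for scanner.Scan() {
		switch classify(scanner.Text()) {
		case "on":
			onPower(true) // call the JVC library's power-on here
		case "off":
			onPower(false)
		}
	}
	return cmd.Wait()
}

func main() {
	// Demonstrate the classifier without needing a CEC adapter attached.
	fmt.Println(classify("TRAFFIC: [250] >> 0f:36")) // prints off
}
```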

I then set up the app to run as a service on the Raspberry Pi. So now to turn on (or off) the entire system I just need to turn the Apple TV on (or off).

Surviving the Great PSPS of 2025

Colorado is having a dry winter, and over the past few days we've had very high winds along the Front Range area, where I live. These conditions create a high risk for wildfires, as evidenced by the Marshall Fire in 2021. Xcel instituted Public Safety Power Shutoff (PSPS) events twice in three days to reduce this risk.

The first event started about 10a on Wednesday and lasted until Noon on Thursday (26 hours). The second event started at 6a on Friday and lasted into Saturday (over 24 hours).

There are many ways to deal with an outage like this. Some of these include (from less to more advanced):

- Buy ice and candles. Low tech, but it works

- Portable Batteries to power essentials

- Portable Generator to power essentials

- Whole Home Generator

- Bi-Directional EV

- Solar + Battery

While the ideal setup would be whole-house solar with battery backup, providing an uninterrupted supply of power, it is also an expensive solution that only begins to make economic sense if you can pair it with a variable-rate power plan and shift your daily usage. See Base Power in Texas.

A gas generator is a well tested and durable solution, either portable or a larger whole-house version. The downsides are that they require regular maintenance if not used regularly, they are noisy, and they burn gas.

While I do have (and love) my EV, it is not capable of bi-directional use. That is, I can't draw power from it other than by driving or through the cigarette-lighter adapter (generally limited to ~100W).

What I do have is a portable battery and solar setup I primarily use to augment the onboard solar and batteries of my camper. But the setup also works well in this situation.

My Setup (Prices current as of 12/2025, including sales):

- EcoFlow Delta 2 Max ($850)

- EcoFlow 400W Solar Array ($599)

- Renogy 400W Solar Array ($499)

- EcoFlow 800W Alternator Charger ($289)

The Delta 2 Max is a 2kWh portable battery with inverter. This means it can provide 2 kW of power for an hour (or 200 W for 10 hours). It can also charge from the solar panel arrays (up to 2) or the Alternator Charger (among other sources).

During the outage, I used extension cords to plug in our refrigerator, chest freezer, and networking gear (Internet, WiFi, etc.), plus some phone/laptop chargers and lights.

The networking components draw a constant ~100W. The refrigerator and freezer each draw about 100-200W while their compressors are running, and the rest are generally less than 100W.

This means that the average draw was somewhere around 200-300W throughout the day. 

While the sun was out (which is pretty brief as we approach the winter solstice), I used the solar arrays to keep the battery charged. But since the days were short, and there was quite a bit of cloud cover, the 800W of solar panels averaged only a couple hundred watts, generally about break-even. It is important to remember that even in peak conditions, the panels generally don't deliver their advertised power. I think about 350W is the best I've seen off either panel in direct Colorado sunshine.

So for the rest of the day I utilized the 800W Alternator charger installed in our pickup truck (normally used to tow our camper). The Alternator Charger is an adapter that is hard wired to the battery terminals and outputs up to 800W to the EcoFlow battery. This means it is roughly capable of charging the battery (assuming no power draw) in about 2 1/2 hours. 
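The arithmetic that made this workable, using the rough numbers above (2048 Wh capacity, ~250 W average draw, 800 W charger; real-world charging losses will stretch both figures):

```go
package main

import "fmt"

// hours returns how long a load (or charger) of the given wattage
// takes to drain (or fill) a battery of the given capacity.
func hours(capacityWh, watts float64) float64 {
	return capacityWh / watts
}

func main() {
	const capacityWh = 2048 // Delta 2 Max

	// Runtime at the ~250 W average household draw described above.
	fmt.Printf("runtime: %.1f hours\n", hours(capacityWh, 250)) // prints runtime: 8.2 hours

	// Full recharge from the 800 W alternator charger, ignoring
	// concurrent draw and charging losses.
	fmt.Printf("recharge: %.1f hours\n", hours(capacityWh, 800)) // prints recharge: 2.6 hours
}
```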


This setup kept all the essentials running through the outage. The battery did drain overnight, but for the few hours it was depleted, the refrigerator and freezer stayed cold enough.

I only had to run the truck to charge the battery in the evening after the sun set, and then again right before bed to top it off for overnight.

Overall this was pretty successful. We didn't have the constant noise from a gas generator, although we did have the truck noise in the evenings. It kept the food cold and the family entertained during the outage. With the truck standing in as a gas (diesel) generator to top up the battery, this solution can provide power indefinitely.

Since the primary usage of this setup is for camping, it is a nice bonus as a home backup solution. From a cost perspective, it is pretty expensive compared to a portable gas generator. A high-end portable Honda that can output much more power costs about $1k (less than half of this solution), but it would have to run the entire time you want power.

The whole house solutions are much nicer, but for the limited use cases (this is the first outage of this size I can remember in 15 years here) they are very expensive for the value.

For shorter outages, the EcoFlow Delta 2 Max by itself, or with a single set of panels does compare well to a gas generator for a silent, more environmentally friendly approach. But for longer outages like this one it requires the gas backup (in this case the Alternator Charger).

One limitation we did discover is that the battery backup for our internet provider is only about 12 hours, but luckily for us they added a gas generator during the second outage to keep everything running.

Go Web Server using Lambda Function URLs

It has been possible to use Lambda Functions to host Go (Golang) web servers for years. I wrote about a project doing this back in 2016. More recently, the github.com/apex/gateway project provided a drop-in replacement for net/http's ListenAndServe function that supported AWS REST APIs, and later the AWS HTTP APIs.

Amazon recently released the ability to access a Lambda Function directly via URL, without needing to configure the REST APIs or HTTP APIs. This removes an extra layer of configuration, complexity, and potentially cost for deployments.

I created a fork of the github.com/apex/gateway at github.com/ericdaugherty/gateway that supports this new approach. A trivial example of usage in the README is:

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/ericdaugherty/gateway"
)

func main() {
	http.HandleFunc("/", hello)
	log.Fatal(gateway.ListenAndServe(":3000", nil))
}

func hello(w http.ResponseWriter, r *http.Request) {
	// example retrieving values from the api gateway proxy request context.
	requestContext, ok := gateway.RequestContext(r.Context())
	if !ok || requestContext.Authorizer["sub"] == nil {
		fmt.Fprint(w, "Hello World from Go")
		return
	}

	userID := requestContext.Authorizer["sub"].(string)
	fmt.Fprintf(w, "Hello %s from Go", userID)
}

This is effective and addresses the issue, but can make local development a challenge. I often replace the main implementation above with:

func main() {
	// check and see if we are running within AWS.
	aws := len(os.Getenv("AWS_REGION")) > 0
	http.HandleFunc("/", hello)

	// run using apex gateway on Lambda, or just plain net/http locally
	if aws {
		log.Fatal(gateway.ListenAndServe(":8080", nil))
	} else {
		log.Fatal(http.ListenAndServe(":8080", nil))
	}
}

This uses the net/http implementation of ListenAndServe locally during development, and the gateway library when deployed via Lambda. No configuration is needed, as the AWS_REGION environment variable is automatically set by AWS when running as a Lambda Function.

One limitation of Lambda Function URLs is that you can only access the function via the AWS domain (ex: https://<function id>.lambda-url.<aws region>.on.aws/). One easy solution is to use AWS CloudFront. For a simple use case, create an SSL certificate in AWS for your custom domain, and then create a CloudFront distribution using the SSL certificate and the Lambda Function URL as the default origin. If you use this approach, you should also make sure to set the HTTP Cache-Control header on your cache-able responses to improve performance and reduce your Lambda invocations.

You can use a more complicated CloudFront approach if your project uses a lot of static assets. The static assets can be deployed via S3, and the CloudFront distribution can pull from S3 or Lambda depending on the request path, but for smaller or low-traffic deployments, everything can be served from the Lambda function.

From here you can build your application using any traditional tooling that is supported by the Go net/http library.


Package Delivery Detector

The following is an outline of how and why I built a tool to detect when packages are delivered to my front porch. If you have enough of the right infrastructure in place, this will serve as a guide to get it setup yourself. All the source code is available, so if your setup is different, you can take and modify what I've done to work for you.

This solution is built using Go, AWS, Google Vision, and Docker.

Motivation

In November I attended Amazon's AWS re:Invent conference to catch up on the current state of the cloud. I came away from the conference inspired to explore an area where I didn't have a ton of experience: Machine Learning. I attended several SageMaker sessions and came up with a project idea:

A tool that would use Machine Learning to detect when a new package was delivered to my home.

I already had a lot of key infrastructure in place, including a Ubiquiti UniFi camera that was pointed at my front porch. I also had a Synology Diskstation with Docker that I used to run both my UniFi controller and UniFi NVR.

This project took a long time. I kicked off work in early December and I just recently got to the point where I'm happy with the results.

Image Capture

The key to this project is to build a Machine Learning model that can determine if there is a package at my front door. To build this model, I needed a lot of input data, so I started with a small app that would take a snapshot from my camera every few minutes and save it.

Luckily, UniFi cameras make it easy to grab a JPEG of the current frame. This is what my front door looks like right now:


I was able to grab this image with a simple HTTP GET request to the camera's IP. Example: http://192.168.1.5/snap.jpeg

In order to enable this, I did have to turn on Anonymous Snapshots directly on the camera, by navigating to http://192.168.1.5/camera/config and enabling the Anonymous Snapshot setting.

Since I would be running my image capture tool within my local network, I did not need to expose the camera to the Internet.

While this camera placement works well for my everyday use, it contains a lot of extra data that isn't relevant to detecting a package, so I needed to crop each image down to just the area where I thought packages might be placed. Here is the result of my crop:


I now had everything in place to build out an app to start capturing images. I wrote the app in Go (golang) as it is perfect for this type of systems programming. The app simply polls the camera every 10 minutes, grabs the image, crops it (if desired), and stores it in an S3 bucket or on the local file system.

The source code is available on GitHub.

I wrapped the app in a Docker container and deployed it on my Synology. The Docker image is available on Docker Hub. Here is a sample Docker run command:

docker run --restart always \
 -d \
 -e TZ='America/Denver' \
 -v /volume1/imagefetcher:/img \
 --name imagefetcher \
 ericdaugherty/imagefetcher \
 -imageURL http://192.168.1.5/snap.jpeg \
 -dir /img \
 -sleepHour 22 \
 -wakeHour 7 \
 -rect 0,400,900,1080

This creates and runs a new Docker container that captures the images, crops them, and stores them locally.

Since packages are generally not delivered in the middle of the night, and the night images are much harder to see, I decided to only run the image fetcher (and eventually the package detector) from 7a to 10p.  So I added parameters to the ImageFetcher tool to stop capturing images overnight.

The -e TZ='America/Denver' parameter sets the timezone for the Docker container. This is important so that the timestamps are correct and the logic to sleep and wake work correctly.

The -v /volume1/imagefetcher:/img parameter maps the directory on the Synology to /img in the container, and then later the -dir /img specifies that the snapshots should be written to /img in the container, which will result in them being stored in /volume1/imagefetcher on the Synology.

If you would prefer to store the images on S3, you can add these parameters:
    -e AWS_ACCESS_KEY_ID='<Your Access Key ID>' \
    -e AWS_SECRET_ACCESS_KEY='<Your Access Key Secret>' \

to the docker run command and this parameter:
    -s3Bucket <bucket name> \

to the imagefetcher command. You can then drop the -v /volume1/imagefetcher:/img and -dir /img, or keep them both and store the images twice!

Now you wait and capture data... I started with a month's worth of images before I trained the first model, but I continue to capture images and plan on training a new model with the larger set.

Machine Learning Model

You should now have an S3 bucket or a directory full of JPEG images. Now comes the fun part: you need to manually sort the images into two different labels. I chose the highly descriptive 'package' and 'nopackage' labels.

I use a MacBook, so I used Finder to quickly preview the images, running through them until I saw a change of state. Then I moved the images into either the 'package' or 'nopackage' directory. Repeat until you've processed all of the images.

This is pretty labor intensive, but it did go faster than I expected.

You should end up with two folders, named 'package' and 'nopackage'.

I then spent quite a while trying to figure out how to train the model using SageMaker. I found this pretty frustrating as I'm not really fluent in Python, and it turns out the Machine Learning space is pretty large and not super-obvious to pick up. Luckily, I came across a post from Jud Valeski where he was building a similar tool to determine when the wind blew the covers off his patio furniture. He used Google Vision for his solution, so I took a look.

As it turns out, using Google Vision to build a simple model is drop-dead simple. I signed up for a Google Cloud account, created a project, and then created a new Google Vision model. To create the model, I simply had to zip up the two directories and upload the archive. In about 10 minutes, I had a functional model!

Google also provides an HTTP endpoint that you can use to evaluate images against your model. You simply POST a JSON body including your image, base64 encoded, and it gives you back the matching label along with its confidence level.
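A sketch of both halves of that exchange. The field names (payload, imageBytes, displayName, classification.score) follow the AutoML Vision v1beta1 API as I understand it; double-check them against Google's docs:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// buildPredictRequest base64-encodes the image and wraps it in the
// JSON body the AutoML Vision predict endpoint expects.
func buildPredictRequest(imageBytes []byte) ([]byte, error) {
	return json.Marshal(map[string]interface{}{
		"payload": map[string]interface{}{
			"image": map[string]string{
				"imageBytes": base64.StdEncoding.EncodeToString(imageBytes),
			},
		},
	})
}

// bestLabel pulls the top label and its score out of a predict response.
func bestLabel(body []byte) (string, float64, error) {
	var resp struct {
		Payload []struct {
			DisplayName    string `json:"displayName"`
			Classification struct {
				Score float64 `json:"score"`
			} `json:"classification"`
		} `json:"payload"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return "", 0, err
	}
	if len(resp.Payload) == 0 {
		return "", 0, fmt.Errorf("empty response")
	}
	return resp.Payload[0].DisplayName, resp.Payload[0].Classification.Score, nil
}

func main() {
	req, _ := buildPredictRequest([]byte("fake image bytes"))
	fmt.Println(string(req))

	// A made-up example response.
	sample := []byte(`{"payload":[{"displayName":"package","classification":{"score":0.97}}]}`)
	label, score, _ := bestLabel(sample)
	fmt.Println(label, score) // prints package 0.97
}
```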

Package Detector

With the trained model in place and a public endpoint I can hit, all that was left was to build the final tool.

The final source code is available on GitHub. The Docker image is also available on Docker Hub.

I lifted much of the logic from the imagefetcher to grab and crop the JPEG image. I then wrote new code to base64 encode the image and upload it to Google Vision. Based on the response, if a package is detected, an email is sent out to notify me.

The current version supports email as the notification tool.  I leveraged Amazon's Simple Email Service (SES), but you can use any SMTP server you have appropriate access to.

This tool supports two triggers. The simple approach is to specify an interval, ex: -interval 5, and it will check every 5 minutes. However, I realized that the UniFi NVR is already doing motion detection, so if I could trigger based on that, the tool would only evaluate an image when there was a reason to do so.

I came across a cool project by mzak on the UniFi Community Forums (there is a GitHub repo). Mzak realized that the NVR writes a line to a log file (motion.log) every time motion is detected or ends. I leveraged his work to build a Go library that monitors the log file.

To use this, you must map the location of the motion.log file into the Docker container. I do so with -v /volume1/docker/unifi-video/logs:/nvr and then point the packagedetector at this location with the -motionLog /nvr/motion.log parameter.
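A minimal sketch of the log-watching approach. The motion.log line format below is an assumption for illustration; inspect your own file and adjust the matching:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
	"time"
)

// isMotionStart reports whether a motion.log line marks the start of a
// motion event for the given camera. The substrings matched here are
// assumptions; inspect your own motion.log and adjust.
func isMotionStart(line, cameraID string) bool {
	return strings.Contains(line, cameraID) && strings.Contains(line, "start")
}

// tail follows the log file like `tail -f`, invoking onMotion whenever
// a new motion-start line appears. Simplified: it assumes the NVR
// writes complete lines.
func tail(path, cameraID string, onMotion func()) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	f.Seek(0, io.SeekEnd) // only react to new events

	r := bufio.NewReader(f)
	for {
		line, err := r.ReadString('\n')
		if err == io.EOF {
			time.Sleep(time.Second) // wait for more log lines
			continue
		}
		if err != nil {
			return err
		}
		if isMotionStart(line, cameraID) {
			onMotion()
		}
	}
}

func main() {
	// Demonstrate the matcher on a made-up log line.
	fmt.Println(isMotionStart("1546300800 AABBCCDD1122 start motion", "AABBCCDD1122")) // prints true
}
```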

You can run the docker using the following command:
sudo docker run \
    --restart always \
    -d \
    -e TZ='America/Denver' \
    -v /volume1/packagedetector:/pd \
    -v /volume1/unifi-video/logs:/nvr \
    --name packagedetector \
    ericdaugherty/packagedetector:latest \
    -imageURL http://192.168.1.5/snap.jpeg \
    -rect 0,400,900,1080 \
    -motionLog /nvr/motion.log \
    -cameraID AABBCCDD1122 \
    -gAuthJSON /pd/google-json-auth-file.json \
    -gVisionURL https://automl.googleapis.com/v1beta1/projects/... \
    -sleepHour 22 \
    -wakeHour 7 \
    -emailFrom test@example.com \
    -emailTo test@example.com \
    -emailServer email-smtp.us-east-1.amazonaws.com \
    -emailUser  \
    -emailPass "" \
    -emailOnStart true

I now receive an email every time a new package is delivered!

Looking ahead, I'm interested in building in support for SMS or even push notifications, although I would also need to build an iOS app for that. I also plan on continuing to refine the model with additional images until I'm confident it will be correct nearly all the time.

Simple HipChat AddOn in Go

We use HipChat at work to stay connected. It works well and has some fun plugins, including Karma. I'll bet a few of you can guess what happens when you create a karma system in a company full of smart technical folks.

THEY GAME THE SYSTEM.

One of the more interesting hacks they discovered is to use the 'find and replace' functionality built into HipChat to secretly give (or more likely take) karma to/from another person. The proper usage of the Find/Replace functionality is something like:
> I can typo good.
> s/typo/type

The first line would be a typo that you made, and the second line allows you to 'fix' the typo. The key part here is that the second line modifies the first line and is not visible to any other users.

Who could let this fun discovery go unused? Not our team! Our chats were quickly filled with Karma Ninjas giving and taking karma from behind a thin veil of secrecy.

Luckily the Karma bot is open source. So I forked the source, made a fix, and submitted a pull request, right?

WRONG. Karma is written in JavaScript and writing JavaScript doesn't make me happy. So I decided to create a Karma Ninja bot instead to 'out' the folks attempting to be Karma Ninjas. Since I've been exploring Go on AWS Lambda recently, I figured this would be a great excuse to write more Go!
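The core check such a bot needs is simple: match the s/find/replace syntax and see whether the replacement sneaks in karma. A hypothetical sketch (the real bot also has to call back into the HipChat API to post the callout):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// substitution matches HipChat's s/find/replace syntax, with an
// optional trailing slash.
var substitution = regexp.MustCompile(`^s/([^/]*)/([^/]*)/?$`)

// isKarmaNinja reports whether a message is a find-and-replace that
// sneaks karma ("++" or "--") into an earlier line.
func isKarmaNinja(msg string) bool {
	m := substitution.FindStringSubmatch(msg)
	if m == nil {
		return false
	}
	repl := m[2]
	return strings.Contains(repl, "++") || strings.Contains(repl, "--")
}

func main() {
	fmt.Println(isKarmaNinja("s/typo/type"))     // honest fix: prints false
	fmt.Println(isKarmaNinja("s/lunch/eric++/")) // ninja: prints true
}
```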

Starting Points

There are a few options out there already that really helped accelerate the effort.

First, the eawsy folks created a great shim for Lambda that allows you to deploy Go easily and it runs FAST. Their solution is much faster from a cold start than the other NodeJS shims out there.

Second, Nicola Paolucci has a great post on the Atlassian Blog about building a HipChat addon in Go.

Finally, there is a HipChat Addon project by David Jonas on GitHub.

The eawsy Lambda shim is being actively developed and the team was very responsive to a few of the issues/questions I had, including one that is apparently a bug in Go 1.8.

The Atlassian blog post does a great job of walking through how to build an add-on. This was the template I ended up using for my addon. The only note I have here is that Nicola uses the roomId to store the authentication token, but that only works for single room installations. I used the OAuthId field instead which seems to work for both single room and global installations.

David's project did not compile out of the box because of changes in the upstream JWT token project. I thought about trying to fix it, but ultimately I wanted to understand how it all worked, so I wrote mine without the template. I may go back and fix that project as an alternate implementation.

State

When I was envisioning building the plug-in, I assumed I could build it statelessly: simply register a webhook with a regex and a callback URL that would return a message. But the webhook did not appear to let you return a message; you need to make a separate call into the HipChat API to post one. Because of this, you need to maintain some state between calls.

EDIT: My first guess was correct, you can return a response to the webhook call and post a message. I ended up over-complicating it but the point was to learn more about AWS so it was still time well spent. Here is info on how to pass a response to the webhook.

I explored using both DynamoDB and S3 to maintain the needed state, which is really just an OAuth Token. They both worked fine but ultimately I chose to use S3 as it was the simpler and cheaper option for this use case.

Amazon does publish an AWS SDK for Go, which made interacting with DynamoDB and S3 pretty easy. I do think the SDK is a bit 'low level' and could probably be made a lot easier to use with some convenience functions, but I'm just happy they support Go, so I shouldn't complain.

The eawsy team does the necessary work to make the Lambda environment available from Go, meaning that you can simply use the Go SDK without any explicit authentication credentials or setup. By default, you simply inherit the IAM Role of the Lambda function. In this case, I simply needed to add permission to access DynamoDB and S3.

Code

The code itself is pretty straightforward. Like most Go projects, I'm always impressed with how much you can get done in just a few lines of code across a handful of files. I published the project on GitHub: ericdaugherty/hipchat-karmacop.

AWS

This code does require some AWS setup. You need to define a Lambda function and build and upload the project using the Makefile (see the README for instructions).

You also need to setup an Amazon API Gateway entry to reference your AWS Lambda function. This will give you a public URL that your AddOn will reside at.

An S3 bucket is required to store the OAuth token used to post responses back to the HipChat API.

Finally, you need to add permission to read and write to S3 to the Lambda IAM Role configured for the Lambda Function.

That's It

This was a fun little project and shows a good use case for using Lambda to implement add-ons to cloud applications like HipChat.

How and Why I Became a 'Manager'

After my previous post about my approach to technical interviewing, I received some requests to write more about my career path. In this post, I will attempt to answer a question I get asked somewhat regularly: why did I stop writing code and become a manager?

How did I choose Software?

Before we can dive into why I became a manager, we need to explore a bit more about who I was. Growing up, I loved Legos. I would spend hours building custom Lego creations. As I got older, my family got a Commodore 64 and I spent hours playing cartridge games, watching Zaxxon fail to load from cassette tape, and typing in programs printed in computer magazines.

While I was in High School, I ran a dial-up BBS on a computer made from parts cobbled together from old computers and occasional purchases from Computer Shopper. I took all three classes in our computer lab at the High School: Typing, BASIC Programming, and Computer Drafting. Typing was the most important of those three. When I exhausted every class at the High School, I took a Turbo Pascal class at the local community college. If I wasn't already convinced (I was), after that I knew I wanted to write software for a living.

I went to the University of Illinois to study Computer Science. Studying Computer Science at UIUC was the fulfillment of a dream for me, but I quickly made a small change. During my Freshman year I switched to Computer Engineering. This was precipitated by my experience with the Introduction to Algorithms class: it involved way too much math and theoretical thinking. All I wanted to learn was how computers worked. The Computer Engineering program offered a lot more low-level classes and fewer theoretical programming classes. My favorite class was microprocessor design, where we started with basic logic gates and built up to a pipelined processor, including writing a program and running it on the processor. My second favorite was x86 Assembly programming. I was happy with my choice to switch to Computer Engineering, and it put me in the position I had dreamed of as a kid: I got paid to write software.

Did I like Software?

After graduating I started working for a consulting company. They did a great job of training new college graduates with a well-defined, self-directed training course in C++. That was, practically speaking, the first and last time I got paid to write C++. The world was quickly transitioning to the web and Java. I was among the first members of our staff to learn Java for new projects and, since we were primarily a Microsoft shop, a little J++ as well.

It was amazing. I was learning something new every day, and getting to build applications that real people were using every day. It was also when I first learned to dislike process and management.

This is probably best exemplified by a quick story. On the first project I had an opportunity to lead, I was asked to document the object model. According to our process, this meant opening up Microsoft Word and defining each class and public method, including method signatures and JavaDoc-formatted documentation. I thought it was a complete waste of time. I was frustrated that management didn't understand software development, and I just wanted all the 'useless' people to get out of the way and let me build software.

I had another experience that furthered my disdain for anyone other than developers. We were implementing Enterprise Application Integration (EAI) software written by Active Software for one of our clients. This involved moving data between several different enterprise systems. The pattern we used was to define a single message format for each data type that would contain all the data that could be consumed by every end system. Active Software called this a 'Canonical Message'. However, apparently the client didn't understand what the word 'canonical' meant, and I spent the better part of an afternoon and the entire evening in a conference room with the entire project team attempting to come up with a new way to describe a canonical message without using the word canonical. It drove me CRAZY. I wanted everyone to get out of my way and let me go build something.

I swore I would never do anything but write code.

What Changed?

Fast forward a few years, and I had built up a solid skill-set, primarily in Enterprise Java but also a solid understanding of C# and .NET. I learned how to build websites in Oracle Application Server 1.0, proprietary frameworks, Enterprise Java, Spring, and other variations.

I loved learning new frameworks. Each one brought solutions to problems I had struggled with, and as I learned, a new set of problems. 

And ultimately, this was the primary factor in my evolution. After a while, a new framework brought a quick review and shrug instead of excitement and a deep examination.

My focus over this period was building enterprise applications running inside large companies. These are important systems and 'fun' to work on, but at the end of the day are largely reading data from a database, displaying it in HTML, capturing some changes, and writing data back to the database. It got old, especially when I became somewhat disillusioned with the evolution of the frameworks.

During this time, I also got to spend a lot more time working directly with clients. While this is often a frustrating experience, it allowed me to learn about a lot of different companies, industries, and business models. I came to respect the challenges that they each faced and the solutions that they had developed.

These factors caused me to pick my head up and look around at the world I lived in, and wonder what else I could be doing.

What Did I Know?

After spending close to a decade working primarily for consulting companies, I had broad but shallow knowledge of a lot of industries. I had worked for Fast Food, Manufacturing, Insurance, HR, Logistics, Banking, Financial Exchanges, and Retailers. I knew a little about a lot, but I didn't have deep knowledge of any. I realized that what I really understood was the consulting industry.

This worked out well, since I worked for a consulting company at the time. As I looked around, I started seeing other problems I could solve. Luckily, I was part of a fast-growing company with the trust of the ownership.

It started small. We had a small project that needed a part-time PM. I stepped in as the technical architect, but I also worked with the client to make sure we were delivering what they needed, and I kept them up to date on the project's status and challenges.

I also learned how the owner conducted interviews, and I started helping out, as the company was growing quickly and he was doing more interviews than he could handle alone.

I built trust and over time I took on more responsibility.

How Did I Learn?

I often get asked how I learned how to manage a company. The answer is pretty simple: just like I learned how to code. I read a lot and learned through trial-and-error. I also had great mentors.

This is an example conversation I often had with the owner of one of the companies:
Me: We should be doing X.
Him: Why?
Me: Because <a lot of reasons I thought made sense>.
Him: No.
Me: Why? 
Him: Because <a lot of reasons I had not thought about>.

I would take what he said and reformulate my pitch based on this new information. It would often take several iterations before I either got a yes or I became convinced my idea wouldn't work for our company.

The great thing about the owner was that nearly every yes was followed by "go do this, I'm holding you accountable". This was a great way to grow my responsibility and take ownership of my ideas and see them through.

This conversation happened a lot, and over time I really understood what he cared about and what his overall philosophy was. I agreed with much of his philosophy, but not all of it. We had many spirited discussions over the years, and I came away with a well-developed philosophy of my own.

During this time, I expanded my reading habits beyond technology. I started reading business books, articles, and anything and everything that taught me about how businesses worked. I learned about public companies by reading their 10-K filings. I took a deeper look at our clients and tried to learn why they made decisions that I had previously thought were... strange.

And That Was That?

My transition to 'manager' wasn't over. My family decided to relocate to Colorado. Starting over in a new city with almost no professional network, I chose to go back to senior technical positions where I was actively writing code. I still have a passion for building software and, luckily enough, skills that were still valuable. But I eventually grew restless and started looking for a way to leverage my broader skills.

Then a very important event in my story happened: I took the family to Disney World. No, I didn't have an epiphany on my tenth ride on "It's A Small World", we only rode it 3 times. Instead, I met the CEO of Xcellent Creations on the flight home and found an opportunity to leverage the expertise I'd developed over the years to help grow a small mobile shop into an industry leader that we eventually sold to WPP.

Ultimately, I enjoy solving problems and building things. Whether those are Legos, software, or organizations doesn't really matter. What matters is that the problems are challenging and that I'm working with great people.

My hope is that the organization that we've built today does for our employees what my past companies did for me. I want to help our team expand their skills, take on new challenges, grow as individuals, and view the world differently. And along the way, ship great apps.