Red Rocks

Programming in the Small

March 10, 2010

I believe that successful application development today is about ‘tweaks and sub-features’, not major functionality.  But my thought process for this post was kicked off by an interesting post by Mike Taylor: ‘Whatever happened to programming?’  Mike laments the evolution of programming.  He is nostalgic for the days when writing a program meant creating something from scratch, instead of assembling various pieces.

I think his assessment of the transition is fairly accurate.  The number of frameworks and libraries in projects today far exceeds the number used in projects 5, 10, or 20 years ago.  This is especially true in the Java world, where build tools like Maven gained traction because they handle dependency management.  And now, a non-trivial Java project can easily incorporate 100 or more jar files, while a trivial ‘boilerplate’ web application can easily have 20 jars.
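Maven’s contribution is that each of those jars is declared rather than hunted down by hand; a single pom entry pulls in a library and its transitive dependencies.  For example (a real artifact, shown just to illustrate the mechanism):

```xml
<!-- Declared in pom.xml; Maven downloads the jar and anything it depends on -->
<dependency>
  <groupId>commons-lang</groupId>
  <artifactId>commons-lang</artifactId>
  <version>2.4</version>
</dependency>
```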

In many ways this is frustrating.  It has also given rise to what I call the cut and paste programmer.  You can actually achieve reasonably impressive results simply by searching for the right libraries, and then assembling them together by cutting and pasting example code found via Google.

From a business perspective, these are all good things.  The level of skill required to produce results is lower, and the speed of development has greatly increased.  We are standing on a very tall foundation.

This also means that the major functionality of many applications is provided mostly by libraries and frameworks.  The heavy lifting parts are not really heavy anymore.  I think Jeff Atwood hit this nail on the head when he stated on a Stack Overflow podcast episode that the major features of Stack Overflow are themselves fairly trivial.  The real value is that it is a collection of many small features and tweaks that make the overall system successful (I can’t find the reference, so I apologize if I paraphrased incorrectly).  I think this point is right on.  Most major ‘features’ are trivial to implement today using the rich set of libraries that exist.  Building a website that has questions and answers like Stack Overflow is trivial.  Making it successful is hard.  And the difference is all in the fine print.

Jeff discussed at some length the time they spent on the Markdown syntax parser and the instructions on the ‘ask a question’ page.  Small changes to how the information is displayed and highlighted are much more important than the major feature of saving the question to a database and displaying it.

Successful applications today are about the user experience.  There are very few applications that are truly innovative themselves and could not be replicated by gluing together a set of frameworks.

Real innovation today is in the small.  This is also why I believe that the rise of Appliance Computing is here, and that Write Once Run Anywhere languages are inferior to native client applications.  It is the difference between an iPhone web application and a native app.  They both have the same features, but the experience can be very different.  In the end, the real value is in the small efficiencies of the application, not the large features.

Web File Extensions are Bad

March 4, 2010

I hate file extensions on websites.  They are an unnecessary leaky abstraction, and are discouraged by the W3C.  Not all file extensions are bad, but any file extension that exposes the underlying technology implementation is.  Any .pl, .php, .asp, .aspx, .jsp, .do, .struts, etc. extension is B A D.

I’ve talked about this before, and came up with some workarounds to build extension-less Java web applications.

However, I’ve come across what I think is a better way, thanks to a post a couple years ago by Matt Raible.

I came across the issue using the Spring WebMVC DispatcherServlet.  I want all dynamic URLs handled by the Spring controllers, using annotations.  However, mapping the DispatcherServlet to / means that every URL will be processed by the DispatcherServlet, including the .jsp views returned by the controllers.  As I mentioned in the previous post, you can ‘unmap’ content and have it handled by a default or JSP servlet in some app servers, but not all.

You can also try to map specific subsets of URLs to Spring.  However, this is harder than it sounds.  By default, Spring matches controller mappings against the wildcard portion of the URL, not the entire URL.  So if you have /foo/* as your url-pattern, a controller mapped to /foo/one will not match the URL /foo/one; it will match /foo/foo/one instead.

It appears that you can use the alwaysUseFullPath property to change this behavior, but it did not work as expected for me.
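For reference, alwaysUseFullPath is a property of Spring’s URL handler mappings, set in the DispatcherServlet’s context.  A minimal sketch (using the Spring 2.5/3.0-era annotation handler mapping class; shown as the documented approach, not a config I got working):

```xml
<!-- Tell the handler mapping to match against the full request path,
     not just the portion after the servlet mapping -->
<bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping">
  <property name="alwaysUseFullPath" value="true"/>
</bean>
```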

Instead, there is a more general solution, as Matt suggested: URL rewriting.

The URL Rewrite Filter project provides a servlet Filter for which you can easily define rewrite rules, just like in Apache.  So I set up my DispatcherServlet to match *.spring, added a rule to rewrite all extension-less requests to .spring, and gave my annotation mappings the .spring extension.

Now my web application can serve the odd HTML, PNG, or other static file if necessary, but does not expose any implementation details in its URLs.  Perfect.

For reference, here are the relevant portions of my config:

Web.xml
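A sketch of the approach described above (filter and servlet names, and the rewrite regex, are illustrative):

```xml
<!-- web.xml: UrlRewriteFilter in front of everything,
     DispatcherServlet mapped to *.spring -->
<filter>
  <filter-name>UrlRewriteFilter</filter-name>
  <filter-class>org.tuckey.web.filters.urlrewrite.UrlRewriteFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>UrlRewriteFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

<servlet>
  <servlet-name>spring</servlet-name>
  <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>spring</servlet-name>
  <url-pattern>*.spring</url-pattern>
</servlet-mapping>

<!-- WEB-INF/urlrewrite.xml: forward any extension-less request to *.spring;
     requests containing a dot (static files) pass through untouched -->
<urlrewrite>
  <rule>
    <from>^/([^.]+)$</from>
    <to>/$1.spring</to>
  </rule>
</urlrewrite>
```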

Developing a Google App Engine (GAE) app using Maven

March 3, 2010

If you want to develop a Google App Engine (GAE) application using Maven, you can either use the maven-gae-plugin, which requires non-trivial hacking on your pom.xml, or you can keep your pom clean and create a simple Ant script.

My pom is a simple web application pom, with no specific GAE configuration.  I then created a build.xml in my project root that looks like this:

<project>
  <property name="sdk.dir" location="/opt/appengine-java-sdk-1.3.1" />

  <import file="${sdk.dir}/config/user/ant-macros.xml" />

  <target name="runserver"
      description="Starts the development server.">
    <dev_appserver war="target/yourappname-1.0-SNAPSHOT" />
  </target>

</project>

Using this, you can run your application in the GAE sandbox without having it take over your pom.
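For reference, the kind of plain war pom this works against might look like the following minimal sketch (groupId and artifactId are illustrative; the version matches the exploded war path used above):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>yourappname</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- war packaging produces target/yourappname-1.0-SNAPSHOT/ -->
  <packaging>war</packaging>
</project>
```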

You can also have Ant run mvn package to ensure everything is up to date, by adding an exec target that the runserver task depends on.
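Sketched out, that might look like this (assuming mvn is on your PATH):

```xml
<!-- Rebuild the exploded war via Maven before starting the dev server -->
<target name="package" description="Runs mvn package.">
  <exec executable="mvn" failonerror="true">
    <arg value="package"/>
  </exec>
</target>
```

Then add depends="package" to the runserver target so every launch picks up a fresh build.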

You can read more about the full range of Ant tasks available for GAE, but I found this simple script helpful to get up and running quickly in the GAE sandbox without much effort.

Whole House Audio/Video Distribution

February 24, 2010

I have my house wired so that every television can access a shared set of sources (mostly).  I wanted this solution because everything I watch is recorded.  Therefore, I wanted to access each of my three DVRs on every television in the house.  Here is how I accomplished it.

First, I located all of the DVRs in the basement.  They are each run to every television in the house using different transmission mechanisms.  Here is a general overview of my system layout:

Siri - The Next Generation of Appliance Computing?

February 23, 2010

In my previous post, I discussed the trend toward computing appliances (ie DVRs, Kindles, etc.) instead of general purpose computers.  On the recommendation of Merlin Mann on MacBreak Weekly, I downloaded the Siri iPhone application and gave it a try.  Wow.

The Siri application attempts to be the ubiquitous Star Trek computer.  Just ask it a question and it will give you the answer.  It provides both voice and text interaction modes, and an easy user interface that exposes common features easily.

It won’t do everything you want, and I’m not sure this specific application will be something I use regularly, but at a minimum it provides an interesting example of where the world is going.  Imagine pairing this with the Bluetooth headsets I expect will become ubiquitous: just ask a question and have the answer spoken back to you.  We are not there yet, but we’re starting to get really close.

This is an example of what I consider the appliance trend, applied to software.  The application provides easy access to a common (if limited) set of features that are intuitive to use.

Why the iPad will succeed, and the Rise of the Computing Appliance

February 22, 2010

The iPad is an appliance, and it will be successful.  But before we get to that, we need to start at the beginning.

When I was a kid I ran a dial-up BBS (The Outhouse) on a computer cobbled together from old donated computers and a few new parts I’d purchased.  I would take apart old computers (donated by my friends’ parents after their companies discarded them) and test the various parts for something I could scavenge.  I spent hours poring over the massive Computer Shopper magazine to find the best deal on a new hard drive or modem.  This was the very definition of an (economy) do-it-yourself general purpose computer.

In my first two decades of computing I never purchased a pre-built computer.  I always assembled new computers from parts, or upgraded my existing computer (often replacing everything but the case and CD-ROM drive).  Pricewatch.com was my favorite site for a long time.

But along came a new device that started a big change: TiVo.  I bought my first TiVo around the year 2000.  It was something I could have built myself, but I realized that the convenience of having a dedicated appliance was worth the cost.  I just wanted it to work, and it did.  Very well.

In the years since, I’ve been transitioning away from general purpose computers to appliances.  The Linux and Windows desktops/servers I used to run (24/7) have been replaced by a Linux-based wireless router (Linksys WRT54GL) and a NAS (ReadyNAS NV+).  They provide nearly all the services the old general purpose computers provided, with a few exceptions, and each of those exceptions has been moved to the cloud.  I’ve moved my website hosting and email hosting to the cloud using GoDaddy (< $5/month for web hosting) and Google Apps (free).  They provide a better quality of service, at a minimal cost.  Along the way I migrated from writing and hosting my own email server (Java Email Server), to hosting email at GoDaddy, to free hosting at Google.  That is quite a shift in effort and cost.

The same progression is true with my other devices.  I now exclusively use laptops, and have not assembled a desktop in more than 4 years.  I’ve also embraced Apple devices, which have more of an appliance feel than other devices.  I use iTunes to manage my music, a couple of AirPort Express units to listen to music on my stereos, and an iPhone as my mobile music player and phone.  While one could argue that these are not really appliances, I think they match the general trend.  Appliances provide a pre-defined (somewhat inflexible) experience that ‘just works’ as long as you stay within the provided feature set.  This is exactly what Apple excels at doing.

Finally, I’ve adopted a Kindle.  While I could read on a laptop, or an iPhone, this single purpose device excels at linear reading (ie Books).  I use it every day.

There are several reasons for this trend.  One is simple economics.  As the years go by, I have more discretionary income, and more demands on my time (namely two young children).  The trade-off between buying appliances and tinkering has certainly shifted for me.

I think there is more to the story though.  As the computing and home electronic fields mature, it becomes easier and more cost effective to create appliances that fit into our worlds.  Which brings us to the iPad.

The iPad is not the first device to attempt to move the general purpose computing environment to an appliance.  One could argue that it is really a descendant of the WebTV concept.  Take the primary activities people use general purpose computers for and put them into an appliance.  This brings up a brief and interesting digression…

Apple does not create markets.  Apple waits until a market is ready, and then delivers a product with impressive polish and ease of use.  MacBooks don’t do anything a similarly priced PC can’t do, they are just prettier and easier to use (or have been traditionally).  The iPod was not the first portable MP3 player, it was just better (including the iTunes ecosystem).  The iPhone didn’t break new ground on smart phone functionality, it was just better (again, including the iTunes and AppStore ecosystem).  Finally, the iPad isn’t new either.  Microsoft has had a TabletPC version of their Operating System since 2001.  The previously mentioned WebTV provided email and web browsing as an appliance experience.

Apple is attempting to build on these ideas, with Apple’s traditional polish, iTunes ecosystem, and of course, Reality Distortion Field.  I don’t know that version 1 of the iPad will be a success, but I am convinced that appliance computing will become a significant mainstream success.

Final Note to Developers: As Software Developers, we will always use a general purpose computer.  Just as a carpenter utilizes a set of tools to build a house, we will utilize a set of tools (ie general purpose computers) to build appliances.  Our goal should always be building applications in the appliance mindset.  My youngest child (2 years old) can turn on my iPhone and open her favorite puzzle game.  All of our computing experience should be this easy.

Creating a TimeMachine SparseBundle on a NAS

February 18, 2010

I use a ReadyNAS NV+ as my backup drive and bulk storage.  Although the newer firmware directly supports TimeMachine, I’ve never been able to get that to work.  (This probably has something to do with the fact that I was upgrading and downgrading my NV+ Firmware quite a bit to debug a separate issue).

However, I did find a great tool to create SparseBundles that you can use on a NAS (or any external disk).

BackMyFruitUp.  First, it is a great name.  Second, it is a simple and easy tool.  The tool I actually use is ‘Create Volume Backup,’ a subproject of BackMyFruitUp, which you can download from this page.

You hardly need instructions.  Unzip it, run it, and type in the size you want for the sparsebundle.  Then just copy it to your destination share and point Time Machine at it.  Done.

Of course, I wouldn’t need it now if my Time Machine SparseBundle hadn’t become corrupted.  Luckily I didn’t need it.  I also perform a separate rsync backup on occasion to ensure I have a ‘basic’ backup of my user directory as well.

Ignorance Spam

February 18, 2010

I’ve been getting a lot of spam recently.  But this isn’t normal spam; it is actually ‘legitimate’ bulk email, although I didn’t sign up for it.  What is going on?

Someone out there, apparently named Emily Daugherty, thinks that my gmail address is actually her gmail address.  She’s been signing up for all sorts of websites using her (really my) email address over the past few months.  That results in all sorts of useless email for me that is not caught by normal spam filters.

Today she finally tried to reset my gmail password!  I’m not sure if she really doesn’t know her email address, or is simply really, really, really bad at typing it in.  Either way, it needs to stop.

I need to somehow convince her to stop using my email address.  Unfortunately, I don’t know her real address, since apparently she mostly doesn’t either.

Any ideas?

Does Write Once Run Anywhere Work?

February 16, 2010

Yes, and No.

Write Once, Run Anywhere, a slogan created by Sun to evangelize the virtues of the Java Platform, is a controversial approach to software development.

Write Once, Run Anywhere (WORA) is accomplished through an abstraction layer between the ‘compiled code’ and the operating system and processor.  This abstraction usually takes the form of a virtual machine or runtime, such as the Java Virtual Machine (JVM), Microsoft’s Common Language Runtime (CLR), Flash Player (or the AIR runtime), or one of the many interpreted language runtimes (Perl, PHP, Ruby, etc.).  These runtimes convert the intermediate language into device-specific code that can execute on the local operating system and processor.  While this abstraction introduces extra steps that slow down execution, it also provides features not (easily) available in native code, such as garbage collection and Just In Time (JIT) compilers, which can optimize the code while it executes rather than at compilation time.

So does it work?  Yes, and No.

Server

WORA languages have achieved a significant level of success on the server side.  Code that runs on large servers and interacts with clients over HTTP or other protocols is almost always written in some form of WORA, whether it is Java, .Net, PHP, Ruby, Perl, or other interpreted languages.  There is no advantage to using native code in these cases.  All interactions with the user are through an intermediate protocol/interface, such as HTML over HTTP for websites, XML over HTTP for web services, or various other formats and protocols used to exchange information between servers and clients or other servers.

There are certainly some server applications developed in native code.  Database servers are the most common example, but LDAP servers, web servers (Apache), and others are compiled to native code.  However, there are WORA versions of each of these examples, and many of the native applications were first written before WORA languages took off.

There is no denying that WORA is a huge success on the server side.

Client

Which brings us to No.

WORA application development has struggled on the client side.  The biggest challenge is Human Interface Guidelines (HIG): recommendations published by the various operating system vendors (Microsoft, Apple) that define how an application should look and interact with the user.  Applications that follow these guidelines look like ‘Windows’ or ‘Mac’ applications.

With WORA, application developers have two choices: follow the guidelines of a specific platform and ignore the others, or compromise between the various target platforms, creating an application that doesn’t match any platform.

Early Java desktop applications looked like Java applications.  They were obviously different from the other applications that users were used to interacting with, and were often shunned.  This has led to a negative view of WORA applications in general, as John Gruber comments on a Jason Kincaid article:

iTunes Export 2.2.1 Released

February 12, 2010

iTunes Export exports playlists defined in your iTunes Music Library to standard .m3u, .wpl (Windows Media), .zpl (Zune), or .mpl (Centrafuse) playlists. iTunes Export also supports copying the original music files with the playlist to facilitate exporting to other devices. iTunes Export is open source and freely available for use.
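The .m3u format itself is just a text file listing one track per line, optionally preceded by extended-info comments.  A hypothetical example (paths and tags are illustrative):

```
#EXTM3U
#EXTINF:210,Some Artist - Some Song
/Users/you/Music/Some Artist/Some Song.mp3
#EXTINF:184,Some Artist - Another Song
/Users/you/Music/Some Artist/Another Song.mp3
```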

The 2.2.1 release features updates and bug fixes to the console and GUI versions.

In both versions: