Smart phones: Open, Closed, and Fragmented

Android is open, iOS is closed.  Well, that is one way to look at it.  Steve Jobs would prefer: Integrated vs. Fragmented.  As we've learned from politics (Estate Tax vs. Death Tax), how you name something can dramatically change people's perceptions.

I don't believe most of the facts in this discussion are in dispute.  Apple and Google take very different approaches to their mobile operating systems.

Apple takes a very controlling closed/integrated approach.  You can only publish an application on an iPhone if Apple approves it.  The approval process can be opaque at times, though it is getting better.  As Henry Ford said: "Any customer can have a car painted any colour that he wants so long as it is black"; the iPhone comes in one color, one screen size, one form factor.  Old devices are supported for a while with the latest OS, but users are certainly encouraged to run the latest OS, with a somewhat recent version of the hardware.

Google's approach is open and free.  You can build any application you want for Android.  You can launch your own Application Store.  You can ship Android using Bing as your search engine.  You can use any screen size, form factor, or even any color!  Android is a platform from which you can build a mobile operating system for your device.  You can choose the defaults, or you can customize it.

Both of these approaches have costs/pains associated with them...

iPhone Limitations: You can only use your iPhone to do what Apple approves of.  Well, that is partially true.  You can use any part of the Internet (excluding Flash) using the browser, but there are a large number of applications that Apple will never approve, and that therefore cannot be used on an iPhone.  Interested in Swype for the iPhone?  Sorry, not approved.

Android Fragmentation: There has been much discussion of late about the fragmentation of the Android space.  Netflix stated that they will support Windows Phone 7 before Android, and that their Android support will be on a device-by-device basis.  This is because there is no unified security model they can use to ensure people won't 'steal' the streaming content (a ridiculous limitation).  Rovio came out with a list of unsupported Android smart phones for its popular Angry Birds game.  It seems the different hardware configurations make the game play differently across devices.  These are two very popular applications that are struggling to provide a solution on the Android platform due to its 'openness'.  Developing for Android is harder than developing for iOS because you must handle the many different physical and software configurations that exist.  That is a much smaller issue on iOS.

In addition to these issues, other questions have been raised about the state of applications on Android.  John Gruber, a noted Apple enthusiast, asked: "Where Are the Android Killer Apps?"  While he is certainly biased towards Apple, I think the question is valid.  Does Android have killer apps, or simply ports/clones of iOS applications?

Android 'Free' Fallout: Scoble has a post about the iPhone and Android application ecosystems.  He points out that regardless of overall market share, the perception is that iPhone users spend more money per device than Android users, and that the iOS market is where developers want to be.  The integrated Apple approach drives more eyes to their (single) app store, and those users are more likely to already have accounts set up and be able to make 'one-click purchases'.  Google has also been slow to enable application purchases globally, which has driven more developers to release free 'ad-supported' versions on Android than on iOS.

So who wins?  Neither.  Both ecosystems have their issues.  iOS is and will continue to be a major success and a huge market for paid applications.  If you are developing a non-controversial (to Apple) application, it is a great bet.  The Android platform will be huge.  Android will drive nearly every non-Microsoft, non-Apple mobile device made in the next few years.  There will be a wide variety of hardware and customized software versions released, and it will enable the development of some exciting 'custom' mobile solutions that are simply not feasible (or even possible) on iOS.

As a developer, both platforms have a lot of appeal, but they are very different to work with.  There is no clear winner.

Android is a Success, but for Whom?

There is no arguing that the market share of devices built on the Open Source Android operating system is impressive.  The Android platform, judged by adoption, is a success.

But who are the winners?

Google, which is spending the money to develop and market Android, obviously hopes to gain from the effort.  While Google does not sell Android, it supports it to foster more traffic to Google and the web in general, which will, in theory, sell more ads.  When asked about the ad revenue from Android, CEO Eric Schmidt said: "Trust me that revenue is large enough to pay for all of the Android activities and a whole bunch more."  Google's Jonathan Rosenberg estimated Google's Android-related revenue at $1 billion annually.  However, this revenue is all indirect.  Would Google be just as well off if the smart phone market were entirely iPhone devices defaulted to use Google and YouTube?

But Android is beginning to be used in ways that would appear to be neutral or negative to Google.  Rumors swirled in early September that Verizon was replacing Google Search with Bing on all of its Android phones.  Those were followed up quickly with denials, stating that Bing would not be used on ALL Android phones, but that it would be the default on some.  So Verizon is shipping some Android phones defaulted to use Bing search.  Also, Google competitor Baidu is reportedly working to build Android powered smart phones with all Google references replaced with Baidu.  Neither of these uses will drive revenue, direct or indirect, to Google.  These examples illustrate Android's openness.  No one, not even Google, can control how Android is used.

I believe Android is, and will continue to be, a success for Google.  The various other uses of Android do not diminish the value of the normal uses of Android.  The existence of Android certainly provides a better situation for Google than a smart phone market dominated by Apple and Microsoft alone.

Fall at Rocky Mountain National Park

Last weekend I went to Rocky Mountain National Park to take in some of the fall colors.  Here is what it looked like:



You can see more at my Flickr photostream.

Using Tycho to Build an OSGi Project

I recently migrated the build process for an application from a monolithic Eclipse Buckminster build to a Maven build using Tycho.  Our source code was componentized and runs in an OSGi container, but our build was not, making it difficult to version each component individually.

Because all of our OSGi meta-data was already stored in the OSGi MANIFEST.MF files, we wanted a build process that would leverage that investment, while providing us the flexibility and functionality a generalized build tool provides.  Maven and Tycho fit the bill.

Tycho is only supported on Beta releases of Maven 3.  We used Beta 2 for our build.  Tycho is simply installed as a build plugin, so all you need to get started is the Beta release of Maven 3.

The Maven/Tycho setup is pretty simple.  We defined a parent POM that provided the Tycho dependency, the child modules to build, and the P2 Update Sites that provided any dependencies needed by the component.  The Tycho part of the config looked like:

<build>
  <plugins>
    <plugin>
      <groupId>org.sonatype.tycho</groupId>
      <artifactId>tycho-maven-plugin</artifactId>
      <version>${tycho.version}</version>
      <extensions>true</extensions>
    </plugin>
    <plugin>
      <groupId>org.sonatype.tycho</groupId>
      <artifactId>target-platform-configuration</artifactId>
      <version>${tycho.version}</version>
      <configuration>
        <resolver>p2</resolver>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.sonatype.tycho</groupId>
      <artifactId>maven-osgi-packaging-plugin</artifactId>
      <version>${tycho.version}</version>
      <configuration>
        <format>'${build.qualifier}'</format>
      </configuration>
    </plugin>
  </plugins>
</build>

We used Tycho version 0.9.0 for our build.  The build.qualifier allows a source control revision number or other number to be used for SNAPSHOT or incremental builds.  This value replaces the .qualifier part of the OSGi version number string; for example, a bundle whose MANIFEST.MF declares version 1.0.0.qualifier would be built as 1.0.0.r1234 if the qualifier is set to r1234.

For each component we use the 'package' Maven build command, which produces a P2 Update Site.  When one component depends on another, it references that component's update site in its parent pom file.  This looks something like this:

<repositories>
  <repository>
    <id>newco-core</id>
    <url>http://updatesite.newco/dev/component/branch</url>
    <layout>p2</layout>
  </repository>
</repositories>

This allows us to version and build each component individually.  We can then mix and match the P2 Update Sites that are created to produce custom aggregate sites that can be tailored as needed.  We currently use the P2 Mirror Ant task to produce the composite sites.

The pom file for each OSGi module or feature is trivial.  It is just a reference to the parent project, and the type of OSGi module that it is.  Here is a sample:

<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <artifactId>myComponent</artifactId>
    <groupId>com.ericdaugherty</groupId>
    <version>1.0.0-SNAPSHOT</version>
    <relativePath>../pom.xml</relativePath>
  </parent>
  <groupId>com.ericdaugherty</groupId>
  <artifactId>com.ericdaugherty.myComponent</artifactId>
  <version>1.1.0-SNAPSHOT</version>
  <packaging>eclipse-plugin</packaging>
</project>

So now we can build each of our 'components' individually, without making a change to our development process.  Our development teams continue to maintain the configuration and dependencies in the OSGi MANIFEST.MF files.

We ran into a few bumps in the road along the way, but were always able to resolve them in a way that both Eclipse and Tycho accepted.

Overall, I'm very happy with the result.  We can now compile a wider range of projects (including Scala) that we could not build before with Buckminster, and we have much more fine-grained control.

Apple Deprecates Java, Escalates War on Flash

Apple made a few surprise moves yesterday.  Engadget noticed that the new MacBook Air computers do not ship with a Flash Plug-in pre-installed.  This is another escalation in the Apple vs. Flash battle that started with the iOS devices and is now creeping into the OS X computers.  However, unlike iOS, you can still manually install the Flash Plug-in if you choose.  In the end, this is an inconvenience, but nothing that can't be resolved by a user in 5 minutes since Adobe develops the Flash Plug-in.

The second move, and the one of more relevance to me, is the deprecation of Java on OS X.  The Apple/Java relationship is interesting.  While Sun produces JVMs for most major platforms (Linux, Solaris, Windows), it does not provide a version for OS X.  Instead, Apple provides its own version of the JDK, developed at least in part by Apple itself.

My belief is that Apple is hoping (or has already made a deal) that Oracle will take over the development of the JDK for OS X.  While Apple is ruthless about attaining control over their entire stack, Java is not a critical part of any of their future plans, so why waste engineering resources on it?  Instead, let Oracle spend the effort, just as they already do for Windows, Linux, and Solaris.

However, if Oracle does not take over development (although really, how could they not), then this could have a significant negative impact on MacBook sales, as the MacBook Pro is the chosen development platform for many Java developers.

I think the most realistic risk is that there will be a large gap between when Apple stops developing and when Oracle takes over.  However, deprecated does not mean "we're stopping all work".  It simply means that the end of the road is near.  Although given the slow release cycle of Java itself, I suppose we have a while before it becomes a serious problem.

Forest Fires

Today while hiking near Estes Park, CO I saw smoke from two different forest fires.  The larger fire was the Fourmile Canyon fire near Boulder that broke out this morning.  It is not yet contained and the smoke can be seen throughout the Denver area.

The second fire was a smaller fire near Lumpy Ridge in Estes Park.  I believe there have been fires in this area already this year, and at this point this one is believed to be minor.

There were really high winds today, which enabled the fires to spread quickly and made them difficult to fight from the air.  Good luck to all the firefighters working to contain them.  Here are a few pictures from each:

Fourmile Canyon:



Lumpy Ridge:

The Power of Simplicity

Small changes in functionality can make a big impact on the usefulness, and therefore the value, of tools and applications. This has always been true, but it is more obvious on mobile devices, where limited user input, network access, and screen size magnify the value of a good user experience (UX).

One example of this phenomenon is Wikipedia. On the iPhone, I can look up data on Wikipedia using Safari, or using one of the Wikipedia iPhone applications. At first glance, an iPhone Wikipedia application is absurd. After all, it simply uses the built-in Safari engine to render public web pages to the user. The exact same pages can be viewed in the Safari browser itself, and you can even add a bookmark to the Wikipedia page on your iPhone home screen. But one difference makes all the difference: auto-completion. On the mobile site, Wikipedia does not provide search auto-completion. You must type in a full search and execute it. If you searched correctly, you can then select the result and view the page. Using the application, you simply start typing, and then select the correct match for your partial search to view the page.

This probably saves less than 30 seconds per use. But looking up an answer in Wikipedia should be a sub-minute activity, making those seconds a significant difference. And that difference makes me more likely to use the application, increasing its value.

A second example is Instapaper.  Instapaper is essentially a bookmarking service. Through various means, you can mark a website for reading later.  Then, using various client interfaces (Web, iPhone, etc.) you can read that article later.

When I first read about it, I didn't get it at all. What's the point? There are several existing ways to do the same thing. You can use Delicious, bookmark items in your browser, email links to yourself, etc. It doesn't really allow you to do anything you couldn't do already.

What it does do is make that same function easier. As I read through my RSS feeds, I will often find a longer article I don't want to read right away. Now, I simply use the Instapaper bookmarklet in a desktop browser (JavaScript that sends the URL to Instapaper) or the Send To Instapaper button in Mobile Safari to bookmark it. Then later, when I'm sitting around with 5 minutes to kill, I'll pull up Instapaper on my iPhone and read an article.

After a few weeks of usage, I found that I am reading many more long articles with better reading comprehension. Instead of hurrying through an article when I wasn't dedicating my full attention, or just skipping it because I wasn't 'that interested', I now Instapaper it, and read it when I'm ready.

The functionality provided by Instapaper and the iPhone Wikipedia application is in each case a trivial enhancement over existing options. But they provide just enough grease to make a task that was possible before a little easier.  That can be a huge value.

When looking for that next great idea, remember, you don't need to invent an amazing new product category, you just need to make one thing easier to use.

Is Scala Too Complex?

Is Scala too complex to become the 'New Java'?  This question has been debated before, and there are several good existing posts on the topic.

I tend to mostly agree with their arguments that Scala is not inherently more complex, but instead is different and has its complexity in different places.  The syntax is different, you need to learn a few new rules, and some features like traits (mixin-based multiple inheritance) and implicit conversions can make tracking down how things work a bit more difficult.  In the end, though, I don't believe Scala itself is more complex than Java.

But I want to take a different angle on this question.  I believe that the use of Scala to build Domain Specific Languages illustrates how some of Scala's features can be used to create code that can be complex to understand, but that may be OK.

Domain Specific Languages
Domain Specific Languages, or DSLs, provide a syntax suited to a specific problem domain.  Scala provides a set of features that make it possible to write Scala libraries that present a DSL or pseudo-DSL.

Lift is a web framework written in Scala.  It provides what I consider a Scala based DSL for creating web applications.  Let's examine some sample code from a simple Lift Snippet.  (A Lift Snippet is somewhat related to a Servlet):
class A {
  def snippet (xhtml : NodeSeq) : NodeSeq = bind("A", xhtml, "name" -> Text("The A snippet"))
}
This example (from Exploring Lift site - pdf) provides a trivial snippet implementation.  It simply replaces the <A:name /> XML tag with "The A snippet" text and would be called by the following XHTML:
<lift:A.snippet>
  <p>Hello, <A:name />!</p>
</lift:A.snippet> 
There are a few things going on here to make this work. First, the bind method is statically imported. The import looks like:
import net.liftweb.util.Helpers._
This imports all the methods defined in the Helpers class into your class, enabling bind to be called without a prefix. The Helpers class itself is just a roll-up of 9 other Helper classes, so you could instead use:
import net.liftweb.util.BindHelpers._
or even:
import net.liftweb.util.BindHelpers.bind
if you want strict control over what you are including.  However, the point here is not that you can do static imports, which are possible in Java as well, but that the Lift framework makes heavy use of static imports to help create its DSL.

The second part relies on two important concepts: implicit functions and operator overloading.  The bind method has a few overloaded versions, but a typical case uses the following signature:
def bind(namespace : String, xml : NodeSeq, params : BindParam*)
In our simple example, the first two parameters are pretty straightforward, but how does:
"name" -> Text("The A snippet")
get converted to a BindParam?  The answer is implicit functions and operator overloading.  The BindHelpers object (statically imported with Helpers._) contains an implicit method with the following signature:
implicit def strToSuperArrowAssoc(in : String) : SuperArrowAssoc
The SuperArrowAssoc class defines a -> method (an operator overload).  This operator/method is overloaded to take different parameter types, including scala.xml.Text.  So in our simple example:
"name" -> Text("The A snippet")
The "name" string is implicitly converted to a SuperArrowAssoc, and the -> method is executed with a scala.xml.Text instance as the parameter, and returns a BindParam, which is then passed to the statically imported bind method.

Got all that?

It is a bit complicated, and it took me a little while to track down how it all worked.  However...  There are two important points to make before concluding that this is 'bad'.

First, the disparity between IDE functionality for Scala and Java is enormous.  It is reasonably easy to understand any piece of code quickly in Java, in large part because of the IDE.  It is trivial to see inheritance trees, all references to a method/variable, all subclasses, etc.  What I would like to see in Scala is easy decoding/indication of implicit methods and static imports.  The Scala 2.8 plugin for Eclipse still fails at handling basic class and import statement auto-completion most of the time, let alone these more complex cases (1).

Second, and I think more interestingly, I'm not sure it matters if you 'understand' the code.  The point of a DSL is to make it easy to build something in a specific domain.  Yes, you are coding Scala, but do you really need to know where/how everything is defined/wired?  If I'm building a web site, do I care how "x" -> "y" gets converted and handled by Lift?  What you have is a case where you end up learning the DSL syntax.  I think this is different from learning a library API in Java.  A library API uses traditional Java syntax, so you are just learning what methods exist.  In Lift, you have to learn the syntax itself, because there is no obvious way to 'discover' what is possible with the Lift library.  An example:

Using Java and a traditional library, I can use auto-complete to find the 'primary' classes, and use method auto-complete to look at the methods.  Assuming it is a reasonably designed API, I can probably figure out how to use it without much more.  Of course, referencing the JavaDoc will help and is recommended to make sure everything works as you assume, but you can get a long way on your own.

In Lift, there is no easy way to auto-discover how to interact with the framework using the DSL.  You must read documentation/samples to understand what is available, or dig through endless pages of ScalaDoc to deduce the possible options.

In the end, I don't believe Scala is inherently more complex than Java.  That said, I think it is easier to write hard-to-understand Scala code than it is with Java.  Writing a DSL in Scala can make for code whose proper usage is nearly impossible to infer through the traditional means, but that isn't surprising.  It is a DOMAIN SPECIFIC LANGUAGE; in essence, you have created a new syntax, and you have to acknowledge that learning it is different from learning the language it is written in.

I remain a fan of Lift and Scala.  Even with these issues, I feel that the value they deliver is worth the learning curve necessary to become proficient. 

(1) - But I am still very appreciative that the plugin exists at all, and hope they keep up the hard work!

Recursive Snippet Processing with Lift

I really like how Lift's 'template' engine works.  In short, you define XML tags that map to a Class and Method for execution.  For instance, a basic HTML template looks like:
<lift:MyClass.myMethod>
  <div>Hello, <my:name/>.  Welcome to my sample web app</div>
</lift:MyClass.myMethod>
This will result in the myMethod function on MyClass being called, which can then easily replace <my:name/> with a dynamic value.
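A minimal snippet implementation for that template might look like the sketch below.  The class and method names match the template above, while the bound value ("Eric") is just an illustrative assumption:
import scala.xml.{NodeSeq, Text}
import net.liftweb.util.Helpers._

class MyClass {
  // Replaces the <my:name/> tag in the surrounding template with a dynamic value.
  def myMethod(xhtml: NodeSeq): NodeSeq =
    bind("my", xhtml, "name" -> Text("Eric"))
}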

The real power comes from the fact that the Lift framework will continue to (re)process the XML until all Lift tags have been resolved.  This means that a call to one snippet can produce a call to one or more snippets.

I came across an example of this on a recent project.  I wanted to produce the same HTML block for multiple snippets.  My first effort at refactoring produced something similar to this:
<lift:MyClass.showAttr1 eager_eval="true" name="Attribute 1">
  <lift:embed what="attribute" />
</lift:MyClass.showAttr1>
The MyClass implementation looked like:
class MyClass extends AttributeHelper {

  private val attributeDefinition = ...
  val name = S.attr("name").openOr("Unnamed Attribute")

  def showAttr1(xml:NodeSeq) : NodeSeq = {
    attrHelperBind(attributeDefinition, name, xml)
  }
}
The AttributeHelper trait defined the attrHelperBind method, which took the attributeDefinition and used the bind method to replace the XML tags defined in the attribute template that was embedded in the body.  Note that I needed the eager_eval="true" attribute so that the embed tag would be evaluated before the showAttr1 tag.
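For context, a sketch of what such a trait could look like follows.  The tag prefix, parameter types, and bound names here are assumptions for illustration; the actual attrHelperBind implementation from the project isn't shown in this post:
import scala.xml.{NodeSeq, Text}
import net.liftweb.util.Helpers._

trait AttributeHelper {
  // Binds the attribute's name and markup into the embedded "attribute" template.
  // The "attr" prefix and tag names are illustrative, not the project's real ones.
  def attrHelperBind(attributeDefinition: NodeSeq, name: String, xml: NodeSeq): NodeSeq =
    bind("attr", xml,
      "name" -> Text(name),
      "value" -> attributeDefinition)
}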

This worked well and greatly reduced the amount of boilerplate code needed for each attribute.  However, since Lift will continue to evaluate the XML until all the tags are processed, I realized I could further improve it.  I created a generic snippet that simply returned the following block:
<lift:MyClass.myMethod eager_eval="true">
   <lift:embed what="attribute" />
</lift:MyClass.myMethod>
This allowed me to have a very generic entry in my HTML:
<lift:Myhelper.helper snippet="MyClass.myMethod"/>
The implementation of this Snippet is:
def helper(xml:NodeSeq) : NodeSeq = {
  val snippet = S.attr("snippet").openOr("Helper.default")
  new Elem("lift", snippet, Attribute("eager_eval", Text("true"), Null), TopScope,
    <lift:embed what="attribute" />)
}
This simply produces the original XML block, which will then be processed normally.  The Elem call produces an element named <lift:{snippet}> with the body <lift:embed what="attribute" />.  You must use the Elem object to create the XML because you cannot have dynamic tag names in XML literals.  For example:
def myXml(name:String) = {
  <lift:{name}>Body</lift:{name}>
}
is not legal as the XML literal will not be parsed correctly.

This ability to 'recursively' process the Lift XML tags enables the development of easy helper methods to allow the final XHTML templates to be very concise and readable.

Using Comet with Lift to Dynamically Update Pages

Lift, a web framework written in Scala, provides easy integration with Comet, a server-side HTML push model.

The Lift Demo site provides a good overview of the basic Comet usage.  A CometActor has two main parts.  The render method, which is executed when the page is first requested, generates any initial content required for the page.  The actor message method, called when the CometActor receives a message, is responsible for triggering an update on the page.

The render method is similar to the render method on any snippet.  It usually has a bind method that replaces XML tags in the template with dynamic content.  This can be visible HTML, JavaScript methods that will be used during updates, or a combination of the two.

The Comet Chat example has the following render method:

// display a line
private def line(c: ChatLine) = bind("list", singleLine,
                "when" -> hourFormat(c.when),
                "who" -> c.user,
                "msg" -> c.msg)

// display a list of chats
private def displayList(in: NodeSeq): NodeSeq =
  chats.reverse.flatMap(line)

override def render = 
  bind("chat", bodyArea,
       "name" -> userName,
       AttrBindParam("id", Text(infoId), "id"),
       "list" -> displayList _)
Here, the render method sets the chat:name tag to userName, and the chat:list tag is processed by the line method (via displayList).

The actor methods are where it gets interesting.  The actual method can be one of lowPriority, mediumPriority, or highPriority, or messageHandler.  These methods define PartialFunctions that are used to handle incoming messages.  There are two basic approaches that can be used from here.  The first approach is to update some internal state and force the render method to replace the existing content.  This would look something like:

var messageText : List[String] = Nil 

override def messageHandler = {
    case message : String => messageText ::= message; reRender(false)
}

Assuming you have a render method that takes the messageText and renders it out as HTML, the page will reflect the contents of messageText, and it will be updated each time a message is received.  However, if the changes are small relative to the rendered block, or you are creating an ever-increasing list of content, this can be inefficient.

The second approach for the actor message methods is to call a JavaScript method on the client with the updated information.  This would look something like:

override def messageHandler = {
  case message : String => partialUpdate(OnLoad(AppendHtml("logContents", <div>{message}</div>) & JsRaw("autoScroll();")))
}

In this example, the partialUpdate method is used to push a string of JavaScript Commands to the web page.  AppendHtml is a shortcut to a jQuery method to add HTML to a specific element. This is combined with a call to the autoScroll method which we assume is already defined on the page and acts on the newly updated information in some way (i.e. automatically scrolling the div to the bottom so the updated HTML is visible).  This does not trigger a new call to the render method.

Note that the AppendHtml command is defined in net.liftweb.http.js.jquery.JqJsCmds instead of net.liftweb.http.js.JsCmds, as it is a bridge to jQuery (hence the Jq prefix).

The combined power of server side pushes, Actors, and convenient helper methods makes it very easy to build a page that is updated in real time.

Deploying a Scala Lift Application in an OSGi Container

My current project involves building a Lift web application and deploying it in our OSGi application container.  I've been working with Scala on and off for a while, and I've always been interested in Lift.  With the release of Scala 2.8 and Lift 2.0, I decided it was time to give Lift a real try on my current project. 

The easiest way to deploy a WAR file is using Pax Web's War Extender. This allows you to simply deploy a WAR file with an updated MANIFEST.MF file (making it an OSGi Bundle) in the same container as Pax Web. In my example I will build a WAR file as a standard OSGi Plugin and build it using Eclipse, but you could also build a normal WAR file using Maven or SBT, add the OSGi attributes to the MANIFEST.MF file, and deploy it with Pax Web.

The following steps assume:
  • Eclipse is the IDE
  • Scala-IDE (Scala Plugin) is installed.
  • Scala 2.8
  • Lift 2.0 (Need the Scala 2.8 Snapshot from Scala-Tools)
The first step was to create a standard OSGi Plug-In Project.  Then edit the .project file to add the Scala build command
<buildcommand> 
  <name>ch.epfl.lamp.sdt.core.scalabuilder</name>
</buildcommand>
to buildSpec, and remove the Java Builder

Then add the Scala Nature
<nature>ch.epfl.lamp.sdt.core.scalanature</nature>
to natures. I added both to the top of the respective sections.

You will then need to reload the Project (Close/Reopen) and it should be Scala-enabled.

I then merged the Lift Template project into my Eclipse Project.  I copied the src/main/scala directory into my src directory, and src/main/webapp into the project root.

At this point Eclipse should see the Scala source files, but they will not compile as the Lift libraries are not yet in the classpath.

I downloaded the required dependencies from the various Maven repositories and added them to the WEB-INF/lib directory.  For my initial project I needed:
  • joda-time
  • lift-actor
  • lift-common
  • lift-json
  • lift-util
  • lift-webkit
  • paranamer-generator
  • slf4j
  • org.apache.commons.fileupload
Remember, if you are using Scala 2.8, to download the 2.8.0 version of the Lift libraries (which are currently under snapshots instead of releases).  Once you have all the dependencies, the Lift sample code should compile.

You can now work on deploying the bundle.

You will need the Pax Web WAR Extender bundle and all its dependencies.  They can be found in this Maven Repo, and are outlined here for version 0.7.1.

Once the Pax Web bundles are deployed, you should be able to deploy your bundle.  The Pax Web WAR Extender will scan all bundles for a web.xml file and attempt to deploy them if it finds one.

By default it uses the Bundle's Symbolic Name (Bundle-SymbolicName: xxx) as the context root, but you can specify your own by adding the following line to your MANIFEST.MF:
Webapp-Context: /
to deploy as the root context or
Webapp-Context: mycontext
to deploy as /mycontext

That was all it took to get a sample Lift application up and running. I can now use the OSGi container to reference other dependencies and continue to build the application in Eclipse.

A couple of notes:
You can deploy the Lift libraries and dependencies as OSGi bundles instead of in the WEB-INF/lib directory of the bundle. At the time of this writing, the Lift OSGi Bundles were not available for 2.8 yet, but all they need are an updated MANIFEST.MF file.

As I noted at the beginning, you can simply edit the MANIFEST.MF file of any WAR and deploy it this way. If your Lift app does not depend on other OSGi bundles this may be the easiest approach.

Hiking with a DSLR Camera

Update: I've posted an updated version of this here: Hiking with a DSLR Part 2.

I just spent a week in Rocky Mountain National Park hiking with my family. It was a great week, and it allowed me to give my new hiking setup a full workout.

My camera is a Nikon D300, and my primary hiking lens is the Nikkor 17-55 2.8.  Since I have small kids, I carry a child carrier, either a Kelty FC3 or FC1.  None of my normal carrying solutions (outlined below) worked for hiking, so I needed something different.

I came across the Cotton Carrier, which is a chest harness for the camera.  You screw a small round attachment into the bottom of your camera, and it slides into the chest harness.  It also provides a Velcro strap to immobilize the camera further, although I only use this if I need to scramble up rocks or jog.  It also provides a safety strap that I attach to where a normal camera strap would attach.

In this setup, I don't use a camera strap at all.  Between the harness and the safety strap, the camera isn't going anywhere, and it provides for clean access to the camera.

When I first got my carrier, it didn't come with the safety strap, and I kept a very short camera strap on the camera to provide for a little extra safety when I had it out of the carrier.  When they added this feature they offered it for free (plus postage) which was a nice touch.

This setup worked pretty well, but it was frustrating to use the Tripod which I sometimes carry with me on hikes.  To address this, I moved to a solution where I 'always' used a quick-release solution.

I use Arca Swiss compatible plates and clamps.  Initially, I had a generic plate for my D300 and a Kirk QRC-2 clamp on my tripod.  This worked alright, but the plate was bulky so I always took it off the camera when I wasn't using it, which was time consuming.

I looked into using quick release plates with my Cotton Carrier and realized all I needed was an additional clamp.  You screw the Cotton Carrier's round attachment onto the clamp, and then clamp that onto the plate on the camera.  This approach increases the distance from the camera to the carrier a little, but I didn't find it to be a problem.

I also upgraded my generic plate to a Kirk plate made specifically for the D300, the Kirk PZ-122.  This plate was MUCH better than the generic plate I used before.  It is much more low profile, and has a 'stopper' you can put on one side so the plate can only go on and off from one side.  Additionally, it has a second screw hole, so you can attach the camera to other things that use this attachment without removing the plate.  Since I regularly use a Rapid Strap, which screws into that hole, this turned out to be a great feature.

All in all, I'm very happy with the Cotton Carrier + Kirk Plates/Clamp system in general, and for Hiking in particular.

The harness looks like it covers your entire chest, but only the bottom band (about 1 1/2 inches) is really snug against your chest.  It does get a little sweaty, but not bad considering, and nothing compared to my hiking backpack.

When I'm not hiking, I use one of the following (all of which I like for specific uses):
The SlingShot is a great 'all day' bag for most outings.  The 200 can carry my D300 with the 17-55 lens attached in the main hold, with two other (modestly sized) lenses.  I also carry an extra battery, data card, the R Strap, Filters, and other miscellaneous gear I may need throughout the day.  It provides easy access to the camera without taking the pack off, and is reasonably light.

The R Strap works great when you are shooting a lot with the same lens and a 'casual' level of movement.  It is not good for walking on non-level ground, walking long distances, or doing a lot of squatting.

I rarely use the camera with a normal camera strap, but it is the traditional approach.

Java Email Server 2.0 Beta 3 Released

Java Email Server (JES) is an open source email server (SMTP/POP3) written in Java.

This release is the third Beta version of the new 2.0 development branch. This is an incremental update to Beta 2 and contains the following updates:
  • (New Feature) A recipient policy is now offered for all incoming messages originating from local domains. See the rcptPolicy.conf file for details.
  • Further relaxed the default security settings so that cleartext passwords are accepted.
  • Unlimited jurisdiction cryptography is not required by default anymore.
  • The digest-MD5 SASL mechanism is off by default now.
While the belief is that JES 2.0 Beta 3 is stable, we will continue with Beta releases in the 2.0 branch until we feel confident that the 2.0 code is stable and production safe. Please provide feedback on this release in the JES Google Group, even if it is just letting us know you are using JES without any issues.

You can download this release from the project home page.

iTunes Export 2.2.2 Released

iTunes Export exports playlists defined in your iTunes Music Library to standard .m3u, .wpl (Windows Media), .zpl (Zune), or .mpl (Centrafuse) playlists. iTunes Export also supports copying the original music files with the playlist to facilitate exporting to other devices. iTunes Export is open source and freely available for use.

The 2.2.2 release features updates and bug fixes to the console and GUI versions.

For the GUI version:

  • Fixed bug where iTunesExport would hang when attempting to export multiple playlists with the same name. Now, the last playlist will be exported.
  • Fixed bug where iTunesExport would hang when attempting to copy a file that does not exist. Now the file will be skipped.
For the Console version:
  • Reduced memory usage by changing the parsing method to avoid DTD look-ups.
If you find any issues or have questions please email me (eric@ericdaugherty.com). Please include which application and version (GUI or Console, 2.2.2 etc.) of iTunes Export and the operating system you are using.

Acquisitions - Palm and HP, Siri and Apple

Two interesting acquisitions were announced today, one exciting, one disappointing.

First, the exciting one.  Apple acquired Siri.  I've used the Siri app for the iPhone, and was very impressed.  It tries to be your digital secretary, and actually does it quite well.  It abstracts you away from how it finds out information, and just presents what you want to know.  It works great for the simple cases so far, and I think over time will become very good at much more.  For Apple, who I believe is focusing on making computers into appliances, this is a great fit.  As I discussed in my post about the iPad, I believe the next evolution of the computing space is creating computing appliances, not computers.  Thinking of the iPad as a computer with a touch screen instead of a computer with a keyboard is wrong, just as thinking of a TiVo as a computer with a video capture card is wrong.  Yes, both analogies are technically correct, but they both miss the point.  Both are computing appliances, not computers, and the rules and expectations for them should be different.  I'm very excited to see what Apple will do with Siri.

And now for the disappointing one.  HP acquires Palm.  Palm was once a great company.  They nearly single-handedly created the PDA (Personal Digital Assistant) market with the Palm Pilot.  The Palm Pilot dominated the market for years, until the advent of smart phones.  Then they created the smartphone market with their Treo line (technically they bought Handspring, which was formed by the same folks), and extended their domination.  I was an early adopter back in 1998 and used Palm Pilots and Treos up until last year, when I switched to an iPhone 3GS.

I was very excited about Palm's new OS, but in the end it was too little, too late.  If they had launched WebOS and the Pre a year before the iPhone came out, they might still be dominating the landscape.  And they certainly should have been able to do that.  The Palm OS was great in the '90s, worked OK in the early '00s, but was really showing its age by 2005.  They waited far too long to move to the next generation.

I don't see how being acquired by HP will change their position.  iPhone and Android are locked arm-in-arm for the smartphone market.  iPhone owns the proprietary walled garden space, and Android is the open, extensible choice.  Microsoft and RIM are still hanging around, mostly in the corporate market.  There just isn't room for Palm.

HP is not an innovative company today, and is better known for its existing relationships and sales channels than for its engineering.  While I don't believe any company could have really saved Palm, an acquisition by HTC or another up-and-comer in the space would have been interesting.

Why Can't Google and Apple just Get Along?

Google and Apple are at war.  Google entered Apple's 'home turf' with Android, and Apple is entering Google's 'home turf' with iAd.  There is no question that the war is on.

In most situations, competition is a great thing.  It drives companies to innovate and produce better products and services.  While that will certainly happen with their respective mobile operating systems in this situation, I believe these are two companies that would be much stronger working together.

Google and Apple are good at very different things.  Gruber nails it when he says:
No better comparison of the cultural differences between Google and Apple than to compare Google Docs and iWork. iWork has no form of cloud based syncing or collaboration; the appeal of the apps (both on the Mac and iPad) is that it helps you create beautiful documents. Google Docs is all about cloud-based syncing and collaboration; its example documents are downright homely.
Google is great at building services that scale.  Gmail, Google Docs, and Google Maps are all great services, usually with simple and effective interfaces.  Apple is great at creating intuitive user interfaces and well engineered hardware.  As an end user, I want my services provided by Google on hardware built by Apple.

More specifically, I want my web user interfaces built by Google, and my native applications built by Apple.  I enjoy the close integration I have on my iPhone between the Calendar, Mail, and Contact applications and Google Apps (which is ironically made possible by Microsoft's ActiveSync).  I like Google's web interface to Gmail and Calendar.  I want them both, and I want them to work together.

In my ideal world, Google would provide a service interface for these services, and a generic web interface that is accessible anywhere, on any (modern) browser.  Apple (and others) would provide native applications that utilize these services.  I want to be able to access my data using either the browser (provided by Google) or a native app (by Apple) on either my iPhone or MacBook.  I wish the Address Book application on my MacBook worked as well with Gmail as the Contacts app does on my iPhone.

It should work this way across all services.  iWork should be able to store and edit files on Google Docs, which could then be edited using the web interface as well.  Everything lives in the cloud, with the ability to cache local copies with the native applications (or advanced browsers).  

Unfortunately, it looks like Apple and Google are moving farther apart, instead of closer together.  It is clear that Google first 'invaded' Apple's space with Android, but Apple is certainly a company that holds grudges and is very aggressive.   Adobe knows this all too well.

As an end user, I'm afraid this is one area where competition may not produce the best result, but who knows.  Neither Google nor Apple is going away anytime soon, and they may both surprise us with what they do.  Let's just hope it is about creating better products instead of damaging the competition.

Flash Builder 4 Reference Card

I recently published a Reference Card with DZone: Getting Started with Flash Builder 4.  The Reference Card covers the new features of Flash Builder 4 and includes a basic overview of Flex application development.  Check it out!

I enjoyed working with DZone to publish this card, and the editorial process was great.  I hope you find it useful.  Let me know what you think!

Google Is Great, When It Works

I'm a big fan of Google.

While I've written my own email server, I've adopted Google Apps for nearly all of the domains I manage.  Gmail is a great tool, and has freed me from the tendency to over-organize my mail.  I can now find things more easily and quickly than I ever did using Outlook or Thunderbird, simply by using minimal tagging and the built-in search functionality.

I've used Google Search, Google Apps (Mail, Calendar), Google Reader, Blogger, Google Docs, Google Code, Google Web Toolkit (GWT), and the Google App Engine.  Great stuff.

If it works.

However, if something goes wrong, it can be difficult to find the answer.  Your best bet is to Google for the solution of course.  Hopefully someone has encountered the issue before and can point you in the right direction.  If not...

I recently migrated my Blogger setup (for this blog) from FTP publishing to a Custom Domain hosted by Google.  The transition went fairly smoothly (as I documented here), and while I was a bit annoyed I had to change my setup, it was fairly painless and hey, the price is right.

But I ran into an issue with my RSS (Atom) feed.  I've been using Feedburner for quite a while to track the number of subscribers.  When I used FTP publishing I simply edited the template to point to my Feedburner feed.  However, now that I switched to hosted mode, I can't edit the template the same way.  So most new subscribers are using the base Google feed, not the Feedburner feed. 

Google does offer an option to handle this, 'Post Feed Redirect URL', with the description: "If you have burned your post feed with FeedBurner, or used another service to process your feed, enter the full feed URL here. Blogger will redirect all post feed traffic to this address. Leave this blank for no redirection." Of course, when I enter my Feedburner feed in this URL, I get the error: 'This URL would break your feed, resulting in a redirect loop. Leave the field blank to serve your feed normally.'  Based on all the reading I've done, my setup appears to be correct and this should work.  Faced with limited support options, I posted a question to Google's help forums.  I got a response that appears to suggest that something internally needs to be reset, but no help from any Google resources.  So, I guess I'm stuck.

Of course, you can point out that I'm getting what I pay for, which is true.  Blogger is a free service, and I'm not entitled to any specific level of support.  But that doesn't make the situation any less frustrating.

At least this issue is minor.  In the end, it doesn't really matter.  But suppose I had an issue with Gmail.  What would I do then?  Well, I guess the answer would be to upgrade to a Pro account, and then demand support, but that is only an option because I'm using Google Apps instead of a plain Gmail account.

The risk of free...

Programming in the Small

I believe that successful application development today is about 'tweaks and sub-features', not major functionality.  But my thought process for this post was kicked off by an interesting post by Mike Taylor: 'Whatever happened to programming?' Mike laments the evolution of programming.  He is nostalgic for the day when writing a program was about creating something from scratch, instead of assembling the various pieces.

I think his assessment of the transition is fairly accurate.  The number of frameworks and libraries in projects today far exceeds the number used in projects 5, 10, or 20 years ago.  This is especially true in the Java world, where build tools like Maven gained traction because they handled the dependency management.  And now, a non-trivial Java project can easily incorporate 100 or more jar files, while a trivial 'boilerplate' web application can easily have 20 jars.

In many ways this is frustrating.  It has also given rise to what I call the cut-and-paste programmer.  You can actually achieve reasonably impressive results simply by searching for the right libraries, and then assembling them together by cutting and pasting example code found via Google.

From a business perspective, these are all good things.  The level of skill required to produce results is lower, and the speed of development has greatly increased.  We are standing on a very tall foundation.

This also means that the major functionality of many applications is provided mostly by libraries and frameworks.  The heavy lifting parts are not really heavy anymore.  I think Jeff Atwood hits this nail on the head when he stated on a Stack Overflow podcast episode that the major features of Stack Overflow themselves are fairly trivial.  The real value is that it is a collection of many small features and tweaks that make the overall system successful (I can't find the reference, so I apologize if I paraphrased incorrectly).  I think this point is right on.  Most major 'features' are trivial to implement today using the rich set of libraries that exist.  Building a website that has questions and answers like Stack Overflow is trivial.  Making it successful is hard.  And the difference is all in the fine print.

Jeff discussed at some length the time they spent on the syntax parser (Markdown) and the instructions on the 'ask a question' page.  Small changes to how the information is displayed and highlighted are much more important than the major feature of saving the question to a database and displaying it.

Successful applications today are about the user experience.  There are very few applications that are truly innovative themselves and could not be replicated by gluing together a set of frameworks.

Real innovation today is in the small.  This is also why I believe that the rise of Appliance Computing is here, and that Write Once Run Anywhere languages are inferior to native client applications.  It is the difference between an iPhone web application and a native app.  They both have the same features, but the experience can be very different.  In the end, the real value is in the small efficiencies of the application, not the large features.

Web File Extensions are Bad

I hate file extensions on websites.  They are an unnecessary leaky abstraction, and are discouraged by the W3C.  Not all file extensions are necessarily bad, but any file extension that exposes the underlying technology implementation is.  Any .pl, .php, .asp, .aspx, .jsp, .do, .struts, etc. extension is B A D.

I've talked about this before, and have come up with some workarounds to build extension-less Java web applications.

However, I've come across what I think is a better way, thanks to a post a couple years ago by Matt Raible.

I came across the issue using the Spring WebMVC DispatcherServlet.  I want all dynamic URLs handled by the Spring Controllers, using Annotations.  However, mapping the DispatcherServlet to /* means that every URL will be processed by the DispatcherServlet, including the .jsp views returned by the Controller.  As I mentioned in the previous post, you can 'unmap' content and have it handled by a default or jsp servlet in some app servers, but not all.

You can also try to map specific subsets of URLs to Spring.  However, this is harder than it sounds.  By default, Spring will match against the wildcard portion of the URL, not the entire URL.  So if you have /foo/* as your url-pattern, a controller with a mapping of /foo/one will not match /foo/one; it will instead match /foo/foo/one.

It appears that you can use the property alwaysUseFullPath to change this behavior, but it did not seem to work as expected for me.

Instead, there is a more generalized solution, as Matt suggested.  URL Rewriting.

The URL Rewrite Filter project provides a Filter for which you can easily define rewrite rules, just like in Apache.  So I set up my DispatcherServlet to match *.spring, set up a rule to rewrite all extension-less requests to .spring, and set up my annotation mappings to have the .spring extension.

Now my web application can handle the odd html, png, or other static files if necessary, but does not expose any implementation details in the URLs.  Perfect.

For reference, here are the relevant portions of my config:

Web.xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

  <filter>
    <filter-name>UrlRewriteFilter</filter-name>
    <filter-class>org.tuckey.web.filters.urlrewrite.UrlRewriteFilter</filter-class>
  </filter>
  
  <filter-mapping>
    <filter-name>UrlRewriteFilter</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>
  
  <servlet>
    <servlet-name>SpringMVCServlet</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
      <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/your.config.file.location.xml</param-value>
      </init-param>
    <load-on-startup>1</load-on-startup>
  </servlet>
  
  <servlet-mapping>
    <servlet-name>SpringMVCServlet</servlet-name>
    <url-pattern>*.spring</url-pattern>
  </servlet-mapping>
  
</web-app>
urlrewrite.xml
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE urlrewrite PUBLIC "-//tuckey.org//DTD UrlRewrite 3.0//EN"
  "http://tuckey.org/res/dtds/urlrewrite3.0.dtd">

<urlrewrite>
  <rule>
    <from>/$</from>
    <to type="forward">home</to>
  </rule>

  <rule>
    <from>^([^?]*)/([^?/\.]+)(\?.*)?$</from>
    <to last="true">$1/$2.spring$3</to>
  </rule>
  
</urlrewrite>
Sample Controller
@Controller
@RequestMapping("/foo")
public class SimpleDataController {

  @RequestMapping("/bar.spring")
  public ModelAndView bar() {
    ...
  }
}

Developing a Google App Engine (GAE) app using Maven

If you want to develop a Google App Engine (GAE) application using Maven, you can either use the Maven plugin maven-gae-plugin, which requires non-trivial hacking on your pom.xml, or you can keep your pom clean and create a simple Ant script.

My pom is a simple web application pom, with no specific GAE configuration.  I then created a build.xml in my project root that looks like this:
<project>
  <property name="sdk.dir" location="/opt/appengine-java-sdk-1.3.1" />

  <import file="${sdk.dir}/config/user/ant-macros.xml" />

  <target name="runserver" depends=""
      description="Starts the development server.">
    <dev_appserver war="target/yourappname-1.0-SNAPSHOT" />
  </target>

</project>

Using this, you can run your application in the GAE sandbox without having it take over your pom.

You can also have the Ant task perform a Maven package to ensure everything is updated, by adding an exec target to the runserver task.

You can read more about the full range of Ant tasks available for GAE, but I found this simple script helpful to get up and running quickly in the GAE sandbox without much effort.

Whole House Audio/Video Distribution

I have my house wired so that every television can access a shared set of sources (mostly).  I wanted this solution because everything I watch is recorded.  Therefore, I wanted to access each of my three DVRs on every television in the house.  Here is how I accomplished it.

First, I located all of the DVRs in the basement.  They are each run to every television in the house using different transmission mechanisms.  Here is a general overview of my system layout:

Most of the components are in the basement, with the exception of the disc based components (DVD Players and game consoles). 

The Basement and Family Room are in close proximity, allowing direct wiring of all the devices.  The DVRs are wired directly to the TV using Component and S-Video connections.  The sound is sent back to the basement from the DVD and game consoles using digital audio connections.  The speakers are wired directly from the AV Receiver in the basement.

The rest of the rooms require some alternative transmission mechanism.  For the Standard Definition (SD) televisions, I use a Channel Vision E4200 RF converter.  This device takes up to four standard definition sources (audio and video) and modulates them onto broadcast channels.  I then combine the antenna feed with the Channel Vision output and run it on the RG-6 that runs to each television in the house.  Now every TV can tune to a channel, say 63, and display the output from the SD DirecTV DVR.  I also have a Cat-5 run to each television with an IR sensor.  This allows signals from that room to be transmitted back to the basement and be 'seen' by the components there.

For the second HD TV, I use a system from Audio Authority (Model 9871 + Wall Plates) to transmit a High Def (HD) component video signal, digital audio signal, and IR signal to the second room.  This system requires a converter box at the source and destination, but allows all of the signals to travel over a pair of Cat-5 cables. 

Here is an overview of the wiring:
The RF Modulator is a great solution for sending audio and video to SD televisions.  I even have an SD feed from the HD DVR so you can watch down-converted versions of the HD shows on any TV as well.

The Audio Authority system is great for transmitting HD video, as long as it is component and not HDMI.  See my rant against HDMI for more information.

I use a Niles IR system to capture/repeat the IR signals.  It has worked well, although there does appear to be a quality difference in the IR sensors.  Spend the money to get a good one.

For music I use my laptop to stream music to one of two Airport Express devices, attached to each AV Receiver in the house.  It isn't a Sonos, but it works.

I have a Harmony remote in each room set up to control the local TV and the DVRs.

Overall, I've been very happy with this setup.  It has been in place for over three years now and just works.  There are certainly other solutions to this problem, but I've been pleased with this for my needs.

Siri - The Next Generation of Appliance Computing?

In my previous post, I discussed the trend toward computing appliances (ie DVRs, Kindles, etc.) instead of general purpose computers.  On the recommendation of Merlin Mann on MacBreak Weekly, I downloaded the Siri iPhone application and gave it a try.  Wow.

The Siri application attempts to be the ubiquitous Star Trek computer.  Just ask it a question and it will give you the answer.  It provides both voice and text interaction modes, and an easy-to-use interface that exposes common features.

It won't do everything you want, and I'm not sure this specific application will be something I use regularly, but at a minimum it provides an interesting example of where the world is going.  Imagine using this with the Bluetooth headsets that I expect will soon be ubiquitous.  Just ask a question and have the answer spoken to you.  We are not there yet, but we're starting to get really close.

This is an example of what I consider the appliance trend, applied to software.  The application provides easy access to a common (if limited) set of features that are intuitive to use.

Why the iPad will succeed, and the Rise of the Computing Appliance

The iPad is an appliance, and it will be successful.  But before we get to that, we need to start at the beginning.

When I was a kid I ran a dial-up BBS service (The Outhouse) on a computer cobbled together from old donated computers and a few new parts I'd purchased.  I would take apart old computers (donated by my friends' parents after their companies discarded them) and test the various parts for something I could scavenge.  I spent hours poring over the massive Computer Shopper magazine to find the best deal on a new hard drive or modem.  This was the very definition of an (economy) do-it-yourself general purpose computer.

In my first two decades of computing I never purchased a pre-built computer.  I always assembled new computers from parts, or upgraded my existing computer (often replacing everything but the case and CD-ROM drive).  Pricewatch.com was my favorite site for a long time.

But along came a new device that started a big change, TiVo.  I bought my first TiVo around the year 2000.  It was something I could have built myself, but I realized that the convenience of having a dedicated appliance was worth the cost.  I just wanted it to work, and it did. Very well.

In the years since, I've been transitioning away from general purpose computers to appliances.  The Linux and Windows desktop/servers I used to run (24/7) have been replaced by a Linux based wireless router (Linksys WRT 54 GL) and a NAS (ReadyNAS NV+).  They provide nearly all the services the old general purpose computers provided, with a few exceptions, and each of those exceptions has been moved to the cloud.  I've moved my website hosting and email hosting to the cloud using GoDaddy (< $5/month for web hosting) and Google Apps (Free).  They provide a better quality of service, at a minimal cost.  Along the way I migrated from writing and hosting my own email server (Java Email Server), to hosting email at GoDaddy, to free hosting at Google.  That is quite a shift in effort and cost.

The same progression is true with my other devices.  I now exclusively use laptops, and have not assembled a desktop in more than 4 years.  I've also embraced Apple devices, which have more of an appliance feel than do other devices.  I use iTunes to manage my music, a couple of Airport Express units to listen to music on my stereos, and an iPhone as my mobile music device and phone.  While one could argue that these are not really a change to appliances, I think they match the general trend.  Appliances provide a pre-defined (somewhat inflexible) experience that 'just works' as long as you stay in the provided feature set.  This is exactly what Apple excels at doing.

Finally, I've adopted a Kindle.  While I could read on a laptop, or an iPhone, this single purpose device excels at linear reading (ie Books).  I use it every day.

There are several reasons for this trend.  One is simple economics.  As the years go by, I have more discretionary income, and more demands on my time (namely two young children).  The trade-off between buying appliances and tinkering has certainly shifted for me.

I think there is more to the story though.  As the computing and home electronic fields mature, it becomes easier and more cost effective to create appliances that fit into our worlds.  Which brings us to the iPad.

The iPad is not the first device to attempt to move the general purpose computing environment to an appliance.  One could argue that it is really a descendant of the WebTV concept.  Take the primary activities people use general purpose computers for and put them into an appliance.  This brings up a brief and interesting digression...

Apple does not create markets.  Apple waits until a market is ready, and then delivers a product with impressive polish and ease of use.  MacBooks don't do anything a similarly priced PC can't do, they are just prettier and easier to use (or have been traditionally).  The iPod was not the first portable MP3 player, it was just better (including the iTunes ecosystem).  The iPhone didn't break new ground on smart phone functionality, it was just better (again, including the iTunes and AppStore ecosystem).  Finally, the iPad isn't new either.  Microsoft has had a TabletPC version of their Operating System since 2001.  The previously mentioned WebTV provided email and web browsing as an appliance experience.

Apple is attempting to build on these ideas, with Apple's traditional polish, iTunes ecosystem, and of course, Reality Distortion Field.  I don't know that version 1 of the iPad will be a success, but I am convinced that appliance computing will become a significant mainstream success.

Final Note to Developers: As Software Developers, we will always use a general purpose computer.  Just as a carpenter utilizes a set of tools to build a house, we will utilize a set of tools (ie general purpose computers) to build appliances.  Our goal should always be building applications in the appliance mindset.  My youngest child (2 years old) can turn on my iPhone and open her favorite puzzle game.  All of our computing experience should be this easy.

Creating a TimeMachine SparseBundle on a NAS

I use a ReadyNAS NV+ as my backup drive and bulk storage.  Although the newer firmware directly supports TimeMachine, I've never been able to get that to work.  (This probably has something to do with the fact that I was upgrading and downgrading my NV+ Firmware quite a bit to debug a separate issue).

However, I did find a great tool to create SparseBundles that you can use on a NAS (or any external disk).

BackMyFruitUp.  First, it is a great name.  Second, it is a simple and easy tool.  The tool I actually use is 'Create Volume Backup,' a subproject of BackMyFruitUp, which you can download from this page.

You hardly need instructions.  Unzip it, run it, and type in the size you want for the sparsebundle.  Then just copy it to your destination share and point Time Machine at it.  Done.

Of course, I wouldn't need it now if my Time Machine SparseBundle hadn't become corrupted.  Luckily I didn't need it.  I also perform a separate rsync backup on occasion to ensure I have a 'basic' backup of my user directory as well.

Ignorance Spam

I've been getting a lot of spam recently.  But this isn't normal spam; it is actually 'legitimate' bulk mail, although *I* didn't sign up for it.  What is going on?

Someone out there, apparently with the name Emily Daugherty, thinks that my gmail address is actually her gmail address.  She's been signing up for all sorts of websites using her (really my) email address in the past few months.  That results in all sorts of useless email for me that is not caught by normal spam filters.

Today she finally tried to reset my gmail password!  I'm not sure if she really doesn't know her email address, or is simply really, really, really bad at typing it in.  Either way, it needs to stop.

I need to somehow convince her to stop using my email address.  Unfortunately, I don't know her real address, since apparently she mostly doesn't either.

Any ideas?

Does Write Once Run Anywhere Work?

Yes, and No.

Write Once, Run Anywhere, a slogan created by Sun to evangelize the virtues of the Java Platform, is a controversial approach to software development.

Write Once, Run Anywhere (WORA) is accomplished through an abstraction layer between the 'compiled code' and the operating system and processor.  This abstraction usually takes the form of a Virtual Machine or Runtime, such as the Java Virtual Machine (JVM), Microsoft's Common Language Runtime (CLR), Flash Player (or the Air runtime), or one of the many interpreted language runtimes (Perl, PHP, Ruby, etc.).  These runtimes convert the intermediate language into device specific code that can execute on the local operating system and processor.  While this abstraction introduces extra steps that slow down execution, these runtimes also provide features not (easily) available in native code, such as garbage collection and Just In Time (JIT) compilers, which can optimize the code while it executes rather than at compilation time.

So does it work?  Yes, and No.

Server

WORA languages have achieved a significant level of success on the server side.  Code that runs on large servers and interacts with clients over HTTP or other protocols is almost always written in some form of WORA, whether it is Java, .Net, PHP, Ruby, Perl, or other interpreted languages.  There is no advantage to using native code in these cases.  All interactions with the user are through an intermediate protocol/interface, such as HTML over HTTP for websites, XML over HTTP for web services, or various other formats and protocols used to exchange information between servers and clients or other servers.

There are certainly some applications developed for servers in native code.  Database servers are the most common example, but LDAP servers, webservers (Apache), and others are compiled to native code.  However, there are WORA versions of each of these examples, and many of the native applications were first written before WORA languages took off.

There is no denying that WORA is a huge success on the server side.

Client

Which brings us to No.

Client application development is where WORA has struggled.  The biggest challenge is Human Interface Guidelines (HIG): recommendations published by operating system vendors (Microsoft, Apple) that define how an application should look and how it should interact with the user.  Applications that follow these guidelines look like 'Windows' or 'Mac' applications.

With WORA, application developers have two choices.  Follow the guidelines of a specific platform, and ignore the others, or compromise between the various target platforms, creating an application that doesn't match any platform. 

Early Java desktop applications looked like Java applications.  They were obviously different from the other applications that users were used to interacting with, and were often shunned.  This has led to a negative view of WORA applications in general, as John Gruber comments on a Jason Kincaid article:
Jason Kincaid nails it: “write once, run everywhere” has never worked out. It’s a pipe dream.
In the context of client applications, I have to (mostly) agree.

There are exceptions.  In the Java world, nearly every developer uses an Integrated Development Environment written in Java, whether it is Eclipse, IntelliJ IDEA, or NetBeans.  But developers are a very different target audience than general computer users.

Another example is Flash and Flex applications.  Often delivered in the web browser, they are governed by no real Human Interface Guidelines other than the expected HTML experience.  This can work, but it can also be horribly painful, as many people have discovered while trying to find a menu on a restaurant's website.

Mobile

There is a third act to this story.  Mobile.

Apple has taken the mobile market by storm with its iPhone and App Store.  With over 100,000 applications written for the iPhone, it has become THE mobile development platform.  And every one of these applications was compiled to native code.

A consistent user experience is even more important on a mobile device with a limited display and limited input capability.  Apple's success is in part due to its consistent device design.  Every iPhone/iPod Touch/iPad version has a single home button and a touch screen.  There are two screen sizes: the iPhone/iPod Touch size and the iPad size.  While individual device capabilities do vary (memory, speed, GPS, compass, etc.), the primary interface components are all the same.  By using a software keyboard on the devices, the keyboard is the same across all devices and applications.  All of this makes developing applications for the platform much more predictable and enjoyable.

The Windows Mobile and Android platforms both encompass a wide variety of device form factors, screen sizes, physical buttons, and device features.  This makes it much more difficult to build an application that is easy and intuitive to use across the platform.  And I think the quality and quantity of applications on the Windows Mobile and Android platforms demonstrate this point.

Solution

There is a solution, of sorts.  HTML in the browser is the most successful WORA language and runtime for client applications since the ANSI/VT100 terminal.  By providing a common language and interface, it let applications be written easily for all operating systems, without the pain of violating their human interface guidelines.  The browser itself conformed to the local guidelines, and users expected the experience in the browser to be different from a native application.

It is time to evolve this paradigm to the next level.  HTML 5 is a good first step.  It provides the ability to display video, store data locally, and draw 2D graphics in a standardized way.  But to be successful, these features and more need to be implemented consistently across browsers, enabling developers to truly develop great WORA client applications.

As an intermediate step, frameworks and libraries that abstract away the browser differences are a reasonable short-term solution.  JavaScript libraries such as Prototype and jQuery smooth over the browser implementation differences, while frameworks like Google's Web Toolkit (GWT) provide a platform for developing client applications that just happen to run in the browser.
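To make the GWT idea concrete, here is a minimal, hypothetical entry point (the class and package names are mine, purely for illustration); the Java below is compiled by GWT into JavaScript that runs directly in the browser:

package com.example.client;

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

// A minimal GWT entry point: plain Java that the GWT compiler
// translates into browser-specific JavaScript at build time.
public class HelloEntryPoint implements EntryPoint {

  public void onModuleLoad() {
    Button button = new Button("Say Hello");
    button.addClickHandler(new ClickHandler() {
      public void onClick(ClickEvent event) {
        Window.alert("Hello from Java, running as JavaScript");
      }
    });
    // Attach the widget to the host HTML page.
    RootPanel.get().add(button);
  }
}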

Realistically, I think tools like GWT are the future.  As a Flex developer, I enjoy the ability to quickly and easily create rich applications that will render the same on every user's machine.  But I would prefer that Flex applications compiled to HTML and JavaScript, so they could run natively in the browser.

In the future, we will develop using various languages and platforms, but they will all compile down to code that runs natively in the browser.  Or so I hope.

iTunes Export 2.2.1 Released

iTunes Export exports playlists defined in your iTunes Music Library to standard .m3u, .wpl (Windows Media), .zpl (Zune), or .mpl (Centrafuse) playlists. iTunes Export also supports copying the original music files with the playlist to facilitate exporting to other devices. iTunes Export is open source and freely available for use.

The 2.2.1 release features updates and bug fixes to the console and GUI versions.

In both versions:
  • Enhanced the playlist name filter to include characters from the Latin 1 Supplement Block.
  • Added(Console)/Changed(GUI) 'addIndex' logic. Now uses incrementing index instead of iTunes Song Index. Index is in the order iTunes has the songs in for each playlist.
For the Console version:
  • Replaced ad-hoc URL decoding of file paths with URLDecode class. Now non-ASCII characters are handled correctly in file names.
If you find any issues or have questions please email me (eric@ericdaugherty.com). Please include which application and version (GUI or Console, 2.2.1 etc.) of iTunes Export and the operating system you are using.

Integrating GraniteDS and BlazeDS with a Spring WebMVC Application

Both GraniteDS and BlazeDS provide support for remote calls and messaging using Adobe's AMF protocol. However, when it comes to integrating them with a Spring application that already uses Spring's DispatcherServlet, the projects start to show some differences.

In a previous post, I outlined the steps to getting GraniteDS 2.0 setup with Spring. However, this approach results in two separate Spring contexts, so my Spring Service with the 'singleton' scope was being loaded twice. Not good.

I found that GraniteDS 2.1 did support better Spring integration. You can see a blog post here that describes the process, or their updated documentation.  Note that the blog post seems to be somewhat out of date.  One issue is the schema URL: the blog post uses http://www.graniteds.org/config/granite-config-2.1.xsd instead of http://www.graniteds.org/public/dtd/2.1.0/granite-config-2.1.xsd.

The BlazeDS approach has a good overview here, and the documentation is pretty good too.  In my case, I used the @RemotingDestination annotation on my service beans instead of adding:
<flex:remoting-destination />
to the Spring config as I'm using auto-wiring for most of my beans.
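For reference, the annotation-based approach looks roughly like this; the service name and class here are hypothetical, not taken from my actual project:

import org.springframework.flex.remoting.RemotingDestination;
import org.springframework.stereotype.Service;

// Exposes an auto-wired Spring bean as an AMF remoting destination,
// in place of a <flex:remoting-destination /> entry in the XML config.
@Service("accountService")
@RemotingDestination
public class AccountService {

  public String ping() {
    return "pong";
  }
}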

There are a couple of things that bothered me about the GraniteDS approach as opposed to the BlazeDS approach.

First, the BlazeDS approach retains a (simplified) services-config.xml file, so the traditional configuration options are still available, and the integration with Flex/Flash Builder works as well.  Not a big deal, but it stays closer to the existing conventions.

Second, I was able to get BlazeDS working much faster. The project simply seems to be more mature, and the documentation and examples are clearer. The Granite documentation notes that they are working to achieve parity with the BlazeDS Spring integration, so it is likely that they will be able to close this gap. But for now, it seems to be a work in progress.  It is only available in the 2.1 version, which is currently at RC2.  I also had to use the 2.x version of spring-security-core instead of the 3.0 version, as Spring seems to have refactored the location of the BadCredentialsException class.

All that said, GraniteDS does provide more features than BlazeDS, so the comparison may not be entirely fair. I found the Granite ActionScript code generator (gas3) to work well, although it seemed to miss some import statements, so the generated code needed touching up before it would compile in Flex 4.  The community around BlazeDS seems larger at this point, with the expected increase in polish.  However, competition is good, so hopefully future versions of GraniteDS continue to improve.

Either way, it is good to see both of these projects working to provide easy integration with an existing Spring WebMVC project. Adding new Flex functionality to existing Spring applications should be very painless.

Migrating from Google Blogger FTP to a Custom Domain

Up until today, my blog was hosted at http://www.ericdaugherty.com/blog using Blogger's FTP publishing service. This service uploaded html files to my GoDaddy hosting account whenever a new post was created. While it worked reasonably well, and I liked the control it provided (since I had a backup copy of my entire blog on my hosting site), it did have a few issues. While I was not excited about moving my blog, I understood Google's reasoning.

Google provides a 'Custom Domain' publishing option, allowing Google to host the blog using a DNS name that you own. In my case, I chose http://blog.ericdaugherty.com as my new custom domain name. I could not keep my current address because Custom Domain publishing does not support publishing to subdirectories.

My first step was to create the blog.ericdaugherty.com CNAME and point it to Google's hosting DNS name, ghs.google.com. I use GoDaddy as my registrar and DNS Server and the changes took effect almost immediately. To test, I opened the new address in the browser, and was greeted with a Google hosted page that stated 'Not Found'. Good so far.

Step two was to update my Blogger configuration to use Custom Domain publishing with the blog.ericdaugherty.com CNAME. I also used the 'missing files' host option to host all of the pictures I'd uploaded previously. I used my existing host at www.ericdaugherty.com to host the missing files. I hit save and reloaded the site. It immediately loaded, but was missing all of the formatting and images. My template assumed it was being loaded from www.ericdaugherty.com, so the links to my images and style sheets were broken.

I also used some PHP includes to generate the common navigation blocks on my site, so I had to copy that code into my template, as they were no longer hosted on the same server. After I made the appropriate changes to the template, the blog displayed correctly. However, the Blogger NavBar now appeared at the top of the blog. I followed the instructions I found here, which boil down to adding this to my css file:
#navbar-iframe {
display: none !important;
}
I also noticed that my favicon was no longer showing. I added the following to my Blogger template to direct requests back to my main site:
<link href='http://www.ericdaugherty.com/favicon.ico' rel='shortcut icon'/>
<link href='http://www.ericdaugherty.com/favicon.ico' rel='icon'/>
I use FeedBurner to track my RSS subscribers, so I updated my FeedBurner settings to point to the new URL. If I had not used FeedBurner, I would have needed to add a redirect to my .htaccess file (see below).

Finally, I needed to add redirects from the existing site to the new location. I already had an .htaccess file for my existing site, so I edited the file to include new redirects. I already had my RewriteEngine turned on with:
RewriteEngine on
So I just needed to add the new rewrite rules using a 301 (permanent) redirect to notify any requesting agents (including search engines) that the content has been permanently moved. I also included a redirect for my old rss.xml file to the FeedBurner URL, to make sure anyone who had directly subscribed to the feed was instead using the FeedBurner version. Finally, I excluded the uploaded_images directory, as the hosted Blogger site will still reference those images for the old blog posts.
RewriteCond %{REQUEST_URI} /blog/rss.xml [nc]
RewriteRule ^(.*) http://feeds.feedburner.com/EricDaugherty [r=301,nc]

RewriteCond %{REQUEST_URI} !^/blog/uploaded_images/.*$
RewriteRule ^blog/(.*) http://blog.ericdaugherty.com/$1 [r=301,NC]
That's it. It took a bit of effort but wasn't too bad. If you are reading this post, then the new settings are working. Let me know if you see any errors!