Whole House Audio/Video Distribution

I have my house wired so that every television can access a shared set of sources (mostly).  I wanted this solution because everything I watch is recorded.  Therefore, I wanted to access each of my three DVRs on every television in the house.  Here is how I accomplished it.

First, I located all of the DVRs in the basement.  Each is run to every television in the house, using different transmission mechanisms depending on the room.  Here is a general overview of my system layout:

Most of the components are in the basement, with the exception of the disc-based components (DVD players and game consoles).

The Basement and Family Room are in close proximity, allowing direct wiring of all the devices.  The DVRs are wired directly to the TV using Component and S-Video connections.  The sound from the DVD player and game consoles is sent back to the basement using digital audio connections.  The speakers are wired directly from the AV Receiver in the basement.

The rest of the rooms require some alternative transmission mechanism.  For the Standard Definition (SD) televisions, I use a Channel Vision E4200 RF modulator.  This device takes up to 4 standard definition sources (audio and video) and modulates them onto broadcast channels.  I then combine the antenna feed with the Channel Vision output and run it over the RG-6 that runs to each television in the house.  Now every TV can tune to a channel, say 63, and display the output from the SD DirecTV DVR.  I also have a Cat-5 run to each television with an IR sensor.  This allows remote control signals from that room to be transmitted back to the basement and be 'seen' by the components there.

For the second HD TV, I use a system from Audio Authority (Model 9871 + Wall Plates) to transmit a High Def (HD) component video signal, digital audio signal, and IR signal to the second room.  This system requires a converter box at the source and destination, but allows all of the signals to travel over a pair of Cat-5 cables. 

The RF Modulator is a great solution for sending audio and video to SD televisions.  I even have an SD feed from the HD DVR so you can watch down-converted versions of the HD shows on any TV as well.

The Audio Authority system is great for transmitting HD video, as long as it is component and not HDMI.  See my rant against HDMI for more information.

I use a Niles IR system to capture/repeat the IR signals.  It has worked well, although there does appear to be a quality difference in the IR sensors.  Spend the money to get a good one.

For music, I use my laptop to stream music to one of two AirPort Express devices, one attached to each AV receiver in the house.  It isn't a Sonos, but it works.

I have a Harmony remote in each room, set up to control the local TV and the DVRs.

Overall, I've been very happy with this setup.  It has been in place for over three years now and just works.  There are certainly other solutions to this problem, but I've been pleased with this for my needs.

Siri - The Next Generation of Appliance Computing?

In my previous post, I discussed the trend toward computing appliances (DVRs, Kindles, etc.) instead of general purpose computers.  On the recommendation of Merlin Mann on MacBreak Weekly, I downloaded the Siri iPhone application and gave it a try.  Wow.

The Siri application attempts to be the ubiquitous Star Trek computer.  Just ask it a question and it will give you the answer.  It provides both voice and text interaction modes, and an intuitive user interface that makes common features easy to discover.

It won't do everything you want, and I'm not sure this specific application will be something I use regularly, but at a minimum it provides an interesting example of where the world is going.  Imagine pairing this with the Bluetooth headsets that will likely become ubiquitous.  Just ask a question and have the answer spoken to you.  We are not there yet, but we're starting to get really close.

This is an example of what I consider the appliance trend, applied to software.  The application provides easy access to a common (if limited) set of features that are intuitive to use.

Why the iPad will succeed, and the Rise of the Computing Appliance

The iPad is an appliance, and it will be successful.  But before we get to that, we need to start at the beginning.

When I was a kid I ran a dial-up BBS (The Outhouse) on a computer cobbled together from old donated machines and a few new parts I'd purchased.  I would take apart old computers (donated by my friends' parents after their companies discarded them) and test the various parts for anything I could scavenge.  I spent hours poring over the massive Computer Shopper magazine to find the best deal on a new hard drive or modem.  This was the very definition of an (economy) do-it-yourself general purpose computer.

In my first two decades of computing I never purchased a pre-built computer.  I always assembled new computers from parts, or upgraded my existing computer (often replacing everything but the case and CD-ROM drive).  Pricewatch.com was my favorite site for a long time.

But along came a new device that started a big change: TiVo.  I bought my first TiVo around the year 2000.  It was something I could have built myself, but I realized that the convenience of having a dedicated appliance was worth the cost.  I just wanted it to work, and it did.  Very well.

In the years since, I've been transitioning away from general purpose computers to appliances.  The Linux and Windows desktops/servers I used to run (24/7) have been replaced by a Linux-based wireless router (Linksys WRT54GL) and a NAS (ReadyNAS NV+).  They provide nearly all the services the old general purpose computers provided, with a few exceptions, and each of those exceptions has been moved to the cloud.  I've moved my website hosting and email hosting to the cloud using GoDaddy (< $5/month for web hosting) and Google Apps (free).  They provide a better quality of service at a minimal cost.  Along the way I migrated from writing and hosting my own email server (Java Email Server), to hosting email at GoDaddy, to free hosting at Google.  That is quite a shift in effort and cost.

The same progression is true with my other devices.  I now exclusively use laptops, and have not assembled a desktop in more than 4 years.  I've also embraced Apple devices, which have more of an appliance feel than other devices.  I use iTunes to manage my music, a couple of AirPort Express units to listen to music on my stereos, and an iPhone as my mobile music device and phone.  While one could argue that these are not really a move to appliances, I think they match the general trend.  Appliances provide a pre-defined (somewhat inflexible) experience that 'just works' as long as you stay within the provided feature set.  This is exactly what Apple excels at doing.

Finally, I've adopted a Kindle.  While I could read on a laptop or an iPhone, this single purpose device excels at linear reading (i.e., books).  I use it every day.

There are several reasons for this trend.  One is simple economics.  As the years go by, I have more discretionary income and more demands on my time (namely two young children).  The trade-off between tinkering and buying appliances has certainly shifted for me.

I think there is more to the story, though.  As the computing and home electronics fields mature, it becomes easier and more cost effective to create appliances that fit into our worlds.  Which brings us to the iPad.

The iPad is not the first device to attempt to move the general purpose computing environment to an appliance.  One could argue that it is really a descendant of the WebTV concept.  Take the primary activities people use general purpose computers for and put them into an appliance.  This brings up a brief and interesting digression...

Apple does not create markets.  Apple waits until a market is ready, and then delivers a product with impressive polish and ease of use.  MacBooks don't do anything a similarly priced PC can't do; they are just prettier and easier to use (or have been traditionally).  The iPod was not the first portable MP3 player; it was just better (including the iTunes ecosystem).  The iPhone didn't break new ground on smart phone functionality; it was just better (again, including the iTunes and App Store ecosystems).  Finally, the iPad isn't new either.  Microsoft has had a Tablet PC version of its operating system since 2001.  The previously mentioned WebTV provided email and web browsing as an appliance experience.

Apple is attempting to build on these ideas, with Apple's traditional polish, iTunes ecosystem, and of course, Reality Distortion Field.  I don't know that version 1 of the iPad will be a success, but I am convinced that appliance computing will become a significant mainstream success.

Final Note to Developers: As software developers, we will always use a general purpose computer.  Just as a carpenter utilizes a set of tools to build a house, we will utilize a set of tools (i.e., general purpose computers) to build appliances.  Our goal should always be building applications in the appliance mindset.  My youngest child (2 years old) can turn on my iPhone and open her favorite puzzle game.  All of our computing experience should be this easy.

Creating a TimeMachine SparseBundle on a NAS

I use a ReadyNAS NV+ as my backup drive and bulk storage.  Although the newer firmware directly supports Time Machine, I've never been able to get that to work.  (This probably has something to do with the fact that I was upgrading and downgrading my NV+ firmware quite a bit to debug a separate issue.)

However, I did find a great tool to create SparseBundles that you can use on a NAS (or any external disk).

BackMyFruitUp.  First, it is a great name.  Second, it is a simple and easy tool.  The tool I actually use is 'Create Volume Backup,' a subproject of BackMyFruitUp, which you can download from this page.

You hardly need instructions.  Unzip it, run it, and type in the size you want for the sparsebundle.  Then just copy it to your destination share and point Time Machine at it.  Done.
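
If you'd rather skip the tool, or just want to see what it is doing under the covers, you should be able to create a sparsebundle by hand with OS X's hdiutil command.  A rough sketch (adjust the size, volume name, and file name to suit):

hdiutil create -size 300g -type SPARSEBUNDLE -fs HFS+J -volname "TimeMachine" MyBackup.sparsebundle

Then copy the resulting bundle to your share and point Time Machine at it, same as above.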

Of course, I wouldn't need it now if my Time Machine SparseBundle hadn't become corrupted.  Luckily I didn't actually need anything from the corrupted backup.  I also perform a separate rsync backup on occasion to ensure I have a 'basic' backup of my user directory as well.

Ignorance Spam

I've been getting a lot of spam recently.  But this isn't normal spam; it is actually 'legitimate' bulk mail, although *I* didn't sign up for it.  What is going on?

Someone out there, apparently named Emily Daugherty, thinks that my gmail address is actually her gmail address.  She's been signing up for all sorts of websites using her (really my) email address over the past few months.  That results in all sorts of useless email for me that is not caught by normal spam filters.

Today she finally tried to reset my gmail password!  I'm not sure if she really doesn't know her email address, or is simply really, really, really bad at typing it in.  Either way, it needs to stop.

I need to somehow convince her to stop using my email address.  Unfortunately, I don't know her real address, since apparently she doesn't really know it either.

Any ideas?

Does Write Once Run Anywhere Work?

Yes, and No.

Write Once, Run Anywhere, a slogan created by Sun to evangelize the virtues of the Java Platform, is a controversial approach to software development.

Write Once, Run Anywhere (WORA) is accomplished through an abstraction layer between the 'compiled code' and the operating system and processor.  This abstraction usually takes the form of a Virtual Machine or Runtime, such as the Java Virtual Machine (JVM), Microsoft's Common Language Runtime (CLR), Flash Player (or the AIR runtime), or one of the many interpreted language runtimes (Perl, PHP, Ruby, etc.).  These runtimes convert the intermediate language into device-specific code that can execute on the local operating system and processor.  While this abstraction introduces extra steps that slow down execution, it also provides features not (easily) available in native code, such as garbage collection and Just In Time (JIT) compilers, which can optimize the code while it executes rather than at compile time.
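
To make this concrete, here is a trivial Java sketch (a hypothetical example).  The source is compiled once into bytecode, and that same Hello.class file runs on any operating system with a JVM:

// Hello.java - compile once (javac Hello.java) to produce Hello.class,
// portable bytecode that any JVM can execute unchanged.
public class Hello {
    public static void main(String[] args) {
        // The bytecode is identical on every platform; the JVM
        // translates it to native instructions at runtime.
        System.out.println("Running on " + System.getProperty("os.name"));
    }
}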

So does it work?  Yes, and No.

Server

WORA languages have achieved a significant level of success on the server side.  Code that runs on large servers and interacts with clients over HTTP or other protocols is almost always written in some form of WORA language, whether it is Java, .Net, PHP, Ruby, Perl, or another interpreted language.  There is no advantage to using native code in these cases.  All interactions with the user are through an intermediate protocol/interface, such as HTML over HTTP for websites, XML over HTTP for web services, or various other formats and protocols used to exchange information between servers and clients or other servers.

There are certainly some applications developed for servers in native code.  Database servers are the most common example, but LDAP servers, web servers (Apache), and others are compiled to native code.  However, there are WORA versions of each of these examples, and many of the native applications were first written before WORA languages took off.

There is no denying that WORA is a huge success on the server side.

Client

Which brings us to No.

WORA has struggled on the client side.  The biggest challenge is Human Interface Guidelines (HIGs): recommendations published by operating system vendors (Microsoft, Apple) that define how an application should look and interact with the user.  Applications that follow these guidelines look like 'Windows' or 'Mac' applications.

With WORA, application developers have two choices: follow the guidelines of a specific platform and ignore the others, or compromise between the various target platforms, creating an application that doesn't match any of them.

Early Java desktop applications looked like Java applications.  They were obviously different from the other applications that users were used to interacting with, and were often shunned.  This has led to a negative view of WORA applications in general, as John Gruber comments on a Jason Kincaid article:
Jason Kincaid nails it: “write once, run everywhere” has never worked out. It’s a pipe dream.
In the context of client applications, I have to (mostly) agree.

There are exceptions.  In the Java world, nearly every developer uses an Integrated Development Environment written in Java, whether it is Eclipse, IntelliJ IDEA, or NetBeans.  But developers are a very different target audience than general computer users.

Another example is Flash and Flex applications.  Since they are usually delivered in the web browser, there are no real Human Interface Guidelines that govern their interactions, other than the expected HTML experience.  This can work, but it can also be horribly painful, as many people have discovered trying to find a menu on a restaurant's website.

Mobile

There is a third act to this story.  Mobile.

Apple has taken the mobile market by storm with its iPhone and App Store.  With over 100,000 applications available, the iPhone has become THE mobile development platform.  And every one of these applications was compiled to native code.

A consistent user experience is even more important on a mobile device with a limited display and limited input capability.  Apple's success is in part due to its consistent device design.  Every iPhone/iPod Touch/iPad version has a single home button and a touch screen.  There are two screen sizes: the iPhone size and the iPad size.  While individual device capabilities do vary (memory, speed, GPS, compass, etc.), the primary interface components are all the same.  And because the keyboard is software-based, it is the same across all devices and applications.  All of this makes developing applications for the platform much more predictable and enjoyable.

The Windows Mobile and Android platforms both span a wide variety of device form factors, screen sizes, physical buttons, and device features.  This makes it much more difficult to build an application that is easy and intuitive to use across the platform.  And I think the quality and quantity of applications on the Windows Mobile and Android platforms demonstrate this point.

Solution

There is a solution, of sorts.  HTML in the browser is the most successful WORA language and runtime for client applications since the ANSI/VT100 terminal.  By providing a common language and interface, it lets applications be written once for all operating systems without the pain of violating their human interface guidelines.  The browser itself conforms to the local guidelines, and users expect the experience in the browser to be different from a native application.

It is time to evolve this paradigm to the next level.  HTML 5 is a good first step.  It provides the ability to display video, store data locally, and draw 2D graphics in a standardized way.  But to be successful, these features and more need to be implemented consistently across browsers, enabling developers to truly develop great WORA client applications.

As an intermediate step, frameworks and libraries that abstract away browser differences offer a short-term solution.  JavaScript libraries such as Prototype and jQuery smooth over browser implementation differences, while frameworks like Google's Web Toolkit (GWT) provide a platform for developing client applications that just happen to run in the browser.
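
To illustrate, here is a minimal (hypothetical) GWT entry point.  The developer writes plain Java, and the GWT compiler translates it into JavaScript tailored to each browser, handling the differences the JavaScript libraries must paper over by hand:

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

// A minimal GWT module. The GWT compiler turns this Java class into
// browser-specific JavaScript, so the developer never touches the
// underlying browser implementation differences directly.
public class HelloGwt implements EntryPoint {
    public void onModuleLoad() {
        Button button = new Button("Say Hello");
        button.addClickHandler(new ClickHandler() {
            public void onClick(ClickEvent event) {
                Window.alert("Hello from compiled Java!");
            }
        });
        RootPanel.get().add(button);
    }
}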

Realistically, I think tools like GWT are the future.  As a Flex developer, I enjoy the ability to quickly and easily create rich applications that will render the same on every user's machine.  But I would prefer that Flex applications compile to HTML and JavaScript, so they could run natively in the browser.

In the future, we will develop using various languages and platforms, but they will all compile down to code that runs natively in the browser.  Or so I hope.

iTunes Export 2.2.1 Released

iTunes Export exports playlists defined in your iTunes Music Library to standard .m3u, .wpl (Windows Media), .zpl (Zune), or .mpl (Centrafuse) playlists. iTunes Export also supports copying the original music files with the playlist to facilitate exporting to other devices. iTunes Export is open source and freely available for use.

The 2.2.1 release features updates and bug fixes to the console and GUI versions.

In both versions:
  • Enhanced the playlist name filter to include characters from the Latin 1 Supplement Block.
  • Added (Console) / Changed (GUI) the 'addIndex' logic. It now uses an incrementing index instead of the iTunes song index, following the order of the songs in each playlist.
For the Console version:
  • Replaced ad-hoc URL decoding of file paths with URLDecode class. Now non-ASCII characters are handled correctly in file names.
If you find any issues or have questions, please email me (eric@ericdaugherty.com). Please include which application and version of iTunes Export (GUI or Console, 2.2.1, etc.) and the operating system you are using.

Integrating GraniteDS and BlazeDS with a Spring WebMVC Application

Both GraniteDS and BlazeDS provide support for remote calls and messaging using Adobe's AMF protocol. However, when it comes to integrating them with a Spring application that already uses Spring's DispatcherServlet, the projects start to show some differences.

In a previous post, I outlined the steps for getting GraniteDS 2.0 set up with Spring. However, that approach results in two separate Spring contexts, so my Spring service with the 'singleton' scope was being loaded twice. Not good.

I found that GraniteDS 2.1 supports better Spring integration. You can see a blog post here that describes the process, or their updated documentation.  Note that the blog post seems to be somewhat out of date.  One issue is the schema URL: the blog uses http://www.graniteds.org/config/granite-config-2.1.xsd instead of the correct http://www.graniteds.org/public/dtd/2.1.0/granite-config-2.1.xsd.

The BlazeDS approach has a good overview here, and the documentation is pretty good too.  In my case, I used the @RemotingDestination annotation on my service beans instead of adding:
<flex:remoting-destination />
to the Spring config as I'm using auto-wiring for most of my beans.
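
For reference, here is roughly what the annotation-based approach looks like (ProductService is a made-up example bean, following the pattern in the Spring BlazeDS documentation):

import java.util.Arrays;
import java.util.List;

import org.springframework.flex.remoting.RemotingDestination;
import org.springframework.stereotype.Service;

// A Spring bean exposed to Flex clients over AMF. The annotation
// replaces the <flex:remoting-destination /> XML declaration.
@Service("productService")
@RemotingDestination
public class ProductService {

    // Callable from ActionScript via a RemoteObject whose
    // destination is "productService".
    public List<String> findProductNames() {
        return Arrays.asList("Widget", "Gadget");
    }
}

With this in place, a Flex RemoteObject pointed at the 'productService' destination can invoke the method directly.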

There are a couple of things that bothered me about the GraniteDS approach as opposed to the BlazeDS approach.

First, the BlazeDS approach retains a (simplified) services-config.xml file, so the traditional configuration options are still available, and the integration with Flex/Flash Builder works as well.  Not a big deal, but it stays closer to the existing conventions.

Secondly, I was able to get BlazeDS working much faster. The project simply seems to be more mature, and the documentation and examples are clearer. The GraniteDS documentation notes that they are working to achieve parity with the BlazeDS Spring integration, so it is likely that they will be able to close this gap. But for now, it seems to be a work in progress.  The Spring integration is only available in the 2.1 version, which is currently at RC2.  I also had to use the 2.x version of spring-security-core instead of the 3.0 version, as Spring seems to have refactored the location of the BadCredentialsException class.
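
If you use Maven, the workaround looks something like this (the exact 2.0.x version number here is illustrative; use whatever 2.0.x release is current):

<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-core</artifactId>
    <!-- Stay on a 2.0.x release; 3.0 relocated BadCredentialsException -->
    <version>2.0.4</version>
</dependency>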

All that said, GraniteDS does provide more features than BlazeDS, so the comparison may not be entirely fair. I found the GraniteDS ActionScript code generator (gas3) to work well, although it seemed to miss some import statements, which I had to add before the generated code would compile in Flex 4.  However, the community around BlazeDS seems larger at this point, with the expected increase in polish.  Still, competition is good, so hopefully future versions of GraniteDS continue to improve.

Either way, it is good to see both of these projects working to provide easy integration with an existing Spring WebMVC project. Adding new Flex functionality to an existing Spring application should be very painless.

Migrating from Google Blogger FTP to a Custom Domain

Up until today, my blog was hosted at http://www.ericdaugherty.com/blog using Blogger's FTP publishing service. This service uploaded HTML files to my GoDaddy hosting account whenever a new post was created. While it worked reasonably well, and I liked the control it provided (since I had a backup copy of my entire blog on my hosting site), it did have a few issues. While I was not excited about moving my blog, I understood Google's reasoning.

Google provides a 'Custom Domain' publishing option, allowing Google to host the blog using a DNS name that you own. In my case, I chose http://blog.ericdaugherty.com as my new custom domain name. I could not keep my current address because Custom Domain publishing does not support publishing to subdirectories.

My first step was to create the blog.ericdaugherty.com CNAME and point it to Google's hosting DNS name, ghs.google.com. I use GoDaddy as my registrar and DNS Server and the changes took effect almost immediately. To test, I opened the new address in the browser, and was greeted with a Google hosted page that stated 'Not Found'. Good so far.
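
For reference, the record itself looks something like this in zone-file syntax (GoDaddy's UI just asks for the host name and the 'points to' value):

blog.ericdaugherty.com.    IN    CNAME    ghs.google.com.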

Step two was to update my blogger configuration to use Custom Domain publishing with the blog.ericdaugherty.com CNAME. I also used the 'missing files' host option to host all of the pictures I'd uploaded previously. I used my existing host at www.ericdaugherty.com to host the missing files. I hit save and reloaded the site. It immediately loaded, but was missing all of the formatting and images. My template assumed it was being loaded from www.ericdaugherty.com, so the links to my images and style sheets were broken.

I also used some PHP includes to generate the common navigation blocks on my site, so I had to copy that code into my template, as the pages were no longer hosted on the same server. After I made the appropriate changes to the template, the blog displayed correctly. However, the Blogger NavBar now appeared at the top of the blog. I followed the instructions I found here, which boil down to adding this to my css file:
#navbar-iframe {
display: none !important;
}
I also noticed that my favicon was no longer showing. I added the following to my Blogger template to direct requests back to my main site:
<link href='http://www.ericdaugherty.com/favicon.ico' rel='shortcut icon'/>
<link href='http://www.ericdaugherty.com/favicon.ico' rel='icon'/>
I use FeedBurner to track my RSS subscribers, so I updated my FeedBurner settings to point to the new URL. If I had not used FeedBurner, I would have needed to add a redirect to my .htaccess file (see below).

Finally, I needed to add redirects from the existing site to the new location. I already had an .htaccess file for my existing site, so I edited it to include the new redirects. My RewriteEngine was already turned on with:
RewriteEngine on
So I just needed to add the new rewrite rules using a 301 (permanent) redirect to notify any requesting agents (including search engines) that the content has been permanently moved. I also included a redirect for my old rss.xml file to the FeedBurner URL, to make sure anyone who had directly subscribed to the feed was instead using the FeedBurner version. Finally, I excluded the uploaded_images directory, as the hosted blogger site will still reference those images for the old blog posts.
RewriteCond %{REQUEST_URI} /blog/rss.xml [NC]
RewriteRule ^(.*) http://feeds.feedburner.com/EricDaugherty [R=301,NC]

RewriteCond %{REQUEST_URI} !^/blog/uploaded_images/.*$
RewriteRule ^blog/(.*) http://blog.ericdaugherty.com/$1 [R=301,NC]
That's it. It took a bit of effort but wasn't too bad. If you are reading this post, then the new settings are working. Let me know if you see any errors!

Excluding Content from url-pattern in Java web.xml

A while ago I blogged about my frustration with my inability to exclude certain URLs from a url-pattern mapping. In short, I wanted to map everything except /static/* to a dispatch servlet.

There is a way to do this in some servlet containers. Many (most? all?) servlet containers define a default servlet to handle static content. By default, any unmapped URLs will be handled by the default servlet. However, you can explicitly map certain URLs to this default servlet, achieving a de facto url-exclude pattern.

Here is an example:
<servlet-mapping>
    <servlet-name>default</servlet-name>
    <url-pattern>/static/*</url-pattern>
</servlet-mapping>

<servlet-mapping>
    <servlet-name>SpringMVCServlet</servlet-name>
    <url-pattern>/*</url-pattern>
</servlet-mapping>
In this case, SpringMVCServlet is defined in your web.xml, but default is not. The default servlet definition is inherited from the servlet container.

The order shouldn't matter, as long as your mapping for the default servlet is more specific.

This is known to work in at least Jetty and Tomcat.