Great Whatsit

Tomorrow morning (10/17/2007), I’ll be posting over on my favorite community blog, The Great Whatsit. The post will be available at 8:00AM EST, so way before you west coast suckers wake up.

There’s no single topic this time. Instead, I review the new Radiohead album, the new Me’shell Ndegeocello album, the concert I attended on Friday, and a hilarious YouTube video that I finally got bored of watching about five minutes ago. Enjoy!

Radiohead – In Rainbows

Over the last 3 weeks, several major artists have parted ways with record labels and their traditional business and marketing strategies in favor of leveraging new-fangled Internet mechanisms for distribution and promotion.

Radiohead announced a week or two back that they would offer their new album “In Rainbows” for digital download, and let their fans pick the price they wanted to pay for it. This is a revolutionary (and maybe crazy) idea, and we’ll need to wait to see how it turns out. As one commenter put it, this “cuts straight to the moral dilemma of downloading,” but it also puts the question squarely on the fans: how much is this music worth to you?

For me, it’s less of a moral dilemma than a simple question. I don’t quibble with the argument that stealing music is wrong, but I do take issue with the cost of music, DRM-strapped files, and the fact that some record label is taking ninety cents on the dollar for every CD that goes out the door. The behavior of these major corporations doesn’t change the basic laws or ethics around illegal file sharing, but it is refreshing to see artists taking the music industry out of the equation. I downloaded Radiohead’s album today, and my price point was about 5 euros (about $7). I would pay about $10 for the CD or vinyl. That’s the standard Dischord price and I think it’s fair.

The album, by the way, is worth every penny I paid. Go get it. Besides the music, I’m thrilled that the money goes directly to Radiohead and whomever they have worked with to get this album out. This is what it should feel like to interact with your music and favorite artists. It feels good, empowering and personal.

Shortly after Radiohead’s announcement, Nine Inch Nails announced that the band will no longer have a relationship with a record label, and will henceforth be considered “free agents.” I don’t know what that means, and frankly I don’t care, but it’s another chip in the foundation of an already weak music empire. Funk rockers Jamiroquai and the crap Brit-pop outfit Oasis have made similar announcements.

Yesterday came the kicker from the godmother of pop, Madonna, that she too would forgo the support of the major music industry. Madonna’s business savvy has always been part of her brilliance as an artist, so the fact that she has made this decision suggests that the tide has turned conclusively against the labels.

Their options are narrowing: if they can’t convince big artists like Madonna to stay on, they will lose their major revenue stream, which means they won’t have as much capital to invest in up-and-comers. Conversely, if these young acts view the major labels only as a stepping stone to independence, rather than the other way around, the labels’ expectation that they can milk an artist for three or four albums before putting them out to pasture evaporates. In short, their revenue model has gone bust.

The landscape is wide open, and fans and artists are winning. Digital downloads, Internet promotion, viral marketing, crowd-sourced videos, mash-ups. This has been on the horizon for a long time, but it’s by no means a stretch to say that the future of music is now.

View From the DJ Booth @ Olive

The infamous Jeannie Yang and I will be trading off on the turntables at Olive tonight. Drop by for a drink, some delicious food and good music. Olive is under new ownership, and they have a revamped menu and some new faces behind the bar, but the same great vibe and tasty stuff. Jeannie will be spinning her eclectic house records, and I’ll be augmenting her set with intermittent old school hip-hop, soul, funk and anything else that pops into my head. I’ll be aided by the wonders of my new toy, Serato Scratch Live. Come check it out!

I’ve been contributing regularly enough on The Great Whatsit that I shouldn’t really consider myself “guest” blogging over there (as I’ve done in the past); I’m really a contributor there. My posts appear occasionally as part of the “West Coast Wednesdays.” I will try to post here beforehand to give reader(s) a heads-up.

Today’s post is a critique of the Burning Man festival. I was originally going to just poke fun at how seriously the BM clique takes themselves, but after giving it a bit more thought, I realized that I actually had some criticisms of the festival. Have a read and enjoy! If you are going to comment, please do so over on TGW.

I’ve been getting shelled with comment spam recently, so I just added a CAPTCHA plugin (called reCAPTCHA) that will hopefully weed out all of the annoying links to bad porn sites, Viagra offers and online gambling sites showing up in my mailbox. Readers have been spared this annoyance because I moderate all of my new commenters, but it’s starting to wear on me. Sadly, this will also mean no more ‘cool’ comment spam, but that’s life.

CAPTCHA stands for “Completely Automated Public Turing Test to tell Computers and Humans Apart,” and is basically a way to weed out bots by using a simple ‘reverse’ Turing test: a challenge/response that a computer will most likely fail and a human will most likely pass. It’s a very practical application of artificial intelligence techniques, although applied for the opposite reason: to see how stupid a computer is, not how smart it is.
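The mechanics are simple enough to sketch in a few lines of PHP. To be clear, this is just the bare challenge/response idea, not how the reCAPTCHA plugin works (reCAPTCHA serves distorted images of scanned words and verifies the answers on its own servers); the function names here are mine.

```php
<?php
// Generate a short random challenge string. A real CAPTCHA would
// render this as a distorted image so a bot can't just read it.
function make_challenge($length = 6) {
    $alphabet = 'abcdefghjkmnpqrstuvwxyz23456789'; // skip ambiguous chars like l/1, o/0
    $challenge = '';
    for ($i = 0; $i < $length; $i++) {
        $challenge .= $alphabet[mt_rand(0, strlen($alphabet) - 1)];
    }
    return $challenge;
}

// Compare what the commenter typed against the expected answer,
// forgiving case and stray whitespace.
function check_response($expected, $response) {
    return strcasecmp(trim($response), $expected) === 0;
}
```

In practice, the expected answer gets stashed in the session when the comment form is rendered, then compared when the form comes back; a bot that can’t read the challenge never gets its comment posted.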

This particular implementation is cool because, while it does require an additional step from anyone entering a comment on my blog (i.e., Jeannie), each successful ‘pass’ of a CAPTCHA on my site helps advance the cause of digitizing books. Cool, right?

Tomorrow I have a post on my favorite community blog, The Great Whatsit. The topic is music and its effect on perception; the subject header is “South Philly Through Your Arteries.” The post should be live at 8:00 AM Eastern.

Read and enjoy!

Warning: the following post is pretty boring. Move on before it’s too late, or read on if you are a geek!

In my downtime between contracts, I decided it was high time I finally automated the process of generating my resume in a variety of formats. Most people rarely need to update their resumes, and when they do, they might only need to create one or two versions of it — say an HTML version and a Word version. It’s fairly easy for most people to just update their resume by hand and make a couple of different versions of it when the need arises.

As a consultant, however, my needs are more complicated. My contracts are generally short, ranging anywhere from one month to a year, which means 1) I have to update my resume frequently, and 2) I have a long list of consulting work that can make my resume look disorganized (a year here, six months there, etc.), so it’s ideal to be able to show a summary version of my resume and offer a greater level of detail if necessary. I consult for various kinds of organizations, and I perform different kinds of work depending on the needs of the organization, which means that I want to be able to highlight different components of my skill set based on context. Moreover, my clients have different requirements for how they want to receive my resume, which means I typically need to be able to provide my resume in at least four formats: HTML, Word, PDF, or plain text.

Add all of these requirements together, and you can see that maintaining my resume manually is a major-league pain in the butt. It quickly becomes cumbersome to make sure all the formats are up to date, that I can provide either a detailed or a summary-level view of my work, and that all of the different versions stay in sync across formats.

My goal was to have a model-driven, single document containing all of my information, and to use that document as a ‘system of record’ from which any number of views and formats could be generated. When I was in graduate school, I actually created a resume along exactly these lines in a Document Engineering class. The document was written in XML, compliant with an XML Schema that I had designed. I refined the schema and the document, and used them as my starting point. The schema itself was designed to be finely grained and applicable to any technically oriented resume. Once the model and document were completed, I turned to generating the necessary views.
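To give a flavor of the approach, the single source document looks something like this. The element names below are invented for illustration; they are not my actual schema.

```xml
<resume>
  <person>
    <name>...</name>
    <email>...</email>
  </person>
  <experience>
    <engagement start="2006-01" end="2006-06">
      <client>...</client>
      <summary>...</summary>
      <detail>...</detail>
    </engagement>
  </experience>
</resume>
```

The summary view renders only the summary elements; the detail view pulls in the detail elements as well, which is how one document serves both audiences.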

For the HTML version, I wrote a series of PHP classes representing the various objects in my XML resume: Person, Qualifications, Degrees, Experience, Projects and Publications, etc. I then simply read in the XML document, loaded the objects, and generated two different views: an overview and a detail view. I chose to develop PHP objects rather than use a simple XML transform because ultimately I would like to use these objects for a more graphical, dynamic representation of my skill set. For now, though, I simply wanted to dynamically generate the content that I wanted to display on my website.
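Here is a sketch of that approach. The element names and class internals are invented for illustration, not my actual code, but the shape is the same: load the XML, hydrate objects, render a view.

```php
<?php
// One class per resume concept; this one wraps a single engagement.
class Experience {
    public $employer, $title, $summary;

    public function __construct(SimpleXMLElement $xml) {
        $this->employer = (string) $xml->employer;
        $this->title    = (string) $xml->title;
        $this->summary  = (string) $xml->summary;
    }

    // Overview view: one compact line per engagement.
    public function renderOverview() {
        return "<li>{$this->title}, {$this->employer}: {$this->summary}</li>";
    }
}

// Read the XML document, load the objects, and generate the overview.
function render_experience_list($xmlString) {
    $doc = simplexml_load_string($xmlString);
    $html = '<ul>';
    foreach ($doc->experience as $node) {
        $e = new Experience($node);
        $html .= $e->renderOverview();
    }
    return $html . '</ul>';
}
```

A detail view is the same loop calling a different render method on the same objects, which is exactly why objects beat a one-shot XML transform here.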

That was the easy part. The challenge came when I needed to replicate these views as plain text, RTF and PDF. I spent a lot of time screwing around with LaTeX/TeX and various PHP classes, and talking to several people, before finally opting to use XSL-FO (aka XSL 1.0) and the Apache FOP processor to get the job done. I’ve got to say that the latest version of FOP (0.9.2) is the cat’s meow. Using FOP, it was possible to use a single XSL stylesheet to publish to RTF (readable in Word) and PDF.
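A minimal sketch of what such a stylesheet looks like (again with placeholder element names, not my real schema):

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <xsl:template match="/resume">
    <fo:root>
      <fo:layout-master-set>
        <fo:simple-page-master master-name="letter"
            page-height="11in" page-width="8.5in" margin="1in">
          <fo:region-body/>
        </fo:simple-page-master>
      </fo:layout-master-set>
      <fo:page-sequence master-reference="letter">
        <fo:flow flow-name="xsl-region-body">
          <fo:block font-size="16pt" font-weight="bold">
            <xsl:value-of select="person/name"/>
          </fo:block>
        </fo:flow>
      </fo:page-sequence>
    </fo:root>
  </xsl:template>
</xsl:stylesheet>
```

The same stylesheet then drives both outputs from the command line, something like `fop -xml resume.xml -xsl resume2fo.xsl -pdf resume.pdf`, swapping the `-pdf` flag for `-rtf` to get the Word-readable version.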

I spent more time than I would have liked solving my resume problems, but the outcome is that I never have to hand edit my resume in multiple formats again. That’s well worth the work. My resume itself also showcases some of my technical skills in XML, PHP and CSS. All of the code used to generate my resume is available in the code examples area of my website. If you want to use any of the code to generate your own resume, feel free to download and use all of the code. The XML document containing my resume is also available. Just ask if you need help, and please give credit where it’s due.

Today, I’ll be blogging once again over on The Great Whatsit, one of my favorite community blogs. My topic today is “Life and Death in the Long Tail of Music.” The post should be up on the site today around 8:00 AM New York time.

The posting is a bit scattered but, I hope, entertaining. Please read and enjoy. All of my greatwhatsit posts (all two of them!) can be accessed here.

Recent Listening Screen Shot

Last Sunday I wrote a PHP tool that displays my recently played tracks in the side bar of my blog. I used the web services APIs from three sources to do this: Last.FM (recent tracks), MusicBrainz (album name), and Amazon (album art, label, etc.). My main motivation for writing this application was to replace the “Now Playing” application that provided similar functionality for my blog. I lost that plug-in, along with the license key, when my PC crashed a few weeks ago. I could have re-installed the Now Playing plugin, or used one of several other plug-ins for WordPress out there, but I wanted to see how easy or hard it would be to do this myself. I considered this exercise a baby-step along the way towards migrating the browsing and discovery capabilities of Orpheus from a fat client application to a web-based tool. There are miles to go before I get there, but this is a start.

I called this tool a ‘mash-up’ in the title, and to the extent that it fits Wikipedia’s definition of a “web application that seamlessly combines content from more than one source into an integrated experience,” it may loosely be considered one, provided we remove the adverb “seamlessly” from that description. Hitting up 3 data sources iteratively produces some unseemly latency. I could have removed MusicBrainz from the equation if Last.FM published the album name in their XML feed of recent tracks, but they don’t. So 3 web services it is. At any rate, this is my first mash-up, so yay for me.

And yay for you, because I’m posting the code here for others to use. It’s been tested in WordPress and Mozilla Firefox. It is a tad slow, but easy to configure and use. Be aware, there is little in the way of error handling, so if any of the 3 web services has problems, all goes to hell in a handbasket. I’ve seen this happen on the MB query when I have crazy-long track names, usually on Classical music. This code is licensed under Creative Commons’ “Attribution-NonCommercial-ShareAlike 2.5” license. For those interested in how I built this, and what I learned in the process, read on!

Before I started hitting up various web services, my first brilliant idea was to hack the iTunes API, take all of the relevant track metadata, query Amazon for album art and all kinds of other good stuff, and post it to my blog as XML for parsing. This is exactly what Brandon’s tool does, so I would essentially be rebuilding his system with fewer and different features to suit my needs. Of course, this approach required that I know C/Objective-C, which I don’t. After nodding off reading some code examples, I decided to defer my mastery of C for a later date. Ultimately, if I am going to migrate Orpheus to the web, I’ll need some simple little iTunes plugin, but that can wait. I discovered during my research that it is possible to query the iTunes XML database directly without working through the iTunes API, providing a real-time “snapshot” of the library. But there are challenges with doing this as well, and most of the data I could get from the iTunes DB I could get elsewhere. For now, I would avoid working with any local data at all, and rely exclusively on existing web service data and only one local plug-in, AudioScrobbler.

I was already using the AudioScrobbler plug-in for iTunes to post my listening behavior to last.FM. And, bless their hearts, last.FM is kind enough to offer up a WS API for accessing said data (as well as much more!). So I could get a live, on demand XML representation of my recently listened-to tracks via the Last.FM Web Service. As I mentioned earlier, Last.FM’s web service for recent tracks doesn’t return all of the metadata about a track. Most notably missing is the name of the album. Without the name of the album, an artist name and a track name only provide a partial picture of the song in question. Most ID3 tags describe the album name, so why isn’t it available on my recent listening tracks XML feed?
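The fetch-and-parse step looks roughly like this. The feed URL and element layout below are my recollection of Last.FM’s 1.0 web service, so treat them as assumptions rather than gospel.

```php
<?php
// Build the recent-tracks feed URL for a given Last.FM user.
function recent_tracks_url($user) {
    return 'http://ws.audioscrobbler.com/1.0/user/'
         . rawurlencode($user) . '/recenttracks.xml';
}

// Parse the feed into (artist, track) pairs. Note what's missing:
// no album name, which is the gap MusicBrainz has to fill.
function parse_recent_tracks($xmlString) {
    $doc = simplexml_load_string($xmlString);
    $tracks = array();
    foreach ($doc->track as $t) {
        $tracks[] = array(
            'artist' => (string) $t->artist,
            'track'  => (string) $t->name,
        );
    }
    return $tracks;
}
```

The actual fetch is a plain HTTP GET of `recent_tracks_url(...)` (e.g. via `file_get_contents`), with the result handed to the parser.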

I don’t know if this ‘bug’ is related to the data the AudioScrobbler plugin sends to Last.FM, or to Last.FM just not publishing the track data in its entirety. Whatever the reason, I needed the album name in order to build a useful query for Amazon. I decided to use MusicBrainz to attempt to determine the album name. MB’s Web Service is cool, but somewhat ill-suited for my very specific and unusual request: given an artist name and a track name, what is the most likely album the track appeared on? This is admittedly an ass-backwards way of going about things, but I needed that question answered. Tracks, naturally, can show up on a variety of albums: the single, the EP, the LP, the bootleg, the remix, etc. My queries returned some peculiar results in a few circumstances, so I decided to employ some additional logic to decide whether the album name returned from the MB query was reliable enough to use. This approach means I don’t get the name of the album in a lot of circumstances, which sucks. You can see how several of the albums have no cover art. If it can find the (correct) album on MB, the code will query the Amazon web service for album art and all the other goodies they have.
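The “reliable enough?” logic boils down to scoring the candidate releases MB returns. The rules below are a simplified illustration of that idea, not my exact code: prefer a proper album by the right artist, tolerate a single or EP as a fallback, and bail out (returning null) rather than guess.

```php
<?php
// Pick the most plausible album from MusicBrainz candidates.
// Each candidate is an array with 'album', 'artist', and 'type'
// (e.g. 'Album', 'Single', 'Compilation').
function pick_album($artist, $candidates) {
    $fallback = null;
    foreach ($candidates as $c) {
        if (strcasecmp($c['artist'], $artist) !== 0) {
            continue; // wrong artist: compilations often trip this
        }
        if ($c['type'] === 'Album') {
            return $c['album']; // a proper LP wins outright
        }
        if ($fallback === null) {
            $fallback = $c['album']; // first single/EP as a fallback
        }
    }
    return $fallback; // null means: don't trust it, skip the art lookup
}
```

When this returns null, the sidebar entry simply goes without cover art, which is the behavior described above.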

Once all of the data is collected, it gets parsed and posted as an unordered HTML list. Links to Last.FM and Amazon pages are included, and mousing over the image or track listing will show what time the track was played (in London time, unfortunately…). Pretty spiffy.

All of this was done using REST requests (no SOAP here) and PHP to parse the resulting XML files. I avoided using XSLT for processing the XML because my web server doesn’t have XSLT enabled in PHP. Plus, the data needed to get into PHP at some point anyway, so I decided to just do the parsing in PHP using xml_parse_into_struct. I relied on several great resources to build this. These two were the most useful. Visit my site for other useful sites.
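For anyone who hasn’t used it, xml_parse_into_struct flattens a document into an array of tag events, which is clunkier than a DOM but works everywhere PHP’s bundled expat parser does. A minimal wrapper:

```php
<?php
// Parse an XML string into a flat array of tag events.
// Each entry has 'tag', 'type' (open/complete/close), 'level',
// and, for text-bearing elements, 'value'.
function parse_xml($xmlString) {
    $parser = xml_parser_create();
    // Keep tag names as-is instead of upper-casing them.
    xml_parser_set_option($parser, XML_OPTION_CASE_FOLDING, 0);
    xml_parse_into_struct($parser, $xmlString, $values, $index);
    xml_parser_free($parser);
    return $values;
}
```

Walking that flat array with a loop (checking 'tag' and 'type' on each entry) is how the three feeds get turned into the sidebar’s HTML list.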

Download recentListening here. Feedback is always appreciated. Except negative feedback. Keep that to your bitter self!

Old System — All in one
Old Dell
New Gear!
New Media Setup
New Office Setup

Since 1999, the hub of my home computing environment was a Dell Precision 220: dual 1.3 GHz P3 processors, 256MB of SDRAM, ultrawide SCSI hard drives (40 GB), an IDE hard drive (80GB), and an eclectic array of PCI cards and peripherals. I ran Windows 2000 on it, never upgrading to XP. I love this machine because it is so funky, and I’ve spent days tinkering with its hardware and software. I bought it years ago to crunch a lot of data, and over time it has grown in size, complexity and importance.

I rely on this computer to:

  1. be my primary business computer. I use it to work, all day, every day.
  2. be my print server.
  3. store my digital music collection, and integrate it with my analog stereo system.
  4. play my DVDs on my TV and serve up the audio to my home stereo.

Last Thursday this old Dell suffered yet another crash, and I found during the reboot and subsequent diagnosis that my OS was unrecoverable and my boot disk damaged. Brutal. I knew my data was safe, but I was facing another MacGyver-type solution to keep this thing lumbering along. I could spend hours patching up this machine, or I could bite the bullet and spend mad dollars to upgrade my home computing environment. I chose the latter: this was the day I would overhaul my home computing environment. I had already given some thought to how I might meet all of my computing needs.

I decided to go modular. Instead of trying to cram all of my myriad computing needs into one box, I’d split up the system: buy a couple, maybe three, machines and link them up via my home network. I pretty much knew the components: a Windows laptop for work and a Mac Mini for my media server. I also needed a couple of peripherals. I wanted an Airport Express to be a print server, and I needed to pull the 80GB IDE drive out of the Dell and extract the precious data on it. The third photo to the right shows what I bought:

  1. An HP Pavilion DV1688 laptop.
  2. A Mac Mini
  3. An Airport Express server dock. From here, I should be able to serve up all of my media files to my entire home network.
  4. An internal IDE drive enclosure. Pull the IDE drive, mount it to the Mini, and everything’s cool, right? Sike. mount_ntfs: your bff. This command saved my digital music collection: mount_ntfs /dev/disk4s1 /Volumes/mounted_data

All said, this overhaul cost me about $2,500 and took about two days to set up. I’m quite pleased with the new architecture. It allows me to retire several old modules, and gives me a lot of new functionality and mobility. The new footprint is much smaller, clearing up all kinds of space in my apartment. My processing power has quadrupled, memory has increased eight-fold, and hard drive capacity has gone up 80 GB. All of this at less than half of what my previous system cost. I should note I’m still not fully recovered – I’m still missing two major features that I had: the “Now Playing” plugin, and the “” plugin (both for the blog). I should get those set up in the next few days.

My favorite part of the new architecture is that I can access my music collection from my bedroom laptop, and play the music through my main home stereo speakers (living room and kitchen) *and* in the speakers in my bedroom. I’ve achieved the goal of being able to play music in all the rooms in my house. Now that’s fresh!
