Developer & standards advocate (Google)
I’m Mark Pilgrim. I’m a developer advocate for Google. I write open source books and advocate open standards. Most recently, I’ve been advocating HTML5 and related web standards.
On my phone:
On my desktop:
That’s it, really. There’s other stuff that got installed by default, but I’m not attached to it, I rarely use it, and I don’t remember what it’s called. Most of my daily “applications” are really web pages.
There are no CDs or DVDs anywhere in my house, except some originals in the attic. My kids watch movies on an AppleTV set-top box and play games on a soft-modded Wii. All the Wii games we’ve bought have been “ripped” to an external hard drive, and the Wii boots into a custom launcher. I listen to digital music on my computer, via the network-enabled Squeezebox in the dining room, or on an old iPod in the living room that I hooked up to a Klipsch iGroove.
I’m a three-time (soon to be four-time) published author. When aspiring authors learn this, they invariably ask what word processor I use. It doesn’t fucking matter! I happen to write in Emacs. I also code in Emacs, which is a nice bonus. Other people write and code in vi. Other people write in Microsoft Word and code in TextMate or TextEdit or some fancy web-based collaborative editor like EtherPad or Google Wave. Whatever. Picking the right text editor will not make you a better writer. Writing will make you a better writer. Writing, and editing, and publishing, and listening – really listening – to what people say about your writing. This is the golden age for aspiring writers. We have a worldwide communications and distribution network where you can publish anything you want and – if you can manage to get anybody’s attention – get near-instant feedback. Writers just 20 years ago would have killed for that kind of feedback loop. Killed! And you’re asking me what word processor I use? Just fucking write, then publish, then write some more. One day your writing will get featured on a site like Reddit and you’ll go from 5 readers to 5,000 in a matter of hours, and they’ll all tell you how much your writing sucks. And most of them will be right! Learn how to respond to constructive criticism and filter out the trolls, and you can write the next great American novel in edlin.
People then ask what format I use to write my books. That’s a more interesting question. It’s still completely orthogonal to becoming a better writer, but I’m a text wonk, so I’ll tell you. I wrote the original Dive Into Python in DocBook XML. Then Dive Into Greasemonkey, then documentation for Universal Feed Parser, all in DocBook. I self-published “Dive Into Python” in HTML, PDF, Word, and plain text. For years, there they sat, a list of downloads in different formats. Then I looked at my logs and realized that very few people ever downloaded it at all, and those that did mostly downloaded the HTML version. This was an epiphany. I publish my work in HTML, people primarily read my work in HTML, so it makes sense to write in HTML too. Writing in one format and converting it to HTML is not worth the mental and technical overhead. HTML is not just one output format among many; it is the format of our age. This epiphany was one of the reasons I got involved in HTML5.
Dive Into Python 3 was my first major work that I wrote entirely in HTML. (I had to convert the entire book to Microsoft Word format as part of the print publication process. That was… unpleasant.) My next book, Dive Into HTML5, is also written in HTML, and my editor tells me that they will handle the nasty business of converting it into suitable formats for print. This may become a factor in choosing a publisher for future books: the ability to avoid Microsoft Word altogether.
I have an Apple IIe in my attic. My parents bought it in 1984. We used it exclusively for five years; I wrote my first program on it, I wrote my first poem on it, my mother ran her first business on it. We sold it to a family friend in 1989, and she used it as her primary computer for 10 more years, until 1999. A few years ago, I paid her to ship it back to me. The damn thing still works – color monitor, 80-column card, original disk drives, everything. Most of my 25-year-old 5.25-inch floppy disks still work. Of course there’s no software being written for it anymore (except Silvern Castle, God bless you), but what it could do in 1984, it can still do just as well in 2009.
I’ve had my current desktop for a little over two years. I want to continue using it for another 20. I mean that literally: this computer, this keyboard, this mouse, these three monitors. 20 years. There’s no technical reason the hardware can’t last that long, so it’s a matter of whether there will be useful software to run on it. First, there’s the operating system. People throw away computers every day because they’re “too slow” to run the latest version of their preferred operating system. Linux (and open source in general) is not immune to this, but I think it’s more immune than proprietary operating systems. Debian only recently dropped official support for Motorola 68K machines; that’s stuff like the Mac IIci that I bought off the clearance rack at Microcenter in 1992. The latest version of Debian still runs on my old PowerPC “G4” Apple laptop, even though the latest version of Apple’s operating system doesn’t. Commercial vendors have a vested interest in upgrading you to the latest and greatest; supporting the old stuff is unglamorous and expensive. Commercial open source vendors aren’t really much better than commercial proprietary vendors in this regard, but community-led Linux distributions can afford to have different priorities.
Next in the software stack is drivers. Everything from the network card to the graphics card to the sound card needs a working driver. Linux has the most comprehensive driver support of any operating system, ever. Yes, I’m including Windows in that statement. People think Linux driver support sucks because newer hardware sometimes only works with proprietary Windows drivers. That’s true, but there’s a lot more old hardware in the world than new hardware, and Linux has superior support for older hardware because the community writes and maintains their own drivers. People throw away computer accessories every day because they upgrade their operating system and can’t find functioning drivers. (Will that scanner you bought in 1999 still work on your shiny new 64-bit Windows 7 machine? I wouldn’t bet on it.) All of my hardware is supported today by open source drivers, which removes one of the primary reasons that people throw away working hardware. Again, I’m not saying Linux never drops support for older hardware, but the cycle is longer and the incentives are different.
Next up is applications. Open source has the clear advantage here, because communities can recompile and redistribute other people’s software for multiple platforms. I’m currently running a 64-bit operating system on 64-bit hardware. With few exceptions, all of the software I run can also be recompiled to run on 32-bit operating systems. This is so common now that we take it for granted, but it’s really quite remarkable. No doubt there will soon be 128-bit hardware available at reasonable prices, and then other advances after that. And Linux distributions will take advantage of newer hardware, but they can also continue supporting older hardware for much longer than proprietary operating system vendors, which rely on individual developers to support each platform. So if there’s an operating system that still runs on my hardware 20 years from now, I’m pretty sure I’ll be able to run Emacs on top of it.
But hey, you asked for my dream setup. That’s it: one computer for 20 years.