Compiled Software is Here to Stay

I sometimes hear claims that the web browser and web apps will replace traditional operating systems (like Mac OS X) and compiled native applications (such as iPhone apps). In particular, Google is developing a new operating system based solely on their Chrome web browser; and Palm/HP smartphones similarly use an operating system based on web technologies.

But while these web technologies are great and useful for many things, compiled software is here to stay. This is because the most innovative applications often require the most processing power and the latest features of a platform — attributes that can only be achieved with compiled software. Meanwhile, the networking technologies used primarily by web applications today can also be utilized by compiled software. Because of this, the most innovative user experiences are usually going to be compiled. And I think that’s bad news for web-only operating systems.

A little background

When I say “compiled software,” I’m talking about any application that is technically compiled and optimized for a particular hardware system. This includes most desktop Mac and Windows applications, native iPhone apps that you get from Apple’s App Store, and anything else that is written for a particular processor chip/operating system combination.

The alternative is called “interpreted” software. Examples include standard web pages (HTML, CSS), fancier web applications (JavaScript, Flash, etc.), Java applications (both web applets and desktop versions), and programs written in newer languages such as Python and Ruby.

Whereas compiled software translates programmer code into computer instructions up front, before you even download the application, interpreted software performs that translation in real time, as you use the application.

The advantage of the interpreted approach is that it’s easier to run on many different devices. Since the translation to computer instructions happens at the last minute, you can write a program once and then run it on any processor / operating system that knows how to do the translation. (A web browser is one such “translator.”) In some cases, it can also be easier to write interpreted software.

The compiled approach, on the other hand, has significantly better performance. Converting to machine instructions requires processor time and uses up battery power. When you do all this work before you even ship the software, the app runs faster and drains less battery. It’s even better if the software is specifically optimized for the device (for example, taking advantage of special graphics chips).
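
To get a rough feel for the size of the interpreter's overhead, you can time the same computation done by an interpreted loop and by a routine that was already compiled to machine code. The sketch below uses Python for both sides, leaning on the fact that the built-in sum() is implemented in compiled C inside the interpreter; it's only an illustration of the principle, not a rigorous benchmark, and the exact ratio will vary by machine.

```python
# Rough illustration of interpreter overhead: the same summation done by an
# interpreted Python loop vs. the built-in sum(), which is implemented in
# compiled C inside the interpreter. Not a rigorous benchmark.
import timeit

N = 1000000
data = list(range(N))

def interpreted_sum(values):
    # Every iteration here is dispatched by the interpreter at runtime.
    total = 0
    for v in values:
        total += v
    return total

loop_time = timeit.timeit(lambda: interpreted_sum(data), number=20)
builtin_time = timeit.timeit(lambda: sum(data), number=20)

print(f"interpreted loop:  {loop_time:.3f}s")
print(f"compiled built-in: {builtin_time:.3f}s")
print(f"ratio: {loop_time / builtin_time:.1f}x")
```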

“Fast enough”?

I did some performance tests six months ago and found that web applications run about 3 to 50 times slower than native compiled applications, depending on the task. Although incredible strides have been made to narrow this performance gap, the gap is fundamentally here to stay — the tradeoffs between interpreted and compiled software are simple facts of computer science.

But, the argument goes, today’s or tomorrow’s powerful computers are “fast enough” to support many useful web applications despite the performance gap. And at face value, this is perfectly obvious. We had spreadsheets 20 years ago on machines that were literally a thousand times slower. You would certainly hope that we could replicate that functionality with web apps today.

And at any given point in time, it’s hard for us to imagine what we could possibly do with even more powerful computers. (Bill Gates is famously, if apocryphally, said to have claimed that 640K of memory “ought to be enough for anybody.”) One of the easiest things to imagine doing is taking advantage of the new speed to allow web applications to run faster. The thinking goes as follows: “the performance gap is only 3-50x. So in [2, 5, 10] years, when computers are [3, 50, hundreds] of times more powerful, web apps will perform just fine, and take over from desktop apps.”

But history has shown that we have always been able to take advantage of more processing power to accomplish tasks that were previously impossible, if not unimaginable. For example, Apple famously transitioned the personal computer from a word-processing and personal-finance machine into a “digital hub” for your music, photos, and videos (all of which require substantial processing power to manage). Only now, almost a decade later, do web apps have the necessary horsepower to manage our digital media. And Apple is now in the process of bringing these higher-horsepower tasks to mobile devices.

Ongoing research in computer science makes it clear that this historical trend will continue. For example, “machine learning” algorithms for applications such as games, speech recognition, augmented reality, and many others all perform increasingly better as they are allowed to use more and more processing cycles.
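
One way to see why more cycles keep paying off: many of these are “anytime” algorithms whose output simply gets better the longer they run. The little Monte Carlo estimator below (plain Python, estimating pi from random samples) is nothing like a real game AI or speech recognizer, but it has the same shape: throw more processing at it and the answer improves.

```python
# Toy "anytime" computation: a Monte Carlo estimate of pi that improves as it
# is given more processing cycles. Real machine-learning systems (game AI,
# speech recognition, etc.) have the same character at a much larger scale.
import math
import random

def estimate_pi(samples):
    """Estimate pi by sampling random points in the unit square."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

for samples in (1000, 100000, 1000000):
    estimate = estimate_pi(samples)
    print(f"{samples:>8} samples -> pi ~ {estimate:.4f} "
          f"(error {abs(estimate - math.pi):.4f})")
```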

The most innovative emerging applications will tend to be the ones that can make use of the most processing power now. For these applications, there is no such thing as “fast enough.”

“Write once, run anywhere”?

Cross-platform frameworks come with the promise of letting you develop a single application that can be run on any supported platform. You write one code base, and the framework does the hard work of making your app work everywhere.

The problem with this claim is that each platform is different. If the differences were merely cosmetic, it wouldn’t be a big deal — make them look like Mac buttons on the Mac, Windows buttons on Windows. But new devices like the touchscreen iPhone and iPad make it clear how limited the whole notion of cross-platform compatibility is. User interfaces designed for mouse and keyboard simply don’t work well on a touchscreen. Interfaces designed for large screens don’t work well on small screens. Even with very similar platforms (e.g. Mac and Windows), there are subtly different UI paradigms that cross-platform frameworks usually fail to respect.

Each platform also has a unique set of available features, which limits the possibilities for cross-platform frameworks. As Steve Jobs put it,

The [cross platform framework] may not adopt enhancements from one platform unless they are available on all of their supported platforms. Hence developers only have access to the lowest common denominator set of features. We cannot accept an outcome where developers are blocked from using our innovations and enhancements because they are not available on our competitor’s platforms.

Apple later relented, allowing apps to be built using cross-platform frameworks. But the drawbacks Jobs described still apply: apps that use these frameworks are constrained to yesterday’s feature set.

Web applications face these same constraints. In theory, they can run on any web browser, whether it’s Mac, Windows, Linux; Safari, Internet Explorer, Firefox; laptop, tablet, or smartphone. But in practice, web apps have to be adapted to truly meet the needs of each platform (for example, Gmail and many other sites have smartphone- and iPad-specific versions).

For billions of websites, the lowest common denominator set of features is plenty. But if you want to write innovative software (as distinguished from innovative content), chances are that the features you need will not be readily available on all of the relevant platforms. For these important applications, “write once, run anywhere” is a myth.

“Cloud computing”

The user experience of installing and running software has traditionally been much better on the web than on desktop operating systems. Consider: from any internet-connected device in the world, one need only type “facebook.com” to access a powerful, extensive social networking application. There is no need to start a download, find it, decompress it, install it, and run it. Web applications are discoverable and viral since they can be shared with a simple URL.

But there is no reason in principle why compiled apps cannot also be delivered in this way. Apple has demonstrated this with its App Store’s integrated download process. They could go even further by letting developers split up their applications into discrete chunks that only get downloaded when necessary. The downsides of this approach would be exactly the same as on the web: the intermediary downloads can be slow, the internet connection can be broken, and so on.
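
As a sketch of what on-demand delivery for a compiled app could look like, here is a tiny Python example that downloads a chunk only the first time it is needed and caches it locally afterwards. The server URL, chunk names, and cache location are all hypothetical; a real app-store mechanism would add signing, versioning, and progress UI.

```python
# Sketch of on-demand chunk loading: a piece of an application (a level, a
# plug-in, a media pack) is fetched the first time it is needed and cached
# locally afterwards. The URL and file names here are purely hypothetical.
import urllib.request
from pathlib import Path

CHUNK_SERVER = "https://example.com/app-chunks"   # hypothetical server
CACHE_DIR = Path.home() / ".myapp" / "chunks"     # hypothetical cache location

def load_chunk(name: str) -> bytes:
    """Return the named chunk, downloading it only the first time it is needed."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached = CACHE_DIR / name
    if cached.exists():
        return cached.read_bytes()                 # already on disk: no network
    with urllib.request.urlopen(f"{CHUNK_SERVER}/{name}") as response:
        data = response.read()
    cached.write_bytes(data)                       # later runs skip the download
    return data

# Only users who actually reach level 7 ever pay this download cost:
# level_data = load_chunk("level-7.pak")
```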

Web applications are also touted for their ability to push out updates immediately without the user needing to do anything. But this can (and should) be applied to native apps too; in fact, Google has finely honed the upgrade process for its Chrome web browser so that patches are securely downloaded and installed without the user even noticing.
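
At its core, a silent updater like Chrome’s is just a background check against a version manifest plus a quiet download. Here is a minimal sketch of that idea in Python; the manifest URL and file layout are made up, and a production updater would also verify signatures and install atomically.

```python
# Minimal sketch of a silent update check: compare the running version to a
# manifest published by the vendor and stage the new build in the background.
# The manifest URL is hypothetical; real updaters also verify signatures.
import json
import urllib.request

CURRENT_VERSION = (1, 4, 2)
MANIFEST_URL = "https://example.com/myapp/latest.json"   # hypothetical

def check_for_update():
    """Quietly fetch the vendor's manifest and stage a newer build if one exists."""
    with urllib.request.urlopen(MANIFEST_URL) as response:
        manifest = json.load(response)
    latest = tuple(int(part) for part in manifest["version"].split("."))
    if latest <= CURRENT_VERSION:
        return None                                  # already up to date
    # Download in the background; install on next launch so the user never notices.
    with urllib.request.urlopen(manifest["download_url"]) as pkg:
        with open("staged-update.pkg", "wb") as out:
            out.write(pkg.read())
    return latest
```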

There is no reason that compiled software cannot take advantage of all the “cloud” features that the Internet enables. Compare Google Docs, the web-based productivity suite, with Microsoft Office, its dominant compiled counterpart. As I see it, the advantages of Google Docs currently are:

  • Easy to access from any computer
  • Easy to collaborate in real time and share documents with others
  • Free (supported by advertising)

The interesting thing about this list is that these advantages once again do not depend on the web browser platform. Indeed, many of the most important iPad and iPhone apps are native clients to back-end web services (e.g. Twitter, Instapaper, Flipboard, NPR). Not only do these apps make it easy to collaborate, share, and view advertisements, they also take advantage of being compiled to provide innovative, responsive, battery-conserving user interfaces.

Conclusion

Ben Ward wrote, “If you want to build the most amazing user interface, you will need to use native platforms. A single vendor’s benevolent curation of their framework will always outpace the collaborative, interoperable developments of the web…. But the web will always be the canonical source of information and relationships.”

What will be the fate of platforms based around interpreted-only software? It already seems pretty clear that WebOS smartphones are not going to survive. The performance tradeoffs are too dramatic on tiny, power-hungry mobile phones. I suspect Chrome OS will fare at least a little better, because web apps on cheap laptops can now do most of what mainstream users need, with a user experience that’s not terrible.

Chrome OS targets essentially the same market as the iPad — people who have only light computing needs or who want a secondary, more portable computer. Apple has shown that it’s a viable market. And for now, most web apps are designed for a mouse and keyboard (rather than touch), which plays to Chrome OS laptops’ strengths. Chrome OS’s lack of viruses, fast startup, and expected price point below Apple’s put it in a decent position.

But Chrome OS will never come close to replacing systems that are based on compiled software. Even if most of your computer time is spent in a web browser, why sacrifice the native applications that can do more with the same hardware? Games that are more responsive and more realistic; photo managers that are more powerful and flexible; office apps that use less battery; social media clients that are optimized for your screen size? Those native applications already exist on iPad, alongside an excellent web browser.

Today you can edit HD video on an iPhone via a native app; it will be years before the same experience is possible with web apps. And the cycle will continue — the video editing of today is the real-time artificial intelligence of tomorrow. Anyone who wants to be near the cutting edge will choose the products that have the new and exciting features.

I appreciate the simplicity of reducing everything to a web browser. But the iPad demonstrates how much more you can do with tightly integrated, compiled software running on relatively cheap and battery-constrained processors. Expect more web-like features to make their way to future iPad software, such as automatic upgrades and data synchronization. Expect web apps to remain important for lowest common denominator tasks. And rest assured that compiled software is here to stay.

Many of the ideas in this article are based on links and analysis I read on Daring Fireball and asymco over the past six months.


Update: Technology Review magazine published an essay by the CTO of the Opera web browser which follows the line of reasoning that web technologies will be “fast enough” in the future to overshadow native apps. I wrote a letter to the editor. (Update: they published part of the letter in the magazine!)

One of the things I like most about the articles in Technology Review is that they consider technology within the real-world context of business and politics. The authors rarely get lost in technological hype that ignores practical obstacles.

I thought Håkon Wium Lie’s notebook contribution “Web Wins” (TR March/April 2011) was an unfortunate exception to this norm. He concludes that “native apps will become a footnote in the history of computing.” Even allowing some room for hyperbole, this statement is foolish. Native applications have been the norm for decades on personal computers; similarly, native software has dominated the history of mobile devices since the earliest cell phones and PDAs. Even if the majority of apps do become web-based in the future, calling this long history a “footnote” borders on the absurd.

Worse, however, is that Lie’s argument is a purely technological one. He argues that new web technologies “handle many computing-intensive tasks” that now allow web applications to approach the performance of existing native apps. But any student of disruptive innovation theory can tell you that technological innovations tend to start out in proprietary systems where the full software and hardware stack can be tuned to meet the needs of the new application. Web standardization will always lag behind these path-breakers. By the time today’s new web technologies become standard, the next wave of native applications will have emerged in areas such as augmented reality and machine learning, and it will take another few years for web technology to catch up.

There is plenty of room for debate about the extent to which important software will be ported to the web. But it would be delusional to believe that native apps will go away altogether.


Update 2: John Gruber points out:

We should perhaps use “web app” to mean any app that is built around HTTP communication, and “browser app” to mean a kind of web app written in HTML/CSS/JavaScript which runs in a web browser. Things like iOS and Android Twitter clients are web apps, in my mind, they’re just written using platform-native toolkits.

“Browser app” seems like a reasonable choice of terminology to me.


Update 3: Matt Gemmell wrote an interesting article comparing native apps and browser apps from the perspective of frames of interaction — how many windows you have to “reach through” to get at the app itself. He argues that the cognitive cost of this nesting negatively impacts the user experience for browser apps.


Update 4: (Oct, 2011) Apple has released a suite of “iCloud” services whose primary goal is to bring web-like data synchronization to native apps.


Scale successes

“What is the ratio of the time I spend solving problems to the time I spend scaling successes?”

-Chip Heath & Dan Heath

Trying on old ideas

“Old ideas… do not vanish, and when there is a crisis, and people lose hope… they fetch them out and try them on again.”

-Theodore Zeldin

Meta-moderation

I like to say, “moderation in all things — but not too much.”

In other words, keep your moderation in moderation. This sort of sounds like a paradox, but I don’t think it is. Obsessing over moderation is just as unhealthy as obsessing over anything else. You end up worrying about whether your life is perfectly in balance. It shouldn’t be! It should get a little out of whack. Maybe even a lot out of whack, every now and then!

I think this is just the sometimes-overlooked deeper meaning of moderation.

p.s. In chaos theory terminology, I think this concept is related to the chaotic boundary between uniformity and randomness, which is self-similar at all scales. Moderation modulating moderation (modulating moderation, and so on). Uniformity taking hold and then randomly switching to a different uniformity. Randomness that is uniform — usually. But that’s a blog post for another day.

Toleration is not enough

Gandhi’s life confirms that toleration is an insufficient remedy even when practiced by a very exceptional man.

This is from Theodore Zeldin’s excellent book An Intimate History of Humanity. The chapter it came from points out that despite Gandhi’s incredible charisma, intellect, and patience, his mission was ultimately a failure. He wanted English colonialists and native Indians to live together peacefully — to tolerate each other. The peace did not last.

Zeldin finds that tolerance works when times are good. But as soon as there is a shortage or conflict of interest, those who were tolerated quickly become the bad guys. The finger-pointing begins and the conflict escalates.

What is needed in the long run is respect.

The recent movie Invictus shows how rugby symbolized Nelson Mandela’s deep respect for white citizens and their culture. In one scene, Mandela insists that his team of bodyguards should include an equal ratio of white and black officers. This demonstrates to the country that Mandela respects the white officers so much that he trusts them with his life; in turn, the white officers demonstrate their respect for the president by protecting him. Like Gandhi, Mandela led by example. But Mandela realized that toleration was not enough; respect was necessary for lasting peace.

I find this insight surprisingly applicable to everyday life. It’s easy to fall back on toleration when times are good. I’ve seen this (and have sometimes been guilty) with roommates and coworkers. When interaction is minimal or interests are aligned, things go smoothly. But as soon as opinions differ or hard constraints arise, there is escalating conflict and extreme difficulty at reaching consensus or compromise.

Respect is harder than toleration. It requires understanding and empathy. It requires a willingness to embrace truths that are not your own truths. Humility.

The lesson here is to learn to recognize the difference between tolerance and respect, which can often look similar on the surface.

The Innovator’s Dilemma

After reading Disrupting Class and several articles about disruptive technology on the asymco blog, I decided I should go to the source and read The Innovator’s Dilemma by Clayton M. Christensen, first published in 1997. It’s one of those books that seems fairly obvious in retrospect — now that more than a decade has passed and its lessons have largely been absorbed into business practice and culture.

The book is based on Christensen’s PhD thesis, which originally looked at technology and business trends in the hard disk drive industry. He found that some technologies (such as improved read-write heads) served to “sustain” existing product lines and cement the dominance of existing companies, while other technologies (such as smaller form factors) ended up “disrupting” existing products to the extent that once-dominant companies sometimes went out of business in just a few years.

The reason these companies failed was not that they were poorly managed, but that the disruptive products were sold in completely separate markets (and accompanying “value networks”). The existing companies were simply not designed to compete in those new markets. For example, 5.25-inch drives were sold to minicomputer makers, while 3.5-inch drives were sold to personal computer makers (with shorter design cycles, higher volumes, and lower profit margins). The existing minicomputer customers had no need for 3.5-inch drives, so the 5.25-inch manufacturers saw no market and no reason to produce them until it was too late and startup companies were already dominating the emerging market for 3.5-inch personal computer drives.

In other words, the businesses of making and selling 5.25-inch versus 3.5-inch drives were so different that being the dominant expert in hard drive technology was not actually much of an advantage. In fact, it was a disadvantage, because the whole organization was designed to compete in the old business and naturally fought attempts to undercut that business.

But how do you know if a given product idea is going to be disruptive?

One clue: disruptive products are usually simpler, less powerful, and carry smaller profit margins than existing products. So they need to find markets that value attributes like convenience, reliability, and ease of use over sheer power. For example, business accounting software in the nineties was driven by the needs of large enterprise customers and so was quite complex and powerful. Intuit disrupted this market with QuickBooks, a simpler, cheaper product based on its Quicken personal finance software. It was so much easier to use that it quickly gained an 80% market share among small-business owners who did not need all those extra features.

What makes technologies “disruptive” rather than just “niche” is that they progress far enough to compete up-market with existing product lines. For example, Intuit kept adding features to QuickBooks so that larger and larger businesses could use it, pushing the old software companies back to serving only the largest enterprise customers. A potential disruptive technology should have a plausible development path that will eventually displace existing products up-market.

The big takeaways are:

1. If you want to start a new company, do it with a product idea that is likely to be disruptive. Otherwise, you have very little chance of making any headway against existing players.

2. Generally the only way to manage disruptive technologies from within an existing company is to create a totally separate organization with the sole purpose of going after that disruptive technology. If you don’t keep it separate enough, resources will inevitably be borrowed to take care of existing business and the new products will languish.

Apple has a better record than most for its ability to disrupt its own products before competitors get the chance. Horace Dediu makes a good argument that the iPhone should be seen not as “a better phone” but as a disruptive technology for personal computers: a simpler and more convenient way to accomplish computing tasks such as email and web surfing. The inclusion of a phone capability just makes it all the more convenient. I know at least one person who decided to get an iPhone instead of a new laptop; and Apple’s iPad is even more competitive with laptop computers. iPhones and iPads will continue to “move up-market” by adding the ability to conveniently handle ever more computing tasks. As this happens, Macs and other desktop PCs will increasingly be seen as high-end tools for power users.

2001: Space Art

I just watched 2001: A Space Odyssey, mostly with the goal of better understanding nerd cultural references. I hadn’t realized until I looked at the DVD jacket that it was released way back in 1968, shortly before the first real-life moon landing in 1969.

I assume (and a skim of Wikipedia confirms) that 2001 is legendary for its pioneering special effects (such as simulated zero-gravity environments and spaceship fly-bys) and for the philosophical and scientific questions it raises. I’m not going to try to dispute its status as a work of genius. I remember enjoying the book version when I read it many years ago.

But of course, by this point in history, artificial intelligence has been thoroughly discussed, and the astronomical cost of space travel makes the lavish and enormous spacecraft in the movie seem absurd (for example, the Jupiter-bound ship is far bigger than necessary to support a mere five crew members).

And it seemed to me that the parts of the film which actually moved the plot forward could have been condensed down to about 15 minutes. The rest is better interpreted as space art, to be enjoyed at leisure in a gallery while pondering the nature of humanity.

All of this is to say that I found the movie to be extraordinarily boring.

But at least I’m one step closer to understanding what the heck my co-workers are talking about…

Practical people

“What is the point of having discussions with practical people who always say you cannot change the world?”

-Theodore Zeldin

Dramatic photo


I took this photo from the Queen Anne neighborhood in Seattle (walking distance from my office), looking southwest towards Elliott Bay.

Camera: iPhone 4.

Post-processing: Digitally removed power lines via Photoshop.