Client Fragmentation: Apps making software more expensive

I'm hearing stories of companies that want to ship on multiple platforms, where the only solution is to hire 10 people to deal with it. And this seems like a lot, until you think about what's happened over the last few years.

You want to ship a client for the iPhone, Android, RIM, Mac, and Windows (Linux?). Of course, to make it more fun, Microsoft has managed to fragment XP against Vista... certain features just aren't available in XP.

Back in 2000, you might take VC funding to pay for expensive Sun servers and bandwidth. Now you need a pile of dollars to hire enough people to port your app to five platforms.

For the most part, all of these client platforms use separate languages, skills, toolchains, and development environments. You have to physically shift from a Mac to a PC, or launch Eclipse to write Java code one day and Xcode, emacs, or Visual Studio the next. This is all relatively new: in the past, most developers could work in one place, and they got good at it.

Now you might be expecting history to repeat itself: one platform gets all the market share, and everyone writes software for it! That would be the 1990s outcome. (It was fun when the worst bug we had to deal with was "a LaserJet shared from NT4 to Win98 prints upside down." Seriously. Recompile.)

Of course, I left out the Web, which is the market share leader right now. And that's actually one of the reasons it's different this time.

So to me, the absolutely brilliant thing Google has done with Android is to fragment client development. (They might not see it this way.)

With Android, I have to write Java code. And honestly, it was hard enough to make my C++ stuff work in Objective-C. If I really want to be portable from desktop to mobile, I need at least three code bases now, and that's ignoring RIM and separate resources for iPad, etc. When you cross language boundaries, there is no #ifdef shared code, there is no "fix this on platform X"; there's just hard work.
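
For contrast, here's roughly what "shared code" means when everything stays in one language: one C function, with the platform differences tucked behind #ifdef. (A minimal sketch; the function name and paths here are made up for illustration, not from any real app.)

    #include <stdio.h>
    #include <stdlib.h>

    /* One function, one language: platform differences live behind #ifdef.
       (Illustrative sketch only.) */
    static void get_settings_dir(char *buf, size_t len)
    {
    #if defined(_WIN32)
        const char *base = getenv("APPDATA");      /* Windows per-user app data */
        snprintf(buf, len, "%s\\MyApp", base ? base : ".");
    #elif defined(__APPLE__)
        const char *home = getenv("HOME");         /* Mac convention */
        snprintf(buf, len, "%s/Library/Application Support/MyApp", home ? home : ".");
    #else
        const char *home = getenv("HOME");         /* Linux and friends */
        snprintf(buf, len, "%s/.config/myapp", home ? home : ".");
    #endif
    }

    int main(void)
    {
        char dir[512];
        get_settings_dir(dir, sizeof dir);
        printf("settings live in: %s\n", dir);
        return 0;
    }

Across Java, Objective-C, and C++, there is no equivalent of that #ifdef; the platform branch becomes a ticket in somebody's queue.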

You end up moving a process that code used to enforce ("this doesn't compile anymore") to a manual one ("our project manager said I should implement this feature sometime"), and that is dangerous and error-prone territory.

Google's approach looks smart for the Web, because it makes browser compatibility look like a cakewalk. In short, client fragmentation makes writing code that runs in the browser a lot more attractive than it was 5 years ago.

Everyone knows that Apple has pushed this fragmentation even harder, and in the short term it has helped them: they have locked up the best developers right now. Developers hate switching environments, and so more people seem to be making apps for the iPhone than for any other platform, Windows included.

But in the long term, innovation will happen in the places where it's easy for a team of 2 people to get something done that everyone can use, and I don't think successful businesses will be built around a single client platform. (Good demos absolutely will run on one platform. But after that, there's hard work.)

I think, fundamentally, every time you switch languages or IDEs or physical machines to do a task, you probably need more people involved. And as we shift languages, we've moved processes that might have been "fix the code so it compiles on the Mac" to some meta-programming process, like "we have to reimplement this feature in Java for Android."

When you have 5 developers trying to coordinate a feature set, you need people to manage it, and people to redesign it, and the software rapidly gets more complicated and slower to make. Bugs are slower to fix, because your RIM guy moved over to do the iPad version, and he doesn't want to shift gears this week, maybe next? So there's another project manager to keep track of that task. Big-company processes, for the smallest projects.

My project f.lux has given me some of these headaches. It's a C++ app on Windows, an Objective-C app on Mac, and a C thing on Linux. And as I'm looking through Android kernel source thinking about hooks to access Snapdragon hardware that aren't currently exposed, or trying to figure out how to hack in .kexts on jailbroken iPads, I realize this is fundamentally a worse world than having one environment to deploy to. It makes me want to write Windows apps ten years ago, or Web apps instead.
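
To make the desktop part concrete: even staying inside the C family, every OS has its own display-gamma API, so one "shared" routine is really three different routines stapled together. Here's a rough sketch (not f.lux's actual code, and it glosses over details like querying the real ramp size on X11) of what applying a single gamma ramp touches on Windows, Mac, and Linux:

    /* A rough sketch (not f.lux's actual code) of applying a gamma ramp on
       each desktop platform from one C file. Each branch talks to a
       completely different display API. */
    #include <stdint.h>

    #if defined(_WIN32)
      #include <windows.h>
    #elif defined(__APPLE__)
      #include <ApplicationServices/ApplicationServices.h>
    #else
      #include <X11/Xlib.h>
      #include <X11/extensions/xf86vmode.h>
    #endif

    /* Apply a 256-entry gamma ramp; each channel value is 0..65535. */
    int apply_gamma_ramp(const uint16_t red[256],
                         const uint16_t green[256],
                         const uint16_t blue[256])
    {
    #if defined(_WIN32)
        WORD ramp[3][256];                       /* GDI wants one packed table */
        for (int i = 0; i < 256; i++) {
            ramp[0][i] = red[i];
            ramp[1][i] = green[i];
            ramp[2][i] = blue[i];
        }
        HDC dc = GetDC(NULL);                    /* DC for the primary display */
        BOOL ok = SetDeviceGammaRamp(dc, ramp);
        ReleaseDC(NULL, dc);
        return ok ? 0 : -1;
    #elif defined(__APPLE__)
        CGGammaValue r[256], g[256], b[256];     /* CoreGraphics wants floats 0..1 */
        for (int i = 0; i < 256; i++) {
            r[i] = red[i]   / 65535.0f;
            g[i] = green[i] / 65535.0f;
            b[i] = blue[i]  / 65535.0f;
        }
        return CGSetDisplayTransferByTable(CGMainDisplayID(), 256, r, g, b)
               == kCGErrorSuccess ? 0 : -1;
    #else
        unsigned short r[256], g[256], b[256];   /* X11 + the XF86VidMode extension */
        for (int i = 0; i < 256; i++) {
            r[i] = red[i]; g[i] = green[i]; b[i] = blue[i];
        }
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return -1;
        int ok = XF86VidModeSetGammaRamp(dpy, DefaultScreen(dpy), 256, r, g, b);
        XCloseDisplay(dpy);
        return ok ? 0 : -1;
    #endif
    }

And those are the easy platforms: Android and iOS don't expose calls like these to apps at all, which is how you end up reading kernel source and poking at jailbreaks.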

One possible secondary effect of this fragmentation is that open source can help solve some of the details pretty well. But it's important to say that most open source is corporate-funded today, and only a few very important projects get seriously committed hobbyist involvement, so I don't think it's a panacea.

For apps and businesses that need to add features and fix bugs quickly, we've got an expensive mess right now.

In the meantime, you can write for the browser, and do a client or two where it makes sense.

The world has definitely changed.

1 comment:

  1. You are right, but it's not as hopeless as it seems.

    There are some people working on improvements... The Mono project lets you write .NET code on Linux, OS X, Windows, iOS (untested to see if it will get squashed by Apple yet, but apps are still going through), and soon Android.

    The GUI toolkits are native on each, so that code is still forked, but one may argue that's for the better.

    OpenGL does work everywhere using the Tao Framework.

    Performance is all over the place... for example, on Android, Mono is faster than Dalvik.


    What I see happening out in the field is a lot of companies inventing their own custom cross-platform environments to get around everything you mention.

    The Web is getting better, but it's still a pain to write for all the browsers and screen sizes.