[Update: Jeff LaMarche (author of one of the best iPhone books on the market) wrote one of his trademark 'no tact' responses to this post. I'd be very interested to know what people think about his post.]

[Update 2: Guy English (aka kickingbear) chimes in on this debate with what is the best response I've seen from a hardcore Cocoa developer. Basically: Apple's tools are probably better at producing good iPhone apps, but let's see what MonoTouch and Flash can deliver before we definitively say that they are no good.]

Novell recently announced a product called MonoTouch, which allows developers to write iPhone applications using C#, a language invented by Microsoft (but since standardised). It’s a very clever piece of work that allows someone without experience of Objective-C – the only option that Apple gives you for iPhone development – to write an iPhone application with a reduced learning curve.

Yesterday, Adobe followed suit and announced that they are working on a way to make native iPhone applications with their Flash technology.

Naturally, this is a good thing. Talented C# and Flash developers will be able to write excellent iPhone apps and we can all go home happy. The fact that two heavyweights in the technical space have made these tools is a massive compliment to Apple’s achievement with the iPhone.

Not if you listen to many Objective-C programmers it isn’t.

Before I launch off into a rant here, I should probably get my retaliation in first. I made the effort to learn Objective-C before MonoTouch was even a twinkle in Miguel de Icaza’s eye. I’ve made an effort to become engaged with the Mac developer community, and even have a shipping (if small) Mac application to my name. I like Objective-C, I like Cocoa and I love the iPhone and Mac platforms. I like learning new languages and platforms, and Objective-C is one of the most pleasant I’ve worked with.

Objective-C was the natural choice for Apple as the development language for the iPhone because that’s what modern Mac applications are written in, and the Mac developer community were always going to be the most likely to start writing iPhone apps. The iPhone was not guaranteed to be the massive success it is today, and developer adoption of the SDK has been key to selling it to the public (‘There’s an app for that’). There is no rational way to criticise Apple for their choice of technology for the platform.

But, here’s the problem. Objective-C is very much an acquired taste. Roughly speaking, it takes the elegance of Smalltalk and mashes it up with the power of C. The result is something that is very elegant in many respects, but gives you the ability to get right into the guts of things if you need to.

The downside is that it has an odd syntax compared to most other languages, which arguably has not been helped by some changes introduced in Objective-C 2.0. On the iPhone, you also have to write your own memory management code, something that most other mainstream high-level languages handle for you.

If I write a sloppy iPhone application, it will waste memory. Waste enough memory and the application will crash. And users hate applications that crash.

This is because Objective-C on the iPhone lacks garbage collection, an automatic memory management technique used in most modern high-level languages, and that makes developing code for the iPhone a little harder than developing for other platforms.

The thing is that memory management – once one of the hardest technical challenges a developer faced – has been removed as a consideration for most developers. Many developers have grown up in a world with, in effect, unlimited memory and no penalty for not thinking about how much memory is being allocated by their program.

This changes on the iPhone because it has a very limited amount of RAM – something like 24MB per application, compared to the 4GB that is the default configuration in a MacBook Pro. The difference between those two numbers is like that between a drop of water and a swimming pool. No one worries about wasting water in a swimming pool.

This is one of the things that makes developers new to the platform less productive on the iPhone. The strange Objective-C square bracket syntax is another. The fact that you are forced to use C-style header files is another. The amount of code you need to define properties is yet another. These are the issues that MonoTouch is designed to solve.

There are many talented developers working on the .NET platform. The .NET platform is a brilliant bit of software engineering, and perhaps the best product that has ever come out of Redmond. Don’t let your dislike of Microsoft products cloud your judgement on either of those points. Most .NET developers will already know several languages, especially if they work with web technologies. These are likely to include C#, Visual Basic, JavaScript, SQL, XSLT and more. It’s not a fear of learning a new language that makes developing iPhone apps in C# attractive. It’s productivity.

Here’s the thing. C# is designed to solve precisely the same set of problems as Objective-C. The language designers set themselves different parameters, and ended up with very different results, but if there’s a feature in one language, there is bound to be its equivalent in the other. Lambdas in C#? Try blocks in Objective-C. Interfaces in C#? Try protocols in Objective-C. Categories in Objective-C? Try extension methods in C#. Like many rules of thumb, this is not entirely accurate, but it’s close enough.

Yet Objective-C developers reacted with derision – including ones who work for Apple on the language tools – when MonoTouch was announced. I find this deeply frustrating.

Further to this, there are large C# and ActionScript codebases out there that, hitherto, would have had to be rewritten in Objective-C if they were to find their way onto the iPhone. Who would have gained from this effort? If I’m a Flash developer who has written a neat Flash game (say, DiceWars), I would have to learn a whole new technology and language in order to get my game on the iPhone, costing me probably weeks or months of additional effort, and I would end up with two independent sets of code that I would need to maintain separately. How does anyone benefit from this? Certainly users don’t.

For far too long, techies have been obsessed with the workings of what they do rather than the results of those workings. With a statistically insignificant number of exceptions, people using an iPhone would not know Objective-C if it smacked them in the face with a sackload of gold-plated unicorn shit. More than that: they simply don’t care. Not because they’re careless, but because it is genuinely irrelevant to their lives.

Users just want to get things done. Many have grown up feeling nothing but frustration with their computers, which they see as things imposed on them rather than things they actually want to use. The iPhone is a watershed device in the sense that it is the first mainstream general purpose device that people actually want to use. They find excuses to use it. That’s the kind of delight I’m talking about.

But, to at least one Objective-C developer, if you don’t learn Objective-C and use that to develop your iPhone apps then you are a less than worthy coder.

There are many reasons why this is wrong-headed, but none more than that the end-user doesn’t care. If the non-native tools are incapable of producing a polished iPhone experience, then users will notice and vote with their feet. If not, then what on earth is the problem with using them?

There are good reasons to prefer Objective-C, not least because it is officially supported by Apple, is theoretically documented in full, has a large community of expert, friendly developers willing to help others and is guaranteed to be supported for the foreseeable future. If I were developing an iPhone application today, I’d definitely be doing it in Objective-C and Cocoa Touch.

Like me, many developers will welcome the chance to learn a new language and a new platform. But many will just want to write an app and get it into the hands of their users as quickly as they can.

The Mac developer community is, on the whole, very welcoming, polite and probably more imbued with expertise than any other I’ve been involved in. In my experience, Mac software is vastly superior to anything found on Windows, and much of it comes from independent developers rather than large corporations. There’s much to like about it. But the flip side of this is that many Mac developers have a near religious attachment to Objective-C that has the potential to stand in the way of many a developer’s desire for productivity.

It boils down to this: if MonoTouch and Adobe’s Applications for iPhone platform are no good, they will die. Developers who use them will sell fewer apps in the App Store. Objective-C developers will carry on developing apps in their preferred style and creating apps that sell well. But if the rival techniques are sound, everyone gets to write great iPhone apps, and the result is happy users, and an even wider adoption of the iPhone platform.

Who on earth could have a problem with that?
