New Swift language shows Apple history

Swift still reflects conventions going back to Apple's original adoption of Objective-C

A closer look at Swift

Like many Apple programmers (and new programmers who are curious about iOS), I greeted Apple's Swift language as a breath of fresh air. I was pleased when Vandad Nahavandipoor updated his persistently popular iOS Programming Cookbook to cover Swift exclusively. But I soon realized that the LLVM compiler and the iOS runtime have persistent attributes of their own that do not go away when programmers adopt Swift. This post tries to alert new iOS programmers to the idiosyncrasies of the runtime that they still need to learn.

Delegates

Like all interactive systems, iOS notifies classes when something they care intensely about, such as a button press by a user, has occurred. To create a successfully responsive app, you certainly want to know as soon as a user has pressed a button. Unlike many other user-interface systems, iOS requires you to designate an object called a delegate to receive the all-important event telling you that the user has asked you for something. Sometimes you can make your original class its own delegate, but in every case you must understand the concept of a "delegate" and define one to receive the critical event.

Apple enthusiasts like to present the delegate notion (I refuse to enshrine it through the term "pattern") as a nice division of responsibilities. I haven't seen any explanation of why other systems, which also have to respond to user events, can let the class holding the important user interface elements respond to its own events. The delegate notion is an Apple oddity that survived the transition from Objective-C to Swift. Go along with it and learn to implement it.
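
To make the notion concrete, here is a minimal sketch of a delegate in pure Swift. The DownloadManager and DownloadManagerDelegate names are hypothetical, invented for this illustration; real UIKit delegates such as UITableViewDelegate follow the same shape.

// The delegate is simply an object conforming to a protocol that
// the event source promises to call.
protocol DownloadManagerDelegate: AnyObject {
    func downloadDidFinish(data: String)
}

class DownloadManager {
    // Declared weak so the manager does not keep its delegate alive;
    // the "Memory management" section below explains why.
    weak var delegate: DownloadManagerDelegate?

    func finishDownload() {
        delegate?.downloadDidFinish(data: "payload")
    }
}

// A class can serve as its own delegate by conforming to the protocol.
class Screen: DownloadManagerDelegate {
    let manager = DownloadManager()

    init() {
        manager.delegate = self
    }

    func downloadDidFinish(data: String) {
        print("Screen received: \(data)")
    }
}

let screen = Screen()
screen.manager.finishDownload()   // prints "Screen received: payload"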

Memory management

A bit of history for those who are not too impatient. Objective-C was a mostly ignored language in the 1980s (those who wanted C to support objects turned to C++, which had its own dragons), but was implemented by the Free Software Foundation in its GNU compiler, a great advance over other C compilers at that time. Steve Jobs had a soft spot for the GNU compiler and therefore based his mostly forgotten NeXT computer on Objective-C.

This historical footnote seems to be the only reason Objective-C, well supported by the LLVM compiler, became the official iPhone language, and thus, by some reckonings, why Objective-C went from one of the most irrelevant computer languages to one of the most critical ones to learn after the iPhone was released.

Thanks for your patience. Now, what was I talking about? (Andy, check your heading. Memory management.) LLVM offers garbage collection, but Apple chose to reject its use in iOS because garbage collection can slow down an app at a critical moment. When you write your app in, say, Java, you never know when the garbage collector will rear its head and take over the CPU to handle background logistics that hold no interest for the user. So Apple eschewed LLVM's garbage collection but instituted a fairly sophisticated memory management system of its own: automatic reference counting (ARC).

Please excuse another digression: why is memory management important? Well, C and C++ leave memory management up to the programmer, and the result is scads of apps that don't release memory when they should. I have personally experienced the slowdown and eventual hanging of my laptop due to leaks in poorly managed programs; I'm sure others have too. If you need to reboot your system every week or so, it's probably because some program depended on programmer-controlled memory management and introduced a bug that leaves a memory leak.

Memory management is hard because programs can't depend entirely on the programming concept of scope to get rid of unwanted memory. Scope is supposed to protect you from memory leaks by releasing the memory allocated to the data that a function or loop defines. But many functions return values to their callers, and those values must be kept in memory as long as they're in use. When can they be released? What if no function has clear ownership of a variable and no one ever releases it? Allocations pile up until memory runs out.

What does Apple do to improve memory management? It understands that some variables are used only locally and should be released promptly (good design), but by default, every reference to an object that outlives your function is strong: the object stays allocated as long as any strong reference to it exists, and two objects holding strong references to each other can never be freed. Through the weak and unowned keywords, you can declare references that do not keep their objects alive. If you use these keywords correctly, you won't load down the user's device with fatal memory leaks.
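
Here is a minimal sketch of the difference; Tenant and Apartment are invented names. Each object holds a reference to the other, and only the weak keyword keeps the pair from leaking:

class Tenant {
    var home: Apartment?          // strong by default: keeps the Apartment alive
    deinit { print("Tenant freed") }
}

class Apartment {
    weak var occupant: Tenant?    // weak: does not keep the Tenant alive
    deinit { print("Apartment freed") }
}

var tenant: Tenant? = Tenant()
var apartment: Apartment? = Apartment()
tenant?.home = apartment
apartment?.occupant = tenant

// Because occupant is weak, dropping our own references frees both
// objects; had both references been strong, the cycle would leak.
tenant = nil          // prints "Tenant freed"
apartment = nil       // prints "Apartment freed"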

It turns out that weak references are critical to using delegates without introducing memory leaks, which is why the delegate property in the earlier sketch was declared weak. You need to learn to use delegates, so you need to learn iOS's memory management system and weak references. Bite the bullet.

Optional variables

Swift also preserved another oddity of the iOS runtime: the use of optional variables.

A typical optional variable is a reference to some data allocated by your app. Suppose you try to retrieve an image or some other data, which may or may not exist. Success results in a variable referring to the image or other data. Failure is conveniently indicated by the variable holding a special value called nil.
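
For instance, Swift's integer-parsing initializer is failable, so it hands back an optional:

// Int's string initializer returns Int? because parsing may fail.
let parsed = Int("42")             // Optional(42)
let botched = Int("not a number")  // nil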

Nil is hard to explain. It’s kind of a lacuna in the universe. If you have a counter, for instance, it can reach zero, and you expect it to do so. A nil counter is very different–it means the counter has no meaning, could not possibly represent a value, and is invalid wherever you refer to it.

Relational databases allow NULL values, which don’t indicate zero, but instead “This has no meaning.” That’s a very powerful concept, but one that easily leads to errors. For instance, if you use a relational database such as Oracle or MySQL and don’t check for NULL values, you may get erroneous results.

The seminal C language, which has formed the context in which other modern languages grew and still is used for key infrastructure, has long struggled with NULL values. Dereferencing a NULL pointer is still one of the key errors in C, and programmers are advised to write code checking that their pointer is not NULL before looking for data there.

Fast-forward to iOS. Swift offers optional variables, which can either be nil or contain an actual value. (I'll spare you the pedantic discussion of the difference between nil and NULL.) Few other languages allow "optional" variables with nil values; the closest relatives I know of are the option type in the relatively little used OCaml and the Maybe type in Haskell.

Optional variables make sense in situations such as requesting a file or a resource over the Internet, where you may fail to get access to it. If you fail, you get nil. So you have to be constantly alert to the possibility that an optional variable might be nil. Swift provides the ? character to deal safely with a value that might be missing, and the ! character to say, "Don't worry; I've checked this optional value and it has real information in it."
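
Here is a sketch of both characters at work; the hard-coded greeting stands in for data that might arrive (or not) from the network:

let greeting: String? = "hello"   // imagine this came from a network request

// ? chains safely: if greeting were nil, length would simply be nil too.
let length = greeting?.count

// ! forces the unwrap: fine when you have checked, a crash when you haven't.
let shouted = greeting!.uppercased()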

I see no reason to make a big deal over optional values, because you can check for one with a simple optional binding:

if let theImage = image {
    /* process the image */
} else {
    print("No image available")
}

The "if" statement binds theImage whenever image holds an actual value and takes the else branch only on nil. This is a quiet improvement over C and Objective-C, where an "if" cannot tell the difference between zero for a numerical value, a false Boolean value, and a reference that happens to be nil: all of them read as false.
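
A quick check in a playground confirms the distinction:

let zeroCounter: Int? = 0    // a genuine value that happens to be zero
let noAnswer: Int? = nil     // no value at all

if let counter = zeroCounter {
    print("counter is \(counter)")   // prints "counter is 0": zero still binds
}

if let answer = noAnswer {
    print("answer is \(answer)")
} else {
    print("nothing there")           // only nil lands here
}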

That ambiguity heralds a theoretical weakness (how can you distinguish the legitimate value of zero from nil?), but in practical terms it has little effect. You already know the difference between a counter and a value you retrieve from an outside source, such as a web page. You know when you are checking a counter, and know what to do when it reaches zero. You know, in contrast, when you are checking a value you retrieved from an Internet operation, where nil indicates some network failure. Swift's optionals simply make that distinction explicit instead of leaving it to context.

The bottom line is to understand optional values in Swift and to handle them respectfully. Few other languages or environments require this, although the concept of nil or NULL is nearly universal.

Conventional language features

In this article, I've tried to highlight iOS oddities that might slow down programmers who have studied other modern languages. Much of Swift will be comfortable to programmers who have kept up to date with modern programming practices. A few such structures, illustrated in the sketch after this list, include:

  • Closures, known as blocks in Objective-C, which Swift uses for callbacks.

  • Named parameters with default values, which allow methods to be loaded down with optional arguments. The C language offers a similarly versatile paradigm, accepting a struct as a single argument, but many modern language libraries prefer to string out separate arguments.

  • Immutable variables, which are most useful in functional programming contexts, but are offered by other languages as well.

  • Protocols, the Apple term for what Java calls interfaces.
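
Here is a brief sketch of all four in Swift; the names are invented for illustration:

// Closures: functions are values you can pass inline.
let doubled = [1, 2, 3].map { $0 * 2 }          // [2, 4, 6]

// Named parameters with default values stand in for long option lists.
func connect(host: String, port: Int = 443, secure: Bool = true) {
    print("connecting to \(host):\(port), secure: \(secure)")
}
connect(host: "example.com")                     // the defaults fill in the rest

// Immutable variables: reassigning a let is a compile-time error.
let maximumRetries = 3
print("will retry \(maximumRetries) times")

// Protocols play the role that interfaces play in Java.
protocol Greeter {
    func greet() -> String
}

struct EnglishGreeter: Greeter {
    func greet() -> String { return "hello" }
}

print(doubled, EnglishGreeter().greet())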

If you feel comfortable with the concepts in this article, along with Apple’s conventions for long-named functions, arguments, and data constants, Swift should not present a high hurdle.

Editor’s note: Dive deeper into Swift with “Swift Development with Cocoa” by Paris Buttfield-Addison, Jonathon Manning, and Tim Nugent.

Public domain objective lens illustration courtesy of Internet Archive.
