Apps Blog

I write stuff for nerds.

I like geeky things. If you tweet at me, be technical.

Binaural accepts remote control events, making most app functions fully controllable via Control Center or the buttons on compatible earphones.

To handle remote control events, the first responder must implement the -remoteControlReceivedWithEvent: method. In my first implementation this was handled by my root UIViewController, but I noticed it was hovering around 300 lines of code - time to refactor.

The logic I wanted to extract included communicating with the synthesizer (i.e. the object that generates the binaural beats) and managing the timers used to handle the seek events. Nothing depended on other parts of the root controller. Additionally, I considered this piece of code as a separate, global entity - which should handle remote control events no matter what the current view and view controller hierarchy might be.

I decided to create a UIResponder subclass that encapsulates these responsibilities, and add it to the responder chain. I chose to insert this responder in the chain between the root controller and the window - a very global place, independent of the current view controller hierarchy. Perfect for the situation.

Here's the desired responder chain:

This turned out to be pretty straightforward. Here's the relevant code from the root view controller:

// From the @interface
@property (strong, nonatomic) UIResponder *nextResponder;

// From the @implementation
- (void)awakeFromNib {
    self.nextResponder = ({
        XXRemoteControlResponder *remoteControlResponder = [[XXRemoteControlResponder alloc] init];
        remoteControlResponder.nextResponder = UIApplication.sharedApplication.keyWindow;
        remoteControlResponder;
    });
}

And here's the XXRemoteControlResponder class:

// XXRemoteControlResponder.h
@interface XXRemoteControlResponder : UIResponder
@property (strong, nonatomic) UIResponder *nextResponder;
@end

// XXRemoteControlResponder.m
@implementation XXRemoteControlResponder

- (instancetype)init {
    self = [super init];
    if (!self) return nil;
    [UIApplication.sharedApplication beginReceivingRemoteControlEvents];
    return self;
}

- (void)remoteControlReceivedWithEvent:(UIEvent *)event {
    // Handle remote control event
}

@end

To explain the above code:

  1. Both classes have a nextResponder property. This implicitly overrides the -nextResponder method, so whatever we set that property to will be considered the next responder for that object.
  2. We set the root controller's next responder to our own XXRemoteControlResponder object.
  3. We set the XXRemoteControlResponder object's next responder to the key window.

This ensures that our object will receive all unhandled remote control events.

As always: if there's a better way to do this, I'd love to know.

I recently had a hunch that Samples was loading too many images at app launch. I had to figure out how to get a list of what images are loaded, no matter if the source is a Storyboard or a call to +[UIImage imageNamed:].

Here's how I did it, step by step:

  1. Bring up the Breakpoint Navigator (⌘7)
  2. Click the little + button in the bottom left corner
  3. Select Add Symbolic Breakpoint…
  4. Set up your breakpoint as follows:
    • Type imageNamed: in the Symbol field
    • Click Add Action, select Debugger Command
    • In the command field, type po $r2
    • Check Automatically continue after evaluating
  5. Run on device (⌘R)

Now all the names of images that are about to be loaded will be logged to the Debug Area. For added fanciness, you can select Debugger Output there, so your trace won't be polluted by other log messages coming from your target.

For the curious: po $r2 will print the Cocoa object pointed to by the r2 register, which is where the first argument of an Objective-C message is stored - in our case, the name of the image to load. The value for self is stored in r0, and the value for _cmd (the selector to invoke) is stored in r1. Check out the documentation for objc_msgSend and friends to learn more about the Objective-C runtime.
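The register mapping follows directly from objc_msgSend's C signature under the armv7 calling convention. Here's a sketch of how the two relate (note that register assignments are an assumption tied to 32-bit ARM devices - on arm64 or the 64-bit simulator you'd look at $x2 or $rdx instead):

```objc
// Every Objective-C message send funnels through objc_msgSend.
// Under the armv7 convention, the first arguments land in r0-r3:
//   r0 = self (the receiver), r1 = _cmd (the selector), r2 = first argument
id objc_msgSend(id self, SEL _cmd, ...);

// So a call like [UIImage imageNamed:@"earth"] compiles down to roughly:
//   objc_msgSend(objc_getClass("UIImage"), sel_registerName("imageNamed:"), @"earth");
// which is why po $r2 prints the image name.
```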

If there's an easier way to accomplish this, let me know!

iOS 7 Predictions.


What about an article on iOS 7 that's based on fact and research instead of wishful thinking? You're welcome.

XPC and UIRemoteViewController

Let's start by introducing the two technologies that will be the basis for the entire discussion.


What is XPC? In the OSX docs, Apple defines it like this:

The XPC Services API, part of libSystem, provides a lightweight mechanism for basic interprocess communication integrated with Grand Central Dispatch (GCD) and launchd. The XPC Services API allows you to create lightweight helper tools, called XPC services, that perform work on behalf of your application.

In other words, it's a framework you can build daemons with. XPC is responsible for spawning your daemon every time you need its services, shutting it down when it's no longer needed, and providing a protocol you can use to talk to it.

They go on and mention how this, combined with sandboxing, can be used to improve the security of an app - in other words, adopting privilege separation:

With traditional applications, if an application becomes compromised through a buffer overflow or other security vulnerability, the attacker gains the ability to do anything that the user can do. To mitigate this risk, OS X provides sandboxing—limiting what types of operations a process can perform.
In a sandboxed environment, you can further increase security with privilege separation—dividing an application into smaller pieces that are responsible for a part of the application’s behavior. This allows each piece to have a more restrictive sandbox than the application as a whole would require.

You would split your app into multiple components with different privileges that communicate safely via XPC, so that if one is compromised, the damage will be limited.

This OSX technology, as it turns out, is also part of iOS 6.

Remote View Controllers

I first learned about Remote View Controllers reading Ole Begemann's awesome three part article. If you want the nuts and bolts, go read it now - I'll wait for you.

_UIRemoteViewController and _UIRemoteView are two new private classes in iOS 6, and they're built on top of XPC. They basically allow one process/app to seamlessly create a view controller whose contents are managed by another process/app, without changes in the presenter code.

As far as we can tell, the remote app will just have a normal View Controller and draw to its view as usual, but the contents of that view will be streamed in a secure way to a host view in the presenter app, instead of being displayed directly.

They are conceptually similar to Android's Activities, but a Remote View Controller:

  1. Will always run in a separate process
  2. Won't necessarily take over the whole screen/window, or take you out of your app
  3. Is presented like any other View Controller

And these features are really important. The first one is great news for security. The second allows you to avoid using the awful browser-style "back" mechanism to navigate between apps (and OS views), and gives the presenting app control over the flow. The third one makes it seamless for developers, and allows Apple to replace existing VCs with RVCs without breaking any apps.

So what?

Apple is already using these technologies internally on iOS 6; you might call it a private API. Let's see how these new technologies are used today and how they could be used tomorrow, both by Apple and third-party developers.

Applications, current and future

System View Controllers

In iOS 6, the system mail composer (MFMailComposeViewController) is a Remote View Controller. It works exactly like it always has - including respecting your UIAppearance directives - and the API to invoke it is the same, but it runs in a separate process.

Other presentable system controllers - the contact picker, photo picker, message composer, etc. - aren't… yet. My theory is that Apple is field-testing these technologies with iOS 6, to ensure everything works as expected. They're also dog-fooding them to their own developers to ensure the public APIs, once available, will be robust and easy to use.

New System View Controllers

I think RVCs will allow for at least one new system view controller to be built: the infamous in-app browser. Thousands of apps have their own, and I'm sure more will after iOS 7 comes out - and they're inconsistent with one another, and they miss a lot of Safari functionality. Not to mention all the time developers spend recreating basic browser functionality.

This new controller would look basically like Safari, but it would just have the page title and a "Done" button on the navigation bar - no address box or search box. No tabs. It would be presented modally by default (i.e. it would slide up from the bottom of the screen, just like the mail composer). And of course it would have Add to Bookmarks, Reading List, Reader, Nitro and all that Safari goodness. Maybe even iCloud tabs!

It might look like this:

Bonus feature: all WebViews will be RVCs, which means they can run Nitro with no security concerns.

The remote WebViews theory has been confirmed by recent findings in iOS 6.1. It also appears that full-screen iAds will be displayed using a remote web view controller. We'll see if Apple decides to add an in-app browser to the mix.

Default apps

Why can't you choose a default web browser, or mail client, in iOS? If you think about it, those two "apps" are deeply tied into the OS. Any app can invoke a mail composer, or host a WebView. But at least it's Apple code that gets invoked, so you kind of have to trust it - they're also making the sandbox, so if their code is broken, the entire security model falls apart. If they let any app respond to those events, it would mean running two third-party apps in the same process, which of course is a hole in the sandbox.

There's also the matter of having more stuff to configure which isn't very iOS-like, but you can easily solve that by hiding this somewhere deep in Settings where only dedicated users will find it. Or only having that section appear if you have already installed an app that claims it's a mail client or browser, in the same way an app can claim it's for routing.

With RVCs, any app could e.g. present a third-party app's mail composer in a secure way. Problem solved!

I don't think they're gonna let you choose a default Photo app, just because no one asked for it. Additionally, with the changes I'm going to talk about in the next section, Messages will be on par with third-party apps.


Another great addition to iOS 6 was the abstract UIActivity mechanism and its UI counterpart, UIActivityViewController. An app can now just say "I want to share an image", and iOS will pop up a control with possible sharing destinations. iOS provides a bunch of built-in destinations, including Twitter and Facebook. Third-party developers can provide additional destinations in the context of their own app, but not to other apps.

You guessed it: with RVCs, and a couple of new additions to the iOS SDK, third-party apps could expose UIActivities to the whole system, in a secure, sandboxed way. I'm thinking you would add a new "sharing bundle" to your app bundle - probably at most one per app - with some metadata indicating what kind of data your activity supports. This sharing bundle would probably contain a full, separate application, with its own Storyboard and everything, so it launches as fast as possible and doesn't waste resources - but hopefully, Xcode will allow some kind of easy code sharing with the main app.

What does this mean for end users? Their UIActivityViewControllers will start displaying activities for App Store apps they installed. So they can share a picture from Photos to Instagram, or a URL from Notes to Tumblr - with a consistent UI and navigational model, and without compromising the security of their device.

By the way, the Facebook sharing sheet is a Remote View Controller. The Tweet Sheet? Not yet.

Custom UIs in Settings

You're probably familiar with Settings Bundles. What if those bundles could include a RVC in iOS 7, so you can build your Settings UI however you want? That would allow all apps to move their preferences/settings screen to the Settings app, and end the user confusion caused by how those screens are sometimes in Settings and sometimes in the app.

This of course has all sorts of problems associated with it, e.g. the Settings app might lose its visual consistency. But nothing seems insuperable - after all, there's an app approval process for a reason.

Custom UIs in Notification Center

What if RVCs open the doors for developers to present their notifications differently? No, I'm not talking about widgets - let's hope that will never happen. I'm talking about the standard notifications we get on iOS every day. What if their UI could be made by a third-party?

I don't see a lot of value in changing the looks or functionality of the system "banners", or even the notifications in Notification Center. But this could allow for some sort of "quick-reply" functionality. Say you get a banner from Messages, and there's a button on the banner with a compose icon. You tap it and get a small Tweet Sheet-like reply composer for that message. You hit send and it just takes you back to whatever app you were using. Same thing for third-party apps.

Custom UIs in Siri

Let's assume for a second that there's a way for developers to teach something to Siri without disrupting the rest of its functionality, turning it into a dumb list of voice commands that you have to learn and repeat, or making the UX worse. I know it sounds crazy, but bear with me.

As you know, Siri rarely takes you to an app when you ask it (him? her?) something. You usually get a "card" right there in the Siri UI. Third-party app developers will need to be able to inject a card in Siri whenever it understands that what you said has to be handled by their app. With RVCs, this becomes the easy part of extending Siri.

Speculation (just for fun)

Here are a few more features that might come with iOS 7. Nothing in particular suggests these features will come, so take this with a grain of salt.

More (many more!) system activities

More activities with text

I'd like to make a Note out of this email I'm viewing in Mail, or a Reminder with its subject. I want to make a Calendar event out of any piece of text containing a date. What if the Cut/Copy popup had a Share button on it?

More activities with URLs

Make a note out of a webpage. Save a URL to your Safari bookmarks. Save a URL to your Reading List! I'm actually shocked the last one isn't in iOS 6 already.

Keychain in the Cloud

If I had to think of one of the worst things in terms of UX that we have today, I might say logging in. Typing passwords. What if your Mac's keychain synced with your iPhone's? You log in once on one device, and that's it. On the web you'd get autofill everywhere you signed in before. In apps, you would just be logged in, even just after installing.

Pasteboard in the Cloud

If you turn this on, when you copy something on one device, you could paste it on any other device, Macs included. I'd love to have this.

iPhoto in the Cloud

Wouldn't it be great if you could just sync all of your pictures and personal videos via iCloud? Combine this with iTunes Match, and you'll never have to sync with iTunes again. I think Apple is working on iTunes's bloat, just not how you would expect.

I think they would have to add new and bigger iCloud storage plans to accommodate all of this data, and they're not going to be free. But if they pull this off, iTunes as a sync backend will no longer be needed.


Realistically, probably not all of the above will make it to iOS 7. But if these technologies are brought to the surface, it's still going to be a radical transformation in the OS dynamics, and a whole new experience for users - without compromising what's good about iOS today. See you at WWDC.

P.S. here's my new favorite Cocoa identifier: PFUbiquityKnowledgeVector.

Core Animation Recipes.


With this article, I'd like to share some bite-sized pieces of information I've learned while working on some of my latest projects, including Thermal. There's also a sample project attached. Let's start by introducing Core Animation.

Core Animation is responsible for almost everything you see on your iOS device's screen. You're probably familiar with UIKit, a lot of which is built on top of Core Animation: every UIView is backed by a CALayer. UIKit provides a high-level and powerful framework that can be used to do 80% of everything. If you're reading this though, you probably think that's not good enough.

With Core Animation, you can do everything UIKit does and much more. However, this comes at a price: you get a lower-level interface, which is harder to learn and to use effectively. In some circumstances you'll need to know what the weak points of the hardware your app is running on are, just to get decent performance.

Core Animation is built on top of two even-lower-level frameworks: OpenGL and Quartz. Quartz is used for drawing/rendering 2D graphics. OpenGL takes care of compositing them, generating the scene that ultimately gets displayed - taking advantage of hardware acceleration.

So why write about it? The way I see it, Core Animation is one of the tools that allow you to take your app's UX to the next level. It's a key component in making apps beautiful, responsive, smooth, and realistic in the way they react to user input. Together, these attributes result in products that feel trustworthy, delightful and fun. And that's exactly how they should feel.

This article is all about Core Animation, so I thought it would be nice to only use that API. As a result, there will be no drawing code - no -drawRect: at all.

The players


CALayer

Core Animation's workhorse! Think about it as a lower-level UIView. Can display all kinds of content. Supports things like drop shadows and rounded corners out of the box. Almost all of its properties can be animated.

As I mentioned earlier, every UIView has a CALayer (you can get to it using the layer property). Every layer can have any number of sublayers, which are very similar to subviews in UIKit.


CAShapeLayer

A subclass of CALayer, specialized in rendering shapes. Whatever shape you can draw with a CGPath, CAShapeLayer can display.

shapeLayer.path = [UIBezierPath bezierPathWithOvalInRect:shapeLayer.bounds].CGPath;


CAGradientLayer

Another subclass of CALayer that can display all kinds of axial gradients (a.k.a. linear gradients), with any number of stops and colors.


Masks

CALayers support masking: they have a mask property that you can set to another CALayer. The alpha channel of the mask layer will determine what portion of the layer will be displayed.

Note, from Apple's docs:

The layer you assign to this property must not have a superlayer. If it does, the behavior is undefined.

In other words, a layer shouldn't be used as a mask for two (or more) layers. It just doesn't work. Clone the original mask layer every time you want to reuse it.
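One way to respect this rule is to build a fresh mask layer for every layer that needs one. Here's a sketch (firstLayer, secondLayer and the circular shape are made up for illustration):

```objc
// Build a fresh mask each time: a layer must never be shared between
// two layers' mask properties, or the behavior is undefined.
CAShapeLayer *makeCircleMask(CGRect bounds) {
    CAShapeLayer *mask = [CAShapeLayer layer];
    mask.frame = bounds;
    mask.path = [UIBezierPath bezierPathWithOvalInRect:bounds].CGPath;
    return mask;
}

// Two separate mask objects, even though they describe the same shape:
firstLayer.mask = makeCircleMask(firstLayer.bounds);
secondLayer.mask = makeCircleMask(secondLayer.bounds);
```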

Drop shadows

Drop shadows are very easy to implement.

Just set these properties on any CALayer:

  • shadowColor a shadow can be given any color. Do you want to make something glow? That's just a white shadow.
  • shadowOpacity a number in the [0..1] range that does what you think it does.
  • shadowRadius this represents the strength of the blur effect that is applied to the shadow, and thus its perceived size. Defaults to 0.
  • shadowOffset a shadow offset of [5,10] will drop the shadow 5px to the right and 10px below your layer's contents. Defaults to [0,0], which means that you won't see the shadow unless it has a non-zero radius.
  • shadowPath if you set this property, Core Animation will use this path as a basis to compute the shadow instead of using the alpha channel of the layer. You can use this to generate a shadow that has nothing to do with the layer's contents - the layer might even be empty.
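Put together, a basic drop shadow might look like this (a sketch; the layer, frame and values are made up, and I'm setting shadowPath up front for the performance reasons discussed later):

```objc
CALayer *cardLayer = [CALayer layer];
cardLayer.frame = CGRectMake(20.0, 20.0, 200.0, 120.0);
cardLayer.backgroundColor = [UIColor whiteColor].CGColor;

// The shadow itself:
cardLayer.shadowColor = [UIColor blackColor].CGColor;
cardLayer.shadowOpacity = 0.6;
cardLayer.shadowRadius = 4.0;
cardLayer.shadowOffset = CGSizeMake(0.0, 2.0); // 2px below the layer

// An explicit path, so Core Animation doesn't have to derive the shadow
// from the layer's alpha channel:
cardLayer.shadowPath = [UIBezierPath bezierPathWithRect:cardLayer.bounds].CGPath;

[self.view.layer addSublayer:cardLayer];
```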

Inner shadows

Inner shadows are not very easy to implement.

I've searched for solutions to this for a while. Apparently, people on the interwebz are doing this by first creating a shape that is the negative of the shape you want to give a drop shadow to, then giving a drop shadow to this negative shape, and then masking the negative shape with the original shape, so that only its shadow remains. This works, but generating the negative shape is very hard. So I've developed my own solution - it's similar but easier to implement, and it covers the vast majority of cases where you might want an inner shadow. It's a three step process:

  1. Create a path by stroking the path of the shape you want to have an inner shadow, using CGPathCreateCopyByStrokingPath()
  2. Create a sublayer, set its shadow properties as desired, and set its shadowPath to the stroked path
  3. Mask the sublayer with the original shape

In other words, what we're doing is making the contour of the shape drop a shadow, which of course will drop it inside and outside of the shape. But then we mask away the part of the shadow we don't need, which is very easy as the mask coincides with the shape.
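The three steps above could be sketched like this, inside a view's setup code (names and values are hypothetical; the stroke width bounds how far the shadow can spread inward):

```objc
CGPathRef shapePath = [UIBezierPath bezierPathWithOvalInRect:self.bounds].CGPath;

// 1. Stroke the contour of the shape.
CGPathRef strokedPath = CGPathCreateCopyByStrokingPath(shapePath, NULL, 10.0,
                                                       kCGLineCapButt,
                                                       kCGLineJoinMiter, 10.0);

// 2. A sublayer whose shadowPath is the stroked contour. The layer has no
//    contents of its own, so only its shadow gets drawn.
CALayer *innerShadowLayer = [CALayer layer];
innerShadowLayer.frame = self.bounds;
innerShadowLayer.shadowColor = [UIColor blackColor].CGColor;
innerShadowLayer.shadowOpacity = 0.8;
innerShadowLayer.shadowRadius = 3.0;
innerShadowLayer.shadowOffset = CGSizeZero;
innerShadowLayer.shadowPath = strokedPath;
CGPathRelease(strokedPath);

// 3. Mask with the original shape: only the part of the shadow that falls
//    inside the shape survives.
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = self.bounds;
maskLayer.path = shapePath;
innerShadowLayer.mask = maskLayer;

[self.layer addSublayer:innerShadowLayer];
```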

This said, I would be way happier if Core Animation supported inner shadows out of the box, like it does for drop shadows.

You can see this in action in the first screen of the attached example project, where you can also move your finger around to manipulate the shadowOffset property of some layers. Check out GBShadowsView.m in the sample project for the full code.


Infinite rotation

Some people have trouble implementing infinite rotation, which you might use e.g. for a custom spinner while waiting on network activity. Or to create an (astronomically inaccurate) space scene.

It's actually quite simple.

CABasicAnimation *earthRotationAnimation;
earthRotationAnimation = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
earthRotationAnimation.toValue = [NSNumber numberWithFloat:M_PI * 2.0];
earthRotationAnimation.duration = 10;
earthRotationAnimation.repeatCount = INFINITY;
[self.earthImageLayer addAnimation:earthRotationAnimation forKey:@"rotationAnimation"];

If your object isn't rotating around its center, just create an empty layer or view where you want your object to be, and add your object in the middle of it. Core Animation is smart enough that the performance hit from adding empty layers or views is negligible, if not zero.

In the sample app, the starry background is generated using a CAEmitterLayer, which is at the core of Core Animation's particle system. You can create a lot of nice effects with particles, so check the code in GBRotationView.m if you're interested.


A CAGradientLayer has two interesting properties, both animatable:

  • locations an array of NSNumber objects that define the gradient stops in unit coordinates, e.g. @[@(0.0f), @(0.3f), @(0.9f), @(1.0f)]. If nil, Core Animation will assume you want uniformly spread stops.
  • colors an array of CGColorRef objects that specifies the color of each stop.

To create the animated "barcode scanner" thingy in the picture, I set my CAGradientLayer's colors to [transparent, red, transparent] and then I animated the locations back and forth between [0.3, 0.35, 0.4] and [0.6, 0.65, 0.7], creating the illusion of a red laser moving up and down.
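That "barcode scanner" effect boils down to something like this (a sketch; the frame, colors and timings are approximated, and the transparent stops are transparent red so the gradient doesn't fade through gray):

```objc
CAGradientLayer *laserLayer = [CAGradientLayer layer];
laserLayer.frame = self.bounds;

// [transparent, red, transparent], as described above:
UIColor *clearRed = [UIColor colorWithRed:1.0 green:0.0 blue:0.0 alpha:0.0];
laserLayer.colors = @[(__bridge id)clearRed.CGColor,
                      (__bridge id)[UIColor redColor].CGColor,
                      (__bridge id)clearRed.CGColor];
laserLayer.locations = @[@0.3f, @0.35f, @0.4f];

// Animate the stops back and forth to make the "laser" sweep up and down:
CABasicAnimation *scanAnimation = [CABasicAnimation animationWithKeyPath:@"locations"];
scanAnimation.toValue = @[@0.6f, @0.65f, @0.7f];
scanAnimation.duration = 1.0;
scanAnimation.autoreverses = YES;
scanAnimation.repeatCount = INFINITY;
[laserLayer addAnimation:scanAnimation forKey:@"scanAnimation"];

[self.layer addSublayer:laserLayer];
```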

Doing this with a gradient is a terrible idea, but this might give you some insight into what can be accomplished with gradients alone. Can't wait to see what crazy animated psychedelic patterns you can come up with :)

Fun with masking

Did you know that if you give a shadow to a layer you'll use as a mask, the shadow will be part of the mask? And that everything will still be animatable?

What if we used this to create a cool, animated "flashlight effect"? Check out the sample project to see how this is done - it's very simple.

A note about performance

We used a bunch of shadows today. Keep in mind that computing the shadow from the layer's alpha channel (which is the default behavior) is a very slow operation. You should always set shadowPath, so Core Animation can skip that operation - this has a dramatic effect on performance.

At the top of GBMaskingView.m there's a SET_SHADOW_PATHS #define. Try setting that to NO, so you can see the huge performance hit with your own eyes.

I plan to write an entire article about performance on iOS, which will include a Core Animation section, so stay tuned.

Sample project

Download the sample project, hack away, and have fun. Tweet at me if you have questions, and I’ll make sure to update this article.

Planet images included with the sample project courtesy of some guy.



I redesigned my website. About time, uh? I went from Wordpress to static HTML. Was this a smart move? Not sure yet, but time will tell. I called this visual style Tachikoma - because it's simple, fun and… well, blue.

My goal with the redesign was to create one of the few websites in the world that are better when Safari Reader or Readability are off. Try it, and let me know.

A new article will be out soon™.

I’ve just been struck by a simple idea that will most likely fix what many have been referring to as “iMessage overload”, i.e. how all of your devices notify you of incoming iMessages. It gets quite annoying, especially if you own an iPhone, an iPad, and a Mac (or many!). So I thought I’d share.

The operating systems that run on these devices, iOS and OSX, have a built-in idle timer. In other words, they can detect if and when you’re actually using a certain device. That’s how iOS devices auto-lock after you haven’t used them for a while, or how your screensaver kicks in on OSX.

I propose we use the idle timer to decide which devices should alert you if an iMessage arrives. iMessages should always be pushed to all of your devices, but the notification (sound/popup/badge/icon bouncing…) will only be delivered on the one you’re actually using.

Let’s say we consider a device to be idle for iMessage after 5 minutes in which no interaction occurs with it. So even if the device’s screen is on, after 5 minutes it’s considered idle.

We would only notify the user on the devices that are not idle. If all of your devices are idle, then we’ll notify you on all of them, including notifications of iMessages that have been pushed but not yet read.

Here’s an example. You’re sitting on your couch, watching a movie on your 27″ iMac. Your iPhone is in your pocket, and you’re iMessaging on your iPad. You probably haven’t interacted with your iMac in a while, so your iPad will be the only device considered active, and the one that will notify you. All is good. But the movie ends, and you leave your house, carrying only your iPhone with you. Now consider the following scenarios:

  • You check the time on your iPhone while leaving the house. Now, and for the next 5 minutes, both your iPhone and iPad will be considered active. For 5 minutes, they’ll both notify you of incoming iMessages. You don’t really care, because your iPad is at home. After 5 minutes, your devices back at home will be quiet and you’ll still be notified promptly on your iPhone.
  • You leave your iPhone in your pocket while leaving the house. For 5 minutes (at worst) you won’t be notified of iMessages unless you touch your iPhone (just checking the lock screen for notifications, without unlocking, would be enough). But, as soon as your iPad becomes idle, you’ll receive notifications for all the iMessages that you missed right on your iPhone (and Mac), and by reading one you will make your iPhone the only active device, which means that future iMessages will pop up on your iPhone only.

I believe this approach would mitigate the issue so much that most people won’t even worry about it, or realize it’s an issue at all. Technically, it’s just a matter of storing the IDs of your active and idle devices on iCloud, something Apple can surely do while blindfolded. Worst case scenario, you will be notified of incoming messages 5 minutes late. If you’re really anxious about an incoming message, just check the lock screen and you’ll be notified in an instant – the messages are already there.

Why 5 minutes? It sounds like a reasonable balance: receiving iMessages 5 minutes late is not a big deal, and if you haven’t used a device for 5 minutes it’s pretty safe to declare it idle. I’m sure Apple can find a better timeout if needed. They could also use the auto-lock times you specify on your iOS devices to give you some control over this, or the screensaver timeout on OSX.

What do you think? Am I missing something here, or is it that simple to fix this?

[UPDATE] Sure enough, Apple started doing basically this shortly after I wrote this article. Their magic number, however, is ~10 seconds. In hindsight, 5 minutes is way too long, but 10 seconds still feels too short - too many times I received a notification on my Mac while I was reaching for my iPhone. I'm curious to see if they're going to adjust that number in the future.

This article describes a technique that allows full table cell customization with great performance and compatibility with Cocoa’s default behaviors.

Customizing UITableViewCells (from now on, “cells”) is something you will eventually do if you use tables in your apps and need more flexibility than what the standard UITableViewCellStyle* options offer. I’ve come across a number of ways to do this, some good enough for some purposes, some really awful. There are very easy methods that result in terrible performance, and very hard ones that just don’t work in some scenarios. Custom drawing is the way to get performance, but if you don’t do it correctly you’ll break the standard behavior of cells for, e.g., selection and animation.

Apple has its own example. It works very well for the purposes of the demo app, but doesn’t work well in general. Try drawing a custom cell background in the -drawRect: method and you’ll understand. The animation on cell selection is gone. What’s going on?

Another good example comes from Atebits’ blog, where the secret behind the performance and customization of table cells in Tweetie (now Twitter) is unveiled. Well, kind of. That’s just part of the story, but it allowed me to better understand what influences scrolling performance on iOS.

In both these examples, cell selection behavior is simulated by drawing on a transparent background, at least when the cell is in a selected state. This way, the blue selected background is visible underneath the custom drawn content. This doesn’t work well with the deselection animation, though: if you look closely, you’ll see that the custom drawn content does not animate from the selected to the not selected state. This isn’t a great problem if you’re just drawing text, but for some kinds of content this flashing can be really distracting or just plain ugly. It also involves animation with transparent views, which might not be fast enough in some scenarios.

Can we do better?

To start off, we need to clearly understand how cell drawing works. Please note that this article only applies to plain cells. Grouped cells are another story, but if you understand the technique proposed here and study Apple’s documentation, you’ll be able to customize those as well.

Two views are involved: the Content View and the Selected Background View. Both are properties of any UITableViewCell object.

  • In the normal – not selected – state, the selectedBackgroundView is nil. The contentView draws the content on a transparent background. The background color you see is the background color of the table (UITableView).
  • When a cell is selected, Cocoa creates a (blue by default) selectedBackgroundView and places it just below the contentView. The contentView is also redrawn to reflect this state change, e.g. text usually goes from black to white.
  • When a cell is deselected, an animation starts. The selectedBackgroundView alpha property is animated from 1.0 (fully opaque) to 0.0 (fully transparent). Since there’s the table background below, this looks like a crossfade between the two backgrounds. At the end of the animation, the contentView is redrawn in the not selected state.

This animation is very important when used in combination with the ubiquitous navigation controllers: when the user taps the back button, it shows them where they came from, making navigation easier. That’s why Apple’s Human Interface Guidelines recommend it.

Apple also recommends using as few views as possible, and keeping them fully opaque. This is a huge performance boost, since iOS can avoid compositing multiple transparent views – a slow operation.

Ideally, we should just use one, custom-drawn opaque view for our cells. That’s what we’re going to do! This means we’re going to draw background and content on the contentView. Do you see the problem? Yes, no animation! iOS will simply redraw your view at the end of the selectedBackgroundView animation, which will be invisible because it’s completely covered by our opaque contentView. What can we do? The simple answer is: animate the cell ourselves.

Here are the steps we’ll follow:

  1. We’ll properly subclass UITableViewCell and make it use our custom view.
  2. We’ll write drawing code for our custom view: it will draw the background and the content, properly reflecting our cell’s selection state.
  3. We’ll override the -setSelected:animated: method of UITableViewCell and use it to perform our custom animation.

Subclassing UITableViewCell

This is the easy part. Create a UITableViewCell subclass, and override the -initWithStyle:reuseIdentifier: method, adding something like this:

CGRect viewFrame = CGRectMake(0.0, 0.0,
                              self.contentView.bounds.size.width,
                              self.contentView.bounds.size.height);
self.customView = [[[CustomTableViewCellView alloc] initWithFrame:viewFrame
                                                             cell:self] autorelease];
[self.contentView addSubview:self.customView];

As you can see, we’re passing the cell object to the view. This will enable us to understand what state the cell is in when drawing. Our custom view is added to the contentView as a subview, filling it entirely. Since all content fields (imageView, textLabel…) of the superclass are nil until assigned a value, and we have an opaque view covering everything, the only thing the system will actually draw is our custom view, resulting in super smooth scrolling.
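Put together, the override might look something like this (CustomTableViewCellView is the custom view class we’ll build next; the customView property and the autoresizing mask are illustrative choices):

```objc
// Hypothetical UITableViewCell subclass wiring in the custom view.
- (id)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier {
    self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
    if (!self) return nil;

    self.customView = [[[CustomTableViewCellView alloc]
        initWithFrame:self.contentView.bounds cell:self] autorelease];
    // Keep the custom view filling the contentView if the cell is resized.
    self.customView.autoresizingMask =
        UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
    [self.contentView addSubview:self.customView];
    return self;
}
```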

Creating a custom view

Subclass UIView. Make sure you have a “constructor” (init method) that:

  1. Accepts a UITableViewCell as a parameter and stores it in a class property for later use.
  2. Sets the view’s opaque property (self.opaque) to YES.
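A sketch of such an init method, assuming the cell is stored in a property named cell (assigned, not retained, to avoid a retain cycle between cell and view):

```objc
// Hypothetical designated initializer for the custom cell view.
- (id)initWithFrame:(CGRect)frame cell:(UITableViewCell *)cell {
    self = [super initWithFrame:frame];
    if (!self) return nil;

    self.cell = cell;   // assign, not retain: the cell owns this view
    self.opaque = YES;  // lets iOS skip compositing – the performance win
    self.backgroundColor = [UIColor whiteColor]; // example background
    return self;
}
```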

Override the -drawRect: method and add your custom drawing code; it may look like this:

if (self.cell.selected || self.cell.highlighted) {
    // draw the cell (background and content)
    // in the selected/highlighted state
} else {
    // draw the cell (background and content)
    // in the normal state
}

Notice we’re giving the same meaning to the selected and highlighted properties. In some situations, you might want to do something more sophisticated with them.

Animating between states

First, in our UITableViewCell subclass, we’re going to override the -setSelected:animated: method, with code like this:

if (animated) {
    // animation code
    [super setSelected:selected animated:NO];
    // more animation code
} else {
    [super setSelected:selected animated:NO];
}

As you can see, we’re always making the superclass change its selection state without animation. We don’t want that animation interfering with ours, and wasting resources.

For the actual animation (remember, it’s a crossfade from the normal state to the selected state), we’re going to use a bitmap technique. What we’ll do is:

  1. Take a bitmap “screenshot” of the current cell appearance (not selected)
  2. Call [super setSelected:selected animated:NO] – this causes our cell to be redrawn in the “selected” state
  3. Take a bitmap “screenshot” of the current cell appearance (selected)
  4. Create two UIImageView objects and initialize them with the two bitmaps captured in steps 1 and 3
  5. Add both UIImageView objects as subviews of the contentView
  6. Create an animation which fades in the second UIImageView object while fading out the first one
  7. At the end of the animation, remove both UIImageView objects from the contentView
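The steps above might be sketched like this inside -setSelected:animated: (the helper -bitmapOfContentView is hypothetical, wrapping the screenshot code shown below; details such as forcing a synchronous redraw before the second screenshot are glossed over):

```objc
// Hypothetical sketch of the bitmap crossfade setup (steps 1–5).
UIImage *before = [self bitmapOfContentView];        // step 1: current state
[super setSelected:selected animated:NO];            // step 2: switch state
[self.customView setNeedsDisplay];                   //   …and request a redraw
UIImage *after = [self bitmapOfContentView];         // step 3: new state

UIImageView *beforeView = [[[UIImageView alloc] initWithImage:before] autorelease]; // step 4
UIImageView *afterView  = [[[UIImageView alloc] initWithImage:after] autorelease];
afterView.alpha = 0.0;
[self.contentView addSubview:beforeView];            // step 5
[self.contentView addSubview:afterView];
// steps 6–7: fade afterView in and beforeView out, then remove both
```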

The “screenshot” code will probably look like this:

// Open a bitmap context, render the contentView's layer into it,
// then grab the result as a UIImage.
UIGraphicsBeginImageContext(self.contentView.bounds.size);
[self.contentView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *bitmapSelected = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

The animation code will probably be similar to the following:

bitmapSelectedView.alpha = 1.0;
bitmapDeselectedView.alpha = 0.0;

[UIView beginAnimations:@"deselect" context:nil];
[UIView setAnimationDuration:0.5f];
[UIView setAnimationDelegate:self];
[UIView setAnimationDidStopSelector:@selector(animationDidStop:finished:)];
bitmapSelectedView.alpha = 0.0;
bitmapDeselectedView.alpha = 1.0;
[UIView commitAnimations];

In your -animationDidStop:finished: method, you will call -removeFromSuperview on both bitmapSelectedView and bitmapDeselectedView. I suggest you use tags for this purpose, instead of retained class properties. Even better, if you’re targeting iOS 4.0 and above, use the block-based animation methods of UIView.
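With block-based animation, the fade and the cleanup collapse into a single call: the completion block replaces the delegate, the -animationDidStop:finished: selector, and the tags (bitmapSelectedView and bitmapDeselectedView are the image views from the snippet above):

```objc
// iOS 4.0+ equivalent of the begin/commit animation code.
[UIView animateWithDuration:0.5
                 animations:^{
                     bitmapSelectedView.alpha = 0.0;
                     bitmapDeselectedView.alpha = 1.0;
                 }
                 completion:^(BOOL finished) {
                     [bitmapSelectedView removeFromSuperview];
                     [bitmapDeselectedView removeFromSuperview];
                 }];
```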

I’m sure you can figure out the rest on your own.


I hope this will help some of you out there. I know I struggled to find something like this, and had to develop my own solution. This kind of custom cell is a nice addition to my apps, allowing for flexibility and performance.

An alternate technique might involve using two views, one as a subview of the contentView and another one as a subview of the selectedBackgroundView. This might allow full customization with good enough performance for most scenarios, and it might be a little easier to understand. But I think the approach described above is both simpler and faster. If you don’t want to have drawing code for the background and the content in the same class, just do like I do, and create a hierarchy of UIView subclasses.

Note: this technique doesn’t work in situations where Cocoa resizes your contentView, of course: the table background becomes visible around your now-smaller contentView. So you’ll also have to implement a custom “edit mode” and a custom “index bar” if you’re planning to use these features.

If you have any questions, feel free to tweet at me, and I’ll get back to you!