App Performance in iOS

Joe Williams
May 13, 2019

This blog post is an extension of a talk I gave at Mobile Monthly in Leeds at the Sky Betting & Gaming offices. If you’re local to Leeds, come down to the next one!

Why?

App performance is hugely important for developers, but it can sometimes be overlooked. Often, we are in a position of privilege, meaning we only ever really run our apps on the highest-spec devices, with huge data plans. This means we rarely see the consequences of our apps not being performant.

I was recently working on a project that involved flying a drone. I had an app which took raw image data from the drone’s camera feed, processed it, and scanned the barcode on a parcel. I needed to ensure that an Out of Memory event didn’t occur, leading to a potentially fatal crash where somebody could get hurt. As a result, I needed to make the app as performant as possible. Now, in even more recent projects, app performance has become one of the central considerations when developing a feature.

Knowing where to get started can be overwhelming, though. The aim of this post, therefore, is to show you how to profile your app and fix common performance problems.

Three-Pronged Approach

When doing any kind of development work, it’s important to take a relatively logical approach to solving problems. It allows us to retrace our steps, and keep an eye on changes we’ve made. When looking for performance bottlenecks, I like to split this into three reasonable steps: search, fix, test.

Searching with Instruments

Let’s take a look at a common example. I have a demo app called Twinstagram that shows high-resolution images, along with how many people have liked each one. However, the scrolling performance is incredibly laggy, and on older devices it’s totally unusable.

As you can see on an iPhone 5s, it’s really hard to use this app. In this situation, we could go ahead and make assumptions about what might be causing these issues, but it’s better to act on evidence when making optimisations. Not only that, but evidence gives us the opportunity to actually measure these improvements and provide metrics for what we’ve done.

Instruments

Instruments is a hugely powerful set of tools that gives us really great insight into what’s going on under the hood in our application. A great tool from this arsenal is the Time Profiler, which performs time-based sampling of processes running on the system’s CPU. In practice, that means we can visualise all the threads we’re operating on and inspect the call trees that led to each point.

The problem with Instruments is that it can be incredibly overwhelming and intimidating, so let’s take a look at it together in action.

There’s a tonne going on here, so it’s important to take a little time to break it down and look at what’s happening.

The top section, marked 1, shows CPU and thread usage graphically, so we can see where spikes occur and when they happened. It allows you to filter on specific moments in time and zoom in or out. For us, we’ll filter on the period when we were scrolling.

The lower section, 2, shows the call tree and the weight of each process on that specific thread. In the red circle, we can see that 68% of all the work we’re doing is being done on the main thread. As we drill down into these processes, we get detailed information about where the calls are being made. This is where things can become overwhelming.

From this broad picture, without understanding much about Instruments, we can reason that something other than UI work is being performed on the main thread. We can therefore make a well-reasoned assumption that this main thread overload is one of the contributing factors to slow scrolling. As we drill into these calls by clicking the arrows, we can see what’s calling what, so we know precisely where to make our improvements.

Tip: If you don’t know where to look in the call tree, focus on areas that have the highest weighting. Clicking the arrow whilst holding option will open the call tree to the highest weighted point.

Now comes the detective part. As we can see in the above, lots of these calls are coming from Apple APIs, particularly ImageIO and QuartzCore. It can be tempting to ignore these calls; after all, it’s not code that you have explicitly written, right? Ignore these calls at your own peril!

What we can identify is that a large part of the performance bottleneck is a direct result of image processing on the wrong thread. So, with that knowledge at hand, we can head back to our code and look for places where we’re doing something with images on the main thread.

Fixing: Round 1

As we can see, we’re loading raw data into an image on the main thread, and this is the root cause of our bottleneck. We can refactor to do this work on the right thread, re-run the profiler, and see whether we’ve made any discernible change.
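
The post’s original code isn’t reproduced here, so here’s a minimal sketch of that refactor, assuming a hypothetical loadImage(from:into:) helper: decode the image data on a background queue, then hop back to the main thread for the UIKit work.

```swift
import UIKit

// A sketch only: decode off the main thread, update UIKit on the main thread.
// The helper name and its call site are assumptions, not the post's exact code.
func loadImage(from data: Data, into imageView: UIImageView) {
    DispatchQueue.global(qos: .userInitiated).async {
        // UIImage(data:) does the expensive decoding, so keep it off the main thread.
        let image = UIImage(data: data)

        DispatchQueue.main.async {
            // All UIKit work stays on the main thread.
            imageView.image = image
        }
    }
}
```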

And just like that, we’ve identified a bottleneck, fixed it, and improved scrolling performance. We did what Instruments told us: moved the heavy processing onto a background thread. Now we can all go home, right? Not quite.

Let’s take another trip down Instruments lane, this time using Allocations.

We’re making the configure(cell:indexPath:) call in the cellForItem method, meaning each one of these images is downloaded every time it scrolls into view. Moreover, they are taking up huge amounts of memory, between 3 and 7 MB each. On older devices, it’s not going to be long until you hit an Out of Memory (OOM) event. So, although we’ve “improved” performance visually, we haven’t completely solved the issue. We now need to look at how we can reduce that memory consumption.

OperationQueue and Image Downsampling

GCD is a fantastic API that gives us the ability to manage asynchronous tasks and control threading. The limitation of it, however, is that you don’t have much control over those tasks. You can’t choose the order, cancel them when they’re no longer vital, or resume them should you wish. NSOperation is a great layer built on top of GCD that gives you all this.

We use it by creating a custom operation subclass through which we can perform our own tasks.

Let’s break this down. We initialise the operation with a PhotoAPI struct, which has a mutable image property we can set with a newly downloaded, downsampled image. If an image is already stored in our global NSCache, we return that straight away, saving both the re-download and the CPU-intensive downsampling. Otherwise, we download, downsample, and cache the result. We also have a reference to the operation’s isCancelled bool, which means that if we scroll away, we can cancel the operation and restart it later on.
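
The operation might look something like the sketch below. The details here (the URL-based cache key, the ImageIO downsampling options, and modelling PhotoAPI as a class so the image set inside the operation is visible to the caller) are my assumptions rather than the post’s exact code.

```swift
import UIKit
import ImageIO

// Hypothetical model matching the description above. Modelled as a class so the
// mutation performed inside the operation is visible to the caller.
final class PhotoAPI {
    let url: URL
    var image: UIImage?
    init(url: URL) { self.url = url }
}

// The global cache, keyed by image URL.
let imageCache = NSCache<NSURL, UIImage>()

final class ImageDownloadOperation: Operation {
    private let photo: PhotoAPI
    private let maxPixelSize: CGFloat

    init(photo: PhotoAPI, targetSize: CGSize, scale: CGFloat) {
        self.photo = photo
        self.maxPixelSize = max(targetSize.width, targetSize.height) * scale
        super.init()
    }

    override func main() {
        if isCancelled { return }

        // If we've already downloaded and downsampled this image, use the cache
        // and skip both the network and the CPU work.
        if let cached = imageCache.object(forKey: photo.url as NSURL) {
            photo.image = cached
            return
        }

        // Downsample straight from the URL with ImageIO so the full-size image
        // is never decoded into memory.
        let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
        guard let source = CGImageSourceCreateWithURL(photo.url as CFURL, sourceOptions),
              !isCancelled else { return }

        let downsampleOptions = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceShouldCacheImmediately: true,
            kCGImageSourceCreateThumbnailWithTransform: true,
            kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
        ] as CFDictionary

        guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else { return }

        let image = UIImage(cgImage: cgImage)
        imageCache.setObject(image, forKey: photo.url as NSURL)
        photo.image = image
    }
}
```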

Now, in our CollectionViewController, we simply call startDownload(for:at:) with the relevant information, and we’ll downsample, cache and return the image. PendingOperations is simply a class that lives for the duration of the view controller and holds a reference to the operation for each IndexPath. We can then hook into the UIScrollView delegate methods to suspend and resume operations as required.
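
A sketch of that controller-side plumbing, building on the operation above. The class names, queue width and 300pt target size are placeholders of mine, not the post’s exact code.

```swift
import UIKit

// Tracks in-flight operations per IndexPath, as described above.
final class PendingOperations {
    lazy var downloadQueue: OperationQueue = {
        let queue = OperationQueue()
        queue.name = "Image download queue"
        queue.maxConcurrentOperationCount = 2
        return queue
    }()
    var downloadsInProgress: [IndexPath: Operation] = [:]
}

final class PhotosViewController: UIViewController, UICollectionViewDelegate {
    var collectionView: UICollectionView!
    var photos: [PhotoAPI] = []
    let pendingOperations = PendingOperations()

    func startDownload(for photo: PhotoAPI, at indexPath: IndexPath) {
        // Don't start a second operation for a cell we're already working on.
        guard pendingOperations.downloadsInProgress[indexPath] == nil else { return }

        let operation = ImageDownloadOperation(photo: photo,
                                               targetSize: CGSize(width: 300, height: 300),
                                               scale: UIScreen.main.scale)
        operation.completionBlock = { [weak self, weak operation] in
            guard let operation = operation, !operation.isCancelled else { return }
            DispatchQueue.main.async {
                self?.pendingOperations.downloadsInProgress.removeValue(forKey: indexPath)
                self?.collectionView.reloadItems(at: [indexPath])
            }
        }
        pendingOperations.downloadsInProgress[indexPath] = operation
        pendingOperations.downloadQueue.addOperation(operation)
    }

    // UIScrollViewDelegate (inherited by UICollectionViewDelegate): pause the
    // queue while the user is actively dragging, resume once scrolling settles.
    func scrollViewWillBeginDragging(_ scrollView: UIScrollView) {
        pendingOperations.downloadQueue.isSuspended = true
    }

    func scrollViewDidEndDecelerating(_ scrollView: UIScrollView) {
        pendingOperations.downloadQueue.isSuspended = false
    }
}
```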

One Step Beyond

At this point, we’ve done what we set out to do: we’ve fixed the scrolling performance, and we’ve managed that huge memory consumption. There are further improvements we could make if we had the time, though. We could look at only loading the images for visible cells, so we’d only download the images a user has specifically requested; a rough sketch of that idea follows below.
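
Building on the sketch above, one way to do this: when scrolling settles, cancel in-flight work for cells that have scrolled away and only start downloads for the index paths that are actually visible. The helper name is hypothetical.

```swift
extension PhotosViewController {
    func loadImagesForVisibleCells() {
        let visiblePaths = Set(collectionView.indexPathsForVisibleItems)
        let inFlightPaths = Set(pendingOperations.downloadsInProgress.keys)

        // Cancel anything the user can no longer see.
        for indexPath in inFlightPaths.subtracting(visiblePaths) {
            pendingOperations.downloadsInProgress[indexPath]?.cancel()
            pendingOperations.downloadsInProgress.removeValue(forKey: indexPath)
        }

        // Kick off downloads only for what's on screen.
        for indexPath in visiblePaths.subtracting(inFlightPaths) {
            startDownload(for: photos[indexPath.item], at: indexPath)
        }
    }
}
```

You would call loadImagesForVisibleCells() from scrollViewDidEndDecelerating, just after resuming the queue, so work is only queued for what the user can actually see.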

Making Gains Elsewhere

Now that we’ve fixed the performance bottlenecks, we can look at making broader performance improvements.

We’ll start by tracking dropped frames in our application. This is a well-covered topic, and particularly useful for app developers who are building games. It’s important to maintain a frame rate of 60fps, which gives each frame a budget of roughly 16.7ms. Here’s how to set up a CADisplayLink to track and log dropped frames. For the purposes of this blog, I’ve set it up in the AppDelegate.
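
A minimal sketch of such a monitor is below; the class name, the 1.5× threshold and the logging are my assumptions, and the post’s own version may well differ.

```swift
import UIKit

// Logs frames that overrun their budget by comparing CADisplayLink timestamps.
final class FrameRateMonitor: NSObject {
    private var displayLink: CADisplayLink?
    private var lastTimestamp: CFTimeInterval = 0

    func start() {
        displayLink = CADisplayLink(target: self, selector: #selector(tick(_:)))
        displayLink?.add(to: .main, forMode: .common)
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        defer { lastTimestamp = link.timestamp }
        guard lastTimestamp > 0 else { return }

        // How long the last frame actually took versus the ~16.7ms budget at 60fps.
        let actual = link.timestamp - lastTimestamp
        let budget = link.duration
        if actual > budget * 1.5 {
            print(String(format: "Dropped frame(s): %.1fms elapsed, budget %.1fms",
                         actual * 1000, budget * 1000))
        }
    }
}

// In the AppDelegate, keep a reference and start it at launch, e.g. create a
// FrameRateMonitor property and call start() in application(_:didFinishLaunchingWithOptions:).
```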

There are a number of potential causes of a slow frame rate, but we’ll cover three.

Transparent Views

Transparent views are typically views with an alpha below 1, or with no backgroundColor set. Transparent views mean blending, and blending means CPU and GPU usage we definitely don’t need. In large lists of content, this can hog memory, reduce battery life, and tank performance. You can identify these views by checking the Color Blended Layers debug option in the simulator.

To fix a transparent view, simply set its alpha to 1 and its backgroundColor to match the colour of the parent view. Of course, this can’t always be achieved, but where it can, it should be your default choice.
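
A tiny sketch of that fix; the helper name is hypothetical.

```swift
import UIKit

// Give the view an opaque background matching its parent so it no longer
// shows up under Color Blended Layers.
func makeOpaque(_ view: UIView, toMatch container: UIView) {
    view.alpha = 1
    view.isOpaque = true
    // Matching the parent's colour keeps the change invisible to the user.
    view.backgroundColor = container.backgroundColor ?? .white
}
```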

Off-screen Rendering

Off-screen rendering occurs when an app draws a bitmap in memory instead of directly on the screen. As a result, the render server has to render a layer it doesn’t fully know about, meaning it must switch contexts between off-screen and on-screen. These context switches can really hurt performance, particularly on older devices with less CPU/GPU capacity.

Off-screen rendering usually crops up when rounding corners, adding drop shadows to a view, or modifying a view’s layer directly. Instead, it’s better to round with a UIBezierPath and mask the layer. Alternatively, when rounding UIImages you can go one step further and draw the image inside a specific path within a UIGraphicsContext.
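
Both approaches might look something like the sketch below; the function names, sizes and radii are placeholders of mine.

```swift
import UIKit

// 1. Round a view's corners by masking its layer with a UIBezierPath,
//    rather than setting cornerRadius and masksToBounds directly.
func roundCorners(of view: UIView, radius: CGFloat) {
    let path = UIBezierPath(roundedRect: view.bounds, cornerRadius: radius)
    let mask = CAShapeLayer()
    mask.path = path.cgPath
    view.layer.mask = mask
}

// 2. For images, go a step further and draw the image inside a rounded path
//    within a graphics context, so the rounding is baked into the bitmap itself.
func roundedImage(from image: UIImage, size: CGSize, radius: CGFloat) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        let rect = CGRect(origin: .zero, size: size)
        UIBezierPath(roundedRect: rect, cornerRadius: radius).addClip()
        image.draw(in: rect)
    }
}
```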

Auto Layout

Auto Layout could fill a blog post of its own on how to improve its efficiency. For the sake of brevity, I’ll simply link out to an excellent WWDC talk here, which goes over some of the improvements you can make; my suggestions and key takeaways largely come from the WWDC sessions I’ve watched.

Wrapping Up

Performance is important in any app, and it shouldn’t be overlooked. I think the most important takeaway is that performance should be seen as an extension of accessibility. As software engineers, we’re generally in a position of privilege: we have the highest-spec phones, with huge data plans. This typically means that, without good principles in place, we can overlook lower-end devices and network consumption. If we built our apps without these principles in mind, we’d be excluding those who can’t afford, or simply don’t have access to, those options. Therefore, it’s imperative we build our apps with network and battery consumption, scroll performance, and so on in mind.

Secondly, don’t be afraid of digging around in Instruments. The worst that can happen is you learn something! It’s intimidating but totally worth it in the long run, and the impact on your app will no doubt be huge. Good luck!


Joe Williams

iOS Engineer @ Sky Betting & Gaming | Indie Dev @ Expodition Podcast App