5 Myths About Mobile Web Performance
Recently we’ve heard some myths being repeated about mobile HTML performance that are not all that accurate. Like good urban myths, they sound compelling and plausible. But these myths are based on incorrect premises, misconceptions about the relationship between native and web software stacks and a scattershot of skewed data points. We thought it was important to address these myths with data that we’ve collected over the years about performance, and our own experiences doing optimizations of mobile web app performance.
Myth #3: Mobile browsers are already fully optimized, and have not improved much recently
Reality: Every mobile browser has a feature area where it outperforms other browsers by a factor of 10–40x. The Surface outperforms the iPhone on SVG by 30x. The iPhone outperforms the Surface on DOM interaction by 10x. There is significant room to further improve purely from matching current best competitive performance.
Myth #4: Future hardware improvements are unlikely to translate into performance gains for web apps
OK: The Essentials
“At Sencha, we know that a good developer can create app experiences that are more than fast enough to meet user expectations using mobile web technologies and a modern framework like Sencha Touch.”
Very few mobile apps are compute intensive: no one is trying to do DNA sequencing on an iPhone. Most apps have a very reasonable response model: the user does something, the app responds visually with immediacy at 30 frames per second or more, and the task completes in a few hundred milliseconds. As long as an app meets this user goal, it doesn’t matter how big an abstraction layer it has to go through to get to silicon. This is one of the red herrings we’d like to cook and eat.
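That response model lends itself to a simple development-time guardrail. Here is a minimal sketch; the `withBudget` helper and the 300 ms figure are our illustration of the “few hundred milliseconds” rule of thumb, not a standard API:

```javascript
// Wrap an event handler and warn when it blows a response-time budget.
// The 300 ms default is illustrative, matching "a few hundred milliseconds".
function withBudget(handler, budgetMs = 300) {
  return function (...args) {
    const start = Date.now();
    const result = handler.apply(this, args);
    const elapsed = Date.now() - start;
    if (elapsed > budgetMs) {
      console.warn(`Handler took ${elapsed} ms, over the ${budgetMs} ms budget`);
    }
    return result;
  };
}

// Usage: button.addEventListener("click", withBudget(renderDetailView));
```

Wrapping handlers this way is a cheap way to spot interactions that drift past the budget during development.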
At Sencha, we know that a good developer can create app experiences that are more than fast enough to meet user expectations using mobile web technologies and a modern framework like Sencha Touch. And we’ve been encouraged by the performance trends on mobile over the last 3 years. We’d like to share that data with you in the rest of the post.
It’s not our intention to claim that mobile web apps are always as fast as native, or that they are always comparable in performance to desktop web apps. That wouldn’t be true. Mobile hardware is between 5x and 10x slower than desktop hardware: the CPUs are less powerful, the cache hierarchy is flatter, and the network connection has higher latency. And any layer of software abstraction (like the browser) can charge a tax. This is not just a problem for web developers. Native iOS developers can tell you that iOS CoreGraphics was too slow on the first Retina iPad, forcing many of them to write directly to OpenGL.
Digging Deeper Into the Myths
For even more detail on hardware and software advances in iOS hardware, see Anandtech’s review from last October.
There have been similar levels of improvement in the Android platform. From our testing lab, we assembled a collection of Android phones from the last three years that we believe represent typical high-end performance for their time. The four phones we tested were:
- Samsung Captivate Android 2.2 (Released July 2010)
- Droid Bionic Android 2.3.4 (Released September 2011)
- Samsung Galaxy Note 2 Android 4.1.2 (Released September 2012)
- Samsung Galaxy S4 Android 4.2.2 (Released April 2013)
As you can see below, there has also been a dramatic improvement in SunSpider scores over the last four years. The performance jump from Android 2.x to Android 4.x is a 3x improvement.
In both cases, the improvement is better than what we’d expect from Moore’s Law alone. Over 3 years, we’d expect a 4x improvement (2x every 18 months), so software is definitely contributing to the improvement in performance.
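The arithmetic behind that expectation is simple enough to spell out. A quick sketch of the calculation, not part of any benchmark:

```javascript
// Expected speedup from Moore's Law-style doubling:
// one doubling of performance every 18 months.
function mooresLawSpeedup(months, doublingPeriodMonths = 18) {
  return 2 ** (months / doublingPeriodMonths);
}

mooresLawSpeedup(36); // 3 years = 2 doublings = 4x expected
```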
Testing What Matters
As we previously mentioned, SunSpider has become a less interesting benchmark because it only has a faint connection with the performance of most applications. In contrast, DOM interaction benchmarks as well as Canvas and SVG benchmarks can tell us a lot more about user experience. (Ideally, we would also look at measurements of CSS property changes as well as the frames per second of CSS animations, transitions and transforms — because these are frequently used in web apps — but there are still no hooks to measure all of these correctly on mobile.)
First, DOM interaction tests: we used the Dromaeo Core DOM tests as our benchmark. Below are the results from testing on our four Android models. We indexed all Core DOM results (Attributes, Modifications, Query, Traversal) to the Captivate’s performance, and then took the average of the four Core DOM indices.
As can be seen, there was a 3.5x performance improvement from Android 2.x to 4.x, although the S4 was only a minor improvement on the Note 2. We can also look at Dromaeo results on iOS. Sadly, we can’t do a comparison of performance across older versions of iOS, but we can show the steady improvement across multiple generations of iPhone hardware. Interestingly, the performance improvement even within iOS6 across these device generations is better than the increase in CPU speed, which means that improvements in memory bandwidth or caching must be responsible for the better than Moore’s Law performance increase.
In order to show that there is still a lot of potential for browsers to match each other’s performance, we also benchmarked the Surface RT. The poor performance of DOM interactions in IE has always been a source of performance frustration, but it’s worth pointing out once again the large gap in DOM interaction performance on the iPhone vs. the Microsoft Surface RT running IE10. One of the myths we’d like to destroy is that the mobile software stack is as good as it gets. This is not true for Windows RT: a 10x performance gap is waiting to be filled. (We’ll show where iOS falls down in a later benchmark.)
And remember, this is on top of a big performance gain from iOS4 to iOS5. As can be seen, the improvement in Canvas2D performance (7x) is better than the CPU improvement over the same time period (4x) and reflects the improved GPU and GPU software stack on the iPhone over time. GPU improvements and the increased use of GPU acceleration need to be factored into the increase in mobile web performance.
When we look at the same tests on Android, we see an interesting data point — a lack of correlation between CPU performance and Canvas. The big change from Android 2 is that Android 2 had no GPU acceleration of Canvas at all. Again, a purely software change — GPU acceleration — is the big driver of performance improvement.
SVG is another area that shows the broader story about web performance. Although not as popular as Canvas (largely because Canvas has become so fast), SVG has also shown solid performance improvements as hardware has improved. Below is a test from Stephen Bannasch that tests the time taken to draw a 10,000 segment SVG path. Once again, the test shows constant steady improvement across hardware generations as the CPU and GPUs improve (since these are all on iOS6).
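For the curious, a stress test like this can build its path programmatically. The sketch below is our own construction for illustration, not Bannasch’s actual code:

```javascript
// Build an SVG path "d" string with the requested number of line
// segments, zig-zagging across an area of the given dimensions.
function makeZigZagPath(segments, width = 1000, height = 500) {
  const parts = ["M 0 0"];
  for (let i = 1; i <= segments; i++) {
    const x = (i / segments) * width;
    const y = i % 2 ? height : 0; // alternate between top and bottom edges
    parts.push(`L ${x.toFixed(2)} ${y}`);
  }
  return parts.join(" ");
}

// In a browser you would then time the render:
//   pathEl.setAttribute("d", makeZigZagPath(10000));
```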
The main gap in performance is in software: the Surface RT is more than 30x faster than the iPhone 5 (or the iPad 4, which we also tested but haven’t shown). In fact, the Surface RT is even 10x faster than desktop Safari 6 running on my one-year-old MacBook! This is the difference that GPU acceleration makes. Windows 8/IE10 fully offloads SVG to the GPU, and the impact is huge. As browser makers continue to offload more graphics operations to the GPU, we can expect similar step-function changes in performance on iOS and Android.
In addition to long path draws, we also ran another SVG test from Cameron Adams that measures the frames per second of 500 bouncing balls. Again, we see constant performance improvement over the last four generations of hardware.
What’s more interesting than the improvement is the absolute fps. Once animations exceed 30 frames per second, you’ve exceeded the frame rate of analog cinema (24 fps) and met viewer expectations for smoothness. At 60 fps, you’ve reached the buttery smoothness of GPU-accelerated rendering.
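Those thresholds translate directly into per-frame time budgets, which is often the more useful way to think about them when profiling:

```javascript
// Milliseconds of work available per frame at a target frame rate.
function frameBudgetMs(fps) {
  return 1000 / fps;
}

frameBudgetMs(24); // ~41.7 ms - analog cinema
frameBudgetMs(30); // ~33.3 ms - meets viewer expectations
frameBudgetMs(60); // ~16.7 ms - buttery smooth
```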
Real World Performance: Garbage Collection, Dynamic Languages and More
We hope that the preceding tour through mobile performance has demonstrated a few things, and demolished a few myths. We hope we’ve shown that:
- This performance increase is driven by both hardware and software improvements
- Luckily, the other parts of the platform that impact real-world performance are also improving rapidly, including DOM interaction speed, Canvas, and SVG.
And although we could only show it with high-speed camera tests, all mobile web developers know that the performance of animations, transitions, and property changes has vastly improved since the days of Android 2.1, and it continues to get better with each release.
Without this level of indirection, it’s very easy to produce poor performing mobile web app experiences — like the first generation mobile web app from Facebook. We believe that applications written on top of UI Frameworks like jQuery Mobile that are too closely tied to the underlying DOM will continue to suffer from performance problems for the foreseeable future.
Bringing It All Together
There is a lot of data and a lot of different topics covered in this piece, so let me try to bring it all together in one place. If you’re a developer, you should take a few things away from this:
- Mobile platforms are 5x–10x slower than desktop, with slower CPUs, more constrained memory, and lower-powered GPUs. That won’t change.
- Graphics have also been getting much faster thanks to GPU acceleration and general software optimization. 30fps+ is already here for most use cases.
- Garbage collection and platform rendering constraints can still bite you, and it’s essential to use an abstracted framework like Sencha Touch to get optimal performance.
- Take advantage of the remote debugging and performance monitoring capabilities that mobile web platforms provide: Chrome for Android now has a handy fps counter, and compositing borders can show you when content has been texturized and handed off to the GPU, and how many times that texture is then loaded.
We hope that the review of performance data has been a helpful antidote to some of the myths out there. I want to thank everyone at Sencha who contributed to this piece including Ariya Hidayat for review and sourcing lots of links to browser performance optimizations, and Jacky Nguyen for detail on Sencha Touch abstractions and performance optimizations.