What’s New in Android P: Changes for Developers
Google released the first developer preview of Android P yesterday, giving developers an early look at the changes the yet-to-be-named version of Android brings. We covered some of the new features in our earlier piece; here are a few more that piqued our interest.
Support for HDR VP9 video and HEIF image formats
HEIF, or 'High Efficiency Image File Format,' is designed to retain twice as much image data as a JPEG while keeping the file size nearly identical, and Android P now includes native support for it. HEIF gained popularity last year when Apple began supporting it on macOS High Sierra and iOS 11; select iPhone and iPad models can also capture images in the format. Google is also highlighting HEIF's smaller file size as a way for apps to reduce data usage.
Android P also adds support for HDR VP9 Profile 2, making it easier for developers to build apps that play HDR content. There aren't many HDR-ready phones on the market right now, but it's a step in the right direction for when HDR is less exclusive than it is today.
Faster app load times and less memory usage thanks to ART improvements
Android P brings new improvements to the Android Runtime (ART). The performance and efficiency gains translate into faster app startup and lower memory usage. Initially, the difference will be marginal at best, but we can expect better results over time, once developers begin to take fuller advantage of it. ART now uses on-device execution profiles to rewrite DEX files, shrinking the footprint of compiled code and cutting both load times and the system memory that DEX files consume.
Improved neural networks APIs for machine learning and AI developers
Google's Neural Networks API already supported on-device model creation, compilation, and execution, meaning you could not only build a model as required on the device, but also run it there. With Android P, Google is expanding the API with nine new operations: "Pad, BatchToSpaceND, SpaceToBatchND, Transpose, Strided Slice, Mean, Div, Sub, and Squeeze." To be honest, those names mean little out of context; they are tensor-manipulation operations familiar from machine-learning frameworks such as TensorFlow.
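For readers curious what a couple of those operation names actually do, here is a minimal plain-Python sketch of Transpose and Squeeze. This only approximates the usual tensor semantics for illustration; the function names and the list-based data representation are our own, not the Neural Networks API's actual C interface.

```python
# Illustrative sketches of two of the newly supported operations,
# using plain Python lists in place of real tensors.

def transpose_2d(matrix):
    """Transpose: swap the rows and columns of a 2-D tensor."""
    return [list(row) for row in zip(*matrix)]

def squeeze(shape):
    """Squeeze: drop dimensions of size 1 from a tensor's shape."""
    return [d for d in shape if d != 1]

print(transpose_2d([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
print(squeeze([1, 28, 28, 1]))               # [28, 28]
```

Operations like these don't do any "learning" themselves; they reshape and rearrange data so that models built in other frameworks can run on the device.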
Most of the above changes won't mean much to the end user on their own. The onus of translating them into something a user can experience is on app developers, who are expected to take advantage of them to create a better user experience.