Why Doesn’t Every iOS Photo App Support High-Resolution Output?

This content first appeared as a forum post on iphoneographers.net. I am maintaining a copy on tinrocket.com for periodic updating. Head to iphoneographers.net for discussion of this topic!

 

“Full resolution” support in photo apps is an active topic among iPhoneographers. I’m starting this thread so users can talk about why they want it and developers can talk about why it’s not always possible, or—oops—why they thought it wasn’t important!

First, let’s define some terms:

  • “Native Resolution” means the highest pixel resolution supported by the camera on the device running the app.
  • “Full Resolution” refers to the size of an image imported into an app, e.g., a 20 MP DSLR image copied to the iPhone/iPad and opened in Snapseed.
  • “High Resolution” is a loose term for either Native or Full Resolution.
  • “Low Resolution” does not necessarily mean anything less than Native Resolution; it refers to a resolution that users would not consider “acceptable quality.”
  • “Less-Than-Native Resolution” simply means anything less than Native Resolution.

As a user of photo apps, I think native resolution is great to have but full resolution support is even better.

As a photo app developer, I know that it’s sometimes hard to support even native resolution, so we pick a resolution we think is reasonable based on “how we think people will want to use the image” and support that. Then the App Store reviews start rolling in…

So, here’s my off-the-cuff list of technical and not-so-technical reasons and/or excuses for why apps sometimes don’t support high resolutions.

  • Ignorance: The developer does not know that high resolution support is important to users.
  • Developer experience: aka, “High resolution is hard!” It’s much easier to draw to the screen (using either Core Graphics or OpenGL) and export that view to the camera roll. I really feel there’s no excuse for shipping an app with this minimal level of functionality IF it is possible to draw the same results into high-resolution off-screen memory (a sketch of this off-screen approach appears after this list).
  • Nature of the image creation method: There are some techniques where it’s ONLY possible to export the visible graphics area, such as an OpenGL app that accumulates many images over time, a “slow shutter” app, or a “liquid photo” simulation. In these cases, the image processing happens in realtime at the expense of resolution; increasing the resolution would hurt memory use and/or processing time.
  • Camera hardware limitations: Some camera modes on the iPhone and iPad have limits on the resolution supported. For instance, realtime previewing of photo filters isn’t just limited by the amount of time required to process the image; iOS is also sending the app a smaller-than-native-resolution version of the image. Although the iPhone will allow capturing a high resolution image while generating low resolution previews for realtime processing, there is a hardware delay when the camera makes the switch. The high resolution capture will not match the last low resolution preview that was generated—it occurs at a later point in time—which is a problem for capturing action shots. That’s why some burst mode apps don’t support native resolution: the developer chose timing accuracy over resolution. (A sketch of this preview/capture split appears after this list.)
  • Memory: It’s very easy for Apple’s built-in camera app to support native resolution because it only needs to keep a couple of versions of the image around, e.g., HDR mode captures 2 copies of an image, blends them into a third, and creates another copy of the result to save to the camera roll. More complicated apps may need to make many copies of the image, as well as image maps of equal size (textures, fancy borders), and keep them all in active memory so you can preview different combinations of effects reasonably fast. (Some back-of-the-envelope numbers follow this list.)
  • CPU: As powerful as the iPhone and iPad are, they still have a mobile-class CPU, not a desktop-class one. Much of the first-generation iPhone’s magic was that it used fast graphics processing on the GPU at the user-interface level to hide this from the user—fancy transitions made the iPhone feel responsive and fast. The GPU is highly optimized and specialized for the work it can do. The majority of photo apps don’t—or can’t—use the GPU for image processing, which I’ll explain in a bit. Suffice it to say, some apps have to use the slower CPU because it’s “more universal” in the types of calculations it can perform.
  • Bandwidth: If a photo app depends on the internet, then image size may be limited because of the time and bandwidth required to transmit images.
  • Scalability: Some graphics algorithms will execute on the CPU in a reasonable amount of time when working on a less-than-native-resolution image, but if the algorithm were scaled to larger resolutions it would quickly become “too slow” and make users unhappy. Example: image filters that use a radius, such as a blur. The developer implements this as a “window of pixels,” so a blur filter with a radius of 3 translates to a window 5×5 pixels, which is applied to every individual pixel in the image. If a 16×16 image (256 total pixels) has a filter of radius 3 run on it, then roughly 5×5×256 = 6,400 pixels are processed. (I say roughly because the image edges can be handled many ways: they can be ignored, the source image can be padded, the pixels can wrap around, etc.) When we scale the image up, we increase the total number of pixels that need processing. So, if we were working with a 16×16 image and now want to process a 32×32 image, the 32×32 image is not “double the size” of the 16×16 image—the pixel count grows with the square of the dimensions: our 16×16 image had 256 pixels, but our 32×32 image has 1,024 pixels—four times as many. In order to use our original filter at the higher resolution and maintain the visual effect achieved at the lower resolution, the filter needs to be scaled as well: for the above increase in image size, our 5×5 filter becomes 11×11, making the total number of pixels processed 11×11×1,024 = 123,904. That’s 19.36 times more pixel work than for an image that, at first glance, appeared to be only “double the size!” (The arithmetic is worked out in a short sketch after this list.) This increase in processing time is why developers sometimes put a cap on the maximum image size. Another reason is that the filter may not scale with the image size, so beyond a certain size the quality of the resulting image is reduced. An example might be a filter that generates cartoon edges on images: for very large images the resulting lines may look too thin, so the developer may cap the maximum resolution so lines always look good.
  • Writing for the CPU is less complex than writing for the GPU: There’s a lot of graphics power packed into the iPhone and iPad in the GPU—the thing that moves the interface graphics around and drives 3D games—but it’s more complex to write code for the GPU than for the CPU. Sometimes it’s hair-pulling, tear-inducingly complex. I’m avoiding the word hard on purpose—that’s relative from developer to developer—but it is a fact that it’s more complex. So, the developer may opt to write their image processing code for the CPU, but then they run into limits on how long they can reasonably take to process an image, and hence the resolution is limited.
  • The CPU is more flexible than the GPU: There are many graphics algorithms that are easy to write for the CPU but can’t be easily translated to the GPU for increased speed (a small example of such a CPU-style loop appears after this list). Again, the developer may have to write their image processing code for the CPU and run into the limits on how long they can take to process an image, so the resolution is again limited.
  • The concept of native resolution is irrelevant: For photo apps that convert images to geometric shapes such as circles or triangles, the source photo is only used as a jumping-off point for the final image the user sees. In some cases, the source photo is resized to no larger than 512×512, converted to geometry, and then tossed out. The source photo has been transformed into a different format. Here, the resolution supported is arbitrary—it’s up to the developer to set a limit on how big they want to make the output and/or whether to support vector PDF for ‘infinite’ resolution.
  • The app is old: A photo app written 3 years ago may have had acceptable output quality when it was first released, but as better iPhone cameras came out, the app gradually came to support only “low resolution.” The app may or may not have been updated, and the way it was originally built may not be easy to update to support higher resolutions.
  • The app needs to support a range of devices: Sometimes the app developer will pick the lowest common [hardware] denominator, e.g., the iPhone 3G, to simplify development and support.
  • High resolution support sometimes takes more resources: Adding support for high resolution may mean extra time to develop and test the app, and it’s up to the developer to know whether they can afford that. It may be faster to release an app with acceptable-quality output, gauge how the app as a whole (and not just the resolution it supports) is received by users, and only then commit to adding high resolution support in a future update.
  • Value in high resolution: The app developer may see high resolution support as valuable to a subset of users—“Pro” users, for example—and may hold it back so it can be introduced as an In-App Purchase or in a different version of the app, such as a desktop version.
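
A few of the items above are easier to see in code. To make the “Developer experience” point concrete, here is a minimal sketch of off-screen rendering written in today’s Swift (this post predates Swift), with UIKit’s UIGraphicsImageRenderer standing in for the CGBitmapContext approach of the era. The drawArtwork routine is hypothetical; the point is that the same resolution-independent drawing code can feed both the small on-screen preview and a full-size export.

```swift
import UIKit

// A sketch of off-screen rendering at an arbitrary output size. `drawArtwork`
// is a hypothetical routine shared by the on-screen view and the exporter.
func drawArtwork(in context: CGContext, size: CGSize) {
    // Draw in a resolution-independent way: everything derives from `size`,
    // so the same code works for the preview and the full-size export.
    context.setFillColor(UIColor.white.cgColor)
    context.fill(CGRect(origin: .zero, size: size))
    context.setStrokeColor(UIColor.black.cgColor)
    context.setLineWidth(size.width * 0.01)   // scale line weight with output size
    context.strokeEllipse(in: CGRect(x: size.width * 0.1, y: size.height * 0.1,
                                     width: size.width * 0.8, height: size.height * 0.8))
}

// Render at export resolution instead of capturing the screen-sized view.
func exportImage(outputSize: CGSize) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1   // 1 point == 1 pixel for the export
    let renderer = UIGraphicsImageRenderer(size: outputSize, format: format)
    return renderer.image { rendererContext in
        drawArtwork(in: rendererContext.cgContext, size: outputSize)
    }
}

// The on-screen preview might be 320×320; the export can be 3264×3264.
let fullRes = exportImage(outputSize: CGSize(width: 3264, height: 3264))
```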
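The “Camera hardware limitations” item describes a split between small realtime preview frames and a separate, later, native-resolution still. A rough sketch of that split using today’s AVFoundation classes (which also postdate this post) might look like the following; camera permissions, error handling, and session teardown are omitted.

```swift
import AVFoundation
import Foundation

// A rough sketch: small video frames feed the realtime filter preview, while a
// separate photo output delivers the native-resolution still at a later instant.
final class CameraPipeline: NSObject,
                            AVCaptureVideoDataOutputSampleBufferDelegate,
                            AVCapturePhotoCaptureDelegate {

    let session = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let photoOutput = AVCapturePhotoOutput()
    private let previewQueue = DispatchQueue(label: "preview.frames")

    func configure() throws {
        session.sessionPreset = .photo
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        // Small BGRA frames for realtime filtering, not native still resolution.
        videoOutput.videoSettings =
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        videoOutput.setSampleBufferDelegate(self, queue: previewQueue)
        session.addOutput(videoOutput)

        // Separate output for the native-resolution still.
        session.addOutput(photoOutput)
        session.startRunning()
    }

    // Called many times per second with preview-sized frames.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Run the realtime filter on this small frame.
    }

    // Triggered by the shutter button; the still is captured later than the
    // last preview frame, which is the timing gap described above.
    func takePhoto() {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // photo.fileDataRepresentation() holds the full-resolution capture.
    }
}
```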
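Some back-of-the-envelope arithmetic for the “Memory” item. It assumes an uncompressed 4-bytes-per-pixel (RGBA) buffer; the 3264×2448 size (the 8 MP cameras of this era) and the five-buffer editor are only illustrative.

```swift
// Rough memory math for uncompressed image buffers (4 bytes per RGBA pixel).
func megabytes(width: Int, height: Int, copies: Int, bytesPerPixel: Int = 4) -> Double {
    Double(width * height * bytesPerPixel * copies) / (1024 * 1024)
}

// One native-resolution buffer: 3264 × 2448 × 4 bytes ≈ 30.5 MB.
let oneCopy = megabytes(width: 3264, height: 2448, copies: 1)

// An editor that keeps the original, a working buffer, a texture map, a border
// layer, and the composited result needs five such buffers ≈ 152 MB, a heavy
// load for the 512 MB to 1 GB devices of that era.
let editorWorkingSet = megabytes(width: 3264, height: 2448, copies: 5)
```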
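The “Scalability” arithmetic, reproduced so the numbers are easy to check. This follows the post’s own convention that a radius-3 filter means a 5×5 window and a radius-6 filter an 11×11 window.

```swift
// The filter window is applied at every pixel, so the work is roughly
// (window side)² × (number of pixels in the image), ignoring edge handling.
func pixelsProcessed(imageSide: Int, windowSide: Int) -> Int {
    let totalPixels = imageSide * imageSide
    return windowSide * windowSide * totalPixels
}

let small = pixelsProcessed(imageSide: 16, windowSide: 5)    //   5 × 5 × 256   =   6,400
let large = pixelsProcessed(imageSide: 32, windowSide: 11)   // 11 × 11 × 1,024 = 123,904

// Doubling the side length quadruples the pixel count, and scaling the window
// with the image multiplies the work again.
let ratio = Double(large) / Double(small)                    // ≈ 19.36
```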
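Finally, the kind of per-pixel loop the two CPU-versus-GPU items have in mind: a naive box blur over a grayscale buffer, easy to write for the CPU but not trivially portable to the GPU. The sketch uses the common window size of 2·radius+1 (slightly different counting than above) and clamps at the edges; it is illustrative, not an optimized implementation.

```swift
// A naive CPU box blur over a grayscale buffer: straightforward nested loops,
// but the cost is (image pixels) × (window pixels), exactly the scaling problem above.
func boxBlur(pixels: [UInt8], width: Int, height: Int, radius: Int) -> [UInt8] {
    var output = [UInt8](repeating: 0, count: pixels.count)
    for y in 0..<height {
        for x in 0..<width {
            var sum = 0
            var count = 0
            for dy in -radius...radius {
                for dx in -radius...radius {
                    // Clamp the sampling window to the image bounds at the edges.
                    let sx = min(max(x + dx, 0), width - 1)
                    let sy = min(max(y + dy, 0), height - 1)
                    sum += Int(pixels[sy * width + sx])
                    count += 1
                }
            }
            output[y * width + x] = UInt8(sum / count)
        }
    }
    return output
}
```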

 

August 22, 2012; Updated to correct kernel size math and add a new item, “Value in high resolution”.

February 11, 2013; Translated into Spanish and reblogged (with permission).
